% https://arxiv.org/abs/2003.01423
% Title: A uniform result for the dimension of fractional Brownian motion level sets
% Abstract: Let $B =\{ B_t \, : \, t \geq 0 \}$ be a real-valued fractional Brownian motion of index $H \in (0,1)$. We prove that the macroscopic Hausdorff dimension of the level sets $\mathcal{L}_x = \left\{ t \in \mathbb{R}_+ \, : \, B_t=x \right\}$ is, with probability one, equal to $1-H$ for all $x\in\mathbb{R}$.
\section{Introduction}
Let $B =\{ B_t \, : \, t \geq 0 \}$ be a fractional Brownian motion of index $H \in (0,1)$, that is, a centered, real-valued Gaussian process with covariance function
\begin{align}
R(s,t)= \mathbb{E} \left( B_s B_t\right)= \dfrac{1}{2}\left( \lvert s \rvert ^{2H} + \lvert t \rvert ^{2H} - \lvert s - t\rvert ^{2H}\right).
\end{align}
Since $\mathbb{E} \big[\left( B_s - B_t\right)^2\big]= \left|s - t\right|^{2H}$, it is an immediate consequence of the Kolmogorov–Centsov continuity theorem that $B$ admits a continuous modification. Throughout this note, we will always assume that $B$ is continuous. It is also immediate (see, e.g., \cite{nourdin2012selected}) that $B$ is a self-similar process of exponent $H$,
that is, for any $a > 0$,
\begin{align*}
\left\{ B_{at} \, : \, t \geq 0 \right\} \stackrel{d}{=} \left\{ a^H B_t \, : \, t \geq 0 \right\},
\end{align*}
where $X \stackrel{d}{=} Y$ means that two processes $X$ and $Y$ have the same distribution.
Moreover, $B$ has stationary increments, that is, for every $s \geq 0$,
\begin{align*}
\left\{ B_{t+s} - B_s \, : \, t \geq 0 \right\} \stackrel{d}{=} \left\{B_t \, : \, t \geq 0 \right\}.
\end{align*}
\medskip
This article is concerned with estimating the size of the level sets of $B$, which are defined for any $x\in\mathbb{R}$ as
\begin{align}
\label{DefLS}
\mathcal{L}_x = \left\{ t \geq 0 \, : \, B_t=x\right\}.
\end{align}
\par This line of research started with the seminal work of Taylor and Wendel \cite{taylor1966}, who were the first to study the Hausdorff dimension of the level sets (and of the graph) of a standard Brownian motion. They proved, among other things, that for any fixed $x\in\mathbb{R}$, the Brownian level set $\mathcal{L}_x$ has Hausdorff dimension $\frac{1}{2}$ with probability one. Their results were later extended by Perkins \cite{perkins1981}, who showed that, with probability one, the level sets $\mathcal{L}_x$ have Hausdorff dimension $\frac{1}{2}$ for all $x\in\mathbb{R}$ simultaneously. Hence, the local structure of the level sets in the Brownian case is well understood.
Another way to describe the geometric properties of the sample paths of a given process is in terms of its sojourn times. Here, the goal is to study the dimension of the amount of time spent by the stochastic process inside a moving boundary, that is, of a set of the form
\begin{align*}
E(\phi) := \left\{ t \geq 0 \, : \, |B_t| \leq \phi(t)\right\},
\end{align*}
where $\phi:\mathbb{R}_+\to\mathbb{R}$ is an appropriate function.
Strongly related to our note, we mention the recent work of Nourdin, Peccati and Seuret \cite{nourdin2018sojourn}, in which a specific large scale dimension is computed for the sojourn times
\begin{align}
\label{sojournTime}
E_\gamma := \left\{ t \geq 0\, : \, |B_t| \leq t^\gamma \right\}, \quad 0<\gamma< H,
\end{align}
of the fractional Brownian motion $B$.
Note that this choice for $\phi$ is completely
natural here because, on the one hand, the fractional Brownian motion is self-similar (hence the choice of a power function for $\phi$) and, on the other hand, it satisfies a law of the iterated logarithm as $t\to\infty$ (hence the range $(0,H)$ for $\gamma$). Actually, \cite{nourdin2018sojourn} extended
to the fractional Brownian motion the results given by Seuret and Yang \cite{seuret2019sojourn}
in the standard Brownian case.
In general, defining a notion of fractal dimension for a subset of $\mathbb{R}^d$ involves taking into consideration the microscopic (i.e. local) properties of this set. However, many models in statistical physics are based on the Euclidean lattice $\mathbb{Z}^d$; in this case, it may look more natural to rely on the macroscopic (i.e. global) properties of the set to define a notion of dimension.
This is what Barlow and Taylor proposed in \cite{barlow1989, barlow1992}. Their
dimension, called {\it macroscopic Hausdorff dimension}, has proven to be relevant in many contexts.
This is the dimension used in \cite{nourdin2018sojourn,seuret2019sojourn}, and also the one we will
use in the present note, because it gives a good intuition about the geometry of the set under consideration, namely whether it is scattered or not. Precise definitions will be given in Section \ref{MHDsubsection}. At this stage, we only mention that we denote the macroscopic Hausdorff dimension by $\mbox{Dim}_H$.
\medskip
Our note can be considered as an addendum to \cite{nourdin2018sojourn}.
Let $\mathcal{L}_x$ be the level sets associated with a fractional Brownian motion.
In \cite{nourdin2018sojourn}, the following is shown.
\begin{theorem}\label{thm1}
\normalfont
Fix $x \in \mathbb{R}$. Then
\begin{align*}
\mathbb{P}(\mbox{Dim}_H \mathcal{L}_x= 1-H)=1.
\end{align*}
\end{theorem}
Our aim is to extend Theorem \ref{thm1} from ``$\forall x$, $\mathbb{P}(\ldots)=1$''
to ``$\mathbb{P}(\forall x: \ldots)=1$''.
To this end, new and non-trivial arguments are required. We will prove the following.
\begin{theorem}
\normalfont
\label{LevelSets}
\begin{align}
\mathbb{P}(\forall x\in\mathbb{R}:\,\mbox{Dim}_H \mathcal{L}_x = 1-H)=1.
\end{align}
\end{theorem}
We note that our Theorem \ref{LevelSets} also recovers Seuret-Yang's result \cite[Theorem 2]{seuret2019sojourn} (Brownian motion), with what we believe is a more natural proof.
\medskip
Throughout this note, every random object is defined on a common probability space $\left(\Omega , \mathcal{A}, \mathbb{P}\right)$, and $\mathbb{E}$ denotes expectation with respect to $\mathbb{P}$.
\section{Preliminaries}
\label{Preliminaries}
This section gathers the different tools that will be needed in order to prove Theorem \ref{LevelSets}.
\subsection{Macroscopic Hausdorff Dimension}
\label{MHDsubsection}
Following the notation of \cite{khoshnevisan2017intermittency, khoshnevisan2017macroscopic}, we consider the intervals $S_{-1} =[0,1/2)$ and $S_n = [2^{n-1},2^n)$ for $ n \geq 0$. For $E \subset \mathbb{R}^+$, we define the set of {\it proper covers} of $E$ restricted to $S_n$ by
\begin{align*}
\mathcal{I}_n(E) = \left\{
\begin{matrix}
\left\{I_i\right\}_{i=1}^{m}\, : & I_i=[x_i,y_i] \, \mbox{with} \, x_i,y_i \in \mathbb{N}, \, y_i>x_i, \\ & I_i \subset S_n \, \mbox{ and } \, E \cap S_n \subset \bigcup_{i=1}^{m} I_i.
\end{matrix} \right\}
\end{align*}
For any set $E \subset \mathbb{R}^+$, $\rho \geq 0$ and $n \geq -1$, we define
\begin{align}
\label{nu^n_rho}
\nu_{\rho}^{n}(E) = \inf \left\{ \sum_{i=1}^{m} \left(\dfrac{\mbox{diam}(I_i)}{2^n }\right )^\rho : \, \left\{I_i\right\}_{i=1}^{m} \in \mathcal{I}_n(E)\right \},
\end{align}
where $\mbox{diam}([a,b])=b-a$.
The key point in the definition of $\nu_{\rho}^{n}(E)$ is that the sets $I_i$ are non-trivial intervals with {\it integer} boundaries; in particular, the infimum is reached.
\begin{definition}
Let $E \subset \mathbb{R}^+$. The macroscopic Hausdorff dimension of $E$ is defined by
\begin{align}
\mbox{Dim}_H E = \inf \left\{ \rho >0 : \, \sum_{n \geq -1} \nu_{\rho}^{n}(E) < + \infty \right \}.
\label{DefDim}
\end{align}
\end{definition}
We observe that $\mbox{Dim}_H E$ always belongs to $[0,1]$, whatever $E \subset \mathbb{R}^+$. Indeed,
consider the family $I_i=[2^{n-1}+i-1,2^{n-1}+i]$, $1\leq i\leq 2^{n-1}$, which belongs to $\mathcal{I}_n(E)$ and satisfies $\sum_{i=1}^{m} \left(\dfrac{\mbox{diam}(I_i)}{2^n }\right )^\rho=\frac12 2^{n(1-\rho)}$.
Thus, $\nu^n_{1+\varepsilon}(E)\leq 2^{-n\varepsilon}$ for all $\varepsilon>0$, implying in turn that $\mbox{Dim}_H E\leq 1+\varepsilon$ for all $\varepsilon>0$. As a result, we have that
$\mbox{Dim}_H E\in[0,1]$.
In (\ref{nu^n_rho}), the covering intervals are constrained to have length at least $1$. This shows that the macroscopic Hausdorff dimension does not depend on the local structure of the underlying set.
The dimension of a set is unchanged when one removes any bounded subset, since the series in (\ref{DefDim}) converges if and only if its tail series converges.
Consequently, the dimension of any bounded set $E$ is zero. The converse is not true: for example, $\mbox{Dim}_H (\{ 2^n,\,n\geq 1\})=0$ although this set is unbounded.
The macroscopic Hausdorff dimension not only counts covers of a set, but also gives an intuition about its geometry: the more scattered the points of the set, the larger its dimension. For instance, for $0< \alpha < 1$, define the two sets $A_\alpha$ and $B_\alpha$ by, for all $n \geq 1$,
\begin{align*}
A_\alpha \cap S_n = \left\{2^{n-1}+ k \frac{2^{n-1}}{2^{n\alpha}} : k \in \{0,...,2^{n\alpha}-1 \}\right\};
\\ B_\alpha \cap S_n = \left\{2^{n-1}+ \frac{k}{2^{n\alpha}} : k\in \{0,...,2^{n\alpha}-1 \}\right\}.
\end{align*}
Even though both sets have the same cardinality, $\mbox{Dim}_H A_\alpha = \alpha$ whereas $\mbox{Dim}_H B_\alpha = 0$.
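This contrast can be illustrated numerically (again, an illustration of ours, not taken from the note): cover $E\cap S_n$ by the unit intervals containing its points, which bounds $\nu^n_\rho(E)$ from above, and track the cost per block for some $\rho<\alpha$. The truncation $m=\lfloor 2^{n\alpha}\rfloor$ below is an implementation choice:

```python
import numpy as np

def unit_cover_cost(points, n, rho):
    """Cost of the natural cover of E \cap S_n by unit intervals with
    integer endpoints: one interval per occupied integer cell.
    This is an upper bound on nu_rho^n(E)."""
    cells = np.unique(np.floor(points).astype(np.int64))
    return len(cells) * (1.0 / 2 ** n) ** rho

alpha, rho = 0.5, 0.3          # 0 < rho < alpha
costs_A, costs_B = [], []
for n in range(4, 12):
    m = int(2 ** (n * alpha))  # number of points placed in the block S_n
    k = np.arange(m)
    A = 2 ** (n - 1) + k * 2 ** (n - 1) / 2 ** (n * alpha)  # spread over S_n
    B = 2 ** (n - 1) + k / 2 ** (n * alpha)                 # packed in one cell
    costs_A.append(unit_cover_cost(A, n, rho))
    costs_B.append(unit_cover_cost(B, n, rho))

# Scattered points force about 2^{n*alpha} intervals, so the block costs
# grow like 2^{n(alpha - rho)} when rho < alpha:
assert costs_A[-1] > costs_A[0]
# The packed set fits in a single unit interval, so its costs vanish:
assert costs_B[-1] < costs_B[0]
```

The growing block costs for $A_\alpha$ reflect the divergence of the series in the definition of the dimension for $\rho<\alpha$, while for $B_\alpha$ a single interval per block always suffices.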
These features make the macroscopic Hausdorff dimension an interesting quantity describing the large scale geometry of a set; in particular, it appears to be well suited for the study of the level sets $\mathcal{L}_x$.
As we will see in our upcoming analysis, it is sometimes convenient to slightly modify the definition of $\mbox{Dim}_H E$, to obtain a version that is more amenable to analysis. For this reason, let us introduce, for any $E \subset \mathbb{R}^+$, $\rho>0$, $\xi \geq 0$, and $n \geq -1$, the quantity
\begin{align}
\widetilde{\nu}_{\rho, \xi}^{n}(E) = \inf \left\{ \sum_{i=1}^{m} \left(\dfrac{\mbox{diam}(I_i)}{2^n }\right )^\rho \left| \mbox{log}_2\dfrac{\mbox{diam}(I_i)}{2^n }\right|^{\xi} :\left\{I_i\right\}_{i=1}^{m} \in \mathcal{I}_n(E) \right \}.
\label{DefDim2}
\end{align}
The difference between $\nu_{\rho}^{n}(E)$ and $ \widetilde{\nu}_{\rho, \xi}^{n}(E) $ is that we introduce a logarithmic factor in the latter. This modification has actually no impact on the definition of $\mbox{Dim}_H E$, as stated by the following lemma.
\begin{lemma}\label{lm3}
Let $\xi \geq 0$. For every set $E \subset \mathbb{R}^+$,
\begin{align}
\label{MHDmod}
\mbox{Dim}_H E = \inf \left\{ \rho >0 : \, \sum_{n \geq -1} \widetilde{\nu}_{\rho,\xi}^{n}(E) < + \infty \right\}.
\end{align}
\end{lemma}
\begin{proof}
Define $\widetilde{d}_\xi = \inf \left\{ \rho >0 : \, \sum_{n \geq -1} \widetilde{\nu}_{\rho,\xi}^{n}(E) < + \infty \right\}$.
For $n \geq -1$, consider $\left\{ I_i \right\}_{i=1}^{m} \in \mathcal{I}_n(E)$. As $I_i \subset S_n$, one has $\mbox{diam}(I_i) \leq 2^{n-1}$, implying in turn that
$\left| \mbox{log}_2 \dfrac{\mbox{diam} (I_i)}{2^n} \right|^{\xi}\geq 1$.
Thus, $\widetilde{\nu}_{\rho,\xi}^{n}(E) \geq \nu_{\rho}^{n}(E)$, and then $\mbox{Dim}_H E \leq \widetilde{d}_\xi$.
If $\mbox{Dim}_H E=1$, the conclusion is straightforward. So, let us assume that $\mbox{Dim}_H E<1$ and let us fix $\epsilon>0$ small enough and $\rho<1$ such that $ \rho > \mbox{Dim}_H E + \epsilon$. Since the function $x \mapsto x^\epsilon \left|\log_2 x\right|^\xi$ is continuous on $(0,1]$ and tends to zero as $x$ tends to zero, it follows that there exists $c>0$ such that
\begin{align*}
\left|\log_2 x\right|^\xi \leq c x^{-\epsilon}, \quad \forall x \in (0,1].
\end{align*}
We deduce that, for all $\left\{ I_i\right\}_{i=1}^{m} \in \mathcal{I}_n(E)$,
\begin{align*}
\sum_{i=1}^{m} \left(\dfrac{\mbox{diam}(I_i)}{2^n }\right )^\rho \left| \mbox{log}_2\dfrac{\mbox{diam}(I_i)}{2^n }\right|^{\xi} \leq c\sum_{i=1}^{m} \left(\dfrac{\mbox{diam}(I_i)}{2^n }\right )^{\rho-\epsilon}.
\end{align*}
By taking the infimum over all $\left\{ I_i\right\}_{i=1}^{m} \in \mathcal{I}_n(E)$ and recalling the definitions \eqref{nu^n_rho} and \eqref{DefDim2}, one deduces that $\widetilde{\nu}_{\rho, \xi}^{n}(E) \leq c\nu_{\rho - \epsilon}^{n}(E)$. Since $\rho - \epsilon > \mbox{Dim}_H E$, the series $\sum_{n \geq -1}\nu_{\rho-\epsilon}^{n}(E)$ converges, hence so does $\sum_{n \geq -1}\widetilde{\nu}_{\rho,\xi}^{n}(E)$, implying in turn $\widetilde{d}_\xi \leq \rho$. Letting $\rho$ tend to $\mbox{Dim}_H E + \epsilon$ and then $\epsilon$ tend to zero yields the result.
\end{proof}
\subsection{Local Time of Fractional Brownian Motion}
As we will see, the use of the local time will play a key role throughout the proof of Theorem \ref{LevelSets}.
Provided it exists, the local time $x\mapsto L_t^x$ of a given process $(X_t)_{t \geq 0}$ is, for each $t$, the density of the occupation measure $\mu_t(A) = \mbox{Leb} \left\{s \in [0, t]\, : \, X_s \in A \right\}$ associated with $X$; otherwise stated, one has $L_t = \frac{d \mu_t}{d\mbox{Leb}}$. In what follows, we shall also freely use the notation $L_t([a,b])$ to indicate the quantity $L_t(b)-L_t(a)$.
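Numerically, this definition suggests a direct estimate of the local time (our illustrative sketch, not part of the note): simulate the path on a fine grid and histogram its values, so that each grid point contributes $dt$ of occupation measure to its bin. We take $H=\frac12$ below only because a Brownian path is then exactly a Gaussian random walk on the grid:

```python
import numpy as np

def occupation_density(path, dt, bins=80):
    """Estimate x -> L_t^x as the density of the occupation measure
    mu_t(A) = Leb{s in [0,t] : X_s in A}, via a histogram of the path."""
    counts, edges = np.histogram(path, bins=bins)
    widths = np.diff(edges)
    return counts * dt / widths, edges

# Brownian case H = 1/2: exact simulation as a Gaussian random walk.
rng = np.random.default_rng(0)
t, n = 1.0, 100_000
dt = t / n
path = np.cumsum(rng.standard_normal(n)) * np.sqrt(dt)

L, edges = occupation_density(path, dt)
# Sanity check: the occupation measure of the whole real line is t,
# since each of the n grid points contributes dt to exactly one bin.
assert abs(np.sum(L * np.diff(edges)) - t) < 1e-9
```

The histogram is only a crude estimator of the true local time, but the mass check above is exact by construction.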
The case where $X$ is Gaussian (and centered, say) has been widely studied in the literature.
For instance, we can refer to the survey by Dozzi \cite{dozzi2003occupation}.
One of the most striking results in the Gaussian framework is the following easy-to-check condition, which ensures that $(L_t^x)_{t\in [0,T],x\in\mathbb{R}}$ exists in $L^2(\Omega)$:
\begin{equation}
\label{LemmaBddI}
I:= \int \int_{[0,T]^2} \dfrac{ds \, dt}{\sqrt{R(s,s)R(t,t)-R(s,t)^2}} < + \infty,
\end{equation}
where $R(s,t)=\mathbb{E}\left(X_s X_t\right)$; moreover, in this case we have the Fourier-type representation:
\begin{align}
L^{x}_{t} = \dfrac{1}{2\pi} \int_{\mathbb{R}} dy \, e^{-iyx} \int_{0}^{t} du \, e^{iyX_u}.
\label{deflt}
\end{align}
If $X$ is Gaussian, self-similar of index $H$ and satisfies (\ref{LemmaBddI}), then it is immediate from (\ref{deflt}) that its local time also has a self-similarity property in time, with index $1-H$ but at a rescaled space level, as stated below. More precisely, one has, for every $c>0$:
\begin{align}
\label{ss}
(L^{x}_{ct})_{t\geq 0, x\in\mathbb{R}}
\stackrel{d}{=}
c^{1-H} (L^{c^{-H}x}_{t})_{t\geq 0, x\in\mathbb{R}}.
\end{align}
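For the reader's convenience, let us sketch why \eqref{ss} holds at fixed $t$ and $x$ (the same computation gives the identity jointly in $(t,x)$): by \eqref{deflt}, the changes of variables $u=cs$ and then $z=c^{H}y$, and $\{X_{cs}\}_{s\geq0}\stackrel{d}{=}\{c^{H}X_{s}\}_{s\geq0}$,
\begin{align*}
L^{x}_{ct} = \dfrac{1}{2\pi} \int_{\mathbb{R}} dy \, e^{-iyx} \int_{0}^{ct} du \, e^{iyX_u}
&= \dfrac{c}{2\pi} \int_{\mathbb{R}} dy \, e^{-iyx} \int_{0}^{t} ds \, e^{iyX_{cs}}\\
&\stackrel{d}{=} \dfrac{c}{2\pi} \int_{\mathbb{R}} dy \, e^{-iyx} \int_{0}^{t} ds \, e^{iyc^{H}X_{s}}
= \dfrac{c^{1-H}}{2\pi} \int_{\mathbb{R}} dz \, e^{-izc^{-H}x} \int_{0}^{t} ds \, e^{izX_{s}} = c^{1-H} L^{c^{-H}x}_{t}.
\end{align*}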
When $X$ stands for the fractional Brownian motion $B$ of Hurst index $H\in(0,1)$, it is immediate that (\ref{LemmaBddI}) and (\ref{ss}) are satisfied.
But we can go further. A consequence of Berman's work \cite{berman1973local} is that the local time associated with $B$ is $\beta$-H\"older continuous in $t$ for every $\beta \leq 1-H$, uniformly in $x$.
On their side, Geman and Horowitz (see \cite[Theorem 26.1]{german1980Occupation}) proved that, for each fixed $t$, the local time $(L_{t}^{x})_{x \in \mathbb{R}}$ has the H\"older regularity in space stated in the following lemma.
\begin{lemma}[Spatial H\"older continuity of the local time]
Assume $X$ is a fractional Brownian motion of Hurst index $H\in(0,1)$ and
consider its local time $(L_{t}^{x})_{x \in K}$, where $K$ is a given compact interval in $\mathbb{R}$.
Then, for all $\beta \in\big(0, \frac{1}{2} \left( \frac{1}{H} - 1 \right)\big)$ and for all $t \geq 0$,
\begin{align}
\label{defC}
\mathbb{P}\left(\sup_{x,y \in K} \dfrac{\left| L_t([x,y])\right|}{|x-y|^\beta} < \infty\right)=1.
\end{align}
\label{lemma3}
\end{lemma}
As we will see, Lemma \ref{lemma3} will be one of our main key tools in order to prove Lemma \ref{a} (which is one of the steps leading to the proof of Theorem \ref{LevelSets}).
\subsection{Filtration of Fractional Brownian Motion}
A last crucial property of the fractional Brownian motion $B$ that we will use in order to prove Theorem \ref{LevelSets} is that the natural filtration associated with $B$ is Brownian.
We mean by this that there exists a standard Brownian motion $(W_u)_{u \geq 0}$, defined on the same probability space as $B$, whose filtration satisfies,
for all $t > 0$,
\begin{equation}
\label{eqFBM}
\sigma\{ B_u \, : \, u \leq t \} \subset \sigma \{ W_u \, : \, u \leq t\}.
\end{equation}
Property (\ref{eqFBM}) is an immediate consequence of the Volterra representation of $B$ (see, e.g., \cite{Baudoin2003equivalence}).
It will be exploited, together with Blumenthal's $0$--$1$ law, at the end of the proof of Proposition \ref{proposition4}.
\section{Proof of Theorem \ref{LevelSets}}
\subsection{Upper bound for Dim$_H \mathcal{L}_x$}
By a theorem in \cite{nourdin2018sojourn}, for every $\gamma\in (0,H)$, a.s.
\begin{align*}
\mbox{Dim}_H E_\gamma = 1 - H.
\end{align*}
On the other hand, observe that for fixed $\gamma \in (0,H)$ and $x \in \mathbb{R}$, the level set $\mathcal{L}_x$ is ultimately included in $E_\gamma$.
Indeed, if $B_t=x$ and $t \geq |x|^{\frac1\gamma}$, then $|B_t| = |x| \leq t^{\gamma}$, so that
\begin{align*}
\mathcal{L}_x \cap \left\{ t \geq |x|^{\frac1\gamma}\right\} \subset E_\gamma.
\end{align*}
We recalled in Section \ref{MHDsubsection} that the macroscopic Hausdorff dimension is insensitive to the removal of any bounded subset.
As a result,
a.s. for every $x \in \mathbb{R}$,
$$\mbox{Dim}_H \mathcal{L}_x = \mbox{Dim}_H \left( \mathcal{L}_x \cap \left\{ t \geq |x|^{\frac1\gamma}\right\} \right) \leq \mbox{Dim}_H E_\gamma = 1 - H.$$
\subsection{Lower bound for Dim$_H \mathcal{L}_x$}
\label{LowerBound}
Recall $S_n$ from Section \ref{MHDsubsection}, and
let us introduce the random variables
\begin{equation}
Z_{n}^{x} = \dfrac{L^{x}\left(S_n\right)}{2^{n(1-H)}} \quad \mbox{and} \quad F^{x}_{N} = \sum_{n=1}^{N} Z_{n}^{x}.
\label{def}
\end{equation}
The random variables $\left(Z_{n}^{x} \right)_{n \geq 1}$ are nonnegative, so $(F_{N}^{x})_{N \geq 1}$ is non-decreasing. We denote by $F_{\infty}^{x}$ its limit, i.e. $ F_{\infty}^{x} = \sum_{n=1}^{\infty} Z_{n}^{x} \in [0, + \infty]$.
Using (\ref{ss}), we have for all $n \geq 0$
\begin{equation}
\label{Ynx}
Z_{n}^{x} \stackrel{d}{=} Z_{0}^{2^{-nH}x}.
\end{equation}
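Spelled out, \eqref{Ynx} is nothing but \eqref{ss} applied with $c=2^{n}$: since $Z_{0}^{y}=L^{y}(S_0)=L^{y}_{1}-L^{y}_{1/2}$,
\begin{align*}
Z_{n}^{x} = \frac{L^{x}_{2^{n}}-L^{x}_{2^{n-1}}}{2^{n(1-H)}}
\stackrel{d}{=} \frac{2^{n(1-H)}\left(L^{2^{-nH}x}_{1}-L^{2^{-nH}x}_{1/2}\right)}{2^{n(1-H)}}
= Z_{0}^{2^{-nH}x}.
\end{align*}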
We note that similar random variables $Y_{n}^{x} = \dfrac{L^{2^{n}x}\left(S_n\right)}{2^{n(1-H)}}$ were introduced in \cite[Section 5.3]{nourdin2018sojourn}.
However, the fact that we are dealing with different space variables than in \cite{nourdin2018sojourn} induces several differences in our proofs.
Although its statement is exactly the same as that of \cite[Lemma 5]{nourdin2018sojourn}, the meaning and proof of the next lemma are different in our context (albeit quite close).
This is why we provide all the details, for the convenience of the reader.
\begin{lemma}
There exists a (deterministic) constant $K>0$ such that
\begin{equation*}
\mathbb{P}\left(\forall x\in\mathbb{R}, \forall n\geq -1:\,\widetilde{\nu}_{1-H,H}^{n}(\mathcal{L}_x) \geq K^{-1} Z_{n}^{x}\right)=1.
\end{equation*}
\label{lemma2}
\end{lemma}
\begin{proof}
Let us introduce the random variables
\begin{equation}
\label{eqself}
A_n := \sup_{0 \leq t \leq 2^{n}} \sup_{0 \leq h \leq 2^{n-1}} \sup_{y \in \mathbb{R}} \dfrac{L^{y}\left([t,t+h]\right)}{h^{1-H}(n - \log_{2}h)^{H}},
\end{equation}
where $\log_2$ stands for the binary logarithm (base 2).
By (\ref{ss}), we have
\begin{align}
\label{X_n}
A_n = & \sup_{0 \leq t \leq 1} \ \sup_{0 \leq h \leq 1/2} \ \sup_{y \in \mathbb{R}} \dfrac{L^{y}\left([2^n t, 2^n(t+h)]\right)}{(2^n h)^{1-H}(- \log_{2}h)^{H}} \\
\stackrel{d}{=} & \sup_{0 \leq t \leq 1} \ \sup_{0 \leq h \leq 1/2} \ \sup_{y \in \mathbb{R}} \dfrac{L^{y}\left([t,t+h]\right)}{h^{1-H}( - \log_{2}h)^{H}}. \nonumber
\end{align}
By \cite[Theorem 1.2]{xiao1997holder}, there exists a (deterministic) constant $K > 0$ such that
\begin{align}
\mathbb{P}\left(\mbox{for every } \ n \geq -1, \ A_n \leq K\right)=1.
\label{eq1bis}
\end{align}
Now fix $x \in \mathbb{R}$, and consider the level set $\mathcal{L}_x$ defined by (\ref{DefLS}). Recall the definition (\ref{DefDim2}) of $\widetilde{\nu}^{n}_{1-H,H}(\mathcal{L}_x)$.
If $\left(I_i= [s_i,t_i]\right)_{i=1}^{m} \in \mathcal{I}_n(\mathcal{L}_x)$ is a cover minimizing the value in (\ref{DefDim2}), we have
\begin{align}
\label{nu_1-H}
\widetilde{\nu}^{n}_{1-H,H}(\mathcal{L}_x)
= & \sum_{i=1}^{m} \left(\dfrac{|t_i - s_i|}{2^n}\right)^{1-H} \left|\log_2 \dfrac{|t_i - s_i|}{2^n}\right|^{H}.
\end{align}
Using \eqref{X_n}-\eqref{eq1bis} with $t= \dfrac{s_i}{2^n}$, $h= \dfrac{t_i - s_i}{2^n}$, and $y = x$, we deduce that
\begin{align*}
\left(\dfrac{|t_i - s_i|}{2^n}\right)^{1-H} \left|\log_2 \dfrac{|t_i - s_i|}{2^n}\right|^{H} \geq K^{-1}\dfrac{L^{x}(I_i)}{2^{n(1-H)}}.
\end{align*}
Back to \eqref{nu_1-H}, we get
\begin{align*}
\widetilde{\nu}^{n}_{1-H,H}(\mathcal{L}_x) \geq
K^{-1} \sum_{i=1}^{m} \dfrac{L^{x}(I_i)}{2^{n(1-H)}} \geq
K^{-1} \dfrac{L^{x}(S_n)}{2^{n(1-H)}} = K^{-1} Z^{x}_{n},
\end{align*}
where the last inequality holds because the local time $L^x_{\cdot}$ increases only on the level set $\mathcal{L}_x$, and $\mathcal{L}_x \cap S_n$ is contained in $\bigcup_{i=1}^{m} I_i$. This proves the claim.
\end{proof}
Using Lemma \ref{lemma2} for the first inclusion and Lemma \ref{lm3} for the second one, we can write
$$
\{\forall x \in \mathbb{R}, \, F_{\infty}^{x} = + \infty\}\subset\{\forall x\in\mathbb{R},\,
\sum_{n\geq -1} \widetilde{\nu}^{n}_{1-H,H}(\mathcal{L}_x) =+\infty
\}\subset\{\forall x\in\mathbb{R}, \,\mbox{Dim}_H \mathcal{L}_x \geq 1-H\}.
$$
As a consequence, in order to conclude the proof of Theorem \ref{LevelSets}, it remains to check that
$\mathbb{P}(\forall x \in \mathbb{R}, \, F_{\infty}^{x} = + \infty ) = 1$. This is the object of the next proposition.
\begin{proposition}
\label{proposition4} We have
\begin{align}
\mathbb{P}(\forall x \in \mathbb{R}, \, F_{\infty}^{x} = + \infty ) = 1
\label{prop}
\end{align}
\end{proposition}
Note that the following weaker statement was shown in \cite{nourdin2018sojourn}: for every fixed $x \in \mathbb{R}$, $\mathbb{P}(F_{\infty}^{x} = + \infty ) = 1$.
Our main contribution in the present note is precisely to prove the stronger, uniform version stated in Proposition \ref{proposition4}.
\subsection{Proof of Proposition \ref{proposition4}}
For every $a>0$, define
\begin{align}
\widetilde{Z}^{a}_{n}= \inf_{x \in[-a,a]} Z^{x}_{n} \quad \mbox{and} \quad \widetilde{F}^{a}_{\infty}= \sum_{n \geq 1} \widetilde{Z}^{a}_{n}.
\label{defY_n^a}
\end{align}
Recalling \eqref{ss}, we get for all $n \geq 0$
\begin{align}
\label{tildeYnx}
\widetilde{Z}^{a}_{n}= \inf_{x \in[-a,a]} Z^{x}_{n} \stackrel{d}{=} \inf_{x \in[-a,a]} Z_{0}^{2^{-nH}x} = \inf_{x \in[-2^{-nH}a,2^{-nH}a]} Z_{0}^{x} = \widetilde{Z}^{2^{-nH}a}_{0}.
\end{align}
In the three forthcoming lemmas, the following three facts are established:
\begin{enumerate}
\item[(i)] the existence of $\epsilon>0$ such that $\mathbb{P}(Z^{0}_{0}>4\epsilon) > 0$ (Lemma \ref{az}),
\item[(ii)] the existence of $a>0$ such that $ \mathbb{P}(Z_{0}^{0}>4\epsilon) \leq 2 \mathbb{P}(\widetilde{Z}^{a}_{0}>0)$ (Lemma \ref{a}),
\item[(iii)] that $\mathbb{P}\left( \widetilde{F}_{\infty}^{b} = \infty\right) \geq \mathbb{P} \left( \widetilde{Z}_{0}^{a}>0\right)$ for all $b>0$ (Lemma \ref{LemmaF^b}).
\end{enumerate}
Combining the results obtained in (i) to (iii), we deduce that
\begin{equation}
\mathbb{P}\left( \widetilde{F}_{\infty}^{b} = \infty\right) >0\quad\mbox{for all $b>0$}.
\label{itoiii}
\end{equation}
Set $\widehat{B}_u = u^{2H}B_{1/u}$, $u>0$.
By the time inversion property of the fractional Brownian motion, $\widehat{B}$ is a fractional Brownian motion of Hurst index $H$ as well.
We can write
\begin{align*}
L^{x}\left(S_n\right)=
\dfrac{1}{2\pi} \int_{\mathbb{R}} dy \, e^{-iyx} \int_{2^{n-1}}^{2^n} du e^{iyu^{2H}\widehat{B}_{1/u}}.
\end{align*}
As a result, we get that $x \mapsto L^{x}\left(S_n\right)$ is $\sigma\left\{ \widehat{B}_u : u \leq 2^{-(n-1)}\right\}$-measurable,
implying in turn that
\begin{equation}
\sigma \left\{ \widetilde{Z}^b_n \, : \, n \geq M \right\} \subset \sigma\left\{ \widehat{B}_u : u \leq 2^{-(M-1)}\right\}
\label{sigma}
\end{equation}
for every $M \geq 1$.
Consequently,
\begin{align*}
\displaystyle \left\{\widetilde{F}^b_\infty = \infty\right\} \in \bigcap_{M \geq 1} \sigma\left\{ \widehat{B}_u : u \leq 2^{-(M-1)}\right\}.
\end{align*}
Using \eqref{eqFBM}, there exists a standard Brownian motion $(W_u)_{u \geq 0}$ defined on the same probability space such that
\begin{equation}
\label{sigmaM}
\displaystyle \left\{\widetilde{F}^b_\infty = \infty\right\} \in \bigcap_{M \geq 1} \sigma \left\{ W_u \, : \, u \leq 2^{-(M-1)}\right\}.
\end{equation}
By Blumenthal's $0$--$1$ law, the probability $\mathbb{P}\left(\widetilde{F}^b_\infty = \infty \right)$ is either 0 or 1. But by (\ref{itoiii}), this probability is strictly positive;
hence we conclude that
\begin{equation}
\mathbb{P}\left( \widetilde{F}_{\infty}^{b} = \infty\right) =1\quad\mbox{for all $b>0$}.
\label{itoiiibis}
\end{equation}
For every $b>0$, one has
\begin{eqnarray*}
\label{resultb}
&&\mathbb{P} \left(\forall x \in [-b, b]:\, F_{\infty}^{x}= \infty \right)=
\mathbb{P} \left(\displaystyle \inf_{x \in [-b, b]} F_{\infty}^{x}= \infty \right) =\mathbb{P}\left(\inf_{x \in [-b, b]} \sum_{N \geq 1} Z_{N}^{x}= \infty \right) \\
&\geq& \mathbb{P} \left(\sum_{N \geq 1}\inf_{x \in [-b, b]} Z_{N}^{x} = \infty \right)= \mathbb{P}\left(\widetilde{F}^b_\infty = \infty \right) =1.
\end{eqnarray*}
We finally conclude that
\begin{align*}
\mathbb{P}\left(\forall x \in \mathbb{R}, \, F_{\infty}^{x}= \infty \right)= \lim_{b \rightarrow \infty} \mathbb{P}\left(\forall x \in [-b,b], \, F_{\infty}^{x}= \infty \right) = 1,
\end{align*}
which is the desired conclusion of Proposition \ref{proposition4}.
To conclude, it remains to state and prove the three lemmas mentioned in points (i) to (iii).
\begin{lemma}
\label{az}
There exists $\epsilon>0$ such that $ \mathbb{P}(Z^{0}_{0}>4\epsilon) > 0$.
\end{lemma}
\begin{proof}
Using that
$\displaystyle
L^{0}\left([\tfrac12,1]\right)= \dfrac{1}{2\pi} \int_{\mathbb{R}} dy \int_{\frac12}^{1} du \, e^{iyB_u},
$ together with Fubini's theorem, the identity $\mathbb{E}\left(e^{iyB_u}\right)=e^{-\frac{y^2 u^{2H}}{2}}$ and the substitution $z=u^{H}y$, we have
\begin{align*}
\mathbb{E}\left(L^{0}\left( \left[\frac{1}{2},1\right] \right)\right)= \dfrac{1}{2\pi} \int_{\frac{1}{2}}^{1} u^{-H} \, du \, \int_{\mathbb{R}} e^{-\frac{z^2}{2}} \, dz= \dfrac{1}{\sqrt{2\pi}} \int_{\frac{1}{2}}^{1} u^{-H} \, du >0.
\end{align*}
As a result, $ \mathbb{P}\left(Z^{0}_{0}>0\right) = \mathbb{P}\left(L^{0}\left( \left[\frac{1}{2},1\right] \right)>0\right) >0$; since $\mathbb{P}\left(Z^{0}_{0}>4\epsilon\right) \uparrow \mathbb{P}\left(Z^{0}_{0}>0\right)$ as $\epsilon \downarrow 0$, the desired conclusion follows.
\end{proof}
\begin{lemma}
\label{a}
For every $\epsilon>0$ small enough, there exists a real number $a>0$ such that
\begin{align*}
0< \mathbb{P}(Z_{0}^{0}>4\epsilon) \leq 2 \mathbb{P}(\widetilde{Z}^{a}_{0}>0).
\end{align*}
\end{lemma}
\begin{proof}
Let $\beta \in \big(0, \frac{1}{2} \left( \frac{1}{H} - 1 \right)\big)$, $K=[-1,1]$ and $J=[\frac{1}{2},1]$.
Set
$$
c=c(\omega):=\sup_{x\in K\setminus\{0\}}\frac{\big| L^0(J)(\omega)-L^x(J)(\omega)\big|}{|x|^\beta}.
$$
By Lemma \ref{lemma3}, we have that $\mathbb{P}(c<\infty)=1$.
Set $\eta_\epsilon=\eta_\epsilon(\omega):= \min \left\{ \left( \dfrac{\epsilon}{c(\omega)}\right)^{1 / \beta},1\right\}$. As $[-\eta_\epsilon, \eta_\epsilon ] \subset [-1,1]$, one has
\begin{align}
\label{equation1}
\forall |x| \leq \eta_\epsilon(\omega), \, \left| (L^{0}_{1}(\omega) - L^{x}_{1}(\omega)) - (L^{0}_{\frac{1}{2}}(\omega) - L^{x}_{\frac{1}{2}}(\omega))\right|\leq \epsilon.
\end{align}
By triangle inequality,
\begin{align}
\label{equation2}
\left|L^{x}_{1}- L^{x}_{\frac{1}{2}}\right| \geq \left|L^{0}_{1} - L^{0}_{\frac{1}{2}}\right|- \left| (L^{0}_{1} - L^{x}_{1}) - (L^{0}_{\frac{1}{2}} - L^{x}_{\frac{1}{2}})\right|.
\end{align}
Using \eqref{equation1} and \eqref{equation2}, we have
\begin{align}
\left\{Z_0^0 = L^{0}_{1}- L^{0}_{\frac{1}{2}}>4\epsilon \right\} \subset \left\{ \forall |x| \leq \eta_\epsilon(\omega), \, |L^{x}_{1} - L^{x}_{\frac{1}{2}}| \geq 3 \epsilon \right\}.
\label{equation3}
\end{align}
But $\displaystyle \left\{ \forall |x| \leq \eta_\epsilon(\omega), \, |L^{x}_{1} - L^{x}_{\frac{1}{2}}| \geq 3\epsilon\right\} = \left\{ \inf_{x \in [-\eta_\epsilon, \eta_\epsilon]} |L^{x}_{1} - L^{x}_{\frac{1}{2}}| \geq 3\epsilon\right\}$. Recalling the definition of $\widetilde{Z}_{0}^{\eta_\epsilon}$, we deduce that
\begin{align}
\mathbb{P} \left(\widetilde{Z}_{0}^{\eta_\epsilon}>0\right) \geq \mathbb{P} \left(\widetilde{Z}_{0}^{\eta_\epsilon}>3\epsilon\right)\geq \mathbb{P}\left( Z_{0}^{0}> 4 \epsilon \right) > 0.
\label{eq1}
\end{align}
Now for all $a>0$, we have
\begin{align}
\left\{ \widetilde{Z}_{0}^{\eta_\epsilon} >0 \right\} \subset \left\{ \widetilde{Z}_{0}^{a} >0 \right\} \cup \left\{ \eta_\epsilon \leq a \right\}.
\label{eq2}
\end{align}
Since $c < \infty$ a.s., one has that $\mathbb{P}\left( c \geq M \right) \rightarrow 0$ as $M \rightarrow \infty$. We can then choose $a \in (0,1)$ small enough such that
\begin{align}
\mathbb{P} \left(\eta_\epsilon \leq a \right)= \mathbb{P}\left(c \geq \epsilon \, a^{-\beta} \right) \leq \frac{1}{2} \mathbb{P} \left( Z_0^0 > 4\epsilon \right).
\label{eq3}
\end{align}
Using \eqref{eq1}, \eqref{eq2} and \eqref{eq3} we deduce that
\begin{align*}
\mathbb{P} \left( Z_{0}^{0} > 4 \epsilon \right) \leq
\mathbb{P} \left( \widetilde{Z}_{0}^{\eta_\epsilon} > 0 \right) \leq
\mathbb{P} \left( \widetilde{Z}_{0}^{a} >0 \right) + \mathbb{P} \left( \eta_\epsilon \leq a \right) \leq
\mathbb{P} \left( \widetilde{Z}_{0}^{a}>0\right) + \frac{1}{2} \mathbb{P}\left( Z_{0}^0 > 4 \epsilon\right).
\end{align*}
Finally, this yields
\begin{align*}
0< \mathbb{P} \left(Z_{0}^{0} > 4 \epsilon \right) \leq 2\mathbb{P} \left( \widetilde{Z}_{0}^{a}>0\right),
\end{align*}
which is the desired conclusion.
\end{proof}
\begin{lemma}
\label{LemmaF^b}
For any $a,b>0$, we have
\begin{align*}
\mathbb{P}\left( \widetilde{F}_{\infty}^{b} = \infty\right) \geq \mathbb{P} \left( \widetilde{Z}_{0}^{a}>0\right).
\end{align*}
\end{lemma}
\begin{proof}
Fix $\gamma>0$ and $a,b >0$, and consider the event $A_{\gamma, b} = \left\{ \widetilde{F}_{\infty}^{b} \leq \gamma\right\}$.
By Fubini's theorem,
\begin{align*}
\gamma \geq \mathbb{E} \left(\mathds{1}_{A_{\gamma, b}} \widetilde{F}_{\infty}^{b}\right) = \sum_{n \geq 1} \mathbb{E}\left( \mathds{1}_{A_{\gamma, b}} \widetilde{Z}_{n}^{b} \right) = \sum_{n \geq 1} \int_{0}^{\infty} \mathbb{P} \left( A_{\gamma, b} \cap \{ \widetilde{Z}_{n}^{b} > u \} \right)du.
\end{align*}
Using $\mathbb{P}\left(A \cap B\right) \geq \left( \mathbb{P}(A) - \mathbb{P}(B^c)\right)_{+}$ where $B^c$ denotes the complement of $B$, and recalling \eqref{tildeYnx}, we deduce that
\begin{align*}
\gamma \geq \sum_{n \geq 1} \int_{0}^{\infty} \left( \mathbb{P} \left( A_{\gamma, b} \right) - \mathbb{P} \left( \widetilde{Z}_{n}^{b} \leq u \right) \right)_{+} \, du= \sum_{n \geq 1} \int_{0}^{\infty} \left( \mathbb{P} \left( A_{\gamma,b} \right) - \mathbb{P} \left( \widetilde{Z}_{0}^{2^{-nH}b} \leq u \right) \right)_{+}du.
\end{align*}
There exists $M\geq1$ such that $2^{-nH}b \leq a$ for all $n \geq M$. Then, for all $n\geq M$,
\begin{align*}
\mathbb{P} \left( \widetilde{Z}_{0}^{2^{-nH}b} \leq u \right) \leq \mathbb{P} \left( \widetilde{Z}_{0}^{a} \leq u \right)
\end{align*}
and
\begin{align*}
\gamma \geq \sum_{n \geq M} \int_{0}^{\infty} \left( \mathbb{P} \left( A_{\gamma, b} \right) - \mathbb{P} \left( \widetilde{Z}_{0}^{a} \leq u \right) \right)_{+}du.
\end{align*}
Since the summand does not depend on $n$ while the series is bounded by $\gamma$, hence finite, one necessarily has
\begin{align*}
\int_{0}^{\infty} \left( \mathbb{P} \left( A_{\gamma, b} \right) - \mathbb{P} \left( \widetilde{Z}_{0}^{a} \leq u \right) \right)_{+}du = 0.
\end{align*}
Hence, for almost every $u \geq 0$ and every $\gamma \geq 0$,
\begin{align}
\mathbb{P} \left( \widetilde{F}_{\infty}^{b} \leq \gamma\right) =\mathbb{P} \left( A_{\gamma, b} \right) \leq \mathbb{P} \left( \widetilde{Z}_{0}^{a} \leq u \right).
\label{result1}
\end{align}
We know that $u \mapsto \mathbb{P}(\widetilde{Z}_{0}^{a} \leq u)$ is non-decreasing and right-continuous. Hence, (\ref{result1}) is actually true for {\it every} $u \geq 0$ and $\gamma \geq 0$.
In particular, $\mathbb{P} \left( \widetilde{F}_{\infty}^{b} > n \right) \geq \mathbb{P} \left( \widetilde{Z}_{0}^{a} > \frac{1}{n} \right)$ for all $n \in \mathbb{N}$. Letting $n \to \infty$, one concludes that
\begin{align*}
\mathbb{P} \left( \widetilde{F}_{\infty}^{b} = \infty \right) \geq \mathbb{P} \left( \widetilde{Z}_{0}^{a} > 0 \right).
\end{align*}
\end{proof}
{\bf Acknowledgements}. I thank my two advisors, Ivan Nourdin and St\'ephane Seuret, for their guidance in the elaboration of this note.
% https://arxiv.org/abs/0812.1817
\title{Least-Squares Approximation by Elements from Matrix Orbits Achieved by Gradient Flows on Compact Lie Groups}
\begin{abstract}
Let $S(A)$ denote the orbit of a complex or real matrix $A$ under a certain equivalence relation such as unitary similarity, unitary equivalence, unitary congruence, etc. Efficient gradient-flow algorithms are constructed to determine the best approximation of a given matrix $A_0$ by the sum of matrices in $S(A_1), \dots, S(A_N)$ in the sense of finding the Euclidean least-squares distance
$$\min \left\{\|X_1+ \cdots + X_N - A_0\|: X_j \in S(A_j), \ j = 1, \dots, N\right\}.$$
Connections of the results to different pure and applied areas are discussed.
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
Motivated by problems in pure and applied areas, there has been a
great deal of interest in studying equivalence classes on matrices, say,
under compact Lie group actions. For instance,
(a) the unitary (orthogonal)
similarity orbit of a complex (real) square matrix $A$ is the set of
matrices of the form $UAU^*$ for unitary (or real orthogonal) matrices $U$,
(b) the unitary (orthogonal)
equivalence orbit of a complex (real) rectangular matrix $A$
is the set of matrices of the form $UAV$ for unitary (orthogonal)
matrices
$U, V$ of appropriate sizes,
(c) the unitary $t$-congruence orbit of a complex square matrix $A$ is
the set of matrices of the form $UAU^t$ for unitary matrices $U$,
(d) the orthogonal similarity orbit of a complex square matrix
$A$ is the set of
matrices of the form $QAQ^t$ for complex orthogonal matrices $Q$,
i.e., $Q^tQ = I_n$,
(e) the similarity orbit of a square matrix $A$ is the set of matrices
of the form $SAS^{-1}$ for invertible matrices $S$.
\medskip
It is often useful to determine whether a matrix $A_0$ can be
written as a sum of matrices from orbits $S(A_1), \dots, S(A_N)$.
Equivalently, one would like to know whether
$$S(A_0) \subseteq S(A_1) + \cdots + S(A_N).$$
For $N = 1$, it reduces to the
basic problem of checking whether $A_0$ is equivalent to
$A_1$. In some cases, even this is non-trivial.
For instance, it is not easy to check
whether two $n\times n$ complex matrices are unitarily similar.
For $N > 1$, the problem is usually more involved.
Even when theoretical results exist, it may not be easy
to use them in practice or to check examples of matrices
of moderate size.
For instance,
given $10\times 10$ Hermitian matrices $A, B, C$,
to conclude that $C = UAU^* + VBV^*$ for some unitary matrices
$U$ and $V$, one needs to
check thousands of inequalities involving the eigenvalues of $A$, $B$,
and $C$; see \cite{Fu}.
Therefore, one purpose of this paper is to set up a general framework to
develop efficient computer algorithms and programs to solve such problems.
In fact, we will treat the more general problem of finding the best approximation
of a given matrix $A_0$ by the sum of matrices
from matrix orbits $S(A_1), \dots, S(A_N)$.
In other words, for given matrices $A_0, A_1, \dots, A_N$, we
determine
$$\min\left\{\|X_1 + \cdots + X_N - A_0\|:
(X_1, \dots, X_N) \in S(A_1)\times \cdots \times S(A_N)\right\}.$$
The results will be useful for solving numerical problems
efficiently, and helpful for testing conjectures in the theoretical
development of the topics under consideration. As we will see in the
following discussion, some numerical examples indeed lead to
general theory; see Section 3.
\medskip
We will consider different matrix orbits in the next few sections.
In each case, we will mention the motivation of the problems and
derive the gradient flows for the respective orbits, which will be used
to design the algorithms and computer programs to solve the optimization
problem. Note that we always consider the orbits of
similarity $SAS^{-1}$ and equivalence $SAT$, where $\left\{S,T\right\}$ can be elements
of any semisimple compact connected matrix Lie group,
in particular the special unitary group $SU(n)$
and subgroups thereof. Since these matrix Lie groups are
compact, they are themselves smooth Riemannian manifolds $M$,
which in turn implies they are endowed with a Riemannian
metric induced by the non-degenerate Killing form related to a bi-invariant
scalar product $\braket\cdot\cdot _x$ on their tangent
and cotangent spaces $T_xM$ and $T^*_xM$. The metric smoothly varies with
$x\in M$ and allows for identifying the Fr{\'e}chet differential in $T^*_xM$
with the gradient in $T_xM$. Moreover, in Riemannian manifolds the existence
and convergence of gradient flows with appropriate discretization schemes are
elaborated in detail in Ref.~\cite{SDHG07a}. In the present context, it is
important to note that the subsequent gradient flows on the unitary congruence
orbit and the unitary equivalence orbit are fundamental. The flows on compact
connected subgroups of $SU(n)$ such as $SO(n)$ or $SU(2)^{\otimes m}$ (with
$2^m=n$) can readily be derived from the flows on $SU(n)$
\cite{SDHG07,SDHG07a}.
Furthermore, in each case, we will provide numerical examples
to illustrate their efficiency and accuracy.
The situation in the general linear group $GL(N)$ and its subgroups
that are not in the intersection with the unitary groups is entirely
different: those groups are no longer
compact, but only locally compact.
For $GL(N)$~orbits we give an outlook with
some analytical results on infima of Euclidean distances.
Since locally compact Lie groups lack {\em bi-invariant}
metrics on the tangent spaces to
their orbit manifolds, they can only be endowed with left-invariant {\em or}
right-invariant metrics. Moreover, the exponential map onto locally
compact Lie groups is no longer geodesic as in the compact case.
Consequently, one will have to devise other approximations
to the respective geodesics than obtained by the (Riemannian)
exponential. These numerics
are thus a separate topic of current research and will therefore
be pursued in a follow-up study.
With regard to notation, unless stated otherwise, the norm $||A||$
shall always be
read as Frobenius norm $||A||_2 := \sqrt{{\rm tr}\,\left\{A^* A\right\}}$.
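As a sanity check of this convention (a trivial NumPy sketch, not part of the paper), the trace formula coincides with NumPy's default entrywise matrix norm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# ||A||_2 = sqrt(tr{A^* A}); np.linalg.norm defaults to the Frobenius norm
frob_via_trace = np.sqrt(np.trace(A.conj().T @ A).real)
assert abs(frob_via_trace - np.linalg.norm(A)) < 1e-12
```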
\section{Unitary Similarity Orbits}
\subsection{The Hermitian Matrix Case}
For an $n\times n$ Hermitian matrix $A$, let
$S(A)$ be the set of matrices unitarily similar to $A$.
Then
$$S(A)+S(B) = \left\{X+Y: (X,Y) \in S(A)\times S(B)\right\}$$
is a union of unitary similarity orbits. Researchers have
determined necessary and sufficient conditions for $S(C)$ to
be a subset of $S(A)+S(B)$ in terms of the eigenvalues of
$A, B$ and $C$; see \cite{CW1,CW2,DST,Fu,H2,K,TF1,TF2}.
In particular, suppose $A, B, C$ have eigenvalues
$$a_1 \ge \cdots \ge a_n, \qquad
b_1 \ge \cdots \ge b_n, \qquad \hbox{ and } \qquad
c_1 \ge \cdots \ge c_n,$$
respectively. Then $S(C) \subseteq S(A) + S(B)$ if and only if
\begin{equation}\label{tr}
\sum_{j=1}^n (a_j+b_j - c_j) = 0\end{equation} and a collection of
inequalities in the form
\begin{equation}\label{rst}\sum_{r \in R} a_r +
\sum_{s\in S} b_s \ge \sum_{t \in T} c_t\end{equation} for certain
$m$ element subsets $R,S,T \subseteq \left\{1, \dots, n\right\}$ with
$1 \le m < n$ determined by the Littlewood-Richardson rules;
see \cite{DST,Fu} for details.
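The necessity side of these conditions is easy to probe numerically. The sketch below (not from the paper; spectra, seed, and sample count are ad hoc choices) draws Haar-random rotations of fixed spectra and checks the trace identity (\ref{tr}) together with Weyl's inequalities $c_{i+j-1} \le a_i + b_j$, which are the singleton cases $|R|=|S|=|T|=1$ of the family (\ref{rst}):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a = np.array([7.0, 5.0, 2.0, -1.0])  # eigenvalues of A, descending
b = np.array([4.0, 1.0, 0.0, -3.0])  # eigenvalues of B, descending

def haar_unitary(n, rng):
    # phase-corrected QR of a complex Ginibre matrix is Haar-distributed
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

for _ in range(200):
    U, V = haar_unitary(n, rng), haar_unitary(n, rng)
    C = U @ np.diag(a) @ U.conj().T + V @ np.diag(b) @ V.conj().T
    c = np.sort(np.linalg.eigvalsh(C))[::-1]
    # trace identity: the sum of eigenvalues is preserved
    assert abs(c.sum() - (a.sum() + b.sum())) < 1e-9
    # Weyl's inequalities c_{i+j-1} <= a_i + b_j (1-indexed)
    for i in range(n):
        for j in range(n - i):
            assert c[i + j] <= a[i] + b[j] + 1e-9
print("trace identity and Weyl inequalities verified on 200 samples")
```

This only tests necessity; sufficiency of the full Horn list is exactly what makes the problem hard.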
The study
has connections to many different areas such as representation
theory, algebraic geometry, and algebraic combinatorics, etc.
Note that the relation between Horn's problem and the Littlewood-Richardson rules
has recently also attracted attention in quantum information \cite{C08}.
The set of inequalities in (\ref{rst}) grows exponentially with $n$.
Therefore, it is not easy to check the conditions even for a moderate
size problem, say, for $10\times 10$ Hermitian matrices.
As a matter of fact, the theory has been extended to determine
whether $S(A_0)$ is a subset of $S(A_1) + \cdots + S(A_N)$
for given $n\times n$ Hermitian matrices $A_0, \dots, A_N$,
in terms of equality and linear inequalities
of the eigenvalues of the given matrices.
Of course, the number of inequalities involved is even
larger. There does not seem to be an efficient way
to use these results in practice, whether for testing numerical examples
or conjectures in research.
It is interesting to note that by the saturation conjecture (theorem)
(see \cite{BF} and its references),
there exist Hermitian matrices with
nonnegative integral eigenvalues $a_1 \ge \cdots \ge a_n$,
and $b_1 \ge \cdots \ge b_n$
such that $A+B$ has nonnegative integral eigenvalues
$c_1 \ge \cdots \ge c_n$ if and only if
the Young diagram corresponding to
$(c_1, \dots, c_n)$ can be obtained from those of
$(a_1, \dots, a_n)$ and $(b_1, \dots, b_n)$.
\subsection{The General Complex Matrix Case}
Likewise, we study the problem
$$\min\left\{\| \sum_{j=1}^N U_j A_j U^*_j - A_0 \|:
U_1, \dots, U_N \in SU(n)\; \hbox{unitary} \right\}$$
for general complex matrices $A_0, \dots, A_N$.
Even for $N = 1$, the result is highly nontrivial.
In theory, it is related to the problem of determining
whether $A_0$ and $A_1$ are unitarily similar; see \cite{Sh}.
Also, to determine
$$\min\left\{\|UAU^* - C^* \|:
U \hbox{ unitary} \right\}$$
for $A, C \in M_n$
leads to the study of the $C$-{\it numerical range} and the
$C$-{\it numerical radius} of $A$ defined by
$$W(C,A) = \left\{ {\rm tr}\,(CUAU^*): U \in SU(n) \right\},$$
and
$$r(C,A) = \max\left\{|\mu|: \mu \in W(C,A)\right\}.$$
The $C$-numerical radius is important in the study of
{\it unitary similarity
invariant} norms on $M_n$, i.e., norms $\nu$ satisfying
$\nu(UXU^*) = \nu(X)$ for all $X, U \in M_n$ such that $U$ is unitary.
For instance, it is known that for every unitary similarity invariant
norm $\nu$ there is a compact subset $S$ of $M_n$ such that
$$\nu(X) = \max\left\{ r(C,X): C \in S\right\}.$$
So, the $C$-numerical radii can be viewed as the building blocks
of unitary similarity invariant norms. We refer readers to
the survey \cite{Li} for further results on the $C$-numerical range and
$C$-numerical radius. For applications of
$C$-numerical ranges in quantum dynamics,
see also Ref.~\cite{SDHG07}.
For two matrices, one may study whether
$C = UAU^* + VBV^*$
for, e.g., a Hermitian $A$ and a skew-Hermitian $B$. In other words,
we want to study whether a matrix can be written as the
sum of a Hermitian matrix and a skew-Hermitian matrix
with prescribed eigenvalues.
\subsection{Sum of Hermitian and Skew-Hermitian Matrices}
For $C = UAU^* + VBV^*$ with $A = A^*$ and $B = -B^*$, there are
many known inequalities relating the eigenvalues of $A$ and $B$
to the eigenvalues and singular values of $C$; see \cite{CHL} and
the references therein. However, there has been no known necessary and
sufficient
condition for the existence of matrices $A, B, C$ satisfying
$C = UAU^*+VBV^*$ with $A = A^*$ and $B=-B^*$ with prescribed
eigenvalues or with prescribed singular values.
Nevertheless, it is easy to solve the approximation
problem
$$\min\left\{\|U^*AU + V^*BV - C\|: U, V \hbox{ unitary} \right\}.$$
The following result actually holds, with the same proof, for any {\it unitarily invariant}
norm on $n\times n$ matrices; see \cite{LT}.
Furthermore, we can use this result to verify that our algorithm indeed
yields the optimal solution; see Example 2 in Section 2.5.
\begin{theorem} Let $\|\cdot\|$ be the Frobenius norm on $M_n$.
Let $A, B, C \in M_n$ with $A = A^*$ and $B = -B^*$.
Suppose $U, V \in M_n$ are unitary matrices
such that $U\tfrac{1}{2}(C+C^*)U^* = {\rm diag}\,(f_1, \dots, f_n)$ with
$f_1 \ge \cdots \ge f_n$,
and $V\tfrac{1}{2}(C-C^*)V^* = i\; {\rm diag}\,(g_1, \dots, g_n)$
with $g_1 \ge \cdots \ge g_n$.
Suppose $A$ is unitarily similar to a diagonal matrix
$A_1$ (respectively, $A_2$) with diagonal entries arranged in
descending (respectively, ascending) order.
Suppose $-i B$ is unitarily similar to a diagonal matrix
$-i B_1$ (respectively, $-i B_2$) with diagonal entries arranged in
descending (respectively, ascending) order.
Then
\begin{eqnarray*}
\|U^*A_1U + V^*B_1V -C \|^2 &=& {\sum_{j=1}^n(|f_j - a_j|^2 + |g_j-b_j|^2)}\\
\|U^*A_2U + V^*B_2V -C\|^2 &=& {\sum_{j=1}^n(|f_j - a_{n-j+1}|^2 + |g_j-b_{n-j+1}|^2)}
\end{eqnarray*}
and for any unitary $X, Y \in M_n$,
$$\|U^*A_1U + V^*B_1V -C\| \le \|X^*AX + Y^*BY -C\|
\le \|U^*A_2U + V^*B_2V -C\|.$$
\end{theorem}
\it Proof. \rm
Let $F = \tfrac{1}{2}(C+C^*)$ and $G = \tfrac{-i}{2}(C-C^*)$.
It is well known that
$$\|F - U^*A_1U\| \le \|F - X^*AX\| \le \|F - U^*A_2U\|$$
and
$$\|G - V^*B_1V\| \le \|G - Y^*BY\| \le \|G - V^*B_2V\|$$
for any unitary $X, Y \in M_n$; see \cite{LT}.
Since $\|H+iK\|^2 = \|H\|^2 + \|K\|^2$
for any Hermitian $H,K \in M_n$, the results follow.
\qed
\subsection{Deriving Gradient Flows on Unitary Similarity Orbits}\label{sec:flow_similarity}
To begin with, we focus on the problem of approximating a given
matrix $C$ using matrices from two unitary similarity orbits,
i.e., finding
$$\min \left\{ \|UAU^* + VBV^* - C\|: U, V \in SU(n)\; \hbox{ unitary} \right\}.$$
For simplicity, here we describe the steepest descent method to
search for unitary matrices $U_0, V_0$ attaining the optimum.
Refined approaches like conjugate gradients, Jacobi-type or Newton-type methods
may be implemented likewise, see for instance \cite{SDHG07a}.
As will be shown below, more than two unitary similarity orbits can be treated
similarly.
The basic idea is to improve the current unitary pair $(U_k,V_k)$ to
$(U_{k+1}, V_{k+1})$ so that
$$\|U_{k+1}A U^*_{k+1} + V_{k+1}B V^*_{k+1} - C\| <
\|U_{k}A U^*_{k} + V_{k}BV^*_{k} - C\|$$
until the successive iterations differ only by a small tolerance,
or the gradient ({\em vide infra}) vanishes.
Further, to avoid pitfalls by local minima whenever the Euclidean distance
cannot be made zero, we use a sufficiently large multitude
of different random starting points $(U_0,V_0)$ for our algorithm.
Needless to say, a positive matching result is constructive,
while a negative result may be due to local minima. It is therefore
important to use a sufficiently large set of initial conditions
for confident conclusions in the negative case.
For a start, consider the least-squares minimization task
\begin{equation}
\minover{U,V\in SU(n)} ||UAU^* + VBV^* - C||_2^2\;,
\end{equation}
which can be rewritten as
\begin{eqnarray*}
||UAU^* &+& VBV^* - C||_2^2 \\
&=& ||UAU^* + VBV^*||_2^2 + ||C||_2^2 - 2 \Re {\rm tr}\,\left\{C^*(UAU^* + VBV^*)\right\} \\
&=& ||A||_2^2 + ||B||_2^2 + ||C||_2^2 -
2 \Re {\rm tr}\,\left\{C^*(UAU^* + VBV^*) - UAU^*\; VB^*V^*\right\}
\end{eqnarray*}
and thus is equivalent to the maximisation task
\begin{equation}
\maxover{U,V\in SU(n)} \Re {\rm tr}\,\left\{C^*(UAU^* + VBV^*) - UAU^*\; VB^*V^*\right\}\;.
\end{equation}
\noindent
Therefore we set
\begin{equation}
f(U,V) := {\rm tr}\,\left\{(UAU^* + VBV^*)\, C^* - UAU^*\; VB^*V^*\right\}
\end{equation}
and $F(U,V) := \Re f(U,V)$.
Then its Fr{\'e}chet derivative $D_U f(U): T_U\mathcal U \to T_{f(U)}\mathcal
U$ can be seen as a tangent map, where the elements of the tangent space
$T_U\mathcal U$ to the Lie group of unitaries $\mathcal U=SU(n)$ or $U(n)$ at the point $U$
take the form $\Omega U$ with $\Omega=-\Omega^*$ being itself an element of
the Lie algebra. The differential thus reads
\begin{eqnarray*}
D_U f(U)(\Omega U)
&=& {\rm tr}\,\left\{((\Omega U) A U^* + UA(\Omega U)^*)(C^* - VB^*V^*)\right\} \\
&=& {\rm tr}\,\left\{((\Omega U) A U^* - UAU^*(\Omega U) U^*)(C^* - VB^*V^*)\right\}\\
&=& {\rm tr}\,\left\{( A U^* (C^* - VB^*V^*) - U^*(C^* - VB^*V^*)\, UAU^*) (\Omega U)\right\}
\end{eqnarray*}
where we used the invariance of the trace under cyclic permutations and
$(\Omega U)^* = - U^* (\Omega U) U^*$, which follows from the product rule for
$D(\ensuremath{{\rm 1 \negthickspace l}{}})(\Omega U) = D(UU^*)(\Omega U) =0 = (\Omega U)U^* + U(\Omega U)^*$
in consistency with the Lie-algebra elements $\Omega$ being skew-Hermitian.
Moreover, by identifying
\begin{equation}
D_U f(U)\cdot (\Omega U)
= \braket{\operatorname{grad}_U f(U)}{\Omega U} = {\rm tr}\,\left\{(\operatorname{grad}_U f(U))^* \Omega U\right\}
\end{equation}
one finds
\begin{equation*}
\operatorname{grad}_U f(U) = (C - VBV^*) U A^* - UA^*U^* (C - VBV^*) U
= \big[(C - VBV^*), UA^*U^*\big]\,U\quad.
\end{equation*}
With $[X^*,Y]_s
:= \tfrac{1}{2}([X^*,Y] - [X^*,Y]^*) = \tfrac{1}{2}([X^*,Y] + [X,Y^*])$
as skew-hermitian part of the commutator
one obtains for $F(U):= \Re f(U)$
\begin{equation}
\operatorname{grad}_U F(U) = \big[(C^* - VB^*V^*), UAU^*\big]_s\,U\quad.
\end{equation}
Taking the respective Riemannian exponentials
$\exp_U(\operatorname{grad}_U F(U))$ and $\exp_V(\operatorname{grad}_V F(V))$ thus gives
the recursive gradient flows
\begin{eqnarray*}
U_{k+1} &=& \exp\left\{-\alpha_k [ U_kAU_k^*, (C^* - V_kB^*V_k^*)]_s\right\}\;U_k\\
V_{k+1} &=& \exp\left\{-\beta_k [ V_kBV_k^*, (C^* - U_kA^*U_k^*)]_s\right\}\;V_k
\end{eqnarray*}
as discretized solutions of the coupled gradient system
\begin{equation}
\dot U = \operatorname{grad}_U F(U,V) \quad\text{\rm and}\quad \dot V = \operatorname{grad}_V F(U,V) \;.
\end{equation}
Conditions for convergence are described in detail in \cite{HM94}.
For appropriate step sizes $\alpha_k,\beta_k$ see also Ref.~\cite{NMRJOGO}.
Generalizing the findings from a sum of two orbits to
higher sums of unitary orbits is straightforward: the problem
\begin{equation}
\min\left\{\| \sum_{j=1}^N U_j A_j U^*_j - A_0 \|:
U_1, \dots, U_N \in SU(n)\;\hbox{unitary} \right\}
\end{equation}
can be addressed by the system of coupled gradient flows ($j=1,2,\dots, N$)
\begin{equation}\label{eqn:US-flows}
U_{k+1}^{(j)} = \exp\left\{-\alpha_k^{(j)}
{[ A_k^{(j)}, A_{0jk}^*]}_s\right\}\;U_k^{(j)}
\end{equation}
where for short we set $A_k^{(j)} := U_k^{(j)}A_j{U_k^{(j)}}^*$ and
$A_{0jk}:= A_0 - \sum\limits_{\nu=1 \atop \nu\neq j}^N A_k^{(\nu)}$.
These gradient flows follow the extension of the original idea on the
orthogonal group \cite{Bro88,HM94} to the unitary group \cite{Sci98}, where
here we introduce a larger system of coupled flows.
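The coupled flows (\ref{eqn:US-flows}) can be sketched in a few lines of NumPy/SciPy. The following is an illustrative sketch, not the authors' implementation: the normalized step size, the random seed, and the $3\times 3$ Hermitian test data are ad hoc choices, and the target $A_0$ is constructed inside $S(A_1)+S(A_2)$ so that the infimum is exactly zero.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def skew(X):
    # skew-Hermitian part, the (.)_s of the text
    return 0.5 * (X - X.conj().T)

def rand_unitary(n):
    # Haar-distributed unitary via phase-corrected QR of a complex Ginibre matrix
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

n, N = 3, 2
A = [np.diag([1.0, 2.0, 4.0]), np.diag([0.0, 1.0, 3.0])]

# Build A0 inside S(A_1) + S(A_2), so the least-squares distance is zero.
A0 = np.zeros((n, n), dtype=complex)
for Aj in A:
    W = rand_unitary(n)
    A0 += W @ Aj @ W.conj().T

U = [rand_unitary(n) for _ in range(N)]

def residual(U):
    return np.linalg.norm(sum(U[j] @ A[j] @ U[j].conj().T for j in range(N)) - A0)

res0 = residual(U)
for _ in range(1500):
    orbit = [U[j] @ A[j] @ U[j].conj().T for j in range(N)]
    for j in range(N):
        A0j = A0 - sum(orbit[nu] for nu in range(N) if nu != j)
        # U^{(j)}_{k+1} = exp{-alpha [A^{(j)}_k, A0jk^*]_s} U^{(j)}_k
        G = skew(orbit[j] @ A0j.conj().T - A0j.conj().T @ orbit[j])
        step = 0.15 / (1.0 + np.linalg.norm(G))  # ad hoc normalized step size
        U[j] = expm(-step * G) @ U[j]
        orbit[j] = U[j] @ A[j] @ U[j].conj().T
res1 = residual(U)
print(res0, res1)
```

The residual should drop sharply from its random-start value; as noted above, a run that stalls near a saddle is best handled by random restarts.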
\subsection{Numerical Examples}
Here we demonstrate gradient flows minimising
$\| \sum_{j=1}^N U_j A_j U_j^* - A_0 \|$ over the unitaries $ U_1, \dots, U_N$
for given Hermitian matrices $A_0, \dots, A_N$.
\medskip
\noindent
{\bf Example 1}\\
As a test case, consider the following examples for
finding $U_j \in {\mathbf C}^{10\times 10}$. For $j=1,2,\dots,N$
choose a set of random unitaries $U_j^{(r)}\in{\mathbf C}^{10\times 10}$
distributed according to the Haar measure as recently described in \cite{Mez07}
and define
$A_j := {\rm diag}\,(1,3,5,\dots,19) +
\tfrac{j-1}{10}\ensuremath{{\rm 1 \negthickspace l}{}}_{10}$ and $
A_0^{(N)}:= {\rm diag}\,(a_1, ..., a_{10})$
where $a_1,a_2,\dots,a_{10}$ are the eigenvalues of
$A'_{0,N}:= \sum_{j=1}^N U_j^{(r)}
A_j {U_j^{(r)}}^*$ (and $\ensuremath{{\rm 1 \negthickspace l}{}}_{10}$ is the $10\times 10$ unity matrix).
\begin{figure}[Ht!]
\begin{center}
\includegraphics[width=0.65\textwidth]{lisch_F1.eps}
\end{center}
\caption{\label{fig:UAU_N}
Coupled flows minimizing $||\sum_{j=1}^N U_jA_jU_j^* - A_0^{(N)}||^2_2$ with
(a) $N=2$ and (b) $N=10$ for Example 1.
}
\end{figure}
As shown in Fig.~\ref{fig:UAU_N}, the gradient flow of Eqn.~\ref{eqn:US-flows} minimizes
$||\sum_{j=1}^N U_jA_jU_j^* - A_0^{(N)}||^2_2$ by driving it practically to
zero. Note that in Fig.~\ref{fig:UAU_N}b the combined flow on $N=10$ unitaries
converges even faster than in Fig.~\ref{fig:UAU_N}a, where $N=2$ and the flow
is more sensitive to saddle points as may be inferred from the jumps in trace
(a).
\bigskip
\noindent
{\bf Example 2}\\
Let $A,B$ be Hermitian and $C$ arbitrary, e.g.,
$ A = \left(\begin{smallmatrix} 2 & 5 & 11\\ 5 & 8 & 15\\ 11 & 15 & 16\end{smallmatrix}\right), \;
B = \left(\begin{smallmatrix} 6 & 8 & 9\\ 8 & 12 & 10\\ 9 & 10 & 0\end{smallmatrix}\right), \;
C = \left(\begin{smallmatrix} 1 & 11 & 3\\ 6 & 9 & 3\\ 8 & 9 & 2\end{smallmatrix}\right) \;.
$
Then $a := \operatorname {eig}(A)= (-5.6674; -0.4830; 32.1504),
b := \operatorname {eig}(B)= (-7.4816; 0.7123; 24.7693)$ and
$f: = \operatorname {eig}\tfrac{1}{2}(C+C^*)=(-4.9555; -1.3888; 18.3443), g := \operatorname {eig}\tfrac{-i}{2}(C-C^*)=(-4.6368; 0; 4.6368)$.
According to Theorem 2.1 one gets
\begin{equation}
\Delta:= \minover{U,V\in SU(3)} ||UAU^*+iVBV^*-C||_2^2 = (a-f)^*(a-f) + (b-g)^*(b-g) = 605.8521\;.
\end{equation}
More precisely $\Delta = 605.852131091'3004$, while 100 runs of the gradient flow with independent
random initial conditions give a mean $\pm$ rmsd.~of $\bar \Delta = 605.852131091'3570 \pm 1.13\cdot 10^{-10}$.
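The closed-form value of Theorem 2.1 for the data of Example 2 can be reproduced independently of the flow (a NumPy sketch, not part of the original text; note that matching both spectra in ascending order is the same pairing as descending with descending):

```python
import numpy as np

# A, B, C of Example 2; C = UAU* + iVBV* with A, B Hermitian
A = np.array([[2, 5, 11], [5, 8, 15], [11, 15, 16]], dtype=float)
B = np.array([[6, 8, 9], [8, 12, 10], [9, 10, 0]], dtype=float)
C = np.array([[1, 11, 3], [6, 9, 3], [8, 9, 2]], dtype=float)

a = np.sort(np.linalg.eigvalsh(A))            # spectrum of A
b = np.sort(np.linalg.eigvalsh(B))            # spectrum of B (= eig of -i(iB))
f = np.sort(np.linalg.eigvalsh((C + C.T) / 2))   # Hermitian part of C
g = np.sort(np.linalg.eigvalsh((C - C.T) / (2j)))  # -i times skew part of C

delta = np.sum((a - f) ** 2) + np.sum((b - g) ** 2)
print(delta)  # approx 605.8521
```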
\section{Unitary Equivalence}
In this section, we study
$$\min\left\{\|\sum_{j=1}^N U_jA_jV_j - A_0\|: U_1, \dots, U_N \in U(n)
{\quad \text {and}\quad } V_1, \dots, V_N \in U(m)\; \hbox{ unitary}
\right\}$$
for rectangular matrices $A_0, \dots, A_N$.
By the result of O'Shea and Sjamaar \cite{OS},
$$\min\|\sum_{j=1}^N U_jA_jV_j - A_0\|=0$$
if and only if
$$\min\|\sum_{j=1}^N W_j^*\tilde A_jW_j - \tilde A_0\|=0$$
where
\begin{equation*}
\tilde A_j = \begin{pmatrix} 0 & A_j \cr
A_j^* & 0 \cr\end{pmatrix} \qquad \hbox{ for } j=0,1, \dots, N.
\end{equation*}
Thus, by the results concerning unitary similarity orbits (see Section 2),
\begin{equation}\label{eq1}
\min\left\{\|A_0 - \sum_{j=1}^N U_jA_jV_j\|:
U_1, \dots, U_N;\, V_1, \dots, V_N \; \hbox{ unitary} \right\} = 0
\end{equation}
if and only if the singular values of $A_0, A_1, \dots, A_N$
satisfy a certain set of linear inequalities.
Clearly,
$\min\left\{\|A - UBV\|: U, V \hbox{ unitary} \right\} = 0$
if and only if $A$ and $B$ have the same singular values.
In general,
it is interesting to check whether
$$\sqrt 2 \min\|\sum_{j=1}^N U_jA_jV_j - A_0\|
= \min\|\sum_{j=1}^N W_j^*\tilde A_jW_j - \tilde A_0\|.$$
In computer experiments
(see Example 6 in Section 3), we observe that
(\ref{eq1}) always holds if $A_0, A_1, \dots, A_N$ are random
matrices generated by {\sc matlab}.
We explain this phenomenon in the following.
We begin with a simple observation.
\begin{lemma}
Suppose $a_0, a_1, \dots, a_N \in (0, \infty)$.
The following are equivalent.
{\rm (a)} There are complex units $e^{it_1}, \dots, e^{it_N}$ such that
$a_0 - \sum_{j=1}^N a_j e^{it_j} = 0.$
{\rm (b)} There is a convex polygon with $N+1$ sides whose side lengths are
$a_0, \dots, a_N$.
{\rm (c)} $\sum_{j=0}^N a_j - 2a_k \ge 0$ for all $k = 0, 1, \dots, N$.
\end{lemma}
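For $N=2$ the phases in (a) can be written down explicitly via the law of cosines, which also illustrates the triangle inequalities of condition (c) (a sketch, not from the paper; the side lengths are an arbitrary example):

```python
import numpy as np

def phases_n2(a0, a1, a2):
    # Solve a0 = a1*exp(i t1) + a2*exp(i t2); solvable iff the triangle
    # inequalities of condition (c) hold: a0<=a1+a2, a1<=a0+a2, a2<=a0+a1.
    cos_t1 = (a0**2 + a1**2 - a2**2) / (2 * a0 * a1)  # law of cosines
    t1 = np.arccos(np.clip(cos_t1, -1.0, 1.0))
    # the remainder a0 - a1*e^{i t1} has modulus a2; read off its phase
    t2 = np.angle(a0 - a1 * np.exp(1j * t1))
    return t1, t2

a0, a1, a2 = 5.0, 4.0, 3.0
t1, t2 = phases_n2(a0, a1, a2)
print(abs(a1 * np.exp(1j * t1) + a2 * np.exp(1j * t2) - a0))  # ~0
```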
From this observation, one easily gets the following
condition related to the equality (\ref{eq1}).
\begin{proposition} \label{prop1} Let $A_j = {\rm diag}\,(a_{1j}, \dots, a_{nj})$ be
nonnegative diagonal matrices for $j = 0, 1, \dots, N$, and let
$v_j = (a_{1j}, \dots, a_{nj})^t$.
Then there exist permutation matrices $P_1, \dots, P_N$
and diagonal unitary matrices $D_1, \dots, D_N$ such that
$$A_0 = \sum_{j=1}^N D_jP_jA_jP_j^t$$
if and only if the entries of each row of the matrix
$$[v_0 | P_1v_1 | \cdots | P_N v_N]$$
correspond to the side lengths of an $(N+1)$-sided convex polygon.
\end{proposition}
\noindent
If one examines the singular values of an $n\times n$
random matrix generated by {\sc matlab}, one sees that there is
always a dominant singular value of size about $n/2$, while
the other singular values range from 0 to $1.5n$ in a rather
systematic pattern. So, it is often
possible to apply Proposition \ref{prop1} to get equality
(\ref{eq1}) if $A_0, \dots, A_N$
are random matrices generated by {\sc matlab} for $N \ge 2$.
\medskip
In contrast, for general matrices, it is easy to construct
$A_0, A_1, \dots, A_N$ such that (\ref{eq1}) fails.
\medskip
\medskip
\noindent
{\bf Example 3}\\
Let $A_0 = {\rm diag}\,(N^2,N+1) \oplus 0_{n-2}$
and $A_j = {\rm diag}\,(N,1) \oplus 0_{n-2}$ for $j = 1, \dots, N$.
Then the equality in Eqn.~(\ref{eq1}) clearly cannot hold, because
$$\sum_{j=1}^n s_j(A_0) > \sum_{i=1}^N \sum_{j=1}^n s_j(A_j).$$
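The obstruction amounts to comparing sums of all singular values (Ky Fan $n$-norms); a quick numerical check for, e.g., $N=3$ and $n=4$ (a sketch, not from the paper):

```python
import numpy as np

N, n = 3, 4
# A0 and the N identical summands A_j of Example 3, zero-padded to size n
A0 = np.diag([float(N**2), float(N + 1)] + [0.0] * (n - 2))
Aj = np.diag([float(N), 1.0] + [0.0] * (n - 2))

sum_s_A0 = np.linalg.svd(A0, compute_uv=False).sum()       # N^2 + N + 1 = 13
sum_s_rhs = N * np.linalg.svd(Aj, compute_uv=False).sum()  # N(N + 1) = 12
print(sum_s_A0, sum_s_rhs)
assert sum_s_A0 > sum_s_rhs  # so the distance in (eq1) cannot vanish
```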
Recall that the Ky Fan $k$-norm of a matrix $A \in M_n$
is defined as $\|A\|_k = \sum_{j=1}^k s_j(A)$,
and a norm $\|\cdot\|$ on $M_n$ is unitarily invariant if
$\|A\| = \|UAV\|$ for all $A \in M_n$ and unitary $U, V \in M_n$.
By the Ky Fan dominance theorem, two matrices
$A, B \in M_n$ satisfy $\|A\|_k \le \|B\|_k$ for $k = 1, \dots, n$
if and only if $\|A\| \le \|B\|$ for all unitarily invariant
norms $\|\cdot\|$.
In view of this example, we have the following result.
\begin{proposition} Suppose $A_0, A_1, \dots, A_N \in M_n$ satisfy
(\ref{eq1}). Then for all unitarily invariant norms,
$$2\|A_i\| \le \sum_{j=0}^N \|A_j\|, \qquad i = 0, 1, \dots, N,$$
and equivalently, for $k = 1, \dots, n$,
\begin{equation} \label{eq2}
2\|A_i\|_k \le \sum_{j=0}^N \|A_j\|_k, \qquad i = 0,1, \dots, N.
\end{equation}
Moreover, if there is $k$ such that equality (\ref{eq2}) holds, then
(\ref{eq1}) holds if and only if
$A_j$ is unitarily similar to
$B_j \oplus C_j$ with $B_j \in M_k$ for $j=0, \dots, N$ such that
$$\min\left\{\|B_0 - \sum_{j=1}^N U_jB_jV_j\|:
U_1, \dots, U_N, V_1, \dots, V_N \in M_k \hbox{ are unitary} \right\} = 0$$
and
$$\min\left\{\|C_0 - \sum_{j=1}^N X_jC_jY_j\|:
X_1, \dots, X_N, Y_1, \dots, Y_N \in M_{n-k}
\hbox{ are unitary} \right\} = 0.$$
\end{proposition}
It would be nice if one could obtain (\ref{eq1}) by checking the relatively
easy condition (\ref{eq2}).
Unfortunately, the following example shows that this is not the case.
\medskip
\medskip
\noindent
{\bf Example 4}\\
Let $A_0 = {\rm diag}\,(14,2)$, $A_1 = {\rm diag}\,(8,0)$, $A_2 = {\rm diag}\,(7,4)$.
Then (\ref{eq2}) is satisfied for all $k\ge 1$ but by the result in \cite{LP},
$${\rm diag}\,\left(U_1A_1V_1+U_2A_2V_2\right)\ne (14,2)$$
for all unitaries $U_i,\ V_j$.
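A quick check (a NumPy sketch, not from the paper) confirms that the data of Example 4 indeed satisfy all inequalities (\ref{eq2}) even though (\ref{eq1}) fails:

```python
import numpy as np

# A0, A1, A2 of Example 4
mats = [np.diag([14.0, 2.0]), np.diag([8.0, 0.0]), np.diag([7.0, 4.0])]
sv = [np.sort(np.linalg.svd(M, compute_uv=False))[::-1] for M in mats]

for k in (1, 2):
    kyfan = [s[:k].sum() for s in sv]  # Ky Fan k-norms of A0, A1, A2
    for norm_i in kyfan:
        assert 2 * norm_i <= sum(kyfan) + 1e-12
print("condition (eq2) holds for k = 1, 2, yet (eq1) fails by [LP]")
```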
\subsection{Deriving Gradient Flows on Unitary Equivalence Orbits}\label{sec:flow_equivalence}
For minimizing $||UAV - C||_2^2$ one has to maximize
$$F(U,V) := \Re {\rm tr}\, \left\{UAVC^*\right\} = \tfrac{1}{2} {\rm tr}\, \left\{UAVC^* + (UAVC^*)^*\right\}\;.$$
By the same arguments as before, from its Fr{\'e}chet differential
$$ D_U F(U,V)(\Omega U) =
\tfrac{1}{2} {\rm tr}\, \left\{(\Omega U) AVC^* - CV^*A^*U^* (\Omega U) U^*\right\}
= \tfrac{1}{2} {\rm tr}\, \left\{(AVC^* - U^*CV^*A^*U^*) (\Omega U)\right\}$$
one obtains the gradient---where henceforth we keep writing
$(\cdot)_s$ for the skew-Hermitian part
$$ \operatorname{grad}_U F(U,V) = \tfrac{1}{2} (AVC^* - U^*CV^*A^*U^*)^*
= -(UAVC^*)_s \;U\;. $$
An analogous result follows for $\operatorname{grad}_V F(U,V)$.
Taking again the respective Riemannian exponentials leads
to the recursive scheme
\begin{eqnarray*}
U_{k+1} &=& \exp\left\{-\alpha_k (U_kAV_kC^*)_s\right\}\;U_k\\
V_{k+1} &=& \exp\left\{-\beta_k (V_kC^*U_kA)_s\right\}\;V_k\;,
\end{eqnarray*}
which also can be used, {\em e.g.}, for a singular-value decomposition of $A$
by choosing $C$ real diagonal.
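That remark can be tested directly: with $C$ the diagonal matrix of singular values of $A$, the recursion above should drive $\|UAV - C\|$ to zero, yielding an SVD of $A$. The following NumPy/SciPy sketch uses an ad hoc normalized step size and seed; a run may stall at a local minimum where singular values are matched in the wrong order, in which case random restarts help.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

def skew(X):
    # the (.)_s of the text: skew-Hermitian part
    return 0.5 * (X - X.conj().T)

n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = np.diag(np.linalg.svd(A, compute_uv=False))  # target: Sigma of A, so min = 0

U = np.eye(n, dtype=complex)
V = np.eye(n, dtype=complex)
res0 = np.linalg.norm(U @ A @ V - C)
for _ in range(3000):
    # U_{k+1} = exp{-alpha (U A V C^*)_s} U,  V_{k+1} = exp{-beta (V C^* U A)_s} V
    GU = skew(U @ A @ V @ C.conj().T)
    GV = skew(V @ C.conj().T @ U @ A)
    U = expm(-0.2 / (1.0 + np.linalg.norm(GU)) * GU) @ U
    V = expm(-0.2 / (1.0 + np.linalg.norm(GV)) * GV) @ V
res1 = np.linalg.norm(U @ A @ V - C)
print(res0, res1)
```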
Likewise, minimizing $||UAV + XBY - C||^2_2$ by maximizing
$\Re {\rm tr}\, \left\{UAV(C-XBY)^* + XBY C^*\right\}$ translates into the same flows when
substituting $C\mapsto (C-X_kBY_k)$
with analogous recursions for $X_{k+1}$ and $Y_{k+1}$.
Along these lines, it is straightforward to address the general task
\begin{equation}
\min\left\{\|\sum_{j=1}^N U_jA_jV_j - A_0\|: U_1, \dots, U_N \in U(n)
{\quad \text {and}\quad } V_1, \dots, V_N \in U(m)\; \hbox{ unitary}
\right\}
\end{equation}
with rectangular matrices $A_0, \dots, A_N$ by a system of
$2 N$ coupled gradient flows ($j=1,2,\dots, N$)
\begin{eqnarray}\label{eqn:UE-flows}
U_{k+1}^{(j)}
&=& \exp\left\{-\alpha_k^{(j)} (U_k^{(j)} A_j V_k^{(j)} A_{0jk}^*)_s\right\}\;U_k^{(j)}\\
V_{k+1}^{(j)}
&=& \exp\left\{-\beta_k^{(j)} (V_k^{(j)} A_{0jk}^* U_k^{(j)}A_j)_s\right\}\;V_k^{(j)}
\end{eqnarray}
where we use the short-hand
$A_{0jk}:= A_0 - \sum\limits_{\nu=1\atop \nu\neq j}^N U_k^{(\nu)} A_\nu V_k^{(\nu)}$.
\vskip .5in
\subsection{Numerical Examples}
Using the flows derived in section~\ref{sec:flow_equivalence},
in this section, we study
$$\min\left\{\|\sum_{j=1}^N U_jA_jV_j - A_0\|: U_1, \dots, U_N \in U(n)
{\quad \text {and}\quad } V_1, \dots, V_N \in U(m) \hbox{ unitary} \right\}$$
for rectangular matrices $A_0, \dots, A_N$.
\medskip
\noindent
{\bf Example 5}\\
As an example of rectangular $A_j \in {\mathbf C}^{10\times 15}$, consider the
analogous flows. In order to obtain $U_j \in {\mathbf C}^{10\times 10}$ and $V_j \in
{\mathbf C}^{15\times 15}$ for $j=1,2,\dots,N$ choose a set of random unitary pairs
$(U_j^{(r)},V_j^{(r)})\in{\mathbf C}^{10\times 10} \times {\mathbf C}^{15\times 15}$ and
define
$$A_j := [\, {\rm diag}\,(1,3,5,\dots,19)
+ \tfrac{j-1}{10}\ensuremath{{\rm 1 \negthickspace l}{}}_{10}\; |\; \ensuremath{\mathbb O}{}_{10,5}\,] \quad\text{and}\quad%
A_0^{(N)}:= [\, {\rm diag}\,(s_1, ..., s_{10})\; |\; \ensuremath{\mathbb O}{}_{10,5}\,] $$
where $s_1,s_2,\dots,s_{10}$ are now the singular values of
$A'_{0,N}:= \sum_{j=1}^N U_j^{(r)} A_j V_j^{(r)}$
and $\ensuremath{\mathbb O}{}_{10,5}$ is the $10\times 5$ zero-matrix.
\begin{figure}[Ht!]
\begin{center}
\includegraphics[width=0.65\textwidth]{lisch_F3.eps}
\end{center}
\caption{\label{fig:UAV_N}
Coupled flows minimizing $||\sum_{j=1}^N U_jA_jV_j - A_0^{(N)}||^2_2$ with
(a) $N=2$ and (b) $N=10$.
Here the $A_j \in {\mathbf C}^{10\times 15}$ are rectangular so that
$U_j \in {\mathbf C}^{10\times 10}$ and $V_j \in {\mathbf C}^{15\times 15}$.
}
\end{figure}
Fig.~\ref{fig:UAV_N} shows how the coupled gradient flow
minimizes $||\sum_{j=1}^N U_jA_jV_j - A_0^{(N)}||^2_2$
by driving it practically to zero.
Again the combined flow on $N=10$ unitary pairs
(Fig.~\ref{fig:UAV_N}b) converges faster
than the one for $N=2$ unitary pairs given in Fig.~\ref{fig:UAV_N}a.
\subsubsection{Observation Concerning Sums of Unitary Equivalence Orbits}
A non-zero random complex matrix $A_0$ is typically distant from a single
equivalence orbit of another (non-zero) random matrix $U A_1 V$ of
the same dimension,
since generically $A_0$ and $A_1$ clearly do not share the same singular
values. However, a random complex matrix $A_0$ is in fact typically
arbitrarily close to {\em a sum of two or more equivalence orbits of
independent random matrices}. This is shown in Fig.~\ref{fig:UAV_N2}
by a numerical example for $10\times 10$ complex square matrices, where the
inset shows this does not hold for similarity orbits of random square
matrices. Interestingly, the findings hold independently of the dimensions and
explicitly include rectangular as well as square matrices.
\bigskip
\noindent
{\bf Example 6}\\
For a single random complex square matrix $A_0 \in {\mathbf C}^{10\times 10}$ we now
ask how close it typically is to the sum of $N=1,2,3,4,5,10$ equivalence
orbits $\sum_{j=1}^N U_j A_j V_j$, where the $A_j$ are independently chosen
random complex matrices $A_j \in {\mathbf C}^{10\times 10}$. We compare the findings
with those of $N$ independent similarity orbits $\sum_{j=1}^N U_j A_j U^*_j$
and find the results of Fig.~\ref{fig:UAV_N2} underscoring Proposition 3.2.
\begin{figure}[Ht!]
\begin{center}
\includegraphics[width=0.65\textwidth]{lisch_F5.eps}
\end{center}
\caption{\label{fig:UAV_N2}
A random complex square matrix $A_0\in {\mathbf C}^{10\times 10}$ is typically distant
from a single ($N=1$) equivalence orbit
of another random square matrix $U A_1 V$, as shown in the upper trace.
However, it is typically arbitrarily close to a sum of equivalence
orbits of several independent random square matrices as demonstrated
in the lower traces:
$\|\sum_{j=1}^N U_jA_jV_j - A_0\|^2_2 \to 0$ for $N=2,3,4,5,10$.
In contrast, the inset shows this does not hold for $N=1$ through $N=10$
for similarity orbits $\sum_{j=1}^N U_jA_jU^*_j$.
}
\end{figure}
\section{Unitary $t$-Congruence}
In this section, we consider
$$\min \left\{\|\sum_{j=1}^N U_jA_j U_j^t - A_0\|: U_1, \dots, U_N \in U(n)
\hbox{ unitary} \right\}$$
for given matrices $A_0, A_1, \dots, A_N$.
Sometimes, we can focus on special classes of matrices such as
symmetric matrices or skew-symmetric matrices. For symmetric
matrices or
skew-symmetric matrices,
the minimization problem
$$ \min\left\{ \|UAU^t - A_0\|: U \hbox{ unitary} \right\}$$
has an analytic solution; see \cite{MO}.
The problem is wide open even if $N = 2$.
Therefore, a computer algorithm will be most helpful in the theoretical development.
One may also consider whether we can have
$UAU^t + VBV^t = C$
for a symmetric $A$ and a skew-symmetric $B$.
In other words, we want to know whether one
can write $C$ as the sum of symmetric and skew-symmetric
matrices with prescribed singular values.
Of course, the problem for general matrices $A, B$ and $C$ is even
more challenging, and that is what we pursue by the numerical methods developed
in the next subsection.
\subsection{Gradient Flows on Unitary $t$-Congruence Orbits}\label{sec:flow_congruence}
Again, the minimization task
\begin{equation}
\minover{U,V\in U(n)} ||UAU^t + VBV^t - C||_2^2\;,
\end{equation}
translates via
\begin{equation*}
||UAU^t + VBV^t - C||_2^2
= ||A||_2^2 + ||B||_2^2 + ||C||_2^2 - 2 \Re {\rm tr}\,\left\{C^*(UAU^t + VBV^t) - UAU^t\; \Bar VB^* V^*\right\}
\end{equation*}
into maximising the function
\begin{equation}
F(U,V) := \Re f(U,V) := \Re {\rm tr}\,\left\{(UAU^t + VBV^t)\, C^* - UAU^t\; \Bar VB^*V^*\right\}\quad,
\end{equation}
where the differential reads (by virtue of the short-hand $\Tilde C:= C^* - \Bar VB^*V^*$)
\begin{eqnarray*}
D_U f(U)(\Omega U) &=& {\rm tr}\,\left\{((\Omega U) A U^t + UA(\Omega U)^t)(C^* - \Bar VB^*V^*)\right\} \\
&=& {\rm tr}\,\left\{(\Omega U) A U^t \Tilde C\right\} + {\rm tr}\,\left\{ (UA(\Omega U)^t \Tilde C)^t\right\}\\
&=& {\rm tr}\,\left\{( A U^t \Tilde C + A^tU^t\Tilde C^t) (\Omega U)\right\}\quad.
\end{eqnarray*}
From identifying
$
D_U f(U)\cdot (\Omega U) = \braket{\operatorname{grad}_U f(U)}{\Omega U} = {\rm tr}\,\left\{(\operatorname{grad}_U f(U))^* \Omega U\right\}
$
one finds
\begin{equation}
\operatorname{grad}_U f(U) = (U A U^t \Tilde C + U A^tU^t\Tilde C^t)^* U
\end{equation}
so as to obtain for $F(U):= \Re f(U)$
\begin{equation}
\operatorname{grad}_U F(U) = -\big(UAU^t \Tilde C + U A^tU^t \Tilde C^t\big)_s\,U\quad.
\end{equation}
Again, taking the respective Riemannian exponentials
$\exp_U(\operatorname{grad}_U F(U))$ and $\exp_V(\operatorname{grad}_V F(V))$ thus gives
the slightly lengthy formula
\begin{equation}
U_{k+1} = \exp \left\{-\alpha_k \big( U_kAU_k^t (C^* - \Bar V_kB^*V_k^*)
+ U_kA^tU_k^t (C^* - \Bar V_kB^*V_k^*)^t \big)_s \right\}\;U_k
\end{equation}
---and an analogous equation for $V_{k+1}$ by substituting $V$ for $U$ and $B$ for $A$---as
discretized solutions of the coupled gradient system
\begin{equation}
\dot U = \operatorname{grad}_U F(U,V) \quad\text{\rm and}\quad \dot V = \operatorname{grad}_V F(U,V) \;.
\end{equation}
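The coupled recursion above is straightforward to prototype. The sketch below is an illustrative implementation, not the authors' code: the step size, iteration count, and the convention $(M)_s=(M-M^*)/2$ for the skew-Hermitian part are our assumptions.

```python
import numpy as np

def skew_part(M):
    """Skew-Hermitian part (M - M^*)/2, written (.)_s above, so that the
    exponent is skew-Hermitian and its exponential is unitary."""
    return (M - M.conj().T) / 2

def unitary_exp(G):
    """exp(G) for skew-Hermitian G via an eigendecomposition of the
    Hermitian matrix -iG; avoids an extra dependency for expm."""
    w, Q = np.linalg.eigh(-1j * G)
    return (Q * np.exp(1j * w)) @ Q.conj().T

def congruence_flow(A, B, C, steps=500, alpha=0.01, seed=0):
    """Discretized gradient flow for min ||U A U^t + V B V^t - C||_2^2
    over unitary U, V, following the update rule derived in the text."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # random unitary starting points via QR of complex Gaussian matrices
    U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    history = []
    for _ in range(steps):
        history.append(np.linalg.norm(U @ A @ U.T + V @ B @ V.T - C) ** 2)
        Ct = C.conj().T - V.conj() @ B.conj().T @ V.conj().T   # \tilde C for the U-update
        GU = skew_part(U @ A @ U.T @ Ct + U @ A.T @ U.T @ Ct.T)
        Cu = C.conj().T - U.conj() @ A.conj().T @ U.conj().T   # A <-> B, U <-> V swapped
        GV = skew_part(V @ B @ V.T @ Cu + V @ B.T @ V.T @ Cu.T)
        U = unitary_exp(-alpha * GU) @ U
        V = unitary_exp(-alpha * GV) @ V
    return U, V, history
```

With a fixed small step size the objective decreases from a generic starting point; the adaptive step sizes $\alpha_k$ used in the experiments above are a refinement of this sketch.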
\bigskip
Likewise, for higher sums of congruence orbits one finds
\begin{equation}
\min\left\{\| \sum_{j=1}^N U_j A_j U^t_j - A_0 \|:
U_1, \dots, U_N \in U(n)\; \hbox{ unitary} \right\}
\end{equation}
to be solved by the coupled system of flows ($j=1,2,\dots, N$)
\begin{equation}
U_{k+1}^{(j)} = \exp \left\{-\alpha_k^{(j)}
{\big( A_k^{(j)} A_{0jk}^* + ( A_{0jk}^* A_k^{(j)})^t\big)}_s \right\}\;U_k^{(j)}\quad,
\end{equation}
where for short we set $A_k^{(j)} := U_k^{(j)}A_j{U_k^{(j)}}^t$ and
$A_{0jk}:= A_0 - \sum\limits_{\nu=1 \atop \nu\neq j}^N A_k^{(\nu)}$.
\section{Outlook: Non-Compact Groups}
For orbits $S(A)$ of matrices $A$ under the action of non-compact groups,
there are usually no good results for the supremum or infimum of the
quantity
$$\|X_0 - \sum_{j=1}^N X_j\|$$
with $X_j \in S(A_j)$ for $j = 0, 1, \dots, N$, for given
matrices $A_0, \dots, A_N$.
For example, for the invertible congruence orbit of $A \in M_n$
$$S(A) = \left\{ S^*AS: S \in M_n \hbox{ is invertible} \right\},$$
we can let $S = rI$. Then
$$\|S^*A_0S - \sum_{j=1}^N S^*A_jS\|$$
converges to 0 or $\infty$ depending on
$r \rightarrow 0$ or $r \rightarrow \infty$.
Similarly, the same problems occur for
the equivalence orbit of $A \in M_n$
$$S(A) = \left\{SAT: S, T \in M_n \hbox{ are invertible} \right\}.$$
For the similarity orbits, we have the following.
\begin{proposition} Suppose not all the matrices $A_0, \dots, A_N$ are scalar.
Then
$$\sup\|S_0^{-1}A_0S_0 - \sum_{j=1}^N S_j^{-1}A_jS_j\| = \infty.$$
\end{proposition}
\it Proof. \rm
Suppose one of the matrices, say, $A_i$ is non-scalar.
Then there is an invertible $S_i$ such that $S_i^{-1}A_iS_i$ is in lower triangular form
with the $(2,1)$ entry equal to 1, and there are invertible matrices
$S_j$ such that $S_j^{-1}A_jS_j$ is in upper triangular form
for the other $j$. Let $D_r = {\rm diag}\,(r,1,1,\dots,1)$.
Then the sequence
$$(S_0D_r)^{-1}A_0 (S_0D_r) - \sum_{j=1}^N(S_jD_r)^{-1}A_j (S_jD_r)$$
has unbounded $(2,1)$ entry as $r \rightarrow \infty$.
The conclusion follows. \qed
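The scaling mechanism in this proof is easy to see numerically. In the toy sketch below the triangularizing similarities have already been applied (that is, we take $S_j = I$), and the concrete matrices are illustrative choices, not from the text.

```python
import numpy as np

# A1 is non-scalar, lower triangular with (2,1) entry 1; A2 is upper triangular.
# Conjugating by D_r = diag(r,1,...,1) multiplies the (2,1) entry of the A1
# term by r, while conjugated upper triangular terms stay upper triangular.
A1 = np.array([[0., 0., 0.],
               [1., 0., 0.],
               [0., 0., 0.]])
A2 = np.triu(np.ones((3, 3)))
for r in [1., 10., 100.]:
    Dr, Dri = np.diag([r, 1., 1.]), np.diag([1. / r, 1., 1.])
    M = Dri @ A1 @ Dr + Dri @ A2 @ Dr   # stands in for the sum in the proof
    print(r, M[1, 0])                   # the (2,1) entry equals r
```

So the norm of the difference in the proposition is unbounded as $r \to \infty$.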
Determining
$$\inf\|A_0 - \sum_{j=1}^N S_j^{-1}A_jS_j\|$$
is more challenging.
Let us first consider two matrices $A, B \in M_n$.
We have the following.
\begin{proposition} Let $A, B\in M_n$. Then for any unitarily
invariant norm $\|\cdot\|$,
$$\|({\rm tr}\, A-{\rm tr}\, B)I/n\| \le \|S^{-1}AS - T^{-1}BT\|$$
for any invertible $S$ and $T$.
\end{proposition}
\it Proof. \rm
Given two real vectors $x = (x_1, \dots, x_n),
y = (y_1, \dots, y_n)$, we say that
$x$ is weakly majorized by $y$, denoted by $x \prec_w y$ if the
sum of the $k$ largest entries of $x$ is not larger than that of
$y$ for $k = 1, \dots, n$. By the Ky Fan dominance theorem,
if $X = {\rm diag}\,(x_1, \dots, x_n)$ and $Y = {\rm diag}\,(y_1, \dots, y_n)$
are nonnegative matrices such that $(x_1, \dots, x_n) \prec_w (y_1, \dots,
y_n)$, then $\|X\| \le \|Y\|$ for any unitarily invariant norm $\|\cdot\|$.
Now, suppose $S^{-1}AS - T^{-1}BT$
has diagonal entries $d_1, \dots, d_n$ and singular values
$s_1, \dots, s_n$. Then
$$|{\rm tr}\, A - {\rm tr}\, B|=|\sum_{j=1}^n d_j|\le \sum_{j=1}^n |d_j|.$$
Thus,
$$|{\rm tr}\, A - {\rm tr}\, B| (1, \dots, 1)/n \prec_w (|d_1|, \dots, |d_n|)
\prec_w (s_1, \dots, s_n).$$
It follows that
$$\|({\rm tr}\, A - {\rm tr}\, B)I/n\|
\le \|{\rm diag}\,(|d_1|, \dots, |d_n|)\| \le \|{\rm diag}\,(s_1, \dots, s_n)\|
= \|S^{-1}AS - T^{-1}BT\|.$$
\vskip -.3in \qed
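A quick Monte Carlo sanity check of the bound in the Frobenius norm (which is unitarily invariant); the random matrices and sample count here are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
# ||(tr A - tr B) I / n|| in the Frobenius norm is |tr A - tr B| / sqrt(n)
lower = abs(np.trace(A) - np.trace(B)) / np.sqrt(n)
for _ in range(200):
    S = rng.standard_normal((n, n))   # invertible with probability 1
    T = rng.standard_normal((n, n))
    D = np.linalg.solve(S, A @ S) - np.linalg.solve(T, B @ T)
    assert np.linalg.norm(D) >= lower - 1e-9
print("bound verified on 200 random similarities")
```

The check never fails, as it must: the trace of $S^{-1}AS - T^{-1}BT$ is independent of $S$ and $T$.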
Can we always find invertible $S$ and $T$ such that
$$\|S^{-1}AS - T^{-1}BT\| = \|({\rm tr}\, A-{\rm tr}\, B)I/n\|?$$
The answer is no, and we have the following.
\begin{proposition} Let $\|\cdot\|$ be a unitarily invariant norm
on $M_n$. Suppose $A \in M_n$ has eigenvalues
$a_1, \dots, a_n$, and $B = bI$. Then
$$\inf\left\{\|S^{-1}AS - B\|: S \in M_n \hbox{ is invertible} \right\}
= \|{\rm diag}\,(a_1-b, \dots, a_n-b)\|.$$
\end{proposition}
\it Proof. \rm
Suppose $S^{-1}AS - B$ has eigenvalues
$a_1 - b, \dots, a_n - b$, and singular values $s_1, \dots, s_n$.
Then by Weyl's inequalities the product of the $k$ largest entries
of the vector
$(|a_1 - b|, \dots, |a_n - b|)$ is not larger than that of
$(s_1, \dots, s_n)$ for $k = 1, \dots, n$.
It follows that
$$(|a_1 - b|, \dots, |a_n - b|) \prec_w
(s_1, \dots, s_n),$$
and hence
$$\|{\rm diag}\,(|a_1 - b|, \dots, |a_n - b|)\| \le
\|{\rm diag}\,(s_1, \dots, s_n)\| = \|S^{-1}AS-B\|.$$
Note that there is $S$ such that $S^{-1}(A-B)S$ is in upper triangular
Jordan form with diagonal entries
$a_1 - b, \dots, a_n - b$. Let
$D_r = {\rm diag}\,(1, r, \dots, r^{n-1})$ for $r > 0$.
Then $(SD_r)^{-1}(A-B)(SD_r) \rightarrow {\rm diag}\,(a_1-b, \dots, a_n-b)$
and $\|(SD_r)^{-1}(A-B)(SD_r)\| \rightarrow \|{\rm diag}\,(a_1-b, \dots, a_n-b)\|$
as $r \rightarrow 0$. So, we get the conclusion about the infimum.
\qed
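The shrinking-conjugation trick in this proof is easy to verify numerically. In the sketch below the Jordan-form similarity $S$ is taken to be the identity for an already upper triangular example; the numerical entries are illustrative only.

```python
import numpy as np

AmB = np.array([[1., 5., 7.],    # stands for S^{-1}(A - B)S, upper triangular
                [0., 2., 9.],    # with diagonal entries a_1 - b, a_2 - b, a_3 - b
                [0., 0., 3.]])
target = np.diag([1., 2., 3.])
for r in [1e-1, 1e-2, 1e-3]:
    # conjugation by D_r = diag(1, r, r^2) scales entry (i,j) by r^(j-i),
    # so the strictly upper triangular part is damped as r -> 0
    Dr = np.diag([1., r, r**2])
    M = np.diag([1., 1. / r, 1. / r**2]) @ AmB @ Dr
    print(r, np.linalg.norm(M - target))   # -> 0 as r -> 0
```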
From the above result and proof, we see that if
$A$ has an eigenvalue $a$ with eigenspace of dimension $p$
and $B$ has an eigenvalue $b$ with eigenspace of dimension $q$
such that $p+q-n=r > 0$ then $S^{-1}AS-T^{-1}BT$ has an eigenvalue
$a-b$ of multiplicity at least $r$. The question is whether
we can write $A = aI_r \oplus A_1$ and
$B = bI_r \oplus B_1$ and show that
$$\inf \|S_1^{-1}A_1S_1 - T_1^{-1}B_1T_1\| = \|({\rm tr}\, A_1-{\rm tr}\, B_1)I_{n-r}/(n-r)\|.$$
\medskip\noindent
It is interesting to note that the following two quantities may be different.
\medskip
1) $\inf\left\{\|S^{-1}AS - T^{-1}BT\|: S, T \hbox{ are invertible} \right\}$.
\medskip
2) $\inf\left\{\|S^{-1}AS - B\|: S \hbox{ is invertible} \right\}$.
\medskip
For example, suppose $A = {\rm diag}\,(2,-1,-1)$ and
$B=\begin{pmatrix} 0&1&0\\ 0&0&1\\ 0&0&0\end{pmatrix}$.
Then there are invertible $S$ and $T$ such that
$$S^{-1}AS = \begin{pmatrix} 0&1&1\\ 1&0&1\\ 1&1&0\end{pmatrix}
\quad \hbox{ and } \quad
T^{-1}BT = \begin{pmatrix} 0&1&1\\ 0&0&1\\ 0&0&0\end{pmatrix}.$$
So, $C = S^{-1}AS-T^{-1}BT$ is a rank two nilpotent.
Thus for any $\varepsilon >
0$, there is an invertible $R_\varepsilon$ such that
$$R_\varepsilon^{-1}CR_\varepsilon =
\begin{pmatrix} 0&\varepsilon &0 \\ 0&0&\varepsilon\\ 0&0&0\end{pmatrix}.$$
As a result,
$$\|R_\varepsilon^{-1}S^{-1}ASR_\varepsilon
- R_\varepsilon^{-1}T^{-1}BTR_\varepsilon\| \rightarrow 0 \quad
\hbox{ as } \quad \varepsilon \rightarrow 0.$$
So, the quantity in (1) equals zero.
On the other hand,
for every invertible $S$, we have
$$\|(A-SBS^{-1})(Se_1)\|=\|A(Se_1)\|\ge \|Se_1\|,$$
since $Be_1 = 0$ and the smallest singular value of $A$ is 1.
Therefore, $\inf\|A - SBS^{-1} \| \ge 1$.
So, we see that the quantities in (1) and (2) may be different.
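Both halves of this example can be checked numerically. The matrices $P$ and $Q$ below are concrete representatives of the similarity orbits used in the text (chosen by us for illustration).

```python
import numpy as np

A = np.diag([2., -1., -1.])
B = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])   # rank 2 nilpotent
P = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])   # similar to A
Q = np.array([[0., 1., 1.], [0., 0., 1.], [0., 0., 0.]])   # similar to B
C = P - Q
assert np.allclose(np.linalg.matrix_power(C, 3), 0)        # C is nilpotent
assert np.linalg.matrix_rank(C) == 2                       # of rank two
# (1): conjugating C by diag(1, 1/eps, 1/eps^2) drives the norm to 0
for eps in [1e-1, 1e-2, 1e-3]:
    D = np.diag([1., 1. / eps, 1. / eps**2])
    print(eps, np.linalg.norm(np.diag([1., eps, eps**2]) @ C @ D))
# (2): ||A - S B S^{-1}|| >= 1 since B e_1 = 0 and sigma_min(A) = 1
rng = np.random.default_rng(0)
for _ in range(100):
    S = rng.standard_normal((3, 3))
    assert np.linalg.norm(A - S @ B @ np.linalg.inv(S), 2) >= 1 - 1e-9
```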
\medskip\noindent
In connection to the above discussion, it is interesting to
study the following problem.
\medskip\noindent
1. Determine
$$\inf\left\{\|S^{-1}AS-TBT^{-1}\|: S, T \hbox{ are invertible} \right\}$$
and characterize the matrix pairs $(A,B)$
attaining the infimum if they exist.
\medskip\noindent
2. Determine
$$\inf\left\{\|S^{-1}AS-B\|: S \hbox{ is invertible} \right\}$$
and characterize the matrix pairs $(A,B)$
attaining the infimum if they exist.
\section{Conclusions}
We have treated the least-squares approximation problems by elements on the
{\em sum} of various matrix orbits including unitary similarity, equivalence
and congruence.
Special attention has been paid to sums of unitary similarity orbits of a
Hermitian $A$ and a skew-Hermitian $B$, where theoretical
results have been obtained and shown to be consistent with
numerical findings. Further, new results on unitary equivalence
orbits have been obtained, stimulated by numerical experiments
and related to geometric arguments.
A general framework based on the gradient flows on matrix orbits
arising from Lie group actions has been developed to study the proposed
problems.
The gradient flows devised to this end extend the existing toolbox
(see e.g. \cite{Bloch94, Chu04})
by referring to sums of matrix orbits as summarized in Tab.~\ref{tab:flows}.
This general approach can be used to treat many problems in theory
and applications. For instance,
flows on such sums of unitary similarity orbits
can also be envisaged as on unitaries taking a block-diagonal form,
and hence they relate to relative $C$~numerical ranges,
where the group action is restricted to a compact subgroup
$\mathbf K \subseteq SU(n)$ of the full unitary group \cite{SDHG07}.
Finally, first results on matrix orbits under non-compact group actions invite
further research.
\begin{table}[ht!]
\caption{\label{tab:flows}
Summary of Least-Squares Approximations by
Matrix Orbits and Related Gradient Flows}
\begin{tabular}{lll}\\
\hline\\[-3mm]
type\; and\; objective && coupled gradient flows \\[1mm]
\hline\hline\\[-2.5mm]
{\em unitary similarity:}\\
{$\minover{U\in SU(n)} \| \sum\limits_{j=1}^N U_j A_j U^*_j - A_0 \|$}
&&{$U_{k+1}^{(j)} = \exp\left\{-\alpha_k^{(j)} {[ A_k^{(j)}, A_{0jk}^*]}_s\right\}\;U_k^{(j)}$}\\[2mm]
&
&{\hspace{5mm} where \; $A_k^{(j)} := U_k^{(j)}A_j{U_k^{(j)}}^*$ %
and $A_{0jk}:= A_0 - \sum\limits_{\nu=1 \atop \nu\neq j}^N A_k^{(\nu)}$} \\[7mm]
\hline\\[-2.5mm]
{\em unitary equivalence:}\\
{$ \minover{U,V \in SU(n)} \|\sum\limits_{j=1}^N U_jA_jV_j - A_0\|$}
&&{$U_{k+1}^{(j)}=\exp\left\{-\alpha_k^{(j)} (U_k^{(j)} A_j V_k^{(j)} A_{0jk}^*)_s\right\}\;U_k^{(j)}$} \\[2mm]
& &{$V_{k+1}^{(j)} = \exp\left\{-\beta_k^{(j)} (V_k^{(j)} A_{0jk}^* U_k^{(j)}A_j)_s\right\}\;V_k^{(j)}$} \\[2mm]
& &{\hspace{5mm} where \; $A_{0jk}:= A_0 - \sum\limits_{\nu=1\atop \nu\neq j}^N U_k^{(\nu)} A_\nu V_k^{(\nu)}$} \\[7mm]
\hline\\[-2.5mm]
{\em unitary congruence:}\\
{$ \minover{U\in SU(n)} \| \sum\limits_{j=1}^N U_j A_j U^t_j - A_0 \|$}
&&{$U_{k+1}^{(j)} = \exp\left\{-\alpha_k^{(j)}
{\big( A_k^{(j)} A_{0jk}^* + ( A_{0jk}^* A_k^{(j)})^t\big)}_s\right\}\;U_k^{(j)}$}\\[2mm]
& &{\hspace{5mm} where \; $A_k^{(j)} := U_k^{(j)}A_j{U_k^{(j)}}^t$ %
and $A_{0jk}:= A_0 - \sum\limits_{\nu=1 \atop \nu\neq j}^N A_k^{(\nu)}$} \\[7mm]
\hline\\[-2.5mm]
\end{tabular}
\end{table}
\section{Further Research}
In order to avoid the search in our algorithms terminating in local extrema,
one has to choose a sufficiently large
set of random starting unitaries distributed according to the Haar measure.
Actually, one knows there are commutation properties
at the critical points. It would be nice to find a more
efficient method to choose starting points for the search,
and prove theorems ensuring that the absolute minimum will
be reached from one of these starting points using our
algorithms.
Our discussion focused on orbits of matrices under
actions of compact groups. We can consider other orbits
under actions of non-compact groups. Here are some examples
for $S,T \in SL(n,{\mathbf C})$:
(e) the general similarity orbit of a square matrix $A$ is the set
of matrices of the form $SAS^{-1}$,
(f) the equivalence orbit of a rectangular matrix $A$ is the set
of matrices of the form $SAT$,
(g) the $*$-congruence orbit of a complex square matrix $A$ is the set of
matrices of the form $SAS^*$,
(h) the $t$-congruence orbit of a square matrix $A$ is the set of matrices
of the form $SAS^t$.
\noindent
However, the fact that $GL(n,{\mathbf C})$ and $SL(n,{\mathbf C})$ are only {\em locally compact}
(rather than compact) entails there is no finite Haar measure and consequently no
{\em bi-invariant} Riemannian metric
on the tangent spaces, but only left {\em or} right-invariant metrics. Hence
the Hilbert-Schmidt scalar product $\braket{B}{A}={\rm tr}\,\left\{B^* A\right\}$ has to be treated
with care, in particular since we are interested in the complex domain.
Moreover, while in compact Lie groups the exponential map is surjective and
geodesic \cite{Arv03}, in non-compact Lie groups it is generically
neither surjective nor geodesic. It is for these reasons that
devising gradient flows in non-compact Lie groups is the subject of a follow-up study.
% Source: https://arxiv.org/abs/2006.04162
\title{The $q$-voter model on the torus}
\begin{abstract}
In the $q$-voter model, the voter at $x$ changes its opinion at rate $f_x^q$, where $f_x$ is the fraction of neighbors with the opposite opinion. Mean-field calculations suggest that there should be coexistence between opinions if $q<1$ and clustering if $q>1$. This model has been extensively studied by physicists, but we do not know of any rigorous results. In this paper, we use the machinery of voter model perturbations to show that the conjectured behavior holds for $q$ close to 1. More precisely, we show that if $q<1$, then for any $m<\infty$ the process on the three-dimensional torus with $n$ points survives for time $n^m$, and after an initial transient phase has a density that is always close to 1/2. If $q>1$, then the process rapidly reaches fixation on one opinion. It is interesting to note that in the second case the limiting ODE (on its sped up time scale) reaches 0 at time $\log n$ but the stochastic process on the same time scale dies out at time $(1/3)\log n$.
\end{abstract}
\section{Introduction}
In the linear voter model, the state at time $t$ is $\xi_t :\mathbb{Z}^d \to \{0,1\}$, where 0 and 1 are two opinions. The individual at $x$ changes opinion at a rate equal to the fraction $f_x$ of its neighbors with the opposite opinion. For the last decade physicists have studied the $q$-voter model, in which the flip rate at $x$ is $f_x^q$. When $q$ is an integer, the dynamics may be thought of as: select $q$ neighbors of $x$ uniformly, and change the opinion of $x$ if all $q$ neighbors disagree with $x$. However, there is no reason to restrict $q$ to be an integer. Abrams and Strogatz \cite{AbrStr} introduced this system in 2003 as a model of language death, and argued based on data on languages in 42 regions that $q = 1.31 \pm 0.25$. In the physics literature there have been many studies of the system on lattices, complex networks, and even on graphs that co-evolve with the state of individuals. See \cite{CMPS, HCDM, AJ, MSM, MLCPS, VCSM, VazLop} and references therein. According to \cite{MSM}, for finite but large systems, the process with $q<1$ can remain in a dynamically active phase for observation times that grow exponentially with $n$, while for $q>1$ the transition into an absorbing state is `abrupt'.
The difference between $q<1$ and $q>1$ is due to the different types of frequency dependence in the two models. When $q<1$, rare opinions spread more rapidly compared to the voter model, while for $q>1$, they spread more slowly. A more quantitative viewpoint is provided by mean field theory. This analysis is often done by writing an equation that pretends sites are always independent of each other. Here, we will instead consider the system on the complete graph in which each site interacts equally with all the others. In this case, the frequency of 1's, $u$, satisfies
$$
du/dt = -u (1-u)^q + (1-u) u^q = u(1-u) g(u)
$$
where $g(u) = u^{q-1} - (1-u)^{q-1}$. This system has three fixed points: $0$, $1/2$ and $1$.
\begin{itemize}
\item
If $q < 1$, $g(u)$ decreases from $\infty$ to $-\infty$ as $u$ increases from 0 to 1. So the fixed points 0 and 1 are unstable and the interior one is attracting. In this case it is expected that coexistence occurs.
\item
If $q > 1$, $g(u)$ increases from $-1$ to $1$ as $u$ increases from 0 to 1. So the fixed points 0 and 1 are stable and the interior one is unstable. In this case it is expected that clustering occurs. That is, we will see larger and larger regions occupied by one type.
\end{itemize}
\noindent
For more on the heuristics that lead to these conclusions, see the 1994 paper by Durrett and Levin \cite{DL}. In most of the papers in the physics literature, the analysis is done by using the pair-approximation, which is equivalent to supposing that the state of the system is always a Markov chain.
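The phase portrait of the mean-field ODE above is easy to confirm with a forward Euler sketch; the step size and time horizon below are arbitrary choices.

```python
def drift(u, q):
    # du/dt = -u(1-u)^q + (1-u)u^q on the complete graph
    return -u * (1 - u) ** q + (1 - u) * u ** q

def integrate(u0, q, T=200.0, dt=0.01):
    # forward Euler integration of the mean-field equation
    u = u0
    for _ in range(int(T / dt)):
        u += dt * drift(u, q)
    return u

print(integrate(0.1, q=0.5))   # q < 1: drawn to the interior fixed point 1/2
print(integrate(0.4, q=1.5))   # q > 1: absorbed at the stable fixed point 0
```

Starting below 1/2 with $q>1$ the density runs to 0, and starting above 1/2 it runs to 1, matching the stability analysis in the bullet points.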
Recently, Vasconcelos, Levin, and Pinheiro \cite{VLP} have considered a version of the $q$-voter model in which the powers $q_1$ and $q_0$ for flipping to 1 and 0 can be different. They did this to study complex contagions, which have been used to model the spread of idioms and hashtags on Twitter \cite{RMK} and in many other situations; see the book by Centola \cite{HBS}. When $q_1 \neq q_0$, there arise situations in which one opinion dominates the other, see Figure 2a in \cite{VLP}, but the case $q_1=q_0$ seems to capture all of the interesting behavior.
\subsection{Voter model perturbations}
The linear voter model has a rich theory due to its duality with coalescing random walk. This duality exists because the process can be constructed from a graphical representation. See Section \ref{ss:vmd} for details. However, the inherent asymmetry between 1's and 0's in the graphical representation makes it impossible to construct nonlinear voter models where the flip rates depend only on $f_x$. See Section \ref{ss:nlv} for a proof.
To get around this difficulty, we will suppose $q$ is close to 1 and view the system as a voter model perturbation in the sense of Cox, Durrett, and Perkins \cite{CDP}. On $\mathbb{Z}^d$, this theory requires $d\ge 3$ so that the voter model has a one parameter family of stationary distributions $\nu_u$, $0 \le u \le 1$. For this and other elementary facts about the voter model that we use, see Liggett's 1999 book \cite{L99}.
In general, the rate of flipping from $i$ to $j\neq i$ in a voter perturbation has the form
$$
c^\delta_{i,j}(x,\xi) = f_j + \delta^2 h_{i,j}(x,\xi)
$$
where $f_j$ is the fraction of neighbors in state $j$, and $h_{i,j}(x,\xi)$ is the perturbation to the rate of flipping from $i$ to $j$. Usually the perturbation variable is $\epsilon$, but here it will be convenient to let $\epsilon=\delta^2$. To simplify formulas we will assume $h_{i,j}(x,\xi)=0$ when $\xi(x)\neq i$. Here we will consider the special case in which the neighborhood has size $k$ and the flip rate only depends on the number of neighbors $n(x)$ in state $j$:
$$
c^\delta_{i,j}(x,\xi) = f_j + \delta^2 r^k_{n(x)} \qquad\hbox{for $1\le n(x) \le k$}.
$$
The $r^k_i$ do not have to be nonnegative, see (1.7) in \cite{CDP}, but we will suppose $r^k_0=0$ so that $\xi \equiv 0$ and $\xi \equiv 1$ are absorbing states. For simplicity, we will restrict our attention to three dimensions. In that context, we will consider neighborhoods $x +{\cal N}$ with $0 \notin {\cal N}$ and $|{\cal N}| \ge 3$ chosen so that the group generated by ${\cal N}$ is $\mathbb{Z}^3$.
\medskip\noindent
{\bf q-voter model.} The rate at which a site $x$ flips to 0 in the $q$-voter model is $f_x^q$, where $f_x$ is the fraction of neighbors with the opposite opinion. Suppose for the moment that $q<1$. In this case, if we write
$$
f_x^q = f_x + (f_x^q - f_x),
$$
then the term in parentheses is $\ge 0$. Let $q=1 -\delta^2$ and write $u$ instead of $f_x$. Then,
$$
u^q - u = u \left( u^{-\delta^2} - 1 \right) = u \left( \exp(\delta^2 \log(1/u)) - 1 \right) \approx \delta^2 u \log(1/u).
$$
From this we see that if $q<1$, then the perturbation is
\begin{equation}
r^k_i = (i/k) \log(k/i),
\label{rforqv}
\end{equation}
which vanishes when $i=0$ or $i=k$.
If we let $q=1 + \delta^2$ and again write $u$ instead of $f_x$, then
$$
u^q - u = u \left( u^{\delta^2} - 1 \right) = u \left( \exp(\delta^2 \log(u)) - 1 \right) \approx - \delta^2 u \log(1/u).
$$
Hence when $q>1$, the perturbation is
\begin{equation}
r^k_i = -(i/k) \log(k/i).
\label{rforq>1}
\end{equation}
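Both expansions are just the first-order Taylor term of $u^{\mp\delta^2}$ in $\delta^2$; the two-line numerical check below uses an arbitrary grid of $u$ values.

```python
import numpy as np

delta2 = 1e-4                              # delta^2, so q = 1 - delta^2
u = np.linspace(0.05, 0.95, 19)
exact = u ** (1 - delta2) - u              # u^q - u
approx = delta2 * u * np.log(1 / u)        # delta^2 u log(1/u)
rel_err = np.max(np.abs(exact - approx) / np.abs(approx))
print(rel_err)                             # relative error of order delta^2 log(1/u)
```

The $q>1$ case is identical up to the overall sign.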
\subsection{ODE limit} \label{ss:ODElim}
Following the approach of Cox and Durrett \cite{CD}, who used the voter perturbation machinery to study evolutionary games on the torus in dimension $d \ge 3$, we will consider the $q$-voter model in what they called the weak-selection regime. (For results in the strong selection regime see Section \ref{ss:strsel}.) Let $\mathbb{T}_n$ be the three dimensional torus with $n$ points and hence side length $L=n^{1/3}$. Let $\epsilon_n = \delta^2_n$. The first thing to do is to prove convergence of the density of 1's,
$$
U_n(t) = \frac{1}{n} \sum_{x \in \mathbb{T}_n} \xi_{t/\epsilon_n}(x),
$$
to the solution of an ODE. Let $\rho_m^i(u)$ denote the probability that in $\nu_u$ the origin is in state $i$ while exactly $m$ of the neighbors are in state $1-i$.
We write $a_n \ll b_n$ for positive quantities $a_n$ and $b_n$ to indicate $a_n/b_n \to 0$ as $n\to\infty$.
\begin{theorem}\label{detlim}
Suppose $q=1-\epsilon_n$ with $n^{-1} \ll \epsilon_n \ll n^{-2/3}$. If $U_n(0) \to u_0$ then $U_n(t)$ converges uniformly on compact sets to the solution of the ODE
\begin{equation}
\frac{du}{dt} = \sum_{m=1}^{k-1} r^k_m (\rho^0_m(u) - \rho^1_m(u)) \qquad u(0)=u_0
\label{ODElim}
\end{equation}
\end{theorem}
Intuitively, Theorem \ref{detlim} holds due to a separation of time scales. The voter model runs at a fast rate, so when the density is $u$ on the torus, the system has distribution $\approx\nu_u$. The rate of change of the density can then be computed by looking at the expected rate of change when the state is $\nu_u$. Writing $\langle \ \rangle_u$ for expected value with respect to $\nu_u$, the right hand side of the ODE is
\begin{equation}
\phi(u) = \langle h_{0,1}- h_{1,0} \rangle _u = \sum_{m=1}^{k-1} r^k_m (\rho^0_m(u) - \rho^1_m(u)).
\label{ODErhs}
\end{equation}
This result will be proved by constructing the process on a graphical representation and then defining a dual that is a coalescing branching random walk. The voter part of the process leads to a coalescing random walk. When a perturbation event occurs at a point $x$, the dual branches to include all of the points in $x+{\cal N}$. This will be described in detail in Section \ref{ss:vpdual}. The proof of Theorem \ref{detlim} is almost identical to the proof of Theorem 6 in Cox and Durrett \cite{CD} so we will only outline the proof, referring to \cite{CD} for details.
When $\epsilon_n \ll n^{-2/3}$ the particles in the dual have time to wrap around the torus and come to equilibrium in between branching events. It is known that on the torus if we start two random walks from independent randomly chosen locations, then the time to coalesce is of order $n$. Thus the assumption $\epsilon_n \gg n^{-1}$ is needed for the perturbation to have an effect.
Computing the $\rho^i_m(u)$, see Section \ref{sec:cpert}, leads to the following ODE.
\begin{theorem} \label{limitODE}
In three dimensions, when the neighborhood has size $k$, the limiting ODE is
$$
\frac{du}{dt} = \pm c_k u(1-u)(1-2u)f_k(u)
$$
where $f_k(u)$ is a polynomial that is positive on $[0,1]$ with $f_k(0)=f_k(1)=1$.
We have $+$ for $q<1$ and $-$ for $q>1$.
\end{theorem}
When $q<1$, the fixed point at 1/2 is attracting and we have
\begin{theorem}\label{persist}
Suppose $q=1-\epsilon_n$ and $\epsilon_n \sim Cn^{-a}$ for some $a\in(2/3,1)$. There is a $T_0$ that only depends on $u_0$, so that for any $\gamma>0$ and $m<\infty$, if $n$ is large then with high probability
$$
| U_n(t) - 1/2 | \le \gamma \hbox{ for all $t\in[T_0,n^m]$}.
$$
\end{theorem}
\noindent
Here and in what follows ``with high probability'' means with probability $\to 1$ as $n\to\infty$.
To prove Theorem \ref{persist}, we will follow the approach of Huo and Durrett \cite{latent} who proved a similar result for the latent voter model on a random graph generated by the configuration model. Although the random graph has a more complicated geometry than the torus, the proof in that setting is simpler than the one given here, since on the graph random walks mix in time $O(\log n)$ rather than in time $O(n^{2/3})$.
\clearpage
\begin{figure}[tbp]
\centering
\includegraphics[width=3.92in,height=3.92in,keepaspectratio]{q09}
\caption{Cross-section from a simulation of $q=0.9$ on a $100 \times 100 \times 100$ grid with periodic boundary conditions.}
\label{fig:q09}
\end{figure}
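A picture like Figure \ref{fig:q09} can be generated with a few lines of code. The sketch below performs asynchronous single-site updates on a small periodic grid; the grid size and number of sweeps are chosen for speed, not to reproduce the figure.

```python
import numpy as np

def qvoter_sweep(xi, q, rng):
    """One sweep of asynchronous q-voter updates on a periodic 3d grid:
    a uniformly chosen site flips with probability f^q, where f is the
    fraction of its 6 nearest neighbors holding the opposite opinion."""
    L = xi.shape[0]
    for _ in range(xi.size):
        x = tuple(rng.integers(0, L, size=3))
        disagree = 0
        for d in range(3):
            for s in (-1, 1):
                y = list(x)
                y[d] = (y[d] + s) % L
                disagree += int(xi[tuple(y)] != xi[x])
        if rng.random() < (disagree / 6) ** q:
            xi[x] ^= 1

rng = np.random.default_rng(0)
L, q = 10, 0.9
xi = rng.integers(0, 2, size=(L, L, L))   # product measure, density 1/2
for _ in range(100):
    qvoter_sweep(xi, q, rng)
print(xi.mean())   # both opinions still present after 100 sweeps
```

On this tiny grid the run only illustrates the dynamics; the persistence behavior of Theorem \ref{persist} concerns much longer time scales.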
\medskip
{\bf Outline of the proof of Theorem \ref{persist}.}
\begin{itemize}
\item
Section \ref{ss:DarNor} introduces a general result for proving convergence of stochastic processes to limiting ODEs, due to Darling and Norris \cite{DN}, which is the key to the proofs of the persistence results for our model (and for the latent voter model). The main difficulty is to bound the difference between the drift in the density $U_n$ of the particle system and the drift in the ODE. In particular, one must prove that the drift in the density of $U_n$, which is a function of the configuration, is almost a function of the overall density.
\item
In Section \ref{ss:igbr} we take the first step in the proof, which is to show that if $2/3 < b < a$ then we can ignore the perturbation on $[t/\epsilon_n-n^b,t/\epsilon_n]$, i.e., the process will evolve like the voter model. This has the consequence that if there are $n \cdot u$ 1's at time $t/\epsilon_n - n^b$, then at time $t/\epsilon_n$ the process is close to the voter equilibrium $\nu_u$. The argument here is an improvement over the one in Section 3.1 of \cite{latent}. We use Azuma's inequality to get error estimates that are stretched exponentially small, i.e., $\le C \exp(-cn^{\alpha})$ with $c, \alpha>0$ rather than polynomial, i.e., $\le Ct^{-p}$.
\item
In Section \ref{ss:dens} we introduce a result about ``renormalizing'' the voter model, that comes from work of Bramson and Griffeath \cite{BGrenorm} in $d=3$ and Z\"ahle \cite{Zrenorm} in $d \ge 3$. They show that if we consider the number of 1's in the voter model equilibrium with density $\lambda$, $\xi^\lambda$, in a cube $Q(r)$ of side $r$, then
\begin{equation}
\widehat S_r = (\lambda(1-\lambda))^{-1/2} r^{-5/2} \left(\sum_{x \in Q(r)} \xi^\lambda(x) - \lambda\right) \Rightarrow \hbox{Normal}(0,C)
\label{vlamclt}
\end{equation}
We use this to obtain information about a similar normalized sum $T_r$ of the number of ones in a cube of side $r$ on the torus at time $t/\epsilon_n$ when the number of 1's at time $t/\epsilon_n -n^b$ is $\lambda n$. To be specific, we let $\bar S_r$ be the normalized sum of $\xi^\lambda_{\sigma(n)}(x)$ in the process that starts at time 0 from product measure with density $\lambda$ and is run for time $\sigma(n) = n^{0.6}$. We show that $\bar S_r \le T'_r \le \widehat S_r$, where $T'_r$ is a small modification of $T_r$.
\item
In Section \ref{ss:TvsT} we bound the difference between $T'_r$ and $T_r$. This in turn gives us a bound on the largest coalescing random walk cluster in $Q(r)$, see \eqref{maxZbd}, and a bound on the fluctuations of the density in the cubes, which is important for completing the next step.
\item
In Section \ref{ss:diffm} we bound the difference between the drifts in the particle system and the ODE. To do this, we have to show that the empirical finite-dimensional distributions on the torus $\mathbb{T}_n$ are close to the values that come from $\nu_u$. In doing this we rely on the result about the density in cubes proved in Section \ref{ss:dens} to divide space at time $t/\epsilon_n + s_n$ into cubes with $n^{b(3)}$ sites, where $b(3) > b(2)$. Here $s_n = n^{(2+\alpha)b(2)/3}$ with $\alpha$ small, so that the empirical f.d.d.'s in cubes of volume $n^{b(3)}$ that do not touch are almost independent. This leads to errors of size $C\exp( - n^{1-b(3)-2\alpha})$.
\item
In Section \ref{ss:fd} we put the pieces together to prove the result. As in Section 3.5 of \cite{latent} we do this by showing that if the density $U_t$ reaches $|U_t -1/2| = 4\epsilon$ then with very high probability (i.e., for any $k$ with probability $\ge 1-n^{-k}$ for large $n$) it will return to $|U_t -1/2| \le \epsilon$ before we have $|U_t -1/2|> 5\epsilon$. Taking $\delta=5\epsilon$ gives the desired result.
\end{itemize}
In all of our estimates except those in Sections \ref{ss:dens} and \ref{ss:TvsT}, the errors are stretched exponentially small, so we are led to the following conjecture.
\medskip\noindent
{\bf Conjecture.} {\it When $q<1$ the process persists for time $\exp(n^\beta)$ for some $\beta>0$.}
\medskip\noindent
This could be proved with a rather small value of $\beta$ if the errors in \eqref{mombdT} and \eqref{maxZbd} could be improved to be stretched exponentially small.
Readers familiar with long time survival results for the contact process, see e.g., Section 3 in part I of Liggett \cite{L99}, might expect the conjecture to say survival occurs for time $\exp(\gamma n)$ with $\gamma>0$. However, the conjecture above cannot hold for $\beta > 1/3$. If we run time backwards from $t/\epsilon_n$ to $t/\epsilon_n - n^{2/3}$ then the $n$ initial particles in the CRW will have coalesced to $n^{1/3}$ particles. If all of these happen to land on sites in state 0 at time $t/\epsilon_n - n^{2/3}$ the process will go extinct at time $t/\epsilon_n$.
\subsection{Rapid Extinction when $q>1$} \label{ss:rapid}
When $q>1$, the fixed point at 1/2 is unstable while the ones at 0 and 1 are locally attracting.
To get rid of the constant $c_k$ in the ODE limit we consider
$$
U_n(t) = \frac{1}{n} \sum_{x \in \mathbb{T}_n} \xi_{t/(\epsilon_n c_k)}(x)
$$
\begin{theorem}\label{dieout}
Suppose $q=1+\epsilon_n$ and $\epsilon_n \sim Cn^{-a}$ for some $a\in(2/3,1)$. If $U_n(0) \to u_0 < 1/2$ and $\alpha> 1/3$ then
$$
P \left( U_n(\alpha \log n)=0 \right) \to 1 \qquad\hbox{as $n\to\infty$.}
$$
\end{theorem}
\noindent
This is proved in Section 5. Much of the work for the proof of Theorem \ref{dieout} has already been done in the proof of Theorem \ref{persist}. Those results imply that the density in the particle system stays close to the solution of the ODE. To be precise, we can show that, with high probability,
$$
|U_n(t) - u(t)| \le \epsilon u(t) \quad\hbox{until}\quad \tau = \inf\{ t : x_t \le n^{-(1-b(0))} \}
$$
where $2/3 < b(0) < \min\{b,1-\alpha\}$. Since the ODE is $u'(t) = - f(u)$ with $f(u)/u \to 1$ as $u \to 0$, the limiting ODE has $u(\alpha \log n) \approx n^{-\alpha}$. Our proof shows that when the density gets to $\le n^{-b(0)}$ fluctuations in the voter model make the system go extinct in a time that is $\le Cn^b$. See Section \ref{sec:rapext} for details. The keys to the voter extinction result are (i) the observation that the number of 1's in the voter model is a time change of continuous-time symmetric random walk, and (ii) results on the size of the boundary of the voter model in the low density regime due to Cox, Durrett, and Perkins \cite{CDP2}.
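To see the time scale in Theorem \ref{dieout} concretely, here is a small numerical sketch (not part of the proof) that integrates $u'(t) = -u(1-u)(1-2u)$, the limiting ODE with the constant $c_k$ and the factor $f_k(u)$ set to 1, an illustrative assumption:

```python
import math

def drift(u):
    # b(u) for q = 1 + eps_n, with c_k and f_k(u) set to 1 (illustrative assumption)
    return -u * (1 - u) * (1 - 2 * u)

def solve(u0, t_end, dt=1e-3):
    # forward Euler for u' = -f(u), where f(u)/u -> 1 as u -> 0
    u = u0
    for _ in range(int(t_end / dt)):
        u += dt * drift(u)
    return u

n, alpha = 10**6, 0.5
u_final = solve(0.3, alpha * math.log(n))
# near 0 the equation is u' ~ -u, so u(alpha log n) should be of order n^{-alpha}
print(u_final, n ** -alpha)
```

Since $f(u)/u \to 1$ as $u \to 0$, the value at time $\alpha \log n$ is of order $n^{-\alpha}$, which is what the sketch reports.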
\begin{figure}[tbp]
\centering
\includegraphics[width=4in,height=4.02in,keepaspectratio]{q11}
\caption{Simulation of $q=1.1$ on a $100 \times 100 \times 100$ grid.}
\label{fig:q11}
\end{figure}
\subsection{Results for strong selection} \label{ss:strsel}
Let $\xi^\epsilon_t$ be a voter model perturbation on $\mathbb{Z}^d$ with flip rates
$$
c^{\delta_n}_{i,j} = f_j + \delta^2_n h_{i,j}(x,\xi)
$$
where $f_j$ is the fraction of neighbors in state $j$ and the second term is the perturbation. As before we let $\epsilon_n = \delta_n^2$. In this section we will examine the case $\epsilon_n \gg n^{-2/3}$, which we call the strong selection regime.
Intuitively, the next result says that if we rescale space to $\delta_n\mathbb{T}_n$ (recall $\mathbb{T}_n$ is the three dimensional torus) and speed up time by $\delta_n^{-2}$, then the process converges to the solution of a partial differential equation on $\mathbb{R}^3$. The torus turns into $\mathbb{R}^3$ in the limit because $\delta_n \ll n^{-1/3}$ while the torus has side $n^{1/3}$. To make a precise statement, the first thing we have to do is to define the mode of convergence. To simplify the writing we drop the subscript $n$ on $\delta$. Given $r\in(0,1)$, let $a_\delta = \lceil \delta^{r-1} \rceil \delta$, $Q_\delta = [0, a_\delta)^3$, and
$|Q_\delta|$ the number of points of $\delta\mathbb{Z}^3$ in $Q_\delta$. For $x \in a_\delta \mathbb{Z}^3$ and $\xi\in \Omega_\delta$, the space of all functions
from $\delta\mathbb{Z}^3$ to $S$, let
$$
D_i(x,\xi) = |\{ y \in Q_\delta : \xi(x+y) = i \}|/|Q_\delta|.
$$
We endow $\Omega_\delta$ with the $\sigma$-field ${\cal F}_\delta$ generated by the finite-dimensional distributions. Given a sequence of measures
$\lambda_\delta$ on $(\Omega_\delta,{\cal F}_\delta)$ and continuous functions $w_i$, we say that $\lambda_\delta$ has asymptotic densities
$w_i$ if for all $0 < \eta, R < \infty$ and all $i\in S$
$$
\lim_{\delta\to 0} \sup_{x\in a_\delta\mathbb{Z}^3, |x| \le R} \lambda_\delta ( |D_i(x,\xi) - w_i(x)| > \eta ) = 0.
$$
\begin{theorem} \label{hydro}
Suppose $d = 3$.
Let $w_i: \mathbb{R}^d \to [0,1]$ be continuous with $\sum_{i\in S} w_i = 1$. Suppose the initial conditions $\xi^\delta_0$
have laws $\lambda_\delta$ with asymptotic densities $w_i$ and let
$$
u^\delta_i(t,x) = P( \xi^\delta_{t\delta^{-2}}(x) = i)
$$
If $x_\delta \to x$ then $u^\delta_i(t,x_\delta) \to u_i(t,x)$, the solution of the system of partial differential equations:
\begin{equation}
\frac{\partial}{\partial t} u_i(t,x) = \frac{\sigma^2}{2} \Delta u_i(t,x) + \phi_i(u(t,x))
\label{PDElimit}
\end{equation}
with initial condition $u_i(0,x) = w_i(x)$. The reaction term is
\begin{equation}
\phi_i(u) = \sum_{j \neq i} \langle h_{j,i}(0,\xi) - h_{i,j}(0,\xi) \rangle_u
\label{phidef}
\end{equation}
where the brackets are expected value with respect to the voter model stationary distribution $\nu_u$
in which the densities are given by the vector $u$.
\end{theorem}
\noindent
This result is Theorem 2 in \cite{CD}. For more details see that paper.
The intuition is similar to that for the ODE limit in Theorem \ref{detlim}.
On the fast time scale the voter model runs at rate $\delta^{-2}$ versus the perturbation at rate 1, so the distribution of the states of sites near $x$ at time $t$
is always close to the voter equilibrium $\nu_{u(t,x)}$. Thus, we can compute the rate of change of $u_i(t,x)$ by assuming the nearby
sites are distributed according to the voter model equilibrium $\nu_{u(t,x)}$.
Cox and Durrett considered evolutionary games on the torus in $d \ge 3$ with game matrix ${\bf 1} + w G$, where ${\bf 1}$ is a matrix of 1's. Their $w$ corresponds to our $\epsilon_n$. When $w=0$ the system reduces to the voter model. They found convergence to an ODE when $n^{-1} \ll w \ll n^{-2/d}$ and convergence to a PDE when $w \gg n^{-2/d}$. Their results can be used to prove a PDE limit for our system when $\epsilon_n \gg n^{-2/d}$. Since there are only two opinions we only need one variable $u_1$, which corresponds to our $u$. The $\phi$ in \eqref{phidef} is the same as the right hand side of our ODE, which should be clear from \eqref{ODErhs}.
In the case of a $2 \times 2$ game with a stable mixed strategy equilibrium that uses strategy 1 with probability $\rho$ and strategy 2 with probability $1-\rho$, the limiting $\phi(u) = cu(\rho-u)(1-u)$ with $c>0$. Here, as in the case $q<1$, the fixed point $\rho$ is attracting.
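To illustrate why $\rho$ is attracting, the following sketch integrates $u' = \phi(u) = cu(\rho-u)(1-u)$ with the illustrative choices $c=1$ and $\rho=0.4$ (these values are assumptions, not from \cite{CD}):

```python
def phi(u, c=1.0, rho=0.4):
    # reaction term phi(u) = c u (rho - u)(1 - u); c and rho are illustrative
    return c * u * (rho - u) * (1 - u)

def flow(u0, t_end=60.0, dt=1e-2):
    # forward Euler integration of u' = phi(u)
    u = u0
    for _ in range(int(t_end / dt)):
        u += dt * phi(u)
    return u

# from either side of rho = 0.4 the density is driven to the mixed equilibrium
low, high = flow(0.05), flow(0.9)
print(low, high)
```

Both trajectories approach $\rho$, while the pure-strategy fixed points 0 and 1 repel nearby starting densities.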
To translate Theorem 4 in \cite{CD} to our situation, we note that $w=\epsilon_L^2$ and $n=L^d$.
\begin{theorem} \label{expsurv}
Suppose that $\epsilon_n \sim C n^{-2\alpha/3}$, where $0<\alpha<1$, and that we start from a product measure in which
each type has positive density. Let $N_1(t)$ be the number of sites occupied by 1's at time $t$.
There is a $c > 0$ so that for any $\eta > 0$ if $n$ is large and
$\log n \le t \le \exp( c n^{(1-\alpha)})$, then $N_1(t)/n \in (\rho-\eta,\rho+\eta)$ with high probability.
\end{theorem}
\noindent
The intuition behind the answer is that after space is rescaled the volume of the torus is asymptotically $n^{(1-\alpha)}$. Theorem \ref{expsurv} is a lower bound so it does not rule out survival for time $\exp(cn)$. However, for the contact process with fast voting introduced by Durrett, Liggett, and Zhang \cite{DLZ}, Cox and Durrett proved
\begin{theorem} \label{CPdeath}
There is a $C<\infty$ so that extinction in the contact process plus fast voting occurs by time $\exp(Cn^{1-2\alpha/d}\log n) $ in $d \ge 3$.
\end{theorem}
\noindent
Theorem \ref{expsurv} can be generalized to the $q$-voter with $q<1$ since it only relies on the hydrodynamic limit in Theorem \ref{hydro} and a block construction. Theorem \ref{CPdeath} does not extend, because $\xi\equiv 1$ is an absorbing state, and this limits our ability to suddenly kill the process.
\section{Graphical representation, duality} \label{sec:grep}
\subsection{Voter model} \label{ss:vmd}
We begin by describing the graphical representation and duality for the voter model in which the neighbors of $x$ are $x+{\cal N}$ and ${\cal N} = \{ y_1, \ldots y_k\}$. The state of the voter model at time $t$ is $\xi_t : \mathbb{Z}^d \to \{0,1\}$ where $\xi_t(x)$ gives the opinion of the individual at $x$ at time $t$. We write $y \sim x$ to indicate that $y$ is a neighbor of $x$.
In the usual voter model, the rate at which the voter at $x$ changes its opinion from $i$ to $j$ is
$$
c^v_{i,j}(x,\xi) = 1_{(\xi(x)=i)} f_j(x,\xi),
$$
where $f_j(x,\xi) = (1/k) \sum_{m=1}^k 1(\xi(x+y_m)=j)$ is the fraction of neighbors in state $j$.
To study the voter model, it is convenient to construct the process on a {\it graphical representation}, introduced by Harris \cite{H76} and further developed by Griffeath \cite{G78}. For each $x \in \mathbb{Z}^d$ and $y\in x+{\cal N}$ let $T_m^{x,y}$, $m \ge 1$, be the arrival times of a Poisson process with rate $1/k$. At the times $T^{x,y}_m$, $m \ge 1$, the voter at $x$ decides to change its opinion to match the one at $y$. To indicate this, at time $T^{x,y}_m$ we write a $\delta$ at $x$ and draw an arrow from $y$ to $x$. To calculate the state of the voter model on a finite set, we start at the bottom and work our way up. We think of the 1's in the initial configuration as sources of fluid, the $\delta$'s as dams that block the fluid, while the arrows move the fluid in the direction indicated. Arrows from $y$ to $x$ arrive just after the $\delta$. A nice feature of this approach is that it simultaneously constructs the
process for all initial conditions so that if $\xi_0(x) \le \xi'_0(x)$ for all $x$, then for all $t>0$
we have $\xi_t(x) \le \xi'_t(x)$ for all $x$.
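The monotonicity just described is a pathwise property of the graphical representation and can be checked in a toy simulation. The sketch below (a discrete-event caricature on a ring; all details are illustrative) applies the same sequence of voter events to two ordered initial configurations:

```python
import random

def run_voter(events, xi0):
    # apply voter events in time order: at event (x, y) the voter at x adopts y's opinion
    xi = list(xi0)
    for x, y in events:
        xi[x] = xi[y]
    return xi

random.seed(0)
L = 20                       # ring of L sites, neighbors x - 1 and x + 1
events = [(x, (x + random.choice([-1, 1])) % L)
          for x in [random.randrange(L) for _ in range(500)]]

xi0  = [random.randint(0, 1) for _ in range(L)]
xi0p = [max(a, b) for a, b in zip(xi0, [random.randint(0, 1) for _ in range(L)])]  # xi0 <= xi0p

xi_t, xi_tp = run_voter(events, xi0), run_voter(events, xi0p)
# ordering is preserved on every realization of the common graphical representation
assert all(a <= b for a, b in zip(xi_t, xi_tp))
print("monotone coupling preserved")
```

The assertion holds for any realization of the events, not just for this seed, because both copies use the same arrows and $\delta$'s.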
\begin{figure}[ht]
\begin{center}
\begin{picture}(320,220)
\put(30,30){\line(1,0){260}}
\put(30,180){\line(1,0){260}}
\put(40,30){\line(0,1){150}}
\put(80,30){\line(0,1){150}}
\put(120,30){\line(0,1){150}}
\put(160,30){\line(0,1){150}}
\put(200,30){\line(0,1){150}}
\put(240,30){\line(0,1){150}}
\put(280,30){\line(0,1){150}}
\put(37,18){0}
\put(77,18){0}
\put(117,18){0}
\put(157,18){1}
\put(197,18){0}
\put(237,18){1}
\put(277,18){0}
\put(37,185){1}
\put(77,185){1}
\put(117,185){1}
\put(157,185){1}
\put(197,185){1}
\put(237,185){1}
\put(277,185){0}
\put(20,27){0}
\put(20,177){$t$}
\put(120,160){\vector(1,0){40}}
\put(163,155){$\delta$}
\put(200,145){\vector(1,0){40}}
\put(243,140){$\delta$}
\put(120,130){\vector(-1,0){40}}
\put(72,125){$\delta$}
\put(160,110){\vector(1,0){40}}
\put(202,105){$\delta$}
\put(40,100){\vector(1,0){40}}
\put(83,95){$\delta$}
\put(120,75){\vector(-1,0){40}}
\put(72,70){$\delta$}
\put(280,60){\vector(-1,0){40}}
\put(232,55){$\delta$}
\put(160,45){\vector(-1,0){40}}
\put(112,40){$\delta$}
\linethickness{1.0mm}
\put(160,30){\line(0,1){150}}
\put(80,75){\line(0,1){25}}
\put(80,130){\line(0,1){50}}
\put(120,45){\line(0,1){135}}
\put(200,110){\line(0,1){70}}
\put(240,30){\line(0,1){30}}
\put(240,145){\line(0,1){35}}
\end{picture}
\caption{Voter model graphical representation}
\end{center}
\end{figure}
To define the {\it dual process} starting from $x$ at time $t$, we set $\zeta^{x,t}_0 = x$ and work down the graphical representation. A particle stays at its current location until the first time that it encounters a $\delta$. At this point it jumps across the edge in the direction opposite its orientation. A little thought reveals that the path of a single particle in $\zeta^{x,t}_s$, $0 \le s \le t$, is a random walk that at rate 1 jumps to a randomly chosen neighbor. Intuitively, $\zeta^{x,t}_s$ gives the source at time $t-s$ of the opinion at $x$ at time $t$. That is,
$$
\xi_t(x) = \xi_{t-s}(\zeta^{x,t}_s).
$$
The example in Figure 3 should help explain the definitions. Here we work backwards to determine the states of the two sites marked by `?'. The dark lines indicate the locations of the two dual particles. The family of particles $\zeta^{x,t}_s$ are coalescing random walks. That is, if a particle $\zeta^{x,t}_s$ lands on the site occupied by $\zeta^{y,t}_s$, the two particles coalesce to form a single particle, and we know that $\xi_t(x)=\xi_t(y)$.
\begin{figure}[ht]
\begin{center}
\begin{picture}(320,220)
\put(30,30){\line(1,0){260}}
\put(30,180){\line(1,0){260}}
\put(40,30){\line(0,1){150}}
\put(80,30){\line(0,1){150}}
\put(120,30){\line(0,1){150}}
\put(160,30){\line(0,1){150}}
\put(200,30){\line(0,1){150}}
\put(240,30){\line(0,1){150}}
\put(280,30){\line(0,1){150}}
\put(37,18){0}
\put(77,18){0}
\put(117,18){0}
\put(157,18){1}
\put(197,18){0}
\put(237,18){1}
\put(277,18){0}
\put(77,185){?}
\put(197,185){?}
\put(20,27){0}
\put(20,177){$t$}
\put(120,160){\vector(1,0){40}}
\put(163,155){$\delta$}
\put(200,145){\vector(1,0){40}}
\put(243,140){$\delta$}
\put(120,130){\vector(-1,0){40}}
\put(72,125){$\delta$}
\put(160,110){\vector(1,0){40}}
\put(202,105){$\delta$}
\put(40,100){\vector(1,0){40}}
\put(83,95){$\delta$}
\put(120,75){\vector(-1,0){40}}
\put(72,70){$\delta$}
\put(280,60){\vector(-1,0){40}}
\put(232,55){$\delta$}
\put(160,45){\vector(-1,0){40}}
\put(112,40){$\delta$}
\linethickness{1.0mm}
\put(80,180){\line(0,-1){50}}
\put(120,130){\line(0,-1){85}}
\put(160,110){\line(0,-1){80}}
\put(200,180){\line(0,-1){70}}
\end{picture}
\caption{Dual coalescing random walk}
\end{center}
\end{figure}
To illustrate the power of duality, we analyze the asymptotic behavior of the voter model on $\mathbb{Z}^d$, proving a result of Holley and Liggett \cite{HL}.
In dimensions 1 and 2, nearest neighbor random walk is recurrent, so the voter model clusters, i.e.,
$$
P( \xi_t(x) \neq \xi_t(y) ) \le P( \zeta^{x,t}_t \neq \zeta^{y,t}_t ) \to 0.
$$
In $d \ge 3$ random walks are transient so differences in opinion persist as $t \to \infty$.
Let $\xi^u_t$ be the voter model starting from product measure in which 1's have density $u$, i.e., the initial voter opinions are independent and $=1$ with probability $u$. For a finite set $B \subset \mathbb{Z}^d$, let $\zeta^{B,t}_s = \cup_{x\in B} \zeta^{x,t}_s$. The distribution of $\zeta^{B,t}_s$ does not depend on $t$ so we drop the superscript $t$. Duality implies
$$
P( \xi^u_t \equiv 0 \hbox{ on } B ) = P( \xi^u_0(y) = 0 \hbox{ for all } y \in \zeta^{B}_t ) = E\left( (1-u)^{|\zeta^{B}_t |} \right)
$$
As $t \uparrow \infty$, $|\zeta^{B}_t | \downarrow |\zeta^{B}_\infty |$. From this it follows that
\begin{equation}
P( \xi^u_t \equiv 0 \hbox{ on } B ) \to E\left( (1-u)^{|\zeta^{B}_\infty |} \right)
\label{Econv}
\end{equation}
The probabilities on the left-hand side of \eqref{Econv} are enough to determine the distribution of the limit $\xi^u_\infty$. Since the limit exists, it is a stationary distribution that we denote by $\nu_u$.
Before moving on, we note that the duality equation can be written as
\begin{equation}
P( \xi^A_t \cap B \neq \emptyset ) = P( A \cap \zeta^B_t \neq \emptyset )
\label{adual}
\end{equation}
where $\xi^A_t$ is the voter model starting with 1's on $A$ and $\zeta^B_t$ is the coalescing random walk starting with particles on $B$. This holds because the left-hand side is the probability of a path from $A \times \{0\}$ up to $B \times \{t\}$, while the right-hand side is the probability of a path from $B \times \{t\}$ down to $A \times \{0\}$.
There are several types of duality. This one is called {\it additive} because $\xi^{A \cup B}_t = \xi^A_t \cup \xi^B_t$, a property that holds because $\xi^A_t$ is defined to be the set of sites at time $t$ that can be reached from a path starting in $A$.
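The pathwise nature of \eqref{adual} can also be checked in a toy model. The sketch below (a discrete-event caricature on a ring, with events applied in order for the voter model and in reverse order for the coalescing walks; all details are illustrative) verifies that on every realization of the graphical representation the two indicators agree:

```python
import random

def forward(events, L, A):
    # voter model started with 1's on A: at event (x, y), x adopts y's opinion
    xi = [1 if x in A else 0 for x in range(L)]
    for x, y in events:
        xi[x] = xi[y]
    return xi

def dual(events, L, B):
    # coalescing walks started on B, working down the graphical representation:
    # a walker at x jumps against the arrow, to y; the set handles coalescence
    walkers = set(B)
    for x, y in reversed(events):
        if x in walkers:
            walkers.discard(x)
            walkers.add(y)
    return walkers

random.seed(1)
L, A, B = 15, {0, 1, 2}, {7, 8}
for _ in range(200):
    events = [(x, (x + random.choice([-1, 1])) % L)
              for x in [random.randrange(L) for _ in range(100)]]
    xi_t = forward(events, L, A)
    lhs = any(xi_t[b] == 1 for b in B)        # xi^A_t meets B
    rhs = len(dual(events, L, B) & A) > 0     # A meets zeta^B_t
    assert lhs == rhs                         # pathwise additive duality
print("duality verified on 200 realizations")
```

The two indicators agree on each realization because both compute the existence of the same path between $A \times \{0\}$ and $B \times \{t\}$.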
\subsection{Nonlinear voter models} \label{ss:nlv}
Though it is tempting to try to find a duality like the one between the voter model and coalescing random walk to help analyze the $q$-voter model, in this section we will prove
\medskip\noindent
{\bf Claim.} {\it Using the graphical representation described in the previous section we cannot construct a voter model in which the
flip rates depend only on the number of neighbors with the opposite opinion $n_x$ and are nonlinear.}
\begin{proof}
For simplicity, we only prove the result when the neighborhood has size 4. Consulting Griffeath's book we see that the only gadgets that can be used in the graphical representation are combinations of arrows and $\delta$'s. To begin, we consider the set of processes that can be constructed using only gadgets that have a $\delta$ at $x$ and a number of arrows that point to $x$ from its neighbors. We call these objects arrow-$\delta$s. Since the flip rates depend only on the number of neighbors with the opposite opinion, all arrow-$\delta$s with $k$ arrows have the same rate, $a_k$.
\begin{itemize}
\item
When there is a 1 at $x$ the $\delta$ will cause the 1 to flip to a 0. However, the site will only stay a 0 if \emph{all} neighbors connected to $x$ by arrows are in state 0.
\item
When there is a 0 at $x$ then the $\delta$ does nothing, and the site will flip to 1 if there is \emph{at least one} neighbor in state 1 connected to $x$ by an arrow.
\end{itemize}
The number of $k$-arrow gadgets is $\binom{4}{k}$, so the flip rates are as follows
\begin{center}
\begin{tabular}{ccc}
$n_x$ & rate $1\to 0$ & rate $0 \to 1$ \\
0 & 0 & 0 \\
1 & $a_1$ & $a_1 + 3a_2 + 3a_3 + a_4$ \\
2 & $2a_1 + a_2$ & $2a_1 + 5a_2 + 4a_3 + a_4$ \\
3 & $3a_1 + 3a_2 + a_3$ & $3a_1 + 6a_2 + 4a_3 + a_4$ \\
4 & $4a_1 + 6a_2+4a_3+a_4$ & $4a_1 + 6a_2+4a_3+a_4$
\end{tabular}
\end{center}
\noindent
If we add $\delta$'s with no arrows then they will flip 1's even when all their neighbors are 1.
If $a_2$, $a_3$, or $a_4$ is positive, the rate of flipping $1 \to 0$ is strictly smaller than the rate of flipping $0 \to 1$
when $n_x = 1,2,3$. Adding arrows with no $\delta$'s would only further increase the rates of flips $0 \to 1$.
\end{proof}
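The coefficients in the table can be checked by enumerating the gadgets directly. The following sketch (illustrative, for neighborhood size 4) counts, for each $n_x$, the number of $j$-arrow gadgets that produce each flip:

```python
from itertools import combinations
from math import comb

k = 4  # neighborhood size, as in the proof
for n_x in range(k + 1):
    opp = set(range(n_x))        # by symmetry, put the opposite-opinion neighbors first
    coef_10 = [0] * (k + 1)      # coefficient of a_j in the rate 1 -> 0
    coef_01 = [0] * (k + 1)      # coefficient of a_j in the rate 0 -> 1
    for j in range(1, k + 1):
        for S in combinations(range(k), j):
            # a 1 at x stays 0 after the delta iff every arrowed neighbor is opposite (a 0)
            if set(S) <= opp:
                coef_10[j] += 1
            # a 0 at x flips to 1 iff some arrowed neighbor is opposite (a 1)
            if set(S) & opp:
                coef_01[j] += 1
    # closed forms matching the table: C(n_x, j) and C(4, j) - C(4 - n_x, j)
    assert coef_10[1:] == [comb(n_x, j) for j in range(1, k + 1)]
    assert coef_01[1:] == [comb(k, j) - comb(k - n_x, j) for j in range(1, k + 1)]
print("rate table verified for neighborhood size 4")
```

For example, at $n_x = 2$ the enumeration gives the row $2a_1 + a_2$ versus $2a_1 + 5a_2 + 4a_3 + a_4$, as in the table.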
\subsection{Duality for voter model perturbations} \label{ss:vpdual}
In the previous section we have shown that the $q$-voter does not have an additive dual. In this section we will introduce a generalization of the graphical representation used in Section \ref{ss:vmd} that allows us to construct voter model perturbations. This idea goes back to \cite{DN}. See also Section 2 in \cite{CDP}. Calculating the state of the process is not as simple as in the additive case, but it does allow us to compute the state of the process on a finite set $B$ at time $t$ by working backwards from time $t$.
Voter model perturbations have flip rates
\begin{equation}
c^\delta_{i,j} = f_j + \delta^2 h_{i,j}(x,\xi),
\label{frates}
\end{equation}
where $f_j$ is the fraction of neighbors in state $j$.
The perturbation function $h_{i,j}$, $j\ne i$, may be negative (and this happens when $q>1$) but
in order for the analysis in \cite{CDP} to work, there must be a
law $q$ of $(Y_1, \ldots Y_k) \in (\mathbb{Z}^d)^k$ and functions $g_{i,j} \ge 0$, so that for some $\gamma < \infty$, we have
\begin{equation}
h_{i,j}(x,\xi) = - \gamma f_j + E_{Y}[g_{i,j}(\xi(x+Y_1), \ldots \xi(x+Y_k))].
\label{vptech}
\end{equation}
In our situation $Y_1, \ldots Y_k$ are $k$ neighbors in ${\cal N}$ and $g_{i,j}$, which does not depend on $\epsilon$, is the fraction of sites $x+Y_1, \ldots x+Y_k$ in state $j=1-i$ raised to the $q$th power.
Suppose now that we have a voter model perturbation of the form \eqref{frates} which satisfies \eqref{vptech}.
We construct the voter model portion as in Section \ref{ss:vmd}.
We call the arrow-$\delta$s {\bf voter events}. To add the perturbation we let
$$
\|g_{i,j}\| = \sup_{\eta \in \{0,1\}^k} g_{i,j}(\eta_1, \ldots \eta_k)
$$
and introduce Poisson processes
$T^{x,i,j}_m$, $m\ge 1$ with rate $r_{i,j} = \epsilon \|g_{i,j}\|$, where $\epsilon=\delta^2$, and independent random variables $U^{x,i,j}_m$, $m\ge 1$ uniform on $(0,1)$. At the times $t=T^{x,i,j}_m$ with $m \ge 1$ we draw arrows from $x+Y_\ell$ to $x$ for $1\le \ell \le k$.
We call this a {\bf branching event.} If $\xi_{t-}(x)=i$ and
\begin{equation}
\|g_{i,j}\|\, U^{x,i,j}_m < g_{i,j}(\xi_{t-}(x+Y_1), \ldots \xi_{t-}(x+Y_k))
\label{jrule}
\end{equation}
then we set $\xi_t(x)=j$. The uniform random variables slow down the transition rate from the maximum possible rate $r_{i,j}$ to the one appropriate for the current configuration.
To define the dual, we proceed as before. When a particle encounters a $\delta$ associated with a voter event, it jumps to the other end of the arrow. When a particle encounters the head of an arrow associated with a branching event it gives birth to new particles at the other ends of all of the arrows. If either action results in two particles on the same site, they coalesce into one. Let $I^{B,t}_s$ be the set of particles at time $t-s$ when we start with particles on $B$ at time $t$. Durrett and Neuhauser \cite{DurNeu} called $I^{B,t}_s$ the {\bf influence set} because
\begin{lemma}
If we know the values of $\xi_{t-s}$ on $I^{B,t}_s$, then using the graphical representation (including the associated uniform random variables) we can compute the values of $\xi_t$ in $B$ by working our way up the graphical representation starting from time $t-s$ and determining the changes that should be made in the configuration at each jump time.
\end{lemma}
This fact should be clear from the construction. A formal proof can be found in Section 2.6 of \cite{CDP}.
The {\bf computation process}, as it is called in \cite{CDP}, is complicated, but is useful because
up to time $t/\epsilon_n$ there will only be $O(1)$ branching events affecting particles in the dual.
\section{Prolonged persistence} \label{sec:proper}
In this section, we will prove Theorem \ref{persist}. The key is to bound the difference between the density of the particle system and the ODE, using a result of Darling and Norris \cite{DN}. Section \ref{ss:DarNor} describes this result and the work needed to apply it to finish the proof of Theorem \ref{persist}. Sections \ref{ss:igbr}, \ref{ss:dens}, \ref{ss:TvsT}, and \ref{ss:diffm} complete this work and Section \ref{ss:fd} gives the final details.
\subsection{Darling-Norris theorem} \label{ss:DarNor}
To state the result from \cite{DN} we need to introduce some notation. Let $\xi_t$ be a continuous time Markov chain with countable state space $S$ and jump rates $q(\xi,\xi')$. In our case $\xi_t$ will be the state of the $q$-voter model on the torus. We are interested in proving an ODE limit for $X_t =x(\xi_{t/\epsilon_n})$ where
$$
x(\xi_{t/\epsilon_n}) = \frac{1}{n} \sum_{x\in \mathbb{T}_n} \xi_{t/\epsilon_ n}(x).
$$
For each $\xi\in S$ we define the infinitesimal drift
$$
\beta(\xi) = \sum_{\xi'\neq\xi} (x(\xi')-x(\xi)) q(\xi,\xi')
$$
We let $b$ be the drift of the proposed deterministic limit $x_t$. In our case
$$
x_t = x_0 \pm \int_0^t b(x_s) \,ds, \qquad b(x) = c_k x(1-x)(1-2x)f_k(x),
$$
where $f_k(x)$ is a polynomial with $f_k(0)=f_k(1)=1$ that is positive on $[0,1]$ and depends only on the number of neighbors $k$. The sign is $+$ for $q=1-\epsilon_n$ and $-$ for $q=1+\epsilon_n$. The crucial theorem from \cite{DN} is
\begin{theorem} \label{DarNor}
For each fixed $t_0$ and $\eta > 0$,
$$
P \left( \sup_{s \le t_0} |X_s - x_s| > \eta \right) \le 2e^{-\gamma^2/(2At_0)} + P( \Omega_0^c \cup \Omega_1^c \cup \Omega_2^c )
$$
\end{theorem}
To make this statement meaningful we need more definitions. To measure the size of the jumps we let
$\sigma_\theta(y) = e^{\theta|y|} - 1 - \theta|y|$ and let
$$
\phi(\xi,\theta) = \sum_{\xi'\neq \xi} \sigma_\theta( x(\xi')-x(\xi) )q(\xi,\xi').
$$
The good sets $\Omega_i$, $i=0,1,2$ are given by
\begin{align}
&\Omega_0 = \{ |X_0 - x_0| \le \gamma \} \label{om1} \\
&\Omega_1 = \left\{ \int_0^{t} |\beta(\xi_{s/ \epsilon_n}) - b(X_s)| \, ds \le \gamma \right\},\label{om2} \\
&\Omega_2 = \left\{ \int_0^t \phi(\xi_{s/ \epsilon_n}, \theta) \, ds \le \theta^2At/2 \right\}. \label{om3}
\end{align}
The parameters in these events are coupled by the following relationships. If we let $K$ be the Lipschitz constant
of the drift $b$ and $\eta$ be the upper bound on the error in the approximation by the differential equation in Theorem \ref{DarNor},
then
$$
\gamma = \eta e^{-Kt_0}/3 \quad\hbox{and}\quad \theta = \gamma/(At_0), \quad\hbox{where $A>0$}.
$$
It is clear that our $b(x)$ is Lipschitz continuous.
Our assumption that $U_n(0) \to u_0$ implies that $\Omega_0^c = \emptyset$ for large $n$.
To bound $P(\Omega_2^c)$, we will choose an $A > 0$ that works well. We begin with a useful lemma:
\begin{lemma} \label{PoiLD}
If $Z \sim \hbox{Poisson}(\lambda)$, then
$$
P(Z \ge 2\lambda) \le \exp(-\gamma(2)\lambda)
$$
where $\gamma(2)$ is a constant independent of $\lambda$.
\end{lemma}
\begin{proof} The moment generating function of $Z$ is
$$
E\exp(\theta Z) \le \exp(\lambda (e^\theta-1 )).
$$
Taking $\theta = \log 2$, we have $E\exp(Z \log 2) = \exp(\lambda)$, so using Chebyshev's inequality we have
$$
P( Z \ge 2\lambda ) \le \exp(- 2\lambda \log 2 )\, \exp(\lambda) = \exp( -(2\log 2 - 1)\lambda ),
$$
which proves the result with $\gamma(2)=2\log 2 - 1$.
\end{proof}
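As a numerical sanity check (not needed for the proof), one can compare the exact Poisson tail with the bound of Lemma \ref{PoiLD}:

```python
import math

def poisson_tail_ge(lam, m):
    # exact P(Z >= m) for Z ~ Poisson(lam), via the complementary pmf sum
    p = sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(m))
    return 1.0 - p

gamma2 = 2 * math.log(2) - 1            # the constant from the lemma
for lam in [5, 10, 20, 40]:
    tail = poisson_tail_ge(lam, 2 * lam)
    bound = math.exp(-gamma2 * lam)
    assert tail <= bound
print("P(Z >= 2 lambda) <= exp(-(2 log 2 - 1) lambda) checked")
```

The Chernoff bound is of course not tight, but it holds for every $\lambda$, which is all the argument requires.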
The process $X_t$ has jumps of size $1/n$ at total rate $n/\epsilon_n$. As $\theta |y| \to 0$, we have $\sigma_\theta(y) \sim \theta^2y^2/2$, so when $\theta |y|$ is small, $\sigma_\theta(y) \le \theta^2y^2$. Using Lemma \ref{PoiLD}, the probability of more than $2t_0n/\epsilon_n$ jumps during $[0,t_0]$ is $\le \exp(-\gamma(2)t_0 n/\epsilon_n)$. When there are at most this many jumps and $n$ is large, the integral in $\Omega_2$ is
$$
\le \frac{\theta^2}{ n^2} \cdot \frac{2 t_0 n}{\epsilon_n} = \theta^2 t_0 \cdot \frac{2}{n \epsilon_n}.
$$
Thus, for the event $\Omega_2$ to hold, we need $2/(n \epsilon_n) \ll A/2$. Since $\epsilon_n \sim Cn^{-a}$ with $2/3< a < 1$, we have
\begin{lemma} If $t_0$ and $\gamma$ are fixed and $A=n^{-(1-a)/3}$ then $e^{-\gamma^2/(2At_0)} \to 0$ and $P(\Omega_2^c) \to 0$ stretched exponentially
fast as $n\to\infty$.
\end{lemma}
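The arithmetic behind the lemma can be checked numerically. In the sketch below the values of $a$, $t_0$, and $\gamma$ are illustrative assumptions:

```python
import math

# illustrative parameter values (a, t0, gamma are assumptions, not from the paper)
a, t0, gamma = 0.8, 1.0, 0.1
errs = []
for n in [10**6, 10**8, 10**10]:
    eps_n = n ** -a                  # epsilon_n ~ C n^{-a} with C = 1
    A = n ** (-(1 - a) / 3)
    lhs = 2 / (n * eps_n)            # the bound on the integral in Omega_2, = 2 n^{-(1-a)}
    assert lhs < A / 2               # so Omega_2 holds once there are few enough jumps
    errs.append(2 * math.exp(-gamma ** 2 / (2 * A * t0)))
# the leading error term in Theorem DarNor decreases as n grows
assert errs[0] > errs[1] > errs[2]
print(errs)
```

Since $2/(n\epsilon_n) \sim n^{-(1-a)}$ while $A/2 = n^{-(1-a)/3}/2$, the inequality holds for all large $n$, and $e^{-\gamma^2/(2At_0)} = \exp(-\gamma^2 n^{(1-a)/3}/(2t_0))$ decays at a stretched exponential rate (slowly at these moderate values of $n$).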
\subsection{Ignoring branching} \label{ss:igbr}
The remainder of Section \ref{sec:proper} is devoted to bounding $P(\Omega_1^c)$.
To begin to do this, we return to the original time scale. We define $\tilde{\xi}_s$ to be the same as $\xi_s$ at time $s = t/\epsilon_n- n^{b}$, while on the time interval $[t/\epsilon_n - n^b, t/\epsilon_n ]$, $\tilde{\xi}_s$ only has voter events, ignoring the perturbation. The value $b \in (2/3,a)$ is chosen so that lineages in the dual coalescing random walk will have time to wrap around the torus but, as we will now show, the perturbation will not have much effect. Let
$$
\tilde{X}_t= \frac{1}{n} \sum_{x \in \mathbb{T}_n} \tilde{\xi}_{t/\epsilon_n}(x)
$$
be the density of this new process $\tilde{\xi}$.
We will now show that, except on an event of stretched exponentially small probability, ignoring the perturbation changes the values of at most $\eta n$ sites.
\medskip\noindent
{\bf Step 1.} The number of perturbation events $M$ in time $n^b$ is bounded by a Poisson($\lambda$) random variable with $\lambda= Cn^{1+b-a}$.
Lemma \ref{PoiLD} implies that
\begin{equation}
P(M \ge 2 \lambda ) \le \exp(-\gamma(2)\lambda) \le \exp(-C\gamma(2)n^b),
\label{bdpert}
\end{equation}
since $\lambda \ge Cn^b$.
\medskip\noindent
{\bf Step 2.} Let $\eta_t(x) = |\xi_t(x)-\tilde\xi_t(x)|$, so that $\eta_t(x)=1$ means there is a discrepancy between the two processes $\xi_t$ and $\tilde\xi_t$ at position $x$. We want to prove that $\sum_x \eta_{t/\epsilon_n}(x)$ exceeds $\eta n$ only with a stretched exponentially small probability. To do this, note that when an edge $(x,y)$ with $\eta_s(x)=0$ and $\eta_s(y)=1$ is hit by a voter event (that is, there is an arrival in the Poisson process $T^{x,y}$ or $T^{y,x}$), then the 1 is changed to a 0 with probability 1/2 (when the arrival is in $T^{x,y}$) and the 0 is changed to a 1 with probability 1/2 (when the arrival is in $T^{y,x}$). Thus, the change in the number of discrepancies due to voter events is a martingale. The change is always $\le 1$ so if there are $N$ jumps, then by Azuma's inequality
$$
P( |X_N - X_0| \ge z | N = n_0) \le 2 \exp(-z^2/2n_0)
$$
If $N$ is the number of changes due to voter events in the time interval $[t/\epsilon_n - n^b, t/\epsilon_n]$, then $ N \le \hbox{Poisson}(n^{b+1})$. By Lemma \ref{PoiLD},
$$
P(N \ge 2 n^{1+b}) \le \exp(-\gamma(2)n^{1+b}).
$$
Note that if $n_0 \le 2n^{1 + b}$, then $2 \exp(-z^2/2n_0) \le 2 \exp( -z^2/4n^{1+b})$. So, taking $z=\eta n$, we get
\begin{align}
P( |X_N - X_0| \ge \eta n) \le 2\exp(-\eta^2 n^{1-b}/4).
\label{bdpert2}
\end{align}
\subsection{Bounding the density} \label{ss:dens}
The results in the previous section show that on the interval $[t/\epsilon_n - n^b, t/\epsilon_n]$ we can ignore the perturbation and assume that the process evolves like the voter model. To understand the distribution of 1's at time $t/\epsilon_n$ we will use results of Bramson and Griffeath \cite{BGrenorm}, and Z\"ahle \cite{Zrenorm}. The first reference only treats $d=3$. The second covers $d \ge 3$ and is more detailed, so we will follow it.
Let $\zeta^\lambda: \mathbb{Z}^d \to \{0,1\}$ have the distribution $\nu_\lambda$, the equilibrium of a finite range voter model on $\mathbb{Z}^d$ with density $\lambda$. For an explanation of this and the other basic facts about the voter model that we will use, see Liggett's book \cite{L99}. For simplicity we will do calculations for the nearest neighbor case. The results are the same in the finite range case, but are more awkward to write since, for example, the limiting normal has a general covariance matrix, we cannot use the reflection principle, etc. To formulate the limit theorem in \cite{Zrenorm}, we will write the process at a fixed time as a random field
$$
F_\lambda(\phi) = \sum_{i \in \mathbb{Z}^d} [\zeta^\lambda(i) - \lambda] \phi(i),
$$
where $\phi$ is a member of a suitable class of test functions. To rescale space, we let
$$
F_{\lambda,r}(\phi) = F_{\lambda}(\phi_r) \quad\hbox{where}\quad \phi_r(x) = r^{-(d+2)/2} \phi(x/r).
$$
Theorem 1 on pages 1265--1266 of \cite{Zrenorm} shows that in our nearest neighbor case
$$
F_{\lambda,r}(\phi) \Rightarrow \hbox{Normal}(0, a_d \lambda (1-\lambda) B(\phi,\phi)),
$$
where $\Rightarrow$ denotes weak convergence as $r \to \infty$, $\text{Normal}(\mu,\sigma^2)$ is a one-dimensional normal distribution
with mean $\mu$ and variance $\sigma^2$, and $B$ is the bilinear function
$$
B(\phi,\psi) = \int\kern -0.5em \int \frac{\phi(x) \psi(y)}{|x-y|^{d-2}} \, dx \, dy.
$$
Restricting our attention now to $d=3$, Z\"ahle's result implies that
\begin{equation}
\widehat S_r \equiv [\lambda (1-\lambda)]^{-1/2} r^{-5/2} \sum_{x \in [-r/2,r/2]^3} \left[\zeta^\lambda (x) -\lambda \right]
\Rightarrow \hbox{Normal}(0, c_{3,\lambda} )
\label{boxsum}
\end{equation}
Bramson and Griffeath \cite{BGrenorm} prove \eqref{boxsum} by the method of moments, which gives
\begin{equation}
E(\widehat S_r)^{2m} \to c_{3,\lambda}^{m} \mu_m \quad\hbox{where} \quad \mu_m = (2m-1)(2m-3) \cdots 3 \cdot 1.
\label{bsmom}
\end{equation}
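The double factorial in \eqref{bsmom} is just the sequence of even moments of a standard normal, which the following sketch verifies by numerical integration:

```python
import math

def gauss_moment(p, a=-12.0, b=12.0, steps=24000):
    # E Z^p for Z ~ Normal(0,1), by Simpson's rule on a truncated range
    h = (b - a) / steps
    def f(x):
        return x**p * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    s = f(a) + f(b)
    for i in range(1, steps):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

for m in range(1, 5):
    mu_m = math.prod(range(1, 2 * m, 2))     # (2m-1)(2m-3)...3*1
    assert abs(gauss_moment(2 * m) - mu_m) < 1e-6
print("E Z^{2m} = (2m-1)!! verified for m = 1..4")
```

So \eqref{bsmom} says exactly that all rescaled moments converge to those of a normal with variance $c_{3,\lambda}$.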
In our situation, we need a slightly different result. In particular, these results are for the voter model on $\mathbb{Z}^3$, and we need a result for the voter model on the 3-d torus. Let
$$
T_r \equiv [\lambda (1-\lambda)]^{-1/2} r^{-5/2} \sum_{x \in Q(r)} [\xi_{t/\epsilon_n}(x) -\lambda]
$$
where $\lambda$ is the fraction of sites in state 1 at time $t/\epsilon_n-n^b$, and $Q(r)$ is a fixed cube of side $r=n^\beta$, where $\beta< 1/3$. To prove a limit result for $T_r$ we will sandwich it between $\widehat S_r$ and
$$
\bar S_r \equiv [\lambda (1-\lambda)]^{-1/2} r^{-5/2} \sum_{x \in Q(r)} [\bar\zeta^\lambda_{\sigma(n)} (x) -\lambda ],
$$
where $\bar\zeta^\lambda_{\sigma(n)}$ is the voter model on the torus starting from product measure with density $\lambda$ and run for time $\sigma(n)=n^{0.6}$. To couple this with $T_r$ we create $\bar S_r$ by running coalescing random walks starting at time $t/\epsilon_n$ from points in $Q(r)$ backwards in time for $\sigma(n)$, and then use independent coin flips with probability $\lambda$ of heads (1) and $1-\lambda$ of tails (0) to determine the states of the sites.
\medskip\noindent
(i) {\bf With stretched exponentially small probability, no coalescing random walk will move more than $n^{0.33}$ in any coordinate by time $\sigma(n)=n^{0.6}$.}
\medskip\noindent
\begin{proof} We will use a special case of (7.3) on page 553 in Feller volume II \cite{Feller}.
\begin{lemma} \label{tailbound}
Let $w_1, w_2, \ldots, w_k$ be i.i.d.~with $P(w_i=1)=P(w_i=-1)=1/2$. Then if
$W_k = w_1 + \cdots + w_k$, $\epsilon>0$, and $x=o(k)$, we have
$$
P( W_k/\sqrt{k} \ge x) \le \exp(-(1-\epsilon) x^2/2 ).
$$
\end{lemma}
\noindent
Taking $k=n^{0.6}$ and $x=n^{0.03}$, it follows that the probability that some coalescing random walk starting inside the cube $Q(r)$ and run for time $\sigma(n)$ moves by more than $n^{0.33}$ in some coordinate is
$$
\le 2 \cdot 6r^3 \exp( -(1-\epsilon) n^{0.06}/2 ).
$$
Here the 2 comes from using the reflection principle to relate the maximum to the value at time $n^{0.6}$, and 6 is 3 coordinates times 2 signs.
\end{proof}
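As a quick numerical sanity check (not part of the proof), one can compare the empirical tail of $W_k/\sqrt{k}$ with the sub-Gaussian bound of Lemma \ref{tailbound}; the function name and the parameter values below are ours.

```python
import math
import random

def srw_tail_check(k=400, x=1.5, trials=20000, eps=0.2, seed=7):
    """Compare the empirical value of P(W_k/sqrt(k) >= x) for a +/-1
    simple random walk with the bound exp(-(1-eps)*x^2/2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # W_k is a sum of k i.i.d. +/-1 steps
        w = sum(1 if rng.random() < 0.5 else -1 for _ in range(k))
        if w / math.sqrt(k) >= x:
            hits += 1
    empirical = hits / trials
    bound = math.exp(-(1 - eps) * x * x / 2)
    return empirical, bound
```

For these values the empirical tail is near the Gaussian tail $P(Z \ge 1.5) \approx 0.07$, comfortably below the bound $\approx 0.41$.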
\medskip
The result (i) implies that, with very high probability, there is no difference between the coalescing random walks starting from $Q(r)$ with $r = n^\beta$, $\beta < 1/3$, run to time $\sigma(n) = n^{0.6}$ on the torus, and the same walks run on $\mathbb{Z}^3$.
\medskip\noindent
(ii) {\bf There is a $\gamma>0$ so that at all times $t \ge (k+1)n^{2/3}$, the total variation between the distribution of a nearest neighbor random walk on the torus and the uniform distribution is $\le (1-\gamma)^{k}$.}
\begin{proof}
To prove the result, we use a simple coupling. At time $n^{2/3}$ the distribution of each particle has a density that is $\ge \gamma/n$ at each point of the torus, so it can be written in the form $\gamma \cdot \mu_n + (1-\gamma) q_n$, where $\mu_n$ is uniform on the torus and $q_n$ is some probability distribution. Mass that is not yet coupled at time $(k-1)n^{2/3}$ can therefore be coupled to the uniform distribution with probability $\ge \gamma$ by time $kn^{2/3}$, and the desired result follows.
\end{proof}
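The coupling in (ii) is a standard Doeblin argument. The sketch below (our notation; a small lazy walk on a cycle stands in for the torus walk) checks numerically that a uniform lower bound $\gamma/m$ on the $T$-step transition probabilities forces total-variation distance $\le (1-\gamma)^k$ after $kT$ steps.

```python
import numpy as np

def tv_decay_check(m=7, T=10, kmax=5):
    """Lazy nearest-neighbor walk on the cycle Z_m.  If P^T(x, y) >= gamma/m
    for all x, y, then Doeblin's argument gives
    max_x TV(P^{kT}(x, .), uniform) <= (1 - gamma)^k."""
    P = np.zeros((m, m))
    for i in range(m):
        P[i, i] = 0.5
        P[i, (i + 1) % m] += 0.25
        P[i, (i - 1) % m] += 0.25
    PT = np.linalg.matrix_power(P, T)
    gamma = m * PT.min()              # Doeblin constant for the T-step chain
    pi = np.full(m, 1.0 / m)
    Q = np.eye(m)
    ok = gamma > 0
    for k in range(1, kmax + 1):
        Q = Q @ PT                    # distribution after k*T steps
        tv = 0.5 * np.abs(Q - pi).sum(axis=1).max()
        ok = ok and tv <= (1 - gamma) ** k + 1e-12
    return gamma, ok
```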
\medskip\noindent
{\bf Definition of $T'_r$.} We continue the construction of $T_r$: from the end of the construction of $\bar S_r$ at time $\sigma(n)$, we run the coalescing random walk particles on $\mathbb{Z}^3$. To assign values to the lineages at time $n^b$, we extend the configuration on the torus at that time to be periodic on $\mathbb{Z}^3$. It follows from (ii) that with very high probability there is no difference between flipping coins at time $n^{0.6}$ to determine the states of the sites in the sum $\bar S_r$ and continuing to run the coalescing random walks on $\mathbb{Z}^3$ until time $n^b$. Having done this, we no longer perfectly reproduce $T_r$, so we call the result $T'_r$. The good news is that when we run the coalescing random walk on $\mathbb{Z}^3$ starting at $\sigma(n)$, we will have $T'_r \prec \widehat S_r$; that is, the coalescing random walk clusters in $T'_r$ are contained in clusters in $\widehat S_r$.
\medskip\noindent
{\bf To prove the result in \eqref{boxsum}}, Z\"ahle defines a \emph{cluster} to be a set of sites that coalesce to the same limiting particle, lets $Z_{r,k}$, $1 \le k \le K(r)$, be the cluster sizes, and lets $\eta_{r,k}$ be independent random variables that are $=1$ with probability $\lambda$ and $=0$ with probability $1 - \lambda$. As she notes in (3.6) on page 1274,
\begin{equation}
\widehat S_r =_d [\lambda (1-\lambda)]^{-1/2}\, r^{-5/2} \sum_{k=1}^{K(r)} Z_{r,k} \cdot (\eta_{r,k} - \lambda).
\label{clrep}
\end{equation}
If we condition on the $Z_{r,k}$, then we have a sum of independent random variables. If we let $v_r^2 = \sum_k Z_{r,k}^2$, then using Lyapunov's theorem (see the bottom of page 1275) it follows that
$$
(\widehat S_r/v_r \mid {\cal Z}) \Rightarrow \chi,
$$
where ${\cal Z}$ is the $\sigma$-field generated by the $Z_{r,k}$ and $\chi$ is a standard normal. In Lemma 1 on page 1276 in \cite{Zrenorm} she shows that $v_r^2$ converges in probability to a constant, so if we remove the conditioning we get the same limit. Lemma 2 computes the limit of $Ev_r^2$ and \eqref{boxsum} follows.
\medskip\noindent
{\bf The last argument can be applied to $\bar S_r$} to conclude that it converges to a normal distribution. To find the limiting variance we compute
$$
\sum_{x,y\in Q(r)} E( \bar\zeta^\lambda_{\sigma(n)} (x) -\lambda )( \bar\zeta^\lambda_{\sigma(n)} (y) -\lambda )
$$
When the coalescing random walks starting from $x$ and $y$ do not coalesce, the states at $x$ and $y$ are independent; otherwise, they are equal. Thus, if we let $\tau_{x,y}$ be the time the two coalescing random walks hit, then the above sum is
$$
\sum_{x,y\in Q(r)} \lambda(1-\lambda) P( \tau_{x,y} \le n^{0.6} )
$$
Using the local central limit theorem,
$$
P( n^{0.6} \le \tau_{x,y} < \infty ) \approx 2\beta_d \int_{n^{0.6}}^\infty \frac{1}{(2\pi t)^{3/2}}\exp \left( - |x-y|^2/2t \right) \, dt
$$
The right-hand side gives the expected amount of time the two particles spend together: when they hit, they spend an exponential amount of time with rate 2 together, and they hit a geometric number of times with success probability $\beta_d$.
Changing variables $t = |x-y|^2/2s$, $dt = -|x-y|^2/(2s^2)\,ds$, the integral becomes
\begin{align*}
& \int_0^{|x-y|^2/(2n^{0.6})} \left( \frac{s}{\pi|x-y|^2} \right)^{3/2} e^{-s} \, \frac{|x-y|^2}{2s^2} \, ds \\
&= \frac{1}{2\pi^{3/2} |x-y|} \int_0^{|x-y|^2/(2n^{0.6})} s^{-1/2} e^{-s} \, ds \le C n^{-0.3}.
\end{align*}
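The change of variables can be checked numerically: with lower limit $A$, both sides equal $\mathrm{erf}\big(|x-y|/\sqrt{2A}\big)/(2\pi |x-y|)$. The sketch below (our names; $d$ plays the role of $|x-y|$ and $A$ of $n^{0.6}$) compares direct quadrature of the $t$-integral with the $s$-integral and the closed form.

```python
import math
import numpy as np

def heat_tail_both_sides(d=2.0, A=1.0, T=1e8, n=200001):
    """Compare int_A^inf (2*pi*t)^(-3/2) exp(-d^2/(2t)) dt with
    (1/(2*pi^(3/2)*d)) * int_0^{d^2/(2A)} s^(-1/2) e^(-s) ds
    and the closed form erf(d/sqrt(2A)) / (2*pi*d)."""
    # t-integral: trapezoid rule on a log-spaced grid (tail beyond T is tiny)
    t = np.logspace(math.log10(A), math.log10(T), n)
    f = (2 * np.pi * t) ** -1.5 * np.exp(-d * d / (2 * t))
    lhs = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
    # s-integral after the further substitution s = w^2, ds = 2w dw,
    # which removes the s^(-1/2) singularity at 0
    W = math.sqrt(d * d / (2 * A))
    w = np.linspace(0.0, W, n)
    g = 2.0 * np.exp(-w * w)
    s_int = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(w)))
    mid = s_int / (2 * math.pi ** 1.5 * d)
    closed = math.erf(d / math.sqrt(2 * A)) / (2 * math.pi * d)
    return lhs, mid, closed
```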
Consulting Lemma 4 in \cite{Zrenorm} we find
$$
P( \tau_{x,y} < \infty ) \sim c'_3/|x-y|
$$
Using the formula for $c'_3$ it follows that the asymptotic variance for $\bar S_r$ is the same as for $\widehat S_r$.
\medskip\noindent
{\bf Limit theorem for $T'_r$.} Let $X_{r,k} \prec Y'_{r,k} \prec Z_{r,k}$ be the cluster sizes in $\bar S_r$, $T'_r$, and $\widehat S_r$. The limiting variances of the unnormalized sums satisfy
$$
\sum_k EX_{r,k}^2 \le \sum_k E(Y'_{r,k})^2 \le \sum_k EZ_{r,k}^2.
$$
Since the top and bottom sums have the same asymptotics, this gives us the Gaussian limit theorem for $T'_r$. Replacing 2 by $2m$ and recalling that Bramson and Griffeath \cite{BGrenorm} proved their result for $\widehat S_r$ by the method of moments gives the desired results for $T'_r$:
\begin{align}
&T'_r \equiv [\lambda (1-\lambda)]^{-1/2} r^{-5/2} \sum_{k=1}^{K(r)}Y'_{r,k} (\eta_{r,k} - \lambda)
\Rightarrow {\cal N}(0, c_{3,\lambda} )
\label{boxsumT}
\\
& E(T'_r)^{2m} \to c_{3,\lambda}^{2m} (2m-1)(2m-3) \cdots 3 \cdot 1
\label{bdmomT}
\end{align}
The last result implies
\begin{equation}
r^{2m\beta} P( |T'_r| \ge r^\beta ) \le E(T'_r)^{2m} \to c_{3,\lambda}^{2m} E\chi^{2m},
\label{Chebym}
\end{equation}
so if we let $D'_r = [\lambda (1-\lambda)]^{1/2} r^{5/2} T'_r$ (i.e., we remove the scaling), then
\begin{equation}
P\left( |D'_r| \ge [\lambda (1-\lambda)]^{1/2} r^{5/2+\beta} \right) \le C_m r^{-2m\beta}.
\label{mombdT}
\end{equation}
This is the concentration result we desired for $T'_r$. Recall that $T'_r$ was constructed as a slight modification of $T_r$, which is the true rescaled and centered density that we wish to prove results about.
\subsection{Controlling the difference between $T'_r$ and $T_r$} \label{ss:TvsT}
The goal in this section is to generalize \eqref{mombdT} to $T_r$.
\medskip
{\bf Bounding the number of extra coalescences in $T'_r$.}
When we went from the torus to $\mathbb{Z}^3$, we may have eliminated some coalescence events that occur in $T_r$ at times in $[n^{0.6}, n^b]$. For this to happen, the difference of the two particles' positions must have wrapped around the torus, an event we call $G$, and the particles projected back to the torus must have hit, an event we call $H$. To bound this event we note that
$$
P(G \cap H) \le \min\{P(G),P(H)\}.
$$
Let $\alpha = 2(1-\epsilon)/3$. Lemma \ref{tailbound} implies that the probability $G$ happens during $[n^{0.6},n^\alpha]$ is $\le \exp(-n^\eta)$ for some $\eta>0$. On $[n^{\alpha},n^b]$, the probability that a random walk is at a fixed site is $\le 1/n^{1-\epsilon}$. Thus, for a fixed pair of particles,
$$
P(H) \le C n^b /n^{1-\epsilon}.
$$
If $r=n^{b(2)/3}$, then $Q(r)$ contains $n^{b(2)}$ sites, so $n^{b(2)}$ is a trivial upper bound, valid with probability 1, for the number of particles at time $\sigma(n)$. We will now estimate the number of collisions of a fixed particle with all of the others. This number only increases if we ignore coalescence and run the particles as independent, so we will do that. The reason for counting pairwise coalescences is the following observation.
\begin{lemma} \label{mpairs}
If a particle belongs to a cluster of size $2m$ or $2m+1$ with $m \ge 1$ formed by coalescence during $[n^\alpha,n^b]$, then there are at least $m$ disjoint pairs of particles that have coalesced.
\end{lemma}
\begin{proof} Recall that on this time interval we are running the lineages on $\mathbb{Z}^3$. We will prove the result by induction. To be able to disentangle the graph constructed by coalescence we will number the particles. Once two particles hit, the two future trajectories could be assigned to either particle, so we allow ourselves the liberty of exchanging the labels at any collision. If the cluster has size 2 or 3, the claim is trivial. Suppose now that $m \ge 2$. Locate the time $t_0$ at which the first two particles coalesced. Call them $x$ and $y$, and let $t_1$ be the first time after $t_0$ that the coalesced particle collided with another one, which we call $z$. Remove the $Y$-shaped part of the genealogy leading from $x$ and $y$ to the coalescence at time $t_1$. Label the lineage coming out of $t_1$ the same as the one coming in on $z$'s trajectory. We have identified one pair of coalescing particles and reduced the number of sites in the cluster by 2, so the result follows by induction.
\end{proof}
Given Lemma \ref{mpairs}, our next task is to estimate the probability that $m$ disjoint pairs will coalesce. Using the trivial upper bound $n^{b(2)}$ on the number of lineages, the number $N$ of coalescing pairs is stochastically dominated:
$$
N \le \hbox{Binomial}(n^{2 b(2)}, Cn^b/n^{1-\epsilon}).
$$
Note that this bounds the number of coalescing pairs that coalesce in the system, not just those that form one cluster.
The expected number is $Cn^{b + 2b(2) + \epsilon -1}$, where $b$ is larger than $2/3$ and can be assumed to be $\le 0.7$. If $b(2)\le 0.1$, then $-\nu = b + 2b(2) + \epsilon -1 < 0$ when $\epsilon<0.1$. In this case,
$$
P(N=k) \le \binom{ n^{2b(2)} }{k} ( Cn^{b+\epsilon -1} )^k \le \frac{C^kn^{-k\nu}}{k!},
$$
so summing gives
\begin{equation}
P(N \ge k ) \le e^C n^{-k\nu}.
\label{Kbdd}
\end{equation}
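The two elementary steps used here, $\binom{M}{k}p^k \le (Mp)^k/k!$ and the summation over $j \ge k$, can be sanity-checked numerically. The sketch below uses illustrative values of $M$, $p$, $k$ (our choices, not the parameters of the proof).

```python
from math import comb, exp, factorial

def binomial_tail_bound(M=1000, p=0.001, k=3):
    """For N ~ Binomial(M, p), check P(N >= k) <= e^{Mp} (Mp)^k / k!,
    which is the shape of the bound in (Kbdd) with Mp = C n^{-nu}."""
    tail = sum(comb(M, j) * p ** j * (1 - p) ** (M - j) for j in range(k, M + 1))
    bound = exp(M * p) * (M * p) ** k / factorial(k)
    return tail, bound
```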
\medskip\noindent
{\bf Bounding the size of clusters in $\widehat S_r$.}
Formula \eqref{bsmom} tells us that
$$
E(\widehat S_r)^{2m} \to c_{3,\lambda}^{2m} \mu_m.
$$
Using \eqref{clrep} and taking expectation over the $\eta_{r,k}$ with the $Z_{r,k}$ fixed, we have
$[(1-\lambda)\lambda^{2m} + \lambda(1-\lambda)^{2m}]\,[\lambda(1-\lambda)]^{-m}\, r^{-5m}\, E\sum_{k=1}^{K(r)} Z_{r,k}^{2m} \le E(\widehat S_r)^{2m}$.
From this we see that when $r$ is large
$$
r^{11m/2} P( \max_k Z_{r,k} \ge r^{5.5/2} ) \le C_m r^{5m}
$$
so we have
\begin{equation}
P( \max_k Z_{r,k} \ge r^{5.5/2} ) \le C_{m,\lambda} r^{-m/2}.
\label{maxZbd}
\end{equation}
Combining \eqref{Kbdd} and \eqref{maxZbd} we see that if $Y_{r,k}$ are the cluster sizes in $T_r$, then
\begin{equation}
P\left( \max_k Y_{r,k} \ge \frac{m}{2\nu} r^{5.5/2} \right) \le C_{m,\lambda} r^{-m/2}
\label{maxYbd}
\end{equation}
Combining \eqref{Kbdd} with $k=m/2\nu$ and \eqref{maxYbd}, we see that the combined size of the clusters in $T'_r$ but not in $T_r$ is
\begin{equation}
\le \frac{m}{2\nu} r^{5.5/2} \quad\hbox{with probability $1 - C_{m,\lambda} r^{-m/2}$.}
\label{Tdiffbd}
\end{equation}
Using this with \eqref{mombdT} and letting $D_r = [\lambda(1-\lambda)]^{1/2} r^{5/2} T_r$, it follows that
\begin{equation}
P\left( |D_r| \ge [\lambda(1-\lambda)]^{1/2} r^{5/2+\beta} \right) \le C_{m,\lambda} r^{-m\beta/2}.
\label{mombdT2}
\end{equation}
Suppose $r=n^{b(2)/3}$ where $0< b(2)<1$; then
$$
P\left( |D_r| \ge [\lambda(1-\lambda)]^{1/2} n^{5b(2)/6+\beta} \right) \le C_m n^{-m\beta b(2)/6}
$$
Now, partition the torus into cubes of side $n^{b(2)/3}$. Letting $N_{i}$ be the number of 1's in the $i$th cube, we have
$$
P\left( \left| \frac{N_{i}}{n^{b(2)}} - \lambda \right|
\ge [\lambda(1-\lambda)]^{1/2} n^{-b(2)/6+\beta} \right) \le C_m n^{-m\beta b(2)/6}.
$$
For fixed $\beta>0$ and any $k < \infty$, we can pick $m$ large enough that the right-hand side is $\le n^{-(1-b(2))-k}$.
Summing over the $n^{1-b(2)}$ cubes, we have
\begin{equation}
P\left( \hbox{ for some $i$ }\left| \frac{N_{i}}{n^{b(2)}} - \lambda \right| \ge [\lambda(1-\lambda)]^{1/2} n^{-b(2)/6+\beta} \right) \le n^{-k}.
\label{mombd3}
\end{equation}
\subsection{Bounding the difference in the drifts} \label{ss:diffm}
Thus far we have been concerned with the overall density of particles on the torus. However, to successfully bound $P(\Omega^c_1)$ we need to show that if $u$ is the density of ones in the voter model at time $t/\epsilon_n - n^b$, then the empirical finite dimensional distributions on the torus are close to those of the voter model equilibrium $\nu_u$ at time $t/\epsilon_n +s_n$, where
\begin{equation}
s_n = n^{(2+\beta)b(2)/3}.
\label{sndef}
\end{equation}
The reasoning for introducing this extra time $s_n$ is described below. For $x, y_1, \ldots y_k \in \mathbb{Z}^d$ and $v_0, v_1, \ldots v_k \in \{0,1\}$ fixed we let
$$
G_{x,y,v} = \{ \xi(x)=v_0, \xi(x+y_1) = v_1, \ldots \xi(x+y_k) = v_k \}
$$
be a finite dimensional event. For simplicity, we do not display the dependence on the sites $y_j$ and the states $v_j$.
The first step is to partition the torus at time $t/\epsilon_n$ into boxes with side $r=n^{b(2)/3}$. Using \eqref{mombd3}, we can conclude that with high probability the density in each box is close to $u$, the density of 1's at time $t/\epsilon_n - n^b$.
We divide the torus at time $t/\epsilon_n + s_n$ into cubes with side $n^{b(3)/3}$, where $b(3) > b(2)$. The $\beta$ in the time guarantees that if we work backwards from time $t/\epsilon_n + s_n$ to $t/\epsilon_n$, the probability that a random walk particle moves by an amount much larger than $n^{b(3)/3}$, the size of the cubes at time $t/\epsilon_n + s_n$, is stretched exponentially small. See Lemma \ref{tailbound}. As in \cite{DurNeu} and \cite{CDP}, this implies that the conditional distribution of the position, given that the lineage ends in a specific box, is almost uniform, and hence the probability that it lands on a 1 will be close to $u$. A second consequence is that
\begin{lemma} \label{nuu}
With very high probability, the empirical finite dimension distributions at time $t/\epsilon_n + s_n$ will be close to $\nu_u(G_{x,y,v})$.
\end{lemma}
\begin{proof}
To see this, note that we compute the probabilities of finite dimensional sets in the voter model equilibrium $\nu_u$ by starting the CRW with points at $y_0, \ldots y_m$, and running time to $s_n$. The particles that coalesce are a partition of the original set. We then flip a coin with a probability $u$ of heads (state 1)
to determine the states. Here we are only running time to $s_n$ so our partition is finer, but the final particles are roughly independent and uniform on the torus so whether they land on 1 or 0 are roughly independent coin flips. \end{proof}
The last paragraph shows that the probabilities of the f.d.d.'s are close to those of the voter model equilibrium $\nu_u$. This enables us to conclude that the expected value of the drift of our process when the density is $u$ is close to $b(u)$. The next step is to control the fluctuations about the mean. Using the normal tail bounds on random walks in Lemma \ref{tailbound}, it follows that if $B_n$ is the event that some coalescing random walk at time $t/\epsilon_n+s_n$ moves by more than $n^{b(3)/3}$ in time $s_n$, then for any $\gamma>0$ we have for large $n$
\begin{align}
P(B_n) & \le n\exp( - (1-\gamma) n^{2b(3)/3} / 2 n^{(2+\beta)b(2)/3} )
\nonumber \\
& = n \exp\left( -\frac{1-\gamma}{2} n^{ [2b(3)-(2+\beta) b(2)]/3} \right)
\label{bdmove}
\end{align}
\begin{figure}[ht]
\begin{center}
\begin{picture}(300,230)
\put(40,40){\line(1,0){220}}
\put(40,190){\line(1,0){220}}
\put(40,40){\line(0,1){150}}
\put(200,40){\line(0,1){150}}
\put(260,40){\line(0,1){150}}
\put(120,115){cube}
\put(120,105){sizes}
\put(170,110){$n^{b(2)}$}
\put(270,110){$n^{b(3)}$}
\put(20,20){$t/\epsilon_n - n^b$}
\put(120,20){time}
\put(195,20){$t/\epsilon_n $}
\put(240,20){$t/\epsilon_n + s_n$}
\put(195,50){\line(1,0){10}}
\put(195,60){\line(1,0){10}}
\put(195,70){\line(1,0){10}}
\put(195,80){\line(1,0){10}}
\put(195,90){\line(1,0){10}}
\put(195,100){\line(1,0){10}}
\put(195,110){\line(1,0){10}}
\put(195,120){\line(1,0){10}}
\put(195,130){\line(1,0){10}}
\put(195,140){\line(1,0){10}}
\put(195,150){\line(1,0){10}}
\put(195,160){\line(1,0){10}}
\put(195,170){\line(1,0){10}}
\put(195,180){\line(1,0){10}}
\put(195,190){\line(1,0){10}}
\put(255,70){\line(1,0){10}}
\put(255,100){\line(1,0){10}}
\put(255,130){\line(1,0){10}}
\put(255,160){\line(1,0){10}}
\put(257,120){$\bullet$}
\put(180,200){density}
\put(245,200){f.d.d.}
\thicklines
\linethickness{1mm}
\put(200,105){\line(0,1){40}}
\end{picture}
\caption{Picture summarizing the proof. Here $s_n = n^{(2+\beta)b(2)/3}$.
The words at the top indicate the quantity that is ``good'' at each time, i.e., close to its average value on the cubes. The dark line at time $t/\epsilon_n$
shows the interval in which we will with high probability find the lineage of the black dot when it is worked backwards in time.}
\end{center}
\end{figure}
For the last inequality to be useful, we need to choose $\beta$ so that $2b(3) - (2 + \beta) b(2) > 0$. The estimate in \eqref{bdmove} implies that, on $B_n^c$, the states of sites in cubes of the decomposition at time $t/\epsilon_n+ s_n$ that do not touch are independent.
We can divide our collection of cubes into 27 subcollections ${\cal C}_i$ of size $n^{1-b(3)}/27$ so that no two cubes in the subcollection touch. For $1\le i \le 27$, let $N_i$ be the number of times $G_{x,y,v}$ occurs in the union of the cubes in ${\cal C}_i$, let $N_{i,j}$ be the number of times $G_{x,y,v}$ occurs for $x$ in the $j$th cube in ${\cal C}_i$. If $x$ is close to the edge of the cube then some of the $x+y_i$ may be outside. However, the $y_i$ are fixed, so for large $n$ they will at worst be in an adjacent cube.
For fixed $i$, the $N_{i,j}$ are independent on the event $B_n^c$, and $0 \le N_{i,j}/n^{b(3)} \le 1$. Let $\rho_{i,j} = EN_{i,j}/n^{b(3)}$. Let
$$
X_{i,j} = \frac{N_{i,j}}{n^{b(3)}} - \rho_{i,j} \in [-\rho_{i,j},1-\rho_{i,j}].
$$
Finally, let $\psi_{i,j}(\theta) = E\exp(\theta X_{i,j})$, let $Y_i = \sum_j X_{i,j}$, and let $M=n^{1-b(3)}/27$ be the number of cubes in each collection ${\cal C}_i$. If $\theta>0$, then, assuming $B_n^c$, we have
$$
e^{\theta M\eta} P( Y_i \ge M\eta ) \le \prod_{j} \psi_{i,j}(\theta),
$$
using the independence of the $N_{i, j}$ across $j$. So, we have
\begin{align}
P( Y_i \ge M\eta) &\le e^{-\theta M\eta} \prod_{j} \psi_{i,j}(\theta)
\nonumber\\
&= \exp\left( M \left[ -\theta\eta + M^{-1}\sum_j \log \psi_{i,j}(\theta) \right] \right)
\label{ldfdd0}
\end{align}
Since we do not know much about $\psi_{i,j}(\theta)$, we will let $\eta_n = n^{-\alpha}$, and later choose $\theta_n$ so that $\lim_{n \to \infty} \theta_n = 0$. Expanding $\log\psi_{i,j}$ around 0:
\begin{align*}
\frac{d}{d\theta} \log \psi_{i,j}(\theta) &= \frac{\psi_{i,j}'(\theta)}{\psi_{i,j}(\theta)}, \\
\frac{d^2}{d\theta^2} \log \psi_{i,j}(\theta) &= \frac{\psi_{i,j}''(\theta)}{\psi_{i,j}(\theta)} - \frac{(\psi_{i,j}'(\theta))^2}{\psi_{i,j}^2(\theta)}.
\end{align*}
When $\theta = 0$, we have $\psi_{i,j}(0)=1$ by definition, and also
\begin{align*}
\frac{d}{d\theta} \log \psi_{i,j}(0) &= EX_{i,j} = 0, \\
\frac{d^2}{d\theta^2} \log \psi_{i,j}(0) &= EX_{i,j}^2.
\end{align*}
So, if $\theta_{i,n} \to 0$, then we have the approximation
$$
\log \psi_{i,j}(\theta_{i,n}) \sim \frac{\theta_{i,n}^2}{2} EX^2_{i,j}.
$$
Since $X_{i,j} \in [-\rho_{i,j},1-\rho_{i,j}]$ and $EX_{i,j} = 0$,
$$
EX_{i,j}^2 \le \rho_{i,j}(1-\rho_{i,j})
$$
To optimize the bound in \eqref{ldfdd0}, we substitute the approximation above for $\log \psi_{i,j}$, differentiate the term in square brackets with respect to $\theta$, and set the derivative equal to $0$ to get
\begin{equation}
0=-\eta _n+ \theta_{i,n} M^{-1} \sum_j \rho_{i,j}(1-\rho_{i,j}),
\label{chtheta}
\end{equation}
which says we want to take $\theta_{i,n} = \eta_n/\tau_i$,
where $\tau_{i} = M^{-1} \sum_j \rho_{i,j}(1-\rho_{i,j})$. This gives the following large deviations bound:
\begin{align*}
P( Y_i \ge M\eta_n) &\le \exp\left( M \left[ -\frac{\eta_n^2}{\tau_i} + \frac{\eta_n^2}{2\tau_i^2} \tau_i \right] \right) \\
& = \exp\left( - \frac{M \eta_n^2}{2\tau_i} \right) \le \exp( - M \eta_n^2 ),
\end{align*}
since $2\tau_i \le 1$.
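The optimization just performed is elementary calculus; a small numerical sketch (our names and parameter values) confirms that $\theta = \eta/\tau$ minimizes $-\theta\eta + \theta^2\tau/2$ with minimum value $-\eta^2/(2\tau)$, and that $-\eta^2/(2\tau) \le -\eta^2$ when $2\tau \le 1$.

```python
def chernoff_exponent(eta=0.05, tau=0.2, grid=20001):
    """Grid-minimize g(theta) = -theta*eta + (theta^2/2)*tau over
    theta in [0, 2*eta/tau] and return the minimizer and minimum value;
    the closed form is theta = eta/tau, value -eta^2/(2*tau)."""
    thetas = [i * (2 * eta / tau) / (grid - 1) for i in range(grid)]
    val, theta = min((-th * eta + 0.5 * th * th * tau, th) for th in thetas)
    return theta, val
```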
The same reasoning can be used to get a bound on the other deviation. Since we have expanded the moment generating function around
0 the bound is the same, giving the final result
$$
P( |Y_i| \ge M\eta_n) \le \exp( - M \eta_n^2 )
$$
Define $Y = \sum_{i = 1}^{27} Y_i$, and then use the triangle inequality to get
$$
P( |Y| \ge 27 M\eta_n) \le 27\exp( - M \eta_n^2 )
$$
The last task is to relate this to the difference of the drifts. To do this, we note that
$$
Y = n^{-b(3)} \sum_{i,j} N_{i,j} - \sum_{i,j} \rho_{i,j}
$$
so we have
$$
\frac{Y}{n^{1-b(3)}} = n^{-1} \sum_{x} 1(G_{x,y,v}) - \frac{1}{n^{1-b(3)}} \sum_{i,j} \rho_{i,j}
$$
Let $p^n_{x,y,v}$ be the probability of $G_{x,y,v}$ when we work backwards in the coalescing random walk starting from $x, x+y_1, \ldots, x+y_k$. Then we have
$$
\frac{1}{n^{1-b(3)}} \sum_{i,j} \rho_{i,j} = \frac{1}{n} \sum_x p^n_{x,y,v}
$$
In the three neighbor case we only have to consider $y_1=e_1$, $y_2=e_2$, and $y_3=e_3$. When there are more neighbors, we have to consider a number of other possibilities; see the calculations in Section \ref{sec:cpert}. Let $r(v) = r(v_0,v_1,v_2,v_3)$ be the jump rate of vertex $x$ when the states are $v_0, \ldots, v_3$. Multiplying by $r(v)$ and summing over the relevant values of $y,v$, we have
$$
n^{-1} \sum_{x,y,v} 1(G_{x,y,v}) r(v) = \beta(\xi_{t/\epsilon_n + s_n})
$$
so we have
\begin{equation}
P\left( \left| \beta(\xi_{t/\epsilon_n + s_n}) - \frac{1}{n} \sum_{x,y,v} p^n_{x,y,v} r(v) \right| \ge 16 n^{-\alpha} \right)
\le 27 \exp\left( - n^{1-b(3)-2\alpha}/27 \right)
\label{betatob}
\end{equation}
The choice of $s_n$ guarantees that as we work backwards in time the particles in the CRW move by an amount $\gg n^{b(2)/3}$, the size of the boxes at time $t/\epsilon_n$. The bound in \eqref{mombd3} implies that each particle in the CRW lands on a 1 with probability close to $u$. It follows that
$$
\left| \frac{1}{n} \sum_{x,y,v} p^n_{x,y,v}\, r(v) - b(u) \right| \le \eta/2
$$
with very high probability. The bounds derived above only work for fixed $t$. However, it is easy to extend them so that they hold uniformly on $[0,t_0]$, and hence they are valid for the integral. To do this, we subdivide the interval into subintervals of length $1/n^{1/2}\epsilon_n$. Within each subinterval, the probability that there are more than $2n^{1/2}$ flips is $\le \exp(-c\sqrt{n})$. If we add this to the previous error probability and multiply by the number of subintervals, we still have a result that holds with very high probability.
\subsection{Final details} \label{ss:fd}
To get long time survival, we will iterate. Let
$$
T_0 = \inf\{ t: |x_t - 1/2| < \eta \}
$$
and note that $x_t$ is the solution of the ODE so this is not random.
Theorem \ref{DarNor} implies that $|X(T_0)-1/2| \le 2\eta$ with very high probability. Let
$$
T_1 = \inf\{ t > T_0 : |X_t - 1/2| \ge 4\eta \}
$$
and note that on $[T_0,T_1]$ we have $|X_t - 1/2| \le 4\eta$.
There is a constant $t_\eta$ so that if $x(0)=1/2 + 4\eta$ or $x(0)=1/2 - 4\eta$ then
$|x(t_\eta)-1/2| \le \eta$. Let $S_1=T_1+t_\eta$. Since $T_1$ is random, $S_1$ is a random time. However, by the Markov property, we can shift time to apply Theorem \ref{DarNor} again. That is, consider $\tilde{X}_t := X_{t + T_1}$. Then since $| \tilde{X}_0 - 1/2 | = 4 \eta$, Theorem \ref{DarNor} implies that with high probability $|\tilde{X}_{t_\eta} - 1/2| = |X(S_1)-1/2| \le 2\eta$ and
$|X_t -1/2| \le 5\eta$ on $[T_1,S_1]$. For $m \ge 2$, let
$$
T_m = \inf\{ t > S_{m-1} : |X_t - 1/2| \ge 4\eta \}\quad\hbox{and}\quad S_m=T_m+t_\eta.
$$
With high probability, we can iterate the construction $n^{k}$ times before it fails. Since each cycle takes at least a fixed positive amount of time, taking $\eta=\gamma/5$ completes the proof of Theorem \ref{persist}.
\section{Rapid extinction for $q>1$} \label{sec:rapext}
In this section we will prove Theorem \ref{dieout}. There are two steps to the proof. First, we use the results in Section 4 to show that the fraction of 1's in the random process is close to the solution of the ODE until time
\begin{equation}
\tau = \min\{ t: x_t < n^{-(1-b(0))} \},
\label{taudef}
\end{equation}
where $b(0)$ will be defined in the proof of Lemma \ref{lem:dyingFirst}. The second step is to prove that when we start with $\le n^{b(0)}$ ones, then fluctuations in the voter model will cause it to hit $0$ in time $\le C n^{b(0)}$. This time is $<n^b$ for large $n$, so by results in Section \ref{ss:igbr}, it is legitimate to assume that the process acts like the voter model. The proof for the second step is based on a Green's function calculation and estimates for the rate of change of the number of ones in the voter model.
\subsection{First step}
\begin{lemma} \label{lem:dyingFirst}
Suppose $X_0 < 1/2$ and let $\tau$ be defined in \eqref{taudef}. Then, for any $\eta > 0$, as $n \to \infty$,
\[
\mathbb{P} \left( |X_\tau - n^{-(1 - b(0))} | < \eta n^{-(1 - b(0))} \right) \to 1.
\]
\end{lemma}
\begin{proof}
We use \eqref{mombd3} from Section \ref{ss:TvsT}. If $X_0=u$ and we divide the torus at time $t/\epsilon_n$ into boxes of side $r=n^{b(2)/3}$, then taking $m$ large in \eqref{mombd3} gives
\begin{equation}
P\left( \hbox{ for some $i$ }\left| \frac{N_{i}}{n^{b(2)}} - u \right| \ge [u(1-u)]^{1/2} n^{-b(2)/6+\beta} \right) \le n^{-k},
\end{equation}
for any $\beta > 0$ and $k < \infty$. Since $u^{1/2} > (u(1 - u))^{1/2}$, we can change this to
\begin{equation}
P\left( \hbox{ for some $i$ }\left| \frac{N_{i}}{n^{b(2)}} - u \right| \ge u^{1/2} n^{-b(2)/6+\beta} \right) \le n^{-k}.
\label{mombdv}
\end{equation}
For this estimate to be useful, we need $u \gg u^{1/2} n^{-b(2)/6+\beta}$ which is equivalent to $u \gg n^{-b(2)/3 + 2\beta}$. If $b(2)$ is close to 1 and $\beta$ is small, we can define $b(0)$ by
$$
1 - b(0) = b(2)/3 - 2\beta,
$$
so that $b(0) < \min\{b,1-\alpha\}$, where $\alpha>1/3$ is the quantity from Theorem \ref{dieout}. Combining these estimates and using results from the previous section, we have that if $x_0 < 1/2$ and $\eta>0$, then as $n\to\infty$,
$$
P( |X_t - x_t| \le \eta x_t \hbox{ for all $t\le \tau$}) \to 1 .
$$
Lemma \ref{lem:dyingFirst} follows. \end{proof}
This result shows that the number of 1's gets driven to $\le (1+\epsilon) n^{b(0)}$ at the deterministic time $\tau$. To complete the process of extinction we will rely on fluctuations
in the voter model.
\subsection{Green's function calculation} \label{ss:greenf}
To motivate the calculation in the next lemma, we note that the number of 1's in the voter model is a time change of a simple random walk.
\begin{lemma}
Let $S_t$ be continuous-time simple random walk on $\{0, \dots, n \}$ with jump-rate $r(j)$ at position $j$. Let $0 < x < z \le n$ be integers, and $T_{0, z}$ the first time that $S_t$ hits $0$ or $z$. Then,
\begin{equation} \label{gf}
E_x T_{0, z} = \sum_{y = 1}^{x} \frac{2y}{r(y)} + \sum_{y = x+1}^{z} \frac{2x}{r(y)}
- \sum_{y = 1}^{z} \frac{2xy}{z r(y)}.
\end{equation}
\end{lemma}
\noindent
Since $P_x(T_z< T_0) = x/z$, this is enough to bound the extinction time if $x/z\to 0$.
\begin{proof}
First consider the embedded discrete-time chain of $S_t$. For $0 \le y \le z$, let $N_x(y)$ be the number of times the random walk visits $y$ before hitting $0$ or $z$, starting from position $x$. Consider the Green's function
\[
G_0(x, y) = \mathbb{E}[N_x(y)].
\]
Fix $y$ and write $g(x) = G_0(x, y)$. Then we have that $g$ satisfies
\[
\begin{cases}
g(0) = 0 \\
g(x) = \frac{1}{2} \left( g(x + 1) + g(x-1) \right), & x \neq 0, y, z \\
g(y) = 1 + \frac{1}{2} \left( g(y + 1) + g(y-1) \right) \\
g(z) = 0
\end{cases}.
\]
From this it is clear that $g$ should be linear and increasing on $[0, y]$ and linear and decreasing on $[y, z]$. That is,
\[
\begin{cases}
g(x) = c_1 x & 0 \le x \le y \\
g(x) = c_2(z - x) & y \le x \le z.
\end{cases}.
\]
To satisfy the conditions for $g(x)$ and $g(y)$, the constants must be
\begin{align*}
c_1 = \frac{2(z - y)}{z}, \,\,\,\,\,
c_2 = \frac{2y}{z}.
\end{align*}
The walk will spend an average of $1/r(y)$ units of time at position $y$ before jumping. Thus, if $G(x, y)$ is defined to be the expected amount of time the continuous time walk spends at $y$, started from $x$, before hitting $0$ or $z$, we have:
\[
G(x, y) = \frac{1}{r(y)} \cdot G_0(x, y) = \frac{1}{r(y) } \cdot \begin{cases}
2x(z - y)/z & x \le y \\
2(z-x)y/z & x \ge y
\end{cases}
\]
Thus, the expected total time before being absorbed, started from $x$, is
\begin{align*}
E_x[T_{0, z}] &= \sum_{y = 1}^z G(x, y) = \sum_{y = 1}^{x} \frac{2y}{z} \cdot (z - x) \cdot \frac{1}{r(y)} + \sum_{y = x+1}^z \frac{2(z - y)}{z} \cdot x \cdot \frac{1}{r(y)} \\
&= \sum_{y = 1}^{x} \frac{2y}{r(y)} + \sum_{y = x+1}^z \frac{2x}{r(y)} - \sum_{y = 1}^z \frac{2xy}{z r(y)},
\end{align*}
which establishes \eqref{gf}.
\end{proof}
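Formula \eqref{gf} can be verified numerically against a direct solution of the linear equations $E_j = 1/r(j) + (E_{j-1}+E_{j+1})/2$ with $E_0 = E_z = 0$. The sketch below (our names) does this for a general rate function.

```python
import numpy as np

def mean_absorption_time(z, r, x):
    """Solve E_j = 1/r(j) + (E_{j-1} + E_{j+1})/2 for 1 <= j <= z-1 with
    E_0 = E_z = 0, and return E_x for the continuous-time walk with
    jump rate r(j) at position j."""
    A = np.eye(z - 1)
    b = np.array([1.0 / r(j) for j in range(1, z)])
    for j in range(1, z):
        if j >= 2:
            A[j - 1, j - 2] = -0.5   # coefficient of E_{j-1}
        if j <= z - 2:
            A[j - 1, j] = -0.5       # coefficient of E_{j+1}
    return float(np.linalg.solve(A, b)[x - 1])

def gf_formula(z, r, x):
    """Right-hand side of (gf)."""
    return (sum(2.0 * y / r(y) for y in range(1, x + 1))
            + sum(2.0 * x / r(y) for y in range(x + 1, z + 1))
            - sum(2.0 * x * y / (z * r(y)) for y in range(1, z + 1)))
```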
\subsection{Boundary size calculations} \label{ss:boundary}
To use \eqref{gf} to bound the extinction time, we need to understand the size of the boundary of the voter model:
$\partial\xi = \{ \{x,y\} : x \sim y, \, \xi(x) \neq \xi(y) \}$.
Here $x\sim y$ means that $x$ and $y$ are neighbors and $\{x,y\}$ is the unoriented edge that connects them. For a voter model configuration $\xi$, let $|\xi| = \sum_x \xi(x)$ be the number of 1's. When $|\xi| = k$ we have the trivial upper and lower bounds
\begin{equation}
\label{easybds}
C_d k^{1/d} \le | \partial \xi| \le 2d k.
\end{equation}
Using \eqref{gf}, we see that if $x=n^p$ and $z=n^q$ for some $0 < p < q < 1$, then for $r(y) = y$,
\begin{equation}
E_x T_{0, z} \le \sum_{y = 1}^{x} \frac{2y}{y} + \sum_{y = x+1}^{z} \frac{2x}{y}
\le C x + 2x [\log(z)-\log(x)] \le C'x \log(z)
\label{ub0}
\end{equation}
If $p=b(0)$ and $q> p$, this gives us what we want: an extinction time $\ll n^b$.
On the other hand, if we only use the lower bound from \eqref{easybds} and plug in $r(y) = y^{1/3}$, then
\begin{equation}
E_x T_{0, z} \le \sum_{y = 1}^{x} \frac{2y}{y^{1/3}} + \sum_{y = x+1}^{z} \frac{2x}{y^{1/3}}
\le C ( x^{5/3} + x^{2/3}z) \le C' x^{2/3} z
\label{ub1}
\end{equation}
If we take $x=n^{b(0)}$ and $z=n^c$, then this bound is at least of order $n^{5b(0)/3}$, which is much longer than the interval of length $n^{b}$ over which the process behaves like the voter model, so by itself it is not good enough. Combining \eqref{easybds} and \eqref{ub1} gives
\begin{lemma} \label{smallk}
If $x=n^p$ with $p<3b/5$ and $z=n^q$ with $q>p$ and $2p/3+ q < b$ then
$$
P_x( T_{0,z} \le n^b ) \to 1 \quad\hbox{as $n\to\infty$}.
$$
\end{lemma}
\noindent
This will let us show that the time spent at small values of $|\partial \xi_t|$ can be ignored. For larger values, we
need a more precise statement about the size of the boundary. This has been done by Cox, Durrett, and Perkins \cite{CDP2},
in order to show that in $d\ge 2$ the rescaled voter model converges in distribution to super-Brownian motion.
This was later used by Bramson, Cox, and LeGall \cite{BCL} to prove a result for the voter model in $d\ge 3$ started at 0.
See Theorem 4 on page 1012 in \cite{BCL}.
To prepare for stating our lemma we describe the result from \cite{CDP2}.
They use a general probability kernel $p(z)$. In our case $p(z)=1/6$ for the nearest neighbors of 0.
If $\xi_t(x)=1$ we let
$$
V_t(x) = \sum_y p(y-x) 1_{(\xi_t(y) = 1)}
$$
If $\xi_t(x)=0$ we set $V_t(x)=0$. This part of the definition is not really needed in the statement, since $X^N_s$ is supported by the points of the rescaled lattice that are in state 1. On page 202 of \cite{CDP2} one finds the following result.
\medskip\noindent
(I1) There is a finite $\gamma>0$ so that for all $\phi \in C_0^\infty(\mathbb{R}^d)$ and $T>0$
$$
E\left[ \left( \int_0^T X^N_s( [V_{N,s}-\gamma] \phi^2 ) \, ds \right)^{\kern -0.2em 2} \right] \to 0
$$
Here $X^N_t$ is the voter model with space scaled by $\sqrt{N}$ and time scaled by $N$, turned into a measure by assigning mass $1/N$ to sites in state 1; see (1.4). $V_{N,s}(x)$ is a suitably rescaled version of $V_t(x)$. The formula on page 202 has $V'$ because they want to write the formula so that it is valid both for $d=2$ and for $d \ge 3$.
In our situation $\gamma = 2d\beta_d$. However, in this proof we need control on the size of the error. The reader should think of $s$ as a point in the time
interval $[t/\epsilon_n - n^b/2, t/\epsilon_n]$ over which our process behaves like the voter model.
\begin{lemma}\label{bdyerr}
If $k$ is large and the density of 1's is small then
$$
P\left( \left. \frac{|\partial \xi_s|}{ | \xi_s|} \not\in [(1-\epsilon) 2d\beta_d , (1+\epsilon) 2d\beta_d ]
\, \right| \, |\xi_s| = k \right) \le k^{-2/3}
$$
\end{lemma}
\begin{proof} Pick a site $x$ at time $s$ with $\xi_s(x)=1$. When this holds, the coalescing random walk (CRW) starting at $x$ at time $s$ lands on a site in state $1$ at time $t/\epsilon_n - n^b$. Let $r=k^\alpha$ where $\alpha$ is small, and follow the CRW path backwards in time for $r$ units of time. If we let $h(s,x)$ be the probability that the CRW starting from $x$ at time $s$ lands on a 1 at time $t/\epsilon_n - n^b$, then an elementary conditional probability computation shows that the probability that our conditioned CRW particle at $x$ at time $s$ is at $y$ at time $s-r$ is
$$
\bar p_{s,r}(x,y) = p_r(x,y) \frac{h(s-r,y)}{h(s,x)}
$$
This result is often known as Doob's $h$-transform. Since the lineage will wrap around the torus in the remaining $\ge n^b/2$ units of time, the ratio is close to 1 and can be ignored.
For each neighbor $y$ of an $x$ with $\xi_s(x)=1$, let $V_{x,y}=1$ if $y$ does not coalesce with $x$ by time $r$, and 0 otherwise. For any $\alpha>0$, if $k$ is large and the density $u$ of 1's is small, then
$$
\left| \frac{P(V_{x,y}=1)}{ \beta_d} - 1\right| < \epsilon/2.
$$
Here we are using the hydrodynamic limit Lemma \ref{nuu} to conclude that the distribution of the process is close to $\nu_u$ at time $r$.
Let $W_x = \sum_{y\sim x} V_{x,y}$, $\mu(x) = \sum_{y\sim x} EV_{x,y}$, and
$$
S_k = \sum_x^\star \bar W_x \quad\hbox{where} \quad \bar W_x = W_x - \mu(x)
$$
and $\sum^\star_x$ is short for $\sum_{x:\, \xi_s(x)=1}$. Arguments in Section \ref{ss:diffm} imply that if $|x-x'| > r$ then the correlation
between $W_x$ and $W_{x'}$ is small enough to be ignored, so
$$
E(S_k^2) = \sum^\star_x \sum^\star_y E[\bar W_x \bar W_y] \le 36 k \cdot Cr^3
$$
since $|\bar W_x | \le 6$ and for a given $x$ there are at most $Cr^3$ values of $y$ with $|x-y| \le r$. Using Chebyshev's inequality,
$$
P( |S_k| \ge \epsilon k ) \le \frac{36 k \cdot Cr^3}{\epsilon^2 k^2} \le C_\epsilon k^{-1+3\alpha}.
$$
Since $-1+3\alpha < -2/3$ when $\alpha < 1/9$, taking $\alpha < 1/10$ gives the desired result.
\end{proof}
\subsection{Extinction time}
\label{ss:exttime}
The results about the boundary of the voter model can now be applied to the Green's function calculation to get the result
\begin{lemma} \label{lem:extinctionTime}
Consider the voter model started from a configuration with $|\xi_0| = x$, and let $T_{0, z}$ be the first time $|\xi_t|$ hits $0$ or $z$. If $x=n^{b(0)}$ and $z=n^c$ with $c>b(0)$ then
$$
E_x[T_{0,z}] \le C n^{b(0)}
$$
\end{lemma}
\begin{proof}
We can divide the sum in \eqref{gf} into the pieces where Lemma \ref{smallk} can be applied. That is, define $x' = n^{p} < x$ so that $p < 3 b(0)/5$ and $2p/3 + c < b(0)$. Then,
\[
\mathbb{E}_x[T_{0, z}] \le \mathbb{E}_{x'}[T_{0, z}] + \mathbb{E}_x[T_{x', z}].
\]
The first term is less than a constant times $n^{b(0)}$ by Lemma \ref{smallk}. To bound the second hitting time, we use \eqref{ub0} and Lemma \ref{bdyerr} to conclude that the expected amount of time
when $|\partial\xi_s|/|\xi_s|$ is not within $\epsilon$ of $2d\beta_d$ is
$$
\le \sum_{y = 1}^{x} \frac{2y}{y^{1/3}}\, y^{-2/3} + \sum_{y = x+1}^{z} \frac{2x}{y^{1/3}}\, y^{-2/3} \le Cn^{b(0)},
$$
which completes the proof.
\end{proof}
Theorem \ref{dieout} now follows immediately: apply Lemma \ref{lem:dyingFirst} to get that $U_n(\alpha \log n) < n^{-(1-b(0))}$ with high probability. Next, use the results of Section \ref{ss:igbr} to see that with high probability the $q$-voter model only experiences voter branching events for the remainder of the time. Lemma \ref{lem:extinctionTime} then shows that with high probability the unscaled voter model started with $n^{b(0)}$ occupied sites will hit $0$ or $n^c$ in an additional time of $C n^{b(0)}$. The probability that the process hits $0$ first is $(n^c - n^{b(0)})/n^c \to 1$. Since $b(0) > 2/3$, this additional time is $o(1)$ for the time-scaled process $U_n(t)$. Thus,
$$
P \left( U_n(\alpha \log n)=0 \right) \to 1 \qquad\hbox{as $n\to\infty$.}
$$
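The hitting probability used above is the classical gambler's-ruin value: since the number of occupied sites is a martingale under voter dynamics, the chance of hitting $z$ before $0$ from $x$ is $h(x)=x/z$. A small exact check (our own illustration; the values of $z$ and $x$ are arbitrary) that $h(y)=y/z$ solves the harmonic boundary-value problem:

```python
# h(y) = y/z satisfies h(0) = 0, h(z) = 1, and h(y) = (h(y-1) + h(y+1)) / 2,
# so by optional stopping it is the probability of hitting z before 0.
from fractions import Fraction

z = 20
h = [Fraction(y, z) for y in range(z + 1)]
assert h[0] == 0 and h[z] == 1
for y in range(1, z):
    assert h[y] == (h[y - 1] + h[y + 1]) / 2

x = 5  # e.g. the analogue of n^{b(0)} occupied sites
assert 1 - h[x] == Fraction(z - x, z)   # extinction probability (z - x)/z
```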
\section{Computing the perturbation} \label{sec:cpert}
In this section, Theorem \ref{limitODE} is proved. Recall that Theorem \ref{detlim} states that the limiting ODE for the model with a $k$-sized neighborhood is
\[
\frac{du}{dt} = \sum_{m = 1}^{k-1} r_m^k (\rho_m^0(u) - \rho_m^1(u)),
\]
where $\rho_m^i(u)$ is the probability under the voter model equilibrium $\nu_u$ that the origin is in state $i$ and exactly $m$ of the neighbors are in state $1-i$. In this section we analyze these quantities. Before giving the proof for general $k$, we first give an explicit proof for a neighborhood of size $3$, to show how the individual terms are computed while introducing some necessary notation.
\subsection{\bf $k=3$}
To compute the $\rho^0_i$ we have to determine the coalescence fate of $0$, $e_1$, $e_2$, $e_3$. There are 7 possibilities:
\begin{center}
\begin{tabular}{lcccc}
one & 0;3 & 1;2 & 2;1 & 3;0\\
two & 0;2,1 & 1;1,1 \\
three & 0;1,1,1
\end{tabular}
\end{center}
The first number in each string gives the number of neighbors that coalesce with 0. The others give the size of the limiting coalescing clusters formed by the
remaining neighbors. The word at the beginning of the row is the number of entries after the semi-colon. We can ignore $3;0$ because in that case all the neighbors have the same state as 0.
Let $\rho^0_i$ be the probability that in the voter equilibrium $\nu_u$ the origin is 0 while exactly $i$ of the neighbors are 1. Factoring out the probability that the origin is 0, we have $\rho^0_i = (1-u)q_i(u)$. To compute the $q_i(u)$ we use the following table.
\begin{itemize}
\item The coefficients of $u$ come from the ``one'' terms.
\item The coefficients of $u^2$ and $u(1-u)$ come from the ``two'' terms. There is no $(1-u)^2$ term since then all the neighbors would be 0. $p_{1;1,1}$ appears three times since only the assignment 0,0 is impossible. $p_{0;2,1}$ only appears twice since the assignments 0,0 and 1,1 are impossible.
\item The coefficients of $u^2(1-u)$ and $u(1-u)^2$ come from the ``three'' terms. There is no $u^3$ or $(1-u)^3$ term since then all neighbors would be 1 or 0. For this reason $p_{0;1,1,1}$ appears $2^3 - 2 = 6$ times.
\end{itemize}
The meaning of the first column will become clear when the reader reaches \eqref{diffq}:
\begin{equation}
\begin{matrix}
\Delta_i(u) & \hbox{term} & q_1(u) & q_2(u) \\
0 & u & p_{2;1} & p_{1;2} \\
-1 & u^2 & & p_{1;1,1} \\
1 & u(1-u) & p_{0;2,1} + 2p_{1;1,1} & p_{0;2,1}\\
1 & u(1-u)^2 & 3p_{0;1,1,1} \\
0 & u^2(1-u) & & 3p_{0;1,1,1}
\end{matrix}
\label{table3}
\end{equation}
so reading down the columns we have
\begin{align*}
q_1(u) & = p_{2;1} u + [2p_{1;1,1}+ p_{0;2,1}] u(1-u) + 3p_{0;1,1,1}u(1-u)^2 \\
q_2(u) & = p_{1;2} u +p_{1;1,1}u^2 + p_{0;2,1} u(1-u) + 3p_{0;1,1,1} u^2(1-u)
\end{align*}
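These closed forms can be checked by brute force (our own sketch; the $p$-values are placeholders, not the true coalescence probabilities): conditional on the origin being $0$, each cluster is independently $1$ with probability $u$, and $q_i$ sums over assignments giving exactly $i$ neighbors in state $1$.

```python
# Enumerate opinion assignments for each k = 3 coalescence fate and compare
# with the displayed polynomials; fates (0;3) and (3;0) only affect i = 3, 0.
from itertools import product

fates = {  # (s_0, cluster sizes) -> placeholder probability p_{s_0;...}
    (2, (1,)): 0.10, (1, (2,)): 0.15, (0, (2, 1)): 0.20,
    (1, (1, 1)): 0.25, (0, (1, 1, 1)): 0.30,
}

def q(i, u):
    total = 0.0
    for (s0, sizes), p in fates.items():
        for sig in product((0, 1), repeat=len(sizes)):
            if sum(a * s for a, s in zip(sig, sizes)) == i:
                w = 1.0
                for a in sig:
                    w *= u if a else 1 - u
                total += p * w
    return total

u = 0.37
p21, p12, p021, p111, p0111 = 0.10, 0.15, 0.20, 0.25, 0.30
assert abs(q(1, u) - (p21*u + (p021 + 2*p111)*u*(1-u) + 3*p0111*u*(1-u)**2)) < 1e-12
assert abs(q(2, u) - (p12*u + p111*u**2 + p021*u*(1-u) + 3*p0111*u**2*(1-u))) < 1e-12
```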
Let $\rho^1_i$ be the probability that in the voter equilibrium $\nu_u$ the origin is 1 while exactly $i$ of the neighbors are 0. From the previous calculation we see that $\rho^1_i = u q_i(1-u)$ so we have
$$
\langle h_{0,1} - h_{1,0} \rangle_u = \sum_{i=1}^2 r_i (\rho^0_i - \rho^1_i)
$$
The quantity in parentheses is $\Delta_i(u) \equiv (1-u)q_i(u) - u q_i(1-u)$.
Taking differences, we have (the first column indicates the corresponding term in $q_i(u)$):
\begin{align}
u \quad & \quad u(1-u) - (1-u) u = 0
\nonumber\\
u^2 \quad & \quad u^2(1-u) - u(1-u)^2 = u(1-u)(2u-1)
\nonumber\\
u(1-u) & \quad u(1-u)^2 - u^2(1-u) = u(1-u)(1-2u)
\label{diffq}\\
u(1-u)^2 & \quad u(1-u)^3 - u^3(1-u) = u(1-u)[ (1-u)^2 - u^2 ] = u(1-u)(1-2u)
\nonumber\\
u^2 (1-u) & \quad u^2(1-u)^2 - (1-u)^2u^2 = 0 \nonumber
\end{align}
so consulting \eqref{table3} we have
\begin{align*}
\Delta_1(u) & = [2p_{1;1,1}+ p_{0;2,1} + 3 p_{0;1,1,1}] u(1-u)(1-2u) \\
\Delta_2(u) & = [- p_{1;1,1} + p_{0;2,1}] u(1-u)(1-2u)
\end{align*}
and the reaction term is
\begin{align*}
\frac{\phi(u)}{u(1-u)(1-2u)} & = r_1[2p_{1;1,1} + p_{0;2,1} + 3 p_{0;1,1,1} ] \\
& + r_2 [ - p_{1;1,1} + p_{0;2,1} ]
\end{align*}
If $2r^3_1>r^3_2$ then the right-hand side is positive, since the only negative term, $-r_2 p_{1;1,1}$, is dominated by $2r_1 p_{1;1,1}$. Using \eqref{rforqv} we see that in the $q$-voter model with $q<1$
$$
2r^3_1 = 2/3 \log (3) > 2/3 \log(3/2) = r^3_2
$$
so the reaction term is $c_3 u(1-u)(1-2u)$ with $c_3>0$. When $q>1$ the reaction term is $-c_3 u(1-u)(1-2u)$.
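The $k=3$ computation above can be verified numerically (our own check, with placeholder $p$-values): the differences $\Delta_i(u)=(1-u)q_i(u)-uq_i(1-u)$ factor through $u(1-u)(1-2u)$ as claimed, and $2r_1^3>r_2^3$ makes the resulting bracket positive.

```python
# Check the factorisation of Delta_1, Delta_2 and the sign of the bracket,
# using r_i^k = (i/k) * log(k/i); the p-values are arbitrary placeholders.
from math import log, isclose

p21, p12, p021, p111, p0111 = 0.10, 0.15, 0.20, 0.25, 0.30

def q1(u): return p21*u + (2*p111 + p021)*u*(1-u) + 3*p0111*u*(1-u)**2
def q2(u): return p12*u + p111*u**2 + p021*u*(1-u) + 3*p0111*u**2*(1-u)

for u in (0.1, 0.37, 0.5, 0.8):
    d1 = (1-u)*q1(u) - u*q1(1-u)
    d2 = (1-u)*q2(u) - u*q2(1-u)
    assert abs(d1 - (2*p111 + p021 + 3*p0111)*u*(1-u)*(1-2*u)) < 1e-12
    assert abs(d2 - (-p111 + p021)*u*(1-u)*(1-2*u)) < 1e-12

def r(i, k): return (i / k) * log(k / i)

assert isclose(2*r(1, 3), (2/3)*log(3)) and isclose(r(2, 3), (2/3)*log(3/2))
assert r(1, 3)*(2*p111 + p021 + 3*p0111) + r(2, 3)*(-p111 + p021) > 0
```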
\subsection{\bf General $k$}
In this case we have to compute the coalescence fate of $0$ together with its $k$ neighbors. Again $\rho^0_i=(1-u)q_i(u)$, where the functions $q_i(u)$, $i\leq k-1$, defined as before, are polynomials whose terms are of the type $u^a(1-u)^b$. First let us look at the contribution $\Delta_{a,b} (u)=u^a(1-u)^{b+1}-u^{b+1}(1-u)^a$ of such a term to the difference $\rho_i^0-\rho_i^1=(1-u)q_i(u)-uq_i(1-u)$. Note that $\Delta_{a,b} (u)=0$ if $a=b+1$.
In the case $a\leq b$ we have
\begin{align*}
\Delta_{a,b} (u)&= u^a(1-u)^{b+1}-u^{b+1}(1-u)^a\\
&=u^a(1-u)^a[(1-u)^{b-a+1}-u^{b-a+1}]\\
&=u^a(1-u)^a(1-2u)\left[\sum_{j=0}^{b-a}u^j(1-u)^{b-a-j}\right].
\end{align*}
To see the last step, write $1-2u = (1- u)-u$ and telescope the sum. In the case $a>b+1$,
\begin{align*}
\Delta_{a,b} (u)&= u^a(1-u)^{b+1}-u^{b+1}(1-u)^a\\
&=u^{b+1}(1-u)^{b+1}[u^{a-b-1}-(1-u)^{a-b-1}]\\
&=-u^{b+1}(1-u)^{b+1}(1-2u)\left[\sum_{j=0}^{a-b-2}u^j(1-u)^{a-b-2-j}\right]
\end{align*}
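Both factorisations rest on the identity $(1-u)^{n+1}-u^{n+1}=(1-2u)\sum_{j=0}^{n}u^j(1-u)^{n-j}$ (with $n=b-a$ in the first case and $n=a-b-2$ in the second); a quick numeric check (ours):

```python
# Verify the telescoping identity for small n at several values of u.
for n in range(6):
    for u in (0.1, 0.3, 0.7, 0.9):
        lhs = (1 - u) ** (n + 1) - u ** (n + 1)
        rhs = (1 - 2 * u) * sum(u ** j * (1 - u) ** (n - j) for j in range(n + 1))
        assert abs(lhs - rhs) < 1e-12
```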
Since $\sum_{j=0}^n u^j (1-u)^{n-j}>0$ on $[0,1]$, we see that $0$, $1$ and $1/2$ are the only roots of $\Delta_{a,b} (u)$ in $[0,1]$. Also note that $\Delta_{a,b} (u)=-\Delta_{b+1,a-1} (u)$. We claim
$$
\frac{\phi(u)}{u(1-u)(1-2u)}=f(u),
$$
where $f(u)$ is a polynomial that is strictly positive on $[0,1]$. To prove this, given a coalescence fate $s_0; s_1,s_2,s_3,\cdots, s_j$ with $s_0+s_1+\cdots+s_j=k$, we look at the number of ways to obtain $a$ clusters with opinion $1$ (which gives the coefficients of the terms $u^a(1-u)^b$, $a>b+1$) and compare it with the number of ways to obtain $b+1$ clusters with opinion $1$ (which gives the coefficients of the terms $u^{b+1}(1-u)^{a-1}$).
First, suppose $b=0$ and $a\geq 2$. Let $s_0$ be the number of neighbors that have coalesced with $0$, and let $s_1,s_2,\cdots, s_a$ be the sizes of the limiting coalescing clusters formed by the rest of the neighbors, arranged in increasing order, i.e., $s_1\leq s_2\leq\cdots\leq s_a$. The coefficient of $\Delta_{a,0} (u)$ in $\phi(u)$ is $r_{s_1+\cdots+s_a}\,p_{s_0;s_1,\cdots,s_a}$ (since all the clusters have opinion 1, there is only one way to choose them). Similarly, the coefficient of $\Delta_{1,a-1} (u)$ in $\phi(u)$ is $(r_{s_1}+\cdots+r_{s_a})\,p_{s_0;s_1,\cdots,s_a}$ (since exactly one of the clusters has opinion 1, there are $a$ different choices, and the rate for each choice is added individually).
Since the $s_i$ are increasing in $i$, we have
\begin{align*}
\log(k/s_a)\leq\log (k/s_j)\qquad \forall j\in\{1,2,\cdots, a-1\}.
\end{align*}
So, by the definition $r_i^k=\frac{i}{k}\log(k/i)$ and the inequality above, we have
\begin{align*}
r_{s_1+\cdots+s_a}&=\frac{s_1+\cdots+s_a}{k}\log(k/{(s_1+\cdots+s_a)})\\
&\leq \frac{s_1+\cdots+s_a}{k}\log(k/{s_a})\\
&=\frac{s_1}{k}\log(k/{s_a})+\frac{s_2}{k}\log(k/{s_a})+\cdots+\frac{s_a}{k}\log(k/{s_a})\\
&\leq \frac{s_1}{k}\log(k/{s_1})+\frac{s_2}{k}\log(k/{s_2})+\cdots+\frac{s_a}{k}\log(k/{s_a})\\
&=r_{s_1}+r_{s_2}+\cdots+r_{s_a}.
\end{align*}
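This subadditivity of the rates is easy to check exhaustively for a fixed $k$ (our own illustration; the choice $k=12$ is arbitrary):

```python
# Verify r_{s_1+...+s_a} <= r_{s_1} + ... + r_{s_a} whenever the total is <= k,
# with r(i, k) = (i/k) * log(k/i).
from math import log
from itertools import combinations_with_replacement

def r(i, k):
    return (i / k) * log(k / i)

k = 12
for a in (2, 3, 4):
    for sizes in combinations_with_replacement(range(1, k), a):
        if sum(sizes) <= k:
            assert r(sum(sizes), k) <= sum(r(s, k) for s in sizes) + 1e-12
```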
Since $\Delta_{a,0} (u)=-\Delta_{1,a-1} (u)$, the combined contribution of these two coefficients to $\phi(u)$ is $[(r_{s_1}+\cdots+r_{s_a})-r_{s_1+\cdots+s_a}]\Delta_{1,a-1}(u)\,p_{s_0;s_1,\cdots,s_a}$, a non-negative multiple of $\Delta_{1,a-1}(u)$, and hence a non-negative polynomial in $u$ with no roots other than $0,1$ and $1/2$.
Now suppose $b\neq 0$ and $a\geq b+2$. As in the previous case, let $s_0$ be the number of neighbors that coalesce with $0$, and let $s_1,s_2,\cdots, s_{a+b}$ be the sizes of the limiting coalescing clusters formed by the rest of the neighbors, arranged in increasing order, i.e., $s_1\leq s_2\leq\cdots\leq s_{a+b}$. There are $\binom{a+b}{a}$ ways of choosing $a$ clusters out of the $a+b$ clusters. Denote the total size of the $i$th such choice by $x_i$, where $1\leq i\leq \binom{a+b}{a}$, and without loss of generality assume that the $x_i$ are arranged in ascending order. The coefficient of $\Delta_{a,b} (u)$ in $\phi(u)$ is then $p_{s_0;s_1,s_2,\cdots,s_{a+b}}\sum_{i=1}^{\binom{a+b}{a}} r_{x_i}$. Given $1\leq i\leq a+b$, the number of choices in which cluster $i$ has opinion $1$ is $\binom{a+b-1}{a-1}$. Hence the total size of all the choices, in each of which $a$ clusters have opinion $1$, is
$$\sum_{i=1}^{\binom{a+b}{a}}x_i=\binom{a+b-1}{a-1}\left(s_1+s_2+\cdots + s_{a+b}\right).$$
By a similar argument, there are $\binom{a+b}{b+1}$ ways of choosing $b+1$ clusters out of the $a+b$ clusters. Denote the total size of the $i$th such choice by $y_i$, where $1\leq i\leq \binom{a+b}{b+1}$, and again assume that the $y_i$ are arranged in ascending order. The coefficient of $\Delta_{b+1,a-1} (u)$ in $\phi(u)$ is $p_{s_0;s_1,s_2,\cdots,s_{a+b}}\sum_{i=1}^{\binom{a+b}{b+1}} r_{y_i}$. Given $1\leq i\leq a+b$, the number of choices in which cluster $i$ has opinion $1$ is $\binom{a+b-1}{b}=\binom{a+b-1}{a-1}$. Hence the total size of all the choices, in each of which $b+1$ clusters have opinion $1$, is
$$\sum_{i=1}^{\binom{a+b}{b+1}}y_i=\binom{a+b-1}{a-1}\left(s_1+s_2+\cdots + s_{a+b}\right).$$
For ease of notation, let us denote $\binom{a+b}{a}$ by $n$ and $\binom{a+b}{b+1}$ by $m$.
Then $m>n$ since
\begin{align*}
\binom{a+b}{b+1}-\binom{a+b}{a}=&\binom{a+b}{a}\left(\frac{a}{b+1}-1\right)\\
=&\binom{a+b}{a}\left(\frac{a-b-1}{b+1}\right)>0.
\end{align*}
Since $\sum_{i=1}^{n}x_i=\sum_{i=1}^{m}y_i$, and the $x_i$ as well as the $y_i$ are arranged in ascending order, we have $y_m\leq x_n$: the $b+1$ largest clusters form a subset of the $a$ largest clusters, so their total size is at most $x_n$. Choose $j$ and $0\leq c<y_j$ so that $y_j+y_{j+1}+\cdots +y_m -c=x_n$. Then, using $\log y_i\leq \log x_n$ for $j\leq i\leq m$,
\begin{align*}
&x_1\log x_1+ x_2\log x_2+\cdots +x_n\log x_n -y_1\log y_1-y_2\log y_2-\cdots-y_m\log y_m\\
\geq\,&x_1\log x_1+\cdots +x_n\log x_n -y_1\log y_1-\cdots-y_{j-1}\log y_{j-1}-x_n\log x_n-c\log y_j\\
=\,&x_1\log x_1+\cdots +x_{n-1}\log x_{n-1} -y_1\log y_1-\cdots-y_{j-1}\log y_{j-1}-c\log y_j.
\end{align*}
Now we have $\sum_{i=1}^{n-1}x_i=c+\sum_{i=1}^{j-1}y_i$. Repeating the same process $n-1$ more times, we have
$$\sum_{i=1}^{n}x_i\log x_i>\sum_{i=1}^{m}y_i\log y_i.$$
Now using the definition of $r_i^k$
\begin{align*}
\sum_{i=1}^{n}r^k_{x_i}=&\sum_{i=1}^{n}\frac{x_i}{k}\log(k/x_i)
=\sum_{i=1}^{n}\frac{x_i}{k}\left[\log(k) -\log (x_i)\right]\\
=&\sum_{i=1}^{m}\frac{y_i}{k}\log(k)-\sum_{i=1}^{n}\frac{x_i}{k}\log(x_i)
<\sum_{i=1}^{m}\frac{y_i}{k}\log(k)-\sum_{i=1}^{m}\frac{y_i}{k}\log(y_i)\\
=&\sum_{i=1}^{m}\frac{y_i}{k}\log(k/y_i)
= \sum_{i=1}^{m}r^k_{y_i}.
\end{align*}
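The two displayed inequalities can be confirmed by brute-force enumeration on a small example (our own sketch; the cluster sizes and $k$ are arbitrary choices):

```python
# For cluster sizes s_1,...,s_{a+b} with a >= b+2, compare the sums of
# x*log(x) and of r^k over all a-subsets (x's) and (b+1)-subsets (y's).
from math import log
from itertools import combinations

def r(i, k):
    return (i / k) * log(k / i)

k = 10
sizes = (1, 1, 2, 3)          # a + b = 4 clusters, total size <= k
a, b = 3, 1                   # a >= b + 2
xs = [sum(c) for c in combinations(sizes, a)]
ys = [sum(c) for c in combinations(sizes, b + 1)]
assert sum(xs) == sum(ys)     # both equal binom(a+b-1, a-1) * sum(sizes)
assert sum(x * log(x) for x in xs) > sum(y * log(y) for y in ys)
assert sum(r(x, k) for x in xs) < sum(r(y, k) for y in ys)
```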
Now, using the above inequality along with the fact that $\Delta_{a,b}=-\Delta_{b+1,a-1}$, if we group the terms of the type $\Delta_{b+1,a-1} (u)p_{s_0;s_1,\cdots,s_{a+b}}$ (whose combined coefficients are non-negative) in $\phi(u)$, we get a non-negative polynomial in $u$ with no roots other than $0,1$ and $1/2$. This proves Theorem \ref{limitODE} for $q < 1$.
\begin{corollary}
Fix $q>1$. For the $q$-voter model with $k$ neighbors, the reaction function defined in \eqref{ODErhs} simplifies to
\begin{equation}
\phi(u)=-u(1-u)(1-2u)f^k(u),
\end{equation}
where $f^k(u)$ is a strictly positive polynomial in $u$.
\end{corollary}
\begin{proof}
Recalling the perturbations from \eqref{rforqv} and \eqref{rforq>1}, note that the perturbation for $q>1$ has the same value as the perturbation for $q<1$, but with the opposite sign. This, along with the above work, proves the corollary.
\end{proof}
\section*{Acknowledgments}
This work was begun during the 2019 AMS Math Research Communities meeting on Stochastic Spatial Models, June 9--15, 2019. We would like to thank
Hwai-Ray Tung, a graduate student at Duke, for producing the figures. RD was partially supported by NSF grant DMS 1809967 from the probability program. MS was supported by a National Defense Science \& Engineering Graduate Fellowship. PA was partially supported by NSF grant DMS 1407504.
https://arxiv.org/abs/1406.0716 | A strict undirected model for the $k$-nearest neighbour graph | Let $G=G_{n,k}$ denote the graph formed by placing points in a square of area $n$ according to a Poisson process of density 1 and joining each pair of points which are both $k$ nearest neighbours of each other. Then $G_{n,k}$ can be used as a model for wireless networks, and has some advantages in terms of applications over the two previous $k$-nearest neighbour models studied by Balister, Bollobás, Sarkar and Walters, who proved good bounds on the connectivity models thresholds for both. However their proofs do not extend straightforwardly to this new model, since it is now possible for edges in different components of $G$ to cross. We get around these problems by proving that near the connectivity threshold, edges will not cross with high probability, and then prove that $G$ will be connected with high probability if $k>0.9684\log n$, which improves a bound for one of the models studied by Balister, Bollobás, Sarkar and Walters too. | \section{Introduction}
Let $G_{n,k}$ be the graph formed by placing points in $S_{n}$, a $\sqrt{n}\times\sqrt{n}$ square, according to a Poisson process of density $1$ and connecting two points if they are both $k$-nearest neighbours of each other (i.e.\ each is among the $k$ nearest points to the other in $S_{n}$). We will refer to this as the strict undirected model. A natural question, especially when considering this as a model for a wireless network, is: Asymptotically, how large does $k$ have to be in order to ensure that $G_{n,k}$ is connected?
We cannot ensure with certainty that the resulting graph will be connected; there will always be a chance that a local configuration will occur that produces multiple components, but we can ask: what value of $k$ ensures that the probability of the graph being connected tends to one? Indeed we say that $G_{n,k}$ has a property $\Pi$ \emph{with high probability} if $\mathbb{P}(G_{n,k}\textrm{ has }\Pi)\rightarrow 1$ as $n\rightarrow\infty$. So we seek to answer the question: What $k=k(n)$ ensures that $G_{n,k}$ is connected with high probability?
Different variations of this problem have been studied previously, using different connection rules. Gilbert [\ref{Gil}] first introduced a model in which every point was joined to every other point within some fixed distance, $R$ (the Gilbert model). Equivalently, this can be viewed as joining each point, $x$, to every point within the circle of area $\pi R^{2}$ centred on $x$. Penrose proved in [\ref{Pen}] that if $\pi R^{2}\geq (1+\varepsilon)\log n$ for some fixed $\varepsilon>0$ (so that on average each point is joined to at least $(1+\varepsilon)\log n$ other points), then the resulting graph is connected with high probability, whereas if $\pi R^{2}\leq (1-\varepsilon)\log n$, then the resulting graph is disconnected with high probability.
Xue and Kumar [\ref{XandK}] studied the model in which two points are connected if either is a $k$-nearest neighbour of the other (we will denote this graph $G'_{n,k}$), and proved that the threshold for this model is $\Theta (\log n)$. Balister, Bollob\'{a}s, Sarkar and Walters [\ref{MW}] considerably improved their bounds (they showed that if $k<0.3043\log n$ then $G'_{n,k}$ is disconnected whp, while if $k>0.5139\log n$ then $G'_{n,k}$ is connected whp). In the same paper, Balister, Bollob\'{a}s, Sarkar and Walters also examined a directed version of the problem, where each vertex sends an out edge to each of its $k$ nearest neighbours, and again showed that the connectivity threshold is $\Theta (\log n)$, obtaining lower and upper bounds of $0.7209\log n$ and $0.9967\log n$ respectively.
It has been pointed out that for practical uses (e.g. for wireless networks), it would be better to use a different connection rule, namely to connect two points only if they are both $k$ nearest neighbours of each other. This model has two advantages in terms of wireless networks: It ensures that no vertex will have too high a degree, and thus be swamped, as could happen with either of the previous models. It also ensures we can always receive an acknowledgement of any information sent at each step, which may not be the case in the directed model.
The edges in our new model are exactly the edges in the directed model which are bidirectional, and so any lower bound proved for the directed model will also be a lower bound for the strict undirected model. Thus, from Balister, Bollob\'{a}s, Sarkar and Walters [\ref{MW}] we know that if $k<0.7209\log n$ then $G_{n,k}$ is disconnected with high probability. It can be shown using a tessellation argument and properties of the Poisson process that the connectivity threshold in this model is again $\Theta (\log n)$ (e.g. see the introduction of [\ref{MW}]), and so our task is to produce a good constant, $c$, for the upper bound, such that if $k>c\log n$ then $G_{n,k}$ is connected with high probability. In particular we will show that some $c<1$ suffices, demonstrating that a conjecture of Xue and Kumar made for the original undirected model [\ref{XandK}] (which is true for the Gilbert model) does not hold for this model. The method used in [\ref{MW}] for both of the previous models was to show first that for any $c'>0$, if $k>c'\log n$ then there can be only one `large' component of $G_{n,k}$ with high probability. This allowed them to concentrate on `small' components, and so obtain their bounds.
We wish to do the same, however our model has some extra complications. One key property used in the proofs that there is only one large component was that edges in different components of $G$ cannot cross, but that is not the case in the strict undirected model. Indeed, Figure~\ref{FigCrossing} shows the outline of a construction in which the edges of two different components do cross.
\begin{figure}[h]
\centering
\includegraphics[height=80mm]{KnearCrossingEdges.eps}
\caption{If each of the shaded regions has the number of points shown, and there are no other points nearby, then $a_{1}a_{2}$ and $b_{1}b_{2}$ would be edges of $G_{n,k}$, but $a_{1}$ and $a_{2}$ would be in a different component from $b_{1}$ and $b_{2}$ (Here dashed arrows indicate directed out edges between regions).}
\label{FigCrossing}
\end{figure}
Luckily, the set-up required for edges of different components to cross is fairly restrictive, and we are able to show:
\begin{thm}\label{nocrossing}
If $k=c\log n$, then, for $c>0.7102$ (and in particular below the connectivity threshold), with high probability no two edges of $G$ lying in different components will cross.
\end{thm}
\begin{rem*}
Officially this should read ``\emph{If $k=\lceil c\log n\rceil$, then...},'' however, since we are considering the limit as $n$ tends to infinity, this makes no difference, and so for ease of notation we leave the ceiling notation out here, and for the rest of the paper.
\end{rem*}
There are further complications in proving good upper bounds on the connectivity threshold: In both of the previous models it was always the case that if there was no edge from a point $x$ to a point $y$, then there must be at least $k$ points closer to $x$ than $y$ is, whereas in our model we may only conclude that one or the other has $k$ nearer neighbours. For this reason we have to handle the case of small components differently too. We are able to show:
\begin{thm}\label{TightBoundThm}
If $k=c\log n$ and $c>0.9684$, then $G$ is connected with high probability.
\end{thm}
We first introduce some basic definitions and notation that will be used throughout the paper.
\section{Notation and Preliminaries}
\begin{definition}
Given a point $a\in G_{n,k} = G$, we write $\Gamma^{+}(a)$ for the set of the $k$-nearest neighbours of $a$ and define this to be the \emph{out neighbourhood of $a$}. We define the $k$-nearest neighbour disk of $a$, denoted $D^{k}(a)$, to be the smallest disk centred on $a$ that contains $\Gamma^{+}(a)$.
We will often say that a point $x$ has an \emph{out edge} to a point $y$ (or that $\overrightarrow{xy}$ is an out edge) to mean that $y\in \Gamma^{+}(x)$. Note that $xy$ is an edge in $G$ if and only if both $\overrightarrow{xy}$ and $\overrightarrow{yx}$ are out edges.
Correspondingly we say that $x$ has an \emph{in edge} from $y$ if $\overrightarrow{yx}$ is an out edge.
\end{definition}
We will use the following notational conventions:
\begin{itemize}
\item We write $D_{a}(r)$ for the disk of radius $r$ centred on $a$.
\item We will use capital letters to represent sets (e.g. a region of the plane, or a component), and lower case letters for points in the plane (however if $a$ and $b$ are points, we will write $ab$ for the edge (straight line segment) from $a$ to $b$).
\item For two sets $A$ and $B$, we write $\textrm{d}(A,B)$ for the minimum distance from any point in $A$ to any point in $B$. For a point $x$ and a region $B$ we write $\textrm{d}(x,B)=\textrm{d}(\{x\},B)$.
\item For a set $A$, we write $\partial A$ for the boundary of the closure of $A$.
\item Given a region $A$, we write $\#A$ for the number of points of $G$ in $A$, and $|A|$ for the area of $A$. We write $\Vert ab\Vert$ for the length of the edge $ab$.
\item We will refer to the vertices of $G$ as \emph{points} (i.e. points of our Poisson process), and a single element of $S_{n}$ as a \emph{location}.
\item We will often introduce Cartesian co-ordinates onto $S_{n}$ (with scaling), and when this is the case, we will write $p^{(x)}$ and $p^{(y)}$ for the $x$ and $y$ co-ordinates of any point/location~$p$.
\end{itemize}
At times we will refer to specific points and regions of $G$ and $S_{n}$, especially in the proof that edges of different components cannot cross (Section~\ref{NoCrossSection}), and so to help keep things easy to follow, a list of definitions and notations is included in Appendix~\ref{DefApp}.
\section{Edges of different components cannot cross, and there can only be one large component}
The eventual aim of this section will be to show that if $c=0.7102$ and $k>c\log n$, then with high probability there will only be one large component. We will achieve this by bounding the minimal distance between two edges in different components of $G$. As a first step we establish a lower bound on the distance of a point of $G$ and an edge in a different component.
\subsection{Preliminaries - An edge of one component cannot be too close to a vertex in another component}
To prove a bound on the distance between a point of $G$ and an edge in a different component, we first state the following result of Balister, Bollob\'{a}s, Sarkar and Walters [\ref{MW}] that bounds how close points in different components of $G$ can be. This lemma was proved for the original undirected model, but the proof uses only properties of the Poisson process. Namely, they showed that, given a point $x$, for any point $y$ that is close enough to $x$ we will have $\mathbb{P}(\overrightarrow{xy} \textrm{ not an out edge})=O(n^{-1-\varepsilon})$, and thus that with high probability all pairs of points that are close enough together have out edges to each other. Since this implies $\overrightarrow{xy}$ and $\overrightarrow{yx}$ are both out edges for $x$ and $y$ close enough together, it also shows that $xy$ would be an edge in our model.
\begin{lem}\label{edgelengths}
Fix $c>0$, and set
\[c_{-}=ce^{-1-1/c}\quad\textrm{and}\quad c_{+}=4e(1+c).\]
If $r$ and $R$ are such that $\pi r^{2}=c_{-}\log n$ and $\pi R^{2}=c_{+}\log n$, then whp every vertex in $G_{n,k}$ is joined to every vertex within distance $r$, and every vertex has at least $k+1$ other vertices within a distance $R$, and so in particular is not joined to any vertex more than a distance $R$ away.
\end{lem}
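For concreteness (our own illustration, not part of the lemma), evaluating $c_-$ and $c_+$ at $c=0.9684$, the constant appearing in Theorem~\ref{TightBoundThm}:

```python
# Evaluate c_- = c * e^(-1 - 1/c) and c_+ = 4e(1 + c) for c = 0.9684.
from math import exp, e

c = 0.9684
c_minus = c * exp(-1 - 1 / c)   # roughly 0.127
c_plus = 4 * e * (1 + c)        # roughly 21.4
assert 0 < c_minus < c < c_plus
```

So at that value of $c$, whp every edge of $G_{n,k}$ has length between roughly $\sqrt{0.127\log n/\pi}$ and $\sqrt{21.4\log n/\pi}$.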
The next lemma will be used repeatedly, and is a result about how points can be connected in our graph. It states that the longest edge (in $G$) out of any point, $x$, is at most twice the shortest non-edge involving $x$, or, equivalently, that the region containing the neighbourhood of $x$ (in $G$) is at most a factor of two off being circular. This is certainly not the case in either of the two previous models.
\begin{lem}\label{D1/2}
If $x$ and $y$ are two points of $G$ such that $D^{k}(x)\subset D^{k}(y)$, then $x$ is joined to $y$, and $\Gamma^{+}(x)\cup\{x\}=\Gamma^{+}(y)\cup\{y\}$. In particular, if $xy$ is an edge of $G$ then $x$ must be joined to every point inside $D_{x}(\Vert xy\Vert/2)$.
\end{lem}
\begin{proof}
Since $D^{k}(x)\subset D^{k}(y)$, the $k$ nearest neighbours of $y$ must all lie inside $D^{k}(x)$. If $y\notin D^{k}(x)$, then $D^{k}(y)$ contains $k+2$ points ($k+1$ in $D^{k}(x)$), which is impossible. Thus $xy$ is an edge of $G$ and the set of points (excluding $x$ and $y$) in $D^{k}(x)$ is precisely the same as those in $D^{k}(y)$.
To prove the last part, suppose that $z$ is a point in $D_{x}(\Vert xy\Vert/2)$. Then $\overrightarrow{xz}$ must be an out edge, since $\Vert xz\Vert<\Vert xy\Vert$. Now, if $\overrightarrow{zx}$ is not an out edge then $x\notin D^{k}(z)$, but $z\in D_{x}(\Vert xy\Vert/2)$, and so $D^{k}(z)\subset D_{x}(\Vert xy\Vert)\subset D^{k}(x)$. But this implies $xz\in G$ by the above, a contradiction.
\end{proof}
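The last claim of the lemma can be tested empirically on random point sets (our own simulation sketch; the point count and $k$ are arbitrary): build the mutual $k$-nearest-neighbour graph and check that every edge $xy$ forces edges from $x$ to all points strictly inside $D_{x}(\Vert xy\Vert/2)$.

```python
# Mutual k-NN graph on random points; verify the half-disk property.
import random

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(60)]
k = 4

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def knn(i):
    order = sorted((j for j in range(len(pts)) if j != i),
                   key=lambda j: dist(pts[i], pts[j]))
    return set(order[:k])

nbrs = [knn(i) for i in range(len(pts))]
edges = {(i, j) for i in range(len(pts)) for j in nbrs[i] if i in nbrs[j]}

# If ij is an edge and z is strictly inside D_i(|ij|/2), then iz is an edge.
for (i, j) in edges:
    for z in range(len(pts)):
        if z not in (i, j) and dist(pts[i], pts[z]) < dist(pts[i], pts[j]) / 2:
            assert (i, z) in edges
```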
We will now show that there is an absolute minimum distance between a point and an edge from a different component. As the main step to doing so (and for most of the rest of this subsection), we show that there is a relative minimum distance between an edge of $G$ and a point from a different component, as a function of the length of the edge. This result will be used both as the main step towards the absolute minimum distance, and later as part of the proof that with high probability edges in different components cannot cross. To this end we prove a fairly strong result, and introduce much of the notation and set-up which we will meet again when proving that edges will not cross with high probability.
\begin{lem}\label{farapart1} Suppose $b_{1}$ and $b_{2}$ are in a component $X$, with $b_{1}b_{2}\in G$, $\Vert b_{1}b_{2}\Vert=\rho$ and $a\notin X$, then:
\begin{align}
\textrm{d}(a,b_{1}b_{2}) & \geq\frac{1}{4\sqrt{6}} \rho > 0.102\rho
\end{align}
\end{lem}
\begin{proof}
Suppose $a$, $b_{1}$ and $b_{2}$ are as above. We rescale and introduce Cartesian co-ordinates, fixing $b_{1}$ at $(0,0)$ and $b_{2}$ at $(1,0)$. Without loss of generality, $a^{(y)}\geq 0$ and $a^{(x)}\leq\frac{1}{2}$. We need to show that $\textrm{d}(a,b_{1}b_{2})\geq\frac{1}{4\sqrt{6}}$. We write $B_{i}$ for $D_{b_{i}}(1)$, and note that $B_{i}\subset D^{k}(b_{i})$ (as the edge $b_{1}b_{2}\in G$). We may assume that $a\in B_{1}$, since otherwise $\textrm{d}(a,b_{1}b_{2})\geq\frac{\sqrt{3}}{2}$ (as $a^{(x)}\leq 1/2$).
Since $a$ is not joined to either $b_{i}$, Lemma~\ref{D1/2} tells us that:
\begin{align}
a & \notin D_{b_{1}}(1/2)\cup D_{b_{2}}(1/2)\label{aoutside}
\end{align}
If $a^{(x)}<0$, then, using (\ref{aoutside}), $\textrm{d}(a,b_{1}b_{2})>1/2$. Thus we may assume $0< a^{(x)}\leq 1/2$, so that we have $\textrm{d}(a,b_{1}b_{2})=a^{(y)}$.
Let $w$ be the location $(\frac{1}{2},\frac{1}{2\sqrt{3}})$, and let $T$ be the triangle with vertices $b_{1}$, $b_{2}$ and $w$ (see Figure~\ref{FigTandT2}).
Note that $b_{1}\widehat{b_{2}}w=b_{2}\widehat{b_{1}}w=\frac{\pi}{6}$, and so the sides of $T$ intersect $\partial D_{b_{1}}(1/2)$ and $\partial D_{b_{2}}(1/2)$ at $(\frac{\sqrt{3}}{4},\frac{1}{4})$ and $(1-\frac{\sqrt{3}}{4},\frac{1}{4})$ respectively. In particular, (\ref{aoutside}) tells us that if $a\notin T$ then $\textrm{d}(a,b_{1}b_{2})\geq \frac{1}{4}$.
Thus we may assume that $\overrightarrow{b_{1}a}$ and $\overrightarrow{b_{2}a}$ are out edges, and that:
\begin{align}
a & \in S = \left(T\cap\{p:p^{(x)}<\frac{1}{2}\}\right)\setminus D_{b_{1}}(1/2)\label{ainside}
\end{align}
See Figure~\ref{FigTandT2}.
\begin{figure}[h]
\centering
\includegraphics[height=60mm]{KnearTandT2.eps}
\caption{The region we are considering for $a$, shown with $T$ and $T_{2}$.}
\label{FigTandT2}
\end{figure}
Define $r=\Vert ab_{1}\Vert$, and write $A$ for the disk $D_{a}(r)$, so that $\Gamma^{+}(a)\subset D^{k}(a)\subset A$. Since $a\in S$, we have:
\begin{align}
r & \leq \Vert b_{1}w\Vert=\frac{1}{\sqrt{3}}\label{rsmall}
\end{align}
Let $z$ be the location $(\frac{1}{2},\frac{\sqrt{3}}{2})$. Note that $b_{1}$, $b_{2}$ and $z$ form an equilateral triangle~$T_{2}$ that contains $T$ (see Figure~\ref{FigTandT2}). Note that for any point in $T_{2}$ (and so, in particular, for every point in $S$), $z$ is the closest point on $\partial (B_{1}\cup B_{2})$. Thus:
\begin{align}
\textrm{d}(a,\partial (B_{1}\cup B_{2})) & = \Vert az\Vert \geq \Vert wz\Vert = \frac{1}{\sqrt{3}}\label{daBig}
\end{align}
Thus, putting (\ref{rsmall}) and (\ref{daBig}) together, we have:
\begin{align}
D^{k}(a) & \subset A \subset B_{1}\cup B_{2} \label{DkaInside}
\end{align}
Now, Lemma~\ref{D1/2} tells us that we cannot have $\Gamma^{+}(a)\subset B_{i}$ for either $i$, and so $\Gamma^{+}(a)$ (and thus $A$) must contain points in both $B_{1}\setminus B_{2}$ and $B_{2}\setminus B_{1}$. We consider a point $p\in \Gamma^{+}(a)\cap (B_{2}\setminus B_{1})$. By definition, both $b_{2}$ and $a$ must have an out edge to $p$, and thus, since $a$ and $b_{2}$ are in different components, one of the following must hold:
\begin{enumerate}
\item $p$ has no out edge to $a$.\label{nopa}
\item $p$ has no out edge to $b_{2}$.\label{nopb}
\end{enumerate}
We will show that if $a$ is too close to $b_{1}b_{2}$, then $A$ (and so $\Gamma^{+}(a)$) cannot contain a suitable point with either of these conditions holding. In particular, writing $E$ for the ellipse $\{p:\Vert ap\Vert + \Vert b_{2}p\Vert\leq1\}$, we show that if $a$ is too close to $b_{1}b_{2}$ then $R:=A\cap(B_{2}\setminus B_{1})\subset E\cap D_{b_{2}}(1/2)$, and that no point in $E\cap D_{b_{2}}(1/2)$ can satisfy either of the above conditions.
\begin{lem}\label{EllipseLemma}
If $p\in E$ then $\overrightarrow{pa}$ is an out edge. In particular, if $p\in E\cap D_{b_{2}}(1/2)$, then both $\overrightarrow{pa}$ and $\overrightarrow{pb_{2}}$ are out edges.
\end{lem}
\begin{proof}
Suppose that $p\in E$ and $\overrightarrow{pa}$ is not an out edge. We must have $a\notin D^{k}(p)$, and so $D^{k}(p)\subset B_{2}\subset D^{k}(b_{2})$ by the definition of $E$. Thus Lemma~\ref{D1/2} tells us that $\Gamma^{+}(p)\cup\{p\}=\Gamma^{+}(b_{2})\cup\{b_{2}\}$. But $a\in \Gamma^{+}(b_{2})$, and so $a\in\Gamma^{+}(p)$, and we have a contradiction.
The second part follows by applying Lemma~\ref{D1/2}.
\end{proof}
We now identify a location $q$ which is quite high up on $\partial B_{1}$ and must lie inside $E\cap D_{b_{2}}(1/2)$. Lemma~\ref{EllipseLemma} tells us that $R$ must contain a point further round $\partial B_{1}$ than $q$, or else $a$ and $b_{2}$ would be in the same component. This will force $a$ itself not to lie too close to $b_{1}b_{2}$.
\begin{lem}
Let $q=(\frac{11}{12},\frac{\sqrt{23}}{12})$. Then, so long as $a\in S$, $q\in E\cap D_{b_{2}}(1/2)$.
\end{lem}
\begin{proof}
We have that $\Vert qb_{2}\Vert=\sqrt{(\frac{1}{12})^{2}+(\frac{\sqrt{23}}{12})^{2}}=\frac{1}{\sqrt{6}}<\frac{1}{2}$. Thus $q\in D_{b_{2}}(1/2)$, and moreover $q\in E$ if and only if $a\in D_{q}(1-\frac{1}{\sqrt{6}})$.
Since $S$ is contained within the convex hull of its corners, we will have $a\in D_{q}(1-\frac{1}{\sqrt{6}})$ so long as the corners of $S$ are contained within $D_{q}(1-\frac{1}{\sqrt{6}})$. Now, $S$ has three corners: $(\frac{1}{2},0)$, $(\frac{\sqrt{3}}{4},\frac{1}{4})$ and $(\frac{1}{2},\frac{1}{2\sqrt{3}})$, and by some simple calculations:
\begin{align*}
\textrm{d}(q,(\frac{1}{2},\frac{1}{2\sqrt{3}})) < \textrm{d}(q,(\frac{1}{2},0)) & < 1-\frac{1}{\sqrt{6}}
\end{align*}
and:
\begin{align*}
\textrm{d}(q,(\sqrt{3}/4,1/4)) & < 1-\frac{1}{\sqrt{6}}
\end{align*}
Thus all these locations are inside $D_{q}(1-\frac{1}{\sqrt{6}})$, and we are done.
\end{proof}
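Though the paper leaves them as "simple calculations", the distance claims above can be checked mechanically; a quick Python sketch (not part of the paper):

```python
import math

q = (11/12, math.sqrt(23)/12)
b1, b2 = (0.0, 0.0), (1.0, 0.0)
d = lambda p, r: math.hypot(p[0] - r[0], p[1] - r[1])

assert abs(d(q, b2) - 1/math.sqrt(6)) < 1e-12   # ||q b2|| = 1/sqrt(6)
assert d(q, b2) < 0.5                           # so q is in D_{b2}(1/2)
assert abs(d(q, b1) - 1.0) < 1e-12              # q lies on the circle round b1

# the three corners of S all lie inside D_q(1 - 1/sqrt(6))
corners = [(0.5, 0.0), (math.sqrt(3)/4, 0.25), (0.5, 1/(2*math.sqrt(3)))]
for s in corners:
    assert d(q, s) < 1 - 1/math.sqrt(6)
# and the ordering stated in the first display holds
assert d(q, corners[2]) < d(q, corners[0])
```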
Note that $\Vert qb_{1}\Vert=1$ and so $q\in\partial B_{1}$. Now, the location of $R$ furthest from $b_{2}$ must lie on $\partial B_{1}$ (since $b_{2}\in \partial B_{1}$ and $a\in B_{1}$), and so if $R$ contains any location outside of $E\cap D_{b_{2}}(1/2)$ then it must contain a location further up $\partial B_{1}$ than $q$.
Since $R$ is symmetric about the line through $a$ and $b_{1}$, $R$ could only contain a location above $q$ if $a$ is above the bisector of angle $q\widehat{b_{1}}b_{2}$ (denote this line $L$). Since we are assuming $a\in S$, we must have that $a^{(y)}$ (and so $\textrm{d}(a,b_{1}b_{2})$) is at least the second co-ordinate of the intersection between $\partial D_{b_{1}}(1/2)$ and $L$.
Writing $2\theta$ for $q\widehat{b_{1}}b_{2}$, we have that:
\begin{align}
\sin^{2} \theta & = \frac{1-\cos2\theta}{2}= \left(1-\frac{11/12}{\sqrt{(11/12)^{2}+(\sqrt{23}/12)^{2}}}\right)/2=\frac{1}{24}
\end{align}
Now, $a$ must be above the location which is $1/2$ along the line $L$ from $b_{1}$ (since $a\notin D_{b_{1}}(1/2)$). Thus:
\begin{align}
a^{(y)} & \geq \frac{1}{2}\sin\theta = \frac{1}{2}\frac{1}{\sqrt{24}} = \frac{1}{4\sqrt{6}}
\end{align}
\end{proof}
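Again not part of the paper, the final angle computation can be verified numerically (coordinates as fixed in the proof above):

```python
import math

# 2*theta is the angle q-b1-b2, with q = (11/12, sqrt(23)/12) and b1 = (0,0)
assert abs(math.hypot(11/12, math.sqrt(23)/12) - 1.0) < 1e-12  # ||q b1|| = 1
cos2t = (11/12) / math.hypot(11/12, math.sqrt(23)/12)

sin_sq = (1 - cos2t) / 2                 # sin^2(theta) = (1 - cos 2theta)/2
assert abs(sin_sq - 1/24) < 1e-12

bound = 0.5 * math.sqrt(sin_sq)          # a^(y) >= (1/2) sin(theta)
assert abs(bound - 1/(4*math.sqrt(6))) < 1e-12
assert bound > 0.102                     # the constant quoted in the lemma
```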
We want to bound the distance between a point and an edge in a different component independently of the length of the edge. We do this by applying Lemma~\ref{edgelengths} if the edge is short, and Lemma~\ref{farapart1} if the edge is long:
\begin{cor}\label{farapart2}
With $r$ as defined in Lemma~\ref{edgelengths}, we have that if $b_{1}$ and $b_{2}$ are in a component $X$ with $b_{1}b_{2}\in G$, and $a\notin X$, then:
\begin{align}
\textrm{d}(a,b_{1}b_{2}) & > \frac{r}{5}
\end{align}
\end{cor}
\begin{proof}
Suppose $b_{1}$, $b_{2}$ and $a$ are as above and let $\Vert b_{1} b_{2} \Vert=\rho$.
If $\rho\leq \frac{4\sqrt{6}}{5}r$: We may assume $\Vert ab_{1}\Vert\leq \Vert ab_{2}\Vert$. Then the perpendicular projection of $a$ onto $b_{1}b_{2}$ is at most $\rho/2$ from $b_{1}$. Thus, since $ab_{1}$ is not an edge of $G$, Lemma~\ref{edgelengths} tells us that $\Vert ab_{1}\Vert\geq r$ and so:
\begin{align}
\textrm{d}(a,b_{1}b_{2}) & \geq \sqrt{r^{2}-(\rho/2)^{2}} \geq \sqrt{r^{2}-(\frac{2\sqrt{6}}{5}r)^{2}}=\frac{r}{5}
\end{align}
If $\rho\geq\frac{4\sqrt{6}}{5}r$: By Lemma~\ref{farapart1} we have that:
\begin{align}
\textrm{d}(a,b_{1}b_{2}) & \geq \frac{1}{4\sqrt{6}}\rho\geq \frac{r}{5}
\end{align}
\end{proof}
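As a quick check (ours, not the paper's), the two branches of the proof agree at the critical edge length $\rho=\frac{4\sqrt{6}}{5}r$, where both give exactly $r/5$:

```python
import math

r = 1.0                            # the bound scales linearly in r, so fix r = 1
rho = 4 * math.sqrt(6) / 5 * r     # the critical edge length where the cases meet

short_branch = math.sqrt(r**2 - (rho / 2)**2)   # short-edge case via Lemma "edgelengths"
long_branch = rho / (4 * math.sqrt(6))          # long-edge case via Lemma "farapart1"

assert abs(short_branch - r / 5) < 1e-12
assert abs(long_branch - r / 5) < 1e-12
```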
\begin{rem*}
Lemma~\ref{farapart1} can be improved, with substantial extra work, to show the distance between $a$ and $b_{1}b_{2}$ is at least $0.1934\rho$, which is best possible.
\end{rem*}
\subsection{Proof of Theorem~\ref{nocrossing} - Edges in different components cannot cross}\label{NoCrossSection}
In this section we will show:
\begin{thm-hand}{\ref{nocrossing}}
If $k=c\log n$, then, for $c>0.7102$, no two edges in different components inside $G$ will cross with high probability.
\end{thm-hand}
The value $c=0.7102$ is strictly less than the current lower bound on the connectivity constant (i.e. $c=0.7209$), and so edges in different components stop crossing before everything is connected.
The proof of Theorem~\ref{nocrossing} will split into three main parts. In the first we prove that for two such edges to cross, there must be a fairly specific set-up of points; more precisely, it must look similar to the construction in Figure~\ref{FigCrossing}. In the second we show that we can define two regions within this set-up, one of which has high density (containing at least $k$ points, and denoted $H$), and the other of which is empty (denoted $L$). In the third we bound the relative sizes of these two regions, and so obtain a bound on the likelihood of such a set-up occurring, by using the following result of Balister, Bollob\'{a}s, Sarkar and Walters [\ref{MW}], proved using simple properties of the Poisson process:
\begin{lem}\label{Full-Empty}\mbox{}
If $X$ and $Y$ are two regions of the plane, then:
\begin{align}
\mathbb{P}(\#X\geq k\text{ and }\#Y=0) & \leq \left(\frac{|X|}{|X|+|Y|}\right)^{k}\notag
\end{align}
\end{lem}
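The paper does not spell the proof out, but the inequality can be checked numerically under the natural reading: for disjoint regions under a unit-intensity Poisson process, the counts $\#X$ and $\#Y$ are independent Poissons with means $|X|$ and $|Y|$. A sketch under that assumption (function names are ours):

```python
import math

def lhs(x, y, k):
    """P(#X >= k and #Y = 0) for disjoint regions of areas x and y
    under a unit-intensity Poisson process (independent Poisson counts)."""
    p_ge_k = 1.0 - sum(math.exp(-x) * x**i / math.factorial(i) for i in range(k))
    return p_ge_k * math.exp(-y)

def rhs(x, y, k):
    return (x / (x + y))**k

for x in (0.5, 1.0, 2.0, 5.0, 10.0):
    for y in (0.1, 0.5, 1.0, 3.0):
        for k in (1, 2, 5, 10):
            assert lhs(x, y, k) <= rhs(x, y, k) + 1e-12
```

The bound itself follows by conditioning on the total number of points in $X\cup Y$: given $m\geq k$ points, each lands in $X$ independently with probability $|X|/(|X|+|Y|)$.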
It is worth remarking that there exists a constant $c'$ such that if $k<c'\log n$ then, with high probability, there will be edges in different components that cross: we have a construction in which two edges in different components cross (see Figure~\ref{FigCrossing} in the introduction). Now, the construction has 5 dense regions, which we denote $H_{i}$ ($i=1,\ldots,5$), each of which contains $m_{i}$ points ($\sum_{i}m_{i}=4k$), and a large empty region, which we will denote $L$. If we have a region of the right shape with an area equal to the number of points in the construction (namely $4k$), then, writing $p_{n}$ for the probability of the construction occurring in that region, we have:
\begin{align}
p_{n}&>\prod_{i=1}^{5}\left(\frac{|H_{i}|}{|L\cup H_{i}|}\right)^{m_{i}}\notag\\
&\geq\min_{i}\left(\frac{|H_{i}|}{|L\cup H_{i}|}\right)^{4k}\notag\\
&=n^{4c'\min_{i}\log\frac{|H_{i}|}{|L\cup H_{i}|}}\label{ExpIntro}
\end{align}
when $k=c'\log n$. Now, by taking $c'$ to be small enough, we can make the exponent of (\ref{ExpIntro}) arbitrarily close to $0$, and so the probability of such a set-up occurring can be made at least $n^{-\varepsilon}$ for any $\varepsilon>0$. Since the region has an area of $\textrm{O}(\log n)$, we can fit $\Theta(n/\log n)$ disjoint copies of it into $S_{n}$. Thus if we partition $S_{n}$ into $\Theta(n/\log n)$ regions in each of which the set-up could occur, with high probability it will occur in at least one of them, and so $G$ will contain components with crossing edges with high probability.
\subsubsection{The set-up of the points}
To prove the result, we need to refer to several specific regions and locations within $S_{n}$, and so, to make the argument easier to follow, all definitions and notation within this section are collated, in the order in which they appear, in Appendix~\ref{DefApp}, in addition to being defined inside this section.
\begin{definition}
We say that the ordered set of points: $(a_{1}, a_{2}, b_{1}, b_{2})$ forms a \emph{crossing pair} if:
\begin{itemize}
\item The straight line segments $a_{1}a_{2}$ and $b_{1}b_{2}$ intersect and are both edges of the graph $G$,
\item the points $a_{1}$ and $a_{2}$ are in a different component from $b_{1}$ and $b_{2}$,
\item $\Vert a_{1}a_{2}\Vert\leq\Vert b_{1}b_{2}\Vert$, $\Vert a_{1}b_{1}\Vert\leq \Vert a_{1}b_{2}\Vert$ and $\textrm{d}(a_{1},b_{1}b_{2})\leq\textrm{d}(a_{2},b_{1}b_{2})$.
\end{itemize}
\end{definition}
Note that any four points that meet the first two conditions must also satisfy the third after a suitable relabelling of the points, so that if two edges from different components cross then some four points must form a crossing pair.
We will use this definition of crossing pairs to determine exactly how a set-up with two edges from different components crossing must look. Given a crossing pair, we introduce Cartesian co-ordinates and rescale exactly as in Lemma~\ref{farapart1} throughout this section (i.e. setting $b_{1}=(0,0)$, $b_{2}=(1,0)$, $a_{1}^{(x)}\leq 1/2$, $a_{1}^{(y)}\geq 0$ and $a_{2}^{(y)}\leq 0$). We now introduce some definitions of regions (dependent on $a_{1}$, $a_{2}$, $b_{1}$ and $b_{2}$), which we will use to pinpoint where these points can lie in relation to each other:
\begin{definition}
Let $r_{i}=\min\{\Vert a_{i}b_{1}\Vert,\Vert a_{i}b_{2}\Vert\}$ (so that $r_{1}=\Vert a_{1}b_{1}\Vert$) and define $A_{i}=D_{a_{i}}(r_{i})$ and $B_{i}=D_{b_{i}}(\Vert b_{1}b_{2}\Vert)=D_{b_{i}}(1)$ (see Figure~\ref{FigA1A2B1B2}).
\begin{figure}[h]
\centering
\includegraphics[height=50mm]{KnearA1A2B1B2.eps}
\caption{The regions $A_{1}$, $A_{2}$, $B_{1}$ and $B_{2}$.}
\label{FigA1A2B1B2}
\end{figure}
\end{definition}
\begin{definition}
We write $T$ for the isosceles triangle with vertices $b_{1}$, $b_{2}$ and $w$ where $w=(\frac{1}{2},\frac{1}{2\sqrt{3}})$, and $S_{1}$ for the region $\left(T\cap\{q:q^{(x)}\leq1/2\}\right)\setminus D_{b_{1}}(1/2)$ (This will turn out to be the region which can contain $a_{1}$. See Figure~\ref{RegionForA1}).
\begin{figure}[h]
\centering
\includegraphics[height=60mm]{KnearS1.eps}
\caption{The shaded region is the region $S_{1}$ (which can contain $a_{1}$).}
\label{RegionForA1}
\end{figure}
\end{definition}
\begin{definition}
We write $T_{2}$ for the equilateral triangle with vertices $b_{1}$, $b_{2}$ and $z$, where $z=(\frac{1}{2},-\frac{\sqrt{3}}{2})$, and $S_{2}$ for the region $T_{2}\cap A_{1}\cap\{x:x\widehat{b_{1}}b_{2}> \pi/6\textrm{ and }x\widehat{b_{2}}b_{1}> \pi/6\}$ (This will turn out to be the region that can contain $a_{2}$. See Figure~\ref{RegionForA2}).
\begin{figure}[h]
\centering
\includegraphics[height=60mm]{KnearRa2.eps}
\caption{The shaded region is the region $S_{2}$ (which can contain $a_{2}$).}
\label{RegionForA2}
\end{figure}
\end{definition}
\begin{definition}
For any set $S$, we define $S^{+}$ to be the part of $S$ that lies above the $x$-axis (i.e. the line through $b_{1}$ and $b_{2}$), and $S^{-}$ to be the part of $S$ that lies below the $x$-axis.
\end{definition}
To show that $a_{1}\in S_{1}$ and $a_{2}\in S_{2}$ (as well as for later use), we will need the following generalisation of Lemma~\ref{D1/2} to pairs of points:
\begin{lem}\label{IntersectUnion}
Suppose $w$, $x$, $y$ and $z$ are any four points such that:
\begin{enumerate}
\item $D^{k}(w)\cup D^{k}(x)\subset D^{k}(y)\cup D^{k}(z)$,\label{CondUnion}
\item $D^{k}(w)\cap D^{k}(x)\subset D^{k}(y)\cap D^{k}(z)$.\label{CondInter}
\end{enumerate}
Then at least one of $wy$, $wz$, $xy$ and $xz$ is an edge of $G$.
\end{lem}
\begin{proof}
Let $\#(D^{k}(w)\cap D^{k}(x))=m$ and $\#(D^{k}(y)\cap D^{k}(z))=\mu$. Then, by condition~\ref{CondInter}, $m\leq\mu$. However, $\#(D^{k}(w)\cup D^{k}(x))=2k+2-m$ and $\#(D^{k}(y)\cup D^{k}(z))=2k+2-\mu$, and so condition~\ref{CondUnion} implies $2k+2-m\leq2k+2-\mu$ and thus $m\geq\mu$. Putting these together, we must have $m=\mu$.
This tells us that $\#(D^{k}(w)\cup D^{k}(x))=\#(D^{k}(y)\cup D^{k}(z))$, and so, by condition~\ref{CondUnion}, we have $\Gamma^{+}(w)\cup\Gamma^{+}(x)\cup\{w,x\}=\Gamma^{+}(y)\cup\Gamma^{+}(z)\cup\{y,z\}$. In particular $w,x\in\Gamma^{+}(y)\cup\Gamma^{+}(z)$ and $y,z\in\Gamma^{+}(w)\cup\Gamma^{+}(x)$, and so each of $w$ and $x$ receives an out-edge from at least one of $y$ and $z$ and each of $y$ and $z$ receives an out-edge from at least one of $w$ and $x$. We may assume by symmetry that $\overrightarrow{wy}$ is an out-edge.
Now, if $wy$ were not an edge of $G$, then $\overrightarrow{zw}$ must be an out-edge (since one of $\overrightarrow{yw}$ and $\overrightarrow{zw}$ must be). Similarly, if $zw$ is not an edge of $G$ either, then $\overrightarrow{xz}$ must be an out edge. Continuing, we find that either one of $wy$, $wz$, $xy$ and $xz$ is an edge of $G$, or all of $\overrightarrow{wy}$, $\overrightarrow{zw}$, $\overrightarrow{xz}$, $\overrightarrow{yx}$ are out-edges, but none are in-edges. This would imply:\[\Vert wy\Vert<\Vert zw\Vert<\Vert xz\Vert<\Vert yx\Vert<\Vert wy\Vert,\]which is impossible.
\end{proof}
We now finish this sub-section by showing that $a_{1}\in S_{1}$ and $a_{2}\in S_{2}$, and proving some other basic facts about crossing pairs:
\begin{lem}\label{Properties}
Suppose $(a_{1}, a_{2}, b_{1}, b_{2})$ forms a crossing pair, then:
\begin{enumerate}
\item \label{a1a2Short}$a_{1}a_{2}$ must be the shortest of the six segments determined by the convex quadrilateral on $a_{1}$, $a_{2}$, $b_{1}$ and $b_{2}$ (its four sides and two diagonals),
\item \label{AsAndBs}we must have $0<a_{1}^{(x)},a_{2}^{(x)}<1$, and $B_{i}\subset D^{k}(b_{i})$ and $\Gamma^{+}(a_{i})\subset A_{i}$ for $i=1,2$,
\item \label{Positiona1}$a_{1}\in S_{1}$,
\item \label{T2Lemma}for any point $p\in T_{2}$ with $b_{1}$, $b_{2}\notin D^{k}(p)$, if either of $b_{1}\widehat{b_{2}}p\leq\pi/6$ or $b_{2}\widehat{b_{1}}p\leq\pi/6$ then $D^{k}(p)\subset B_{1}\cup B_{2}$,
\item \label{Positiona2}$a_{2}\in S_{2}$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item Since $a_{1}a_{2}$ and $b_{1}b_{2}$ intersect, the four points must form a convex quadrilateral with $a_{1}a_{2}$ and $b_{1}b_{2}$ as the diagonals.
Suppose $a_{1}b_{1}$ were shorter than $a_{1}a_{2}$ (and so also shorter than $b_{1}b_{2}$); then $a_{1}\in D^{k}(b_{1})$ as $b_{2}$ is, and $b_{1}\in D^{k}(a_{1})$ as $a_{2}$ is. Thus $a_{1}b_{1}$ would be an edge of $G$, contradicting $(a_{1}, a_{2}, b_{1}, b_{2})$ being a crossing pair. Similarly, $a_{i}b_{j}$ cannot be shorter than $a_{1}a_{2}$ for any $i$ and $j$.
\item We know that $b_{1}b_{2}\in G$, and thus $B_{i}\subset D^{k}(b_{i})$, and know already that $a_{1}^{(x)}\leq \frac{1}{2}$.
Suppose that $a_{1}^{(x)}\leq0$. Since $a_{1}a_{2}$ and $b_{1}b_{2}$ intersect, we must have $a_{2}^{(x)}>0$. But then $\Vert b_{1}a_{2}\Vert<\Vert a_{1}a_{2}\Vert$, contradicting part~\ref{a1a2Short}. Thus $a_{1}^{(x)}>0$. The same argument shows that $a_{2}^{(x)}>0$ and $a_{2}^{(x)}<1$.
By the above, and using $\Vert a_{1}a_{2}\Vert\leq \Vert b_{1}b_{2}\Vert=1$ as well as $\textrm{d}(a_{1},b_{1}b_{2})\leq\textrm{d}(a_{2},b_{1}b_{2})$, we have that $0\leq a_{1}^{(y)}=\textrm{d}(a_{1},b_{1}b_{2})\leq\frac{1}{2}$. We also know that $0<a_{1}^{(x)}\leq \frac{1}{2}$, and so $\Vert a_{1}b_{1}\Vert\leq\frac{1}{\sqrt{2}}$, and in particular $a_{1}\in B_{1}$.
Thus $\overrightarrow{b_{1}a_{1}}$ is an out edge, and so $b_{1}\notin\Gamma^{+}(a_{1})$ as $a_{1}b_{1}$ is not an edge of $G$. This implies that $b_{2}\notin\Gamma^{+}(a_{1})$ as $a_{1}^{(x)}\leq\frac{1}{2}$. Thus $D^{k}(a_{1})\subset A_{1}$.
Since neither $b_{1}$ nor $b_{2}$ is in the interior of $A_{1}$, and $0<a_{1}^{(x)}\leq \frac{1}{2}$, we must have $(\partial A_{1})^{-}\subset B_{1}\cap B_{2}$. Thus $D^{k}(a_{1})^{-}\subset A_{1}^{-}\subset B_{1}\cap B_{2}$, and so $a_{2}\in B_{1}\cap B_{2}$, implying that $\overrightarrow{b_{1}a_{2}}$ and $\overrightarrow{b_{2}a_{2}}$ are both out edges. Thus neither $b_{1}$ nor $b_{2}$ is in $\Gamma^{+}(a_{2})$, so $D^{k}(a_{2})\subset A_{2}$.
\item We must have $2d(a_{1},b_{1}b_{2})\leq \Vert a_{1}a_{2}\Vert \leq \Vert a_{1}b_{1}\Vert$, since $0<a_{1}^{(x)},a_{2}^{(x)}<1$ and $a_{1}a_{2}$ is the shortest edge in our quadrilateral, and so in particular:
\begin{align*}
d(a_{1},b_{1}b_{2})\leq \frac{1}{2}\Vert a_{1}b_{1}\Vert
\end{align*}
Thus, using $\Vert a_{1}b_{1}\Vert\leq\Vert a_{1}b_{2}\Vert$:
\begin{align}
a_{1}\widehat{b_{2}}b_{1}\leq a_{1}\widehat{b_{1}}b_{2}\leq \sin^{-1}(\frac{1}{2})=\pi/6
\end{align}
These two angle conditions define exactly the region $T$, and since $a_{1}^{(x)}\leq1/2$ and $a_{1}\notin D_{b_{1}}(1/2)$ (by Lemma~\ref{D1/2}), we have:
\begin{align}
a_{1}\in \left(T\cap\{q:q^{(x)}\leq1/2\}\right)\setminus D_{b_{1}}(1/2) = S_{1}\notag
\end{align}
\item Let $p\in T_{2}$ be such that $b_{1}, b_{2}\notin D^{k}(p)$. Note that $z$ is the closest location to $p$ in $\partial (B_{1}\cup B_{2})$ (since $p\in T_{2}$), and so in particular $D_{p}(\Vert pz\Vert)\subset B_{1}\cup B_{2}$. Thus it suffices to show that $z\notin D^{k}(p)$.
If $b_{1}\widehat{b_{2}}p\leq\pi/6$, then $\Vert b_{1}p\Vert\leq \Vert pz\Vert$, since the line $\{q:b_{1}\widehat{b_{2}}q=\pi/6\}$ bisects the angle $b_{1}\widehat{b_{2}}z$. Thus in particular, $z\notin D^{k}(p)$, since $b_{1}\notin D^{k}(p)$.
Similarly, if $b_{2}\widehat{b_{1}}p\leq\pi/6$ then $z\notin D^{k}(p)$.
\item Note that the $a_{i}$ and $b_{i}$ fulfil condition~\ref{CondInter} of Lemma~\ref{IntersectUnion} (with the identification, in the notation of Lemma~\ref{IntersectUnion}, of $a_{1}=w$, $a_{2}=x$, $b_{1}=y$ and $b_{2}=z$). Moreover, if we had $A_{1}\cup A_{2}\subset B_{1}\cup B_{2}$ then, since $D^{k}(a_{i})\subset A_{i}$ and $B_{i}\subset D^{k}(b_{i})$, condition~\ref{CondUnion} would hold as well, and Lemma~\ref{IntersectUnion} would join some $a_{i}$ to some $b_{j}$, contradicting the $a_{i}$ and $b_{i}$ being in different components. Thus $A_{1}\cup A_{2}\not\subset B_{1}\cup B_{2}$, and so at least one of $a_{1}$ and $a_{2}$ must be closer to a point outside of $B_{1}\cup B_{2}$ than it is to $b_{1}$ and $b_{2}$. This cannot be $a_{1}$ by parts~\ref{Positiona1} and \ref{T2Lemma}. Thus $a_{2}$ is closer to a point outside of $B_{1}\cup B_{2}$ than it is to $b_{1}$ or $b_{2}$.
Since $a_{1}a_{2}$ is the shortest edge in both triangles $a_{1}a_{2}b_{1}$ and $a_{1}a_{2}b_{2}$, we have $a_{1}\widehat{b_{i}}a_{2}\leq\pi/3$ for $i=1,2$, and so $a_{2}\in T_{2}$. Thus by part~\ref{T2Lemma}, $a_{2}\widehat{b_{1}}b_{2}> \pi/6$ and $a_{2}\widehat{b_{2}}b_{1}> \pi/6$. We also know that $a_{2}\in A_{1}$ as $a_{1}a_{2}\in G$, whence:
\begin{align}
a_{2}\in T_{2}\cap A_{1}\cap\{x:x\widehat{b_{1}}b_{2}> \pi/6\textrm{ and }x\widehat{b_{2}}b_{1}> \pi/6\}=S_{2}\notag
\end{align}
\end{enumerate}
\end{proof}
\subsubsection{The dense and empty regions}
We want to define our regions of high and low density, but first need some more basic regions that they will be built from. We define:
\begin{itemize}
\item $R_{i}$ to be $D^{k}(a_{1})\cap (B_{i}\setminus B_{j})$ where $i\neq j$,
\item $E_{i}$ to be the ellipse defined by the equation $\Vert a_{1}x\Vert+\Vert b_{i}x\Vert\leq1$ (this has its centre half way between $a_{1}$ and $b_{i}$, major semi-axis of length $1/2$ running along the line $a_{1}b_{i}$, and minor semi-axis of length $\frac{\sqrt{1-\Vert a_{1}b_{i}\Vert^{2}}}{2}$),
\item $F_{i}$ to be the ellipse defined by the equation $\Vert a_{2}x\Vert+\Vert b_{i}x\Vert\leq1$,
\item $M$ to be $D^{k}(a_{1})\cap D^{k}(a_{2})$.
\end{itemize}
We can now define all our regions of high and low density (and will prove they are such shortly). All these regions are shown in Figure~\ref{FigHandL}. The empty regions are:
\begin{itemize}
\item $L_{1}=(D^{k}(a_{1})^{+}\cap E_{1}\cap D_{b_{1}}(1/2))\setminus M$
\item $L_{2}=(D^{k}(a_{1})^{+}\cap E_{2}\cap D_{b_{2}}(1/2))\setminus M$
\item $L_{3}=M^{+}\cap (D_{b_{1}}(1/2)\cup D_{b_{2}}(1/2))$
\item $L_{4}=T_{2}\cap D^{k}(a_{2})\cap \{x:x\widehat{b_{1}}b_{2}\leq \pi/6\textrm{ or }x\widehat{b_{2}}b_{1}\leq \pi/6\}$
\item $L_{5}=(D^{k}(a_{2})^{-}\cap F_{1}\cap D_{b_{1}}(1/2))\setminus T_{2}$
\item $L_{6}=(D^{k}(a_{2})^{-}\cap F_{2}\cap D_{b_{2}}(1/2))\setminus T_{2}$.
\end{itemize}
The high density regions are:
\begin{itemize}
\item $H_{1}=R_{1}\setminus L_{1}$
\item $H_{2}=R_{2}\setminus L_{2}$
\item $H_{3}=A_{2}^{-}\setminus (B_{1}\cup B_{2})$
\item $H_{4}=M^{+}\setminus L_{3}$.
\item $H_{5}=S_{2}$.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[height=100mm]{KnearHandL.eps}
\caption{The dark shaded region is $H$ and the light shaded region is $L$.}
\label{FigHandL}
\end{figure}
And we write:
\begin{align}
H & = \bigcup_{i=1}^{5}H_{i}\\
L & = \bigcup_{i=1}^{6}L_{i}
\end{align}
See Figure~\ref{FigHandL} for an illustration of this.
We want to show that $L$ is empty, and that $H$ contains at least $k$ points. To do this we will first show that $H\cup L$ contains at least $k$ points and then show that $\#L=0$.
\begin{lem}\label{Hdense}
With the regions as defined above, we have $\#(H\cup L)>k$.
\end{lem}
\begin{proof}
Note that $L_{4}\cup L_{5}\cup L_{6}\supset M^{-}\setminus S_{2}$, and thus: \begin{align}H\cup L\supset R_{1}\cup R_{2}\cup H_{3}\cup M\label{LHDense1}\end{align}
For ease of notation, let $\#(D^{k}(a_{1})\setminus (R_{1}\cup R_{2}\cup M))=\alpha$, $\#((D^{k}(a_{2})\cap B_{1}\cap B_{2})\setminus M)=\beta$ and $\# (D^{k}(a_{2})\cap(B_{i}\setminus B_{j}))=\gamma_{i}$, as shown in Figure~\ref{FigRegions1}.
\begin{figure}[h]
\centering
\includegraphics[height=80mm]{KnearRegions.eps}
\caption{The regions we are considering, with their number of points.}
\label{FigRegions1}
\end{figure}
We have the following by counting points in each of the $D^{k}(a_{i})$ (each of which must contain exactly $k+1$ points) and each of the $B_{i}$ (each of which can contain at most $k$ points other than $b_{i}$):
\begin{align}
\# R_{1}+ \# R_{2}+ \# M+ \alpha & = k+1\label{A1}\\
\# H_{3}+ \# M + \beta + \gamma_{1}+ \gamma_{2} & = k+1\label{A2}\\
\# R_{1}+ \# M + \alpha+ \beta+ \gamma_{1} & \leq k\label{B1}\\
\# R_{2}+ \# M + \alpha+ \beta+ \gamma_{2} & \leq k\label{B2}
\end{align}
(\ref{A1}) and (\ref{B2}) together tell us that:
\begin{align}
\# R_{1} + \# R_{2} + \# M + \alpha & \geq \# R_{2} + \# M + \alpha + \beta + \gamma_{2}+1\notag
\end{align}
Cancelling terms we get:
\begin{align}
\# R_{1} & \geq \beta + \gamma_{2}+1\label{R1ineq}
\end{align}
Similarly, (\ref{A1}) and (\ref{B1}) imply:
\begin{align}
\# R_{2}\geq\beta+\gamma_{1}+1\label{R2ineq}
\end{align}
Thus, by using (\ref{LHDense1}), (\ref{R1ineq}), (\ref{R2ineq}) and finally (\ref{A2}) we get:
\begin{align}
\#(H\cup L) & \geq\# H_{3} + \# M + \# R_{1} + \# R_{2}\notag\\
& \geq \# H_{3} + \# M + (\beta + \gamma_{2}+1)+(\beta+\gamma_{1}+1)\notag\\
& = (\# H_{3}+ \# M + \beta + \gamma_{1}+ \gamma_{2}) + (\beta + 2)\notag\\
& = k + \beta +3\notag\\
& > k\notag
\end{align}
\end{proof}
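The counting argument is pure integer arithmetic, so it can be checked mechanically; the enumeration below (ours, with its own variable ranges) fixes $k$, forces $\alpha$ and $\gamma_{2}$ from the equalities (\ref{A1}) and (\ref{A2}), filters on (\ref{B1}) and (\ref{B2}), and confirms the conclusion:

```python
k = 20
checked = 0
for R1 in range(4):
    for R2 in range(4):
        for M in range(k + 2):
            alpha = k + 1 - R1 - R2 - M      # forced by (A1)
            if alpha < 0:
                continue
            for beta in range(3):
                for g1 in range(3):
                    for H3 in range(k + 2):
                        g2 = k + 1 - H3 - M - beta - g1   # forced by (A2)
                        if g2 < 0:
                            continue
                        if R1 + M + alpha + beta + g1 > k:   # (B1)
                            continue
                        if R2 + M + alpha + beta + g2 > k:   # (B2)
                            continue
                        checked += 1
                        # conclusion: #(H ∪ L) >= H3 + M + R1 + R2 > k
                        assert H3 + M + R1 + R2 > k
assert checked > 0
```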
We next show that for each $i$, $\#L_{i}=0$.
\begin{lem}
$\#L_{1}=\#L_{2}=\#L_{5}=\#L_{6}=0$.
\end{lem}
\begin{proof}
Lemma~\ref{EllipseLemma} tells us that any point in $L_{1}$ has an out edge to both $a_{1}$ and $b_{1}$; but $L_{1}$ is contained inside both $D^{k}(a_{1})$ and $D^{k}(b_{1})$, so any such point would be joined in $G$ to both $a_{1}$ and $b_{1}$, putting them in the same component. Thus $L_{1}$ must be empty, and similarly for $L_{2}$, $L_{5}$ and $L_{6}$.
\end{proof}
The cases for $L_{3}$ and $L_{4}$ require slightly more work and are dealt with separately.
\begin{lem}\label{L4Empty}
$\# L_{4}=0$
\end{lem}
\begin{proof}
Note that $L_{4}$ is contained in the polygon, $P$, with corners (moving around its perimeter clockwise) at $b_{1}$, $b_{2}$, $u^{-}=(\frac{3}{4},-\frac{\sqrt{3}}{4})$, $w^{-}=(\frac{1}{2},-\frac{1}{2\sqrt{3}})$ and $v^{-}=(\frac{1}{4},-\frac{\sqrt{3}}{4})$. We will show that the left half of this region (namely the convex polygon $P^{l}$, with corners $b_{1}$, $(\frac{1}{2},0)$, $w^{-}$ and $v^{-}$) is contained within $F_{1}$, and then use Lemma~\ref{EllipseLemma} to show that we can have no points in $L_{4}\cap P^{l}$. To do this it is convenient to first bound $S_{2}$ into a convex polygon:
By Lemma~\ref{farapart1}, $a_{1}^{(y)}\geq 0.102$, and thus the minimal possible $y$ co-ordinate of a point $q\in M^{-}$ (and so for $a_{2}$) can be no less than the minimum when taking $a_{1}$ to be at $(1/2,0.102)$ and $D^{k}(a_{1})=A_{1}$. This bounds $q^{(y)}$ (and in particular $a_{2}^{(y)}$) below by:
\[q^{(y)} \geq 0.102 - \sqrt{(1/2)^2+0.102^2}>v^{-(y)}=-\frac{\sqrt{3}}{4}\label{a2ymin}\]
Thus $S_{2}$ is contained in the triangle $T_{a_{2}}$, with corners $u^{-}$, $v^{-}$ and $w^{-}$.
By convexity, to check that $P^{l}\subset F_{1}$ it is enough to check that for every corner of $P^{l}$ and every corner of $T_{a_{2}}$ (labelling these corners by $p_{i}$ and $t_{j}$ respectively) the equation\[\Vert b_{1}p_{i}\Vert +\Vert p_{i}t_{j}\Vert\leq1\]holds. This is the case (calculations omitted), and so $P^{l}\subset F_{1}$.
Lemma~\ref{EllipseLemma} then tells us that any point in $L_{4}\cap P^{l}$ must have an out-edge to both $b_{1}$ and $a_{2}$, but $P^{l}\subset B_{1}$ and $L_{4}\subset D^{k}(a_{2})$, so any point in $L_{4}\cap P^{l}$ would then be joined to both $b_{1}$ and $a_{2}$ in $G$, and so no such point can exist. Similarly, defining $P^{r}$ to be the right half of $P$, $L_{4}\cap P^{r}$ must be empty, and so $\#L_{4}=0$.
\end{proof}
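The corner calculations omitted above can be checked mechanically; a Python sketch (not part of the paper), taking the left half $P^{l}$ of $P$ to be the quadrilateral with corners $b_{1}$, $(\frac{1}{2},0)$, $w^{-}$ and $v^{-}$:

```python
import math

d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
s3 = math.sqrt(3)

b1 = (0.0, 0.0)
u_minus = (0.75, -s3/4)
v_minus = (0.25, -s3/4)
w_minus = (0.5, -1/(2*s3))

# a_2's minimal possible height stays above v^-'s height
assert 0.102 - math.sqrt(0.25 + 0.102**2) > -s3/4

Pl = [b1, (0.5, 0.0), w_minus, v_minus]   # corners of the left half of P
Ta2 = [u_minus, v_minus, w_minus]         # corners of T_{a_2}, which contains a_2

# every corner of P^l is within the ellipse F_1 for every candidate a_2
for p in Pl:
    for t in Ta2:
        assert d(b1, p) + d(p, t) <= 1 + 1e-9
```

(Some of these sums equal $1$ exactly, so the check is tight.)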
\begin{lem}\label{L3Empty}
We have $L_{3}\cap \{p:p^{(x)}<\frac{1}{2}\}\subset E_{1}$ and $L_{3}\cap \{p:p^{(x)}\geq\frac{1}{2}\}\subset E_{2}$, and so in particular $\# L_{3}=0$.
\end{lem}
\begin{proof}
We show that $L_{3}$ is contained in the polygon $Q$ with corners (moving around its perimeter clockwise) at $b_{1}$, $u^{+}=(\frac{1}{6},\frac{1}{2\sqrt{3}})$, $v^{+}=(\frac{5}{6},\frac{1}{2\sqrt{3}})$ and $b_{2}$. The proof then follows as in Lemma~\ref{L4Empty}: we show that the left and right halves of $Q$ are contained in $E_{1}$ and $E_{2}$ respectively, and use this to rule out any points in $L_{3}$.
Writing $z^{+}$ for the location $(\tfrac{1}{2},\tfrac{\sqrt{3}}{2})$, we have that $b_{1}\widehat{b_{2}}z^{+}=b_{2}\widehat{b_{1}}z^{+}=\frac{\pi}{3}$. Now, $L_{3}\subset A_{2}^{+}$ (by Lemma~\ref{Properties} part \ref{AsAndBs}), and $a_{2}\widehat{b_{i}}z^{+}\geq\frac{\pi}{2}$ (by Lemma~\ref{Properties} part \ref{Positiona2}), and thus, since $a_{2}\widehat{b_{i}}b_{j}\geq\frac{\pi}{6}$ ($i\neq j$), it follows that $L_{3}$ is contained in the triangle with vertices $b_{1}$, $b_{2}$ and $z^{+}$. Now, $u^{+}$ and $v^{+}$ lie on the lines $b_{1}z^{+}$ and $b_{2}z^{+}$ respectively, and so we just need to show that $L_{3}$ cannot come too high up inside this triangle. By Lemma~\ref{Properties} part \ref{Positiona2}, $a_{2}^{(y)}\leq -\frac{1}{2\sqrt{3}}$, and thus the maximal possible $y$ co-ordinate of a point $q\in M^{+}$ can be no more than the maximum when taking $a_{2}$ to be at $(1/2,-\frac{1}{2\sqrt{3}})$ and $D^{k}(a_{2})=A_{2}$. This bounds $q^{(y)}$ above by: \[q^{(y)} \leq \frac{1}{2\sqrt{3}}\]Thus every point in $M^{+}$, and hence every point in $L_{3}$, is inside $Q$.
Write $Q^{l}$ for the left half of $Q$ and $q_{i}$ for its corners, and note that $S_{1}$ (and hence $a_{1}$) is contained in the convex polygon $T_{a_{1}}$ with corners $t_{j}$ at $(\frac{1}{2},0)$, $(\frac{\sqrt{3}}{4},\frac{1}{4})$, $w$ and $(1-\frac{\sqrt{3}}{4},\frac{1}{4})$. Since all of the equations $\Vert b_{1}q_{i}\Vert+\Vert q_{i}t_{j}\Vert\leq 1$ hold, it follows by convexity that $Q^{l}\subset E_{1}$. Lemma~\ref{EllipseLemma} and the definition of $L_{3}$ then tell us that we can have no points inside $L_{3}\cap Q^{l}$. Similarly we can have no points in $L_{3}\cap Q^{r}$, where $Q^{r}$ is the right half of $Q$, and so $\# L_{3}=0$.
\end{proof}
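As with Lemma~\ref{L4Empty}, the corner checks can be verified numerically; a sketch (not part of the paper):

```python
import math

d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
s3 = math.sqrt(3)

b1 = (0.0, 0.0)
# corners of Q^l, the left half of Q
Ql = [b1, (1/6, 1/(2*s3)), (0.5, 1/(2*s3)), (0.5, 0.0)]
# corners of the convex polygon T_{a_1} containing S_1 (and hence a_1)
Ta1 = [(0.5, 0.0), (s3/4, 0.25), (0.5, 1/(2*s3)), (1 - s3/4, 0.25)]

# every corner of Q^l lies inside the ellipse E_1 for every candidate a_1
for q in Ql:
    for t in Ta1:
        assert d(b1, q) + d(q, t) <= 1 + 1e-9
```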
Putting Lemmas~\ref{Hdense}--\ref{L3Empty} together we have:
\begin{lem}\label{HandLLemma}
$\#H\geq k$ and $\#L=0$.\hfill$\square$
\end{lem}
\subsubsection{Bounding the relative areas of $H$ and $L$ and the proof of Theorem~\ref{nocrossing}}
We define $\rho_{1}$ and $\rho_{2}$ to be the radii of $D^{k}(a_{1})$ and $D^{k}(a_{2})$ respectively, and now move on to bound the relative areas of $H$ and $H\cup L$. However, the regions defined above are quite complicated in shape, and so computing the relative areas, even for particular positions of $a_{1}$ and $a_{2}$ and given values of $\rho_{1}$ and $\rho_{2}$, involves some complicated integrals. Moreover, we need to bound the relative areas over all possible positions of $a_{1}$ and $a_{2}$ and all allowable values of $\rho_{1}$ and $\rho_{2}$. To obtain a bound we will thus break things down into finitely many cases as follows:
We first tile $S_{n}$ with small squares and then consider the possible pairs of tiles which can contain $a_{1}$ and $a_{2}$. For each such pair we bound $|H|$ above and $|L|$ below, and by taking the worst pair we obtain bounds on $|H|$ and $|L|$ valid over all positions of $a_{1}$ and $a_{2}$.
Practically, this requires the use of a computer, but will still be completely rigorous.
To make the calculations as simple as possible, we wish to reduce the number of variables we have to maximise and minimise over. In light of this we split $L$ and $H$ into two parts, each of whose sizes depends on the position of only one of $a_{1}$ and $a_{2}$ (we will verify this on a case-by-case basis later); namely $L$ splits into $L^{+}=L_{1}\cup L_{2}\cup L_{3}$ and $L^{-}=L_{4}\cup L_{5}\cup L_{6}$, and $H$ splits into $H_{1}\cup H_{2}\cup S_{2}$ and $H_{3}\cup H_{4}$. Further, it is easy to see that for any fixed positions of $a_{1}$ and $a_{2}$, the area of any part of $H$ will be maximised by maximising $\rho_{1}$ and $\rho_{2}$, and the area of any part of $L$ will be minimised by minimising $\rho_{1}$ and $\rho_{2}$. Thus, for each of the given parts of $H$ or $L$ above, we need only bound the relevant area over the position of one of $a_{1}$ and $a_{2}$.
Our exact method is as follows: We tile $S_{n}$ with small squares of side length $s$, aligned with the edge $b_{1}b_{2}$, i.e. $b_{1}b_{2}$ runs along the edges of all the squares it touches, and both $b_{1}$ and $b_{2}$ are on the corners of squares (to prove our bound, we will use a square side length of $s=0.001\Vert b_{1}b_{2}\Vert$). Whilst bounding an area dependent on the position of $a_{i}$, and given some small square $X$ with centre $x$, we define $\sigma^{X}_{i}$ and $\rho^{X}_{i}$ to be the minimum and maximum values of $\rho_{i}$ over all possible positions of $a_{i}$ within $X$. We can then bound the area of the relevant part of $H$ above by counting every square that could be within that part of $H$, that is, every square containing any location within $\rho^{X}_{i}$ of any location in $X$, and bound the area of the relevant part of $L$ below by counting only squares that are entirely within that part of $L$ and entirely within $\sigma^{X}_{i}$ of every location within $X$. In fact, it suffices to count every square that has its centre within $\rho^{X}_{i}+s\sqrt{2}$ of $x$ for the bound on $H$, and only squares that have their centres within $\sigma^{X}_{i}-s\sqrt{2}$ of $x$ for the bound on $L$, since this can only weaken the bounds obtained. We can then bound the areas of the relevant parts of $H$ and $L$ above and below respectively by taking the maximum and minimum of these sums over every square that could possibly contain $a_{i}$.
Since the regions we are using are often dependent on the ellipses $E_{i}$ and $F_{i}$, and these are dependent on the positions of $a_{1}$ and $a_{2}$, it is useful to define:\[E_{i}^{X}=\{q\in S_{n}:\underset{a\in X}{\text{max}}\,\Vert b_{i}q\Vert+\Vert aq\Vert\leq1\}\]Similarly we define $F_{i}^{X}$ when $a_{2}\in X$. Thus $E_{i}^{X}$ is the intersection of the $E_{i}(a_{1})$ over all possible positions of $a_{1}$ within $X$. It is worth noting that when a region in $L$ depends on an ellipse it is contained within the ellipse, and when a region in $H$ depends on an ellipse it is outside the ellipse, so we will always want to use the intersection of the possible ellipses to bound our area, rather than a union. Note also that any small square $Y$, with centre $y$, such that $\Vert b_{i}y\Vert+\Vert xy\Vert\leq 1-\frac{3\sqrt{2}}{2}s$, will be entirely contained within $E_{i}^{X}$.
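The tile-counting scheme above is simple enough to sketch in code. The following Python fragment is a purely illustrative toy (a unit square, a single disc in place of the actual regions, and invented radii in place of $\sigma^{X}_{i}$ and $\rho^{X}_{i}$); it only shows how counting tile centres inside shrunken and enlarged radii yields rigorous lower and upper bounds on an area:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Illustrative sketch of the tile-counting bounds (not the actual computation):
# count tile area around x within rho_max + s*sqrt(2) for an upper bound, and
# within sigma_min - s*sqrt(2) for a lower bound.
def area_bounds(x, tiles, s, sigma_min, rho_max):
    upper = sum(s * s for y in tiles if dist(x, y) <= rho_max + s * math.sqrt(2))
    lower = sum(s * s for y in tiles if dist(x, y) <= sigma_min - s * math.sqrt(2))
    return upper, lower

# Toy usage: tiles of side s covering [0,1]^2, and a disc of radius in [0.3, 0.4].
s = 0.01
tiles = [((i + 0.5) * s, (j + 0.5) * s) for i in range(100) for j in range(100)]
upper, lower = area_bounds((0.5, 0.5), tiles, s, 0.3, 0.4)

# The counted areas bracket the true disc areas.
assert lower <= math.pi * 0.3 ** 2 <= math.pi * 0.4 ** 2 <= upper
```

The real computation applies the same idea with $s=0.001\Vert b_{1}b_{2}\Vert$ and the region membership tests of the lemmas below.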
\begin{lem}\label{L+Lemma}
$|L^{+}|>0.3411$.
\end{lem}
\begin{proof}
Note that:
\begin{align}
L^{+} & =L_{1}\cup L_{2}\cup L_{3}\label{L+Eq1}\\
& = \left(D^{k}(a_{1})^{+}\cap E_{1}\cap D_{b_{1}}(\tfrac{1}{2})\right)\cup\left(D^{k}(a_{1})^{+}\cap E_{2}\cap D_{b_{2}}(\tfrac{1}{2})\right)\label{L+Eq2}\\
& = D^{k}(a_{1})^{+}\cap\left[\left(E_{1}\cap D_{b_{1}}(\tfrac{1}{2})\right)\cup\left(E_{2}\cap D_{b_{2}}(\tfrac{1}{2})\right)\right]
\end{align}
where (\ref{L+Eq2}) follows from (\ref{L+Eq1}) by Lemma~\ref{L3Empty}. Thus $|L^{+}|$ does not depend on $a_{2}$, and so is a function of the position of $a_{1}$ and $\rho_{1}$ only.
We know that $D^{k}(a_{1})$ must contain $a_{2}$ as well as at least one point in $H_{1}$ (i.e. in $R_{1}$ and outside of $E_{1}\cap D_{b_{1}}(1/2)$) and at least one point in $H_{2}$ (i.e. in $R_{2}$ and outside of $E_{2}\cap D_{b_{2}}(1/2)$). Call the closest locations to $a_{1}$ in $H_{1}$ and $H_{2}$, $h_{1}$ and $h_{2}$ respectively, and note that they are dependent only on the position of $a_{1}$.
Now, given that $a_{1}$ is in some small square $X$ with centre $x$, we set $h_{1}^{X}$ to be the lower down (on $\partial B_{2}=\partial D_{b_{2}}(1)$) of the two locations: the intersection $\partial B_{2}\cap \partial D_{b_{1}}(1/2)$ and the location $q$ on $\partial B_{2}$ for which $\Vert b_{1}q\Vert+\Vert xq\Vert=1-\frac{\sqrt{2}}{2}s$; we define $h_{2}^{X}$ similarly. Thus $h_{1}^{X}$ (correspondingly $h_{2}^{X}$) is at least as far down $\partial B_{2}$ (correspondingly $\partial B_{1}$) as $h_{1}$ (correspondingly $h_{2}$) for any position of $a_{1}$ within $X$. Thus we define: \[\rho=\text{max}\{\Vert xh_{1}^{X}\Vert,\Vert xh_{2}^{X}\Vert,\Vert xa_{2}\Vert\}-\frac{\sqrt{2}}{2}s\leq\sigma_{1}^{X}\]
Then a small square $Y$ with centre $y$ will be entirely within $L^{+}$ regardless of where in $X$ $a_{1}$ lies, so long as:
\begin{itemize}
\item $Y$ is entirely above the line $b_{1}b_{2}$,
\item $\Vert yx\Vert\leq\rho-s\sqrt{2}$ (the margin $s\sqrt{2}$, together with the $\tfrac{\sqrt{2}}{2}s$ already subtracted in the definition of $\rho$, accounts for the possible locations of points within both of the squares $X$ and $Y$) and finally,
\item every point in $Y$ is inside both $D_{b_{1}}(1/2)$ and $E_{1}^{X}$ or every point in $Y$ is inside both $D_{b_{2}}(1/2)$ and $E_{2}^{X}$.
\end{itemize}
See Figure~\ref{FigLPlus}.
\begin{figure}[ht]
\centering
\includegraphics[height=70mm]{KnearLPlus.eps}
\caption{An incidence of the squares that will be counted as being in $L^{+}$.}
\label{FigLPlus}
\end{figure}
Performing our numerical integration on a computer then gives $|L^{+}|>0.3411$, with the minimum of $0.3411\ldots$ achieved when $a_{1}$ was in either of the squares with centres at $(0.4995,0.1895)$ and $(0.5005,0.1895)$.
\end{proof}
\begin{lem}\label{L-Lemma}
$|L^{-}|>0.3564$.
\end{lem}
\begin{proof}
Note that:
\begin{align}
L^{-} & = L_{4}\cup L_{5}\cup L_{6}\notag
\end{align}
None of the definitions of $L_{4}$, $L_{5}$ and $L_{6}$ depends on the position of $a_{1}$ or the value of $\rho_{1}$, although the region in which we can place $a_{2}$ (i.e. the region $S_{2}$) does depend on $a_{1}$. From Lemma~\ref{farapart1} we know that $a_{1}$ cannot be as low as the point $(\frac{1}{2},\frac{1}{4\sqrt{6}})$, and so, using Lemma~\ref{Properties}, we may assume $a_{1}$ is at $(\frac{1}{2},\frac{1}{4\sqrt{6}})$ and $\rho_{1}$ is maximal when determining whether a small square contains a possible location in $S_{2}$.
Given that $a_{2}$ is in some small square $X$ with centre $x$, we can define:\[\sigma=\text{max}\{\Vert xa_{1}\Vert,\Vert xz\Vert\}-\frac{\sqrt{2}}{2}s\leq\sigma_{2}^{X}\]
Then a small square $Y$, with centre $y$, will be entirely within $L^{-}$ regardless of where in $X$ $a_{2}$ lies, so long as:
\begin{itemize}
\item $Y$ is entirely below the line $b_{1}b_{2}$,
\item $\Vert yx\Vert\leq\sigma-s\sqrt{2}$,
\item every point $q\in Y$:\begin{enumerate}
\item is inside both $D_{b_{1}}(1/2)$ and $F_{1}^{X}$,
\item or is inside $D_{b_{2}}(1/2)$ and $F_{2}^{X}$,
\item or has $q\in T_{2}$ and either $b_{1}\widehat{b_{2}}q<\frac{\pi}{6}$ or $b_{2}\widehat{b_{1}}q<\frac{\pi}{6}$.
\end{enumerate}
\end{itemize}
Computer calculations then give $|L^{-}|>0.3564$, with the minimum of $0.3564\ldots$ achieved when $a_{2}$ was in either of the squares with centres at $(0.4995,-0.3825)$ and $(0.5005,-0.3825)$.
\end{proof}
\begin{lem}\label{H+Lemma}
$|H_{1}\cup H_{2}\cup S_{2}|<0.1300$.
\end{lem}
\begin{proof}
The areas of $H_{1}$, $H_{2}$ and $S_{2}$ all depend only on the position of $a_{1}$ and the value of $\rho_{1}$, and thus to bound their union above we may assume that $a_{2}$ is located at $(\frac{1}{2},-\frac{1}{2\sqrt{3}})$ and $\rho_{2}$ is maximal, as in Lemma~\ref{L+Lemma}. We know also that $D^{k}(a_{1})\subset A_{1}$, so that neither $b_{1}$ nor $b_{2}$ is within $\rho_{1}$ of $a_{1}$.
Given that $a_{1}$ is in some small square $X$ with centre $x$, the above tells us that we may define:\[\tau=\text{min}\{\Vert b_{1}x\Vert,\Vert b_{2}x\Vert\}+\frac{\sqrt{2}}{2}s\geq\rho_{1}^{X}\]
A small square $Y$, with centre $y$, can then have some part of itself in $H_{1}$, $H_{2}$ or $S_{2}$ only if:
\begin{itemize}
\item $\Vert yx\Vert\leq\tau+s\sqrt{2}$ and
\item we have one of the following: \begin{enumerate}
\item Any location in $Y$ is inside $R_{1}$ and outside of either $E_{1}^{X}$ or $D_{b_{1}}(1/2)$ ($Y$ contains a location in $H_{1}$)
\item Any location in $Y$ is inside $R_{2}$ and outside of either $E_{2}^{X}$ or $D_{b_{2}}(1/2)$ ($Y$ contains a location in $H_{2}$)
\item Any location $q\in Y$ has $b_{1}\widehat{b_{2}}q\geq\frac{\pi}{6}$ and $b_{2}\widehat{b_{1}}q\geq\frac{\pi}{6}$ ($Y$ contains a location in $S_{2}$).
\end{enumerate}
\end{itemize}
Computer calculations then give $|H_{1}\cup H_{2}\cup S_{2}|<0.1300$, with the maximum of $0.1299\ldots$ achieved when $a_{1}$ was in the square with centre at $(0.4995,0.2885)$.
\end{proof}
\begin{lem}\label{H-Lemma}
$|H_{3}\cup H_{4}|<0.0958$.
\end{lem}
\begin{proof}
The areas of $H_{3}$ and $H_{4}$ depend only on the position of $a_{2}$ and the value of $\rho_{2}$; when calculating whether a small square could contain a location in $S_{2}$, we may assume that $a_{1}$ is at $(\frac{1}{2},\frac{1}{4\sqrt{6}})$ and $\rho_{1}$ is maximal, as in Lemma~\ref{L-Lemma}.
Given that $a_{2}$ is in some small square $X$ with centre $x$, the above tells us that we may define:\[\upsilon=\text{min}\{\Vert b_{1}x\Vert,\Vert b_{2}x\Vert\}+\frac{\sqrt{2}}{2}s\geq\rho_{2}^{X}\]
A small square $Y$, with centre $y$, can then have some part of itself in $H_{3}$ or $H_{4}$ only if:
\begin{itemize}
\item $\Vert yx\Vert\leq\upsilon+s\sqrt{2}$ and
\item either of the following holds:\begin{enumerate}
\item Any location in $Y$ is outside $B_{1}\cup B_{2}$ ($Y$ contains a location in $H_{3}$)
\item Any location in $Y$ is above the line $b_{1}b_{2}$ and is outside $D_{b_{1}}(1/2)\cup D_{b_{2}}(1/2)$ ($Y$ contains a location in $H_{4}$)
\end{enumerate}
\end{itemize}
Our computer calculations give $|H_{3}\cup H_{4}|<0.0958$, with the maximum of $0.0957\ldots$ achieved when $a_{2}$ was in the square with centre at $(0.4995,-0.4335)$.
\end{proof}
We can use Lemmas~\ref{L+Lemma}--\ref{H-Lemma} to bound the ratio $\frac{|H|}{|H\cup L|}$:
\begin{lem}\label{HandLSmall}
$\frac{|H|}{|H\cup L|}<0.2446$.
\end{lem}
\begin{proof}
Note that since $H$ and $L$ are disjoint, $\frac{|H|}{|H\cup L|}=\frac{|H|}{|H|+|L|}$, which is strictly increasing in $|H|$ and decreasing in $|L|$. Thus, by using Lemmas~\ref{L+Lemma}--\ref{H-Lemma} we have:
\begin{align}
\frac{|H|}{|H\cup L|} & < \frac{0.1300+0.0958}{0.1300+0.0958+0.3411+0.3564}\notag\\
& < 0.2446\notag
\end{align}
\end{proof}
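As a quick sanity check of the arithmetic in Lemma~\ref{HandLSmall}, the following Python snippet (purely illustrative, not part of the original computation) recombines the four numerical bounds:

```python
# Bounds from the preceding lemmas: |H| bounded above, |L| bounded below.
H_upper = 0.1300 + 0.0958   # |H1 u H2 u S2| + |H3 u H4|
L_lower = 0.3411 + 0.3564   # |L+| + |L-|

# H and L are disjoint, so |H| / |H u L| = |H| / (|H| + |L|).
ratio = H_upper / (H_upper + L_lower)
assert ratio < 0.2446  # just under 0.2446, as claimed
```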
Using all of the above, we can finally prove Theorem~\ref{nocrossing}:
\vspace{0.5cm}
\begin{proofof}{Theorem~\ref{nocrossing}}
We pick six points $a_{1}$, $a_{2}$, $b_{1}$, $b_{2}$, $a_{1}^{(k)}$ and $a_{2}^{(k)}$, and write $Z$ for the event that $a_{1}$, $a_{2}$, $b_{1}$ and $b_{2}$ form a crossing pair, and that $a_{1}^{(k)}$ and $a_{2}^{(k)}$ are the $k^{th}$ nearest neighbours of $a_{1}$ and $a_{2}$ respectively.
When $Z$ occurs, these six points define the regions $H$ and $L$, and so for any given six tuple of points, Lemmas~\ref{HandLLemma} and \ref{HandLSmall} tell us:
\begin{align}
\mathbb{P} (Z) & \leq\left(\frac{|H|}{|H\cup L|}\right)^{k}\notag\\
& < n^{c\log 0.2446}
\end{align}
Now, there are $O(n)$ choices for $a_{1}$, and once this has been chosen there are only $O(\log n)$ choices for each of $a_{2}$, $b_{1}$, $b_{2}$, $a_{1}^{(k)}$ and $a_{2}^{(k)}$: all five have an out edge to or from $a_{1}$, except for $a_{2}^{(k)}$, which must have an out edge from $a_{2}$, and so all must be within $O(\sqrt{\log n})$ of $a_{1}$ by Lemma~\ref{edgelengths}. Thus there are $O(n\log^{5} n)$ choices for our system, and so, with high probability, no two edges in different components cross so long as:
\begin{align}
c\log 0.2446 & < -1\notag
\end{align}
or equivalently:
\begin{align}
c > 0.7102\notag
\end{align}
\end{proofof}
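The closing computation can be checked numerically; a short Python sketch (illustrative only):

```python
import math

# c * log(0.2446) < -1  iff  c > -1 / log(0.2446) = 0.7101..., so c > 0.7102 suffices.
c_crit = -1 / math.log(0.2446)
assert 0.7101 < c_crit < 0.7102
assert 0.7102 * math.log(0.2446) < -1
```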
\subsection{There can only be one large component}
We use Lemma~\ref{farapart2} and Theorem~\ref{nocrossing} to get a bound on the absolute distance between any two edges in different components:
\begin{cor}
If $k=c\log n$, and $c>0.7102$, then with high probability the minimal distance between two edges in different components is at least $r/5$, where $r$ is as given in Lemma~\ref{edgelengths}.
\end{cor}
\begin{proof}
Since $c>0.7102$ we may assume, by Theorem~\ref{nocrossing}, that no two edges in different components cross. Thus the minimal distance between two such edges will be attained at an endpoint of one of them. Corollary~\ref{farapart2} then gives us the result.
\end{proof}
Using the above, we now meet all of the conditions of Lemma 12 of [\ref{MW}] so long as $k>0.7102\log n$, except that now the minimal distance between edges in different components is $r/5$ instead of $r/2$; this requires only trivial changes to the proof, and so we obtain:
\begin{prop}\label{PropOneBigComponent}
For fixed $c>0.7102$, if $k>c\log n$, then there exists a constant $c'$ such that the probability that $G_{n,\lfloor c \log n\rfloor}$ contains two components of (Euclidean) diameter at least $c'\sqrt{\log n}$ tends to zero as $n\rightarrow\infty$.\flushright{$\square$}
\end{prop}
\section{The main result}
\subsection{Approach and simple bound}
Using the results from the previous section, we can now obtain an upper bound on the connectivity threshold by ruling out the possibility of a small component.
We wish to prove a good bound on the critical constant $c$ such that if $k>c\log n$ then $\mathbb{P}(G_{n,k}\textrm{ disconnected})\rightarrow0$ as $n\rightarrow\infty$. Proposition~\ref{PropOneBigComponent} tells us that if $G$ is not connected and $k>0.7102\log n$, then we may assume that there is a small component somewhere. In the next section we will show that, with high probability, no such small component exists for $c>0.9684$; first, however, we illustrate the general approach with a simpler proof that works for $c>1.0293$. This proof is similar to the first part of Theorem 15 of [\ref{MW}]. We start by introducing some notation:
\begin{definition}
Let $d$ be $\max\{c',4\sqrt{c_{+}/\pi},\frac{1}{4\sqrt{c_{-}/\pi}},1\}$, (where $c_{+}$ and $c_{-}$ are the constants from Lemma~\ref{edgelengths}, and $c'$ is the constant given by Proposition~\ref{PropOneBigComponent}).
Given four points, $a$, $b$, $x_{l}$ and $x_{r}$ in $S_{n}$, we define $\rho=\Vert ab\Vert$ and, writing $D^{l}_{x}(y)$ and $D^{r}_{x}(y)$ for the left and right half-disks of radius $y$ centred on $x$, we define the regions:
\begin{itemize}
\item $C=\left(D^{l}_{x_{l}}(\rho)\cup D^{r}_{x_{r}}(\rho)\right)\cap S_{n}$,
\item $A=\left(D_{a}(\rho)\setminus \left(D_{b}(\rho)\cup C\right)\right)\cap S_{n}$, and
\item $B=\left(D_{b}(\rho)\setminus \left(D_{a}(\rho)\cup C\right)\right)\cap S_{n}$.
\end{itemize}
See Figure~\ref{FigBasicBound} for an illustration of these regions.
We say that $a$, $b$, $x_{l}$ and $x_{r}$ form a \emph{component set-up} if:
\begin{enumerate}
\item The points $b$, $x_{l}$ and $x_{r}$ are all within $d\sqrt{\log n}$ of $a$,\label{CSClose}
\item $\#C=0$,\label{CSEmpty}
\item and at least one of $\#A\geq k$ and $\#B\geq k$ holds.\label{CSFull}
\end{enumerate}
\begin{figure}[ht]
\centering
\includegraphics[height=70mm]{KnearBasicBound.eps}
\caption{The set up of the points $a$, $b$, $x_{l}$ and $x_{r}$ and the regions they define.}
\label{FigBasicBound}
\end{figure}
\end{definition}
\begin{lem}\label{BBRegions}
If there is a component, $X$, of diameter at most $d\sqrt{\log n}$ in $G$, then with high probability some four points form a component set-up.
\end{lem}
\begin{proof}
Let $a\in X$ and $b\notin X$ be such that they minimise $\Vert ab\Vert$ over all such pairs. Let $x_{l}$ be the left most point in the component $X$ and $x_{r}$ the right most point. We show that these four points form a component set-up with high probability.
Since $\textrm{diam}(X)\leq d\sqrt{\log n}$, both $x_{l}$ and $x_{r}$ are within $d\sqrt{\log n}$ of $a$, and Lemma~\ref{edgelengths} tells us that $b$ is within $d\sqrt{\log n}$ of $a$ with high probability, so Condition~\ref{CSClose} holds with high probability. For any $z\in X$ we cannot have any points in $D_{z}(\rho)$ that are not in $X$, by the minimality of $\Vert ab\Vert$, and so in particular $C$ is empty, i.e. Condition~\ref{CSEmpty} is met. Finally, since $ab\notin G$ and since $D_{a}(\rho)\cap D_{b}(\rho)$ is empty by the minimality of $\Vert ab\Vert$, there must be at least $k$ points in at least one of $A$ or $B$, so Condition~\ref{CSFull} is met.
\end{proof}
We will show that if $k=c\log n$ and $c>1.0293$, then with high probability no quadruple forms a component set-up, at which point Lemma~\ref{BBRegions} tells us there will be no small component in $G$ with high probability.
\begin{lem}\label{EasyBound'}If:
\begin{align}
c&>\log\left(\frac{8\pi+3\sqrt{3}}{2\pi+3\sqrt{3}}\right)^{-1}\approx 1.0293\notag
\end{align}
and $k=c\log n$, then, with high probability, no quadruple $(a,b,x_{l},x_{r})$ with all of $a$, $b$, $x_{l}$ and $x_{r}$ at least $d\sqrt{\log n}$ from the boundary of $S_{n}$ forms a component set-up.
\end{lem}
\begin{proof}
We will show that if we pick four points $a$, $b$, $x_{l}$ and $x_{r}$ in $S_{n}$ that are all within $d\sqrt{\log n}$ of $a$ (i.e. meet Condition~\ref{CSClose} of being a component set-up), then the probability, $p(n)$, that they meet Conditions~\ref{CSEmpty} and \ref{CSFull} of being a component set-up decays at least as fast as $n^{-(1+\varepsilon)}$ for some $\varepsilon>0$. Then, since there are only $\textrm{O}(n)$ points in $S_{n}$ in total (with high probability), and since all four points are within $d\sqrt{\log n}$ of $a$, Lemma~\ref{edgelengths} tells us that there are only $\textrm{O}(n(\log n)^{3})$ choices for such a system, and so, with high probability, no four points form a component set-up.
Since $x_{l}$ and $x_{r}$ are at least $d\sqrt{\log n}$ from the boundary of $S_{n}$, and $\rho=\Vert ab\Vert\leq d\sqrt{\log n}$, we have that $|C|=\pi\rho^{2}$. We also know that $|A|,|B|\leq(\pi/3+\sqrt{3}/2)\rho^{2}$, and so, by Lemma~\ref{Full-Empty}:
\begin{align}
p(n) & \leq \mathbb{P}(\#C=0\textrm{ and }\#A\geq k)+\mathbb{P}(\#C=0\textrm{ and }\#B\geq k)\notag\\
& \leq \left(\frac{|A|}{|A\cup C|}\right)^{k} + \left(\frac{|B|}{|B\cup C|}\right)^{k}\notag\\
& \leq 2\left(\frac{(\pi/3+\sqrt{3}/2)\rho^{2}}{\pi\rho^{2}+(\pi/3+\sqrt{3}/2)\rho^{2}}\right)^{k}\notag\\
& = 2\left(\frac{2\pi+3\sqrt{3}}{8\pi+3\sqrt{3}}\right)^{k}\notag\\
& = 2\textrm{exp} \left(-c\log\left(\frac{8\pi+3\sqrt{3}}{2\pi+3\sqrt{3}}\right)\log n\right)\label{BBEq1}
\end{align}
If $c>\log\left(\frac{8\pi+3\sqrt{3}}{2\pi+3\sqrt{3}}\right)^{-1}$, then (\ref{BBEq1}) is at most $2n^{-(1+\varepsilon(c))}$ for some $\varepsilon(c)>0$, and so we are done.
\end{proof}
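The area facts used in the proof, and the resulting constant, can be verified numerically. A Python sketch (illustrative only; the lens-area formula for two discs of radius $\rho$ with centres $\rho$ apart is the standard one):

```python
import math

# All areas scale as rho^2, so take rho = 1.
# |A|, |B| <= area of D_a(rho) \ D_b(rho): disc area minus the lens of the two discs.
lens = 2 * math.acos(0.5) - math.sqrt(3) / 2          # = 2*pi/3 - sqrt(3)/2
AB_max = math.pi - lens                               # = pi/3 + sqrt(3)/2
assert abs(AB_max - (math.pi / 3 + math.sqrt(3) / 2)) < 1e-12

# With |C| = pi * rho^2, the ratio in the proof simplifies as claimed:
ratio = AB_max / (math.pi + AB_max)
assert abs(ratio - (2 * math.pi + 3 * math.sqrt(3))
           / (8 * math.pi + 3 * math.sqrt(3))) < 1e-12

# The critical constant of the lemma:
c_crit = 1 / math.log(1 / ratio)
assert 1.0292 < c_crit < 1.0293
```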
We now rule out having a component set-up near the edge of $S_{n}$, and hence having a small component near the edge of $S_{n}$. The bound we prove here will also be strong enough to rule out the edge case in the stronger bound on the connectivity threshold that we give in the next section.
\begin{lem}\label{NoBoundaries}\mbox{}
\begin{enumerate}
\item\label{NBCor} If $c>0$ and $k=c\log n$, then with high probability there is no component set-up containing a point within $2d\sqrt{\log n}$ of a corner of $S_{n}$.
\item\label{NBEdge} If $c>0.8343$ and $k=c\log n$, then with high probability there is no component set-up containing a point within $d\sqrt{\log n}$ of any edge of $S_{n}$.\end{enumerate}
\end{lem}
\begin{proof}
The proof proceeds almost exactly as in the previous lemma. We again pick our four points $a$, $b$, $x_{l}$ and $x_{r}$ with $b$, $x_{l}$ and $x_{r}$ within $d\sqrt{\log n}$ of $a$, and bound the probability that they meet Conditions~\ref{CSEmpty} and \ref{CSFull} of forming a component set-up. We write $p_{c}(n)$ and $p_{e}(n)$ for the probabilities of these events for a quadruple near a corner and near an edge respectively.
\begin{Parts}
\item The number of such quadruples with at least one point within $2d\sqrt{\log n}$ of a corner is $\textrm{O}((\log n)^{4})$. We show that $p_{c}(n)$ decays at least as fast as $n^{-\varepsilon}$, for some $\varepsilon>0$.
As before, we have $|A|,|B|\leq(\pi/3+\sqrt{3}/2)\rho^{2}$ (where again $\rho=\Vert ab\Vert$).
If one of our points is within $2d\sqrt{\log n}$ of a corner of $S_{n}$ we must still have $|C|\geq\frac{\pi}{4}\rho^{2}$, and so, using Lemma~\ref{Full-Empty}:
\begin{align}
p_{c}(n) & \leq \mathbb{P}(\#C=0\text{ and }\#A\geq k)+\mathbb{P}(\#C=0\text{ and }\#B\geq k)\notag\\
& \leq\left(\frac{|A|}{|A|+|C|}\right)^{k}+\left(\frac{|B|}{|B|+|C|}\right)^{k}\notag\\
& < 2\left(\frac{(\pi/3+\sqrt{3}/2)\rho^{2}}{(\pi/4)\rho^{2}+(\pi/3+\sqrt{3}/2)\rho^{2}}\right)^{c\log n}\notag\\
& < 2n^{-0.3439c}\label{EBEq1}
\end{align}
Thus for any $c>0$ the exponent of (\ref{EBEq1}) is strictly less than zero, and so with high probability there are no small components containing a point within $2d\sqrt{\log n}$ of any corner of $S_{n}$.
\item The number of such quadruples with at least one point within $d\sqrt{\log n}$ of an edge is $\textrm{O}(\sqrt{n}(\log n)^{3})$. We show that $p_{e}(n)$ decays at least as fast as $n^{-(1/2+\varepsilon)}$, for some $\varepsilon>0$.
If none of our points are within $2d\sqrt{\log n}$ of a corner, but at least one is within $2d\sqrt{\log n}$ of an edge, then $|C|\geq\frac{\pi}{2}\rho^{2}$ (either we have all of one of the half disks $D_{x_{l}}^{l}$ and $D_{x_{r}}^{r}$ or at least half of each), and so:
\begin{align}
p_{e}(n) & \leq \left(\frac{|A|}{|A|+|C|}\right)^{k}+\left(\frac{|B|}{|B|+|C|}\right)^{k}\notag\\
& < 2\left(\frac{(\pi/3+\sqrt{3}/2)\rho^{2}}{(\pi/2)\rho^{2}+(\pi/3+\sqrt{3}/2)\rho^{2}}\right)^{c\log n}\notag\\
& < 2n^{-0.5993c}\label{EBEq2}
\end{align}
For any $c>0.8343$ the exponent of (\ref{EBEq2}) is strictly less than $-\tfrac{1}{2}$ and so we are done.
\end{Parts}
\end{proof}
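The two exponents in the proof can be checked numerically; another short, purely illustrative Python sketch:

```python
import math

AB = math.pi / 3 + math.sqrt(3) / 2   # upper bound on |A|/rho^2 and |B|/rho^2

# Corner case: |C| >= (pi/4) rho^2, giving an exponent below -0.3439.
corner_exp = math.log(AB / (math.pi / 4 + AB))
assert corner_exp < -0.3439

# Edge case: |C| >= (pi/2) rho^2, giving an exponent below -0.5993.
edge_exp = math.log(AB / (math.pi / 2 + AB))
assert edge_exp < -0.5993
assert 0.8343 * (-edge_exp) > 0.5     # exponent is below -1/2 once c > 0.8343
```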
Putting together Lemmas~\ref{EasyBound'} and \ref{NoBoundaries}, and applying Lemma~\ref{BBRegions} and Proposition~\ref{PropOneBigComponent}, we have:
\begin{prop}\label{EasyBound}
Let $p(n)$ be the probability that $G_{n,k}$ is disconnected. Then, provided $k=c\log n$ and:
\begin{align}
c&>\log\left(\frac{8\pi+3\sqrt{3}}{2\pi+3\sqrt{3}}\right)^{-1}\approx 1.0293\notag
\end{align}
we have:
\[p(n)\rightarrow 0,\textrm{ as }n\rightarrow\infty\]
\end{prop}
\subsection{The Size of Small Components\\ and an Improved Bound}
The previous section gives a reasonably good upper bound on the connectivity threshold for $G_{n,k}$: if $k>1.0293\log n$, then $G_{n,k}$ is connected with high probability. The best known lower bound is that if $k<0.7209\log n$ then $G_{n,k}$ is disconnected with high probability, which follows from Balister, Bollob\'{a}s, Sarkar and Walters' bound on the directed model [\ref{MW}]. This leaves the question: could the connectivity threshold be exactly $k=\log n$? This hypothesis was conjectured originally by Xue and Kumar for the original undirected model [\ref{XandK}], and is true in the Gilbert model. We show that it does not hold here; since the threshold for the strict undirected model must be at least as high as that in the original undirected model, this further disproves their conjecture. In particular we show that if $k>0.9684\log n$ then $G$ is connected with high probability.
To show this improved bound, we first show that the small components in $G$ (i.e. those of diameter $\textrm{O}(\sqrt{\log n})$) contain far fewer than $k$ points as $k$ approaches the lower bound on the connectivity threshold, and then use this to improve our upper bound. One major tool in this section is an isoperimetric argument. As in [\ref{MW2}], this allows us to bound the empty area around any small component as a function of how much space that component takes up. We use the isoperimetric theorem in the following form, which is a consequence of the Brunn-Minkowski inequality, see e.g. [\ref{BMI}]. Part 2 of the lemma follows from an easy reflection argument.
\begin{lem}\label{IsoLem1}\mbox{}
\begin{enumerate}
\item For any $\lambda>0$ the subset $A$ of the plane of area $\lambda$ that minimises the area of the $\delta$-blowup, $A(\delta)$ (the subset of the plane within $\delta$ of any location in $A$), is the disc of area $\lambda$.
\item The subset $A$ of the half-plane $E^{+}$ of area $\lambda$ that minimises the area of the intersection $A(\delta)\cap E^{+}$ is the half-disc of area $\lambda$ centred along the edge of $E^{+}$.
\end{enumerate}
\end{lem}
To use Lemma~\ref{IsoLem1}, we follow [\ref{MW2}] and tile $S_{n}$ with a fine square grid. We can then look at the number of tiles that a small component hits to give a bound on the empty area around it. To be precise:
We set $M=20000d$ (a large enough value to gain a good result) and tile $S_{n}$ with small squares of side length $s=\sqrt{\log n}/M$. We form a graph $\widehat{G}$ on these tiles by joining two tiles whenever the distance between their centres is at most $2d\sqrt{\log n}$. We call a pointset \emph{bad} if any of the following hold (and \emph{good} otherwise):
\begin{enumerate}
\item there exist two points that are joined in $G$ but the tiles containing these points are not joined in $\widehat{G}$
\item there exist two points at most distance $\tfrac{1}{d}\sqrt{\log n}$ apart that are not joined
\item there exists a half-disc based at a point of $G$ of radius $d\sqrt{\log n}$ that is contained entirely within $S_{n}$ and contains no (other) point of $G$
\item there exist two components in $G_{n,k}$ with Euclidean diameter at least $d\sqrt{\log n}$
\item there exists a component of diameter at most $d\sqrt{\log n}$ containing a vertex within distance $2d\sqrt{\log n}$ of a corner of $S_{n}$
\item there exist two different components $X$ and $Y$ such that an edge in component $X$ crosses an edge in component $Y$
\end{enumerate}
Note that unlike in [\ref{MW2}], we do not insist that a small component cannot be near an edge of $S_{n}$, but only that it can't be near a corner, since our Lemma~\ref{NoBoundaries} is not strong enough to rule out the existence of small components near the edge of $S_{n}$ around the lower bound on the connectivity threshold ($k=0.7209\log n$).
\begin{lem}\label{GoodLem}
If $k=c\log n$ and $c>0.7102$, then with high probability the configuration is good.
\end{lem}
\begin{proof}\mbox{}
\begin{itemize}
\item By our choice of~$d$ and Lemma~\ref{edgelengths}, Conditions~1, 2 and 3 hold with high probability.
\item For $k>0.7102\log n$, Proposition~\ref{PropOneBigComponent} ensures Condition 4 holds with high probability.
\item Lemma~\ref{NoBoundaries} part~1 ensures Condition 5 holds with high probability.
\item For $k>0.7102\log n$, Theorem~\ref{nocrossing} ensures Condition 6 holds with high probability.
\end{itemize}
Since each condition holds with high probability, they will all hold together with high probability, and so the configuration will be good with high probability.
\end{proof}
We will consider what can happen around a small component once we know which tiles the component meets. We make the following definitions:
\begin{definition}
Given two points, $a$, $b$, and a collection of tiles $Y$ with $a\in Y$ and $b\notin Y$, we define, as before, $\rho=\Vert ab\Vert$ and $A=\left(D_{a}(\rho)\setminus D_{b}(\rho)\right)\cap S_{n}$, and define the regions:
\begin{itemize}
\item $Z$ to be all tiles not in $Y$ with their centre within $\rho-\sqrt{2}s$ of the centre of a tile in $Y$,
\item $B'$ to be $D_{b}(\rho)\setminus (D_{a}(\rho)\cup Y\cup Z)$, and
\item $Y'$ to be the tiles in $Y$ that have their centre within $\rho+\sqrt{2}s$ of $a$ (so that the tiles in $Y$ that meet the region $A$ defined previously are all in $Y'$).
\end{itemize}
See Figure~\ref{FigBasicTile} for an illustration.
\begin{figure}[h]
\centering
\includegraphics[height=90mm]{KNearTileBound.eps}
\caption{The points $a$ and $b$, and the regions $Y$, $Y'$, $Z$ and $B'$.}
\label{FigBasicTile}
\end{figure}
\end{definition}
We can use these new regions to form an analogous version of Lemma~\ref{BBRegions}.
\begin{lem}\label{BasicTiles}
If $G$ contains a component, $X$, of diameter at most $d\sqrt{\log n}$, then with high probability there will be some triple $(a,b,Y)$ such that:
\begin{enumerate}
\item \label{BTDiam}The diameter of $Y$ is at most $d\sqrt{\log n}+2\sqrt{2}s$,
\item \label{BTDist}$b$ is within $d\sqrt{\log n}$ of $a$,
\item \label{BTEmpty}$\#Z=0$, and
\item \label{BTDense}at least one of $\#Y'$ and $\#B'$ is at least $k$.
\end{enumerate}
\end{lem}
\begin{proof}
Given a component $X$, we set $Y$ to be the set of tiles that contain a point in $X$, and $a$ and $b$ to be the pair of points such that $a\in X$, $b\notin X$ that minimise $\rho=\Vert ab\Vert$.
\begin{itemize}
\item Condition~\ref{BTDiam} holds as $\textrm{diam}(Y)\leq \textrm{diam}(X)+2\sqrt{2}s$.
\item Condition~\ref{BTDist} follows from Lemma~\ref{edgelengths}.
\item Condition~\ref{BTEmpty} follows since no point outside of $X$ can be within $\rho$ of a point in $X$ and every tile of $Y$ contains a point in $X$.
\item Condition~\ref{BTDense} follows since $ab$ is not an edge of $G$, and every location in any tile with its centre within $\rho-\sqrt{2}s$ of the centre of a tile containing a point $x\in X$ must be within $\rho$ of $x$.
\end{itemize}
\end{proof}
The Isoperimetric Theorem (Lemma~\ref{IsoLem1}) allows us to bound the area of $Z$ in terms of the area of $Y$:
\begin{lem}\label{IsoLem}
For a triple $(a,b,Y)$, if no tile of $Y$ is within $d\sqrt{\log n}$ of the edge of $S_{n}$ then, writing $r=\rho-\sqrt{2}s>(1-10^{-4})\rho$ (where again $\rho=\Vert ab\Vert$), we have:\[|Z|\geq \pi r^{2}+2r\sqrt{\pi|Y|}\]
If $Y$ does contain a tile within $d\sqrt{\log n}$ of the edge of $S_{n}$, but no tile within $2d\sqrt{\log n}$ of a corner then:\[|Z|\geq \frac{\pi}{2} r^{2}+r\sqrt{\pi|Y|}\]
\end{lem}
\begin{proof}
The Isoperimetric Theorem tells us that $|Z|$ is at least what it would be if $Y$ were a disc and $Z$ were its $r$-blowup. In this case:
\begin{align*}
\text{radius}(Y) & = \sqrt{|Y|/\pi}
\end{align*}
and so:
\begin{align*}
|Z| & \geq \pi\left(r+\sqrt{|Y|/\pi}\right)^{2}-|Y|\\
& = \pi r^{2}+2r\sqrt{\pi|Y|}
\end{align*}
The second part follows in exactly the same way, using part~2 of our version of the Isoperimetric Theorem.
\end{proof}
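The algebra behind the first bound is easily confirmed numerically; a small, purely illustrative Python check:

```python
import math

# If Y is a disc of area |Y| (radius sqrt(|Y|/pi)), then the area of its
# r-blowup, minus |Y| itself, equals pi r^2 + 2 r sqrt(pi |Y|).
for area_Y in (0.1, 1.0, 2.5):
    for r in (0.2, 1.0):
        blowup = math.pi * (r + math.sqrt(area_Y / math.pi)) ** 2 - area_Y
        closed_form = math.pi * r ** 2 + 2 * r * math.sqrt(math.pi * area_Y)
        assert abs(blowup - closed_form) < 1e-12
```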
With this machinery in place, we can now proceed to prove that as $k$ nears the connectivity threshold, all small components are very small, i.e. of size much less than $k$. The proof works in two parts: we first prove that, with high probability, no triple $(a,b,Y)$ has $\#Y'\geq k$ and $\#Z=0$ for $k\geq 0.7209\log n$. This allows us to conclude, by Lemma~\ref{BasicTiles}, that if $G$ contains a small component, then with high probability some triple $(a,b,Y)$ has $\#B'\geq k$ and $\#Z=0$. We then use this to bound the size of any small component by showing that, with high probability, no triple $(a,b,Y)$ has $\#B'\geq k$, $\#Z=0$ and $\#Y\geq0.309k$.
\begin{lem}\label{ANotDense}
If $c>0.7209$ and $k=c\log n$, then, with high probability, no triple $(a,b,Y)$ meeting Conditions~\ref{BTDiam}--\ref{BTEmpty} of Lemma~\ref{BasicTiles} has $\#Y'\geq k$.
\end{lem}
\begin{proof}
Let $p_{A}(n)$ be the probability that a given triple $(a, b, Y)$ with no part of $Y$ within $d\sqrt{\log n}$ of the boundary of $S_{n}$ and meeting Conditions~\ref{BTDiam} and \ref{BTDist} of Lemma~\ref{BasicTiles} also meets Condition~\ref{BTEmpty} and has $\#Y'\geq k$. Let $p_{A'}(n)$ be this same probability when $Y$ does contain a tile within $d\sqrt{\log n}$ of the boundary of $S_{n}$.
\begin{Cases}
\item $Y$ does not contain a tile within $d\sqrt{\log n}$ of the boundary of $S_{n}$:
There will be $\textrm{O}(n)$ choices for the point $a$, and once $a$ has been chosen, there are only $\textrm{O}(\log n)$ choices for $b$ (since it is within $d\sqrt{\log n}$ of $a$), and only a (large) constant number of choices for $Y$, since $Y$ can only include tiles from the fixed collection of $16(dM)^{2}$ tiles nearest to $a$ (i.e. the tiles within $d\sqrt{\log n}$ of $a$). Thus there are $\textrm{O}(n\log n)$ possible triples $(a, b, Y)$ meeting Conditions~\ref{BTDiam} and \ref{BTDist} of Lemma~\ref{BasicTiles}.
We show that $p_{A}(n)$ decays at least as fast as $n^{-(1+\varepsilon)}$.
By Lemma~\ref{IsoLem}:
\begin{align}
|Z| & \geq \pi r^{2}+2r\sqrt{\pi|Y|}\notag\\
& \geq \pi r^{2}+2r\sqrt{\pi|Y'|}\notag
\end{align}
where $r=\rho-\sqrt{2}s>(1-10^{-4})\rho$.
Since every tile of $Y'$ contains a location within $\rho+2\sqrt{2}s$ of $a$, and no tile in $Y'$ contains a location within $\rho-2\sqrt{2}s$ of $b$, we have:
\begin{align}
|Y'| & \leq \left(\frac{\pi}{3}+\frac{\sqrt{3}}{2}\right)\rho^{2}+\pi\left((\rho+2\sqrt{2}s)^{2}-\rho^{2}\right)\notag\\
& <\left(\frac{\pi}{3}+\frac{\sqrt{3}}{2}+\frac{\pi}{1000}\right)\rho^{2}\label{AreaYEq}
\end{align}
If $(a,b,Y)$ meets Condition~\ref{BTEmpty} of Lemma~\ref{BasicTiles} (i.e.\ has $\#Z=0$) and $\#Y'\geq k$, then by Lemma~\ref{Full-Empty}:
\begin{align}
p_{A}(n) & \leq \left(\frac{|Y'|}{|Y'|+|Z|}\right)^{k}\notag\\
& \leq \left(\frac{|Y'|}{\pi r^{2}+2r\sqrt{\pi|Y'|}+|Y'|}\right)^{k}\notag\\
& = \textrm{exp}\left(-c\log\left(\frac{\pi r^{2}+2r\sqrt{\pi|Y'|}+|Y'|}{|Y'|}\right)\log n\right)\label{ANotDenseExp}
\end{align}
Maximising (\ref{ANotDenseExp}) over the range $0<|Y'|<\left(\tfrac{\pi}{3}+\tfrac{\sqrt{3}}{2}+\tfrac{\pi}{1000}\right)\rho^{2}$, we find a maximum of $n^{-1.18\ldots}$ (attained when $|Y'|$ is maximal). Thus, with high probability, no such triple has $\#Y'\geq k$.
\item $Y$ does contain a tile within $d\sqrt{\log n}$ of the boundary of $S_{n}$:
We will have $\textrm{O}(n^{1/2})$ choices for $a$, and the same argument as in the previous case shows that there are $\textrm{O}(n^{1/2}\log n)$ such triples meeting Conditions~\ref{BTDiam} and \ref{BTDist} of Lemma~\ref{BasicTiles} that also have some tile of $Y$ within $d\sqrt{\log n}$ of the boundary of $S_{n}$.
We show that $p_{A'}(n)$ decays at least as fast as $n^{-(1/2+\varepsilon)}$.
Here Lemma~\ref{IsoLem} only ensures $|Z|\geq\frac{1}{2}\pi r^{2}+r\sqrt{\pi|Y'|}$. Equation (\ref{AreaYEq}) still holds and (\ref{ANotDenseExp}) becomes:
\begin{align}
p_{A'}(n) & \leq \textrm{exp}\left(-c\log\left(\frac{\tfrac{1}{2}\pi r^{2}+r\sqrt{\pi|Y'|}+|Y'|}{|Y'|}\right)\log n\right)\label{ANotDenseExp2}
\end{align}
Maximising (\ref{ANotDenseExp2}) over the range $0<|Y'|<\left(\frac{\pi}{3}+\frac{\sqrt{3}}{2}+\frac{\pi}{1000}\right)\rho^{2}$, we find a maximum of $n^{-0.81\ldots}$ (again when $|Y'|$ is maximal). Thus again, with high probability, no such triple has $\#Y'\geq k$, and so with high probability no small component has $\#Y'\geq k$.
\end{Cases}
\end{proof}
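The maximisations in the proof above are only quoted; as a numerical aside (with $\rho=1$ and our own variable names, not part of the proof), the claimed exponents can be recovered by evaluating at the maximal value of $|Y'|$:

```python
import math

c = 0.7209
r = 1 - 1e-4                                             # r = (1 - 10^-4) rho, with rho = 1
y_max = math.pi / 3 + math.sqrt(3) / 2 + math.pi / 1000  # upper bound (AreaYEq) on |Y'|

# Centre case: exponent of n in the bound (ANotDenseExp), evaluated at maximal |Y'|.
e_centre = c * math.log((math.pi * r**2 + 2 * r * math.sqrt(math.pi * y_max) + y_max) / y_max)
# Edge case: the bound (ANotDenseExp2), where only half the annulus is available.
e_edge = c * math.log((math.pi * r**2 / 2 + r * math.sqrt(math.pi * y_max) + y_max) / y_max)

assert e_centre > 1       # beats the O(n log n) union bound in the centre case
assert e_edge > 0.5       # beats the O(n^{1/2} log n) union bound in the edge case
print(round(e_centre, 2), round(e_edge, 2))  # 1.19 0.82
```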
Lemma~\ref{ANotDense} tells us that, with high probability, as $k$ approaches the connectivity threshold, every triple $(a,b,Y)$ that corresponds to a small component will have $\#B'\geq k$. That is, we can change Condition~\ref{BTDense} in Lemma~\ref{BasicTiles} from ``$\#A\geq k$ or $\#B'\geq k$'' to simply ``$\#B'\geq k$'' (denote this Condition~\ref{BTDense}'), and the lemma stays true. We use this to strengthen the previous argument and show that in fact there are far fewer than $k$ points in the whole of any small component, but we first need a result about how dense two disjoint regions can be simultaneously. The following result about the Poisson process is a slight alteration of Lemma~6 from [\ref{MW2}], which goes through by exactly the same proof:
\begin{lem}\label{Full-Empty2}
If $X$, $Y$ and $Z$ are three regions with $|X|\leq|Y\cup Z|$, $|Y|\leq |X\cup Z|$ and $X\cap Y=\emptyset$, then, writing $E$ for the event that $\# X\geq mk$, $\#Y\geq k$ and $\#Z=0$, we have:
\begin{align}
\mathbb{P}(E) & \leq \left(\frac{2|X|}{|X|+|Y|+|Z|}\right)^{mk}\left(\frac{2|Y|}{|X|+|Y|+|Z|}\right)^{k}
\end{align}
\end{lem}
We can now show, by a similar argument to Lemma~\ref{ANotDense}:
\begin{prop}\label{XSmall}
Let $c>0.7209$ and $k=c\log n$. Then with high probability no small component contains more than $0.309k$ points of $G$.
\end{prop}
\begin{proof}
If $G$ contains a small component with at least $0.309k$ points, then with high probability there will be some triple $(a,b,Y)$ that meets Conditions~\ref{BTDiam}--\ref{BTEmpty} of Lemma~\ref{BasicTiles}, Condition~\ref{BTDense}' and $\#Y\geq 0.309k$. We write $p_{X}$ for the probability that a triple $(a,b,Y)$ meeting Conditions~\ref{BTDiam} and \ref{BTDist} meets the rest of these conditions when $Y$ contains no tile within $d\sqrt{\log n}$ of the boundary of $S_{n}$, and $p_{X'}$ for the same probability when $Y$ does contain such a tile. As in Lemma~\ref{ANotDense}, it suffices to show that $p_{X}$ decays at least as fast as $n^{-1-\varepsilon}$ and $p_{X'}$ decays at least as fast as $n^{-1/2-\varepsilon}$ for some $\varepsilon>0$ to complete the proof.
We wish to apply Lemma~\ref{Full-Empty2}, but need to check the conditions of the Lemma first:
\begin{enumerate}
\item The condition $|B'|\leq |Y\cup Z|$ follows as $|Z|\geq \pi r^{2}\approx 3.14\rho^{2}$ and $|B'|\leq(\pi/3+\sqrt{3}/2)\rho^{2}\approx 1.91\rho^{2}$, and so $|Z|\geq|B'|$.
\item The condition that $B'\cap Y=\emptyset$ follows by definition.
\item The condition $|Y|<|B'\cup Z|$: By Lemma~\ref{IsoLem}, $|Z|\geq\pi r^{2}+2 r\sqrt{\pi|Y|}$ when $Y$ contains no tile within $d\sqrt{\log n}$ of the edge of $S_{n}$, and $|Z|\geq\pi r^{2}/2+ r\sqrt{\pi|Y|}$ when it does. Solving $|Y|>\pi r^{2}+2 r\sqrt{\pi|Y|}$ and $|Y|>\pi r^{2}/2+ r\sqrt{\pi|Y|}$, we find that these require $|Y|>11.72\rho^{2}$ and $|Y|>5.861\rho^{2}$ respectively. Thus, so long as $|Y|\leq 11.7\rho^{2}$ in the centre case and $|Y|\leq 5.86\rho^{2}$ in the edge case, we have $|Y|<|Z|$, and so the condition holds. When $|Y|$ exceeds these bounds we cannot apply Lemma~\ref{Full-Empty2}, but instead note that, for $|Y|$ in this range:
\begin{align}
p_{X} & \leq \mathbb{P}(\#Z=0\text{ and }\#B'\geq k)\notag\\
& \leq \left(\frac{|B'|}{|B'|+|Z|}\right)^{k}\notag\\
& \leq \left(\frac{(\pi /3+\sqrt{3}/2)\rho^{2}}{(\pi /3+\sqrt{3}/2)\rho^{2}+\pi r^{2} + 2r\sqrt{\pi |Y|}}\right)^{k}\notag\\
& < \left( \frac{ \pi /3+\sqrt{3}/2 }{ 4\pi /3 +\sqrt{3}/2 + 2\sqrt{11.7\pi} } \right)^{k}\notag\\
& < n^{-1.58}
\end{align}
By an exact analogy in the edge case, when $|Y|>5.86\rho^{2}$, we find that:
\begin{align}
p_{X'} & < n^{-1.01}
\end{align}
\end{enumerate}
Thus, for $c>0.7209$, and recalling that $r>(1-10^{-4})\rho$:
\begin{align}
p_{X} & \leq \mathbb{P}(|Y|\leq 11.7\rho^{2})\mathbb{P}\left(\# Z=0,\#B'\geq k,\#Y\geq 0.309k\Big| |Y|\leq 11.7\rho^{2}\right)\notag\\
& \quad + \mathbb{P}(|Y|> 11.7\rho^{2})n^{-1.58}\notag\\
& \leq \Max{|Y|\leq11.7\rho^{2}}\left(\frac{2|Y|}{|B'|+|Y|+|Z|}\right)^{0.309k}\left(\frac{2|B'|}{|B'|+|Y|+|Z|}\right)^{k} + n^{-1.58}\notag\\
& \leq \Max{|Y|\leq11.7\rho^{2}}\frac{(2|Y|)^{0.309k}(2(\pi/3+\sqrt{3}/2)\rho^{2})^{k}}{\left((\pi/3+\sqrt{3}/2)\rho^{2}+|Y|+\pi r^{2}+2r\sqrt{\pi|Y|}\right)^{1.309k}} +n^{-1.58}\label{XSmallExp}
\end{align}
Maximising over the range $0\leq|Y|\leq11.7\rho^{2}$, we find that the first term of (\ref{XSmallExp}) achieves a maximum of $n^{-1.0001\ldots}$ at $|Y|=0.6069\ldots\rho^{2}$.
Similarly we have:
\begin{align}
p_{X'} & \leq \mathbb{P}(|Y|\leq 5.86\rho^{2})\mathbb{P}\left(\# Z=0,\#B'\geq k,\#Y\geq 0.309k\Big| |Y|\leq 5.86\rho^{2}\right)\notag\\
& \quad + \mathbb{P}(|Y|> 5.86\rho^{2})n^{-1.01}\notag\\
& \leq \Max{|Y|\leq5.86\rho^{2}}\frac{(2|Y|)^{0.309k}(2(\pi/3+\sqrt{3}/2)\rho^{2})^{k}}{\left((\pi/3+\sqrt{3}/2)\rho^{2}+|Y|+\pi r^{2}/2+r\sqrt{\pi|Y|}\right)^{1.309k}}+n^{-1.01}\label{XSmallExp2}
\end{align}
Maximising the first term over the range $0\leq|Y|\leq5.86\rho^{2}$, we find that the first term of (\ref{XSmallExp2}) achieves a maximum of $n^{-0.593\ldots}$ when $|Y|=0.601\rho^{2}$.
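The two maximisations just quoted can be checked numerically. The following brute-force grid search (an illustrative aside with $\rho=1$; `exponent`, `e_c`, etc.\ are our own names) confirms that the exponents of the first terms of (\ref{XSmallExp}) and (\ref{XSmallExp2}) stay above $1$ and $1/2$ respectively, with minima near the quoted values of $|Y|$:

```python
import math

c, r = 0.7209, 1 - 1e-4                 # k = c log n; r = (1 - 10^-4) rho, with rho = 1
b = math.pi / 3 + math.sqrt(3) / 2      # upper bound on |B'| (in units of rho^2)

def exponent(y, half):
    # -log_n of the first term of (XSmallExp) (or of (XSmallExp2) if half=True) at |Y| = y.
    if half:
        z = math.pi * r**2 / 2 + r * math.sqrt(math.pi * y)
    else:
        z = math.pi * r**2 + 2 * r * math.sqrt(math.pi * y)
    d = b + y + z
    return c * (1.309 * math.log(d) - 0.309 * math.log(2 * y) - math.log(2 * b))

# Centre case: minimise over 0 < |Y| <= 11.7; edge case: over 0 < |Y| <= 5.86.
e_c, y_c = min((exponent(i / 1000, False), i / 1000) for i in range(1, 11701))
e_e, y_e = min((exponent(i / 1000, True), i / 1000) for i in range(1, 5861))
assert e_c > 1 and e_e > 0.5            # enough to beat the respective union bounds
assert 0.55 < y_c < 0.66 and 0.55 < y_e < 0.65   # minima near 0.607 and 0.601
```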
Thus, with high probability, no triple $(a,b,Y)$ has $\#Y\geq0.309k$, $\#B'\geq k$ and $\#Z=0$, and so with high probability there is no small component containing more than $0.309k$ points.
\end{proof}
We will use this result to prove a stronger bound on the connectivity threshold. The idea is to show that, with high probability, any triple $(a,b,Y)$ which meets Conditions~\ref{BTDiam}--\ref{BTEmpty} of Lemma~\ref{BasicTiles} and Condition~\ref{BTDense}' and has $\#Y\leq 0.309k$ (which, as we now know, happens with high probability if $G$ contains a small component) admits another point, $\beta$, lying in neither $B'$ nor $Y$ but within $1.0767\rho$ of $a$, such that $\overrightarrow{a\beta}$ is an out edge but $\overrightarrow{\beta a}$ is not. There must then be a dense region around $\beta$, and we can use this to improve our bound on the connectivity threshold. More precisely, we will show that there are $k$ points in the following region:
\begin{definition}
Given the system $(a,b,\beta,Y)$ with $a$, $b$ and $Y$ as usual and $\beta\notin Y\cup B'$, we define the region (shown in Figure~\ref{FigPosBet}):
\[B^{*} = \Bigl[\bigl(D_{\beta}(\Vert a\beta\Vert)\cap B'\bigr)\cup\bigl(D_{\beta}(\Vert a\beta\Vert)\setminus D_{a}(\Vert a\beta\Vert)\bigr)\Bigr]\setminus\bigl( Y\cup Z\bigr)\]
\begin{figure}[h]
\centering
\includegraphics[height=70mm]{KnearPositionqBeta.eps}
\caption{The point $\beta$ and the region $B^{*}$.}
\label{FigPosBet}
\end{figure}
\end{definition}
We introduce one more piece of notation, and then prove that there will be a suitable $\beta$ with high probability.
\begin{definition}
Given $\lambda>\rho$, we write $B(\lambda)=B'\cap D_{a}(\lambda)$ and $A(\lambda)=D_{a}(\lambda)\setminus\left(D_{a}(\rho)\cup B'\right)$. See Figure~\ref{FigAlandBl}.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[height=70mm]{KnearAlandBl.eps}
\caption{The regions $A(\lambda)$ and $B(\lambda)$.}
\label{FigAlandBl}
\end{figure}
The following lemma tells us that with high probability, if $G$ contains a small component, then we can find a suitable point $\beta$.
\begin{lem}\label{TBound1}
If $k>0.9684\log n$ and $G$ contains a component of diameter at most $d\sqrt{\log n}$, then with high probability there is some quadruple $(a,b,\beta,Y)$ such that:
\begin{enumerate}
\item The diameter of $Y$ is at most $d\sqrt{\log n}+2\sqrt{2}s$,\label{BTDiam2}
\item $b$ is within $d\sqrt{\log n}$ of $a$,
\item $\#Z=0$,\label{BTZEmpty2}
\item \label{BTDense2}$\#B'\geq k$,
\item \label{BTDense3}$\#Y\leq0.309k$,
\item \label{BTNoBoundary}$Y$ contains no tile within $d\sqrt{\log n}$ of the boundary of $S_{n}$,
\item \label{BTBeta}$\beta\in A(1.0767\rho)$ and
\item \label{BTBDense}$\#B^{*}\geq k$.
\end{enumerate}
\end{lem}
\begin{proof}
Given a small component, $X$, we take $Y$ to be exactly the tiles that meet $X$ and $a$ and $b$ to be the pair such that $a\in X$, $b\notin X$ and $\Vert ab\Vert$ is minimal, all as usual. Then Conditions~\ref{BTDiam2}--\ref{BTZEmpty2} are met with high probability by Lemma~\ref{BasicTiles}, Condition~\ref{BTDense2} is met by Lemma~\ref{ANotDense}, Condition~\ref{BTDense3} is met by Proposition~\ref{XSmall} and Condition~\ref{BTNoBoundary} is met by Lemma~\ref{NoBoundaries}. We take $\beta$ to be the point outside of $B'\cup Y\cup Z$ that is closest to $a$.
To show Condition~\ref{BTBeta} holds with high probability we show that no triple $(a,b,Y)$ meeting Conditions~\ref{BTDiam} and \ref{BTDist} has both:
\begin{enumerate}
\item $\#B'\geq k$ and,
\item $\#\Bigl(Z\cup A(1.0767\rho)\setminus Y\Bigr)=0$.
\end{enumerate}
If, with high probability, this does not occur for any triple, then with high probability there will be some point in $A(1.0767\rho)$, and so in particular $\beta\in A(1.0767\rho)$.
We write $E_{1}$ for the event that a particular triple has $\#B'\geq k$, $\#\bigl(Z\cup A(1.0767\rho)\setminus Y\bigr)=0$ and meets Conditions~\ref{BTDiam} and \ref{BTDist}. We know that $|B'|\leq (\pi/3+\sqrt{3}/2)\rho^{2}$ and, by Lemma~\ref{Full-Empty}:\[\mathbb{P}(E_{1})\leq \left(\frac{|B'|}{|B'|+|Z\cup A(1.0767\rho)\setminus Y|}\right)^{k}\]
Thus $\mathbb{P}(E_{1})$ will be maximised when $B'$ is maximised and $|\bigl(A(1.0767\rho)\cup Z\bigr)\setminus Y|$ is minimised. By the Isoperimetric Theorem, this will occur when $Y$ is the small disk centred on $a$ whose $r$ blow-up just covers $A(1.0767\rho)$. In this case:
\[\text{radius}(Y)=1.0767\rho-r\leq 0.0768\rho\]
And so, omitting the trivial but tedious calculations to evaluate $|A(1.0767\rho)|$:
\begin{align}
\bigl|Z\cup A(1.0767\rho)\setminus Y\bigr| & \geq |D_{a}(\rho)|+|A(1.0767\rho)|-\pi (0.0768\rho)^{2}\notag\\
& > 3.4602\rho^{2}\notag
\end{align}
Thus:
\begin{align}
\mathbb{P}(E_{1}) & \leq \left(\frac{|B'|}{|B'|+|Z\cup A(1.0767\rho)\setminus Y|}\right)^{k}\notag\\
&\leq \left(\frac{(\pi/3+\sqrt{3}/2)\rho^{2}}{(\pi/3+\sqrt{3}/2)\rho^{2}+3.4602\rho^{2}}\right)^{0.9684\log n}\notag\\
&< n^{-1.00004}\label{ShowingBeta1}
\end{align}
Since there are only $\textrm{O}(n\log n)$ such systems, (\ref{ShowingBeta1}) tells us that $E_{1}$ will not occur for any of them with high probability, and so Condition~\ref{BTBeta} holds with high probability.
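The exponent in (\ref{ShowingBeta1}) can be checked directly (a numerical aside in units of $\rho^{2}$, not part of the proof; the variable names are our own):

```python
import math

b = math.pi / 3 + math.sqrt(3) / 2   # upper bound on |B'| (in units of rho^2)
area = 3.4602                        # lower bound on |Z u A(1.0767 rho) \ Y| (same units)
exponent = 0.9684 * math.log((b + area) / b)
assert 1 < exponent < 1.0001         # i.e. P(E_1) < n^{-1.00004...}, beating O(n log n) systems
print(round(exponent, 5))  # 1.00004
```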
To show Condition~\ref{BTBDense} holds with high probability we first show that $\overrightarrow{a\beta}$ is an out edge with high probability. Then since $a\beta$ cannot be an edge, $D_{\beta}(\Vert a\beta\Vert)$ must contain $k$ points, and we finish the proof by showing that the nearest $k$ of these to $\beta$ will all lie in $B^{*}$ with high probability.
If, for some small component, $\overrightarrow{a\beta}$ were not an out edge then, since $\#Y\leq 0.309k$, there would be at least $(1-0.309)k=0.691k$ points in $B(\Vert a\beta\Vert)\subset B(1.0767\rho)$. Then there would be some triple $(a,b,Y)$ with $\# B(1.0767\rho)\geq 0.691k$ and $\# Z=0$. We write $E_{2}$ for the event that a given triple meeting Conditions~\ref{BTDiam} and \ref{BTDist} has $\# B(1.0767\rho)\geq 0.691k$ and $\# Z=0$. Calculations show that $|B(1.0767\rho)|\leq 0.1632\rho^{2}$, and we know that $|Z|\geq \pi r^{2}$, thus:
\begin{align}
\mathbb{P}(E_{2}) & \leq \left(\frac{|B(1.0767\rho)|}{|B(1.0767\rho)\cup Z|}\right)^{0.691k}\notag\\
& \leq \left(\frac{0.1632\rho^{2}}{0.1632\rho^{2}+\pi r^{2}}\right)^{0.691k}\notag\\
& < n^{-2.3}
\end{align}
Thus, $E_{2}$ does not occur for any triple $(a,b,Y)$ with high probability, and so $\overrightarrow{a\beta}$ will be an out edge with high probability.
This tells us that $D_{\beta}(\Vert a\beta\Vert)$ must contain $k$ points, and we know that none of these points are in $Z\cup A(\Vert a\beta\Vert)$. Thus they must lie in $B^{*}\cup Y$. We complete the proof by showing that with high probability none of the $k$-nearest neighbours of $\beta$ lie in $Y$.
If there were a point, $\gamma$, in $D_{\beta}(\Vert a\beta\Vert)\cap Y$ such that $\gamma$ was one of the $k$-nearest neighbours of $\beta$, then there must be $k$ points within $D_{\gamma}(\Vert\beta\gamma\Vert)$ since $\beta\gamma$ is not an edge of $G$. At most $0.309k$ of these can be in $Y$ by Proposition~\ref{XSmall}, and no other points can be within $D_{\gamma}(\rho)$. Thus there must be at least $0.691k$ points within $D_{\gamma}(\Vert\beta\gamma\Vert)\setminus (D_{\gamma}(\rho)\cup Y\cup Z)\subset D_{\gamma}(\Vert\beta\gamma\Vert)\setminus (D_{\gamma}(\rho)\cup Z)$.
Given a system $(a,b,\beta,\gamma,Y)$ with $a$, $b$, and $Y$ as before, $\beta\in A(1.0767\rho)$ and $\gamma\in D_{\beta}(\Vert a\beta\Vert)\cap Y$, we write $E_{3}$ for the event that $\# Z=0$ and $\# D_{\gamma}(\Vert\beta\gamma\Vert)\setminus (D_{\gamma}(\rho)\cup Z)\geq 0.691k$. We know $|Z|\geq \pi r^{2}$ and $|D_{\gamma}(\Vert\beta\gamma\Vert)\setminus (D_{\gamma}(\rho)\cup Z)|\leq \pi (1.0767^{2}-1)\rho^{2}$, thus:
\begin{align}
\mathbb{P}(E_{3}) & \leq \left(\frac{|D_{\gamma}(\Vert\beta\gamma\Vert)\setminus (D_{\gamma}(\rho)\cup Z)|}{|Z\cup D_{\gamma}(\Vert\beta\gamma\Vert)\setminus (D_{\gamma}(\rho)\cup Z)|}\right)^{0.691k}\notag\\
& \leq \left(\frac{\pi (1.0767^{2}-1)\rho^{2}}{\pi (r^{2}+1.0767^{2}\rho^{2}-\rho^{2})}\right)^{0.691k}\notag\\
& < n^{-1.3}
\end{align}
Thus, with high probability, $E_{3}$ does not occur for any such system $(a,b,\beta,\gamma,Y)$, and so in particular none of the $k$ nearest neighbours of $\beta$ will be in $Y$ with high probability, and so we will have $\# B^{*}\geq k$ with high probability as required.
\end{proof}
We can now prove our stronger bound on the connectivity threshold, but first state a result about the probability of two intersecting regions being dense, which can be read out of the proof of Theorem~15 of [\ref{MW}].
\begin{lem}\label{IntersectingLemma}
Let $A_{1}$, $A_{2}$, $A_{3}$ and $A_{4}$ be four disjoint regions of $S_{n}$ and let $n_{i}=\# A_{i}$. Then, so long as $|A_{1}|\leq|A_{3}|<2|A_{1}|$, we have:
\begin{align*}
\mathbb{P}(n_{1}+n_{2}\geq k\textrm{, }n_{2}+n_{3}\geq k\textrm{ and }n_{4}=0) & \leq \mu^{-k}n^{o(1)}
\end{align*}
where $\mu$ is the solution to:
\begin{align*}
\sum_{i=1}^{4}|A_{i}|=\mu|A_{2}|+\sqrt{4\mu|A_{1}||A_{3}|}
\end{align*}
\hfill$\square$
\end{lem}
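Although $\mu$ is defined only implicitly, substituting $t=\sqrt{\mu}$ turns the defining equation into a quadratic in $t$, so $\mu$ is easy to compute. A small sketch (`solve_mu` is our own name, and the area values in the final lines are arbitrary illustrative inputs, not quantities from the paper):

```python
import math

def solve_mu(a1, a2, a3, a4):
    """Solve sum_i |A_i| = mu*|A_2| + sqrt(4*mu*|A_1|*|A_3|) for mu > 0.

    Substituting t = sqrt(mu) gives a2*t^2 + 2*sqrt(a1*a3)*t - S = 0 with
    S = a1 + a2 + a3 + a4; mu is the square of the positive root (needs a2 > 0).
    """
    s = a1 + a2 + a3 + a4
    half_b = math.sqrt(a1 * a3)
    t = (-half_b + math.sqrt(half_b**2 + a2 * s)) / a2
    return t * t

# Arbitrary illustrative areas satisfying |A_1| <= |A_3| < 2|A_1|:
mu = solve_mu(1.0, 0.5, 1.5, 3.0)
assert abs(mu * 0.5 + math.sqrt(4 * mu * 1.0 * 1.5) - 6.0) < 1e-9
```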
\begin{thm-hand}{\ref{TightBoundThm}}
If $k=c\log n$ and $c>0.9684$, then $G$ is connected with high probability.
\end{thm-hand}
\begin{proof}
We know that if $G$ contains a small component then with high probability there will be a system $(a,b,\beta,Y)$ meeting all the conditions of Lemma~\ref{TBound1}. We show that for $c>0.9684$ no such system meets all these conditions with high probability.
Given a system $(a,b,\beta,Y)$ meeting Conditions~\ref{BTDiam}, \ref{BTDist}, \ref{BTNoBoundary} and \ref{BTBeta} of Lemma~\ref{TBound1} (so that there are $\textrm{O}(n(\log n)^{2})$ such systems), we write $E$ for the event that $\#B'\geq k$, $\#B^{*}\geq k$ and $\#Z=0$, and set:
\begin{align*}
B_{1} & = B'\setminus B^{*}\\
B_{2} & = B'\cap B^{*}\\
B_{3} & = B^{*}\setminus B'
\end{align*}
We write $n_{i}=\# B_{i}$ for $i=1,2,3$ and $n_{4}=\#Z$; then $E$ is the event that $n_{1}+n_{2}\geq k$, $n_{2}+n_{3}\geq k$ and $n_{4}=0$.
We wish to apply Lemma~\ref{IntersectingLemma}, but need to make sure that either $|B_{1}|\leq|B_{3}|< 2|B_{1}|$ or $|B_{3}|\leq|B_{1}|< 2|B_{3}|$. We know that $|B'|\leq (\tfrac{\pi}{3}+\tfrac{\sqrt{3}}{2})\rho^{2}$, and calculations show that $|B^{*}|<2.31\rho^{2}$ and $|B'\cap B^{*}|<0.6515\rho^{2}$. From this it is easily checked that the conditions will hold unless at least one of $|B^{*}|$ or $|B'|$ is small whilst the other is large; in particular, at least one of $|B_{1}|\leq|B_{3}|< 2|B_{1}|$ or $|B_{3}|\leq|B_{1}|< 2|B_{3}|$ will hold so long as $|B^{*}|\geq 1.73\rho^{2}$ and $|B'|\geq 1.73\rho^{2}$. When one of these does not hold, we note that $\mathbb{P}(E)\leq\mathbb{P}(\#Z=0\text{ and }\#B'\geq k)$ and $\mathbb{P}(E)\leq\mathbb{P}(\#Z=0\text{ and }\#B^{*}\geq k)$, and apply Lemma~\ref{Full-Empty}. Thus we have:
\begin{align}
\mathbb{P}(E) & \leq \mathbb{P}(|B'|,|B^{*}|\geq1.73\rho^{2})\mathbb{P}(E\big| |B'|,|B^{*}|\geq1.73\rho^{2})\notag\\
& \quad +\mathbb{P}(|B'|<1.73\rho^{2})\mathbb{P}(E\big| |B'|<1.73\rho^{2})\notag\\
& \quad +\mathbb{P}(|B^{*}|<1.73\rho^{2})\mathbb{P}(E\big| |B^{*}|<1.73\rho^{2})\notag\\
& \leq \Max{|B'|,|B^{*}|\geq1.73\rho^{2}}\mu^{-k}n^{o(1)} + \Max{|B'|<1.73\rho^{2}}\left(\frac{|B'|}{|B'|+|Z|}\right)^{k}\notag\\
& \quad + \Max{|B^{*}|<1.73\rho^{2}}\left(\frac{|B^{*}|}{|B^{*}|+|Z|}\right)^{k}\notag\\
& < \Max{|B'|,|B^{*}|\geq1.73\rho^{2}}\mu^{-k}n^{o(1)} + 2\left(\frac{1.73\rho^{2}}{1.73\rho^{2}+\pi r^{2}}\right)^{k}\notag\\
& \leq \Max{|B'|,|B^{*}|\geq1.73\rho^{2}}\mu^{-k}n^{o(1)} + 2n^{-1.01}\label{ThmExp2}
\end{align}
where:
\begin{align}
|Z|+\sum_{i} |B_{i}|= \mu|B_{2}| + \sqrt{4\mu|B_{1}||B_{3}|}\label{ThmEqn3}
\end{align}
Thus $\mathbb{P}(E)$ will be maximised exactly when $\mu$ is minimised, which will be when $B^{*}$ overlaps with $B'$ as much as possible and $|B'|$ and $|B^{*}|$ are maximal. This will happen when $\beta$ is located at $\partial D_{a}(1.0767\rho)\cap \partial B'$. Calculating $\mu$ in this case yields $\mu>2.8087$.
Using this, we find that the exponent of the first term of (\ref{ThmExp2}) is strictly less than $-1$ for $c>0.9684$, and so if $c>0.9684$, $E$ will not occur for any system $(a,b,\beta,Y)$ with high probability, and so, with high probability, $G$ will be connected.
\end{proof}
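The final numerical step of the proof can be checked directly (an aside, not part of the proof; `critical_c` is our own name):

```python
import math

# The first term of (ThmExp2) is mu^{-k} n^{o(1)} = n^{-c log(mu) + o(1)} for k = c log n,
# so it vanishes against the O(n (log n)^2) systems as soon as c * log(mu) > 1.
# With mu > 2.8087, the critical value of c is 1 / log(2.8087):
critical_c = 1 / math.log(2.8087)
assert critical_c < 0.9684
print(round(critical_c, 4))  # 0.9683
```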
\section{Conclusion and Open Questions}
In the last section we worked quite hard to bring the bound for the connectivity threshold down below $\log n$. In fact, the bound we proved, $0.9684\log n$, is lower than the previous best known bound of $0.9967\log n$ for the directed model, proved in [\ref{MW2}]; since the edges in our strict undirected model are exactly the bidirectional edges of the directed model, our result improves the bound for the directed model as well.
In fact, we believe a much stronger result holds. It seems that in both the directed model and the strict undirected model the barrier to connectivity is an isolated vertex (or at least a very concentrated cluster of sub-logarithmic size). If this is the case, then it seems likely that the connectivity threshold for both models is the same. (This does not immediately follow from the barrier in both cases being an isolated vertex: in the directed model the isolated vertex is in an in-component by itself, whereas it may be possible that an isolated point in the strict undirected model has in-edges, but not from any of its $k$-nearest neighbours; however, set-ups where this occurs seem less likely than an isolated vertex in an in-component.)
In fact, the lower bound proved on the connectivity threshold for both models is essentially the threshold for having a point with no in-edges, and so putting this all together motivates the following conjecture:
\begin{conjecture}
The barrier for connectivity for both the directed model and the strict undirected model, is an isolated vertex (or concentrated cluster of sub-logarithmic size) with no in-edges, and so the connectivity threshold in both models is the same (and something a little over $0.7209\log n$).
\end{conjecture}
It is possible to strengthen the bounds of several of the results proved in this paper (although with a fair amount of extra work). The upper bound on the size of a small component around the connectivity threshold of $0.309\log n$ (Proposition~\ref{XSmall}) can be improved to $0.203\log n$ by using a stronger version of Lemma~\ref{Full-Empty2} (although the conditions needed to apply it then require more work to check).
The bound on the threshold for the edges of different components crossing (Theorem~\ref{nocrossing}) can also be improved significantly. By determining the exact positions of $a_{1}$ and $a_{2}$ that maximise the ratio $|H|/|H\cup L|$ the bound can be reduced to around $0.5\log n$, although this is almost certainly still a long way off the actual threshold.
\begin{appendix}
\section{Definitions and Notation from Section \ref{NoCrossSection}}\label{DefApp}
We collate here all the definitions and notation used in Section \ref{NoCrossSection} in the order in which they appear.
\begin{itemize}
\item We say that $a_{1}$, $a_{2}$, $b_{1}$ and $b_{2}$ form a \emph{crossing pair} if there are two different components $X$ and $Y$ with $a_{1}$, $a_{2}\in X$, $b_{1}$, $b_{2}\in Y$ and the straight line segments $a_{1}a_{2}$ and $b_{1}b_{2}$ intersect and are both in the graph $G$, such that $\Vert a_{1}a_{2}\Vert\leq\Vert b_{1}b_{2}\Vert$, $\Vert a_{1}b_{1}\Vert\leq \Vert a_{1}b_{2}\Vert$ and $\textrm{d}(a_{1},b_{1}b_{2})\leq\textrm{d}(a_{2},b_{1}b_{2})$.
\item For $i=1,2$, $r_{i}=\min\{\Vert a_{i}b_{1}\Vert,\Vert a_{i}b_{2}\Vert\}$ (so that $r_{1}=\Vert a_{1}b_{1}\Vert$).
\item For $i=1,2$, $A_{i}=D_{a_{i}}(r_{i})$.
\item For $i=1,2$, $B_{i}=D_{b_{i}}(1)$.
\item $w=(\frac{1}{2},\frac{1}{2\sqrt{3}})$.
\item $T$ is the triangle with vertices $b_{1}$, $b_{2}$ and $w$.
\item $S_{1}$ is the region $T\setminus(D_{b_{1}}(\frac{1}{2})\cup D_{b_{2}}(\frac{1}{2}))$.
\item $z=(\frac{1}{2},-\frac{\sqrt{3}}{2})$.
\item $T_{2}$ is the triangle with vertices $b_{1}$, $b_{2}$ and $z$.
\item $S_{2}$ is the region $T_{2}\cap A_{1}\cap \{x\in S_{n}:x\widehat{b_{1}}b_{2}>\frac{\pi}{6}\textrm{ and }x\widehat{b_{2}}b_{1}>\frac{\pi}{6}\}$.
\item $R_{1}$ is the region $D^{k}(a_{1})\cap(B_{1}\setminus B_{2})$ and $R_{2}$ is the region $D^{k}(a_{1})\cap(B_{2}\setminus B_{1})$.
\item For $i=1,2$, $E_{i}$ is the elliptical region $\{x\in S_{n}:\Vert b_{i}x\Vert+\Vert a_{1}x\Vert\leq 1\}$. We write $E_{i}(a_{1})$ for this ellipse when $a_{1}$ is specified.
\item For $i=1,2$, $F_{i}$ is the elliptical region $\{x\in S_{n}:\Vert b_{i}x\Vert+\Vert a_{2}x\Vert\leq 1\}$. We write $F_{i}(a_{2})$ for this ellipse when $a_{2}$ is specified.
\item For a set $S\subset S_{n}$, we write $S^{+}$ for the part of $S$ which lies above the line through $b_{1}$ and $b_{2}$, and $S^{-}$ for the part of $S$ which lies below the line through $b_{1}$ and $b_{2}$.
\item $M$ is the region $D^{k}(a_{1})\cap D^{k}(a_{2})$.
\item $L_{1}=(D^{k}(a_{1})\cap E_{1}\cap D_{b_{1}}(1/2))\setminus M$.
\item $L_{2}=(D^{k}(a_{1})\cap E_{2}\cap D_{b_{2}}(1/2))\setminus M$.
\item $L_{3}=M^{+}\cap D_{b_{1}}(1/2)\cap D_{b_{2}}(1/2)$.
\item $L_{4}=T_{2}\cap D^{k}(a_{2})\cap \{x:x\widehat{b_{1}}b_{2}\leq \pi/6\textrm{ or }x\widehat{b_{2}}b_{1}\leq \pi/6\}$.
\item $L_{5}=(D^{k}(a_{2})\cap F_{1}\cap D_{b_{1}}(1/2))\setminus T_{2}$.
\item $L_{6}=(D^{k}(a_{2})\cap F_{2}\cap D_{b_{2}}(1/2))\setminus T_{2}$.
\item $H_{1}=R_{1}\setminus L_{1}$.
\item $H_{2}=R_{2}\setminus L_{2}$.
\item $H_{3}=A_{2}\setminus (B_{1}\cup B_{2})$.
\item $H_{4}=M^{+}\setminus L_{3}$.
\item $H = S_{2}\cup\bigcup_{i=1}^{4}H_{i}$.
\item $L = \bigcup_{i=1}^{6}L_{i}$.
\item $v^{+}=(\frac{3}{4},\frac{\sqrt{3}}{4})$.
\item $v^{-}=(\frac{3}{4},-\frac{\sqrt{3}}{4})$.
\item $u^{+}=(\frac{1}{4},\frac{\sqrt{3}}{4})$.
\item $u^{-}=(\frac{1}{4},-\frac{\sqrt{3}}{4})$.
\item $w'=(\frac{1}{2},-\frac{1}{2\sqrt{3}})$.
\item For $i=1,2$, $\rho_{i}$ is the radius of $D^{k}(a_{i})$.
\end{itemize}
\end{appendix}
https://arxiv.org/abs/1412.4595 | Maximal-clique partitions and the Roller Coaster Conjecture | A graph $G$ is {\em well-covered} if every maximal independent set has the same cardinality $q$. Let $i_k(G)$ denote the number of independent sets of cardinality $k$ in $G$. Brown, Dilcher, and Nowakowski conjectured that the independence sequence $(i_0(G), i_1(G), \ldots, i_q(G))$ was unimodal for any well-covered graph $G$ with independence number $q$. Michael and Traves disproved this conjecture. Instead they posited the so-called ``Roller Coaster'' Conjecture: that the terms \[i_{\left\lceil\frac{q}2\right\rceil}(G), i_{\left\lceil\frac{q}2\right\rceil+1}(G), \ldots, i_q(G) \] could be in any specified order for some well-covered graph $G$ with independence number $q$. Michael and Traves proved the conjecture for $q<8$ and Matchett extended this to $q<12$. In this paper, we prove the Roller Coaster Conjecture using a construction of graphs with a property related to that of having a maximal-clique partition. In particular, we show, for all pairs of integers $1\le k<q$ and positive integers $m$, that there is a well-covered graph $G$ with independence number $q$ for which every independent set of size $k+1$ is contained in a unique maximal independent set, but each independent set of size $k$ is contained in at least $m$ distinct independent sets. | \section{Introduction}\label{S:introduction}
The behavior of the coefficients of the independence polynomial of graphs in various classes has produced many interesting problems. For a graph $G$, we let $\mathcal{I}(G)$ be the set of independent sets in $G$, i.e., $\mathcal{I}(G)=\setof{I\subseteq V(G)}{E(G[I])=\emptyset}$. Also, let $\mathcal{I}_k(G)=\setof{I\in \mathcal{I}(G)}{\abs{I}=k}$ and $i_k(G)=\abs{\mathcal{I}_k(G)}$. The \emph{independence number of $G$} is given by $\alpha(G)=\max \setof{k\in \mathbb{N}}{i_k(G)>0}$. We let the \emph{independence polynomial of $G$} be the polynomial defined by
\[
I(G;x)=\sum_{k=0}^{\alpha(G)} i_k(G)x^k.
\]
We refer to $(i_0(G),i_1(G),\ldots,i_{\alpha(G)}(G))$ as the \emph{independence sequence of $G$}.
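For a concrete illustration of these definitions, the independence sequence of a small graph can be computed by brute force; this sketch (our own code, not from the paper) computes it for the $4$-cycle:

```python
from itertools import combinations

def independence_sequence(n, edges):
    """Brute-force independence sequence (i_0, ..., i_alpha) of a graph on 0..n-1."""
    edge_set = {frozenset(e) for e in edges}
    counts = []
    for k in range(n + 1):
        counts.append(sum(
            1 for s in combinations(range(n), k)
            if all(frozenset(p) not in edge_set for p in combinations(s, 2))))
    while counts[-1] == 0:   # trim past the independence number
        counts.pop()
    return counts

# The 4-cycle: both of its maximal independent sets, {0,2} and {1,3}, have size 2.
print(independence_sequence(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # [1, 4, 2]
```

The output $(1,4,2)$ also illustrates Theorem~\ref{T:mt}: $1/\binom{2}{0}\le 4/\binom{2}{1}\le 2/\binom{2}{2}$.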
Natural questions arise when one considers possible orderings of the coefficients of the independence sequence over various classes of graphs. If one considers the class of all graphs, then Alavi, Erd\H{o}s, Malde, and Schwenk \cite{AEMS} proved that the coefficients can be ordered in any way apart from $i_0(G)=1$. In particular, they proved the following. Throughout the paper, we let $[n]=\set{1,2,\ldots,n}$.
\begin{thm}[Alavi, Erd\H{o}s, Malde, Schwenk \cite{AEMS}]
Given a positive integer $q$ and a permutation $\pi$ of $[q]$, there is a graph $G$ with $\alpha(G)=q$ such that
\[
i_{\pi(1)}(G)<i_{\pi(2)}(G)<\cdots<i_{\pi(q)}(G).
\]
\end{thm}
A graph $G$ is said to be \emph{well-covered} if every maximal independent set in $G$ has the same size. Brown, Dilcher, and Nowakowski \cite{BDN} conjectured that the independence sequence of any well-covered graph is unimodal. This conjecture was disproved by Michael and Traves \cite{MT}. However, they were able to show the following.
\begin{thm}[Michael, Traves~\cite{MT}]\label{T:mt}
The independence sequence of a well-covered
graph $G$ with $\alpha(G)=q$ satisfies
\[
\frac{i_0(G)}{\binom{q}0}\le\frac{i_1(G)}{\binom{q}1}\le\ldots\le\frac{i_{q}(G)}{\binom{q}{q}}.
\]
\end{thm}
This implies the following.
\begin{cor}[Michael, Traves \cite{MT}]
If $G$ is a well-covered graph with $\alpha(G)=q$, then
\[
i_0(G)<i_1(G)<\cdots<i_{\ceil{q/2}}(G).
\]
\end{cor}
In addition, Michael and Traves conjectured that the second half of the independence sequence can be ``any-ordered''. To be precise, they conjectured the following, which has become known as the Roller Coaster Conjecture.
\begin{conj}[Michael, Traves \cite{MT}; Roller Coaster Conjecture]\label{C:rollercoaster}
Given a positive integer $q$ and a permutation $\pi$ of $\set{\ceil{q/2},\ceil{q/2}+1,\ldots,q}$, there is a well-covered graph $G$ with $\alpha(G)=q$ and
\[
i_{\pi(\ceil{q/2})}(G)<i_{\pi(\ceil{q/2}+1)}(G)<\cdots<i_{\pi(q)}(G).
\]
\end{conj}
In addition, Michael and Traves proved the conjecture for $q\leq 7$. Matchett \cite{M} was able to prove the Roller Coaster Conjecture for $q\leq 11$. He also proved that for sufficiently large $q$, the last $(0.1705)q$ terms in the independence sequence of some well-covered graph can be any-ordered. Related work has been done in the context of pure $O$-sequences in order ideals \cite{BMMNZ}.
We will show that a partial converse to Theorem~\ref{T:mt} is true. Consider the following definition.
\begin{definition}
We say that a polynomial $a_{q}x^{q}+\cdots+a_1x$ is an \emph{approximate
well-covered independence polynomial} if for all real numbers
$\epsilon>0$, there exists a well-covered graph $G$ of independence
number $q$ and a real number $T$ such that for all $1\le k\le q$,
\begin{equation}
\abs{\frac{i_k(G)}T-a_k}<\epsilon.\label{eqn:eps}
\end{equation}
Given such real numbers $T$ and $\epsilon$ and such a graph $G$, we say that $G$ is an \emph{$\epsilon$-certificate for $a_{q}x^{q}+\cdots+a_1x$ with scaling factor $T$}.
\end{definition}
Theorem~\ref{T:mt} implies that for an approximate well-covered independence polynomial $a_{q}x^{q}+\ldots+a_1x$, we have
\begin{equation}
\frac{a_1}{\binom{q}{1}}\le\frac{a_2}{\binom{q}2}
\le\ldots\le\frac{a_{q}}{\binom{q}{q}}.\label{eqn:binoms}
\end{equation}
We will show that given a sequence of non-negative real numbers $(a_1,a_2,\ldots,a_{q})$ satisfying (\ref{eqn:binoms}), the polynomial $\sum_{i=1}^{q} a_i x^i$ is an approximate well-covered independence polynomial. In order to do this, we will construct well-covered graphs with independence sequence satisfying (\ref{eqn:eps}) for some real number $T$. We will construct these graphs from graphs satisfying the following property.
\begin{definition}
For integers $0\le k<q$ and $1\le m$, we say that a graph $G$ satisfies the
property $P(k,q;m)$ if:
\begin{enumerate}
\item All maximal cliques in $G$ are of size $q$,
\item Each clique of size $k+1$ in $G$ is contained in a unique maximal clique, and
\item Each clique of size $k$ in $G$ is contained in at least $m$ maximal cliques.
\end{enumerate}
\end{definition}
Note that if $G$ satisfies property $P(k,q;m)$, then its complement is a well-covered graph with independence number $q$. It seems that graphs that satisfy property $P(k,q;m)$ have not been studied up to this point, but they are related to the study of maximal-clique covers and partitions in graphs (see, e.g., \cite{PSW}). A \emph{maximal-clique covering} of a graph $G$ is a set of maximal cliques in $G$ whose union contains each edge of $G$ at least once. A maximal-clique covering in which every edge is in exactly one element of the covering is a \emph{maximal-clique partition}. In our case, instead of covering edges, we are covering cliques of size $k+1$ with maximal cliques. In addition to this, we are covering cliques of size $k$ with at least $m$ distinct maximal cliques. Clique coverings have recently been found to have implications in design theory (see, e.g., \cite{BBCM}) and so graphs satisfying $P(k,q;m)$ may as well.
In Section~\ref{S:construction} we give a construction of graphs which
satisfy property $P(k,q;m)$. In Section~\ref{S:proofs}, we use these graphs to
prove that (\ref{eqn:binoms}) is a necessary condition for $a_{q}x^{q}+\cdots+a_1x$
to be an approximate well-covered independence polynomial. Finally,
in Section~\ref{S:proofs2}, we show that this implies the Roller Coaster Conjecture, i.e., we prove the following.
\begin{thm}\label{thm:rct}
Given a positive integer $q$ and a permutation $\pi$ of $\set{\ceil{q/2},\ceil{q/2}+1,\ldots,q}$, there is a well-covered graph $G$ with $\alpha(G)=q$ and
\[
i_{\pi(\ceil{q/2})}(G)<i_{\pi(\ceil{q/2}+1)}(G)<\cdots<i_{\pi(q)}(G).
\]
\end{thm}
\section{Graph Construction}\label{S:construction}
For a set $S$ and positive integer $k$, let $\binom Sk=\setof{A\subseteq S}{\abs{A}=k}$. Fix integers $k$, $q$, and $m$ with $1\le k<q$ and $1\le m$. For $i\in [q]$, define $\Fmk{i}$ to be the following set of functions:
\[
\Fmk{i}=\set{f:\binom{[q]\setminus \set{i}}{k}\to [m]}.
\]
Our graph is defined in terms of elements of $\Fmk{i}$.
\begin{definition}
For integers $k$, $q$, and $m$ with $1\le k<q$ and $1\le m$, we define $H_{k,q;m}$ to be the graph with vertex set
\[
\bigcup_{i=1}^{q} \Fmk{i},
\]
and, for $f\in \Fmk{i}$ and $g\in \Fmk{j}$, we let $f\sim g$ if and only if $i\neq j$ and
\[
f\big|_A=g\big|_A,
\]
where $A=\binom{[q]\wo \set{i,j}}k$. If $k=0$, we define $H_{0,q;m}=mK_q$.
\end{definition}
For example, if $m=1$ and $k\ge 1$, then $\Fmk{i}$ consists of one (constant) function and so $H_{k,q;m}=K_{q}$. For a function $f:\binom{[q]}k\to[m]$, denote by $C_f$ the set of restrictions of $f$ to $\binom{[q]\setminus\{i\}}k$ for $1\le i\le q$. Note that $C_f$ has size $q$.
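To make the construction concrete, here is a small sketch (our own code, not part of the paper) that builds $H_{k,q;m}$ for $k=1$, $q=3$, $m=2$ and verifies property $P(1,3;2)$ by exhaustive search; the vertex encoding and function names are ours.

```python
from itertools import combinations, product

def build_H(k, q, m):
    """Vertices of H_{k,q;m}: pairs (i, f) with f a function C([q] \\ {i}, k) -> [m]."""
    verts = []
    for i in range(1, q + 1):
        dom = list(combinations([j for j in range(1, q + 1) if j != i], k))
        for vals in product(range(1, m + 1), repeat=len(dom)):
            verts.append((i, dict(zip(dom, vals))))
    def adj(u, v):
        (i, f), (j, g) = u, v
        # f ~ g iff i != j and f, g agree on C([q] \ {i,j}, k).
        return i != j and all(f[A] == g[A] for A in f if A in g)
    return verts, adj

verts, adj = build_H(1, 3, 2)
n = len(verts)
cliques = [frozenset(c) for r in range(1, n + 1) for c in combinations(range(n), r)
           if all(adj(verts[a], verts[b]) for a, b in combinations(c, 2))]
maximal = [c for c in cliques if not any(c < d for d in cliques)]
assert all(len(c) == 3 for c in maximal)                 # all maximal cliques have size q
for c in (c for c in cliques if len(c) == 2):            # (k+1)-cliques: unique maximal clique
    assert sum(1 for d in maximal if c <= d) == 1
for c in (c for c in cliques if len(c) == 1):            # k-cliques: at least m maximal cliques
    assert sum(1 for d in maximal if c <= d) >= 2
```

Here $H_{1,3;2}$ has $3\cdot 2^2=12$ vertices, and the $8$ maximal cliques are exactly the sets $C_F$ for the $m^{\binom{q}{k}}=2^3$ functions $F:\binom{[3]}{1}\to[2]$.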
\begin{lem}\label{lem:fcns}
For integers $k$, $q$, and $m$ with $1\le k<q$ and $1\le m$, every clique in $H_{k,q;m}$ is contained in a clique of the form $C_f$ for some function $f:\binom{[q]}k\to[m]$, and so each maximal clique in $H_{k,q;m}$ is of size $q$. Furthermore, each clique of size $k+1$ is contained in a unique such clique, while every clique of size $k$ is contained in $m$ distinct such cliques.
\end{lem}
\begin{proof}
In order for a set of vertices in $H_{k,q;m}$, say $\set{f_1,f_2,\ldots,f_r}$, to be a clique, it must be the case that, for each $j\in [r]$, there is an $i_j\in [q]$ such that $f_j\in \Fmk{i_j}$. Further, we have $i_j\neq i_{j'}$ whenever $j\neq j'$. If $A\in \binom{[q]}k$ and $i_j\not\in A$ for some $j\in [r]$, then $A$ is in the domain of $f_j$ and any other function in the clique must agree with $f_j$ on $A$ (provided $A$ is in its domain). Thus, any clique in $H_{k,q;m}$ consists of restrictions of functions of the form $f:\binom{[q]}k\to [m]$. Note that if $B\in \binom{[q]}k$ and $B\supseteq \setof{i_j}{j\in [r]}$, then $B$ is not in the domain of any of the functions in the clique.
Consider a clique in $H_{k,q;m}$ of size $k+1$ consisting of vertices $\set{g_1,g_2,\ldots,g_{k+1}}$ where $g_j\in \Fmk{i_j}$ for $j\in [k+1]$. There is no $A\in \binom{[q]}k$ such that $A\supseteq \setof{i_j}{j\in [k+1]}$. Thus, there is a unique $f:\binom{[q]}k\to [m]$ such that $g_j\in C_f$ for each $j\in [k+1]$ and so there is a unique $q$-clique containing the $(k+1)$-clique.
If $\set{h_1,h_2,\ldots,h_k}$ is a $k$-clique in $H_{k,q;m}$ where $h_j\in \Fmk{i_j}$ for $j\in [k]$, then $B=\setof{i_j}{j\in [k]}\in \binom{[q]}k$ is not in the domain of any of the $h_j$s. Since a value on $B$ has not been specified, there are $m$ functions $f:\binom{[q]}k\to [m]$ such that $h_j\in C_f$ for all $j\in [k]$. Therefore, the $k$-clique is contained in $m$ distinct maximal cliques in $H_{k,q;m}$.
\end{proof}
\begin{thm}\label{C:bigguess}
For any integers $k$, $q$, and $m$ with $0\le k<q$ and $1\le m$, the graph $H_{k,q;m}$ satisfies property $P(k,q;m)$.
\end{thm}
\begin{proof}
The case when $k\ge 1$ follows immediately from Lemma~\ref{lem:fcns}. When $k=0$, we have $H_{0,q;m}=mK_q$. Each vertex, or $K_1$, in $mK_q$ is in a unique $K_q$, while the empty set is in all $m$ of the $K_q$s. Thus, $H_{0,q;m}$ satisfies $P(0,q;m)$.
\end{proof}
\section{Partial converse to Theorem~\ref{T:mt}}\label{S:proofs}
Our main goal in this section is to prove the following theorem.
\begin{thm}\label{thm:converse}
For all positive integers $q$ and sequences of non-negative real numbers $(a_1, \ldots, a_{q})$ satisfying
\[\frac{a_1}{\binom{q}{1}}\le\frac{a_2}{\binom{q}2}\le\ldots\le\frac{a_{q}}{\binom{q}{q}},\]
$a_1x+a_2x^2+\ldots+a_{q}x^{q}$ is an approximate well-covered independence polynomial.
\end{thm}
We begin by using the graphs $H_{k,q;m}$ to generate approximate well-covered independence polynomials.
\begin{lem}\label{lem:hpoly}
For all integers $0\le k< q$, the polynomial $\sum_{j=k+1}^{q}\binom{q}{j}x^j$ is an approximate well-covered independence polynomial.
\end{lem}
\begin{proof}
Fix $\epsilon>0$, and let $m$ be a positive integer such that $\frac{2^{q}}m<\epsilon$. Note that, by Theorem~\ref{C:bigguess}, for integers $k$, $q$, and $m$ with $0\le k< q$ and $1\le m$, $H_{k,q;m}$ satisfies property $P(k,q;m)$ and so the complement of $H_{k,q;m}$ is a well-covered graph with independence number $q$.
Suppose $H_{k,q;m}$ has $T$ cliques of size $q$, and let us consider the number of cliques of size $j$ for $1\le j\le q$. Clearly, there are $T\binom{q}{j}$ pairs of cliques $(K_1,K_2)$ such that $K_1$ is of size $q$, $K_2$ of size $j$, and $K_2\subseteq K_1$. If $j\geq k+1$, then each clique of size $j$ contains a clique of size $k+1$, and
hence is contained in at most one clique of size $q$. Since all maximal cliques
are of size $q$, each clique of size $j$ is contained in a unique clique of size
$q$, and hence there are $T\binom{q}{j}$ cliques of size $j$. On the other hand, if $1\leq j\le k$, then each clique of size $j$ is contained in a clique of size $k$ and is therefore contained in at least $m$ cliques of size $q$. Hence there are at most $T\binom{q}{j}/m<T\epsilon$ cliques of size $j$.
Thus, if $G$ is the complement of $H_{k,q;m}$, then $\frac{i_j(G)}T=\binom{q}{j}$
for $j\geq k+1$. For $1\leq j\le k$, we have $\abs{\frac{i_j(G)}T-0}<\epsilon$, so $G$ is an
$\epsilon$-certificate for $\sum_{j=k+1}^{q}\binom{q}{j}x^j$ with scaling factor $T$.
\end{proof}
Now we show that the class of approximate well-covered independence polynomials
of a given degree is additive. In order to do this, we use the join\footnote{The \emph{join} of graphs $G$ and $H$, denoted $G\vee H$, is the graph with vertex set $V(G)\cup V(H)$ and edge set $E(G)\cup E(H)\cup \setof{xy}{x\in V(G), y\in V(H)}$.} operation on graphs. Note that if $G$ and $H$ are graphs and $k\geq 1$, then $i_k(G\vee H)=i_k(G)+i_k(H)$.
\begin{lem}\label{lem:sumpoly}
If $P_1(x)$ and $P_2(x)$ are approximate well-covered independence polynomials of degree $q$,
then $P_1(x)+P_2(x)$ is an approximate well-covered independence polynomial of degree $q$.
\end{lem}
\begin{proof}
Fix $\epsilon>0$. Let $G_1, G_2$ be $\frac{\epsilon}3$-certificates of $P_1(x)$ and
$P_2(x)$ with scaling factors $T_1$ and $T_2$ respectively. Suppose that all coefficients of $P_1(x)$ and $P_2(x)$ are bounded above by $N$, and let $k_1, k_2$ be positive integers such that
\[
1-\frac{\min(k_1T_1, k_2T_2)}{\max(k_1T_1, k_2T_2)}<\frac{\epsilon}{6N}.
\]
Define $T:=\max(k_1T_1, k_2T_2)$.
Let $G$ be the graph defined as the join of $k_1$ copies of $G_1$ joined to the join of $k_2$ copies of $G_2$, i.e.,
\[
G=\left(\bigvee_{i=1}^{k_1} G_1\right)\vee\left(\bigvee_{i=1}^{k_2} G_2\right).
\]
All independent sets in $G$ are completely contained in a single copy of $G_1$ or a single copy of $G_2$. As such, $G$ is well-covered (since $\alpha(G_1)=\alpha(G_2)=q$) and $i_j(G)=k_1i_j(G_1)+k_2i_j(G_2)$ for all $j\geq 1$.
Suppose that, for $j\geq 1$, the $x^j$ coefficients of $P_1(x)$ and $P_2(x)$ are $p^1_{j}$ and $p^2_{j}$, respectively. Then since $G_1$ is an $\frac{\epsilon}3$-certificate of $P_1(x)$ with scaling factor $T_1$, by definition, $\abs{p^1_{j}-\frac{i_j(G_1)}{T_1}}<\frac{\epsilon}3$. Thus, since $k_1T_1\le T$, we have that
\[
\abs{\frac{k_1T_1p^1_{j}}T-\frac{k_1i_j(G_1)}T}<\frac{\epsilon}3.
\]
Further, $p^1_{j}<N$ and
\[
1-\frac{k_1T_1}T<1-\frac{\min(k_1T_1, k_2T_2)}{\max(k_1T_1, k_2T_2)}<\frac{\epsilon}{6N},
\]
and so we have $\abs{p^1_{j}-\frac{k_1T_1p^1_{j}}T}<\frac{\epsilon}6$. Thus, we see that
\[
\abs{p^1_{j}-\frac{k_1i_j(G_1)}T}<\frac{\epsilon}2.
\]
Similarly,
\[
\abs{p^2_{j}-\frac{k_2i_j(G_2)}T}<\frac{\epsilon}2.
\]
It follows that
\[
\abs{p^1_{j}+p^2_{j}-\frac{k_1i_j(G_1)+k_2i_j(G_2)}T}=\abs{p^1_{j}+p^2_{j}-\frac{i_j(G)}T}<
\epsilon,
\]
so $G$ is an $\epsilon$-certificate of $P_1(x)+P_2(x)$ with scaling factor $T$.
\end{proof}
The same is true for more complicated linear combinations.
\begin{lem}\label{lem:polylincomb}
If $P_1(x), P_2(x), \ldots, P_k(x)$ are approximate well-covered independence polynomials of degree $q$, and $\lambda_1, \ldots, \lambda_k$ are positive real numbers then
$\sum_{i=1}^k\lambda_iP_i(x)$ is an approximate well-covered independence polynomial of degree $q$.
\end{lem}
\begin{proof}
If $G$ is an $\epsilon$-certificate of $P_i(x)$ with scaling factor $T$, then
it is a $\lambda_i\epsilon$-certificate of $\lambda_iP_i(x)$ with scaling factor $\frac{T}{\lambda_i}$. Thus, for each $i\in [k]$, $\lambda_iP_i(x)$ is an
approximate well-covered independence polynomial of degree $q$, and therefore
so is their sum, by Lemma~\ref{lem:sumpoly}.
\end{proof}
With these lemmas in hand, we are now ready to prove the main result of this section, i.e., Theorem~\ref{thm:converse}.
\begin{proof}[Proof of Theorem~\ref{thm:converse}]
Fix a sequence $a_1,a_2,\ldots,a_{q}$ satisfying
\[
\frac{a_1}{\binom{q}{1}}\le\frac{a_2}{\binom{q}{2}}\le\ldots\le\frac{a_{q}}{\binom{q}{q}}.
\]
Let $b_1=\frac{a_1}{\binom{q}{1}}$ and, for $i>1$, let $b_i=\frac{a_i}{\binom{q}{i}}-\frac{a_{i-1}}{\binom{q}{i-1}}$. Then, for all $i$, $b_i\geq 0$ (terms with $b_i=0$ may simply be omitted from the linear combination below) and
\[
a_i=\binom{q}i\sum_{j=1}^i b_j.
\]
Let $P_k(x)=\sum_{j=k}^{q}\binom{q}{j}x^j$ so, by Lemma~\ref{lem:hpoly},
$P_k(x)$ is an approximate well-covered independence polynomial for all $1\le k\le q$. Therefore, by Lemma~\ref{lem:polylincomb}, so is
\[
\sum_{j=1}^{q} b_jP_j(x)=\sum_{j=1}^{q} b_j\sum_{i=j}^{q} \binom{q}{i}x^i=\sum_{i=1}^{q} \left(\sum_{j=1}^i\binom{q}j b_j\right) x^i=\sum_{i=1}^{q} a_ix^i.\qedhere
\]
\end{proof}
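The telescoping step in this proof is elementary arithmetic; as a sanity check (with numbers of our own choosing, not from the paper), one can verify $a_i=\binom{q}{i}\sum_{j\le i}b_j$ on a small sequence satisfying the hypothesis.

```python
from math import comb

q = 4
a = [2, 9, 10, 4]                    # a_1..a_4; ratios a_i / C(4,i) = 0.5, 1.5, 2.5, 4.0
r = [a[i] / comb(q, i + 1) for i in range(q)]
assert all(r[i] <= r[i + 1] for i in range(q - 1))   # hypothesis of the theorem
b = [r[0]] + [r[i] - r[i - 1] for i in range(1, q)]
assert all(x >= 0 for x in b)
# Telescoping identity from the proof: a_i = C(q,i) * (b_1 + ... + b_i).
for i in range(q):
    assert abs(a[i] - comb(q, i + 1) * sum(b[:i + 1])) < 1e-9
print(b)   # [0.5, 1.0, 1.0, 1.5]
```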
\section{Proof of the Roller Coaster Conjecture}\label{S:proofs2}
Finally, we will show that Theorem~\ref{thm:converse} itself implies the Roller Coaster Conjecture.
\begin{lem}\label{lem:approxwell}
If $a_{q}x^{q}+\ldots+a_1x$ is an approximate well-covered independence polynomial
and $S$ is a subset of $[q]$ such that $a_i\neq a_j$ if $i\neq j$ and $i,j\in S$,
then there exists a well-covered graph $G$ of independence number $q$ such that
for all $j, k\in S$, $i_j(G)<i_k(G)$ if and only if $a_j<a_k$.
\end{lem}
\begin{proof}
Let
\[
\epsilon=\frac{1}3 \min\setof{\abs{a_i-a_j}}{i\neq j, i,j\in S}.
\]
Note that $\epsilon>0$. Let $G$ be an $\epsilon$-certificate of $a_{q}x^{q}+\ldots+a_1x$ with scaling factor $T$. Then $G$ is a well-covered graph of independence number $q$ such that for all $j$, $\abs{\frac{i_j(G)}T-a_j}<\epsilon$.
For $j, k\in S$, if $a_j<a_k$, we have $3\epsilon\le a_k-a_j$. Thus, $a_j+\epsilon<a_k-\epsilon$, and so
\[
i_j(G)<T(a_j+\epsilon)<T(a_k-\epsilon)<i_k(G).\qedhere
\]
\end{proof}
Therefore, if we can any-order the latter half of the coefficients of approximate well-covered
independence polynomials, we can do the same for actual well-covered independence polynomials.
\begin{lem}\label{lem:approxrct}
For any positive integer $q$ and for any permutation $\pi$ of the set
$\set{\ceil{q/2},\ceil{q/2}+1,\ldots,q}$, there exists an approximate well-covered independence polynomial $a_{q}x^{q}+\cdots+a_1x$ such that for all $\ceil{\frac{q}2}\le k,l\le q$, $a_k<a_l$ if and only
if $\pi(k)<\pi(l)$.
\end{lem}
\begin{proof}
Define the sequence $(a_1,a_2,\ldots,a_{q})$ as follows.
\[
a_i=\begin{cases}
\binom{q}i & \text{if $1\le i<\ceil{\frac{q}2}$},\\
2^{q}+\pi(i) & \text{if $\ceil{\frac{q}2}\leq i\leq q$}.
\end{cases}
\]
Then $\frac{a_i}{\binom{q}i}=1$ for $1\le i<\ceil{\frac{q}2}$, while $\frac{a_i}{\binom{q}i}>1$ for $\ceil{\frac{q}2}\le i$. Further, for $\ceil{\frac{q}2}\le i<q$,
\[
\frac{a_i}{a_{i+1}}\le\frac{2^{q}+q}{2^{q}}\le1+\frac{2}{q},
\]
while
\[
\frac{\binom{q}i}{\binom{q}{i+1}}=\frac{i+1}{q-i}\ge\frac{\frac{q}2+1}{\frac{q}2}=1+\frac{2}{q}.
\]
It follows that
\[\frac{a_1}{\binom{q}{1}}\le\frac{a_2}{\binom{q}{2}}\le\ldots\le\frac{a_{q}}{\binom{q}{q}}.\]
Therefore, by Theorem~\ref{thm:converse}, $a_{q}x^{q}+\ldots+a_1x$ is an approximate well-covered independence polynomial. Furthermore, for $\ceil{\frac{q}2}\le k,l\le q$, $a_k=2^{q}+\pi(k)<a_l=2^{q}+\pi(l)$ if and only if
$\pi(k)<\pi(l)$.
\end{proof}
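A quick numerical check (with an arbitrarily chosen $q$ and permutation of our own) confirms that the sequence defined in this proof satisfies both the ratio condition and the prescribed order.

```python
from math import comb, ceil

q = 6
h = ceil(q / 2)
pi = {3: 6, 4: 3, 5: 5, 6: 4}        # an arbitrary permutation of {h, ..., q}
# a_i = C(q,i) below the midpoint, 2^q + pi(i) from the midpoint on.
a = {i: (comb(q, i) if i < h else 2**q + pi[i]) for i in range(1, q + 1)}
r = [a[i] / comb(q, i) for i in range(1, q + 1)]
assert all(r[i] <= r[i + 1] for i in range(q - 1))   # the ratio condition (2)
# The relative order of a_h, ..., a_q matches the order prescribed by pi.
for k in range(h, q + 1):
    for l in range(h, q + 1):
        assert (a[k] < a[l]) == (pi[k] < pi[l])
```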
Our main theorem, Theorem~\ref{thm:rct}, follows.
\begin{proof}[Proof of Theorem~\ref{thm:rct}]
The statement follows from applying Lemma~\ref{lem:approxrct} and then Lemma~\ref{lem:approxwell} with $S=\set{\ceil{q/2},\ceil{q/2}+1,\ldots,q}$.
\end{proof}
\section{Conclusion}\label{S:conclusion}
Many interesting questions about the independence sequence of graphs are still open. It was conjectured by Levit and Mandrescu \cite{LM} that every K\"onig-Egerv\'ary graph (a graph $G$ with $\alpha(G)+\nu(G)=n(G)$, where $\nu(G)$ is the size of the largest matching in $G$ and $n(G)$ is the number of vertices in $G$) has a unimodal independence sequence. This conjecture was recently disproved by Bhattacharyya and Kahn \cite{BK}, who provided a bipartite graph with non-unimodal independence sequence (since every bipartite graph is a K\"onig-Egerv\'ary graph). However, the following conjecture of Alavi et al. is still open.
\begin{conj}[Alavi, Erd\H{o}s, Malde, Schwenk \cite{AEMS}]
Every tree and forest has unimodal independence sequence.
\end{conj}
We also believe that graphs satisfying property $P(k,q;m)$ may be of independent interest. A natural question for such structures is how small such an object can be. To be precise, we ask the following.
\begin{question}
Given integers $k$, $q$, and $m$ with $0\leq k<q$ and $m\geq 1$, what is the minimum number of vertices in a graph $G$ with property $P(k,q;m)$?
\end{question}
The graph $H_{k,q;m}$ has $q m^{\binom{q-1}{k}}$ vertices which we suspect is far from the minimum. Recall that, for integers $k$ and $n$ with $1\leq k\leq n$, the Kneser graph $KG_{n,k}$ is the graph with vertex set $\binom{[n]}k$, where two vertices are adjacent if and only if they are disjoint. One can check that $KG_{q(q-2),q-2}$ satisfies property $P(q-2,q;\frac{1}2\binom{2(q-2)}{q-2})$. Further, we have
\[
n(H_{q-2,q;m})=q m^{q-1}\gg \binom{q(q-2)}{q-2}=n(KG_{q(q-2),q-2})
\]
when $m=\frac{1}2\binom{2(q-2)}{q-2}$.
\bibliographystyle{amsplain}
| {
"timestamp": "2014-12-16T02:19:22",
"yymm": "1412",
"arxiv_id": "1412.4595",
"language": "en",
"url": "https://arxiv.org/abs/1412.4595",
"abstract": "A graph $G$ is {\\em well-covered} if every maximal independent set has the same cardinality $q$. Let $i_k(G)$ denote the number of independent sets of cardinality $k$ in $G$. Brown, Dilcher, and Nowakowski conjectured that the independence sequence $(i_0(G), i_1(G), \\ldots, i_q(G))$ was unimodal for any well-ordered graph $G$ with independence number $q$. Michael and Traves disproved this conjecture. Instead they posited the so-called ``Roller Coaster\" Conjecture: that the terms \\[i_{\\left\\lceil\\frac{q}2\\right\\rceil}(G), i_{\\left\\lceil\\frac{q}2\\right\\rceil+1}(G), \\ldots, i_q(G) \\] could be in any specified order for some well-covered graph $G$ with independence number $q$. Michael and Traves proved the conjecture for $q<8$ and Matchett extended this to $q<12$.In this paper, we prove the Roller Coaster Conjecture using a construction of graphs with a property related to that of having a maximal-clique partition. In particular, we show, for all pairs of integers $1\\le k<q$ and positive integers $m$, that there is a well-covered graph $G$ with independence number $q$ for which every independent set of size $k+1$ is contained in a unique maximal independent set, but each independent set of size $k$ is contained in at least $m$ distinct independent sets.",
"subjects": "Combinatorics (math.CO)",
"title": "Maximal-clique partitions and the Roller Coaster Conjecture",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9814534398277176,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7083573524930635
} |
https://arxiv.org/abs/1903.01845 | Maximal orthogonal sets of unimodular vectors over finite local rings of odd characteristic | Let $R$ be a finite local ring of odd characteristic and $\beta$ a non-degenerate symmetric bilinear form on $R^2$.In this short note, we determine the largest possible cardinality of pairwise orthogonal sets of unimodular vectors in $R^2$. | \section{Introduction}
Two famous distinct distances and unit distances problems for the plane $\mathbb{R}^2$ were posed by Erd{\H o}s \cite{Erdos1946}. The first problem asks for the minimum number of distinct distances among $n$ points in the plane while the latter problem asks for the maximum number of the unit distances that can occur among $n$ points in the plane. These problems were generalized to the $n$-dimensional Euclidean space case \cite{Erdos1975}.
Points in the $n$-dimensional vector space over $\mathbb{F}_q$ were considered for similar questions, see \cite{BKT04,IR07,ISX10} for example. In \cite{IR07}, the authors defined a specific distance between two points in $\mathbb{F}_q^n$ where $q$ is an odd prime power and studied the distinct distances problem. It is natural to ask the two mentioned problems more generally by using an arbitrary quadratic form over $\mathbb{F}_q^n$. Recently, the problems in this direction were studied as follow. Let $\beta$ be a non-degenerate symmetric bilinear form over $\mathbb{F}_q^n$. The largest possible cardinality of a subset $S \subset \mathbb{F}_q^n$ so that $\beta(\vec{x}, \vec{y}) = 0$ for every distinct vectors $\vec{x}, \vec{y} \in S$ was determined in \cite{Vinh12} and the case of $\beta (\vec{x}, \vec{y}) = l$ for all $l \in \mathbb{F}_q$ was treated in \cite{Omran2016}.
Let $R$ be a finite local ring of odd characteristic with identity and a non-degenerate symmetric bilinear form $\beta$ over $R^2$. In this paper, we consider a subset $S$ of unimodular vectors in $R^2$ and determine the largest possible size of $S$ so that for any two distinct vectors $\vec{x}, \vec{y} \in S$, $\beta(\vec{x}, \vec{y}) = 0$. This is a generalization of the problem over $\mathbb{F}_q^2$.
Throughout the paper, we assume that all rings have the identity. In Section \ref{Bil}, we review some backgrounds about bilinear forms over $R^n$. We then show the main result when $n = 2$ in Section \ref{Main}. Finally, we conclude the paper with some comments in Section \ref{Con}.
\section{Bilinear form over finite local rings}\label{Bil}
Let $R$ be a commutative ring and $n$ a positive integer.
A \textit{bilinear form $\beta$ on $R^n$} is a map $\beta:R^n\times R^n\rightarrow R$ such that
\[
\beta(\vx+\vy,\vz)=\beta(\vx,\vz)+\beta(\vy,\vz) \text{ and } \beta(r\vx,\vz)=r\beta(\vx,\vz)
\]
and
\[
\beta(\vx,\vz+\vw)=\beta(\vx,\vz)+\beta(\vx,\vw) \text{ and }
\beta(\vx,s\vz)=s\beta(\vx,\vz)
\]
for all $\vx,\vy, \vz, \vw\in R^n$ and $r,s\in R$.
Suppose that $\{e_1,e_2,\dots, e_n\}$ is a basis of $R^n$.
For each bilinear form $\beta$ on $R^n$, we have the $n\times n$ associated matrix $B=\big(\beta(e_i,e_j)\big)$.
A bilinear form $\beta$ is said to be \textit{symmetric} if its associated matrix $B$ is symmetric, and $\beta$ is said to be \textit{non-degenerate} if $B$ is invertible. The \textit{determinant} of a bilinear form $\beta$, denoted by $\det\beta$, is defined to be $\det B$. Two bilinear forms $\beta_1$ and $\beta_2$ over $R^n$ with corresponding matrices $B_1$ and $B_2$ are {\it equivalent} if there exists an invertible matrix $P$ over $R$ such that $B_2 = P^T B_1 P$.
A local ring is a commutative ring with a unique maximal ideal.
If $M$ is the unique maximal ideal of a local ring $R$, then the group of units of $R$ is $R\setminus M$. Note that if $u$ is a unit in $R$, then $u + m$ is a unit for all $m \in M$.
When $R$ is of odd characteristic, the classification of non-degenerate symmetric bilinear forms over $R^n$ is given in \cite{Sri2017}.
\begin{lem}\label{form}\cite{Sri2017}
Let $R$ be a finite local ring of odd characteristic with unique maximal ideal $M$. If $\beta$ is a non-degenerate symmetric bilinear form on $R^n$, then one of the following holds:
\begin{enumerate}
\item if $n$ is odd, then $\beta$ is equivalent to
\[
\beta(\vx,\vy)=x_1y_1-x_2y_2+\dots+x_{n-2}y_{n-2}-x_{n-1}y_{n-1}+u x_ny_n;
\]
\item if $n$ is even, then $\beta$ is equivalent to
\[
\beta(\vx,\vy)=x_1y_1-x_2y_2+\dots+x_{n-3}y_{n-3}-x_{n-2}y_{n-2}+x_{n-1}y_{n-1}-ux_ny_n;
\]
\end{enumerate}
where $u=1$ or $u$ is a non-square unit in $R$, and $\vx=(x_1,x_2,\dots,x_n), \vy=(y_1,y_2,\dots,y_n)$ are in $R^n$.
\end{lem}
A vector $\vx=(x_1,x_2,\dots,x_n)$ in $R^n$ is said to be \textit{unimodular} if the ideal generated by $x_1,x_2,\dots,x_n$ is equal to $R$.
In particular, if $R$ is a field, then every nonzero vector is unimodular.
For the case $R$ is a finite local ring, we have the following lemma on unimodular vectors.
\begin{lem}\cite{MeePui2013}\label{uni}
Let $R$ be a finite local ring. Then a vector $\vx=(x_1,x_2,\dots,x_n)\in R^n$ is unimodular if and only if $x_i$ is a unit for some $i\in \{1,2,\dots,n\}$.
\end{lem}
\section{Main result} \label{Main}
For a non-degenerate symmetric bilinear form $\beta$ on $R^n$,
a \textit{unimodular orthogonal set} is a set $S$ of unimodular vectors in $R^n$ in which ${\beta(\vx,\vy)=0}$ for any two distinct vectors $\vx,\vy \in S$.
We denote by $\mathcal{S}(R,n)$ the largest possible cardinality of a unimodular orthogonal set in $R^n$. Here, we only consider $\mathcal{S}(R, 2)$ where $R$ is a finite local ring of odd characteristic.
\begin{thm}\label{main}
Let $R$ be a finite local ring of odd characteristic with maximal ideal $M$, and let $\beta$ be a non-degenerate symmetric bilinear form on $R^2$.
Then
\begin{align*}
\mathcal{S}(R,2)=\begin{cases}
|R|-|M|, & \det{\beta} \text{ is square;} \\
2, & \det\beta \text{ is non-square}.
\end{cases}
\end{align*}
\end{thm}
\begin{proof}
Assume that $\det\beta$ is a non-square unit.
By Lemma~\ref{form},
\[
\beta(\vx,\vy)=x_1y_1-zx_2y_2
\]
where $z$ is a fixed non-square unit and $\vx=(x_1,x_2), \vy=(y_1,y_2) \in R^2$.
Let $S$ be a unimodular orthogonal set in $R^2$ and $(a,b)\in S$.
Since $(a,b)$ is unimodular, $a$ or $b$ is a unit (by Lemma~\ref{uni}).
Suppose that $a$ is a unit.
Then any other vector in $S$ is of the form $(a^{-1}zby,y)$, where $y$ is a unit in $R$.
If $\vy_1=(a^{-1}zby_1,y_1),\vy_2=(a^{-1}zby_2,y_2)\in S$,
then $\beta(\vy_1,\vy_2)=0$ implies $1=z(a^{-1}b)^2$, so
$1\in M$ or $z$ is a square unit, which is a contradiction in either case.
We have a similar argument for the case $b$ is a unit.
Thus, $\mathcal{S}(R,2)\leq 2$.
Clearly, $\{(1,0),(0,1)\}$ is a unimodular orthogonal set.
Therefore, $\mathcal{S}(R,2)= 2$.
Next, assume that $\det\beta$ is a square unit.
By Lemma~\ref{form},
\[
\beta(\vx,\vy)=x_1y_1-x_2y_2
\]
for $\vx=(x_1,x_2), \vy=(y_1,y_2)\in R^2$.
Clearly,
\[
S=\{(x,x)\mid x\in R\setminus M\}
\]
is a unimodular orthogonal set.
Then $\mathcal{S}(R,2)\geq |R|-|M|$.
We show that the converse of the inequality holds.
Since $\cha(R)$ is odd, $|R|-|M|\geq 2$.
If $|R|-|M|=2$, then $|R|=3$ and $|M|=1$, i.e., $R=\F_3$.
By Theorem~4 of \cite{Omran2016}, $\mathcal{S}(R,2)=2=|R|-|M|$.
Assume that $|R|-|M|\geq 3$.
Let $S$ be a maximal unimodular orthogonal set in $R^2$ and $(a,b)\in S$.
Since $(a,b)$ is unimodular, $a$ or $b$ is a unit (by Lemma~\ref{uni}).
Suppose that $a$ is a unit.
Then any other vector in $S$ is of the form $(a^{-1}by,y)$, where $y$ is a unit in $R$, so $|S| \le |R| - |M| + 1$.
If $\vy_1=(a^{-1}by_1,y_1)$ and $\vy_2=(a^{-1}by_2,y_2)$ are two distinct vectors in $S$,
then $\beta(\vy_1,\vy_2)=0$ implies $1=(a^{-1}b)^2$, and so $(a,b)=(a^{-1}b^2,b)$ is also in that form.
It can be argued similarly for the case that $b$ is a unit.
Thus, we have $\mathcal{S}(R,2)\leq |R|-|M|$.
Therefore, the equality holds.
\end{proof}
\begin{rem}
From the proof of Theorem~\ref{main}, we have the following.
\begin{enumerate}
\item If $\det\beta$ is non-square, then all maximal unimodular orthogonal sets of $R^2$ are
\begin{itemize}
\item $\{(a,b),(a^{-1}bzy,y)\}$ where $a\in R^\times$, $b\in R$ and $y\in R^\times$, and
\item $\{(a,b),(x,{(bz)}^{-1}ax)\}$ where $a\in M$ and $b,x\in R^\times$.
\end{itemize}
\item If $\det\beta$ is square, then all maximal unimodular orthogonal sets of $R^2$ are
\[
\{(ux,x)\mid x\in R^\times \} \text{ or } \{(x,ux)\mid x\in R^\times\}
\]
where $u\in R$ with $u^2=1$.
In particular, if $R=\Z_{p^s}$, then all maximal unimodular orthogonal sets of $R^2$ are
\[
\{(x,x)\mid x\in R^\times \}, \{(-x,x)\mid x\in R^\times \} \text{ and } \{(x,-x)\mid x\in R^\times\}.
\]
\end{enumerate}
\end{rem}
\section{Concluding remarks}\label{Con}
The problem of finding the largest cardinality of a pairwise orthogonal subset of $\mathbb{F}_q^n$ has been solved in \cite{Omran2016,Vinh12}. In Theorem \ref{main}, we solve the analogous problem for unimodular vectors in $R^2$, where $R$ is a finite local ring of odd characteristic, by an elementary counting method based on the properties of $R$. The problem for $n \ge 3$ could also be considered, but it seems difficult to attack with the method in the proof of Theorem \ref{main}, since the equations involved become more complicated. Unlike the finite field case \cite{Omran2016}, when $n$ is odd and $R$ is a general finite local ring, even finding a lower bound for $\mathcal{S}(R,n)$ is not easy, since $R$ can have zero divisors. We plan to discuss some of these extensions using a new technique in another paper.
| {
"timestamp": "2019-03-06T02:20:44",
"yymm": "1903",
"arxiv_id": "1903.01845",
"language": "en",
"url": "https://arxiv.org/abs/1903.01845",
"abstract": "Let $R$ be a finite local ring of odd characteristic and $\\beta$ a non-degenerate symmetric bilinear form on $R^2$.In this short note, we determine the largest possible cardinality of pairwise orthogonal sets of unimodular vectors in $R^2$.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Maximal orthogonal sets of unimodular vectors over finite local rings of odd characteristic",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9814534392852384,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7083573521015328
} |
https://arxiv.org/abs/1510.08417 | Monotone Projection Lower Bounds from Extended Formulation Lower Bounds | In this short note, we reduce lower bounds on monotone projections of polynomials to lower bounds on extended formulations of polytopes. Applying our reduction to the seminal extended formulation lower bounds of Fiorini, Massar, Pokutta, Tiwari, & de Wolf (STOC 2012; J. ACM, 2015) and Rothvoss (STOC 2014; J. ACM, 2017), we obtain the following interesting consequences.1. The Hamiltonian Cycle polynomial is not a monotone subexponential-size projection of the permanent; this both rules out a natural attempt at a monotone lower bound on the Boolean permanent, and shows that the permanent is not complete for non-negative polynomials in VNP$_{\mathbb R}$ under monotone p-projections.2. The cut polynomials and the perfect matching polynomial (or "unsigned Pfaffian") are not monotone p-projections of the permanent. The latter, over the Boolean and-or semi-ring, rules out monotone reductions in one of the natural approaches to reducing perfect matchings in general graphs to perfect matchings in bipartite graphs.As the permanent is universal for monotone formulas, these results also imply exponential lower bounds on the monotone formula size and monotone circuit size of these polynomials. | \section{Introduction} \label{sec:intro}
The permanent
\[
\perm_n(X) = \sum_{\pi \in S_n} x_{1,\pi(1)} x_{2,\pi(2)} \dotsb x_{n,\pi(n)}
\]
(where $S_n$ denotes the symmetric group of all permutations of $\{1,\dotsc,n\}$) has long fascinated combinatorialists \cite{minc,vanLintWilson,muirMetzler}, more recently physicists \cite{permQuantum,linearOptics}, and since Valiant's seminal paper \cite{valiant}, has also been a key object of study in computational complexity. Despite its beauty, the permanent has some computational quirks: in particular, although the permanent of integer matrices is $\cc{\# P}$-complete and the permanent is $\cc{VNP}$-complete in characteristic zero, the permanent \emph{mod 2} is the same as the determinant, and hence can easily be computed. In fact, computing the permanent mod $2^k$ is easy for any $k$ \cite{valiant}, though the proof is more involved. Modulo any other number $n$, the permanent of integer matrices is $\cc{Mod}_n\cc{P}$-complete.
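For instance, the congruence $\perm_n \equiv \det_n \pmod 2$ can be seen directly from the Leibniz expansions, since the permutation signs vanish mod $2$; the following brute-force sketch (our own, not from the paper) checks it on random $4\times4$ integer matrices.

```python
from itertools import permutations
from math import prod
import random

def perm(M):
    """Permanent via the Leibniz-style expansion (no signs)."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def det(M):
    """Determinant via the Leibniz expansion, with signs from inversion counts."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i) if p[j] > p[i])
        total += (-1) ** inversions * prod(M[i][p[i]] for i in range(n))
    return total

random.seed(0)
for _ in range(20):
    M = [[random.randint(0, 9) for _ in range(4)] for _ in range(4)]
    assert perm(M) % 2 == det(M) % 2    # the signs are invisible mod 2
```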
In contrast, the seemingly similar Hamiltonian Cycle polynomial,
\[
HC_n(X) = \sum_{\text{$n$-cycles } \sigma} x_{1,\sigma(1)} x_{2,\sigma(2)} \dotsb x_{n,\sigma(n)},
\]
where the sum is only over $n$-cycles rather than over all permutations, does not have these quirks: The Hamiltonian Cycle polynomial is $\cc{VNP}$-complete over any ring $R$ \cite{valiant2} and $\cc{Mod}_n\cc{P}$-complete for all $n$ (that is, counting Hamiltonian cycles is complete for these Boolean counting classes).
Jukna \cite{juknaQ} observed that, over the Boolean semi-ring, if the Hamiltonian Cycle polynomial were a monotone p-projection of the permanent, there would be a $2^{n^{\Omega(1)}}$ lower bound on monotone circuits computing the permanent, a lower bound that still remains open (the current record is still Razborov's $n^{\Omega(\log n)}$ \cite{razborov}). Even over the real numbers, such a monotone p-projection would give an alternative proof of a $2^{n^{\Omega(1)}}$ lower bound on the permanent (Jerrum and Snir \cite{jerrumSnir} already showed the permanent requires monotone circuits of size $2^{\Omega(n)}$ over $\mathbb{R}$ and over the tropical $(\min,+)$ semi-ring). Here we show that no such monotone reduction exists---over $\mathbb{R}$, nor over the tropical semi-ring, nor over the Boolean semi-ring---by connecting monotone p-projections to extended formulations of linear programs.
We use the same technique to show that the perfect matching polynomial or ``unsigned Pfaffian''
\[
\frac{1}{2^n n!} \sum_{\pi \in S_{2n}} \prod_{i=1}^{n} x_{\pi(2i-1), \pi(2i)} = \sum_{\substack{\pi \in S_{2n} \\ \pi(1) < \pi(3) < \dotsb < \pi(2n-1) \\ \pi(2k-1) < \pi(2k) \; \forall k}} \prod_{i=1}^{n} x_{\pi(2i-1), \pi(2i)}
\]
is not a monotone p-projection of the permanent. As the perfect matching polynomial counts perfect matchings in a general graph, and the permanent counts perfect matchings in a bipartite graph, we have:
\begin{corollary}
Any efficient projection reduction from counting perfect matchings in general graphs to counting perfect matchings in bipartite graphs must be non-monotone (and, therefore, seemingly quite unintuitive!).
\end{corollary}
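Both counting interpretations can be sanity-checked by brute force on tiny graphs. The sketch below is illustrative only; it also confirms in passing that the normalization $\frac{1}{2^n n!}$ in the display above is exact.

```python
from itertools import permutations
from math import prod, factorial

def permanent(A):
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def pm_poly(M):
    """Perfect matching polynomial of a symmetric 2n x 2n matrix, via the
    normalized sum over S_{2n} displayed in the text."""
    N = len(M)
    n = N // 2
    total = sum(prod(M[p[2 * i]][p[2 * i + 1]] for i in range(n))
                for p in permutations(range(N)))
    # each matching arises from n! orderings of its pairs and 2^n orientations
    assert total % (2 ** n * factorial(n)) == 0
    return total // (2 ** n * factorial(n))

# K_4 has (2n - 1)!! = 3 perfect matchings
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
assert pm_poly(K4) == 3

# restricting to a bipartite graph (even vs. odd vertices) recovers the permanent
A = [[1, 2], [3, 4]]
M = [[0] * 4 for _ in range(4)]
for i in range(2):
    for j in range(2):
        M[2 * i][2 * j + 1] = M[2 * j + 1][2 * i] = A[i][j]
assert pm_poly(M) == permanent(A) == 10
```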
\begin{remark}[On the Boolean semi-ring]
Our results also hold for \emph{formal polynomials} over the Boolean semi-ring $\mathbb{B} = (\{0,1\}, \vee, \wedge)$. Over the Boolean semi-ring, the permanent is the indicator function of the existence of a perfect matching in a bipartite graph, and the unsigned Pfaffian is the indicator function of a perfect matching in a general graph. However, over $\mathbb{B}$, each function is represented by more than one formal polynomial, and we do not yet know how to extend our results to the setting of \emph{functions} over $\mathbb{B}$. See Section~\ref{sec:open} for details and specific questions.
\end{remark}
We also use the same technique to show that the cut polynomials $\Cut^q = \sum_{A \subseteq [n]} \prod_{i \in A, j \notin A} x_{ij}^{q-1}$ are not monotone p-projections of the permanent. Perhaps the main complexity-theoretic interest in the cut polynomials is that $\Cut^q$ \emph{over the finite field $\mathbb{F}_q$} was (until recently \cite{MS}) the only known example of a natural polynomial that is neither in $\cc{VP}_{\mathbb{F}_q}$ nor $\cc{VNP}_{\mathbb{F}_q}$-complete under a standard complexity-theoretic assumption (that $\cc{PH}$ doesn't collapse) \cite{burgisser}; there it was also shown that if $\cc{VP}_{\mathbb{F}_q} \neq \cc{VNP}_{\mathbb{F}_q}$ then such polynomials of intermediate complexity must exist. In that paper, it was asked whether the cut polynomials, considered as polynomials \emph{over the rationals}, were $\cc{VNP}_{\mathbb{Q}}$-complete.\footnote{$\Cut^2$ was subsequently shown to be $\cc{VNP}_{\mathbb{Q}}$-complete under circuit reductions \cite{cut}; its completeness under projections remains open.}
Although our results don't touch on this question, these previous results motivate the study of these polynomials over $\mathbb{Q}$.
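To make the definition concrete: evaluating $\Cut^2$ at $x_{ij} = t$ on the ordered pairs corresponding to edges of a graph, and $x_{ij} = 1$ elsewhere, yields the generating function of directed cut sizes. The helper below (an illustrative sketch with invented names) tabulates those exponents for a triangle:

```python
from itertools import combinations

def directed_cut_sizes(n, edges):
    """For each A subseteq [n], count the ordered pairs (i, j) with i in A and
    j not in A that are edges of the (undirected) graph: the exponent of t
    when Cut^2 is evaluated at x_ij = t on edges and x_ij = 1 elsewhere."""
    E = set(edges) | {(j, i) for (i, j) in edges}
    sizes = []
    for r in range(n + 1):
        for A in combinations(range(n), r):
            Aset = set(A)
            sizes.append(sum(1 for i in Aset for j in range(n)
                             if j not in Aset and (i, j) in E))
    return sorted(sizes)

# triangle: the two trivial subsets cut nothing; every other A cuts 2 ordered
# edges, so Cut^2 specializes to 2 + 6t^2 (equal to 2^3 = 8 at t = 1)
assert directed_cut_sizes(3, [(0, 1), (0, 2), (1, 2)]) == [0, 0, 2, 2, 2, 2, 2, 2]
```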
Because the permanent is universal for monotone formulas, our lower bounds also imply exponential lower bounds on the monotone algebraic formula size---and, by balancing algebraic circuits, monotone algebraic circuit size---of these polynomials; see Section~\ref{sec:formula}.
Finally, we note that our results shed a little more light on the complicatedness of the known $\cc{VNP}$-completeness proofs for the permanent \cite{valiant,aaronson}. Namely, prior to our result, the fact that the permanent is not hard modulo $2$ already implied that any completeness result must use $2$ in a ``bad'' way: for example, dividing by 2 somewhere. This is indeed true of both Valiant's original proof \cite{valiant} and Aaronson's independent quantum linear-optics proof \cite{aaronson}. One might hope for a classical analogue of Aaronson's quantum proof, using the characterization of $\cc{BPP}$ in terms of stochastic matrices as a replacement for the characterization of $\cc{BQP}$ using unitary matrices. However, our result says that any completeness proof for the permanent must use non-monotone reductions, so such a classical analogue is not possible:
\begin{corollary}
Aaronson's quantum linear optics proof \cite{aaronson} that the permanent is $\cc{\# P}$-hard cannot be replaced by one using classical randomized algorithms in place of quantum algorithms.
\end{corollary}
Our results also imply the necessity of the use of negative numbers in Valiant's $4 \times 4$ gadget \cite[p.~195]{valiant}. In light of these results, Valiant's $4 \times 4$ gadget may perhaps seem less mysterious than the fact that such a gadget exists that is \emph{only} $4 \times 4$!
To prove these results, we show that a monotone projection between non-negative polynomials essentially implies that the Newton polytope of one polynomial is an extension of the Newton polytope of the other (Lemma~\ref{lem:main}), and then apply known lower bounds on the extension complexity of certain polytopes. We hope that the connection between Newton polytopes, monotone projections, and extended formulations finds further use.
\section{Preliminaries} \label{sec:prelim}
A polynomial $f(x_1,\dotsc,x_n)$ is a \definedWord{(simple) projection} of a polynomial $g(y_1, \dotsc, y_m)$ if $f$ can be constructed from $g$ by replacing each $y_i$ with a constant or with some $x_j$. The polynomial $f$ is an \definedWord{affine projection} of $g$ if $f$ can be constructed from $g$ by replacing each $y_i$ with an affine linear function $\pi_i(\vec{x})$. When we say ``projection'' we mean simple projection. Given two families of polynomials $(f_n)$, $(g_n)$, if there is a function $p(n)$ such that $f_n$ is a projection of $g_{p(n)}$ for all sufficiently large $n$, then we say that $(f_n)$ is a projection of $(g_n)$ with \definedWord{blow-up} $p(n)$. If $(f_n)$ is a projection of $(g_n)$ with polynomial blow-up, we say that $(f_n)$ is a \definedWord{p-projection} of $(g_n)$.
Over any subring of $\mathbb{R}$---or more generally any totally ordered semi-ring (see below)---a \definedWord{monotone projection} is a projection in which all constants appearing in the projection are non-negative.
Monotone p-projection is defined analogously.
To each monomial $x_1^{e_1} \dotsb x_n^{e_n}$ we associate its exponent vector $(e_1, \dotsc, e_n)$, as a point in $\mathbb{N}^n \subseteq \mathbb{R}^n$. We then have:
\begin{definition}
The \definedWord{Newton polytope} of a polynomial $f(x_1, \dotsc, x_n)$, denoted $\New(f)$, is the convex hull in $\mathbb{R}^n$ of the exponent vectors of all monomials appearing in $f$ with non-zero coefficient.
\end{definition}
A polytope is \definedWord{integral} if all its vertices have integer coordinates; note that Newton polytopes are always integral. A \definedWord{face} of a polytope $P$ is the intersection of $P$ with an affine hyperplane $H$ such that $P$ lies entirely within one of the two closed half-spaces bounded by $H$ (so that no point in the relative interior of $P$ lies in $H$); by convention, $P$ itself and the empty set are also faces of $P$.
For a polytope $P$, let $c(P)$ denote the ``complexity'' of $P$, as measured by the minimal number of linear inequalities needed to define $P$ (equivalently, the number of faces of $P$ of dimension $\dim P -1$). A polytope $Q \subseteq \mathbb{R}^m$ is an \definedWord{extension} of $P \subseteq \mathbb{R}^n$ if there is an affine linear map $\ell\colon \mathbb{R}^m \to \mathbb{R}^n$ such that $\ell(Q) = P$. The \definedWord{extension complexity} of $P$, denoted $xc(P)$, is the minimum complexity of any extension of $P$ (of any dimension): $xc(P) = \min \{ c(Q) | Q \text{ is an extension of } P\}$.
The $m$-th \definedWord{cycle cover polytope} (also known as the bipartite perfect matching polytope) is the convex hull in $\mathbb{R}^{m^2}$ of the $\{0,1\}$-indicator functions of the directed cycle covers of the complete directed graph with self-loops on $m$ vertices. The cycle cover polytope is the Newton polytope of the permanent, as each monomial in the permanent corresponds to such a cycle cover.
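As a sanity check (illustrative only), one can enumerate the exponent vectors of $\perm_n$ and confirm that they are exactly the flattened $n \times n$ permutation matrices, i.e.\ the vertices of the cycle cover polytope:

```python
from itertools import permutations

def perm_exponent_vectors(n):
    """Exponent vectors of perm_n: the 0/1 permutation matrices, flattened row-major."""
    return {tuple(1 if p[i] == j else 0 for i in range(n) for j in range(n))
            for p in permutations(range(n))}

V = perm_exponent_vectors(3)
assert len(V) == 6                       # one monomial per permutation
for v in V:
    M = [v[3 * i: 3 * i + 3] for i in range(3)]
    assert all(sum(row) == 1 for row in M)                             # out-degree 1
    assert all(sum(M[i][j] for i in range(3)) == 1 for j in range(3))  # in-degree 1
```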
A totally ordered semi-ring (we only consider commutative ones here) is a totally ordered set together with two operations, denoted $(R,\leq,\times,+,1,0)$ such that $(R,\times,1)$ and $(R,+,0)$ are both commutative monoids, $\times$ distributes over $+$, $0 \times a=0$ for all $a$, $a+c \leq b + c$ whenever $a \leq b$, and $ac \leq bc$ whenever $a \leq b$ and $0 \leq c$. An element $c$ of a totally ordered semi-ring is non-negative if $0 \leq c$, and is positive if furthermore $c \neq 0$. We will restrict our attention to \emph{non-zero} totally ordered semi-rings; equivalently, we assume $1 \neq 0$. (There is only one polynomial over the zero semi-ring---the zero polynomial---so that case was of no interest anyway.)
Note that (non-zero) totally ordered semi-rings always have ``characteristic zero:'' $1 + \dotsb + 1 \neq 0$ (any number of times). In a non-zero totally ordered semi-ring, either $0 < 1$ or $1 < 0$; we handle the first case, the second being analogous. If $0 < 1$, then by adding $1$ to both sides $k$ times we get that $\sum_{i \in [k]}1 < \sum_{i \in [k+1]} 1$. If the semi-ring had nonzero characteristic, then for some $k > 1$, the right-hand side here would become zero, and we would get $0 < 1 < 1+1 < \dotsb < 0$, a contradiction.
The following totally ordered semi-rings are of particular interest:
\begin{itemize}
\item The real numbers with its usual ordering and algebraic operations $(\mathbb{R}, \leq, \times, +)$.
\item The so-called ``tropical semi-ring'' $(\mathbb{R}, \leq, +, \min)$, which is the real numbers with its usual ordering, where the product is taken to be real addition and the addition operation is taken to be the minimum.
\item The Boolean and-or semi-ring $\mathbb{B}=(\{0,1\}, \leq, \wedge, \vee)$, where $0 \leq 1$.
\end{itemize}
To get a feel for the latter two semi-rings, note that polynomials over the tropical semi-ring generally compute some optimization problem, and over $\mathbb{B}$ generally compute a decision problem. For example, the Hamiltonian Cycle polynomial over the tropical semi-ring computes the Traveling Salesperson Problem, and over $\mathbb{B}$ is the indicator function of the existence of a Hamiltonian cycle. Note that over $\mathbb{R}$, if two formal polynomials compute the same function then they must be identical, but this is not true over either the tropical or the Boolean semi-ring.
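For instance, evaluating the Hamiltonian Cycle polynomial over the tropical semi-ring (each product becomes a sum of weights, the outer sum becomes a $\min$) returns the cost of an optimal tour. A brute-force illustrative sketch, with an arbitrary weight matrix:

```python
from itertools import permutations

def is_n_cycle(p):
    """True iff the permutation p (one-line notation) is a single n-cycle."""
    seen, v = set(), 0
    while v not in seen:
        seen.add(v)
        v = p[v]
    return len(seen) == len(p)

def hc_tropical(W):
    """HC_n over (R, min, +): monomials become tour costs, the sum becomes a min."""
    n = len(W)
    return min(sum(W[i][p[i]] for i in range(n))
               for p in permutations(range(n)) if is_n_cycle(p))

W = [[0, 2, 9],
     [1, 0, 6],
     [7, 3, 0]]
assert hc_tropical(W) == 13   # cheapest tour 0 -> 2 -> 1 -> 0 costs 9 + 3 + 1
```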
\section{Main Lemma}
\begin{lemma} \label{lem:main}
Let $R$ be a totally ordered semi-ring, and let $f(x_1,\dotsc, x_n)$ and $g(y_1, \dotsc, y_m)$ be polynomials over $R$ with non-negative coefficients. If $f$ is a monotone projection of $g$, then some face of $\New(g)$ is an extension of $\New(f)$. In particular, $xc(\New(f)) \leq c(\New(g))$.
\end{lemma}
\begin{proof}
Under simple projections, a monomial in the $y$'s maps to some scalar multiple of a monomial in the $x$'s (possibly the empty monomial, resulting in a constant term, or possibly the zero multiple, resulting in zero). Let $\pi$ be a monotone projection map, defined on the variables $y_i$, and extended naturally to monomials and terms in the $y$'s. (Recall that a \definedWord{term} of a polynomial is a monomial together with its coefficient.) Since each term $t$ of $g$ is a monomial multiplied by a positive coefficient, and since $\pi$ is non-negative, $\pi(t)$ is either zero or a single monomial in the $x$'s with nonzero coefficient. The former situation can happen only if $t$ contains some variable $y_i$ such that $\pi(y_i)=0$. Let $\ker(\pi)$ denote the set $\{y_i | \pi(y_i) = 0\}$. Thus, for every term $t$ of $g$ that is disjoint from $\ker(\pi)$, $\pi(t)$ actually appears in $f$---possibly with a different coefficient, but still non-zero---since no two terms can cancel under projection by $\pi$.
Let $e_1,\dotsc,e_m$ be the coordinates on $\mathbb{R}^m$, the ambient space of $\New(g)$. Let $K$ denote the subspace of $\mathbb{R}^m$ defined by the equations $e_i = 0$ for each $i$ such that $y_i \in \ker(\pi)$. Let $P$ be the intersection of $\New(g)$ with $K$, considered as a polytope in $K$; since all vertices of $\New(g)$ are non-negative, intersecting $\New(g)$ with a coordinate hyperplane, $e_i = 0$, results in a face of $\New(g)$, and thus $P$ is a face of $\New(g)$. Note that $P$ is exactly the convex hull of the exponent vectors of monomials in $g$ that are disjoint from $\ker(\pi)$. In particular, since $\pi$ is multiplicative on monomials, it induces a \emph{linear} map $\ell_\pi$ from $K$ to $\mathbb{R}^n$ (the ambient space of $\New(f)$). By the previous paragraph, the exponent vectors of $f$ are exactly $\ell_\pi$ applied to the exponent vectors of monomials in $g$ that are disjoint from $\ker(\pi)$. By the linearity of $\ell_\pi$ and the convexity of $P$ and $\New(f)$, we have that $\New(f) = \ell_\pi(P)$, so $P$ is an extension of $\New(f)$. Since $P$ is defined by intersecting $\New(g)$ with additional linear equations, the lemma follows.
\end{proof}
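The mechanics of the proof (kill the monomials that meet $\ker(\pi)$, then apply the induced linear map $\ell_\pi$ to the surviving exponent vectors) can be traced on a toy example; the polynomial and the projection below are invented for illustration.

```python
def project_exponents(g_exps, pi, n_x):
    """Image of g's exponent vectors under a monotone simple projection.
    pi[i] is ('var', j) for y_i -> x_j, ('const',) for a positive constant,
    or ('zero',) for y_i -> 0; only exponent vectors are tracked."""
    out = set()
    for e in g_exps:
        if any(ei > 0 and pi[i] == ('zero',) for i, ei in enumerate(e)):
            continue                      # monomial meets ker(pi): killed
        img = [0] * n_x
        for i, ei in enumerate(e):
            if pi[i][0] == 'var':
                img[pi[i][1]] += ei       # the induced map is linear on exponents
        out.add(tuple(img))
    return out

# g = y0*y1 + y1*y2 + y2^2, with pi: y0 -> x0, y1 -> x1, y2 -> 0
g_exps = {(1, 1, 0), (0, 1, 1), (0, 0, 2)}
pi = [('var', 0), ('var', 1), ('zero',)]
assert project_exponents(g_exps, pi, 2) == {(1, 1)}   # only y0*y1 survives
```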
Several partial converses to our Main Lemma also hold. Perhaps the most natural and interesting of these is:
\begin{observation}
Let $R$ be a totally ordered semi-ring. Given any sequence of integral polytopes $(P_n \subseteq \mathbb{R}^n)$ such that the $\text{poly}\xspace(n)$-th cycle cover polytope is an extension of $P_n$ along an affine linear map $\ell_n\colon \mathbb{R}^{\text{poly}\xspace(n)} \to \mathbb{R}^n$ with integer coefficients of polynomial bit-length, there is a sequence of polynomials $(f_n) \in \cc{VNP}_R$ such that $\New(f_n) = P_n$ and $(f_n)$ is a monotone p-projection of the permanent.
\end{observation}
\begin{proof}
Let $C_m$ denote the $m$-th cycle cover polytope, let $m(n)$ be a polynomial such that $C_{m(n)}$ is an extension of $P_n$ along $\ell_n$, and let $b(n)$ be a polynomial upper bound on the bit-length of the coefficients of $\ell_n$. Let $V_{m(n)}$ denote the vertex set of $C_{m(n)}$, i.e.\ the incidence vectors of cycle covers. Define $f_n$ as $\sum_{\vec{e} \in V_{m(n)}} \vec{y}^{\ell_n(\vec{e})}$, where $\vec{y}=(y_1,\dotsc,y_n)$, and the vector notation $\vec{y}^{\vec{e}'}$ is defined as $y_1^{e_1'} y_2^{e_2'} \dotsb y_n^{e_n'}$. Note that $\ell_n$ has only integer coefficients by assumption, and each $\vec{e} \in V_{m(n)}$ is integral, so the vector $\ell_n(\vec{e})$ is also integral, and the above expression is well-defined. By construction, every exponent vector of $f_n$ is in $\ell_n(C_{m(n)}) = P_n$. Conversely, every vertex of $P_n$ is an exponent vector of $f_n$, since any non-zero totally ordered semi-ring has characteristic zero (see Section~\ref{sec:prelim}). (Without noting this, it would be possible that $k$ distinct vertices in $V_{m(n)}$ would get mapped to the same point under $\ell_n$, for $k$ a multiple of the characteristic, and then the corresponding monomials $\vec{y}^{\ell_n(\vec{e})}$ would add up to $0$ in $f_n$.)
Thus $\New(f_n) = P_n$. Furthermore, $f_n$ is a monotone \emph{nonlinear} projection of the permanent using the map $x_{ij} \mapsto \vec{y}^{\ell((0,0,\dotsc,1,\dotsc,0))}$, where the $1$ is in the $(i,j)$ position. Using the universality of the permanent and repeated squaring, this can easily be turned into a monotone \emph{simple} projection of the permanent of size $\text{poly}\xspace(m(n), b(n))$.
\end{proof}
This can be generalized from the cycle cover polytopes and the permanent to arbitrary integral polytopes and the natural associated polynomial (the sum over all monomials whose exponent vectors are vertices of the polytope), but at the price of using ``monomial projections''---in which each variable is replaced by a monomial---rather than simple projections. There ought to be a version of this observation over sufficiently large fields and allowing rational coefficients in $\ell$, using Strassen's division trick \cite{strassen}, but the only such versions the author could come up with had so many hypotheses as to seem uninteresting.
\section{Applications}
\subsection{Projection Lower Bounds}
\begin{remark}
The following theorems hold over any totally ordered semi-ring, including the Boolean and-or semi-ring, the non-negative real numbers under multiplication and addition, and the tropical semi-ring of real numbers under addition and $\min$. To see that this introduces no additional difficulty, note that over any totally ordered semi-ring $R$, the Newton polytope of a polynomial over $R$ is still a polytope in a vector space over the real numbers, so standard results on polytopes and the cited results on extension complexity still apply.
\end{remark}
\begin{theorem} \label{thm:HC}
Over any totally ordered semi-ring, the Hamiltonian Cycle polynomial is not a monotone affine p-projection of the permanent; in fact, any monotone affine projection from the permanent to the Hamiltonian Cycle polynomial has blow-up at least $2^{\Omega(n)}$.
\end{theorem}
\begin{proof}
First, recall that if an $n$-variable polynomial is an affine projection of the $m \times m$ permanent, then it is a simple projection of the $(n+1)m \times (n+1)m$ permanent. For completeness we recall the brief proof: Let $\pi_{ij}(\vec{x})$ be the affine linear function corresponding to the variable $y_{ij}$ of the $m \times m$ permanent, and write $\pi_{ij} = a_0 + a_1 x_1 + \dotsb + a_n x_n$. Let $G$ be the complete directed graph with loops on $m$ vertices and edge weights $y_{ij}$. Replace the edge $(i,j)$ by $n+1$ parallel edges with weights $a_0, a_1 x_1, \dotsb, a_n x_n$. Add a new vertex on each of these parallel edges, splitting each parallel edge into two. For the edge weighted $a_0$, the two edges have weights $1,a_0$, and for the remaining edges the new edges get weights $a_i, x_i$. It is a simple and instructive exercise to see that this has the desired effect. Note also that if the original affine projection $\pi$ was monotone, then so is the constructed simple projection.
Now we show the result for simple projections. If the Hamiltonian Cycle polynomial were a monotone projection of the permanent, then by the Main Lemma, some face of $\New(\perm)$ would be an extension of $\New(HC)$.
The Newton polytope of the permanent is the cycle cover polytope (see Section~\ref{sec:prelim}). The cycle cover polytope can easily be described by the $m^2$ inequalities saying that all variables $x_{i,j}$ are non-negative, together with the equalities saying that each vertex has in-degree and out-degree exactly 1, namely $\sum_i x_{i,j} = 1$ for all $j$ and $\sum_j x_{i,j} = 1$ for all $i$ (it is easy to see that these are necessary; for sufficiency, see, e.\,g., \cite[Theorem~18.1]{schrijver}). Since equalities do not count towards the complexity of a polytope, we have $c(\New(\perm_m)) \leq m^2$.
But the Newton polytope of the $n$-th Hamiltonian Cycle polynomial is exactly the TSP polytope, which by \cite[Corollary~2]{rothvoss} requires extension complexity $2^{\Omega(n)}$.
\end{proof}
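The affine-to-simple gadget from the first paragraph of the proof can be checked numerically on the smallest case, $\perm_1(y)$ with $y \mapsto a_0 + a_1 x_1$. In the sketch below (illustrative only), we read the construction as setting all unused entries of the enlarged matrix to $0$ and giving each new vertex a self-loop of weight $1$, so that the new vertex not used by the chosen cycle can still be covered:

```python
from itertools import permutations
from math import prod

def permanent(M):
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# vertices: 0 (original), 1 and 2 (one new vertex splitting each parallel edge)
a0, a1, x1 = 3, 5, 7
M = [[0,  1,  a1],   # 0 -> 1 carries weight 1,  0 -> 2 carries a1
     [a0, 1,  0],    # 1 -> 0 carries a0,        self-loop at 1
     [x1, 0,  1]]    # 2 -> 0 carries x1,        self-loop at 2
assert permanent(M) == a0 + a1 * x1   # the two cycle covers give a0 and a1*x1
```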
\begin{theorem} \label{thm:matching}
Over any totally ordered semi-ring, the perfect matching polynomial (or ``unsigned Pfaffian'') is not a monotone affine p-projection of the permanent; in fact, any monotone affine projection from the permanent to the perfect matching polynomial has blow-up at least $2^{\Omega(n)}$.
\end{theorem}
\begin{proof}
The proof is the same as for the Hamiltonian Cycle polynomial, using \cite[Theorem~1]{rothvoss}, which gives a lower bound of $2^{\Omega(n)}$ on the extension complexity of the perfect matching polytope, which is the Newton polytope of the perfect matching polynomial.
\end{proof}
\begin{theorem} \label{thm:cut}
Over any totally ordered semi-ring, for any $q$, the $q$-th cut polynomial is not a monotone affine p-projection of the permanent; in fact, any monotone affine projection from the permanent to the $q$-th cut polynomial has blow-up at least $2^{\Omega(n)}$.
\end{theorem}
\begin{proof}
Use \cite[Theorem~7]{EF}, which says that $xc(\New(\Cut^2)) \geq 2^{\Omega(n)}$, as $\New(\Cut^2)$ is the cut polytope. The one additional observation we need is that $\New(\Cut^q)$ is just the $(q-1)$-scaled version of $\New(\Cut^2)$, and this rescaling does not affect the extension complexity.
\end{proof}
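The rescaling observation is concrete enough to verify mechanically on a small case (an illustrative sketch):

```python
from itertools import combinations

def cut_exponents(n, q):
    """Exponent vectors of Cut^q: for each A subseteq [n], the vector with q-1
    in every ordered coordinate (i, j) with i in A and j not in A."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    return {tuple((q - 1) if (i in A and j not in A) else 0 for (i, j) in pairs)
            for r in range(n + 1) for A in map(set, combinations(range(n), r))}

E2, E3 = cut_exponents(3, 2), cut_exponents(3, 3)
assert E3 == {tuple(2 * c for c in v) for v in E2}   # New(Cut^3) = 2 * New(Cut^2)
```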
\subsection{Monotone Formula and Circuit Lower Bounds} \label{sec:formula}
As pointed out by an anonymous reviewer, the universality of the permanent also holds in the monotone setting, so lower bounds on monotone projections from the permanent imply the same lower bounds on monotone formula size, and therefore quasi-polynomially related lower bounds on monotone circuit size. We assume circuits only have gates of bounded fan-in; with unbounded fan-in, the exponent in the circuit lower bounds below degrades from $n^{1/2}$ to $n^{1/3}$ (we lose a factor of a third, rather than a half, in the exponent of the exponent).
\begin{proposition} \label{prop:formula}
Any polynomial computable by a monotone formula of size $s$ is a monotone projection of $\perm_{s+1}$.
\end{proposition}
\begin{proof}
The proof of the universality of the permanent given in \cite[Proposition~2.16]{burgisserBook} works \emph{mutatis mutandis} in the monotone setting.
\end{proof}
As a consequence of this, Theorems~\ref{thm:HC}--\ref{thm:cut} are nearly tight, since every monotone polynomial in $n$ variables of $\text{poly}\xspace(n)$ degree can be written as a monotone formula of size $2^{O(n \log n)}$ (write it as a sum of monomials).
\begin{corollary}
Over any totally ordered semi-ring, any monotone formula computing the Hamiltonian Cycle polynomial, the perfect matching polynomial, or the $q$-th cut polynomial has size at least $2^{\Omega(n)}$. Consequently, any monotone circuit computing these polynomials has size at least $2^{\Omega(\sqrt{n})}$.
\end{corollary}
For the cut polynomials, we believe this result to be new. For the other polynomials, this provides a new proof of (slightly weaker versions of) previously known lower bounds. Namely, Jerrum and Snir gave a lower bound of $(n-1)((n-2)2^{n-3}+1) = 2^{n+\Omega(\log n)}$ on the monotone circuit size of $HC$\ \cite[Section~4.4]{jerrumSnir}, and a lower bound of $n(2^{n-1}-1)$ on the monotone circuit size of the permanent \cite[Section~4.3]{jerrumSnir}. As the permanent is a monotone projection of the perfect matching polynomial---namely, restrict the perfect matching polynomial to a bipartite graph, e.\,g., by setting $x_{ij}=0$ whenever $i$ and $j$ have the same parity---the same lower bound holds for the perfect matching polynomial.
\begin{proof}
The first part follows by combining Proposition~\ref{prop:formula} with Theorems~\ref{thm:HC}--\ref{thm:cut}. The second part follows from the fact that monotone circuits of size $s$ can be balanced to have size $\text{poly}\xspace(s)$ and depth $O(\log^2 s)$ (the proof in \cite{VSBR} works \emph{mutatis mutandis} in the monotone setting), which can then be converted to monotone formulas of size $s^{O(\log s)} = 2^{O(\log^2 s)}$ by the usual conversion from bounded fan-in circuits to formulas. If there is a monotone circuit of size $s$ computing any of these polynomials, there is thus a monotone formula of size $2^{O(\log^2 s)}$, which must be at least $2^{\Omega(n)}$, so $s \geq 2^{\Omega(n^{1/2})}$.
\end{proof}
\section{Open Questions} \label{sec:open}
Despite the common feeling that Razborov's super-polynomial lower bound \cite{razborov} on monotone Boolean circuits for CLIQUE ``finished off'' monotone Boolean circuit lower bounds, several natural and interesting questions remain. For example, does Directed $s$-$t$ Connectivity require monotone Boolean circuits of size $\Omega(n^3)$? (A matching upper bound is given by the Bellman--Ford algorithm.) Is there a monotone Boolean reduction from general perfect matching to bipartite perfect matching? A positive answer to the following question would rule out such monotone (projection) reductions.
\begin{open}
Extend Theorem~\ref{thm:matching} from formal polynomials over the Boolean semi-ring to Boolean functions.
\end{open}
However, there are even easier questions, intermediate between the Boolean function case and the algebraic case considered in this paper; Jukna \cite{juknaMonotone} discusses the notion of one polynomial ``counting'' another, which means that they agree on all $\{0,1\}$ inputs.
\begin{open} \label{open:count}
Prove that no monotone polynomial-size projection of the permanent agrees with the perfect matching polynomial on all $\{0,1\}$ inputs (``counts the perfect matching polynomial''). Similarly, prove that no monotone polynomial-size projection of the permanent counts the Hamiltonian cycle polynomial.
\end{open}
S. Jukna points out (personal communication) that projections of the $s$-$t$ connectivity polynomial correspond, even in the Boolean setting, to switching-and-rectifier networks, so the known lower bounds on monotone switching-and-rectifier networks (see, e.\,g., the survey \cite{razborovSRN}) imply that the Hamiltonian path polynomial and the permanent are not monotone p-projections of the $s$-$t$ connectivity polynomial, even over the Boolean semi-ring. This helps explain why the only known monotone lower bound on the $s$-$t$ connectivity polynomial that we are aware of \cite{juknaMonotone} goes by a somewhat roundabout proof: Razborov's lower bound on CLIQUE \cite{razborov}, followed by Valiant's reduction from the clique polynomial to the Hamiltonian path polynomial \cite{valiant2}, followed by a standard reduction from Hamiltonian path to counting $s$-$t$ paths. In the course of discussing this, we were led to the following question; although the motivation for the question has since disappeared, it still seems like an interesting question about polytopes, whose answer may require new methods.
\begin{open}[S. Jukna, personal communication] \label{open:path}
Is the $m$-th $s$-$t$ path polytope an extension of the $n$-th TSP polytope (or $n$-th cycle cover polytope) with $m \leq \text{poly}\xspace(n)$?
\end{open}
Since the separation problem for the $s$-$t$ path polytope is $\cc{NP}$-hard (see, e.\,g., \cite[\S 13.1]{schrijver})---and the cycle cover polytope has low (extension) complexity---answering this question negatively seems to require more subtle understanding of these polytopes than ``simply'' an extended formulation lower bound.
Another example of a natural polytope question with a similar flavor comes from the cut polynomials. In combination with B\"{u}rgisser's results and questions on the cut polynomials \cite{burgisser} (discussed in Section~\ref{sec:intro}), we are led to the following question.
\begin{open}
Is the $m$-th cut polytope an extension of the $n$-th TSP polytope, for $m \leq \text{poly}\xspace(n)$?
\end{open}
A negative answer would show that $\Cut^q$ is not complete for non-negative polynomials in $\cc{VNP}_{\mathbb{Q}}$ under monotone p-projections, though as with the example of the permanent, this is not necessarily an obstacle to being $\cc{VNP}$-complete under general p-projections. Yet even the monotone completeness of the cut polynomials remains open. In fact, even more basic questions remain open:
\begin{open} \label{open:poscomplete}
Is every non-negative polynomial in $\cc{VNP}$ a monotone projection of the Hamiltonian Cycle polynomial? Is there any polynomial that is ``positive $\cc{VNP}$-complete'' in this sense?
\end{open}
To relate this to the current proofs of $\cc{VNP}$-completeness of $HC_n$, we need to draw a distinction. Let $\cc{VP}_{\mathbb{R}}^{\geq 0}$ denote the polynomial families in $\cc{VP}_{\mathbb{R}}$ all of whose coefficients are non-negative, and let $\cc{mVP}_\mathbb{R}$ (``monotone $\cc{VP}$'') denote the class of families of polynomials with polynomially many variables, of polynomial degree, and computable by polynomial-size \emph{monotone} circuits over $\mathbb{R}$. Similarly, define $\cc{VNP}_{\mathbb{R}}^{\geq 0}$ to be the non-negative polynomials in $\cc{VNP}_{\mathbb{R}}$, and $\cc{mVNP}_\mathbb{R}$ to be the families of polynomials of the form $f_n = \sum_{\vec{e} \in \{0,1\}^{\text{poly}\xspace(n)}} g_m(\vec{e}, \vec{x})$, where $m \leq \text{poly}\xspace(n)$ and $(g_m) \in \cc{mVP}_\mathbb{R}$.
Valiant's original completeness proof for the Hamiltonian Cycle polynomial \cite{valiant2} is ``mostly'' monotone: It uses polynomial-size formulas for the coefficients of the monomials (coming from the definition of $\cc{VNP}$), but otherwise is entirely monotone. In other words, the proof shows that $HC$ is $\cc{mVNP}$-hard under monotone projections. However, we note that it's not clear whether $HC$ is even \emph{in} $\cc{mVNP}$! Question~\ref{open:poscomplete} asks whether $HC$, or indeed any polynomial, is $\cc{VNP}^{\geq 0}$-complete under monotone projections; the question of whether there exist polynomials that are $\cc{mVNP}$-complete under monotone projections also seems potentially interesting.
Finally, we ask about stronger notions of monotone reduction, which seem to require a different kind of proof technique. Recall that a \definedWord{c-reduction} from $f$ to $g$ is a family of algebraic circuits for $f$ with oracle gates for $g$.
\begin{open}
Do the analogues of Theorems~\ref{thm:HC}--\ref{thm:cut} hold for monotone bounded-depth c-reductions in place of affine p-projections? What about weakly-skew or even general monotone c-reductions?
\end{open}
\section{Subsequent Developments}
Since the appearance of the preliminary version of this paper \cite{prelim}, our Main Lemma~\ref{lem:main} has been used to prove that several other polynomials of combinatorial and complexity-theoretic interest are not sub-exponential-size projections of the permanent.
\begin{enumerate}
\item The $n$-th \definedWord{satisfiability polynomial} over $\mathbb{F}_q$ is a polynomial in $n + 8\binom{n}{3}$ variables denoted $X_1, \dotsc, X_n$ and $\{Y_c : c \in C_n\}$, where $C_n$ denotes the set of clauses on 3 literals in $n$ variables. It is defined as:
\[
\Sat^q_n(X, Y) = \sum_{a \in \{0,1\}^n} \left(\prod_{i \in [n]} X_i^{q-1} \right) \left( \prod_{c \in C_n : c(a) = 1} Y_c^{q-1} \right)
\]
\item A \definedWord{clow} in an $n$-vertex graph is a closed walk of length exactly $n$, in which the minimum-numbered vertex appears exactly once. The $n$-th \definedWord{clow polynomial} over $\mathbb{F}_q$ is a polynomial in $\binom{n}{2} + n$ variables $X_e$ for each edge $e$ in the complete undirected graph $K_n$ on $n$ vertices and $Y_v$ for each $v \in [n]$. It is defined as:
\[
\Clow^q_n(X,Y) = \sum_{w : \text{clow of length $n$}} \left(\prod_{e : \text{edges in $w$}} X_e^{q-1} \right) \left(\prod_{v : \text{distinct vertices in $w$}} Y_v^{q-1} \right),
\]
or more precisely,
\[
\Clow^q_n(X,Y) = \sum_{w = [v_0, \dotsc, v_{n-1}]} \left( \prod_{i \in [n]} X_{(v_{i-1},\, v_{i \bmod n})}^{q-1} \right) \left(\prod_{v \in \{v_0, \dotsc, v_{n-1}\}} Y_v^{q-1} \right),
\]
where the sum is over clows $w$ and $v_0$ denotes the minimum-numbered vertex in $w$.
\item The \definedWord{clique polynomial} is a polynomial in $\binom{n}{2}$ variables $X_e$:
\[
\Clique_n(X) = \sum_{T \subseteq \binom{[n]}{2} : T \text{ is a clique in $K_n$}} \prod_{e \in T} X_e.
\]
\end{enumerate}
\begin{theorem*}[{Mahajan and Saurabh \cite[Theorems~2 and 6]{MS}}]
Over any totally ordered semi-ring, any monotone affine projection from the permanent to $\Sat^q_n$ or to the clique polynomial requires blow-up at least $2^{\Omega(\sqrt{n})}$. Any monotone affine projection from the permanent to $\Clow^q_n$ requires blow-up at least $2^{\Omega(n)}$.
\end{theorem*}
As in Section~\ref{sec:formula}, we get the following corollary. Again, we note that the lower bound on the clique polynomial over the Boolean semi-ring only works for the formal clique polynomial (in contrast to Razborov's result \cite{razborov}, which works for any monotone Boolean circuit computing the CLIQUE \emph{function}).
\begin{corollary} \label{cor:MS}
Over any totally ordered semi-ring, any monotone formula computing $\Clow^q_n$ has size at least $2^{\Omega(n)}$ and any monotone circuit computing $\Clow^q_n$ has size at least $2^{\Omega(\sqrt{n})}$. Any monotone formula computing $\Sat^q_n$ or $\Clique_n$ has size at least $2^{\Omega(\sqrt{n})}$ and any monotone circuit computing these polynomials has size at least $2^{\Omega(n^{1/4})}$.
\end{corollary}
For the clow and satisfiability polynomials, we believe this result to be new. For the clique polynomials, this provides a new proof of (a slightly weaker version of) the exponential monotone circuit lower bound due to Schnorr \cite{schnorr}.\footnote{Schnorr showed a $\binom{n}{k}-1$ lower bound on the monotone circuit size of the $k$-th clique polynomial $\Clique^k_n$, the sum over all cliques of size $k$, rather than all cliques. For $k=n/2$, this lower bound is asymptotically equal to $2^n / \sqrt{\pi n/2}$. The $k$-th clique polynomial $\Clique_n^k$ is the degree $k$ homogeneous component of the clique polynomial $\Clique_n$; by homogenization (implicit in Strassen's work, explicit in Valiant \cite[Lemma~2]{valiantNeg}), any monotone circuit of size $s$ for $\Clique_n$ can be converted into a monotone circuit of size $s(n/2+1)^2$ computing $\Clique^{n/2}_n$. Thus Schnorr's result implies a lower bound of $\Omega(2^n/n^{5/2})$ on the monotone circuit complexity of $\Clique_n$. }
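As a quick numerical sanity check (ours) of the asymptotics in the footnote, one can compare $\binom{n}{n/2}$ with $2^n/\sqrt{\pi n/2}$ for growing $n$:

```python
import math

def ratio(n):
    """Ratio of binom(n, n/2) to the Stirling-type approximation
    2^n / sqrt(pi * n / 2); it tends to 1 from below as n grows."""
    return math.comb(n, n // 2) / (2**n / math.sqrt(math.pi * n / 2))

print([round(ratio(n), 4) for n in (10, 100, 1000)])
```

The error is of order $1/n$, consistent with the standard expansion of the central binomial coefficient.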
\textbf{Acknowledgment.} We would like to thank Stasys Jukna for the question that motivated this paper \cite{juknaQ}, and \url{cstheory.stackexchange.com} for providing a forum for the question. We also thank Stasys for comments on a draft, pointing out the paper by Schnorr \cite{schnorr}, and interesting discussions leading to Questions~\ref{open:count} and \ref{open:path}. We thank Leslie Valiant for an interesting conversation that led to Question~\ref{open:poscomplete}. We thank Ketan Mulmuley and Youming Qiao for collaborating on \cite{GMQ}, which is why the author had Newton polytopes on the mind. We thank an anonymous reviewer for pointing out the monotone universality of the permanent and therefore the implications for monotone formula and circuit size (Section~\ref{sec:formula} and Corollary~\ref{cor:MS}). The author was supported during this work by an Omidyar Fellowship from the Santa Fe Institute.
\bibliographystyle{plainurl}
% End of arXiv:1510.08417, ``Monotone Projection Lower Bounds from Extended Formulation Lower Bounds'' (cs.CC).
% https://arxiv.org/abs/1805.01368
\title{Homological stability for spaces of commuting elements in Lie groups}
\begin{abstract}
In this paper we study homological stability for spaces ${\rm Hom}(\mathbb{Z}^n,G)$ of pairwise commuting $n$-tuples in a Lie group $G$. We prove that for each $n\geqslant 1$, these spaces satisfy rational homological stability as $G$ ranges through any of the classical sequences of compact, connected Lie groups, or their complexifications. We prove similar results for rational equivariant homology, for character varieties, and for the infinite-dimensional analogues of these spaces, ${\rm Comm}(G)$ and $B_{\rm com} G$, introduced by Cohen--Stafa and Adem--Cohen--Torres-Giese respectively. In addition, we show that the rational homology of the space of unordered commuting $n$-tuples in a fixed group $G$ stabilizes as $n$ increases. Our proofs use the theory of representation stability, in particular the theory of ${\rm FI}_W$-modules developed by Church--Ellenberg--Farb and Wilson. In all of these results, we obtain specific bounds on the stable range, and we show that the homology isomorphisms are induced by maps of spaces.
\end{abstract}
\section{Introduction}
Let $G$ be a compact, connected Lie group. The focus of this article is the space ${\rm Hom}({\mathbb Z}^n,G)$
consisting of all group homomorphisms $\rho: {\mathbb Z}^n \to G$.
The standard basis for ${\mathbb Z}^n$ defines an embedding ${\rm Hom}({\mathbb Z}^n, G)\hookrightarrow G^n$, and
${\rm Hom}({\mathbb Z}^n, G)$ has the resulting subspace topology.
This space, then, consists of ordered commuting $n$--tuples in $G$.
We will consider various related spaces as well, such as
the character variety, or representation space, ${\rm Rep}({\mathbb Z}^n,G)={\rm Hom}({\mathbb Z}^n,G)/G$.
These spaces can also be interpreted as
moduli spaces of flat connections on principal $G$--bundles over the $n$--torus,
as studied by Witten~\cite{witten1998toroidal} and Kac--Smilga~\cite{kac2000Smilga}.
The cases of commuting pairs and triples were examined in detail in
the monograph by Borel, Friedman and Morgan~\cite{borel2002almost}. More recently, homological and homotopical properties of these spaces have been studied by a variety of authors, including Baird~\cite{baird2007cohomology}, Adem--Cohen~\cite{adem2007commuting}, Florentino--Lawton~\cite{florentino2014topology}, and Pettet--Souto~\cite{pettet2013souto}.
In three of the most classical cases, namely
$G = {\rm U}(r)$, ${\rm SU}(r)$, or ${\rm Sp}(r)$, the representation space ${\rm Hom}({\mathbb Z}^n, G)$ is
connected for all $n\geqslant 1$. In fact, these are the only
semisimple compact connected Lie groups with this
property~\cite{adem2007commuting,kac2000Smilga}. For instance,
Sjerve and Torres-Giese \cite{torres2008fundamental} showed that
${\rm Hom}({\mathbb Z}^n, {\rm SO}(3))$ is disconnected for $n\geqslant 2$.
The authors \cite{ramras2017hilbert} gave an explicit formula for
the Poincar\'e series of the path component of the trivial representation
${\rm Hom}({\mathbb Z}^n,G)_1$ using methods from \cite{cohen2016spaces}, and the second author \cite{stafa2017poincare}
gave a similar formula for the path component ${\rm Rep}({\mathbb Z}^n,G)_1$:
\begin{equation}\label{eqn: Hom-PP}
P({\rm Hom}({\mathbb Z}^n,G)_1;q)=\frac{1}{|W|} \left( \prod_{i=1}^r (1-q^{2d_i}) \right)
\left(\sum_{w\in W} \frac{ \det(1+qw)^n}{\det(1-q^2w)} \right),
\end{equation}
and
\begin{equation}\label{eqn: Rep-PP}
P({\rm Rep}({\mathbb Z}^n,G)_1;q)=\frac{1}{|W|} \sum_{w\in W} \det(1+qw)^n,
\end{equation}
where $W$ is the Weyl group of $G$ with respect to a maximal torus $T\leqslant G$, $r$ is the rank of $G$, and the positive integers $d_1,\dots,d_r$
are the characteristic degrees of $W$. (For complex reductive groups, the mixed Hodge structure of these character varieties was subsequently determined by Florentino--Silva~\cite{FS2017}.)
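As a concrete check of formulas (\ref{eqn: Hom-PP}) and (\ref{eqn: Rep-PP}), one can evaluate them for $G = {\rm SU}(2)$, where $W = \{\pm 1\}$ acts on the one-dimensional Lie algebra of the maximal torus by sign and the single characteristic degree is $d_1 = 2$. The following sympy sketch (our illustration, not from the paper) recovers the Poincar\'e series of ${\rm Hom}({\mathbb Z},{\rm SU}(2)) = {\rm SU}(2) \cong S^3$ and of ${\rm Rep}({\mathbb Z}^2,{\rm SU}(2)) = T^2/W$, the ``pillowcase'' sphere:

```python
import sympy as sp

q = sp.symbols('q')

# G = SU(2): the Weyl group W = {+1, -1} acts on the 1-dimensional Lie
# algebra of the maximal torus by sign; the characteristic degree is d_1 = 2.
W = [sp.Integer(1), sp.Integer(-1)]

def P_Hom(n):
    # formula (Hom-PP) for SU(2): det(1 + q w) = 1 + q w, det(1 - q^2 w) = 1 - q^2 w
    s = sum((1 + q*w)**n / (1 - q**2*w) for w in W)
    return sp.expand(sp.cancel((1 - q**4) * s / 2))

def P_Rep(n):
    # formula (Rep-PP) for SU(2)
    return sp.expand(sum((1 + q*w)**n for w in W) / 2)

assert P_Hom(1) == sp.expand(1 + q**3)   # Hom(Z, SU(2)) = SU(2), a 3-sphere
assert P_Rep(1) == 1                     # Rep(Z, SU(2)) = T/W, an interval
assert P_Rep(2) == sp.expand(1 + q**2)   # Rep(Z^2, SU(2)) = T^2/W, a 2-sphere
```

Evaluating $P({\rm Hom}({\mathbb Z}^n,{\rm SU}(2))_1;q)$ for several $n$ in this way is exactly the kind of explicit computation that suggested the stability pattern discussed below.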
Explicit computations based on these formulas indicated a stability pattern, for fixed $n$, as $G$ varies through one of the classical sequences of Lie groups (e.g. the unitary groups). The aim of this article is to prove that this phenomenon does in fact hold. We establish similar results for a wide range of spaces built from commuting tuples, summarized in Table~\ref{table: all sequences of spaces}.
These stability results can be seen as extending the well-known fact that the homology of $G$ itself stabilizes in each of the classical families of Lie groups.
In Section~\ref{sec: nil}, we extend our results to reductive Lie groups and to nilpotent discrete groups. In the case of homomorphism spaces, it is a theorem of Pettet and Souto~\cite{pettet2013souto} that if $G$ is the group of real or complex points of a (Zariski) connected, reductive algebraic group with maximal compact subgroup $K$, then ${\rm Hom}({\mathbb Z}^n, G)$ deformation retracts to ${\rm Hom}(\mathbb{Z}^n, K)$. The corresponding result for ${\rm Rep}({\mathbb Z}^n, G)$ was established by Florentino--Lawton~\cite{florentino2014topology} (note that when $G$ is reductive, ${\rm Rep}({\mathbb Z}^n, G)$ should be interpreted as a GIT quotient), and generalizations to nilpotent discrete groups were obtained by Bergeron~\cite{bergeron2015topology}.
Moreover,
Bergeron and Silberman showed in \cite{bergeron2016note} that when $G$ is compact and $\Gamma$ is nilpotent, the
connected component of the identity in ${\rm Hom}(\Gamma, G)$ contains only abelian representations, and hence is the same as ${\rm Hom}(H_1 (\Gamma),G)_1$.
These results allow us to extend most of the stability statements in the paper (Section~\ref{sec: nil}). In Section~\ref{sec: covers}, we show that the homology of the spaces considered here is invariant under finite coverings of Lie groups.
We approach homological stability using the machinery of \emph{representation stability}
in the sense of Church and Farb \cite{church2013representation}. In particular, we use J. Wilson's extension of this theory to the classical sequences of Weyl groups.
Each of our homological stability results is obtained by first proving representation stability for an associated sequence.
For instance, work of Baird identifies the homology of ${\rm Hom}(\mathbb{Z}^n, G)_1$ as the fixed points of a certain Weyl group representation.
We use representation stability to control the behavior of this sequence of Weyl group representations as $G$ varies through one of the classical sequences of Lie groups, finding in particular that the ranks of the fixed-point subspaces (the isotypical components of the trivial representation) stabilize. Stability bounds are obtained using the theory of ${\rm FI\#}$--modules introduced by Church--Ellenberg--Farb~\cite{church2015fi}, and Wilson's extension of this theory to Weyl groups of type B/C~\cite{Wilson-Math-Z}.
We give a minimalist introduction to
representation stability, using the language of ${\rm FI}_W$-- and ${\rm FI}_W\#$--modules,
in Section \ref{FIW-modules}. We show that if an ${\rm FI}_W$--module $V = \{V_i\}_{i\geqslant 1}$ is finitely generated in stage $d$, then for $n\geqslant d$,
there are canonical surjections $V_n^{W_n} \twoheadrightarrow V_{n+1}^{W_{n+1}}$
between the subspaces of invariants (Proposition~\ref{avg}).
This statement has the following topological consequence:
if the degree--$k$ rational homology of an ${\rm FI}_W$--space $\{X_r\}_{r\geqslant 1}$ is generated in stage $N$, then the maps $H_k (X_n/W_n) \to H_k (X_{n+1}/W_{n+1})$ are surjective for $n\geqslant N$ (Theorem~\ref{quot-stab}).
The reader familiar with the theory of ${\rm FI\#}$--modules may notice that the spaces ${\rm Hom}(\mathbb{Z}^n, G)$ \emph{do not} have the structure of ${\rm FI\#}$--spaces as $G$ varies. We exploit work of Baird~\cite{baird2007cohomology}, which shows that their \emph{equivariant} homology does have this extended structure. This allows us to deduce stability bounds for equivariant homology (Section~\ref{sec: equivariant stability}), from which we derive bounds on ordinary homological stability (Section~\ref{sec: stable range for Hom}) using a comparison of Eilenberg--Moore spectral sequences.
It is worth mentioning that homological stability results have a long history, going back to work of Arnold
\cite{arnol1969cohomology} and Cohen \cite{cohen1972thesis} on braid groups, and work of Quillen on general linear groups~\cite{Quillen}. The methods we use here are quite different.
Before describing the main results in this article, we make precise our terminology regarding homological stability.
A sequence of topological spaces $\{X_n\}_{n\geqslant 0}$ with maps
$\phi_{n}: X_n \to X_{n+1} $ between them
is {\it strongly rationally homologically stable} if for each $k\geqslant 0$,
there exists $N = N(k)$ such that
$$(\phi_n)_*\colon\thinspace H_k (X_n)\longrightarrow H_k (X_{n+1})$$
is an isomorphism for $n\geqslant N$. Since our main results are all for rational homology, we will often drop the coefficient group ${\mathbb Q}$ from the notation. While we focus on rational homology, the corresponding statements for rational cohomology are equivalent.
\subsection{Ordered commuting tuples}
Let $\{G_r\}_{r\geqslant 1}$ denote one of the classical families of
compact, connected Lie groups -- namely $G_r = {\rm SU}(r)$, ${\rm U}(r)$, ${\rm SO}(2r+1)$, ${\rm Sp}(r)$, or ${\rm SO}(2r)$ -- or the complexifications thereof. We have standard inclusions $G_r\hookrightarrow G_{r+1}$ (see Section~\ref{Lie}).
\begin{thm}
Fix $k, n\geqslant 0$ and let $\{G_r\}_{r\geqslant 1}$ be one of the classical sequences of Lie groups listed above.
Then the standard inclusions $G_r\hookrightarrow G_{r+1}$ induce isomorphisms
$$H_k({\rm Rep}({\mathbb Z}^n,G_r)_1; \mathbb{Q})\srm{\cong} H_k({\rm Rep}({\mathbb Z}^n,G_{r+1})_1; \mathbb{Q})$$
for $r\geqslant k$, and
$$H_k({\rm Hom}({\mathbb Z}^n,G_r)_1; \mathbb{Q})\srm{\cong} H_k({\rm Hom}({\mathbb Z}^n,G_{r+1})_1; \mathbb{Q})$$
for $r - \lfloor \sqrt{r} \rfloor \geqslant k$, where $\lfloor \cdot \rfloor$ denotes the floor function.
\end{thm}
These results are proved in Theorems~\ref{stability-wrt-r-Rep} and~\ref{Hom-bound}, and Corollary~\ref{cor: nil complex} respectively. We prove similar results for $G_r$--equivariant homology in Section~\ref{sec: equivariant stability}.
This result, and the others discussed below, also apply to the groups ${\rm Spin}(r)$ (Example~\ref{ex: Spin}), and to the projectivizations of the above sequences (Example~\ref{ex: proj}), although for the projective groups there are no maps inducing the isomorphisms.
\subsection{Unordered commuting tuples}
Let $G$ be a compact and connected Lie group (not necessarily of classical type).
The symmetric group $S_n$ acts on ${\mathbb Z}^n$ by permuting the standard basis, and hence acts on
${\rm Hom}({\mathbb Z}^n,G)$ and ${\rm Rep}({\mathbb Z}^n,G)$ by permuting the entries of commuting $n$--tuples.
The space of
\textit{unordered pairwise commuting $n$--tuples} in $G$ is the quotient ${\rm Hom}({\mathbb Z}^n,G)/S_n$, and the corresponding moduli space is
${\rm Rep}({\mathbb Z}^n,G)/S_n.$
We have natural inclusions
$${\rm Hom}({\mathbb Z}^n,G)/S_n \hookrightarrow {\rm Hom}({\mathbb Z}^{n+1},G)/S_{n+1}$$
sending $[(g_1, \ldots, g_n)]$ to $[(g_1, \ldots, g_n, 1)]$, where $1\in G$ is the identity, and similarly for unordered representation spaces.
\begin{thm}\label{unordered-thm}
Let $G$ be the group of
complex or real points of a reductive linear algebraic group,
defined over ${\mathbb R}$ in the latter case. Then for each $k\geqslant 0$ the
sequences
$$n\mapsto H_k({\rm Hom}({\mathbb Z}^n,G)_1; \mathbb{Q}) \textrm{ and } n\mapsto H_k({\rm Rep}({\mathbb Z}^n,G)_1; \mathbb{Q})$$
satisfy uniform representation stability, with stable range $n\geqslant 2k$. Moreover, the
natural maps
$$H_k({\rm Hom}({\mathbb Z}^n,G)_1/S_{n}; \mathbb{Q})\longrightarrow H_k({\rm Hom}({\mathbb Z}^{n+1},G)_1/S_{n+1}; \mathbb{Q})$$
and
$$H_k({\rm Rep}({\mathbb Z}^n,G)_1/S_{n}; \mathbb{Q})\longrightarrow H_k({\rm Rep}({\mathbb Z}^{n+1},G)_1/S_{n+1}; \mathbb{Q})$$
are isomorphisms for $n\geqslant k$.
\end{thm}
This is proven in Theorem~\ref{thm: stability for fixed G} and Corollary~\ref{cor: stability in n nil}, while an analogue for equivariant homology appears in Theorem~\ref{thm: equiv.coh. stability Hom mod S_n}. We note that these results bear some similarity to homological stability for configuration spaces, where one stabilizes with respect to the size of the configurations. It would be interesting to know whether stability holds for \emph{configurations} of commuting elements in $G$; that is, commuting tuples in which repetition is not allowed.
\subsection{Symmetric products and representation stability}
The above results are closely related to a general homological stability result for symmetric products. This stability result goes back at least to Steenrod~\cite{steenrod}; here we use representation stability to give a simple proof of rational homological stability for these spaces. (Steenrod in fact proved stability for \emph{integral} homology, and this will allow us to deduce integral stability in several of the situations we study.)
Recall that the $n$--th symmetric product of a space $X$ is the quotient space
$\mathrm{Sym}^n X = X^n/S_n$, where the symmetric group $S_n$ acts
by permuting the factors. A basepoint in $X$ determines maps $\mathrm{Sym}^n X \hookrightarrow \mathrm{Sym}^{n+1} (X)$ (adding the basepoint in the last factor), and the colimit of this sequence is $\mathrm{Sym}^\infty (X)$.
In general, if $X$ is path connected and $H_* (X; \mathbb{Q})$ is finitely generated in each degree, then we show (Proposition \ref{symm-prod}) that for each $k\geqslant 0$,
$H_k (X^n; \mathbb{Q})$
forms a uniformly representation stable
sequence of $S_n$-representations.
In Corollary~\ref{symm-prod-cor}, we explain how homological stability for symmetric products then follows from the general theory. For spaces equipped with an involution, we obtain a similar representation stability result in Proposition~\ref{signed-symm-prod}.
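To illustrate, Macdonald's classical generating function for the Betti numbers of symmetric products (a standard fact, not proved here) lets one compute these Poincar\'e series directly; the following sympy sketch (ours) checks it against $\mathrm{Sym}^n(S^2) \cong \mathbb{CP}^n$ and $\mathrm{Sym}^n(S^1) \simeq S^1$, where the stabilization in $n$ is visible by inspection:

```python
import sympy as sp

q, t = sp.symbols('q t')

def sym_poincare(PX, n):
    """Rational Poincare series of Sym^n(X) from that of X, via
    Macdonald's generating function: sum_n P(Sym^n X; q) t^n equals the
    product of (1 - t*q^j)^(-b_j) over even j and (1 + t*q^j)^(b_j)
    over odd j, where b_j are the Betti numbers read off from PX."""
    gen = sp.Integer(1)
    for (j,), b in sp.Poly(PX, q).terms():
        gen *= (1 - t*q**j)**(-b) if j % 2 == 0 else (1 + t*q**j)**b
    series = sp.expand(sp.series(gen, t, 0, n + 1).removeO())
    return series.coeff(t, n)

# Sym^n(S^2) = CP^n:  P = 1 + q^2 + ... + q^{2n}
assert sp.simplify(sym_poincare(1 + q**2, 3) - (1 + q**2 + q**4 + q**6)) == 0
# Sym^n(S^1) is homotopy equivalent to S^1 for n >= 1:  P = 1 + q
assert sp.simplify(sym_poincare(1 + q, 5) - (1 + q)) == 0
```

In both examples $H_k(\mathrm{Sym}^r X; \mathbb{Q})$ is visibly independent of $r$ once $r \geqslant k$, matching the stable range in Table~\ref{table: all sequences of spaces}.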
In addition to the representation stability result for products, we also use work of Baird \cite{baird2007cohomology} to prove various related representation stability results, outlined in Table \ref{table: rep stability results}, that help us prove the homological stability results in Table \ref{table: all sequences of spaces}.
\subsection{Infinite-dimensional analogues}
We also study two constructions that combine, in different ways, the spaces ${\rm Hom}({\mathbb Z}^n,G)$ and ${\rm Rep}({\mathbb Z}^n,G)$ for varying $n$.
The space $B_{\rm com} G$, known as the \emph{classifying space for commutativity in $G$}, is the
geometric realization of the simplicial space ${\rm Hom}({\mathbb Z}^\bullet,G)$. Introduced by Adem--Cohen--Torres-Giese~\cite{adem2012commuting}, this space has been studied by a variety of authors~\cite{adem2015gomez, adem2017gomezlindtillman, AGV, gritschacher2018spectrum}.
When $G={\rm U}$ is the infinite unitary group, $B_{\rm com} {\rm U}$ represents \emph{commutative complex $K$--theory}.
The space ${\rm Comm}(G)$ is a subspace of the James reduced product $J(G)$, and
${\rm Comm}(G)/G$ is its image in $J(G)/G$. These spaces played an important role in the calculations of Poincar\'e series discussed above. We show that these constructions
satisfy strong rational homological stability for all of the above sequences of Lie groups (Theorem~\ref{Hom-bound} and Corollary~\ref{cor: nil stable}). We note that the rational cohomology rings of $B_{\rm com} {\rm U}$, $B_{\rm com} {\rm SU}$ and $B_{\rm com} {\rm Sp}$ were calculated
in \cite{adem2015gomez}, and each is polynomial.
\
In conclusion, our main results can
be summarized by stating that the sequences of rational
homology groups in Table \ref{table: rep stability results} (below) enjoy representation stability, while the sequences in Table \ref{table: all sequences of spaces}
stabilize. In these tables, the sequence $G_r$ can be any of the sequences discussed above, with maximal torus $T_r < G_r$; in fact, our results also extend to certain sequences of real reductive groups (Section~\ref{sec: nil}). Additionally, in Section~\ref{sec: nil} we extend our results to finitely generated nilpotent groups in place of ${\mathbb Z}^n$.
We do not expect the bounds in Table \ref{table: all sequences of spaces} to be optimal;
in fact, the bounds $r- \lfloor \sqrt{r} \rfloor \geqslant k$ can be improved to $r- \lfloor \sqrt{r} \rfloor +1\geqslant k$ (except for $G_1 = {\rm SO}(2)$ -- see Remark~\ref{rmk: +1}).
Note that these bounds imply slightly weaker linear bounds.
\begin{table}[ht!]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{llr}
\hline
\textit{Module} & \textit{Groups acting} & \textit{Stable range}\\
\hline
\hline
$H_k(X^n)$ & $S_n$, permuting coordinates & $n\geqslant 2k$\\
\hline
$H_k(T_r)$ & $W_r$, acting diagonally & $r\geqslant 2k$*\\
$H_k(G_r/T_r)$ & $W_r$, acting by translation & \\
$H_k(BT_r)$ & $W_r$, acting by functoriality & $r\geqslant 2k$*\\
$H_k(J(T_r))$ & $W_r$, acting by functoriality & $r\geqslant 2k$* \\
\hline
\hline
$H_k({\rm Hom}({\mathbb Z}^n,G)_1)$ & $S_n$, permuting coordinates & $n\geqslant 2k$\\
$H_k({\rm Rep}({\mathbb Z}^n,G)_1)$ & $S_n$, permuting coordinates & $n\geqslant 2k$\\
\hline
\
\end{tabular}
\caption{Representation stability bounds.\\
*These bounds are established in all cases \emph{except} $G_r = {\rm SU}(r)$.}
\label{table: rep stability results}
\end{table}
\
\begin{table}[ht!]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{llr}
\hline
\textit{Homology sequence} & \textit{Space description} & \textit{Stable range}\\
\hline
\hline
$H_k(\mathrm{Sym}^r(X))$ & $r$--fold symmetric product of $X$ & $r\geqslant k$\\
\hline
$H_k({\rm Hom}({\mathbb Z}^n,G_r)_1)$ & ordered commuting $n$--tuples in $G_r$ & $r - \lfloor \sqrt{r} \rfloor \geqslant k$\\
$H_k({\rm Rep}({\mathbb Z}^n,G_r)_1)$ & $G_r$--character variety of ${\mathbb Z}^n$& $r\geqslant k$\\
$H_k(B_{\rm com} (G_r)_1)$ & classifying space $|{\rm Hom}({\mathbb Z}^\bullet,G_r)_1|$ & $r - \lfloor \sqrt{r} \rfloor \geqslant k$ \\
$H_k({\rm Comm}(G_r)_1)$ & James-type construction & $r - \lfloor \sqrt{r} \rfloor\geqslant k$ \\
$H_k(B_{\rm com} (G_r)_1/G_r)$ & classifying space $|{\rm Hom}({\mathbb Z}^\bullet,G_r)_1/G_r|$ & $r\geqslant k$\\
$H_k({\rm Comm}(G_r)_1/G_r)$ & James-type construction & $r \geqslant k$ \\
\hline
$H_k^{G_r}({\rm Hom}({\mathbb Z}^n,G_r)_1)$ & ordered commuting $n$--tuples in $G_r$ & $r\geqslant k$\\
$H_k^{G_r}({\rm Comm}(G_r)_1)$ & James-type construction & $r\geqslant k$\\
$H_k^{G_r}(B_{\rm com} (G_r)_1)$ & classifying space $|{\rm Hom}({\mathbb Z}^\bullet,G_r)_1|$ & $r\geqslant k$\\
\hline
\hline
$H_k({\rm Hom}({\mathbb Z}^n,G)_1/S_n)$ & unordered commuting $n$--tuples in $G$ & $n\geqslant k$\\
$H_k({\rm Rep}({\mathbb Z}^n,G)_1/S_n)$ & unordered $G$--character variety of ${\mathbb Z}^n$ & $n\geqslant k$\\
$H_k^G({\rm Hom}({\mathbb Z}^n,G)_1/S_n)$ & unordered commuting $n$--tuples in $G$ & $n\geqslant k$\\
\hline
\
\end{tabular}
\caption{Stability bounds for rational (equivariant) homology groups.}
\label{table: all sequences of spaces}
\end{table}
\newpage
\begin{rmk}\label{rmk: SO}
While our methods naturally lead us to divide the special orthogonal groups into two sequences (and the stability bounds are most easily stated in this way), the entire sequence $\{{\rm Hom}({\mathbb Z}^n, {\rm SO}(m))\}_{m\geqslant 2}$ in fact satisfies strong homological stability.
The stabilization maps are induced by block sum with the identity matrix, and
in the stable range, composing any two adjacent maps in the sequence
\begin{multline*}
H_k ({\rm Hom}({\mathbb Z}^n, {\rm SO}(m-1))_1)\longrightarrow H_k ({\rm Hom}({\mathbb Z}^n, {\rm SO}(m))_1)\\
\longrightarrow H_k ({\rm Hom}({\mathbb Z}^n, {\rm SO}(m+1))_1)\longrightarrow H_k ({\rm Hom}({\mathbb Z}^n, {\rm SO}(m+2))_1)
\end{multline*}
yields an isomorphism; hence each of these maps is in fact an isomorphism.
Similar remarks apply to the other functors considered in the paper, as well as to the complexifications ${\rm SO}(2m, {\mathbb C})$ and to the Spin groups.
\end{rmk}
\subsection*{Acknowledgments}
The authors would like to thank Jenny Wilson and John Wiltshire-Gordon for helpful conversations regarding representation stability, Tom Baird for suggesting the strategy in Lemma~\ref{lem: translation=conjugation}, Sean Lawton for helpful conversations about Lie theory, Fred Cohen for his feedback on a technical aspect, Jeremy Miller for telling us about Steenrod's result \cite{steenrod}, Bernardo Villarreal for comments on the first version, and Sarkhan Badirli for his assistance with Matlab computations. We also thank Mahir Can and Soumya Banerjee for asking about stability for equivariant homology. Finally, we thank the referees, whose comments helped to improve the exposition.
\section{Homology of commuting tuples}\label{Hom}
In this section we review work of Baird~\cite{baird2007cohomology} on the (co)homology of the spaces ${\rm Hom}({\mathbb Z}^n, G)$, and explain how Baird's results interact with homomorphisms between Lie groups.
For a topological group $G$ and a finitely generated discrete group $\pi$, a group
representation $\rho : \pi \to G$ is completely determined by its image on a set of generators
$g_1,\dots,g_n \in \pi.$ That is, the association of $\rho$ with the $n$--tuple
$(\rho(g_1),\dots,\rho(g_n)) \in G^n$ gives an inclusion
${\rm Hom}(\pi,G) \hookrightarrow G^n$, and the subspace topology on ${\rm Hom}(\pi, G)$ is independent of the choice of generating set (see \cite[\S 2]{villarreal2017cosimplicial} for a direct proof). The group $G$ acts by conjugation on ${\rm Hom}(\pi,G)$, and the quotient space is denoted by ${\rm Rep}(\pi, G) = {\rm Hom}(\pi,G)/G$.
\subsection{The conjugation map}
Throughout this section, $G$ will be a compact and connected Lie group, $T\leqslant G$ a maximal torus, and $W = N(T)/T$ its Weyl group.
Work of Baird~\cite{baird2007cohomology} allows us to describe the homology of the identity components
${\rm Hom}({\mathbb Z}^n, G)_1$ and ${\rm Rep}({\mathbb Z}^n, G)_1$ (and their unordered versions) as Weyl group invariants in the homology of some related spaces, and we will explain how various maps between spaces of commuting elements behave on homology.
The conjugation map $G \times T \to G$ given by $(g,t)\mapsto gtg^{-1}$
is surjective, since every element of a compact, connected Lie group is conjugate to an element of the maximal torus. Baird~\cite{baird2007cohomology} generalized this to a map
\begin{align*}\label{eqn: conj. GxTn map to Hom}
\begin{split}
G\times T^n & \to {\rm Hom}({\mathbb Z}^n,G) \\
(g,t_1,\dots,t_n) &\mapsto (gt_1 g^{-1},\dots,gt_n g^{-1}).
\end{split}
\end{align*}
This map is not surjective in general, since its domain is path connected while ${\rm Hom}({\mathbb Z}^n,G)$ may be disconnected. However, as shown by Baird, the image of
this map is precisely the path component of the trivial representation ${\rm Hom}({\mathbb Z}^n,G)_1$.
This map factors through the quotient by the normalizer of $T$ to give a map
$$
G \times_{NT} T^n \to {\rm Hom}({\mathbb Z}^n,G)_1,
$$
where $NT$ acts by right multiplication on $G$ and diagonally by
conjugation on $T^n$. Moreover, the conjugation action of $T\leqslant NT$ on $T^n$ is trivial, so we have a homeomorphism $G \times_{NT} T^n \srm{\cong} G/T \times_{W} T^n$, and we obtain a map
\begin{equation}\label{phi}
\phi = \phi_n \colon\thinspace G/T\times_W T^n \to {\rm Hom}({\mathbb Z}^n,G)_1,
\end{equation}
which we call the \emph{conjugation map}.
The stability properties of the spaces mentioned in the introduction
are studied mainly via this map and others derived from it.
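As a quick numerical sanity check (ours, with $G = {\rm U}(3)$ and numpy, not from the paper) that the conjugation map (\ref{phi}) really lands in ${\rm Hom}({\mathbb Z}^n,G)$: conjugating a tuple of elements of the maximal torus $T$ (diagonal unitaries) by a single $g \in G$ produces a commuting tuple.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(r):
    # QR decomposition of a random complex matrix yields a unitary matrix
    z = rng.normal(size=(r, r)) + 1j * rng.normal(size=(r, r))
    Q, R = np.linalg.qr(z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # fix column phases

r, n = 3, 4
g = random_unitary(r)
# elements of the maximal torus T = diagonal unitaries in U(r)
ts = [np.diag(np.exp(1j * rng.uniform(0, 2*np.pi, r))) for _ in range(n)]
# the conjugation map: (g, t_1, ..., t_n) -> (g t_1 g^{-1}, ..., g t_n g^{-1})
tuple_in_hom = [g @ t @ g.conj().T for t in ts]

# the image is a commuting n-tuple, i.e. a point of Hom(Z^n, U(r)):
for a in tuple_in_hom:
    for b in tuple_in_hom:
        assert np.allclose(a @ b, b @ a)
```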
\begin{thm}[Baird] \label{Baird}
The map $\phi$ induces an isomorphism
$$H_* (G/T \times_{W} T^n; \mathbb{Q}) \srm{\cong} H_*({\rm Hom}({\mathbb Z}^n,G)_1; \mathbb{Q}),$$
and torsion in
$H_*({\rm Hom}({\mathbb Z}^n,G)_1; {\mathbb Z})$ has order dividing $|W|$.
\end{thm}
Baird stated his result in cohomology, but by the Universal Coefficient Theorem, the homological and cohomological versions with rational coefficients are equivalent.
One can use the map (\ref{phi}) to give an analogous model for the identity component ${\rm Rep}({\mathbb Z}^n,G)_1$. The maps
\begin{equation}
\label{in} T^n = {\rm Rep}(\mathbb{Z}^n, T) \xrightarrow{i=i_n} {\rm Rep}({\mathbb Z}^n,G)_1
\end{equation}
induced by the inclusion $T\hookrightarrow G$ are invariant with respect to the diagonal conjugation action of $W$ on $T^n$, and hence induce maps
\begin{equation}
\label{j} T^n/W \to {\rm Rep}({\mathbb Z}^n,G)_1.
\end{equation}
\begin{thm}\label{Stafa}
The map (\ref{j}) is a homeomorphism.
Consequently,
we have an isomorphism
$$H_* (T^n/W; \mathbb{Q}) \srm{\cong} H_* ({\rm Rep}({\mathbb Z}^n,G)_1; \mathbb{Q}).$$
\end{thm}
This was first observed by Baird~\cite{baird2007cohomology}, and a proof is given in
\cite[Theorem 5.1]{stafa2017poincare}.
\begin{rmk}\label{rmk: phi natural} The isomorphisms in Theorems~\ref{Baird} and~\ref{Stafa} are natural in the following senses. Let $f\colon\thinspace G\to G'$ be a map of Lie groups and assume there exist maximal tori $T\leqslant G$, $T'\leqslant G'$ such that $f(T) \subset T'$ and $f(NT) \subset NT'$. Then we have a commutative diagram
\begin{center}
\begin{tikzcd}
G/T \times_W T^n \arrow{r}{\phi} \arrow{d} &
{\rm Hom}({\mathbb Z}^n, G)_1 \arrow{d}
\\
G'/T' \times_{W'} (T')^n\arrow{r}{\phi} &
{\rm Hom}({\mathbb Z}^n, G')_1,
\\
\end{tikzcd}
\end{center}
where $W =NT/T$ and $W' = NT'/T'$ are the Weyl groups, and the vertical maps are induced by $f$. Similarly, we have
a commutative diagram
\begin{center}
\begin{tikzcd}
T^n/W \arrow{r}{\cong} \arrow{d} &
{\rm Rep}({\mathbb Z}^n, G)_1 \arrow{d}
\\
(T')^n/W' \arrow{r}{\cong} &
{\rm Rep}({\mathbb Z}^n, G')_1,
\\
\end{tikzcd}
\end{center}
where the horizontal maps are the homeomorphisms (\ref{j}), and again the vertical maps are induced by $f$.
\end{rmk}
\section{Homology of finite quotients}
It is well-known that (under suitable conditions) if $H$ is a finite group acting on a space $X$, then there is an isomorphism
\begin{equation}\label{eqn: fin-quot} H^*(X/H; \mathbb{Q}) \cong H^*(X; \mathbb{Q})^H,\end{equation}
where the right-hand side denotes the subring of $H$--invariant elements.
This applies in particular to Theorems~\ref{Baird} and~\ref{Stafa}.
Here we explain a homological version of (\ref{eqn: fin-quot}), and describe how these results interact with equivariant maps.
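A minimal linear-algebra illustration of (\ref{eqn: fin-quot}) (our example): take $X = T^2$ with $H = {\mathbb Z}/2$ acting by inversion, so that $X/H$ is topologically a $2$-sphere. The induced action on $H^*(T^2;\mathbb{Q}) = \Lambda(a,b)$ sends $a \mapsto -a$ and $b \mapsto -b$, and the invariant Betti numbers can be read off from the rank of the averaging projector $\frac{1}{|H|}\sum_{h \in H} h$:

```python
import numpy as np

# H^*(T^2; Q) has basis 1 | a, b | a^b in degrees 0, 1, 2.
# Inversion on T^2 acts by a -> -a, b -> -b, hence by
# (-1)(-1) = +1 on the degree-2 class a^b.
action = {0: np.array([[1.]]),
          1: -np.eye(2),
          2: np.array([[1.]])}

# dim of invariants = trace of the averaging projector (1/|H|) sum_h h
inv_dims = []
for k in sorted(action):
    proj = (np.eye(len(action[k])) + action[k]) / 2  # H = {id, involution}
    inv_dims.append(int(round(np.trace(proj))))

# Invariant Betti numbers (1, 0, 1) agree with those of S^2, matching
# H^*(X/H; Q) = H^*(X; Q)^H for the quotient T^2/(t ~ t^{-1}).
print(inv_dims)  # [1, 0, 1]
```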
Recall that a space $X$ is semi-locally contractible if every open set $U\subset X$ has a covering by open sets $U_i\subset U$ for which the inclusion maps $U_i\hookrightarrow U$ are null-homotopic.
\begin{defn} We say that a space $X$ is \emph{good} if it is homotopy equivalent to a semi-locally contractible space.
We say that an action of a finite group
$H$ on a space $X$ is \emph{good} if both $X$ and $X/H$ are good.
\end{defn}
Note that every CW complex is locally contractible and hence also semi-locally contractible. In all of the situations we encounter in this paper, the actions will be good because $X$ and $X/H$ will have the homotopy types of CW complexes. This will often be proven using the fact that if $n\mapsto X_n$ is a proper simplicial space in which each $X_n$ has the homotopy type of a CW complex, then so does the geometric realization $|X_\bullet|$ (see May~\cite[Appendix]{May-EGP}).
\begin{prop}\label{coh-quot}
Consider a good action of a finite group $H$ on a space $X$, and let $\k$ be a field whose characteristic does not divide $|H|$. Let $\pi\colon\thinspace X\to X/H$ denote the quotient map. Then $\pi^* \colon\thinspace H^*(X/H; \k) \to H^*(X; \k)$ has image contained in the subspace of invariants $H^*(X; \k)^H$, and in fact $\pi^*$ induces an isomorphism
$$H^*(X/H; \k) \srm{\cong} H^*(X; \k)^H.$$
Similarly, the map $\pi_* \colon\thinspace H_* (X; \k) \to H_* (X/H; \k)$ restricts to an isomorphism
$$H_* (X; \k)^H \srm{\cong} H_* (X/H; \k).$$
\end{prop}
\begin{proof}
Bredon~\cite[Theorem 19.2]{bredon2012sheaf} proves a version of this result for sheaf cohomology, with coefficients in the constant sheaf $\k$, for arbitrary actions of finite groups. For semi-locally contractible spaces, the sheaf cohomology and the singular cohomology are naturally isomorphic by Sella~\cite{Sella}, and this fact also holds for spaces homotopy equivalent to semi-locally contractible spaces because both sheaf cohomology and singular cohomology are homotopy invariant (for sheaf cohomology, see~\cite[Corollary II.11.13]{bredon2012sheaf}).
Hence for good actions, the result for sheaf cohomology is equivalent to the result for singular cohomology.\footnote{For our purposes, the more classical fact that sheaf cohomology agrees with singular cohomology for locally contractible, paracompact Hausdorff spaces is sufficient, since in all the situations where we apply this result the spaces in question have the homotopy types of CW complexes.}
We now deduce the homological case from the cohomological case. By the Universal Coefficient Theorem for cohomology, there is a commutative diagram of the form
\begin{center}
\begin{tikzcd}
H^*(X/H) \arrow{r}{\cong} \arrow{d}{\pi^*} &
{\rm Hom}(H_*(X/H), \k) \arrow{d}{(\pi_*)^*}
\\
H^*(X)^H \arrow{r}{\cong} &
{\rm Hom}(H_*(X), \k)^H
\\
\end{tikzcd}
\end{center}
where ${\rm Hom}( - , \k)$ denotes the $\k$--linear dual.
Note here that naturality of the isomorphism
$$H^*(X) \srm{\cong} {\rm Hom}(H_* (X), \k)$$
in the Universal Coefficient Theorem implies that this isomorphism is equivariant with respect to the action of $H$, and hence restricts to an isomorphism between the subspaces of $H$--fixed points. Since the map $\pi^*$ is an isomorphism, we conclude that
$$(\pi_*)^*\colon\thinspace {\rm Hom}(H_*(X/H), \k) \to {\rm Hom}(H_*(X), \k)^H $$
is an isomorphism as well.
We need the following lemma. To simplify notation we set $V^* := {\rm Hom}(V, \k)$.
\begin{lem} Let $H$ be a finite group, and $\k$ a field of characteristic not dividing $|H|$. Let $V$ and $W$ be representations of $H$ over $\k$, and let
$p\colon\thinspace V\to W$ be an $H$--equivariant map. Then the induced map $p^*\colon\thinspace W^* \to V^*$ is $H$--equivariant, where $H$ acts on $V^*$ by $(h\cdot l) (v) := l(h^{-1} v)$.
By equivariance, $p$ and $p^*$ restrict to maps $p^H \colon\thinspace V^H\to W^H$ and $(p^*)^H \colon\thinspace (W^*)^H \to (V^*)^H$.
The map $p^H$ is injective if and only if $(p^*)^H$ is surjective, and $p^H$ is surjective if and only if $(p^*)^H$ is injective. In particular, $p^H$ is an isomorphism if and only if $(p^*)^H$ is an isomorphism.
\end{lem}
\begin{proof} It suffices to show that the natural map $\phi\colon\thinspace (U^*)^H \to (U^H)^*$ sending an invariant linear functional $l$ to its restriction $l|_{U^H}$ is an isomorphism, because then $(p^*)^H$ and $(p^H)^*$ are naturally isomorphic, and in general a linear map is injective (respectively, surjective) if and only if its dual is surjective (respectively, injective).
The map $\phi$ is injective, since if
$l \in (U^*)^H$ is non-zero, then there exists $x\in U$ such that
$l(x)\neq 0$, and then linearity and $H$--invariance of $l$ give
$$l\left(\sum_{h\in H} h\cdot x\right) = \sum_{h\in H} l(h\cdot x) = |H| l(x) \neq 0$$
(since the characteristic of $\k$ does not divide $|H|$).
To prove surjectivity of $\phi$, consider a linear functional $l\colon\thinspace U^H \to \k$. Choose a splitting $q\colon\thinspace U\to U^H$ of the inclusion $U^H\hookrightarrow U$ and set
$$\widetilde{l} := \dfrac{1}{|H|} \sum_{h\in H} h\cdot (l\circ q).$$
Then $\widetilde{l}$ is an $H$--invariant extension of $l$, so $\phi(\,\widetilde{l}\,) = l$.
\end{proof}
We now complete the proof of Proposition~\ref{coh-quot}.
The fact that $\pi_*$ restricts to an isomorphism $H_*(X)^H \to H_* (X/H)$ follows by applying the Lemma to the case $V = H_* (X)$, $W = H_* (X/H)$ (with trivial $H$--action) and $p = \pi_*$.
\end{proof}
In order to describe how Proposition~\ref{coh-quot} interacts with equivariant maps of spaces, we will need the following construction.
\begin{defn}\label{alpha}
Let $H$ be a finite group, and let $V$ be a $\k[H]$--module, with $\k$ a field of characteristic not dividing $|H|$. The averaging map
$$\alpha\colon\thinspace V\to V^H$$
is the $\k$--linear map
defined by
$$\alpha (v) = \dfrac{1}{|H|} \sum_{h\in H} h\cdot v.$$
To simplify notation, we set $\ol{v} := \alpha(v)$.
\end{defn}
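For example, if $H = {\mathbb Z}/2$ acts on $V = \k^2$ by swapping the two coordinates, then
$$\alpha(v_1, v_2) = \left(\tfrac{1}{2}(v_1+v_2), \tfrac{1}{2}(v_1+v_2)\right),$$
so the image of $\alpha$ is the diagonal subspace of $\k^2$, which is exactly the subspace of invariants $V^H$.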
We now list some basic properties of the averaging map.
\begin{lem}\label{avg-lem} Let $\alpha\colon\thinspace V\to V^H$ be the averaging map of the $\k[H]$--module $V$ $($where $\k$ is a field of characteristic not dividing $|H|$$)$. Then
\begin{enumerate}
\item $\alpha|_{V^H}$ is the identity $($so $\alpha$ is idempotent with image $V^H$$)$;
\item $\alpha(h\cdot v) = \alpha (v)$ for all $h \in H$, $v\in V$;
\item $\alpha$ is the orthogonal projection of $V$ onto $V^H$ with respect to any $H$--invariant inner product $($that is, $\langle v - \alpha (v), \alpha (v)\rangle = 0$ if $\langle \,, \, \rangle$ is $H$--invariant$)$.
\end{enumerate}
\end{lem}
\begin{prop}\label{coh-quot2}
Let $H_1$ and $H_2$ be finite groups acting on spaces $X_1$ and $X_2$, and assume the actions are good. Let $f\colon\thinspace X_1\to X_2$ be a map that is equivariant with respect to a homomorphism $\phi\colon\thinspace H_1 \to H_2$ $($meaning that $f(h\cdot x) = \phi(h)\cdot f(x)$ for all $h\in H_1$, $x\in X_1$$)$. Let $\ol{f} \colon\thinspace X_1/H_1\to X_2/H_2$ be the map induced by $f$.
Let $\k$ be a field of characteristic not dividing $|H_1|$ or $|H_2|$.
Then under the isomorphisms in Proposition~\ref{coh-quot}, the map
$$\ol{f}^*\colon\thinspace H^*(X_2/H_2; \k) \to H^*(X_1/H_1; \k)$$
corresponds to
$$f^*\colon\thinspace H^*(X_2; \k)^{H_2} \to H^*(X_1; \k)^{H_1}$$
$($in particular, $f^*$ maps $H_2$--invariant classes to $H_1$--invariant classes$)$, while
the map
$$\ol{f}_*\colon\thinspace H_*(X_1/H_1; \k) \to H_*(X_2/H_2; \k)$$
corresponds to
$$\ol{ f_*} = \alpha\circ f_* \colon\thinspace H_*(X_1; \k)^{H_1} \to H_*(X_2; \k)^{H_2}.$$
\end{prop}
\begin{proof} Let $\pi_i \colon\thinspace X_i\to X_i/H_i$ be the quotient map. First we consider $\ol{f}^*$.
Say $x_2\in H^* (X_2; \k)^{H_2}$. By Proposition~\ref{coh-quot}, there is a unique class $\ol{x}_2\in H^*(X_2/H_2; \k)$ satisfying $\pi_2^* \ol{x}_2 = x_2$. Then we have
$$\pi_1^* \ol{f}^* \ol{x}_2 = f^* \pi_2^* \ol{x}_2 = f^* x_2,$$
as desired.
Next, given $x_1 \in H_*(X_1)^{H_1}$, we want to show that $ \ol{f}_* (\pi_1)_* x_1 =(\pi_2)_* \alpha (f_* x_1)$. We have
\begin{align*}(\pi_2)_* \alpha (f_* x_1) & = \dfrac{1}{|H_2|} \sum_{h\in H_2} (\pi_2)_* h_* f_* x_1
= \dfrac{1}{|H_2|} \sum_{h\in H_2} (\pi_2)_* f_* x_1\\
& = (\pi_2)_* f_* x_1 = \ol{f}_* (\pi_1)_* x_1
\end{align*}
as desired.
\end{proof}
\section{${\rm FI}_W$--modules and ${\rm FI}_W\#$--modules}\label{FIW-modules}
Here we provide a quick introduction to the theory of ${\rm FI}_W$--modules, as developed by Church, Ellenberg, and Farb \cite{church2015fi} in type A, and later extended by J. Wilson \cite{wilson2014fiw}.
More details can be found in these sources.
Let $W_n$ denote the $n^{\textrm{th}}$ Weyl group of classical type A, B, C, or D. These are the
symmetric group $S_n$ in type A, the signed permutation group $B_n = {\mathbb Z}_2 \wr S_n$
in both types B and C, and the even signed permutation group $D_n \leqslant B_n$ in type D.
In this section, we write ${\mathbb Z}_2$ as the group $\{-, +\}$ with $+$ as identity element, so that elements of ${\mathbb Z}_2 \wr S_n$ are pairs consisting of a permutation and a list of $n$ signs; note that ${\mathbb Z}_2 \wr S_n$ is in bijection with the set of permutations $\sigma$ of the $2n$--element set $\{\pm 1, \ldots, \pm n\}$ satisfying $\sigma (\{-t, t\}) = \{\sigma(t), -\sigma(t)\}$.
The even signed permutation group $D_n$ can be seen
as the index 2 subgroup of $B_n$ obtained as the kernel of the projection
${\mathbb Z}_2 \wr S_n \to {\mathbb Z}/2{\mathbb Z}$ counting the number of minus signs modulo 2.
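For example, the element of $B_2$ fixing $1$ and sending $2\mapsto -2$ reverses one sign and hence does not lie in $D_2$, while the element sending $1\mapsto -1$ and $2\mapsto -2$ reverses two signs and does lie in $D_2$.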
We have standard inclusions $j_{n} \colon\thinspace W_n \hookrightarrow W_{n+1}$ sending a signed permutation $\sigma$ of ${\rm \bf n}$ to the signed permutation of ${\bf n+1}$ that agrees with $\sigma$ on ${\rm \bf n}\subset {\bf n+1}$ and is the identity on $\{\pm (n+1)\}$.
As we saw in Section~\ref{Hom}, the homology of spaces of commuting elements can be described
using representations of Weyl groups and this perspective will be used throughout the paper.
\subsection{${\rm FI}_W$--modules} For $W =$ A, B, C, or D,
the \textit{category ${\rm FI}_W$} is the category whose objects are the sets ${\rm \bf n}=\{\pm 1,\dots,\pm n\}$
and ${\rm \bf 0}=\emptyset$,
indexed by the natural numbers, and whose morphisms are the injections
$\phi: {\rm \bf m} \to {\rm \bf n}$ such that $\phi(-t)=-\phi(t)$ for all $t \in {\rm \bf m}$ (here we use the convention that $-(-i) = i$), together with an extra condition depending on the type (A, B, C, or D).
Namely, the category ${\rm FI}_D$ is the subcategory defined by the additional condition that
an isomorphism must reverse an even number of signs (that is, $|\{i \in \{1, \ldots, n\} \,:\, \phi(i) = -i\}|$ must be even),
${\rm FI}_A$ is the subcategory consisting of morphisms that
preserve all signs, and ${\rm FI}_{B} = {\rm FI}_{C}$ is the category with no further restriction on morphisms (following Wilson, we will use the notation ${\rm FI}_{BC} := {\rm FI}_{B} = {\rm FI}_{C}$).
This implies that ${\rm Aut}({\rm \bf n})= S_n$ in type A,
${\rm Aut}({\rm \bf n})= {\mathbb Z}_2 \wr S_n = B_n$ in types B and C, and ${\rm Aut}({\rm \bf n})= D_n$ in type D.
Note that
there are inclusions of categories ${\rm FI}_A \hookrightarrow {\rm FI}_D \hookrightarrow {\rm FI}_{BC}$.
The notation ${\rm FI}_W$ will stand for one of the above three categories.
\begin{rmk}\label{trans} Elementary arguments show that each of the categories ${\rm FI}_W$ has the property that ${\rm Aut}({\rm \bf n})$ acts \emph{transitively} on the set of morphisms $\textrm{Mor}({\rm \bf m}, {\rm \bf n})$ (via composition). In particular, every morphism ${\rm \bf m}\to {\rm \bf n}$ can be written in the form $\sigma \circ i_{mn}$, where $i_{mn} \colon\thinspace {\rm \bf m} \to {\rm \bf n}$ is the (unique) morphism satisfying $i_{mn}(j) = j$ for $1\leqslant j\leqslant m$ and $\sigma$ is a signed permutation lying in ${\rm Aut}({\rm \bf n})$. Note here that $\textrm{Mor}({\rm \bf m}, {\rm \bf n})$ is empty unless $m \leqslant n$.
\end{rmk}
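For instance, the ${\rm FI}_{BC}$--morphism $\phi\colon\thinspace {\rm \bf 1}\to {\rm \bf 2}$ with $\phi(1) = -2$ factors as $\sigma\circ i_{12}$, where $\sigma\in B_2$ is the signed permutation sending $1\mapsto -2$ and $2\mapsto 1$; note that $\sigma$ is not unique, since any signed permutation in $B_2$ sending $1\mapsto -2$ gives the same composite.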
\begin{defn}[${\rm FI}_W$--modules]\label{defn: FI_W module}
An \textit{${\rm FI}_W$--module} $V$ over a ring $R$ is a functor
$$V\colon\thinspace {\rm FI}_W \to {{\rm \bf Mod}_R }$$
to the category of $R$--modules. Given a morphism $\phi\colon\thinspace {\rm \bf n}\to {\rm \bf m}$ in ${\rm FI}_W$, we will sometimes abbreviate $V(\phi)$ to $\phi_*$ when $V$ is clear from context.
A sub--${\rm FI}_W$--module of $V$ is an ${\rm FI}_W$--module $V'$ with $V'({\rm \bf n}) \leqslant V({\rm \bf n})$ a submodule for each $n$, and $V'(\phi) = V(\phi)|_{V'({\rm \bf n})}$ for each morphism $\phi\colon\thinspace {\rm \bf n}\to {\rm \bf m}$ in ${\rm FI}_W$.
An \textit{${\rm FI}_W$--space} $X$ is a functor
$$X\colon\thinspace {\rm FI}_W \to \textrm{\rm Top}.$$
\end{defn}
Note that the homology of an ${\rm FI}_W$--space with coefficients in a ring $R$ is an ${\rm FI}_W$--module over $R$.
\begin{rmk} \label{FI-rmk} A \emph{consistent sequence} of $W_n$--representations is a sequence of $W_n$--representations $V_n$ together with structure maps $f_n \colon\thinspace V_n\to V_{n+1}$ that are equivariant with respect to the inclusions $j_n\colon\thinspace W_n \hookrightarrow W_{n+1}$.
Remark~\ref{trans} implies that each ${\rm FI}_W$--module $V$ is determined by its underlying consistent sequence of
$W_n$--representations $V_n:=V({\rm \bf n})$, with structure maps
$$(i_n)_* = V(i_{n})\colon\thinspace V_n \to V_{n+1},$$
where $i_n \colon\thinspace {\rm \bf n} \to ({\bf n+1})$ satisfies $i_n (j) = j$ for $1\leqslant j\leqslant n$.
Similarly, an ${\rm FI}_W$--space $X$ is determined by its sequence of $W_n$--spaces $X_n = X({\rm \bf n})$, together with the structure maps $(i_n)_*$.
\end{rmk}
Not all consistent sequences of $W_n$--representations (or $W_n$--spaces) extend to ${\rm FI}_W$--modules (or ${\rm FI}_W$--spaces). However, Wilson~\cite[Lemma 3.4]{wilson2014fiw} provides a characterization of those consistent sequences that \emph{do} extend. Wilson's proof immediately extends to ${\rm FI}_W$--objects in any category, and we will apply the result to
${\rm FI}_W$--spaces in Section~\ref{Lie}. To state the result in full generality, we define a consistent sequence
$C$ of $W_n$--representations in a category $\mathcal{C}$ to be a sequence of objects $C_n$ in $\mathcal{C}$ together with homomorphisms $W_n\to \textrm{Aut}_\mathcal{C} (C_n)$ (denoted $\tau\mapsto \tau_*$) and morphisms $\phi_{n}\colon\thinspace C_n\to C_{n+1}$ satisfying the equivariance condition $\phi_{n} \circ \tau_* = (j_{n} (\tau))_* \circ \phi_n$. Note that every functor ${\rm FI}_W\to \mathcal{C}$ has an underlying consistent sequence of $W_n$--representations.
\begin{lem}[Church--Ellenberg--Farb, Wilson] \label{lem: extension}
Let $C$ be a consistent sequence of $W_n$--representations in the category $\mathcal{C}$. Then $C$ extends to an ${\rm FI}_W$--object $\widetilde{C} \colon\thinspace {\rm FI}_W\to \mathcal{C}$ $($with underlying consistent sequence $C$$)$ if and only if $\tau_* \circ \phi_{n-1} \circ \cdots \circ \phi_m = \phi_{n-1} \circ \cdots \circ \phi_m$ for all $\tau\in W_n$ satisfying $\tau (i) = i$ for $i\leqslant m$.
\end{lem}
As discussed in Section~\ref{Lie}, our key examples of ${\rm FI}_W$--modules will be produced from the classical sequences of Lie groups using Lemma~\ref{lem: extension}.
\begin{defn}[Finite Generation]\label{defn: finite generation}
We say that an ${\rm FI}_W$--module $V$ is \emph{generated in stage $\leqslant n$} (or more briefly, stage $n$)
if for each $m\geqslant n$, the union of the images of $V_n$ under ${\rm FI}_W$--morphisms ${\rm \bf n}\to {\rm \bf m}$ spans $V_m$. (We note that $V$ is generated in stage $\leqslant n$ if and only if $V_n$ is not contained in any proper sub--${\rm FI}_W$--module of $V$.)
We say that $V$ is finitely generated if it is generated in stage $n$ for some $n$ and $V_n$ is finite-dimensional.
\end{defn}
We note that the term \emph{degree} is often used in place of \emph{stage} in the previous definition. We use the term stage to avoid confusion with (co)homological degree.
\begin{rmk}\label{fg-rmk}
All of the ${\rm FI}_W$--modules $V$ considered in this article will be finite-dimensional, in the sense that each $V_n$ is finitely generated over the coefficient ring $R$. In this setting, if $V$ is generated in stage $n$, then it is automatically finitely generated as well.
\end{rmk}
We will use the following result of Wilson repeatedly.
\begin{prop}[{\cite[Proposition 5.2]{wilson2014fiw}}]\label{prop: tensor}
If $V$ and $W$ are finite-dimensional ${\rm FI}_W$--modules that are generated in stages $m$ and $n$,
respectively, then $V \otimes W$ is generated in stage $($at most$)$ $m+n$.
\end{prop}
\subsection{Representation stability}
Now we give the precise definition of representation stability and uniform representation
stability. These notions were first defined by Church and Farb \cite{church2013representation}
for a sequence of $G_n$--representations $V_n$ (for some sequence of groups $G_n$), mainly for the purpose of studying
stability properties of representations of symmetric groups.
Afterwards we will explain how these notions relate to homological stability.
A partition of a non-negative
integer $n$ is a sequence of non-negative integers
$$\lambda:=(\lambda_1\geqslant \lambda_2\geqslant \cdots \geqslant\lambda_l \geqslant 0)$$
with $\sum \lambda_i =n$. We write $\lambda \vdash n$ or $|\lambda|= n$ to indicate that $\lambda$ is a partition of $n$. Note that for $n=0$, we allow the empty partition.
There is a well-known bijection between partitions $\lambda \vdash n$ and irreducible representations of the symmetric group $S_n$ over fields of characteristic zero; we denote the representation corresponding to the partition
$\lambda \vdash n$ by $V_\lambda.$ For instance if
$\lambda=(n)$ then $V_\lambda$ is the \textit{trivial} one-dimensional representation of $S_n.$
If $\lambda:=(\lambda_1,\dots,\lambda_t) \vdash k$ is a partition of $k$,
then for any $n \geqslant k + \lambda_1$ we may define a partition
$
\lambda[n]:=(n-k,\lambda_1,\dots,\lambda_t).
$
Now we can define the irreducible $S_n$--representation $V(\lambda)_n$ as
$$
V(\lambda)_n := V_{\lambda[n]}.
$$
Note that for each $\lambda$, $V(\lambda)_n$ is defined only for sufficiently large $n$ (namely $n\geqslant |\lambda|+\lambda_1$).
This process associates an infinite sequence of irreducible representations of symmetric groups to each partition $\lambda$ of a non-negative integer. In particular, for the empty partition $\lambda = \emptyset$, this sequence is $n\mapsto V(\emptyset)_n = V_{(n)}$, the sequence of trivial one-dimensional representations of $S_n$.
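For example, for $\lambda = (2,1)\vdash 3$ we have $\lambda[n] = (n-3,2,1)$, which is a partition of $n$ precisely when $n-3\geqslant 2$, that is, when $n\geqslant 5 = |\lambda| + \lambda_1$; thus $V(\lambda)_5 = V_{(2,2,1)}$, $V(\lambda)_6 = V_{(3,2,1)}$, and so on.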
Over fields of characteristic zero, the hyperoctahedral group $B_n = {\mathbb Z}_2 \wr S_n$ has irreducible representations indexed
by double partitions $\lambda = (\lambda^+,\lambda^-)$ of $n$,
where $\lambda^+ \vdash m$ and $\lambda^- \vdash n-m$ for some $m\leqslant n$. As before, we write
$V_\lambda$ for the representation associated to $\lambda$.
In general, for a double partition $\lambda=(\lambda^+,\lambda^-)$ of $k$ and for
$n \geqslant k + \lambda_1^+$ we define the partition
$\lambda[n]:=((n-k,\lambda^+),\lambda^-)$. Now we can define the irreducible
$B_n$--representation
$$
V(\lambda)_n := V_{\lambda[n]}.
$$
Once again, when $\lambda^+ = \lambda^- = \emptyset$, we obtain the sequence of trivial one-dimensional representations of $B_n$.
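For example, for the double partition $\lambda = ((1),(1))$ of $2$, we have $\lambda[n] = ((n-2,1),(1))$, and the $B_n$--representation $V(\lambda)_n$ is defined for all $n\geqslant 3$.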
Finally, we consider the case of the even signed permutation group $D_n\leqslant B_n$. For each
double partition $\lambda$ as above, we set
$$V(\lambda)_n := \textrm{Res}^{B_n}_{D_n} V_{\lambda[n]}.$$
Wilson \cite[Proposition 3.30]{wilson2014fiw} showed that if $V$ is a
finitely generated ${\rm FI}_D$--module, then $V_n$ is the restriction of a
$B_n$--representation for sufficiently large $n$, and hence (over a field of characteristic zero) admits a (unique) decomposition as a sum of representations of the form $V(\lambda)_n$. (It should be noted that $V(\lambda)_n$ is not always irreducible as a $D_n$--module.) Once again, the empty double partition gives rise to the sequence of 1--dimensional trivial representations of $D_n$.
\begin{defn}[Representation stability] \label{def: rep stability}
Let $V$ be an ${\rm FI}_W$--module over a field $\k$ of characteristic zero.
We say that $V$ is
\textit{representation stable} if it satisfies the following conditions:
\begin{enumerate}
\item[(I)] {\it Injectivity}: The maps $(i_n)_*: V_n \to V_{n+1}$ are injective,
for all sufficiently large $n$;
\item[(II)] {\it Surjectivity}: The image $(i_n)_*(V_n)$ generates $V_{n+1}$ as a $\k [W_{n+1}]$--module,
for all sufficiently large $n$;
\item[(III)] {\it Multiplicities}: For all sufficiently large $n$, there exists an isomorphism of $W_n$--representations
$$
V_n \cong \bigoplus_{\lambda} c_{\lambda,n} V(\lambda)_n,
$$
and for each $\lambda$, the multiplicity $c_{\lambda,n}$ of $V(\lambda)_n$ is eventually independent
of $n$.
\end{enumerate}
\end{defn}
\begin{rmk} The decompositions in (III) above are unique if they exist, and so the multiplicities $c_{\lambda, n}$ are well-defined. In types A, B, and C, such a decomposition always exists, since all irreducible representations of $W_n$ are of the form $V(\lambda)_n$ in these cases (over fields of characteristic zero).
\end{rmk}
\begin{defn}[Uniform Representation Stability]
Let $V=\{V_n,\phi_n\}$ be a representation stable ${\rm FI}_W$--module with
$c_{\lambda,n}$ constant for all $n\geqslant N_\lambda.$
Then $V$ is called \textit{uniformly representation stable}
if $N=N_\lambda$ can be chosen independently of $\lambda$.
In this case, we say that $V$ has \emph{stable range} $n\geqslant N$.
\end{defn}
Wilson~\cite{wilson2014fiw} shows that an ${\rm FI}_W$--module $V$ is
uniformly representation stable if and only if it is finitely generated. Here we are mainly
interested in the ``if'' part of the statement.
\begin{thm}[{{\cite[Theorem 4.27, 4.28]{wilson2014fiw}}}]\label{thm: fin. gen. implies uniform rep. stability}
Let $\k$ be a field of characteristic 0 and let $V$ be a finitely generated
${\rm FI}_W$--module over $\k$. Then $V$ is uniformly representation stable.
\end{thm}
Given an ${\rm FI}_B$--module $V$, composing $V\colon\thinspace {\rm FI}_B \to {{\rm \bf Mod}_R }$ with the inclusion
${\rm FI}_D\hookrightarrow {\rm FI}_B$ yields the restricted ${\rm FI}_D$--module $V|_D$.
The following corollary is immediate from the definition of $V(\lambda)_n$ in type D.
\begin{cor}\label{stable-range-D}
Let $V$ be an ${\rm FI}_B$--module that is uniformly representation stable with stable range $n\geqslant N$. Let $V|_D$ be the restriction of $V$ to an ${\rm FI}_D$--module $($that is, the composition of the functor $V$ with the inclusion ${\rm FI}_D\hookrightarrow {\rm FI}_B$$)$. Then $V|_D$ is also uniformly representation stable with stable range $n\geqslant N$.
\end{cor}
\begin{rmk}\label{rmk: iso-comp} Since the trivial representation of $W_n$ corresponds to $V(\emptyset)_n$,
Theorem~\ref{thm: fin. gen. implies uniform rep. stability} implies that in a finitely generated ${\rm FI}_W$--module, the dimensions of the isotypical components of the trivial representation eventually stabilize. We will deduce a stronger version of this statement in Proposition~\ref{avg}.
\end{rmk}
\subsection{${\rm FI}_W \#$--modules}
We now introduce extensions of the categories ${\rm FI}_A = {\rm FI}$ and ${\rm FI}_{BC}$, due to Church--Ellenberg--Farb~\cite{church2015fi} and Wilson~\cite{wilson2014fiw}, respectively. These categories can be used to establish bounds on the stable range of a finitely generated module.
Define ${\rm FI}_{BC} \#$ to be the category whose objects are the based sets ${\rm \bf n}_0 = {\rm \bf n}\cup \{0\}$, $n=1, 2, \ldots$
(with $0$ as the basepoint), and whose morphisms are based maps
$$
\phi\colon\thinspace {\rm \bf n}_0 \to {\rm \bf m}_0
$$
that are injective on ${\rm \bf n} \setminus \phi^{-1} (0)$ and satisfy $\phi(\{i, -i\}) = \{\phi(i), -\phi(i)\}$ (we set $-0 = 0$).
The category ${\rm FI}_{A}\#$ is the subcategory of ${\rm FI}_{BC} \#$ consisting of those morphisms that preserve signs (that is, $\phi\colon\thinspace {\rm \bf n}_0 \to {\rm \bf m}_0$ lies in ${\rm FI}_{A}\#$ if $\phi(i) \in \{0,1, \ldots, m\}$ for each $i\in\{1, \ldots, n\}$). This category is equivalent to the category ${\rm FI\#}$ introduced in~\cite{church2015fi}, whose objects are finite sets and whose morphisms $S\to T$ are \emph{partially defined} injections
$$S \supset A \stackrel[\cong]{\psi}{\longrightarrow} B \hookrightarrow T.$$
Note that each morphism $\phi\colon\thinspace {\rm \bf n}_0 \to {\rm \bf m}_0$ in ${\rm FI}_{BC}\#$ restricts to a partially defined injection $\{1, \ldots, n\}\supset A \stackrel[\cong]{\phi'}{\longrightarrow} B\subset \{1, \ldots, m\}$, where $A = \{i: \phi(i)\neq 0\}$. The equivalence of categories ${\rm FI}_A\#\to {\rm FI\#}$ carries ${\rm \bf n}_0$ to $\{1, \ldots, n\}$ and takes $\phi\colon\thinspace {\rm \bf n}_0 \to {\rm \bf m}_0$ to $\phi'$. (Composition in ${\rm FI}\#$ is defined so as to make this a functor.)
As observed by Wilson, there is no natural analogue of these extensions in type D, so we do not define ${\rm FI}_D \#$. As such, the symbol ${\rm FI}_W \#$ will refer to one of the above two categories.
There are canonical embeddings of categories ${\rm FI}_W \hookrightarrow {\rm FI}_W\#$, sending ${\rm \bf n}\mapsto {\rm \bf n}_0$ and sending a morphism $\phi\colon\thinspace {\rm \bf n}\to {\rm \bf m}$ to the unique base-point preserving extension $\phi_0 \colon\thinspace {\rm \bf n}_0 \to {\rm \bf m}_0$.
\begin{defn}[${\rm FI}_W \#$--modules]
An ${\rm FI}_W\#${--\it module} over a ring $R$ is a functor
$$V: {\rm FI}_W\# \to {{\rm \bf Mod}_R }$$
from ${\rm FI}_W\#$ to the category of $R$--modules.
\end{defn}
Note that each ${\rm FI}_W\#$--module has an underlying ${\rm FI}_W$--module, defined via restriction along the embedding ${\rm FI}_W \hookrightarrow {\rm FI}_W\#$.
The following result is due to Church--Ellenberg--Farb~\cite{church2015fi} in the case of ${\rm FI}_A\#$--modules, and due to Wilson~\cite{wilson2014fiw}
in the case of ${\rm FI}_{BC}\#$--modules.
\begin{thm}\label{sharp} Let $V$ be an ${\rm FI}_W\#$--module over a field of characteristic zero. If $V$ is finitely generated in stage $k$, then $V$ is uniformly representation stable with stable range
$n\geqslant 2k$.
\end{thm}
We are mainly interested in stability for subspaces of invariants, and in this case a better bound can be obtained.
First let us recall the branching rules for induced
representations of symmetric and hyperoctahedral groups (see \cite{geck2000characters}).
Given a representation $V$ of $S_n$ over a ring $R$, let $V\boxtimes R$ denote the external tensor product of $V$ with the trivial representation of $S_k$ (so $V\boxtimes R$ is a representation of $S_n\times S_k$). We will use similar notation for the hyperoctahedral groups $B_n$.
\begin{lem}\label{lem: branching rule for S_n}
Let $\k$ be a field of characteristic zero.
For each partition $\lambda$ of $n$,
the $S_{n+k}$--representation induced by $V_\lambda$ is
$$
{\rm Ind}_{S_n \times S_k}^{S_{n+k}} V_\lambda \boxtimes \k \cong \bigoplus_{\mu} V_\mu,
$$
where the direct sum is taken over those partitions $\mu$ of $n+k$ obtained from $\lambda$ by adding one box to each of $k$ different columns $($of the corresponding Young tableaux$)$.
\end{lem}
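For instance, taking $\lambda = (2)$ and $k=1$, the partitions of $3$ obtained from $(2)$ by adding a single box in one column are $(3)$ and $(2,1)$, so
$$
{\rm Ind}_{S_2 \times S_1}^{S_3} V_{(2)} \boxtimes \k \cong V_{(3)} \oplus V_{(2,1)};
$$
that is, the $3$--dimensional permutation representation of $S_3$ decomposes as the sum of the trivial representation and the standard representation.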
\begin{lem}\label{lem: branching rule for B_n} Let $\k$ be a field of characteristic zero.
For each double partition $\lambda=(\lambda^+,\lambda^-)$ of $n$,
the $B_{n+k}$--representation induced by $V_{(\lambda^+,\lambda^-)}$ is
$$
{\rm Ind}_{B_n \times B_k}^{B_{n+k}} V_{(\lambda^+,\lambda^-)} \boxtimes \k \cong
\bigoplus_{\mu^+} V_{(\mu^+,\lambda^-)},
$$
where the direct sum is taken over those partitions $\mu^+$ of $n+k$ obtained from $\lambda^+$ by adding one box to each of $k$ different columns $($of the corresponding Young tableaux$)$.
\end{lem}
We learned the following result from John Wiltshire-Gordon.
\begin{prop}\label{prop: stability isotypical component}
In types A, B, and C, if an ${\rm FI}_W$--module $V$ is generated in stage $d$, and $V$ extends to an ${\rm FI}_W \#$--module, then for $n\geqslant d$ we have
$$\dim (V_n^{W_n}) = \dim (V_d^{W_d}).$$
\end{prop}
\begin{proof}
A complete classification of ${\rm FI} \#$--modules is given in \cite[Theorem 4.1.5]{church2015fi},
whereas Wilson gives a classification of ${\rm FI}_B \#$--modules in \cite[Theorem 3.7]{Wilson-Math-Z}.
In particular, an ${\rm FI}_W\#$--module $V$ can
always
be written as
\begin{equation} \label{eqn: decomp} \displaystyle V \cong \bigoplus_{n \geqslant 0} M_W(U_n),
\end{equation}
where
$U_n$ is a representation of $W_n$, and the ${\rm FI}_W \#$--module $M_W (U_n)$ satisfies
$$M_W(U_n)_r = {\rm Ind}_{W_n \times W_{r-n}}^{W_{r}} U_n \boxtimes {\mathbb Q}$$
if $r\geqslant n$ and $M_W(U_n)_r = 0$ otherwise; moreover, $M_W (U_n)$ is generated in stage $n$.
The trivial representation corresponds to the partition $\lambda=(n)$ of $n$, whose Young diagram is simply $n$ boxes aligned horizontally.
Using the
branching rules above, one sees that if $U_n$ is an irreducible representation of
$W_n$ and
$${\rm Ind}_{W_n \times W_{k}}^{W_{n+k}} U_n \boxtimes {\mathbb Q}$$
contains a copy of the trivial representation of $W_{n+k}$, then $U_n$ must be trivial itself.
Next, observe that for any $W_n$--representations $A$ and $B$,
$${\rm Ind}_{W_n \times W_{k}}^{W_{n+k}} (A\oplus B) \boxtimes {\mathbb Q}
\cong \left({\rm Ind}_{W_n \times W_{k}}^{W_{n+k}} A \boxtimes {\mathbb Q}\right)
\oplus \left({\rm Ind}_{W_n \times W_{k}}^{W_{n+k}} B \boxtimes {\mathbb Q}\right).$$
Decomposing an arbitrary representation $U_n$ of $W_n$ into irreducibles, we now see that the dimension of
$$\left(M_W (U_n)_{n+k}\right)^{W_{n+k}} \cong \left({\rm Ind}_{W_n \times W_{k}}^{W_{n+k}} U_n \boxtimes {\mathbb Q}\right)^{W_{n+k}}$$
agrees with that of $U_n^{W_n} = \left( M_W (U_n)_n\right)^{W_n}$ (for $k\geqslant 0$). This establishes the proposition for modules of the form $M_W (U_n)$, and the general case
then follows from the decomposition (\ref{eqn: decomp}).
\end{proof}
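To illustrate Proposition~\ref{prop: stability isotypical component}, take $U_d$ to be the trivial representation of $W_d$. Then $M_W(U_d)_n = {\rm Ind}_{W_d\times W_{n-d}}^{W_n} {\mathbb Q} \boxtimes {\mathbb Q}$ is the permutation representation of $W_n$ on the cosets of $W_d\times W_{n-d}$, and Frobenius reciprocity gives
$$\dim \left( M_W(U_d)_n \right)^{W_n} = \dim {\rm Hom}_{W_d\times W_{n-d}} ({\mathbb Q}, {\mathbb Q}) = 1$$
for all $n\geqslant d$, as the proposition predicts.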
\subsection{Representation stability and homological stability}
\begin{defn} An ${\rm FI}_W\#$--space is a functor $X\colon\thinspace {\rm FI}_W\#\to \textrm{\rm Top}$.
\end{defn}
Recall from Remark~\ref{FI-rmk} that an ${\rm FI}_W$--space $X$ is determined by the sequence of $W_n$--spaces $X_n = X({\rm \bf n})$, together with the $j_n$--equivariant maps $(i_n)_*\colon\thinspace X_n\to X_{n+1}$ induced by the morphisms $i_n$. We will denote the induced maps $X_n/W_n \to X_{n+1}/W_{n+1}$ by $\ol{i}_n$.
The goal of this section is to establish the following result, which will be used to prove the homological stability results to follow.
\begin{thm}\label{quot-stab}
Let $n\mapsto X_n$ be a consistent sequence of good $W_n$--spaces
whose degree $k$ rational homology extends to a finitely generated
${\rm FI}_W$--module $H_k (X; \mathbb{Q})$. Then the sequence of quotient spaces
$$ \cdots \xrightarrow{\ol{i}_{n-1}} X_n/W_n \xrightarrow{\ol{i}_n} X_{n+1}/W_{n+1} \xrightarrow{\ol{i}_{n+1}} X_{n+2}/W_{n+2} \xrightarrow{\ol{i}_{n+2}} \cdots$$
is strongly rationally homologically stable in degree $k$.
If, moreover, the ${\rm FI}_W$--module $H_k (X; {\mathbb Q})$ is generated in stage $d$, then the maps
$(\ol{i}_n)_*$ are surjective for $n\geqslant d$.
In types A, B, and C, if the ${\rm FI}_W$--module $H_k (X; {\mathbb Q})$ is generated in stage $d$, and extends to an ${\rm FI}_W \#$--module, then the above sequence is strongly rationally homologically stable for $n\geqslant d$. In type $D$, if the ${\rm FI}_D$--module $H_k (X; {\mathbb Q})$ is generated in stage $d$, and is the restriction of an ${\rm FI}_{BC} \#$--module, then once again the above sequence is strongly rationally homologically stable for $n\geqslant d$.
\end{thm}
In order to prove Theorem~\ref{quot-stab}, we need a general observation regarding equivariant maps between representations. We will use the averaging operator $\alpha : V \to V^H$, denoted $\alpha(v) = \ol{v}$, from Definition~\ref{alpha}.
\begin{prop}\label{avg} Let $j\colon\thinspace H_1 \to H_2$ be a homomorphism between finite groups, and consider a $j$--equivariant map $\phi\colon\thinspace V_1 \to V_2$ between $\mathbb{Q} H_i$--modules $V_i$.
If $\phi (V_1)$ generates $V_{2}$ as a $\mathbb{Q} H_{2}$--module, then the averaging map
$$V_1^{H_1} \to V_{2}^{H_{2}}$$
defined by $v\mapsto \ol{\phi (v)}$
is \emph{surjective}.
Consequently, if $V$ is a finitely generated ${\rm FI}_W$--module over a field of characteristic zero, and $V$ is generated in stage $d$ and has stable range
$n\geqslant N$, then the averaging map $V_n^{W_n} \to V_{n+1}^{W_{n+1}}$ is surjective for $n\geqslant d$ and is an isomorphism for $n\geqslant N$.
Moreover, if $V$ is a finitely generated ${\rm FI}_W\#$--module over a field of characteristic zero, and $V$ is generated in stage $d$, then the averaging map $V_n^{W_n} \to V_{n+1}^{W_{n+1}}$ is an isomorphism for $n\geqslant d$.
\end{prop}
\begin{proof}
This follows from Wilson~\cite[Proposition 4.16]{wilson2014fiw}, which states that if an ${\rm FI}_W$--module $V$ is generated in stage $d$, then its \emph{surjectivity degree} (\cite[Definition 4.12]{wilson2014fiw}) is at most $d$. Taking $a=0$ in the definition of surjectivity degree, this says that the natural map
$$I_n\colon\thinspace (V_n)_{W_n} \to (V_{n+1})_{W_{n+1}}$$
between coinvariants (see~\cite[p. 288]{wilson2014fiw}) is surjective for $n\geqslant d$.
The composition $V^{W} \hookrightarrow V \twoheadrightarrow (V)_{W}$ is an isomorphism for every ${\mathbb Q}[W]$--module $V$, and under these isomorphisms the averaging map agrees with the map $I_n$.
\end{proof}
\begin{rmk} In types A, B, and C, Proposition~\ref{avg} admits an elementary proof.
A straightforward verification, using Lemma~\ref{avg-lem}, shows that
\begin{equation}\label{avg2}\ol{\phi (v)} = \ol{\phi (\ol{v})}.\end{equation}
By hypothesis, writing $\phi_n\colon\thinspace V_n\to V_{n+1}$ for the structure map of the sequence, for each $v\in V_{n+1}^{H_{n+1}}$ there exist $x_i \in V_n$ and $h_i \in H_{n+1}$ such that
$v = \sum_i h_i \cdot \phi_n (x_i)$.
Since $v = \ol{v}$, we find (using (\ref{avg2}))
$$v = \ol{\sum_i h_i \cdot \phi_n (x_i)} = \sum_i \ol{h_i \cdot \phi_n (x_i)} = \sum_i \ol{\phi_n (x_i)} = \sum_i \ol{\phi_n (\ol{x_i})} = \ol{\phi_n \left( \sum_i\ol{x_i}\right)}.$$
Since $\sum_i\ol{x_i}\in V_n^{H_n}$, this proves the surjectivity statement. In types A, B, and C, the consequences for finitely generated ${\rm FI}_W$-- and ${\rm FI}_W\#$--modules follow from Remark~\ref{rmk: iso-comp} and Proposition~\ref{prop: stability isotypical component}.
For ${\rm FI}_D$--modules
that are restrictions of ${\rm FI}_B$--modules,
we may instead apply Corollary~\ref{stable-range-D}.
\end{rmk}
\begin{proof}[Proof of Theorem~\ref{quot-stab}] By Proposition~\ref{coh-quot2}, the maps
$$H_k (X_n/W_n; \mathbb{Q}) \xrightarrow{(\ol{\phi}_n)_*} H_k (X_{n+1}/W_{n+1}; \mathbb{Q})$$
are isomorphic to the averaging maps
$$H_k (X_n; \mathbb{Q})^{W_n} \longrightarrow H_k (X_{n+1}; \mathbb{Q})^{W_{n+1}}.$$
If $H_k (X; \mathbb{Q})$ is generated in stage $d$ as an ${\rm FI}$--module, then
Proposition~\ref{avg} shows that this map is surjective for $n\geqslant d$.
Moreover,
Theorem~\ref{thm: fin. gen. implies uniform rep. stability} (together with the fact that $V(\emptyset)_n$ is the trivial representation for each $n$) shows that there exists $N$ such that the domain and range of this map have the same dimension (as $\mathbb{Q}$--vector spaces) for $n\geqslant N$; since the maps are surjective, they are therefore isomorphisms for $n\geqslant N$ (note that $N\geqslant d$).
If $H_k (X; \mathbb{Q})$ extends to an ${\rm FI}_W \#$--module, then by Proposition~\ref{prop:
stability isotypical component} we can take $N = d$.
\end{proof}
\section{Direct powers and symmetric products}\label{symm-sec}
In this section, we discuss some simple examples of representation stability and deduce a homological stability result for symmetric products. In subsequent sections,
the representation stability results proven here will be applied to spaces of commuting elements in Lie groups.
\subsection{Direct powers} We wish to describe an ${\rm FI\#}$--structure on the sequence of direct powers $X^n$ of a based space $(X, x_0)$. To do this, we need to introduce some notation. Given a partially defined function $S \supset A \srt{g} X$, where $S$ is a (finite) set, let $g_0 \colon\thinspace S\to X$ denote the extension of $g$ defined by setting $g_0 (s) = x_0$ for all $s\in S\setminus A$.
\begin{defn} Given a based space $(X, x_0)$, let $\P (X) = \P(X, x_0)$ denote the ${\rm FI\#}$--space defined by sending a finite set $S$ to the $|S|$--fold product
$$X^S = \mathrm{Map}(S, X)$$
and sending a partially defined injection $S\supset A \srt{\phi} B \subset T$ to the map $X^S \to X^T$ given by sending $f\colon\thinspace S\to X$ to $(f \circ \phi^{-1})_0$.
\end{defn}
It is an exercise to check that this defines a functor out of the category ${\rm FI\#}$.
The corresponding ${\rm FI}_A \#$--space takes $f\colon\thinspace {\rm \bf m}_0 \to {\rm \bf n}_0$ to the map
$f_*\colon\thinspace X^m\to X^n$ defined by
$$f_*(x_1, \ldots, x_m) = (y_1, \ldots, y_n),$$
where $y_i = x_{f^{-1} (i)}$ if $f^{-1} (i)$ exists, and $y_i = x_0$ otherwise.
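For example, if $f\colon\thinspace {\rm \bf 2}_0\to {\rm \bf 3}_0$ satisfies $f(1)=3$ and $f^{-1}(1) = f^{-1}(2) = \emptyset$, then
$$f_*(x_1, x_2) = (x_0, x_0, x_1),$$
since $3$ is the only element of ${\rm \bf 3}_0$ (other than $0$) with a preimage under $f$.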
Applying the functor $H_* ( - ; \mathbb{Q})$ or $H^* ( - ; \mathbb{Q})$ then yields an ${\rm FI\#}$--module (in the latter case, this relies on the canonical isomorphism between ${\rm FI\#}$ and ${\rm FI\#}^{\rm op}$; see~\cite[Section 4]{church2015fi}). Since an ${\rm FI\#}$--module restricts to an ${\rm FI}$--module, we see that $H_* ( - ; \mathbb{Q})$ and $H^* ( - ; \mathbb{Q})$ are ${\rm FI}$--modules.
In both cases, the underlying consistent sequences of $S_n$--representations are determined by the natural actions of $S_n$ on $X^n$ (permuting the factors), along with the maps $(x_1, \ldots, x_n)\mapsto (x_1, \ldots, x_n, x_0)$ (for the homological case) or the maps $(x_1, \ldots, x_{n+1})\mapsto (x_1, \ldots, x_n)$ (for the cohomological case).
\begin{defn}
We say that $X$ is of finite (rational) type if $H_k (X; \mathbb{Q})$ is finite-dimensional for each $k \geqslant 0$.
\end{defn}
\begin{prop}\label{symm-prod} If $X$ is path connected and of finite type, then the ${\rm FI\#}$--module $H_k (\P (X); \mathbb{Q})$ is finite-dimensional in each stage and is generated in stage $k$. Consequently, the underlying sequence of symmetric group representations is uniformly representation stable with stable range $n\geqslant 2k$.
\end{prop}
\begin{proof}
By the K\"unneth Theorem, $H_* (X^n; \mathbb{Q})$ is isomorphic to the $n$--fold tensor product of $H_*(X; \mathbb{Q})$ with itself. Under this isomorphism, the ${\rm FI\#}$--module structure
on $H_k (\P (X); \mathbb{Q})$
is described by essentially the same formulas as for $X^n$ itself, with sequences of points in $X$ replaced by $n$--fold tensors and the basepoint $x_0$ replaced by the class $[x_0]\in H_0 (X;\mathbb{Q})$.
Now, $H_k (X^n; \mathbb{Q})$ is generated by simple tensors of the form $a = a_1 \otimes a_2 \otimes \cdots \otimes a_n$, with $a_i \in H_{|a_i|} (X; \mathbb{Q})$, and $\sum_i |a_i| = k$. If $n>k$, then we must have $|a_i| = 0$ for some $i$, meaning that $a_i = q [x_0]$ for some $q\in \mathbb{Q}$, and hence $a$ is in the image of one of the structure maps $H_k (X^{n-1}; \mathbb{Q}) \to H_k (X^n; \mathbb{Q})$ defining the ${\rm FI}$--module structure on $H_k (\P (X); \mathbb{Q})$. This shows that $H_k (\P (X); \mathbb{Q})$ is generated in stage $k$,
and the result now follows from Theorem~\ref{sharp}.
\end{proof}
\begin{lem} If $X$ is semi-locally contractible, then the same holds for $\mathrm{Sym}^n (X)$ for each $n\geqslant 1$ $($in other words, $X^n$ is a good $S_n$--space$)$.
\end{lem}
The proof is similar to the proof that the symmetric product construction is homotopy invariant.
Recall that the infinite symmetric product
$\mathrm{Sym}^\infty (X)$ is simply the colimit of the sequence of maps between the finite symmetric products. Theorem~\ref{quot-stab} now yields the following corollary.
\begin{cor}\label{symm-prod-cor}
Let $X$ be a good, path connected space of finite type.
Then the maps
$$H_k (\mathrm{Sym}^n X; \mathbb{Q}) \longrightarrow H_k (\mathrm{Sym}^{n+1} X; \mathbb{Q}) \longrightarrow H_k (\mathrm{Sym}^\infty (X); \mathbb{Q})$$
are isomorphisms for $n\geqslant k$.
\end{cor}
\begin{rmk}\label{rmk: Steenrod symmetric prod}
In the case where $X$ has the homotopy type of a CW complex, the statement of Corollary \ref{symm-prod-cor} was shown by Steenrod~\cite[Eq. (22.7)]{steenrod} (using other methods) with ${\mathbb Q}$ replaced by the ring of integers.
Following Steenrod, we use the compactly generated topology on $X^n$ (as in Steenrod~\cite{Steenrod-convenient}) when forming the symmetric product.
We note that if $X$ is Hausdorff, this does not affect the weak homotopy type of $\mathrm{Sym}^n (X)$. More generally, we claim that if $X$ is Hausdorff and $G$ is a compact group acting on $X$, then the identity map $k(X)/G\to X/G$ is a weak equivalence, where $k(X)$ denotes $X$ with the compactly generated topology.
Since homotopy groups are defined in terms of maps out of compact spaces,
it suffices to show that a subset of $k(X)/G$ is compact if and only if it is compact in $X/G$, and that the two subspace topologies on such sets coincide.
Compact sets in $k(X)$ and in $X$ coincide by Steenrod~\cite[Theorem 3.2]{Steenrod-convenient}, and since the quotient maps $k(X)\to k(X)/G$ and $X\to X/G$ are proper (Bredon~\cite[Theorem I.3.1]{Bredon}), the same holds for $k(X)/G$ and $X/G$.
Finally, if $K\subset k(X)/G$ is compact, then the identity map to $K\subset X/G$ is a homeomorphism, since $X/G$ is Hausdorff (ibid.).
\end{rmk}
\subsection{Signed direct powers}\label{sdp-sec}
Consider a space $(X, x_0)$ equipped with an involution $\tau\colon\thinspace X\to X$
fixing the basepoint $x_0$.
We wish to describe an action of $B_r = {\mathbb Z}_2 \wr S_r$ on $X^r$ extending the permutation action of $S_r$.
For ease of notation, in this section we view ${\mathbb Z}_2$ as the group $\{0,1\}$, with $0$ as identity, and we write the group operation as $+$. (One should think of the sign associated to $\epsilon\in {\mathbb Z}_2$ as $(-1)^\epsilon$.)
In this notation, the subgroup $D_r\leqslant B_r$ of even-signed permutations is given by
$$D_r = \{((t_1, \ldots, t_r), \sigma) \in B_r \,:\, |\{i \,:\, t_i = 1\}| \textrm{ is even}\}.$$
Using the involution $\tau$, we can now endow $X^r$ with the action of ${\mathbb Z}_2 \wr S_r$ given by
$$(t_1, \ldots, t_r, \sigma) \cdot (x_1, \ldots, x_r) = (\tau^{t_1} (x_{\sigma^{-1}(1)}), \ldots, \tau^{t_r} (x_{\sigma^{-1}(r)})),$$
where $\tau^0 := \textrm{Id}_X$ (and $\tau^1 := \tau$).
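For example, when $r=2$ and $\sigma$ is the transposition $(1\,2)$, we have
$$((1,0),\sigma)\cdot (x_1, x_2) = (\tau(x_2), x_1),$$
and this element lies in $B_2\setminus D_2$, since it carries a single sign.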
Every morphism ${\rm \bf m}_0\to {\rm \bf n}_0$ in the category ${\rm FI}_{BC}\#$ factors uniquely as $\nu\circ f$, where $f\colon\thinspace {\rm \bf m}_0\to {\rm \bf n}_0$ is a morphism in ${\rm FI}_{BC} \#$ that preserves signs (that is, $f(\{0, \ldots, m\}) \subset \{0, \ldots, n\}$), and $\nu\colon\thinspace {\rm \bf n}_0\to {\rm \bf n}_0$ is a bijection that satisfies $\nu(\{\pm i\}) = \{\pm i\}$ for $1\leqslant i \leqslant n$ and is the identity on the complement of the image of $f$ (that is, $\nu$ acts as negation of some subset of the image of $f$ and acts as the identity on the remaining elements).
We will refer to such maps $f$ and $\nu$ as \emph{unsigned partial injections} and \emph{partial negations}, respectively. If $f'\colon\thinspace {\rm \bf n}_0\to {\rm \bf p}_0$ is another unsigned partial injection and $\nu'\colon\thinspace {\rm \bf p}_0\to {\rm \bf p}_0$ is another partial negation that restricts to the identity on the complement of the image of $f'$, then we have
$$(\nu'\circ f') \circ (\nu \circ f) = \nu'' \circ (f'\circ f),$$
where $\nu''$ is the partial negation
$$\nu'' (i) = \begin{cases}
\nu'\circ f'\circ \nu\circ f (j), & \mbox{if $f'\circ f (j) = i$}, \\
i, & \mbox{if $i$ is not in the image of $f'\circ f$.} \\
\end{cases}$$
Note that, by definition, $\nu''$ restricts to the identity on the complement of the image of $f'\circ f$.
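For example, take the unsigned partial injection $f\colon\thinspace {\rm \bf 1}_0\to {\rm \bf 2}_0$ with $f(1)=1$, the partial negation $\nu\colon\thinspace {\rm \bf 2}_0\to {\rm \bf 2}_0$ with $\nu(1)=-1$ and $\nu(2)=2$, the identity $f'\colon\thinspace {\rm \bf 2}_0\to {\rm \bf 2}_0$, and the partial negation $\nu'$ with $\nu'(1)=1$ and $\nu'(2)=-2$. The composition $(\nu'\circ f')\circ(\nu\circ f)$ sends $1\mapsto -1$, so it equals $\nu''\circ (f'\circ f)$ where $\nu''(1)=-1$ and $\nu''(2)=2$: the negation that $\nu'$ performs at $\pm 2$ does not appear in $\nu''$, since $2$ is not in the image of $f'\circ f$.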
\begin{lem} \label{ext} Let $(X, x_0)$ be a based space equipped with an involution $\tau\colon\thinspace X\to X$ fixing $x_0$.
Then $\P(X)$ extends to an ${\rm FI}_{BC}\#$--space $($and, by restriction, to an ${\rm FI}_{BC}$--space and an ${\rm FI}_{D}$--space$)$, whose structure maps
are determined as follows: For an unsigned partial injection $f\colon\thinspace {\rm \bf m}_0\to {\rm \bf n}_0$
we set
$$f_* (x_1, \ldots, x_m) = (y_1, \ldots, y_n),$$
where
$$y_i = \begin{cases}
x_j & \mbox{if $f(j) = i$}, \\
x_0 & \mbox{if $f^{-1} (i) = \emptyset$,} \\
\end{cases}$$
and for a partial negation $\nu\colon\thinspace {\rm \bf n}_0 \to {\rm \bf n}_0$ we set
$$\nu_* (x_1, \ldots, x_n) = (y_1, \ldots, y_n),$$
where
$$y_i = \begin{cases}
x_i & \mbox{if $\nu(i) = i$}, \\
\tau(x_i) & \mbox{if $\nu(i) = -i$.} \\
\end{cases}$$
\end{lem}
\begin{proof} One checks, by a (tedious) computation, that these assignments satisfy
$$\nu'_*\circ f'_* \circ \nu_* \circ f_* = \nu''_* \circ f'_* \circ f_*,$$
where $\nu''$ is the partial negation defined above.
\end{proof}
We denote the ${\rm FI}_{BC}\#$--space constructed in Lemma~\ref{ext} by $\P_\tau (X) = \P_\tau (X, x_0)$.
\begin{rmk} For a general morphism $\phi \colon\thinspace {\rm \bf m}_0\to {\rm \bf n}_0$, the ${\rm FI}_{BC}\#$--space $\P_\tau (X)$ satisfies
$$\phi_* (x_1, \ldots, x_m) = (y_1, \ldots, y_n)$$
where
$$y_i = \begin{cases}
x_j, & \mbox{if $\phi(j) = i$}, \\
\tau (x_j), & \mbox{if $\phi(j) = -i$}, \\
x_0 & \mbox{if $\phi^{-1} (i) = \emptyset$.} \\
\end{cases}$$
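For example, the morphism $\phi\colon\thinspace {\rm \bf 2}_0\to {\rm \bf 3}_0$ with $\phi(1) = -2$ and $\phi(2) = 0$ acts by
$$\phi_*(x_1, x_2) = (x_0, \tau(x_1), x_0).$$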
\end{rmk}
\begin{prop}\label{signed-symm-prod} Let $X$ be a path connected, semi-locally contractible space of finite type. If $\tau\colon\thinspace X\to X$ is an involution fixing $x_0\in X$,
then the homology groups
$H_k (X^r; \mathbb{Q})$
form a uniformly representation stable
sequence of $B_r$--representations,
with stable range $r\geqslant 2k$.
Consequently, the maps
$$H_k (X^r/B_r ; \mathbb{Q}) \longrightarrow H_k (X^{r+1}/B_{r+1} ; \mathbb{Q})$$
$($induced by inserting the basepoint in one factor$)$
are isomorphisms for $r\geqslant k$.
The same statement holds with $D_r$ in place of $B_r$.
\end{prop}
\begin{proof} By Proposition~\ref{symm-prod}, $H_k (\P_\tau (X); \mathbb{Q})$ is finitely generated as an ${\rm FI}$--module. It follows immediately that it is also finitely generated as an ${\rm FI}_D$--module and as an
${\rm FI}_{BC}$--module, since the structure maps for these enhanced modules include the ${\rm FI}$ structure maps. Representation stability in type B/C now follows from Theorem~\ref{thm: fin. gen. implies uniform rep. stability}, and in type D it then follows from Corollary~\ref{stable-range-D}.
The last part follows from Theorem~\ref{quot-stab}.
\end{proof}
\begin{rmk}\label{rmk: signed sym product = sym product} Say $X$ is a space with involution $\tau$.
Since $({\mathbb Z}_2)^r$ is normal in $B_r$, we have a homeomorphism $X^r/B_r \cong \mathrm{Sym}^r(X/\mathbb{Z}_2)$, where $\mathbb{Z}_2$ acts via $\tau$. Hence in type B/C the homological stability statement in Proposition~\ref{signed-symm-prod} is a special case of the one for ordinary symmetric products.
It follows from Steenrod~\cite[Eq. (22.7)]{steenrod} that in type B/C, the stability result in Proposition \ref{signed-symm-prod} also holds integrally
(see Remark \ref{rmk: Steenrod symmetric prod}).
\end{rmk}
\section{${\rm FI}_W$--modules arising from Lie groups}\label{Lie}
Let $\{G_r\}_{r\geqslant 1}$ denote one of the following classical sequences of Lie groups:
\begin{itemize}
\item $G_r = {\rm SU}(r)$ or $G_r = {\rm U}(r)$ (type A);
\item $G_r = {\rm SO}(2r+1)$ (type B);
\item $G_r = {\rm Sp}(r)$ (type C);
\item or $G_r = {\rm SO}(2r)$ (type D).
\end{itemize}
In this section, we review the structure of the maximal tori and Weyl groups in these sequences and construct two finitely-generated ${\rm FI}_W$--modules associated to each sequence.
We will work exclusively with rational (co)homology in this section and in all subsequent sections (any field of characteristic zero would suffice), so we will drop the coefficients from the notation for simplicity.
We begin by specifying the \emph{standard inclusions} $G_r\hookrightarrow G_{r+1}$. For $G_r = {\rm SU}(r)$, ${\rm U}(r)$, ${\rm SO}(2r)$, and ${\rm SO}(2r+1)$, these inclusions are given by $A\mapsto A\oplus I$, where $I$ denotes an identity matrix of size 1 (in type A) or 2 (in types B and D); so our convention is to put the additional 1's in the lower right corner of the matrix.
Following Brocker--tom Dieck~\cite{BTD}, we view ${\rm Sp}(r)$ as the group of $2r\times 2r$ block matrices
$$C=C(A,B)=\left[ \begin{array}{rrr} A & -\ol{B} \\ B & \ol{A} \end{array} \right]$$
such that $C\in {\rm U}(2r)$. (Here $A$ and $B$ are arbitrary $r\times r$ complex matrices, and $\ol{A}$ and $\ol{B}$ are their entry-wise complex conjugates.) The standard inclusion ${\rm Sp}(r)\hookrightarrow {\rm Sp}(r+1)$ is the homomorphism
$$\left[ \begin{array}{rrr} A & -\ol{B} \\ B & \ol{A} \end{array} \right] \mapsto \left[ \begin{array}{rrr} A\oplus 1 & -\ol{B}\oplus 0 \\ B\oplus 0 & \ol{A}\oplus 1 \end{array} \right],$$
where $0$ and $1$ are viewed as $1\times 1$ complex matrices.
Next we make a choice of maximal torus $T_r = T(G_r) \leqslant G_r$ in each of our groups, with the property that in each sequence, the standard inclusion maps $T_r$ to $T_{r+1}$, and we describe the associated Weyl groups $NT_r/T_r$ and their actions on the maximal tori. This discussion will mostly follow~\cite{BTD}, and we refer the reader there for further details. Our choices are as follows:
\begin{itemize}
\item $T({\rm U}(r))$ is the set of diagonal unitary matrices;
\item $T({\rm SU}(r)):= T({\rm U}(r))\cap {\rm SU}(r)$;
\item $T({\rm SO}(2r)) := {\rm SO}(2)\oplus \cdots \oplus {\rm SO}(2)$;
\item $T({\rm SO}(2r+1)) := 1\oplus {\rm SO}(2)\oplus \cdots \oplus {\rm SO}(2)$;
\item $T({\rm Sp}(r)):=\{ C(D,0) \,:\, D\in T({\rm U}(r))\} = \{D\oplus \ol{D} \,:\, D\in T({\rm U}(r))\}$.
\end{itemize}
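For example, when $r=1$ the last of these tori is
$$T({\rm Sp}(1)) = \left\{ \left[ \begin{array}{rr} \lambda & 0 \\ 0 & \ol{\lambda} \end{array} \right] \,:\, \lambda \in S^1 \right\} = \{C(\lambda, 0) \,:\, \lambda\in S^1\},$$
a circle inside ${\rm Sp}(1)$.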
Note that~\cite{BTD} uses the torus ${\rm SO}(2)\oplus \cdots \oplus {\rm SO}(2)\oplus 1\leqslant {\rm SO}(2r+1)$; our choice above ensures that the standard inclusion ${\rm SO}(2r+1) \hookrightarrow {\rm SO}(2r+3)$ carries $NT({\rm SO}(2r+1))$ into $NT({\rm SO}(2r+3))$.
In each type, there are isomorphisms of Weyl groups $NT_r/T_r \cong W_r$, where $W_r$ is the abstract Weyl group associated to the type (as defined in Section~\ref{FIW-modules}).
We briefly specify these isomorphisms.
(In the sequel, these isomorphisms are treated as identifications, so that $W_r$ refers to either the abstract group or to $NT_r/T_r$).
{\bf Type A:} In type A, $NT_r$ is generated by $T_r$ together with the signed permutation matrices $P$ of determinant one, and the desired isomorphism is provided by sending $P$ to its underlying (unsigned) permutation of the standard basis for ${\mathbb C}^r$. The action of $W_r = S_r$ on $T({\rm U}(r)) \cong (S^1)^r$ is simply given by permuting the coordinates of the torus (and as $r$ varies, this in fact gives us the ${\rm FI}_A\#$--space $\P(S^1)$ considered in Section~\ref{symm-sec}). This action restricts to the determinant-one sub-torus $T({\rm SU}(r))\subset T({\rm U}(r))$.
{\bf Types B and D:} For the special orthogonal groups, we will use the notation for $B_r$ and $D_r$ from Section~\ref{sdp-sec}. For $G_r = {\rm SO}(2r+1)$, the isomorphism $W_r = B_r\to NT_r/T_r$ sends $\sigma \in S_r\subset W_r$ to the class represented by the permutation matrix that permutes the ordered pairs of standard basis vectors $p_i = (e_{2i}, e_{2i+1})$ ($i=1, \ldots, r$) according to $\sigma$ (preserving the ordering within the pairs) and fixes $e_1$, while the element $((0,\ldots,0, 1), e)$ (where $e\in S_r$ is the identity) maps to the class of $\textrm{diag}(-1,1, \ldots, 1, -1)$. These elements generate $B_r$, so this determines the map. The case of $G_r = {\rm SO}(2r)$ is similar: $\sigma\in S_r$ maps to the class of the permutation matrix that permutes the pairs $q_i = (e_{2i-1}, e_{2i})$ ($i=1, \ldots, r$) according to $\sigma$, while, for instance, $((1,1,0,\ldots, 0), e)$ maps to the class of $\textrm{diag}(1,-1, 1, -1, 1, \ldots, 1)$.
For both ${\rm SO}(2r)$ and ${\rm SO}(2r+1)$, we then have $T_r \cong ({\rm SO}(2))^r$.
In both of the above cases, the Weyl group acts on $T_r \cong ({\rm SO}(2))^r$ by permuting the factors and negating the angle of rotation (this is seen by conjugating matrices in ${\rm SO}(2)$ by
$\textrm{diag}(1,-1)$).
If we identify ${\rm SO}(2)$ with $S^1$ in the usual way, then signed permutations in $W_r$ act via permutations and complex conjugation.
{\bf Type C:} In our notation, $NT({\rm Sp}(r))$ is generated by $T({\rm Sp}(r))$, the matrices of the form $P\oplus P$ with $P$ a permutation matrix, and the matrices
$$C_j=\left[ \begin{array}{rrr} A_j & B_j \\ B_j & A_j \end{array} \right],$$
for $j=1, \ldots, r$, where $B_j$ is the $r\times r$ matrix with a 1 in position $(j,j)$ and all other entries zero, and $A_j = I - B_j$. The isomorphism $W_r = B_r \to NT({\rm Sp}(r))/T({\rm Sp}(r))$ sends $\sigma\in S_r$ to the class of $P_\sigma \oplus P_\sigma$ (where $P_\sigma$ is the permutation matrix associated to $\sigma$) and sends $(\epsilon_1, \ldots, \epsilon_r, e)$ ($\epsilon_i\in \{0,1\}$) to the class of the product $C_1^{\epsilon_1} \cdots C_r^{\epsilon_r}$.
Note that there are canonical homeomorphisms $T({\rm Sp}(r)) \srm{\cong} T({\rm U}(r)) \cong (S^1)^r$, sending $C(D,0)$ to $D$, and the Weyl group acts by permutations and complex conjugation on these circle factors: Explicitly,
$$(P_\sigma \oplus P_\sigma)\cdot C(\textrm{diag}(\lambda_1, \ldots, \lambda_r),0) = C(\textrm{diag}(\lambda_{\sigma^{-1}(1)}, \ldots, \lambda_{\sigma^{-1}(r)}), 0),$$
and
$$C_j\cdot C(\textrm{diag}(\lambda_1, \ldots, \lambda_r),0) = C(\textrm{diag}(\lambda_1, \ldots, \ol{\lambda_j}, \ldots, \lambda_r), 0).$$
The following result is proven by a case-by-case inspection.
\begin{lem}\label{NT} The standard inclusions $G_r \hookrightarrow G_{r+1}$ map $NT_r$ to $NT_{r+1}$, and the induced maps $NT_r/T_r\to NT_{r+1}/T_{r+1}$ agree with the standard inclusions $j_r\colon\thinspace W_r\hookrightarrow W_{r+1}$.
\end{lem}
\begin{prop} \label{prop: T is FI}
For each $n\geqslant 0$, the $W_r$--spaces $T^n_r$ form a consistent sequence with respect to the standard inclusions $T_r\hookrightarrow T_{r+1}$, and these sequences extend to ${\rm FI}_W$--spaces.
For $G_r = {\rm U}(r)$, the ${\rm FI}_A$--space $r\mapsto T^n_r$ is isomorphic to $\P((S^1)^n)$ $($with the identity element in $(S^1)^n$ as basepoint$)$, and hence extends to an ${\rm FI}_A \#$--space.
For $G_r = {\rm SO}(2r+1)$ or ${\rm Sp}(r)$, the ${\rm FI}_{W}$--space $r\mapsto T^n_r$ is isomorphic to $\P_\tau ((S^1)^n)$, where $\tau$ is complex conjugation, and hence extends to an
${\rm FI}_{BC} \#$--space.
For $G_r = {\rm SO}(2r)$, the ${\rm FI}_D$--space $r\mapsto T^n_r$ is isomorphic to the restriction $\P_\tau ((S^1)^n)|_D$.
\end{prop}
\begin{proof} Consistency of the sequences follows from Lemma~\ref{NT}.
Except in the case $G_r = {\rm SU}(r)$, we have
$$T_r^n \cong ((S^1)^r)^n \cong ((S^1)^n)^r,$$
where the second homeomorphism is defined by
$$((z_{11}, \ldots, z_{1r}), \ldots, (z_{n1}, \ldots, z_{nr}))\mapsto
((z_{11}, \ldots, z_{n1}), \ldots, (z_{1r}, \ldots, z_{nr})).$$
From the above descriptions of the Weyl group actions and Lemma~\ref{NT},
we see that this homeomorphism is equivariant with respect to the signed permutation actions of $W_r$, where on the left the action is the diagonal action on $(T_r)^n$ induced by the action of $W_r$ on $T_r\cong (S^1)^r$, and on the right the action is exactly that occurring in the definition of $\P((S^1)^n)$, where $(S^1)^n$ is considered as a space with involution $z\mapsto \ol{z}$ (complex conjugation in each coordinate).
The result now follows from Corollary~\ref{symm-prod-cor} in the case $G_r = {\rm U}(r)$, and from Proposition~\ref{signed-symm-prod} in the other cases.
For $G_r = {\rm SU}(r)$, Lemma~\ref{lem: extension} shows that $r\mapsto T({\rm SU}(r))^n$ is a sub--${\rm FI}$--space of $r\mapsto T({\rm U}(r))^n$.
(Note that in this case $T_r^n \cong ((S^1)^{r-1})^n$, and we no longer have an ${\rm FI}\#$--space).
\end{proof}
\begin{cor}\label{cor: T-rep-stable}
For each $n\geqslant 0$, the ${\rm FI}$--modules $\{H_k (T^n_r)\}_{r\geqslant 1}$
are generated in stage $k$, and, except possibly in the case $G_r = {\rm SU}(r)$, are uniformly representation stable for
$r\geqslant 2k$.
\end{cor}
\begin{proof} In all cases except for $G_r = {\rm SU}(r)$, this follows from Proposition~\ref{prop: T is FI} together with Propositions~\ref{symm-prod} and~\ref{signed-symm-prod}.
For $G_r = {\rm SU}(r)$, we need to show that
the ${\rm FI}$--module $\{H_k (T^n_r)\}_{r\geqslant 1}$ is generated in stage $k$.
Note that the action of $W_{r-1} \leqslant W_r$ on $T_r$ agrees with the usual permutation action on
$(S^1)^{r-1}$. The homeomorphism
$$T_r^n \cong ((S^1)^{r-1})^n \cong ((S^1)^n)^{r-1},$$
now endows $((S^1)^n)^{r-1}$ with a $W_r$--action, which when restricted to $W_{r-1}$ gives the defining action for $\P((S^1)^n)$. In fact, if we restrict the ${\rm FI}$--space $\r \mapsto T_r^n$ along the functor ${\rm FI}\to {\rm FI}$ defined by sending $\r$ to ${\bf r+1}$ and sending $\phi\colon\thinspace \r\to {\rm \bf s}$ to the unique extension $\widetilde{\phi} \colon\thinspace {\bf r+1} \to {\bf s+1}$ satisfying $\widetilde{\phi}(r+1) = s+1$, we obtain the ${\rm FI}$--space $\P((S^1)^n)$. For each $k\geqslant 0$, the $k^{\textrm{th}}$ homology of this restricted ${\rm FI}$--space is generated in stage $k$, and so the same is automatically true for the original ${\rm FI}$--space.
\end{proof}
Next we consider the flag manifolds $G/T$.
For each of the above sequences of Lie groups, the spaces $G_r/T_r$ admit left actions of $W_r$, defined by $[n]\cdot gT_r = gn^{-1} T_r$. The standard inclusions induce maps $G_r/T_r \to G_{r+1}/T_{r+1}$ making these sequences consistent, but these sequences \emph{do not} satisfy the conditions in Lemma~\ref{lem: extension} and hence do not extend to ${\rm FI}_W$--spaces. Nevertheless, we will show that the associated consistent sequences in \emph{homology} do in fact extend to ${\rm FI}_W$--modules. Moreover, we will see that these ${\rm FI}_W$--modules are in fact dual to certain algebraically-defined co--${\rm FI}_W$--modules, which were shown by
Church--Ellenberg--Farb~\cite[Theorem 5.1.5]{church2015fi} and Wilson~\cite[Corollary 6.5]{wilson2014fiw} to be finitely generated.
Let $G$ be a compact Lie group with maximal torus $T\leqslant G$, and let $ET\to BT$ and $EG\to BG$ denote the (functorial) simplicial models for the universal principal bundles (as defined, for instance, in~\cite{segal1968csss}).
The principal action of $NT\leqslant G$ on $EG$ descends to an action of $W = NT/T$ on $EG/T$.
We will call this the translation action of $W$ on $EG/T$.
\begin{lem}\label{lem: translation=conjugation}
Consider the homotopy equivalence $i \colon\thinspace ET/T \to EG/T$ induced by the inclusion $T\hookrightarrow G$.
Then the conjugation action of $W$ on $H^*(ET/T)$ and the translation action of $W$ on $H^*(EG/T)$ coincide under the isomorphism $i^*$,
and similarly for homology.
\end{lem}
\begin{proof}
Recall that for a Lie group $H$, the simplicial model for $EH$ is the geometric realization of the simplicial space whose space of $k$--simplices is $H^{k+1}$. The face maps are given by deletion of elements from a string, while the degeneracies are given by repetition. The principal action of $H$ on $EH$ is given by right-multiplication.
Note that the inclusions $ET/T \to ENT/T\to EG/T$ are both homotopy equivalences. The space $ENT$ also admits a conjugation action of
$NT$, which descends to a conjugation action of $W$ on $ENT/T$, and the map $ET/T \to ENT/T$ is $W$--equivariant (when $ET/T$ has the conjugation action). Moreover, the principal action of $NT$ on $ENT$ also descends to a $W$--action on $ENT/T$, making the map $ENT/T\to EG/T$ equivariant. Hence to prove the lemma, it will suffice to show that for every $[n]\in W$, the conjugation and translation actions of $[n]$ on $ENT/T$ are homotopic to each other, and we prove this by exhibiting a simplicial homotopy between these maps. Recall (see~\cite{may1992simplicial} for instance) that a simplicial homotopy between maps $f,g\colon\thinspace ENT\to ENT$ consists of a collection of maps $h_{ki} \colon\thinspace (NT)^{k+1} \to (NT)^{k+2}$ for $k=0,1,2,\ldots$ and $0\leqslant i\leqslant k$, satisfying a collection of identities (these identities simply amount to saying that the $h_{ki}$ combine into a simplicial map $ENT \times \Delta^1\to ENT$, where $\Delta^1$ is the standard simplicial 1--simplex).
We define
$$h_i (n_0,\ldots,n_k) = (n_0 n, \ldots, n_i n, n^{-1} n_i n, \ldots, n^{-1} n_k n);$$
one readily checks that all of the identities hold.
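For instance, since the face maps act by deletion, the two outermost identities read
$$d_0 h_0 (n_0, \ldots, n_k) = (n^{-1} n_0 n, \ldots, n^{-1} n_k n) \qquad \textrm{and} \qquad d_{k+1} h_k (n_0, \ldots, n_k) = (n_0 n, \ldots, n_k n),$$
recovering the conjugation and translation actions of $[n]$, respectively.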
One checks that this map is well-defined on equivalence classes modulo the principal action of $T$ (given by right-multiplication in all coordinates), and hence descends to a simplicial homotopy between the translation and conjugation actions of $[n]$ on $ENT/T$, as desired.
\end{proof}
Borel~\cite[Proposition 29.2(a)]{Borel57} showed that for a compact, connected Lie group $G$ with maximal torus $T \leqslant G$, the ring $H^*(G/T)$ is isomorphic to the cokernel of the map $(i^*)^+\colon\thinspace H^*(BG)^+ \to H^*(BT)$ induced by the inclusion $i\colon\thinspace T\hookrightarrow G$ (here $H^*(BG)^+ $ is the ideal consisting of elements in non-zero degrees). The conjugation action of $NT$ on $BG$ is homologically trivial because for every element $g\in G$, the conjugation map $c_g\colon\thinspace BG\to BG$ induced by $g$ is nullhomotopic (thought of as an automorphism of the groupoid whose nerve is $BG$, the functor $c_g$ is isomorphic to the identity via a continuous natural transformation).
This implies that the image of $(i^*)^+$ is contained in the $W$--invariants $H^*(BT)^W$, and by Baird~\cite[Theorem B.2]{baird2007cohomology}, we also have an isomorphism
$H^*(G/T) \cong H^*(BT)/(H^*(BT)^W)^+$, where again $+$ denotes the ideal of non-zero degree elements. By comparing dimensions, it follows that the image of $(i^*)^+$ is exactly $(H^*(BT)^W)^+$.
We now give a functorial version of these results.
\begin{prop}\label{prop: G/T}
For each compact, connected Lie group $G$ and maximal torus $T\leqslant G$, the map $f \colon\thinspace G/T \to BT$ classifying the principal $T$--bundle $G\to G/T$ is surjective in rational cohomology $($and, dually, injective in rational homology$)$, and the kernel of $f^*$ is precisely $(H^*(BT)^W)^+$.
\end{prop}
\begin{proof}
First we prove that $f^* \colon\thinspace H^*(BT)\to H^*(G/T)$ is surjective.
It suffices to prove surjectivity in degree 2, because $H^*(G/T)$ is generated in degree 2 (since $H^*(BT)$ is generated in degree 2, this follows from either Borel's or Baird's result).
We have a map of fibrations
\begin{equation}\label{eqn: G/T fib}
\begin{tikzcd}
T \arrow{r}{=} \arrow{d}{i} & T\ar{d} \\
G \arrow{r} \arrow{d} & EG \ar{d} \\
G/T \ar{r}{f} & EG/T \simeq BT,
\end{tikzcd}
\end{equation}
which gives rise to a map between the Serre spectral sequences for these fibrations.
Consider the differentials $d_2\colon\thinspace E_2^{0,1}\to E_2^{2,0}$ in the two spectral sequences: on the right, this differential is an isomorphism (since $EG$ is contractible), and on the left, it is surjective since $H^2(G)=0$ (in general, $H^*(G)$ is an exterior algebra concentrated in odd degrees -- see Reeder~\cite{reeder1995cohomology}, for instance). We thus have a commutative diagram
\begin{center}
\begin{tikzcd}
H^1(T) \arrow[d, two heads, "d_2"] \arrow[r, leftarrow, "="]& H^1(T)
\arrow[d, "\cong"', "d_2"] \\
H^2(G/T) \arrow[r, leftarrow, "f^*"] & H^2(BT), \\
\end{tikzcd}
\end{center}
and it follows that $f^*$ is surjective in degree 2 (and hence in all degrees).
Since we know that there is an isomorphism $H^*(G/T)\cong H^*(BT)/(H^*(BT)^W)^+$, and these graded vector spaces are finite-dimensional in each degree, to show that $\ker (f^*) = (H^*(BT)^W)^+$ it suffices to check that $(H^*(BT)^W)^+\leqslant \ker(f^*)$. As an ungraded $W$--representation, $H^*(G/T)$ is the regular representation (see, for instance,
Baird~\cite[Theorem B.1]{baird2007cohomology}), so the $W$--invariants in $H^*(G/T)$ are concentrated in degree zero; since $f^*$ is $W$--equivariant, $f^*((H^*(BT)^W)^+) \leqslant (H^*(G/T)^W)^+ = 0$.
\end{proof}
\begin{prop} \label{prop: H(G/T) is FI}
The consistent sequences of $W_r$--modules $r\mapsto H_k (G_r/T_r)$ extend to finitely generated ${\rm FI}_W$--modules.
\end{prop}
\begin{proof} Let $1\times W_s\subset W_{r+s}$ denote the subgroup of elements fixing $1, \ldots, r\in {\rm \bf (r+s)}_0$. By Lemma~\ref{lem: extension}, we need to prove that the action of $1\times W_{s}$ on the image of the map
$$H_k (G_r/T_r) \longrightarrow H_k (G_{r+s}/T_{r+s})$$
induced by the standard inclusion is trivial.
Note that in Diagram (\ref{eqn: G/T fib}) above, we may take the map $G\to EG$ to be the inclusion of one fiber of the fibration $EG\to EG/G$ (giving a specific choice for the classifying map $f$).
This choice gives us a commutative diagram
\begin{equation}\label{eqn: f natural}
\begin{tikzcd}
G_r/T_r \arrow{r}{f_r} \arrow{d} & EG_r/T_r\ar{d} \\
G_{r+s}/T_{r+s} \arrow{r}{f_{r+s}} & EG_{r+s}/T_{r+s}
\end{tikzcd}
\end{equation}
relating the classifying maps from Proposition~\ref{prop: G/T}, and moreover this choice ensures that the maps $f_r$ and $f_{r+s}$ are $W_r$-- and $W_{r+s}$--equivariant (respectively), where on the right the Weyl group actions are induced by the principal actions of $NT_r\leqslant G_r$ and $NT_{r+s} \leqslant G_{r+s}$ on the universal bundles.
Since $(f_{r+s})_*$ is injective, to complete the proof it suffices to show that $1\times W_{s}$ acts trivially on the image of $H_k (EG_r/T_r)$ in $H_k (EG_{r+s}/T_{r+s})$. By Lemma~\ref{lem: translation=conjugation}, this is equivalent to showing that $1\times W_{s}$ acts trivially on the image of $H_k (BT_r)$ in $H_k (BT_{r+s})$, where now $1\times W_{s} \leqslant W_{r+s}$ is acting by conjugation. But the image of $T_r$ in $T_{r+s}$ is fixed pointwise under conjugation by $1\times W_{s}$, and the same is true after applying the bar construction.
Finally, we show that $H_k (G_r/T_r)$ is finitely generated as an ${\rm FI}_W$--module. For $k=0$, connectedness of $G_r$ implies that the ${\rm FI}_W$--module $H_k (G_r/T_r)$ is constant, with value the trivial representation.
For $k>0$, Proposition~\ref{prop: G/T} and commutativity of Diagram (\ref{eqn: f natural}) yield an isomorphism of ${\rm FI}_W$--modules
$$H_k (G_r/T_r) \srm{\cong} (H^k(BT_r)/(H^k(BT_r)^{W_r}))^*,$$
where on the right, $W_r$ acts by conjugation.
It follows from Proposition~\ref{prop: T is FI} that $(H^k(BT_r)/(H^k(BT_r)^{W_r}))^*$ is isomorphic, as a co--${\rm FI}_W$--module, to the degree--$k$ part of the diagonal coinvariant algebra $\mathbb{Q}[x_1, \ldots, x_r]/\mathcal{I}_r$, with the ${\rm FI}_W$--module structure coming from the signed permutation action of $W_r$ on the variables and the natural projections $x_1 \mapsto x_1, \ldots, x_r\mapsto x_r, x_{r+1} \mapsto 0$. Wilson~\cite[Theorem 6.1]{wilson2014fiw} shows that $(\mathbb{Q}[x_1, \ldots, x_r]/\mathcal{I}_r)^*$ is finitely generated (in each degree) as an ${\rm FI}_W$--module.
\end{proof}
\section{Stability for commuting elements in compact Lie groups}\label{stab}
In this section we prove several of our main results regarding the
homology of spaces of commuting elements in compact, connected Lie groups.
All coefficients in this section are, implicitly, rational (any field of characteristic zero would suffice).
\subsection{Stability for classical sequences of Lie groups}
Throughout this section, $G_r$ will again denote one of the classical Lie groups: ${\rm SU}(r)$ or ${\rm U}(r)$ (type A); ${\rm SO}(2r+1)$ (type B); ${\rm Sp}(r)$ (type C); or ${\rm SO}(2r)$ (type D);
and $T_r = T(G_r)\leqslant G_r$ will be the maximal torus defined in Section~\ref{Lie}, with Weyl group $W_r$.
Fix positive integers $n$
and $k$. In Section~\ref{sec: stable range for Hom}, we will establish homological stability for the sequences $r\mapsto {\rm Hom}({\mathbb Z}^n, G_r)_1$. Here we consider the conjugation quotients ${\rm Rep}({\mathbb Z}^n, G_r)_1$, which turn out to be simpler to analyze.
\begin{thm}\label{stability-wrt-r-Rep}
The sequences
$r\mapsto {\rm Rep}({\mathbb Z}^n,G_r)_1$
satisfy strong rational homological stability.
In homological degree $k$, stability holds for $r \geqslant k$.
\end{thm}
\begin{proof} By Theorem~\ref{Stafa}, these spaces are homeomorphic to $T_r^n/W_r$, and the homeomorphisms commute with the stabilization maps induced by the standard inclusions (Remark~\ref{rmk: phi natural}).
By Corollary~\ref{cor: T-rep-stable}, the sequence $r\mapsto H_k (T_r^n)$ is generated in stage $k$ as an ${\rm FI}_W$--module, and in all cases except $G_r = {\rm SU}(r)$, this module extends to ${\rm FI}_W \#$. For $G_r \neq {\rm SU}(r)$, the result now follows from Theorem~\ref{quot-stab}.
When $G_r = {\rm SU}(r)$,
Theorem~\ref{quot-stab} still tells us that the maps
$$H_k (T_r^n)^{W_r} \cong H_k (T_r^n/W_r) \longrightarrow H_k (T_{r+1}^n/W_{r+1}) \cong H_k (T_{r+1}^n)^{W_{r+1}}$$
are surjective for $r\geqslant k$. It remains to prove injectivity for $r\geqslant k$.
The inclusions ${\rm SU}(r)\hookrightarrow {\rm U}(r)$ restrict to inclusions $i = i_r\colon\thinspace T_r \hookrightarrow T_r':= T({\rm U}(r))$ between the diagonal maximal tori, and the action of $W_r$ on $T_r$ is just the restriction of its action on $T_r'$. For $r\geqslant k$, we thus have a commutative diagram
\begin{center}
\begin{tikzcd}
H_k (T^n_r)^{W_r} \arrow{d}{i_*} \arrow[r, two heads] & H_k (T^n_{r+1})^{W_{r+1}} \arrow{d}{i_*} \\
H_k ((T'_r)^n)^{W_r} \arrow{r}{\cong} & H_k ((T'_{r+1})^n)^{W_{r+1}},
\end{tikzcd}
\end{center}
and it will suffice to show that $i_* \colon\thinspace H_k (T^n_r) \to H_k ((T'_r)^n)$ is injective. This can be seen by direct computation using the K\"unneth Theorem, or from the fact that the Serre spectral sequence for the fibration sequence $T_r \to T_r'\xrightarrow{\det} S^1$ collapses (for dimension reasons) at the $E^2$ page.
\end{proof}
\begin{rmk}\label{rmk: rep space integral stability}
In types A, B, and C, the homological stability result in Theorem \ref{stability-wrt-r-Rep} holds integrally
by Steenrod~\cite[Eq. (22.7)]{steenrod}.
\end{rmk}
\subsection{A note on $\pi_2({\rm Rep}({\mathbb Z}^n,{\rm U}(r)))$ and $\pi_2({\rm Rep}({\mathbb Z}^n,{\rm SU}(r)))$}
Here we briefly observe that the second homotopy groups of these representation spaces are
non-trivial, in contrast with a result of Florentino, Lawton and Ramras in the case of free group character varieties~\cite[Theorem 5.12]{FLR}.
We will make use of a result of Lawton and Ramras~\cite{Lawton-Ramras}, who show that the
universal cover of ${\rm Rep}({\mathbb Z}^n,{\rm U}(r))$ is ${\rm Rep}({\mathbb Z}^n,{\mathbb R} \times {\rm SU}(r)) \cong {\rm Rep}({\mathbb Z}^n,{\mathbb R}) \times {\rm Rep}({\mathbb Z}^n,{\rm SU}(r)).$ Therefore, there is an
isomorphism $$\pi_2({\rm Rep}({\mathbb Z}^n,{\rm U}(r))) \cong \pi_2({\rm Rep}({\mathbb Z}^n,{\rm SU}(r))).$$
Moreover, if $\pi_1(G)$ is finite, it was shown by Biswas, Lawton and Ramras \cite{BLR} that ${\rm Rep}({\mathbb Z}^n,G)_1$ is simply connected, and by the Hurewicz theorem we have
$$\pi_2({\rm Rep}({\mathbb Z}^n,G)_1)\cong H_2({\rm Rep}({\mathbb Z}^n,G)_1; {\mathbb Z}).$$
One such Lie group is ${\rm SU}(r)$.
By Theorem \ref{stability-wrt-r-Rep} we know that the second homology of ${\rm Rep}({\mathbb Z}^n,{\rm SU}(r))$
stabilizes for $r\geqslant 2.$ That is, for all $r\geqslant 2$ we have
$$H_2({\rm Rep}({\mathbb Z}^n,{\rm SU}(2));{\mathbb Q})\cong H_2({\rm Rep}({\mathbb Z}^n,{\rm SU}(r));{\mathbb Q})$$
(and in fact, this isomorphism holds integrally by Remark~\ref{rmk: rep space integral stability}).
From \cite[Ex. 7.1]{stafa2017poincare} the Poincar\'e series
of ${\rm Rep}({\mathbb Z}^n,{\rm SU}(2))$ is
$$\frac{(1+s)^n+(1-s)^n}{2} = \sum_{j\ \mathrm{even}} {n\choose j} s^j,$$
so the coefficient of $s^2$ is $n\choose 2$. Therefore we have the following:
\begin{prop}
For all $r\geqslant 2$, the homotopy groups $\pi_2({\rm Rep}({\mathbb Z}^n,{\rm U}(r)))\cong\pi_2({\rm Rep}({\mathbb Z}^n,{\rm SU}(r)))$ have rank ${n\choose 2}.$
\end{prop}
We note that Lawton--Ramras~\cite[Theorem 5.3]{Lawton-Ramras} actually shows that
$$\pi_2 ({\rm Rep}({\mathbb Z}^2,{\rm SU}(2))) \cong \pi_2 ({\rm Rep}({\mathbb Z}^2,{\rm U}(2))) \cong \mathbb{Z}.$$
\subsection{Stability with respect to number of commuting elements}\label{sec: n-stability}
Fix a compact, connected Lie group $G$ (not necessarily of classical type) with maximal torus $T$ and Weyl group $W$.
We now study how the homology of spaces of commuting $n$--tuples in $G$ varies as we increase $n$.
Observe that the maps
$$ T^n = {\rm Rep}(\mathbb{Z}^n, T) \srm{i} {\rm Rep}({\mathbb Z}^n,G)_1$$
and
$$\phi \colon\thinspace G/T\times_W T^n \to {\rm Hom}({\mathbb Z}^n,G)_1$$
from Section~\ref{Hom} are $S_n$--equivariant, where on the left, $S_n$ acts by permuting the coordinates of $T^n$ and acts trivially on $G/T$, and on the right, $S_n$ acts by permuting the coordinates of ${\mathbb Z}^n$. Hence the induced maps
$$i_*\colon\thinspace H_*(T^n/W) \srm{\cong} H_*({\rm Rep}({\mathbb Z}^n,G)_1)$$
and
$$\phi_* : H_*(G/T \times_W T^n) \srm{\cong} H_* ({\rm Hom}({\mathbb Z}^n,G)_1),$$
which are isomorphisms by Theorems~\ref{Baird} and~\ref{Stafa}, are isomorphisms of $\mathbb{Q} [S_n]$--modules.
We now extend these maps to isomorphisms of ${\rm FI\#}$--modules.
First we explain the ${\rm FI\#}$--module structure on the domains of $i_*$ and $\phi_*$.
The diagonal action of $W$ on $T^n$ commutes with the permutation action of $S_n$. Moreover, the inclusions $i_n\colon\thinspace T^n\hookrightarrow T^{n+1}$, defined by $(t_1, \ldots, t_n)\mapsto (t_1, \ldots,t_n, 1)$, are $W$--equivariant. It follows that $W$ in fact acts on the ${\rm FI\#}$--space $\P(T)$ through maps of ${\rm FI\#}$--spaces. Hence $\P(T)$ has the structure of a $W$--object in ${\rm FI\#}$--spaces, or, equivalently, an ${\rm FI\#}$--object in $W$--spaces. Applying the quotient space functor then yields an ${\rm FI\#}$--space $n\mapsto T^n/W$, whose underlying consistent sequence is given by the permutation actions and the maps induced by the inclusions $i_n$.
To understand the ${\rm FI\#}$--module structure on the domain of $\phi_*$,
first note that given two ${\rm FI\#}$--spaces $X$ and $Y$, there is a direct product ${\rm FI\#}$--space $X\times Y$ defined on objects by ${\rm \bf n}_0 \mapsto X({\rm \bf n}_0) \times Y({\rm \bf n}_0)$ and on morphisms by sending
$\phi\colon\thinspace {\rm \bf n}_0 \to {\rm \bf m}_0$ to $X(\phi)\times Y(\phi)$. In particular, considering $G/T$ as a constant ${\rm FI\#}$--space, we obtain an ${\rm FI\#}$--space $G/T\times \P(T)$, with underlying consistent sequence ${\rm \bf n}_0\mapsto G/T\times T^n$. Again, $W$ acts through maps of ${\rm FI\#}$--spaces, so the consistent sequence ${\rm \bf n}_0\mapsto G/T\times_W T^n$ extends to an ${\rm FI\#}$--space by the same reasoning as above.
In order to endow the ranges of $i_*$ and $\phi_*$ with the structures of ${\rm FI\#}$--modules, note that there is a canonical isomorphism of categories ${\rm FI\#} \xrightarrow{\cong} {\rm FI\#}^{\textrm{op}}$ which is the identity on objects and takes a partial injection
$$S \supset A \xrightarrow[\cong]{\,\psi\,} B \hookrightarrow T$$
to the morphism $S\to T$ in the opposite category corresponding to
$$T \supset B \xrightarrow[\cong]{\,\psi^{-1}\,} A \hookrightarrow S.$$
We may thus consider $\P(\mathbb{Z})$ as an ${\rm FI\#}^\textrm{op}$--object in groups, and applying the contravariant functors ${\rm Rep}$ and ${\rm Hom}$ (followed by homology) makes the ranges of $i_*$ and $\phi_*$ into ${\rm FI\#}$--modules.
Tracing the definitions yields the following result.
\begin{lem}\label{lem: i* phi*} The maps $i_*$ and $\phi_*$ induce isomorphisms of ${\rm FI\#}$--modules.
\end{lem}
Next, if $X$ is an ${\rm FI\#}$--object in $W$--spaces, where $W$ is a finite group, then
since the isomorphisms
$$H_k (X_n)^W \srm{\cong} H_k (X_n/W)$$
in Proposition~\ref{coh-quot} are natural with respect to equivariant maps, they in fact provide an isomorphism of ${\rm FI\#}$--modules.
\begin{lem}\label{lem: FIs-iso} The ${\rm FI\#}$--modules
$${\rm \bf n}_0\mapsto H_k ({\rm Rep}({\mathbb Z}^n, G)_1) \,\,\,\textrm{ and }\,\,\, {\rm \bf n}_0\mapsto H_k ({\rm Hom}({\mathbb Z}^n, G)_1)$$
are isomorphic $($respectively$)$ to the ${\rm FI\#}$--modules
$$H_k(\P(T))^W \,\,\,\textrm{ and }\,\,\, H_k(G/T\times \P(T))^W.$$
\end{lem}
Since the structure maps for the ${\rm FI\#}$--modules
$$H_k(\P(T)) \,\,\,\,\,\textrm{ and }\,\,\,\,\, H_k(G/T\times \P(T))$$
are $W$--equivariant, these are in fact ${\rm FI\#}$--objects in the category of ${\mathbb Q}[W]$--modules.
In general, passage to $W$--invariants defines a functor from ${\mathbb Q} [W]$--modules to ${\mathbb Q}$--modules, so the $W$--invariants of an ${\rm FI\#}$--module over ${\mathbb Q} [W]$ form an ${\rm FI\#}$--module over ${\mathbb Q}$, and similarly for ${\rm FI}$--modules. We have the following general fact regarding generators for these fixed-point submodules.
\begin{lem}\label{W-invt-fin-gen}
Let $H$ be a finite group, and let $V\colon\thinspace {\rm FI} \to\k [H]\textrm{--}{\bf Mod}$ be an ${\rm FI}$--module over the group ring $\k [H]$, where $\k$ is a field of characteristic not dividing $|H|$. Let $R\colon\thinspace\k [H]\textrm{--}{\bf Mod}\to \k\textrm{--}{\bf Mod}$ denote the
forgetful functor, and assume that $R\circ V$ is generated in stage $k$. Then the ${\rm FI}$--module $V^H$ is generated in stage $k$ as well.
\end{lem}
\begin{proof}
Say $x\in V_n^H$, with $n>k$. We must show that $x$ can be written as a sum of images of elements in $V_k^H$ under the structure maps for $V^H$.
By hypothesis, there exist $x_i\in V_n$, $y_i\in V_k$, and morphisms $\phi_i\colon\thinspace {\rm \bf k} \to {\rm \bf n}$ such that
$$x = \sum_i x_i\,\,\,\,\, \textrm{ and } \,\,\,\,\,x_i = {\phi_i}_* (y_i).$$
Consider the $H$--invariant elements $\ol{x_i}$ and $\ol{y_i}$ (as in Definition~\ref{alpha}).
Since $x = \ol{x}$, linearity of the averaging map implies that $x = \sum_i \ol{x_i}$.
Since each ${\phi_i}_*$ is a map of $\k [H]$--modules, we have
$${\phi_i}_* (\ol{y_i}) =\ol{{\phi_i}_* (y_i)} = \ol{x_i},$$
so
$$x = \sum_i \ol{x_i} = \sum_i {\phi_i}_* (\ol{y_i}).$$
But $\ol{y_i}\in V_k^H$, so this shows that $x$ is in the $\k$--linear subspace of $V_n^H$ spanned by the images of the maps
$${\phi_i}_*\colon\thinspace V_k^H \longrightarrow V_n^H,$$
as desired.
\end{proof}
\begin{thm}\label{thm: stability for fixed G}
Let $G$ be a compact, connected Lie group, and fix $k\in\mathbb{N}$. The sequences of $S_n$--representations
$$n\mapsto H_k({\rm Rep}({\mathbb Z}^n, G)_1)
\,\,\,\text{ and }\,\,\,
n\mapsto H_k({\rm Hom}({\mathbb Z}^n, G)_1)
$$
are generated in stage $k$, and are uniformly representation stable for
$n\geqslant 2k$. Moreover,
the sequences $\{{\rm Rep}({\mathbb Z}^n, G)_1/S_n\}_{n\geqslant 1}$ and $\{{\rm Hom}({\mathbb Z}^n, G)_1/S_n\}_{n\geqslant 1}$ satisfy strong rational homological stability, and in homological degree $k$, stability in fact holds for $n\geqslant k$.
\end{thm}
\begin{proof} By Lemmas~\ref{lem: i* phi*} and~\ref{lem: FIs-iso} together with Theorems~\ref{sharp} and~\ref{quot-stab},
it suffices to show that the ${\rm FI\#}$--modules $H_k (\P(T))^W$ and $H_k (G/T \times \P(T))^W$ are generated in stage $k$. Proposition~\ref{symm-prod} tells us that $H_k (\P(T))$ is finitely generated in stage $k$, and the same holds for $H_k (\P(T))^W$ by Lemma~\ref{W-invt-fin-gen}.
Next, we have a decomposition of ${\rm FI\#}$--modules
\begin{equation}\label{decomp} H_k(G/T \times \P(T)) \cong \bigoplus_{i=0}^k H_i (G/T) \otimes H_{k-i} (\P(T)).
\end{equation}
Since the ${\rm FI}$--module $H_{k-i} (\P(T))$ is generated in stage $k-i$, and $H_i (G/T)$ is constant, we see that each term in this decomposition is generated in stage $k$ or earlier. Hence $H_k(G/T \times \P(T))$ is generated in stage $k$ as well, and Lemma~\ref{W-invt-fin-gen} completes the proof.
\end{proof}
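For example, take $G = {\rm SU}(2)$. By \cite[Ex. 7.1]{stafa2017poincare}, the Poincar\'e series of ${\rm Rep}({\mathbb Z}^n, {\rm SU}(2))$ is $((1+s)^n+(1-s)^n)/2 = \sum_{j\ \mathrm{even}} {n\choose j} s^j$, so
$$\dim_{\mathbb Q} H_k({\rm Rep}({\mathbb Z}^n, {\rm SU}(2))) = {n\choose k}$$
for $k$ even (and $0$ for $k$ odd). In particular these dimensions are polynomial in $n$ of degree $k$, as uniform representation stability predicts.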
\section{Infinite-dimensional constructions}\label{sec: B(2,G) stability}
Fix a compact and connected Lie group $G$. As observed in Section~\ref{sec: n-stability}, the sequence
$\{{\rm Hom}({\mathbb Z}^n,G)\}_{n\geqslant 0}$ extends to an ${\rm FI}$--space.
In this section we will consider several infinite-dimensional constructions associated to this ${\rm FI}$--space. Once again, all coefficients in this section are rational.
First, note that $\{{\rm Hom}({\mathbb Z}^n,G)\}_{n\geqslant 0}$ also has the structure of a simplicial space, in which the structure maps are the restrictions of those on the bar construction of $G$.
Following~\cite{adem2012commuting}, we denote the geometric realization of this simplicial space by
$$
B_{\rm com} G := |{\rm Hom}({\mathbb Z}^\bullet,G)|.
$$
The inclusion ${\rm Hom}({\mathbb Z}^n,G) \subseteq G^n$ implies that the space $B_{\rm com} G$
is a subspace of the classifying space $BG$ of $G$.
We define $B_{\rm com}(G)_1\subset B_{\rm com} G$ to be the subspace
$$
B_{\rm com} (G)_1 := |{\rm Hom}({\mathbb Z}^\bullet,G)_1|.
$$
It is important to note that for $G={\rm SU}(n)$, ${\rm U}(n)$, or ${\rm Sp}(n)$,
we have $$B_{\rm com} G=B_{\rm com} (G)_1$$ (since all of the representation spaces are connected in these cases).
The spaces $B_{\rm com} G$ and $B_{\rm com}(G)_1$ were introduced by
Adem, Cohen and Torres-Giese~\cite{adem2012commuting}
and used by Adem, G\'omez \cite{adem2015gomez} and Adem, G\'omez, Lind, Tillman~\cite{adem2017gomezlindtillman}
to define \emph{commutative $K$--theory}.
In particular, $B_{\rm com} {\rm U} := \colim_n B_{\rm com} {\rm U}(n)$ represents (reduced) commutative complex $K$--theory.
The other construction we will study is an analogue of the James reduced product,
and was introduced by Cohen and Stafa \cite{cohen2016spaces}.
Recall that for a CW-complex $X$ with a non-degenerate basepoint $*$, the James reduced product $J(X)$
of $X$ is defined as the quotient space
$$
J(X) : = \bigg(\coprod_{n \geq 0} X^n \bigg)/\sim,
$$
where $\sim$ is the equivalence relation generated by $(\dots,*,\dots) \sim (\dots,\hat*,\dots)$,
which omits the basepoint. Equivalently, $J(X)$ is the free topological monoid generated by $X$, with the basepoint $*$
acting as the identity element, endowed with the weak topology coming from the quotient above.
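Recall that for a path-connected CW-complex $X$, James's theorem identifies $J(X)$ with $\Omega\Sigma X$ up to weak homotopy equivalence; for instance, $J(S^1)\simeq \Omega S^2$. In particular, $H_*(J(X);\mathbb{Q})$ is the tensor algebra on the reduced homology $\widetilde H_*(X;\mathbb{Q})$, a fact we use below.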
Define the space ${\rm Comm}(G)$ as the quotient space
$$
{\rm Comm}(G) : = \bigg(\coprod_{n \geq 0} {\rm Hom}({\mathbb Z}^n,G)\bigg)/\sim,
$$
where $\sim$ is the same relation as above. Note that ${\rm Comm}(G) \subseteq J(G).$
We will focus here on the subspace
$${\rm Comm}(G)_1 : = \bigg(\coprod_{n \geq 0} {\rm Hom}({\mathbb Z}^n,G)_1\bigg)/\sim,$$
and as above, for $G={\rm SU}(n)$, ${\rm U}(n)$, or ${\rm Sp}(n)$, we have
$${\rm Comm} (G)_1={\rm Comm} (G).$$
Now let $T \leqslant G$ be a maximal torus with Weyl group $W$.
The conjugation maps
$$\phi_n \colon G/T\times_{W} T^n \to {\rm Hom}({\mathbb Z}^n,G)_1 $$
can be assembled to give maps
$$
\phi' \colon G/T\times_{W} BT \to B_{\rm com} (G)_1
$$
and
$$
\phi'' \colon G/T\times_{W_r} J(T) \to {\rm Comm}(G)_1.
$$
The next result was proven for $B_{\rm com} (G)$ in \cite[Theorem 6.1]{adem2012commuting} and for ${\rm Comm}(G)$ in
\cite[Theorem 7.1]{cohen2016spaces}.
\begin{thm}\label{thm: phi}
The maps
$$
\phi' \colon G/T\times_{W} BT \to B_{\rm com} ({G})_1
$$
and
$$
\phi'' \colon G/T\times_{W} J(T) \to {\rm Comm}(G)_1
$$
induce isomorphisms in rational $($co$)$homology.
\end{thm}
Now let $\{G_r\}_{r\geqslant 1}$ be one of the classical sequences of Lie groups, with maximal tori $T_r \leqslant G_r$ (as in Section~\ref{Lie}) and Weyl groups $W_r$.
For a fixed positive integer $k$, consider the consistent sequences of $W_r$--representations
$$
\{H_k(G_r/T_r\times BT_r)\}_{r\geqslant 1}, \text{ and }
\{H_k(G_r/T_r\times J(T_r))\}_{r\geqslant 1},
$$
with structure maps induced by those for the consistent sequences
$\{G_r/T_r\}_{r\geqslant 1}$ and $\{T_r\}_{r\geqslant 1}$ (note here that both constructions $B$ and $J$ are functorial).
The argument in the proof of Proposition~\ref{prop: H(G/T) is FI} shows that these sequences extend to ${\rm FI}_W$--modules.
First we show that these ${\rm FI}_W$--modules are
representation stable, which implies strong homological stability for $B_{\rm com} ({G_r})_1$ and ${\rm Comm}(G_r)_1$. Note that except in the case $G_r = {\rm SU}(r)$,
the ${\rm FI}_W$--spaces $r\mapsto BT_r$ and
$r\mapsto J(T_r)$ extend to ${\rm FI}_W\#$--spaces (since this holds before applying the functors $B$ and $J$ by Proposition~\ref{prop: T is FI}).
\begin{prop}\label{prop: stability BT}
The ${\rm FI}_W$--modules $r \mapsto H_k (BT_r)$ are generated in
stage $k$. Hence these modules are uniformly representation stable for $r\geqslant 2k$ $($except possibly in the case $G_r = {\rm SU}(r)$$)$.
\end{prop}
\begin{proof}
For $G_r = {\rm U}(r)$, ${\rm Sp}(r)$, ${\rm SO}(2r)$, or ${\rm SO}(2r+1)$, the Weyl group $W_r$ acts
on $T_r\cong (S^1)^r$ via permutations and complex conjugation, so $S_r < W_r$ acts on $BT_r\cong (BS^1)^r$ by permuting the factors.
We have an isomorphism
$$H^*(BT_r) \cong {\mathbb Q}[x_1,\dots,x_r],$$
where $|x_i|=2$,
and in cohomology this action just permutes the set of generators $x_1,\dots,x_r$; the action in homology is simply dual to this action.
It can easily be seen
(e.g. \cite[Example 1.4]{wilson2014fiw}) that the degree--$k$ submodule of the ${\rm FI}$--module $\r\mapsto {\mathbb Q}[x_1,\dots,x_r]$, with the prescribed
action of the symmetric group, is finitely generated in stage $k$. Alternatively, we can view $BT_r$ as $(BS^1)^r\simeq ({\mathbb C} P^\infty)^r$,
and for increasing $r$ its degree $k$ rational cohomology is generated in stage
$k$ by Proposition~\ref{signed-symm-prod}.
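Concretely, in polynomial degree $2$ the ${\rm FI}$--module $r\mapsto {\mathbb Q}[x_1,\dots,x_r]$ is spanned by the monomials $x_i^2$ and $x_ix_j$, each of which is the image of $x_1^2$ or $x_1x_2$ under the map induced by an injection ${\rm \bf 2}\hookrightarrow {\rm \bf r}$; this illustrates generation in stage $2$.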
For $G_r = {\rm SU}(r)$, the subgroup $S_{r-1}\leqslant W_r \cong S_{r}$ consisting of permutations fixing $r$ acts on $T({\rm SU}(r)) \cong (S^1)^{r-1}$ by permuting the factors, and hence acts on $$H^*(BT({\rm SU}(r))) \cong {\mathbb Q}[x_1,\dots,x_{r-1}]$$
by permuting the generators. Finite generation in this case now follows by the same argument as in Corollary~\ref{cor: T-rep-stable}.
\end{proof}
\begin{prop}\label{prop: stability J}
The ${\rm FI}_W$--modules $r\mapsto H_k(J(T_r))$ are generated in stage $k$.
Hence these modules are uniformly representation stable for $r\geqslant 2k$ $($except possibly in the case $G_r = {\rm SU}(r)$$)$.
\end{prop}
\begin{proof} The homology of the James reduced product $J(T_r)$ is the tensor
algebra $\mathcal{T} \left(\widetilde{H}_* (T_r)\right)$ generated by the reduced homology of the maximal torus. Define
$$V_r:=\widetilde H_*(T_r)=\bigoplus_{i=1}^r H_i(T_r).$$
The action of the Weyl group preserves homological
degree in $H_*(T_r)$ and the tensor grading in the direct sum decomposition
$$
\mathcal{T}(V_r)= \bigoplus_{n \geqslant 0} V_r^{\otimes n} = \bigoplus_{n \geqslant 0}
\left( \bigoplus_{i=1}^r H_i(T_r)\right)^{\otimes n}.
$$
Then we have
$$
H_k(J(T_r)) \cong \bigoplus_{n = 0}^{k}
\left( \bigoplus_{\substack{i_1+\cdots+i_n=k \\ i_j\geqslant 1}} H_{i_1}(T_r) \otimes \cdots \otimes H_{i_n}(T_r)\right),
$$
and this extends to a decomposition of ${\rm FI}_W$--modules.
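For instance, when $T_r\cong (S^1)^r$ and $k=2$, this decomposition reads
$$H_2(J(T_r)) \cong H_2(T_r) \oplus \big( H_1(T_r)\otimes H_1(T_r)\big),$$
of dimension ${r\choose 2} + r^2$, since $H_*(T_r)\cong \Lambda^*({\mathbb Q}^r)$.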
Each factor $\{H_{i_j}(T_r)\}_{r\geqslant 1}$ is generated in stage $i_j$
by Corollary~\ref{cor: T-rep-stable}, so by Proposition~\ref{prop: tensor}, each summand is generated in stage $i_1+\cdots+i_n = k$, and hence $\{H_k(J(T_r))\}_{r\geqslant 1}$ is generated in stage $k$.
\end{proof}
Recall from Theorem~\ref{Stafa}
the homeomorphism
\begin{equation}\label{eqn: j7} T^n/W \srm{\cong}
{\rm Hom}({\mathbb Z}^n,G)_1/G = {\rm Rep}({\mathbb Z}^n, G)_1.
\end{equation}
Similarly, we can obtain the following homeomorphisms.
\begin{prop}\label{prop: homeo}
Let $G$ be a compact Lie group. Then there are homeomorphisms
$$
{\rm Comm}(G)_1/G\cong J(T)/W,
$$
and
$$
B_{\rm com} (G)_1/G \cong BT/W.
$$
\end{prop}
\begin{proof}
The first homeomorphism was shown in \cite[Theorem 1.2]{stafa2017poincare}.
The second homeomorphism follows from the homeomorphism (\ref{eqn: j7}),
which yields a simplicial homeomorphism between the simplicial spaces
$n\mapsto T^n/W$ (whose geometric realization is $BT/W$) and $n\mapsto {\rm Rep}({\mathbb Z}^n, G)_1$ (whose realization is $B_{\rm com} (G)_1/G$).
\end{proof}
In Section~\ref{sec: stable range for Hom}, we establish homological stability for the sequences
$$r\mapsto B_{\rm com} (G_r)_1 \,\,\, \textrm{ and }\,\,\, r\mapsto {\rm Comm}(G_r)_1.$$
Here we consider the conjugation quotients, which are simpler to handle.
\begin{thm}\label{thm: stability for BcomG/G and ComG/G}
The sequences
$$r\mapsto B_{\rm com} (G_r)_1/G_r \,\,\, \textrm{ and }\,\,\, r\mapsto {\rm Comm}(G_r)_1/G_r$$
satisfy strong rational homological stability. In homological degree $k$, stability holds for $r\geqslant k$.
\end{thm}
\begin{proof} The proof is similar to that of Theorem~\ref{stability-wrt-r-Rep}.
We will show that the actions of $W$ on $J(T)$ and on $BT$ are good.
Then by Proposition~\ref{prop: homeo} we have
$$H_*(B_{\rm com} (G)_1/G)\cong H_* (BT)^W,$$
and
$$H_*({\rm Comm}(G)_1/G)\cong H_*(J(T))^W,$$
and except in the case of ${\rm SU}(r)$, the result will follow from Propositions~\ref{prop: stability J} and~\ref{prop: stability BT} together with Theorem~\ref{quot-stab}.
Note that $J(T)$ is a CW complex, and May~\cite[Appendix]{May-EGP} shows that $BT$ has the homotopy type of a CW complex, since it is the geometric realization of a proper simplicial space in which each level is a CW complex. The space $BT/W$ has the homotopy type of a CW complex for the same reason: this space is the geometric realization of the simplicial space $n\mapsto T^n/W$, and $T^n/W$ is triangulable for each $n$ (since the action of $W$ is algebraic, the quotients are semi-algebraic by Schwarz~\cite{Schwarz}).
We claim that $J(T)/W$ also has the homotopy type of a CW complex. This space is the colimit of the diagram formed by all maps $T^n/W\to T^{n+1}/W$ given by inserting the identity element in one coordinate. These maps are cofibrations, so this colimit is homotopy equivalent to the corresponding homotopy colimit.
In the case $G_r = {\rm SU}(r)$, following the argument in the proof of Theorem~\ref{stability-wrt-r-Rep}, it suffices to prove injectivity in homology of the maps $BT({\rm SU}(r))\to BT({\rm U}(r))$
and $J(T({\rm SU}(r)))\to J(T({\rm U}(r)))$
induced by the inclusion ${\rm SU}(r)\hookrightarrow {\rm U}(r)$.
For the classifying spaces, the determinant map $T'_r\to S^1$ induces a simplicial map $BT({\rm U}(r)) \to BS^1$. This map is a level-wise Hurewicz fibration and hence (by~\cite[Theorem 12.7]{May-GOILS}) a fibration on geometric realizations, with fiber $BT({\rm SU}(r))$. Since the homology of $BS^1\simeq {\mathbb C} P^\infty$ is concentrated in even degrees, the Serre spectral sequence for this fibration collapses at the $E^2$ page, and hence the inclusion of the fiber is injective in homology, as desired.
For the James constructions, recall that we saw in the proof of Theorem~\ref{stability-wrt-r-Rep} that the inclusion $T({\rm SU}(r))\hookrightarrow T({\rm U}(r))$ is injective on homology, and since we are working rationally this map admits a splitting. To obtain the homology of the James constructions, we take the tensor algebras on reduced homology, and functoriality of this construction implies that we again have a split injection between the tensor algebras.
\end{proof}
\begin{rmk}\label{rmk: Bcom integral stability}
For $G={\rm U}(r)$, Proposition~\ref{prop: homeo} implies that
\begin{equation}\label{cg}
B_{\rm com} (G)_1/G \cong BT/W \cong \mathrm{Sym}^r(BS^1).
\end{equation}
Hence in this case the homological stability result in Theorem \ref{thm: equiv.coh. stability Hom, Comm,Bcom} holds integrally, as explained in Remark~\ref{rmk: Steenrod symmetric prod}. In light of Remark~\ref{rmk: signed sym product = sym product}, the same holds in types B and C.
We note that in (\ref{cg}) we are using the homeomorphism $BT\cong (BS^1)^r$,
which holds when the product $(BS^1)^r$ is given the compactly generated topology~\cite[Corollary 11.6]{May-GOILS}.
Since geometric realizations of simplicial Hausdorff spaces are Hausdorff~\cite{Pazzis}, using the compactly generated topology
does not affect the weak homotopy type of the symmetric product
(as discussed in Remark~\ref{rmk: Steenrod symmetric prod}).
\end{rmk}
\section{Stability for equivariant (co)homology}\label{sec: equivariant stability}
In this section we prove that the $G_r$--equivariant (co)homology of
${\rm Hom}({\mathbb Z}^n,G_r)_1$, ${\rm Comm}(G_r)_1$, and $B_{\rm com}(G_r)_1$,
where $G_r$ acts by conjugation, also stabilizes for sufficiently large $r$.
In the case of ${\rm Hom}({\mathbb Z}^n,G)_1$, equipped with the permutation action of the
symmetric group $S_n$, we show that the $G$--equivariant (co)homology forms
a representation stable sequence of $\mathbb{Q}[S_n]$--modules. We emphasize the cohomological statements in this section, which are needed in the next section.
We will need a result from Baird \cite{baird2007cohomology} regarding
rational equivariant (co)homology.
As naturality will be crucial for us, we explain in detail the maps involved in this isomorphism. Given a left $G$--space $X$, the inclusion $X^T \hookrightarrow X$ induces a map
\begin{equation}\label{eqn: X^T X} EG\times_T X^T \longrightarrow EG\times_G X,
\end{equation}
where $EG$ is a universal principal (right) $G$--bundle. (We will sometimes denote the homotopy orbit space $EG\times_G X$ by $X_{hG}$.)
The Weyl group $W=NT/T$ acts on $EG\times_T X^T \cong EG/T \times X^T$ via $([e], x)\cdot [n] = ([e\cdot n], n^{-1} \cdot x)$, and the map (\ref{eqn: X^T X}) is invariant under this action, yielding an induced map
\begin{equation}\label{eqn: iota} \iota\colon\thinspace (EG\times_T X^T)/W \longrightarrow EG\times_G X
\end{equation}
sending $[e, x]$ to $[e,x]$.
\begin{thm}[{\cite[Theorem 3.5]{baird2007cohomology}}]\label{thm: Baird G-equiv cohom thm}
Let $G$ be a compact and connected Lie group acting on a paracompact Hausdorff
space $X$ such that for every $x\in X$, there exists a maximal torus $T(x)\leqslant G$ such that $T(x) \leqslant G_x$ $($where $G_x\leqslant G$ denotes the stabilizer subgroup$)$.
Then for each maximal torus $T\leqslant G$,
the map $\iota$
induces an isomorphism
$$\iota^*\colon\thinspace H_G^*(X) \srm{\cong} H^*\left(\left(EG\times_T X^T\right)/W\right)$$
in rational cohomology.
\end{thm}
The action of $W$ on $H^*(EG/T)$ in Theorem~\ref{thm: Baird G-equiv cohom thm} is induced by the \emph{principal action} of $NT\leqslant G$ on $EG$, which descends to an action of $W = NT/T$ on $EG/T$.
Recall (Lemma~\ref{lem: translation=conjugation}) that up to homotopy, this action agrees with the conjugation action of $W$ on $BT$.
\begin{rmk}\label{rmk: natural} The map $\iota$ in Theorem~\ref{thm: Baird G-equiv cohom thm} is natural in the following sense. Say $h\colon\thinspace G\to G'$ is a continuous homomorphism and
$h(T) \leqslant T'$ for some maximal tori $T\leqslant G$ and $T'\leqslant G'$. If $X$ and $X'$ are $G$ and $G'$--spaces (respectively), and $f\colon\thinspace X\to X'$ is an $h$--equivariant map (meaning that $f(g\cdot x) = h(g)\cdot f(x)$ for all $g\in G$, $x\in X$) satisfying
$$f(X^T) \subset (X')^{T'},$$
then we have a commutative diagram
\begin{center}
\begin{tikzcd}
(EG\times_T X^T)/W \arrow[d] \arrow{r}{\iota}
& EG\times_G X \arrow[d]\\
(EG'\times_{T'} (X')^{T'})/W \arrow{r}{\iota}
& EG'\times_{G'} X'
\end{tikzcd}
\end{center}
in which the vertical maps are induced by $h$ and $f$.
\end{rmk}
For completeness, we sketch the proof of Theorem~\ref{thm: Baird G-equiv cohom thm}.
\begin{proof}[Proof of Theorem~\ref{thm: Baird G-equiv cohom thm}]
To begin, let $E$ be a right $G$--space and let $H\leqslant G$ be a subgroup. Then
$G$ acts on $E\times G/H$ via $(e, [g])\cdot g' = (e\cdot g', [(g')^{-1} g])$,
and we have a homeomorphism
\begin{equation}\label{eqn: ExG/H}
\psi \colon\thinspace E/H \srm{\cong} E\times_G (G/H)
\end{equation}
induced by the inclusion $E\hookrightarrow E\times (G/H)$, $e\mapsto (e, [1])$ (the inverse of (\ref{eqn: ExG/H}) is the map induced by $(e, [g])\mapsto [e\cdot g]$).
Next, let $W = NH/H$, and let $Y$ be a left $W$--space. Then the induced homeomorphism
\begin{equation}\label{eqn: ExG/HxY}
\psi\times \textrm{Id}_Y\colon\thinspace E\times_H Y = E/H\times Y \srm{\cong} \left(E\times_G (G/H)\right) \times Y,
\end{equation}
is $W$--equivariant,
where $[n]\in W$ acts on $E\times_H Y$ via $[e, y]\cdot [n] = [e\cdot n, n^{-1} \cdot y]$ and on $\left(E\times_G (G/H)\right) \times Y$ via $(e, [g], y) \cdot [n] = (e, [gn], n^{-1} \cdot y)$. Hence $(\ref{eqn: ExG/HxY})$ descends to a homeomorphism
\begin{equation}\label{eqn: homeo2} (E\times_H Y)/W \srm{\cong} (\left(E\times_G (G/H)\right) \times Y)/W \cong
E\times_G (G/H\times_W Y).
\end{equation}
Now, consider
a left $G$--space $X$. Then the fixed point space $X^H$ is invariant under $NH$, and hence inherits an action of $W$. Setting $Y = X^H$ in (\ref{eqn: homeo2}) gives a homeomorphism
\begin{equation}\label{eqn: homeo3} (E\times_H X^H)/W \srm{\cong} (\left(E\times_G (G/H)\right) \times X^H)/W \cong
E\times_G (G/H\times_W X^H).
\end{equation}
The map
\begin{equation}\label{eqn: phi}\phi\colon\thinspace G/H\times_W X^H \longrightarrow X\end{equation}
induced by $([g], x)\mapsto (g\cdot x)$ is $G$--equivariant (where in the domain of $\phi$, $G$ acts by left translation on $G/H$, and trivially on $X^H$)
so we have an induced map
\begin{equation}\label{eqn: phiG} \phi_G \colon\thinspace E\times_G \left(G/H\times_W X^H\right) \longrightarrow E\times_G X.
\end{equation}
Composing (\ref{eqn: homeo3}) and (\ref{eqn: phiG}) yields a map
$$(E\times_H X^H)/W \longrightarrow E\times_G (G/H\times_W X^H) \longrightarrow E\times_G X$$
sending $[e, x]$ to $[e, x]$, so it remains only to show that (\ref{eqn: phiG}) induces an isomorphism in rational cohomology. The proof of
Baird~\cite[Theorem 3.3]{baird2007cohomology} shows that the underlying $G$--equivariant map
(\ref{eqn: phi}) is an isomorphism in rational cohomology. When $E=EG$, the domain and range of (\ref{eqn: phiG}) fiber over $EG/G = BG$, and by comparing the Serre spectral sequences for these fibrations we see that (\ref{eqn: phiG}) induces an isomorphism in rational cohomology, as desired.
\end{proof}
When $X$ is ${\rm Hom}({\mathbb Z}^n,G)$, ${\rm Comm}(G)$,
or $B_{\rm com} G$, with $G$ acting by conjugation, we have the following corollary of Theorem~\ref{thm: Baird G-equiv cohom thm}.
\begin{cor}\label{cor: G-equiv. cohomology of hom, comm, b_com}
For each compact, connected Lie group $G$, the inclusion $T\hookrightarrow G$ of a maximal torus induces maps
\begin{enumerate}
\item $(EG/T \times T^n)/W \longrightarrow EG\times_G {\rm Hom}({\mathbb Z}^n, G)_1 $
\item $(EG/T \times J(T))/W \longrightarrow EG\times_G {\rm Comm}(G)_1$
\item $(EG/T \times BT)/W \longrightarrow EG\times_G B_{\rm com}(G)_1$,
\end{enumerate}
each of which induces an isomorphism in rational cohomology:
\begin{enumerate}
\item $H^*_G({\rm Hom}({\mathbb Z}^n,G)_1) \srm{\cong} H^*(BT\times T^n)^{W}$,
\item $H^*_G({\rm Comm}(G)_1) \srm{\cong} H^*(BT \times J(T))^{W}$,
\item $H^*_G(B_{\rm com} (G)_1) \srm{\cong} H^*(BT \times BT)^{W}$.
\end{enumerate}
\end{cor}
\begin{proof} Item (1) is due to Baird \cite[Corollary 4.4]{baird2007cohomology},
and follows from Theorem~\ref{thm: Baird G-equiv cohom thm} and Proposition~\ref{coh-quot} by taking $X = {\rm Hom}({\mathbb Z}^n, G)_1$. The key point is that each $n$--tuple in ${\rm Hom}({\mathbb Z}^n, G)_1$ lies in a maximal torus (note also that ${\rm Hom}({\mathbb Z}^n, G)_1^T = T^n$, since if $(g_1, \ldots, g_n)$ is fixed by all $t\in T$, then each $g_i$ lies in the centralizer $Z(T)$, which is just $T$ itself). Note that the action of $W$ on $BT\times T^n$ is good; the proof is similar to the argument in the proof of Theorem~\ref{thm: stability for BcomG/G and ComG/G}.
The other two cases are similar, by taking $X = {\rm Comm}(G)_1$ or $X = B_{\rm com}(G)_1$ in Theorem~\ref{thm: Baird G-equiv cohom thm}. We begin by showing that both of these spaces are paracompact and Hausdorff, and that the actions of $W$ on $BT\times J(T)$ and $BT\times BT$ are good. First, ${\rm Comm}(G)_1$ is a closed subspace of the CW complex $J(G)$, and in general closed subspaces of paracompact spaces are paracompact.
A result of Pazzis~\cite{Pazzis} implies that the simplicial space $B_{\rm com}(G)_1$ is Hausdorff.
To see that $B_{\rm com}(G)_1$ is paracompact, note that this space, being a geometric realization, may be written as the colimit of its skeleta, which are all compact. Moreover, the inclusion of one skeleton into the next is a closed embedding, since the skeleta are compact Hausdorff. In general, it follows from Michael's theory of selections~\cite{Michael-selection} that colimits of closed embeddings between paracompact Hausdorff spaces are paracompact (see~\cite{nLab-colim}).
The proof that the action on $BT\times BT$ is good is similar to the case of $BT\times T^n$. For $BT\times J(T)$, note that since $T$ is a CW complex, so is $J(T)$.
The quotient space $(BT\times J(T))/W$ is the geometric realization of a simplicial space $n\mapsto (T^n\times J(T))/W$. We showed in the proof of Theorem~\ref{thm: stability for BcomG/G and ComG/G} that $J(T)/W$ has the homotopy type of a CW complex, and a similar argument applies to $(T^n\times J(T))/W$. We conclude, using May~\cite[Appendix]{May-EGP}, that $(BT\times J(T))/W$ has the homotopy type of a CW complex as well.
Next, we need to verify that each stabilizer in ${\rm Comm} (G)_1$ and in $B_{\rm com} (G)_1$ contains a maximal torus. But this follows immediately from the fact that each $n$--tuple in ${\rm Hom}({\mathbb Z}^n, G)_1$ lies in a maximal torus.
To complete the proof, we need to check that
${\rm Comm}(G)_1^{T}=J({T})$ and $B_{\rm com} (G)_1^{T}=B{T}$.
Each point in ${\rm Comm} (G)_1$ has a unique non-degenerate representative $(g_1, \ldots, g_n)$ with $g_i \neq 1$, and if this point is fixed by all $t\in T$, then as above we find that $g_i\in Z(T) = T$ for each $i$. The equality $B_{\rm com} (G)_1^{T} = BT$ follows similarly, using the fact that fixed points commute with geometric realization.
\end{proof}
\begin{thm}\label{thm: equiv.coh. stability Hom, Comm,Bcom}
Each of the following sequences of spaces satisfies strong rational $($co$)$homological stability:
\begin{align*}
r&\mapsto EG_r\times_{G_r} {\rm Hom}({\mathbb Z}^n, G_r)_1,\\
r&\mapsto EG_r\times_{G_r} {\rm Comm}(G_r)_1,\\
r&\mapsto EG_r\times_{G_r} B_{\rm com}(G_r)_1.
\end{align*}
In $($co$)$homological degree $k$, stability holds for $r\geqslant k$.
\end{thm}
\begin{proof} We begin by considering the homomorphism spaces.
Using the K\"unneth Theorem and Corollary \ref{cor: G-equiv. cohomology of hom, comm, b_com} we have isomorphisms
$$H^{G_r}_k({\rm Hom}({\mathbb Z}^n,G_r)_1)\cong \bigoplus_{i+j=k}(H_i({BT_r}) \otimes H_j(T_r^n) )^{W_r}$$
that are natural in $r$, and by Lemma~\ref{lem: translation=conjugation}, the action of $W_r$ on $H_i({BT_r})$ is that induced by conjugation.
Each term $H_i({BT_r}) \otimes H_j(T_r^n) $ in the direct sum is an
${\rm FI}_W$--module finitely generated in stage $\leqslant i+ j= k$
by Corollary~\ref{cor: T-rep-stable} and Proposition~\ref{prop: stability BT}, along with
Proposition~\ref{prop: tensor}. By Proposition~\ref{prop: T is FI}, when $G_r = {\rm U}(r)$, ${\rm Sp}(r)$, or ${\rm SO}(2r+1)$, the ${\rm FI}_W$--modules in question are in fact ${\rm FI}_W\#$--modules, and when $G_r = {\rm SO}(2r)$, the ${\rm FI}_D$--modules in question are restrictions of the ${\rm FI}_B$--modules associated to ${\rm SO}(2r+1)$.
Theorem~\ref{quot-stab} now implies that the $W_r$--invariants in these modules stabilize for $r\geqslant k$.
Finally, we address the case $G_r = {\rm SU}(r)$.
Let $T_r' = T({\rm U}(r)) \leqslant {\rm U}(r)$ denote the diagonal maximal torus.
As in the proof of
Corollary~\ref{cor: T-rep-stable}, it suffices to show that the maps
$$H_* (BT_r \times T_r^n) \longrightarrow H_* (BT'_r \times (T'_r)^n)$$
(induced by the inclusion $T_r \hookrightarrow T_r'$) are injective. In that proof we established injectivity of $H_*(T_r^n)\to H_*((T'_r)^n)$, and injectivity of $H_* (BT_r) \to H_* (BT'_r)$ was established in the proof of Proposition~\ref{prop: stability BT}.
The K\"unneth decomposition now completes the proof in this case.
Similar arguments apply to ${\rm Comm}(G_r)_1$ and $B_{\rm com} (G_r)_1$, using Propositions~\ref{prop: stability J} and~\ref{prop: stability BT}.
\end{proof}
\begin{rmk}\label{rmk: Hom, Bcom integral equivariant stability}
For $G={\rm U}(r)$, ${\rm Sp}(r)$, or ${\rm SO}(2r+1)$, Corollary \ref{cor: G-equiv. cohomology of hom, comm, b_com} implies that
\begin{enumerate}
\item $H_*^G({\rm Hom}({\mathbb Z}^n,G)_1) {\cong} H_*(BT\times T^n)^{W} {\cong} H_*(\mathrm{Sym}^r(BS^1 \times (S^1)^n))$, and
\item $H_*^G(B_{\rm com} (G)_1) {\cong} H_*(BT \times BT)^{W} {\cong} H_*(\mathrm{Sym}^r(BS^1 \times BS^1))$.
\end{enumerate}
\end{rmk}
\begin{thm}\label{thm: equiv.coh. stability Hom mod S_n}
Let $G$ be a compact and connected Lie group.
The sequence of $S_n$--representations
$$
n\mapsto H^G_k({\rm Hom}({\mathbb Z}^n,G)_1)
$$
is uniformly representation stable with stable range $n\geqslant 2k.$
Consequently, the sequence
$$n\mapsto {\rm Hom}({\mathbb Z}^n,G)_1/S_n$$
satisfies strong $G$--equivariant rational homological stability, and in homological degree $k$, stability holds for $n\geqslant 2k$.
\end{thm}
\begin{proof}
Consider the action of the symmetric group $S_n$ on $EG \times_G {\rm Hom}({\mathbb Z}^n,G)_1$
that is trivial on $EG$ and permutes the coordinates of $n$--tuples in ${\rm Hom}({\mathbb Z}^n,G)_1.$
The latter action of $S_n$ commutes with the conjugation action of $G$, giving a well-defined $S_n$--action on the homotopy orbit space.
Recall that the conjugation map
$$\phi \colon\thinspace G/T\times_W T^n \longrightarrow {\rm Hom}({\mathbb Z}^n,G)_1$$
is both $S_n$--equivariant and $G$--equivariant (where on the left, $S_n$ acts trivially on $G/T$ and by permutations on $T^n$, while $G$ acts by left-translation on $G/T$ and trivially on $T^n$).
Since $\phi$ is $G$--equivariant, it induces a map of fibration sequences
\begin{center}
\begin{tikzcd}
G/T\times_W T^n \arrow{d}{\phi} \arrow{r} & (G/T\times_W T^n)_{hG}\arrow{d}{\phi_{hG}} \arrow{r}
& BG \ar{d}{=} \\
{\rm Hom}({\mathbb Z}^n,G)_1 \arrow{r} & ({\rm Hom}({\mathbb Z}^n,G)_1)_{hG} \arrow{r} & BG,
\end{tikzcd}
\end{center}
where in the middle column we use the notation $X_{hG}:=EG\times_G X$.
By Theorem~\ref{Baird}, $\phi$ induces an isomorphism in rational (co)homology, and a comparison of the Serre spectral sequences for these fibrations shows that the induced map $\phi_{hG}$ between homotopy orbit spaces is also an isomorphism in rational
(co)homology. Note that $S_n$--equivariance of $\phi$ implies $S_n$--equivariance of $\phi_{hG}$.
Next, we have an $S_n$--equivariant homeomorphism
$$(G/T\times_W T^n)_{hG} \cong ((G/T\times T^n)_{hG})/W = (EG\times_G (G/T\times T^n))/W,$$
where on the right $W$ acts trivially on $EG$.
Furthermore,
we have an $S_n$--equivariant homeomorphism \cite[eq. (28)]{baird2007cohomology}
$$
EG \times_G (G/T \times T^n) \cong EG \times_T T^n = EG/T \times T^n
$$
given by $[e, gT, t]\mapsto [e\cdot g, t]$ (with inverse $[e, t]\mapsto [e, T, t]$). This homeomorphism is also $W$--equivariant if we give $EG\times_T T^n$ the action $[e, t]\cdot [n] = [en, n^{-1} t n]$.
Lemma~\ref{lem: translation=conjugation} now gives
an isomorphism of ${\rm FI}$--modules
$$\{H_k^G ({\rm Hom}({\mathbb Z}^n, G)_1)\}_{n\geqslant 1} \cong \{H_k (BT \times T^n)^W\}_{n\geqslant 1},$$
where $W$ acts on $BT\simeq EG/T$ by conjugation.
So it will suffice to show that $\{H_k (BT \times T^n)^W\}_{n\geqslant 1}$ is generated in stage $k$ (note that both of these are in fact ${\rm FI\#}$--modules).
By Lemma~\ref{W-invt-fin-gen}, it will suffice to show that the ${\rm FI}$--module
$\{H_k (BT \times T^n)\}_{n\geqslant 1}$ is generated in stage $k$.
The K\"unneth Theorem gives
$$H_k (BT \times T^n) \cong \bigoplus_{i+j=k} H_i(BT)\otimes H_j(T^n),$$
which is a decomposition of ${\rm FI}$--modules (where the ${\rm FI}$--structure on $ H_i(BT)$ is trivial). By Corollary~\ref{cor: T-rep-stable}, each term in this direct sum decomposition is generated in stage $k$, and hence the same is true for the sum.
For the last statement of the Theorem, note that we have a homeomorphism
$$EG\times_G ({\rm Hom}({\mathbb Z}^n, G)_1/S_n) \cong (EG\times_G {\rm Hom}({\mathbb Z}^n, G)_1)/S_n,$$
where on the right, $S_n$ acts trivially on $EG$. The statement now follows from
Proposition~\ref{coh-quot2},
because $EG\times_G {\rm Hom}({\mathbb Z}^n, G)_1$ has the homotopy type of a CW complex; indeed it is the geometric realization of a simplicial space $k\mapsto (G^{k+1} \times {\rm Hom}({\mathbb Z}^n, G)_1)/G$, and each of these quotients is triangulable
by Schwarz~\cite{Schwarz} (as in the proof of Theorem~\ref{thm: stability for BcomG/G and ComG/G}).
\end{proof}
\begin{rmk} In Sections~\ref{sec: nil} and~\ref{sec: covers} we extend the homological stability results in this article in several directions. Remarks~\ref{rmk: eqvt11} and~\ref{rmk: eqvt12} discuss extensions of the results in the present section.
\end{rmk}
\section{Stability bounds for classical sequences of Lie groups}\label{sec: stable range for Hom}
In this section, we derive homological stability bounds for $\{{\rm Hom}({\mathbb Z}^n, G_r)\}_{r\geqslant 1}$, where $n$ is fixed,
using the Eilenberg--Moore spectral sequences associated to the fibrations
\begin{equation}\label{eq: EM} {\rm Hom}({\mathbb Z}^n, G_r)\longrightarrow {\rm Hom}({\mathbb Z}^n, G_r)_{hG_r}\longrightarrow BG_r
\end{equation}
(and similarly for $B_{\rm com} (G_r)$ and ${\rm Comm}(G_r)$ in place of ${\rm Hom}({\mathbb Z}^n, G_r)$).
We refer to McCleary~\cite{mccleary2001ssbook} and Smith~\cite{smith_Eilenberg-Moore-SS} for background on the
Eilenberg--Moore spectral sequence (in particular, see \cite[Theorem 6.1]{smith_Eilenberg-Moore-SS}.) We note that these sources place the spectral sequence in the second quadrant, so that the differentials are of ``cohomological type" and the total cohomological degree of $E_2^{p,q}$ is $p+q$. In order to simplify notation in the arguments to follow, we will reindex this as a first quadrant spectral sequence by reflecting across the vertical axis (that is, the $q$--axis).
\begin{thm}\label{thm: EM} Let $E\to B$ be a fibration, with $B$ simply connected, and consider a map $f\colon\thinspace X\to B$. Let $X \times_B E$ denote the pullback of $E$ along $f$. If all four spaces are of finite type, then there is a first quadrant spectral sequence with
$$E_2^{p,q} = {\rm Tor}^{H^*(B; {\mathbb Q})}_{p,q} (H^*(X; {\mathbb Q}); H^*(E; {\mathbb Q}))$$
converging to $H^*(X \times_B E; {\mathbb Q})$. The differential on the $m^{\textrm th}$ page has the form
$$d_m \colon\thinspace E_m^{p,q}\longrightarrow E_m^{p-m,q-m+1}.$$
A commutative diagram of pullback squares induces a map of spectral sequences, and on the $E_2$ page this map agrees with the induced map between ${\rm Tor}$ groups.
\end{thm}
We will only need the last statement for the case when $X$ is a point, and we spell out the statement in more detail below (Corollary~\ref{cor: EM}).
Some additional comments are in order. First, convergence means that for each $k\geqslant 0$,
there exists $M = M(k)$ such that for $m\geqslant M(k)$, the groups
$E_m^{p,q}$ with $q=p+k$ form the associated graded group of a filtration on
$H^{k} (X \times_B E;{\mathbb Q})$. Next, to understand the groups ${\rm Tor}^{H^*(B; {\mathbb Q})}_{p,q} (H^*(X; {\mathbb Q});
H^*(E; {\mathbb Q}))$, we consider $H^*(X; {\mathbb Q})$ as a graded module over the
graded ring $H^*(B;{\mathbb Q})$ via the map $f^*$. Given a graded
module $M$ over a graded ring $R$, there exists a resolution of $M$ by
grading-preserving maps between graded free $R$--modules (here freeness just
refers to the underlying ungraded module). Tensoring such a resolution with
$H^*(E;{\mathbb Q})$ (over $H^*(B; {\mathbb Q})$) yields a graded chain complex. If we consider the
sub-chain complex consisting of elements in grading $q$, then the $p^{\textrm th}$
homology of this complex is independent of the chosen resolution, and is the
group ${\rm Tor}^{H^*(B; {\mathbb Q})}_{p,q} (H^*(X; {\mathbb Q}); H^*(E; {\mathbb Q}))$.
Taking $X$ to be a point yields the following special case of Theorem~\ref{thm: EM}.
\begin{cor}\label{cor: EM} Let $F\to E\to B$ be a fibration, with $B$ simply connected and
all spaces of finite type. Then there is a first quadrant spectral sequence with
$$E_2^{p,q} = {\rm Tor}^{H^*(B; {\mathbb Q})}_{p,q} ({\mathbb Q}; H^*(E; {\mathbb Q}))$$
converging to $H^*(F; {\mathbb Q})$. The differential on the $m^{\textrm th}$ page has the form
$$d_m \colon\thinspace E_m^{p,q}\longrightarrow E_m^{p-m,q-m+1}.$$
A commutative diagram
\begin{equation}\label{fib-map}
\begin{tikzcd}
F \arrow[d] \arrow[rr]
& & F'\arrow[d]\\
E \arrow[rd] \arrow{rr}{f}
& & E' \arrow[ld]
\\
&B
\\
\end{tikzcd}
\end{equation}
of fibrations over $B$ induces a
map between the associated spectral sequences, and on the $E_2$ page this map agrees with the map
$${\rm Tor}^{H^*(B; {\mathbb Q})}_{p,q} ({\mathbb Q}; H^*(E'; {\mathbb Q})) \longrightarrow {\rm Tor}^{H^*(B; {\mathbb Q})}_{p,q} ({\mathbb Q}; H^*(E; {\mathbb Q}))$$
induced by the map of $H^*(B; {\mathbb Q})$--modules $f^*\colon\thinspace H^*(E'; {\mathbb Q}) \to H^*(E; {\mathbb Q})$.
\end{cor}
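To illustrate the corollary in the simplest case, consider the universal bundle $S^1 \to ES^1 \to BS^1$, with $H^*(BS^1;{\mathbb Q}) \cong {\mathbb Q}[x]$, $|x| = 2$. Resolving ${\mathbb Q}$ over ${\mathbb Q}[x]$ by
$$0 \longleftarrow {\mathbb Q} \longleftarrow {\mathbb Q}[x] \stackrel{x\cdot}{\longleftarrow} \Sigma^{2} {\mathbb Q}[x] \longleftarrow 0$$
and tensoring with $H^*(ES^1;{\mathbb Q}) \cong {\mathbb Q}$ kills the differential (since $x$ acts as zero on ${\mathbb Q}$), so
$${\rm Tor}^{{\mathbb Q}[x]}_{0,0}({\mathbb Q};{\mathbb Q}) \cong {\mathbb Q}, \qquad {\rm Tor}^{{\mathbb Q}[x]}_{1,2}({\mathbb Q};{\mathbb Q}) \cong {\mathbb Q},$$
and all other groups vanish. The spectral sequence collapses, and these two groups sit in total cohomological degrees $q-p = 0$ and $1$, recovering $H^*(S^1;{\mathbb Q})$.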
The ${\rm Tor}$ groups in the spectral sequences used below are computed over the rational cohomology rings $H^*(BG_r)$.
We now recall the structure of these rings and of the maps
\begin{equation}\label{HBG} (Bi)^*\colon\thinspace H^*(BG_{r+1})\longrightarrow H^*(BG_r)\end{equation}
induced by the standard inclusions $i\colon\thinspace G_r \hookrightarrow G_{r+1}$.
\begin{prop}\label{prop: BG coh}
We have the following isomorphisms:
\begin{enumerate}
\item $H^*(B{\rm U}(r))\cong \mathbb{Q}[c_1, \ldots, c_r]$, with $|c_i| = 2i$.
\item $H^*(B{\rm SU}(r))\cong \mathbb{Q}[c_2, \ldots, c_r]$, with $|c_i| = 2i$.
\item $H^*(B{\rm Sp}(r))\cong \mathbb{Q}[p_1, \ldots, p_r]$, with $|p_i| = 4i$.
\item $H^*(B{\rm SO}(2r+1)) \cong \mathbb{Q}[p_1, \ldots, p_r]$, with $|p_i| = 4i$.
\item For $r>1$, $H^*(B{\rm SO}(2r))\cong \mathbb{Q}[p_1, \ldots, p_{r-1}, y_{2r}]$, with $|p_i| = 4i$, $|y_{2r}| = 2r$.
\end{enumerate}
The maps (\ref{HBG}) have the following behavior on the above polynomial generators:
\begin{enumerate}
\item For $G_r = {\rm U}(r)$ or ${\rm SU}(r)$, we have $c_i \mapsto c_i$ for $i\leqslant r$ and $c_{r+1}\mapsto 0$.
\item For $G_r = {\rm Sp}(r)$ or ${\rm SO}(2r+1)$, we have $p_i \mapsto p_i$ for $i\leqslant r$ and $p_{r+1}\mapsto 0$.
\item For $G_r = {\rm SO}(2r)$ and $r>1$, we have $p_i \mapsto p_i$ for $i\leqslant r-1$ and $p_{r}, y_{2r+2}\mapsto 0$.
\end{enumerate}
\end{prop}
\begin{proof} This follows from the arguments in~\cite[\S III.3]{Toda-Mimura}.
Briefly, one first calculates $H^* (G_r)$ by analyzing the spectral sequence for the fibration
$$G_r\longrightarrow G_{r+1} \longrightarrow G_{r+1}/G_r,$$
finding that it is exterior on generators in one degree less than the above polynomial generators for $H^*(BG_r)$. The structure of these spectral sequences also determines the map $H^*(G_{r+1})\to H^*(G_r)$. The spectral sequences for the fibrations
$$G_r\longrightarrow EG_r \longrightarrow BG_r$$
then determine the structure of $H^*(BG_r)$, and a comparison of these spectral sequences for $r$ and $r+1$ shows that the maps (\ref{HBG}) are determined by the corresponding maps $G_r\hookrightarrow G_{r+1}$ between the fibers of these fibrations.
\end{proof}
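For instance, in the case $G_r = {\rm U}(r)$ this procedure yields the classical computation
$$H^*({\rm U}(r);{\mathbb Q}) \cong \Lambda (z_1, z_3, \ldots, z_{2r-1}), \qquad |z_{2i-1}| = 2i-1,$$
with the exterior generator $z_{2i-1}$ transgressing to the Chern class $c_i$ in the spectral sequence for ${\rm U}(r)\to E{\rm U}(r)\to B{\rm U}(r)$.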
\begin{rmk} In each of the above cases, we can rename the polynomial generators as $x_1, \ldots, x_r$ so that their degrees are non-decreasing. This will be implicit in the arguments to follow.
For $G_r = {\rm SO}(2r)$, there is sometimes an ambiguity in this choice of ordering, since $|y_{2r}|= |p_{r/2}|$ when $r$ is even. This will not affect the arguments.
\end{rmk}
\begin{lem}\label{lem: EM}
The map
$$H^k_{G_{r+1}} ({\rm Hom}({\mathbb Z}^n, G_{r+1})_1) \longrightarrow H^k_{G_{r}} ({\rm Hom}({\mathbb Z}^n, G_{r+1})_1)$$
induced by the standard inclusion $G_r\hookrightarrow G_{r+1}$ is an isomorphism for
$k\leqslant 2r$, and similarly for $B_{\rm com}(-)$ or ${\rm Comm}(-)$ in place of ${\rm Hom}({\mathbb Z}^n, -)$.
\end{lem}
\begin{proof} We will study the Eilenberg--Moore spectral sequence associated to the pullback diagram
\begin{center}
\begin{tikzcd}
EG_{r}\times_{G_{r}} {\rm Hom}({\mathbb Z}^n, G_{r+1})_1 \arrow{r} \arrow{d}
& EG_{r+1}\times_{G_{r+1}} {\rm Hom}({\mathbb Z}^n, G_{r+1})_1 \arrow{d}
\\
BG_{r} \arrow{r} &BG_{r+1}
\end{tikzcd}
\end{center}
induced by the standard inclusion $i\colon\thinspace G_r \hookrightarrow G_{r+1}$.
To describe the $E_2$ page of this spectral sequence, we need a graded resolution of $H^*(BG_{r}; {\mathbb Q})$ as a graded module over $H^*(BG_{r+1}; {\mathbb Q})$. In all cases other than $G_r = {\rm SO}(2r)$, we see that $H^*(BG_{r}; {\mathbb Q})$ is simply the quotient of $H^*(BG_{r+1}; {\mathbb Q})$ by the ideal generated by the polynomial generator in the highest grading; for ease of notation we denote this generator by $x$. This gives us a 2-step free resolution
\begin{equation}\label{eq: res}
0 \longleftarrow H^*(BG_{r}; {\mathbb Q})\stackrel{(Bi)^*}{\longleftarrow} H^*(BG_{r+1}; {\mathbb Q}) \stackrel{x\cdot}{\longleftarrow} \Sigma^{|x|} H^*(BG_{r+1}; {\mathbb Q})
\end{equation}
where $\Sigma^d$ is the operator on graded modules that adds $d$ to all gradings, and the right-hand map in (\ref{eq: res}) is multiplication by $x$.
Note that the shift in grading in the final term of this sequence makes multiplication by $x$ grading-preserving. To compute the ${\rm Tor}$ groups on the $E_2$ page of the spectral sequence, we tensor this resolution (over $H^*(BG_{r+1})$) with
$H^*_{G_{r+1}}({\rm Hom}({\mathbb Z}^n, G_{r+1})_1; {\mathbb Q})$, which gives a graded chain complex concentrated in degrees $p=0$ and $p=1$.
The shift in grading implies that $E^{1,q}_2 = 0$ for $q \leqslant 2r+1$, so that for $k \leqslant 2r$ the only group contributing to the line of total cohomological degree $k$ (namely the line $q=p+k$) is $E_2^{0,k} \cong H_{G_{r+1}}^k ({\rm Hom}({\mathbb Z}^n, G_{r+1})_1; {\mathbb Q})$. Moreover, there is no room for non-trivial differentials in the spectral sequence, so this gives the desired isomorphism.
For $G_r = {\rm SO}(2r)$, let
$A = {\mathbb Q}[p_1, \ldots, p_{r}, y_{2r+2}] \cong H^*(BG_{r+1})$.
For $z\in A$, set $A\tilde{z}:=\Sigma^{|z|} A$, and write elements in $A\tilde{z}$ in the form $a\cdot\tilde{z}$ (so $|a\cdot\tilde{z}| = |a|+|z|$).
Similarly, we define $A\,\tilde{z}\wedge \tilde{w}: =\Sigma^{|z|+|w|} A$, and elements in $A\,\tilde{z}\wedge \tilde{w}$ are written in the form $a\cdot \tilde{z}\wedge \tilde{w}$.
We can resolve ${\mathbb Q}[p_1, \ldots, p_{r-1}, y_{2r}]$ over $A$ as follows:
\begin{center}
\begin{tikzcd}[column sep=3ex, nodes={inner sep=2pt}]
0
& {\mathbb Q}[p_1, \ldots, p_{r-1}, y_{2r}] \ar{l} \ar[r, leftarrow, "\,\,\alpha"]
& A \oplus \Sigma^{2r}A \ar[r, leftarrow, "\,\,\beta"]
& ( A\tilde{y}_{2r+2} \oplus A\tilde{p}_r) {\oplus} ( \Sigma^{2r} A \tilde{y}_{2r+2} \oplus\Sigma^{2r}A\tilde{p}_r)
\\
&&& (A\, \tilde{p}_r \wedge\tilde{y}_{2r+2} ) {\oplus} \Sigma^{2r} (A\,\tilde{p}_r\wedge\tilde{y}_{2r+2}) \ar[u, "\gamma"]
\end{tikzcd}
\end{center}
The maps $\alpha$, $\beta$, and $\gamma$, are defined by
$$\alpha (a, b) = (Bi)^*(a)+((Bi)^*(b))y_{2r},$$
$$\beta ((a\cdot \widetilde{y}_{2r+2}, a'\cdot \widetilde{p}_r ), (b\cdot \widetilde{y}_{2r+2}, b'\cdot \widetilde{p}_r )) = ( a y_{2r+2} + a' p_r , b y_{2r+2} + b' p_r ),$$
and
$$\gamma (a \cdot \widetilde{p}_r \wedge\widetilde{y}_{2r+2}, b\cdot \widetilde{p}_r \wedge\widetilde{y}_{2r+2} )
= ((ap_r \cdot \widetilde{y}_{2r+2}, -a y_{2r+2} \cdot \widetilde{p}_r), (bp_r \cdot \widetilde{y}_{2r+2}, -b y_{2r+2} \cdot \widetilde{p}_r) ).$$
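A direct check confirms that these maps form a chain complex; for instance, on the first pair of summands,
$$\beta\gamma (a \cdot \widetilde{p}_r \wedge\widetilde{y}_{2r+2}, 0) = \beta \left((ap_r \cdot \widetilde{y}_{2r+2}, -a y_{2r+2} \cdot \widetilde{p}_r), (0,0)\right) = (a p_r y_{2r+2} - a y_{2r+2} p_r, 0) = 0,$$
and similarly $\alpha\beta = 0$ because $(Bi)^*$ annihilates both $p_r$ and $y_{2r+2}$.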
The shift in grading again implies that $E^{1,q}_2 = 0$ for $q \leqslant 2r+1$,
and also that $E^{2,q}_2 = 0$ for $q \leqslant 6r+1$,
so for $k \leqslant 2r$ the only group contributing to the line of total cohomological degree $k$ (namely the line $q=p+k$) is $E_2^{0,k} \cong H_{G_{r+1}}^k ({\rm Hom}({\mathbb Z}^n, G_{r+1})_1; {\mathbb Q})$. Moreover, there is no room for non-trivial differentials into the groups $E^{0,q}_m$ for $q \leqslant 2r$.
\end{proof}
In the arguments below, we will resolve ${\mathbb Q} = H^*(\textrm{pt}; {\mathbb Q})$ as a module over $H^*(BG_r)$
using the \emph{Koszul complex}.
We now recall the details of this construction, following
Lang~\cite[Section XXI.4]{Langalgebra}. Lang works in the ungraded setting,
so we will explain how to include gradings. Let $A={\mathbb Q}[x_1, \ldots, x_r]$, with
grading satisfying $|x_1| \leqslant |x_2| \leqslant \cdots \leqslant |x_r|$ (we give constant polynomials grading zero).
We now recall the definition of the (augmented) Koszul
complex of $A$. This is a graded chain complex
\begin{center}
\begin{tikzcd}
0 & {\mathbb Q} \ar[l] \ar[r, leftarrow, "d_{0}"]& A = F_0 \ar[r, leftarrow, "d_{1}"]\ar[l]& \cdots \ar[r, leftarrow, "d_{r-1}"] & F_{r-1} \ar[r, leftarrow, "d_{r}"] &F_r
\end{tikzcd}
\end{center}
whose unaugmented portion (starting at $A$) will be denoted ${\it \mathbf k}(A)$.
We define $F_p = F_p (A)$ to be the free $A$--module of rank ${r \choose p}$ (in
particular, $F_0 = A$). The differential $d_0$ is simply the augmentation, that is, the unital surjection $A\to {\mathbb Q}$ sending each $x_i$ to zero. In order to describe the grading on $F_p$ and the
differential $d_p \colon\thinspace F_p \to F_{p-1}$ for $p>0$, we adopt the following notation. For
$p\geqslant 1$, we will view $F_p$ as the free $A$--module on the set of
formal symbols $\tilde{x}_{i_1} \wedge \tilde{x}_{i_2} \wedge \cdots \wedge \tilde{x}_{i_p}$ with $1 \leqslant i_1 < i_2 < \ldots < i_p \leqslant r$.
The grading on the submodule
$A\tilde{x}_{i_1} \wedge \tilde{x}_{i_2} \wedge \cdots \wedge \tilde{x}_{i_p}\leqslant F_p$
is defined so that the map $a\cdot \tilde{x}_{i_1} \wedge \tilde{x}_{i_2} \wedge \cdots \wedge \tilde{x}_{i_p}\mapsto a$ gives an isomorphism
$$A\tilde{x}_{i_1} \wedge \tilde{x}_{i_2} \wedge \cdots \wedge \tilde{x}_{i_p} \cong \Sigma^{|x_{i_1}|+\cdots+| x_{i_p}|} A$$
of graded $A$--modules.
The differential $d_p$ is defined on generators by
$$d_p(\tilde{x}_{i_1} \wedge \tilde{x}_{i_2} \wedge \cdots \wedge \tilde{x}_{i_p}) = \sum_{j=1}^p (-1)^{j-1} x_{i_j} \cdot \tilde{x}_{i_1} \wedge \cdots\wedge \widehat{\tilde{x}_{i_j}} \wedge \cdots \wedge \tilde{x}_{i_p}.$$
The definition of the gradings ensures that $d_p$ preserves them,
so the subgroups $$(F_p)_q \leqslant F_p$$
consisting of homogeneous elements in grading $q$ form a subcomplex
${\it \mathbf k}(A)_q$ of ${\it \mathbf k}(A)$.
Since the sequence $x_1, \ldots, x_r$ is regular, the Koszul complex is
exact~\cite[Theorem XXI.4.6]{Langalgebra}. Note that exactness in the
ungraded sense immediately implies that the portion of the sequence in each
homogeneous degree is also exact.
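For example, when $r = 2$ the complex takes the form
$$0 \longleftarrow {\mathbb Q} \longleftarrow A \stackrel{d_1}{\longleftarrow} A\tilde{x}_1 \oplus A\tilde{x}_2 \stackrel{d_2}{\longleftarrow} A\,\tilde{x}_1\wedge\tilde{x}_2 \longleftarrow 0,$$
with $d_1 (\tilde{x}_i) = x_i$ and $d_2 (\tilde{x}_1\wedge\tilde{x}_2) = x_1\cdot \tilde{x}_2 - x_2 \cdot \tilde{x}_1$; exactness at $F_1$ amounts to the familiar fact that $a_1 x_1 + a_2 x_2 = 0$ in ${\mathbb Q}[x_1, x_2]$ forces $(a_1, a_2) = (b x_2, -b x_1)$ for some $b\in A$.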
The next result will give a vanishing curve in our Eilenberg--Moore spectral sequences.
\begin{lem}\label{grading}
The Koszul resolution of $H^*(BG_r)$ satisfies $(F_p)_q = 0$ for
$q < p(p+1)$ $($except in the case $G_r = {\rm SO}(2) \cong S^1$$)$.
\end{lem}
\begin{proof}
The minimum shift in grading among the free summands in $F_p$ occurs for the summand corresponding to $\tilde{x}_1\wedge \tilde{x}_2 \wedge \cdots \wedge \tilde{x}_p$, where this shift is $\sum_{i=1}^p |x_i|$. By inspection, this sum is always smallest in the case $G_r = {\rm U}(r)$, where $|x_i| = 2i$ and hence $\sum_{i=1}^p |x_i| = p(p+1)$.
\end{proof}
We can now derive our stability bounds for the homology of spaces of commuting elements in the classical groups.
\begin{thm}\label{Hom-bound}
For each $n\geqslant 1$, the sequences
$$\{{\rm Hom}({\mathbb Z}^n, G_r)_1\}_{r\geqslant 1}, \,\,\,\,\{B_{\rm com} (G_r)_1\}_{r\geqslant 1},
\,\,\,\,\textrm{and} \,\,\,\,
\{{\rm Comm} (G_r)_1\}_{r\geqslant 1}$$
satisfy strong rational homological stability, and in homological degree $k$, stability holds once $r - \lfloor \sqrt{r} \rfloor \geqslant k$.
\end{thm}
\begin{proof} We work in cohomology. The arguments for ${\rm Hom}({\mathbb Z}^n, -)$, $B_{\rm com}(-) $, and ${\rm Comm} (-)$ are completely analogous, so we will focus on ${\rm Hom} ({\mathbb Z}^n, -)$.
We consider the map of Eilenberg--Moore spectral sequences associated to the diagram of fibrations
\begin{equation}
\begin{tikzcd} \label{fib-diag}
{\rm Hom}({\mathbb Z}^n, G_r)_1 \arrow[d] \arrow{rr}{i}
& & {\rm Hom}({\mathbb Z}^n, G_{r+1})_1\arrow[d]\\
({\rm Hom}({\mathbb Z}^n, G_r)_1)_{hG_r} \arrow[rd] \arrow{rr}{j}
& & ({\rm Hom}({\mathbb Z}^n, G_{r+1})_1)_{hG_{r}} \arrow[ld]
\\
&BG_{r}
\\
\end{tikzcd}
\end{equation}
where the horizontal maps $i$ and $j$ are induced by the standard inclusion.
Consider the commutative diagram
\begin{center}
\begin{tikzcd}
H^k_{G_{r+1}} \left({\rm Hom}({\mathbb Z}^n, G_{r+1})_1\right) \arrow{dr} \arrow{d}{\iota} \\
H^k_{G_{r}} \left({\rm Hom}({\mathbb Z}^n, G_{r+1})_1\right) \arrow{r}{j^*}
& H^k_{G_{r}} \left({\rm Hom}({\mathbb Z}^n, G_{r})_1\right),
\end{tikzcd}
\end{center}
in which $\iota$ is the map from Lemma~\ref{lem: EM}, and hence is an isomorphism for $r\geqslant k/2$. By Theorem~\ref{thm: equiv.coh. stability Hom, Comm,Bcom}, the composite $j^*\circ \iota$ is an isomorphism for $r\geqslant k$, and we conclude that $j$ induces an isomorphism in cohomology for $r\geqslant k$.
To simplify notation,
let $\H^*_r$ and $\H^*_{r+1}$ be the graded $H^*(BG_r)$--modules
$$\H^*_r = H_{G_r}^* {\rm Hom}({\mathbb Z}^n, G_r)_1 \,\,\,\, \textrm{and}\,\,\,\, \H^*_{r+1} = H_{G_r}^* {\rm Hom}({\mathbb Z}^n, G_{r+1})_1,$$
with module structures induced by the projections from the homotopy orbit spaces to $BG_r$. Denote the spectral sequences associated to the fibrations on the left and right of Diagram (\ref{fib-diag}) by $\{E_m^{p,q} (r), d_m = d_m (r)\}$ and $\{E_m^{p,q} (r+1), d_m = d_m (r+1)\}$, respectively.
\begin{claim}\label{claim: stab-reg}
The map
\begin{equation}\label{eqn: E2-map} E_2^{p,q} (r+1) \longrightarrow E_2^{p,q} (r)
\end{equation}
induced by (\ref{fib-diag}) is an isomorphism if $q\leqslant r + p(p+1)$.
\end{claim}
\begin{proof} This map of ${\rm Tor}$ groups arises from
the map of graded chain complexes
\begin{equation}\label{kmap}{\it \mathbf k}(A)\otimes_{A} \H^*_{r+1} \longrightarrow {\it \mathbf k}(A)\otimes_{A} \H^*_r\end{equation}
induced by $j^*\colon\thinspace \H^*_{r+1}\to \H^*_r$.
Say $q \leqslant r + p(p+1)$.
Recall that $F_p$ is a direct sum of copies of $A$, with gradings shifted upwards by at least $p(p+1)$. Hence the degree-$p$ terms of ${\it \mathbf k}(A)\otimes_{A} \H^*_{r+1}$ and ${\it \mathbf k}(A)\otimes_{A} \H^*_{r}$ are direct sums of copies of $\H^*_{r+1}$ and $\H^*_r$, respectively, and the map (\ref{kmap}) is simply a direct sum of copies of the map $\H^*_{r+1}\to \H^*_r$, but again with grading shifted up by at least $p(p+1)$. In grading $q$, then, (\ref{kmap}) splits as a sum of maps of the form $j^* \colon\thinspace \H^l_{r+1} \to \H^l_{r}$, where $l\leqslant q-p(p+1)\leqslant r$, and this map is an isomorphism by Theorem \ref{thm: equiv.coh. stability Hom, Comm,Bcom}.
\end{proof}
A triple of integers $(p,q,m)$ with $m\geqslant 2$, thought of as a point on page $m$ of the spectral sequence(s), will be called \emph{stable} if the map
\begin{equation}\label{eqn: Em-map} E_m^{p,q} (r+1) \longrightarrow E_m^{p,q} (r)
\end{equation}
is an isomorphism.
So Claim~\ref{claim: stab-reg} asserts that all triples $(p,q,2)$ with $q\leqslant r + p(p+1)$ are stable, and we wish to prove that all points of the form $(p, q, m)$ with $q\leqslant r-\lfloor \sqrt{r} \rfloor+p$ are stable. To simplify notation and terminology, we refer to $q-p$ as the (total) \emph{cohomological degree} of the point $(p,q,m)$, and we refer to the points
$\{(p,q,m)\,:\, q=p+k\}$ as the line of cohomological degree $k$ (on page $m$).
Note that each differential out of the line of cohomological degree $k$ maps to the line of cohomological degree $k+1$.
\begin{claim}
On page $2+s$, all points of cohomological degree at most $r-s$ are
stable ($s=0, 1, \ldots$).
\end{claim}
\begin{proof} We use induction on $s$. For $s=0$, this is a weaker statement than
Claim~\ref{claim: stab-reg}. Assume the claim for some $s\geqslant 0$, and consider a
point $(p,q,2+s+1)$ with cohomological degree at most $r-(s+1)$.
We need to prove that
\begin{equation}\label{eqn: Em+1-map} E_{2+s+1}^{p,q} (r+1) \longrightarrow E_{2+s+1}^{p,q} (r)
\end{equation}
is an isomorphism. Let $m=2+s$. The map (\ref{eqn: Em+1-map}) is simply the map on homology induced by the
map of chain complexes
\begin{equation}\label{Em}
\begin{tikzcd}
E_{m}^{p-m,q-m+1} (r+1) \arrow[d] \arrow[r, leftarrow, "d_{m}"] &
E_{m}^{p,q} (r+1) \arrow[d] \arrow[r, leftarrow, "d_{m}"]
& E_{m}^{p+m,q+m-1} (r+1) \arrow[d]
\\
E_{m}^{p-m,q-m+1} (r) \arrow[r, leftarrow, "d_{m}"] & E_{m}^{p,q} (r) \arrow[r, leftarrow, "d_{m}"]
& E_{m}^{p+m,q+m-1} (r).
\end{tikzcd}
\end{equation}
The point $(p,q,m+1)$ has cohomological degree $q-p\leqslant r-(s+1)$, and the points
$$(p-m,q-m+1, m),\,\,\, (p,q,m), \,\,\, \textrm{and} \,\,\,(p+m, q+m-1,m)$$
have cohomological degrees $(q-p)+1$, $q-p$, and $(q-p)-1$, respectively,
so all three are
stable by the induction hypothesis.
Hence the vertical maps in (\ref{Em}) are isomorphisms, and it follows that the induced map in homology is an isomorphism as well.
\end{proof}
Letting $s=\lfloor \sqrt{r} \rfloor$, we find that all points of cohomological degree at most $r-\lfloor \sqrt{r} \rfloor$
are stable on page $m = 2+\lfloor \sqrt{r} \rfloor$.
Next, we claim that for $m\geqslant 2+\lfloor \sqrt{r} \rfloor$, all differentials in and out of the
lines of cohomological degree at most $r-\lfloor \sqrt{r} \rfloor$ are zero.
Recall that
by Lemma~\ref{grading}, all groups $E^{p,q}_m$ with $q<p(p+1)=p^2+p$ are zero.
The line of cohomological degree $k$ (that is, the line $q=p+k$) intersects the curve $q=p^2+p$ at $p=\sqrt{k}$, so
all groups of the form $E_m^{p,p+k}$ with $p>\sqrt{k}$ are zero.
In particular, if $p>\sqrt{r}$ then
the groups $E_m^{p,q}$ with cohomological degree at most $r-\lfloor \sqrt{r} \rfloor$
are zero
(since in this situation we have $p> \sqrt{r} \geqslant \sqrt{r-\lfloor \sqrt{r}\rfloor} \geqslant \sqrt{q-p}$).
On pages $m\geqslant 2+\lfloor \sqrt{r} \rfloor$, differentials map at least $2+\lfloor \sqrt{r} \rfloor>\sqrt{r}$ units in the horizontal direction.
Hence on pages $m\geqslant 2+\lfloor \sqrt{r} \rfloor$, all differentials out
of non-zero groups with cohomological degree at most $r-\lfloor \sqrt{r} \rfloor$ map to trivial groups (in columns $p<0$), while all differentials into non-zero groups with cohomological degree at most $r-\lfloor \sqrt{r} \rfloor$ map out of trivial groups (in columns $p>\sqrt{r}$ and cohomological degree at most $r-\lfloor \sqrt{r} \rfloor - 1$). This proves the claim.
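The counting in this claim can be sanity-checked numerically (this is an illustrative sketch, not part of the proof). Using only the first-quadrant vanishing and the vanishing region $q < p^2+p$ noted above, the Python snippet below verifies, for a range of $r$, that on page $m = 2+\lfloor \sqrt{r} \rfloor$ every differential into or out of a possibly non-zero group of cohomological degree at most $r-\lfloor \sqrt{r} \rfloor$ has trivial source or target.

```python
import math

def may_be_nonzero(p, k):
    # By the grading lemma, E_m^{p, p+k} can be nonzero only when 0 <= p <= sqrt(k).
    return 0 <= p and p * p <= k

def differentials_vanish(r):
    """Check that on page m = 2 + floor(sqrt(r)), every differential touching a
    possibly-nonzero group of cohomological degree <= r - floor(sqrt(r)) has
    trivial source or target."""
    m = 2 + math.isqrt(r)
    for k in range(0, r - math.isqrt(r) + 1):      # degrees at most r - floor(sqrt(r))
        for p in range(0, math.isqrt(k) + 1):      # columns where E_m^{p, p+k} may be nonzero
            if p - m >= 0:                          # outgoing d_m must land in a column p - m < 0
                return False
            if k >= 1 and may_be_nonzero(p + m, k - 1):
                return False                        # incoming d_m starts in column p + m, degree k - 1
    return True

assert all(differentials_vanish(r) for r in range(1, 300))
```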
It follows that for each integer $k$ with $0\leqslant k \leqslant r-\lfloor \sqrt{r} \rfloor$, the groups
$$E_{2+\lfloor \sqrt{r} \rfloor}^{p,q} (r+1) \,\,\,\, \textrm{and} \,\,\,\, E_{2+\lfloor \sqrt{r} \rfloor}^{p,q} (r)$$
of cohomological degree
$k$ form the associated graded groups of filtrations on $H^k ({\rm Hom}({\mathbb Z}^n, G_{r+1})_1)$ and $H^k ({\rm Hom}({\mathbb Z}^n, G_{r})_1)$ (respectively). The maps
\begin{equation*} E_{2+\lfloor \sqrt{r} \rfloor}^{p,q} (r+1) \longrightarrow E_{2+\lfloor \sqrt{r} \rfloor}^{p,q} (r)
\end{equation*}
are
the induced maps between the associated graded groups of these filtrations, and we have shown that these maps are isomorphisms. It follows that
$$H^k ({\rm Hom}({\mathbb Z}^n, G_{r+1})_1)\longrightarrow H^k ({\rm Hom}({\mathbb Z}^n, G_{r})_1)$$
is an isomorphism as well, completing the proof.
\end{proof}
\begin{rmk}\label{rmk: +1} The argument above in fact yields the slightly better bound
$$r - \lfloor \sqrt{r} \rfloor + 1 \geqslant k,$$
except in the case $r=1$, $k=1$, and $G_1 = {\rm SO}(2)$.
\end{rmk}
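In concrete terms, the stable range $r - \lfloor \sqrt{r} \rfloor \geqslant k$ can be inverted: for each degree $k$ there is a smallest rank from which stability is guaranteed. The short Python sketch below (with a hypothetical helper \texttt{stable\_from}, introduced only for illustration) computes this onset.

```python
import math

def stable_from(k):
    """Smallest r with r - floor(sqrt(r)) >= k, i.e. the onset of the stable range
    in homological degree k."""
    r = k
    while r - math.isqrt(r) < k:
        r += 1
    return r

# onset of stability in degrees 1 through 5
assert [stable_from(k) for k in range(1, 6)] == [2, 3, 5, 6, 7]
```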
\begin{rmk} The methods from Section~\ref{stab} may be used to show that the sequences
$r\mapsto H_k (G_r/T_r \times T_r^n)$ extend to uniformly representation stable ${\rm FI}_W$--modules (this is similar to the arguments in Proposition~\ref{prop: stability J} and Theorem~\ref{thm: equiv.coh. stability Hom, Comm,Bcom}). By Theorem~\ref{quot-stab}, this implies homological stability for the sequences $r\mapsto {\rm Hom}(\mathbb{Z}^n, G_r)$. However, this approach does not appear to yield a bound on the stable range, because we have limited information about the ${\rm FI}_W$--module $r\mapsto H_k (G_r/T_r)$. For instance, we do not know a bound on its stable range. Similar comments apply to the other sequences considered in this section.
\end{rmk}
\section{Nilpotent representations and noncompact Lie groups}\label{sec: nil}
In this section, we extend our stability results to certain noncompact Lie groups, and to finitely generated nilpotent discrete groups.
Let $G$ be a complex reductive affine algebraic group (that is, the complexification of a compact Lie group). For a discrete group $\pi$,
let $\mathfrak{X}_{\pi} (G)$ denote the $G$--character variety of
$\pi$, defined as the GIT quotient ${\rm Hom}(\pi,G)/\!\!/G$ of ${\rm Hom}(\pi,G)$ by $G$. This space is homeomorphic to the subspace of closed orbits in ${\rm Rep}(\pi, G) = {\rm Hom}(\pi, G)/G$, and the inclusion is in fact a homotopy equivalence, and hence a homology isomorphism~\cite[Proposition 3.4]{FLR}. We denote by
$ \mathfrak{X}_{\pi} (G)_1$ the connected component of the trivial
representation.
The results in this section are based on the following theorem, as well as earlier work of Florentino--Lawton~\cite{florentino2014topology} and
Pettet--Souto~\cite{pettet2013souto} in the abelian case.
\begin{thm}[{Bergeron~\cite{bergeron2015topology}}]\label{thm: bergeron}
Let $\Gamma$ be a finitely generated nilpotent group and let $G$ be the group of
complex or real points of a $($possibly disconnected$)$ reductive linear algebraic group,
defined over ${\mathbb R}$ in the latter case. If $K$ is a maximal compact subgroup of $G$,
then there is a $K$--equivariant strong deformation retraction of ${\rm Hom}(\Gamma,G)$ onto
${\rm Hom}(\Gamma,K)$. In particular, ${\rm Hom}(\Gamma,G)_1$ deformation retracts to
${\rm Hom}(\Gamma,K)_1$, and $\mathfrak{X}_{{\mathbb Z}^n} (G)_1$ deformation retracts to
${\rm Rep}({\mathbb Z}^n, K)_1$.
\end{thm}
We will mainly be interested in the free nilpotent groups, which we now define.
The descending central series of the free group $F_n$ is given by
\begin{equation}\label{eqn: descending central series}
F_n = \Gamma^1 \rhd \Gamma^2 \rhd \Gamma^3 \rhd \cdots,
\end{equation}
where $\Gamma^2=[F_n,F_n]$ and inductively $\Gamma^{q+1}=[F_n,\Gamma^q]$, for all
$q \geqslant 2$. The free nilpotent groups, then, are the quotients $F_n/\Gamma^q$.
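For a concrete feel for the descending central series (\ref{eqn: descending central series}), recall that $F_2/\Gamma^3$ is the integral Heisenberg group of upper unitriangular $3\times 3$ matrices. The Python sketch below (an illustration only, not used elsewhere) computes the series in the finite quotient with entries mod 3: the commutator subgroup is the centre, of order 3, and the next term of the series is trivial, exhibiting nilpotency of class 2.

```python
P = 3  # work in the mod-3 quotient of the integral Heisenberg group

# (a, b, c) encodes the unitriangular matrix [[1, a, b], [0, 1, c], [0, 0, 1]]
def mul(x, y):
    a, b, c = x
    d, e, f = y
    return ((a + d) % P, (b + e + a * f) % P, (c + f) % P)

def inv(x):
    a, b, c = x
    return ((-a) % P, (a * c - b) % P, (-c) % P)

def comm(x, y):  # commutator x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

def closure(gens):  # subgroup generated by gens (finite group, so products suffice)
    S = {(0, 0, 0)} | set(gens)
    while True:
        new = {mul(x, y) for x in S for y in S} - S
        if not new:
            return S
        S |= new

G = {(a, b, c) for a in range(P) for b in range(P) for c in range(P)}
gamma2 = closure({comm(x, y) for x in G for y in G})       # [G, G]
gamma3 = closure({comm(x, y) for x in G for y in gamma2})  # [G, [G, G]]

assert len(G) == 27 and len(gamma2) == 3 and len(gamma3) == 1
```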
We will also use the following interesting result of Bergeron and Silberman.
Their result works for any nilpotent group, but we state it only for $F_n/\Gamma^q.$
\begin{thm}[{Bergeron-Silberman~\cite{bergeron2016note}}]\label{thm: bergeron-silberman}
Let $K$ be a compact Lie group. Then the abelianization map
$F_n/\Gamma^q \longrightarrow {\mathbb Z}^n$ induces an inclusion
$${\rm Hom}({\mathbb Z}^n,K) \hookrightarrow {\rm Hom}(F_n/\Gamma^q,K)$$
for each $q\geqslant 2$, and on identity components this map is in fact a homeomorphism
$${\rm Hom}({\mathbb Z}^n,K)_1 \srm{\cong}{\rm Hom}(F_n/\Gamma^q,K)_1.$$
\end{thm}
In other words, if a homomorphism $F_n/\Gamma^q \to K$ lies in the path component of the trivial representation, then its image is in fact abelian.
Let $\{G_r\}_{r\geqslant 1}$ denote one of the classical infinite families of
compact, connected Lie groups -- namely $G_r = {\rm SU}(r)$, ${\rm U}(r)$, ${\rm SO}(2r+1)$,
${\rm Sp}(r)$, or ${\rm SO}(2r)$, and let $G_r ({\mathbb C})$ denote its complexification (explicitly, these groups are ${\rm SL}_r ({\mathbb C})$, ${\rm GL}_r ({\mathbb C})$, ${\rm SO}(2r+1, {\mathbb C})$, ${\rm Sp}(2r, {\mathbb C})$, and ${\rm SO}(2r, {\mathbb C})$, respectively).
Let $T_r ({\mathbb C}) = T (G_r ({\mathbb C})) \leqslant G_r ({\mathbb C})$ denote the complexification of $T_r\leqslant G_r$.
Since the standard inclusions $G_r\hookrightarrow G_{r+1}$ and the inclusions $T_r\hookrightarrow G_r$ are algebraic maps, they induce maps between complexifications, which restrict to maps $T (G_r ({\mathbb C}))\to T (G_{r+1} ({\mathbb C}))$. It is a standard fact that these inclusions in fact induce isomorphisms
$NT_r/T_r \srm{\cong} NT(G_r({\mathbb C}))/T(G_r({\mathbb C}))$ (in the semi-simple case, this follows from~\cite[\S 5.1.4, Problem 24]{Onishchik-Vinberg}). We will denote both Weyl groups by $W_r$.
Combining Theorems~\ref{thm: bergeron} and~\ref{thm: bergeron-silberman}, we see that for each of the classical sequences of Lie groups, and for each $n\geqslant 1$, the inclusion
$${\rm Hom}({\mathbb Z}^n, G_r)_1\hookrightarrow {\rm Hom}(F_n/\Gamma^q, G_r ({\mathbb C}))_1$$
is a homotopy equivalence, and the same holds for the character varieties. Combined with Theorem~\ref{Hom-bound} and Theorem~\ref{stability-wrt-r-Rep}, this yields the following corollary.
\begin{cor}\label{cor: nil complex}
Fix positive integers $n$ and $q$, with $q\geqslant 2$, and let $G_r$ be as above. The
sequences
$$r\mapsto {\rm Hom}(F_n/\Gamma^q, G_r ({\mathbb C}))_1 \,\,\, \textrm{ and }\,\,\,
r\mapsto\mathfrak{X}_{F_n/\Gamma^q} (G_r ({\mathbb C}))_1$$
are strongly rationally homologically stable, and in degree $k$, stability holds for
$r - \lfloor \sqrt{r} \rfloor \geqslant k$ in the former case and for $r\geqslant k$ in the latter case.
\end{cor}
The infinite-dimensional constructions from Section~\ref{sec: B(2,G) stability} also have nilpotent analogues.
The spaces
\begin{equation}\label{eqn: B_q-nil (G)}
B(q, G) :=|{\rm Hom}(F_\bullet/\Gamma^q,G)|
\end{equation}
were introduced in~\cite{adem2012commuting}, and were used to define \emph{nilpotent $K$--theory}~\cite{adem2015gomez,adem2017gomezlindtillman}.
(Note that the case $q=2$ corresponds to $B_{\rm com} G$.)
Since the spaces ${\rm Hom}(F_n/\Gamma^q,G)$
are not necessarily path-connected, we will focus on the
subspace $B(q, G)_1\subset B(q, G)$ defined by
$$
B(q, G)_1 :=|{\rm Hom}(F_\bullet/\Gamma^q,G)_1|.
$$
This gives filtrations of $BG$ as follows:
$$
B_{\rm com}(G) = B(2,G) \subseteq B(3, G) \subseteq B(4, G) \subseteq \cdots \subseteq BG
$$
and
$$
B_{\rm com}(G)_1 = B(2,G)_1 \subseteq B(3, G)_1 \subseteq B(4, G)_1 \subseteq \cdots \subseteq BG.
$$
\begin{prop} \label{prop: Bcom-Comm}
Let $G$ be a compact connected Lie group. Then the inclusion $G\hookrightarrow G({\mathbb C})$ induces homotopy equivalences
$$B_{\rm com} (G)_1 \srm{\simeq} B(q, G ({\mathbb C}))_1 \,\,\, \textrm{ and }\,\,\, {\rm Comm}(G)_1\srm{\simeq} {\rm Comm}(G ({\mathbb C}))_1$$
$($for each $q\geqslant 2)$.
\end{prop}
\begin{proof} Theorems~\ref{thm: bergeron} and~\ref{thm: bergeron-silberman} show that the inclusion of simplicial spaces
\begin{equation}\label{eqn: lwwe}B_{\rm com} (G)_1 \hookrightarrow B(q, G ({\mathbb C}))_1
\end{equation}
is a level-wise homotopy equivalence. The degeneracy maps for these simplicial spaces are inclusions of simplicial complexes
by the argument in Villarreal~\cite[Theorem 2.19]{villarreal2017cosimplicial} (see also \cite{hofmann2009triangulation} in the non-compact case).
By Lillig's Union Theorem~\cite{Lillig}, this implies that these simplicial spaces are \emph{proper}, and hence the map (\ref{eqn: lwwe}) is a homotopy equivalence by the results in~\cite[Appendix]{May-EGP}.
The spaces ${\rm Comm}(G)_1$ and ${\rm Comm}(G({\mathbb C}))_1$ are colimits of diagrams built from the spaces ${\rm Hom}(\mathbb{Z}^n, G)_1$ and ${\rm Hom}(\mathbb{Z}^n, G ({\mathbb C}))_1$, respectively, with maps induced by the coordinate projections of $\mathbb{Z}^{n+1}$ onto $\mathbb{Z}^{n}$. As these induced maps are algebraic maps between algebraic sets, they are cofibrations, and hence these colimits are homotopy equivalent to the corresponding homotopy colimits. The inclusions ${\rm Hom}(\mathbb{Z}^n, G)_1 \hookrightarrow {\rm Hom}(\mathbb{Z}^n, G ({\mathbb C}))_1$ are homotopy equivalences~\cite{pettet2013souto}, so they induce a homotopy equivalence between the corresponding homotopy colimits.
\end{proof}
Our stability results for compact groups now yield:
\begin{cor} \label{cor: nil stable}
Fix a positive integer $q\geqslant 2$, and let $G_r({\mathbb C})$ be as above. The
sequences
$$r\mapsto B(q, G_r ({\mathbb C}))_1
\,\,\, \textrm{ and }\,\,\,
r\mapsto {\rm Comm}(G_r ({\mathbb C}))_1
$$
are strongly rationally homologically stable, and in degree $k$, stability holds for $r - \lfloor \sqrt{r} \rfloor \geqslant k$.
\end{cor}
Nilpotent analogues of ${\rm Comm} (G)$ were introduced in~\cite{cohen2016spaces}.
Define
\begin{equation}\label{eqn: X(q,G)}
{\it X}(q,G) : = \bigg(\coprod_{n \geq 0} {\rm Hom}(F_n/\Gamma^q,G)\bigg)\bigg/\sim,
\end{equation}
where ${\it X}(2,G)={\rm Comm}(G)$, and the corresponding subspaces
$$
{\it X}(q,G)_1 : = \bigg(\coprod_{n \geq 0} {\rm Hom}(F_n/\Gamma^q,G)_1\bigg)\bigg/\sim.
$$
We thereby obtain filtrations of $J(G)$ as follows:
$$
{\rm Comm}(G) = {\it X}(2,G) \subseteq {\it X}(3,G) \subseteq {\it X}(4,G) \subseteq \cdots \subseteq J(G)
$$
and
$$
{\rm Comm}(G)_1 = {\it X}(2,G)_1 \subseteq {\it X}(3,G)_1 \subseteq {\it X}(4,G)_1 \subseteq \cdots \subseteq J(G).
$$
We now prove (weak) homological stability for these constructions.
\begin{cor}\label{cor: X(q,G)}
Fix a positive integer $q\geqslant 2$, and let $G_r({\mathbb C})$ be as above. Then there are isomorphisms
$$H_k {\it X}(q, G_r ({\mathbb C}))_1 \cong H_k{\it X}(q, G_{r+1} ({\mathbb C}))_1$$
for $r - \lfloor \sqrt{r} \rfloor \geqslant k$.
\end{cor}
\begin{proof} In light of Theorem \ref{thm: bergeron-silberman} and Theorem~\ref{Hom-bound}, it suffices to show that
$$H_k {\it X}(q, G_r ({\mathbb C}))_1 \cong H_k {\rm Comm}(G_r ({\mathbb C}))_1$$
for each $r\geqslant 1$.
To begin, note that the stable splitting in \cite[Theorem 5.2]{cohen2016spaces} (stated there for compact groups) still holds for $G_r ({\mathbb C})$ since (as discussed in the proof of Corollary~\ref{cor: nil stable}) the inclusions
$${\rm Hom}(F_n/\Gamma^q, G_r ({\mathbb C}))_1 \hookrightarrow {\rm Hom}(F_{n+1}/\Gamma^q, G_r ({\mathbb C}))_1$$
are cofibrations.
That is, we have decompositions
$$
\Sigma X(q,G_r({\mathbb C}))_1 \simeq \Sigma \bigvee_{n\geqslant 1} \widehat{{\rm Hom}}(F_n/\Gamma^q,G_r({\mathbb C}))_1,
$$
where $\widehat{{\rm Hom}}(F_n/\Gamma^q,G_r({\mathbb C}))_1$ is the quotient of ${{\rm Hom}}(F_n/\Gamma^q,G_r({\mathbb C}))_1$
by the subspace $S_{n,q}(G_r({\mathbb C}))$ consisting of all the nilpotent $n$--tuples with at least one coordinate equal to
the identity element of $G_r({\mathbb C})$.
The argument in~\cite[Theorem 4.1]{ramras2017hilbert} now shows that the natural map
$$\widehat{{\rm Hom}}({\mathbb Z}^n,G_r({\mathbb C}))_1\longrightarrow \widehat{{\rm Hom}}(F_n/\Gamma^q,G_r({\mathbb C}))_1$$
is a homotopy equivalence, completing the proof.
\end{proof}
As an immediate consequence of Theorem~\ref{thm: bergeron} and Theorem~\ref{thm: stability for fixed G}, we also have stability with respect to the maps of discrete groups $F_n/\Gamma^q\longrightarrow F_{n+1}/\Gamma^q$.
\begin{cor}\label{cor: stability in n nil} Let $G$ be as in Theorem~\ref{thm: bergeron}. Then for each $q\geqslant 2$, the sequences
$$n\mapsto {\rm Hom}(F_n/\Gamma^q, G)_1\,\,\, \textrm{ and }
\,\,\, n\mapsto \mathfrak{X}_{F_n/\Gamma^q} (G)_1$$
are strongly rationally homologically stable, and in homological degree $k$, stability holds for $n\geqslant k$.
\end{cor}
\noindent {\bf Real reductive groups.}
Since Theorem~\ref{thm: bergeron} applies to real reductive groups, we can also consider families of such groups whose maximal compact subgroups correspond to the classical sequences of compact Lie groups. For instance, the maximal compact subgroup of ${\rm SL}_r (\mathbb{R})$ is ${\rm SO}(r)$, and hence we obtain strong homological stability (with the same bounds as above) for all four sequences
$$\{{\rm Hom}(F_n/\Gamma^q, {\rm SL}_r(\mathbb{R}))_1\}_{r\geqslant 1}, \,\,\,\{\mathfrak{X}_{F_n/\Gamma^q} ({\rm SL}_r(\mathbb{R}))_1\}_{r\geqslant 1},$$
$$\{B(q, {\rm SL}_r(\mathbb{R}))_1\}_{r\geqslant 1},\,\,\, \textrm{ and }\,\,\, \{{\rm Comm} ({\rm SL}_r(\mathbb{R}))_1\}_{r\geqslant 1}.$$
A similar statement applies to the sequence of real symplectic groups ${\rm Sp}(2r, \mathbb{R})$, which are reductive with maximal compact ${\rm U}(r)$.
Finally, consider the indefinite groups ${\rm U}(s,r)$ and ${\rm SO}(s,r)$, which are real reductive with maximal compact subgroups ${\rm U}(s)\times {\rm U}(r)$ and ${\rm SO}(s)\times {\rm SO}(r)$, respectively. In these cases, we may stabilize with respect to either variable $r$ or (equivalently) $s$. In general, for Lie groups $G$ and $H$ there are homeomorphisms
$${\rm Hom}({\mathbb Z}^n, G\times H) \cong {\rm Hom}({\mathbb Z}^n, G)\times {\rm Hom}({\mathbb Z}^n, H),$$
and similarly for the character varieties. Moreover, by Ebert--Randal-Williams~\cite[Theorem 7.2]{ERW}, geometric realization commutes with products of simplicial spaces up to weak homotopy equivalence, so the map
$$B_{\rm com} (G\times H) \longrightarrow B_{\rm com} (G)\times B_{\rm com} (H)$$
is a weak homotopy equivalence, and hence an isomorphism in (co)homology.\footnote{Here it is important that we give $B_{\rm com} (G)\times B_{\rm com} (H)$ the compactly generated topology associated to the product topology.}
The K\"unneth Theorem now shows that the sequences
$$\{{\rm Hom}({\mathbb Z}^n, {\rm U}(s,r))\}_{r\geqslant 1}, \,\,\,\{\mathfrak{X}_{{\mathbb Z}^n} ({\rm U}(s,r))\}_{r\geqslant 1},\,\,\, \textrm{ and }\,\,\, \{B_{\rm com} ({\rm U}(s,r))\}_{r\geqslant 1}$$
are all strongly rationally homologically stable (with the same stability bounds), and similarly for ${\rm SO}(s,r)$ in place of ${\rm U}(s,r)$ (at least after restricting to the connected components of the trivial representations).
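The product decomposition of ${\rm Hom}({\mathbb Z}^n, G\times H)$ used above can be checked directly for finite groups, where ${\rm Hom}({\mathbb Z}^2, G)$ is simply the set of ordered commuting pairs. The following Python experiment (illustrative only) confirms that commuting pairs in a direct product correspond to pairs of commuting pairs.

```python
from itertools import permutations

def commuting_pairs(elements, op):
    # |Hom(Z^2, G)| = number of ordered commuting pairs in G
    return sum(1 for g in elements for h in elements if op(g, h) == op(h, g))

S3 = list(permutations(range(3)))                     # symmetric group S_3
comp = lambda g, h: tuple(g[h[i]] for i in range(3))  # composition of permutations
Z2 = [0, 1]
add = lambda a, b: (a + b) % 2

GH = [(g, a) for g in S3 for a in Z2]                 # direct product S_3 x Z/2
op = lambda x, y: (comp(x[0], y[0]), add(x[1], y[1]))

assert commuting_pairs(S3, comp) == 18
assert commuting_pairs(Z2, add) == 4
assert commuting_pairs(GH, op) == 18 * 4  # Hom(Z^2, G x H) = Hom(Z^2, G) x Hom(Z^2, H)
```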
\begin{rmk}\label{rmk: eqvt11} The results in this section can be extended to equivariant homology. For instance, consider the $G_r ({\mathbb C})$--equivariant homology of ${\rm Hom}({\mathbb Z}^n, G_r ({\mathbb C}))_1$.
Comparing the fibrations
$$({\rm Hom}({\mathbb Z}^n, G_r ({\mathbb C}))_1)_{hG_r ({\mathbb C})}\longrightarrow BG_r ({\mathbb C})$$
and
$$({\rm Hom}({\mathbb Z}^n, G_r)_1)_{hG_r}\longrightarrow BG_r,$$
one sees (using Theorem~\ref{thm: bergeron}) that the homotopy orbit spaces are in fact homotopy equivalent. The other cases are similar.
\end{rmk}
\section{Finite covers of Lie groups}\label{sec: covers}
In this section we show that passing to a finite cover of the underlying Lie group does not change
the homology of the various spaces of commuting elements considered in this article. In particular, this allows us to extend our stability results to the Spin groups, and to the projective unitary and general linear groups
(although in the latter case, there are no stabilization maps, so we have only weak stability).
\begin{prop}\label{prop: fin cov} Let $p\colon\thinspace G\to H$ be a finite covering homomorphism between connected Lie groups, and assume that $G$ and $H$ are either compact or complex reductive affine algebraic groups.
Then for each $n\geqslant 1$ and each $q\geqslant 2$, the induced maps
\begin{equation}\label{HX} {\rm Hom}(F_n/\Gamma^q, G)_1\longrightarrow {\rm Hom}(F_n/\Gamma^q, H)_1 \,\,\, \textrm{and}\,\,\,\mathfrak{X}_{F_n/\Gamma^q} (G)_1\longrightarrow \mathfrak{X}_{F_n/\Gamma^q} (H)_1\end{equation}
are $($rational$)$ homology isomorphisms, as are the maps
$$B(q, G)_1\longrightarrow B(q, H)_1 \,\,\, \textrm{and}\,\,\,
{\rm Comm}(G)_1\longrightarrow {\rm Comm}(H)_1.$$
Finally, when $G$ and $H$ are compact, the same holds for
$$B_{\rm com} (G)_1/G \to B_{\rm com} (H)_1/H\,\,\, \textrm{and}\,\,\, {\rm Comm}(G)_1/G \to {\rm Comm}(H)_1/H.$$
\end{prop}
\begin{proof} First we consider the case in which $G$ and $H$ are compact.
Let $T\leqslant H$ be a maximal torus, and define $\widetilde{T} := p^{-1} (T)$.
We claim that $\widetilde{T}$ is a maximal torus in $G$, and that the induced map of Weyl groups is an isomorphism.
Note that $p$ induces an isomorphism of Lie algebras, and hence the maximal tori in $G$ and $H$ have the same rank.
Since all finite covers of a torus are disjoint unions of tori (of the same rank as the base torus), the identity component $p^{-1} (T)_0 \leqslant p^{-1} (T)$ is a maximal torus in $G$, so it suffices to show that $p^{-1} (T)$ is connected.
Observe that $\ker (p)\leqslant G$ is discrete and normal, hence central by~\cite[Theorem 6.13]{HM06},
so $p^{-1} (T)$ centralizes $p^{-1} (T)_0$
(note that by an elementary covering space argument, $p^{-1} (T)$ is generated by $p^{-1} (T)_0$ together with $\ker (p)$). Since maximal tori in compact Lie groups are their own centralizers, this shows that $\widetilde{T}$ is a maximal torus of $G$.
It follows that $p$ induces an isomorphism $\widetilde{W}\to W$ between the Weyl groups of
$\widetilde{T}$ and $T$, since these groups are naturally isomorphic to the Weyl groups of the root systems associated to $\widetilde{T}$ and
$T$~\cite[Chapter 11]{Hall-Lie-groups}, and $p$ induces an isomorphism of Lie algebras.\footnote{Alternatively, surjectivity of $p$ implies that $p^{-1} (NT) = N\widetilde{T}$, and then the 9-lemma implies that $\widetilde{W}\to W$ is an isomorphism.}
To study the maps (\ref{HX}), first recall that the
Poincar\'{e} polynomials of these spaces are given by the formulas (\ref{eqn: Hom-PP}) and (\ref{eqn: Rep-PP}), which depend only on the Weyl group and its action on the Lie algebra of the maximal torus. Since $p$ induces an isomorphism $\widetilde{W} \srm{\cong} W$ and a Weyl--equivariant diffeomorphism $\widetilde{T}\to T$, the Poincar\'e polynomials (that is, the ranks of the rational homology groups) are unchanged under passage from $G$ to $H$. It will thus suffice to show that the maps (\ref{HX}) are surjective on rational homology.
By a result of Goldman~\cite[Lemma 2.2]{goldman1988topological},
the map of homomorphism spaces is a normal covering map with structure group ${\rm Hom}(F_n/\Gamma^q, \ker(p)) = \ker(p)^n$ (note that, as observed above, $\ker (p)$ is central in $G$ and in particular is abelian). Hence ${\rm Hom}(F_n/\Gamma^q, H)_1$ is the quotient of ${\rm Hom}(F_n/\Gamma^q, G)_1$ by the finite group $\ker(p)^n$, and by Lawton--Ramras~\cite[Lemmas 3.7 and 3.9]{Lawton-Ramras}, we have
$$\mathfrak{X}_{F_n/\Gamma^q} (G)_1/\ker(p)^n \cong \mathfrak{X}_{F_n/\Gamma^q} (H)_1$$
as well. Now Proposition~\ref{coh-quot} implies that the maps (\ref{HX}) are surjective on rational homology (for homomorphism spaces, this can also be seen by considering the transfer map).
The result for $B(q, G)_1\to B(q, H)_1$ now follows from the fact that a level-wise homology equivalence between proper simplicial spaces is a homology equivalence on realizations (May~\cite[Appendix]{May-EGP}). The assumption that $G$ and $H$ are compact ensures properness, as discussed in the proof of Proposition~\ref{prop: Bcom-Comm}.
Next we show that ${\rm Comm}(G)_1\to {\rm Comm}(H)_1$ is a homology equivalence.
Our choice of maximal tori implies that $p$ induces a commutative diagram of the form
\begin{equation}\label{eqn: phi'2}
\begin{tikzcd}
G/\widetilde{T}\times_{\widetilde{W}} J(\widetilde{T}) \arrow[d, "p_*"] \arrow[r, "\phi''"]
& {\rm Comm}(G)_1 \arrow[d, "p_*"]
\\
H/T\times_{W} J(T) \arrow[r, "\phi''"]
& {\rm Comm}(H)_1.
\end{tikzcd}
\end{equation}
The horizontal maps in (\ref{eqn: phi'2}) are homology equivalences by Theorem~\ref{thm: phi}, so to show that ${\rm Comm}(G)_1\to {\rm Comm}(H)_1$ is a homology equivalence it suffices to prove the same for the map $G/\widetilde{T}\times_{\widetilde{W}} J(\widetilde{T}) \to H/T\times_{W} J(T)$ induced by $p$.
By Proposition~\ref{coh-quot} and the K\"unneth Theorem, it suffices to show that the maps
\begin{equation}\label{J} H_* (J(\widetilde{T}))\to H_* (J(T))
\end{equation}
and
\begin{equation}\label{/T} H_* (G/\widetilde{T})\to H_* ( H/T)
\end{equation}
induced by $p$ are homology equivalences.\footnote{To apply Proposition~\ref{coh-quot}, we need to establish that the Weyl group actions in (\ref{eqn: phi'2}) are good. This follows from the argument in the proof of Theorem~\ref{thm: stability for BcomG/G and ComG/G}.}
To see that (\ref{J}) is an isomorphism, first note that
$p\colon\thinspace \widetilde{T}\to T$ is a covering map between spaces with isomorphic homology. As discussed above, this implies that $p$ is a homology equivalence. Now the natural isomorphism $H_*(J(X)) \cong \mathcal{T} (\widetilde{H}_* (X))$ shows that (\ref{J}) is a homology equivalence as well.
To analyze (\ref{/T}), we construct a commutative diagram
\begin{center}
\begin{tikzcd}
G/\widetilde{T} \arrow{r}{\widetilde{j}} \arrow{d}{p} &
B\widetilde{T} \arrow[d, "Bp"]
\\
H/T \arrow{r}{j} &
BT,
\end{tikzcd}
\end{center}
where the horizontal maps are classifying maps for the bundles $G\to G/\widetilde{T}$ and $H\to H/T$. To obtain these compatible classifying maps, choose a point $e_0\in EG$ and let $\overline{e_0}$ denote its image in $EH$ under the map $EG\to EH$ induced by $p$. The inclusions $G/\widetilde{T} \subset EG/\widetilde{T}$, $[g]\mapsto e_0\cdot g$ and $H/T \subset EH/T$, $[h]\mapsto \overline{e_0}\cdot h$ yield the desired classifying maps.
By Proposition~\ref{prop: G/T}, the horizontal maps $\widetilde{j}$ and $j$ become isomorphisms in cohomology after modding out the ideal of positive degree Weyl--invariants on the right-hand side. The map $Bp\colon\thinspace B\widetilde{T}\to BT$ is Weyl--equivariant and a (co)homology equivalence (again by~\cite[Appendix]{May-EGP}), so it restricts to an isomorphism between these ideals. It follows that
the map $G/\widetilde{T}\to H/T$ is an isomorphism in (co)homology, as desired.
The fact that $p$ induces homology isomorphisms
$$B_{\rm com} (G)_1/G \to B_{\rm com} (H)_1/H\,\,\, \textrm{and}\,\,\, {\rm Comm}(G)_1/G \to {\rm Comm}(H)_1/H$$
follows similarly, using Proposition~\ref{prop: homeo}.
Finally, we consider the case in which $G$ and $H$ are complex. First, note that if $K\leq H$ is a maximal compact subgroup, then since finite covers preserve compactness, $p^{-1} (K)$ is a maximal compact subgroup of $G$. Now if $F$ is any of the functors under consideration, we have a commutative diagram
\begin{center}
\begin{tikzcd}
F(p^{-1} K) \arrow{r} \arrow{d} &
F(G) \arrow[d]
\\
F(K) \arrow{r}&
F(H),
\end{tikzcd}
\end{center}
in which the left-hand vertical map is a homology isomorphism, while the horizontal arrows are homotopy equivalences (see Theorem~\ref{thm: bergeron} and Proposition~\ref{prop: Bcom-Comm}), and it follows that the right-hand vertical map is a homology isomorphism as well.
\end{proof}
We end by discussing some examples in which Proposition~\ref{prop: fin cov} applies.
\begin{ex}\label{ex: Spin} For $r\geqslant 3$, the Spin group ${\rm Spin}(r)$ is the universal covering group of ${\rm SO}(r)$, and since $\pi_1 ({\rm SO}(r)) = {\mathbb Z}/2$ for $r\geqslant 3$, this is in fact a double covering. Let $p \colon\thinspace {\rm Spin}(r)\to {\rm SO}(r)$ be the covering map. The standard inclusions
$${\rm SO}(r)\hookrightarrow {\rm SO}(r+1)$$
(block sum with the $1\times 1$ identity matrix) induce maps ${\rm Spin}(r)\to {\rm Spin}(r+1)$, and it follows from Proposition~\ref{prop: fin cov} (and the earlier results in the article) that these maps induce homology isomorphisms after applying any of the functors considered in Proposition~\ref{prop: fin cov}, except possibly in the case of character varieties (see also Remark~\ref{rmk: SO}). Moreover, the stable ranges are the same as for the special orthogonal groups.
In the case of character varieties, the proof of Proposition~\ref{prop: fin cov} involves comparing Poincar\'{e} polynomials, so we obtain only weak stability in this case.
\end{ex}
\begin{ex}\label{ex: proj} For each of the families of compact or complex Lie groups considered in this article, there is an associated family of projective groups obtained by modding out the centers (although in some cases the center is trivial). In each case, we obtain (weak) homological stability results for the various functors considered in Proposition~\ref{prop: fin cov}. Note that the standard inclusions \emph{do not} map centers to centers, and hence we do not have maps between the projective groups inducing these homology isomorphisms.
\end{ex}
\begin{rmk}\label{rmk: eqvt12} As in Remark~\ref{rmk: eqvt11},
the results in this section extend to equivariant homology. For instance, if $p\colon\thinspace G\to H$ is a finite covering, to compare the $G$--equivariant homology of ${\rm Hom}({\mathbb Z}^n, G)_1$ to the $H$--equivariant homology of ${\rm Hom}({\mathbb Z}^n, H)_1$, we compare the
fibrations
$$({\rm Hom}({\mathbb Z}^n, G)_1)_{hG}\longrightarrow BG$$
and
$$({\rm Hom}({\mathbb Z}^n, H)_1)_{hH}\longrightarrow BH.$$
The induced map $Bp\colon\thinspace BG\to BH$ is a (rational) homology equivalence by~\cite[Appendix]{May-EGP}, and the map of fibers is as well (by Proposition~\ref{prop: fin cov}). Comparing the Serre spectral sequences for the two fibrations, we obtain the desired isomorphism in equivariant homology.
\end{rmk}
| {
"timestamp": "2020-04-30T02:02:37",
"yymm": "1805",
"arxiv_id": "1805.01368",
"language": "en",
"url": "https://arxiv.org/abs/1805.01368",
"abstract": "In this paper we study homological stability for spaces ${\\rm Hom}(\\mathbb{Z}^n,G)$ of pairwise commuting $n$-tuples in a Lie group $G$. We prove that for each $n\\geqslant 1$, these spaces satisfy rational homological stability as $G$ ranges through any of the classical sequences of compact, connected Lie groups, or their complexifications. We prove similar results for rational equivariant homology, for character varieties, and for the infinite-dimensional analogues of these spaces, ${\\rm Comm}(G)$ and ${\\rm B_{com}} G$, introduced by Cohen-Stafa and Adem-Cohen-Torres-Giese respectively. In addition, we show that the rational homology of the space of unordered commuting $n$-tuples in a fixed group $G$ stabilizes as $n$ increases. Our proofs use the theory of representation stability - in particular, the theory of ${\\rm FI}_W$-modules developed by Church-Ellenberg-Farb and Wilson. In all of the these results, we obtain specific bounds on the stable range, and we show that the homology isomorphisms are induced by maps of spaces.",
"subjects": "Algebraic Topology (math.AT)",
"title": "Homological stability for spaces of commuting elements in Lie groups"
} |
https://arxiv.org/abs/2012.01503 | A density bound for triangle-free $4$-critical graphs | We prove that every triangle-free $4$-critical graph $G$ satisfies $e(G) \geq \frac{5v(G)+2}{3}$. This result gives a unified proof that triangle-free planar graphs are $3$-colourable, and that graphs of girth at least five which embed in either the projective plane, torus, or Klein Bottle are $3$-colourable, which are results of Grötzsch, Thomassen, and Thomas and Walls. Our result is nearly best possible, as Davies has constructed triangle-free $4$-critical graphs $G$ such that $e(G) = \frac{5v(G) + 4}{3}$. To prove this result, we prove a more general result characterizing sparse $4$-critical graphs with few vertex-disjoint triangles. | \section{Introduction}
Given two graphs $G$ and $H$, a \textit{homomorphism} from $G$ to $H$ is a map $f:V(G) \to V(H)$ such that for any edge $xy \in E(G)$, we have that $f(x)f(y) \in E(H)$. A \textit{$k$-colouring} of a graph $G$ is a homomorphism from $G$ to $K_{k}$. A graph $G$ is \textit{$k$-critical} if $G$ is not $(k-1)$-colourable, but every proper subgraph of $G$ is $(k-1)$-colourable. A remarkable result of Kostochka and Yancey, below, says that $k$-critical graphs have many edges.
\begin{thm}[\cite{oresconjecture}]\label{KY}
If $G$ is $k$-critical, then
\[e(G) \geq \frac{(k+1)(k-2)v(G) - k(k-3)}{2(k-1)}.\]
\end{thm}
Here and throughout we use the notation $e(G) = |E(G)|$ and $v(G) = |V(G)|$. Later, Kostochka and Yancey characterized the graphs for which equality holds in the above theorem.
\begin{thm}[\cite{tightnessore}]
A $k$-critical graph $G$ satisfies
\[e(G) = \frac{(k+1)(k-2)v(G) - k(k-3)}{2(k-1)}\]
if and only if $G$ is $k$-Ore.
\end{thm}
A graph $G$ is \emph{$k$-Ore} if it is obtained from a number of copies of $K_k$ via Ore compositions. An \textit{Ore Composition} of two graphs $H_{1}$ and $H_{2}$ is the graph $H$ obtained by deleting an edge $xy \in E(H_{1})$, splitting a vertex $z \in V(H_2)$ into two vertices $z_1$ and $z_2$ of positive degree such that $N(z) = N(z_{1}) \cup N(z_{2})$ and $N(z_{1}) \cap N(z_{2}) = \emptyset$, and then identifying $x$ with $z_{1}$ and $y$ with $z_{2}$. Here $N(v)$ refers to the \emph{neighbourhood} of $v$: the set of vertices adjacent to $v$. We say that $H_{1}$ is the \textit{edge side of the composition} and $H_{2}$ is the \textit{split side of the composition}, and we denote the graph obtained from $H_2$ by splitting $z$ as $H_{2}^{z}$.
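As a sanity check on these definitions, the Ore composition of two copies of $K_4$ has $7$ vertices and $11$ edges, attaining equality in Theorem \ref{KY} for $k=4$. The Python sketch below is a minimal illustration; the helpers \texttt{ore\_compose} and \texttt{ky\_bound} are hypothetical names introduced here, not notation from the paper.

```python
from itertools import combinations
from fractions import Fraction

def ky_bound(k, v):
    # Kostochka-Yancey lower bound on e(G) for a k-critical graph on v vertices
    return Fraction((k + 1) * (k - 2) * v - k * (k - 3), 2 * (k - 1))

def complete_graph(vs):
    return set(vs), {frozenset(p) for p in combinations(vs, 2)}

def ore_compose(V1, E1, x, y, V2, E2, z, part1, part2):
    """Delete edge xy from H1; split z in H2, with N(z) partitioned into
    part1 and part2; identify x with z1 and y with z2 (V1, V2 assumed disjoint)."""
    E = (E1 - {frozenset((x, y))}) | {e for e in E2 if z not in e}
    E |= {frozenset((x, w)) for w in part1} | {frozenset((y, w)) for w in part2}
    return V1 | (V2 - {z}), E

V1, E1 = complete_graph(['a1', 'a2', 'a3', 'a4'])
V2, E2 = complete_graph(['b1', 'b2', 'b3', 'b4'])
V, E = ore_compose(V1, E1, 'a1', 'a2', V2, E2, 'b1', {'b2'}, {'b3', 'b4'})

assert ky_bound(4, 4) == 6                      # K_4 itself attains the bound
assert (len(V), len(E)) == (7, 11)
assert Fraction(len(E)) == ky_bound(4, len(V))  # equality, as expected for a 4-Ore graph
```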
A natural avenue to pursue is to restrict our attention to special subclasses of $k$-critical graphs and try to improve the bound given in Theorem \ref{KY}. It is easy to see that every $k$-Ore graph contains a $K_{k-1}$ subgraph. Hence a natural question is: what is the tightest density bound for $k$-critical graphs that contain no $K_{k-1}$ subgraph? A generalization of a construction of Thomas and Walls shows that, asymptotically, the Kostochka-Yancey bound is best possible for $k$-critical graphs with no $K_{k-1}$ subgraph \cite{thomaswalls}. Nevertheless, improvements to the lower order terms are still possible. Liu and Postle conjectured the following bound for triangle-free $4$-critical graphs.
\begin{conj}[\cite{4criticalgirth5}]
If $G$ is a $4$-critical triangle-free graph, then $e(G) \geq \frac{5v(G)+5}{3}$.
\end{conj}
This conjecture, if true, is best possible: it is sharp for the Gr\"{o}tzsch graph (isomorphic to the Mycielski graph of order four).
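The sharpness claim can be checked directly: the Gr\"{o}tzsch graph is the Mycielskian of $C_{5}$, a triangle-free graph with $11$ vertices and $20$ edges, and $3 \cdot 20 = 5 \cdot 11 + 5$. A brief sketch (illustrative Python; helper names are ours):

```python
from itertools import combinations

def mycielskian(adj):
    """Mycielski construction: for each vertex v add a shadow v' adjacent
    to N(v), plus one new apex vertex adjacent to every shadow."""
    M = {v: set(nb) for v, nb in adj.items()}
    apex = "w"
    M[apex] = set()
    for v, nb in adj.items():
        sv = v + "'"
        M[sv] = set(nb) | {apex}
        for u in nb:
            M[u].add(sv)
        M[apex].add(sv)
    return M

C5 = {str(i): {str((i - 1) % 5), str((i + 1) % 5)} for i in range(5)}
G = mycielskian(C5)  # the Grötzsch graph

n = len(G)
m = sum(len(nb) for nb in G.values()) // 2
triangle_free = all(
    not (b in G[a] and c in G[a] and c in G[b])
    for a, b, c in combinations(G, 3)
)
print(n, m, triangle_free)   # 11 20 True
assert 3 * m == 5 * n + 5    # equality in the conjectured bound
```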
Our main result is, at the time of writing, the only result known towards the conjecture, aside from the Kostochka-Yancey bound.
\begin{thm}
\label{fakemain}
If $G$ is a $4$-critical triangle-free graph, then $e(G) \geq \frac{5v(G)+2}{3}$.
\end{thm}
This generalizes the following result of Liu and Postle.
\begin{thm}[\cite{4criticalgirth5}]
\label{luke}
If $G$ is a $4$-critical graph of girth at least five, then $e(G) \geq \frac{5v(G) + 2}{3}$.
\end{thm}
Theorem \ref{luke} in turn simultaneously generalized the following two results, via Euler's formula.
\begin{thm}[\cite{Thomassen}]
\label{Carstenresult}
Every graph of girth at least five embeddable on the torus or the projective plane is $3$-colourable.
\end{thm}
\begin{thm}[\cite{thomaswalls}]
\label{thomaswallsresult}
Every graph of girth at least five embeddable on the Klein Bottle is $3$-colourable.
\end{thm}
Theorem \ref{luke} shows that the assumption of being embeddable in a surface is not needed in either Theorem \ref{Carstenresult} or Theorem \ref{thomaswallsresult}. Our main result shows that, in addition, the girth five assumption is not needed, and can be replaced with merely being triangle-free so long as the graph is sufficiently sparse.
To prove our result, we use the potential method developed by Kostochka and Yancey, a technique that has been used in many recent papers (e.g. \cite{tightnessore, oresconjecture,postletrianglefree5crit,4criticalgirth5,6critk4free, nearbipartite,EvelyneMasters}). As the potential method relies on a certain quotient operation, it naturally creates triangles. Following the ideas in \cite{4criticalgirth5} we prove a stronger statement about $4$-critical graphs which incorporates triangles and immediately implies our main result. Before we state the stronger theorem, we give the following necessary definitions.
For a graph $G$, let $T^{k-1}(G)$ be the maximum number of vertex-disjoint copies of $K_{k-1}$ in $G$. For real numbers $(a,b,c)$ and an integer $k$, let the \emph{$(a,b,c)$-potential} of a graph, denoted $p_{a,b,c}(G)$, be defined as $p_{a,b,c}(G) = av(G) - be(G) -cT^{k-1}(G)$. Let $W_{n}$ denote the wheel on $n+1$ vertices, which is the graph obtained from a cycle on $n$ vertices by adding a vertex adjacent to all other vertices. Let $T_{8}$ be the graph with vertex set $V(T_{8}) = \{u_{1},u_{2},u_{3},u_{4},u_{5},u_{6},u_{7},u_{8}\}$ and $E(T_{8}) = \{u_{1}u_{2},u_{1}u_{3},u_{1}u_{4},u_{1}u_{5},u_{2}u_{3},u_{2}u_{4},u_{2}u_{5},u_{3}u_{8},u_{4}u_{7},u_{5}u_{6},u_{6}u_{7},u_{6}u_{8},u_{7}u_{8}\}$. Let $\mathcal{B}$ be defined as follows: the graph $T_{8}$ is in $\mathcal{B}$, and given a graph $G \in \mathcal{B}$ and a $4$-Ore graph $H$, the Ore composition $G'$ of $G$ and $H$ is in $\mathcal{B}$ if $T^{3}(G') =2$. See Figure \ref{T8pic} for an illustration.
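These definitions can be verified mechanically for $T_{8}$: it has $8$ vertices, $13$ edges and $T^{3}(T_{8}) = 2$, so its $(5,3,1)$-potential is $5 \cdot 8 - 3 \cdot 13 - 2 = -1$, the value that graphs in $\mathcal{B}$ receive in our main theorem below. A brute-force sketch (illustrative Python, feasible only for small graphs; helper names are ours):

```python
from itertools import combinations

edges = {("u1","u2"),("u1","u3"),("u1","u4"),("u1","u5"),
         ("u2","u3"),("u2","u4"),("u2","u5"),("u3","u8"),
         ("u4","u7"),("u5","u6"),("u6","u7"),("u6","u8"),("u7","u8")}
E = {frozenset(pair) for pair in edges}
V = {v for pair in edges for v in pair}

def triangles(V, E):
    """All triangles of the graph (V, E)."""
    return [frozenset(t) for t in combinations(sorted(V), 3)
            if all(frozenset(p) in E for p in combinations(t, 2))]

def triangle_packing_number(V, E):
    """T^3: the maximum number of vertex-disjoint triangles (brute force)."""
    ts = triangles(V, E)
    for r in range(len(ts), 0, -1):
        for pack in combinations(ts, r):
            if sum(len(t) for t in pack) == len(set().union(*pack)):
                return r
    return 0

v, e, t = len(V), len(E), triangle_packing_number(V, E)
p = 5 * v - 3 * e - t          # the (5,3,1)-potential
print(v, e, t, p)              # 8 13 2 -1
```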
\begin{figure}
\label{T8pic}
\begin{center}
\begin{tikzpicture}
\node[blackvertex] at (.5,3) (u1) {};
\node[smallwhite] at (.15,3) (dummy1) {$u_{1}$};
\node[blackvertex] at (1.5,3) (u2) {};
\node[smallwhite] at (1.95,3) (dummy2) {$u_{2}$};
\node[blackvertex] at (0,2) (u3) {};
\node[smallwhite] at (-.35,2) (dummy3) {$u_{3}$};
\node[blackvertex] at (1,2) (u4) {};
\node[smallwhite] at (.55,2) (dummy4) {$u_{4}$};
\node[blackvertex] at (2,2) (u5) {};
\node[smallwhite] at (1.55,2) (dummy5) {$u_{5}$};
\node[blackvertex] at (0,0) (u6) {};
\node[smallwhite] at (-.45,0) (dummy6) {$u_{6}$};
\node[blackvertex] at (1,1) (u7) {};
\node[smallwhite] at (.55,1) (dummy7) {$u_{7}$};
\node[blackvertex] at (2,0) (u8) {};
\node[smallwhite] at (2.55,0) (dummy8) {$u_{8}$};
\draw[thick,black] (u1)--(u2);
\draw[thick,black] (u1)--(u3);
\draw[thick,black] (u1)--(u4);
\draw[thick,black] (u1)--(u5);
\draw[thick,black] (u2)--(u3);
\draw[thick,black] (u2)--(u4);
\draw[thick,black] (u2)--(u5);
\draw[thick,black] (u3)--(u6);
\draw[thick,black] (u4)--(u7);
\draw[thick,black] (u5)--(u8);
\draw[thick,black] (u6)--(u7);
\draw[thick,black] (u6)--(u8);
\draw[thick,black] (u7)--(u8);
\begin{scope}[xshift =5cm]
\node[blackvertex] at (.5,3) (u1) {};
\node[blackvertex] at (1.5,3) (u2) {};
\node[blackvertex] at (0,2) (u3) {};
\node[blackvertex] at (1,2) (u4) {};
\node[blackvertex] at (2,2) (u5) {};
\node[blackvertex] at (0,0) (u6) {};
\node[blackvertex] at (1,1) (u7) {};
\node[blackvertex] at (2,0) (u8) {};
\node[blackvertex] at (1.5,.2) (u9){};
\node[blackvertex] at (1.5,-.2) (u10) {};
\node[blackvertex] at (1,0) (u11) {};
\draw[thick,black] (u1)--(u2);
\draw[thick,black] (u1)--(u3);
\draw[thick,black] (u1)--(u4);
\draw[thick,black] (u1)--(u5);
\draw[thick,black] (u2)--(u3);
\draw[thick,black] (u2)--(u4);
\draw[thick,black] (u2)--(u5);
\draw[thick,black] (u3)--(u6);
\draw[thick,black] (u4)--(u7);
\draw[thick,black] (u5)--(u8);
\draw[thick,black] (u6)--(u7);
\draw[thick,black] (u6)--(u11);
\draw[thick,black] (u7)--(u8);
\draw[thick,black] (u8)--(u9);
\draw[thick,black] (u8)--(u10);
\draw[thick,black] (u9)--(u10);
\draw[thick,black] (u9)--(u11);
\draw[thick,black] (u10)--(u11);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The graph on the left is $T_{8}$, and the graph on the right is an example of a graph in $\mathcal{B}$.}\label{T8pic}
\end{figure}
Our main theorem is given below.
\begin{thm}
\label{maintheorem}
Let $G$ be a $4$-critical graph and let $p(G)$ denote the $(5,3,1)$-potential of $G$. Then
\begin{itemize}
\item{$p(K_{4}) = 1$, }
\item{$p(G) = 0$ if $T^{3}(G) = 2$ and $G$ is $4$-Ore,}
\item{$p(G) = -1$ if $G = W_{5}$, or $G \in \mathcal{B}$, or $G$ is $4$-Ore with $T^{3}(G) =3$, and}
\item{$p(G) \leq -2$ otherwise.}
\end{itemize}
\end{thm}
As all of the graphs with $p(G) \geq -1$ contain triangles, Theorem \ref{fakemain} immediately follows.
We now give a brief outline of the proof of Theorem \ref{maintheorem}. Use of the potential method allows us to assume that a vertex-minimum counterexample does not contain certain induced subgraphs, such as $K_{4}-e$ and cycles in which every vertex has degree three. The heart of the potential method is a quotient operation combined with a counting lemma (the potential-extension lemma, Lemma \ref{potentialextensionlemma}), which shows that all subgraphs of a vertex-minimum counterexample have large potential (and hence that no subgraphs with smaller potential exist in our counterexample). To be able to utilize the potential-extension lemma effectively, we prove a series of structural lemmas about the triangles of $4$-Ore graphs and graphs in $\mathcal{B}$. This allows us to prove that in a minimum counterexample, the subgraph induced by the degree three vertices does not contain a component with at least seven vertices. Further, if the components have more than two vertices, then the local structure around the components is heavily constrained. Once enough structure is established, a discharging argument rules out the existence of a minimum counterexample, thereby completing the proof.
The paper is organized as follows. In Section \ref{cliquesection}, we present structural lemmas regarding $K_{k-1}$ cliques in $k$-Ore graphs, focusing on the case where $k = 4$. In Section \ref{T8structure}, we present results regarding the triangles of graphs in $\mathcal{B}$. In Section \ref{potentialsection}, we give a brief overview of the potential method and results specific to its use for colour-critical graphs. Section \ref{mincounterexamplesection} begins by supposing the existence of a minimum counterexample to Theorem \ref{maintheorem}; the structure of this counterexample is then uncovered. Finally, the discharging portion of the proof is found in Section \ref{dischargingsection}.
\section{$(k-1)$-cliques in $k$-Ore graphs}
\label{cliquesection}
In this section we prove several structural results about $(k-1)$-cliques in $k$-Ore graphs. An important graph here is the unique $4$-Ore graph on seven vertices, called the \textit{Moser spindle}, which we denote by $M$. The first two observations are easy and follow from effectively the same proofs as similar statements about $4$-Ore graphs in \cite{4criticalgirth5}.
\begin{obs}
\label{deletingavertex}
Let $G$ be $k$-Ore and $v$ be a vertex in $V(G)$. Then $G-v$ contains a $K_{k-1}$ subgraph.
\end{obs}
\begin{proof}
We proceed by induction on $v(G)$. If $G = K_{k}$, then this is immediate. Otherwise, $G$ is the Ore composition of two graphs $H_{1}$ and $H_{2}$ where $H_{1}$ is the edge side obtained by deleting edge $e=xy$ and $H_{2}$ is the split side obtained by splitting a vertex $z$ into vertices $z_{1}$ and $z_{2}$. By induction both $H_{1} -e$ and $H_{2}-z$ contain a $K_{k-1}$ subgraph.
Now let $v \in V(G)$. If $v \in V(H_{1}) \setminus \{x,y\}$, then $G-v$ contains a $K_{k-1}$ subgraph as there is a $K_{k-1}$ subgraph in $H_{2}-z$. A similar argument applies if $v \in V(H_{2}-z)$. Therefore $v \in \{z_{1},z_{2}\}$. Since $H_{2} - z$ contains a $K_{k-1}$ subgraph, it follows that $G-v$ contains a $K_{k-1}$ subgraph.
\end{proof}
\begin{obs}
\label{deletingaclique}
If $G$ is $k$-Ore and not isomorphic to $K_{k}$, then for any subgraph $K$ in $G$ isomorphic to $K_{k-1}$, $G - K$ contains a $K_{k-1}$ subgraph.
\end{obs}
\begin{proof}
We proceed by induction on $v(G)$. As $G$ is not isomorphic to $K_{k}$, $G$ is the Ore composition of two graphs $H_{1}$ and $H_{2}$. Let $H_{1}$ be the edge side of $G$ where we delete the edge $xy$, and let $H_{2}$ be the split side where we split the vertex $z$ into two vertices $z_{1}$ and $z_{2}$. Let $K$ be any $K_{k-1}$ subgraph in $G$. \\
\textbf{Case 1: $H_{1} = K_{k}$.}\\
First suppose that $H_{2}$ is also isomorphic to $K_{k}$. Then each of $H_{2} -z$ and $H_{1} - xy$ contains a $K_{k-1}$ subgraph. Note that either $x \not \in V(K)$ or $y \not \in V(K)$ since $xy \not \in E(G)$. Additionally, either $V(K) \subseteq V(H_{1}-xy)$ or $V(K) \subseteq V(H_{2}^{z})$. If $V(K) \subseteq V(H_{1}-xy)$, then in $G-K$, there is a $K_{k-1}$ subgraph in $H_{2}-z$. Therefore we may assume that $V(K) \subseteq V(H_{2}^{z})$, and without loss of generality that $x \not \in V(K)$. But then $H_1-y$ contains a $K_{k-1}$ subgraph that is contained in $G-K$, as desired.
Hence we can assume that $H_{2} \neq K_{k}$. By induction, deleting any $K_{k-1}$ subgraph in $H_{2}$ leaves a $K_{k-1}$ subgraph in $H_2$. If $V(K) \subseteq V(H_{1}-xy)$, then $G-K$ contains a $K_{k-1}$ subgraph in $H_{2} - z$ by Observation \ref{deletingavertex}. If $K$ lies in $H_{2}^z$, then there is a $K_{k-1}$ subgraph in at least one of $H_{1}-x$ or $H_{1}-y$, depending on whether $x \in V(K)$, $y \in V(K)$, or neither. \\
\textbf{Case 2: $H_{2} = K_{k}$.}\\
From the previous case we may assume that $H_{1} \neq K_{k}$. Hence by induction, deleting any $K_{k-1}$ subgraph in $H_{1}$ leaves a $K_{k-1}$ subgraph in $H_1$. If $V(K) \subseteq V(H_{1}-xy)$, then $G-K$ has a $K_{k-1}$ subgraph in $H_{2}^{z} - \{z_{1},z_{2}\}$. Therefore $V(K) \subseteq V(H_{2}^{z})$. But since $xy \not \in E(G)$, $K$ uses at most one of $z_{1}$ and $z_{2}$. Thus by Observation \ref{deletingavertex} there is a $K_{k-1}$ subgraph in $G-K$ that lies in $H_{1}-xy$. \\
\textbf{Case 3: Neither $H_{1}$ nor $H_{2}$ is $K_{k}$.}\\
In this case, since the induction hypothesis holds for both $H_{1}$ and $H_{2}$ and any $K_{k-1}$ subgraph uses at most one of $x$ or $y$, it is easy to see the same arguments as above give the result.
\end{proof}
The following definition is helpful to avoid repetition.
\begin{definition}
Let $G$ be a graph. A \textit{$(k-1)$-clique packing of $G$} is a maximum collection of vertex-disjoint $K_{k-1}$ subgraphs in $G$.
\end{definition}
Thus $T^{k-1}(G)$ is the size of a $(k-1)$-clique packing of a graph $G$. It will be useful to bound the size of a $(k-1)$-clique packing of an Ore composition, which we do now.
\begin{prop}
\label{cliqueboundinequality}
If $G$ is an Ore composition of $H_{1}$ and $H_2$ where $H_{1}$ is the edge side and uses edge $xy$ and $H_{2}$ is the split side using vertex $z$, then
\[T^{k-1}(G) \geq T^{k-1}(H_{1}) + T^{k-1}(H_{2}) -f(H_{1},H_{2}),\]
where $f(H_{1},H_{2})$ is defined as follows.
\begin{itemize}
\item If every $(k-1)$-clique packing of $H_{1}$ contains a clique using the edge $xy$, and every $(k-1)$-clique packing of $H_{2}$ contains a clique using the vertex $z$, then $f(H_{1},H_{2}) = 2$.
\item If there exists a $(k-1)$-clique packing of $H_{1}$ where no clique uses the edge $xy$, but every $(k-1)$-clique packing of $H_{2}$ contains a clique using the vertex $z$, then $f(H_{1},H_{2}) = 1$.
\item If every $(k-1)$-clique packing of $H_{1}$ contains a clique using the edge $xy$ but there exists a $(k-1)$-clique packing of $H_{2}$ where no clique uses the vertex $z$, then $f(H_{1},H_{2}) = 1$.
\item If none of these occur, then $f(H_{1},H_{2}) = 0$.
\end{itemize}
\end{prop}
\begin{proof}
Let $H_{1}$ be the edge side of the composition with edge $xy$ and $H_{2}$ the split side, where we split $z$ into vertices $z_{1}$ and $z_{2}$.
For $i \in \{1,2\}$, let $\mathcal{T}_{i}$ be a $(k-1)$-clique packing of $H_{i}$. Let $\mathcal{T}_{1}'$ be obtained from $\mathcal{T}_{1}$ by removing the clique that uses the edge $xy$, if one exists, and let $\mathcal{T}_{2}'$ be obtained from $\mathcal{T}_{2}$ by removing the clique that uses the vertex $z$, if one exists. Note that $|\mathcal{T}_{i}'| \geq |\mathcal{T}_{i}| -1$, with equality if and only if such a clique was removed.
Now $\mathcal{T}_{1}' \cup \mathcal{T}_{2}'$ is a collection of vertex-disjoint $(k-1)$-cliques in $G$ by construction. It follows that
\[T^{k-1}(G) \geq T^{k-1}(H_{1}) + T^{k-1}(H_{2}) -f(H_{1},H_{2}).\]
\end{proof}
\begin{cor}
\label{Kkbound}
If $G$ is the Ore composition of a graph $H$ and $K_{k}$, then $T^{k-1}(G) \geq T^{k-1}(H)$.
\end{cor}
\begin{proof}
Note that $T^{k-1}(K_{k}) =1$, and that for any vertex $v \in V(K_{k})$, there is a $(k-1)$-clique in $K_{k}-v$. Hence regardless of whether $K_{k}$ is the split side or edge side of the composition, by Proposition \ref{cliqueboundinequality}
\[T^{k-1}(G) \geq T^{k-1}(H) +T^{k-1}(K_{k}) -1 = T^{k-1}(H),\]
as desired.
\end{proof}
\begin{cor}
\label{onecliquecharacterization}
The only $k$-Ore graph $G$ with $T^{k-1}(G) =1$ is $K_{k}$.
\end{cor}
\begin{proof}
Let $G$ be a vertex-minimum counterexample. Since $G \neq K_{k}$, $G$ is the Ore composition of two graphs $H_{1}$ and $H_{2}$. If neither $H_{1}$ nor $H_{2}$ is $K_{k}$, then by minimality $T^{k-1}(H_{i}) \geq 2$ for $i \in \{1,2\}$, and hence by Proposition \ref{cliqueboundinequality} we have $T^{k-1}(G) \geq T^{k-1}(H_{1}) + T^{k-1}(H_{2}) -2 \geq 2$.
Now consider the case where $H_{1} =K_{k}$ but $H_{2} \neq K_{k}$. Then $T^{k-1}(G) \geq T^{k-1}(H_{2}) \geq 2$ by Corollary \ref{Kkbound}. Therefore both $H_{1}$ and $H_{2}$ are isomorphic to $K_{k}$.
In this case, without loss of generality we may assume $H_{2}$ is the split side where we split vertex $z$. Observe that $H_{2}-z$ contains a $K_{k-1}$ subgraph, and there is a $K_{k-1}$ subgraph in $H_{1}$ after deleting an edge. Hence every $(k-1)$-clique packing of $G$ has size at least two, a contradiction.
\end{proof}
For the rest of this section we restrict our attention to $4$-Ore graphs, since there is no straightforward generalization of our lemmas to $k$-Ore graphs for $k > 4$.
\begin{definition}
A \textit{kite} in $G$ is a $K_{4}-e$ subgraph $K$ such that the vertices of degree three in $K$ have degree three in $G$. The \textit{spar} of a kite $K$ is the unique edge in $E(K)$ contained in both triangles of $K$.
\end{definition}
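Kites in a small graph can be found by brute force. The sketch below (illustrative Python; graphs are neighbour dictionaries and all names, including the vertex labels, are ours) lists the kites of the Moser spindle $M$: there are exactly two, with spars $ab$ and $qr$ in this labelling, and they share exactly one vertex, in line with Lemma \ref{K4e}.

```python
from itertools import combinations

# Moser spindle: the unique 4-Ore graph on seven vertices (vertex names ours).
edges = ["ab", "ax", "ay", "bx", "by", "pq", "pr", "qr", "xp", "yq", "yr"]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def kites(adj):
    """All kites: K4-minus-an-edge subgraphs whose two degree-3 vertices
    (the spar endpoints) also have degree 3 in the host graph."""
    found = []
    for quad in combinations(sorted(adj), 4):
        inside = [frozenset(p) for p in combinations(quad, 2)
                  if p[1] in adj[p[0]]]
        if len(inside) != 5:          # K4 minus an edge has exactly five edges
            continue
        # spar endpoints = the two vertices of degree 3 within the quad
        deg_in = {v: sum(v in e for e in inside) for v in quad}
        spar = [v for v in quad if deg_in[v] == 3]
        if all(len(adj[v]) == 3 for v in spar):
            found.append((frozenset(quad), frozenset(spar)))
    return found

ks = kites(adj)
print(len(ks))                                    # 2
shared = set.intersection(*(set(q) for q, _ in ks))
print(sorted(shared))                             # ['y']
```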
The following two lemmas partially describe the structure of $4$-Ore graphs with 3-clique packings of size two.
\begin{lemma}
\label{K4e}
If $G$ is a $4$-Ore graph with $T^{3}(G) =2$, then $G$ contains two edge-disjoint kites that share at most one vertex. Furthermore, if $G \neq M$, then $G$ contains two vertex-disjoint kites.
\end{lemma}
\begin{proof}
We proceed by induction on $v(G)$. As $T^{3}(G) =2$, by Corollary \ref{onecliquecharacterization} we have that $G \neq K_{4}$. Hence $G$ is the Ore composition of two graphs $H_{1}$ and $H_{2}$. Up to relabelling, we may assume that $H_{1}$ is the edge side of the composition where we delete the edge $xy$, that $H_{2}$ is the split side where the vertex $z$ is split into two vertices $z_{1}$ and $z_{2}$, and that $x$ is identified with $z_1$ and $y$ with $z_2$ in $G$. We break into cases depending on which (if any) of $H_1$ and $H_2$ is isomorphic to $K_4$. \\
\textbf{Case 1: $H_1 = H_2 = K_4$}. \\ In this case, $G$ is isomorphic to the Moser spindle, which contains two edge-disjoint kites that share exactly one vertex.
\noindent
\textbf{Case 2: $H_{1} = K_{4}$, and $H_2 \neq K_4$.} \\
First suppose that $H_{2} \neq M$. Then by the induction hypothesis, $H_{2}$ contains two vertex-disjoint kites. Note that $z$ belongs to at most one of these kites, and hence $H_{2}^{z}$ contains a kite not containing $z_{1}$ or $z_{2}$. Observe that $H_{1}-xy$ is a kite, and thus in this case $G$ contains two vertex-disjoint kites. Therefore we may assume that $H_{2} = M$. If $z$ is the unique vertex of degree four in $M$, then $T^{3}(G) =3$, a contradiction. But if $z$ is not the unique vertex of degree four, then there is a kite in $M^{z}$ not containing $z_{1}$ or $z_{2}$, and thus this kite and $H_{1}-xy$ give two vertex-disjoint kites.
\noindent
\textbf{Case 3: $H_1 \neq K_4$, and $H_{2} = K_{4}$.} \\
First suppose that $H_{1} \neq M$. Then by the induction hypothesis there are two vertex-disjoint kites in $H_{1}$, say $D_{1}$ and $D_{2}$. Thus either there is a kite in $H_{1}-xy$ that does not contain $x$ or $y$, or up to relabelling $x \in V(D_{1})$ and $y \in V(D_{2})$. In either case, since $H_{2}^{z}$ contains a kite that contains at most one of $z_1$ and $z_2$, it follows that $G$ contains two vertex-disjoint kites.
Therefore $H_{1}=M$. If $T^{3}(H_{1}-xy) =2$, then $T^{3}(G) =3$, a contradiction. Hence in $H_{1}$, both $x$ and $y$ have degree $3$. Further, $x$ and $y$ are not the two vertices that have degree three and are not incident to a spar of a kite. Thus it follows that there is a kite in $H_{1}-xy$ that does not contain $x$ or $y$. Since there is a kite in $H_{2}^{z}$, there are two vertex-disjoint kites in $G$.\\
\textbf{Case 4: $H_{1} \neq K_{4}$, and $H_{2} \neq K_{4}$.}\\
First suppose $H_{2} \neq M$. Then by induction, $H_{2}$ contains two vertex-disjoint kites, and hence $H_{2}^{z}$ contains a kite that does not contain $z_1$ or $z_2$. Similarly, $H_{1}$ contains two edge-disjoint kites by induction. Thus $H_{1}-xy$ contains a kite, and so $G$ contains two vertex-disjoint kites.
Therefore $H_{2} = M$. If $z$ is the unique vertex of degree four in $M$, then $T^{3}(H_{2}^{z}-z_{1}-z_{2}) =2$, and it follows that $T^{3}(G) \geq 3$, a contradiction. Thus $z$ is not the degree four vertex in $M$, and so there is a kite in $H_{2}^{z}$ that does not contain $z_1$ or $z_2$. By induction, there are two edge-disjoint kites in $H_{1}$, and hence there is a kite in $H_{1}-xy$. Thus it follows that there are two vertex-disjoint kites in $G$, as desired.
\end{proof}
\begin{lemma}
\label{splittinglemma}
Let $G$ be $4$-Ore with $T^{3}(G) = 2$. Let $v \in V(G)$ and let $G^{v}$ be the graph obtained by splitting $v$ into two vertices of positive degree $v_{1}$ and $v_{2}$, with $N(v_1) \cup N(v_2) = N(v)$ and $N(v_1) \cap N(v_2) = \emptyset$. Then either
\begin{enumerate}[(i)]
\item{$T^{3}(G^{v}) \geq 2$, or}
\item{ $\deg(v) = 3$, there is an $i \in \{1,2\}$ such that $\deg(v_{i}) = 1$, and the edge $e$ incident to $v_{i}$ is the spar of a kite in $G$.}
\end{enumerate}
\end{lemma}
\begin{proof}
We proceed by induction on the number of vertices. First suppose that $G =M$. If $v$ is the unique vertex of degree four, then it is easy to verify that any split leaves two vertex-disjoint triangles, and hence $T^{3}(G^{v}) \geq 2$. Now suppose that $v$ is incident to one of the two spars of kites. Again, it is easy to check that if the vertex of degree one is incident to the spar of the kite, then $T^{3}(G^{v}) = 1$, and otherwise $T^{3}(G^{v}) \geq 2$. Lastly, if $v$ is either of the other two vertices in $M$, then one simply checks that $T^{3}(G^{v}) \geq 2$ for any split.
Therefore we can assume that $G \neq M$. Let $G$ be the Ore composition of $H_{1}$ and $H_{2}$, where $H_{1}$ is the edge side from which we delete the edge $xy$, and $H_{2}$ is the split side where we split the vertex $z$ into two vertices $z_{1}$ and $z_{2}$. \\
\textbf{Case 1: $H_{1} = K_{4}$.}\\
We can assume that $H_{2} \neq K_{4}$, as otherwise $G = M$. This implies that $T^{3}(H_{2}) = 2$: if $T^{3}(H_{2}) \geq 3$, then Proposition \ref{cliqueboundinequality} implies that $T^{3}(G) \geq 3$, a contradiction. First suppose that $v \in \{x,y\}$. Without loss of generality, let $v =x$. By Observation \ref{deletingavertex} there is a triangle in $H_{2} -z$. Since there is also a triangle in $H_{1}- x$, we have that $T^{3}(G^{v}) \geq 2$ as desired.
Now suppose that $v \in V(H_{1})\setminus \{x,y\}$. Note $H_{1}-xy$ is a kite. Without loss of generality, assume $\deg(v_{1}) =1$. If $v_{1}$ is not incident to the spar of a kite, then there is a triangle on $v_{2},x,y$. By Observation \ref{deletingavertex} there is also a triangle in $H_{2} -z$. This implies that $T^{3}(G^{v}) \geq 2$, as desired; otherwise $(ii)$ holds.
Thus we can assume that $v \in V(H_{2} -z)$. We apply induction to $H_{2}$. If $T^{3}(H_{2}^{v}) \geq 2$, this implies $T^{3}(H_{2}^{v}-z) \geq 1$, and hence $T^{3}(G^{v}) \geq 2$. Otherwise we split $v$ in such a way that in $H_{2}^{v}$, $\deg(v_{1}) = 1$ and $v_1$ is incident to a spar of a kite in $H_{2}$. Let $K$ be this kite. Note that if $z \not \in V(K)$ then the same split occurs in $G^{v}$ and $(ii)$ occurs. Hence $z \in V(K)$. If $T^{3}(H_{2}-z) \geq 2$, then $T^{3}(H_{2}^{v} - z) \geq 1$, and it follows that $T^{3}(G^{v}) \geq 2$. Hence every $3$-clique packing of $H_{2}$ uses the vertex $z$. If $vz$ is the spar in $K$, then without loss of generality we may assume $z_{1}v \in E(G)$ and $z_{2}$ is incident to the other two vertices of $K$ (we may make this assumption as otherwise $T^{3}(G) \geq 3$). But then since $\deg(v) =3$ in both $H_{2}$ and in $G$, $v$ is not in any triangle in $G$. Thus $T^{3}(G^{v}) = T^{3}(G) =2$, as desired. Therefore $z$ is not incident to the spar of $K$. Since every $3$-clique packing of $H_{2}$ contains a triangle using $z$ and $z$ is not incident to the spar of $K$, we have that $H_{2} \neq M$. To see this, note that if $H_{2} =M$, then $H_{2}-z$ contains two vertex-disjoint triangles and then $T^{3}(G) =3$. Thus by Lemma \ref{K4e}, since $H_2 \neq M$ it follows that $H_{2}$ contains two vertex-disjoint kites, $D_{1}$ and $D_{2}$. If neither $D_{1}$ nor $D_{2}$ is $K$, then $T^{3}(H_{2}) \geq 3$, as there is a set of three vertex-disjoint triangles in $D_1 \cup D_2 \cup K$, a contradiction. Thus without loss of generality we may assume $D_{1} = K$. But again, $H_{2}-z$ contains two vertex-disjoint triangles, namely the triangle in $K-z$, and one triangle in $D_{2}$. Hence $T^{3}(G) \geq 3$, a contradiction. \\
\textbf{Case 2: $H_{2} = K_{4}$.}\\
In this case, $T^{3}(H_{1}) = 2$ and every $3$-clique packing of $H_{1}$ uses the edge $xy$ since otherwise $G = M$ or $T^{3}(G) \geq 3$ by Proposition \ref{cliqueboundinequality}.
First suppose that $v \in \{z_{1},z_{2}\}$. Without loss of generality, let $v=z_{1}$. Then since $T^{3}(H_{1}-z_{1}) \geq 1$ by Observation \ref{deletingavertex} and $T^{3}(H_{2}^{z}- \{z_{1},z_{2}\}) =1$, it follows that $T^{3}(G^{v}) \geq 2$, as desired.
Now suppose that $v \in V(H_{1}) \setminus \{x,y\}$. Consider $H_{1}^{v}$, where we perform the same split as in $G^v$. First suppose that $T^3(H_{1}^{v}) \geq 2$. Hence $H_{1}^{v}-xy$ has at least one triangle. Since there is a triangle in $H_{2}^z - \{z_{1},z_{2}\}$, it follows that $T^{3}(G^{v}) \geq 2$ as desired. Therefore we may assume that $T^3(H_1^v) < 2$, and so by induction $v$ is incident to the spar of a kite $K$ in $H_{1}$, and after splitting, $v_{1}$ has degree one and is incident to the spar of $K$. If $K$ is in $G$, then we are done. Therefore we may assume that $xy \in E(K)$, and so that $\{x,y,v\}$ induces a triangle in $H_{1}$. Since $H_1 \neq K_4$ by assumption, by Observation \ref{deletingaclique} we have that $H_1 - \{x,y,v\}$ contains a triangle. As there is also a triangle in $H_{2}- z$ by Observation \ref{deletingavertex}, it follows that $T^{3}(G^{v}) \geq 2$, as desired.
The final case to consider is if $v \in V(H_{2}-z)$. If $v$ is not incident to the spar of the kite in $H_{2}^{z}$, then any split of $v$ leaves a triangle in $(H_2^{z})^v$, and since there is a triangle in $H_{1} - \{x,y\}$, we get that $T^{3}(G^{v}) \geq 2$ as desired. A similar argument works for the other splits, unless we split $v$ in such a way that $v_{1}$ has degree one and is incident to a spar of a kite in $G$, in which case $(ii)$ holds. \\
\textbf{Case 3: Neither $H_{1}$ nor $H_{2}$ is $K_{4}$.}\\
First suppose that $v \in \{x,y\}$ and without loss of generality, that $v =x$. Note that since $T^{3}(H_1) \geq 2$ and $T^3(H_2) \geq 2$, it follows that $T^{3}(H_{1}-x) \geq 1$ and $T^{3}(H_{2}^z -\{z_{1},z_{2}\}) \geq 1$. Hence in this case, after splitting $v$ we have that $T^{3}(G^{v}) \geq 2$.
Now suppose that $v \in V(H_{1})\setminus \{x,y\}$. If $T^3(H_1^v) \geq 2$, then it follows that $T^3(H_1^v-xy) \geq 1$. Since $T^3(H_2^z-\{z_1,z_2\}) \geq 1$, we have that $T^3(G^v) \geq 2$, as desired. Thus by induction we can assume that $\deg(v) = 3$, and $v_{1}$ has degree one and is incident to the spar of a kite in $H_{1}$. Let $K$ be this kite. If $K$ does not contain both $x$ and $y$, then $(ii)$ holds in $G^{v}$, as desired. Otherwise $x,y,v$ induce a triangle, and by Observation \ref{deletingaclique}, $H_{1} - \{x,y,v\}$ contains a triangle, and $H_{2}^{z}- \{z_{1},z_{2}\}$ contains a triangle by Observation \ref{deletingavertex}. Hence $T^{3}(G^{v}) \geq 2$ as desired.
Finally suppose that $v \in V(H_{2}-z)$. If $T^{3}(H_{2}^{z}) \geq 2$, then $T^{3}((H_{2}^{z})^{v}) \geq 1$, and since $T^{3}(H_{1}-xy) \geq 1$, it follows that $T^{3}(G^{v}) \geq 2$ as desired. Therefore $v$ has degree three and lies in a kite $K$ in $H_{2}$, and after splitting, $\deg(v_{1}) = 1$ and $v_{1}$ is incident to the spar of $K$. If $z \not \in V(K)$, then $(ii)$ occurs in $G^{v}$. So $z \in V(K)$. Let $T$ be a triangle in $K$ which contains $z$ and $v$. Then by Observation \ref{deletingaclique}, $H_{2} - T$ contains a triangle, and as deleting any vertex in $H_{1}$ leaves a triangle, it follows that $T^{3}(G^{v}) \geq 2$.
\end{proof}
We now describe the structure of the $4$-Ore graphs with $3$-clique packings of size three.
\begin{lemma}
\label{deletingatriangle}
Let $G$ be a $4$-Ore graph with $T^{3}(G) =3$, and let $T$ be a triangle in $G$. Either $T^{3}(G-T) \geq 2$, or there exists a kite in $G-T$.
\end{lemma}
\begin{proof}
Let $G$ be a vertex-minimum counterexample. Since $G \neq K_{4}$, $G$ is the Ore composition of two $4$-Ore graphs $H_{1}$ and $H_{2}$. Up to relabelling, we may assume that $H_{1}$ is the edge side of the composition where we delete the edge $xy$, and that $H_{2}$ is the split side where we split the vertex $z$ into two vertices $z_{1}$ and $z_{2}$. Let $T$ be a triangle of $G$. Observe that at most one of $z_{1}$ and $z_{2}$ is in $V(T)$. Additionally, notice that if $T^3(H_{i}) \geq 3$ for each $i \in \{1,2\}$, then by Proposition \ref{cliqueboundinequality} we have that $T^{3}(G) \geq T^{3}(H_{1}) +T^{3}(H_{2}) -2 \geq 4$, a contradiction. We break into cases depending on which (if any) of $H_{1}$ or $H_{2}$ is isomorphic to $K_{4}$. \\
\textbf{Case 1: $H_{1} =K_{4}$.} \\
Note that $H_{2} \neq K_{4}$ as otherwise $T^{3}(G) =2$. \\
\textbf{Subcase 1: $T^{3}(H_{2}) = 3$.}\\
We may assume that $T^{3}(H_{2}^{z}-z_1-z_2) \leq 2$ since otherwise $T^{3}(G) \geq 4$, a contradiction. Note this implies that $T^{3}(H_{2}^{z}-z_{1}-z_{2}) = 2$, since $T^{3}(H_{2}-z) \geq T^{3}(H_{2}) -1 \geq 2$. Hence every $3$-clique packing of $H_{2}$ has a triangle which contains the vertex $z$. If $V(T) \subseteq V(H_1)$, then as $T^{3}(H_{2}^{z}-z_{1}-z_{2}) = 2$, we have $T^{3}(G-T) \geq 2$ as desired. Thus we may assume $V(T) \subseteq V(H_{2}^{z})$. Note that $T$ contains one of $z_1$ and $z_2$, since otherwise $H_1-xy$ is a kite in $G-T$, a contradiction. Without loss of generality, let $z_{1} \in V(T)$. Let $T'$ be the triangle in $H_{2}$ whose vertex set is $(V(T) \setminus \{z_{1}\}) \cup \{z\}$. Consider $H_{2}-T'$. By minimality, we have two possibilities: either $T^3(H_2-T') \geq 2$, or $H_2-T'$ contains a kite. First suppose $T^3(H_{2}-T') \geq 2$. Then since $z \in V(T')$, it follows that there are two vertex-disjoint triangles in $H_{2}^{z}-T$, as desired. Therefore $H_2-T'$ contains a kite, $D$; but then $D$ exists in $G-T$, a contradiction.\\
\textbf{Subcase 2: $T^{3}(H_{2}) =2$.}\\
Again up to relabelling $z_{1}$ with $z_{2}$, we may assume $z_{1} \in V(T)$, as otherwise $G-T$ contains the kite $H_{1}-xy$. By Lemma \ref{K4e}, either $H_{2} = M$, or $H_{2}$ contains two vertex-disjoint kites. First consider the case where $V(T) \subseteq V(H_{1})$. If there is a kite in $H_2^z$, then $G-T$ contains a kite and we are done. Thus we may assume that $H_2^z$ does not contain a kite, and so that $H_{2} = M$ and $z$ is the unique vertex of degree four in $H_2$. But in this case there are two vertex-disjoint triangles in $H_{2}-z$, which implies that $T^{3}(G-T) \geq 2$, as desired.
Thus for the remainder of the analysis, we assume that $V(T) \subseteq V(H_{2}^z)$, and $z_{1} \in V(T)$. Let us deal with the case where $H_{2} = M$ first. Suppose that $z$ is the unique vertex of degree four in $M$. Since $z_{1} \in V(T)$, we have that $H_{2}^{z}-T$ contains a triangle not using $z_{2}$. Since $H_{1}-z_{1}$ also contains a triangle it follows that $T^{3}(G-T) \geq 2$, as desired. Thus $z$ is not the unique vertex of degree four in $M$. If $z$ is any of the vertices in $M$ incident to a spar of a kite, then either $T^{3}(G) =2$, a contradiction, or for any triangle intersecting $z_{1}$ in $H_{2}^{z}$, there is a triangle in $H_{2}^{z}-T-z_{2}$. Thus it follows that $T^{3}(G-T) \geq 2$ by using the triangle in $H_{1}-xy$ which contains $z_{2}$. If $z$ is either of the other two vertices of degree three, we have a kite in $H_{2}^{z} - T$, and hence there is a kite in $G-T$.
Therefore $H_{2} \neq M$, and so by Lemma \ref{K4e} we have that $H_{2}$ contains two vertex-disjoint kites $D_{1}$ and $D_{2}$. Without loss of generality, we may assume $V(D_{1}) \subseteq V(H_{2}-z)$. We claim that no vertex of $D_1$ incident with the spar is contained in $T$. To see this, suppose that some vertex $v$ incident with the spar of $D_{1}$ lies in $T$. Since all neighbours of $v$ are in $D_1$ and $z \in V(T)$ (viewing $T$ as a triangle of $H_{2}$ through $z$), it follows that $v$ is adjacent to $z$. But then $z$ is in $D_1$, contradicting the choice of $D_{1}$. Moreover, we claim that at most one vertex of $D_1$ is contained in $T$.
If the two vertices in $D_{1}$ which are not incident to the spar of $D_{1}$ are in $T$, then $G$ contains a $K_{4}$ subgraph, which implies $G = K_{4}$, a contradiction.
It follows that we have $T^{3}(H_{2}-T) \geq 1$. Hence using one of the triangles in $H_{1}-xy$, we see that $T^{3}(G-T) \geq 2$, as desired.
\noindent
\textbf{Case 2: $H_{2} = K_{4}$.}
Note in this case $H_{1} \neq K_{4}$ as otherwise $T^{3}(G) =2$. \\
\textbf{Subcase 1: $T^{3}(H_{1}) = 3$.}\\
In this case, $T^{3}(H_{1}-xy) = 2$ since otherwise Proposition \ref{cliqueboundinequality} implies $T^{3}(G) =4$. It follows that every $3$-clique packing of $H_{1}$ contains a triangle using the edge $xy$, and thus there are two vertex-disjoint triangles in $H_{1}-x-y$. If $V(T) \subseteq V(H_{2}^{z})$, then since $T^{3}(H_{1}-x-y) =2$, it follows that $T^{3}(G-T) \geq 2$ as desired. Thus $V(T) \subseteq V(H_{1})$. Now consider $H_{1}-T$. By minimality, we have two possibilities: either $T^3(H_1-T) \geq 2$, or $H_1-T$ contains a kite. If $H_{1}-T$ contains two vertex-disjoint triangles, then $H_{1}-T-xy$ contains at least one triangle. Since $H_{2}^{z}-\{z_{1},z_{2}\}$ contains a triangle, we see that $T^{3}(G-T) \geq 2$ as desired. Otherwise, $H_{1}-T$ contains a kite $D$. Thus either $xy$ is the spar of $D$, or $T^{3}(H_{1}-T-xy) \geq 1$. If $T^{3}(H_{1}-T-xy) \geq 1$, then again using the triangle in $H_{2}^{z}- \{z_{1},z_{2}\}$ we see that $T^{3}(G-T) \geq 2$. Thus $xy$ is the spar of $D$. In this case, $V(T) \subseteq V(H_{1}- x-y)$ as neither $x$ nor $y$ lies in a triangle. But then $G-T$ contains the kite in $H_{2}^{z}$. \\
\textbf{Subcase 2: $T^{3}(H_{1}) = 2$.} \\
Suppose first that $H_{1} = M$. If $xy$ is not incident to the unique vertex of degree four, then either there is a kite in $H_{1}-xy$ that does not contain $x$ or $y$, or $H_{1}-xy$ contains two edge-disjoint kites. First suppose there is a kite in $H_{1}-xy$ that does not contain $x$ or $y$. Observe there is a kite in $H_{2}^{z}$. Since either $V(T) \subseteq V(H_{1})$ or $V(T) \subseteq V(H_{2}^{z})$, by the structure of the Moser spindle it follows that $G-T$ contains a kite for any $T$. Now consider the case where $H_{1}-xy$ contains two edge-disjoint kites. Since $T$ does not contain both $x$ and $y$, $T$ contains the unique vertex of degree four in $M$, since otherwise $H_1-xy-T$ contains a kite. But then $H_{1}-xy-T$ contains a triangle using (say) $z_{1}=x$. As $H_{2}^{z}-z_{1}$ contains a triangle, we have that $T^{3}(G-T) \geq 2$, as desired.
So we may assume that $xy$ is incident to the unique vertex of degree four in $H_1=M$. Note that in this case, either up to relabeling $\deg(z_1) = 5$ and $\deg(z_2) = 3$, or $\deg(z_1) = \deg(z_2) = 4$. If $\deg(z_1) = 5$, then $T$ contains $z_1$: otherwise, $G-T$ contains a kite. If $T \subseteq H_1-xy$, then since $H_1-xy$ contains a triangle disjoint from $T$ and $H_2-z$ contains a triangle, it follows that $T^3(G-T) = 2$, as desired. If on the other hand $T \subseteq H_2^z$, then since $T^3(H_1-xy-z_1)\geq 2$, again it follows that $T^3(G-T) \geq 2$. Thus we may assume $\deg(z_1) = \deg(z_2) = 4$. But then $G$ contains two vertex-disjoint kites $K_1$ and $K_2$, and no triangle in $G$ intersects both $K_1$ and $K_2$. Thus $G-T$ contains a kite, as desired.
Therefore by Lemma \ref{K4e} we may assume that $H_{1} \neq M$, and so $H_1$ contains two vertex-disjoint kites $D_{1}$ and $D_{2}$. Up to relabelling, let $z_{1}$ be in the kite in $H_{2}^{z}$. First suppose $V(T) \subseteq V(H_{2}^{z})$. Note there is a kite in $H_{1}-xy$ not using $z_1$. Since $z_1z_2 \not \in E(G)$, we have that $z_{2} \not \in V(T)$. Thus $H_1-xy-T = H_1-xy-z_1$, and so $G-T$ contains at least one of the kites $D_{1}$ and $D_{2}$, as desired. Thus we may assume $T \subseteq H_1-xy$. Note then that $z_1 \in V(T)$, since otherwise $G-T$ contains the kite in $H_2^z$. Thus $H_1-xy-T = H_1-T$, since $xy$ is incident with a vertex in $T$. By Observation \ref{deletingaclique}, $H_1-T$ contains a triangle. Since $H_2-z$ also contains a triangle, it follows that $T^3(G-T) \geq 2$, as desired.
\\
\noindent
\textbf{Case 3: Neither $H_{1}$ nor $H_{2}$ is $K_{4}$.} \\
\textbf{Subcase 1: $T^{3}(H_{1}) =2$ and $T^{3}(H_{2}) = 2$.}\\
Note that by Lemma \ref{K4e}, each of $H_1$ and $H_2$ contains two edge-disjoint kites. If $T \subseteq H_2^z-z_1-z_2$, then $H_1-xy$ (and therefore $G-T$) contains a kite, as desired. Moreover, if $T \subseteq H_1-z_1-z_2$, then either $H_2^z$ (and therefore $G-T$) contains a kite, or $H_2 = M$ and $z$ is the unique vertex of degree four in $M$, in which case $T^3(G-T)\geq T^3(H_2-z) \geq 2$. Thus we may assume that $T$ contains one of $z_1$ and $z_2$: up to relabeling, suppose $T$ contains $z_1$. If $T \subseteq H_1-xy$, then $H_1-xy-T = H_1-T$, since $xy$ is incident with $z_1 \in V(T)$. By Observation \ref{deletingaclique}, $H_1-T$ (and therefore $G-T$) contains a triangle. Since $T^3(H_2) = 2$, there is a triangle in $H_2-z$, and so $T^3(G-T) \geq 2$ as desired. If, on the other hand, $T \subseteq H_2^z$, then since $z_1 \in V(T)$ and $z_2 \not\in V(T)$ (as $z_1z_2 \not\in E(G)$), we have $H_1-T = H_1-z_1$, and since $T^3(H_1) = 2$, we have $T^3(H_1-z_1) \geq 1$. By Observation \ref{deletingaclique}, $H_2^z-T$ also contains a triangle. Thus $T^3(G-T) \geq 2$, as desired. \\
\textbf{Subcase 2: $T^{3}(H_{1}) = 3$.}\\
Note $T^3(H_2) = 2$, since $H_2 \neq K_4$ by assumption and, as noted prior to Case 1, if $T^3(H_1) \geq 3$, then $T^3(H_2) < 3$. Suppose first that every $3$-clique packing of $H_1$ uses the edge $xy$. Then $H_1-x-y$ has a $3$-clique packing of size two, and so $T \subseteq H_1-xy$, as otherwise $T^3(G-T) \geq 2$ and we are done. Similarly, if there is a $3$-clique packing of $H_1$ that does not use the edge $xy$, then $T^3(H_1-xy) = 3$, and so again $T \subseteq H_1-xy$, as otherwise $T^3(G-T) \geq 2$ and we are done (since $T$ contains at most one of $x$ and $y$). Since $T^3(H_2) = 2$, it follows from Lemma \ref{K4e} that $H_2$ contains two edge-disjoint kites $D_1$ and $D_2$. Thus either $H_2^z$ contains a kite that contains neither $z_1$ nor $z_2$ (and so $G-T$ contains this kite), or $z \in V(D_1) \cap V(D_2)$. In this case, $H_2 = M$, and $z$ is the unique vertex of degree four in $M$. But then $H_2^z-z_1-z_2$ contains a $3$-clique packing of size two, and since $T \subseteq H_1-xy$, it follows that $T^3(G-T) \geq 2$ as desired. \\
\textbf{Subcase 3: $T^{3}(H_{2})=3$.} \\
Then $T^3(H_2^z-\{z_1, z_2\}) \geq 2$. It follows that $V(T) \subseteq V(H_2^z)$, as otherwise $T^3(G-T) \geq 2$. Note that since $H_1 \neq K_4$ by assumption, $T^3(H_1) \neq 1$. Furthermore, as noted prior to Case 1, $T^3(H_1) < 3$. Thus $T^3(H_1) = 2$. By Lemma \ref{K4e}, it follows that either $H_1 = M$, or $H_1$ contains two vertex-disjoint kites. Suppose first $H_1 = M$. Then either $H_1-xy-T$ contains a kite, or $T^3(H_1-xy-T) \geq 2$, since $T \subseteq H_2^z$. (To see this, note that since $T \subseteq H^z_2$ and $T$ contains at most one of $x$ and $y$, removing the edge $xy$ and $T$ from $H_1$ amounts to deleting one edge and at most one of its incident vertices.) Thus we may assume $H_1 \neq M$, and so $H_1$ contains two vertex-disjoint kites $D_1$ and $D_2$. But then $H_1-xy-T$ contains a kite (since removing $xy$ and $T$ from $H_1$ again amounts to deleting an edge and at most one of its incident vertices, and $D_1$ and $D_2$ are vertex-disjoint).
\end{proof}
\begin{definition}
Let $G$ be a $4$-Ore graph with $T^3(G) =3$. An edge $f$ is \textit{foundational} if both $T^3(G-f) =2$ and there is no kite in $G-f$.
\end{definition}
\begin{lemma}
\label{foundationaledgesin4Ore}
Let $G$ be a $4$-Ore graph with $T^3(G) =3$. Then there is at most one foundational edge in $G$. Moreover, if $f$ is a foundational edge, then $f$ is the spar of a kite.
\end{lemma}
\begin{proof}
Suppose not and let $G$ be a vertex-minimum counterexample. As $G \neq K_{4}$, $G$ is the Ore composition of two $4$-Ore graphs $H_{1}$ and $H_{2}$. Up to relabelling, let $H_{1}$ be the edge side of the composition where we delete the edge $xy$ and $H_{2}$ the split side of the composition where we split $z$ into two vertices $z_{1}$ and $z_{2}$, and identify $z_{1}$ with $x$ and $z_{2}$ with $y$. Let $f$ be an edge in $G$. \\
\textbf{Case 1: $H_{1} = K_{4}$.} \\
Suppose $f$ is foundational. Observe that if $f \not \in E(H_{1})$, then $f$ is not foundational, since there is a kite left over after deleting $f$. If $T^{3}(H_{2}^{z}) = 3$, then since $f \in E(H_{1})$ we have that $T^{3}(G-f)=3$, a contradiction. Thus $T^3(H_2^z) = 2$, and so $T^3(H_2) \leq 3$. Further, $T^{3}(H_{2}) \geq 2$, since if $H_{2} =K_{4}$, then $G = M$ and $T^{3}(M) =2$, contradicting that $T^{3}(G) =3$.
If $T^3(H_2) = 3$, then there are two vertex-disjoint triangles in $H_{2}-z$, say $T_{1}$ and $T_{2}$. If $f$ is not the spar of the kite $H_{1}-xy$, then $H_{1}-xy-f$ contains a triangle, which together with $T_{1}$ and $T_{2}$ gives $T^{3}(G-f) \geq 3$, contradicting that $f$ is foundational. Therefore in this case there is at most one foundational edge, and if there is a foundational edge, it is the spar of a kite.
Thus we may assume $T^3(H_2) = 2$. By Lemma \ref{K4e}, we have that $H_2$ contains two edge-disjoint kites that share at most one vertex. If $H_{2}^{z}$ contains a kite, then $G-f$ contains a kite, and so $f$ is not foundational. Thus by Lemma \ref{K4e}, the two kites are not vertex-disjoint, and so $H_2 = M$, and further $z$ is the unique vertex of degree four in $M$. But for any split of $z$ into $z_1$ and $z_2$, we get that $T^3(H_2^z -z_{1}-z_{2}) = 2$. Thus if $f$ is not the spar of the kite $H_1-xy$, then $G-f$ has a $3$-clique packing of size three, a contradiction. \\
\textbf{Case 2: $H_{2} = K_{4}$.} \\
By possibly relabelling, let $z_1$ be the vertex of degree two in $H_2^z$ resulting from the split of $z$. Notice that splitting $K_{4}$ leaves a kite subgraph, and hence as $f$ is foundational, $f$ is in $E(H_{2})$. Furthermore, $f$ is not incident with $z_1$ or $z_2$, as otherwise $T^3(G-f) = T^3(G) =3$.
Note that $T^{3}(H_{1}) =3$, since if $T^{3}(H_{1}) =2$, then by Lemma \ref{K4e} $H_{1}$ contains two edge-disjoint kites, and thus $H_{1}-xy$ contains at least one kite, contradicting that $f$ is foundational. Observe that $T^{3}(H_{1}-xy) =2$ and there exists a $3$-clique packing of $H_{1}-xy$ which does not use $x$ or $y$. To see this: if $T^{3}(H_{1}-xy) = 3$, then any $3$-clique packing of $H_{1}-xy$ combined with the triangle in $H_{2}-z$ gives four vertex-disjoint triangles, contradicting that $T^{3}(G) =3$. Thus every $3$-clique packing of $H_{1}$ uses $xy$, and hence there exists a $3$-clique packing of $H_{1}-xy$ which does not use $x$ or $y$. Therefore if $f$ is not the spar of the kite contained in $H_{2}^{z}$, then $G-f$ contains three vertex-disjoint triangles, contradicting that $f$ is foundational.\\
\textbf{Case 3: Neither $H_{1}$ nor $H_{2}$ is $K_{4}$.} \\
Note that either $T^{3}(H_{1}) = 2$ or $T^{3}(H_{2}) =2$, as otherwise $T^{3}(G)\geq 4$ by Proposition \ref{cliqueboundinequality}.
First suppose both $T^{3}(H_{1})=2$ and $T^{3}(H_{2})=2$. Then by Lemma \ref{K4e}, in both $H_{1}$ and $H_{2}$ there are two edge-disjoint kites which share at most one vertex. Hence there is a kite in $H_{1}-xy$. If $f \in E(H_{2})$, then $G -f$ thus contains a kite, contradicting that $f$ is foundational. Therefore $f \in E(H_{1})$. If $H_{2}^{z}$ contains a kite, then $G-f$ contains a kite, and thus in this case $G$ contains no foundational edge. It follows that the kites in $H_2$ were not vertex-disjoint, and hence that $H_{2} = M$, and $z$ is the unique vertex of degree four in $M$. Thus $T^{3}(H_{2}^{z} - \{z_{1},z_{2}\}) =2$. Moreover, since $H_1-xy$ contains a kite, if $f$ is not the spar of the kite in $H_1-xy$ then $T^3(G-f) = 3$, contradicting that $f$ is foundational.
Now suppose that $T^{3}(H_{1}) =3$. Since $T^3(G) = 3$, this implies that $T^{3}(H_{2}) =2$. Then by Lemma \ref{K4e}, there are two edge-disjoint kites in $H_{2}$ which share at most one vertex. If there is no kite in $H_{2}^{z}$, then $H_2 = M$ and $z$ is the unique vertex of degree four in $M$. But then $T^{3}(H_{2}^{z}-z_1-z_2) =2$, which implies that $T^{3}(G) \geq 4$, a contradiction. Thus there is a kite in $H_{2}^{z}$. If both of the edge-disjoint kites in $H_2$ lie in $H_{2}^{z}$, then $G-f$ contains a kite for every edge $f$, and hence $G$ has no foundational edge. Therefore $H_{2}^{z}$ contains exactly one kite, and this kite contains neither $z_1$ nor $z_2$. If $f$ does not lie in this kite, then $G-f$ contains a kite, and $f$ is not foundational. If $f$ is not the spar, then $T^3(H_2^z-f-z_1-z_2) \geq 1$, and so $T^3(G-f) \geq 3$, contradicting that $f$ is foundational. Thus there is at most one foundational edge, and it is the spar of a kite.
Now suppose that $T^3(H_{2}) =3$. Then $T^3(H_2^z -z_1-z_2) \geq 2$. Since $T^3(G) = 3$, this implies that $T^{3}(H_{1})=2$. Thus by Lemma \ref{K4e}, $H_{1}$ contains two edge-disjoint kites which share at most one vertex, and hence $H_{1}-xy$ contains a kite. If $f$ does not lie in this kite, then $f$ is not foundational. Moreover, if $f$ is not the spar of this kite, then $T^{3}(G-f) \geq 3$, by using two triangles in $H_{2}-z$ and a triangle from $H_1-xy-f$. Hence there is at most one foundational edge, and it is the spar of a kite.
\end{proof}
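The arguments above repeatedly test small graphs for kite subgraphs. As an illustration only (the encoding of graphs as vertex/edge lists and the labelling of the Moser spindle are ours, not the paper's), a kite, being $K_4$ minus an edge, can be found by a brute-force search for four vertices spanning at least five edges:

```python
from itertools import combinations

def has_kite(vertices, edges):
    """A kite is K_4 minus an edge: four vertices spanning at least five edges."""
    E = {frozenset(e) for e in edges}
    for quad in combinations(sorted(vertices), 4):
        if sum(frozenset(p) in E for p in combinations(quad, 2)) >= 5:
            return True
    return False

# The Moser spindle, labelled here as an Ore composition of two copies of K_4.
M_V = ['x', 'y', 'a', 'b', 'p', 'q', 'r']
M_E = [('x', 'a'), ('x', 'b'), ('y', 'a'), ('y', 'b'), ('a', 'b'),
       ('p', 'q'), ('p', 'r'), ('q', 'r'), ('x', 'p'), ('y', 'q'), ('y', 'r')]
assert has_kite(M_V, M_E)          # e.g. the quadruple {x, y, a, b}
assert not has_kite(list(range(5)), [(i, (i + 1) % 5) for i in range(5)])  # C_5
```

A quadruple spanning all six edges is a $K_4$, which contains a kite as a subgraph, so the threshold of five is the right test for subgraph (rather than induced) containment.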
\begin{lemma}
\label{4Oresplit3triangle}
Let $G$ be $4$-Ore with $T^3(G) =3$. Let $v \in V(G)$, and let $G^{v}$ be obtained from $G$ by splitting $v$ into two vertices of positive degree $v_{1}$ and $v_{2}$ with $N(v_1) \cup N(v_2) = N(v)$ and $N(v_1) \cap N(v_2) = \emptyset$. Then one of the following occurs:
\begin{enumerate}[(i)]
\item{$T^{3}(G^{v}) \geq 3$,}
\item{$G^{v}$ contains a kite,}
\item{there is an $i \in \{1,2\}$ such that $\deg(v_{i}) =1$, and the edge incident to $v_{i}$ is foundational in $G$.}
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose not. Let $v \in V(G)$, and suppose that $G$ is a vertex-minimum counterexample. As $G \neq K_{4}$, $G$ is the Ore composition of two $4$-Ore graphs $H_{1}$ and $H_{2}$. Up to relabelling, let $H_{1}$ be the edge side of the composition where we delete the edge $xy$ and $H_{2}$ the split side of the composition where we split $z$ into two vertices $z_{1}$ and $z_{2}$, and identify $z_{1}$ with $x$ and $z_{2}$ with $y$. Note that at least one of $H_{1}$ or $H_{2}$ is not $K_{4}$, as otherwise $G = M$ and $T^{3}(M) =2$. \\
\textbf{Case 1: $H_{1} =K_{4}$.}\\
Observe that $H_{1}-xy$ is a kite, so if $v \not \in V(H_{1})$, then $G^{v}$ contains a kite and so (ii) holds. Hence we assume that $v \in V(H_{1})$.
Suppose $T^{3}(H_{2}) =3$. Observe that there are two vertex-disjoint triangles in $H_{2}-z$. If $H_{1}^{v}-xy$ contains a triangle, then $T^{3}(G^{v}) \geq 3$ and (i) holds. Notice that if we split either $x$ or $y$, then a triangle of $H_{1}-xy$ survives, so we can assume that $v \in V(H_{1}) - \{x,y\}$. Let $w$ be the other vertex in $V(H_{1}) - \{x,y\}$. Observe there is exactly one way to split $v$ into $v_{1}$ and $v_{2}$ so that no triangle is left in $H_{1}^{v} - xy$: up to relabelling, $v_{1}$ is adjacent only to $w$, and $v_{2}$ is adjacent to $x$ and $y$. To finish, notice that the number of vertex-disjoint triangles in $G^{v}$ after performing such a split is the same as the number of vertex-disjoint triangles in $G-vw$. Hence if $T^{3}(G-vw) \geq 3$, we have $T^{3}(G^{v}) \geq 3$ and thus (i) holds. So $T^{3}(G-vw) = 2$; further, $G^{v}$ has no kite subgraph (as otherwise (ii) holds), which implies that $G-vw$ does not contain a kite. Thus $vw$ is foundational, and so (iii) holds.
Therefore we can assume that $T^{3}(H_{2}) = 2$. If there is a kite in $H_{2}^{z} -z_{1}-z_{2}$, then $G$ contains two vertex-disjoint kites, and thus there is a kite in $G^{v}$ and (ii) holds. Thus by Lemma \ref{K4e}, we have that $H_{2} = M$ and $z$ is the unique vertex of degree four in $M$. Then $T^{3}(H_{2}^{z}-z_{1}-z_{2}) =2$. Therefore we can assume that $v \not \in \{x,y\}$, as otherwise $T^{3}(G^{v}) \geq 3$ and (i) holds. Let $w$ be the other vertex in $H_{1}-x-y$. By the same argument as in the case $T^{3}(H_{2})=3$, there is exactly one split so that $T^{3}(G^{v}) \leq 2$, and in this case we split $v$ into two vertices $v_{1},v_{2}$ where, without loss of generality, $\deg(v_{1}) = 1$ and $v_{1}$ is incident to a foundational edge in $G$. In this case, (iii) holds.\\
\textbf{Case 2: $H_{2} =K_{4}$.} \\
Throughout this case, without loss of generality let $z_{1}$ have degree two in $H_{2}^{z}$ and $z_{2}$ have degree one in $H_{2}^{z}$. Observe that $H_{2}^{z}-z_{2}$ contains a kite, so if $v \not \in V(H_{2}^{z})-z_{2}$, then $G^{v}$ contains a kite and (ii) holds. Suppose that $T^{3}(H_{1}) = 3$. Then $T^{3}(H_{1}-x) \geq 2$, and it follows that if $v =z_{1}$, then $T^{3}(G^{v}) \geq 3$ and (i) holds, as desired. Therefore we can assume that $v \in V(H_{2}) - \{z_{1},z_{2}\}$. If $v$ is adjacent to $z_{2}$, then there is a triangle in $H_{2}^{z}$ after splitting $v$. Since $T^3(H_1-x) \geq 2$, we get $T^{3}(G^{v}) \geq 3$ and (i) holds. If $v$ is either of the other two possible vertices, the only split which does not leave a triangle is one where, up to relabelling, $v_{1}$ has degree one and is incident to the spar of the kite in $H_{2}^{z}$. Let $f$ be this edge. If $T^{3}(G-f) \geq 3$, then $T^{3}(G^{v}) \geq 3$ and (i) holds; so $T^{3}(G-f) =2$. Further, $G-f$ does not contain a kite, as otherwise $G^{v}$ contains a kite (satisfying (ii)). Hence $f$ is foundational in $G$, and thus $v_{1}$ is incident to the foundational edge in $G$. Thus (iii) holds, as desired.
Therefore we can assume that $T^{3}(H_{1}) = 2$. Then by Lemma \ref{K4e}, either $H_{1} =M$ or $H_{1}$ contains two vertex-disjoint kites. If $H_{1}-x-y$ contains a kite, then there are two vertex-disjoint kites in $G$, and hence $G^{v}$ contains a kite, satisfying (ii). Thus $H_{1}=M$, and $T^{3}(H_{1}-xy) =2$. Then if we split $z_{1}$, observe we have $T^{3}(G^{v}) \geq 3$ (thus (i) holds), and if we split $z_{2}$, we have a kite in $G^{v}$ (and so (ii) holds). If $v$ is adjacent to $z_{2}$ in $H_{2}^{z}$, then observe that any split of $v$ results in a triangle, and hence $T^{3}(G^{v}) \geq 3$ and (i) holds. Thus $v$ is a vertex in $H_{2}^{z}$ incident to the spar of the kite. By the same arguments as in the case when $T^{3}(H_{1}) =3$, the only split of such a vertex that does not leave a triangle results in, up to relabelling, $v_{1}$ having degree one and being incident to a foundational edge in $G$. But then (iii) holds, as desired. \\
\textbf{Case 3: $T^{3}(H_{1}) =2$.} \\
By the previous cases, we may assume that $H_{2} \neq K_{4}$. Thus $T^{3}(H_{2}) \geq 2$. By Lemma \ref{K4e}, either $H_{1} = M$ or $H_{1}$ contains two vertex-disjoint kites. In either case, there is a kite subgraph in $H_{1}-xy$. Let $L$ be such a subgraph. Then $v \in V(L)$, as otherwise $G^{v}$ contains a kite subgraph, satisfying (ii).
First suppose that $T^{3}(H_{2}) = 3$. If we split $v$ and there is still a triangle left in $H_{1}^{v}-xy$, then as $T^{3}(H_{2}-z) \geq 2$, we have $T^{3}(G^{v}) \geq 3$. Hence (i) holds. Therefore if we split $v$, there is no triangle left in $H_{1}^{v}-xy$, and by the same arguments as in previous cases, this implies that $v$ is incident to the spar of $L$, and we split $v$ in such a way that up to relabelling $v_{1}$ is incident to the spar of the kite and has degree one in $G^{v}$. Further, the spar of $L$ is foundational, satisfying (iii); otherwise, such a split satisfies at least one of (i) and (ii).
Thus we may assume that $T^{3}(H_{2}) = 2$. First suppose that $T^{3}(H_{2}-z) =2$. Then if we split $v$ in $L$ and are left with a triangle, $T^{3}(G^{v}) \geq 3$ and (i) holds. Thus by the same arguments as in previous cases, $v$ is incident to the spar of $L$, and we split $v$ in such a way that up to relabelling $v_{1}$ is incident to the spar of the kite and has degree one in $G^{v}$. Further, the spar of $L$ is foundational, satisfying (iii); otherwise, such a split satisfies at least one of (i) and (ii). Thus $T^{3}(H_{2}-z) =1$. By Lemma \ref{K4e}, since $T^3(H_2) = 2$ either $H_{2} = M$ or $H_{2}$ has two vertex-disjoint kites. As $T^{3}(H_{2}-z) =1$, this implies that $z$ is incident to a spar of a kite. But then regardless of whether $H_{2} = M$ or $H_2$ has two vertex-disjoint kites, we have that there is a kite in $H_{2}^{z}$ that does not contain either of $z_1$ or $z_2$. But then $G^{v}$ contains a kite, satisfying (ii). \\
\textbf{Case 4: $T^{3}(H_{2}) =2$.} \\
From the previous cases, we may assume that $T^{3}(H_{1}) = 3$. Then $T^{3}(H_{1}-xy) \geq 2$, and it equals two only if every $3$-clique packing of $H_{1}$ contains a triangle which uses the edge $xy$. Note by Proposition \ref{cliqueboundinequality}, if $T^{3}(H_{1}-xy) \geq 3$, then $T^{3}(G) \geq 4$, a contradiction. Hence there are two disjoint triangles in $H_{1}-xy$ which do not use $x$ or $y$. Therefore $T^{3}(H_{2}^{z}) = 1$, as otherwise by Proposition \ref{cliqueboundinequality}, $T^{3}(G) \geq 4$, a contradiction. By Lemma \ref{K4e}, this implies that there is a kite $L$ in $H_{2}^z$ that does not contain $z_{1}$ or $z_{2}$. If $v \not \in V(L)$, then $G^{v}$ contains a kite, satisfying (ii). If splitting $v$ leaves a triangle, then $T^{3}(G^{v}) \geq 3$ and so (i) holds; hence $v$ is incident to the spar of $L$. Let $w$ be the other vertex of degree three in $L$. It follows that, up to relabelling, after splitting we have $v_{1}w \in E(G^{v})$ and $v_{2}$ is incident to the other two edges at $v$. Thus $\deg(v_{1}) =1$. Note that if $vw$ is not foundational, then this split satisfies one of (i) and (ii) and we are done. Hence $vw$ is foundational, and thus (iii) holds. \\
\textbf{Case 5: Both $T^{3}(H_{1}) =3$ and $T^{3}(H_{2}) =3$.} \\
Then by Proposition \ref{cliqueboundinequality}, $T^{3}(G) \geq 3 + 3 -2 =4$, a contradiction.
\end{proof}
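The case analysis above leans on values such as $T^{3}(K_{4}) = 1$ and $T^{3}(M) = 2$. These graphs are small enough that the values can be confirmed by exhaustive search; the sketch below is our own sanity check (with our own labelling of the Moser spindle), not part of the proof:

```python
from itertools import combinations

def t3(vertices, edges):
    """T^3: the maximum number of vertex-disjoint triangles, by brute force."""
    E = {frozenset(e) for e in edges}
    tris = [t for t in combinations(sorted(vertices), 3)
            if all(frozenset(p) in E for p in combinations(t, 2))]
    for k in range(len(tris), 0, -1):
        for pack in combinations(tris, k):
            used = [v for t in pack for v in t]
            if len(used) == len(set(used)):   # pairwise vertex-disjoint
                return k
    return 0

K4_V = ['u1', 'u2', 'u3', 'u4']
K4_E = list(combinations(K4_V, 2))
assert t3(K4_V, K4_E) == 1

# The Moser spindle M: 7 vertices, 11 edges, one vertex (y) of degree four.
M_V = ['x', 'y', 'a', 'b', 'p', 'q', 'r']
M_E = [('x', 'a'), ('x', 'b'), ('y', 'a'), ('y', 'b'), ('a', 'b'),
       ('p', 'q'), ('p', 'r'), ('q', 'r'), ('x', 'p'), ('y', 'q'), ('y', 'r')]
assert t3(M_V, M_E) == 2
```

With only seven vertices, $M$ cannot contain three vertex-disjoint triangles, and the packing $\{x,a,b\}, \{p,q,r\}$ shows $T^3(M) = 2$.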
\section{Properties of graphs in $\mathcal{B}$}
\label{T8structure}
In this section we prove lemmas analogous to those in Section \ref{cliquesection}, except that now we focus on graphs in $\mathcal{B}$. We recall the definition of $\mathcal{B}$: the graph $T_{8}$ is in $\mathcal{B}$, and given a graph $G \in \mathcal{B}$ and a $4$-Ore graph $H$, the Ore composition $G'$ of $G$ and $H$ is in $\mathcal{B}$ if $T^{3}(G') =2$. We start off by proving that the potential of graphs in $\mathcal{B}$ is in fact $-1$.
Let the \textit{Kostochka-Yancey potential} of a graph $G$ be $\text{KY}(G) = 5v(G) -3e(G)$. The following observation is immediate from the definition of Ore composition.
\begin{obs}
\label{Orecomposition}
Let $G$ be the Ore composition of two graphs $H_{1}$ and $H_{2}$. Then $v(G) = v(H_{1}) +v(H_{2}) -1$, $e(G) = e(H_{1}) + e(H_{2}) -1$, and $\text{KY}(G) = \text{KY}(H_{1}) + \text{KY}(H_{2}) -2$.
\end{obs}
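The counts in this observation can be checked on a concrete composition, here of two copies of $K_{4}$; the labels and the particular split of $z$ below are our own illustrative choices:

```python
from itertools import combinations

def K4(labels):
    """Return (vertex set, edge set) of the complete graph on the given labels."""
    return set(labels), {frozenset(e) for e in combinations(labels, 2)}

# Edge side H1: delete the edge xy.
V1, E1 = K4(['x', 'y', 'a', 'b'])
# Split side H2: split z into z1 (keeping neighbour p) and z2 (keeping q, r),
# then identify z1 with x and z2 with y.
V2, E2 = K4(['z', 'p', 'q', 'r'])

V = (V1 | V2) - {'z'}                     # z is replaced by x and y
E = ((E1 - {frozenset(('x', 'y'))})
     | {e for e in E2 if 'z' not in e}
     | {frozenset(('x', 'p')), frozenset(('y', 'q')), frozenset(('y', 'r'))})

KY = lambda n, m: 5 * n - 3 * m           # Kostochka-Yancey potential
assert len(V) == len(V1) + len(V2) - 1    # v(G) = v(H1) + v(H2) - 1
assert len(E) == len(E1) + len(E2) - 1    # e(G) = e(H1) + e(H2) - 1
assert KY(len(V), len(E)) == KY(len(V1), len(E1)) + KY(len(V2), len(E2)) - 2
```

The potential identity also follows symbolically: $5(v_1+v_2-1) - 3(e_1+e_2-1) = \text{KY}(H_1) + \text{KY}(H_2) - 2$.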
\begin{cor}
Let $G \in \mathcal{B}$. Then $\text{KY}(G) =1$, and $p(G) =-1$.
\end{cor}
\begin{proof}
Let $H$ be a vertex-minimum counterexample. If $H=T_{8}$, then $v(T_{8}) = 8$ and $e(T_{8}) = 13$, and thus $\text{KY}(T_{8}) = 5 \cdot 8 - 3 \cdot 13 =1$. Now suppose $H$ is the Ore composition of two graphs $H_{1}$ and $H_{2}$. Without loss of generality, $H_{1} \in \mathcal{B}$ and $H_{2}$ is $4$-Ore. By minimality, it follows that $\text{KY}(H_1) = 1$. Note that by Theorem \ref{KY}, $\text{KY}(H_2) = 2$. Then by Observation \ref{Orecomposition}, it follows that $\text{KY}(H) = 1 + 2 -2=1$, as desired.
Since $\text{KY}(G) = 1$ and $T^{3}(G) = 2$, it follows that $p(G) =-1$, as desired.
\end{proof}
We overload the terminology.
\begin{definition}
Given a graph $G \in \mathcal{B}$, an edge $e \in E(G)$ is \textit{foundational} if $T^{3}(G-e) =1$ and there is no $K_{4}-e$ subgraph in $G-e$.
\end{definition}
Note in this definition we enforce that $G-e$ contains no $K_{4}-e$ subgraph; such a subgraph need not be a kite.
\begin{lemma}
\label{foundationaledgesinB}
If $G$ is a graph in $\mathcal{B}$, then $G$ contains at most one foundational edge. Further, if $G$ is not $T_{8}$ and $G$ contains a foundational edge, then this edge is the spar of a kite.
\end{lemma}
\begin{proof}
Suppose not. First suppose that $G = T_{8}$. Observe that the edge $u_{1}u_{2}$ is the only foundational edge in $G$. Hence we may assume that $G$ is the Ore composition of a $4$-Ore graph $H_{1}$ and a graph $H_{2}$ in $\mathcal{B}$. Let $f$ be an edge in $G$. Note that $T^{3}(H_{1}) \leq 2$, as otherwise $T^{3}(G) \geq 3$ by Proposition \ref{cliqueboundinequality}.\\
\textbf{Case 1: $H_{1} = K_{4}$.} \\
Suppose that $H_{1}$ is the edge side where we delete the edge $xy$, and we split a vertex $z$ in $H_{2}$ into two vertices $z_{1}$ and $z_{2}$. Note that if $f$ is not in $H_{1}-xy$, then $G-f$ contains a kite. Hence $f$ lies in $E(H_{1}-xy)$.
As $H_{1}-xy$ is a kite, if we delete any edge that is not the spar of this kite, we have $T^{3}(H_{1}-xy-f) \geq 1$. Further, there is at least one triangle in $H_{2}$ which does not use $z$, and hence $T^{3}(G-f) \geq 2$. Thus the only possible foundational edge is the spar of a kite, as desired.
Now suppose that $H_{1}$ is the split side where we split $z$ into $z_{1}$ and $z_{2}$. Then $H_{1}^{z}$ contains a kite. If $f$ does not lie in this kite, $G-f$ contains a kite as desired. If $f$ is not the spar of the kite, then $T^{3}(H_{1}^{z}-f) \geq 1$, and since there is a triangle in $H_{2}-xy$, we see that $T^{3}(G-f) \geq 2$, as desired. Hence there is at most one foundational edge, and if there is a foundational edge it is the spar of a kite. \\
\textbf{Case 2: $T^{3}(H_{1}) =2$.}\\
By Lemma \ref{K4e}, either $H_{1} = M$ or $H_{1}$ contains two vertex-disjoint kites. First suppose that $H_{1}$ is the edge side of the composition, where we delete the edge $xy$. By Proposition \ref{cliqueboundinequality} and the fact that $T^{3}(G) = 2$, every $3$-clique packing of $H_{1}$ contains a triangle using the edge $xy$. Thus regardless of whether $H_{1} = M$ or not, there is a kite $L$ in $H_{1}-xy$. Then $f$ is in $L$, as otherwise $G-f$ contains a kite. If $f$ is not the spar of $L$, then $T^{3}(H_{1}-f-xy) \geq 1$, and since $T^3(H_2-z) \geq 1$, we have that $T^{3}(G-f) \geq 2$. Therefore in this case a foundational edge is the spar of a kite.
Therefore we may assume that $H_{1}$ is the split side of the composition where we split a vertex $z$ into two vertices $z_{1}$ and $z_{2}$. Again by Proposition \ref{cliqueboundinequality} and the fact that $T^{3}(G) =2$, every $3$-clique packing of $H_{1}$ uses $z$. Thus regardless of whether $H_{1} = M$ or not, $z$ is incident to the spar of a kite in $H_{1}$, and so there is a kite $L$ in $H_{1}^{z}$ that contains neither $z_1$ nor $z_2$. Then $f$ is in $L$, as otherwise $G-f$ contains a kite. If $f$ is not the spar of $L$, then $T^{3}(H_{1}-f-z) \geq 1$, and thus $T^{3}(G-f) \geq 2$. Hence there is at most one foundational edge, and if there is one, it is the spar of a kite.
\end{proof}
\begin{lemma}
\label{T8splits}
Let $G \in \mathcal{B}$, and let $G^{v}$ be obtained from $G$ by splitting a vertex $v$ into two vertices $v_{1}$ and $v_{2}$. Then at least one of the following occurs:
\begin{enumerate}[(i)]
\item{$T^{3}(G^{v}) \geq 2$,}
\item{$G^{v}$ contains a $K_{4}-e$ subgraph,}
\item{there is an $i \in \{1,2\}$ such that $\deg(v_{i}) = 1$ and the edge incident to $v_{i}$ is foundational in $G$}
\end{enumerate}
\end{lemma}
\begin{proof}
Let $G$ be a vertex-minimum counterexample. First consider the case where $G = T_{8}$. If we do not split one of $u_{1}$ or $u_{2}$, then a $K_{4}-e$ subgraph remains, and (ii) holds. If we split either $u_{1}$ or $u_{2}$ such that (iii) does not hold, then it is easy to see that $T^{3}(G^{v}) =2$, and so (i) holds.
Therefore we can assume that $G$ is the Ore composition of a graph $H_{1} \in \mathcal{B}$ and a $4$-Ore graph $H_{2}$. If $T^{3}(H_{2}) \geq 3$, then by Proposition \ref{cliqueboundinequality} we have that $T^{3}(G) \geq 3 +2 -2 = 3$, contradicting that $T^{3}(G) =2$. Hence $T^{3}(H_{2}) \leq 2$. \\
\noindent
\textbf{Case 1: $H_{2}=K_{4}$.} \\
Suppose first that $H_{2}$ is the split side where we split a vertex $z$ into two vertices $z_{1}$ and $z_{2}$. Then $H_{2}^{z}$ contains a kite, say $L$, so if $v \not \in V(L)$, then $G^{v}$ contains a kite and (ii) holds. If $v$ is not incident to the spar of $L$, then any split of $v$ results in a triangle, and thus $T^{3}(G^{v}) \geq 2$ and so (i) holds. Therefore $v$ is incident to the spar of $L$, and the split of $v$ must leave, up to relabelling, $v_{1}$ of degree one and incident to the spar of the kite. This edge is foundational, as otherwise (i) or (ii) occurs. Thus (iii) holds.
Now suppose that $H_{2}$ is the edge side of the composition, where we delete the edge $xy$. Then $H_{2}-xy$ is a kite. If $v \not \in V(H_{2}-xy)$, then $G^{v}$ contains a kite and (ii) holds. If $v$ is not incident to the spar of this kite, then any split of $v$ leaves a triangle in $H_{2}^{v}-xy$; since $T^3(H_1) = 2$ implies $T^3(H_1-z) \geq 1$, it follows that $T^{3}(G^{v}) \geq 2$ and (i) holds. Thus $v$ must be incident to the spar of $H_{2}-xy$, and if $T^{3}(G^{v}) =1$, then we must have split $v$ in such a way that, up to relabelling, $\deg(v_{1}) = 1$ and $v_{1}$ is incident to the spar of the kite. This edge is foundational, as otherwise (i) or (ii) occurs. Thus (iii) holds. \\
\noindent
\textbf{Case 2: $H_{2} \neq K_{4}$.}\\
Then $T^{3}(H_{2}) = 2$ by Proposition \ref{cliqueboundinequality}. First suppose $H_{2}$ is the edge side of the composition where we delete the edge $xy$. Then $T^{3}(H_{2}-xy) =1$, as otherwise $T^{3}(G) \geq 3$ by Proposition \ref{cliqueboundinequality}. By Lemma \ref{K4e}, either $H_{2} = M$ or there are two vertex-disjoint kites in $H_{2}$. This implies that there is a kite in $H_{2} -xy$. Let $L$ be this kite. If $v \not \in V(L)$, then $G^{v}$ contains a kite, and so (ii) holds. If $v$ is not incident to the spar of $L$, then any split leaves a triangle in $H_{2}^{v}-xy$, and thus $T^{3}(G^{v}) \geq 2$ and (i) holds; as in the previous case, this follows from the fact that $T^3(H_1-z) \geq 1$. Thus $v$ must be incident to the spar of $L$, and if $T^{3}(G^{v}) =1$, then we must have split $v$ in such a way that, up to relabelling, $\deg(v_{1}) = 1$ and $v_{1}$ is incident to the spar of the kite. But then (iii) holds.
Therefore we can suppose that $H_{2}$ is the split side of the composition, where we split the vertex $z$ into two vertices $z_{1}$ and $z_{2}$. By Proposition \ref{cliqueboundinequality}, every $3$-clique packing of $H_{2}$ uses the vertex $z$. Thus regardless of whether $H_{2} = M$ or not, $z$ is incident to the spar of a kite in $H_{2}$. Thus there is a kite in $H_{2}^{z}$ that contains neither $z_1$ nor $z_2$. Let $L$ be this kite. If $v \not \in V(L)$, then $G^{v}$ contains a kite, and (ii) holds. If $v$ is not incident to the spar of $L$, then any split leaves a triangle in $(H_{2}^{z}-z_1-z_2)^v$, and thus $T^{3}(G^{v}) \geq 2$, and (i) holds. Thus $v$ is incident to the spar of $L$, and if $T^{3}(G^{v}) =1$, then $v$ was split in such a way that, up to relabelling, $\deg(v_{1}) = 1$ and $v_1$ is incident to the spar of the kite. But then (iii) holds, as desired.
\end{proof}
\section{Potential Method}
\label{potentialsection}
In this section we review the potential method which will be the critical tool for the rest of the paper.
Let $H$ and $G$ be graphs such that $G$ does not admit a homomorphism to $H$. Let $F$ be an induced subgraph of $G$ such that $F$ has a homomorphism to $H$. Let $f:V(F) \to V(H)$ be a homomorphism. Let $C_{1},\ldots,C_{t}$ be the non-empty colour classes of $f$ (where a \emph{colour class} is understood here to be a set of vertices in $G$ which are mapped to the same vertex in $H$ under $f$).
The \textit{quotient of $G$ by $f$}, denoted $G_{f}[F]$, is the graph with vertex set $(V(G) \setminus V(F)) \cup \{c_{i} \, | \, 1 \leq i \leq t\}$ and edge set $E_1 \cup E_2 \cup E_3$, where:
\begin{itemize}
\item $E_1 =\{uv \, | \, uv \in E(G[V(G) \setminus V(F)])\}$;
\item $E_2$ is the set of edges of the form $c_{i}c_{j}$ where there is a $u \in C_{i}$ and a $v \in C_{j}$ such that $uv \in E(G)$;
\item $E_3$ is the set of edges of the form $uc_{i}$ such that there is a $v \in C_{i}$ where $uv \in E(G)$.
\end{itemize}
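The construction above can be sketched directly from the definition. The example graph (a $5$-cycle), the subgraph $F$, and the homomorphism into $K_3$ below are our own illustrative choices, not taken from the paper:

```python
def quotient(G_vertices, G_edges, F_vertices, f):
    """Build G_f[F]: contract each non-empty colour class of f to a vertex c_i."""
    E = {frozenset(e) for e in G_edges}
    classes = {}                                  # colour -> vertices of F
    for v in F_vertices:
        classes.setdefault(f[v], set()).add(v)
    rep = {v: ('c', col) for col, C in classes.items() for v in C}
    vertices = (set(G_vertices) - set(F_vertices)) | {('c', col) for col in classes}
    edges = set()
    for e in E:
        u, v = tuple(e)
        ru, rv = rep.get(u, u), rep.get(v, v)     # contract into class vertices
        if ru != rv:                              # yields E_1, E_2 and E_3
            edges.add(frozenset((ru, rv)))
    return vertices, edges

# G = C_5; F is the edge {1, 2}; f maps 1 -> 'a', 2 -> 'b' (a homomorphism to K_3).
G_V = {1, 2, 3, 4, 5}
G_E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
V, Eq = quotient(G_V, G_E, {1, 2}, {1: 'a', 2: 'b'})
assert V == {3, 4, 5, ('c', 'a'), ('c', 'b')}
assert len(Eq) == 5   # edges 34, 45, c_a c_b, 5-c_a, 3-c_b
```

Since $f$ is a homomorphism, no edge of $G$ joins two vertices of the same colour class, so the `ru != rv` guard only suppresses would-be loops between a class and itself, never a required edge of $E_2$.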
We record some easy observations which have been made before (see for example \cite{EvelyneMasters, oresconjecture,tightnessore}).
\begin{obs}
\label{easyhom}
Let $G$ be a graph with no homomorphism to a graph $H$, let $F$ be a strict induced subgraph of $G$, and let $f: V(F) \to V(H)$ be a homomorphism. Then $G \to G_{f}[F]$.
\end{obs}
\begin{proof}
For all $v \in V(G) \setminus V(F)$, let $\phi(v) = v$. For all $v \in V(F)$, let $C_{i}$ be the colour class of $v$ with respect to $f$, and define $\phi(v) = c_{i}$. We claim that $\phi$ is a homomorphism. Consider any edge $uv \in E(G)$. If both $u$ and $v$ are in $V(G) \setminus V(F)$, then $\phi(u)\phi(v) = uv \in E(G_{f}[F])$. If $u \in V(F)$ and $v \in V(G) \setminus V(F)$, then $\phi(u)\phi(v) = c_{i}v$ where $u \in C_{i}$, and by definition $c_{i}v \in E(G_{f}[F])$. Finally, if both $u$ and $v$ are in $V(F)$, then $\phi(u)\phi(v) = c_{i}c_{j}$ where $u \in C_{i}$ and $v \in C_{j}$. Since $f$ is a homomorphism and $uv \in E(G)$, the classes $C_{i}$ and $C_{j}$ are distinct, and by definition $c_{i}c_{j} \in E(G_{f}[F])$. Hence $G \to G_{f}[F]$.
\end{proof}
\begin{cor}
Let $G$ be a graph with no homomorphism to $H$, let $F$ be a strict induced subgraph of $G$, and let $f$ be a homomorphism from $F$ to $H$. Then $G_{f}[F]$ does not admit a homomorphism to $H$. In particular, if $G$ is $k$-critical, then $G_{f}[F]$ contains a $k$-critical subgraph $W$, where at least one vertex of $W$ is in $V(G) \setminus V(F)$.
\end{cor}
\begin{proof}
If $G_{f}[F] \to H$, then since $G \to G_{f}[F]$, composing homomorphisms gives $G \to H$, a contradiction. Now suppose $G$ is $k$-critical and $H = K_{k-1}$. Then $G \not\to K_{k-1}$, and hence $G_{f}[F] \not\to K_{k-1}$, so $G_{f}[F]$ contains a $k$-critical subgraph $W$. Further, at least one vertex of $W$ is in $V(G) \setminus V(F)$: otherwise $V(W) \subseteq \{c_{1},\ldots,c_{t}\}$, and since $f$ uses at most $k-1$ colours we would have $v(W) \leq t \leq k-1 < k$, contradicting that $W$ is $k$-critical.
\end{proof}
This motivates the following definitions.
\begin{definition}
Let $G$ be a $k$-critical graph, let $F$ be a strict induced subgraph of $G$, and let $f$ be a $(k-1)$-colouring of $F$. Let $W$ be a $k$-critical subgraph of $G_{f}[F]$, and let $X$ be the subgraph of $W$ induced by the vertices of $W$ which are not vertices of $G$. We will call $X$ the \textit{source}. Let $F'$ be the subgraph of $G$ induced by the vertices $x$ with $x \in C_{i}$ for some $c_{i} \in V(X)$, together with the vertices $z \in V(W) \setminus V(X)$. We say $F'$ is the \textit{extension} of $W$ and $W$ is the \textit{extender} of $F$.
\end{definition}
The following lemma has effectively been proven before (see \cite{4criticalgirth5}).
\begin{lemma}
\label{countinglemma}
Let $F$ be a strict induced subgraph of $G$ and $f$ a $(k-1)$-colouring of $F$. Let $W$ be a $k$-critical subgraph of $G_{f}[F]$, let $F'$ be the extension of $W$, and let $X$ be the source. Then the following hold:
\begin{itemize}
\item{$v(F') = v(F) +v(W) - v(X)$,}
\item{$e(F') \geq e(F) + e(W) - e(X)$, and}
\item{$T^{k-1}(F') \geq T^{k-1}(F) + T^{k-1}(W \setminus X)$.}
\end{itemize}
\end{lemma}
\begin{proof}
Observe that $V(F') = V(F) \cup (V(F') \setminus V(F))$ and that $V(W) = (V(F') \setminus V(F)) \cup V(X)$. Thus $V(F') = (V(F) \cup V(W)) \setminus V(X)$, and hence $v(F') = v(F) + v(W) - v(X)$. From the above identity and the fact that the subgraphs involved are induced, it follows that $e(F') \geq e(F) + e(W) - e(X)$.
Finally, consider maximum collections of vertex-disjoint $(k-1)$-cliques in $F$ and in $W \setminus X$, say $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$ respectively. Since $F$ and $W \setminus X$ are vertex-disjoint subgraphs of $F'$, the collection $\mathcal{T}_{1} \cup \mathcal{T}_{2}$ is a collection of vertex-disjoint $(k-1)$-cliques in $F'$ of size $T^{k-1}(F) + T^{k-1}(W \setminus X)$, which gives the result.
\end{proof}
We will refer to the next lemma as the potential-extension lemma and will use it frequently.
\begin{lemma}[Potential-Extension Lemma]
\label{potentialextensionlemma}
Let $F$ be a strict induced subgraph of $G$ and fix a $(k-1)$-colouring $\phi$ of $F$. With respect to $\phi$, let $F', W$ and $X$ be an extension, extender and source of $F$. Then for any positive numbers $a, b$ and $c$ we have
\[p(F') \leq p(F) + p(W) - av(X) +be(X) +cT^{k-1}(W) -cT^{k-1}(W \setminus X)\]
and
\[p(F') \leq p(F) + p(W) -av(X) +be(X) +cv(X).\]
\end{lemma}
\begin{proof}
Observe that $T^{k-1}(W) \leq T^{k-1}(W \setminus X) + v(X)$, since in a maximum collection of vertex-disjoint $(k-1)$-cliques in $W$, each clique either lies in $W \setminus X$ or uses a vertex of $X$. Thus we have
\begin{align*}
p(F') &= av(F') -be(F') -cT^{k-1}(F') \\
&\leq a(v(F) + v(W) -v(X)) - b(e(F) + e(W) - e(X)) - cT^{k-1}(F) - cT^{k-1}(W \setminus X) \\
& = p(F) + p(W) -av(X) +be(X) +cT^{k-1}(W) - cT^{k-1}(W \setminus X) \\
& \leq p(F) + p(W) -av(X) + be(X) + cv(X).
\end{align*}
\end{proof}
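In the applications below the lemma is always used with $(a,b,c) = (5,3,1)$, so that $p(H) = 5v(H) - 3e(H) - T^{3}(H)$ for a graph $H$. As a sample instantiation, if $W = K_{4}$ and the source $X$ is a single vertex, then $e(X) = 0$ and $T^{3}(W) - T^{3}(W \setminus X) = 1 - 1 = 0$, so the first inequality reads
\[
p(F') \leq p(F) + p(K_{4}) - 5 = p(F) + 1 - 5 = p(F) - 4,
\]
using $p(K_{4}) = 5 \cdot 4 - 3 \cdot 6 - 1 = 1$; this is exactly the estimate used in Case 1 of the proof of Lemma \ref{potentiallemma}.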
\section{Properties of a minimum counterexample}
\label{mincounterexamplesection}
In this section we prove lemmas regarding the structure of a vertex-minimum counterexample. For this entire section, let $G$ be a vertex-minimum counterexample to Theorem \ref{maintheorem}. Then $p(G) \geq -1$.
\begin{obs}
\label{not4ore}
$G$ is not $4$-Ore.
\end{obs}
\begin{proof}
Observe that if $G$ is $4$-Ore, then by Theorem \ref{KY}, $p(G) = 2 - T^{3}(G)$. If $T^{3}(G) \geq 4$, then $p(G) \leq -2$, contradicting $p(G) \geq -1$. All remaining cases are covered as special cases of Theorem \ref{maintheorem}.
\end{proof}
We note the following well-known folklore result (see Fact 12 in \cite{tightnessore}).
\begin{thm}
If $G$ is $k$-critical and has a two-vertex cut $\{x,y\}$, then $G$ is the Ore composition of two graphs $H_{1}$ and $H_{2}$.
\end{thm}
\begin{obs}\label{notorecomp}
$G$ is not the Ore composition of two graphs $H_{1}$ and $H_{2}$. Consequently, $G$ is $3$-connected.
\end{obs}
\begin{proof}
Suppose not: that is, suppose $G$ is the Ore composition of $H_1$ and $H_2$ with $H_1 \cap H_2 = \{x,y\}$. From Observation \ref{Orecomposition}, we have
\[p(G) = \text{KY}(H_{1}) + \text{KY}(H_{2}) -2 - T^{3}(G).\]
Note that if both $H_{1}$ and $H_{2}$ are $4$-Ore, then $G$ is $4$-Ore and the claim follows from Observation \ref{not4ore}. \\
\textbf{Case 1: $H_{1} =K_{4}$.}\\
First suppose that $ H_{2} = W_{5}$. Then $T^{3}(G) \geq 2$ as every split of $W_{5}$ contains at least one triangle that does not contain at least one of $x$ or $y$, and deleting any edge of $W_{5}$ also leaves at least one triangle that does not contain at least one of $x$ or $y$. Therefore we have $p(G) \leq 2 + 0 -2 -2 =-2$ as desired. Thus $H_{2} \neq W_{5}$. Now suppose $H_{2} \in \mathcal{B}$. Then either $G$ is in $\mathcal{B}$, in which case $T^{3}(G) = 2$ and $p(G) = 2 +1 -2 -2 =-1$ as desired, or $T^{3}(G) \geq 3$, in which case $p(G) \leq -2$ as desired. Finally we have the case where $p(H_{2}) \leq -2$. Here it follows that $p(G) \leq p(H_{2}) \leq -2$ as desired.\\
\textbf{Case 2: $H_{1}$ is $4$-Ore with $T^{3}(H_{1}) =2$.}\\
Then $\text{KY}(H_{1}) = 2$. First suppose that $H_{2} = W_{5}$. Then as above we have $T^{3}(G) \geq 2$, and hence $p(G) \leq -2$ as desired. Now suppose $H_{2} \in \mathcal{B}$. If $G$ is in $\mathcal{B}$, we have $T^{3}(G) =2$, and $p(G) =-1$ as desired. Therefore $G$ is not in $\mathcal{B}$ and thus $T^{3}(G) \geq 3$, but then $p(G) \leq -2$ as desired.
The last case is when $p(H_{2}) \leq -2$. In this case, by Proposition \ref{cliqueboundinequality} we have $T^{3}(G) \geq T^{3}(H_{2}) + T^{3}(H_{1}) -2 \geq T^{3}(H_{2})$, and so $p(G) \leq p(H_{2}) \leq -2$ as desired.\\
\textbf{Case 3: $H_{1}$ is $W_{5}$.}\\
Suppose first $H_{2} = W_{5}$. Immediately it follows that $p(G) \leq -4$ as desired. Now suppose that $H_{2} \in \mathcal{B}$, then $T^{3}(G) \geq T^{3}(H_{2}) + T^{3}(W_{5}) -1 \geq T^{3}(H_{2}) =2$ so $p(G) \leq -3$ as desired. Otherwise $p(H_{2}) \leq -2$ and $p(G) \leq p(H_{2}) \leq -2$ as desired.\\
\textbf{Case 4: $H_{1}$ is in $\mathcal{B}$.}\\
If $H_{2} \in \mathcal{B}$, we have $p(G) \leq -2$ as desired. If $p(H_{2}) \leq -2$ then $p(G) \leq p(H_{2}) \leq -2$ as desired. \\
\textbf{Case 5: $p(H_{1}) \leq -2$.} \\
Note that the previous cases cover all outcomes except when $p(H_2) \leq -2$. In this case, it follows that $p(G) \leq -2$ as desired.
\end{proof}
\begin{lemma}
\label{potentiallemma}
If $F$ is a non-empty subgraph of $G$ with $v(F) < v(G)$, then $p(F) \geq 3$. Further, $p(F) \geq 4$ unless one of the following occurs: $G \setminus F$ is a triangle of degree three vertices, $G \setminus F$ is a single vertex of degree three, or $G$ contains a kite.
\end{lemma}
\begin{proof}
Suppose not. Let $F$ be a counterexample that is maximal with respect to $v(F)$ and, subject to that, has $p(F)$ minimized. We may assume that $F$ is an induced subgraph, as adding edges reduces the potential. Observing that $p(K_{1}) = 5$, $p(K_{2}) = 7$, $p(P_2) = 9$ (where $P_2$ is the path of length two), and $p(K_{3}) = 5$, we may assume that $v(F) \geq 4$. Let $\phi$ be a $3$-colouring of $F$, and let $F', W, X$ be an extension, extender, and source of $F$ with respect to $\phi$, respectively. If $F' \neq G$, then by the potential-extension lemma (Lemma \ref{potentialextensionlemma}) we have $p(F') \leq p(F)$, so $F'$ is a larger counterexample, contradicting the choice of $F$. Therefore $F' = G$. We split into cases.
Throughout, let $f(X)$ denote $5$ if $v(X) = 1$, $7$ if $v(X) = 2$, and $6$ if $v(X) = 3$. \\
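These three values are lower bounds for the quantity $5v(X) - 3e(X)$ appearing in the potential-extension lemma: a source on one vertex spans no edges, one on two vertices spans at most one edge, and one on three vertices spans at most three edges, giving
\[
5 \cdot 1 - 3 \cdot 0 = 5, \qquad 5 \cdot 2 - 3 \cdot 1 = 7, \qquad 5 \cdot 3 - 3 \cdot 3 = 6
\]
respectively.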
\textbf{Case 1: $W = K_{4}$}.\\
First suppose that $v(X) =1$. Then by the potential-extension lemma, $-1 \leq p(F) +1 -5$ which implies that $p(F) \geq 3$. If further, $p(F) \geq 4$, then we are done, so we can assume that $p(F) = 3$. Observe that $G \setminus F$ contains three vertices, and they must induce a triangle as $W -X$ is a triangle. Let $T$ be the triangle in $G \setminus F$. Then $-1 \leq p(G) \leq p(F) +5 - 3e(T,F)$, where $e(T,F)$ is the number of edges with one endpoint in $T$ and one endpoint in $F$. If $e(T,F) \geq 4$, then we have $-1 \leq p(F) -7$ so $p(F) \geq 6$. Hence $e(T,F) \leq 3$. But as $G$ is $4$-critical, the minimum degree is $3$, hence $e(T,F) \geq 3$ and $T$ is a triangle of degree three vertices.
If $v(X) = 2$, then $ -1 \leq p(F) +1 - 7 +1$ which implies that $p(F) \geq 4$.
If $v(X) =3$, then $ -1 \leq p(F) +1 -6 +1$ which gives $p(F) \geq 3$. If further $p(F) \geq 4$, then we are done. So we can assume that $p(F) =3$. Note that $G \setminus F$ is a single vertex $v$. We want to argue that this vertex has degree three. Note that $-1 \leq p(G) \leq p(F) + 5 - 3 \deg(v) = 8 - 3\deg(v)$. As $G$ is $4$-critical, $\deg(v) \geq 3$, and by the inequality, $\deg(v) \leq 3$, hence $\deg(v) =3$. \\
\textbf{Case 2: $W$ is $4$-Ore with $T^{3}(W) =2$.}\\
If $v(X) = 1$, then $-1 \leq p(F) +0 -5 + 1$, so $p(F) \geq 3$. As $W$ is $4$-Ore with $T^{3}(W) =2$, by Lemma \ref{K4e} either $W$ contains two vertex-disjoint kites, or $W = M$. If $W \neq M$, then $G$ contains a kite. If $W = M$ and the vertex in $X$ is not the unique degree four vertex in the Moser spindle, then $G$ contains a kite.
Otherwise $X$ contains only the unique vertex of degree four in the Moser spindle, and in this case $T^{3}(W-X) = 2$, so from the potential-extension lemma we get $-1 \leq p(F) +0 -5$ which implies that $p(F) \geq 4$. Thus it follows that either $G$ contains a kite or $p(F) \geq 4$.
If $v(X) \in \{2,3\}$, then $-1 \leq p(F) - f(X) +1$. To see this, note that by Lemma \ref{K4e}, $W$ contains two edge-disjoint kites that share at most one vertex. It follows from this that $T^3(W\setminus X) \in \{1,2\}$. Since $f(X) \geq 6$, it follows that $p(F) \geq 4$. \\
\textbf{Case 3: $W = W_{5}$.} \\
Observe that deleting a vertex, an edge, or a triangle from $W_{5}$ may leave no triangle at all, so we can only guarantee $T^{3}(W_{5} \setminus X) \geq 0$. Hence, for any $v(X) \in \{1,2,3\}$, we have
$-1 \leq p(F) -1 - f(X) +1$. Thus as $f(X) \geq 5$, $p(F) \geq 4$ as desired. \\
\textbf{Case 4: $W \in \mathcal{B}$.} \\
Note that in this case $T^3(W) = 2$. If $v(X) = 1$, then $T^3(W\setminus X) \geq 1$, and so $-1 \leq p(F) -1 -5 +1$ which gives $p(F) \geq 4$. If $v(X) \in \{2,3\}$, then $-1 \leq p(F) -1 -f(X) +2$, which gives $p(F) \geq 4$.\\
\textbf{Case 5: $W$ is $4$-Ore with $T^{3}(W) = 3$.}\\
If $v(X) =1$, then $T^3(W\setminus X) \geq 2.$ Thus $-1 \leq p(F) -1 -5 +1$ which gives $p(F) \geq 4$.
If $v(X) \in \{2,3\}$, then $-1 \leq p(F) - 1 - f(X) + 2$, which gives $p(F) \geq 4$. (Note that in the $v(X) = 3$ case, we are using the fact that $T^3(W \setminus X) \geq 1$ (see Observation \ref{deletingaclique})).\\
\textbf{Case 6: All other cases.}\\
If $v(X) =1$, then $-1 \leq p(F) -2 -5 +1$ which gives $p(F) \geq 4$.
If $v(X) = 2$, then $-1 \leq p(F) -2 -7 +2$ which gives $p(F) \geq 6$.
If $v(X) =3$, then $-1 \leq p(F) - 2 - 6 +3$ which gives $p(F) \geq 4$.
These are all the possible cases, so the result follows.
\end{proof}
We now strengthen this bound.
\begin{lemma}\label{k4-e}
$G$ does not contain $K_{4}-e$ as a subgraph.
\end{lemma}
\begin{proof}
Suppose not. Let $F$ be a $K_{4}-e$ subgraph of $G$, where $e = xy$ is the missing edge and $w,z$ are the two other vertices of $F$; among all such subgraphs, we pick $F$ so that the number of vertices of degree three in $F$ which also have degree three in $G$ is maximized. Note that as $G \neq K_{4}$, $F$ is an induced subgraph.
We claim that $x$ and $y$ have no common neighbours aside from $w$ and $z$. Suppose not, and let $u \notin \{w,z\}$ be a common neighbour of $x$ and $y$. By $4$-criticality, $G-ux$ has a $3$-colouring, say $f$. Then $f(u) = f(x)$, as otherwise $f$ is a $3$-colouring of $G$. But in any $3$-colouring of $F$, the vertices $x$ and $y$ receive the same colour, so $f(u) = f(x) = f(y)$ while $uy \in E(G)$, a contradiction. Hence $x$ and $y$ have no common neighbours outside $\{w, z\}$.
Fix any $3$-colouring of $F$, and let $F', W$ and $X$ be an extension, extender, and source of $F$. By the potential-extension lemma, we have
\[p(F') \leq p(F) +p(W) -5v(X) +3e(X) +T^{3}(W) - T^{3}(W \setminus X).\]
Observe that $p(F) = 4$. Throughout the proof of this lemma, we let $d$ be the vertex obtained by identifying $x$ and $y$. Note that since $W \not \subseteq G$, it follows that $d \in X$.
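Indeed, the graph $K_{4}-e$ has four vertices, five edges, and a maximum collection of vertex-disjoint triangles of size one, so with the potential $p(H) = 5v(H) - 3e(H) - T^{3}(H)$ used throughout,
\[
p(F) = 5 \cdot 4 - 3 \cdot 5 - 1 = 20 - 15 - 1 = 4.
\]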
We now split into cases depending on what graph $W$ is. \\
\textbf{Case 1: $W = K_{4}$.}\\
First suppose $v(X) =3$. In this case, $w$, $z$, and one of $x$ and $y$ share a common neighbour $u$ in $G$; together with the edges of $F$, the vertex $u$ and the vertices $w$, $z$, and this common endpoint form a $K_4$. This is a contradiction, as $K_4$ is $4$-critical and $G \neq K_4$.
Now suppose $v(X) =2$. Then without loss of generality let $X = \{z,d\}$. Then there is a subgraph $H$ of $G$ where $V(H) = V(F) \cup \{u,u'\}$ and there are edges $u'z$, $u'u$, $u'x$, $uy$, $uz$ and $E(F)$. But this subgraph is $W_{5}$, so $G = W_{5}$, a contradiction.
Finally suppose that $v(X) =1$. Then similarly to the above argument, $G$ is isomorphic to the Moser spindle, and we are done. \\
\textbf{Case 2: $W$ is $4$-Ore with $T^{3}(W) =2$.}\\
First suppose that $v(X) =1$. Then it follows that $G$ contains a subgraph $H$ that is the Ore composition of $W$ and $K_{4}$. Since $H$ is $4$-critical, $G = H$. This implies that $G$ is $4$-Ore, contradicting Observation \ref{not4ore}.
Now suppose that $v(X) =2$. By Lemma \ref{K4e}, $W$ contains two edge-disjoint kites that share at most one vertex. Thus $T^3(W \setminus X) \geq 1$. By the potential-extension lemma, we have $p(F') \leq 4 + 0 -7 +1$, which gives $p(F') \leq -2$. If $F' \subset G$, this contradicts Lemma \ref{potentiallemma}. If $F' = G$, this contradicts that $p(G) \geq -1$, which holds since $G$ is a counterexample to Theorem \ref{maintheorem}.
Now suppose $v(X) =3$. Note that by Lemma \ref{K4e}, $W$ contains two edge-disjoint kites that share at most one vertex. Thus $T^3(W \setminus X) \geq 1$. By the potential-extension lemma, we have $p(F') \leq 4 + 0 -6 +1$, which implies that $p(F') \leq -1$. By Lemma \ref{potentiallemma}, since $F' \subseteq G$, it follows that $F' = G$. Thus $G$ is obtained from $W$ by unidentifying $d$ into $x$ and $y$. Note that as $x$ and $y$ have no common neighbours aside from $w$ and $z$, and every vertex in $G$ has degree at least three, $d$ has degree at least four in $W$. First consider the case where $W = M$. Then $d$ is the unique vertex of degree four in $M$. As $G$ is obtained by unidentifying $d$ into $x$ and $y$, it follows that either $G$ is $3$-regular, in which case as $G \neq K_{4}$, $G$ is $3$-colourable by Brooks' Theorem, or $G$ has a vertex of degree two, contradicting $4$-criticality. Hence $W \neq M$. Therefore by Lemma \ref{K4e}, $W$ contains two vertex-disjoint kites. As $G$ is obtained by unidentifying $d$, this implies that $G$ contains a kite. But both $w$ and $z$ have degree at least three in $W$, as $W$ is $4$-critical, and after unidentifying they are adjacent to both $x$ and $y$, so both have degree at least four in $G$. This contradicts our choice of $F$, as we picked $F$ to maximize the number of vertices of degree three in the $K_{4}-e$ which also have degree three in $G$. \\
\textbf{Case 3: $W = W_{5}$.}\\
If $v(X) = 1$, then $G$ is an Ore composition of $K_{4}$ and $W_{5}$, a contradiction to Observation \ref{notorecomp}. If $v(X) \in \{2,3\}$, then by the potential-extension lemma we have $p(F') \leq 4 -1 -6+1$ which gives $p(F') \leq -2$. If $F' \subset G$, this contradicts Lemma \ref{potentiallemma}. If $F' = G$, this contradicts the assumption that $p(G) \geq -1.$\\
\textbf{Case 4: $W \in \mathcal{B}$.}\\
If $v(X) =1$, then $G$ is the Ore composition of $W$ and $K_{4}$, contradicting Observation \ref{notorecomp}. If $v(X) \in \{2,3\}$, then by the potential-extension lemma we have $p(F') \leq 4 -1 -6 +1$ which gives $p(F') \leq -2$. As in Case 3, this leads to a contradiction. \\
\textbf{Case 5: $W$ is $4$-Ore with $T^{3}(W) =3$.}\\
If $v(X) =1$, then $G$ is $4$-Ore, contradicting Observation \ref{not4ore}. If $v(X) =2$, then $p(F') \leq 4 -1 -7 + 2$ and $p(F') \leq -2$. As in Cases 3 and 4, this leads to a contradiction.
So $v(X) =3$. In this case we claim that $F'$ is all of $G$. If not, take any $3$-colouring $\psi$ of $F'$, which exists by the $4$-criticality of $G$. As $x$ and $y$ receive the same colour under $\psi$, identifying $x$ and $y$ yields a $3$-colouring of $W$, contradicting that $W$ is $4$-critical. Hence $F' = G$.
If $T^{3}(W \setminus X) \geq 2$, then by the potential-extension lemma we have
$p(F') \leq 4 - 1 - 6 +1$, which gives $p(F') \leq -2$, a contradiction.
Therefore by Lemma \ref{deletingatriangle} it follows that $W-X$ contains a kite.
Let $K$ be the kite in $W-X$, with spar $st$. We claim there is at most one edge from $F$ to $K$: otherwise, $p(G[V(F) \cup V(K)]) \leq 5(8)-3(10+2)-2 = 2$, contradicting Lemma \ref{potentiallemma}. (Note trivially $G[V(F) \cup V(K)] \neq G$, since $T^3(W) = 3$ but $T^3(G[V(F) \cup V(K)]) = 2$.) Thus at least one of $s$ and $t$ has degree three in $G$. It now suffices to argue that $w$ and $z$ do not have degree three in $G$, thus contradicting our choice of $F$.
To see this, note that since $W$ is $4$-critical, both $w$ and $z$ have degree at least three in $W$. But $G$ is obtained from $W$ by unidentifying $d$ into the vertices $x$ and $y$. As $x$ and $y$ both have $w$ and $z$ as neighbours, $w$ and $z$ have degree at least four in $G$. This contradicts our choice of $F$ as maximizing the number of degree three vertices in the $K_{4}-e$ subgraph which have degree three in $G$. \\
\textbf{Case 6: All other cases.} \\
In this case, $p(W) \leq -2$. If $v(X) =1$, then $p(F') \leq 4-2 -5 +1 \leq -2$, a contradiction. If $v(X) =2$, then $p(F') \leq 4 -2 -7 +2 \leq -3$, a contradiction.
Lastly, assume that $v(X) =3$. In this case, by a similar argument as in Case 5, $F' = G$, and thus $G$ is obtained from $W$ by splitting $d$. Then $T^{3}(G) \geq T^{3}(W)-1$, and thus $p(G) \leq p(W) +5 -6 +1 \leq -2$, a contradiction.
\end{proof}
Let $D_{3}(G)$ be the subgraph of $G$ induced by the vertices of degree three. We will now build towards showing that $D_{3}(G)$ is acyclic, and further that if a vertex of degree three is in a triangle, then it is the only vertex of degree three in that triangle.
\begin{definition}
For an induced subgraph $R$ of $G$, where $R \neq G$, we say $u,v \in V(R)$ are an \textit{identifiable pair} if $R + uv$ is not $3$-colourable.
\end{definition}
\begin{lemma}
\label{noidentifiablepair}
If $R$ is an induced subgraph of $G$ with $R \neq G$, $v(R) \leq v(G) -3$, and such that $G \setminus R$ is not a triangle of degree three vertices, then $R$ has no identifiable pair.
\end{lemma}
\begin{proof}
Suppose not. Let $x$ and $y$ be an identifiable pair in $R$, and consider $R + xy$. As $R + xy$ is not $3$-colourable by definition, there exists a $4$-critical subgraph $W$ of $R + xy$. Note that $xy \in E(W)$, as otherwise $W$ is a proper $4$-critical subgraph of $G$, a contradiction. Moreover, since $T^3(W-xy) \geq T^3(W)-1$, we have that $p(W-xy) \leq p(W) +4$. By the assumptions, Lemma \ref{potentiallemma} implies that $p(W-xy) \geq 4$. If $p(W) \leq -1$, then we obtain a contradiction. If $W = K_{4}$, then $G$ has a $K_{4}-e$ subgraph, contradicting Lemma \ref{k4-e}. If $W$ is $4$-Ore with $T^{3}(W) = 2$, then by Lemma \ref{K4e}, $G$ contains a $K_{4}-e$ subgraph, again contradicting Lemma \ref{k4-e}. For all other $W$, we have $p(W) \leq -1$, and thus we again obtain a contradiction.
\end{proof}
For a subgraph $H$, let $N(H)$ be the set of vertices not in $H$ which have a neighbour in $H$. We will need the following well-known consequence of the Gallai-Tree Theorem \cite{GallaiForests}.
\begin{thm}
\label{Independentsettheorem}
Let $G$ be a $4$-critical graph and let $C$ be a cycle of degree three vertices in $G$. Then $v(C)$ is odd, $N(C)$ induces an independent set, and in any $3$-colouring of $G-C$, all vertices in $N(C)$ receive the same colour.
\end{thm}
\begin{cor}
All cycles in $D_{3}(G)$ are triangles.
\end{cor}
\begin{proof}
Let $C$ be a cycle in $D_{3}(G)$ with $v(C) \geq 5$. If $|N(C)| =1$, then since $G$ has minimum degree three, it follows that $G$ is isomorphic to an odd wheel. If $G = W_{5}$, then $G$ is not a counterexample to Theorem \ref{maintheorem}, so we may assume that $v(C) \geq 7$. Note that $v(G) = v(C) +1$ and $e(G) = 2v(C)$, so $p(G) = 5(v(C)+1) -6v(C) -1 = -v(C)+4 \leq -3$ since $v(C) \geq 7$. Thus $|N(C)| \geq 2$. Then by Theorem \ref{Independentsettheorem}, any pair of vertices in $N(C)$ is an identifiable pair in $G-C$. This contradicts Lemma \ref{noidentifiablepair}, as $v(G-C) \leq v(G) - 5 < v(G) -3$.
\end{proof}
\begin{cor}
\label{triangledegrees}
If $T$ is a triangle in $G$, then $V(T)$ does not contain exactly two vertices of degree three.
\end{cor}
\begin{proof}
Suppose not. Let $x,y$ and $z$ induce a triangle where $x$ and $y$ are vertices of degree three and $z$ has degree at least four. Let $x'$ and $y'$ be the unique other neighbours of $x$ and $y$ respectively. Note $x' \neq y'$ as otherwise $G$ contains a subgraph isomorphic to $K_{4}-e$, contradicting Lemma \ref{k4-e}.
If $x'y' \in E(G)$, then any $3$-colouring of $G-\{x,y\}$ extends to a $3$-colouring of $G$, a contradiction; hence $x'y' \notin E(G)$. Moreover, since $G$ is not $3$-colourable, every $3$-colouring of $G-\{x,y\}$ must give $x'$ and $y'$ the same colour, and hence $x'$ and $y'$ are an identifiable pair in $G-\{x,y\}$.
So consider $G-\{x,y\} + x'y'$. Then there is a $4$-critical subgraph $W$ containing the edge $x'y'$, and $p(W-x'y') \leq p(W) +4$. By the same argument as in Lemma \ref{noidentifiablepair}, it suffices to show that $H := G \setminus (W -x'y')$ is neither a triangle of degree three vertices nor a single vertex of degree three. Notice that $H$ is not a single vertex of degree three, as both $x$ and $y$ are in $V(H)$. We claim $H$ is not a triangle of degree three vertices. Suppose it is; then $z \not\in V(H)$, as $\deg(z) \geq 4$, but then as $x,y \in V(H)$, the vertices $x$ and $y$ lie in a triangle of degree three vertices. As $x,y,z$ also induce a triangle, $G$ then contains a $K_{4}-e$ subgraph, again contradicting Lemma \ref{k4-e}.
\end{proof}
An \textit{$M$-gadget} is a graph obtained from $M$ by first splitting the vertex $v$ of degree four into two vertices $v_{1}$ and $v_{2}$ such that there is no $K_{4}-e$ in the resulting graph, $N(v_{1}) \cap N(v_{2}) = \emptyset$, and $N(v) = N(v_{1}) \cup N(v_{2})$, and after this, adding a vertex $v'$ adjacent to only $v_{1}$ and $v_{2}$. We call $v'$ the \textit{end} of the $M$-gadget.
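For later potential computations it is useful to record the parameters of an $M$-gadget: the Moser spindle has seven vertices and eleven edges, splitting $v$ into $v_{1}$ and $v_{2}$ adds one vertex and no edges, and attaching the end $v'$ adds one vertex and two edges. Hence every $M$-gadget $M'$ satisfies
\[
v(M') = 9 \qquad \text{and} \qquad e(M') = 13,
\]
which is the count used in the proof of Corollary \ref{ind-p4}.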
\begin{lemma}
\label{identificationlemma}
Let $C$ be a component of $D_{3}(G)$ with $v(C) \geq 3$. Let $x,y,z \in V(C)$ be such that $xy,yz \in E(G)$ and $y,z$ do not lie in a triangle of degree three vertices. Let $x',x''$ be the two neighbours of $x$ other than $y$. Then either $x'x'' \in E(G)$, or $x,x',x''$ lie in an $M$-gadget with end $x$, and this $M$-gadget does not contain $y$ or $z$.
\end{lemma}
\begin{proof}
Suppose not. Then $x'x'' \not \in E(G)$, so identify $x'$ and $x''$ into a new vertex $x'''$, and let $G'$ be the resulting graph. Note that neither $x'$ nor $x''$ is $z$, as otherwise $x,y,z$ would lie in a triangle of degree three vertices. Moreover, any $3$-colouring of $G'$ induces a $3$-colouring of $G$ (give both $x'$ and $x''$ the colour of $x'''$), so $G'$ is not $3$-colourable. Let $W'$ be a $4$-critical subgraph of $G'$. Note that $y$ and $z$ are not in $W'$: after identifying $x'$ and $x''$, the vertex $x$ has degree two, which implies that $x \not\in V(W')$; in $G'-x$, the vertex $y$ has degree two, hence $y \not\in V(W')$; and similarly $z \not\in V(W')$.
Let $W$ be the subgraph of $G$ induced by $(V(W') \setminus \{x'''\}) \cup \{x,x',x''\}$. If $T^{3}(W) = T^{3}(W')-1$, we have $p(W) \leq p(W') +10 -6 +1 = p(W') +5$, and otherwise $p(W) \leq p(W') +4$. Note that $G \setminus W$ is not a single vertex of degree three, as both $y$ and $z$ are in $G \setminus W$, and $G \setminus W$ is not a cycle of degree three vertices, as $y$ and $z$ do not lie in a cycle of degree three vertices by assumption. Thus by Lemma \ref{potentiallemma}, we have that $p(W) \geq 4$. Hence if $p(W') \leq -2$ we get a contradiction. Observe that $W' \neq K_{4}$: $W$ is obtained from $W'$ by splitting $x'''$ into two vertices, and that would imply that $G$ contains a $K_{4}-e$ subgraph, contradicting Lemma \ref{k4-e}. If $W'$ is $4$-Ore with $T^{3}(W') =2$ and $W' \neq M$, then $G$ contains a $K_{4}-e$ subgraph, again contradicting Lemma \ref{k4-e}. Further, if $W' = M$ and the split is not performed on the unique vertex of degree four in a way that leaves no $K_{4}-e$, then $G$ again contains a $K_{4}-e$ subgraph. Therefore, $x$ is the end of an $M$-gadget.
Now consider the case where $W'$ is $4$-Ore with $T^{3}(W') = 3$, or $W' \in \mathcal{B}$. By Lemma \ref{4Oresplit3triangle} and Lemma \ref{T8splits}, either splitting $x'''$ does not reduce the number of triangles, or $G$ contains a $K_{4}-e$, or in $W \setminus x$ one of $x'$ and $x''$ has degree one and is incident to a foundational edge. In the first case we get a contradiction, as then $p(W) \leq p(W') + 4$ and $p(W') \leq -1$, contradicting that $p(W) \geq 4$. The second case contradicts that $G$ has no $K_{4}-e$ subgraph. Therefore, without loss of generality, suppose that $x'$ has degree one after splitting $x'''$ back into $x'$ and $x''$, and that the edge incident to $x'$ is foundational in $W'$. Let the other endpoint of this foundational edge be $y'$. We claim that in any $3$-colouring of $W'' := (W' - x''') \cup \{x''\}$, the vertices $x''$ and $y'$ get the same colour: if not, we would obtain a $3$-colouring of $W'$, contradicting that $W'$ is $4$-critical. Hence $W''$ contains an identifiable pair. Further, $y,z,x,x' \not\in V(W'')$, so we obtain a contradiction to Lemma \ref{noidentifiablepair}.
\end{proof}
\begin{cor}\label{ind-p4}
$D_{3}(G)$ does not contain an induced path of length four.
\end{cor}
\begin{proof}
Suppose not, and let $v_{1},v_{2},v_{3},v_{4},v_{5}$ be such a path. Let $x_{3}$ be the neighbour of $v_{3}$ other than $v_{2}$ and $v_{4}$.
By Corollary \ref{triangledegrees}, $x_{3}v_{2} \not \in E(G)$ and $x_{3}v_{4} \not \in E(G)$. Further, as the path is induced, $v_{2},v_{3},v_{4}$ does not induce a triangle. Additionally, $v_{4}$ and $v_{5}$ do not lie in a triangle of degree three vertices, as the path is induced. Hence by Lemma \ref{identificationlemma}, applied to $v_{3},v_{4},v_{5}$ with $v_{3}$ playing the role of $x$, $v_{3}$ is the end of an $M$-gadget containing $v_{2}$ and $x_{3}$ but not $v_{4}$ or $v_{5}$. This implies that $v_{1}$ is in a triangle, say $v_{1},u_{1},u_{2}$, and by Corollary \ref{triangledegrees} both $u_{1}$ and $u_{2}$ have degree at least four. Now we apply Lemma \ref{identificationlemma} to $v_{2},v_{3},v_{4}$ with $v_{2}$ playing the role of $x$. By similar reasoning as above, $v_{2}$ is not in a triangle and $v_{3}$ and $v_{4}$ do not lie in a triangle of degree three vertices, and thus $v_{2}$ lies in an $M$-gadget, say $M'$. We claim the subgraph $M'$ is not induced in $G$. First observe that $v_{1} \in V(M')$, since $v_{2}$ has degree three. It then follows that $u_{1},u_{2} \in V(M')$, as $v_{1} \in V(M')$ and $v_{1}$ has degree three. Further, as $v_{2}$ is the degree two vertex of the $M$-gadget, the edge $u_{1}u_{2}$ does not lie in $M'$. Let $H' = M' \cup \{u_{1}u_{2}\}$. Then $v(H') = 9$ and $e(H') \geq 14$, so $p(H') \leq 45-42-2 = 1$. This contradicts Lemma \ref{potentiallemma}.
\end{proof}
\begin{cor}
If $C$ is an acyclic component of $D_3(G)$, then $v(C) \leq 6$.
\end{cor}
\begin{proof}
Let $P$ be a longest path in $C$. First suppose that $P$ is a path of length $3$, say $v_{1},v_{2},v_{3},v_{4}$. Then $v_{2}$ and $v_{3}$ are each adjacent to at most one vertex not on the path, say $v_{2}'$ and $v_{3}'$ respectively. Suppose $v_{2}'$ has degree three. Then $v_{2}'$ is not adjacent to any vertex $u \in D_{3}(G) \setminus \{v_{2}\}$: otherwise, the path $u,v_{2}',v_{2},v_{3},v_{4}$ is an induced path on five vertices, contradicting Corollary \ref{ind-p4}. Applying a similar argument to $v_{3}'$, we see that $v(C) \leq 6$ in this case. Now suppose that $P$ is a path of length $2$, say $v_{1},v_{2},v_{3}$. If $v_{2}$ is adjacent to a vertex of degree three, say $v_{2}'$, then $v_{2}'$ is not adjacent to another vertex of degree three, as otherwise we would have a path of length $3$, contradicting our choice of $P$. Hence in this case, $v(C) \leq 4$. Lastly, if the longest path has length at most one, then $v(C) \leq 2$, as desired.
\end{proof}
Now we build towards proving every component of $D_{3}(G)$ is acyclic.
\begin{lemma}
\label{neighbourshavelargedegree}
Let $C$ be a triangle of degree three vertices in $G$, with $V(C) = \{x,y,z\}$. Then at most one vertex in $N(C)$ has degree three.
\end{lemma}
\begin{proof}
First observe that $|N(C)| =3$. If not, either $G = K_{4}$, or $G$ has a cut of size at most two, and in either case we obtain a contradiction. Let $x',y',z'$ be the vertices in $N(C)$, where $x'$ is adjacent to $x$, $y'$ is adjacent to $y$, and $z'$ is adjacent to $z$. Without loss of generality, suppose that $x'$ and $y'$ both have degree three. Note that the pair $y,y'$ does not lie in a triangle of degree three vertices, as otherwise $G$ contains a $K_4-e$, contradicting Lemma \ref{k4-e}. Thus Lemma \ref{identificationlemma} applies to $x,y,y'$. Since $x'z \not \in E(G)$ (otherwise $x,y,z,x'$ would contain a $K_{4}-e$, contradicting Lemma \ref{k4-e}), $x$ is the end of an $M$-gadget not containing $y$ or $y'$. But now it follows that there are two vertex-disjoint triangles in $G-x-y-z$, and hence $T^{3}(G) \geq 3$. As $G$ is not $4$-Ore, $\text{KY}(G) \leq 1$, and thus $p(G) \leq -2$, a contradiction.
\end{proof}
\begin{lemma}
$D_{3}(G)$ is acyclic.
\end{lemma}
\begin{proof}
Suppose not, and let $T$ be a triangle in $D_{3}(G)$. As $G$ is $3$-connected and $G \neq K_4$, it follows that $|N(T)| =3$. By Theorem \ref{Independentsettheorem}, all vertices of $N(T)$ receive the same colour in any $3$-colouring of $G-T$. Hence every pair of vertices in $N(T)$ is an identifiable pair in $G-T$. Let $R := G-T$. Let $x,y$ be two vertices in $N(T)$, and let $z$ be a neighbour of $y$ in $T$. Observe that $R + xy$ contains a $4$-critical subgraph $W$, and since $T^3(W-xy) \geq T^3(W)-1$, we have that
\begin{equation}\label{p(W-xy)}
p(W-xy) \leq p(W) +4.
\end{equation} If $W = K_{4}$, then $G$ has a $K_{4}-e$ subgraph, contradicting Lemma \ref{k4-e}. If $W$ is $4$-Ore with $T^{3}(W) = 2$, then by Lemma \ref{K4e}, again $G$ contains a $K_{4}-e$ subgraph, contradicting Lemma \ref{k4-e}. If $p(W) \leq -2$, then we obtain a contradiction to Lemma \ref{potentiallemma}. Further, we can assume that $W \neq W_{5}$, since otherwise $p(W-xy) = 5(6)-3(9)-1 = 2$, again contradicting Lemma \ref{potentiallemma}.
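For reference, Inequality \eqref{p(W-xy)} is just the potential computation
\begin{align*}
p(W-xy) &= 5v(W) - 3(e(W) - 1) - T^{3}(W-xy) \\
&\leq 5v(W) - 3e(W) + 3 - (T^{3}(W) - 1) = p(W) + 4.
\end{align*}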
Additionally, if $W -xy \neq R$, then we obtain a contradiction to Lemma \ref{potentiallemma} when $p(W) \leq -1$. Thus we can assume that $W=R +xy$ and that either $W$ is $4$-Ore with $T^{3}(W)=3$, or $W \in \mathcal{B}$.
First assume that $W$ is $4$-Ore with $T^{3}(W) = 3$. Now consider splitting $x$ into two vertices $x_{1}$ and $x_{2}$ such that $\deg(x_{1}) = 1$ and $x_{1}$ is only adjacent to $y$. Let $W^{x}$ denote this graph. Note that $W^x \subset G$ with $x_2$ playing the role of $x$ and $x_1$ playing the role of $z$. By Lemma \ref{4Oresplit3triangle} either $W^{x}$ has $T^{3}(W^{x}) \geq T^3(W)$, $W^{x}$ contains a $K_{4}-e$, or $xy$ is a foundational edge. If $W^{x}$ has $T^{3}(W^{x}) \geq T^3(W)$, then
since $T^3(W^x) = T^3(W^x-x_1y) = T^3(W-xy)$ it follows that Equation \ref{p(W-xy)} can be strengthened to $p(W-xy) \leq p(W) + 3$. Since $p(W) = -1$ and $W-xy \subset G$, this contradicts Lemma \ref{potentiallemma}.
If $W^{x}$ contains a $K_{4}-e$, then $G$ contains a $K_{4}-e$, contradicting Lemma \ref{k4-e}. Therefore we can assume that $xy$ is a foundational edge, and by Lemma \ref{foundationaledgesin4Ore} such an edge is the spar of a kite. Thus in $W-xy$, both $x$ and $y$ have degree two, which implies that in $G$, both $x$ and $y$ have degree three. But this contradicts Lemma \ref{neighbourshavelargedegree}.
Therefore we can assume that $W$ is in $\mathcal{B}$. Then $xy$ is a foundational edge, as otherwise by Lemma \ref{4Oresplit3triangle} either $G-xy$ contains a $K_{4}-e$ subgraph, contradicting Lemma \ref{k4-e}, or as above we can strengthen Equation \ref{p(W-xy)} and obtain a contradiction. If $W \neq T_{8}$, then by Lemma \ref{foundationaledgesinB} we have that $xy$ is the spar of a kite. Then in $W-xy$, both $x$ and $y$ have degree two, which implies that in $G$, both $x$ and $y$ have degree three, contradicting Lemma \ref{neighbourshavelargedegree}. Therefore $W = T_{8}$. As $W = R +xy = G-T+xy$, our entire graph is $T_{8} - u_{1}u_{2} + T$. In this case, we label the vertices of $T$ by setting $T = v_1v_2v_3v_1$. We may assume, without loss of generality, that $v_1$ is adjacent to $u_1$, and $v_2$ is adjacent to $u_2$. Moreover, by Theorem \ref{Independentsettheorem}, the neighbour of $v_3$ outside of $\{v_1, v_2\}$ forms an independent set with $\{u_1, u_2\}$. It follows that the third edge incident with $v_3$ is incident with a vertex in $\{u_6, u_7, u_8\}$. It is easy to verify that the resulting graph is $3$-colourable. As these are all the cases, it follows that $D_{3}(G)$ is acyclic.
\end{proof}
From the above sequence of lemmas, we obtain the following corollary.
\begin{cor}
Every component in $D_{3}(G)$ has at most six vertices.
\end{cor}
\section{Discharging}
\label{dischargingsection}
In this section we provide the discharging argument which shows that a vertex-minimum counterexample does not exist.
We start off by showing that there exists a component of $D_{3}(G)$ with at least three vertices.
\begin{lemma}\label{bigcomp}
There exists a component of $D_{3}(G)$ with at least three vertices.
\end{lemma}
\begin{proof}
Suppose not. Let $F$ be the subgraph of $G$ with $V(F) = V(G)$ and $E(F) = \{ xy \in E(G) \, | \, \deg(x) \geq 4 \text{ and } \deg(y) \geq 4\}$.
\begin{claim}
$F$ is not bipartite.
\end{claim}
\begin{proof}
Suppose not. Let $(A,B)$ be a bipartition of $F$. By the assumption, every component of $D_{3}(G)$ contains at most two vertices, and by Corollary \ref{triangledegrees}, each triangle contains at most one vertex of degree three. It follows that the subgraph of $G$ induced by $V(D_{3}(G)) \cup A$ is bipartite. Now colour this subgraph with colours $1$ and $2$, and colour the remaining vertices, those of $B$ not in $D_{3}(G)$, with colour $3$. This is a proper $3$-colouring of $G$, contradicting that $G$ is $4$-critical.
\end{proof}
Set $ch_{i}(v) = \deg(v)$ for each vertex $v \in V(G)$, and have every vertex of degree at least four send $\frac{1}{6}$ charge to each neighbour of degree three. For each $v \in V(G)$, let $ch_{f}(v)$ denote the final charge of $v$. Note that all degree three vertices end up with final charge at least $\frac{10}{3}$, and if $v$ has degree at least four, then $ch_f(v) = \frac{10}{3}$ if and only if $\deg(v) = 4$ and $v$ is adjacent to exactly four vertices of degree three. Further, if either of those conditions fails, the final charge of $v$ is at least $\frac{10}{3} + \frac{1}{6}$. Therefore for every edge $e=xy \in E(F)$, we have $ch_{f}(x) \geq \frac{10}{3} + \frac{1}{6}$ and $ch_{f}(y) \geq \frac{10}{3} + \frac{1}{6}$. Thus it follows that
\[\sum_{v \in V(G)}ch_{f}(v) \geq \frac{10v(G)}{3} + \frac{e(F)}{3}.\]
Since $F$ is not bipartite, $e(F) \geq 3$. Then we have
\[2e(G) \geq \frac{10v(G) + 3}{3}.\]
Thus it follows that
\[p(G) \leq KY(G) \leq -\frac{3}{2}.\]
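To spell out this last implication (here we assume, as the surrounding argument suggests, that $KY(G) = 5v(G) - 3e(G)$): the total charge is conserved, so $\sum_{v \in V(G)} ch_f(v) = \sum_{v \in V(G)} \deg(v) = 2e(G)$, and hence
\[
2e(G) \;\geq\; \frac{10v(G)+3}{3}
\quad\Longrightarrow\quad
5v(G) - 3e(G) \;\leq\; 5v(G) - \frac{10v(G)+3}{2} \;=\; -\frac{3}{2}.
\]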
Since potential is integral, we get that $p(G) \leq -2$, contradicting that $G$ is a counterexample.
\end{proof}
Now we proceed with the main discharging argument.
We assign to each vertex $v \in V(G)$ an initial charge $ch_i(v) = \deg(v)$. We discharge in three steps: within each step, the discharging occurs simultaneously throughout the graph. The final charge will be denoted by $ch_f$. For $v \in V(G)$, let $i_3(v)$ denote the number of neighbours of $v$ that are isolated vertices in $D_3(G)$.
\textbf{Discharging Steps}
\begin{enumerate}
\item If $u$ is a vertex of degree at least four, $uv$ is an edge, and $v$ is a vertex of degree three, then $u$ sends $\frac{3ch_i(u)-10}{3\deg_3(u)}$ charge to $v$.
\item If $u$ is an isolated vertex in $D_3(G)$, $u$ sends $\frac{1}{18}$ charge to each adjacent vertex in $G$.
\item Let $u$ be a vertex of degree at least four, and let $f(u)$ be the total charge received by $u$ in Step 2. The vertex $u$ sends $\frac{f(u)}{\deg_3(u)-i_3(u)}$ charge to each adjacent vertex of degree three that is not isolated in $D_3(G)$.
\end{enumerate}
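For concreteness, the transfer amounts in Step 1 evaluate as follows (these values are used implicitly throughout this section): a vertex $u$ of degree four sends each degree three neighbour at least
\[
\frac{3(4)-10}{3\cdot 4} = \frac{1}{6}, \qquad
\frac{3(4)-10}{3\cdot 3} = \frac{2}{9}, \qquad
\frac{3(4)-10}{3\cdot 2} = \frac{1}{3},
\]
according to whether $\deg_3(u)$ is at most $4$, $3$, or $2$, respectively; and since $\frac{3d-10}{3d}$ is increasing in $d$, the transfer is minimized when $\deg(u) = 4$ and $\deg_3(u) = 4$.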
We will show that after discharging, the sum of the charges is at least $v(G)\left(\frac{10}{3}\right)$. Note that by the discharging rules, we have immediately that every vertex of degree at least four has final charge at least $\frac{10}{3}$. In light of this, we will focus our attention on the vertices of degree three: let $C$ be a component in $D_3(G)$, and let $ch_f(C) = \sum_{v \in V(C)} ch_f(v)$.
We note the following.
\begin{obs}\label{onesixth}
If $u$ sends charge to $v$ in Step 1, then $u$ sends $v$ at least $\frac{1}{6}$ charge.
\end{obs}
\begin{claim}\label{isolated}
If $C$ is an isolated vertex, then $ch_f(C) \geq \frac{10}{3}$.
\end{claim}
\begin{proof}
Let $v \in V(C)$. Note that $ch_i(v) = \deg(v) = 3$. Since $v$ is isolated in $D_3(G)$, every neighbour of $v$ has degree at least four. Thus by Observation \ref{onesixth}, $v$ receives at least $\frac{1}{6}$ from each of its neighbours in Step 1. Moreover, $v$ returns at most $\frac{1}{18}$ to each of its neighbours in Step 2. It follows that:
\begin{align*}
ch_f(v) &\geq 3 + 3\left(\frac{1}{6}\right)-3\left(\frac{1}{18}\right) \\
&= \frac{10}{3}
\end{align*} as desired.
\end{proof}
\begin{claim}
\label{pathlength1}
If $C$ is a path of length one, then $ch_f(C) \geq v(C) \left(\frac{10}{3}\right)$.
\end{claim}
\begin{proof}
Let $C$ be the path $v_1v_2$. Note that $ch_i(v_1) = ch_i(v_2) = 3$, and that by Observation \ref{onesixth}, each of $v_1$ and $v_2$ receives at least $\frac{1}{6}$ from each of its neighbours of degree at least four. It follows that:
\begin{align*}
ch_f(C) &\geq ch_i(v_1) + 2\left(\frac{1}{6}\right) + ch_i(v_2) + 2\left(\frac{1}{6}\right) \\
&= 2\left(\frac{10}{3}\right),
\end{align*}
as desired.
\end{proof}
For the remaining cases, we will make use of the following.
\begin{claim}\label{leaves}
If $v$ is a leaf in a tree $C \subseteq D_3(G)$ with $v(C) \geq 3$, then $v$ receives at least $\frac{4}{9}$ charge from its neighbourhood during Step 1.
\end{claim}
\begin{proof}
As $v$ is a leaf in a tree with at least three vertices, there exists a path $vuw$ in $C$. Let $x$ and $y$ be two neighbours of $v$ which are not $u$. By Lemma \ref{identificationlemma}, either $xy \in E(G)$, or $x$, $v$, and $y$ lie in an $M$-gadget with end $v$ that does not contain $u$ or $w$.
If $xy \in E(G)$, then note that $\deg_3(x) \leq \deg(x) -1$, and likewise $\deg_3(y) \leq \deg(y)-1$. In this case, each $z \in \{x, y\}$ sends at least $\frac{3\deg(z)-10}{3(\deg(z)-1)}$ to $v$ in Step 1. Since $\deg(z) \geq 4$, it follows that $v$ receives at least $\frac{2}{9}$ from each of $x$ and $y$, and so at least $\frac{4}{9}$ in total.
If $xy \not \in E(G)$, then $v, x,$ and $y$ lie in an $M$-gadget with end $v$. Thus there exist two triangles $T$ and $T'$ such that $x$ is adjacent to a vertex $a \in V(T)$ and a vertex $a' \in V(T')$, and $y$ is adjacent to a vertex $b \neq a$ in $V(T)$ and a vertex $b' \neq a'$ in $V(T')$. Note by Corollary \ref{triangledegrees}, each of $T$ and $T'$ contains at most one vertex of degree three. Thus at least two of $\{a, a', b, b'\}$ have degree at least four. Without loss of generality, we may assume that either $a$ and $a'$ have degree at least four, or that $a$ and $b'$ have degree at least four. In the first case, $x$ sends at least $\frac{1}{3}$ to $v$, and $y$ sends at least $\frac{1}{6}$ to $v$. Thus $v$ receives at least $\frac{1}{2}$ from $x$ and $y$. In the second case, each of $x$ and $y$ sends at least $\frac{2}{9}$ to $v$, and so $v$ receives at least $\frac{4}{9}$.
Thus $v$ receives at least $\frac{4}{9}$ charge from its neighbourhood.
\end{proof}
\begin{claim}\label{p2}
If $C= v_1v_2v_3$ is a path of length $2$, then $ch_f(C) \geq v(C) \left(\frac{10}{3}\right)$.
\end{claim}
\begin{proof}
By Claim \ref{leaves}, each of $v_1$ and $v_3$ receives at least $\frac{4}{9}$ units of charge from its neighbourhood during Step 1. Moreover, by Observation \ref{onesixth}, $v_2$ receives at least $\frac{1}{6}$ units of charge during Step 1. Thus
\begin{align*}
ch_f(C) &\geq ch_i(v_1) + \frac{4}{9} + ch_i(v_2) + \frac{1}{6} + ch_i(v_3) + \frac{4}{9} \\
&=\frac{181}{18} \\
&> 3\left(\frac{10}{3}\right),
\end{align*}
as desired.
\end{proof}
\begin{claim}
If $C$ is a star with four vertices, then $ch_f(C) \geq 4 \left( \frac{10}{3} \right)$.
\end{claim}
\begin{proof}
By Claim \ref{leaves}, each leaf in $C$ receives at least $\frac{4}{9}$ from its neighbourhood during Step 1 of the discharging process. Moreover, each $u \in V(C)$ has $ch_i(u) = 3$. Thus
\begin{align*}
ch_f(C) &\geq \left(\frac{4}{9} + 3\right) + \left(\frac{4}{9} + 3\right) + \left(\frac{4}{9} + 3\right) + 3 \\
&=12 + \frac{4}{3} \\
&= 4\left(\frac{10}{3}\right),
\end{align*}
as desired.
\end{proof}
For the remaining cases, we will need the following lemma.
\begin{lemma}\label{betterleaves}
Let $v$ be a leaf in a tree $C \subseteq D_3(G)$ with $v(C) \geq 3$, and let $u, w$ be the neighbours of $v$ that are not contained in $C$. Suppose $u, w$ are contained in an $M$-gadget with end $v$. At the end of the discharging process, $v$ will have received at least $\frac{1}{2}$ charge from its neighbours.
\end{lemma}
\begin{proof}
By the structure of $M$-gadgets, there exist two distinct triangles $T$ and $T'$ such that $u$ is adjacent to a vertex $a$ in $T$ and a vertex $a'$ in $T'$, and $w$ is adjacent to $b \neq a$ in $T$ and $b' \neq a'$ in $T'$. First, we note that if either $\deg(u) \geq 5$ or $\deg(w) \geq 5$, we are done. To see this, suppose without loss of generality that $\deg(u) \geq 5$. Note that by Lemma \ref{triangledegrees}, at most one vertex in $T$ and at most one vertex in $T'$ has degree three. If both $b$ and $b'$ have degree at least four, then in Step 1 $u$ sends $v$ at least $\frac{1}{3}$ and $w$ sends $v$ at least $\frac{1}{3}$. If $a$ and $a'$ have degree at least four, then in Step 1 $u$ sends $v$ at least $\frac{5}{9}$. Finally, if $a$ and $b'$ have degree at least four, then in Step 1 $u$ sends $v$ at least $\frac{5}{12}$, and $w$ sends $v$ at least $\frac{2}{9}$. In all cases, $v$ receives at least $\frac{1}{2}$.
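For reference, the constants in the preceding paragraph come directly from the Step 1 rule: when $\deg(u) = 5$, two neighbours of degree at least four give $\deg_3(u) \leq 3$, while a single such neighbour gives $\deg_3(u) \leq 4$, so
\[
\frac{3(5)-10}{3\cdot 3} = \frac{5}{9} \geq \frac{1}{2}, \qquad
\frac{3(5)-10}{3\cdot 4} = \frac{5}{12}, \qquad
\frac{5}{12} + \frac{2}{9} = \frac{23}{36} > \frac{1}{2}.
\]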
Thus we may assume that $\deg(u) = \deg(w) = 4$. We now break into cases depending on the degrees of $a, b, a',$ and $b'$.
\noindent
\textbf{Case 1. Precisely one of $a, b, a',$ and $b'$ has degree three.}
Suppose $\deg(a) = 3$. Then $w$ is adjacent to at least two vertices of degree not equal to three, and so $w$ sends at least $\frac{1}{3}$ to $v$ in Step 1. Moreover, $u$ is adjacent to $a'$ with $\deg(a') \neq 3$, and so $u$ sends $v$ at least $\frac{2}{9}$ in Step 1. By Lemma \ref{triangledegrees}, since $a$ is contained in a triangle, it follows that $a$ is isolated in $D_3(G)$. Thus $a$ sends $\frac{1}{18}$ to $u$ in Step 2. Since $a$ is isolated and $\deg(a') \neq 3$, we have that $u$ sends at least $\frac{1}{18(\deg(u)-2)}$ to $v$ in Step 3. As our choice for $a$ was arbitrary and $\deg(u) = \deg(w) = 4$, it follows that $v$ receives at least
\[ \frac{1}{3} + \frac{2}{9} + \frac{1}{18(4-2)} = \frac{7}{12}
\]
during discharging.
\noindent
\textbf{Case 2. Either $\deg(a) = \deg(a') = 3$, or $\deg(b) = \deg(b')=3$.}
Suppose $\deg(a) = \deg(a') = 3$. Then $u$ sends $v$ at least $\frac{1}{6}$ in Step 1. By Lemma \ref{triangledegrees}, neither $b$ nor $b'$ has degree three, and so $w$ sends $v$ at least $\frac{1}{3}$ in Step 1. By Lemma \ref{triangledegrees}, since $a$ and $a'$ are each contained in a triangle, it follows that both $a$ and $a'$ are isolated in $D_3(G)$. Thus each of $a$ and $a'$ sends $\frac{1}{18}$ to $u$ in Step 2, and so $u$ sends at least $\frac{1}{9(\deg(u)-2)}$ to $v$ in Step 3.
As the remaining case is symmetric and $\deg(u) = \deg(w) = 4$ by assumption, it follows that $v$ receives at least
\[
\frac{1}{6} + \frac{1}{3} + \frac{1}{9(4-2)} = \frac{5}{9}
\]
during discharging.
\textbf{Case 3. $\deg(c) = \deg(d') = 3$, where $c \in \{a,b\}$ and $d \in \{a,b\} \setminus \{c\}$.}
Suppose $\deg(a) = \deg(b') = 3.$ By Lemma \ref{triangledegrees}, each of $a$ and $b'$ are isolated in $D_3(G)$, and so each of $u$ and $w$ sends $v$ at least $\frac{2}{9}$ in Step 1. Moreover, $a$ sends $u$ $\frac{1}{18}$ charge in Step 2; similarly, $b'$ sends $w$ $\frac{1}{18}$ charge in Step 2. Thus $v$ receives at least $\frac{1}{18(\deg(u)-2)}$ from $u$ in Step 3, and at least $\frac{1}{18(\deg(w)-2)}$ from $w$ in Step 3. It follows that $v$ receives at least
\[
\frac{2}{9} + \frac{2}{9} + \frac{2}{18(4-2)} = \frac{1}{2}
\]
during discharging.
The result follows.
\end{proof}
\begin{claim}\label{Y}
If $V(C) = \{v_1, v_2, v_3, v_4, v_5\}$ and $E(C) = \{v_1v_2, v_1v_3, v_1v_4, v_4v_5\}$, then $ch_f(C) \geq v(C)\left(\frac{10}{3}\right)$.
\end{claim}
\begin{proof}
Let $u_1, u_2$ be the neighbours of $v_2$ that are not in $C$. Let $w_1, w_2$ be the neighbours of $v_3$ not in $C$. Note by Lemma \ref{identificationlemma} applied to the path $v_4v_1v_2$, either $u_1u_2 \in E(G)$ or $u_1, u_2$, and $v_2$ are in an $M$-gadget with end $v_2$. By symmetry, either $w_1w_2 \in E(G)$ or $w_1, w_2,$ and $v_3$ are in an $M$-gadget with end $v_3$. We will aim to show $u_1u_2 \not \in E(G)$ (and by a symmetrical argument, $w_1w_2 \not \in E(G)$), as otherwise we are done. To see this, suppose not. Then $u_1u_2 \in E(G).$ By Lemma \ref{identificationlemma} applied to the path $v_5v_4v_1$, since $v_2v_3 \not \in E(C)$ it follows that $v_1, v_2,$ and $v_3$ are in an $M$-gadget with end $v_1$. Thus since $\deg(v_2) = \deg(v_3) = 3$, up to relabelling of $w_1, w_2$ there exist triangles $T_1$ and $T_2$ with $u_1w_1 \in E(T_1)$ and $u_2w_2 \in E(T_2)$. Note that each of $u_1, u_2, w_1, $ and $w_2$ has degree at least four as they are not contained in $C$. Since $u_1u_2 \in E(G)$ by assumption, each of $u_1$ and $u_2$ sends at least $\frac{1}{3}$ to $v_2$ in Step 1. By Claim \ref{leaves}, each of $v_3$ and $v_5$ receives at least $\frac{4}{9}$ in Step 1. Finally, $v_4$ receives at least $\frac{1}{6}$ by Observation \ref{onesixth}. Thus
\begin{align*}
ch_f(C) &\geq 5(3) + 2\left(\frac{1}{3}\right) + 2\left(\frac{4}{9}\right) + \frac{1}{6} \\
&> 5\left(\frac{10}{3}\right)
\end{align*}
as desired. So we may assume $u_1u_2 \not \in E(G)$, and by symmetry $w_1w_2 \not \in E(G)$. By Lemma \ref{identificationlemma}, it follows that each of $v_2$ and $v_3$ is the end of an $M$-gadget with its neighbours outside $C$. By Lemma \ref{betterleaves}, $v_2$ and $v_3$ each receives at least $\frac{1}{2}$ during discharging. As above, $v_5$ receives at least $\frac{4}{9}$ in Step 1, and $v_4$ receives at least $\frac{1}{6}$ by Observation \ref{onesixth}. It follows that
\begin{align*}
ch_f(C) &\geq 5(3) + 2\left(\frac{1}{2}\right) +\frac{4}{9} + \frac{1}{6} \\
&> 5\left(\frac{10}{3}\right)
\end{align*}
as desired.
\end{proof}
\begin{claim}\label{>-<}
If $V(C) = \{v_1, v_2, v_3, v_4, v_5, v_6\}$ and $E(C) = \{v_{1}v_{2},v_{1}v_{3},v_{1}v_{4},v_{2}v_{5},v_{2}v_{6}\}$, then $ch_f(C) \geq v(C) \left(\frac{10}{3}\right)$.
\end{claim}
\begin{proof}
Let $u_1, u_2$ be the neighbours of $v_3$ that are not in $C$. Let $w_1, w_2$ be the neighbours of $v_{4}$ not in $C$. Note by Lemma \ref{identificationlemma} applied to the path $v_2v_1v_3$, either $u_1u_2 \in E(G)$ or $u_1, u_2$, and $v_3$ are in an $M$-gadget with end $v_3$. By symmetry, either $w_1w_2 \in E(G)$ or $w_1, w_2,$ and $v_4$ are in an $M$-gadget with end $v_4$. We will aim to show $u_1u_2 \not \in E(G)$ (and by a symmetrical argument, $w_1w_2 \not \in E(G)$), as otherwise we are done. To see this, suppose not. Then $u_1u_2 \in E(G).$ By Lemma \ref{identificationlemma} applied to the path $v_5v_2v_1$, since $v_3v_4 \not \in E(C)$ it follows that $v_1, v_3,$ and $v_4$ are in an $M$-gadget with end $v_1$. Thus since $\deg(v_3) = \deg(v_4) = 3$, up to relabelling of $w_1, w_2$ there exist triangles $T_1$ and $T_2$ with $u_1w_1 \in E(T_1)$ and $u_2w_2 \in E(T_2)$. Note that each of $u_1, u_2, w_1, $ and $w_2$ has degree at least four as they are not contained in $C$. Since $u_1u_2 \in E(G)$ by assumption, each of $u_1$ and $u_2$ sends at least $\frac{1}{3}$ to $v_3$ in Step 1. By Claim \ref{leaves}, each of $v_4$, $v_5$, and $v_6$ receives at least $\frac{4}{9}$ in Step 1. Thus
\begin{align*}
ch_f(C) &\geq 6(3) + 2\left(\frac{1}{3}\right) + 3\left(\frac{4}{9}\right) \\
&= 6\left(\frac{10}{3}\right),
\end{align*}
as desired. So we may assume $u_1u_2 \not \in E(G)$, and by symmetry $w_1w_2 \not \in E(G)$. By symmetry, $v_5$ is not contained in a triangle with its neighbours outside $C$, and neither is $v_6$. By Lemma \ref{identificationlemma}, it follows that each of $v_3$, $v_4$, $v_5$, and $v_6$ is the end of an $M$-gadget with its neighbours outside $C$. By Lemma \ref{betterleaves}, $v_3$, $v_4$, $v_5$, and $v_6$ each receive at least $\frac{1}{2}$ during discharging. It follows that
\begin{align*}
ch_f(C) &\geq 6(3) + 4\left(\frac{1}{2}\right) \\
&= 6\left(\frac{10}{3}\right),
\end{align*}
as desired.
\end{proof}
Finally, we show the following.
\begin{claim}\label{p3}
If $C$ is a path of length three, then $ch_f(C) \geq v(C) \left(\frac{10}{3}\right)$.
\end{claim}
\begin{proof}
Let $C$ be the path $v_1v_2v_3v_4$. Let $u$ be the neighbour of $v_2$ not contained in $C$. By Lemma \ref{identificationlemma} applied to the path $v_4v_3v_2$, either $uv_1 \in E(G)$, or $v_1, v_2$, and $u$ are contained in an $M$-gadget with end $v_2$. By Lemma \ref{triangledegrees}, $uv_1$ is not an edge in $E(G)$, as otherwise $uv_1v_2$ is a triangle containing two vertices of degree three. Thus by the structure of $M$-gadgets, there exist two disjoint triangles $T=abca$ and $T'=a'b'c'a'$ such that, up to relabelling, $u$ is adjacent to $a$ in $T$ and $a'$ in $T'$, and $v_1$ is adjacent to $b$ in $T$ and $b'$ in $T'$.
Next, note that by Lemma \ref{identificationlemma} applied to the path $v_3v_2v_1$, either $bb' \in E(G)$, or $v_1$ is the end of an $M$-gadget with $b$ and $b'$. First suppose $bb' \in E(G)$. In this case, note that by Lemma \ref{triangledegrees}, at most one of $a$ and $c$ has degree three. Thus $b$ does not send charge to at least two of its neighbours in Step 1. Symmetrically, at most one of $a'$ and $c'$ has degree three, and so $b'$ does not send charge to at least two of its neighbours in Step 1. Thus $v_1$ receives at least $\frac{1}{3}$ from each of $b$ and $b'$ in Step 1. By Claim \ref{leaves}, $v_4$ receives at least $\frac{4}{9}$ charge in Step 1. Finally, each of $v_2$ and $v_3$ receives at least $\frac{1}{6}$ by Observation \ref{onesixth}. It follows that
\begin{align*}
ch_f(C) & \geq 4(3) + 2\left(\frac{1}{3} \right) + \frac{4}{9} + 2\left(\frac{1}{6}\right) \\
&> 4\left(\frac{10}{3}\right),
\end{align*}
as desired. Thus we may assume that $bb'$ is not an edge in $G$. But then by Lemma \ref{identificationlemma} applied to the path $v_3v_2v_1$, we have that $v_1$, $b$, and $b'$ are contained in an $M$-gadget with end $v_1$. By Lemma \ref{betterleaves}, $v_1$ thus receives at least $\frac{1}{2}$ charge during discharging. By a perfectly symmetrical argument, $v_4$ receives at least $\frac{1}{2}$ charge during discharging. As above, each of $v_2$ and $v_3$ receives at least $\frac{1}{6}$ by Observation \ref{onesixth}. It follows that:
\begin{align*}
ch_f(C) & \geq 4(3) + 2\left(\frac{1}{2}\right) + 2\left(\frac{1}{6}\right) \\
&= 4\left(\frac{10}{3}\right)
\end{align*}
as desired.
\end{proof}
We are now equipped to prove Theorem \ref{maintheorem}.
\begin{proof}[Proof of Theorem \ref{maintheorem}]Suppose not. Let $G$ be a vertex-minimum counterexample. It follows from Claims \ref{isolated} through \ref{p3} that $KY(G) \leq 0$. Moreover, by Lemma \ref{bigcomp}, $D_3(G)$ contains a component with at least three vertices. We break into cases depending on the structure of the components in $D_3(G)$. \\
\noindent
\textbf{Case 1: $D_3(G)$ contains a component $C$ with $v(C) \geq 3$ such that $C$ is any of the graphs described in Claims \ref{Y} through \ref{p3}}. \\
In this case, $C$ contains a path $P$ of length two ending at a non-leaf vertex $v$. Thus, by applying Lemma \ref{identificationlemma} to $P$ with $v$ playing the role of $x$, we get that either $v$ is contained in a triangle with its neighbours not on $P$, or $v$ is the end of an $M$-gadget $H$. By Lemma \ref{triangledegrees}, $v$ is not contained in a triangle with another vertex in $C$, and so it follows that $v$ is the end of an $M$-gadget $H$. But $T^3(H) = 2$, and so $T^3(G) \geq 2.$ It follows that $p(G) \leq KY(G) -2 \leq -2$, and so $G$ is not a counterexample.
\\
\vskip 4mm
\noindent
\textbf{Case 2: $D_3(G)$ contains no components described in Case 1, but contains a star $H$ with four vertices.} \\
Let $V(H) = \{v_1, v_2, v_3,v_4\}$ and $E(H) = \{v_4v_1, v_4v_2, v_4v_3\}$. By applying Lemma \ref{identificationlemma} to each of the paths $v_1v_4v_3, v_1v_4v_2,$ and $v_2v_4v_3$, we see that each of $v_1,v_2,$ and $v_3$ is either the end of an $M$-gadget, or contained in a triangle with its neighbours outside $H$. As in the above case, if $G$ contains an $M$-gadget, then $p(G) \leq -2$, so $G$ is not a counterexample. Thus we may assume that none of $v_1$, $v_2$, and $v_3$ is the end of an $M$-gadget. Let $T_1$, $T_2$, and $T_3$ be the triangles containing $v_1, v_2$ and $v_3$, respectively. By Lemma \ref{triangledegrees}, these triangles are distinct. If $T^3(T_1 \cup T_2 \cup T_3) \geq 2$, then $p(G) \leq -2$ and $G$ is not a counterexample. Thus we may assume the triangles share some vertices. There are two cases to consider: either there exists a vertex contained in all three triangles, or this does not happen and instead every pair of triangles shares a vertex. If every pair of triangles shares a vertex, then since $H \cup T_1 \cup T_2 \cup T_3$ is $4$-critical, $G = H \cup T_1 \cup T_2 \cup T_3$. But then $p(G) = 5(7)-3(12)-1 = -2$, and so $G$ is not a counterexample. Thus we may assume that $T_1 \cap T_2 \cap T_3 = \{u\}$ for some vertex $u \in V(G)$. In this case, note that $\deg(u) \geq 6$. Moreover, $u$ is adjacent to at least three vertices that are adjacent to $H$ but not in $H$; thus $u$ neighbours at least three vertices of degree greater than three. It follows that $u$ sends at least $\frac{8}{9}$ charge to each of $v_1, v_2$, and $v_3$ in Step 1 of the discharging process. Thus $ch_f(H) \geq 3\left(\frac{8}{9}\right) + 4(3) = \frac{44}{3} = 4\left(\frac{10}{3}\right) + \frac{4}{3}$. Note that every other component $C$ in $D_3(G)$ has final charge at least $v(C) \left(\frac{10}{3}\right)$ and every vertex of degree at least four has final charge at least $\frac{10}{3}$.
Thus the sum of the charges is at least $v(G)\left(\frac{10}{3}\right) + \frac{4}{3}$. Moreover, since potential is integral, it follows that the sum of the charges is at least $v(G)\left(\frac{10}{3}\right) + 2$. Thus $KY(G) \leq -3$. Moreover, as $G$ contains a triangle, $T^3(G) \geq 1$. Thus $p(G) \leq -4$, which contradicts the fact that $G$ is a counterexample.
\\
\vskip 4mm
\noindent
\textbf{Case 3: $D_3(G)$ contains no components described in Cases 1 or 2, but contains a path $H$ of length 2.} \\
Let $H = v_1v_2v_3$. Note that by Claim \ref{p2}, the final charge of $H$ is strictly greater than $v(H) \left( \frac{10}{3}\right)$. Moreover, every other component $C$ of $D_3(G)$ has final charge at least $v(C) \left(\frac{10}{3}\right)$ and every vertex of degree at least four has final charge at least $\frac{10}{3}$. Since potential is integral, it follows that the sum of the charges is at least $v(G) \left(\frac{10}{3}\right) + 1$, and so $KY(G) \leq -\frac{3}{2}$. But since $KY(G)$ is also integral, $KY(G) \leq -2$. Thus $p(G) \leq -2$, and so $G$ is not a counterexample.
\end{proof}
\bibliographystyle{plain}
% arXiv:1311.2965 --- Derived subdivisions make every PL sphere polytopal
\subsection{Making any PL sphere polytopal}
A \Defn{subdivision} of a simplicial complex $\Delta$ is a simplicial complex
$\Delta'$ with the same underlying space as $\Delta$, such that for every face
$D'$ of $\Delta'$ there is some face $D$ of $\Delta$ for which $D' \subset D$.
One also says that $\Delta'$ is a \Defn{refinement} of $\Delta$, or writes
$\Delta' \prec \Delta$. A \Defn{stellar subdivision} of $\Delta$ at a face
$\tau$ is defined as
\[
\st(\tau,\Delta):=(\Delta-\tau) \cup \{\conv(\{v_\tau\}\cup \sigma) : \sigma \in
\St(\tau,\Delta)-\tau \}.
\]
Here $\Delta-\tau$ denotes the \Defn{deletion} of $\tau$ from $\Delta$, i.e.\
the maximal subcomplex of $\Delta$ that does not contain $\tau$, the point
$v_\tau$ lies anywhere in the relative interior of $\tau$, and
$\St(\tau,\Delta)$ is the \Defn{star} of $\tau$ in $\Delta$, i.e.\ the minimal
subcomplex of $\Delta$ that contains all faces of $\Delta$ containing $\tau$.
Clearly, the combinatorial type of the stellar subdivision does not depend on
the choice of $v_\tau$.
A \Defn{derived subdivision} $\sd \Delta$ is obtained by stellarly subdividing
$\Delta$ at all faces in order of decreasing dimension, cf.\ \cite{Hudson}. A
special case is the \Defn{barycentric subdivision}, where the point $v_\tau$ is
the barycenter of $\tau$.
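Combinatorially, the faces of $\sd \Delta$ are in bijection with the chains of nonempty faces of $\Delta$: a $k$-face of $\sd \Delta$ corresponds to a flag
\[
\tau_0 \subsetneq \tau_1 \subsetneq \cdots \subsetneq \tau_k, \qquad \tau_i \in \Delta,
\]
with vertex set $\{v_{\tau_0}, \ldots, v_{\tau_k}\}$. For example, if $\Delta = \partial \sigma^2$ is the boundary of a triangle, then $\sd \Delta$ has six vertices and six edges, i.e., it is a hexagon.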
The derived subdivision can be iterated, by defining $\sd^m \Delta:=\sd
(\sd^{m-1} \Delta)$ and $\sd^0 \Delta:=\Delta$. Our main result in this note is:
\begin{thm}
\label{thm:MakePoly}
For every PL sphere $\Delta$, there exists a $k\ge 0$ such that $\sd^k \Delta$
is \Defn{polytopal}, i.e., it is combinatorially equivalent to the boundary
complex of some convex polytope.
\end{thm}
This answers a question posed to the authors on several occasions, in particular
by Louis J. Billera (personal communication). The result itself is implicit in
the work of Morelli \cite[Sec.\ 6]{Morelli}; however, it was never written up
explicitly. We obtain the following immediate corollary, cf.\
\cite[Cor.\ I.3.12]{AB-SSZ}:
\begin{cor}
For every closed simplicial PL manifold $M$, there is an $n\ge 0$ such that for every
nonempty face $F$ of $\sd^n M$, the simplicial PL sphere $\Lk(F,\sd^n M)$ is polytopal.
\end{cor}
\begin{proof}
Notice that, if $\Delta$, $\Gamma$ is any pair of PL spheres, then $\sd^n
(\Delta \ast \Gamma)$ is a stellar subdivision of $\sd^n \Delta \ast \sd^n
\Gamma$ (where $\ast$ denotes the join operation); since stellar subdivisions
preserve polytopality, we therefore observe that $\sd^n (\Delta \ast \Gamma)$ is
polytopal if $\sd^n \Delta$ and $\sd^n \Gamma$ are polytopal.
Observe secondly that if $\Delta$ is any simplicial complex, and $F$ is any face
of $\sd \Delta$, then there is a face $\widetilde{F}\in \Delta$ and simplices $\sigma_1, \ldots, \sigma_k$
such that \[\Lk(F,\sd \Delta)\, \cong\, \sd \partial \sigma_1\ast \cdots\ast \sd \partial\sigma_k \ast \sd \Lk(\widetilde{F},\Delta).\]
Now, let $n$ be chosen large enough such that for all faces $\widetilde{F}$ of
$M$, the complex $\sd^n \Lk(\widetilde{F},M)$ is polytopal. It then follows from
the two observations above that for every face $F$ of $\sd^n M$, the complex
$\Lk(F,\sd^n M)$ is polytopal.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:MakePoly}}]
A \Defn{complete pointed fan} in $\mathbb{R}^{d+1}$ is a partition of $\mathbb{R}^{d+1}$ into
convex polyhedral cones with apices at the origin $\mathbf{0}$ such that the
intersection of any two cones is a face of both.
A fan is called \Defn{regular} if it consists of the cones over the faces of a
convex polytope (with the origin in the interior). A fan $F$ is regular if and
only if there exists a PL function $\phi \colon \mathbb{R}^{d+1} \to \mathbb{R}$ whose domains
of linearity are exactly the full-dimensional cones of $F$ and that is strictly
convex across every codim 1 cone of $F$, cf.\ \cite{LRS}. Thus we have to show
that every PL $d$-sphere $\Delta$ becomes combinatorially equivalent to some
regular simplicial fan after several derived subdivisions.
We will be repeatedly using the following simple observation:
\begin{lem}[cf.\ {\cite[Ch.\ 1, Lem.\ 4]{ZeemanBK}}]
Let $\Delta_1$ and $\Delta_2$ denote two simplicial complexes with the same
support; then there is a derived subdivision $\sd^k\Delta_1$ that refines
$\Delta_2$. Moreover, one can choose $k\le |f|(\Delta_2)$.
\end{lem}
Here, $|f|(\cdot)$ denotes the total number of faces of a simplicial complex.
\noindent\textbf{Claim 1:} There is an $n$ such that $\sd^n \Delta$ is
combinatorially equivalent to a simplicial (not necessarily regular) fan in
$\mathbb{R}^{d+1}$.
By definition, $\Delta$ is PL homeomorphic to the boundary of the
$(d+1)$-simplex $\sigma^{d+1}$. In other words, there are combinatorially
equivalent subdivisions
$\widetilde{\Delta}$ and $\Sigma$ of $\Delta$ and $\partial \sigma^{d+1}$,
respectively. Let now
\[\vartheta:\widetilde{\Delta} \longrightarrow \Sigma\]
denote a facewise linear map realizing the combinatorial equivalence, and let
$\sd^n \Delta$ be chosen fine enough such that $\sd^n \Delta
\prec\widetilde{\Delta}$. Then $\vartheta (\sd^n \Delta)$ is a subdivision of
$\partial \sigma^{d+1}$ combinatorially equivalent to $\sd^n \Delta$. The cone
with respect to any interior point of $\sigma^{d+1}$ gives the desired
simplicial fan.
In the following we identify subdivisions of $\partial \sigma^{d+1}$ with the
corresponding fans.
\noindent\textbf{Claim 2:} There is a regular subdivision $\Delta'$ of $\sd^n
\Delta$.
Regularity is preserved under stellar, and in particular derived, subdivisions,
cf.\ \cite{LRS}. Thus we may take for $\Delta'$ any derived subdivision of
$\partial \sigma^{d+1}$ that refines $\sd^n \Delta$.
Let $\sd^{n+m} \Delta$ be a derived subdivision that refines the regular
subdivision $\Delta'$:
\[\sd^{n+m} \Delta\, \prec\, \Delta'\, \prec\, \sd^n \Delta
\]
\noindent \textbf{Claim 3:} $\sd^{n+m} \Delta$ is regular.
We have to show that there is a PL function that is strictly convex with respect
to the fan $\sd^{n+m} \Delta$. First, let us construct a PL function $h:\mathbb{R}^{d+1} \longrightarrow \mathbb{R}$
that is linear on the faces of $\sd^{n+m} \Delta$, and that is strictly convex
at every $\mr{codim}\ 1$-face \emph{except} at the $\mr{codim}\ 1$ skeleton of
$\sd^n \Delta$.
This is proven by induction: $\sd^{n+m} \Delta$ is a derived, and in particular
an iterated stellar subdivision of $\sd^n \Delta$; let $\Delta_1$, $\Delta_2,
\ldots$ denote the intermediate complexes, so that $\Delta_{i+1}$ is obtained
from $\Delta_{i}$ using a single stellar subdivision (obtained by introducing a
vertex $\nu_i$).
If $\nu$ is a ray of a simplicial fan $\mr{F}$, then let us denote by
\[[\nu,\mr{F}]^\ast(\cdot):\mathbb{R}^{d+1} \longrightarrow \mathbb{R}\] the function that is
$\langle \cdot, \nu\rangle$ on the ray spanned by $\nu$, that is $0$ on all
other rays and that is linear on the faces of $\mr{F}$. Note that
$[\nu,\mr{F}]^\ast(\cdot)$ is strictly convex across all $\mr{codim}\ 1$-faces
of $\mr{F}$ that contain $\nu$.
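To see this bump function in coordinates, here is a small sanity check of ours (the $2$-dimensional fan, the choice of $\nu$, and all names are toy choices, not taken from the text). It evaluates $[\nu,\mr{F}]^\ast$ on a complete simplicial fan in $\mathbb{R}^2$ obtained by stellarly subdividing the first quadrant at $\nu=(1,1)$, and checks that the function is strictly non-linear across the wall spanned by $\nu$, while remaining linear across a wall away from the support of the bump (we test non-linearity rather than a sign, since the convexity sign convention varies between sources).

```python
# Toy evaluation of the bump function [nu, F]* for a complete simplicial
# fan in R^2: rays (1,0),(1,1),(0,1),(-1,0),(0,-1), cones = adjacent pairs.
NU = (1.0, 1.0)
RAYS = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
CONES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # counterclockwise pairs

def ray_value(r):
    # [nu,F]* is <., nu> on the ray spanned by nu and 0 on all other rays.
    return NU[0] * NU[0] + NU[1] * NU[1] if r == NU else 0.0

def bump(x):
    # Extend the ray values to the function that is linear on each cone.
    for i, j in CONES:
        r1, r2 = RAYS[i], RAYS[j]
        det = r1[0] * r2[1] - r1[1] * r2[0]
        a = (x[0] * r2[1] - x[1] * r2[0]) / det   # x = a*r1 + b*r2
        b = (r1[0] * x[1] - r1[1] * x[0]) / det
        if a >= -1e-9 and b >= -1e-9:
            return a * ray_value(r1) + b * ray_value(r2)
    raise ValueError("fan is not complete")

def wall_deviation(p, q):
    # f(p) + f(q) - 2 f((p+q)/2): zero iff f is linear across the wall
    # separating the cones containing p and q.
    m = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return bump(p) + bump(q) - 2 * bump(m)

# Strictly non-linear across the wall spanned by nu ...
dev_nu = wall_deviation((1.0, 0.0), (0.0, 1.0))
# ... and linear across the wall spanned by (-1,0), away from nu.
dev_far = wall_deviation((-1.0, 0.5), (-1.0, -0.5))
print(dev_nu, dev_far)
```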
On $\sd^n \Delta=\Delta_0$, we just take the zero function.
Assume, by induction, that $\Delta_i$ admits a function
$h_i:\mathbb{R}^{d+1} \longrightarrow \mathbb{R}$ that is linear on the faces of $\Delta_i$, and
strictly convex at every $\mr{codim}\ 1$ face except those in the $\mr{codim}\
1$ skeleton of $\sd^n \Delta$. Then, for $\varepsilon_i>0$ small enough, the function
$\varepsilon_i[\nu_i,\Delta_{i+1}]^\ast+h_i$ is linear on every face of
$\Delta_{i+1}$, and strictly convex at all $\mr{codim}\ 1$ faces newly
introduced. Hence, by induction, there is a function $h$ with the desired
property.
Now, $\Delta'$ is regular, and hence there exists a strictly convex piecewise
linear function $h': \mathbb{R}^{d+1} \longrightarrow \mathbb{R}$ whose domains of linearity are
the facets of $\Delta'$. In particular, $h'$ is linear on all faces of
$\sd^{n+m} \Delta$. The function $h'$ is strictly convex across those faces
where the convexity of $h$ can fail. Hence, for an $\varepsilon>0$ small enough,
$\varepsilon h+h'$ is strictly convex at all codimension one faces of $\sd^{n+m}
\Delta$, and linear on all facets of $\sd^{n+m} \Delta$. This finishes the
proof.
\end{proof}
\subsection{Algorithmic aspects}
Now that we have determined that sufficiently many iterations of the derived
subdivision make any PL sphere polytopal, it makes sense to ask precisely how
many are needed.
If $\dim \Delta=2$, then $\Delta$ is combinatorially equivalent to the boundary
of a convex polytope by Steinitz' theorem, cf.\ \cite{Z}; the fact that the graph
of every triangulation of $S^2$ is $3$-connected is an easy exercise. Thus $k=0$
suffices in this case.
For higher dimensions, $k$ cannot be bounded as easily, as we shall now see.
As usual, $f_i(\cdot)$ denotes the number of $i$-dimensional faces of a
simplicial complex.
\begin{thm}
\begin{compactenum}[\rm (a)]
\item If $d \ge 3$, then there is no $k$ depending only on $d$ such that
every PL $d$-sphere becomes polytopal after $k$ derived subdivisions.
\item For $d=3$, the number $k=k(\Delta)$ of derived subdivisions needed to make
a PL sphere $\Delta$ polytopal can be bounded from above by \[k(\Delta)\, \le\,
a\cdot 2^{b\cdot f_3(\Delta)\cdot 2^{c\cdot f_3^2(\Delta) \cdot 2^{d\cdot f_3^2(\Delta)}}}
\, +\, c\cdot f_3^2(\Delta) \cdot 2^{d\cdot f_3^2(\Delta)},\]
where $a, b, c, d\ge 0$ are constants independent of $\Delta$.
\item If $d \ge 5$, then the number of derived subdivisions that makes a PL
$d$-sphere $\Delta$ polytopal is not (Turing machine) computable from $\Delta$.
\end{compactenum}
\end{thm}
In other words, if $\varphi:\mathfrak{K}_d\longrightarrow \mathbb{N}$ is any
computable function (cf.\ \cite{Davis}), where $\mathfrak{K}_d$ is the
collection of $d$-dimensional simplicial complexes, then for some PL $d$-sphere
$\Delta$, more than $\varphi(\Delta)$ derived subdivisions are needed to make
$\Delta$ polytopal. In particular, the number of subdivisions is not computable
from the dimension, the $f$-vector, the flag vector or any other combinatorial
invariant of $\Delta$.
\begin{proof}
\begin{asparaenum}[\rm (a)]
\item The first statement follows from the work of Bing \cite{Bing} and
Lickorish \cite{LME}. Indeed, one can show that for every $d\ge 3$ and every
$k\ge 0$, there is a PL $d$-sphere $\Delta$ such that $\sd^k \Delta$ is not
shellable (cf.\ \cite{LME}). Since the boundary of every convex polytope is
shellable \cite{BruggesserMani}, $\sd^k \Delta$ cannot be combinatorially
equivalent to the boundary of a convex polytope. Compare also \cite{AB-SSZ}.
\item For the second assertion, recall that there is an $\ell$ such that
$\sd^\ell \Delta$ is combinatorially equivalent to a subdivision of (the
simplicial fan spanned by) $\partial \sigma^4$ by Claim 1 in the proof of
Theorem \ref{thm:MakePoly}. By a result of Mijatovi\'c \cite{MJT}, $\ell$
can be bounded in terms of the number of faces of $\Delta$; more explicitly, one
can show that $\ell\le c' \cdot f_3^2(\Delta)\cdot 2^{d'\cdot f_3^2(\Delta)}$,
where $c', d'\ge 0$ are constants independent of $\Delta$.
Now, there is an iterated derived subdivision $\Delta'=\sd^m \partial \sigma^4$
of $\partial \sigma^4$ that is regular and subdivides $\sd^\ell \Delta$, and the
number of derived subdivisions needed can be bounded from above by
\[m\le|f|(\sd^\ell \Delta)\le (4!)^\ell \cdot 2^4 \cdot f_3(\Delta).\] Finally,
the fan $\Delta'$ is regular, and there is an $n$ such that $\sd^{\ell+n}
\Delta$ subdivides $\Delta'$, and \[n\le |f|(\Delta')\le (4!)^m \cdot 2^4 \cdot
f_3(\partial \sigma^4).\]
But $\sd^{\ell+n} \Delta$ is regular by Claim 3 in the proof of Theorem
\ref{thm:MakePoly}.
\item For the final claim: if there existed a Turing machine computable
function $\varphi:\mathfrak{K}_d\longrightarrow \mathbb{N}$ that, for every PL
$d$-sphere $\Delta$, $d\ge 5$, returns a value $\varphi(\Delta)$ such that
$\sd^{\varphi(\Delta)} \Delta$ is polytopal, then we would also have a Turing
machine that decides whether or not a given simplicial $d$-manifold, $d\ge 5$,
is a PL sphere: Recall that deciding whether a given simplicial complex is the
boundary of a convex polytope is complete within the existential theory of the
reals, and therefore Turing machine decidable, cf.\ \cite{Davis, Mnev}. Now, if
this Turing
machine returns, for any $d$-dimensional simplicial complex $\Delta$, that
$\sd^{\varphi(\Delta)} \Delta$ is not polytopal, then $\Delta$
is not a PL sphere by assumption; if instead it returns that
$\sd^{\varphi(\Delta)} \Delta$ is polytopal, then $\Delta$ is a PL sphere, as
desired.
The existence of this Turing machine, however, stands in contradiction to a
classical result of S.~P.~Novikov \cite{Novikov}, cf.\ \cite{Nabutovsky}, who
proved that it is not decidable whether a given $5$-manifold is actually the PL
$5$-sphere. Therefore, the assumption is wrong, and no such Turing machine
bounding $k$ exists. \qedhere
\end{asparaenum}
\end{proof}
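To get a feeling for the size of the bound in (b), one can assemble the quantities $\ell$, $m$ and $n$ from the proof numerically. The constants $c', d'$ are not specified above, so the values below, and the toy face number $f_3=2$ (too small for an actual $3$-sphere), are placeholders chosen purely for illustration.

```python
# Illustrative (not rigorous) evaluation of the bound assembled in the
# proof of part (b), with placeholder constants c' = d' = 1 and f3 = 2.
from math import log2

f3 = 2
ell = f3**2 * 2**(f3**2)        # l <= c'*f3^2*2^(d'*f3^2), with c' = d' = 1
m = 24**ell * 2**4 * f3         # m <= (4!)^l * 2^4 * f3(Delta)
# n <= (4!)^m * 2^4 * f3(boundary of sigma^4); m is already so large that
# n can only be tracked through its logarithm.
log2_n = m * log2(24) + 4 + log2(5)
print(ell, log2(m), log2_n)
```

Even with these tiny inputs, $\log_2 n$ exceeds $10^{80}$, which is the doubly exponential blow-up recorded in the statement.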
\subsection{Regular triangulations and geometric bistellar moves}
Let $P \subset \mathbb{R}^d$ be a convex $d$-polytope. A triangulation $T$ of $P$ (the
vertex set of $T$ may be bigger than that of $P$) is called \emph{regular} if
there exists a PL function $h \colon P \to \mathbb{R}$ linear on all $d$-simplices of
$T$ and convex across all of its $(d-1)$-simplices, compare also the notion of a
regular fan in the proof of Theorem \ref{thm:MakePoly}, and \cite{Z} or
\cite{LRS}.
While polytopality is a combinatorial property of a simplicial complex,
regularity of a triangulation depends not only on its combinatorics, but also on
the position of its vertices.
\begin{thm}
\label{thm:MakeReg}
For every triangulation $T$ of a convex polytope $P$ there is a $k$ such that
some derived subdivision $\sd^k T$ is regular.
\end{thm}
\begin{proof}
The proof is similar to that of Theorem \ref{thm:MakePoly}.
We need a regular triangulation of $P$ to start with: To find one, choose $h_i
\in \mathbb{R}$ for every vertex $p_i$ of $P$ generically and take the lower envelope of
the points $(p_i, h_i) \in \mathbb{R}^{d+1}$, cf.\ \cite{LRS}. By applying derived
subdivisions to this regular triangulation, we obtain a regular triangulation
$T'$ of $P$ that refines $T$. Now, there is an $m\ge 0$ such that $\sd^m T$ refines
$T'$. It now follows as in the proof of Theorem \ref{thm:MakePoly}, Claim $3$,
that $\sd^m T$ is regular.
\end{proof}
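The lower-envelope construction from the first step of the proof can be carried out by brute force in a toy case (our own sketch; the square and the heights are arbitrary illustrative choices): lifting one vertex of a square above the others forces the regular triangulation to use the diagonal avoiding that vertex.

```python
# Regular triangulation of a convex polygon via lifting: a triangle on
# vertices i,j,k belongs to the triangulation iff the plane through the
# three lifted points has every other lifted point (weakly) above it.
from itertools import combinations

pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # vertices of P
hts = [0.0, 0.0, 0.0, 1.0]                               # chosen heights

def plane(i, j, k):
    # Coefficients (a, b, c) with z = a*x + b*y + c through the three
    # lifted points, or None if the projected triangle is degenerate.
    (x1, y1), (x2, y2), (x3, y3) = pts[i], pts[j], pts[k]
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    if abs(det) < 1e-12:
        return None
    a = ((hts[j] - hts[i]) * (y3 - y1) - (hts[k] - hts[i]) * (y2 - y1)) / det
    b = ((x2 - x1) * (hts[k] - hts[i]) - (x3 - x1) * (hts[j] - hts[i])) / det
    return a, b, hts[i] - a * x1 - b * y1

def regular_triangulation():
    tris = set()
    for i, j, k in combinations(range(len(pts)), 3):
        pl = plane(i, j, k)
        if pl is None:
            continue
        a, b, c = pl
        if all(hts[t] >= a * pts[t][0] + b * pts[t][1] + c - 1e-12
               for t in range(len(pts))):
            tris.add(frozenset((i, j, k)))
    return tris

tri = regular_triangulation()
print(tri)   # the lower envelope selects the diagonal from vertex 0 to 2
```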
\begin{cor}
\label{cor:Pachner}
Any two triangulations $T_0$ and $T_1$ of $P$ can be connected by a sequence of
geometric Pachner (or bistellar) moves.
\end{cor}
This is essentially the main result of \cite{Morelli} and \cite{Wlo}, with the
difference that we don't assume $P$ to be a lattice polytope and don't require
triangulations to be unimodular. Ewald and Shephard had earlier proven it for
regular triangulations \cite{ES}. On the other hand, Pachner \cite{Pachner}
proved that PL homeomorphic manifolds are related by \emph{combinatorial}
Pachner moves.
To deduce Corollary \ref{cor:Pachner} from Theorem \ref{thm:MakeReg}, take any
triangulation $\widetilde{T}$ of $P \times [0,1]$ that restricts to $T_0$ and
$T_1$ on $P \times \{0\}$ and $P \times \{1\}$ respectively, and apply derived
subdivisions to make $\widetilde{T}$ regular. Sweeping out from $0$ to $1$
produces a sequence of bistellar moves. Details can be found in \cite[Sec.\
2]{IzmScl}.
Note that geometric bistellar \emph{flips} (bistellar moves other than stellar
subdivisions and moves inverse to them) do not suffice in general to transform
one of two triangulations of the same point configuration into the other; see
\cite{Santos05, Santos06} for a counterexample in dimension $5$.
\bibliographystyle{myamsalpha}
% Source metadata: arXiv:1311.2965, ``Derived subdivisions make every PL sphere
% polytopal'' (math.CO; math.MG), November 2013.
% Source: https://arxiv.org/abs/2109.05556, ``Family of $\mathscr{D}$-modules
% and representations with a boundedness property''.
\section{Application to representation theory}\label{sect:applications}
In this section, we define a notion of uniformly bounded family of $\lie{g}$-modules.
A typical example is a family of Harish-Chandra modules with bounded lengths.
As an application of results about uniformly bounded families of $\mathscr{D}$-modules, we will show that the uniform boundedness of a family of $\lie{g}$-modules is preserved by several operations such as (cohomologically) parabolic induction and taking coinvariants.
We will also prove the boundedness of the lengths of $\univ{g}^{G'}$-modules, which is related to the branching problem and harmonic analysis.
\subsection{Uniformly bounded family of \texorpdfstring{$\lie{g}$}{g}-modules}\label{sect:BBcorrespondence}
In this subsection, we introduce the notion of uniformly bounded family of $\lie{g}$-modules.
Let $G$ be a connected reductive algebraic group and $B$ a Borel subgroup of $G$.
Fix a Levi decomposition $B=TU$, where $T$ is a maximal torus and $U$ is the unipotent radical of $B$.
Then the natural projection $p\colon G/U\rightarrow G/B$ is a principal $T$-bundle and $G$-equivariant.
We will reduce theorems about $\lie{g}$-modules to those about $\mathscr{D}$-modules on $G/B$.
To do so, we review the Beilinson--Bernstein correspondence.
Let $\mathscr{D}_{G/U}$ be the algebra of non-twisted differential operators on $G/U$ equipped with the natural $G\times T$-equivariant structure.
For a character $\lambda$ of $\lie{t}$, we set
\begin{align*}
\mathscr{D}_{G/B,\lambda} := (\mathbb{C}_{\lambda}\otimes_{\univ{t}}p_*\mathscr{D}_{G/U})^T
\end{align*}
and consider $\mathscr{D}_{G/B,\lambda}$ as a $G\times T$-equivariant algebra of twisted differential operators
as in Subsection \ref{sect:PrincipalBundle}.
Then $p^{\#}\mathscr{D}_{G/B,\lambda}$ is naturally isomorphic to $\mathscr{D}_{G/U}$.
Note that we can explicitly construct a bounded trivialization belonging to $\mathcal{B}(G/B, G)$ using an open covering by the open Bruhat cell and its translations.
We write ${\Delta^+} = {\Delta^+}(\lie{g}, \lie{t})$ for the set of positive roots determined by $B$ and $T$.
We write $\rho$ for half the sum of the positive roots.
The following fact is called the Beilinson--Bernstein correspondence \cite{BeBe81}.
\begin{fact}\label{fact:BeilinsonBernstein}
Let $\lambda$ be a character of $\lie{t}$.
\begin{enumerate}[(i)]
\item The homomorphism $\univ{g}\rightarrow \ntDalg{G/B,\lambda}(=\Gamma(\mathscr{D}_{G/B,\lambda}))$ is surjective and its kernel is equal to the minimal primitive ideal with infinitesimal character $\lambda-\rho$.
\item If $\lambda - \rho$ is anti-dominant, then any quasi-coherent $\mathscr{D}_{G/B,\lambda}$-module $\mathcal{M}$ is acyclic, i.e.\ $H^i(G/B, \mathcal{M})=0$ for any $i > 0$.
Moreover, there exists a full subcategory $\mathrm{Mod}_{qc}^e(\mathscr{D}_{G/B, \lambda})$ of $\mathrm{Mod}_{qc}(\mathscr{D}_{G/B,\lambda})$ such that
the global section functor
\begin{align*}
\Gamma\colon \mathrm{Mod}_{qc}^e(\mathscr{D}_{G/B,\lambda}) \rightarrow \mathrm{Mod}(\ntDalg{G/B,\lambda})
\end{align*}
gives an equivalence of categories.
\item If $\lambda - \rho$ is regular and anti-dominant, then the global section functor $\Gamma\colon \mathrm{Mod}_{qc}(\mathscr{D}_{G/B,\lambda}) \rightarrow \mathrm{Mod}(\ntDalg{G/B,\lambda})$
gives an equivalence of categories.
\end{enumerate}
\end{fact}
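As an illustration of the fact in the smallest case (a standard example, recalled here with a normalization that may differ from other sources): for $G=\mathrm{SL}_2$ we have $G/B\simeq \mathbb{P}^1$, and identifying characters of $\lie{t}$ with complex numbers so that $\rho$ corresponds to $1$, part (i) becomes
\begin{align*}
\mathcal{U}(\lie{sl}_2)/\mathcal{I}_{\lambda-\rho} \;\simeq\; \Gamma(\mathscr{D}_{\mathbb{P}^1,\lambda}),
\end{align*}
where the minimal primitive ideal $\mathcal{I}_{\lambda-\rho}$ is generated by the Casimir element minus the scalar by which it acts on modules with infinitesimal character $\lambda-\rho$.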
Motivated by the Beilinson--Bernstein correspondence and the definition of uniformly bounded family of $\mathscr{D}$-modules, we introduce the following definition.
\begin{definition}
Let $(V_i)_{i \in I}$ be a family of $\lie{g}$-modules.
We say that $(V_i)_{i \in I}$ is \define{uniformly bounded} if
the following two conditions hold.
\begin{enumerate}[(i)]
\item The length of $V_i$ is bounded by a constant independent of $i \in I$.
\item There exist a family $(\lambda(r))_{r \in R}$ of anti-dominant weights of $\lie{t}$ and a family $\mathcal{N} = (\mathcal{N}_r)_{r \in R} \in \mathrm{Mod}_{ub}(\mathscr{D}_{G/B, \lambda(r)+\rho}, \mathcal{B}(G/B,G))$ (see \ref{sect:BornologyPrincipalBundle}) such that any composition factor of any $V_i$ is isomorphic to some $\Gamma(\mathcal{N}_r)$.
\end{enumerate}
We say that a family of $(\lie{g}, K)$-modules of a pair $(\lie{g}, K)$ is \define{uniformly bounded} if it is a uniformly bounded family of $\lie{g}$-modules.
\end{definition}
Uniform boundedness is preserved by several operations on $\lie{g}$-modules.
The following proposition is a direct consequence of Proposition \ref{prop:FundamentalBoundeFamily} and Theorem \ref{thm:TensorUniformlyBounded}.
\begin{proposition}\label{prop:FundamentailUniformlyBoundedGH}
Let $\lie{g}$ and $\lie{h}$ be complex reductive Lie algebras.
\begin{enumerate}[(i)]
\item For a short exact sequence $0\rightarrow L \rightarrow M \rightarrow N \rightarrow 0$ in $\prod_{i \in I} \mathrm{Mod}(\lie{g})$,
both $L$ and $N$ are uniformly bounded if and only if so is $M$.
\item For any family $(\lambda_i)_{i \in I}$ of characters of $\lie{g}$
and uniformly bounded family $(V_j)_{j \in J}$ of $\lie{g}$-modules,
$(V_j\otimes \mathbb{C}_{\lambda_i})_{i \in I, j \in J}$ is also uniformly bounded.
\item For any set $\Phi$ of inner automorphisms of $\lie{g}$ and uniformly bounded family $(V_j)_{j \in J}$ of $\lie{g}$-modules, $(V_j^{\varphi})_{\varphi \in \Phi, j\in J}$ is also uniformly bounded.
Here $V_j^{\varphi}$ is the $\lie{g}$-module defined by the composition $\lie{g}\xrightarrow{\varphi} \lie{g}\rightarrow \textup{End}_{\mathbb{C}}(V_j)$.
\item \label{enum:prop:UniformlyBoundedGH} Let $(V_i)_{i \in I}$ (resp.\ $(W_i)_{i \in I}$) be a family of $\lie{g}$-modules (resp.\ $\lie{h}$-modules).
Then $(V_i\boxtimes W_i)_{i \in I}$ is a uniformly bounded family of $(\lie{g\oplus h})$-modules if and only if both $(V_i)_{i \in I}$ and $(W_i)_{i \in I}$ are uniformly bounded.
\end{enumerate}
\end{proposition}
\begin{proposition}\label{prop:FundamentalUniformlyBoundedGmod}
Let $(\lie{g}, K)$ be a pair and $M$ a reductive subgroup of $K$.
Let $(V_i)_{i \in I}$ be a uniformly bounded family of $(\lie{g}, M)$-modules.
\begin{enumerate}[(i)]
\item\label{enum:FundamentalZuckerman} $(\Dzuck{K}{M}{j}(V_i))_{i \in I, j \in \mathbb{Z}}$ is uniformly bounded.
\item If $K$ is reductive, there exists a constant $C$ such that for any $i \in I$ and $j \in \mathbb{Z}$, we have
\begin{align*}
\mathrm{Len}_{\univ{g}^K}(H^j(\lie{k}, M; V_i)) \leq C.
\end{align*}
\item (ii) is also true if $(V_i)_{i \in I}$ is a uniformly bounded family of $(\lie{g}, \lie{m})$-modules and $H^j(\lie{k}, M; \cdot)$ is replaced by $H^j(\lie{k}, \lie{m}; \cdot)$.
\item (ii) is also true if we replace $M$ with its covering $\widetilde{M}$.
\end{enumerate}
\end{proposition}
\begin{proof}
By definition, we can reduce the assertions to similar results about $\mathscr{D}$-modules on the flag variety.
(i) follows from Theorem \ref{thm:UniformlyBoundedFamilyBernstein}.
Taking the $K$-invariant part of (i), (ii) follows from Corollary \ref{cor:TorAndBernLength}.
By the definition of the relative Lie algebra cohomology, we have
\begin{align*}
H^j(\lie{k}, \lie{m}; V_i) = H^j(\lie{k}, M_0; (V_i)_{M_0}),
\end{align*}
where $(V_i)_{M_0}$ is the sum of all $\lie{m}$-submodules in $V_i$ that can lift to $M_0$-modules.
Hence (iii) follows from (ii).
(iv) can be reduced to (iii) by
\begin{align*}
H^j(\lie{k}, \widetilde{M}; V_i) = H^j(\lie{k}, \lie{m}; V_i)^{\widetilde{M}/\widetilde{M}_0}.
\end{align*}
We have proved the proposition.
\end{proof}
Let $(\lie{g}, K)$ be a pair.
Then $K$ acts on the flag variety of $\lie{g}$, which is isomorphic to $G/B$.
Assume that $K$ has finitely many orbits in $G/B$.
\begin{proposition}\label{prop:TorUniformlyBoundedGmod}
Let $\lie{h}$ be a complex reductive Lie algebra
and $(M_i)_{i \in I}$ a uniformly bounded family of $(\lie{g\oplus h})$-modules.
For any set $\mathcal{F}$ of finite-dimensional $\lie{k}$-modules whose dimensions are bounded, the family $(\mathrm{Tor}^{\univ{k}}_{j}(F, M_i))_{i \in I, j \in \mathbb{Z}, F \in \mathcal{F}}$ is a uniformly bounded family of $\lie{h}$-modules.
Moreover, there exists a constant $C$ such that for any finite-dimensional $\lie{k}$-module $F$, $i \in I$ and $j \in \mathbb{Z}$,
\begin{align*}
\mathrm{Len}_{\lie{h}}(\mathrm{Tor}^{\univ{k}}_{j}(F, M_i)) \leq C\cdot \dim_{\mathbb{C}}(F).
\end{align*}
\end{proposition}
\begin{proof}
The proposition follows from Theorem \ref{thm:UniformlyBoundedFiniteOrbits}.
\end{proof}
\begin{remark}\label{rmk:TorUniformlyBoundedGmod}
If $\lie{h} = 0$, then $\mathrm{Len}_{\lie{h}}(\mathrm{Tor}^{\univ{k}}_{0}(F, M_i))$ is the dimension of $F\otimes_{\univ{k}} M_i$.
Hence we can deduce a kind of finite multiplicity theorems from the proposition.
Replacing $G/B$ by a partial flag variety $G/P$ and $\mathscr{D}_{G/B,\lambda}\otimes_{\univ{k}}F$ by some holonomic $\mathscr{D}$-module,
one can obtain several finite multiplicity theorems.
We postpone the results to the sequel.
\end{remark}
A typical example of uniformly bounded family is a family of Harish-Chandra modules.
\begin{proposition}\label{prop:UniformlyBoundedG-modSpherical}
Any family of $(\lie{g}, \lie{k})$-modules with bounded lengths is uniformly bounded.
In particular, so is any family of $(\lie{g}, K)$-modules with bounded lengths.
\end{proposition}
\begin{proof}
The second assertion follows from the first one because the length of a $(\lie{g}, K)$-module $V$ is bounded by $|K/K_0|\cdot \mathrm{Len}_{\lie{g}}(V)$ by Lemma \ref{lem:GeneralizedPairConnected}.
Take a covering $K'$ of $K_0$ such that $[K'/U_{K'}, K'/U_{K'}]$ is simply-connected, where $U_{K'}$ is the unipotent radical of $K'$.
Then we have a homomorphism $K'\rightarrow K\rightarrow \textup{Aut}(\lie{g})$ and $K'$ has finitely many orbits in $G/B$.
Let $V$ be an irreducible $(\lie{g}, \lie{k})$-module.
We want to realize $V$ as $\Gamma(\mathcal{V})$ for some irreducible $\mathscr{D}$-module $\mathcal{V}$ on $G/B$.
Since the $\lie{k}$-action is locally finite, we can take a finite-dimensional irreducible $\lie{k}$-submodule $F \subset V$.
Take a character $\mu$ of $\lie{k}$ such that the $\lie{k}$-action on $F\otimes \mathbb{C}_\mu$ lifts to a $K'$-action.
Since $V$ is irreducible, the multiplication map $\univ{g}\otimes F \rightarrow V$ is surjective.
Hence the $\lie{k}$-action on $V\otimes \mathbb{C}_\mu$ lifts to a $K'$-action.
Let $\lambda$ be a character of $\lie{t}$ such that $\lambda - \rho$ is anti-dominant and $\lambda - \rho$ is the infinitesimal character of $V$.
We can take an irreducible subquotient $\mathcal{V}$ of $\mathscr{D}_{G/B, \lambda}\otimes_{\univ{g}} V$ such that $\Gamma(\mathcal{V})\simeq V$ (see \cite[Corollary 11.2.6]{HTT08}).
By construction, $\mathcal{V}$ is an irreducible twisted $(\mathscr{D}_{G/B,\lambda}, K')$-module with twist $\mu$.
Since $K'$ has finitely many orbits in $G/B$, the proposition follows from Corollary \ref{cor:UniformlyBoundedIrreducibles}.
\end{proof}
\subsection{Induction of uniformly bounded family}
In this subsection, we will show uniform boundedness of some family of $\lie{g}$-modules.
Let $G$ be a connected reductive algebraic group and $B$ a Borel subgroup of $G$ with unipotent radical $U$.
Put $T:=B/U$.
We denote by $\mathcal{I}_\chi$ the minimal primitive ideal of $\univ{g}$ with infinitesimal character $\chi$.
Let $W_G$ be the Weyl group of $G$.
\begin{proposition}\label{prop:UniformlyBoundedMinimalPrimitive}
The family $(\univ{g}/\mathcal{I}_\chi)_{\chi \in \lie{t}^*/W_G}$ of $(\lie{g}\oplus\lie{g}, \Delta(G))$-modules is uniformly bounded.
In particular, the number of two-sided ideals with a fixed infinitesimal character is bounded by a constant independent of the infinitesimal character.
\end{proposition}
\begin{proof}
$\univ{g}/\mathcal{I}_\chi$ is isomorphic to $\Gamma((\mathscr{D}_{G/B,\lambda+\rho}\boxtimes \mathscr{D}_{G/B, \lambda'+\rho})\otimes_{\mathcal{U}(\Delta(\lie{g}))} \mathbb{C})$,
where $\lambda$ (resp.\ $\lambda'$) is an anti-dominant weight in $\chi$ (resp.\ $-\chi$).
Since $\Delta(G)$ has finitely many orbits in $G/B\times G/B$, the assertion follows from Corollary \ref{cor:UniformlyBoundedTor}.
\end{proof}
\begin{remark}
The structure of the $(\lie{g}\oplus \lie{g}, \Delta(G))$-modules can be reduced to that of Verma modules (see \cite[Section 6]{BeGe80_projective_functor}).
The proposition can be deduced from this and Soergel's theorem \cite[Theorem 11]{So90} (see also Remark \ref{rmk:Soergel}).
\end{remark}
\begin{proposition}\label{prop:LengthVerma}
Let $P$ be a parabolic subgroup of $G$ containing $B$ with unipotent radical $U_P$, and $(M_i)_{i \in I}$ a uniformly bounded family of $\lie{p}/\lie{u}_P$-modules.
Then $(\univ{g}\otimes_{\univ{p}} M_i)_{i \in I}$ is a uniformly bounded family of $\lie{g}$-modules.
In particular, the length of any Verma module is bounded by a constant independent of its highest weight.
\end{proposition}
\begin{proof}
Since $P$ is parabolic, each $\lie{g}$-module $\univ{g}\otimes_{\univ{p}} M_i$ has an infinitesimal character $\chi_i$.
Then we have
\begin{align*}
\univ{g}\otimes_{\univ{p}} M_i \simeq (\univ{g}/\mathcal{I}_{\chi_i}\otimes M_i)\otimes_{\univ{p}} \mathbb{C}.
\end{align*}
$(\univ{g}/\mathcal{I}_{\chi_i}\otimes M_i)_{i\in I}$ is a uniformly bounded family of $(\lie{g\oplus g\oplus p/u}_P)$-modules by Propositions \ref{prop:UniformlyBoundedMinimalPrimitive} and \ref{prop:FundamentailUniformlyBoundedGH} (\ref{enum:prop:UniformlyBoundedGH}).
Since $P$ has finitely many orbits in $G/B\times P/B$, the assertion follows from Proposition \ref{prop:TorUniformlyBoundedGmod}.
\end{proof}
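For instance, for $\lie{g}=\lie{sl}_2$ and $\lie{p}=\lie{b}$ (a standard computation, recalled only as an illustration and with the usual normalization), every Verma module $M(\lambda)$ has length at most $2$: it is irreducible unless $\lambda\in\mathbb{Z}_{\geq 0}$, in which case there is a short exact sequence
\begin{align*}
0\longrightarrow M(-\lambda-2)\longrightarrow M(\lambda)\longrightarrow L(\lambda)\longrightarrow 0
\end{align*}
with $M(-\lambda-2)$ irreducible, so the constant in the second assertion can be taken to be $2$ in this case.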
\begin{remark}\label{rmk:Soergel}
The second assertion is an easy consequence of Soergel's theorem \cite[Theorem 11]{So90}.
In fact, the categorical structure of each block of the BGG category $\mathcal{O}$ depends only on a pair of a Coxeter system and a subgroup of $W_G$ determined by the block,
and the number of such pairs is finite.
\end{remark}
We consider cohomologically induced modules.
Let $(\lie{g}, K)$ be a pair and $\lie{p}$ a parabolic subalgebra of $\lie{g}$.
Take a Levi subalgebra $\lie{l}$ of $\lie{p}$ and a reductive subgroup $K_L$ of $K$ whose Lie algebra is contained in $\lie{l}\cap \lie{k}$.
Assume that $K_L$ normalizes $\lie{p}$ and $\lie{l}$.
We consider $\lie{l}$-modules as $\lie{p}$-modules through the natural surjection $\lie{p}\rightarrow \lie{l}$.
\begin{theorem}\label{thm:uniformlyBoundedLengthCohInd}
Let $(V_i)_{i \in I}$ be a uniformly bounded family of $(\lie{l}, K_L)$-modules, e.g.\ a family of irreducible Harish-Chandra modules.
(See Proposition \ref{prop:UniformlyBoundedG-modSpherical}.)
Then $(\Dzuck{K}{K_L}{j}(\univ{g}\otimes_{\univ{p}}V_i))_{j \in \mathbb{Z}, i\in I}$ is a uniformly bounded family of $(\lie{g}, K)$-modules.
In particular, there exists some constant $C$ such that for any $i \in I$ and $j \in \mathbb{Z}$, we have
\begin{align*}
\mathrm{Len}_{\lie{g}, K}(\Dzuck{K}{K_L}{j}(\univ{g}\otimes_{\univ{p}}V_i)) \leq C.
\end{align*}
\end{theorem}
\begin{proof}
The assertion follows from Propositions \ref{prop:LengthVerma} and \ref{prop:FundamentalUniformlyBoundedGmod} (i).
\end{proof}
It is well-known that a $(\lie{g}, K)$-module cohomologically induced from an irreducible module of
a parabolic subpair
is of finite length (see e.g.\ \cite[Theorem 0.46]{KnVo95_cohomological_induction}).
In addition to this fact, we have shown that the lengths of such modules are bounded.
The following corollary is a special case of Theorem \ref{thm:uniformlyBoundedLengthCohInd} because the underlying Harish-Chandra module of any principal series representation can be realized as a cohomologically induced module.
See \cite[Propositions 11.57 and 11.65]{KnVo95_cohomological_induction}.
\begin{corollary}
Let $G_\mathbb{R}$ be a real reductive Lie group.
Then there exists a constant $C$ such that the length of any principal series representation of $G_\mathbb{R}$ is bounded by $C$.
\end{corollary}
\begin{remark}
The corollary has been proved in \cite[Proposition 4.1]{KoOs13} by using the theory of minimal $K$-types and the translation principle.
\end{remark}
\subsection{\texorpdfstring{$\univ{g}^{G'}$}{U(g)G'}-modules}
For applications to the branching problem and harmonic analysis, we shall summarize several consequences of the results so far about uniformly bounded families.
Let $G$ be a reductive algebraic group and $G'$ a reductive subgroup of $G$.
\begin{theorem}\label{thm:uniformlyBoundedLengthGeneral}
Let $(V_{i})_{i \in I}$ and $(V'_{i})_{i \in I}$ be uniformly bounded families of $\lie{g}$-modules and $\lie{g'}$-modules, respectively.
Then there exists some constant $C$ such that for any $i \in I$ and $j \in \mathbb{N}$, we have
\begin{align*}
\mathrm{Len}_{\univ{g}^{G'}}(\mathrm{Tor}^{\univ{g'}}_j(V_i, V'_i)) \leq C.
\end{align*}
\end{theorem}
\begin{proof}
By Corollary \ref{cor:TorAndBernLength} and Proposition \ref{prop:FundamentalUniformlyBoundedGmod} (ii) for $K=\Delta(G')$ and $M=\set{e}$, there is a constant $C$ such that for any $i \in I$ and $j \in \mathbb{N}$,
\begin{align*}
\mathrm{Len}_{\univ{g}^{G'}}(H^j(\lie{g'}; V_i \otimes V'_i)) \leq C.
\end{align*}
Put $n = \dim_{\mathbb{C}}(\lie{g'})$.
By the Poincar\'e duality (Fact \ref{fact:PoincareDuality}), we have
\begin{align*}
H^j(\lie{g'}; V_i \otimes V'_i) \simeq H_{n-j}(\lie{g'}; V_i\otimes V'_i)
\simeq \mathrm{Tor}^{\univ{g'}}_{n-j}(V_i, V'_i).
\end{align*}
Since these isomorphisms are natural in $V_i$ and $V'_i$, the isomorphisms are $\univ{g}^{G'}$-homomorphisms.
We have shown the theorem.
\end{proof}
\begin{corollary}\label{cor:uniformlyBoundedLengthN}
Let $\lie{b'}$ be a Borel subalgebra of $\lie{g'}$ and $(V_i)_{i \in I}$ a uniformly bounded family of $\lie{g}$-modules.
Then there exists some constant $C$ such that for any character $\lambda$ of $\lie{b'}$, $j \in \mathbb{Z}$ and $i \in I$, we have
\begin{align*}
\mathrm{Len}_{\univ{g}^{G'}}(\mathrm{Tor}^{\univ{b'}}_{j}(V_i, \mathbb{C}_{\lambda})) \leq C.
\end{align*}
Moreover, the constant $C$ can be chosen independently of $\lie{b'}$.
\end{corollary}
\begin{proof}
Since $\univ{g'}$ is a free right $\univ{b'}$-module, there is a natural isomorphism
\begin{align*}
\mathrm{Tor}^{\univ{b'}}_{j}(V_i, \mathbb{C}_{\lambda}) \simeq \mathrm{Tor}^{\univ{g'}}_j(V_i, \univ{g'}\otimes_{\univ{b'}}\mathbb{C}_\lambda)
\end{align*}
of $\univ{g}^{G'}$-modules.
The family $(\univ{g'}\otimes_{\univ{b'}}\mathbb{C}_{\lambda})_{\lambda, \lie{b'}}$ is uniformly bounded by Proposition \ref{prop:LengthVerma} and Proposition \ref{prop:FundamentailUniformlyBoundedGH} (iii).
Hence the corollary follows from Theorem \ref{thm:uniformlyBoundedLengthGeneral}.
\end{proof}
\begin{corollary}\label{cor:uniformlyBoundedLengthInfChar}
Let $(V_i)_{i \in I}$ be a uniformly bounded family of $\lie{g}$-modules.
There exists some constant $C$ such that for any maximal ideal $\mathcal{I}$ of $\univcent{g'}$, $i \in I$ and $j \in \mathbb{Z}$, we have
\begin{align*}
\mathrm{Len}_{\univ{g}^{G'}\otimes \univ{g'}}(\mathrm{Tor}_j^{\univcent{g'}}(\univcent{g'}/\mathcal{I}, V_i)) \leq C.
\end{align*}
\end{corollary}
\begin{proof}
Since $\univ{g'}$ is a free $\univcent{g'}$-module,
we have a natural isomorphism
\begin{align*}
\mathrm{Tor}_j^{\univcent{g'}}(\univcent{g'}/\mathcal{I}, V_i)\simeq
\mathrm{Tor}_j^{\univ{g'}}(\univ{g'}/\mathcal{I}\univ{g'}, V_i).
\end{align*}
Hence the corollary follows from Proposition \ref{prop:UniformlyBoundedMinimalPrimitive} and Theorem \ref{thm:uniformlyBoundedLengthGeneral}.
\end{proof}
Retain the notation $G$ and $G'$ as above.
Let $(\lie{g}, K)$ and $(\lie{g'}, K')$ be pairs (see Definition \ref{def:pair}).
\begin{corollary}\label{cor:uniformlyBoundedLengthGKGK}
Assume that $K$ and $K'$ have finitely many orbits in the flag varieties of $\lie{g}$ and $\lie{g'}$, respectively.
Then there exists some constant $C$ such that
for any $i \in \mathbb{N}$, irreducible $(\lie{g'}, K')$-module $V'$ and irreducible $(\lie{g}, K)$-module $V$, we have
\begin{align*}
\mathrm{Len}_{\univ{g}^{G'}}(\mathrm{Tor}^{\univ{g'}}_i(V, V')) \leq C.
\end{align*}
\end{corollary}
\begin{proof}
By Proposition \ref{prop:UniformlyBoundedG-modSpherical}, any family of irreducible $(\lie{g}, K)$-modules or irreducible $(\lie{g'}, K')$-modules is uniformly bounded.
Hence the assertion follows from Theorem \ref{thm:uniformlyBoundedLengthGeneral}.
\end{proof}
\subsection{Euler--Poincar\'e characteristic}
We shall define the Euler--Poincar\'e characteristic in the setting of the branching problem and harmonic analysis.
Retain the notation $G$, $G'$, $K$ and $K'$ in the previous subsection.
Assume that $K'$ is reductive and contained in $K$, and $\textup{Ad}_{\lie{g}}(K')$ is contained in $\textup{Ad}_{\lie{g}}(G')$.
\begin{theorem}\label{thm:uniformlyBoundedLengthGeneralGK}
Let $(V_{i})_{i \in I}$ (resp. $(V'_{i})_{i \in I}$) be a uniformly bounded family of $(\lie{g}, K)$-modules (resp. $(\lie{g'}, K')$-modules).
Then there exists some constant $C$ such that for any $i \in I$ and $j \in \mathbb{N}$, we have
\begin{align*}
\mathrm{Len}_{\univ{g}^{G'}}(H_j(\lie{g'}, K'; V_i \otimes V'_i)) \leq C.
\end{align*}
In particular, the Euler--Poincar\'e characteristic
\begin{align*}
\mathrm{EP}(V_i, V'_i):=\sum_{j}(-1)^j H_j(\lie{g'}, K'; V_i\otimes V'_i)
\end{align*}
is well-defined as an element of the Grothendieck group of the category
of $\univ{g}^{G'}$-modules of finite length.
\end{theorem}
\begin{proof}
Almost all of the proof is the same as that of Theorem \ref{thm:uniformlyBoundedLengthGeneral}; we only note the differences.
In this setting, the Poincar\'e duality (Fact \ref{fact:PoincareDuality}) is written as
\begin{align*}
H^{n-j}(\lie{g'}, K'; V_i \otimes V'_i \otimes \wedge^n (\lie{g'}/\lie{k'})) \simeq H_j(\lie{g'}, K'; V_i \otimes V'_i),
\end{align*}
where $n = \dim_{\mathbb{C}}(\lie{g'}/\lie{k'})$ and the $\lie{g'}$-action on $\wedge^n (\lie{g'}/\lie{k'})$ is trivial.
Hence the twisting by $\wedge^n (\lie{g'}/\lie{k'})$ does not affect the action of $\univ{g}^{G'}$.
Therefore the theorem follows from Proposition \ref{prop:FundamentalUniformlyBoundedGmod} (iv).
\end{proof}
\begin{remark}
It is clear that the notion of uniformly bounded families is not needed for the well-definedness of the Euler--Poincar\'e characteristic.
In fact, only the holonomicity of the modules is needed.
\end{remark}
\begin{remark}
$H_i(\lie{g'}, K'; V\otimes V')^*$ is isomorphic to $\mathrm{Ext}^i_{\lie{g'}, K'}(V, (V')^*_{K'})$ as a $\univ{g}^{G'}$-module (see \cite[Corollary 3.2]{KnVo95_cohomological_induction}).
If $\mathrm{Ext}^i_{\lie{g'}, K'}(V, (V')^*_{K'})$ is not finite dimensional, the $\univ{g}^{G'}$-module does not have finite length because it is uncountably infinite dimensional.
\end{remark}
If all $H_i(\lie{g'}, K'; V\otimes V')$ are finite dimensional, we can define the ($\mathbb{Z}$-valued) Euler--Poincar\'e characteristic
\begin{align}
\dim_{\mathbb{C}}\mathrm{EP}(V, V'):=\sum_{i}(-1)^i \dim_{\mathbb{C}}(H_i(\lie{g'}, K'; V\otimes V')). \label{eqn:EP}
\end{align}
The analogous characteristic for $p$-adic groups is studied in \cite{Pr13}, \cite{AiSa17} and \cite{ChSa18}.
Note that $\mathrm{EP}(V, V')$ in those papers corresponds to $\dim_{\mathbb{C}}\mathrm{EP}(V, (V')^*_{K'})$ in our notation.
We give sufficient conditions for the well-definedness of the $\mathbb{Z}$-valued characteristic in the sequel \cite[Corollary 7.17]{Ki20}.
\subsection{Theta lifting}
We apply Theorem \ref{thm:uniformlyBoundedLengthGeneralGK} to the theory of the Howe duality (see \cite{Ho89,Ho89_transcending}).
Let $G_{\mathbb{R}}$ be a double cover of $\textup{Sp}(n, \mathbb{R})$.
Let $(H_{\mathbb{R}}, H'_{\mathbb{R}})$ be a reductive dual pair in $G_{\mathbb{R}}$, i.e.\ $H_{\mathbb{R}} = C_{G_{\mathbb{R}}}(H'_{\mathbb{R}})$ and $H'_{\mathbb{R}} = C_{G_{\mathbb{R}}}(H_{\mathbb{R}})$ hold.
Here $C_{G_\mathbb{R}}(\cdot)$ denotes the centralizer in $G_\mathbb{R}$.
We write $\lieR{g}, \lieR{h}$ and $\lieR{h'}$ for the Lie algebras of $G_\mathbb{R}$, $H_\mathbb{R}$ and $H'_\mathbb{R}$, respectively.
Fix a Cartan involution $\theta$ of $G_\mathbb{R}$ which stabilizes $H_\mathbb{R}$ and $H'_\mathbb{R}$,
and put $K_\mathbb{R}:=G_\mathbb{R}^\theta, K_{H,\mathbb{R}}:=H_\mathbb{R}^\theta$ and $K_{H',\mathbb{R}}:=(H'_\mathbb{R})^\theta$.
Then we have pairs $(\lie{g}, K), (\lie{h}, K_H)$ and $(\lie{h'}, K_{H'})$,
which are the complexifications of $(\lieR{g}, K_\mathbb{R})$, $(\lieR{h}, K_{H,\mathbb{R}})$
and $(\lieR{h'}, K_{H',\mathbb{R}})$, respectively.
We write $(\omega, V)$ for the underlying Harish-Chandra module of the Segal--Shale--Weil representation of $G_{\mathbb{R}}$.
Then, by classical invariant theory, we have $\omega(\univ{g})^{H} = \omega(\univ{h'})$.
Here $H$ is the centralizer in $\textup{Sp}(n,\mathbb{C})$ of the image of $H'_\mathbb{R}$ under the covering map $G_\mathbb{R}\rightarrow \textup{Sp}(n,\mathbb{R})$.
For an irreducible $(\lie{h}, K_H)$-module $V'$, we set
\begin{align*}
\Theta_i(V') := H_i(\lie{h}, K_H; V\otimes \dual{V'}),
\end{align*}
where $\dual{V'}$ is the space of all $K_H$-finite vectors in $(V')^*$.
Then $\Theta_i(V')$ is a $(\lie{h'}, K_{H'})$-module.
Let $\mathcal{R}(\lie{h}, K_H, \omega)$ be the set of equivalence classes of irreducible $(\lie{h}, K_H)$-modules $V'$ such that $\Theta_0(V')\neq 0$.
\begin{fact}[R. Howe {\cite[Theorem 2.1]{Ho89_transcending}}]
For any $V' \in \mathcal{R}(\lie{h}, K_H, \omega)$, $\Theta_0(V')$ is of finite length and has a unique irreducible quotient $\theta(V')$.
The correspondence $\mathcal{R}(\lie{h}, K_H, \omega) \ni V'\mapsto \theta(V') \in \mathcal{R}(\lie{h'}, K_{H'}, \omega)$ is bijective.
\end{fact}
For any $i \in \mathbb{N}$, $\Theta_i(V')$ is of finite length by $\omega(\univ{g})^{H} = \omega(\univ{h'})$ and Theorem \ref{thm:uniformlyBoundedLengthGeneralGK}.
More precisely, the following theorem holds.
\begin{theorem}\label{thm:ThetaLift}
Let $V'$ be an irreducible $(\lie{h}, K_H)$-module.
Then there exists some constant $C$ independent of $V'$ such that
\begin{align*}
\mathrm{Len}_{\lie{h'}, K_{H'}}(\Theta_i(V')) \leq C
\end{align*}
for any $i\in \mathbb{N}$.
In particular, as an element of the Grothendieck group of the category of $(\lie{h'}, K_{H'})$-modules of finite length,
the Euler--Poincar\'e characteristic
\begin{align*}
\mathrm{EP}(V, \dual{V'}) = \sum_i (-1)^i\Theta_i(V')
\end{align*}
is well-defined.
\end{theorem}
The well-definedness of the Euler--Poincar\'e characteristic of the theta lifting for $p$-adic groups is proved and studied in \cite[Proposition 1.1]{APS17}.
\subsection{Uniformly bounded family in branching laws}
Let $G$ be a connected reductive algebraic group and $G'$ a connected reductive subgroup of $G$.
Using the restriction of modules, we can construct a uniformly bounded family of $\lie{g'}$-modules from a uniformly bounded family of $\lie{g}$-modules.
We consider the embedding $\iota\colon G'\rightarrow G'\times G'\times G$ defined by $\iota(g) = (e, g, g)$.
\begin{lemma}\label{lem:FindimQuot}
Let $V$ be a $\lie{g}$-module and $V'$ an irreducible $\lie{g'}$-module, and set $\mathcal{I}:= \textup{Ann}_{\univcent{g'}}(V')$.
If $0 < \dim_{\mathbb{C}} \textup{Hom}_{\lie{g'}}(V, V') < \infty$, then there exists an irreducible $(\lie{g'\oplus g}, \Delta(G'))$-module $W$ such that $V'\boxtimes W$ is isomorphic to a subquotient of $\Dzuck{\iota(G')}{\set{e}}{n}(\univ{g'}/\mathcal{I}\otimes V)$,
where $n:=\dim_{\mathbb{C}}(G')$.
\end{lemma}
\begin{proof}
Take a basis $\set{\varphi_i}$ of $\textup{Hom}_{\lie{g'}}(V/\mathcal{I} V, V')(\simeq \textup{Hom}_{\lie{g'}}(V, V'))$ and its dual basis $\set{\lambda_i}$ of $\textup{Hom}_{\lie{g'}}(V/\mathcal{I} V, V')^*$.
Since $\textup{Hom}_{\lie{g'}}(V/\mathcal{I} V, V')$ is finite dimensional, we obtain a $\univ{g'}\otimes \univ{g}^{G'}$-module homomorphism
\begin{align*}
V/\mathcal{I} V \rightarrow V' \boxtimes \textup{Hom}_{\lie{g'}}(V/\mathcal{I} V, V')^*
\end{align*}
given by $v\mapsto \sum_i \varphi_i(v)\otimes \lambda_i$.
Hence the $\univ{g'}\otimes \univ{g}^{G'}$-module $V/\mathcal{I} V$ has an irreducible quotient of the form $V'\boxtimes W_0$ for an irreducible $\univ{g}^{G'}$-module $W_0$.
By Fact \ref{fact:PoincareDuality} and Lemma \ref{lem:TorAndBern}, we have
\begin{align*}
\Dzuck{\iota(G')}{\set{e}}{n}(\univ{g'}/\mathcal{I}\otimes V)^{\iota(G')} &\simeq \univ{g'}/\mathcal{I}\otimes_{\univ{g'}} V \\
&\simeq V / \mathcal{I} V
\end{align*}
as $\univ{g'}\otimes \univ{g}^{G'}$-modules.
Hence $V'\boxtimes W_0$ is isomorphic to a quotient of $\Dzuck{\iota(G')}{\set{e}}{n}(\univ{g'}/\mathcal{I}\otimes V)^{\iota(G')}$.
This implies that we can take an irreducible subquotient $X$ of $\Dzuck{\iota(G')}{\set{e}}{n}(\univ{g'}/\mathcal{I}\otimes V)$ such that $X^{\iota(G')} \simeq V'\boxtimes W_0$ (see e.g. \cite[Proposition 3.5.4]{Wa88_real_reductive_I}).
Since $X = \univ{g}X^{\iota(G')}$, the $\lie{g'}$-module $X|_{\lie{g'}}$ is a direct sum of some copies of $V'$.
Hence $X$ is naturally isomorphic to $V'\boxtimes \textup{Hom}_{\lie{g'}}(V', X)$ and the natural $(\lie{g'\oplus g})$-action on $\textup{Hom}_{\lie{g'}}(V', X)$ is irreducible.
We have shown the lemma.
\end{proof}
\begin{remark}
Suppose that $V$ is irreducible.
By the proof, one can see that if the Beilinson--Bernstein localization of $V$ is regular holonomic, then that of $V'$ is also regular holonomic.
\end{remark}
\begin{theorem}\label{thm:BranchingUniformlyBounded}
Let $(V_i)_{i \in I}$ be a uniformly bounded family of $\lie{g}$-modules and $(V'_i)_{i \in I}$ a family of irreducible $\lie{g'}$-modules.
If $0 < \dim_{\mathbb{C}}(\textup{Hom}_{\lie{g'}}(V_i, V'_i)) < \infty$ for any $i \in I$, then $(V'_i)_{i \in I}$ is uniformly bounded.
\end{theorem}
\begin{proof}
Set $\mathcal{I}_i := \textup{Ann}_{\univcent{g'}}(V'_i)$.
Then $(\univ{g'}/\mathcal{I}_i\otimes V_i)_{i \in I}$ is a uniformly bounded family of $(\lie{g'\oplus g'\oplus g})$-modules by Proposition \ref{prop:UniformlyBoundedMinimalPrimitive}.
By Proposition \ref{prop:FundamentalUniformlyBoundedGmod} (\ref{enum:FundamentalZuckerman}), $(\Dzuck{\iota(G')}{\set{e}}{n}(\univ{g'}/\mathcal{I}_i\otimes V_i))_{i \in I}$ is a uniformly bounded family of $(\lie{g'\oplus g'\oplus g}, \iota(G'))$-modules.
Here we set $n := \dim_{\mathbb{C}}(G')$.
By Lemma \ref{lem:FindimQuot}, for each $i \in I$, we can take an irreducible $(\lie{g'\oplus g}, \Delta(G'))$-module $W_i$ such that $V'_i\boxtimes W_i$ is a subquotient of $\Dzuck{\iota(G')}{\set{e}}{n}(\univ{g'}/\mathcal{I}_i\otimes V_{i})$.
This implies that $(V'_i\boxtimes W_i)_{i \in I}$ is a uniformly bounded family of $(\lie{g'\oplus g'\oplus g}, \iota(G'))$-modules.
By Proposition \ref{prop:FundamentailUniformlyBoundedGH} (\ref{enum:prop:UniformlyBoundedGH}), the family $(V'_i)_{i \in I}$ is uniformly bounded.
\end{proof}
\subsection{Tensoring with finite-dimensional modules}
Let $G$ be a connected reductive algebraic group.
We shall show that uniform boundedness is preserved by tensoring with finite-dimensional modules.
In particular, it is preserved by the translation functors.
\begin{lemma}\label{lem:TranslationLength}
Let $(V_i)_{i \in I}$ be a uniformly bounded family of $\lie{g}$-modules and $F$ a finite-dimensional $\lie{g}$-module.
Then there exists a constant $C > 0$ independent of $F$ such that
\begin{align*}
\mathrm{Len}_{\lie{g}}(V_i\otimes F) \leq C\cdot \dim_{\mathbb{C}}(F)^2
\end{align*}
for any $i \in I$.
\end{lemma}
\begin{proof}
Clearly, we can assume that $F$ is completely reducible.
Since the lengths of all $V_i$ are bounded by a constant independent of $i \in I$, we can also assume that all $V_i$ are irreducible.
Set $n:=\dim_{\mathbb{C}}(F)$.
Fix $i \in I$.
By Kostant's theorem \cite[Theorem 7.133]{KnVo95_cohomological_induction}, $V_i\otimes F$ is a direct sum of finitely many submodules $W_1, W_2, \ldots, W_m$ with generalized infinitesimal characters $\chi_1, \chi_2, \ldots, \chi_m$, respectively.
More precisely, we have $m \leq n$ and $\mathcal{I}_{j}^{|W_{\lie{g}}|} W_j = 0$ for any $1\leq j \leq m$,
where $\mathcal{I}_{j}$ is the maximal ideal of $\univcent{g}$ corresponding to $\chi_j$ and $W_{\lie{g}}$ is the Weyl group of $\lie{g}$.
There is a $\lie{g}$-module surjection
\begin{align*}
\mathcal{I}_j^{k} \otimes (W_j/\mathcal{I}_j W_j) \twoheadrightarrow \mathcal{I}_j^k W_j / \mathcal{I}_j^{k+1} W_j
\end{align*}
for any $k \in \mathbb{N}$, and $\mathcal{I}_j^{k}$ is generated by $r^k$ elements as a $\univcent{g}$-module.
Here $r$ is the rank of $\lie{g}$.
Hence we have
\begin{align}
\mathrm{Len}_{\lie{g}}(V_i\otimes F) &= \sum_j \mathrm{Len}_{\lie{g}}(W_j) \nonumber\\
&\leq C' \cdot \sum_j \mathrm{Len}_{\lie{g}}(W_j/\mathcal{I}_j W_j) \nonumber\\
&=C' \cdot \sum_j \mathrm{Len}_{\lie{g}}((V_i\otimes F)/\mathcal{I}_j (V_i\otimes F)), \label{eqn:TranslationLengthInf}
\end{align}
where $C'$ is a constant depending only on $|W_{\lie{g}}|$ and $r$.
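One admissible choice of $C'$ can be read off from the filtration of $W_j$ by powers of $\mathcal{I}_j$; the following computation is a sketch using the surjection above together with $\mathcal{I}_j^{|W_{\lie{g}}|} W_j = 0$:
\begin{align*}
\mathrm{Len}_{\lie{g}}(W_j) = \sum_{k=0}^{|W_{\lie{g}}|-1} \mathrm{Len}_{\lie{g}}(\mathcal{I}_j^k W_j/\mathcal{I}_j^{k+1}W_j)
\leq \Bigl(\sum_{k=0}^{|W_{\lie{g}}|-1} r^k\Bigr)\cdot \mathrm{Len}_{\lie{g}}(W_j/\mathcal{I}_j W_j),
\end{align*}
so one may take $C' = \sum_{k=0}^{|W_{\lie{g}}|-1} r^k$.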
We shall estimate $\mathrm{Len}_{\lie{g}}((V_i\otimes F)/\mathcal{I}_j (V_i\otimes F))$.
By Corollary \ref{cor:uniformlyBoundedLengthInfChar}, there exists a constant $C''$ depending only on the family $(V_i)_{i \in I}$ such that
\begin{align}
\mathrm{Len}_{\univ{g}\otimes \univ{g\oplus g}^{\Delta(G)}}((V_i\otimes F)/\mathcal{I}_j (V_i\otimes F)) \leq C'' \label{eqn:TranslationLength}
\end{align}
for any $j$.
Here the action of $\univ{g\oplus g}^{\Delta(G)}$ on $V_i\otimes F$ factors through
\begin{align*}
(\univ{g}/\textup{Ann}_{\univ{g}}(V_i)\otimes \textup{End}_{\mathbb{C}}(F))^{\Delta(G)}.
\end{align*}
Take a maximal torus $T$ of $G$.
Since $\univ{g}/\textup{Ann}_{\univ{g}}(V_i)$ is isomorphic to a submodule of $\rring{G/T}$ as a $G$-module (see \cite[Theorem 12.4.2]{GoWa09}), we have
\begin{align*}
\dim_{\mathbb{C}}((\univ{g}/\textup{Ann}_{\univ{g}}(V_i)\otimes \textup{End}_{\mathbb{C}}(F))^{\Delta(G)}) \leq \dim_{\mathbb{C}}(\textup{End}_{T}(F)) \leq \dim_{\mathbb{C}}(F)^2.
\end{align*}
In particular, any irreducible module $M$ of $(\univ{g}/\textup{Ann}_{\univ{g}}(V_i)\otimes \textup{End}_{\mathbb{C}}(F))^{\Delta(G)}$ satisfies $\dim_{\mathbb{C}}(M) \leq \dim_{\mathbb{C}}(F)$: the action map onto $\textup{End}_{\mathbb{C}}(M)$ is surjective by the Jacobson density theorem, so $\dim_{\mathbb{C}}(M)^2 \leq \dim_{\mathbb{C}}(F)^2$.
By \eqref{eqn:TranslationLength}, we have
\begin{align*}
\mathrm{Len}_{\lie{g}}((V_i\otimes F)/\mathcal{I}_j (V_i\otimes F)) \leq C'' \cdot \dim_{\mathbb{C}}(F).
\end{align*}
Combining this with \eqref{eqn:TranslationLengthInf}, we obtain
\begin{equation*}
\mathrm{Len}_{\lie{g}}(V_i\otimes F) \leq C'C''\cdot \dim_{\mathbb{C}}(F)^2. \qedhere
\end{equation*}
\end{proof}
One can prove and refine Lemma \ref{lem:TranslationLength} using twisting of $\mathscr{D}$-modules on the flag variety of $\lie{g}$ or the theory of projective functors \cite{BeGe80_projective_functor}.
For $(\lie{g}, K)$-modules, a more precise estimate is known \cite[Proposition 5.4.1 and its proof]{Ko08}.
\begin{theorem}\label{thm:UniformlyBoundedTranslation}
Let $(V_i)_{i \in I}$ be a uniformly bounded family of $\lie{g}$-modules and $(F_j)_{j \in J}$ a family of finite-dimensional $\lie{g}$-modules with bounded dimensions.
Then $(V_i\otimes F_j)_{i\in I, j\in J}$ is a uniformly bounded family of $\lie{g}$-modules.
\end{theorem}
\begin{proof}
For $i\in I$ and $j\in J$, let $R_{ij}$ be the set of all composition factors of $V_i\otimes F_j$.
By Lemma \ref{lem:TranslationLength}, the lengths of all $V_i\otimes F_j$ are bounded by a constant independent of $i \in I$ and $j \in J$.
Hence it suffices to show that the family $(W)_{W \in R_{ij}, i\in I, j\in J}$ is uniformly bounded.
As we have seen in the proof of Lemma \ref{lem:TranslationLength}, any element of $R_{ij}$ is a subquotient of $(V_i\otimes F_j)/\mathcal{I}(V_i\otimes F_j)$ for a maximal ideal $\mathcal{I}$ of $\univcent{g}$.
By Theorem \ref{thm:BranchingUniformlyBounded}, the family $(W)_{W \in R_{ij}, i\in I, j\in J}$ is uniformly bounded.
Note that although we have proved Theorem \ref{thm:BranchingUniformlyBounded} for a family of irreducible quotients, the proof also works for a family of irreducible subquotients under some finiteness assumption.
\end{proof}
\subsection{Category of \texorpdfstring{$(\lie{g}, \lie{k})$}{(g,k)}-modules}
Let $G$ be a connected reductive algebraic group and $K$ a finite covering of a connected reductive subgroup of $G$.
Suppose that $[K, K]$ is simply-connected.
We denote by $\mathcal{C}(\lie{g}, \lie{k})$ the full subcategory of $\mathrm{Mod}(\lie{g})$ whose object is
\begin{enumerate}[(i)]
\item of finite length,
\item locally finite and completely reducible as a $\lie{k}$-module, and
\item $\lie{k}$-admissible, i.e.\ any $\lie{k}$-isotypic component is finite dimensional.
\end{enumerate}
Such a module is called a generalized Harish-Chandra module by I. Penkov and G. Zuckerman (see e.g.\ \cite{PeZu14}).
We write $\mathcal{C}_\chi(\lie{g}, \lie{k})$ for the full subcategory of $\mathcal{C}(\lie{g}, \lie{k})$ whose object has the infinitesimal character $\chi$.
In this subsection, we study the category $\mathcal{C}_{\chi}(\lie{g}, \lie{k})$.
It is related to the branching problem and harmonic analysis because the algebra $\univ{g}^\lie{k}$ roughly controls multiplicities and its modules can be obtained from the $\Delta(K)$-invariant part of $(\lie{g\oplus k}, \Delta(K))$-modules.
See Theorem \ref{thm:uniformlyBoundedLengthGeneral}.
\begin{theorem}\label{thm:twosidedidal}
Let $\mathcal{I}$ (resp.\ $\mathcal{J}$) be a maximal ideal of $\univcent{g}$ (resp.\ $\univcent{k}$).
The number of two-sided ideals of $\univ{g}^{K}/(\mathcal{I}+\mathcal{J})\univ{g}^{K}$
is bounded by some constant independent of $\mathcal{I}$ and $\mathcal{J}$.
\end{theorem}
\begin{proof}
We have a $(\univ{g}^{K}, \univ{g}^K)$-bimodule isomorphism
\begin{align*}
\univ{g}^{K}/(\mathcal{I}+\mathcal{J})\univ{g}^{K} &= (\univ{g}/(\mathcal{I}\univ{g}+\mathcal{J}\univ{g}))^{K} \\
&\simeq (\univ{k}/\mathcal{J} \univ{k}\otimes_{\univ{k}}\univ{g}/\mathcal{I}\univ{g})^{K}.
\end{align*}
By Proposition \ref{prop:UniformlyBoundedMinimalPrimitive}, $(\univ{g}/\mathcal{I}\univ{g})_\mathcal{I}$ and $(\univ{k}/\mathcal{J} \univ{k})_\mathcal{J}$
are uniformly bounded families of $(\lie{g\oplus g})$-modules and $(\lie{k\oplus k})$-modules, respectively.
By applying Theorem \ref{thm:uniformlyBoundedLengthGeneral} to $\univ{k}/\mathcal{J} \univ{k}\otimes \univ{g}/\mathcal{I}\univ{g}$, there exists some constant $C$ independent of $\mathcal{I}$ and $\mathcal{J}$ such that
\begin{align*}
\mathrm{Len}_{\univ{g}^K\otimes \univ{g}^K}((\univ{k}/\mathcal{J} \univ{k}\otimes_{\univ{k}}\univ{g}/\mathcal{I}\univ{g})^{K}) \leq C.
\end{align*}
This shows the theorem.
\end{proof}
\begin{remark}
If $\lie{k}$ does not contain any non-trivial ideal of $\lie{g}$,
the center of $\univ{g}^{K}$ is equal to $\univcent{g}\univcent{k}\simeq \univcent{g}\otimes \univcent{k}$ by \cite[Theorem 10.1]{Kn94}.
\end{remark}
Let $\mathcal{I}_\chi$ be the minimal primitive ideal of $\univ{g}$ with infinitesimal character $\chi$.
\begin{theorem}
Any family of objects in $\mathcal{C}(\lie{g}, \lie{k})$ with bounded lengths is a uniformly bounded family.
In particular, for any irreducible object $V \in \mathcal{C}_{\chi}(\lie{g}, \lie{k})$, there exist an anti-dominant $\lambda \in \chi$ and some $\mathcal{M} \in \mathrm{Mod}_h(\mathscr{D}_{G/B,\lambda+\rho})$ such that $V\simeq \Gamma(\mathcal{M})$.
(See Subsection \ref{sect:BBcorrespondence} for the notation.)
\end{theorem}
\begin{remark}
The second assertion has been proved by A. V. Petukhov \cite{Pe12}.
More precisely, one can see from our proof that $\mathcal{M}$ is regular holonomic.
\end{remark}
\begin{proof}
It is enough to show that the family of all irreducible objects in $\mathcal{C}(\lie{g}, \lie{k})$ (modulo isomorphism) is uniformly bounded.
Let $V$ be an irreducible object in $\mathcal{C}(\lie{g}, \lie{k})$ with infinitesimal character $\chi$.
Then we can take a character $\mu$ of $\lie{k}$ such that $V\otimes \mathbb{C}_\mu$
lifts to a $K$-module (see the proof of Proposition \ref{prop:UniformlyBoundedG-modSpherical}).
Since $V\otimes \mathbb{C}_\mu$ is an irreducible $(\lie{k}/[\lie{k},\lie{k}]\oplus \lie{g}, \Delta(K))$-module, replacing $V$ by $V\otimes \mathbb{C}_\mu$ and $(\lie{g}, \lie{k})$ by $(\lie{k}/[\lie{k},\lie{k}]\oplus \lie{g}, \Delta(K))$, we can assume that $V$ is an irreducible $(\lie{g}, K)$-module.
See also the proof of Corollary \ref{cor:UniformlyBoundedIrreducibles}.
We put $W:=\Dzuck{K}{\set{e}}{n}(\univ{g}/\mathcal{I}_\chi)$, where $n = \dim_{\mathbb{C}}(\lie{k})$ and we take the functor $\Dzuck{K}{\set{e}}{n}$ with respect to the left $\lie{k}$-action.
Then for any irreducible $K$-module $F$, we have
\begin{align*}
\textup{Hom}_{K}(F, W) \simeq F^*\otimes_{\univ{k}} \univ{g}/\mathcal{I}_\chi
\end{align*}
by Fact \ref{fact:BernAndF}.
This implies that $W$ is a $(\lie{g\oplus g}, K\times K)$-module.
We denote by $\dual{V}$ the subspace of all $K$-finite vectors in $V^*$; it is the dual of $V$ in $\mathcal{C}(\lie{g}, \lie{k})$.
It is easy to see that $\dual{V}$ is irreducible by the $K$-admissibility.
We shall show that $V\boxtimes \dual{V}$ is a subquotient of $W$.
Fix an irreducible $K$-submodule $F$ of $V$.
Then we have isomorphisms
\begin{align*}
\textup{Hom}_{K\times K}(F\boxtimes F^*, W) &\simeq F^*\otimes_{\univ{k}} \univ{g}/\mathcal{I}_\chi \otimes_{\univ{k}}F \\
&\simeq (\univ{g}/\mathcal{I}_\chi \otimes_{\univ{k}} \textup{End}_{\mathbb{C}}(F))^K\\
&\simeq (\univ{g}/(\mathcal{I}_\chi + \univ{g}\textup{Ann}_{\univ{k}}(F)))^K
\end{align*}
as $\univ{g\oplus g}^{K\times K}$-modules.
Here $\textup{Ann}_{\univ{k}}(F)$ denotes the annihilator of $F$ in $\univ{k}$.
Since $\textup{Hom}_{K}(F, V)$ is a finite-dimensional irreducible $\univ{g}^K$-module (see Fact \ref{fact:UgK-module}), we have a surjection
\begin{align*}
(\univ{g}/(\mathcal{I}_\chi + \univ{g}\textup{Ann}_{\univ{k}}(F)))^K \rightarrow \textup{End}_{\mathbb{C}}(\textup{Hom}_K(F, V))
\end{align*}
by the Jacobson density theorem.
This implies that there is a surjection
\begin{align*}
\textup{Hom}_{K\times K}(F\boxtimes F^*, W) &\rightarrow \textup{Hom}_{K\times K}(F\boxtimes F^*, V\boxtimes \dual{V}) \\
&\simeq \textup{End}_{\mathbb{C}}(\textup{Hom}_K(F, V))
\end{align*}
of $\univ{g\oplus g}^{K\times K}$-modules.
By \cite[Proposition 3.5.4]{Wa88_real_reductive_I}, $V\boxtimes \dual{V}$ is isomorphic to a subquotient of $W$.
By Propositions \ref{prop:UniformlyBoundedMinimalPrimitive} and \ref{prop:FundamentalUniformlyBoundedGmod} (i), $(\Dzuck{K}{\set{e}}{n}(\univ{g}/\mathcal{I}_\chi))_{\chi}$ is a uniformly bounded family of $(\lie{g\oplus g}, K\times K)$-modules.
By Proposition \ref{prop:FundamentailUniformlyBoundedGH} (i), the family $(V\boxtimes \dual{V})_{V \in \mathcal{S}}$ is uniformly bounded, where $\mathcal{S}$ is the set of all equivalence classes of irreducible $K$-admissible $(\lie{g}, K)$-modules.
From this and Proposition \ref{prop:FundamentailUniformlyBoundedGH} (\ref{enum:prop:UniformlyBoundedGH}), the family $(V)_{V\in \mathcal{S}}$ is uniformly bounded.
This shows the theorem.
\end{proof}
The proof above also establishes the following corollary.
For a character $\mu$ of $\lie{k}$, we denote by $\mathcal{C}_{\chi, \mu}(\lie{g}, \lie{k})$ the full subcategory of $\mathcal{C}_\chi(\lie{g}, \lie{k})$ whose object $V$
satisfies that $V\otimes \mathbb{C}_\mu$ lifts to a $K$-module.
\begin{corollary}\label{cor:NumberOfIrreducibles}
Let $\mu$ be a character of $\lie{k}$ and $\chi$ an infinitesimal character of $\lie{g}$.
The number of equivalence classes of irreducible objects in $\mathcal{C}_{\chi, \mu}(\lie{g}, \lie{k})$ is bounded by some constant independent of $\mu$ and $\chi$.
\end{corollary}
\begin{remark}
The number of equivalence classes of irreducible objects in $\mathcal{C}_\chi(\lie{g}, \lie{k})$ may be infinite.
The pair $(\lie{g}, \lie{k}) = (\sl(2,\mathbb{C}), \mathfrak{so}(2,\mathbb{C}))$ gives an example.
See \cite[Theorem 1.3.1]{HoTa92}.
\end{remark}
For a $(\lie{g}, \lie{k})$-module $V$ and an irreducible finite-dimensional $\lie{k}$-module $F$, we denote by $V(F)$ the isotypic component with respect to $F$.
Then $V(F)$ is a $\univ{k}\otimes \univ{g}^K$-module.
It is well-known that the $\lie{g}$-module structure on $V$ is related to the $\univ{k}\otimes \univ{g}^K$-module structure on $V(F)$.
See \cite[Lemma 3.5.3]{Wa88_real_reductive_I} and \cite{LeMc73}.
Recall that $\univ{k}$ and $\univ{g}^K$ are noetherian.
\begin{fact}\label{fact:UgK-module}
Let $V$ be a $(\lie{g}, \lie{k})$-module and $F$ an irreducible finite-dimensional $\lie{k}$-module.
For any submodule $W$ of $V(F)$, we have $(\univ{g} W)(F) = W$.
In particular, the length of $V(F)$ is less than or equal to that of $V$,
and $V(F)$ is finitely generated if $V$ is finitely generated.
\end{fact}
\begin{lemma}\label{lem:FiniteUgK}
Let $F$ be an irreducible $K$-module.
Then $\univ{g}(F)$ with respect to the adjoint action is finitely generated as a left/right $\univ{g}^K$-module.
In particular, any finitely generated submodule of the $(\univ{g}^K, \univ{g}^K)$-bimodule $\univ{g}$ is finitely generated as a left/right $\univ{g}^K$-module.
\end{lemma}
\begin{proof}
The left $\univ{g}$-module $F^*\otimes \univ{g}$ is finitely generated.
By \cite[Lemma 2.2]{Ki14}, $(F^*\otimes \univ{g})^K$ is a finitely generated left $\univ{g}^K$-module.
Since $\univ{g}(F)$ is canonically isomorphic to $F\otimes \textup{Hom}_K(F, \univ{g})$ as a $\univ{g}^K$-module, this shows the first assertion.
Any finitely generated submodule of the $(\univ{g}^K, \univ{g}^K)$-bimodule $\univ{g}$ is contained in a finite sum of some $K$-isotypic components.
Since $\univ{g}^K$ is noetherian, the second assertion follows from the first one.
\end{proof}
\begin{lemma}\label{lem:FiniteGK}
Let $V$ be a $(\lie{g}, \lie{k})$-module and $F$ an irreducible finite-dimensional $\lie{k}$-module.
If $V(F)$ is finite dimensional and generates $V$, then $V$ is in $\mathcal{C}(\lie{g}, \lie{k})$.
\end{lemma}
\begin{proof}
Since the multiplication map $\univ{g}\otimes V(F)\rightarrow V$ is surjective,
$V$ is completely reducible as a $\lie{k}$-module.
We shall show that $V$ is $\lie{k}$-admissible and of finite length.
Let $F'$ be an irreducible finite-dimensional $\lie{k}$-module.
We shall show that $V(F')$ is finite dimensional.
Since $V$ is finitely generated, $V(F')$ is finitely generated as a $\univ{g}^K$-module by Fact \ref{fact:UgK-module}.
Since $V$ is generated by $V(F)$, we can take a finite-dimensional subspace $X \subset \univ{g}$ such that $V(F')\subset \univ{g}^K XV(F)$.
By Lemma \ref{lem:FiniteUgK}, there exists a finite-dimensional subspace $X' \subset \univ{g}$ such that $\univ{g}^K X \univ{g}^K = X'\univ{g}^K$.
Then we have
\begin{align*}
V(F') \subset \univ{g}^K XV(F) = X' \univ{g}^K V(F) = X' V(F),
\end{align*}
and hence $V(F')$ is finite dimensional.
Therefore $V$ is $\lie{k}$-admissible.
We shall show that $V$ is of finite length.
Since $V$ is generated by $V(F)$, $\textup{Ann}_{\univcent{g}}(V)$ is of finite codimension in $\univcent{g}$.
Hence $V$ is a finite direct sum of $\lie{g}$-submodules with generalized infinitesimal characters.
We can assume that $V$ has a generalized infinitesimal character $\chi$.
Since $V$ is generated by $V(F)$, there is a character $\mu$ of $\lie{k}$ such that $V\otimes \mathbb{C}_\mu$ lifts to a $K$-module.
Then any irreducible subquotient of $V$ is in $\mathcal{C}_{\chi,\mu}(\lie{g}, \lie{k})$.
By Corollary \ref{cor:NumberOfIrreducibles}, the number of equivalence classes of irreducible objects in $\mathcal{C}_{\chi,\mu}(\lie{g}, \lie{k})$ is finite.
Since $V$ is $\lie{k}$-admissible and noetherian, this shows that $V$ is of finite length.
\end{proof}
Suppose that $0=V_0\subset V_1 \subset V_2 \subset \cdots \subset V_r = V$ is the socle filtration of $V \in \mathcal{C}(\lie{g}, \lie{k})$, that is, each $V_i / V_{i-1}$ is the sum of all irreducible submodules in $V/V_{i-1}$.
The length $r$ is called the Loewy length of $V$.
\begin{theorem}\label{thm:LoewyLength}
The Loewy length of any object in $\mathcal{C}_\chi(\lie{g}, \lie{k})$ is bounded by some constant independent of the object and the infinitesimal character $\chi$.
\end{theorem}
\begin{proof}
We construct projective objects in $\mathcal{C}(\lie{g}, \lie{k})$ using $\univ{g}\otimes_{\univ{k}}F$, which is a projective object in the category of all $(\lie{g}, \lie{k})$-modules whose $\lie{k}$-actions are completely reducible.
Let $F$ be an irreducible finite-dimensional $\lie{k}$-module
and $\chi$ an infinitesimal character of $\lie{g}$.
Put
\begin{align*}
\widetilde{P}_{F,\chi} := \univ{g}/\mathcal{I}_\chi \otimes_{\univ{k}}F.
\end{align*}
Then there is a canonical isomorphism
\begin{align*}
\textup{Hom}_{\lie{k}}(F, \widetilde{P}_{F,\chi}) \simeq (\univ{g}/(\mathcal{I}_\chi + \univ{g}\textup{Ann}_{\univ{k}}(F)))^K
\end{align*}
of $(\univ{g}^K, \univ{g}^K)$-modules.
We regard $\mathcal{A}:=\textup{Hom}_{\lie{k}}(F, \widetilde{P}_{F,\chi})$ as an algebra under this isomorphism.
Let $\mathcal{J}$ be the union of all left ideals of finite codimension in $\mathcal{A}$.
Then $\mathcal{J}$ is also the union of all two-sided ideals of finite codimension.
By Theorem \ref{thm:twosidedidal}, the number of two-sided ideals is finite.
Hence $\mathcal{J}$ is of finite codimension in $\mathcal{A}$.
There is a canonical isomorphism
\begin{align*}
F\otimes \textup{Hom}_{\lie{k}}(F, \widetilde{P}_{F,\chi}) \xrightarrow{\simeq} \widetilde{P}_{F,\chi}(F)
\end{align*}
of $\univ{k}\otimes \univ{g}^K\otimes \univ{g}^K$-modules.
We consider $F\otimes \mathcal{J}$ as a subspace of $\widetilde{P}_{F,\chi}$ by the isomorphism.
Put $\widetilde{J}:=\univ{g}\cdot (F\otimes \mathcal{J})$ and
\begin{align*}
P_{F,\chi} := \widetilde{P}_{F,\chi} / \widetilde{J}.
\end{align*}
Since $\widetilde{J}(F) = F\otimes \mathcal{J}$ by Fact \ref{fact:UgK-module}, we have
\begin{align*}
P_{F,\chi}(F) \simeq F\otimes (\textup{Hom}_{\lie{k}}(F, \widetilde{P}_{F,\chi}) / \mathcal{J}).
\end{align*}
Hence $P_{F,\chi}$ is generated by the finite-dimensional subspace $P_{F,\chi}(F)$.
By Lemma \ref{lem:FiniteGK}, $P_{F,\chi}$ is an object in $\mathcal{C}_\chi(\lie{g}, \lie{k})$.
By construction, $P_{F,\chi}$ is projective in $\mathcal{C}_\chi(\lie{g}, \lie{k})$.
In fact, the image of $F\otimes \mathcal{J}$ under any $\lie{g}$-homomorphism $\widetilde{P}_{F,\chi} \rightarrow V \in \mathcal{C}_\chi(\lie{g}, \lie{k})$ must be zero.
Hence any object in $\mathcal{C}_\chi(\lie{g}, \lie{k})$ is isomorphic to a quotient of a finite direct sum of projective objects of the form $P_{F,\chi}$.
It is enough to bound the Loewy length of $P_{F,\chi}$.
By Proposition \ref{prop:UniformlyBoundedMinimalPrimitive} and Theorem \ref{thm:uniformlyBoundedLengthGeneral}, there is a constant $C$ independent of $\chi$ and $F$ such that
\begin{align*}
\mathrm{Len}_{\univ{g}\otimes \univ{g}^K}(P_{F,\chi}) \leq \mathrm{Len}_{\univ{g}\otimes \univ{g}^K}(\univ{g}/\mathcal{I}_\chi \otimes_{\univ{k}}F) \leq C.
\end{align*}
By the following lemma, the Loewy length of $P_{F,\chi}$ as a $\lie{g}$-module is bounded by $C$.
\end{proof}
\begin{lemma}
Let $\mathcal{A}$ be a $\mathbb{C}$-algebra and $V$ an irreducible $\univ{g}\otimes \mathcal{A}$-module.
If $V$ is in $\mathcal{C}(\lie{g}, \lie{k})$ as a $\lie{g}$-module,
then $V$ is completely reducible as a $\lie{g}$-module.
\end{lemma}
\begin{proof}
Since $V$ has finite length as a $\lie{g}$-module, $V$ has an irreducible $\lie{g}$-submodule $W$.
Since $V$ is irreducible as a $\univ{g}\otimes
\mathcal{A}$-module, we have $V = \mathcal{A}\cdot W$.
This implies that $V$ is a sum of some copies of $W$, and hence $V$ is completely reducible as a $\lie{g}$-module.
\end{proof}
The proof of Theorem \ref{thm:LoewyLength} also establishes the following proposition.
\begin{proposition}\label{prop:EnoughProj}
$\mathcal{C}_\chi(\lie{g}, \lie{k})$ has enough projectives.
\end{proposition}
\section{\texorpdfstring{$\mathscr{D}$}{D}-module and its operations}
The purpose of this section is to summarize fundamental results about $\mathscr{D}$-modules.
\subsection{\texorpdfstring{Twisted $\mathscr{D}$}{D}-module}\label{subsect:DirectImage}
We review algebras of twisted differential operators and their operations.
We refer to \cite{BeBe93}, \cite[Section 1]{KaTa96}, \cite[Sections 3 and 4]{Ka08_dmodule} and \cite{Ka89} for ($G$-equivariant) algebras of twisted differential operators.
In this paper, we denote by $\mathscr{D}_X$ the algebra of non-twisted (local) differential operators on a smooth variety $X$.
Any variety in this paper is assumed to be quasi-projective.
Let $X$ be a (quasi-projective) smooth variety over $\mathbb{C}$.
Let $i_X$ be the standard homomorphism $\rsheaf{X}\rightarrow \mathscr{D}_X$.
There are several definitions of algebras of twisted differential operators on $X$.
We adopt a definition in which the algebras are locally trivial in the \'etale topology.
\begin{definition}
We say that a sheaf $\mathscr{A}$ of algebras on $X$ equipped with a $\mathbb{C}$-algebra homomorphism $i\colon \rsheaf{X}\rightarrow \mathscr{A}$ is an \define{algebra of twisted differential operators}
if $\mathscr{A}$ is quasi-coherent as a left $\rsheaf{X}$-module and the pair $(\mathscr{A}, i)$ is locally isomorphic to $(\mathscr{D}_X, i_X)$ in the \'etale topology (see \cite[A.1]{HMSW87} and \cite[1.1]{KaTa96}).
We identify $\rsheaf{X}$ with $i(\rsheaf{X})$.
\end{definition}
Then $\mathscr{A}$ admits a canonical filtration called the ordered filtration
such that its associated graded algebra is isomorphic to $p_* \rsheaf{T^*X}$, where $p$ is the natural projection from the cotangent bundle $T^*X$ to $X$.
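A standard example (cf.\ \cite{BeBe93}): for an invertible $\rsheaf{X}$-module $\mathcal{L}$, the sheaf
\begin{align*}
\mathscr{D}_X^{\mathcal{L}} := \mathcal{L}\otimes_{\rsheaf{X}} \mathscr{D}_X \otimes_{\rsheaf{X}} \mathcal{L}^{\vee}
\end{align*}
of differential operators acting on local sections of $\mathcal{L}$, together with the evident homomorphism $\rsheaf{X}\rightarrow \mathscr{D}_X^{\mathcal{L}}$, is an algebra of twisted differential operators: any local trivialization of $\mathcal{L}$ induces a local isomorphism with $(\mathscr{D}_X, i_X)$, so this pair is even Zariski-locally trivial.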
Let $f\colon X\rightarrow Y$ be a morphism of smooth varieties
and $\mathscr{A}_{Y}$ an algebra of twisted differential operators on $Y$.
We set
\begin{align*}
\Omega_{f}&:= f^{-1}\Omega_{Y}^{\vee}\otimes_{f^{-1}\rsheaf{Y}}\Omega_{X},\\
\mathscr{A}_{Y\leftarrow X} &:= f^{-1}\mathscr{A}_{Y}\otimes_{f^{-1}\rsheaf{Y}}\Omega_{f},\\
\mathscr{A}_{X\rightarrow Y} &:= \rsheaf{X} \otimes_{f^{-1}\rsheaf{Y}}f^{-1}\mathscr{A}_Y,
\end{align*}
where $\Omega_{X}$ (resp.\ $\Omega_{Y}$) denotes the canonical sheaf of $X$ (resp.\ $Y$).
$f^{\#}\mathscr{A}_{Y}$ denotes the sheaf of all differential endomorphisms
of the $\rsheaf{X}$-module $\mathscr{A}_{X\rightarrow Y}$ that commute with the right $f^{-1}\mathscr{A}_{Y}$-action.
Then $f^{\#}\mathscr{A}_Y$ is an algebra of twisted differential operators on $X$
and $\mathscr{A}_{Y\leftarrow X}$ is an $(f^{-1}\mathscr{A}_{Y}, f^{\#}\mathscr{A}_Y)$-bimodule.
The direct image of $\mathcal{M}^{\bullet} \in D^b_{qc}(f^{\#}\mathscr{A}_Y)$ is defined by
\begin{align*}
D f_{+}(\mathcal{M}^\bullet) = Rf_*(\mathscr{A}_{Y\leftarrow X}\otimes^L_{f^{\#}\mathscr{A}_Y}\mathcal{M}^{\bullet}) \in D^b_{qc}(\mathscr{A}_Y),
\end{align*}
and the inverse image of $\mathcal{N}^{\bullet} \in D^b_{qc}(\mathscr{A}_Y)$ is defined by
\begin{align*}
Lf^*(\mathcal{N}^\bullet) = \mathscr{A}_{X\rightarrow Y}\otimes^L_{f^{-1}\mathscr{A}_Y}f^{-1}(\mathcal{N}^\bullet)
\in D^b_{qc}(f^\#\mathscr{A}_Y).
\end{align*}
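As a basic illustration of these functors: if $j\colon X\rightarrow Y$ is an open immersion, then $\Omega_j \simeq \rsheaf{X}$ canonically, so $j^{\#}\mathscr{A}_Y \simeq \mathscr{A}_Y|_X$ and $\mathscr{A}_{Y\leftarrow X} \simeq \mathscr{A}_Y|_X$; hence $Lj^*(\mathcal{N}^\bullet) \simeq \mathcal{N}^\bullet|_X$ (in particular $Lj^*$ is exact) and $Dj_+(\mathcal{M}^\bullet) \simeq Rj_*(\mathcal{M}^\bullet)$.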
It is well known that the functors $D f_{+}$ and $Lf^*$ are local on $Y$,
and preserve holonomicity.
\begin{proposition}\label{prop:DirectImageAffineMorphism}
Suppose that $f\colon X\rightarrow Y$ is an affine morphism.
Then $Df_+$ is isomorphic to the left derived functor of $D^0 f_+$.
\end{proposition}
\begin{proof}
Let $\mathcal{M}^{\bullet} \in D^b_{qc}(f^{\#}\mathscr{A}_Y)$.
Then $\mathcal{M}^\bullet$ has a locally free resolution $\mathcal{F}^\bullet$ by \cite[Corollary 1.4.20]{HTT08}.
$\mathscr{A}_{Y\leftarrow X}\otimes_{f^{\#}\mathscr{A}_Y} \mathcal{F}^i$ locally admits a quasi-coherent $\rsheaf{X}$-module structure, and hence it is $f_*$-acyclic.
Therefore we have
\begin{align*}
D f_{+}(\mathcal{M}^\bullet) &= f_*(\mathscr{A}_{Y\leftarrow X}\otimes_{f^{\#}\mathscr{A}_Y} \mathcal{F}^\bullet) \\
&\simeq f_*(\mathscr{A}_{Y\leftarrow X})\otimes_{f_*f^{\#}\mathscr{A}_Y} f_*(\mathcal{F}^\bullet)\\
&\simeq f_*(\mathscr{A}_{Y\leftarrow X})\otimes_{f_*f^{\#}\mathscr{A}_Y}^L f_*(\mathcal{M}^{\bullet}).
\end{align*}
This shows the proposition.
\end{proof}
The following two facts are fundamental.
See \cite[Lemma 1.1.7, Propositions 1.2.3 and 1.2.6]{KaTa96},
and see \cite[Propositions 1.5.11 and 1.5.21, and Theorem 1.7.3]{HTT08} for the non-twisted case.
\begin{fact}\label{fact:FundamentalDmodule}
Let $f\colon X\rightarrow Y$ and $g\colon Y\rightarrow Z$ be morphisms of smooth varieties and $\mathscr{A}_Z$ an algebra of twisted differential operators on $Z$.
Then we have $(g\circ f)^\# \mathscr{A}_Z = f^\# g^\# \mathscr{A}_Z$ and
\begin{enumerate}[(i)]
\item \label{eqn:DirectImage} $Dg_+\circ Df_+ = D(g\circ f)_+$,
\item \label{eqn:InverseImage} $Lf^*\circ Lg^* = L(g\circ f)^*$.
\end{enumerate}
\end{fact}
\begin{fact}[Base change theorem]\label{fact:BaseChange}
Suppose that we have the following cartesian square of smooth varieties:
\begin{align*}
\xymatrix {
Y\times_X Z \ar[r]^-{\widetilde{g}} \ar[d]^-{\widetilde{f}}& Y \ar[d]^-f\\
Z \ar[r]^-g & X.
}
\end{align*}
Let $\mathscr{A}_X$ be an algebra of twisted differential operators.
Then there exists an isomorphism $Lg^*\circ Df_+\simeq D\widetilde{f}_+\circ L\widetilde{g}^*$ of functors.
\end{fact}
\subsection{Picard algebroid}\label{sect:PicardAlgebroid}
We review the notion of Picard algebroids and describe the action of $f^\# \mathscr{A}_Y$ on $\mathscr{A}_{Y\leftarrow X}$ using Picard algebroids.
We refer the reader to \cite[\S 2]{BeBe93}.
Let $Z$ be a smooth variety and $\mathcal{T}_Z$ the tangent sheaf of $Z$.
\begin{definition}[{\cite[1.2 and 2.1.3]{BeBe93}}] \label{def:Picard}
Let $\widetilde{\mathcal{T}}$ be a quasi-coherent $\rsheaf{Z}$-module on $Z$.
$\widetilde{\mathcal{T}}$ is called a \define{Lie algebroid} on $Z$ if
$\widetilde{\mathcal{T}}$ is a sheaf of complex Lie algebras equipped with a $\rsheaf{Z}$-module homomorphism $\sigma\colon \widetilde{\mathcal{T}} \rightarrow \mathcal{T}_Z$ such that
\begin{align*}
[T, fT'] = (\sigma(T)f) T' + f[T, T']
\end{align*}
for any local sections $T, T' \in \widetilde{\mathcal{T}}$ and $f \in \rsheaf{Z}$.
A Lie algebroid $\widetilde{\mathcal{T}}$ on $Z$ is called a \define{Picard algebroid} if
$\sigma\colon\widetilde{\mathcal{T}}\rightarrow \mathcal{T}_Z$ is epimorphic and
there is an isomorphism $i\colon\rsheaf{Z}\rightarrow \ker(\sigma)$ of $\rsheaf{Z}$-modules such that
$[T, i(f)] = \sigma(T)f$ for any local sections $T \in \widetilde{\mathcal{T}}$ and $f \in \rsheaf{Z}$.
\end{definition}
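A basic example: the direct sum $\rsheaf{Z}\oplus \mathcal{T}_Z$, equipped with the second projection as $\sigma$, the first inclusion as $i$, and the bracket
\begin{align*}
[(f, T), (f', T')] = (Tf' - T'f, [T, T']),
\end{align*}
is a Picard algebroid on $Z$; it corresponds to the non-twisted algebra $\mathscr{D}_Z$ via $(f, T)\mapsto f + T$.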
The isomorphism $i$ in the definition is unique.
We identify $\rsheaf{Z}$ with $i(\rsheaf{Z})$.
For an algebra $(\mathscr{A}_Z, i)$ of twisted differential operators on a smooth variety $Z$,
we denote by $\mathcal{P}(\mathscr{A}_Z)$ the sheaf of sections of $\mathscr{A}_Z$ of order at most $1$.
Then $\mathcal{P}(\mathscr{A}_Z)$ is a Picard algebroid on $Z$ equipped with
the homomorphism $i\colon\rsheaf{Z}\rightarrow \mathcal{P}(\mathscr{A}_Z)$.
Since $\mathscr{A}_Z$ is generated by $\mathcal{P}(\mathscr{A}_Z)$, to define an action of $\mathscr{A}_Z$, it is enough to define an action of $\mathcal{P}(\mathscr{A}_Z)$ such that $i(1)$ acts by the identity morphism (\cite[Lemma 2.1.4]{BeBe93}).
We now describe the action of $f^\# \mathscr{A}_Y$ on $\mathscr{A}_{Y\leftarrow X}$.
Let $X, Y, f$ and $\mathscr{A}_Y$ be as in the previous subsection.
We denote by $f^\# \mathcal{P}(\mathscr{A}_Y)$ the fiber product $f^* \mathcal{P}(\mathscr{A}_Y) \times_{f^*\mathcal{T}_Y} \mathcal{T}_X$ of
\begin{align*}
f^* \mathcal{P}(\mathscr{A}_Y) \xrightarrow{f^*\sigma} f^* \mathcal{T}_Y \leftarrow \mathcal{T}_X.
\end{align*}
Then $f^\# \mathcal{P}(\mathscr{A}_Y)$ is a Picard algebroid on $X$ equipped with
$i\colon\rsheaf{X}\rightarrow f^\# \mathcal{P}(\mathscr{A}_Y)$ ($h \mapsto (h\otimes 1, 0)$).
We can define an action of $f^\# \mathcal{P}(\mathscr{A}_Y)$ on $\mathscr{A}_{X\rightarrow Y}$ via
\begin{align*}
(\sum_i f_i \otimes T_i, T') \cdot g \otimes S = T'g \otimes S + \sum_i f_i g \otimes T_i S
\end{align*}
for $(\sum_i f_i \otimes T_i, T') \in f^\# \mathcal{P}(\mathscr{A}_Y)$ and $g \otimes S \in \mathscr{A}_{X\rightarrow Y}$.
This induces a canonical isomorphism $f^\# \mathcal{P}(\mathscr{A}_Y) \simeq \mathcal{P}(f^\# \mathscr{A}_Y)$ of Picard algebroids.
See \cite[Lemma 2.2]{BeBe93}.
\begin{proposition}\label{prop:ActionOnDXY}
For local sections $(\sum_i f_i \otimes T_i, T') \in f^\# \mathcal{P}(\mathscr{A}_Y)$ and $S\otimes \tau \otimes \omega \in \mathscr{A}_{Y\leftarrow X} = f^{-1}(\mathscr{A}_Y \otimes_{\rsheaf{Y}} \Omega_Y^\vee)\otimes_{\rsheaf{X}} \Omega_X$,
we define
\begin{align*}
S\otimes \tau \otimes \omega \cdot (\sum_i f_i \otimes T_i, T')
&= \sum_i ST_i \otimes \tau \otimes f_i\omega \\
&-\sum_i S\otimes \sigma(T_i) \tau \otimes f_i\omega - S \otimes \tau \otimes \sigma(T') \omega,
\end{align*}
where $\sigma(T') \omega$ and $\sigma(T_i) \tau$ are defined by the Lie derivative.
Then this induces a right action of $f^\# \mathscr{A}_Y$ on $\mathscr{A}_{Y\leftarrow X}$.
\end{proposition}
\begin{proof}
A straightforward computation shows the proposition.
Hence we omit the details.
\end{proof}
\begin{remark}
Since $\mathscr{A}_Y$ is locally trivial, we can reduce the computation to the
non-twisted case.
In that case, the action coincides with that in \cite[Lemma 1.3.4]{HTT08}.
In \cite[1.1.15]{KaTa96}, the action in the proposition is constructed by a formal computation of algebras of twisted differential operators.
\end{remark}
\subsection{\texorpdfstring{$G$}{G}-equivariant module}
In this subsection, we review the notion of $G$-equivariant $\mathscr{D}$-modules.
We refer the reader to \cite[1.8]{BeBe93}, \cite{Ka89} and \cite[Section 3]{Ka08_dmodule}.
Let $G$ be an affine algebraic group and $X$ a smooth $G$-variety.
We write $\mu\colon G\times X\rightarrow X$ for the multiplication map and
$p_2\colon G\times X\rightarrow X$ for the projection onto the second factor.
An $\rsheaf{X}$-module $\mathcal{M}$ is \define{$G$-equivariant} if an $\rsheaf{G\times X}$-module isomorphism
$\mu^* \mathcal{M} \xrightarrow{\simeq} p_2^* \mathcal{M}$ is specified and satisfies the associative law \cite[(3.1.2)]{Ka08_dmodule}.
The $G$-equivariant structure is sometimes called an (algebraic) $G$-action on $\mathcal{M}$.
In fact, the $G$-equivariant structure induces a $G$-action on the set of sections of $\mathcal{M}$.
We can define the differential of the $G$-action, that is, a Lie algebra homomorphism $\lie{g}\rightarrow \textup{End}_{\mathbb{C}}(\mathcal{M})$.
We say that an algebra $\mathscr{A}$ of twisted differential operators is \define{$G$-equivariant} if
an algebra homomorphism $i_{\lie{g}}\colon\univ{g}\rightarrow \mathscr{A}$ and a $G$-action on $\mathscr{A}$ are specified and satisfy
the following conditions:
\begin{enumerate}[(i)]
\item The $G$-action is given by algebra isomorphisms.
\item $i_{\lie{g}}$ is $G$-equivariant with respect to the adjoint action on $\univ{g}$.
\item The differential of the $G$-action on $\mathscr{A}$ coincides with the adjoint action of $\lie{g}$ on $\mathscr{A}$
coming from $i_{\lie{g}}$.
\end{enumerate}
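For example, the non-twisted algebra $\mathscr{D}_X$ on a smooth $G$-variety $X$ is $G$-equivariant: $i_{\lie{g}}$ is induced by differentiating the $G$-action, sending $Z\in\lie{g}$ to the corresponding vector field on $X$ (with the sign convention making this a Lie algebra homomorphism), and the $G$-action on $\mathscr{D}_X$ is induced by translation of functions and vector fields.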
The $G$-equivariant structure induces an isomorphism
\begin{align}
\mu^{\#}\mathscr{A} \simeq p_2^{\#}\mathscr{A} (\simeq \mathscr{D}_{G}\boxtimes \mathscr{A}) \label{eqn:Gequivariant}
\end{align}
of algebras satisfying the associative law.
See \cite[Lemma 1.8.7]{BeBe93}.
Let $\mathscr{A}_X$ be a $G$-equivariant algebra of twisted differential operators on $X$.
An $\mathscr{A}_X$-module $\mathcal{M}$ is called \define{$G$-equivariant} or an \define{$(\mathscr{A}_X, G)$-module} if $\mathcal{M}$ is $G$-equivariant as a $\rsheaf{X}$-module
and the morphism $\mu^* \mathcal{M} \xrightarrow{\simeq} p_2^* \mathcal{M}$ is a $\mathscr{D}_{G}\boxtimes \mathscr{A}_X$-isomorphism.
Let $f\colon Y\rightarrow X$ be a $G$-morphism of smooth $G$-varieties.
Then the natural left action of $\univ{g}$ on $\mathscr{A}_{Y\rightarrow X}$
induces an algebra homomorphism $\univ{g}\rightarrow f^{\#}\mathscr{A}_X$.
Hence the algebra $f^{\#}\mathscr{A}_X$ is $G$-equivariant.
The $G$-equivariant structure coincides with the one obtained from the canonical isomorphism
$\mathcal{P}(f^{\#}\mathscr{A}_X) \simeq f^* \mathcal{P}(\mathscr{A}_X)\times_{f^*\mathcal{T}_X} \mathcal{T}_Y$
of Picard algebroids.
In this setting, we can define the direct image functor and the inverse image functor
\begin{align*}
&H^i\circ Df_+\colon\mathrm{Mod}_{qc}(f^{\#}\mathscr{A}_X, G) \rightarrow \mathrm{Mod}_{qc}(\mathscr{A}_X, G), \\
&H^i \circ Lf^*\colon\mathrm{Mod}_{qc}(\mathscr{A}_X, G) \rightarrow \mathrm{Mod}_{qc}(f^{\#}\mathscr{A}_X, G).
\end{align*}
Although it is more conceptual to use the equivariant derived category,
we do not deal with it in this paper.
Let $\mathscr{A}$ be a $G$-equivariant algebra of twisted differential operators on $X$.
We consider $G\times X$ as a $G\times G$-variety via the action $(a, b)\cdot (g, x) = (agb^{-1}, bx)$, and the codomain $X$ of $\mu$ (resp.\ $p_2$) as a $G\times G$-variety by letting the second (resp.\ first) factor of $G\times G$ act trivially.
Then $\mu$ and $p_2$ are $G\times G$-equivariant, and hence $\mu^\# \mathscr{A}$ and $p_2^\# \mathscr{A}$ are $G\times G$-equivariant algebras.
\begin{proposition}\label{prop:GGequivariant}
The isomorphism \eqref{eqn:Gequivariant} is $G\times G$-equivariant.
\end{proposition}
\begin{proof}
The assertion follows from the associative law of the $G$-equivariant structure on $\mathscr{A}$ and easy diagram chasing.
Hence we omit the details.
\end{proof}
\subsection{Principal bundle and direct image}\label{sect:PrincipalBundle}
Let $G$ be an affine algebraic group and $p\colon\widetilde{X}\rightarrow X$ a principal $G$-bundle over a smooth
variety $X$ together with a free right action $\widetilde{X}\times G \rightarrow \widetilde{X}$.
In this paper, a principal bundle over an algebraic variety is assumed to be locally trivial in the \'etale topology.
Then the projection $p$ is affine.
In this subsection, we study the direct image functor with respect to the projection $p$.
Let $\mathscr{A}_{\widetilde{X}}$ be a $G$-equivariant algebra of twisted differential operators on $\widetilde{X}$
equipped with a $G$-equivariant algebra homomorphism $R\colon\univ{g} \rightarrow \mathscr{A}_{\widetilde{X}}$.
\begin{proposition}\label{prop:DsheafLocal}
Assume that $p\colon\widetilde{X} \rightarrow X$ is a trivial bundle, i.e.\ $\widetilde{X}\simeq X\times G$.
Then there exists an algebra $\mathscr{A}_X$ of twisted differential operators on $X$
such that
\begin{align*}
\mathscr{A}_{\widetilde{X}} \simeq \mathscr{A}_{X}\boxtimes \mathscr{D}_{G}
\end{align*}
under the identification $\widetilde{X}\simeq X\times G$.
\end{proposition}
\begin{proof}
Let $\mu\colon\widetilde{X}\times G \rightarrow \widetilde{X}$ be the multiplication map
and $p_1\colon\widetilde{X}\times G\rightarrow \widetilde{X}$ the projection.
We define $s\colon X\rightarrow \widetilde{X}=X\times G$ via $x\mapsto (x, e)$
and $t\colon\widetilde{X}\rightarrow \widetilde{X}\times G$ via $X\times G \ni (x, g) \mapsto (x, e, g) \in X\times G\times G$.
Then we have $\mu\circ t = \textup{id}_{\widetilde{X}}$ and a commutative diagram
\begin{align*}
\xymatrix{
\widetilde{X} \ar[r]^-t \ar[d]^{p}& \widetilde{X}\times G \ar[r]^-{\mu}\ar[d]^{p_1}& \widetilde{X} \\
X \ar[r]^s& \widetilde{X}.
}
\end{align*}
We regard $X$ and $\widetilde{X}$ at the bottom as $G$-varieties via the trivial actions
and $\widetilde{X}\times G$ via the right translation on $G$.
Then the morphisms are $G$-equivariant.
Since $\mathscr{A}_{\widetilde{X}}$ is $G$-equivariant, we have
\begin{align*}
\mu^{\#}\mathscr{A}_{\widetilde{X}} \simeq p_1^{\#}\mathscr{A}_{\widetilde{X}}.
\end{align*}
We therefore obtain isomorphisms
\begin{align*}
\mathscr{A}_{\widetilde{X}} = (\mu\circ t)^{\#}\mathscr{A}_{\widetilde{X}} \simeq t^{\#}p_1^{\#}\mathscr{A}_{\widetilde{X}} \simeq p^{\#}s^{\#}\mathscr{A}_{\widetilde{X}}
\simeq s^{\#}\mathscr{A}_{\widetilde{X}}\boxtimes \mathscr{D}_G
\end{align*}
of $G$-equivariant algebras of twisted differential operators.
\end{proof}
For $\lambda \in (\lie{g}^*)^G$, we set $I_{\lambda}:= \mathrm{Ker}(\lambda) \subset \univ{g}$ and
\begin{align}
\mathscr{A}_{X,\lambda} &:=(\mathbb{C}_{\lambda-\delta} \otimes_{\univ{g}} p_*\mathscr{A}_{\widetilde{X}})^G \nonumber\\
&\simeq (p_*\mathscr{A}_{\widetilde{X}}/R(I_{-\lambda+\delta}) p_*\mathscr{A}_{\widetilde{X}})^G \nonumber\\
&\simeq p_*\mathscr{A}_{\widetilde{X}}^G/(R(I_{-\lambda+\delta}) p_*\mathscr{A}_{\widetilde{X}})^G \nonumber\\
&\simeq (p_*\mathscr{A}_{\widetilde{X}}/p_*\mathscr{A}_{\widetilde{X}}R(I_{-\lambda}))^G, \label{eqn:DefinitionTwisted}
\end{align}
where $\delta \in (\lie{g}^*)^G$ is the character $Z \mapsto \textup{tr}(\textup{ad}_\lie{g}(Z))$.
Remark that
\begin{align*}
(R(I_{-\lambda+\delta})\rring{G})^G = (\rring{G}R(I_{-\lambda}))^G \subset \ntDalg{G}.
\end{align*}
Then $\mathscr{A}_{X,\lambda}$ is a $G$-equivariant algebra of twisted differential operators on $X$
equipped with the homomorphism $\univ{g}\rightarrow \mathscr{A}_{X,\lambda}$ ($Z\mapsto \lambda({}^tZ)$).
Here we consider $X$ as a $G$-variety via the trivial action.
Note that if $\lambda$ is a character of $G$ and $\mu\in (\lie{g}^*)^G$, $\mathscr{A}_{X,\lambda + \mu}$ is isomorphic to $\mathcal{L}_{\lambda}\otimes_{\rsheaf{X}}\mathscr{A}_{X,\mu}\otimes_{\rsheaf{X}}\mathcal{L}_{-\lambda}$,
where $\mathcal{L}_{\lambda}$ is the invertible $\rsheaf{X}$-module corresponding to the line bundle $\widetilde{X}\times_{G}\mathbb{C}_{\lambda}\rightarrow X$.
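A standard example, sketched here with the normalization of signs left implicit: take $G = \mathbb{G}_m$, the principal bundle $p\colon \mathbb{C}^{n+1}\setminus\{0\}\rightarrow \mathbb{P}^n$ and $\mathscr{A}_{\widetilde{X}} = \mathscr{D}_{\mathbb{C}^{n+1}\setminus\{0\}}$. Since $G$ is abelian, we have $\delta = 0$, and identifying $(\lie{g}^*)^G\simeq \mathbb{C}$, the algebra $\mathscr{A}_{X,\lambda}$ is the quotient of $(p_*\mathscr{D}_{\mathbb{C}^{n+1}\setminus\{0\}})^{\mathbb{G}_m}$ by the ideal generated by $\theta - \lambda$ (up to sign), where $\theta = \sum_i x_i \partial_i$ is the Euler vector field generating the $\mathbb{G}_m$-action. For integral $\lambda$ this recovers $\mathcal{L}_{\lambda}\otimes_{\rsheaf{X}}\mathscr{D}_{\mathbb{P}^n}\otimes_{\rsheaf{X}}\mathcal{L}_{-\lambda}$, the algebra of differential operators twisted by $\mathcal{O}_{\mathbb{P}^n}(\lambda)$.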
For a while, we fix $\lambda \in (\lie{g}^*)^G$.
We write $D p_{+,\lambda}\colon D^b_{qc}(p^{\#}\mathscr{A}_{X, \lambda})\rightarrow D^b_{qc}(\mathscr{A}_{X, \lambda})$ for the direct image functor in Subsection \ref{subsect:DirectImage}.
We define a right exact functor
\begin{align*}
p_{+,\lambda}(\mathcal{M}):= \mathbb{C}_{\lambda-\delta}\otimes_{\univ{g}} p_*(\mathcal{M}) \in \mathrm{Mod}_{qc}(\mathscr{A}_{X,\lambda})
\end{align*}
for $\mathcal{M} \in \mathrm{Mod}_{qc}(\mathscr{A}_{\widetilde{X}})$.
For a character $\lambda$ of $G$, $p_{+,\lambda}(\rsheaf{\widetilde{X}})$ is
isomorphic to $\mathcal{L}_{\lambda}$ if $G$ is reductive.
To see $Dp_{+,\lambda} \simeq L p_{+,\lambda}$, we need the following lemma.
\begin{lemma}\label{lem:PrincipalBundleDsheaf}
$p^{\#}\mathscr{A}_{X,\lambda}$ is canonically isomorphic to $\mathscr{A}_{\widetilde{X}}$
as a $G$-equivariant algebra of twisted differential operators.
\end{lemma}
\begin{proof}
Since $p\colon \widetilde{X}\rightarrow X$ is locally trivial, the multiplication map
induces an isomorphism
\begin{align*}
\mathscr{A}_{\widetilde{X}\rightarrow X} = \rsheaf{\widetilde{X}}\otimes_{p^{-1}\rsheaf{X}} p^{-1}\mathscr{A}_{X,\lambda}
\simeq \mathscr{A}_{\widetilde{X}} / \mathscr{A}_{\widetilde{X}} R(I_{-\lambda})
\end{align*}
of sheaves.
This is $G$-equivariant and an $(\rsheaf{\widetilde{X}}\otimes \univ{g}, p^{-1}\mathscr{A}_{X,\lambda})$-bimodule isomorphism.
By the definition of $p^{\#}\mathscr{A}_{X,\lambda}$, we obtain
an isomorphism $\mathscr{A}_{\widetilde{X}} \simeq p^{\#}\mathscr{A}_{X,\lambda}$
of $G$-equivariant algebras of twisted differential operators.
\end{proof}
The isomorphism $\mathscr{A}_{\widetilde{X}} \simeq p^{\#}\mathscr{A}_{X,\lambda}$ can be obtained from
the following cartesian square:
\begin{align*}
\xymatrix{
p^* \mathcal{P}(\mathscr{A}_{X,\lambda}) \ar[r]^-{p^*\sigma}& p^* \mathcal{T}_X \\
\mathcal{P}(\mathscr{A}_{\widetilde{X}}) \simeq p^*(p_*(\mathcal{P}(\mathscr{A}_{\widetilde{X}}))^G) \ar[r] \ar[u] & \mathcal{T}_{\widetilde{X}} \simeq p^*(p_*(\mathcal{T}_{\widetilde{X}})^G). \ar[u]
}
\end{align*}
This implies $\mathcal{P}(\mathscr{A}_{\widetilde{X}}) \simeq p^\# \mathcal{P}(\mathscr{A}_{X, \lambda})$
(see the discussion preceding Proposition \ref{prop:ActionOnDXY}).
We identify $\mathscr{A}_{\widetilde{X}}$ with $p^{\#}\mathscr{A}_{X,\lambda}$ by the isomorphism.
We shall describe the right $\mathscr{A}_{\widetilde{X}}$-action on $\mathscr{A}_{X\leftarrow \widetilde{X}}$.
Let $\pi$ be the natural projection $\mathcal{T}_{\widetilde{X}}^G \twoheadrightarrow p^{-1}(\mathcal{T}_X)$.
Here $\mathcal{T}_{\widetilde{X}}^G$ is the sheaf of $G$-invariant local sections of $\mathcal{T}_{\widetilde{X}}$, that is, $p^{-1}(p_*(\mathcal{T}_{\widetilde{X}})^G)$.
Fix a basis $X_1, X_2, \ldots, X_{\dim G} \in \lie{g}$.
Since $p\colon\widetilde{X}\rightarrow X$ is a principal $G$-bundle,
$\Omega_p = p^{-1}(\Omega_X^\vee)\otimes_{p^{-1}\rsheaf{X}} \Omega_{\widetilde{X}}$ is isomorphic to $\rsheaf{\widetilde{X}}$ as an $\rsheaf{\widetilde{X}}$-module.
The isomorphism is given by
\begin{align*}
\theta_1\wedge \theta_2 \wedge \cdots \wedge \theta_{\dim X} \otimes \omega
\mapsto \omega(\widetilde{\theta_1}, \widetilde{\theta_2}, \ldots, \widetilde{\theta_{\dim X}},
R(X_1), R(X_2), \ldots, R(X_{\dim G}))
\end{align*}
for local sections $\theta_1, \theta_2, \ldots, \theta_{\dim X} \in p^{-1}(\mathcal{T}_X)$ and $\omega \in \Omega_{\widetilde{X}}$,
where each $\widetilde{\theta_i}$ is a local section of $\mathcal{T}_{\widetilde{X}}^G$ such that $\pi(\widetilde{\theta_i}) = \theta_i$.
Since $[\mathcal{T}_{\widetilde{X}}^G, R(\lie{g})] = 0$, the isomorphism commutes with the actions of $\mathcal{T}_{\widetilde{X}}^G$ on $\Omega_p$ and $\rsheaf{\widetilde{X}}$ defined by the Lie derivative.
\begin{lemma}\label{lem:PrincipalBundleDYX}
$\mathscr{A}_{X\leftarrow \widetilde{X}}$ is isomorphic to $\mathscr{A}_{\widetilde{X}} / R(I_{-\lambda+\delta}) \mathscr{A}_{\widetilde{X}}$
as a $(p^{-1}\mathscr{A}_{X,\lambda}, \mathscr{A}_{\widetilde{X}})$-bimodule.
\end{lemma}
\begin{proof}
Let $i\colon\Omega_p \rightarrow \rsheaf{\widetilde{X}}$ be the above isomorphism.
Composing $i$ with the multiplication map, we obtain an isomorphism
\begin{align*}
\mathscr{A}_{X\leftarrow \widetilde{X}} = p^{-1}\mathscr{A}_{X,\lambda} \otimes_{p^{-1}\rsheaf{X}} \Omega_p
\rightarrow p^{-1}\mathscr{A}_{X,\lambda} \otimes_{p^{-1}\rsheaf{X}} \rsheaf{\widetilde{X}}
\rightarrow \mathscr{A}_{\widetilde{X}} / R(I_{-\lambda+\delta}) \mathscr{A}_{\widetilde{X}}
\end{align*}
of sheaves, which we denote by $\iota$.
It is trivial that $\iota$ is a $(p^{-1}\mathscr{A}_{X, \lambda}, \rsheaf{\widetilde{X}})$-module homomorphism.
By Proposition \ref{prop:ActionOnDXY}, the action of $\theta \in \mathcal{P}(\mathscr{A}_{\widetilde{X}})^G$ is given by
\begin{align*}
(S\otimes \omega)\cdot \theta = S\theta \otimes \omega - S\otimes \sigma(\theta) \omega
\end{align*}
for $S\otimes \omega \in \mathscr{A}_{X\leftarrow \widetilde{X}}$.
Here $\sigma\colon\mathcal{P}(\mathscr{A}_{\widetilde{X}})^G\rightarrow \mathcal{T}_{\widetilde{X}}^G$ is the restriction of the morphism $\sigma\colon\mathcal{P}(\mathscr{A}_{\widetilde{X}}) \rightarrow \mathcal{T}_{\widetilde{X}}$ attached to the Picard algebroid.
Hence $\iota$ commutes with the $\mathcal{P}(\mathscr{A}_{\widetilde{X}})^G$-action by the definition of $i$.
Since $\mathcal{P}(\mathscr{A}_{\widetilde{X}})$ is generated by $\mathcal{P}(\mathscr{A}_{\widetilde{X}})^G$ as an $\rsheaf{\widetilde{X}}$-module,
$\iota$ is a $(p^{-1}\mathscr{A}_{X,\lambda}, \mathscr{A}_{\widetilde{X}})$-bimodule isomorphism.
\end{proof}
\begin{remark}
Suppose that $G$ is not unimodular.
Although $\iota$ is a right $\lie{g}$-homomorphism, the isomorphism $p^{-1}\mathscr{A}_{X,\lambda} \otimes_{p^{-1}\rsheaf{X}} \Omega_p
\rightarrow p^{-1}\mathscr{A}_{X,\lambda} \otimes_{p^{-1}\rsheaf{X}} \rsheaf{\widetilde{X}}$ is not.
This is because $i\colon \Omega_p \rightarrow \rsheaf{\widetilde{X}}$ is not $G$-equivariant.
\end{remark}
We fix the isomorphism $\mathscr{A}_{X\leftarrow \widetilde{X}}\simeq \mathscr{A}_{\widetilde{X}} / R(I_{-\lambda+\delta}) \mathscr{A}_{\widetilde{X}}$.
\begin{proposition}\label{prop:PrincipalBundleDirectImage}
$D p_{+,\lambda}$ is isomorphic to the left derived functor $L p_{+,\lambda}$ of $p_{+,\lambda}$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:DirectImageAffineMorphism}, $Dp_{+,\lambda}$ is isomorphic to $p_*\mathscr{A}_{X\leftarrow \widetilde{X}}\otimes^L_{p_*\mathscr{A}_{\widetilde{X}}} p_*(\cdot)$.
Hence it is enough to show that there is a natural isomorphism
\begin{align*}
p_*(\mathscr{A}_{X\leftarrow \widetilde{X}})\otimes_{p_*(\mathscr{A}_{\widetilde{X}})} p_*(\mathcal{F}) \simeq \mathbb{C}_{\lambda-\delta}\otimes_{\univ{g}} p_*(\mathcal{F})
\end{align*}
for any locally free $\mathscr{A}_{\widetilde{X}}$-module $\mathcal{F}$.
Since $p$ is affine, we have $p_*(\mathscr{A}_{X\leftarrow \widetilde{X}}) \simeq p_*(\mathscr{A}_{\widetilde{X}})/R(I_{-\lambda+\delta})p_*(\mathscr{A}_{\widetilde{X}})$ by Lemma \ref{lem:PrincipalBundleDYX}.
This implies
\begin{align*}
p_*(\mathscr{A}_{X\leftarrow \widetilde{X}})\otimes_{p_*(\mathscr{A}_{\widetilde{X}})} p_*(\mathcal{F}) &\simeq p_*(\mathcal{F})/R(I_{-\lambda+\delta}) p_*(\mathcal{F}) \\
&\simeq \mathbb{C}_{\lambda-\delta}\otimes_{\univ{g}} p_*(\mathcal{F}).
\end{align*}
We have proved the proposition.
\end{proof}
\begin{lemma}\label{lem:PrincipalBundleLocallyProjective}
Let $U$ be an open subset of $X$.
If $p_*(\mathscr{A}_{\widetilde{X}})|_U$ is acyclic (e.g.\ $U$ is affine), $\Gamma(U, p_*\mathscr{A}_{\widetilde{X}})$ is a projective left/right $\univ{g}$-module.
\end{lemma}
\begin{proof}
Since the bundle $p\colon \widetilde{X}\rightarrow X$ is locally trivial in the \'etale topology,
we can take an affine \'etale covering $\set{U_j \rightarrow U}$ such that
$U_j \times_X \widetilde{X} \rightarrow U_j$ is a trivial principal $G$-bundle.
Since $p_*\mathscr{A}_{\widetilde{X}}$ is a quasi-coherent $\rsheaf{X}$-module,
the cohomology group $H^i(U, p_*\mathscr{A}_{\widetilde{X}})$ is isomorphic to the \'etale cohomology group $H^i(U_{\text{\'et}}, (p_*\mathscr{A}_{\widetilde{X}})_{\text{\'et}})$ for any $i$.
Here $(p_*\mathscr{A}_{\widetilde{X}})_{\text{\'et}}$ is the \'etale sheaf associated to $p_*\mathscr{A}_{\widetilde{X}}$.
Hence the \v{C}ech complex
\begin{align*}
0 \rightarrow \Gamma(U, p_*\mathscr{A}_{\widetilde{X}}) \rightarrow C^0 \rightarrow C^1 \rightarrow \cdots
\end{align*}
associated to the covering $\set{U_j \rightarrow U}$ is exact and each term $C^j$ is a free left/right
$\univ{g}$-module.
$\Gamma(U, p_*\mathscr{A}_{\widetilde{X}})$ is therefore a projective left/right $\univ{g}$-module.
\end{proof}
By Lemma \ref{lem:PrincipalBundleLocallyProjective} and Proposition \ref{prop:PrincipalBundleDirectImage}, we obtain the following theorem.
Note that for a generalized pair $(\mathcal{A}, G)$ and a left $\mathcal{A}$-module $M$,
$\mathrm{Tor}^{\univ{g}}_i(\mathbb{C}_{\lambda-\delta}, M)$ admits a natural $(\mathcal{A}/I_{-\lambda+\delta}\mathcal{A})^G$-module structure if $\mathcal{A}$ is a flat left $\univ{g}$-module.
In fact, $\mathrm{Tor}^{\univ{g}}_i(\mathbb{C}_{\lambda-\delta}, M)$ can be computed by using a free resolution of the $\mathcal{A}$-module $M$.
\begin{theorem}\label{thm:DirectImageTor}
For any $\mathcal{M} \in \mathrm{Mod}_{qc}(\mathscr{A}_{\widetilde{X}})$, we have a natural isomorphism
\begin{align*}
D^{-i}p_{+,\lambda}(\mathcal{M}) \simeq \mathrm{Tor}^{\univ{g}}_{i}(\mathbb{C}_{\lambda - \delta}, p_*(\mathcal{M}))
\end{align*}
of $\mathscr{A}_{X,\lambda}$-modules, where $\mathrm{Tor}^{\univ{g}}_{i}(\mathbb{C}_{\lambda - \delta}, p_*(\mathcal{M}))$ denotes the sheafification of the presheaf $(U\mapsto \mathrm{Tor}^{\univ{g}}_i(\mathbb{C}_{\lambda - \delta}, \Gamma(p^{-1}(U), \mathcal{M})))$.
\end{theorem}
By Theorem \ref{thm:DirectImageTor}, there is a natural homomorphism
\begin{align*}
\mathrm{Tor}^{\univ{g}}_{i}(\mathbb{C}_{\lambda-\delta}, \Gamma(\mathcal{M})) \rightarrow \Gamma(D^{-i}p_{+,\lambda}(\mathcal{M}))
\end{align*}
of $\Dalg{\widetilde{X}}^G$-modules, where $\Dalg{\widetilde{X}} = \Gamma(\mathscr{A}_{\widetilde{X}})$.
In general, this homomorphism is not an isomorphism.
Under suitable assumptions, however, we can show that it is.
\begin{lemma}\label{lem:PrincipalBundleAcylic}
Assume that $\mathscr{A}_{\widetilde{X}}$ is acyclic.
Then the natural homomorphism $(\Dalg{\widetilde{X}}/R(I_{-\lambda+\delta})\Dalg{\widetilde{X}})^G \rightarrow \Dalg{X,\lambda}$ is bijective.
Moreover, for a free $\mathscr{A}_{\widetilde{X}}$-module $\mathcal{F}$, the natural homomorphism
$\mathbb{C}_{\lambda-\delta}\otimes_{\univ{g}} \Gamma(\mathcal{F}) \rightarrow \Gamma(p_{+,\lambda}(\mathcal{F}))$ is an isomorphism of $\Dalg{X,\lambda}$-modules.
\end{lemma}
\begin{proof}
Let $\mathcal{F}$ be a free $\mathscr{A}_{\widetilde{X}}$-module.
Since $p$ is affine and $\mathscr{A}_{\widetilde{X}}$ is acyclic, $p_*(\mathcal{F})$ is also acyclic.
Take a free resolution $\cdots \rightarrow P_1 \xrightarrow{d_1} P_0 \xrightarrow{d_0} \mathbb{C}_{\lambda-\delta}\rightarrow 0$.
By Lemma \ref{lem:PrincipalBundleLocallyProjective}, the following sequence is exact:
\begin{align*}
\cdots \rightarrow P_1\otimes_{\univ{g}}p_*(\mathcal{F}) \xrightarrow{d_1} P_0\otimes_{\univ{g}}p_*(\mathcal{F}) \xrightarrow{d_0} p_{+,\lambda}(\mathcal{F}) \rightarrow 0.
\end{align*}
Since all $P_i\otimes_{\univ{g}}p_*(\mathcal{F})$ are acyclic, $\mathrm{Ker}(d_0)$ is also acyclic, and hence
\begin{align*}
\Gamma(P_1\otimes_{\univ{g}}p_*(\mathcal{F})) \xrightarrow{d_1} \Gamma(P_0\otimes_{\univ{g}}p_*(\mathcal{F})) \xrightarrow{d_0} \Gamma(p_{+,\lambda}(\mathcal{F})) \rightarrow 0
\end{align*}
is exact.
Since each $P_i$ is free, we have $\Gamma(P_i\otimes_{\univ{g}} p_*(\mathcal{F})) \simeq P_i\otimes_{\univ{g}} \Gamma(\mathcal{F})$.
This implies that the natural homomorphism $\mathbb{C}_{\lambda-\delta}\otimes_{\univ{g}} \Gamma(\mathcal{F}) \rightarrow \Gamma(p_{+,\lambda}(\mathcal{F}))$ is bijective.
Hence the second assertion follows from the first one.
Since $(\cdot)^G$ is left exact, we have
\begin{align*}
\Dalg{X,\lambda} = \Gamma(p_{+,\lambda}(\mathscr{A}_{\widetilde{X}})^G)=\Gamma(p_{+,\lambda}(\mathscr{A}_{\widetilde{X}}))^G
\simeq (\Dalg{\widetilde{X}}/R(I_{-\lambda+\delta})\Dalg{\widetilde{X}})^G.
\end{align*}
This implies that the natural algebra homomorphism $(\Dalg{\widetilde{X}}/R(I_{-\lambda+\delta})\Dalg{\widetilde{X}})^G\rightarrow \Dalg{X,\lambda}$ is an isomorphism.
\end{proof}
\begin{theorem}\label{thm:PrincipalBundleDirectImageDerived3}
Let $\mathcal{M} \in \mathrm{Mod}_{qc}(\mathscr{A}_{\widetilde{X}})$ and $i \in \mathbb{N}$.
Assume that the global section functors are exact on $\mathrm{Mod}_{qc}(\mathscr{A}_{\widetilde{X}})$ and $\mathrm{Mod}_{qc}(\mathscr{A}_{X,\lambda})$.
Then the natural homomorphism
\begin{align*}
\mathrm{Tor}_i^{\univ{g}}(\mathbb{C}_{\lambda-\delta}, \Gamma(\mathcal{M}))\rightarrow \Gamma(D^{-i} p_{+,\lambda}(\mathcal{M}))
\end{align*}
is an isomorphism of $\Dalg{X,\lambda}$-modules.
\end{theorem}
\begin{proof}
Remark that any object in $\mathrm{Mod}_{qc}(\mathscr{A}_{\widetilde{X}})$ is acyclic because $\Gamma$ is exact on $\mathrm{Mod}_{qc}(\mathscr{A}_{\widetilde{X}})$.
Take a free resolution $\mathcal{F}^\bullet$ of $\mathcal{M}$.
By Lemma \ref{lem:PrincipalBundleLocallyProjective}, $\Gamma(\mathcal{F}^\bullet)$ is a projective resolution of $\Gamma(\mathcal{M})$ as a $\lie{g}$-module.
Hence we have
\begin{align*}
\Gamma(D^{-i} p_{+,\lambda}(\mathcal{M})) &\simeq \Gamma(H^{-i}(p_{+,\lambda}(\mathcal{F}^\bullet))) \\
&\simeq H^{-i}\circ \Gamma(p_{+,\lambda}(\mathcal{F}^\bullet)) \\
&\simeq H^{-i}(\mathbb{C}_{\lambda-\delta}\otimes_{\univ{g}} \Gamma(\mathcal{F}^\bullet)) \\
&\simeq \mathrm{Tor}_i^{\univ{g}}(\mathbb{C}_{\lambda-\delta}, \Gamma(\mathcal{M})).
\end{align*}
Here the third isomorphism follows from Lemma \ref{lem:PrincipalBundleAcylic}.
\end{proof}
\begin{remark}
One can prove a similar result about the commutativity of $R\Gamma$ and $\mathbb{C}_{\lambda-\delta}\otimes^L_{\univ{g}}(\cdot)$ without the exactness of $\Gamma$.
\end{remark}
The functor $\Gamma$ is exact for any affine variety and any $\lambda$, and for any flag variety and good $\lambda$ (see Fact \ref{fact:BeilinsonBernstein}).
To apply Theorem \ref{thm:PrincipalBundleDirectImageDerived3} to direct products of such varieties,
we shall show the following lemma.
\begin{lemma}
Let $X$ and $Y$ be smooth varieties and $\mathscr{A}_X$ (resp.\ $\mathscr{A}_Y$) an algebra of twisted differential operators on $X$ (resp.\ $Y$).
If the global section functors on $\mathrm{Mod}_{qc}(\mathscr{A}_{X})$ and $\mathrm{Mod}_{qc}(\mathscr{A}_{Y})$
are exact, then the global section functor on $\mathrm{Mod}_{qc}(\mathscr{A}_{X}\boxtimes \mathscr{A}_{Y})$ is also exact.
\end{lemma}
\begin{proof}
Let $0\rightarrow \mathcal{M}_1 \rightarrow \mathcal{M}_2 \rightarrow \mathcal{M}_3 \rightarrow 0$
be a short exact sequence of quasi-coherent $\mathscr{A}_{X}\boxtimes \mathscr{A}_{Y}$-modules.
Let $U$ be an affine open subset of $X$.
We write $p\colon U\times Y \rightarrow Y$ for the projection onto the second factor.
Then $p$ is affine.
Hence we obtain a short exact sequence
\begin{align*}
0\rightarrow p_*(\mathcal{M}_1|_{U\times Y}) \rightarrow p_*(\mathcal{M}_2|_{U\times Y}) \rightarrow p_*(\mathcal{M}_3|_{U\times Y}) \rightarrow 0
\end{align*}
of quasi-coherent $\mathscr{A}_Y$-modules.
Since the global section functor $\Gamma$ on $\mathrm{Mod}_{qc}(\mathscr{A}_Y)$ is exact,
we obtain a short exact sequence
\begin{align}
0\rightarrow \Gamma(U\times Y, \mathcal{M}_1) \rightarrow \Gamma(U\times Y, \mathcal{M}_2) \rightarrow \Gamma(U\times Y, \mathcal{M}_3) \rightarrow 0.
\label{eqn:exactexacttoexact}
\end{align}
Let $q\colon X\times Y\rightarrow X$ be the projection onto the first factor.
By (\ref{eqn:exactexacttoexact}), $q_*\colon\mathrm{Mod}_{qc}(\mathscr{A}_X\boxtimes \mathscr{A}_Y) \rightarrow \mathrm{Mod}_{qc}(\mathscr{A}_X)$
is exact.
Since the global section functor $\Gamma$ on $\mathrm{Mod}_{qc}(\mathscr{A}_X)$ is exact,
this implies that the sequence $0\rightarrow \Gamma(\mathcal{M}_1) \rightarrow \Gamma(\mathcal{M}_2) \rightarrow \Gamma(\mathcal{M}_3)\rightarrow 0$
is exact.
We have proved the lemma.
\end{proof}
\section{Examples of uniformly bounded families}\label{sect:ExampleUBF}
In general, it is not easy to construct bornologies and uniformly bounded families of twisted $\mathscr{D}$-modules.
A convenient way to construct them is to use group actions.
In this section, we construct uniformly bounded families using
principal bundles and group actions with finitely many orbits.
\subsection{Bornology of a principal bundle}\label{sect:BornologyPrincipalBundle}
Let $G$ be an affine algebraic group and $p\colon \widetilde{X}\rightarrow X$ a principal $G$-bundle over a smooth variety $X$.
Let $\mathscr{A}_{\widetilde{X}}$ be a $G$-equivariant algebra of twisted differential operators.
For each $\lambda \in (\lie{g}^*)^G$, we have defined a $G$-equivariant algebra $\mathscr{A}_{X,\lambda}$
of twisted differential operators on $X$ in (\ref{eqn:DefinitionTwisted}).
Then we obtain a family $(\mathscr{A}_{X,\lambda})_{\lambda \in (\lie{g}^*)^G}$.
Put $\Lambda := (\lie{g}^*)^G$.
In this subsection, we shall show that the family admits a standard bornology
determined by the bundle $\widetilde{X}\rightarrow X$.
We can take a surjective \'etale morphism $\varphi\colon U\rightarrow X$ such that
the pull-back $p\colon U \times_{X} \widetilde{X} \rightarrow U$ of the $G$-bundle
is trivial and $\mathscr{A}_{\widetilde{X}}|_{U\times_X \widetilde{X}}$
is isomorphic to the algebra $\mathscr{D}_{U\times_X \widetilde{X}}$.
Fix a section $s\colon U\rightarrow U\times_X \widetilde{X}$
and an isomorphism $\alpha\colon s^\#(\mathscr{A}_{\widetilde{X}}|_{U\times_X \widetilde{X}}) \rightarrow \mathscr{D}_{U}$.
The section $s$ determines a trivialization $U\times G \simeq U\times_X \widetilde{X}$
and $\alpha$ induces an isomorphism $\mathscr{A}_{\widetilde{X}}|_{U\times_X \widetilde{X}}\simeq \mathscr{D}_U\boxtimes \mathscr{D}_G$ by Proposition \ref{prop:DsheafLocal}.
Then we have an isomorphism
\begin{align*}
\Phi^{U, s,\alpha}_\lambda\colon \mathscr{A}_{X,\lambda}|_{U} = p_*(\mathscr{A}_{\widetilde{X}}/R(I_{-\lambda+\delta})\mathscr{A}_{\widetilde{X}})^G|_U \xrightarrow{\simeq} \mathscr{D}_{U}
\end{align*}
for any $\lambda \in \Lambda$.
See \eqref{eqn:DefinitionTwisted} for the notation.
Hence we obtain a trivialization $(U, \varphi, \Phi^{U, s,\alpha})$ of $\mathscr{A}_{X,\Lambda}$.
\begin{proposition}\label{prop:BornologyPrincipalBundle}
The trivialization $(U, \varphi, \Phi^{U, s,\alpha})$ is bounded, and its equivalence class does not depend on the choice of $\varphi\colon U\rightarrow X$, $s$ and $\alpha$.
\end{proposition}
\begin{proof}
Let $(\psi\colon V\rightarrow X, t, \beta)$ be another choice of $(\varphi, s, \alpha)$.
By pulling back $s, \alpha, t, \beta, \Phi^{U,s,\alpha}$ and $\Phi^{V,t,\beta}$ to $U\times_X V$, we may carry out the computation entirely on $U\times_X V$.
Hence we can assume $U=V=X$, $\widetilde{X}=X\times G$ and $\mathscr{A}_{\widetilde{X}} = \mathscr{D}_{\widetilde{X}} = \mathscr{D}_{X}\boxtimes \mathscr{D}_G$.
We identify $s^\#(\mathscr{D}_{\widetilde{X}})$ and $t^\#(\mathscr{D}_{\widetilde{X}})$ with $\mathscr{D}_X$ by the canonical isomorphisms.
Then $\alpha$ and $\beta$ are automorphisms of $\mathscr{D}_X$.
Since $\alpha$ and $\beta$ are independent of $\lambda \in \Lambda$, the choice of $\alpha$ and $\beta$ does not affect the equivalence.
Hence we can assume $\alpha = \beta = \textup{id}$.
Fix $\lambda \in \Lambda$.
By the decomposition $X\times G = s(X)G$, we have a monomorphism
$\iota_s\colon \mathscr{D}_X \simeq \mathscr{D}_{s(X)} \rightarrow p_*(\mathscr{D}_{X\times G})^G$
and the isomorphism $(\Phi^{U,s,\alpha}_\lambda)^{-1}\colon \mathscr{D}_X \rightarrow \mathscr{D}_{X,\lambda}$ factors through the monomorphism.
We define $\iota_t$ similarly.
Then $\Phi^{V,t,\beta}_\lambda\circ (\Phi^{U,s,\alpha}_\lambda)^{-1}$ is given by the dotted arrow in the following diagram:
\begin{align*}
\xymatrix{
\mathscr{D}_X \simeq \mathscr{D}_{s(X)} \ar@{.>}[d]\ar[r]^-{\iota_s} & p_*(\mathscr{D}_{X\times G})^G \ar@{->>}[r]& \mathscr{D}_{X,\lambda} \ar@{=}[d] \\
\mathscr{D}_X \simeq \mathscr{D}_{t(X)} \ar[r]^-{\iota_t} & p_*(\mathscr{D}_{X\times G})^G \ar@{->>}[r]& \mathscr{D}_{X,\lambda}.
}
\end{align*}
We write $s(x) = (x, s'(x))$ ($x \in X$) and
define an automorphism $a$ of $X\times G$ by $a(x, g) = (x, s'(x)g)$.
For a local section $T \in \mathcal{T}_X$, we denote by $T_s$
the corresponding section of $\mathcal{T}_{s(X)}$.
There exist closed $1$-forms $\omega^s_1, \omega^s_2, \ldots, \omega^s_n$ $(n=\dim_\mathbb{C}(\lie{g}))$ on $X$ such that for any local sections $T \in \mathcal{T}_{X}$ and $f \in \rsheaf{X}\otimes \rring{G}$,
\begin{align*}
T_s f = ((a^*)^{-1}\circ T \circ a^*) f = Tf - \sum_i \omega_i^s(T) L(X_i)f,
\end{align*}
where $\set{X_i}_{i=1,2,\ldots, n}$ is a basis of $\lie{g}$
and $L$ is the differential of the left translation on $G$.
Similarly, we define $\set{\omega_j^t}$ for $t$.
Therefore $\set{\Phi^{V,t,\beta}_\lambda\circ (\Phi^{U,s,\alpha}_\lambda)^{-1}}_{\lambda \in \Lambda}$ is contained in the finite-dimensional subspace of $\mathcal{Z}(X)$ spanned by $\set{\omega_i^s}$ and $\set{\omega_j^t}$.
We have proved the proposition.
\end{proof}
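To make the closed $1$-forms $\omega^s_i$ concrete, here is a small illustration of our own (not from the source) for $G=\mathbb{G}_m$; the sign of $\omega^s_1$ depends on the convention chosen for $L$.

```latex
% Illustration (ours): G = \mathbb{G}_m, \widetilde{X} = X \times \mathbb{G}_m,
% basis X_1 = 1 of \lie{g} = \mathbb{C}, with L(X_1) = z\partial_z.
% For a section s(x) = (x, s'(x)), s'\colon X \to \mathbb{G}_m, the automorphism
% a(x, z) = (x, s'(x)z) gives, for T \in \mathcal{T}_X and f \in \rsheaf{X}\otimes\rring{G},
T_s f = ((a^*)^{-1}\circ T \circ a^*) f = Tf + \frac{T(s')}{s'}\, z\partial_z f,
% so \omega^s_1 = -\,d\log s' = -\,ds'/s', a closed 1-form on X, as claimed.
```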
\begin{definition}
We denote by $\mathcal{B}(X, \widetilde{X})$ the equivalence class of the bounded trivialization $(U, \varphi, \Phi^{U,s,\alpha})$.
\end{definition}
Let $f\colon Y\rightarrow X$ be a morphism of smooth varieties.
Then we have a cartesian square
\begin{align*}
\xymatrix{
Y\times_X \widetilde{X} \ar[r]^-{\widetilde{f}} \ar[d]^-{q}&\widetilde{X} \ar[d]^-p\\
Y \ar[r]^-f & X.
}
\end{align*}
Put
\begin{align*}
\widetilde{Y}&:= Y\times_X \widetilde{X}, \\
\mathscr{A}_{\widetilde{Y}} &:= \widetilde{f}^\#\mathscr{A}_{\widetilde{X}}.
\end{align*}
It is easy to see that $q\colon \widetilde{Y}\rightarrow Y$ is a principal $G$-bundle
and $\mathscr{A}_{\widetilde{Y}}$ is $G$-equivariant.
For each $\lambda \in \Lambda$, we can define an algebra $\mathscr{A}_{Y,\lambda}$
of twisted differential operators on $Y$ as in \eqref{eqn:DefinitionTwisted}
and we have a canonical isomorphism
\begin{align*}
\mathscr{A}_{Y,\lambda} \simeq f^\#\mathscr{A}_{X,\lambda}.
\end{align*}
We identify the two algebras by the isomorphism.
Then we obtain two bornologies $\mathcal{B}(Y,\widetilde{Y})$
and $f^\#\mathcal{B}(X,\widetilde{X})$ of $\mathscr{A}_{Y, \Lambda} := (\mathscr{A}_{Y,\lambda})_{\lambda \in \Lambda}$.
\begin{lemma}\label{lem:pull-backOfBornologyPrincipalBundle}
$\mathcal{B}(Y,\widetilde{Y})$ and $f^\#\mathcal{B}(X,\widetilde{X})$
are equal.
\end{lemma}
\begin{proof}
This is clear from the definition of $\mathcal{B}(X,\widetilde{X})$ and that of its pull-back (Definition \ref{def:pull-backBornology}).
\end{proof}
The following theorem is a consequence of Lemma \ref{lem:pull-backOfBornologyPrincipalBundle} and
Theorem \ref{thm:FunctorOnUniformlyBounded}.
\begin{theorem}\label{thm:FunctorUniformlyBoundedPrincipalBundle}
We have functors
\begin{align*}
Df_+\colon D^b_{ub}(\mathscr{A}_{Y,\Lambda}, \mathcal{B}(Y,\widetilde{Y})) \rightarrow D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B}(X,\widetilde{X})), \\
Lf^*\colon D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B}(X,\widetilde{X}))\rightarrow D^b_{ub}(\mathscr{A}_{Y,\Lambda}, \mathcal{B}(Y,\widetilde{Y})),
\end{align*}
which are the restrictions of the direct image functor and the inverse image functor, respectively.
\end{theorem}
\begin{corollary}\label{cor:FamilyOfUniformlyBounded}
Let $\mathcal{M} \in D_h^b(\mathscr{A}_{\widetilde{X}})$.
Then the family $(Dp_{+,\lambda}(\mathcal{M}))_{\lambda \in \Lambda}$ is
uniformly bounded with respect to $\mathcal{B}(X,\widetilde{X})$.
\end{corollary}
\begin{proof}
We shall apply Theorem \ref{thm:FunctorUniformlyBoundedPrincipalBundle} to $Y=\widetilde{X}$ and $f = p$.
The fiber product $\widetilde{X}\times_X \widetilde{X}$ is canonically isomorphic to the trivial bundle $\widetilde{X}\times G$.
The isomorphism is given by $\widetilde{X}\times G \ni (x, g) \mapsto (x, xg) \in \widetilde{X}\times_X \widetilde{X}$.
Hence the following diagram is a cartesian square:
\begin{align*}
\xymatrix{
\widetilde{X} \times G \ar[r]^-m \ar[d]^{\textup{pr}}&\widetilde{X} \ar[d]^-p\\
\widetilde{X} \ar[r]^-p & X,
}
\end{align*}
where $m$ is the multiplication map and $\textup{pr}$ is the projection onto the first factor.
Then the constant family $(\mathcal{M})_{\lambda \in \Lambda}$ is uniformly bounded with respect to $\mathcal{B}(\widetilde{X}, \widetilde{X}\times G)$.
Therefore the assertion follows from Theorem \ref{thm:FunctorUniformlyBoundedPrincipalBundle}.
\end{proof}
By Corollary \ref{cor:FamilyOfUniformlyBounded}, we can construct many uniformly bounded families of $\mathscr{D}$-modules parametrized by $(\lie{g}^*)^G$ using principal $G$-bundles.
\subsection{\texorpdfstring{$G$}{G}-equivariant bornology}
Let $G$ be an affine algebraic group, $X$ a smooth $G$-variety, and $\mathscr{A}_{X,\Lambda}$ a family of $G$-equivariant algebras of twisted differential operators on $X$.
We write $\pi\colon G\times X\rightarrow X$ and $m\colon G\times X\rightarrow X$ for the projection and the multiplication map, respectively.
Since all $\mathscr{A}_{X, \lambda}$ are $G$-equivariant, we have a canonical isomorphism
\begin{align*}
\pi^\# \mathscr{A}_{X,\Lambda} \simeq m^\# \mathscr{A}_{X,\Lambda}.
\end{align*}
See \eqref{eqn:Gequivariant}.
\begin{definition}\label{def:EquivariantBornology}
We say that a bornology $\mathcal{B}$ of $\mathscr{A}_{X,\Lambda}$ is $G$-equivariant if $\pi^\# \mathcal{B} = m^\# \mathcal{B}$ holds under the isomorphism $\pi^\# \mathscr{A}_{X,\Lambda} \simeq m^\# \mathscr{A}_{X,\Lambda}$.
\end{definition}
The following proposition is immediate from the definition and Proposition \ref{prop:pull-backBornology}.
\begin{proposition}
Let $f\colon Y\rightarrow X$ be a morphism of smooth $G$-varieties and $\mathcal{B}$ a $G$-equivariant bornology of $\mathscr{A}_{X,\Lambda}$.
Then $f^\# \mathcal{B}$ is $G$-equivariant.
\end{proposition}
We set $m_g:=m(g,\cdot)$ for $g \in G$.
Then $m_g$ is an automorphism of $X$.
\begin{proposition}\label{prop:TranslationByG}
Let $\mathcal{M} \in D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})$.
Then $(Lm_g^*(\mathcal{M}_\lambda))_{\lambda \in \Lambda, g\in G}$ is uniformly bounded with respect to $\mathcal{B}$.
\end{proposition}
\begin{proof}
For $g \in G$, let $f_g$ denote the morphism $f_g\colon X\rightarrow G\times X$ defined by $f_g(x) = (g, x)$.
Then we have $m_g = m \circ f_g$.
Since $\mathcal{B}$ is $G$-equivariant, $Lm^*(\mathcal{M})$ is uniformly bounded with respect to $\pi^\#\mathcal{B}=m^\#\mathcal{B}$.
By Proposition \ref{prop:FamilyMorphismPull}, $(Lf_g^*\circ Lm^*(\mathcal{M}_\lambda))_{\lambda \in \Lambda, g\in G}$ is uniformly bounded with respect to $\mathcal{B} = f_g^\#\pi^\#\mathcal{B}$.
This shows the assertion.
\end{proof}
In Subsection \ref{sect:BornologyPrincipalBundle}, we have given a way to construct a bornology using a principal bundle.
We shall show that this bornology is $G$-equivariant if the bundle has a $G$-equivariant structure.
Let $G$ and $T$ be affine algebraic groups and $p\colon \widetilde{X}\rightarrow X$ a principal $T$-bundle over a smooth variety $X$.
Suppose that $\widetilde{X}$ and $X$ are $G\times T$-varieties and $p$ is $G\times T$-equivariant.
Let $\mathscr{A}_{\widetilde{X}}$ be a $G\times T$-equivariant algebra of twisted differential operators on $\widetilde{X}$.
Put $\Lambda :=(\lie{t}^*)^T$.
Then we have a family $\mathscr{A}_{X,\Lambda} = (\mathscr{A}_{X,\lambda})_{\lambda \in \Lambda}$ of $G\times T$-equivariant algebras, and its bornology $\mathcal{B}(X,\widetilde{X})$ as in Subsection \ref{sect:BornologyPrincipalBundle}.
\begin{proposition}\label{prop:G-equivariantBornologyPrincipal}
$\mathcal{B}(X,\widetilde{X})$ is $G$-equivariant.
\end{proposition}
\begin{proof}
Consider the following commutative diagram:
\begin{align*}
\xymatrix{
\widetilde{X} \ar[d]^-{p} & G\times \widetilde{X} \ar[d]^-{\textup{id}\times p}\ar[l]_-{\pi}\ar[r]^-{m}& \widetilde{X} \ar[d]^-{p} \\
X & G\times X \ar[l]_-{\pi}\ar[r]^-{m}& X,
}
\end{align*}
where $\pi$ and $m$ are the projection and the multiplication map, respectively.
Since $\mathscr{A}_{\widetilde{X}}$ is $G$-equivariant, $\pi^\# \mathscr{A}_{\widetilde{X}}$ and $m^\# \mathscr{A}_{\widetilde{X}}$ are canonically isomorphic.
We obtain a family $\mathscr{A}_{G\times X, \Lambda}$ constructed from the principal $T$-bundle $G\times \widetilde{X}\rightarrow G\times X$.
Then $\pi^\# \mathscr{A}_{X,\Lambda}$ and $m^\# \mathscr{A}_{X,\Lambda}$ are canonically isomorphic to $\mathscr{A}_{G\times X, \Lambda} = \mathscr{D}_G\boxtimes \mathscr{A}_{X,\Lambda}$.
Under this identification, by Lemma \ref{lem:pull-backOfBornologyPrincipalBundle}, we have
\begin{align*}
\pi^\#\mathcal{B}(X,\widetilde{X}) = \mathcal{B}(G\times X, G\times \widetilde{X}) = m^\#\mathcal{B}(X,\widetilde{X}).
\end{align*}
This implies that $\mathcal{B}(X,\widetilde{X})$ is $G$-equivariant.
\end{proof}
We shall show the uniqueness of $G$-equivariant bornologies on a homogeneous variety.
Let $G$ be an affine algebraic group, $H$ a closed subgroup of $G$,
and $\mathscr{D}_G$ the algebra of non-twisted differential operators on $G$.
We write $p\colon G\rightarrow G/H$ for the natural projection.
Then we obtain a $G$-equivariant algebra $\mathscr{D}_{G/H,\lambda}$ of twisted differential operators on $G/H$ for any $\lambda \in (\lie{h}^*)^H$.
See \eqref{eqn:DefinitionTwisted}.
It is well-known that any $G$-equivariant algebra of twisted differential operators is canonically isomorphic to some $\mathscr{D}_{G/H,\lambda}$ (see \cite[Theorem 4.9.2]{Ka89}).
This is because such an algebra is generated by the images of $\univ{g}$ and $\rsheaf{G/H}$.
Hence we consider a bornology of a family $\mathscr{D}_{G/H,\Lambda}:=(\mathscr{D}_{G/H,\Lambda(r)})_{r \in R}$ for $\Lambda\colon R\rightarrow (\lie{h}^*)^H$.
\begin{proposition}\label{prop:UniqueBornologyHomogeneous}
There exists a unique $G$-equivariant bornology of $\mathscr{D}_{G/H,\Lambda}$.
\end{proposition}
\begin{proof}
The existence is clear because $\mathcal{B}(G/H, G)$ is a $G$-equivariant bornology of $\mathscr{D}_{G/H,\Lambda}$ by Proposition \ref{prop:G-equivariantBornologyPrincipal}.
We shall show the uniqueness.
Let $\mathcal{B}$ be a $G$-equivariant bornology of $\mathscr{D}_{G/H,\Lambda}$.
By Proposition \ref{prop:FundamentalBornology} (iv), it is enough to show $p^\# \mathcal{B} = p^\# \mathcal{B}(G/H,G)$.
Let $\pi, m\colon G\times G/H\rightarrow G/H$ be the projection and the multiplication map, respectively, and $\iota\colon G\rightarrow G\times G/H$ a morphism given by $\iota(g) = (g, eH)$.
Using the $G$-equivariant structure, we identify the following three families:
\begin{align*}
m^\#\mathscr{D}_{G/H, \Lambda},\quad \pi^\# \mathscr{D}_{G/H,\Lambda},\quad (\mathscr{D}_{G}\boxtimes \mathscr{D}_{G/H,\Lambda(r)})_{r \in R}.
\end{align*}
Since $m\circ \iota = p$, by Proposition \ref{prop:pull-backBornology}, we have
\begin{align*}
p^\# \mathcal{B} = \iota^\# m^\#\mathcal{B} = \iota^\# \pi^\# \mathcal{B}
= \iota^\# (\mathcal{B}_{\textup{id}}\boxtimes \mathcal{B}) = \mathcal{B}_{\textup{id}},
\end{align*}
where $\mathcal{B}_{\textup{id}}$ is the equivalence class of the trivialization $(G, \textup{id}_G, \textup{id})$ of the constant family $(\mathscr{D}_{G})_{r \in R}$.
Therefore we have $p^\# \mathcal{B} = \mathcal{B}_{\textup{id}} = p^\# \mathcal{B}(G/H,G)$.
\end{proof}
\subsection{Uniformly bounded family of irreducible modules}
Let $K$ be an affine algebraic group and $X$ a $K$-variety.
Let $\mathscr{A}_{X,\Lambda}:=(\mathscr{A}_{X,\lambda})_{\lambda\in \Lambda}$ be a family of $K$-equivariant algebras of twisted differential operators on $X$.
Fix a $K$-equivariant bornology $\mathcal{B}$ of $\mathscr{A}_{X,\Lambda}$.
A classification of $K$-equivariant $\mathscr{A}_{X,\lambda}$-modules
is given by Beilinson--Bernstein \cite{BeBe81} (see also \cite[Theorem 2.4]{HMSW87}).
We review the classification.
Fix $\lambda \in \Lambda$ and $x \in X$.
We write $i\colon Kx \hookrightarrow X$ and $p\colon K\rightarrow Kx$ for the inclusion and the natural surjection, respectively.
Let $K_x$ denote the stabilizer of $x$ in $K$.
Since $i^\# \mathscr{A}_{X,\lambda}$ is $K$-equivariant and $Kx$ is homogeneous,
there is a unique element $\mu(\lambda)$ of $(\lie{k}_x^*)^{K_x}$ such that
$i^\# \mathscr{A}_{X,\lambda}$ is canonically isomorphic to
\begin{align*}
\mathscr{D}_{Kx, \mu(\lambda)} := (p_*(\mathscr{D}_{K})\otimes_{\mathcal{U}(\lie{k}_x)} \mathbb{C}_{\mu(\lambda)})^{K_x}.
\end{align*}
See \cite[Theorem 4.9.2]{Ka89}.
We identify $i^\# \mathscr{A}_{X,\lambda}$ with $\mathscr{D}_{Kx, \mu(\lambda)}$ by the isomorphism.
\begin{fact}\label{fact:ClassificationDmodule}
Let $\mathcal{M}$ be an irreducible coherent $(\mathscr{A}_{X,\lambda}, K)$-module whose support is $\overline{Kx}$.
Then there exists a unique irreducible $K_x$-module $F$ such that
\begin{enumerate}[(i)]
\item $\lie{k}_x$ acts on $F$ by the character $\mu(\lambda)$,
\item $\mathcal{M}$ is isomorphic to the unique irreducible submodule
of $D^0i_+(\textup{Ind}_{K_x}^K(F))$,
\end{enumerate}
where $\textup{Ind}_{K_x}^K(F)$ is the $(\mathscr{D}_{Kx,\mu(\lambda)}, K)$-module of local sections of the associated vector bundle $K\times_{K_x} F$ over $Kx \simeq K/K_x$.
In particular, $\mathcal{M}$ is holonomic.
\end{fact}
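As a standard sanity check of this classification (our illustration, not part of the source; we assume the natural equivariant structure, for which $\mu(\lambda)=0$ on both orbits), take the scaling action of $\mathbb{G}_m$ on the affine line:

```latex
% Example (ours): K = \mathbb{G}_m acting on X = \mathbb{A}^1 by scaling,
% \mathscr{A}_{X,\lambda} = \mathscr{D}_X untwisted.
% Orbits: \mathbb{C}^\times = K\cdot 1 with K_1 = \{e\}, and \{0\} with K_0 = \mathbb{G}_m.
% Support \overline{K\cdot 1} = \mathbb{A}^1: F is necessarily trivial, and the unique
%   irreducible submodule of D^0 j_+(\rsheaf{\mathbb{C}^\times}) is \rsheaf{X}.
% Support \{0\}: \lie{k}_0 must act on F by \mu(\lambda) = 0, so F is the trivial
%   character of \mathbb{G}_m, and \mathcal{M} \simeq \delta_0 := D^0 i_+(\mathbb{C}).
% Hence the irreducible (\mathscr{D}_X, \mathbb{G}_m)-modules are exactly \rsheaf{X} and \delta_0.
```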
We shall show that a family of $(\mathscr{A}_{X,\lambda}, K)$-modules with bounded lengths is uniformly bounded if $K$ has finitely many orbits in $X$.
\begin{lemma}\label{lem:KequivModules}
Let $F$ be an irreducible $K_x$-module.
Put $n=\dim_{\mathbb{C}}(K_x)$.
Assume that $\lie{k}_x$ acts on $F$ by the character $\mu(\lambda)$.
Then $\textup{Ind}_{K_x}^K(F)$ is isomorphic to a direct summand of $D^{-n}p_{+,\mu(\lambda)}(\rsheaf{K})$.
\end{lemma}
\begin{proof}
By Theorem \ref{thm:DirectImageTor} and the Poincar\'e duality (Fact \ref{fact:PoincareDuality}), we have
\begin{align*}
D^{-n}p_{+,\mu(\lambda)}(\rsheaf{K}) \simeq \mathrm{Tor}_n^{\mathcal{U}(\lie{k}_x)}(\mathbb{C}_{\mu(\lambda) - \delta}, p_*(\rsheaf{K}))
\simeq (p_*(\rsheaf{K}) \otimes \mathbb{C}_{\mu(\lambda)})^{(K_x)_0},
\end{align*}
where $\delta$ is the character $\lie{k}_x \ni X\mapsto \textup{tr}(\textup{ad}_{\lie{k}_x}(X))$.
The assertion follows from the isomorphisms and the Frobenius reciprocity.
\end{proof}
\begin{lemma}\label{lem:KequivModules2}
Let $\mathcal{M}$ be an irreducible coherent $(\mathscr{A}_{X,\lambda}, K)$-module whose support is $\overline{Kx}$.
Then $\mathcal{M}$ is isomorphic to a direct summand of $H^{-n} \circ Di_+\circ Dp_{+,\mu(\lambda)}(\rsheaf{K})$, where $n = \dim_{\mathbb{C}}(K_x)$.
\end{lemma}
\begin{proof}
Since $Kx$ is locally closed in $X$, the cohomology $D^{k}i_+(\mathcal{N})$ vanishes for any $k < 0$ and $\mathcal{N} \in \mathrm{Mod}_{qc}(\mathscr{A}_{X,\lambda})$.
Using truncation functors (see Subsection \ref{sect:truncation}),
we have
\begin{align*}
H^{-n} \circ Di_+\circ Dp_{+,\mu(\lambda)}(\rsheaf{K}) \simeq D^{0}i_+(D^{-n}p_{+,\mu(\lambda)}(\rsheaf{K})).
\end{align*}
Hence the assertion follows from Fact \ref{fact:ClassificationDmodule} and Lemma \ref{lem:KequivModules}.
\end{proof}
Let $\pi$ and $m$ be the projection and the multiplication map
from $K\times X$ to $X$, respectively.
We write $f_x(g) = (g, x)$ for $g \in K$ and $x \in X$.
Then we have $i\circ p = m\circ f_x$.
We denote by $D(f_x)_{+,\lambda}$ the direct image functor
$D^b_{qc}(\mathscr{D}_{K}) \rightarrow D^b_{qc}(\pi^\#\mathscr{A}_{X,\lambda})$.
By Proposition \ref{prop:FamilyMorphismPush}, $(D(f_x)_{+, \lambda}(\rsheaf{K}))_{x \in X, \lambda \in \Lambda}$ is uniformly bounded with respect to $\pi^\# \mathcal{B}$.
Since $\mathcal{B}$ and any algebra in $\mathscr{A}_{X,\Lambda}$ are $K$-equivariant,
we have $m^\# \mathscr{A}_{X,\Lambda} \simeq \pi^\# \mathscr{A}_{X,\Lambda}$
and $m^\#\mathcal{B} = \pi^\# \mathcal{B}$.
Therefore $(Dm_+ \circ D(f_x)_{+,\lambda}(\rsheaf{K}))_{x \in X, \lambda \in \Lambda}$ is uniformly bounded with respect to $\mathcal{B}$ by Theorem \ref{thm:FunctorOnUniformlyBounded}.
By Lemma \ref{lem:KequivModules2} and $i\circ p = m\circ f_x$, we obtain
\begin{proposition}\label{prop:UniformlyBoundedIrreducibles}
Let $\mathcal{M} \in \prod_{\lambda \in \Lambda}\mathrm{Mod}_{h}(\mathscr{A}_{X,\lambda}, K)$.
Assume that each $\mathcal{M}_\lambda$ is irreducible and that its support is the closure of some $K$-orbit depending on $\lambda$.
Then $\mathcal{M}$ is a uniformly bounded family with respect to $\mathcal{B}$.
\end{proposition}
\begin{theorem}\label{thm:UniformlyBoundedIrreducibles}
Let $\mathcal{M} \in \prod_{\lambda \in \Lambda}\mathrm{Mod}_{h}(\mathscr{A}_{X,\lambda}, K)$.
Assume that $K$ has finitely many orbits in $X$ and the length of each $\mathcal{M}_\lambda$ is bounded by a constant independent of $\lambda \in \Lambda$.
Then $\mathcal{M}$ is a uniformly bounded family with respect to $\mathcal{B}$.
\end{theorem}
For applications to the representation theory of real reductive Lie groups, we generalize the theorem, in a certain sense, to the universal covering group of $K$.
Retain the notation $X, K, \mathscr{A}_{X,\Lambda}, \mathcal{B}$ as above
and assume that $K$ is connected.
Fix $\lambda \in \Lambda$ for the moment.
Let $\nu$ be a character of $\lie{k}$ and $\mathcal{M}$ a quasi-coherent $\mathscr{A}_{X,\lambda}$-module.
We say that $\mathcal{M}$ is a twisted $(\mathscr{A}_{X,\lambda}, K)$-module with twist $\nu$ if the action of $\lie{k}$ on $\mathcal{M}\otimes \mathbb{C}_{\nu}$ lifts to an action of $K$.
Let $\mathscr{A}_{X,(\lambda, \nu)}$ be the $K$-equivariant algebra $\mathscr{A}_{X,\lambda}\otimes \textup{End}_{\mathbb{C}}(\mathbb{C}_{\nu})$, which is isomorphic to $\mathscr{A}_{X,\lambda}$ without the $K$-equivariant structures.
Then $\mathcal{M}$ is a twisted $(\mathscr{A}_{X,\lambda}, K)$-module with twist $\nu$
if and only if $\mathcal{M}$ admits a $K$-equivariant structure as an $\mathscr{A}_{X,(\lambda, \nu)}$-module.
Take $(U, \varphi, \Phi) \in \mathcal{B}$.
Then $(U, \varphi, (\Phi_\lambda)_{\lambda \in \Lambda, \nu \in (\lie{k}^*)^K})$
is a bounded trivialization of $(\mathscr{A}_{X,(\lambda, \nu)})_{\lambda\in \Lambda, \nu \in (\lie{k}^*)^K}$.
Since the $K$-action on $\mathscr{A}_{X,(\lambda, \nu)}$ is the same as that on $\mathscr{A}_{X,\lambda}$, the bornology defined by $(U, \varphi, (\Phi_\lambda)_{\lambda \in \Lambda, \nu \in (\lie{k}^*)^K})$ is $K$-equivariant.
\begin{corollary}\label{cor:UniformlyBoundedIrreducibles}
Proposition \ref{prop:UniformlyBoundedIrreducibles} and Theorem \ref{thm:UniformlyBoundedIrreducibles} hold even if all $\mathcal{M}_\lambda$
are twisted $(\mathscr{A}_{X, \lambda}, K)$-modules.
\end{corollary}
\subsection{Finite orbits and uniformly bounded family}
Retain the notation $X, K, \mathscr{A}_{X, \Lambda}, \mathcal{B}$ from the previous subsection.
Assume that $K$ has finitely many orbits in $X$ and $K$ is connected.
In this subsection, we consider the $\mathscr{A}_{X,\lambda}$-module $\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F)$
for a finite-dimensional $\lie{k}$-module $F$.
To estimate the length of $\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F)$, we need the following lemma about a complex of filtered modules.
\begin{lemma}\label{lem:FilteredCohomology}
Let $\mathcal{A}$ be a filtered ring and $(C^\bullet, d^\bullet)$ a complex of filtered $\mathcal{A}$-modules.
Then $\textup{gr}(H^i(C^\bullet))$ is isomorphic to a subquotient of $H^i(\textup{gr}(C^\bullet))$ for any $i \in \mathbb{Z}$.
\end{lemma}
\begin{proof}
Fix $i \in \mathbb{Z}$.
It is easy to see that the following canonical homomorphisms are injective:
\begin{align}
\Im(\textup{gr}(d^{i-1})) \rightarrow \textup{gr}(\Im(d^{i-1})) \rightarrow \textup{gr}(\mathrm{Ker}(d^i)) \rightarrow \mathrm{Ker}(\textup{gr}(d^i)), \label{eqn:ProofFilteredComplex}
\end{align}
where $\textup{gr}(d^k)\colon \textup{gr}(C^k)\rightarrow \textup{gr}(C^{k+1})$ is the homomorphism induced from $d^k\colon C^k \rightarrow C^{k+1}$.
The filtrations on $\Im(d^{i-1})$, $\mathrm{Ker}(d^{i})$ and $H^i(C^\bullet)$
are induced from that on $C^i$.
Hence we have $\textup{gr}(H^i(C^\bullet)) \simeq \textup{gr}(\mathrm{Ker}(d^i))/\textup{gr}(\Im(d^{i-1}))$.
This isomorphism and \eqref{eqn:ProofFilteredComplex} show the lemma.
\end{proof}
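A minimal example of our own showing that the subquotient in Lemma \ref{lem:FilteredCohomology} can be proper: take a differential that is an isomorphism but strictly lowers the filtration level.

```latex
% Toy example (ours): \mathcal{A} = \mathbb{C}, C^0 = C^1 = \mathbb{C}, d^0 = \mathrm{id}.
% Filtrations: F_n C^0 = \mathbb{C} for n \geq 1 (else 0); F_n C^1 = \mathbb{C} for n \geq 0 (else 0).
% Then d^0(F_n C^0) \subseteq F_n C^1, so the complex is filtered, but
% \textup{gr}_1(d^0)\colon \textup{gr}_1(C^0) = \mathbb{C} \to \textup{gr}_1(C^1) = 0 vanishes. Hence
H^i(C^\bullet) = 0, \qquad H^0(\textup{gr}(C^\bullet)) \simeq H^1(\textup{gr}(C^\bullet)) \simeq \mathbb{C},
% and \textup{gr}(H^i(C^\bullet)) = 0 is a proper subquotient of H^i(\textup{gr}(C^\bullet)).
```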
Let $\pi\colon T^*X\rightarrow X$ be the cotangent bundle.
We have a homomorphism $\sigma\colon S(\lie{k}) \rightarrow \rsheaf{T^*X}$
defined by taking the principal symbol of $\mathscr{A}_{X,\lambda}$.
The homomorphism $\sigma$ does not depend on the choice of the $K$-equivariant algebra $\mathscr{A}_{X,\lambda}$.
In fact, the composition $\lie{k}\rightarrow \mathcal{P}(\mathscr{A}_{X,\lambda})\rightarrow \mathcal{T}_X$ coincides with the differential of the $K$-action on $X$,
and $\sigma$ is determined by $\sigma|_{\lie{k}}$.
Here $\mathcal{P}(\mathscr{A}_{X,\lambda})$ is the Picard algebroid associated to $\mathscr{A}_{X,\lambda}$ (see Subsection \ref{sect:PicardAlgebroid}).
\begin{lemma}\label{lem:HolonomicBoundLength}
Fix $\lambda \in \Lambda$.
Let $\mathcal{M}$ be an $\mathscr{A}_{X,\lambda}$-module with a filtration,
and $\mathcal{N}$ a coherent $\pi_*\rsheaf{T^*X}$-module annihilated by $\sigma(\lie{k})$.
If $\textup{gr}(\mathcal{M})$ is isomorphic to a subquotient of $\mathcal{N}^{\oplus n}$,
then $\mathcal{M}$ is holonomic and there exists a constant $C(\mathcal{N})$ depending only on $\mathcal{N}$ such that
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{X,\lambda}}(\mathcal{M}) \leq C(\mathcal{N}) \cdot n.
\end{align*}
\end{lemma}
\begin{proof}
Put $\widetilde{\mathcal{N}}:=\rsheaf{T^*X}\otimes_{\pi^{-1}\pi_* \rsheaf{T^*X}}\mathcal{N}$.
Since $\sigma(\lie{k})$ annihilates $\widetilde{\mathcal{N}}$ and $K$ has finitely many orbits in $X$, the support of $\widetilde{\mathcal{N}}$ is contained in the union of the conormal bundles of all $K$-orbits in $X$.
Since $\textup{gr}(\mathcal{M})$ is isomorphic to a subquotient of $\mathcal{N}^{\oplus n}$, the filtration of $\mathcal{M}$ is good, and hence $\mathcal{M}$ is coherent by \cite[Theorem 2.1.3]{HTT08}.
Moreover, the characteristic variety of $\mathcal{M}$ is a union of the conormal bundles of some $K$-orbits in $X$.
This shows that $\mathcal{M}$ is holonomic.
Let $C(\mathcal{N})$ be the sum of multiplicities of $\widetilde{\mathcal{N}}$ along the conormal bundles of all $K$-orbits.
Let $m(\mathcal{M})$ be the sum of multiplicities in the characteristic cycle of $\mathcal{M}$.
Then we have $m(\mathcal{M}) \leq C(\mathcal{N})\cdot n$.
Since the length of $\mathcal{M}$ is bounded by $m(\mathcal{M})$ (see \cite[Proposition 5.1.9]{HTT08}), this shows the lemma.
\end{proof}
Let $U$ be the unipotent radical of $K$.
Replacing $K$ with a finite covering if necessary, we may assume that $[K/U, K/U]$ is simply connected.
\begin{lemma}\label{lem:FiniteOrbitDmodule}
There exists some constant $C > 0$ such that
for any finite-dimensional $\lie{k}$-module $F$, $\lambda \in \Lambda$ and $i\in \mathbb{Z}$, we have
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{X,\lambda}}(\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F))
\leq C\cdot \dim_{\mathbb{C}}(F),
\end{align*}
where $\mathscr{A}_{X,\lambda}$ is considered as a $\univ{k}$-module by the right action.
Moreover, any composition factor of $\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F)$ is a holonomic twisted $(\mathscr{A}_{X,\lambda}, K)$-module.
\end{lemma}
\begin{remark}
The lemma for $\dim_{\mathbb{C}}(F) = 1$ is proved in \cite{Ta18}.
\end{remark}
\begin{proof}
Fix $F$ and $\lambda$.
By induction on the length of $F$, the assertion can be reduced to the case of irreducible $F$.
Since $F$ is irreducible, $\lie{k}/\textup{Ann}_\lie{k}(F)$ is reductive, where $\textup{Ann}$ means the annihilator of a module.
Hence we can take a character $\mu$ of $\lie{k}$ such that $F\otimes \mathbb{C}_\mu$ lifts to a $K$-module.
This implies that $\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F)$ is a twisted $(\mathscr{A}_{X,\lambda}, K)$-module with twist $\mu$.
In fact, the homology can be computed by an $h$-complex of weak $(\mathscr{A}_{X,(\lambda, \mu)}, K)$-modules in the sense of Bernstein--Lunts \cite[2.5]{BeLu95}.
See \cite[Proposition 3.3]{Ki12} for the complex.
Here $\mathscr{A}_{X,(\lambda, \mu)}$ is a $K$-equivariant algebra defined before Corollary \ref{cor:UniformlyBoundedIrreducibles}.
To compute $\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F)$, we shall use the Chevalley--Eilenberg chain complex.
See Fact \ref{fact:LiealgebraCohomology}.
Let $(\mathscr{A}_{X,\lambda}\otimes F \otimes \wedge^{-\bullet} \lie{k}, d^\bullet )$ be the complex.
For any $i\geq 0$, the differential $d^{-i}$ is given by
\begin{align}
&d^{-i}(P\otimes f \otimes (X_1 \wedge X_2 \wedge \cdots \wedge X_i)) \nonumber \\
= &\sum_{a} (-1)^{a+1} (PX_a\otimes f - P\otimes X_a f) \otimes X_1 \wedge X_2 \wedge \cdots \wedge \hat{X_a} \wedge \cdots \wedge X_i \nonumber \\
+ &\sum_{a < b} (-1)^{a+b} P\otimes f \otimes [X_a, X_b]\wedge X_1 \wedge X_2 \wedge \cdots \wedge \hat{X_a} \wedge \cdots \wedge \hat{X_b} \wedge \cdots \wedge X_i. \label{eqn:ProofChevalleyEilenberg}
\end{align}
We denote by $G$ the order filtration of $\mathscr{A}_{X,\lambda}$.
It induces a filtration $\widetilde{G}^i$ on $\mathscr{A}_{X,\lambda}\otimes F \otimes \wedge^i \lie{k}$ as
\begin{align*}
\widetilde{G}^i_n(\mathscr{A}_{X,\lambda}\otimes F \otimes \wedge^i \lie{k})
= G_{n-i}(\mathscr{A}_{X,\lambda})\otimes F\otimes \wedge^i \lie{k}
\end{align*}
for any $i \geq 0$.
Then the complex $(\mathscr{A}_{X,\lambda}\otimes F \otimes \wedge^{-\bullet} \lie{k}, d^\bullet )$ is a complex of filtered $\mathscr{A}_{X,\lambda}$-modules.
By \eqref{eqn:ProofChevalleyEilenberg}, we have
\begin{align*}
H^{-i}(\textup{gr}(\mathscr{A}_{X,\lambda}\otimes F \otimes \wedge^{-\bullet} \lie{k}))
\simeq \mathrm{Tor}^{S(\lie{k})}_i(\pi_*\rsheaf{T^*X}, \mathbb{C}) \otimes F
\end{align*}
as $\pi_* \rsheaf{T^*X}$-modules.
Note that $\mathrm{Tor}^{S(\lie{k})}_i(\pi_*\rsheaf{T^*X}, \mathbb{C})$ is a coherent $\pi_*\rsheaf{T^*X}$-module because each term of the complex is coherent.
By Lemma \ref{lem:FilteredCohomology}, $\textup{gr}(\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F))$ is isomorphic to a subquotient of $\mathrm{Tor}^{S(\lie{k})}_i(\pi_*\rsheaf{T^*X}, \mathbb{C}) \otimes F$.
We can apply Lemma \ref{lem:HolonomicBoundLength} to $\mathcal{M}=\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F)$ and $\mathcal{N}=\mathrm{Tor}^{S(\lie{k})}_i(\pi_*\rsheaf{T^*X}, \mathbb{C})$.
Hence there is a constant $C_i$ depending only on $\mathrm{Tor}^{S(\lie{k})}_i(\pi_*\rsheaf{T^*X}, \mathbb{C})$ such that
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{X,\lambda}}(\mathrm{Tor}^{\univ{k}}_i(\mathscr{A}_{X,\lambda}, F))
\leq C_i \cdot \dim_{\mathbb{C}}(F).
\end{align*}
The maximum $C:=\max_i\set{C_i}$ exists because $\mathrm{Tor}^{\univ{k}}_i(\cdot, \cdot)$ vanishes for any $i > \dim_\mathbb{C}(\lie{k})$.
The assertion in the lemma holds for this $C$.
\end{proof}
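The passage to the associated graded complex used in the proof can be made explicit; the following is our own sketch of why only the symbol term survives.

```latex
% Sketch (ours). Passing to \textup{gr} with respect to \widetilde{G} kills the terms
% P \otimes X_a f and P \otimes f \otimes [X_a, X_b] \wedge \cdots in
% \eqref{eqn:ProofChevalleyEilenberg}, since they do not raise the order of the
% \mathscr{A}_{X,\lambda}-factor. What survives is
\textup{gr}(d^{-i})\bigl(\bar{P}\otimes f \otimes X_1\wedge\cdots\wedge X_i\bigr)
  = \sum_{a} (-1)^{a+1}\, \bar{P}\,\sigma(X_a) \otimes f \otimes
    X_1\wedge\cdots\wedge \hat{X_a}\wedge\cdots\wedge X_i,
% i.e.\ the Koszul complex of S(\lie{k}) acting on \pi_*\rsheaf{T^*X} through \sigma,
% tensored with F; its homology is \mathrm{Tor}^{S(\lie{k})}_i(\pi_*\rsheaf{T^*X}, \mathbb{C}) \otimes F.
```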
The following corollary is a direct consequence of Lemma \ref{lem:FiniteOrbitDmodule} and Corollary \ref{cor:UniformlyBoundedIrreducibles}.
\begin{corollary}\label{cor:UniformlyBoundedTor}
Let $\mathcal{F}$ be a set of finite-dimensional $\lie{k}$-modules with bounded dimensions.
Then the family $(\mathrm{Tor}^{\univ{k}}_{i}(\mathscr{A}_{X,\lambda}, F))_{i \in \mathbb{Z}, F \in \mathcal{F}, \lambda \in \Lambda}$ is uniformly bounded with respect to $\mathcal{B}$.
\end{corollary}
Let $\mathscr{A}_{Y, \Lambda}$ be a family of algebras of twisted differential operators on a smooth variety $Y$.
Fix a bornology $\mathcal{B}'$ of $\mathscr{A}_{Y, \Lambda}$.
We write $q\colon X\times Y\rightarrow Y$ for the projection onto the second factor.
\begin{theorem}\label{thm:UniformlyBoundedFiniteOrbits}
Let $\mathcal{M} \in \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}\boxtimes \mathscr{A}_{Y,\Lambda}, \mathcal{B}\boxtimes \mathcal{B}')$.
If all $\mathcal{M}_{\lambda}$ are $q_*$-acyclic, then there exists a constant $C > 0$ such that
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{Y,\lambda}}(\mathrm{Tor}^{\univ{k}}_i(F, q_*(\mathcal{M}_\lambda))) \leq C\cdot \dim_{\mathbb{C}}(F)
\end{align*}
for any finite-dimensional $\lie{k}$-module $F$, $i \in \mathbb{Z}$ and $\lambda \in \Lambda$.
Moreover, the family $(\mathrm{Tor}^{\univ{k}}_i(F, q_*(\mathcal{M}_\lambda)))_{\lambda \in \Lambda, i \in \mathbb{Z}, F \in \mathcal{F}}$ is uniformly bounded with respect to $\mathcal{B}'$.
Here $\mathcal{F}$ is a set of finite-dimensional $\lie{k}$-modules whose dimensions are bounded.
\end{theorem}
\begin{proof}
For $\mathcal{N} \in D^b_h(\mathscr{A}_{X,\lambda}^\mathrm{op})$, put
\begin{align*}
T^i_\lambda(\mathcal{N}) := R^iq_*(p^{-1}\mathcal{N}\otimes^L_{p^{-1}\mathscr{A}_{X,\lambda}} \mathcal{M}_\lambda),
\end{align*}
where $p\colon X\times Y \rightarrow X$ is the projection onto the first factor.
For $\lambda \in \Lambda$, let $I_\lambda$ be the set of all (isomorphism classes of) irreducible twisted $(\mathscr{A}^\mathrm{op}_{X,\lambda}, K)$-modules.
By Corollary \ref{cor:UniformlyBoundedIrreducibles}, the family $(\mathcal{N})_{\lambda \in \Lambda, \mathcal{N} \in I_\lambda}$ is a uniformly bounded family with respect to $\mathcal{B}$.
Hence by Theorem \ref{thm:UniformlyBoundedIntegralTransform}, we can define a constant $C_1$ as
\begin{align*}
C_1 := \max\set{\mathrm{Len}_{\mathscr{A}_{Y,\lambda}}(T^i_\lambda(\mathcal{N})): \lambda \in \Lambda, \mathcal{N} \in I_\lambda, i\in \mathbb{Z}}.
\end{align*}
Fix a finite-dimensional $\lie{k}$-module $F$.
Take a free resolution $J^\bullet$ of the $\univ{k}$-module $F$.
Then we have
\begin{align*}
T^{-i}_\lambda(J^\bullet \otimes_{\univ{k}} \mathscr{A}_{X,\lambda}) &= R^{-i}q_*(p^{-1}(J^\bullet \otimes_{\univ{k}} \mathscr{A}_{X,\lambda})\otimes^L_{p^{-1}\mathscr{A}_{X,\lambda}} \mathcal{M}_\lambda) \\
&\simeq R^{-i} q_*(J^\bullet \otimes_{\univ{k}}\mathcal{M}_\lambda) \\
&\simeq H^{-i}(J^\bullet \otimes_{\univ{k}}q_*(\mathcal{M}_\lambda)) \\
&\simeq \mathrm{Tor}^{\univ{k}}_i(F, q_*(\mathcal{M}_\lambda)).
\end{align*}
Here the second isomorphism holds because $J^k \otimes_{\univ{k}}\mathcal{M}_\lambda$ is isomorphic to a direct sum of some copies of $\mathcal{M}_\lambda$
as a sheaf, and $\mathcal{M}_\lambda$ is $q_*$-acyclic.
This shows the second assertion by Theorem \ref{thm:UniformlyBoundedIntegralTransform} and Corollary \ref{cor:UniformlyBoundedTor}.
To show the first assertion, we remark that $H^{-i}(J^\bullet \otimes_{\univ{k}} \mathscr{A}_{X,\lambda})$ is isomorphic to $\mathrm{Tor}^{\univ{k}}_i(F, \mathscr{A}_{X,\lambda})$ as an $\mathscr{A}_{X,\lambda}^\mathrm{op}$-module.
By Lemma \ref{lem:AdditiveFunctionDerived} (ii), we have
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{Y,\lambda}}(\mathrm{Tor}^{\univ{k}}_i(F, q_*(\mathcal{M}_\lambda))) &= \mathrm{Len}_{\mathscr{A}_{Y,\lambda}}(T^{-i}_\lambda(J^\bullet \otimes_{\univ{k}} \mathscr{A}_{X,\lambda})) \\
&\leq \sum_{j=0}^{\dim_{\mathbb{C}}(\lie{k})} \mathrm{Len}_{\mathscr{A}_{Y,\lambda}} (T^{-i+j}_\lambda(\mathrm{Tor}^{\univ{k}}_j(F, \mathscr{A}_{X,\lambda}))).
\end{align*}
By Lemma \ref{lem:FiniteOrbitDmodule}, there is a constant $C_2$ independent of $F$ such that
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{X,\lambda}^\mathrm{op}}(\mathrm{Tor}^{\univ{k}}_j(F, \mathscr{A}_{X,\lambda})) \leq C_2 \cdot \dim_{\mathbb{C}}(F)
\end{align*}
for any $j \in \mathbb{Z}$ and $\lambda \in \Lambda$,
and any composition factor of the module is in $I_\lambda$.
By Lemma \ref{lem:AdditiveFunctionDerived} (i), we obtain
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{Y,\lambda}}(\mathrm{Tor}^{\univ{k}}_i(F, q_*(\mathcal{M}_\lambda))) \leq C_1\cdot C_2 \cdot \dim_{\mathbb{C}}(F) (\dim_{\mathbb{C}}(\lie{k}) + 1).
\end{align*}
We have taken $C_1$ and $C_2$ independently of $F$, $i$ and $\lambda$.
Therefore we have proved the theorem.
\end{proof}
\section{Introduction}
In this paper, we introduce a notion of uniformly bounded families of $\mathscr{D}$-modules, which are good families of holonomic $\mathscr{D}$-modules with bounded lengths.
We show that the uniform boundedness is preserved by fundamental operations of $\mathscr{D}$-modules such as direct images, inverse images and taking subquotients.
By the Beilinson--Bernstein correspondence \cite{BeBe81}, we can deduce several boundedness results about the representation theory of complex reductive Lie algebras from corresponding results of uniformly bounded families of $\mathscr{D}$-modules.
In the representation theory of real reductive Lie groups, finiteness results about lengths of modules and multiplicities in branching laws are fundamental and enable us to study Harish-Chandra modules and unitary representations.
We list typical examples of the results: finiteness of the lengths of Verma modules and principal series representations, Harish-Chandra's admissibility theorem \cite{Ha53_admissible}, irreducibility of $\univ{g}^K$-actions on $K$-isotypic components, and finiteness of multiplicities in the Plancherel formula of symmetric spaces \cite{Ba87,KoOs13}.
Our main concern is that the finiteness is uniform.
The length of a Verma module is bounded by some constant independent of its highest weight, and a similar result holds for principal series representations.
The former is an easy consequence of Soergel's theorem \cite{So90} (see also Remark \ref{rmk:Soergel}), and the latter is proved by Kobayashi--Oshima in \cite{KoOs13}.
In \cite{KoOs13}, T.\ Kobayashi and T.\ Oshima give criteria for the finiteness and the uniform boundedness of multiplicities in the branching problem and harmonic analysis of real reductive Lie groups.
The criteria are given by conditions on the existence of open orbits in flag varieties,
and proved by using hyperfunction boundary value maps.
A.\ Aizenbud, D.\ Gourevitch and A.\ Minchenko give an alternative proof of some of these results using families of holonomic $\mathscr{D}$-modules in \cite{AiGoDm16}.
T.\ Tauchi proves similar results based on the finiteness of hyperfunction solutions in \cite{Ta18}.
Their results are one of our motivations.
In this paper, we do not deal with concrete applications to the branching problem and harmonic analysis.
We concentrate on providing fundamental properties of uniformly bounded families, and preparing abstract results for such applications.
See Proposition \ref{prop:TorUniformlyBoundedGmod} and Remark \ref{rmk:TorUniformlyBoundedGmod} for an easy application to the estimate of multiplicities.
Let us state the definition of uniformly bounded families and their properties.
Our definition is based on Bernstein's work \cite{Be72}.
In that paper, he introduced the multiplicity $m(M)$ of a module $M$ over the Weyl algebra $\ntDalg{\mathbb{C}^n}$, and proved that the multiplicity is well-behaved under direct images, inverse images and taking subquotients.
We denote by $\mathrm{Mod}_h(\mathscr{D}_{X})$ the category of holonomic $\mathscr{D}$-modules on a smooth variety $X$.
Let $f\colon \mathbb{C}^n \rightarrow \mathbb{C}^m$ be a morphism of algebraic varieties of degree $d'$
and set $d = \max(1, d')$.
Let $Df_+$ (resp. $L f^*$) denote the direct (resp. inverse) image functor.
Then we have
\begin{align*}
\sum_i m(D^if_+(\mathcal{M})) \leq d^{n+m} m(\mathcal{M})\text{ and } \sum_i m(L_i f^*(\mathcal{N})) \leq d^{n+m} m(\mathcal{N})
\end{align*}
for any $\mathcal{M} \in \mathrm{Mod}_h(\mathscr{D}_{\mathbb{C}^n})$ and $\mathcal{N} \in \mathrm{Mod}_h(\mathscr{D}_{\mathbb{C}^m})$ (see Fact \ref{fact:WeylAlgebraDirectInverseImage}).
Here we put $m(\mathcal{M}):= m(\Gamma(\mathcal{M}))$.
The key point is that the coefficient $d^{n+m}$ is independent of $\mathcal{M}$ (or $\mathcal{N}$).
In other words, the estimates of the multiplicities are uniform with respect to $\mathcal{M}$ (or $\mathcal{N}$).
This is the starting point of our definition.
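For instance, taking $n = m = 1$ and the power map $f(x) = x^d$ (so $d' = d$), the displayed inequality specializes to
\begin{align*}
\sum_i m(D^if_+(\mathcal{M})) \leq d^{2}\, m(\mathcal{M})
\end{align*}
for every $\mathcal{M} \in \mathrm{Mod}_h(\mathscr{D}_{\mathbb{C}})$: the coefficient $d^2$ is determined by $f$ alone, uniformly in $\mathcal{M}$.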
Let $\mathscr{A}_{X, \Lambda}:=(\mathscr{A}_{X,\lambda})_{\lambda \in \Lambda}$
be a family of algebras of twisted differential operators on a smooth variety $X$ over $\mathbb{C}$.
We say that $(U,\varphi, \Phi)$ is a trivialization of $\mathscr{A}_{X,\Lambda}$
if $\varphi \colon U\rightarrow X$ is a surjective \'etale morphism and $\Phi = (\Phi_\lambda)_{\lambda \in \Lambda}$ is a family of isomorphisms $\Phi_\lambda \colon \varphi^\#\mathscr{A}_{X,\lambda} \rightarrow \mathscr{D}_{U}$.
Here $\varphi^\#$ is the pull-back of algebras of twisted differential operators by $\varphi$.
Take a trivialization $(U, \varphi, \Phi)$ with affine $U$ and a closed embedding $\iota \colon U\rightarrow \mathbb{C}^n$.
Then for a family $\mathcal{M} \in \prod_{\lambda \in \Lambda}\mathrm{Mod}_h(\mathscr{A}_{X,\lambda})$, we can consider a function
\begin{align}
\Lambda \ni \lambda \mapsto m(\iota_+(\varphi^*(\mathcal{M}_\lambda))) \in \mathbb{N}.
\label{eqn:MultiplicityFunction}
\end{align}
The boundedness of the function does not depend on $\iota$ (see Proposition \ref{prop:LocalMultiplicityEtale}), but does depend on the isomorphisms $\Phi$.
We introduce a relation $\sim$ of trivializations.
For two trivializations $(U,\varphi, \Phi)$ and $(V,\psi, \Psi)$, we write $(U,\varphi, \Phi) \sim (V,\psi, \Psi)$ if the set
\begin{align*}
\set{\widetilde{\varphi}^\#\Psi_\lambda \circ (\widetilde{\psi}^\#\Phi_\lambda)^{-1}: \lambda \in \Lambda} \subset \textup{Aut}(\mathscr{D}_{U\times_X V}) \simeq \mathcal{Z}(U\times_X V)
\end{align*}
spans a finite-dimensional subspace of the space $\mathcal{Z}(U\times_X V)$ of closed $1$-forms.
Here $\widetilde{\varphi}\colon U\times_X V\rightarrow V$ and $\widetilde{\psi}\colon U\times_X V \rightarrow U$ are the projections of the fiber product.
See Definition \ref{def:BoundedTrivialization}.
We say that a trivialization $T$ is bounded if $T\sim T$.
Although the relation is not an equivalence relation on all trivializations,
it is an equivalence relation on bounded trivializations.
Moreover, if two bounded trivializations $T=(U,\varphi, \Phi)$ and $S=(V, \psi, \Psi)$ with affine $U$ and $V$ are equivalent, the boundedness of the function \eqref{eqn:MultiplicityFunction} defined for $T$ is equivalent to that for $S$.
An equivalence class of bounded trivializations is called a bornology of $\mathscr{A}_{X,\Lambda}$ (see Definition \ref{def:EquivalenceBornology}).
For a bornology $\mathcal{B}$ of $\mathscr{A}_{X,\Lambda}$, we say that $\mathcal{M} \in \prod_{\lambda\in \Lambda}\mathrm{Mod}_{h}(\mathscr{A}_{X,\lambda})$ is a uniformly bounded family with respect to $\mathcal{B}$
if the function \eqref{eqn:MultiplicityFunction} defined for any/some $T \in \mathcal{B}$ is bounded.
We denote by $\mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})$ the full subcategory of $\prod_{\lambda\in \Lambda}\mathrm{Mod}_{h}(\mathscr{A}_{X,\lambda})$ whose objects are uniformly bounded.
Similarly, we define a derived category version $D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})$.
See Definition \ref{def:UniformlyBoundedFamilyOri} for the details.
Corresponding to operations of $\mathscr{A}_{X,\Lambda}$, we define operations of bornologies in natural ways: pull-back $f^\#\mathcal{B}$, external tensor product $\mathcal{B}\boxtimes \mathcal{B}'$, twisting by an invertible sheaf $\mathcal{B}^\mathcal{L}$ and opposite $\mathcal{B}^\mathrm{op}$.
The following theorem is a fundamental result about uniformly bounded families.
See Subsections \ref{sect:UniformlyBoundedFamily} and \ref{sect:twisting}.
\begin{theorem}\label{thm:IntroUniformlyBounded}
The uniform boundedness is preserved by direct images, inverse images, external tensor products, twisting by an invertible sheaf and taking subquotients.
\end{theorem}
For example, for a morphism $f\colon Y\rightarrow X$ of smooth varieties, we can define
a direct image functor
\begin{align*}
Df_+ \colon D^b_{ub}(f^\#\mathscr{A}_{X,\Lambda}, f^\# \mathcal{B}) \rightarrow D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})
\end{align*}
via $Df_+(\mathcal{M}) = (Df_+(\mathcal{M}_\lambda))_{\lambda \in \Lambda}$,
which is the restriction of the direct product of the direct image functors $D^b_h(f^\#\mathscr{A}_{X,\lambda}) \rightarrow D^b_h(\mathscr{A}_{X,\lambda})$ ($\lambda \in \Lambda$).
Here $f^\#\mathscr{A}_{X,\Lambda}$ is the family $(f^\# \mathscr{A}_{X,\lambda})_{\lambda \in \Lambda}$.
The proofs for the last three operations in Theorem \ref{thm:IntroUniformlyBounded} follow easily from the definition of uniformly bounded families.
The proofs for the others are essentially the same as the proof that these operations preserve holonomicity (see e.g.\ \cite[VII. \S 12]{Bo87} and \cite[3.2]{HTT08}).
When each $\mathscr{A}_{X,\lambda}$ is $G$-equivariant, we can define a notion of $G$-equivariant bornologies in a natural way.
The $G$-equivariance is preserved by the pull-back by a $G$-equivariant morphism.
It is important for the representation theory that if $X$ is a homogeneous variety $G/H$, then any family of $G$-equivariant algebras of twisted differential operators admits a unique $G$-equivariant bornology (see Proposition \ref{prop:UniqueBornologyHomogeneous}).
In their paper \cite{BeBe81}, Beilinson and Bernstein give a way to classify equivariant $\mathscr{D}$-modules.
Combining the classification and the notion of $G$-equivariant bornologies,
we obtain
\begin{theorem}[Theorem \ref{thm:UniformlyBoundedIrreducibles}]\label{thm:IntroIrreducibles}
Let $\mathcal{B}$ be a bornology of $\mathscr{A}_{X,\Lambda}$.
Suppose that $X$ is a $G$-variety for an affine algebraic group $G$,
and $\mathcal{B}$ and $\mathscr{A}_{X,\Lambda}$ are $G$-equivariant.
If $G$ has finitely many orbits in $X$, then any family of $(\mathscr{A}_{X,\lambda}, G)$-modules with bounded lengths
is uniformly bounded with respect to $\mathcal{B}$.
\end{theorem}
In Section \ref{sect:ExampleUBF}, we give several methods to construct bornologies and uniformly bounded families from algebraic group actions.
In particular, we will see that there are many uniformly bounded families.
Let us state applications of uniformly bounded families to the representation theory.
Let $G$ be a connected reductive algebraic group over $\mathbb{C}$ and $B$ a Borel subgroup of $G$.
By the Beilinson--Bernstein correspondence, any $\lie{g}$-module with an infinitesimal character is isomorphic to $\Gamma(\mathcal{M})$ for some twisted $\mathscr{D}$-module on $G/B$.
We always choose the twist of $\mathscr{D}$ such that $\Gamma$ is exact on the category of quasi-coherent twisted $\mathscr{D}$-modules.
We say that a family $(M_i)_{i \in I}$ of $\lie{g}$-modules is uniformly bounded if the lengths of $M_i$ are bounded and the localization of the family of all composition factors of all $M_i$ is a uniformly bounded family on $G/B$.
The uniform boundedness is preserved by several operations of $\lie{g}$-modules such as taking subquotients, tensoring with finite-dimensional modules and cohomological parabolic inductions.
This follows from corresponding results for $\mathscr{D}$-modules in Theorem \ref{thm:IntroUniformlyBounded}.
By Theorem \ref{thm:IntroIrreducibles}, any family of Harish-Chandra modules (or objects in the BGG category $\mathcal{O}$) with bounded lengths is uniformly bounded.
This implies that many families in the representation theory of real reductive Lie groups are uniformly bounded.
We shall state the preservation result for cohomological parabolic inductions.
Let $P$ be a parabolic subgroup of $G$ and $L$ a Levi subgroup of $P$.
Let $(\lie{g}, K)$ be a pair (see Definition \ref{def:pair}).
Take a reductive subgroup $K_L \subset K$ such that $\lie{k}_L \subset \lie{k}\cap \lie{l}$ and $K_L$ normalizes $\lie{l}$ and $\lie{p}$.
\begin{theorem}[Theorem \ref{thm:uniformlyBoundedLengthCohInd}]
Let $(M_i)_{i \in I}$ be a uniformly bounded family of $(\lie{l}, K_L)$-modules.
Then the family $(\Dzuck{K}{K_L}{j}(\univ{g}\otimes_{\univ{p}} M_i))_{i \in I, j \in \mathbb{Z}}$ is uniformly bounded,
where $\Dzuck{K}{K_L}{j}$ is the $j$-th Zuckerman derived functor.
\end{theorem}
As mentioned before, the lengths of Verma modules (or principal series representations) are bounded, which is a special case of the theorem.
It is well-known that the length of a cohomologically induced module is finite (see e.g.\ \cite[Theorem 0.46]{KnVo95_cohomological_induction}).
For the proof of the theorem, we need the localization of the Zuckerman derived functor.
In this paper, we construct the localization following F.\ Bien \cite{Bi90}.
A conceptual treatment of the localization using the equivariant derived category
is given by S.\ N.\ Kitchen \cite{Ki12}.
See also \cite{MiPa98}.
We do not treat the equivariant derived category in this paper.
An algebra of invariant differential operators plays an important role in the representation theory of real reductive Lie groups, such as the Schur--Weyl duality and the compact Howe duality \cite{Ho89}, a characterization of compact Gelfand pairs \cite{Th82}, and
Harish-Chandra's study of $(\lie{g}, K)$-modules \cite{Ha53_admissible,Ha54_subquotient_theorem}.
If $(\lie{g}, K)$ is a pair with connected reductive group $K$, then the $\univ{g}^K$-action on a non-zero $K$-isotypic component of an irreducible $(\lie{g}, K)$-module is irreducible.
This is a classical result that follows from the Jacobson density theorem and complete reducibility of the $K$-action (see e.g.\ \cite[Section 4.2]{GoWa09}).
The following theorem can be considered as a generalization of the result.
Let $G'$ be a reductive subgroup of $G$ and $(\lie{g'}, K')$ a subpair of $(\lie{g}, K)$.
Suppose that $K'$ is a reductive subgroup of $K$ and $\textup{Ad}_{\lie{g}}(K')$ is contained in $\textup{Ad}_{\lie{g}}(G')$.
\begin{theorem}[Theorem \ref{thm:uniformlyBoundedLengthGeneralGK}]\label{thm:intro:Zuckerman}
Let $(V_i)_{i \in I}$ and $(V'_i)_{i \in I}$ be uniformly bounded families of $(\lie{g}, K)$-modules and $(\lie{g'}, K')$-modules, respectively (e.g.\ families of irreducible Harish-Chandra modules).
Then there exists a constant $C$ such that for any $i \in I$ and $j \in \mathbb{Z}$, we have
\begin{align*}
\mathrm{Len}_{\univ{g}^{G'}}(H_j(\lie{g'}, K'; V_i \otimes V'_i)) \leq C,
\end{align*}
where $\mathrm{Len}(\cdot)$ means the length of a module.
\end{theorem}
One of our motivations for Theorem \ref{thm:intro:Zuckerman} is to study multiplicities in the branching problem and harmonic analysis of real reductive Lie groups.
The theorem asserts that the multiplicities are roughly controlled by $\univ{g}^{G'}$.
We can give criteria for the uniform boundedness of the multiplicities by a ring invariant of $\univ{g}^{G'}$.
We postpone the results to the sequel \cite{Ki20}.
Another motivation of Theorem \ref{thm:intro:Zuckerman} is the Howe duality \cite{Ho89_transcending}.
If $V$ is the Segal--Shale--Weil representation and $(G', H')$ is a reductive dual pair of $G$ (i.e.\ $Z_G(G') = H', Z_{G}(H') = G'$), then Theorem \ref{thm:intro:Zuckerman} asserts that (higher) theta lifts $\Theta_i(V')=H_i(\lie{g'}, K'; V\otimes \dual{V'})$ are of finite length as $\lie{h'}$-modules and the lengths are bounded.
By our method, we cannot prove one of the important parts of Howe's theorem, namely that the theta lift $\Theta_0(V')$ has a unique irreducible quotient.
However, our theorem enables us to define the Euler--Poincar\'e characteristic of the higher theta lifts.
See Theorem \ref{thm:ThetaLift}.
We remark that the Euler--Poincar\'e characteristic of the theta lifting
for $p$-adic groups is studied in \cite{APS17}.
Let $G$ be a connected reductive algebraic group over $\mathbb{C}$ and $K$ its connected reductive subgroup.
I. Penkov and G. Zuckerman call a $\lie{g}$-module $M$ a generalized Harish-Chandra module if $M$ is locally finite, completely reducible and admissible as a $\lie{k}$-module (see e.g.\ \cite{PeZu14}).
A relation between generalized Harish-Chandra modules and supports of $\mathscr{D}$-modules on $G/B$ is studied by A. V. Petukhov \cite{Pe12}.
One motivation for our study of the category of generalized Harish-Chandra modules is to study the category of $\univ{g}^K$-modules.
By Lepowsky--McCollum's result \cite{LeMc73}, the category of $\univ{g}^K$-modules can be embedded in the category of $(\lie{g}\oplus \lie{k}, \Delta(K))$-modules.
Hence we can relate the branching problem and harmonic analysis to the study of $(\lie{g}\oplus \lie{k}, \Delta(K))$-modules.
As an application of uniformly bounded families, we prove fundamental results of the category of generalized Harish-Chandra modules: finiteness of equivalence classes of irreducible objects (Corollary \ref{cor:NumberOfIrreducibles}), boundedness of the Loewy lengths of modules (Theorem \ref{thm:LoewyLength}), and
existence of projective objects (Proposition \ref{prop:EnoughProj}).
This paper is organized as follows.
In Section 2, we review the notions of generalized pairs, pairs $(\lie{g}, K)$, relative Lie algebra cohomologies and truncation functors.
In Section 3, we recall the definition of algebras of twisted differential operators and their operations.
At the end of the section, we study the direct image functors with respect to the projections of principal $G$-bundles.
The definition of uniformly bounded families of $\mathscr{D}$-modules is in Section 4.
Theorem \ref{thm:IntroUniformlyBounded} is proved here.
Section 5 is devoted to constructions of bornologies and uniformly bounded families.
The proof of Theorem \ref{thm:IntroIrreducibles} is given here.
In Section 6, we review the localization of the Zuckerman derived functor following \cite{Bi90}.
Applications to the representation theory are given in Section 7.
\subsection*{Notation and convention}
In this paper, any algebraic variety is assumed to be quasi-projective and defined over $\mathbb{C}$.
Let $\rsheaf{X}$ and $\rring{X}$ denote the structure sheaf of a variety $X$ and the algebra of its global sections, respectively.
Suppose that $X$ is smooth.
We write $\mathscr{D}_X$ for the algebra of non-twisted (local) differential operators.
We express algebras of twisted differential operators and the spaces of their global sections by script letters and calligraphic letters, respectively.
For example, the spaces of global sections of the algebras $\mathscr{A}_X, \mathscr{B}_{X,\lambda}$ and $\mathscr{D}_X$ are denoted by $\mathcal{A}_X, \mathcal{B}_{X,\lambda}$ and $\mathcal{D}_X$, respectively.
All representations and modules in this paper are assumed to be defined over $\mathbb{C}$.
We express affine algebraic groups and their Lie algebras by Roman alphabets and corresponding German letters.
For example, the Lie algebras of affine algebraic groups $G, K$ and $H$ are denoted as $\lie{g}, \lie{k}$ and $\lie{h}$.
For a complex Lie algebra $\lie{g}$, we write $\univ{g}$ and $\univcent{g}$ for the universal enveloping algebra and its center, respectively.
For an affine algebraic group $G$, let $G_0$ denote the identity component of $G$.
For a $G$-module (resp.\ a $\lie{g}$-module) $V$, we write $V^G$ (resp.\ $V^{\lie{g}}$) for the space of all invariant vectors in $V$.
We denote by $\mathrm{Mod}(\mathscr{A}_X)$, $\mathrm{Mod}(\mathcal{A}, G)$, $\mathrm{Mod}(\lie{g}, K)$ and $\mathrm{Mod}(\lie{g})$ the categories of (left) modules of a sheaf $\mathscr{A}_X$ of algebras, a generalized pair $(\mathcal{A}, G)$, a pair $(\lie{g}, K)$ and a Lie algebra $\lie{g}$, respectively.
We write $\mathrm{Len}_\mathcal{R}(V)$ for the length of an $\mathcal{R}$-module $V$ in each category, e.g.\ $\mathrm{Len}_{\mathscr{A}_X}(V)$, $\mathrm{Len}_{\mathcal{A}, G}(V)$, $\mathrm{Len}_{\lie{g}, K}(V)$ and $\mathrm{Len}_{\lie{g}}(V)$.
We denote by $\mathrm{Mod}_{qc}(\mathscr{A}_X)$ (resp.\ $\mathrm{Mod}_{h}(\mathscr{A}_X)$) the category of quasi-coherent modules (resp.\ holonomic modules) of an algebra $\mathscr{A}_X$ of twisted differential operators.
We use the same notation for categories of equivariant modules such as $\mathrm{Mod}_{qc}(\mathscr{A}_X, G)$ and $\mathrm{Mod}_h(\mathscr{A}_X, G)$.
For an algebra $\mathscr{A}_X$ of twisted differential operators on a smooth variety $X$, let $D^b_{qc}(\mathscr{A}_X)$ (resp.\ $D^b_{h}(\mathscr{A}_X)$) denote the full subcategory of the derived category $D(\mathrm{Mod}(\mathscr{A}_X))$ consisting of objects $\cpx{\mathcal{M}}$ whose cohomologies $H^i(\cpx{\mathcal{M}})$ are quasi-coherent (resp.\ holonomic)
and vanish for any $|i| \gg 0$.
We list operations of sheaves:
\begin{itemize}
\item $\mathcal{L}^\vee$: the dual of an invertible sheaf $\mathcal{L}$
\item $\Gamma$, $R\Gamma$: the global section functor and its right derived functor of sheaves
\item $f^{-1}$: the inverse image functor of sheaves
\item $f_*, Rf_*$: the direct image functor and its right derived functor of sheaves
\item $f^*, Lf^*$: the inverse image functor and its left derived functor of $\rsheaf{Y}$-modules (or twisted $\mathscr{D}$-modules)
\item $Df_+$: the direct image functor of twisted $\mathscr{D}$-modules.
\end{itemize}
Here $f\colon X\rightarrow Y$ is a morphism of smooth varieties.
We denote by $R^i f_*$, $L_{-i}f^*$ and $D^if_+$ the compositions $H^i\circ Rf_*$, $H^i\circ Lf^*$ and $H^i\circ D f_+$, respectively.
Let $(\cdot)\otimes (\cdot)$ (without subscript) denote the tensor product over $\mathbb{C}$.
For an $\mathcal{R}$-module $M$ and an $\mathcal{S}$-module $N$, we write $M\boxtimes N$ for the external tensor product of $M$ and $N$.
\subsection*{Acknowledgments}
I would like to thank T. Tauchi, Y. Ito and T. Kubo for reading the manuscript very carefully and for many corrections.
The author was partially supported by Waseda University Grants for Special Research Projects (No. 2019C-528).
\section{Preliminary}
In this section, we prepare several known results and definitions.
We deal with generalized pairs, Lie algebra cohomology groups and truncation functors.
\subsection{Generalized pair}
In this subsection, we recall the definitions of generalized pairs and $(\mathcal{A}, G)$-modules,
and show easy propositions related to generalized pairs.
We refer the reader to \cite[p.96]{KnVo95_cohomological_induction}.
In this paper, any algebraic group is affine and defined over $\mathbb{C}$,
and any $\mathbb{C}$-algebra, with the exception of Lie algebras, is assumed to be associative and unital.
For a representation $V$ of an affine algebraic group $G$ as an abstract group,
we say that $V$ is a $G$-module, or that $G$ acts rationally on $V$, if
the $G$-action is locally finite and any finite-dimensional $G$-subrepresentation of $V$ is a representation of $G$ as an algebraic group.
\begin{definition} \label{def:GeneralizedPair}
Let $\mathcal{A}$ be a $\mathbb{C}$-algebra and $G$ an affine algebraic group over $\mathbb{C}$ acting rationally on $\mathcal{A}$ by algebra automorphisms.
We say that the pair $(\mathcal{A}, G)$ equipped with a $G$-equivariant algebra homomorphism $\iota\colon\univ{g}\rightarrow \mathcal{A}$ is a generalized pair if
the adjoint action of $\lie{g}$ on $\mathcal{A}$ determined by $\iota$ coincides with the differential of the action of $G$ on $\mathcal{A}$.
\end{definition}
For a generalized pair $(\mathcal{A}, G)$, we denote by $\textup{Ad}_{\mathcal{A}}$ (or $\textup{Ad}$) the action of $G$ on $\mathcal{A}$.
For example, if $G'$ is a closed subgroup of $G$, then $(\univ{g}, G')$ is a generalized pair.
For a $G$-equivariant algebra $\mathscr{A}_X$ of twisted differential operators on a smooth variety $X$,
$(\Gamma(\mathscr{A}_{X}), G)$ forms a generalized pair.
\begin{definition}\label{def:AG-mod}
Let $(\mathcal{A}, G)$ be a generalized pair.
We say that an $\mathcal{A}$-module $V$ equipped with a rational $G$-action is an $(\mathcal{A}, G)$-module if the following two conditions hold.
\begin{enumerate}[(i)]
\item The differential of the $G$-action on $V$ coincides with the $\lie{g}$-action
via the composition $\lie{g}\xrightarrow{\iota} \mathcal{A} \rightarrow \textup{End}_{\mathbb{C}}(V)$ and
\item $gXv = \textup{Ad}(g)(X)gv$ holds for any $g \in G, X\in \mathcal{A}$ and $v \in V$.
\end{enumerate}
We denote by $\mathrm{Mod}(\mathcal{A}, G)$ the category of $(\mathcal{A}, G)$-modules.
\end{definition}
If $G$ is reductive, any $(\mathcal{A}, G)$-module is completely reducible as a $G$-module.
Hence the functor $\mathrm{Mod}(\mathcal{A}, G) \ni V\mapsto V^G \in \mathrm{Mod}(\mathcal{A}^G)$
is exact, where $V^G$ is the space of all $G$-invariant vectors in $V$.
Moreover, we can see that the functor sends an irreducible object to zero or to an irreducible one.
See e.g.\ \cite[Theorem 4.2.1]{GoWa09}.
Hence we have the following proposition.
\begin{proposition}\label{prop:AGandLength}
Let $(\mathcal{A}, G)$ be a generalized pair with reductive $G$.
Then for any $(\mathcal{A}, G)$-module $V$, we have
\begin{align*}
\mathrm{Len}_{\mathcal{A}^G}(V^G) \leq \mathrm{Len}_{\mathcal{A}, G}(V),
\end{align*}
where $\mathrm{Len}$ means the length of a module.
\end{proposition}
We will reduce some propositions about $(\mathcal{A}, G)$-modules to those for $(\mathcal{A}, G_0)$-modules.
To do this, we need the following easy lemma.
\begin{lemma}\label{lem:GeneralizedPairConnected}
Let $(\mathcal{A}, G)$ be a generalized pair and $V$ an $(\mathcal{A}, G)$-module.
Then we have
\begin{align*}
\mathrm{Len}_{\mathcal{A}, G}(V)\leq \mathrm{Len}_{\mathcal{A}, G_0}(V) \leq |G/G_0|\mathrm{Len}_{\mathcal{A}, G}(V).
\end{align*}
\end{lemma}
\begin{proof}
The first inequality is trivial.
It is enough to show the second inequality when $V$ is an irreducible $(\mathcal{A}, G)$-module.
By Zorn's lemma, we can take a proper $(\mathcal{A}, G_0)$-submodule $W$ such that any non-zero $(\mathcal{A}, G_0)$-submodule of $V/W$ contains a unique irreducible $(\mathcal{A}, G_0)$-submodule.
Take a maximal subset $S\subset G/G_0$ such that $W_S:=\bigcap_{g \in S} gW$ is non-zero.
Since $V$ is irreducible as an $(\mathcal{A}, G)$-module, $S$ is a proper subset of $G/G_0$.
Fix $g \in G/G_0-S$.
Since $W_S\cap gW=0$, the composition $W_S\hookrightarrow V\twoheadrightarrow V/gW$ is injective.
Hence $W_S$ contains an irreducible $(\mathcal{A}, G_0)$-submodule $V_0$.
Since $V$ is an irreducible $(\mathcal{A}, G)$-module, we have $V = \sum_{g \in G/G_0} g V_0$.
This implies that $V$ is completely reducible as an $(\mathcal{A}, G_0)$-module, and hence the length as an $(\mathcal{A}, G_0)$-module is less than or equal to $|G/G_0|$.
\end{proof}
\subsection{\texorpdfstring{$(\lie{g}, K)$}{(g, K)}-module}\label{sect:CEcomplex}
We review the notion of pairs $(\lie{g}, K)$ and the relative Lie algebra cohomology groups.
We refer the reader to \cite[Chapters I, IV]{KnVo95_cohomological_induction}
and \cite[Chapter I]{BoWa00_continuous_cohomology}.
\begin{definition}\label{def:pair}
Let $\lie{g}$ be a complex Lie algebra and $K$ an affine algebraic group with Lie algebra $\lie{k} \subset \lie{g}$.
We say that $(\lie{g}, K)$ is a \define{pair} if the following two conditions hold.
\begin{enumerate}[(i)]
\item A rational $K$-action on $\lie{g}$ by Lie algebra automorphisms is given, whose restriction to $\lie{k}$ is equal to the adjoint action of $K$ on $\lie{k}$.
\item The differential of the $K$-action on $\lie{g}$ coincides with the adjoint action of $\lie{k}$ on $\lie{g}$.
\end{enumerate}
We denote by $\textup{Ad}_{\lie{g}}$ (or simply $\textup{Ad}$) the action of $K$ on $\lie{g}$.
\end{definition}
If $(\lie{g}, K)$ is a pair, then $(\univ{g}, K)$ forms a generalized pair.
A $(\univ{g}, K)$-module is called a \define{$(\lie{g}, K)$-module}.
For a complex Lie algebra $\lie{g}$, the functor $\mathrm{Tor}^{\univ{g}}_i(\cdot, \cdot)$ can be computed by an explicit complex called the Chevalley--Eilenberg chain complex.
We recall the complex in the relative setting.
Let $(\lie{g}, K)$ be a pair.
Remark that the following results also hold if $K$ is replaced with its Lie algebra $\lie{k}$.
The relative Chevalley--Eilenberg chain complex is a sequence
\begin{align*}
\cdots \xrightarrow{\partial_{k+1}} \CE{\lie{g}}{K}{k} \xrightarrow{\partial_k} \CE{\lie{g}}{K}{k-1} \xrightarrow{\partial_{k-1}} \cdots \xrightarrow{\partial_1} \CE{\lie{g}}{K}{0} \rightarrow 0,
\end{align*}
where $\CE{\lie{g}}{K}{k}:=\univ{g}\otimes_{\univ{k}}\wedge^k (\lie{g}/\lie{k})$.
The differential $\partial_k$ is given by
\begin{align*}
&\partial_k(v\otimes X_1 \wedge \cdots \wedge X_k)
\\= &\sum_i (-1)^{i+1} v\widetilde{X}_i \otimes X_1\wedge \cdots \wedge \hat{X_i} \wedge \cdots \wedge X_k \\
+ &\sum_{i<j} (-1)^{i+j} v\otimes ([\widetilde{X}_i, \widetilde{X}_j]+\lie{k})\wedge X_1\wedge \cdots \wedge \hat{X_i} \wedge \cdots \wedge \hat{X_j} \wedge \cdots \wedge X_k,
\end{align*}
where each $X_i$ is in $\lie{g}/\lie{k}$ and each $\widetilde{X}_i$ is a representative of $X_i$ in $\lie{g}$.
For a $(\lie{g}, K)$-module $V$, we set
\begin{align*}
H_i(\lie{g}, K; V)&:= H_i((V\otimes_{\univ{g}}\CE{\lie{g}}{K}{\bullet})^K) \simeq H_i((V\otimes_{\univ{k}}\wedge^\bullet (\lie{g}/\lie{k}))^K),\\
H^i(\lie{g}, K; V)&:= H^i(\textup{Hom}_{\lie{g}, K}(\CE{\lie{g}}{K}{\bullet}, V)) \simeq H^i(\textup{Hom}_K(\wedge^\bullet(\lie{g}/\lie{k}), V)).
\end{align*}
We call them the relative Lie algebra homology and cohomology of $V$, respectively.
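For instance, in degree zero the second complex reads $\textup{Hom}_K(\mathbb{C}, V) = V^K \xrightarrow{\ d\ } \textup{Hom}_K(\lie{g}/\lie{k}, V)$ with $d(v)(X) = Xv$ (well-defined since $\lie{k}$ annihilates $V^K$), so that
\begin{align*}
H^0(\lie{g}, K; V) = V^{\lie{g}, K},
\end{align*}
the subspace of vectors annihilated by $\lie{g}$ and fixed by $K$.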
If $K$ is reductive, the complex $(\CE{\lie{g}}{K}{\bullet}, \partial_\bullet)$ is a projective resolution of $\mathbb{C}$ in $\mathrm{Mod}(\lie{g}, K)$.
Hence we can compute $\mathrm{Tor}$ and $\mathrm{Ext}$ by the complex.
See \cite[Proposition 2.117]{KnVo95_cohomological_induction} and \cite[Lemma 3.1.9]{Ku02}.
\begin{fact}\label{fact:LiealgebraCohomology}
Let $V$ and $W$ be $(\lie{g}, K)$-modules.
If $K$ is reductive, then we have natural isomorphisms
\begin{align*}
H_i(\lie{g}, K; V\otimes W) \simeq \mathrm{Tor}^{\lie{g},K}_i(V, W), \\
H^i(\lie{g}, K; \textup{Hom}_{\mathbb{C}}(V, W)) \simeq \mathrm{Ext}_{\lie{g}, K}^i(V,W).
\end{align*}
\end{fact}
The following result is known as Poincar\'e duality.
See \cite[Corollary 3.6]{KnVo95_cohomological_induction}.
\begin{fact}\label{fact:PoincareDuality}
Let $V$ be a $(\lie{g}, K)$-module.
Put $n = \dim_{\mathbb{C}}(\lie{g}/\lie{k})$.
If $K$ is reductive, then we have a natural isomorphism
\begin{align*}
H^i(\lie{g}, K; V\otimes \wedge^n(\lie{g}/\lie{k})) \simeq H_{n-i}(\lie{g}, K; V),
\end{align*}
where $\wedge^n(\lie{g}/\lie{k})$ is a $(\lie{g}, K)$-module with the natural $K$-action and the $\lie{g}$-action given by the character $X\mapsto \textup{tr}(\textup{ad}_{\lie{g}}(X))$.
\end{fact}
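For instance, if $K$ is trivial and $\lie{g}$ is unimodular (e.g.\ semisimple or nilpotent), then $\textup{tr}(\textup{ad}(X)) = 0$ for all $X \in \lie{g}$, so $\wedge^n \lie{g}$ is the trivial module and the duality takes the familiar form $H^i(\lie{g}; V) \simeq H_{n-i}(\lie{g}; V)$ with $n = \dim_{\mathbb{C}} \lie{g}$.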
\subsection{Truncation functor}\label{sect:truncation}
In many places, we reduce assertions about a complex to those about a single object.
To do so, we need the truncation functors.
We refer the reader to \cite[Definitions 11.3.11 and 12.3.1]{KaSc06}.
Let $\mathcal{A}$ be an abelian category and $\mathcal{C}(\mathcal{A})$ the category of complexes in $\mathcal{A}$.
For a complex $(C^\bullet, d^\bullet) \in \mathcal{C}(\mathcal{A})$, we set
\begin{align*}
\tau^{\leq k} C^\bullet &:= \cdots \rightarrow C^{k-2} \rightarrow C^{k-1} \rightarrow \mathrm{Ker}(d^k) \rightarrow 0 \rightarrow 0 \rightarrow \cdots \\
\tau^{> k} C^\bullet &:= \cdots \rightarrow 0 \rightarrow 0 \rightarrow \Im(d^k) \rightarrow C^{k+1} \rightarrow C^{k+2} \rightarrow \cdots \\
\tau^{\leq k}_s C^\bullet &:= \cdots \rightarrow C^{k-2} \rightarrow C^{k-1} \rightarrow C^k \rightarrow 0 \rightarrow 0 \rightarrow \cdots \\
\tau^{> k}_s C^\bullet &:= \cdots \rightarrow 0 \rightarrow 0 \rightarrow 0 \rightarrow C^{k+1} \rightarrow C^{k+2} \rightarrow \cdots.
\end{align*}
The functors $\tau^{\leq k}$ and $\tau^{> k}$ are called \define{truncation functors},
and $\tau^{\leq k}_s$ and $\tau^{> k}_s$ are called \define{stupid truncation functors}.
Then we have distinguished triangles
\begin{align*}
&\tau^{\leq k} C^\bullet \rightarrow C^\bullet \rightarrow \tau^{> k} C^\bullet \xrightarrow{+1}, \\
&\tau^{\leq k}_s C^\bullet \rightarrow C^\bullet \rightarrow \tau^{> k}_s C^\bullet \xrightarrow{+1}.
\end{align*}
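As a minimal example, let $C^\bullet$ be concentrated in degrees $0$ and $1$, say $C^\bullet = (C^0 \xrightarrow{d^0} C^1)$. Then $\tau^{\leq 0} C^\bullet = \mathrm{Ker}(d^0)$ in degree $0$ and $\tau^{> 0} C^\bullet = (\Im(d^0) \rightarrow C^1)$, so that $H^0(\tau^{\leq 0} C^\bullet) \simeq H^0(C^\bullet)$ and $H^1(\tau^{> 0} C^\bullet) \simeq H^1(C^\bullet)$, all other cohomologies vanishing. In contrast, the stupid truncation $\tau^{\leq 0}_s C^\bullet = C^0$ has $H^0(\tau^{\leq 0}_s C^\bullet) = C^0$, which differs from $H^0(C^\bullet)$ in general: the truncation functors preserve cohomologies in the retained range, while the stupid truncation functors preserve the terms of the complex.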
\begin{lemma}\label{lem:AdditiveFunctionDerived}
Let $\mathcal{A}$ and $\mathcal{B}$ be abelian categories and $m$ a $\mathbb{C}$-valued additive function on the Grothendieck group of $\mathcal{A}$.
Assume $m(M)\geq 0$ for any $M \in \mathcal{A}$.
\begin{enumerate}[(i)]
\item For any distinguished triangle $(N^\bullet \rightarrow M^\bullet \rightarrow L^\bullet \xrightarrow{+1})$ in $D^b(\mathcal{A})$ and $i \in \mathbb{Z}$, we have
\begin{align*}
m(H^i(M^\bullet)) \leq m(H^i(N^\bullet)) + m(H^i(L^\bullet)).
\end{align*}
\item For any functor $F\colon D^b(\mathcal{B})\rightarrow D^b(\mathcal{A})$ of triangulated categories, complex $M^\bullet \in D^b(\mathcal{B})$ and $i \in \mathbb{Z}$, we have
\begin{align*}
m(H^i(F(M^\bullet))) \leq \sum_j m(H^{i-j}(F(H^j(M^\bullet)))).
\end{align*}
\item For any functor $F\colon D^b(\mathcal{B})\rightarrow D^b(\mathcal{A})$ of triangulated categories, bounded complex $M^\bullet \in C(\mathcal{B})$ and $i \in \mathbb{Z}$, we have
\begin{align*}
m(H^i(F(M^\bullet))) \leq \sum_j m(H^{i-j}(F(M^j))).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
(i) is clear by the long exact sequence associated to the distinguished triangle $(N^\bullet \rightarrow M^\bullet \rightarrow L^\bullet \xrightarrow{+1})$.
Set $l(M^\bullet):=|\set{n \in \mathbb{Z}: H^n(M^\bullet)\neq 0}|$.
By induction on $l(M^\bullet)$ and the truncation functors, we can reduce assertion (ii) to the case $M^\bullet \simeq N[n]$ for some $N \in \mathcal{B}$ and $n \in \mathbb{Z}$.
In fact, if $l(M^\bullet) \geq 2$, we can take $k \in \mathbb{Z}$ such that $l(\tau^{\leq k} M^\bullet), l(\tau^{> k} M^\bullet) < l(M^\bullet)$.
Applying (i) to the following distinguished triangle iteratively, we obtain (ii):
\begin{align*}
F(\tau^{\leq k} M^\bullet) \rightarrow F(M^\bullet) \rightarrow F(\tau^{> k} M^\bullet) \xrightarrow{+1}.
\end{align*}
Similarly, using the stupid truncation functors, we obtain (iii).
\end{proof}
\section{Uniformly bounded family}
The purpose of this section is to reformulate Bernstein's work \cite{Be72} on the multiplicity of a $\ntDalg{\mathbb{C}^n}$-module.
We will introduce the notion of uniformly bounded families of twisted $\mathscr{D}$-modules.
A uniformly bounded family is a family with a good boundedness property, which is preserved by direct images and inverse images.
We give several applications of uniformly bounded families in Section \ref{sect:applications}.
\subsection{Multiplicity and functors}
In this subsection, we review Bernstein's work \cite{Be72} on the multiplicity (or the Bernstein degree) of a $\ntDalg{\mathbb{C}^n}$-module.
We refer the reader to \cite{Be72}, \cite[3.2.2]{HTT08} and \cite[1.\S 3 and \S 4]{Bj79} for the proofs of the facts below.
Let $\mathscr{D}_{\mathbb{C}^n}$ be the algebra of non-twisted differential operators on $\mathbb{C}^n$ and $\ntDalg{\mathbb{C}^n}$ the algebra of global sections of $\mathscr{D}_{\mathbb{C}^n}$.
Let $(x_1, x_2, \ldots, x_n)$ be the standard coordinates on $\mathbb{C}^n$ and put $\partial_i = \partial / \partial x_i$.
We denote by $F$ the Bernstein filtration of $\ntDalg{\mathbb{C}^n}$, so that
$F_0 \ntDalg{\mathbb{C}^n} = \mathbb{C}$,
$F_1 \ntDalg{\mathbb{C}^n} = \spn{1, x_1, x_2, \ldots, x_n, \partial_1, \partial_2, \ldots, \partial_n}$ and
$F_i \ntDalg{\mathbb{C}^n} = (F_1 \ntDalg{\mathbb{C}^n})^i$ for $i \geq 1$.
The following facts are essential for our study of a family of $\mathscr{D}$-modules.
\begin{fact}\label{fact:WeylAlgebraMultiplicity}
Let $M$ be a finitely generated $\ntDalg{\mathbb{C}^n}$-module and $M_0$ a generating subspace of $M$ of finite dimension.
Put $F_i M := F_i \ntDalg{\mathbb{C}^n} \cdot M_0$.
Then
\begin{enumerate}[(i)]
\item there exists a polynomial $f \in \mathbb{Q}[t]$ such that $f(i) = \dim_{\mathbb{C}}(F_i M)$ for any $i \gg 0$,
\item $d(M) := \deg(f)$ does not depend on $M_0$,
\item the coefficient $a_d$ of the term of degree $d(M)$ in $f$ does not depend on $M_0$,
\item $m(M):=a_d\cdot d(M)!$ is a natural number,
\item $d(M)\geq n$ if $M$ is non-zero.
\end{enumerate}
\end{fact}
The integer $m(M)$ is called the \define{multiplicity} (or the Bernstein degree) of $M$.
A $\ntDalg{\mathbb{C}^n}$-module $M$ is said to be \define{holonomic} if $M$ is finitely generated and $d(M)=n$ or $d(M) = 0$ holds.
A $\ntDalg{\mathbb{C}^n}$-module $M$ is holonomic if and only if the corresponding $\mathscr{D}_{\mathbb{C}^n}$-module $\mathscr{D}_{\mathbb{C}^n}\otimes_{\ntDalg{\mathbb{C}^n}}M$
is holonomic (see \cite[Proposition 3.2.11]{HTT08}).
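As a basic example, take $n = 1$ and $M = \mathbb{C}[x] \simeq \ntDalg{\mathbb{C}}/\ntDalg{\mathbb{C}}\partial_1$ with generating subspace $M_0 = \mathbb{C}\cdot 1$. Then $F_i M = \spn{1, x, \ldots, x^i}$ and hence $\dim_{\mathbb{C}}(F_i M) = i+1$. Thus $f(t) = t + 1$, $d(M) = 1 = n$ and $m(M) = 1$; in particular, $M$ is holonomic.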
We put $m(\mathcal{M}):=m(\Gamma(\mathcal{M}))$ for $\mathcal{M} \in \mathrm{Mod}_{h}(\mathscr{D}_{\mathbb{C}^n})$
and
\begin{align*}
m(\mathcal{M}^\bullet) := \sum_{i} m(H^i(\mathcal{M}^\bullet))
\end{align*}
for $\mathcal{M}^\bullet \in D^b_h(\mathscr{D}_{\mathbb{C}^n})$.
\begin{fact}\label{fact:WeylAlgebraExact}
Let $0\rightarrow L \rightarrow M \rightarrow N \rightarrow 0$ be a short exact sequence of finitely generated $\ntDalg{\mathbb{C}^n}$-modules.
Then we have $d(M) = \max(d(L), d(N))$.
If in addition $d(L)=d(N)$, then $m(M) = m(L) + m(N)$ holds.
In particular, the length of a holonomic $\ntDalg{\mathbb{C}^n}$-module is less than or equal to its multiplicity.
\end{fact}
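The bound in Fact \ref{fact:WeylAlgebraExact} is attained, for example, by $n = 1$ and $M = \mathbb{C}[x, x^{-1}]$ with generating subspace $M_0 = \mathbb{C}\cdot x^{-1}$: we have $F_i M = \spn{x^{-1-i}, \ldots, x^{i-1}}$, so $\dim_{\mathbb{C}}(F_i M) = 2i + 1$ and $m(M) = 2$, while the short exact sequence
\begin{align*}
0 \rightarrow \mathbb{C}[x] \rightarrow \mathbb{C}[x, x^{-1}] \rightarrow \mathbb{C}[x, x^{-1}]/\mathbb{C}[x] \rightarrow 0
\end{align*}
exhibits $M$ as an extension of two simple holonomic modules of multiplicity one, so the length of $M$ is also $2$.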
The following fact is an easy consequence of the definition of the multiplicity.
See the proof of \cite[Theorem 3.2]{Be72}.
\begin{fact}\label{fact:MultiplicityTensor}
Let $N$ and $M$ be modules over $\ntDalg{\mathbb{C}^n}$ and $\ntDalg{\mathbb{C}^m}$, respectively.
If $N$ and $M$ are holonomic, then $N\boxtimes M$ is holonomic and we have $m(N\boxtimes M) = m(N)m(M)$.
Conversely, if $N\boxtimes M$ is holonomic, then $N$ and $M$ are holonomic.
\end{fact}
We need a derived functor version of \cite[Theorem 3.2]{Be72}.
The proof is the same as that of the original version.
\begin{fact}\label{fact:WeylAlgebraDirectInverseImage}
Let $f\colon \mathbb{C}^n \rightarrow \mathbb{C}^m$ be a morphism of affine varieties.
Set $d:=\max(\deg(f), 1)$.
Then for any $\mathcal{M}^\bullet \in D^b_h(\mathscr{D}_{\mathbb{C}^n})$ and $\mathcal{N}^\bullet \in D^b_h(\mathscr{D}_{\mathbb{C}^m})$,
we have
\begin{align*}
m(Df_+(\mathcal{M}^\bullet)) &\leq d^{n+m} m(\mathcal{M}^\bullet) \\
m(Lf^*(\mathcal{N}^\bullet)) &\leq d^{n+m} m(\mathcal{N}^\bullet).
\end{align*}
\end{fact}
\subsection{\texorpdfstring{$\mathscr{D}$}{D}-modules on affine varieties}
In Fact \ref{fact:WeylAlgebraDirectInverseImage}, we have seen that the multiplicity is well-behaved under the standard operations on $\mathscr{D}$-modules on affine spaces.
In this subsection, we consider similar results about $\mathscr{D}$-modules on affine varieties.
We recall the Kashiwara equivalence \cite[Theorem 1.6.1]{HTT08}.
\begin{fact}\label{fact:Kashiwara}
Let $f\colon X\rightarrow Y$ be a closed embedding of smooth varieties.
Then
\begin{align*}
f_+ := D^0f_+\colon& \mathrm{Mod}_{qc}(\mathscr{D}_X) \rightarrow \mathrm{Mod}_{qc}^X(\mathscr{D}_Y) \text{ and} \\
Df_+\colon& D_{qc}^b(\mathscr{D}_X) \rightarrow D_{qc}^{b,X}(\mathscr{D}_Y)
\end{align*}
give equivalences of categories.
Here $\mathrm{Mod}_{qc}^X(\mathscr{D}_Y)$ is the full subcategory of $\mathrm{Mod}_{qc}(\mathscr{D}_Y)$
whose objects are supported on $X$,
and $D_{qc}^{b,X}(\mathscr{D}_Y)$ is the full subcategory of $D_{qc}^{b}(\mathscr{D}_Y)$
consisting of complexes whose cohomologies are supported on $X$.
\end{fact}
For $\mathcal{M}^\bullet \in D^b_h(\mathscr{D}_X)$ on a smooth variety $X$
and a closed embedding $\iota\colon X\rightarrow \mathbb{C}^n$, we set
\begin{align*}
m_\iota(\mathcal{M}^\bullet) := m(D\iota_+(\mathcal{M}^\bullet)).
\end{align*}
\begin{proposition}\label{prop:LocalMultiplicity}
Let $f\colon X\rightarrow Y$ be a morphism of affine smooth varieties.
Fix closed embeddings $\iota\colon X\rightarrow \mathbb{C}^n$ and $\iota'\colon Y\rightarrow \mathbb{C}^m$.
Then there exists a constant $C>0$ such that
\begin{align}
m_{\iota'}(Df_+(\mathcal{M}^\bullet)) &\leq C\cdot m_\iota(\mathcal{M}^\bullet), \label{eqn:BoundDirectImageAffine}\\
m_{\iota}(Lf^*(\mathcal{N}^\bullet)) &\leq C\cdot m_{\iota'}(\mathcal{N}^\bullet) \label{eqn:BoundInverseImageAffine}
\end{align}
for any $\mathcal{M}^\bullet \in D^b_h(\mathscr{D}_X)$ and $\mathcal{N}^\bullet \in D^b_h(\mathscr{D}_Y)$.
\end{proposition}
\begin{proof}
Fix an extension $\widetilde{f}$ of $f$ to $\mathbb{C}^n$ such that the diagram
\begin{align*}
\xymatrix{
X \ar[r]^f \ar[d]^-\iota & Y \ar[d]^-{\iota'} \\
\mathbb{C}^n \ar[r]^{\widetilde{f}} & \mathbb{C}^m
}
\end{align*}
is commutative.
Set $d:=\max(\deg(\widetilde{f}), 1)$.
By Fact \ref{fact:WeylAlgebraDirectInverseImage}, we obtain
\begin{align*}
m_{\iota'}(Df_+(\mathcal{M}^\bullet)) = m(D\widetilde{f}_+(D\iota_+(\mathcal{M}^\bullet)))
\leq d^{n+m} m_\iota(\mathcal{M}^\bullet).
\end{align*}
Here we used $D\iota'_+ \circ Df_+ = D\widetilde{f}_+ \circ D\iota_+$ (Fact \ref{fact:FundamentalDmodule} \eqref{eqn:DirectImage}).
We have shown the first inequality \eqref{eqn:BoundDirectImageAffine}.
Consider the following diagram:
\begin{align*}
\xymatrix{
X \ar[r]^-{f_1} \ar[d]^-\iota & X\times Y \ar[r]^-{f_2} \ar[d]^-{\iota\times \iota'} & Y\\
\mathbb{C}^n \ar[r]^-{\widetilde{f}_1} & \mathbb{C}^n\times \mathbb{C}^m, &
}
\end{align*}
where $f_1(x) = (x, f(x))$, $f_2(x, y) = y$ and $\widetilde{f}_1(x) = (x, \widetilde{f}(x))$.
Since the left square is cartesian, we have an isomorphism $D\iota_+\circ Lf_1^* \simeq L\widetilde{f}^*_1 \circ D(\iota\times \iota')_+$ of functors by the base change theorem (Fact \ref{fact:BaseChange}).
Hence we obtain
\begin{align*}
m_{\iota}(Lf^*(\mathcal{N}^\bullet)) &= m(L\widetilde{f}_1^*\circ D(\iota\times \iota')_+ \circ Lf_2^*(\mathcal{N}^\bullet))\\
&\leq d^{2n+m}m(D\iota_+(\rsheaf{X})\boxtimes D\iota'_+(\mathcal{N}^\bullet)) \\
&=d^{2n+m}m_{\iota}(\rsheaf{X})m_{\iota'}(\mathcal{N}^\bullet)
\end{align*}
by Fact \ref{fact:MultiplicityTensor} and Fact \ref{fact:WeylAlgebraDirectInverseImage}, which proves the second inequality \eqref{eqn:BoundInverseImageAffine}.
\end{proof}
In the next subsection, we will consider families of twisted $\mathscr{D}$-modules on general smooth varieties.
Although the multiplicity itself is no longer a meaningful value for general smooth varieties, boundedness of multiplicities of twisted $\mathscr{D}$-modules can be defined.
To reduce properties of twisted $\mathscr{D}$-modules on a non-affine variety to those of $\mathscr{D}$-modules on affine spaces, we have many choices of affine \'etale coverings, closed embeddings into affine spaces, and local trivializations of an algebra of twisted differential operators.
We shall consider how these choices affect the multiplicity.
\begin{proposition}\label{prop:LocalMultiplicityEtale}
Let $f\colon X\rightarrow Y$ be a surjective \'etale morphism of affine smooth varieties.
Fix closed embeddings $\iota\colon X\rightarrow \mathbb{C}^n$ and $\iota'\colon Y\rightarrow \mathbb{C}^m$.
Then there exists a constant $C>0$ such that
\begin{align*}
C^{-1} \cdot m_{\iota'}(\mathcal{N}^\bullet) \leq m_{\iota}(Lf^*(\mathcal{N}^\bullet)) \leq C\cdot m_{\iota'}(\mathcal{N}^\bullet)
\end{align*}
for any $\mathcal{N}^\bullet \in D^b_h(\mathscr{D}_Y)$.
\end{proposition}
\begin{proof}
We have proved the second inequality in Proposition \ref{prop:LocalMultiplicity}.
We shall show the first inequality.
Since $f$ is smooth, $f^*$ is exact (\cite[Proposition 1.5.13]{HTT08}).
Hence we can assume $\mathcal{N} \in \mathrm{Mod}_{h}(\mathscr{D}_Y)$.
Since $f$ is \'etale, $f_*(\mathcal{M})$ admits a natural $\mathscr{D}_Y$-module structure
for $\mathcal{M} \in \mathrm{Mod}_h(\mathscr{D}_X)$
and the direct image functor $Df_+$ is isomorphic to $Rf_* = f_*$ by \cite[Theorem 2.2]{CoLe01}.
Hence the canonical morphism $\mathcal{N} \rightarrow f_*(f^*(\mathcal{N}))$ of $\rsheaf{Y}$-modules
is a morphism of $\mathscr{D}_Y$-modules.
This morphism is injective since $f$ is surjective.
Applying Proposition \ref{prop:LocalMultiplicity} to $f^*(\mathcal{N})$,
we obtain
\begin{align*}
m_{\iota'}(\mathcal{N}) \leq m_{\iota'}(f_*(f^*(\mathcal{N}))) \leq C\cdot m_{\iota}(f^*(\mathcal{N})),
\end{align*}
where $C$ is a constant independent of $\mathcal{N}$.
\end{proof}
Hereafter we consider the effect of twisting by automorphisms on the multiplicity.
Let $X$ be a smooth affine variety and $\iota\colon X\rightarrow \mathbb{C}^n$ a closed embedding.
The automorphism group $\textup{Aut}(\mathscr{D}_X)$ is isomorphic to the additive group $\mathcal{Z}(X)$
of closed $1$-forms on $X$ (\cite{BeBe81}).
For $\omega \in \mathcal{Z}(X)$, we denote by $A_\omega$ the corresponding automorphism given by
\begin{align*}
A_\omega(T) = T - \omega(T) \in \mathcal{T}_X\oplus \rsheaf{X}
\end{align*}
for $T \in \mathcal{T}_X$.
A $\mathscr{D}_X$-module $\mathcal{M}$ can be twisted by $A_\omega$ and
the twisted module is denoted by $\mathcal{M}^\omega$.
We use the same notation for a complex of $\mathscr{D}_X$-modules, e.g.\ $(\mathcal{M}^\bullet)^\omega$.
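As a minimal example, let $X = \mathbb{C}^\times$ with coordinate $t$ and take $\omega = \lambda\, t^{-1} dt \in \mathcal{Z}(X)$ for $\lambda \in \mathbb{C}$. Then $A_\omega(t\partial_t) = t\partial_t - \lambda$, and the twist $\rsheaf{X}^\omega$ is the rank one module $\mathscr{D}_X/\mathscr{D}_X(t\partial_t \mp \lambda)$ (the sign depending on the convention for the twist), whose generator plays the role of the multivalued function $t^{\pm\lambda}$; it is isomorphic to $\rsheaf{X}$ as a $\mathscr{D}_X$-module if and only if $\lambda \in \mathbb{Z}$.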
\begin{lemma}\label{lem:RegRingDeg}
Let $W$ be a finite-dimensional subspace of $\mathcal{Z}(X)$.
Then there exists a constant $C$ such that
\begin{align*}
m_{\iota}(\rsheaf{X}^\omega) \leq C
\end{align*}
for any $\omega \in W$.
\end{lemma}
\begin{proof}
Put $\mathcal{M}:=\rsheaf{X}\otimes \rring{W}$ equipped with a $\mathcal{T}_X$-action via
\begin{align*}
T\cdot (f\otimes g) = Tf \otimes g - \sum_i \omega_i(T) f\otimes \lambda_i g
&& (T \in \mathcal{T}_X),
\end{align*}
where $\set{\omega_i}_i$ is a basis of $W$ and $\set{\lambda_i}_i$ is its dual basis.
Then the action on $\mathcal{M}$ extends to a $\mathscr{D}_X\otimes \rring{W}$-action.
We denote by $\mathfrak{m}_\omega$ the maximal ideal of $\rring{W}$ corresponding to $\omega \in W$.
Then by definition, we have $\mathcal{M} / \mathfrak{m}_\omega \mathcal{M} \simeq \rsheaf{X}^\omega$
for any $\omega \in W$.
Since the functors $\iota_+$ and $\Gamma$ are exact, we have
\begin{align*}
\Gamma(\iota_+(\rsheaf{X}^\omega)) \simeq \Gamma(\iota_+(\mathcal{M}))/ \mathfrak{m}_\omega \Gamma(\iota_+(\mathcal{M})).
\end{align*}
Put $M:=\Gamma(\iota_+(\mathcal{M}))$.
Since the functors $\iota_+$ and $\Gamma$ preserve the lattice of submodules, $M$ is noetherian and hence finitely generated as a $\ntDalg{\mathbb{C}^n}\otimes \rring{W}$-module.
Take a finite generating subspace $S \subset M$ and put $F_i M := (F_i \ntDalg{\mathbb{C}^n}\otimes \rring{W}) S$ for $i \geq 0$.
Then the associated graded module $\textup{gr}^F M$ is a finitely generated $\rring{\mathbb{C}^n\times W}$-module.
By \cite[Theorem 24.1]{Ma86}, we can take an affine open subset $U$ of $W$ such that
$\rring{U}\otimes_{\rring{W}}\textup{gr}^F M$ is a free $\rring{U}$-module.
Hence $\rring{U}\otimes_{\rring{W}}F_i M$ is a projective $\rring{U}$-module for any $i \geq 0$.
This implies that the function
\begin{align*}
W\ni \omega \mapsto \dim_{\mathbb{C}}(F_i M/\mathfrak{m}_\omega F_i M)
\end{align*}
is constant on $U$.
Hence $U\ni \omega \mapsto m(M/\mathfrak{m}_\omega M)$ is a constant function by the definition of the multiplicity.
Replacing $W$ by $W\setminus U$ and $M$ by $\rring{W\setminus U}\otimes_{\rring{W}} M$, and repeating this argument (noetherian induction on closed subsets of $W$), we see that
$m(M/\mathfrak{m}_\omega M)$ is bounded on $W$.
\end{proof}
\begin{remark}
Lemma \ref{lem:RegRingDeg} can be considered as a special case of \cite[Theorem 3.18]{AiGoDm16}
and the latter half of our proof is essentially the same as theirs.
\end{remark}
\begin{corollary}\label{cor:MultiplicityTwisted}
Let $\mathcal{M}^\bullet \in D^b_h(\mathscr{D}_X)$ and $W$ be a finite-dimensional subspace of $\mathcal{Z}(X)$.
Then there exists a constant $C$ independent of $\mathcal{M}^\bullet$ such that
\begin{align*}
m_{\iota}((\mathcal{M}^\bullet)^\omega) \leq C \cdot m_\iota(\mathcal{M}^\bullet)
\end{align*}
for any $\omega \in W$.
\end{corollary}
\begin{proof}
Fix $\omega \in W$.
Since $(\mathcal{M}^\bullet)^\omega \simeq \mathcal{M}^\bullet \otimes_{\rsheaf{X}} \rsheaf{X}^\omega$,
we have
\begin{align*}
D\iota_+((\mathcal{M}^\bullet)^\omega) \simeq D\iota_+(\mathcal{M}^\bullet)\otimes_{\rsheaf{\mathbb{C}^n}}^L \iota_+(\rsheaf{X}^\omega)
\end{align*}
by the base change theorem (Fact \ref{fact:BaseChange}).
By Facts \ref{fact:MultiplicityTensor} and \ref{fact:WeylAlgebraDirectInverseImage}, we obtain
\begin{align*}
m_\iota((\mathcal{M}^\bullet)^\omega) = m(D\iota_+(\mathcal{M}^\bullet)\otimes^L_{\rsheaf{\mathbb{C}^n}} \iota_+(\rsheaf{X}^\omega))
\leq m_\iota(\mathcal{M}^\bullet) m_\iota(\rsheaf{X}^\omega).
\end{align*}
This inequality and Lemma \ref{lem:RegRingDeg} imply the assertion.
\end{proof}
\subsection{Uniformly bounded family}\label{sect:UniformlyBoundedFamily}
We shall define a good local trivialization of a family of algebras of twisted differential operators.
For an \'etale map $\varphi\colon U\rightarrow V$, we denote by $(\cdot)|_U$ the functors $\varphi^\#(\cdot)$ and $\varphi^*(\cdot)$ by abuse of notation.
Let $\mathscr{A}_{X,\Lambda}=(\mathscr{A}_{X, \lambda})_{\lambda \in \Lambda}$ be a family of algebras of twisted differential operators on a smooth variety $X$.
Hereafter we deal with the direct product of categories $\prod_{\lambda \in \Lambda} \mathrm{Mod}_h(\mathscr{A}_{X,\lambda})$ and the corresponding product of derived categories.
Set
\begin{align*}
\mathrm{Mod}_h(\mathscr{A}_{X,\Lambda}) &:= \prod_{\lambda \in \Lambda} \mathrm{Mod}_h(\mathscr{A}_{X,\lambda}),\\
D^b_h(\mathscr{A}_{X,\Lambda}) &:= \prod_{\lambda \in \Lambda} D^b_h(\mathscr{A}_{X,\lambda}).
\end{align*}
We denote by $H^i, Df_+, Lf^*, (\cdot)|_U$ and $f^\#$ the direct products of the corresponding functors by abuse of notation.
Recall that $\mathcal{Z}(X)$ is the space of closed $1$-forms on $X$, which is isomorphic to $\textup{Aut}(\mathscr{D}_X)$ as an abelian group.
\begin{definition}\label{def:BoundedTrivialization}
We say that a tuple $(U, \varphi, \Phi)$
is a \define{trivialization} of $\mathscr{A}_{X,\Lambda}$
if $U$ is a smooth variety, $\varphi\colon U\rightarrow X$ is a surjective \'etale morphism and $\Phi$ is a family of isomorphisms $\Phi_\lambda \colon \mathscr{A}_{X,\lambda}|_U \xrightarrow{\simeq} \mathscr{D}_U$.
Let $T_1=(U, \varphi, \Phi)$ and $T_2=(V, \psi, \Psi)$ be trivializations of $\mathscr{A}_{X,\Lambda}$.
We denote by $\mathcal{Z}(T_1, T_2)\subset \mathcal{Z}(U\times_X V)$ the image of
\begin{align*}
\set{\widetilde{\varphi}^\#\Psi_{\lambda}\circ (\widetilde{\psi}^\#\Phi_{\lambda})^{-1}: \lambda \in \Lambda}
\end{align*}
by the isomorphism $\textup{Aut}(\mathscr{D}_{U\times_X V}) \rightarrow \mathcal{Z}(U\times_X V)$.
Here $\widetilde{\varphi}\colon U\times_X V\rightarrow V$ and $\widetilde{\psi}\colon U\times_X V \rightarrow U$ are the projections of the fiber product.
We write $T_1 \sim T_2$ when $\mathcal{Z}(T_1, T_2)$ spans a finite-dimensional subspace of $\mathcal{Z}(U\times_X V)$.
We say that a trivialization $(U, \varphi, \Phi)$ is \define{bounded} if
$(U,\varphi, \Phi) \sim (U,\varphi, \Phi)$ holds.
\end{definition}
\begin{remark}
Let $T=(U,\varphi, \Phi)$ be a trivialization of $\mathscr{A}_{X,\Lambda}$.
Then any element of $\mathcal{Z}(T, T)$ is a $1$-cocycle of the \v{C}ech complex of the sheaf of closed $1$-forms on $X$ with respect to the \'etale covering $\varphi\colon U\rightarrow X$.
Hence for each $\lambda \in \Lambda$, we have a $1$-cocycle $c(\lambda) \in \mathcal{Z}(T,T)$, and the cocycle defines an algebra $\mathscr{D}_{X,c(\lambda)}$ of twisted differential operators on $X$.
Then $\Phi_\lambda$ extends to an isomorphism $\Phi'_\lambda\colon \mathscr{A}_{X,\lambda} \rightarrow \mathscr{D}_{X,c(\lambda)}$.
It is obvious that the correspondence
\begin{align*}
(U,\varphi, \Phi) \mapsto (U,\varphi, (c(\lambda))_{\lambda \in \Lambda}, (\Phi'_\lambda)_{\lambda \in \Lambda})
\end{align*}
is one-to-one.
One can use such tuples instead of our trivializations.
\end{remark}
\begin{definition}
Let $T=(U, \varphi, \Phi)$ be a trivialization of $\mathscr{A}_{X,\Lambda}$
and $f\colon Y\rightarrow X$ a morphism of smooth varieties.
We set $f^\#T := (U\times_X Y, \widetilde{\varphi}, \widetilde{f}^\# \Phi)$,
where $\widetilde{\varphi}\colon U\times_X Y\rightarrow Y$ and $\widetilde{f}\colon U\times_X Y \rightarrow U$ are the projections of the fiber product.
\end{definition}
It is clear that $f^\#T$ is a trivialization of $f^\#\mathscr{A}_{X,\Lambda}$.
The relation $\sim$ is clearly symmetric, but it is not reflexive in general.
We shall show fundamental properties of bounded trivializations.
The following lemma is well-known and easy.
\begin{lemma}\label{lem:CommutativeClosed1Forms}
Let $f\colon U\rightarrow V$ be a morphism of smooth varieties.
Then the following diagram of abelian groups is commutative:
\begin{align*}
\xymatrix {
\textup{Aut}(\mathscr{D}_{V}) \ar[r]^-{f^\#} \ar[d]^-{\simeq} & \textup{Aut}(\mathscr{D}_U) \ar[d]^-{\simeq} \\
\mathcal{Z}(V) \ar[r]^-{f^*} & \mathcal{Z}(U).
}
\end{align*}
If, in addition, $f$ is dominant, then $f^*$ is injective.
\end{lemma}
\begin{proposition}\label{prop:FundamentalBornology}
Let $T_i=(U_i, \varphi_i,\Phi_i)$ ($i=1,2,3$) be trivializations of $\mathscr{A}_{X,\Lambda}$ and $f\colon Y\rightarrow X$ a morphism of smooth varieties.
\begin{enumerate}[(i)]
\item $\sim$ is transitive, i.e.\ $T_1\sim T_2 \text{ and }T_2 \sim T_3\Rightarrow T_1 \sim T_3$.
\item $T_1 \sim T_2 \Rightarrow f^\# T_1 \sim f^\# T_2$.
\item If $T_1$ is bounded, then so is $f^\#T_1$.
\item If $f$ is dominant, then the converses of (ii) and (iii) hold.
\end{enumerate}
\end{proposition}
\begin{proof}
To show (i), let $f_{ij}\colon U_1\times_X U_2 \times_X U_3 \rightarrow U_i \times_X U_j$
be the projections of the fiber product for $(i,j)=(1,2),(2,3),(1,3)$.
Assume $T_1\sim T_2$ and $T_2 \sim T_3$.
Then we have
\begin{align*}
f_{13}^*(\mathcal{Z}(T_1, T_3)) \subset f_{12}^*(\mathcal{Z}(T_1, T_2)) + f_{23}^*(\mathcal{Z}(T_2, T_3))
\end{align*}
by Lemma \ref{lem:CommutativeClosed1Forms}.
Since $f_{13}$ is surjective, $f_{13}^*$ is injective.
Hence $\mathcal{Z}(T_1, T_3)$ spans a finite-dimensional subspace of $\mathcal{Z}(U_1\times_X U_3)$.
By definition, (iii) follows from (ii).
We shall show (ii) and (iv).
Let $\widetilde{f}\colon U_1\times_X U_2\times_X Y \rightarrow U_1\times_X U_2$ be the projection.
By Lemma \ref{lem:CommutativeClosed1Forms}, we have
\begin{align*}
\widetilde{f}^*(\mathcal{Z}(T_1, T_2)) = \mathcal{Z}(f^\#T_1, f^\#T_2).
\end{align*}
This implies (ii) and (iv).
\end{proof}
By Proposition \ref{prop:FundamentalBornology}, the relation $\sim$ is an equivalence relation of bounded trivializations.
\begin{definition}\label{def:EquivalenceBornology}
An equivalence class of bounded trivializations is called a \define{bornology} of the family $\mathscr{A}_{X,\Lambda}$.
\end{definition}
If $\mathcal{B}$ is a bornology of $\mathscr{A}_{X,\Lambda}$ and $(\lambda(i))_{i \in I}$ is a family of elements of $\Lambda$, then $T:=(U,\varphi,(\Phi_{\lambda(i)})_{i \in I})$ is a bounded trivialization of $(\mathscr{A}_{X,\lambda(i)})_{i \in I}$ for any $(U,\varphi,\Phi) \in \mathcal{B}$.
It is clear that the equivalence class of $T$ does not depend on the choice of $(U,\varphi,\Phi) \in \mathcal{B}$.
We denote by the same symbol $\mathcal{B}$ the equivalence class of $T$ by abuse of notation.
\begin{definition}\label{def:pull-backBornology}
Let $f\colon Y\rightarrow X$ be a morphism of smooth varieties
and $\mathcal{B}$ a bornology of $\mathscr{A}_{X,\Lambda}$.
By Proposition \ref{prop:FundamentalBornology} (ii), the equivalence class of $f^\#T$ ($T \in \mathcal{B}$) does not depend on the choice of $T$.
We denote by $f^\#\mathcal{B}$ the equivalence class.
\end{definition}
The following proposition is an easy consequence of the definition.
\begin{proposition}\label{prop:pull-backBornology}
Let $f\colon Y\rightarrow X$ and $g\colon Z\rightarrow Y$ be morphisms of smooth varieties.
For any bornology $\mathcal{B}$ of $\mathscr{A}_{X,\Lambda}$, we have $(f\circ g)^\#\mathcal{B} = g^\# f^\# \mathcal{B}$ as bornologies of $(f\circ g)^\#\mathscr{A}_{X,\Lambda} = g^\# f^\# \mathscr{A}_{X,\Lambda}$.
\end{proposition}
It is not obvious that a bornology contains sufficiently many trivializations for applications.
The following proposition allows us to construct a good bounded trivialization from a given bounded trivialization.
\begin{proposition}\label{prop:LiftTrivialization}
Let $T_U=(U, \varphi, \Phi)$ be a trivialization of $\mathscr{A}_{X,\Lambda}$
and $f\colon V\rightarrow U$ a surjective \'etale morphism.
Put $T_V:=(V, \varphi\circ f, f^\#\Phi)$.
Then the following conditions are equivalent:
\begin{enumerate}[(i)]
\item $T_U$ is bounded,
\item $T_V$ is bounded,
\item $T_U \sim T_V$.
\end{enumerate}
In particular, for any bornology $\mathcal{B}$ of $\mathscr{A}_{X,\Lambda}$,
there exists a trivialization $(W, \psi, \Psi)$ in $\mathcal{B}$ such that $W$ is affine.
\end{proposition}
\begin{proof}
Let $f_1\colon V\times_X V\rightarrow U\times_X V$ and $f_2\colon U \times_X V \rightarrow U\times_X U$ be the morphisms determined by the universal property of the fiber products.
Then by Lemma \ref{lem:CommutativeClosed1Forms}, we have
\begin{align*}
f_2^*(\mathcal{Z}(T_U, T_U)) &= \mathcal{Z}(T_U, T_V) \\
f_1^*(\mathcal{Z}(T_U, T_V)) &= \mathcal{Z}(T_V, T_V).
\end{align*}
Since $f_1$ and $f_2$ are surjective, $f_2^*$ and $f_1^*$ are injective.
Hence (i), (ii) and (iii) are equivalent.
The second assertion is clear because for any variety $U$, there is a surjective \'etale morphism $W\rightarrow U$ from an affine variety $W$.
\end{proof}
\begin{definition}\label{def:UniformlyBoundedFamilyOri}
Let $T=(U, \varphi, \Phi)$ be a trivialization of $\mathscr{A}_{X,\Lambda}$ with affine $U$.
We say that an object $(\mathcal{M}_\lambda)_{\lambda \in \Lambda} \in \mathrm{Mod}_h(\mathscr{A}_{X,\Lambda})$ is \define{uniformly bounded} with respect to $T$ if
for any closed embedding $\iota\colon U\rightarrow \mathbb{C}^n$,
$m_{\iota}(\mathcal{M}_\lambda|_{U})$ is bounded as a function on $\Lambda$.
Here we consider an $\mathscr{A}_{X,\lambda}|_{U}$-module as a $\mathscr{D}_{U}$-module by the isomorphism $\Phi_\lambda\colon \mathscr{A}_{X,\lambda}|_U \rightarrow \mathscr{D}_U$.
We say that an object $\mathcal{M} \in D^b_h(\mathscr{A}_{X,\Lambda})$
is \define{uniformly bounded} with respect to $T$ if $H^i(\mathcal{M})$
is uniformly bounded for any $i$ and $H^i(\mathcal{M})$ vanishes for any $|i| \gg 0$.
Here $H^i(\mathcal{M})$ is the family $(H^i(\mathcal{M}_\lambda))_{\lambda \in \Lambda}$.
We denote by $\mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T)$ (resp.\ $D^b_{ub}(\mathscr{A}_{X,\Lambda}, T)$)
the full subcategory of $\mathrm{Mod}_{h}(\mathscr{A}_{X,\Lambda})$
(resp.\ $D^b_{h}(\mathscr{A}_{X,\Lambda})$)
consisting of uniformly bounded objects with respect to $T$.
\end{definition}
\begin{remark}
By Proposition \ref{prop:LocalMultiplicityEtale}, the boundedness of $m_{\iota}(\mathcal{M}_\lambda|_{U})$
does not depend on the choice of the embedding $\iota$.
\end{remark}
The following propositions are easy consequences of the definition.
\begin{proposition}\label{prop:FundamentalBoundeFamily}
Let $T$ be a bounded trivialization of $\mathscr{A}_{X,\Lambda}$.
Then the following hold.
\begin{enumerate}[(i)]
\item $\mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T)$ is abelian.
\item For a short exact sequence $0\rightarrow L\rightarrow M \rightarrow N\rightarrow 0$
in $\mathrm{Mod}_{h}(\mathscr{A}_{X,\Lambda})$,
both $L$ and $N$ are uniformly bounded if and only if so is $M$.
\item $D^b_{ub}(\mathscr{A}_{X,\Lambda}, T)$ is a triangulated subcategory of
$D^b_{h}(\mathscr{A}_{X,\Lambda})$.
\end{enumerate}
\end{proposition}
\begin{proposition}
Let $T=(U,\varphi, \Phi)$ be a bounded trivialization with affine $U$.
Then for any $(\mathcal{M}_\lambda)_{\lambda \in \Lambda} \in \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T)$, the function $\Lambda \ni \lambda \mapsto \mathrm{Len}_{\mathscr{A}_{X,\lambda}}(\mathcal{M}_{\lambda})$
is bounded.
\end{proposition}
\begin{proof}
Fix a closed embedding $\iota\colon U\rightarrow \mathbb{C}^n$.
Then we have
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{X,\lambda}|_U}(\mathcal{M}_\lambda|_U) = \mathrm{Len}_{\mathscr{D}_{\mathbb{C}^n}}(\iota_+(\mathcal{M}_\lambda|_{U})) \leq m_\iota(\mathcal{M}_\lambda|_{U}).
\end{align*}
The first equality follows from the Kashiwara equivalence (Fact \ref{fact:Kashiwara}) and the second inequality from Fact \ref{fact:WeylAlgebraExact}.
By the definition of a uniformly bounded family, there is a constant $C$ independent of $\lambda \in \Lambda$ such that
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{X,\lambda}|_U}(\mathcal{M}_\lambda|_U) \leq m_\iota(\mathcal{M}_\lambda|_{U}) \leq C.
\end{align*}
Since $\varphi$ is surjective \'etale, the inverse image functor $\varphi^*$ is exact and sends a non-zero module to a non-zero module (see the proof of Proposition \ref{prop:LocalMultiplicityEtale}).
Hence we obtain
\begin{align*}
\mathrm{Len}_{\mathscr{A}_{X,\lambda}}(\mathcal{M}_\lambda) \leq C
\end{align*}
for any $\lambda \in \Lambda$.
\end{proof}
We will show that uniform boundedness is preserved by inverse images
and direct images.
To do so, we need the following basic proposition.
\begin{proposition}\label{prop:WellDefinedBoundedFamilyCat}
Let $T_i=(U_i, \varphi_i, \Phi_i)$ ($i=1,2$) be bounded trivializations of $\mathscr{A}_{X,\Lambda}$ with affine $U_i$.
If $T_1 \sim T_2$, then we have
\begin{align*}
\mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T_1) &= \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T_2),\\
D^b_{ub}(\mathscr{A}_{X,\Lambda}, T_1) &= D^b_{ub}(\mathscr{A}_{X,\Lambda}, T_2).
\end{align*}
\end{proposition}
\begin{proof}
By definition, the second equation follows from the first one.
Let $p_i\colon U_1\times_X U_2 \rightarrow U_i$ ($i=1,2$) be the projections and put $T'_i:=(U_1\times_X U_2, \varphi_i \circ p_i, p_i^\#\Phi_i)$ for $i=1,2$.
Applying Proposition \ref{prop:LocalMultiplicityEtale} to $f = p_i$, we have
\begin{align}
\mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T_i) = \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T'_i). \label{eqn:ProofUniformlyBoundedCat}
\end{align}
Since $T_1 \sim T_2$, the set $\mathcal{Z}(T_1, T_2)$ spans a finite-dimensional subspace of $\mathcal{Z}(U_1\times_X U_2)$.
Applying Corollary \ref{cor:MultiplicityTwisted} to $W=\mathrm{span}_\mathbb{C} \mathcal{Z}(T_1, T_2)$, we have
\begin{align*}
\mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T'_1) = \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T'_2).
\end{align*}
This and \eqref{eqn:ProofUniformlyBoundedCat} imply the desired equation.
\end{proof}
By Proposition \ref{prop:WellDefinedBoundedFamilyCat}, the following definition does not depend on the choice of $T$.
\begin{definition}
Let $\mathcal{B}$ be a bornology of $\mathscr{A}_{X,\Lambda}$.
We set
\begin{align*}
\mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B}) &:= \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, T),\\
D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B}) &:= D^b_{ub}(\mathscr{A}_{X,\Lambda}, T),
\end{align*}
where $T=(U,\varphi, \Phi)$ is a bounded trivialization in $\mathcal{B}$
with affine $U$.
\end{definition}
\begin{theorem}\label{thm:FunctorOnUniformlyBounded}
Let $f\colon Y\rightarrow X$ be a morphism of smooth varieties and $\mathcal{B}$ a bornology of $\mathscr{A}_{X,\Lambda}$.
The direct image and inverse image functors preserve uniform boundedness, that is, we have functors
\begin{align*}
Df_+\colon D^b_{ub}(f^\#\mathscr{A}_{X,\Lambda}, f^\# \mathcal{B}) \rightarrow D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B}), \\
Lf^*\colon D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})\rightarrow D^b_{ub}(f^\#\mathscr{A}_{X,\Lambda}, f^\# \mathcal{B}).
\end{align*}
\end{theorem}
\begin{proof}
Take $T=(U, \varphi, \Phi) \in \mathcal{B}$ with affine $U$, and a surjective \'etale morphism $V\rightarrow U\times_X Y$ from an affine variety $V$.
Consider the following diagram:
\begin{align*}
\xymatrix{
V \ar[r]^-{g} & U\times_X Y \ar[r]^-{\widetilde{f}} \ar[d]^-{\widetilde{\varphi}} & U \ar[d]^-{\varphi} \\
& Y \ar[r]^-{f} & X,
}
\end{align*}
where $\widetilde{\varphi}$ and $\widetilde{f}$ are the projections.
Then $(V, \widetilde{\varphi}\circ g, (\widetilde{f}\circ g)^\# \Phi)$
is in $f^\#\mathcal{B}$ by Proposition \ref{prop:LiftTrivialization}.
Since $L(\widetilde{\varphi}\circ g)^* \circ Lf^* = L(\widetilde{f}\circ g)^* \circ L\varphi^*$ holds (Fact \ref{fact:FundamentalDmodule} \eqref{eqn:InverseImage}),
the assertion for the inverse image functor is reduced to Proposition \ref{prop:LocalMultiplicity} for $f = \widetilde{f}\circ g$.
We shall show the assertion for $Df_+$.
Take a finite affine open covering $\set{V_i}_{i = 0,1,2,\ldots, r}$ of $U\times_X Y$ and replace $V$ with the $(r+1)$-fold fiber product of $\bigsqcup_{i} V_i$.
Let $\mathcal{M} \in \mathrm{Mod}_{ub}(f^\#\mathscr{A}_{X,\Lambda}, f^\#\mathcal{B})$
and fix $\lambda \in \Lambda$.
Then $\mathcal{M}_\lambda|_{U\times_X Y}$ is quasi-isomorphic to the \v{C}ech complex
\begin{align*}
0\rightarrow C^0 \rightarrow C^1 \rightarrow \cdots \rightarrow C^r \rightarrow 0
\end{align*}
with respect to the covering $\set{V_i}$.
By the construction of the \v{C}ech complex, $\bigoplus_i C^i$ is a direct summand of $g_*(\mathcal{M}_\lambda|_V)$.
By the base change theorem (Fact \ref{fact:BaseChange}), there is an isomorphism $L\varphi^* \circ Df_+ \simeq D\widetilde{f}_+ \circ L\widetilde{\varphi}^*$ of functors, and hence we have
\begin{align*}
L\varphi^* \circ Df_+(\mathcal{M}_\lambda) \simeq D\widetilde{f}_+ (\mathcal{M}_\lambda|_{U\times_X Y}) \simeq D\widetilde{f}_+(C^\bullet).
\end{align*}
For a closed embedding $\iota\colon U\rightarrow \mathbb{C}^n$, we have
\begin{align*}
m_\iota(D\widetilde{f}_+(C^\bullet)) &\leq \sum_i m_{\iota}(D\widetilde{f}_+(C^i)) \\
&\leq m_\iota(D\widetilde{f}_+ \circ g_* (\mathcal{M}_\lambda|_V)) \\
& = m_\iota(D(\widetilde{f} \circ g)_+ (\mathcal{M}_\lambda|_V)).
\end{align*}
Here the first inequality follows from Lemma \ref{lem:AdditiveFunctionDerived} (iii) for the complex $C^\bullet$.
Note that $Dg_+$ is isomorphic to $g_*$ (see the proof of Proposition \ref{prop:LocalMultiplicityEtale}).
By Proposition \ref{prop:LocalMultiplicity} and $\mathcal{M} \in \mathrm{Mod}_{ub}(f^\#\mathscr{A}_{X,\Lambda}, f^\#\mathcal{B})$, there is a constant $C$ independent of $\lambda$ such that
\begin{align*}
m_{\iota}(L\varphi^* \circ Df_+(\mathcal{M}_\lambda)) \leq m_\iota(D(\widetilde{f} \circ g)_+ (\mathcal{M}_\lambda|_V)) \leq C.
\end{align*}
This shows $Df_+(\mathcal{M}) \in D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})$.
\end{proof}
We study the external tensor product of uniformly bounded families.
Let $\mathscr{A}_{Y,\Lambda}$ be a family of algebras of twisted differential operators on a smooth variety $Y$ with the same index set $\Lambda$ as $\mathscr{A}_{X,\Lambda}$.
\begin{definition}\label{def:TensorBornology}
Let $\mathcal{B}$ and $\mathcal{B}'$ be bornologies of $\mathscr{A}_{X,\Lambda}$ and $\mathscr{A}_{Y,\Lambda}$, respectively.
We denote by $\mathcal{B}\boxtimes \mathcal{B}'$ the equivalence class of $(U\times V, \varphi \times \psi, \Phi\boxtimes \Psi = (\Phi_\lambda \boxtimes \Psi_\lambda)_{\lambda \in \Lambda})$, where $(U, \varphi, \Phi) \in \mathcal{B}$
and $(V, \psi, \Psi) \in \mathcal{B}'$.
If $X = Y$, we denote by $\mathcal{B}\mathbin{\#} \mathcal{B}'$ the pull-back of $\mathcal{B}\boxtimes \mathcal{B}'$ by the diagonal embedding $X\hookrightarrow X\times X$.
\end{definition}
It is easy to see that this definition does not depend on the choice of the trivializations.
We set $\mathscr{A}_{X,\Lambda}\boxtimes \mathscr{A}_{Y,\Lambda}:=(\mathscr{A}_{X,\lambda}\boxtimes \mathscr{A}_{Y,\lambda})_{\lambda \in \Lambda}$ and
$\mathcal{M}\boxtimes \mathcal{N} := (\mathcal{M}_\lambda \boxtimes \mathcal{N}_\lambda)_{\lambda \in \Lambda}$ for $\mathcal{M} \in D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})$
and $\mathcal{N} \in D^b_{ub}(\mathscr{A}_{Y,\Lambda}, \mathcal{B}')$.
By Fact \ref{fact:MultiplicityTensor}, we obtain the following theorem.
\begin{theorem}\label{thm:TensorUniformlyBounded}
Let $\mathcal{B}$ and $\mathcal{B}'$ be bornologies of $\mathscr{A}_{X,\Lambda}$ and $\mathscr{A}_{Y,\Lambda}$, respectively.
Then $\mathcal{B}\boxtimes \mathcal{B}'$ is a bornology of $\mathscr{A}_{X,\Lambda}\boxtimes \mathscr{A}_{Y,\Lambda}$ and we have a bifunctor
\begin{align*}
(\cdot) \boxtimes (\cdot)\colon D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B}) \times D^b_{ub}(\mathscr{A}_{Y,\Lambda}, \mathcal{B}') \rightarrow D^b_{ub}(\mathscr{A}_{X,\Lambda}\boxtimes \mathscr{A}_{Y,\Lambda}, \mathcal{B}\boxtimes \mathcal{B}').
\end{align*}
Moreover, for any $\mathcal{M} \in D^b_{h}(\mathscr{A}_{X,\Lambda})$
and $\mathcal{N} \in D^b_{h}(\mathscr{A}_{Y,\Lambda})$,
the external tensor product $\mathcal{M}\boxtimes \mathcal{N}$ is uniformly bounded if and only if both $\mathcal{M}$ and $\mathcal{N}$ are.
\end{theorem}
\subsection{Twisting, opposite and tensor product}\label{sect:twisting}
We consider three operations on algebras of twisted differential operators:
twisting by an invertible sheaf, taking the opposite algebra, and taking tensor products.
Corresponding to these operations, we introduce operations on bornologies.
Let $\mathscr{A}_{X, \Lambda} = (\mathscr{A}_{X,\lambda})_{\lambda \in \Lambda}$ be a family of algebras of twisted differential operators on a smooth variety $X$.
Let $\mathcal{B}$ be a bornology of $\mathscr{A}_{X,\Lambda}$ and $\mathcal{L}$ an invertible sheaf on $X$.
Then we have a new family
\begin{align*}
\mathscr{A}_{X,\Lambda}^\mathcal{L} := (\mathcal{L}\otimes_{\rsheaf{X}} \mathscr{A}_{X,\lambda}\otimes_{\rsheaf{X}} \mathcal{L}^{\vee})_{\lambda \in \Lambda}.
\end{align*}
Note that for a morphism $f\colon Y\rightarrow X$ of smooth varieties, there is a canonical isomorphism
\begin{align}
f^\#(\mathcal{L}\otimes_{\rsheaf{X}} \mathscr{A}_{X,\lambda}\otimes_{\rsheaf{X}} \mathcal{L}^{\vee}) \simeq f^*(\mathcal{L}) \otimes_{\rsheaf{Y}} f^\#\mathscr{A}_{X,\lambda}\otimes_{\rsheaf{Y}} f^*(\mathcal{L})^\vee. \label{eqn:IsomTwist}
\end{align}
See e.g.\ \cite[Lemma 1.1.5]{KaTa96}.
We shall construct a bornology of $\mathscr{A}^\mathcal{L}_{X,\Lambda}$.
Since $\mathcal{L}$ is an invertible sheaf, there is a bounded trivialization $T=(U,\varphi, \Phi) \in \mathcal{B}$ such that $\mathcal{L}|_U$ is isomorphic to $\rsheaf{U}$.
Take a trivialization $\alpha\colon \mathcal{L}|_U \xrightarrow{\simeq} \rsheaf{U}$.
Then $T$ and $\alpha$ induce an isomorphism $\Phi^{\mathcal{L}, \alpha}$ given by
\begin{align*}
(\mathcal{L}\otimes_{\rsheaf{X}} \mathscr{A}_{X,\Lambda}\otimes_{\rsheaf{X}} \mathcal{L}^{\vee})|_U \xrightarrow{\textup{id}\otimes \Phi\otimes \textup{id}} \mathcal{L}|_U\otimes_{\rsheaf{U}} \mathscr{D}_{U}\otimes_{\rsheaf{U}} (\mathcal{L}|_U)^{\vee} \rightarrow \mathscr{D}_{U}.
\end{align*}
We obtain a trivialization $T^{\mathcal{L}, \alpha}=(U,\varphi, \Phi^{\mathcal{L},\alpha})$ of $\mathscr{A}_{X,\Lambda}^\mathcal{L}$.
\begin{lemma}\label{lem:BoundedTwistedBornology}
Take $S=(V,\psi, \Psi) \in \mathcal{B}$ and an isomorphism $\beta\colon \mathcal{L}|_V \rightarrow \rsheaf{V}$.
Then we have $S^{\mathcal{L},\beta}\sim T^{\mathcal{L},\alpha}$.
In particular, $T^{\mathcal{L},\alpha}$ is a bounded trivialization.
\end{lemma}
\begin{proof}
$\alpha$ and $\beta$ induce an isomorphism
\begin{align*}
\rsheaf{U\times_X V} \xrightarrow{\alpha^{-1}|_{U\times_X V}} \mathcal{L}|_{U\times_X V} \xrightarrow{\beta|_{U\times_X V}} \rsheaf{U\times_X V}.
\end{align*}
We write $f \in \rring{U\times_X V}^{\times}$ for the image of $1$ under this isomorphism.
Then we have
\begin{align*}
\mathcal{Z}(T^{\mathcal{L},\alpha},S^{\mathcal{L},\beta}) = f^{-1}df + \mathcal{Z}(T,S)
\end{align*}
(see Definition \ref{def:BoundedTrivialization}).
This shows the lemma.
\end{proof}
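The term $f^{-1}df$ arises from conjugation by the transition function of $\mathcal{L}$. As a sketch, assuming the convention that the automorphism attached to a closed $1$-form $\omega$ sends a vector field $\xi$ to $\xi + \omega(\xi)$: conjugation by $f$ acts on vector fields by
\begin{align*}
f^{-1}\xi f = \xi + f^{-1}\xi(f) = \xi + (f^{-1}df)(\xi),
\end{align*}
so changing the trivialization of $\mathcal{L}$ by $f$ shifts the associated closed $1$-forms by $d\log f = f^{-1}df$, while the discrepancy between $\Phi$ and $\Psi$ contributes $\mathcal{Z}(T,S)$ unchanged.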
By the lemma, the following definition is well-defined.
\begin{definition}
We denote by $\mathcal{B}^\mathcal{L}$ the equivalence class of $T^{\mathcal{L}, \alpha}$.
\end{definition}
It is well-known that the functor
\begin{align*}
\mathcal{L}\otimes_{\rsheaf{X}}(\cdot) \colon \mathrm{Mod}_h(\mathscr{A}_{X,\lambda})
\rightarrow \mathrm{Mod}_h(\mathcal{L}\otimes_{\rsheaf{X}}\mathscr{A}_{X,\lambda}\otimes_{\rsheaf{X}} \mathcal{L}^{\vee})
\end{align*}
gives an equivalence of categories.
By abuse of notation, we also write $\mathcal{L}\otimes_{\rsheaf{X}}(\cdot)$ for the direct product $\prod_{\lambda \in \Lambda}\mathcal{L}\otimes_{\rsheaf{X}}(\cdot)$ of these functors.
\begin{proposition}
The functor $\mathcal{L}\otimes_{\rsheaf{X}}(\cdot)$ preserves uniform boundedness, that is, we have functors
\begin{align*}
\mathcal{L}\otimes_{\rsheaf{X}}(\cdot) \colon& \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})
\rightarrow \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}^\mathcal{L}, \mathcal{B}^\mathcal{L}), \\
\mathcal{L}\otimes_{\rsheaf{X}}(\cdot) \colon& D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B})
\rightarrow D^b_{ub}(\mathscr{A}_{X,\Lambda}^\mathcal{L}, \mathcal{B}^\mathcal{L}).
\end{align*}
Moreover, the two functors give equivalences of categories.
\end{proposition}
\begin{proof}
Since the assertion is local on $X$, we may assume $\mathcal{L} \simeq \rsheaf{X}$.
In this case, the proposition is clear.
\end{proof}
Next we consider the family of opposite algebras $\mathscr{A}_{X,\lambda}^{\mathrm{op}}$.
Note that $(\cdot)^\mathrm{op}$ is a functor on the category of algebras of twisted differential operators on $X$.
We set $\mathscr{A}_{X,\Lambda}^\mathrm{op}:= (\mathscr{A}_{X,\lambda}^{\mathrm{op}})_{\lambda \in \Lambda}$.
We shall construct a bornology of $\mathscr{A}_{X,\Lambda}^\mathrm{op}$ from the bornology $\mathcal{B}$.
Recall that there is a canonical isomorphism
\begin{align}
\mathscr{D}_{X}^\mathrm{op} \simeq \Omega_{X}\otimes_{\rsheaf{X}} \mathscr{D}_{X} \otimes_{\rsheaf{X}} \Omega_X^\vee, \label{eqn:IsomOp}
\end{align}
where $\Omega_X$ is the canonical sheaf of $X$.
See \cite[Lemma 1.2.7]{HTT08}.
The isomorphism induces an automorphism of the space $\mathcal{Z}(X)$ of closed $1$-forms as
\begin{align*}
\mathcal{Z}(X) \xrightarrow{\simeq}& \textup{Aut}(\mathscr{D}_X) \xrightarrow{(\cdot)^\mathrm{op}} \textup{Aut}(\mathscr{D}_X^\mathrm{op})
\xrightarrow{\simeq} \textup{Aut}(\Omega_{X}\otimes_{\rsheaf{X}} \mathscr{D}_{X} \otimes_{\rsheaf{X}} \Omega_X^\vee) \\
&\xrightarrow{\simeq} \textup{Aut}(\mathscr{D}_X) \xrightarrow{\simeq} \mathcal{Z}(X).
\end{align*}
Here the fourth isomorphism comes from the isomorphism $\mathscr{D}_X \simeq \Omega_{X}^\vee\otimes_{\rsheaf{X}}(\Omega_{X}\otimes_{\rsheaf{X}} \mathscr{D}_{X} \otimes_{\rsheaf{X}} \Omega_X^\vee)\otimes_{\rsheaf{X}} \Omega_X$.
\begin{lemma}\label{lem:OpAutomorphism}
The above automorphism of $\mathcal{Z}(X)$ is multiplication by $-1$.
\end{lemma}
\begin{proof}
The lemma can be shown by an easy explicit computation.
\end{proof}
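To indicate the computation, assume the convention that the automorphism of $\mathscr{D}_X$ attached to $\omega \in \mathcal{Z}(X)$ sends a vector field $\xi$ to $\xi + \omega(\xi)$ and fixes functions. In local coordinates with $\Omega_X$ trivialized by a volume form, the isomorphism \eqref{eqn:IsomOp} becomes the formal adjoint $A\colon P\mapsto P^{t}$, which fixes functions and sends a coordinate vector field $\partial_i$ to $-\partial_i$ (up to a zeroth-order term, which vanishes for this trivialization). Transporting the automorphism $\theta_\omega$ through $A$ gives
\begin{align*}
\partial_i \xrightarrow{\;A^{-1}\;} -\partial_i \xrightarrow{\;\theta_\omega\;} -\partial_i - \omega(\partial_i) \xrightarrow{\;A\;} \partial_i - \omega(\partial_i),
\end{align*}
so the induced automorphism corresponds to $-\omega$.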
Note that for an \'etale morphism $f\colon U\rightarrow X$, there are canonical isomorphisms
\begin{align*}
f^*\Omega_X &\simeq \Omega_U, \\
\Omega_f &\simeq \rsheaf{U}, \\
f^\#(\mathscr{A}_{X,\lambda}^\mathrm{op}) &\simeq (f^\#\mathscr{A}_{X,\lambda})^\mathrm{op}.
\end{align*}
In particular, the canonical isomorphism \eqref{eqn:IsomOp} commutes with pull-back by \'etale morphisms.
Take a bounded trivialization $T=(U,\varphi, \Phi) \in \mathcal{B}$ such that $\Omega_X|_U\simeq \Omega_U$ is isomorphic to $\rsheaf{U}$.
Take a trivialization $\alpha\colon \Omega_U \xrightarrow{\simeq} \rsheaf{U}$.
Then $T$ and $\alpha$ induce an isomorphism $\Phi^{\mathrm{op}, \alpha}$ given by
\begin{align*}
\mathscr{A}_{X,\Lambda}^{\mathrm{op}}|_U \xrightarrow{\Phi^\mathrm{op}} \mathscr{D}_{U}^\mathrm{op} \xrightarrow{\simeq} \Omega_U\otimes_{\rsheaf{U}} \mathscr{D}_{U}\otimes_{\rsheaf{U}} \Omega_U^{\vee} \rightarrow \mathscr{D}_{U}.
\end{align*}
We obtain a trivialization $T^{\mathrm{op}, \alpha}=(U,\varphi, \Phi^{\mathrm{op},\alpha})$.
\begin{lemma}
Take $S=(V,\psi,\Psi) \in \mathcal{B}$ and an isomorphism $\beta\colon \Omega_V \rightarrow \rsheaf{V}$.
Then we have $S^{\mathrm{op}, \beta}\sim T^{\mathrm{op}, \alpha}$.
In particular, $T^{\mathrm{op}, \alpha}$ is bounded.
\end{lemma}
\begin{proof}
As we have seen in the proof of Lemma \ref{lem:BoundedTwistedBornology}, there is $f \in \rring{U\times_X V}^\times$ such that
\begin{align*}
\mathcal{Z}(T^{\mathrm{op},\alpha}, S^{\mathrm{op}, \beta}) = f^{-1}df - \mathcal{Z}(T,S).
\end{align*}
The sign before $\mathcal{Z}(T,S)$ comes from Lemma \ref{lem:OpAutomorphism}.
This shows the lemma.
\end{proof}
\begin{definition}
We denote by $\mathcal{B}^\mathrm{op}$ the equivalence class of $T^{\mathrm{op},\alpha}$.
\end{definition}
Canonical isomorphisms between algebras of twisted differential operators give rise to identities between bornologies.
Let $\iota$ be the diagonal embedding $X\hookrightarrow X\times X$.
For two algebras $\mathscr{A}_1$ and $\mathscr{A}_2$ of twisted differential operators on $X$,
we set
\begin{align*}
\mathscr{A}_1 \mathbin{\#} \mathscr{A}_2 := \iota^\#(\mathscr{A}_1 \boxtimes \mathscr{A}_2).
\end{align*}
We use the same notation for families of algebras.
Let $\mathscr{A}_{X,\Lambda}, \mathscr{B}_{X,\Lambda}$ and $\mathscr{C}_{X,\Lambda}$
be families of algebras of twisted differential operators on $X$ with the same index set $\Lambda$.
Let $f\colon Y\rightarrow X$ be a morphism of smooth varieties.
For the constant family $\mathscr{D}_{X,\Lambda}:=(\mathscr{D}_{X})_{\lambda \in \Lambda}$, we consider a bounded trivialization $(X,\textup{id}_X, \textup{id})$.
We denote by $\mathcal{B}_\textup{id}$ the equivalence class of the trivialization.
We use the same notation for the constant family $\mathscr{D}_{Y,\Lambda}$ on $Y$.
Fix an invertible sheaf $\mathcal{L}$ on $X$.
By \cite[\S 1]{KaTa96}, we have canonical isomorphisms
\begin{align*}
(\mathscr{A}_{X,\Lambda}^\mathcal{L})^\mathrm{op} &\simeq (\mathscr{A}_{X,\Lambda}^\mathrm{op})^{\mathcal{L}^\vee} \\
\mathscr{A}_{X,\Lambda} \mathbin{\#} \mathscr{B}_{X,\Lambda} &\simeq \mathscr{B}_{X,\Lambda} \mathbin{\#} \mathscr{A}_{X,\Lambda}\\
(\mathscr{A}_{X,\Lambda} \mathbin{\#} \mathscr{B}_{X,\Lambda})\mathbin{\#} \mathscr{C}_{X,\Lambda} &\simeq \mathscr{A}_{X,\Lambda} \mathbin{\#} (\mathscr{B}_{X,\Lambda}\mathbin{\#} \mathscr{C}_{X,\Lambda}) \\
\mathscr{D}_{X,\Lambda} \mathbin{\#} \mathscr{A}_{X,\Lambda} &\simeq \mathscr{A}_{X,\Lambda} \\
\mathscr{A}_{X,\Lambda}^\mathrm{op} \mathbin{\#} \mathscr{A}_{X,\Lambda} &\simeq \mathscr{D}_{X,\Lambda}^\mathrm{op} \\
f^\# \mathscr{D}_{X,\Lambda} &\simeq \mathscr{D}_{Y,\Lambda} \\
f^\# (\mathscr{A}_{X,\Lambda}^\mathcal{L}) &\simeq (f^\#\mathscr{A}_{X,\Lambda})^{f^*\mathcal{L}} \\
f^\#(\mathscr{A}_{X,\Lambda}^\mathrm{op})^{\Omega_f} &\simeq (f^\#(\mathscr{A}_{X,\Lambda}))^\mathrm{op}.
\end{align*}
Here we set $\Omega_f = f^{-1}\Omega_{X}^{\vee}\otimes_{f^{-1}\rsheaf{X}}\Omega_{Y}$ (see Subsection \ref{subsect:DirectImage}).
Since the isomorphisms are canonical, they are natural in $\mathscr{A}_{X,\Lambda}, \mathscr{B}_{X,\Lambda}$ and $\mathscr{C}_{X,\Lambda}$.
It is easy to see that the isomorphisms and the operations commute with pull-back along the following cartesian square:
\begin{align*}
\xymatrix{
Y \ar[r]^-f & X \\
U\times_X Y \ar[r]^-{\widetilde{f}} \ar[u]^-{\widetilde{\varphi}}& U \ar[u]^-{\varphi},
}
\end{align*}
where $\varphi\colon U\rightarrow X$ is an \'etale morphism.
For example, the following diagram commutes:
\begin{align*}
\xymatrix{
f^\#(\mathscr{A}_{X,\Lambda}^\mathrm{op})^{\Omega_f}|_{U\times_X Y} \ar[r]^{\simeq} & (f^\#(\mathscr{A}_{X,\Lambda}))^\mathrm{op}|_{U\times_X Y} \\
\widetilde{f}^\#((\mathscr{A}_{X,\Lambda}|_U)^\mathrm{op})^{\Omega_{\widetilde{f}}} \ar[r]^{\simeq} \ar[u]^{\simeq} & (\widetilde{f}^\#(\mathscr{A}_{X,\Lambda}|_U))^\mathrm{op} \ar[u]^\simeq.
}
\end{align*}
\begin{proposition}\label{prop:IdentityBornology}
Let $\mathcal{B}_1, \mathcal{B}_2$ and $\mathcal{B}_3$ be bornologies of $\mathscr{A}_{X,\Lambda}$,
$\mathscr{B}_{X,\Lambda}$ and $\mathscr{C}_{X,\Lambda}$, respectively.
Under the above identifications, we have
\begin{enumerate}[(i)]
\item $(\mathcal{B}_1^\mathcal{L})^\mathrm{op} = (\mathcal{B}_1^\mathrm{op})^{\mathcal{L}^\vee}$,
\item $\mathcal{B}_1 \mathbin{\#} \mathcal{B}_2 = \mathcal{B}_2 \mathbin{\#} \mathcal{B}_1$,
\item $(\mathcal{B}_1 \mathbin{\#} \mathcal{B}_2) \mathbin{\#} \mathcal{B}_3 = \mathcal{B}_1 \mathbin{\#} (\mathcal{B}_2 \mathbin{\#} \mathcal{B}_3)$,
\item $\mathcal{B}_{\textup{id}} \mathbin{\#} \mathcal{B}_1 = \mathcal{B}_1$,
\item $\mathcal{B}^\mathrm{op}_1 \mathbin{\#} \mathcal{B}_1 = \mathcal{B}^\mathrm{op}_{\textup{id}}$,
\item $f^\#\mathcal{B}_\textup{id} = \mathcal{B}_\textup{id}$,
\item $f^\#(\mathcal{B}_1^\mathcal{L}) = (f^\# \mathcal{B}_1)^{f^*\mathcal{L}}$,
\item $f^\#(\mathcal{B}_1^\mathrm{op})^{\Omega_f} = (f^\#\mathcal{B}_1)^\mathrm{op}$.
\end{enumerate}
\end{proposition}
\begin{proof}
The proposition is clear by the constructions of bornologies and the naturality of the canonical isomorphisms as mentioned above.
\end{proof}
\subsection{Integral transform}
We consider integral transforms of $\mathscr{D}$-modules.
Let $\mathscr{A}_{X,\Lambda}$ and $\mathscr{A}_{Y,\Lambda}$ be families of algebras of twisted differential operators on smooth varieties $X$ and $Y$ with the same index set $\Lambda$, respectively.
Fix bornologies $\mathcal{B}_X$ and $\mathcal{B}_Y$ of $\mathscr{A}_{X,\Lambda}$ and $\mathscr{A}_{Y,\Lambda}$, respectively.
Recall that there is a canonical isomorphism
\begin{align*}
\mathscr{A}_{X,\lambda}^\mathrm{op} \mathbin{\#} \mathscr{A}_{X,\lambda} \simeq \mathscr{D}_X^\mathrm{op}.
\end{align*}
We write $\iota\colon X\rightarrow X\times X$ for the diagonal embedding.
The isomorphism is induced from the action of $\mathscr{D}_X^\mathrm{op}$ on $\iota^*(\mathscr{A}_{X,\lambda}^\mathrm{op} \boxtimes \mathscr{A}_{X,\lambda}) \simeq \mathscr{A}_{X,\lambda}\otimes_{\rsheaf{X}}\mathscr{A}_{X,\lambda}$ given by
\begin{align}
Z\cdot (A\otimes B) = A\widetilde{Z}\otimes B - A\otimes \widetilde{Z}B \label{eqn:ActionTX}
\end{align}
for $Z \in \mathcal{T}_X (\subset \mathscr{D}_{X}^\mathrm{op})$ and $A,B\in \mathscr{A}_{X,\lambda}$, where $\widetilde{Z}$
is a section of $\mathcal{P}(\mathscr{A}_{X,\lambda})$ such that $\sigma(\widetilde{Z}) = Z$.
See Definition \ref{def:Picard} for the notation of Picard algebroids.
\begin{lemma}\label{lem:AssociativeTensor}
Fix $\lambda \in \Lambda$.
For any $\mathcal{A} \in \mathrm{Mod}_{qc}(\mathscr{D}_{X}^\mathrm{op})$, $\mathcal{B} \in \mathrm{Mod}_{qc}(\mathscr{A}_{X,\lambda}^\mathrm{op})$ and $\mathcal{C} \in \mathrm{Mod}_{qc}(\mathscr{A}_{X,\lambda})$, we have
\begin{align*}
\mathcal{A} \otimes_{\mathscr{D}_{X}} (\Omega_X^\vee \otimes_{\rsheaf{X}}(\mathcal{B} \otimes_{\rsheaf{X}} \mathcal{C})) \simeq (\mathcal{A} \otimes_{\rsheaf{X}} \Omega_X^\vee \otimes_{\rsheaf{X}}\mathcal{B}) \otimes_{\mathscr{A}_{X,\lambda}} \mathcal{C}
\end{align*}
as sheaves on $X$.
\end{lemma}
\begin{proof}
Both sides of the asserted isomorphism can be regarded as the sheaf of $\mathcal{T}_X$-coinvariants in
$\mathcal{A} \otimes_{\rsheaf{X}} \Omega_X^\vee \otimes_{\rsheaf{X}}\mathcal{B} \otimes_{\rsheaf{X}} \mathcal{C}$.
It is easy to see that the two actions of $\mathcal{T}_X$ coincide by using the action of Picard algebroids (see \eqref{eqn:ActionTX}).
\end{proof}
\begin{theorem}\label{thm:UniformlyBoundedIntegralTransform}
Let $\mathcal{M} \in D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B}_X)$
and $\mathcal{N} \in D^b_{ub}(\mathscr{A}_{Y,\Lambda}\boxtimes \mathscr{A}_{X,\Lambda}^\mathrm{op}, \mathcal{B}_Y\boxtimes \mathcal{B}^\mathrm{op}_X)$.
Then we have
\begin{align*}
(Rq_*(\mathcal{N}_\lambda \otimes^L_{p^{-1}\mathscr{A}_{X,\lambda}} p^{-1}\mathcal{M}_\lambda))_{\lambda \in \Lambda} \in D^b_{ub}(\mathscr{A}_{Y,\Lambda}, \mathcal{B}_Y),
\end{align*}
where $p$ (resp.\ $q$) is the projection from $Y\times X$ onto $X$ (resp.\ $Y$).
\end{theorem}
\begin{proof}
Fix $\lambda \in \Lambda$.
Then we have
\begin{align*}
&Dq_+(p^*\Omega_X^\vee \otimes^L_{\rsheaf{Y\times X}}(\mathcal{N}_{\lambda}\otimes^L_{\rsheaf{Y\times X}} Lp^*\mathcal{M}_{\lambda}))\\
\simeq &Rq_*((\mathscr{A}_{Y,\lambda}\boxtimes \Omega_X) \otimes_{\mathscr{A}_{Y,\lambda}\boxtimes \mathscr{D}_{X}}^L (p^*\Omega_X^\vee \otimes^L_{\rsheaf{Y\times X}}(\mathcal{N}_{\lambda}\otimes^L_{\rsheaf{Y\times X}} Lp^*\mathcal{M}_{\lambda}))) \\
\simeq &Rq_*(p^{-1}\Omega_X \otimes_{p^{-1}\mathscr{D}_{X}}^L (p^{-1}\Omega_X^\vee \otimes^L_{p^{-1}\rsheaf{X}}(\mathcal{N}_{\lambda}\otimes^L_{p^{-1}\rsheaf{X}} p^{-1}\mathcal{M}_{\lambda}))) \\
\simeq &Rq_*((p^{-1}(\Omega_X \otimes_{\rsheaf{X}} \Omega_X^\vee) \otimes^L_{p^{-1}\rsheaf{X}}\mathcal{N}_{\lambda})\otimes^L_{p^{-1}\mathscr{A}_{X,\lambda}} p^{-1}\mathcal{M}_{\lambda}) \\
\simeq &Rq_*(\mathcal{N}_\lambda \otimes^L_{p^{-1}\mathscr{A}_{X,\lambda}} p^{-1}\mathcal{M}_\lambda).
\end{align*}
The third isomorphism follows from Lemma \ref{lem:AssociativeTensor} by
taking a flat resolution of $\Omega_X$, $\mathcal{N}_\lambda$ and $\mathcal{M}_\lambda$.
By Proposition \ref{prop:IdentityBornology}, we have
\begin{align*}
((\mathcal{B}_Y\boxtimes \mathcal{B}^\mathrm{op}_X)\mathbin{\#} p^\#\mathcal{B}_X)^{p^*\Omega_X^\vee}
= (\mathcal{B}_Y\boxtimes \mathcal{B}_\textup{id}^\mathrm{op})^{\rsheaf{Y}\boxtimes \Omega_X^\vee} = \mathcal{B}_Y\boxtimes \mathcal{B}_\textup{id} = q^\# \mathcal{B}_Y.
\end{align*}
Therefore the theorem follows from Theorem \ref{thm:FunctorOnUniformlyBounded}.
\end{proof}
\subsection{Family of easy morphisms}
Retain the notation $X$, $Y$, $\mathscr{A}_{X,\Lambda}$, $\mathscr{A}_{Y,\Lambda}$, $\mathcal{B}_X$ and $\mathcal{B}_Y$ from the previous subsection.
We consider operations on $\mathscr{D}$-modules along the following family of morphisms:
\begin{align*}
&f_y\colon X\rightarrow X\times Y \quad (y \in Y), \\
&f_y(x) = (x, y).
\end{align*}
\begin{proposition}\label{prop:FamilyMorphismPull}
For any $\mathcal{M} \in D^b_{ub}(\mathscr{A}_{X,\Lambda}\boxtimes \mathscr{A}_{Y,\Lambda}, \mathcal{B}_X\boxtimes \mathcal{B}_Y)$, the family $(Lf_y^*(\mathcal{M}_\lambda))_{\lambda\in \Lambda, y \in Y}$ is uniformly bounded with respect to the bornology $\mathcal{B}_X$.
\end{proposition}
\begin{proof}
It is enough to show the assertion for $\mathcal{M} \in \mathrm{Mod}_{ub}(\mathscr{A}_{X,\Lambda}\boxtimes \mathscr{A}_{Y,\Lambda}, \mathcal{B}_X\boxtimes \mathcal{B}_Y)$.
Fix $\lambda \in \Lambda$ and $y \in Y$.
Take $(U, \varphi, \Phi) \in \mathcal{B}_X$ and $(V, \psi, \Psi) \in \mathcal{B}_Y$
with affine $U$ and $V$.
Fix closed embeddings $\iota_U\colon U\rightarrow \mathbb{C}^n$ and $\iota_V\colon V\rightarrow \mathbb{C}^m$, and $y'\in \psi^{-1}(y)$.
Then we have a commutative diagram
\begin{align*}
\xymatrix{
\mathbb{C}^n \ar[r]^-{f''_y} & \mathbb{C}^n \times \mathbb{C}^m \\
U \ar[r]^-{f'_y}\ar[u]^-{\iota_U} \ar[d]_-\varphi & U\times V \ar[u]_-{\iota_U\times \iota_V} \ar[d]^-{\varphi \times\psi}\\
X \ar[r]^-{f_y}& X\times Y,
}
\end{align*}
where $f'_y(x) = (x, y')$ and $f''_y(x) = (x, \iota_V(y'))$.
Note that the upper square is cartesian.
By Facts \ref{fact:FundamentalDmodule} (ii) and \ref{fact:BaseChange}, we have
\begin{align*}
D(\iota_U)_+ \circ L\varphi^* \circ Lf_y^*(\mathcal{M}_\lambda) \simeq L(f''_y)^*\circ D(\iota_U\times \iota_V)_+ \circ L(\varphi \times \psi)^*(\mathcal{M}_\lambda).
\end{align*}
Since the degree of $f''_y$ is $1$, we have
\begin{align*}
m_{\iota_U}(L\varphi^* \circ Lf_y^*(\mathcal{M}_\lambda)) \leq m_{\iota_U\times \iota_V}(L(\varphi \times \psi)^*(\mathcal{M}_\lambda))
\end{align*}
by Fact \ref{fact:WeylAlgebraDirectInverseImage}.
This shows the proposition.
\end{proof}
\begin{proposition}\label{prop:FamilyMorphismPush}
Let $\mathcal{N} \in D^b_{ub}(\mathscr{A}_{X,\Lambda}, \mathcal{B}_X)$.
The family $(D(f_y)_+(\mathcal{N}_\lambda))_{\lambda\in \Lambda, y \in Y}$ is uniformly bounded with respect to the bornology $\mathcal{B}_X\boxtimes \mathcal{B}_Y$.
\end{proposition}
\begin{proof}
We retain the notation in the proof of Proposition \ref{prop:FamilyMorphismPull}.
Then we have
\begin{align*}
L(\varphi \times \psi)^*\circ D(f_y)_+(\mathcal{N}_\lambda) \simeq L\varphi^*(\mathcal{N}_\lambda)\boxtimes D\iota_+(\rsheaf{\psi^{-1}(y)}),
\end{align*}
where $\iota\colon \psi^{-1}(y) \rightarrow V$ is the inclusion map.
By Fact \ref{fact:MultiplicityTensor}, we have
\begin{align*}
m_{\iota_U\times \iota_V}(L\varphi^*(\mathcal{N}_\lambda)\boxtimes D\iota_+(\rsheaf{\psi^{-1}(y)})) &= m_{\iota_U}(L\varphi^*(\mathcal{N}_\lambda))
m_{\iota_V}(D\iota_+(\rsheaf{\psi^{-1}(y)})).
\end{align*}
Since the multiplicity of the unique irreducible holonomic $\mathscr{D}_{\mathbb{C}^m}$-module supported on a point is $1$, we have
\begin{align*}
m_{\iota_V}(D\iota_+(\rsheaf{\psi^{-1}(y)})) = |\psi^{-1}(y)|.
\end{align*}
Since $\psi$ is \'etale, the fiber cardinality $|\psi^{-1}(y)|$ is bounded as a function of $y \in Y$.
This proves the proposition.
\end{proof}
\section{Zuckerman derived functor and its localization}
In this section, we review the Zuckerman derived functors and their localization.
We use the functors to study the relative Lie algebra cohomology/homology.
The localization can be realized by a composition of direct image functors and inverse image functors.
Hence we can apply results about uniformly bounded families to study the functors and the cohomologies.
\subsection{Zuckerman functor}\label{sect:bernsteinFunctor}
In this subsection we review the Zuckerman derived functor.
We refer the reader to \cite[I.8]{BoWa00_continuous_cohomology} and \cite[6.3]{Wa88_real_reductive_I} for our construction.
Let $(\mathcal{A}, G)$ be a generalized pair and $H$ a reductive subgroup of $G$.
Then $(\mathcal{A}, H)$ forms a generalized pair and $(\lie{g}, H)$ forms a pair (see Definitions \ref{def:GeneralizedPair} and \ref{def:pair}).
Since $\mathcal{A}$ is a $G$-module, for any $X \in \mathcal{A}$, we can take $f_1, \ldots, f_n \in \rring{G}$
and $X_1, \ldots, X_n \in \mathcal{A}$ such that
\begin{align*}
\textup{Ad}(g^{-1})(X) = \sum_i f_i(g)X_i
\end{align*}
for any $g \in G$.
Let $V$ be an $(\mathcal{A}, H)$-module.
We define three actions on $\rring{G}\otimes V$ via
\begin{align*}
\mu(X)(f\otimes v) &= \sum_i f_i f \otimes X_i v, & (X \in \mathcal{A})\\
r(Y)(f\otimes v) &= R(Y)f \otimes v + f \otimes Yv, & (Y \in \lie{g})\\
r(g)(f\otimes v) &= R(g)f \otimes gv, & (g \in H)\\
l(g)(f\otimes v) &= L(g)f \otimes v & (g\in G)
\end{align*}
for $f \in \rring{G}$ and $v \in V$.
Here $L$ (resp.\ $R$) denotes the left (resp.\ right) regular action of $G$ on $\rring{G}$
and $f_i, X_i$ are the elements taken above for $X$.
It is easy to see that $\mu(X)$ does not depend on the choice of $\set{f_i}$ and $\set{X_i}$.
Note that the actions $\mu$ and $l$ commute with $r$
and we have
\begin{align*}
(\mu(X)-l(X))(\rring{G}\otimes V)^{r(\lie{g})} = 0
\end{align*}
for any $X \in \lie{g}$.
This implies that $\zuck{G}{H}(V) := (\rring{G}\otimes V)^{r(\lie{g}), r(H)}$ is an $(\mathcal{A}, G)$-module via $\mu$ and $l$.
For an $(\mathcal{A}, H)$-module $V$ and $i \in \mathbb{N}$, we set
\begin{align*}
\Dzuck{G}{H}{i}(V):= H^i(\lie{g}, H; \rring{G}\otimes V),
\end{align*}
where $\rring{G}\otimes V$ is considered as a $(\lie{g}, H)$-module via the action $r$ to take the relative Lie algebra cohomology.
The two actions $l$ and $\mu$ satisfy the axioms of an $(\mathcal{A}, G)$-module (Definition \ref{def:AG-mod}), and hence $\Dzuck{G}{H}{i}(V)$ is an $(\mathcal{A}, G)$-module.
See e.g.\ \cite[Proposition I.8.2]{BoWa00_continuous_cohomology} and \cite[Theorem 1.6]{MiPa98}.
Note that $\Dzuck{G}{H}{i}(V)$ can be shown to be an $(\mathcal{A}, G)$-module
under the weaker assumption that $G/H$ is affine, without assuming that $H$ is reductive.
See Remark \ref{rmk:LocalZuckerman}.
\begin{fact}\label{fact:Zuckerman}
$\Dzuck{G}{H}{i}(V)$ admits an $(\mathcal{A}, G)$-module structure defined by $\mu$ and $l$.
If, in addition, $\mathcal{A}$ is flat as a right $\univ{g}$-module, then $\Dzuck{G}{H}{i}$ is isomorphic to the $i$-th right derived functor of $\zuck{G}{H}$.
\end{fact}
The functors $\Dzuck{G}{H}{i}$ are called the \define{Zuckerman derived functors}.
The following property is well-known and easy to see from the above isomorphism and the algebraic Peter--Weyl theorem (\cite[Theorem 4.2.7]{GoWa09}).
See e.g.\ \cite[Theorem I.8.8]{BoWa00_continuous_cohomology}.
\begin{fact}\label{fact:BernAndF}
Let $V$ be an $(\mathcal{A}, H)$-module.
Assume that $G$ is reductive.
For any $i \in \mathbb{N}$, the irreducible decomposition of $\Dzuck{G}{H}{i}(V)$ as a $G$-module is given by
\begin{align*}
\Dzuck{G}{H}{i}(V) \simeq \bigoplus_F H^i(\lie{g}, H; F\otimes V)\otimes F^*,
\end{align*}
where the direct sum is over all isomorphism classes of irreducible $G$-modules.
The isomorphism is natural in $V$.
\end{fact}
For a generalized pair $(\mathcal{A}, G)$, we consider $(\mathcal{A}\otimes \univ{g}, G)$ as a generalized pair
equipped with the diagonal homomorphism $\univ{g} \rightarrow \mathcal{A}\otimes \univ{g}$
and the diagonal action of $G$ on $\mathcal{A}\otimes \univ{g}$.
\begin{lemma}\label{lem:TorAndBern}
Let $V$ be an $(\mathcal{A}, H)$-module.
Assume that $G$ is reductive.
Then for any $(\lie{g}, H)$-module $W$ and $i \in \mathbb{N}$, there exists a natural isomorphism of $\mathcal{A}^G$-modules
\begin{align*}
\Dzuck{G}{H}{i}(V\otimes W)^G \simeq H^i(\lie{g}, H; V\otimes W),
\end{align*}
where $V\otimes W$ is considered as an $(\mathcal{A}\otimes \univ{g}, H)$-module to apply the functor $\Dzuck{G}{H}{i}$.
\end{lemma}
\begin{proof}
The isomorphism $\Dzuck{G}{H}{i}(V\otimes W)^G \simeq H^i(\lie{g}, H; V\otimes W)$ of vector spaces in Fact \ref{fact:BernAndF} is natural in $V$ and $W$.
Hence the isomorphism is also an $\mathcal{A}^G$-homomorphism.
\end{proof}
\begin{corollary}\label{cor:TorAndBernLength}
Retain the notation in Lemma \ref{lem:TorAndBern}.
Then for any $(\lie{g}, H)$-module $W$ and $i \in \mathbb{N}$, we have
\begin{align*}
\mathrm{Len}_{\mathcal{A}^G}(H^i(\lie{g}, H; V\otimes W)) \leq \mathrm{Len}_{\mathcal{A}\otimes \univ{g},G}(\Dzuck{G}{H}{i}(V\otimes W)).
\end{align*}
\end{corollary}
\begin{proof}
By Proposition \ref{prop:AGandLength}, we have
\begin{align*}
\mathrm{Len}_{(\mathcal{A}\otimes \univ{g})^G}(\Dzuck{G}{H}{i}(V\otimes W)^G) \leq \mathrm{Len}_{\mathcal{A}\otimes \univ{g},G}(\Dzuck{G}{H}{i}(V\otimes W)).
\end{align*}
The action of $(\mathcal{A}\otimes \univ{g})^G$ on $\Dzuck{G}{H}{i}(V\otimes W)^G$ factors through
$(\mathcal{A}\otimes \univ{g})^G / (\mathcal{A}\otimes \univ{g}\Delta(\lie{g}))^G$.
The inclusion map $\mathcal{A}^G\otimes 1 \hookrightarrow (\mathcal{A}\otimes \univ{g})^G$
induces an isomorphism
\begin{align*}
\mathcal{A}^G\otimes 1 \xrightarrow{\simeq} (\mathcal{A}\otimes \univ{g})^G / (\mathcal{A}\otimes \univ{g}\Delta(\lie{g}))^G.
\end{align*}
The assertion therefore follows from Lemma \ref{lem:TorAndBern}.
\end{proof}
\subsection{Localization of the Zuckerman functor}\label{sect:LocalizationBern}
We review the localization of the Zuckerman derived functor.
We refer to \cite[II.4]{Bi90} and to \cite[4.6]{Ki12} for a conceptual treatment using the equivariant derived category
(see also \cite[3.7]{BeLu94}).
Let $G$ be an affine algebraic group over $\mathbb{C}$ and $\mathscr{A}_X$ a $G$-equivariant algebra of twisted differential operators on a smooth $G$-variety $X$.
Let $H$ be a reductive subgroup of $G$.
We construct an $(\mathscr{A}_X, G)$-module from an $(\mathscr{A}_X, H)$-module.
We consider the following diagram:
\begin{align*}
X \xleftarrow{\pi} G\times X \xrightarrow{a} G\times X \xrightarrow{q} G/H\times X \xrightarrow{\pi'} X,
\end{align*}
where $\pi$ and $\pi'$ are the projections, $a$ is the isomorphism given by $a(g, x) = (g, gx)$ and $q$ is the natural projection.
We consider the left two $X$ as $G\times H$-varieties letting $G$ act trivially and the others as $G\times H$-varieties letting $H$ act trivially.
We consider $G$ as a $G\times H$-variety via the left and right translations.
Then $\pi, a, q$ and $\pi'$ are $G\times H$-equivariant.
Since $\mathscr{A}_X$ is $G$-equivariant, we have canonical isomorphisms
\begin{align*}
\pi^{\#}\mathscr{A}_X \simeq \mathscr{D}_{G}\boxtimes \mathscr{A}_{X} \simeq (\pi'\circ q\circ a)^\# \mathscr{A}_X
\end{align*}
of $G\times G$-equivariant algebras (see Proposition \ref{prop:GGequivariant}).
In particular, they are $G\times H$-equivariant.
We set $n = \dim_{\mathbb{C}}(\lie{h}), m = \dim_{\mathbb{C}}(\lie{g}/\lie{h})$ and
\begin{align*}
\DlocZuck{G}{H}{i}(\mathcal{M}):=L_{m-i} \pi'_+(L_n q_+(a_{+}\pi^* (\mathcal{M}))^{H/H_0}) \in \mathrm{Mod}_{qc}(\mathscr{A}_X, G)
\end{align*}
for $\mathcal{M} \in \mathrm{Mod}_{qc}(\mathscr{A}_X, H)$ and $i \in \mathbb{N}$.
Here $(\cdot)^{H/H_0}$ means taking the $H/H_0$-invariant part of an $H/H_0$-equivariant sheaf.
Since the functors used to define $\DlocZuck{G}{H}{i}$ preserve holonomicity, we can replace $\mathrm{Mod}_{qc}$ by $\mathrm{Mod}_{h}$.
The functors $\DlocZuck{G}{H}{i}$ can be considered as a localization of the Zuckerman functors $\Dzuck{G}{H}{i}$ (see \cite[Theorem 4.4]{Bi90} and \cite[Proposition 4.17]{Ki12}).
\begin{proposition}\label{prop:commutativeBernSect}
Let $\mathcal{M}$ be an object in $\mathrm{Mod}_{qc}(\mathscr{A}_{X}, H)$.
If the global section functor $\Gamma\colon \mathrm{Mod}_{qc}(\mathscr{A}_X) \rightarrow \mathrm{Mod}(\Dalg{X})$ is exact, then
there exists a natural isomorphism
\begin{align*}
\Gamma(\DlocZuck{G}{H}{i}(\mathcal{M})) \simeq \Dzuck{G}{H}{i}(\Gamma(\mathcal{M}))
\end{align*}
of $(\Dalg{X}, G)$-modules for any $i \in \mathbb{N}$.
\end{proposition}
\begin{proof}
Note that $G/H$ is affine by Matsushima's criterion \cite[Theorem 3.8]{Ti11}.
Fix $i \in \mathbb{N}$.
Since $q\colon G\times X \rightarrow G/H\times X$ is a principal $H$-bundle,
$L_n q_+(\cdot)^{H/H_0}$ is isomorphic to $q_*(\cdot)^H$ by Theorem \ref{thm:DirectImageTor}.
Hence we have
\begin{align*}
L_nq_+ a_+ \pi^*(\mathcal{M})^{H/H_0} \simeq q_*(\rsheaf{G}\boxtimes \mathcal{M})^H.
\end{align*}
The action of $\ntDalg{X}\subset \Dalg{G/H}\otimes \Dalg{X}$ on $\Gamma(q_*(\rsheaf{G}\boxtimes \mathcal{M})^H) \simeq (\rring{G}\otimes\Gamma(\mathcal{M}))^H$ is given by
\begin{align}
A\cdot (f\otimes m) = \sum_{i} f_i f\otimes A_i m, \label{eqn:ActionOfDG}
\end{align}
where $\set{A_i}\subset \Dalg{X}$ and $\set{f_i} \subset \rring{G}$ are finite subsets satisfying $\sum_i f_i(g)A_i = \textup{Ad}(g^{-1})A$.
The $G$-action on $(\rring{G}\otimes\Gamma(\mathcal{M}))^H$ is given by the left translation on $\rring{G}$.
Let $p\colon G/H\times X\rightarrow G/H$ be the projection and $q'\colon G\rightarrow G/H$
the natural projection.
Since $\pi'$ is a projection, we can compute $L_{m-i} \pi'_+$ by the relative de Rham complex (see \cite[Lemma 1.5.27]{HTT08}).
Since $G/H$ is a homogeneous variety, the tangent sheaf $\mathcal{T}_{G/H}$
is isomorphic to $(q'_*\rsheaf{G}\otimes \lie{g}/\lie{h})^H$.
Hence we can write the relative de Rham complex using Lie algebras as
\begin{align*}
&\Gamma (L_{m-i} \pi'_+(q_*(\rsheaf{G}\boxtimes \mathcal{M})^H)) \\
\simeq &H^{i-m}\circ \Gamma\circ R\pi'_*(p^{-1}(q'_*\rsheaf{G}\otimes \wedge^{m+\bullet}(\lie{g}/\lie{h})^*)^H\otimes_{p^{-1}\rsheaf{G/H}} q_*(\rsheaf{G}\boxtimes \mathcal{M})^H) \\
\simeq &H^{i}((\rring{G}\otimes \wedge^{\bullet}(\lie{g}/\lie{h})^*)^H\otimes_{\rring{G/H}} (\rring{G}\otimes \Gamma(\mathcal{M}))^H) \\
\simeq &H^{i}((\rring{G}\otimes \wedge^{\bullet}(\lie{g}/\lie{h})^* \otimes \Gamma(\mathcal{M}))^H).
\end{align*}
Here the second isomorphism holds because $G/H$ is affine and $\Gamma$ is exact on $\mathrm{Mod}_{qc}(\mathscr{A}_X)$, and the third isomorphism comes from the tensor product of the two locally free sheaves on the affine variety $G/H$.
The differentials in the above complexes are those induced from the relative de Rham complex.
By a straightforward computation, the complex $(\rring{G}\otimes \wedge^{\bullet}(\lie{g}/\lie{h})^* \otimes \Gamma(\mathcal{M}))^H$ is isomorphic to the complex $\textup{Hom}_H(\CE{\lie{g}}{H}{\bullet}, \rring{G}\otimes \Gamma(\mathcal{M}))$.
See Subsection \ref{sect:CEcomplex} for the Chevalley--Eilenberg chain complex $\CE{\lie{g}}{H}{\bullet}$.
Therefore we have
\begin{align*}
\Gamma (L_{m-i} \pi'_+(q_*(\rsheaf{G}\boxtimes \mathcal{M})^H))\simeq H^{i}(\lie{g}, H; \rring{G}\otimes \Gamma(\mathcal{M})) \simeq \Dzuck{G}{H}{i}(\Gamma(\mathcal{M})).
\end{align*}
As we have seen around \eqref{eqn:ActionOfDG}, under the isomorphism $\Gamma(\DlocZuck{G}{H}{i}(\mathcal{M}))\simeq \Dzuck{G}{H}{i}(\Gamma(\mathcal{M}))$ of vector spaces, the $\Dalg{X}$-action and the $G$-action on $\Gamma(\DlocZuck{G}{H}{i}(\mathcal{M}))$ coincide with those on $\Dzuck{G}{H}{i}(\Gamma(\mathcal{M}))$ given in Fact \ref{fact:Zuckerman}.
We have therefore proved the proposition.
\end{proof}
\begin{remark}\label{rmk:LocalZuckerman}
In the proof, we did not use the reductivity of $H$.
In fact, if $G/H$ is affine, one can define the Zuckerman functor $\Dzuck{G}{H}{i}(V)$ for any $H$ in the same way as in the previous subsection.
\end{remark}
Let $G$ be an affine algebraic group, $H$ a reductive subgroup of $G$, and $X$ a smooth $G$-variety.
Let $\mathscr{A}_{X, \Lambda}:=(\mathscr{A}_{X,\lambda})_{\lambda \in \Lambda}$ be a family of $G$-equivariant algebras of twisted differential operators on $X$.
Take a $G$-equivariant bornology $\mathcal{B}$ of $\mathscr{A}_{X,\Lambda}$.
See Definition \ref{def:EquivariantBornology}.
The functor $\DlocZuck{G}{H}{i}$ is defined as a composition of inverse image functors
and direct image functors.
Hence $\DlocZuck{G}{H}{i}$ preserves uniform boundedness.
\begin{theorem}\label{thm:UniformlyBoundedFamilyBernstein}
Let $(\mathcal{M}_\lambda)_{\lambda \in \Lambda}$ be a family of $(\mathscr{A}_{X,\lambda}, H)$-modules.
Suppose that $(\mathcal{M}_\lambda)_{\lambda \in \Lambda}$ is uniformly bounded with respect to $\mathcal{B}$.
Then $(\DlocZuck{G}{H}{i}(\mathcal{M}_\lambda))_{i \in \mathbb{Z}, \lambda \in \Lambda}$ is uniformly bounded with respect to $\mathcal{B}$.
\end{theorem}
\begin{proof}
Recall the definition of the morphisms:
\begin{align*}
X \xleftarrow{\pi} G\times X \xrightarrow{a} G\times X \xrightarrow{q} G/H\times X \xrightarrow{\pi'} X.
\end{align*}
Since $\mathcal{B}$ is $G$-equivariant, we have $\pi^\# \mathcal{B} = (\pi'\circ q \circ a)^\# \mathcal{B}$ by Definition \ref{def:EquivariantBornology}.
The uniform boundedness is preserved by direct images, inverse images and taking subquotients by Proposition \ref{prop:FundamentalBoundeFamily} and Theorem \ref{thm:FunctorOnUniformlyBounded}.
Hence we have proved the theorem.
\end{proof}
\chapter{A transformation of moment matrices: the affine case}\label{Sec:Exist}
In this chapter we use the maps from an iterated function system to describe a corresponding transformation on moment matrices. In the case of an affine IFS, we show that the matrix transformation is a sum of triple products of
infinite matrices.
Given a measurable endomorphism $\tau$ mapping a measure space
$(X,\mu)$ to itself, we make use of the measure transformation
\begin{equation}\label{Eqn:ExistMeasTrans}
\mu\mapsto\mu\circ\tau^{-1}.
\end{equation}
If
$(X,\mu)$ and $(X,\mu \circ \tau^{-1})$ have finite moments of all
orders, we can also study the transformation of moment matrices
\begin{equation}\label{Eqn:ExistMxTrans}
M^{(\mu)}\mapsto M^{(\mu\circ\tau^{-1})}.
\end{equation}
We are interested in the circumstances under which this
transformation of moment matrices (\ref{Eqn:ExistMxTrans}) can be
expressed explicitly as a well-defined intertwining of the form
\begin{equation}\label{Eqn:CovarianceFirst}
M^{(\mu\circ\tau^{-1})} = A^*M^{(\mu)}A
\end{equation}
for some suitable infinite matrix $A$. (We are using the notation
$A^*$ here to represent the conjugate transpose of the infinite
matrix $A$.) We then ask whether there are conditions under which
$A$ might be an operator on the Hilbert space $\ell^2$.
We begin by stating the result from \cite{EST06} that the
matrix $A$ can be described explicitly in the case where $\tau$ is
an affine map of a single variable. We give a proof of this
result in Section \ref{Sec:SingleVar} and then examine in
Section \ref{Subsec:FixedPointsHut} the connections of this
result to affine iterated function systems (IFSs) in
one dimension. We also prove a
stronger uniqueness result for the fixed point of iterations of
the moment matrix transformation arising out of a Bernoulli IFS.
We examine in Section \ref{Subsec:Hankel} the types of
transformations that preserve infinite Hankel matrices, as a
precursor to Chapter \ref{Sec:ComputeA} in which we explore the
existence of matrices $A$ for more general measurable maps $\tau$.
\section{Affine maps}\label{Sec:SingleVar}
Let $\mu$ be a probability measure on $\mathbb{R}$ or
$\mathbb{C}$. We consider the case where $\tau$ is an affine
map, noting that a finite set $\{\tau_b\}$ of affine maps on
$\mathbb{R}$ or $\mathbb{C}$ can comprise an affine iterated
function system (IFS). Such IFSs are used in the study of infinite
convolution problems; see e.g., \cite{Erd39,JKS07a,JKS07b}. We
begin with some definitions.
Recall that the moment matrix $M^{(\mu)}$ of $\mu$ is the
infinite matrix defined by $M^{(\mu)}_{i,j} = \int_{\mathbb{R}}
x^{i+j} d\mu(x)$ when $x$ is a real variable, and by
$M^{(\mu)}_{i,j} = \int_{\mathbb{C}} \overline{z^i}z^j d\mu(z)$
when $z$ is a complex variable. We naturally must assume that
moments of all orders exist. Certainly moments of all orders exist when $\mu$ has
compact support, but we will not always be restricted to compactly
supported measures. Recall that the row and column
indexing of $M^{(\mu)}$ start at row $0$ and column $0$.
In the real case, we recall that every moment matrix $M^{(\mu)}$
is a Hankel matrix. In this real setting, the moment matrix
transformation corresponding to $\tau$ must therefore preserve the
Hankel property, since $M^{(\mu \circ \tau^{-1})}$ is also a
moment matrix.
The following result is stated in \cite{EST06}. Because one of our goals is to generalize the
lemma, we include a proof here for completeness. This result
gives a matrix $A$ corresponding to an affine map $\tau$ on
$\mathbb{C}$, so that the moment transformation is an intertwining
by matrix multiplication. Note that the same matrix $A$ works in
the real case if we take $\tau$ to be an affine map of a single
real variable.
\begin{lemma}\label{Lemma:Amatrix}
\rm\cite[Proposition 1.1, p. 80]{EST06} \it Suppose
$\tau:\mathbb{C}\rightarrow\mathbb{C}$ is an affine map (not
necessarily contractive) given by
\begin{equation*}
\tau(z) = cz + b,
\end{equation*}
where $c, b\in\mathbb{C}$. Let
$M^{(\mu\circ\tau^{-1})}$ be the moment matrix associated with the measure $\mu\circ\tau^{-1}$. We have
\begin{equation}
M^{(\mu\circ\tau^{-1})} = A^* M^{(\mu)} A,
\end{equation}
where $A = (a_{i,j})$ is the upper triangular matrix with
\begin{equation}\label{Eqn:UpTriA}
a_{i,j} =
\begin{cases}
\binom{j}{i}c^i b^{j-i} & i \leq j\\
0 & \text{otherwise}
\end{cases}.
\end{equation}
\end{lemma}
\begin{proof} The
$(i,j)^{\mathrm{th}}$ entry of the moment matrix $M^{(\mu\circ\tau^{-1})}$
is given by
\begin{equation}
\begin{split}
M^{(\mu\circ\tau^{-1})}_{i,j}
& = \int_{\mathbb{C}} \overline{z^i} z^j \:d(\mu\circ\tau^{-1})(z)\\
& = \int_{\mathbb{C}} \overline{(c z +b)^i}(c z + b)^j \:d\mu(z).
\end{split}
\end{equation}
Even though $A$ and $M^{(\mu)}$ are infinite matrices, the sum
defining the $(i,j)^{\mathrm{th}}$ entry of the triple product $A^*
M^{(\mu)} A$ is finite because $A$ is upper triangular. With this
observation, the matrix product is well defined and we can compute
the $(i,j)^{\mathrm{th}}$ entry of the product $A^*M^{(\mu)}A$ by
\begin{equation*}
\begin{split}
(A^* M^{(\mu)} A)_{i,j}
& = \sum_{k = 0}^i \sum_{\ell = 0}^j A^*_{i,k} M^{(\mu)}_{k,\ell} A_{\ell,j}\\
& = \sum_{k = 0}^i \sum_{\ell = 0}^j \binom{i}{k}\overline{c^kb^{i-k}} \Biggl(\int_{\mathbb{C}} \overline{z^k} z^{\ell}\:d\mu(z)\Biggr)
\binom{j}{\ell} c^{\ell}b^{j-\ell}\\
& = \sum_{k=0}^i \binom{i}{k}
\overline{c^k}\overline{b^{i-k}}\sum_{\ell = 0}^j
\binom{j}{\ell}c^{\ell} b^{j-\ell} \Biggl(\int_{\mathbb{C}} \overline{z^k}
z^{\ell}\:d\mu(z)\Biggr).
\end{split}
\end{equation*}
Taking advantage of the linearity of the integral, we compute the
sums in $k$ and $\ell$ to obtain
\begin{equation*}
\begin{split} (A^* M^{(\mu)} A)_{i,j} & = \int_{\mathbb{C}} \sum_{k=0}^i \binom{i}{k}
\overline{c^k}\overline{z^k}\overline{b^{i-k}} \sum_{\ell = 0}^j
\binom{j}{\ell}c^{\ell}z^{\ell} b^{j-\ell} \: d\mu(z) \\ & = \int_{\mathbb{C}}
\overline{(cz+b)^i} (cz+b)^j \:d\mu(z).
\end{split}
\end{equation*}
This gives us our desired result.\end{proof}
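As an illustrative numerical check (a sketch we add here, not part of \cite{EST06}), the identity of Lemma \ref{Lemma:Amatrix} can be verified on finite truncations for a real affine map and the uniform measure on $[0,1]$, whose moments are $\int_0^1 x^n\,dx = 1/(n+1)$. Because $A$ is upper triangular, the $N\times N$ truncations already compute the exact entries of the triple product.

```python
import numpy as np
from math import comb

# tau(x) = c*x + b on the real line; mu = uniform measure on [0, 1]
c, b, N = 0.5, 0.25, 6

# moment matrix of mu: M[i, j] = int x^(i+j) dmu = 1/(i+j+1)
M = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])

# upper triangular A from the lemma: a[i, j] = C(j, i) c^i b^(j-i)
A = np.array([[comb(j, i) * c**i * b**(j - i) if i <= j else 0.0
               for j in range(N)] for i in range(N)])

# direct moments of the pushforward: int_0^1 (c x + b)^n dx
push = lambda n: ((c + b)**(n + 1) - b**(n + 1)) / (c * (n + 1))
target = np.array([[push(i + j) for j in range(N)] for i in range(N)])

# since A is upper triangular, the truncated triple product is exact
assert np.allclose(A.T @ M @ A, target)
```

The same computation with complex $c$, $b$ and $A^*$ in place of $A^{\mathrm{T}}$ checks the complex case.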
\section{IFSs and fixed points of the Hutchinson operator}\label{Subsec:FixedPointsHut}
An iterated function system has an associated compact set $X$ and a measure $\mu$ supported on $X$, where both $X$ and $\mu$ arise as the unique solutions to the fixed-point problems described in the following paragraphs. A central theme in this Memoir is that every IFS in the classical sense corresponds to a non-abelian system of operators and a version of Equations (\ref{Eqn:SetInvariance}) and (\ref{Eqn:TransformedMu}) in which the moment matrix $M = M^{(\mu)}$ will be a solution to an associated fixed-point problem for moment matrices. (See Proposition \ref{Prop:MatrixFixedPt} and Sections \ref{Sec:GeneralA} and \ref{Subsec:KatoA}.) One of the consequences is that we will be able to use the formula for $M$ in a recursive computation of the moments, which is not easy to do directly for even the simplest Cantor measures.
Let $I$ be a finite index set. An \textit{iterated function
system} (IFS) is a finite collection $\{\tau_i\}_{i \in I}$ of
contractive transformations on $\mathbb{R}^d$ together with a set of
probabilities $\{p_i\}_{i \in I}$. Using Banach's fixed point
theorem and the Hausdorff metric topology, it is proved in Hutchinson's paper
\cite{Hut81} that there is a unique compact subset $X$ of
$\mathbb{R}^d$, called the \textit{attractor} of the IFS, which
satisfies the equation
\begin{equation}\label{Eqn:SetInvariance} X = \bigcup_{i \in I}\tau_i(X).
\end{equation}
There is also a unique measure $\mu$ supported on $X$ which arises from
Banach's theorem.
\begin{theorem}[Hutchinson, \cite{Hut81}]
Given a contractive IFS $\{\tau_i\}_{i \in I}$ in $\mathbb{R}^d$
and probabilities $\{p_i\}$, there is a unique Borel probability
measure $\mu$ on $\mathbb{R}^d$ satisfying
\begin{equation}\label{Eqn:TransformedMu} \mu = \sum_{i \in I} p_i
(\mu \circ\tau_i^{-1}). \end{equation} The measure $\mu$ is called
an \textit{equilibrium measure}. Moreover if $p_i
> 0$ for all $i \in I$, the support of $\mu$ is the unique compact
attractor $X$ for $\{\tau_i\}_{i \in I}$.
\end{theorem}
First, let us examine an IFS on the real line $\mathbb{R}$
consisting of contractive maps. We will find that the moment
matrix for the equilibrium measure of the IFS also satisfies an
invariance property corresponding to Equations
(\ref{Eqn:SetInvariance}) and (\ref{Eqn:TransformedMu}).
Let $I$ be a finite index set. Let a contractive IFS
on $\mathbb{R}$ be given by the maps $\{\tau_i\}_{i \in I}$ and the
nonzero probability weights $\{p_i \}_{i \in I}$. Suppose $\mu$ is
the invariant Hutchinson measure associated with this IFS. Then $\mu$ is a probability measure satisfying (\ref{Eqn:TransformedMu}) which is supported on the attractor set $X$.
We now observe, as also noted in \cite{EST06}, that the moment matrix for the equilibrium measure
$\mu$ of an IFS on the real line also satisfies an invariance
property $\mathcal{R}(M^{(\mu)}) = M^{(\mu)}$ under the
transformation
\begin{equation} \mathcal{R}: M^{(\nu)} \mapsto M^{(\sum_{i \in I} p_i (\nu \circ \tau_i^{-1}))}. \end{equation}
In other words, $M^{(\mu)}$ is a fixed point of the transformation
$\mathcal{R}$. We now state a uniqueness result.
\begin{proposition}\label{Prop:MatrixFixedPt} Given a
contractive IFS $\{\tau_i\}_{i \in I}$ on
$\mathbb{R}$ and probability weights $\{p_i\}_{i \in I}$ with $p_i
\in (0,1)$ and $\sum_{i \in I}p_i = 1$, there exists a unique
moment matrix $M^{(\mu)}$ for a probability measure $\mu$ which
satisfies
\begin{equation}
\mathcal{R}(M^{(\mu)}) = M^{(\mu)}.
\end{equation}
This unique solution is exactly
the moment matrix for the Hutchinson equilibrium measure for the
IFS. Furthermore, for any Borel regular probability measure $\nu$
supported on the attractor set $X$ of the IFS,
$\mathcal{R}^n(M^{(\nu)})$ converges componentwise to $M^{(\mu)}$.
\end{proposition}
\begin{proof}
We begin by proving the convergence of the moments. Let $S$ be
the transformation of measures supported on $X$ given by
\begin{equation} S(\nu) = \sum_{i \in I} p_i (\nu \circ \tau_i^{-1}).
\end{equation}
From \cite{Hut81}, we know that $S$ maps probability measures
to probability measures, and that for any Borel regular
probability measure $\nu$ supported on $X$, $S^n\nu \rightarrow
\mu$ as $n \rightarrow \infty$, where convergence is in the metric
$\rho$ which Hutchinson calls the $L$-metric on the space of
measures \cite{Hut81}.
To define the $L$-metric, first recall the space of Lipschitz functions $\textrm{Lip}(X,\mathbb{R})$ on a metric space $X$ with metric $d$. $\textrm{Lip}(X,\mathbb{R})$ consists of all functions $\phi:X \rightarrow \mathbb{R}$ such that there exists a constant $C=C_{\phi}< \infty$ with
\begin{equation}\label{Eqn:Lip} |\phi(x)-\phi(y)| \leq C_{\phi} d(x,y) \; \textrm{ for all }\; x,y \in X.\end{equation} We then define the constant \begin{equation}
L(\phi) = \inf \{C_{\phi} \,|\, C_{\phi} \textrm{ satisfies (\ref{Eqn:Lip})} \}, \end{equation} and hence the space of functions \[ \mathrm{Lip}_1(X,\mathbb{R}) = \{ \phi \in \mathrm{Lip}(X,\mathbb{R})\,|\, L(\phi) \leq 1 \}. \] We can now define the $L$-metric:
\begin{equation}\label{def:Lmetric} \rho(\mu,\nu) = \sup \left\{\int_X
\phi \, \mathrm{d}\mu - \int_X \phi \, \mathrm{d}\nu \,:\, \phi
\in \mathrm{Lip}_1(X,\mathbb{R}) \right\}.
\end{equation}
When the
measures are defined on compact spaces, the $L$-metric topology is
equivalent to a weak topology, and therefore we have $$\int_X f
\mathrm{d}(S^n\nu) \rightarrow \int_X f \mathrm{d}\mu $$ for all
functions $f:X \rightarrow \mathbb{R}$ which are bounded on
bounded sets \cite{Hut81}.
Let $f(x) = x^{i+j}$, which is bounded on bounded sets for all
choices of $i,j \in \mathbb{N}_0$. Inserting $f$ into the
convergence above yields the componentwise convergence of the
moment matrix for $S^n\nu$ to the moment matrix for $\mu$.
Given that $\mu$ satisfies the invariance property $$ S(\mu) =
\mu,$$ we next wish to conclude that $\mathcal{R}(M^{(\mu)}) =
M^{(\mu)}$. We defined the transformation $\mathcal{R}$ by
\[\mathcal{R}(M^{(\nu)}) = M^{(\sum_i p_i \nu \circ \tau_i^{-1})} =
M^{(S\nu)}.\] Since $S \mu = \mu$, it is clear that $S\mu$ and $\mu$ have the same
moment matrices. Therefore $\mathcal{R}(M^{(\mu)}) = M^{(\mu)}$.
It remains to be shown that the matrix $M^{(\mu)}$ is the unique
solution among moment matrices for probability measures on $X$ to
the equation $\mathcal{R}(M) = M$. We have from \cite{Hut81} that
the measure $\mu$ is the unique probability measure satisfying
$S\mu = \mu$. Suppose there exists another matrix $M^{(\nu)}$
which is a moment matrix for a probability measure $\nu$ and which
is a fixed point for $\mathcal{R}$. Since
$\mathcal{R}(M^{(\nu)}) = M^{(S\nu)}$, the measures
$\nu$ and $S\nu$ have the same moments of all orders. By
Stone-Weierstrass, since the measures have compact support and
agree on the moments (hence the polynomials), the
measures must be the same. Since $S\nu = \nu$, we must have $\nu =
\mu$.
\end{proof}
\begin{corollary}\label{Cor:InfUnique} Let $\{\tau_i\}_{i \in I}$ be an affine
contractive IFS on $\mathbb{R}$, such that $\tau_i(x) = c_ix+b_i$.
Let $\{p_i\}_{i \in I}$ be probability weights and let $A_i$ be
the matrix given in Lemma \ref{Lemma:Amatrix} which encodes
$\tau_i$ for each $i \in I$. The moment matrix transformation
\begin{equation}\label{Eqn:AffineTrans} M^{(\nu)} \mapsto \sum_{i \in I} p_i A_i^*M^{(\nu)}A_i \end{equation} has a
unique fixed point among moment matrices. Moreover, the fixed
point is the moment matrix $M^{(\mu)}$ for the Hutchinson
equilibrium measure of the IFS.
\end{corollary}
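Corollary \ref{Cor:InfUnique} can be illustrated numerically (a sketch we add, using the middle-thirds Cantor IFS $\tau_0(x)=x/3$, $\tau_1(x)=x/3+2/3$ with equal weights): iterating the truncated transformation (\ref{Eqn:AffineTrans}) starting from the moment matrix of the point mass $\delta_0$ converges to the moments of the Cantor measure, whose first two moments are the standard values $m_1 = 1/2$ and $m_2 = 3/8$.

```python
import numpy as np
from math import comb

N = 5
cs, bs, ps = [1/3, 1/3], [0.0, 2/3], [0.5, 0.5]

def A_mat(c, b):
    # truncation of the matrix of Lemma Amatrix encoding tau(x) = c x + b
    return np.array([[comb(j, i) * c**i * b**(j - i) if i <= j else 0.0
                      for j in range(N)] for i in range(N)])

As = [A_mat(c, b) for c, b in zip(cs, bs)]

# moment matrix of the point mass at 0: only the (0,0) entry is nonzero
M = np.zeros((N, N)); M[0, 0] = 1.0
for _ in range(80):
    M = sum(p * A.T @ M @ A for p, A in zip(ps, As))

# first and second moments of the Cantor measure: 1/2 and 3/8
assert abs(M[0, 1] - 1/2) < 1e-12 and abs(M[1, 1] - 3/8) < 1e-12
```

The componentwise convergence is geometric here: the $(m,n)$ entry contracts at rate $\sum_i p_i c_i^{m+n} \le 1/3$.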
Given an affine IFS and its associated moment matrix
transformation, we can state a uniqueness result for finite
matrices which corresponds to and extends Corollary
\ref{Cor:InfUnique}. Given any infinite matrix $M$, denote by
$M_n$ the $(n+1) \times (n+1)$ matrix which is the upper left
truncation $[M_{ij}]_{i,j = 0, 1, \ldots, n}$ of $M$. Let
$\{\tau_i\}_{i=0}^k$ be a contractive affine IFS on $\mathbb{R}$,
and let $\mu$ be the unique Hutchinson measure corresponding to
the IFS and probability weights $\{p_i\}_{i=0}^k$. Note that we
have $\tau_i(x) = c_ix+b_i$, where $|c_i|<1$ for each $i=0,1,\ldots,
k$, since the maps are contractive.
We know from Corollary \ref{Cor:InfUnique} that $M^{(\mu)}$ is
the unique infinite matrix which is fixed under the transformation
in Equation (\ref{Eqn:AffineTrans}). We now give the result that
the truncated form of the transformation in Equation
(\ref{Eqn:AffineTrans}) has the truncated moment matrix as a fixed
point, and moreover, that it is a unique fixed point among Hankel
matrices. We remark here that some of the computations required for this
proof appear in \cite{EST06}, but they are not used there to state
this uniqueness result.
We begin by observing that, because the matrices $A_i$ which
encode affine maps are triangular (recall Lemma
\ref{Lemma:Amatrix}), a truncation of the matrix product $ A_i^*MA_i$ is exactly equal to the product of the truncated matrices $(A_i)_n^*M_n(A_i)_n$.
\begin{lemma}\label{Lem:Truncation} Given infinite matrices $A,M$ such that $A$ is upper triangular,
\begin{equation} (A^*MA)_n = A^*_nM_nA_n. \end{equation}
\end{lemma}
\begin{proof} This follows immediately from the properties of block
matrices, assuming that the associated block products are well defined. In the matrices below, $B$ and $D$ are upper triangular: $$ \left[ \begin{matrix} B^* & 0\\C^* & D^*
\end{matrix} \right] \left[ \begin{matrix} E&F\\G&H \end{matrix}
\right]\left[
\begin{matrix} B&C\\0&D \end{matrix} \right] = \left[
\begin{matrix} B^*EB & * \\ * & * \end{matrix} \right]. $$
\end{proof}
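A finite-size illustration of Lemma \ref{Lem:Truncation} (our own sketch): for entries $(i,j)$ with $i,j < n$, the sums defining $A^*MA$ only involve indices below $n$ when $A$ is upper triangular, so an $N \times N$ model with $N > n$ reproduces the infinite-matrix statement exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 8, 4  # full size and truncation size (indices 0..n-1)

A = np.triu(rng.standard_normal((N, N)))  # upper triangular
M = rng.standard_normal((N, N))

full_then_truncate = (A.T @ M @ A)[:n, :n]
truncate_then_multiply = A[:n, :n].T @ M[:n, :n] @ A[:n, :n]
assert np.allclose(full_then_truncate, truncate_then_multiply)
```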
Given the affine IFS $\{\tau_i\}_{i=0}^k, \{p_i\}_{i=0}^k$ mentioned above, let $A_i$ be the matrix given by Lemma \ref{Lemma:Amatrix} which encodes each affine map $\tau_i(x) = c_ix+b_i$. Let $\mu$ be the unique Hutchinson equilibrium measure for this IFS. Using our truncation notation, define the matrix transformation $\mathcal{R}_n$ on $(n+1) \times (n+1)$ matrices $M$ by
\begin{equation}\label{Eq:finitetransform} \mathcal{R}_n(M) = \sum_{i=0}^k p_i (A_i^*)_n M (A_i)_n . \end{equation} We then have the following result.
\begin{proposition}\label{Prop:finite} For each $n \in \mathbb{N}$, $M = M^{(\mu)}_n$ is the unique Hankel matrix having $M_{0,0}=1$ which is a fixed point for $\mathcal{R}_n$.
\end{proposition}
\begin{proof}
We know from Corollary \ref{Cor:InfUnique} and Lemma \ref{Lem:Truncation} that for each $n$, $M^{(\mu)}_n$ is a fixed point of $\mathcal{R}_n$. We need only to prove the uniqueness.
For $n=1$, assume $M$ is a $2 \times 2$ Hankel matrix with $M_{0,0}=1$ such that $\mathcal{R}_1(M) = M$. $M$ is of the form $$M = \left[ \begin{matrix} 1&x\\x&y \end{matrix} \right].$$ Then,
\begin{eqnarray*} \mathcal{R}_1(M) &=& \sum_{i=0}^k p_i \left[ \begin{matrix} 1&0\\b_i&c_i \end{matrix} \right] \left[ \begin{matrix} 1&x\\x&y \end{matrix} \right] \left[ \begin{matrix} 1&b_i\\0&c_i \end{matrix} \right] \\ &=& \sum_{i=0}^k p_i \left[ \begin{matrix} 1&b_i + c_ix\\b_i +c_ix& b_i^2 + 2b_ic_ix+c_i^2y \end{matrix} \right] \\ &=& \left[ \begin{matrix} 1&x\sum p_ic_i + \sum p_ib_i \\x\sum p_ic_i + \sum p_ib_i& y\sum p_ic_i^2 + 2x\sum p_ib_ic_i + \sum p_ib_i^2 \end{matrix} \right] \\ &=& \left[ \begin{matrix} 1&x\\x&y \end{matrix} \right] = M.
\end{eqnarray*}
There is a unique solution for $x$ and $y$ in the equations
$x = x\sum p_ic_i + \sum p_ib_i$ and $y= y\sum p_ic_i^2 + 2x\sum p_ib_ic_i + \sum p_ib_i^2$, thus $M = M^{(\mu)}_1$.
Assume next that $M^{(\mu)}_{n-1}$ is the unique $n \times n$ Hankel matrix which is a fixed point of $\mathcal{R}_{n-1}$. Take $M$ to be an $(n+1) \times (n+1)$ Hankel matrix with $M_{0,0}=1$ such that $\mathcal{R}_n(M) = M$. By Lemma \ref{Lem:Truncation} and our hypothesis, $M$ is of the form
$$ M = \left[ \begin{matrix} M^{(\mu)}_{n-1} & B\\ B^{tr} & y \end{matrix} \right], $$ where $B$
is $n \times 1$. Due to the Hankel structure of $M$, all but one of the entries in $B$ are known from $M^{(\mu)}_{n-1}$:
$$ B = \left[ \begin{matrix} m_n \\ \vdots \\m_{2n-2} \\ x \end{matrix} \right].$$
It is straightforward to verify that, as in the $n=1$ case, the terms in the matrix equation $\mathcal{R}_n(M)=M$ yield two
linear equations in $x$ and $y$ which have unique solutions. Therefore $M = M^{(\mu)}_n$.
\end{proof}
The computations in Proposition \ref{Prop:finite} to solve for $x$
and $y$ give a recursive equation to compute the moments of an
equilibrium measure in terms of the previous moments.
\begin{corollary}[\cite{EST06}] Given the IFS as above with equilibrium measure $\mu$, $$M^{(\mu)}_{m,n} = \frac{1}{1-\sum_{i=0}^k p_ic_i^{m+n}} \sum_{i=0}^k p_i
\sum_{\substack{0 \le j \le m,\ 0 \le \ell \le n\\ (j,\ell) \neq (m,n)}}
\binom{m}{j}\binom{n}{\ell} b_i^{m+n-j-\ell}c_i^{j+\ell}
M^{(\mu)}_{j,\ell}. $$
\end{corollary}
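Specializing to the first row $m_n := M^{(\mu)}_{0,n}$ (which determines the Hankel matrix), the fixed-point equation $M = \sum_i p_i A_i^* M A_i$ gives a recursion for each moment in terms of lower ones, with denominator $1 - \sum_i p_i c_i^n$. The following sketch (ours, not from \cite{EST06}) computes the moments of the middle-thirds Cantor measure in exact rational arithmetic; the standard values $m_1 = 1/2$ and $m_2 = 3/8$ serve as a check.

```python
from fractions import Fraction
from math import comb

# middle-thirds Cantor IFS: tau_i(x) = c_i x + b_i with equal weights
cs = [Fraction(1, 3), Fraction(1, 3)]
bs = [Fraction(0), Fraction(2, 3)]
ps = [Fraction(1, 2), Fraction(1, 2)]

def moments(N):
    """Moments m_0, ..., m_N of the equilibrium measure, recursively."""
    m = [Fraction(1)]  # m_0 = 1 for a probability measure
    for n in range(1, N + 1):
        s = sum(p * comb(n, j) * b**(n - j) * c**j * m[j]
                for p, b, c in zip(ps, bs, cs) for j in range(n))
        m.append(s / (1 - sum(p * c**n for p, c in zip(ps, cs))))
    return m

m = moments(4)
assert m[1] == Fraction(1, 2) and m[2] == Fraction(3, 8)
```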
\section{Preserving Hankel matrix structure}\label{Subsec:Hankel}
When we work with affine IFSs for a real variable $x$, the moment
matrix of a probability measure is a Hankel matrix whose
$(0,0)^{\mathrm{th}}$ entry is $1$. We will examine here the sorts of
matrix transformations of the form $$M \mapsto A^*MA$$ which
preserve Hankel structure, so that in the next section we can look
for moment matrix transformations corresponding to more general
maps $\tau$. Let $\mathcal{H}^{(1)}$ denote the set of positive
definite Hankel matrices whose $(0,0)^{\mathrm{th}}$ entry is $1$.
We have already shown that the
transformation $\mathcal{R}$ corresponding to an affine IFS given
by
\begin{equation}
\mathcal{R}(M) = \sum_{i\in I} p_i (A_i^{*} M A_i) = \sum_{i\in
I}p_i M^{(\mu\circ\tau_i^{-1})}
\end{equation}
maps a moment matrix $M^{(\mu)}$ to another moment matrix
$M^{(\sum_{i \in I}p_i \mu \circ \tau_i^{-1})}$, and therefore
maps $\mathcal{H}^{(1)}$ into itself.
\begin{definition}
We say that the matrix $A$ \textit{preserves $\mathcal{H}^{(1)}$} if
$A^{*}MA \in \mathcal{H}^{(1)}$ for all matrices
$M\in\mathcal{H}^{(1)}$.
\end{definition}
In the following lemma, we refer to inverses of infinite matrices
and to invertible matrices. A careful definition of these
concepts is given in Definition \ref{def:inverse}.
\begin{lemma}\label{Lemma:DGPreserveH1}
The following matrices preserve $\mathcal{H}^{(1)}$:
\begin{equation}
D(\delta)=
\begin{bmatrix}
1 & 0 & 0 & 0 & \cdots \\
0 & \delta & 0 & 0 & \cdots \\
0 & 0 & \delta^2 & 0 & \cdots \\
0 & 0 & 0 & \delta^3 & \\
\vdots & \vdots & \vdots & & \ddots \\
\end{bmatrix}
\quad
\text{ and }
\quad
G(\gamma) =
\begin{bmatrix}
1 & \gamma & \gamma^2 & \gamma^3 & \cdots\\
0 & 1 & 2\gamma & 3\gamma^2 & \cdots\\
0 & 0 & 1 & 3\gamma & \cdots \\
0 & 0 & 0 & 1 & \\
\vdots & \vdots & \vdots & & \ddots \\
\end{bmatrix}.
\end{equation}
The inverse of $D(\delta)$ is $D(\delta^{-1})$ ($\delta\neq 0$) and the inverse of $G(\gamma)$ is $G(-\gamma)$.
\end{lemma}
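These claims can be checked on finite truncations: since $D(\delta)$ and $G(\gamma)$ are upper triangular, the upper-left $n\times n$ block of each product below is computed exactly from the $n\times n$ truncations. The sketch below (Python, exact rational arithmetic; an illustration of ours, not part of the proof) verifies the inverse formulas, checks that $G(\gamma)^{*}MG(\gamma)$ is again Hankel for the moment matrix of Lebesgue measure on $[0,1]$, and confirms that $G(\gamma)$ acts as the translation $x \mapsto x + \gamma$ on moment matrices.

```python
from fractions import Fraction as F
from math import comb

n = 6  # truncation size; upper-triangularity makes these blocks exact

def G(g):  # G(gamma)[i][j] = C(j, i) * gamma^(j - i)  (upper triangular)
    return [[F(comb(j, i)) * F(g)**(j - i) if j >= i else F(0)
             for j in range(n)] for i in range(n)]

def D(d):  # diagonal matrix diag(1, d, d^2, ...)
    return [[F(d)**i if i == j else F(0) for j in range(n)] for i in range(n)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[F(int(i == j)) for j in range(n)] for i in range(n)]
g, d = F(1, 2), F(3)

# Inverse formulas from the lemma: G(g)G(-g) = I and D(d)D(1/d) = I
inv_ok = mul(G(g), G(-g)) == I and mul(D(d), D(1 / d)) == I

# Hankel preservation: M = moment matrix of Lebesgue measure on [0,1], m_k = 1/(k+1)
M = [[F(1, i + j + 1) for j in range(n)] for i in range(n)]
H = mul(mul([list(r) for r in zip(*G(g))], M), G(g))   # H = G^T M G
hankel_ok = all(H[i][j] == H[i + 1][j - 1]
                for i in range(n - 1) for j in range(1, n))
# H is the moment matrix of Lebesgue measure translated to [g, 1+g]
shift_ok = all(H[0][k] == ((1 + g)**(k + 1) - g**(k + 1)) / (k + 1)
               for k in range(n))
```
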
We are pleased to thank Christopher French for the proof of the
following proposition. See also \cite{Fre07}, which implicitly
uses Proposition \ref{Prop:French} throughout, and the papers
\cite{SpSt06} and \cite{Lay01}.
\begin{proposition}\label{Prop:French}
Suppose $A=(a_{i,j})$ is an infinite upper triangular, invertible
matrix which preserves $\mathcal{H}^{(1)}$. Then either $A$ or
$-A$ is the product of matrices of the form $D(\delta)$ and
$G(\gamma)$, where $D$ and $G$ are defined in Lemma
\ref{Lemma:DGPreserveH1}.
\end{proposition}
\begin{proof} If $A^{*}MA \in\mathcal{H}^{(1)}$ for all $M\in
\mathcal{H}^{(1)}$, then $a_{0,0}^2 = 1$, so $a_{0,0} = \pm 1$. If
$a_{0,0} = -1$, replace $A$ with $-A$.
Denote $A$ by
\begin{equation*}
A = \begin{bmatrix}
1 & a_{0,1} & a_{0,2} & \cdots \\
0 & a_{1,1} & a_{1,2} & \cdots \\
0 & 0 & a_{2,2} & \cdots \\
\vdots & \vdots & & \ddots\\
\end{bmatrix}.
\end{equation*}
Since $a_{1,1}\neq 0$, multiply $A$ on the right by
$D(1/a_{1,1})$. The upper $2\times 2$ principal submatrix of
$AD(1/a_{1,1})$ is
\begin{equation*}
\begin{bmatrix}
1 & a_{0,1}/a_{1,1}\\
0 & 1\\
\end{bmatrix}.
\end{equation*}
Now set $\gamma = -a_{0,1}/a_{1,1}$; the matrix
$AD(1/a_{1,1})G(\gamma)$ now has the $2\times 2$ identity matrix as
its upper left principal submatrix.
Now set $\tilde{A} = AD(1/a_{1,1})G(-a_{0,1}/a_{1,1})$; we know
that $\tilde{A}$ preserves $\mathcal{H}^{(1)}$ by Lemma
\ref{Lemma:DGPreserveH1}. We will show that $\tilde{A}$ is the
infinite identity matrix. We start with the $k=2$ case (instead
of $k=1$) to give more intuition.
Let $\tilde{A}$ be denoted
\begin{equation}
\tilde{A}=
\begin{bmatrix}
1 & 0 & a_0& \cdots\\
0 & 1 & a_1& \cdots\\
0 & 0 & a_2 & \cdots \\
\vdots & \vdots & &\ddots\\
\end{bmatrix}
\end{equation}
Consider the upper left $3\times 3$ principal submatrix of the
matrix $\tilde{A}^{*}M\tilde{A}$:
\begin{equation}
\begin{bmatrix}
1 & m_1 & a_0+a_1m_1 +a_2m_2\\
m_1 & m_2 & *\\
a_0+a_1m_1 +a_2m_2 & * & * \\
\end{bmatrix}.
\end{equation}
Since $\tilde{A}^{*}M\tilde{A}$ belongs to $\mathcal{H}^{(1)}$ for
all $M\in\mathcal{H}^{(1)}$, we know that the above matrix is
Hankel. We can treat $m_1$ and $m_2$ as independent variables, so
$a_0 = a_1 = 0$ and $a_2 = 1$. Therefore $\tilde{A}$ actually has
an upper $3\times 3$ principal submatrix which is the identity.
Continuing inductively, suppose the upper $k\times k$ principal
submatrix of $\tilde{A}$ is the $k\times k$ identity matrix and
$\tilde{A}$ has the form
\begin{equation}
\tilde{A} =
\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & a_0& * &\cdots\\
0 & 1 & 0 & \cdots & 0& a_1& *& \cdots \\
\vdots & & & & & \vdots & & \\
0 & 0 & 0 & \cdots & 1& a_{k-1} & *& \cdots\\
0 & 0 & 0 & \cdots & 0 & a_{k}& *&\cdots\\
0 & 0 & 0 & \cdots & 0 & 0 & * &\cdots \\
\vdots & \vdots & \vdots & & \vdots &\vdots & &\ddots
\end{bmatrix}.
\end{equation} Multiplying $\tilde{A}^{*}M\tilde{A}$, we find two expressions for the $(k,0)^{\mathrm{th}}$ entry:
\begin{equation}
m_{k} = \sum_{i=0}^{k} a_i m_i.
\end{equation}
Since this equation holds for all choices of $m_1, \ldots, m_k$, we must have that $a_0 = \cdots = a_{k-1} = 0$ and $a_k = 1$.
Therefore, $\tilde{A} = AD(1/a_{1,1})G(-a_{0,1}/a_{1,1}) = I$, so
we can write
\begin{equation}
A = G(a_{0,1}/a_{1,1})D(a_{1,1}).
\end{equation}
\end{proof}
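The factorization can be exercised on truncations. Starting from $A=G(\gamma)D(\delta)$, the parameters are recovered from the matrix entries as $\delta = a_{1,1}$ and $\gamma = a_{0,1}/a_{1,1}$, and the proof's normalization $AD(1/a_{1,1})G(-a_{0,1}/a_{1,1})$ returns the identity. A sketch in Python (exact arithmetic; the truncated blocks are exact because all factors are upper triangular):

```python
from fractions import Fraction as F
from math import comb

n = 5

def G(g):  # truncated G(gamma): entry (i, j) is C(j, i) * gamma^(j - i)
    return [[F(comb(j, i)) * F(g)**(j - i) if j >= i else F(0)
             for j in range(n)] for i in range(n)]

def D(d):  # truncated D(delta) = diag(1, delta, delta^2, ...)
    return [[F(d)**i if i == j else F(0) for j in range(n)] for i in range(n)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

gamma, delta = F(1, 2), F(2)
A = mul(G(gamma), D(delta))

# Recover the parameters from the entries of A, as in the proposition's proof
assert A[1][1] == delta and A[0][1] / A[1][1] == gamma

# A * D(1/a_{1,1}) * G(-a_{0,1}/a_{1,1}) should be the identity
tilde_A = mul(mul(A, D(1 / A[1][1])), G(-A[0][1] / A[1][1]))
identity = [[F(int(i == j)) for j in range(n)] for i in range(n)]
```
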
\chapter{The moment problem revisited}\label{Ch:Extensions}
Given a positive definite
Hankel matrix $M$ with real entries, we showed in Section
\ref{Subsec:exist} that there exists a measure $\mu$ (not unique in general) such that $M$ is the moment matrix of $\mu$, i.e., $M =
M^{(\mu)}$. In this chapter, we will describe a setting in which
the solution measure is not unique. Differing measures solving the same moment problem will arise from nontrivial
self-adjoint extensions of a symmetric shift operator $S$. In
fact, we will show that if $\mu$ is not unique, the self-adjoint extensions of $S$ yield a one-parameter family of measures
satisfying the moment problem for $M$. This will yield a necessary condition for non-uniqueness of measure for a given Hankel matrix $M$. For more details on the
theory of self-adjoint extensions of unbounded symmetric
operators and moments, see \cite{Con90,ReSi75,Rud91}.
Given a Hankel matrix $M$ with real entries, let $Q_M$ be the
quadratic form from Equation (\ref{Def:QM}) and let
$\mathcal{H}_Q$ be the Hilbert space completion of $Q_M$. Then the
inner product on $\mathcal{H}_Q$ is defined on the finite
sequences $\mathcal{D}$ by $$\langle c | d \rangle_{\mathcal{H}_Q} =
\sum_i\sum_j \overline{c}_i M_{i,j} d_j.$$
We define the shift operator $S$ by
\begin{equation}\label{Eqn:DefnS}
Sc = S(c_0,c_1,\ldots,) = (0,c_0,c_1,\ldots).
\end{equation} The domain of $S$ contains
$\mathcal{D}$ which is dense in $\mathcal{H}_Q$, so $S$ is densely
defined.
In Section \ref{Sec:ThreeInc} we observe some of the interplay
between the matrix $M$, the shift operator $S$, and the isometry
$F$. Then, in Sections \ref{Sec:Extensions} and \ref{Sec:Nonunique} we describe how the
self-adjoint extensions of $S$ yield solutions to the moment
problem $M = M^{(\mu)}$. We also use the shift operator in Section \ref{Sec:Banded} in order to find a Jacobi matrix corresponding to a given Hankel matrix $M$.
\section{The shift operator and three incarnations of symmetry}\label{Sec:ThreeInc}
Suppose $M$ is a positive definite Hankel matrix and $\mu$ is a measure such that $M = M^{(\mu)}$. Recall that we have defined an isometry $F:\mathcal{H}_Q\rightarrow L^2(\mu)$ by
\[Fc(x) = f_c(x) = \sum_{n\in\mathbb{N}_0}c_n x^n \textrm{ for all
}c\in\mathcal{D}.\]
The shift operator $S$
plays a major role in this chapter, and in this section we study
how $F$ and $S$ behave with respect to each other. The operators
$F$ and $S$ reveal three different incarnations of symmetry in the
Hankel matrix $M$. In turn, these incarnations will be used in
subsequent sections to prove results about non-uniqueness of
measures which solve the moment problem, particularly in Theorem
\ref{Thm:measures}.
\noindent\textbf{Incarnation 1: A symmetric operator. } We see
here the connection between the Hankel property of $M$ and the
shift operator in the Hilbert space $\mathcal{H}_Q$.
\begin{lemma}\label{Lemma:Symmetry} The shift operator $S$ is symmetric in
$\mathcal{D}\subset\mathcal{H}_Q$ if and only if the matrix $M$ is a Hankel matrix.
\end{lemma}
\begin{proof}
($\Rightarrow$): Setting $b = e_i$ and $c = e_j$ in $\langle
Sb|c\rangle_{\mathcal{H}_Q} = \langle b|Sc \rangle_{\mathcal{H}_Q}$, we see that $M_{i+1, j} = M_{i,
j+1}$, which is exactly the Hankel condition.
($\Leftarrow$): Let $b,c\in\mathcal{D}$. Then
\begin{equation}
\begin{split}
\langle Sb|c\rangle_{\mathcal{H}_Q} & = \sum_i \sum_j \overline{b_{i-1}} M_{i+j}
c_j = \sum_{i}\sum_{j}
\overline{b_i} M_{i+1+j}c_j\\
& = \sum_i \sum_j \overline{b_i} M_{i+j} c_{j-1} = \langle
b|Sc\rangle_{\mathcal{H}_Q}.
\end{split}
\end{equation}\end{proof}
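The lemma admits a quick numerical check on finite sequences: with $M_{i,j}=m_{i+j}$ built from an arbitrary real sequence, the two inner products agree. A sketch in Python (the sequence and the truncation size are arbitrary choices of ours):

```python
import random

random.seed(0)
N = 8                                        # sequences supported on {0,...,N-1}
m = [random.random() for _ in range(2 * N)]  # arbitrary sequence -> Hankel M[i][j] = m[i+j]

def ip(b, c):
    """<b|c>_{H_Q} = sum_i sum_j b_i M_{i,j} c_j (real entries, so no conjugation)."""
    return sum(b[i] * m[i + j] * c[j] for i in range(N) for j in range(N))

def shift(c):   # S(c_0, c_1, ...) = (0, c_0, c_1, ...)
    return [0.0] + c[:-1]

b = [random.random() for _ in range(N - 1)] + [0.0]  # last slot 0, so shifting loses nothing
c = [random.random() for _ in range(N - 1)] + [0.0]
lhs, rhs = ip(shift(b), c), ip(b, shift(c))          # <Sb|c> and <b|Sc>
```
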
\noindent\textbf{Incarnation 2: Multiplication by $x$. }We notice that if $p$ and $q$ are polynomials in $L^2(\mu)$, then
\[
\int \overline{x p(x)} q(x) \,\mathrm{d}\mu(x) = \int \overline{p(x)} xq(x) \,\mathrm{d}\mu(x).
\]
We can therefore state the interaction between the isometry $F$
and the shift $S$.
\begin{lemma}\label{Lemma:MultiplicationByX}Define $\widetilde{S}:=FSF^*$. Then $\widetilde{S}$ is a
symmetric operator on $\mathcal{P}\subset L^2(\mu)$, and in addition, $\widetilde{S}$ is a restriction {\rm(}to its domain{\rm)}
of the multiplication operator $M_x$:\[[M_xf](x) = xf(x).\]
\end{lemma}
\begin{proof}
Let $M_x$ be the operator which takes $f(x)$ to $xf(x)$. Using the definitions of $F$ and $S$, we see that
\begin{equation}\label{Eqn:MxFFS}
M_x(Fc) = FSc \textrm{ for all }c\in\mathcal{D},
\end{equation}
or stated equivalently,
\[ xf_c(x) = f_{Sc}(x) \textrm{ for all }c\in\mathcal{D}.\]
Since $F$ is an isometry, we know that $F^*F$ is the identity on
$\mathcal{H}_Q$. As a result, applying $F^*$ on the left to both
sides of Equation (\ref{Eqn:MxFFS}) yields
\[ S = F^*M_x F \textrm{ on } \mathcal{D}.\]
Also, $P = FF^*$ is the projection of $L^2(\mu)$ onto the closure
of the polynomials $\mathcal{P}\subset L^2(\mu)$. Applying $F^*$
on the right to both sides of Equation (\ref{Eqn:MxFFS}) yields
\[ M_xP = FSF^* = \widetilde{S} \textrm{ on }\mathcal{P},\]
which is the desired conclusion.
\end{proof}
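The identity $xf_c(x) = f_{Sc}(x)$ underlying Equation (\ref{Eqn:MxFFS}) is elementary to verify for a concrete finite sequence; a two-line sketch:

```python
def f(c, x):
    """f_c(x) = sum_n c_n x^n for a finite sequence c."""
    return sum(cn * x**n for n, cn in enumerate(c))

c = [3.0, -1.0, 2.0]        # an arbitrary finite sequence
Sc = [0.0] + c              # the shifted sequence S(c)
checks = [abs(x * f(c, x) - f(Sc, x)) for x in (-1.5, 0.25, 2.0)]
```
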
\noindent\textbf{Incarnation 3: Jacobi matrices and orthogonal
polynomials. } Looking ahead in Section \ref{Sec:Banded}, we
see that given the space $L^2(\mu)$, there is a
Jacobi matrix $J$ which encodes the multiplication operator $M_x$ in terms of a recursion relation for
orthogonal polynomials $\{p_k\}_{k \in \mathbb{N}_0}$ in $L^2(\mu)$. Specifically, \[J \left[\begin{matrix} p_0\\p_1\\ \vdots \end{matrix} \right] = \left[ \begin{matrix} M_x p_0 \\ M_xp_1\\ \vdots \end{matrix}\right] . \] Lemma \ref{Lemma:WIntertwine} then gives an intertwining relationship between the shift $S$ and this Jacobi matrix $J$.
\section{Self-adjoint extensions of a shift
operator}\label{Sec:Extensions} Recall from Definition
\ref{Def:SelfAdjoint} that a densely defined operator $S$ on a
Hilbert space $\mathcal{H}$ is \textit{symmetric} on
$\textrm{dom}(S) \subseteq \mathcal{H}$ if $\langle Sh| k\rangle =
\langle h| Sk\rangle$ for all $h,k\in
\textrm{dom}(S)$. Also recall that we can define the
\textit{adjoint} $S^*$ of $S$, as in Definition \ref{Def:Adjoint},
and we call $S$ a \textit{self-adjoint} operator if $S=S^*$; in
particular, self-adjointness requires $\mathrm{dom}(S) = \mathrm{dom}(S^*)$.
\begin{definition}\label{Defn:SAExt} Suppose $S$ is a densely defined
operator on $\mathcal{D}$ in a Hilbert space $\mathcal{H}$. A
\textit{self-adjoint} extension $T$ of $S$ satisfies the following
properties:
\begin{enumerate}
\item $T$ is self-adjoint with $\textrm{dom}(T) =
\textrm{dom}(T^*)$ \item $Tc = Sc$ for all $c \in \mathcal{D}$.
\end{enumerate}
It may be the case that no self-adjoint extensions of $S$ exist.
\end{definition}
If $S$ is essentially self-adjoint, we call the closure of $S$ a
trivial self-adjoint extension of $S$. Finally, we say that $S$ is
\textit{maximally symmetric} if $S$ has no proper self-adjoint
extensions.
Let $S$ be the closure of the shift operator from Equation (\ref{Eqn:DefnS}). By
Lemma \ref{Lemma:Symmetry}, $S$ is symmetric in $\mathcal{D}$ if
and only if the matrix $M$ is a Hankel matrix. The deficiency indices of $S$ will allow us to describe its self-adjoint extensions, if any exist.
\begin{definition}\label{Def:DeficiencySpace} Let $S$ be a closed
symmetric operator on a Hilbert space $\mathcal{H}$ and let
$\alpha \in \mathbb{C}$ with $\mathrm{Im}(\alpha) \neq 0$. We
define the \textit{deficiency subspace} of $S$ at $\alpha$ by
\begin{equation}\label{Eqn:Halpha} \mathcal{L}(\alpha) = \mathrm{null}(S^*-\alpha) = \{\xi \in \mathrm{dom}(S^*)\,:\,
S^*\xi = \alpha \xi\}.\end{equation} \end{definition}
Due to a beautiful argument of von Neumann, the dimension of the
deficiency subspace will be the same for any $\alpha$ with
$\mathrm{Im}(\alpha) > 0$, and the dimension will be the same for any
$\alpha$ with $\mathrm{Im}(\alpha) < 0$. It is therefore sufficient
to consider $\alpha = \pm i$. We will denote the respective
deficiency subspaces for the shift operator $\mathcal{L}_+$ and
$\mathcal{L}_-$.
\begin{definition}
The dimensions of $\mathcal{L}_+$ and
$\mathcal{L}_-$ for a closed symmetric operator $S$ are called the \textit{deficiency indices} of $S$ and are denoted by the ordered pair $(\textrm{dim}(\mathcal{L}_+),
\textrm{dim}(\mathcal{L}_-))$. Note that the indices can take
on any value in $\mathbb{N}_0$ or $\infty$.
\end{definition}
\begin{definition} Let $S$ and $T$ be linear operators with dense domains in a Hilbert space, and let $\mathrm{Gr}(S), \mathrm{Gr}(T)$ be their corresponding graphs. We say that $S \subseteq T$ if $\mathrm{Gr}(S) \subseteq \mathrm{Gr}(T)$. \end{definition}
Note that a densely defined operator $S$ is symmetric if and only if $S \subseteq S^*$. If $T$ is a self-adjoint extension of $S$, then \[ S \subseteq T \subseteq T^* \subseteq S^*.\] Given a bounded operator $J$, we write $JS \subseteq SJ$ if $J$ maps the domain of $S$ into itself and $JSv = SJv$ for all $v \in \mathrm{dom}(S)$.
Another theorem of von Neumann gives a
condition under which the deficiency indices of a symmetric
operator are equal.
\begin{theorem}\rm (von Neumann, as stated in \cite[Prop. 7.2, p. 343]{Con90}) \label{Thm:EqualIndices} \it Given an
operator $S$ on a Hilbert space $\mathcal{H}$, if there exists a
function $J: \mathcal{H} \rightarrow \mathcal{H}$ satisfying the
following properties:
\begin{enumerate}[(1)]
\item $J^2$ is the identity on $\mathcal{H}$, \item $J$ is
conjugate linear---that is, for all $\alpha\in\mathbb{C}$,
$J(\alpha h) = \overline{\alpha}J(h)$,
\item $\|Jh\| = \|h\|$ for all $h\in\mathcal{H}$, \item $J
\mathrm{dom}(S) \subseteq \mathrm{dom}(S)$ and $JS \subseteq SJ$,
\end{enumerate}
then $S$ has equal deficiency indices, i.e.
$\dim(\mathcal{L}_+) = \dim(\mathcal{L}_-)$.
\end{theorem}
The main idea in the proof of this theorem is that the operator
$J$ restricts as an
isometry between the deficiency subspaces $\mathcal{L}_+$ and
$\mathcal{L}_-$, thus showing they have equal dimension.
We apply this theorem to our shift operator $S$ on the space
$\mathcal{H}_Q$. Let $J$ be the conjugation operator:
\begin{equation}\label{Eqn:DefnJ}
J:\mathcal{H}_Q \rightarrow \mathcal{H}_Q \textrm{ with }J(c_0,
c_1,
c_2, \ldots):=(\overline{c_0}, \overline{c_1}, \overline{c_2},
\ldots).
\end{equation}
If $\xi \in \mathcal{L}_+$, then $J\xi \in \mathcal{L}_-$. It
is readily verified that $J$ satisfies the properties in Theorem
\ref{Thm:EqualIndices}. In particular we see that on $\mathcal{D}
= \mathrm{dom}(S)$, $J$ commutes with $S$.
We now calculate the deficiency indices for $S$, the closure of
the shift operator.
\begin{lemma}\label{Lem:indices}
The closed shift operator $S$ {\rm(}\ref{Eqn:DefnS}{\rm)} defined on
$\mathcal{D}\subset
\mathcal{H}_{Q}$ is either self-adjoint or
has deficiency indices $(1, 1)$---i.e.
\[
\dim\{\xi \in \mathrm{dom}(S^*)\,:\, S^*\xi = i\xi\}=1.
\]
In fact,
if $\xi\in\mathcal{L}_+$ with $\xi \neq 0$, then $H\xi$ is a
multiple of the vector $(1, i, i^2, i^3, \ldots)$, where $H$ is
the self-adjoint Kato operator for the quadratic form $Q_M$ on $\mathcal{H}_Q$.
\end{lemma}
\begin{proof} If $S$ is self-adjoint, its deficiency indices are $(0,0)$. Suppose there exists $\xi \neq 0$
such that $\xi\in\mathcal{L}_+$.
We will compute $(H\xi)_i$ via the inner product $\langle \cdot |
\cdot\rangle_{\mathcal{H}_Q}$ to show that
\begin{equation}\label{Eqn:AlphaVector}
H\xi \in\mathbb{C}(1, i, i^2, i^3, \ldots).
\end{equation}
Note $e_i$ belongs to $\mathcal{D} = \textrm{dom}(S)$ for every
$i\in\mathbb{N}_0$. We now compare $(H\xi)_i=\langle e_i | \xi\rangle_{\mathcal{H}_Q}$
and $(H\xi)_{i+1}= \langle e_{i+1} | \xi\rangle_{\mathcal{H}_Q}$:
\begin{equation}
\langle e_{i+1} | \xi\rangle_{\mathcal{H}_Q}
= \langle S e_i | \xi\rangle_{\mathcal{H}_Q} = \langle e_i | S^*\xi \rangle_{\mathcal{H}_Q} =
i \langle e_i | \xi\rangle_{\mathcal{H}_Q},
\end{equation}
which implies by induction that $H\xi$ is a multiple of the vector
(\ref{Eqn:AlphaVector}).
Since $H$ has trivial kernel in $\mathcal{H}_Q$, we can conclude
that $\dim \mathcal{L}_+ = 1$ and
by Theorem \ref{Thm:EqualIndices}, $\dim \mathcal{L}_- = 1$ as well. \end{proof}
We next use a theorem, again quoted almost verbatim from \cite{Con90}, which states that the
self-adjoint extensions of $S$ are determined by the partial
isometries from $\mathcal{L}_+$ to $\mathcal{L}_-$.
\begin{theorem}\cite[Theorem 2.17, p. 314]{Con90}\label{Thm:Isom} Let $S$ be a closed
symmetric operator. If $W$ is a partial isometry with initial space
in $\mathcal{L}_+$ and final space in $\mathcal{L}_-$, then there
is a closed symmetric extension $S_W$ of $S$ on the domain
\[\{f+g+Wg\,:\, f \in \mathrm{dom}(S), g \in \mathrm{initial}(W)\}\]
given by
\[S_W(f+g+Wg) = Sf+ig-iWg.\] Conversely, if $T$ is any closed
symmetric extension of $S$, then there is a unique partial
isometry $W$ such that $T=S_W$ as defined above.
\end{theorem}
In the case of our shift operator $S$, we see that the only
nontrivial partial isometries from $\mathcal{L}_+$ to
$\mathcal{L}_-$ are isometries between the one-dimensional spaces.
These isometries are given by multiplication by $z \in \mathbb{C}$ where $|z|=1$.
\begin{theorem}\label{Thm:extensions} Given a Hankel matrix $M$ and the associated
Hilbert space $\mathcal{H}_Q$, let $S$ be the closure of the symmetric shift operator
{\rm(}\ref{Eqn:DefnS}{\rm)}. If $S$ is not self-adjoint, then its
self-adjoint extensions are exactly the operators
\[T_z(c+\xi+zJ\xi) = Sc+i\xi-izJ\xi, \qquad z \in
\mathbb{C},\ |z|=1,\]
with domain $\{c+\xi+zJ\xi \,:\, c \in \mathrm{dom}(S),\, \xi \in \mathcal{L}_+\}$.
\end{theorem}
\begin{proof}
By Lemma \ref{Lem:indices}, if $S$ is not self-adjoint, it has
deficiency indices $(1,1)$. If $\xi \in \mathcal{L}_+$, then
$J\xi \in \mathcal{L}_-$. Given $z \in \mathbb{C}$ with
$|z|=1$, we can define an isometry $W_z:\mathcal{L}_+ \rightarrow
\mathcal{L}_-$ by $W_z\xi = zJ\xi$. In fact, every such isometry
is of this form since the spaces are one-dimensional. The result follows from Theorem \ref{Thm:Isom}.
\end{proof}
\section{Self-adjoint extensions and the moment problem}\label{Sec:Nonunique}
Next, we describe how the self-adjoint extensions $T_z$ to $S$
described in Theorem \ref{Thm:extensions} yield solutions $\mu_z$
to the moment problem $M=M^{(\mu)}$. (For background, see
\cite{Con90}.)
As we discussed in Section \ref{Sec:pvm}, every self-adjoint
operator $T$ can be written in terms of a projection-valued
measure $E$ such that
\begin{equation}\label{Eqn:ResolutionT}
T = \int_{\mathbb{R}} \lambda E(\mathrm{d}\lambda).
\end{equation}
\begin{proposition}\cite[Prop. 7.2, p. 343]{Con90}\label{Prop:NonuniqueMeasure}
Given an infinite Hankel matrix $M$, suppose $T$ is a self-adjoint
nontrivial extension of the shift operator $S$ on the Hilbert
space $\mathcal{H}_Q$ with corresponding
projection-valued measure $E$ {\rm(}\ref{Eqn:ResolutionT}{\rm)}. Given $E$, we can define a real measure as we did in Equation {\rm(}\ref{Eqn:RealMeasure}{\rm)}, by
\[
\mu(\cdot) := \langle e_0 | E(\cdot)e_0\rangle_{\mathcal{H}_Q},
\]
where $e_0$ is the first standard basis vector $(1, 0, 0,
\ldots)$. Then the real-valued measure $\mu$ is a solution to the moment
problem for $M$---that is,
\begin{equation*}
\int x^i \,\mathrm{d}\mu(x) = M_i \textrm{ for all }i\in\mathbb{N}_0.
\end{equation*}
\end{proposition}
\begin{proof} From the multiplicative property of integrals against projection-valued measures,
\[\int_{\mathbb{R}} \lambda^i E(\mathrm{d}\lambda) = T^i\] for each $i \in
\mathbb{N}_0$. This gives the corresponding result for $\mu$:\[
\int_{\mathbb{R}} x^i \mathrm{d}\mu(x) = \langle e_0 | T^i e_0
\rangle .\] Since $e_0 \in \mathcal{D}$ and $T$ maps $\mathcal{D}$ to $\mathcal{D}$, we have the following
calculation:
\begin{equation}
\begin{split}
\int x^i \,\mathrm{d}\mu(x) & = \langle e_0 | T^i e_0\rangle_{\mathcal{H}_Q}
= \langle e_0 |S^i e_0\rangle_{\mathcal{H}_Q}
= \langle e_0|e_i\rangle_{\mathcal{H}_Q}\\
& = M_{0, i} = M_i.
\end{split}
\end{equation}\end{proof}
We can now associate to each self-adjoint extension to the shift operator $S$ a measure which satisfies
the moment problem for the matrix $M$, in the case where the shift
operator is not essentially self-adjoint.
\begin{theorem}\label{Thm:measures} Given an infinite Hankel matrix $M$ with $M_{0,0} = 1$, let $S$ be
the closure of the shift operator on $\mathcal{H}_Q$.
Then the
following statements are equivalent. \begin{enumerate}
\item $S$ is not self-adjoint.
\item The set of distinct solutions to the moment problem $M=M^{(\mu)}$ is a one-parameter family of probability measures $\{\mu_z\,:\, z\in \mathbb{C}, |z|=1\}$.
\item There exist two nonequivalent probability measures $\mu_1$ and $\mu_2$ which are both solutions to the moment problem, i.e. $M=M^{(\mu_1)} = M^{(\mu_2)}$.
\item Given any measure $\mu$ solving the moment problem $M = M^{(\mu)}$, the polynomials are not dense in the space $L^2(\mu)$.
\end{enumerate}
\end{theorem}
\begin{proof}
\noindent $(1) \Rightarrow (2)$: If $S$ is not self-adjoint, then by
Theorem \ref{Thm:extensions} there is a one-parameter family
$\{T_z\,:\, z \in \mathbb{C}, |z|=1\}$ of distinct (and not
unitarily equivalent) self-adjoint extensions to $S$. For each
$T_z$, we can define a measure $\mu_z$ as in Proposition
\ref{Prop:NonuniqueMeasure} which satisfies the moment problem
$M=M^{(\mu)}$. It remains to be shown that these measures are
distinct.
The isometry $F:\mathcal{H}_Q \rightarrow L^2(\mu_z)$ which maps
$T_z^ke_0$ to $x^k$ extends to map $\psi(T_z)(e_0)$ to the function
$\psi$. Thus, because the span of the functions
$\{\frac{1}{x-\alpha}\,:\, \alpha \in \mathbb{C},
\mathrm{Im}(\alpha) \neq 0\}$ is dense in each space $L^2(\mu_z)$,
the measure $\mu_z$ is uniquely determined by these functions. The
extensions $T_z$ are cyclic operators on $\mathcal{H}_Q$, which
means (see \cite{Con90}) that the set of vectors
$\{(T_z-\alpha I)^{-1}e_0 \,:\, \alpha \in \mathbb{C},
\mathrm{Im}(\alpha) \neq 0\}$ is dense in $\mathcal{H}_Q$. It
follows that $\mu_z$ is determined uniquely by the isomorphism $F$
mapping the vector $(T_z-\alpha)^{-1}e_0 \in \mathcal{H}_Q$ to the
function $\frac{1}{x-\alpha} \in L^2(\mu)$. Since the operators
$T_z$ are distinct, this proves the measures $\{\mu_z\,:\, z \in
\mathbb{C}, |z|=1\}$ are distinct.
\noindent $(2) \Rightarrow (3)$: Follows because each $\mu_z$ is distinct.
\noindent $(3) \Rightarrow (4)$: We prove the contrapositive. Suppose
the space of polynomials $\mathcal{P}$ is dense in $L^2(\mu)$ where
$M=M^{(\mu)}$. Recall the map $F:\mathcal{D} \rightarrow \mathcal{P}$
given by $Fc = \sum_i c_ix^i$ is an isometry on $\mathcal{D}$ which
extends to an isometry on $\mathcal{H}_Q$. Assume $\xi \in
\mathcal{L}_+$, so $S^*\xi = i\xi$ and let $c \in \mathcal{D}$. Then
\begin{equation*}
\begin{split}
& \int_{\mathbb{R}}\overline{xFc(x)}F\xi(x) \mathrm{d}\mu(x)
= \langle FSc|F\xi \rangle_{L^2(\mu)} \\
& = \langle Sc | \xi \rangle_{\mathcal{H}_Q}
= \langle c| S^*\xi \rangle_{\mathcal{H}_Q}
= i \langle c|\xi \rangle_{\mathcal{H}_Q}\\
& = i\langle Fc | F\xi \rangle_{L^2(\mu)}
= i \int_{\mathbb{R}} \overline{Fc}(x)F\xi(x)\mathrm{d}\mu(x).
\end{split}
\end{equation*} Because $F$ gives a one-to-one correspondence between
$\mathcal{D}$ and $\mathcal{P}$, we can say for any polynomial $p \in
\mathcal{P}$ and $\xi \in \mathcal{L}_+$,
\begin{equation}\label{Eqn:XP} \int_{\mathbb{R}}
(x-i)\overline{p(x)}F\xi(x) \mathrm{d}\mu(x) = 0.\end{equation}
The function $\frac{1}{x+i} \in L^{\infty}(\mu)$, hence
$\frac{F\xi}{x+i} \in L^2(\mu)$. Let $\{p_n\} \subset \mathcal{P}$
converge in $L^2(\mu)$ to $\frac{F\xi}{x+i} \in L^2(\mu)$, so that
$(x-i)\overline{p_n(x)} = \overline{(x+i)p_n(x)} \rightarrow \overline{F\xi(x)}$.
Substituting into Equation (\ref{Eqn:XP}) then gives \[\int_{\mathbb{R}}
|F\xi(x)|^2 \mathrm{d}\mu(x) = 0,\] hence $\xi = 0$. Therefore, $S$
has deficiency indices $(0,0)$.
\noindent $(4) \Rightarrow (1)$: Assume the polynomials are not dense in
$L^2(\mu)$. As before, $M = M^{(\mu)}$. Let $\psi \in L^2(\mu)$ be a
nonzero bounded vector orthogonal to the subspace $\mathcal{P}$
spanned by the polynomials. Recall that $L^2(\mu)\ominus \mathcal{P}\neq 0$ if and only if $\mathcal{P}$ is not dense in $L^2(\mu)$. Given $\alpha \in \mathbb{C}$ with
$\mathrm{Im}(\alpha) \neq 0$, define \[\xi_{\alpha} =
\frac{\psi(x)}{x-\alpha}.\] Then, because each function
$\frac{1}{x-\alpha}$ is bounded, we have $\xi_{\alpha} \in
L^2(\mu)$. Note that the function $[M_x\xi_{\alpha}](x) =
x\xi_{\alpha}(x)$ is also an $L^2(\mu)$ function because
$x\xi_{\alpha}(x) = \psi(x)+\alpha \xi_{\alpha}(x)$.
The isometry $F$ between $\mathcal{H}_Q$ and $L^2(\mu)$ resulting
from Equation (\ref{Eqn:DefnF}) ensures that $FF^*$ is the projection
onto the range of $F$, which is the closed space spanned by the
polynomials in $L^2(\mu)$. Therefore, we know $F^*\psi = 0$. Given
the function $\xi_{\alpha}$, let $\phi_{\alpha} \in \mathcal{H}_Q$ be
defined by $\phi_{\alpha} = F^*\xi_{\alpha}$. We first must show that
for at least one choice of $\alpha$, $\phi_{\alpha} \neq 0$.
Suppose that for all $\alpha \in \mathbb{C} \setminus \mathbb{R}$,
$\xi_{\alpha}$ is orthogonal to the polynomials. Define $\Psi$ to be
the linear span of the functions $\{\frac{1}{x-\alpha}: \alpha \in
\mathbb{C}, \mathrm{Im}(\alpha) \neq 0\}$. It is well-known that
$\Psi$ is dense in $L^2(\mu)$ when $\mu$ is a finite real measure, so
let $\{\psi_k\}_{k \in \mathbb{N}_0} \subset \Psi$ be a sequence of functions which
converges to $\overline{\psi}$ in $L^2(\mu)$. This gives
\begin{equation*}
\begin{split}
& \int_{\mathbb{R}} x^{\ell} \psi(x) [\overline{\psi(x)}- \psi_k(x)]\mathrm{d}\mu(x)\\
& =\int_{\mathbb{R}} x^{\ell} |\psi(x)|^2 \mathrm{d}\mu(x) - \int_{\mathbb{R}} x^{\ell} \psi(x)\psi_k(x) \mathrm{d}\mu(x)\\
&\rightarrow 0 \; \mathrm{as}\, k \rightarrow \infty.
\end{split}
\end{equation*}
The terms $\psi(x)\psi_k(x)$ are linear combinations of $\xi_{\alpha}$
functions, so the second integral in the sum above is zero for all
$k,\ell \in \mathbb{N}_0$. Therefore, for all $\ell \in
\mathbb{N}_0$, \[\int_{\mathbb{R}} x^{\ell} |\psi(x)|^2
\mathrm{d}\mu(x)=0.\] In particular, the $\ell=0$ case implies that
$\psi = 0$. This contradicts our choice of $\psi$, hence we know
there must be some function $\xi_{\alpha}$ which is not orthogonal to
the polynomial space $\mathcal{P}$.
Because \[ \int_{\mathbb{R}} (x-\alpha)\overline{p(x)}\xi_{\alpha}(x) \mathrm{d}\mu(x) = \int_{\mathbb{R}} \overline{p(x)} \psi(x) \mathrm{d}\mu(x) = 0\] for all $p \in \mathcal{P}$, we have \[
\langle M_xp|\xi_{\alpha} \rangle_{L^2(\mu)} = \alpha \langle p| \xi_{\alpha} \rangle_{L^2(\mu)},\] where $M_x$ is the multiplication operator by $x$. Every polynomial $p$ is the image of a finite sequence $d \in \mathcal{D}$ under $F$. Recall that $FS=M_xF$ for all $d \in \mathcal{D}$, and let $\phi_{\alpha} = F^*\xi_{\alpha} \neq 0$. Applying $F^*$ in the above equation gives \[\langle F^*M_xp|F^*\xi_{\alpha} \rangle_{\mathcal{H}_Q} = \langle Sd|\phi_{\alpha} \rangle_{\mathcal{H}_Q} = \alpha\langle d|\phi_{\alpha} \rangle_{\mathcal{H}_Q} = \alpha\langle F^*p|F^*\xi_{\alpha} \rangle_{\mathcal{H}_Q}. \]Therefore, $\phi_{\alpha}$ satisfies the equation $S^*\phi = \alpha\phi$, hence $\phi_{\alpha} \in \mathcal{L}(\alpha)$ for $S$ which proves $S$ is not self-adjoint.
\end{proof}
The following example exhibits a Hankel matrix $M$ whose moment
problem does not have a unique solution.
\begin{example}\label{Ex:NotUnique}A non-unique measure.
Set
\[
f(x) := \int_{\mathbb{R}} \cos(xt) e^{-(t^2 + 1/t^2)} \,\mathrm{d}t.
\]
Then
\[
\int x^i f(x) \,\mathrm{d}x
= \Bigl(\frac{\mathrm{d}}{\mathrm{d}t}\Bigr)^i e^{-(t^2 +1/t^2)}\Big|_{t=0}
= 0 \textrm{ for all }i\in\mathbb{N}_0,
\]
since $e^{-(t^2+1/t^2)}$ vanishes to infinite order at $t=0$.
Let $f_{+} = \max (f,0)$ and $f_{-} = -\min (f,0)$, so that
$f = f_{+} - f_{-}$, and set $\,\mathrm{d}\mu_{\pm}(x) = f_{\pm}(x) \,\mathrm{d}x.$ Then
\[ \int x^i \,\mathrm{d}\mu_{+}(x) = \int x^i \,\mathrm{d}\mu_{-}(x) \textrm{ for all }i\in\mathbb{N}_0,\]
so the distinct measures $\mu_{+}$ and $\mu_{-}$ have the same moment matrix.\hfill$\Diamond$
\end{example}
Next, we see that the converse of Lemma \ref{Lem:indices} also holds.
\begin{theorem} Let $M$ be an infinite Hankel matrix with
$M_{0,0}=1$. The following statements are equivalent.
\begin{enumerate}
\item The moment problem $M=M^{(\mu)}$ does not have a unique solution.
\item Given any $\alpha \in \mathbb{C}$ with $0 < \mathrm{Im}(\alpha)< 1$, there exists a vector $\xi \in \mathcal{H}_Q$ such that $H\xi = \lambda(1, \alpha, \alpha^2, \alpha^3, \cdots)$ for some $\lambda \in \mathbb{C}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\mathcal{H}_Q$ be the Hilbert space completion of the
quadratic form $Q_M$ as defined previously, and let $S$ be the
closure of the shift operator on $\mathcal{H}_Q$. The solution to
the moment problem is unique if and only if the shift operator has
no self-adjoint extensions, by Theorem \ref{Thm:measures}. Using Lemma
\ref{Lem:indices}, this is true if and only if $S$ is self-adjoint, i.e. its deficiency indices are $(0,0)$.
\noindent $(1 \Rightarrow 2)$: This is a restatement of Lemma \ref{Lem:indices}, using $\alpha$ for the deficiency spaces instead of $i$.
\noindent $(2 \Rightarrow 1)$: Fix $\alpha \in \mathbb{C}$ such that $0< \mathrm{Im}(\alpha) < 1$ and denote \[s = (1,\alpha,\alpha^2, \alpha^3, \ldots ) \in \mathcal{H}_Q.\] Assume there exists a nonzero $\xi \in \mathcal{H}_Q$ such that $H\xi = \lambda s$ for some nonzero scalar $\lambda \in \mathbb{C}$. Then for $e_k$ a standard basis vector, \[
\langle Se_k|\xi \rangle_{\mathcal{H}_Q} = \langle Se_k|H\xi \rangle_{\ell^2} = \lambda \alpha^{k+1},\] where we have selected $\alpha$ so that $H\xi$ is in $\ell^2$ as well as in $\mathcal{H}_Q$. We then compute \[\langle e_k|\xi\rangle_{\mathcal{H}_Q} = \langle e_k|H\xi \rangle_{\ell^2} = \lambda \alpha^k.\] By linearity, we have for every $d \in \mathcal{D}$,\[ \langle Sd|\xi \rangle_{\mathcal{H}_Q} = \alpha \langle d|\xi\rangle_{\mathcal{H}_Q}, \] hence $\xi \in \mathcal{L}(\alpha)$ and we know the moment problem for $\mu$ does not have a unique solution.
\end{proof}
\section{Jacobi representations of matrices}\label{Sec:Banded}
The big picture in our work is an analysis of measures, passing from
moments to spectra. It turns out that a number of our problems may be
studied with the use of unbounded operators in Hilbert space, which
fits in with our multi-faceted operator-theoretic approach to moment
problems.
In this section we focus on the special relationship
between banded matrices which represent unbounded operators and
their associated moment problems. Beginning with a Hankel matrix $M$, we find a
(nonunique) banded Jacobi matrix $T$ which encodes the information in $M$ (Theorem \ref{Thm:MTClass}). On the other hand, given a banded matrix $T$, we can find an associated moment matrix $M$ (Theorem \ref{Thm:JIndepOfMu}). We conclude with a discussion of higher-dimensional analogues and banded matrices which arise in quantum mechanics.
\begin{definition}
We say that a matrix $T$ indexed by $\mathbb{N}_0 \times
\mathbb{N}_0$ is \textit{banded} if its nonzero entries are
restricted to the main diagonal and some number of diagonal bands
adjacent to the main diagonal. Specifically, $T$ is banded if
there exist $b_1,b_2$ such that for all $j,k \in \mathbb{N}_0$, \[
T_{j,k} \neq 0 \Rightarrow j-b_1 \leq k \leq j+b_2.\]
\end{definition}
When $T_1$ and $T_2$ are banded matrices, their product is defined entrywise by
\[(T_1T_2)_{s,t}:= \sum_{n\in \mathbb{N}_0} (T_1)_{s,n} (T_2)_{n, t}\] for each pair
$(s,t)\in \mathbb{N}_0\times \mathbb{N}_0$; the bandedness of the factors ensures that each of these sums has only finitely many nonzero terms. It is not hard to show that the banded matrices
form an algebra with matrix multiplication as the multiplication operation.
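As a quick illustration (our own numerical sketch, not part of the text), finite sections of banded matrices multiply with bandwidths that add:

```python
import numpy as np

# Minimal sketch: the product of two banded matrices is banded, with
# bandwidths that add.  We truncate the infinite N_0 x N_0 matrices to
# finite sections, which is harmless here because each entry of T1*T2
# is a finite sum when the factors are banded.
N = 12
rng = np.random.default_rng(0)

def random_banded(n, b1, b2):
    """Random n x n matrix with T[j,k] = 0 unless j - b1 <= k <= j + b2."""
    T = rng.standard_normal((n, n))
    j, k = np.indices((n, n))
    return np.where((k >= j - b1) & (k <= j + b2), T, 0.0)

T1 = random_banded(N, 1, 2)
T2 = random_banded(N, 2, 1)
P = T1 @ T2

# The product vanishes outside the band j - (1+2) <= k <= j + (2+1).
j, k = np.indices((N, N))
assert np.all(P[(k < j - 3) | (k > j + 3)] == 0.0)
```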
Recall that a matrix $A$ with real entries is called
\textit{symmetric} if $A = A^{\mathrm{tr}}$ and if $A$ has complex
entries, it is called \textit{hermitian} if $A =
\overline{A}^{\mathrm{tr}}$. Banded hermitian (or symmetric)
matrices $T:\mathbb{N}_0\times \mathbb{N}_0 \rightarrow
\mathbb{C}$ define symmetric \textit{operators} on
$\ell^2(\mathbb{N}_0)$ with $\mathcal{D}$ as dense domain.
\begin{example}\label{Ex:P}The matrix $P$ which represents the momentum operator in quantum mechanics is a banded symmetric matrix:
\begin{equation}\label{Eqn:MomentumP}
P=\frac{1}{2}
\begin{bmatrix}
0 & 1 & 0 & 0 & & & &&\\
1 & 0 & \sqrt{2}&0 & & & &&\\
0 & \sqrt{2} & 0 & \sqrt{3} & & & &&\\
& &\ddots & \ddots & \ddots& & &&\\
& & &\sqrt{n-2}& 0 & \sqrt{n-1}& &&\\
& & & 0 & \sqrt{n-1}& 0 &\sqrt{n} &&\\
& & & &\ddots & \ddots & \ddots& &\\
\end{bmatrix}.
\end{equation}
Given $v \in \ell^2$, we have
\[(Pv)_n = \frac{1}{2}\left(\sqrt{n}\,v_{n-1} + \sqrt{n+1}\,v_{n+1}\right),\]
with the convention $v_{-1} = 0$.
\hfill$\Diamond$
\end{example}
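A small numerical check (our own sketch): a truncated section of $P$, with entries read off the band in the display, is symmetric and acts by $(Pv)_n = \tfrac12(\sqrt{n}\,v_{n-1} + \sqrt{n+1}\,v_{n+1})$ away from the truncation edge.

```python
import numpy as np

# Finite section of the momentum matrix P; reading the entries off the band,
# P[n, n-1] = sqrt(n)/2 and P[n, n+1] = sqrt(n+1)/2.
N = 8
P = np.zeros((N, N))
for n in range(1, N):
    P[n, n - 1] = P[n - 1, n] = np.sqrt(n) / 2.0

v = np.arange(1.0, N + 1.0)          # any test vector
Pv = P @ v

# Away from the truncation edge, (Pv)_n = (sqrt(n) v_{n-1} + sqrt(n+1) v_{n+1}) / 2.
for n in range(1, N - 1):
    expected = (np.sqrt(n) * v[n - 1] + np.sqrt(n + 1) * v[n + 1]) / 2.0
    assert abs(Pv[n] - expected) < 1e-12
assert np.allclose(P, P.T)           # P is symmetric
```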
The following definition generalizes to higher dimensions, but for
clarity we will state everything in a one-dimensional form. We begin with a Hankel matrix $M$, and again work in the Hilbert space $\mathcal{H}_Q$. We
denote the standard orthonormal basis in $\ell^2(\mathbb{N}_0)$ by
$\{\delta_n\}_{n \in \mathbb{N}_0}$. Even though $\{\delta_n\}_{n \in\mathbb{N}_0}$ is not an ONB in $\mathcal{H}_Q$, we also use $\delta_n$ to denote the element of $\mathcal{H}_Q$ with $0$'s in every place except the $(n+1)^{\textrm{st}}$, which contains a $1$.
\begin{definition}We say that a Hankel matrix $M$ is of \textit{$T$-class} if there
exists some banded hermitian matrix $T$, representing a symmetric
operator, such that
\begin{equation}\label{Eqn:TClass}
M_{j,k} = \langle \delta_0 | T^{j+k}\delta_0\rangle_{\ell^2}.
\end{equation}
\end{definition}
Recall from Proposition \ref{Prop:NonuniqueMeasure} that if $M$ is
Hankel and positive semidefinite, then $M$ satisfies the equation \[ M_{j,k} = \langle \delta_0|\widetilde{S}
^{j+k}\delta_0 \rangle _{\mathcal{H}_Q}, \]
where $\widetilde{S}$ is a self-adjoint extension
of the shift operator on the Hilbert space $\mathcal{H}_Q$. We will demonstrate the correspondence between this shift operator $S$ (when a moment matrix $M$ is given) and a symmetric Jacobi matrix $T$.
\begin{lemma}\label{Lem:TClassPD}Every Hankel matrix of $T$-class is positive
semidefinite.
\end{lemma}
\begin{proof} Let $v\in\mathcal{D}$. Then because $T$ is banded, all
summations are finite:
\begin{equation}
\begin{split}
\sum_j \sum_k \overline{v_j} M_{j,k} v_k
& = \sum_j \sum_k \overline{v_j}\Bigl\langle \delta_0|T^{j+k}\delta_0\Bigr\rangle_{\ell^2} v_k\\
& = \Bigl\langle \sum_j v_j T^j\delta_0 | \sum_k v_k T^k\delta_0\Bigr\rangle_{\ell^2}\\
& = \Big\| \sum_{j} v_j T^j\delta_0\Big\|_{\ell^2}^2\geq 0.
\end{split}
\end{equation}
\end{proof}
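A numerical sketch of this lemma (our own illustration, with a simple Jacobi matrix of our choosing as $T$): build $M_{j,k} = \langle \delta_0 | T^{j+k}\delta_0\rangle$ from a finite section and confirm positive semidefiniteness.

```python
import numpy as np

# Build M_{j,k} = <delta_0 | T^{j+k} delta_0> from a banded symmetric T and
# confirm M is positive semidefinite.  The truncation size is our own choice;
# entries with j + k small are exact because T^m delta_0 only reaches the
# first m + 1 coordinates.
N, K = 40, 6                         # truncation size, moment matrix size
T = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # a simple Jacobi matrix
delta0 = np.zeros(N); delta0[0] = 1.0

powers = [delta0]
for _ in range(2 * K):
    powers.append(T @ powers[-1])    # powers[m] = T^m delta_0

M = np.array([[powers[j + k][0] for k in range(K)] for j in range(K)])
eigs = np.linalg.eigvalsh(M)
assert eigs.min() > -1e-10           # M is positive semidefinite
```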
\begin{lemma}[One-dimensional version]\label{Lemma:WIsometry}
Suppose $M$ is Hankel of $T$-class. Define $W:\mathcal{H}_Q \rightarrow \ell^2(\mathbb{N}_0)$ by
\[W(\delta_k):=T^k\delta_0 \qquad \forall k \in \mathbb{N}_0.\]
Then $W$ is an isometry.
\end{lemma}
\begin{proof}We first show that $\|W(\delta_k)\|_{\ell^2}^2 = \|\delta_k\|^2_{\mathcal{H}_Q}$.
Recall from the definition of the norm on $\mathcal{H}_Q$ that
\[\|\delta_k\|^2_{\mathcal{H}_Q} = Q_M(\delta_k),\] and
\[Q_M(\delta_k) = M_{k,k} = \langle \delta_0 | T^{k+k}\delta_0\rangle_{\ell^2}.\] On the other hand,
\[ \|W(\delta_k)\|_{\ell^2}^2 = \langle T^k \delta_0 |T^k\delta_0\rangle_{\ell^2} = \langle \delta_0|T^{k+k}\delta_0\rangle_{\ell^2}.\]
Since every element $v$ of $\mathcal{D}$ is a finite linear
combination of $\delta_k$s, the rest of the proof follows from the
same computation used in the proof of Lemma
\ref{Lem:TClassPD}.\end{proof}
As in Equation (\ref{Eqn:DefnS}), let $S$ refer to the
closure of the shift operator, which is defined on $\mathcal{D}$
by
\[ S(c_0, c_1, c_2, \ldots ) := (0, c_0, c_1, c_2, \ldots ).\]
Because we will consider deficiency indices of two different
operators $S$ and $T$, we let $\mathcal{L}_{\pm}(S)$ denote the
deficiency subspaces of $S$, and we let $\mathcal{L}_{\pm}(T)$
denote the deficiency subspaces of $T$; the corresponding
deficiency indices are the dimensions of these subspaces.
\begin{lemma}\label{Lemma:WIntertwine} Let $M$ be of $T$-class and let the Hilbert space $\mathcal{H}_Q$ be as defined above.
On the dense domain $\mathcal{D} \subset \mathcal{H}_Q$, $WS =
TW$.
\end{lemma}
\begin{proof} Let $c=(c_0, c_1, \ldots)\in\mathcal{D}$. Then
\begin{equation}
\begin{split}
WS(c)
& = W(0, c_0, c_1, \ldots)\\
& = c_0 T\delta_0 + c_1 T^2 \delta_0 + c_2 T^3 \delta_0 + \cdots\\
& = T( c_0 \delta_0 + c_1 T \delta_0 + c_2 T^2 \delta_0 + \cdots )
=TW(c).
\end{split}
\end{equation}\end{proof}
An immediate result of Lemma \ref{Lemma:WIntertwine} is the
following:
\begin{lemma}Suppose the set $\{Wc\}_{ c\in \mathcal{D}}$ is dense in $\text{dom}(T)$.
Then the isometry $W$ maps the $S$-defect subspaces
$\mathcal{L}_{\pm}(S)$ into the $T$-defect subspaces
$\mathcal{L}_{\pm}(T)$.
\end{lemma}
\begin{proof} Suppose $f_{\pm}$ denotes an element of
$\mathcal{L}_{\pm}(S)$. We know that $S^*f_{\pm} = \pm if_{\pm}$
if and only if $\langle Sc|f_{\pm}\rangle_{\mathcal{H}_Q} = \pm i\langle
c|f_{\pm}\rangle_{\mathcal{H}_Q}$ for all $c\in\mathcal{D}$. With the assumption
of density, $T^*Wf_{\pm} = \pm iWf_{\pm}$ if and only if
\[\langle TWc|Wf_{\pm}\rangle_{\ell^2} = \pm i \langle Wc|Wf_{\pm}\rangle_{\ell^2}\]
for all $c\in\mathcal{D}$.
Let $c\in\mathcal{D}$. Then by Lemmas \ref{Lemma:WIntertwine} and \ref{Lemma:WIsometry},
\begin{equation}
\begin{split}
\langle TWc|Wf_{\pm}\rangle_{\ell^2}
& = \langle WSc|Wf_{\pm}\rangle_{\ell^2} = \langle Sc|f_{\pm}\rangle_{\mathcal{H}_Q} \\
& = \langle c|S^*f_{\pm}\rangle_{\mathcal{H}_Q} = \pm i\langle c|f_{\pm}\rangle_{\mathcal{H}_Q} = \pm i\langle Wc|Wf_{\pm}\rangle_{\ell^2}.
\end{split}
\end{equation}\end{proof}
Recall that we established in Lemma \ref{Lem:indices} that the deficiency indices of $S$ are either $(0,0)$ or $(1,1)$.
\begin{corollary}\label{Cor:STdefects}Suppose the set $\{Wc \:\:|\:\: c\in \mathcal{D}\}$ is dense in $\text{dom}(T)$. Then
\[ \dim \mathcal{L}_{\pm}(T) \geq \dim \mathcal{L}_{\pm}(S).\]
\end{corollary}
Table \ref{Table:STW} shows the relationships among $S$, $T$, and $W$.
\begin{table}\caption{Relationships among $S$, $T$, and $W$.}
\begin{tabular}{ccc}\label{Table:STW}
$\mathcal{H}_Q$ & $\overset{W}{\longrightarrow}$ & $\ell^2(\mathbb{N}_0)$ \\
$S \curvearrowright$ & & $\curvearrowright T$\\
$\mathcal{H}_Q$& $\overset{W}{\longrightarrow}$ & $\ell^2(\mathbb{N}_0)$ \\
\end{tabular}
\end{table}
Because
\begin{equation}\label{Eqn:TWWS}
TW = WS,
\end{equation}
where $S(c_0, c_1, \ldots):=(0, c_0, c_1, \ldots)$ acts on $\mathcal{H}_Q$, we can take adjoints in (\ref{Eqn:TWWS}) to see that
\begin{equation}\label{Eqn:WTSW}
W^*T^* = S^*W^*.
\end{equation}
Finally, $W^*$ maps $\mathcal{L}_{\pm}(T)$ into $\mathcal{L}_{\pm}(S)$. To see this, suppose $T^* f_{\pm} = \pm i f_{\pm}$. Apply $W^*$ and use (\ref{Eqn:WTSW}):
\[W^*T^*f_{\pm} = \pm iW^*f_{\pm}= S^*W^*f_{\pm},\]
which implies that $W^*f_{\pm} \in \mathcal{L}_{\pm}(S)$.
We now come to the main result of this section: every positive definite Hankel matrix $M$ is of $T$-class, and the matrix $T$ such that
\[M_{j,k} = \langle \delta_0 | T^{j+k}\delta_0\rangle_{\ell^2}\]
can be chosen to be banded with respect to the canonical ONB in
$\ell^2(\mathbb{N}_0)$.
\begin{theorem}\label{Thm:MTClass}
Let $M$ be a positive definite Hankel matrix, where $M$ is normalized so that $M_{0,0} = 1$.
\begin{enumerate}[\rm(a)]
\item Then there exists a banded hermitian matrix $T$ operating on $\ell^2(\mathbb{N}_0)$ such that (\ref{Eqn:TClass}) is satisfied.
\item We may choose $T$ of the banded form
\begin{equation}
T=
\begin{bmatrix}
b_0 & a_0 & 0 & 0 & 0 &\cdots &\\
\overline{a_0} & b_1 & a_1 & 0 & 0 &\cdots &\\
0 & \overline{a_1} & b_2 & a_2 & 0 & &\\
0 & 0 & \overline{a_2} & b_3 & a_3& &\\
\vdots & \vdots & & & &\ddots &\\
\end{bmatrix}.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}By Theorem \ref{thm:real} there exists $\mu$ such that
\[M_{j,k} = \int_{\mathbb{R}} x^{j+k} \,\mathrm{d}\mu(x).\]
Now select the orthogonal polynomials $p_0(x), p_1(x), p_2(x), \ldots$ in $L^2(\mu)$ with $p_0(x) = 1$. Then the mapping
\begin{equation}\label{Eqn:Fisometry}
\sum_{k=0}^{\infty} c_k p_k(x) \mapsto \{c_k\}_{k\in\mathbb{N}_0}
\end{equation}
is an isometry of a subspace in $L^2(\mu)$ onto $\ell^2(\mathbb{N}_0)$. Specifically,
\begin{equation}\label{Eqn:SquareSum}
\Big\|\sum_k c_k p_k \Big\|^2_{L^2(\mu)} = \sum_{k}|c_k|^2.
\end{equation}
It is well-known (see, for example, \cite{Akh65, AAR99}) that there exist sequences $\{a_n\}_{n \in \mathbb{N}_0}$ and $\{b_n\}_{n \in \mathbb{N}_0}$ such that the following three-term recursion formulas are satisfied:
\begin{align*}
xp_0 & = b_0 p_0 + \overline{a_0}p_1\\
xp_1 & = a_0 p_0 + b_1p_1 + \overline{a_1} p_2\\
\vdots & \qquad \qquad \vdots\\
xp_j &= a_{j-1}p_{j-1} + b_jp_j + \overline{a_j}p_{j+1}\\
\vdots & \qquad \qquad \vdots
\end{align*} Because of the
Hankel property assumed for $M$, in proving (\ref{Eqn:TClass}), it
is enough to show that
\begin{equation}\label{Eqn:44}
M_{0,k} = \langle \delta_0 | T^k\delta_0\rangle_{\ell^2}.
\end{equation}
Using (\ref{Eqn:Fisometry}) and (\ref{Eqn:SquareSum}), we have for all $k\in\mathbb{N}_0$,
\begin{equation*}
\begin{split}
\langle \delta_0 | T^k\delta_0\rangle_{\ell^2}
& = \int_{\mathbb{R}} p_0 x^k p_0 \,\mathrm{d}\mu(x)\\
& = \int_{\mathbb{R}} x^k\,\mathrm{d}\mu(x) = M_{0,k},
\end{split}
\end{equation*}
which is the desired conclusion.\end{proof}
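The construction in the proof can be checked numerically for a concrete measure (our own example, not from the text): take $\mu$ to be normalized Lebesgue measure on $[-1,1]$, so $M_{0,0}=1$; the orthonormal polynomials are the normalized Legendre polynomials, whose standard recursion coefficients we supply below.

```python
import numpy as np

# For mu = (1/2) dx on [-1, 1], the orthonormal polynomials are the
# normalized Legendre polynomials, with three-term recursion coefficients
# b_n = 0 (diagonal) and a_n = (n+1)/sqrt(4(n+1)^2 - 1) (off-diagonal);
# these values are the standard Legendre recursion, an assumption we supply.
N = 30
a = np.array([(n + 1) / np.sqrt(4.0 * (n + 1) ** 2 - 1.0) for n in range(N - 1)])
T = np.diag(a, 1) + np.diag(a, -1)   # Jacobi matrix, zero diagonal

delta0 = np.zeros(N); delta0[0] = 1.0
for k in [0, 1, 2, 3, 4]:
    lhs = (np.linalg.matrix_power(T, k) @ delta0)[0]
    moment = (1.0 / (k + 1)) if k % 2 == 0 else 0.0   # (1/2) int_{-1}^1 x^k dx
    assert abs(lhs - moment) < 1e-12                  # M_{0,k} = <delta_0 | T^k delta_0>
```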
\begin{remark} There are many other choices of banded symmetric or hermitian matrices which solve (\ref{Eqn:TClass}). The candidates for $T$ are dictated by applications. \end{remark}
Consider a fixed positive definite Hankel matrix $M$ such that the associated symmetric operator $S$ has deficiency indices $(1,1)$. Then by Theorem \ref{Thm:measures}, there is a one-parameter family of inequivalent measures $\{\mu_z: |z| = 1\}$ such that $M^{(\mu_z)} = M$. Further, for each measure $\mu_z$, we may compute an associated symmetric Jacobi matrix $T_z$, as in Theorem \ref{Thm:MTClass}. The question we ask next is whether the Jacobi matrix $T_z$ depends on the measure $\mu_z$ used to compute the orthogonal polynomials.
Consider the Hilbert space $\mathcal{H}_Q$, in which the shift operator $S$ has dense domain. The Jacobi matrix $T_{\mu}$ we just computed is also a symmetric operator, this time with dense domain in $\ell^2(\mathbb{N}_0)$. Moreover, for each $\mu$ solving the $M$-moment problem, we have an isometry $F = F_{\mu}$ which maps $\mathcal{H}_Q$ into $L^2(\mu)$ and which intertwines $S$ with multiplication by $x$ (Lemma \ref{Lemma:MultiplicationByX}). Recall also that $T_{\mu}$ encodes multiplication by $x$.
By examining the relevant formulas for the orthogonal polynomials $\{p_n\}\subset L^2(\mu_z)$, we see that $T_z$ does not depend on $z$. In particular, the following well-known formulas for the orthogonal polynomials justify this claim \cite{Akh65}.
Define \[D_k = \det\begin{bmatrix}m_0 & \ldots &m_k\\
m_1 &\ldots &m_{k+1}\\
\vdots& &\vdots\\
m_k & \ldots & m_{2k}\end{bmatrix}\]
and let $S_M = \{k\in \mathbb{N}: D_k\neq 0\}$. If $k\in S_M$, then $p_k(x) = (D_{k-1}D_k)^{-1/2}D_k(x)$, where
\[D_k(x) = \det\begin{bmatrix}m_0 & m_1 &\ldots &m_k\\
m_1 & m_2 &\ldots &m_{k+1}\\
\vdots & & &\vdots\\
m_{k-1} & m_k &\ldots & m_{2k-1}\\
1 & x& \ldots & x^k\end{bmatrix}.\]
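These determinant formulas are easy to test numerically (our own sketch): for Lebesgue measure on $[0,1]$, where $m_k = 1/(k+1)$, the formula reproduces the orthonormal shifted Legendre polynomials, whose closed forms we supply as a cross-check.

```python
import numpy as np

# Determinant formulas for the orthonormal polynomials of mu = Lebesgue
# measure on [0, 1], with moments m_k = 1/(k + 1).  The closed forms
# sqrt(3)(2x - 1) and sqrt(5)(6x^2 - 6x + 1) are the standard shifted
# Legendre polynomials, an assumption we supply for comparison.
m = lambda k: 1.0 / (k + 1)

def D(k):
    return np.linalg.det([[m(i + j) for j in range(k + 1)] for i in range(k + 1)])

def Dx(k, x):
    rows = [[m(i + j) for j in range(k + 1)] for i in range(k)]
    rows.append([x ** j for j in range(k + 1)])     # last row: 1, x, ..., x^k
    return np.linalg.det(rows)

def p(k, x):
    return Dx(k, x) / np.sqrt(D(k - 1) * D(k))      # p_k = (D_{k-1} D_k)^{-1/2} D_k(x)

x = 0.7
assert abs(p(1, x) - np.sqrt(3) * (2 * x - 1)) < 1e-9
assert abs(p(2, x) - np.sqrt(5) * (6 * x ** 2 - 6 * x + 1)) < 1e-9
```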
As a result, the isometry $F_{\mu}$ is also independent of $\mu$; the closed subspace spanned by the polynomials in $L^2(\mu_z)$ is independent of $z$, and it is only the relative orthogonal complement in $L^2(\mu_z)$ which depends on $z$. The orthogonal complement can be described by
\[ \Bigl\{\psi\in L^2(\mu) \Bigm| \int \psi(x)x^k \,\mathrm{d}\mu(x) = 0 \textrm{ for all }k\in\mathbb{N}_0\Bigr\}.\] Lemma \ref{Lemma:MultiplicationByX} tells us that $S$ and $T$ are unitarily equivalent. Moreover,
\[ \langle e_0 | S^ke_0\rangle_{\mathcal{H}_Q} = \langle \delta_0 | T^k\delta_0\rangle_{\ell^2},\]
so the spectral measures derived from the self-adjoint extensions $\widetilde{S}$ of $S$ and $\widetilde{T}$ of $T$ produce the same measures $\mu$ which solve the $M$-moment problem. Thus, given the Jacobi matrix $T$, the self-adjoint extensions of $T$ in $\ell^2(\mathbb{N}_0)$ correspond to spectral measures $\mu$ which share the common moment matrix $M$.
We summarize with a theorem:
\begin{theorem}\label{Thm:JIndepOfMu}
Let $M$ be a positive definite Hankel matrix, and let $\mu$ be a measure such that $M^{(\mu)} = M$. The associated Jacobi matrix $T$ is independent of the choice of measure $\mu$ which solves the $M$-moment problem. Conversely, every symmetric Jacobi matrix $T$ gives rise to a moment problem.
The following three conditions are equivalent:
\begin{enumerate}[\rm(a)]
\item The solution $\mu$ to the $M$-moment problem is unique.
\item The deficiency indices for $T$ are $(0,0)$.
\item The polynomials are dense in $L^2(\mu)$.
\end{enumerate}
\end{theorem}
\begin{proof} The forward direction is shown in the discussion preceding the statement of the theorem. Given a symmetric Jacobi matrix $T$, define a Hankel matrix $M$ by taking \[M_{j,k} = \langle \delta_0|T^{j+k}\delta_0\rangle_{\ell^2}.\] Since powers of banded matrices remain banded, the entries of $M$ are all finite. By Lemma \ref{Lem:TClassPD}, $M$ is a positive semidefinite Hankel matrix, hence yielding a moment problem $M = M^{(\mu)}$ which has at least one solution.
The equivalent statements follow from Theorem \ref{Thm:measures}, Corollary \ref{Cor:STdefects}, and the discussion above showing that $\mathcal{L}_{\pm}(T) \leq \mathcal{L}_{\pm}(S)$.
\end{proof}
\section{The triple recursion relation and extensions to higher dimensions}
Let $p_0(x) \equiv 1, p_1(x), p_2(x), \ldots$ be the orthogonal polynomials with respect to $\mu$, where $\textrm{deg}(p_k) = k$. (Here we assume that $\mu$ corresponds to a positive definite, not merely positive semidefinite, linear functional on the polynomials.) We can apply Gram-Schmidt to $\{1, x, x^2, \ldots\}$ in $L^2(\mu)$, so that
\[
\textrm{span}\{1, x, x^2, \ldots\} = \textrm{span}\{p_0(x), p_1(x), p_2(x), \ldots\}
\]
and
\[
\int p_j(x) p_k(x) \,\mathrm{d}\mu(x) = \delta_{j,k} \textrm{ (the Kronecker delta)}.
\]
Then for $n\in\mathbb{N}_0$, with the convention $p_{-1}\equiv 0$, we have
\begin{equation}\label{Eqn:PolyRecur1}
xp_{n-1}(x) = b_{n-1}p_{n-2}(x) + a_{n-1} p_{n-1}(x) + \overline{b_n}p_n(x)
\end{equation}
and
\begin{equation}\label{Eqn:PolyRecur2}
x p_n(x) = b_n p_{n-1}(x) + a_n p_n(x) + \overline{b_{n+1}}p_{n+1}(x).
\end{equation}
Note that the $\overline{b_n}$ appearing in (\ref{Eqn:PolyRecur1}) is the conjugate of the $b_n$ appearing in (\ref{Eqn:PolyRecur2}). To see that this is true, suppose
\begin{align*}
x p_n(x) &= Bp_{n-1}(x) + \textrm{ terms in }p_{n}(x) \textrm{ and }p_{n+1}(x) \\
xp_{n-1}(x) & = Cp_n(x) + \textrm{ terms in }p_{n-2}(x) \textrm{ and }p_{n-1}(x).
\end{align*}
We now take advantage of the orthogonality:
\begin{equation*}
\begin{split}
B
& = \langle p_{n-1}(x) | xp_n(x)\rangle_{L^2(\mu)}
= \langle xp_{n-1}(x) | p_n(x)\rangle_{L^2(\mu)}\\
& = \langle Cp_n(x) | p_n(x)\rangle_{L^2(\mu)}
= \overline{C}\langle p_n(x) | p_n(x)\rangle_{L^2(\mu)} = \overline{C}.
\end{split}
\end{equation*}
The triple recursion relation can be considered in terms of projections onto finite-dimensional subspaces. Set
\[
\mathcal{H}_n:
=\textrm{span}\{1, x, \ldots, x^n\}
= \textrm{span}\{p_0(x), p_1(x), \ldots, p_n(x)\}
\]
and let $Q_n$ be the projection onto $\mathcal{H}_n$ with
\[
Q_n^* = Q_n = Q_n^2
\]
and
\[ Q_n(L^2(\mu)) = \mathcal{H}_n.
\]
(Note: $Q_n$ is a projection, and is \textit{not} related to our earlier quadratic form.) We are interested in the shift operator, which is the same as multiplication by $x$ on $\mathcal{P}\subset L^2(\mu)$. We have
\[
x\mathcal{H}_n \subset \mathcal{H}_{n+1},
\]
so
\[
Q_{n+1} xQ_n = xQ_n.
\]
Set $Q_n^{\perp}:=I - Q_n$---that is, $Q_n^{\perp}$ is the projection onto $\mathcal{H}_n^{\perp}$, where $\mathcal{H}_n^{\perp} = L^2(\mu) \ominus \mathcal{H}_n$.
If $k < n-1$, then $\langle p_k | xp_n\rangle_{L^2(\mu)} = 0$. To see this, write
\[
\langle p_k | xp_n\rangle_{L^2(\mu)} = \langle xp_k | p_n\rangle_{L^2(\mu)},
\]
and $p_n$ is orthogonal to any polynomial of degree less than $n$.
Now,
\begin{equation*}
\begin{split}
xp_n & = Q_n(xp_n) + Q_n^{\perp}(xp_n)\\
& = \underbrace{b_n p_{n-1} + a_n p_n}_{Q_n(xp_n)} + \underbrace{\overline{b_{n+1}}p_{n+1}}_{Q_n^{\perp}(xp_n)}.
\end{split}
\end{equation*}
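This decomposition of $xp_n$ can be verified numerically (our own sketch, again using normalized Legendre polynomials for the symmetric example measure $\tfrac12\,\mathrm{d}x$ on $[-1,1]$, so that the diagonal coefficients $a_n$ vanish):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Expand x p_n against p_0, ..., p_{n+1} for mu = (1/2) dx on [-1, 1].
# p_k = sqrt(2k + 1) P_k is the orthonormal Legendre polynomial (our own
# choice of example); inner products use Gauss-Legendre quadrature, which
# is exact for the polynomial integrands here.
nodes, w = leggauss(30)
w = w / 2.0                                   # d(mu) = dx / 2

def p(k):
    return np.sqrt(2 * k + 1) * Legendre.basis(k)(nodes)

n = 3
xpn = nodes * p(n)
c = np.array([np.sum(w * p(k) * xpn) for k in range(n + 2)])

# Only p_{n-1} and p_{n+1} contribute: b_n = n/sqrt(4n^2 - 1), a_n = 0
# (the diagonal term vanishes because the measure is symmetric).
assert abs(c[n - 1] - n / np.sqrt(4.0 * n ** 2 - 1.0)) < 1e-12
assert abs(c[n + 1] - (n + 1) / np.sqrt(4.0 * (n + 1) ** 2 - 1.0)) < 1e-12
assert np.allclose(c[: n - 1], 0.0, atol=1e-12) and abs(c[n]) < 1e-12
```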
If we restrict $J^{*}$, the adjoint of the Jacobi operator $J$ determined by the recursion above, to the subspace $\mathcal{P}$ of all polynomials, then
\[
J^{*}|_{\mathcal{P}} = J,
\]
since $J\subset J^*$. Moreover, since the projections $Q_n$ are self-adjoint,
\[
Q_{n+1} J Q_n = JQ_n \Rightarrow Q_n J Q_{n+1} = Q_n J.
\]
Finally,
\[
Q_{n-2} J (Q_n - Q_{n-1}) =0,
\]
since
\[
Q_{n-2} (JQ_n) = Q_{n-2} J Q_{n-1} Q_n = Q_{n-2} J Q_{n-1}.
\]
The projection approach to the recursion relation can be extended to $\mathbb{R}^d$ for $d > 1$. For each of the $d$ coordinate directions, there is a Jacobi matrix
\[
J_k =
\begin{bmatrix}
a_0^{(k)} &b_1^{(k)} &0 &0 &0 &\cdots\\
\overline{b_1^{(k)} } &a_1^{(k)} &b_2^{(k)} &0 &0 &\cdots\\
0 &\overline{b_2^{(k)} } &a_2^{(k)} &b_3^{(k)} &0 &\cdots\\
0 &0 & \overline{b_3^{(k)} } &a_3^{(k)} &b_4^{(k)} & \\
0 &0 &0 &\overline{b_4^{(k)} } &a_4^{(k)} & \\
\vdots &\vdots & \vdots & & &\ddots \\
\end{bmatrix}
\]
and a shift operator $S_k$, which is realized by multiplication in the $k^{\textrm{th}}$ coordinate:
\[
S_k p(x_1, \ldots, x_d) = x_k p(x_1, \ldots,x_d).
\]
Recall that the degree of $x^{\alpha}=x_1^{\alpha_1}\cdots x_d^{\alpha_d}$ is $\alpha_1 + \ldots + \alpha_d$. With this notation, the finite-dimensional subspaces are
\[
\mathcal{H}_n = \{ p \in \mathcal{P} | \textrm{deg}(p) \leq n\}.
\]
\section{Concrete Jacobi matrices}
We examine the momentum
and position operators from quantum mechanics for one degree of freedom and then study some
Hamiltonian operators (for example, the polynomials
in the momentum and position operators in Table
\ref{Table:Polynomials}). The operators we consider all have a common
property: in a natural orthonormal basis their representations
take the form of infinite banded matrices; that is, the matrices have
zeros outside a band around the diagonal of finite width.
The banded property of the matrices makes matrix multiplication
easy; under multiplication, the banded matrices form an algebra of
unbounded operators. While such banded matrices follow simple
algebraic rules, their spectral theory can be subtle. For example,
we show that these operators may not have a well-defined spectral
resolution. Using von Neumann's deficiency indices, we showed above the
connection of Jacobi matrices to the theory of extensions of
symmetric operators with dense domain, and thereby to moment problems.
One often encounters problems in physics, such as Heisenberg's
banded matrices $T$, where the nature of the bands is dictated by
the application. A particular infinite matrix $T$ represents an
operator in an $\ell^2$ sequence space. In fact, in a particular
application, $T$ may be realized in a different Hilbert space, for
example an $L^2$ function space, but the function version will be
unitarily equivalent to the matrix model. In particular, we refer to the fact that
Heisenberg's matrix formulation of quantum mechanics is unitarily
equivalent to Schr\"{o}dinger's wave formulation in function
space. For example, the momentum operator $P$ is represented by
the matrix (\ref{Eqn:MomentumP}) in $\ell^2$ and by the operator
$\frac{1}{i}\frac{\mathrm{d}}{\,\mathrm{d}x}$ on a dense subspace of
$L^2(\mathbb{R})$. See Examples \ref{Ex:P} and \ref{Ex:Q} and the
remark following the two examples.
From our banded matrix $T$ we then get a Hankel matrix $M$, and we
apply our theory to $M$. In particular, we find the measures $\mu$
which solve the moment problem for $M$. We use operator theory in
constructing the family of measures $\mu$ which solve the moment
problem at hand.
\begin{table}\caption{Two approaches to moments and banded matrices.}
\begin{tabular}{l}\label{Table:AB}
\boxed{\begin{minipage}{.15\linewidth}Banded $T$\end{minipage}}$\longrightarrow$\boxed{\begin{minipage}{.2\linewidth}Hankel $M_T$\end{minipage}}$\longrightarrow$\boxed{\begin{minipage}{.18\linewidth} {measures $\mu$}\end{minipage}}
\\
\\
\boxed{\begin{minipage}{.15\linewidth}Hankel $M$\end{minipage}}$\longrightarrow$\boxed{\begin{minipage}{.15\linewidth}$M = M^{(\mu)}$\end{minipage}}$\longrightarrow$\boxed{\begin{minipage}{.17\linewidth}Banded $T_M$\end{minipage}}
\end{tabular}
\end{table}
We emphasize further that in applications, one
typically encounters a much richer family of banded symmetric or
hermitian infinite matrices $T$; for example those from
Heisenberg's quantum mechanics. In these matrices, the band-size
will typically be more than three. In fact the band can be any
size, and the deficiency indices can be anything. However this
wider class of banded matrices, including for example anharmonic
oscillators, may be studied with the aid of the associated Jacobi
matrices.
We begin with two Jacobi matrices, the matrix $P$ given in Example \ref{Ex:P} and $Q$ here.
\begin{example}\label{Ex:Q}
The operator $Q$ is represented by multiplication by $x$ on $L^2(\mathbb{R})$. The operator $Q$ can be represented on $\ell^2$ by a matrix defined by
\[(Qv)_n = \frac{1}{2i}\left(\sqrt{n}\,v_{n-1} - \sqrt{n+1}\,v_{n+1}\right),\]
with the convention $v_{-1} = 0$.
\end{example}
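A quick sanity check (our own sketch): a finite section of the matrix for $Q$, with the band pattern $Q_{n,n-1} = \sqrt{n}/(2i)$ and $Q_{n,n+1} = -\sqrt{n+1}/(2i)$ mirroring the momentum matrix $P$, is hermitian.

```python
import numpy as np

# Finite section of the matrix for Q, with the index pattern matching the
# momentum matrix P (an assumption we read off the band structure):
# Q[n, n-1] = sqrt(n)/(2i) and Q[n, n+1] = -sqrt(n+1)/(2i).
N = 8
Q = np.zeros((N, N), dtype=complex)
for n in range(1, N):
    Q[n, n - 1] = np.sqrt(n) / 2j
    Q[n - 1, n] = -np.sqrt(n) / 2j
assert np.allclose(Q, Q.conj().T)    # Q is hermitian
```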
\begin{remark}The two operators $P$ and $Q$ in Examples \ref{Ex:P} and \ref{Ex:Q} have dense domains in $L^2(\mathbb{R})$:
\[ \textrm{dom}(P) = \{f\in L^2(\mathbb{R}) : f'\in L^2(\mathbb{R})\}\]
and
\[ \textrm{dom}(Q) = \{f\in L^2(\mathbb{R}) : xf(x)\in L^2(\mathbb{R})\}.\]
The operators $P$ and $Q$ are both self-adjoint, and they are unitarily equivalent via the Fourier transform in $L^2(\mathbb{R})$.
Setting
\[ A_{\pm}:=\frac{1}{\sqrt{2}}(P\pm i Q),\]
we get $A_{+}^* = A_{-}$, and the commutator
\begin{equation}\label{Eqn:Commutator}
[A_{+}, A_{-}] =-I.
\end{equation} The Hermite function $h_0(x) = c_0 e^{-x^2/2}$ satisfies
\[ A_{-}h_0 = 0.\]
An application of Equation (\ref{Eqn:Commutator}) yields
\[(A_{+}A_{-})A_{+}^n h_0 = n A_{+}^n h_0 \textrm{ for each }n\in\mathbb{N}.\]
The functions $h_n := c_n A_{+}^n h_0$ diagonalize the harmonic oscillator Hamiltonian
\[H:=A_{+}A_{-} = \frac{1}{2}(P^2 + Q^2 - I),\]
and the constants $c_n$ can be chosen so that $\{h_n|n\in\mathbb{N}_0\}$ is an orthonormal basis in $L^2(\mathbb{R})$ consisting of Hermite functions.
Using this ONB, we arrive at the two matrix representations for $P$ and $Q$ in Examples \ref{Ex:P} and \ref{Ex:Q}. Specifically,
\[\langle h_{n-1}|Ph_n\rangle = \frac{1}{2}\sqrt{n},\]
\[\langle h_{n}|Ph_n\rangle = 0,\]
and
\[\langle h_{n+1}|Ph_n\rangle = \frac{1}{2}\sqrt{n+1},\]
which is the Jacobi matrix in (\ref{Eqn:MomentumP}).
\end{remark}
\begin{example}If $T = QPQ$ in $\ell^2(\mathbb{N}_0)$, then $T$ has deficiency indices both equal to $1$.
\begin{proof}This reduces to a classical differential equations problem; we omit the details.
\end{proof}
\end{example}
\begin{table}\caption{Some polynomials in the position and momentum operators.}
\begin{tabular}{c|c|c}\label{Table:Polynomials}
$T$ & index & spectrum\\
&&\\
\hline
$QPQ$ & $(1,1)$ & depends on the choice \\
& & of selfadjoint extension\\
&&\\
$P^2+Q^4$&$(0,0)$ & discrete, anharmonic oscillator\\
&&\\
$P^2-Q^4$ & $(2,2)$& repulsive potential\\ && quantum particle shoots to infinity in finite time \\
&&\\
$P^2+Q^2$&$(0,0)$ & $\{2n+1 \:\:|\:\: n\in\mathbb{N}_0\}$\\
&&\\
$P^2-Q^2$&$ (0,0)$ & continuous $\mathbb{R}$ (easier to see in $L^2(\mathbb{R})$\\
&&than by using matrix calculations)\\
\hline
\end{tabular}
\end{table}
\chapter{The integral operator of a moment matrix}\label{Sec:IntOperators}
In this chapter, we will first show how the Hilbert matrix $M$, which is the moment matrix for Lebesgue measure on $[0,1]$, can be associated to an integral operator. It turns out that both the operator associated to $M$ and the integral operator are bounded. In fact, we use properties of Hardy space functions and the polar decomposition to show $M$ and its integral operator are unitarily equivalent. The bounded operator properties of the Hilbert matrix are well-known. See \cite{Hal67, Wid66} for more details.
We will generalize Widom's technique, using a weighted Hilbert space when necessary, to find an integral operator on $L^2(\mu)$ associated with the moment matrix $M^{(\mu)}$, where $\mu$ is a positive Borel measure supported on $[-1,1]$. In the more general setting, the integral operator may not be bounded. In Chapter \ref{Sec:Spectrum} we will examine conditions under which the integral operator is bounded.
When we refer to the Hilbert matrix, where
$\mu$ is Lebesgue measure restricted to $[0,1)$, we will use the
symbol $M$; for any other measure $\mu$ supported in $[-1, 1]$,
the moment matrix will be denoted $M^{(\mu)}$.
\section{The Hilbert matrix}\label{Sec:Hilbert}
The Hilbert matrix $M$ is the moment matrix for Lebesgue measure
restricted to $[0, 1)$. $M$ has $(i,j)^{\textrm{th}}$ entry given
by
\begin{equation}
M_{i, j} :=\frac{1}{1 + i + j} = \int_0^1 x^{i+j} \,\mathrm{d}x.
\end{equation}
It is well-known \cite{Hal67, Wid66} that the Hilbert matrix defines a bounded operator on $\ell^2$, and the
operator norm of $M$ as an operator from $\ell^2$ to $\ell^2$ is
$\pi$.
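The bound by $\pi$ is easy to probe numerically (our own sketch): the operator norms of finite sections of the Hilbert matrix increase with the truncation size and remain below $\pi$; the convergence to $\pi$ is known to be quite slow.

```python
import numpy as np

# Finite sections of the Hilbert matrix M_{i,j} = 1/(1 + i + j).  Their
# spectral norms increase with the truncation size and stay below pi,
# the norm of the full operator on l^2.
def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (1.0 + i + j)

norms = [np.linalg.norm(hilbert(n), 2) for n in (10, 50, 200)]
assert norms[0] < norms[1] < norms[2] <= np.pi
```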
Using our definition of $\mathcal{D}$ to be the set of
sequences with only finitely many nonzero coordinates, recall the operator
$F:\mathcal{D}\rightarrow\mathcal{P}[0,1]$ given by
\[
Fc(x) = \sum_{n \in \mathbb{N}_0} c_n x^n.
\]
The operator $F$ defines a correspondence between the finite
sequences $\mathcal{D}$ and the polynomials on $[0,1]$,
and this correspondence extends by Stone-Weierstrass and the Riesz-Fischer theorem to an
operator from $\ell^2$ to $L^2[0,1]$. We will often call the
image of a sequence $c$ a \textit{generating function} $Fc = f_c$.
Suppose $f\in L^2(0,1)$ and $c\in \ell^2$. By a Fubini argument and the Cauchy-Schwarz inequality, we can switch the sum and the integral in $\langle f
| Fc \rangle_{L^2(0,1)}$ to define the adjoint operator $F^*$:
\begin{eqnarray*}
\langle f|Fc \rangle_{L^2} &=& \int_0^1 \overline{f(x)}\sum_{n=0}^{\infty} c_nx^n \,\mathrm{d}x \\ &=& \sum_{n=0}^{\infty} c_n \int_0^1 \overline{f(x)}x^n \,\mathrm{d}x \\&=& \langle F^*f | c \rangle_{\ell^2}, \end{eqnarray*} where we have the adjoint now defined by
\[ (F^*f)_n = \int_0^1 f(x) x^n \,\mathrm{d}x.\]
Furthermore, if $f\in L^2(0,1)$, then $\{ (F^*f)_n\}_{n\in\mathbb{N}_0} \in \ell^2$.
\begin{lemma}\label{Lem:FFM} Let $M$ be the operator for the Hilbert matrix, and let $F$ and $F^*$ be the associated operators defined previously. Then $F^*F = M$.
\end{lemma}
\begin{proof}
Given $c \in \ell^2$, the matrix product $Mc$ is well defined
since $M$ is a bounded operator on $\ell^2$. We can therefore
write $(Mc)_i = \sum_{j=0}^{\infty} M(i,j)c_j =
\sum_{j=0}^{\infty} \frac{c_j}{i+j+1}$, where convergence is in
the $\ell^2$ norm. Then
\begin{eqnarray*} (F^*Fc)_i &=& \int_0^1 x^i (Fc)(x) \,\mathrm{d}x\\
&=& \int_0^1x^i\sum_{j=0}^{\infty} c_jx^j \,\mathrm{d}x \\ &=&
\sum_{j=0}^{\infty} c_j \int_0^1 x^{i+j}\,\mathrm{d}x\\&=&
\sum_{j=0}^{\infty} \frac{c_j}{i+j+1}. \end{eqnarray*} The exchange
of the sum and integral above follows from Fubini since $c \in
\ell^2$ and $\left(\frac{1}{i+j+1}\right) \in \ell^2$ (as a sequence in $j$).
\end{proof}
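The identity $F^*F = M$ can be checked numerically on a finite sequence (our own sketch, using Gauss-Legendre quadrature, which is exact for the polynomial integrands involved):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Compare (F*Fc)_i = int_0^1 x^i (Fc)(x) dx, computed by quadrature, with
# the matrix product (Mc)_i = sum_j c_j / (i + j + 1), for a finite c in D.
t, w = leggauss(40)
x = (t + 1.0) / 2.0                  # Gauss-Legendre nodes mapped to [0, 1]
w = w / 2.0

c = np.array([1.0, -2.0, 0.5, 3.0])  # an arbitrary finite sequence
Fc = sum(c[j] * x ** j for j in range(len(c)))   # the generating function Fc

for i in range(5):
    lhs = np.sum(w * x ** i * Fc)                            # (F* F c)_i
    rhs = sum(c[j] / (i + j + 1.0) for j in range(len(c)))   # (M c)_i
    assert abs(lhs - rhs) < 1e-12
```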
We now turn to a relation between the operator $M$ and an integral
operator $K$. This relationship is shown in a more general
setting in \cite{Wid66}, and we will also discuss a generalized version in Section \ref{Subsec:Integral}.
\begin{theorem}[\cite{Wid66}, Lemma 3.1]\label{Thm:HilbertKFFM}
Suppose $M = F^*F$ is the operator representing the Hilbert matrix. The self-adjoint operator $K=FF^*$ on $L^2[0,1]$ is an integral operator with kernel
\[
k(x,y) = \frac{1}{1- xy}.
\]
Moreover, $K$ and $M$ satisfy the relation on
$\ell^2$:
\begin{equation}\label{Eqn:HilbertKFFM}
K F = F M.
\end{equation}
\end{theorem}
\begin{proof}Let $f \in L^2[0,1]$ and fix $x \in (0,1)$. Then
\begin{eqnarray*} (FF^*f)(x) &=& \sum_{j=0}^{\infty}(F^*
f)_jx^j\\ &=& \sum_{j=0}^{\infty} \int_0^1 f(y) y^jx^j
\,\mathrm{d}y \\ &=& \int_0^1 \sum_{j=0}^{\infty} f(y) (xy)^j
\,\mathrm{d}y \\ &=& \int_0^1 \frac{f(y)\,\mathrm{d}y}{1-xy}.
\end{eqnarray*}
We again use Fubini to exchange the sum and integral above, so $FF^*$ is the desired integral operator.
Combining this equation with Lemma \ref{Lem:FFM}, we find
\[KF = FF^*F = FM.\] \end{proof}
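The kernel computation in the proof can be illustrated numerically (our own sketch): for $f \equiv 1$ and $x = 1/2$, both the integral $\int_0^1 f(y)/(1-xy)\,\mathrm{d}y$ and the series $\sum_j (F^*f)_j x^j$ equal $2\log 2$, by the geometric series.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# (FF*f)(x) = sum_j (F*f)_j x^j should reproduce int_0^1 f(y)/(1 - xy) dy.
# We take f = 1 and x = 1/2, where both sides equal 2 log 2.
t, w = leggauss(60)
y = (t + 1.0) / 2.0                   # quadrature nodes on [0, 1]
w = w / 2.0

x = 0.5
direct = np.sum(w / (1.0 - x * y))                              # int_0^1 dy / (1 - xy)
series = sum(np.sum(w * y ** j) * x ** j for j in range(80))    # sum_j (F* 1)_j x^j
assert abs(direct - 2.0 * np.log(2.0)) < 1e-8
assert abs(series - direct) < 1e-8
```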
We next develop tools which we will use in the next section to study the spectrum of the integral operator $K$.
We will demonstrate the relationship between the
integral transform and the Hardy
space $\mathbb{H}_2$ on the open unit disc $\mathbb{D}$ in $\mathbb{C}$.
The Hardy space $\mathbb{H}_2$ consists of the functions analytic on $\mathbb{D}$ whose Taylor coefficients are square-summable. The Hardy space norm of a function $F(z)$ with expansion $F(z) = \sum_{n=0}^{\infty} c_nz^n$ is given by \[ \|F\|^2_{\mathbb{H}_2} = \sum_{n=0}^{\infty}|c_n|^2.\] The inner product on $\mathbb{H}_2$ is exactly the $\ell^2$ inner product of the expansion coefficients of two Hardy functions.
Given $z \in \mathbb{D}$, we let $k_z(x) = (1-zx)^{-1}$. Then $\widetilde{K}$ denotes the integral operator with kernel $k_z$, which is the extension of $K$ defined in Theorem \ref{Thm:HilbertKFFM} to the complex unit disc $\mathbb{D}$. Given $g \in L^2[0,1]$, we define
\[ (\widetilde{K}g)(z) = \int_0^1 \frac{g(x)}{1 - \overline{z}x} \,\mathrm{d}x = \langle k_z | g \rangle_{L^2[0,1]}.\]
Note that for each fixed $x\in [-1, 1]$, the map $z \mapsto k_z(x)$ is analytic on $\mathbb{D}$:
\[
\frac{1}{1-zx} = \sum_{n \in \mathbb{N}_0} (zx)^n.
\]
Next, we prove that the range of $\widetilde{K}$ is contained in $\mathbb{H}_2$. In particular, we show that if $f = \widetilde{K} g$, with $g\in L^2[0,1]$, then $f$ can be extended to an analytic function $F(z)$ on the open unit disc $\mathbb{D}$.
\begin{lemma}Suppose $f\in L^2[0,1]$. Then
\[F(z) = \int_0^1 \frac{f(x)}{1 - zx} \,\mathrm{d}x \in \mathbb{H}_2.\]
\end{lemma}
\begin{proof}
Given $x \in [0,1]$ and $|z|<1$, we have \[F(z) = \int_0^1 f(x) \sum_{n=0}^{\infty}x^nz^n \,\mathrm{d}x.\] By Cauchy-Schwarz, \[\int_0^1|f(x)|x^n \,\mathrm{d}x \leq \|f\|_{L^2[0,1]} \|x^n\|_{L^2[0,1]} = \frac{\|f\|_{L^2[0,1]}}{\sqrt{2n+1}},\] so $\sum_{n} |z|^n \int_0^1 |f(x)|x^n \,\mathrm{d}x < \infty$ for $|z|<1$, and Fubini's theorem allows us to write \[F(z) = \sum_{n=0}^{\infty} \left(\int_0^1f(x) x^n \,\mathrm{d}x \right)z^n.\] The coefficient sequence here is exactly $F^*f$, and \[\sum_{n=0}^{\infty}\Big|\int_0^1 f(x)x^n\,\mathrm{d}x\Big|^2 = \|F^*f\|_{\ell^2}^2 = \langle f | FF^*f\rangle_{L^2[0,1]} \leq \pi \|f\|^2_{L^2[0,1]},\] since $FF^* = K$ is bounded with the same norm $\pi$ as $M = F^*F$. Hence $F \in \mathbb{H}_2$. \end{proof}
The adjoint operator $\widetilde{K}^*$ to our extended integral operator $\widetilde{K}$ is a map from $\mathbb{H}_2$ to $L^2[0,1]$. Let $\phi \in \mathbb{H}_2$ have expansion $\phi(z) = \sum_{n=0}^{\infty} c_nz^n$. Then,
\begin{eqnarray*} \langle \widetilde{K}f|\phi \rangle_{\mathbb{H}_2} &=& \sum_{n=0}^{\infty} \left(\overline{\int_0^1 f(x)x^n\,\mathrm{d}x} \right) c_n \\&=& \int_0^1\overline{f(x)} \sum_{n=0}^{\infty} c_nx^n \,\mathrm{d}x \\ &=& \langle f | \phi \rangle_{L^2[0,1]}. \end{eqnarray*} We use Fubini's Theorem above to switch the sum and the integral, and note that in the last line, $\phi$ is restricted to the real interval $[0,1]$. Thus, we find that $\widetilde{K}^*\phi$ is the restriction of $\phi$ to $[0,1]$.
We now provide an argument using the polar decomposition and Hardy
spaces to show that the partial isometry arising in the polar decomposition of $F$ is in fact a unitary
operator, so that the operators $M$ and $K$ are unitarily equivalent and, in particular, have the same spectrum.
Using the definitions of $F$ and $F^*$, we note that the polynomials are in the domain of $F^*$. This proves that $F^*$ has a dense domain, and thus $F$ is closable. Replacing $F$ by its closure, a theorem of von Neumann shows that both $F^*F$ and $FF^*$ are self-adjoint positive operators with dense domains.
By polar decomposition, given the closable operator $F:\ell^2 \rightarrow L^2[0,1]$, there is a partial isometry $U:\ell^2 \rightarrow L^2[0,1]$ such that
\[F = U (F^*F)^{1/2} = (FF^*)^{1/2}U.\]
Because $U$ is a partial isometry, it is an isometry on the orthogonal complement of the kernel of $F$. Our goal now is to show that $U$ is, in fact, a unitary operator, i.e. the kernels of $F$ and $F^*$ are trivial. This will give us a unitary equivalence between the operators $M$ and $K$.
Given $c \in \ell^2$, let $Fc=0$, that is, $\sum_{n=0}^{\infty} c_nx^n = 0$ for a.e. $x$ in $[0,1]$. But, by the definition of Hardy functions, $\sum_{n=0}^{\infty} c_nx^n$ is the restriction to $[0,1]$ of the $\mathbb{H}_2$ function \[\phi_c(z) = \sum_{n=0}^{\infty}c_nz^n.\] Since $\phi_c$ is analytic on $\mathbb{D}$ and vanishes on a set with an accumulation point in $\mathbb{D}$, we have $\phi_c(z) = 0$ on $\mathbb{D}$. Therefore $c_n = 0$ for all $n$, which gives $\mathrm{ker}(F) = \{0\}$.
For a function $f \in L^2[0,1]$, assume that $F^*f = 0$. Thus, $\int_0^1 f(x) x^n \,\mathrm{d}x = 0$ for all $n \in \mathbb{N}_0$. This means that $f$ is orthogonal to the monomials, and hence to the polynomials. Since the polynomials are dense in $L^2[0,1]$ (by Stone-Weierstrass and the density of the continuous functions), we have $f(x)=0$ for a.e. $x$. Therefore, $\ker(F^*) = \{0\}$, and we can now conclude that the operator $U$ is a unitary operator from $\ell^2$ to $L^2[0,1]$.
Given our definition of $U$ from the polar decomposition, we have $U(F^*F)^{1/2} = (FF^*)^{1/2}U$. This gives \[U(F^*F)^{1/2}U^* = (FF^*)^{1/2}, \] and squaring both sides gives \[ U(F^*F)U^* = UMU^* = FF^* = K.\]
\section{Integral operator for a measure supported on $[-1,1]$}\label{Subsec:Integral}
In this section we study measures $\mu$ with finite moments of all orders on $\mathbb{R}$. We generalize the results found in Section \ref{Sec:Hilbert} for the Hilbert matrix. Given a measure $\mu$, we see that the moment matrix $M^{(\mu)}$ may be realized in two ways:
\begin{enumerate}[(1)]
\item as an operator $H$ having dense domain in $\ell^2$ or a weighted space $\ell^2(w)$ and acting via formal matrix multiplication by $M^{(\mu)}$; and
\item as an integral operator $K$ in the $L^2(\mu)$-space of all square integrable functions.
\end{enumerate}
While the resulting duality is also true in $\mathbb{R}^d$, for any value of $d$, for clarity we will present the details here just for $d = 1$. The reader will be able to generalize to $d > 1$. We will also assume our measures to be supported in $[-1,1]$.
Given the moment matrix $M^{(\mu)}$, define a quadratic form $Q_M$ as in Definition \ref{Def:QM} on the dense space $\mathcal{D}$ consisting of finite sequences. Recall from Section \ref{Subsec:QClosable} that if the monomial functions $\{v_k\}_{k \in \mathbb{N}_0}$ are in $L^1(\mu) \cap L^2(\mu)$, there exist weights $w=\{w_i\}_{i \in\mathbb{N}_0}$ such that $Q_M$ is a closable quadratic form. (Note: in some cases such as the Hilbert matrix, the weights are not required.) Moreover, the operator $F: \ell^2(w) \rightarrow L^2(\mu)$ which takes $c \in \mathcal{D}$ to the generating function (a polynomial) $f_c(x) = \sum_j c_jx^j$ is also made closable by the same weights. Then, as we showed previously,
\[
Q_M(c) = \int |f_c(x)|^2 \, \mathrm{d}\mu(x)
\]
for all $c\in\mathcal{D}$. By Kato's theory, $Q_M$ has a corresponding self-adjoint operator $H$ on $\ell^2(w)$ with dense domain satisfying $Q_M(c)= \|H^{1/2}c\|_{\ell^2(w)}^2$. In particular, the Hilbert space completion $\mathcal{H}_Q$ with respect to the quadratic form $Q_M$ is the same as the completion with respect to the operator $H$. (See Remark \ref{Rem:Distributions} for another viewpoint on this space $\mathcal{H}_Q$.) We also showed in Proposition \ref{Lem:FStarFisH} that the Kato operator $H$ is given by $F_w^*F$, which shows that the domain of $H$ contains $\mathcal{D}$. Thus, for $c \in \mathcal{D}$, we have $Q_M(c) = \langle c|Hc \rangle_{\ell^2(w)}$.
We will show below that the operator $FF^*_w$ on $L^2(\mu)$ is equal on its domain to the integral operator with kernel
\[
k(x,y) = \sum_k \frac{(xy)^k}{w_k}.
\]
Note that if weights are not required, the kernel above reduces to the same kernel which represented the Hilbert matrix in the previous section. The integral operators corresponding to moment matrices $M^{(\mu)}$ for different measures $\mu$ might have the same kernel, but they will act on different Hilbert spaces $L^2(\mu)$.
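For instance, with trivial weights $w_k \equiv 1$ the kernel is just a geometric series; a short numerical check (illustrative only, with arbitrarily chosen points) confirms the closed form $1/(1-xy)$ for $|xy|<1$:

```python
# With trivial weights w_k ≡ 1, the kernel Σ_k (xy)^k / w_k is a geometric
# series; for |xy| < 1 it sums to the Hilbert-matrix kernel 1/(1 - xy).
x, y = 0.7, -0.6                      # illustrative points in [-1, 1]
partial = sum((x * y) ** k for k in range(200))
assert abs(partial - 1.0 / (1.0 - x * y)) < 1e-12
```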
We can now state the association between the Kato operator $F^*_wF$ for a moment matrix $M^{(\mu)}$, the operator $K=FF^*_w$, and an integral operator on $L^2(\mu)$. As usual, we use the notation $v_j(x) = x^j$ for the monomials in $L^2(\mu)$.
\begin{proposition}\label{Lemma:FStar4}
Let $\mu$ be a Borel measure on $\mathbb{R}$ with support contained in $[-1,1]$ and $v_j \in L^1(\mu)$ for all $j \in \mathbb{N}_0$. Let $F$ be the closure of the operator in (\ref{Eqn:DefnF}) which sends a sequence $c\in\mathcal{D}$ to the polynomial $f_c\in L^2(\mu)$. Let $w=\{w_i\}_{i \in\mathbb{N}_0}$ be weights such that $F_w^*$ has dense domain as an operator into $\ell^2(w)$, i.e. via Equation (\ref{Eqn:FCriterion}) we assume that
\begin{equation}\label{Eqn:SumM}
\sum_{j\in\mathbb{N}_0} \frac{1}{w_j}|M_{j+k}|^2 < \infty \textrm{ for all } k\in\mathbb{N}_0.
\end{equation}
Then the operator $K=FF^*_w$ is a self-adjoint operator with dense domain. On its domain, $K$ is an integral operator with kernel
\[ k(x,y) = \sum_j \frac{(xy)^j}{w_j}.\]
\end{proposition}
\begin{proof}
For the weights $w$, $F$ is closable and the domain of $F_w^*$ is given by
\begin{equation}
\Biggl\{\phi \in L^2(\mu) \Big| \sum_{j\in\mathbb{N}_0}w_j |(F_w^*\phi)_j|^2 < \infty\Biggr\}.
\end{equation}
We note in particular that the monomials $\{v_j\}_{j \in \mathbb{N}_0}$ are contained in the domain of $F_w^*$. The domain of $FF^*_w$ is the set of all functions $\phi$ in the domain of $F^*_w$ such that $F^*_w\phi$ is in the domain of $F$. By von Neumann's theorem on $FF^*$ for closed operators, this set is dense, but it may not contain all of the polynomials.
Let $\phi\in \textrm{dom}(FF^*_w)$. Then a Lebesgue dominated convergence argument gives
\begin{equation}
\begin{split}
({F}F_w^*)\phi(x)
& =\sum_{k\in\mathbb{N}_0} (F_w^*\phi)_k v_k(x)\\
& \underset{(\ref{Eqn:DefnFStar})}{=} \sum_{k\in\mathbb{N}_0}\frac{1}{w_k} \int_{\mathrm{supp}(\mu)} \phi(y) y^k \,\mathrm{d}\mu(y) x^k\\
& \underset{(\ref{Eqn:SumM})}{=} \int_{\textrm{supp}(\mu)} \phi(y) \sum_{k\in\mathbb{N}_0}\frac{(xy)^k}{w_k} \,\mathrm{d}\mu(y). \end{split}
\end{equation}
The requirement that $\mu$ is supported in $[-1,1]$ ensures that the sum above converges for all $x,y$ with $|xy|<1$.
\end{proof}
The polar decomposition of $F$ is
\begin{equation}\label{Eqn:TildeFUFStarTildeF}
F = U(F_w^*F)^{1/2},
\end{equation}
where $U$ is a partial isometry. We therefore also have
\begin{equation}\label{Eqn:TildeFTildeFFStarU}
F = (FF_w^*)^{1/2}U.
\end{equation}
The partial isometry $U:\ell^2 \rightarrow L^2(\mu)$ is the same in both (\ref{Eqn:TildeFUFStarTildeF}) and (\ref{Eqn:TildeFTildeFFStarU}).
\begin{corollary}
The two operators $H = F^*_wF$ and $K=FF^*_w$ have the same spectrum, apart from the point $0$.
\end{corollary}
\begin{proof}Combining (\ref{Eqn:TildeFUFStarTildeF}) and (\ref{Eqn:TildeFTildeFFStarU}) gives $U(F_w^*F)^{1/2} = (FF_w^*)^{1/2}U$, and hence
\begin{equation}\label{Eqn:UHUKStar}
UHU^* = K \quad \textrm{on } \overline{\mathrm{ran}(F)}.
\end{equation}
Since $U$ is an isometry from $\ker(F)^{\perp}$ onto $\overline{\mathrm{ran}(F)}$, the nonzero spectra of $H$ and $K$ coincide. \end{proof}
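The finite-dimensional analogue of this corollary is the familiar fact that $F^*F$ and $FF^*$ share their nonzero eigenvalues. The following sketch (a linear-algebra illustration with a randomly chosen matrix, not the unbounded-operator setting of the text) makes this concrete:

```python
import numpy as np

# Finite-dimensional analogue: for any matrix F, the nonzero spectra of
# H = F*F and K = FF* coincide; K simply picks up extra zero eigenvalues
# when F is "tall".
rng = np.random.default_rng(0)
F = rng.standard_normal((5, 3))

H = F.T @ F                       # 3×3 analogue of F_w* F
K = F @ F.T                       # 5×5 analogue of F F_w*

eig_H = np.linalg.eigvalsh(H)     # eigenvalues in ascending order
eig_K = np.linalg.eigvalsh(K)

assert np.allclose(eig_K[:2], 0.0, atol=1e-10)    # two extra zeros
assert np.allclose(eig_K[2:], eig_H)              # nonzero spectra agree
```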
\begin{example}\label{Ex:ConvexDirac} Convex combinations of Dirac masses. \end{example}
Let $\mu = \sum_{i=1}^n \alpha_i \delta_{x_i}$, where $\alpha_i > 0$, $\sum_{i=1}^n \alpha_i = 1$, and $x_1, x_2, \ldots, x_n$ are distinct real numbers. Recall from Example \ref{Ex:FStarDelta1} that weights are required for any Dirac measure in order to make $F$ closable. The same holds for convex combinations of these measures. It turns out, however, that the integral operator is not so difficult to describe. We show here that $L^2(\mu)$ has dimension $n$, so the corresponding integral operator must have a representation as a finite matrix.
Suppose the polynomial $f_c(x) = \sum_{k=0}^m c_kx^k$ is equal to the zero vector in $L^2(\mu)$. In other words, \[ \|f_c\|_{L^2(\mu)}^2 = \int_{\mathbb{R}} \Bigl| \sum_{k=0}^m c_kx^k \Bigr|^2 \mathrm{d}\mu(x) = 0.\] Using our usual notation, $f_c$ is the image of a sequence $c \in \mathcal{D}$ under the map $F_Q$ from the Hilbert space $\mathcal{H}_Q$. We recall from Lemma \ref{Lem:KatoIsometry} that $F_Q$ is an isometry, hence \[ \|f_c\|_{L^2(\mu)}^2 = Q_M(c) = \|c\|_{\mathcal{H}_Q}^2, \] where $M=M^{(\mu)}$ is the moment matrix for $\mu$. The matrix $M$ is of the form \[ M_{j,k} = \sum_{i=1}^n \alpha_i x_i^{j+k}.\]
Therefore, if $f_c = 0$ in $L^2(\mu)$, we have
\begin{eqnarray*} \int_{\mathbb{R}} \Bigl| \sum_{k=0}^m c_k x^k \Bigr|^2\mathrm{d}\mu(x) &=& \sum_{k,j=0}^m \overline{c_j}c_k M_{j,k} \\ &=& \sum_{j,k=0}^m \overline{c_j}c_k \sum_{i=1}^n \alpha_i x_i^{j+k} \\ &=& \sum_{i=1}^n \alpha_i \Bigl| \sum_{k=0}^m c_kx_i^k \Bigr|^2 \\&=&0. \end{eqnarray*}
Since each $\alpha_i \geq 0$ and $\sum_{i=1}^n \alpha_i = 1$, the above sum is zero if and only if \[ \sum_{k=0}^m c_kx_i^k = 0\] for all $i=1, 2, \ldots, n$. Therefore, the distinct real numbers $x_1, \ldots, x_n$ are roots of any polynomial which is equal to zero in $L^2(\mu)$.
Let $p(x)$ be the degree-$n$ polynomial \[ p(x) = \prod_{i=1}^n (x-x_i).\] By the division algorithm, any polynomial $q$ can be written in the form $q = ap+r$, where $a,r$ are polynomials and $r$ has degree less than $n$. Since $p$ vanishes at every $x_i$, we have $p = 0$ in $L^2(\mu)$, and hence $q = r$ as equivalence classes in $L^2(\mu)$. Conversely, a polynomial of degree less than $n$ which vanishes at all $n$ distinct points $x_i$ must be the zero polynomial, so $1, x, \ldots, x^{n-1}$ are linearly independent in $L^2(\mu)$. Hence, the dimension of $L^2(\mu)$ is exactly $n$.
\hfill$\Diamond$
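The conclusion can be checked numerically. In the sketch below (the points $x_i$ and weights $\alpha_i$ are arbitrary illustrative choices), the truncated moment matrix $M_{j,k} = \sum_i \alpha_i x_i^{j+k}$ has rank exactly $n$, matching $\dim L^2(\mu) = n$:

```python
import numpy as np

# Illustrative sketch: the points x_i and weights α_i below are arbitrary
# choices; the truncated moment matrix of μ = Σ α_i δ_{x_i} has rank n.
xs = np.array([-0.5, 0.2, 0.8])       # n = 3 distinct points in [-1, 1]
alphas = np.array([0.2, 0.3, 0.5])    # strictly positive, summing to 1

N = 10                                # truncation size (any N ≥ n behaves the same)
j = np.arange(N)
# M_{j,k} = Σ_i α_i x_i^{j+k}, a positive semidefinite Hankel matrix of rank n.
M = sum(a * np.outer(x ** j, x ** j) for a, x in zip(alphas, xs))

assert np.linalg.matrix_rank(M) == len(xs)    # rank 3, independent of N ≥ 3
```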
\begin{remark}[\cite{JoOl00}]\label{Rem:Distributions} We can describe another way to generate the Hilbert space
completion $\mathcal{H}_Q$ of a quadratic form $Q$. So far we have looked at two ways to generate isomorphic Hilbert spaces
$\mathcal{H}_Q$. The first is to complete the finite sequences
$\mathcal{D}$ with respect to the quadratic form $Q_M$. The second is the
closure of the polynomials in $L^2(\mu)$, where $\mu$ is the measure whose
moments are the entries of $M$. Here, we briefly outline a third method
to generate the same Hilbert space which consists of distributions. For
now, we will restrict ourselves to the concrete case $M=$ the Hilbert
matrix.
As in our first method, we start with the positive semidefinite quadratic
form generated by the moment matrix $M$:
\begin{equation*}
Q_M: [0,1)\times [0,1) \rightarrow \mathbb{R}.
\end{equation*}
We use $M$ to define linear functionals. For each $x$ in $[0, 1)$, there
is a linear functional $v_x$ defined by
\begin{equation}\label{Defn:vxLinFun}
v_x(\cdot) = Q_M(\cdot, x).
\end{equation}
(The analog to $v_x$ is a sequence in $\mathcal{D}$ with zeros in every
position except one, where there is a $1$.)
We use the $v_x$s to build more linear functionals with finite sums (here,
the analog is $\mathcal{D}$):
\begin{equation}\label{Defn:uLinFun}
u(\cdot) = \sum_{x\textrm{ finite}}c_x v_x(\cdot) = \sum_{x\textrm{ finite}} c_x\,Q_M(\cdot, x).
\end{equation}
Finally, there is an inner product in the space of linear functionals of
the form (\ref{Defn:uLinFun}):
\begin{equation}\label{Defn:HtildeInnerProduct}
\Bigl\langle \sum_{x\textrm{ finite}}c_x v_x, \sum_{y\textrm{ finite}}c_y v_y\Bigr\rangle = \sum_x \sum_y \overline{c_x} c_y Q_M(x,y).
\end{equation}
Now, we complete the inner product space to the Hilbert space
$\widetilde{\mathcal{H}}$. The Hilbert space $\widetilde{\mathcal{H}}$ is
a reproducing kernel Hilbert space. It is fairly easy to check that for
$v_x$ as in (\ref{Defn:vxLinFun}) and $u$ as in (\ref{Defn:uLinFun}),
$\langle v_x, u\rangle_{\widetilde{\mathcal{H}}} = u(x)$.
Recall that the Hilbert matrix is related to the integral kernel
$(1-xy)^{-1}$. We will next explain how this kernel appears in the
Hilbert space of distributions $\widetilde{\mathcal{H}}$.
The tensor product of two distributions $\overline{u_1}$ and $u_2$ is
straightforward; we apply the linear functional $\overline{u_1}\otimes
u_2$ to the pair of functions $(f,g)$ by
\begin{equation}
(\overline{u_1}\otimes u_2)(f(x),g(y)) = \overline{u_1}(f)u_2(g).
\end{equation}
Then, we can extend this definition to functions $h(x,y)$ because the span
of functions of the form $f(x)g(y)$ is dense in $L^2([0,1]\times[0,1])$.
One important example of a tensor product of distributions which is
related to Widom's theorem is the following. Suppose $\phi_1$ and
$\phi_2$ are locally integrable functions on $[0,1)$, and $u_1 = \phi_1
\mathrm{d}x$ and $u_2 = \phi_2 \mathrm{d}x$. Then
\begin{equation}
(\overline{u_1}\otimes u_2)\Biggl(\frac{1}{1-xy}\Biggr) = \int_0^1
\int_0^1 \frac{\overline{\phi_1}(x)\phi_2(y)}{1-xy} \,\mathrm{d}x \mathrm{d}y.
\end{equation}
The Hilbert space $\widetilde{\mathcal{H}}$ is spanned by distributions
$u$ on $[0,1)$ such that
\[
(\overline{u}\otimes u) \Biggl(\frac{1}{1-xy}\Biggr) < \infty.
\]
It can be shown that the Dirac mass at $0$ and all its derivatives belong
to $\widetilde{\mathcal{H}}$; moreover, derivatives of the Dirac mass at
$0$ (when scaled appropriately) form an orthonormal basis for
$\widetilde{\mathcal{H}}$ with respect to the inner product
(\ref{Defn:HtildeInnerProduct}). The specific ONB is
\[
u_n = \frac{(-1)^n}{n!} \delta_0^{(n)}, n = 0, 1, 2, \ldots.
\]
\end{remark}
\section*{Preface}
Moments of Borel measures $\mu$ on $\mathbb{R}^d$ have numerous
uses both in analysis and in applications. For example, moments are used
in the computation of orthogonal polynomials, in inverse spectral problems, in wavelets, in the analysis of fractals, in physics, and in probability
theory. In this paper, we study some well-known and perhaps
not-so-well-known aspects of moment theory which have been motivated by the study of both the classical literature on moments and the newer literature on iterated function systems (IFSs).
Over the last hundred years, since the time of Lebesgue, mathematicians have adopted two approaches to measures on a topological (Hausdorff) space $X$. In the first, measures are treated as functions on some sigma-algebra of ``measurable'' subsets of $X$. In the second, measures are realized as positive linear functionals on a suitable linear space $C$ of functions on $X$. If we take $C$ to be the compactly supported continuous functions on $X$, then Riesz's theorem states that the two versions are equivalent. Starting with a measure $\mu$ on the Borel sigma algebra, integration with respect to $\mu$ yields a functional $L$ defined on $C$; conversely (by Riesz), every positive linear functional $L$ on $C$ takes the form of integration against some Borel measure $\mu$. It is the existence of $\mu$ that forms the conclusion of Riesz's theorem.
The moment problem is an analog, taking $X$ to be $\mathbb{R}^d$ and replacing $C$ with the linear space of polynomials in $d$ variables. The issue, as before, is to construct a measure from a functional. Since the polynomials are spanned by the monomials, a functional $L$ is then prescribed by a sequence of moments. Hence, we have the moment problem: determine a measure $\mu$ from the sequence of moments. Given the moments, one can ask about the existence of a measure having those moments, the uniqueness of the measure if one exists, and the process for constructing such a measure and determining some of its properties. We will touch on each of these topics in this Memoir.
In Chapter \ref{Sec:Notation} we introduce our notation, definitions, and conventions. We have collected here for the reader's convenience some of the tools from operator theory and harmonic analysis which we will use throughout. In Chapter \ref{Sec:MomentTheory}, we review the classical moment existence problem, which asks the following question: Given a list of moments $\{m_k\}_{k \geq 0}$, does there exist a Borel measure $\mu$ such that the $k^{\textrm{th}}$ moment of $\mu$ is $m_k$? Alternately, we can arrange the moment sequence in a Hankel matrix in order to apply operator-theoretic techniques to this problem---a theme we carry throughout the Memoir. Using tools of Kolmogorov, Parthasarathy, and Schmidt, we provide a new approach to this old problem, which was first settled by Riesz (see \cite{Rie23}) nearly $100$ years ago. Our main result here is a new proof for the moment existence problem in both $\mathbb{R}$ and $\mathbb{C}$. We return to this same theme---the classical moment uniqueness problem---in Chapter \ref{Ch:Extensions}. There, we describe in detail the theory of self-adjoint extensions of symmetric operators, which helps us determine precisely when a list of moments has more than one associated measure.
In Chapters \ref{Sec:Exist} and \ref{Sec:ComputeA}, we explore the difficult problem of computing moments directly for an equilibrium measure arising from an iterated function system (IFS). An IFS is a finite set of contractive transformations in a metric space. A theorem of Hutchinson \cite{Hut81} tells us that for each IFS, there exists a unique normalized equilibrium measure $\mu$ which is the solution to a fixed point problem for measures. We show that every IFS corresponds to a non-abelian system of operators and a fixed point problem for infinite matrices. We then prove that the moment matrix $M=M^{(\mu)}$ is a solution to this matrix fixed point problem, and in turn we exploit this fixed point property to compute or approximate the moments for $\mu$ in a more general setting than the affine cases studied in \cite{EST06}.
As shown in \cite{EST06}, it is not straightforward to compute the moments for even the simplest Cantor equilibrium measures, but we \textit{can} compute the moments of an equilibrium measure $\mu$ directly in the affine IFS case. However, we generally have no choice but to approximate the moments in non-affine examples. In particular, our results can be applied to real and complex Julia sets, in which there is much current interest. The non-affine moment approximation problem is surprisingly subtle, and as a result, we turn to operator-theoretic methods. Affine IFSs are considered in Chapter \ref{Sec:Exist}, while non-affine IFSs and operator theory are considered in Chapter \ref{Sec:ComputeA}. In addition, there are associated results about spectral properties of moment matrices for equilibrium measures in Chapter \ref{Sec:Spectrum}.
Infinite Hankel matrices cannot always be realized directly by operators in the $\ell^2$ sequence space. We have been able to apply operator theoretic results more widely by allowing matrices to be realized as operators on a renormalized Hilbert space when necessary. We turn to this problem in Chapter \ref{Ch:Kato}, where we introduce the operator theoretic extensions of quadratic forms by Kato and Friedrichs \cite{Kat80}. The quadratic form we use in this context is induced by the moment matrix $M^{(\mu)}$. Using the Kato-Friedrichs theorem, we obtain a self-adjoint operator with dense domain. This generally unbounded Kato-Friedrichs operator can be used to obtain a spectral decomposition which helps us understand the properties of the quadratic form which gave rise to the operator. Often, renormalized or weighted spaces allow us to use the Kato-Friedrichs theorem in greater generality.
We continue to use Kato-Friedrichs theory in the remaining chapters to understand the spectral properties of the moment matrix operator. In Chapter \ref{Sec:IntOperators}, we use the classical example of the Hilbert matrix and its generalizations from Widom's work \cite{Wid66} in order to explore spectral properties of the moment matrix $M^{(\mu)}$ for general measures. In particular, we show that the moment matrix is unitarily equivalent to a certain integral operator. In Chapter \ref{Sec:Spectrum}, we further explore the spectral properties of the moment matrix and present some detailed examples. Finally, Chapter \ref{Ch:Extensions} uses spectral theory as a tool to reexamine the classical moment problem, this time considering not only the existence of a measure having prescribed moments, but also the uniqueness.
Readers not already familiar with the theory of moments may find some of the classical references useful. They treat
both the theory and the applications of moments of measures, and they
include such classics as \cite{ShTa43}, \cite{Sho47}, and \cite{Akh65}. The early uses of moments in mathematics were motivated to a large degree by
applications to orthogonal polynomials \cite{Sze75}. More recent applications of orthogonal polynomials include numerical analysis, random matrix theory, random products, and dynamics. These applications are covered in \cite{Lan87a}, \cite{Lan87b}, and \cite{Dei99}, while the delightful edited volume \cite{Lan87b} includes additional applications to geometry, signal processing, probability, and statistics.
\chapter{The Kato-Friedrichs operator}\label{Ch:Kato}
The moment matrix $M^{(\mu)}$ is an infinite matrix, and while $M^{(\mu)}$ may not be a well defined operator on $\ell^2$, we may be able to view $M^{(\mu)}$ as an operator in some other sequence space. We use the techniques of Kato and Friedrichs to turn $M^{(\mu)}$ into a self-adjoint densely defined operator on a weighted $\ell^2$ space. Our main tool will be a quadratic form $Q_M$ which is defined from the moments of $\mu$. We obtain the weighted $\ell^2$ space by finding a space in which the quadratic form $Q_M$ is closable. In the course of showing that $Q_M$ is closable, we introduce two key operators: $F$ and its adjoint $F^*$, which we will continue to study in Chapters \ref{Sec:IntOperators} and \ref{Sec:Spectrum}.
\section{The quadratic form $Q_M$}\label{Subsec:QClosable}
Let $\mathcal{D}$ be the set of all finitely supported sequences indexed by $\mathbb{N}_0$.
The familiar expression in Equation (\ref{Eqn:MatrixMult}) for the infinite matrix-vector product $Mc$ makes
sense for every $c \in \mathcal{D}$, but $Mc$ may not be well defined for
every sequence $c$. Even when it is well defined, $Mc$ may not
be in the same Hilbert space as $c$. We will address this technicality by changing the domain of $M$.
Let $M$ be the moment matrix of a positive Borel measure with finite moments, where $M$ satisfies the
positive semidefinite condition in Definition \ref{Defn:PD} (real
case) or the PD$\mathbb{C}$ condition in Definition \ref{Defn:PDC}
(complex case). The quadratic form $Q_M$ is defined on infinite sequences with only finitely many nonzero
components. Specifically, given $c\in\mathcal{D}$ we have
\begin{equation}\label{Def:QM}
Q_M(c) = \sum_i \sum_j \overline{c_i} M_{i,j} c_j.
\end{equation}
Every operator on a Hilbert space determines a quadratic form, but the converse is not necessarily true. Here, we look for conditions on the quadratic form $Q_M$ which guarantee that it arises from an operator on a Hilbert space $\mathcal{H}$. The work of Friedrichs and Kato provides the correct conditions. Friedrichs (see \cite[Section 6.2.3] {Kat80}) proved that semibounded symmetric operators in a Hilbert space
$\mathcal{H}$ have self-adjoint extensions in $\mathcal{H}$. Later,
Kato \cite[Section 6.2.1-Theorem 2.1; Section 6.2.6-Theorem 2.23]{Kat80} extended
Friedrichs's theorem to quadratic forms.
\begin{theorem}[Kato]\label{Thm:Kato} Let $Q$ be a densely defined, closed, positive quadratic form. Then there exists a unique self-adjoint operator $H$ on a Hilbert space $\mathcal{H}$ such that the Hilbert space completion $\mathcal{H}_Q$ of $Q$ is equal to the completion of $H$, and for all $c$ in the domain of $Q$, \[ Q(c) = \|H^{1/2}c\|^2_{\mathcal{H}} = \|c\|^2_{\mathcal{H}_Q}. \] In particular, the domain of $Q$ is equal to the domain of $H^{1/2}$.
\end{theorem}
Our quadratic form $Q_M$ arising from a moment matrix may not be closed, but we can show that $Q_M$ is
\textit{closable} and then apply Kato's theorem to the closure. We change our notation briefly here in order to work with sequences of elements in $\mathcal{D}$. A sequence in $\mathcal{D}$ will be denoted
$\{c_n\}_{n\in\mathbb{N}_0}$, and the $i^{\textrm{th}}$ component of $\{c_n\}$ is $c_n(i)$. (We retain the
notation $c = \{c(i)\}_{i\in\mathbb{N}_0}$ for elements of
$\mathcal{D}$ and its subsequent completion throughout the next two
sections.)
\begin{definition}[\cite{Kat80}]\label{Defn:QClosable}
Let $\mathcal{H}$ be a Hilbert space in which $\mathcal{D}$ is dense. A quadratic form $Q$ defined on $\mathcal{D}$ is \textit{closable} if and only if
\[ c_n \rightarrow 0 \text{ in } \mathcal{H}\]
and
\[ Q(c_n - c_m) \rightarrow 0 \text{ as } m,n\rightarrow \infty\]
imply
\[ Q(c_n)\rightarrow 0 \text{ as }n\rightarrow \infty.\]
\end{definition}
We next define the operator $F$ mapping finite sequences $c\in \mathcal{D}$ to polynomials (also known as generating
functions) in $L^2(\mu)$. Given $c\in\mathcal{D}$, we define
\begin{equation}\label{Eqn:DefnF}
Fc(x) = f_c(x) = \sum_{i\in \mathbb{N}_0} c_i x^i .
\end{equation}
\begin{example}A quadratic form which is not closable in $\ell^2$.\end{example}
In order to construct a quadratic form which is not closable in $\ell^2$, we need a sequence $\{c_n\}\subset\ell^2$ and a measure $\mu$ such that
\begin{enumerate}[(1)]
\item $\sum_{i\in\mathbb{N}_0} |c_n(i)|^2 \rightarrow 0$ as $n\rightarrow\infty$
\item $\int_{\mathbb{R}}|f_{c_m}(x) - f_{c_n}(x)|^2 \,\mathrm{d}\mu(x) \rightarrow 0$ as $m,n\rightarrow \infty$
\item $\int_{\mathbb{R}}|f_{c_n}(x)|^2 \,\mathrm{d}\mu(x) \not\rightarrow 0$ as $n\rightarrow\infty.$
\end{enumerate}
Let $\mu = \delta_b$ for $b > 1$, and let $c_n(i) =
\delta(n,i)b^{-i}$ (the Kronecker delta). Since $b > 1$,
$\|c_n\|^2_{\ell^2} = b^{-2n} \rightarrow 0$ as
$n\rightarrow\infty$, so (1) is satisfied. Since
\[
f_{c_n}(x) = b^{-n} x^n \textrm{ for each }n,
\]
and the integral in (2) simply evaluates $f_{c_n}$ and $f_{c_m}$ at $x = b$, we have
\[
\int_{\mathbb{R}}|f_{c_m}(x) - f_{c_n}(x)|^2 \,\mathrm{d}\mu(x) = |1 - 1|^2 = 0 \textrm{ for all }m, n.
\]
However, the integral in (3) is $1$ for all $n$. Therefore the quadratic form associated with $\delta_b$, $b > 1$, is not closable in $\ell^2$.
\hfill$\Diamond$
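The three conditions above are simple enough to verify numerically; here is a short sketch with the illustrative choice $b = 2$:

```python
import numpy as np

# Numerical walk-through of the example, with the illustrative choice b = 2:
# c_n = b^{-n} e_n, so f_{c_n}(x) = b^{-n} x^n, and μ = δ_b.
b = 2.0
ns = np.arange(20)

ell2_norms_sq = b ** (-2 * ns)        # ‖c_n‖²_{ℓ²} = b^{-2n} → 0   (condition 1)
f_at_b = b ** (-ns) * b ** ns         # f_{c_n}(b) = b^{-n} · b^n = 1

assert ell2_norms_sq[-1] < 1e-10      # c_n → 0 in ℓ²
assert np.allclose(f_at_b, 1.0)       # so Q(c_n - c_m) = 0, yet Q(c_n) = 1
```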
\section{The closability of $Q_M$}
In order to show that the quadratic form $Q_M$ is closable for a particular infinite Hankel matrix $M$,
we show that the operator $F$ defined in Equation (\ref{Eqn:DefnF}) is closable with respect
to a weighted space $\ell^2(w)$. Recall
Definition \ref{Def:FClosed} regarding closed and closable
operators. The \textit{weighted} $\ell^2$ spaces are
defined by a sequence of weights $w= \{w_i\}_{i\in\mathbb{N}_0}$:
\[
\ell^2(w):=\Bigl\{ \{c(i)\} \Big| \sum_{i}w_i |c(i)|^2 <
\infty\Bigr\}.
\]
The norm on $\ell^2(w)$ is
\[
\|c\|^2_{\ell^2(w)} = \sum_{i\in\mathbb{N}_0} w_i |c(i)|^2,
\]
and the inner product is given by
\[ \langle c|d \rangle_{\ell^2(w)} = \sum_{i \in \mathbb{N}_0} w_i \overline{c(i)}d(i). \]
We will generally take the weights to all be strictly greater than zero.
In the next lemma, we use an argument mirroring the proof of Lemma \ref{Lemma:RClosable} to produce a set
of weights $w$ which guarantee that $F$ will be closable in the weighted space $\ell^2(w)$, and then
in Theorem \ref{Thm:QClosable} we show that the quadratic form
$Q_M$ is closable in $\ell^2(w)$.
Consider $F$ as a map with domain in $\ell^2(w)$ for some weights $w=\{w_i\}_{i \in \mathbb{N}_0}$. We then see that the adjoint $F^*$ of $F$ is given by
\begin{equation}\label{Eqn:DefnFStar}
(F_w^*f)_k = \frac{1}{w_k}\int_{\mathbb{R}} x^k f(x) \,\mathrm{d}\mu(x).
\end{equation}
To verify this, we must show that $F_w^*$ satisfies
\begin{equation}
\langle Fc|f\rangle_{L^2(\mu)} = \langle
c|F_w^*f\rangle_{\ell^2(w)}
\end{equation} for every $f$ such that $F_w^*f \in \ell^2(w)$.
For each $c\in\mathcal{D}$, we have
\begin{equation}
\begin{split}
\langle Fc|f\rangle_{L^2(\mu)}
& = \int_{\mathbb{R}} \overline{\sum_{k \in \mathbb{N}_0} c(k)x^k} f(x) \,\mathrm{d}\mu(x)\\
& = \sum_{k \in \mathbb{N}_0} \overline{c(k)} \int_{\mathbb{R}} x^k f(x) \,\mathrm{d}\mu(x).\\
\end{split}
\end{equation}
We now multiply and divide by $w_k$:
\begin{equation}
\begin{split}
\langle Fc|f\rangle_{L^2(\mu)}
& = \sum_{k\in \mathbb{N}_0} w_k \overline{c(k)} \underbrace{\frac{1}{w_k} \int_{\mathbb{R}} x^k f(x) \,\mathrm{d}\mu(x)}_{(\ref{Eqn:DefnFStar})}\\
& = \langle c | F_w^*f\rangle_{\ell^2(w)},
\end{split}
\end{equation}
where we have shown that the right-hand side of Equation (\ref{Eqn:DefnFStar}) satisfies the definition of the adjoint $F^*_w$.
\begin{lemma}\label{Lemma:FClosable}
Suppose $\int_{\mathbb{R}} |x|^k \,\mathrm{d}\mu(x) < \infty$ for each $k\in\mathbb{N}_0$.
There exist weights $\{w_i\}_{i \in \mathbb{N}_0}$ such that $F:\ell^2(w)\rightarrow
L^2(\mu)$ (\ref{Eqn:DefnF}) is closable.
\end{lemma}
\begin{proof} The operator $F$ is closable if and only if
$\textrm{dom}(F^*)$ is dense in $L^2(\mu)$. (See, for example, \cite[Proposition 1.6, Chapter 10]{Con90}.)
Now, $F_w^*f\in \ell^2(w)$ precisely when
\begin{equation}\label{Eqn:FStar2}
\sum_{k\in \mathbb{N}_0} w_k | (F_w^*f)_k|^2 < \infty,
\end{equation}
and (\ref{Eqn:FStar2}) is true if and only if
\begin{equation}\label{Eqn:Mkcriterion}
\sum_k \frac{1}{w_k} \Big| \int_{\mathbb{R}} x^k f(x) \,\mathrm{d}\mu(x)
\Big|^2 < \infty.
\end{equation}
This gives us one sufficient criterion for $F$ to be closable. We see from Equation (\ref{Eqn:Mkcriterion}) that the monomial functions $\{v_j\}_{j \in \mathbb{N}_0}$ are in the domain of $F^*_w$ if and only if \begin{equation}\label{Eqn:FCriterion}
\sum_k \frac{1}{w_k} \Big| \int_{\mathbb{R}} x^{j+k} \,\mathrm{d}\mu(x)
\Big|^2 = \sum_k \frac{1}{w_k} |M_{j,k}|^2< \infty \end{equation} for all $j \in \mathbb{N}_0$.
Rather than looking at this criterion (\ref{Eqn:FCriterion}), however, we will find weights that put the essentially bounded functions in the domain of $F^*_w$. By hypothesis, $\mu$ is a finite measure so $L^{\infty}(\mu) \subseteq L^2(\mu)$. Set $\mathcal{M}_k:=\int_{\mathbb{R}}|x|^k\,\mathrm{d}\mu(x)$, and suppose $f\in
L^{\infty}(\mu)$. Then
\begin{equation}
\Big| \int_{\mathbb{R}} x^k f(x) \,\mathrm{d}\mu(x)\Big| \leq
\|f\|_{L^{\infty}(\mu)} \mathcal{M}_k.
\end{equation}
Therefore, if $\{\mathcal{M}_k^2/w_k\}\in \ell^1$, then
\begin{equation}
L^{\infty}(\mu)\subset \textrm{dom}(F_w^*),
\end{equation}
and we explicitly define a choice of weights $w_k$:
\begin{equation}\label{Eqn:FWeights}
w_k:=(1 + k^2)\mathcal{M}^2_k.
\end{equation}
Since $L^{\infty}(\mu)$ is dense in $L^2(\mu)$, we now know that
$\textrm{dom}(F_w^*)$ is dense in $L^2(\mu)$, where $w = \{w_k\}_{k \in \mathbb{N}_0}$ is
defined in (\ref{Eqn:FWeights}). Therefore $F:\ell^2(w)
\rightarrow L^2(\mu)$ is closable.\end{proof}
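For a concrete instance (Lebesgue measure on $[0,1]$, an illustrative choice), $\mathcal{M}_k = 1/(k+1)$ and the weights (\ref{Eqn:FWeights}) give $\mathcal{M}_k^2/w_k = 1/(1+k^2)$, which is summable as required:

```python
import math

# Illustrative sketch: μ = Lebesgue measure on [0,1], so that
# M_k = ∫_0^1 x^k dx = 1/(k+1) and w_k = (1+k²)·M_k².  Then
# M_k²/w_k = 1/(1+k²), and the ℓ¹ condition in the proof holds.
Mk = [1.0 / (k + 1) for k in range(10_000)]
wk = [(1 + k * k) * m * m for k, m in enumerate(Mk)]
tail_sum = sum(m * m / w for m, w in zip(Mk, wk))    # partial sum of Σ 1/(1+k²)

# The full series Σ_{k≥0} 1/(1+k²) = (1 + π·coth(π))/2 ≈ 2.0767 is finite.
assert tail_sum < (1 + math.pi / math.tanh(math.pi)) / 2
```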
\begin{theorem}\label{Thm:QClosable}
Given $\mu$ a Borel measure on $\mathbb{R}$ with finite moments of all orders and $M = M^{(\mu)}$, there are weights
$\{w(i)\}_{i\in\mathbb{N}_0}$ such that $(Q_M, \mathcal{D},
\ell^2(w))$ is closable.
\end{theorem}
\begin{proof}Choose weights $w = \{w_i\}_{i\in\mathbb{N}_0}$ as
in Equation (\ref{Eqn:FWeights}) so that $F$ is closable. Suppose
$c_n\underset{\ell^2(w)}{\longrightarrow} 0$ and $Q_M(c_n -
c_m)\rightarrow 0$ as $m,n\rightarrow \infty$ as in Definition \ref{Defn:QClosable}. Our goal is to show that $Q_M(c_n)\rightarrow 0$.
Denote the polynomial $Fc_n$ by $f_n$ for each $n \in \mathbb{N}_0$. Because $Q_M$ on $\mathcal{D}$ and $\widetilde{Q}_M$ defined on the
polynomials satisfy
\[\widetilde{Q}_M(F(c)) = Q_M(c),\]
the condition $Q_M(c_n - c_m)\rightarrow 0$ implies that $\{f_n\}_{n \in\mathbb{N}_0}$
is a Cauchy sequence in $L^2(\mu)$. Therefore, there exists $g \in
L^2(\mu)$ such that $f_{n}\underset{L^2(\mu)}{\longrightarrow} g$.
We will now use the condition that $c_n \rightarrow 0$ in
$\ell^2(w)$. By Lemma \ref{Lemma:FClosable},
$F:\ell^2(w)\rightarrow L^2(\mu)$ is closable, so its closure
$\overline{F}$ has a closed graph. Since $c_n\rightarrow 0$ in $\ell^2(w)$ while
$F(c_n) =\overline{F}(c_n)\rightarrow g$ in $L^2(\mu)$, the closedness of the graph of
$\overline{F}$ forces $g = 0$ in $L^2(\mu)$. Hence $Q_M(c_n) = \widetilde{Q}_M(f_n) = \|f_n\|^2_{L^2(\mu)} \rightarrow \|g\|^2_{L^2(\mu)} = 0$, as required.
\end{proof}
As a result of Theorem \ref{Thm:QClosable}, we can apply Theorem
\ref{Thm:Kato} (Kato's Theorem) to the closure of the quadratic form $Q_M$. Denote by $\mathcal{H}_Q$ the Hilbert space completion of $\mathcal{D}$ with respect to the quadratic form $Q_M$. Kato's theorem yields a self-adjoint operator $H$ on $\ell^2(w)$ whose associated form completion is again $\mathcal{H}_Q$, and
\begin{equation}\label{Eqn:KatoProperties}
\|c\|^2_{\mathcal{H}_Q} = Q_M(c) = \|H^{1/2}c\|^2_{\ell^2(w)}
\end{equation}
for all $c \in \mathcal{H}_Q$. Moreover, the domain of $H^{1/2}$ is equal to the domain of $Q_M$. It should be noted that in general, the domain of $H$ is a subset of the domain of $H^{1/2}$ and may or may not contain $\mathcal{D}$.
\section{A factorization of the Kato-Friedrichs operator}
Given $F$ from $\ell^2(w)$ to
$L^2(\mu)$, suppose the weights $w$ have been selected such that
$F$ is closable, i.e. $F^*_w$ is densely defined in $L^2(\mu)$.
Let $\widetilde{Q}_M$ be the quadratic form given by the $L^2(\mu)$-norm:
\begin{equation}
\widetilde{Q}_M(f)=\int_{\mathbb{R}}|f|^2 \,\mathrm{d}\mu.
\end{equation}
Then for every $c \in \mathcal{D}$, we have \begin{eqnarray}
\widetilde{Q}_M(Fc) &=& \int_{\mathbb{R}}|Fc|^2\,\mathrm{d}\mu \nonumber\\ &=& \int_{\mathbb{R}}\sum_i \sum_j \overline{c}_ic_j x^{i+j} \,\mathrm{d}\mu \nonumber\\ &=&
\sum_i \sum_j \overline{c_i} M_{i,j}c_j = Q_M(c). \end{eqnarray}
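The identity $\widetilde{Q}_M(Fc) = Q_M(c)$ can be checked numerically for an illustrative choice of measure (an assumption for this sketch): Lebesgue measure on $[0,1]$, whose moment matrix is the Hilbert matrix $M_{i,j} = 1/(i+j+1)$.

```python
import numpy as np

# Sketch: check Q~_M(Fc) = <c, M c> for mu = Lebesgue on [0,1],
# whose moment matrix is the Hilbert matrix M_{i,j} = 1/(i+j+1).
n = 5
c = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
M = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# Q_M(c) = sum_i sum_j c_i M_{i,j} c_j
Q_M = c @ M @ c

# Q~_M(Fc) = int_0^1 |sum_k c_k x^k|^2 dx, via Gauss-Legendre quadrature
# (50 nodes integrate polynomials up to degree 99 exactly; |Fc|^2 has degree 8)
x, wts = np.polynomial.legendre.leggauss(50)
x = 0.5 * (x + 1.0)          # map nodes from [-1,1] to [0,1]
wts = 0.5 * wts
f = sum(c[k] * x ** k for k in range(n))
Q_tilde = np.sum(wts * f ** 2)

assert abs(Q_M - Q_tilde) < 1e-12
```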
\begin{proposition}\label{Lem:FStarFisH} $F^*_wF$ is a self-adjoint densely defined operator on $\ell^2(w)$, and $F^*_wF$ is the Kato operator $H$.
\end{proposition}
\begin{proof}
The domain of $F$ contains $\mathcal{D}$, and the domain of $F^*_w$
contains the polynomials. Since $F$ maps finite sequences $c \in
\mathcal{D}$ to polynomials $Fc$, we see that $\mathcal{D}$ is
contained in the domain of $F^*_wF$. In order to prove that
$F^*_wF$ is the Kato operator corresponding to the quadratic form
$Q_M$, we must show that it satisfies \[ Q_M(c) =
\|(F^*_wF)^{1/2}c\|^2_{\ell^2(w)} \] for all $c \in \mathcal{D}$.
First, note that for any $c \in \mathcal{D}$,
\begin{eqnarray*} \langle c |F_w^*Fc \rangle_{\ell^2(w)} &=& \big\langle (F^*_wF)^{1/2}c \big|(F^*_wF)^{1/2}c \big\rangle_{\ell^2(w)} \\
&=& \|(F_w^*F)^{1/2} c\|_{\ell^2(w)}^2. \end{eqnarray*} We also note that
\begin{eqnarray*} \langle c|F_w^*Fc \rangle_{\ell^2(w)} &=& \langle Fc | Fc \rangle_{L^2(\mu)}\\ &=&
\|Fc\|^2_{L^2(\mu)}\\&=& \widetilde{Q}_M(Fc)= Q_M(c). \end{eqnarray*}
Therefore, for $c \in \mathcal{D}$,
\[ Q_M(c) = \langle c|F^*_wFc \rangle_{\ell^2(w)} = \|(F^*_wF)^{1/2}c\|^2_{\ell^2(w)}.\]
By uniqueness of the Kato operator for the closable quadratic form
$Q_M$, we have $F^*_wF = H$.
\end{proof}
This proposition makes the usually mysterious Kato operator much more concrete. In particular, as a result of this proposition, the domain of $H$ includes $\mathcal{D}$.
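In a finite $n\times n$ truncation the factorization is transparent: in the standard coordinates, $(F^*_wFc)_k = \frac{1}{w_k}\sum_j M_{k,j}c_j$, so $H$ is the matrix $W^{-1}M$ with $W = \mathrm{diag}(w_k)$. The Python sketch below (with the illustrative choice $\mu =$ Lebesgue measure on $[0,1]$ and the weights of Equation (\ref{Eqn:FWeights})) verifies $Q_M(c) = \langle c|Hc\rangle_{\ell^2(w)}$; the infinite-dimensional domain issues are of course invisible at this level.

```python
import numpy as np

# Finite truncation illustrating H = F*_w F: in unweighted coordinates
# (H c)_k = (1/w_k) sum_j M_{k,j} c_j, i.e. H = W^{-1} M, and
# <c | H c>_{l^2(w)} = c* W W^{-1} M c = Q_M(c).
n = 6
M = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # mu = Lebesgue on [0,1]
mom = np.array([1.0 / (k + 1) for k in range(n)])                        # M_k = int x^k dmu
wts = (1 + np.arange(n) ** 2) * mom ** 2                                 # w_k = (1+k^2) M_k^2

H = M / wts[:, None]             # H = W^{-1} M
c = np.array([0.3, -1.0, 2.0, 0.0, 1.5, -0.7])

lhs = c @ M @ c                  # Q_M(c)
rhs = np.sum(wts * c * (H @ c))  # <c | H c>_{l^2(w)}
assert abs(lhs - rhs) < 1e-12
```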
\section{Kato connection to $A$ matrix}\label{Subsec:KatoA}
Given a moment matrix $M^{(\mu)}$ and a measurable map $\tau$, we can use Kato theory as another way to describe a matrix $A$ such that, under appropriate convergence criteria, \[M^{(\mu \circ \tau^{-1})} = A^*M^{(\mu)}A.\]
This approach differs from that used in Section \ref{Sec:GeneralA}. We prove that under certain hypotheses on the measure $\mu$ and the map $\tau$, we can find an operator $\widetilde{A}$ which is an isometry between the Kato completion Hilbert spaces for the quadratic forms $Q_M$ and $Q_{M'}$, where $M = M^{(\mu)}$ and $M' = M^{(\mu \circ \tau^{-1})}$. We demonstrate that this isometry $\widetilde{A}$ intertwines the Kato operators for $Q_M$ and $Q_{M'}$, and therefore the corresponding matrix representation $A$ intertwines the moment matrices $M$ and $M'$ as required.
Let $(X,\mu)$ be a measure space with $X \subseteq \mathbb{R}$. For each $k \in \mathbb{N}_0$, let $v_k$ be the monomial function on $X$: $v_k(x) = x^k$. The polynomials may not be dense in $L^2(\mu)$, so denote by $\mathcal{P}$ the closed span of the polynomials in $L^2(\mu)$. We take $\mu$ to be a measure such that $\{v_k\}_{k\in \mathbb{N}_0} \subseteq L^1(\mu) \cap \mathcal{P}$. Let $M = M^{(\mu)}$ be the moment matrix for $\mu$. By Section \ref{Subsec:QClosable}, there exist weights $w=\{w_i\}_{i\in\mathbb{N}_0}$ such that the map $F$ is closable on the weighted space $\ell^2(w)$. Thus the quadratic form $Q_M$ is closable, and we can define the Kato operator $H$ on $\ell^2(w)$. Note that the domains of $Q_M, F$, and $F_w^*$ all contain the set of finite sequences $\mathcal{D}$.
Let $\tau$ be a measurable endomorphism on $X$ such that the functions $v_k \circ \tau = \tau^k$ are all in $L^1(\mu) \cap \mathcal{P}$. Let $M' = M^{(\mu \circ \tau^{-1})}$ be the moment matrix for the measure $\mu \circ \tau^{-1}$. Define the map $F':\mathcal{D} \rightarrow L^2(\mu \circ \tau^{-1})$ to be the analog of $F$, i.e. the map which takes $c \in \mathcal{D}$ to the polynomial $f_c = \sum_i c_iv_i \in L^2(\mu \circ \tau^{-1})$. Using the same reasoning as that used in Section \ref{Subsec:QClosable}, there exist weights such that $F'$ is closable, hence the quadratic form $Q_{M'}$ is closable, and we have $H'$, the Kato operator on the corresponding weighted $\ell^2$ space.
Let $w=\{w_k\}_{k \in \mathbb{N}_0}$ be weights that allow for both $F$ and $F'$ to be closable on $\ell^2(w)$, hence the Kato operators $H$ and $H'$ are both densely defined self-adjoint operators on $\ell^2(w)$. Denote the Kato completion Hilbert spaces for the quadratic forms $Q_M$ and $Q_{M'}$ by $\mathcal{H}_Q$ and $\mathcal{H}_{Q'}$, respectively.
Notice that $\phi \in L^2(\mu \circ \tau^{-1})$ if and only if $\int_{\mathbb{R}}|\phi \circ \tau|^2 \mathrm{d}\mu < \infty$, which holds if and only if $\phi \circ \tau \in L^2(\mu)$. In fact, there is a natural isometry $\alpha$ between $L^2(\mu \circ \tau^{-1})$ and $L^2(\mu)$ given by \[ \alpha(\phi) = \phi \circ \tau.\]
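The change-of-variables identity behind $\alpha$ is easy to see for a discrete measure. In the illustrative sketch below (the measure, masses, map, and test function are all assumptions, not from the text), the pushforward $\mu \circ \tau^{-1}$ aggregates the masses of points that $\tau$ identifies, and the two integrals agree.

```python
import numpy as np
from collections import defaultdict

# Sketch of the isometry alpha(phi) = phi o tau for a hypothetical discrete
# measure mu = sum_i m_i delta_{x_i} with tau(x) = x^2: the pushforward
# mu o tau^{-1} combines the masses of points that tau identifies, and
# int |phi|^2 d(mu o tau^{-1}) = int |phi o tau|^2 dmu.
xs = [-1.0, 1.0, 0.5, 2.0]
ms = [0.1, 0.2, 0.4, 0.3]
tau = lambda x: x ** 2

# Build mu o tau^{-1} explicitly: tau(-1) = tau(1) = 1, so those masses combine.
push = defaultdict(float)
for x, m in zip(xs, ms):
    push[tau(x)] += m

phi = lambda y: np.sin(y) + y ** 3   # any square-integrable test function

lhs = sum(m * abs(phi(tau(x))) ** 2 for x, m in zip(xs, ms))  # int |phi o tau|^2 dmu
rhs = sum(m * abs(phi(y)) ** 2 for y, m in push.items())      # int |phi|^2 d(mu o tau^{-1})
assert abs(lhs - rhs) < 1e-12
```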
\begin{remark}
The map $F$ which we have defined above actually takes on several realizations in this section. Simply put, it is the map that takes finite sequences to the polynomial with these coefficients. But we can think of $F$ as operating on the Kato spaces $\mathcal{H}_Q$, $\mathcal{H}_{Q'}$ or on the weighted space $\ell^2(w)$. We can also take the codomain of $F$ to be $L^2(\mu)$ or $L^2(\mu \circ \tau^{-1})$. In order to clarify these different realizations, we will denote the operator $F$ as follows:
\begin{eqnarray*} F_Q&:& \mathcal{H}_Q \rightarrow L^2(\mu) \\ F_{Q'}&:& \mathcal{H}_{Q'} \rightarrow L^2(\mu \circ \tau^{-1}) \\ F &:& \ell^2(w) \rightarrow L^2(\mu) \\ F' &:& \ell^2(w) \rightarrow L^2(\mu \circ \tau^{-1}). \end{eqnarray*} Moreover, we also observe that the operator $T$ (defined below) which maps standard basis elements $e_i$ to $ \tau^i$ can be expressed in terms of the $F'$ and the isometry $\alpha$. Given $c \in \mathcal{D}$,
\begin{eqnarray*} Tc &=& \sum_i c_i \tau^i \ \in L^2(\mu)\\ &=& \sum_i c_i \alpha(v_i )\\ &=& \alpha F'c.
\end{eqnarray*}
We will show below that the operators $F_Q$ and $F_{Q'}$ are isometries, and we have already proved that for appropriately chosen weights, the operators $F$ and $F'$ are closable on $\ell^2(w)$.
\end{remark}
\begin{lemma}\label{Lem:KatoIsometry}
The Kato space $\mathcal{H}_Q$ is isometric to $\mathcal{P} \subseteq L^2(\mu)$, and $\mathcal{H}_{Q'}$ is isometric to a subspace of $L^2(\mu \circ \tau^{-1})$.
\end{lemma}
\begin{proof} The norm on the Hilbert space $\mathcal{H}_Q$ is given by \[ \|c\|^2_{\mathcal{H}_Q} = Q_M(c) \] when $c \in \mathcal{D}$, and by the definition of the Kato operator, we also have \[\|c\|_{\mathcal{H}_Q} = \|H^{1/2} c \|_{\ell^2(w)}.\] Let $F_Q$ be the map taking $c \in \mathcal{D}$ to $f_c = \sum_i c_iv_i$ in $\mathcal{P} \subseteq L^2(\mu)$. Then \begin{eqnarray*} \|F_Qc\|_2^2 &=& \|f_c\|^2_2\\ &=& \int_{\mathbb{R}}\Bigr|\sum_ic_iv_i \Bigr|^2 \mathrm{d}\mu\\ &=& \sum_i \sum_j \overline{c_i}c_j \int_{\mathbb{R}}v_iv_j \,\mathrm{d}\mu\\& = &\sum_i\sum_j \overline{c_i}c_j M^{(\mu)}_{i,j} \\& =& Q_M(c) = \|c\|^2_{\mathcal{H}_Q}. \end{eqnarray*}
Thus $F_Q$ is an isometry from $\mathcal{H}_Q$ to $\mathcal{P}$. An identical argument proves that the corresponding $F_{Q'}$ map from $\mathcal{H}_{Q'}$ to $L^2(\mu \circ \tau^{-1})$ is also an isometry onto the subspace $\mathcal{P}'$ of $L^2(\mu \circ \tau^{-1})$ spanned by the polynomials.
\end{proof}
\begin{proposition}\label{Prop:DefA} The Kato Hilbert spaces $\mathcal{H}_Q$ and $\mathcal{H}_{Q'}$ are isometric.
\end{proposition}
\begin{proof} We use the notation $\mathcal{H} \cong \mathcal{G}$ if there exists an isometry from the Hilbert space $\mathcal{H}$ onto the Hilbert space $\mathcal{G}$. Using Lemma \ref{Lem:KatoIsometry} and the map $\alpha$, we have the following isometries:
\[ \mathcal{H}_{Q'} \cong \mathcal{P}' \cong \mathcal{P} \cong \mathcal{H}_{Q}. \]
\end{proof}
If we denote the isometry in Proposition \ref{Prop:DefA} by $\widetilde{A}$, then
\[ \widetilde{A} = F_Q^{-1}\alpha F_{Q'}.\] The isometry property $\|c\|_{Q'}
= \|\widetilde{A}c\|_{Q}$ implies the quadratic forms satisfy $Q_M(\widetilde{A}c) =
Q_{M'}(c)$ for all $c \in \mathcal{D}$.
Given $\{e_i\}_{i \in\mathbb{N}_0}$ the standard orthonormal basis for
$\ell^2$, we define the standard orthonormal basis
$\{e_i^w\}_{i\in\mathbb{N}_0} = \{ \frac{e_i}{\sqrt{w_i}}
\}_{i\in\mathbb{N}_0}$ for $\ell^2(w)$. As before, suppose the weights $w$ have been selected such that
$F:\ell^2(w) \rightarrow L^2(\mu)$ is closable, i.e. $F^*_w$ is densely defined in $L^2(\mu)$. Recall from Proposition \ref{Lem:FStarFisH} that $F^*_wF$ is equal to the Kato operator $H$ on $\ell^2(w)$.
The operator $\widetilde{A}$ has domain containing $\mathcal{D} \subseteq
\mathcal{H}_Q$, but we will need to consider the analog of $\widetilde{A}$ as
an operator on $\ell^2(w)$. In order to do this, we define the
following maps:
\begin{eqnarray*} j_w &:& \ell^2(w) \rightarrow \mathcal{H}_Q\\
j_w'&:& \ell^2(w) \rightarrow \mathcal{H}_{Q'}.\end{eqnarray*}
Each of $j_w,j_w'$ acts like the identity map on $\mathcal{D}$,
mapping $e_i^w$ in $\ell^2(w)$ to $e_i^w$ in the Kato spaces
$\mathcal{H}_Q$, $\mathcal{H}_{Q'}$, respectively. Then, we see \[ F =
F_Qj_w \qquad \text{and} \qquad F' = F_{Q'}j_w'.\]
We then wish to define the operator on $\ell^2(w)$: \[ A
= j_w^{-1}\widetilde{A}j_w'.\]
\begin{remark} Defining $A$ in this fashion requires the composition of operators to be defined on $\mathcal{D}$ and also requires $j_w$ be invertible. We will assume these properties through the remainder of this section, but we note that they may not hold in general. Thus, this approach requires convergence hypotheses to define $A$, just as were needed in Proposition \ref{Prop:TRA}. \end{remark}
Given the maps $\alpha$ and $F'$ defined above, we define $T$ to
be the composition \[ T = \alpha F': \ell^2(w) \rightarrow
L^2(\mu),\] where $T$ maps basis element $e_i^w$ to the function
$\frac{1}{\sqrt{w_i}}\tau^i =\frac{1}{\sqrt{w_i}} v_i \circ \tau$.
We now wish to write $T$ with respect to the isometry $\widetilde{A}$ and the
map $F=F_Qj_w$. We will need to make use of the operator
$A$, as shown in the following diagram.
\[
\begin{CD}
L^2(\mu \circ \tau^{-1}) @>\alpha>> L^2(\mu)\\
@AF_{Q'}AA @AF_QAA\\
\mathcal{H}_{Q'} @>\widetilde{A}>> \mathcal{H}_Q\\
@Aj_w'AA @Aj_wAA\\
\ell^2(w) @>A>> \ell^2(w)
\end{CD}
\]
\begin{lemma} Given the operator $T$ defined above, \[ T = Fj_w^{-1}\widetilde{A}j_w' = FA.\] \end{lemma}
\begin{proof} Recall from the definition of $\widetilde{A}$ that $\alpha F_{Q'} = F_Q\widetilde{A}$. We expand $T$ using the diagram above.
\begin{eqnarray*} T &=& \alpha F'\\ &=& \alpha F_{Q'}j_w'\\&=& F_Q\widetilde{A}j_w'\\&=& Fj_w^{-1}\widetilde{A}j_w' = FA. \end{eqnarray*}
\end{proof}
\begin{lemma} There exist weights such that $T$ is a closable operator on the weighted space $\ell^2(w)$.
\end{lemma}
\begin{proof} We have already proven that there exist weights $w= \{w_i\}_{i \in \mathbb{N}_0}$ that make the operators $F$ and $F'$ closable, and in particular, these weights ensure that the bounded functions are in the domains of both $F_w^*$ and $(F'_w)^*$. Since $\alpha$ is an isometry, these weights also ensure that $T^*_w$ has dense domain.
\end{proof}
\begin{lemma}\label{Lem:TStarT} Let $w$ be weights such that $F$
and $T$ are both closable operators on $\ell^2(w)$. If the operator
$T^*_wT$ is self-adjoint and densely defined on $\ell^2(w)$, then
$T_w^*T = H'$.
\end{lemma}
\begin{proof}
This proof repeats the argument from Proposition \ref{Lem:FStarFisH}:
\begin{eqnarray*} \langle e_i^w | T^*_wTe_j^w \rangle_{\ell^2(w)} &=& \langle Te_i^w|Te_j^w \rangle_{L^2(\mu)}\\ &=&
\frac{1}{\sqrt{w_iw_j}}\langle \tau^i|\tau^j \rangle_{L^2(\mu)}
\\&=& \frac{1}{\sqrt{w_iw_j}}\langle v_i|v_j \rangle_{L^2(\mu \circ \tau^{-1})}
\\&=& \frac{1}{\sqrt{w_iw_j}} M^{(\mu \circ \tau^{-1})}_{i,j} = \frac{1}{\sqrt{w_iw_j}} M'_{i,j}. \end{eqnarray*}
Therefore, we have \[Q_{M'}(c) = \langle c|T^*_wTc\rangle_{\ell^2(w)} =
\|(T^*_wT)^{1/2}c\|_{\ell^2(w)}^2\] for all $c \in \mathcal{D}$. By uniqueness of the
Kato operator for the closable quadratic form $Q_{M'}$, the self-adjoint, densely
defined operator $T^*_wT$ is exactly the Kato operator $H'$ for $Q_{M'}$. \end{proof}
Let $A$ be the operator $j_w^{-1}\widetilde{A}j_w'$ on $\ell^2(w)$, so $T = F A$.
\begin{theorem} The operator $A$ intertwines the
Kato operators $H$ and $H'$, and thus in matrix form intertwines
the moment matrices $M^{(\mu)}$ and $M^{(\mu \circ \tau^{-1})}$.
\end{theorem}
\begin{proof}
If the product $T=FA$ is defined on $\mathcal{D}$, we
have \[ T^*_wT = (FA)^*FA =
A^*(F^*_wF)A,\] which by Proposition
\ref{Lem:FStarFisH} and Lemma \ref{Lem:TStarT} implies that \[H' =
A^*HA.\]
If we form the matrices for $A, H, $ and $H'$ with
respect to the orthonormal basis $\{e_i^w\}_{i \in \mathbb{N}_0}$, we have \[
A^* M^{(\mu)}A = M^{(\mu \circ
\tau^{-1})}.\]
\end{proof}
Using the Kato approach we can therefore show that, under the appropriate conditions on
$\tau$ and the measures $\mu$ and $\mu \circ \tau^{-1}$, we can find a matrix $A$ such that \[
A^*M^{(\mu)}A = M^{(\mu \circ
\tau^{-1})}.\]
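As a sanity check, the affine case gives a closed-form $A$ against which the identity can be tested numerically: for $\tau(x) = ax+b$ one has $\tau(x)^k = \sum_j \binom{k}{j} a^j b^{k-j} x^j$, so $A_{j,k} = \binom{k}{j} a^j b^{k-j}$ is upper triangular and every $N\times N$ truncation of $A^*M^{(\mu)}A$ is exact. The discrete measure in this Python sketch is purely illustrative.

```python
import numpy as np
from math import comb

# Sketch of M^{(mu o tau^{-1})} = A* M^{(mu)} A in the one case where A is
# known in closed form: tau(x) = a x + b affine. A is upper triangular, so
# an N x N truncation of the triple product is exact.
a, b, N = 2.0, 1.0, 5
xs = np.array([0.0, 0.3, 1.0, 2.5])          # hypothetical discrete measure
ms = np.array([0.25, 0.25, 0.25, 0.25])

def moment_matrix(points, masses, n):
    return np.array([[np.sum(masses * points ** (i + j)) for j in range(n)]
                     for i in range(n)])

M = moment_matrix(xs, ms, N)
M_prime = moment_matrix(a * xs + b, ms, N)   # moments of the pushforward

# A_{j,k} = C(k,j) a^j b^{k-j}, from tau(x)^k = sum_j C(k,j) a^j b^{k-j} x^j
A = np.array([[comb(k, j) * a ** j * b ** (k - j) if j <= k else 0.0
               for k in range(N)] for j in range(N)])

assert np.allclose(A.T @ M @ A, M_prime)
```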
\section{Examples}
The following examples illustrate situations in which infinite matrices do not have direct realizations as $\ell^2$ operators, but can be realized as operators on a suitable weighted Hilbert space. The simplest such case arises when the measure $\mu$ is a Dirac mass at the point $x= 1$, which we observed earlier in Example \ref{Ex:PointMass}.
\begin{example}\label{Ex:FStarDelta1}The Kato operator for $\mu = \delta_1$. \end{example}
Let $\mu = \delta_1$ be the Dirac measure on $\mathbb{R}$ with point
mass at $1$. As we stated in Example \ref{Ex:PointMass}, the moment matrix $M=M^{(\mu)}$ will be $M_{i,j} = 1$ for all
$i,j\in\mathbb{N}_0$. The associated quadratic form $Q_M$ is given on $\mathcal{D}$ by
\[ Q_M(c) = \Big|\sum_{k\in \mathbb{N}_0} c_k\Big|^2.\]
If we attempt to define the operator $F^*$ on a function $f$ without weights, we have for each $i$,
\[ (F^*f)_i = \int_{\mathbb{R}}f(x) x^i \,\mathrm{d}\delta_1(x) = f(1).\]
This constant sequence lies in $\ell^2$ only when $f(1)=0$, so the weights are necessary in order to put any function with $f(1) \neq 0$ in the domain of $F^*$.
Select weights $w =\{w_i\}\subset \mathbb{R}^+$ such that
\[ \sum_{i\in\mathbb{N}_0} \frac{1}{w_i} < \infty.\]
Then $F^*_w$ and $Q_M$ are closable, and the resulting Kato-Friedrichs operator $H_w$ then satisfies Equation (\ref{Eqn:KatoProperties}). Given $c$ in the domain of $H_w$, we have
\[ Q_M(c) = \sum_j\sum_k \overline{c_j}c_k = \langle c|H_wc \rangle_{\ell^2(w)} = \sum_j w_j\overline{c_j}(H_wc)_j.\] Thus, $H_w$
is defined by
\[ (H_wc)_j = \frac{1}{w_j} \sum_k c_k.\]
In other words, $H_w$ is a rank-one operator with range equal to the span of the vector \[
\xi = \left[\begin{matrix} \frac{1}{w_0} & \frac{1}{w_1} & \frac{1}{w_2} & \cdots \end{matrix}\right]. \] \hfill $\Diamond$
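A finite truncation makes the rank-one structure visible. In this Python sketch the weights $w_i = (1+i)^2$ are an illustrative choice satisfying $\sum_i 1/w_i < \infty$.

```python
import numpy as np

# Finite truncation of the Kato-Friedrichs operator for mu = delta_1, with
# illustrative weights w_i = (1+i)^2 (so sum 1/w_i < infinity). Here
# M_{i,j} = 1 and (H_w c)_j = (1/w_j) sum_k c_k, a rank-one operator.
n = 8
wts = (1.0 + np.arange(n)) ** 2
H = np.outer(1.0 / wts, np.ones(n))      # H_w = xi (x) 1, with xi_j = 1/w_j

assert np.linalg.matrix_rank(H) == 1

# Check Q_M(c) = |sum_k c_k|^2 = <c | H_w c>_{l^2(w)} = sum_j w_j c_j (H_w c)_j.
c = np.array([1.0, -0.5, 2.0, 0.0, 0.3, -1.2, 0.7, 0.1])
Q = abs(np.sum(c)) ** 2
pairing = np.sum(wts * c * (H @ c))
assert abs(Q - pairing) < 1e-12
```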
\begin{example}\label{Ex:ExpDecay} The Laplace transform and the operator $F^*$ associated with the measure $e^{-x}\,\mathrm{d}x$ on $\mathbb{R}^+$\end{example}
We first observed in Examples \ref{Ex:Laguerre} and \ref{Ex:Laguerre-2} that the moment matrix entries
\[M_{i,j} = (i+j)!\] grow quickly as $i$ and $j$ increase. Therefore, we again need weights to make $F$ a closable operator.
Let $w=\{w_i\}_{i \in \mathbb{N}_0}$ be weights given by
\[ w_i = [(2i)!]^2.\] We claim that with these weights, $F$ is closable. Indeed, for each fixed $j \in \mathbb{N}_0$,
\begin{equation*} \sum_{i=0}^{\infty} \frac{1}{w_i}|M_{i,j}|^2 = \sum_{i=0}^{\infty} \frac{[(i+j)!]^2}{[(2i)!]^2}.
\end{equation*} Using the ratio test, we see that the series converges for each $j$, so by Equation (\ref{Eqn:FCriterion}) the monomials are in the domain of $F^*_w$, which makes $F$ closable on $\ell^2(w)$.
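The ratio-test argument can be supplemented with a direct numerical check of the partial sums; exact rational arithmetic avoids overflow of the factorials. A Python sketch:

```python
from math import factorial
from fractions import Fraction

# Terms of the series sum_i [(i+j)!]^2 / [(2i)!]^2 with weights w_i = [(2i)!]^2.
def term(i, j):
    return Fraction(factorial(i + j), factorial(2 * i)) ** 2

for j in range(4):
    # Consecutive-term ratios [(i+j+1)/((2i+1)(2i+2))]^2 tend to 0 ...
    ratios = [float(term(i + 1, j) / term(i, j)) for i in range(1, 10)]
    assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
    # ... so the partial sums stabilize rapidly.
    p25 = sum(term(i, j) for i in range(25))
    p40 = sum(term(i, j) for i in range(40))
    assert float(p40 - p25) < 1e-20
```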
As a side note, we observe that $F_w^*$ has a connection, under appropriate conditions on
the function $f$, to the Laplace transform $\mathcal{L}$:
\[(F_w^*f)_i = \frac{1}{w_i} \int_0^{\infty} f(x) x^i e^{-x}\,\mathrm{d}x = \frac{(-1)^i}{w_i} \Bigl(\frac{\mathrm{d}}{\mathrm{d}s}\Bigr)^i \mathcal{L}_{f}(1),\]
where
\[\mathcal{L}_{f}(s):=\int_0^{\infty} e^{-sx}f(x)\,\mathrm{d}x\] for $s \in
\mathbb{R}^+$.
\hfill$\Diamond$
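The Laplace-transform identity can be verified numerically for an illustrative choice of $f$ (an assumption for this sketch): with $f(x) = e^{-x}$ one has $\mathcal{L}_f(s) = 1/(s+1)$, so $(\mathrm{d}/\mathrm{d}s)^i \mathcal{L}_f(1) = (-1)^i\, i!/2^{i+1}$, which should match $\int_0^\infty f(x)\, x^i e^{-x}\,\mathrm{d}x$ up to the sign factor.

```python
import numpy as np
from math import factorial

# Check of the Laplace-transform identity for the hypothetical choice
# f(x) = e^{-x}: then int_0^inf f(x) x^i e^{-x} dx = int_0^inf x^i e^{-2x} dx,
# while L_f(s) = 1/(s+1) gives (d/ds)^i L_f(1) = (-1)^i i!/2^(i+1).
x = np.linspace(0.0, 60.0, 600001)   # tail beyond 60 is negligible
dx = x[1] - x[0]
for i in range(5):
    y = np.exp(-2.0 * x) * x ** i
    integral = dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))    # trapezoid rule
    deriv_at_1 = (-1) ** i * factorial(i) / 2 ** (i + 1)  # (d/ds)^i L_f(1)
    assert abs(integral - (-1) ** i * deriv_at_1) < 1e-6
```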
\chapter{Moment matrix transformation: measurable
maps}\label{Sec:ComputeA} In the previous chapter, we focused on the case of affine transformations comprising IFSs, but there is a great deal of interest in concrete applications to nonaffine examples; e.g. real and complex Julia sets. We turn our attention to nonaffine measurable maps in this chapter. We begin with an
arbitrary measurable map $\tau$ on a measure space $(X,\mu)$, where $X$ is a
subset of $\mathbb{R}^d$ for $d \geq 1$ or $\mathbb{C}$, and the moments of all orders with respect to $\mu$ and $\mu \circ \tau^{-1}$ are finite. We ask
whether the transformation of moment matrices $M^{(\mu)} \mapsto
M^{(\mu \circ \tau^{-1})}$ can be expressed as a matrix triple
product $M^{(\mu \circ \tau^{-1})} = A^*M^{(\mu)}A$, for $A$ an
infinite matrix. In the previous chapter, we stated the
appropriate $A$ when $\tau$ is an affine map. We now seek to find
an intertwining matrix $A$ for more general maps $\tau$.
We find that $A$, when it can be
written down, need not be a triangular matrix. We therefore also
will need to examine the hypotheses under which the formal matrix
products we write down are actually well defined.
\section{Encoding matrix $A$ for $\tau$}\label{Sec:GeneralA}
The following analysis will be restricted to the real one-dimensional case.
We let $X \subset \mathbb{R}$ be a Borel subset and let $\mu$ be a Borel measure with moments of all orders on $X$. Let $\tau$ be a measurable map on $X$, so that $\mu \circ \tau^{-1}$ also is a measure on $X$ with moments of all orders. We seek to find an infinite matrix $A$ which enacts the moment matrix transformation from $M^{(\mu)}$ to $M^{(\mu \circ \tau^{-1})}$; more precisely, such that
\begin{equation}\label{eqn:Atau} M^{(\mu \circ \tau^{-1})} = A^* M^{(\mu)} A.
\end{equation}
In the space $L^2(\mu)$, let $\mathcal{P}$ be the closed linear
span of the monomials. (The monomials are $L^2$ functions since
all moments are finite for $\mu$.) The following results hinge
on a careful description of the Gram-Schmidt process on the
monomials, since the monomials could possibly have linear
dependence relations among them in $L^2(\mu)$ (see Example \ref{Ex:dependent} below). Let $\{v_j\}_{j
\in \mathbb{N}_0}$ be the set of monomial functions, but if $x^j$
is dependent on $1,x,x^2, \ldots, x^{j-1}$, we remove $x^j$ from
the collection. In other words, $v_j$ might not be $x^j$, but it
is a monomial function $x^k$ for some $k \geq j$, and there are no
finite dependence relations among the set
$\{v_j\}_{j\in \mathbb{N}_0}$.
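The simplest way such a dependence arises is a measure with small support. In the illustrative two-point sketch below (an assumption, not one of the IFS measures discussed next), $x^2 = 1$ in $L^2(\mu)$, so $v_2$ is dropped from the collection.

```python
import numpy as np

# Hypothetical two-point measure mu = (1/2)(delta_{-1} + delta_1): here
# x^2 = 1 in L^2(mu), so v_2 is dependent on v_0 and must be dropped.
xs = np.array([-1.0, 1.0])
ms = np.array([0.5, 0.5])

def ip(f, g):                               # <f|g>_{L^2(mu)} for discrete mu
    return np.sum(ms * f * g)

one, x2 = xs ** 0, xs ** 2
assert ip(x2 - one, x2 - one) == 0.0        # ||x^2 - 1||_{L^2(mu)} = 0

# The Gram matrix of {1, x, x^2} is singular; {1, x} is the surviving set.
Gram = np.array([[ip(xs ** i, xs ** j) for j in range(3)] for i in range(3)])
assert np.linalg.matrix_rank(Gram) == 2
```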
\begin{example}\label{Ex:dependent} Bernoulli IFSs which yield measures with linearly dependent monomials. \end{example}
Consider the affine IFS on $\mathbb{R}$ of the form \[ \tau_0(x) = \lambda x, \quad
\tau_1(x) = \lambda(x + 2).\] The properties of the Hutchinson equilibrium measure depend on the choice of the parameter $\lambda$. If $\lambda < \frac12$, the measure is supported on a generalized Cantor set, while when $\lambda \geq \frac12$ the measure is supported on an interval in the real line.
We can transform these into measures supported on subsets of the unit circle $\mathbb{T} \subset \mathbb{C}$ via the map $x \mapsto e^{2 \pi i x}$. By \cite{JoPe98}, when $\lambda = \frac14$, there is an orthonormal basis of monomials \[ \{z^k\,:\, k= 0,1, 4, 5, 16, 17, 20, 21, \ldots\} \] for the corresponding measure contained in $ \mathbb{T}$. The collection of all the monomials, however, does not have finite linear dependence relations, as described in Remark \ref{Rem:Cantor}. On the other hand, if $\lambda = \frac13$, the corresponding circle measure does have finite linear dependence relations among the monomials. These cases also arise when $\lambda > \frac12$ \cite{JKS07b}.
\hfill $\Diamond$
\begin{remark}\label{Rem:Cantor}
A central theme in our study of moments in Chapters \ref{Sec:Exist} and \ref{Sec:ComputeA} is how the
study of moments relates to the self-similarity which characterizes
equilibrium measures for IFSs. Because we use operator-theoretic methods
to study moment matrices, we encounter spectral information along the
way. Historically (see \cite{Akh65}) the approach to spectral theory of
moments in $\mathbb{R}$ went as follows:
\begin{enumerate}
\item Start with the monomials $\{x^k\}_{k \in \mathbb{N}_0}$ viewed as a dense subset of (a subspace of)
$L^2(\mu)$.
\item Apply Gram-Schmidt to obtain the associated orthonormal polynomial
basis $\{p_k\}_{k \in \mathbb{N}_0}$.
\item The tri-diagonal matrix $J$ representing multiplication by $x$ in the ONB
$\{p_k\}_{k \in \mathbb{N}_0}$ (see Chapter \ref{Ch:Extensions}) gives rise to spectral information.
\end{enumerate}
However, the classical approach does not take into account the
self-similar properties that $\mu$ may have. Following \cite{JoPe98} and
\cite{Jor06}, we can seek instead to encode the IFS structure directly into
the analysis of moments. A case in point is the Cantor system mentioned above
with scaling by $\frac{1}{4}$ and two subdivisions (a measure $\mu$ with scaling
dimension $\frac12$). It was shown in \cite{JoPe98} that it is better to realize
$\mu$ as a complex measure which is supported on the circle in the
complex plane. The monomials we are then led to study are the complex
monomials $\{z^k \}_{k \in \mathbb{N}_0}$ in $L^2(\mu)$.
This set of all the monomials has no finite linear dependence relations. To see this, we assume a polynomial $p(z) = \sum_{j=0}^n a_jz^j$ supported on the circle is zero in $L^2(\mu)$. Then for almost every point $z$ in the support of $\mu$, the polynomial is zero. Since this measure has no atoms (see \cite{DuJo06d},\cite{JKS07c}), the complement of a set of measure zero must be infinite. We conclude that since $p$ has infinitely many zeros, it is the identically zero polynomial and hence $a_0=a_1=\cdots=a_n=0$.
The standard application of Gram-Schmidt on the monomials would produce an ONB of polynomials, but it would be different from an ONB of monomials such as $\{z^k | k =0, 1, 4, 5, 16, 17, 20, 21, ...\}$. Moreover, this approach misses the IFS scaling property of $\mu$. \end{remark}
Returning to our quest to encode the map $\tau$ via a matrix $A$, we perform the Gram-Schmidt process on
the finitely linearly independent monomials $\{v_j\}_{j\in\mathbb{N}_0}$ to construct an orthonormal
basis $\{p_k\}_{k \in \mathbb{N}_0}$ of polynomials for
$\mathcal{P}$. Part of the definition of the Gram-Schmidt
process gives that each polynomial $p_k$ is in the span of
$\{v_0,v_1, \ldots v_k\}$, and hence is orthogonal to $sp\{v_0,
\ldots, v_{k-1}\} = sp\{p_0, \ldots p_{k-1}\}$. We can write the lower triangular matrix $G$ which enacts Gram-Schmidt as follows:
\begin{equation}\label{Eqn:GramMatrix} \sum_{i=0}^k G_{k,i}v_i = p_k.\end{equation} Also, assume that $\mu$ is a
probability measure, so $p_0$ is the constant function $1$, i.e.
$p_0(x)\equiv 1$.
In the cases where some of the monomials have been left out of the
sequence $\{v_j\}$, we define a moment matrix $N^{(\mu)}$ for
$\mu$ to be \begin{equation} N^{(\mu)}_{j,k} = \langle v_j|v_k
\rangle_{L^2(\mu)}. \end{equation} This adjusted moment matrix $N^{(\mu)}$
will be symmetric in the real cases but will not have the Hankel
property. The moments contained in $N^{(\mu)}$ are total in
$\mathcal{P}$.
We place the condition on the map $\tau$ that the powers
$\tau^j$ are in the space $\mathcal{P}$ for all $j \in
\mathbb{N}_0$. From here, we define the following transformations
on $\mathcal{P}$:
\begin{eqnarray} Rp_k &=& v_k \label{Eqn:R} \\ Tp_k &=& v_k \circ \tau. \label{Eqn:T} \end{eqnarray}
$R$ and $T$ are well-defined operators on $\mathcal{P}$ since they are defined on an orthonormal basis. They might be unbounded, but their domains do contain the ONB elements $\{p_k\}_{k \in \mathbb{N}_0}$ by definition. Therefore, we can express $R$ and $T$ in matrix form with respect to $\{p_k\}_{k \in \mathbb{N}_0}$. With a slight abuse of notation, we will also refer to these matrices as $R$ and $T$ respectively:
\begin{eqnarray} R_{j,k} &=& \langle p_j | Rp_k \rangle_{L^2(\mu)} = \langle p_j | v_k \rangle_{L^2(\mu)} \label{eqn:grammatrix}\\ T_{j,k} &=& \langle p_j|Tp_k \rangle_{L^2(\mu)} = \langle p_j|v_k \circ \tau \rangle_{L^2(\mu)}. \label{eqn:T}\end{eqnarray}
Observe that the matrix for $R$ is upper triangular, since each $p_j$ is orthogonal to $sp\{v_0, \ldots v_{j-1}\} = sp\{p_0, \ldots p_{j-1}\}$.
\begin{lemma}\label{Lem:GramInverse} The matrix $R$ is invertible in the
sense of Definition \ref{def:inverse}, and the matrix of $R^{-1}$
is exactly the transpose of the matrix which enacts the
Gram-Schmidt process on the monomials, i.e. \[ \sum_{i=0}^k
R^{-1}_{i,k}v_i = p_k. \]
\end{lemma}
\begin{proof}
We compute the matrix product $RG^{tr}$. Note that the triangular structure of $G^{tr}$ gives a finite sum, so the product is well-defined:
\begin{eqnarray*} (RG^{tr})_{i,j} &=& \sum_{k=0}^{\infty} R_{i,k}G_{j,k}\\
&=& \sum_{k=0}^{\infty} \langle p_i|v_k \rangle_{L^2(\mu)}G_{j,k} \\
&=& \Bigr\langle p_i\Bigr|\sum_{k=0}^j G_{j,k}v_k\Bigr\rangle_{L^2(\mu)}\\&=& \langle p_i|p_j \rangle_{L^2(\mu)}\\ &=& \delta_{i,j}. \end{eqnarray*}
We need to show that this composition has a dense domain. One can readily verify that when a monomial $v_j$ is expressed as a column vector with respect to $\{p_k\}_{k \in \mathbb{N}_0}$, that column vector only has nonzero entries $\langle p_k|v_j \rangle$ for $k \leq j$. Moreover, by the upper triangular structure of $G^{tr}$, the matrix-vector product $G^{tr}v_j$ must be in $\mathcal{D}$. These are in the domain of the matrix for $R$, so we can now conclude that the monomials
$v_k$ are all in the domain of the operator $RG^{tr}$. Therefore, $RG^{tr}$ has dense domain and
on that domain, $RG^{tr}$ is the identity. By Lemma \ref{lem:inverse},
$G^{tr}$ and $R$ are inverses of each other, and we write $G^{tr} =
R^{-1}$.
\end{proof}
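The proof can be mirrored numerically: for a discrete illustrative measure with enough support points that the monomials are independent, run Gram-Schmidt on the monomials while tracking the coefficient matrix $G$, form $R_{j,k} = \langle p_j|v_k\rangle_{L^2(\mu)}$, and verify $RG^{tr} = I$. A Python sketch (the measure is an assumption):

```python
import numpy as np

# Sketch of Lemma GramInverse on a hypothetical discrete measure with
# enough points that 1, x, ..., x^{n-1} are independent in L^2(mu).
n = 4
xs = np.array([-1.0, 0.0, 0.5, 2.0, 3.0])
ms = np.full(5, 0.2)                       # uniform masses (illustrative)

def ip(f, g):                              # <f|g>_{L^2(mu)} for discrete mu
    return np.sum(ms * f * g)

V = np.array([xs ** k for k in range(n)])  # row k holds v_k evaluated on xs

# Gram-Schmidt with bookkeeping: p_k = sum_{i<=k} G_{k,i} v_i, G lower triangular.
G = np.zeros((n, n))
P = np.zeros_like(V)
for k in range(n):
    coeffs = np.zeros(n)
    coeffs[k] = 1.0
    vec = V[k].copy()
    for j in range(k):
        c = ip(P[j], V[k])
        vec -= c * P[j]
        coeffs -= c * G[j]
    norm = np.sqrt(ip(vec, vec))
    P[k] = vec / norm
    G[k] = coeffs / norm

R = np.array([[ip(P[j], V[k]) for k in range(n)] for j in range(n)])
assert np.allclose(R @ G.T, np.eye(n))     # R and G^tr are mutually inverse
assert np.allclose(np.tril(G), G)          # G is lower triangular
```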
Next, we will discuss the adjoints $R^*$ and $T^*$. These can be
written down as matrices, but it is not always true that their
domains are dense in $\mathcal{P}$. We will show that there
exists a \textit{renormalization} of $L^2(\mu)$ such that both
$R^*$ and $T^*$ have dense domains in $\mathcal{P}$. This is
equivalent (see \cite[Proposition 1.6, Chapter 10]{Con90}) to
saying $R$ and $T$ are closable operators.
Given a set of nonzero weights $w=\{w_i\}_{i \in \mathbb{N}_0}$, define the space $\mathcal{P}_w$ to be the set of all measurable functions which are in the span of the monomials with respect to the weighted norm \[
\|f\|_{\mathcal{P}_w}^2 = \sum_{i=0}^{\infty} w_i |\langle f|p_i \rangle_{L^2(\mu)}|^2.\]
To be precise, we see that $\int f p_i \mathrm{d}\mu$ must be finite for all $i \in \mathbb{N}_0$, but depending on the weights, observe that $\mathcal{P}_w$ could include functions which are not in $L^2(\mu)$.
The inner product, then, on $\mathcal{P}_w$ is given by \[ \langle f|g \rangle_{\mathcal{P}_w} = \sum_k w_k \langle f|p_k\rangle_{L^2(\mu)} \langle p_k|g\rangle_{L^2(\mu)}.\]
If we consider the operator $R$ as defined above, but as a map from $\mathcal{P}_w$ to $\mathcal{P}$, then the adjoint $R_w^*: \mathcal{P} \rightarrow \mathcal{P}_w$ is given by
\begin{equation}\label{Eqn:DefnRStar}
R_w^*g = \sum_k \frac{1}{w_k} \langle v_k| g \rangle_{L^2(\mu)} p_k
\end{equation}
on every $g$ in the domain of $R^*_w$ (i.e. every $g$ such that $R_w^*g \in \mathcal{P}_w$.) Observe that the weights change the adjoint, so we denote it by $R^*_w$. To verify Equation (\ref{Eqn:DefnRStar}), we compute for $f \in \mathcal{P}_w \cap\, \mathrm{Dom}(R)$ and $g \in \mathcal{P}$;
\[Rf = \sum_{k=0}^{\infty} \langle p_k|f \rangle_{L^2(\mu)} v_k,\]
which gives
\[ \langle g|Rf \rangle_{L^2(\mu)} = \Bigr\langle g|\sum_{k=0}^{\infty} \langle p_k|f \rangle_{L^2(\mu)} v_k\Bigr\rangle_{L^2(\mu)} = \sum_{k=0}^{\infty} \langle p_k|f \rangle_{L^2(\mu)} \langle g|v_k \rangle_{L^2(\mu)} .\]
We then compute
\begin{eqnarray*} \Bigr\langle \sum_k \frac{1}{w_k} \langle v_k|g \rangle_{L^2(\mu)} p_k \Bigr| f \Bigr\rangle_{\mathcal{P}_w} &=&
\sum_j w_j \Bigr\langle \sum_k \frac{1}{w_k} \langle v_k|g \rangle_{L^2(\mu)} p_k \Bigr| p_j \Bigr \rangle_{L^2(\mu)} \langle p_j|f \rangle_{L^2(\mu)} \\ &=& \sum_k \langle g|v_k \rangle_{L^2(\mu)} \langle p_k|f\rangle_{L^2(\mu)}.
\end{eqnarray*}
The equality of these two expressions verifies our definition of $R^*_w$, at least for the dense set of finite linear combinations of the orthogonal polynomials, hence holds for all $f$ in the domain of $R$.
In the next lemma, we produce a weighted norm on the space
$\mathcal{P}$ using weights $\{w_k\}$ which guarantee that $R$
will be closable in the weighted space $\mathcal{P}_w$.
\begin{lemma}\label{Lemma:RClosable}
Given the measure space $(X, \mu)$ as above having finite moments of all orders and given the operator $R$ defined in Equation (\ref{Eqn:R}), there exist weights $\{w_i\}_{i \in \mathbb{N}_0}$ such that $R: \mathcal{P}_w \rightarrow
\mathcal{P}$ is closable.
\end{lemma}
\begin{proof}
Observe that $R_w^*g\in \mathcal{P}_w$ precisely when
\begin{equation}\label{Eqn:RStar2}
\sum_k w_k | \langle p_k| R_w^*g \rangle_{L^2(\mu)}|^2 < \infty.
\end{equation}
Substituting in the definition of $R^*_w$ shows that Equation (\ref{Eqn:RStar2}) is true if and only if
\begin{equation}
\sum_k \frac{1}{w_k} \Big|\langle v_k|g\rangle_{L^2(\mu)}\Big|^2 < \infty.
\end{equation}
Since $\mu$ has finite $0^{\mathrm{th}}$ moment, i.e. is a finite measure, we have $L^{\infty}(\mu) \subseteq L^2(\mu)$. Set $\mathcal{M}_k:=\int_X |v_k| \,\mathrm{d}\mu$, and suppose $f\in
L^{\infty}(\mu)$. Then
\begin{equation}
\Big| \langle v_k|f \rangle_{L^2(\mu)}\Big| \leq \int_{\mathbb{R}} |v_k f |\mathrm{d}\mu \leq
\|f\|_{L^{\infty}(\mu)} \mathcal{M}_k.
\end{equation}
Therefore, if $\{\mathcal{M}_k^2/w_k\}\in \ell^1$, then
\begin{equation}
L^{\infty}(\mu)\subset \textrm{dom}(R_w^*).
\end{equation}
We thus explicitly define a choice of weights $w = \{w_k\}_{k \in \mathbb{N}_0}$ by
\begin{equation}\label{Eqn:RWeights}
w_k:=(1 + k^2)\mathcal{M}_k^2.
\end{equation}
Since $L^{\infty}(\mu)$ is dense in $L^2(\mu)$, we now know that
$\textrm{dom}(R_w^*)$ is dense in $\mathcal{P}$, where $w = \{w_k\}_{k \in \mathbb{N}_0}$ is
defined in (\ref{Eqn:RWeights}). Therefore $R:\mathcal{P}_w
\rightarrow \mathcal{P}$ is closable.
\end{proof}
The space $\mathcal{P}_w$ has the weighted orthogonal polynomials $\left\{\frac{p_k}{\sqrt{w_k}}\right\}_{k \in \mathbb{N}_0}$ as an orthonormal basis. We will denote these vectors $\{p_k^w\}_{k \in \mathbb{N}_0}$. We next observe the following property of the matrix $R_w^*R$ written in terms of the orthonormal basis $\{p^w_k\}_{k \in \mathbb{N}_0}$:
\begin{eqnarray}\label{Eqn:RStarR} (R_w^*R)_{i,j}&=& \langle p_i^w | R^*_wRp_j^w \rangle_{\mathcal{P}_w} \\ \nonumber&=& \langle Rp_i^w|Rp_j^w \rangle_{L^2(\mu)} \\ \nonumber&=& \frac{\langle v_i|v_j \rangle_{L^2(\mu)}}{\sqrt{w_iw_j}} \\\nonumber &=& \frac{1}{\sqrt{w_iw_j}}N^{(\mu)}_{i,j}.\end{eqnarray}
So $R^*_wR$ is a self-adjoint operator and is exactly a weighted version of the moment matrix.
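As an informal numerical check of this identity in the unweighted case ($w_k \equiv 1$), the sketch below takes $\mu$ to be Lebesgue measure on $[0,1]$, a choice made purely for illustration, so that the moment matrix is the Hilbert matrix. In this setting the matrix of $R$ in the basis $\{p_k\}$ is the transposed Cholesky factor of the moment matrix, and $R^*R$ reproduces $N^{(\mu)}$:

```python
import numpy as np

# Moment (Gram) matrix N_{ij} = <v_i|v_j> = ∫_0^1 x^{i+j} dx = 1/(i+j+1):
# the Hilbert matrix, for the illustrative choice mu = Lebesgue on [0,1].
n = 5
N = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# Gram-Schmidt on the monomials is encoded by the Cholesky factor N = L L^T:
# the orthonormal polynomials satisfy p_i = sum_j (L^{-1})_{ij} v_j, so the
# matrix of R (sending p_i to v_i) in the p-basis is R_{ij} = <p_i|v_j> = (L^T)_{ij}.
L = np.linalg.cholesky(N)
R = L.T

# Verify (R^T R)_{ij} = <v_i|v_j>, i.e. R^*R reproduces the moment matrix.
assert np.allclose(R.T @ R, N)
```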
Next, we recall our definition (Equation (\ref{Eqn:T})) for the
operator $T$ which maps $p_i$ to $v_i \circ \tau$ for each $i \in
\mathbb{N}_0$. By our hypothesis on $\tau$, we know $T$ is an
operator on $\mathcal{P}$. We wish $T$ to also be closable, so
that $T^*$ is densely defined. As we discovered for $R$, this may
require weights. Since the adjoint depends on the weights, we will denote it $T^*_w$.
A function $f$ is in the domain of $T_w^*$ with respect to weights
$w = \{w_k\}_{k \in \mathbb{N}_0}$ if $T_w^*f$ is in $\mathcal{P}_w$. The same
computations used for $R^*_w$, replacing each $v_k$ with $v_k
\circ \tau$, show
\begin{equation}\label{Eqn:DefTStar} T_w^*g = \sum_{k=0}^{\infty} \frac{1}{w_k} \langle v_k \circ \tau|g \rangle_{L^2(\mu)}\, p_k. \end{equation} We also find that $g \in \mathrm{dom}(T_w^*)$ if and only if \begin{equation}\label{Eqn:DomTStar} \sum_{k=0}^{\infty} \frac{1}{w_k} \Big|\langle v_k \circ \tau|g \rangle_{L^2(\mu)}\Big|^2 < \infty. \end{equation}
The condition on $\tau$ under which there are weights such that $T^*_w$ is densely defined on $\mathcal{P}_w$ is given below.
\begin{lemma}\label{Lem:TClosable} Let $\mathcal{M}'_k = \int_X |v_k \circ \tau |d\mu = \int |v_k| d(\mu \circ \tau^{-1})$.
If $\mathcal{M}'_k < \infty$ for all $k \in \mathbb{N}_0$, we
define the weights $w=\{w_k\}_{k \in \mathbb{N}_0}$ by \[ w_k = (\mathcal{M}'_k)^2
(1+k^2),
\] for which $T^*_w$ is densely defined from $\mathcal{P}$ to
$\mathcal{P}_w$.
\end{lemma}
\begin{proof}
Repeating the computation in Lemma \ref{Lemma:RClosable}, we find
that every $f \in L^{\infty}(\mu)$ is in the domain of $T_w^*$ provided
\[ \sum_k \frac{1}{w_k} (\mathcal{M}'_k)^2 \|f\|^2_{L^{\infty}(\mu)} < \infty, \] which holds for the given weights.
\end{proof}
This then gives us an expression for the matrix of the self-adjoint operator $T_w^*T$ with respect to our orthonormal basis of polynomials:
\begin{eqnarray}\label{Eqn:TStarT} (T_w^*T)_{i,j} &=& \langle p^w_i|T^*_wT p^w_j \rangle_{\mathcal{P}_w} \nonumber\\ &=& \langle Tp^w_i | Tp^w_j \rangle_{L^2(\mu)} \nonumber\\&=& \frac{\langle v_i \circ \tau|v_j \circ \tau \rangle_{L^2(\mu)}}{\sqrt{w_iw_j}} \nonumber \\ &=& \frac{ 1}{\sqrt{w_iw_j}} N^{\mu \circ \tau^{-1}}_{i,j}. \end{eqnarray}
We next define a matrix
$A$ that will give coefficients of the functions $v_k \circ \tau$
expanded in terms of the monomials $\{v_j\}_{j \in \mathbb{N}_0}$:
\begin{equation}\label{Eqn:DefA}
v_k \circ \tau = \sum_{j=0}^{\infty} A_{jk}v_j.
\end{equation}
The entries of $A$ exist since we assumed that each $v_k \circ
\tau$ is an element of $\mathcal{P}$ and therefore has an $L^2$-convergent
expansion in the monomials. We may or may not, however, be able to compute
entries of the matrix $A$ directly from this definition.
\begin{example}\label{Ex:Polynomial} A nonaffine map: $\tau(x) = x^2+b$ \end{example} This is perhaps the simplest example of a nonaffine transformation. The matrix $A$ which encodes $\tau(x)=x^2+b$ can be computed from the powers of $\tau$ to satisfy Equation (\ref{Eqn:DefA}):
\[A = \left[ \begin{matrix} 1&b&b^2 &b^3& \cdots\\ 0&0& 0 &0& \cdots\\
0&1&2b&3b^2 &\cdots\\ 0&0&0 &0& \cdots \\ 0&0&1&3b&\cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{matrix} \right]. \]
\hfill $\Diamond$
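The pattern of $A$ follows directly from the binomial expansion $(x^2+b)^k = \sum_{m=0}^{k} \binom{k}{m} b^{k-m} x^{2m}$, so $A_{2m,k} = \binom{k}{m}b^{k-m}$ and the odd rows vanish. The sketch below, with the arbitrary illustrative value $b=3$, reproduces the rows displayed above:

```python
from math import comb

# Columns of A: coefficients of (x^2 + b)^k expanded in monomials x^j.
# b = 3 is an arbitrary illustrative value.
b = 3
n = 4  # compute the first n columns, rows 0..2(n-1)
A = [[0] * n for _ in range(2 * n - 1)]
for k in range(n):
    # (x^2 + b)^k = sum_m C(k, m) b^(k-m) x^(2m): only even rows are nonzero.
    for m in range(k + 1):
        A[2 * m][k] = comb(k, m) * b ** (k - m)

# Spot-check against the displayed matrix: row 0 is 1, b, b^2, b^3;
# row 2 is 0, 1, 2b, 3b^2; row 4 is 0, 0, 1, 3b; odd rows are zero.
assert A[0] == [1, b, b**2, b**3]
assert A[2] == [0, 1, 2 * b, 3 * b**2]
assert A[4] == [0, 0, 1, 3 * b]
assert A[1] == [0, 0, 0, 0] and A[3] == [0, 0, 0, 0]
```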
We wish to think of $A$ as the matrix representation for an operator on the weighted space $\mathcal{P}_w$. With an abuse of notation, we will also refer to this operator as $A$. In general, it is not even certain that the operator $A$ has dense domain. Throughout the remainder of this section, however, we will restrict our attention to the cases in which the domain of $A$ contains the monomials, and hence the orthogonal polynomials.
\begin{proposition}\label{Prop:TRA} Given $A$ as in Equation (\ref{Eqn:DefA}) and such that $A$ is the matrix representation of an operator (also denoted $A$) on $L^2(\mu)$ with respect to the ONB $\{p_k\}_{k\in \mathbb{N}_0}$, if the operator composition $RA$ is densely defined, then \[RA=T\] on the domain of $RA$, i.e. $RA$ is a restriction of $T$.
\end{proposition}
\begin{proof} Suppose $f \in \mathrm{dom}(R)$. Then by Parseval we have
\begin{eqnarray*} Rf &=& \sum_{j=0}^{\infty} \langle p_j|f
\rangle_{L^2(\mu)} Rp_j\\ &=& \sum_{j=0}^{\infty} \langle p_j|f \rangle_{L^2(\mu)}
v_j, \end{eqnarray*} with convergence in the $L^2$ sense.
We have assumed that, as an operator, $A$ has the monomial functions $\{v_k\}_{k \in \mathbb{N}_0}$ and the
polynomials $\{p_k\}_{k \in \mathbb{N}_0}$ in its domain. We have also assumed that the product $RA$ is well-defined on a dense subset of $L^2(\mu)$. We can then compute
\begin{eqnarray*} Tp_k &=& v_k \circ \tau \\ &=& \sum_{j=0}^{\infty} A_{j,k}v_j \\&=&
\sum_{j=0}^{\infty} \langle p_j|Ap_k \rangle_{L^2(\mu)} v_j \\&=&
RAp_k.
\end{eqnarray*}
The last line above holds for every $Ap_k$ that is in the domain
of $R$.
\end{proof}
\begin{theorem}\label{Thm:AMatrix} Given $(X,\mu)$ a Borel measure space with
$X \subset \mathbb{R}$ and given $\tau$ a measurable map from $X$
to itself, let $N^{(\mu)}$ be the adjusted moment matrix for $\mu$
and $N^{(\mu \circ \tau^{-1})}$ the corresponding moment matrix
for $\mu \circ \tau^{-1}$. Then, if the matrix $A$ from Equation (\ref{Eqn:DefA}) satisfies the hypotheses in Proposition \ref{Prop:TRA}, \begin{equation}\label{Eqn:Nmu} A^* N^{(\mu)}A =
N^{(\mu \circ \tau^{-1})} .\end{equation}
\end{theorem}
\begin{proof} We have $RA =T$ on the domain of $RA$. Find weights $w$ such that both $T^*_w$ and $R^*_w$ are densely defined on $L^2(\mu)$. Let $N^{(\mu)}_w$ be the weighted moment matrix with entries $\frac{1}{\sqrt{w_iw_j}}\langle v_i|v_j \rangle_{L^2(\mu)}$. Using Equations (\ref{Eqn:RStarR}) and (\ref{Eqn:TStarT}), we find that
\begin{eqnarray*} N_w^{(\mu \circ \tau^{-1})} &=& T^*_wT \\&=& (RA)^*(RA) \\ &=& A^*R^*_wRA \\ &=& A^*N_w^{(\mu)}A.
\end{eqnarray*}
\end{proof}
\section{Approximation of $A$ with finite matrices}\label{Subsec:Finite}
In this section, we explore whether we can perform computations with
finite matrices which yield a finite approximation to the infinite
matrix $A$ defined in Equation (\ref{Eqn:DefA}), and thereby achieve an
approximation of the moments for $\mu \circ \tau^{-1}$ from the
moments for $\mu$.
Let $\mu$ be a Borel measure on a set $X \subset \mathbb{R}$ such
that the moments $M^{(\mu)}_{i,j} = \int_X x^{i+j} d\mu(x)$ are
finite for all orders. Let $\tau:X \rightarrow X$ be a measurable
endomorphism such that the moments with respect to $\mu \circ
\tau^{-1}$ are also finite for all orders, and the powers of
$\tau$ are in the closed span $\mathcal{P}$ of the monomials in
$L^2(\mu)$. Let $T$ be the infinite matrix introduced in
Section \ref{Sec:GeneralA} with entries $T_{ij} = \langle p_i|v_j
\circ \tau \rangle_{L^2(\mu)}$, where $\{p_i\}_{i\in \mathbb{N}_0}$ are the
orthonormal polynomials in $L^2(\mu)$ given by performing the
Gram-Schmidt method on the monomials $\{v_j\}_{j \in \mathbb{N}_0}$. Let $R$ be the
transformation taking $p_i$ to $v_i$, so the matrix entries are
$R_{i,j} = \langle p_i|v_j \rangle_{L^2(\mu)}$.
Fix $n$. Let $R_n$ and $T_n$
be (as in Section \ref{Subsec:FixedPointsHut}) the $(n+1) \times
(n+1)$ truncations of the $R$ and $T$ matrices, respectively. Let
$\mathcal{H}_n$ be the closed linear span of the monomials
$\{v_0,v_1, \ldots, v_n\}$ (which is also the closed linear span
of $\{p_i\}_{i=0}^n$) and let $P_n$ be the orthogonal projection
onto $\mathcal{H}_n$ from $\mathcal{P}$. Note that in the Dirac
notation mentioned in Chapter \ref{Sec:Notation},
\begin{equation}\label{Eqn:Pn} P_n = \sum_{i=0}^n |p_i\rangle
\langle p_i | .\end{equation}
We have proved in Proposition \ref{Prop:TRA} that the matrix $A =
R^{-1}T$, in the cases where this
product of infinite matrices is well defined, gives the coefficients of $v_j \circ \tau$ in terms of the monomials $\{v_i\}_{i \in \mathbb{N}_0}$. We now wish to show
that the finite matrix product $R^{-1}_nT_n$ provides an
approximation of $A$, in the sense that it yields coefficients for
the projection of the powers of $\tau$ onto $\mathcal{H}_n$,
expanded in terms of the monomials.
\begin{lemma}\label{Lemma:finite} For fixed $n$, \begin{equation} \sum_{j=0}^n
(R^{-1}_nT_n)_{j,k}v_j = P_n(v_k \circ \tau). \end{equation}
Consequently,
\begin{equation} \lim_{n \rightarrow \infty} \sum_{j=0}^n
(R_n^{-1}T_n)_{j,k}v_j = v_k \circ \tau, \end{equation} where
convergence is in $L^2(\mu)$.
\end{lemma}
\begin{proof} Given our fixed $n$, let $k \leq n$. In the
computation below, recall that for $j,k \leq n$, we have
$(R^{-1}_n)_{j,k} = R^{-1}_{j,k}$ and $(T_n)_{j,k} = T_{j,k}$.
Also, since the orthogonal polynomials $p_0, \ldots, p_j$ are in
the span of the monomials $v_0, \ldots v_j$, we know that the
truncated matrix $R^{-1}_n$ maps $v_j$ to $p_j$ for each $j \leq
n$.
\begin{eqnarray*} \sum_{j=0}^n
(R_n^{-1}T_n)_{j,k}v_j &=& \sum_{j=0}^n \sum_{\ell=0}^n
(R_n)^{-1}_{j,\ell}(T_n)_{\ell,k}v_j \\ &=& \sum_{\ell=0}^n
\langle p_{\ell}|v_k \circ \tau \rangle_{L^2(\mu)} \sum_{j=0}^n
(R^{-1}_n)_{j,\ell} v_j \quad \text{where both \: }\ell,j \leq n \\
&=& \sum_{\ell=0}^n \langle p_{\ell}|v_k \circ \tau
\rangle_{L^2(\mu)}p_{\ell}
\\ &=& P_n(v_k \circ \tau) \quad \text{by Equation (\ref{Eqn:Pn})}.
\end{eqnarray*}
\end{proof}
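The lemma can be checked numerically. The sketch below uses the illustrative choices $\mu =$ Lebesgue measure on $[0,1]$ and $\tau(x) = x^2$; since $v_k \circ \tau = x^{2k}$ is itself a polynomial, the $k$-th column of $R_n^{-1}T_n$ must be exactly the standard basis vector $e_{2k}$ whenever $2k$ does not exceed the truncation degree:

```python
import numpy as np

# Illustrative choices: mu = Lebesgue measure on [0,1], tau(x) = x^2.
# G_{ij} = ∫ x^{i+j} dx is the monomial Gram matrix; H_{ij} = <v_i | v_j∘tau>.
n = 5  # work with the (n x n) truncations, i.e. degrees 0..4
G = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
H = np.array([[1.0 / (i + 2 * j + 1) for j in range(n)] for i in range(n)])

# Cholesky G = L L^T encodes Gram-Schmidt: R_n = L^T, and T_n = L^{-1} H
# (since T_{ij} = <p_i | v_j∘tau> and the p's have coefficient matrix L^{-1}).
Lc = np.linalg.cholesky(G)
R_n = Lc.T
T_n = np.linalg.solve(Lc, H)

A_n = np.linalg.solve(R_n, T_n)  # R_n^{-1} T_n

# v_k∘tau = x^{2k} is already a polynomial of degree 2k, so the k-th
# column is the standard basis vector e_{2k} whenever 2k <= n-1.
assert np.allclose(A_n[:, 0], [1, 0, 0, 0, 0], atol=1e-6)
assert np.allclose(A_n[:, 1], [0, 0, 1, 0, 0], atol=1e-6)
assert np.allclose(A_n[:, 2], [0, 0, 0, 0, 1], atol=1e-6)
```

Columns with $2k$ above the truncation degree instead hold the coefficients of the projection $P_n(x^{2k})$, as the lemma asserts.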
\begin{remark} It is important to realize here that even for $j,k \leq
n$,
\begin{equation} (R^{-1}_nT_n)_{j,k} \neq (R^{-1}T)_{j,k}.
\end{equation}
We do, however, know that because the product $R^{-1}T$ is well
defined, if we fix $j,k$ and let $n \rightarrow \infty$, we do
have $(R^{-1}_nT_n)_{j,k} \rightarrow (R^{-1}T)_{j,k}$.
\end{remark}
If we were able to conclude from the truncation result in Lemma
\ref{Lemma:finite} that $ \lim_{n \rightarrow \infty}
\sum_{j=0}^n (R^{-1}T)_{j,k}v_j = v_k \circ \tau$, where we take
the limit in $L^2(\mu)$, we would have an alternate proof of
Proposition \ref{Prop:TRA}. In other words, we would be able to
write
\begin{equation} \sum_{j=0}^{\infty} (R^{-1}T)_{j,k}v_j =
v_k \circ \tau. \end{equation} This convergence, however, would
require hypotheses about how the sequences $(R^{-1}_nT_n)_{j,k}$
converge to $(R^{-1}T)_{j,k}$ as $n \rightarrow \infty$. To
illustrate this point, let us fix $k$ and let $a_j^{(n)} =
(R^{-1}_n T_n)_{j,k}$ and $a_j = (R^{-1}T)_{j,k}$, so for each $j$, $a_j^{(n)} \rightarrow a_j$ as $n \rightarrow
\infty$.
\begin{eqnarray*} \Bigl\|\sum_{j=0}^n a_jv_j - v_k \circ \tau\Bigr\|_{L^2(\mu)} &=& \Bigl\| \sum_{j=0}^n a_jv_j - \sum_{j=0}^n a^{(n)}_jv_j
+ \sum_{j=0}^n a^{(n)}_jv_j - v_k \circ \tau\Bigr\|_{L^2(\mu)} \nonumber \\
&\leq& \Bigl\|\sum_{j=0}^n (a_j-a_j^{(n)})v_j\Bigr\|_{L^2(\mu)} +
\Bigl\|\sum_{j=0}^n a^{(n)}_jv_j - v_k \circ \tau\Bigr\|_{L^2(\mu)}.
\end{eqnarray*}
Using Lemma \ref{Lemma:finite}, the second term of the last line
above can certainly be made arbitrarily small for large enough
$n$. The first term, however, may not have that property.
For each $k \in \mathbb{N}_0$, the truncated matrix products
$R^{-1}_nT_n$ produce asymptotic expansions for $v_k \circ \tau$
by giving expansions of $P_n(v_k \circ \tau)$ in terms of monomials
$v_j$, even if there is no \textit{a priori} known expansion of
$v_k \circ \tau$ in the monomials. We assume that such an
expansion exists, that is,
$$v_k \circ \tau = \sum_{j=0}^{\infty} a_j v_j. $$ We may, however, have no
way of computing the actual coefficients. Lemma
\ref{Lemma:finite} gives approximations to these coefficients
which get better as $n$ increases.
The next question is how these finite approximations to the matrix
$A$ interact with the moment matrix transformation given in Equation (\ref{Eqn:Nmu}) from
Theorem \ref{Thm:AMatrix}. In the case described in Subsection
\ref{Sec:SingleVar}, where $\tau$ is an affine map on $\mathbb{R}$
or $\mathbb{C}$, the matrix $A$ is upper triangular. With this
added structure, it is readily demonstrated that the truncated
matrix product yields exactly the truncation of the infinite
matrix product, i.e.
$$A^*_nM^{(\mu)}_nA_n = [A^*M^{(\mu)}A]_n = M^{(\mu \circ \tau^{-1})}_n.$$ We also note that in this special
case, we have Equation (\ref{Eqn:UpTriA}) from \cite{EST06} giving
a concrete expression of the entries of $A$. We then can compute
the exact moments with respect to $\mu \circ \tau^{-1}$ using
finite matrix computations.
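For a concrete instance of this exactness, the sketch below takes $\mu$ to be Lebesgue measure on $[0,1]$ and the affine map $\tau(x) = x/2$ (illustrative choices), for which $A$ is diagonal, hence upper triangular, with entries $2^{-k}$:

```python
import numpy as np

# Affine illustration: mu = Lebesgue on [0,1], tau(x) = x/2.
# Then v_k∘tau = 2^{-k} x^k, so A is diagonal (hence upper triangular).
n = 4
M = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # M^(mu)
A_n = np.diag([2.0 ** (-k) for k in range(n)])

# Moments of mu∘tau^{-1}: ∫ tau(x)^{i+j} dmu = 2^{-(i+j)} / (i+j+1).
M_push = np.array([[2.0 ** (-(i + j)) / (i + j + 1) for j in range(n)]
                   for i in range(n)])

# For upper-triangular A the truncated triple product is exact.
assert np.allclose(A_n.T @ M @ A_n, M_push)
```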
In the more general case, however, we may not have a construction
giving us the entries in $A$, so we may need to use the finite
matrix product $\widetilde{A}_n = R_n^{-1}T_n$. In this case, we
find the triple product gives entries which are the inner products
of projections of powers of the measurable map $\tau$.
Given the map $\tau$, the entries in the adjusted moment matrix for $\mu
\circ \tau^{-1}$ are given by the inner product $$ (N^{(\mu \circ
\tau^{-1})})_{i,j} = \langle v_i \circ \tau | v_j \circ \tau
\rangle_{L^2(\mu)}. $$
\begin{corollary} Let $\widetilde{A}_n = R_n^{-1}T_n$. Then
\begin{equation} (\widetilde{A}_n^*N^{(\mu)}\widetilde{A}_n)_{i,j} =
\langle P_n(v_i \circ \tau) | P_n (v_j \circ \tau) \rangle_{L^2(\mu)} = \langle
v_i \circ \tau | P_n (v_j \circ \tau) \rangle_{L^2(\mu)}.
\end{equation}
\end{corollary}
\begin{proof} This is a consequence of Lemma \ref{Lemma:finite}.
\begin{eqnarray*} (\widetilde{A}_n^*N^{(\mu)}\widetilde{A}_n)_{i,j}
&=& \sum_{k=0}^n \sum_{l=0}^n
(\widetilde{A}_n^*)_{i,k}(N^{(\mu)})_{k,l}(\widetilde{A}_n)_{l,j} \\
&=& \sum_{k=0}^n
\sum_{l=0}^n \overline{(\widetilde{A}_n)_{k,i}} \langle v_k|v_l
\rangle_{L^2(\mu)} (\widetilde{A}_n)_{l,j}\\ &=& \left\langle
\sum_{k=0}^n (\widetilde{A}_n)_{k,i}v_k \Bigr| \sum_{l=0}^n
(\widetilde{A}_n)_{l,j}v_l \right\rangle_{L^2(\mu)} \\&=& \langle P_n(v_i
\circ \tau) | P_n(v_j \circ \tau) \rangle_{L^2(\mu)}\\ &=& \langle v_i \circ
\tau | P_n (v_j \circ \tau) \rangle_{L^2(\mu)}.
\end{eqnarray*}
In the last line, we use the property of projections that $P_n =
P_n^* = P_n^2$.
\end{proof}
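A numerical illustration of the corollary, again with the hypothetical choices $\mu =$ Lebesgue measure on $[0,1]$ and $\tau(x)=x^2$: in the monomial picture $\widetilde{A}_n$ reduces to $G^{-1}H$, where $G$ is the moment matrix and $H_{l,j} = \langle v_l | v_j \circ \tau \rangle_{L^2(\mu)}$, so the triple product becomes $H^T G^{-1} H$. Entries whose projections are exact agree with the true pushforward moments, while entries involving a nontrivial projection are strictly smaller:

```python
import numpy as np

# Illustrative choices: mu = Lebesgue on [0,1], tau(x) = x^2.
# G is the monomial Gram/moment matrix; H_{l,j} = <v_l | v_j∘tau> = ∫ x^{l+2j} dx.
n = 5
G = np.array([[1.0 / (l + m + 1) for m in range(n)] for l in range(n)])
H = np.array([[1.0 / (l + 2 * j + 1) for j in range(n)] for l in range(n)])

# A_tilde = R_n^{-1} T_n reduces to G^{-1} H in the monomial picture, so the
# triple product A_tilde^* N A_tilde becomes H^T G^{-1} H.
triple = H.T @ np.linalg.solve(G, H)

# When 2i, 2j <= n-1 the projections are exact, and the entry agrees with the
# true pushforward moment ∫ x^{2i+2j} dx = 1/(2i+2j+1):
assert abs(triple[1, 1] - 1.0 / 5.0) < 1e-6
# When the projection is nontrivial the entry is strictly smaller in norm:
assert triple[4, 4] < 1.0 / 17.0
```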
\chapter*{Acknowledgements}
The authors thank Professor Christopher French for several helpful
conversations and the proof of Proposition \ref{Prop:French}. In
addition, one or more of the authors had helpful conversations
with Professors Ken Atkinson, Dorin Dutkay, Erin Pearse, and
Myung-Sin Song.
The third author is pleased to thank the orthogonal polynomial
workshop students and graduate students at the University of Iowa
2007 REU for their energy and enthusiasm. She especially thanks
Bobby Elam, Greg Ongie, and Mark Tucker, whose questions led the
authors to the paper \cite{EST06}.
The second author is grateful to the Mathematics Department at the
University of Oklahoma - Norman for their hospitality during her research leave from Grinnell College.
\chapter{The moment problem}\label{Sec:MomentTheory}
In this chapter we introduce the \textit{moment
problem}: we describe the various positivity conditions
that an infinite matrix $M$ must satisfy in order for there to exist
a Borel measure $\mu$ on
an ambient space $X$ such that $M$ is the
moment matrix of $\mu$, i.e. $M = M^{(\mu)}$. These conditions differ in the
cases where $X$ is a subset of $\mathbb{R}$, $\mathbb{C}$, or
$\mathbb{R}^d$ for $d>1$. We also examine whether or not the measures found are unique. The existence of the measure $\mu$ is discussed in Section \ref{Subsec:exist}, and a procedure of Parthasarathy based on a Kolmogorov construction is used in Section \ref{Subsec:Parth} to give a more concrete construction of $\mu$. The connections of the moment problem to other areas of interest to mathematicians, physicists, and engineers are mentioned in Section \ref{Subsec:history}.
\section{The moment problem $M = M^{(\mu)}$}\label{Subsec:exist}
Given an infinite matrix $M$, the moment problem addresses whether
there exist measures $\mu$ such that $M = M^{(\mu)}$. There are known
conditions on $M$ which ensure that such a measure $\mu$ exists. Our
presentation will be brief and we will omit the proofs of the results
stated here, as they are available in the literature, albeit scattered
within a variety of journal articles. Fuglede's paper \cite{Fug83}
offers a very readable survey of the literature on moments in several
variables, up to 1983. The two papers \cite{BeDu06, BeDu07} emphasize
the complex case and include new results.
Our intention in this section is primarily to give the definitions of
the positive semidefinite properties on the infinite matrices in the
real and complex cases. We will only treat the existence part of the moment
problem, but there are a variety of interesting uniqueness results in
the literature as well. Here we shall only need the simplest version
of uniqueness: these are the cases when the measures $\mu$ are known
\textit{a priori} to be compactly supported. In this case, uniqueness
follows as a result of an application of the Stone-Weierstrass
Theorem. An example of nonuniqueness is seen in Section
\ref{Sec:Nonunique}.
We begin with the real case $\mathbb{R}^d$, for $d \geq 1$. Let
$\mathcal{D}$ be the space of all infinite sequences $c$ which are
finite linear combinations of elements of the standard orthonormal
basis $\{e_{\alpha}\}_{\alpha \in \mathbb{N}_0^d}$. Clearly
$\mathcal{D}$ is a dense subspace of the Hilbert space
$\ell^2(\mathbb{N}_0^d)$, and matrix-vector products are well defined
on vectors in $\mathcal{D}$.
One of the ways to test whether a given sequence, or a given infinite
Hankel matrix, is composed of the moments of a measure $\mu$ is to
test for a positive semidefinite condition. While there are several
such conditions in the literature, we isolate condition (\ref{Eqn:PD})
as it summarizes a variety of features that will be essential for our
point of view. Note that (\ref{Eqn:PD}) entails a separate
verification for every finite system of numbers, hence it amounts to
checking that all the finite truncated square matrices have nonnegative
spectrum. This in turn can be done by checking that the principal minors of finite
submatrices are nonnegative.
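The truncation test can be carried out numerically. As an illustration, the sketch below checks positive semidefiniteness of the Hankel matrix built from the moments of the standard Gaussian measure (odd moments zero, $m_{2k} = (2k-1)!!$), a sequence chosen here only because its moments are known in closed form:

```python
import numpy as np

# Moments of the standard Gaussian: m_k = 0 for odd k, (k-1)!! for even k.
# This is a known-good test sequence for the d = 1 case.
m = [1, 0, 1, 0, 3, 0, 15, 0, 105]
n = 5
M = np.array([[float(m[i + j]) for j in range(n)] for i in range(n)])  # Hankel

# Positive semidefiniteness of every finite truncation can be checked via
# eigenvalues (equivalently, nonnegative principal minors).
for size in range(1, n + 1):
    eigs = np.linalg.eigvalsh(M[:size, :size])
    assert eigs.min() > -1e-10
```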
\begin{definition}\label{Defn:PD}
A matrix $M$ with real entries indexed by $\mathbb{N}_0^d \times
\mathbb{N}_0^d$ is said to be \textit{positive semidefinite} if
\begin{equation}\label{Eqn:PD} \langle c | Mc
\rangle_{\ell^2} = \sum_{\alpha \in \mathbb{N}_0^d} \sum_{\beta \in
\mathbb{N}_0^d} \overline{c}_{\alpha}
M_{\alpha,\beta}c_{\beta} \geq 0\end{equation}
for all $c\in\mathcal{D}$.
\end{definition}
We say that a given infinite matrix $M$ is an $(\mathbb{N}_0^d,
\mathbb{R}^d)$-moment matrix if there exists a positive Borel
measure $\mu$ on $\mathbb{R}^d$ such that
\begin{eqnarray} M_{\alpha,\beta} &=& M^{(\mu)}_{\alpha,\beta} = \int_{\mathbb{R}^d} x^{\alpha + \beta} \mathrm{d}\,\mu(x) \label{Eqn:MMuAlphaBeta}
\end{eqnarray}
for all $\alpha, \beta \in \mathbb{N}_0^d$. Note that, as we observed in Section \ref{Subsec:moments}, if $M$ is a moment matrix, it must have the Hankel property.
The solution to the existence part of the moment problem in the
$\mathbb{R}^d$ case is given in a theorem due to M. Riesz
\cite{Rie23} for $d=1$ and Haviland \cite{Hav35, Hav36} for
general $d$.
\begin{theorem}[Riesz, Haviland]\label{thm:real}
For any positive semidefinite real Hankel matrix $M = (M_{\alpha,\beta})$
indexed by $\mathbb{N}_0^d \times \mathbb{N}_0^d$, there is a
positive Borel measure $\mu$ on $\mathbb{R}^d$ having moment
matrix $M$.
\end{theorem}
If $M$ is an infinite matrix with complex entries, we say $M$ is a complex moment matrix if there exists a positive Borel measure $\mu$ on $\mathbb{C}$ such that
\begin{eqnarray} M_{i,j} = M^{(\mu)}_{i,j}
= \int_{\mathbb{C}} \overline{z}^i z^j \mathrm{d}\,\mu(z)
\end{eqnarray}
for all $i,j \in \mathbb{N}_0$. We define a property on the complex matrix $M$ which is stronger than the positive semidefinite property.
\begin{definition}[\cite{BeDu06, BeDu07}] \label{Defn:PDC}
We say a complex infinite matrix $M$ indexed by $\mathbb{N}_0 \times \mathbb{N}_0$ has \textit{Property PD$\mathbb{C}$} if given any doubly-indexed sequence $c = \{c_{i,j}\}$ having only finitely many nonzero entries, we have
\begin{equation}\label{Eqn:PDC}
\sum_{i,j,k,\ell \in \mathbb{N}_0}
\overline{c}_{i,j} M_{i+\ell, j+k} c_{k,\ell} \geq 0.
\end{equation}
\end{definition}
Note that if we consider the
special case where $j,\ell=0$ in Definition \ref{Defn:PDC}, we get
exactly the positive semidefinite condition on $M$ given in Definition \ref{Defn:PD}.
\begin{theorem}[\cite{Fug83, BeDu06, BeDu07}]
\label{thm:complex} If $M$ is an infinite complex matrix indexed by
$\mathbb{N}_0 \times \mathbb{N}_0$ which satisfies Property PD$\mathbb{C}$, then there exists a positive Borel measure $\mu$ on $\mathbb{C}$
such that $M = M^{(\mu)}$.
\end{theorem}
\begin{remark}\label{Rem:RversusC}
The condition in Theorem \ref{thm:real} that the matrix entries in $M$ are real numbers cannot be dropped. To
see this, let $d=1$ and let $\xi$ be a fixed nonreal number. Let
$M = ({\overline{\xi}}^j\xi^{k})_{j,k \in \mathbb{N}_0}$. We see
that
\begin{eqnarray*} \langle c | Mc \rangle_{\ell^2}
&=& \sum_j \sum_k \overline{c}_j M_{j,k} c_k \\ &=& \left( \sum_j \overline{c}_j \overline{\xi}^j \right) \left(\sum_k c_k \xi^k \right) \\
&=& \left| \sum_j c_j \xi^j\right|^2 \\
&\geq & 0 \end{eqnarray*} for all $c \in \mathcal{D}$. The
infinite matrix $M$ is therefore positive semidefinite. Without
the condition that the components of $M$ be real, then, we would
conclude that there is a measure on $\mathbb{R}$ with $M$ as its
moment matrix. We can also verify, however, that $M$ has
Property PD$\mathbb{C}$ from Definition \ref{Defn:PDC}. Let $c =
\{c_{i,j}\}$ be a doubly indexed sequence in $\mathcal{D}$ (i.e.
$c$ has only finitely many nonzero entries). Then
\begin{eqnarray*} \sum_{i,j,k,\ell \in \mathbb{N}_0} \overline{c}_{i,j}M_{i+\ell,j+k} c_{k,\ell} &=&
\sum_{i,j,k,\ell \in \mathbb{N}_0}
\overline{c}_{i,j}\overline{\xi}^{i+\ell}\xi^{j+k} c_{k,\ell} \\
&=& \left(\overline{\sum_{i,j} c_{i,j}\overline{\xi}^j \xi^i
}\right) \left(\sum_{k,\ell} c_{k,\ell}\overline{\xi}^{\ell} \xi^k
\right) \\ &\geq& 0. \end{eqnarray*}
This leads to the conclusion that there is a measure on
$\mathbb{C}$ having moment matrix $M$. In fact, given $\xi \in
\mathbb{C}$, we note that the Dirac measure $\delta_{\xi}$ at
$\xi$ has the matrix $M$ as its moment matrix, since the entries
of $M$ are the evaluation of the monomials $\overline{z}^jz^k$ at
$\xi$:
\begin{equation*} M_{j,k} = \int_{\mathbb{C}} \overline{z}^jz^k \mathrm{d}\delta_{\xi}(z) = \overline{\xi}^j
\xi^k.
\end{equation*}
It is shown in \cite{Fug83} that measures with compact support are
uniquely determined by their moment matrices. The Dirac measure,
therefore, is the unique Borel measure on $\mathbb{C}$ having
moment matrix $M$. If we have $\xi \in \mathbb{C}\setminus \mathbb{R}$, then even though $M$ satisfies the positive semidefinite condition from Definition \ref{Defn:PD}, there cannot
be a Borel measure $\mu$ on $\mathbb{R}$ such that $M =
M^{(\mu)}$.
In some sense, what we have shown is that there is a measure, but
its support is not restricted to $\mathbb{R}$. One of the
classical moment questions is whether one can determine the
support of a measure $\mu$ from the geometric properties of its
moment matrix $M^{(\mu)}$.
\end{remark}
The simplest moment situation arises when the measure $\mu = \delta_{\xi}$ is a Dirac mass. We will revisit this example frequently in the remainder of the Memoir. Observe that the infinite moment matrix for $\delta_{\xi}$ has rank one. We will later show that if $\mu$ is a finite convex combination of Dirac masses, then the associated moment matrix will be of finite rank.
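The rank-one claim for a Dirac mass is easy to verify numerically. The sketch below builds $M_{j,k} = \overline{\xi}^j \xi^k$ for the arbitrary nonreal point $\xi = 1+2i$ and confirms that $M$ is a rank-one positive semidefinite matrix:

```python
import numpy as np

# Moment matrix of the Dirac mass at xi in C: M_{jk} = conj(xi)^j xi^k.
# xi = 1 + 2i is an arbitrary nonreal illustrative point.
xi = 1 + 2j
n = 5
M = np.array([[np.conj(xi) ** a * xi ** b for b in range(n)] for a in range(n)])

# M is rank one: it is the outer product of the vector (1, xi, xi^2, ...)
# with itself, and <c|Mc> = |sum_k c_k xi^k|^2 >= 0.
v = np.array([xi ** k for k in range(n)])
assert np.allclose(M, np.outer(np.conj(v), v))
assert np.linalg.matrix_rank(M) == 1

rng = np.random.default_rng(0)
c = rng.normal(size=n) + 1j * rng.normal(size=n)
assert np.vdot(c, M @ c).real >= -1e-9  # positive semidefinite (up to rounding)
```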
\section{A Parthasarathy-Kolmogorov approach to the moment problem}\label{Subsec:Parth}
The traditional existence proofs of Theorems \ref{thm:real} and
\ref{thm:complex} are not constructive. In this section, we carefully
explain how the Parthasarathy-Schmidt theorem, Theorem
\ref{Thm:ParKol} below, can be applied to the moment problems from
Theorems \ref{thm:real} and \ref{thm:complex}. Theorem
\ref{Thm:OmegaHan} and Corollary \ref{Cor:Complex} are restatements of
the moment problem in the real and complex cases, respectively. We
use Theorem \ref{Thm:ParKol} to produce the measures which appear in
Theorem \ref{Thm:OmegaHan} and Corollary \ref{Cor:Complex}. In
Corollary \ref{Cor:Complex}, the condition PD$\mathbb{C}$ (Definition
\ref{Defn:PDC}) appears in a more natural fashion. In addition, the
proof of Theorem \ref{Thm:OmegaHan} clearly shows why the Hankel
assumption for the infinite positive definite matrix $M$ is essential
in the real case.
The use of the Kolmogorov ideas is motivated by our applications
to iterated function systems (IFSs) which begin in Chapter
\ref{Sec:Exist}. We will be interested in moments of IFS
measures. A key tool in the analysis of these IFS measures will
be infinite product spaces, precisely such as those that arise in
the Parthasarathy-Schmidt construction.
To simplify the main idea, we carry out the details only in the
real case, and only for $d = 1$. The reader will be able to
generalize to the remaining real cases in $\mathbb{R}^d$, $d > 1$.
We conclude with the application to the complex moment problem.
\begin{definition}
Let $S$ be a set, and let $M:S\times S \rightarrow \mathbb{C}$. We
say that $M$ is a \textit{positive semidefinite function} if
\begin{equation*}
\sum_{s \in S} \sum_{t \in S} \overline{c_s} M(s, t) c_t \geq 0
\end{equation*}
for all sequences $\{c_s\}_{s\in S}\in\mathcal{D}$. (Recall this
means the sequences have only finitely many nonzero coordinates.)
In the real moment problem, the matrix $M$ is real, and we
restrict to sequences $\{c_s\}_{s\in S}$ with entries in
$\mathbb{R}$.
\end{definition}
\begin{theorem}\label{Thm:ParKol}\rm(\cite[Theorem 1.2]{PaSc72})
\it Suppose $S$ is a set, and suppose $M: S \times S \rightarrow
\mathbb{C}$ is a positive semidefinite function. Then there is a
Hilbert space $\mathcal{H}$ and a function $X:S \rightarrow
\mathcal{H}$ such that $\mathcal{H} =
\overline{\mathrm{sp}}\{X(s)\,:\, s \in S\}$ and
\begin{equation}\label{Eqn:XInnerProd}
M(s,t) = \langle X(s) | X(t)\rangle_{\mathcal{H}}.
\end{equation} Moreover, the
pair $(\mathcal{H},X)$ is unique up to unitary equivalence.
\end{theorem}
\begin{remark}
We outline here two choices for the pair $(\mathcal{H},X)$ in the
special case where $S = \mathbb{N}_0$ and $M$ is given by an
infinite positive semidefinite matrix. The first pair is
the one constructed in \cite{PaSc72} using Kolmogorov's extension
principle on an infinite product space. This particular choice
will allow us in Theorem \ref{Thm:OmegaHan} to express concretely
the measure satisfying the moment problem in the special case
where $M$ is Hankel.
The second pair is formed by constructing a Hilbert space from a
quadratic form using the matrix $M$. We will be using similar
completions later in the Memoir to work with moment matrices, so this
is a natural approach. Note that Theorem \ref{Thm:ParKol} tells us that our two
Hilbert spaces are isometrically isomorphic.
\end{remark}
\noindent \textit{Proof $1$.}\; Let $S$ be the natural numbers
$\mathbb{N}_0$ and let $\Omega$ be the set of all functions from
$\mathbb{N}_0$ into the one-point compactification
$\overline{\mathbb{R}}$ of $\mathbb{R}$:
\[\Omega = \prod_{\mathbb{N}_0}\overline{\mathbb{R}}=(\overline{\mathbb{R}})^{\mathbb{N}_0}.\]
($\Omega$ must be compact
in order to use a Stone-Weierstrass argument in the construction.)
We now describe the Gaussian construction and its Kolmogorov
consistency. For now, assume that $M$ is positive definite in
the strict sense. Let $J=\{i_1, \ldots, i_p\}$ be a finite
subset of $\mathbb{N}_0$. The positive definite function $M$
gives rise to a (strictly) positive definite matrix $M_J$ which is
formed by choosing elements from rows and columns in $M$ indexed
by $J$.
Let $P_J$ be the Gaussian measure on
$\Omega_J:=\overline{\mathbb{R}}^J$ with zero mean and covariance
matrix $M_J$, such that
$P_J$ has Radon-Nikodym derivative $f_J$ with respect to Lebesgue measure,
where $f_J$ is given by
\begin{equation}\label{Eqn:Gauss}
f_J(\omega_J) = \frac{1}{(\sqrt{2\pi})^p\sqrt{\det
M_J}}\exp\Bigl(-\frac{1}{2}\omega_J^t M_J^{-1}\omega_J \Bigr),
\end{equation}
where we denote $\omega_J = (\omega_{i_1}, \ldots, \omega_{i_p})$.
Note that Equation (\ref{Eqn:Gauss}) uses the strict positive
definite condition because the inverse $M_J^{-1}$ is required. The density
in Equation (\ref{Eqn:Gauss}) is the standard multivariate normal
density for random variables $X(i_1), \ldots, X(i_p)$ with mean
$0$.
By Kolmogorov's extension theorem \cite[``Fundamental Theorem,''
p. 29]{Kol50}, in order for the family of measures $\{P_J\,:\, J
\, \textrm{finite}\}$ to define a measure $P$ on all of $\Omega$,
the measures $P_J$ must be consistent. Suppose $J$ and $K$ are
both finite subsets of $\mathbb{N}_0$, where $J\subset K$. In this
context, \textit{consistency} means that if $g$ is a function on
$\Omega_J$ which is extended to a function $G$ on $\Omega_K$
depending only on the variables in $J$, then
\[
\int_{\Omega_J} g f_J d\omega_J = \int_{\Omega_K}Gf_Kd\omega_K.
\]
If we consider this problem in the coordinates which diagonalize
the matrix $M_K$, we see that the variables in $K\backslash J$
will integrate to $1$, and the measures $P_J$ are indeed
consistent. Note that consistency in one set of coordinates does
not imply consistency in another set of coordinates.
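This consistency can also be observed empirically: the marginal of $N(0, M_K)$ on the coordinates indexed by $J$ is $N(0, M_J)$. The Monte Carlo sketch below uses an arbitrary strictly positive definite $3 \times 3$ covariance and compares the empirical covariance of a two-coordinate marginal with the corresponding submatrix:

```python
import numpy as np

# Consistency check for the Gaussian family: the marginal of N(0, M_K) on the
# coordinates J ⊂ K is N(0, M_J), where M_J is the corresponding submatrix.
# M_K below is an arbitrary strictly positive definite covariance.
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
M_K = B @ B.T + 3 * np.eye(3)  # strictly positive definite
J = [0, 2]

samples = rng.multivariate_normal(np.zeros(3), M_K, size=200_000)
emp_cov_J = np.cov(samples[:, J], rowvar=False)

# The empirical covariance of the J-marginal matches M_J = M_K[J][:, J].
assert np.allclose(emp_cov_J, M_K[np.ix_(J, J)], atol=0.1)
```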
If $M$ is not strictly positive definite, then for any finite set
$J \subset \mathbb{N}_0$, we can change coordinates to diagonalize
$M_J$ and write $\Omega_J = \mathrm{ker}(M_J) \oplus
\mathrm{ker}(M_J)^{\perp}$. We take the measure $P_J$ to be the
Gaussian given by $f_J$ on $\mathrm{ker}(M_J)^{\perp}$ and
$\delta_0$, the distribution having mean $0$ and variance $0$, on
$\mathrm{ker}(M_J)$. Then the covariance matrix of $P_J$
is exactly the diagonalized $M_J$, and Kolmogorov
consistency is maintained.
By the consistency condition, the measures $P_J$ defined on finite
subspaces of $\Omega$ can be extended to a measure $P$ on $\Omega$.
We now take the Hilbert space in Theorem \ref{Thm:ParKol} to be
$\mathcal{H} = L^2(\Omega, P)$ and the map $X: \mathbb{N}_0
\rightarrow \mathcal{H}$ to map $n \in \mathbb{N}_0$ to the
projection map onto the $n^{\mathrm{th}}$ coordinate of the element
$\omega\in\Omega$:
\[X(n)(\omega) = \omega(n).
\] Then we see that $\mathcal{H}$ is the closed linear span of the maps
$\{X(n)\,:\, n \in \mathbb{N}_0\}$ and \[ \langle X(n) | X(m)
\rangle_{\mathcal{H}} = \int_{\Omega} X(n)(\omega) X(m)(\omega)
\,\mathrm{d}P(\omega) = M(n,m). \]
We can now call $M$ the covariance function.
To show the uniqueness statement, suppose Hilbert spaces
$\mathcal{H}_1$ and $\mathcal{H}_2$ with corresponding functions
$X_1$ and $X_2$ satisfy $\langle X_i(s) |
X_i(t)\rangle_{\mathcal{H}_i} = M(s,t)$ and $\mathcal{H}_i =
\overline{\mathrm{sp}}\{X_i(s)\,:\,s \in S\}$ for $i=1,2$. For
each $s \in S$, let $W$ be the linear map determined by $W(X_1(s)) =
X_2(s)$. $W$ defined this way is well defined, since if $\sum_i c_i
X_1(s_i) = 0$ then \begin{eqnarray*} \left\|W\Big(\sum_i c_i
X_1(s_i)\Big)\right\|^2_{\mathcal{H}_2} &=& \left\langle W\Big(\sum_i c_i X_1(s_i)\Big) \Bigr|
W\Big(\sum_i c_i X_1(s_i)\Big) \right\rangle_{\mathcal{H}_2}\\& = &\sum_{i,j}
\overline{c_i}c_j \langle X_2(s_i)|X_2(s_j) \rangle_{\mathcal{H}_2}
\\&=&\sum_{i,j} \overline{c_i}c_j M(s_i,s_j) \\&=& \sum_{i,j} \overline{c_i}c_j \langle
X_1(s_i)|X_1(s_j) \rangle_{\mathcal{H}_1} \\ &=& \left\| \sum_i c_i X_1(s_i) \right\|^2_{\mathcal{H}_1} = 0. \end{eqnarray*} By
linearity and density, $W$ extends to a map from $\mathcal{H}_1$
onto $\mathcal{H}_2$, and by the association of the inner products
with the matrix $M$, we see that $W$ is unitary. \hfill $\Box$
\medskip
\noindent \textit{Proof $2$.}\; As in the first proof, let $S =
\mathbb{N}_0$. Let $\mathcal{D}$ be the set of all maps from $S$
to $\mathbb{C}$ (i.e. complex sequences) such that only finitely
many coordinates are nonzero. Given the positive semidefinite
function $M$, define the sesquilinear form
\[ \mathcal{S}(v_1,v_2) = \sum_{s \in \mathbb{N}_0} \sum_{t \in \mathbb{N}_0}
\overline{v_1(s)} M(s,t) v_2(t), \] where we write $\mathcal{S}$ to
avoid a clash with the index set $S$. This sesquilinear form yields a
quadratic form $Q$, the square of a seminorm:
\[ Q(v) = \|v\|_M^2 = \mathcal{S}(v,v). \] The set $ \mathrm{Null}_M = \{v \in
\mathcal{D}\,:\, Q(v) = 0 \}$ is a subspace of $\mathcal{D}$, so the
quotient space $\mathcal{D}/\mathrm{Null}_M$ is an inner product
space. We complete this space to form a Hilbert space which we denote
$\mathcal{H}_Q$ to emphasize the dependence on the quadratic form $Q$
arising from the matrix $M$. We note here that this construction of a
Hilbert space from a given positive semidefinite function $M$ is
unique up to unitary equivalence. We will make use of this Hilbert
space completion of the quadratic form $Q$ again in Chapters
\ref{Ch:Kato} and \ref{Ch:Extensions}.
We next define the map $X: \mathbb{N}_0 \rightarrow
\mathcal{H}_Q$ by \[ [X(s)](t) = \delta_s(t) =
\left\{\begin{matrix} 1 & s=t\\ 0 & s \neq t \end{matrix} \right.
.\] Clearly $\delta_s \in \mathcal{D} \subseteq \mathcal{H}_Q$
for all $s \in \mathbb{N}_0$, and in fact, $\mathcal{D}$ is the
linear span of $\{\delta_s\,:\, s \in \mathbb{N}_0\}$. Therefore,
$\mathcal{H}_Q = \overline{\mathrm{sp}}\{X(s)\,:\, s \in
\mathbb{N}_0\}$. We also see that
\begin{eqnarray*} \langle X(s)|X(t) \rangle &=& \sum_{u,v \in
\mathbb{N}_0} \overline{\delta_s(u)}M(u,v)\delta_t(v)\\ &=&
M(s,t).
\end{eqnarray*} \hfill $\Box$
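For concreteness, the quotient construction in Proof $2$ can be sketched numerically (a minimal sketch assuming Python with NumPy; the $3\times 3$ matrix below is a hypothetical stand-in for a positive semidefinite kernel with nontrivial null space):

```python
import numpy as np

# Hypothetical positive semidefinite matrix with a nontrivial kernel:
# M = A A^T for a rank-2 matrix A, so ker(M) is one-dimensional.
M = np.array([[2.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

def Q(v):
    """Quadratic form Q(v) = sum_{s,t} conj(v_s) M(s,t) v_t."""
    return float(np.real(np.conj(v) @ M @ v))

v = np.array([1.0, -1.0, -1.0])   # an element of Null_M
w = np.array([1.0, 0.0, 0.0])

assert abs(Q(v)) < 1e-12          # the seminorm vanishes although v != 0
assert Q(w) > 0                   # generic vectors have positive seminorm
# Inner products descend to the quotient D / Null_M: pairing against a
# null vector gives 0, so [v] = [0] in the completion H_Q.
assert abs(np.conj(v) @ M @ w) < 1e-12
```

In the infinite-dimensional setting the same computation runs over finitely supported sequences, and the completion of the quotient yields $\mathcal{H}_Q$.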
\begin{remark} There are subtleties involved in these Hilbert space
completions. In the pre-Hilbert space (before the completion), we
will typically be working with a space $\mathcal{D}$ of all finitely
supported functions on $S$. In other words, $\mathcal{D}$ is the
space of all finite linear combinations from the orthonormal basis
$\{\delta_s \}_{s \in S}$ for $\ell^2(S)$. When the completion to a
Hilbert space is done, $\mathcal{D}$ will be a dense linear subspace
in the completion. Here, ``dense'' refers to the norm being used.
In Chapter \ref{Ch:Kato}, for example, the norm is a weighted
$\ell^2$ norm.
Caution: In these alternative Hilbert space completions, say
$\mathcal{H}_Q$ above, explicit representations of the vectors in
$\mathcal{H}_Q$ are often not transparent. The useful geometric
representations of limits of concretely given functions may be
subtle and difficult. Notions of boundary constructions reside in
these completions. In fact, this subtlety is quite typical when
Hilbert space completions are used in mathematical physics problems.
Such examples are seen in \cite{JoOl00, Jor00}, where symmetries
result in separate positive definite quadratic forms, and hence two
very different Hilbert space completions. \end{remark}
We are now ready to state our existence result for the real moment
problem, using the language of Kolmogorov and
Parthasarathy-Schmidt (the first proof above). Given a matrix $M$,
our goal is to find a measure $\mu$ such that $M$ is the moment
matrix for $\mu$. The previous theorem gives us a Hilbert space
and a map $(\mathcal{H},X)$, unique up to unitary equivalence,
which we can use under the right condition ($M$ a Hankel matrix)
to determine a solution to the moment problem. This solution will
not, in general, be unique.
\begin{theorem}\label{Thm:OmegaHan} Let $M$ be an infinite matrix
satisfying the positive semidefinite condition (\ref{Eqn:PD}), and
let $M(0,0) = 1$. Let $\Omega_{Han}$ be the measurable subset of
$\Omega$ given by
\begin{equation}\label{Eqn:OmegaHan}
\Omega_{Han} = \{ \omega \in \Omega \;:\; \omega(k) =
[\omega(0)]^k \, \textrm{ for all } k \in \mathbb{N}_0\}.
\end{equation}
Then Kolmogorov's extension construction yields a probability
measure $P_{Han}$ on $\Omega_{Han}$ with $M$ as its moment matrix
if and only if $M$ satisfies the Hankel property from Definition
\ref{Defn:HankelN0d}.
\end{theorem}
\begin{proof} ($\Rightarrow$) Suppose a measure $P_{Han}$ exists as stated. Theorem \ref{Thm:ParKol} defines the map $X$ such that
$X(k)(\omega) = \omega(k) = [\omega(0)]^k$ for $\omega \in
\Omega_{Han}$. For the covariance function we have
\begin{equation}
M(j,k) = \int_{\Omega_{Han}} X(j) X(k) \mathrm{d}P_{Han}.
\end{equation}
Using Equation (\ref{Eqn:OmegaHan}), we then find
\begin{eqnarray*}
M(j,k)
&=& \int_{\Omega_{Han}} [X(0)(\omega)]^j [X(0)(\omega)]^k\mathrm{d}P_{Han}(\omega)\\
&=& \int_{\mathbb{R}} x^jx^k \mathrm{d}(P_{Han} \circ X(0)^{-1})(x) \\
&=& \int_{\mathbb{R}} x^{j+k} \mathrm{d}\mu(x)
\end{eqnarray*}
where we define the measure by
\begin{equation}
\mu = P_{Han} \circ X(0)^{-1}.
\end{equation}
Since this formula shows that $M$ is a Hankel matrix, we have
proved one implication.
($\Leftarrow$) Conversely, if $M$ is a Hankel matrix, the
finite-dimensional distributions can be taken to be supported on the
measurable subset $\Omega_{Han} \subset \Omega$, and the Hankel
property of $M$ ensures that they satisfy the Kolmogorov
consistency conditions. By the
extension principle, we get a measure space
$(\Omega_{Han},P_{Han})$ such that the measure $\mu = P_{Han}
\circ X(0)^{-1}$ on $\mathbb{R}$ is a solution to the moment
problem for $M = M^{(\mu)}$.
\end{proof}
There is an analogous result for the complex case, which we
describe briefly. Let $M:\mathbb{N}_0 \times \mathbb{N}_0
\rightarrow \mathbb{C}$ be a function satisfying Property
PD$\mathbb{C}$, i.e.,
\begin{equation}
\sum_{i,j,k,\ell \in \mathbb{N}_0} \overline{c}_{i,j} M_{i+\ell,
j+k} c_{k,\ell} \geq 0
\end{equation}
for all doubly-indexed sequences $c = \{c_{ij}\}\in\mathcal{D}$.
Define an induced function $\widehat{M}:\mathbb{N}_0^2 \times
\mathbb{N}_0^2 \rightarrow \mathbb{C}$ by
\begin{equation}
\widehat{M}((i,j),(k,l)) = M(i+l,j+k).
\end{equation}
One can readily verify that $\widehat{M}$ satisfies the positive
semidefinite condition given in Theorem \ref{Thm:ParKol}.
\begin{corollary}\label{Cor:Complex}
When Theorem \ref{Thm:ParKol} is applied to the function
$\widehat{M}$ above, we get a pair $(X,\mathcal{H})$ where the
Hilbert space may be taken to be $\mathcal{H} =
L^2(\mathbb{C},\mu)$, and the function can be given by $X(i,j) =
z^i \overline{z}^j$ for all $(i,j) \in \mathbb{N}_0^2$.
\end{corollary}
\begin{proof} Given these choices of $\mathcal{H}$ and $X$, we
see that
\begin{equation*}
\langle X(i,j) | X(k,l) \rangle_{\mathcal{H}} = \int_{\mathbb{C}}
\overline{z^i \overline{z}^j} z^k \overline{z}^l\mathrm{d}\mu(z)
\end{equation*} by Theorem \ref{Thm:ParKol}.
Then,
\begin{equation*}
\begin{split}
& \int_{\mathbb{C}} \overline{z^{i+l}}z^{j+k} \mathrm{d}\mu(z)
=\langle z^{i+l} | z^{j+k} \rangle_{L^2(\mu)}\\
&= M(i+l,j+k) = \widehat{M}((i,j),(k,l)) .
\end{split}
\end{equation*}
This verifies our desired result and also shows that $\mu$ is a
solution to the complex moment problem for the matrix $M_{ij} =
M(i,j)$.
\end{proof}
\section{Examples}
\begin{example}\label{Ex:Laguerre-2} The measure $\mu = e^{-x}\,\mathrm{d}x$.\end{example}
We showed in Example \ref{Ex:Laguerre} that the measure $e^{-x}\,\mathrm{d}x$ on
the positive reals has moment matrix \[M^{(\mu)}_{i,j} = (i+j)!\] for
all $i,j \in \mathbb{N}_0$. This measure does not have compact
support, but it is an example of a case where the solution to the
moment problem $M=M^{(\mu)}$ is unique. We know this because the
Laguerre system of orthogonal polynomials is dense in
$L^2(\mathbb{R}^+, \mu)$.
Recall our earlier observation that this is also an example which
illustrates that infinite Hankel matrices cannot always be realized
directly by operators on $\ell^2$-sequence spaces. However we will
show that an operator representation may be found after a certain
renormalization is introduced. This will be an example of a general
operator theoretic framework to be introduced in Chapter
\ref{Ch:Kato}. \hfill $\Diamond$
We next describe a series of examples of measures having moments
involving the Catalan numbers. We will revisit these examples in
Chapter \ref{Sec:Spectrum}, where we will be able to say more about
the operator properties of the moment matrices.
The $k^{\textrm{th}}$ Catalan number $C_k$ is defined
\[C_k
:=\frac{1}{k+1}\binom{2k}{k}
= \frac{(2k)!}{k!(k+1)!},
\]
where the first Catalan number, $C_0$, is $1$.
We define $B_k$ to be
\[
B_k:=\binom{2k}{k}.
\]
The Catalan numbers satisfy the following relation:
\[C_{k+1} = \sum_{n=0}^k C_n C_{k-n}.\] There is an explicit formula
for the generating function associated with the Catalan numbers:
\begin{equation}\label{Eqn:CatGenFn}
G_{\textrm{Cat}}(x)
= \sum_{k=0}^{\infty} C_k x^k
=\frac{1-\sqrt{1-4x}}{2x}.
\end{equation}
The radius of convergence for $G_{\textrm{Cat}}(x)$ is $\frac{1}{4}$.
The generating function $G_{\textrm{Bin}}(x)$ associated with the $B_k$'s has the same radius of convergence; in fact,
\begin{equation}
G_{\textrm{Cat}}(x) = \frac{1}{x}\int_0^x G_{\textrm{Bin}}(y)\mathrm{d}y,
\end{equation}
and
\begin{equation}
G_{\textrm{Bin}}(x) = \sum_{k=0}^{\infty} B_k x^k = \frac{1}{\sqrt{1-4x}}.
\end{equation}
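These identities are easy to check numerically; the following sketch (plain Python, standard library only) compares truncated series for $G_{\textrm{Cat}}$ and $G_{\textrm{Bin}}$ against their closed forms at a point inside the common radius of convergence $\frac{1}{4}$, and verifies the Catalan recurrence:

```python
import math

C = [math.comb(2*k, k) // (k + 1) for k in range(60)]   # Catalan numbers C_k
B = [math.comb(2*k, k) for k in range(60)]              # central binomials B_k

x = 0.1  # inside the radius of convergence 1/4
G_cat_series = sum(c * x**k for k, c in enumerate(C))
G_bin_series = sum(b * x**k for k, b in enumerate(B))

# Closed forms of the generating functions
assert abs(G_cat_series - (1 - math.sqrt(1 - 4*x)) / (2*x)) < 1e-12
assert abs(G_bin_series - 1 / math.sqrt(1 - 4*x)) < 1e-12

# Catalan recurrence C_{k+1} = sum_{n=0}^{k} C_n C_{k-n}
for k in range(10):
    assert C[k+1] == sum(C[n] * C[k-n] for n in range(k+1))
```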
The Hankel matrix $M^{\textrm{Cat}} = (C_{j+k})$ is positive definite because every principal submatrix has determinant $1$.
\begin{example}\label{Ex:Wigner}Wigner's semicircle measure.\end{example}
The measure $\mathrm{d}\mu$ is given on $(-2, 2)$ by
\[ \mathrm{d}\mu(x) = \frac{\sqrt{4-x^2}}{2\pi}\mathrm{d}x\]
and is $0$ otherwise.
A simple calculation shows that
\[ m_{2k} = \int_{-2}^2 x^{2k}\mathrm{d}\mu(x) = C_k\]
and
\[ m_{2k+1} = \int_{-2}^2 x^{2k+1}\mathrm{d}\mu(x) = 0.\]
\hfill$\Diamond$
\begin{example}\label{Ex:Secant}The secant measure.\end{example}
The measure $\mathrm{d}\mu$ is given on $(-2, 2)$ by
\[ \mathrm{d}\mu(x) = \frac{1}{\pi\sqrt{4-x^2}}\mathrm{d}x\]
and is $0$ otherwise. In this case,
\[ m_{2k} = \int_{-2}^2 x^{2k}\mathrm{d}\mu(x) = B_k\]
and
\[ m_{2k+1} = \int_{-2}^2 x^{2k+1}\mathrm{d}\mu(x) = 0.\]
\hfill$\Diamond$
\begin{example}\label{Ex:HalfSecant}The half-secant measure.\end{example}
The measure $\mathrm{d}\mu$ is given on $(0, 2)$ by
\[ \mathrm{d}\mu(x) = \frac{2}{\pi\sqrt{4-x^2}}\mathrm{d}x\]
and is $0$ otherwise. In this example, both the even and odd moments are nonzero:
\[m_{2k} =\int_0^2 x^{2k} \mathrm{d}\mu(x) = B_k,\]
and
\[m_{2k+1} = \int_0^2 x^{2k+1} \mathrm{d}\mu(x) = \frac{4^{2k+1}}{\pi(k+1)} \binom{2k+1}{k}^{-1} = \frac{4^{2k+1}\,(k!)^2}{\pi\,(2k+1)!} .\]
\hfill$\Diamond$
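The moment formulas for these three measures can be checked numerically. The sketch below (assuming NumPy) uses the substitution $x = 2\sin\theta$, which removes the endpoint singularities of the secant densities; the odd half-secant moments are compared against the closed form $4^{2k+1}(k!)^2/(\pi(2k+1)!)$, which follows from the Wallis integrals:

```python
import math
import numpy as np

def trap(y, t):
    # composite trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def moment(k, weight, lo, hi, n=200001):
    # k-th moment of the measure pushed forward under x = 2*sin(theta);
    # weight(t) is the transformed density d(mu)/d(theta)
    t = np.linspace(lo, hi, n)
    return trap((2.0 * np.sin(t))**k * weight(t), t)

halfpi = math.pi / 2
for k in range(6):
    C_k = math.comb(2*k, k) // (k + 1)
    B_k = math.comb(2*k, k)
    # Wigner semicircle: d(mu) -> (2/pi) cos^2(theta) d(theta)
    m = moment(2*k, lambda t: (2/math.pi) * np.cos(t)**2, -halfpi, halfpi)
    assert abs(m - C_k) < 1e-4
    # secant measure: d(mu) -> (1/pi) d(theta)
    m = moment(2*k, lambda t: np.full_like(t, 1/math.pi), -halfpi, halfpi)
    assert abs(m - B_k) < 1e-4
    # half-secant measure: d(mu) -> (2/pi) d(theta) on (0, pi/2); odd moments
    m = moment(2*k + 1, lambda t: np.full_like(t, 2/math.pi), 0.0, halfpi)
    odd = 4**(2*k+1) * math.factorial(k)**2 / (math.pi * math.factorial(2*k+1))
    assert abs(m - odd) < 1e-4
```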
\section{Historical notes}\label{Subsec:history}
Our positive semidefinite functions $M$ in the form (\ref{Eqn:PD})
have a history in a variety of guises under the names \textit{positive
semidefinite kernels}, \textit{positive hermitian matrices},
\textit{reproducing kernels}, or \textit{positive definite functions},
among others. In these settings, a positive semidefinite function is
just a function in two variables, or, equivalently, a function $M$
defined on $S \times S$ where $S$ is a set.
There is a broad literature covering many aspects of positive
semidefinite kernels, especially in the case when $S$ is a domain in a
continuous manifold; $S$ may even be a function space. Function
spaces lead to the theory of reproducing kernels, which are also
called Bergman-Schiffer-Aronszajn kernels. See
\cite{Aro50},\cite{BeSc52} for a classical exposition and
\cite{BeRe84}, \cite{Dut04}, \cite{DuJo06} for a modern view.
If $S$ is a group or a semigroup, and if $M$ is a positive
semidefinite kernel, we will adopt additive notation ``$+$'' for the
operation in $S$ and restrict attention to the abelian case. Of
special interest are the cases when the function $M$ has one of two
forms:
\begin{enumerate}[(i)]
\item $M(s,t) = F_1(s - t)$ for some function $F_1$ on $S$
\item $M(s,t) = F_2(s + t)$, $F_2$ a function on $S$.
\end{enumerate}
In the first case, we say that the function $F_1$ is \textit{positive
semidefinite} on $S$. Equivalently, we say that $M$ is a
\textit{Toeplitz matrix}. There is a substantial theory of positive
semidefinite functions on groups (and semigroups), see e.g.,
\cite{HeRo79}, \cite{BeRe84}; positive semidefinite functions play a
central role in harmonic analysis.
In the second case (ii), $M$ is often called a \textit{Hankel matrix}. In the following chapters, we develop operator theoretic duality techniques to study positive semidefinite kernels in the form of infinite Hankel matrices.
\begin{enumerate}
\item We define a (generally unbounded) operator $F$ which maps
sequence spaces $\ell^2$ or their weighted variants $\ell^2(w)$ into
function spaces $L^2(\mu)$. This crucial operator $F : \ell^2(w)
\rightarrow L^2(\mu)$ is defined on the dense subspace $\mathcal{D}$
of finite sequences in $\ell^2$ and sends a sequence into the
generating function (or polynomial).
\item Given the Hilbert spaces $\ell^2(w)$ and $L^2(\mu)$, we
introduce a duality with the use of adjoint operators. If $F$ is a
given operator from $\ell^2(w)$ to $L^2(\mu)$, its adjoint operator
$F^*_w$ maps from $L^2(\mu)$ to $\ell^2(w)$.
\end{enumerate}
As a byproduct of our operator analysis for the spaces $\ell^2(w)$ and
$L^2(\mu)$, we get operations on the two sides which are unitarily
equivalent. The unitary equivalence is critical: it allows us to
derive spectral properties of measures $\mu$ from matrix operations
with infinite matrices, and vice versa.
As a final historical note, we mention that because the operator $F$
associates elements of sequence spaces with generating functions (or
polynomials), we initially obtained some of our results with formal
power series in the spirit of Rota's umbral calculus \cite{RoRo78},
\cite{Rom84}. We then found conditions (such as renormalizing
sequence spaces) which guaranteed not just formal convergence but
actual convergence. In Rota's
umbral calculus, the ``umbra'' is a space of formal power series. In
this Memoir, the ``umbra'' consists of the Hilbert spaces $\ell^2(w)$ and
$L^2(\mu)$ and the closed linear operators $F$ and $F^*_w$ between
them. For more about the book \cite{Rom84} and the umbral calculus's
relation to other parts of mathematics, see the 2007 review by
Jorgensen \cite{JorAmazon}.
\chapter{Notation}\label{Sec:Notation}
In this section, we introduce some notation, conventions, and
definitions needed throughout the paper.
\section{Hilbert space notation}
We use the convention that the inner
product, denoted $\langle u|v \rangle$, is linear in the second
position $v$ and conjugate linear in the first position $u$. This
choice results in fewer matrix transpositions in the type of
products we will be computing and therefore yields more
straightforward computations.
We will use Dirac's notation of ``bras'' and ``kets''. For
vectors $u$ and $v$ in a Hilbert space, the inner product is
denoted ``bra-ket'' $\langle u | v \rangle$. In contrast, the
``ket-bra'' notation $|u\rangle\langle v| $ denotes the rank-one
operator which sends a vector $x$ in the Hilbert space to a scalar
multiple of $u$. Specifically, it is the operator given by $x
\mapsto \langle v|x \rangle u$. The Dirac notation for this
operation is $$ |u\rangle \langle v|\, x\rangle = \langle v|x
\rangle u, $$ or also is sometimes written to preserve the order
$|u\rangle \langle v| \,x\rangle = u \langle v|x\rangle $.
If we consider these Hilbert space operations from the point of
view of matrices and Euclidean vectors, we denote a column vector
by $|u\rangle$, which we call a ``ket''. Similarly, we denote a
row vector by $\langle u |$ and call it ``bra''. The notation for
the inner product and rank-one operators above now are consistent
with the actual matrix operations being performed. Further, the
algebraic manipulations with operators and vectors are done by
simply merging ``bras'' and ``kets'' in the position they
naturally have as we write them.
\section{Unbounded operators}\label{subsec:unboundedop}
We state here some definitions and properties related to unbounded
operators on a Hilbert space. For more details, see \cite{ReSi80, Con90}.
\begin{definition} An \textit{operator} $F$ which maps a Hilbert space $\mathcal{H}_1$ to another Hilbert space $\mathcal{H}_2$ is a linear map from a subspace of $\mathcal{H}_1$ (the \textit{domain} of $F$) to $\mathcal{H}_2$.
\end{definition}
It will be assumed here that the domain of $F$ is dense in $\mathcal{H}_1$ with respect to the norm arising from the inner product on $\mathcal{H}_1$. If $F$ is not a bounded operator, we call it an \textit{unbounded operator}.
\begin{definition}[\cite{Con90}]\label{Def:FClosed} An operator $F$ from $\mathcal{H}_1$ to $\mathcal{H}_2$ is \textit{closed} if the graph of $F$, $\{( x , Fx ) \::\: x \in \mathrm{dom}(F) \} \subset \mathcal{H}_1 \times \mathcal{H}_2$, is a closed set in the Hilbert space $\mathcal{H}_1 \times \mathcal{H}_2$ with inner product $\langle (x_1,x_2) | (y_1,y_2) \rangle_{\mathcal{H}_1 \times \mathcal{H}_2} = \langle x_1 | y_1 \rangle_{\mathcal{H}_1} + \langle x_2 | y_2 \rangle_{\mathcal{H}_2}$. If there exists a closed extension of the operator $F$, then $F$ is called \textit{closable}. In that case, there exists a smallest closed extension, which is called the \textit{closure} of $F$ and is denoted $\overline{F}$.
\end{definition}
\begin{definition}\label{Def:Adjoint} Let $F$ be an unbounded operator with dense domain $\mathrm{dom}\,(F)$ from $\mathcal{H}_1$ to $\mathcal{H}_2$. Let $D$ be the set of all $y \in \mathcal{H}_2$ such that there exists an $x \in \mathcal{H}_1$ such that \[\langle Fz|y \rangle_{\mathcal{H}_2 }=
\langle z| x \rangle_{\mathcal{H}_1} \] for all $z \in \mathrm{dom}\,(F)$. We define the operator $F^*$ on the domain $D$ by $
F^*y = x$. Note that $x$ is uniquely determined because $F$ is densely defined. $F^*$ is called the \textit{adjoint} of $F$.
\end{definition}
It follows from these definitions (see \cite{ReSi80}) that $F$ is closable if and only if $F^*$ is densely defined, and that $F^*$ is always closed.
\begin{definition}\label{Def:SelfAdjoint} An operator $F$ on a Hilbert space $\mathcal{H}$ is \textit{symmetric} if $\mathrm{dom}(F) \subset \mathrm{dom}(F^*)$ and $Fx = F^*x$ for all $x \in \mathrm{dom}(F)$. $F$ is \textit{self-adjoint} if $F$ is symmetric and $\mathrm{dom}(F) = \mathrm{dom}(F^*)$; i.e. $F^* = F$.
\end{definition}
Often, the term \textit{hermitian} is also used for a symmetric operator. We see that a symmetric operator $F$ must be closable, since the domain of $F^*$ contains the dense set $\mathrm{dom}(F)$, and is therefore dense.
\begin{definition} A symmetric operator $F$ is \textit{essentially self-adjoint} if its closure $\overline{F}$ is self-adjoint.
\end{definition}
If an operator $F$ is bounded on its dense domain and symmetric, it is essentially self-adjoint.
\section{Multi-index notation}\label{subsec:index}
In $\mathbb{R}^d$, for $d > 1$, we will need to use multi-index
notation. Here, $\alpha$ and $\beta$ denote multi-indices
belonging to $\mathbb{N}_0^d$, where $\mathbb{N}_0^d$ is the
Cartesian product
\begin{equation}
\mathbb{N}_0^d = \underbrace{\mathbb{N}_0\times\cdots\times\mathbb{N}_0}_{d \textrm{ times }}.
\end{equation}
Following standard conventions, the sum $\alpha + \beta$ is
defined pointwise:
\begin{equation}
\alpha + \beta = (\alpha_i + \beta_i)_{i=1}^d.
\end{equation}
Using this notation, we have the following integral expression:
\begin{equation}
\int_{\mathbb{R}^d} x^{\alpha}\,\mathrm{d}\mu(x) =
\int_{\mathbb{R}^d} x_1^{\alpha_1}x_2^{\alpha_2}\cdots
x_d^{\alpha_d}\,\mathrm{d}\mu(x).
\end{equation}
\section{Moments and moment matrices}\label{Subsec:moments}
Let $X$ be one of $\mathbb{R}, \mathbb{C}, \mathbb{R}^d$ with Borel
measure $\mu$. When $X=\mathbb{R}$, we define the
$i^{\mathrm{th}}$ order moment with respect to $\mu$ to be
\begin{equation} m_i = \int_{\mathbb{R}} x^i \,\mathrm{d}\mu (x).
\end{equation} If the moments of all orders are finite, we will
denote by $M^{(\mu)}$ an infinite matrix called the \textit{moment
matrix}. The moment matrix in the real case has entries
\begin{equation}\label{Eqn:MomMx} M^{(\mu)}_{i,j} = m_{i+j} = \int
x^{i+j} \,\mathrm{d}\mu(x).\end{equation} Throughout this monograph, the real moment matrices will be indexed by $\mathbb{N}_0 \times \mathbb{N}_0$ -- in particular, the row and column indexing both start with $0$.
\begin{definition}\label{Defn:HankelN0d} An infinite real matrix $M$ whose entries are indexed by
$\mathbb{N}_0 \times \mathbb{N}_0$ is called a \textit{Hankel
matrix} if
\[ M_{i,j} = M_{i+k,j-k} = M_{i-k,j+k}\] for all values of $k$ for
which these entries are defined.
\end{definition}
The moment matrix defined in Equation (\ref{Eqn:MomMx}) is a Hankel
matrix. This justifies our notation above referencing the $(i,j)^{\mathrm{th}}$
entry of $M^{(\mu)}$ with $i+j$. The following examples will be used throughout the paper to illustrate our techniques and results.
\begin{example}\label{Ex:Lebesgue} Lebesgue measure on $[0,1]$. \end{example}
Let $\mu$ be the Lebesgue measure supported on
$[0,1]$. Then the moment matrix for $\mu$ is \[M^{(\mu)}_{i,j} =
\int_0^1 x^{i+j} \,\mathrm{d}x = \frac{1}{i+j+1}.\] This matrix
is often called the \textit{Hilbert matrix}. We will examine the properties of the Hilbert
matrix in greater detail in Section \ref{Sec:Hilbert}. \hfill $\Diamond$
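A finite truncation makes both the Hankel property and the positive definiteness of the Hilbert matrix easy to inspect (a small sketch assuming NumPy):

```python
import numpy as np

N = 6
# Truncated Hilbert matrix: the moment matrix of Lebesgue measure on [0,1].
H = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])

# Hankel property: entries are constant along the antidiagonals.
for i in range(N - 1):
    for j in range(1, N):
        assert H[i + 1, j - 1] == H[i, j]

# The truncations are positive definite (all eigenvalues positive),
# consistent with H being a moment matrix of a measure with infinite support.
assert np.linalg.eigvalsh(H).min() > 0
```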
\begin{example}\label{Ex:PointMass} Dirac point mass measure $\mu = \delta_1$. \end{example} Let $\mu$ be the Dirac probability mass $\delta_1$ on $\mathbb{R}$. Then the moments for $\mu$ are \[ M^{(\mu)}_{i,j} = \int x^{i+j} \,\mathrm{d}\mu = 1^{i+j} = 1,\] so the moment matrix entries are all $1$. In this case, we see that the moment matrix does not represent a bounded operator on $\ell^2$. This example will be studied further in Chapters \ref{Ch:Kato} and \ref{Sec:Spectrum}. \hfill $\Diamond$
\begin{example}\label{Ex:Laguerre} The measure $\mu = e^{-x}\,\mathrm{d}x$. \end{example} Let $\mu$ be $e^{-x}\mathrm{d}x$ on $\mathbb{R}^+ = (0,\infty)$. Using integration by parts, a quick induction proof shows that the moments for $\mu$ are\[ M^{(\mu)}_{i,j} = (i+j)!.\] Again, we see that the moments increase rapidly as $i,j$ increase, so $M^{(\mu)}$ cannot be a bounded operator on $\ell^2$. We will discover more properties for this matrix in Chapters \ref{Ch:Kato} and \ref{Ch:Extensions}. \hfill$\Diamond$
\begin{example}\label{Ex:Gaussian} Measures with moments from the gamma function.\end{example}
Let \[\Gamma(s) = \int_0^{\infty} t^{s-1}e^{-t}\,\mathrm{d}t, \] so that $\Gamma(k) = (k-1)!$ for $k \in \mathbb{Z}_+$ and $\Gamma(\tfrac{1}{2}) = \sqrt{\pi}$. Set $\mu = e^{-p^2x^2}\mathrm{d}x$; $\mu$ is a Gaussian measure with support on $\mathbb{R}$. The odd moments are zero, and the even moments are given by
\[ \int_{\mathbb{R}} x^{2k}e^{-p^2x^2}\mathrm{d}x = \frac{\Gamma(k+\frac{1}{2})}{p^{2k+1}} \] for $k \in \mathbb{N}_0$. \hfill $\Diamond$
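The even-moment formula $\int_{\mathbb{R}} x^{2k}e^{-p^2x^2}\,\mathrm{d}x = \Gamma(k+\frac{1}{2})/p^{2k+1}$ can be checked directly (a numerical sketch assuming NumPy; the value $p = 3/2$ is an arbitrary choice):

```python
import math
import numpy as np

p = 1.5
t = np.linspace(-12.0, 12.0, 400001)  # integrand is negligible beyond |x| ~ 12
dt = t[1] - t[0]
for k in range(4):
    # Riemann sum for the 2k-th moment of exp(-p^2 x^2) dx
    m2k = float(np.sum(t**(2 * k) * np.exp(-p**2 * t**2)) * dt)
    # math.gamma accepts half-integer arguments, e.g. gamma(0.5) = sqrt(pi)
    assert abs(m2k - math.gamma(k + 0.5) / p**(2 * k + 1)) < 1e-8
```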
If $X=\mathbb{C}$, the moments are now indexed by $\mathbb{N}_0
\times \mathbb{N}_0$ and are given by \begin{equation}m_{ij} =
\int_{\mathbb{C}} \overline{z}^iz^j
\mathrm{d}\mu(z).\end{equation} If every moment is finite, then
the complex moment matrix $M^{(\mu)}$ is given by
\begin{equation}\label{Eqn:MomMxComplex} M^{(\mu)}_{i,j} = m_{ij} =
\int_{\mathbb{C}} \overline{z}^iz^j
\,\mathrm{d}\mu(z).\end{equation} Notice that a complex moment
matrix will not in general have the Hankel property that arises in
real measures. We also observe that the complex moments are equivalent to the inner
products of monomials in the Hilbert space $L^2(\mu)$:
\begin{equation} m_{ij} = \int_{\mathbb{C}} \overline{z}^i z^j
\,\mathrm{d}\mu(z) = \langle z^i | z^j
\rangle_{L^2(\mu)}. \end{equation} Moments and moment matrices arise naturally in the
study of orthogonal polynomials in $L^2(\mu)$.
\begin{example} If $\mu$ is the Lebesgue measure supported on the unit circle $\mathbb{T}$ in $\mathbb{C}$, then
the moment matrix is the identity matrix: \[ M^{(\mu)}_{j,k} =
\int_{\mathbb{T}} \overline{z}^jz^k \,\mathrm{d}\mu = \int_0^1
e^{2\pi i (k-j)x} \,\mathrm{d}x = \delta_{jk}.\] As we noted
above, the identity matrix is not Hankel.
\end{example} \hfill $\Diamond$
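Numerically, the vanishing of the off-diagonal moments is visible already on a modest grid: sampling the circle at equally spaced points makes the discretized moment matrix the identity up to rounding (a sketch assuming NumPy):

```python
import numpy as np

n = 4096
x = (np.arange(n) + 0.5) / n          # midpoint grid on [0, 1)
z = np.exp(2j * np.pi * x)            # sample points on the unit circle T
N = 5
# Discretized moment matrix M_{j,k} ~ integral of conj(z^j) z^k d(mu);
# the midpoint rule is exact here for |k - j| < n.
M = np.array([[np.mean(np.conj(z)**j * z**k) for k in range(N)]
              for j in range(N)])
assert np.allclose(M, np.eye(N), atol=1e-12)
```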
In the case where $X = \mathbb{R}^d$, we index the moments using
the multi-index notation defined in Section \ref{subsec:index}.
Given $\alpha \in \mathbb{N}_0^d$, we have the $\alpha$-moment
\begin{equation}m_{\alpha} = \int_{\mathbb{R}^d} x^{\alpha}
\mathrm{d}\mu(x).\end{equation} The moment matrix for
$\mathbb{R}^d$ is actually indexed by $\mathbb{N}_0^d \times
\mathbb{N}_0^d$ and has entries
\begin{equation}\label{Eqn:multi-moment} M^{(\mu)}_{\alpha,\beta} = m_{\alpha+\beta} =
\int_{\mathbb{R}^d} x^{\alpha + \beta} \mathrm{d}\mu(x)
.\end{equation}
When $M$ has real entries indexed by $\mathbb{N}_0^d \times
\mathbb{N}_0^d$, we will also call $M$ a Hankel matrix if
\[ M_{\alpha, \beta} = M_{\alpha-\gamma, \beta+\gamma} = M_{\alpha + \gamma, \beta - \gamma}\]
for all $\gamma$ for which these entries are defined. We see from
Equation (\ref{Eqn:multi-moment}) that the moment matrix for a
measure on $\mathbb{R}^d$ ($d > 1$) also has the Hankel property.
\section{Computations with infinite matrices}\label{Sec:InfMatrices}
In a number of applications throughout this paper, we will have occasion to use infinite matrices. They are motivated by the familiar correspondence between linear transformations and matrices from linear algebra. Given a linear transformation $T$ between two Hilbert spaces, assumed to be infinite dimensional, a choice of ONBs in the respective Hilbert spaces produces an infinite matrix which represents $T$. Moreover, a number of facts from (finite-dimensional) linear algebra carry over: for example, composition of two transformations (with a choice of ONBs) corresponds to multiplication of the associated infinite matrices. Moreover, by taking advantage of the orthogonality, one sees in Lemma \ref{Lem:Product} that the matrix multiplication is convergent.
Given an infinite matrix $M$, it is a separate problem to determine when there exists a linear transformation $T$ between two Hilbert spaces and a choice of ONBs such that $M$ represents $T$. This problem is only known to have solutions in special cases. We will address this further in Chapters \ref{Ch:Kato} and \ref{Sec:IntOperators}. We will encounter infinite matrices which cannot be realized directly by operators in the $\ell^2$-sequence spaces, but for which an operator representation may be found after a certain renormalization is introduced.
We use the following standard
language and notation: if $G = G_{i,j}$ is an infinite matrix
indexed by $\mathbb{N}_0 \times \mathbb{N}_0$, and $x$ is an
infinite vector indexed by $\mathbb{N}_0$, we will define the
matrix-vector product $Gx$ componentwise using the usual rule
$$(Gx)_i = \sum_{j \in \mathbb{N}_0} G_{i,j}x_j,$$
provided the sums all converge. Similarly, for infinite matrices $G$
and $H$, the matrix product $GH$ is also defined componentwise
using the same summation formula used for finite matrices,
provided these infinite sums all converge in the appropriate
sense. Specifically, the $(i,j)^{\mathrm{th}}$ entry of the matrix
$GH$ is given by
\begin{equation}\label{Eqn:MatrixProduct} (GH)_{i,j} = \sum_{k \in \mathbb{N}_0} G_{i,k}H_{k,j}.
\end{equation}
As an aside, there is a parallel notion for such products which
applies to matrices indexed by $\mathbb{N}^d_0 \times
\mathbb{N}^d_0$ and vectors indexed by $\mathbb{N}_0^d$. Given
$M$ indexed by $\mathbb{N}_0^d \times \mathbb{N}_0^d$, a vector
$c$, and given $\alpha \in \mathbb{N}_0^d$, we have
\begin{equation}\label{Eqn:MatrixMult}
(Mc)_{\alpha} = \sum_{\beta \in \mathbb{N}_0^d}
M_{\alpha,\beta}c_{\beta},
\end{equation}
provided that this sum converges absolutely. We particularly need
absolute convergence here so that the sum can be appropriately
reordered to sum over $\mathbb{N}_0^d$.
\begin{definition}
When the formal summation rules for a product of infinite matrices
or a matrix-vector product yield convergent sums for each entry,
we say that the matrix operations are \textit{well defined}.
\end{definition}
Using our matrices $G$ and $H$ above, if the sums $\sum_{k \in
\mathbb{N}_0} G_{i,k}H_{k,j}$ converge for all $i,j \in
\mathbb{N}_0$, then the matrix product $GH$ is well defined.
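As a toy illustration of these entrywise sums (the matrices here are hypothetical, chosen so that the sums have a closed form), take $G_{i,k} = r^{i+k}$ and $H_{k,j} = s^{k+j}$ with $0 < r,s < 1$; then $(GH)_{i,j} = r^i s^j/(1-rs)$, and the truncated sums converge to it:

```python
r, s = 0.5, 0.25

def GH_entry(i, j, K):
    """Partial sum of the (i,j) entry of GH, truncated at K terms."""
    return sum(r**(i + k) * s**(k + j) for k in range(K))

def exact(i, j):
    # closed form: geometric series sum_k (r*s)^k = 1/(1 - r*s)
    return r**i * s**j / (1 - r * s)

assert abs(GH_entry(0, 0, 60) - exact(0, 0)) < 1e-12
assert abs(GH_entry(2, 3, 60) - exact(2, 3)) < 1e-12
```

Here $G$ and $H$ are in fact bounded (rank-one) operators on $\ell^2(\mathbb{N}_0)$, so their matrix product is well defined in the sense above.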
If $G$ and $H$ are operators on a separable Hilbert space, we can determine necessary conditions on
their corresponding matrix representations so that the matrix products are well defined. If we
take the Hilbert space to be $\mathcal{H} = \ell^2(\mathbb{N}_0)$ and let $\{e_j\,:\, j \in
\mathbb{N}_0\}$ be the standard orthonormal basis in $\ell^2(\mathbb{N}_0)$, i.e., $e_j(k) =
\delta_{j,k}$ for $j,k \in \mathbb{N}_0$, then we must first assume that $G$ and $H$ are defined on
a dense domain which includes $\{e_j\}$. This allows us to write the matrix representations of
$G$ and $H$. From there, we have the following result.
\begin{lemma}\label{Lem:Product} Let $G$ and $H$ be linear
operators densely defined on $\ell^2(\mathbb{N}_0)$ such that
$Ge_j$, $G^*e_j$, and $He_j$ are well defined and in
$\ell^2(\mathbb{N}_0)$ for every element of the standard
orthonormal basis $\{e_j\}_{j\in \mathbb{N}_0}$. Then $G=
(G_{i,j})$ and $H=(H_{i,j})$, the infinite matrix
representations of $G$ and $H$ respectively, are defined and the
matrix product $((GH)_{i,j})$ is well defined.
\end{lemma}
\begin{proof} Recall
that the matrix representation of the operator $G$ is $G_{i,j} =
\langle e_i | Ge_j \rangle_{2}$ for $i,j \in \mathbb{N}_0$,
provided $Ge_j$ is in $\ell^2$ so that the inner product is
finite. Our hypotheses imply, then, that the matrix
representations of $G$ and $H$ exist.
Given an operator $G$, the adjoint operator $G^*$ satisfies
$\langle G^*u|v\rangle = \langle u|Gv \rangle$ for all $u,v$ which
are in the appropriate domains and for which $G^*u$ and $Gv$ are
in $\ell^2(\mathbb{N}_0)$. We now check for absolute convergence
of the matrix product sums from Equation
(\ref{Eqn:MatrixProduct}):
\begin{eqnarray*} \sum_{k=0}^{\infty} \left| G_{ik}H_{kj}
\right| &=& \sum_{k=0}^{\infty} \Big| \langle e_i|Ge_k \rangle
\langle e_k|He_j \rangle\Big| \\ &=& \sum_{k=0}^{\infty} \left|
\langle G^*e_i|e_k \rangle \langle e_k|He_j \rangle\right|
\\&\leq& \left(\sum_{k=0}^{\infty} \left|\langle G^*e_i|e_k \rangle \right|^2
\right)^{1/2} \left(\sum_{l=0}^{\infty} \left|\langle e_l|He_j
\rangle \right|^2 \right)^{1/2} \\
&=& \|G^*e_i\|_2 \|He_j\|_2 < \infty.
\end{eqnarray*}
We use the Cauchy-Schwarz inequality above since we know the
vectors $G^*e_i$ and $He_j$ are in $\ell^2$. Note that if $G,H$
are bounded operators, the sum above is bounded by
$\|G^*\|_{op}\|H\|_{op}$ for any choice of $i,j \in \mathbb{N}_0$.
Once we know the sums from Equation (\ref{Eqn:MatrixProduct}) are
absolutely convergent, they can be computed:
\begin{eqnarray*} \sum_{k=0}^{\infty} G_{i,k}H_{k,j} &=&
\sum_{k=0}^{\infty} \langle e_i|Ge_k \rangle \langle e_k|He_j
\rangle\\&=& \sum_{k=0}^{\infty} \langle G^*e_i|e_k \rangle
\langle e_k|He_j \rangle\\&=& \langle G^*e_i|He_j \rangle \qquad
\text{by Parseval's Identity} \\&=& \langle e_i |(GH)e_j \rangle\\
&=& (GH)_{i,j}.
\end{eqnarray*}
\end{proof}
Note that the conditions in Lemma \ref{Lem:Product} imply that the
operator $G$ is automatically closable. It is an immediate
corollary that bounded operators satisfy the hypotheses of Lemma
\ref{Lem:Product}, and thus their matrices will always have well
defined products.
\begin{corollary} If $G$ and $H$ are bounded operators on
$\ell^2(\mathbb{N}_0)$, then the product of their matrix
representations is well defined.
\end{corollary}
In some of the applications which follow (for example, the cases
in which both $G$ and $H$ are lower triangular matrices), the
range of the summation index will be finite. (See, for example, the
proof of Lemma \ref{Lemma:Amatrix}.) But we will also have occasion to
compute products $GH$ for pairs of infinite matrices $G$ and $H$
where the range of the summation index is infinite. In those
cases, we must check that the necessary sums are convergent.
There are two approaches to working with the product $GH$ of infinite
matrices $G$ and $H$. One is the computational approach we have
described above, and the other involves associating the matrices
to operators on appropriate Hilbert spaces (see Chapters
\ref{Ch:Kato} and \ref{Sec:IntOperators}). For many applications,
the first method is preferred. In fact, there are no known
universal, or canonical, procedures for turning an infinite matrix
into an operator on a Hilbert space (see e.g., \cite{Hal67} and
\cite{Jor06}), so often the computational approach is the only one
available.
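As a small numerical illustration of the computational approach (our own sketch, not part of the formal development), one can approximate an entry $(GH)_{i,j}$ by truncating the sum $\sum_k G_{ik}H_{kj}$. The matrices $G_{ik} = 2^{-(i+k)}$ and $H_{kj} = 3^{-(k+j)}$ below are illustrative choices whose rows and columns lie in $\ell^2$, so the Cauchy-Schwarz bound above guarantees absolute convergence:

```python
# Sketch: approximate an entry of the product of two infinite matrices
# by truncating the sum sum_k G[i,k] * H[k,j].  The matrices
# G[i,k] = 2^-(i+k) and H[k,j] = 3^-(k+j) are illustrative choices
# (not from the text); their rows and columns lie in l^2, so the
# Cauchy-Schwarz argument above gives absolute convergence.

def product_entry(G, H, i, j, n_terms=60):
    """Truncated matrix-product entry sum_{k=0}^{n_terms-1} G(i,k)H(k,j)."""
    return sum(G(i, k) * H(k, j) for k in range(n_terms))

G = lambda i, k: 2.0 ** -(i + k)
H = lambda k, j: 3.0 ** -(k + j)

# Closed form: sum_k 2^-(i+k) 3^-(k+j) = 2^-i * 3^-j * 1/(1 - 1/6)
approx = product_entry(G, H, 1, 2)
exact = 2.0 ** -1 * 3.0 ** -2 * (1.0 / (1.0 - 1.0 / 6.0))
assert abs(approx - exact) < 1e-12
```

Here the truncation error is geometric in the number of terms, which is typical of the examples treated later.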
\section{Inverses of infinite matrices}\label{Sec:Inverses}
Computations in later sections will require the notion of inverse for infinite
matrices. Let $G$ be an infinite matrix indexed, as discussed in Section \ref{subsec:index}, by the set $\mathbb{N}_0^d \times \mathbb{N}_0^d$. We will now make
precise our (admittedly abusive) use of the notation $G^{-1}$ for
an inverse. First, in order to discuss computations, the index set for
rows and columns must be equipped with an order. If $d > 1$, we
will use the order of the set $\mathbb{N}_0^d$ along successive finite diagonals. Our goal is to find an algorithm for the entries in the infinite matrix we denote
$G^{-1}$. This does turn out to be possible for the infinite matrices
we will be using in our analysis of moments and of
transformations.
\begin{definition}
Let $E$ be an infinite matrix. We say that $E$ is an \textit{idempotent} if the
matrix product $E^2$ is well defined and if $E^2 = E$.
\end{definition}
\begin{definition}\label{def:inverse}
Let $G, H, E_1$ and $E_2$ be infinite matrices. Assume that the matrix products $GH$ and $HG$ are well
defined. We say that $G$ is a \textit{left inverse} of $H$ if there is an idempotent matrix $E_1$ such that $GH = E_1$. $G$ is called a \textit{right inverse} of $H$ if there is an idempotent $E_2$ such
that $HG = E_2$. If $G$ is both a left and right inverse of $H$, $G$ is called an \textit{inverse} of $H$.
\end{definition}
While this notion of inverses for infinite matrices is not
symmetric, the following lemma does justify the use of the term
``inverse''.
\begin{lemma}\label{lem:inverse}
Let $G$ and $H$ be infinite matrices such that both
matrix products $GH$ and $HG$ are well defined. Suppose there is
an idempotent $E_1$ such that $GH = E_1$ and $E_1G = GE_1 = G$.
Then the infinite matrix $HG$ is an idempotent, denoted $E_2$,
which satisfies $E_2H = HE_1$ and $E_2E_1 = E_2$.
\end{lemma}
\begin{proof} First, we see that $HG$ is idempotent: $$HGHG =
H(E_1)G = HG.$$ The formulas are also readily verified: $$ E_2H =
HGH = HE_1
$$ and
$$ E_2E_1 = HGE_1 = HG = E_2.$$
\end{proof}
A consequence of Lemma \ref{lem:inverse} is that if we know the matrix
product $GH = E_1$ is an idempotent such that $E_1G = GE_1 = G$,
then by Definition \ref{def:inverse}, $G$ and $H$ are inverses of each other.
\begin{example}\label{Ex:Inverses} The following matrices arise in Example \ref{Ex:Rank2} with respect to measures which are convex combinations of Dirac masses. We are given the matrices
\begin{equation*}
G = \begin{bmatrix}
1 & 0 & 0 & \cdots\\
-1 & 2 & 0 & \cdots \\
0 & 0 & 0 &\cdots\\
\vdots & \vdots & &\ddots
\end{bmatrix}
\quad\text{and}\quad H_1 = \begin{bmatrix}
1 & 0 & 0 & \cdots\\
\frac{1}{2} & \frac{1}{2} & 0 &\cdots \\
\frac{1}{2} & \frac{1}{2} & 0 &\cdots \\
\vdots & \vdots &\vdots &\ddots\\
\end{bmatrix}.
\end{equation*}
We see that $GH_1 = E$, where $E$ is the idempotent (in fact, a projection)
\begin{equation*}
E = GH_1 = \begin{bmatrix}
1 & 0 & 0 & \cdots\\
0 & 1 & 0 & \cdots\\
0 & 0 & 0 & \cdots\\
\vdots & \vdots &\vdots & \ddots
\end{bmatrix}.
\end{equation*}
Therefore, $H_1$ is an inverse of $G$. Note that this inverse is not unique. The matrix
\begin{equation*}H_2= \begin{bmatrix}
1 & 0 & 0 & \cdots\\
\frac{1}{2} & \frac{1}{2} & 0 &\cdots \\
\frac{1}{2} & \frac{1}{2} & \frac{1}{2} &\cdots \\
\frac{1}{2} & \frac{1}{2} &\frac{1}{2} &\ddots\\
\vdots & \vdots & \vdots & \vdots
\end{bmatrix}
\end{equation*}
also satisfies $GH_2=E$ and hence is another inverse of $G$. \hfill $\Diamond$
\end{example}
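The identities $GH_1 = E = GH_2$ can be checked numerically on finite truncations (our own sketch; the trailing patterns of $H_1$ and $H_2$ are one consistent reading of the displayed dots, and only rows $0$ and $1$ actually matter, since every later row of $G$ vanishes):

```python
import numpy as np

# Sketch: verify G*H1 = E = G*H2 on finite truncations.  The trailing
# patterns of H1 and H2 are one consistent reading of the displayed
# dots; only rows 0 and 1 of H1 and H2 matter, since all later rows
# of G vanish.
N = 6
G = np.zeros((N, N)); G[0, 0] = 1.0; G[1, 0] = -1.0; G[1, 1] = 2.0

H1 = np.zeros((N, N)); H1[0, 0] = 1.0
H1[1:, 0] = H1[1:, 1] = 0.5            # every later row is (1/2, 1/2, 0, ...)

H2 = np.zeros((N, N)); H2[0, 0] = 1.0
for i in range(1, N):                   # row i carries 1/2 in columns 0..i
    H2[i, : i + 1] = 0.5

E = np.zeros((N, N)); E[0, 0] = E[1, 1] = 1.0   # the idempotent diag(1,1,0,...)

assert np.allclose(G @ H1, E)
assert np.allclose(G @ H2, E)           # the inverse is not unique
```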
\chapter{Boundedness and spectral properties}\label{Sec:Spectrum}
In cases such as the Hilbert matrix from Section \ref{Sec:Hilbert}, the operators $F$ and $F^*$ (which might be weighted or unweighted) are bounded, although in general they are unbounded, densely defined operators. In the first part of this chapter, we give sufficient conditions under which these operators are bounded, and hence the Kato operator $F^*F$ is bounded as well. In Section \ref{Sec:Spectra}, we use the theory of projection-valued measures described in Section \ref{Sec:pvm} to analyze connections among the spectrum of the Kato operator for a moment matrix $M^{(\mu)}$, the measure $\mu$ itself, and the associated integral operator. We will demonstrate these connections via the examples in Section \ref{Subsec:SpectrumExamples}. We define the rank of a measure in Section \ref{Subsec:RankTransformation} and illustrate it with examples involving convex combinations of Dirac measures.
\section{Bounded Kato operators}
Given a measure $\mu$ with finite moments of all orders, denote its moment matrix by $M=M^{(\mu)}$. We will use the generating function $G_{\mu}(x)$ having the even moments $M_{k,k}$ as coefficients to express conditions which ensure the operator $F^*_w$ on the weighted space $\ell^2(w)$ is bounded.
\begin{proposition}Let $\mu$ be a measure with compact support on $\mathbb{R}$ with moments of all orders. Let the generating function
\[
G_{\mu}(x) = \sum_{k=0}^{\infty} M_{k,k} x^k
\]
have a finite and positive radius of convergence $R$. Select $t$ such that $t > R$ and $\frac{1}{t} < R$. If we choose weights $w_k := t^k$, then $F^*_w$ is a bounded operator with
\[
\|F_w^*\|_{op} \leq \Biggl( G_{\mu}\Biggl(\frac{1}{t}\Biggr)\Biggr)^{1/2}.
\]
\end{proposition}
\begin{proof} Since $\frac{1}{t} < R$, the point $\frac{1}{t}$ lies within the radius of convergence, and hence
\[
G_{\mu}\Bigl(\frac{1}{t}\Bigr):=\sum_{k=0}^{\infty} \frac{M_{k,k}}{t^k} < \infty.
\]
We take $w_k:=t^k$ for our weights.
Now,
\begin{equation}\label{Eqn:FStarWPhi}
(F^*_{w}\phi)_k = \frac{1}{w_k} \int x^k \phi(x) \,\mathrm{d}\mu(x), \quad k\in\mathbb{N}_0.
\end{equation}
By the Cauchy-Schwarz inequality,
\[
\Big|
\int x^k \phi(x) \,\mathrm{d}\mu(x)
\Big|^2
\leq M_{k,k}\|\phi\|^2_{L^2(\mu)},
\]
and
\begin{equation}
\begin{split}
\|F^*_{w}\phi\|^2_{\ell^2(w)}
&=\sum_{k=0}^{\infty} \frac{1}{w_k} \Big|\int x^k \phi(x) \,\mathrm{d}\mu(x) \Big|^2\\
&\leq \sum_{k=0}^{\infty} \frac{1}{w_k} M_{k,k} \|\phi\|^2_{L^2(\mu)}\\
&= G_{\mu}\Bigl(\frac{1}{t}\Bigr) \|\phi\|^2_{L^2(\mu)}.
\end{split}
\end{equation}
Therefore, $F^*_w$ is a bounded operator with
\[
\|F_{w}^*\|^2_{L^2(\mu)\rightarrow\ell^2(w)}
= \sup_{\|\phi\|_{L^2(\mu)}=1} \|F_{w}^*\phi\|^2_{\ell^2(w)}
\leq G_{\mu}\Bigl(\frac{1}{t}\Bigr).
\]
Using the properties of adjoints of bounded operators, we can also conclude that $F$, $F^*_wF$ and $FF^*_w$ are bounded operators, with norms satisfying
\[
\|F_{w}^*\|_{op}^2 = \|F\|_{op}^2 = \|F_{w}^* F\|_{op} = \|F F^*_{w}\|_{op} \leq G_{\mu}\Bigl(\frac{1}{t}\Bigr).
\]
\end{proof}
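As a numerical sanity check of the proposition (our own illustration), take a measure whose diagonal moment-matrix entries are the Catalan numbers, $M_{k,k} = C_k$, as for the Wigner semicircle law on $[-2,2]$. Then $G_\mu(x) = \sum_k C_k x^k = (1-\sqrt{1-4x})/(2x)$, with radius of convergence $R = 1/4$, and the choice $t = 5$ satisfies $t > R$ and $\frac{1}{t} < R$:

```python
import math

# Numerical sanity check (our illustration): diagonal moment-matrix
# entries M_{k,k} = C_k (Catalan numbers), as for the Wigner
# semicircle law on [-2,2].  G_mu(x) = (1 - sqrt(1-4x)) / (2x) has
# radius of convergence R = 1/4, and t = 5 satisfies t > R, 1/t < R.

def catalan(k):
    return math.comb(2 * k, k) // (k + 1)

t = 5.0
series = sum(catalan(k) / t ** k for k in range(200))   # G_mu(1/t), truncated
closed = (1 - math.sqrt(1 - 4 / t)) / (2 / t)
assert abs(series - closed) < 1e-10

# the proposition's bound on ||F_w^*||_op for the weights w_k = t^k
bound = math.sqrt(series)
```

With these choices the bound evaluates to roughly $1.18$.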
\begin{lemma}Suppose $\mu$ is a measure on $\mathbb{R}$ with compact support and finite moments of all orders. Suppose in addition that $\textrm{supp}(\mu) \subset [-1,1]$, with $\,\mathrm{d}\mu(x) = B(x) \,\mathrm{d}x$, where $B$ is a nonnegative bounded function. Then the operators $F$, $F^*$, $F^*F$ and $FF^*$ are bounded, and
\[ \|F^*\|_{op}^2 = \|F\|_{op}^2 = \|F^* F\|_{op} = \|FF^*\|_{op} \leq \pi \|B\|_{\infty}.\]
\end{lemma}
\begin{proof} By \cite{Wid66} (a straightforward generalization of Theorem \ref{Thm:HilbertKFFM}), we know $FF^*$ is an integral operator of the form
\[
(FF^* \phi)(x)
=\int_{-1}^1 \frac{\phi(y)}{1-xy}\,\mathrm{d}\mu(y)
=\int_{-1}^1 \frac{\phi(y)B(y)}{1-xy}\,\mathrm{d}y.
\]
Let $K$ be the positive integral operator defined in Section \ref{Sec:Hilbert} which is equivalent to the moment matrix for Lebesgue measure. Then
\begin{equation*}
\begin{split}
\langle \phi | FF^*\phi \rangle_{L^2(\mu)}
& = \int_{-1}^1 \int_{-1}^1 \frac{\overline{\phi(x)}B(x)\phi(y)B(y)}{1-xy}\,\mathrm{d}y \,\mathrm{d}x\\
& = \langle \phi B | K\phi B \rangle_{L^2[-1,1]} \\
& = \langle K^{1/2} \phi B | K^{1/2} \phi B \rangle_{L^2[-1,1]} \\
& \leq \|K\|_{op} \|\phi B\|^2_{L^2[-1,1]} \\
& = \pi \int_{-1}^1 |\phi(x) B(x)|^2 \,\mathrm{d}x.
\end{split}
\end{equation*}
We recall that the operator norm of $K$ is $\pi$. In turn, the last expression is bounded above by
\[
\pi\|B\|_{\infty}\int_{-1}^{1}|\phi(x)|^2 B(x) \,\mathrm{d}x = \pi\|B\|_{\infty}\|\phi\|^2_{L^2(\mu)}.
\]
Therefore, $\|FF^*\|_{op}\leq \pi \|B\|_{\infty}$, and $\|F\|=\|F^*\| \leq \sqrt{\pi\|B\|_{\infty}}$.
\end{proof}
\begin{theorem}\label{Thm:HKBound}Suppose there exists a finite $t>1$ such that $\textrm{supp}(\mu)\subset [-t, t]$, $\mu$ has finite moments of all orders, and $\,\mathrm{d}\mu(x) = B(x) \,\mathrm{d}x$, with $B$ a bounded nonnegative function. Define weights $w = \{w_k\}_{k=0}^{\infty}$ where $w_k:=t^{2k}$ on $\ell^2(w)$. Then the operators $F$, $F^*$, $F^*_wF$, $FF^*_w$ are bounded, and the operator norm for $F^*_wF$ and $FF^*_w$ is bounded above by $t\pi \|B\|_{\infty}$.
\end{theorem}
\begin{proof}For each $k\in\mathbb{N}_0$,
\begin{equation*}
\begin{split}
(F_w^*\phi)_k
& = \frac{1}{w_k}\int_{-t}^t x^k \phi(x) \,\mathrm{d}\mu(x)\\
& = \frac{1}{t^{2k}}\int_{-t}^t x^k \phi(x) B(x) \,\mathrm{d}x\\
& = \frac{t}{t^k}\int_{-1}^1 u^k \phi(tu) B(tu) \,\mathrm{d}u,
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\|F_w^*\phi\|^2_{\ell^2(w)}
& = \sum_k \frac{1}{w_k} \Bigg| \int_{-t}^t x^k \phi(x) B(x) \,\mathrm{d}x\Bigg|^2\\
& = t^2 \sum_k \Bigg| \int_{-1}^1 u^k \phi(tu) B(tu)\,\mathrm{d}u\Bigg|^2.
\end{split}
\end{equation*}
But by the previous lemma, this last expression is less than or equal to
\begin{equation*}
\begin{split}
& t^2 \pi \|B\|_{\infty} \int_{-1}^1 |\phi(tu)|^2 B(tu) \,\mathrm{d}u\\
& = t\pi \|B\|_{\infty} \int_{-t}^t |\phi(x)|^2 B(x) \,\mathrm{d}x\\
& = t\pi \|B\|_{\infty} \|\phi\|^2_{L^2(\mu)}.
\end{split}
\end{equation*}
So, $\|F_w^*\|_{op}^2 = \|FF^*_w\|_{op} \leq t\pi\|B\|_{\infty}$. \end{proof}
The following is an immediate result of Theorem \ref{Thm:HKBound}.
\begin{corollary}\label{Cor:BWeights}
Suppose there exists a finite $t>1$ such that $\textrm{supp}(\mu)\subset [-t, t]$, $\mu$ has finite moments of all orders, and $\,\mathrm{d}\mu(x) = B(x) \,\mathrm{d}x$, with $B$ a nonnegative bounded function. Define weights $w=\{w_k\}_{k=0}^{\infty}$ where $w_k:=t^{2k}$ on $\ell^2(w)$. The operator $FF_w^*$ is an integral operator of the form
\[
(FF^*_w\phi)(x)
= \int_{-t}^t \frac{1}{1- \frac{xy}{t^2}} \phi(y) B(y)\,\mathrm{d}y.\]
\end{corollary}
We can use Theorem \ref{Thm:HKBound} to study operators from the examples in Chapter \ref{Sec:MomentTheory} where the moments were related to the Catalan numbers.
\begin{corollary}
Suppose $\mu$ is the Wigner semicircle measure in Example \ref{Ex:Wigner}. Then under the weights $w$ from Corollary \ref{Cor:BWeights}, $FF^*_w$ is bounded, with $\|FF^*_w\|_{op} \leq 4\pi$.
\end{corollary}
We would also like to apply Theorem \ref{Thm:HKBound} to Example \ref{Ex:Secant}, but we cannot since the function $B$ is not in $L^{\infty}$. Still, we can provide an estimate. We use a generating function $G$ defined by
\begin{equation}\label{Eqn:DefnGmu}
G(\zeta) = \sum_{k\in\mathbb{N}_0} \zeta^k \int |x|^k \,\mathrm{d}\mu(x) = \sum_{k\in\mathbb{N}_0} \zeta^k \mathcal{M}_k,
\end{equation}
where we recall the use of the notation $\mathcal{M}_k$ from Lemma \ref{Lemma:RClosable} in Section \ref{Sec:GeneralA}.
\begin{lemma}
Suppose $\textrm{supp}(\mu)\subset [-t, t]$, where $\mu$ has finite moments of all orders. Then the radius of convergence $R$ of the generating function $G$ from Equation (\ref{Eqn:DefnGmu}) satisfies $R \geq \frac{1}{t}$.
\end{lemma}
\begin{proof}We have
\[
\Biggl( \int_{-t}^t |x|^k \,\mathrm{d}\mu\Biggr)^{1/k} \leq t,
\]
where we use the facts that $\mathrm{ess\,sup}\,|x| \leq t$ in $L^{\infty}(\mu)$ and that $\mu$ is a probability measure. Therefore, $G$ converges absolutely whenever $|\zeta| t < 1$, and hence $R \geq \frac{1}{t}$. \end{proof}
Pick $w_k:=s^k$ for some $t^2 < s < \infty$. We can now establish an estimate for $\|F_w^*\phi\|_{\ell^2(w)}$ in terms of the generating function $G$:
\begin{equation*}
\begin{split}
\|F_w^* \phi\|^2_{\ell^2(w)}
& = \sum_{k} \frac{1}{s^k}\Bigg| \int x^k \phi(x) \,\mathrm{d}\mu(x) \Bigg|^2\\
& \leq \sum_k \frac{1}{s^k} \Biggl(\int_{-t}^t |x|^k |\phi(x)|\,\mathrm{d}\mu(x) \Biggr)^2.
\end{split}
\end{equation*}
Then, using Cauchy-Schwarz, we have
\begin{equation*} \begin{split} \|F^*_w\phi\|^2_{\ell^2(w)} & \leq \sum_k \frac{1}{s^k} \mathcal{M}_k \int |x|^k |\phi(x)|^2 \mathrm{d}\mu(x)\\
& = \int_{-t}^t G\Biggl(\frac{|x|}{s}\Biggr) |\phi(x)|^2 \,\mathrm{d}\mu(x).
\end{split}
\end{equation*}
The interchange of the sum and the integral above is justified by Tonelli's theorem, since every term is nonnegative.
We apply this observation to Example \ref{Ex:Secant}, the secant measure. In this case, $G(x) = 2(\sqrt{1-4x^2})^{-1}$ and $t = 2$. Choose $s > 4$ and set $w=\{w_k\}_{k=0}^{\infty}$ where $w_k = s^k$. Then in $\ell^2(w)$, we have
\[
\|F_w^* \phi\|^2_{\ell^2(w)} \leq \int_{-2}^2 \frac{2}{\sqrt{1-(\frac{2x}{s})^2}} |\phi(x)|^2 \,\mathrm{d}\mu(x).
\]
We next describe a result related to affine iterated function system measures. In particular, we show a sufficient condition under which the matrix $A$ (see Section \ref{Sec:GeneralA}) encoding an affine map $\tau$ is a Hilbert-Schmidt operator.
\begin{proposition}Consider the transformation $\tau(x) = cx+b$, and suppose $|c|\leq |b| < \frac{1}{4}$. Then the operator defined by the matrix $A$ which satisfies \[M^{(\mu \circ \tau)} = A^* M^{(\mu)}A\] is Hilbert-Schmidt; i.e., $A$ is bounded and $\textrm{trace}(A^*A) < \infty$.
\end{proposition}
\begin{proof}Let $k$ denote the row index in $A$, and let $j$ denote the column index in $A$; as before (\ref{Eqn:UpTriA}), we have
\begin{equation*}
A_{k,j}=
\begin{cases} \binom{j}{k} c^k b^{j-k} & 0 \leq k \leq j\\
0 & k > j\\
\end{cases}.
\end{equation*}
The $(i,j)^{\textrm{th}}$ entry of the infinite matrix $A^*A$ is therefore
\begin{equation*}
(A^*A)_{i,j}
= \sum_{0\leq k \leq i\wedge j} \overline{A_{k,i}}A_{k,j}
= \sum_{0\leq k \leq i\wedge j} \binom{i}{k} \binom{j}{k} |c^k|^2 \overline{b^{i-k}}b^{j-k},
\end{equation*}
and if $i=j$, the diagonal entries are
\begin{equation}
(A^*A)_{j,j} = \sum_{k=0}^j \binom{j}{k}^2 |c^k|^2 |b^{j-k}|^2.
\end{equation}
Set $\alpha:=|c|^2$ and $\beta:=|b|^2$.
Then, using the assumption that $\frac{\alpha}{\beta}\leq 1$, we have
\begin{equation}\begin{split}
\textrm{trace}(A^*A)
& = \sum_{j=0}^{\infty} (A^*A)_{j,j}
= \sum_{j=0}^{\infty} \sum_{k=0}^j \binom{j}{k}^2 \alpha^k\beta^{j-k}\\
&\leq \sum_{j=0}^{\infty} \beta^j \sum_{k=0}^j \binom{j}{k}^2
= \sum_{j=0}^{\infty} \beta^j \binom{2j}{j}.
\end{split}
\end{equation}
The last series has radius of convergence $\frac{1}{4}$; since $\beta = |b|^2 < \frac{1}{16} < \frac{1}{4}$, it converges, and hence $\textrm{trace}(A^*A) < \infty$.
\end{proof}
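The trace computation can be verified numerically (our own sketch): for $|c| = |b|$ the diagonal entries collapse, via the identity $\sum_k \binom{j}{k}^2 = \binom{2j}{j}$, to $\beta^j\binom{2j}{j}$, and the resulting series sums to $(1-4\beta)^{-1/2}$:

```python
import math

# Sketch (our numerical check): with tau(x) = c*x + b, the diagonal
# entries of A*A are sum_k binom(j,k)^2 |c|^(2k) |b|^(2(j-k)).  For
# |c| = |b| this collapses, via sum_k binom(j,k)^2 = binom(2j,j),
# to beta^j * binom(2j,j), whose sum is 1/sqrt(1 - 4*beta) for
# beta < 1/4.  The value c = b = 0.2 is an illustrative choice
# satisfying |c| <= |b| < 1/4.
c = b = 0.2
beta = abs(b) ** 2

def diag_entry(j):
    return sum(math.comb(j, k) ** 2 * abs(c) ** (2 * k) * abs(b) ** (2 * (j - k))
               for k in range(j + 1))

trace = sum(diag_entry(j) for j in range(80))   # truncated trace(A*A)
closed = 1.0 / math.sqrt(1.0 - 4.0 * beta)
assert abs(trace - closed) < 1e-10              # finite: A is Hilbert-Schmidt
```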
\section{Projection-valued measures}\label{Sec:pvm}
This section provides an overview of the properties of
projection-valued measures, leading to the spectral decomposition
for unbounded self-adjoint operators on a Hilbert space
$\mathcal{H}$. These ideas will be used in Section
\ref{Sec:Spectra} and later in Chapter \ref{Ch:Extensions} in order to discuss the
Kato-Friedrichs extension of operators. For far more detailed
treatment of this theory, we refer the reader to
\cite{Bag92,ReSi80,Rud91}.
Recall from Definition \ref{Def:SelfAdjoint} that an operator $H$
with dense domain $\mathrm{dom}(H)$ in a Hilbert space
$\mathcal{H}$ is said to be \textit{self adjoint} if $H^*=H$ and
$\mathrm{dom}(H^*) = \mathrm{dom}(H)$.
\begin{definition} A Borel \textit{projection-valued measure} $E$ on $\mathbb{R}$ is a
map from the $\sigma$-algebra $\mathcal{B}$ of Borel subsets of
$\mathbb{R}$ to the orthogonal projections on a Hilbert space
$\mathcal{H}$ such that
\begin{enumerate}[(i)] \item $E(\emptyset) = 0$, \item if $S =
\cup_{i=1}^{\infty} S_i$ and $S_i \cap S_j = \emptyset$ for $S_i
\in \mathcal{B}$ and $i \neq j$, then $$E(S) = \sum_{i=1}^{\infty}
E(S_i) = \lim_{i\rightarrow \infty} \sum_{j=1}^i E(S_j),$$ where
the limit is in the strong operator topology.\end{enumerate} A projection-valued measure $E$ is said to be \textit{orthogonal} if
\begin{equation}\label{Eqn:pvmMult} E(S_1 \cap S_2) = E(S_1)E(S_2)
\end{equation} for all Borel sets $S_1,S_2$. We will assume all projection-valued measures are orthogonal unless otherwise stated.
\end{definition}
Recall from the definition of an orthogonal projection that for
every Borel set $S$, we have $E(S) = E(S)^* = E(S)^2$.
There is a well-known theory which develops the notion of
integration of measurable real-valued functions against a
projection-valued measure $E$, resulting in an operator on the
Hilbert space $\mathcal{H}$. We will denote such an integral by
$\int_{\mathbb{R}} f(\lambda) E(\mathrm{d}\lambda)$. Quite
naturally, we start by defining integrals of characteristic
functions,
$$E(S) = \int_{\mathbb{R}} \chi_S(\lambda) E(\mathrm{d}\lambda),$$
then extend appropriately to measurable functions. The operator
is a bounded operator if and only if there exists an $M >0 $ such
that $E(|f|^{-1}(M,\infty)) = 0$ (the trivial projection). We say
that $E$ is a \textit{resolution of the identity} if
\begin{equation} I_{\mathcal{H}} = \int_{\mathbb{R}}
E(\mathrm{d}\lambda) = E(\mathbb{R}). \end{equation}
\begin{example}\cite{Bag92} Given $\mu$ a $\sigma$-finite Borel measure on $\mathbb{R}$, let
$\mathcal{H} = L^2(\mu)$. Given a Borel set $S \subseteq
\mathbb{R}$, define $E(S)$ to be a multiplication operator which
multiplies by the characteristic function $\chi_S$: $$E(S)(f) =
\chi_S f.$$ Then $E$ is a projection-valued measure on
$\mathbb{R}$, called the \textit{canonical projection-valued
measure}. Given $f$ a measurable real-valued function on
$\mathbb{R}$, $\int_{\mathbb{R}} f(\lambda) E(\mathrm{d}\lambda)$
has domain $\{g \in \mathcal{H}\,:\, fg \in \mathcal{H} \}$ and on
that domain,
$$\int_{\mathbb{R}} f(\lambda) E(\mathrm{d}\lambda)(g) = fg. $$ \hfill
$\Diamond$
\end{example}
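A finite-dimensional sketch of the canonical projection-valued measure (our own illustration, with $\mu$ supported on five points, so that multiplication by $\chi_S$ becomes a $0/1$ diagonal matrix) makes the defining properties easy to verify:

```python
import numpy as np

# Sketch: a finite-dimensional analogue of the canonical
# projection-valued measure.  On C^5 (standing in for L^2(mu) with
# mu supported on the five points {0,...,4}), E(S) multiplies by the
# indicator of S, i.e. is a 0/1 diagonal projection.
def E(S, n=5):
    return np.diag([1.0 if i in S else 0.0 for i in range(n)])

S1, S2 = {0, 1, 3}, {1, 2, 3}

# E(S) is an orthogonal projection: E(S) = E(S)* = E(S)^2
P = E(S1)
assert np.allclose(P, P.T) and np.allclose(P, P @ P)

# orthogonality (multiplicativity): E(S1 n S2) = E(S1) E(S2)
assert np.allclose(E(S1 & S2), E(S1) @ E(S2))

# additivity on disjoint sets (finite version of countable additivity)
assert np.allclose(E({0, 1}) + E({2}), E({0, 1, 2}))
```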
One useful property of projection-valued measures is that
integration is multiplicative, where the product of operators is
composition:
$$\int f(\lambda)g(\lambda)E(\mathrm{d}\lambda) = \int f(\lambda)
E(\mathrm{d}\lambda) \int g(\lambda)E(\mathrm{d}\lambda). $$
Corresponding to a projection-valued measure $E$, there is a parameterized family of real-valued
measures $\{\nu_v\}_{v \in \mathcal{H}}$ given by the
$\mathcal{H}$-inner product:
\begin{equation}\label{Eqn:RealMeasure} \nu_v(S) = \langle v | E(S)v \rangle. \end{equation}
Since orthogonal projections are positive operators, the quantity
$\langle v | E(S)v \rangle$ is nonnegative for any choice of $v$
and any set $S$. Given a projection-valued measure which is a
resolution of the identity, we have $$ \int_{\mathbb{R}}
\mathrm{d}\nu_v(\lambda) = \langle v | E(\mathbb{R})v \rangle =
\|v\|^2.$$
The following theorem gives a spectral decomposition for
self-adjoint operators.
\begin{theorem}[\cite{DS88,Rud91}] \label{Thm:pvm}An operator $H$ in a Hilbert
space $\mathcal{H}$ is self adjoint if and only if there exists an
orthogonal projection-valued measure $E$ supported on the spectrum
$\sigma(H)$ such that
\begin{equation}\label{Eqn:pvmH} Hv =\left[\int_{\mathbb{R}}\lambda
E(\mathrm{d}\lambda)\right]v = \left[\int_{\sigma(H)}\lambda
E(\mathrm{d}\lambda)\right]v \end{equation} for all $v \in
\mathrm{dom}(H)$. Moreover, $v \in \mathrm{dom}(H)$ if and only
if $\int_{\sigma(H)}\lambda^2 \,\mathrm{d}\nu_v(\lambda) < \infty$, in which case \begin{equation}\label{Eqn:pvmNorm} \|Hv\|^2 =
\int_{\sigma(H)}\lambda^2 \,\mathrm{d}\nu_v(\lambda),
\end{equation} where the measure $\nu_v$ is defined in Equation (\ref{Eqn:RealMeasure}).
\end{theorem}
While we leave the details of the proof to the cited literature,
it may be helpful here to include the computation which
demonstrates the equality given in Equation (\ref{Eqn:pvmNorm}).
Given $v \in \mathrm{dom}(H)$,
\begin{eqnarray*} \left\|\int \lambda E(\mathrm{d}\lambda)v\right\|^2 &=&
\left\langle \int \lambda E(\mathrm{d}\lambda)v \Bigr| \int
\lambda E(\mathrm{d}\lambda)v \right\rangle \\ &=& \left\langle
\Bigl[\int \lambda E(\mathrm{d}\lambda)\Bigr]^* \Bigl[\int \lambda
E(\mathrm{d}\lambda)\Bigr]v \Bigr| v \right\rangle \\ &=&
\left\langle\Bigl[\int \lambda E(\mathrm{d}\lambda)\Bigr]^2 v
\Bigr| v \right\rangle \quad \text{since the operator is self adjoint}
\\ &=& \left\langle \Bigl[\int
\lambda^2 E(\mathrm{d}\lambda)\Bigr]v \Bigr|v \right\rangle \quad \text{multiplicative prop. of integrals}\\
&=& \int \lambda^2 \mathrm{d}\nu_v(\lambda). \end{eqnarray*}
\hfill $\Box$
The formulas from Theorem \ref{Thm:pvm} together help us define
the domains of (possibly unbounded) self-adjoint operators defined
by integrals of measurable real-valued functions $f$ against a
projection-valued measure $E$. We say that $v$ is in the domain
of an operator $H = \int f(\lambda) E(\mathrm{d}\lambda)$ if
\begin{equation} \left\|Hv- \Bigl[\int_{-n}^n f(\lambda) E(\mathrm{d}\lambda)\Bigr] v\right\| \rightarrow 0 \end{equation} as
$n \rightarrow \infty$. If $H$ is a self-adjoint operator, we can
use the equality in (\ref{Eqn:pvmNorm}) which gives
\begin{equation*} \left\|Hv- \Bigl[\int_{-n}^n
\lambda E(\mathrm{d}\lambda)\Bigr] v \right\|^2 =
\int_{|\lambda|>n} \lambda^2 \mathrm{d}\nu_v(\lambda).
\end{equation*}
Therefore, a given vector $v \in \mathcal{H}$ is in
$\mathrm{dom}(H)$ if and only if \begin{equation} \lim_{n
\rightarrow \infty} \int_{|\lambda|>n} \lambda^2
\mathrm{d}\nu_v(\lambda) = 0.
\end{equation}
\section{Spectrum of the Kato operator}\label{Sec:Spectra}
In this section, we show how spectral analysis of the Kato-Friedrichs operator translates into spectral data for the initially given moment matrix $M^{(\mu)}$ and the measure $\mu$. We then illustrate the theory in several examples. Our main result (Theorem \ref{Thm:AtomsPair}) is that atoms in the spectrum of the Kato-Friedrichs operator pair up with atoms of the measure $\mu$ itself. Using this, we arrive at our own proof of the well-known fact (Theorem \ref{Thm:HilbertSpectrum}) that the Hilbert matrix has continuous spectrum.
In fact, in their spectral picture, the Kato-Friedrichs operators
are directly related to the associated moment matrices
$M^{(\mu)}$; this correspondence is especially transparent for
Examples
\ref{Subsec:KFExampleDelta1} and \ref{Subsec:KFConvex2Dirac}. In
these examples, the agreement of the rank of the Kato-Friedrichs
operators with the rank of the measure, which we define in Section \ref{Subsec:RankTransformation}, can be seen by inspection.
\begin{theorem}\label{Thm:AtomsPair} Let $\mu$ be a measure with compact support on the real line. Then the atoms in the spectrum of the Kato-Friedrichs operator of the moment matrix $M =M^{(\mu)}$ (if any) pair up with atoms of the measure $\mu$ itself.
\end{theorem}
Before we begin the proof, we will review a few key points about
spectral resolutions and operator theory.
Let $H$ be a densely defined, positive self-adjoint operator on the
Hilbert space $\mathcal{H}$. Then there exists (unique, up to
unitary equivalence) a projection-valued measure $E$ such that
\[I_{\mathcal{H}} = \int E(\mathrm{d}\lambda)\] and
\[H = \int_{\sigma(H)} \lambda E(\mathrm{d}\lambda).\]
In addition, for any $x \in \mathcal{H}$, given the spectral measure $\nu_x$ defined as in Equation (\ref{Eqn:RealMeasure}), we have
\[ \|Hx\|^2 = \int_{\sigma(H)} \lambda^2 \mathrm{d}\nu_x.\]
\begin{definition}\label{Defn:Atom}
An \textit{atom} in $H$ or in $E$ is a point $\lambda_1\in\mathbb{R}$ such
that $E(\{\lambda_1\})\neq 0$, i.e., $E(\{\lambda_1\})$ is not the trivial projection.
\end{definition}
If $\mathcal{H}$ is a complex Hilbert space, then we can define an
order on self-adjoint operators on $\mathcal{H}$:
\[
H\leq K \textrm{ if and only if } \langle x|Hx\rangle \leq \langle
x|Kx\rangle \textrm{ for all } x\in\mathcal{H}.
\] With this order, we observe that
if $E_1$ and $E_2$ are projections on $\mathcal{H}$, then
\[ E_1\leq E_2 \Leftrightarrow E_1 = E_1E_2 = E_2E_1.\]
In this case, we say that $E_1$ is a subprojection of $E_2$.
\begin{proof} Let $\mu$ be a compactly supported measure on $\mathbb{R}$. Denote the Kato-Friedrichs operator for the moment matrix $M=M^{(\mu)}$ by
$H$. Let $E$ be the projection-valued measure for $H$. Recall that in the unweighted case, the mapping $F^*:L^2(\mu)\rightarrow
\ell^2$ which takes $f_c(x)=\sum_{i\in\mathbb{N}_0} c_ix^i$ to
$\{c_i\}_{i\in\mathbb{N}_0}$ is an isometry.
Next, assume that $\lambda_1$ is an atom in the spectrum of $H$. Then $E(\{\lambda_1\})$ is a nontrivial projection, hence there exists $\xi_1\in
\mathrm{Range}(E(\{\lambda_1\}))$ with $\|\xi_1\|_{\ell^2} = 1$. We denote the rank-one projection onto $\xi_1$ (using Dirac notation;
see Chapter \ref{Sec:Notation}) by
\begin{equation}\label{Eqn:EKetBra}
E_{\lambda_1} = |\xi_1\rangle\langle\xi_1|.
\end{equation}
$H$ is a positive operator, hence $\lambda_1 > 0$. Since $H$ is the Kato-Friedrichs operator, we have \[\int |f_c|^2\mathrm{d}\mu = \|H^{1/2}c\|^2_{\ell^2}\] for all $c \in \mathcal{D}$. Therefore, there exists
$A_1\in \mathbb{R}^+$ such that
\begin{equation}\label{Eqn:AEH}
A_1E_{\lambda_1} \leq H.
\end{equation}
Equivalently, by the idempotent property of projections, for all
$c\in\mathcal{D}$ we have
\begin{equation}\label{Eqn:AEH2}
A_1\|E_{\lambda_1}c\|_{\ell^2}^2 \leq \|H^{1/2}c\|_{\ell^2}^2 = \int
|f_c|^2\,\mathrm{d}\mu.
\end{equation}
Recall that for all $c\in\ell^2$,
\begin{equation}
\langle c|E_{\lambda_1}c\rangle_{\ell^2} = \|E_{\lambda_1} c\|_{\ell^2}^2 = |
\langle\xi_1|c\rangle_{\ell^2}|^2.
\end{equation}
We now make use of the Hankel property from Definition \ref{Defn:HankelN0d} in the moment matrix $M$. Let
\[ S:\{c_0, c_1, c_2, \ldots\} \rightarrow \{0, c_0, c_1, \ldots\}\]
be the right shift operator in $\ell^2$. Then, on $\mathcal{D}$, the Hankel condition is equivalent
to
\begin{equation}
HS = S^*H.
\end{equation}
By the spectral theorem, it follows that
\begin{equation}
E_{\lambda_1}S = S^*E_{\lambda_1}.
\end{equation}
Looking ahead to the argument in Lemma \ref{Lem:indices}, we see that this implies that the vector $\xi_1$ is of the form \[ \xi_1 = [\begin{matrix} 1&b&b^2&b^3&\cdots \end{matrix}]^{tr}. \]
Combining Equation (\ref{Eqn:AEH2}) with the form of $E_{\lambda_1}$ and $\xi_1$, we obtain, for all $c \in \mathcal{D}$, \begin{equation}\label{Eqn:NormEc} A_1\|E_{\lambda_1}c\|_{\ell^2}^2 = A_1 \sum_{i=0}^{\infty} \left| \sum_{j=0}^{\infty} \xi_1(i)\overline{\xi_1(j)}c_j \right|^2 = A_1 \sum_{i=0}^{\infty} \left| \sum_{j=0}^{\infty} b^{i+j}c_j \right|^2 \leq \int|f_c|^2\,\mathrm{d}\mu.\end{equation}
Using Example \ref{Ex:Rank1} regarding the Dirac point-mass measure $\delta_b$ at the point $b$, we have
\[ A_1 \int |f_c|^2\mathrm{d}\delta_b = A_1|f_c(b)|^2 = A_1\left|\sum_{j=0}^{\infty} c_jb^j \right|^2.\]
Since the expression above is equal to the $i=0$ term from Equation (\ref{Eqn:NormEc}), we also have
\begin{equation}\label{Ineq:Aff}
A_1\int |f_c|^2 \,\mathrm{d}\delta_b \leq \int |f_c|^2\,\mathrm{d}\mu \textrm{ for all
}c\in\mathcal{D}.
\end{equation}
Since the support of $\mu$ is compact, the set
$\{f_c\}_{c\in\mathcal{D}}$ is dense in $L^2(\mu)$. Therefore, for all Borel sets
$E\in\mathcal{B}(\mathbb{R})$ there exists a sequence $\{c_k\}_{k \in \mathbb{N}_0} \subset \mathcal{D}$ such that
\[
f_{c_k}(x) \underset{L^2(\mu)}{\longrightarrow} \chi_{E}(x).
\]
Therefore, using the sequence $\{f_{c_k}\}$ to approximate
$\chi_{E}$, we get
\begin{equation}
A_1\int_E \,\mathrm{d}\delta_b \leq \int_E \,\mathrm{d}\mu,
\end{equation}
or equivalently,
\begin{equation}\label{Ineq:Adm}
A_1\delta_b(E) \leq \mu(E).
\end{equation}
Since $\delta_b(\{b\}) = 1$, it must be true that $\mu(\{b\}) > 0$. Therefore, when the Kato-Friedrichs operator $H$ has an atom, the measure $\mu$ has a corresponding atom. \end{proof}
The Kato-Friedrichs operator for the moment matrix for Lebesgue measure on the interval $[0,1]$ is the bounded operator defined relative to the standard ONB in $\ell^2$ by the Hilbert matrix. In particular, we need not introduce weights into the $\ell^2$ space in order to produce a closable quadratic form. This is a case when the Kato-Friedrichs operator has infinite rank (see Section \ref{Subsec:RankTransformation}). We now see that the well-known fact that the spectrum of this operator is purely continuous follows as an immediate corollary of Theorem \ref{Thm:AtomsPair}. (This result was first shown in \cite{Mag50}.)
\begin{theorem}\label{Thm:HilbertSpectrum}
The spectrum of the Hilbert matrix is continuous, i.e., there are
no atoms in the spectral resolution of the Hilbert matrix.
\end{theorem}
\begin{proof}
We saw that the action of the Hilbert matrix $M$ on $\ell^2$ defines a bounded (see \cite{Hal67}) self-adjoint operator, and $M$ is the moment matrix $M^{(\mu)}$ where $\mu$ is taken to be Lebesgue measure on the unit
interval $(0,1)$. Moreover, the quadratic forms below coincide on the space
$\mathcal{D}$ of finite vectors in $\ell^2$:
\begin{equation}
\| f_c \|^2_{L^2(\mu)} = Q_M(c) = \langle c | Mc\rangle_{\ell^2}
\textrm{ for all } c \in \mathcal{D}.
\end{equation}
Therefore, $M$ is the Kato-Friedrichs operator for the quadratic form $Q_M$.
If $M=M^{(\mu)}$ contained an atom, then there would be a
rank-one projection in the spectral resolution for $M^{(\mu)}$, and by Theorem \ref{Thm:AtomsPair} this would imply that Lebesgue measure restricted to $(0,1)$ would
contain an atom. That is a clear
contradiction, and therefore $M$ has continuous spectrum. \end{proof}
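A numerical aside (our own illustration): the finite sections of the Hilbert matrix, with entries $1/(i+j+1)$, are compressions of a bounded positive operator of norm $\pi$, so their largest eigenvalues increase toward $\pi$ without reaching it, consistent with the absence of atoms:

```python
import numpy as np

# Our numerical aside: H_n[i,j] = 1/(i+j+1) is the n-th finite section
# of the moment matrix of Lebesgue measure on (0,1), i.e. of the
# Hilbert matrix.  Each section is a compression of a positive bounded
# operator of norm pi, so the top eigenvalues increase with n but stay
# strictly below pi -- no eigenvalue ever appears in the limit.
def hilbert_section(n):
    i = np.arange(n)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)

tops = [np.linalg.eigvalsh(hilbert_section(n))[-1] for n in (5, 20, 80)]
assert tops[0] < tops[1] < tops[2] < np.pi
```

The convergence of the top eigenvalue to $\pi$ is known to be very slow, which is why even large sections remain visibly below $\pi$.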
In Section \ref{Subsec:SpectrumExamples}, we explicitly compute the Kato-Friedrichs operators in several examples and study their spectra. Recall that in general, the Kato-Friedrichs operator $H$ is a self-adjoint operator on $\ell^2(w)$ for some set of weights $w=\{w_k\}_{k \in \mathbb{N}_0}$. Note that the operator $H (= H_w)$ depends on the choice of $w$, and the choice of $w$ depends on the measure $\mu$. The lemma below demonstrates how the weights in a Hilbert space $\ell^2(w)$ arise in the equations governing the point spectrum of the Kato-Friedrichs operator.
Recall our notation for the weighted Hilbert spaces, where $w=\{w_i\}_{i \in \mathbb{N}_0}$ are the weights:
\[
\ell^2(w) = \Bigl\{c = \{c_i\}_{i\in\mathbb{N}_0} \Big|
\sum_{i\in\mathbb{N}_0} w_i|c_i|^2 < \infty\Bigr\}
\]
and
\[
\langle b|c\rangle_{\ell^2(w)}:=\sum_{i\in\mathbb{N}_0} w_i \overline{b_i}
c_i.
\]
In the following computations, we assume that \begin{enumerate} \item the measure $\mu$ is a
positive Borel measure on $\mathbb{R}$ with compact support having
moments of all orders, \item the weights $\{w_i\}_{i \in \mathbb{N}_0}$ satisfy $w_i > 0$ for all $i\in\mathbb{N}_0$, \item the monomials $x^i$ are in $L^2(\mu)\cap L^1(\mu)$ for all
$i \in \mathbb{N}_0$. \end{enumerate}
\begin{lemma}\label{Lemma:FundEqns} Let $\mu$ be a Borel measure on $\mathbb{R}$ with compact support and finite moments of all orders. Let $M = M^{(\mu)}$ be the moment matrix for $\mu$, and let $H$ be the Kato-Friedrichs operator for $M$ under an appropriate choice of weights $w = \{w_i\}_{i \in \mathbb{N}_0}$. A number $\lambda$ lies in the point spectrum of $H$, i.e.
\[
Hc = \lambda c
\]
for some nonzero $c$ in the domain of $H$, if and only if for each $i \in \mathbb{N}_0$,
\begin{equation}\label{Eqn:FundEqnPtSpec}
\sum_{j\in\mathbb{N}_0}M_{i,j}c_j = \lambda w_ic_i.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lemma:FundEqns}]
The (nonnegative) real number $\lambda$ is in the point spectrum of $H$ if and only if there is a nonzero $c$ in the domain of $H$ such that $Hc = \lambda c$. By definition of the Kato-Friedrichs operator, $H$ is related to the quadratic form $Q_M$ arising from $M$ via
\[ Q_M(c) = \sum_j\sum_k \overline{c_j}M_{j,k}c_k = \|H^{1/2}c\|^2_{\ell^2(w)}, \quad \forall c\in\mathcal{D}.\]
When we pass to the completion, given $c\in \textrm{dom}(H^{1/2})$ and $f_c\in
L^2(\mu)$, we have the identity
\[ \int |f_c|^2 \,\mathrm{d}\mu = \| H^{1/2} c\|^2_{\ell^2(w)}.\]
Recall that $f_c\in L^2(\mu)$ if and only if $c\in
\textrm{dom}(H^{1/2})$; this follows from the completion in the
Kato-Friedrichs construction.
There is also a sesquilinear form $S_M$ determined by $M$ (connected to $Q_M$ by the polarization identity) which satisfies
\begin{equation}\label{Eqn:Sesq} S_M(b,c) = \sum_j\sum_k \overline{b}_j M_{j,k} c_k = \langle b|Hc \rangle_{\ell^2(w)} \end{equation} for all $b \in \mathcal{D}, c \in \mathrm{dom}(H)$.
For each $i \in \mathbb{N}_0$, let $b = e_i$, the $i$-th standard basis vector. Equation (\ref{Eqn:Sesq}) becomes
\[
\sum_j M_{i,j}c_j = \langle e_i|Hc\rangle_{\ell^2(w)} = w_i(Hc)_i .\] We now have that $Hc=\lambda c$ for some nonzero vector $c$ in the domain of $H$
if and only if
\[\sum_j M_{i,j} c_j = w_i(Hc)_i = w_i \lambda c_i.\]
\end{proof}
In Equations (\ref{Eqn:FundEqnPtSpec}), we will seek
solutions for $\lambda$, $w$, and $c$. We require
\[ c\in \ell^2(w) \quad \textrm{and}\quad \|c\|_{\ell^2(w)}=1,\]
i.e.,
\begin{equation}\label{Eqn:SumCWOne}
\sum_{i\in\mathbb{N}_0} w_i |c_i|^2 = 1.
\end{equation}
In the examples shown in Section \ref{Subsec:SpectrumExamples}, we will find that it is relatively easy to solve
(\ref{Eqn:SumCWOne}) for a single Dirac mass $\mu = \delta_b$ for any $b\in \mathbb{R}$.
However, in the case of convex combinations of point masses,
finding solutions to (\ref{Eqn:FundEqnPtSpec}) and
(\ref{Eqn:SumCWOne}) is more delicate.
\section{Rank of measures}\label{Subsec:RankTransformation}
Consider a probability measure $\mu$ on $\mathbb{C}$ or on
$\mathbb{R}^d$, and assume that $\mu$ has finite moments of all orders.
Pick an order of the index set for the monomials, and let
$\{p_k\}_{k \in \mathbb{N}_0^d}$ be the associated orthogonal polynomials in $L^2(\mu)$
defined from $\mu$; further, let $G$ be the corresponding Gram
matrix. Let $\mathcal{P}$ be the closed subspace in $L^2(\mu)$
spanned by the monomials.
\begin{definition}\label{Defn:Rank}
We define the \textit{rank of $G$}, and hence the \textit{rank of the measure $\mu$}, to be the
dimension of the Hilbert space $\mathcal{P}$.
\end{definition}
We note two things in Definition \ref{Defn:Rank}. First,
$\{p_k\}_{k \in \mathbb{N}_0^d}$ is an ONB in $\mathcal{P}$. Second, $\mathcal{P} =
L^2(\mu)$ if and only if the monomials are dense in $L^2(\mu)$;
this is known to be true if $\mu$ has compact support, but it need
not be true in general. We will return to this point in Chapter \ref{Ch:Extensions}.
\begin{example}\label{Ex:Rank1} A Dirac measure has rank 1. \end{example}
Let $b$ be a fixed complex number, and let $\mu:=\delta_b$ be the
corresponding Dirac measure. Then the moments $m_k$ of $\mu$ are
\begin{equation}
m_k = \int_{\mathbb{C}} z^k \,\mathrm{d}\mu(z) = b^k, \:\:k\in\mathbb{N}_0;
\end{equation}
the moment matrix is
\begin{equation}
M^{(\mu)}_{j,k} = \overline{b}^jb^k.
\end{equation}
If $\mathcal{D}$ is the space of finite sequences, $c =
\{c_j\}_{j\in\mathbb{N}_0}$, then
\begin{equation}\label{Eqn:DiracQFb}
\langle c|M^{(\mu)}c\rangle_{\ell^2} = \Big|
\sum_{j\in\mathbb{N}_0}c_jb^j\Big|^2.
\end{equation}
We showed in Example \ref{Ex:ConvexDirac} that $\text{dim}(L^2(\mu)) = 1$, so it follows that $p_0(z) \equiv
1$ and $p_k \equiv 0$ for $k\geq 1$. We use Equations (\ref{Eqn:GramMatrix}) and (\ref{eqn:grammatrix}) together with Lemma \ref{Lem:GramInverse} to compute the
following formulas for the infinite matrices $G$
and $G^{-1}$:
\begin{equation}
G = \begin{bmatrix}
1 & 0 & 0 & \cdots\\
0 & 0 & 0 & \cdots \\
0 & 0 & 0 & \\
\vdots & \vdots & & \ddots
\end{bmatrix}
\quad \text{and}\quad G^{-1} = \begin{bmatrix}
1 & 0 & 0 & \cdots\\
b & 0 & 0 & \cdots \\
b^2 & 0 & 0 & \\
\vdots & \vdots & & \ddots
\end{bmatrix}.\end{equation}
We see here that $G$ has rank $1$. Note that we are using the notion of inverse described in Section \ref{Sec:Inverses}. We see that $GG^{-1}$ is an idempotent rank-one matrix. A direct computation further yields
\begin{equation}
G^{-1}(G^{-1})^* = (b^j\overline{b}^k) = (M^{(\mu)})^{tr},
\end{equation}
as predicted by Lemma \ref{Lem:GramInverse} and Equation (\ref{Eqn:RStarR}).
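The rank-one structure can be illustrated with a quick numerical check (not part of the original argument; the value of $b$, the truncation size $N$, and the test vector $c$ below are arbitrary choices, and the conjugates are dropped since $b$ is taken real):

```python
# Sketch (illustrative): truncated moment matrix of mu = delta_b for real b.
# M[j][k] = b^(j+k); its quadratic form is |sum_j c_j b^j|^2 and its rank is 1.
b, N = 0.7, 8
M = [[b ** (j + k) for k in range(N)] for j in range(N)]

c = [1.0, -2.0, 0.5, 3.0] + [0.0] * (N - 4)       # a finite sequence in D

quad_form = sum(c[j] * M[j][k] * c[k] for j in range(N) for k in range(N))
poly_at_b = sum(c[j] * b ** j for j in range(N))  # f_c evaluated at b
assert abs(quad_form - poly_at_b ** 2) < 1e-10    # the quadratic form identity

# rank one: every 2x2 minor of M vanishes (up to rounding)
minors = [M[i][k] * M[j][l] - M[i][l] * M[j][k]
          for i in range(N) for j in range(i + 1, N)
          for k in range(N) for l in range(k + 1, N)]
assert max(abs(m) for m in minors) < 1e-10
```

The minors vanish because every column of $M^{(\mu)}$ is a multiple of the single vector $(1, b, b^2, \ldots)^{tr}$.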
\begin{example}\label{Ex:Rank2}A measure with rank 2.\end{example}
Let $\mu:=\frac{1}{2}(\delta_0 + \delta_1)$. The zeroth moment of
$\mu$ is $1$, and all the other moments are $\frac{1}{2}$.
The associated orthogonal polynomials are
\begin{equation}
p_0(z) \equiv 1, \qquad p_1(z) = 2z - 1, \quad \text{and}\quad
p_2(z) = p_3(z) = \ldots \equiv 0.
\end{equation}
We then compute the Gram matrix and an inverse (in the sense of Definition \ref{def:inverse} and Example \ref{Ex:Inverses}):
\begin{equation}
G = \begin{bmatrix}
1 & 0 & 0 & \cdots\\
-1 & 2 & 0 & \cdots \\
0 & 0 & 0 &\cdots\\
\vdots & \vdots & &\ddots
\end{bmatrix}
\quad\text{and}\quad G^{-1} = \begin{bmatrix}
1 & 0 & 0 & \cdots\\
\frac{1}{2} & \frac{1}{2} & 0 &\cdots \\
\frac{1}{2} & \frac{1}{2} & 0 &\cdots \\
\vdots & \vdots &\vdots &\ddots\\
\end{bmatrix}.
\end{equation}
Again we can compute directly the two idempotents $E_1$
and $E_2$.
\begin{equation}
E_1 = GG^{-1} = \begin{bmatrix}
1 & 0 & 0 & \cdots\\
0 & 1 & 0 & \cdots\\
0 & 0 & 0 & \cdots\\
\vdots & \vdots &\vdots & \ddots
\end{bmatrix},
\end{equation}
and
\begin{equation}
E_2 = G^{-1}G = \begin{bmatrix}
1 & 0 & 0 & 0 & \cdots\\
0 & 1 & 0 & 0 & \cdots\\
0 & 1 & 0 & 0 & \cdots\\
0 & 1 & 0 & 0 & \cdots\\
\vdots & \vdots & \vdots & \vdots &\ddots\\
\end{bmatrix}.
\end{equation}
Note that $E_1$ is a projection; i.e., $E_1 = E_1^* = E_1^2$,
while $E_2$ is only an idempotent; i.e., $E_2^2 = E_2$. Note
that these statements are about infinite matrices. In fact,
if $E_2$ is viewed as an operator on $\ell^2$, it is unbounded
(for instance, $E_2\mathbf{e}_1 = (0,1,1,1,\ldots)^{tr} \not\in \ell^2$). So,
\[\textrm{rank}(G) = \textrm{rank}(\mu) = \textrm{rank}(M^{(\mu)}) = \textrm{dim}(L^2(\mu)) = 2.\]
\hfill$\Diamond$
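The displayed matrices can be checked directly on truncations. The following sketch (illustrative only, using exact rational arithmetic and an arbitrary truncation size $N$) verifies the orthonormality of $p_0, p_1$ in $L^2(\mu)$ and the idempotent identities for $E_1$ and $E_2$:

```python
# Sketch (illustrative): N x N truncations of G and its generalized inverse
# for mu = (delta_0 + delta_1)/2, using exact rational arithmetic.
from fractions import Fraction as F

N = 6
half = F(1, 2)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# orthonormality of p_0(x) = 1 and p_1(x) = 2x - 1 in L^2(mu)
def inner(p, q):                 # integration against mu
    return half * p(0) * q(0) + half * p(1) * q(1)

p0 = lambda x: 1
p1 = lambda x: 2 * x - 1
assert inner(p0, p0) == 1 and inner(p1, p1) == 1 and inner(p0, p1) == 0

G = [[F(0)] * N for _ in range(N)]           # truncation of the Gram matrix
G[0][0], G[1][0], G[1][1] = F(1), F(-1), F(2)

Ginv = [[F(0)] * N for _ in range(N)]        # truncation of the displayed inverse
Ginv[0][0] = F(1)
for i in range(1, N):
    Ginv[i][0] = Ginv[i][1] = half

E1, E2 = matmul(G, Ginv), matmul(Ginv, G)
assert E1 == [[F(1) if i == j and i < 2 else F(0) for j in range(N)]
              for i in range(N)]             # projection onto first two coordinates
assert matmul(E2, E2) == E2                  # idempotent
assert any(E2[i][j] != E2[j][i] for i in range(N) for j in range(N))  # not self-adjoint
```

The truncated products here are exact because $G$ and the displayed inverse have only finitely many nonzero rows and columns, respectively.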
\iffalse
In the next result, we show that $\textrm{rank}(\mu)$ is monotone
under transformations $\mu\mapsto\mu\circ\tau^{-1}$ of measures.
\begin{corollary}\label{Cor:RankMonotone}
Let $\mu$ be a Borel probability measure on $\mathbb{C}$, let
$X\subset \mathbb{C}$ be a Borel subset, and let
$\tau:X\rightarrow X$ be a Borel endomorphism. Assume that $\mu$
and $\mu\circ\tau^{-1}$ have moments of all orders.
If condition (i) in Theorem \ref{Thm:Isometry} holds, then
\begin{equation}\label{Eqn:RankDecrease}
\textrm{rank}(\mu\circ\tau^{-1}) \leq \textrm{rank}(\mu)
\end{equation}
\end{corollary}
\begin{proof} Under the assumptions, we get an isometry
\[V_A:L^2(\mu\circ\tau^{-1})\rightarrow L^2(\mu)\] defined from
the identity
\begin{equation}\label{Eqn:RankIsometry}
\int_{\mathbb{C}} |f_c|^2 d(\mu\circ\tau^{-1}) = \int_{\mathbb{C}}
|f_{Ac}|^2 \,\mathrm{d}\mu,
\end{equation}
where $A$ is the infinite by infinite matrix from Theorem
\ref{Thm:Isometry} (i).
This means that $\mathcal{P}(\mu\circ\tau^{-1})$ is
isometrically embedded in $\mathcal{P}(\mu)$ via $V_A:f_c\mapsto
f_{Ac}$, so
\begin{equation}
\textrm{dim}(\mathcal{P}(\mu\circ\tau^{-1})) \leq \textrm{dim}
(\mathcal{P}(\mu)),
\end{equation}
which is the desired conclusion.\end{proof}
As illustrated in Section \ref{Subsec:Hankel}, finite iterated
function systems (IFSs) induce transformations as well. Let
$\{\tau_i\}_{i=1}^N$ be a system of measurable transformations
$\tau_i:X\rightarrow X$ satisfying the conditions of Corollary
\ref{Cor:RankMonotone}. Let $\{p_i\}_{i=1}^N$ be a finite
probability distribution, $p_i \geq 0$, $\sum_{i=1}^N p_i = 1$.
Set
\begin{equation}\label{Eqn:RankT}
S(\mu): = \sum_{i=1}^N p_i \mu\circ\tau^{-1}.
\end{equation}
\begin{corollary}
Let $\{\tau_i, p_i\}$ be an IFS. Let $\mu\mapsto S(\mu)$ be the
transformation of probability measures given by (\ref{Eqn:RankT}).
If there are transformation matrices $A_i$ such that
\begin{equation}\label{Eqn:RankAMA}
A_i^*M^{(\mu)} A_i = M(\mu\circ\tau_i^{-1}),
\end{equation}
then
\begin{equation}\label{Eqn:RankMonotoneT}
\textrm{rank}(S(\mu)) \leq \textrm{rank}(\mu).
\end{equation}
\end{corollary}
\begin{proof}Let $Q_{\mu}$ denote the quadratic form from
Corollary \ref{Cor:RankMonotone} and Theorem \ref{Thm:Isometry}.
For $c\in\mathcal{D}$,
\begin{equation}
\begin{split}
Q_{S(\mu)}
& = \langle c|M^{(S(\mu))}c\rangle_{\ell^2}\\
& = \sum_{i=1}^N p_i \langle c|M^{(\mu\circ\tau^{-1})}c\rangle_{\ell^2}\\
& = \sum_{i=1}^N p_i \langle A_ic|M^{(\mu)}A_ic\rangle_{\ell^2},
\end{split}
\end{equation}
and
\begin{equation}
\int_{\mathbb{C}} |f_c|^2 d(S(\mu)) = \sum_{i=1}^N p_i
\int_{\mathbb{C}} |f_{A_{i} c}|^2 \,\mathrm{d}\mu.
\end{equation}
The desired conclusion (\ref{Eqn:RankMonotoneT}) now follows by
the reasoning in Corollary \ref{Cor:RankMonotone}.\end{proof}
\fi
\section{Examples}\label{Subsec:SpectrumExamples}
In Example \ref{Exam:LebBounded}, we use Lebesgue measure and the corresponding Hilbert matrix to illustrate the case where the moment matrix is a bounded operator. We can also demonstrate the IFS techniques from Chapter \ref{Sec:Exist} here. Following that, we use the measures $\mu = \delta_1$ and $\mu = \frac12 (\delta_0 + \delta_1)$ to illustrate the spectral results from this chapter. In both cases, we find appropriate choices for the weights which yield Kato-Friedrichs operators on the Hilbert space $\ell^2(w)$. Since the operator depends on the choice of weights, we will denote the Kato-Friedrichs operator by $H_w$. In Example \ref{Subsec:KFExampleDelta1}, for a particular choice of $w$, the associated Kato-Friedrichs operator is a rank-one projection. We similarly analyze the spectrum of the Kato-Friedrichs operator (which has rank $2$) in Example \ref{Subsec:KFConvex2Dirac}.
\begin{example}\label{Exam:LebBounded} Lebesgue measure and the Hilbert matrix. \end{example}
Let $\mu$ be Lebesgue measure restricted to $[0,1]$, so that the moment matrix $M^{(\mu)}$ is the Hilbert matrix previously discussed in Example \ref{Ex:Lebesgue} and Section \ref{Sec:Hilbert}. Recall from \cite{Wid66} that the Hilbert matrix represents a bounded operator on $\ell^2$ with operator norm $ \|M^{(\mu)}\|_{op} = \pi$.
It is well known that $\mu$ is exactly the equilibrium measure arising from the real affine IFS $\{\tau_i\}_{i=0,1}$ where \[\tau_0(x) = \frac{x}{2} \; \textrm{and} \; \tau_1(x) = \frac{x+1}{2}, \] together with equal weights $\frac12$. The IFS invariance property for moment matrices is \begin{equation}\label{Eqn:HilbertInvar} M^{(\mu)} = \frac12 \Bigr( A_0^*M^{(\mu)}A_0 + A_1^*M^{(\mu)}A_1\Bigr), \end{equation} where Lemma \ref{Lemma:Amatrix} gives the matrices $A_0$ and $A_1$ (from \cite{EST06}):
\[A_0 = \left[ \begin{matrix} 1&0&0&\cdots\\ 0&\frac12 & 0& \cdots\\0&0& \frac14 & \cdots\\ \vdots & \vdots & & \ddots \end{matrix}\right] \qquad A_1 = \left[ \begin{matrix} 1&\frac12&\frac14&\cdots\\ 0&\frac12 & \frac12& \cdots\\0&0& \frac14 & \cdots\\ \vdots & \vdots & & \ddots \end{matrix}\right]. \]
Since the infinite matrices $A_0$ and $A_1$ are upper triangular (and hence $A_0^*$ and $A_1^*$ are lower triangular), the matrix multiplication computations in (\ref{Eqn:HilbertInvar}) involve only \textit{finite} sums. In addition, when we check the appropriate convergence issues, each of the matrices $A^*_iM^{(\mu)}A_i = M^{(\mu \circ \tau_i^{-1})}$, for $i=0,1$, represents a bounded operator on $\ell^2$. We compute directly $A_0^*M^{(\mu)}A_0$:
\begin{eqnarray*} (A_0^*M^{(\mu)}A_0)_{i,j} &=& \sum_{k=0}^{\infty} \sum_{\ell=0}^{\infty} (A_0^*)_{i,k}M^{(\mu)}_{k,\ell}(A_0)_{\ell,j} \\ &=& (A^*_0)_{i,i} \frac{1}{1+i+j} (A_0)_{j,j}\\ &=& \Bigr(\frac{1}{2^{i+j}} \Bigr)\frac{1}{1+i+j} = M^{(\mu \circ \tau_0^{-1})}_{i,j}. \end{eqnarray*}
The entries of the matrix $A_0^*M^{(\mu)}A_0$ together with the IFS invariance property (\ref{Eqn:HilbertInvar})
\begin{eqnarray*} \frac12 \Bigr( A_0^*M^{(\mu)}A_0 + A_1^*M^{(\mu)}A_1\Bigr)_{i,j} &=& \frac12 \Bigr[ \Bigr(\frac{1}{2^{i+j}}\Bigr) \frac{1}{1+i+j} + \Bigr(2-\frac{1}{2^{i+j}}\Bigr)\frac{1}{1+i+j} \Bigr]\\ &=& \frac{1}{1+i+j} = M^{(\mu)}_{i,j} \end{eqnarray*}
allow us to compute the entries of the matrix $A_1^*M^{(\mu)}A_1$:
\begin{eqnarray}\label{Eqn:A1Matrix} (A_1^*M^{(\mu)}A_1)_{i,j} &=& \sum_{k=0}^{\infty} \sum_{\ell=0}^{\infty} (A_1^*)_{i,k}M^{(\mu)}_{k,\ell}(A_1)_{\ell,j}\nonumber\\ &=& \frac{1}{2^{i+j}}\sum_{k=0}^{i} \sum_{\ell=0}^{j} \binom{i}{k} \binom{j}{\ell} \frac{1}{1+k+\ell} \\ &=&
\Bigr(2-\frac{1}{2^{i+j}} \Bigr) \frac{1}{1+i+j}.\nonumber\end{eqnarray}
We note that (\ref{Eqn:A1Matrix}) yields an interesting identity on binomial coefficients. Given $i,j \in \mathbb{N}_0$,
\begin{equation}\label{Eqn:Binomial} \sum_{k=0}^i \sum_{\ell=0}^j \binom{i}{k}\binom{j}{\ell} \frac{1}{1+k+\ell} = \frac{2^{i+j+1}-1}{1+i+j}. \end{equation}
The infinite matrices $A_0$ and $A_1$ represent bounded operators on the Hilbert space $\ell^2$ with \[ \|A_i\|_{op} = \|A_i^*\|_{op} = 1\quad i=0,1 .\] These are therefore contractive operators on $\ell^2$.
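Since $A_0$ and $A_1$ are upper triangular, the top-left $N\times N$ block of each $A_i^*M^{(\mu)}A_i$ is computed exactly from $N\times N$ truncations, so the invariance property (\ref{Eqn:HilbertInvar}) can be verified in exact rational arithmetic. The sketch below (illustrative only; the block size $N$ is an arbitrary choice) also confirms the closed form for $A_0^*M^{(\mu)}A_0$:

```python
# Sketch (illustrative, exact arithmetic): verify the IFS invariance on a block.
from fractions import Fraction as F
from math import comb

N = 8
M = [[F(1, 1 + i + j) for j in range(N)] for i in range(N)]   # Hilbert matrix
A0 = [[F(1, 2 ** j) if i == j else F(0) for j in range(N)] for i in range(N)]
A1 = [[F(comb(j, i), 2 ** j) if i <= j else F(0) for j in range(N)]
      for i in range(N)]

def mat(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def tr(A):                       # transpose (the matrices are real)
    return [[A[j][i] for j in range(N)] for i in range(N)]

lhs = mat(tr(A0), mat(M, A0))    # A_0^* M A_0
rhs = mat(tr(A1), mat(M, A1))    # A_1^* M A_1

# closed form for A_0^* M A_0 computed in the text
assert all(lhs[i][j] == F(1, 2 ** (i + j) * (1 + i + j))
           for i in range(N) for j in range(N))

# the IFS invariance property holds exactly on the block
invariant = [[F(1, 2) * (lhs[i][j] + rhs[i][j]) for j in range(N)]
             for i in range(N)]
assert invariant == M
```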
\hfill $\Diamond$
\begin{example}\label{Subsec:KFExampleDelta1} The spectrum of the Kato-Friedrichs operator for $\mu = \delta_1$. \end{example}
Let $\mu = \delta_1$, the Dirac point mass at $1$. We computed the Kato-Friedrichs operator $H_w$ in Example \ref{Ex:FStarDelta1}, where the weights $w = \{w_j\}_{j \in \mathbb{N}_0} \subset \mathbb{R}^+$ are chosen such that $\sum_j \frac{1}{w_j} < \infty$. We found that $H_w$ is a (bounded) rank-one operator on $\ell^2(w)$ defined by
\[ (H_wc)_j = \frac{1}{w_j} \sum_k c_k.\]
Recall that we also know from Example \ref{Ex:ConvexDirac} that the dimension of $L^2(\mu)$ must be exactly $1$.
If we choose weights $\{w_i\}\subset\mathbb{R}^+$
such that
\[ \sum_{i\in\mathbb{N}_0} \frac{1}{w_i} = 1,\]
then $H_w:\ell^2(w)\rightarrow \ell^2(w)$
is in fact a rank-one projection---that is,
\begin{equation}
H_w^2 = H_w = (H_w)^*.
\end{equation}
In this case, we know from the spectral theory of projections that the spectrum of $H_w$ is exactly $\{0,1\}$, and the operator norm of $H_w$ is $1$.
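For a concrete weight choice one can check the projection property numerically; the sketch below (illustrative only) assumes the weights $w_i = 2^{i+1}$, so that $\sum_i 1/w_i = 1$, and works on a large truncation, where the identities hold up to an error of order $2^{-N}$:

```python
# Sketch (illustrative): truncation of H_w with the assumed weights
# w_i = 2^(i+1), a concrete choice satisfying sum_i 1/w_i = 1.
N = 50
w = [2.0 ** (i + 1) for i in range(N)]        # truncated sum of 1/w_i is 1 - 2^(-N)

def Hw(c):                                    # (H_w c)_j = (1/w_j) sum_k c_k
    s = sum(c)
    return [s / wj for wj in w]

def ip(u, v):                                 # inner product of l^2(w)
    return sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))

c = [1.0 / (i + 1) for i in range(N)]         # arbitrary test vectors
b = [1.0 / (i + 2) for i in range(N)]

Hc, HHc = Hw(c), Hw(Hw(c))
assert max(abs(x - y) for x, y in zip(HHc, Hc)) < 1e-9     # H_w^2 = H_w
assert abs(ip(b, Hw(c)) - ip(Hw(b), c)) < 1e-9             # self-adjoint
```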
\iffalse
\noindent\textbf{Uniqueness of solutions to $H_wc = \lambda c$.
}We will now show that the \textit{only} solutions for $Hc =
\lambda c$ are the ones which have already been stated.
\bigskip
\textbf{Case 1: }Suppose $\lambda = 0$. In this case, there are many solutions for $c$, for example $c = (1, -1, 0, 0, \ldots)$, and the weights are not restricted.
\bigskip
\textbf{Case 2: }Suppose $\lambda = 1$. Then
\[
c_i = \frac{1}{\lambda w_i}\sum_{j\in\mathbb{N}_0} c_j,
\]
and
\[
\sum_{i\in\mathbb{N}_0}w_i |c_i|^2 = \Big| \sum_{j\in\mathbb{N}_0}
c_j\Big|^2\frac{1}{\lambda^2}
\sum_{i\in\mathbb{N}_0}\frac{1}{w_i}.
\]
Since these solutions are proportional when $w$ is fixed, we only
need $\lambda = 1$ and $\sum_{j}c_j = 1$.
\bigskip
The $\lambda = 0$ solutions are not orthogonal, so they can be
dropped by the spectral theorem approach to $H_w$.
\fi
If we generalize to the case where $\mu = \delta_b$ for some real value $b$, then the moment matrix is \[M_{j,k} = b^{j+k}, \] and the Kato-Friedrichs operator $H_w$ for weights $w$ is a rank-one operator with range the span of \[ \xi_b = \left[ \begin{matrix} \frac{1}{w_0} & \frac{b}{w_1} & \frac{b^2}{w_2} & \cdots \end{matrix}\right]. \]
\hfill $\Diamond$
\begin{example}\label{Subsec:KFConvex2Dirac} The Kato-Friedrichs operator associated with $\mu = \frac{1}{2}(\delta_0 + \delta_1)$ \end{example}
When $\mu = \frac{1}{2}(\delta_0 + \delta_1)$, the moment matrix
$M$ is given by
\begin{equation}\label{Eqn:MConvex2Dirac}
M = \frac{1}{2}
\begin{bmatrix}
1 & 0 & 0 & 0 & \cdots\\
0 & 0 & 0 & 0 & \cdots\\
0 & 0 & 0 & 0 & \\
\vdots &\vdots & & & \ddots
\end{bmatrix}
+ \frac{1}{2}\begin{bmatrix}
1 & 1 & 1 & 1 & \cdots\\
1 & 1 & 1 & 1 & \cdots\\
1 & 1 & 1 & 1 & \\
\vdots &\vdots & & & \ddots
\end{bmatrix}.
\end{equation}
We wish to solve the eigenvalue equations
(\ref{Eqn:FundEqnPtSpec}), which in this case become
\begin{align}
c_0 + \frac{1}{2}(c_1+c_2+\cdots) &= \lambda w_0 c_0, \quad k = 0, \notag\\
\frac{1}{2}(c_0+c_1 + c_2 + \cdots) &= \lambda w_kc_k, \quad k \geq 1, \label{Eqn:FirstSystem}
\end{align}
which can be transformed into
\begin{equation}\label{Eqn:SecondSystem}
\frac{1}{2}c_0 = \lambda(w_0c_0 - w_kc_k), \quad k \geq 1,
\end{equation}
by subtracting the equations for $k \geq 1$ in (\ref{Eqn:FirstSystem}) from
the first.
To solve for $c$ in Equation (\ref{Eqn:SecondSystem}), we consider the
cases where $\lambda = 0$ and where $\lambda\neq 0$.
\noindent \textbf{Case 1: }If $\lambda = 0$, then (\ref{Eqn:SecondSystem}) forces
$c_0 = 0$. In addition, referring back to the equations in the system
(\ref{Eqn:FirstSystem}), we easily see that
Equation (\ref{Eqn:FundEqnPtSpec}) is true whenever
\begin{equation}\label{Eqn:SumK1Ck}
\sum_{k = 1}^{\infty} c_k = 0.
\end{equation}
(For example, $c = (0, 1, -1, 0, 0, \ldots)$.) There is no restriction on the weights
$w= \{w_k\}_{k \in \mathbb{N}_0}$ coming from the equations here. So for any choice of weights, the eigenspace for $\lambda=0$ is infinite-dimensional.
\noindent \textbf{Case 2: }If $\lambda \neq 0$, the
$k^{\mathrm{th}}$ equation for $k > 0$ in the system of equations
(\ref{Eqn:SecondSystem}) can be solved for $c_k$ in terms of $\lambda$, $c_0$, and the weights $w
= \{w_k\}_{k \in \mathbb{N}}$:
\begin{equation}\label{Eqn:CkDirac2Convex}
c_k = \frac{c_0}{w_k}\Bigl(w_0 - \frac{1}{2\lambda}\Bigr), \quad k
\geq 1.
\end{equation}
We see that $c_0 = 0$ implies that $c_k = 0$ for all $k$, so we
require $c_0 \neq 0$. Substituting (\ref{Eqn:CkDirac2Convex})
into the first equation in (\ref{Eqn:FirstSystem}), and cancelling
$c_0$ from both sides, we obtain a condition on $\lambda$ and the
weights $w$:
\begin{equation}\label{Eqn:lambdaW}
1 +
\frac{1}{2}\Bigl(\sum_{k=1}^{\infty}\frac{1}{w_k}\Bigr)\Bigl(w_0
- \frac{1}{2\lambda}\Bigr) = \lambda w_0.
\end{equation}
Define $T = \sum_{k=1}^{\infty} \frac{1}{w_k}$; requiring this sum to be finite imposes a restriction on our choice of weights $w$. Then Equation (\ref{Eqn:lambdaW}) becomes \[ 1+\frac{1}{2}T\Bigr(w_0-\frac{1}{2\lambda}\Bigr) = \lambda w_0.\]
Observe that the two conditions
\[ 1 = \lambda w_0 \quad \textrm{ and }\quad 2\lambda w_0 = 1\]
will lead to inconsistent systems, so we must rule these conditions out. Otherwise, there are many solutions to Equations (\ref{Eqn:FundEqnPtSpec}).
We now demonstrate that there will be two distinct nonzero eigenvalues for the Kato-Friedrichs operator $H_w$, where the weights $w$ are chosen so that $T$ is finite. Without loss of generality, take $w_0 = c_0 = 1$. Our eigenvalues $\lambda \neq 0$ must satisfy Equation (\ref{Eqn:lambdaW}), which is now \begin{equation}\label{Eqn:SimplelambdaW}1+\frac12 T \Bigr(1-\frac{1}{2\lambda} \Bigr) =\lambda. \end{equation}
This yields the quadratic equation in $\lambda$:
\[
\lambda^2 - \Bigl( 1 + \frac{T}{2} \Bigr)\lambda + \frac{T}{4} = 0.
\]
This equation has distinct positive roots $\lambda_+, \lambda_-$ for $T>0$, and they are given by
\begin{equation}\label{Eqn:lambdapm}
\lambda_{\pm} = \frac{1+ \frac{T}{2} \pm \sqrt{1 +
\Bigl(\frac{T}{2}\Bigr)^2}}{2};
\end{equation}
and we also see
\begin{equation}\label{Eqn:TraceDet}
\lambda_+\lambda_- = \frac{T}{4} \textrm{ and } \lambda_+
+\lambda_- = 1 + \frac{T}{2}.
\end{equation}
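These relations are easy to test numerically. The sketch below (illustrative only; it assumes the concrete weight choice $w_0 = 1$, $w_k = 2^{k-1}$ for $k \geq 1$, so that $T = 2$) checks the product and sum relations for $\lambda_{\pm}$ and verifies that the vectors built from the formula for $c_k$ satisfy the eigenvalue equations on a truncation:

```python
import math

# Sketch (illustrative): truncated check of the eigenvalue equations for
# mu = (delta_0 + delta_1)/2 with the assumed weights w_0 = 1, w_k = 2^(k-1).
N = 60
w = [1.0] + [2.0 ** (k - 1) for k in range(1, N)]   # T = sum_{k>=1} 1/w_k -> 2
M = [[1.0 if i == j == 0 else 0.5 for j in range(N)] for i in range(N)]

lam_p = 1 + 1 / math.sqrt(2)
lam_m = 1 - 1 / math.sqrt(2)
assert abs(lam_p * lam_m - 0.5) < 1e-12             # lambda_+ lambda_- = T/4
assert abs(lam_p + lam_m - 2.0) < 1e-12             # lambda_+ + lambda_- = 1 + T/2

max_resid = 0.0
for lam in (lam_p, lam_m):
    # c_0 = 1 and c_k = (lam - 1/2)/(w_k lam) for k >= 1
    c = [1.0] + [(lam - 0.5) / (w[k] * lam) for k in range(1, N)]
    for i in range(N):
        row = sum(M[i][j] * c[j] for j in range(N))
        max_resid = max(max_resid, abs(row - lam * w[i] * c[i]))
assert max_resid < 1e-9                             # M c = lambda W c on the block
```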
Since the vector $c$ satisfying $Hc = \lambda c$ with $\lambda \neq 0$ is uniquely determined by $\lambda$, $c_0$, and $w$, each eigenspace for a nonzero eigenvalue is one-dimensional; given a choice of weights, we normalize by taking $c_0=1$. Denote by $c_+$ and $c_-$ the corresponding eigenvectors for $\lambda_+$ and $\lambda_-$, respectively.
Going back to (\ref{Eqn:CkDirac2Convex}) we have the
simplification for the eigenvectors:
\begin{equation}\label{Eqn:CkSub}
c_k = \frac{1}{w_k}\Bigl(1 - \frac{1}{2\lambda}\Bigr) =
\frac{1}{w_k}\Bigl(\frac{\lambda - \frac{1}{2}}{\lambda}\Bigr),
\quad k \geq 1.
\end{equation}
With respect to our eigenvalues $\lambda_{\pm}$, this gives
\[
c_{\pm} = \Bigl( 1,
\frac{\lambda_{\pm}-\frac{1}{2}}{w_1\lambda_{\pm}},
\frac{\lambda_{\pm}-\frac{1}{2}}{w_2\lambda_{\pm}},\ldots\Bigr);
\]
then
\[
\|c_{\pm}\|^2_{\ell^2(w)} \underset{(\ref{Eqn:CkSub})}
{=} 1 +
\Biggl(\frac{\lambda_{\pm}-\frac{1}{2}}{\lambda_{\pm}}\Biggr)^2 T.
\]
The reader can verify the orthogonality of the eigenvectors.
\iffalse
Now, we can write the numbers
$\frac{\lambda_{\pm}-\frac{1}{2}}{\lambda_{\pm}}$ in terms of $T$
alone:
\begin{equation*}
\begin{split}
\frac{\lambda_{\pm}-\frac{1}{2}}{\lambda_{\pm}}
& = \frac{2(\lambda_{\pm}-1)}{T}\\
& \underset{(\ref{Eqn:lambdapm})} {=} \frac{ \frac{T}{2} - 1 \pm
\sqrt{1 + \Bigl(\frac{T}{2}\Bigr)^2} }{T}.
\end{split}
\end{equation*}
\fi
In order to do more explicit computations, let us choose the weights $w =
\{w_k\}_{k\in\mathbb{N}_0}$ such that $T = 2$. This yields
\[
\lambda_{\pm} = 1 \pm
\frac{1}{\sqrt{2}},
\]
which are both positive.
Substituting back into (\ref{Eqn:CkSub}) we get
\begin{equation}\label{Eqn:NotNormEigv}
c_{\pm} = \Bigl( 1, \pm\frac{1}{\sqrt{2}w_1},
\pm\frac{1}{\sqrt{2}w_2}, \ldots\Bigr),
\end{equation}
with
\[
\|c_{\pm}\|^2_{\ell^2(w)} = 2.
\]
Normalize to obtain unit vectors
\begin{equation}\label{Eqn:NormEigv}
\xi_{\pm}: = \frac{1}{\sqrt{2}}c_{\pm}
\end{equation}
which yield the rank-one projections
\[
E_{\pm}:=|\xi_{\pm}\rangle \langle\xi_{\pm}|
\] onto the eigenspaces. In other words, the spectral resolution of the self-adjoint operator $H$ can
be written
\begin{equation}\label{Eqn:SpecResConvex2Dirac}
H = \Bigl(1 + \frac{1}{\sqrt{2}}\Bigr)E_+ + \Bigl(1 -
\frac{1}{\sqrt{2}}\Bigr)E_-.
\end{equation}
We next make two observations regarding the connections between $H^{1/2}$, the square root of the Kato-Friedrichs operator, and the Hilbert space $L^2(\mu)$. First, we recall from Equation (\ref{Eqn:KatoProperties}) and Lemma \ref{Lem:KatoIsometry} the isometry
\begin{equation}\label{Eqn:KatoIsometry}
\int |f_c|^2\,\mathrm{d}\mu = \|H^{1/2} c\|^2_{\ell^2(w)};
\end{equation}
that is, the identification $f_c \leftrightarrow H^{1/2}c$ is
isometric. By choosing specific weights, we can verify this isometry using our Kato-Friedrichs operator for $\mu = \frac12(\delta_0+\delta_1)$.
Let $w = \{w_k\}$ be given by the sequence with $w_0 = 1$ and
\[
w_k = 2^{k-1}, \quad k \geq 1,
\]
which gives $T = 2$. Write $f_{\pm}$ as shorthand for $F(\xi_{\pm})$. Using (\ref{Eqn:NotNormEigv}) and (\ref{Eqn:NormEigv}), we get
\begin{equation}
f_{\pm}(x) = \sum_{k=0}^{\infty} (\xi_{\pm})_k x^k =
\frac{1}{\sqrt{2}} \pm \frac{x}{2-x},
\end{equation}
and, because $\mu = \frac{1}{2}(\delta_0 + \delta_1)$, we have
\[
\int |f_{\pm}(x)|^2\,\mathrm{d}\mu(x) = 1 \pm \frac{1}{\sqrt{2}} =
\frac{\sqrt{2}\pm 1}{\sqrt{2}}.
\]
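Because $\mu = \frac{1}{2}(\delta_0 + \delta_1)$, the integral reduces to averaging over the two atoms, so the claimed norms can be spot-checked directly (illustrative only; the tolerance is an arbitrary choice):

```python
import math

# Spot check (illustrative): with f_pm(x) = 1/sqrt(2) +- x/(2 - x), averaging
# |f_pm|^2 over the atoms 0 and 1 of mu gives 1 +- 1/sqrt(2).
f_plus = lambda x: 1 / math.sqrt(2) + x / (2 - x)
f_minus = lambda x: 1 / math.sqrt(2) - x / (2 - x)

norm_plus = 0.5 * f_plus(0) ** 2 + 0.5 * f_plus(1) ** 2
norm_minus = 0.5 * f_minus(0) ** 2 + 0.5 * f_minus(1) ** 2

assert abs(norm_plus - (1 + 1 / math.sqrt(2))) < 1e-12
assert abs(norm_minus - (1 - 1 / math.sqrt(2))) < 1e-12
```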
Using Equation (\ref{Eqn:SpecResConvex2Dirac}), we have
\[
H^{1/2} = \Bigl(1 + \frac{1}{\sqrt{2}}\Bigr)^{1/2}E_+ + \Bigl(1 -
\frac{1}{\sqrt{2}}\Bigr)^{1/2}E_-.
\]
Moreover,
\begin{equation}
H^{1/2}\xi_{\pm} = \Bigl( 1 \pm
\frac{1}{\sqrt{2}}\Bigr)^{1/2}\xi_{\pm}
\end{equation}
and hence
\[
\|H^{1/2}\xi_{\pm}\|^2_{\ell^2(w)} = 1 \pm \frac{1}{\sqrt{2}} \]
which verifies the isometry in (\ref{Eqn:KatoIsometry}).
Define the map $W: L^2(\mu) \rightarrow \ell^2(w)$ given by
\begin{equation}\label{Eqn:WConvex2Dirac}
W(f_c) = H^{1/2}c \textrm{ for }c\in\mathcal{D}\subset \ell^2(w).
\end{equation}
Recall from Example \ref{Ex:ConvexDirac} that the dimension of $L^2(\mu)$ is exactly $2$. Our second observation is that the map $W$ is an isometry into a two-dimensional subspace of $\ell^2(w)$.
We will check this fact directly. Recall $E_{\pm}
= |\xi_{\pm}\rangle \langle \xi_{\pm}|$, with $\xi_{\pm}$ as in
(\ref{Eqn:NormEigv}). We only need to check that $W$ takes an ONB
in $L^2(\mu)$ to an orthonormal family in $\ell^2(w)$. First, observe that the orthogonal
polynomials in $L^2(\mu)$ are $p_0(x)\equiv 1$, $p_1(x) = 2x -1$,
and $p_k(x)\equiv 0$ for $k \geq 2$. Moreover,
\begin{equation*}
\begin{split}
Wp_0
& = Wf_{(1, 0, 0, \ldots)}\\
& \underset{(\ref{Eqn:WConvex2Dirac})}{=} \Bigl( 1 +
\frac{1}{\sqrt{2}} \Bigr)^{1/2}E_+(1, 0, 0, \ldots) +
\Bigl( 1 - \frac{1}{\sqrt{2}} \Bigr)^{1/2}E_-(1, 0, 0, \ldots)\\
& \underset{(\ref{Eqn:NormEigv})}{=} \Bigl( 1 + \frac{1}{\sqrt{2}}
\Bigr)^{1/2} \frac{1}{\sqrt{2}}\xi_+ + \Bigl( 1 -
\frac{1}{\sqrt{2}} \Bigr)^{1/2} \frac{1}{\sqrt{2}}\xi_-,
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
Wp_1
& = Wf_{(-1, 2, 0, 0, \ldots)}\\
& = \Bigl( 1 + \frac{1}{\sqrt{2}}\Bigr)^{1/2} \Bigl(1 -
\frac{1}{\sqrt{2}}\Bigr)\xi_+ +
\Bigl( 1 - \frac{1}{\sqrt{2}}\Bigr)^{1/2} \Bigl(-1 - \frac{1}{\sqrt{2}}\Bigr)\xi_-\\
& = \Bigl( 1 - \frac{1}{\sqrt{2}} \Bigr)^{1/2}
\frac{1}{\sqrt{2}}\xi_+ - \Bigl( 1 + \frac{1}{\sqrt{2}}
\Bigr)^{1/2} \frac{1}{\sqrt{2}}\xi_-.
\end{split}
\end{equation*}
Since $Wp_0$ and $Wp_1$ are orthonormal in $\ell^2(w)$, we have that $W$ is an isometry.
\hfill $\Diamond$
\begin{example} \label{Subsec:KFExampleDeltab} The Kato-Friedrichs operator associated with $\mu = \frac{1}{2}(\delta_0 + \delta_b)$, $0 < b <1$ \end{example}
This example treats a more general case in which $\mu$ is a finite convex combination of Dirac masses, so the associated infinite Hankel matrix has finite rank. To simplify matters we pick the atoms of $\mu$ in such a way that no weights will be needed. When we work with
an unweighted $\ell^2$ space, $w_k = 1$ for all $k \in
\mathbb{N}_0$, and $\ell^2 = \ell^2(w)$.
The moment matrix $M = M^{(\mu)}$ for
the convex combination $\mu$ is equal to the Kato-Friedrichs
operator, as in the case of the Hilbert matrix.
This last example also has the advantage of illustrating an ONB in $L^2(\mu)$ consisting of rational functions, not
polynomials. The choice of the rational functions is dictated by
our Kato-Friedrichs operator $H = M$.
As in the previous Example \ref{Subsec:KFConvex2Dirac},
$H$ has a spectrum consisting of $0$ and two points on the
positive real line, and $L^2(\mu)$ is two-dimensional. The point
$0$ has infinite multiplicity, and the two positive eigenvalues
are simple, i.e., have multiplicity one.
For $\mu = \frac{1}{2}(\delta_0 + \delta_b)$, $0 < b < 1$, the
moment matrix $M$ is
\begin{equation}
M = H =
\begin{bmatrix}
1 & \frac{1}{2}b & \frac{1}{2}b^2 & \cdots\\
\frac{1}{2}b & \frac{1}{2} b^2 & \frac{1}{2} b^3 & \cdots\\
\frac{1}{2}b^2 & \frac{1}{2} b^3 & \frac{1}{2} b^4 & \cdots\\
\vdots & \vdots & & \ddots\\
\end{bmatrix}.
\end{equation}
The orthogonal polynomials in $L^2(\mu)$ are
\begin{equation}
p_0(x)\equiv 1,\quad p_1(x) = 1 - \frac{2x}{b},\quad p_k(x)
\equiv 0, k \geq 2.
\end{equation}
However, we also have an orthogonal basis in $L^2(\mu)$ consisting of the two
rational functions $f_{\pm}(x)$, where
\begin{equation}
f_{\pm}(x):= \alpha_{\pm} + \frac{bx}{1 - bx}.
\end{equation}
These correspond to the eigenvectors for $H$. We see that if \[ \xi_{\pm} = [ \begin{matrix} \alpha_{\pm} & b &b^2&b^3& \cdots \end{matrix}]^{tr}, \] then $H\xi_{\pm} = \lambda_{\pm} \xi_{\pm}$, where the parameters $\alpha_{\pm}$ are given by
\begin{equation*} \alpha_{\pm} = 1-\frac{p}{2} \pm \sqrt{ \left(\frac{p}{2}\right)^2 + 1} \end{equation*} and the two positive eigenvalues of $H$ are \begin{equation*}
\lambda_{\pm} = \frac{1}{2}\left( 1+\frac{p}{2} \pm \sqrt{\left(\frac{p}{2}\right)^2+1} \right), \end{equation*}
where $p$ is the constant \[ p = \frac{b^2}{1-b^2}.\]
The functions above are the images of $\xi_{\pm}$ under the operator $F$:
\[ f_{\pm}(x) = (F\xi_{\pm})(x) = \alpha_{\pm} + \frac{bx}{1-bx}.\]
The eigenvectors $\xi_{\pm}$ are orthogonal, and it follows that $f_{\pm}$ are orthogonal in $L^2(\mu)$:
\begin{eqnarray*} \langle f_+|f_-\rangle_{L^2(\mu)} &=& \langle F\xi_+| F\xi_- \rangle_{L^2(\mu)}\\
&=& \langle \xi_+|F^*F \xi_- \rangle_{\ell^2} \\ &=& \langle \xi_+|H\xi_- \rangle_{\ell^2}\\
&=& \lambda_- \langle \xi_+|\xi_- \rangle_{\ell^2} = 0. \end{eqnarray*}
Direct computation of $\langle f_+|f_-\rangle_{L^2(\mu)}$ also verifies that these functions are orthogonal.
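The formulas for $\alpha_{\pm}$ and $\lambda_{\pm}$ can also be checked numerically on a truncation (illustrative only; the value $b = 0.5$ and the truncation size are arbitrary choices, and the truncation errors are of order $b^{2N}$):

```python
import math

# Sketch (illustrative): truncated eigenvalue check for mu = (delta_0 + delta_b)/2.
b, N = 0.5, 80
p = b * b / (1 - b * b)
root = math.sqrt((p / 2) ** 2 + 1)
alphas = (1 - p / 2 + root, 1 - p / 2 - root)
lams = (0.5 * (1 + p / 2 + root), 0.5 * (1 + p / 2 - root))

# truncated moment matrix: M[0][0] = 1, otherwise M[j][k] = b^(j+k)/2
M = [[1.0 if j == k == 0 else 0.5 * b ** (j + k) for k in range(N)]
     for j in range(N)]

for alpha, lam in zip(alphas, lams):
    xi = [alpha] + [b ** k for k in range(1, N)]
    resid = max(abs(sum(M[i][j] * xi[j] for j in range(N)) - lam * xi[i])
                for i in range(N))
    assert resid < 1e-9                       # H xi_pm = lambda_pm xi_pm

# the eigenvectors are orthogonal in l^2, hence f_pm are orthogonal in L^2(mu)
xi_p = [alphas[0]] + [b ** k for k in range(1, N)]
xi_m = [alphas[1]] + [b ** k for k in range(1, N)]
assert abs(sum(x * y for x, y in zip(xi_p, xi_m))) < 1e-9
```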
This example also demonstrates the correspondence from Theorem \ref{Thm:AtomsPair} between atoms of the measure $\mu$ and atoms in the projection-valued measure $E$ associated to $H$. The measure $\mu$ has two atoms at $0$ and $b$, while the projection-valued measure $E$ for $H$ has atoms at the two nonzero eigenvalues $\lambda_{\pm}$.
\hfill $\Diamond$
https://arxiv.org/abs/2010.11450 | Optimal Approximation -- Smoothness Tradeoffs for Soft-Max Functions | A soft-max function has two main efficiency measures: (1) approximation - which corresponds to how well it approximates the maximum function, (2) smoothness - which shows how sensitive it is to changes of its input. Our goal is to identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness. This leads to novel soft-max functions, each of which is optimal for a different application. The most commonly used soft-max function, called exponential mechanism, has optimal tradeoff between approximation measured in terms of expected additive approximation and smoothness measured with respect to Rényi Divergence. We introduce a soft-max function, called "piecewise linear soft-max", with optimal tradeoff between approximation, measured in terms of worst-case additive approximation and smoothness, measured with respect to $\ell_q$-norm. The worst-case approximation guarantee of the piecewise linear mechanism enforces sparsity in the output of our soft-max function, a property that is known to be important in Machine Learning applications [Martins et al. '16, Laha et al. '18] and is not satisfied by the exponential mechanism. Moreover, the $\ell_q$-smoothness is suitable for applications in Mechanism Design and Game Theory where the piecewise linear mechanism outperforms the exponential mechanism. Finally, we investigate another soft-max function, called power mechanism, with optimal tradeoff between expected \textit{multiplicative} approximation and smoothness with respect to the Rényi Divergence, which provides improved theoretical and practical results in differentially private submodular optimization. | \section{Introduction} \label{sec:intro}
A soft-max function is a mechanism for choosing one out of a number of options, given the
value of each option. Such functions have applications in many areas of computer science and
machine learning, such as deep learning
(as the final layer of a neural network classifier)
\cite{NIPS1989_195,Bridle,Goodfellow-et-al-2016}, reinforcement learning
(as a method for selecting an action)~\cite{RLBook}, learning from mixtures of
experts~\cite{JordanJacobs}, differential privacy~\cite{dworkr14,McSherryT07}, and mechanism
design~\cite{McSherryT07,ExpMechDesign}. The common requisite in these applications is for the
soft-max function to pick an option with close-to-maximum value, while behaving smoothly as the
input changes.
The soft-max function that has come to dominate these applications is the
{\em exponential function}. Given $d$ options with values $x_1, x_2, \ldots, x_d$, the
exponential mechanism picks $i$ with probability equal to the quantity
$\exp(\lambda x_i)/(\sum_{j=1}^d\exp(\lambda x_j))$ for a parameter $\lambda > 0$. This
function has a long history: it was proposed as a model in decision theory in 1959 by
Luce~\cite{Luce1959}, and has its roots in the Boltzmann (also known as Gibbs) distribution in
statistical mechanics~\cite{Boltzman,gibbs1902}. There are, however, many other ways to
smoothly pick an approximately maximal element from a list of values. This raises the
question: is there a way to quantify the desirable properties of soft-max functions, and are
there other soft-max functions that perform well under such criteria? If there are such
functions, perhaps they can be added to our repertoire of soft-max functions and might prove
suitable in some applications.
These questions are the subject of this paper. We explore the tradeoff between the
approximation guarantee of a soft-max function and its smoothness. A soft-max function is
\textit{$\delta$-approximate} if the expected value of the option it picks is at least the
maximum value minus $\delta$. Stronger yet, a function is
\textit{$\delta$-approximate in the worst case} if it never picks an option of value less than
the maximum minus $\delta$. We capture the requirement of {\em smoothness} using the notion of
Lipschitz continuity. A function is Lipschitz continuous if by changing its input by some
amount $x$, its output changes by at most a multiple of $x$. This multiplier, known as the
Lipschitz constant, is then a measure of smoothness. This notion requires
a way to measure distances in the domain (the input space) and the range (the output space) of
the function.
We will show that if the $p$-norm and the R\'enyi divergence are used to measure distances in
the domain and the range, respectively, then the exponential mechanism achieves the lowest
possible (to within a constant factor) Lipschitz constant among all $\delta$-approximate
soft-max functions. This Lipschitz constant is $O(\log(d)/\delta)$. The exponential function
picks each option with a non-zero probability, and therefore cannot guarantee worst-case approximation. In fact, we show that for these distance measures, there is no soft-max
function with bounded Lipschitz constant that can guarantee worst-case approximation.
On the other hand, if we use $p$-norms to measure changes in both the input and the output,
new possibilities open up. We construct a soft-max function
(called \textsc{PLSoftMax\xspace}, for piecewise linear soft-max) that achieves a Lipschitz constant of
$O(1/\delta)$ and is also $\delta$-approximate {\em in the worst case}. This is an important
property, as it guarantees that the output of the soft-max function is always as sparse as
possible. Furthermore, we prove that even only requiring $\delta$-approximation in
expectation, no soft-max function can achieve a Lipschitz constant of $o(1/\delta)$ for these
distance measures.
We also study several other properties we might want to require of a soft-max function. Most
notably, what happens if instead of requiring an additive approximation guarantee, we require
a multiplicative one? A simple way to construct a soft-max function satisfying this requirement
is to apply soft-max functions with additive approximation (e.g., exponential or \textsc{PLSoftMax\xspace}) on the
logarithm of the values. The resulting mechanisms (the power mechanism, and \textsc{LogPLSoftMax\xspace}) are
Lipschitz continuous, but with respect to a domain distance measure called log-Euclidean.
Moreover, we show that with the standard $p$-norm distance as the domain distance measure, no soft-max function with bounded Lipschitz constant and
multiplicative approximation guarantee exists.
Finally, we explore several applications of the new soft-max functions introduced in this
paper. First, we show how the power mechanism can be used to improve existing results
(using the exponential mechanism) on differentially private submodular maximization. Second,
we use \textsc{PLSoftMax\xspace}\ to design improved incentive compatible mechanisms with worst-case guarantees.
Finally, we discuss how \textsc{PLSoftMax\xspace}\ can be used as the final layer of deep neural networks in
multiclass classification.
\subsection{Related Work} \label{sec:intro:related}
A lot of work has been done on designing soft-max functions that fit specific
applications better. In deep learning applications, the exponential mechanism does not allow one to
take advantage of the sparsity of the categorical targets during training. Several methods
have been proposed to exploit this sparsity. Hierarchical soft-max uses a heuristically
defined hierarchical tree to define a soft-max function with only a few outputs
\cite{morin2005hierarchical, mikolov2013distributed}. Another direction is the use of a
spherically symmetric soft-max function together with a spherical class of loss functions that
can be used to perform the back-propagation step much more efficiently
\cite{vincent2015efficient, de2015exploration}. Finally, there has been a line of work that
targets the design of soft-max functions whose output favors sparse distributions
\cite{MartinsA16, LahaCAJSR18}. \section{Definitions and Preliminaries} \label{sec:model}
The $(d-1)$-dimensional {\em unit simplex} (also known as the {\em probability simplex}) is
the set of all the $d$-dimensional vectors $(x_1,\ldots,x_d)\in\ensuremath{\mathbb{R}}^d$ satisfying $x_i\ge 0$ for all $i$ and
$\sum_{i=1}^d x_i = 1$. In other words, each point in the $(d-1)$-dimensional unit simplex,
which we denote by $\Delta_{d-1}$, corresponds to a probability distribution over $d$ possible
outcomes $1,\ldots,d$.
\paragraph{Soft-max. }
A $d$-dimensional {\em soft-max function} (sometimes called a soft-max mechanism) is a
function $\ensuremath{\boldsymbol{f}}:\mathbb{R}^d \to \Delta_{d-1}$. Intuitively, this corresponds to a
randomized mechanism for choosing one outcome out of $d$ possible outcomes. For any
$\ensuremath{\boldsymbol{x}} \in\ensuremath{\mathbb{R}}^d$, the value $x_i$ denotes the value of the outcome $i$, and $f_i(\ensuremath{\boldsymbol{x}})$ is the
probability that $f$ chooses this outcome. In parts of this paper, specifically when we
discuss multiplicative approximations, we restrict the outcome values $x_i$ to be positive,
i.e., we consider soft-max functions from $\ensuremath{\mathbb{R}}_+^d$ to $\Delta_{d-1}$.
\smallskip
\paragr{Lipschitz continuity.}
The Lipschitz property is defined in terms of a distance measure $d_1$ over $\ensuremath{\mathbb{R}}^d$
(the domain) and a distance measure $d_2$ over $\Delta_{d-1}$ (the range). A distance measure
over a set is a function that assigns a non-negative {\em distance} to every ordered pair of
points in that set. We do not require symmetry or the triangle inequality. We say that a
soft-max function $f$ is $(d_1,d_2)$-Lipschitz continuous if there is a constant $\beta > 0$
such that for every two points $\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}} \in \ensuremath{\mathbb{R}}^d$, the following holds
\begin{equation} \label{eq:intro:Lipschitzness}
d_2(\ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{x}}), \ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{y}})) \le \beta \cdot d_1(\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}}).
\end{equation}
The smallest $\beta$ for which \eqref{eq:intro:Lipschitzness} holds is the Lipschitz
constant of $\ensuremath{\boldsymbol{f}}$ (with respect to $d_1$ and $d_2$).
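As a concrete illustration of this definition (ours, not from the paper), the sketch below empirically estimates the $(\ell_\infty, \ell_1)$-Lipschitz constant of the exponential soft-max with $\lambda = 1$ by maximizing the ratio in \eqref{eq:intro:Lipschitzness} over random nearby input pairs; any observed ratio is a lower bound on the true constant:

```python
import numpy as np

def softmax(x, lam=1.0):
    # numerically stable exponential soft-max: subtract the max before exponentiating
    z = lam * (x - np.max(x))
    e = np.exp(z)
    return e / e.sum()

def empirical_lipschitz(f, d=5, trials=2000, seed=0):
    """Largest observed d2(f(x), f(y)) / d1(x, y) over random nearby pairs,
    with d1 = l_infinity on the domain and d2 = l_1 on the range."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.normal(size=d)
        y = x + rng.normal(scale=0.1, size=d)
        best = max(best, np.abs(f(x) - f(y)).sum() / np.abs(x - y).max())
    return best

ratio = empirical_lipschitz(softmax)
```

Here the observed ratio stays below $2\lambda = 2$, consistent with the bounds discussed in Section~\ref{sec:exponential}.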
\smallskip
\paragr{$\ell_p$ distance and R\'enyi\ divergence.}
Two measures of distance that are used in this paper are the $p$-norm distance and the R\'enyi\
divergence. For $p \ge 1$, the $p$-norm distance (also called the $\ell_p$ distance) between
two points $\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}} \in \ensuremath{\mathbb{R}}^d$ is denoted by $\norm{\ensuremath{\boldsymbol{x}} - \ensuremath{\boldsymbol{y}}}_p$, and is defined as
$\norm{\ensuremath{\boldsymbol{x}} - \ensuremath{\boldsymbol{y}}}_p = \left(\sum_{i = 1}^d \abs{x_i - y_i}^p\right)^{1/p}$. For any
$\alpha>1$ and points $\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}} \in \Delta_{d - 1}$, the R\'enyi divergence of order
$\alpha$ between $\ensuremath{\boldsymbol{x}}$ and $\ensuremath{\boldsymbol{y}}$ is denoted by $\mathrm{D}_\alpha(\ensuremath{\boldsymbol{x}}||\ensuremath{\boldsymbol{y}})$ and is
defined as
$\mathrm{D}_\alpha(\ensuremath{\boldsymbol{x}}||\ensuremath{\boldsymbol{y}}) = \frac{1}{\alpha - 1} \log\p{ \sum_{i = 1}^d \frac{x_i^{\alpha}}{y_i^{ \alpha - 1}}}$.
This expression is undefined at $\alpha=1$, but the limit as $\alpha \to 1$ can be written as
$D_1(\ensuremath{\boldsymbol{x}}||\ensuremath{\boldsymbol{y}})=\sum_{i=1}^d x_i\log\frac{x_i}{y_i}$ and is known as the Kullback-Leibler
(KL) divergence. Similarly, the R\'enyi\ divergence of order $\infty$ can be defined as the limit
as $\alpha \to \infty$, which is $D_\infty(\ensuremath{\boldsymbol{x}}||\ensuremath{\boldsymbol{y}}) = \log\max_{i}\frac{x_i}{y_i}$.
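These quantities are straightforward to compute. The sketch below (our illustration) evaluates the R\'enyi\ divergences on two points of the simplex, including the KL and order-$\infty$ limits:

```python
import numpy as np

def renyi(x, y, alpha):
    # Renyi divergence of order alpha > 1 between points of the simplex
    return float(np.log((x ** alpha / y ** (alpha - 1)).sum()) / (alpha - 1))

def kl(x, y):
    # limit of the Renyi divergence as alpha -> 1
    return float((x * np.log(x / y)).sum())

def renyi_inf(x, y):
    # limit as alpha -> infinity: log of the maximum likelihood ratio
    return float(np.log((x / y).max()))

x = np.array([0.5, 0.3, 0.2])
y = np.array([0.4, 0.4, 0.2])
vals = [kl(x, y), renyi(x, y, 2.0), renyi(x, y, 10.0), renyi_inf(x, y)]
```

As expected, $\mathrm{D}_\alpha$ is non-decreasing in $\alpha$, with $D_\infty$ the largest of the four values.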
\smallskip
\paragr{Approximation.}
For any $\delta \ge 0$, a soft-max function $\ensuremath{\boldsymbol{f}} : \mathbb{R}^d \mapsto\Delta_{d-1}$ is
{\em $\delta$-approximate} if
\begin{equation}
\forall \ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}^d~:\qquad \langle \ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{x}}) \rangle \ge \max_i\{x_i\} - \delta.
\end{equation}
Note that the inner product $\langle \ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{x}}) \rangle$ is the expected value of the
outcome picked by $\ensuremath{\boldsymbol{f}}$. The function $\ensuremath{\boldsymbol{f}}$ is
{\em $\delta$-approximate in the worst case} if
\begin{equation}
\forall \ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}^d, \forall i \in [d] ~:\qquad f_i(\ensuremath{\boldsymbol{x}}) > 0\Rightarrow x_i \ge \max_j\{x_j\} - \delta.
\end{equation}
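Both notions are easy to check for a candidate output distribution. A small sketch (ours; the value and probability vectors are arbitrary examples):

```python
import numpy as np

def is_delta_approx(x, f_x, delta):
    # expected-value guarantee: <x, f(x)> >= max_j x_j - delta
    return bool(np.dot(x, f_x) >= x.max() - delta - 1e-12)

def is_delta_approx_worst_case(x, f_x, delta):
    # worst-case guarantee: every outcome in the support is delta-close to the max
    return bool(np.all(x[f_x > 0] >= x.max() - delta - 1e-12))

x = np.array([3.0, 2.9, 1.0])
p_exp = np.array([0.60, 0.35, 0.05])  # puts some mass on a far-from-max outcome
p_wc = np.array([0.60, 0.40, 0.00])   # supported only on near-maximal outcomes

checks = (is_delta_approx(x, p_exp, 0.5),
          is_delta_approx_worst_case(x, p_exp, 0.5),
          is_delta_approx_worst_case(x, p_wc, 0.5))
```

The first distribution is $\delta$-approximate in expectation but not in the worst case, since it places positive mass on the outcome of value $1.0$.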
\section{The Exponential Mechanism}
\label{sec:exponential}
The exponential soft-max function, parameterized by a parameter $\lambda$ and denoted by
$\textsc{Exp}^\lambda$, is defined as follows: for $\ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}^d$, $\textsc{Exp}^\lambda(\ensuremath{\boldsymbol{x}})$ is a
vector whose $i$'th coordinate is $\exp(\lambda x_i)/\sum_{j = 1}^d\exp(\lambda x_j)$.
This mechanism was proposed and analyzed by McSherry and Talwar~\cite{McSherryT07} for its
application in differential privacy and mechanism design. It is not hard to see that the
differential privacy property they prove corresponds to $(\ell_p, D_\infty)$-Lipschitz
continuity, and therefore their analysis, cast in our terminology, implies the following:
\begin{theorem}[\cite{McSherryT07}] \label{thm:exp}
For any $\delta > 0$ and $p, \alpha\ge1$, the soft-max function $\textsc{Exp}^\lambda$ with
$\lambda=\log(d)/\delta$ satisfies the following: (1) it is $\delta$-approximate, and (2) it
is $(\ell_p, D_{\alpha})$-Lipschitz continuous with a Lipschitz constant less than $2 \lambda$.
\end{theorem}
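Property (1) can be checked numerically: with $\lambda = \log(d)/\delta$, the additive gap $\max_i x_i - \langle \ensuremath{\boldsymbol{x}}, \textsc{Exp}^\lambda(\ensuremath{\boldsymbol{x}}) \rangle$ never exceeds $\delta$. A minimal sketch (ours):

```python
import numpy as np

def exp_mech(x, lam):
    # Exp^lam: probability of i proportional to exp(lam * x_i), stable form
    z = lam * (x - x.max())
    e = np.exp(z)
    return e / e.sum()

d, delta = 16, 0.25
lam = np.log(d) / delta
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=d)
p = exp_mech(x, lam)
gap = float(x.max() - np.dot(x, p))  # additive approximation error
```

The guarantee also follows from a one-line argument: $\langle \ensuremath{\boldsymbol{x}}, \textsc{Exp}^\lambda(\ensuremath{\boldsymbol{x}}) \rangle = \frac{1}{\lambda}\log\sum_i e^{\lambda x_i} - \frac{1}{\lambda} H(\textsc{Exp}^\lambda(\ensuremath{\boldsymbol{x}})) \ge \max_i x_i - \frac{\log d}{\lambda}$, where $H$ is the Shannon entropy.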
This leaves the following question: is there any other soft-max function that achieves a
better Lipschitz constant? The following theorem gives a negative answer.
\begin{theorem} \label{thm:expLB}
Let $p, \alpha \ge 1$, $\delta > 0$, $d \ge 4$ and $\ensuremath{\boldsymbol{f}} : \ensuremath{\mathbb{R}}^d \to \Delta_{d-1}$ be a
$\delta$-approximate soft-max function satisfying
$\renyi{\alpha}{\vec{f}(\vec{x})}{\vec{f}(\vec{y})} \le c \norm{\vec{x} - \vec{y}}_p$ for
all $\vec{x}, \vec{y} \in \ensuremath{\mathbb{R}}^d$. Then $c > \frac{\log d - 2}{4 \delta}$.
\end{theorem}
Also, since the exponential mechanism assigns a non-zero probability to any outcome, it is
of course not $\delta$-approximate in the worst case. The following theorem shows that this is
an unavoidable property of any $(\ell_p, D_{\alpha})$-Lipschitz continuous function.
\begin{theorem} \label{thm:expNoWorstCase}
For any $p, \alpha \ge 1$, $\delta>0$, there is no soft-max function that is
$(\ell_p, D_{\alpha})$-Lipschitz continuous and $\delta$-approximate in the worst case.
\end{theorem}
The proofs of the above theorems are presented in~\refappendix{app:expLB}. \section{\textsc{PLSoftMax\xspace}: A Soft-Max Function with Worst Case Guarantee} \label{sec:plsm}
As we saw in the last section, the exponential mechanism is an $(\ell_p,D_\infty)$-Lipschitz function with the best possible Lipschitz constant among all $\delta$-approximate functions. Furthermore, a worst-case approximation guarantee is not possible for such Lipschitz functions. In this section, we focus on $(\ell_p,\ell_q)$-Lipschitz functions, which are the soft-max functions used in mechanism design and in machine learning settings. These functions exhibit a different picture: the exponential function is no longer the best function in this family. Instead, we construct a soft-max function that achieves the best (up to a constant factor) Lipschitz constant and at the same time provides a worst-case guarantee. This is the most technical result of the paper.
\subsection{Construction of \textsc{PLSoftMax\xspace}}
While the analysis of the properties of \textsc{PLSoftMax\xspace}\ and the intuition behind its construction are technically challenging, its actual description is rather concise and simple. In this section, we give a complete description of this soft-max function and state our main result. Due to lack of space, the proofs are left to~\refappendix{app:plsm}.
\textsc{PLSoftMax\xspace}\ is a piecewise linear function, where each linear piece is defined using a carefully
designed matrix. More precisely, for a given $\ensuremath{\boldsymbol{x}}\in\ensuremath{\mathbb{R}}^d$, consider a permutation $\pi$ of $\{1,\ldots,d\}$ that {\em sorts} $\ensuremath{\boldsymbol{x}}$, i.e., $x_{\pi(1)} \ge x_{\pi(2)} \ge \cdots \ge x_{\pi(d)}$, and let $\ensuremath{\boldsymbol{P}}_{\pi}$ be the permutation matrix of $\pi$, i.e., the matrix with $1$'s at entries $(i,\pi(i))$ and zeros everywhere else. In other words, $\ensuremath{\boldsymbol{P}}_{\pi}$ is the matrix that, once multiplied by $\ensuremath{\boldsymbol{x}}$, sorts it. Each ``piece'' of our piecewise linear function corresponds to all $\ensuremath{\boldsymbol{x}}\in\ensuremath{\mathbb{R}}^d$ that have the same sorting permutation $\pi$. The function, on this piece, is defined by multiplying $\ensuremath{\boldsymbol{x}}$ by $\ensuremath{\boldsymbol{P}}_{\pi}$ (thereby sorting it), then applying a linear function defined through a carefully designed family of matrices $\matr{SM}_{(k, d)}$, and then applying the inverse matrix $\ensuremath{\boldsymbol{P}}_{\pi}^{-1}$ to move values back to their original indices. The matrices $\matr{SM}_{(k, d)}$ at the heart of this construction are defined below.
\begin{definition}[\textsc{Soft-Max Matrix}]
The soft-max matrix $\matr{SM}_{(k, d)} = (m_{i j}) \in \mathbb{R}^{d \times d}$ is defined as
$m_{1 1} = (k - 1)/k$, $m_{i i} = 1/i$ for all $i \in [2, k]$, $m_{i 1} = - 1/k$ for all
$i \in [2, k]$, $m_{i j} = -1/(j (j - 1))$ for all $j \in [2, k]$ and $i < j$, and
$m_{i j} = 0$ otherwise; note that every row and every column of this matrix sums to zero
(See Appendix~\ref{app:soft-maxmat} for a better illustration of this matrix). Also, the vector
$\ensuremath{\boldsymbol{u}}^{(k)} \in \ensuremath{\mathbb{R}}^d$ is defined as $u^{(k)}_i = 1/k$ if $i \le k$ and $u^{(k)}_i = 0$ otherwise.
\end{definition}
We consider partitions where each piece contains all vectors with the same ordering of the
coordinates. Namely, for a permutation $\pi \in S_d$ we define $R_{\pi}$ to be the set of
vectors $\ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}^d$ such that $x_{\pi(1)} \ge x_{\pi(2)} \ge \cdots \ge x_{\pi(d)}$. Also, let
$\ensuremath{\boldsymbol{P}}_{\pi}$ be the permutation matrix of $\pi \in S_d$.
\begin{definition}(\textsc{PLSoftMax\xspace}) \label{def:soft-maxFunctionDef}
Let $\delta > 0$, and consider a vector $\ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}^d$ with a sorting permutation $\pi$ and the
corresponding permutation matrix $\ensuremath{\boldsymbol{P}}_\pi$. Define $k_{\ensuremath{\boldsymbol{x}}}$ as the maximum $k \in [d]$ such that
$x_{\pi(1)} - x_{\pi(k)}\le \delta$. The soft-max function~$\textsc{PLSoftMax\xspace}^\delta$\ on $\ensuremath{\boldsymbol{x}}$ is defined as
follows.
\begin{equation} \label{eq:soft-maxFunctionTV}
\textsc{PLSoftMax\xspace}^{\delta}(\ensuremath{\boldsymbol{x}}) = \frac{1}{\delta} \cdot \ensuremath{\boldsymbol{P}}_{\pi}^{-1} \cdot \matr{SM}_{(k_{\ensuremath{\boldsymbol{x}}} ,d)} \cdot \ensuremath{\boldsymbol{P}}_{\pi} \cdot \ensuremath{\boldsymbol{x}} + \ensuremath{\boldsymbol{P}}_{\pi}^{-1} \cdot \ensuremath{\boldsymbol{u}}^{(k_{\ensuremath{\boldsymbol{x}}})}.
\end{equation}
\end{definition}
As defined, it is not even clear that $\textsc{PLSoftMax\xspace}^\delta$ is a valid soft-max function, i.e., that $\textsc{PLSoftMax\xspace}^\delta(x)\in\Delta_{d-1}$. This, as well as the following result, is proved in Appendix~\ref{app:plsm}.
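To make Definition~\ref{def:soft-maxFunctionDef} concrete, here is a short numerical sketch (our own illustration, not the authors' code). We read the off-diagonal $-1/(j(j-1))$ entries of $\matr{SM}_{(k, d)}$ as sitting above the diagonal ($i < j$); this is the placement under which every row and column of the matrix sums to zero, which keeps the output on $\Delta_{d-1}$:

```python
import numpy as np

def soft_max_matrix(k, d):
    """Soft-max matrix SM_(k,d); the -1/(j(j-1)) entries are placed above the
    diagonal (i < j), so every row and column sums to zero."""
    M = np.zeros((d, d))
    M[0, 0] = (k - 1) / k
    for i in range(2, k + 1):
        M[i - 1, i - 1] = 1.0 / i          # m_ii = 1/i
        M[i - 1, 0] = -1.0 / k             # m_i1 = -1/k
    for j in range(2, k + 1):
        for i in range(1, j):              # entries with i < j
            M[i - 1, j - 1] = -1.0 / (j * (j - 1))
    return M

def plsoftmax(x, delta):
    d = len(x)
    pi = np.argsort(-x)                    # sorting permutation
    y = x[pi]                              # y[0] >= y[1] >= ... >= y[d-1]
    k = int(np.sum(y[0] - y <= delta))     # k_x: options within delta of the max
    u = np.zeros(d)
    u[:k] = 1.0 / k                        # vector u^(k)
    f_sorted = soft_max_matrix(k, d) @ y / delta + u
    f = np.empty(d)
    f[pi] = f_sorted                       # undo the sorting permutation
    return f

p = plsoftmax(np.array([1.0, 0.9, 0.2]), delta=0.5)
```

On $\ensuremath{\boldsymbol{x}} = (1.0, 0.9, 0.2)$ with $\delta = 0.5$ this gives $k_{\ensuremath{\boldsymbol{x}}} = 2$ and output $(0.6, 0.4, 0)$: the outcome of value $0.2$, more than $\delta$ below the maximum, receives probability exactly zero, as the worst-case guarantee demands.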
\medskip
\begin{theorem} \label{thm:mainsoft-maxFunctionTV}
Let $\delta > 0$, $\textsc{PLSoftMax\xspace}^{\delta}$ be the function defined in
\eqref{eq:soft-maxFunctionTV} and let $\vec{x} \in \ensuremath{\mathbb{R}}^d$, then
\begin{Enumerate}
\item $\textsc{PLSoftMax\xspace}^{\delta}$ is $\delta$-approximate in the worst case.
\item For any $p, q \ge 1$, $\textsc{PLSoftMax\xspace}^{\delta}$ is $(\ell_p,\ell_q)$-Lipschitz continuous with a Lipschitz constant that is at most
\[\frac{2}{\delta} \min\{p + 1, \frac{q}{q - 1}, \log d\}.\]
\end{Enumerate}
\end{theorem}
The proof of Theorem \ref{thm:mainsoft-maxFunctionTV} is based on bounding the
$(p, q)$-subordinate norm of the matrices $\matr{SM}_{(k, d)}$. This is a challenging task since even computing the $(p, q)$-subordinate norm is NP-hard~\cite{Rohn2000, HendrickxO10}. To circumvent this, we
generalize a theorem of~\cite{DrakakisP09} for subordinate norms, which might be of independent interest.
\subsection{Lower Bounds and Comparison with the Exponential Function}
\label{sec:plsm_LB}
Theorem~\ref{thm:mainsoft-maxFunctionTV} shows that the $(\ell_p, \ell_q)$-Lipschitz constant of \textsc{PLSoftMax\xspace}\ is at most $O(1/\delta)$ when $p$ is bounded or when $q$ is bounded away from $1$, but becomes $O(\log(d)/\delta)$ when $(p,q)$ gets close to $(\infty,1)$. It is easy to see that no soft-max function can achieve a Lipschitz constant of $o(1/\delta)$. The following theorem shows that even for $(p,q)=(\infty,1)$, no soft-max function can beat the bound proved in Theorem~\ref{thm:mainsoft-maxFunctionTV} for \textsc{PLSoftMax\xspace}. The proofs of this theorem and the other theorems in this section are deferred to Appendix~\ref{app:plsm_LB}.
\begin{theorem} \label{thm:totalVariationLowerBound}
Let $c,\delta>0$, and assume $f: \ensuremath{\mathbb{R}}^d \to \Delta_{d-1}$ is a soft-max function that is $\delta$-approximate and $(\ell_\infty,\ell_1)$-Lipschitz continuous with a Lipschitz constant of at most $c$. Then, $c = \Omega\left( \log d/\delta \right)$.
\end{theorem}
It is not hard to prove that for every $\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}}$, $\norm{\ensuremath{\boldsymbol{x}} - \ensuremath{\boldsymbol{y}}}_1 \le \renyi{\infty}{\ensuremath{\boldsymbol{x}}}{\ensuremath{\boldsymbol{y}}}$. Therefore, since the exponential soft-max function $\textsc{Exp}^\lambda$ for $\lambda=\log(d)/\delta$ is $(\ell_p,D_\infty)$-Lipschitz continuous with a Lipschitz constant of at most $2\lambda$ (Theorem~\ref{thm:exp}), it must also be $(\ell_p,\ell_1)$-Lipschitz with the same constant. The following theorem shows that this Lipschitz constant is at least $\frac{\lambda}2$.
\begin{theorem} \label{thm:exponentialLowerBound}
The $(\ell_p,\ell_1)$-Lipschitz constant of the soft-max function $\textsc{Exp}^\lambda$ is at least $\frac{\lambda}2$. Therefore, the $(\ell_p,\ell_1)$-Lipschitz constant of a $\delta$-approximate exponential soft-max function is at least $\frac{\log d}{2 \delta}.$
\end{theorem}
The combination of the above result and Theorem~\ref{thm:mainsoft-maxFunctionTV} shows that in terms of the $(\ell_p,\ell_1)$-Lipschitz constant, there is a gap of $\Theta(\log d)$ between the exponential function and \textsc{PLSoftMax\xspace}. \section{Other variants and desirable properties}
\label{sec:otherproperties}
In the previous sections, we studied the tradeoff between Lipschitz continuity of soft-max functions and their approximation quality, as quantified by the maximum additive gap between the (expected) value of the outcome picked and the maximum value. In this section, we look into variants of our definitions and other desirable properties that we might need to require from the soft-max function. Most importantly, is it possible to require a multiplicative notion of approximation?
\subsection{Multiplicative approximation}
\label{sec:multiplicative}
For any $\delta \ge 0$, we say that a soft-max function $\ensuremath{\boldsymbol{f}} : \ensuremath{\mathbb{R}}_+^d\mapsto\Delta_{d-1}$ is {\em $\delta$-multiplicative-approximate} if for every $\ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}_+^d$, we have $\langle \ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{x}}) \rangle \ge (1 - \delta) \max_i\{x_i\}$. Similarly, we can define the notion of $\delta$-multiplicative-approximate in the worst case.\footnote{Note that throughout this section, we restrict the domain of the soft-max function to positive values only.} Such multiplicative notions of approximation are practically useful in settings where the scale of the input is unknown.
First, here is a simple observation: to get a soft-max function with a multiplicative approximation guarantee, it is enough to start with one with an additive guarantee and apply it to the logarithm of the input values. The resulting function will be Lipschitz continuous, but with respect to a different distance measure as defined below.
\begin{definition}
For any $\ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}_+^d$, let $\log(\ensuremath{\boldsymbol{x}}) := (\log(x_1),\ldots,\log(x_d))$.
For $p\ge 1$, the \textit{$p$-log-Euclidean} distance between two points $\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}} \in \ensuremath{\mathbb{R}}_+^d$ is denoted by $\text{Log-}\ell_p(\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}})$ and is defined as $\text{Log-}\ell_p(\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}}):=\norm{\log(\ensuremath{\boldsymbol{x}}) - \log(\ensuremath{\boldsymbol{y}})}_p$. Note that $\text{Log-}\ell_p$ is a metric.
\end{definition}
We can now state the above observation as follows:
\begin{proposition}
Let $\ensuremath{\boldsymbol{f}}:\ensuremath{\mathbb{R}}^d\to\Delta_{d-1}$ be a soft-max function that is $\delta$-approximate and $(\ell_p, \chi)$-Lipschitz for a distance measure $\chi$. Then the function $\text{Log}\ensuremath{\boldsymbol{f}}:\ensuremath{\mathbb{R}}_+^d\mapsto\Delta_{d-1}$ defined by $\text{Log}\ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{x}}):=\ensuremath{\boldsymbol{f}}(\log(\ensuremath{\boldsymbol{x}}))$ is a $\delta$-multiplicative-approximate soft-max function that is $(\text{Log-}\ell_p, \chi)$-Lipschitz with the same Lipschitz constant as $\ensuremath{\boldsymbol{f}}$.
\end{proposition}
Applying this proposition to \textsc{PLSoftMax\xspace}, we obtain a soft-max function called \textsc{LogPLSoftMax\xspace}\ that is $\delta$-multiplicative-approximate in the worst case and $(\text{Log-}\ell_p, \ell_q)$-Lipschitz.
More notably, applying this proposition to the exponential function, we obtain a soft-max function that we call the {\em power mechanism}, with a very simple and natural description: The Power Mechanism $\textsc{Pow}^{\lambda}$ with parameter $\lambda$,
applied to the input vector $x\in \mathbb{R}_{+}^d$ is defined as
$\textsc{Pow}^{\lambda}_i(\ensuremath{\boldsymbol{x}}) = x_i^{\lambda}/\sum_{j = 1}^d x_j^{\lambda}$. We will see in Section~\ref{sec:applications} how this mechanism can be used to improve existing results in a differentially private optimization problem.
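In code the power mechanism is a one-liner; the sketch below (ours) also demonstrates scale invariance, a property we return to in the next subsection:

```python
import numpy as np

def power_mech(x, lam):
    # Pow^lam: probability of i proportional to x_i^lam (positive inputs);
    # dividing by the maximum first keeps x_i ** lam from overflowing
    z = (x / x.max()) ** lam
    return z / z.sum()

x = np.array([4.0, 2.0, 1.0])
p = power_mech(x, lam=3.0)          # proportional to (64, 8, 1)
q = power_mech(10.0 * x, lam=3.0)   # scale invariance: same output
```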
A question that remains is whether, to obtain a multiplicative approximation, it is necessary to switch the domain distance measure to $\text{Log-}\ell_p$. In other words, are there $\delta$-multiplicative-approximate soft-max functions that are Lipschitz with respect to the domain metric $\ell_p$? The following theorem, whose proof is deferred to the appendix, provides a negative answer.
\begin{theorem}
For $\delta > 0$, let $\ensuremath{\boldsymbol{f}} : \ensuremath{\mathbb{R}}^d \to \Delta_{d-1}$ be a $\delta$-multiplicative-approximate
soft-max function. Then there is no $p, q$ such that $\ensuremath{\boldsymbol{f}}$ is $(\ell_p,\ell_q)$-Lipschitz with a bounded Lipschitz constant. Similarly, there is no $p, \alpha$ such that $f$ is $(\ell_p,D_\alpha)$-Lipschitz with a bounded Lipschitz constant.
\end{theorem}
\subsection{Scale and Translation Invariance}
Related to the notion of multiplicative approximation, one might wonder if there are soft-max functions that are {\em scale invariant}, i.e., guarantee that for every $\ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}^d$ and $c \in \ensuremath{\mathbb{R}}$, $\ensuremath{\boldsymbol{f}}(c \ensuremath{\boldsymbol{x}}) = \ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{x}})$? Similarly, one may require {\em translation invariance}, i.e., that for every $\ensuremath{\boldsymbol{x}} \in \ensuremath{\mathbb{R}}^d$ and $c \in \ensuremath{\mathbb{R}}$, $\ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{x}} + c \cdot {\bf 1}) = \ensuremath{\boldsymbol{f}}(\ensuremath{\boldsymbol{x}})$. It is easy to see that indeed the mechanisms \textsc{Exp}\ and \textsc{Pow}\ are translation and scale invariant, respectively. It is less obvious, but still not difficult, to show that similarly, the mechanisms \textsc{PLSoftMax\xspace}\ and \textsc{LogPLSoftMax\xspace}\ are translation and scale invariant, respectively.
In fact, it turns out that translation and scale invariance go hand-in-hand with the notion of approximation: no scale-invariant function can guarantee additive approximation, and no translation-invariant function can guarantee multiplicative approximation.
\section{Applications}
\label{sec:applications}
We present three applications of the soft-max functions introduced in this paper. In Section~\ref{sec:mech_design}, we show how to use \textsc{PLSoftMax\xspace}\ to design approximately incentive compatible mechanisms. In Section~\ref{sec:dp_submodular} we use \textsc{Pow}\ to improve a result on differentially private submodular maximization. Finally, in Section~\ref{sec:sparse_classification}, we discuss potential applications of \textsc{PLSoftMax\xspace}\ in neural network classifiers.
\subsection{Approximately Incentive Compatible Mechanisms via \textsc{PLSoftMax\xspace}}
\label{sec:mech_design}
Let us start with an abstract definition of incentive compatibility in mechanism design.
Consider a setting with $n$ self-interested agents, indexed $1,\ldots, n$. A mechanism is a (randomized) algorithm $A$ that must pick one of the possible outcomes in a set $\Omega$. For simplicity, let us assume that $\Omega$ is finite and $|\Omega|=d$. Each agent $i$ has a utility function $u_i\in \ensuremath{\mathbb{R}}_+^\Omega$ that specifies the value that $i$ places on each of the possible outcomes. Let $\ensuremath{\mathcal{U}} \subseteq \ensuremath{\mathbb{R}}_+^{\Omega}$ denote the space of all possible utility functions. The input of the algorithm $A$ is the reported utility of all the agents, i.e., $A$ takes a $u\in \ensuremath{\mathcal{U}}^n$ as input, and probabilistically picks an outcome $A(u)$ in $\Omega$. We say that $A$ is \textit{$\varepsilon$-incentive compatible} with respect to $\ensuremath{\mathcal{U}}$ if for every $u \in \ensuremath{\mathcal{U}}^n$, $u' \in \ensuremath{\mathcal{U}}$ and every agent $i$, the following inequality holds
$\Exp_{z \sim A(\ensuremath{\boldsymbol{u}})}\b{u_i(z)} \ge \Exp_{z \sim A(u', \ensuremath{\boldsymbol{u}}_{-i})}\b{u_i(z)} - \varepsilon$.
Typically, in mechanism design, the challenge is to design a mechanism $A$ that is incentive compatible and at the same time (approximately) optimizes a given objective function $w$ that depends on the utility of the agents $u\in\ensuremath{\mathcal{U}}^n$ as well as the selected outcome in $\Omega$.
At a high-level, a soft-max function can be used to design an incentive compatible mechanism as follows: Assume $f:\ensuremath{\mathbb{R}}^d\to\Delta_{d-1}$ is $(\chi,\ell_1)$-Lipschitz with respect to some domain distance measure $\chi$. The mechanism $A_f$ is defined as follows: it computes the value of all outcomes in $\Omega$ at the reported utilities $u\in \ensuremath{\mathcal{U}}^n$, and uses $f$ to pick an outcome with respect to these values.
A central concept is the sensitivity of the function $w$ with respect to $\chi$. The $\chi$-sensitivity $S_{\chi}(w)$ of $w$ is defined as
$S_{\chi}(w) = \max\{\chi \left(\ensuremath{\boldsymbol{w}}(\ensuremath{\boldsymbol{v}}), \ensuremath{\boldsymbol{w}}(\ensuremath{\boldsymbol{v}}_{-i}, v'_i) \right)\}$, where
$\ensuremath{\boldsymbol{w}}(\ensuremath{\boldsymbol{v}}) = (w(\ensuremath{\boldsymbol{v}}, 1), \dots, w(\ensuremath{\boldsymbol{v}}, d))$ and the maximum is taken over all possible
$i, \ensuremath{\boldsymbol{v}}, v_i'$. If the soft-max function $f$ has low $(\chi,\ell_1)$-Lipschitz constant, and the objective $w$ has low sensitivity with respect to $\chi$, we can use the following theorem to obtain an $\varepsilon$-incentive compatible mechanism.
\begin{theorem} \label{lem:LipschitzFromFtoA}
Assume a mechanism design setting where utilities of the agents are bounded from above by 1, i.e.,
$\ensuremath{\mathcal{U}} \subseteq [0, 1]^{\Omega}$.
Let $f : \ensuremath{\mathbb{R}}^d \to \Delta_{d-1}$ be a soft-max function with $(\chi,\ell_1)$-Lipschitz constant at most $L$, and $w : \ensuremath{\mathcal{U}}^n\times\Omega \to \ensuremath{\mathbb{R}}$ be an objective function. The
algorithm $A_f$ is $(L \cdot S_{\chi}(w))$-incentive compatible with respect to $\ensuremath{\mathcal{U}}$.
\end{theorem}
The connection between soft-max and mechanism design established in the above theorem is not new. McSherry and Talwar~\cite{McSherryT07} originally pointed out this connection and used it to design incentive compatible mechanisms. However, they stated this connection in terms of differential privacy (closely related to the $(\chi,D_\infty)$-Lipschitz property). The main difference between the above theorem and the one by McSherry and Talwar is that we only require $(\chi,\ell_1)$-Lipschitz continuity, which is closer to what the application demands. This can be combined with the soft-max function \textsc{PLSoftMax\xspace}\ analyzed in Theorem \ref{thm:mainsoft-maxFunctionTV} to obtain results that were not achievable using the exponential mechanism. We present two applications of this here. See~\refappendix{sec:singleItem} for details and proofs.
\paragraph{Worst-Case Guarantees for Mechanism Design.}
If we replace the exponential mechanism with \textsc{PLSoftMax\xspace}\ in many applications of
Differential Privacy in Mechanism Design, we get approximate incentive compatible algorithms with
\textit{worst-case} approximation guarantees as opposed to the expected approximation or the
high-probability guarantees that are currently known. Consider, for example, the digital goods
auction problem from \cite{McSherryT07}, where $n$ bidders each have a private utility for a good
of which the auctioneer has an unlimited supply, and let $\mathrm{R}_{\mathrm{OPT}}$ be the optimal
revenue that the auctioneer can extract for a given set of bids. We can then prove the following.
\begin{inftheorem} \label{ithm:digitalGoodsWorstCase}
There is an $\varepsilon$-incentive compatible mechanism for the digital goods auction problem where
the revenue of the auctioneer is at least
$\mathrm{R}_{\mathrm{OPT}} - O(\mathrm{R}_{\mathrm{OPT}} \cdot n/\varepsilon)$ \textbf{in the worst-case}.
\end{inftheorem}
\paragraph{Better Sensitivity implies better Utility.} If the revenue objective
function $w$ has bounded $S_q(w)$ sensitivity for some $q < \log(d)$, then using \textsc{PLSoftMax\xspace}\ we get a significantly better revenue-incentive compatibility tradeoff compared to using
the exponential mechanism. This is clear from Theorem \ref{lem:LipschitzFromFtoA} and Theorems
\ref{thm:mainsoft-maxFunctionTV} and \ref{thm:exp}. See~\refappendix{sec:singleItem} for details.
\subsection{Differentially Private Submodular Maximization via the Power Mechanism}
\label{sec:dp_submodular}
We now show that $D_{\infty}$ smoothness can be used to design differentially private algorithms.
\paragr{Differential Privacy.} A randomized algorithm $A$ satisfies
$\varepsilon$-\textit{differential privacy} if
$\Prob(A(\ensuremath{\boldsymbol{v}}) \in S) \le \exp(\varepsilon) \cdot \Prob(A(v'_i, \vec{v}_{-i}) \in S)$
for all $i \in [n]$, $\ensuremath{\boldsymbol{v}} \in \mathcal{D}^n$, $v'_i \in \mathcal{D}$ and all sets
$S \subseteq \Omega$, where $\Omega$ is the set of possible outputs of $A$.
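The standard exponential mechanism (a temperature-scaled softmax over scores) gives a minimal illustration of this definition. The following sketch is not the \textsc{PLSoftMax\xspace}\ or power mechanism analyzed here; it numerically checks the $\varepsilon$-DP ratio bound for the exponential mechanism on one pair of neighboring score vectors, with all names and numbers chosen for illustration only:

```python
import math

def exponential_mechanism(scores, eps, sensitivity):
    """Standard exponential mechanism: P(i) proportional to exp(eps * scores[i] / (2 * sensitivity))."""
    weights = [math.exp(eps * s / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

def max_log_ratio(p, q):
    """Worst-case privacy loss max_i |log(p_i / q_i)| between two distributions."""
    return max(abs(math.log(pi / qi)) for pi, qi in zip(p, q))

# Two "neighboring" score vectors: one agent's report moves each score by at most 1.
scores  = [3.0, 1.0, 2.0]
scores2 = [2.5, 1.8, 2.0]
eps = 0.5
p = exponential_mechanism(scores, eps, 1.0)
q = exponential_mechanism(scores2, eps, 1.0)
# The privacy loss is bounded by eps; the factor 2 in the mechanism covers the
# shift of the normalizing constant.
assert max_log_ratio(p, q) <= eps
```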
\smallskip
For a distance metric $\chi$ on $\ensuremath{\mathbb{R}}_+^d$, if the soft-max function $\ensuremath{\boldsymbol{f}}$ satisfies $\renyi{\infty}{\vec{f}(\vec{x})}{\vec{f}(\vec{y})} \le L \cdot \chi(\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}})$ for all $\vec{x}, \vec{y} \in \ensuremath{\mathbb{R}}_+^{d}$, then it can
be used to design differentially private algorithms when the objective function has low $\chi$
sensitivity, according to the following lemma. The proof of this lemma follows directly from the definitions of $\mathrm{D}_{\infty}$ and
$S_{\chi}$.
\begin{lemma} \label{lem:privacyFromFtoA}
Let $\vec{f} : \ensuremath{\mathbb{R}}_+^d \to \Delta_{d}$ be a soft maximum function, $w : \mathcal{D}^n \to \ensuremath{\mathbb{R}}_+$ be
an objective function. If $\vec{f}$ is $L$-Lipschitz with respect to $\mathrm{D}_{\infty}$ and
$\chi$, then $A_{\ensuremath{\boldsymbol{f}}}$ is $(L/S_{\chi}(w))$-differentially private.
\end{lemma}
\subsubsection{Application to Differentially Private Submodular Optimization}
\label{sec:multiplicative:submodular}
In differentially private maximization of submodular functions under cardinality constraints, we
observe that if the input data set satisfies a mild assumption, then using the power mechanism we
achieve an asymptotically smaller error compared to the state-of-the-art algorithm of Mitrovic et
al. \cite{MitrovicBKK17}.
\paragr{Submodular Functions.} Let $\mathcal{D}$ be a set of elements with $d = \abs{\mathcal{D}}$. A
function $h : 2^{\mathcal{D}} \to \mathbb{R}_{+}$ is called submodular if
$h(R \cup \{v\}) - h(R) \ge h(T \cup \{v\}) - h(T)$ for all $R \subseteq T \subseteq \mathcal{D}$ and
all $v \in \mathcal{D} \setminus T$.
\paragr{Monotone Functions.} A function $h : 2^{\mathcal{D}} \to \mathbb{R}_{+}$ is monotone if
$h(T) \ge h(R)$ for all $R \subseteq T \subseteq \mathcal{D}$.
\noindent Monotone and Submodular Maximization under Cardinality Constraints is
the optimization problem $\max_{R \subseteq \mathcal{D}, \abs{R} \le k} h(R)$.
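As a point of reference, the non-private baseline behind Algorithm 1 of \cite{MitrovicBKK17} is the classical greedy algorithm, which achieves a $(1-1/e)$-approximation for this problem. A minimal sketch on a coverage function follows; the data and names are illustrative, and the private variants replace the exact argmax below with a soft-max sampling step:

```python
def greedy_max(h, universe, k):
    """Non-private greedy for max_{|R| <= k} h(R), monotone submodular h."""
    R = set()
    for _ in range(k):
        # Private variants sample this choice via a soft-max instead of an exact argmax.
        best = max((v for v in universe if v not in R),
                   key=lambda v: h(R | {v}) - h(R))
        R.add(best)
    return R

# Coverage functions are monotone and submodular.
sets = {'a': {1, 2, 3}, 'b': {3, 4, 5}, 'c': {5, 6}}
cover = lambda R: len(set().union(*[sets[v] for v in R])) if R else 0
R = greedy_max(cover, sets.keys(), 2)
assert cover(R) == 5   # {a, b} covers {1, 2, 3, 4, 5}
```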
We use Algorithm 1 of \cite{MitrovicBKK17}, replacing the
exponential mechanism in the soft maximization step with the power mechanism. Let $S_{l, q}$ be the
sensitivity of $h$ with respect to $q$-log-Euclidean quasi-metric and $\mathrm{OPT}$ be the optimal
value.
\begin{theorem} \label{thm:monotoneSubmodularCardinality}
Let $h : 2^{\mathcal{D}} \to \ensuremath{\mathbb{R}}_+$ be a monotone and submodular function. Then, there exists an
efficient $(\varepsilon, \delta)$-differentially private algorithm with output $R_k$ that achieves the multiplicative
approximation guarantee
$\Exp[h(R_k)] \ge \left(1 - \exp\p{d^{\frac{\varepsilon}{S_{l, \infty}(h) \p{\sqrt{k} + \sqrt{\log(1/\delta)}}} - 1}} \right) \mathrm{OPT}$.
\end{theorem}
Even though it is not immediately clear if the above guarantee is better than the one
of \cite{MitrovicBKK17}, we note that the above result has only
a multiplicative approximation error. In contrast, the algorithm of \cite{MitrovicBKK17} has both
multiplicative and additive error. In general, it is impossible to compare the two
tradeoffs,
because the tradeoff of
\cite{MitrovicBKK17} is parameterized by $S_{\infty}$ sensitivity of $h$ whereas
our tradeoff is parameterized by $S_{l, \infty}$ of $h$. Even though there is no
a priori comparison between the two sensitivities, the following mild
assumption allows us to compare them.
\begin{definition}[\textsc{$t$-Multiplicative Insensitivity}] \label{def:nicenessDefinition}
A data-set $\mathcal{D}^n$ is $t$-\textit{multiplicative insensitive} for an objective
function $w : \mathcal{D}^n \times [d] \to \mathbb{R}_{+}$ if for any two inputs
$\matr{V}, \matr{V'} \in \mathcal{D}^n$ that differ only in one coordinate and for any $i \in [d]$, if
$w(\matr{V'}, i) \le w(\matr{V}, i)$ it holds that
$\frac{w(\matr{V'}, i)}{w(\matr{V}, i)} \ge 1 - \frac{1}{t} \frac{S_{\infty}(w)}{\mathrm{OPT}}$.
\end{definition}
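The condition in this definition can be transcribed directly as a check on one pair of neighboring inputs. The following sketch is a hypothetical illustration (the toy objective values, $S_{\infty}$, and $\mathrm{OPT}$ below are made up, not taken from any experiment in the paper):

```python
def is_t_insensitive(w_V, w_Vp, t, S_inf, OPT):
    """Check Definition (t-multiplicative insensitivity) for one neighboring
    pair: wherever the objective drops, the ratio w'(i)/w(i) must stay
    at least 1 - (1/t) * S_inf / OPT."""
    bound = 1.0 - (S_inf / OPT) / t
    return all(wp / wv >= bound
               for wv, wp in zip(w_V, w_Vp) if wp <= wv)

# Toy revenue-like objective on d = 3 outcomes, before and after one agent deviates.
w_V, w_Vp = [10.0, 8.0, 6.0], [9.5, 8.2, 5.9]
S_inf = max(abs(a - b) for a, b in zip(w_V, w_Vp))   # worst coordinate change, here 0.5
OPT = max(w_V)
assert is_t_insensitive(w_V, w_Vp, t=1, S_inf=S_inf, OPT=OPT)
```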
Based on the above definition, we prove that the error of the power mechanism, under the assumption of
$O(1)$-multiplicative insensitivity, is asymptotically better than the error of the
exponential mechanism. This improvement is also observed in experiments with real-world data, as
shown in Figure \ref{fig:experiment-sub}. The missing proofs and a detailed explanation of the
results are in the~\refappendix{sec:submodular}.
\begin{corollary} \label{cor:monotoneSubmodularCardinalityWithAssumption}
Assume the input data satisfy $O(1)$-multiplicative insensitivity, and let $T_{k}$ be the output of
Algorithm 1 of \cite{MitrovicBKK17} using the exponential mechanism. Then the approximation guarantee is
\[ \Exp[h(T_k)] \ge \left(1 - 1/e \right) \mathrm{OPT} - O \left( k \cdot S_{\infty}(h) \log \abs{\mathcal{D}}/\varepsilon \right) \]
whereas if $R_k$ is the output when using the power mechanism, then the approximation guarantee
is
\[ \Exp[h(R_k)] \ge \left(1 - 1/e \right) \mathrm{OPT} - O \left( \sqrt{k} \cdot S_{\infty}(h) \log \abs{\mathcal{D}}/\varepsilon \right). \]
\end{corollary}
\begin{figure}[t]
\vspace{-1cm}
\centering
\begin{subfigure}[t]{0.4\textwidth}
\includegraphics[width=0.95\textwidth]{img/dblp-l1.eps}
\caption{$\ell_1$ Distance}
\label{fig:experiment-1-sub}
\end{subfigure}
~
\begin{subfigure}[t]{0.4\textwidth}
\includegraphics[width=0.95\textwidth]{img/dblp-l0.eps}
\caption{$\ell_\infty$ Distance}
\label{fig:experiment-2-sub}
\end{subfigure}
\caption{Smoothness vs utility in submodular maximization with cardinality constraint $k=10$. The y-axis shows the ratio of the average objective to that of the (non-private) greedy algorithm. The x-axis represents the measured sensitivity, in the manipulation test, of the value of the first element selected.}\label{fig:experiment-sub}
\end{figure}
We validated these theoretical results with an empirical study reported fully in \refappendix{sec:experiments}. Here we briefly outline our results in Figure~\ref{fig:experiment-sub}, where we show improved objective-vs-sensitivity trade-offs for the power mechanism in empirical data manipulation tests. In these experiments we randomly manipulated a submodular optimization instance and measured how the output distribution of a differentially private soft-max (the power and exponential mechanisms with a given parameter) is affected by the manipulation (x-axis). On the y-axis we report the average objective obtained by each algorithm and parameter setting. The results in Figure~\ref{fig:experiment-sub} show that, for the same level of empirical sensitivity, the power mechanism allows substantially improved results.
\subsection{Sparse Multi-class Classification}
\label{sec:sparse_classification}
Sparsity, or in our language worst-case approximation guarantee, is relevant both in multiclass
classification and in designing attention mechanisms \cite{MartinsA16, LahaCAJSR18}. As illustrated
in Theorem \ref{thm:mainsoft-maxFunctionTV}, \textsc{PLSoftMax\xspace}\ has small $\ell_q \to \ell_p$
smoothness for any $p, q$. In contrast, the mechanisms proposed in \cite{MartinsA16, LahaCAJSR18}
achieve much worse $\ell_q \to \ell_1$ smoothness as we can see below.
\begin{lemma} \label{lem:sparsemaxLowerBound}
Let $h(\cdot) = \mathrm{sparsegen}\text{-}\mathrm{lin}(\cdot)$ be the generalization of $\mathrm{sparsemax}(\cdot)$
function, then there exist $\ensuremath{\boldsymbol{x}}, \ensuremath{\boldsymbol{y}} \in \ensuremath{\mathbb{R}}^d$ such that
$\norm{h(\ensuremath{\boldsymbol{x}}) - h(\ensuremath{\boldsymbol{y}})}_1 \ge \frac{1}{2} d^{1 - 1/q} \norm{\ensuremath{\boldsymbol{x}} - \ensuremath{\boldsymbol{y}}}_q$.
\end{lemma}
In contrast, \textsc{PLSoftMax\xspace}\ achieves $\ell_q \to \ell_1$ smoothness of order
$\min\{q + 1, \log(d)\}$.
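For contrast with the smoothness lower bound above, the sparsemax function of [MartinsA16] is the Euclidean projection onto the probability simplex, which produces exact zeros in its output, unlike the exponential mechanism. The following sketch follows the standard sort-based projection algorithm; it is an illustration, not the paper's \textsc{PLSoftMax\xspace}:

```python
import math

def sparsemax(x):
    """Euclidean projection of x onto the probability simplex (sort-based algorithm)."""
    z = sorted(x, reverse=True)
    csum, tau = 0.0, 0.0
    for j, zj in enumerate(z, start=1):
        csum += zj
        t = (csum - 1.0) / j
        if zj - t > 0:          # zj stays in the support with this threshold
            tau = t
    return [max(xi - tau, 0.0) for xi in x]

def softmax(x):
    m = max(x)
    e = [math.exp(xi - m) for xi in x]
    s = sum(e)
    return [ei / s for ei in e]

p = sparsemax([2.0, 1.0, -1.0])
assert abs(sum(p) - 1.0) < 1e-12
assert p[2] == 0.0                                    # exact zero: sparse output
assert all(q > 0 for q in softmax([2.0, 1.0, -1.0]))  # softmax is never sparse
```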
Smoothness is preferred for gradient calculation in commonly adopted stochastic gradient descent algorithms. To illustrate this we define a loss function
with properties that are summarized in the following proposition. A detailed explanation of the
loss function and a proof of Proposition \ref{prop:lossFunction:properties} are presented in Appendix
\ref{sec:lossFunction}.
\begin{proposition} \label{prop:lossFunction:properties}
There exists a loss function $L_{\textsc{PLSoftMax\xspace}} : \ensuremath{\mathbb{R}}^d \times \Delta_{d - 1} \to \ensuremath{\mathbb{R}}_+$
such that:
(1) $L_{\textsc{PLSoftMax\xspace}}(\ensuremath{\boldsymbol{x}}; \ensuremath{\boldsymbol{q}}) \ge 0$,
(2) $L_{\textsc{PLSoftMax\xspace}}(\ensuremath{\boldsymbol{x}}; \ensuremath{\boldsymbol{q}}) = 0 \Leftrightarrow \textsc{PLSoftMax\xspace}^{\delta}(\ensuremath{\boldsymbol{x}}) = \ensuremath{\boldsymbol{q}}$,
(3) $L_{\textsc{PLSoftMax\xspace}}(\ensuremath{\boldsymbol{x}}; \ensuremath{\boldsymbol{q}})$ is a convex function with respect to $\ensuremath{\boldsymbol{x}}$.
\end{proposition}
\section*{Acknowledgements}
MZ was supported by a Google Ph.D. Fellowship.
\bibliographystyle{alpha}
| {
"timestamp": "2020-10-23T02:10:42",
"yymm": "2010",
"arxiv_id": "2010.11450",
"language": "en",
"url": "https://arxiv.org/abs/2010.11450",
"abstract": "A soft-max function has two main efficiency measures: (1) approximation - which corresponds to how well it approximates the maximum function, (2) smoothness - which shows how sensitive it is to changes of its input. Our goal is to identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness. This leads to novel soft-max functions, each of which is optimal for a different application. The most commonly used soft-max function, called exponential mechanism, has optimal tradeoff between approximation measured in terms of expected additive approximation and smoothness measured with respect to Rényi Divergence. We introduce a soft-max function, called \"piecewise linear soft-max\", with optimal tradeoff between approximation, measured in terms of worst-case additive approximation and smoothness, measured with respect to $\\ell_q$-norm. The worst-case approximation guarantee of the piecewise linear mechanism enforces sparsity in the output of our soft-max function, a property that is known to be important in Machine Learning applications [Martins et al. '16, Laha et al. '18] and is not satisfied by the exponential mechanism. Moreover, the $\\ell_q$-smoothness is suitable for applications in Mechanism Design and Game Theory where the piecewise linear mechanism outperforms the exponential mechanism. Finally, we investigate another soft-max function, called power mechanism, with optimal tradeoff between expected \\textit{multiplicative} approximation and smoothness with respect to the Rényi Divergence, which provides improved theoretical and practical results in differentially private submodular optimization.",
"subjects": "Machine Learning (cs.LG); Data Structures and Algorithms (cs.DS)",
"title": "Optimal Approximation -- Smoothness Tradeoffs for Soft-Max Functions"
} |
https://arxiv.org/abs/1809.03070 | Rectangle Coincidences and Sweepouts | We prove an integral formula for continuous paths of rectangles inscribed in a piecewise smooth loop. We then use this integral formula to show that (with a very mild genericity hypothesis) the number of rectangle coincidences, informally described as the number of inscribed rectangles minus the number of isometry classes of inscribed rectangles, grows linearly with the number of positively oriented extremal chords -- a.k.a. diameters -- of the polygon |
\section{Introduction}
A {\it Jordan loop\/} is the image of a circle under a
continuous injective map into the plane.
Toeplitz conjectured in 1911 that every Jordan loop contains $4$
points which are the vertices of a square. This is sometimes called
the {\it Square Peg Problem\/}.
For historical details and a long bibliography, we refer the
reader to the excellent survey article [{\bf M\/}] by B. Matschke,
written in 2014, and also Chapter 5 of I. Pak's online book
[{\bf P\/}].
Some interesting work on problems related to
the Square Peg Problem has been done very recently.
The paper of C. Hugelmeyer [{\bf H\/}] shows
that a smooth Jordan loop
always has an inscribed rectangle of
aspect ratio $\sqrt 3$. The paper
[{\bf AA\/}] proves that any cyclic quadrilateral
can (up to similarity) be inscribed in any
convex smooth curve. The
paper [{\bf ACFSST\/}] proves, among other things,
that a dense set
of points on an embedded loop in space are
vertices of a (possibly degenerate) inscribed
parallelogram.
Say that a rectangle $R$ {\it graces\/} a
Jordan loop $\gamma$ if the vertices of
$R$ lie in $\gamma$ and if the
cyclic ordering on the vertices induced by
$R$ coincides with the cyclic ordering
induced by $\gamma$.
Let $G(\gamma)$ denote the space of
labeled gracing rectangles.
In [{\bf S1\/}] we prove the following
result.
\begin{theorem}
\label{threepoint}
Let $\gamma$ be a
Jordan loop. Then $G(\gamma)$
contains a connected set $S$ such that
all but at most $4$ vertices of
$\gamma$ are vertices of members of $S$.
\end{theorem}
We have a more precise characterization of
the possibilities for $S$ in [{\bf S1\/}].
We proved Theorem
\ref{threepoint} by taking a limit of
a result for polygons. We now
describe this result.
Given a polygon $P$, we say that a
chord $d$ of $P$ is a {\it diameter\/}
if the two perpendiculars to $d$
based at $\partial P$ do not
locally separate $\partial P$ into two arcs.
Each diameter can be positively oriented
or negatively oriented, but not both.
To explain the condition, we rotate
the picture so that $d$ is vertical.
The endpoints of $d$ divide $P$ into
two arcs $P_1$ and $P_2$. Given the
non-separating condition associated
to a chord, we can say whether $P_1$
locally lies to the left or right of $P_2$
in a neighborhood of each endpoint of
$d$. We call $d$ {\it positively oriented\/}
if the left/right answer is the same
at both endpoints. That is, either
$P_1$ locally lies to the left at
both endpoints or $P_1$ locally
lies to the right at both endpoints.
Figure 1 shows some examples of positive diameters.
\begin{center}
\resizebox{!}{2.5in}{\includegraphics{fig1.eps}}
\newline
{\bf Figure 1:\/} Some positive diameters of polygons.
\end{center}
With respect to the distance function on
$P$, a diameter can be a minimum, a maximum,
or neither. We call the third kind
{\it saddles\/}.
Let $\Delta_+(P)$ denote the number of positively oriented
diameters of $P$.
Let $\Pi_N$ denote
the space of embedded $N$-gons. The set $\Pi_N$
is naturally an open subset of
$(\R^2)^N$ and as such inherits the
structure of a smooth manifold.
We call a subset $\Pi_N^* \subset \Pi_N$ {\it fat\/}
if $\Pi_N-\Pi_N^*$ is a finite union of positive
codimension submanifolds. In particular,
a fat set is open and has full measure.
\begin{theorem}
\label{polygon}
There exists a fat subset $\Pi_N^* \subset \Pi_N$
with the following property. For every
$N$-gon $P \in \Pi_N^*$ the space
$\Gamma(P)$ of labeled gracing rectangles is a piecewise-smooth $1$-manifold.
Each arc component of $\Gamma(P)$ connects
two positive diameters of $P$, and every positive diameter
arises as the end of $4$ arc components
of $\Gamma(P)$. In particular, there are
$2\Delta_+(P)$ arc components of $\Gamma(P)$.
\end{theorem}
The reason that there are $4$ arc components
connecting every pair of positive diameters is
that we are considering cyclically labeled
rectangles. Each of the $4$ components
is obtained from each other one by
cyclically relabeling.
Now we describe the results we prove in this paper.
Given a rectangle $R$, we let
$X(R)$ and $Y(R)$ respectively denote the
lengths of the first and second sides of $R$.
For any continuous path of rectangles
in $\Gamma(P)$ which is either a closed loop
or which connects two diameters of $P$, we define
the {\it shape curve\/}
$Z(\alpha)$. This curve is given by
\begin{equation}
Z(\alpha,t)=(X(R_t),Y(R_t)).
\end{equation}
Here $t \to R_t$ is a parametrization of
$\alpha$.
When $\alpha$ is a closed loop,
$Z(\alpha)$ is a closed loop as well.
When $\alpha$ is an arc component,
$Z(\alpha)$ is an arc, not necessarily
embedded, that
starts and ends on the coordinate axes.
Figure 2 shows two of the possibilities.
\begin{center}
\resizebox{!}{1.5in}{\includegraphics{fig2.eps}}
\newline
{\bf Figure 2:\/} Shape curves associated to
hyperbolic and null arcs.
\end{center}
In the first case, one endpoint of
$\alpha$ lies on the $X$-axis and
the second endpoint lies on the $Y$-axis.
As in [{\bf S1\/}] we call such arcs
{\it hyperbolic arcs\/}.
In the other cases, both ends lie on the
same axis. We call such components
{\it null arcs\/}.
In the arc cases, we
augment $Z(\alpha)$ by adjoining
the relevant parts of
the coordinate axes so as to create a
closed loop. We have
shaded in the regions bounded by these
closed loops. We call this augmented
loop the {\it shape loop\/} associated
to $\alpha$ and give it the same name.
In [{\bf S2\/}] we found a kind of integral
formula associated to the shape loop, though
we stated it in a different context.
This invariant is quite similar to the
integral invariant in [{\bf Ta\/}], though
we use it in a different context. (In
\S \ref{squeeze} we give a sample result
from [{\bf S2\/}].) Here we
adapt the invariant to the present situation
and prove the following theorem.
\begin{theorem}
\label{sweep}
Let $P$ be any piecewise smooth Jordan loop.
Let $\alpha$ be a piecewise smooth path in
$\Gamma(P)$. If $\alpha$ is a hyperbolic
arc then the signed area of the region bounded by
$Z(\alpha)$ equals (up to sign) the area of
the region bounded by $P$. If
$\alpha$ is either a null arc or a closed
loop, then the signed area of the region
bounded by $Z(\alpha)$ is $0$.
\end{theorem}
Theorem \ref{sweep} says something about the number
of coincidences that appear amongst the inscribed
rectangles. We will give an example which
explains the connection. Since the shape loop associated to a
null component bounds a region of area $0$, the
shape curve must have a self-intersection. This
self-intersection corresponds to a pair of
isometric rectangles inscribed in the polygon.
Now we formulate a general result.
We call two labeled rectangles {\it really distinct\/}
if their unlabeled versions are also distinct.
Thus, two relabelings of the same rectangle are
not really distinct.
We define the multiplicity of the pair
$(X,Y)$ as follows.
\begin{itemize}
\item $\mu(X,Y)=n-1$ if there are
$n>1$ really distinct
labeled rectangles $R_1,...,R_n$ inscribed in $P$ such that
$X(R_j)=X$ and $Y(R_j)=Y$ for all $j=1,...,n$.
We also allow $n=\infty$.
\item $\mu(X,Y)=0$ if there are
$0$ or $1$ such rectangles.
\end{itemize}
We define
\begin{equation}
\label{coincidence}
M(P)=\sum \mu(X,Y),
\end{equation}
where the sum is taken over all pairs $(X,Y)$. Typically
this is a sum with finitely many finite nonzero terms.
There is a more natural (but somewhat informal)
way to think about $M(P)$. Suppose that we
color all the points in $\Gamma(P)$ according to
the isometry class of rectangles they represent.
Then $M(P)$ is the number of points minus the
number of colors.
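This informal "points minus colors" description translates directly into a count over the multiset of side-length pairs, as in Equation \ref{coincidence}. A minimal sketch, with hypothetical shape data:

```python
from collections import Counter

def coincidence_count(shapes):
    """M = sum over distinct (X, Y) pairs of max(n - 1, 0): the number of
    inscribed rectangles minus the number of isometry classes
    ("points minus colors")."""
    counts = Counter(shapes)
    return sum(n - 1 for n in counts.values())

# Hypothetical side-length pairs of really distinct inscribed rectangles.
shapes = [(3.0, 1.0), (3.0, 1.0), (2.0, 2.0), (1.0, 4.0), (1.0, 4.0), (1.0, 4.0)]
assert coincidence_count(shapes) == 3   # 6 rectangles, 3 isometry classes
```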
\begin{theorem}
\label{main}
For each $P \in \Pi_N^*$ we have
$M(P) \geq 2(\Delta_+(P)-2)$.
\end{theorem}
When $P$ is an obtuse triangle we have
$M(P)=0$ and $\Delta_+(P)=2$, so the result
is sharp in a trivial way.
Some version of Theorem \ref{main} is
true for an arbitrary polygon, but here
we place a mild constraint so as to
make the proof easier. Let $P$ be a polygon.
We call a diameter $S$ of $P$
{\it tricky\/} if the endpoints
of $S$ are vertices of $P$ and
if at least one of the edges
of $P$ incident to $S$ is
perpendicular to $S$.
\begin{theorem}
\label{main2}
If $P$ has no tricky diameters,
$M(P) \geq \frac{1}{16}(\Delta_+(P)-2)$.
\end{theorem}
The rest of the paper is devoted to proving the
results above.
\newpage
\section{The Integral Formula}
\subsection{The Differential Version}
Let $J$ be a piecewise smooth Jordan loop and
let $R$ be a labeled rectangle that graces $J$.
For each $j=1,2,3,4$ we let
$A_j$ denote the signed area of the region $R_j^*$ bounded
by the segment $\overline{R_{j}R_{j+1}}$ and
the arc of $J$ that connects $R_j$ to
$R_{j+1}$ and is between these two points in
the counterclockwise order. Figure 3 shows a
simple example. The signs are taken so that the signed areas
are positive in the convex case, and then in
general we define the signs so that the signed
areas vary continuously.
\begin{center}
\resizebox{!}{1.9in}{\includegraphics{fig3.eps}}
\newline
{\bf Figure 3:\/} The curve $J$, the rectangle $R$
and the regions $R_j^*$ for $j=1,2,3,4$.
\end{center}
Assuming that $J$ is fixed, we introduce the
quantity
\begin{equation}
A(R)=(A_1+A_3)-(A_2+A_4).
\end{equation}
We also have the point $(X,Y) \in \R^2$, where
\begin{equation}
X={\rm length\/}(\overline{R_1R_2}), \hskip 15 pt
Y={\rm length\/}(\overline{R_2R_3}).
\end{equation}
Assuming that we have a piecewise smooth
path $t \to R_t$ of rectangles gracing $J$,
we have the two quantities
\begin{equation}
A_t=A(R_t), \hskip 30 pt
(X_t,Y_t)=(X(R_t),Y(R_t)).
\end{equation}
If $t$ is a point of differentiability, we may
take derivatives of all these quantities.
Here is the main formula.
\begin{equation}
\frac{dA}{dt}=Y \frac{dX}{dt} - X \frac{dY}{dt}.
\end{equation}
It suffices to prove this result for $t=0$.
This formula is rotation invariant, so for
the purposes of derivation, we rotate the
picture so that the first side of $R_0$ is
contained in a horizontal line, as shown
in Figures 3 and 4. When we differentiate,
we evaluate all derivatives at $t=0$.
We write
\begin{equation}
\frac{dR_j}{dt}=(V_j,W_j).
\end{equation}
Up to second order, the region $R_1^*(t)$
is obtained by adding a small quadrilateral
with base $X_0$ and adjacent sides
parallel to $t(V_1,W_1)$ and $t(V_2,W_2)$.
Up to second order, the area of this
quadrilateral is
$$\frac{X(W_1+W_2)}{2}.$$
\begin{center}
\resizebox{!}{2in}{\includegraphics{fig4.eps}}
\newline
{\bf Figure 4:\/} The change in area.
\end{center}
From this equation, we conclude that
\begin{equation}
\frac{dA_1}{dt}=-\frac{X(W_1+W_2)}{2}.
\end{equation}
We get the negative sign because the area
of the region increases when $W_1$ and $W_2$
are negative.
A similar derivation gives
\begin{equation}
\frac{dA_3}{dt}=+\frac{X(W_3+W_4)}{2}.
\end{equation}
Adding these together gives
$$
\frac{dA_1}{dt}+\frac{dA_3}{dt}=
X \times \bigg[\frac{W_3-W_1}{2}\bigg] +
X \times \bigg[\frac{W_4-W_2}{2}\bigg]=$$
\begin{equation}
\label{term1}
-X \times \bigg[\frac{1}{2}\frac{dY}{dt}\bigg]
-X \times \bigg[\frac{1}{2}\frac{dY}{dt}\bigg]=
-X \frac{dY}{dt}.
\end{equation}
A similar derivation gives
\begin{equation}
\frac{dA_2}{dt}=-\frac{Y(V_2+V_3)}{2}, \hskip 30 pt
\frac{dA_4}{dt}=+\frac{Y(V_4+V_1)}{2}.
\end{equation}
Adding these together gives
\begin{equation}
\label{term2}
\frac{dA_2}{dt}+\frac{dA_4}{dt}=
-Y \frac{dX}{dt}.
\end{equation}
Subtracting Equation \ref{term2} from
Equation \ref{term1} gives
\begin{equation}
\label{diff}
\frac{dA}{dt}=-X \frac{dY}{dt}+Y \frac{dX}{dt},
\end{equation}
as claimed.
\subsection{The Integral Version}
Let $\omega=-XdY+YdX$. Here we think
of $\omega$ as a $1$-form. Suppose that
we have parameterized our curve
of rectangles so that the parameter
$t$ runs from $0$ to $1$. Integrating
Equation \ref{diff} over the piecewise
smooth path, we see that
\begin{equation}
\label{int}
A_1-A_0=\int_Z \omega.
\end{equation}
Here $Z$ is the shape curve associated to
the path of rectangles. We can interpret
this integral geometrically. Letting $O=(0,0)$,
consider the
closed loop
\begin{equation}
Z'=\overline{O, Z_0} \cup Z \cup \overline {Z_1,O}.
\end{equation}
Since $\omega$ vanishes along lines through the origin,
the segments $\overline{O, Z_0}$ and $\overline{Z_1, O}$ contribute nothing, and we see that
\begin{equation}
A_1-A_0=\int_Z\omega=\int_{Z'} \omega= -\int \int_{\Omega} 2dxdy = -2\ {\rm area\/}(\Omega).
\end{equation}
Here $\Omega$ is the region bounded by $Z'$.
The last line of the equation refers to the signed area of $\Omega$.
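Since $\omega = -X\,dY + Y\,dX$ is linear along each straight edge, the identity $\int_{Z'} \omega = -2\,{\rm area}(\Omega)$ can be checked exactly on polygonal loops. The following is a small numeric sanity check of the formula, not part of the proof; on the segment from $(x_1,y_1)$ to $(x_2,y_2)$ the integral of $\omega$ evaluates to $x_2 y_1 - x_1 y_2$:

```python
def omega_integral(poly):
    """Exact line integral of omega = -X dY + Y dX around a closed polygon."""
    n = len(poly)
    return sum(poly[(i + 1) % n][0] * poly[i][1]
               - poly[i][0] * poly[(i + 1) % n][1]
               for i in range(n))

def signed_area(poly):
    """Shoelace formula for the signed area of a closed polygon."""
    n = len(poly)
    return 0.5 * sum(poly[i][0] * poly[(i + 1) % n][1]
                     - poly[(i + 1) % n][0] * poly[i][1]
                     for i in range(n))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # counterclockwise unit square
assert signed_area(square) == 1.0
assert omega_integral(square) == -2.0 * signed_area(square)
```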
\newline
\noindent
{\bf Proof of Theorem \ref{sweep}:\/}
Suppose first that $\alpha$ is a piecewise smooth loop of rectangles
which grace the Jordan curve $J$. Then the curve $Z$ is already
a closed loop, and the signed area of the region bounded by
$Z$ is the same as the signed area bounded by $Z'$. Since
$A_1=A_0$ in this case, we see that $Z$ bounds a region of
signed area $0$.
If $\alpha$ is a null arc, then $R_0$ and $R_1$ both have
the same aspect ratio, either $0$ or $\infty$. In either
case, we have $A_0=A_1$. The common value is, up to sign,
the area of the region bounded by $J$. In this case,
$Z$ starts and stops on one of the coordinate axes, and
the region bounded by $Z$ has the same area as the
shape loop, $Z \cup \overline{Z_0Z_1}$. So, in this
case we also see that the shape loop bounds a region
of area $0$.
If $\alpha$ is a hyperbolic arc, then
$A_0=-A_1$ and both quantities up to sign
equal the area of the region bounded by $J$.
At the same time $Z'$ is precisely the
shape loop in this case. So, we see
that twice the area of the region bounded
by $J$ equals twice the area of the region
bounded by $Z$, up to sign. Cancelling
the factor of $2$ gives the desired result.
$\spadesuit$ \newline
\subsection{Generic Coincidences}
\label{generic}
In this section we prove
Theorem \ref{main}.
Suppose that $P$ is an $N$-gon that satisfies
the conclusions of Theorem \ref{polygon}.
This happens if $P \in \Pi_N^*$, but it might
happen more generally. In any case,
the space $\Gamma(P)$ of gracing
rectangles has $2\Delta_+(P)$ arc
components. There is a $\Z/4$ action
on $\Gamma(P)$ and this action
freely permutes the arc components
of $\Gamma(P)$.
We let $\delta=\Delta_+/2$
and we let $\alpha_1,...,\alpha_{\delta}$
denote a complete set of representatives
of these arc components modulo the
$\Z/4$ action. It suffices to show that
the sum in Equation \ref{coincidence}
is at least $\delta-1$ when we
restrict our attention to the components
just listed.
Consider those arcs on our list which
are null arcs. The shape
loops associated to each of these arcs
bound regions of area $0$ and hence
the corresponding loop has a double point.
Each double point corresponds to
a distinct pair that adds $1$ to the
total count for $M(J)$. The remaining
rectangle coincidences involve rectangles
not associated to these arcs or to
their images under the $\Z/4$ action.
Now consider those arcs on our
list which are hyperbolic arcs
whose shape loops are not
embedded. In exactly the same way
as above, each of these arcs
contributes $1$ to the count for
$M(J)$ and the rectangle pairs
involved are distinct from the
ones we have already considered.
Again, the remaining rectangle
coincidences involve rectangles
not associated to these arcs
or to their images under the $\Z/4$ action.
\newline
\newline
{\bf Remark:\/}
Before we move on to the last
case, we mention that the
count above might be an under-approximation,
even when there is just one double
point per shape loop considered.
Consider the simple situation
where there are just $2$ null
arcs. It might happen that the
rectangle pairs corresponding
to these $2$ arcs are congruent
to each other. This would give
us $4$ congruent gracing
rectangles and would contribute
$3$ rather than $2$ to the
total count.
\newline
Finally, consider the $d$
hyperbolic arcs on our list which
have embedded shape loops. If
$\alpha_1$ and $\alpha_2$ are
two such arcs, then
$Z(\alpha_1)$ and $Z(\alpha_2)$ are
two closed loops which bound the
same area. If these loops did not
intersect in the positive quadrant,
then either the region bounded
by $Z(\alpha_1)$ would strictly
contain the region bounded by
$Z(\alpha_2)$ or the reverse.
This contradicts the fact that
these two regions have the same
area. Hence $Z(\alpha_1)$ and
$Z(\alpha_2)$ intersect in
the positive quadrant, and the
intersection point corresponds
to a coincidence involving a
rectangle associated to
$\alpha_1$ and a rectangle
associated to $\alpha_2$.
Call this the {\it intersection property\/}.
We label so that
$\alpha_1,...,\alpha_d$ are the
hyperbolic arcs having embedded
shape loops.
We argue by induction that
these $d$ arcs contribute at
least $d-1$ to the count for
$M(J)$. If $d=1$ then there
is nothing to prove.
By induction, rectangle
coincidences associated to the arcs
$\alpha_1,...,\alpha_{d-1}$
contribute $d-2$ to the count
for $M(J)$.
By the intersection property,
$Z(\alpha_d)$ intersects each
of the other shape loops. Since
$\Gamma(J)$ is a manifold, there
is at least one new rectangle
involved in our count, namely
one that corresponds to a point
on $Z(\alpha_d)$ that also lies
on some other shape loop.
The corresponding rectangle
adds $1$ to the count in
Equation \ref{coincidence},
one way or another. So,
all in all, we add
$d-1$ to the count for
$M(J)$ by considering
the rectangle coincidences
associated to $\alpha_1,...,\alpha_d$.
This proves what we want.
\subsection{A Non-Squeezing Result}
\label{squeeze}
Here we explain how the invariant above
implies one of our main results in
[{\bf S2\/}]. Really, it is the
same proof. The material
in this section plays no role in the
rest of the paper.
Suppose that $\gamma_1$ and $\gamma_2$ are
$2$ piecewise smooth curves which are
disjoint. Suppose also that at each
end, $\gamma_j$ coincides with a
line segment. Finally suppose that
these line segments are parallel
at each end, so to speak. Figure 5 shows
what we mean.
\begin{center}
\resizebox{!}{1.5in}{\includegraphics{fig5.eps}}
\newline
{\bf Figure 5:\/} Sliding a square along a track.
\end{center}
Suppose that we have a piecewise smooth family
of rectangles, all having the same
aspect ratio, that starts at one end,
finishes at the other, and remains inscribed
in $\gamma_1 \cup \gamma_2$ the whole time.
We imagine $\gamma_1 \cup \gamma_2$ as being
a kind of track that the rectangle slides
along (changing its size and orientation
along the way).
Figure 5 shows an example in which the
rectangle is a square. In Figure 5
we show the starting rectangle $R_0$, the
ending rectangle $R_1$, and some $R_t$ for
$t \in (0,1)$. This is just a hypothetical
example.
We can complete the union $\gamma_1 \cup \gamma_2$ to
a piecewise smooth Jordan loop by extending
the ends of one or both of these curves,
if necessary, and then
dropping perpendiculars. Let $\Omega$ be
the region bounded by this loop. The shape curve
associated to our path lies on a line through
the origin, and our $1$-form $\omega$ vanishes
on such lines. Referring to the invariant
above, we therefore have $A(R_0)=A(R_1)$. But, after
suitably labeling the rectangles in our family, we have
$$A(R_j)={\rm area\/}(\Omega)-{\rm area\/}(R_j).$$
Hence $R_0$ and $R_1$ have the same area.
Since they also have the same aspect ratio,
they have the same side-lengths.
This is to say that the perpendicular
distance between the end of $\gamma_1$
and the end of $\gamma_2$ is the same
at either end. This is a kind of
non-squeezing result.
In particular, our result shows that
Figure 5 depicts an impossible situation.
There is no way to slide a square
continuously through the shown ``track''
because the widths are different at the
$2$ ends.
\newpage
\section{The General Case}
\subsection{Rectangles Inscribed in Lines}
\label{conn}
The goal of this chapter is to prove
Theorem \ref{main2}. We plan to
take a limit of the result in
Theorem \ref{main}.
Let $E=(E_1,E_2,E_3,E_4)$ be a
collection of $4$ line segments,
not necessarily distinct.
We say that a rectangle $R$
{\it graces\/} $E$ if the vertices
$R_1,R_2,R_3,R_4$
of $R$ go in cyclic order,
either clockwise or counterclockwise,
and $R_i \in E_i$ for all $i=1,2,3,4$.
We allow $R$ to be degenerate.
Let $\Gamma(E) \subset (\R^2)^4$ denote the set of
rectangles gracing $E$.
We call a point $p \in \Gamma(E)$
{\it degenerate\/} if every neighborhood
of $p$ in $\Gamma(E)$ contains points
corresponding to infinitely many
distinct but isometric rectangles.
We call $E$ {\it degenerate\/} if
there is some $p \in \Gamma(E)$ which
is degenerate.
\begin{lemma}
Suppose that $E$ is nondegenerate.
$\Gamma(E)$ is the intersection of
a conic section with a rectangular solid.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
Let $E=(E_1,E_2,E_3,E_4)$ be a
$4$-tuple of segments. We rotate so
that none of the segments is vertical,
so that we may parameterize the
lines containing our segments
by their first coordinates.
Let $L_j$ be the line extending $E_j$.
We identify $\R^3$ with triples
$(x_1,x_2,x_3)$ where $p_j=(x_j,y_j) \in L_j$.
We let $p_4$ be such that
$p_1+p_3=p_2+p_4$. In other words,
we choose $p_4$ so that
$(p_1,p_2,p_3,p_4)$ is a
parallelogram.
Let $\Gamma(L)$ denote the set of
rectangles gracing $L$. We describe
the subset $\Gamma'(L) \subset \R^3$ corresponding
to $\Gamma(L)$. The actual
set $\Gamma(L)$ is the image of
$\Gamma'(L)$ under a linear map from
$\R^3$ into $(\R^2)^4$.
The condition that $p_4 \in L_4$ is a linear
condition. Therefore, the set
$(x_1,x_2,x_3) \in \R^3$ corresponding to
parallelograms inscribed in $L$ is a
hyperplane $\Pi$. The condition that
our parallelogram is a rectangle is
$(p_3-p_2) \cdot (p_1-p_2)=0.$
This condition defines a quadric
hypersurface $H$ in $\R^3$.
The intersection $\Gamma'(L)=\Pi \cap H$
corresponds to the
inscribed rectangles.
$\Pi \cap H$ is either a plane
or a conic section. In the
former case, $E$ is degenerate.
In the latter case,
$\Pi \cap H$ is either an
analytic curve or two crossing
lines. Since $\Gamma(L)$ is
the image of $\Gamma'(L)$ under
a linear map, the set
$\Gamma(L)$ is also a conic section.
Let
$[E]=E_1 \times E_2 \times E_3 \times E_4.$
Then $[E]$ is a rectangular solid. We have
$\Gamma(E)=\Gamma(L) \cap [E]$.
$\spadesuit$ \newline
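The construction in this proof lends itself to a direct numerical experiment. The sketch below is illustrative only: the four lines (chosen here so that the unit square is one gracing rectangle) and all names are our own assumptions, not data from the paper. It imposes the linear condition that $p_4=p_1-p_2+p_3$ lies on the fourth line, then finds roots of the quadratic rectangle condition $(p_3-p_2)\cdot(p_1-p_2)=0$:

```python
import numpy as np

# Hypothetical example: four non-vertical lines L_j: y = a_j x + b_j,
# chosen so that the unit square graces them (an assumption for illustration).
lines = [(1.0, 0.0), (-1.0, 1.0), (2.0, -1.0), (3.0, 1.0)]  # (slope, intercept)

def point(j, x):
    """Point on line L_j parameterized by its first coordinate x."""
    a, b = lines[j]
    return np.array([x, a * x + b])

def solve_x3(x1, x2):
    """Linear condition: p4 = p1 - p2 + p3 lies on L_4 (the hyperplane Pi)."""
    a1, b1 = lines[0]; a2, b2 = lines[1]; a3, b3 = lines[2]; a4, b4 = lines[3]
    num = -((a1 - a4) * x1 - (a2 - a4) * x2 + (b1 - b2 + b3 - b4))
    return num / (a3 - a4)           # assumes a3 != a4

def rect_residual(x1, x2):
    """Quadric condition (p3 - p2).(p1 - p2) = 0 (the hypersurface H)."""
    p1, p2 = point(0, x1), point(1, x2)
    p3 = point(2, solve_x3(x1, x2))
    return np.dot(p3 - p2, p1 - p2)

found = []
for x1 in np.linspace(-1.0, 2.0, 31):
    # rect_residual is quadratic in x2; recover it exactly from 3 samples.
    xs = np.array([-10.0, 0.0, 10.0])
    coeffs = np.polyfit(xs, [rect_residual(x1, x) for x in xs], 2)
    for r in np.roots(coeffs):
        if abs(r.imag) < 1e-9:
            found.append((x1, r.real))
```

Each real root found this way is a point of the conic section $\Gamma'(L)=\Pi \cap H$ described in the proof; intersecting with the rectangular solid $[E]$ would then recover $\Gamma(E)$.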
\begin{lemma}
When $E$ is non-degenerate,
$\Gamma(E)$ has at most
$2^8$ connected components.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
We use the notation from the previous
lemma. Note $[E]$ is
bounded by $8$ hyperplanes and
a conic section either lies in a
hyperplane or intersects it at
most twice. So, each boundary
component of $[E]$ cuts
$\Gamma(L)$ into at most $2$ components.
$\spadesuit$ \newline
We call a polygon $P$
{\it degenerate\/} if some
$4$-tuple of edges associated to
$P$ is degenerate.
Otherwise we call $P$ {\it non-degenerate\/}.
\begin{lemma}
Let $P$ be a non-degenerate polygon.
The space $\Gamma(P)$ is a graph
having analytic edges and degree
at most $32$.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
Each rectangle $R$ can grace
at most $16$ different
$4$-tuples of edges of $P$, because each
vertex can lie in at most $2$ segments.
Hence, each $p \in \Gamma(P)$ lies
in the intersection of at most
$16$ distinct $\Gamma(E)$.
Since $\Gamma(E)$ is the intersection
of a conic section with a rectangular
solid, $\Gamma(E)$ is a graph with
analytic edges and maximum degree $4$.
From what we have said above,
$\Gamma(P)$ is a graph with
analytic edges and maximum degree $64=16 \times 4$.
We can cut down by a factor of
$2$ as follows. The only time a
point of $\Gamma(P)$ lies in more
than $8$ spaces $\Gamma(E)$ is when
$p$ corresponds to a gracing
rectangle whose every vertex is a
vertex of $P$. In this case,
$p$ is a vertex of $[E]$ for each
$4$-tuple $E$ that the rectangle
graces. But then
$p$ has degree at most $2$ in each
$\Gamma(E)$. So, this exceptional
case produces vertices of degree
at most $32$.
$\spadesuit$ \newline
\subsection{The Inscribing Sequence}
A generic polygon $P$ satisfies the conclusions
of Theorem \ref{main}. For such polygons,
any $4$-tuple which supports a
gracing rectangle is nice.
We label the sides of
$P$ by $\{1,...,N\}$.
Let $\Omega$ denote the set of ordered
$4$-tuples of elements of $\{1,...,N\}$,
not necessarily distinct.
Consider some embedded arc $\alpha \subset
\Gamma(P)$ of inscribed rectangles.
$\alpha$ defines a finite sequence
$\Sigma$ of elements of $\Omega$.
We simply note which
edges of $P$ contain any given
rectangle and then we order the
elements of $\Omega$ we get.
We call $\Sigma$ the
{\it inscribing sequence\/}
for $\alpha$.
\begin{lemma}
\label{inscribing}
$\Sigma$ has length at most $\kappa N^4$.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
If $\Sigma$ had length longer
than this, then we could find
a single $4$-tuple $E$ of edges
such that the subset of
$\alpha$ supported by $E$
has at least $82$ components.
In other words the sequence
would have to return to the
$4$-tuple describing $E$
at least $82$ times. The
arcs of
$\Gamma(E)$ corresponding to
these returns are disconnected
from each other, because otherwise
$\alpha$ would be a loop rather
than an arc. This contradiction
proves our claim.
$\spadesuit$ \newline
\subsection{Stable Diameters}
For the rest of the chapter, we
use the word {\it diameter\/} to
mean a positively oriented diameter,
in the sense discussed in the introduction.
Let $P$ be a polygon and let
$S$ be a diameter of $P$. We
call $S$ {\it stable\/} if
\begin{itemize}
\item At least one endpoint of $S$
is a vertex of $P$.
\item If $v$ is an endpoint of $S$
and $e$ is an edge of $P$ incident
to $v$, then $S$ and $e$ are
not perpendicular.
\end{itemize}
\begin{lemma}
Suppose that $P$ has no tricky diameters.
If $P$ has an unstable diameter, then
$P$ is degenerate.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
This is a case-by-case analysis.
Suppose first that $P$ has a diameter $S$ whose
endpoints are not vertices of $P$.
Then the endpoints of $S$ lie in
the interior of a pair of parallel
edges of $P$. But then $P$ is degenerate.
Suppose that $P$ has a diameter $S$ having
one endpoint which is a vertex $v$ of $P$.
The other endpoint of $S$ lies in the
interior of an edge $e'$ of $P$. By
definition $e'$ and $S$ are perpendicular. If
$S$ is not stable, then one of the edges
$e$ of $P$ is perpendicular to $S$ and
hence parallel to $e'$. But then we can
shift $S$ over a bit and produce a diameter
of $P$ whose endpoints lie in the interior
of $e$ and $e'$. Again, $P$ is
degenerate. The remaining unstable
diameters are (in the technical sense) tricky.
$\spadesuit$ \newline
In view of the preceding result, it suffices to
prove Theorem \ref{main2} under the assumption
that $P$ is non-degenerate and has all stable
diameters.
\subsection{Limits of Diameters}
Let $P$ be an $N$-gon with
stable diameters.
We can find a sequence
$\{P_n\}$ of generic
$N$-gons converging to $P$. Each
$P_n$ satisfies the conclusions
of Theorem \ref{main}.
\begin{lemma}
Let $D$ be a diameter of $P$.
The polygon $P_n$ has a diameter $D_n$
such that $\{D_n\}$ converges to $D$.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
Since $P$ only has stable
diameters, there are
just $2$ cases to consider.
Suppose first that $D$ connects
two vertices $v$ and $w$ of $P$.
The polygon
$P_n$ has vertices $v_n$ and $w_n$ which
converge respectively to $v$ and $w$
as $n \to \infty$.
Let $D_n$ be the chord whose endpoints
are $v_n$ and $w_n$. By construction,
$D_n$ converges to $D$ and
for large $n$ this chord is a diameter.
Suppose now that $D$ connects a
vertex $v$ to a point in the interior
of an edge $e$.
Let $v_n$ and $e_n$
be the corresponding vertex and
edge of $P_n$. Since $v_n \to v$
and since $e_n \to e$ we see that
eventually there is a chord
$D_n$ that has $v_n$ as one endpoint
and has the other endpoint perpendicular
to $e_n$. By construction $D_n \to D$
and eventually $D_n$ is a diameter
of $P_n$.
$\spadesuit$ \newline
\begin{lemma}
If $\{D_n\}$ is a sequence
of diameters of $P_n$, then
$\{D_n\}$ converges on a subsequence to a diameter of $P$.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
Given the
sequence $\{D_n\}$ we can pass to a
subsequence so that the endpoints of
these diameters converge. The limiting
segment $D$, provided that it has nonzero
length, must be a diameter of $P$ because
the required condition is a closed condition.
We just have to see that the length
of $\{D_n\}$ does not shrink to $0$.
Note that $D_n$ is at least as long
as the shortest diameter of $P_n$.
Furthermore, there is a positive
lower bound to the length of any
edge of $P_n$, independent of $n$.
So, if the length of $D_n$ converges
to $0$, there are two points on
non-adjacent edges of $P_n$ whose distance
converges to $0$. This contradicts
the fact that $\{P_n\}$ converges
to the embedded polygon $P$.
$\spadesuit$ \newline
We think of a diameter as a subset of
$(\R^2)^2$, and in this way we can
talk about the distance between
two diameters of $P_n$.
\begin{lemma}
Suppose that $\{D_n\}$ and
$\{D_n'\}$ are two sequences
of diameters such that the
distance from $D_n$ to $D_n'$
converges to $0$ as $n \to \infty$.
Then $D_n=D_n'$ for $n$ sufficiently
large.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
Let $v_n$ and $w_n$ be the
endpoints of $D_n$ and
let $v_n'$ and $w_n'$ be
the endpoints of $D_n'$.
We label so that
$\|v_n-v_n'\|$ and
$\|w_n-w_n'\|$ both tend to $0$.
In all cases, we can re-order so
that $v_n$ is a vertex of $P_n$
and $v_n'$ is not. In other
words, $v_n'$ lies in the interior
of an edge $e_n'$ of $P_n$.
Since $v_n'$ converges to $v_n$,
a vertex of $P_n$, the segment
$e_n'$ becomes perpendicular to
$D_n'$ in the limit. This contradicts
the fact that $P$ has only stable
diameters.
$\spadesuit$ \newline
\begin{corollary}
\label{stable}
For $n$ sufficiently large, there
is a bijection between the diameters
of $P_n$ and the diameters of $P$
such that each diameter of $P$
is matched with a sequence of diameters
of $P_n$ which converges to it.
\end{corollary}
{\bf {\medskip}{\noindent}Proof: }
This is an immediate consequence of the
preceding $3$ lemmas.
$\spadesuit$ \newline
We truncate our sequence of polygons so that
the last corollary holds for all $n$.
For each $n$, these diameters
are paired together by the arc components
of the manifold $\Gamma(P_n)$. We pass to
a further subsequence so that the
same pairs arise for each $n$. This gives
us a well-defined way to pair the
diameters of $P$. We say that two
diameters of $P$ are
{\it partners\/} if and only if the
corresponding diameters of $P_n$ are
paired together.
\begin{lemma}
\label{connect}
Each pair of partner diameters in
$P$ is connected by a piecewise smooth
path in $\Gamma(P)$.
\end{lemma}
{\bf {\medskip}{\noindent}Proof: }
Let $A$ and $B$ be two partner diameters
of $P$. Let $A_n$ and $B_n$ be the
corresponding diameters of $P_n$.
Let $\alpha_n$ be the arc in
$\Gamma(P_n)$ which connects
$A_n$ and $B_n$.
To understand the convergence of
$\{\alpha_n\}$ we
work in the Hausdorff topology
on the set of compact subsets of
$(\R^2)^4$. This ambient
space contains $\Gamma(J)$ for
any Jordan loop.
We consistently label the
sides of $P_n$ and $P$.
Let $\Sigma_n$ be the
inscribing sequence of
$\alpha_n$. By Lemma
\ref{inscribing} there is
a uniform upper bound of
$\kappa N^4$ on the length
of $\Sigma_n$. Therefore, we
may pass to a subsequence
so that the inscribing
sequence associated to
$\alpha_n$ is independent of $n$.
We write
$$\alpha_n=\alpha_{n1},...,\alpha_{nk},$$
where $\alpha_{nj}$ is the arc of
rectangles corresponding to the
$j$th element of the sequence in
$\Omega$. Here $k$ is the length
of the inscribing sequence.
We pass to a subsequence so that
$\{\alpha_{nj}\}$ converges in
the Hausdorff topology to a
limit set $\alpha_j$.
The set $\alpha_j$ is connected
and contained in a subset of
$\Gamma(E)$, where $E$ is the
$4$-tuple of edges corresponding
to the $j$th element of $\Omega$.
From the discussion in \S \ref{conn}, we see that
$\alpha_j$ is a compact, connected
algebraic arc. By construction
$\alpha_j$ and $\alpha_{j+1}$ share
one common point for each $j$.
This common point is the limit of
the sequence $\{\alpha_{nj} \cap \alpha_{n,j+1}\}$.
The description above reveals the union
$\alpha=\alpha_1 \cup ... \cup \alpha_k$
to be a piecewise smooth arc
connecting the two diameters $A$ and $B$.
$\spadesuit$ \newline
\subsection{The End of the Proof}
Let $P$ be a polygon.
We still assume that $P$ has
stable diameters, so that the
results from the previous section apply.
We know from Lemma \ref{connect}
that the diameters of $P$
are paired in some way, and
each pair is connected by some
piecewise smooth path of
gracing rectangles. We can
erase any loops that these
paths have and thereby
assume that all these paths
are embedded. Next, we can assume that
every $2$ arcs in the collection
intersect each other in at most
one point. Otherwise, we can
do a splicing operation to decrease
the number of intersection points.
(See Figure 6 below.)
The splicing operation may change
the way that the diameters are
paired up, but this doesn't bother us.
Finally, we
can make our choice of connectors
invariant under the
$\Z/4$ re-labelling action.
As in the proof of
Theorem \ref{main} we let
$\delta=\Delta_+(P)/2$ and we
choose a collection
$\alpha_1,...,\alpha_{\delta}$ of
connecting arcs which has
one representative in each
orbit of the $\Z/4$ action.
Suppose that our collection of
paths contains two hyperbolic
arcs $\alpha_1$ and $\alpha_2$
that intersect. Each path
connects a (degenerate)
rectangle of aspect ratio $0$
to a (degenerate) rectangle
of aspect ratio $\infty$.
By splicing the paths together
and then re-dividing them,
we produce $2$ new paths
$\beta_1$ and $\beta_2$ such
that each $\beta_j$ connects
two degenerate rectangles of
the same aspect ratio.
In other words, we can do
a cut-and-paste operation
at an intersection point
to replace the two hyperbolic
arcs by null arcs. If
necessary, we can erase any
loops created in this process.
Figure 6 shows this operation.
\begin{center}
\resizebox{!}{1.2in}{\includegraphics{fig6.eps}}
\newline
{\bf Figure 6:\/} The splicing operation.
\end{center}
Suppose first that there are $\delta/2$ arcs
in our collection that are hyperbolic
arcs. Then this collection is
an embedded $1$-manifold contained in
$\Gamma(P)$. Just using these arcs,
the same argument as in the
proof of Theorem \ref{main}
shows that
$$M(P) \geq \frac{1}{2}\left(\Delta_+(P)-2\right).$$
That is, we get the same answer
as in Theorem \ref{main} except
for the factor of $1/2$.
Now suppose that there are at least
$\delta/2$ null arcs. For the rest
of the proof we just deal with these
null arcs.
Let $\Gamma_1(P)$ denote the
union of these null arcs.
We know that $\Gamma_1(P)$ is a
subset of $\Gamma(P)$ and
also a graph with algebraic
edges and valence
at most $32$.
Let $\widehat \Gamma_1$ denote the
formal disjoint union of these
embedded null arcs. The space
$\widehat \Gamma_1$ is a $1$-manifold,
just a union of arcs, and the
``forgetful map'' $\phi: \widehat \Gamma_1 \to \Gamma_1$
is at most $16$ to $1$.
The same argument as in the proof
of Theorem \ref{generic} says that
there are $\delta$ distinct points
in $\widehat \Gamma_1$, two per
arc, corresponding to rectangle
coincidences. Let $S$ be the
set of these points. The image
$\phi(S)$ contains at least $\delta/16$
points. For each of these points,
there is a second point corresponding
to an isometric rectangle. We know
this because the map
$\phi$ is injective on each null arc,
and each null arc contains $2$
points of $S$. So, we can match
our $\delta/16$ points into
$\delta/32$ distinct pairs of
points, corresponding to pairs
of isometric but distinct
rectangles in $\Gamma(P)$.
This adds a count of $\delta/32$ to
$M(P)$. To make the comparison with
Theorem \ref{main} cleaner, we work
with $(\delta-1)/32$ instead.
In the case at hand, we get the same bound
as in Theorem \ref{main} except
for the factor of $1/32$.
Going back to the count of labeled
rectangles, we have
$$M(P) \geq \frac{1}{16}(\Delta_+(P)-2).$$
This completes the proof of
Theorem \ref{main2}.
\newpage
\section{Four Lines and a Rectangle}
\subsection{The Generic Case}
\end{document}
\newpage
\section{Four Rectangles and a Line}
In this chapter we present some of
the results from [{\bf S3\/}] in
abbreviated form.
\subsection{Statement of Results}
\section{References}
[{\bf AA\/}] A. Akopyan and S. Avvakumov, {\it Any cyclic quadrilateral can be
inscribed in any closed convex smooth curve\/},
arXiv:1712.10205 (2017)
\newline
\newline
[{\bf ACFSST\/}] J. Aslam, S. Chen, F. Frick, S. Saloff-Coste,
L. Setiabrata, H. Thomas, {\it Splitting Loops and Necklaces:
Variants of the Square Peg Problem\/}, arXiv:1806.02484 (2018)
\newline
\newline
[{\bf CH\/}] D. Hilbert and S. Cohn-Vossen, {\it Geometry and the Imagination\/},
\newline
Chelsea Publishing Company (American Math Society), 1990
\newline
\newline
[{\bf H\/}] C. Hugelmeyer,
{\it Every Smooth Jordan Curve has an Inscribed
Rectangle with Aspect Ratio Equal to $\sqrt 3$\/},
arXiv:1803.07417 (2018)
\newline
\newline
[{\bf M\/}] B. Matschke, {\it A Survey on the
Square Peg Problem\/}, Notices of the A.M.S.
{\bf Vol 61.4\/}, April 2014, pp 346-351.
\newline
\newline
[{\bf S1\/}] R. E. Schwartz, {\it A Trichotomy for Rectangles Inscribed in
Jordan Loops\/}, preprint, 2018
\newline
\newline
[{\bf S2\/}] R. E. Schwartz, {\it Four Lines and a Rectangle\/},
preprint, 2018
\newline
\newline
[{\bf Ta\/}] T. Tao, {\it An Integration Approach
to the Toeplitz Square Peg Conjecture\/},
\newline
Forum of Mathematics, Sigma, 5 (2017)
\newline
\newline
[{\bf W\/}] S. Wolfram, {\it The Mathematica Book\/}, 4th ed., Wolfram Media/Cambridge
University Press, Champaign/Cambridge (1999)
\begin{center}
\section*{Some results on the Signature and Cubature of the Fractional Brownian motion for $H>\frac{1}{2}$}
\subsection*{Riccardo Passeggeri\footnote[1]{Imperial College London, UK. Email: riccardo.passeggeri14@imperial.ac.uk}}
\end{center}
\begin{abstract}
In this work we present different results concerning the signature and the cubature of fractional Brownian motion (fBm). The first result regards the rate of convergence of the expected signature of the linear piecewise approximation of the fBm to its exact value, for a value of the Hurst parameter $H\in(\frac{1}{2},1)$. We show that the rate of convergence is given by $2H$. We believe that this rate is sharp as it is consistent with the result of Ni and Xu \cite{NiXu}, who showed that the sharp rate of convergence for the Brownian motion (\textit{i.e.} fBm with $H=\frac{1}{2}$) is given by $1$. The second result regards the bound of the \textit{coefficient} of the rate of convergence obtained in the first result. We obtain a uniform bound for the coefficient for the $2k$-th term of the signature of $\frac{\tilde{A}k(2k-1)}{(k-1)!2^{k}}$, where $\tilde{A}$ is a finite constant independent of $k$. The third result regards the sharp decay rate of the expected signature of the fBm. We obtain a sharp bound for the $2k$-th term of the expected signature of $\frac{1}{k!2^{k}}$. The last results concern the cubature method for the fBm for $H>\frac{1}{2}$. In particular, we develop the framework of the cubature method for fBm, provide a bound for the approximation error in the general case, and obtain the cubature formula for the fBm in a particular setting. These results extend the work of Lyons and Victoir \cite{LV}, who focused on the Brownian motion case.
\\
\\
\textbf{Key words:} fractional Brownian motion, signature, rate of convergence, sharp decay rate, cubature method.
\end{abstract}
\section{Introduction}
The signature of a $d$-dimensional fractional Brownian motion (fBm) is a sequence of iterated Stratonovich integrals along the paths of the fBm; it is an object taking values in the tensor algebra over $\mathbb{R}^{d}$.
\\
Signatures were first studied by K. T. Chen in the 1950s in a series of papers \cite{Chen1}, \cite{Chen} and \cite{Chen3}. In the last twenty years the attention devoted to signatures has increased rapidly. This has been caused by the pivotal role they play in rough path theory, a field developed in the late nineties by Terry Lyons, culminating in the paper \cite{L}, which is also at the base of the newly developed theory of regularity structures \cite{Hairer}. The signature of a path summarises the essential properties of that path, making it possible to study SPDEs driven by that path.
\\
We remark also on the increasing importance of signatures in the field of machine learning; see for example \cite{machine} among many others. In light of this increasing importance, and of the fact that rates of convergence are crucial for evaluating algorithmic efficiency, our results may have a \textit{direct} impact on real-world applications.
In 2015 Hao Ni and Weijun Xu \cite{NiXu} computed the sharp rate of convergence for expected Brownian signatures. They obtained a rate of convergence of $1$; in formulas,
\begin{equation*}
\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{m,I}\right)\Bigg|\leq Cm^{-1},
\end{equation*}
where $C>0$, $\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)$ is the (truncated) expected signature of the Brownian motion and $\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{m,I}\right)$ is the (truncated) expected signature of the linear piecewise approximation of the Brownian motion with mesh size $\frac{1}{m}$. A more formal introduction is given in the next section.
\\
On the other hand, for the fBm no progress has been made in this direction. In particular, the rate of convergence for the expected signature of the fBm is not known for any value of the Hurst parameter $H\in(0,1)$. The first result of this article addresses this problem, obtaining the ``sharp" rate of convergence for $H\in(\frac{1}{2},1)$. In formulas,
\begin{equation}\label{prima}
\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{H,I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{H,m,I}\right)\Bigg|\leq C'm^{-2H},
\end{equation}
where $C'>0$, $\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{H,I}\right)$ and $\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{H,m,I}\right)$ are respectively the (truncated) expected signature of the fBm and of its linear piecewise approximation with mesh size $\frac{1}{m}$. To achieve this result, we used the results of Baudoin and Coutin in \cite{BauCou}, who computed the expected signature for the fractional Brownian motion for $H>\frac{1}{2}$ and also for small times for $H\in(\frac{1}{3},\frac{1}{2})$. We mention also the works \cite{1/4} and \cite{LVextension}, where further properties of the signature of the fBm are studied.
\\
In this work we focus on the weak rate of convergence and we refer to the work of Friz and Riedel \cite{FR} for the strong rate of convergence. It is possible to derive from their work a rate of $H$ for $H>\frac{1}{2}$ (see also Deya, Neuenkirch and Tindel \cite{Deya} for similar results), while here we obtain a weak convergence rate of $2H$.
The second result of this work regards the bound of the coefficient in the first result, namely $C'$ in $(\ref{prima})$. We obtain a uniform bound for the coefficient for the $2k$-th term of the signature of
\begin{equation*}
\frac{\tilde{A}k(2k-1)}{(k-1)!2^{k}},
\end{equation*}
where $\tilde{A}$ is a finite constant independent of $k$. This shows that, for a fixed linear piecewise approximation of the fBm, as the number of iterated integrals increases the difference between the expected signature of the fBm and that of its linear piecewise approximation goes to zero (and it goes fast).
For the third result we move away from the linear piecewise approximation and focus just on the expected signature. In \cite{CheLyo}, Chevyrev and Lyons showed that the expected signature has infinite radius of convergence, but did not establish a factorial decay. In this work we show that the expected signature has a factorial decay; indeed, the sharp bound for the $2k$-th term of the signature is simply given by $\frac{1}{k!2^{k}}$ for all $H\in(\frac{1}{2},1)$. In the $H > \frac{1}{2}$ case, our result gives an alternative proof, with sharper estimates, of the fact that the expected signature of the fractional Brownian motion has infinite radius of convergence, which by \cite{CheLyo} implies that the expected signature determines the signature in distribution. Our estimate is also sharper than the ones obtained by Friz and Riedel \cite{FR2}, from which it is possible to obtain a bound of $\frac{1}{(k/2)!}$, and by Neuenkirch, Nourdin, Rössler and Tindel \cite{NNRT}, who showed a bound of $\frac{C^{k}}{\sqrt{(2k)!}}$ (with $C>2$ and dependent on $H$). In formulas, our result is:
\begin{equation*}
\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{I}\right)\leq \dfrac{(t-s)^{2kH}}{k!2^{k}}.
\end{equation*}
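As a consistency check, which is not needed for the argument above: at $H=\frac{1}{2}$ this bound matches the classical formula for the expected Stratonovich signature of Brownian motion (the formula underlying the cubature method of \cite{LV}),
\begin{equation*}
\mathbb{E}\left(S(B)_{s,t}\right)=\exp\left(\frac{t-s}{2}\sum_{i=1}^{d}e_{i}\otimes e_{i}\right),
\end{equation*}
whose nonzero level-$2k$ coefficients, sitting on words of the form $(i_{1}i_{1}\cdots i_{k}i_{k})$, all equal $\frac{(t-s)^{k}}{k!2^{k}}=\frac{(t-s)^{2kH}}{k!2^{k}}$ with $H=\frac{1}{2}$.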
In 2003 Lyons and Victoir \cite{LV} developed a numerical method, called the cubature method on Wiener space, for approximating the solutions of parabolic PDEs and of general SDEs driven by Brownian motion. In the cubature method the main first step is to obtain the cubature formula, in which the truncated signature of a path (in the case of \cite{LV}, the Brownian motion) is matched to a finite weighted sum of Dirac delta measures applied to the iterated integrals of deterministic paths. In this work, we develop the framework of the cubature method for fBm, obtain the cubature formula for the fBm in a particular setting, and provide a bound for the approximation error in the general case.
This paper is structured in the following way. In section 2 we introduce some notation and state the main results formally. In section 3 we discuss preliminaries, involving definitions and the results of other authors. In sections 4, 5 and 6 we prove the first three main results of the article. In section 7 we discuss the cubature method for the fBm.
\section{Main Results}
In this section we introduce the main results of this paper. But first, we introduce some notation, which is in line with the one used in the papers of Baudoin and Coutin $\cite{BauCou}$, Lyons $\cite{L}$, and Lyons and Victoir $\cite{LV}$ and in the book by Lyons, Caruana, and L\'{e}vy $\cite{LCL}$.
The fractional Brownian motion is defined as follows.
\begin{defn}
Let $H$ be a constant belonging to $(0,1)$. A fractional Brownian motion (fBm) $(B^{H}(t))_{t\geq 0}$ of Hurst index $H$ is a continuous and centered Gaussian process with covariance function
\begin{equation*}
\mathbb{E}\left[B^{H} (t)B^{H} (s)\right]=\frac{1}{2}(t^{2H}+s^{2H}-|t-s|^{2H}).
\end{equation*}
\end{defn}
\noindent From now on we will denote $B:=B^{H}$. \\For $H=\frac{1}{2}$ the fBm is a Brownian motion (Bm). Further, the multi-dimensional fBm has coordinate components that are independent and identically distributed copies of a one-dimensional fBm. Moreover, recall that the fBm has the following two properties:
\\
$(i)\enspace \text{(scaling) for any $c>0$,}\enspace c^{H}B_{\cdot/c}\enspace\text{is a fBm,}$
\\
$(ii)\enspace \text{(stationary increments) for any $h>0$,}\enspace B_{\cdot+h}-B_{h}\enspace\text{is a fBm.}$\\\\
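As an illustration of this definition (not part of the paper's development), a fBm can be sampled on a finite grid directly from its covariance function via a Cholesky factorization. The grid, the value $H=0.7$, and all names below are assumptions made for the example:

```python
import numpy as np

H = 0.7                           # Hurst parameter in (1/2, 1); an arbitrary choice
t = np.linspace(0.01, 1.0, 50)    # time grid, avoiding t = 0

def fbm_cov(s, u, H):
    # R(s, u) = (1/2)(s^{2H} + u^{2H} - |s - u|^{2H})
    return 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))

# Covariance matrix on the grid, plus a tiny jitter for numerical stability.
R = fbm_cov(t[:, None], t[None, :], H)
L = np.linalg.cholesky(R + 1e-12 * np.eye(len(t)))

rng = np.random.default_rng(0)
path = L @ rng.standard_normal(len(t))   # one sample path of B^H on the grid
```

Note that the increment-variance identity $\mathbb{E}[(B^{H}(u)-B^{H}(s))^{2}]=|u-s|^{2H}$ follows directly from the covariance function, since $R(u,u)-2R(s,u)+R(s,s)=|u-s|^{2H}$.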
Now, we define the simplex $\Delta^{k}[s,t]$, where $T\in[0,\infty)$ and $s,t\in[0,T]$,
\begin{equation*}
\Delta^{k}[s,t]:=\{(t_{1},...,t_{k})\in[s,t]^{k}:t_{1}<...<t_{k}\}.
\end{equation*}
Further, we define the following iterated integrals. Let $I=(i_{1},...,i_{k})\in\{1,...,d\}^{k}$ be a word of length $k$; then
\begin{equation*}
\int_{\Delta^{k}[s,t]}dB^{I}:=\int_{s\leq t_{1}<...<t_{k}\leq t}dB^{i_{1}}_{t_{1}}\cdot\cdot\cdot dB^{i_{k}}_{t_{k}}
\end{equation*}
and
\begin{equation*}
\int_{\Delta^{k}[s,t]}dB^{m,I}:=\int_{s\leq t_{1}<...<t_{k}\leq t}dB^{m,i_{1}}_{t_{1}}\cdot\cdot\cdot dB^{m,i_{k}}_{t_{k}},
\end{equation*}
where $B$ is the fractional Brownian motion with Hurst parameter $H$ and $B^{m}$ is its linear piecewise approximation. In addition, $B^{i}$ is the $i$-th coordinate component of the fBm $B$ and the iterated integrals can be defined in the sense of Young \cite{Young}.
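For a piecewise-linear path, these iterated integrals can be accumulated segment by segment up to any fixed level using Chen's identity. The sketch below, an illustration rather than code from the paper, does this at truncation level $2$:

```python
import numpy as np

def signature_level2(points):
    """Level-1 and level-2 iterated integrals of the piecewise-linear path
    through `points` (shape (n, d)).  On each linear segment with increment
    delta, Chen's identity updates the level-2 tensor by
    S1 (x) delta + delta (x) delta / 2, and then S1 by delta."""
    points = np.asarray(points, dtype=float)
    d = points.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for delta in np.diff(points, axis=0):
        S2 += np.outer(S1, delta) + 0.5 * np.outer(delta, delta)
        S1 += delta
    return S1, S2
```

For a straight segment with total increment $v$, the level-2 tensor is $\frac{1}{2}v\otimes v$ no matter how finely the segment is subdivided, which is a convenient sanity check on the implementation.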
\\Moreover, the linear piecewise approximation $B^{m}$ is defined as follows. The only requirement we need is that the linear approximation comes from a uniform grid. Let $s,u\in[0,\infty)$ and consider an interval of time $[s,u]$. Let $t_{i}^{[s,u]}:=\frac{i}{m}(u-s)+s$ for $i=0,...,m$. If $t\in[t_{i}^{[s,u]},t_{i+1}^{[s,u]}]$ then
\begin{equation*}
B^{m}_{[u,s],t}:=B_{t_{i}^{[s,u]}}+\frac{m}{u-s}(t-t_{i}^{[s,u]})(B_{t_{i+1}^{[s,u]}}-B_{t_{i}^{[s,u]}}).
\end{equation*}
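As an illustration, the uniform-grid construction above can be sketched as follows (a minimal sketch; the function names and the idea of passing the underlying path as a callable are ours, not part of the text):

```python
def grid_times(s, u, m):
    """Uniform grid t_i = s + (i/m)(u - s) for i = 0, ..., m."""
    return [s + (i / m) * (u - s) for i in range(m + 1)]

def piecewise_linear(t, s, u, m, B):
    """Evaluate the piecewise linear approximation B^m on [s, u] at time t.
    B is any callable returning the value of the underlying path; the
    approximation interpolates B linearly between consecutive grid points."""
    grid = grid_times(s, u, m)
    i = min(int((t - s) / (u - s) * m), m - 1)  # cell [t_i, t_{i+1}] containing t
    t_i, t_i1 = grid[i], grid[i + 1]
    return B(t_i) + (m / (u - s)) * (t - t_i) * (B(t_i1) - B(t_i))
```

By construction the approximation agrees with the underlying path at every grid point and is linear in between.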
Observe that the definition of the linear piecewise approximation depends on the interval considered; in particular, $B^{m}$ agrees with $B$ at the grid points. Notice also that $B_{t}^{m}$ satisfies the scaling and the stationary increments properties, once the interval is properly modified. Indeed, for the stationary increments property the interval to be taken is, for any $h>0$, $[s+h,u+h]$, and we have
\begin{equation*}
B^{m}_{[s+h,u+h],t+h}=B^{m}_{t_{i}^{[s+h,u+h]}}+\frac{m}{u-s}(t+h-t_{i}^{[s+h,u+h]})(B^{m}_{t_{i+1}^{[s+h,u+h]}}-B^{m}_{t_{i}^{[s+h,u+h]}})
\end{equation*}
\begin{equation*}
\stackrel{Law}{=}B^{m}_{t_{i}^{[s,u]}}+\frac{m}{u-s}(t-t_{i}^{[s,u]})(B^{m}_{t_{i+1}^{[s,u]}}-B^{m}_{t_{i}^{[s,u]}})=B^{m}_{[u,s],t}
\end{equation*}
and for the scaling property the interval to be taken is, for any $c>0$, $[s/c,u/c]$, and we have
\begin{equation*}
c^{H}B^{m}_{[u/c,s/c],t/c}=c^{H}B^{m}_{t_{i}^{[s/c,u/c]}}+c^{H}\frac{cm}{u-s}(t/c-t_{i}^{[s/c,u/c]})(B^{m}_{t_{i+1}^{[s/c,u/c]}}-B^{m}_{t_{i}^{[s/c,u/c]}})
\end{equation*}
\begin{equation*}
=c^{H}B^{m}_{t_{i}^{[s,u]}/c}+c^{H}\frac{cm}{u-s}(t/c-t_{i}^{[s,u]}/c)(B^{m}_{t_{i+1}^{[s,u]}/c}-B^{m}_{t_{i}^{[s,u]}/c})
\end{equation*}
\begin{equation*}
\stackrel{Law}{=}B^{m}_{t_{i}^{[s,u]}}+\frac{m}{u-s}(t-t_{i}^{[s,u]})(B^{m}_{t_{i+1}^{[s,u]}}-B^{m}_{t_{i}^{[s,u]}})=B^{m}_{[u,s],t}.
\end{equation*}
Further, from now on we will not write the interval of the linear approximation explicitly, unless it is necessary in order to avoid confusion. Hence, we denote $t_{i}:=t_{i}^{[s,u]}$ and $B^{m}_{t}:=B^{m}_{[u,s],t}$.
We can now present our main results. The first result is about the rate of convergence of the expected signature of the linear piecewise approximation of the fBm to its exact value.
\begin{thm}\label{pr1}
Let $H > \frac{1}{2}$, $T\in[0,\infty)$ and $0\leq s<t\leq T$. Let $I = (i_{1},\ldots,i_{2k})$ be a word with $i_{l}\in\{1,\ldots,d\}$ for $l=1,\ldots,2k$. Then, for all $m$,
\begin{equation*}
\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{m,I}\right)\Bigg|\leq Cm^{-2H}(t-s)^{2kH},
\end{equation*}
where $C$ is a finite constant depending only on $k$ and $H$.
\end{thm}
\noindent It is important to stress that for the Brownian motion case, namely $H=\frac{1}{2}$, Ni and Xu \cite{NiXu} proved that the sharp rate of convergence is $1$ (\textit{i.e.} a uniform bound of $Cm^{-1}$), which is in line with the result presented here. Moreover, the proof used in \cite{NiXu} cannot in principle be used to prove our result, since Ni and Xu used the independent increments property (\textit{i.e.} the Markov semigroup property) of the Brownian motion, which does not hold for the fBm. Conversely, our proof of Theorem \ref{pr1} is based on the integral form of the covariance function of the fBm, which is valid only for $H > \frac{1}{2}$; hence our proof cannot in principle be used to prove the result in \cite{NiXu}.
In the next theorem we refine the constant $C$ of the previous theorem and provide an explicit bound for it.
\begin{thm}\label{pr2}
Let $H > \frac{1}{2}$, $T\in[0,\infty)$ and $0\leq s<t\leq T$. Let $I = (i_{1},\ldots,i_{2k})$ be a word with $i_{l}\in\{1,\ldots,d\}$ for $l=1,\ldots,2k$. Then
\begin{equation*}
\limsup\limits_{m\rightarrow\infty}m^{2H}\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{m,I}\right)\Bigg|\leq \dfrac{\tilde{A}k(2k-1)}{(k-1)!2^{k}}(t-s)^{2kH},
\end{equation*}
where $\tilde{A}$ is a finite constant depending only on $H$.
\end{thm}
The following theorem provides a sharp bound for the value of the expected signature of the fBm. In the $H > \frac{1}{2}$ case, this result in particular implies Chevyrev and Lyons' result that the expected signature of the fBm has infinite radius of convergence. This in turn implies that the expected signature determines the law of the signature.
\begin{thm}\label{pr3}
Let $H > \frac{1}{2}$, $T\in[0,\infty)$ and $0\leq s<t\leq T$. Let $I = (i_{1},\ldots,i_{2k})$ be a word with $i_{l}\in\{1,\ldots,d\}$ for $l=1,\ldots,2k$. Then
\begin{equation*}
\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{I}\right)\leq \dfrac{(t-s)^{2kH}}{k!2^{k}}.
\end{equation*}
\end{thm}
We move now to the cubature method. The results we present require an extended formal introduction, which is given in Section 7. The first main result concerns the bound on the approximation error of the cubature method, namely the difference between the expected exact solution of an SDE driven by a fBm, which we denote by $\mathbb{E}\left(f(\xi_{T,x})\right)$, and the deterministic approximation obtained by the cubature method, denoted by $\sum_{j=1}^{n}\lambda_{j}f(\Phi_{T,x}(\omega_{j}))$.
In particular, with this result we extend Proposition 2.1, Lemma 3.1 and Proposition 3.2 of \cite{LV} to the fBm case for $H>1/2$. Before stating the theorem we introduce the following notation. Let $|I|$ denote the length of the word $I$, where $I\in\{0,...,d\}^{k}$ for some $k\in\mathbb{N}$, and let $C_{b}^{\infty}(\mathbb{R}^{N},\mathbb{R}^{N})$, where $N\in\mathbb{N}$, be the space of $\mathbb{R}^{N}$-valued smooth functions defined on $\mathbb{R}^{N}$ whose derivatives of every order are bounded. We regard elements of $C_{b}^{\infty}(\mathbb{R}^{N},\mathbb{R}^{N})$ as vector fields on $\mathbb{R}^{N}$. Let $V_{0},...,V_{d}$ be such vector fields.
\begin{thm}\label{prNEW} Consider any function $f:\mathbb{R}^{N}\rightarrow\mathbb{R}$ whose derivatives of every order exist and are bounded, and vector fields $V_{0},...,V_{d}\in C_{b}^{\infty}(\mathbb{R}^{N},\mathbb{R}^{N})$. Let $T\in[0,\infty)$ and assume that there exist $M > 0$ and $0 \leq\gamma < 1/2$ such that, for every word $I$, $|(V_{i_{1}}\cdots V_{i_{k}}f)(x)| \leq M^{|I|} (|I|!)^{\gamma}$. Then, we have
\begin{equation*}
\sup_{x\in\mathbb{R}^{N}}\Big|\mathbb{E}\left(f(\xi_{T,x})\right) -\sum_{j=1}^{n}\lambda_{j}f(\Phi_{T,x}(\omega_{j}))\Big|\leq\begin{cases}
L_{1}(T)\quad\text{if $T\geq 1$},\\ L_{2}(T)\quad\text{if $T< 1$},
\end{cases}
\end{equation*}
where
\begin{equation*}
L_{1}(T):=CT^{(m+2)/2}\left(1+\sup\limits_{x\in\mathbb{R}^{N}}M_{x}^{(m+2)/2}\sum_{k=0}^{\infty}\frac{(dM_{x}KT)^{k}}{(k!)^{1/2-\gamma}}\right)
\end{equation*}
\begin{equation*}
L_{2}(T):=C' T^{2H}+C''T^{H(m+2)/2}\sup\limits_{x\in\mathbb{R}^{N}}M_{x}^{(m+2)/2}\sum_{k=0}^{\infty}\frac{(dM_{x}KT^{H})^{k}}{(k!)^{1/2-\gamma}},
\end{equation*}
and where $C,C',C''>0$ are constants independent of T and $K=\sqrt{\frac{2}{H(2H-1)}}$.
\end{thm}
In the second main result, we provide the cubature formula for the one-dimensional fBm up to degree 5. We will discuss the concept of the degree of a cubature formula in Section 7. Now, let $C^{0}_{0,bv}([0, T ], \mathbb{R}^{d} )$ be the space of $\mathbb{R}^{d}$-valued continuous paths of bounded variation defined on $[0, T]$ which start at zero.
\begin{thm}\label{pr4}
Let $H \geq \frac{1}{2}$. Define $\hat{B}$ and $\hat{\omega}$ to be
\begin{equation*}
\hat{B}_{t_{l}}^{i_{l}}=
\begin{cases}
t_{l}, &\text{if}\quad i_{l}=0,\\
B_{t_{l}}, &\text{if}\quad i_{l}=1,
\end{cases}
\qquad\text{and}\qquad
\hat{\omega}_{t_{l},j}^{i_{l}}=
\begin{cases}
t_{l}, &\text{if}\quad i_{l}=0,\\
\omega_{t_{l},j}, &\text{if}\quad i_{l}=1,
\end{cases}
\end{equation*}
for $l=1,\ldots,k$ and $j=1,\ldots,n$, where $\omega_{j}\in C^{0}_{0,bv}([0, T ], \mathbb{R}^{d} )$. The weights $\{\lambda_1, \lambda_2, \lambda_3\}$ and the paths $\{\omega_{t,1},\omega_{t,2},\omega_{t,3}\}$ satisfy the cubature formula
\begin{equation*}
\mathbb{E}\left[\int_{0<t_{1}<\cdots<t_{k}< T} d\hat{B}_{t_{1}}^{i_{1}}\cdots d\hat{B}_{t_{k}}^{i_{k}}\right]=\sum_{j=1}^{n}\lambda_{j}\int_{0<t_{1}<\cdots<t_{k}< T}d\hat{\omega}_{t_{1},j}^{i_{1}}\cdots d\hat{\omega}_{t_{k},j}^{i_{k}},
\end{equation*}
for the following degree
\begin{equation*}
\text{Degree}=\begin{cases}
5 \qquad \text{for} \qquad \frac{1}{2}\leq H<\frac{2}{3},\\
4 \qquad \text{for} \qquad \frac{2}{3}\leq H<1,
\end{cases}
\end{equation*}
if $n=3$, $\lambda_{1}=\lambda_{2}=\frac{1}{6}$ and $\lambda_{3}=\frac{2}{3}$, and
\begin{equation*}
\omega_{t,1}=\begin{cases}
(2\alpha-\beta) t, \qquad t\in[0,\frac{1}{3}],\\
(\alpha-\beta)+(2\beta-\alpha)t, \qquad t\in[\frac{1}{3},\frac{2}{3}],\\
(\beta-\alpha)+(2\alpha-\beta)t, \qquad t\in[\frac{2}{3},1],\\
\end{cases}
\end{equation*}
where
\begin{equation*}
\alpha:=\dfrac{2H\sqrt{3}+\sqrt{3}}{2H+1} \qquad \text{and} \qquad \beta:=\dfrac{\sqrt{-96H^{2}+66H+57}}{2H+1},
\end{equation*}
and $\omega_{t,2}=-\omega_{t,1}$ and $\omega_{t,3}=0$ for $t\in[0,1]$.
\end{thm}
\noindent It is important to remark that for $H = \frac{1}{2}$, \textit{i.e.} when the fractional Brownian motion is just a Brownian motion, the results obtained in this theorem coincide exactly with those of Lyons and Victoir in \cite{LV}.
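As a simple sanity check (ours, not part of the theorem), the weights and paths above reproduce the first two moments of the fBm at $T=1$, namely $\mathbb{E}(B_{1})=0$ and $\mathbb{E}(B_{1}^{2})=1$, for any admissible $H$:

```python
import math

def omega1(t, H):
    """The cubature path omega_{t,1} of the theorem on [0, 1]."""
    alpha = (2 * H * math.sqrt(3) + math.sqrt(3)) / (2 * H + 1)
    beta = math.sqrt(-96 * H ** 2 + 66 * H + 57) / (2 * H + 1)
    if t <= 1 / 3:
        return (2 * alpha - beta) * t
    if t <= 2 / 3:
        return (alpha - beta) + (2 * beta - alpha) * t
    return (beta - alpha) + (2 * alpha - beta) * t

def moment_check(H):
    """Weighted first and second moments of the cubature paths at T = 1."""
    lam = [1 / 6, 1 / 6, 2 / 3]
    ends = [omega1(1.0, H), -omega1(1.0, H), 0.0]  # omega_2 = -omega_1, omega_3 = 0
    m1 = sum(l * e for l, e in zip(lam, ends))       # matches E(B_1)   = 0
    m2 = sum(l * e ** 2 for l, e in zip(lam, ends))  # matches E(B_1^2) = 1
    return m1, m2
```

Only the endpoint values of the paths enter these two moments; matching the higher-degree conditions involves the full shape of the paths.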
\section{Preliminaries}\label{Preliminaries}
One of the main concepts of this paper is the signature of the fractional Brownian motion. In order to define the signature we first need to introduce the following concept: the \textit{$p$-variation} of a path.
\begin{defn} \label{p-var}
Let $p\geq 1$ be a real number. Let $X:J\rightarrow E$ be a continuous path, where $J$ is a compact interval and $E$ is a finite-dimensional Banach space. The $p$-variation of $X$ on the interval $J$ is defined by
\begin{equation*}
\|X\|_{p,J}=\left[\sup_{\mathcal{D}\subset J}\sum_{i=0}^{r-1}|X_{t_{i}}-X_{t_{i+1}}|^{p}\right]^{\frac{1}{p}},
\end{equation*}
where the supremum is taken over all finite partitions $\mathcal{D}=\{t_{0}<t_{1}<\cdots<t_{r}\}$ of $J$.
\end{defn}
\noindent A path $X$ is said to be of finite $p$-variation over the interval $J$ if $\|X\|_{p,J}<\infty$.\\
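When the path is observed only at finitely many sample times, the supremum over partitions of the samples can be computed exactly by dynamic programming; the following sketch (our own illustration, for a real-valued path) does so:

```python
def p_variation(xs, p):
    """p-variation of the discrete path xs = [X_{t_0}, ..., X_{t_r}],
    with the supremum taken over all sub-partitions of the sample times.
    Dynamic programming: best[j] = max over i < j of best[i] + |x_j - x_i|^p."""
    best = [0.0] * len(xs)
    for j in range(1, len(xs)):
        best[j] = max(best[i] + abs(xs[j] - xs[i]) ** p for i in range(j))
    return best[-1] ** (1 / p)
```

For $p=1$ this reduces to the total variation of the sample sequence.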
By the signature of a stochastic process we mean the following.
\begin{defn}
Let $E$ be a finite-dimensional Banach space. Let $X:[0,T]\rightarrow E$ be a continuous path with finite $p$-variation for some $p<2$. The signature of $X$ is the element $S(X)$ of $T(E):=\{\mathbf{a}=(a_{0},a_{1},\ldots)\,|\,\forall n\geq 0,\ a_{n}\in E^{\otimes n}\}$ defined as follows:
\begin{equation*}
S(X_{[0,T]})=(1,X_{[0,T]}^{1},X_{[0,T]}^{2},...),
\end{equation*}
where, for each $n\geq 1$,
\begin{equation*}
X_{[0,T]}^{n}=\int_{0<u_{1}<...<u_{n}<T}dX_{u_{1}}\otimes...\otimes dX_{u_{n}}.
\end{equation*}
\end{defn}
\noindent The integration in the previous definition is defined in the sense of Young \cite{Young}.
The fractional Brownian motion has finite $p$-variation for every $p>\frac{1}{H}$. Hence, for $H\in(\frac{1}{2},1)$ we can choose $p<2$.
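For a piecewise linear path the iterated integrals in the definition have a closed form segment by segment, and segments are glued together with Chen's relation. The sketch below (our own illustration) computes the first two levels and checks the shuffle identity $S^{(i)}S^{(j)}=S^{(i,j)}+S^{(j,i)}$:

```python
def signature_two_levels(increments, d):
    """First two signature levels of a piecewise linear path in R^d,
    given as a list of segment increments.  For a single linear segment
    the level-2 integral is Delta^i Delta^j / 2; segments are glued with
    Chen's relation (past level-1 times the new increment)."""
    S1 = [0.0] * d
    S2 = [[0.0] * d for _ in range(d)]
    for delta in increments:
        for i in range(d):
            for j in range(d):
                # cross term from Chen's relation + the new segment's own term
                S2[i][j] += S1[i] * delta[j] + 0.5 * delta[i] * delta[j]
        for i in range(d):
            S1[i] += delta[i]
    return S1, S2
```

Note that the level-2 update uses the level-1 state before it is advanced; this ordering is exactly Chen's relation.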
Notice that for $I$, a word composed of the letters $(i_{1},\ldots,i_{2k})$ where each $i_{l}\in\{1,\ldots,d\}$, we have $dB^{I}:=dB^{i_{1}}\cdots dB^{i_{2k}}$. Therefore, the $2k$-th element of the expected signature of the fBm and of its piecewise approximation are respectively given by
\begin{equation}
\mathbb{E}\left(\int_{0<u_{1}<...<u_{2k}<T}dB_{u_{1}}\otimes\cdot\cdot\cdot \otimes dB_{u_{2k}}\right)=\sum_{i_{1}=1}^{d}\cdot\cdot\cdot \sum_{i_{2k}=1}^{d}\mathbb{E}\left(\int_{\Delta^{2k}[0,T]}dB^{I}\right)e_{i_{1}}\otimes\cdot\cdot\cdot \otimes e_{i_{2k}},\quad \text{and by}
\end{equation}
\begin{equation}
\mathbb{E}\left(\int_{0<u_{1}<...<u_{2k}<T}dB^{m}_{u_{1}}\otimes\cdot\cdot\cdot\otimes dB^{m}_{u_{2k}}\right)=\sum_{i_{1}=1}^{d}\cdot\cdot\cdot \sum_{i_{2k}=1}^{d}\mathbb{E}\left(\int_{\Delta^{2k}[0,T]}dB^{m,I}\right)e_{i_{1}}\otimes\cdot\cdot\cdot \otimes e_{i_{2k}},
\end{equation}
where $(e_{1},\ldots,e_{d})$ is the canonical basis of $\mathbb{R}^{d}$. Notice that we consider only the even levels $2k$, since for odd levels the expected value of the iterated integrals is zero due to the symmetry of the fBm.
We recall two results which can be found, for example, in the paper of Baudoin and Coutin \cite{BauCou}. First, we present a reformulation of Isserlis' (or Wick's) theorem.
\begin{lem}
Let $G = (G_{1},\ldots,G_{2k})$ be a centered Gaussian vector. We have
\begin{equation}
\mathbb{E}(G_{ 1 }\cdot\cdot\cdot G_{ 2k })=\dfrac{1}{k!2^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\prod_{l=1}^{k}\mathbb{E}(G_{\sigma(2l-1)}G_{\sigma(2l)}),
\end{equation}
where $\mathcal{G}_{2k}$ is the group of permutations of the set $\{1,\ldots,2k\}$.
\end{lem}
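The lemma can be checked directly for $2k=4$, where the right-hand side reduces to the classical pairing formula $\mathbb{E}(G_{1}G_{2}G_{3}G_{4})=C_{12}C_{34}+C_{13}C_{24}+C_{14}C_{23}$ with $C_{ij}=\mathbb{E}(G_{i}G_{j})$. A sketch (our own check, with an arbitrary covariance matrix):

```python
import math
from itertools import permutations

def wick_permutation_sum(C):
    """Right-hand side of the lemma for 2k = 4: (1 / (k! 2^k)) times the
    sum over all permutations of {1, ..., 4} of the paired covariances."""
    k = 2
    total = 0.0
    for sigma in permutations(range(2 * k)):
        prod = 1.0
        for l in range(k):
            prod *= C[sigma[2 * l]][sigma[2 * l + 1]]
        total += prod
    return total / (math.factorial(k) * 2 ** k)

# an arbitrary symmetric matrix of covariances C[i][j] = E(G_i G_j)
C = [[2.0, 0.3, 0.5, 0.1],
     [0.3, 1.0, 0.4, 0.2],
     [0.5, 0.4, 3.0, 0.6],
     [0.1, 0.2, 0.6, 1.5]]
pairing = C[0][1] * C[2][3] + C[0][2] * C[1][3] + C[0][3] * C[1][2]
```

Each of the three pairings is counted $k!\,2^{k}=8$ times by the permutation sum, which is exactly the normalisation in the lemma.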
\noindent Therefore, from the previous lemma we have
\begin{equation}
\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{m,I}\right)=\dfrac{1}{k!2^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\int_{\Delta^{2k}[s,t]}\prod_{l=1}^{k}\mathbb{E}\left(\dfrac{dB^{m,i_{\sigma(2l-1)}}}{dt_{\sigma(2l-1)}}\dfrac{dB^{m,i_{\sigma(2l)}}}{dt_{\sigma(2l)}}\right)dt_{1}\cdot\cdot\cdot dt_{2k}.
\end{equation}
Notice that if $t_{\sigma(2l-1)}\in[t_{j},t_{j+1}]$ and $t_{\sigma(2l)}\in[t_{i},t_{i+1}]$ then
\begin{equation}
\mathbb{E}\left(\dfrac{dB^{m,i_{\sigma(2l-1)}}}{dt_{\sigma(2l-1)}}\dfrac{dB^{m,i_{\sigma(2l)}}}{dt_{\sigma(2l)}}\right)=\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}} H(2H-1)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy.
\end{equation}
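The identity above rests on the integral representation of the covariance of increments, valid for $H>\frac{1}{2}$: $\mathbb{E}[(B_{b}-B_{a})(B_{d}-B_{c})]=H(2H-1)\int_{a}^{b}\int_{c}^{d}|x-y|^{2H-2}\,dy\,dx$. This can be checked numerically (a sketch with hypothetical endpoints; the midpoint rule is ours):

```python
def increment_cov(a, b, c, d, H):
    """Closed form E[(B_b - B_a)(B_d - B_c)] from the fBm covariance."""
    f = lambda x: abs(x) ** (2 * H)
    return 0.5 * (f(b - c) + f(a - d) - f(b - d) - f(a - c))

def kernel_integral(a, b, c, d, H, N=400):
    """Midpoint rule for H(2H-1) * int_a^b int_c^d |x-y|^{2H-2} dy dx."""
    hx, hy = (b - a) / N, (d - c) / N
    total = 0.0
    for i in range(N):
        x = a + (i + 0.5) * hx
        for j in range(N):
            total += abs(x - (c + (j + 0.5) * hy)) ** (2 * H - 2)
    return H * (2 * H - 1) * total * hx * hy
```

For disjoint intervals the kernel is smooth, so the midpoint rule converges quickly.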
Further, recall Theorem 31 of \cite{BauCou}.
\begin{thm}\label{31}
Assume $H > \frac{1}{2}$ and let $I = (i_{1},\ldots,i_{2k})$ be a word. Then
\begin{equation}
\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)=\dfrac{H^{k}(2H-1)^{k}}{k!2^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\int_{\Delta^{2k}[0,1]}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdot\cdot\cdot dt_{2k}.
\end{equation}
\end{thm}
\noindent We conclude this section with the following remark.
\begin{rem}
By using the scaling and the stationary increments property of the fBm, we have that
\begin{equation*}
\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{I}\right)=\mathbb{E}\left(\int_{\Delta^{2k}[0,t-s]}dB^{I}\right)=(t-s)^{2kH}\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right),\quad \text{and}
\end{equation*}
\begin{equation*}
\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{m,I}\right)=\mathbb{E}\left(\int_{\Delta^{2k}[0,t-s]}dB^{m,I}\right)=(t-s)^{2kH}\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{m,I}\right).
\end{equation*}
Hence, it is sufficient to focus on the case $\Delta^{2k}[0,1]$ in the proofs of our results.
\end{rem}
\section{Proof of Theorem \ref{pr1}}\label{ChapterSharp}
The first result we prove concerns the rate of convergence of the expected signature of the piecewise approximation of the fractional Brownian motion to its exact value. The sharpness of this rate is a delicate matter and will be discussed at the end of this section.\\
In this section we prove Theorem \ref{pr1}. As mentioned before, this theorem covers the weak convergence rate of the signature of the fBm. A previous result on the strong convergence rate was obtained by Friz and Riedel in \cite{FR}. Recall from Theorem \ref{31} that we have
\begin{equation*}
\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{m,I}\right)\Bigg|
\end{equation*}
\begin{equation*}
=\Bigg|\dfrac{H^{k}(2H-1)^{k}}{k!2^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\int_{\Delta^{2k}[0,1]}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}
\end{equation*}
\begin{equation*}
-\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t_{\sigma(2l)},t_{\sigma(2l-1)})m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdydt_{1}\cdot\cdot\cdot dt_{2k}\Bigg|.
\end{equation*}
\begin{figure}
\centerline{\includegraphics[scale=0.5]{grafico2.png}}
\caption{\small Representation of the area of the double integral (\ref{integrals}).}
\end{figure}
In order to understand this expression as a whole we first need to understand its individual parts. In other words, let us focus on
\begin{equation*}
\int_{u_{2}}^{v_{2}}\int_{u_{1}}^{v_{1}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt,
\end{equation*}
where $u_{1},u_{2},v_{1},v_{2},t,s\in\{t_{1},...,t_{2k}\}$.
There are two possible cases. The first one is when $t$ and $s$ are next to each other. In this case, we have
\begin{equation}\label{first}
\int_{u}^{v}\int_{u}^{t}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation}
with $u,v,t,s\in\{t_{1},\ldots,t_{2k}\}$ pairwise distinct.
The second one is that $s$ and $t$ are not next to each other. Hence, we have
\begin{equation}\label{second}
\int_{u_{2}}^{v_{2}}\int_{u_{1}}^{v_{1}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation}
with $u_{1},u_{2},v_{1},v_{2},t,s$ pairwise distinct.
We will focus first on the first case $(\ref{first})$, which is more delicate. We are going to split this double integral into different parts according to our piecewise approximation. In particular, let $t_{q}$ be the lowest grid time above $u$ and $t_{p}$ the highest grid time below $v$, as shown in Figure 1. Then our double integral can be rewritten in the following way:
\begin{equation}\label{integrals}
\int_{u<s<t<v}=\int_{u}^{v}\int_{u}^{t}=\int_{u}^{t_{q}}\int_{u}^{t}+\int_{t_{q}}^{t_{p}}\int_{u}^{t_{q}}+\int_{t_{p}}^{v}\int_{u}^{t_{q}}+\int_{t_{p}}^{v}\int_{t_{q}}^{t_{p}}+\int_{t_{p}}^{v}\int_{t_{p}}^{t}+\int_{t_{q}}^{t_{p}}\int_{t_{q}}^{t}.
\end{equation}
\noindent In order to better understand this decomposition it is helpful to look at the upper triangle in Figure 1 (\textit{i.e.} the triangle from the red diagonal to the upper and left blue edges). The first integral is the small triangle in the lower left. From there one first moves up and then to the right. The last integral is the central yellow triangle in Figure 1. For this integral the piecewise approximation and the exact value are equal, hence their difference is zero. This is because the piecewise approximation equals the exact value whenever the time points considered coincide with grid points.
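The fact that the six regions in the decomposition tile the triangle $\{u<s<t<v\}$ (up to boundaries of measure zero) can be checked by sampling; a sketch with hypothetical values of $u$, $v$, $t_{q}$, $t_{p}$ of our choosing:

```python
import random

def region_memberships(s, t, u, v, tq, tp):
    """Indicators of the six regions of the triangle decomposition."""
    return [
        u < t < tq and u < s < t,     # int_u^{t_q} int_u^t
        tq < t < tp and u < s < tq,   # int_{t_q}^{t_p} int_u^{t_q}
        tp < t < v and u < s < tq,    # int_{t_p}^{v} int_u^{t_q}
        tp < t < v and tq < s < tp,   # int_{t_p}^{v} int_{t_q}^{t_p}
        tp < t < v and tp < s < t,    # int_{t_p}^{v} int_{t_p}^t
        tq < t < tp and tq < s < t,   # int_{t_q}^{t_p} int_{t_q}^t
    ]

def check_partition(u=0.13, v=0.88, tq=0.2, tp=0.8, trials=10000, seed=0):
    """Every sampled point of {u < s < t < v} lies in exactly one region."""
    rng = random.Random(seed)
    for _ in range(trials):
        s, t = sorted(rng.uniform(u, v) for _ in range(2))
        if s == t or s in (tq, tp) or t in (tq, tp):
            continue  # skip the (measure-zero) boundaries
        if sum(region_memberships(s, t, u, v, tq, tp)) != 1:
            return False
    return True
```
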
We now want to determine the speed at which the difference between the piecewise approximation and the exact value of these integrals goes to zero as the grid size goes to zero (\textit{i.e.} as $m\rightarrow\infty$).
\\
In order to facilitate the reading and understanding of this section, instead of presenting a seven-page-long proof we split the proof of Theorem \ref{pr1} into several small propositions.
\\ The following propositions concern the rate of convergence of each double integral on the right-hand side of equation $(\ref{integrals})$. First, let us focus on the integrals $\int_{u}^{t_{q}}\int_{u}^{t}$ and $\int_{t_{p}}^{v}\int_{t_{p}}^{t}$. We have the following proposition.
\begin{pro} The differences
\begin{equation*}
\Bigg|\int_{u}^{t_{q}}\int_{u}^{t}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\normalfont\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt\Bigg|
\end{equation*}
and
\begin{equation*}
\Bigg|\int_{t_{p}}^{v}\int_{t_{p}}^{t}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\normalfont\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt\Bigg|
\end{equation*}
are bounded by
\begin{equation*}
\dfrac{m^{-2H}}{H(2H-1)}.
\end{equation*}
\end{pro}
\begin{proof}
\begin{equation*}
\int_{u}^{t_{q}}\int_{u}^{t}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\int_{u}^{t_{q}}\int_{u}^{t}\Big(|t-s|^{2H-2}-m^{2}\int_{t_{q-1}}^{t_{q}}\int_{t_{q-1}}^{t_{q}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\dfrac{(t_{q}-u)^{2H}}{2H(2H-1)}-m^{2}\int_{u}^{t_{q}}\int_{u}^{t}\dfrac{(t_{q}-t_{q-1})^{2H}}{H(2H-1)}dsdt
\end{equation*}
\begin{equation*}
=\dfrac{(t_{q}-u)^{2H}}{2H(2H-1)}-m^{2-2H}\dfrac{(t_{q}-u)^{2}}{2H(2H-1)}.
\end{equation*}
Notice that
\begin{equation*}
\Bigg|\dfrac{(t_{q}-u)^{2H}}{2H(2H-1)}-m^{2-2H}\dfrac{(t_{q}-u)^{2}}{2H(2H-1)}\Bigg|\leq\dfrac{(t_{q}-u)^{2H}}{2H(2H-1)}+m^{2-2H}\dfrac{(t_{q}-u)^{2}}{2H(2H-1)}
\end{equation*}
\begin{equation*}
\leq \dfrac{m^{-2H}}{2H(2H-1)}+m^{2-2H}\dfrac{m^{-2}}{2H(2H-1)}=\dfrac{m^{-2H}}{H(2H-1)}.
\end{equation*}
The same reasoning done for this integral applies \textit{mutatis mutandis} to the integral $\int_{t_{p}}^{v}\int_{t_{p}}^{t}$.
\end{proof}
Let us focus on the integrals: $\int_{t_{q}}^{t_{p}}\int_{u}^{t_{q}}$ and $\int_{t_{p}}^{v}\int_{t_{q}}^{t_{p}}$. We have the following proposition.
\begin{pro}
The differences
\begin{equation*}
\Bigg|\int_{t_{q}}^{t_{p}}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\normalfont\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt\Bigg|
\end{equation*}
and
\begin{equation*}
\Bigg|\int_{t_{p}}^{v}\int_{t_{q}}^{t_{p}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\normalfont\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt\Bigg|
\end{equation*}
are bounded by
\begin{equation*}
\left(\dfrac{2^{2H}+2}{H(2H-1)}+(4-4H)\sum_{i=1}^{\infty}i^{2H-3}\right)m^{-2H}.
\end{equation*}
\end{pro}
\begin{proof}
We have
\begin{equation*}
\int_{t_{q}}^{t_{p}}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\int_{t_{q}}^{t_{p}}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-\sum_{t_{i}=t_{q}}^{t_{p-1}}\textbf{1}_{[t_{i},t_{i+1}]}(t)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{q-1}}^{t_{q}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\sum_{t_{i}=t_{q}}^{t_{p-1}}\int_{t_{i}}^{t_{i+1}}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{q-1}}^{t_{q}}|x-y|^{2H-2}dxdy\Big)dsdt.
\end{equation*}
Here we need to split the sum into two parts. The first part consists of the first element of the sum, while the second consists of the whole sum without the first element. Let us first consider the first part.
\begin{equation*}
\int_{t_{q}}^{t_{q+1}}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-m^{2}\int_{t_{q}}^{t_{q+1}}\int_{t_{q-1}}^{t_{q}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\dfrac{(t_{q+1}-u)^{2H}-(t_{q}-u)^{2H}-(t_{q+1}-t_{q})^{2H}}{2H(2H-1)}
\end{equation*}
\begin{equation*}
-m^{2}(t_{q+1}-t_{q})(t_{q}-u)\dfrac{(t_{q+1}-t_{q-1})^{2H}-(t_{q}-t_{q-1})^{2H}-(t_{q+1}-t_{q})^{2H}}{2H(2H-1)}.
\end{equation*}
Notice that every term in the numerators is smaller than or equal to $m^{-2H}$, except $(t_{q+1}-u)^{2H}$ and $(t_{q+1}-t_{q-1})^{2H}$, for which the relation $(t_{q+1}-u)^{2H}\leq (t_{q+1}-t_{q-1})^{2H}=2^{2H}m^{-2H}$ holds. Moreover, the prefactor multiplying the second fraction is at most $1$. Therefore, taking the absolute value of each term we obtain
\begin{equation*}
\leq \dfrac{2^{2H}+2}{H(2H-1)}m^{-2H}.
\end{equation*}
Now let us focus on the second part, that is
\begin{equation*}
\sum_{t_{i}=t_{q+1}}^{t_{p-1}}\int_{t_{i}}^{t_{i+1}}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{q-1}}^{t_{q}}|x-y|^{2H-2}dydx\Big)dsdt
\end{equation*}
\begin{equation*}
=\sum_{t_{i}=t_{q+1}}^{t_{p-1}}\int_{t_{i}}^{t_{i+1}}\int_{u}^{t_{q}}\Big((t-s)^{2H-2}-m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{q-1}}^{t_{q}}(x-y)^{2H-2}dydx\Big)dsdt
\end{equation*}
\begin{equation*}
=\sum_{t_{i}=t_{q+1}}^{t_{p-1}}\int_{t_{i}}^{t_{i+1}}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{i}}^{t_{i+1}}\int_{t_{q-1}}^{t_{q}}(t-s)^{2H-2}-(x-y)^{2H-2}dydx\Big)dsdt.
\end{equation*}
By applying the Mean Value Theorem we have
\begin{equation*}
\leq \sum_{t_{i}=t_{q+1}}^{t_{p-1}}\int_{t_{i}}^{t_{i+1}}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{i}}^{t_{i+1}}\int_{t_{q-1}}^{t_{q}}\sup_{\xi\in[(t-s),(x-y)]}(2-2H)\xi^{2H-3}|t-s -(x-y)|dydx\Big)dsdt
\end{equation*}
\begin{equation*}
=\sum_{t_{i}=t_{q+1}}^{t_{p-1}}\int_{t_{i}}^{t_{i+1}}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{i}}^{t_{i+1}}\int_{t_{q-1}}^{t_{q}}(2-2H)(t_{i}-t_{q})^{2H-3}|t-x +y-s|dydx\Big)dsdt.
\end{equation*}
Observe that $|t-x|\leq m^{-1}$ and $|y-s|\leq m^{-1}$, hence $|t-x +y-s|\leq 2m^{-1}$. Thus, we have
\begin{equation*}
\leq\sum_{t_{i}=t_{q+1}}^{t_{p-1}}\int_{t_{i}}^{t_{i+1}}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{i}}^{t_{i+1}}\int_{t_{q-1}}^{t_{q}}(4-4H)(t_{i}-t_{q})^{2H-3}m^{-1}dydx\Big)dsdt
\end{equation*}
\begin{equation*}
\leq (4-4H)m^{-3}\sum_{t_{i}=t_{q+1}}^{t_{p-1}}(t_{i}-t_{q})^{2H-3}.
\end{equation*}
Moreover, it is possible to notice that
\begin{equation*}
\leq (4-4H)m^{-3}\sum_{i=1}^{m}\left(\dfrac{i}{m}\right)^{2H-3}=(4-4H)m^{-2H}\sum_{i=1}^{m}i^{2H-3}\leq (4-4H)m^{-2H}\sum_{i=1}^{\infty}i^{2H-3}
\end{equation*}
\begin{equation*}
\leq Km^{-2H}
\end{equation*}
where $K$ is a finite constant independent of $m$; indeed, $\sum_{i=1}^{\infty}i^{2H-3}<\infty$ since $2H-3<-1$ for $H<1$.
\\
A similar reasoning applies to the integral $\int_{t_{p}}^{v}\int_{t_{q}}^{t_{p}}$.
\end{proof}
The last integral to be analysed is $\int_{t_{p}}^{v}\int_{u}^{t_{q}}$.
\begin{pro}
The difference
\begin{equation*}
\Bigg|\int_{t_{p}}^{v}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\normalfont\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt\Bigg|
\end{equation*}
is bounded by
\begin{equation*}
\dfrac{3^{2H}+10(2^{2H})+2}{2H(2H-1)}m^{-2H}.
\end{equation*}
\end{pro}
\begin{proof}
Here we need to distinguish two cases: one in which $|u-v|\leq \frac{2}{m}$ and one in which $|u-v|> \frac{2}{m}$. Let us focus first on the case $|u-v|> \frac{2}{m}$.
\begin{equation*}
\int_{t_{p}}^{v}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\int_{t_{p}}^{v}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-m^{2}\int_{t_{p}}^{t_{p+1}}\int_{t_{q-1}}^{t_{q}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\int_{t_{p}}^{v}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{p}}^{t_{p+1}}\int_{t_{q-1}}^{t_{q}}|t-s|^{2H-2}-|x-y|^{2H-2}dxdy\Big)dsdt.
\end{equation*}
By Mean Value Theorem we have
\begin{equation*}
\Bigg|\int_{t_{p}}^{v}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{p}}^{t_{p+1}}\int_{t_{q-1}}^{t_{q}}|t-s|^{2H-2}-|x-y|^{2H-2}dxdy\Big)dsdt\Bigg|
\end{equation*}
\begin{equation*}
\leq \int_{t_{p}}^{v}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{p}}^{t_{p+1}}\int_{t_{q-1}}^{t_{q}}\sup_{\xi\in[(t-s),(x-y)]}(2-2H)\xi^{2H-3}|t-s -(x-y)|dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\int_{t_{p}}^{v}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{p}}^{t_{p+1}}\int_{t_{q-1}}^{t_{q}}(2-2H)(t_{p}-t_{q})^{2H-3}|t-x +y-s|dxdy\Big)dsdt.
\end{equation*}
Since $|u-v|> \frac{2}{m}$, we have $t_{p}-t_{q}\geq m^{-1}$, and hence
\begin{equation*}
\leq\int_{t_{p}}^{v}\int_{u}^{t_{q}}m^{2}\Big(\int_{t_{p}}^{t_{p+1}}\int_{t_{q-1}}^{t_{q}}(4-4H)m^{-2H+3}m^{-1}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
\leq(4-4H)m^{-2H}.
\end{equation*}
Finally consider the second case, \textit{i.e.} $|u-v|\leq \frac{2}{m}$.
\begin{equation*}
\int_{t_{p}}^{v}\int_{u}^{t_{q}}\Big(|t-s|^{2H-2}-m^{2}\int_{t_{p}}^{t_{p+1}}\int_{t_{q-1}}^{t_{q}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
\begin{equation*}
=\dfrac{(v-u)^{2H}-(t_{p}-u)^{2H}-(v-t_{q})^{2H}+(t_{p}-t_{q})^{2H}}{2H(2H-1)}
\end{equation*}
\begin{equation*}
-m^{2}(v-t_{p})(t_{q}-u)\dfrac{(t_{p+1}-t_{q-1})^{2H}-(t_{p}-t_{q-1})^{2H}-(t_{p+1}-t_{q})^{2H}+(t_{p}-t_{q})^{2H}}{2H(2H-1)}.
\end{equation*}
Now, since $|u-v|\leq \frac{2}{m}$, we have $t_{p}-t_{q}\leq \frac{1}{m}$ and so $t_{p+1}-t_{q-1}\leq \frac{3}{m}$. Therefore, we have
\begin{equation*}
\leq \left(\dfrac{3^{2H}+10(2^{2H})+2}{2H(2H-1)}\right)m^{-2H}.
\end{equation*}
Moreover, since
\begin{equation*}
(4-4H)<\dfrac{3^{2H}+10(2^{2H})+2}{2H(2H-1)},
\end{equation*}
we can use the right-hand side of the above inequality as the coefficient of the rate of convergence for the integral $\int_{t_{p}}^{v}\int_{u}^{t_{q}}$, independently of the values of $u$ and $v$.
\end{proof}
Therefore, we have the following proposition.
\begin{pro}\label{propositionintegrals}
The difference
\begin{equation*}
\Bigg|\int_{u<s<t<v}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\normalfont\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt\Bigg|
\end{equation*}
is bounded by
\begin{equation*}
Am^{-2H},
\end{equation*}
where $A$ is given by
\begin{equation*}
A=2\left(\dfrac{1}{H(2H-1)}+\dfrac{2^{2H}+2}{H(2H-1)}+(4-4H)\sum_{i=1}^{\infty}i^{2H-3}\right)+\dfrac{3^{2H}+10(2^{2H})+2}{2H(2H-1)}.
\end{equation*}
\end{pro}
\begin{proof}
This follows immediately from the previous propositions, from the fact that the difference is zero for the double integral $\int_{t_{q}}^{t_{p}}\int_{t_{q}}^{t}$, as discussed before, and from equation $(\ref{integrals})$. Indeed, the difference for the double integral $\int_{t_{q}}^{t_{p}}\int_{t_{q}}^{t}$ is given by
\begin{equation*}
\int_{t_{q}}^{t_{p}}\int_{t_{q}}^{t}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\normalfont\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt.
\end{equation*}
Notice that this difference can be expressed as the sum of two classes of integrals. The first class contains the off-diagonal terms, which can be expressed as
\begin{equation*}
\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}\Big(|t-s|^{2H-2}-m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt,
\end{equation*}
where $t_{i},t_{j}\in\{t_{q},t_{q+1},\ldots,t_{p-1},t_{p}\}$ with $t_{i}\neq t_{j}$. It is immediate to see that this difference is zero.
\\ The second class consists of the diagonal terms,
\begin{equation*}
\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{t}\Big(|t-s|^{2H-2}-m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{t_{i+1}}|x-y|^{2H-2}dxdy\Big)dsdt,
\end{equation*}
where $t_{i}\in\{t_{q},t_{q+1},...,t_{p-1}\}$. This difference is equal to
\begin{equation*}
=\dfrac{(t_{i+1}-t_{i})^{2H}}{2H(2H-1)}-\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{t}m^{2}\dfrac{(t_{i+1}-t_{i})^{2H}}{H(2H-1)}dsdt
\end{equation*}
\begin{equation*}
=\dfrac{m^{-2H}}{2H(2H-1)}-\dfrac{m^{2-2H}}{H(2H-1)}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{t}dsdt
\end{equation*}
\begin{equation*}
=\dfrac{m^{-2H}}{2H(2H-1)}-\dfrac{m^{2-2H}}{H(2H-1)}\dfrac{m^{-2}}{2}=0.
\end{equation*}
This result can also be seen directly from the symmetry of the function $|t-s|^{2H-2}$ with respect to the diagonal.
\end{proof}
Now, recall the second case (\textit{i.e.} equation $(\ref{second})$), which we rewrite here:
\begin{equation*}
\int_{u_{2}}^{v_{2}}\int_{u_{1}}^{v_{1}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt
\end{equation*}
with $u_{1}\neq u_{2}\neq v_{1}\neq v_{2}\neq t\neq s$.\\
In this case we no longer have a triangle but a rectangle. First, observe that these rectangles never touch the diagonal, since $v_{2}> u_{2}> v_{1}> u_{1}$. The value of the double integral over such a rectangle can be obtained as a difference of double integrals over properly chosen triangles. That is,
\begin{equation*}
\int_{u_{2}}^{v_{2}}\int_{u_{1}}^{v_{1}}=\int_{u_{1}}^{v_{2}}\int_{u_{1}}^{t}-\int_{u_{1}}^{u_{2}}\int_{u_{1}}^{t}-\int_{v_{1}}^{v_{2}}\int_{v_{1}}^{t}+\int_{v_{1}}^{u_{2}}\int_{v_{1}}^{t}.
\end{equation*}
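This triangle decomposition of the rectangle holds for any integrable integrand; a numerical sketch (with a hypothetical test function and endpoints of our choosing):

```python
def triangle(f, a, b, N=200):
    """Midpoint rule for the triangular integral int_a^b int_a^t f(t, s) ds dt."""
    h = (b - a) / N
    total = 0.0
    for i in range(N):
        t = a + (i + 0.5) * h
        Ms = max(1, int((t - a) / h))  # inner resolution over (a, t)
        hs = (t - a) / Ms
        total += h * hs * sum(f(t, a + (j + 0.5) * hs) for j in range(Ms))
    return total

def rectangle(f, u1, v1, u2, v2, N=200):
    """Midpoint rule for int_{u2}^{v2} int_{u1}^{v1} f(t, s) ds dt."""
    ht, hs = (v2 - u2) / N, (v1 - u1) / N
    return ht * hs * sum(f(u2 + (i + 0.5) * ht, u1 + (j + 0.5) * hs)
                         for i in range(N) for j in range(N))
```

With $u_{1}<v_{1}<u_{2}<v_{2}$ the rectangle integral equals $T(u_{1},v_{2})-T(u_{1},u_{2})-T(v_{1},v_{2})+T(v_{1},u_{2})$, where $T(a,b)$ denotes the triangular integral.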
Therefore, we have the following proposition.
\begin{pro}\label{newprop}
The difference
\begin{equation*}
\Bigg|\int_{u_{2}}^{v_{2}}\int_{u_{1}}^{v_{1}}\Big(|t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dsdt\Bigg|
\end{equation*}
is bounded by
\begin{equation*}
4Am^{-2H},
\end{equation*}
where $A$ is given in Proposition \ref{propositionintegrals}.
\end{pro}
\begin{proof}
Let
\begin{equation*}
f(t,s):= |t-s|^{2H-2}-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t,s)m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy.
\end{equation*}
Then, from the decomposition above,
\begin{equation*}
\int_{u_{2}}^{v_{2}}\int_{u_{1}}^{v_{1}}f(t,s)dsdt=\int_{u_{1}}^{v_{2}}\int_{u_{1}}^{t}f(t,s)dsdt-\int_{u_{1}}^{u_{2}}\int_{u_{1}}^{t}f(t,s)dsdt-\int_{v_{1}}^{v_{2}}\int_{v_{1}}^{t}f(t,s)dsdt+\int_{v_{1}}^{u_{2}}\int_{v_{1}}^{t}f(t,s)dsdt,
\end{equation*}
and each of the four triangle integrals on the right-hand side is bounded in absolute value by $Am^{-2H}$ by Proposition \ref{propositionintegrals}. The triangle inequality then gives
\begin{equation*}
\Bigg|\int_{u_{2}}^{v_{2}}\int_{u_{1}}^{v_{1}}f(t,s)dsdt\Bigg|\leq 4Am^{-2H}.
\end{equation*}
\end{proof}
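The inclusion–exclusion step in the proof can be illustrated with a small numerical experiment. In the sketch below (the corner points and $H$ are arbitrary choices satisfying $u_{1}<v_{1}<u_{2}<v_{2}$), the rectangle integral is computed by a midpoint Riemann sum, which is accurate because the rectangle stays away from the diagonal, and compared with the signed combination of triangle integrals $T(a,b)=(b-a)^{2H}/(2H(2H-1))$:

```python
# Numerical illustration of the rectangle-into-triangles decomposition:
#   int_{u2}^{v2} int_{u1}^{v1} |t-s|^{2H-2} ds dt
#     = T(u1,v2) - T(u1,u2) - T(v1,v2) + T(v1,u2),
# with T(a,b) = int_{a<s<t<b} (t-s)^{2H-2} ds dt = (b-a)^{2H}/(2H(2H-1)).
H = 0.75
u1, v1, u2, v2 = 0.1, 0.3, 0.5, 0.9   # arbitrary, u1 < v1 < u2 < v2

def T(a, b):
    """Closed form of the triangle integral of the kernel over {a<s<t<b}."""
    return (b - a) ** (2 * H) / (2 * H * (2 * H - 1))

combo = T(u1, v2) - T(u1, u2) - T(v1, v2) + T(v1, u2)

# Midpoint Riemann sum over the rectangle; since v1 < u2 the integrand is
# smooth on the whole domain, so the sum is accurate.
n = 400
hs, ht = (v1 - u1) / n, (v2 - u2) / n
rect = 0.0
for i in range(n):
    s = u1 + (i + 0.5) * hs
    for j in range(n):
        t = u2 + (j + 0.5) * ht
        rect += (t - s) ** (2 * H - 2) * hs * ht

assert abs(rect - combo) < 1e-6
```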
\noindent Now we are ready to prove Theorem \ref{pr1}.
\begin{proof}[Proof (Theorem \ref{pr1}).]
Using the explicit expressions for the two expectations, we have
\begin{equation*}
\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{m,I}\right)\Bigg|=
\end{equation*}
\begin{equation*}
=\Bigg|\dfrac{H^{k}(2H-1)^{k}}{k!2^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\int_{\Delta^{2k}[0,1]}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}
\end{equation*}
\begin{equation*}
-\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t_{\sigma(2l)},t_{\sigma(2l-1)})m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdydt_{1}\cdot\cdot\cdot dt_{2k}\Bigg|.
\end{equation*}
Now, by using the relation
\begin{equation*}
\prod_{l=1}^{k} a_{l}-\prod_{l=1}^{k} b_{l}=(a_{1}-b_{1})\prod_{l=2}^{k} b_{l}+a_{1}(a_{2}-b_{2})\prod_{l=3}^{k} b_{l}+\cdots+\prod_{l=1}^{k-1} a_{l}(a_{k}-b_{k}),
\end{equation*}
where $a_{i},b_{i}\in\mathbb{R}$ for $i=1,...,k$, we have
\begin{equation*}
=\Bigg|\dfrac{H^{k}(2H-1)^{k}}{k!2^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\int_{\Delta^{2k}[0,1]}\delta_{i_{\sigma(2)},i_{\sigma(1)}}\Big(|t_{\sigma(2)}-t_{\sigma(1)}|^{2H-2}
\end{equation*}
\begin{equation*}
-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t_{\sigma(2)},t_{\sigma(1)})m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)
\end{equation*}
\begin{equation*}
\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t_{\sigma(2l)},t_{\sigma(2l-1)})m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy
\end{equation*}
\begin{equation*}
+...+\prod_{l=1}^{k-1}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\delta_{i_{\sigma(2k)},i_{\sigma(2k-1)}}\Big(|t_{\sigma(2k)}-t_{\sigma(2k-1)}|^{2H-2}
\end{equation*}
\begin{equation}\label{last-probably}
-\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t_{\sigma(2k)},t_{\sigma(2k-1)})m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy\Big)dt_{1}\cdot\cdot\cdot dt_{2k}\Bigg|.
\end{equation}
By Fubini's theorem, given an integrable function $g:\mathbb{R}^{2k}\rightarrow\mathbb{R}$ and denoting $\textbf{1}_{\{s<t<r\}}:=\textbf{1}_{[s,r]}(t)$ we have that
\begin{equation*}
\int_{\Delta^{2k}[0,1]}g dt_{1}\cdots dt_{2k}= \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}g \textbf{1}_{\{0<t_{1}<t_{2}\}}\cdots \textbf{1}_{\{t_{2k-1}<t_{2k}<1\}}dt_{1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
\stackrel{Fubini}{=} \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}g \textbf{1}_{\{0<t_{1}<t_{2}\}}\cdots \textbf{1}_{\{t_{2k-1}<t_{2k}<1\}}
\end{equation*}
\begin{equation*}
dt_{\sigma(2l)}dt_{\sigma(2l-1)}dt_{1}\cdots dt_{\sigma(2l)-1}dt_{\sigma(2l)+1}\cdots dt_{\sigma(2l-1)-1}dt_{\sigma(2l-1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}g \textbf{1}_{\{t_{\sigma(2l)-1}<t_{\sigma(2l)}<t_{\sigma(2l)+1}\}} \textbf{1}_{\{t_{\sigma(2l-1)-1}<t_{\sigma(2l-1)}<t_{\sigma(2l-1)+1}\}}dt_{\sigma(2l)}dt_{\sigma(2l-1)}
\end{equation*}
\begin{equation*}
\textbf{1}_{\{0<t_{1}<t_{2}\}}\cdots \textbf{1}_{\{t_{\sigma(2l)-2}<t_{\sigma(2l)-1}<t_{\sigma(2l)+1}\}}\textbf{1}_{\{t_{\sigma(2l)-1}<t_{\sigma(2l)+1}<t_{\sigma(2l)+2}\}}\cdots \textbf{1}_{\{t_{\sigma(2l-1)-2}<t_{\sigma(2l-1)-1}<t_{\sigma(2l-1)+1}\}}
\end{equation*}
\begin{equation*}
\textbf{1}_{\{t_{\sigma(2l-1)-1}<t_{\sigma(2l-1)+1}<t_{\sigma(2l-1)+2}\}}\cdots \textbf{1}_{\{t_{2k-1}<t_{2k}<1\}}dt_{1}\cdots dt_{\sigma(2l)-1}dt_{\sigma(2l)+1}\cdots dt_{\sigma(2l-1)-1}dt_{\sigma(2l-1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
=\int_{0<t_{1}<...<t_{\sigma(2l)-1}<t_{\sigma(2l)+1}<...<t_{\sigma(2l-1)-1}<t_{\sigma(2l-1)+1}<...<t_{2k}<1}\int_{t_{\sigma(2l-1)-1}}^{t_{\sigma(2l-1)+1}}\int_{t_{\sigma(2l)-1}}^{t_{\sigma(2l)+1}}g
\end{equation*}
\begin{equation*}
dt_{\sigma(2l)}dt_{\sigma(2l-1)}dt_{1}\cdots dt_{\sigma(2l)-1}dt_{\sigma(2l)+1}\cdots dt_{\sigma(2l-1)-1}dt_{\sigma(2l-1)+1}\cdots dt_{2k},
\end{equation*}
where we assumed without loss of generality that $t_{\sigma(2l)}<t_{\sigma(2l-1)}$. Notice that, for example, $\textbf{1}_{\{0<t_{1}<t_{2}\}}$ is a function of $t_{1}$ and that $\textbf{1}_{\{0<t_{1}<t_{2}\}}\textbf{1}_{\{t_{1}<t_{2}<t_{3}\}}=\textbf{1}_{\{0<t_{1}<t_{3}\}}\textbf{1}_{\{t_{1}<t_{2}<t_{3}\}}=\textbf{1}_{\{0<t_{1}<t_{2}\}}\textbf{1}_{\{0<t_{2}<t_{3}\}}$. In our case the function $g$ is the integrand of $(\ref{last-probably})$, which is integrable since $H>\frac{1}{2}$. Using this fact together with Propositions \ref{propositionintegrals} and \ref{newprop}, we can bound $(\ref{last-probably})$ as follows:
\begin{equation*}
\leq \dfrac{4Am^{-2H}H^{k}(2H-1)^{k}}{k!2^{k}}
\end{equation*}
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\bigg[\int_{0<t_{1}<...<t_{\sigma(2)-1}<t_{\sigma(2)+1}<...<t_{\sigma(1)-1}<t_{\sigma(1)+1}<...t_{2k}<1}
\end{equation*}
\begin{equation*}
\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t_{\sigma(2l)},t_{\sigma(2l-1)})m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy
\end{equation*}
\begin{equation*}
dt_{1}\cdots dt_{\sigma(2)-1}dt_{\sigma(2)+1}\cdots dt_{\sigma(1)-1}dt_{\sigma(1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
+...+\int_{0<t_{1}<...<t_{\sigma(2k)-1}<t_{\sigma(2k)+1}<...<t_{\sigma(2k-1)-1}<t_{\sigma(2k-1)+1}<...t_{2k}<1}
\end{equation*}
\begin{equation}\label{k-addends}
\prod_{l=1}^{k-1}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma(2k)-1}dt_{\sigma(2k)+1}\cdots dt_{\sigma(2k-1)-1}dt_{\sigma(2k-1)+1}\cdots dt_{2k}\bigg].
\end{equation}
Observe that the absolute value is no longer needed, since the quantity is nonnegative. Now, consider the individual addends inside the square brackets in the formula above. Each of them is bounded by
\begin{equation*}
\int_{0}^{1}...\int_{0}^{1}\prod_{l=2}^{k}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma(2)-1}dt_{\sigma(2)+1}\cdots dt_{\sigma(1)-1}dt_{\sigma(1)+1}\cdots dt_{2k}.
\end{equation*}
The fact that the product runs over $l=2,\dots,k$ (rather than some other set of $k-1$ indices) is irrelevant: once the integral is taken over $[0,1]^{2k-2}$ instead of $\Delta^{2k-2}[0,1]$, it does not matter which permutation we consider, and we may rearrange the integrals freely by Tonelli's theorem since the integrands are nonnegative. Moreover, the same bound holds with the exact kernel in place of its piecewise approximation, because when the limits of integration are grid points the piecewise approximation and the exact value coincide. The resulting integral factorizes and reduces to
\begin{equation*}
=\dfrac{1}{H^{k-1}(2H-1)^{k-1}}.
\end{equation*}
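The constant appearing here is easy to confirm numerically; the sketch below (a sanity check only, with an arbitrary $H\in(1/2,1)$) verifies the value of a single factor $\int_{0}^{1}\int_{0}^{1}|t-s|^{2H-2}\,ds\,dt$:

```python
# Sanity check of the constant used above:
#   int_0^1 int_0^1 |t-s|^{2H-2} ds dt = 1/(H(2H-1))  for H > 1/2.
H = 0.7
# By symmetry the integral is 2 * int_0^1 int_0^t (t-s)^{2H-2} ds dt, and the
# inner integral is t^{2H-1}/(2H-1) by the power rule; the outer integrand is
# then smooth enough for a midpoint sum.
n = 100_000
numeric = sum(2 * ((i + 0.5) / n) ** (2 * H - 1) / (2 * H - 1) * (1 / n)
              for i in range(n))
assert abs(numeric - 1 / (H * (2 * H - 1))) < 1e-6
```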
Therefore, we can bound $(\ref{k-addends})$ by
\begin{equation}\label{rate}
\dfrac{4Am^{-2H}H^{k}(2H-1)^{k}(2k)!}{k!2^{k}}\dfrac{k}{H^{k-1}(2H-1)^{k-1}},
\end{equation}
where we have used the fact that the sum $\sum_{\sigma\in\mathcal{G}_{2k}}$ over all permutations of the set $\{1,...,2k\}$ contains $(2k)!$ terms. Therefore, we can rewrite the bound above as
\begin{equation*}
=Cm^{-2H},\quad\text{where}\quad C:=\dfrac{4AH(2H-1)(2k)!}{(k-1)!2^{k}}<\infty.
\end{equation*}
This concludes the proof of the rate of convergence.
\end{proof}
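The telescoping product identity used at the beginning of the proof can be sanity-checked on arbitrary real numbers, e.g.:

```python
# Quick check (illustration only) of the telescoping identity
#   prod(a) - prod(b) = sum_j a_1...a_{j-1} (a_j - b_j) b_{j+1}...b_k
# on arbitrary real inputs.
from math import prod

a = [1.3, -0.7, 2.1, 0.4, 1.9]   # arbitrary reals
b = [0.5, 1.1, -0.2, 1.7, 0.8]
k = len(a)

lhs = prod(a) - prod(b)
# prod of an empty slice is 1, which handles the boundary terms j=0 and j=k-1.
rhs = sum(prod(a[:j]) * (a[j] - b[j]) * prod(b[j + 1:]) for j in range(k))
assert abs(lhs - rhs) < 1e-12
```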
The sharpness of the bound is a delicate matter. It has been proved in \cite{NiXu} that for the Brownian motion the sharp rate of convergence is $1$, which leads us to believe that our rate is sharp as well. To be completely sure, one would need to prove that
\begin{equation*}
\limsup\limits_{m\rightarrow\infty}m^{2H}\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{m,I}\right)\Bigg|>0.
\end{equation*}
However, the piecewise approximation makes this idea very difficult to carry out. The reason is that, when we focus on the double integral $\int_{u<s<t<v}$, the piecewise approximation and the exact value coincide when the times $u$ and $v$ are grid points $t_{q}$ and $t_{p}$ respectively, but not otherwise. Hence, as $m\rightarrow\infty$, the difference between the exact value and the piecewise approximation does not vanish linearly, which makes the sharpness question appear intractable by this route.
A possible solution is to consider the iterated integral as a whole, without reducing to the generic double integral, but this seems a difficult way of attacking the problem. Indeed, we pursued this approach at the beginning of this work and obtained a bound with rate of convergence $2H$ but with a non-integrable coefficient. For these reasons we are confident that the rate of convergence $2H$ is sharp.
A final remark concerns the fact that $m^{-2H}<m^{-1}$, which means that the difference vanishes faster for the fBm than for the Bm. This is expected, since the fBm with $H>\frac{1}{2}$ has smoother sample paths than the Bm. This higher smoothness is related to the positive correlation of the increments of the fBm with $H>\frac{1}{2}$. Indeed, the bound $m^{-2H}$ decreases (hence the convergence becomes faster) as the positive correlation, governed by $H$, increases.
\section{Proof of Theorem \ref{pr2}}
As can be seen from equation $(\ref{rate})$, the constant in front of the rate of convergence diverges as the degree $k$ of the truncated signature goes to infinity: $C\rightarrow\infty$ as $k\rightarrow\infty$. In this section we prove Theorem \ref{pr2}, which improves this estimate and shows that the constant in fact tends to zero as $k\rightarrow\infty$. We first prove the following proposition.
\begin{pro}\label{newprop-permutation}
The following equality holds
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\int_{0<t_{1}<...<t_{\sigma(2j-1)-1}<t_{\sigma(2j-1)+1}<...<t_{\sigma(2j)-1}<t_{\sigma(2j)+1}<...<t_{2k}<1}
\end{equation*}
\begin{equation*}
\prod_{l=1,l\neq j}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma(2j)-1}dt_{\sigma(2j)+1}\cdots dt_{\sigma(2j-1)-1}dt_{\sigma(2j-1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation}\label{new-permutation}
=2k(2k-1)\sum_{\tau\in\mathcal{G}_{2k-2}}\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2},
\end{equation}
where $\mathcal{G}_{2k-2}$ is the set of all permutations of the set $\{1,...,2k-2\}$ and $j=1,...,k$.
\end{pro}
\begin{proof}
For the moment, fix $\sigma\in\mathcal{G}_{2k}$ and consider the case $j=1$. By a simple change of variables we have
\begin{equation*}
\int_{0<t_{1}<...<t_{\sigma(1)-1}<t_{\sigma(1)+1}<...<t_{\sigma(2)-1}<t_{\sigma(2)+1}<...<t_{2k}<1}
\end{equation*}
\begin{equation*}
\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma(2)-1}dt_{\sigma(2)+1}\cdots dt_{\sigma(1)-1}dt_{\sigma(1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
= \int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2},
\end{equation*}
where $\tau\in\mathcal{G}_{2k-2}$. Now, given a permutation $\tau\in\mathcal{G}_{2k-2}$, there are several permutations $\sigma_{1},\sigma_{2},\ldots\in\mathcal{G}_{2k}$ that satisfy the same equality as above. In particular, for a given $\tau$ there are precisely $2k(2k-1)$ such $\sigma$s. That is,
\begin{equation*}
\sum_{j=1}^{2k(2k-1)}\int_{0<t_{1}<...<t_{\sigma_{j}(1)-1}<t_{\sigma_{j}(1)+1}<...<t_{\sigma_{j}(2)-1}<t_{\sigma_{j}(2)+1}<...<1}
\end{equation*}
\begin{equation*}
\prod_{l=2}^{k}\delta_{i_{\sigma_{j}(2l)},i_{\sigma_{j}(2l-1)}}|t_{\sigma_{j}(2l)}-t_{\sigma_{j}(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma_{j}(2)-1}dt_{\sigma_{j}(2)+1}\cdots dt_{\sigma_{j}(1)-1}dt_{\sigma_{j}(1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
=2k(2k-1)\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}.
\end{equation*}
The factor $2k(2k-1)$ comes from the possible values that the pair $(\sigma(1),\sigma(2))$ may assume in the set $\{1,...,2k\}$, leaving the relative order of the remaining $2k-2$ values $\sigma(3),...,\sigma(2k)$ unchanged. That is, $2k(2k-1)$ is the number of $2$-permutations of $2k$ elements.
The next step is to observe that the sum over all permutations $\sigma\in\mathcal{G}_{2k}$ can be reformulated as $2k(2k-1)$ times the sum over the permutations $\tau\in\mathcal{G}_{2k-2}$. In particular, we have the following equality:
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\int_{0<t_{1}<...<t_{\sigma(1)-1}<t_{\sigma(1)+1}<...<t_{\sigma(2)-1}<t_{\sigma(2)+1}<...<1}
\end{equation*}
\begin{equation*}
\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation*}
\begin{equation*}
=2k(2k-1)\sum_{\tau\in\mathcal{G}_{2k-2}}\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2},
\end{equation*}
which is the desired equality $(\ref{new-permutation})$. The reason why this holds is that the permutations in $\mathcal{G}_{2k}$ (the $\sigma$s) can be decomposed into the $2$-permutations of $2k$ (there are $2k(2k-1)$ of them for each permutation $\tau$, and they do not modify the value of the integral) times the permutations in $\mathcal{G}_{2k-2}$ (the $\tau$s). Indeed, $\mathcal{G}_{2k}$ consists of $(2k)!$ permutations (all distinct from each other), and the decomposition produces $(2k-2)!\cdot2k(2k-1)=(2k)!$ permutations (also all distinct from each other).
The same argument applies to every $j=1,...,k$. For example, taking $j=k$, it is easy to see that we have
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\int_{0<t_{1}<...<t_{\sigma(2k)-1}<t_{\sigma(2k)+1}<...<t_{\sigma(2k-1)-1}<t_{\sigma(2k-1)+1}<...<t_{2k}<1}
\end{equation*}
\begin{equation*}
\prod_{l=1}^{k-1}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma(2k)-1}dt_{\sigma(2k)+1}\cdots dt_{\sigma(2k-1)-1}dt_{\sigma(2k-1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
=2k(2k-1)\sum_{\tau\in\mathcal{G}_{2k-2}}\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}.
\end{equation*}
Since the proof is based on combinatorial arguments that can be difficult to grasp, we provide a longer explanation of the result in the Appendix.
\end{proof}
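The counting facts underlying the proposition can be checked by brute force for small $k$; the sketch below verifies that the number of $2$-permutations of $\{1,\dots,2k\}$ is $2k(2k-1)$ and that $(2k-2)!\cdot 2k(2k-1)=(2k)!$:

```python
# Combinatorial sanity check: the number of 2-permutations of {1,...,2k} is
# 2k(2k-1), and (2k-2)! * 2k(2k-1) = (2k)!, as used in the decomposition of
# the permutations in G_{2k}.
from itertools import permutations
from math import factorial

for k in range(1, 6):
    two_perms = sum(1 for _ in permutations(range(2 * k), 2))
    assert two_perms == 2 * k * (2 * k - 1)
    assert factorial(2 * k - 2) * 2 * k * (2 * k - 1) == factorial(2 * k)
```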
We are now ready to prove Theorem \ref{pr2}.
\begin{proof}[Proof (Theorem \ref{pr2}).]
First, multiply our main object of study by $m^{2H}$, namely
\begin{equation*}
\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{m,I}\right)\Bigg|m^{2H}.
\end{equation*}
We can use our estimates obtained in ($\ref{k-addends}$) to get:
\begin{equation*}
\Bigg|\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)-\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{m,I}\right)\Bigg|m^{2H}\leq \dfrac{4AH^{k}(2H-1)^{k}}{k!2^{k}}
\end{equation*}
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\bigg[\int_{0<t_{1}<...<t_{\sigma(2)-1}<t_{\sigma(2)+1}<...<t_{\sigma(1)-1}<t_{\sigma(1)+1}<...t_{2k}<1}
\end{equation*}
\begin{equation*}
\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}\sum_{i,j=1}^{m}\textbf{1}_{[t_{i},t_{i+1}]\times[t_{j},t_{j+1}]}(t_{\sigma(2l)},t_{\sigma(2l-1)})m^{2}\int_{t_{i}}^{t_{i+1}}\int_{t_{j}}^{t_{j+1}}|x-y|^{2H-2}dxdy
\end{equation*}
\begin{equation*}
dt_{1}\cdots dt_{\sigma(2)-1}dt_{\sigma(2)+1}\cdots dt_{\sigma(1)-1}dt_{\sigma(1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
+...+\int_{0<t_{1}<...<t_{\sigma(2k)-1}<t_{\sigma(2k)+1}<...<t_{\sigma(2k-1)-1}<t_{\sigma(2k-1)+1}<...t_{2k}<1}
\end{equation*}
\begin{equation*}
\prod_{l=1}^{k-1}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma(2k)-1}dt_{\sigma(2k)+1}\cdots dt_{\sigma(2k-1)-1}dt_{\sigma(2k-1)+1}\cdots dt_{2k}\bigg].
\end{equation*}
Now, taking the limit as $m\rightarrow\infty$, by the dominated convergence theorem we obtain
\begin{equation*}
\dfrac{4AH^{k}(2H-1)^{k}}{k!2^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\bigg[\int_{0<t_{1}<...<t_{\sigma(2)-1}<t_{\sigma(2)+1}<...<t_{\sigma(1)-1}<t_{\sigma(1)+1}<...t_{2k}<1}
\end{equation*}
\begin{equation*}
\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma(2)-1}dt_{\sigma(2)+1}\cdots dt_{\sigma(1)-1}dt_{\sigma(1)+1}\cdots dt_{2k}
\end{equation*}
\begin{equation*}
+...+\int_{0<t_{1}<...<t_{\sigma(2k)-1}<t_{\sigma(2k)+1}<...<t_{\sigma(2k-1)-1}<t_{\sigma(2k-1)+1}<...t_{2k}<1}
\end{equation*}
\begin{equation*}
\prod_{l=1}^{k-1}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdots dt_{\sigma(2k)-1}dt_{\sigma(2k)+1}\cdots dt_{\sigma(2k-1)-1}dt_{\sigma(2k-1)+1}\cdots dt_{2k}\bigg].
\end{equation*}
From Proposition \ref{newprop-permutation} we have
\begin{equation}\label{problem2}
=\dfrac{4AH^{k}(2H-1)^{k}}{k!2^{k}}2k(2k-1)k\sum_{\tau\in\mathcal{G}_{2k-2}}\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}.
\end{equation}
Notice that the function $f(s_{1},...,s_{2k-2})$ defined as
\begin{equation*}
f(s_{1},...,s_{2k-2}):=\sum_{\tau\in\mathcal{G}_{2k-2}}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}
\end{equation*}
is symmetric in its arguments, that is, invariant under any permutation of $s_{1},\dots,s_{2k-2}$. Hence, we have
\begin{equation*}
\int_{\Delta^{2k-2}[0,1]}f(s_{1},...,s_{2k-2})ds_{1}\cdots ds_{2k-2}=\dfrac{1}{(2k-2)!}\int_{[0,1]^{2k-2}}f(s_{1},...,s_{2k-2})ds_{1}\cdots ds_{2k-2}.
\end{equation*}
Therefore, we have that the formula $(\ref{problem2})$ is equal to
\begin{equation*}
\dfrac{4AH^{k}(2H-1)^{k}}{k!2^{k}}\dfrac{2k(2k-1)k}{(2k-2)!}\sum_{\tau\in\mathcal{G}_{2k-2}}\int_{[0,1]^{2k-2}}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}
\end{equation*}
\begin{equation}\label{final}
\leq \dfrac{4AH^{k}(2H-1)^{k}}{k!2^{k}}\dfrac{2k(2k-1)k}{(2k-2)!}\dfrac{(2k-2)!}{H^{k-1}(2H-1)^{k-1}}=\dfrac{8AH(2H-1)}{(k-1)!2^{k}}k(2k-1).
\end{equation}
Recall that $A$ is given by
\begin{equation*}
A=2\left(\dfrac{1}{H(2H-1)}+\dfrac{2^{2H}+2}{H(2H-1)}+(4-4H)\sum_{i=1}^{\infty}i^{2H-3}\right)+\dfrac{3^{2H}+10(2^{2H})+2}{2H(2H-1)}
\end{equation*}
and define $\tilde{A}$ by
\begin{equation*}
\tilde{A}:=8AH(2H-1)=16\left(3+2^{2H}+H(2H-1)(4-4H)\sum_{i=1}^{\infty}i^{2H-3}\right)+4(3^{2H}+10(2^{2H})+2)
\end{equation*}
\begin{equation*}
=56(1+2^{2H})+4(3^{2H})+16H(2H-1)(4-4H)\sum_{i=1}^{\infty}i^{2H-3}.
\end{equation*}
Thus, equation $(\ref{final})$ is equal to $\dfrac{\tilde{A}k(2k-1)}{(k-1)!2^{k}}$.
\end{proof}
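The algebraic manipulation of $\tilde{A}$ above can be double-checked numerically; in the sketch below the series is truncated, but the same truncation is used on both sides, and $H$ is an arbitrary value in $(1/2,1)$:

```python
# Numerical check (series truncated) that the expansion of
# A_tilde = 8*A*H*(2H-1) matches the original expression for A above.
H = 0.8
S = sum(i ** (2 * H - 3) for i in range(1, 100_000))  # truncated series

A = (2 * (1 / (H * (2 * H - 1))
          + (2 ** (2 * H) + 2) / (H * (2 * H - 1))
          + (4 - 4 * H) * S)
     + (3 ** (2 * H) + 10 * 2 ** (2 * H) + 2) / (2 * H * (2 * H - 1)))

A_tilde_def = 8 * A * H * (2 * H - 1)
A_tilde_exp = (56 * (1 + 2 ** (2 * H)) + 4 * 3 ** (2 * H)
               + 16 * H * (2 * H - 1) * (4 - 4 * H) * S)
assert abs(A_tilde_def - A_tilde_exp) < 1e-9
```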
Notice that
\begin{equation*}
\dfrac{\tilde{A}k(2k-1)}{(k-1)!2^{k}}\rightarrow 0\qquad \text{as} \qquad k\rightarrow\infty.
\end{equation*}
Thus, the constant in front of the rate of convergence tends to zero as the number of iterated integrals increases (\textit{i.e.} as the order of the signature increases), and it does so superexponentially fast.
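This decay is easy to observe numerically; in the sketch below $\tilde{A}$ is replaced by an arbitrary positive placeholder, since the decay comes entirely from the factor $k(2k-1)/((k-1)!\,2^{k})$:

```python
# The constant in front of m^{-2H} decays superexponentially in the signature
# level k: the decay comes entirely from k(2k-1)/((k-1)! 2^k), so A_tilde is
# replaced here by an arbitrary positive placeholder.
from math import factorial

A_tilde = 1.0  # placeholder constant

def coef(k):
    return A_tilde * k * (2 * k - 1) / (factorial(k - 1) * 2 ** k)

vals = [coef(k) for k in range(1, 31)]
assert all(vals[i] > vals[i + 1] for i in range(1, len(vals) - 1))
assert vals[-1] < 1e-20
```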
\section{Proof of Theorem \ref{pr3}}
In this section we shift the focus of the previous two sections: our object is now the expected signature of the fractional Brownian motion itself, for which we prove a simple but sharp estimate.
\\ The proof of Theorem \ref{pr3} is very short, and the arguments are similar to the ones used at the end of the proof of Theorem \ref{pr2}.
\begin{proof}
Consider the $2k$-th term of the expected signature of the fractional Brownian motion for $H>\frac{1}{2}$. We have
\begin{equation*}
\mathbb{E}\left(\int_{\Delta^{2k}[0,1]}dB^{I}\right)=\dfrac{H^{k}(2H-1)^{k}}{k!2^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\int_{\Delta^{2k}[0,1]}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdot\cdot\cdot dt_{2k}
\end{equation*}
\begin{equation*}
=\dfrac{H^{k}(2H-1)^{k}}{k!2^{k}}\frac{1}{(2k)!}\sum_{\sigma\in\mathcal{G}_{2k}}\int_{[0,1]^{2k}}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}dt_{1}\cdot\cdot\cdot dt_{2k}
\end{equation*}
\begin{equation*}
=\dfrac{H^{k}(2H-1)^{k}}{k!2^{k}}\frac{1}{(2k)!}\sum_{\sigma\in\mathcal{G}_{2k}}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}\left(\int_{0}^{1}\int_{0}^{1}|t-s|^{2H-2}dsdt\right)^{k}
\end{equation*}
\begin{equation*}
=\dfrac{H^{k}(2H-1)^{k}}{k!2^{k}}\frac{1}{(2k)!}\frac{1}{H^{k}(2H-1)^{k}}\sum_{\sigma\in\mathcal{G}_{2k}}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}\leq\dfrac{1}{k!2^{k}}
\dfrac{1}{(2k)!}(2k)!=\dfrac{1}{k!2^{k}},
\end{equation*}
where we used the fact that the function
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}
\end{equation*}
is symmetric in its arguments, together with the trivial bound $\sum_{\sigma\in\mathcal{G}_{2k}}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}\leq(2k)!$.
\end{proof}
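The two facts used in the last step (symmetrization and the bound $\sum_{\sigma}\prod_{l}\delta\leq(2k)!$, with equality when all letters of $I$ coincide) can be verified by brute force on a small instance, say $2k=6$:

```python
# Brute-force check (2k = 6) of the delta-sum bound used in the proof:
# sum over sigma of prod_l delta_{i_sigma(2l), i_sigma(2l-1)} is at most
# (2k)!, with equality when all letters of the word coincide.
from itertools import permutations
from math import factorial

def delta_sum(word):
    n = len(word)
    return sum(1 for s in permutations(range(n))
               if all(word[s[2 * l]] == word[s[2 * l + 1]]
                      for l in range(n // 2)))

assert delta_sum((1, 1, 1, 1, 1, 1)) == factorial(6)   # all letters equal
assert all(delta_sum(w) <= factorial(6)
           for w in [(1, 1, 2, 2, 3, 3), (1, 1, 1, 1, 2, 2),
                     (1, 2, 1, 2, 1, 2), (1, 2, 3, 1, 2, 3)])
```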
A similar but less sharp estimate was obtained in Proposition 4.8 of \cite{NNRT}. Indeed, using the explicit formulation of that proposition reported in Proposition 3.3 of \cite{BauZha}, one sees that they obtained, for the $2k$-th term of the expected signature, the uniform bound
\begin{equation*}
\frac{2^{k}}{H^{k}(2H-1)^{k}\sqrt{(2k)!}}.
\end{equation*}
Observe that
\begin{equation*}
\frac{2^{k}}{H^{k}(2H-1)^{k}\sqrt{(2k)!}}>\frac{2^{k}}{\sqrt{(2k)!}}>\dfrac{1}{k!2^{k}}.
\end{equation*}
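This comparison can be confirmed numerically for a range of $k$ and $H$ (a sanity check only; the values below are arbitrary):

```python
# Numerical comparison (illustration only) of the two uniform bounds for the
# 2k-th term of the expected signature: our bound 1/(k! 2^k) versus the bound
# 2^k/(H^k (2H-1)^k sqrt((2k)!)) from the earlier literature.
from math import factorial, sqrt

for H in (0.55, 0.75, 0.95):
    for k in range(1, 15):
        theirs = 2 ** k / ((H * (2 * H - 1)) ** k * sqrt(factorial(2 * k)))
        ours = 1 / (factorial(k) * 2 ** k)
        assert theirs > ours
```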
The advantages of our result are that the estimate is sharp, it is independent of $H$, and the proof is very short. Moreover, we can obtain an estimate for each particular word $I$ by simply keeping the factors $\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}$ in place. Indeed, if we have a two-dimensional fBm and the word $I$ does not consist solely of the letter $1$, then the upper bound becomes much smaller than $1/(k!2^{k})$. In particular, we have the following proposition, which sharpens the estimate of Theorem \ref{pr3} according to the word $I$ considered.
\begin{pro}
Let $p,n\in\mathbb{N}$. Consider an $n$-dimensional fBm with $n\geq p$ and a word $I$ which contains $p$ different letters. Then we have
\begin{equation*}
\mathbb{E}\left(\int_{\Delta^{2k}[s,t]}dB^{I}\right)\leq \dfrac{(t-s)^{2kH}}{k!2^{k}(2k)!}\frac{k!2^{p-1}(2(k-p+1))!}{(k-p+1)!}
\end{equation*}
for $p\leq k$ and zero otherwise.
\end{pro}
\begin{proof}
Observe that if $p>k$, then there is a letter which cannot be paired with a letter of the same value; hence the expected iterated integral is zero. \\Regarding the case $p\leq k$, we proceed as follows. We fix $k$ and analyse the values obtained for different $p$. In particular, consider the case $2k=12$, so that $I=(i_{1},...,i_{12})$. Since what matters are pairs of letters, we can rewrite the word as $I=(a_{1},...,a_{6})$, where each $a_{j}$ is a pair of two letters with the same value. Now, if $p=k$, then $a_{1}\neq...\neq a_{6}$ and the number of contributing permutations becomes
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}=6!2^{6},
\end{equation*}
where the $6!$ comes from the possible permutations of the $a_{j}$ in the word $I$, and the $2^{6}$ comes from the fact that inside each $a_{j}$ the two letters can occupy two possible positions, for $j=1,...,6$. In the case $p=5$ we can write $I=(a_{1},a_{2},...,a_{6})$ with $a_{1}=a_{2}$, and we obtain
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}=\frac{6!}{2}2^{4}4!,
\end{equation*}
where $6!$ again counts the permutations of the $a_{j}$, and the factor $1/2$ accounts for the fact that the order $...,a_{1},...,a_{2},...$ is the same as $...,a_{2},...,a_{1},...$ (the relevant orderings are counted at the level of the letters, in the $4!$, as explained below). Further, the $2^{4}$ comes from the two orderings inside each of $a_{3},...,a_{6}$, and the $4!$ from the permutations of the four letters forming $(a_{1},a_{2})$. In the case $p=4$ there are two possibilities: $I=(a_{1},...,a_{6})$ with $a_{1}=a_{2}$ and $a_{3}=a_{4}$ (all others different), or with $a_{1}=a_{2}=a_{3}$ (all others different). One can check that the second case gives the higher upper bound, so we use it; we then have
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\prod_{l=1}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}=\frac{6!}{3!}2^{3}6!,
\end{equation*}
where the first $6!$ again counts the permutations of the $a_{j}$; since the order of $a_{1},a_{2},a_{3}$ is now irrelevant and each arrangement of the $6$ elements is counted $3!$ times, we divide by $3!$. In addition, the $2^{3}$ comes from the two orderings inside each of $a_{4},a_{5},a_{6}$, and the $6!$ from the permutations of the six letters forming $(a_{1},a_{2},a_{3})$. The same procedure applies to $p=3,2,1$ and to any $k\in\mathbb{N}$.
\end{proof}
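The counting rule can be verified by brute force on the smaller instance $2k=6$ (the $2k=12$ example above involves $12!$ permutations, too many to enumerate directly); the expected counts below follow the same pattern as $6!\,2^{6}$, $(6!/2)\,2^{4}\,4!$, and so on:

```python
# Brute-force check (2k = 6 analogue of the 2k = 12 example above) of the
# counting rule behind the proposition.
from itertools import permutations
from math import factorial

def delta_sum(word):
    n = len(word)
    return sum(1 for s in permutations(range(n))
               if all(word[s[2 * l]] == word[s[2 * l + 1]]
                      for l in range(n // 2)))

# p = k = 3: three distinct pairs -> 3! * 2^3
assert delta_sum((1, 1, 2, 2, 3, 3)) == factorial(3) * 2 ** 3
# p = 2: one letter appearing four times -> (3!/2) * 2 * 4!
assert delta_sum((1, 1, 1, 1, 2, 2)) == (factorial(3) // 2) * 2 * factorial(4)
# p = 1: a single repeated letter -> 6!
assert delta_sum((1, 1, 1, 1, 1, 1)) == factorial(6)
```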
\section{The cubature method for the fBm and the proofs of Theorems \ref{prNEW} and \ref{pr4}}\label{cub}
\noindent We start this section with a brief introduction to the cubature method for the fractional Brownian motion. In this section we do \textit{not} use the notation $B_{t}:=B_{t}^{H}$ since we work with the Brownian motion as well.
The cubature method is a numerical method used to obtain approximate solutions to SDEs and parabolic PDEs. The first main step is to obtain the cubature formula, which we now define for the $d$-dimensional fBm. The setting is the same as the one presented for the Bm case in \cite{LV}. The probability space we work on is $(C_{0}^{0}([0,T],\mathbb{R}^{d}),\mathcal{F},\mathbb{P})$, where $C_{0}^{0}([0,T],\mathbb{R}^{d})$ is the space of
$\mathbb{R}^{d}$-valued continuous functions defined on $[0,T]$ which start at zero (i.e. the Wiener space), $\mathcal{F}$ is its Borel $\sigma$-field and $\mathbb{P}$ is the Wiener measure. As in \cite{LV}, let $B_{t}:C_{0}^{0}([0,T],\mathbb{R}^{d})\rightarrow\mathbb{R}^{d}$ be such that $B^{i}_{t}(\omega)=\omega^{i}(t)$ for $t\in[0,T]$ and $i=1,..,d$; then $\{B_{t}\}_{t\in[0,T]}=\{B^{1}_{t},...,B^{d}_{t}\}_{t\in[0,T]}$ is an $\mathbb{R}^{d}$-valued Brownian motion on the probability space $(C_{0}^{0}([0,T],\mathbb{R}^{d}),\mathcal{F},\mathbb{P})$.
Now, recall that it is possible to write the fBm in terms of a Bm, as done in \cite{H}. In particular we will follow the formulation in \cite{H} and consider the process $B^{H,i}_{t}(\omega)=\int_{0}^{t}K(t,s)dB^{i}_{s}(\omega)\left(=\int_{0}^{t}K(t,s)d\omega^{i}(s)\right)$ for $t\in[0,T]$ and $i=1,..,d$. Then, the process $\{B^{H}_{t}\}_{t\in[0,T]}=\{B^{H,1}_{t},...,B^{H,d}_{t}\}_{t\in[0,T]}$ is a $\mathbb{R}^{d}$-valued fractional Brownian motion on the probability space $(C_{0}^{0}([0,T],\mathbb{R}^{d}),\mathcal{F},\mathbb{P})$.
\\ As in \cite{LV}, the process $\{\xi_{t,x}\}_{t\in[0,T]}$ is a stochastic process like our $\{B_{t}\}_{t\in[0,T]}$, that is $\xi_{t,x}:C_{0}^{0}([0,T],\mathbb{R}^{d})\rightarrow\mathbb{R}^{N}$ (for $N\in\mathbb{N}$), hence $\xi_{t,x}:\omega\mapsto \xi_{t,x}(\omega)$. Further, we will consider the process $\{\hat{B}^{H}_{t}\}_{t\in[0,T]}=\{t,B^{H,1}_{t},...,B^{H,d}_{t}\}_{t\in[0,T]}$.
First, we give the definition of the cubature formula for the fBm for $H\geq 1/2$ on the Wiener space.
\begin{defn}\label{cubature-definition}
Let $m\in\mathbb{N}$ and $H\geq\frac{1}{2}$. Define $\mathcal{A}_{m}:=\{(i_{1},...,i_{k})\in\{0,...,d\}^{k}\,:\,2Hk+(2-2H)\times\mathnormal{card}\{l,i_{l}=0\}\leq m \,\,\,\text{and}\,\,\, k\in\mathbb{N} \}$. We say that the paths
\begin{equation*}
\bar{\omega}_{1} , . . . , \bar{\omega}_{n}\in C_{0,bv}^{0}
([0, T ], \mathbb{R}^{d})
\end{equation*}
and the positive weights $\lambda_{1} , . . . ,\lambda_{n}$ define a cubature formula of degree $m$ at time $T$ if and only if, for all $(i_{1}, . . . , i_{k})\in \mathcal{A}_{m}$,
\begin{equation*}
\mathbb{E}\left[\int_{0<t_{1}<\cdots<t_{k}< T} d\hat{B}_{t_{1}}^{H,i_{1}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}\right]=\sum_{j=1}^{n}\lambda_{j}\int_{0<t_{1}<\cdots<t_{k}< T}d\hat{\omega}_{j}^{i_{1}}(t_{1})\cdots d\hat{\omega}_{j}^{i_{k}}(t_{k}),
\end{equation*}
where, for $l=1,...,k$,
\begin{equation*}
\hat{B}_{t_{l}}^{H,i_{l}}:=
\begin{cases}
t_{l}, &\text{if}\quad i_{l}=0,\\
B^{H,i_{l}}_{t_{l}}, &\text{if}\quad i_{l}\neq 0,
\end{cases}\quad \text{and}\quad \hat{\omega}_{t_{l}}^{i_{l}}:=
\begin{cases}
t_{l}, &\text{if}\quad i_{l}=0,\\
\bar{\omega}^{i_{l}}_{t_{l}}, &\text{if}\quad i_{l}\neq 0.
\end{cases}
\end{equation*}
\end{defn}
\noindent Notice that the formulation of $\mathcal{A}_{m}$ comes from the fact that for a word $I$
\begin{equation*}
\mathbb{E}\left(\int_{\Delta^{k}[s,t]}d\hat{B}^{H,I}\right)=\mathbb{E}\left(\int_{\Delta^{k}[0,t-s]}d\hat{B}^{H,I}\right)=(t-s)^{kH+(1-H)\times\mathnormal{card}\{j,i_{j}=0\}}\mathbb{E}\left(\int_{\Delta^{k}[0,1]}d\hat{B}^{H,I}\right)
\end{equation*}
(see Proposition 4.8 of \cite{NNRT} for similar discussions); the reason why we multiply by $2$ in $\mathcal{A}_{m}$ is to be consistent with the formulation provided in \cite{LV} for the Bm case.
Once the cubature formula is obtained, it is possible to derive approximate solutions of SDEs driven by the fBm. Let $C_{b}^{\infty}(\mathbb{R}^{N},\mathbb{R}^{N})$ denote the space of $\mathbb{R}^{N}$-valued smooth functions defined on $\mathbb{R}^{N}$ whose derivatives of all orders are bounded. We regard elements of $C_{b}^{\infty}(\mathbb{R}^{N},\mathbb{R}^{N})$ as vector fields on $\mathbb{R}^{N}$. Let $V_{0},...,V_{d}$ be such vector fields, and let $\xi_{t,x}$, $t\in[0, T ]$, $x\in \mathbb{R}^{N}$, be the solution of the SDE
\begin{equation}\label{SDE}
\text{d}\xi_{t,x}=\sum_{i=0}^{d}V_{i}(\xi_{t,x})\text{d}\hat{B}^{H,i}_{t},\quad\text{with}\quad\xi_{0,x}=x.
\end{equation}
Moreover, let $\Phi_{T,x}(\omega_{j}^{*})$, where $\omega_{j}^{*}\in C_{0}^{0}([0,T],\mathbb{R}^{d})$, be the solution at time T of the ODE
\begin{equation}\label{ODE}
\text{d}y_{t,x}=\sum_{i=0}^{d}V_{i}(y_{t,x})\text{d}\hat{\omega}^{i}_{j}(t),\quad\text{with}\quad y_{0,x}=x.
\end{equation}
The core message of the cubature method is that the weighted sum of the solutions of the ODEs $(\ref{ODE})$ for $j=1,...,n$, with weights $\lambda_{1},...,\lambda_{n}$ given by the cubature formula, approximates the expected solution of the SDE $(\ref{SDE})$. In particular, we have Theorem \ref{prNEW}, which extends Proposition 2.1, Lemma 3.1 and Proposition 3.2 of \cite{LV} to the fBm case for $H>1/2$. Before proving the theorem let us recall some notation. Let $\|\cdot\|_{\mathbb{R}^{N}}$ denote the Euclidean norm in $\mathbb{R}^{N}$ and let $|I|$ denote the length of the word $I$, where $I\in\{0,...,d\}^{k}$ for some $k\in\mathbb{N}$. We will also use the supremum norm $\|\cdot\|_{\infty}$, \textit{e.g.} $\|V_{i_{1}}\cdots V_{i_{k}}f\|_{\infty}=\sup_{x\in\mathbb{R}^{N}}\frac{|(V_{i_{1}}\cdots V_{i_{k}}f)(x)|}{\|x\|_{\mathbb{R}^{N}}}$.
\begin{proof}[Proof (Theorem \ref{prNEW})]
First of all, notice that we have the following stochastic Taylor expansion (see Proposition 2.1 of \cite{LV} together with \cite{BauZha}). For any $f$ which is smooth (\textit{i.e.} infinitely differentiable) and whose derivatives of any order are bounded, we have
\begin{equation*}
f(\xi_{T,x})=\sum_{(i_{1},...,i_{k})\in\mathcal{A}_{m}}V_{i_{1}}\cdots V_{i_{k}}f(x)\int_{0<t_{1}<\cdots<t_{k}<T} d\hat{B}_{t_{1}}^{H,i_{1}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}+R_{m}(T,x,f),\quad\text{where}
\end{equation*}
\begin{equation*}
R_{m}(T,x,f)=\sum_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\int_{0<t_{0}<\cdots<t_{k}<T}V_{i_{0}}\cdots V_{i_{k}}f(\xi_{t_{0},x}) d\hat{B}_{t_{0}}^{H,i_{0}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}.
\end{equation*}
Assume that the paths $\bar{\omega}_{1},...,\bar{\omega}_{n}\in C_{0,bv}^{0}([0,T],\mathbb{R}^{d})$ and the weights $\lambda_{1},...,\lambda_{n}$ define a cubature formula for the fBm of degree $m$ for time $T$.
Now notice that by the cubature formula, we have
\begin{equation}\label{zzz}
\mathbb{E}\left[\int_{0<t_{1}<\cdots<t_{k}<T} d\hat{B}_{t_{1}}^{H,i_{1}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}\right]=\sum_{j=1}^{n}\lambda_{j}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{\omega}_{j}^{i_{1}}(t_{1})\cdots d\hat{\omega}_{j}^{i_{k}}(t_{k}),
\end{equation}
where $\hat{\omega}(t)=(t,\bar{\omega}(t))$. Observe that from our setting, we have
\begin{equation*}
\mathbb{E}\left[\int_{0<t_{1}<\cdots<t_{k}<T} d\hat{B}_{t_{1}}^{H,i_{1}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}\right]=\int_{C_{0}^{0}([0,T],\mathbb{R}^{d})}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{B}_{t_{1}}^{H,i_{1}}(\omega)\cdots d\hat{B}_{t_{k}}^{H,i_{k}}(\omega)\mathbb{P}(d\omega),
\end{equation*}
so eq.~$(\ref{zzz})$ can be rewritten as
\begin{equation*}
\int_{C_{0}^{0}([0,T],\mathbb{R}^{d})}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{B}_{t_{1}}^{H,i_{1}}(\omega)\cdots d\hat{B}_{t_{k}}^{H,i_{k}}(\omega)\mathbb{P}(d\omega)=\sum_{j=1}^{n}\lambda_{j}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{\omega}_{j}^{i_{1}}(t_{1})\cdots d\hat{\omega}_{j}^{i_{k}}(t_{k}).
\end{equation*}
This formula suggests introducing the probability measure $\mathbb{Q}_{T}=\sum_{j=1}^{n}\lambda_{j}\delta_{\omega_{j}^{\star}}$ on $(C_{0}^{0}([0,T],\mathbb{R}^{d}),\mathcal{F})$, namely on the Wiener space equipped with its Borel $\sigma$-field. In particular, let
\begin{equation*}
Z(\omega):=\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{B}_{t_{1}}^{H,i_{1}}(\omega)\cdots d\hat{B}_{t_{k}}^{H,i_{k}}(\omega),
\end{equation*}
and so
\begin{equation*}
\int_{C_{0}^{0}([0,T],\mathbb{R}^{d})}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{B}_{t_{1}}^{H,i_{1}}(\omega)\cdots d\hat{B}_{t_{k}}^{H,i_{k}}(\omega)\mathbb{P}(d\omega)=\int_{C_{0}^{0}([0,T],\mathbb{R}^{d})}Z(\omega)\mathbb{P}(d\omega),
\end{equation*}
and, computing the expectation under the measure $\mathbb{Q}_{T}$, we have
\begin{equation*}
\mathbb{E}_{\mathbb{Q}_{T}}\left[Z\right]=\sum_{j=1}^{n}\lambda_{j}\int_{C_{0}^{0}([0,T],\mathbb{R}^{d})}Z(\omega)\delta_{\omega_{j}^{\star}}(d\omega)=\sum_{j=1}^{n}\lambda_{j}Z(\omega_{j}^{\star})
\end{equation*}
\begin{equation*}
=\sum_{j=1}^{n}\lambda_{j}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{B}_{t_{1}}^{H,i_{1}}(\omega_{j}^{\star})\cdots d\hat{B}_{t_{k}}^{H,i_{k}}(\omega_{j}^{\star}),
\end{equation*}
where from the definition of the fBm we have, for $i_{l}\neq0$, $\hat{B}_{t_{l}}^{H,i_{l}}(\omega_{j}^{\star})=\int_{0}^{t_{l}}K(t_{l},s)d\omega^{\star,i_{l}}_{j}(s)$, while, for $i_{l}=0$, $\hat{B}_{t_{l}}^{H,i_{l}}(\omega_{j}^{\star})=t_{l}$.\\Note that we have not yet specified how the paths $\omega^{\star}_{j}\in C_{0}^{0}([0,T],\mathbb{R}^{d})$ are selected. We do so now: choose $\omega^{\star}_{j}\in C_{0}^{0}([0,T],\mathbb{R}^{d})$ such that $\hat{\omega}^{i_{l}}_{j}(t_{l})=\int_{0}^{t_{l}}K(t_{l},s)d\omega^{\star,i_{l}}_{j}(s)$ for $i_{l}\neq 0$. Such an $\omega^{\star}_{j}$ exists since $\bar{\omega}_{j}\in C_{0,bv}^{0}([0,T],\mathbb{R}^{d})$, where $\hat{\omega}_{j}(t)=(t,\bar{\omega}_{j}(t))$. Thus, $\mathbb{Q}_{T}=\sum_{j=1}^{n}\lambda_{j}\delta_{\omega_{j}^{\star}}$ is indeed a probability measure on $(C_{0}^{0}([0,T],\mathbb{R}^{d}),\mathcal{F})$.
\\ Hence, we have $\hat{B}_{t_{l}}^{H,i_{l}}(\omega_{j}^{\star})=\hat{\omega}^{i_{l}}_{j}(t_{l})$ and, consequently,
\begin{equation*}
\sum_{j=1}^{n}\lambda_{j}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{B}_{t_{1}}^{H,i_{1}}(\omega_{j}^{\star})\cdots d\hat{B}_{t_{k}}^{H,i_{k}}(\omega_{j}^{\star})=\sum_{j=1}^{n}\lambda_{j}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{\omega}_{j}^{i_{1}}(t_{1})\cdots d\hat{\omega}_{j}^{i_{k}}(t_{k}),
\end{equation*}
that is
\begin{equation*}
\mathbb{E}_{\mathbb{Q}_{T}}\left[\int_{0<t_{1}<\cdots<t_{k}<T} d\hat{B}_{t_{1}}^{H,i_{1}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}\right]=\sum_{j=1}^{n}\lambda_{j}\int_{0<t_{1}<\cdots<t_{k}<T}d\hat{\omega}_{j}^{i_{1}}(t_{1})\cdots d\hat{\omega}_{j}^{i_{k}}(t_{k}).
\end{equation*}
Therefore, from eq. $(\ref{zzz})$ we obtain the following formulation:
\begin{equation}\label{equa}
\mathbb{E}\left[\int_{0<t_{1}<\cdots<t_{k}<T} d\hat{B}_{t_{1}}^{H,i_{1}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}\right]=\mathbb{E}_{\mathbb{Q}_{T}}\left[\int_{0<t_{1}<\cdots<t_{k}<T} d\hat{B}_{t_{1}}^{H,i_{1}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}\right].
\end{equation}
It is important to notice that, while $B_{t_{l}}^{H,i_{l}}(\omega)$ is a fBm under the Wiener measure, it is no longer one under $\mathbb{Q}_{T}$.
Observe that, by definition of the cubature formula, equality $(\ref{equa})$ holds only for $(i_{1},...,i_{k})\in\mathcal{A}_{m}$. Moreover, notice that
\begin{equation*}
\mathbb{E}_{\mathbb{Q}_{T}}[f(\xi_{T,x})]=\sum_{j=1}^{n}\lambda_{j}\int_{C_{0}^{0}([0,T],\mathbb{R}^{d})}f(\xi_{T,x}(\omega))\delta_{\omega_{j}^{*}}(d\omega)=\sum_{j=1}^{n}\lambda_{j}f(\xi_{T,x}(\omega_{j}^{*}))
\end{equation*}
and that $\xi_{T,x}(\omega_{j}^{*})$ is the solution of
\begin{equation*}
\text{d}\xi_{t,x}(\omega_{j}^{*})=\sum_{i=0}^{d}V_{i}(\xi_{t,x}(\omega_{j}^{*}))\text{d}\hat{B}^{H,i}_{t}(\omega_{j}^{*}),\quad\text{with}\quad\xi_{0,x}(\omega_{j}^{*})=x,
\end{equation*}
or, equivalently, the solution of
\begin{equation*}
\text{d}y_{t,x}=\sum_{i=0}^{d}V_{i}(y_{t,x})\text{d}\hat{\omega}^{i}_{j}(t),\quad\text{with}\quad y_{0,x}=x,
\end{equation*}
which we defined before to be $\Phi_{T,x}(\omega_{j}^{*})$. Therefore,
\begin{equation*}
\sum_{j=1}^{n}\lambda_{j}f(\Phi_{T,x}(\omega_{j}^{*}))=\mathbb{E}_{\mathbb{Q}_{T}}[f(\xi_{T,x})].
\end{equation*}
Let us now adapt the stochastic Taylor formula to the probability measure $\mathbb{Q}_{T}$. In particular, using the scaling property of the fBm, which is inherited by the respective cubature paths, we have
\begin{equation*}
|\mathbb{E}_{\mathbb{Q}_{T}}[R_{m}(T,x,f)]|=\Bigg|\mathbb{E}_{\mathbb{Q}_{T}}\left[\sum_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\int_{0<t_{0}<\cdots<t_{k}<T}V_{i_{0}}\cdots V_{i_{k}}f(\xi_{t_{0},x}) d\hat{B}_{t_{0}}^{H,i_{0}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}}\right]\Bigg|
\end{equation*}
\begin{equation}\label{ineq}
\leq \sum_{j=1}^{n}\lambda_{j}\sum_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\bigg|\int_{0<t_{0}<\cdots<t_{k}<T}V_{i_{0}}\cdots V_{i_{k}}f(\xi_{t_{0},x}(\omega_{j}^{\star})) d\hat{\omega}_{j}^{i_{0}}(t_{0})\cdots d\hat{\omega}_{j}^{i_{k}}(t_{k})\bigg|
\end{equation}
\begin{equation*}
\leq \sum_{j=1}^{n}\lambda_{j}\sum_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\|V_{i_{0}}\cdots V_{i_{k}}f\|_{\infty}
\end{equation*}
\begin{equation*}
T^{1+kH+(1-H)\times\mathnormal{card}\{l,i_{l}=0\}}\int_{0<t_{0}<\cdots<t_{k}<1}\big|d\hat{\omega}_{j}^{i_{0}}(t_{0})\big|\cdots \big|d\hat{\omega}_{j}^{i_{k}}(t_{k})\big|
\end{equation*}
\begin{equation*}
\leq C' T^{(m+2)/2} \sup_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\|V_{i_{0}}\cdots V_{i_{k}}f\|_{\infty},\quad\text{if $T\geq 1$, and}
\end{equation*}
\begin{equation*}
(\ref{ineq})\leq \sum_{j=1}^{n}\lambda_{j}\sum_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\|V_{i_{0}}\cdots V_{i_{k}}f\|_{\infty}
\end{equation*}
\begin{equation*}
T^{H+kH+(1-H)\times\mathnormal{card}\{l,i_{l}=0\}}\int_{0<t_{0}<\cdots<t_{k}<1}\big|d\hat{\omega}_{j}^{i_{0}}(t_{0})\big|\cdots \big|d\hat{\omega}_{j}^{i_{k}}(t_{k})\big|
\end{equation*}
\begin{equation*}
\leq C'' T^{2H} \sup_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\|V_{i_{0}}\cdots V_{i_{k}}f\|_{\infty},\quad\text{if $T<1$,}
\end{equation*}
where $C',C''>0$ do not depend on $T$. Notice that the $1$ in $1+kH+(1-H)\times\mathnormal{card}\{l,i_{l}=0\}$ and the $H$ in $H+kH+(1-H)\times\mathnormal{card}\{l,i_{l}=0\}$ come from the fact that we might have $i_{0}=0$ (hence the scaling $T$) or $i_{0}\neq0$ (hence the scaling $T^{H}$); since $T\geq T^{H}$ for $T\geq1$, we take $T$ to get the upper bound for $T\geq1$, and $T^{H}$ for $T<1$. Moreover, for the last inequality in both cases we used the fact that, for any word in $\mathcal{A}_{m}$, by definition $H\leq kH+(1-H)\times\mathnormal{card}\{l,i_{l}=0\}\leq m/2$.
Regarding $|\mathbb{E}[R_{m}(T,x,f)]|$, we are not able to obtain a similar bound directly. Indeed, \cite{BauZha} is partially devoted to the study of this quantity. More precisely, \cite{BauZha} considers a different form of the remainder, but the two can be linked together. In particular, from Theorem 3.4 in \cite{BauZha} we have
\begin{equation*}
|\mathbb{E}[R_{m}(T,x,f)]|\leq C_{\gamma}\frac{(dMKT)^{(m+2)/2}}{(((m+2)/2)!)^{1/2-\gamma}}\sum_{k=0}^{\infty}\frac{(dMKT)^{k}}{(k!)^{1/2-\gamma}},\quad\text{if $T\geq 1$ and}
\end{equation*}
\begin{equation*}
|\mathbb{E}[R_{m}(T,x,f)]|\leq C_{\gamma}\frac{(dMKT^{H})^{(m+2)/2}}{(((m+2)/2)!)^{1/2-\gamma}}\sum_{k=0}^{\infty}\frac{(dMKT^{H})^{k}}{(k!)^{1/2-\gamma}},\quad\text{if $T< 1$},
\end{equation*}
where $M$ and $K$ are defined in the statement of the theorem. We used $(m+2)/2$ for the ``$N+1$'' in the statement of Theorem 3.4 in \cite{BauZha} because it is the minimum number of iterated integrals of $\mathcal{A}_{m+2}$. Further, we used $T$ instead of $T^{H}$ because we want a bound which is independent of the exact composition of the word $I$:
\begin{equation*}
\mathbb{E}\left(\bigg|\int_{\Delta^{k}[0,T]}d\hat{B}^{H,I}\bigg|^{2}\right)^{1/2}\leq T^{kH+(1-H)\times\mathnormal{card}\{j,i_{j}=0\}}\frac{K^{k}}{\sqrt{k!}}\leq \begin{cases}
T^{k}\dfrac{K^{k}}{\sqrt{k!}}\quad\text{if $T\geq 1$},\\T^{kH}\dfrac{K^{k}}{\sqrt{k!}}\quad\text{if $T< 1$}.
\end{cases}
\end{equation*}
Notice that by doing this we are able to take the drift into account, which is not considered explicitly in Theorem 3.4 in \cite{BauZha}. Moreover, we did not need to mention that the vector fields $V_{i}$ are analytic, since being infinitely differentiable with bounded derivatives of all orders implies that they are globally analytic.
Finally, observe that
\begin{equation*}
|\mathbb{E}[f(\xi_{T,x})]-\mathbb{E}_{\mathbb{Q}_{T}}[f(\xi_{T,x})]|\leq |\mathbb{E}[R_{m}(T,x,f)]|+|\mathbb{E}_{\mathbb{Q}_{T}}[R_{m}(T,x,f)]|
\end{equation*}
\begin{equation*}
+ \Bigg|(\mathbb{E}-\mathbb{E}_{\mathbb{Q}_{T}})\left[\sum_{(i_{1},...,i_{k})\in\mathcal{A}_{m}}V_{i_{1}}\cdots V_{i_{k}}f(x)\int_{0<t_{1}<\cdots<t_{k}<T} d\hat{B}_{t_{1}}^{H,i_{1}}\cdots d\hat{B}_{t_{k}}^{H,i_{k}} \right]\Bigg|.
\end{equation*}
We have proved upper bounds for the first two terms, while the last term is zero by definition of $\mathbb{Q}_{T}$, namely by the cubature formula. Then, we have
\begin{equation*}
\sup_{x\in\mathbb{R}^{N}}\Big|\mathbb{E}\left(f(\xi_{T,x})\right) -\sum_{j=1}^{n}\lambda_{j}f(\Phi_{T,x}(\omega_{j}^{*}))\Big|\leq\begin{cases}
Z_{1}(T)+Z_{3}(T)\quad\text{if $T\geq 1$},\\ Z_{2}(T)+Z_{4}(T)\quad\text{if $T< 1$},
\end{cases}
\end{equation*}
where
\begin{equation*}
Z_{1}(T):=C' T^{(m+2)/2}\sup_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\|V_{i_{0}}\cdots V_{i_{k}}f\|_{\infty},
\end{equation*}
\begin{equation*}
Z_{2}(T):=C'' T^{2H}\sup_{(i_{1},...,i_{k})\in\mathcal{A}_{m}, (i_{0},...,i_{k})\notin\mathcal{A}_{m}}\|V_{i_{0}}\cdots V_{i_{k}}f\|_{\infty},
\end{equation*}
\begin{equation*}
Z_{3}(T):=C_{\gamma}T^{(m+2)/2}\frac{(dK)^{(m+2)/2}}{(((m+2)/2)!)^{1/2-\gamma}}\sup\limits_{x\in\mathbb{R}^{N}}M_{x}^{(m+2)/2}\sum_{k=0}^{\infty}\frac{(dM_{x}KT)^{k}}{(k!)^{1/2-\gamma}},
\end{equation*}
\begin{equation*}
Z_{4}(T):=C_{\gamma}T^{H(m+2)/2}\frac{(dK)^{(m+2)/2}}{(((m+2)/2)!)^{1/2-\gamma}}\sup\limits_{x\in\mathbb{R}^{N}}M_{x}^{(m+2)/2}\sum_{k=0}^{\infty}\frac{(dM_{x}KT^{H})^{k}}{(k!)^{1/2-\gamma}},
\end{equation*}
where $C',C'',C_{\gamma}>0$ are constants independent of $T$.
\end{proof}
\begin{rem}
Notice that our result differs from Proposition 3.2 of \cite{LV} for two reasons. First, that proposition does not consider the case $T<1$. Second, we cannot use the It\^{o} isometry of Proposition 2.1 in \cite{LV}, but have to rely on Theorem 3.4 in \cite{BauZha}. We also point out that, should sharper estimates for the remainder of the stochastic Taylor expansion for the fBm be found, they would directly improve our result.
\end{rem}
From the scaling properties of the fBm, we obtain the following simple proposition, which is the analogue of Proposition 2.5 in \cite{LV} for the fBm.
\begin{pro}Assume that $\bar{\omega}_{1},...,\bar{\omega}_{n}\in C_{0,bv}^{0}([0,1],\mathbb{R}^{d})$ and the weights $\lambda_{1},...,\lambda_{n}$ define a cubature formula for the fBm on the Wiener space of degree $m$ for time $1$. Define, for $j = 1,...,n$, the paths $\bar{\omega}^{(T)}_{j}\in C_{0,bv}^{0}([0,T],\mathbb{R}^{d})$ by $\bar{\omega}^{(T),i}_{j}(t)=T^{H} \bar{\omega}_{j}^{i}(t/T)$ for $i=1,...,d$. The paths $\bar{\omega}^{(T)}_{j}$ and the weights $\lambda_{j}$, $j = 1,...,n$, then define a cubature formula for the fBm on the Wiener space of degree $m$ at time $T$.
\end{pro}
\begin{proof}
It follows from the scaling property of the fractional Brownian motion.
\end{proof}
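As an illustrative numerical sketch of this rescaling (the tent-shaped path, the value $H=0.75$ and all names below are our own assumptions, not taken from \cite{LV}), one can check that $\bar{\omega}^{(T)}(t)=T^{H}\bar{\omega}(t/T)$ multiplies every value of the path by $T^{H}$:

```python
# Sketch: rescale a cubature path from [0,1] to [0,T], as in the proposition:
# omega_T(t) = T^H * omega(t / T).  Path and parameters are illustrative only.
H = 0.75          # Hurst index, H > 1/2
T = 4.0           # target time horizon

def omega(t):
    """A toy bounded-variation path on [0,1] (piecewise linear tent)."""
    return 2.0 * t if t <= 0.5 else 2.0 * (1.0 - t)

def omega_T(t):
    """The rescaled path on [0,T]."""
    return T**H * omega(t / T)

# Every value of the rescaled path equals T^H times the original value,
# so each degree-k iterated integral picks up the expected factor (T^H)^k.
print(omega_T(0.5 * T), T**H * omega(0.5))
```

The point of the check is that the whole path, and hence each iterated integral against it, scales exactly as the corresponding iterated integral of the fBm does under self-similarity.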
The last step of the cubature method is to extend it from small times to arbitrary times. This step is usually called the concatenation step. Indeed, for the Bm case it is sufficient to divide the time interval into smaller subintervals and then concatenate the solutions obtained in each subinterval (see \cite{LV} for details).\\
In this article, we do not deal with the concatenation step. This is because for the fBm we do not have the independence of increments (\textit{i.e.} the Markov semigroup property) that we have in the Brownian motion case. Hence, the concatenation is an open and non-trivial problem. In Theorem \ref{pr4} we focused on the cubature formula for the one-dimensional fBm up to degree 5. Since the proof is tedious and mainly based on linear algebra computations, we only sketch it here.
\begin{proof}[Sketch of the proof (Theorem \ref{pr4})]
The first step is to compute the expected iterated integrals for the different words, that is, for the different combinations of the time path $t$ and the fBm $B_{t}$. For example, for the degree $2H+2$ we need to compute:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{2}}dB_{u_{1}}du_{2}\right)=\int_{0}^{1}\mathbb{E}\left(B_{u_{2}}\right)du_{2}=0
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{2}}du_{1}dB_{u_{2}}\right)=\mathbb{E}\left(\int_{0}^{1}u_{2}dB_{u_{2}}\right)=0.
\end{equation*}
The second step is to compute the corresponding iterated integrals of the $\omega_{i}$ weighted by the $\lambda_{i}$. Hence, for degree $2H+2$ we have:
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{2}}d\omega_{i,u_{1}}du_{2}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\omega_{i,u_{2}}du_{2}=0
\end{equation*}
and
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{2}}du_{1}d\omega_{i,u_{2}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}u_{2}d\omega_{i,u_{2}}=0.
\end{equation*}
Then we obtain a system of equations in which the unknowns are the $\omega_{i}$, the $\lambda_{i}$ and $n$. Solving this system gives our result.
\end{proof}
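To make the mechanics of such a moment system concrete, here is a minimal toy check (our own illustration; these are not the piecewise-linear paths of Theorem \ref{pr4}): the two linear paths $\omega_{1}(t)=t$ and $\omega_{2}(t)=-t$ on $[0,1]$, with weights $\lambda_{1}=\lambda_{2}=1/2$, reproduce the expected iterated integrals of degrees $2H$, $4H$ and $2H+2$ listed in Appendix 1 (the odd conditions vanish by symmetry, and $\omega_{i}(1)^{2}/2=1/2$ gives the degree-$4H$ condition).

```python
# Toy moment check for a two-path cubature candidate (illustrative only,
# not the piecewise-linear solution discussed in the paper):
# omega_1(t) = t, omega_2(t) = -t on [0,1], weights 1/2 each.
lambdas = [0.5, 0.5]
slopes = [1.0, -1.0]          # d(omega_i)/dt, constant for linear paths

# Degree 2H:   sum_i lambda_i * int_0^1 d(omega_i) = sum_i lambda_i * omega_i(1)
m1 = sum(l * s for l, s in zip(lambdas, slopes))

# Degree 4H:   sum_i lambda_i * int int d(omega_i) d(omega_i) = omega_i(1)^2 / 2
m2 = sum(l * s**2 / 2.0 for l, s in zip(lambdas, slopes))

# Degree 2H+2: sum_i lambda_i * int_0^1 omega_i(u) du  (= s/2 for a linear path)
m3 = sum(l * s / 2.0 for l, s in zip(lambdas, slopes))
# and          sum_i lambda_i * int_0^1 u d(omega_i)(u)  (= s/2 as well)
m4 = sum(l * s / 2.0 for l, s in zip(lambdas, slopes))

print(m1, m2, m3, m4)   # should match the expectations 0, 1/2, 0, 0
```

For higher degrees this simple symmetric pair no longer suffices, which is precisely why richer structures such as piecewise linear paths with several slopes are needed.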
We conclude with a final remark on the solution(s) obtained for the cubature formula.
\begin{rem}
From the proof of this theorem we obtain two solutions of our system of equations, which determine the slopes of our paths $\omega$. The reason why we focused on only one solution is that Lyons and Victoir focused on that solution in their paper. They used MATHEMATICA to produce a solution, without explicitly justifying their choice. However, both solutions are feasible. Hence, we have two valid solutions for the cubature formula of the fractional Brownian motion for this kind of structure (\textit{i.e.} piecewise linear paths with changes of slope at $t=\frac{1}{3}$ and $t=\frac{2}{3}$).
\\
The reason why they did not justify their choice is probably that the structure adopted is already arbitrary; hence what matters is to have a solution, not a particular or unique one.
\end{rem}
\section{Acknowledgements}
The author would like to thank Horatio Boedihardjo for his assistance, his comments and constructive discussions throughout the writing of this work. Further, the author would like to thank Dan Crisan and Thomas Cass for useful remarks and discussions, and Tobias Kuna for an important observation regarding Theorem \ref{pr3}. Finally, the author would like to thank the CDT in MPE for providing funding for this research.
\section{Appendix 0: Discussion of Proposition 5.1}
To clarify the result of Proposition 5.1 and its proof, we explain as follows.
\\The key point is to decompose a permutation $\sigma$ of $\mathcal{G}_{2k}$ into a permutation $\tau\in\mathcal{G}_{2k-2}$ and a $2$-permutation of $2k$. In particular, the number of permutations of the set $(t_{1},...,t_{2k})$ is equal to the number of permutations of a set of $2k-2$ elements times the number of $2$-permutations of $2k$ (where the $k$-permutations of $n$ are the different ordered arrangements of a $k$-element subset of an $n$-set), that is
\begin{equation*}
(2k)!=(2k-2)!\cdot\frac{(2k)!}{(2k-2)!}.
\end{equation*}
Now, consider the case $t_{\sigma(1)}=t_{4}$ and $t_{\sigma(2)}=t_{9}$, without any loss of generality. It is possible to see that, by a simple change of variables (or better, a change of notation),
\begin{equation}
\int_{0<t_{1}<t_{2}<t_{3}<t_{5}<...<t_{8}<t_{10}<...<1}\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation}
\begin{equation}\label{integral}
= \int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}
\end{equation}
where $\tau$ is a permutation of the set $(1,...,2k-2)$. A possible issue is the dependence of $\tau$ on $\sigma$. However, notice that we would get the same integral whenever the permutation of the $2k-2$ elements of the set $\{1,...,2k\}\setminus \{\sigma(1),\sigma(2)\}$ is the same. That is, we get the same $\tau$ for different $\sigma$s. We explain the argument in detail below, starting with an example.
\\ \textbf{Example 1.} Assume that our permutation $\sigma$ is just the identity, that is, $\sigma(1)=1,\sigma(2)=2,...,\sigma(2k)=2k$; then we would have
\begin{equation}
\int_{0<t_{3}<...<t_{2k}<1}\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation}
\begin{equation}
=\int_{0<t_{3}<...<t_{2k}<1}\prod_{l=2}^{k}\delta_{i_{2l},i_{2l-1}}|t_{2l}-t_{2l-1}|^{2H-2}\text{d}t_{2l}\text{d}t_{2l-1}
\end{equation}
\begin{equation}
=\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{2l},i_{2l-1}}|t_{2l}-t_{2l-1}|^{2H-2}\text{d}t_{2l}\text{d}t_{2l-1}.
\end{equation}
However, we get the same integral if $\sigma(1)=2k-1,\sigma(2)=2k$ and the rest is $\sigma(3)=1,\sigma(4)=2,...,\sigma(2k)=2k-2$. Indeed, in this case we would have
\begin{equation}
\int_{0<t_{1}<...<t_{2k-2}<1}\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation}
\begin{equation}
=\int_{0<t_{1}<...<t_{2k-2}<1}\prod_{l=1}^{k-1}\delta_{i_{2l},i_{2l-1}}|t_{2l}-t_{2l-1}|^{2H-2}\text{d}t_{2l}\text{d}t_{2l-1}
\end{equation}
\begin{equation}
=\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{2l},i_{2l-1}}|t_{2l}-t_{2l-1}|^{2H-2}\text{d}t_{2l}\text{d}t_{2l-1}.
\end{equation}
Continuing with this argument, it is possible to observe that we get the same integral $(\ref{integral})$ if $\sigma(1)=4$ and $\sigma(2)=9$, while the permutation of the other $2k-2$ terms remains the same, which in this case is $\sigma(3)=1,\sigma(4)=2,\sigma(5)=3,\sigma(6)=5,\sigma(7)=6,\sigma(8)=7,\sigma(9)=8,\sigma(10)=10,\sigma(11)=11,...,\sigma(2k)=2k$. Then we have
\begin{equation}
\int_{0<t_{1}<t_{2}<t_{3}<t_{5}<...<t_{8}<t_{10}<...<1}\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation}
\begin{equation*}
=\int_{0<t_{1}<t_{2}<t_{3}<t_{5}<...<t_{8}<t_{10}<...<1}\delta_{i_{2},i_{1}}|t_{2}-t_{1}|^{2H-2}\delta_{i_{5},i_{3}}|t_{5}-t_{3}|^{2H-2}\delta_{i_{7},i_{6}}|t_{7}-t_{6}|^{2H-2}\delta_{i_{10},i_{8}}|t_{10}-t_{8}|^{2H-2}
\end{equation*}
\begin{equation}
\prod_{l=6}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{1}\text{d}t_{2}\text{d}t_{3}\text{d}t_{5}\cdots\text{d}t_{8}\text{d}t_{10}\cdots\text{d}t_{2k}
\end{equation}
and with a simple change of notation
\begin{equation}
=\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{2l},i_{2l-1}}|t_{2l}-t_{2l-1}|^{2H-2}\text{d}t_{2l}\text{d}t_{2l-1}.
\end{equation}
From this example it is possible to see that, if the permutation of the remaining $2k-2$ elements (\textit{i.e.} the elements of the set $\{1,...,2k\}\setminus \{\sigma(1),\sigma(2)\}$) is the same, then it does not matter which values the points $\sigma(1),\sigma(2)$ take: we always get the same integral $(\ref{integral})$. Hence, concerning the value of the integral, the positions of $\sigma(1),\sigma(2)$ and the permutation of the $2k-2$ remaining elements are independent.
The above example is based on the identity permutation (of the $2k-2$ elements) $\tau(1)=1,...,\tau(2k-2)=2k-2$; we chose it only because it is easier to explain and less cumbersome. The argument extends to any permutation $\tau\in\mathcal{G}_{2k-2}$ (\textit{i.e.} any permutation of the $2k-2$ elements), as the following second example shows.
\\\textbf{Example 2.} Fix a permutation $\tau$ of the set $(1,...,2k-2)$, say $(1,3,5,7,2,4,6,8,9,...,2k-2)$. Then consider the permutation $\sigma$ of the $2k$-element set given by $(1,2,3,5,7,9,4,6,8,10,11,...,2k)$. Then we have
\begin{equation}
\int_{0<t_{3}<...<t_{2k}<1}\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation}
(skipping the $\delta$s, since they follow the terms $|t_{\sigma(2p)}-t_{\sigma(2p-1)}|^{2H-2}$ in the same order)
\begin{equation*}
=\int_{0<t_{3}<...<t_{2k}<1}|t_{5}-t_{3}|^{2H-2}|t_{9}-t_{7}|^{2H-2}|t_{6}-t_{4}|^{2H-2}|t_{10}-t_{8}|^{2H-2}
\end{equation*}
\begin{equation}
\prod_{l=6}^{k}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{3}\cdots\text{d}t_{2k}
\end{equation}
\begin{equation}\label{integral2}
= \int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}.
\end{equation}
Now, notice that we can get the same integral $(\ref{integral2})$ from the permutation $\tilde{\sigma}$: $(2k-1,2k,1,3,5,7,2,4,6,8,9,...,2k-2)$. This is because
\begin{equation}
\int_{0<t_{1}<...<t_{2k-2}<1}\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation}
\begin{equation*}
= \int_{0<t_{1}<...<t_{2k-2}<1}|t_{3}-t_{1}|^{2H-2}|t_{7}-t_{5}|^{2H-2}|t_{4}-t_{2}|^{2H-2}|t_{8}-t_{6}|^{2H-2}
\end{equation*}
\begin{equation}
\prod_{l=5}^{k-1}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{1}\cdots\text{d}t_{2k-2}
\end{equation}
\begin{equation}
= \int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|t_{\tau(2l)}-t_{\tau(2l-1)}|^{2H-2}\text{d}t_{1}\cdots \text{d}t_{2k-2}.
\end{equation}
Again we can get the same integral $(\ref{integral2})$ from the permutation $\hat{\sigma}$: $(4,9,1,3,6,8,2,5,7,10,11,...,2k)$ (here $\sigma(1)=4,\sigma(2)=9$). This is because
\begin{equation}
\int_{0<t_{1}<t_{2}<t_{3}<t_{5}<...<t_{8}<t_{10}<...<1}\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation}
\begin{equation*}
=\int_{0<t_{1}<t_{2}<t_{3}<t_{5}<...<t_{8}<t_{10}<...<1}|t_{3}-t_{1}|^{2H-2}|t_{8}-t_{6}|^{2H-2}|t_{5}-t_{2}|^{2H-2}|t_{10}-t_{7}|^{2H-2}
\end{equation*}
\begin{equation}
\prod_{l=6}^{k}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{1}\text{d}t_{2}\text{d}t_{3}\text{d}t_{5}\cdots\text{d}t_{8}\text{d}t_{10}\cdots\text{d}t_{2k}
\end{equation}
\begin{equation}
= \int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}.
\end{equation}
The principle is the following. Take a permutation $\tau$ of the set of $2k-2$ elements; in our Example 2 it was $(1,3,5,7,2,4,6,8,9,...,2k-2)$. Take two points $\sigma(1),\sigma(2)$; in the last case of Example 2, $\sigma(1)=4,\sigma(2)=9$. We need to find the permutations $\sigma\in\mathcal{G}_{2k}$ that give the same integral $(\ref{integral2})$. How can we find them? We just follow this algorithm. First, we put $\sigma(1),\sigma(2)$ in the first two positions of the $2k$-set, while for the others we put a \textit{modification} of $(1,3,5,7,2,4,6,8,9,...,2k-2)$, which depends on the values of $\sigma(1),\sigma(2)$. We denote this \textit{modification} by $(y_{1},...,y_{2k-2})$. Hence, we have $(\sigma(1),\sigma(2),y_{1},...,y_{2k-2})$. In particular, the \textit{modification} follows this rule. Assume $\sigma(1)<\sigma(2)$ without loss of generality and let $x_{i}$ denote the elements of $(1,3,5,7,2,4,6,8,9,...,2k-2)$, so $x_{1}=1,x_{2}=3,x_{3}=5,...$. If $x_{i}<\sigma(1)$, then $y_{i}=x_{i}$ (this is the case for $1,3,2$ in our example). If $\sigma(1)\leq x_{i}<\sigma(2)-1$, then $y_{i}=x_{i}+1$ (this is the case for $5,7,4,6$ in our example). If $x_{i}\geq\sigma(2)-1$, then $y_{i}=x_{i}+2$ (this is the case for $8,9,10,...,2k-2$ in our example).
\\
\\
Now there are two questions to be answered. The first is: we said that different $\sigma$s give the same integral (for example, integral $(\ref{integral2})$), but how many exactly? The answer is $\frac{(2k)!}{(2k-2)!}=2k(2k-1)$, which is the number of arrangements of fixed length $2$ of elements taken from a given set of size $2k$; in other words, these $2$-permutations of $2k$ are the different ordered arrangements of a $2$-element subset of a $2k$-set (see Wikipedia or Wolfram Alpha on permutations). For example, there are $3!/(3-2)!=6$ $2$-permutations of $3$: from the set $\{1,2,3\}$ we get $\{1,2\},\{1,3\},\{2,3\},\{2,1\},\{3,1\},\{3,2\}$. In our case we have $2$-permutations of $2k$ because the $2$ comes from $\sigma(1),\sigma(2)$ and the $2k$ from the size of the set.
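These counts, and the grouping of $\mathcal{G}_{2k}$ by the values of $\sigma(1),\sigma(2)$ used throughout this appendix, can be verified by brute force (an illustrative check of ours, for small $k$):

```python
from itertools import permutations
from math import factorial

# 2-permutations of an n-set: ordered pairs of distinct elements.
n = 3
pairs = list(permutations(range(n), 2))
print(len(pairs))          # 3!/(3-2)! = 6

# Decomposition used in the text: grouping the permutations of a 2k-set
# by the pair (sigma(1), sigma(2)) yields 2k(2k-1) groups of size (2k-2)!.
k = 2                      # so 2k = 4, and G_{2k} has 4! = 24 elements
groups = {}
for sigma in permutations(range(2 * k)):
    groups.setdefault(sigma[:2], []).append(sigma)
print(len(groups))                                       # 2k(2k-1) = 12
print(all(len(g) == factorial(2 * k - 2) for g in groups.values()))
```

This is exactly the counting identity $(2k)!=(2k-2)!\cdot 2k(2k-1)$ exploited in the decomposition of $\sigma$ into $\tau$ and a $2$-permutation.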
\\
This implies that there are $2k(2k-1)$ different permutations in $\mathcal{G}_{2k}$ (call them $\sigma_{1},\sigma_{2},...,\sigma_{2k(2k-1)}$) for which we get the same integral in terms of a permutation $\tau$ (like integral $(\ref{integral2})$). Hence, we have
\begin{equation*}
\sum_{j=1}^{2k(2k-1)}\int_{0<t_{1}<...<t_{\sigma_{j}(1)-1}<t_{\sigma_{j}(1)+1}<...<t_{\sigma_{j}(2)-1}<t_{\sigma_{j}(2)+1}<...<1}
\end{equation*}
\begin{equation}
\prod_{l=2}^{k}\delta_{i_{\sigma_{j}(2l)},i_{\sigma_{j}(2l-1)}}|t_{\sigma_{j}(2l)}-t_{\sigma_{j}(2l-1)}|^{2H-2}\text{d}t_{\sigma_{j}(2l)}\text{d}t_{\sigma_{j}(2l-1)}
\end{equation}
\begin{equation}
=2k(2k-1)\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}.
\end{equation}
\\
\\
The second and last question is the following: do the arguments above cover all the permutations $\sigma\in\mathcal{G}_{2k}$? In other words, can we reformulate all the permutations $\sigma\in\mathcal{G}_{2k}$ in terms of the permutations $\tau\in\mathcal{G}_{2k-2}$ times $2k(2k-1)$? The answer is yes, and in particular we have the following equality
\begin{equation*}
\sum_{\sigma\in\mathcal{G}_{2k}}\int_{0<t_{1}<...<t_{\sigma(1)-1}<t_{\sigma(1)+1}<...<t_{\sigma(2)-1}<t_{\sigma(2)+1}<...<1}
\end{equation*}
\begin{equation}
\prod_{l=2}^{k}\delta_{i_{\sigma(2l)},i_{\sigma(2l-1)}}|t_{\sigma(2l)}-t_{\sigma(2l-1)}|^{2H-2}\text{d}t_{\sigma(2l)}\text{d}t_{\sigma(2l-1)}
\end{equation}
\begin{equation}
=2k(2k-1)\sum_{\tau\in\mathcal{G}_{2k-2}}\int_{\Delta^{2k-2}[0,1]}\prod_{l=1}^{k-1}\delta_{i_{\tau(2l)},i_{\tau(2l-1)}}|s_{\tau(2l)}-s_{\tau(2l-1)}|^{2H-2}\text{d}s_{1}\cdots \text{d}s_{2k-2}.
\end{equation}
This is because we can decompose the permutations of $\mathcal{G}_{2k}$ (the $\sigma$s) into the choices of $\sigma(1),\sigma(2)$ (of which there are $2k(2k-1)$ for each permutation $\tau$, and which do not modify the value of the integral) and the permutations of $\mathcal{G}_{2k-2}$ (the $\tau$s). Indeed, $\mathcal{G}_{2k}$ contains $(2k)!$ permutations (all different from each other), and the decomposition yields $(2k-2)!\cdot2k(2k-1)=(2k)!$ permutations (also all different from each other).
\\
Again, the key point is to decompose a permutation $\sigma$ of $\mathcal{G}_{2k}$ into a permutation $\tau\in\mathcal{G}_{2k-2}$ and a $2$-permutation of $2k$.
\section{Appendix 1: Iterated integrals for the cubature method}
In this appendix we study the iterated integrals with respect to the path $(t,B^{H}_{t})$. Notice that we will use the notation $B_{t}:=B^{H}_{t}$. We will not consider iterated integrals of time alone, since they bring no information for the construction of the cubature. Further, we will repeatedly use the following formula for the fractional Brownian motion:
\begin{equation*}
\mathbb{E}\left((B_{t}-B_{s})^{2k}\right)=\dfrac{(2k)!}{k!2^{k}}|t-s|^{2Hk}.
\end{equation*}
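Since $B_{t}-B_{s}$ is centered Gaussian with variance $|t-s|^{2H}$, the constant $(2k)!/(k!2^{k})$ is simply the $2k$-th moment of a standard Gaussian, i.e. the double factorial $(2k-1)!!$. A quick sanity check of this identity (ours, for illustration):

```python
from math import factorial

def gaussian_even_moment(k):
    """(2k-1)!! = E[Z^{2k}] for Z ~ N(0,1) (Wick / Isserlis formula)."""
    m = 1
    for j in range(1, 2 * k, 2):
        m *= j
    return m

# (2k)! / (k! 2^k) coincides with (2k-1)!! for every k.
for k in range(1, 7):
    assert factorial(2 * k) // (factorial(k) * 2**k) == gaussian_even_moment(k)

# e.g. k = 2 gives E[B_1^4] = 3, hence E[B_1^4 / 4!] = 3/24 = 1/8,
# which is the degree-8H value computed below.
print(factorial(4) // (factorial(2) * 4))   # 3
```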
In particular, we have:
\\
Degree$=2H$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}dB_{u_{1}}\right)=0
\end{equation*}
\\
Degree$=4H$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{2}}dB_{u_{1}}dB_{u_{2}}\right)=\mathbb{E}\left(\dfrac{B_{1}^{2}}{2}\right)=\dfrac{1}{2}
\end{equation*}
\\
Degree$=2H+2$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{2}}dB_{u_{1}}du_{2}\right)=\int_{0}^{1}\mathbb{E}\left(B_{u_{2}}\right)du_{2}=0
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{2}}du_{1}dB_{u_{2}}\right)=\mathbb{E}\left(\int_{0}^{1}u_{2}dB_{u_{2}}\right)=0
\end{equation*}
\\
Degree$=6H$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}dB_{u_{2}}dB_{u_{3}}\right)=\mathbb{E}\left(\dfrac{B_{1}^{3}}{3!}\right)=0
\end{equation*}
\\
Degree$=2+4H$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}dB_{u_{2}}du_{3}\right)=\int_{0}^{1}\mathbb{E}\left(\dfrac{B_{u_{3}}^{2}}{2}\right)du_{3}=\int_{0}^{1}\dfrac{u_{3}^{2H}}{2}du_{3}=\dfrac{1}{2(2H+1)}
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}du_{2}dB_{u_{3}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}B_{u_{2}}du_{2}dB_{u_{3}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{u_{2}}^{1}B_{u_{2}}dB_{u_{3}}du_{2}\right)
\end{equation*}
\begin{equation*}
=\int_{0}^{1}\mathbb{E}\left(B_{u_{2}}(B_{1}-B_{u_{2}})\right)du_{2}=\int_{0}^{1}\dfrac{1}{2}\left(1-u_{2}^{2H}-(1-u_{2})^{2H}\right)du_{2}=\dfrac{1}{2}-\dfrac{2}{2(2H+1)}=\dfrac{2H-1}{2(2H+1)}
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}dB_{u_{2}}dB_{u_{3}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{u_{1}}^{u_{3}}dB_{u_{2}}du_{1}dB_{u_{3}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}(B_{u_{3}}-B_{u_{1}})du_{1}dB_{u_{3}}\right)
\end{equation*}
\begin{equation*}
=\mathbb{E}\left(\int_{0}^{1}\int_{u_{1}}^{1}(B_{u_{3}}-B_{u_{1}})dB_{u_{3}}du_{1}\right)=\int_{0}^{1}\mathbb{E}\left(\dfrac{B_{1}^{2}}{2}-\dfrac{B_{u_{1}}^{2}}{2}-B_{u_{1}}(B_{1}-B_{u_{1}})\right)du_{1}
\end{equation*}
\begin{equation*}
=\int_{0}^{1}\mathbb{E}\left(\dfrac{B_{1}^{2}}{2}+\dfrac{B_{u_{1}}^{2}}{2}-B_{u_{1}}B_{1}\right)du_{1}=\int_{0}^{1}\left(\dfrac{1}{2}+\dfrac{u_{1}^{2H}}{2}-\dfrac{1}{2}\left(1+u_{1}^{2H}-(1-u_{1})^{2H}\right)\right)du_{1}
\end{equation*}
\begin{equation*}
=\dfrac{1}{2}\int_{0}^{1}(1-u_{1})^{2H}du_{1}=\dfrac{1}{2(2H+1)}
\end{equation*}
\\
Degree$=8H$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}dB_{u_{2}}dB_{u_{3}}dB_{u_{4}}\right)=\mathbb{E}\left(\dfrac{B_{1}^{4}}{4!}\right)=\dfrac{1}{8}
\end{equation*}
\\
Degree$=4+2H$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}du_{2}du_{3}\right)=\int_{0}^{1}\int_{0}^{u_{3}}\mathbb{E}\left(B_{u_{2}}\right)du_{2}du_{3}=0
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}dB_{u_{2}}du_{3}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{u_{1}}^{u_{3}}dB_{u_{2}}du_{1}du_{3}\right)=\int_{0}^{1}\int_{0}^{u_{3}}\mathbb{E}\left(B_{u_{3}}-B_{u_{1}}\right)du_{1}du_{3}=0
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}du_{2}dB_{u_{3}}\right)=\mathbb{E}\left(\int_{0}^{1}\dfrac{u_{3}^{2}}{2}dB_{u_{3}}\right)=0
\end{equation*}
Degree$=6H+2$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}dB_{u_{2}}dB_{u_{3}}du_{4}\right)=\int_{0}^{1}\mathbb{E}\left(\dfrac{B_{u_{4}}^{3}}{3!}\right)du_{4}=0
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}dB_{u_{2}}du_{3}dB_{u_{4}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\dfrac{B_{u_{3}}^{2}}{2!}du_{3}dB_{u_{4}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{u_{3}}^{1}\dfrac{B_{u_{3}}^{2}}{2!}dB_{u_{4}}du_{3}\right)
\end{equation*}
\begin{equation*}
=\int_{0}^{1}\mathbb{E}\left((B_{1}-B_{u_{3}})\dfrac{B_{u_{3}}^{2}}{2!}\right)du_{3}=0
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}du_{2}dB_{u_{3}}dB_{u_{4}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}B_{u_{2}}du_{2}dB_{u_{3}}dB_{u_{4}}\right)
\end{equation*}
\begin{equation*}
=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{u_{2}}^{u_{4}}B_{u_{2}}dB_{u_{3}}du_{2}dB_{u_{4}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}(B_{u_{4}}-B_{u_{2}})B_{u_{2}}du_{2}dB_{u_{4}}\right)
\end{equation*}
\begin{equation*}
=\mathbb{E}\left(\int_{0}^{1}\int_{u_{2}}^{1}(B_{u_{4}}-B_{u_{2}})B_{u_{2}}dB_{u_{4}}du_{2}\right)=\int_{0}^{1}\mathbb{E}\left(\dfrac{B_{u_{2}}}{2}(B_{1}-B_{u_{2}})^{2}-B_{u_{2}}^{2}(B_{1}-B_{u_{2}})\right)du_{2}=0
\end{equation*}
and
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}dB_{u_{2}}dB_{u_{3}}dB_{u_{4}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{u_{1}}^{u_{3}}dB_{u_{2}}du_{1}dB_{u_{3}}dB_{u_{4}}\right)
\end{equation*}
\begin{equation*}
=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}(B_{u_{3}}-B_{u_{1}})du_{1}dB_{u_{3}}dB_{u_{4}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\int_{u_{1}}^{u_{4}}(B_{u_{3}}-B_{u_{1}})dB_{u_{3}}du_{1}dB_{u_{4}}\right)
\end{equation*}
\begin{equation*}
=\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{4}}\left(\dfrac{B_{u_{4}}^{2}}{2}-\dfrac{B_{u_{1}}^{2}}{2}-B_{u_{1}}(B_{u_{4}}-B_{u_{1}})\right)du_{1}dB_{u_{4}}\right)=\mathbb{E}\left(\int_{0}^{1}\int_{u_{1}}^{1}\left(\dfrac{B_{u_{4}}^{2}}{2}-\dfrac{B_{u_{1}}^{2}}{2}-B_{u_{1}}(B_{u_{4}}-B_{u_{1}})\right)dB_{u_{4}}du_{1}\right)
\end{equation*}
\begin{equation*}
=\int_{0}^{1}\mathbb{E}\left(\dfrac{B_{1}^{3}}{3!}-\dfrac{B_{u_{1}}^{3}}{3!}-\dfrac{B_{u_{1}}^{2}}{2}(B_{1}-B_{u_{1}})-B_{u_{1}}\left(\dfrac{B_{1}^{2}}{2}-\dfrac{B_{u_{1}}^{2}}{2}\right)+B_{u_{1}}^{2}(B_{1}-B_{u_{1}})\right)du_{1}=0
\end{equation*}
Degree$=10H$:
\begin{equation*}
\mathbb{E}\left(\int_{0}^{1}\int_{0}^{u_{5}}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}dB_{u_{1}}dB_{u_{2}}dB_{u_{3}}dB_{u_{4}}dB_{u_{5}}\right)=\mathbb{E}\left(\dfrac{B_{1}^{5}}{5!}\right)=0
\end{equation*}
\\
\\
Now we need to match these expectations with the corresponding deterministic iterated integrals of the cubature paths. In other words, we have
\\
\\
For degree$=2H$:
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}d\omega_{i,u_{1}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\omega_{i,1}=0
\end{equation*}
\\
For degree$=4H$:
\begin{equation*}
\dfrac{1}{2}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}\Rightarrow \dfrac{1}{2}=\dfrac{1}{2}\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{2}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{2}=1
\end{equation*}
\\
For degree$=2H+2$:
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{2}}d\omega_{i,u_{1}}du_{2}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\omega_{i,u_{2}}du_{2}=0
\end{equation*}
and
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{2}}du_{1}d\omega_{i,u_{2}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}u_{2}d\omega_{i,u_{2}}=0
\end{equation*}
For degree$=6H$:
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}d\omega_{i,u_{3}}\Rightarrow \dfrac{1}{3!}\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{3}=0\Rightarrow
\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{3}=0
\end{equation*}
\\
For degree$=4H+2$:
\begin{equation*}
\dfrac{1}{2(2H+1)}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}du_{3}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\omega_{i,u_{3}}^{2}du_{3}=\dfrac{1}{2H+1}
\end{equation*}
and
\begin{equation*}
\dfrac{2H-1}{2(2H+1)}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}du_{2}d\omega_{i,u_{3}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\omega_{i,u_{2}}du_{2}d\omega_{i,u_{3}}=\dfrac{2H-1}{2(2H+1)}
\end{equation*}
and
\begin{equation*}
\dfrac{1}{2(2H+1)}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}d\omega_{i,u_{2}}d\omega_{i,u_{3}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}u_{2}d\omega_{i,u_{2}}d\omega_{i,u_{3}}=\dfrac{1}{2(2H+1)}
\end{equation*}
For degree$=8H$:
\begin{equation*}
\dfrac{1}{8}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}d\omega_{i,u_{3}}d\omega_{i,u_{4}}\Rightarrow \dfrac{1}{4!}\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{4}=\dfrac{1}{8}\Rightarrow
\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{4}=3
\end{equation*}
\\
For degree$=2H+4$:
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}du_{2}du_{3}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\omega_{i,u_{2}}du_{2}du_{3}=0
\end{equation*}
and
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}d\omega_{i,u_{2}}du_{3}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}u_{2}d\omega_{i,u_{2}}du_{3}=0
\end{equation*}
and
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}du_{2}d\omega_{i,u_{3}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\dfrac{u_{3}^{2}}{2}d\omega_{i,u_{3}}=0\Rightarrow\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}u_{3}^{2}d\omega_{i,u_{3}}=0
\end{equation*}
For degree$=6H+2$:
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}d\omega_{i,u_{3}}du_{4}\Rightarrow \dfrac{1}{3!}\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\omega_{i,u_{4}}^{3}du_{4}=0\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\omega_{i,u_{4}}^{3}du_{4}=0
\end{equation*}
and
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}du_{3}d\omega_{i,u_{4}}\Rightarrow \dfrac{1}{2}\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\omega_{i,u_{3}}^{2}du_{3}d\omega_{i,u_{4}}=0
\end{equation*}
\begin{equation*}
\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\omega_{i,u_{3}}^{2}du_{3}d\omega_{i,u_{4}}=0
\end{equation*}
and
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}du_{2}d\omega_{i,u_{3}}d\omega_{i,u_{4}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\omega_{i,u_{2}}du_{2}d\omega_{i,u_{3}}d\omega_{i,u_{4}}=0
\end{equation*}
and
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}d\omega_{i,u_{2}}d\omega_{i,u_{3}}d\omega_{i,u_{4}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}u_{2}d\omega_{i,u_{2}}d\omega_{i,u_{3}}d\omega_{i,u_{4}}=0
\end{equation*}
For degree$=10H$:
\begin{equation*}
0=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{5}}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}d\omega_{i,u_{3}}d\omega_{i,u_{4}}d\omega_{i,u_{5}}\Rightarrow \dfrac{1}{5!}\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{5}=0\Rightarrow
\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{5}=0
\end{equation*}
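Among the matching conditions above, those that involve only the endpoint values $\omega_{i,1}$ (degrees $2H$, $4H$, $6H$, $8H$ and $10H$) state that the discrete measure $\sum_{i}\lambda_{i}\delta_{\omega_{i,1}}$ matches the moments of a standard Gaussian up to order 5. As a sanity check (a Python sketch, not part of the construction), the classical three-point rule with nodes $\pm\sqrt{3},0$ and weights $\frac{1}{6},\frac{1}{6},\frac{2}{3}$ satisfies these conditions:

```python
import math

nodes = [math.sqrt(3.0), -math.sqrt(3.0), 0.0]
weights = [1.0 / 6, 1.0 / 6, 2.0 / 3]

def discrete_moment(p):
    return sum(l * x ** p for l, x in zip(weights, nodes))

# standard Gaussian moments of order 0..5: 1, 0, 1, 0, 3, 0
gaussian_moments = [1.0, 0.0, 1.0, 0.0, 3.0, 0.0]
for p, m in enumerate(gaussian_moments):
    assert abs(discrete_moment(p) - m) < 1e-12
```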
\section{Appendix 2: Extended proof of Theorem 2.6}
In this appendix we present the extended proof of \textbf{Theorem 2.6}.
\begin{proof}
Let us now start to investigate the form of the functions $\omega_{j}$ for $j=1,...,n$.
\\
From Appendix 1 it is possible to see that we have 17 equations, hence we may allow at most 17 unknowns.
\\
Following the work of Lyons and Victoir, we choose two symmetric paths and one path that is constantly zero; this reduces the number of equations to 5.
Hence, assume that there are two continuous functions $\omega_{1,s}$ and $\omega_{2,s}$ with $\omega_{1,s}=-\omega_{2,s}$ for $s\in[0,1]$, and a third path $\omega_{3,s}=0$ for $s\in[0,1]$. With this choice only 5 equations need to be taken into consideration, since the other 12 are automatically satisfied. The 5 equations are the following. First,
\begin{equation}\label{2}
\dfrac{1}{2}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}\Rightarrow \dfrac{1}{2}=\dfrac{1}{2}\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{2}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{2}=1
\end{equation}
Second,
\begin{equation}\label{6}
\dfrac{1}{2(2H+1)}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}du_{3}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\omega_{i,u_{3}}^{2}du_{3}=\dfrac{1}{2H+1}
\end{equation}
Third,
\begin{equation}\label{7}
\dfrac{2H-1}{2(2H+1)}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}du_{2}d\omega_{i,u_{3}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\omega_{i,u_{2}}du_{2}d\omega_{i,u_{3}}=\dfrac{2H-1}{2(2H+1)}
\end{equation}
Fourth,
\begin{equation}\label{8}
\dfrac{1}{2(2H+1)}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}d\omega_{i,u_{2}}d\omega_{i,u_{3}}\Rightarrow \sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{3}}u_{2}d\omega_{i,u_{2}}d\omega_{i,u_{3}}=\dfrac{1}{2(2H+1)}
\end{equation}
Fifth,
\begin{equation}\label{9}
\dfrac{1}{8}=\sum_{i=1}^{n}\lambda_{i}\int_{0}^{1}\int_{0}^{u_{4}}\int_{0}^{u_{3}}\int_{0}^{u_{2}}d\omega_{i,u_{1}}d\omega_{i,u_{2}}d\omega_{i,u_{3}}d\omega_{i,u_{4}}\Rightarrow \dfrac{1}{4!}\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{4}=\dfrac{1}{8}\Rightarrow
\sum_{i=1}^{n}\lambda_{i}\omega_{i,1}^{4}=3
\end{equation}
The other 12 equations are automatically zero by the symmetry of $\omega_{1,s}$ and $\omega_{2,s}$, and by the fact that $\omega_{3,s}=0$, since these 12 equations involve odd functionals of the $\omega$s.
\\
Now we need at most 5 unknowns in order to solve the system of equations. We actually have 6 equations, since the weights must also satisfy $\sum_{i=1}^{3}\lambda_{i}=1$ (we are considering a probability measure). Assume that
\begin{equation*}
\omega_{1,s}=
\begin{cases}
as &\text{for } s\in[0,\frac{1}{3}],\\
b_{1}s+b_{0} &\text{for } s\in[\frac{1}{3},\frac{2}{3}],\\
c_{1}s+c_{0} &\text{for } s\in[\frac{2}{3},1].
\end{cases}
\end{equation*}
With this formulation we have 5 unknowns: $a$, $b_{1}$, $c_{1}$, $\lambda_{1}$ and $\lambda_{3}$. The coefficients $b_{0}$ and $c_{0}$ are not unknowns because their values are forced by the continuity of the path $\omega_{1,s}$, and $\lambda_{2}$ is not an unknown since $\lambda_{2}=\lambda_{1}$ by symmetry. With 5 unknowns and 6 equations there is a risk that the system cannot be solved; however, it will turn out that two of the 6 equations coincide. An alternative approach would be to treat the points where the slope changes, which we fixed at $\frac{1}{3}$ and $\frac{2}{3}$, as two additional unknowns.
\\
Let us now solve the system. Considering equation $(\ref{2})$, we have
\begin{equation*}
\lambda_{1}\omega_{1,1}^{2}+\lambda_{2}\omega_{2,1}^{2}=1\Rightarrow 2\lambda_{1}\omega_{1,1}^{2}=1\Rightarrow 2\lambda_{1}(c_{1}+c_{0})^{2}=1
\end{equation*}
Considering equation $(\ref{6})$, we have
\begin{equation*}
2\lambda_{1}\int_{0}^{1}\omega_{1,u_{3}}^{2}du_{3}=\dfrac{1}{2H+1}\Rightarrow \int_{0}^{\frac{1}{3}}a^{2}u_{3}^{2}du_{3}+\int_{\frac{1}{3}}^{\frac{2}{3}}(b_{1}u_{3}+b_{0})^{2}du_{3}+\int_{\frac{2}{3}}^{1}(c_{1}u_{3}+c_{0})^{2}du_{3}=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow \dfrac{a^{2}}{81}+\dfrac{1}{3b_{1}}\left(\dfrac{8}{27}b_{1}^{3}+b_{0}^{3}+\dfrac{4}{3}b_{1}^{2}b_{0}+2b_{1}b_{0}^{2}\right)-\dfrac{1}{3b_{1}}\left(\dfrac{1}{27}b_{1}^{3}+b_{0}^{3}+\dfrac{1}{3}b_{1}^{2}b_{0}+b_{1}b_{0}^{2}\right)
\end{equation*}
\begin{equation*}
+\dfrac{1}{3c_{1}}\left(c_{1}^{3}+c_{0}^{3}+3c_{1}^{2}c_{0}+3c_{1}c_{0}^{2}\right)-\dfrac{1}{3c_{1}}\left(\dfrac{8}{27}c_{1}^{3}+c_{0}^{3}+\dfrac{4}{3}c_{1}^{2}c_{0}+2c_{1}c_{0}^{2}\right)=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation}\label{third}
\Rightarrow \dfrac{a^{2}}{81}+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{1}{3}\left(\dfrac{19}{27}c_{1}^{2}+\dfrac{5}{3}c_{1}c_{0}+c_{0}^{2}\right)=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation}
Considering now equation $(\ref{8})$, we have
\begin{equation*}
\int_{0}^{1}\int_{0}^{u_{3}}\int_{0}^{u_{2}}du_{1}d\omega_{1,u_{2}}d\omega_{1,u_{3}}=\dfrac{1}{4\lambda_{1}(2H+1)}
\end{equation*}
By Fubini's theorem we have
\begin{equation*}
\int_{0}^{1}\int_{0}^{u_{3}}\int_{u_{1}}^{u_{3}}d\omega_{1,u_{2}}du_{1}d\omega_{1,u_{3}}=\dfrac{1}{4\lambda_{1}(2H+1)}\Rightarrow\int_{0}^{1}\int_{0}^{u_{3}}(\omega_{1,u_{3}}-\omega_{1,u_{1}})du_{1}d\omega_{1,u_{3}}=\dfrac{1}{4\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow\int_{0}^{1}\int_{u_{1}}^{1}(\omega_{1,u_{3}}-\omega_{1,u_{1}})d\omega_{1,u_{3}}du_{1}=\dfrac{1}{4\lambda_{1}(2H+1)}\Rightarrow\int_{0}^{1}\dfrac{\omega_{1,1}^{2}}{2}-\dfrac{\omega_{1,u_{1}}^{2}}{2}-\omega_{1,u_{1}}(\omega_{1,1}-\omega_{1,u_{1}})du_{1}=\dfrac{1}{4\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow \int_{0}^{1}\dfrac{\omega_{1,1}^{2}}{2}+\dfrac{\omega_{1,u_{1}}^{2}}{2}-\omega_{1,1}\omega_{1,u_{1}}du_{1}
=\dfrac{1}{4\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow (c_{1}+c_{0})^{2}-2(c_{1}+c_{0})\left[\int_{0}^{\frac{1}{3}}au_{1}du_{1}+\int_{\frac{1}{3}}^{\frac{2}{3}}b_{1}u_{1}+b_{0}du_{1}+\int_{\frac{2}{3}}^{1}c_{1}u_{1}+c_{0}du_{1}\right]
\end{equation*}
\begin{equation*}
+\int_{0}^{\frac{1}{3}}a^{2}u_{1}^{2}du_{1}+\int_{\frac{1}{3}}^{\frac{2}{3}}(b_{1}u_{1}+b_{0})^{2}du_{1}+\int_{\frac{2}{3}}^{1}(c_{1}u_{1}+c_{0})^{2}du_{1}=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow (c_{1}+c_{0})^{2}-2(c_{1}+c_{0})\left[a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}+c_{1}\dfrac{5}{18}+c_{0}\dfrac{1}{3}\right]
\end{equation*}
\begin{equation*}
+\dfrac{a^{2}}{81}+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{1}{3}\left(\dfrac{19}{27}c_{1}^{2}+\dfrac{5}{3}c_{1}c_{0}+c_{0}^{2}\right)=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow\dfrac{a^{2}}{81}-2(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)
\end{equation*}
\begin{equation*}
+c_{1}^{2}\left(1-\dfrac{5}{9}+\dfrac{19}{81}\right)+c_{0}^{2}\left(1-\dfrac{2}{3}+\dfrac{1}{3}\right)+c_{1}c_{0}\left(2-\dfrac{2}{3}-\dfrac{5}{9}+\dfrac{5}{9}\right)=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow\dfrac{a^{2}}{81}-2(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{55c_{1}^{2}}{81}+\dfrac{2c_{0}^{2}}{3}+\dfrac{4c_{1}c_{0}}{3}=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
Considering now equation $(\ref{7})$, we have
\begin{equation*}
\int_{0}^{1}\int_{0}^{u_{3}}\omega_{1,u_{2}}du_{2}d\omega_{1,u_{3}}=\dfrac{2H-1}{4\lambda_{1}(2H+1)}
\end{equation*}
By Fubini's theorem we have
\begin{equation*}
\int_{0}^{1}\int_{u_{2}}^{1}\omega_{1,u_{2}}d\omega_{1,u_{3}}du_{2}=\dfrac{2H-1}{4\lambda_{1}(2H+1)}\Rightarrow\int_{0}^{1}\omega_{1,u_{2}}(\omega_{1,1}-\omega_{1,u_{2}})du_{2}=\dfrac{2H-1}{4\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow (c_{1}+c_{0})\left[\int_{0}^{\frac{1}{3}}au_{1}du_{1}+\int_{\frac{1}{3}}^{\frac{2}{3}}b_{1}u_{1}+b_{0}du_{1}+\int_{\frac{2}{3}}^{1}c_{1}u_{1}+c_{0}du_{1}\right]
\end{equation*}
\begin{equation*}
-\int_{0}^{\frac{1}{3}}a^{2}u_{1}^{2}du_{1}-\int_{\frac{1}{3}}^{\frac{2}{3}}(b_{1}u_{1}+b_{0})^{2}du_{1}-\int_{\frac{2}{3}}^{1}(c_{1}u_{1}+c_{0})^{2}du_{1}=\dfrac{2H-1}{4\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow (c_{1}+c_{0})\left[a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}+c_{1}\dfrac{5}{18}+c_{0}\dfrac{1}{3}\right]
\end{equation*}
\begin{equation*}
-\dfrac{a^{2}}{81}-\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)-\dfrac{1}{3}\left(\dfrac{19}{27}c_{1}^{2}+\dfrac{5}{3}c_{1}c_{0}+c_{0}^{2}\right)=\dfrac{2H-1}{4\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)-\dfrac{a^{2}}{81}-\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)
\end{equation*}
\begin{equation*}
+c_{1}^{2}\left(\dfrac{5}{18}-\dfrac{19}{81}\right)+c_{0}^{2}\left(\dfrac{1}{3}-\dfrac{1}{3}\right)+c_{1}c_{0}\left(\dfrac{1}{3}+\dfrac{5}{18}-\dfrac{5}{9}\right)=\dfrac{2H-1}{4\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)-\dfrac{a^{2}}{81}-\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{7c_{1}^{2}}{162}+\dfrac{c_{1}c_{0}}{18}=\dfrac{2H-1}{4\lambda_{1}(2H+1)}
\end{equation*}
Considering now equation $(\ref{9})$, we have
\begin{equation*}
\lambda_{1}\omega_{1,1}^{4}+\lambda_{2}\omega_{2,1}^{4}=3\Rightarrow 2\lambda_{1}\omega_{1,1}^{4}=3\Rightarrow \lambda_{1}(c_{1}+c_{0})^{4}=\dfrac{3}{2}
\end{equation*}
Therefore, we have the following system of equations
\begin{equation}\label{system}
\begin{cases}
2\lambda_{1}+\lambda_{3}=1 \qquad \text{with} \qquad \lambda_{1},\lambda_{3}\in[0,1],\\
2\lambda_{1}(c_{1}+c_{0})^{2}=1,\\
\dfrac{a^{2}}{81}+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{1}{3}\left(\dfrac{19}{27}c_{1}^{2}+\dfrac{5}{3}c_{1}c_{0}+c_{0}^{2}\right)=\dfrac{1}{2\lambda_{1}(2H+1)},\\
\dfrac{a^{2}}{81}-2(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{55c_{1}^{2}}{81}+\dfrac{2c_{0}^{2}}{3}+\dfrac{4c_{1}c_{0}}{3}=\dfrac{1}{2\lambda_{1}(2H+1)},\\
(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)-\dfrac{a^{2}}{81}-\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{7c_{1}^{2}}{162}+\dfrac{c_{1}c_{0}}{18}=\dfrac{2H-1}{4\lambda_{1}(2H+1)},\\
\lambda_{1}(c_{1}+c_{0})^{4}=\dfrac{3}{2},
\end{cases}
\end{equation}
with the following unknowns $\lambda_{1},\lambda_{3},a,b_{1},c_{1}$.
\\\\
Consider the two equations in the system above
\begin{equation*}
\lambda_{1}(c_{1}+c_{0})^{4}=\dfrac{3}{2} \qquad \text{and} \qquad 2\lambda_{1}(c_{1}+c_{0})^{2}=1
\end{equation*}
we have
\begin{equation*}
\Rightarrow \lambda_{1}\dfrac{1}{4\lambda_{1}^{2}}=\dfrac{3}{2} \Rightarrow \lambda_{1}=\dfrac{1}{6}.
\end{equation*}
and, taking the positive root,
\begin{equation}\label{knew}
\Rightarrow c_{1}+c_{0}=\sqrt{3}
\end{equation}
Further, using $2\lambda_{1}+\lambda_{3}=1$ we have
\begin{equation*}
\Rightarrow \lambda_{3}=\dfrac{2}{3}
\end{equation*}
Now, consider the equations
\begin{equation*}
\dfrac{a^{2}}{81}-2(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{55c_{1}^{2}}{81}+\dfrac{2c_{0}^{2}}{3}+\dfrac{4c_{1}c_{0}}{3}=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
and
\begin{equation*}
(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)-\dfrac{a^{2}}{81}-\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{7c_{1}^{2}}{162}+\dfrac{c_{1}c_{0}}{18}=\dfrac{2H-1}{4\lambda_{1}(2H+1)}.
\end{equation*}
By summing them, we have
\begin{equation*}
-(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{7c_{1}^{2}}{162}+\dfrac{c_{1}c_{0}}{18}+\dfrac{55c_{1}^{2}}{81}+\dfrac{2c_{0}^{2}}{3}+\dfrac{4c_{1}c_{0}}{3}=\dfrac{3}{2}.
\end{equation*}
\begin{equation}\label{system1}
\Rightarrow-(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{13c_{1}^{2}}{18}+\dfrac{25c_{1}c_{0}}{18}+\dfrac{2c_{0}^{2}}{3}=\dfrac{3}{2}.
\end{equation}
Further, by taking the difference of the two equations
\begin{equation*}
\dfrac{a^{2}}{81}+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{1}{3}\left(\dfrac{19}{27}c_{1}^{2}+\dfrac{5}{3}c_{1}c_{0}+c_{0}^{2}\right)=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
and
\begin{equation*}
\dfrac{a^{2}}{81}-2(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{55c_{1}^{2}}{81}+\dfrac{2c_{0}^{2}}{3}+\dfrac{4c_{1}c_{0}}{3}=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
we obtain
\begin{equation*}
2(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{1}{3}\left(\dfrac{19}{27}c_{1}^{2}+\dfrac{5}{3}c_{1}c_{0}+c_{0}^{2}\right)-\dfrac{55c_{1}^{2}}{81}-\dfrac{2c_{0}^{2}}{3}-\dfrac{4c_{1}c_{0}}{3}=0
\end{equation*}
\begin{equation}\label{system2}
\Rightarrow2(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)-\dfrac{4c_{1}^{2}}{9}-\dfrac{c_{0}^{2}}{3}-\dfrac{7c_{1}c_{0}}{9}=0
\end{equation}
Using equations $(\ref{system1})$ and $(\ref{system2})$ we have
\begin{equation*}
\dfrac{13c_{1}^{2}}{9}+\dfrac{25c_{1}c_{0}}{9}+\dfrac{4c_{0}^{2}}{3}-\dfrac{4c_{1}^{2}}{9}-\dfrac{c_{0}^{2}}{3}-\dfrac{7c_{1}c_{0}}{9}=3
\end{equation*}
\begin{equation*}
\Rightarrow c_{1}^{2}+2c_{1}c_{0}+c_{0}^{2}=3 \Rightarrow c_{1}+c_{0}=\sqrt{3}.
\end{equation*}
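The cancellation just used is purely algebraic: writing $Q_{1}$ and $Q_{2}$ for the quadratic forms in $c_{1},c_{0}$ appearing in equations $(\ref{system1})$ and $(\ref{system2})$, the mixed term drops out and the combination satisfies $2Q_{1}-Q_{2}=(c_{1}+c_{0})^{2}$ identically. A Python sketch (helper names are ours) confirms this identity at random points:

```python
import random

def Q1(c1, c0):
    # quadratic part of equation (system1)
    return 13 * c1 ** 2 / 18 + 25 * c1 * c0 / 18 + 2 * c0 ** 2 / 3

def Q2(c1, c0):
    # quadratic part of equation (system2)
    return 4 * c1 ** 2 / 9 + c0 ** 2 / 3 + 7 * c1 * c0 / 9

random.seed(0)
for _ in range(1000):
    c1 = random.uniform(-10, 10)
    c0 = random.uniform(-10, 10)
    # 2*Q1 - Q2 must equal (c1 + c0)^2 for all c1, c0
    assert abs(2 * Q1(c1, c0) - Q2(c1, c0) - (c1 + c0) ** 2) < 1e-9
```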
We already knew this from equation $(\ref{knew})$, but we did not use it in the derivation above. This means that two of our constraints are in fact one and the same. Hence, we have one constraint fewer than expected, and the number of independent equations now matches the number of unknowns, so we can solve the system.\\
Recall that there are two further equations, coming from the continuity of $\omega_{1}$, namely
\begin{equation*}
\dfrac{a}{3}=\dfrac{b_{1}}{3}+b_{0} \qquad \text{and} \qquad \dfrac{2b_{1}}{3}+b_{0}=\dfrac{2c_{1}}{3}+c_{0}
\end{equation*}
Let us continue to focus on the equation $(\ref{system1})$. Substitute $a=b_{1}+3b_{0}$, $b_{0}=\dfrac{2c_{1}}{3}+c_{0}-\dfrac{2b_{1}}{3}$ and $c_{0}=\sqrt{3}-c_{1}$. In other words, take
\begin{equation*}
a=b_{1}+2c_{1}+3c_{0}-2b_{1}=2c_{1}+3c_{0}-b_{1}=2c_{1}+3\sqrt{3}-3c_{1}-b_{1}=3\sqrt{3}-c_{1}-b_{1}
\end{equation*}
and
\begin{equation*}
b_{0}=\dfrac{2c_{1}}{3}+\sqrt{3}-c_{1}-\dfrac{2b_{1}}{3}=\sqrt{3}-\dfrac{c_{1}}{3}-\dfrac{2b_{1}}{3}
\end{equation*}
Hence, we have that equation $(\ref{system1})$ becomes
\begin{equation*}
-\sqrt{3}\left((3\sqrt{3}-c_{1}-b_{1})\dfrac{1}{18}+b_{1}\dfrac{1}{6}+(\sqrt{3}-\dfrac{c_{1}}{3}-\dfrac{2b_{1}}{3})\dfrac{1}{3}\right)+\dfrac{13c_{1}^{2}}{18}+\dfrac{25c_{1}(\sqrt{3}-c_{1})}{18}+\dfrac{2(\sqrt{3}-c_{1})^{2}}{3}=\dfrac{3}{2}
\end{equation*}
\begin{equation*}
\Rightarrow-\sqrt{3}\left(\dfrac{\sqrt{3}}{2}-\dfrac{c_{1}}{6}-\dfrac{b_{1}}{9}\right)+2+\dfrac{c_{1}\sqrt{3}}{18}=\dfrac{3}{2}
\end{equation*}
\begin{equation*}
\Rightarrow\dfrac{b_{1}\sqrt{3}}{9}+\dfrac{2c_{1}\sqrt{3}}{9}=1\Rightarrow b_{1}+2c_{1}=3\sqrt{3}.
\end{equation*}
\begin{equation*}
\Rightarrow b_{1}=3\sqrt{3}-2c_{1}.
\end{equation*}
We can now consider the third equation of our system (\textit{i.e.} equation $(\ref{third})$) and rewrite it in terms of $c_{1}$. First, let us express the remaining unknowns in terms of $c_{1}$:
\begin{equation*}
a=3\sqrt{3}-c_{1}-b_{1}=3\sqrt{3}-c_{1}-3\sqrt{3}+2c_{1}=c_{1}
\end{equation*}
and
\begin{equation*}
b_{0}=\sqrt{3}-\dfrac{c_{1}}{3}-\dfrac{2b_{1}}{3}=\sqrt{3}-\dfrac{c_{1}}{3}-2\sqrt{3}+\dfrac{4c_{1}}{3}=c_{1}-\sqrt{3}
\end{equation*}
Notice that $a=c_{1}$ and $b_{0}=-c_{0}$, which is in accordance with the results of Lyons and Victoir.\\
We can now proceed with the substitution
\begin{equation*}
\dfrac{c_{1}^{2}}{81}+\dfrac{1}{3}\left(\dfrac{7}{27}(3\sqrt{3}-2c_{1})^{2}+(3\sqrt{3}-2c_{1})(c_{1}-\sqrt{3})+(c_{1}-\sqrt{3})^{2}\right)
\end{equation*}
\begin{equation*}
+\dfrac{1}{3}\left(\dfrac{19}{27}c_{1}^{2}+\dfrac{5}{3}c_{1}(\sqrt{3}-c_{1})+(\sqrt{3}-c_{1})^{2}\right)=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow \dfrac{c_{1}^{2}}{81}+\dfrac{1}{3}\left(7+\dfrac{28c_{1}^{2}}{27}-\dfrac{28c_{1}\sqrt{3}}{9}+3c_{1}\sqrt{3}-9-2c_{1}^{2}+2c_{1}\sqrt{3}+c_{1}^{2}+3-2c_{1}\sqrt{3}\right)
\end{equation*}
\begin{equation*}
+\dfrac{1}{3}\left(\dfrac{19c_{1}^{2}}{27}+\dfrac{5c_{1}\sqrt{3}}{3}-\dfrac{5c_{1}^{2}}{3}+3+c_{1}^{2}-2c_{1}\sqrt{3}\right)=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
\begin{equation*}
c_{1}^{2}\dfrac{1}{27}-c_{1}\dfrac{4\sqrt{3}}{27}+\dfrac{4}{3}=\dfrac{3}{2H+1} \Rightarrow c_{1}^{2}\dfrac{1}{27}-c_{1}\dfrac{4\sqrt{3}}{27}=\dfrac{5-8H}{3(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow c_{1}=\dfrac{4H\sqrt{3}+2\sqrt{3}-\sqrt{-96H^{2}+66H+57}}{2H+1}
\end{equation*}
If $H=\frac{1}{2}$ then
\begin{equation*}
c_{1}=\sqrt{3}\left(2-\sqrt{\dfrac{11}{2}}\right)
\end{equation*}
which is in accordance with the results of Lyons and Victoir.
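As a quick numerical sanity check (a Python sketch, not part of the proof), one can verify that this root indeed solves the quadratic $\frac{c_{1}^{2}}{27}-\frac{4\sqrt{3}c_{1}}{27}=\frac{5-8H}{3(2H+1)}$, that the back-substitutions $a=c_{1}$ and $b_{0}=c_{1}-\sqrt{3}$ hold, and that the value at $H=\frac{1}{2}$ agrees with Lyons and Victoir:

```python
import math

def c1(H):
    return (4 * H * math.sqrt(3) + 2 * math.sqrt(3)
            - math.sqrt(-96 * H ** 2 + 66 * H + 57)) / (2 * H + 1)

for H in (0.5, 0.6, 0.75, 0.9):
    x = c1(H)
    # x must solve  x^2/27 - 4*sqrt(3)*x/27 = (5 - 8H)/(3(2H+1))
    lhs = x ** 2 / 27 - 4 * math.sqrt(3) * x / 27
    rhs = (5 - 8 * H) / (3 * (2 * H + 1))
    assert abs(lhs - rhs) < 1e-12
    # consistency of the back-substitutions a = c1 and b0 = c1 - sqrt(3)
    b1 = 3 * math.sqrt(3) - 2 * x
    a = 3 * math.sqrt(3) - x - b1
    b0 = math.sqrt(3) - x / 3 - 2 * b1 / 3
    assert abs(a - x) < 1e-12
    assert abs(b0 - (x - math.sqrt(3))) < 1e-12

# at H = 1/2 the Lyons-Victoir value sqrt(3)*(2 - sqrt(11/2)) is recovered
assert abs(c1(0.5) - math.sqrt(3) * (2 - math.sqrt(11 / 2))) < 1e-12
```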
\\
Therefore, we know all the unknowns. In particular, the path $\omega_{1,t}$ is given by:
\begin{equation*}
\begin{cases}
\dfrac{4H\sqrt{3}+2\sqrt{3}-\sqrt{-96H^{2}+66H+57}}{2H+1}t, \qquad t\in[0,\frac{1}{3}],\\
\dfrac{2H\sqrt{3}+\sqrt{3}-\sqrt{-96H^{2}+66H+57}}{2H+1}+\dfrac{2\sqrt{-96H^{2}+66H+57}-2H\sqrt{3}-\sqrt{3}}{2H+1}t, \qquad t\in[\frac{1}{3},\frac{2}{3}],\\
\dfrac{\sqrt{-96H^{2}+66H+57}-2H\sqrt{3}-\sqrt{3}}{2H+1}+\dfrac{4H\sqrt{3}+2\sqrt{3}-\sqrt{-96H^{2}+66H+57}}{2H+1}t, \qquad t\in[\frac{2}{3},1],\\
\end{cases}
\end{equation*}
If we let $\alpha$ and $\beta$ be
\begin{equation*}
\alpha:=\dfrac{2H\sqrt{3}+\sqrt{3}}{2H+1} \qquad \text{and} \qquad \beta:=\dfrac{\sqrt{-96H^{2}+66H+57}}{2H+1}
\end{equation*}
then the path $\omega_{1,t}$ can be written in the following form:
\begin{equation*}
\begin{cases}
(2\alpha-\beta) t, \qquad t\in[0,\frac{1}{3}],\\
(\alpha-\beta)+(2\beta-\alpha)t, \qquad t\in[\frac{1}{3},\frac{2}{3}],\\
(\beta-\alpha)+(2\alpha-\beta)t, \qquad t\in[\frac{2}{3},1],\\
\end{cases}
\end{equation*}
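As a final sanity check, the five cubature conditions can be verified numerically for the constructed paths. In the Python sketch below (assumptions: weights $\frac{1}{6},\frac{1}{6},\frac{2}{3}$, paths $\omega_{2}=-\omega_{1}$ and $\omega_{3}=0$; the helper name `fbm_cubature_residuals` is ours), the two iterated integrals containing a $du$ are reduced, by integration by parts for piecewise-linear paths, to $\int_{0}^{1}\omega\,du$ and $\int_{0}^{1}\omega^{2}\,du$:

```python
import math

def fbm_cubature_residuals(H):
    # omega_1 is the piecewise-linear path above; omega_2 = -omega_1 and
    # omega_3 = 0, so odd functionals cancel and only omega_1 enters,
    # with combined weight 2 * (1/6).
    s3 = math.sqrt(3.0)
    c1 = (4 * H * s3 + 2 * s3
          - math.sqrt(-96 * H ** 2 + 66 * H + 57)) / (2 * H + 1)
    # node values of omega_1 at t = 0, 1/3, 2/3, 1 (continuity built in)
    ys = [0.0, c1 / 3, s3 - c1 / 3, s3]
    L = 1.0 / 3
    # exact integrals of a piecewise-linear path and of its square
    I1 = sum(L * (y0 + y1) / 2 for y0, y1 in zip(ys, ys[1:]))
    I2 = sum(L * (y0 * y0 + y0 * y1 + y1 * y1) / 3 for y0, y1 in zip(ys, ys[1:]))
    w1 = ys[-1]
    # integration by parts for bounded-variation paths:
    S = I1 * w1 - I2                     # int (int omega du) d omega
    U = w1 ** 2 / 2 + I2 / 2 - I1 * w1   # int (int u d omega) d omega
    lam2 = 2.0 / 6                       # combined weight of the two paths
    targets = [
        (lam2 * w1 ** 2 / 2, 0.5),                        # equation (2)
        (lam2 * I2, 1 / (2 * H + 1)),                     # equation (6)
        (lam2 * S, (2 * H - 1) / (2 * (2 * H + 1))),      # equation (7)
        (lam2 * U, 1 / (2 * (2 * H + 1))),                # equation (8)
        (lam2 * w1 ** 4 / 24, 1 / 8),                     # equation (9)
    ]
    return [abs(lhs - rhs) for lhs, rhs in targets]

for H in (0.5, 0.6, 0.75, 0.9):
    assert max(fbm_cubature_residuals(H)) < 1e-12
```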
Now we check our result by substituting the values obtained for the unknowns back into the system of equations.\\
In particular, let us focus first on the equation
\begin{equation*}
\dfrac{a^{2}}{81}-2(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)+\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{55c_{1}^{2}}{81}+\dfrac{2c_{0}^{2}}{3}+\dfrac{4c_{1}c_{0}}{3}=\dfrac{1}{2\lambda_{1}(2H+1)}
\end{equation*}
Let us rewrite it in terms of $c_{1}$:
\begin{equation*}
\dfrac{c_{1}^{2}}{81}-2\sqrt{3}\left(c_{1}\dfrac{1}{18}+(3\sqrt{3}-2c_{1})\dfrac{1}{6}+(c_{1}-\sqrt{3})\dfrac{1}{3}\right)+
\end{equation*}
\begin{equation*}
+\dfrac{1}{3}\left(\dfrac{7}{27}(3\sqrt{3}-2c_{1})^{2}+(3\sqrt{3}-2c_{1})(c_{1}-\sqrt{3})+(c_{1}-\sqrt{3})^{2}\right)
\end{equation*}
\begin{equation*}
+\dfrac{55c_{1}^{2}}{81}+\dfrac{2(\sqrt{3}-c_{1})^{2}}{3}+\dfrac{4c_{1}(\sqrt{3}-c_{1})}{3}=\dfrac{3}{2H+1}
\end{equation*}
\begin{equation*}
\Rightarrow c_{1}^{2}\dfrac{1}{27}-c_{1}\dfrac{4\sqrt{3}}{27}+\dfrac{4}{3}=\dfrac{3}{2H+1}
\end{equation*}
as before. Now, let us focus on the equation
\begin{equation*}
(c_{1}+c_{0})\left(a\dfrac{1}{18}+b_{1}\dfrac{1}{6}+b_{0}\dfrac{1}{3}\right)-\dfrac{a^{2}}{81}-\dfrac{1}{3}\left(\dfrac{7}{27}b_{1}^{2}+b_{1}b_{0}+b_{0}^{2}\right)+\dfrac{7c_{1}^{2}}{162}+\dfrac{c_{1}c_{0}}{18}=\dfrac{2H-1}{4\lambda_{1}(2H+1)}
\end{equation*}
Following the same procedure we have
\begin{equation*}
\sqrt{3}\left(c_{1}\dfrac{1}{18}+(3\sqrt{3}-2c_{1})\dfrac{1}{6}+(c_{1}-\sqrt{3})\dfrac{1}{3}\right)-\dfrac{c_{1}^{2}}{81}
\end{equation*}
\begin{equation*}
-\dfrac{1}{3}\left(\dfrac{7}{27}(3\sqrt{3}-2c_{1})^{2}+(3\sqrt{3}-2c_{1})(c_{1}-\sqrt{3})+(c_{1}-\sqrt{3})^{2}\right)
\end{equation*}
\begin{equation*}
+\dfrac{7c_{1}^{2}}{162}+\dfrac{c_{1}(\sqrt{3}-c_{1})}{18}=\dfrac{3(2H-1)}{2(2H+1)}
\end{equation*}
\begin{equation*}
\Rightarrow -c_{1}^{2}\dfrac{1}{27}+c_{1}\dfrac{4\sqrt{3}}{27}+\dfrac{1}{6}=\dfrac{3(2H-1)}{2(2H+1)} \Rightarrow c_{1}^{2}\dfrac{1}{27}-c_{1}\dfrac{4\sqrt{3}}{27}=\dfrac{5-8H}{3(2H+1)}.
\end{equation*}
as before. The other equations are straightforward to check. Therefore, our solution is consistent, and when $H=\frac{1}{2}$ it coincides with the solution obtained by Lyons and Victoir.
\end{proof}
\small
% https://arxiv.org/abs/1712.07368
\title{Notes on noncommutative Fitting invariants}
\begin{abstract}
To each finitely presented module $M$ over a commutative ring $R$ one can associate an $R$-ideal $\mathrm{Fitt}_{R}(M)$, which is called the (zeroth) Fitting ideal of $M$ over $R$. This is of interest because it is always contained in the $R$-annihilator $\mathrm{Ann}_{R}(M)$ of $M$, but is often much easier to compute. This notion has recently been generalised to that of so-called `Fitting invariants' over certain noncommutative rings; the present author considered the case in which $R$ is an $\mathfrak{o}$-order $\Lambda$ in a finite dimensional separable algebra, where $\mathfrak{o}$ is an integrally closed commutative noetherian complete local domain. This article is a survey of known results and open problems in this context. In particular, we investigate the behaviour of Fitting invariants under direct sums. In the appendix, we present a new approach to Fitting invariants via Morita equivalence.
\end{abstract}
\section*{Introduction}
Let $R$ be a commutative unitary ring and let $M$ be a finitely presented
$R$-module. This means that there is an exact sequence
\begin{equation} \label{eqn:finite-presentation-comm}
R^a \stackrel{h}{\longrightarrow} R^b \longrightarrow
M \longrightarrow 0,
\end{equation}
where $a$ and $b$ are positive integers.
In other words, the module $M$ is finitely generated and there is
a finite number of relations between the generators.
This information is incorporated in the $a \times b$ matrix $h$.
Note that every finitely generated module over a noetherian ring
is indeed finitely presented.
Suppose that $a \geq b$. Then the (zeroth) Fitting ideal of $M$
over $R$ is defined to be the $R$-ideal generated by all $b \times b$
minors of the matrix $h$:
\[
\mathrm{Fitt}_R(M) := \langle \det(H) \mid H \in S_b(h) \rangle_R,
\]
where $S_b(h)$ denotes the set of all $b \times b$ submatrices of $h$.
In the case $a<b$ one simply puts $\mathrm{Fitt}_R(M) := 0$. This notion was
introduced by the German mathematician
Hans Fitting \cite{Fitting-DMV} who showed that it is
in fact independent of the chosen finite presentation $h$.
Fitting was a student of Emmy Noether and is also famous for his
contributions to group theory.
Fitting ideals became an important tool in commutative algebra.
A key property is that the Fitting ideal of $M$ is always contained in the
$R$-annihilator ideal of $M$, but in many cases is much easier to
compute. For instance, it behaves well under epimorphisms,
certain exact sequences, and direct sums of
$R$-modules: if $M$ and $N$ are two finitely presented $R$-modules,
then one has an equality
\begin{equation} \label{eqn:additivity-comm}
\mathrm{Fitt}_R(M \oplus N) = \mathrm{Fitt}_R(M) \cdot \mathrm{Fitt}_R(N).
\end{equation}
For a full account of the theory, we refer the reader to
\cite{MR0460383}.
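To make the definition concrete over $R=\mathbb{Z}$, where every ideal is principal: $\mathrm{Fitt}_{\mathbb{Z}}(M)$ is generated by the greatest common divisor of the $b \times b$ minors of $h$, and the additivity \eqref{eqn:additivity-comm} can be seen on block-diagonal presentations. A small Python sketch (the helper names are ours, not from the text):

```python
from itertools import combinations
from math import gcd

def det(M):
    """Integer determinant via Laplace expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def fitting_generator(h):
    """Non-negative generator of Fitt_Z(coker h), where the a x b integer
    matrix h encodes a presentation Z^a -> Z^b (rows = relations)."""
    a, b = len(h), len(h[0])
    if a < b:
        return 0  # Fitt_R(M) := 0 when a < b
    g = 0
    for rows in combinations(range(a), b):
        g = gcd(g, det([h[i] for i in rows]))
    return g

# Z^3 -> Z^2 with relations (2,0), (0,3), (4,5); the cokernel is Z/2:
print(fitting_generator([[2, 0], [0, 3], [4, 5]]))  # 2
# diag(4, 6) presents Z/4 + Z/6; Fitt_Z = 24Z = 4Z * 6Z, matching additivity:
print(fitting_generator([[4, 0], [0, 6]]))  # 24
```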
Fitting ideals have many applications in number theory.
We give two typical examples. If $L/K$
is a finite Galois extension of number fields with Galois group $G$,
then the class group $\mathrm{cl}_L$ of $L$ has a natural structure as a module
over the group ring $\mathbb{Z}[G]$. If $p$ is a prime then the
$p$-part $\mathbb{Z}_p \otimes_{\mathbb{Z}} \mathrm{cl}_L$ of the class group is a module
over the $p$-adic group ring $\mathbb{Z}_p[G]$.
Now assume that $p$ is odd and that $L/K$ is a CM-extension.
If $G$ is abelian then Greither \cite{MR2371374} has computed the
Fitting ideal of the Pontryagin dual of the minus part of
$\mathbb{Z}_p \otimes_{\mathbb{Z}} \mathrm{cl}_L$ via the equivariant Tamagawa number conjecture.
This gives strong evidence for a conjecture of Brumer that asserts that
certain `Stickelberger elements' (constructed from values at zero of Artin
$L$-functions attached to the irreducible characters of $G$)
annihilate the class group.
Fitting ideals also appear in Iwasawa theory. The formulation and the
proof of the (classical) main conjecture for
totally real fields by Wiles \cite{MR1053488} makes heavy use
of the close relation between characteristic ideals and Fitting ideals
of Iwasawa modules.
Fitting ideals of arithmetic objects are also interesting
in their own right. Kurihara
and Miura \cite{MR2805636} showed that (away from the $2$-primary part) the
Fitting ideal of the minus class group
of an absolutely abelian imaginary number field coincides with
the Stickelberger ideal (as conjectured by Kurihara
\cite{MR1998607}). This gives the precise arithmetical
interpretation of the latter ideal.
As Galois groups are in general non-abelian, it is natural to ask
whether analogous invariants can be defined for modules over
noncommutative rings such as $p$-adic group rings.
Grime considered several cases in his PhD thesis \cite{grime_thesis},
including matrix rings over commutative rings. We will describe an
approach to noncommutative Fitting invariants over rings that
are Morita equivalent to a commutative ring in the appendix.
This is essentially a generalisation of Grime's approach (and of
\cite[\S 2]{MR3092262}). Parker has treated the case of $p$-adic group rings
in his PhD thesis \cite{parker_thesis} under the hypothesis,
which is sometimes too restrictive for applications,
that one can choose $a = b$ in
\eqref{eqn:finite-presentation-comm}.
Now let $\mathfrak o$ be an integrally closed commutative noetherian
complete local domain with field of quotients $F$.
Let $\Lambda$ be an $\mathfrak o$-order in a finite dimensional separable
algebra $A$ over $F$. We will call such an order a Fitting order
over $\mathfrak o$. A standard example is that of $p$-adic group rings
$\mathbb{Z}_p[G]$ where $p$ is a prime and $G$ is a finite group.
The Iwasawa algebra $\mathbb{Z}_p \llbracket G \rrbracket$ of a one-dimensional
$p$-adic Lie group $G$ is a second example.
Let $\Lambda$ be an arbitrary Fitting order and let $M$ be a finitely
presented $\Lambda$-module. In \cite{MR2609173} the present author
defined the `maximal Fitting invariant'
$\mathrm{Fitt}_{\Lambda}^{\max}(M)$ of $M$ over $\Lambda$ as an
equivalence class of certain modules over the centre of $\Lambda$
using reduced norms. If $\Lambda$ is commutative then the reduced norm
coincides with the usual determinant and thus this notion is compatible
with that of the classical Fitting ideal.
This approach has been studied further by Johnston and the present author
in \cite{MR3092262}. It has also been applied in number theory
to study the class group and other Galois modules
in extensions with arbitrary Galois groups
\cite{MR2976321, MR3072281, MR2801311, non-abelian-zeta,
MR3739317, non-abelian-Brumer-Stark}
(see also \cite{Brumer-Gross-Stark} for a survey).
In this article we do not use the notion of `$\mathrm{Nrd}(\Lambda)$-equivalence'
as in \cite{MR2609173}. We essentially follow the alternative approach in
\cite[\S 3.5]{MR3092262} and define Fitting invariants as a genuine ideal
of a certain commutative ring $\mathcal{I}(\Lambda)$ that contains the
centre of $\Lambda$ and the reduced norms of every matrix with entries
in $\Lambda$. As long as we are only interested in annihilation results,
this approach has no disadvantage over the more complicated notion of
$\mathrm{Nrd}(\Lambda)$-equivalence. The latter is only necessary when one
wishes to relate Fitting invariants to (relative) $K$-theory.
In order to obtain annihilators from $\mathrm{Fitt}_{\Lambda}^{\max}(M)$
one has to multiply by a certain ideal in
$\mathcal{I}(\Lambda)$ which we call the denominator ideal.
In this article we report on basic properties of noncommutative
Fitting invariants and on lower bounds for the denominator ideal.
In particular, we consider the case where $\Lambda$ is a $p$-adic
group ring. Inspired by \cite[Lemma 6]{MR2046598} and
recent work of Kataoka \cite{Kataoka},
we give a new shorter and simpler proof of a proposition in
\cite{MR2609173} on the behaviour of Fitting invariants
under Pontryagin duality.
Finally, in \S\ref{sec:additivity} we investigate whether
Fitting invariants are additive in the sense of \eqref{eqn:additivity-comm}
when working over noncommutative rings.
We show that this property does hold for certain classes of
Fitting orders but, perhaps surprisingly, that it does not hold for
non-maximal hereditary Fitting orders over complete discrete valuation rings.
In the appendix, we present a new approach to Fitting invariants
for unitary rings $\Lambda$ which are Morita equivalent to
a commutative ring. This generalises \cite[\S 2]{MR3092262},
where the case of matrix rings over commutative rings is considered.
In many cases we will not provide rigorous proofs. Instead we
try to motivate the results and give many examples.
\subsection*{Acknowledgements}
The author acknowledges financial support provided by the
Deutsche Forschungsgemeinschaft (DFG)
within the Heisenberg programme (No.\, NI 1230/3-1).
I am indebted to Masato Kurihara, Kenichi Bannai
and Takeshi Tsuji for the excellent organisation of the Iwasawa 2017
conference, where I had the opportunity to give a Preparatory Lecture
Series on `Non-abelian Stark-type conjectures and noncommutative Iwasawa
theory' including most of the material presented in this article.
I also thank Henri Johnston, Takenori Kataoka and David Watson
for fruitful discussions
and help concerning various aspects of Fitting invariants.
\subsection*{Notation and conventions}
All rings are assumed to have an identity element and all modules are assumed
to be left modules unless otherwise stated.
If $\Lambda$ is a ring, we write $M_{m \times n}(\Lambda)$ for the set of all
$m \times n$ matrices with entries in $\Lambda$.
We denote the group of invertible
matrices in $M_{n \times n}(\Lambda)$ by $\mathrm{GL}_n(\Lambda)$
and let $\boldsymbol{1}_n \in \mathrm{GL}_n(\Lambda)$ be the $n \times n$
identity matrix.
Moreover, we let $\zeta(\Lambda)$ denote the centre of the ring $\Lambda$.
We shall sometimes abuse notation by using the symbol $\oplus$ to denote
the direct product of rings or orders.
\section{The commutative case}
The material presented in this section originates with Fitting
\cite{Fitting-DMV}. We refer the reader to \cite{MR0460383}
and \cite[\S 20]{MR1322960} for further details.
Let $R$ be a commutative ring and let $M$ be a finitely presented
$R$-module. Choose a finite presentation $h$ as in
\eqref{eqn:finite-presentation-comm} and define
\[
\mathrm{Fitt}_{R}(M) := \left\{\begin{array}{lll}
0 & \mathrm{ if } & a < b\\
\langle \det(H) \mid H \in S_b(h) \rangle_{R}
& \mathrm{ if } & a \geq b,
\end{array}\right.
\]
where we recall that $S_b(h)$ denotes the set of all $b \times b$
submatrices of $h$.
We call $\mathrm{Fitt}_R(M)$ the \emph{Fitting ideal} of $M$ over $R$.
\begin{theorem} \label{thm:Fitt-well-comm}
Let $R$ be a commutative ring and let $M$ be a finitely presented
$R$-module. Then $\mathrm{Fitt}_R(M)$ is independent of the choice of $h$
and thus is well-defined.
\end{theorem}
The idea of the proof is as follows
(for a full proof see \cite[\S 3.1]{MR0460383} or \cite[\S 20.2]{MR1322960}).
Since two ideals are equal
if and only if they become equal in every localisation of $R$,
we can and do assume that $R$ is local. We choose $b' \in \mathbb{N}$ minimal
such that there is a surjection $R^{b'} \twoheadrightarrow M$.
Similarly, we choose $a' \in \mathbb{N}$ minimal such that there is a
finite presentation
\[
R^{a'} \stackrel{h'}{\longrightarrow} R^{b'} \longrightarrow
M \longrightarrow 0.
\]
It suffices to show that the Fitting ideals coming from $h$
and $h'$ coincide.
We may view \eqref{eqn:finite-presentation-comm} as the truncation of
a free resolution $\mathcal{F}$ of $M$. Similarly,
we may view $h'$ as a truncation of a \emph{minimal} free resolution
$\mathcal{F}'$ of $M$. However, as $R$ is local,
every free resolution is isomorphic to the direct sum of
$\mathcal F'$ and a trivial complex (see \cite[Theorem 20.2]{MR1322960}).
In particular, there are invertible matrices $X \in \mathrm{GL}_a(R)$ and
$Y \in \mathrm{GL}_b(R)$ such that
\[
h \circ X = Y \circ \left( \begin{array}{ccc}
h' & 0 & 0 \\ 0 & \boldsymbol{1} & 0
\end{array}\right),
\]
where $\boldsymbol{1} = \boldsymbol{1}_{b-b'}$
is a $(b-b') \times (b-b')$ identity matrix.
As $\det(Y)$ belongs to $R^{\times}$, we may assume that $Y=1$.
By an exercise in linear algebra that uses the multilinearity of
the determinant we may likewise assume that $X=1$.
The result now follows easily.
\begin{example} \label{ex:R/I}
Let $I = \langle r_1, \dots, r_a \rangle_R$
be a finitely generated ideal of $R$ and
take $M = R/I$. Then we have a finite presentation
\[
R^a \longrightarrow R \longrightarrow R/I \longrightarrow 0,
\]
where the first arrow maps the $i$-th standard basis vector of $R^a$ to
$r_i$, $1 \leq i \leq a$. Thus we have that
\[
\mathrm{Fitt}_R(R/I) = I.
\]
\end{example}
\begin{remark} \label{rem:base-change}
Let $R \rightarrow S$ be a homomorphism of commutative rings.
As taking tensor products is a right exact functor, a finite
presentation \eqref{eqn:finite-presentation-comm} of the
$R$-module $M$ yields a finite presentation
\[
S^a \longrightarrow S^b \longrightarrow S \otimes_R M
\longrightarrow 0
\]
of $S \otimes_R M$.
Fitting ideals therefore commute with base change:
\[
\mathrm{Fitt}_S (S \otimes_R M) = S \otimes_R \mathrm{Fitt}_R(M).
\]
\end{remark}
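For $S = \mathbb{Z}/n\mathbb{Z}$ both sides of this equality are principal ideals, so it suffices to compare generators, each computed as a gcd (a sketch with our own helper names; recall that the subgroup of $\mathbb{Z}/n\mathbb{Z}$ generated by $x$ consists of the multiples of $\gcd(x,n)$):

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def minors(h, b):
    return [det([h[i] for i in rows]) for rows in combinations(range(len(h)), b)]

h = [[2, 0], [0, 3], [4, 5]]  # a presentation over Z with Fitt_Z(M) = 2Z
n = 10                        # base change along Z -> Z/10Z

g = reduce(gcd, minors(h, 2))
# LHS: ideal of Z/n generated by the reduced minors; RHS: reduction of gZ.
lhs = reduce(gcd, [m % n for m in minors(h, 2)] + [n])
rhs = gcd(g, n)
print(g, lhs, rhs)  # 2 2 2
```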
\begin{remark}
Let $M$ be a finitely presented $R$-module with a finite presentation
$h$ as in \eqref{eqn:finite-presentation-comm}.
For each integer $i \geq 0$ one can define `higher Fitting ideals'
$\mathrm{Fitt}_R^i(M)$ of $M$ that are generated by the minors of $h$
of size $b-i$. These invariants form an increasing sequence
\[
\mathrm{Fitt}_R(M) = \mathrm{Fitt}_R^0(M) \subseteq \mathrm{Fitt}_R^1(M)
\subseteq \mathrm{Fitt}_R^2(M) \subseteq \dots
\]
of $R$-ideals. Moreover, if $M$ can be generated by $q$ elements,
then $\mathrm{Fitt}_R^q(M) = R$. See \cite[\S 3, Theorem 2]{MR0460383}
for a proof. If $R$ is a principal ideal domain, then the higher
Fitting ideals $\mathrm{Fitt}^i_R(M)$ for all $i \geq 0$ determine
the $R$-module $M$ up to isomorphism. This can be deduced from the
structure theorem of finitely generated modules over principal ideal
domains (see \cite[\S 1.1]{MR1998607}, for instance).
\end{remark}
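Over $R=\mathbb{Z}$ this can be made explicit: writing $D_k$ for the gcd of all $k \times k$ minors of $h$ (so $\mathrm{Fitt}^i_{\mathbb{Z}}(M) = D_{b-i}\mathbb{Z}$), the standard fact that $D_{k-1} \mid D_k$ lets one read off the invariant factors as the quotients $D_k/D_{k-1}$. A sketch (helper names ours):

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def D(h, k):
    """gcd of all k x k minors of h (k rows and k columns chosen)."""
    if k == 0:
        return 1
    a, b = len(h), len(h[0])
    vals = [det([[h[i][j] for j in cols] for i in rows])
            for rows in combinations(range(a), k)
            for cols in combinations(range(b), k)]
    return reduce(gcd, vals)

h = [[4, 0], [0, 6]]  # presents Z/4 + Z/6, which is Z/2 + Z/12
b = len(h[0])
fitt_gens = [D(h, b - i) for i in range(b + 1)]  # Fitt^0, Fitt^1, Fitt^2
inv_factors = [D(h, k) // D(h, k - 1) for k in range(1, b + 1)]
print(fitt_gens, inv_factors)  # [24, 2, 1] [2, 12]
```

So the higher Fitting ideals recover $M \simeq \mathbb{Z}/2 \oplus \mathbb{Z}/12$, in line with the remark.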
Let us denote the $R$-annihilator ideal of $M$ by $\mathrm{Ann}_R(M)$.
The main interest in Fitting ideals comes from the following fact.
\begin{theorem} \label{thm:fitt-ann-comm}
Let $R$ be a commutative ring and let $M$ be a finitely presented
$R$-module. Then one has an inclusion
\[
\mathrm{Fitt}_R(M) \subseteq \mathrm{Ann}_R(M).
\]
\end{theorem}
\begin{proof}
Choose a finite presentation \eqref{eqn:finite-presentation-comm}
of $M$. Let $H \in S_b(h)$ be a submatrix. As $M$ is a
homomorphic image of the cokernel of $H$, we may assume that
$h = H$ and thus $a=b$. Let $H^{\ast} \in M_{b \times b}(R)$
be the adjoint matrix of $H$. Then
$H^{\ast} H = H H^{\ast} = \det(H) \boldsymbol{1}_{b}$
and so the result follows from the
commutative diagram
\[
\xymatrix{
R^{b} \ar@{>}[rr]^{H} & &
R^{b} \ar@{>}[d]^{\det(H)} \ar@{>}[lld]_{H^{\ast}} \ar@{>>}[rr] & & M \ar@{>}[d]^{\det(H)} \\
R^{b} \ar@{>}[rr]^{H} & & R^{b} \ar@{>>}[rr] & & M
}
\]
once one notes that the right vertical arrow is zero.
\end{proof}
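The pivotal identity $H^{\ast} H = \det(H) \boldsymbol{1}_b$ in the proof can be sanity-checked numerically via the usual cofactor construction of the adjugate (helper names ours):

```python
def det(M):
    if not M:
        return 1  # determinant of the empty 0 x 0 matrix
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def adjugate(H):
    """Transpose of the cofactor matrix of H."""
    n = len(H)
    cof = [[(-1) ** (i + j) * det([[H[r][c] for c in range(n) if c != j]
                                   for r in range(n) if r != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

H = [[2, 1], [3, 4]]
A = adjugate(H)
prod = [[sum(A[i][k] * H[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(det(H), prod)  # 5 [[5, 0], [0, 5]], i.e. H*H = det(H) * identity
```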
\begin{example} \label{ex:Fitting-Z}
Let $R = \mathbb{Z}$ and let $M$ be an arbitrary finitely generated $\mathbb{Z}$-module.
By the fundamental theorem of finitely generated abelian groups
there are unique integers $f,n \geq 0$ and positive integers
$1 < d_1 \mid d_2 \mid \dotsm\mid d_n$ such that
\[
M \simeq \mathbb{Z}^f \oplus \bigoplus_{i=1}^n \mathbb{Z}/ d_i \mathbb{Z}.
\]
If $f > 0$ then clearly $\mathrm{Fitt}_{\mathbb{Z}}(M) = 0$. If $f=0$
we can take for $h$ the diagonal matrix with entries
$d_1, \dots, d_n$. It follows that
\[
\mathrm{Fitt}_{\mathbb{Z}}(M) = \left\{ \begin{array}{lll}
0 & \mathrm{ if } & f>0\\
(\prod_{i=1}^{n} d_i) \mathbb{Z} & \mathrm{ if } & f=0.
\end{array}\right.
\]
Moreover, we clearly have
\[
\mathrm{Ann}_{\mathbb{Z}}(M) = \left\{ \begin{array}{lll}
0 & \mathrm{ if } & f>0\\
d_n \mathbb{Z} & \mathrm{ if } & f=0.
\end{array}\right.
\]
In particular, the inclusion $\mathrm{Fitt}_{\mathbb{Z}}(M) \subseteq \mathrm{Ann}_{\mathbb{Z}}(M)$
is proper if and only if $f=0$ and $n>1$. Of course, similar observations
hold for every principal ideal domain $R$.
\end{example}
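A quick numerical restatement for $f = 0$ (variable names ours): the Fitting ideal is generated by the product of the invariant factors, while the annihilator is generated by their lcm, which equals $d_n$ because of the divisibility chain:

```python
from math import gcd, prod  # math.prod requires Python 3.8+
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

ds = [2, 12]                   # invariant factors with d_1 | d_2
fitt = prod(ds)                # generator of Fitt_Z(M)
ann = reduce(lcm, ds)          # generator of Ann_Z(M); equals d_n = 12 here
print(fitt, ann, fitt != ann)  # 24 12 True  (proper inclusion since n > 1)
```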
\begin{example} \label{ex:augmentation-ideal}
Let $G$ be a finite abelian group and let $\Delta G$ be the kernel of
the natural augmentation map $\mathbb{Z}[G] \rightarrow \mathbb{Z}$
that sends each $g \in G$ to $1$. It is straightforward to show that
\[
\mathrm{Ann}_{\mathbb{Z}[G]}(\Delta G) = N_G \mathbb{Z},
\]
where $N_G := \sum_{g \in G} g$ (see \cite[Satz 1.3]{MR3014997}).
By Theorem \ref{thm:fitt-ann-comm}
we must have
\[
\mathrm{Fitt}_{\mathbb{Z}[G]}(\Delta G) = m N_G \mathbb{Z}
\]
for some integer $m$. We now apply Remark \ref{rem:base-change}
with $R = \mathbb{Z}[G]$ and $S = \mathbb{Z}$ so that
\[
\mathrm{Fitt}_{\mathbb{Z}}(\mathbb{Z} \otimes_{\mathbb{Z}[G]} \Delta G) = m |G| \mathbb{Z}.
\]
Moreover, we have isomorphisms of abelian groups
$\mathbb{Z} \otimes_{\mathbb{Z}[G]} \Delta G
\simeq \Delta G / (\Delta G)^2 \simeq G$ so that
\[
\mathrm{Fitt}_{\mathbb{Z}}(\mathbb{Z} \otimes_{\mathbb{Z}[G]} \Delta G) = |G| \mathbb{Z}
\]
by Example \ref{ex:Fitting-Z}. It follows that $m \in \mathbb{Z}^{\times}$
and thus
\[
\mathrm{Fitt}_{\mathbb{Z}[G]}(\Delta G) = N_G \mathbb{Z} = \mathrm{Ann}_{\mathbb{Z}[G]}(\Delta G).
\]
\end{example}
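For a concrete check of the annihilation statement, take $G = \mathbb{Z}/4\mathbb{Z}$ and model $\mathbb{Z}[G]$ by coefficient vectors with cyclic convolution as multiplication (a minimal sketch, names ours):

```python
n = 4  # G = Z/4Z; an element of Z[G] is its coefficient list at 1, g, g^2, g^3

def mul(x, y):
    """Multiplication in Z[G] as cyclic convolution of coefficient vectors."""
    z = [0] * n
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            z[(i + j) % n] += a * b
    return z

N = [1] * n  # the norm element N_G = sum over g in G
for i in range(1, n):
    v = [0] * n
    v[0] -= 1
    v[i] += 1                # the generator g^i - 1 of Delta G
    print(mul(N, v))         # [0, 0, 0, 0] each time: N_G annihilates Delta G
```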
We now record some basic facts about Fitting ideals.
\begin{lemma} \label{lem:basic-props-comm}
Let $R$ be a commutative ring and let $M_1$, $M_2$, $M_3$
be finitely presented $R$-modules.
\begin{enumerate}
\item
If $\pi: M_1 \twoheadrightarrow M_2$ is an epimorphism, then
$\mathrm{Fitt}_R(M_1) \subseteq \mathrm{Fitt}_R(M_2)$.
\item
Fitting ideals behave well under direct sums:
\[
\mathrm{Fitt}_R(M_1 \oplus M_3) = \mathrm{Fitt}_R(M_1) \cdot \mathrm{Fitt}_R(M_3).
\]
\item
If $M_1 \stackrel{\iota}{\rightarrow} M_2 \rightarrow M_3 \rightarrow 0$
is an exact sequence, then
\[
\mathrm{Fitt}_R(M_1) \cdot \mathrm{Fitt}_R(M_3) \subseteq \mathrm{Fitt}_R(M_2).
\]
\end{enumerate}
\end{lemma}
\begin{proof}
We only sketch the proof. Let $R^{a_1} \stackrel{h_1}{\longrightarrow}
R^{b_1} \stackrel{\pi_1}{\longrightarrow} M_1 \rightarrow 0$ be a
finite presentation of $M_1$.
Put $\pi_2 := \pi \circ \pi_1$.
Then one may construct a finite presentation
\[
R^{a_2} \xrightarrow{(h_1 \mid \ast)}
R^{b_1} \stackrel{\pi_2}{\longrightarrow} M_2 \longrightarrow 0
\]
of $M_2$ by adding more relations if necessary. This shows (i).
For (iii) we may therefore assume that $\iota$ is injective.
Let $h_1$ and $h_3$ be finite presentations of $M_1$ and $M_3$,
respectively. As in the proof of the horseshoe lemma (see
\cite[Lemma 2.2.8]{MR1269324}, for instance) one can construct
a finite presentation $h_2$ of $M_2$ of shape
\[
\left( \begin{array}{cc}
h_1 & g \\ 0 & h_3
\end{array}
\right).
\]
If $M_2 = M_1 \oplus M_3$ one may additionally assume that $g=0$.
From this one can deduce (ii) and (iii).
\end{proof}
\begin{example}
Let $I_1, \dots, I_n$ be finitely generated ideals of $R$.
Then it follows from Example \ref{ex:R/I} and Lemma
\ref{lem:basic-props-comm}(ii) that
\[
\mathrm{Fitt}_R\left(\bigoplus_{j=1}^n R/I_j\right) = \prod_{j=1}^{n} I_j.
\]
\end{example}
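Over $\mathbb{Z}$ the Fitting ideal of a finite module is generated by its order (Example \ref{ex:Fitting-Z}), so for finite $\mathbb{Z}$-modules the inclusion in Lemma \ref{lem:basic-props-comm}(iii) is in fact an equality. A block-triangular check of the presentation shape appearing in the proof (helper names ours):

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def fitt(h):
    """Non-negative generator of Fitt_Z(coker h)."""
    b = len(h[0])
    ms = [det([h[i] for i in rows]) for rows in combinations(range(len(h)), b)]
    return abs(reduce(gcd, ms))

h1, h3 = [[4]], [[6]]   # present M1 = Z/4 and M3 = Z/6
h2 = [[4, 7],           # block-triangular presentation of an extension
      [0, 6]]           # of coker(h3) by coker(h1), with g = (7)
print(fitt(h1) * fitt(h3), fitt(h2))  # 24 24
```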
\begin{example}
Let $p$ be a prime and let $R = \mathbb{Z}_p\llbracket T \rrbracket$ be the
power series ring in one variable over $\mathbb{Z}_p$.
Let $M$ be a finitely generated
torsion $R$-module. Then by the structure theorem for Iwasawa modules
\cite[Theorem 5.3.8]{MR2392026} there is a pseudo-isomorphism
\[
\alpha: M \longrightarrow \bigoplus_{i = 1}^s R / p^{m_i} \oplus
\bigoplus_{j = 1}^t R / F_j^{n_j},
\]
where $s,t \geq 0$, $m_i, n_j \geq 1$ are integers and the $F_j$
are distinguished irreducible polynomials.
This means in particular that $\alpha$ becomes an isomorphism in every localisation
of $R$ at a prime ideal of height $1$.
The characteristic ideal
$\mathrm{Char}_R(M)$ of $M$ is defined to be the $R$-ideal generated by
$\prod_{i=1}^s p^{m_i} \cdot \prod_{j=1}^t F_j^{n_j}$.
Now assume that $M$ contains no finite non-trivial submodule
(that is $\alpha$ is injective). Then the projective dimension
of $M$ is at most $1$ by
\cite[Proposition 5.3.19(i)]{MR2392026}.
Since $R$ is a local ring, every projective
$R$-module is free and so there is a short exact sequence
\[
0 \longrightarrow R^a \longrightarrow R^a \longrightarrow
M \longrightarrow 0.
\]
It follows that $\mathrm{Fitt}_R(M)$ is a principal ideal. Since two
principal ideals over $R$ are equal if and only if they become
equal in every localisation of $R$ at a height $1$ prime ideal,
we have that
\[
\mathrm{Fitt}_R(M) = \mathrm{Char}_R(M)
\]
in this case (see \cite[Lemma 9.1]{MR2908781}, for instance).
\end{example}
\section{Noncommutative Fitting invariants: Basic properties}
\subsection{Fitting domains and Fitting orders}
We now introduce the class of rings for which we intend to define
Fitting invariants. We recall that an $F$-algebra $A$ over a field $F$
is called \emph{separable} if
$E \otimes_F A$ is a semisimple $E$-algebra for every field extension
$E$ of $F$. If $F$ is a perfect field, then every finite dimensional
semisimple $F$-algebra is indeed separable
(as follows from \cite[Corollary 7.6]{MR632548}).
\begin{definition}
Let $\mathfrak o$ be an integrally closed commutative
noetherian complete local domain with field of quotients $F$.
Then we call $\mathfrak o$ a \emph{Fitting domain}.
Let $A$ be a finite dimensional separable $F$-algebra and let $\Lambda$
be an $\mathfrak o$-order in $A$. Then $\Lambda$ is called a
\emph{Fitting order} over $\mathfrak o$.
\end{definition}
\begin{remark}
Let $\Lambda$ be a Fitting order over the Fitting domain $\mathfrak o$.
Then $\Lambda$ is noetherian and so every
finitely generated $\Lambda$-module
is in fact finitely presented.
\end{remark}
\begin{example}
Any complete discrete valuation ring $\mathfrak o$ is a Fitting domain.
Conversely, any Fitting domain of Krull dimension $1$ is a complete
discrete valuation ring.
\end{example}
\begin{example}
For any Fitting domain $\mathfrak o$ and any positive integer $n$,
the ring $M_{n \times n}(\mathfrak o)$ is a Fitting order over
$\mathfrak o$.
\end{example}
\begin{example}
Let $p$ be a prime and let $G$ be a finite group.
Then the ring of $p$-adic integers $\mathbb{Z}_p$ is a Fitting domain
and the group ring $\mathbb{Z}_p[G]$ is a Fitting order over $\mathbb{Z}_p$.
\end{example}
\begin{example}
More generally, let $\mathfrak o$ be an arbitrary Fitting domain
with field of quotients $F$ and let $G$ be a finite group. Then
an $\mathfrak o$-order $\Lambda$ in $A := F[G]$ is a Fitting order
over $\mathfrak o$ if and only if $|G|$ is invertible in $F$.
\end{example}
\begin{example} \label{ex:Iwasawa-algebra}
Let $G$ be a profinite group containing a finite normal subgroup $H$
such that $G/H \simeq \Gamma$, where $\Gamma$ is a pro-$p$ group
isomorphic to $\mathbb{Z}_p$.
Note that $G$ can be written as a semi-direct product $H \rtimes \Gamma$
and is a one-dimensional $p$-adic Lie group.
The Iwasawa algebra of $G$ over $\mathbb{Z}_p$
is defined to be
\[
\mathbb{Z}_{p}\llbracket G\rrbracket = \varprojlim \mathbb{Z}_{p}[G/N],
\]
where the inverse limit is taken over all open normal subgroups
$N$ of $G$. Since any homomorphism $\Gamma \rightarrow \mathrm{Aut}(H)$
must have open kernel, we may choose
a natural number $n$ such that $\Gamma^{p^n}$ is central
in $G$. We put
$\mathfrak o := \mathbb{Z}_p \llbracket \Gamma^{p^n} \rrbracket$
and $F := \mathrm{Quot}(\mathfrak o)$.
Then $\mathfrak o$
is non-canonically isomorphic to the power series ring
$\mathbb{Z}_p \llbracket T \rrbracket$ in one variable over $\mathbb{Z}_p$
and is thus a Fitting domain. If we view $\mathbb{Z}_{p}\llbracket G\rrbracket$
as an $\mathfrak o$-module (or indeed as a left $\mathfrak o[H]$-module),
there is a decomposition
\[
\mathbb{Z}_p \llbracket G \rrbracket =
\bigoplus_{i=0}^{p^n-1} \mathfrak o[H] \gamma^i,
\]
where $\gamma$ is a topological generator of $\Gamma$.
This shows that $\mathbb{Z}_p \llbracket G \rrbracket$ is a Fitting
order over $\mathfrak o$ in the separable
$F$-algebra
$A = \mathcal{Q}(G) := \oplus_i F[H]\gamma^i$.
\end{example}
\subsection{Reduced norms and the integrality ring} \label{subsec:reduced-norms}
Let $\mathfrak o$ be a Fitting domain with field of quotients $F$
and let $A$ be a finite dimensional separable $F$-algebra.
By Wedderburn's theorem $A$ decomposes into
\[
A = A_1 \oplus \dots \oplus A_t,
\]
where each $A_i$ is isomorphic to an algebra of $n_i \times n_i$
matrices over a skewfield $D_i$.
Then $F_i := \zeta(A_i) = \zeta(D_i)$ is a finite field extension
of $F$ and $A_i$ is a central simple $F_i$-algebra.
The reduced norm map
\[
\mathrm{Nrd} = \mathrm{Nrd}_A: A \longrightarrow \zeta(A) = F_1 \oplus \dots \oplus F_t
\]
is defined componentwise and extends to matrix rings
over $A$ in the obvious way (see \cite[\S 7D]{MR632548}).
If every $D_i$ is in fact a field,
then the reduced norm of $x =(x_i)_i \in A$ is indeed given by
$\mathrm{Nrd}(x) = (\det(x_i))_i$. In general, one can always choose
a (finite) field extension $E$ of $F$ such that $A_E := E \otimes_F A$
is of this form. Then one puts $\mathrm{Nrd}_A(x) := \mathrm{Nrd}_{A_E}(1 \otimes x)$
which actually belongs to $\zeta(A)$ and is independent of the choice of $E$.
Now let $\Lambda$ be a Fitting order in $A$ over $\mathfrak o$.
By \cite[Corollary 10.4]{MR1972204} we may choose a maximal
$\mathfrak o$-order $\Lambda'$ in $A$ containing $\Lambda$.
Then $\Lambda'$ is also a Fitting order over $\mathfrak o$
and likewise decomposes into $\Lambda' = \Lambda_1' \oplus \dots
\oplus \Lambda_t'$, where $\Lambda_i'$ is a maximal $\mathfrak o$-order
in $A_i$ for each $i$.
The reduced norm restricts to a map
\[
\mathrm{Nrd}: \Lambda' \longrightarrow \zeta(\Lambda') =
\mathfrak o_1 \oplus \dots \oplus \mathfrak o_t,
\]
where $\mathfrak o_i = \zeta(\Lambda_i')$
denotes the integral closure of $\mathfrak o$
in $F_i$. Unfortunately, it is in general not true that the reduced norm
maps $\Lambda$ into its centre.
\begin{example} \label{ex:dihedral-denominators}
Let $p$ be an odd prime and let $D_{2p}$ be
the dihedral group of order $2p$.
We may write
\[
D_{2p} = \langle \sigma, \tau \mid \sigma^p = \tau^2 = 1, \tau \sigma
= \sigma^{-1} \tau \rangle.
\]
Then $\Lambda := \mathbb{Z}_p[D_{2p}]$ is a Fitting order
in $A := \mathbb{Q}_p[D_{2p}]$ over $\mathbb{Z}_p$
and we wish to compute $\mathrm{Nrd}(\sigma + \tau)$.
We put $E := \mathbb{Q}_p(\zeta_p)$, where $\zeta_p$ denotes a primitive
$p$-th root of unity, and let $j \in \mathrm{Gal}(E/\mathbb{Q}_p)$
be the unique automorphism of order $2$.
By \cite[Example 7.39]{MR632548} we have the
Wedderburn decomposition
\begin{equation} \label{eqn:D2p-Wedderburn}
A \simeq A_1 \oplus A_2 \oplus A_3,
\end{equation}
where $A_1 = A_2 = \mathbb{Q}_p$ and $A_3$ is the twisted group algebra
$E \oplus E y$ with relations $y^2 = 1$ and
$y \alpha = j(\alpha) y$, $\alpha \in E$.
Moreover, for $\alpha + \beta y \in A_3$ one has
\[
\mathrm{Nrd}(\alpha + \beta y) =
N_{E^+/\mathbb{Q}_p}(\alpha j(\alpha) - \beta j(\beta)),
\]
where $E^+$ denotes the fixed field of $E$
under the action of $j$ and $N_{E^+/\mathbb{Q}_p}: E^+
\rightarrow \mathbb{Q}_p$ is the field-theoretic norm map.
The isomorphism \eqref{eqn:D2p-Wedderburn} maps $\sigma$ to the triple
$(1,1,\zeta_p)$ and $\tau$ to the triple $(1,-1,y)$.
As the first factor in this decomposition corresponds to the central
idempotent $e_1 := \frac{1}{2p} \sum_{\delta \in D_{2p}} \delta$,
we have
\[
\mathrm{Nrd}(\sigma + \tau) = 2 e_1 =
\frac{1}{p} \sum_{\delta \in D_{2p}} \delta
\not\in \zeta(\Lambda).
\]
\end{example}
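For $p = 3$ one has $D_6 \simeq S_3$ and $E^+ = \mathbb{Q}_p$, so $A_3 \simeq M_2(\mathbb{Q}_p)$ and the three components of the reduced norm are simply the determinants of the trivial, sign and standard representations. A floating-point sketch recovering $\mathrm{Nrd}(\sigma + \tau) = (2, 0, 0)$ (the identification of components with these representations is our assumption):

```python
import math

c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
rho_sigma = [[c, -s], [s, c]]   # sigma acts as rotation by 120 degrees
rho_tau = [[1, 0], [0, -1]]     # tau acts as a reflection

S = [[rho_sigma[i][j] + rho_tau[i][j] for j in range(2)] for i in range(2)]
det2 = S[0][0] * S[1][1] - S[0][1] * S[1][0]

nrd = (1 + 1,                    # trivial representation: sigma, tau -> 1
       1 + (-1),                 # sign representation: sigma -> 1, tau -> -1
       abs(round(det2, 10)))     # standard 2-dimensional representation
print(nrd)  # (2, 0, 0.0)
```

The first two components are exact; the third vanishes because $\det(\rho(\sigma)+\rho(\tau)) = (c+1)(c-1) + s^2 = c^2 + s^2 - 1 = 0$.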
To overcome this problem we define a $\zeta(\Lambda)$-submodule
of $\zeta(A)$ by
\[
\mathcal{I}(\Lambda) := \langle \mathrm{Nrd}(H) \mid H \in
M_{b \times b}(\Lambda), b \in \mathbb{N} \rangle_{\zeta(\Lambda)}.
\]
Note that this is in fact a commutative $\mathfrak o$-order in $\zeta(A)$
contained in $\zeta(\Lambda')$. We call $\mathcal{I}(\Lambda)$ the
\emph{integrality ring} of $\Lambda$.
This is the smallest ring that contains $\zeta(\Lambda)$ and the image
of the reduced norm of all matrices with entries in $\Lambda$.
\begin{example} \label{ex:dihedral-int-ring}
Let $\ell$ and $p$ be primes with $p$ odd. Choose a maximal
$\mathbb{Z}_{\ell}$-order $\mathfrak M_{\ell}(D_{2p})$ containing
$\mathbb{Z}_{\ell}[D_{2p}]$. Then one has (see \cite[Example 6]{MR3092262}
and \cite[Proposition 6.9]{MR3461042})
\[
\mathcal I(\mathbb{Z}_{\ell}[D_{2p}]) = \left\{
\begin{array}{lll}
\zeta(\mathfrak M_{p}(D_{2p})) & \mathrm{ if } & \ell = p\\
\zeta(\mathbb{Z}_{\ell}[D_{2p}]) & \mathrm{ if } & \ell \not= p.
\end{array}
\right.
\]
Note that the case $\ell \not= p$ follows from Proposition
\ref{prop:best-denominators} below. For the case $\ell = p$
one has to compute the reduced norms of several
group ring elements as in Example \ref{ex:dihedral-denominators}
and then show that these generate $\zeta(\mathfrak M_{p}(D_{2p}))$
as a $\mathbb{Z}_p$-module.
\end{example}
\begin{remark}
The integrality ring appears in many conjectures on the integrality
of so-called Stickelberger elements. These elements lie in the centre
of the rational group ring and are constructed via integer values of
Artin $L$-functions attached to the irreducible characters of $G$,
where $G$ is the Galois group of a finite Galois extension of
number fields. We refer the reader to
\cite{Brumer-Gross-Stark} for a survey of conjectures
and results in this context.
\end{remark}
\subsection{Noncommutative Fitting invariants}
Let $\Lambda$ be a Fitting order over the Fitting domain $\mathfrak o$.
Let $M$ be a $\Lambda$-module with finite presentation
\[
\Lambda^a \stackrel{h}{\longrightarrow} \Lambda^b \longrightarrow
M \longrightarrow 0.
\]
As before we let $S_b(h)$ be the set of all $b \times b$ submatrices
of $h$. Since the reduced norm is a generalisation of
the determinant with values in $\mathcal{I}(\Lambda)$,
it is now natural to make the following definition.
\[
\mathrm{Fitt}_{\Lambda}(h) := \left\{\begin{array}{lll}
0 & \mathrm{ if } & a < b\\
\langle \mathrm{Nrd}(H) \mid H \in S_b(h) \rangle_{\mathcal{I}(\Lambda)}
& \mathrm{ if } & a \geq b.
\end{array}\right.
\]
Unfortunately, this definition depends on $h$.
\begin{example} \label{ex:dependence-on-h}
Consider the Fitting order $\Lambda = M_{2 \times 2}(\mathbb{Z}_3)$
over $\mathbb{Z}_3$ and the trivial $\Lambda$-module $M=0$.
We have $\mathcal{I}(\Lambda) = \zeta(\Lambda) = \mathbb{Z}_3$.
The identity map $\mathrm{id}: \Lambda \rightarrow \Lambda$
is certainly a finite presentation of $M$ and we have
$\mathrm{Fitt}_{\Lambda}(\mathrm{id}) = \langle \mathrm{Nrd}(\mathrm{id}) \rangle_{\mathbb{Z}_3} = \mathbb{Z}_3$.
However, the map
\begin{eqnarray*}
h: \Lambda e_1 \oplus \Lambda e_2 & \longrightarrow & \Lambda \\
e_1 & \mapsto & \left(\begin{array}{cc} 4 & 1 \\ 1 & 4
\end{array}\right) \\
e_2 & \mapsto & \left(\begin{array}{cc} 5 & 1 \\ 1 & 5
\end{array}\right)
\end{eqnarray*}
is also a finite presentation of $M$ and we have
$\mathrm{Fitt}_{\Lambda}(h) = \langle 15, 24 \rangle_{\mathbb{Z}_3} = 3 \mathbb{Z}_3$.
\end{example}
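Since $\Lambda = M_{2 \times 2}(\mathbb{Z}_3)$, the reduced norm of an element of $\Lambda$ is the ordinary determinant, and the two computations can be checked directly (the ideal in $\mathbb{Z}_3$ is read off from the minimal $3$-adic valuation):

```python
from math import gcd

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

nrd_e1 = det2([[4, 1], [1, 4]])   # Nrd of the image of e_1
nrd_e2 = det2([[5, 1], [1, 5]])   # Nrd of the image of e_2
print(nrd_e1, nrd_e2)             # 15 24

g = gcd(nrd_e1, nrd_e2)
print(g, g % 3 == 0 and g % 9 != 0)  # 3 True  (so the ideal is 3 Z_3)
```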
\begin{remark}
The ring $\Lambda$ in Example \ref{ex:dependence-on-h}
is a matrix ring over a commutative ring.
In this case one can remedy the dependence on $h$ via
Morita equivalence. We will explain this approach in
the appendix.
\end{remark}
In order to examine the dependence on $h$ we try to adapt the proof
of Theorem \ref{thm:Fitt-well-comm} in the commutative case.
We still may view a finite presentation of $M$ as a truncated free
resolution of $M$. As $\Lambda$ is a semiperfect ring
(see \cite[Example 23.3]{MR1838439}),
every finitely generated module $M$ has a projective cover
(see \cite[Theorem 6.23]{MR632548} or \cite[Theorem 24.16]{MR1838439}):
there is a finitely generated projective module $P_0$ (unique up to
isomorphism) and a surjective map $\pi: P_0 \twoheadrightarrow M$
such that no proper submodule of $P_0$ is mapped onto $M$ by $\pi$.
If every projective $\Lambda$-module is free, then $P_0 = \Lambda^{b'}$
with $b' \in \mathbb{N}$ minimal such that there is a surjection
$\Lambda^{b'} \twoheadrightarrow M$. Let $P_1$ be a projective cover
of the kernel of $\pi$. We obtain an exact sequence
\[
P_1 \longrightarrow P_0 \longrightarrow M \longrightarrow 0
\]
which we may view as the truncation of a `minimal projective resolution'
$\mathcal{P}_M$ of $M$.
This is the correct analogue of a minimal free resolution
of a module over a commutative local ring:
any free (in fact any projective) resolution of $M$ is isomorphic to
the direct sum of $\mathcal{P}_M$ and a trivial complex
\cite[Proposition 2.1]{MR2609173}.
Now let
\[
\Lambda^{a'} \stackrel{h'}{\longrightarrow} \Lambda^{b'} \longrightarrow
M \longrightarrow 0
\]
be a second finite presentation of $M$. By arguments similar to those in
the commutative case one may assume that $a=a'$, $b=b'$ and
that there are matrices $X \in \mathrm{GL}_a(\Lambda)$ and $Y \in \mathrm{GL}_b(\Lambda)$
such that
\[
h \circ X = Y \circ h'.
\]
As $\mathrm{Nrd}(Y)$ belongs to $\mathcal{I}(\Lambda)^{\times}$, we may
assume in addition that $Y=1$. In contrast to the determinant,
the reduced norm is not multilinear in the columns, so we cannot also assume
that $X=1$. However, assuming $h \circ X = h'$, as we may, we can construct
a new finite presentation of $M$, namely
\[
\Lambda^a \oplus \Lambda^a \xrightarrow{(h \mid h')}
\Lambda^b \longrightarrow
M \longrightarrow 0.
\]
Now $\mathrm{Fitt}_{\Lambda}((h \mid h'))$ contains both $\mathrm{Fitt}_{\Lambda}(h)$
and $\mathrm{Fitt}_{\Lambda}(h')$. As $\mathcal{I}(\Lambda)$ is
a noetherian ring, the set of all $\mathrm{Fitt}_{\Lambda}(h)$, where $h$ runs through
the finite presentations of $M$, contains a maximal element, and so we have shown the following
(see also \cite[Theorem 3.2 and Definition 3.3]{MR2609173} and
\cite[\S 3.5]{MR3092262}).
\begin{theorem} \label{thm:max-Fitting}
Let $\Lambda$ be a Fitting order and let $M$ be a finitely generated
$\Lambda$-module. Then there is a finite presentation $h$ of $M$
such that $\mathrm{Fitt}_{\Lambda}(h)$ contains $\mathrm{Fitt}_{\Lambda}(h')$
for every other choice $h'$ of finite presentation of $M$.
\end{theorem}
\begin{definition}
Using the notation of Theorem \ref{thm:max-Fitting}, we put
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M) := \mathrm{Fitt}_{\Lambda}(h)
\]
and call this the \emph{maximal Fitting invariant} of $M$ over $\Lambda$.
\end{definition}
\begin{remark}
For an axiomatic approach to
noncommutative Fitting invariants we refer the reader to \cite{Kataoka}.
A natural notion of `higher noncommutative Fitting invariants'
has recently been considered
by Burns and Sano \cite{non-abelian-zeta}.
\end{remark}
\begin{remark}
In order to define noncommutative Fitting invariants it suffices
to assume that $\mathfrak o$ is a commutative
noetherian complete local domain. In fact, the integral closure
of $\mathfrak o$ in its field of quotients is finitely generated
as an $\mathfrak o$-module by \cite[Theorem 4.3.4]{MR2266432}
in this case, and noncommutative Fitting invariants have been
defined in this greater generality in \cite{MR2609173}.
\end{remark}
One can prove the analogues of Lemma \ref{lem:basic-props-comm}(i)
and (iii) without any significant changes.
\begin{lemma} \label{lem:basic-props-noncomm}
Let $\Lambda$ be a Fitting order and let $M_1$, $M_2$, $M_3$
be finitely generated $\Lambda$-modules.
\begin{enumerate}
\item
If $\pi: M_1 \twoheadrightarrow M_2$ is an epimorphism, then
$\mathrm{Fitt}_{\Lambda}^{\max}(M_1) \subseteq \mathrm{Fitt}_{\Lambda}^{\max}(M_2)$.
\item
If $M_1 \rightarrow M_2 \rightarrow M_3 \rightarrow 0$
is an exact sequence, then
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M_1) \cdot \mathrm{Fitt}_{\Lambda}^{\max}(M_3)
\subseteq \mathrm{Fitt}_{\Lambda}^{\max}(M_2).
\]
\end{enumerate}
\end{lemma}
However, the proof of Lemma \ref{lem:basic-props-comm}(ii) only gives
an inclusion
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M_1) \cdot \mathrm{Fitt}_{\Lambda}^{\max}(M_3)
\subseteq \mathrm{Fitt}_{\Lambda}^{\max}(M_1 \oplus M_3)
\]
which is a special case of Lemma \ref{lem:basic-props-noncomm}(ii).
We will treat the question of whether this inclusion is actually an equality
in \S \ref{sec:additivity} below.\\
It is hard to decide in general whether a given presentation gives
a maximal Fitting invariant. In this direction we have the following
result.
\begin{proposition} \label{prop:Fitt-of-quadratic}
Let $\Lambda$ be a Fitting order and let $M$ be a finitely generated
$\Lambda$-module. If $M$ admits a quadratic presentation $h$, i.e.~a
finite presentation of the form
\[
\Lambda^a \stackrel{h}{\longrightarrow} \Lambda^a \longrightarrow
M \longrightarrow 0,
\]
then $\mathrm{Fitt}_{\Lambda}^{\max}(M) = \mathrm{Fitt}_{\Lambda}(h)$.
\end{proposition}
\begin{proof}
This follows from \cite[Proposition 1.1 (4)]{MR2976321}.
\end{proof}
\begin{remark}
We briefly discuss the relation of noncommutative Fitting invariants
to algebraic $K$-theory.
This will not be used in the following.
For background on algebraic $K$-theory we refer the reader to
\cite{MR892316} and \cite{MR0245634}.
Suppose that $M$ admits a quadratic presentation
$h$ which is injective. Then $M$ is torsion as an $\mathfrak o$-module
and therefore defines a class $[M]$ in the relative algebraic $K$-group
$K_0(\Lambda, A)$ associated to the ring homomorphism
$\Lambda \hookrightarrow A$. Moreover, the matrix $h$ belongs
to $\mathrm{GL}_a(A)$ and so defines a class $[h]$ in $K_1(A)$ such that
$\partial([h]) = [M]$, where $\partial: K_1(A) \rightarrow
K_0(\Lambda,A)$ denotes the connecting homomorphism of relative
algebraic $K$-theory. The reduced norm induces a group homomorphism
$\mathrm{Nrd}: K_1(A) \rightarrow \zeta(A)^{\times}$ such that
$\mathrm{Nrd}([h]) = \mathrm{Nrd}(h)$. Now suppose that $x \in K_1(A)$ is a second
pre-image of $[M]$. Then $[h]x^{-1}$ lies in the image of $K_1(\Lambda)$
and therefore $\mathrm{Nrd}(x) = \mathrm{Nrd}([h]) \cdot \mathrm{Nrd}(y)$ for some
$y \in K_1(\Lambda)$.
As $\mathrm{Nrd}(y) \in \mathcal{I}(\Lambda)^{\times}$,
the $\mathcal{I}(\Lambda)$-ideals generated by $\mathrm{Nrd}([h])$
and $\mathrm{Nrd}(x)$ coincide.
In other words, for any $x \in K_1(A)$ such that $\partial(x) = [M]$
one has
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M) =
\langle \mathrm{Nrd}(x) \rangle_{\mathcal{I}(\Lambda)}.
\]
Now suppose that $\mathrm{Fitt}_{\Lambda}^{\max}(M)$ is generated by some
$\xi \in \zeta(A)^{\times}$. Then $\xi \mathrm{Nrd}(x)^{-1}$ belongs to
$\mathcal{I}(\Lambda)^{\times}$, but we cannot conclude in general
that $\xi \mathrm{Nrd}(x)^{-1}$ lies in $\mathrm{Nrd}(K_1(\Lambda))$.
The more involved notion of $\mathrm{Nrd}(\Lambda)$-equivalence classes
in \cite{MR2609173} is designed in such a way that this conclusion
works.
\end{remark}
\section{Fitting invariants and annihilation}
\subsection{Generalised adjoint matrices}
If $R$ is a commutative ring and $M$ is a finitely presented $R$-module,
we know by Theorem \ref{thm:fitt-ann-comm} that $\mathrm{Fitt}_R(M)$ is
always contained in the $R$-annihilator ideal of $M$. The main ingredient
of the proof was the existence of adjoint matrices. We now generalise
this concept.
Let $\Lambda$ be a Fitting order.
Choose $n \in \mathbb{N}$ and let $H \in M_{n \times n}(\Lambda)$.
Then, recalling the notation of \S \ref{subsec:reduced-norms},
we decompose $H$ as
\[
H = \sum_{i=1}^{t} H_{i} \in M_{n \times n}(\Lambda') = \bigoplus_{i=1}^t M_{n \times n}(\Lambda'_{i}).
\]
Let $m_{i} = n_{i} \cdot s_{i} \cdot n$, where $s_i$ denotes the Schur index
of $D_i$ so that $[D_i:F_i] = s_i^2$.
The reduced characteristic polynomial $f_{i}(X) = \sum_{j=0}^{m_{i}} \alpha_{ij}X^{j}$ of $H_{i}$
has coefficients in $\mathfrak{o}_{i}$.
Moreover, the constant term $\alpha_{i0}$ is equal to
$\mathrm{Nrd}(H_{i}) \cdot (-1)^{m_{i}}$.
We put
\[
H_{i}^{\ast} := (-1)^{m_{i}+1} \cdot \sum_{j=1}^{m_i} \alpha_{ij}H_{i}^{j-1}, \quad H^{\ast} := \sum_{i=1}^{t} H_{i}^{\ast}.
\]
We call $H^{\ast}$ the \emph{generalised adjoint matrix} of $H$.
\begin{lemma}\label{lem:ast}
We have $H^{\ast} \in M_{n\times n} (\Lambda')$ and $H^{\ast} H = H H^{\ast} = \mathrm{Nrd}(H) \cdot \boldsymbol{1}_{n}$.
\end{lemma}
\begin{proof}
The first assertion is clear by the above considerations.
Since $f_{i}(H_{i}) = 0$, we find that
\[
H_{i}^{\ast} \cdot H_{i} = H_{i} \cdot H_{i}^{\ast} = (-1)^{m_{i}+1} (-\alpha_{i0}) \cdot \boldsymbol{1}_{n} = \mathrm{Nrd}(H_{i}) \cdot \boldsymbol{1}_{n},
\]
as desired.
\end{proof}
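For a $2 \times 2$ matrix over a commutative ring the reduced characteristic polynomial is the ordinary one, $f(X) = X^2 - \mathrm{tr}(H)X + \det(H)$, and the recipe above yields the classical adjugate $H^{\ast} = \mathrm{tr}(H)\boldsymbol{1}_2 - H$. The identity of Lemma \ref{lem:ast} can then be checked numerically; the following is a sketch over $\mathbb{Z}$, not part of the proof:

```python
# Check H* H = H H* = det(H) * I for the 2x2 recipe
# H* = (-1)^{m+1} (alpha_1 I + alpha_2 H) with m = 2,
# alpha_1 = -tr(H), alpha_2 = 1, i.e. H* = tr(H) I - H.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[2, 3], [5, 7]]
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
Hstar = [[tr * (i == j) - H[i][j] for j in range(2)] for i in range(2)]
scaled_identity = [[det * (i == j) for j in range(2)] for i in range(2)]
assert matmul(Hstar, H) == scaled_identity
assert matmul(H, Hstar) == scaled_identity
```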
\begin{remark}
Note that the above definition of $H^{\ast}$ differs slightly from the definition in \cite[\S 4]{MR2609173}.
Here we follow the treatment in \cite[\S 3.6]{MR3092262}.
\end{remark}
\begin{remark} \label{rem:reduced-adjoint}
Let $E/F$ be a separable field extension such that $A_E :=
E \otimes_F A$ splits. We may view $H$ as an element
of $M_{n \times n}(A_E)$ which is a finite sum of matrix rings
over $E$. Then $H^{\ast} \in M_{n \times n}(A_E)$ is just the
sum of the adjoint matrices in each component. As such it might have
been more natural to call $H^{\ast}$ the `reduced adjoint matrix'
of $H$.
\end{remark}
\begin{example} \label{ex:0-ast}
Let $0 \in M_{n \times n}(R)$ where $R$ is a commutative ring.
Then for the adjoint matrix $0^{\ast}$ we have
$0^{\ast} = 1$ if $n = 1$ and $0^{\ast} = 0$ if $n>1$.
Let $p$ be a prime and let $G$ be a finite group. Denote the
commutator subgroup of $G$ by $G'$ and let $E/\mathbb{Q}_p$
be a splitting field for $\mathbb{Q}_p[G]$.
Then Wedderburn's theorem for the algebra $E[G]$ implies that
for $0 \in M_{1 \times 1}(\mathbb{Z}_p[G])$ we have
\[
0^{\ast} = \frac{1}{|G'|} \sum_{g\in G'} g.
\]
\end{example}
\begin{example} \label{ex:H1ast}
Let $\Lambda$ be a Fitting order and let $H \in M_{n \times n}(\Lambda)$.
Then for every positive integer $m$ one has
\[
\left(\begin{array}{cc}
H & 0 \\ 0 & \boldsymbol{1}_m
\end{array}
\right)^{\ast} =
\left(\begin{array}{cc}
H^{\ast} & 0 \\ 0 & \mathrm{Nrd}(H) \boldsymbol{1}_m
\end{array}
\right).
\]
In view of Remark \ref{rem:reduced-adjoint}, this follows from
the respective statement for adjoint matrices over commutative rings
(for a more detailed proof see \cite[Theorem 1.7.8(iii)]{watson_thesis}).
\end{example}
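Over a commutative ring the generalised adjoint is the classical adjugate (cf.~Remark \ref{rem:reduced-adjoint}), so Example \ref{ex:H1ast} can be illustrated numerically. The sketch below (our helper \verb|adj3| computes the transposed cofactor matrix) checks the block formula for a concrete $H \in M_{2\times 2}(\mathbb{Z})$ and $m = 1$:

```python
# A numeric instance of Example ex:H1ast over Z:
# the adjugate of the block matrix diag(H, 1) is diag(H*, det H).

def adj3(M):
    """Classical adjugate (transposed cofactor matrix) of a 3x3 integer matrix."""
    def minor(i, j):
        rows = [r for r in range(3) if r != i]
        cols = [c for c in range(3) if c != j]
        return (M[rows[0]][cols[0]] * M[rows[1]][cols[1]]
                - M[rows[0]][cols[1]] * M[rows[1]][cols[0]])
    return [[(-1) ** (i + j) * minor(j, i) for j in range(3)] for i in range(3)]

H = [[2, 3], [5, 7]]
detH = 2 * 7 - 3 * 5                   # = -1
M = [[2, 3, 0], [5, 7, 0], [0, 0, 1]]  # the block matrix diag(H, 1)
Hstar = [[7, -3], [-5, 2]]             # adjugate of H
expected = [[7, -3, 0], [-5, 2, 0], [0, 0, detH]]
assert adj3(M) == expected
```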
\subsection{Denominator ideals}
We define
\[
\mathcal{H}(\Lambda) := \{ x \in \zeta(\Lambda) \mid xH^{\ast} \in
M_{b \times b}(\Lambda) \, \forall H \in M_{b \times b}(\Lambda) \,
\forall b \in \mathbb{N} \}
\]
and call $\mathcal{H}(\Lambda)$ the \emph{denominator ideal} of $\Lambda$.
We claim that
\begin{equation} \label{eqn:HI_equals_H}
\mathcal{H}(\Lambda) \cdot \mathcal{I}(\Lambda) = \mathcal{H}(\Lambda)
\subseteq \zeta(\Lambda)
\end{equation}
and so $\mathcal{H}(\Lambda)$ is in fact an ideal in the
$\mathfrak{o}$-order $\mathcal{I}(\Lambda)$.
We follow an argument of David Watson
\cite[Lemma 1.10.9]{watson_thesis}.
Let $x \in \mathcal{H}(\Lambda)$ and
$H_1 \in M_{b_1 \times b_1}(\Lambda)$,
$H_2 \in M_{b_2 \times b_2}(\Lambda)$ with positive integers $b_1$
and $b_2$. We have to show that $x \mathrm{Nrd}(H_1)H_2^{\ast}$ belongs
to $M_{b_2 \times b_2}(\Lambda)$. By Example \ref{ex:H1ast}
we may assume that $b_1 = b_2$. We now compute
\[
x \mathrm{Nrd}(H_1)H_2^{\ast} = x H_1 H_1^{\ast} H_2^{\ast}
= H_1 x (H_2 H_1)^{\ast} \in M_{b_2 \times b_2}(\Lambda),
\]
where we have used that $x$ is central, that $H_1^{\ast} H_2^{\ast} = (H_2 H_1)^{\ast}$,
and that $x (H_2 H_1)^{\ast}$ lies in $M_{b_2 \times b_2}(\Lambda)$ by the very
definition of $\mathcal{H}(\Lambda)$.
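The identity $H_1^{\ast} H_2^{\ast} = (H_2 H_1)^{\ast}$ underlying the computation above reduces, via Remark \ref{rem:reduced-adjoint}, to the classical fact $\mathrm{adj}(AB) = \mathrm{adj}(B)\,\mathrm{adj}(A)$ for adjugate matrices. A numeric sketch for $2 \times 2$ integer matrices (illustration only):

```python
# Illustrate (H2 H1)* = H1* H2* in the commutative split case, where
# the generalised adjoint of a 2x2 matrix is the adjugate tr(H) I - H.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adj(H):
    tr = H[0][0] + H[1][1]
    return [[tr * (i == j) - H[i][j] for j in range(2)] for i in range(2)]

H1 = [[1, 2], [3, 4]]
H2 = [[0, 5], [7, 1]]
# adj(AB) = adj(B) adj(A), hence (H2 H1)* = H1* H2*:
assert adj(matmul(H2, H1)) == matmul(adj(H1), adj(H2))
```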
\begin{remark}
The denominator ideal $\mathcal{H}(\Lambda)$ measures the failure
of the generalised adjoint matrices to have entries in $\Lambda$.
\end{remark}
\begin{remark}
Let $\Lambda'$ be a maximal order containing $\Lambda$.
The central conductor
of $\Lambda'$ over $\Lambda$
is defined to be
$\mathcal F(\Lambda) := \left\{x \in \zeta(\Lambda') \mid
x \Lambda' \subseteq \Lambda \right\}$.
It is clear from Lemma \ref{lem:ast} that we
always have $\mathcal F(\Lambda) \subseteq \mathcal H(\Lambda)$.
Note that in particular we have
$\mathcal{H}(\Lambda') = \zeta(\Lambda')$ for every maximal
Fitting order $\Lambda'$.
\end{remark}
We now consider the case of $p$-adic group rings in more detail.
If $p$ is a prime and $G$ is a finite group, we set
\[
\mathcal{I}_{p}(G) := \mathcal{I}(\mathbb{Z}_{p}[G]), \quad
\mathcal{H}_{p}(G) := \mathcal{H}(\mathbb{Z}_{p}[G]).
\]
\begin{proposition} \label{prop:best-denominators}
Let $p$ be a prime and let $G$ be a finite group. Then $\mathcal{H}_{p}(G) = \zeta(\mathbb{Z}_{p}[G])$ if and only if $p$
does not divide the order of the commutator subgroup of $G$. Moreover, in this case we have
$\mathcal{I}_{p}(G) = \zeta(\mathbb{Z}_{p}[G])$.
\end{proposition}
\begin{proof}
The first claim is a special case of \cite[Proposition 4.4]{MR3092262}.
Note that Example \ref{ex:0-ast} shows that
$\mathcal{H}_{p}(G) = \zeta(\mathbb{Z}_{p}[G])$ is only possible if
$p$ does not divide the order of the commutator subgroup of $G$.
The second claim follows easily from \eqref{eqn:HI_equals_H}.
\end{proof}
Let $\overline{\mathbb{Q}}_p$ be a separable closure of $\mathbb{Q}_p$.
For an irreducible character
$\chi: G \rightarrow \overline{\mathbb{Q}}_p$ we put
$\mathbb{Q}_p(\chi) := \mathbb{Q}_p(\chi(g) \mid g \in G)$.
In the case of $p$-adic group rings
the central conductor is explicitly given by
Jacobinski's formula \cite{MR0204538}
(see \cite[Theorem 27.13]{MR632548})
\begin{equation} \label{eqn:conductor-formula}
\mathcal{F}_p(G) :=
\mathcal F(\mathbb{Z}_p[G]) = \bigoplus_{\chi} \frac{|G|}{\chi(1)}
\mathcal D^{-1} (\mathbb{Q}_p(\chi) / \mathbb{Q}_p),
\end{equation}
where $\mathcal D^{-1} (\mathbb{Q}_p(\chi)/ \mathbb{Q}_p)$ denotes the inverse different of
the extension $\mathbb{Q}_p(\chi)$ over $\mathbb{Q}_p$
and the sum runs over all irreducible characters of $G$
modulo the natural action of the absolute Galois group
of $\mathbb{Q}_p$ on the irreducible characters of $G$.
\begin{example} \label{c7-ex:dihedral}
Let $p$ and $\ell$ be primes with $p$ odd.
We consider the group ring $\mathbb{Z}_{\ell}[D_{2p}]$,
where $D_{2 p}$ denotes the dihedral group of order $2p$.
In the case $p=3$, one has
$D_{6} \simeq S_{3}$, the symmetric group on three letters.
Then we have
\[
\mathcal{H}_{\ell}(D_{2p}) = \left\{ \begin{array}{lll}
\zeta(\mathbb{Z}_{\ell}[D_{2p}]) & \mbox{ if } & p\neq \ell \\
\mathcal{F}_{p}(D_{2p}) & \mbox{ if } & p=\ell.
\end{array}\right.
\]
In fact, the result follows from Proposition
\ref{prop:best-denominators} if $p \neq \ell$.
In the case $p=\ell$, the result is established in
\cite[Example 6]{MR3092262}. The corresponding integrality rings
have already been determined in Example \ref{ex:dihedral-int-ring}.
\end{example}
\begin{example} \label{ex:Aff(q)}
Let $p$ be a prime and let $q = \ell^{n}$ be a prime power.
We consider the group $\mathrm{Aff}(q) = \mathbb{F}_q \rtimes \mathbb{F}_q^{\times}$
of affine transformations
on $\mathbb{F}_q$, the finite field
with $q$ elements.
Let $\mathfrak{M}_{p}(\mathrm{Aff}(q))$ be a maximal $\mathbb{Z}_{p}$-order such that $\mathbb{Z}_{p}[\mathrm{Aff}(q)] \subseteq \mathfrak{M}_{p}(\mathrm{Aff}(q)) \subseteq \mathbb{Q}_{p}[\mathrm{Aff}(q)]$.
Then by \cite[Proposition 6.7]{MR3461042} we have
\[
\mathcal{H}_{p}(\mathrm{Aff}(q)) = \left\{ \begin{array}{lll}
\zeta(\mathbb{Z}_{p}[\mathrm{Aff}(q)]) & \mbox{ if } & p \neq \ell \\
\mathcal{F}_{p}(\mathrm{Aff}(q)) & \mbox{ if } & p=\ell \neq 2;
\end{array}\right.
\]
\[
\mathcal{I}_{p}(\mathrm{Aff}(q)) = \left\{ \begin{array}{lll}
\zeta(\mathbb{Z}_{p}[\mathrm{Aff}(q)]) & \mbox{ if } & p \neq \ell\\
\zeta(\mathfrak{M}_{p}(\mathrm{Aff}(q))) & \mbox{ if } & p=\ell \neq 2.
\end{array}\right.
\]
If $p=\ell=2$, then we have containments
\[
2\mathcal{H}_{2}(\mathrm{Aff}(q)) \subseteq \mathcal{F}_{2}(\mathrm{Aff}(q)) \subseteq \mathcal{H}_{2}(\mathrm{Aff}(q)),
\]
\[
2\zeta(\mathfrak{M}_{2}(\mathrm{Aff}(q))) \subseteq \mathcal{I}_{2}(\mathrm{Aff}(q)) \subseteq \zeta(\mathfrak{M}_{2}(\mathrm{Aff}(q))).
\]
Note that the commutator subgroup of $\mathrm{Aff}(q)$ is $\mathbb{F}_q$ so that
the case $p \neq \ell$ again follows from Proposition
\ref{prop:best-denominators}. An exact formula for the
denominator ideal including the case
$p = \ell = 2$ has been determined by David Watson
\cite[Example 3.6.6]{watson_thesis}.
\end{example}
\begin{example}
Let $S_4$ be the symmetric group on $4$ letters.
If $p$ is an odd prime, then
$\mathcal{I}_p(S_4) = \zeta(\mathfrak{M}_p(S_4))$
and $\mathcal{H}_p(S_4) = \mathcal{F}_p(S_4)$. However, if $p=2$ we have
\[
\mathcal{F}_2(S_4) \subsetneq \mathcal{H}_2(S_4)
\subsetneq \zeta(\mathbb{Z}_2[S_4]);
\]
\[
\zeta(\mathbb{Z}_2[S_4]) \subsetneq \mathcal{I}_2(S_4)
\subsetneq \zeta(\mathfrak M_2(S_4)).
\]
This follows from \cite[Proposition 6.8]{MR3461042}.
\end{example}
\begin{remark}
Even in the case of $p$-adic group rings, a general formula
for denominator ideals is still not available, though it would be
of significant interest for arithmetic applications.
In particular, good lower bounds are sought. This question is studied
extensively in the PhD thesis of David Watson \cite{watson_thesis}, who
determines the denominator ideal $\mathcal{H}_p(G)$ for any
(non-abelian) group $G$ of order $p^3$.
\end{remark}
Now let $\Lambda$ be an arbitrary Fitting order and let $\Lambda'$
be a maximal order containing $\Lambda$. We define
a variant of the central conductor by
\[
\mathcal{F}_{\zeta}(\Lambda) :=
\left\{x \in \zeta(\Lambda') \mid
x \zeta(\Lambda') \subseteq \zeta(\Lambda) \right\}.
\]
One clearly has an inclusion $\mathcal{F}(\Lambda)
\subseteq \mathcal{F}_{\zeta}(\Lambda)$, but this is not an equality
in general.
\begin{example}
Let $D_{2^a}$ be the dihedral group of order $2^a$, where
$a \geq 3$.
Then one can show (see \cite[Example 7]{MR3092262}) that
\[
[\mathcal{F}_{\zeta}(\mathbb{Z}_2[D_{2^a}]) : \mathcal{F}(\mathbb{Z}_2[D_{2^a}])]
= 2^{a-2}.
\]
\end{example}
\begin{remark}
In the case where $\Lambda$ is a $p$-adic group ring one has an explicit
formula for $\mathcal{F}_{\zeta}(\Lambda)$; see
\cite[Proposition 6.12]{MR3092262}.
\end{remark}
Since the reduced characteristic polynomials have coefficients
in $\zeta(\Lambda')$, one can give the following lower bound
\cite[Proposition 6.3]{MR3092262} for $\mathcal{H}(\Lambda)$.
\begin{proposition}
We have $\mathcal{F}_{\zeta}(\Lambda) \subseteq \mathcal{H}(\Lambda)$.
\end{proposition}
\subsection{Fitting invariants and annihilation}
Now a proof similar to that in the commutative case shows the desired
annihilation result (see \cite[Theorem 4.2]{MR2609173}
and \cite[Theorem 3.3]{MR3092262}).
\begin{theorem}\label{thm:fitt-ann}
Let $\Lambda$ be a Fitting order and let $M$ be a finitely generated $\Lambda$-module. Then one has an inclusion
\[
\mathcal{H}(\Lambda) \cdot \mathrm{Fitt}_{\Lambda}^{\max}(M) \subseteq \mathrm{Ann}_{\zeta(\Lambda)}(M).
\]
\end{theorem}
Since $\mathcal{F}(\Lambda)$ is contained in $\mathcal{H}(\Lambda)$,
the above
inclusion also holds with $\mathcal{H}(\Lambda)$ replaced by
$\mathcal{F}(\Lambda)$. However, if one wishes to compute annihilators
using $\mathcal{F}(\Lambda)$ then the following result shows that it
suffices to compute the Fitting invariant over the maximal order
(see \cite[Corollary 6.5 and Theorem 6.7]{MR3092262}).
\begin{proposition} \label{prop:ann-over-max}
Let $\Lambda$ be a Fitting order and let $M$ be a
finitely generated $\Lambda$-module. Choose a maximal order
$\Lambda'$ containing $\Lambda$. Then
\[
\mathcal{F}(\Lambda) \cdot \mathrm{Fitt}_{\Lambda}^{\max}(M) \subseteq
\mathcal{F}(\Lambda) \cdot \mathrm{Fitt}_{\Lambda'}^{\max}
(\Lambda' \otimes_{\Lambda} M) \subseteq
\mathrm{Ann}_{\zeta(\Lambda)}(M).
\]
\end{proposition}
\begin{example}
We generalise Example \ref{ex:augmentation-ideal}.
Let $p$ be a prime and let $G$ be a finite group.
Let $\Delta_p G$ be the kernel of the natural augmentation map
$\mathrm{aug}_p: \mathbb{Z}_p[G] \rightarrow \mathbb{Z}_p$ that sends each $g \in G$ to $1$.
As $|G|$ belongs to $\mathcal{H}_p(G)$ we have by
Theorem \ref{thm:fitt-ann} that
\[
|G| \cdot \mathrm{Fitt}_{\mathbb{Z}_p[G]}^{\max}(\Delta_p G) \subseteq
\mathrm{Ann}_{\mathbb{Z}_p[G]}(\Delta_p G) = N_G \mathbb{Z}_p,
\]
where as before $N_G := \sum_{g \in G} g$. It follows that
\[
\mathrm{Fitt}_{\mathbb{Z}_p[G]}^{\max}(\Delta_p G) = \frac{m}{|G|} N_G \mathbb{Z}_p
\]
for some $m \in \mathbb{Z}_p$. Let $h$ be a finite presentation of
$\Delta_p G$ such that $\mathrm{Fitt}_{\mathbb{Z}_p[G]}(h) =
\mathrm{Fitt}_{\mathbb{Z}_p[G]}^{\max}(\Delta_p G)$. Let $\mathrm{aug}_p(h)$ be the matrix
obtained from $h$ by applying $\mathrm{aug}_p$ to each of the entries of $h$.
Then $\mathrm{aug}_p(h)$ is a finite presentation of the $\mathbb{Z}_p$-module
\[
\mathbb{Z}_p \otimes_{\mathbb{Z}_p[G]} \Delta_p G \simeq
\Delta_p G / (\Delta_p G)^2 \simeq \mathbb{Z}_p \otimes_{\mathbb{Z}} G/G',
\]
where $G'$ denotes the commutator subgroup of $G$. It follows
from Example \ref{ex:Fitting-Z} that
\[
m \mathbb{Z}_p = \mathrm{Fitt}_{\mathbb{Z}_p}(\mathrm{aug}_p(h)) =
\mathrm{Fitt}_{\mathbb{Z}_p}(\mathbb{Z}_p \otimes_{\mathbb{Z}_p[G]} \Delta_p G) = |G/G'| \mathbb{Z}_p.
\]
This implies that we may choose $m = |G/G'|$ and thus
\[
\mathrm{Fitt}_{\mathbb{Z}_p[G]}^{\max}(\Delta_p G) = \frac{1}{|G'|} N_G \mathbb{Z}_p.
\]
As $N_{G'} := \sum_{g \in G'} g$ belongs to $\mathcal{H}_p(G)$
by \cite[Corollary 6.14]{MR3092262} we find that
\[
\mathcal{H}_p(G) \cdot \mathrm{Fitt}_{\mathbb{Z}_p[G]}^{\max}(\Delta_p G)
= N_G \mathbb{Z}_p = \mathrm{Ann}_{\mathbb{Z}_p[G]}(\Delta_p G).
\]
Let $\mathfrak{M}_p(G)$ be a maximal order containing $\mathbb{Z}_p[G]$.
One can likewise show that
\[
\mathrm{Fitt}_{\mathfrak M_p(G)}^{\max}(\mathfrak{M}_p(G)
\otimes_{\mathbb{Z}_p[G]} \Delta_p G)
= \frac{1}{|G'|} N_G \mathbb{Z}_p.
\]
Then Proposition \ref{prop:ann-over-max} implies
the weaker result
\[
\mathcal{F}(\mathbb{Z}_p[G]) \cdot
\mathrm{Fitt}_{\mathfrak M_p(G)}^{\max}(\mathfrak{M}_p(G)
\otimes_{\mathbb{Z}_p[G]} \Delta_p G) =
\frac{|G|}{|G'|} N_G \mathbb{Z}_p
\subseteq \mathrm{Ann}_{\mathbb{Z}_p[G]}(\Delta_p G).
\]
\end{example}
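As a small sanity check of the final formula in the simplest case $G = C_2$ (so $G' = 1$, and the formula predicts $\mathrm{Fitt}_{\mathbb{Z}_p[G]}^{\max}(\Delta_p G) = N_G \mathbb{Z}_p$), one can model the group ring by pairs $(a,b) \leftrightarrow a + bg$ with $g^2 = 1$. The sketch below works with integer coefficients, which suffices for these identities; the names \verb|mul|, \verb|NG| and \verb|delta_gen| are ours:

```python
# Model Z[C_2]: the pair (a, b) represents a + b*g with g^2 = 1.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c + b * d, a * d + b * c)

NG = (1, 1)          # N_G = 1 + g
delta_gen = (-1, 1)  # g - 1 generates the augmentation ideal

# (g - 1)(1 + g) = 0, so multiplication by N_G presents Delta_p G:
assert mul(delta_gen, NG) == (0, 0)

# N_G * (a + b g) = (a + b) N_G, so the ideal N_G * Lambda equals N_G * Z:
for a in range(-3, 4):
    for b in range(-3, 4):
        assert mul(NG, (a, b)) == (a + b, a + b)
```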
\section{$p$-adic group rings}
In this section we fix a prime $p$ and a finite group $G$.
The $p$-adic group ring $\mathbb{Z}_p[G]$ is a Fitting order over $\mathbb{Z}_p$
which is of particular interest in number theory.
We put $\Lambda := \mathbb{Z}_p[G]$ and $A := \mathbb{Q}_p[G]$.
For any $\Lambda$-module $M$ we write $M^{\vee}$ for its
Pontryagin dual $\mathrm{Hom}_{\mathbb{Z}_p}(M, \mathbb{Q}_p/\mathbb{Z}_p)$ and $M^{\ast}$ for the
linear dual $\mathrm{Hom}_{\mathbb{Z}_p}(M, \mathbb{Z}_p)$, each endowed with the natural
contragredient action of $G$. We denote by
$^{\sharp}: A \rightarrow A$ the anti-involution
which maps each $g \in G$ to its inverse.
If $h \in M_{a \times b}(A)$ is a matrix we let $h^{\sharp}$
be the matrix obtained from $h$ by applying $^{\sharp}$ to each of its
entries. Moreover, we let $h^T \in M_{b \times a}(A)$ be the
transpose of $h$. We note that there is an isomorphism
$\Lambda^{\ast} \simeq \Lambda$, $f \mapsto \sum_{g \in G} f(g)g$.
Under this identification the $\mathbb{Z}_p$-dual of a map
$h \in M_{a \times b}(\Lambda)$ identifies with
$h^{T, \sharp} \in M_{b \times a}(\Lambda)$.
Now let $C$ be a finite $\Lambda$-module of projective dimension
at most $1$. Choose $n \in \mathbb{N}$ and a surjective map
$\Lambda^n \twoheadrightarrow C$ with kernel $P$. Note that $P$
is projective. As $C$ is finite, we have an isomorphism
$\mathbb{Q}_p \otimes_{\mathbb{Z}_p} P \simeq A^n$ of $A$-modules. Now
Swan's theorem \cite[Theorem (32.1)]{MR632548}
implies that in fact $P \simeq \Lambda^n$.
In particular, we find that $C$ has a quadratic presentation
\begin{equation} \label{eqn:quadratic-presentation}
0 \longrightarrow \Lambda^n \stackrel{q}{\longrightarrow}
\Lambda^n \longrightarrow C \longrightarrow 0.
\end{equation}
Moreover, the maximal Fitting invariant $\mathrm{Fitt}_{\Lambda}^{\max}(C)$
is generated by $\mathrm{Nrd}(q) \in \zeta(A)^{\times}$ by
Proposition \ref{prop:Fitt-of-quadratic}.
We also note that a $\Lambda$-module is of projective dimension
at most $1$ if and only if it is a cohomologically trivial
$\Lambda$-module by \cite[Theorem 9]{MR0219512}.
The following result is very useful in computing
Fitting invariants over $p$-adic group rings.
\begin{proposition} \label{prop:sequence-group-rings}
Let $\Lambda := \mathbb{Z}_p[G]$ where $p$ is a prime
and $G$ is a finite group.
\begin{enumerate}
\item
Let $C$ be a finite $\Lambda$-module of projective dimension
at most $1$. Let $c \in \zeta(A)^{\times}$ be a generator
of $\mathrm{Fitt}_{\Lambda}^{\max}(C)$. Then the Pontryagin dual
$C^{\vee}$ is also a finite $\Lambda$-module of projective
dimension at most $1$ and $\mathrm{Fitt}_{\Lambda}^{\max}(C^{\vee})$
is generated by $c^{\sharp}$.
\item
Suppose we are given an exact sequence of finite $\Lambda$-modules
\[
0 \longrightarrow M \longrightarrow C \longrightarrow C'
\longrightarrow M' \longrightarrow 0,
\]
where $C$ and $C'$ are of projective dimension at most $1$.
Then we have an equality
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M^{\vee})^{\sharp} \cdot
\mathrm{Fitt}_{\Lambda}^{\max}(C') =
\mathrm{Fitt}_{\Lambda}^{\max}(M') \cdot
\mathrm{Fitt}_{\Lambda}^{\max}(C).
\]
\end{enumerate}
\end{proposition}
\begin{proof}
This follows from \cite[Proposition 5.3]{MR2609173}.
Here we will give a new proof of (i) which is much
shorter and easier than the original proof. The argument
is inspired by \cite[Lemma 6]{MR2046598} and
recent work of Kataoka \cite[\S 4]{Kataoka}.
As $C$ is finite, we have
$\mathrm{Hom}_{\mathbb{Z}_p}(C, \mathbb{Z}_p) = \mathrm{Hom}_{\mathbb{Z}_p}(C, \mathbb{Q}_p) = 0$.
As $\mathbb{Q}_p$ is an injective $\mathbb{Z}_p$-module, we have
$\mathrm{Ext}^1_{\mathbb{Z}_p}(C, \mathbb{Q}_p) = 0$ and thus
the short exact sequence
$\mathbb{Z}_p \hookrightarrow \mathbb{Q}_p \twoheadrightarrow \mathbb{Q}_p / \mathbb{Z}_p$
induces an isomorphism
\[
C^{\vee} \simeq \mathrm{Ext}_{\mathbb{Z}_p}^1(C, \mathbb{Z}_p).
\]
Now choose a quadratic presentation $q$ of $C$ as in
\eqref{eqn:quadratic-presentation}.
We may assume that $c = \mathrm{Nrd}(q)$.
Note that $\mathrm{Ext}_{\mathbb{Z}_p}^1(\Lambda, \mathbb{Z}_p)$ vanishes,
since $\Lambda$ is a projective $\mathbb{Z}_p$-module.
We apply $\mathbb{Z}_p$-duals to \eqref{eqn:quadratic-presentation}
and obtain an exact sequence
\[
0 \longrightarrow \Lambda^n \xrightarrow{q^{T,\sharp}}
\Lambda^n \longrightarrow \mathrm{Ext}_{\mathbb{Z}_p}^1(C, \mathbb{Z}_p) \longrightarrow 0.
\]
As $\mathrm{Nrd}(q^{T,\sharp}) = c^{\sharp}$ we are done.
\end{proof}
\begin{remark}
Note that exact sequences of the type considered in Proposition
\ref{prop:sequence-group-rings} naturally occur in
the context of the equivariant Tamagawa number conjecture
as formulated by Burns and Flach \cite{MR1884523}.
This conjecture refines and generalises a very wide range of well
known results and conjectures relating special values of
$L$-functions to certain natural arithmetic invariants.
It thereby vastly generalises the analytic class number formula
for number fields and the Birch and Swinnerton-Dyer
conjecture for elliptic curves (see \cite{MR2088713} for a survey).
\end{remark}
\begin{remark}
There is also an analogue of Proposition \ref{prop:sequence-group-rings}
for Iwasawa modules \cite[Proposition 6.3]{MR2609173}
and even for more general Fitting orders \cite[\S 4]{Kataoka}.
This has applications in the context of main conjectures
of equivariant Iwasawa theory.
\end{remark}
\section{Additivity of Fitting invariants} \label{sec:additivity}
Let $\Lambda$ be a Fitting order over the Fitting domain $\mathfrak o$.
Let $M$ and $N$ be two finitely generated $\Lambda$-modules.
As already observed in Lemma \ref{lem:basic-props-noncomm}(ii),
one always has an inclusion
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M) \cdot \mathrm{Fitt}_{\Lambda}^{\max}(N)
\subseteq \mathrm{Fitt}_{\Lambda}^{\max}(M \oplus N).
\]
\begin{definition}
The Fitting order $\Lambda$ is called \emph{Fitting-additive}
if
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M) \cdot \mathrm{Fitt}_{\Lambda}^{\max}(N)
= \mathrm{Fitt}_{\Lambda}^{\max}(M \oplus N)
\]
for all finitely generated $\Lambda$-modules $M$ and $N$.
\end{definition}
The following observation is immediate from Lemma \ref{lem:basic-props-comm}(ii).
\begin{proposition}
Every commutative Fitting order $\Lambda$ is Fitting-additive.
\end{proposition}
As reduced norms are defined componentwise, the following is also
immediate.
\begin{lemma} \label{lem:add-add}
Let $\Lambda_1$ and $\Lambda_2$ be Fitting orders over the Fitting domain
$\mathfrak o$. Then $\Lambda_1 \oplus \Lambda_2$ is Fitting-additive
if and only if both $\Lambda_1$ and $\Lambda_2$ are Fitting-additive.
\end{lemma}
We record some cases where it is known that $\Lambda$ is Fitting-additive.
\begin{theorem} \label{thm:Fitting-additive}
The Fitting order $\Lambda$ is Fitting-additive in each of the
following cases.
\begin{enumerate}
\item
$\Lambda$ is a direct product of matrix rings over commutative rings.
\item
$\Lambda$ is a maximal order and $\mathfrak o$ is
a complete discrete valuation ring.
\end{enumerate}
\end{theorem}
\begin{proof}
This follows from \cite[Theorem 4.6(ii)]{MR3092262}.
We will reprove part (i) in the appendix
(see Remark \ref{rmk:compatible-Fitt-defs}
and Lemma \ref{lem:basic-props-Morita}(ii)).
\end{proof}
\begin{corollary} \label{cor:p-adic-additive}
Let $p$ be a prime and let $G$ be a finite group. Suppose that
$p$ does not divide the order of the commutator subgroup of $G$.
Then the $p$-adic group ring $\mathbb{Z}_p[G]$ is Fitting-additive.
\end{corollary}
\begin{proof}
It follows from \cite[Corollary, p.~390]{MR704622}
that $\mathbb{Z}_p[G]$ is a direct product of matrix rings over commutative rings
in this case. Thus the result follows from
Theorem \ref{thm:Fitting-additive}(i).
\end{proof}
\begin{corollary}
Let $G$ be a profinite group containing a finite normal subgroup $H$
such that $G/H \simeq \mathbb{Z}_p$ for some prime $p$. Suppose that
$p$ does not divide the order of the commutator subgroup of $G$
(which is finite). Then the Iwasawa algebra
$\mathbb{Z}_p \llbracket G \rrbracket$ is Fitting-additive.
\end{corollary}
\begin{proof}
The Iwasawa algebra $\mathbb{Z}_p \llbracket G \rrbracket$ is again a
direct product of matrix rings over commutative rings
in this case by \cite[Proposition 4.5]{MR3092262}.
\end{proof}
We now give an example of a Fitting order $\Lambda$ which is
\emph{not} Fitting-additive.
\begin{example} \label{ex:hereditary}
Let $p$ be a prime and consider the $\mathbb{Z}_p$-order
\[
\Lambda := \left\{ \left(\begin{array}{cc}
a & b \\ c & d
\end{array}\right) \in M_{2 \times 2}(\mathbb{Z}_p) \mid b \equiv 0
\mod p
\right\}.
\]
This is a Fitting order over $\mathbb{Z}_p$ and one has
$\mathcal{H}(\Lambda) = \mathcal{I}(\Lambda) = \zeta(\Lambda) = \mathbb{Z}_p$.
We let $M$ and $N$ be $\Lambda$-modules which as sets are equal to
$\mathbb{Z}_p / p \mathbb{Z}_p$ and upon which $\Lambda$ acts as follows.
Let $\lambda = \left(\begin{array}{cc} a & b \\ c & d \end{array}\right)
\in \Lambda$. For every $x \in \mathbb{Z}_p$ we write
$\overline x$ for its image in $\mathbb{Z}_p / p \mathbb{Z}_p$.
Then $\lambda \cdot m := \overline{a} m$ and
$\lambda \cdot n := \overline{d} n$ for $m \in M$ and $n \in N$.
Using $\overline b = 0$ it is easily checked that this defines
left $\Lambda$-module structures on $M$ and $N$,
respectively. There is
a short exact sequence
\[
0 \longrightarrow \Lambda \stackrel{h}{\longrightarrow}
\Lambda \longrightarrow M \oplus N \longrightarrow 0,
\]
where $h$ is right multiplication by
$\left(\begin{array}{cc} 0 & p \\ 1 & 0 \end{array}\right)
\in \Lambda$. As $h$ is a quadratic presentation,
we have that
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M \oplus N) = \mathrm{Nrd}(h) \cdot \mathbb{Z}_p = p \mathbb{Z}_p
\]
by Proposition \ref{prop:Fitt-of-quadratic}. As $M \oplus N$
surjects onto $M$, the ideal generated by $p$ is contained in
$\mathrm{Fitt}_{\Lambda}^{\max}(M)$ by Lemma \ref{lem:basic-props-noncomm}(i).
However, the maximal Fitting invariant $\mathrm{Fitt}_{\Lambda}^{\max}(M)$
annihilates $M$ by Theorem \ref{thm:fitt-ann} and so
$\mathrm{Fitt}_{\Lambda}^{\max}(M)$ is properly contained in $\mathbb{Z}_p$
as $M \not=0$.
It follows that
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M) = p \mathbb{Z}_p.
\]
Exactly the same reasoning applies for $N$ and therefore
\[
\mathrm{Fitt}_{\Lambda}^{\max}(N) = p \mathbb{Z}_p.
\]
Altogether we have that
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M) \cdot \mathrm{Fitt}_{\Lambda}^{\max}(N) = p^2 \mathbb{Z}_p
\subsetneq p \mathbb{Z}_p = \mathrm{Fitt}_{\Lambda}^{\max}(M \oplus N)
\]
and thus $\Lambda$ is not Fitting-additive.
\end{example}
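The elementary-divisor computation underlying this example can be double-checked by machine. The following is a minimal sketch (the choice of $\mathbb{Z}_p$-basis $E_{11}, pE_{12}, E_{21}, E_{22}$ of $\Lambda$ and the sample prime $p=3$ are ours, not from the text): in these coordinates, right multiplication by $h$ is an integral $4 \times 4$ matrix whose Smith normal form has elementary divisors $1, 1, p, p$, so the cokernel $\Lambda / \Lambda h$ is indeed $(\mathbb{Z}/p)^2 \cong M \oplus N$, and the determinant generates $p^2 \mathbb{Z}_p$.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

p = 3  # sample prime; any prime exhibits the same elementary divisors

# In the Z_p-basis (E11, p*E12, E21, E22) of Lambda, right multiplication
# by h = [[0, p], [1, 0]] is the integral matrix below (column j holds the
# coordinates of the image of the j-th basis element).
H = Matrix([
    [0, p, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, p, 0],
])

# Smith normal form: elementary divisors 1, 1, p, p, so the cokernel of
# right multiplication by h is (Z/p)^2, matching M + N.
snf = smith_normal_form(H, domain=ZZ)
divisors = sorted(abs(snf[i, i]) for i in range(4))
assert divisors == [1, 1, p, p]
assert abs(H.det()) == p**2  # det generates p^2 * Z_p
print("elementary divisors:", divisors)
```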
The order in Example \ref{ex:hereditary} is a hereditary, but
non-maximal $\mathbb{Z}_p$-order.
By the classification of hereditary orders
over complete discrete valuation rings \cite[Theorem 26.28]{MR632548}
it is clear that similar examples can be constructed for every
hereditary, non-maximal order over a
complete discrete valuation ring.
Taking Theorem \ref{thm:Fitting-additive}(ii) into account,
we have established the following.
\begin{proposition} \label{prop:hereditary-not-add}
Let $\Lambda$ be a Fitting order over a complete discrete
valuation ring. Suppose that $\Lambda$ is hereditary.
Then $\Lambda$ is Fitting-additive if and only if it is
maximal.
\end{proposition}
\begin{remark} \label{rem:hereditary-max}
A $p$-adic group ring $\mathbb{Z}_p[G]$ is hereditary if and only if
it is maximal.
We give an indirect proof of this fact.
Suppose that $\mathbb{Z}_p[G]$ is hereditary. Then $\mathcal{H}_p(G) =
\zeta(\mathbb{Z}_p[G])$ by \cite[Corollary 4.2]{MR3092262}
(this follows easily from the definitions once one observes
that the centre of a hereditary $\mathbb{Z}_p$-order is itself
a maximal $\mathbb{Z}_p$-order).
Now Proposition \ref{prop:best-denominators} implies
that $p$ does not divide the order of the commutator subgroup of $G$.
By Corollary \ref{cor:p-adic-additive} the group ring
$\mathbb{Z}_p[G]$ is Fitting-additive and so by Proposition
\ref{prop:hereditary-not-add} it is maximal.
\end{remark}
Let $\Lambda$ be a Fitting order over the Fitting domain $\mathfrak o$.
In view of Example \ref{ex:hereditary} and
Proposition \ref{prop:hereditary-not-add}
one may ask whether there are hereditary, non-maximal orders
when $\mathfrak o$ is not a complete discrete valuation ring.
We now show that no such orders exist.
\begin{proposition}
Let $\Lambda$ be a Fitting order over the Fitting domain $\mathfrak o$.
If $\Lambda$ is hereditary, then $\mathfrak o$ is a complete discrete valuation ring.
\end{proposition}
\begin{proof}
Suppose that $\Lambda$ is a hereditary Fitting order over $\mathfrak o$
in the separable $F$-algebra $A$, where as before $F$ denotes the quotient
field of $\mathfrak o$. Let $A = A_1 \oplus \dots \oplus A_t$ be
the Wedderburn decomposition of $A$ so that each $A_i$ is a central simple
$F_i = \zeta(A_i)$-algebra.
Let $\mathfrak o_i$ be the integral closure of $\mathfrak o$ in $F_i$.
Then we likewise have a decomposition
\[
\Lambda= \Lambda_1 \oplus \dots \oplus \Lambda_t,
\]
where each $\Lambda_i$ is a hereditary $\mathfrak{o}_i$-order
by \cite[Proposition 2.2]{MR0151489}.
Moreover, each $\mathfrak o_i$ is in fact a Dedekind domain by
\cite[Theorem 2.6]{MR0151489} and thus has Krull dimension $1$.
The Fitting domain $\mathfrak{o}$ then also has Krull dimension $1$
by \cite[Proposition 4.15]{MR1322960}. However, a Fitting domain
of Krull dimension $1$ is a complete discrete valuation ring.
\end{proof}
\begin{remark}
In view of Corollary \ref{cor:p-adic-additive} and Remark
\ref{rem:hereditary-max} one may ask the following question:
Is every $p$-adic group ring Fitting-additive?
Similarly, is the Iwasawa algebra $\mathbb{Z}_p \llbracket G \rrbracket$
of a one-dimensional $p$-adic Lie group $G$ always Fitting-additive?
In both cases one knows that
\[
\mathrm{Fitt}_{\Lambda}^{\max}(M) \cdot \mathrm{Fitt}_{\Lambda}^{\max}(N)
= \mathrm{Fitt}_{\Lambda}^{\max}(M \oplus N)
\]
whenever at least one of the two $\Lambda$-modules $M$ and $N$
has projective dimension at most $1$. This follows
from \cite[Proposition 2.11]{Kataoka}.
\end{remark}
% Source: "Notes on noncommutative Fitting invariants", arXiv:1712.07368 (math.RA, math.NT).
% Source: "Recursively determined representing measures for bivariate truncated
% moment sequences", arXiv:1204.1687.
%
% Abstract: A theorem of Bayer and Teichmann implies that if a finite real
% multisequence $\beta=\beta^{(2d)}$ has a representing measure, then the
% associated moment matrix $M_d$ admits positive, recursively generated moment
% matrix extensions $M_{d+1},M_{d+2},\ldots$. For a bivariate recursively
% determinate $M_d$, we show that the existence of positive, recursively
% generated extensions $M_{d+1},\ldots,M_{2d-1}$ is sufficient for a measure.
% Examples illustrate that all of these extensions may be required to show that
% $\beta$ has a measure. We describe in detail a constructive procedure for
% determining whether such extensions exist. Under mild additional hypotheses,
% we show that $M_d$ admits an extension $M_{d+1}$ which has many of the
% properties of a positive, recursively generated extension.
\section{Introduction}\label{Intro}
\setcounter{equation}{0}
Let $\beta\equiv \beta^{(2d)}:= \{\beta_{ij}\}_{i,j\ge 0, i+j\le 2d}$
denote a real bivariate moment sequence of degree $2d$. \ The Truncated
Moment Problem seeks conditions on $\beta$ for the existence of a
positive Borel measure $\mu$ on $\mathbb{R}^{2}$ such that
\begin{equation} \label{beta}
\beta_{ij} = \int_{\mathbb{R}^{2}} x^{i}y^{j}d\mu \quad(i,j\ge 0,~~i+j\le 2d).
\end{equation}
A result of \cite{tcmp10} shows that $\beta$ admits a finitely atomic {\it{representing measure}} $\mu$
(as in (\ref{beta})) if and only if $M_{d}\equiv M_{d}(\beta)$, the {\it{moment matrix}}
associated with $\beta$, admits a {\it flat extension} $M_{d+k+1}$ for some $k \ge 0$, i.e., an extension to a positive semidefinite moment
matrix $M_{d+k+1}$ such that $\text{rank}~M_{d+k+1} =
\text{rank}~M_{d+k}$. \ The extension of this result to general representing measures follows from a theorem
of C. Bayer and J. Teichmann \cite{BT}, which implies that if $\beta$ has a representing measure, then it has a finitely atomic
representing measure (cf. \cite[Section 2]{finitevariety}, \cite[Section 1]{tcmp11}). \ At present, for a general moment matrix, there
is no known concrete test for the existence of a flat extension $M_{d+k+1}$.
In this note, for the class of bivariate {\it{recursively determinate}}
moment matrices, we present a detailed analysis of an algorithm of \cite{finitevariety}
that can be used in numerical examples to determine the existence or
nonexistence of flat extensions (and representing measures). \ This algorithm determines the existence or
nonexistence of positive, recursively generated extensions $M_{d+1},\ldots,M_{2d-1}$, at least one of which
must be a flat extension in the case when there is a measure. \ Theorem \ref{gridthm} shows that there are
sequences $\beta^{(2d)}$ for which the first flat extension occurs at $M_{2d-1}$, so all of the above extensions must be
computed in order to recognize that there is a measure. \ This result stands in sharp contrast to traditional truncated moment
theorems (concerning representing measures supported in $\mathbb{R}$, $[a,b]$, $[0,+\infty)$, or in a planar curve of degree $2$),
which express the existence of a measure in terms of tests closely related to the original moment data (cf.
Remark \ref{newrmk} below and \cite{Houston}, \cite{tcmp1}, \cite{tcmp3}, \cite{tcmp11}, \cite{finitevariety}). \ Here we see that, at least within
the framework of moment matrix extensions, we may need to go far from the original data to resolve the existence of a
measure. \ In Theorems \ref{rdext} and \ref{RDnew} we show that
under mild additional hypotheses on $M_d$, the implementation of each extension step, from $M_{d+j}$ to $M_{d+j+1}$,
leading to a flat extension $M_{d+k+1}$, consists of simply verifying a matrix positivity condition.
Let $\mathcal{P}_{d}\equiv \mathbb{R}[x,y]_{d}$ denote the bivariate real
polynomials of degree at most $d$. \ For $p\in \mathcal{P}_{d}$, $p(x,y) \equiv
\displaystyle \sum \limits_{i,j\ge 0, i+j\le d} a_{ij}x^{i}y^{j}$, let $\hat{p}:=(a_{ij})$ denote
the vector of coefficients with respect to the basis for $\mathcal{P}_{d}$
consisting of monomials in degree-lexicographic order, i.e.,
$1,x,y,x^{2},xy,y^{2},\ldots,x^{d},\ldots,y^{d}$. \ Let
$L_{\beta}:\mathcal{P}_{2d}\longrightarrow \mathbb{R}$ denote the
Riesz functional, defined by $L_{\beta}(\displaystyle \sum \limits_{i,j\ge 0,i+j\le 2d} a_{ij}x^{i}y^{j})
:=\sum a_{ij} \beta_{ij}$. \ The moment matrix $M_{d}$, whose rows and
columns are indexed by the monomials in $\mathcal{P}_{d}$, is defined by
$\langle M_{d}\hat{p},\hat{q} \rangle := L_{\beta}(pq)$ ($p,q\in \mathcal{P}_{d}$).
We denote the successive rows and columns of $M_{d}$ by $1,~X,~Y,\ldots,
X^{d},\ldots,Y^{d}$; thus, the entry in row $X^iY^j$, column $X^kY^{\ell}$, which we denote by
$\langle X^{k}Y^{\ell},X^{i}Y^{j} \rangle$, is equal to $\beta_{i+k,j+\ell}$. \ We may denote a linear combination of
rows or columns by $p(X,Y):= \sum a_{ij}X^{i}Y^{j}$ for some
$p \equiv \sum a_{ij}x^{i}y^{j}
\in \mathcal{P}_{d}$; note that $p(X,Y) = M_{d}\hat{p}$. \ We say that $M_{d}$ is {\it{recursively
generated}} if $ker~M_{d}$ has the following ideal-like property:
\begin{equation} \label{recgen}
p,q,pq\in \mathcal{P}_{d},~p(X,Y)=0 \Longrightarrow (pq)(X,Y)=0.
\end{equation}
If $\beta$ has a representing measure, then $M_{d}$ is positive semidefinite
and recursively generated \cite{tcmp10} (and in one variable these conditions are
sufficient for the existence of a representing measure \cite{Houston}). \ Moreover, from \cite{BT}, $\beta$ actually admits
a {\it finitely atomic} representing measure $\mu$, which therefore has finite moments of all orders; it follows that $M_{d}$ admits positive,
recursively generated moment matrix extensions of all orders, namely $M_{d+1}[\mu],\ldots,M_{d+k}[\mu],\ldots$. \ Let us consider a moment matrix
extension
$$M_{d+1}\equiv
\bpm
M_{d} & B(d+1) \\
B(d+1)^{T} & C(d+1)
\epm,$$
where the block $B(d+1)$ includes new moments of degree $2d+1$ (as well as old moments of degrees $d+1,\ldots,2d$),
and block $C(d+1)$ consists of new moments of degree $2d+2$. \ We denote the columns of $B(d+1)$ by $X^{d+1},\ldots,Y^{d+1}$, and we say that
$(M_d \; B(d+1))$ is {\it recursively generated} if (\ref{recgen}) holds in its column space, but with $p,q,pq\in \mathcal{P}_{d+1}$. \
$M_{d+1}$ is positive semidefinite if and only if (i) $M_{d}$ is positive semidefinite;
(ii) $Ran~B(d+1) \subseteq Ran~M_{d}$ (equivalently,
$B(d+1) = M_{d}W$ for some matrix $W$);
(iii) $ C(d+1) \succeq C^{\flat}:= W^{T}M_{d}W$ (cf. \cite{tcmp1}). \ (Here and in the sequel, for a real symmetric
matrix $A$, we will write $A \succeq 0$ (resp. $A \succ 0$) to denote that $A$ is positive semidefinite (resp. positive semidefinite
and invertible).) \ If $M_{d+1} \succeq 0$, then we also have (iv)
each dependence relation in $Col~M_{d}$ (the column space of $M_{d}$)
extends to $Col~M_{d+1}$. \ In the sequel we say that $M_{d+1}$ is an
$RG$ {\it extension} if properties (i), (ii), and (iv) hold and $M_{d+1}$ is
recursively generated (so, in particular, $(M_d \; B(d+1))$ is recursively generated). \ In the
sequel, we provide sufficient conditions for $RG$ extensions; note that to verify that
an $RG$ extension is positive semidefinite and recursively
generated, it is only necessary to verify condition (iii).
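Conditions (i)--(iii) lend themselves to a direct numerical test. The following sketch is illustrative only (the matrices are random stand-ins rather than genuine moment matrices, and the tolerances are ours): it builds a bordered extension with $B = M W$, so that (i) and (ii) hold by construction, and checks positivity for three choices of $C$ against the criterion $C \succeq C^{\flat} = W^{T} M W$.

```python
import numpy as np

# Numerical sketch of the extension test (i)-(iii): given [[M, B], [B^T, C]]
# with B = M @ W, positivity of the full matrix is equivalent to C >= W^T M W.
rng = np.random.default_rng(0)

V = rng.standard_normal((4, 3))
M = V @ V.T                      # (i) a PSD matrix M (rank 3)
W = rng.standard_normal((4, 2))
B = M @ W                        # (ii) Ran B contained in Ran M by construction
Cflat = W.T @ M @ W              # C-flat: the minimal admissible C block

for C, expect in [(Cflat, True),                # flat choice: PSD
                  (Cflat + np.eye(2), True),    # C >= C-flat: still PSD
                  (Cflat - np.eye(2), False)]:  # violates (iii): not PSD
    full = np.block([[M, B], [B.T, C]])
    is_psd = np.linalg.eigvalsh(full).min() >= -1e-9
    assert is_psd == expect

# The flat choice preserves rank: rank [[M, B], [B^T, C-flat]] == rank M.
full_flat = np.block([[M, B], [B.T, Cflat]])
assert np.linalg.matrix_rank(full_flat) == np.linalg.matrix_rank(M)
print("Schur-complement extension tests passed")
```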
For a general $M_{d}$, a significant difficulty in determining the
existence of a flat extension $M_{d+k+1}$ is that there may be infinitely
many positive and recursively generated extensions $M_{d+1}$. \ If one such
extension does not admit a subsequent flat extension, this does not preclude
the possibility that some other extension does. \ In the sequel, we focus
on the class of recursively determinate moment matrices {\it (RD)} introduced in \cite{finitevariety} (cf. \cite{F08}). \ These are
characterized by the property that there can be at most one positive, recursively
generated extension, and there is a concrete
procedure (described below) for determining the existence or nonexistence of this extension.
Since such an extension is also recursively determinate,
we may proceed iteratively to determine the existence or nonexistence
of positive and recursively
generated
extensions
\begin{equation} \label{sequence}
M_{d+1},\ldots,M_{2d-1}.
\end{equation}
As we discuss below, the existence of the extensions in (\ref{sequence}) is equivalent to the existence of a flat extension
$M_{d+k+1}$ and, in fact, one of the extensions in (\ref{sequence}) is a flat extension of $M_d$. \ (If $M_{j+1}$ is positive semidefinite, then $M_{j}$ is positive semidefinite
and recursively generated \cite{tcmp10}, so, using also \cite{BT}, it follows that (\ref{sequence}) is equivalent
to the existence of a positive semidefinite extension $M_{2d}$.)
A bivariate moment matrix $M_{d}$ admits a block decomposition
$M_{d} = (B[i,j])_{0\le i,j \le d}$, where
$$B[i,j] = \bpm
\beta_{i+j,0} & \cdots & \beta_{i,j} \\
\vdots & \vdots & \vdots \\
\beta_{j,i} & \cdots & \beta_{0,i+j}
\epm.$$
Thus, $B[i,j]$ is constant on each cross-diagonal; we refer to this as the {\it Hankel property}. \
Note that in the extension $M_{d+1}$, $B(d+1) = (B[i,d+1])_{0\le i \le d}$,
and all of the new moments of degree $2d+1$ appear within block
$B[d,d+1]$, either in column $X^{d+1}$ (the leftmost column) or in
column $Y^{d+1}$ (on the right). \ Similarly, all new moments of degree $2d+2$
appear in column $X^{d+1}$ or column $Y^{d+1}$ of $C(d+1)$ ($=B[d+1,d+1]$).
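To make the block structure concrete, the following sketch (the three atoms and their densities are arbitrary choices of ours, not data from the paper) assembles $M_{2}(\beta)$ for a finitely atomic measure and checks the properties noted above: symmetry, positive semidefiniteness, and the elementary bound $\text{rank}~M_{d} \le \text{card}~\text{supp}~\mu$.

```python
import numpy as np

# Illustrative sketch: build M_d(beta) for the moments of a small finitely
# atomic measure on R^2 and check the structure described above.
d = 2
atoms = [((0.0, 1.0), 0.5), ((1.0, -1.0), 0.3), ((-2.0, 0.5), 0.2)]  # ((x, y), density)

def beta(i, j):
    """Moment beta_{ij} = integral of x^i y^j dmu."""
    return sum(rho * x**i * y**j for (x, y), rho in atoms)

# Monomials of degree <= d in degree-lexicographic order: 1, x, y, x^2, xy, y^2.
mons = [(i - j, j) for i in range(d + 1) for j in range(i + 1)]

# Entry in row X^iY^j, column X^kY^l is beta_{i+k, j+l}; in particular each
# block B[i, j] is constant on cross-diagonals (the Hankel property).
M = np.array([[beta(i + k, j + l) for (k, l) in mons] for (i, j) in mons])

assert np.allclose(M, M.T)
assert np.linalg.eigvalsh(M).min() >= -1e-12   # PSD, since beta has a measure
assert np.linalg.matrix_rank(M) == len(atoms)  # rank <= card supp mu, here = 3
print("M_2 is PSD of rank", np.linalg.matrix_rank(M))
```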
In the sequel, by a {\it{column dependence relation}} we mean a linear dependence
relation of the form $X^{i}Y^{j}=r(X,Y)$, where $deg~r\le i+j$ and each monomial
term in $r$ strictly precedes $x^{i}y^{j}$ in the degree-lexicographic order;
we say that such a relation is {\it{degree-reducing}} if $deg~r<i+j$.
A bivariate moment matrix $M_{d}$ is {\it {recursively determinate}} if there are
column dependence relations of the form
\begin{equation}\label{xrelation}
X^{n} = p(X,Y) \quad (p\in \mathcal{P}_{n-1}, ~n\le d)
\end{equation}
and
\begin{equation}\label{yrelation}
Y^{m} = q(X,Y) \quad (q\in \mathcal{P}_{m}, ~q ~\text{has no} ~y^m ~\text{term}, ~ m\le d),
\end{equation}
or with the analogous relations in which the roles of $p$ and $q$ are reversed. \ In the sequel, we state the
main results (Theorems \ref{rdext} and \ref{gridthm}) with $p$ and $q$ as in (\ref{xrelation})-(\ref{yrelation}),
but these results are valid as well with the roles of $p$ and $q$ reversed. \ In Section \ref{Sect2} we show
that if $M_{d}$ is recursively determinate, then the only possible positive, recursively generated (or merely $RG$) extension
is completely determined by
column relations $X^{d+1} = (x^{d+1-n}p)(X,Y)$
and $Y^{d+1}=(y^{d+1-m}q)(X,Y)$.
The most important case of recursive determinacy occurs when $M_{d}$ is
positive and {\it{flat}}, i.e., $\text{rank}~M_{d} = \text{rank}~M_{d-1}$ (equivalently,
each column of degree $d$ can be expressed as a linear combination of
columns of strictly lower degree). \ A fundamental result of \cite{tcmp1} shows that
in this case $M_{d}$ admits a unique flat extension $M_{d+1}$ (and a
corresponding $\text{rank}~M_{d}$-atomic representing measure). \ In this paper,
we stay within the framework of recursive determinacy, but relax the
flatness condition, and study the extent to which positive, recursively generated
extensions exist. \
Our main results are Theorems \ref{rdext} and \ref{RDnew}, which give sufficient conditions for RG extensions,
and Theorem \ref{gridthm}, which shows that the number of extension steps leading to a flat extension is sometimes proportional
to the degree of the moment problem. \ Theorem \ref{rdext} shows that if $M_{d}$
is positive and recursively generated, and if all column dependence relations
arise from (\ref{xrelation}) or (\ref{yrelation}) via recursiveness and linearity,
then $M_{d}$ admits a
unique $RG$ extension. \ In general, this extension need not be positive semidefinite
(see the discussion preceding Example \ref{posexample}), but if $d = n+m-2$, then this extension is actually a flat extension, so
$\beta$ admits a representing measure (Corollary \ref{Cblock}). \
Additionally, we show in Theorem \ref{RDnew} that if $M_{d}$ is positive semidefinite,
recursively generated, and recursively determinate, and if all column
dependence relations are degree-reducing, then
$M_{d}$ again admits a unique $RG$ extension. \ However, we show
in Example \ref{notRDexample} that if all of the column relations are degree-reducing
except that $deg~q=m$, then $M_{d}$ need not even admit a block
$B(d+1)$ consistent with recursiveness for $(M_d \; B(d+1))$. \ In Theorem \ref{gridthm} we
show that for each $d$, there exists $\beta \equiv \beta^{(2d)}$, with $M_{d}(\beta) \in RD$,
such that in the sequence of positive, recursively generated extensions, $M_{d+1},\ldots,M_{2d-1}$,
the first flat extension is $M_{2d-1}$, so the determination
that a measure exists takes the maximum possible number of extension steps. \ Moreover, at each extension step, $M_{d+i}$ satisfies
the hypotheses of Theorem \ref{rdext}, so it is guaranteed in advance that the next extension $M_{d+i+1}$
is well-defined and recursively generated; only its positivity needs to be verified. \
In general, however, the existence of a positive, recursively generated extension
$M_{d+1}$ does not imply the existence of a measure. \
In Section \ref{Sect3} we answer \cite[Question 4.19]{finitevariety} by showing that if,
under the hypotheses of Theorem \ref{rdext}, $M_{d}$ does admit a positive,
recursively generated extension $M_{d+1}$, then $M_{d+1}$ may also satisfy the conditions of Theorem \ref{rdext}, but need
not admit a positive, recursively generated extension $M_{d+2}$, and
thus $M_{d}$ may fail to have a measure.
We conclude this section by reviewing and illustrating \cite[Algorithm 4.10]{finitevariety}
concerning extensions of recursively determinate bivariate moment matrices. \
We may assume that $M_{d}$ is positive and recursively generated, for otherwise
there is no representing measure. \ (We note that in numerical problems, positivity and
recursiveness can easily be verified using elementary linear algebra.) \
To define block $B(d+1)$ for an extension $M_{d+1}$, note that blocks
$B[0,d+1],\ldots,B[d-1,d+1]$ consist of old moments from $M_{d}$.
To define moments of degree $2d+1$ for block $B[d,d+1]$,
we first use (\ref{xrelation}) and recursiveness
to define the ``left band" of columns, $X^{n+i}Y^{d+1-i-n} := (x^{i}y^{d+1-i-n}p)(X,Y)$
($0\le i \le d+1-n$). \ In block $B[d,d+1]$, certain ``new moments"
in column $X^{n}Y^{d+1-n}$ can be moved up and to the right along cross-diagonals until they reach row $X^{d}$ (the top row of $B[d,d+1]$)
in columns of the ``central band," $X^{n-1}Y^{d+2-n},\ldots,X^{d+2-m}Y^{m-1}$. \ These values
can then be used to define $\langle X^{d+1-m}Y^{m}, X^{d} \rangle$
(the entry in row $X^{d}$, column $X^{d+1-m}Y^{m}$) by means of
\begin{equation}\label{rightrec}
\langle X^{d+1-m}Y^{m}, X^{d}\rangle := \langle (x^{d+1-m}q)(X,Y), X^{d}\rangle.
\end{equation}
This value may be moved one position down and to the left along its
cross-diagonal and then used to define
$\langle X^{d+1-m}Y^{m}, X^{d-1}Y \rangle
:= \langle (x^{d+1-m}q)(X,Y), X^{d-1}Y\rangle.$
We repeat this process
successively to complete the definition of column $X^{d+1-m}Y^{m}$ in $B[d,d+1]$ as well as
the definition of the central band of columns in this block. \
We next complete the definition of $B[d,d+1]$ by successively defining the
``right band" of columns,
$X^{d-m}Y^{m+1},\ldots,Y^{d+1}$, using
$$X^{d+1-m-i}Y^{m+i}:=(x^{d+1-m-i}y^{i}q)(X,Y)~~~(0\le i\le d+1-m).$$
It is necessary to check that the values in the central and right bands, as just defined,
are compatible with values in the left band, and, more generally, to verify that
$B(d+1)$ is a well-defined moment matrix block. \ If this fails to be the case,
there is no measure. \ If $B(d+1)$ is well-defined, we next check that
$Ran~B(d+1)\subseteq Ran~M_{d}$, for if this is not the case, then there is no
measure. \ Assuming the range condition is satisfied, (\ref{xrelation}) and
(\ref{yrelation}) will hold in the columns of $B(d+1)^{T}$ (the transpose).
We then apply recursiveness and the method used just above in defining $B[d,d+1]$
to attempt to define $C(d+1)\equiv B[d+1,d+1]$. \ Assuming that $C(d+1)$
is well-defined, we further check that $M_{d+1}$ is positive and recursively
generated. \ If any of the preceding steps fails, there is no representing measure.
Our main results (Theorems \ref{rdext} and \ref{RDnew}) show that if all column relations come from
(\ref{xrelation}) or (\ref{yrelation}) via recursiveness and linearity, or if (\ref{xrelation}) - (\ref{yrelation}) hold and all column dependence
relations are degree-reducing, then all of the preceding steps are
guaranteed to succeed, except possibly the positivity of $M_{d+1}$;
thus $M_{d+1}$ is at least an $RG$ extension.
If $M_{d+1}$, as just defined, is positive and recursively generated, then,
since it is also recursively determinate, we may apply the above procedure
successively,
in attempting
to define positive and recursively generated extensions $M_{d+2}$,
$M_{d+3},\ldots$.
Note that the central band of degree $d$ in $M_{d}$ has $n+m-d-1$ columns, $X^{n-1}Y^{d-n+1},\ldots,X^{d+1-m}Y^{m-1}$. \
In each successive extension $M_{d+k}$, the
number of columns in the central band of degree $d+k$ is $n+m-d-1-k$.
Thus, after at most $n+m-d-1$ extension steps, either the extension
process fails, and there is no measure, or the central band disappears and there is
a flat extension,
at or before $M_{n+m-1}$, and a measure. \
(Note that since $n,m\le d$, this refines our earlier assertion that a flat extension
occurs at or before $M_{2d-1}$.) Another estimate for the number of
extension steps is based on the {\it{variety}} of $M_{d}$, defined as
$\mathcal{V} \equiv \mathcal{V}(M_{d}) := \displaystyle \bigcap \limits_{r\in \mathcal{P}_{d},r(X,Y)=0}
\mathcal{Z}_{r}$, where $\mathcal{Z}_{r}$ is the set of real zeros of $r$.
It follows from \cite{finitevariety} that the number of extension steps leading to a flat extension
is at most $1+ \text{card}~\mathcal{V} - \text{rank}~M_{d}$. \ Note also that when a measure exists, it is supported inside
$\mathcal{V}$ \cite{tcmp10}, so its support is a subset of the finite real variety determined by $x^{n}-p(x,y)$ and $y^{m}-q(x,y)$.
Examples are known where the $RG$ extension $M_{d+1}$ is not positive semidefinite
(cf. \cite[Example 4.18]{finitevariety}, [CFM, Theorem 5.2], both with $n=m=d=3$, and the example of Section \ref{Sect3}
(below), with $d=5$, $n=m=4$). \ We next present an example, adapted from \cite[Example 5.2]{F08}, which illustrates
the algorithm in a case leading to a measure.
\begin{example}\label{posexample}
Let $d=3$ and consider
$$M_{3}=
\left( \begin{array}{cccccccccc}
1 & 0 & 0 & 1 & 2 & 5 & 0 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 & 0 & 0 & 2 & 5 & 14 & 42 \\
0 & 2 & 5 & 0 & 0 & 0 & 5 & 14 & 42 & 132 \\
1 & 0 & 0 & 2 & 5 & 14 & 0 & 0 & 0 & 0 \\
2 & 0 & 0 & 5 & 14 & 42 & 0 & 0 & 0 & 0 \\
5 & 0 & 0 & 14 & 42 & 132 & 0 & 0 & 0 & 0 \\
0 & 2 & 5 & 0 & 0 & 0 & 5 & 14 & 42 & 132 \\
0 & 5 & 14 & 0 & 0 & 0 & 14 & 42 & 132 & 429 \\
0 & 14 & 42 & 0 & 0 & 0 & 42 & 132 & 429 & c \\
0 & 42 & 132 & 0 & 0 & 0 & 132 & 429 & c & d
\end{array} \right).
$$
We have $M_{3}\succeq 0$, $M_{2}\succ 0$, and $\text{rank}~M_{3} = 8 \Longleftrightarrow d=2026881 - 2844 c + c^2$. \
When $\text{rank}~M_{3}=8$, then the two
column relations are
$$ Y = X^{3}$$
and
$$ Y^{3} = q(X,Y),$$
where $q(x,y):=(5715 - 4 c) x+ 10 (-1428 + c) y - 3 (-2853 + 2 c) x^2 y + (-1422 + c) x y^2$. \
Let $r_{1}(x,y) = y - x^{3}$ and $r_{2}(x,y) =
y^{3}-q(x,y)$. \ With these two column relations in hand, Theorem \ref{rdext} guarantees the existence of a unique
$RG$ extension $M_{4}$. \ To test the positivity of $M_{4}$, we calculate the determinant of the $9 \times 9$ matrix consisting of the rows and
columns of $M_{4}$ indexed by the monomials $1,x,y,x^2,xy,y^2,x^2y,xy^2,x^2y^2$. \ A straightforward calculation using {\it Mathematica} shows
that three cases arise:
(i) $c<1429$: here $M_{4} \not \succeq 0$, so $M_{3}$ admits no representing measure;
(ii) $c=1429$: here $M_{4}$ is a flat extension of $M_{3}$, so by the main result in \cite {tcmp1}, $M_{3}$ admits an $8$-atomic
representing measure;
(iii) $c>1429$: here $M_{4}$ is a positive RG extension of $M_{3}$ with rank $9$. \ Although $M_{4}$ is not a flat extension of $M_{3}$, it nevertheless satisfies the hypotheses of Theorem \ref{rdext}, so Corollary \ref{flatext} implies that $M_{4}$ admits a flat extension $M_{5}$, and therefore $M_{3}$ has a $9$-atomic representing measure. \ Moreover, since the original algebraic variety $\mathcal{V} \equiv \mathcal{V} (M_{3})$ associated with $M_{3}$, $\mathcal{Z}_{r_{1}}\bigcap \mathcal{Z}_{r_{2}}$, can have at most $9$ points (by B\'ezout's Theorem), it follows that $\mathcal{V}=\mathcal{V}(M_{5})$. This algebraic variety must have exactly $9$ points, and thus constitutes the support of the unique representing measure for $M_{3}$. \
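These claims are easy to check numerically. The sketch below (tolerances are ours) takes $c=1430$, so that $d = 2026881 - 2844 \cdot 1430 + 1430^{2} = 4861$, and verifies the two column relations, $\text{rank}~M_{3}=8$, $M_{3}\succeq 0$, and $M_{2}\succ 0$.

```python
import numpy as np

# Moment matrix M_3 from the example, with c = 1430 and the rank-8 value of d.
c = 1430
d = 2026881 - 2844 * c + c**2   # = 4861
M3 = np.array([
    [1, 0, 0, 1, 2, 5, 0, 0, 0, 0],
    [0, 1, 2, 0, 0, 0, 2, 5, 14, 42],
    [0, 2, 5, 0, 0, 0, 5, 14, 42, 132],
    [1, 0, 0, 2, 5, 14, 0, 0, 0, 0],
    [2, 0, 0, 5, 14, 42, 0, 0, 0, 0],
    [5, 0, 0, 14, 42, 132, 0, 0, 0, 0],
    [0, 2, 5, 0, 0, 0, 5, 14, 42, 132],
    [0, 5, 14, 0, 0, 0, 14, 42, 132, 429],
    [0, 14, 42, 0, 0, 0, 42, 132, 429, c],
    [0, 42, 132, 0, 0, 0, 132, 429, c, d],
], dtype=float)

# Columns in degree-lexicographic order: 1, X, Y, X^2, XY, Y^2, X^3, X^2Y, XY^2, Y^3.
col = {m: i for i, m in enumerate(
    ["1", "X", "Y", "X2", "XY", "Y2", "X3", "X2Y", "XY2", "Y3"])}

# Column relation Y = X^3.
assert np.array_equal(M3[:, col["Y"]], M3[:, col["X3"]])

# Column relation Y^3 = q(X,Y), where substituting c = 1430 into q gives
# q(x,y) = -5x + 20y - 21x^2y + 8xy^2.
qXY = (-5 * M3[:, col["X"]] + 20 * M3[:, col["Y"]]
       - 21 * M3[:, col["X2Y"]] + 8 * M3[:, col["XY2"]])
assert np.array_equal(M3[:, col["Y3"]], qXY)

# M_3 is positive semidefinite of rank 8, with M_2 positive definite.
assert np.linalg.matrix_rank(M3) == 8
assert np.linalg.eigvalsh(M3).min() > -1e-8
assert np.linalg.eigvalsh(M3[:6, :6]).min() > 1e-8
print("column relations, rank, and positivity verified")
```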
To illustrate this case, we take the special value $c=1430$, so that $q(x,y) \equiv -5x+20y-21x^2y+8xy^2$.
\ Let $\alpha:=\frac{1}{2}\sqrt{5-2\sqrt{5}}$ and $\gamma:=\sqrt{5}\alpha$. \ A calculation shows that
$\mathcal{V} = \{(x_{i},x_{i}^{3})\}_{i=1}^{9}$,
where $x_{1} = 0$, $x_{2} =\frac{1}{2}(-1-\sqrt{5}) \approx -1.618$, $x_{3} =\frac{1}{2}(1-\sqrt{5}) \approx -0.618$, $x_{4}
=-x_{3} \approx 0.618$,
$x_{5} =-x_{2} \approx 1.618$,
$x_{6} =-\alpha-\gamma \approx -1.176$,
$x_{7} =-\alpha+\gamma \approx 0.449$,
$x_{8} =-x_{7} \approx -0.449$ and
$x_{9} =-x_{6} \approx 1.176$. \ $M_{3}$ satisfies the hypotheses of Theorem \ref{rdext} with $n=m=3$,
so we proceed to generate the $RG$ extension
$M_{4}$. \ This extension is uniquely determined by imposing the
column relations $X^{4} = XY$, $X^{3}Y = Y^{2}$,
$XY^{3} = (xq)(X,Y)$, and $Y^{4}= (yq)(X,Y)$
(first in
$ \left( \begin{array}{cc}
M_{3} & B(4) \\
\end{array} \right) $, then in
$ \left( \begin{array}{cc}
B(4)^{T} & C(4)
\end{array} \right) $). \
A calculation shows
that, as expected, these relations unambiguously define a positive moment matrix
$M_{4}$ with $\text{rank}~M_{4} = 9$ ($>8=\text{rank}~M_{3}$). \ It follows that $M_{3}$ admits no
flat extension $M_{4}$, so we proceed to construct the $RG$
extension $M_{5}$, uniquely determined by imposing the relations
$X^{5} = X^{2}Y$, $X^{4}Y = XY^{2}$, $X^{3}Y^{2} = Y^{3}$,
$X^{2}Y^{3} = (x^{2}q)(X,Y)$, $XY^{4}= (xyq)(X,Y)$,
$Y^{5} = (y^{2}q)(X,Y)$. \ A calculation of these columns
(first in
$ \left( \begin{array}{cc}
M_{4} & B(5) \\
\end{array} \right) $, then in
$ \left( \begin{array}{cc}
B(5)^{T} & C(5)
\end{array} \right) $),
shows that, as again expected, they do fit together to unambiguously define
a moment matrix $M_{5}$.
From the form of $q(x,y)$, we see that $M_{5}$ is actually a
flat extension of $M_{4}$, in keeping with the above discussion. \ Corresponding to this flat extension is the unique,
$9$-atomic, representing
measure $\mu\equiv \mu_{M_{5}}$ as described in \cite{tcmp10}. \
Clearly, $\text{supp}~\mu = \mathcal{V}$, so $\mu$ is of the form
$\mu = \sum_{i=1}^{9} \rho_{i}\delta_{(x_{i},x_{i}^{3})}$. \
To compute the densities, we use the method of \cite{tcmp10} and find
$\rho_{1}=\frac{1}{5}=0.2$, $\rho_{2}=\rho_{5}=\frac{-1+\sqrt{5}}{8\sqrt{5}} \approx 0.069$,
$\rho_{3}=\rho_{4}=\frac{1+\sqrt{5}}{8\sqrt{5}} \approx 0.181$, $\rho_{6}=\rho_{9}=\frac{5+3\sqrt{5}}{40\sqrt{5}} \approx 0.131$, and $\rho_{7}=\rho_{8}=\frac{-5+3\sqrt{5}}{40\sqrt{5}} \approx 0.019$.
\ Thus, the existence of a representing measure for $\beta^{(6)}$
is established on the basis of the extensions $M_{4}$ and
$M_{5}$,
in keeping with Theorem \ref{rdext}.
Note that in this case, the actual number of extensions leading to
a flat extension can be computed as either $n+m-d-1$ or as
$1+ \text{card}~\mathcal{V}-\text{rank}~M_{3}$, which is consistent with our earlier discussion.
$\square$
\end{example}
\textit{Acknowledgment}. \ Examples \ref{posexample} and \ref{notRDexample}, and the example of Section \ref{Sect3}, were obtained using
calculations with the software tool \textit{Mathematica \cite{Wol}}. \
\section{The extension of a bivariate $RD$ positive moment matrix }\label{Sect2}
\setcounter{equation}{0}
In Theorem \ref{rdext} (below) we show that a positive recursively determinate
moment matrix $M_{d}\equiv M_{d}(\beta)$,
each of whose column dependence relations is recursively generated by a relation
of the form
\begin{equation}\label{X} X^{n} = p(X,Y), \; (p\in \mathcal{P}_{n-1})
\end{equation}
or
\begin{equation}\label{Y}
Y^{m} = q(X,Y), \; (q\in \mathcal{P}_{m}),
\end{equation}
(where $n,m\le d$ are fixed and each term $x^{u}y^{v}$ of $q$ satisfies $v<m$),
always admits
a unique $RG$ extension
$$M_{d+1}\equiv \bpm
M_{d} & B(d+1) \\
B(d+1)^{T} & C(d+1) \epm .
$$
The main step towards Theorem \ref{rdext} is the following result, which shows
that $M_{d}$ (as above) admits
an extension block
$B(d+1)$ that is consistent with the structure of a positive,
recursively generated moment matrix extension $M_{d+1}$.
\begin{theorem}\label{main}
Suppose the bivariate moment matrix
$M_{d}(\beta)$ is positive and recursively generated,
with column dependence relations generated entirely by
(\ref{X}) and (\ref{Y}) via recursiveness and linearity. \
Then there exists a unique moment matrix block
$B(d+1)$ such that
$\bpm
M_{d} & B(d+1)
\epm$ is recursively generated and $Ran~B(d+1)\subseteq Ran~ M_{d}$.
\end{theorem}
The hypothesis implies that the column dependence relations in $M_{d}$
are precisely those of the form
\begin{equation}\label{Xrec}
X^{n+i}Y^{j} = (x^{i}y^{j}p)(X,Y) \quad (i,j\ge 0,~i+j+n\le d)
\end{equation}
and
\begin{equation}\label{Yrec}
X^{k}Y^{m+l}=(x^{k}y^{l}q)(X,Y) \quad (k,l\ge 0, ~k+l+m\le d).
\end{equation}
In particular, the degree $d$ columns $X^{d},\ldots,X^{n}Y^{d-n}$
are recursively determined in terms of
columns of strictly lower degree. \ Since, by (\ref{Yrec}), each column
$X^{d-m-k}Y^{m+k}$ ($0\le k\le d-m$) may be
expressed as a linear combination of columns to its left, it follows
that if $n\le d-m+1$, then $M_{d}$ is flat. \
Since a flat positive moment matrix
admits a unique positive, recursively generated extension (cf. \cite{tcmp1}), we may
assume that not every column of degree $d$ is recursively determined,
i.e., $n>d-m+1$, or
\begin{equation}\label{n+m}
n+m > d+1 .
\end{equation}
We may denote
\begin{equation}\label{X^n}
X^{n} = p(X,Y) \equiv \displaystyle \sum \limits_{r,s\ge 0, r+s\le n-1}
a_{rs}X^{r}Y^{s}
\end{equation}
and
\begin{eqnarray}
Y^{m} &=&q(X,Y)\equiv \sum\limits_{u,v\geq 0,u+v\leq m,v<m}b_{uv}X^{u}Y^{v} \nonumber \\
&\equiv &\sum\limits_{a,b\geq 0,a+b\leq m-1}\alpha
_{ab}X^{a}Y^{b}+\sum\limits_{c,e\geq 0,c+e=m,e<m}\gamma _{ce}X^{c}Y^{e}.
\end{eqnarray}
Thus, in any positive and recursively generated (or merely $RG$) extension
$M_{d+1}$, certain columns of
$B(d+1)$ are recursively determined.
On the left of $B(d+1)$,
there is a band of
columns,
\begin{eqnarray}\label{Xton_rec}
X^{n+f}Y^{d+1-n-f} &:=& (x^{f}y^{d+1-n-f}p)(X,Y) \nonumber \\
&\equiv& \sum \limits_{r,s\ge 0, r+s\le n-1}^{}a_{rs}X^{r+f}Y^{s+d+1-n-f} \quad (0\le f\le d+1-n)
\end{eqnarray}
each of which is well-defined as a linear combination
of columns of $M_{d}$. \ On the right of $B(d+1)$
there is another band of recursively determined columns,
\begin{eqnarray}\label{Y^m_rec}
X^{d+1-m-g}Y^{m+g} &:=& (x^{d+1-m-g}y^{g}q)(X,Y) \nonumber \\
&\equiv& \sum \limits_{u,v\ge 0, u+v\le m, v<m}^{}b_{uv}X^{u+d+1-m-g}Y^{v+g} \quad (0\le g\le d+1-m). \quad \quad
\end{eqnarray}
If $deg~q=m$, the sum in (\ref{Y^m_rec}) may involve
columns from the middle band,
$X^{u+d+1-m-g}Y^{v+g}$ ($u+v=m, u+d+1-m-n < g < m-v$),
which have not yet been defined, so some care is needed in
implementing (\ref{Y^m_rec}).
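For orientation, consider the hypothetical case $d=4$, $n=4$, $m=3$ (so that $n+m=7>d+1$): the six degree-$5$ columns of $B(5)$ split as
\begin{equation*}
\underbrace{X^{5},\ X^{4}Y}_{\text{left band, via (\ref{Xton_rec})}}
\quad \Big| \quad
\underbrace{X^{3}Y^{2}}_{\text{middle band}}
\quad \Big| \quad
\underbrace{X^{2}Y^{3},\ XY^{4},\ Y^{5}}_{\text{right band, via (\ref{Y^m_rec})}},
\end{equation*}
with only the middle column not recursively determined.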
The proof of Theorem \ref{main} entails two main steps, which we prove in detail in Section \ref{PROOF}: the construction of the
block $B(d+1)$, and the verification of the inclusion $Ran~B(d+1) \subseteq Ran~M_d$. \ Assuming that we have already built a unique block $B(d+1)$
consistent with the existence of a positive, recursively generated extension
\begin{equation*}
M_{d+1}\equiv
\bpm
M_{d} & B(d+1) \\
B(d+1)^{T} & C(d+1)
\epm,
\end{equation*}
we next use this to construct a unique block $C(d+1)$ consistent with the existence of an $RG$ extension.
\begin{cor}\label{Cblock}
If $M_{d}$ satisfies the hypotheses of Theorem \ref{main}, then
there exists a unique moment matrix block $C\equiv C(d+1)$
consistent with the structure of an $RG$ extension $M_{d+1}$.
\end{cor}
\begin{proof}
In any $RG$ extension
$M_{d+1}$
the column relations (\ref{Xton_rec}) and (\ref{Y^m_rec}) must hold. \ The proof of Theorem \ref{main} shows that these relations define a unique moment matrix block $B\equiv B(d+1)$ consistent
with positivity and recursiveness.
To define $C\equiv C(d+1)$, we may formally repeat the proof of
Theorem \ref{main} concerning the well-definedness and uniqueness of block
$B(d+1)$, but applying the argument with $M_d$ replaced with $B(d+1)^{T}$,
and $B[d,d+1]$ replaced with $C\equiv B[d+1,d+1]$. \ In brief, we use
$B(d+1)^{T}$ and (\ref{Xton_rec}) to define the left recursive
band in $C$. \ We then define column $X^{d+1-m}Y^{m}$
by applying (\ref{Y^m_rec}) successively,
starting in row $X^{n+1}$, so that this column is Hankel with respect to the
central band, which we are completing simultaneously. \ We then use (\ref{Y^m_rec}) to successively define the remaining columns
on the right. \ Lemma \ref{hankel} can be used to show that the left band is internally
Hankel, and an adaptation of the argument in Lemma \ref{Hankellemma} can be used to show
that column $X^{d+1-m}Y^{m}$ is Hankel with respect to the left and
central blocks. \ Finally, the argument of Lemma \ref{righthankel} can be adapted to show
that the right band is also Hankel.
\end{proof}
By combining Theorem \ref{main} with Corollary \ref{Cblock}, we immediately obtain the first of our main results, which follows.
\begin{theorem}\label{rdext} If $M_{d}$ is positive, with column relations generated entirely by (\ref{X}) and (\ref{Y})
via recursiveness and linearity, then $M_{d}$ admits a unique $RG$ extension $M_{d+1}$, i.e., $Ran~B(d+1)\subseteq Ran~M_{d}$, (\ref{Xton_rec})-(\ref{Y^m_rec}) hold
in $Col \; M_{d+1}$, and $M_{d+1}$ is recursively generated.
\end{theorem}
\begin{cor}\label{flatext}
If $M_{d}$ satisfies the hypotheses of Theorem \ref{rdext} and $d=n+m-2$, then
$M_{d}$ admits a flat moment matrix extension $M_{d+1}$ (and
$\beta$ admits a $\text{rank}~M_{d}$-atomic representing measure).
\end{cor}
\begin{proof}
Each column in the left band is, from (\ref{Xton_rec}), a linear combination
of columns of strictly lower degree.
Since $d=n+m-2$, there is no central band in the construction of $B(d+1)$ in Theorem \ref{main} and
of $C(d+1)$ in Corollary \ref{Cblock}. \ It thus follows
from (\ref{Y^m_rec}) that
each column in the right band is also a linear combination of columns
of strictly lower degree, so $M_{d+1}$ is a flat extension.
\end{proof}
To illustrate Corollary \ref{flatext} in the simplest case, let $n=m=d=2$ and suppose that $M_2$ satisfies the hypotheses of Theorem \ref{rdext}.
\ It follows from \cite{tcmp6} that $M_{2}$
admits a representing measure if and only if the equations
$x^{2}-p(x,y) = 0$ and $y^{2}-q(x,y)=0$ have at least 4 common real zeros.
Corollary \ref{flatext} implies that the latter ``variety condition" is superfluous; indeed, from Corollary \ref{flatext}, there {\it{is}} a representing measure, so \cite{tcmp6} implies that the system {\it{must}} have at least 4 ($= \text{rank}~M_{2}$) common real zeros.
Note that if $M_{d}(\beta)$ satisfies the hypothesis of Theorem \ref{rdext}, then the existence or nonexistence of a representing measure for $\beta$ will be established in at most $d-1$ extension steps (after which the central band would vanish and every column of $M_{2d-1}$ would be recursively determined). \ The next result shows that for every $d\ge 2$, there exists $M_{d}(\beta)$, satisfying the conditions of Theorem \ref{rdext}, for
which the determination that a representing measure exists entails the maximum number of extension steps, each of which falls within the scope of Theorem \ref{rdext}. \
\begin{theorem}\label{gridthm}
For $d \ge 1$, there exists a moment matrix $M_{d}$, satisfying the conditions of Theorem \ref{rdext}, for which
the extension algorithm determines successive positive, recursively generated extensions $M_{d+1},\ldots,M_{2d-1}$, and for which the first
flat extension occurs at $M_{2d-1}$. \ Moreover, each extension $M_{d+i}$ satisfies the conditions of Theorem \ref{rdext}, so
to continue the sequence it is only necessary to verify that the $RG$ extension $M_{d+i+1}$ is positive semidefinite.
\end{theorem}
\begin{remark} \label{newrmk}
To illustrate the significance of Theorem \ref{gridthm}, let us compare it to the following result of \cite[Theorem 1.2]{tcmp9}:
If $M_d(\beta)$ is a
bivariate moment matrix with a column relation $p(X,Y) = 0 ~ (deg ~ p \le 2)$, then $\beta$ has a representing measure if and only if $M_d$ is
positive, recursively generated, and $\text{rank} ~ M_d \le \text{card} ~ \mathcal{V}(M_{d})$. \ In this result, we see that the existence of a measure can
be determined directly from the data by establishing the positivity, rank, and variety of $M_d$. \ By contrast, in Theorem \ref{gridthm}
we see that it may be necessary to extend $M_d$ to $M_{2d-1}$ in order to establish that a measure exists. \ In this sense, within
the framework of moment matrices, we see that the general case of the truncated moment problem cannot be solved in ``closed form."
\ We may therefore seek to go beyond the framework of moment matrices. \ Recall that for $\beta \equiv \beta^{(2d)}$,
$L_{\beta}$ is {\it{positive}} if $p\in \mathcal{P}_{2d}$, $p|_{\mathbb{R}^{2}} \ge 0 \Longrightarrow L_{\beta}(p) \ge 0$. \ In \cite{tcmp12}
we showed that $\beta$ admits a representing measure if and only if $L_\beta$ admits a positive extension
$L:\mathcal{P}_{2d+2} \longrightarrow \mathbb{R}$. \ Thus, as an alternative to constructing all of the extensions $M_{d+1},\ldots,M_{2d-1}$,
in principle it would suffice to test the positivity of the Riesz functional corresponding to $M_{d+1}$. \ Unfortunately, at present there
is no known concrete test for positivity of Riesz functionals (except in special cases, cf. \cite{tcmp12}, \cite{FN1}, \cite{FN2}),
so the moment matrix extension algorithm remains the most viable approach to resolving the existence of a representing measure in
the bivariate $RD$ case.
\end{remark}
For the proof of Theorem \ref{gridthm}, we require some preliminaries. \ For $d\ge 1$, suppose
$x_{1},\ldots, x_{d}$ are distinct and $y_{1},\ldots, y_{d}$ are distinct. \ Let $P(x,y) := (x-x_{1})\cdots (x-x_{d})$, $Q(x,y):=(y-y_{1})\cdots (y-y_{d})$, and set $\mathcal{Z}_{P,Q}:= \{(x_{i},y_{j})\}_{1\le i,j \le d}$, the common zeros of $P$ and $Q$. \
Let $J$ be an ideal in $\mathbb{R}[x,y]$ with real variety $\mathcal{V}\equiv
\mathcal{V}(J):= \{(x,y)\in \mathbb{R}^{2}:s(x,y) = 0~ \forall s\in J\}$. \ Let $I(\mathcal{V})
=\{f \in \mathbb{R}[x,y]: f|\mathcal{V} \equiv 0 \}$. \ In general, $I(\mathcal{V}(J))$ may be strictly larger than $J$ \cite{CLO}. \
However, for $J:=(P,Q)$ (with $P$ and $Q$ as above), we will show below (Proposition \ref{prop 213}) that
each element of $I(\mathcal{V}(J))$
admits a ``degree-bounded" representation which displays it as a member of
$J$; in particular, $J$ is a {\it{real ideal}}
in the sense of \cite{Mo}. \ Although this result may well be known, we could not find a reference, so we
include a proof for the sake of completeness. \ First, we need three auxiliary results.
\begin{lemma} \label{divisionalg} (The Division Algorithm in $\mathbb{R}[x_{1},\cdots ,x_{n}]$ \cite[Section 2.3, Theorem 3]{CLO}) \
Fix a monomial order $>$ on $\mathbb{Z}_{\geq 0}^{n}$ and let $F=(f_{1},\cdots ,f_{s})$ be an ordered $s$-tuple of
polynomials in $\mathbb{R}[x_{1},\cdots ,x_{n}]$. \ Then every $f\in \mathbb{R}[x_{1},\cdots ,x_{n}]$ can be written as
\begin{equation*}
f=a_{1}f_{1}+\cdots +a_{s}f_{s}+r,
\end{equation*}
where $a_{i}\in \mathbb{R}[x_{1},\cdots ,x_{n}]$, and either
$r=0$ or $r$ is a linear combination, with coefficients in $\mathbb{R}$, of
monomials, none of which is divisible by any of the leading terms in $f_{1},\cdots ,f_{s}$.
Furthermore, if $a_{i}f_{i}\neq 0$, then we have $\text{multideg}~(f)\geq \text{multideg}~(a_{i}f_{i})$.
\end{lemma}
\begin{lemma} \cite[p. 67]{Sau} \label{Sauer} \ For $N \geq 1$ let $\{v_1,\cdots,v_N\}$ be distinct points in
$\mathbb{R}^2$, and consider the multivariable
Vandermonde matrix $V_{N}:=(v_i^{\alpha})_{1 \le i \le N, \alpha \in \mathbb{Z}_+^2, \left|\alpha\right| \leq N-1}$, of size $N \times \frac{N(N+1)}{2}$. \ Then the rank of $V_N$ equals $N$.
\end{lemma}
\begin{corollary} \label{vander} \ Let ${\bf x} \equiv \{ x_1,\ldots,x_m \} $ and ${\bf y} \equiv \{y_1,\ldots,y_n\}$
be sets of distinct real numbers, and consider the
grid ${\bf x} \times {\bf y} := \{(x_i,y_j)\}_{1 \leq i \leq m, 1 \leq j \leq n}$ consisting of $N:=mn$ distinct points in
$\mathbb{R}^2$. \ Then the generalized Vandermonde matrix $V_{{\bf x} \times {\bf y}}$, obtained from $V_{N}$ by removing all columns indexed
by monomials divisible by $x^m$ or $y^n$, is invertible.
\end{corollary}
\begin{proof}
The columns of $V_{N}$ are indexed by the monomials in $x$ and $y$ of degree at most $N-1$, listed in degree-lexicographic order. \
The size of $V_{N}$ is $N \times \frac{N(N+1)}{2}$, and by Lemma \ref{Sauer} we know that its rank is $N$. \
We will show that $V_{{\bf x} \times {\bf y}}$ has exactly $N$ columns, and that each column that was removed from $V_{N}$ to
produce $V_{{\bf x} \times {\bf y}}$ is a linear combination of other columns in $V_{N}$. \
Toward the first assertion, assume without loss of generality that $m \leq n$, let $k:=n-m$ (so that $m+k=n$), and
observe that the columns of $V_{{\bf x} \times {\bf y}}$ are
indexed by the following monomials:
\begin{eqnarray*}
1,\\
x,y,\\
x^2,xy,y^2,\\
\cdots,x^{m-1},\cdots,y^{m-1}, \\
x^{m-1}y,\cdots,xy^{m-1},y^m, \\
x^{m-1}y^2,\cdots,xy^{m},y^{m+1}, \\
x^{m-1}y^3,\cdots,xy^{m+1},y^{m+2}, \\
\cdots, \\
x^{m-1}y^k, \cdots, xy^{m+k-2},y^{m+k-1}, \\
x^{m-1}y^{k+1},\cdots,xy^{m+k-1}, \\
\cdots, \\
x^{m-1}y^{n-1}.
\end{eqnarray*}
The number of monomials is then $(1+2+\cdots+m)+mk+[(m-1)+(m-2)+\cdots+2+1]=\frac{m(m+1)}{2}+mk+\frac{(m-1)m}{2}
=m^2+mk=m(m+k)=mn$. \ It follows that $V_{{\bf x} \times {\bf y}}$ has exactly $N \equiv mn$ columns.
To prove the second assertion, observe that the polynomials $P:=(x-x_1)\cdots (x-x_m)$ and $Q:=(y-y_1)\cdots (y-y_n)$ vanish identically on
${\bf x} \times {\bf y}$, and therefore the columns of $V_{N}$ indexed by multiples of $x^m$ or $y^n$ are linear combinations of
columns preceding them in degree-lexicographic order.
By combining the preceding two assertions, it follows that $V_{\bf{x} \times \bf{y}}$, being an $N \times N$ matrix of rank $N$, must be invertible.
\end{proof}
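As a quick numerical sanity check of Corollary \ref{vander} (not part of the argument), the following sketch builds the generalized Vandermonde matrix for the illustrative grid ${\bf x}=\{0,1\}$, ${\bf y}=\{0,1,2\}$ (the grid values are arbitrary choices) and verifies its invertibility by exact rational elimination.

```python
from fractions import Fraction
from itertools import product

# Illustrative grid: m = 2 distinct x-values, n = 3 distinct y-values (N = mn = 6).
xs, ys = [0, 1], [0, 1, 2]
m, n = len(xs), len(ys)

# Columns of V_{x times y}: monomials x^i y^j with i < m and j < n
# (i.e., not divisible by x^m or y^n), in degree-lexicographic order.
monos = sorted(((i, j) for i in range(m) for j in range(n)),
               key=lambda ij: (ij[0] + ij[1], -ij[0]))

# Rows: the N grid points; entry = monomial evaluated at the point.
V = [[Fraction(x**i * y**j) for (i, j) in monos]
     for (x, y) in product(xs, ys)]

def det(A):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    N, d = len(A), Fraction(1)
    for c in range(N):
        piv = next((r for r in range(c, N) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, N):
            f = A[r][c] / A[c][c]
            for k in range(c, N):
                A[r][k] -= f * A[c][k]
    return d

assert len(monos) == m * n      # exactly N columns survive the deletion
assert det(V) != 0              # V_{x times y} is invertible
```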
The following result is a special case of Alon's Combinatorial Nullstellensatz \cite{A}; for completeness, we give a proof based on
Corollary \ref{vander}.
\begin{corollary} \label{md-1}
Let $G \equiv {\bf x} \times {\bf y}$ be a grid as in Corollary \ref{vander}, let $N:=mn$, and let $p \in \mathbb{R}[x,y]$ be such that
$\text{deg}_x~p<m$ and $\text{deg}_y~p<n$. \ Assume also that $p|_G \equiv 0$. \ Then $p \equiv 0$.
\end{corollary}
\begin{proof}
We wish to apply Corollary \ref{vander}. \ From the hypotheses, it is straightforward to verify that $p$ does not contain any
monomials divisible by $x^m$ or $y^n$, so $\hat{p}$, properly extended with zeros
to indicate the absence of relevant monomials, can be regarded as a vector in $\mathbb{R}^N$, the domain of the
generalized Vandermonde matrix $V_{G}$ in Corollary \ref{vander}. \ Since, by assumption, $p(x_i,y_j)=0$ for all $1 \le i \le m$ and $1 \le j \le n$,
it follows that $V_{G}\hat{p}=0$. \ Since $V_{G}$ is invertible (by Corollary \ref{vander}), we must have $\hat{p}=0$, so $p \equiv 0$, as desired.
\end{proof}
\begin{prop} \label{prop 213} \ Let $P(x,y) := (x-x_{1})\cdots (x-x_{d})$ and let $Q(x,y):=(y-y_{1})\cdots (y-y_{d})$. \
If $\rho := \text{multideg}~(f) \geq d$ and $f| \mathcal V((P,Q)) \equiv 0$, then there exist $u,v \in \mathcal{P}_{\rho-d}$ such that $f=uP+vQ$.
\end{prop}
\begin{proof}
Let $\mathcal{V} := \mathcal{V}((P,Q))$. \ By Lemma \ref{divisionalg}, we can write $f=uP+vQ+r$, where $\text{multideg}~ (uP)\leq \rho$ and $\text{multideg}~ (vQ) \leq \rho$. \ It follows that $u,v \in \mathcal{P}_{\rho-d}$ and that $r| \mathcal{V} \equiv 0$. \ Moreover, $r$ is a linear combination, with coefficients in $\mathbb{R}$, of
monomials, none of which is divisible by any of the leading terms in $P$ and $Q$, that is, they are not divisible by $x^d$ and $y^d$. \ Therefore,
$r$ satisfies the hypotheses of Corollary \ref{md-1} with $m=n=d$. \ By Corollary \ref{md-1}, $r \equiv 0$. \ Thus, $f=uP+vQ$, as desired.
\end{proof}
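To illustrate Proposition \ref{prop 213} in the simplest hypothetical case, take $d=1$ and $x_{1}=y_{1}=0$, so that $P(x,y)=x$, $Q(x,y)=y$, and $\mathcal{V}((P,Q))=\{(0,0)\}$. \ The polynomial $f=x^{2}+3xy-y^{2}+x-y$ vanishes at the origin and has $\rho = 2$, and indeed
\begin{equation*}
f=(x+3y+1)\,P+(-y-1)\,Q,
\end{equation*}
with $u=x+3y+1$ and $v=-y-1$ in $\mathcal{P}_{\rho-d}=\mathcal{P}_{1}$, as the proposition guarantees.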
\begin{proof}[Proof of Theorem \ref{gridthm}]
At several points of the proof we will use the fact that if a moment matrix $M_{k}$ admits a representing measure $\nu$ and $f \in \mathcal{P}_{k}$,
then $f |_{\text{supp}~ \nu} \equiv 0$ if and only if $f(X,Y)=0$ in $\mathcal{C}_{M_{k}}$ \cite[Proposition 3.1]{tcmp1}. \ Let
$x_1,\ldots,x_d$ and $y_1,\ldots,y_d$ be sets of distinct real numbers, and let $G:={\bf x} \times {\bf y} \equiv \{(x_i,y_j)\}_{1 \le i,j \le d}$
denote the corresponding grid. \ Let $\mu$ denote a measure whose support is precisely equal to $G$ and let $M_{d}:= M_{d}[\mu]$. \
Let $P(x,y) := (x-x_{1})\cdots (x-x_{d})$ and let $Q(x,y):=(y-y_{1})\cdots (y-y_{d})$. \ Since $P |_G \equiv 0$ and $Q |_G \equiv 0$, then $P(X,Y)=0$ and $Q(X,Y)=0$ in $\mathcal{C}_{M_d}$, whence
$X^d=p(X)$ and $Y^d=q(Y)$ for certain $p,q \in \mathcal{P}_{d-1}$ satisfying $P(x,y) \equiv x^d-p(x)$ and $Q(x,y) \equiv y^d-q(y)$;
thus, $M_d$ is recursively determinate.
\ We first show that the only column dependence relations in $M_d$ arise from the above relations
via linearity, so that $M_d$ falls within the scope of Theorem \ref{rdext}. \ If $\text{deg} ~ f=d$ and $f(X,Y)=0$ in $Col ~ M_d$,
then $f |_G \equiv 0$, so Proposition \ref{prop 213} implies that there exist scalars $u$ and $v$ such that $f=uP+vQ$. \ Thus,
$f(X,Y)=uP(X,Y)+vQ(X,Y)$. \ Further, if $\text{deg}~ f<d$ and $f(X,Y)=0$, then since $f|_{G} \equiv 0$,
it follows from Corollary \ref{md-1} that $f \equiv 0$ (whence $M_{d-1} \succ 0$). \ Thus, $M_d$ satisfies the
conditions of Theorem \ref{rdext}. \
Since $M_{d}$ has the finitely atomic representing measure $\mu$, $M_{d}$ admits successive positive, recursively generated extensions
$M_{d+1}[\mu], M_{d+2}[\mu],\ldots$, so clearly these are the unique successive positive, recursively determined extensions of $M_{d}$; let
$M_{d+k} := M_{d+k}[\mu] ~ (1 \le k \le d-1)$. \ We seek to show that each of $M_{d+1},\ldots,M_{2d-1}$ falls within the scope of Theorem \ref{rdext} and that the first flat extension in this sequence occurs with $\text{rank}~M_{2d-1}= \text{rank}~M_{2d-2}$. \
We first give a concrete description of $ker ~ M_{d+k}$. \ Since $M_{d-1} \succ 0$, if $r \in \mathcal{P}_{d+k}$ with $\hat{r} \in
ker ~ M_{d+k}$, then $\text{deg} ~ r=d+j$ for some $0 \le j \le k$. \ Since $\mu$ is a representing measure for $M_{d+k}$, it follows that $r|_{\text{supp}~\mu} \equiv 0$. \ Proposition \ref{prop 213} now implies that there exist $u,v\in \mathcal{P}_{j}$ such that $r=uP + vQ$ (with $P$ and $Q$ defined above in the description of $\mu$). \ Thus $ker~M_{d+k}$ is indexed by the recursively determined columns; precisely, $ker~M_{d+k}$ is the span of the vectors $\widehat{x^{s}y^{t}(x^d-p)}$ and
$\widehat{x^{s}y^{t}(y^d-q)}$ ($s,t\ge0$, $s+t\le k$). \ Thus, $M_{d+k}$ satisfies the conditions of Theorem \ref{rdext}. \
In passing from $M_{d+k-1}$ to $M_{d+k}$ there are $d+k+1$ new columns, of which $2(k+1)$ are recursively determined, and since these correspond (as just above) to elements of $ker~M_{d+k}$, we have
$\text{rank}~M_{d+k}= \text{rank}~M_{d+k-1} + (d+k+1)-2(k+1)= \text{rank}~M_{d+k-1}+ d-k-1$. \ Thus the first flat extension occurs when $k=d-1$, in passing from $M_{2d-2}$ to $M_{2d-1}$.
\end{proof}
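The rank bookkeeping in the last paragraph of the proof can be illustrated numerically. \ The sketch below (an illustration only, using the arbitrary grid $\{0,1,2\}\times\{0,1,2\}$, so $d=3$) computes the ranks of $M_{3}[\mu],M_{4}[\mu],M_{5}[\mu]$ for $\mu$ the counting measure on the grid; the increments are $d-k-1 = 1$ and then $0$, with the first flat extension at $M_{2d-1}=M_{5}$.

```python
from fractions import Fraction
from itertools import product

# Illustrative grid measure: unit masses at the 9 points of {0,1,2} x {0,1,2} (d = 3).
pts = list(product([0, 1, 2], repeat=2))

def monomials(k):
    # Monomials of degree <= k in degree-lex order: 1, x, y, x^2, xy, y^2, ...
    return [(i, deg - i) for deg in range(k + 1) for i in range(deg, -1, -1)]

def moment_matrix(k):
    # Entry <X^c Y^e, X^a Y^b> = beta_{a+c, b+e} = sum over atoms of x^(a+c) y^(b+e).
    mons = monomials(k)
    return [[Fraction(sum(x**(a + c) * y**(b + e) for (x, y) in pts))
             for (c, e) in mons] for (a, b) in mons]

def rank(A):
    """Exact rank via Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, len(A)):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

ranks = [rank(moment_matrix(k)) for k in (3, 4, 5)]
assert ranks == [8, 9, 9]  # increments d-k-1: +1 (k=1), then +0 (k=2): flat at M_5
```

The values $8,9,9$ match the kernel count in the proof: at level $d+k$ the kernel is spanned by $\widehat{x^sy^tP}$ and $\widehat{x^sy^tQ}$ with $s+t\le k$.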
We continue with an example which shows that Theorem \ref{main}
is no longer valid if we permit column dependence relations in $M_{d}$ in addition
to those in (\ref{Xrec}) - (\ref{Yrec}).
\begin{example}\label{notRDexample}
We define $M_{3}$
by setting $\beta_{00}= \beta_{20}=\beta_{02} = 1$;
$\beta_{11}= \beta_{30}=\beta_{21}=\beta_{03}=0$;
$\beta_{12}=\beta_{40}=2$; $\beta_{31}=\beta_{13}=0$;
$\beta_{22}=5$, $\beta_{04}=22$;
$\beta_{50}=-1$, $\beta_{41}=-2$, $\beta_{32}=13$,
$\beta_{23}=3$, $\beta_{14}=\frac{894}{13}$,
$\beta_{05}= \frac{336}{13}$;
$\beta_{60}=178$, $\beta_{51}=139$, $\beta_{42}=159$,
$\beta_{33}= \frac{1657}{13}$, $\beta_{24}= \frac{4298}{13}$,
$\beta_{15}=r$, $\beta_{06}= \gamma
\equiv \frac{443272376768-2742712830r-4826809r^{2}}{41327767}$.
Thus, we have
\begin{equation}\label{m3ex}
M_{3} =
\bpm
1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 2 & 0 \\
0 & 1 & 0 & 0 & 0 & 2 & 2 & 0 & 5 & 0 \\
0 & 0 & 1 & 0 & 2 & 0 & 0 & 5 & 0 & 22 \\
1 & 0 & 0 & 2 & 0 & 5 & -1 & -2 & 13 & 3 \\
0 & 0 & 2 & 0 & 5 & 0 & -2 & 13 & 3 & \frac{894}{13} \\
1 & 2 & 0 & 5 & 0 & 22 & 13 & 3 & \frac{894}{13} & \frac{336}{13} \\
0 & 2 & 0 & -1 & -2 & 13 & 178 & 139 & 159 & \frac{1657}{13} \\
0 & 0 & 5 & -2 & 13 & 3 & 139 & 159 & \frac{1657}{13} &\frac{4298}{13}\\
2 & 5 & 0 & 13 & 3 & \frac{894}{13} & 159 & \frac{1657}{13} & \frac{4298}{13} &
r \\
0 & 0 & 22 & 3 & \frac{894}{13}& \frac{336}{13} & \frac{1657}{13}
& \frac{4298}{13} & r & \gamma
\epm.
\end{equation}
It is straightforward to check that $M_{3}$ is positive, recursively generated, and recursively determinate, with $M_{2}\succ 0$,
$\text{rank}~M_{3} = 7$ and column
dependence relations
\begin{equation}\label{X3}
X^{3}= p(X,Y):= 40\cdot 1 -24X+4Y-53X^{2}-2XY+13Y^{2},
\end{equation}
\begin{equation}\label{X2Y}
X^{2}Y = t(X,Y):= 35\cdot 1 -22X-Y-46X^{2}+3XY+11Y^{2},
\end{equation}
and
\begin{equation}\label{Y3}
Y^{3}= q(X,Y):= d_{1}\cdot 1+ d_{2} X + d_{3}Y+ d_{4}X^{2}+
d_{5}XY+d_{6}Y^{2}+d_{7}XY^{2},
\end{equation}
where
$d_{1}= \frac{3(487658-1651r)}{1447}$,
$d_{2}= \frac{3(-342075+1157r)}{1447}$,
$d_{3}= \frac{2(-2131598+6591r)}{18811}$,
$d_{4}= \frac{-2000094+6773r}{1447}$,
$d_{5}= \frac{2338519-6591r}{18811}$,
$d_{6}= \frac{2(-316575+1079r)}{1447}$,
$d_{7}= \frac{-48015+169r}{1447}$.
Thus, $M_{3}$ satisfies all of the hypotheses of Theorem \ref{main},
except that (\ref{X2Y}) is an ``extra" dependence relation
(not a linear combination of the relations defined in (\ref{X3})
and (\ref{Y3})). \ We claim that $M_{3}$ does not admit a
moment matrix extension block $B(4)$ such that
$\bpm
M_{3} & B(4)
\epm $
is recursively generated.
Indeed, if such a block existed, then in the column space of
$\bpm
M_{3} & B(4)
\epm $
we would have
$X^{3}Y = (yp)(X,Y) :=40Y -24XY+4Y^{2}-53X^{2}Y-2XY^{2}+13Y^{3}$
and also
$X^{3}Y = (xt)(X,Y):= 35X -22X^{2}-XY-46X^{3}+3X^{2}Y+11XY^{2}$.
A calculation shows that
$\langle (yp)(X,Y)-(xt)(X,Y),XY^{2}\rangle=
\frac{-49462+169r}{13}$, so for $r\not = \frac{49462}{169}$,
$X^{3}Y$ is not well-defined. \ Thus, the conclusions of
Theorem \ref{main} do not hold for $M_{3}$ (and thus there is no representing measure). \qed
\end{example}
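The arithmetic behind the obstruction can be double-checked with exact rational arithmetic. \ The sketch below re-computes $\langle (yp)(X,Y)-(xt)(X,Y),XY^{2}\rangle$ using only the $XY^{2}$ row of $M_{3}$ in (\ref{m3ex}), treating $r$ as a parameter.

```python
from fractions import Fraction as F

def row_xy2(r):
    # Row XY^2 of M_3 in the example, indexed by the columns
    # 1, X, Y, X^2, XY, Y^2, X^3, X^2Y, XY^2, Y^3 (beta_{15} = r is a parameter).
    return {'1': F(2), 'X': F(5), 'Y': F(0), 'X2': F(13), 'XY': F(3),
            'Y2': F(894, 13), 'X3': F(159), 'X2Y': F(1657, 13),
            'XY2': F(4298, 13), 'Y3': F(r)}

def obstruction(r):
    """<(yp)(X,Y) - (xt)(X,Y), XY^2>: the two candidate definitions of
    X^3Y agree in this row only when this value vanishes."""
    v = row_xy2(r)
    # (yp)(X,Y) = 40Y - 24XY + 4Y^2 - 53X^2Y - 2XY^2 + 13Y^3
    yp = 40*v['Y'] - 24*v['XY'] + 4*v['Y2'] - 53*v['X2Y'] - 2*v['XY2'] + 13*v['Y3']
    # (xt)(X,Y) = 35X - 22X^2 - XY - 46X^3 + 3X^2Y + 11XY^2
    xt = 35*v['X'] - 22*v['X2'] - v['XY'] - 46*v['X3'] + 3*v['X2Y'] + 11*v['XY2']
    return yp - xt

for r in (0, 1, 100):
    assert obstruction(r) == (F(-49462) + 169*F(r)) / 13
assert obstruction(F(49462, 169)) == 0  # the single value of r with no obstruction
```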
By contrast with the preceding example, we next show
that if $M_{d} \in RD$, with all column dependence relations
of strictly lower degree, then $M_{d}$ does admit an $RG$
extension.
\begin{theorem} \label{RDnew}
Suppose $M_{d}$ is positive and recursively generated,
and satisfies (\ref{Xrec})-(\ref{Yrec}). \ If each column relation in $M_{d}$ can be expressed as
$X^{i}Y^{j}=r(X,Y)$ with $deg~r<i+j$, then $M_{d}$
admits a unique $RG$ extension.
\end{theorem}
We present the proof of Theorem \ref{RDnew} in Section \ref{PROOF2}. \ Finally, we note that in applying the algorithm, Theorem
\ref{rdext} or Theorem \ref{RDnew} may apply at some extension steps, but not at others. \ Consider \cite[Example 4.15]{finitevariety},
which concerns a recursively determinate $M_{5}$ with $n=m=d=5$, $deg~p=5$, $deg~q=4$. \ The moment matrix $M_{5}$ satisfies the
hypotheses of Theorem \ref{rdext} (with the roles of $p$ and $q$ reversed). \ The $RG$ extension $M_{6}$ is positive semidefinite and
satisfies the hypotheses of Theorem \ref{rdext}. \ The $RG$ extension $M_{7}$ is also positive semidefinite, but has a new column
relation, $X^3Y^4=r(X,Y) \; (deg~r=6)$, that is not recursively determined from $X^5=p(X,Y)$ or $Y^5=q(X,Y)$. \ Thus, Theorem \ref{rdext}
does not apply to $M_{7}$, nor does Theorem \ref{RDnew} (since $deg~p=5=n$). \ Nevertheless, in this case, when the algorithm is applied
to $M_{7}$, a flat extension $M_{8}$ (and a measure) results.
\section{An extension sequence that fails at the second stage}\label{Sect3}
\setcounter{equation}{0}
Recall that
in the most important case of recursive determinacy,
a positive, flat $M_{d}$ admits unique positive,
recursively generated extensions of all orders, $M_{d+1},\ldots,
M_{d+k},\ldots$, leading to a unique representing measure. \
Further, in all of the examples of \cite{tcmp3}, \cite{tcmp11} and \cite{finitevariety}, when a positive,
recursively generated,
recursively
determinate $M_{d}$ fails to have a representing measure, it is
because it fails to admit a positive, recursively generated
extension $M_{d+1}$. \ These results suggest the question as to whether
a positive, recursively generated, recursively determinate $M_{d}$
which admits a positive, recursively generated $M_{d+1}$ necessarily
admits positive,
recursively generated extensions of all orders
(and thus a representing measure) \cite[Question 4.19]{finitevariety}. \
In this section we provide a negative answer to this question. \
In the sequel we construct a positive, recursively
generated, recursively determinate
$M_{4}(\beta^{(8)})$ which admits a positive, recursively generated extension
$M_{5}$, but
such that $M_{5}$
fails to admit a positive, recursively
generated extension $M_{6}$. \ It then follows
from the Bayer-Teichmann Theorem that $\beta^{(8)}$ has no
representing measure.
We define $M_{4}$ by defining its component blocks in the
decomposition
\begin{equation}\label{m4blocks}
M_{4} =
\bpm
M_{3} & B(4) \\
B(4)^{T} & C(4)
\epm.
\end{equation}
We begin by setting $\beta_{00}= \beta_{20}=\beta_{02}= \beta_{22} = 1$,
$\beta_{40}=\beta_{04} = \beta_{42}=\beta_{24}=2$, $\beta_{60}=
\beta_{06} = 5$, and all other moments of degree at most $6$ equal to $0$, so that
\begin{equation}\label{m3}
M_{3} =
\bpm
1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 2 \\
1 & 0 & 0 & 2 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 2 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & 0 & 5 & 0 & 2 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & 2 \\
0 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & 2 & 0 \\
0 & 0 & 2 & 0 & 0 & 0 & 0 & 2 & 0 & 5
\epm.
\end{equation}
We next set
\begin{equation}\label{B4}
B(4) =
\bpm
2 & 0 & 1 & 0 & 2 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
5 & 0 & 2 & 0 & 2 \\
0 & 2 & 0 & 2 & 0 \\
2 & 0 & 2 & 0 & 5 \\
a & b & 0 & 0 & 0 \\
b & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & g \\
0 & 0 & 0 & g & h
\epm,
\end{equation}
where $\beta_{70}=a$, $\beta_{61}=b$, $\beta_{16}=g$, $\beta_{07}=h$,
and all other degree 7 moments equal 0.
Let
\begin{equation}\label{p}
p(x,y):= ax^{3} + b x^{2}y+ 3x^{2}-by-2ax-1
\end{equation}
and
\begin{equation}\label{q}
q(x,y):= gxy^{2}+hy^{3}+ 3y^{2}-2hy-gx-1,
\end{equation}
so that in the column space of
$ \bpm
M_{3} & B(4)
\epm $, we have the relations
\begin{equation}\label{x4}
X^{4}= p(X,Y)
\end{equation}
and
\begin{equation}\label{y4}
Y^{4}=q(X,Y),
\end{equation}
and $\text{rank}~
\bpm
M_{3} & B(4)
\epm = 13$.
We complete the definition of a
recursively determinate $M_{4}$
by extending the relations
(\ref{x4}) and (\ref{y4})
to the columns of
$
\bpm
B(4)^{T} & C(4)
\epm,$ leading to
\begin{equation}\label{C4}
C(4) =
\bpm
13+a^{2}+b^{2} & ab & 5 & 0 & 4 \\
ab & 5 & 0 & 4 & 0 \\
5 & 0 & 4 & 0 & 5 \\
0 & 4 & 0 & 5 & gh \\
4 & 0 & 5 & gh & 13+g^{2}+h^{2}
\epm.
\end{equation}
Since $M_{3}\succ 0$ (positive and invertible), we see that
$M_{4} \succeq 0$ with rank 13
if and only if
$\Delta(4):= C(4) - B(4)^{T}M_{3}^{-1}B(4)\succ 0$. \ In view of
(\ref{x4}) and (\ref{y4}), this is equivalent to the
positivity of the compression of $\Delta(4)$ to rows and
columns indexed by $X^{3}Y$, $X^{2}Y^{2}$, $XY^{3}$, i.e.,
\begin{equation}\label{Delta4}
[\Delta(4)]_{\{X^{3}Y,X^{2}Y^{2},XY^{3}\}} \equiv
\bpm
1-b^{2} & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1-g^{2}
\epm \succ 0.
\end{equation}
Thus, if $b$ and $g$ satisfy
$1-b^{2}>0$ and $1-g^{2}>0$, then
$M_{4}$ is positive, recursively generated, and recursively
determinate, with $\text{rank}~M_{4} = 13$, so $M_{4}$
satisfies the hypotheses of Theorem \ref{main}.
We next seek to extend $M_{4}$ to a positive
and recursively generated $M_{5}$.
In view of (\ref{x4}) and (\ref{y4}), this can
only be accomplished by defining
\begin{equation}\label{x5}
X^{5} := (xp)(X,Y)
\end{equation}
and
\begin{equation}\label{y5}
Y^{5} := (yq)(X,Y).
\end{equation}
Theorem \ref{main} implies
that
the resulting
$B(5)$ is well-defined and
satisfies $Ran~B(5)\subseteq Ran~M_{4}$,
so there exists $W$ satisfying
$B(5) = M_{4}W$. \ A calculation now shows
that if we define $C(5)$ via
(\ref{x5}) and (\ref{y5})
(as we must to preserve recursiveness),
then
$M_{5}\succeq 0$ if and only if
$$\Delta(5)\equiv C(5) -B(5)^{T}W =
\bpm
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{-1+2b^{2}}{-1+b^{2}} & bg & 0 & 0 \\
0 & 0 & bg & \frac{-1+2g^{2}}{-1+g^{2}} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\epm \succeq 0.
$$
Thus,
using
nested determinants, and since $b^{2}<1$, we see that
$M_{5}$
is positive and recursively generated, with
$\text{rank}~M_{5} = 15$
if and only if
\begin{equation}\label{btest}
b^{2} < \frac{1}{2}
\end{equation}
and
\begin{equation}\label{bgtest}
1-2b^{2}-2g^{2}+3b^{2}g^{2}+b^{4}g^{2}+b^{2}g^{4}-b^{4}g^{4} > 0.
\end{equation}
For example, setting $b=g= \frac{1}{4}$,
the expression in (\ref{bgtest}) equals $\frac{49951}{65536} (>0)$,
so it follows that $M_{5}$ is positive and recursively
generated, with $\text{rank}~M_{5} = 15$, whence $M_{5}$ satisfies
the conditions of Theorem \ref{main}.
With these values for $b$ and $g$ (or using other
appropriate values), we next attempt to define
a positive and recursively generated extension $M_{6}$.
This can only be done by defining $X^{6}:= (x^{2}p)(X,Y)$
and $Y^{6}:= (y^{2}q)(X,Y)$.
Theorem \ref{main} implies that the resulting $B(6)$ is well-defined
and that there is a matrix $V$ such that $B(6) = M_{5}V$.
Further, $C(6)$ is uniquely defined via the
preceding column relations. \ $M_{6}$ as thus defined is
recursively generated (by construction), but we will show that
it need not be positive. \
Indeed, a calculation shows that $\Delta(6)
\equiv C(6)- B(6)^{T}V$
is identically 0 except perhaps for the element
in the row and column indexed by
$X^{3}Y^{3}$ (the row 4, column 4 element), which is equal to
$$\frac{(1-3b^{2}+b^{4}-ab^{2}g+ab^{4}g+bh-2b^{3}h)
(-1-ag+3g^{2}+2ag^{3}-g^{4}+bg^{2}h-bg^{4}h)}
{-1+2b^{2}+2g^{2}-3b^{2}g^{2}-b^{4}g^{2}-b^{2}g^{4}+b^{4}g^{4}}.$$
Note that the denominator of the preceding expression is the
negative of the expression in (\ref{bgtest}),
and is thus strictly negative.
Thus $M_{6}$ is positive if and only if
\begin{equation}\label{m6test}
\eta :=(1-3b^{2}+b^{4}-ab^{2}g+ab^{4}g+bh-2b^{3}h)
(-1-ag+3g^{2}+2ag^{3}-g^{4}+bg^{2}h-bg^{4}h)\le 0.
\end{equation}
With $b=g=\frac{1}{4}$, we have
$$\eta = \frac{(-836+15a-224 h)(836+224a-15h)}{1048576}.$$
If we choose $a$ and $h$ so that $\eta=0$, then $M_{6}$ is a
flat extension of $M_{5}$, and $\beta \equiv \beta^{(8)}$ has
a $15$-atomic representing measure.
If we choose $a$ and $h$ so that $\eta <0$, then $M_{6}$ is
positive with rank 16, and since, in Corollary \ref{flatext}, $n=m=4$ and
$d=6$, it follows that $M_{6}$ has a flat extension $M_{7}$.
However, if we choose $a$ and $h$ so that $\eta>0$
(e.g., with
$h=0$ and $a>\frac{836}{15}$), then
$M_{6}$ is not positive, whence $\beta$ has no representing measure.
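The numerical claims above are easy to verify with exact arithmetic. \ The following sketch re-checks the value of (\ref{bgtest}) at $b=g=\frac{1}{4}$ and the displayed factorization of $\eta$, at a few arbitrary sample values of $a$ and $h$.

```python
from fractions import Fraction as F

def bgtest(b, g):
    # Left-hand side of the rank-15 condition:
    # 1 - 2b^2 - 2g^2 + 3b^2g^2 + b^4g^2 + b^2g^4 - b^4g^4
    return 1 - 2*b**2 - 2*g**2 + 3*b**2*g**2 + b**4*g**2 + b**2*g**4 - b**4*g**4

def eta(a, b, g, h):
    # Product of the two factors whose sign decides positivity of M_6.
    f1 = 1 - 3*b**2 + b**4 - a*b**2*g + a*b**4*g + b*h - 2*b**3*h
    f2 = -1 - a*g + 3*g**2 + 2*a*g**3 - g**4 + b*g**2*h - b*g**4*h
    return f1 * f2

b = g = F(1, 4)
assert bgtest(b, g) == F(49951, 65536)   # > 0, so M_5 is positive

def eta_factored(a, h):
    # Displayed factorization of eta at b = g = 1/4.
    return (-836 + 15*a - 224*h) * (836 + 224*a - 15*h) / F(1048576)

# The two expressions agree (checked here at sample rational points).
for a, h in [(F(0), F(0)), (F(56), F(0)), (F(-3), F(7, 2))]:
    assert eta(a, b, g, h) == eta_factored(a, h)

assert eta_factored(F(60), F(0)) > 0     # h = 0, a > 836/15: no representing measure
```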
\section{Proof of Theorem \ref{main}} \label{PROOF}
\setcounter{equation}{0}
The proof of Theorem \ref{main} entails two main steps: (i) the construction of the block $B(d+1)$ from the column relations (\ref{X}) and
(\ref{Y}) so that $(M_d \; B(d+1))$ is recursively generated; and (ii) the verification that $Ran~B(d+1) \subseteq Ran~M_d$. \
{\bf STEP (i)}: \ Step (i) will follow from a series of five auxiliary results (Lemmas \ref{old=new} - \ref{righthankel}). \
To begin the formal definition of $B(d+1)$, note that blocks $B[0,d+1],\ldots,B[d-1,d+1]$ are completely defined in terms
of moments in $M_{d}$. \ Indeed, for $0\le i\le d+1$, $0\le j\le d-1$, and $h,k\ge 0$ with $h+k=j$, the component
of $B[j,d+1]$ in row $X^{h}Y^{k}$ and column $X^{i}Y^{d+1-i}$, which we denote by $\langle X^{i}Y^{d+1-i},X^{h}Y^{k}\rangle$,
must equal $\beta_{i+h,d+1-i+k}$. \ Note also that for $i\ge n$,
the above component is alternately defined by (\ref{Xton_rec}),
so we must show that the two definitions agree.
\begin{lemma}\label{old=new}
For $0\le f \le d+1-n$ and $i,j\ge 0$ with $i+j\le d-1$,
the entry in column $X^{n+f}Y^{d+1-n-f}$, row $X^{i}Y^{j}$, as defined
by (\ref{Xton_rec}), coincides with the moment inherited
from $M_{d}$ by moment matrix structure, $\beta_{n+f+i,d-n-f+j+1}$.
\end{lemma}
\begin{proof} Consider first the case when $d-n-f\ge 0$.
From (\ref{Xton_rec}), we have
\begin{eqnarray*}
X^{n+f}Y^{d+1-n-f} &:=& (x^{f}y^{d+1-n-f}p)(X,Y) \\
&\equiv& \sum \limits_{r,s\ge 0, r+s\le n-1}^{} a_{rs}X^{r+f}Y^{s+d+1-n-f} \quad (0\le f\le d+1-n),
\end{eqnarray*}
so
\begin{eqnarray*}
\langle X^{n+f}Y^{d+1-n-f},X^{i}Y^{j}\rangle =\sum a_{rs}\langle X^{r+f}Y^{s+d+1-n-f}, X^{i}Y^{j}\rangle.
\end{eqnarray*}
Since $r+f+s+d+1-n-f\le d$, $s+d+1-n-f\ge 1$ and $i+j\le d-1$,
using the moment matrix structure of the blocks of $M_{d}$ we may express
the last sum as
$$
\sum a_{rs}\langle X^{r+f}Y^{s+d-n-f}, X^{i}Y^{j+1}\rangle .
$$
Now (\ref{Xrec}) implies that in $M_{d}$ the latter expression is equal to
$$
\langle X^{n+f}Y^{d-n-f},X^{i}Y^{j+1}\rangle
= \beta_{n+f+i,d-n-f+j+1}.
$$
For the remaining case $f=d+1-n$ and $i+j\le d-1$,
\begin{eqnarray*}
\langle X^{d+1},X^{i}Y^{j}\rangle &=& \sum a_{rs}\langle X^{r}Y^{s}X^{d+1-n}, X^{i}Y^{j}\rangle \\
&=&\sum a_{rs}\langle X^{r}Y^{s}X^{d-n}, X^{i+1}Y^{j}\rangle \\
&=&\langle X^{d},X^{i+1}Y^{j}\rangle \\
&=& \beta_{d+i+1,j}.
\end{eqnarray*}
\end{proof}
We have just verified that in the left recursive band,
in blocks of degree at most $d-1$, each column element
coincides with the corresponding ``old'' moment from $M_{d}$.
Old moments are also used to {\it define} the central (nonrecursive)
band of columns in blocks of degree at most $d-1$. \ We next use these left and central bands,
together with (\ref{Y^m_rec}), to show that the column elements in the right recursive band,
in blocks of degree at most $d-1$,
also agree with corresponding old moments.
\begin{lemma}\label{oldright}
For $0\le k\le d+1-m$, $i,j\ge0$, $i+j\le d-1$,
column $X^{d+1-m-k}Y^{m+k}$, as defined by (\ref{Y^m_rec}), satisfies
$\langle X^{d+1-m-k}Y^{m+k},X^{i}Y^{j}\rangle =
\beta_{i+d+1-m-k,m+k+j}$.
\end{lemma}
\begin{proof}
The proof is by induction on $k$. \ For $k=0$, we show that
$\langle X^{d+1-m}Y^{m},X^{i}Y^{j}\rangle =
\beta_{i+d+1-m,m+j}$. \
From (\ref{Y^m_rec}), we have
\begin{eqnarray*}
\langle X^{d+1-m}Y^{m},X^{i}Y^{j}\rangle &=& \sum \limits_{a,b\ge 0, a+b\le m-1}
\alpha_{ab} \langle X^{d+1-m+a}Y^{b},X^{i}Y^{j}\rangle \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m}
\gamma_{ce} \langle X^{c+d+1-m}Y^{e},X^{i}Y^{j}\rangle.
\end{eqnarray*}
Since $a+b<m$, in $M_{d}$ we have
$$ \langle X^{d+1-m+a}Y^{b},X^{i}Y^{j}\rangle =
\beta_{d+1-m+a+i,b+j}.$$
Since $e<m$, $ \langle X^{c+d+1-m}Y^{e},X^{i}Y^{j}\rangle$
is in either the left or central band, and thus equals the old moment
$\beta_{c+d+1-m+i,e+j}$.
Now
$$
\langle X^{d+1-m}Y^{m},X^{i}Y^{j}\rangle =
\displaystyle \sum \limits_{a,b\ge 0, a+b\le m-1}
\alpha_{ab} \beta_{d+1-m+a+i,b+j}+
\displaystyle \sum \limits_{c,e\ge 0, c+e=m, e<m}
\gamma_{ce} \beta_{c+d+1-m+i,e+j}.
$$
In $M_{d}$, the latter expression equals
\begin{eqnarray*}
\sum \alpha_{ab} \langle X^{d-m+a}Y^{b},X^{i+1}Y^{j}\rangle + \sum \gamma_{ce} \langle X^{c+d-m}Y^{e},X^{i+1}Y^{j}\rangle &=& \langle X^{d-m}Y^{m},X^{i+1}Y^{j}\rangle \\
&=& \beta_{d-m+i+1,m+j},
\end{eqnarray*}
as desired. \ We next assume the result is true for $0,\ldots,k-1$. \ Consider first
the case when $k<d+1-m$.
We have
\begin{eqnarray*}
\langle X^{d+1-m-k}Y^{m+k},X^{i}Y^{j}\rangle &=&
\sum \limits_{a,b\ge 0, a+b\le m-1}
\alpha_{ab} \langle X^{d+1-m-k+a}Y^{b+k},X^{i}Y^{j}\rangle \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m}
\gamma_{ce} \langle X^{c+d+1-m-k}Y^{e+k},X^{i}Y^{j}\rangle.
\end{eqnarray*}
The term
$\langle X^{d+1-m-k+a}Y^{b+k},X^{i}Y^{j}\rangle$
is a component of $M_{d}$, and thus equals the corresponding moment.
Since $e+k\le m+ (k-1)$, $ X^{c+d+1-m-k}Y^{e+k}$ is, by induction,
a column for which the elements of row-degree $i+j$ are old moments.
Thus,
\begin{eqnarray*}
\langle X^{d+1-m-k}Y^{m+k},X^{i}Y^{j}\rangle = \sum \alpha_{ab} \beta_{d+1-m-k+a+i,b+k+j}+ \sum \gamma_{ce} \beta_{c+d+1-m-k+i,e+k+j}.
\end{eqnarray*}
In $M_{d}$, the last expression equals
\begin{eqnarray*}
\sum \alpha_{ab} \langle X^{d+a-m-k}Y^{b+k},X^{i+1}Y^{j}\rangle \quad \quad \\
\quad \quad + \sum \gamma_{ce} \langle X^{c+d-m-k}Y^{e+k},X^{i+1}Y^{j}\rangle
&=& \langle X^{d-m-k}Y^{m+k},X^{i+1}Y^{j}\rangle \\
&=& \beta_{d-m-k+i+1,m+k+j}.
\end{eqnarray*}
Finally, we consider the case $k=d+1-m$.
We have
\begin{eqnarray*}
\langle Y^{d+1},X^{i}Y^{j}\rangle &=& \sum \limits_{a,b\ge 0, a+b\le m-1}
\alpha_{ab} \langle X^{a}Y^{b+d+1-m},X^{i}Y^{j}\rangle \\
&& \quad + \sum \limits_{c,e\ge 0, c+e=m, e<m}
\gamma_{ce} \langle X^{c}Y^{e+d+1-m},X^{i}Y^{j}\rangle.
\end{eqnarray*}
Since $e<m$, we have $c\ge1$, so $X^{c}Y^{e+d+1-m}$ is to the
left of $Y^{d+1}$, i.e., $c=d+1-m-k^{\prime}$ for
$k^{\prime} = d+1-m-c<k$. \ Thus, by induction,
\begin{eqnarray*}
\langle Y^{d+1},X^{i}Y^{j}\rangle = \sum \alpha_{ab} \beta_{a+i,b+d+1-m+j}+ \sum \gamma_{ce} \beta_{c+i,e+d+1-m+j}.
\end{eqnarray*}
In $M_{d}$, the last expression equals
\begin{eqnarray*}
\sum \alpha_{ab} \langle X^{a}Y^{b+d-m},X^{i}Y^{j+1}\rangle + \sum \gamma_{ce} \langle X^{c}Y^{e+d-m},X^{i}Y^{j+1}\rangle &=&
\langle Y^{d},X^{i}Y^{j+1}\rangle \\
&=& \beta_{i,d+j+1},
\end{eqnarray*}
as desired.
\end{proof}
To complete the definition of $B(d+1)$ we must define $B[d,d+1]$.
Within this proposed block, we first use (\ref{Xton_rec}) to define the left
recursive band, $X^{d+1},\ldots,X^{n}Y^{d+1-n}$.
Note that between the end of the left band, $X^{n}Y^{d+1-n}$,
and the beginning of the right band, $X^{d+1-m}Y^{m}$,
there is a central band of $n+m-d-2$ columns; set $\delta := n+m-d-1$.
In row $X^{d}$, each of the components in the central columns,
$\langle X^{n-1}Y^{d+2-n},X^{d}\rangle,\ldots,\langle X^{d+2-m}Y^{m-1},X^{d}\rangle$, corresponds via a cross-diagonal to a
component of column $X^{n}Y^{d+1-n}$ (whose value is known from (\ref{Xton_rec})),
i.e.,
$$
\langle X^{n-j}Y^{d+1-n+j},X^{d}\rangle
=\langle X^{n}Y^{d+1-n},X^{d-j}Y^{j}\rangle \ (1\le j\le m+n-d-2).
$$
We may thus use (\ref{Y^m_rec}) to define
$\langle X^{d+1-m}Y^{m},X^{d}\rangle$, and we extend
the latter value along the central-band section of the cross-diagonal to which it belongs. \
Next, in row $X^{d-1}Y$, we use this value with (\ref{Y^m_rec})
to define $\langle X^{d+1-m}Y^{m},X^{d-1}Y\rangle$, and
we extend this value along
the central-band section of
its cross-diagonal.
Proceeding in this way, we completely define column $X^{d+1-m}Y^{m}$
and ensure that it is Hankel with respect to the central band.
Finally, we use (\ref{Y^m_rec}) to define column $X^{d-m}Y^{m+1}$, and, successively,
$X^{d-m-1}Y^{m+2},\ldots,Y^{d+1}$. \ This completes the definition
of a proposed block $B[d,d+1]$. \ However, to ensure that it is
well-defined as a moment block, we must check that for a cross-diagonal which intersects
columns $X^{n}Y^{d+1-n}$ and $X^{d+1-m}Y^{m}$, the components of the
cross-diagonal in these columns agree in value, i.e., the values arising
from (\ref{Xton_rec}) are consistent with those arising from (\ref{Y^m_rec}). \ More generally,
we need to show that the block we have defined is constant on cross-diagonals.
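To fix ideas, here is a small hypothetical instance (illustrative only, not tied to a particular moment sequence): take $d=5$, $n=4$, $m=4$, so $\delta=n+m-d-1=2$. \ The columns of $B[5,6]$ are $X^{6},X^{5}Y,\ldots,Y^{6}$; the left recursive band is $X^{6},X^{5}Y,X^{4}Y^{2}$, the right recursive band is $X^{2}Y^{4},XY^{5},Y^{6}$, and the central band consists of the single column $X^{3}Y^{3}$ (indeed, $n+m-d-2=1$). \ The consistency condition to be verified then reads
$$
\langle X^{2}Y^{4},X^{5-k}Y^{k}\rangle = \langle X^{4}Y^{2},X^{3-k}Y^{2+k}\rangle \quad (0\le k\le 3),
$$
i.e., the values produced by (\ref{Xton_rec}) and by (\ref{Y^m_rec}) must agree on each cross-diagonal meeting both bands.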
To show that $B[d,d+1]$ is well-defined and Hankel, we begin
with the
following general result concerning
adjacent columns that are recursively determined from the same
column dependence relation.
Suppose in $Col~M_{d}$ there is a dependence relation
$X^{c}Y^{e} = p(X,Y)$, where $c+e=d$ and
$p(x,y) \equiv \displaystyle \sum \limits_{a,b\ge 0, a+b\le d-1}^{}
\alpha_{ab}x^{a}y^{b}\in \mathcal{P}_{d-1}$.
Then the degree-$(d+1)$ columns defined by
\begin{eqnarray*}
X^{c+1}Y^{e} \equiv (xp)(X,Y) := \sum \limits_{a,b\ge 0, a+b\le d-1}^{}\alpha_{ab}X^{a+1}Y^{b}
\end{eqnarray*}
and
\begin{eqnarray*}
X^{c}Y^{e+1} \equiv (yp)(X,Y) := \sum \limits_{a,b\ge 0, a+b\le d-1}^{}\alpha_{ab}X^{a}Y^{b+1}
\end{eqnarray*}
are Hankel with respect to each other,
as follows.
\begin{lemma}\label{hankel}
For $i,j\ge 0$, $i+j\le d$, $j>0$,
$$\langle X^{c+1}Y^{e},X^{i}Y^{j} \rangle
= \langle X^{c}Y^{e+1},X^{i+1}Y^{j-1} \rangle.$$
\end{lemma}
\begin{proof}
We have
$$\langle X^{c+1}Y^{e},X^{i}Y^{j} \rangle
=
\displaystyle \sum \limits_{a,b\ge 0, a+b\le d-1}^{}
\alpha_{ab} \langle X^{a+1}Y^{b}, X^{i}Y^{j} \rangle,$$
and since each row and column in the last sum
has degree at most $d$, relative to $M_{d}$ we may rewrite this sum as
$$ \displaystyle \sum \limits_{a,b\ge 0, a+b\le d-1}^{}
\alpha_{ab} \langle X^{a}Y^{b+1}, X^{i+1}Y^{j-1} \rangle
= \langle X^{c}Y^{e+1},X^{i+1}Y^{j-1} \rangle.$$
This completes the proof.
\end{proof}
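As a concrete (hypothetical) illustration of Lemma \ref{hankel}: if in $Col~M_{5}$ one has the relation $X^{3}Y^{2}=p(X,Y)$ with $p(x,y)=\sum_{a,b\ge 0,\,a+b\le 4}\alpha_{ab}x^{a}y^{b}$, then, taking $i=2$, $j=3$,
$$
\langle X^{4}Y^{2},X^{2}Y^{3}\rangle = \sum \alpha_{ab}\,\beta_{a+3,b+3} = \langle X^{3}Y^{3},X^{3}Y^{2}\rangle,
$$
since inside $M_{5}$ each term satisfies $\langle X^{a+1}Y^{b},X^{2}Y^{3}\rangle=\beta_{a+3,b+3}=\langle X^{a}Y^{b+1},X^{3}Y^{2}\rangle$.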
It follows immediately from Lemma \ref{hankel} that the left
recursive band in $B[d,d+1]$ is constant on cross-diagonals.
We next check that if an element of a column in
the non-recursive central band can
be reached on a cross-diagonal which intersects both columns
$X^{n}Y^{d+1-n}$ (at the edge of the left recursive band)
and $X^{d+1-m}Y^{m}$ (at the edge of the right recursive band),
then the values obtained from
both of these columns agree. \ This is the substance of the following
lemma.
\begin{lemma}\label{Hankellemma} For $0\le k\le 2d+1-n-m$,
\begin{equation}\label{hankelformula} \langle X^{d+1-m}Y^{m},X^{d-k}Y^{k} \rangle
= \langle X^{n}Y^{d+1-n},X^{d-\delta-k}Y^{\delta + k} \rangle.
\end{equation}
\end{lemma}
\begin{proof}
The proof is by induction on $k$. \ We begin with the base case, $k=0$, and seek to show that
$\langle X^{d+1-m}Y^{m},X^{d} \rangle
= \langle X^{n}Y^{d+1-n},X^{d-\delta}Y^{\delta} \rangle $ \ (recall that $\delta:=n+m-d-1$). \
Using (\ref{Y^m_rec}), we may express
$ \langle X^{d+1-m}Y^{m},X^{d} \rangle$ as
\begin{equation}\label{rightbase}
\displaystyle \sum \limits_{a,b\ge 0, a+b\le m-1}
\alpha_{ab} \langle X^{d+1-m+a}Y^{b},X^{d}\rangle +
\displaystyle \sum \limits_{c,e\ge 0, c+e=m, e<m}
\gamma_{ce} \langle X^{d+1-e}Y^{e},X^{d}\rangle.
\end{equation}
Note that $ \langle X^{d+1-m+a}Y^{b},X^{d}\rangle$ is a
component of $M_{d}$; further, since $e<m$,
$ \langle X^{c+d+1-m}Y^{e},X^{d}\rangle$ is the
endpoint of a cross-diagonal that lies entirely in the
left and central bands, and is thus constant. \ Therefore, we may
rewrite (\ref{rightbase}) as
$$
\displaystyle \sum \limits_{a,b\ge 0, a+b\le m-1}
\alpha_{ab} \langle X^{d}, X^{d+1-m+a}Y^{b}\rangle +
\displaystyle \sum \limits_{c,e\ge 0, c+e=m, e<m}
\gamma_{ce} \langle X^{d+1}, X^{d-e}Y^{e} \rangle
$$
\begin{eqnarray*}
&=& \sum \alpha_{ab} \langle \sum a_{rs} X^{d-n+r}Y^{s}, X^{d+1-m+a}Y^{b}\rangle +
\sum \gamma_{ce} \langle \sum a_{rs} X^{d-n+r+1}Y^{s}, X^{d-e}Y^{e} \rangle \\
&=& \sum a_{rs} \sum \alpha_{ab} \langle X^{d-n+r}Y^{s}, X^{d+1-m+a}Y^{b}\rangle +
\sum a_{rs} \sum \gamma_{ce} \langle X^{d-n+r+1}Y^{s}, X^{d-e}Y^{e} \rangle \\
&=& \sum a_{rs} ( \sum \alpha_{ab} \langle X^{d+1-m+a}Y^{b}, X^{d-n+r}Y^{s}\rangle +
\sum \gamma_{ce} \langle X^{d-e}Y^{e}, X^{d-n+r+1}Y^{s} \rangle) \\
&=& \sum a_{rs} \langle \sum \alpha_{ab} X^{d-m+a}Y^{b} +
\sum \gamma_{ce} X^{d-e}Y^{e}, X^{d-n+r+1}Y^{s} \rangle \\
&=& \sum a_{rs} \langle X^{d-m}Y^{m} , X^{d-n+r+1}Y^{s} \rangle.
\end{eqnarray*}
Since $\delta=m+n-d-1$, in $M_{d}$ the last sum is equal to
$$
\sum a_{rs} \langle X^{r}Y^{d+1-n+s}, X^{d-\delta}Y^{\delta} \rangle = \langle X^{n}Y^{d+1-n},X^{d-\delta}Y^{\delta} \rangle,
$$
which completes the proof of the base case.
We assume now that (\ref{hankelformula}) holds for $0,\ldots, k-1$, with $k-1 < 2d+1-n-m$.
To establish (\ref{hankelformula}) for $k$, we consider first the case $d-k \ge n$.
Let us write $\kappa:=\langle X^{d+1-m}Y^{m},X^{d-k}Y^{k} \rangle$ as
\begin{eqnarray} \label{levelk}
\kappa&=& \sum \limits_{a,b\ge 0, a+b\le m-1} \alpha_{ab} \langle X^{d+1-m+a}Y^{b},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m,d+1-e\ge n} \gamma_{ce} \langle X^{d+1-e}Y^{e},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m,d+1-e<n} \gamma_{ce} \langle X^{d+1-e}Y^{e},X^{d-k}Y^{k}\rangle.
\end{eqnarray}
Note that the components in the first sum of (\ref{levelk}) lie in $M_{d}$. \ In the third sum,
since $d+1-e<n$, column $X^{d+1-e}Y^{e}$ is in the middle band, and the component
$\gamma:=\langle X^{d+1-e}Y^{e},X^{d-k}Y^{k}\rangle$
lies on a cross-diagonal $\sigma$ strictly above the cross-diagonal for $\kappa$. \ Either because $\sigma$ does not
intersect column $X^{d+1-m}Y^{m}$, or by induction if it does, we see that $\gamma$ has the same value as
$\langle X^{n}Y^{d+1-n},X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}\rangle$
(on the same cross-diagonal).
Thus (\ref{levelk}) can be expressed as
\begin{eqnarray} \label{levelk_2}
\kappa &=& \sum \limits_{a,b\ge 0, a+b\le m-1} \alpha_{ab} \langle X^{d-k}Y^{k}, X^{d+1-m+a}Y^{b}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m,d+1-e\ge n} \gamma_{ce} \langle X^{n}X^{d+1-e-n}Y^{e},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m,d+1-e<n} \gamma_{ce} \langle X^{n}Y^{d+1-n},X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}\rangle \nonumber \\
&=& \sum \alpha_{ab} \langle X^{n} X^{d-k-n}Y^{k}, X^{d+1-m+a}Y^{b}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \langle X^{n}X^{d+1-e-n}Y^{e},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \langle X^{n}Y^{d+1-n},X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}\rangle \nonumber \\
&=& \sum \alpha_{ab} \sum a_{rs} \langle X^{r+d-k-n}Y^{s+k}, X^{d+1-m+a}Y^{b}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \sum a_{rs} \langle X^{r+d+1-e-n}Y^{s+e},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \sum a_{rs} \langle X^{r}Y^{s+d+1-n},X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}\rangle \nonumber \\
&=& \sum a_{rs}(\sum \alpha_{ab} \langle X^{r+d-k-n}Y^{s+k}, X^{d+1-m+a}Y^{b}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \langle X^{r+d+1-e-n}Y^{s+e},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \langle X^{r}Y^{s+d+1-n},X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}\rangle ).
\end{eqnarray}
Using the symmetry of $M_{d}$ in the first and third inner sums of the last
expression, we may rewrite this
expression as
\begin{eqnarray}\label{level3}
& & \sum a_{rs}(\sum \alpha_{ab} \langle X^{d+1-m+a}Y^{b}, X^{r+d-k-n}Y^{s+k}\rangle + \sum \gamma_{ce} \langle X^{r+d+1-e-n}Y^{s+e},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \langle X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}, X^{r}Y^{s+d+1-n} \rangle).
\end{eqnarray}
In the second inner sum of (\ref{level3}),
$\langle X^{r+d+1-e-n}Y^{s+e},X^{d-k}Y^{k}\rangle$
is a component of $M_{d}$ and thus equals the moment
$\beta_{r+d+1-e-n+d-k,s+e+k}$. \ Since $X^{d-k+r-n}Y^{s+k}$
is a row of degree at most $d-1$, this moment
coincides with
$\langle X^{c+d+1-m}Y^{e},X^{d-k+r-n}Y^{s+k}\rangle$
from the left band of $B[d+r+s-n,d+1]$.
Further, in the third inner sum of (\ref{level3}),
$$
\langle X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}, X^{r}Y^{s+d+1-n} \rangle
$$
is also a component of $M_{d}$, equal to $\beta_{r+d-k-(n-(d+1-e)),s+k+n-(d+1-e)+d+1-n}$, and this moment coincides with $\langle X^{d+1-e}Y^{e},X^{r+d-k-n}Y^{s+k}\rangle$ from the middle band
in $B[d+r+s-n,d+1]$. \ Thus, the expression in (\ref{level3}) can be written as
\begin{eqnarray}\label{level4}
&& \sum a_{rs}(\sum \limits_{a,b\ge 0, a+b\le m-1} \alpha_{ab} \langle X^{d+1-m+a}Y^{b}, X^{r+d-k-n}Y^{s+k}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m,d+1-e\ge n} \gamma_{ce} \langle X^{d+1-m+c}Y^{e},X^{r+d-k-n}Y^{s+k}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m,d+1-e < n} \gamma_{ce} \langle X^{d+1-m+c}Y^{e}, X^{r+d-k-n}Y^{s+k} \rangle),
\end{eqnarray}
which equals
\begin{equation}\label{level5}
\sum a_{rs} \langle X^{d+1-m}Y^{m}, X^{r+d-k-n}Y^{s+k} \rangle.
\end{equation}
Since $ X^{r+d-k-n}Y^{s+k}$
is a row of degree at most $d-1$, Lemma \ref{oldright} implies
that the expression in (\ref{level5}) equals
\begin{eqnarray*}
\sum a_{rs} \beta_{d+1-m+r+d-k-n,m+s+k} &=& \sum a_{rs} \beta_{r+d-\delta-k,s+d+1-n+\delta+k} \\
&=& \sum a_{rs} \langle X^{r}Y^{s+d+1-n}, X^{d-\delta-k}Y^{\delta+k} \rangle \\
&=& \langle X^{n}Y^{d+1-n}, X^{d-\delta-k}Y^{\delta+k} \rangle .
\end{eqnarray*}
This completes the proof of the induction step for (\ref{hankelformula}) when $d-k\ge n$.
We next treat the case when $d-k<n$, which implies $\delta + k\ge m$.
We have
\begin{eqnarray}\label{decomp}
\langle X^{n}Y^{d+1-n}, X^{d-\delta-k}Y^{\delta+k}\rangle &=&
\sum a_{rs} \langle X^{r}Y^{d+1-n+s}, X^{d-\delta-k}Y^{\delta+k}\rangle \nonumber \\
&=& \sum a_{rs} \langle X^{d-\delta-k}Y^{m}Y^{\delta+k-m}, X^{r}Y^{d+1-n+s} \rangle \nonumber \\
&=& \sum a_{rs}(\displaystyle \sum \limits_{a,b\ge 0, a+b\le m-1}\alpha_{ab}\langle X^{a+ d-\delta-k}Y^{b+\delta+k-m}, X^{r}Y^{d+1-n+s} \rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m}\gamma_{ce} \langle X^{c+d-\delta-k}Y^{e+\delta+k-m}, X^{r}Y^{d+1-n+s} \rangle). \nonumber \\
&& \quad
\end{eqnarray}
Note for future reference that all of the matrix components that appear in (\ref{decomp}) come from $M_{d}$.
We now consider
\begin{eqnarray}\label{d2}
\langle X^{d+1-m}Y^{m},X^{d-k}Y^{k} \rangle &=& \sum \limits_{a,b\ge 0, a+b\le m-1} \alpha_{ab} \langle X^{d+1-m+a}Y^{b},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m} \gamma_{ce} \langle X^{d+1-m+c}Y^{e},X^{d-k}Y^{k}\rangle \nonumber \\
&=& \sum \alpha_{ab} \langle X^{d-k}Y^{k},X^{d+1-m+a}Y^{b}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \langle X^{d+1-e}Y^{e},X^{d-k}Y^{k}\rangle \quad \quad
\end{eqnarray}
(using symmetry of $M_d$ in the first sum). \ Since $k-(n-(d-k))$, $d+1-m+a-n+d-k$ $(= a+d-\delta-k)$, and $b+n-(d-k)$
are all nonnegative, by applying the block-Hankel property of $M_{d}$ to the
first sum in (\ref{d2}), we may rewrite the expression in (\ref{d2}) as
\begin{eqnarray}\label{d3}
&&\sum \alpha_{ab} \langle X^{n} Y^{k-(n-(d-k))},X^{d+1-m+a-n+d-k}Y^{b+n-(d-k)}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \langle X^{d+1-e}Y^{e},X^{d-k}Y^{k}\rangle
\end{eqnarray}
\begin{eqnarray}\label{d4}
&=& \sum \alpha_{ab} \sum a_{rs} \langle X^{r} Y^{s+k-(n-(d-k))},X^{d+1-m+a-n+d-k}Y^{b+n-(d-k)}\rangle \nonumber \\
& \quad & + \sum \gamma_{ce} \langle X^{d+1-e}Y^{e},X^{d-k}Y^{k}\rangle,
\end{eqnarray}
and all of the matrix components in the first double sum of (\ref{d4}) are from $M_{d}$.
Comparing the components in the first double sums of
(\ref{decomp}) and (\ref{d4}), we have
\begin{eqnarray*}
\langle X^{a+ d-\delta-k}Y^{b+\delta+k-m},X^{r}Y^{d+1-n+s} \rangle &=& \beta_{a+d-\delta-k+r,b+\delta+k-m+d+1-n+s} \\
&=&\beta_{r+d+1-m+a-n+d-k,s+k-(n-(d-k))+b+n-(d-k)} \\
&=& \langle X^{r} Y^{s+k-(n-(d-k))},X^{d+1-m+a-n+d-k}Y^{b+n-(d-k)}\rangle,
\end{eqnarray*}
so the first double sums of (\ref{decomp}) and (\ref{d4}) are equal.
Let us write the rightmost sum in (\ref{d4}) as
\begin{eqnarray}\label{d5}
&&\sum \limits_{c,e\ge 0, c+e=m,e<m,d+1-e\ge n} \gamma_{ce} \langle X^{n} X^{d+1-e-n}Y^{e},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m,e<m,d+1-e < n}\gamma_{ce} \langle X^{d+1-e}Y^{e},X^{d-k}Y^{k}\rangle.
\end{eqnarray}
In the second sum of (\ref{d5}), since $d+1-e<n$, the component
$\langle X^{d+1-e}Y^{e},X^{d-k}Y^{k}\rangle$ (from the
middle band) has the same value as the component
$\langle X^{n}Y^{d+1-n},X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}\rangle$
on the same cross-diagonal. \ (This is because the cross-diagonal is strictly above that for $ \langle X^{d+1-m}Y^{m},X^{d-k}Y^{k} \rangle $,
so the conclusion follows by definition or induction.) \ We may now write the expression in (\ref{d5}) as
\begin{eqnarray}\label{d6}
&& \sum \limits_{c,e\ge 0, c+e=m,e<m,d+1-e\ge n} \gamma_{ce} \sum a_{rs} \langle X^{r+d+1-e-n}Y^{s+e},X^{d-k}Y^{k}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, e<m,c+e=m,d+1-e < n} \gamma_{ce} \langle X^{n}Y^{d+1-n},X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}\rangle \nonumber \\
&=& \sum \limits_{c,e \ge 0, e<m,c+e=m,d+1-e\ge n} \gamma_{ce} \sum a_{rs} \langle X^{r+d+1-e-n}Y^{s+e},X^{d-k}Y^{k} \rangle \nonumber \\
& \quad & + \sum \limits_{c,e \ge 0, c+e=m, e<m, d+1-e < n} \gamma_{ce} \sum a_{rs} \langle X^{r}Y^{s+d+1-n},X^{d-k-(n-(d+1-e))}Y^{k+n-(d+1-e)}\rangle. \quad \quad
\end{eqnarray}
All of the matrix components in (\ref{d6}) are from $M_{d}$, so
(\ref{d6}) can be expressed as
$$
\sum a_{rs} \sum \limits_{c,e\ge 0,\ c+e=m,\ e<m} \gamma_{ce}\, \beta_{r+d+1-e-n+d-k,s+e+k}.
$$
It is straightforward to check
that this double sum coincides with the second double sum in (\ref{decomp})
(whose matrix components also come entirely from $M_{d}$).
This completes the proof that the second double sums in (\ref{decomp})
and
(\ref{d4}) have the same value, so the expressions in
(\ref{decomp})
and (\ref{d4}) are equal, which completes the proof of the induction
when $d-k<n$. \ Thus, the induction is complete.
\end{proof}
We have shown above that in $B[d,d+1]$ the columns
$X^{d+1},\ldots,X^{d+1-m}Y^{m}$ are well-defined and
Hankel with respect to one another. \ Using (\ref{Y^m_rec}), we also
successively defined columns $X^{d-m}Y^{m+1},\ldots,Y^{d+1}$.
We next show that the columns $X^{d-m+1}Y^{m},\ldots,Y^{d+1}$
are Hankel with respect to each other, so that all of $B[d,d+1]$ has the
Hankel property.
\begin{lemma}\label{righthankel}
For $0\le s \le d+1-m$ and $i,j\ge 0$ with $i+j = d$ and $j>0$, we have
$$ \langle X^{d+1-m-s}Y^{m+s},X^{i}Y^{j}\rangle =
\langle X^{d-m-s}Y^{m+s+1},X^{i+1}Y^{j-1}\rangle.$$
\end{lemma}
\begin{proof}
The proof is by induction on $s\ge 0$. \ For $s=0$, we have
$ \langle X^{d+1-m}Y^{m},X^{i}Y^{j}\rangle $
\begin{equation}\label{baseeqn}
= \displaystyle \sum \limits_{a,b\ge 0, a+b\le m-1}
\alpha_{ab} \langle X^{d+1-m+a}Y^{b},X^{i}Y^{j}\rangle +
\displaystyle \sum \limits_{c,e\ge 0, c+e=m, e<m}
\gamma_{ce} \langle X^{d+1-e}Y^{e},X^{i}Y^{j}\rangle.
\end{equation}
In the first sum, each component is from $M_{d}$. \ In the second sum,
column $X^{d+1-e}Y^{e}$ is strictly to the left
of $X^{d+1-m}Y^{m}$, so it is Hankel with respect to its right successor,
$X^{d-e}Y^{e+1}$.
We may thus rewrite the expression in (\ref{baseeqn}) as
$$\displaystyle \sum \limits_{a,b\ge 0, a+b\le m-1}
\alpha_{ab} \langle X^{d-m+a}Y^{b+1},X^{i+1}Y^{j-1}\rangle +
\displaystyle \sum \limits_{c,e\ge 0, c+e=m, e<m}
\gamma_{ce} \langle X^{d-e}Y^{e+1},X^{i+1}Y^{j-1}\rangle$$
\newline
$= \langle X^{d-m}Y^{m+1},X^{i+1}Y^{j-1}\rangle$.
Assume now that the Hankel property holds through $s-1$ and consider
\begin{eqnarray}\label{level_s}
\langle X^{d+1-m-s}Y^{m+s},X^{i}Y^{j}\rangle &=& \sum \limits_{a,b\ge 0, a+b\le m-1} \alpha_{ab} \langle X^{d+1-m+a-s}Y^{b+s},X^{i}Y^{j}\rangle \nonumber \\
& \quad & + \sum \limits_{c,e\ge 0, c+e=m, e<m} \gamma_{ce} \langle X^{d+1-e-s}Y^{e+s},X^{i}Y^{j}\rangle.
\end{eqnarray}
As above, in the first sum, each component is from $M_{d}$; in the second
sum, each column $X^{d+1-e-s}Y^{e+s}$ is to the left of
$X^{d+1-m-s}Y^{m+s}$, so the Hankel property holds for this
column by induction. \ We may thus write the expression in (\ref{level_s})
as
\begin{eqnarray*}
\sum \limits_{a,b\ge 0,\ a+b\le m-1} \alpha_{ab} \langle X^{d-m+a-s}Y^{b+s+1},X^{i+1}Y^{j-1}\rangle
&+& \sum \limits_{c+e=m, e<m} \gamma_{ce} \langle X^{d-e-s}Y^{e+s+1},X^{i+1}Y^{j-1}\rangle \\
&=& \langle X^{d-m-s}Y^{m+s+1},X^{i+1}Y^{j-1}\rangle,
\end{eqnarray*}
which completes the proof by induction.
\end{proof}
{\bf STEP (ii)}: \ The preceding results show that under the hypotheses of
Theorem \ref{main}, there exists a unique block
$B(d+1)$ that is consistent with
recursiveness in
$\bpm M_{d} & B(d+1) \epm$. \ To prove
Theorem \ref{main}, we must also
show that $Ran~B(d+1)\subseteq Ran~M_{d}$.
The following lemma is a step toward this end; it shows
that the rows of
$\bpm
M_{d} & B(d+1) \epm$
of the form $X^{n+f}Y^{g}$ $(f,g\ge 0, ~n+f+g\le d)$ are
recursively determined from row $X^{n}$.
\begin{lemma}\label{recXrows} For $i,j\ge 0, i+j\le d+1$ and
for $f,g\ge 0,~ n+f+g\le d$,
\begin{equation}\label{Xrows}
\langle X^{i}Y^{j},X^{n+f}Y^{g}\rangle =
\displaystyle \sum \limits_{r,s\ge 0, r+s\le n-1}^{}
a_{rs}\langle X^{i}Y^{j}, X^{r+f}Y^{s+g}\rangle.
\end{equation}
\end{lemma}
\begin{proof}
Since $M_{d}$ is real symmetric, it follows from
(\ref{Xton_rec}) that (\ref{Xrows}) holds for
$i+j\le d$. \ We may thus assume that $j= d+1-i$.
Consider first the case when $n+f+g<d$. \ In the subcase when $i \le d$, it follows from the presence of old moments in
$B[n+f+g,d+1]$ that
\begin{eqnarray*}
\langle X^{i}Y^{d+1-i},X^{n+f}Y^{g}\rangle &=&
\beta_{i+n+f,d+1-i+g},
\end{eqnarray*}
and in $M_{d}$ we have
\begin{eqnarray} \label{new}
\beta_{i+n+f,d+1-i+g} &=& \langle X^{n+f}Y^{g+1},X^{i}Y^{d-i}\rangle \nonumber \\
&=& \sum \limits_{r,s\ge 0, \ r+s\le n-1}^{} a_{rs} \langle X^{r+f}Y^{s+g+1},X^{i}Y^{d-i}\rangle \\
&=& \sum a_{rs} \langle X^{i}Y^{d-i}, X^{r+f}Y^{s+g+1}\rangle \quad \text{ \ (by symmetry in $M_{d}$)} \nonumber \\
&=& \sum a_{rs} \beta_{i+r+f,d-i+s+g+1} \nonumber \\
&=& \sum a_{rs} \langle X^{i}Y^{d+1-i}, X^{r+f}Y^{s+g}\rangle \nonumber \\
& & \text{ \ (by moment matrix structure in $B(d+1)$)}. \nonumber
\end{eqnarray}
For the subcase when $i=d+1$, we first note that $\langle X^{d+1}, X^{n+f}Y^{g}\rangle = \beta_{d+1+n+f,g} = \langle X^{d}, X^{n+f+1}Y^{g}\rangle
= \langle X^{n+f+1}Y^{g}, X^{d}\rangle $, and we then proceed beginning as in (\ref{new}). \
We next consider the case $n+f+g=d$,
and we seek to show that
\begin{equation}\label{XrowsHi}
\langle X^{i}Y^{d+1-i}, X^{n+f}Y^{g}\rangle=
\sum a_{rs} \langle X^{i}Y^{d+1-i}, X^{r+f}Y^{s+g}\rangle.
\end{equation}
We begin by showing that (\ref{XrowsHi}) holds if the
column $X^{i}Y^{d+1-i}$ is recursively determined
from (\ref{Xton_rec}), i.e., $i \ge n$. \ In this case,
we have $0\le i-n\le d+1-n$, so
\begin{eqnarray*}
\langle X^{i}Y^{d+1-i}, X^{n+f}Y^{g}\rangle &=& \sum a_{rs} \langle X^{r}Y^{s}X^{i-n}Y^{d+1-i},X^{n+f}Y^{g}\rangle \\
&=& \sum a_{rs} \langle X^{n+f}Y^{g}, X^{r}Y^{s}X^{i-n}Y^{d+1-i} \rangle \\
&=& \sum a_{uv} \sum a_{rs} \langle X^{u}Y^{v}X^{f}Y^{g}, X^{r}Y^{s}X^{i-n}Y^{d+1-i} \rangle \\
&=& \sum a_{uv} \sum a_{rs} \langle X^{r}Y^{s}X^{i-n}Y^{d+1-i}, X^{u+f}Y^{v+g} \rangle \\
&=& \sum a_{uv} \langle X^{i}Y^{d+1-i}, X^{u+f}Y^{v+g}\rangle.
\end{eqnarray*}
Thus $$ \langle X^{i}Y^{d+1-i}, X^{n+f}Y^{g}\rangle=
\sum a_{uv} \langle X^{i}Y^{d+1-i}, X^{u+f}Y^{v+g}\rangle,$$
which is equivalent to (\ref{XrowsHi}).
Returning to the proof of (\ref{XrowsHi}), we next assume
that column $X^{i}Y^{d+1-i}$ is not recursively determined,
i.e., $d+1-m<i<n$. \ By the Hankel condition in $B(d+1)$, we have
\begin{eqnarray*}
\langle X^{i}Y^{d+1-i}, X^{n+f}Y^{g}\rangle &=& \langle X^{n}Y^{d+1-n}, X^{i+f}Y^{n-i+g}\rangle \\
&=& \sum a_{rs} \langle X^{r}Y^{s+d+1-n},X^{i+f}Y^{n-i+g}\rangle \\
&=& \sum a_{rs} \beta_{r+i+f,s+d+1-i+g} \quad \text{(in $M_{d}$)} \\
&=& \sum a_{rs} \langle X^{i}Y^{d+1-i},X^{r+f}Y^{s+g}\rangle \\
&& \quad \text{(since $r+f+s+g<n+f+g=d$)}.
\end{eqnarray*}
Note that if (\ref{Xrows}) holds
for a collection of columns, then it holds for linear combinations
of those columns. \ Thus, using the preceding cases and (\ref{Y^m_rec}), we see that (\ref{Xrows})
holds, successively, for $X^{d+1-m}Y^{m},\ldots,Y^{d+1}$,
which completes the proof.
\end{proof}
The following result shows
that the rows of
$\bpm
M_{d} & B(d+1) \epm$
of the form $X^{f}Y^{m+g}$ $(f,g\ge 0, ~m+f+g\le d)$
are recursively determined from row $Y^{m}$.
\begin{lemma}\label{recYrows} For $i,j\ge 0, i+j\le d+1$ and
for $f,g\ge 0,~ m+f+g\le d$,
\begin{equation}\label{Yrows}
\langle X^{i}Y^{j},X^{f}Y^{m+g}\rangle =
\displaystyle \sum \limits_{u,v\ge 0, u+v\le m,v<m}
b_{uv}\langle X^{i}Y^{j}, X^{u+f}Y^{v+g}\rangle.
\end{equation}
\end{lemma}
\begin{proof}
Since $M_{d}$ is real symmetric and recursively generated, its
rows are also recursively generated from (\ref{X}) and (\ref{Y}), so
(\ref{Yrows}) holds if $i+j\le d$. \ We may now assume $j=d+1-i$, and we
first consider the case $m+f+g<d$ and the subcase $i \le d$. \ Since $f+g+m<d$, using old moments we see that
\begin{eqnarray*}
\langle X^{i}Y^{d+1-i},X^{f}Y^{m+g}\rangle &=& \beta_{i+f,d+1-i+g+m} \\
&=& \langle X^{f}Y^{m+g+1}, X^{i}Y^{d-i} \rangle \quad \text{(in $M_{d}$)} \\
&=& \sum b_{uv} \langle X^{u+f}Y^{v+g+1}, X^{i}Y^{d-i} \rangle \\
&=& \sum b_{uv} \beta_{i+u+f,d-i+v+g+1} \quad \text{(in $M_{d}$)} \\
&=& \sum b_{uv} \langle X^{i}Y^{d+1-i}, X^{u+f}Y^{v+g} \rangle \\
&& \quad \text{(since $u+v+f+g<d$)}.
\end{eqnarray*}
The subcase when $i=d+1$ proceeds as above, but starting with $\langle X^{d+1},X^{f}Y^{m+g} \rangle = \beta_{d+1+f,m+g} =
\langle X^{d},X^{f+1}Y^{m+g} \rangle = \langle X^{f+1}Y^{m+g},X^{d} \rangle$. \ For the case $m+f+g=d$,
we first consider the subcase when $i\ge n$,
so $X^{i}Y^{d+1-i}$ is in the left recursive band.
We have
\begin{eqnarray*}
\langle X^{i}Y^{d+1-i},X^{f}Y^{m+g}\rangle &=& \langle X^{n}X^{i-n}Y^{d+1-i},X^{f}Y^{m+g}\rangle \\
&=& \sum a_{rs} \langle X^{r+i-n}Y^{s+d+1-i},X^{f}Y^{m+g}\rangle \\
&=& \sum a_{rs} \sum b_{uv} \langle X^{r+i-n}Y^{s+d+1-i},X^{u+f}Y^{v+g}\rangle \\
&& \quad \text{(by row recursiveness in $M_{d}$)} \\
&=& \sum b_{uv} \langle X^{i}Y^{d+1-i},X^{u+f}Y^{v+g}\rangle.
\end{eqnarray*}
In the next subcase, we consider a column in the center band, of the form
$X^{d+1-i}Y^{i}$ with $d+1-n<i<m$. \ In this case,
(\ref{Yrows}) is equivalent to
\begin{equation}\label{newYrows}
\langle X^{d+1-i}Y^{i},X^{f}Y^{m+g}\rangle =
\displaystyle \sum \limits_{u,v\ge 0, u+v\le m,v<m}
b_{uv}\langle X^{d+1-i}Y^{i}, X^{u+f}Y^{v+g}\rangle.
\end{equation}
Note that the component $\langle X^{d+1-i}Y^{i},X^{f}Y^{m+g}\rangle$
lies on a cross-diagonal that reaches column $X^{d+1-m}Y^{m}$, so since $B(d+1)$ is well-defined, we have
\begin{eqnarray}\label{ymred}
\langle X^{d+1-i}Y^{i},X^{f}Y^{m+g}\rangle &=& \langle X^{d+1-m}Y^{m},X^{f+m-i}Y^{g+i}\rangle \nonumber \\
&=& \sum \limits_{u,v\ge 0, u+v\le m,v<m} b_{uv}\langle X^{u+d+1-m}Y^{v}, X^{f+m-i}Y^{g+i}\rangle.
\end{eqnarray}
For the subcase when $u+v<m$, in $M_{d}$ we have
\begin{eqnarray*}
\langle X^{u+d+1-m}Y^{v}, X^{f+m-i}Y^{g+i}\rangle &=& \beta_{u+d+1+f-i,v+g+i} \\
&=& \langle X^{d+1-i}Y^{i},X^{u+f}Y^{v+g}\rangle \quad \text{(since $u+f+v+g\le d-1$)}.
\end{eqnarray*}
For the subcase when $u+v=m$, there are three further subcases
in showing that
\begin{equation}\label{mrowsred}
\langle X^{d+1-i}Y^{i},X^{u+f}Y^{v+g}\rangle
= \langle X^{u+d+1-m}Y^{v}, X^{f+m-i}Y^{g+i}\rangle.
\end{equation}
For $v=i$, (\ref{mrowsred}) is clear.
For $v<i$, the Hankel property in $B[d,d+1]$ implies
\begin{eqnarray*}
\langle X^{d+1+u-m}Y^{v},X^{m+f-i}Y^{g+i} \rangle &=& \langle X^{d+1+u-m-(i-v)}Y^{v+(i-v)},X^{m+f-i+(i-v)}Y^{g+i-(i-v)} \rangle \\
&=& \langle X^{d+1-i}Y^{i},X^{u+f}Y^{g+v} \rangle .
\end{eqnarray*}
For $v>i$ we have, similarly,
\begin{eqnarray*}
\langle X^{d+1-i}Y^{i},X^{u+f}Y^{v+g}\rangle &=& \langle X^{d+1-i-(v-i)}Y^{i+v-i},X^{u+f+v-i}Y^{v+g-(v-i)}\rangle \\
&=& \langle X^{d+1+u-m}Y^{v},X^{m+f-i}Y^{g+i}\rangle.
\end{eqnarray*}
Since (\ref{Yrows}) holds in $M_{d}$ and in all columns of the left
and center bands, it now follows, using (\ref{Y^m_rec}) successively, that it holds
for columns in the right recursive band, which completes the proof.
\end{proof}
We are now prepared to prove that $Ran~B(d+1)\subseteq Ran~M_{d}$.
It follows immediately from (\ref{Xton_rec}) that each column in the left recursive band
of $B(d+1)$ belongs to $Ran~M_{d}$. \ In view of (\ref{Y^m_rec}), to establish range inclusion, it suffices to show that each central-band column of $B(d+1)$ belongs to $Ran~M_{d}$. \
Let $\mathcal{S}$ denote the set of recursively determined
columns of $M_{d}$, i.e.,
$$
\mathcal{S} = \{X^{n},~X^{n+1},~X^{n}Y,\ldots, X^{d},\ldots,
X^{n}Y^{d-n},\ldots,Y^{m},~ XY^{m},~Y^{m+1},\ldots,X^{d-m}Y^{m},
\ldots, Y^{d}\}.
$$
Let $\mathcal{B}$ denote the basis for
$Col~M_{d}$ (the column space of $M_{d}$) consisting of those
columns of $M_{d}$ which do not belong to $\mathcal{S}$.
Let $M_{\mathcal{B}}$ denote the compression of $M_{d}$ to the
rows and columns indexed by $\mathcal{B}$. \ Since $M_{d} \succeq 0$, we also have $M_{\mathcal{B}} \succeq 0$.
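For orientation, in a hypothetical instance with $d=5$, $n=4$, $m=4$ (and column relations (\ref{X}) and (\ref{Y}) in degrees $4$ and $4$), the set of recursively determined columns is $\mathcal{S}=\{X^{4},\,Y^{4},\,X^{5},\,X^{4}Y,\,XY^{4},\,Y^{5}\}$, so $\mathcal{B}$ consists of the remaining $15$ columns
$$
1,\;X,\;Y,\;X^{2},\;XY,\;Y^{2},\;X^{3},\;X^{2}Y,\;XY^{2},\;Y^{3},\;X^{3}Y,\;X^{2}Y^{2},\;XY^{3},\;X^{3}Y^{2},\;X^{2}Y^{3}
$$
(assuming these are indeed independent, as required for $\mathcal{B}$ to be a basis), and $M_{\mathcal{B}}$ is the corresponding $15\times 15$ compression of $M_{5}$.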
Let $X^{i}Y^{d+1-i}$ ($d+1-m<i<n$)
denote a central-band column of $B(d+1)$, and let
$v_{i} \equiv [X^{i}Y^{d+1-i}]_{\mathcal{B}}$ denote the
compression of column $X^{i}Y^{d+1-i}$ to the rows indexed by $\mathcal{B}$.
There exists a unique vector of coefficients
$(c_{ab}^{(i)})_{X^{a}Y^{b}\in \mathcal{B}}$ such that
$$v_{i} = \displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{}
c_{ab}^{(i)}[X^{a}Y^{b}]_{\mathcal{B}},$$
i.e., for each $X^{u}Y^{v}\in \mathcal{B}$,
\begin{equation}\label{range_formula}
\langle X^{i}Y^{d+1-i}, X^{u}Y^{v}\rangle
= \displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{} c_{ab}^{(i)}
\langle X^{a}Y^{b},X^{u}Y^{v}\rangle.
\end{equation}
To complete the proof that $Ran~B(d+1)\subseteq Ran~M_{d}$, it suffices
to prove that $X^{i}Y^{d+1-i}
= \displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{}
c_{ab}^{(i)}X^{a}Y^{b},$
which, in view of (\ref{range_formula}), follows from the next result.
\begin{lemma}\label{range}
For each $X^{c}Y^{e}\in \mathcal{S}$,
\begin{equation}\label{range_equation}
\langle X^{i}Y^{d+1-i}, X^{c}Y^{e}\rangle
= \displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{} c_{ab}^{(i)}
\langle X^{a}Y^{b},X^{c}Y^{e}\rangle.
\end{equation}
\end{lemma}
\begin{proof}
We may assume without loss of generality that $n\le m$, so the
elements of $\mathcal{S}$ may be arranged in degree-lexicographic
order as $X^{n}, \cdots, Y^{m}, \cdots, X^{d}, \cdots, Y^{d}$. \
We will prove (\ref{range_equation}) by induction on the position number
of row $X^{c}Y^{e}\in \mathcal{S}$ within the degree-lexicographic ordering.
For row $X^{n}$ ($c=n$, $e=0$), Lemma \ref{recXrows} implies that
\begin{equation}\label{basecase}
\langle X^{i}Y^{d+1-i}, X^{n}\rangle=
\displaystyle
\sum \limits_{r,s\ge 0, r+s\le n-1}
a_{rs} \langle X^{i}Y^{d+1-i}, X^{r}Y^{s}\rangle.
\end{equation}
Since $r+s<n$, $X^{r}Y^{s}\in \mathcal{B}$, so the sum in
(\ref{basecase}) may be expressed as
$$ \sum a_{rs}
\displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{} c_{ab}^{(i)}
\langle X^{a}Y^{b}, X^{r}Y^{s}\rangle =
\displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{} c_{ab}^{(i)}
\langle X^{a}Y^{b}, X^{n}\rangle$$
(using Lemma \ref{recXrows} again). \
Assume now that (\ref{range_equation})
holds for all rows $X^{C}Y^{E}\in \mathcal{S}$ with
order position up to $k-1$, and consider
$X^{c}Y^{e}\in \mathcal{S}$ with position $k$. \ Either $c\ge n$ or
$e\ge m$; we present the argument for the case $e\ge m$ (the other case
is simpler).
We have $e= m+g$ for some $g\ge 0$. \
From Lemma \ref{recYrows}, we have
$$ \langle X^{i}Y^{d+1-i}, X^{c}Y^{m+g}\rangle=
\displaystyle
\sum \limits_{u,v\ge 0, u+v\le m, v<m}
b_{uv} \langle X^{i}Y^{d+1-i}, X^{c+u}Y^{g+v}\rangle. $$
Now $X^{c+u}Y^{g+v}$ is either a basis vector, or, since $v<m$,
it precedes $X^{c}Y^{m+g}$ in the ordering of $\mathcal{S}$.
Thus, by definition (for the basis rows)
and by induction (for the non-basis rows), the preceding sum is
equal to
$$\sum b_{uv}
\displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{} c_{ab}^{(i)}
\langle X^{a}Y^{b}, X^{c+u}Y^{g+v}\rangle =
\displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{} c_{ab}^{(i)}
\langle X^{a}Y^{b},
\sum b_{uv} X^{c+u}Y^{g+v}\rangle$$
$$=
\displaystyle \sum \limits_{X^{a}Y^{b}\in \mathcal{B}}^{} c_{ab}^{(i)}
\langle X^{a}Y^{b},
X^{c}Y^{e}\rangle$$
(by another application of Lemma \ref{recYrows}).
\end{proof}
The proof of Theorem \ref{main} is now complete. \
\section{Proof of Theorem \ref{RDnew}} \label{PROOF2}
For the proof of Theorem \ref{RDnew}, we require a preliminary result concerning a general moment matrix.
\begin{lemma}\label{longcolumns}
Suppose $M_{d+1}$ satisfies $Ran~B(d+1)\subseteq
Ran~M_{d}$. \ If $p\in \mathcal{P}_{d}$ and
$p(X,Y) = 0$ in $Col~M_{d}$, then $p(X,Y) = 0$
in $Col~M_{d+1}$.
\end{lemma}
\begin{proof}
Since $M_{d}$ is real symmetric, we have $p(X,Y) = 0$
in the row space of $M_{d}$, and we first show that
$p(X,Y)=0$ holds in the row space of
$\bpm
M_{d} & B(d+1)
\epm.$
Let $\rho := deg~p$ and
suppose $p(x,y) \equiv \sum_{r,s\ge 0, r+s\le \rho} \alpha_{rs}x^{r}y^{s}$.
Then for $i,j\ge 0$ with $i+j\le d$, we have
\begin{equation}\label{pM}
\sum_{r,s} \alpha_{rs} \langle X^{i}Y^{j}, X^{r}Y^{s} \rangle = 0.
\end{equation}
Consider a column of degree $d+1$, $X^{u}Y^{d+1-u}$ ($0\le u\le d+1$).
We seek to show that
\begin{equation}\label{pB}
\sum_{r,s} \alpha_{rs} \langle X^{u}Y^{d+1-u}, X^{r}Y^{s} \rangle = 0.
\end{equation}
By the range inclusion, we have a dependence relation in $Col~
\bpm
M_{d} & B(d+1)
\epm$ of the form
\begin{equation}\label{nextdegree}
X^{u}Y^{d+1-u} = \sum_{a,b\ge 0, a+b\le d} c_{ab}^{(u)} X^{a}Y^{b}.
\end{equation}
Thus,
\begin{eqnarray*}
\sum_{r,s} \alpha_{rs} \langle X^{u}Y^{d+1-u}, X^{r}Y^{s} \rangle &=&
\sum_{r,s} \alpha_{rs} \sum_{a,b\ge 0, a+b\le d} c_{ab}^{(u)}
\langle X^{a}Y^{b}, X^{r}Y^{s} \rangle \\
&=& \sum c_{ab}^{(u)} \sum \alpha_{rs}
\langle X^{a}Y^{b}, X^{r}Y^{s} \rangle = 0 \quad (\text{by (\ref{pM})}).
\end{eqnarray*}
Now, $p(X,Y)=0$ in
the row space of
$
\bpm
M_{d} & B(d+1)
\epm$,
so
$p(X,Y)=0$ in $Col~
\bpm
M_{d} \\
B(d+1)^{T}
\epm.$
\end{proof}
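The linear-algebra core of this lemma can be checked numerically: if $B=MW$ for some matrix $W$ (so that $Ran~B\subseteq Ran~M$) and $M$ is symmetric, then $Mc=0$ forces $B^{T}c=0$, i.e., column relations of $M_{d}$ persist in the extension. A minimal sketch with random stand-ins for $M$ and $B$:

```python
import numpy as np

# Hypothetical stand-ins for M_d and B(d+1): the only structure used is
# M symmetric and Ran B contained in Ran M (enforced by taking B = M W).
rng = np.random.default_rng(1)
G = rng.normal(size=(5, 3))
M = G @ G.T                      # symmetric, rank at most 3: nontrivial kernel
W = rng.normal(size=(5, 2))
B = M @ W                        # guarantees Ran B is contained in Ran M

_, _, Vt = np.linalg.svd(M)
c = Vt[-1]                       # right singular vector for a zero singular value
assert np.allclose(M @ c, 0)     # c is a column relation of M
assert np.allclose(B.T @ c, 0)   # B^T c = W^T M c = 0: the relation persists
```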
\begin{proof} [Proof of Theorem \ref{RDnew}]
\ It follows from the proof of Theorem \ref{gridthm} that $M_{d}$ admits
a unique extension $M_{d+1}$ which satisfies $Ran~B(d+1)\subseteq
Ran~M_{d}$ and such that (\ref{Xton_rec})-(\ref{Y^m_rec}) hold in $Col~M_{d+1}$.
It remains only to prove that $M_{d+1}$ is recursively
generated. \
Since $M_{d}$ is recursively generated, it suffices to consider
a dependence relation in $Col~M_{d+1}$ of degree $d$,
say
\begin{equation}\label{firstrelation}
X^{i}Y^{d-i} = \sum_{g,h\ge0,g+h\le d-1} c_{gh}X^{g}Y^{h}
\end{equation}
(where $0\le i\le d$),
and to show that
\begin{equation}\label{timesX}
X^{i+1}Y^{d-i} = \sum_{g,h\ge0,g+h\le d-1} c_{gh}X^{g+1}Y^{h}
\end{equation}
and
\begin{equation}\label{timesY}
X^{i}Y^{d-i+1} = \sum_{g,h\ge0,g+h\le d-1} c_{gh}X^{g}Y^{h+1}.
\end{equation}
Suppose first that $i\ge n$, so that $X^{i}Y^{d-i}$ lies in the
left band. \ Then from (\ref{Xrec}) we also have
\begin{equation}\label{secondrelation}
X^{i}Y^{d-i} = \sum_{r+s\le n-1} a_{rs} X^{i-n+r}Y^{s+d-i}.
\end{equation}
Thus, in $M_{d}$ we have the column relation of degree
at most $d-1$,
$$\sum_{g+h\le d-1} c_{gh}X^{g}Y^{h}
= \sum_{r+s\le n-1} a_{rs} X^{i-n+r}Y^{s+d-i}.$$
Since $M_{d}$ is recursively generated, it follows that in
$Col~M_{d}$ we also have
$$\sum_{g+h\le d-1} c_{gh}X^{g+1}Y^{h}
= \sum_{r+s\le n-1} a_{rs} X^{i-n+r+1}Y^{s+d-i}.$$
Lemma \ref{longcolumns} implies that the
last equation also holds in $Col~M_{d+1}$, where, from (\ref{Xton_rec}),
the right-hand sum represents $X^{i+1}Y^{d-i}$; this
establishes (\ref{timesX}). \ We omit the proof of (\ref{timesY}),
which is similar. \ The case when $d-i\ge m$, so that
$X^{i}Y^{d-i+1}$ is in the right band, is handled in an entirely
analogous fashion, so we also omit the proof of this case.
We next consider the case when $d-m<i<n$, so that column
$X^{i}Y^{d-i}$ in (\ref{firstrelation}) is in the central band.
To establish (\ref{timesX}), it suffices to verify that
\begin{equation}\label{Xcomponent}
\langle X^{i+1}Y^{d-i}, X^{k}Y^{j} \rangle
= \sum_{g,h\ge0,g+h\le d-1} c_{gh} \langle X^{g+1}Y^{h},X^{k}Y^{j}\rangle \quad (k,j\ge 0,~k+j\le d+1).
\end{equation}
The case when $k+j<d$ is easy, using (\ref{firstrelation}) and
the old moments in block $B[k+j,d+1]$. \ We consider next the
case $k+j=d$ and the subcase when $k\ge n$. \
In this subcase, $\langle X^{i+1}Y^{d-i}, X^{k}Y^{d-k} \rangle$
belongs to a cross-diagonal of $B[d,d+1]$
that intersects column $X^{n}Y^{d+1-n}$,
so
from the definition
of $B[d,d+1]$ in the proof of Theorem \ref{main}, we have
\begin{eqnarray} \label{recformula}
\langle X^{i+1}Y^{d-i}, X^{k}Y^{d-k} \rangle
&:=& \langle X^{n}Y^{d+1-n}, X^{k-(n-i-1)}Y^{d-k+n-i-1} \rangle \nonumber \\
&=& \sum a_{rs} \langle X^{r}Y^{s+d+1-n}, X^{k-(n-i-1)}Y^{d-k+n-i-1} \rangle.
\end{eqnarray}
Now, we have
\begin{eqnarray*}
\sum_{g,h\ge 0,g+h\le d-1} c_{gh} \langle X^{g+1}Y^{h},X^{k}Y^{d-k}\rangle &=& \sum c_{gh} \langle X^{k}Y^{d-k},X^{g+1}Y^{h}\rangle \\
&=& \sum c_{gh} \sum_{r,s} a_{rs} \langle X^{r+k-n}Y^{s+d-k},X^{g+1}Y^{h}\rangle \\
&=& \sum a_{rs} \sum c_{gh} \langle X^{g+1}Y^{h},X^{r+k-n}Y^{s+d-k}\rangle \\
&=& \sum a_{rs} \sum c_{gh} \langle X^{g}Y^{h},X^{r+k-n+1}Y^{s+d-k}\rangle \quad (\text{in} \; M_{d}) \\
&=& \sum a_{rs} \langle X^{i}Y^{d-i},X^{r+k-n+1}Y^{s+d-k}\rangle \\
&=& \sum a_{rs} \langle X^{r+k-n+1}Y^{s+d-k}, X^{i}Y^{d-i}\rangle \\
&=& \sum a_{rs} \langle X^{r}Y^{s+d-k+(k-n+1)}, X^{i+(k-n+1)}Y^{d-i-(k-n+1)}\rangle.
\end{eqnarray*}
This last expression agrees with (\ref{recformula}), so (\ref{timesX})
is established for this subcase. \ The proof of this subcase for
(\ref{timesY}) is very similar, so we omit the details. \ In the subcase
when $k< n$, then $d-k\ge m$, and we see that
$\langle X^{i+1}Y^{d-i}, X^{k}Y^{d-k} \rangle$
belongs to a cross-diagonal of $B[d,d+1]$
that intersects column $X^{d+1-m}Y^{m}$. \ Since $deg~q<m$,
the proof of this subcase is entirely analogous to that above,
but using (\ref{Y^m_rec}) for the definition of $X^{d+1-m}Y^{m}$.
Finally, we consider the case $k+j=d+1$. \ As above, we will treat the
subcase of (\ref{timesX}) when $k\ge n$ in detail and omit the proofs of
the other subcases of (\ref{timesX}) and (\ref{timesY}), which are similar.
Since $k\ge n$, then, as above, we have
\begin{eqnarray} \label{recformulaC}
\langle X^{i+1}Y^{d-i}, X^{k}Y^{d+1-k} \rangle &:=& \langle X^{n}Y^{d+1-n}, X^{k-(n-i-1)}Y^{d+1-k+n-i-1} \rangle \nonumber \\
&=& \sum a_{rs} \langle X^{r}Y^{s+d+1-n}, X^{k-(n-i-1)}Y^{d-k+n-i} \rangle.
\end{eqnarray}
Now,
\begin{eqnarray*}
\lefteqn{\sum_{g,h\ge 0,g+h\le d-1} c_{gh} \langle X^{g+1}Y^{h},X^{k}Y^{d+1-k}\rangle} \\
&=& \sum c_{gh} \langle X^{k}Y^{d+1-k},X^{g+1}Y^{h}\rangle \\
&& (\text{since } \bpm M_{d} & B(d+1) \epm \text{ is the transpose of} \\
&& \bpm
M_{d} \\
B(d+1)^{T}
\epm) \\
&=& \sum c_{gh} \sum_{r,s} a_{rs} \langle X^{r+k-n}Y^{s+d+1-k},X^{g+1}Y^{h}\rangle \\
&=& \sum a_{rs} \sum c_{gh} \langle X^{g+1}Y^{h},X^{r+k-n}Y^{s+d+1-k}\rangle.
\end{eqnarray*}
Since the row degrees of the terms in the last sum are at most $d$, by the
previous cases (for $j+k<d$ and $j+k=d$), the last double sum may be
expressed as
$$ \sum a_{rs} \langle X^{i+1}Y^{d-i},X^{r+k-n}Y^{s+d+1-k}\rangle$$
relative to $\bpm
M_{d} & B(d+1)
\epm$. \ Since $M_{d+1}$ is real symmetric, the latter sum may be
expressed as
\begin{eqnarray*}
\lefteqn{\sum a_{rs} \langle X^{r+k-n}Y^{s+d+1-k},
X^{i+1}Y^{d-i}\rangle} \\
&=& \sum a_{rs} \langle X^{r}Y^{s+d+1-k+(k-n)},
X^{i+1+(k-n)}Y^{d-i-(k-n)}\rangle \\
&=& \sum a_{rs} \langle X^{r}Y^{s+d+1-n},
X^{i+1+k-n}Y^{d-i-k+n}\rangle,
\end{eqnarray*}
and this agrees with (\ref{recformulaC}).
The proof is now complete.
\end{proof}
| {
"timestamp": "2012-04-10T02:01:34",
"yymm": "1204",
"arxiv_id": "1204.1687",
"language": "en",
"url": "https://arxiv.org/abs/1204.1687",
"abstract": "A theorem of Bayer and Teichmann implies that if a finite real multisequence \\beta = \\beta^(2d) has a representing measure, then the associated moment matrix M_d admits positive, recursively generated moment matrix extensions M_(d+1), M_(d+2),... For a bivariate recursively determinate M_d, we show that the existence of positive, recursively generated extensions M_(d+1),...,M_(2d-1) is sufficient for a measure. Examples illustrate that all of these extensions may be required to show that \\beta has a measure. We describe in detail a constructive procedure for determining whether such extensions exist. Under mild additional hypotheses, we show that M_d admits an extension M_(d+1) which has many of the properties of a positive, recursively generated extension.",
"subjects": "Functional Analysis (math.FA)",
"title": "Recursively determined representing measures for bivariate truncated moment sequences"
} |
https://arxiv.org/abs/2206.09189 | List Arboricity of Finitary Matroids: A Generalization of Seymour's Result | Seymour proved that the chromatic numbers and the list chromatic numbers of loop-free finite matroids are the same. In this paper we prove the same statement for infinite, loop-free finitary matroids. | \section{Introduction}
Matroids are important objects of finite combinatorics that capture the basic properties of independence and rank. Theorems about matroids can be applied in a wide range of fields of mathematics, such as linear algebra and graph theory. One of the most interesting facts about matroids is Seymour's list coloring theorem \cite{Se98}, which states that the list chromatic number of a finite matroid is equal to its chromatic number. In this paper, we generalize this result to a class of infinite matroids, called finitary matroids. In Section \ref{sc:def} we define the most important properties of these matroids. Then in Section \ref{sc:fin} we prove the generalization of Seymour's theorem when the chromatic number is finite, using a compactness argument. Finally, in Section \ref{sc:inf}, we prove the same statement when the chromatic number is an infinite cardinal. This proof uses heavier set theory and logic, in particular elementary submodels.
\section{Definition and basic properties of finitary matroids}\label{sc:def}
We follow the terminology of \cite{Ox92}.
\begin{definition}\label{df:finitary-matroid}
Let $S$ be any set. A {\em rank function on $S$} is a function $r:[S]^{< \omega }\rightarrow \omega $, with the following properties:
\begin{enumerate}[1.]
\item
$r(\emptyset )=0$,
\item
$\forall A,B\in [S]^{<\omega }$ if $A\subseteq B$, then $r(A)\le r(B)$ ({\em monotonicity}),
\item
$\forall A\in [S]^{<\omega }$, $r(A)\le |A|$ ({\em subcardinality}),
\item
$\forall A,B\in [S]^{<\omega }$, $r(A)+r(B)\ge r(A\cap B)+r(A\cup B)$ ({\em submodularity}).
\end{enumerate}
A {\em finitary matroid} is a pair $\mathcal{M}=(S,r)$, where $r$ is a rank function on $S$. A subset $X\subseteq S$ is called {\em independent} if $\forall A\in [X]^{<\omega }$, we have $r(A)=|A|$. The set of independent sets in $\mathcal{M}$ is denoted by $\mathcal{I}(\mathcal{M})$.
\end{definition}
By this definition it is clear that the subsets of an independent set are also independent.
If $\mathcal{M}=(S,r)$ is a finitary matroid and $A\in [S]^{<\omega }$, then $\mathcal{M}_{A}=(A,r|_{\mathcal{P}(A)})$ is a finite matroid. Using this observation, several claims can be proven easily from the fact that they hold for finite matroids. Conversely, if $r$ is any function on the finite subsets of $S$ such that every $\mathcal{M}_{A}$ is a finite matroid, then $\mathcal{M}$ is a finitary matroid. In this way, we can define several matroids. For example, let $V$ be any vector space and $S\subseteq V$. The {\em linear matroid} of $S$ is the matroid in which $A\in [S]^{<\omega }$ is independent if and only if its elements are linearly independent; then $r(A)$ is the dimension of the subspace spanned by the elements of $A$. Another example is the {\em graphical matroid}, where $V$ is a vertex set and $S\subseteq [V]^{2}$ is an edge set. In this matroid, $A\in [S]^{<\omega }$ is independent if it does not contain a circuit, and $r(A)$ is the number of vertices covered by $A$ minus the number of connected components of the graph formed by these vertices and the edge set $A$. However, there are many operations on finite matroids that cannot be carried over to finitary matroids, such as dualization.
Let us remark that there are ``natural'' infinite matroids which are not finitary: for example, the {\em bond matroid} of an infinite graph cannot be obtained as a finitary matroid, as it may have sets that are not independent even though all of their finite subsets are independent.
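For intuition, the graphical-matroid rank formula above can be computed with a standard union-find pass over the edges. The following sketch is our own illustration (the helper name is hypothetical), checked on a triangle:

```python
# A minimal sketch (not from the paper) of the graphical-matroid rank:
# r(A) = (number of vertices covered by A) - (number of components of the
# graph formed by these vertices and A).  A is independent iff r(A) = |A|,
# i.e., A is a forest.

def graphic_rank(edges):
    """Rank of a finite edge set in the graphical matroid (union-find)."""
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    rank = 0
    for u, v in edges:
        parent.setdefault(u, u)
        parent.setdefault(v, v)
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            rank += 1
    return rank

triangle = [(1, 2), (2, 3), (1, 3)]
assert graphic_rank(triangle) == 2          # 3 vertices, 1 component
assert graphic_rank(triangle[:2]) == 2      # a forest: rank equals cardinality
```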
\begin{lemma}\label{lm:lemma1}
If $\mathcal{M}=(S,r)$ is a finitary matroid and $A\in [S]^{<\omega }$, then $A\in \mathcal{I}(\mathcal{M})$ if and only if $r(A)=|A|$. Moreover, $|B|=r(A)$ for each maximal independent subset $B\subseteq A$.
\end{lemma}
\begin{proof}
Apply the corresponding result for finite matroids (Theorem 1.3.2 in \cite{Ox92}) to the finite matroid $\mathcal{M}_{A}$.
\end{proof}
\begin{definition}
Let $\mathcal{M}=(S,r)$ be a finitary matroid. A subset $C\subseteq S$ is called a {\em circuit} if $C\not\in \mathcal{I}(\mathcal{M})$ and $C$ is minimal with this property, that is, every proper subset of $C$ is independent. The set of circuits of $\mathcal{M}$ is denoted by $\mathcal{C}(\mathcal{M})$.
\end{definition}
If $X\subseteq S$ is not independent, then there is some $A\in [X]^{<\omega }$ that is not independent. Removing elements one by one, we find a minimal non-independent set $C\subseteq A\subseteq X$, and thus $C\in \mathcal{C}(\mathcal{M})$. Hence, every non-independent subset of a finitary matroid contains a circuit, and every circuit is finite.
It is also clear that $\emptyset \not\in \mathcal{C}(\mathcal{M})$ and if $C_{1}\neq C_{2}\in \mathcal{C}(\mathcal{M})$, then $C_{1}\not\subseteq C_{2}$.
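The observation that every non-independent set contains a finite circuit can be illustrated in a small linear matroid by brute force, searching dependent subsets in order of size. The helper names below are ours, and the example assumes a linear matroid over the rationals:

```python
# Illustration (assumption: a linear matroid given by vectors in Q^2).
# A circuit of a dependent finite set is a smallest dependent subset.
from itertools import combinations
import numpy as np

def lin_rank(vectors, idx):
    """Rank of the subset of vectors indexed by idx."""
    if not idx:
        return 0
    return np.linalg.matrix_rank(np.array([vectors[i] for i in idx]).T)

def find_circuit(vectors, idx):
    """Smallest dependent subset of idx, or None if idx is independent."""
    for k in range(1, len(idx) + 1):
        for sub in combinations(idx, k):
            if lin_rank(vectors, sub) < k:   # dependent: rank < cardinality
                return set(sub)
    return None

vecs = {0: (1, 0), 1: (0, 1), 2: (1, 1), 3: (2, 0)}
C = find_circuit(vecs, [0, 1, 2, 3])
# {0, 3} is dependent ((2,0) = 2*(1,0)) while every singleton is independent,
# so a two-element circuit is found.
assert C == {0, 3}
```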
The following lemma contains two statements,
{\em weak and strong circuit axioms}, that hold for infinite finitary matroids as well.
\begin{lemma}\label{lm:lemma2}
a) If $\mathcal{M}=(S,r)$ is a finitary matroid, $C_{1}\neq C_{2}\in \mathcal{C}(\mathcal{M})$ and $e\in C_{1}\cap C_{2}$, then there is a $C\in \mathcal{C}(\mathcal{M})$ such that $C\subseteq (C_{1}\cup C_{2})-e$.
b) If we also have $e_{1}\in C_{1}-C_{2}$, $C$ can be chosen such that $e_{1}\in C$.
\end{lemma}
\begin{proof}
For a) use Lemma 1.1.3 in \cite{Ox92} for the finite matroid $\mathcal{M}_{C_{1}\cup C_{2}}$. For b) use Proposition 1.4.11 in \cite{Ox92} for the finite matroid $\mathcal{M}_{C_{1}\cup C_{2}}$.
\end{proof}
A circuit consisting of one element is called a {\em loop}. A finitary matroid is called {\em loop-free} if it does not contain any loop. Equivalently, this means that for all $x\in S$, we have $r(\{ x\})=1$.
\begin{definition}
Let $\mathcal{M}=(S,r)$ be a finitary matroid. A set $B\in \mathcal{I}(\mathcal{M})$ is a {\em base}, if it is a maximal independent set. The set of bases of $\mathcal{M}$ is denoted by $\mathcal{B}(\mathcal{M})$.
\end{definition}
The existence of bases for finite matroids is clear, as we can add elements to an independent set one by one until it becomes maximal. Since independence depends only on finite subsets, the Teichmüller–Tukey lemma shows that bases exist in every finitary matroid and that every independent set is contained in a base.
\begin{lemma}\label{lm:lemma3}
If $\mathcal{M}=(S,r)$ is a finitary matroid, $B\in \mathcal{B}(\mathcal{M})$ and $x\in S-B$, then there is a unique $C\in \mathcal{C}(\mathcal{M})$, such that $C\subseteq B+x$.
\end{lemma}
\begin{proof}
Since $B$ is maximal independent, $B+x$ is not independent, so it must contain a circuit $C$.
Since $B$ is independent, $x\in C$.
For uniqueness, suppose $C_{1},C_{2}\subseteq B+x$ are circuits with $C_{1}\neq C_{2}$. Since no circuit can be a subset of the independent set $B$, we must have $x\in C_{1}\cap C_{2}$. But then by lemma \ref{lm:lemma2} a), there is a $C\in \mathcal{C}(\mathcal{M})$ such that $C\subseteq (C_{1}\cup C_{2})-x\subseteq B$, which is a contradiction.
\end{proof}
\begin{definition}\label{df:maincircle}
Let $\mathcal{M}=(S,r)$ be a finitary matroid, $B\in \mathcal{B}(\mathcal{M})$ and $x\in S-B$. The {\em main circle} of $x$ on $B$, denoted by $C(B,x)$ is the unique $C\in \mathcal{C}(\mathcal{M})$, such that $C\subseteq B+x$.
\end{definition}
The next notion we need to introduce is that of {\em closed sets} in finitary matroids. Before that, we prove some important lemmas. It is clear from submodularity and subcardinality that if $A\in [S]^{<\omega }$ and $x\in S$, then $r(A+x)$ is either $r(A)$ or $r(A)+1$.
\begin{lemma}\label{lm:lemma4}
If $\mathcal{M}=(S,r)$ is a finitary matroid, $A,B\in [S]^{<\omega }$ with $A\subseteq B$, and $x\in S$ is such that $r(A+x)=r(A)$, then $r(B+x)=r(B)$.
\end{lemma}
\begin{proof}
Using submodularity for the sets $A+x$ and $B$, we have $r(A+x)+r(B)\ge r((A+x)\cap B)+r((A+x)\cup B)\ge r(A)+r(B+x)$. By our assumption, then $r(A)+r(B)\ge r(A)+r(B+x)$, so $r(B)\ge r(B+x)$. By monotonicity, we have $r(B)\le r(B+x)$, so $r(B)$ and $r(B+x)$ are equal.
\end{proof}
\begin{lemma}\label{lm:lemma5}
If $\mathcal{M}=(S,r)$ is a finitary matroid, $A\in [S]^{<\omega }$ and $x_{1},...,x_{n}\in S$ such that $r(A+x_{i})=r(A)$ for all $1\le i\le n$, then $r(A\cup \{ x_{1},...,x_{n}\})=r(A)$.
\end{lemma}
\begin{proof}
We use induction on $n$. For $n=1$ the claim is clear. Suppose it holds for some $n$; we show it for $n+1$. Since $A\subseteq A\cup \{ x_{1},...,x_{n}\}$ and $r(A+x_{n+1})=r(A)$, lemma \ref{lm:lemma4} gives $r(A\cup \{ x_{1},...,x_{n},x_{n+1}\})=r((A\cup \{ x_{1},...,x_{n}\})+x_{n+1})=r(A\cup \{ x_{1},...,x_{n}\})=r(A)$, where the last equality is the induction hypothesis. This completes the induction step.
\end{proof}
For a finite matroid, a subset $Z\subseteq S$ is called closed if $r(Z+x)>r(Z)$ for all $x\in S-Z$. For finitary matroids, the rank function is defined only on finite subsets, so we need to refine this definition.
\begin{definition}
Let $\mathcal{M}=(S,r)$ be a finitary matroid. A subset $Z\subseteq S$ is {\em closed} if for all $Z_{0}\in [Z]^{<\omega }$ and $x\in S-Z$, we have $r(Z_{0}+x)>r(Z_{0})$.
\end{definition}
By lemma \ref{lm:lemma4}, this is clearly equivalent to the original definition of closedness for finite matroids. The whole set $S$ is closed in every matroid, and $\emptyset $ is closed if and only if $\mathcal{M}$ is loop-free.
\begin{lemma}\label{lm:lemma6}
If $\mathcal{M}=(S,r)$ is a finitary matroid, $I$ is an index set and for all $i\in I$, $Z_{i}\subseteq S$ is a closed subset, then $\bigcap_{i\in I}{Z_{i}}$ is closed.
\end{lemma}
\begin{proof}
Let $Z_{0}\in [\bigcap_{i\in I}{Z_{i}}]^{<\omega }$ and $x\in S-(\bigcap_{i\in I}{Z_{i}})$. Then there is some $i\in I$ such that $x\not\in Z_{i}$. Since $Z_{i}$ is closed and $Z_{0}\in [Z_{i}]^{<\omega }$, we have $r(Z_{0}+x)>r(Z_{0})$. Since $Z_{0}$ and $x$ were arbitrary, $\bigcap_{i\in I}{Z_{i}}$ is closed.
\end{proof}
\begin{definition}
Let $\mathcal{M}=(S,r)$ be a finitary matroid and $X\subseteq S$, then the {\em closure of $X$}, denoted by $\sigma (X)$, is defined by
$$\sigma (X)=\bigcap \{Z\subseteq S:X\subseteq Z,Z \text{ is closed}\}.$$
\end{definition}
By lemma \ref{lm:lemma6}, $\sigma (X)$ is closed, and $X\subseteq \sigma (X)$, since $X$ is contained in every member of the intersection.
\begin{lemma}\label{lm:lemma7}
Let $\mathcal{M}=(S,r)$ be a finitary matroid, then
\begin{enumerate}[a)]
\item
If $X\subseteq Z\subseteq S$ and $Z$ is closed, then $\sigma (X)\subseteq Z$,
\item
If $X\subseteq Y\subseteq S$, then $\sigma (X)\subseteq \sigma (Y)$,
\item
If $X\subseteq S$, then $\sigma (\sigma (X))=\sigma (X)$.
\end{enumerate}
\end{lemma}
\begin{proof}
a) By the definition of $\sigma (X)$, $Z$ is one of the sets in the intersection, so $\sigma (X)\subseteq Z$.
b) Since $X\subseteq Y\subseteq \sigma (Y)$ and $\sigma (Y)$ is closed, by a), we have $\sigma (X)\subseteq \sigma (Y)$.
c) On one hand, $\sigma (X)\subseteq \sigma (\sigma (X))$ holds by definition; on the other hand, since $\sigma (X)\subseteq \sigma (X)$ and $\sigma (X)$ is closed, a) gives $\sigma (\sigma (X))\subseteq \sigma (X)$, so they are equal.
\end{proof}
Now we give a characterization of the elements of $\sigma (X)$.
\begin{lemma}\label{lm:lemma8}
If $\mathcal{M}=(S,r)$ is a finitary matroid, $X\subseteq S$, then
$$\sigma (X)=\{ x\in S\ |\ \exists X_{0}\in [X]^{<\omega }\ r(X_{0}+x)=r(X_{0})\}.$$
\end{lemma}
\begin{proof}
Let $$F=\{ x\in S| \exists X_{0}\in [X]^{<\omega }, r(X_{0}+x)=r(X_{0})\}.$$
First we will show that $F\subseteq \sigma (X)$. Suppose $x\in F$ and let $X_{0}\in [X]^{<\omega }$ be such that $r(X_{0}+x)=r(X_{0})$. Since $X_{0}\subseteq X\subseteq \sigma (X)$ and $\sigma (X)$ is closed, $x\not\in \sigma (X)$ would imply $r(X_{0}+x)>r(X_{0})$, contradicting our assumption. Thus $x\in \sigma (X)$, so $F\subseteq \sigma (X)$.
Clearly $X\subseteq F$: for $x\in X$, we can take the singleton $X_{0}=\{x\}$ in the definition of $F$. Next, we show that $F$ is closed. Let $Y_{0}\in [F]^{<\omega }$, list its elements as $Y_{0}=\{ y_{1},...,y_{n}\}$, and let $x\in S-F$. For all $1\le i\le n$, since $y_{i}\in F$, there is an $X_{i}\in [X]^{<\omega }$ such that $r(X_{i}+y_{i})=r(X_{i})$. Since $X_{i}\subseteq \bigcup_{j=1}^{n}{X_{j}}$ for all $i$, lemma \ref{lm:lemma4} gives $r((\bigcup_{j=1}^{n}{X_{j}})+y_{i})=r(\bigcup_{j=1}^{n}{X_{j}})$. Then by lemma \ref{lm:lemma5}, we have $r((\bigcup_{j=1}^{n}{X_{j}})\cup Y_{0})=r(\bigcup_{j=1}^{n}{X_{j}})$. Since $x\not\in F$ and $\bigcup_{j=1}^{n}{X_{j}}\in [X]^{<\omega }$, the definition of $F$ forces $r((\bigcup_{j=1}^{n}{X_{j}})+x)>r(\bigcup_{j=1}^{n}{X_{j}})$. Then $r((\bigcup_{j=1}^{n}{X_{j}})\cup Y_{0}+x)\ge r((\bigcup_{j=1}^{n}{X_{j}})+x)>r(\bigcup_{j=1}^{n}{X_{j}})=r((\bigcup_{j=1}^{n}{X_{j}})\cup Y_{0})$, so by the contrapositive of lemma \ref{lm:lemma4} (applied to $Y_{0}\subseteq (\bigcup_{j=1}^{n}{X_{j}})\cup Y_{0}$), we get $r(Y_{0}+x)>r(Y_{0})$. Since $Y_{0}$ and $x$ were arbitrary, $F$ is closed. Then by lemma \ref{lm:lemma7}, $\sigma (X)\subseteq F$, and by the first part, $\sigma (X)$ and $F$ are equal.
\end{proof}
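For a concrete instance of lemma \ref{lm:lemma8}, in a linear matroid the closure of $X$ consists exactly of the vectors that do not raise the rank. A small sketch (our own illustration, assuming a linear matroid over the rationals; for finite $X$ it suffices to test $X_{0}=X$, by lemma \ref{lm:lemma4}):

```python
# Lemma 8 in a small linear matroid (sketch, not from the paper):
# x lies in sigma(X) iff adding x to some finite X_0 of X does not raise
# the rank; for finite X we may take X_0 = X itself.
import numpy as np

def rank_of(cols):
    return np.linalg.matrix_rank(np.array(cols).T) if cols else 0

def in_closure(X, x):
    return rank_of(X + [x]) == rank_of(X)

X = [(1, 0, 0), (0, 1, 0)]           # spans the xy-plane in R^3
assert in_closure(X, (3, -2, 0))     # in the span, hence in sigma(X)
assert not in_closure(X, (0, 0, 1))  # raises the rank, hence outside
```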
\begin{lemma}\label{lm:lemma9}
If $\mathcal{M}=(S,r)$ is a finitary matroid, $B\subseteq S$, then $B\in \mathcal{B}(\mathcal{M})$ if and only if $B\in \mathcal{I}(\mathcal{M})$ and $\sigma (B)=S$.
\end{lemma}
\begin{proof}
If $B\in \mathcal{B}(\mathcal{M})$, then it is clearly independent. Suppose $\sigma (B)\neq S$ and let $x\in S-\sigma (B)$. Then for any $B_{0}\in [B]^{<\omega }$, since $B_{0}\subseteq B\subseteq \sigma (B)$, we have $r(B_{0}+x)>r(B_{0})$, so $r(B_{0}+x)=r(B_{0})+1=|B_{0}|+1=|B_{0}+x|$. Since every finite subset of $B+x$ is either of this form or a subset of $B$, by definition we have $B+x\in \mathcal{I}(\mathcal{M})$, contradicting the maximality of $B$.
Now suppose $B\in \mathcal{I}(\mathcal{M})$ and $\sigma (B)=S$. We need to show that $B$ is maximal independent. Let $x\in S-B$. Then since $x\in \sigma (B)$, by lemma \ref{lm:lemma8}, there is a subset $B_{0}\in [B]^{<\omega }$, such that $r(B_{0}+x)=r(B_{0})=|B_{0}|<|B_{0}+x|$. Since $B_{0}+x\in [B+x]^{<\omega }$, $B+x$ is not independent, so $B$ is maximal.
\end{proof}
In the next step, we define the contraction of a finitary matroid $\mathcal{M}=(S,r)$ by a subset $Z\subseteq S$. For finite matroids, this is a matroid on the set $S-Z$ whose rank function is given by $r'(A)=r(A\cup Z)-r(Z)$ for all $A\subseteq S-Z$. However, for finitary matroids we cannot define the contraction by an infinite set this way, so we need a refined definition.
\begin{definition}
Let $\mathcal{M}=(S,r)$ be a finitary matroid and $Z\subseteq S$.
The {\em contraction of $\mathcal{M}$ by $Z$ } is the pair
$\mathcal{M}'=(S\setminus Z,r')$, where for $A\in [S-Z]^{<\omega }$ we let
$$r'(A)=\min\{ r(A\cup Z_{0})-r(Z_{0}):Z_{0}\in [Z]^{<\omega }\}.$$
Since all values in this set are nonnegative integers, $r'$ is well-defined (we still need to show that this construction yields a matroid).
For $A\in [S-Z]^{<\omega }$, we say that a $Z_{0}\in [Z]^{<\omega }$ {\em fits} $A$ if
$$r'(A)=r(A\cup Z_{0})-r(Z_{0}).$$
\end{definition}
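When $Z$ is finite, $Z$ itself fits every $A$ (extend any fitting $Z_{0}$ to $Z$ by lemma \ref{lm:lemma10} a)), so $r'$ reduces to the classical formula $r'(A)=r(A\cup Z)-r(Z)$. A small sketch in a linear matroid (our own illustration, under that finiteness assumption):

```python
# Sketch of the contraction rank in a linear matroid (assumption: Z finite,
# so r'(A) = r(A union Z) - r(Z), the classical finite-matroid formula).
import numpy as np

def rank_of(cols):
    return np.linalg.matrix_rank(np.array(cols).T) if cols else 0

def contracted_rank(A, Z):
    return rank_of(list(A) + list(Z)) - rank_of(list(Z))

Z = [(0, 0, 1)]                      # contract by the z-axis direction
A = [(1, 0, 5), (0, 1, -2)]
# Modulo the z-axis these two columns remain independent:
assert contracted_rank(A, Z) == 2
# But (1, 0, 5) and (1, 0, 7) become parallel after contraction:
assert contracted_rank([(1, 0, 5), (1, 0, 7)], Z) == 1
```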
\begin{lemma}\label{lm:lemma10}
Let $\mathcal{M}=(S,r)$ be a finitary matroid, $Z\subseteq S$, and let $r'$ be the function defined by the contraction.
a) If $A\in [S-Z]^{<\omega }$ and $Z_{0},Z_{1}\in [Z]^{<\omega }$ with $Z_{0}\subseteq Z_{1}$ and $Z_{0}$ fits $A$, then $Z_{1}$ also fits $A$.
b) If $A_{1},...,A_{n}\in [S-Z]^{<\omega }$, then there is a $Z_{0}\in [Z]^{<\omega }$, that fits all of them.
\end{lemma}
\begin{proof}
a) On one hand, by the definition of $r'$, we clearly have $r(A\cup Z_{1})-r(Z_{1})\ge r'(A)$. On the other hand, writing the submodular inequality for $A\cup Z_{0}$ and $Z_{1}$ gives $r(A\cup Z_{0})+r(Z_{1})\ge r(A\cup Z_{1})+r(Z_{0})$, which can be rearranged as $r(A\cup Z_{1})-r(Z_{1})\le r(A\cup Z_{0})-r(Z_{0})=r'(A)$; so the two sides are equal and $Z_{1}$ fits $A$.
b) For each $1\le i\le n$ choose $Z_{i}\in [Z]^{<\omega }$, that fits $A_{i}$. Then by a) $\bigcup_{i=1}^{n}{Z_{i}}$ fits all.
\end{proof}
\begin{lemma}\label{lm:lemma11}
Let $\mathcal{M}=(S,r)$ be a finitary matroid, $Z\subseteq S$, and let $r'$ be the function defined by the contraction. Then $r'(A)\le r(A)$ for all $A\in [S-Z]^{<\omega }$, and $\mathcal{M'}=(S-Z,r')$ is a finitary matroid.
\end{lemma}
\begin{proof}
By substituting $Z_{0}=\emptyset $ into the definition of $r'$, we get that for $A\in [S-Z]^{<\omega }$, $r'(A)\le r(A\cup \emptyset )-r(\emptyset )=r(A)-0=r(A)$. To see that $\mathcal M'$ is a matroid, we need to verify properties 1-4 of Definition \ref{df:finitary-matroid} for $r'$.

1. This is clear from $0\le r'(\emptyset )\le r(\emptyset )=0$.

2. Let $A,B\in [S-Z]^{<\omega }$ with $A\subseteq B$ and choose a $Z_{0}\in [Z]^{<\omega }$ that fits both. Since $A\cup Z_{0}\subseteq B\cup Z_{0}$, we have $r(A\cup Z_{0})\le r(B\cup Z_{0})$, so $r'(A)=r(A\cup Z_{0})-r(Z_{0})\le r(B\cup Z_{0})-r(Z_{0})=r'(B)$.

3. This follows from $r'(A)\le r(A)\le |A|$.

4. Let $A,B\in [S-Z]^{<\omega }$ and choose a $Z_{0}\in [Z]^{<\omega }$ that fits all of $A,B,A\cap B,A\cup B$. Since $(A\cup Z_{0})\cap (B\cup Z_{0})=(A\cap B)\cup Z_{0}$ and $(A\cup Z_{0})\cup (B\cup Z_{0})=(A\cup B)\cup Z_{0}$, we have $r'(A)+r'(B)=r(A\cup Z_{0})+r(B\cup Z_{0})-2r(Z_{0})\ge r((A\cap B)\cup Z_{0})+r((A\cup B)\cup Z_{0})-2r(Z_{0})=r'(A\cap B)+r'(A\cup B).$
\end{proof}
\begin{lemma}\label{lm:lemma12}
Let $\mathcal{M}=(S,r)$ be a finitary matroid, $Z\subseteq S$, and $\mathcal{M'}=(S-Z,r')$ be the contraction matroid. Then for $X\subseteq S-Z$, we have $X\in \mathcal{I}(\mathcal{M'})$ if and only if for all $Y\in \mathcal{I}(\mathcal{M})\cap \mathcal{P}(Z)$, $X\cup Y\in \mathcal{I}(\mathcal{M})$.
\end{lemma}
\begin{proof}
First suppose $X\in \mathcal{I}(\mathcal{M}')$ and let $Y\in \mathcal{I}(\mathcal{M})$ with $Y\subseteq Z$. We need to show that $X\cup Y$ is independent. Let $X_{0}\cup Y_{0}\in [X\cup Y]^{<\omega }$, where $X_{0}\subseteq X$ and $Y_{0}\subseteq Y$. Since $Y$ is independent, we have $r(Y_{0})=|Y_{0}|$, and since $X$ is independent in $\mathcal{M}'$, we have $r(X_{0}\cup Y_{0})-|Y_{0}|=r(X_{0}\cup Y_{0})-r(Y_{0})\ge r'(X_{0})=|X_{0}|$, so $r(X_{0}\cup Y_{0})\ge |X_{0}\cup Y_{0}|$, hence they are equal. Since all finite subsets of $X\cup Y$ are of this form, $X\cup Y$ is independent.
Now suppose that $X$ has this property and let $X_{0}\in [X]^{<\omega }$ be arbitrary. Choose a $Z_{0}\in [Z]^{<\omega }$ that fits $X_{0}$, and let $Y\subseteq Z_{0}$ be a maximal independent subset of $Z_{0}$, so that by lemma \ref{lm:lemma1} we have $|Y|=r(Z_{0})$. Since $Y$ is independent, by our assumption $X\cup Y$ is also independent, so $r(X_{0}\cup Y)=|X_{0}\cup Y|=|X_{0}|+|Y|=|X_{0}|+r(Z_{0})$. But then $r'(X_{0})=r(X_{0}\cup Z_{0})-r(Z_{0})\ge r(X_{0}\cup Y)-r(Z_{0})=|X_{0}|+r(Z_{0})-r(Z_{0})=|X_{0}|$. Since $X_{0}$ was arbitrary, we have $X\in \mathcal{I}(\mathcal{M'})$.
\end{proof}
\begin{lemma}\label{lm:lemma13}
Let $\mathcal{M}=(S,r)$ be a finitary matroid, $Z\subseteq S$. Then the contraction matroid $\mathcal{M'}=(S-Z,r')$ is loop-free if and only if $Z\subseteq S$ is closed ($\mathcal{M}$ does not need to be loop-free).
\end{lemma}
\begin{proof}
First suppose $Z\subseteq S$ is closed. Then for all $x\in S-Z$ and $Z_{0}\in [Z]^{<\omega }$, we have $r(Z_{0}+x)=r(Z_{0})+1$, so $r(\{ x\}\cup Z_{0})-r(Z_{0})=1$. Then clearly, by the definition, $r'(\{x\})=1$, so $\mathcal{M}'$ is loop-free.
Now suppose that $\mathcal{M}'$ is loop-free. Let $Z_{0}\in [Z]^{<\omega }$ and $x\in S-Z$. Then $r(Z_{0}+x)-r(Z_{0})=r(\{x\} \cup Z_{0})-r(Z_{0})\ge r'(\{x\})=1$, so $r(Z_{0}+x)>r(Z_{0})$. Since $Z_{0}$ and $x$ were arbitrary, we have that $Z$ is closed.
\end{proof}
Now we define the proper colorings of finitary matroids, the chromatic and the list chromatic numbers.
\begin{definition}
Let $\mathcal{M}=(S,r)$ be a finitary matroid and let $\mathcal{K}$ be any color set. A
{\em proper coloring} of $\mathcal{M}$ by $\mathcal{K}$ is a function $\Phi :S\rightarrow \mathcal{K}$, such that for all $i\in \mathcal{K}$, we have $\Phi ^{-1}(i)\in \mathcal{I}(\mathcal{M})$.
\end{definition}
It is clear that $\Phi $ is a proper coloring if and only if there is no $C\in \mathcal{C}(\mathcal{M})$ such that $C\subseteq \Phi ^{-1}(i)$ for some $i\in \mathcal{K}$.
First we need to see which finitary matroids have a proper coloring at all.
\begin{lemma}\label{lm:lemma14}
A finitary matroid $\mathcal{M}=(S,r)$ has a proper coloring into some color set if and only if it is loop-free.
\end{lemma}
\begin{proof}
If $\mathcal{M}$ is loop-free, let $\mathcal{K}=S$ and put $\Phi (x)=x$ for all $x\in S$. Then clearly $\Phi ^{-1}(x)=\{x\}\in \mathcal{I}(\mathcal{M})$ for all $x$. If $\mathcal{M}$ is not loop-free, there is an $x\in S$ such that $\{x\}$ is not independent. But then for any $\mathcal{K}$ and $\Phi :S\rightarrow \mathcal{K}$, we have $\Phi ^{-1}(\Phi (x))\supseteq \{x\}\not\in \mathcal{I}(\mathcal{M})$, so $\mathcal{M}$ has no proper coloring.
\end{proof}
For loop-free finitary matroids, we can now define the chromatic number.
\begin{definition}
A loop-free finitary matroid $\mathcal{M}=(S,r)$ is {\em $\kappa $-colorable} for a given (finite or infinite) cardinal $\kappa $ if there is a proper coloring $\Phi :S\rightarrow \kappa $ of $\mathcal{M}$. The \emph{chromatic number of $\mathcal{M}$}, denoted by $Chr(\mathcal{M})$, is the smallest cardinal $\kappa$, such that $\mathcal{M}$ is $\kappa $-colorable.
\end{definition}
By an argument similar to the proof of lemma \ref{lm:lemma14}, we see that for every loop-free finitary matroid $\mathcal{M}=(S,r)$, $Chr(\mathcal{M})$ exists and $Chr(\mathcal{M})\le |S|$.
\begin{definition}
For a loop-free, finitary matroid $\mathcal{M}=(S,r)$ and a cardinal $\kappa $, a
{\em $\kappa $-listing} is a function $L$ on $S$ such that for all $x\in S$, we have $|L(x)|\ge \kappa $. An {\em $L$-coloring} of $\mathcal{M}$ is a function $\Phi :S\rightarrow \bigcup_{x\in S}{L(x)}$ such that $\Phi $ is a proper coloring of $\mathcal{M}$ and for all $x\in S$, we have $\Phi (x)\in L(x)$. $\mathcal{M}$ is {\em $\kappa $-list-colorable} if it admits an $L$-coloring for every $\kappa $-listing $L$. The {\em list chromatic number}, denoted by $List(\mathcal{M})$, is the smallest cardinal $\kappa $ such that $\mathcal{M}$ is $\kappa $-list-colorable.
\end{definition}
If $\mathcal{M}$ is $\kappa $-list-colorable, then it is also $\kappa $-colorable: taking $L(x)=\kappa $ for all $x\in S$, the $L$-colorings are exactly the $\kappa $-colorings. Hence, if $List(\mathcal{M})$ exists, then $Chr(\mathcal{M})\le List(\mathcal{M})$.
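For graphic (cycle) matroids, a proper coloring partitions the edge set into forests, so $Chr$ is the arboricity of the underlying graph, which is the setting of the title. As a brute-force illustration (our own; the graph $K_4$ and all function names are assumptions, not from the paper), one can compute the chromatic number of the cycle matroid of $K_4$:

```python
from itertools import product

# Edges of K_4; a set of edges is independent in the cycle matroid iff it is a forest.
V = range(4)
E = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]

def is_forest(edges):
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False  # adding (u,w) closes a cycle, i.e. a circuit of the matroid
        parent[ru] = rw
    return True

def chromatic_number(k_max=4):
    """Smallest k such that the edges of K_4 partition into k forests."""
    for k in range(1, k_max + 1):
        for phi in product(range(k), repeat=len(E)):
            classes = [[e for e, c in zip(E, phi) if c == i] for i in range(k)]
            if all(is_forest(cl) for cl in classes):
                return k
    return None

print(chromatic_number())  # -> 2: K_4 splits into two spanning trees but is not a forest
```

Nash-Williams' arboricity formula gives the same value for $K_4$: $\lceil 6/3\rceil =2$.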
\begin{lemma}\label{lm:lemma15}
For every loop-free finitary matroid $\mathcal{M}=(S,r)$, $List(\mathcal{M})$ exists and $List(\mathcal{M})\le |S|$.
\end{lemma}
\begin{proof}
Let $\prec $ be a well-ordering of $S$ in order type $|S|$ and let $L$ be any $|S|$-listing. Then by transfinite recursion, for all $x\in S$, since $|L(x)|\ge |S|>|\{\Phi (y):y\prec x\}|$, we can choose $\Phi (x)\in L(x)$ such that $\Phi (x)\neq \Phi (y)$ for all $y\prec x$. Then all values of $\Phi $ are distinct, so $\Phi $ is clearly a proper $L$-coloring.
\end{proof}
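A finite analogue of this greedy argument can be sketched in Python (our own illustration, with made-up names): with lists of size at least $|S|$, a pass along any fixed ordering can always pick a color not used before, and an injective coloring of a loop-free matroid is automatically proper, since its color classes are singletons.

```python
def greedy_list_coloring(S_ordered, L):
    """Finite analogue of the lemma: with |L(x)| >= |S|, a greedy pass along a
    fixed ordering always finds a fresh color, and the resulting injective
    coloring is proper (singletons are independent in a loop-free matroid)."""
    phi, used = {}, set()
    for x in S_ordered:
        # |L(x)| >= |S| > |used|, so an unused color always exists
        phi[x] = next(c for c in L[x] if c not in used)
        used.add(phi[x])
    return phi

S_ordered = [0, 1, 2]
L = {0: ['r', 'g', 'b'], 1: ['r', 'g', 'b'], 2: ['r', 'b', 'y']}
phi = greedy_list_coloring(S_ordered, L)
assert len(set(phi.values())) == 3 and all(phi[x] in L[x] for x in S_ordered)
```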
Then we have that for every loop-free finitary matroid, $Chr(\mathcal{M})\le List(\mathcal{M})\le |S|$. Seymour's famous list-coloring theorem \cite[Theorem 2]{Se98} states that $Chr(\mathcal{M})=List(\mathcal{M})$ holds for every finite matroid. In this paper, we generalize this result to finitary matroids. In Section \ref{sc:fin}, we consider the easy case when $Chr(\mathcal{M})$ is finite, and in Section \ref{sc:inf} we investigate the case when $Chr(\mathcal{M})$ is an infinite cardinal.
\section{The finite chromatic number case}\label{sc:fin}
\begin{theorem}\label{tm:theorem1}
Let $\mathcal{M}=(S,r)$ be a loop-free finitary matroid and $k\in \omega $. Then the following statements are equivalent:
(1) $Chr(\mathcal{M})\le k$
(2) $List(\mathcal{M})\le k$
\end{theorem}
\begin{proof}
$(2)\Rightarrow (1)$ is clear.
$(1)\Rightarrow (2)$
Let $\Phi :S\rightarrow k$ be a $k$-coloring of $\mathcal{M}$. Then clearly for all $A\in [S]^{<\omega }$, $\Phi |_{A}$ is a $k$-coloring of $\mathcal{M}_{A}$, so $Chr(\mathcal{M}_{A})\le k$. Since $\mathcal{M}_{A}$ is a finite matroid, by Seymour's list coloring theorem, we have that $List(\mathcal{M}_{A})\le k$ for all $A\in [S]^{<\omega }$.
Let $L$ be an arbitrary $k$-listing; we construct an $L$-coloring of $\mathcal{M}$. We may suppose that $|L(x)|=k$ for all $x\in S$: otherwise, choose an arbitrary $L'(x)\subseteq L(x)$ with $|L'(x)|=k$ for all $x\in S$, and note that the constructed $L'$-coloring is also an $L$-coloring. For all $A\in [S]^{<\omega }$ choose an $L$-coloring $\Phi _{A}$ of $\mathcal{M}_{A}$.
For any $A\in [S]^{<\omega }$, let $T_{A}=\{ X\in [S]^{<\omega }:A\subseteq X\}$, and let us write $T_{x}=T_{\{x\}}$. Clearly for any $A$, we have $A\in T_{A}$, and for $A_{1},...,A_{n}\in [S]^{<\omega }$, we have $T_{A_{1}}\cap ...\cap T_{A_{n}}=T_{A_{1}\cup ...\cup A_{n}}$. Since $\mathscr{T}=\{T_{A}:A\in [S]^{<\omega }\}\subseteq \mathcal{P}([S]^{<\omega })$ is closed under finite intersections and $\emptyset \not\in \mathscr{T}$, there is an ultrafilter $\mathscr{U}\subseteq \mathcal{P}([S]^{<\omega })$ such that $\mathscr{T}\subseteq \mathscr{U}$.
Let $x\in S$ be arbitrary. Then for each $A\in T_{x}$, since $x\in A$, the coloring $\Phi _{A}$ is defined at $x$ and $\Phi _{A}(x)\in L(x)$. For each $i\in L(x)$, let $T_{x,i}=\{A\in T_{x}\ |\ \Phi _{A}(x)=i\}$. The finite union $\bigcup_{i\in L(x)}{T_{x,i}}=T_{x}\in \mathscr{T}\subseteq \mathscr{U}$, and the sets $T_{x,i}$ are pairwise disjoint, so there is a unique $i\in L(x)$ with $T_{x,i}\in \mathscr{U}$. Define the coloring $\Phi _{\mathscr{U}}:S\rightarrow \bigcup_{x\in S}{L(x)}$ so that for all $x\in S$, $\Phi _{\mathscr{U}}(x)\in L(x)$ is the unique element with $T_{x,\Phi _{\mathscr{U}}(x)}\in \mathscr{U}$.
We need to show that $\Phi _{\mathscr{U}}$ is a proper coloring. Suppose for contradiction that there is a $C\in \mathcal{C}(\mathcal{M})$ and some $i$ such that $C\subseteq \Phi _{\mathscr{U}}^{-1}(i)$. Then for all $x\in C$, we have $i\in L(x)$ and $T_{x,i}\in \mathscr{U}$. Then the finite intersection $\bigcap_{x\in C}{T_{x,i}}\in \mathscr{U}$, so $\bigcap_{x\in C}{T_{x,i}}\neq \emptyset $. Let $A\in \bigcap_{x\in C}{T_{x,i}}$. Then for all $x\in C$, $A\in T_{x,i}$, so $x\in A$ and $\Phi _{A}(x)=i$. But then $C\subseteq \Phi _{A}^{-1}(i)$, which is a contradiction, since $\Phi _{A}$ is a proper coloring of $\mathcal{M}_{A}$. Thus, $\Phi _{\mathscr{U}}$ is an $L$-coloring. Hence, $\mathcal{M}$ is $k$-list-colorable, so $List(\mathcal{M})\le k$.
\end{proof}
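On a finite matroid, the conclusion of the theorem can be verified exhaustively. The following sketch (our own; the matroid $U_{2,4}$ and the four-color palette are assumptions, and checking a single palette is a spot check rather than a proof) confirms that every $2$-listing of $U_{2,4}$ admits an $L$-coloring, consistent with $Chr(U_{2,4})=2$:

```python
from itertools import combinations, product

S = [0, 1, 2, 3]

def independent(A):
    return len(A) <= 2   # uniform matroid U_{2,4}

def has_L_coloring(L):
    """Search for a proper coloring picking each color from its own list."""
    for choice in product(*(L[x] for x in S)):
        classes = {}
        for x, c in zip(S, choice):
            classes.setdefault(c, []).append(x)
        if all(independent(cl) for cl in classes.values()):
            return True
    return False

palette = 'abcd'
# Chr(U_{2,4}) = 2; check every 2-listing drawn from this 4-color palette.
assert all(has_L_coloring(dict(zip(S, lists)))
           for lists in product(combinations(palette, 2), repeat=4))
```

The outer loop ranges over all $\binom{4}{2}^{4}=1296$ listings, so the check is instantaneous.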
\section{The infinite chromatic number case}\label{sc:inf}
In this final section, we show that $Chr(\mathcal{M})=List(\mathcal{M})$ also holds when it is an infinite cardinal. For this, we first need a definition.
\begin{definition}\label{df:mb}
If $\mathcal{M}=(S,r)$ is a loop-free finitary matroid,
we say that $(B,\le )$ is a {\em well-ordered base} of $\mathcal{M}$
if $B$ is a base of $\mathcal M$ and $\le $ is a well-order on $B$.
Given a well-ordered base $(B,\le )$,
we define the function $M_{B}:S\rightarrow B$ in the following way:
\begin{displaymath}
{M_B(x)}=\left\{\begin{array}{ll}
{x}&\text{if $x\in B$,}\\
{\max_{\le}(C(B,x)\cap B)}&\text{if $x\notin B$,}\\
\end{array}\right.
\end{displaymath}
where $C(B,x)$ denotes the main circuit of $x$ in $B$, see Definition \ref{df:maincircle}.
\end{definition}
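As a toy illustration of $M_{B}$ (our own, with the main circuit hard-coded rather than computed): in the cycle matroid of a triangle with edges $e_1,e_2,e_3$, the set $B=\{e_1,e_2\}$ is a base, the main circuit of $e_3$ is $\{e_1,e_2,e_3\}$, and so $M_B(e_3)=\max_{\le }\{e_1,e_2\}=e_2$ under the order $e_1\le e_2$.

```python
# Toy model of Definition (well-ordered base and M_B) on the cycle matroid of a
# triangle with edges e1=(0,1), e2=(1,2), e3=(0,2). B = {e1, e2} is a base,
# well-ordered by list position; the main circuit of e3 is {e1, e2, e3}.
B = ['e1', 'e2']                       # well-ordered base, list order = <=
main_circuit = {'e3': ['e1', 'e2', 'e3']}  # hard-coded for this toy example

def M_B(x):
    if x in B:
        return x
    # max of C(B,x) ∩ B with respect to the well-order on B
    return max((b for b in main_circuit[x] if b in B), key=B.index)

assert M_B('e1') == 'e1'
assert M_B('e3') == 'e2'
```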
\begin{theorem}\label{tm:theorem2}
Let $\mathcal{M}=(S,r)$ be a loop-free finitary matroid, $\kappa $ be an infinite cardinal. Then the following statements are equivalent:
(1) $Chr(\mathcal{M})\le \kappa $
(2) $List(\mathcal{M})\le \kappa $
(3) There is a well-ordered base $(B,\le )$ of $\mathcal{M}$, such that for all $b\in B$, $$|\{ x\in S\ |\ M_{B}(x)=b \}|\le \kappa .$$
\end{theorem}
\begin{proof}
$(2)\Rightarrow (1)$ is clear.
Before proving $(3)\Rightarrow (2)$ we need some preparation.
\begin{lemma}\label{lm:lemma16}
If $\mathcal{M}=(S,r)$ is a finitary matroid, $Z\subseteq S$ is closed, and $C\in \mathcal{C}(\mathcal{M})$ is such that there is an $x\in C$ with $C-x\subseteq Z$, then $C\subseteq Z$.
\end{lemma}
\begin{proof}
Suppose for contradiction that $x\not\in Z$. Then $C-x\in [Z]^{<\omega }$ and $r((C-x)+x)=r(C)=|C|-1=r(C-x)$, contradicting that $Z$ is closed.
\end{proof}
\begin{lemma}\label{lm:lemma17}
Let $\mathcal{M}=(S,r)$ be a loop-free finitary matroid, let $(B,\le )$ be a well-ordered base, and let $C\in \mathcal{C}(\mathcal{M})$. Then there are $x,y\in C$ with $x\neq y$ and $M_{B}(x)=M_{B}(y)$.
\end{lemma}
\begin{proof}
Let $B^{*}=(C\cap B)\cup (\bigcup _{x\in C-B}{C(B,x)\cap B})\subseteq B$. This is a finite subset of $B$. Let $b=\max_{\le }(B^{*})$ and list the elements of $C$ as $C=\{x_{1},...,x_{n}\}$. Clearly $M_{B}(x)=b$ holds for at least one element of $C$: either $b\in C$, or $b\in C(B,x)\cap B$ for some $x\in C-B$, and $b$ is the maximal element of the whole $B^{*}$. We will show that $M_{B}(x)=b$ holds for at least two elements of $C$. Suppose for contradiction that this is not true. By symmetry, we may suppose that $M_{B}(x_{n})=b$ and $M_{B}(x_{j})\neq b$ for all $1\le j\le n-1$. Then for $1\le j\le n-1$: if $x_{j}\in B$, then $x_{j}\in B^{*}-b$; if $x_{j}\not\in B$, then $C(B,x_{j})\cap B=C(B,x_{j})-x_{j}\subseteq B^{*}-b$, since if $b$ were in $C(B,x_{j})$, it would be its maximal element and we would have $M_{B}(x_{j})=b$. Thus, by lemma \ref{lm:lemma16}, we have that $x_{1},...,x_{n-1}\in \sigma (B^{*}-b)$. Applying lemma \ref{lm:lemma16} now to $C$, we also get $x_{n}\in \sigma (B^{*}-b)$. If $x_{n}\in B$, then $x_{n}=M_{B}(x_{n})=b$ would mean that $b\in \sigma (B^{*}-b)$. If $x_{n}\not\in B$, then we have $C(B,x_{n})-x_{n}-b\subseteq B^{*}-b$ and $x_{n}\in \sigma (B^{*}-b)$, so $C(B,x_{n})-b\subseteq \sigma (B^{*}-b)$; using lemma \ref{lm:lemma16} again, we also get that $b\in \sigma (B^{*}-b)$. But since $B^{*}$ is independent, for all $B_{0}\in [B^{*}-b]^{<\omega }$, we have $r(B_{0}+b)=|B_{0}+b|=r(B_{0})+1>r(B_{0})$, contradicting lemma \ref{lm:lemma8}. Thus, $M_{B}(x)=b$ holds for at least two $x\in C$.
\end{proof}
$(3)\Rightarrow (2)$
Suppose that (3) holds, and let $(B,\le )$ be a well-ordered base of $\mathcal{M}$ such that for all $b\in B$, $|S_{b}|=|\{x\in S\ |\ M_{B}(x)=b\}|\le \kappa $. We need to show that $\mathcal{M}$ is $\kappa $-list-colorable, so let $L$ be an arbitrary $\kappa $-listing. First, we show that for each $b\in B$ we can choose a function $\Phi _{b}$ on $S_{b}$ such that $\Phi _{b}$ is one-to-one and $\Phi _{b}(x)\in L(x)$ for all $x\in S_{b}$. Let $\prec _{b}$ be a well-ordering of $S_{b}$ in order type $\le \kappa $ and define $\Phi _{b}$ by transfinite recursion: for $x\in S_{b}$, since $|L(x)|\ge \kappa $ and $|\{\Phi_{b}(y):y\prec _{b}x\}|<\kappa $, we can choose $\Phi _{b}(x)\in L(x)$ such that $\Phi _{b}(x)\neq \Phi _{b}(y)$ for all $y\prec _{b}x$. The $\Phi _{b}$ defined this way is clearly one-to-one. Let $\Phi =\bigcup_{b\in B}{\Phi _{b}}$. Then clearly for all $x\in S$, we have $\Phi (x)=\Phi_{M_{B}(x)}(x)\in L(x)$. We need to show that this is a proper coloring. Suppose for contradiction that there are some $C\in \mathcal{C}(\mathcal{M})$ and $i\in \bigcup_{x\in S}{L(x)}$ such that $C\subseteq \Phi^{-1}(i)$. Then by lemma \ref{lm:lemma17}, there are $x,y\in C$ with $x\neq y$ and $M_{B}(x)=M_{B}(y)=b\in B$. Then $x,y\in S_{b}$, so $\Phi (x)=\Phi _{b}(x)\neq \Phi _{b}(y)=\Phi (y)$, since $\Phi _{b}$ is one-to-one. This contradicts $\Phi (x)=i=\Phi (y)$, so $\Phi $ is a proper coloring.
Before proving $(1)\Rightarrow (3)$ we need to prove some lemmas.
\begin{lemma}\label{lm:lemma18}
Let $\mathcal{M}=(S,r)$ be a finitary matroid, $A\in [S]^{<\omega }$, with $|A|=n$, and $x_{1},...,x_{n+1}\in S-A$ be such that $r(A+x_{i})=r(A)$ for all $1\le i\le n+1$. Then the set $\{x_{1},...,x_{n+1}\}$ is not independent.
\end{lemma}
\begin{proof}
By lemma \ref{lm:lemma5}, we have that $r(A\cup \{x_{1},...,x_{n+1}\})=r(A)\le |A|=n$, so by monotonicity $r(\{x_{1},...,x_{n+1}\})\le r(A\cup \{x_{1},...,x_{n+1}\})<n+1=|\{x_{1},...,x_{n+1}\}|$, so the set is not independent.
\end{proof}
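A quick numeric illustration of the lemma (our own, in the uniform matroid $U_{2,6}$, where $r(A)=\min(|A|,2)$): take $A$ of size $n=2$ with $r(A)=2$; any $n+1=3$ elements outside $A$ leave the rank unchanged, and indeed they form a dependent set.

```python
# Illustration of the lemma in U_{2,6}: ground set {0,...,5}, r(A) = min(|A|, 2).
def r(A):
    return min(len(A), 2)

A = {0, 1}                 # |A| = n = 2, and r(A) = 2
extras = [2, 3, 4]         # n + 1 = 3 elements outside A
assert all(r(A | {x}) == r(A) for x in extras)   # no extra element raises the rank
assert r(set(extras)) < len(extras)              # so {x_1,...,x_{n+1}} is dependent
```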
\begin{lemma}\label{lm:lemma19}
Let $\mathcal{M}=(S,r)$ be a loop-free finitary matroid and $\kappa $ an infinite cardinal with $Chr(\mathcal{M})\le \kappa $. Then for all $A\in [S]^{<\omega }$, we have
$$|\{x\in S\setminus A\ |\ r(A+x)=r(A) \}|\le \kappa .$$
\end{lemma}
\begin{proof}
Let $\Phi :S\rightarrow \kappa $ be a $\kappa $-coloring of $\mathcal{M}$, and for all $\alpha <\kappa $ let $A_{\alpha }=\{ x\in S-A\ |\ r(A+x)=r(A),\Phi (x)=\alpha \}$. Let $n=|A|$. First we show that $|A_{\alpha }|\le n$ for all $\alpha $. Suppose for contradiction that for some $\alpha $, we have $|A_{\alpha }|\ge n+1$, and let $x_{1},...,x_{n+1}\in A_{\alpha}$. Then by lemma \ref{lm:lemma18}, we have that $\{x_{1},...,x_{n+1}\}\not\in \mathcal{I}(\mathcal{M})$ and $\{x_{1},...,x_{n+1}\}\subseteq A_{\alpha}\subseteq \Phi^{-1}(\alpha )$, contradicting that $\Phi $ is a proper coloring. Then we have $|\{x\in S-A\ |\ r(A+x)=r(A) \}|=|\bigcup_{\alpha <\kappa }{A_{\alpha }}|\le \kappa \cdot n=\kappa $.
\end{proof}
Thus, by Lemma \ref{lm:lemma19}, for every
loop-free finitary matroid $\mathcal M$ with $Chr(\mathcal{M})\le \kappa $
we can fix a {\em bookkeeping function} $h$, i.e.,
a function $h:[S]^{<\omega }\times \kappa \rightarrow S$ such that for all $A\in [S]^{<\omega }$
\begin{displaymath}
\{x\in S: r(A+x)=r(A)\}\subset \{h(A,{\alpha}):{\alpha}<{\kappa}\}.
\end{displaymath}
The last ingredient we need for this proof is a model-theoretic tool: elementary submodels. For an infinite cardinal $\theta $, let $H(\theta )=\{ x:|TC(x)|<\theta \}$. Here $TC$ denotes the transitive closure, so $TC(x)=\bigcup_{n\in \omega }{U_{n}(x)}$, where $U_{0}(x)=x$ and $U_{n+1}(x)=\cup U_{n}(x)$ for any set $x$. If $\theta $ is a regular cardinal, then $H(\theta )$ is a set and all ZFC axioms except the Power Set Axiom hold in the model $H(\theta )$. Moreover, if $x\in H(\theta )$ and $2^{|x|}<\theta $, then we also have $\mathcal{P}(x)\in H(\theta )$. Since every set belongs to some $H(\theta )$, in practice we may simply say that $\theta $ is sufficiently large, meaning that all sets defined in the proof are in $H(\theta )$. A set $M$ is an elementary submodel of $H(\theta )$ if for all set-theoretic formulas $\phi $ and $x_{1},...,x_{n}\in M$ (where $n$ is the number of free variables of $\phi $), we have $M\vDash \phi (x_{1},...,x_{n})\Leftrightarrow H(\theta )\vDash\phi (x_{1},...,x_{n})$. By the L\"{o}wenheim--Skolem theorem, for all $R\subseteq H(\theta )$, there is an elementary submodel $M$ such that $R\subseteq M$ and $|M|=\max(|R|,\omega )$.
$(1)\Rightarrow (3)$
Let $\kappa $ be fixed. We define the statement $Q(\lambda )$ for each infinite cardinal $\lambda $ in the following way:
\begin{enumerate}[($Q({\lambda})$)]
\item If $\mathcal{M}=(S,r)$ is a loop-free finitary matroid with $Chr(\mathcal{M})\le \kappa $ and $|S|=\lambda $, then there is a well-ordered base $(B,\le )$ of $\mathcal{M}$ such that for all $b\in B$, we have $|\{x\in S\ |\ M_{B}(x)=b\}|\le \kappa $.
\end{enumerate}
The implication $(1)\Rightarrow (3)$ is exactly the statement that $Q(\lambda )$ holds for all cardinals $\lambda $. We prove it by induction on $\lambda $.
If $\lambda \le \kappa $, then $Q(\lambda )$ clearly holds, as any well-ordered base $(B,\le )$ works. Now suppose that $\lambda >\kappa $ and that $Q(\mu )$ holds for all $\mu <\lambda $. We need to prove that $Q(\lambda )$ also holds.
Let $\mathcal{M}=(S,r)$ be a loop-free finitary matroid with $|S|=\lambda $ and $Chr(\mathcal{M})\le \kappa $. Fix a proper coloring $\Phi :S\rightarrow \kappa $ and a bookkeeping function $h:[S]^{<\omega }\times \kappa \rightarrow S$. Let $(S_{\alpha})_{\alpha <cf(\lambda)}$ be such that $S_{\alpha }\subseteq S$, $|S_{\alpha} |<\lambda$ for all $\alpha <cf(\lambda )$ and $\bigcup _{\alpha <cf(\lambda )}{S_{\alpha }}=S$. Let $\theta $ be a sufficiently large regular cardinal. We construct an increasing sequence of elementary submodels $M_{\alpha }\subseteq H(\theta )$ for $\alpha <cf(\lambda )$. Let $M_{0}\subseteq H(\theta )$ be an elementary submodel such that $\kappa \cup \{\mathcal{M},\Phi ,h\}\subseteq M_{0}$ and $|M_{0}|=\kappa $. For $\alpha <cf(\lambda )$, let $M_{\alpha +1}\subseteq H(\theta )$ be an elementary submodel with $M_{\alpha }\cup S_{\alpha }\subseteq M_{\alpha +1}$ and $|M_{\alpha +1}|=|M_{\alpha }\cup S_{\alpha }|$. If $\alpha <cf(\lambda )$ is a limit ordinal, then let $M_{\alpha }=\bigcup_{\beta <\alpha }{M_{\beta}}$.
We also have that $|M_{\alpha }|<\lambda $ for all $\alpha <cf(\lambda )$. For $M_{0}$, we have $|M_{0}|=\kappa <\lambda $; for successor ordinals, since $|M_{\alpha }|<\lambda $ and $|S_{\alpha }|<\lambda $, we have by definition that $|M_{\alpha +1}|<\lambda $; and for limit ordinals, $M_{\alpha }$ is a union of fewer than $cf(\lambda )$ sets, each of size $<\lambda $, so $|M_{\alpha }|<\lambda $ also holds. For all $\alpha <cf(\lambda )$, let $Z_{\alpha }=M_{\alpha }\cap S$.
\begin{lemma}\label{lm:lemma20}
For all $\alpha <cf(\lambda )$, the set $Z_{\alpha }\subseteq S$ is closed.
\end{lemma}
\begin{proof}
Suppose for contradiction that $Z_{\alpha }$ is not closed, and let $A\in [Z_{\alpha}]^{<\omega }$ and $x\in S-Z_{\alpha }$ be such that $r(A+x)=r(A)$. Since $A\subseteq Z_{\alpha }\subseteq M_{\alpha}$ is a finite subset of an elementary submodel, we have $A\in M_{\alpha }$. Since $h$ is a bookkeeping function, there is some $\gamma <\kappa $ such that $h(A,\gamma )=x$. Clearly, we also have $h,\gamma \in M_{0}\subseteq M_{\alpha }$, so $x=h(A,\gamma )\in M_{\alpha }$. Thus, $x\in Z_{\alpha }$, which is a contradiction.
\end{proof}
Now since for all $\alpha <cf(\lambda )$, we have $S_{\alpha }\subseteq Z_{\alpha +1}$ and $S=\bigcup _{\alpha <cf(\lambda )}{S_{\alpha }}$, for all $x\in S$ there is some $\alpha <cf(\lambda )$ such that $x\in Z_{\alpha }$. Let us define the rank function $\rho :S\rightarrow cf(\lambda )$ by $\rho (x)=\min\{\alpha :x\in Z_{\alpha}\}$. Moreover, $\rho (x)$ must be a successor ordinal, as for limit ordinals $Z_{\alpha }$ is just the union of the previous ones.
We construct an increasing chain $(B_{\alpha },\le _{\alpha })_{\alpha <cf(\lambda )}$ of well-ordered independent sets in $\mathcal{M}$, such that
\begin{enumerate}[(i)]
\item $B_{\alpha }\in \mathcal{B}(\mathcal{M}_{Z_{\alpha }})$,
\item
$(B_{\alpha},\le_{\alpha})$ is an initial segment of $(B_{\beta},\le_{\beta})$ for
${\alpha}<{\beta}<cf({\lambda})$,
\item
$|\{x\in Z_{\alpha }\ |\ M_{B_{\alpha }}(x)=b\}|\le \kappa $ for each ${\alpha}<cf({\lambda})$ and $b\in B_{\alpha }$.
\end{enumerate}
We construct the sets $B_{\alpha }$ by transfinite recursion. First, since $Q(\kappa )$ clearly holds and $|Z_{0}|\le |M_{0}|=\kappa $, we can construct $(B_{0},\le _{0})$.
Now suppose that $\alpha $ is a limit ordinal and that $(B_{\beta },\le _{\beta })$ is already constructed for all $\beta <\alpha $. Then clearly $Z_{\alpha }=M_{\alpha }\cap S=(\bigcup_{\beta <\alpha }{M_{\beta}})\cap S=\bigcup_{\beta <\alpha }{(M_{\beta}\cap S)}=\bigcup_{\beta <\alpha }{Z_{\beta }}$. Let $B_{\alpha }=\bigcup _{\beta <\alpha }{B_{\beta }}$
and $\le_{\alpha }=\bigcup _{\beta <\alpha }{\le_{\beta }}$.
We need to show that $B_{\alpha }\in \mathcal{B}(\mathcal{M}_{Z_{\alpha }})$. First we show that $B_{\alpha }$ is independent. Let $B'\in [B_{\alpha }]^{<\omega }$, say $B'=\{x_{1},...,x_{n}\}$. Since for all $1\le i\le n$, we have $x_{i}\in B_{\alpha }=\bigcup _{\beta <\alpha }{B_{\beta }}$, there is some $\beta _{i}<\alpha $ such that $x_{i}\in B_{\beta _{i}}$. Let $\beta =\max_{1\le i\le n}(\beta _{i})$. Then for all $i$, $x_{i}\in B_{\beta _{i}}\subseteq B_{\beta }$, so $B'\subseteq B_{\beta }$. Since $B_{\beta }$ is independent, we have $r(B')=|B'|=n$. Since $B'$ was an arbitrary finite subset, we have $B_{\alpha }\in \mathcal{I}(\mathcal{M}_{Z_{\alpha }})$. We also show that $B_{\alpha }$ is maximal. Let $x\in Z_{\alpha }-B_{\alpha }$ be arbitrary. Then for $\beta =\rho (x)<\alpha $, since $x\in Z _{\beta }$, the set $B_{\beta }+x$ is not independent, and hence neither is $B_{\alpha }+x$. Hence, $B_{\alpha }$ is maximal independent in $Z_{\alpha }$, so it is a base.
By (ii), $\le_{\alpha}$ is a well-ordering of $B_{\alpha}$,
and $(B_{\beta},\le_{\beta})$ is an initial segment of $(B_{\alpha},\le_{\alpha})$.
For (iii), let $b\in B_{\alpha }$ be arbitrary, and let
$\beta =\min\{{\gamma}<{\alpha}:b\in Z_{\gamma}\}<\alpha $;
note that $b\in B_{\beta }$, since $B_{\beta }+b\subseteq B_{\alpha }$ is independent and $B_{\beta }$ is a base of $\mathcal{M}_{Z_{\beta }}$. Now, for any $x\in Z_{\alpha }$ with $M_{B_{\alpha }}(x)=b\in B_{\beta }$, we have either $x=b$, or $x\not \in B_{\alpha }$ and $C(B_{\alpha },x)-x=C(B_{\alpha },x)\cap B_{\alpha }\subseteq B_{\beta}\subseteq Z_{\beta }$, as $B_{\beta }$ is an initial segment. By lemma \ref{lm:lemma20}, $Z_{\beta }$ is closed, so applying lemma \ref{lm:lemma16}, we get that $x\in Z_{\beta }$. We also have that $M_{B_{\alpha }}(x)=M_{B_{\beta }}(x)$: if $x\in B_{\beta }\subseteq B_{\alpha }$, this is clear; otherwise $x\not\in B_{\beta }$, so $C(B_{\beta },x)\subseteq B_{\beta }+x\subseteq B_{\alpha }+x$ is a circuit, so by lemma \ref{lm:lemma3}, we have $C(B_{\alpha },x)=C(B_{\beta },x)$, and thus $M_{B_{\alpha }}(x)=M_{B_{\beta }}(x)$. Hence, $|\{x\in Z_{\alpha }\ |\ M_{B_{\alpha }}(x)=b\}|=|\{x\in Z_{\beta }\ |\ M_{B_{\beta }}(x)=b\}|\le \kappa $, so we are done.
\medskip
Now the successor case: let $\alpha <cf(\lambda )$ and assume that
$(B_{\alpha},\le_{\alpha})$ is constructed. In the matroid $\mathcal{M}_{Z_{\alpha +1}}$, the set $Z_{\alpha }$ is closed by lemma \ref{lm:lemma20}. Then, by lemma \ref{lm:lemma13}, the contracted matroid $\mathcal{M'}=(Z_{\alpha +1}-Z_{\alpha },r')$ is loop-free. Next we will see that $Chr(\mathcal{M}')\le \kappa $ also holds.
\begin{lemma}\label{lm:lemma21}
For the contracted matroid $\mathcal{M'}=(Z_{\alpha +1}-Z_{\alpha },r')$, the restriction $\Phi |_{Z_{\alpha +1}-Z_{\alpha }}$ is a proper coloring of $\mathcal{M}'$, and thus $Chr(\mathcal{M}')\le \kappa $.
\end{lemma}
\begin{proof}
Suppose for contradiction that $\Phi |_{Z_{\alpha +1}-Z_{\alpha }}$ is not a proper coloring. Then there are an $X\subseteq Z_{\alpha +1}-Z_{\alpha }$ and a $\gamma <\kappa $ such that $X\subseteq \Phi ^{-1}(\gamma )$ and $X\not\in \mathcal{I}(\mathcal{M'})$.
Then by lemma \ref{lm:lemma12}, there is a $Y\subseteq Z_{\alpha }$, such that $Y\in \mathcal{I}(\mathcal{M})$, but $X\cup Y\not\in \mathcal{I}(\mathcal{M})$. Let $C\in \mathcal{C}(\mathcal{M})$, be such that $C\subseteq X\cup Y$.
Then since $Y$ is independent, we must have $C\not\subseteq Y$, and since $C\cap Z_{\alpha }\subseteq Y$, we have $C\not\subseteq Z_{\alpha }$. Moreover, for all $x\in C-Z_{\alpha }$, we have $x\in X\subseteq \Phi ^{-1}(\gamma )$, so $\Phi (x)=\gamma $.
Let
\begin{multline*}
k=\min \{ |A|:A\in [Z_{\alpha }]^{<\omega }\land
\exists C'\in \mathcal{C}(\mathcal{M}_{Z_{\alpha +1}})
\\ C'\not\subseteq Z_{\alpha }\land \forall x\in C'\setminus A\ (\Phi (x)=\gamma) \}.
\end{multline*}
Since $A_{0}=C-\Phi ^{-1}(\gamma )\subseteq Z_{\alpha }$ (with the circuit $C$) witnesses this definition, $k$ is well-defined. We also must have $k\ge 1$: for $k=0$, we would have $C'\subseteq \Phi^{-1}(\gamma )$, contradicting that $\Phi $ is a proper coloring of $\mathcal M$. Let $A\in [Z_{\alpha }]^{<\omega }$ with $|A|=k$ and $C_{1}\in \mathcal{C}(\mathcal{M}_{Z_{\alpha +1 }})$ be witnesses of this minimum, so that $C_{1}\not \subseteq Z_{\alpha }$ and $C_{1}-A\subseteq \Phi ^{-1}(\gamma )$; by the minimality of $k$, we may assume $A\subseteq C_{1}$ (otherwise replace $A$ by $A\cap C_{1}$). Let $l=|C_{1}|-k=|C_{1}-A|$.
Then we have that $$H(\theta )\vDash\exists x_{1}...\exists x_{l}, \Phi (x_{1})=\gamma \wedge ...\wedge \Phi (x_{l})=\gamma \wedge A\cup \{x_{1},...,x_{l}\}\in \mathcal{C}(\mathcal{M}).$$
Since $A\subseteq Z_{\alpha }\subseteq M_{\alpha }$ is a finite subset, we have $A\in M_{\alpha }$. As $\Phi ,\gamma , \mathcal{M}\in M_{0}\subseteq M_{\alpha }$ and $M_{\alpha }$ is an elementary submodel of $H(\theta )$, we have
$$M_{\alpha }\vDash \exists x_{1}...\exists x_{l},\Phi (x_{1})=\gamma \wedge ...\wedge \Phi (x_{l})=\gamma \wedge A\cup \{x_{1},...,x_{l}\}\in \mathcal{C}(\mathcal{M}).$$
Then there is a $C_{2}\in \mathcal{C}(\mathcal{M})$, with $A\subseteq C_{2}\subseteq Z_{\alpha }$ and for all $x\in C_{2}-A$, $\Phi (x)=\gamma $. Let $e\in A$, then clearly $e\in C_{1}\cap C_{2}$. Also choose an $e_{1}\in C_{1}-Z_{\alpha }\subseteq C_{1}-C_{2}$. Then we can use lemma \ref{lm:lemma2} b), so there is a $C_{3}\in \mathcal{C}(\mathcal{M})$, with $C_{3}\subseteq (C_{1}\cup C_{2})-e$ and $e_{1}\in C_{3}$.
Then clearly $C_{3}\subseteq C_{1}\cup C_{2}\subseteq Z_{\alpha +1}$, and $C_{3}\not\subseteq Z_{\alpha }$, as $e_{1}\in C_{3}$. Let $A_{1}=A\cap C_{3}$. Since $A_{1}\subseteq A-e$, we have $|A_{1}|\le k-1$.
Moreover, for all $x\in C_{3}-A_{1}$, we either have $x\in C_{1}-A$ or $x\in C_{2}-A$, and thus $\Phi (x)=\gamma $.
Then the pair $(A_{1},C_{3})$ also witnesses the definition of $k$, contradicting the minimality of $k$. Hence, $\Phi |_{Z_{\alpha +1}-Z_{\alpha }}$ is a proper coloring of $\mathcal{M}'$.
\end{proof}
Now let $\mu =|Z_{\alpha +1}-Z_{\alpha }|\le |Z_{\alpha +1}|\le |M_{\alpha +1}|<\lambda $. By $Q(\mu )$, $\mathcal{M}'$ has a well-ordered base $(B_{\alpha }',\le _{\alpha }')$ such that $|\{x\in Z_{\alpha +1}\setminus Z_{\alpha }:M_{B_{\alpha }'}(x)=b\}|\le \kappa $
for each $b\in B_{\alpha }'$.
Let $B_{\alpha +1}=B_{\alpha }\cup B_{\alpha }^{'}$. First we need to show that this is a base.
For this, we prove the properties of lemma \ref{lm:lemma9}. Since $B_{\alpha}$ is independent and $B_{\alpha }'$ is independent in the contracted matroid, by lemma \ref{lm:lemma12}, we have that $B_{\alpha +1}$ is also independent.
Now we need to show that $\sigma (B_{\alpha +1})=Z_{\alpha +1}$. Since by lemma \ref{lm:lemma20} $Z_{\alpha +1}$ is closed, we have $\sigma (B_{\alpha +1})\subseteq Z_{\alpha +1}$. Let $x\in Z_{\alpha +1}$. If $x\in Z_{\alpha }$, then using lemma \ref{lm:lemma9}, we get $x\in \sigma (B_{\alpha })\subseteq \sigma (B_{\alpha +1})$.
Suppose $x\in Z_{\alpha +1}-Z_{\alpha }$. If $x\in B_{\alpha }'$, then we are done, so suppose $x\not\in B_{\alpha }'$. Then $B_{\alpha }'+x$ is not independent in $\mathcal{M}'$, so by lemma \ref{lm:lemma12}, there is an independent $Y\subseteq Z_{\alpha }$, such that $(Y\cup B_{\alpha }')+x$ is not independent. Let $C\in \mathcal{C}(\mathcal{M})$, with $C\subseteq (Y\cup B_{\alpha }')+x$.
Since $B_{\alpha }'\in \mathcal{I}(\mathcal{M}')$, by lemma \ref{lm:lemma12}, we have $Y\cup B_{\alpha }'\in \mathcal{I}(\mathcal{M})$, so we must have $x\in C$.
On the one hand, we have $Y\subseteq Z_{\alpha } =\sigma (B_{\alpha })\subseteq \sigma (B_{\alpha +1})$; on the other hand, $B_{\alpha }'\subseteq B_{\alpha +1}\subseteq \sigma (B_{\alpha +1})$. Thus $C-x\subseteq Y\cup B_{\alpha }'\subseteq \sigma (B_{\alpha +1})$, so by lemma \ref{lm:lemma16}, we have $x\in \sigma (B_{\alpha +1})$. Since $x\in Z_{\alpha +1}$ was arbitrary, $\sigma (B_{\alpha +1})=Z_{\alpha +1}$, so by lemma \ref{lm:lemma9}, $B_{\alpha +1}$ is a base.
For the well-ordering $\le _{\alpha +1}$, we simply put $B_{\alpha }'$ on top of $B_{\alpha }$; more formally,
\begin{displaymath}
\le_{{\alpha}+1}\ =\ \le_{\alpha}\cup (B_{\alpha}\times B'_{\alpha})\cup \le'_{\alpha}.
\end{displaymath}
Clearly this is a well-ordering and $B_{\alpha }$ (and hence every $B_{\beta }$ for $\beta <\alpha $) is an initial segment. Now we need to show that for all $b\in B_{\alpha +1}$, we have $|\{x\in Z_{\alpha +1}\ |\ M_{B_{\alpha +1}}(x)=b\}|\le \kappa $. First suppose $b\in B_{\alpha }$. We will show that for any $x\in Z_{\alpha +1}$ with $M_{B_{\alpha +1}}(x)=b$, we have $x\in Z_{\alpha }$ and $M_{B_{\alpha }}(x)=b$. If $x\in B_{\alpha +1}$, then $x=b$, so this is clear. Suppose that $x\not\in B_{\alpha +1}$. Then since $B_{\alpha }$ is an initial segment of $B_{\alpha +1}$, we have that $C(B_{\alpha +1},x)-x=C(B_{\alpha +1},x)\cap B_{\alpha +1}\subseteq B_{\alpha }\subseteq Z_{\alpha }$. By lemma \ref{lm:lemma20}, $Z_{\alpha }$ is closed, so by lemma \ref{lm:lemma16}, we have $x\in Z_{\alpha }$. Moreover, $C(B_{\alpha },x)\subseteq B_{\alpha }+x\subseteq B_{\alpha +1}+x$ is a circuit, so by lemma \ref{lm:lemma3}, $C(B_{\alpha +1},x)=C(B_{\alpha },x)$, and thus $b=M_{B_{\alpha +1}}(x)=M_{B_{\alpha }}(x)$. Hence, $|\{x\in Z_{\alpha +1}\ |\ M_{B_{\alpha +1}}(x)=b\}|=|\{x\in Z_{\alpha }\ |\ M_{B_{\alpha }}(x)=b\}|\le \kappa $. Now suppose $b\in B_{\alpha }'$.
We will show that for any $x\in Z_{\alpha +1}$ with $M_{B_{\alpha +1}}(x)=b$, we have $x\in Z_{\alpha +1}-Z_{\alpha }$ and $M_{B_{\alpha }'}(x)=b$. Again, if $x\in B_{\alpha +1}$, then $x=b$, so this is clear; suppose therefore that $x\not\in B_{\alpha +1}$. If $x$ were in $Z_{\alpha }$, we would have a circuit $C(B_{\alpha },x)\subseteq B_{\alpha }+x\subseteq B_{\alpha +1}+x$, so by lemma \ref{lm:lemma3}, we would have $C(B_{\alpha +1},x)=C(B_{\alpha },x)$, and thus $b=M_{B_{\alpha +1}}(x)=M_{B_{\alpha }}(x)\in B_{\alpha }$, which is a contradiction, as $b\in B_{\alpha }'$. So we must have $x\in Z_{\alpha +1}-Z_{\alpha }$. Let $C'=C(B_{\alpha +1},x)-Z_{\alpha }$. Then $C'\subseteq B_{\alpha }'+x$. We will show that $C'$ is the main circuit of $x$ on $B_{\alpha }'$ in the matroid $\mathcal{M}'$; by lemma \ref{lm:lemma3}, it suffices to prove that it is a circuit in the contraction matroid. First, $C'$ is not independent in $\mathcal{M}'$: if it were, then by lemma \ref{lm:lemma12}, $C(B_{\alpha +1},x)=C'\cup (C(B_{\alpha +1},x)\cap Z_{\alpha })=C'\cup (C(B_{\alpha +1},x)\cap B_{\alpha })$ would be independent in $\mathcal{M}$, which is not true. Now we need to show that for all $X\subset C'$ with $X\neq C'$, we have $X\in \mathcal{I}(\mathcal{M}')$. If $x\not\in X$, then $X\subseteq B_{\alpha }'\in \mathcal{I}(\mathcal{M}')$. Suppose $x\in X$, and suppose for contradiction that $X\not\in \mathcal{I}(\mathcal{M}')$. Then by lemma \ref{lm:lemma12}, there is an independent set $Y\subseteq Z_{\alpha }$ such that $X\cup Y$ is not independent. Let $C\in \mathcal{C}(\mathcal{M})$ be such that $C\subseteq X\cup Y$. As $X-x\subseteq B_{\alpha }'\in \mathcal{I}(\mathcal{M}')$, by lemma \ref{lm:lemma12} we have $(X-x)\cup Y\in \mathcal{I}(\mathcal{M})$, so we must have $x\in C$.
Then by lemma \ref{lm:lemma9}, we have $Y\subseteq Z_{\alpha }=\sigma (B_{\alpha })\subseteq \sigma (B_{\alpha }\cup (X-x))$, thus $C-x\subseteq Y\cup (X-x)\subseteq \sigma (B_{\alpha }\cup (X-x))$, so by lemma \ref{lm:lemma16}, we have $x\in \sigma (B_{\alpha }\cup (X-x))$. Then by lemma \ref{lm:lemma8}, there is an $A\in [B_{\alpha }\cup (X-x)]^{<\omega }$ such that $r(A+x)=r(A)\le |A|<|A+x|$, thus $A+x$ is not independent. Let $C''\in \mathcal{C}(\mathcal{M})$ be such that $C''\subseteq A+x$. But then $C''\subseteq A+x\subseteq B_{\alpha }\cup X\subseteq B_{\alpha +1}+x$, so by lemma \ref{lm:lemma3}, we have $C''=C(B_{\alpha +1},x)$. But $C''-Z_{\alpha }\subseteq X$, which is a proper subset of $C'$, while $C(B_{\alpha +1},x)-Z_{\alpha }=C'$; this is a contradiction. Hence, $C'$ is a circuit in $\mathcal{M}'$, so it is the main circuit of $x$ for $B_{\alpha }'$. Then $b=M_{B_{\alpha +1}}(x)=\max_{\le _{\alpha }'}(C'-x)=M_{B_{\alpha }'}(x)$. Thus, $|\{x\in Z_{\alpha +1}\ |\ M_{B_{\alpha +1}}(x)=b\}|=|\{x\in Z_{\alpha +1}-Z_{\alpha }\ |\ M_{B_{\alpha }'}(x)=b\}|\le \kappa $. This completes the successor step of the recursion.
Finally, let $B=\bigcup _{\alpha <cf(\lambda )}{B_{\alpha }}$, and define the well-ordering $\le $ on $B$ by taking $\le \ =\ \bigcup _{\alpha <cf(\lambda )}{\le _{\alpha }}$.
Similarly to the limit step above, we can see that
for all $b\in B$, $|\{x\in S\ |\ M_{B}(x)=b\}|\le \kappa $, so $Q(\lambda )$ holds.
Thus, by transfinite induction, we have proved, that $Q(\lambda )$ holds for all cardinals, so $(1)\Rightarrow (3)$ holds.
\end{proof}
\begin{theorem}
For any loop-free finitary matroid $\mathcal{M}=(S,r)$, we have $Chr(\mathcal{M})=List(\mathcal{M})$.
\end{theorem}
\begin{proof}
If $Chr(\mathcal{M})=k\in \omega $, then by theorem \ref{tm:theorem1}, we have $List(\mathcal{M})\le k=Chr(\mathcal{M})$. If $Chr(\mathcal{M})$ is an infinite cardinal $\kappa $, then by theorem \ref{tm:theorem2}, $List(\mathcal{M})\le \kappa =Chr(\mathcal{M})$. As clearly $Chr(\mathcal{M})\le List(\mathcal{M})$, the equality $Chr(\mathcal{M})=List(\mathcal{M})$ holds for all loop-free finitary matroids.
\end{proof}
\bibliographystyle{plain}
% Source: https://arxiv.org/abs/1701.08237
\title{An Efficient Algebraic Solution to the Perspective-Three-Point Problem}
\begin{abstract}
In this work, we present an algebraic solution to the classical perspective-3-point (P3P) problem for determining the position and attitude of a camera from observations of three known reference points. In contrast to previous approaches, we first directly determine the camera's attitude by employing the corresponding geometric constraints to formulate a system of trigonometric equations. This is then efficiently solved, following an algebraic approach, to determine the unknown rotation matrix and subsequently the camera's position. As compared to recent alternatives, our method avoids computing unnecessary (and potentially numerically unstable) intermediate results, and thus achieves higher numerical accuracy and robustness at a lower computational cost. These benefits are validated through extensive Monte-Carlo simulations for both nominal and close-to-singular geometric configurations.
\end{abstract}
\section{Introduction}
\label{sec:intro}
The Perspective-n-Point (PnP) is the problem of determining the 3D position and orientation (pose) of a camera from
observations of known point features.
The PnP is typically formulated and solved linearly by employing lifting (e.g.,~\cite{ansar2003linear}), or as a nonlinear least-squares problem minimized iteratively (e.g.,~\cite{haralick1989pose}) or directly (e.g.,~\cite{hesch2011direct}).
The minimal case of the PnP (for $n=3$) is often used in practice, in conjunction with RANSAC, for removing outliers~\cite{fischler1981random}.
The first solution to the P3P problem was given by Grunert~\cite{grunert1841pothenotische} in 1841.
Since then, several methods have been introduced, some of which~\cite{grunert1841pothenotische,finsterwalder1903ruckwartseinschneiden,merritt1949explicit,fischler1981random,linnainmaa1988pose,grafarend1989dreidimensionaler} were reviewed and compared, in terms of numerical accuracy, by Haralick et al.~\cite{haralick1991analysis}.
Common to these algorithms is that they employ the law of cosines to formulate a system of three quadratic equations in the features' distances from the camera.
They differ, however, in the elimination process followed for arriving at a univariate polynomial.
Later on, Quan and Lan~\cite{quan1999linear} and more recently Gao et al.~\cite{gao2003complete} employed the same formulation but instead used the Sylvester resultant~\cite{cox2006using} and Wu--Ritt's zero-decomposition method~\cite{wen1986basic}, respectively, to solve the resulting system of equations, and, in the case of~\cite{gao2003complete}, to determine the number of real solutions.
Regardless of the approach followed, once the features' distances have been computed, finding the camera's orientation, expressed as a unit quaternion~\cite{horn1987closed} or a rotation matrix~\cite{horn1988closed}, often requires computing the eigenvectors of a $4\times4$~matrix (e.g.,~\cite{quan1999linear}) or performing singular value decomposition (SVD) of a $3\times3$~matrix (e.g.,~\cite{gao2003complete}), respectively, both of which are time-consuming.
Furthermore, numerical error propagation from the computed distances to the rotation matrix significantly reduces the accuracy of the computed pose estimates.
To the best of our knowledge, the first method\footnote{Nister and Stewenius~\cite{nister2007minimal} also follow a geometric approach for solving the {\em generalized} P3P, resulting in an octic univariate polynomial whose odd monomials vanish for the case of the central P3P.}
that does not employ the law of cosines in its P3P problem formulation is that of Kneip et al.~\cite{kneip2011novel}, and later on that of Masselli and Zell~\cite{masselli2014new}.
Specifically,~\cite{kneip2011novel} and~\cite{masselli2014new} follow a geometric approach for avoiding computing the features' distances and instead directly solve for the camera's pose.
In both cases, however, several intermediate terms (e.g., tangents and cotangents of certain angles) need to be computed, which negatively affects the speed and numerical precision of the resulting algorithms.
Similar to~\cite{kneip2011novel} and~\cite{masselli2014new}, our proposed approach does not require first computing the features' distances.
Differently though, in our derivation, we first eliminate the camera's position and the features' distances to obtain a system of three equations involving {\em only the camera's orientation}.
Then, we follow an algebraic process for successively eliminating two of the three unknown rotational degrees of freedom and arriving at a quartic polynomial.
Our algorithm (summarized in Alg.~\ref{alg:p3p}) requires fewer operations and involves simpler and numerically more stable expressions, as compared to either~\cite{kneip2011novel} or~\cite{masselli2014new}, and thus performs better in terms of efficiency, accuracy, and robustness.
Specifically, the main advantages of our approach are:
\begin{itemize}
\item Our algorithm's implementation takes about 40\% of the time required by the current state of the art~\cite{kneip2011novel}.
\footnote{Although Masselli and Zell~\cite{masselli2014new} claim that their algorithm runs faster than Kneip \emph{et al}\onedot's~\cite{kneip2011novel}, our results (see Section~\ref{sec:results}) show the opposite to be true (by a small margin).
The reason we arrive at a different conclusion is that our simulation randomly generates a new geometric configuration for each run, while Masselli and Zell employ only one configuration during their entire simulation, which saves time due to caching.}
\item Our method achieves better accuracy than~\cite{kneip2011novel,masselli2014new} under nominal conditions.
Moreover, we are able to further improve the numerical precision by applying root polishing to the solutions of the quartic polynomial while remaining faster than~\cite{kneip2011novel,masselli2014new}.
\item Our algorithm is more robust than~\cite{kneip2011novel,masselli2014new} when considering close-to-singular configurations (the three points are almost collinear or very close to each other).
\end{itemize}
The remainder of this paper is structured as follows.
Section~\ref{sec:main} presents the definition of the P3P problem, as well as our derivations for estimating first the orientation and then the position of the camera.
In Section~\ref{sec:results}, we assess the performance of our approach against~\cite{kneip2011novel} and~\cite{masselli2014new} in simulation for both nominal and singular configurations.
Finally, we conclude our work in Section~\ref{sec:conclusion}.
\section{Problem Formulation and Solution}
\label{sec:main}
\subsection{Problem Definition}
Given the positions, $ ^{\scriptscriptstyle {G}}\mathbf{p}_i $, of three known features $ f_i, \ i=1,2,3$, with respect to a reference frame $ \{G\} $, and the corresponding unit-vector bearing measurements, $ ^{\scriptscriptstyle {C}}\mathbf{b}_i$, $i=1,2,3 $, our objective is to estimate the position, $ ^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}} $, and orientation, i.e., the rotation matrix $ ^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $, of the camera $ \{C\} $.
\subsection{Solving for the orientation}
From the geometry of the problem (see Fig.~\ref{fig:di}), we have (for $ i=1,2,3 $):
\begin{align}
\label{eq:di} ^{\scriptscriptstyle {G}}\mathbf{p}_i&={}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}}+d_i{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}{}^{\scriptscriptstyle {C}}\mathbf{b}_i
\end{align}
where $ d_i\triangleq\|^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}}-{}^{\scriptscriptstyle {G}}\mathbf{p}_i\| $ is the distance between the camera and the feature $f_i$.
\begin{figure}
\center
\includegraphics[width=.8\linewidth]{di.pdf}
\caption{The camera $ \{C\} $, whose position, $ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}} $, and orientation, $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $, we seek to determine, observes unit-vector bearing measurement $ {}^{\scriptscriptstyle {C}}\mathbf{b}_i $ of a feature $ f_i $, whose position, $ {}^{\scriptscriptstyle {G}}\mathbf{p}_i $, is known.}
\label{fig:di}
\end{figure}
In order to eliminate the unknown camera position, $ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}} $, and feature distances, $ d_i,\ i=1,2,3 $, we subtract pairwise the three equations corresponding to \eqref{eq:di} for $ (i,j)=(1,2),\ (1,3)$ and $(2,3)$, and project each difference onto the vector $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}({}^{\scriptscriptstyle {C}}\mathbf{b}_i\times{}^{\scriptscriptstyle {C}}\mathbf{b}_j) $, yielding the following system of three equations in the unknown rotation $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $:
\begin{align}
\label{eq:c1} (^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_2)^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}({}^{\scriptscriptstyle {C}}\mathbf{b}_1\times{}^{\scriptscriptstyle {C}}\mathbf{b}_2)&=0\\
\label{eq:c2} (^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_3)^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}({}^{\scriptscriptstyle {C}}\mathbf{b}_1\times{}^{\scriptscriptstyle {C}}\mathbf{b}_3)&=0\\
\label{eq:c3} (^{\scriptscriptstyle {G}}\mathbf{p}_2-{}^{\scriptscriptstyle {G}}\mathbf{p}_3)^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}({}^{\scriptscriptstyle {C}}\mathbf{b}_2\times{}^{\scriptscriptstyle {C}}\mathbf{b}_3)&=0
\end{align}
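As a quick numerical sanity check (ours, not part of the paper; all variable names are illustrative), the orientation-only constraints \eqref{eq:c1}--\eqref{eq:c3} can be verified to vanish on a synthetic, noise-free configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(k):
    # cross-product matrix: skew(k) @ a == np.cross(k, a)
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rot(k, t):
    # rotation about unit vector k by angle t, in the paper's
    # left-hand convention of eq. (Rod)
    return (np.cos(t) * np.eye(3) - np.sin(t) * skew(k)
            + (1.0 - np.cos(t)) * np.outer(k, k))

# synthetic ground-truth pose and features
ax = rng.standard_normal(3); ax /= np.linalg.norm(ax)
C_GC = rot(ax, 1.3)                      # attitude {G}_{C}C
p_C = rng.standard_normal(3)             # camera position in {G}
p = rng.standard_normal((3, 3))          # three feature positions in {G}
b = [C_GC.T @ (pi - p_C) / np.linalg.norm(pi - p_C) for pi in p]

# eqs. (c1)-(c3): all three residuals vanish at the true attitude
residuals = [(p[i] - p[j]) @ C_GC @ np.cross(b[i], b[j])
             for (i, j) in ((0, 1), (0, 2), (1, 2))]
max_res = max(abs(r) for r in residuals)
```

The residuals are zero because each difference $\mathbf{p}_i-\mathbf{p}_j$ lies in the span of the two rotated bearings, which is orthogonal to their rotated cross product.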
Next, in order to compute one of the three unknown rotational degrees of freedom, we introduce the following factorization of ${}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $:
\begin{equation}
\label{eq:ccc}{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}=\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{C}(\mathbf{k}_3,\theta_3)
\end{equation}
where\footnote{$ \mathbf{C}(\mathbf{k},\theta) $ denotes the rotation matrix describing the rotation about the unit vector, $ \mathbf{k}$, by an angle $ \theta $. Note that in the ensuing derivations, all rotation angles are defined using the left-hand rule.}\\
\begin{align}
\label{eq:ki}\mathbf{k}_1\triangleq\frac{{}^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_2}{\|{}^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_2\|},\ \mathbf{k}_3\triangleq\frac{{}^{\scriptscriptstyle {C}}\mathbf{b}_1\times{}^{\scriptscriptstyle {C}}\mathbf{b}_2}{\|{}^{\scriptscriptstyle {C}}\mathbf{b}_1\times{}^{\scriptscriptstyle {C}}\mathbf{b}_2\|},\ \mathbf{k}_2\triangleq\frac{\mathbf{k}_1\times\mathbf{k}_3}{\|\mathbf{k}_1\times\mathbf{k}_3\|}
\end{align}
Substituting \eqref{eq:ccc} in \eqref{eq:c1} yields a scalar equation in the unknown $ \theta_2 $:
\begin{align}
\label{eq:t2}\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{k}_3&=0
\end{align}
which we solve by employing Rodrigues' rotation formula~\cite{koks2006roundabout}:\footnote{$ \lfloor\mathbf{k}\rfloor $ denotes the $ 3\times3 $ skew-symmetric matrix corresponding to $ \mathbf{k} $ such that $ \lfloor\mathbf{k}\rfloor\mathbf{a}=\mathbf{k}\times\mathbf{a}$, $\forall\ \mathbf{k},\mathbf{a}\in\mathbb{R}^3$. Note also that if $ \mathbf{k} $ is a unit vector, then $ \lfloor\mathbf{k}\rfloor^2=\mathbf{k}\mathbf{k}^{\scriptscriptstyle {T}}-\mathbf{I} $, while for two vectors $ \mathbf{a}$, $\mathbf{b} $, $ \lfloor\mathbf{a}\rfloor\lfloor\mathbf{b}\rfloor=\mathbf{b}\mathbf{a}^{\scriptscriptstyle {T}}-(\mathbf{a}^{\scriptscriptstyle {T}}\mathbf{b})\mathbf{I} $. Lastly, it is easy to show that $ \lfloor\lfloor\mathbf{a}\rfloor\mathbf{b}\rfloor=\mathbf{b}\mathbf{a}^{\scriptscriptstyle {T}}-\mathbf{a}\mathbf{b}^{\scriptscriptstyle {T}}. $}
\begin{equation}
\label{eq:Rod}\mathbf{C}(\mathbf{k}_2,\theta_2)=\cos\theta_2\mathbf{I}-\sin\theta_2\lfloor\mathbf{k}_2\rfloor+(1-\cos\theta_2)\mathbf{k}_2\mathbf{k}_2^{\scriptscriptstyle {T}}
\end{equation}
to get
\begin{align}
\label{eq:st2}\theta_2=\arccos(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{k}_3)\pm\frac{\pi}{2}
\end{align}
Note that we only need to consider one of these two solutions [in our case, we select $ \theta_2=\arccos(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{k}_3)-\frac{\pi}{2} $; see Fig.~\ref{fig:ki}], since the other one will result in the same $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $ (see Appendix~\ref{ssec:theta2} for a formal proof).\\
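The retained branch of \eqref{eq:st2} can be checked numerically; the sketch below (ours, using the paper's left-hand Rodrigues convention of \eqref{eq:Rod}) confirms that $ \theta_2=\arccos(\mathbf{k}_1^{T}\mathbf{k}_3)-\pi/2 $ satisfies \eqref{eq:t2}:

```python
import numpy as np

rng = np.random.default_rng(1)

def skew(k):
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rot(k, t):
    # eq. (Rod), left-hand convention
    return (np.cos(t) * np.eye(3) - np.sin(t) * skew(k)
            + (1.0 - np.cos(t)) * np.outer(k, k))

p1, p2 = rng.standard_normal((2, 3))
b1, b2 = (v / np.linalg.norm(v) for v in rng.standard_normal((2, 3)))

# eq. (ki)
k1 = (p1 - p2) / np.linalg.norm(p1 - p2)
k3 = np.cross(b1, b2); k3 /= np.linalg.norm(k3)
k2 = np.cross(k1, k3); k2 /= np.linalg.norm(k2)

theta2 = np.arccos(k1 @ k3) - np.pi / 2   # retained branch of eq. (st2)
res = k1 @ rot(k2, theta2) @ k3           # eq. (t2): should be ~0
```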
\indent In what follows, we describe the process for eliminating $ \theta_3 $ from \eqref{eq:c2} and \eqref{eq:c3}, eventually arriving at a quartic polynomial in a trigonometric function of $ \theta_1 $.
To do so, we once again substitute in \eqref{eq:c2} and \eqref{eq:c3} the factorization of $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $ defined in \eqref{eq:ccc} to get (for $ i=1,2 $):
\begin{equation}
\label{eq:uCv}\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{C}(\mathbf{k}_3,\theta_3)\mathbf{v}_i=0
\end{equation}
where
\begin{align}
\label{eq:ui}\mathbf{u}_i\triangleq{}^{\scriptscriptstyle {G}}\mathbf{p}_i-{}^{\scriptscriptstyle {G}}\mathbf{p}_3,~~
\mathbf{v}_i&\triangleq{}^{\scriptscriptstyle {C}}\mathbf{b}_i\times{}^{\scriptscriptstyle {C}}\mathbf{b}_3,~~i=1,2,
\end{align}
and employ the following property of rotation matrices
\begin{equation}
\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{C}^{\scriptscriptstyle {T}}(\mathbf{k}_1,\theta_1)=\mathbf{C}(\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{k}_2,\theta_2)\nonumber
\end{equation}
to rewrite \eqref{eq:uCv} in a simpler form as
\begin{align}
\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{k}_3,\theta_3)\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{v}_i&=0\nonumber\\
\Rightarrow\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{k}^\prime_3,\theta_3)\mathbf{v}^\prime_i&=0\label{eq:k3prime}
\end{align}
where
\begin{align}
\mathbf{v}^\prime_i&\triangleq\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{v}_i,\ i=1,2\nonumber\\
\label{eq:k2xk1}\mathbf{k}^\prime_3&\triangleq\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{k}_3=\mathbf{k}_2\times\mathbf{k}_1
\end{align}
The last equality in \eqref{eq:k2xk1} is geometrically depicted in Fig.~\ref{fig:ki} and algebraically derived in Appendix~\ref{ssec:k3p}. Analogously, it is straightforward to show that
\begin{equation}
\mathbf{k}^\prime_1\triangleq\mathbf{C}^{\scriptscriptstyle {T}}(\mathbf{k}_2,\theta_2)\mathbf{k}_1=\mathbf{k}_3\times\mathbf{k}_2\nonumber
\end{equation}
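Both identities, $ \mathbf{k}_3^\prime=\mathbf{k}_2\times\mathbf{k}_1 $ from \eqref{eq:k2xk1} and $ \mathbf{k}_1^\prime=\mathbf{k}_3\times\mathbf{k}_2 $, are easy to confirm numerically; the following sketch (ours, illustrative names) does so for a random configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

def skew(k):
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rot(k, t):
    # eq. (Rod), left-hand convention
    return (np.cos(t) * np.eye(3) - np.sin(t) * skew(k)
            + (1.0 - np.cos(t)) * np.outer(k, k))

p1, p2 = rng.standard_normal((2, 3))
b1, b2 = (v / np.linalg.norm(v) for v in rng.standard_normal((2, 3)))
k1 = (p1 - p2) / np.linalg.norm(p1 - p2)
k3 = np.cross(b1, b2); k3 /= np.linalg.norm(k3)
k2 = np.cross(k1, k3); k2 /= np.linalg.norm(k2)
theta2 = np.arccos(k1 @ k3) - np.pi / 2

k3p = rot(k2, theta2) @ k3        # eq. (k2xk1): should equal k2 x k1
k1p = rot(k2, theta2).T @ k1      # should equal k3 x k2
err = max(np.linalg.norm(k3p - np.cross(k2, k1)),
          np.linalg.norm(k1p - np.cross(k3, k2)))
```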
\begin{figure}
\center
\includegraphics[width=\linewidth]{ki.pdf}
\caption{Geometric relation between unit vectors $ \mathbf{k}_1,\ \mathbf{k}_2,\ \mathbf{k}_3,\ \mathbf{k}_3^\prime,\ \mathbf{k}_3^{\prime\prime},$ and $ \mathbf{u}_1 $. Note that $ \mathbf{k}_1,\ \mathbf{k}_3$, and $\mathbf{k}_3^\prime $ belong to a plane $ \pi_1 $ whose normal is $ \mathbf{k}_2 $. Also, $ \mathbf{k}_2,\ \mathbf{k}_3^\prime$, and $\mathbf{k}_3^{\prime\prime} $ lie on a plane, $ \pi_2 $, normal to $\pi_1$.}
\label{fig:ki}
\end{figure}
Next, by employing Rodrigues' rotation formula [see \eqref{eq:Rod}] to express the product of a rotation matrix and a vector as a linear function of the unknown $ \begin{bmatrix}\cos\theta & \sin\theta\end{bmatrix}^{\scriptscriptstyle {T}} $, i.e.,
\begin{align}
\mathbf{C}(\mathbf{k},\theta)\mathbf{v}&=(-\cos\theta\lfloor\mathbf{k}\rfloor^2-\sin\theta\lfloor\mathbf{k}\rfloor+\mathbf{k}\mathbf{k}^{\scriptscriptstyle {T}})\mathbf{v}\nonumber\\
&=\begin{bmatrix}
-\lfloor\mathbf{k}\rfloor^2\mathbf{v} & -\lfloor\mathbf{k}\rfloor \mathbf{v}
\end{bmatrix}
\begin{bmatrix}
\cos\theta\\
\sin\theta
\end{bmatrix}+(\mathbf{k}^{\scriptscriptstyle {T}}\mathbf{v})\mathbf{k}\label{eq:Cv}
\end{align}
in \eqref{eq:k3prime} yields (for $ i=1,2 $):
\begin{align}
&\left(\begin{bmatrix}
-\lfloor\mathbf{k}_1\rfloor^2\mathbf{u}_i & \lfloor\mathbf{k}_1\rfloor \mathbf{u}_i
\end{bmatrix}
\begin{bmatrix}
\cos\theta_1\\
\sin\theta_1
\end{bmatrix}+(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{u}_i)\mathbf{k}_1\right)^{\scriptscriptstyle {T}}\nonumber\\
\cdot&\left(\begin{bmatrix}
-\lfloor\mathbf{k}^{\prime}_3\rfloor^2\mathbf{v}^{\prime}_i & -\lfloor\mathbf{k}^{\prime}_3\rfloor \mathbf{v}^{\prime}_i
\end{bmatrix}
\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix}+({\mathbf{k}^{\prime}_3}^{\scriptscriptstyle {T}}\mathbf{v}^{\prime}_i)\mathbf{k}^{\prime}_3\right)=0\label{eq:At+b}
\end{align}
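The linear-in-$ \begin{bmatrix}\cos\theta & \sin\theta\end{bmatrix}^{T} $ decomposition \eqref{eq:Cv} underlying \eqref{eq:At+b} can be spot-checked as follows (our sketch, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(3)

def skew(k):
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rot(k, t):
    # eq. (Rod), left-hand convention
    return (np.cos(t) * np.eye(3) - np.sin(t) * skew(k)
            + (1.0 - np.cos(t)) * np.outer(k, k))

k = rng.standard_normal(3); k /= np.linalg.norm(k)
v = rng.standard_normal(3)
t = 0.8

lhs = rot(k, t) @ v
# eq. (Cv): C(k,t) v = [-[k]^2 v, -[k] v] [cos t; sin t] + (k^T v) k
A = np.column_stack([-skew(k) @ skew(k) @ v, -skew(k) @ v])
rhs = A @ np.array([np.cos(t), np.sin(t)]) + (k @ v) * k
err = np.linalg.norm(lhs - rhs)
```

The decomposition follows from $ \lfloor\mathbf{k}\rfloor^2=\mathbf{k}\mathbf{k}^{T}-\mathbf{I} $, which lets the $\cos\theta\,\mathbf{I}$ and $(1-\cos\theta)\mathbf{k}\mathbf{k}^{T}$ terms be combined.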
Expanding \eqref{eq:At+b} and rearranging terms yields (for $ i=1,2 $)
\begin{align}
\label{eq:t1t3}
&\begin{bmatrix}
\cos\theta_1 \\ \sin\theta_1
\end{bmatrix}^{\scriptscriptstyle {T}}\begin{bmatrix}
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor^2\lfloor\mathbf{k}^{\prime}_3\rfloor^2\mathbf{v}^{\prime}_i & \mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor^2\lfloor\mathbf{k}^{\prime}_3\rfloor\mathbf{v}^{\prime}_i\\
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor^2\mathbf{v}^{\prime}_i &
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor\mathbf{v}^{\prime}_i
\end{bmatrix}\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix}\nonumber\\
+&(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{u}_i)\begin{bmatrix}
-\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime}_3\rfloor^2\mathbf{v}^{\prime}_i & -\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime}_3\rfloor\mathbf{v}^{\prime}_i
\end{bmatrix}\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix} \nonumber\\
=&({\mathbf{k}^{\prime}_3}^{\scriptscriptstyle {T}}\mathbf{v}^{\prime}_i)\begin{bmatrix}
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor\mathbf{k}_1 &
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\mathbf{k}^{\prime}_3
\end{bmatrix}\begin{bmatrix}
\cos\theta_1\\
\sin\theta_1
\end{bmatrix}
\end{align}
Notice that the term $ \mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor$ appears three times in \eqref{eq:t1t3}, and
\begin{align}
\mathbf{u}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor&=\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}^{\prime}_3\mathbf{k}_1^{\scriptscriptstyle {T}}\nonumber\\
&=({}^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_3)^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor\nonumber\\
&=({}^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_2+{}^{\scriptscriptstyle {G}}\mathbf{p}_2-{}^{\scriptscriptstyle {G}}\mathbf{p}_3)^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor\nonumber\\
&=({}^{\scriptscriptstyle {G}}\mathbf{p}_2-{}^{\scriptscriptstyle {G}}\mathbf{p}_3)^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor\nonumber\\
&=\mathbf{u}_2^{\scriptscriptstyle {T}}\mathbf{k}^{\prime}_3\mathbf{k}_1^{\scriptscriptstyle {T}}= \mathbf{u}_2^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime}_3\rfloor\label{eq:u1k3pk1}
\end{align}
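Both equalities in \eqref{eq:u1k3pk1}, i.e., $ \mathbf{u}_i^{T}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}_3^{\prime}\rfloor=(\mathbf{u}_i^{T}\mathbf{k}_3^{\prime})\mathbf{k}_1^{T} $ and its independence of $i$, can be verified numerically (our sketch, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(4)

def skew(k):
    # cross-product matrix: skew(k) @ a == np.cross(k, a)
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

p1, p2, p3 = rng.standard_normal((3, 3))
b1, b2 = (v / np.linalg.norm(v) for v in rng.standard_normal((2, 3)))
k1 = (p1 - p2) / np.linalg.norm(p1 - p2)
k3 = np.cross(b1, b2); k3 /= np.linalg.norm(k3)
k2 = np.cross(k1, k3); k2 /= np.linalg.norm(k2)
k3p = np.cross(k2, k1)            # eq. (k2xk1)
u1, u2 = p1 - p3, p2 - p3         # eq. (ui)

row1 = u1 @ skew(k1) @ skew(k3p)  # row vector u_1^T [k1][k3']
row2 = u2 @ skew(k1) @ skew(k3p)
err = max(np.linalg.norm(row1 - (u1 @ k3p) * k1),  # equals (u1^T k3') k1^T
          np.linalg.norm(row1 - row2))             # same for u1 and u2
```

The first equality uses $ \lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}_3^{\prime}\rfloor=\mathbf{k}_3^{\prime}\mathbf{k}_1^{T} $ (since $\mathbf{k}_1\perp\mathbf{k}_3^{\prime}$); the second holds because $\mathbf{u}_1-\mathbf{u}_2$ is parallel to $\mathbf{k}_1$.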
This motivates us to rewrite \eqref{eq:k3prime} as (for $ i=1,2 $):
\begin{align}
0&=\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{k}^\prime_3,\theta_3)\mathbf{v}^\prime_i\nonumber\\
&=\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{k}_1,-\phi)\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}^\prime_3,\theta_3)\mathbf{v}^\prime_i\nonumber\\
&=\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\theta_1-\phi)\mathbf{C}(\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{k}^\prime_3,\theta_3)\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{v}^\prime_i\nonumber\\
\label{eq:CuCv}&=\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\theta_1^\prime)\mathbf{C}(\mathbf{k}^{\prime\prime}_3,\theta_3)\mathbf{v}^{\prime\prime}_i
\end{align}
where
\begin{align}
\label{eq:t1p}\theta_1^\prime\triangleq\theta_1-\phi,\ \mathbf{v}^{\prime\prime}_i\triangleq\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{v}^\prime_i,\ \mathbf{k}^{\prime\prime}_3\triangleq\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{k}^\prime_3
\end{align}
To simplify the equation analogous to \eqref{eq:t1t3} that will result from \eqref{eq:CuCv}, we seek to find a $ \phi $ (not necessarily unique) such that $ \mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}^{\prime\prime}_3=0 $, and hence, $ \mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime\prime}_3\rfloor=0 $ [see \eqref{eq:u1k3pk1}], i.e.,
\begin{align}
0&=\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}^{\prime\prime}_3=\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{k}^\prime_3\label{eq:u1k3pp}\\
&=\mathbf{u}_1^{\scriptscriptstyle {T}}(\cos\phi\mathbf{I}-\sin\phi\lfloor\mathbf{k}_1\rfloor+(1-\cos\phi)\mathbf{k}_1\mathbf{k}_1^{\scriptscriptstyle {T}})\mathbf{k}^\prime_3\nonumber\\
&=\cos\phi\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}^\prime_3-\sin\phi\mathbf{u}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\mathbf{k}^\prime_3\nonumber\\
&=\cos\phi\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}^\prime_3-\sin\phi\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}_2\nonumber
\end{align}
\begin{align}
\label{eq:phi}\Rightarrow\begin{bmatrix}
\cos\phi & \sin\phi
\end{bmatrix}&=\frac{1}{\delta}\begin{bmatrix}
\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}_2 & \mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}^\prime_3
\end{bmatrix}
\end{align}
where
\begin{align}
\label{eq:delta}\delta\triangleq\sqrt{(\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}^\prime_3)^2+(\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}_2)^2}
=\|\mathbf{u}_1\times\mathbf{k}_1\|
\end{align}
and thus [from \eqref{eq:t1p} using \eqref{eq:Rod}]
\begin{align}
\mathbf{k}_3^{\prime\prime}&=\cos\phi\mathbf{k}^\prime_3-\sin\phi\lfloor\mathbf{k}_1\rfloor\mathbf{k}^\prime_3+(1-\cos\phi)\mathbf{k}_1\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{k}^\prime_3\nonumber\\
&=(\mathbf{k}^\prime_3\mathbf{k}_2^{\scriptscriptstyle {T}}\mathbf{u}_1-\mathbf{k}_2{\mathbf{k}_3^\prime}^{\scriptscriptstyle {T}}\mathbf{u}_1)/\delta=\mathbf{u}_1\times(\mathbf{k}^\prime_3\times\mathbf{k}_2)/\delta\nonumber\\
\label{eq:k3pp}&=\frac{\mathbf{u}_1\times\mathbf{k}_1}{\|\mathbf{u}_1\times\mathbf{k}_1\|}
\end{align}
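The closed form \eqref{eq:k3pp} and $ \delta=\|\mathbf{u}_1\times\mathbf{k}_1\| $ from \eqref{eq:delta} can be verified numerically; in the sketch below (ours), $\phi$ is recovered from \eqref{eq:phi} via \texttt{atan2}:

```python
import numpy as np

rng = np.random.default_rng(5)

def skew(k):
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rot(k, t):
    # eq. (Rod), left-hand convention
    return (np.cos(t) * np.eye(3) - np.sin(t) * skew(k)
            + (1.0 - np.cos(t)) * np.outer(k, k))

p1, p2, p3 = rng.standard_normal((3, 3))
b1, b2 = (v / np.linalg.norm(v) for v in rng.standard_normal((2, 3)))
k1 = (p1 - p2) / np.linalg.norm(p1 - p2)
k3 = np.cross(b1, b2); k3 /= np.linalg.norm(k3)
k2 = np.cross(k1, k3); k2 /= np.linalg.norm(k2)
k3p = np.cross(k2, k1)                  # eq. (k2xk1)
u1 = p1 - p3

delta = np.hypot(u1 @ k3p, u1 @ k2)     # eq. (delta)
phi = np.arctan2(u1 @ k3p, u1 @ k2)     # eq. (phi)
k3pp = rot(k1, phi) @ k3p               # eq. (t1p)

target = np.cross(u1, k1)
err = max(np.linalg.norm(k3pp - target / np.linalg.norm(target)),  # eq. (k3pp)
          abs(delta - np.linalg.norm(target)),
          abs(u1 @ k3pp))               # the sought condition u1^T k3'' = 0
```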
Now, we can expand \eqref{eq:CuCv} using \eqref{eq:Cv} to get an equation analogous to \eqref{eq:t1t3}:
\begin{align}
&\begin{bmatrix}
\cos\theta_1^\prime \\ \sin\theta_1^\prime
\end{bmatrix}^{\scriptscriptstyle {T}}\begin{bmatrix}
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor^2\lfloor\mathbf{k}^{\prime\prime}_3\rfloor^2\mathbf{v}^{\prime\prime}_i & \mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor^2\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i \nonumber\\
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime\prime}_3\rfloor^2\mathbf{v}^{\prime\prime}_i &
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i
\end{bmatrix}\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix}\nonumber\\
&+(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{u}_i)\begin{bmatrix}
-\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime\prime}_3\rfloor^2\mathbf{v}^{\prime\prime}_i & -\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i
\end{bmatrix}\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix}=\nonumber\\
\label{eq:matrix}&({\mathbf{k}^{\prime\prime}_3}^{\scriptscriptstyle {T}}\mathbf{v}^{\prime\prime}_i)\begin{bmatrix}
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{k}_1 &
\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\mathbf{k}^{\prime\prime}_3
\end{bmatrix}\begin{bmatrix}
\cos\theta_1^\prime\\
\sin\theta_1^\prime
\end{bmatrix}
\end{align}
Substituting $ \mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\lfloor\mathbf{k}^{\prime\prime}_3\rfloor=0 $ [see \eqref{eq:u1k3pk1}] in \eqref{eq:matrix} and renaming terms yields (for $ i=1,2 $):
\begin{align}
&\begin{bmatrix}
\cos\theta_1^\prime \\ \sin\theta_1^\prime
\end{bmatrix}^{\scriptscriptstyle {T}}\begin{bmatrix}
\bar{f}_{i1} & \bar{f}_{i2}\\
0 & 0
\end{bmatrix}\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix}+\begin{bmatrix}
\bar{f}_{i4} & \bar{f}_{i5}
\end{bmatrix}\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix}\nonumber\\
=&\begin{bmatrix}
0 & \bar{f}_{i3}
\end{bmatrix}\begin{bmatrix}
\cos\theta_1^\prime\\
\sin\theta_1^\prime
\end{bmatrix}\nonumber\\
\label{eq:fi}
\Rightarrow
&\begin{bmatrix}
\bar{f}_{i1}\cos\theta_1^\prime+\bar{f}_{i4} & \bar{f}_{i2}\cos\theta_1^\prime+\bar{f}_{i5}
\end{bmatrix}\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix}=\bar{f}_{i3}\sin\theta_1^\prime
\end{align}
where\footnote{The simplified expressions for the following terms, shown after the second equality, require lengthy algebraic derivations which we omit due to space limitations.}
\begin{align}
\bar{f}_{i1}&\triangleq\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor^2\lfloor\mathbf{k}^{\prime\prime}_3\rfloor^2\mathbf{v}^{\prime\prime}_i=\delta\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_2\nonumber\\
\bar{f}_{i2}&\triangleq\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor^2\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i=\delta\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime\nonumber\\
\bar{f}_{i3}&\triangleq({\mathbf{k}^{\prime\prime}_3}^{\scriptscriptstyle {T}}\mathbf{v}^{\prime\prime}_i)\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\mathbf{k}^{\prime\prime}_3=\delta\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_3\nonumber\\
\bar{f}_{i4}&\triangleq-(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime\prime}_3\rfloor^2\mathbf{v}^{\prime\prime}_i=(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)(\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)\nonumber\\
\bar{f}_{i5}&\triangleq-(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i=-(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)(\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_2)\nonumber
\end{align}
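Since the simplified expressions for the $\bar{f}_{ij}$ are stated without derivation, a numerical spot-check is reassuring; the sketch below (ours, illustrative names) verifies $\bar{f}_{11}$, $\bar{f}_{12}$, and $\bar{f}_{13}$ for $i=1$ on a random configuration:

```python
import numpy as np

rng = np.random.default_rng(6)

def skew(k):
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rot(k, t):
    # eq. (Rod), left-hand convention
    return (np.cos(t) * np.eye(3) - np.sin(t) * skew(k)
            + (1.0 - np.cos(t)) * np.outer(k, k))

p1, p2, p3 = rng.standard_normal((3, 3))
b1, b2, b3 = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 3)))
k1 = (p1 - p2) / np.linalg.norm(p1 - p2)
k3 = np.cross(b1, b2); k3 /= np.linalg.norm(k3)
k2 = np.cross(k1, k3); k2 /= np.linalg.norm(k2)
theta2 = np.arccos(k1 @ k3) - np.pi / 2

k3p = rot(k2, theta2) @ k3              # = k2 x k1
k1p = np.cross(k3, k2)                  # k1'
u1 = p1 - p3
v1 = np.cross(b1, b3)                   # eq. (ui)
v1p = rot(k2, theta2) @ v1
delta = np.hypot(u1 @ k3p, u1 @ k2)     # eq. (delta)
phi = np.arctan2(u1 @ k3p, u1 @ k2)     # eq. (phi)
k3pp = rot(k1, phi) @ k3p
v1pp = rot(k1, phi) @ v1p

S1, S3 = skew(k1), skew(k3pp)
fbar11 = u1 @ S1 @ S1 @ S3 @ S3 @ v1pp                 # raw definition
fbar12 = u1 @ S1 @ S1 @ S3 @ v1pp
fbar13 = (k3pp @ v1pp) * (u1 @ S1 @ k3pp)
err = max(abs(fbar11 - delta * (v1 @ k2)),             # simplified forms
          abs(fbar12 - delta * (v1 @ k1p)),
          abs(fbar13 - delta * (v1 @ k3)))
```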
For $ i=1,2 $, \eqref{eq:fi} results in the following system:
\begin{equation}
\begin{bmatrix}
\label{eq:Fi}
\bar{f}_{11}\cos\theta_1^\prime+\bar{f}_{14} & \bar{f}_{12}\cos\theta_1^\prime+\bar{f}_{15} \\
\bar{f}_{21}\cos\theta_1^\prime+\bar{f}_{24} & \bar{f}_{22}\cos\theta_1^\prime+\bar{f}_{25}
\end{bmatrix}\begin{bmatrix}
\cos\theta_3\\
\sin\theta_3
\end{bmatrix}=\begin{bmatrix}
\bar{f}_{13}\\
\bar{f}_{23}
\end{bmatrix}\sin\theta_1^\prime
\end{equation}
Note that since $ \bar{f}_{11}\bar{f}_{14}+\bar{f}_{12}\bar{f}_{15}=0 $, we can further simplify \eqref{eq:Fi} by introducing $ \theta_3^\prime $, where
\begin{align}
\label{eq:t3p}\begin{bmatrix}
\cos\theta_3^\prime\\
\sin\theta_3^\prime
\end{bmatrix}\triangleq\begin{bmatrix}\frac{\bar{f}_{11}\cos\theta_3+\bar{f}_{12}\sin\theta_3}{\sqrt{\bar{f}_{11}^2+\bar{f}_{12}^2}} & -\frac{\bar{f}_{14}\cos\theta_3+\bar{f}_{15}\sin\theta_3}{\sqrt{\bar{f}_{14}^2+\bar{f}_{15}^2}}\end{bmatrix}^{\scriptscriptstyle {T}}
\end{align}
Replacing $ \theta_3 $ by $ \theta_3^\prime $ in \eqref{eq:Fi}, we have
\begin{equation}
\begin{bmatrix}
f_{11}\cos\theta_1^\prime & f_{15} \\
f_{21}\cos\theta_1^\prime+f_{24} & f_{22}\cos\theta_1^\prime+f_{25}
\end{bmatrix}\begin{bmatrix}
\cos\theta_3^\prime\\
\sin\theta_3^\prime
\end{bmatrix}=\begin{bmatrix}
f_{13}\\
f_{23}
\end{bmatrix}\sin\theta_1^\prime\label{eq:fs1}
\end{equation}
where
\begin{align}
\label{eq:fi1}f_{11}&\triangleq\delta\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3\\
f_{21}&\triangleq\delta({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_2)(\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3)\\
f_{22}&\triangleq\delta(\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3)\|{}^{\scriptscriptstyle {C}}\mathbf{b}_1\times{}^{\scriptscriptstyle {C}}\mathbf{b}_2\|\\
f_{13}&\triangleq\bar{f}_{13}=\delta\mathbf{v}_1^{\scriptscriptstyle {T}}\mathbf{k}_3\\
f_{23}&\triangleq\bar{f}_{23}=\delta\mathbf{v}_2^{\scriptscriptstyle {T}}\mathbf{k}_3\\
f_{24}&\triangleq(\mathbf{u}_2^{\scriptscriptstyle {T}}\mathbf{k}_1)(\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3)\|{}^{\scriptscriptstyle {C}}\mathbf{b}_1\times{}^{\scriptscriptstyle {C}}\mathbf{b}_2\|\\
f_{15}&\triangleq-(\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}_1)(\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3)\\
\label{eq:fi5}f_{25}&\triangleq-(\mathbf{u}_2^{\scriptscriptstyle {T}}\mathbf{k}_1)({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_2)(\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3)
\end{align}
From \eqref{eq:fs1}, we have
\begin{align}
\label{eq:theta3p}
\begin{bmatrix}
\cos\theta_3^\prime\\
\sin\theta_3^\prime
\end{bmatrix}=&\det\left(\begin{bmatrix}
f_{11}\cos\theta_1^\prime & f_{15} \\
f_{21}\cos\theta_1^\prime+f_{24} & f_{22}\cos\theta_1^\prime+f_{25}
\end{bmatrix}\right)^{-1}\nonumber\\
\cdot&\begin{bmatrix}
f_{22}\cos\theta_1^\prime+f_{25} & -f_{15} \\
-(f_{21}\cos\theta_1^\prime+f_{24}) & f_{11}\cos\theta_1^\prime
\end{bmatrix}\begin{bmatrix}
f_{13}\\
f_{23}
\end{bmatrix}\sin\theta_1^\prime
\end{align}
Computing the norm of both sides of \eqref{eq:theta3p} results in
\begin{align}
&\left\lVert\begin{bmatrix}
f_{22}\cos\theta_1^\prime+f_{25} & -f_{15} \\
-(f_{21}\cos\theta_1^\prime+f_{24}) & f_{11}\cos\theta_1^\prime
\end{bmatrix}\begin{bmatrix}
f_{13}\\
f_{23}
\end{bmatrix}\right\rVert^2(1-\cos^2\theta_1^\prime)\nonumber\\
&=\det\left(\begin{bmatrix}
f_{11}\cos\theta_1^\prime & f_{15} \\
f_{21}\cos\theta_1^\prime+f_{24} & f_{22}\cos\theta_1^\prime+f_{25}
\end{bmatrix}\right)^2\nonumber
\end{align}
which is a 4th-order polynomial in $ \cos\theta_1^\prime $ that can be compactly written as:
\begin{equation}
\label{eq:4th}\displaystyle\sum_{j=0}^{4}\alpha_j\cos^j\theta_1^\prime=0
\end{equation}
with
\begin{align}
\label{eq:A4}\alpha_4&\triangleq g_5^2+g_1^2+g_3^2\\
\alpha_3&\triangleq 2(g_5g_6+g_1g_2+g_3g_4)\\
\alpha_2&\triangleq g_6^2+2g_5g_7+g_2^2+g_4^2-g_1^2-g_3^2\\
\alpha_1&\triangleq 2(g_6g_7-g_1g_2-g_3g_4)\\
\alpha_0&\triangleq g_7^2-g_2^2-g_4^2\\
g_1&\triangleq f_{13}f_{22}\\
g_2&\triangleq f_{13}f_{25}-f_{15}f_{23}\\
g_3&\triangleq f_{11}f_{23}-f_{13}f_{21}\\
g_4&\triangleq -f_{13}f_{24}\\
g_5&\triangleq f_{11}f_{22}\\
g_6&\triangleq f_{11}f_{25}-f_{15}f_{21}\\
\label{eq:g7}g_7&\triangleq -f_{15}f_{24}
\end{align}
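The coefficients \eqref{eq:A4}--\eqref{eq:g7} can be cross-checked against the norm/determinant identity, and the quartic solved numerically; the sketch below (ours) uses \texttt{numpy.roots} in place of the closed-form Ferrari/Cardano solution employed in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
f11, f21, f22, f13, f23, f24, f15, f25 = rng.standard_normal(8)

g1 = f13 * f22
g2 = f13 * f25 - f15 * f23
g3 = f11 * f23 - f13 * f21
g4 = -f13 * f24
g5 = f11 * f22
g6 = f11 * f25 - f15 * f21
g7 = -f15 * f24
alpha = [g5**2 + g1**2 + g3**2,
         2 * (g5 * g6 + g1 * g2 + g3 * g4),
         g6**2 + 2 * g5 * g7 + g2**2 + g4**2 - g1**2 - g3**2,
         2 * (g6 * g7 - g1 * g2 - g3 * g4),
         g7**2 - g2**2 - g4**2]        # [alpha_4, ..., alpha_0]

# cross-check against the norm/determinant identity at a random point c
c = rng.uniform(-1, 1)
det = f11 * c * (f22 * c + f25) - f15 * (f21 * c + f24)
adj_v = np.array([(f22 * c + f25) * f13 - f15 * f23,
                  -(f21 * c + f24) * f13 + f11 * c * f23])
identity_err = abs(det**2 - (adj_v @ adj_v) * (1 - c**2)
                   - np.polyval(alpha, c))

roots = np.roots(alpha)               # candidate values of cos(theta1')
root_res = max(abs(np.polyval(alpha, r)) for r in roots)
```

In the actual algorithm only the real roots with $|\cos\theta_1^\prime|\le 1$ are kept; the closed-form quartic solver is preferred in the paper for speed.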
We compute the roots of \eqref{eq:4th} in closed form to find $ \cos\theta_1^\prime $. Similarly to~\cite{kneip2011novel} and~\cite{masselli2014new}, we employ Ferrari's method~\cite{cardano2007rules} to obtain the resolvent cubic of \eqref{eq:4th}, which is subsequently solved using Cardano's formula~\cite{cardano2007rules}.
Once the (up to) four real solutions of \eqref{eq:4th} have been determined, an optional step is to apply root polishing based on Newton's method, which improves accuracy at a minimal increase in processing cost (see Section~\ref{ssec:cost}).
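The root-polishing step can be sketched as follows (a hypothetical Python helper; two Newton iterations, matching the setting used in Section~\ref{ssec:cost}):

```python
def polish_root(alpha, x, iters=2):
    """Refine a root of p(x) = sum_j alpha[j] * x**j with Newton's method."""
    for _ in range(iters):
        p = sum(a * x**j for j, a in enumerate(alpha))
        dp = sum(j * a * x**(j - 1) for j, a in enumerate(alpha) if j > 0)
        if dp == 0.0:
            break          # avoid division by zero at a stationary point
        x -= p / dp
    return x
```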
Regardless, for each solution of $ \cos\theta_1^\prime $, we will have two possible solutions for $ \sin\theta_1^\prime $, \emph{i.e.},
\begin{equation}
\label{eq:st1p}\sin\theta_1^\prime=\pm\sqrt{1-\cos^2\theta_1^\prime}
\end{equation}
which, in general, will result in two different solutions for $ {}^{\scriptscriptstyle {C}}_{\scriptscriptstyle {G}}\mathbf{C} $. Note though that only one of them is valid if we use the fact that $ d_i>0 $ (see Appendix~\ref{ssec:stheta1p}).
\indent Next, for each pair of $ (\cos\theta_1^\prime,\sin\theta_1^\prime)$, we compute $ \cos\theta_3^\prime$ and $\sin\theta_3^\prime $ from \eqref{eq:theta3p}, which can be written as
\begin{equation}
\label{eq:t3}
\begin{bmatrix}
\cos \theta_3^\prime\\
\sin \theta_3^\prime
\end{bmatrix}=\frac{\sin\theta_1^\prime}{g_5\cos^2\theta_1^\prime+g_6\cos\theta_1^\prime+g_7}
\begin{bmatrix}
g_1\cos\theta_1^\prime+g_2\\
g_3\cos\theta_1^\prime+g_4
\end{bmatrix}
\end{equation}
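To make the chain from \eqref{eq:4th} to \eqref{eq:t3} concrete, the following sketch (Python with NumPy, using illustrative $f_{ij}$ values rather than a real camera configuration) evaluates \eqref{eq:t3} at a real root of \eqref{eq:4th} and verifies that the resulting $(\cos\theta_3^\prime,\sin\theta_3^\prime)$ has unit norm, which is exactly the constraint the quartic encodes:

```python
import numpy as np

# Illustrative f_ij values (not from a real configuration).
f11, f13, f15 = 1.0, 0.7, 0.0
f21, f22, f23, f24, f25 = 0.2, 1.1, 0.4, 0.3, 0.5

# g_1 ... g_7 as defined after eq. (4th)
g1, g2, g3, g4 = f13*f22, f13*f25 - f15*f23, f11*f23 - f13*f21, -f13*f24
g5, g6, g7 = f11*f22, f11*f25 - f15*f21, -f15*f24

# alpha_0 ... alpha_4, low to high degree
alpha = [g7**2 - g2**2 - g4**2,
         2*(g6*g7 - g1*g2 - g3*g4),
         g6**2 + 2*g5*g7 + g2**2 + g4**2 - g1**2 - g3**2,
         2*(g5*g6 + g1*g2 + g3*g4),
         g5**2 + g1**2 + g3**2]

# Pick a real root in [-1, 1] as cos(theta_1')
roots = np.roots(alpha[::-1])
c1 = next(r.real for r in roots if abs(r.imag) < 1e-9 and -1 <= r.real <= 1)
s1 = np.sqrt(1.0 - c1**2)

# Back-substitute into eq. (t3)
ct3, st3 = s1 / (g5*c1**2 + g6*c1 + g7) * np.array([g1*c1 + g2, g3*c1 + g4])
```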
\indent Lastly, instead of first computing $ \theta_1 $ from \eqref{eq:t1p} and $ \theta_3 $ from \eqref{eq:t3p} to find $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $ using \eqref{eq:ccc}, we hereafter describe a faster method for recovering $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $. Specifically, from \eqref{eq:ccc}, \eqref{eq:k3prime} and \eqref{eq:CuCv}, we have
\begin{align}
{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}&=\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{C}(\mathbf{k}_3,\theta_3)\nonumber\\
&=\mathbf{C}(\mathbf{k}_1,\theta_1)\mathbf{C}(\mathbf{k}_3^\prime,\theta_3)\mathbf{C}(\mathbf{k}_2,\theta_2)\nonumber\\
&=\mathbf{C}(\mathbf{k}_1,\theta_1^\prime)\mathbf{C}(\mathbf{k}_3^{\prime\prime},\theta_3)\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\label{eq:cold}
\end{align}
Since $ \mathbf{k}_1 $ is perpendicular to $ \mathbf{k}_3^{\prime\prime} $, we can construct a rotation matrix $ \mathbf{\bar{C}} $ such that
\begin{align}
\mathbf{\bar{C}}=\begin{bmatrix}
\mathbf{k}_1 & \mathbf{k}_3^{\prime\prime} & \mathbf{k}_1\times\mathbf{k}_3^{\prime\prime}
\end{bmatrix}\nonumber
\end{align}
and hence
\begin{align}
\label{eq:kcbar}\mathbf{k}_1=\mathbf{\bar{C}}\mathbf{e}_1,\ \mathbf{k}_3^{\prime\prime}=\mathbf{\bar{C}}\mathbf{e}_2
\end{align}
where
\begin{align}
\begin{bmatrix}
\mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3
\end{bmatrix}\triangleq\mathbf{I}_3\nonumber
\end{align}
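A minimal numerical sketch of this construction (Python with NumPy; the function name is ours):

```python
import numpy as np

def frame_from_pair(k1, k3pp):
    """Rotation matrix [k1, k3'', k1 x k3''] built from two
    mutually perpendicular unit vectors, as in the definition of C-bar."""
    return np.column_stack((k1, k3pp, np.cross(k1, k3pp)))
```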
Substituting \eqref{eq:kcbar} in \eqref{eq:cold}, we have
\begin{align}
{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}&=\mathbf{\bar{C}}\mathbf{C}(\mathbf{e}_1,\theta_1^\prime)\mathbf{C}(\mathbf{e}_2,\theta_3)\mathbf{\bar{C}}^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\nonumber\\
&=\mathbf{\bar{C}}\mathbf{C}(\mathbf{e}_1,\theta_1^\prime)\mathbf{C}(\mathbf{e}_2,\theta_3)\mathbf{C}(\mathbf{e}_2,\theta_3^\prime-\theta_3)\mathbf{\bar{\bar{C}}}\nonumber\\
\label{eq:crrc}&=\mathbf{\bar{C}}\mathbf{C}(\mathbf{e}_1,\theta_1^\prime)\mathbf{C}(\mathbf{e}_2,\theta_3^\prime)\mathbf{\bar{\bar{C}}}
\end{align}
where
\begin{align}
\mathbf{\bar{\bar{C}}}&\triangleq\mathbf{C}(\mathbf{e}_2,\theta_3-\theta_3^\prime)\mathbf{\bar{C}}^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\nonumber\\
&=\mathbf{C}(\mathbf{e}_2,\theta_3-\theta_3^\prime)\begin{bmatrix}
\mathbf{k}_1^\prime & \mathbf{k}_3 & \mathbf{k}_1^\prime\times\mathbf{k}_3
\end{bmatrix}^{\scriptscriptstyle {T}}\nonumber\\
&\stackrel{\eqref{eq:t3p}}{=}\begin{bmatrix}
{}^{\scriptscriptstyle {C}}\mathbf{b}_1 & \mathbf{k}_3 & {}^{\scriptscriptstyle {C}}\mathbf{b}_1\times\mathbf{k}_3
\end{bmatrix}^{\scriptscriptstyle {T}}\nonumber
\end{align}
The advantages of \eqref{eq:crrc} are: (i) the matrix product $ \mathbf{C}(\mathbf{e}_1,\theta_1^\prime)\mathbf{C}(\mathbf{e}_2,\theta_3^\prime) $ can be computed analytically; (ii) $ \mathbf{\bar{C}} $ and $ \mathbf{\bar{\bar{C}}} $ are invariant to the (up to) four possible solutions, and thus we only need to construct them once.
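The analytic product in (i) can be checked numerically. The sketch below (Python with NumPy) assumes the rotation convention $ \mathbf{C}(\mathbf{k},\theta)=\cos\theta\,\mathbf{I}-\sin\theta\lfloor\mathbf{k}\rfloor+(1-\cos\theta)\mathbf{k}\mathbf{k}^{\scriptscriptstyle {T}} $, inferred from the expansion appearing in Appendix~\ref{ssec:stheta1p}:

```python
import numpy as np

def skew(k):
    """Cross-product matrix: skew(k) @ v == np.cross(k, v)."""
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rot(k, t):
    """C(k, t) under the assumed convention."""
    return (np.cos(t) * np.eye(3) - np.sin(t) * skew(k)
            + (1.0 - np.cos(t)) * np.outer(k, k))

def rot_e1_e2(t1, t3):
    """Closed-form product C(e1, t1) C(e2, t3) under the same convention."""
    c1, s1, c3, s3 = np.cos(t1), np.sin(t1), np.cos(t3), np.sin(t3)
    return np.array([[c3, 0.0, -s3],
                     [s1 * s3, c1, s1 * c3],
                     [c1 * s3, -s1, c1 * c3]])
```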
\subsection{Solving for the position}
Substituting in \eqref{eq:di} the expression for $ d_3 $ from \eqref{eq:s1pd3} and rearranging terms yields
\begin{align}
\label{eq:gpc} {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}}&={}^{\scriptscriptstyle {G}}\mathbf{p}_3-\frac{\delta\sin\theta_1^\prime}{\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3}{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}{}^{\scriptscriptstyle {C}}\mathbf{b}_3
\end{align}
Note that we only use \eqref{eq:di} for $ i=3 $ to compute $ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}} $ from $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $.
Alternatively, if accuracy matters more than speed, we could find the position using a least-squares approach based on \eqref{eq:di} for $ i=1,2,3 $ (see Appendix~\ref{ssec:ls}).
Lastly, the proposed P3P solution is summarized in Alg.~\ref{alg:p3p}.
\begin{Ualgorithm}
\KwIn{$ {}^{\scriptscriptstyle {G}}\mathbf{p}_i,~i = 1,2,3 $ the features' positions; $ {}^{\scriptscriptstyle {C}}\mathbf{b}_i,~i = 1,2,3 $ bearing measurements}
\KwOut{$ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}} $, the position of the camera; $ {}^{\scriptscriptstyle {C}}_{\scriptscriptstyle {G}}\mathbf{C} $, the orientation of the camera}
Compute $ \mathbf{k}_1$, $ \mathbf{k}_3$ using \eqref{eq:ki}\\
Compute $ \mathbf{u}_i $ and $ \mathbf{v}_i $ using \eqref{eq:ui}, $ i=1,2 $\\
Compute $ \delta $ and $ \mathbf{k}_3^{\prime\prime} $ using \eqref{eq:delta} and \eqref{eq:k3pp}\\
Compute the $ f_{ij} $'s using \eqref{eq:fi1}-\eqref{eq:fi5}\\
Compute $ \alpha_i $, $ i=0,1,2,3,4 $ using \eqref{eq:A4}-\eqref{eq:g7}\\
Solve \eqref{eq:4th} to get $ n$ ($ n=2 $ or $ 4 $) real solutions for $ \cos\theta_1^\prime $, denoted as $ \cos\theta_1^{\prime(i)} $, $ i=1,\ldots,n $\\
\For {$i = 1:n$}{
$ \sin\theta_1^{\prime(i)}\gets \sign(\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3) \sqrt{1-\cos^2\theta_1^{\prime(i)}} $\\
Compute $ \cos\theta_3^{\prime(i)} $ and $ \sin\theta_3^{\prime(i)} $ using \eqref{eq:t3}\\
Compute $ {}^{\scriptscriptstyle {C}}_{\scriptscriptstyle {G}}\mathbf{C}^{(i)} $ using \eqref{eq:crrc}\\
Compute $ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}}^{(i)} $ using \eqref{eq:gpc}\\
}
\caption{Solving for the camera's pose}
\label{alg:p3p}
\end{Ualgorithm}
\section{Experimental results}
\label{sec:results}
Our algorithm is implemented\footnote{Our code is submitted along with the paper as supplemental material.} in C++ using the same linear algebra library, TooN~\cite{TooN_library}, as~\cite{kneip2011novel}.
We employ simulation data to test our code and compare it to the solutions of~\cite{kneip2011novel}~and~\cite{masselli2014new}.
For each single P3P problem, we randomly generate three 3D landmarks, which are uniformly distributed in a $ 0.4\times0.3\times0.4 $ cuboid centered around the origin.
The position of the camera is $ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}}=\mathbf{e}_3 $, and its orientation is $ {}^{\scriptscriptstyle {C}}_{\scriptscriptstyle {G}}\mathbf{C}=\mathbf{C}(\mathbf{e}_1,\pi) $.
\subsection{Numerical accuracy}
\label{ssec:numerical}
We generate simulation data without adding any noise or rounding error to the bearing measurements, and run all three algorithms on 50,000 randomly-generated configurations to assess their numerical accuracy.
Note that the position error is computed as the norm of the difference between the estimate and the ground truth of $ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}} $.
As for the orientation error, we compute the rotation matrix that transforms the estimated $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $ to the true one, convert it to the equivalent axis-angle representation, and use the absolute value of the angle as the error.
Since there are multiple solutions to a P3P problem, we compute the errors for all of them and pick the smallest one (\emph{i.e.}, the root closest to the true solution).
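This orientation-error metric can be sketched as follows (Python with NumPy; for the conversion to axis-angle, only the rotation angle is needed, and it follows directly from the trace):

```python
import numpy as np

def orientation_error(C_est, C_true):
    """Angle (rad) of the rotation taking the estimated attitude
    to the true one, i.e., the axis-angle magnitude of C_est C_true^T."""
    C_err = C_est @ C_true.T
    # trace(R) = 1 + 2 cos(angle); clip guards against round-off.
    c = np.clip((np.trace(C_err) - 1.0) / 2.0, -1.0, 1.0)
    return abs(np.arccos(c))
```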
The distributions and the means of the position and orientation errors are depicted in Figs.~\ref{fig:errorO}--\ref{fig:errorP} and Table~\ref{tab:err}.
As evident, we obtain similar results to those presented in~\cite{masselli2014new} for Kneip~\emph{et al.}'s~\cite{kneip2011novel} and Masselli and Zell's~\cite{masselli2014new} methods, while our approach outperforms both of them by two orders of magnitude in terms of accuracy.
This can be attributed to the fact that our algorithm requires fewer operations and thus exhibits lower numerical-error propagation.
Furthermore, and as shown in the results of Table~\ref{tab:err}, we can further improve the numerical precision by applying root polishing.
Typically, two iterations of Newton's method~\cite{ypma1995historical} lead to significantly better results, especially for the orientation, while taking only 0.01 $\mu$s per iteration, or about 4\% of the total processing time.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
 & position & orientation \\
\hline
Kneip's method & 1.18E-05 & 1.02E-05 \\
\hline
Masselli's method & 1.84E-08 & 4.89E-10\\
\hline
Proposed method & \bf1.66E-10 & \bf5.30E-12 \\
\hline
Proposed method+Root polishing & \bf5.07E-11 & \bf1.53E-13 \\
\hline
\end{tabular}
\caption{Nominal case: Pose mean errors.}
\label{tab:err}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{errorO.pdf}
\caption{Nominal case: Histogram of orientation errors.}
\label{fig:errorO}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{errorP.pdf}
\caption{Nominal case: Histogram of position errors.}
\label{fig:errorP}
\end{figure}
\subsection{Processing cost}
\label{ssec:cost}
To evaluate the computational cost of the three algorithms considered, we use a test program that solves 100,000 randomly generated P3P problems and measures the total execution time.
We run it on a 2.0 GHz$ \times $4 Core laptop and the results show that our code takes 0.54 $ \mu $s on average (0.52 $ \mu $s without root polishing) while~\cite{kneip2011novel} and~\cite{masselli2014new} take 1.3 $ \mu $s and 1.5 $ \mu $s, respectively.
This corresponds to a 2.5$\times$ speed-up (or 40\% of the time of~\cite{kneip2011novel}).
Note also that, in contrast to what is reported in~\cite{masselli2014new}, Masselli's method is actually slower than Kneip's.
As mentioned earlier, Masselli's results in~\cite{masselli2014new} are based on 1,000 runs of the same feature configuration, and take advantage of data caching to outperform Kneip's method.
\subsection{Robustness}
There are two typical singular cases that lead to infinitely many solutions in the P3P problem:
\begin{itemize}
\item Singular case 1: The three landmarks are collinear.
\item Singular case 2: Any two of the three bearing measurements coincide.
\end{itemize}
In practice, it is almost impossible for these conditions to hold exactly, but we may still have numerical issues when the geometric configuration is close to these cases.
To test the robustness of the three algorithms considered, we generate simulation data corresponding to small perturbations (uniformly distributed within $[-0.05,\,0.05]$) of the features' positions when in singular configurations.
The errors are defined as in Section~\ref{ssec:numerical}, and we use their medians to assess the robustness of the three methods.
For fairness, we do not apply root polishing to our code here.
According to the results shown in Figs.~\ref{fig:robustp1}--\ref{fig:robusto2} and Tables~\ref{tab:robust1}--\ref{tab:robust2}, our method achieves the best accuracy in these two close-to-singular cases. The reason is that we do not compute any quantities that may suffer from numerical issues, such as the cotangent and tangent used in~\cite{kneip2011novel} and~\cite{masselli2014new}, respectively.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
 & position & orientation \\
\hline
Kneip's method & 1.42E-14 & 1.34E-14 \\
\hline
Masselli's method & 7.13E-15 & 6.15E-15\\
\hline
Proposed method & \bf5.16E-15 & \bf3.73E-15 \\
\hline
\end{tabular}
\caption{Singular case 1: Pose median errors.}
\label{tab:robust1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
 & position & orientation \\
\hline
Kneip's method & 8.10E-14 & 8.85E-14 \\
\hline
Masselli's method & 7.24E-14 & 6.07E-14\\
\hline
Proposed method & \bf6.73E-14 & \bf1.75E-14 \\
\hline
\end{tabular}
\caption{Singular case 2: Pose median errors.}
\label{tab:robust2}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{errorP1.pdf}
\caption{Singular case 1: Histogram of position errors.}
\label{fig:robustp1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{errorO1.pdf}
\caption{Singular case 1: Histogram of orientation errors.}
\label{fig:robusto1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{errorP2.pdf}
\caption{Singular case 2: Histogram of position errors.}
\label{fig:robustp2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{errorO2.pdf}
\caption{Singular case 2: Histogram of orientation errors.}
\label{fig:robusto2}
\end{figure}
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper, we have introduced an algebraic approach for computing the solutions of the P3P problem in closed form.
Similarly to~\cite{kneip2011novel} and \cite{masselli2014new}, our algorithm does \textit{not} solve for the distances first, and hence reduces numerical-error propagation.
Differently though, it does not involve numerically-unstable functions (\emph{e.g.}, tangent or cotangent) and has simpler expressions than the two recent alternative methods~\cite{kneip2011novel,masselli2014new}; thus, it outperforms them in terms of speed, accuracy, and robustness in close-to-singular cases.
As part of our ongoing work, we are currently extending our approach to also address the case of the generalized (non-central camera) P3P~\cite{nister2007minimal}.
\section{Appendix}
\subsection{Proof of $ \mathbf{k}^\prime_3=\mathbf{k}_2\times\mathbf{k}_1$}
\label{ssec:k3p}
First, note that $ \mathbf{k}_2\times\mathbf{k}_1 $ is a unit vector since $ \mathbf{k}_2 $ is perpendicular to $ \mathbf{k}_1 $. Also, from \eqref{eq:k2xk1} and \eqref{eq:t2} we have
\begin{equation}
\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{k}^\prime_3=\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{k}_3=0
\end{equation}
Then, since both are unit vectors, we can prove $ \mathbf{k}^\prime_3=\mathbf{k}_2\times\mathbf{k}_1$ by showing that their inner product is equal to 1:
\begin{align}
(\mathbf{k}_2\times\mathbf{k}_1)^{\scriptscriptstyle {T}}\mathbf{k}_3^\prime&=\mathbf{k}_1^{\scriptscriptstyle {T}}(\mathbf{k}_3^\prime\times\mathbf{k}_2)\nonumber\\
&\stackrel{\eqref{eq:ki}}{=}\frac{\mathbf{k}_1^{\scriptscriptstyle {T}}(\mathbf{k}_3^\prime\times(\mathbf{k}_1\times\mathbf{k}_3))}{\|\mathbf{k}_1\times\mathbf{k}_3\|}\nonumber\\
&\stackrel{\eqref{eq:st2}}{=}\frac{\mathbf{k}_1^{\scriptscriptstyle {T}}(\mathbf{k}_1(\mathbf{k}_3^{\scriptscriptstyle {T}}\mathbf{k}_3^\prime)-\mathbf{k}_3(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{k}_3^\prime))}{\cos\theta_2}\nonumber\\
&=\frac{\mathbf{k}_3^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{k}_3}{\cos\theta_2}=1\nonumber
\end{align}
\subsection{Equivalence between the two solutions of $ \theta_2 $} \label{ssec:theta2}
When solving for $ \theta_2 $ [see \eqref{eq:st2}], we have two possible solutions $ \theta_2^{(1)}=\arccos(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{k}_3)-\frac{\pi}{2} $ and $ \theta_2^{(2)}=\theta_2^{(1)}+\pi $. Next, we will prove that using $ \theta_2^{(2)} $ to find $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $ is equivalent to using $ \theta_2^{(1)} $.
First, note that (see Fig.~\ref{fig:ki})
\begin{align}
\mathbf{C}(\mathbf{k}_2,\theta_2^{(1)}+\frac{\pi}{2})\mathbf{k}_3&=\mathbf{C}(\mathbf{k}_2,\frac{\pi}{2})\mathbf{k}_3^\prime=-\mathbf{k}_2\times\mathbf{k}_3^\prime\nonumber\\
&=-\mathbf{k}_2\times(\mathbf{k}_2\times\mathbf{k}_1)=\mathbf{k}_1\label{eq:t290}
\end{align}
Then, we can write $ \mathbf{C}(\mathbf{k}_2,\theta_2^{(2)}) $ as
\begin{align}
&\mathbf{C}(\mathbf{k}_2,\theta_2^{(2)})\nonumber\\
=&\mathbf{C}(\mathbf{k}_2,\theta_2^{(1)}+\frac{\pi}{2})\mathbf{C}(\mathbf{k}_2,\frac{\pi}{2})\nonumber\\
=&\mathbf{C}(\mathbf{k}_2,\theta_2^{(1)}+\frac{\pi}{2})\mathbf{C}(\mathbf{k}_3,\pi)\mathbf{C}(\mathbf{k}_2,-\frac{\pi}{2})\mathbf{C}(\mathbf{k}_3,-\pi)\nonumber\\
=&\mathbf{C}(\mathbf{C}(\mathbf{k}_2,\theta_2^{(1)}+\frac{\pi}{2})\mathbf{k}_3,\pi)\mathbf{C}(\mathbf{k}_2,\theta_2^{(1)}+\frac{\pi}{2})\mathbf{C}(\mathbf{k}_2,-\frac{\pi}{2})\mathbf{C}(\mathbf{k}_3,\pi)\nonumber\\
\stackrel{\eqref{eq:t290}}{=}&\mathbf{C}(\mathbf{k}_1,\pi)\mathbf{C}(\mathbf{k}_2,\theta_2^{(1)})\mathbf{C}(\mathbf{k}_3,\pi)\label{eq:ck2}
\end{align}
If we use $ \theta_2^{(2)} $ to find $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $,
\begin{align}
{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}&=\mathbf{C}(\mathbf{k}_1,\theta_1^{(2)})\mathbf{C}(\mathbf{k}_2,\theta_2^{(2)})\mathbf{C}(\mathbf{k}_3,\theta_3^{(2)})\nonumber\\
&\stackrel{\eqref{eq:ck2}}{=}\mathbf{C}(\mathbf{k}_1,\theta_1^{(2)})\mathbf{C}(\mathbf{k}_1,\pi)\mathbf{C}(\mathbf{k}_2,\theta_2^{(1)})\mathbf{C}(\mathbf{k}_3,\pi)\mathbf{C}(\mathbf{k}_3,\theta_3^{(2)})\nonumber\\
\label{eq:cccp}&=\mathbf{C}(\mathbf{k}_1,\theta_1^{(2)}+\pi)\mathbf{C}(\mathbf{k}_2,\theta_2^{(1)})\mathbf{C}(\mathbf{k}_3,\theta_3^{(2)}+\pi)
\end{align}
Note that $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $ in \eqref{eq:cccp} is of the same form as that in \eqref{eq:ccc}, so any solutions of $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $ computed using $ \theta_2^{(2)} $ will be found by using $ \theta_2^{(1)} $. Thus, we do not need to consider any other solutions for $ \theta_1 $ and $ \theta_3 $ beyond the ones found for $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $.
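The similarity identity used in \eqref{eq:ck2}, $ \mathbf{C}(\mathbf{C}\mathbf{k},\theta)=\mathbf{C}\,\mathbf{C}(\mathbf{k},\theta)\,\mathbf{C}^{\scriptscriptstyle {T}} $ for any rotation matrix $ \mathbf{C} $, holds under either sign convention for $ \mathbf{C}(\mathbf{k},\theta) $; a quick numerical check (Python with NumPy):

```python
import numpy as np

def skew(k):
    """Cross-product matrix: skew(k) @ v == np.cross(k, v)."""
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rot(k, t):
    # One possible sign convention for C(k, theta); the similarity
    # identity checked below holds for either convention.
    return (np.cos(t) * np.eye(3) - np.sin(t) * skew(k)
            + (1.0 - np.cos(t)) * np.outer(k, k))

# Fixed (arbitrary) unit axis and rotation matrix for the check.
k = np.array([2.0, -1.0, 0.5])
k /= np.linalg.norm(k)
Q = rot(np.array([0.0, 0.0, 1.0]), 0.9) @ rot(np.array([1.0, 0.0, 0.0]), -0.4)

lhs = rot(Q @ k, 0.7)          # C(Ck, theta)
rhs = Q @ rot(k, 0.7) @ Q.T    # C C(k, theta) C^T
```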
\subsection{Determining the sign of $ \sin\theta_1^\prime $}\label{ssec:stheta1p}
From \eqref{eq:st1p}, we have two solutions for $ \sin\theta_1^\prime $, and thus for $\theta_1^\prime $, with $ \theta_1^{\prime(2)}=-\theta_1^{\prime} $. This will also result in two solutions for $ \theta_3^\prime $ [see \eqref{eq:t3}] and, hence, two solutions for $ \theta_3 $: $ \theta_3 $ and $ \theta_3^{(2)}=\theta_3+\pi $. Considering these two options, we get two distinct solutions for $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $ [see \eqref{eq:cold}]:
\begin{align}
\mathbf{C}_1&\triangleq\mathbf{C}(\mathbf{k}_1,\theta_1^{\prime})\mathbf{C}(\mathbf{k}_3^{\prime\prime},\theta_3)\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\nonumber\\
\mathbf{C}_2&\triangleq\mathbf{C}(\mathbf{k}_1,-\theta_1^{\prime})\mathbf{C}(\mathbf{k}_3^{\prime\prime},\theta_3+\pi)\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\nonumber
\end{align}
Then, notice that
\begin{align}
\mathbf{C}_2\mathbf{C}_1^{\scriptscriptstyle {T}}&=\mathbf{C}(\mathbf{k}_1,-\theta_1^{\prime})\mathbf{C}(\mathbf{k}_3^{\prime\prime},\pi)\mathbf{C}(\mathbf{k}_1,-\theta_1^{\prime})\nonumber\\
&=\mathbf{C}(\mathbf{k}_3^{\prime\prime},\pi)\mathbf{C}(\mathbf{C}^{\scriptscriptstyle {T}}(\mathbf{k}_3^{\prime\prime},\pi)\mathbf{k}_1,-\theta_1^{\prime})\mathbf{C}(\mathbf{k}_1,-\theta_1^{\prime})\nonumber\\
&=\mathbf{C}(\mathbf{k}_3^{\prime\prime},\pi)\mathbf{C}(-\mathbf{k}_1,-\theta_1^{\prime})\mathbf{C}(\mathbf{k}_1,-\theta_1^{\prime})\nonumber\\
&=\mathbf{C}(\mathbf{k}_3^{\prime\prime},\pi)\nonumber
\end{align}
If we had $ \mathbf{C}_1=\mathbf{C}_2 $, this would require
\begin{align}
\mathbf{C}(\mathbf{k}_3^{\prime\prime},\pi)=\mathbf{C}_2\mathbf{C}_1^{\scriptscriptstyle {T}}=\mathbf{I}\nonumber
\end{align}
which cannot be true; hence, $ \mathbf{C}_1 $ and $ \mathbf{C}_2 $ cannot be equal. Thus, there are always two different solutions for $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $.
If, however, we use the fact that $ d_i>0\ (i=1,2,3) $, we can determine the sign of $ \sin\theta_1^\prime $ and choose the valid one among the two solutions of $ {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C} $. Subtracting \eqref{eq:di} for $ i=3 $ from \eqref{eq:di} for $ i=1 $, we have
\begin{align}
{}^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_3=&d_1{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}{}^{\scriptscriptstyle {C}}\mathbf{b}_1-d_3{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}{}^{\scriptscriptstyle {C}}\mathbf{b}_3\nonumber\\
\label{eq:d1-d3}\Rightarrow{}^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_3=&\mathbf{C}(\mathbf{k}_1,\theta_1^{\prime})\mathbf{C}(\mathbf{k}_3^{\prime\prime},\theta_3)\mathbf{C}(\mathbf{k}_1,\phi)\nonumber\\
\cdot&\mathbf{C}(\mathbf{k}_2,\theta_2)(d_1{}^{\scriptscriptstyle {C}}\mathbf{b}_1-d_3{}^{\scriptscriptstyle {C}}\mathbf{b}_3)
\end{align}
Multiplying both sides of \eqref{eq:d1-d3} by $ {\mathbf{k}_3^{\prime\prime}}^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,-\theta_1^{\prime}) $ from the left yields
\begin{align}
&{\mathbf{k}_3^{\prime\prime}}^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,-\theta_1^{\prime})({}^{\scriptscriptstyle {G}}\mathbf{p}_1-{}^{\scriptscriptstyle {G}}\mathbf{p}_3)\nonumber\\
=&{\mathbf{k}_3^{\prime\prime}}^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)(d_1{}^{\scriptscriptstyle {C}}\mathbf{b}_1-d_3{}^{\scriptscriptstyle {C}}\mathbf{b}_3)\nonumber\\
\Rightarrow&{\mathbf{k}_3^{\prime\prime}}^{\scriptscriptstyle {T}}(\cos\theta_1^\prime\mathbf{I}+\sin\theta_1^\prime\lfloor\mathbf{k}_1\rfloor+(1-\cos\theta_1^\prime)\mathbf{k}_1\mathbf{k}_1^{\scriptscriptstyle {T}})\mathbf{u}_1\nonumber\\
=&{\mathbf{k}_3^{\prime}}^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_2,\theta_2)(d_1{}^{\scriptscriptstyle {C}}\mathbf{b}_1-d_3{}^{\scriptscriptstyle {C}}\mathbf{b}_3)\nonumber\\
\stackrel{\eqref{eq:k3pp}}{\Rightarrow}&\sin\theta_1^\prime{\mathbf{k}_3^{\prime\prime}}^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\mathbf{u}_1=\mathbf{k}_3^{\scriptscriptstyle {T}}(d_1{}^{\scriptscriptstyle {C}}\mathbf{b}_1-d_3{}^{\scriptscriptstyle {C}}\mathbf{b}_3)\nonumber\\
\Rightarrow&-\sin\theta_1^\prime\mathbf{u}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor \mathbf{k}_3^{\prime\prime}=-d_3\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3\nonumber\\
\label{eq:s1pd3}\Rightarrow&\delta\sin\theta_1^\prime=d_3(\mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3)
\end{align}
Using the fact that $ d_3>0 $ and $ \delta>0 $, we select the sign of $ \sin\theta_1^\prime $ to be the same as that of $ \mathbf{k}_3^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_3 $.
\subsection{Least-squares solution for the position}
\label{ssec:ls}
$ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}} $ can also be computed via a least-squares approach, which is slower but more accurate than \eqref{eq:gpc}. Specifically, stacking \eqref{eq:di} for $ i=1,2,3 $ results in the following system:
\begin{equation}
\begin{bmatrix}
{}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}{}^{\scriptscriptstyle {C}}\mathbf{b}_1 & & & \mathbf{I}\\
& {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}{}^{\scriptscriptstyle {C}}\mathbf{b}_2 & & \mathbf{I}\\
& & {}^{\scriptscriptstyle {G}}_{\scriptscriptstyle {C}}\mathbf{C}{}^{\scriptscriptstyle {C}}\mathbf{b}_3 & \mathbf{I}\\
\end{bmatrix}\begin{bmatrix}
d_1\\
d_2\\
d_3\\
{}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}}
\end{bmatrix}=\begin{bmatrix}
{}^{\scriptscriptstyle {G}}\mathbf{p}_1 \\
{}^{\scriptscriptstyle {G}}\mathbf{p}_2 \\
{}^{\scriptscriptstyle {G}}\mathbf{p}_3 \\
\end{bmatrix}\nonumber
\end{equation}
Then, we only need to apply a QR decomposition~\cite{golub2012matrix} and back-solve for $ {}^{\scriptscriptstyle {G}}\mathbf{p}_{\scriptscriptstyle {C}} $ (\emph{i.e.}, we do not need to compute $ d_i,\ i=1,2,3 $).
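A compact sketch of this least-squares step (Python with NumPy; \texttt{np.linalg.lstsq} stands in for the QR factorization and back-substitution):

```python
import numpy as np

def position_lstsq(C_GC, b_C, p_G):
    """Solve the stacked system  [C b_i | I] [d_i; p_C] = p_i  (i = 1,2,3)
    in the least-squares sense and return the camera position p_C.

    C_GC: 3x3 rotation (G <- C); b_C: three unit bearings in {C};
    p_G: three landmark positions in {G}.
    """
    A = np.zeros((9, 6))
    rhs = np.concatenate([np.asarray(p, dtype=float) for p in p_G])
    for i in range(3):
        A[3*i:3*i+3, i] = C_GC @ np.asarray(b_C[i], dtype=float)
        A[3*i:3*i+3, 3:6] = np.eye(3)
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x[3:6]
```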
\subsection{Derivation of $ \bar{f}_{ij} $}
\begin{align}
\bar{f}_{i1}&\triangleq\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor^2\lfloor\mathbf{k}^{\prime\prime}_3\rfloor^2\mathbf{v}^{\prime\prime}_i\nonumber\\
&=\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor(\mathbf{k}^{\prime\prime}_3\mathbf{k}_1^{\scriptscriptstyle {T}}-(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{k}^{\prime\prime}_3)\mathbf{I})\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i\nonumber\\
&=(\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\mathbf{k}^{\prime\prime}_3)(\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i)\nonumber\\
&=((\mathbf{u}_i\times\mathbf{k}_1)^{\scriptscriptstyle {T}}\mathbf{k}^{\prime\prime}_3)(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\lfloor\mathbf{k}_3\rfloor\mathbf{v}_i)\nonumber\\
&=\delta{\mathbf{k}_1^\prime}^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_3\rfloor\mathbf{v}_i\nonumber\\
&=\delta\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_2\nonumber\\
\bar{f}_{i2}&\triangleq\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor^2\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i\nonumber\\
&=(\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\mathbf{k}^{\prime\prime}_3)(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{v}^{\prime\prime}_i)\nonumber\\
&=\delta(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{v}_i)\nonumber\\
&=\delta\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime\nonumber\\
\bar{f}_{i3}&\triangleq({\mathbf{k}^{\prime\prime}_3}^{\scriptscriptstyle {T}}\mathbf{v}^{\prime\prime}_i)\mathbf{u}_i^{\scriptscriptstyle {T}}\lfloor\mathbf{k}_1\rfloor\mathbf{k}^{\prime\prime}_3\nonumber\\
&=\delta\mathbf{k}_3^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_2,-\theta_2)\mathbf{C}(\mathbf{k}_1,-\phi)\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\mathbf{v}_i\nonumber\\
&=\delta\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_3\nonumber\\
\bar{f}_{i4}&\triangleq-(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime\prime}_3\rfloor^2\mathbf{v}^{\prime\prime}_i\nonumber\\
&=-(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)\mathbf{k}_1^{\scriptscriptstyle {T}}(\mathbf{k}^{\prime\prime}_3{\mathbf{k}^{\prime\prime}_3}^{\scriptscriptstyle {T}}-\mathbf{I})\mathbf{v}^{\prime\prime}_i\nonumber\\
&=(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)(\mathbf{k}_1^{\scriptscriptstyle {T}}\mathbf{v}^{\prime\prime}_i)\nonumber\\
&=(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)(\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)\nonumber\\
\bar{f}_{i5}&\triangleq-(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)\mathbf{k}_1^{\scriptscriptstyle {T}}\lfloor\mathbf{k}^{\prime\prime}_3\rfloor\mathbf{v}^{\prime\prime}_i=-(\mathbf{u}_i^{\scriptscriptstyle {T}}\mathbf{k}_1)(\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_2)\nonumber
\end{align}
\subsection{Derivation of $ f_{ij} $ and $ \bar{\bar{\mathbf{C}}} $}
First, note that
\begin{align}
\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_2&=({}^{\scriptscriptstyle {C}}\mathbf{b}_i\times{}^{\scriptscriptstyle {C}}\mathbf{b}_3)^{\scriptscriptstyle {T}}(\mathbf{k}_1^\prime\times\mathbf{k}_3)\nonumber\\
&=({}^{\scriptscriptstyle {C}}\mathbf{b}_i^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)-({}^{\scriptscriptstyle {C}}\mathbf{b}_i^{\scriptscriptstyle {T}}\mathbf{k}_3)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)\nonumber\\
&=({}^{\scriptscriptstyle {C}}\mathbf{b}_i^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)\nonumber\\
\mathbf{v}_i^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime&=-({}^{\scriptscriptstyle {C}}\mathbf{b}_i\times{}^{\scriptscriptstyle {C}}\mathbf{b}_3)^{\scriptscriptstyle {T}}(\mathbf{k}_2\times\mathbf{k}_3)\nonumber\\
&=-({}^{\scriptscriptstyle {C}}\mathbf{b}_i^{\scriptscriptstyle {T}}\mathbf{k}_2)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)+({}^{\scriptscriptstyle {C}}\mathbf{b}_i^{\scriptscriptstyle {T}}\mathbf{k}_3)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_2)\nonumber\\
&=-({}^{\scriptscriptstyle {C}}\mathbf{b}_i^{\scriptscriptstyle {T}}\mathbf{k}_2)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)\nonumber
\end{align}
Let $ \psi\triangleq\theta_3-\theta_3^\prime $, and thus
\begin{align}
\label{eq:psi}\begin{bmatrix}
\cos\theta_3^\prime\\
\sin\theta_3^\prime
\end{bmatrix}=\begin{bmatrix}\cos\psi\cos\theta_3+\sin\psi\sin\theta_3 \\ \cos\psi\sin\theta_3-\sin\psi\cos\theta_3\end{bmatrix}
\end{align}
From \eqref{eq:psi} and \eqref{eq:t3p}, we get
\begin{align}
\cos\psi&=\frac{\bar{f}_{11}}{\sqrt{\bar{f}_{11}^2+\bar{f}_{12}^2}}=-\frac{\bar{f}_{15}}{\sqrt{\bar{f}_{14}^2+\bar{f}_{15}^2}}\nonumber\\
&=\frac{\mathbf{v}_1^{\scriptscriptstyle {T}}\mathbf{k}_2}{\sqrt{(\mathbf{v}_1^{\scriptscriptstyle {T}}\mathbf{k}_2)^2+(\mathbf{v}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)^2}}\nonumber\\
&=\frac{{}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime}{\sqrt{({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_2)^2+({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)^2}}\nonumber\\
&=\frac{{}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime}{\|\mathbf{k}_3\times{}^{\scriptscriptstyle {C}}\mathbf{b}_1\|}\nonumber\\
&={}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime\nonumber
\end{align}
\begin{align}
\sin\psi&=\frac{\bar{f}_{12}}{\sqrt{\bar{f}_{11}^2+\bar{f}_{12}^2}}=\frac{\bar{f}_{14}}{\sqrt{\bar{f}_{14}^2+\bar{f}_{15}^2}}\nonumber\\
&=\frac{\mathbf{v}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime}{\sqrt{(\mathbf{v}_1^{\scriptscriptstyle {T}}\mathbf{k}_2)^2+(\mathbf{v}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)^2}}\nonumber\\
&=-{}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_2\nonumber
\end{align}
Then, from \eqref{eq:Fi} and \eqref{eq:fs1}, we derive the expressions of $ f_{ij} $:
\begin{align}
f_{11} &= \bar{f}_{11}\cos\psi+\bar{f}_{12}\sin\psi\nonumber\\
&= \delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)(({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_2)^2+({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)^2)\nonumber\\
&=\delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)\nonumber\\
f_{21} &= \bar{f}_{21}\cos\psi+\bar{f}_{22}\sin\psi\nonumber\\
&= \delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)(({}^{\scriptscriptstyle {C}}\mathbf{b}_2^{\scriptscriptstyle {T}}\mathbf{k}_2)({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_2)+({}^{\scriptscriptstyle {C}}\mathbf{b}_2^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime))\nonumber\\
&= \delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)({}^{\scriptscriptstyle {C}}\mathbf{b}_2^{\scriptscriptstyle {T}}(\mathbf{k}_2\mathbf{k}_2^{\scriptscriptstyle {T}}+\mathbf{k}_1^\prime{\mathbf{k}_1^\prime}^{\scriptscriptstyle {T}}){}^{\scriptscriptstyle {C}}\mathbf{b}_1)\nonumber\\
&= \delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)({}^{\scriptscriptstyle {C}}\mathbf{b}_2^{\scriptscriptstyle {T}}(\mathbf{I}-\mathbf{k}_3\mathbf{k}_3^{\scriptscriptstyle {T}}){}^{\scriptscriptstyle {C}}\mathbf{b}_1)\nonumber\\
&=\delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)({}^{\scriptscriptstyle {C}}\mathbf{b}_2^{\scriptscriptstyle {T}}{}^{\scriptscriptstyle {C}}\mathbf{b}_1)\nonumber\\
f_{22} &=-\bar{f}_{21}\sin\psi+\bar{f}_{22}\cos\psi\nonumber\\
&= \delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)(({}^{\scriptscriptstyle {C}}\mathbf{b}_2^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_2)-({}^{\scriptscriptstyle {C}}\mathbf{b}_2^{\scriptscriptstyle {T}}\mathbf{k}_2)({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime))\nonumber\\
&= \delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)({}^{\scriptscriptstyle {C}}\mathbf{b}_2\times{}^{\scriptscriptstyle {C}}\mathbf{b}_1)^{\scriptscriptstyle {T}}(\mathbf{k}_1^\prime\times\mathbf{k}_2)\nonumber\\
&= \delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)\|{}^{\scriptscriptstyle {C}}\mathbf{b}_2\times{}^{\scriptscriptstyle {C}}\mathbf{b}_1\|\mathbf{k}_3^{\scriptscriptstyle {T}}\mathbf{k}_3\nonumber\\
&=\delta({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)\|{}^{\scriptscriptstyle {C}}\mathbf{b}_2\times{}^{\scriptscriptstyle {C}}\mathbf{b}_1\|\nonumber\\
f_{15}&=\bar{f}_{15}\cos\psi-\bar{f}_{14}\sin\psi\nonumber\\
&=-(\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}_1)f_{11}/\delta\nonumber\\
&=-(\mathbf{u}_1^{\scriptscriptstyle {T}}\mathbf{k}_1)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)\nonumber\\
f_{24}&=\bar{f}_{25}\sin\psi+\bar{f}_{24}\cos\psi\nonumber\\
&=(\mathbf{u}_2^{\scriptscriptstyle {T}}\mathbf{k}_1)f_{22}/\delta\nonumber\\
&=(\mathbf{u}_2^{\scriptscriptstyle {T}}\mathbf{k}_1)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)\|{}^{\scriptscriptstyle {C}}\mathbf{b}_2\times{}^{\scriptscriptstyle {C}}\mathbf{b}_1\|\nonumber\\
f_{25}&=\bar{f}_{25}\cos\psi-\bar{f}_{24}\sin\psi\nonumber\\
&=-(\mathbf{u}_2^{\scriptscriptstyle {T}}\mathbf{k}_1)f_{21}/\delta\nonumber\\
&=-(\mathbf{u}_2^{\scriptscriptstyle {T}}\mathbf{k}_1)({}^{\scriptscriptstyle {C}}\mathbf{b}_3^{\scriptscriptstyle {T}}\mathbf{k}_3)({}^{\scriptscriptstyle {C}}\mathbf{b}_1^{\scriptscriptstyle {T}}\mathbf{k}_1^\prime)\nonumber
\end{align}
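The step in the derivation of $f_{22}$ from the difference of products of inner products to the cross-product form rests on the Binet--Cauchy identity $(\mathbf{a}^{\scriptscriptstyle T}\mathbf{c})(\mathbf{b}^{\scriptscriptstyle T}\mathbf{d})-(\mathbf{a}^{\scriptscriptstyle T}\mathbf{d})(\mathbf{b}^{\scriptscriptstyle T}\mathbf{c})=(\mathbf{a}\times\mathbf{b})^{\scriptscriptstyle T}(\mathbf{c}\times\mathbf{d})$. A quick numerical sanity check of this identity (NumPy, random vectors; not part of the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def binet_cauchy_residual(a, b, c, d):
    """Difference between the two sides of the Binet-Cauchy identity
    (a.c)(b.d) - (a.d)(b.c) = (a x b).(c x d) for 3-vectors."""
    lhs = a.dot(c) * b.dot(d) - a.dot(d) * b.dot(c)
    rhs = np.cross(a, b).dot(np.cross(c, d))
    return lhs - rhs

# Check the identity on a batch of random 3-vector quadruples.
residuals = [
    binet_cauchy_residual(*rng.standard_normal((4, 3)))
    for _ in range(100)
]
assert np.allclose(residuals, 0.0, atol=1e-12)
print("Binet-Cauchy identity verified on 100 random draws")
```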
Additionally, we can derive the expression of $ \mathbf{\bar{\bar{C}}} $, which is defined in \eqref{eq:crrc}:
\begin{align}
\mathbf{\bar{\bar{C}}}&=\mathbf{C}(\mathbf{e}_2,\theta_3-\theta_3^\prime)\mathbf{\bar{C}}^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_1,\phi)\mathbf{C}(\mathbf{k}_2,\theta_2)\nonumber\\
&=\mathbf{C}(\mathbf{e}_2,\psi)\begin{bmatrix}
\mathbf{k}_1 & \mathbf{k}_3^\prime & \mathbf{k}_2
\end{bmatrix}^{\scriptscriptstyle {T}}\mathbf{C}(\mathbf{k}_2,\theta_2)\nonumber\\
&=\mathbf{C}(\mathbf{e}_2,\psi)\begin{bmatrix}
\mathbf{k}_1^\prime & \mathbf{k}_3 & \mathbf{k}_2
\end{bmatrix}^{\scriptscriptstyle {T}}\nonumber\\
&=\begin{bmatrix}
\cos\psi\mathbf{k}_1^\prime-\sin\psi\mathbf{k}_2 & \mathbf{k}_3 & \sin\psi\mathbf{k}_1^\prime+\cos\psi\mathbf{k}_2
\end{bmatrix}^{\scriptscriptstyle {T}}\nonumber\\
&=\begin{bmatrix}
(\mathbf{k}_1^\prime{\mathbf{k}_1^\prime}^{\scriptscriptstyle {T}}+\mathbf{k}_2\mathbf{k}_2^{\scriptscriptstyle {T}}){}^{\scriptscriptstyle {C}}\mathbf{b}_1 & \mathbf{k}_3 & \sin\psi\mathbf{k}_1^\prime+\cos\psi\mathbf{k}_2
\end{bmatrix}^{\scriptscriptstyle {T}}\nonumber\\
&=\begin{bmatrix}
(\mathbf{I}-\mathbf{k}_3\mathbf{k}_3^{\scriptscriptstyle {T}}){}^{\scriptscriptstyle {C}}\mathbf{b}_1 & \mathbf{k}_3 & \sin\psi\mathbf{k}_1^\prime+\cos\psi\mathbf{k}_2
\end{bmatrix}^{\scriptscriptstyle {T}}\nonumber\\
&=\begin{bmatrix}
{}^{\scriptscriptstyle {C}}\mathbf{b}_1 & \mathbf{k}_3 & \sin\psi\mathbf{k}_1^\prime+\cos\psi\mathbf{k}_2
\end{bmatrix}^{\scriptscriptstyle {T}}\nonumber\\
&=\begin{bmatrix}
{}^{\scriptscriptstyle {C}}\mathbf{b}_1 & \mathbf{k}_3 & {}^{\scriptscriptstyle {C}}\mathbf{b}_1\times\mathbf{k}_3
\end{bmatrix}^{\scriptscriptstyle {T}}\nonumber
\end{align}
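The closed form $\mathbf{\bar{\bar{C}}}=[{}^{\scriptscriptstyle C}\mathbf{b}_1 \;\; \mathbf{k}_3 \;\; {}^{\scriptscriptstyle C}\mathbf{b}_1\times\mathbf{k}_3]^{\scriptscriptstyle T}$ is a proper rotation whenever ${}^{\scriptscriptstyle C}\mathbf{b}_1$ is a unit vector orthogonal to $\mathbf{k}_3$, as holds in the derivation above (where $\|\mathbf{k}_3\times{}^{\scriptscriptstyle C}\mathbf{b}_1\|=1$ was used). A hedged numerical check with randomly generated unit vectors (not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def rotation_from_pair(b1, k3):
    """Stack [b1, k3, b1 x k3] as rows, mirroring the closed form above."""
    return np.vstack([b1, k3, np.cross(b1, k3)])

for _ in range(50):
    k3 = rng.standard_normal(3)
    k3 /= np.linalg.norm(k3)
    # Build b1 as a random unit vector orthogonal to k3.
    v = rng.standard_normal(3)
    b1 = v - v.dot(k3) * k3
    b1 /= np.linalg.norm(b1)
    C = rotation_from_pair(b1, k3)
    assert np.allclose(C @ C.T, np.eye(3), atol=1e-12)    # orthonormal rows
    assert np.isclose(np.linalg.det(C), 1.0, atol=1e-12)  # proper rotation
print("closed-form matrix is a proper rotation in all trials")
```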
\subsection{Comparison with the P3P code in OpenCV}
We also compared the performance of our code with that in OpenCV (based on \cite{gao2003complete}), using the same setup as Sec.~\ref{sec:results}. The error distributions are shown in Fig.~\ref{fig:errorOG} and Fig.~\ref{fig:errorPG}. The OpenCV code has lower numerical accuracy than ours. It also takes around 3~$\mu$s on average to compute P3P once, which is much slower than our 0.52~$\mu$s (see Sec.~\ref{sec:results}).
In conclusion, our code performs substantially better than the one in OpenCV.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{errorOG.pdf}
\caption{Nominal case: Histogram of orientation errors.}
\label{fig:errorOG}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{errorPG.pdf}
\caption{Nominal case: Histogram of position errors.}
\label{fig:errorPG}
\end{figure}
{
\bibliographystyle{ieee}
% Source: https://arxiv.org/abs/2302.00453
% Title: Width and Depth Limits Commute in Residual Networks
% Abstract: We show that taking the width and depth to infinity in a deep neural network with skip connections, when branches are scaled by $1/\sqrt{depth}$ (the only nontrivial scaling), results in the same covariance structure no matter how that limit is taken. This explains why the standard infinite-width-then-depth approach provides practical insights even for networks with depth of the same order as width. We also demonstrate that the pre-activations, in this case, have Gaussian distributions, which has direct applications in Bayesian deep learning. We conduct extensive simulations that show an excellent match with our theoretical findings.
\section{Introduction}
In recent years, deep neural networks have achieved remarkable success in a variety of tasks, such as image classification and natural language processing. However, the behavior of these networks in the limit of large depth and large width is still not fully understood.
The success of large language and vision models has recently amplified an existing trend of research on neural network limits.
Two main limits are the large-width and the large-depth limits.
While the former by itself is now relatively well understood \citep{neal, samuel2017, lee_gaussian_process, yang_tensor3_2020, hayou2019impact}, the latter and the interaction between the two have not been studied as much.
In particular, a basic question is: do these two limits commute?
Recent literature suggests that, at initialization, in certain kinds of multi-layer perceptrons (MLPs) or residual neural networks (resnets), the depth and width limits do not commute; this would imply that in practice, such kinds of networks would behave quite differently depending on whether width is much larger than depth or the other way around.
However, in this paper, we show: to the contrary, at initialization, for a resnet with branches scaled the natural way so as to avoid blowing up the output,%
\footnote{This contrasts with \cite{li2022sde} whose non-commute result requires the branches to be large enough to blow up the network output in the case of standard resnet.}
the width and depth limits \emph{do commute}.
This justifies prior calculations that take the width limit first, then depth, to understand the behavior of deep residual networks, such as prior works in the signal propagation literature \citep{hayou21stable}.
In addition to the significance of the results, the mathematical novelty of this paper lies in the proof technique: we take the depth limit first (fixing width), then take the width limit, in contrast to typical prior work, which takes the limits in the opposite order. In the process, we prove a concentration of measure result for a kind of McKean-Vlasov (mean-field) process. Our results provide new insights into the behavior of deep neural networks, and we discuss implications for the design and analysis of these networks.
The proofs of the theoretical results are provided in the appendix and referenced after each result. Empirical evaluations support our theoretical findings.
\section{Related Work}
The theoretical analysis of randomly initialized neural networks with an infinite number of parameters has yielded a wealth of interesting results, both theoretical and practical. A majority of this research has concentrated on the scenario in which the width of the network is taken to infinity while the depth is fixed. However, in recent years, there has been growing interest in the large-depth limit of these networks. In this overview, we summarize existing results in this area, though the list is not exhaustive. A more comprehensive literature review is provided in \cref{sec:comprehensive_lit_review}.
\subsection{Infinite-Width Limit}
The study of the infinite-width limit of neural network architectures has been a topic of significant research interest, yielding various theoretical and algorithmic innovations. These include initialization methods, such as the Edge of Chaos \citep{poole, samuel2017, yang2017meanfield, hayou2019impact}, and the selection of activation functions \citep{hayou2019impact, martens2021rapid, zhang2022deep, wolinski2022gaussian}, which have been shown to have practical benefits. In the realm of Bayesian analysis, the infinite-width limit presents an intriguing framework for Bayesian deep learning, as it is characterized by a Gaussian process prior. Several studies (e.g. \cite{neal, lee_gaussian_process, yang_tensor3_2020, matthews, hron20attention}) have investigated the weak limit of neural networks as the width increases towards infinity, and have demonstrated that the network's output converges to a distribution modeled by a Gaussian process. Bayesian inference utilizing this ``neural'' Gaussian process has been explored in \citep{lee_gaussian_process, hayou21stable}.\footnote{It is worth mentioning that kernel methods such as NNGP and NTK significantly underperform properly tuned finite-width networks trained using SGD; see \cite{yang2022efficient}.}
The Neural Tangent Kernel (NTK) is another interesting area of research where the infinite-width limit proves useful. In this limit, the NTK converges to a deterministic kernel, given appropriate parameterization. This limiting kernel is fixed at initialization and remains constant throughout the training process. The optimization and generalization characteristics of the NTK have been the subject of extensive study in the literature (see e.g. \cite{Liu2022connecting, arora2019finegrained}).
\subsection{Infinite-Depth Limit}
The infinite-depth limit of neural networks with random initialization is a less explored area compared to the study of the infinite-width limit. Existing research in this field can be categorized into three groups based on the approach and criteria used to consider the infinite-depth limit in relation to the width.
\emph{Infinite-width-then-depth limit.} In this case, the width of the neural network is taken to infinity first, followed by the depth. This is the infinite-depth limit of infinite-width neural networks. This limit has been extensively utilized to explore various aspects of neural networks, such as examining the neural covariance and deriving the Edge of Chaos initialization scheme \citep{samuel2017, poole, yang2017meanfield}, evaluating the impact of the activation function \citep{hayou2019impact, martens2021rapid}, and studying the behavior of the Neural Tangent Kernel (NTK) \citep{hayou_ntk, xiao2020disentangling}.
\emph{The joint infinite-width-and-depth limit.} In this case, the ratio of depth to width is fixed, and the width and depth are jointly taken to infinity. Only a limited number of works have investigated this joint limit. In \citep{li21loggaussian}, the authors showed that for a particular type of residual neural network (ResNet), the network output exhibits a (scaled) log-normal behavior in this limit. This differs from the sequential limit, in which the width is first taken to infinity followed by the depth, where the distribution of the network output is asymptotically normal \citep{samuel2017, hayou2019impact}. Additionally, in \citep{li2022sde}, the authors examined the covariance kernel of a multi-layer perceptron (MLP) in the joint limit and proved that it weakly converges to the solution of a Stochastic Differential Equation (SDE). Other works have investigated the correlation structure and the NTK in this limit \citep{Hanin2020Finite, hanin2022correlation}.
\emph{Infinite-depth limit of finite-width neural networks.} In the previous limits, the width of the neural network was extended to infinity, either independently or in conjunction with the depth. However, it is natural to inquire about the behavior of networks in which the width is fixed, while the depth is increased towards infinity. In \cite{peluchetti2020resnetdiffusion}, it was shown that for a particular ResNet architecture, the pre-activations converge weakly to a diffusion process in the infinite-depth limit, which follows from existing results in stochastic calculus on the convergence of Euler-Maruyama discretization schemes to continuous Stochastic Differential Equations. More recent work by \cite{hayou2022on} evaluated the impact of the activation function on the distribution of the pre-activation and characterized the distribution of the post-activation norms in this limit.
In this work, we are particularly interested in the case where both the width and depth are taken to infinity.
\section{Setup and Definitions}
When analyzing the asymptotic behavior of randomly initialized neural networks, various notions of probabilistic convergence are employed, depending on the context. These notions are typically well-established definitions in probability theory. In this study, we particularly focus on two forms of convergence:
\vspace{-0.2cm}
\begin{itemize}
\item Convergence in distribution (weak convergence): we show that the pre-activations converge weakly to a Gaussian distribution in the limit $\min(n, L) \to \infty$. We use the Wasserstein metric to quantify the convergence rate for the weak convergence.
\item Convergence in $L_2$ (strong convergence): we show that the neural covariance\footnote{The neural covariance is a (linear) measure of similarity between the pre-activations for different inputs. We define this quantity in \cref{sec:mlps}.} converges to a deterministic limit that is characterized by a differential flow $q_t$ as $\min(n, L)$ approaches infinity.
\end{itemize}
\begin{definition}[Weak convergence]
Let $d \geq 1$. We say that a sequence of $\mathbb{R}^d$-valued random variables $(X_k)_{ k\geq 1}$ converges weakly to a random variable $Z$ if the cumulative distribution function of $X_k$ converges point-wise to that of $Z$.
\end{definition}
There are various metrics that can be utilized to measure the weak convergence rate. One commonly used metric is the Wasserstein metric.
\begin{definition}[Wasserstein distance $\mathcal{W}_1$]
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}^d$. The Wasserstein distance between $\mu$ and $\nu$ is defined by
\begin{align*}
\mathcal{W}_1(\mu, \nu) &= \sup_{f \in \textup{Lip}_1} \left| \int f(x) (d \mu - d \nu) \right|\\
&= \sup_{f \in \textup{Lip}_1} \left| \mathbb{E}_{\mu} f - \mathbb{E}_\nu f \right|,
\end{align*}
where $\textup{Lip}_1$ is the set of Lipschitz continuous functions from $\mathbb{R}^d$ to $\mathbb{R}$ with a Lipschitz constant $\leq 1$.
\end{definition}
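For scalar random variables, $\mathcal{W}_1$ admits a closed form in terms of quantile functions, $\mathcal{W}_1(\mu,\nu)=\int_0^1 |F_\mu^{-1}(u)-F_\nu^{-1}(u)|\,du$, so it can be estimated from two equal-sized samples by sorting. A small NumPy illustration (sample sizes are arbitrary; not from the paper):

```python
import numpy as np

def w1_empirical(x, y):
    """Empirical 1-Wasserstein distance between two equal-sized 1-D samples,
    via the quantile (sorted-sample) representation of W1 on the real line."""
    assert x.shape == y.shape
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
m = 200_000
x = rng.normal(0.0, 1.0, m)
y = rng.normal(1.0, 1.0, m)
dist = w1_empirical(x, y)
# Translating a distribution by c moves it by exactly c in W1.
assert abs(dist - 1.0) < 0.02
print(f"estimated W1: {dist:.3f} (exact value: 1)")
```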
In this work, we define \emph{strong} convergence to be the $L_2$ convergence as described in the following definition.
\begin{definition}[Strong convergence]
Let $d \geq 1$. We say that a sequence of $\mathbb{R}^d$-valued random variables $(X_k)_{ k\geq 1}$ converges in $L_2$ (or strongly) to a random variable $Z$ if $\lim_{k \to \infty} \|X_k - Z\|_{L_2} =0$, where the $L_2$ is defined by $\|X\|_{L_2} = \left(\mathbb{E}[\|X\|^2] \right)^{1/2}$.
\end{definition}
Both of these forms of convergence are valuable when analyzing the behavior of neural networks with an infinite number of parameters. They facilitate the understanding of the network's asymptotic behavior which enables predictions about the finite-but-large width-and-depth regimes.
\section{Warmup: Depth and Width Generally Do Not Commute}\label{sec:mlps}
In this section, we present corollaries of previously established results that demonstrate that depth and width typically do not commute. The width and depth of the network are denoted by $n$ and $L$, respectively, and the input dimension is denoted by $d$. Let $d, n, L \geq 1$, and consider a simple MLP architecture given by the following:
\begin{equation}\label{eq:mlp}
\begin{aligned}
Y_0(a) &= W_{in} a, \quad a \in \mathbb{R}^d\\
Y_l(a) &= W_l \phi(Y_{l-1}(a)), \hspace{0.1cm} l \in [1:L],
\end{aligned}
\end{equation}
where $\phi: \mathbb{R} \to \mathbb{R}$ is the ReLU activation function, $W_{in} \in \mathbb{R}^{n \times d}$, and $W_l \in \mathbb{R}^{n \times n}$ is the weight matrix in the $l^{th}$ layer. We assume that the weights are randomly initialized with \textit{iid}~ Gaussian variables $W_l^{ij} \sim \mathcal{N}(0, \frac{2}{n})$,\footnote{This is the standard He initialization which coincides with the Edge of Chaos initialization \citep{samuel2017}. This is the only choice of the variance that guarantees stability in both the large-width and the large-depth limits.} $W_{in}^{ij} \sim \mathcal{N}(0, \frac{1}{d})$. For the sake of simplification, we only consider networks with no bias, and we omit the dependence of $Y_l$ on $n$ and $L$ in the notation. While the activation function is only defined for real numbers ($1$-dimensional), we will abuse the notation and write $\phi(z) = (\phi(z^1), \dots, \phi(z^k))$ for any $k$-dimensional vector $z = (z^1, \dots, z^k) \in \mathbb{R}^k$ for any $k \geq 1$. We refer to the vectors $\{Y_l, l=0, \dots, L\}$ as \emph{pre-activations} and the vectors $\{\phi(Y_l), l=0, \dots, L\}$ as \emph{post-activations}.
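A minimal NumPy sketch of the MLP \eqref{eq:mlp} at He initialization; it checks that the per-coordinate second moment of the pre-activations stays near $\|a\|^2/d$ across depth (the recursion $\mathbb{E}\|Y_l\|^2=\mathbb{E}\|Y_{l-1}\|^2$ is exact for ReLU with weight variance $2/n$). The widths, depths, and sample counts below are arbitrary choices for illustration:

```python
import numpy as np

def mlp_forward(a, n, L, rng):
    """One forward pass of the ReLU MLP defined in the text, at He init."""
    d = a.shape[0]
    y = (rng.standard_normal((n, d)) / np.sqrt(d)) @ a   # Y_0 = W_in a
    for _ in range(L):
        W = rng.standard_normal((n, n)) * np.sqrt(2.0 / n)
        y = W @ np.maximum(y, 0.0)                        # Y_l = W_l phi(Y_{l-1})
    return y

rng = np.random.default_rng(0)
d, n, L, N = 30, 300, 10, 300
a = rng.standard_normal(d)
a *= np.sqrt(d) / np.linalg.norm(a)       # normalize so that ||a||^2 = d
# At He init, E||Y_l||^2 is preserved layer to layer, so the mean squared
# coordinate of Y_L should stay near ||a||^2 / d = 1.
m = np.mean([np.mean(mlp_forward(a, n, L, rng) ** 2) for _ in range(N)])
assert 0.8 < m < 1.2
print(f"mean squared pre-activation after {L} layers: {m:.3f} (theory: 1)")
```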
\subsection{Distribution of the Pre-Activations in the Limit $n, L \to \infty$}
It is well-established that in fixed-depth neural networks of any type, as the width $n$ approaches infinity, the pre-activations exhibit Gaussian behavior. This phenomenon was initially demonstrated for single-layer perceptrons by \cite{neal}, and has since been extended to multi-layer perceptrons (MLPs) and general neural architectures \citep{yang_tensor3_2020}. This behavior can be roughly attributed to the Central Limit Theorem (CLT), although a formal proof requires a careful application of the CLT for exchangeable random variables in the MLP case, as detailed in \cite{matthews}, or the Law of Large Numbers and the Gaussian conditioning trick in the general case \citep{yang2019tensor_i}. A question that the reader may have in this context is: \emph{Why is the Gaussian distribution of significance?}\\
One of the key implications of the Gaussian behavior of infinite-width neural networks is their equivalence to Gaussian processes. By utilizing existing methods of Gaussian process regression, this equivalence facilitates the application of exact Bayesian inference to infinite-width neural networks, referred to as the neural network Gaussian process (NNGP, \cite{lee_gaussian_process}). The Gaussian behavior also provides an interesting framework to study signal propagation in deep neural networks; since a Gaussian distribution is fully characterized by its mean and covariance structure, understanding these quantities is sufficient to capture what happens inside the network at initialization.
When the depth $L$ is also taken to infinity, different behaviors may emerge. Specifically, in the case of the MLP architecture \eqref{eq:mlp}, if a fixed layer index $l < L$ is considered and the behavior of $Y_l$ is examined as $n$ and $L$ approach infinity, $Y_l$ will exhibit the same limiting behavior as in the case of $n \to \infty$ and the depth is fixed.
Some simple intuitive calculations indicate that it is only meaningful to study the limiting behavior of layers where the layer index is proportional to the depth $L$ (and not proportional to $L^\alpha$ for any $\alpha < 1$).%
\footnote{Indeed, the $(\frac 1 n)$-scaled Gram matrix of $\{Y_l(a): a \in \mathbb{R}^d\}$ fluctuates with size $\tilde \Theta(1/\sqrt{n})$ around its $n\to \infty$ limit for any fixed $l$.
This fluctuation is asymptotically independent across every layer, so the accumulated fluctuation at layer $l=L^\alpha$ is $\tilde \Theta(L^{\alpha/2}/\sqrt{n})$.
This is $\tilde \Theta(1)$ iff $\alpha = 1$.}
In this case, the quantity of interest is $Y_{\lfloor t L \rfloor}$ for some $t \in [0,1]$. Varying $t$ between $0$ and $1$ encompasses all layer indices, even in the infinite-depth limit.
Let us now state some corollaries of existing results. The following, an immediate consequence of existing results (see e.g. \cite{matthews}), characterizes the distribution of the pre-activations in the limit $n \to \infty$, then $L \to \infty$.
\begin{prop}[Infinite-width-then-depth]\label{prop:mlp_width_then_depth_gaussian}
Consider the MLP architecture given by \cref{eq:mlp} and let $a \in \mathbb{R}^d$ such that $a \neq 0$. Then, in the limit ``$n\to \infty$, then $L \to \infty$'', $Y^1_{L}(a)$\footnote{$Y^1_{L}(a)$ refers to the first neuron in the last layer.} converges weakly to a Gaussian distribution.
\end{prop}
When the width and depth of a neural network both tend towards infinity, the limiting behavior can vary depending on the relative rates at which the width and depth increase. Specifically, if the width and depth both approach infinity while the ratio of width to depth remains constant, the distribution of the pre-activations in the last layer is not Gaussian. This is a corollary of a more general result established by \citep{li21loggaussian} (the case when $\alpha = 0$) under certain conditions and assumptions, which was also verified through empirical evidence. We omit here the rigorous statement of the result and only illustrate this behaviour with simulations.
\begin{wrapfigure}{r}{0.5\textwidth}
\begin{center}
\includegraphics[width=0.48\textwidth]{figures/non_gaussian_mlp.pdf}
\end{center}
\caption{Histogram of $Y^1_L(a)$ for an MLP \cref{eq:mlp} with $(n, L) \in \{(10000, 500), (500, 500)\}$, $d=30$, and $a = \sqrt{d} \frac{u}{\|u\|}$ and $u \in \mathbb{R}^d$ has all coordinates randomly sampled from the uniform distribution $\mathcal{U}([0,1])$. The histogram is based on $N=10^4$ simulations. The red dashed line represents the theoretical distribution (Gaussian) predicted in \cref{prop:mlp_width_then_depth_gaussian}. We also perform a Kolmogorov-Smirnov normality test and report the KS statistic and the p-value.}
\label{fig:non_gaussian_mlp}
\vspace{-1cm}
\end{wrapfigure}
Empirical evidence supports this difference in limiting behavior. As shown in \cref{fig:non_gaussian_mlp}, the distribution of $Y^1_L(a)$ is observed to be (nearly) Gaussian when the width is significantly greater than the depth, as evidenced by a small KS statistic. However, when the width is of the same magnitude as the depth, the distribution exhibits heavy tails. This can be seen by comparing the distribution for the settings $(n, L) \in \{(10000, 500), (500, 500)\}$.
\subsection{Neural Covariance/Correlation}
In the literature on signal propagation, there is a significant interest in understanding the covariance/correlation structure of neural networks. Specifically, researchers have sought to understand the covariance of the pre-activation vectors $Y_{\lfloor t L \rfloor}(a)$ and $Y_{\lfloor t L \rfloor}(b)$ (often called the neural covariance) for two different inputs $a, b \in \mathbb{R}^d$. A natural question in this context is: \emph{Why do we study the covariance structure?}
It is well-established that even for properly initialized multi-layer perceptrons (MLPs), the network outputs $Y_L(a)$ and $Y_L(b)$ become perfectly correlated (correlation $=1$) in the limit of ``$n \to \infty$, \emph{then} $L \to \infty$'' \citep{samuel2017, poole, hayou2019impact, yang2019fine}. This can lead to unstable behavior of the gradients and make the model untrainable as the depth increases, and it also results in the inputs being non-separable by the network\footnote{To see this, assume that the inputs are normalized. In this case, the correlation between the pre-activations of the last layer for two different inputs converges to 1. This implies that as the depth grows, the network output becomes similar for all inputs, and the network no longer separates the data. This is problematic for the first step of gradient descent, as it implies that the information from the data is (almost) unused in the first gradient update.}. To address this issue, several techniques involving targeted modifications of the activation function have been proposed \citep{martens2021rapid, zhang2022deep}. In the case of ResNets, the correlation still converges to 1, but at a polynomial rate \citep{yang2017meanfield}. A solution to this problem has been proposed by introducing well-chosen scaling factors in the residual branches, resulting in a correlation kernel that does not converge to 1 \citep{hayou21stable}. This analysis was carried out in the limit ``$n \to \infty$, then $L \to \infty$''. In the case of the joint limit $n,L \to \infty$ with $n/L$ fixed, it has been shown that the covariance/correlation between $Y_{\lfloor t L \rfloor}(a)$ and $Y_{\lfloor t L \rfloor}(b)$ becomes similar to that of a Markov chain that incorporates random terms. However, the correlation still converges to one in this limit.
\begin{prop}[Correlation, \citep{hayou2019impact, li2022sde}]\label{prop:covariance_mlp}
Consider the MLP architecture given by \cref{eq:mlp} and let $a, b \in \mathbb{R}^d$ such that $a, b \neq 0$. Then, in the limit ``$n\to \infty$, then $L \to \infty$'' or the joint limit ``$n ,L \to \infty$, $L/n$ fixed'', the correlation $\frac{\langle Y_{L}(a), Y_{L}(b) \rangle}{ \|Y_{L}(a)\| \|Y_{L}(b)\|}$ converges\footnote{Note that weak convergence to a constant implies also convergence in probability.} weakly to 1.
\end{prop}
The convergence of the correlation to 1 in the infinite depth limit of a neural network poses a significant issue, as it indicates that the network loses all of the covariance structure from the inputs as the depth increases. This results in
degenerate gradients (see e.g. \citep{samuel2017}), rendering the network untrainable. To address this problem in MLPs, various studies have proposed the use of depth-dependent shaped ReLU activations, which prevent the correlation from converging to 1 and exhibit stochastic differential equation (SDE) behavior. As a result, the correlation of the last layer does not converge to a deterministic value in this case.
\begin{prop}[Correlation SDE, Corollary of Thm 3.2 in \cite{li2022sde}]\label{prop:covariance_shaped_mlp}
Consider the MLP architecture given by \cref{eq:mlp} with the following activation function $\phi_L(z) = z + \frac{1}{\sqrt{L}} \phi(z) $ (a modified ReLU). Let $a, b \in \mathbb{R}^d$ such that $a, b \neq 0$. Then, in the joint limit ``$n ,L \to \infty$, $L/n$ fixed'', the correlation $\frac{\langle Y_{L}(a), Y_{L}(b) \rangle}{ \|Y_L(a)\| \|Y_L(b)\|}$ converges weakly to a nondeterministic random variable.\footnote{In \cite{li2022sde}, the authors show that the correlation $\frac{\langle \phi_L(Y_{L}(a)), \phi_L(Y_{L}(b)) \rangle}{\|\phi_L(Y_{L}(a))\| \, \|\phi_L(Y_{L}(b))\|}$ converges to a random variable in the joint limit. Since $\phi_L$ converges to the identity function in this limit, simple calculations show that the correlation between the pre-activations $\frac{\langle Y_{L}(a), Y_{L}(b) \rangle}{\|Y_{L}(a)\| \|Y_{L}(b)\|}$ is also random in this limit.}
\end{prop}
The joint limit, therefore, yields a non-deterministic covariance structure. It is easy to check that, even with the shaped ReLU of \cref{prop:covariance_shaped_mlp}, taking the width to infinity first and then the depth yields a deterministic covariance structure. The main takeaway from this section is the following:
\paragraph{Summary.} \emph{With MLPs (\cref{eq:mlp}), the width and depth limits do not commute in the sense that the behaviour of the distribution of the pre-activations and the covariance structure might differ depending on how the limit is taken.}
With the background information provided above, we are now able to present our findings. In contrast to MLPs, our next section demonstrates that the limits of width and depth for ResNet architectures commute.
\section{Main results: Width and Depth Commute in ResNets}
We use the same notation as in the MLP case. Let $d, n, L \geq 1$, and consider the following ResNet architecture of width $n$ and depth $L$
\begin{equation}\label{eq:resnet}
\begin{aligned}
Y_0(a) &= W_{in} a, \quad a \in \mathbb{R}^d\\
Y_l(a) &= Y_{l-1}(a) + \frac{1}{\sqrt{L}} W_l \phi(Y_{l-1}(a)), \hspace{0.1cm} l \in [1:L],
\end{aligned}
\end{equation}
where $\phi: \mathbb{R} \to \mathbb{R}$ is the ReLU activation function. We assume that the weights are randomly initialized with \textit{iid}~ Gaussian variables $W_l^{ij} \sim \mathcal{N}(0, \frac{1}{n})$, $W_{in}^{ij} \sim \mathcal{N}(0, \frac{1}{d})$. For the sake of simplification, we only consider networks with no bias, and we omit the dependence of $Y_l$ on $n$ and $L$ in the notation.
The $1/\sqrt{L}$ scaling in \cref{eq:resnet} is not chosen arbitrarily. It has been demonstrated that this specific scaling serves to stabilize the norm of $Y_l$ and the gradient norms in the asymptotic limit of large depth (e.g. \cite{hayou2022on, hayou21stable, marion2022scaling}).\footnote{A scaling of the form $L^{-\alpha}$ where $\alpha < 1/2$ yields exploding pre-activations, while a more aggressive scaling where $\alpha>1/2$ yields trivial limiting covariance (identity covariance).}
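The role of the $1/\sqrt{L}$ scaling can be seen numerically: with scaling $L^{-1/2}$, the squared norm of the pre-activations grows only by the factor $(1+\frac{1}{2L})^L \to e^{1/2}$ across the whole network, whereas with no scaling ($\alpha = 0$ in the footnote) it grows like $(3/2)^L$. A hedged simulation sketch under the initialization above (sizes and the stand-in $Y_0$ are arbitrary choices, not from the paper):

```python
import numpy as np

def resnet_norm_growth(n, L, scale, rng):
    """||Y_L||^2 / ||Y_0||^2 for the ResNet recursion, with the residual
    branch multiplied by `scale` in place of 1/sqrt(L)."""
    y = rng.standard_normal(n)          # stand-in initial pre-activation
    n0 = y.dot(y)
    for _ in range(L):
        W = rng.standard_normal((n, n)) / np.sqrt(n)
        y = y + scale * (W @ np.maximum(y, 0.0))
    return y.dot(y) / n0

rng = np.random.default_rng(0)
n = 300
# alpha = 1/2: growth factor (1 + 1/(2L))^L -> e^{1/2} ~ 1.65.
scaled = np.mean([resnet_norm_growth(n, 100, 100 ** -0.5, rng) for _ in range(20)])
# alpha = 0 (no scaling): growth factor (3/2)^L, already ~3300 at L = 20.
unscaled = resnet_norm_growth(n, 20, 1.0, rng)
assert 1.3 < scaled < 2.0 and unscaled > 100.0
print(f"1/sqrt(L) scaling: {scaled:.2f}; no scaling (L=20): {unscaled:.0f}")
```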
\subsection{Distribution of the Pre-Activations in the Limit $n, L \to \infty$}
It turns out that for the ResNet architecture given by \eqref{eq:resnet}, the limiting distribution of the pre-activations $Y_{\lfloor t L\rfloor}$ is a zero-mean Gaussian distribution, with an analytic variance term, regardless of how the depth $L$ and width $n$ approach infinity, as long as $\min(n, L) \to \infty$. This is demonstrated in the following result, where an upper bound on the Wasserstein distance between the distribution of the neuron $Y^1_{\lfloor t L\rfloor}$ (the first coordinate of the pre-activations $Y_{\lfloor t L\rfloor}$)\footnote{Notice that the coordinates of the pre-activations are identically distributed (but not necessarily independent).} and that of a zero-mean Gaussian random variable is provided.
\begin{thm}[Convergence of the pre-activations]\label{thm:gaussian_limit}
Let $a \in \mathbb{R}^d$ such that $a \neq 0$. For $t \in [0,1]$, the random variables $Y^1_{\lfloor t L\rfloor}(a)$ converge weakly to a Gaussian random variable with law $\mathcal{N}(0, v(t,a))$ in the limit $\min(n,L) \to \infty$, where $v(t,a) = d^{-1} \|a\|^2 \exp(t/2)$.
Moreover, we have the following convergence rate
$$
\sup_{t \in [0,1]} \mathcal{W}_1(\mu^t_{n,L}(a), \mu^t_{\infty, \infty}(a)) \leq C \left(\frac{1}{\sqrt{n}} + \frac{1}{\sqrt{L}} \right)
$$
where $\mu^t_{n,L}(a)$ is the distribution of $Y^1_{\lfloor t L\rfloor}(a)$, $\mu^t_{\infty, \infty}(a)$ is the distribution $\mathcal{N}(0, v(t,a))$, and $C$ is a constant that depends only on $\|a\|$ and $d$.
Moreover, for two different $i,j \in [n]$, the neurons $Y^i_{\lfloor t L\rfloor}(a)$ and $Y^j_{\lfloor t L\rfloor}(a)$ become independent in the limit $\min(n,L) \to \infty$.
\end{thm}
The proof of \cref{thm:gaussian_limit} is provided in \cref{sec:proof_gaussian_limit}. It relies on two new technical results: 1) a width-uniform convergence rate of the finite-width neural networks to an infinite-depth SDE;\footnote{By width-uniform, we refer to bounds with constants that do not depend on the width $n$.} 2) a new result on the convergence of particles to a mean-field process. More details are provided in the Appendix.
\cref{thm:gaussian_limit} suggests that the distribution of the pre-activations becomes similar to a Gaussian distribution as $\min(n,L) \to \infty$ regardless of how $n$ and $L$ go to infinity. Note that the limiting distribution is the same as the one reported in \citep{hayou2022on} where the author considered the limit ``$n \to \infty$, \emph{then} $L \to \infty$''. Our result generalizes these findings and establishes the universality of the Gaussian behaviour as long as $n \to \infty$ and $L\to \infty$. We validate these theoretical predictions in \cref{sec:experiments}. An important consequence of the Gaussian behaviour is that the residual network can be seen as a Gaussian process in this limit with a well-specified kernel function (see next section). Leveraging this result to perform Bayesian inference with infinite-width-and-depth networks can be an interesting direction for future work.
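A minimal Monte Carlo sketch of the variance prediction in \cref{thm:gaussian_limit} is given below (assuming $\|a\|^2 = d$, so that $v(1,a) = e^{1/2}$); the batched simulation layout is ours, and the tolerances are loose to absorb finite-$(n,L)$ and sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, L, sims = 30, 100, 50, 300

u = rng.uniform(size=d)
a = np.sqrt(d) * u / np.linalg.norm(u)        # ||a||^2 = d, so v(1, a) = exp(1/2)

# one independent network per row of the batch, advanced layer by layer
W_in = rng.normal(0.0, 1.0 / np.sqrt(d), size=(sims, n, d))
Y = W_in @ a                                  # shape (sims, n)
for _ in range(L):
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(sims, n, n))
    Y = Y + np.einsum('bij,bj->bi', W, np.maximum(Y, 0.0)) / np.sqrt(L)

emp_var = float(np.var(Y[:, 0]))              # empirical variance of Y^1_L across sims
print(emp_var, float(np.exp(0.5)))            # theory: v(1, a) = exp(1/2) ~ 1.649
```

The empirical variance of the first neuron in the last layer concentrates near the predicted $e^{1/2}$ already at moderate $(n, L)$.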
\subsection{Neural Covariance}
Unlike the covariance structure in MLPs which exhibits different limiting behaviors depending on how the width and depth limits are taken, we show in the next result that for the ResNet architecture given by \eqref{eq:resnet}, the neural covariance converges strongly to a deterministic kernel, which is given by the solution of a differential flow, in the limit $\min(n,L) \to \infty$ regardless of the relative rate at which $n$ and $L$ tend to infinity.
\begin{thm}[Neural covariance]\label{thm:covariance}
Let $a, b \in \mathbb{R}^d$ such that $a, b \neq 0$ and $a \neq b$. Define the neural covariance kernel $\hat{q}_t(a,b) = \frac{\langle Y_{\lfloor t L\rfloor}(a), Y_{\lfloor t L\rfloor}(b) \rangle}{n}$. Then, we have the following
$$
\sup_{t \in [0,1]} \left\| \hat{q}_t(a,b) - q_t(a,b)\right\|_{L_2} \leq C \left(\frac{1}{\sqrt{n}} + \frac{1}{\sqrt{L}} \right)
$$
where $C$ is a constant that depends only on $\|a\|$, $\|b\|$, and $d$, and $q_t(a,b)$ is the solution of the following differential flow
\begin{equation}
\begin{aligned}
\begin{cases}
\frac{d q_t(a,b)}{dt} &= \frac{1}{2} \frac{f(c_t(a,b))}{c_t(a,b)} q_t(a,b),\\
c_t(a,b) &= \frac{q_t(a,b)}{ \sqrt{q_t(a,a)} \sqrt{q_t(b,b)}},\\
q_0(a,b) &= \frac{\langle a, b \rangle}{d},
\end{cases}
\end{aligned}
\end{equation}
where the function $f: [-1,1] \to [-1,1]$ is given by
$$
f(z) = \frac{1}{\pi} ( z \arcsin(z) + \sqrt{1 - z^2}) + \frac{1}{2}z.
$$
\end{thm}
The proof of \cref{thm:covariance} is provided in \cref{sec:proof_covariance}. The result of \cref{thm:covariance} unifies previous approaches to understanding the covariance structure in large width and depth ResNets. Perhaps the most important consequence of our result is that it implies that all previous results that considered the limit $n\to\infty$, then $L \to \infty$, in order to understand the covariance structure in ResNets still hold for ResNets where the depth is of the same order as the width and both are large. This is specific to ResNets and does not hold, for instance, for MLPs, where the joint limit yields different asymptotic behaviors (see \cref{sec:mlps}). Notice that the limiting covariance kernel $q_t$ is the same kernel found in \citep{hayou21stable} in the limit $n \to \infty,$ then $L \to \infty$.\footnote{In \cite{hayou21stable}, the authors showed that the kernel $q_t$ is universal, meaning the network output is rich enough that we can approximate any continuous function on a compact set with features from this kernel.} It is also worth noting that the constant $C$ can be chosen independently of $\|a\|$ and $\|b\|$ provided that the inputs belong to a compact set that does not contain $0$. The result of \cref{thm:covariance} can also be expressed in terms of the correlation. We demonstrate this in the next theorem.
\begin{thm}[Neural correlation]\label{thm:correlation}
Under the same conditions as in \cref{thm:covariance}, we have the following
$$
\sup_{t \in [0,1]} \left\| \hat{c}_t(a,b) - c_t(a,b)\right\|_{L_2} \leq C' \left(\frac{1}{\sqrt{n}} + \frac{1}{\sqrt{L}} \right)
$$
where $C'$ is a constant that depends only on $\|a\|$, $\|b\|$, and $d$, and $\hat{c}_t(a,b) = \frac{\langle Y_{\lfloor t L\rfloor}(a), Y_{\lfloor t L\rfloor}(b) \rangle}{\|Y_{\lfloor t L\rfloor}(a)\| \|Y_{\lfloor t L\rfloor}(b)\|}$ is the neural correlation kernel, and $c_t(a,b)$ is defined in \cref{thm:covariance}.
\end{thm}
The proof of \cref{thm:correlation} relies on a concentration inequality to control the inverse variance term, and concludes by applying the bound from \cref{thm:covariance}. We refer the reader to the Appendix for more details.
The differential flow satisfied by the kernel function $q_t$ can actually be simplified and expressed as an ordinary differential equation (ODE). We show this in the next lemma.
\begin{lemma}\label{lemma:kernel_ode}
Let $z = (a,b) \in \mathbb{R}^d \times \mathbb{R}^d$. The function $q_t$ in \cref{thm:covariance} is the solution of the following ODE:
$$
\frac{d q_t(z)}{dt} = \frac{\exp(t/2)}{2} \xi(z) f\left(\xi(z)^{-1} \exp(-t/2) q_t(z)\right),
$$
where $\xi(z) = \frac{\|a\| \, \|b\|}{d}$, and $f$ is defined in \cref{thm:covariance}.
\end{lemma}
\begin{proof}
The proof is straightforward by noticing that $f(1)=1$. With this we get $\frac{d q_t(a,a)}{dt} = \frac{1}{2} q_t(a,a)$ which yields $q_t(a,a) = q_0(a,a) \exp(t/2) = d^{-1} \|a\|^2 \exp(t/2)$. The same holds for $b$, which concludes the proof.
\end{proof}
\cref{lemma:kernel_ode} will prove useful in the experiments section, where we approximate the solution $q_t$ using ODE solvers.
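As a hedged illustration of how $q_t$ can be approximated numerically, the sketch below integrates the ODE of \cref{lemma:kernel_ode} with a hand-rolled classic RK4 scheme (the function names and step count are ours; any standard ODE solver would do).

```python
import numpy as np

def f(z):
    z = np.clip(z, -1.0, 1.0)   # guard against round-off pushing z outside [-1, 1]
    return (z * np.arcsin(z) + np.sqrt(1.0 - z * z)) / np.pi + 0.5 * z

def solve_q(a, b, d, t_end=1.0, steps=2000):
    """Classic RK4 for dq/dt = (exp(t/2)/2) * xi * f(q / (xi * exp(t/2))),
    with xi = ||a|| ||b|| / d and q_0 = <a, b> / d."""
    xi = np.linalg.norm(a) * np.linalg.norm(b) / d
    rhs = lambda t, q: 0.5 * np.exp(t / 2.0) * xi * f(q / (xi * np.exp(t / 2.0)))
    q, t, h = float(np.dot(a, b)) / d, 0.0, t_end / steps
    for _ in range(steps):
        k1 = rhs(t, q)
        k2 = rhs(t + h / 2, q + h * k1 / 2)
        k3 = rhs(t + h / 2, q + h * k2 / 2)
        k4 = rhs(t + h, q + h * k3)
        q, t = q + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6, t + h
    return q

d = 30
a = np.ones(d)                  # ||a||^2 = d
print(solve_q(a, a, d))         # a = b: closed form q_1(a,a) = exp(1/2) ~ 1.6487
```

For $a = b$ the numerical solution matches the closed form $q_t(a,a) = d^{-1}\|a\|^2 e^{t/2}$ derived in the proof above, which serves as a sanity check on the integrator.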
\section{Experiments and Practical Implications}\label{sec:experiments}
In this section, we validate our theoretical results with extensive simulations on large width and depth residual neural networks of the form \cref{eq:resnet}.
\subsection{Gaussian Behavior and Independence of Neurons}
\cref{thm:gaussian_limit} predicts that in the large depth and width limit, the neurons (pre-activations) converge weakly to a Gaussian distribution. To empirically validate this finding, we show in \cref{fig:hist_with_ks} the histograms of the first neuron in the last layer ($t=1$ in \cref{thm:gaussian_limit}) for a randomly chosen input $a$ and $n, L \in \{5, 50, 500\}$. We also perform a Kolmogorov-Smirnov normality test and report the statistic ($KS$) and the p-value. As can be seen in \cref{fig:hist_with_ks}, the histograms appear to fit the theoretical Gaussian distribution more closely as width and depth increase. Additionally, the KS statistic decreases as the width and depth increase. For smaller widths, the p-values are extremely small, indicating non-Gaussian behavior. This is expected, as the Gaussian behavior arises primarily from averaging effects as the width increases. The depth also plays a role in the goodness of fit, as can be seen for the pairs $(n,L) = (500,50)$ and $(n,L) = (500,500)$, where the latter shows a better fit in terms of the KS statistic, which measures the distance between the empirical cumulative distribution function and the theoretical one. Notice also the contrast with the previously reported case of MLPs (\cref{fig:non_gaussian_mlp}), where the distribution of the neurons in the last layer is heavy-tailed.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/gaussian_behaviour.pdf}
\caption{Histogram of $Y^1_L(a)$ for ResNet \cref{eq:resnet} with $n, L \in \{5, 50, 500\}$, $d=30$, and $a = \sqrt{d} \frac{u}{\|u\|}$ where $u \in \mathbb{R}^d$ has all coordinates randomly sampled from the uniform distribution $\mathcal{U}([0,1])$. The histogram is based on $N=10^4$ simulations. The red dashed line represents the theoretical distribution (Gaussian) predicted in \cref{thm:gaussian_limit}. We also perform a Kolmogorov-Smirnov normality test and report the KS statistic and the p-value.}
\label{fig:hist_with_ks}
\vspace{-0.2cm}
\end{figure}
\begin{wrapfigure}{r}{0.5\textwidth}
\begin{center}
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/joyplot_resnet_500x500.pdf}
\caption{ResNet}
\end{subfigure}
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/joyplot_mlp_500x500.pdf}
\caption{MLP}
\end{subfigure}
\caption{Densities (approximated by Kernel Density Estimation) of the first neuron $Y^1_{l}(a)$ for $l \in \{ 20 k, k=1, \dots,25\}$ for a ResNet \cref{eq:resnet} and an MLP \cref{eq:mlp} with $(n, L) = (500, 500)$. The input $a$ is randomly sampled and normalized in the same way as in \cref{fig:hist_with_ks}.}
\label{fig:joyplots}
\end{center}
\end{wrapfigure}
In \cref{fig:joyplots}, we illustrate the distribution of the first neuron in each layer in a ResNet/MLP of width $n=500$ and depth $L=500$. For the ResNet architecture, the distribution is relatively similar across layers which is expected since \cref{thm:gaussian_limit} predicts a Gaussian limit with a standard deviation that differs only by a factor of $e^{1/4} \approx 1.28$ between the first and the last layers. In MLPs, the distribution varies across layers with the neurons in the last layers displaying heavy-tailed shapes, which agrees with \cref{fig:non_gaussian_mlp}.
Another theoretical prediction of \cref{thm:gaussian_limit} is the independence of the neurons $(Y^i_{\lfloor tL \rfloor})_{1 \leq i \leq n}$. To validate this prediction, we show in \cref{fig:pairplot_500x500} the pairwise joint distributions of 3 randomly chosen neurons in the last layer ($t = 1$). We also perform a kernel density estimation (KDE) using the Gaussian kernel and illustrate the result on top of the histograms. The joint distributions show an excellent match with an isotropic 2-dimensional Gaussian distribution, which indicates independence of the neurons.
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{figures/pairplot_500x500.pdf}
\caption{Joint distributions of $(Y^i_L(a), Y^j_L(a))$ for ResNet \cref{eq:resnet} with $n, L = 500$, $d=30$, $i,j \in \{i_1, i_2, i_3\}$ where $i_1,i_2, i_3$ are randomly sampled from $\{1, \dots, n\}$, and $a = \sqrt{d} \frac{u}{\|u\|}$ and $u \in \mathbb{R}^d$ has all coordinates randomly sampled from the uniform distribution $\mathcal{U}([0,1])$. The histograms are based on $N=10^4$ simulations. The red curves represent an isotropic two-dimensional Gaussian distribution (i.e. independent coordinates).}
\label{fig:pairplot_500x500}
\vspace{-0.2cm}
\end{figure}
\subsection{Convergence of the Neural Covariance}
\cref{thm:covariance} predicts that the covariance $\hat{q}_t(a,b)$ for two inputs $a, b$ converges in $L_2$ norm to $q_t$ in the limit $\min(n,L) \to \infty$. In \cref{fig:covariance}, we compare the empirical covariance $\hat{q}_t$ with the theoretical prediction $q_t$ for $n, L \in \{5, 50, 500, 5000\}$. The empirical $L_2$ error is also reported. As the width increases, we observe a good match with the theory. The role of the depth is less visually noticeable, but for instance, with width $n = 5000$, we can see that the $L_2$ error is smaller with depth $L=5000$ as compared to depth $L=5$ (see \cref{sec:discussion_width_depth} for a more in-depth discussion of the role of width and depth). The theoretical prediction $q_t$ is approximated with an ODE solver (RK45 method, \cite{Fehlberg1968ClassicalFS}) for $t \in [0,1]$ with a discretization step $\Delta t = 10^{-4}$.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/covariance.pdf}
\caption{The blue curve represents the average covariance $\hat{q}_t(a,b)$ for ResNet \cref{eq:resnet} with $n, L \in \{5, 50, 500, 5000\}$, $d=30$, and $a$ and $b$ are sampled following the same rule as in \cref{fig:hist_with_ks}. The average is calculated based on $N=100$ simulations. The shaded blue area represents 1 standard deviation of the observations. The red dashed line represents the theoretical covariance $q_t(a,b)$ predicted in \cref{thm:covariance}. The empirical $L_2$ error is reported as well.}
\label{fig:covariance}
\end{figure}
\subsection{Role of Width and Depth}\label{sec:discussion_width_depth}
From \cref{fig:hist_with_ks} and \cref{fig:covariance}, it appears that the role of the width is more important than that of the depth in the convergence to the limiting values. In this section, we provide an intuitive explanation as to why that happens. First of all, recall that in both figures, the impact of depth is less noticeable but reflected in some measures (the KS statistic in \cref{fig:hist_with_ks}, and the $L_2$ error in \cref{fig:covariance}). The bounds in \cref{thm:gaussian_limit} and \cref{thm:covariance} are of the form $C \left(\frac{1}{\sqrt{n}} + \frac{1}{\sqrt{L}} \right)$ for some constant $C$. This bound is sufficient to conclude on the convergence rate, but it is not optimal in terms of the constants. We conjecture that a `better' bound of the form $\frac{C_1}{\sqrt{n}} + \frac{C_2}{\sqrt{L}}$ can be obtained, where the constant $C_2$ is much smaller than $C_1$, which would explain why the depth has less impact on the bound. To give the reader an intuition of why this should be the case, let us look at the case where the width is much larger than the depth, for instance $n=500$ and $L \in \{5,50\}$ (see \cref{fig:hist_with_ks}). Since $n \gg L$, we are essentially in the regime where $n$ goes to infinity first. In this case, the impact of depth is limited to how far the finite-depth variance is from the infinite-depth one $v(t,a)$ (see \cref{thm:gaussian_limit}). For an input satisfying $\|a\|^2 = d$, simple calculations yield that the infinite-width, finite-depth-$L$ variance of the neurons in the last layer is given by $ \sigma_L = (1 + \frac{1}{2L})^L$.\footnote{See e.g. \cite{hayou21stable}.} For $L=5$, $\sigma_5 \approx 1.61$, and for $L=50$, we have $\sigma_{50} \approx 1.644$. This is very close to the infinite-depth variance given by $v(1,a) = e^{1/2} \approx 1.648$. Hence, even for small depths, the finite-depth variance is close to the infinite-depth variance. A similar analysis can be carried out for the covariance as well.
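The numbers quoted above are easy to reproduce; the following two-line check of $\sigma_L = (1 + \frac{1}{2L})^L$ against the infinite-depth value $e^{1/2}$ is purely illustrative.

```python
import math

# infinite-width, finite-depth-L variance for an input with ||a||^2 = d
sigma = lambda L: (1 + 1 / (2 * L)) ** L

for L in (5, 50):
    print(L, sigma(L))          # 1.6105..., 1.6446...
print(math.exp(0.5))            # infinite-depth limit v(1, a) ~ 1.6487
```

Even at depth $L = 5$ the finite-depth variance is within a few percent of the limit, which supports the intuition that the depth term in the bound carries a small constant.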
\section{Conclusion and Limitations}
In this paper, we have shown that, at initialization, in the most natural scaling of branches, the large-depth and large-width limits of a residual neural network (ResNet) commute. We used a novel proof technique and proved a concentration of measure result for a kind of McKean-Vlasov process. Our results justify the calculations in prior works analyzing deep and wide neural networks that take the width limit first and then the depth limit.
However, our technique cannot say anything about what happens when the network starts training.
Potentially, different behaviors can occur depending on how the learning rate is chosen as a function of width and depth.
Because of the correlations between weights induced by training, such an analysis would likely require far more mathematical machinery than presented here, e.g., Tensor Programs \citep{yang2019scaling,yang2019tensor_i,yang2021tensor_iv,yangTP2b,yangTP2,yangTP5}.
\newpage
\printbibliography
\newpage
% Source: arXiv:1911.03858, "Arıkan meets Shannon: Polar codes with near-optimal convergence to channel capacity"
\section{Inverse sub-exponential decoding error probability}
\label{sec:exponential-decoding}
In this section we finish proving our main result (Theorem~\ref{thm:intro-main}) by showing how to obtain inverse sub-exponential decoding error probability $\exp(-N^{\alpha})$ within our construction of polar codes, while still having $\poly(N)$ construction time complexity.
Note that up to this point we only claimed inverse polynomial decoding error probability in Theorem~\ref{thm:main1}. This restriction came from the fact that we need to approximate the channels we see in the tree during the construction phase (recall the discussion at the beginning of Sections~\ref{sect:local} and~\ref{sect:cons}), and to get a polynomial-time construction we need the binning parameter $\mathsf{Q}$ to be $\poly(N)$ itself. But this means that we are only able to track the parameters (entropies, for instance) of the bit-channels approximately, with an additive error which is inverse polynomial in $N$, see~\eqref{eq:gpH}. Since the decoding error probability relates directly to the upper bound on the entropies of the ``good" bit-channels we choose, this leads to only being able to claim inverse polynomial decoding error probability.
It was proved in a recent work~\cite{Wang-Duursma} that it is possible to achieve fast scaling of polar codes (a good scaling exponent) and a good decoding error probability (inverse sub-exponential instead of inverse polynomial in $N$) simultaneously, using the idea of multiple (dynamic) kernels in the construction. Specifically, for any constants $\pi, \mu > 0$ such that $\pi + 2\mu < 1$, it is shown that one can construct a polar code whose rate is within $N^{-\mu}$ of the capacity of the channel (which corresponds to scaling exponent $\mu$) with decoding error probability $\exp(-N^{\pi})$, as $N \to \infty$. Moreover, it is shown that this is the optimal trade-off between these two parameters for \emph{any} (not just polar) codes. However, the construction phase in~\cite{Wang-Duursma} tracks the \emph{true} bit-channels that are obtained in the $\ell$-ary tree of channels, which makes the construction intractable. This is because (most of) the true bit-channels cannot even be described in a tractable way, since their output alphabets have exponential size.
In what follows we combine our approach of using Ar\i kan's kernels for polarized bit-channels with a stronger analysis of polarization from~\cite{Wang-Duursma} to overcome this issue of intractable construction. Specifically, we show that even though we only track \emph{approximations} (binned versions) of the bit-channels in the tree, if we use Ar\i kan's kernels in the suction-at-the-end regime, then we are still able to prove very strong polarization, as in~\cite{Wang-Duursma}. This comes from the fact that we know very well how Ar\i kan's basic $2\times 2$ kernel evolves the parameters of the bit-channels.
This allows us to get very strong bounds on the parameters of the \emph{true} bit-channels (which leads to a good decoding error probability), while still only tracking their \emph{approximations} (which keeps the construction time polynomial). Somewhat surprisingly, the phase of the construction where the local kernels are chosen is exactly the same as before in Section~\ref{sect:cons}; the difference lies in a much tighter analysis of how to choose the set of ``good" indices used to actually construct the polar code.
\subsection*{Notations}
We fix a small positive parameter $\alpha > 0$ from the statement of Theorem~\ref{thm:intro-main}, which governs how close the gap-to-capacity exponent will be to $1/2$; specifically, we will have scaling exponent $\mu = 2 + O(\alpha)$. As before, the size of the kernel is denoted by $\ell = 2^s$, where $\ell$ is large enough in terms of $\alpha$ (specifically, the bounds from the statement of Theorem~\ref{thm:kernel_seacrh_correct} must hold).
We are going to work with the complete $\ell$-ary tree of bit-channels, as described in Section~\ref{sect:mix}. Let $t$ be the depth of this tree, then there are $N = \ell^t$ bit-channels at the last level, denoted as $W_i$ for $i\in [\ell^t]$ (these notations depend on the depth $t$ of the tree at which we are looking, but it will always be clear from the context). Throughout this section we will denote such a tree of depth $t$ as $\mathcal{T}_t$.
We will again have a random process of going down the tree, starting from the root and picking a random child of the current bit-channel at each step. To be more precise, the random process $\mathsf{W}_j$ is defined as follows: $\mathsf{W}_0 = W$ (the initial channel, i.e.\ the root of the tree), and $\mathsf{W}_{j+1} = \left(\mathsf{W}_j\right)_k$, where $k\sim [\ell]$ is uniform and $\left(\mathsf{W}_j\right)_k$ is the $k^{\text{th}}$ Ar\i kan bit-channel of $\mathsf{W}_j$ with respect to the corresponding kernel in the tree. This is indeed equivalent to a random walk down the tree. Then we also define the random processes $\mathsf{Z}_j = Z(\mathsf{W}_j)$ and $\mathsf{H}_j = H(\mathsf{W}_j)$. Note that $\mathsf{W}_t$ is marginally distributed as $W_i$ for $i \sim [N]$, where $N = \ell^t$, i.e.\ $\mathsf{W}_t$ is just a random bit-channel at level $t$ of the tree. Further, we will also consider the random processes $\mathsf{W}^{\bin}_j, \mathsf{H}^{\bin}_j, \mathsf{Z}^{\bin}_j$, obtained by additionally applying the binning procedure described in the construction phase in Section~\ref{sect:cons}. Note that the $\mathsf{W}^{\bin}_j$ are the channels that we actually track during the construction of the code, while the $\mathsf{W}_j$ are the \emph{true} bit-channels in the tree.
Finally, by $\exp(\bullet)$ we will denote $2^{\bullet}$ in this section, and we denote by $x^+ = \max\{x, 0\}$ the positive part of $x$.
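The construction in this paper tracks general BMS bit-channels through mixed kernels, which is hard to reproduce compactly; as a hedged illustration of the random process $\mathsf{Z}_j$, the sketch below simulates the classical single-kernel ($A_2$) case over the erasure channel $\mathrm{BEC}(\varepsilon)$, where the Bhattacharyya parameter evolves exactly as $Z \mapsto Z^2$ or $Z \mapsto 2Z - Z^2$ along the random walk, is a bounded martingale, and polarizes to $\{0,1\}$. All names and parameters here are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
eps, t, walks = 0.5, 20, 100_000          # W = BEC(eps): Z(W) = eps, I(W) = 1 - eps

z = np.full(walks, eps)
for _ in range(t):
    minus = rng.random(walks) < 0.5        # uniformly random child at each level
    z = np.where(minus, 2 * z - z * z, z * z)

print(float(z.mean()))                                 # martingale: E[Z_t] = eps
print(float(np.mean(z < 1e-6)), float(np.mean(z > 1 - 1e-6)))  # mass polarizes to 0 and 1
```

The mean stays at $\varepsilon$ at every level, while almost all mass ends up doubly-exponentially close to $0$ or $1$, mirroring the polarization statements proved below for the general construction.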
\subsection*{Plan}
First, notice that building the tree $\mathcal{T}_t$ of bit-channels is itself a part of the construction of our polar codes. This includes tracking the binned versions of the bit-channels and picking the kernels using Algorithm~\ref{algo:kernel_search}. This part will stay exactly as described in Section~\ref{sect:cons}, with the binning parameter $\mathsf{Q} = N^3$ and the same threshold of $\ell^{-4}$ in Algorithm~\ref{algo:kernel_search}. The only part of the construction that is going to change is how we pick the set of good indices which we use to transmit information.
We will closely follow the analysis from~\cite[Appendices~B,\ C]{Wang-Duursma} (also appearing in~\cite{Wang18}), modified for our purposes. Specifically, we will prove the needed polarization of the construction presented in Section~\ref{sect:cons} in three steps (recall that $s = \log_2 \ell$):
\begin{enumerate}[label=\arabic*)]
\item $\P\bigg[\mathsf{Z}_t \leq \exp(-2st)\bigg] \geq I(W) - \ell^{-(1/2 - 10\alpha)t},$
\item $\P\bigg[\mathsf{Z}_t \leq \exp\left(-2^{t^{1/3}}\right) \bigg] \geq I(W) - \ell^{-(1/2 - 11\alpha)t + \sqrt{t}},$
\item $\P\bigg[\mathsf{Z}_t \leq \exp\left(-st\cdot\ell^{\alpha\cdot t}\right)\bigg] \geq I(W) - \ell^{-(1/2 - 16\alpha)t + 2\sqrt{t}}$ \hspace{4pt} for $t = \Omega(\log^6 s)$.
\end{enumerate}
Moreover, for each step, we prove that the polarization at each step is \emph{poly-time constructible}:
\begin{defin}
We say that the polarization $\P[\mathsf{Z}_t \leq p(t)] \geq R(t)$ is \emph{poly-time constructible} if one can find, in time polynomial in $N = \ell^t$, at least $N\cdot R(t)$ indices $i \in [N]$ such that $Z(W_i) \leq p(t)$.
\end{defin}
Notice that if the polarization $\P[\mathsf{Z}_t \leq p(t)] \geq R(t)$ is poly-time constructible, then by choosing these $N\cdot R(t)$ indices as information bits of the code, a standard argument implies that one obtains a polar code of rate $R(t)$ and decoding error probability at most $N\cdot p(t)$. Moreover, since the indices of the information bits were found in $\poly(N)$ time, the whole code construction complexity is polynomial in $N$.
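For general BMS channels this selection must work with the binned approximations, but for the BEC the bit-channel parameters are exactly computable, which gives a hedged toy version of the ``select good indices, then union-bound the error'' recipe above. The function name, threshold, and parameters are ours.

```python
import numpy as np

def bec_polar_construct(eps, t, threshold):
    """Exact Bhattacharyya parameters of all N = 2^t synthesized BEC bit-channels
    under Arikan's 2x2 kernel, plus the good-index set {i : Z(W_i) <= threshold}."""
    z = np.array([eps])
    for _ in range(t):
        z = np.concatenate([2 * z - z * z, z * z])   # '-' children, then '+' children
    return z, np.flatnonzero(z <= threshold)

eps, t = 0.5, 16
z, good = bec_polar_construct(eps, t, 1e-6)
rate = len(good) / (1 << t)
union_bound = float(z[good].sum())   # decoding error probability <= sum of good Z's
print(rate, union_bound)             # rate approaches I(W) = 0.5 as t grows
```

Here the whole construction runs in time linear in $N = 2^t$, the selected rate sits strictly below capacity, and the union bound over the chosen indices certifies the block error probability.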
The polarization behavior from Step 3 with $t\geq \frac1{\alpha^2}$ will then correspond to polar codes with rate $I(W) - N^{-1/2 + 18\alpha}$ (i.e.\ codes with scaling exponent $2+O(\alpha)$) and sub-exponentially small decoding error probability $N\cdot\exp\left(-st\cdot\ell^{\alpha\cdot t}\right) = \exp(-N^{\alpha})$, with $\poly(N)$ construction time, which finishes the proof of the main result of this paper.
\subsection{Step 1}
\begin{lem}
\label{lem:step1}
$\P\bigg[\mathsf{Z}_t \leq \exp(-2st)\bigg] \geq I(W) - \ell^{-(1/2 - 10\alpha)t}.$ Moreover, this polarization is poly-time constructible.
\end{lem}
\begin{proof}
This follows from the analysis of the construction we already have in the previous sections. Fix some $t$ and let $N = \ell^t$. Then the following is implied from Section~\ref{sect:main_together} if one takes $\mathsf{Q} = N^3$, i.e. $c=3$:
\[ \P_{i \sim [N]}\bigg[H(W_i^{\bin}) \leq \dfrac1{N^4}\bigg] \geq I(W) - N^{-(1/2 - 10\alpha)}.\]
Note here that $H(W_i^{\bin})$ are the entropies of the binned bit-channels that we are actually tracking during the construction phase, so they are computable in polynomial time. This means that there is a $\poly(N)$-time procedure which returns all the indices $i$ for which $H(W_i^{\bin}) \leq \frac1{N^4}$. Then $Z(W^{\bin}_i) < \sqrt{H(W_i^{\bin})} \leq \frac1{N^2}$ for these indices, so we have for the random process $\mathsf{Z}^{\bin}_t$:
\[ \P\Big[\mathsf{Z}^{\bin}_t \leq \ell^{-2t} \Big] = \P\Big[\mathsf{Z}^{\bin}_t \leq 2^{-2st} \Big] = \P\Big[\mathsf{Z}^{\bin}_t \leq \exp\left(-2st\right) \Big] \geq I(W) - N^{-(1/2 - 10\alpha)},\]
and moreover, one can find, in $\poly(N)$ time, at least $N(I(W) - N^{-(1/2 - 10\alpha)})$ indices $i \in [N]$ for which the inequality $Z(W^{\bin}_i) \leq \exp\left(-2st\right)$ holds (just by returning the indices for which $H(W_i^{\bin}) \leq \frac1{N^4}$). Since $\mathsf{Z}_t \le \mathsf{Z}^{\bin}_t$ always holds, the statement of the lemma follows.
\end{proof}
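The proof above uses the standard relation $Z(W) \leq \sqrt{H(W)}$. As a hedged numerical sanity check on one concrete family (the BSC, with $Z = 2\sqrt{p(1-p)}$ and $H = h(p)$; this is only an illustration, not the paper's general binned-channel argument):

```python
import numpy as np

p = np.linspace(1e-6, 0.5, 10_000)
H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # binary entropy h(p) = H(W) for BSC(p)
Z = 2 * np.sqrt(p * (1 - p))                     # Bhattacharyya parameter of BSC(p)
print(bool(np.all(Z ** 2 <= H + 1e-12)))         # Z <= sqrt(H) holds on this grid
```

For the BEC the relation is immediate, since there $Z = H = \varepsilon \leq \sqrt{\varepsilon}$.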
\subsection{Step 2}
Next, we are going to strengthen the polarization of the construction, using the result of Lemma~\ref{lem:step1}. Specifically, we prove
\begin{lem}
\label{lem:step2}
$\P\bigg[\mathsf{Z}_n \leq \exp\left(-2^{n^{1/3}}\right) \bigg] \geq I(W) - \ell^{-(1/2 - 11\alpha)n + \sqrt{n}}$. Moreover, this polarization is poly-time constructible.
\end{lem}
\begin{proof}
For this lemma, we fix $n$ to be the total depth of the tree (instead of $t$), and we want to prove the speed of polarization at level $n$.
To do this, we will divide the tree into $\sqrt{n}$ stages, each of depth $\sqrt{n}$, and apply the polarization we obtained at Step 1 at each stage. So, we look at $m$ being $\sqrt{n}$, $2\sqrt{n}, \dots, n-\sqrt{n}$. Define the following events, starting with $E_0^{(0)} = \emptyset$ (again, closely following~\cite{Wang-Duursma}):
\begin{align}
&A_m = \left\{\mathsf{Z}^{\bin}_m < \exp(-2sm)\right\}\setminus E_0^{(m-\sqrt{n})} \\
&B_m = A_m \bigcap \left\{\sum_{i=1}^{s\sqrt{n}}g_{sm+i} \leq \beta\cdot s\sqrt{n}\right\}\\
&E_m = A_m \setminus B_m\\
&E_0^{(m)} = E_0^{(m-\sqrt{n})}\cup E_m,
\end{align}
where for now one can think of the $g_j$'s as independent $\text{Bern}(1/2)$ random variables for all $j \in [s\cdot n]$. In the following several paragraphs we explain what these events correspond to. First of all, the actual random variable we are tracking here is $\mathsf{W}_n$, and its realizations are the $\ell^n$ bit-channels $W_i$ for $i\in[\ell^n]$ at the last level of the tree. We can then think of events and subsets of bit-channels at level $n$ interchangeably.
Notice that each bit-channel $W_i$ for $i\in [\ell^n]$ corresponds to a unique path in the tree $\mathcal{T}_n$ from the root $W$ (the initial channel) to the leaf $W_i$ on the $n^{\text{th}}$ level. We will be interested in the bit-channels along this path, their binned versions, and the parameters of both versions (true and binned) of these channels during the ensuing arguments. We denote this path of true bit-channels as $W_i^{(0)} = W, W_i^{(1)}, \dots, W_i^{(n-1)}, W_i^{(n)} = W_i$. Clearly, this path is just a realization of a random walk $\mathsf{W}_0, \mathsf{W}_1, \dots, \mathsf{W}_n$, when $\mathsf{W}_n$ ends up being $W_i$. In the same way, we will denote by $W_i^{(k),\bin}$, for $k = 0, 1,\dots, n$, the binned version of the bit-channel along this path, and by $H_i^{(k)}$, $H_i^{(k),\bin}$, $Z_i^{(k)}$, and $Z_i^{(k),\bin}$ the corresponding parameters of these channels.
We are going to construct a set of ``good'' bit-channels $E_0^{(n-\sqrt{n})}$ incrementally, by inspecting the tree from top to bottom. We start with the set $E_0^{(0)} = \emptyset$. Then, at each stage $m = \sqrt{n}$, $2\sqrt{n}, \dots, n-\sqrt{n}$, we find a set $E_m$ of bit-channels which we mark as ``good'' at level $m$. Precisely, the channel $W_i$, for some $i\in [\ell^n]$, is going to be in $E_m$ if: a) it has not been marked as good before (i.e., it is not in $E_0^{(m-\sqrt{n})}$); b) the Bhattacharyya parameter $Z_i^{(m), \bin}$ is small, specifically smaller than $\exp(-2sm)$; and c) a certain condition holds for how the branches are chosen on the path for $W_i$ between levels $m$ and $m + \sqrt{n}$ in the tree (more details on this later). Here conditions a) and b) together correspond to the event $A_m$, while condition c) further defines the event $B_m$. Then the set $E_0^{(m)}$ is the set of all bit-channels that we have marked as good up to level $m$ in the tree, and in the end, by collecting all the bit-channels marked as good at the stages $m = \sqrt{n}, 2\sqrt{n}, \dots, n-\sqrt{n}$, we obtain the final set $E_0^{(n-\sqrt{n})}$.
Denote by the corresponding lowercase letters the probabilities of the events described above, i.e. $a_m \coloneqq \P[A_m]$, etc. Finally, let $q_m = I(W) - e_0^{(m)}$, i.e. $q_m$ is the gap between the capacity and the fraction of the channels which we marked as ``good'' up to level $m$.
\medskip
To begin the formal analysis, let us first consider what happens in the case of the event $A_m$. First, it means that $\mathsf{Z}^{\bin}_m < \exp(-2sm)$. But then we know that we are going to apply Ar\i kan's kernel $A_2^{\otimes s}$ to this bit-channel at level $m$, since the threshold for picking Ar\i kan's kernel in Algorithm~\ref{algo:kernel_search}, which we use in the construction phase, is $\ell^{-4} = \exp(-4s)$. This means that, conditioned on $A_m$, we have $\mathsf{Z}_{m+1} \leq \mathsf{Z}_m\cdot 2^s \leq \mathsf{Z}^{\bin}_m\cdot 2^s < 2^s\cdot\exp(-2sm)$, where the first inequality follows from the way the Bhattacharyya parameter evolves under the basic Ar\i kan transforms. Precisely, using the kernel $A_2^{\otimes s}$ is equivalent to using the basic $2\times 2$ kernel $A_2$ $s$ times, and the kernel $A_2$ in the worst case doubles the Bhattacharyya parameter. Thus $s$ applications of $A_2$ can increase the Bhattacharyya parameter by at most a factor of $2^s$.
Then it is easy to see that even after we apply Ar\i kan's kernel $A_2^{\otimes s}$ a total of $\sqrt{n}$ times, the Bhattacharyya parameter will still be below the threshold $\ell^{-4}$: conditioned on $A_m$, one has $\mathsf{Z}_{m+\sqrt{n}} \leq \mathsf{Z}_m \cdot \left(2^s\right)^{\sqrt{n}} < \exp(-2sm)\cdot \exp(s\sqrt{n}) < \exp(-sm) < \ell^{-4}$, as $m \geq \sqrt{n}$. It is easy to verify, using Proposition~\ref{prop:approx_accumulation} and the relation~\eqref{eq:Z-H} between the entropy and the Bhattacharyya parameter of the bit-channel, that the binned parameter $\mathsf{H}^{\bin}_{m+j}$ will also be below $\ell^{-4}$ for $j = 1, 2, \dots, \sqrt{n}$. This means that Ar\i kan's kernel is indeed chosen in the construction phase for these $\sqrt{n}$ levels. Therefore, we know that only the kernel $A_2^{\otimes s}$ was applied at the levels between $m$ and $m+\sqrt{n}$, which can also be viewed as applying the basic $2\times 2$ kernel $A_2$ for $s\sqrt{n}$ levels in the tree. This, in turn, can be viewed as taking $s\sqrt{n}$ ``good'' or ``bad'' branches while going down the tree, where a good branch corresponds to squaring the Bhattacharyya parameter, and a bad branch at most doubles it. Denote by the bits $g_{sm+i} \in \{0,1\}$, for $i \in [s\sqrt{n}]$, the indicators of these branches being good or bad, where $g_{sm+i} = 0$ means the branch is bad, and $g_{sm+i}=1$ means the branch is good. Since we consider the random process of going down the tree choosing the next child uniformly at random, all the $g_{sm+i}$'s are independent $\text{Bern}(1/2)$ random variables. These are exactly the random variables appearing in the definition of $B_m$.
Notice then that
\[ \frac{b_m}{a_m} = \P\left[\sum_{i=1}^{s\sqrt{n}}g_{sm+i} \leq \beta\cdot s\sqrt{n}\right] \leq 2^{-s\sqrt{n}(1-h_2(\beta))} \leq 2^{-\gamma s\sqrt{n}} \ , \]
where we can take, for instance, $\beta = 1/20$ and $\gamma = 0.7$, since $1-h_2(1/20) \approx 0.71$. The inequality follows from the entropic bound on the sum of binomial coefficients (one could also just use the Chernoff bound).
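This bound is easy to check numerically. The following Python sketch (with illustrative values of $k = s\sqrt{n}$, not tied to the construction) compares the exact binomial tail with the entropic bound $2^{-k(1-h_2(\beta))}$:

```python
import math

def h2(x):
    """Binary entropy function h_2(x) in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def binomial_tail(k, beta):
    """Exact P[Binomial(k, 1/2) <= beta * k]."""
    cutoff = math.floor(beta * k)
    return sum(math.comb(k, j) for j in range(cutoff + 1)) / 2 ** k

# Entropic bound: P[sum g_i <= beta*k] <= 2^{-k(1 - h2(beta))} for beta < 1/2.
beta = 1 / 20
assert 0.70 < 1 - h2(beta) < 0.72          # so gamma = 0.7 is a valid choice
for k in [40, 100, 200]:
    assert binomial_tail(k, beta) <= 2 ** (-k * (1 - h2(beta)))
```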
Recall that we defined $q_m = I(W) - e_0^{(m)}$. We then can write $q_{m-\sqrt{n}} - a_m = I(W) - (e_0^{(m-\sqrt{n})} + a_m)$. But note that by definition, the event $\left\{\mathsf{Z}^{\bin}_m < \exp(-2sm)\right\}$ is a subevent of $A_m \cup E_0^{(m-\sqrt{n})}$, and thus using the bound from Lemma~\ref{lem:step1} (applied for the depth $m$) we know that
\[ (e_0^{(m-\sqrt{n})} + a_m) \geq \P[A_m \cup E_0^{(m-\sqrt{n})}] \geq \P[\mathsf{Z}^{\bin}_m < \exp(-2sm)] \geq I(W) - 2^{(-1/2 + 10\alpha)sm} \ . \]
Therefore we conclude \[(q_{m-\sqrt{n}} - a_m)^+ \leq 2^{(-1/2 + 10\alpha)sm}.\]
We can then derive
\begin{align} q_m &= I(W) - e_0^{(m)} = I(W) - (e_0^{(m-\sqrt{n})} + e_m) = q_{m-\sqrt{n}} - e_m\\
&=q_{m-\sqrt{n}}\left(1 - \frac{e_m}{a_m}\right) + \frac{e_m}{a_m}(q_{m-\sqrt{n}} - a_m)\\
&\leq q^+_{m-\sqrt{n}}\cdot\frac{b_m}{a_m} + (q_{m-\sqrt{n}} -a_m)^+\\
&\leq q_{m-\sqrt{n}}^+\cdot 2^{-\gamma s\sqrt{n}} + 2^{(-1/2+10\alpha)sm}.
\end{align}
Thus we end up with the following recurrence on $q^+_m$ (recall that $\ell = 2^s$):
\begin{align}
q^+_{\sqrt{n}} &\leq 1 \\
q^+_m &\leq q^+_{m-\sqrt{n}}\cdot\ell^{-\gamma\sqrt{n}} + \ell^{-\frac m2+10\alpha m}.
\end{align}
Solving this recurrence gives us $q^+_{n-\sqrt{n}} \leq \ell^{-\frac{n}{2} + 11\alpha n + \sqrt{n}}$, since $\gamma > 1/2$. Therefore we can conclude
\begin{equation}
\label{eq:step2e0n}
e_0^{(n-\sqrt{n})} \geq I(W) - \ell^{-\frac{n}{2} + 11\alpha n + \sqrt{n}}.
\end{equation}
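As a sanity check, the recurrence can be iterated numerically in the log domain; the sketch below uses illustrative parameters ($s$, $\alpha$, $n$ are ours, chosen only for demonstration) and verifies the claimed closed-form bound:

```python
import math

# Iterate  q_m <= q_{m-sqrt(n)} * ell^{-gamma*sqrt(n)} + ell^{-m/2 + 10*alpha*m}
# in the log2 domain (to avoid underflow) and compare the result with the
# claimed bound  ell^{-n/2 + 11*alpha*n + sqrt(n)}.  Parameters illustrative.
s, alpha, gamma = 16, 0.01, 0.7
n = 400
r = int(math.isqrt(n))            # stage length sqrt(n) = 20

def log2_add(a, b):
    """log2(2^a + 2^b), computed stably."""
    hi, lo = max(a, b), min(a, b)
    return hi + math.log2(1 + 2 ** (lo - hi))

logq = 0.0                        # q_{sqrt(n)} <= 1
for m in range(2 * r, n - r + 1, r):
    logq = log2_add(logq - gamma * s * r, s * (-m / 2 + 10 * alpha * m))

log_bound = s * (-n / 2 + 11 * alpha * n + math.sqrt(n))
assert logq <= log_bound          # q_{n-sqrt(n)} <= ell^{-n/2 + 11*alpha*n + sqrt(n)}
```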
Next, let us look at an arbitrary bit-channel (realization of $\mathsf{Z}_n$) for which the event $E_0^{(n-\sqrt{n})}$ happens, and prove that such a bit-channel is indeed ``good.'' Since $E_0^{(n-\sqrt{n})}$ happened, the event $E_m$ happened at some stage, thus $\mathsf{Z}^{\bin}_m < \exp(-2sm)$ and $\sum_{i=1}^{s\sqrt{n}}g_{sm+i} \geq \beta\cdot s\sqrt{n}$, where $g_{sm+i}$ for $i\in[s\sqrt{n}]$ correspond to taking bad or good branches in the basic $2\times 2$ Ar\i kan kernels. Similarly to Claim~\ref{cl:Z_evolution}, we can then bound
\[ \mathsf{Z}_{m+\sqrt{n}} < \left(2^{s\sqrt{n}}\mathsf{Z}_m\right)^{2^{\beta\cdot s\sqrt{n}}} < \left(2^{sm}\exp(-2sm)\right)^{2^{\beta\cdot s\sqrt{n}}} \leq \exp\left(-sm\cdot 2^{\beta\cdot s\sqrt{n}}\right), \]
using $m \geq \sqrt{n}$ and $\mathsf{Z}_m \leq \mathsf{Z}^{\bin}_m$ in the second inequality.
Then, for the remaining $(n-m-\sqrt{n})$ levels of the tree, it is easy to see that the Bhattacharyya parameter will never rise above the threshold for picking Ar\i kan's kernel in Algorithm~\ref{algo:kernel_search}; thus, as before, we can argue that the Bhattacharyya parameter increases by at most a factor of $2^s$ at each level. Therefore, we derive
\begin{equation}
\label{eq:step2zn}
\mathsf{Z}_n < 2^{s(n-m-\sqrt{n})}\mathsf{Z}_{m+\sqrt{n}} \leq 2^{sn}\exp\left(-sm\cdot 2^{\beta\cdot s\sqrt{n}}\right) < \exp\left(-2^{n^{1/3}}\right),
\end{equation}
where the last inequality follows from $m\geq \sqrt{n}$, $\beta = \frac1{20}$, and the condition $s\geq \frac{11}{\alpha}$ from Theorem~\ref{thm:kernel_seacrh_correct} combined with the fact that $\alpha$ is small.
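The worst-case evolution bound used here (a good branch squares $Z$, a bad branch at most doubles it, so after $k$ steps of which $g$ are good one has $Z \leq (2^k Z_0)^{2^g}$ whenever $2^k Z_0 \leq 1$) can be verified exhaustively for small $k$; the following sketch is purely illustrative:

```python
import itertools

# Worst-case evolution of the Bhattacharyya parameter under the basic 2x2
# kernel: a "bad" branch at most doubles Z, a "good" branch squares it.
# We check  Z_final <= (2^k * Z_0)^(2^g)  over all orderings of k = 8 steps,
# working with log2(Z) to avoid underflow.
def final_log2_z(order, log2_z0):
    z = log2_z0
    for good in order:
        z = 2 * z if good else z + 1     # good: Z -> Z^2;  bad: Z -> 2Z
    return z

log2_z0 = -40.0                          # Z_0 = 2^{-40}, so 2^k * Z_0 <= 1
k = 8
for order in itertools.product([0, 1], repeat=k):
    g = sum(order)
    bound = 2 ** g * (k + log2_z0)       # log2 of (2^k * Z_0)^(2^g)
    assert final_log2_z(order, log2_z0) <= bound
```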
Since we proved that the event $E_0^{(n-\sqrt{n})}$ implies $\mathsf{Z}_n < \exp(-2^{n^{1/3}})$, we conclude, using~\eqref{eq:step2e0n}:
\begin{equation}
\label{eq:step2polarization}
\P[\mathsf{Z}_n < \exp(-2^{n^{1/3}})] \geq e_0^{(n-\sqrt{n})} \geq I(W) - \ell^{-\frac{n}{2} + 11\alpha n + \sqrt{n}},
\end{equation}
which precisely proves the polarization that was stated in the lemma.
The only thing left to prove is that this polarization is poly-time constructible. To do this, we show that one can find the set $E_0^{(n-\sqrt{n})}$ of bit-channels in poly-time (recall here the equivalence between events and subsets of the bit-channels at level $n$ of the tree $\mathcal{T}_n$). But checking whether a particular bit-channel $W_i$, for some $i\in [\ell^n]$, belongs to $E_0^{(n-\sqrt{n})}$ is easy. Indeed, it suffices to check whether $W_i$ is in $E_m$ for any $m = \sqrt{n}, 2\sqrt{n}, \dots, n-\sqrt{n}$. But this corresponds to looking at the Bhattacharyya parameter $Z_i^{(m), \bin}$ and checking whether it is smaller than $\exp(-2sm)$, and, if this is the case, also looking at how many ``good'' branches (in the basic $2\times2$ Ar\i kan transforms) there were within the next stage ($\sqrt{n}$ levels) of the tree $\mathcal{T}_n$. The latter can be done easily, since this information is essentially given by the index $i$ of the bit-channel $W_i$ (by its binary representation, to be precise). The former is also straightforward, since $Z_i^{(m), \bin}$ is the parameter of the binned bit-channel $W_i^{(m), \bin}$ that we are \emph{actually tracking} during the construction phase, so we have this channel written down explicitly, and thus calculating its Bhattacharyya parameter is simple. Therefore all of this can be done in time polynomial in $\ell^n$, and then the whole set $E_0^{(n-\sqrt{n})}$ can be found in poly-time (we can also say that the event $E_0^{(n-\sqrt{n})}$ is poly-time checkable). This finishes the proof of this lemma.
\end{proof}
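To illustrate the membership test from the end of the proof above, here is a schematic Python sketch. It assumes, purely for illustration, that the path of $W_i$ is read off the base-$2$ digits of the index $i$ (most significant digit first, with a $1$ digit marking a ``good'' branch), and \texttt{binned\_Z} is a hypothetical stand-in for the tracked parameter $Z_i^{(m),\bin}$:

```python
import math

# Schematic membership test for E_0^{(n - sqrt(n))}.  Illustrative conventions:
# the s*n branch bits of the path of W_i are the base-2 digits of the index i,
# a digit 1 marks a "good" branch, and binned_Z(i, m) stands in for the binned
# Bhattacharyya parameter Z_i^{(m),bin} tracked during the construction phase.
def good_branches(i, s, n, m, r):
    """Count good branches on the path of W_i between levels m and m + r."""
    digits = format(i, f'0{s * n}b')             # s*n branch bits of the path
    window = digits[s * m : s * (m + r)]         # the s*r bits of one stage
    return window.count('1')

def is_marked_good(i, s, n, beta, binned_Z):
    """Return True iff W_i lands in E_m for some stage m (first match wins,
    so the 'not previously marked' condition is automatic)."""
    r = int(math.isqrt(n))
    for m in range(r, n - r + 1, r):
        if binned_Z(i, m) < math.exp(-2 * s * m) and \
           good_branches(i, s, n, m, r) >= beta * s * r:
            return True
    return False
```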
For the following step, we will use the event $E_0^{(n-\sqrt{n})}$ as defined in the proof of the above lemma. For convenience, we denote it by $R_n = E_0^{(n-\sqrt{n})}$, for any integer $n$. We will use the following facts, all proven in Lemma~\ref{lem:step2}: $\P[R_n] \geq I(W) - \ell^{-\frac{n}{2} + 11\alpha n + \sqrt{n}}$; if $R_n$ happens, then $\mathsf{Z}_n < \exp\left(-2^{n^{1/3}}\right)$; and for any bit-channel it can be checked in poly-time whether $R_n$ happened.
\subsection{Step 3}
Here we will finally prove the polarization that implies the main result of this paper:
\begin{lem}
\label{lem:step3}
$\P\bigg[\mathsf{Z}_t \leq \exp\left(-st\cdot\ell^{\alpha\cdot t}\right)\bigg] \geq I(W) - \ell^{-(1/2 - 16\alpha)t + 2\sqrt{t}}$ for $t\geq C\cdot \log^6s$, where $C$ is an absolute constant. Moreover, this polarization is poly-time constructible.
\end{lem}
\begin{proof}
We will again closely follow the approach from~\cite{Wang-Duursma}, though we change the indexing notation to avoid any confusion with the previous step. We return to denoting the total depth of the tree by $t$, and we will have $\sqrt{t}$ stages in the tree, each of depth $\sqrt{t}$, similarly to the previous step. As before, we will define several events, starting with $C_0^{(0)} = \emptyset$ and $Q_0^{(0)} = \emptyset$. Then, for $n$ being $\sqrt{t}, 2\sqrt{t}, \dots, t-\sqrt{t}$, we define:
\begin{align}
&C_n = R_n\setminus C_0^{(n-\sqrt{t})} \\
&C_0^{(n)} = C_0^{(n-\sqrt{t})} \cup C_n \\
&D_n = C_n \bigcap \left\{\sum_{i=1}^{s(t-n)}g_{i} \leq \alpha\cdot s\cdot t\right\}\\
&Q_n = C_n \setminus D_n\\
&Q_0^{(n)} = Q_0^{(n-\sqrt{t})}\cup Q_n,
\end{align}
where $R_n$ is defined at the end of the previous step, and the $g_i$'s can again be thought of as independent $\text{Bern}(1/2)$ random variables. The intuition behind these events is almost the same as in Step 2, except that the bit-channels in $D_n$ have conditions on the branching from level $n$ all the way down to the bottom level $t$ (instead of the levels between $n$ and $n+\sqrt{t}$). Here, the channels in $Q_0^{(n)}$ are the channels that we mark as ``good'' up to level $n$ in the tree, and we will be interested in the final set $Q_0^{(t-\sqrt{t})}$ of ``good'' channels in the end.
We again denote by corresponding lowercase letters the probabilities of these events. Define also
\[ f_n = I(W) - c_0^{(n)} \quad \text{and} \quad p_n = I(W) - q_0^{(n)} \ . \]
First, suppose the event $C_n$ happens. This means that $R_n$ happens, so $\mathsf{Z}_n < \exp\left(-2^{n^{1/3}}\right)$. Then, at least for some time, we are going to pick Ar\i kan's kernel in the construction phase, since the Bhattacharyya parameter is small enough. But assuming that we take Ar\i kan's kernels all the way down to the bottom of the tree, one can see that
\[ \mathsf{Z}_t < \ell^{t-n}\cdot \mathsf{Z}_n < \ell^{t}\cdot\exp\left(-2^{n^{1/3}}\right) \leq 2^{st}\cdot\exp\left(-2^{t^{1/6}}\right) < 2^{-4s} = \ell^{-4} \]
for $t \geq C\log^6 s$, where $C$ is large enough.
Again, by using Proposition~\ref{prop:approx_accumulation} and~\eqref{eq:Z-H}, it is easy to show that the entropy of the binned version of the bit-channel will also always be below the threshold $\ell^{-4}$. This means that within the $(t-n)$ remaining levels we can never cross the threshold for choosing Ar\i kan's kernel, so we indeed take Ar\i kan's kernel all the way down the tree on any path for which $R_n$ happens. Thus, similarly to the proof of Lemma~\ref{lem:step2} in Step~2, we can think of this as applying the basic $2\times 2$ Ar\i kan kernel $s\cdot(t-n)$ times, starting at level $n$. Therefore, if $R_n$ happens, the branching down from level $n$ can be viewed as taking ``good'' or ``bad'' branches in the $A_2$ kernels, so we again define indicator random variables $g_i$, for $i \in [s(t-n)]$, to denote these branches. These random variables are independent $\text{Bern}(1/2)$, and they are exactly the random variables $g_i$, for $i\in[s(t-n)]$, appearing in the definition of $D_n$.
We have
\[ \frac{d_n}{c_n} = \P\left[\sum_{i=1}^{s(t-n)}g_{i} \leq \alpha s t\right] \leq 2^{-s(t-n)\left(1-h_2(\delta)\right)} \ , \]
where we denote $\delta \coloneqq \min\left\{\frac{\alpha t}{t-n}, 1\right\}$. The inequality again follows from the entropic inequality on the sum of binomial coefficients.
Recall that we denoted $f_n = I(W) - c_0^{(n)}$. The event $C_0^{(n)}$ contains the event $R_n$, thus ${f_n \leq \ell^{-\frac{n}{2} + 11\alpha n + \sqrt{n}}}$, which follows from the proof of Lemma~\ref{lem:step2}. The same inequality holds for $f_n^+$.
We will obtain a recurrence on $p_n - f_n^+$ as follows:
\begin{align}
p_n - f_n^+ &= I(W) - q_0^{(n)} - (I(W) - c_0^{(n)})^+ \\&= p_{n-\sqrt{t}} - q_n - (f_{n-\sqrt{t}} - c_n)^+ \\
&\leq p_{n-\sqrt{t}} - q_n - \frac{q_n}{c_n}(f_{n-\sqrt{t}} - c_n)^+ \\
&\leq p_{n-\sqrt{t}} - q_n - \frac{q_n}{c_n}(f_{n-\sqrt{t}}^+ - c_n) \\
&\leq p_{n-\sqrt{t}} - f_{n-\sqrt{t}}^+ + \left(1 - \frac{q_n}{c_n}\right)f_{n-\sqrt{t}}^+ \\
& = p_{n-\sqrt{t}} - f_{n-\sqrt{t}}^+ + \frac{d_n}{c_n}f_{n-\sqrt{t}}^+ \\
&\leq p_{n-\sqrt{t}} - f_{n-\sqrt{t}}^+ + \ell^{-(1/2 - 11\alpha)(n-\sqrt{t})+\sqrt{n}}\cdot 2^{-s(t-n)\left(1-h_2(\delta)\right)},
\end{align}
where recall that $\delta = \min\left\{\frac{\alpha t}{t-n}, 1\right\}$. We want to obtain an upper bound on the additive term in the inequality above. Consider the following two cases:
\begin{enumerate}[label=\roman*)]
\item $\delta > \frac1{10}$, i.e. $10\alpha t > t - n$, thus $n > (1-10\alpha)t$. Then we give up on the term $2^{-s(t-n)\left(1-h_2(\delta)\right)}$ completely, and we can write
\begin{equation}
\ell^{-(1/2 - 11\alpha)(n-\sqrt{t})+\sqrt{n}}\cdot 2^{-s(t-n)\left(1-h_2(\delta)\right)} \leq \ell^{-(1/2 - 11\alpha)(1-10\alpha)t+\frac32\sqrt{t}} \leq \ell^{-(1/2 - 16\alpha)t+\frac32\sqrt{t}};
\end{equation}
\item $\delta \leq \frac1{10}$, and then $h_2(\delta) < 1/2$. In this case we derive
\begin{align}
\ell^{-(1/2 - 11\alpha)(n-\sqrt{t}) + \sqrt{n}}\cdot 2^{-s(t-n)\left(1-h_2(\delta)\right)} \leq \ell^{-(1/2 - 11\alpha)n + \frac32\sqrt{t}}\cdot \ell^{-1/2\cdot (t-n)} &= \ell^{-1/2\cdot t + 11\alpha n + \frac32\sqrt{t}} \\&< \ell^{-1/2\cdot t + 11\alpha t + \frac32\sqrt{t}}.
\end{align}
\end{enumerate}
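The two cases above can also be checked numerically: for illustrative values of $\alpha$ and $t$ (ours, chosen only for demonstration), the additive term stays below $\ell^{-(1/2-16\alpha)t + \frac32\sqrt{t}}$ uniformly over the stages $n$. Exponents are measured in base $\ell = 2^s$, so $s$ cancels:

```python
import math

# For every stage n, check that
#   ell^{-(1/2-11a)(n-sqrt(t)) + sqrt(n)} * 2^{-s(t-n)(1-h2(delta))},
# with delta = min(a*t/(t-n), 1), is at most ell^{-(1/2-16a)t + (3/2)sqrt(t)}.
# All exponents below are base-ell logarithms; parameters are illustrative.
def h2(x):
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

a, t = 0.01, 400
rt = int(math.isqrt(t))                        # sqrt(t) = 20
log_bound = -(1 / 2 - 16 * a) * t + 1.5 * rt
for n in range(rt, t - rt + 1, rt):
    delta = min(a * t / (t - n), 1.0)
    log_term = (-(1 / 2 - 11 * a) * (n - rt) + math.sqrt(n)
                - (t - n) * (1 - h2(delta)))
    assert log_term <= log_bound
```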
Putting the above together, we obtain
\begin{align}
p_0 - f_{0}^+ &= 0 \\
p_n - f_n^+ &\leq p_{n-\sqrt{t}} - f_{n-\sqrt{t}}^+ + \ell^{-(1/2 - 16\alpha)t+\frac32\sqrt{t}}.
\end{align}
Therefore $p_{t-\sqrt{t}} - f_{t-\sqrt{t}}^+ \leq \sqrt{t}\cdot\ell^{-(1/2 - 16\alpha)t+\frac32\sqrt{t}}$. Combining this with $f_{t-\sqrt{t}}^+ \leq \ell^{-(1/2 - 11\alpha)(t-\sqrt{t})+\sqrt{t}}$, we obtain $p_{t-\sqrt{t}} \leq \ell^{-(1/2 - 16\alpha)t+2\sqrt{t}}$, and thus
\begin{equation}
\label{eq:step3:1}
\P\left[Q_0^{(t-\sqrt{t})}\right] = q_0^{(t-\sqrt{t})} \geq I(W) - \ell^{-(1/2 - 16\alpha)t+2\sqrt{t}}.
\end{equation}
Let us now check that the event $Q_0^{(t-\sqrt{t})}$ is actually ``good'' and allows us to achieve the needed polarization. If $Q_0^{(t-\sqrt{t})}$ happens, then $Q_n$ happened for some $n = k\cdot\sqrt{t}$. This means that $C_n$, and therefore $R_n$, takes place, thus $\mathsf{Z}_n < \exp\left(-2^{n^{1/3}}\right)$. It also means that $D_n$ does not happen, and thus at least $\alpha s t$ ``good'' branches are taken on the way down the tree, which corresponds to $\alpha s t$ squarings of the Bhattacharyya parameter. Therefore
\begin{equation}
\label{eq:step3:2}
\mathsf{Z}_t \leq \left(\ell^{t-n}\mathsf{Z}_n\right)^{2^{\alpha st}} < \left(2^{st}\exp\left(-2^{n^{1/3}}\right)\right)^{2^{\alpha st}} < \exp\left(-st\cdot 2^{\alpha st}\right) = \exp\left(-st\cdot \ell^{\alpha t}\right) = \frac1N\exp\left(-N^{\alpha}\right),
\end{equation}
where the third inequality trivially follows from $n \geq \sqrt{t}$ and $t \geq C\log^6 s$ for large enough $C$. Combining this with~\eqref{eq:step3:1}, we obtain the desired polarization:
\begin{equation}
\label{eq:step3polarization}
\P\left[\mathsf{Z}_t <\exp\left(-st\cdot 2^{\alpha st}\right) \right] \geq q_0^{(t-\sqrt{t})} \geq I(W) - \ell^{-(1/2 - 16\alpha)t+o(t)}.
\end{equation}
It only remains to argue that this polarization is poly-time constructible. But this easily follows from the fact that the event $R_n$ is poly-time checkable, which we proved in Step~2. Indeed, now for any bit-channel $W_i$, $i\in[\ell^t]$, we need to check whether it is in $Q_0^{(t-\sqrt{t})}$. This means that one needs to see whether $Q_n$ happened for some $n = k\sqrt{t}$. To do this, one checks in poly-time whether $C_n$ happened, which reduces to checking $R_n$ (which can be done in poly-time). If $R_n$ happened, then the only thing to check is how many ``good'' branches the remaining path to $W_i$ has, which is easily (in poly-time) retrievable from the index $i$. Therefore, the event $Q_0^{(t-\sqrt{t})}$ is indeed poly-time checkable, which finishes the proof of the lemma.
\end{proof}
\section{Introduction}
We construct binary linear codes that achieve the Shannon capacity of the binary symmetric channel, and indeed any binary-input memoryless symmetric (BMS) channel, with a near-optimal scaling between the code length and the gap to capacity. Further, our codes have efficient (quasi-linear time) encoding and decoding algorithms. Let us now describe the context of our result and its precise statement in more detail.
The binary symmetric channel (BSC) is one of the most fundamental and well-studied noise models in coding theory.
The BSC with crossover probability $p \in (0,1/2)$ ($\mathrm{BSC}_p$) flips each transmitted bit independently with probability $p$. By Shannon's seminal noisy coding theorem \cite{Shannon48}, we know that the capacity of $\mathrm{BSC}_p$ is $1-h(p)$, where $h(\cdot )$ is the binary entropy function. This means that reliable communication over $\mathrm{BSC}_p$ is possible at information rates approaching $1-h(p)$, and at rates above $1-h(p)$ this is not possible.
More precisely, for any $\delta > 0$, there \emph{exist} codes of rate $1-h(p)-\delta$ using which one can achieve miscommunication probability at most $2^{-\Omega(\delta^2 n)}$ where $n$ is the block length of the code. In fact, random linear codes under maximum likelihood decoding offer this guarantee with high probability. Thus Shannon's theorem implies the existence of codes of block length $O(1/\delta^2)$ that can achieve small error probability on $\mathrm{BSC}_p$ at rates within $\delta$ of capacity. Conversely, by several classical results \cite{wolfowitz,strassen,Strassen09trans,Polyanskiy10}, we know that the block length has to be at least $\Omega(1/\delta^2)$ in order to approach capacity within $\delta$.
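For concreteness, the quantities in this discussion are easy to compute; a small sketch (the crossover probability and gap to capacity below are illustrative values, not taken from the text):

```python
import math

def h(p):
    """Binary entropy h(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.11                        # crossover probability of BSC_p (illustrative)
capacity = 1 - h(p)             # Shannon capacity of BSC_p
delta = 0.01                    # gap to capacity (illustrative)
rate = capacity - delta         # an achievable rate by Shannon's theorem
block_length_scale = 1 / delta ** 2   # the Theta(1/delta^2) scale from the converse
assert 0 < rate < capacity < 1
```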
Shannon's theorem is based on the probabilistic method and does not describe the codes that approach capacity or give efficient algorithms to decode them from errors caused by $\mathrm{BSC}_p$.
Thus the codes with rates $1-h(p)-\delta$ take at least time exponential in $1/\delta^2$ to construct as well as decode.
This is also true for concatenated coding schemes~\cite{forney} as the inner codes have to be decoded by brute-force, and either have to also be found by a brute-force search or allowed to vary over an exponentially large ensemble (leading to exponentially large block length).
The theoretical challenge of constructing codes of rate $1-h(p)-\delta$ with construction/decoding complexity scaling polynomially in $1/\delta$ in fact remained wide open for a long time. Finally, around 2013, two independent works~\cite{GX15,hassani-finite-scaling-paper-journal} gave an effective finite-length analysis of Ar\i kan's remarkable polar codes construction~\cite{arikan-polar}. (Ar\i kan's original analysis, as well as follow-ups like \cite{arikan-telatar}, proved convergence to capacity as the block length grew to infinity but did not quantify the speed of convergence.) Based on this, a construction of polar codes with block length, construction, and decoding complexity all bounded by a polynomial in $1/\delta$ to capacity was obtained in \cite{GX15,hassani-finite-scaling-paper-journal}. The result also applies to any BMS channel, not just the BSC.
If the block length of the code scales as $O(1/\delta^\mu)$ as a function of the gap $\delta$ to capacity, we say that $\mu$ is the \emph{scaling exponent}.
The above results established that the scaling exponent of polar codes is finite. It is worth pointing out that polar codes are the \emph{only} known efficiently decodable capacity-achieving family proven to have a finite scaling exponent. The work \cite{GX15} did not give an explicit upper bound on the scaling exponent of polar codes, whereas \cite{hassani-finite-scaling-paper-journal} showed the bound $\mu \le 6$.
Following some improvements in \cite{GB13,MHU16_unified}, the current best known upper bound on $\mu$ for the BSC (and any BMS channel) is $4.714$.
Note that random linear codes have the optimal scaling exponent $2$. The above results thus raise the intriguing challenge of constructing codes with scaling exponent close to $2$, a goal one could not even dream of until the recent successes of polar codes.
Ar\i kan's original polar coding construction is based on a large tensor power of a simple $2 \times 2$ matrix,
which is called the \emph{kernel} of the construction.
For this construction, it was shown in \cite{hassani-finite-scaling-paper-journal} that the scaling exponent $\mu$ is \emph{lower bounded} by $3.579$, even for the simple binary erasure channel.
Given this limitation, one approach to improve $\mu$
is to consider polar codes based on $\ell \times \ell$ kernels for larger $\ell$.
However, better upper bounds on the scaling exponent of polar codes based on larger kernels have not been established except for the simple case of the binary erasure channel (BEC).\footnote{Polar codes based on $\ell \times \ell$ kernels have much larger block length $\ell^t$ compared to $2^t$ for the $2 \times 2$ case. So to get an improvement in $\mu$, one has to compensate for the increasing block length via better bounds on the local behavior of the kernel.}
%
For the BEC, using large kernels, polar codes with scaling exponent $2+\alpha$ for any desired $\alpha > 0$ were given in the very nice paper~\cite{FHMV17} which spurred our work. (We will discuss this and other related works in more detail in Sections~\ref{subsec:prior-work}--\ref{subsec:polar-bec}.)
Our main result in this work is a polynomial time construction of polar codes based on large kernels that approach the optimal scaling exponent of $2$ for every BMS channel. Specifically, for any desired $\alpha > 0$, by picking sufficiently large kernels (as a function of $\alpha$), the block length $N$ can be made as small as $O_\alpha(1/\delta^{2+\alpha})$ for codes of rate $I(W)-\delta$ (the notation $O_\alpha(\cdot)$ hides a constant that depends only on $\alpha$). The encoding and decoding complexity will be \emph{quasi-linear} in $N$, and thus can also have a near-quadratic growth with $1/\delta$.
\iffalse
Shannon's noisy channel coding theorem implies that for every
memoryless channel $W$ with binary inputs and a finite output
alphabet, there is a capacity $I(W) \ge 0$ and constants $a_W <
\infty$ and $b_W > 0$ such that the following holds: For all $\delta >
0$ and integers $N \ge a_W/\eps^2$, there {\em exists} a binary
code $C \subset \{0,1\}^N$ of rate at least $I(W) - \delta$ which enables
reliable communication on the channel $W$ with probability of
miscommunication at most $2^{-b_W \eps^2 N}$. A proof implying these quantitative bounds is implicit in Wolfowitz's proof of Shannon's theorem \cite{wolfowitz}.
\fi
\begin{thm}[Main]
\label{thm:intro-main}
Let $W$ be an arbitrary BMS channel with Shannon capacity $I(W)$.
For arbitrarily small $\alpha>0$, if we choose a large enough constant $\ell\ge \ell_0(\alpha)$ to be a power of $2$, then there is a code $\mathcal{C}$ generated by the polar coding construction using kernels of size $\ell\times\ell$ such that the following four properties hold when the code length $N$ grows:
\begin{enumerate}
\itemsep=0ex
\vspace{-1ex}
\item
the code construction has $N^{O_\alpha(1)}$ complexity;
\item both encoding and decoding have $O_\alpha(N\log N)$ complexity;
\item the rate of $\mathcal{C}$ is at least $I(W)-N^{-1/2+18\alpha}$; and
\item the block decoding error probability is bounded by $\exp(-N^{\alpha})$
when $\mathcal{C}$ is used for channel coding over $W$.
\end{enumerate}
\end{thm}
The above ``constructivizes'' the quantitative finite-length version of Shannon's theorem with a small $\alpha$ slack in the speed of convergence to capacity. The lower bound on $\ell$ can be chosen as $\ell_0(\alpha)=\exp(\alpha^{-1.01})$. Note that a similar lower bound on $\ell$ also appears in the aforementioned result for the BEC from \cite{FHMV17}.
We would like to point out that in the conference version of this work~\cite{Guruswami-Riazanov-Ye-STOC} we only proved inverse polynomial decoding error probability, as opposed to the inverse sub-exponential $\exp(-N^{\alpha})$ bound which we show here. This improvement uses the subsequent analysis of polarization due to Wang and Duursma in~\cite{Wang-Duursma}, where they extended the results of Theorem~\ref{thm:intro-main} to arbitrary discrete memoryless channels, possibly non-binary and asymmetric, and proved the $\exp\left(-N^{O(\alpha)}\right)$ bound on the decoding error probability.
However, this was done at a cost of losing the polynomial-time construction complexity of the code. We are able to non-trivially combine the analysis from~\cite{Wang-Duursma} with our approach of constructing the code to achieve both polynomial time construction and sub-exponentially small decoding error probability simultaneously.
\section{Outline of strong converse for random linear codes}
\label{sect:outline}
In this section we describe the plan of the proof of the strong converse theorem for bit-decoding of random linear codes over the binary symmetric channel. In particular, we need to show the sharp transition as in~\eqref{eq:sharp_trans} when the channel is a BSC. The proof for the general BMS channel case follows the same blueprint, using the fact that a BMS channel can be represented as a convex combination of BSC subchannels, but executing it involves overcoming several additional technical hurdles. Let us first formulate the precise theorem for the binary symmetric channel.
\begin{thm}\label{thm:over:BSC_converse} Let $W$ be the $\text{BSC}_p$ channel, and let $\ell$, $k$ be integers that satisfy ${\ell \geq k \geq \ell(1-H(W)) + \Omega(\ell^{1/2}\log\ell)}$. Let $G$ be a random binary matrix uniform over $\{0,1\}^{k\times \ell}$. Suppose a message $\bV\cdot G$ is transmitted through $\ell$ copies of the channel $W$, where $\bV$ is uniformly random over $\{0,1\}^k$, and let $\bY$ be the output vector, i.e. $\bY = W^{\ell}(\bV\cdot G)$. Then, with probability at least $1 - \ell^{-\Omega(\log\ell)}$ over the choice of~$G$, it holds that ${H\left(V_1\;\big\lvert\;\bY\right) \geq 1 - \ell^{-\Omega(\log \ell)}}$.
\end{thm}
We want to point out two quantitative features of the above theorem. First, it applies at rates $\approx \Omega(\ell^{-1/2})$ above capacity. Second, it rules out predicting the bit $V_1$ with advantage $\ell^{-\omega(1)}$ over random guessing. Both these features are important to guarantee the desired bound $\lambda_\alpha \lesssim \ell^{-1/2}$.
\medskip\noindent {\bf Proof plan.}\quad We prove the lower bound on $H\left(V_1\;\big\lvert\;\bY\right)$ by lower bounding $\E\limits_{g\sim G}\left[H\left(V_1\;\big\lvert\;\bY\right)\right]$ and using Markov's inequality. Thus we write
\begin{equation}
\label{eq:intro:confused_notation}
\E_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] = \sum_g \P(G=g) H^{(g)}(V_1|\bY) = \sum_g \P(G=g)\hspace{-3pt} \left(\sum_{\by \in \mathcal{Y}^{\ell}} \P\nolimits^{(g)}(\bY=\by) H^{(g)}(V_1|\bY=\by) \right),
\end{equation}
where the summation of $g$ is over $\{0,1\}^{k\times\ell}$, and by $\P\nolimits^{(g)}(\cdot)$ and $H^{(g)}(\cdot)$ we denote probability and entropy over the randomness of the message $\bV$ and channel noise \emph{for a fixed matrix $g$}.
\smallskip \noindent {\bf 1: Restrict to zero-input.}\quad The first step is to use the linearity of the (random linear) code and the additive structure of the BSC to prove that we can change $\P\nolimits^{(g)}(\bY=\by)$ to ${\P\nolimits^{(g)}(\bY=\by|\bV = \mathbi{0})}$ in the above summation, where $\mathbi{0}$ is the all-zero vector. This observation is crucial for our arguments, since it allows us to consider only the outputs which are ``typical'' for the all-zero codeword, and there is no dependence on $g$ in this case. Formally, in Appendix~\ref{app:BMS_lemmas} we prove
\[ \E\limits_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] = \sum\limits_{\by \in \mathcal{Y}^{\ell}} \P(\bY=\by|\bV = \mathbi{0}) \cdot\E\limits_{g\sim G}\left[H^{(g)}(V_1|\bY=\by)\right].\]
\smallskip \noindent {\bf 2: Define a typical set of outputs.}\quad We define a typical output set for the zero-input as $\mathcal{F} \coloneqq \Big\{\by \in \mathcal{Y}^{\ell}\,:\, |wt(\by) - \ell p| \leq 2\sqrt{\ell}\log\ell \Big\}$. It is clear that if the zero vector is transmitted through the channel, the output will be a vector from $\mathcal{F}$ with high probability. This means that we do not lose too much in terms of accuracy if we restrict our attention to this typical set, so the following suffices as a good lower bound on the expectation.
\begin{equation}
\label{eq:intro:expect_of_entropy_typical}
\E\limits_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] \geq \sum\limits_{\by \in \mathcal{F}} \P(\bY=\by|\bV = \mathbi{0}) \cdot\E\limits_{g\sim G}\left[H^{(g)}(V_1|\bY=\by)\right].
\end{equation}
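The claim that $\mathcal{F}$ captures almost all outputs follows from the fact that $wt(\bY)$ is $\text{Binomial}(\ell, p)$ on the all-zero input, together with Hoeffding's inequality, which bounds the atypical probability by $2\exp(-8\log^2\ell) = \ell^{-\Omega(\log\ell)}$. A small exact check (parameters illustrative; exact rational arithmetic avoids floating-point noise in the tiny tail):

```python
import math
from fractions import Fraction

# wt(Y) ~ Binomial(l, p) on the all-zero input.  Compare the exact probability
# of leaving the window |wt(Y) - l*p| <= 2*sqrt(l)*log(l) with the Hoeffding
# bound 2*exp(-2*dev^2/l) = 2*exp(-8*log(l)^2).  Parameters illustrative.
l = 256
p = Fraction(1, 5)                           # crossover probability 0.2, exact
dev = 2 * math.sqrt(l) * math.log(l)
lo = max(math.ceil(float(l * p) - dev), 0)
hi = min(math.floor(float(l * p) + dev), l)
typical = sum(math.comb(l, w) * p**w * (1 - p)**(l - w) for w in range(lo, hi + 1))
atypical = 1 - typical                       # exact rational, astronomically small
hoeffding = 2 * math.exp(-2 * dev**2 / l)    # = 2 * exp(-8 * log(l)^2)
assert 0 <= atypical < hoeffding
```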
\noindent {\bf 3: Fix a typical output $\by\in \mathcal{F}$.}
For a fixed choice of $\by\in\mathcal{F}$, we express $H^{(g)}(V_1|\bY=\by) = h(\P\nolimits^{(g)}(V_1=0 | \bY=\by)) = h \left(\frac{\P\nolimits^{(g)}(V_1=0,\bY=\by)}{\P\nolimits^{(g)}(\bY=\by)} \right)$.
It suffices to show that the ratio of these probabilities is very close to $1/2$ w.h.p.
To this end, we will show that both the denominator and the numerator are highly concentrated around their respective means for $g\sim G$, and that their means have a ratio of nearly $1/2$.
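The reason a ratio near $1/2$ yields entropy near $1$ is the Taylor expansion $h(1/2+\epsilon) = 1 - \frac{2}{\ln 2}\epsilon^2 + O(\epsilon^4)$, so an $\epsilon$-deviation of the conditional probability from $1/2$ costs only $O(\epsilon^2)$ entropy. A quick numerical confirmation:

```python
import math

# Binary entropy near 1/2:  h(1/2 + e) = 1 - (2/ln 2) e^2 + O(e^4),
# so entropy stays within O(e^2) of 1 when the probability ratio is e-close
# to 1/2.  The error constant 10 below is a loose illustrative cushion.
def h(x):
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

for e in [1e-1, 1e-2, 1e-3]:
    approx = 1 - (2 / math.log(2)) * e**2
    assert abs(h(0.5 + e) - approx) < 10 * e**4
    assert h(0.5 + e) > 1 - 3 * e**2         # entropy loss is O(e^2)
```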
Focusing on the denominator (the argument for the numerator is very similar), we have:
\begin{equation}
\label{eq:intro:prob_of_y}
2^k\cdot\P\nolimits^{(g)}(\bY = \by) = \P(\bY=\by\,|\,\bV = \mathbi{0}) +
\sum_{d=0}^{\ell} B_g(d, \by) p^d (1-p)^{{\ell}-d},
\end{equation}
where $B_g(d,\by)$ is equal to the number of nonzero codewords in the code spanned by the rows of $g$ at Hamming distance $d$ from $\by$. We proceed with proving concentration on the summation above by splitting it into two parts.
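The decomposition above can be checked exhaustively at toy scale. In the sketch below (our illustration, with arbitrary small parameters), $B_g(d,\by)$ is counted over nonzero \emph{messages} $v$, which coincides with counting nonzero codewords whenever $g$ has full rank:

```python
import itertools
import random

# Exhaustive toy check of
#   2^k * P_g(Y=y) = P(Y=y|V=0) + sum_d B_g(d,y) p^d (1-p)^(l-d),
# with B_g(d,y) counted over nonzero messages v.
rng = random.Random(1)
k, l, p = 3, 6, 0.2
g = [[rng.randrange(2) for _ in range(l)] for _ in range(k)]

def encode(v):
    return tuple(sum(v[i] * g[i][j] for i in range(k)) % 2 for j in range(l))

def dist(a, b):
    return sum(x != y for x, y in zip(a, b))

y = tuple(rng.randrange(2) for _ in range(l))
# LHS: sum over all 2^k messages of the channel likelihood, i.e. 2^k * P_g(Y=y)
lhs = sum(p ** dist(encode(v), y) * (1 - p) ** (l - dist(encode(v), y))
          for v in itertools.product((0, 1), repeat=k))
# RHS: zero-codeword term plus the weight-enumerator sum
B = [0] * (l + 1)
for v in itertools.product((0, 1), repeat=k):
    if any(v):
        B[dist(encode(v), y)] += 1
rhs = p ** sum(y) * (1 - p) ** (l - sum(y)) + \
      sum(B[d] * p ** d * (1 - p) ** (l - d) for d in range(l + 1))
assert abs(lhs - rhs) < 1e-12
```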
\setlength{\leftskip}{0.3cm}
\smallskip \noindent {\bf 3a: Negligible part.}\quad If $\by$ was received as the output, it is very unlikely that an input codeword $x$ with $|\text{dist}(x,\by)-\ell p| \geq 6\sqrt{\ell}\log\ell$ was transmitted. It is then possible to show that the expectation (over $g\sim G$) of $\sum\limits_{d\,:\,|d - \ell p| \geq 6\sqrt{\ell}\log\ell} \hspace{-18pt}B_g(d, \by) p^d (1-p)^{{\ell}-d}$ is negligible with respect to the expectation of the whole summation. Markov's inequality then implies that this sum is negligible with high probability over $g\sim G$.
\smallskip \noindent {\bf 3b: Substantial part.} On the other hand, for any $d$ such that $|d - \ell p| \leq 6\sqrt{\ell}\log\ell$, the expectation of $B_g(d, \by)$ is going to be extremely large for the above-capacity regime. We can apply Chebyshev's inequality to prove concentration on every single weight coefficient $B_g(d, \by)$ with $d$ in such a range. A union bound then implies that they are all concentrated around their means simultaneously.
\setlength{\leftskip}{0pt}
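The mean behind steps 3a and 3b has a clean closed form if $B_g(d,\by)$ is counted over nonzero messages: for a uniform $g$, each nonzero $v$ maps to a uniformly random codeword $vg$, so $\E_g[B_g(d,\by)] = (2^k-1)\binom{\ell}{d}2^{-\ell}$. A quick Monte Carlo sketch (ours, with arbitrary small parameters) confirms this:

```python
import itertools
import math
import random

# Monte Carlo check of E_g[B_g(d,y)] = (2^k - 1) * C(l,d) / 2^l, where
# B_g(d,y) counts nonzero messages v with dist(v*g, y) = d; for a uniform g,
# each nonzero v maps to a uniform codeword.  Parameters are arbitrary.
rng = random.Random(5)
k, l, d = 3, 8, 2
y = tuple(rng.randrange(2) for _ in range(l))
trials, total = 4000, 0
for _ in range(trials):
    g = [[rng.randrange(2) for _ in range(l)] for _ in range(k)]
    for v in itertools.product((0, 1), repeat=k):
        if any(v):
            c = tuple(sum(v[i] * g[i][j] for i in range(k)) % 2 for j in range(l))
            if sum(a != b for a, b in zip(c, y)) == d:
                total += 1
exact = (2 ** k - 1) * math.comb(l, d) / 2 ** l
assert abs(total / trials - exact) < 0.2
```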
\smallskip
\noindent This proves that the summation over $d$ in~\eqref{eq:intro:prob_of_y} is concentrated around its mean. Finally, since
$|wt(\by) - \ell p| \leq 2\sqrt{\ell}\log\ell$ for $\by\in \mathcal{F}$ and we leave enough room above the capacity of the channel, w.h.p. over the choice of $g$ we have $B_g(wt(\by),\by)\gg 1$, and consequently
$\P(\bY=\by\,|\,\bV = \mathbi{0})=p^{wt(\by)}(1-p)^{\ell-wt(\by)}$ is negligible compared to the second term
in~\eqref{eq:intro:prob_of_y}.
\medskip \noindent {\bf 4: Concentration of entropy.}\quad Proving concentration of $\P\nolimits^{(g)}(V_1=0,\bY=\by)$ in the same way, we derive that $\frac{\P\nolimits^{(g)}(V_1=0,\bY=\by)}{\P\nolimits^{(g)}(\bY=\by)}$ is close to $\frac12$ with high probability for any typical $\by\in\mathcal{F}$, and thus $\E_{g\sim G}[H^{(g)}(V_1|\bY=\by)]$ is close to $1$ with high probability for such $\by$. Recalling that the probability of receiving $\by \in \mathcal{F}$ is overwhelming for the zero-vector input, we obtain from~\eqref{eq:intro:expect_of_entropy_typical} the desired lower bound on
$\E\limits_{g\sim G}\big[H^{(g)}(V_1|\bY)\big]$.
\vspace{0.1cm}
The full proof for the BSC case is presented in Section~\ref{sec:BSC_converse}. In order to generalize the proof to general BMS channels, we need to track and prove concentration bounds for many more parameters (in the BSC case, a single parameter $d$ was crucial). More specifically, in the BSC case we only have to deal with a single binomial distribution when estimating the expectation of $B_g(d,\by)$. For general BMS channels, however, we have to cope with a multinomial distribution and an ensemble of binomially distributed variables that depend on the particular realization of that multinomial distribution. Moreover, we emphasize that Theorem~\ref{thm:over:BSC_converse} and its analogue for BMS channels must hold in the \emph{non-asymptotic regime}, namely for all code lengths above some absolute constant which does not depend on the channel. (In contrast, in typical coding theorems in information theory one fixes the channel and lets the block length grow to infinity.)
We show how to overcome all these technical challenges for the general BMS case in Section~\ref{sec:bit-decoding}.
\subsection*{Organization of rest of the paper}
The rest of the paper, which contains all the formal theorem statements and full proofs, is organized as follows. In Section~\ref{sect:KC}, we describe how to find a good polarizing kernel for any BMS, and reduce its analysis to a strong coding theorem and its converse for bit-decoding of random linear codes.
The case when the BMS has entropy already reasonably close to either $0$ or $1$ is handled in Section~\ref{sec:suctions}.
Also, the analysis of the complexity of the kernel finding algorithm is deferred to Section~\ref{sect:cons}.
Turning to the converse coding theorem for random codes, as a warmup it is first proven for the binary symmetric channel in Section~\ref{sec:BSC_converse}. We then present the proof for general BMS channels in Section~\ref{sec:bit-decoding}. Finally, Section~\ref{sect:cons} gives the complete details of our code construction based on the multiple kernels found at the various levels, together with a sketch of the encoding and decoding algorithms; combined, these yield Theorem~\ref{thm:main1}, a version of our main result in which the decoding error probability is only proven to be inverse polynomial in the blocklength.
Lastly, in Section~\ref{sec:exponential-decoding} we show how to combine the tight analysis of the polarization from~\cite{Wang-Duursma} and our construction of codes from Section~\ref{sect:cons} to obtain our final result, also stated in the introductory section as Theorem~\ref{thm:intro-main}, with inverse sub-exponential $\exp(-N^{\alpha})$ decoding error probability.
\parskip=0.5ex
\section{Preliminaries}
\subsection{Binary entropy function}
All the logarithms in this paper are to the base $2$. The binary entropy function is defined as $h(x) = x\log\frac{1}{x} + (1-x)\log\frac1{1-x}$ for $x\in [0,1]$, where $0\log 0$ is taken to be $0$. We will use the simple fact that $h(x) \leq 2x\log\frac1x$ for $x\in [0,1/2)$ several times in the proofs.
The following proposition follows from the facts that $h(x)$ is concave, increasing for $x\in [0, 1/2)$, and symmetric around $1/2$, i.e. $h(x) = h(1-x)$ for $x\in [0,1]$.
\begin{prop}
\label{prop:entropy_differ}
For any $x, y \in [0,1]$, $|h(x) - h(y)| \leq h(|x-y|)$.
\end{prop}
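Both entropy facts are easy to confirm numerically on a grid; the sketch below (ours, purely illustrative) checks $h(x)\le 2x\log\frac1x$ on $(0,1/2)$ and Proposition~\ref{prop:entropy_differ} on a grid over $[0,1]^2$:

```python
import math

# Numeric grid check (ours) of the two binary-entropy facts used in the paper.
def h(x):  # binary entropy with base-2 logs, h(0) = h(1) = 0
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

xs = [i / 200 for i in range(201)]
for x in xs:
    if 0 < x < 0.5:
        assert h(x) <= 2 * x * math.log2(1 / x) + 1e-12   # h(x) <= 2x log(1/x)
for x in xs:
    for y in xs:
        assert abs(h(x) - h(y)) <= h(abs(x - y)) + 1e-9   # |h(x)-h(y)| <= h(|x-y|)
```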
\subsection{Channel degradation}
\begin{defin}
\label{def:degrad}
Let $W\,:\, \{0,1\} \to \mathcal{Y}$ and $\widetilde{W}\,:\, \{0,1\} \to \widetilde{\mathcal{Y}}$ be two BMS channels. We say that $\widetilde{W}$ is \emph{degraded} with respect to $W$, or, correspondingly, $W$ is \emph{upgraded} with respect to $\widetilde{W}$, denoted as $\widetilde{W} \preceq W$, if there exists a discrete memoryless channel $W_1\, :\, \mathcal{Y} \to \widetilde{\mathcal{Y}}$ such that
\begin{equation*}
\label{eq:degrad}
\widetilde{W}(\widetilde{y}\, |\, x) = \sum_{y\in\mathcal{Y}}W(y\,|\,x)W_1(\widetilde{y}\,|\,y) \qquad\quad \forall\ x\in\{0,1\},\ \widetilde{y}\in\widetilde{\mathcal{Y}}.
\end{equation*}
\end{defin}
Note that this is equivalent to saying that $\widetilde{W}(x)$ and $W_1(W(x))$ are identically distributed for any $x\in\{0,1\}$. In other words, one can simulate the usage of $\widetilde{W}$ by first using the channel $W$ and then applying some other channel $W_1$ to the output of $W$ to get a final output.
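As a concrete instance (our example, not from the paper): composing $\mathrm{BSC}(p)$ with a further $\mathrm{BSC}(q)$ yields $\mathrm{BSC}(p^*)$ with $p^* = p(1-q)+q(1-p)$, so $\mathrm{BSC}(p^*) \preceq \mathrm{BSC}(p)$, and the entropy can only increase, consistent with Proposition~\ref{prop:degrad_entropy} below:

```python
import math

def h(x):  # binary entropy with base-2 logs
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# Composing BSC(p) with BSC(q): a bit ends up flipped iff exactly one of the
# two channels flips it, so the composition is BSC(p*) with
# p* = p(1-q) + q(1-p), a degraded version of BSC(p).
p, q = 0.05, 0.1
p_star = p * (1 - q) + q * (1 - p)
assert p <= p_star <= 0.5      # the extra channel can only add noise here
assert h(p_star) >= h(p)       # entropy is monotone under degradation
```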
We will use the following useful facts from~\cite[Lemma 3]{Tal_Vardy} and~\cite[Lemma IV.1]{Ye15}.
\begin{prop}
\label{prop:degrad_entropy}
Let $W$ and $\widetilde{W}$ be two BMS channels, such that $\widetilde{W}\preceq W$. Then $H(\widetilde{W}) \geq H(W)$.
\end{prop}
\begin{prop}
\label{prop:degrad_subchannel}
Let $W$ and $\widetilde{W}$ be BMS channels, such that $\widetilde{W}\preceq W$, and $K \in \{0,1\}^{\ell\times\ell}$ be any invertible matrix. Denote by $W_i$, $\widetilde{W}_i$ the Ar\i kan's bit-channels of $W$ and $\widetilde{W}$ with respect to the kernel $K$ for any $i \in [\ell]$. Then for any $i \in [\ell]$, we have $\widetilde{W}_i \preceq W_i$, and consequently $H(\widetilde{W}_i) \ge H(W_i)$.
\end{prop}
\section{Give me a channel, I'll give you a kernel}
\label{sect:KC}
In this section we show that for any given binary-input memoryless symmetric (BMS) channel $W$ we can find a kernel $K$ of size $\ell\times\ell$ such that the Ar{\i}kan's bit-channels of $W$ with respect to this kernel are highly polarized. By this we mean that the multiplicative decrease $\lambda_{\alpha}$ defined in \eqref{mult_decrease} is sufficiently close to $\ell^{-1/2}$. The algorithm (Algorithm~\ref{algo:kernel_search}) to find such a kernel is as follows: if the channel is already almost noiseless or too noisy (entropy very close to $0$ or $1$), we take this kernel to be a tensor power of the original Ar{\i}kan's kernel for polar codes, $A_2 =\left( \begin{smallmatrix}1 & 0\\ 1 & 1\end{smallmatrix}\right)$. Otherwise, the algorithm tries out all the possible invertible kernels in $\{0,1\}^{\ell\times\ell}$ until a ``good'' kernel is found, meaning that conditions~\eqref{eq:algo_stop_condition} are satisfied. Before proving that Algorithm~\ref{algo:kernel_search} achieves our goal of bringing $\lambda_{\alpha}$ close to $\ell^{-1/2}$, we discuss several details about it.
\subsection{Local kernel construction} \label{sect:local}
\begin{algorithm}[h]
\caption{Kernel search}
\label{algo:kernel_search}
\DontPrintSemicolon
\SetAlgoLined
\KwInput{BMS channel $\widetilde{W}$ with output size $\leq \mathsf{Q}$, error parameter $\Delta$, and number $\ell$}
\KwOutput{invertible kernel $K\in \{0,1\}^{\ell\times\ell}$}
\eIf{$H(\widetilde{W}) < \ell^{-4}$ \emph{\textbf{or}} $H(\widetilde{W}) > 1 - \ell^{-4} + \Delta$}
{ \Return $K = A_2^{\otimes \log\ell}$}
{\For{$K \in \{0,1\}^{\ell\times\ell}$, \emph{\textbf{if}} $K$ is invertible}{
Compute Ar{\i}kan's bit-channels $\widetilde{W}_i(K)$ of $\widetilde{W}$ with respect to the kernel $K$, as in~\eqref{Arikan_subchannels}
\If{
\vspace{-2pt}
\begin{equation}
\begin{aligned}
\label{eq:algo_stop_condition}
&H(\widetilde{W}_i(K)) \leq \ell^{-\log \ell/5} &&\text{\normalfont{for}} && i \geq \ell\cdot H(\widetilde{W}) + \ell^{1/2}\log^3\ell\\
&H(\widetilde{W}_i(K)) \geq 1 - \ell^{-\log \ell/20} &&\text{\normalfont{for}} && i \leq \ell\cdot H(\widetilde{W}) - 14\ell^{1/2}\log^3\ell
\end{aligned}
\end{equation}}{\Return $K$}
}}
\end{algorithm}
As briefly discussed at the end of Section~\ref{sect:encdec}, we are unable to efficiently track all the bit-channels in the $\ell$-ary recursive tree \emph{exactly}. This is because the size of the output alphabet of the channels increases \emph{exponentially} with each step deeper into the tree (this follows directly from the definition of bit-channels~\eqref{Arikan_subchannels}). Thus computing all the channels (and their entropies) cannot be done in poly$(N)$ time. To overcome this issue we follow the approach of~\cite{Tal_Vardy}, with the subsequent simplification in~\cite{GX15}, of approximating the channels in the tree by degrading them (see Definition~\ref{def:degrad}). Degradation is achieved via the procedure of merging output symbols, which (a) decreases the output alphabet size, and (b) does not change the entropy of the channel too much. This implies (with all the details worked out in Section~\ref{sect:cons}) that we can substitute all the channels in the tree of depth $t$ by their \emph{degraded approximations}, such that all the channels have output alphabet size at most $\mathsf{Q}$ (a parameter depending on $N = \ell^{t}$ to be chosen), and that if $\widetilde{W}$ is a degraded approximation of the channel $W$ in the tree, then $H(W) \leq H(\widetilde{W}) \leq H(W) + \Delta$ for some $\Delta$ depending on $\mathsf{Q}$. Moreover, in Theorem~\ref{thm:kernel_seacrh_correct}, which we formulate and prove shortly, we show that when we apply Algorithm~\ref{algo:kernel_search} to a degraded approximation $\widetilde{W}$ of $W$ with small enough $\Delta$, then, even though conditions~\eqref{eq:algo_stop_condition} only dictate a sharp transition for $\widetilde{W}$, the same kernel will induce a sharp transition in polarization for $W$.
The second issue which such degraded approximation resolves is the running time of Algorithm~\ref{algo:kernel_search}. Notice that we are only going to apply it to channels with output size bounded by $\mathsf{Q}$, and recall that we think of $\ell$ as a constant (though a very large one). First of all, trying out all the possible kernels then takes only a constant number of iterations. Finally, within each iteration, just calculating all the Ar{\i}kan's bit-channels and their entropies in a straightforward way takes poly$(\mathsf{Q}^{\ell})$ time, which is just poly$(\mathsf{Q})$ when we treat $\ell$ as a constant. Therefore, by choosing $\mathsf{Q}$ to be polynomial in $N$, the algorithm indeed runs in poly$(N)$ time.
We now leave the full details concerning the complexity of the algorithm to be handled in Section~\ref{sect:cons}, and proceed with showing that Algorithm~\ref{algo:kernel_search} always returns a kernel which makes $\lambda_{\alpha}$ from~\eqref{mult_decrease} close to $\ell^{-1/2}$.
\begin{thm}
\label{thm:kernel_seacrh_correct}
Let $\alpha >0$ be a small constant. Let $\ell$ be a power of $2$ such that $\log\ell \geq \frac{11}{\alpha}$ and $\frac{\log\ell}{\log\log\ell + 2} \geq \frac{3}{\alpha}$. Let $W : \{0,1\}\to\mathcal{Y}$ and $\widetilde{W}:\{0,1\} \to\widetilde{\mathcal{Y}}$ be two BMS channels, such that $\widetilde{W} \preceq W$, ${H(\widetilde{W}) - \Delta \leq H(W)\leq H(\widetilde{W})}$ for some $0 \leq \Delta \leq \ell^{-\log \ell}$, and $|\widetilde{\mathcal{Y}}| \leq \mathsf{Q}$. Then Algorithm~\ref{algo:kernel_search} on input $\widetilde{W}$, $\Delta$, and $\ell$ returns a kernel $K \in \{0,1\}^{\ell\times\ell}$ that satisfies
\begin{equation}
\label{eq:mult_decrease_thm}
\dfrac{1}{\ell\cdot g_{\a}(H(W))} \sum_{i=1}^{\ell}g_{\a}\left(H(W_i)\right) \leq \ell^{-\frac12 + 5\alpha},
\end{equation}
where $W_1, W_2, \dots, W_{\ell}$ are the Ar{\i}kan's bit-channels of $W$ with respect to the kernel $K$, and the function $g_\alpha(\cdot)$ is defined as $g_\alpha(h) = \left(h(1-h)\right)^{\alpha}$ for any $h\in[0,1]$.
\end{thm}
\begin{proof} As we discussed above, we consider two cases:
\vspace{0.2cm}
\noindent \textbf{Suction at the ends.} If $H(\widetilde{W}) \notin (\ell^{-4}, 1-\ell^{-4} + \Delta)$, Algorithm~\ref{algo:kernel_search} returns the standard Ar{\i}kan's kernel $K = A_2^{\otimes \log\ell}$ on input $\widetilde{W}$ and $\Delta$. In this case $H(W) \notin (\ell^{-4}, 1 - \ell^{-4})$, and fairly standard arguments imply that the polarization under such a kernel is much faster when the entropy is close to $0$ or $1$. For completeness, we present the full proofs for this case in Section~\ref{sec:suctions}. Specifically, Lemma~\ref{lem:suction_evolution} immediately implies the result of the theorem for this regime, since we pick $\log\ell \geq \frac1{\alpha}$.
\vspace{0.2cm}
\noindent \textbf{Variance in the middle.} \sloppy Otherwise, if $H(\widetilde{W}) \in (\ell^{-4}, 1-\ell^{-4} + \Delta)$, then $H(W) \in {(\ell^{-4} - \Delta, 1 - \ell^{-4} + \Delta)}$, and thus $H(W) \in \left(\ell^{-4}/2, 1- \ell^{-4}/2\right)$.
We first need to argue that the algorithm will at least return some kernel. This argument is one of the main technical contributions of this work, and we formulate it as Theorem~\ref{thm:Arikans_entropies_polarize} in Section~\ref{BMS_section}. The theorem essentially claims that for any $\widetilde{W}$, an overwhelming fraction of possible kernels $K \in \{0,1\}^{\ell\times\ell}$ satisfy the conditions in~\eqref{eq:algo_stop_condition} for $\widetilde{W}$ and $K$ (note that at this point we do not use any conditions on the size of $\widetilde{\mathcal{Y}}$ or on the entropy $H(\widetilde{W})$). Clearly, then, a decent fraction of \emph{invertible} kernels from $\{0,1\}^{\ell\times\ell}$ also satisfy these conditions. Therefore, the algorithm will indeed terminate and return such a good kernel. Moreover, since the theorem claims that a random kernel from $\{0,1\}^{\ell\times\ell}$ satisfies~\eqref{eq:algo_stop_condition} with high probability, and a random kernel is known to be invertible with at least constant probability, instead of iterating through all possible kernels in step $4$ of Algorithm~\ref{algo:kernel_search} we could take a random kernel and check it, and then the number of iterations needed to find a good kernel would be very small with high probability. However, to keep everything deterministic, we stick to the current approach.
Suppose now the algorithm returned an invertible kernel $K \in \{0,1\}^{\ell\times\ell}$, which means that relations~\eqref{eq:algo_stop_condition} hold for $\widetilde{W}$ and the Ar{\i}kan's bit-channels $\widetilde{W}_1, \widetilde{W}_2,\dots, \widetilde{W}_{\ell}$ (we omit the dependence on $K$ from now on). Denote also by $W_i = W_i(K)$ the Ar{\i}kan's bit-channels of $W$ with respect to $K$. First, since degradation is preserved by taking Ar{\i}kan's bit-channels according to Proposition~\ref{prop:degrad_subchannel}, we have $\widetilde{W}_i \preceq W_i$, and thus $H(W_i) \leq H(\widetilde{W}_i)$ for all $i\in[\ell]$. Now, similarly to the proof of Proposition~\ref{prop:approx_accumulation}, since $K$ is invertible, conservation of entropy implies $\sum_{i =1}^{\ell}\left(H(\widetilde{W}_i) - H(W_i)\right) = \ell \left(H(\widetilde{W}) - H(W)\right) \leq \ell \cdot\Delta$, and therefore $H(W_i) \leq H(\widetilde{W}_i) \leq H(W_i) + \ell\cdot\Delta$ for any $i \in [\ell]$. We then deduce
\begin{equation}
\begin{aligned}
\label{eq:algo_analysis_subchannels}
&H(W_i) \leq H(\widetilde{W}_i) \leq \ell^{-\log \ell/5} &&\text{\normalfont{for}} && i \geq \ell\cdot H(\widetilde{W}) + \ell^{1/2}\log^3\ell\\
&H(W_i) \geq H(\widetilde{W}_i) - \ell\cdot\Delta \geq 1 - \ell^{-\log \ell/21} &&\text{\normalfont{for}} && i \leq \ell\cdot H(\widetilde{W}) - 14\cdot\ell^{1/2}\log^3\ell,
\end{aligned}
\end{equation}
where we used that we chose $\Delta \leq \ell^{-\log \ell}$ in the condition of the theorem.
Recall that $H(W) \in \left(\ell^{-4}/2, 1- \ell^{-4}/2\right)$ in the variance-in-the-middle regime, and note that this implies $g_{\a}(H(W)) \geq g_{\a}(\ell^{-4}/2) \geq \frac12\ell^{-4\alpha}$. Using~\eqref{eq:algo_analysis_subchannels}, together with the trivial bound $g_{\a}(x) \leq 1$ for all the indices $i$ close to $\ell\cdot H(\widetilde{W})$, we obtain that the LHS of the desired inequality \eqref{eq:mult_decrease_thm} is at most
\begin{equation}
\label{eq:mult_decrease_proof}
\begin{aligned}
&\dfrac{1}{\ell\cdot g_{\a}(H(W))} \Bigg(&&\sum_{i=1}^{ \ell\cdot H(\widetilde{W}) - 14\cdot\ell^{1/2}\log^3\ell}g_{\a}\left(1 - \ell^{-\log \ell/21}\right) + 15\ell^{1/2}\log^3\ell \\ & &+&\hspace{3pt} \sum_{i=\ell\cdot H(\widetilde{W}) + \ell^{1/2}\log^3\ell}^{\ell}g_{\a}\left(\ell^{-\log \ell/5}\right)\Bigg) \\
&< 30\ell^{-\frac12 + 4\alpha}\log^3\ell +2\ell^{-\alpha \log \ell/21 + 4\alpha} && \\
& < \rlap{$\ell^{-\frac12 + 5\alpha}$}, &&
\end{aligned}
\end{equation}
where the last inequality uses the conditions $\log\ell \geq \dfrac{11}{\alpha}$ and $\dfrac{\log\ell}{\log\log\ell + 2} \geq \dfrac{3}{\alpha}$ that we have on~$\ell$.
\end{proof}
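Since the constants in the last chain of inequalities are easy to get wrong, here is a quick numeric sanity check (ours, not a proof) with the illustrative values $\alpha=0.1$ and $\log\ell = 512$, done in the log domain because $\ell = 2^{512}$ is astronomically large:

```python
import math

# Numeric sanity check of 30 l^{-1/2+4a} log^3 l + 2 l^{-a log l/21 + 4a} < l^{-1/2+5a}
# for one admissible pair (alpha, l); everything is computed as log2 of the terms.
alpha = 0.1
log_l = 512.0                                    # i.e. l = 2**512
assert log_l >= 11 / alpha                       # first condition on l
assert log_l / (math.log2(log_l) + 2) >= 3 / alpha   # second condition on l

def log2_add(a, b):                              # log2(2^a + 2^b), numerically stable
    m = max(a, b)
    return m + math.log2(2 ** (a - m) + 2 ** (b - m))

t1 = math.log2(30) + (-0.5 + 4 * alpha) * log_l + 3 * math.log2(log_l)
t2 = 1 + (-alpha * log_l / 21 + 4 * alpha) * log_l
lhs = log2_add(t1, t2)
rhs = (-0.5 + 5 * alpha) * log_l
assert lhs < rhs
```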
\begin{remark} \label{rmk:rmk}
In this paper, we are interested in the cases where $\alpha$ is very close to $0$.
For such $\alpha$, we can absorb the two conditions on $\ell$ in Theorem~\ref{thm:kernel_seacrh_correct} into the single condition $\log\ell\geq \alpha^{-1.01}$.
\end{remark}
\subsection{Strong channel coding and converse theorems}
\label{BMS_section}
In this section we will show that Algorithm~\ref{algo:kernel_search}, which is used to establish the multiplicative decrease of almost $\ell^{-1/2}$ as in~\eqref{eq:mult_decrease_thm} in the setting of Theorem~\ref{thm:kernel_seacrh_correct}, indeed always returns some kernel in the regime when the entropy of the channel is not close to $0$ or $1$. While the analysis of the suction-at-the-ends regime, deferred to Section~\ref{sec:suctions}, is fairly standard and relies on the fact that polarization becomes much faster when the channel is almost noiseless or useless, in this section we follow the ideas from~\cite{FHMV17} and prove a \emph{sharp transition in the polarization behaviour} when the polarization happens under a random and sufficiently large kernel.
The sharp transition stems from the fact that when the kernel $K$ is large enough, with high probability (over the randomness of $K$) all the Ar{\i}kan's bit-channels with respect to $K$, except for approximately $\ell^{1/2}$ of them in the middle, are guaranteed to be either very noisy or almost noiseless.
We formulate the main result of this section in the following theorem, which was used in the proof of Theorem~\ref{thm:kernel_seacrh_correct}:
\begin{thm}
\label{thm:Arikans_entropies_polarize}
Let $W$ be any BMS channel, and let $W_1, W_2, \dots, W_{\ell}$ be the Ar{\i}kan's bit-channels defined in \eqref{Arikan_subchannels} with respect to the kernel $K$ chosen uniformly at random from $\{0,1\}^{\ell\times\ell}$.
Then the following inequalities all hold with probability $1 - o_{\ell}(1)$ over the choice of~$K$:
\begin{enumerate}[label=(\alph*)]
\item $H(W_i) \leq \ell^{-(\log \ell)/5}$\hspace{5pt}\qquad\qquad for\quad $i \geq \ell\cdot H(W) + \ell^{1/2}\log^3\ell$;
\item $H(W_i) \geq 1 - \ell^{-(\log \ell)/20}$\hspace{15pt}\quad for\quad $i \leq \ell\cdot H(W) - 14\cdot\ell^{1/2}\log^3\ell$.
\end{enumerate}
\end{thm}
\medskip
\begin{remark}
One can notice that the above theorem is stated for any BMS channel $W$, independent of the value of $H(W)$.
\end{remark}
\medskip
The proof of this theorem relies on results concerning bit-decoding for random linear codes that are interesting beyond the connection to polar codes. The following proposition shows how to connect Ar{\i}kan's bit-channels to this context.
\begin{prop}
\label{prop:Arikan-bit}
Let $W$ be a BMS channel, $K\in\{0,1\}^{\ell\times\ell}$ be an invertible matrix, and $i \in [\ell]$. Set $k = \ell - i + 1$, and let $G$ be a matrix which is formed by the last $k$ rows of $K$. Let $\bU$ be a random vector uniformly distributed over $\{0,1\}^{\ell}$, and $\bV$ be a random vector uniformly distributed over $\{0,1\}^{k}$. Then
\begin{equation}
\label{eq:Arikan-bit_decod}
H\left(U_i\;\Big\lvert\; W^{\ell}(\bU\cdot K), \bU_{<i}\right) = H\left(V_1\;\Big\lvert\;W^{\ell}(\bV \cdot G)\right)
\end{equation}
\end{prop}
The proof of this proposition only uses basic properties of BMS channels and linear codes, and is deferred to Appendix~\ref{app:BMS_lemmas}. Notice now that the LHS of~\eqref{eq:Arikan-bit_decod} is exactly the entropy $H(W_i)$ of the $i$-th Ar{\i}kan's bit-channel of $W$ with respect to the kernel $K$, by the definition of this bit-channel. On the other hand, one can think of the RHS of~\eqref{eq:Arikan-bit_decod} in the following way: view $G$ as a generator matrix for a linear code of blocklength $\ell$ and dimension $k$, which is transmitted through the channel $W$. Then $H\left(V_1\;\Big\lvert\;W^{\ell}(\bV \cdot G)\right)$ in some sense corresponds to how well one can decode the first bit of the message, given the output of the channel. Since in Theorem~\ref{thm:Arikans_entropies_polarize} we are interested in random kernels, the generator matrix $G$ is also random, and thus we are indeed interested in understanding bit-decoding of random linear codes.
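The identity~\eqref{eq:Arikan-bit_decod} can be verified exhaustively at toy scale. In the sketch below (ours; the kernel $K$, the index $i$, and the crossover probability $p$ are arbitrary choices), $W = \mathrm{BSC}(p)$ with $\ell = 4$, and both conditional entropies are computed by brute force:

```python
import itertools
import math

def h(x):  # binary entropy with base-2 logs
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def cond_entropy(triples):
    """H(bit | context) from (bit, context, probability) triples."""
    tot, zero = {}, {}
    for bit, ctx, pr in triples:
        tot[ctx] = tot.get(ctx, 0.0) + pr
        if bit == 0:
            zero[ctx] = zero.get(ctx, 0.0) + pr
    return sum(pc * h(zero.get(c, 0.0) / pc) for c, pc in tot.items() if pc > 0)

def mul(v, M, l):  # vector-matrix product over GF(2)
    return tuple(sum(v[i] * M[i][j] for i in range(len(v))) % 2 for j in range(l))

l, p, i = 4, 0.2, 3                                  # i-th bit-channel (1-indexed)
K = [[1, 0, 0, 0], [1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]]  # invertible toy kernel
k = l - i + 1
G = K[i - 1:]                                        # last k rows of K

def chan(c, y):                                      # BSC(p)^l likelihood of y given c
    d = sum(a != b for a, b in zip(c, y))
    return p ** d * (1 - p) ** (l - d)

ys = list(itertools.product((0, 1), repeat=l))
lhs = cond_entropy([(u[i - 1], (u[:i - 1], y), chan(mul(u, K, l), y) / 2 ** l)
                    for u in itertools.product((0, 1), repeat=l) for y in ys])
rhs = cond_entropy([(v[0], y, chan(mul(v, G, l), y) / 2 ** k)
                    for v in itertools.product((0, 1), repeat=k) for y in ys])
assert abs(lhs - rhs) < 1e-9
```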
\subsubsection{The BEC case}
When $W$ is the binary erasure channel, a statement
very similar to Theorem~\ref{thm:Arikans_entropies_polarize} was established in \cite{FHMV17}. The situation for the BEC is simpler and we now describe the intuition behind this.
Suppose we map uniformly random bits $\bU \in \{0,1\}^\ell$ to $\bX = \bU K$ for a \emph{random} $\ell \times \ell$ binary matrix $K$.
We will observe $\approx (1-z) \ell$ bits of $\bX$ after it passes through $\mathrm{BEC}(z)$; call these bits $\bZ$. For a random $K$, with high probability the first $\approx z \ell$ bits of $\bU$ will be linearly independent of these observed bits $\bZ$. When this happens we will have $H(W_i) = 1$ for $i \lesssim z \ell$. On the other hand, $\bZ$ together with the first $\approx z \ell$ bits of $\bU$ will have full rank w.h.p. over the choice of $K$. When this is the case, the remaining bits $U_i$ for $i \gtrsim z \ell$ will be determined as linear combinations of these bits, making the corresponding conditional entropies $H(W_i)=0$. Thus except for a few exceptional indices around $i \approx z \ell$, the entropy $H(W_i)$ will be really close to $0$ or $1$. The formal details and quantitative aspects are non-trivial as the argument has to handle the case when $z$ is itself close to $0$ or $1$, and one has to show the number of exceptional indices to be $\lesssim \sqrt{\ell}$ (which is the optimal bound). But ultimately the proof amounts to understanding the ranks of various random subspaces. When $W$ is a BMS channel, the analysis is no longer linear-algebraic, and becomes more intricate. This is the subject of the rest of this section as well as Sections~\ref{sec:BSC_converse} and~\ref{sec:bit-decoding}.
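The rank computation underlying this intuition can be sketched directly (our illustration, for one fixed erasure pattern; the averaging over patterns in the actual argument is omitted). The receiver knows the linear functionals $u \mapsto (u K)_j$ for unerased $j$, plus $e_1,\dots,e_{i-1}$ from the revealed prefix, and the conditional entropy of $U_i$ is $0$ or $1$ according to whether $e_i$ lies in their span:

```python
import random

# GF(2) linear-algebra sketch of the BEC case, for one fixed erasure pattern.
def make_basis(ints):
    basis = {}                          # pivot bit -> basis vector (as int)
    for v in ints:
        r = v
        while r:
            piv = r.bit_length() - 1
            if piv in basis:
                r ^= basis[piv]         # eliminate the pivot bit
            else:
                basis[piv] = r
                break
    return basis

def in_span(basis, t):
    while t:
        piv = t.bit_length() - 1
        if piv not in basis:
            return False
        t ^= basis[piv]
    return True

rng = random.Random(7)
l, z = 16, 0.5                          # blocklength and erasure probability
K = [[rng.randrange(2) for _ in range(l)] for _ in range(l)]
erased = [rng.random() < z for _ in range(l)]
# columns of K at unerased positions, encoded as ints (bit t = coefficient of U_t)
cols = [sum(K[t][j] << t for t in range(l)) for j in range(l) if not erased[j]]
entropies = []
for i in range(l):                      # 0-indexed bit-channels
    known = make_basis(cols + [1 << j for j in range(i)])
    entropies.append(0 if in_span(known, 1 << i) else 1)
# each conditional entropy is exactly 0 or 1, and the number of 1's is
# exactly l minus the rank of the observed columns
assert set(entropies) <= {0, 1}
assert sum(entropies) == l - len(make_basis(cols))
```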
\subsubsection{Part (a): channel capacity theorem}
Part (a) of Theorem~\ref{thm:Arikans_entropies_polarize} corresponds to transmitting random linear codes through $W$ with rates \emph{below} the capacity of the channel. For this regime, it turns out that we can use the classical result that random linear codes achieve the capacity of the channel with \emph{low decoding error probability}. Trivially, the bit-decoding error probability is even smaller, making the corresponding conditional entropy also very small. The following theorem therefore follows from classical Shannon theory:
\begin{thm}
\label{thm:positive_Shannon_BMS}
Let $W$ be any BMS channel and $k \leq \ell(1-H(W)) - \ell^{1/2}\log^3\ell$. Let $G$ be a random binary matrix uniform over $\{0,1\}^{k\times \ell}$. Suppose a codeword $\bV\cdot G$ is transmitted through $\ell$ copies of the channel $W$, where $\bV$ is uniformly random over $\{0,1\}^k$, and let $\bY$ be the output vector, i.e. $\bY = W^{\ell}(\bV\cdot G)$. Then with high probability over the choice of $G$ it holds ${H\left(V_1\;\big\lvert\;\bY\right) \leq \ell^{-(\log \ell)/5}}$.
\end{thm}
\begin{proof}
The described communication is just a transmission of a random linear code $C = \{ \bv G,\ \bv \in \{0,1\}^k\}$ through $W^{\ell}$, where the rate of the code is $R = \frac{k}{\ell} \leq I(W) - \ell^{-1/2}\log^3\ell$, so the rate is bounded away from the capacity of the channel. It is a well-studied fact that random (linear) codes achieve capacity for BMS channels, and moreover a tight error exponent was described by Gallager in~\cite{Gallager65} and analyzed further in~\cite{Barg_Forney_correspond},~\cite{Forney_survey},~\cite{Domb}. Specifically, one can show $\overline{P_e} \leq \text{exp}(-\ell E_r(R, W))$, where $\overline{P_e}$ is the probability of decoding error, averaged over the ensemble of all linear codes of rate $R$, and $E_r(R, W)$ is the so-called \emph{random coding exponent}.
It is proven in~\cite[Theorem~2.3]{Fabregas} that for any BMS channel $W$, one has $E_r(R,W) \geq E_r^{\text{BSC}}(R, I(W))$, where the latter is the error exponent for the BSC with the same capacity $I(W)$ as $W$. But the random coding exponent for the BSC in the regime when the rate is close to the capacity of the channel is given by the so-called sphere-packing exponent $E_r^{\text{BSC}}(R, I) = E_{\text{sp}}(R, I)$, which is easily shown to be ``almost'' quadratic in $(I - R)$. Specifically, one can show $E_{\text{sp}}(R, I) \geq \frac{\log^4\ell}{2\ell}$ when $R \leq I - \ell^{-1/2}\log^3\ell$, and therefore $\overline{P_e} \leq \text{exp}(-\ell E_r(R, W)) \leq \text{exp}(-\ell E_{\text{sp}}(R, I(W))) \leq \text{exp}(-\log^4\ell/2)$. Markov's inequality then implies that if we take a random linear code (i.e., choose a random binary matrix $G$), then with probability at least $1 - \ell^{-2}$ the decoding error is at most $\ell^2\text{exp}(-\log^4\ell/2) \leq \text{exp}(-\log^4\ell/4) \leq \ell^{-\log\ell/4}$. Consider such a good linear code (matrix $G$); then $\bV$ can be decoded from $\bY$ with high probability, and thus, clearly, $V_1$ can be recovered from $\bY$ with at least the same probability. Fano's inequality then gives us:
\[ H(V_1\,|\,\bY) \leq h(\ell^{-\log \ell/4}) \leq \ell^{-\log \ell/5}. \]
Thus we indeed obtain that the above holds with high probability (at least $1 - \ell^{-2}$, though this bound is very loose) over the random choice of $G$.
\end{proof}
\subsubsection{Part (b): strong converse for bit-decoding under noisy channel coding}
On the other hand, part (b) of Theorem~\ref{thm:Arikans_entropies_polarize} concerns bit-decoding of linear codes with rates \emph{above} the capacity of the channel. We prove that with high probability, for a random linear code with rate slightly above capacity of a BMS channel, any single bit of the input message is highly unpredictable based on the outputs of the channel on the transmitted codeword. Formally, we have the following theorem.
\begin{restatable}{thm}{converseShannon}
\label{thm:converse_Shannon_BMS}
\sloppy Let $W$ be any BMS channel, $\ell$ and $k$ be any integers that satisfy ${\ell \geq k \geq \ell(1-H(W)) + 14\ell^{1/2}\log^3\ell}$. Let $G$ be a random binary matrix uniform over $\{0,1\}^{k\times \ell}$. Suppose a message $\bV\cdot G$ is transmitted through $\ell$ copies of the channel $W$, where $\bV$ is uniformly random over $\{0,1\}^k$, and let $\bY$ be the output vector, i.e. $\bY = W^{\ell}(\bV\cdot G)$. Then, with probability at least $1 - \ell^{-\log\ell/20}$ over the choice of~$G$ it holds ${H\left(V_1\;\big\lvert\;\bY\right) \geq 1 - \ell^{-\log \ell/20}}$.
\end{restatable}
\noindent Since the theorem is of independent interest and of a fundamental nature, we devote a separate Section~\ref{sec:bit-decoding} to present a proof for it.
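At toy scale, the dichotomy between Theorem~\ref{thm:positive_Shannon_BMS} and Theorem~\ref{thm:converse_Shannon_BMS} can already be seen by exhaustive computation of $H(V_1\,|\,\bY)$. In the sketch below (ours; the matrices and parameters are arbitrary, and $\ell=8$ is far too small for the theorems themselves to apply), a rate far above capacity leaves $V_1$ much harder to predict than a rate far below it:

```python
import itertools
import math
import random

def h(x):  # binary entropy with base-2 logs
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bit_entropy(G, l, p):
    """Exhaustive H(V_1 | Y) for uniform V and Y = BSC(p)^l output on V*G."""
    k = len(G)
    tot, zero = {}, {}
    for v in itertools.product((0, 1), repeat=k):
        c = tuple(sum(v[i] * G[i][j] for i in range(k)) % 2 for j in range(l))
        for y in itertools.product((0, 1), repeat=l):
            d = sum(a != b for a, b in zip(c, y))
            pr = p ** d * (1 - p) ** (l - d) / 2 ** k
            tot[y] = tot.get(y, 0.0) + pr
            if v[0] == 0:
                zero[y] = zero.get(y, 0.0) + pr
    return sum(py * h(zero.get(y, 0.0) / py) for y, py in tot.items())

rng = random.Random(3)
l = 8
G_hi = [[rng.randrange(2) for _ in range(l)] for _ in range(6)]  # rate 3/4
G_lo = [[1] * l]                                                # rate 1/8, repetition code
H_above = bit_entropy(G_hi, l, 0.25)  # rate far above capacity 1 - h(0.25) ~ 0.19
H_below = bit_entropy(G_lo, l, 0.05)  # rate far below capacity 1 - h(0.05) ~ 0.71
assert 0.0 <= H_below < 0.1 < H_above <= 1.0
```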
\bigskip
The above statements make the proof of Theorem~\ref{thm:Arikans_entropies_polarize} immediate:
\begin{proof}[Proof of Theorem~\ref{thm:Arikans_entropies_polarize}]
\sloppy Denote $k = \ell - i + 1$, then by Proposition~\ref{prop:Arikan-bit} ${H(W_i) = H\left(V_1\;\Big\lvert\;W^{\ell}(\bV \cdot G_k)\right)}$, where $\bV \sim \{0,1\}^k$ and $G_k$ is formed by the last $k$ rows of $K$. Note that since $K$ is uniform over $\{0,1\}^{\ell\times\ell}$, this makes $G_k$ uniform over $\{0,1\}^{k\times\ell}$ for any $k$. Then:
\begin{enumerate}[label=(\alph*)]
\item For any $i \geq \ell\cdot H(W) + \ell^{1/2}\log^3\ell$, we have $k \leq \ell(1-H(W)) - \ell^{1/2}\log^3\ell$, and therefore Theorem~\ref{thm:positive_Shannon_BMS} applies, giving $H(W_i) \leq \ell^{-(\log\ell)/5}$ with probability at least $1 - \ell^{-2}$ over $K$.
\item Analogously, if $i \leq \ell\cdot H(W) - 14\cdot\ell^{1/2}\log^3\ell$, then $k \geq \ell(1-H(W)) + 14\ell^{1/2}\log^3\ell$, and Theorem~\ref{thm:converse_Shannon_BMS} gives $H(W_i) \geq 1 - \ell^{-(\log\ell)/20}$ with probability at least $1 - \ell^{-(\log\ell/20)}$ over~$K$.
\end{enumerate}
It only remains to take the union bound over all indices $i$ as in (a) and (b), which implies that all of the bounds on the entropies will hold simultaneously with probability at least $1 - \ell\cdot\ell^{-2} \geq 1 - \ell^{-1}$ over the random kernel $K$.
\end{proof}
\section{Strong converse for BSC\texorpdfstring{$_p$}{\tiny p}}
\label{sec:BSC_converse} We present the proof of Theorem~\ref{thm:converse_Shannon_BMS} in the next two sections. It is divided into three parts: first, in this section, we prove it for the special case where $W$ is a BSC. The analysis for this case is simpler (but already novel), and it provides the roadmap for the argument for general BMS channels. Next, in Section~\ref{sec:BMS_large_alphabet} we prove Theorem~\ref{thm:converse_Shannon_BMS} for the case when the output alphabet size of $W$ is bounded by $2\sqrt{\ell}$, which is the main technical challenge in the paper. The proof will mimic the approach for the BSC case to some extent. Finally, in Section~\ref{sec:BMS_any_alphabet}, we show how the case of a general BMS channel can be reduced to the case of a channel with bounded alphabet via ``upgraded binning'' to merge output symbols.
Throughout this section, consider the channel $W$ to be a BSC with crossover probability $p\leq \frac12$. Denote $H = H(W) = h(p)$, where $h(\cdot)$ is the binary entropy function. For the BSC case we will actually only require $k \geq \ell(1-H) + 8\sqrt{\ell}\log^2 \ell$ in the condition of Theorem~\ref{thm:converse_Shannon_BMS}. Thus we are in fact proving Theorem~\ref{thm:over:BSC_converse} here.
\begin{proof}[Proof of Theorem~\ref{thm:over:BSC_converse}]
We will follow the plan described in Section~\ref{sect:outline}. As we discussed there, we prove that $H(V_1\,|\,\bY)$ is very close to $1$ with high probability over $G$ by showing that its expectation over $G$ is already very close to $1$ and then using Markov's inequality. So we want to prove a lower bound on
\begin{equation}
\label{eq:confused_notation}
\E_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] = \sum_g \P(G=g) H^{(g)}(V_1|\bY),
\end{equation}
where $H^{(g)}(V_1|\bY)$ is the conditional entropy for the fixed matrix $g$. Similarly, in the remainder of this section, $\P\nolimits^{(g)}(\cdot)$ denotes probabilities of certain events \emph{for a fixed matrix $g$}. By $\sum_g$ we denote the summation over all binary matrices from $\{0,1\}^{k\times\ell}$.
\medskip \noindent{\bf Restrict to zero-input.}\quad We rewrite
\begin{align}
\label{entropy_expectation}
\E_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] &=\sum_g \P(G=g) \left(\sum_{\by \in \mathcal{Y}^{\ell}} \P\nolimits^{(g)}(\bY=\by) H^{(g)}(V_1|\bY=\by) \right) \nonumber \\
&=\sum_{\by \in \mathcal{Y}^{\ell}} \sum_g \P\nolimits^{(g)}(\bY=\by) \cdot\P(G=g)H^{(g)}(V_1|\bY=\by). \nonumber
\end{align}
Our first step is to prove that in the above summation we can change $\P\nolimits^{(g)}(\bY=\by)$ to ${\P\nolimits^{(g)}(\bY=\by|\bV = \mathbi{0})}$, where $\mathbi{0}$ is the all-zero vector. This observation is crucial for our arguments, since it allows us to only consider the outputs $\by$ which are ``typical'' for the all-zero codeword when approximating $\E\limits_{g\sim G}\big[H^{(g)}(V_1|\bY)\big]$. Precisely, we prove
\begin{lem}
\label{typical_entropy_BSC}
Let $W$ be a BMS channel, $\ell$ and $k$ be integers such that $k \leq \ell$. Let $G$ be a random binary matrix uniform over $\{0,1\}^{k\times \ell}$. Suppose a message $\bV\cdot G$ is transmitted through $\ell$ copies of $W$, where $\bV$ is uniformly random over $\{0,1\}^k$, and let $\bY$ be the output vector $\bY = W^{\ell}(\bV\cdot G)$. Then
\begin{equation}
\label{entropy_expectation_typical}
\E_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] = \sum_{\by \in \mathcal{Y}^{\ell}} \sum_g \P\nolimits^{(g)}(\bY=\by|\bV = \emph{\mathbi{0}}) \cdot\P(G=g)H^{(g)}(V_1|\bY=\by).
\end{equation}
\end{lem}
Note that the above lemma is formulated for any BMS channel, and we will also use it for the proof of the general case in Section~\ref{sec:bit-decoding}. The proof of this lemma uses the symmetry of linear codes with respect to shifting by a codeword and the additive structure of the BSC, together with the fact that a BMS channel can be represented as a convex combination of several BSC subchannels. We defer the proof to Appendix~\ref{app:BMS_lemmas}.
Note that $\P\nolimits^{(g)}(\bY=\by|\bV = \mathbi{0})$ does not in fact depend on the matrix $g$, since $\mathbi{0}\cdot g = \mathbi{0}$, and so randomness here only comes from the usage of the channel $W$. Specifically, $\P\nolimits^{(g)}(\bY=\by|\bV = \mathbi{0}) = p^{wt(\by)}(1-p)^{{\ell}-wt(\by)}$, where we denote by $wt(\by)$ the Hamming weight of $\by$. Then in~\eqref{entropy_expectation_typical} we obtain
\begin{align}
\label{entr_exp_weights_typical}
\E_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] = \sum_{\by \in \mathcal{Y}^{\ell}} p^{wt(\by)}(1-p)^{{\ell}-wt(\by)} \E_{g\sim G}\big[H^{(g)}(V_1|\bY=\by)\big].
\end{align}
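In fact, for the BSC the rewriting behind \eqref{entr_exp_weights_typical} already holds for every fixed kernel $g$ (by shift symmetry of the linear code), not only on average, so it can be checked by exhaustive enumeration at toy sizes. A minimal sketch; the kernels and $p$ below are arbitrary illustrative choices:

```python
import itertools
import math

def h2(x):
    """Binary entropy in bits; 0 at the endpoints by continuity."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def zero_input_reweighting(g, p):
    """For a fixed kernel g (k binary rows of length ell) and BSC(p), return
    ( sum_y P(Y=y) H(V_1|Y=y),  sum_y P(Y=y|V=0) H(V_1|Y=y) ),
    which the zero-input rewriting asserts are equal."""
    k, ell = len(g), len(g[0])
    joint = {}  # (v_1, y) -> probability, V uniform over {0,1}^k
    for v in itertools.product((0, 1), repeat=k):
        x = tuple(sum(v[i] * g[i][j] for i in range(k)) % 2 for j in range(ell))
        for y in itertools.product((0, 1), repeat=ell):
            d = sum(a != b for a, b in zip(x, y))
            joint[v[0], y] = joint.get((v[0], y), 0.0) + 2**-k * p**d * (1 - p)**(ell - d)
    lhs = rhs = 0.0
    for y in itertools.product((0, 1), repeat=ell):
        p_y = joint.get((0, y), 0.0) + joint.get((1, y), 0.0)
        H_y = h2(joint.get((0, y), 0.0) / p_y)   # H(V_1 | Y = y)
        lhs += p_y * H_y                          # weighted by the true output law
        w = sum(y)
        rhs += p**w * (1 - p)**(ell - w) * H_y    # weighted by the all-zero-input law
    return lhs, rhs
```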
\medskip \noindent{\bf Define a typical set.}\quad The above expression allows us to only consider ``typical'' outputs $\by$ for the all-zero input while approximating $\E_{g\sim G}\big[H^{(g)}(V_1|\bY)\big]$. For the BSC case, we consider $\by$ to be typical when $|wt(\by) - \ell p| \leq 2\sqrt{\ell}\log\ell$. Then we can write:
\begin{align}
\label{entr_exp_weights_only_typical}
\E_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] \geq \sum_{|wt(\by) - \ell p| \leq 2\sqrt{\ell}\log\ell} p^{wt(\by)}(1-p)^{{\ell}-wt(\by)} \E_{g\sim G}\big[H^{(g)}(V_1|\bY=\by)\big].
\end{align}
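The probability mass that this typical window captures can be checked numerically; the sketch below evaluates the binomial terms in log-space to avoid underflow ($\ell$ and $p$ are arbitrary illustrative values):

```python
import math

def typical_mass(ell, p):
    """P(|wt - ell*p| <= 2*sqrt(ell)*log2(ell)) for wt ~ Binom(ell, p),
    summing pmf terms computed via lgamma for numerical stability."""
    r = 2 * math.sqrt(ell) * math.log2(ell)
    total = 0.0
    for d in range(ell + 1):
        if abs(d - ell * p) <= r:
            log_term = (math.lgamma(ell + 1) - math.lgamma(d + 1)
                        - math.lgamma(ell - d + 1)
                        + d * math.log(p) + (ell - d) * math.log(1 - p))
            total += math.exp(log_term)
    return total
```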
\medskip \noindent{\bf Fix a typical output.}\quad Let us fix any typical $\by \in \mathcal{Y}^{\ell}$ such that $|wt(\by) - \ell p| \leq 2\sqrt{\ell}\log\ell$, and show that $\E_{g\sim G}[H^{(g)}(V_1|\bY=\by)]$ is very close to $1$. To do this, we first notice that
\begin{equation}
\label{entropy_ratio}
H^{(g)}(V_1|\bY=\by) = h \left(\frac{\P\nolimits^{(g)}(V_1=0,\bY=\by)}{\P\nolimits^{(g)}(\bY=\by)} \right).
\end{equation}
Denote by $\widetilde{\bV} = \bV^{[2:k]}$ bits $2$ to $k$ of the vector $\bV$, and by $\widetilde{g} = g[2:k]$ the matrix $g$ without its first row. Next we define the shifted weight distributions of the codebooks generated by $g$ and $\widetilde{g}$:
\begin{align*}
B_g(d, \by) & :=|\{ \bv \in \{0,1\}^k \setminus \mathbi{0}\quad : wt(\bv g+\by)=d \}|, \\
\widetilde{B}_g(d, \by) & :=|\{\widetilde{\bv}\in \{0,1\}^{k-1} \setminus \mathbi{0} : wt(\widetilde{\bv} \widetilde{g}+\by)=d \}|.
\end{align*}
Therefore,
\begin{align}
\nonumber \frac{\P\nolimits^{(g)}(V_1=0,\bY=\by)}{\P\nolimits^{(g)}(\bY=\by)}
&=\frac{\sum_{\widetilde{\bu}}\P\nolimits^{(g)}(\bY=\by \big| V_1=0, \widetilde{\bV}=\widetilde{\bu})}
{\sum_{\bu}\P\nolimits^{(g)}(\bY=\by \big| \bV=\bu)} \\
\label{ratio} &= \frac{p^{wt(\by)}(1-p)^{{\ell}-wt(\by)}+
\sum_{d=0}^{\ell} \widetilde{B}_g(d, \by) p^d (1-p)^{{\ell}-d}}
{p^{wt(\by)}(1-p)^{{\ell}-wt(\by)}+
\sum_{d=0}^{\ell} B_g(d, \by) p^d (1-p)^{{\ell}-d}}.
\end{align}
We will prove that the above expression concentrates around $1/2$, which by \eqref{entropy_ratio} will imply that $H^{(g)}(V_1|\bY=\by)$ is close to $1$ with high probability. To do this, we will prove concentration around the mean for both the numerator and the denominator of the above ratio. Since the arguments are exactly the same, let us only consider the denominator for now.
By definition,
\begin{equation} \label{eq:smds}
\begin{aligned}
B_g(d, \by) & =\sum_{\bv\neq \mathbi{0}}
\mathbbm{1}[wt(\bv g +\by)=d].
\end{aligned}
\end{equation}
The expectation and variance of each summand satisfy
\begin{align*}
&\underset{g\sim G}{\Var}\, \mathbbm{1} \big[wt(\bv g +\by)=d \big] \le
\E_{g\sim G} \mathbbm{1} \big[wt(\bv g +\by)=d \big]
=\binom{{\ell}}{d} 2^{-{\ell}}
\quad\quad \forall \bv \in \{0,1\}^k \setminus \mathbi{0}.
\end{align*}
Clearly, the summands in \eqref{eq:smds} are pairwise independent. Therefore,
\begin{align}
\label{variance_expectation_weight}
\underset{g\sim G}{\Var} \big[B_g(d, \by)\big] & \le \E_{g\sim G} \big[B_g(d, \by)\big]
=(2^k-1) \binom{{\ell}}{d} 2^{-{\ell}},
\end{align}
and then
\begin{align*}
\E_{g\sim G} \left[\sum_{d=0}^{\ell} B_g(d, \by) p^d (1-p)^{{\ell}-d}\right]
= (2^{k}-1) 2^{-{\ell}} \left( \sum_{d=0}^{\ell} \binom{{\ell}}{d} p^d (1-p)^{{\ell}-d} \right)
= (2^{k}-1) 2^{-{\ell}}.
\end{align*}
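Since each nonzero $\bv$ maps to a uniformly random point $\bv g\in\{0,1\}^{\ell}$, the value $(2^k-1)2^{-\ell}$ does not depend on $\by$ or $p$; this can be confirmed by averaging over all $2^{k\ell}$ kernels at toy sizes (the parameters below are arbitrary illustrations):

```python
import itertools

def avg_shifted_weight_sum(ell, k, p, y):
    """E_g[ sum_d B_g(d, y) p^d (1-p)^(ell-d) ], the exact average over all
    2^(k*ell) binary kernels g; should equal (2^k - 1) * 2^(-ell)."""
    total = 0.0
    n_kernels = 2 ** (k * ell)
    for bits in itertools.product((0, 1), repeat=k * ell):
        g = [bits[i * ell:(i + 1) * ell] for i in range(k)]
        inner = 0.0
        for v in itertools.product((0, 1), repeat=k):
            if any(v):  # sum over v != 0 only
                d = sum((sum(v[i] * g[i][j] for i in range(k)) % 2) != y[j]
                        for j in range(ell))
                inner += p**d * (1 - p)**(ell - d)
        total += inner / n_kernels
    return total
```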
Let us now show that $\sum_{d=0}^{\ell} B_g(d, \by) p^d (1-p)^{{\ell}-d}$ is tightly concentrated around its mean for $g\sim G$. To do this, we split the range of $d$ into two parts: when $|d - \ell p| > 6\sqrt{\ell} \log \ell $, and when $|d - \ell p| \leq 6\sqrt{\ell} \log \ell$:
\[ \sum_{d=0}^{\ell}B_g(d, \by)p^d(1-p)^{\ell - d} = \sum_{|d - \ell p| > 6\sqrt{\ell} \log \ell}B_g(d, \by)p^d(1-p)^{\ell - d} + \sum_{|d - \ell p| \leq 6\sqrt{\ell} \log \ell}B_g(d, \by)p^d(1-p)^{\ell - d}. \]
\medskip \noindent{\bf Negligible part.}\quad Denote $Z_g(\by) = \sum\limits_{|d - \ell p| > 6\sqrt{\ell} \log \ell}B_g(d, \by)p^d(1-p)^{\ell - d},$ and notice that
\begin{align} \E_{g\sim G}[Z_g(\by)] = (2^k-1)2^{-\ell}\sum_{|d - \ell p| > 6\sqrt{\ell} \log \ell}\binom{\ell}{d}p^d(1-p)^{\ell - d} &\leq (2^k-1)2^{-\ell}\cdot \exp(-12\log ^2 \ell) \nonumber\\
&\leq2(2^k-1)2^{-\ell}\cdot \ell^{-12\log\ell}, \label{eq:EZ_bound}
\end{align}
where the inequality is obtained via the Chernoff bound for binomial random variable.
Then Markov's inequality gives $\P_{g\sim G}\big[ Z_g(\by) \geq \E\limits_{g\sim G}[Z_g(\by)]\,\ell^{2\log\ell}\big] \leq \ell^{-2\log\ell}$, and so
\[ \P_{g\sim G}\big[Z_g(\by) < 2(2^k-1)2^{-\ell}\ell^{-10\log\ell} \big] \geq 1 - \ell^{-2\log\ell}.\]
Define the set \begin{equation}
\label{eq:G1def}
\mathcal{G}_1 := \{g\in\{0,1\}^{k\times\ell}\, :\, Z_g(\by) < 2(2^k-1)2^{-\ell}\ell^{-10\log\ell} \},
\end{equation} and then $\P\limits_{g\sim G}[g \in \mathcal{G}_1] \geq 1 - \ell^{-2\log\ell}.$
\vspace{0.3cm} \noindent{\bf Substantial part.}\quad
Now we deal with the part when $|d - \ell p| \leq 6\sqrt{\ell}\log\ell$. For now, let us fix any $d$ in this interval, and use Chebyshev's inequality together with \eqref{variance_expectation_weight}:
\begin{equation}
\begin{aligned}
\label{chebyshev}
\P_{g\sim G}\bigg[\Big\lvert B_g(d,\by) - \E[B_g(d,\by)]\Big\lvert \geq \ell^{-2\log\ell}\E[B_g(d,\by)]\bigg] &\leq \dfrac{\Var[B_g(d,\by)]}{\ell^{-4\log\ell}\E^2[B_g(d,\by)]} \\
&\leq \dfrac{\ell^{4\log\ell}}{\E\limits_{g\sim G}[B_g(d,\by)]} \leq \ell^{4\log\ell}\,\dfrac{2^{\ell-k+1}}{\binom{\ell}{d}}.
\end{aligned}
\end{equation}
We use the following bound on the binomial coefficients
\begin{fact}[\cite{Macwilliams77}, Chapter 10, Lemma 7]
\label{binom_lemma}
For any integer $0\le d\le {\ell}$,
\begin{equation}
\label{binom_lemma_eq} \frac{1}{\sqrt{2\ell}} 2^{{\ell} h(d/{\ell})}
\le \binom{{\ell}}{d} \le 2^{{\ell} h(d/{\ell})}
\end{equation}
\end{fact}
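Fact~\ref{binom_lemma} is easy to sanity-check exhaustively for moderate $\ell$ (a numerical spot-check, not a proof; exact integer binomials are compared against the floating-point entropy bounds):

```python
import math

def binom_entropy_bounds_hold(ell):
    """Check  2^(ell*h(d/ell)) / sqrt(2*ell)  <=  C(ell, d)  <=  2^(ell*h(d/ell))
    for every 0 <= d <= ell."""
    def h2(x):
        return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)
    for d in range(ell + 1):
        upper = 2.0 ** (ell * h2(d / ell))
        if not (upper / math.sqrt(2 * ell) <= math.comb(ell, d) <= upper):
            return False
    return True
```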
Since we fixed $|d - \ell p| \leq 6\sqrt{\ell}\log\ell$, Proposition~\ref{prop:entropy_differ} implies
\[ \left\lvert h(p) - h\left(\frac{d}{\ell}\right) \right\lvert \leq h(6\ell^{-1/2}\log\ell) \leq 12\ell^{-1/2}\log\ell\cdot\log\dfrac{\ell^{1/2}}{6\log\ell} \leq 6\ell^{-1/2}\log^2\ell.\]
Recalling that we consider the above-capacity regime with $k \geq \ell(1 - h(p)) + 8\sqrt{\ell}\log^2\ell$, we derive from~\eqref{binom_lemma_eq} and above
\begin{equation}
\label{eq:binom_appr_1}
\dfrac{2^{\ell-k+1}}{\binom{\ell}{d}} \leq \ell\,2^{\ell\left[h(p) - h\left(\frac{d}{\ell} \right) - 8\ell^{-1/2}\log^2\ell\right]} \leq \ell\,2^{-2\ell^{1/2}\log^2\ell}.
\end{equation}
Therefore, we get in \eqref{chebyshev}:
\begin{equation}
\label{single_concentration}
\P_{g\sim G}\bigg[\Big\lvert B_g(d,\by) - \E[B_g(d,\by)] \Big\lvert \geq \ell^{-2\log\ell}\E[B_g(d,\by)]\bigg]\leq \ell^{4\log\ell+1}\,2^{-2\ell^{1/2}\log^2\ell} \leq \ell^{-\sqrt{\ell}-1}.
\end{equation}
Finally, denote
\begin{equation}
\label{eq:G2def} \mathcal{G}_2 := \bigg\{g \in \{0,1\}^{k\times\ell}\,:\, \Big\lvert B_g(d,\by) - \E[B_g(d,\by)]\Big\lvert \leq \ell^{-2\log\ell}\E[B_g(d,\by)]\quad \text{~for all~} |d-\ell p| \le 6\sqrt{{\ell}}\log {\ell} \bigg\}.
\end{equation}
Then by a simple union bound applied to \eqref{single_concentration} for all $d$ such that $|d-\ell p| \leq 6\sqrt{\ell}\log\ell$ we obtain
$$
\P_{g\sim G}[g\in\mathcal{G}_2]\ge 1-{\ell}^{-\sqrt {\ell}}. $$
\medskip
We are now ready to combine these bounds to get the needed concentration.
\begin{lem}
\label{binary_main_concentration_lem}
With probability at least $1 - 2\ell^{-2\log\ell}$ over the choice of $g\sim G$, it holds
\begin{equation}
\label{weight_concentration}
(2^{k}-1) 2^{-{\ell}} (1-2\ell^{-2\log {\ell}}) \le
\sum_{d=0}^{\ell} B_g(d, \by) p^d (1-p)^{{\ell}-d}
\le (2^{k}-1) 2^{-{\ell}} (1+2\ell^{-2\log {\ell}}).
\end{equation}
\end{lem}
\begin{proof}
Indeed, by the union bound $\P_{g\sim G}[g\in\mathcal{G}_1\cap\mathcal{G}_2] \geq 1 - \ell^{-2\log\ell} - \ell^{-\sqrt{\ell}} \geq 1 - 2\ell^{-2\log\ell}$. But for any $g \in \mathcal{G}_1\cap\mathcal{G}_2$ we have from~\eqref{eq:G2def}
\begin{align*}
\sum_{d=0}^{\ell} B_g(d, \by) p^d (1-p)^{{\ell}-d}
\ge & \sum_{|d-\ell p| \le 6\sqrt{{\ell}}\log {\ell}} B_g(d, \by) p^d (1-p)^{{\ell}-d} \\
\ge & (2^{k}-1) 2^{-{\ell}} (1-{\ell}^{-2\log {\ell}}) \sum_{|d-\ell p| \le 6\sqrt{{\ell}}\log {\ell}} \binom{{\ell}}{d} p^d (1-p)^{{\ell}-d} \\
\ge & (2^{k}-1) 2^{-{\ell}} (1-{\ell}^{-2\log {\ell}})
(1- 2\ell^{-12\log {\ell}}) \\
\ge & (2^{k}-1) 2^{-{\ell}} (1-2\ell^{-2\log {\ell}}).
\end{align*}
We can also upper bound using~\eqref{eq:G2def} and~\eqref{eq:G1def}
\begin{align*}
\sum_{d=0}^{\ell} B_g(d, \by) p^d &(1-p)^{{\ell}-d} \\
= & \sum_{|d-\ell p| \le 6\sqrt{{\ell}}\log {\ell}} B_g(d, \by) p^d (1-p)^{{\ell}-d}
+ \sum_{|d-\ell p| > 6\sqrt{{\ell}}\log {\ell}} B_g(d, \by) p^d (1-p)^{{\ell}-d} \\
\le & (2^{k}-1) 2^{-{\ell}} (1+{\ell}^{-2\log {\ell}}) \sum_{|d-\ell p| \le 6\sqrt{{\ell}}\log {\ell}} \binom{{\ell}}{d} p^d (1-p)^{{\ell}-d} + Z_g(\by) \\
\le & (2^{k}-1) 2^{-{\ell}} (1+{\ell}^{-2\log {\ell}})
+ 2(2^{k}-1) 2^{-{\ell}} {\ell}^{-10\log {\ell}} \\
\le & (2^{k}-1) 2^{-{\ell}} (1+2\ell^{-2\log {\ell}}). \qedhere
\end{align*}
\end{proof}
We similarly obtain the concentration for the sum in the numerator of \eqref{ratio}: with probability at least $1 - 2\ell^{-2\log\ell}$ over the choice of $g$, it holds
\begin{equation}
\label{tilde_weight_concentration}
(2^{k-1}-1) 2^{-{\ell}} (1-2\ell^{-2\log {\ell}}) \le
\sum_{d=0}^{\ell} \widetilde{B}_g(d, \by) p^d (1-p)^{{\ell}-d}
\le (2^{k-1}-1) 2^{-{\ell}} (1+2\ell^{-2\log {\ell}}).
\end{equation}
Next, let us use the fact that we took a typical output $\by$ with $|wt(\by) - \ell p| \leq 2\sqrt{\ell}\log\ell$ to show that the term $p^{wt(\by)}(1-p)^{\ell-wt(\by)}$ is negligible in both the numerator and the denominator of \eqref{ratio}. We have
\begin{equation}
\label{eq:binary3}
p^{wt(\by)}(1-p)^{\ell-wt(\by)} = \left(\dfrac{1-p}{p}\right)^{\ell p - wt(\by)} \cdot p^{\ell p}(1-p)^{\ell - \ell p} = 2^{\left(\ell p - wt(\by)\right) \cdot\log\left(\frac{1-p}{p}\right)} \cdot 2^{-\ell h(p)}.
\end{equation}
Simple case analysis gives us:
\begin{enumerate}[label = (\alph*)]
\item If $p < \frac1{\sqrt{\ell}}$, then $\left(\ell p - wt(\by)\right) \cdot\log\left(\frac{1-p}{p}\right) \leq \ell p \log\frac1{p} < \ell \frac1{\sqrt{\ell}}\log\sqrt{\ell} < \sqrt\ell \log^2\ell;$
\item In case $p \geq \frac1{\sqrt{\ell}}$, obtain $\left(\ell p - wt(\by)\right) \cdot\log\left(\frac{1-p}{p}\right) \leq 2\sqrt{\ell}\log\ell \cdot \log\frac1{p} \leq \sqrt{\ell}\log^2\ell$.
\end{enumerate}
Using the above in~\eqref{eq:binary3} we derive for $k \geq \ell(1-h(p)) + 8\sqrt{\ell}\log^2\ell$
\begin{equation}
\begin{aligned}
p^{wt(\by)}(1-p)^{\ell-wt(\by)} \leq 2^{\sqrt{\ell}\log^2\ell -\ell h(p)} \leq 2^{2\sqrt{\ell}\log^2\ell - \ell h(p) - 2\log^2\ell - 2} \leq \ell^{-2\log\ell}\,(2^{k-1}-1)2^{-\ell}.
\end{aligned}
\end{equation}
Combining this with \eqref{weight_concentration} and \eqref{tilde_weight_concentration} and using a union bound we derive that with probability at least $1 - 4\ell^{-2\log\ell}$ it holds
\[ \left\lvert \left(p^{wt(\by)}(1-p)^{\ell-wt(\by)} + \sum_{d=0}^{\ell} B_g(d, \by) p^d (1-p)^{{\ell}-d}\right)- (2^k-1)2^{-\ell} \right\rvert \leq 3\ell^{-2\log\ell}\cdot(2^k-1)2^{-\ell}, \]
\[ \left\lvert \left(p^{wt(\by)}(1-p)^{\ell-wt(\by)} + \sum_{d=0}^{\ell} \widetilde{B}_g(d, \by) p^d (1-p)^{{\ell}-d}\right)- (2^{k-1}-1)2^{-\ell} \right\rvert \leq 3\ell^{-2\log\ell}\cdot(2^{k-1}-1)2^{-\ell}. \]
Therefore, with probability at least $1 - 4\ell^{-2\log\ell}$ the expression in \eqref{ratio} is bounded as
\[ \dfrac{(1 - 3\ell^{-2\log\ell})(2^{k-1}-1)2^{-\ell}}{(1 + 3\ell^{-2\log\ell})(2^{k}-1)2^{-\ell}} \leq
\frac{\P\nolimits^{(g)}(V_1=0,\bY=\by)}{\P\nolimits^{(g)}(\bY=\by)} \leq
\dfrac{(1 + 3\ell^{-2\log\ell})(2^{k-1}-1)2^{-\ell}}{(1 - 3\ell^{-2\log\ell})(2^{k}-1)2^{-\ell}}. \]
We can finally derive:
\begin{align*} \dfrac{(1 - 3\ell^{-2\log\ell})(2^{k-1}-1)}{(1 + 3\ell^{-2\log\ell})(2^{k}-1)} &\geq (1 - 6\ell^{-2\log\ell})\left(\dfrac12 - 2^{-k}\right)
\hspace{-27pt} &&\geq (1 - 6\ell^{-2\log\ell})\left(\dfrac12 - \ell^{-8\sqrt{\ell}\log\ell}\right) \\
& &&\geq \dfrac12 - \ell^{-\log\ell}, \\
\dfrac{(1 + 3\ell^{-2\log\ell})(2^{k-1}-1)}{(1 - 3\ell^{-2\log\ell})(2^{k}-1)} &\leq \rlap{$(1 + 9\ell^{-2\log\ell})\dfrac12 \leq \dfrac12 + \ell^{-\log\ell}.$} &&
\end{align*}
Therefore, with probability at least $1 - 4\ell^{-2\log\ell}$ over $g\sim G$ it holds
\[ \left\lvert
\frac{\P\nolimits^{(g)}(V_1=0,\bY=\by)}{\P\nolimits^{(g)}(\bY=\by)} - \dfrac12 \right\rvert \leq \ell^{-\log\ell}. \]
Since $h(1/2 + x) \geq 1 - 4x^2$ for any $x\in (-1/2,1/2)$, we then derive:
\[ \E_{g\sim G}\big[ H^{(g)}(V_1|\bY=\by)\big] = \E_{g\sim G}\hspace{-3pt}\left[h\hspace{-3pt}\left(\dfrac{\P\nolimits^{(g)}(V_1=0,\bY=\by)}{\P\nolimits^{(g)}(\bY=\by)}\right)\hspace{-2pt}\right]\hspace{-1.5pt}\geq \hspace{-1.3pt} (1 - 4\ell^{-2\log\ell}) (1- 4\ell^{-2\log\ell}) \geq 1 - 8\ell^{-2\log\ell}. \]
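The elementary inequality $h(1/2+x) \geq 1 - 4x^2$ used in this last step can be spot-checked on a grid (a numerical sanity check under an assumed grid resolution, not a proof):

```python
import math

def min_entropy_quadratic_gap(n=2000):
    """Minimum of h(1/2 + x) - (1 - 4 x^2) over the grid x = -1/2 + i/n,
    i = 1, ..., n-1; the inequality predicts this minimum is >= 0."""
    def h2(t):
        return 0.0 if t <= 0.0 or t >= 1.0 else -t * math.log2(t) - (1 - t) * math.log2(1 - t)
    return min(h2(0.5 + (-0.5 + i / n)) - (1 - 4 * (-0.5 + i / n) ** 2)
               for i in range(1, n))
```

The gap vanishes only at $x = 0$ (and at the endpoints $x = \pm 1/2$), so an odd grid size yields a strictly positive minimum.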
\medskip \noindent{\bf Concentration of entropy.}\quad
We are now ready to plug this into \eqref{entr_exp_weights_only_typical}:
\begin{align}
\E_{g\sim G}\big[H^{(g)}(V_1|\bY) \big] &\geq (1 - 8\ell^{-2\log\ell})\sum_{|wt(\by) - \ell p| \leq 2\sqrt{\ell}\log\ell} p^{wt(\by)}(1-p)^{{\ell}-wt(\by)} \nonumber\\
&= (1 - 8\ell^{-2\log\ell})\sum_{|d - \ell p| \leq 2\sqrt{\ell}\log\ell} \binom{\ell}{d}p^d(1-p)^{\ell-d} \nonumber\\
&\geq (1 - 8\ell^{-2\log\ell})(1-2\ell^{-2\log\ell}) \nonumber\\
&\geq 1 - 10\ell^{-2\log\ell}. \label{eq:bound_exp_entropy}
\end{align}
Finally, using the fact that $H^{(g)}(V_1|\bY) \leq 1$, Markov's inequality, and \eqref{eq:bound_exp_entropy}, we get
\begin{equation}
\P_{g\sim G}\hspace{-1pt}\big[H^{(g)}(V_1|\bY) \leq 1 - \ell^{-\log\ell} \big] = \P_{g\sim G}\hspace{-1pt}\big[ 1 - H^{(g)}(V_1|\bY) \geq \ell^{-\log\ell} \big] \leq \dfrac{ \E\limits_{g\sim G}\big[1 - H^{(g)}(V_1|\bY)\big]}{\ell^{-\log\ell}} \leq 10\ell^{-\log\ell}.
\end{equation}
Thus we conclude that with probability at least $1 - 10\ell^{-\log\ell}$ over the choice of the kernel $G$ it holds that $H(V_1\,|\,\bY) \geq 1 - \ell^{-\log\ell}$ when $k \geq \ell(1-h(p)) + 8\sqrt{\ell}\log^2\ell$ and the underlying channel is BSC. This completes the proof of Theorem~\ref{thm:over:BSC_converse}, which is a version of Theorem~\ref{thm:converse_Shannon_BMS} for the BSC case.
\end{proof}
\section{Strong converse for BMS channel}
\label{sec:bit-decoding}
To make this section completely self-contained, we restate the theorem here:
\converseShannon*
\subsection{Bounded alphabet size}
\label{sec:BMS_large_alphabet}
This section is devoted to proving Theorem~\ref{thm:converse_Shannon_BMS} for the case when $W\,:\,\{0,1\} \to \mathcal{Y}$ is a BMS channel which has a bounded output alphabet size, specifically we consider $|\mathcal{Y}| \leq 2\sqrt{\ell}$.
We will use the fact that any BMS channel can be viewed as a convex combination of BSCs (see for example \cite{Land, Korada_thesis}), and generalize the ideas of the previous section. Namely, think of the channel $W$ as follows: it has $m$ underlying BSC subchannels $W^{(1)}, W^{(2)}, \dots, W^{(m)}$. On any input, $W$ randomly chooses which subchannel to use, with probabilities $q_1, q_2, \dots, q_m$, respectively. The subchannel $W^{(j)}$ has crossover probability $p_j$, and without loss of generality $0 \leq p_1 \leq p_2 \leq\dots\leq p_m \leq \frac12$. The subchannel $W^{(j)}$ has two possible output symbols $z^{(0)}_j$ and $z^{(1)}_j$, corresponding to $0$ and $1$, respectively (i.e., $0$ goes to $z^{(0)}_j$ with probability $1-p_j$, or to $z^{(1)}_j$ with probability $p_j$ under $W^{(j)}$). Then the whole output alphabet is $\mathcal{Y} = \{z^{(0)}_1, z^{(1)}_1, z^{(0)}_2, z^{(1)}_2, \dots, z^{(0)}_m, z^{(1)}_m\}$, $|\mathcal{Y}| = 2m \leq 2\sqrt{\ell}$.
\begin{remark}
Above we ignored the case when some of the subchannels have only one output symbol (i.e., BEC-like subchannels); see~\cite[Lemma 4]{Tal_Vardy} for a proof that we can do this without loss of generality.
\end{remark}
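Concretely, $H(W) = \sum_{j} q_j\, h(p_j)$ for such a mixture, and the pairs $(q_j, p_j)$ can be read off from the transition probabilities of each symbol pair. A minimal sketch, assuming the channel is encoded as a list of pairs $\big(W(z^{(0)}_j\mid 0),\, W(z^{(1)}_j\mid 0)\big)$ (an illustrative encoding, not notation used elsewhere in the paper):

```python
import math

def h2(x):
    """Binary entropy in bits."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bsc_decomposition(pairs):
    """pairs[j] = (W(z_j^0 | 0), W(z_j^1 | 0)); by symmetry W(z_j^0 | 1) = W(z_j^1 | 0).
    Returns the list of (q_j, p_j): weight and crossover probability of each BSC subchannel."""
    return [(a + b, min(a, b) / (a + b)) for a, b in pairs]

def channel_entropy_two_ways(pairs):
    """H(W) = H(X|Y) under uniform input, via the BSC mixture and directly."""
    via_mixture = sum(q * h2(p) for q, p in bsc_decomposition(pairs))
    direct = sum((a + b) / 2 * h2(a / (a + b)) + (a + b) / 2 * h2(b / (a + b))
                 for a, b in pairs)  # each output symbol has P(y) = (a+b)/2
    return via_mixture, direct
```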
\vspace{0.3cm}
\noindent\textbf{Notations and settings.}\quad In this section, the expectation is only taken over the kernel $g\sim G$, so we sometimes omit the subscript. As in the BSC case, by $\P\nolimits^{(g)}[\cdot]$ and $H^{(g)}(\cdot)$ we denote the probability and entropy over the randomness of the channel and the message only, \emph{for a fixed kernel $g$}.
For any possible output $\by\in\mathcal{Y}^\ell$ we denote by $d_i$ the number of symbols from $\{z^{(0)}_i, z^{(1)}_i\}$ it has (i.e. the number of uses of the $W^{(i)}$ subchannel), so $\sum_{i=1}^md_i = \ell$. Let also $t_i$ be the number of symbols $z^{(1)}_i$ in $\by$. Then
\begin{equation}
\label{def_prob_subchannels}
\P[\bY=\by|\bV=\mathbi{0}]= \prod_{i=1}^mq_i^{d_i}p_i^{t_i}(1-p_i)^{d_i-t_i}.
\end{equation}
For this case of bounded output alphabet size, we will consider the above-capacity regime when $k \geq \ell(1-H(W)) + 13\ell^{1/2}\log^3\ell$ (note that this is made intentionally weaker than the condition in Theorem~\ref{thm:converse_Shannon_BMS}).
\vspace{0.3cm}
We will follow the same blueprint as in the proof for the BSC from Section~\ref{sect:outline}; however, all the technicalities along the way are going to be more challenging. In particular, while we were dealing with one binomial distribution in Section~\ref{sec:BSC_converse}, here we will face a multinomial distribution of $(d_1, d_2,\dots,d_m)$ as a choice of which subchannels to use, as well as binomial distributions $t_i \sim \text{Binom}(d_i, p_i)$ which correspond to ``flips'' within one subchannel.
\vspace{0.3cm}
\begin{proof}[Proof of Theorem~\ref{thm:converse_Shannon_BMS}]
As in the BSC case, we are going to lower bound the expectation of $H^{(g)}(V_1|\bY)$ and use Markov's inequality afterwards.
\medskip \noindent{\bf Restrict to zero-input.}\quad We use Lemma~\ref{typical_entropy_BSC} to write
\begin{equation}
\label{mult_entropy_formula}
\E_{g\sim G}\big[H^{(g)}(V_1|\bY)\big] = \sum_{\by\in\mathcal{Y}^{\ell}}\P[\bY=\by|\bV=\mathbi{0}]\E_{g\sim G}\big[H^{(g)}(V_1|\bY=\by)\big].
\end{equation}
Notice that there is no dependence of $\P[\bY=\by|\bV=\mathbi{0}]$ on the kernel $g$, since the output for the zero-input depends only on the randomness of the channel.
\paragraph*{Typical output set}
As for the binary case, we would like to consider the set of ``typical'' outputs (for input $\mathbi{0}$) from $\mathcal{Y}^{\ell}$. We define $\by\in\mathcal{Y}^{\ell}$ to be typical if
\begin{align}
\label{close_entropy_sum} \sum_{i=1}^m (\ell\cdot q_i - d_i)h(p_i) &\leq 2\sqrt{\ell}\log\ell,\\
\label{close_coordinates_sum} \sum_{i=1}^m (p_id_i - t_i)\log\left(\dfrac{1-p_i}{p_i}\right) &\leq 3\sqrt{\ell}\log^2\ell.
\end{align}
By typicality of this set we mean the following
\begin{lem}
\label{typical_lem}
$\sum\limits_{\by \text{ typical}}\P[\bY=\by|\bV=\emph{\mathbi{0}}] \geq 1 - \ell^{-\log\ell}$. In other words, on input $\emph{\mathbi{0}}$, the probability to get the output string which is not typical is at most $\ell^{-\log\ell}$.
\end{lem}
We defer the proof of this lemma until Section~\ref{typical_proof_sect}, after we see why we are actually interested in these conditions on $\by$.
\subsubsection{Fix a typical output}
For this part, let us fix one $\by\in\mathcal{Y}^{\ell}$ which is typical and prove that $\E_g\big[H^{(g)}(V_1|\bY=\by)\big]$ is very close to~$1$. We have
\begin{equation}
\label{mult_entropy_ratio}
H^{(g)}(V_1|\bY=\by) = h\left(\dfrac{\P\nolimits^{(g)}\big[V_1=0, \bY = \by\big]}{\P\nolimits^{(g)}\big[\bY = \by\big]}\right).
\end{equation}
Similarly to the BSC case, we will prove that both the denominator and the numerator of the fraction inside the entropy function above are tightly concentrated around their means. The arguments for the denominator and the numerator are almost exactly the same, so we only consider the denominator for now.
\subsubsection*{Concentration for $\P\nolimits^{(g)}\big[\bY = \by\big]$}
Define now the shifted weight distributions for the codebook $g$ with respect to $m$ different underlying BSC channels. First, for any $x\in\{0,1\}^{\ell}$ and $i = 1, 2, \dots, m$, define
\[ \text{dist}_i(x,\by) = \lvert\{ \text{positions } j \text{ such that } (x_j=0, \by_j=z^{(1)}_i)\text{ or }(x_j=1, \by_j=z^{(0)}_i) \}\rvert. \]
That is, if you send $x$ through $W^{\ell}$ and receive $\by$, then dist$_i(x,\by)$ is just the number of coordinates where the subchannel $i$ was chosen, and the bit was flipped.
In our setting, we now need to think of the ``distance'' between a binary vector $x\in\{0,1\}^{\ell}$ and $\by$ as an integer vector $\bs = (s_1, s_2, \dots, s_m)$ with $0\leq s_i \leq d_i$ for $i\in[m]$, where $s_i = \text{dist}_i(x,\by)$ is the number of flips that occurred in the uses of the $i^{\text{th}}$ subchannel when going from $x$ to $\by$. In other words, $s_i$ is just the Hamming distance between the parts of $x$ and $\by$ which correspond to the coordinates $j$ where $\by_j$ is $z_i^{(0)}$ or $z_i^{(1)}$ (coming from the subchannel $W^{(i)}$).
Now we can formally define the shifted weight distributions for our fixed typical $\by$. For an integer vector $\bs = (s_1, s_2, \dots, s_m)$ with $0\leq s_i \leq d_i$, define
\[ B_g(\bs, \by) = \Big\lvert \big\{ \bv\in\{0,1\}^k\setminus\mathbi{0} \ :\ \text{dist}_i(\bv\cdot g, \by) = s_i\quad \text{for } i=1, 2,\dots, m \big\} \Big\rvert. \]
We can express $\P\nolimits^{(g)}[\bY=\by]$ in terms of $B_{g}(\bs,\by)$ as follows:
\begin{equation}
\label{prob_of_y}
2^k\cdot\P\nolimits^{(g)}[\bY=\by] = \P[\bY=\by|\bV=\mathbi{0}] + \sum_{s_1,s_2, \dots, s_m=0}^{d_1, d_2, \dots, d_m}B_g(\bs,\by)\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i},
\end{equation}
because $\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i}$ is exactly the probability to get output $\by$ if a $\bv$ is sent that satisfies $\text{dist}_i(\bv\cdot g, \by) = s_i\quad \text{for } i=1, 2,\dots, m$.
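This bookkeeping can be verified directly at toy sizes: summing the per-coordinate channel probabilities over all messages reproduces the grouped form on the RHS. A sketch, assuming the output $\by$ is encoded as a list of pairs (subchannel index, received bit) — an illustrative encoding only:

```python
import itertools

def output_prob_identity(g, y, q, p):
    """Return both sides of  2^k P(Y=y) = P(Y=y|0) + sum over v != 0 of
    prod_i q_i^{d_i} p_i^{s_i} (1-p_i)^{d_i-s_i}  (grouping v by s gives B_g(s,y)).
    y is a list of (subchannel index, bit); g has k binary rows of length ell."""
    k, ell, m = len(g), len(g[0]), len(q)
    d = [sum(1 for i, _ in y if i == j) for j in range(m)]

    def prob_y_given_x(x):  # per-coordinate product over the channel uses
        pr = 1.0
        for (i, b), xj in zip(y, x):
            pr *= q[i] * (p[i] if b != xj else 1 - p[i])
        return pr

    def codeword(v):
        return tuple(sum(v[i] * g[i][j] for i in range(k)) % 2 for j in range(ell))

    lhs = sum(prob_y_given_x(codeword(v)) for v in itertools.product((0, 1), repeat=k))
    rhs = prob_y_given_x((0,) * ell)
    for v in itertools.product((0, 1), repeat=k):
        if any(v):
            x = codeword(v)
            s = [sum(1 for (i, b), xj in zip(y, x) if i == j and b != xj)
                 for j in range(m)]
            term = 1.0
            for i in range(m):
                term *= q[i]**d[i] * p[i]**s[i] * (1 - p[i])**(d[i] - s[i])
            rhs += term
    return lhs, rhs
```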
We have:
\begin{equation}
\label{shifted_weight_dist}
B_g(\bs, \by) = \sum_{\bv\not=\mathbi{0}}\mathbbm{1}\big[\text{dist}_i(\bv\cdot g, \by) = s_i, \quad \forall i=1, 2, \dots, m\big].
\end{equation}
For a fixed $\bv$ but a uniformly random binary matrix $g$, the vector $\bv\cdot g$ is a uniformly random vector from $\{0,1\}^{\ell}$. Now, the number of vectors $x$ in $\{0,1\}^{\ell}$ such that $\text{dist}_i(x, \by) = s_i\ \ \forall i=1, 2, \dots, m$ is $\prod_{i=1}^m\binom{d_i}{s_i}$, since for any $i=1, 2,\dots, m$ we need to choose which $s_i$ of the $d_i$ coordinates that used the subchannel $W^{(i)}$ got flipped. So
\[\underset{g\sim G}{\P} \big[ \text{dist}_i(\bv\cdot g,\by) = s_i, \ \ \forall i=1, 2,\dots, m\big] = 2^{-\ell}\prod_{i=1}^m\binom{d_i}{s_i}. \]
Then for the expectation of the shifted weight distributions we obtain
\begin{equation}
\label{eq:ExpBgsy}
\E_{g\sim G} [B_g(\bs, \by)] = \sum_{\bv\not=\mathbi{0}} \underset{g\sim G}{\P} \big[ \text{dist}_i(\bv\cdot g,\by) = s_i, \ \ \forall i=1, 2,\dots, m\big] = \dfrac{2^k-1}{2^{\ell}}\prod_{i=1}^m\binom{d_i}{s_i}.
\end{equation}
Then for the expectation of the summation in the RHS of \eqref{prob_of_y} we have:
\begin{align}
E \coloneqq& \E_{g\sim G}\left[\sum_{s_1,s_2, \dots, s_m=0}^{d_1, d_2, \dots, d_m}B_g(\bs,\by)\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i}\right] \nonumber\\
=& \prod_{i=1}^mq_i^{d_i} \sum_{s_1,s_2, \dots, s_m=0}^{d_1, d_2, \dots, d_m}\left(\E_{g\sim G}\big[B_g(\bs, \by)\big] \prod_{i=1}^mp_i^{s_i}(1-p_i)^{d_i-s_i}\right) \nonumber\\
=& \dfrac{2^k-1}{2^{\ell}}\prod_{i=1}^mq_i^{d_i}\cdot\sum_{s_1,s_2, \dots, s_m=0}^{d_1, d_2, \dots, d_m}\prod_{i=1}^m\binom{d_i}{s_i}p_i^{s_i}(1-p_i)^{d_i-s_i} \nonumber\\
=& \dfrac{2^k-1}{2^{\ell}}\prod_{i=1}^mq_i^{d_i}\cdot\prod_{i=1}^m\left(\underbrace{\sum_{s_i=0}^{d_i} \binom{d_i}{s_i}p_i^{s_i}(1-p_i)^{d_i-s_i}}_{=1}\right) = \dfrac{2^k-1}{2^{\ell}}\prod_{i=1}^mq_i^{d_i}. \label{exp_of_sum}
\end{align}
Next, by \eqref{shifted_weight_dist} we can see that $B_g(\bs, \by)$ is a sum of pairwise independent indicator random variables, since $\bv_1\cdot g$ and $\bv_2\cdot g$ are independent for distinct and non-zero $\bv_1, \bv_2$. Therefore
\begin{equation}
\label{var-exp}
\underset{g\sim G}{\Var}[B_g(\bs, \by)] \leq \E_{g\sim G}[B_g(\bs,\by)].
\end{equation}
\subsubsection*{Splitting the summation in \eqref{prob_of_y}}
We will split the summation in \eqref{prob_of_y} into two parts: for the first part, we will show that the expectation of each term is very large, and then use Chebyshev's inequality to argue that each term is concentrated around its expectation. For the second part, its expectation is going to be very small, and Markov's inequality will imply that this part also does not deviate from its expectation too much with high probability (over the random kernel $g\sim G$). Putting these two arguments together, we will obtain that the sum on the RHS of \eqref{prob_of_y} is concentrated around its mean.
To proceed, define a distribution $\Omega = \text{Binom}(d_1, p_1)\times\text{Binom}(d_2, p_2)\times\dots\times\text{Binom}(d_m, p_m)$, and consider a random vector $\chi\sim\Omega$. In other words, $\chi$ has $m$ independent coordinates $\chi_i,\ i=1,\dots,m$, where $\chi_i$ is a binomial random variable with parameters $d_i$ and $p_i$. Note that by definition, for any integer vector $\bs = (s_1, s_2, \dots, s_m)$ with $0\leq s_i \leq d_i$ for every $i$, we have
\[ \P_{\chi}[ \chi=\bs ] = \prod_{i=1}^m\P_{\chi}[\chi_i=s_i] = \prod_{i=1}^m\binom{d_i}{s_i}p_i^{s_i}(1-p_i)^{d_i-s_i}. \]
Let now $\mathcal{T}$ be some subset of $\mathcal{S} = [0:d_1]\times[0:d_2]\times\dots\times[0:d_m]$, where $[0:d] = \{0, 1, 2, \dots, (d-1), d\}$ for an integer $d$. Let also $\mathcal{N}$ be $\mathcal{S}\setminus \mathcal{T}$. Then we can write the summation on the RHS of \eqref{prob_of_y} as
\begin{equation}
\label{split_sum}
\begin{aligned}
\hspace{-5pt}\sum_{\bs\in\mathcal{S}} \hspace{-1.5pt} B_g(\bs,\by)\hspace{-3pt}\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1\hspace{-2pt}-\hspace{-2pt}p_i)^{d_i-s_i} = \sum_{s\in \mathcal{T}}\hspace{-1.5pt}B_g(\bs,\by)\hspace{-3pt}\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1\hspace{-2pt}-\hspace{-2pt}p_i)^{d_i-s_i} +\hspace{-3pt}\sum_{s\in \mathcal{N}}\hspace{-1.5pt}B_g(\bs,\by)\hspace{-3pt}\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1\hspace{-2pt}-\hspace{-2pt}p_i)^{d_i-s_i}.
\end{aligned}
\end{equation}
In the next section we describe how to choose $\mathcal{T}$.
\paragraph{Substantial part}
\label{substantial_sect}
Exactly as in the binary case, using \eqref{var-exp} and Chebyshev's inequality, we have for any $s\in \mathcal{S}$
\begin{multline}
\label{chebyshev_multi}
\P_{g\sim G}\bigg[\Big\lvert B_g(\bs,\by) - \E[B_g(\bs,\by)]\Big\lvert \geq \ell^{-2\log\ell}\E[B_g(\bs,\by)]\bigg] \leq \dfrac{\Var[B_g(\bs,\by)]}{\ell^{-4\log\ell}\E^2[B_g(\bs,\by)]} \\
\leq \dfrac{\ell^{4\log\ell}}{\E_{g\sim G}[B_g(\bs,\by)]}
\leq \ell^{4\log\ell}\,\dfrac{2^{\ell-k+1}}{\prod_{i=1}^m\binom{d_i}{s_i}}.
\end{multline}
We need the above to be upper bounded by $\ell^{-2\sqrt{\ell}}$ to be able to use a union bound over all ${\bs \in \mathcal{T} \subset \mathcal{S}}$, since $|\mathcal{S}| \leq \ell^{O(\sqrt{\ell})}$. Recall that we have $k \geq \ell(1-H(W)) + 13\ell^{1/2}\log^3\ell$; then, using the lower bound for binomial coefficients from Fact~\ref{binom_lemma}, we obtain for the RHS of \eqref{chebyshev_multi}
\begin{equation}
\label{upper_bound}
\ell^{4\log\ell}\,\dfrac{2^{\ell-k+1}}{\prod_{i=1}^m\binom{d_i}{s_i}} \leq \ell^{4\log\ell}\left(\prod_{i=1}^m\sqrt{2d_i}\right)2^{\ell H(W) - \sum_{i=1}^md_ih\left(\frac{s_i}{d_i}\right) - 13\ell^{1/2}\log^3\ell}.
\end{equation}
We want to show that the term $2^{-\Omega(\ell^{1/2}\log^3\ell)}$ is the dominant one. First, it is easy to see that $\ell^{4\log\ell} = 2^{4\log^2\ell} \leq 2^{\ell^{1/2}\log^3\ell}$. To deal with the factor $\prod_{i=1}^m\sqrt{2d_i}$, recall that $\sum_{i=1}^md_i = \ell$ and $m \leq \sqrt{\ell}$ in this section, then AM-GM inequality gives us
\begin{equation}
\label{prod_of_sqrt}
\prod_{i=1}^m\sqrt{2d_i} \leq 2^{m/2}\cdot \sqrt{\left(\dfrac{\sum_{i=1}^md_i}{m}\right)^m} = \left(\dfrac{2\ell}{m}\right)^{m/2} \leq (2\sqrt{\ell})^{\sqrt{\ell}/2} \leq 2^{\ell^{1/2}\log^3\ell},
\end{equation}
where we used that $(a/x)^x$ is increasing for $x \leq a/e$. For the last factor of~\eqref{upper_bound} we formulate a lemma.
\begin{lem}
\label{multi_sum_lem} \sloppy There exists a set $\mathcal{T} \subseteq \mathcal{S} = [0:d_1]\times[0:d_2]\times\dots\times[0:d_m]$, such that ${\P\limits_{\chi\sim\Omega}[\chi\in\mathcal{T}] \geq 1 - \ell^{-\log\ell/4}}$, and for any $\bs\in\mathcal{T}$ it holds that
\begin{equation}
\label{multi_sum_lem_eq}
\ell H(W) - \sum_{i=1}^{m}d_ih\left(\frac{s_i}{d_i}\right) \leq 11\,\ell^{1/2}\log^3\ell.
\end{equation}
(Here $\Omega = \text{Binom}(d_1, p_1)\times\text{Binom}(d_2, p_2)\times\dots\times\text{Binom}(d_m, p_m)$.)
\end{lem}
\begin{proof}
Rearrange the above summation as follows:
\begin{equation}
\begin{aligned}
\ell H(W) - \sum_{i=1}^{m}d_ih\left(\frac{s_i}{d_i}\right) =& \sum_{i=1}^m\left(\ell q_ih(p_i) - d_ih\left(\frac{s_i}{d_i}\right)\right) \\
=& \sum_{i=1}^m\big(\ell q_i - d_i\big)h(p_i) + \sum_{i=1}^md_i\left(h(p_i) - h\left(\frac{s_i}{d_i}\right)\right).
\end{aligned}
\end{equation}
Now recall that $\by$ was chosen to be typical, so inequality \eqref{close_entropy_sum} from the definition of typicality already bounds the first part of the above sum by $\ell^{1/2}\log^3\ell$.
To deal with the second part, $\sum_{i=1}^md_i\left(h(p_i) - h\left(\frac{s_i}{d_i}\right)\right)$, we use a separate Lemma~\ref{two_concentrations_lem}, since an almost identical proof works for another concentration inequality we will need later. Lemma~\ref{two_concentrations_lem} claims that $\sum_{i=1}^md_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right) \leq 10\ell^{1/2}\log^3\ell$ with probability at least $1 - \ell^{-\log\ell/4}$ over $\chi\sim \Omega$. The result of the current lemma then follows by taking $\mathcal{T}$ to be the subset of $\mathcal{S}$ where this inequality holds.
\end{proof}
Fix now a set $\mathcal{T}\subseteq\mathcal{S}$ as in Lemma~\ref{multi_sum_lem}. Using the arguments above, we conclude that the RHS of \eqref{upper_bound}, and therefore of \eqref{chebyshev_multi}, is bounded above by $2^{-2\ell^{1/2}\log^3\ell}$ for any $\bs\in\mathcal{T}$. Thus we can apply a union bound over $\bs\in\mathcal{T}$ in \eqref{chebyshev_multi}, since $|\mathcal{T}|\leq |\mathcal{S}| = \prod_{i=1}^m(d_i+1) \leq \left(2\sqrt{\ell}\right)^{\sqrt{\ell}} \leq 2^{\ell^{1/2}\log^3\ell}$, similarly to \eqref{prod_of_sqrt}. Therefore, we derive
\begin{cor}
\label{substantial_cor}
With probability at least $1 - 2^{-\ell^{1/2}\log^3\ell}$ (over the random kernel $g\sim G$) it holds \textit{simultaneously} for all $\bs\in\mathcal{T}$ that
\begin{equation}
\label{substantial_part_final}
\Big\lvert B_g(\bs,\by) - \E[B_g(\bs,\by)] \Big\rvert \leq \ell^{-2\log\ell}\E[B_g(\bs,\by)].
\end{equation}
\end{cor}
Moreover, the set $\mathcal{N} = \mathcal{S}\setminus\mathcal{T}$ satisfies $\P_{\chi\sim\Omega}[\chi\in\mathcal{N}] \leq \ell^{-\log\ell/4}$, which we will use in the next section to bound the second part of \eqref{split_sum}.
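The concentration behind Corollary~\ref{substantial_cor} is nothing more than the Chebyshev step in \eqref{chebyshev_multi}. As an illustration (a hypothetical toy distribution standing in for $B_g(\bs,\by)$, not the actual quantity), the inequality $\P[|X - \E X| \geq \varepsilon\,\E X] \leq \Var[X]/(\varepsilon^2\,\E^2[X])$ can be checked numerically:

```python
import random

def chebyshev_check(samples, eps):
    # Chebyshev: P[|X - E X| >= eps * E X] <= Var[X] / (eps^2 * E^2[X]).
    # Both sides are computed for the empirical distribution of `samples`,
    # for which Chebyshev's inequality holds exactly.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    rate = sum(1 for x in samples if abs(x - mean) >= eps * mean) / n
    return rate, var / (eps ** 2 * mean ** 2)

random.seed(0)
# toy stand-in: a count of successes among many independent trials
samples = [sum(random.random() < 0.5 for _ in range(200)) for _ in range(5000)]
rate, bound = chebyshev_check(samples, eps=0.1)
```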
\paragraph{Negligible part}
\label{neglig_sect}
Denote for convenience $Z_g(\by) = \sum_{\bs\in\mathcal{N}}B_g(\bs,\by)\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i}$, the second part of the RHS of \eqref{split_sum}. Recall the value of $\E_{g\sim G}[B_g(\bs,\bY)]$ from~\eqref{eq:ExpBgsy} and the notation $E$ from~\eqref{exp_of_sum}. For the expectation of $Z_g(\by)$ we derive
\begin{equation}
\begin{aligned}
\E_{g\sim G}\left[Z_g(\by)\right] &= \prod_{i=1}^mq_i^{d_i} \sum_{\bs\in\mathcal{N}}\left(\E_{g\sim G}\big[B_g(\bs, \by)\big] \prod_{i=1}^mp_i^{s_i}(1-p_i)^{d_i-s_i}\right) \\
&= \dfrac{2^k-1}{2^{\ell}}\prod_{i=1}^mq_i^{d_i}\cdot\sum_{\bs\in\mathcal{N}}\prod_{i=1}^m\binom{d_i}{s_i}p_i^{s_i}(1-p_i)^{d_i-s_i} \\
&= E\cdot\P_{\chi\sim\Omega}\big[\chi\in\mathcal{N}\big] \\
&\leq E\cdot\ell^{-\log\ell/4}.
\end{aligned}
\end{equation}
Thus Markov's inequality implies
\begin{cor}
\label{negligible_cor}
With probability at least $1 - \ell^{-\log\ell/8}$ (over the random kernel $g\sim G$) it holds
\begin{equation}
\label{negligible_part_final}
Z_g(\by) \leq \ell^{\log\ell/8}\E[Z_g(\by)] \leq E\cdot \ell^{-\log\ell/8}.
\end{equation}
\end{cor}
\paragraph{Putting it together}
Combining Corollaries \ref{substantial_cor} and \ref{negligible_cor} via a union bound, we derive
\begin{cor}
\label{together_cor}
With probability at least $1 - \ell^{-\log\ell/8} - 2^{-\ell^{1/2}\log^3\ell} \geq 1 - 2\ell^{-\log\ell/8}$ over the randomness of the kernel $g\sim G$, the following hold simultaneously:
\begin{equation}
\label{together_simul}
\begin{aligned}
&\Big\lvert B_g(\bs,\by) - \E[B_g(\bs,\by)]\Big\rvert \leq \ell^{-2\log\ell}\E[B_g(\bs,\by)], \qquad\qquad \text{for all } \bs \in \mathcal{T},\\
&\sum_{\bs\in\mathcal{N}}B_g(\bs,\by)\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i} \leq E\cdot \ell^{-\log\ell/8}.
\end{aligned}
\end{equation}
\end{cor}
We are finally ready to formulate the concentration result we need. The following lemma is an analogue of Lemma \ref{binary_main_concentration_lem} from the BSC case:
\begin{lem}
\label{mult_main_concentration_lem}
With probability at least $1 - 2\ell^{-\log\ell/8}$ over the choice of $g\sim G$ it holds
\begin{equation}
\label{mult_weight_concentration}
\left|\sum_{\bs\in\mathcal{S}} B_g(\bs, \by)\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i} - E \right| \leq 2\ell^{-\log\ell/8}\cdot E.
\end{equation}
\end{lem}
\begin{proof}
Let us consider a kernel $g$ such that the conditions \eqref{together_simul} hold, which happens with probability at least $1 - 2\ell^{-\log\ell/8}$ according to Corollary \ref{together_cor}. Then
\begin{equation}
\begin{aligned}
\sum_{\bs\in\mathcal{S}} B_g(\bs, \by)\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i} &\geq \sum_{\bs\in\mathcal{T}} B_g(\bs, \by)\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i}\\
&\geq \sum_{\bs\in\mathcal{T}} \left(1 - \ell^{-2\log\ell}\right)\E[B_g(\bs, \by)]\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i}\\
&= \left(1 - \ell^{-2\log\ell}\right)\dfrac{2^k-1}{2^{\ell}}\prod_{i=1}^mq_i^{d_i}\cdot\sum_{\bs\in\mathcal{T}}\prod_{i=1}^m\binom{d_i}{s_i}p_i^{s_i}(1-p_i)^{d_i-s_i}\\
&= \left(1 - \ell^{-2\log\ell}\right)\cdot E\cdot\P_{\chi\sim\Omega}\big[\chi\in\mathcal{T}\big]\\
& \geq \left(1 - \ell^{-2\log\ell}\right)\left(1 - \ell^{-\log\ell/8}\right)E\\
& \geq \left(1 - 2\ell^{-\log\ell/8}\right)E.
\end{aligned}
\end{equation}
For the other direction, we derive for such $g$
\begin{align}
\sum_{\bs\in\mathcal{S}} B_g(\bs, \by)&\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i} =
\left(\sum_{\bs\in\mathcal{T}} + \sum_{\bs\in\mathcal{N}}\right) B_g(\bs, \by)\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i} \\
&\overset{\eqref{together_simul}}{\leq} \sum_{\bs\in\mathcal{T}} \left(1 + \ell^{-2\log\ell}\right)\E[B_g(\bs, \by)]\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i} + E\cdot \ell^{-\log\ell/8}\\
&\leq \left(1 + \ell^{-2\log\ell}\right)\sum_{\bs\in\mathcal{S}} \E[B_g(\bs, \by)]\prod_{i=1}^mq_i^{d_i}p_i^{s_i}(1-p_i)^{d_i-s_i} + E\cdot \ell^{-\log\ell/8}\\
&= \left(1 + \ell^{-2\log\ell} + \ell^{-\log\ell/8}\right)E\\
&\leq \left(1 + 2\ell^{-\log\ell/8}\right)E. \qedhere
\end{align}
\end{proof}
\subsubsection{Concentration of entropy}
We can now get a tight concentration for $\P\nolimits^{(g)}[\bY = \by]$ using the relation~\eqref{prob_of_y}. We already showed that the sum on the RHS of \eqref{prob_of_y} is tightly concentrated around its expectation, so it only remains to show that $\P[\bY = \by|\bV=\mathbi{0}]$ is tiny compared to $E$. Here we use that $\by$ was picked to be ``typical'' from the start, so that~\eqref{close_entropy_sum} and~\eqref{close_coordinates_sum} hold, and that $k\geq \ell(1-H(W)) +13\ell^{1/2}\log^3\ell$ in the above-capacity regime. Recall~\eqref{def_prob_subchannels}, as well as the conditions~\eqref{close_entropy_sum} and~\eqref{close_coordinates_sum} on $\by$ being typical. We derive
\begingroup
\allowdisplaybreaks
\begin{align}
\P[\bY = \by|\bV = \mathbi{0}] &= \prod_{i=1}^mq_i^{d_i}p_i^{t_i}(1-p_i)^{d_i-t_i} = \prod_{i=1}^m\left[q_i^{d_i}\cdot p_i^{d_ip_i}(1-p_i)^{d_i(1-p_i)}\cdot\left(\dfrac{1-p_i}{p_i}\right)^{d_ip_i-t_i}\right] \\
&= \prod_{i=1}^mq_i^{d_i} \cdot \prod_{i=1}^m2^{-d_ih(p_i)}\cdot\prod_{i=1}^m2^{(d_ip_i-t_i)\log\left(\frac{1-p_i}{p_i}\right)}\\
&= \prod_{i=1}^mq_i^{d_i} \cdot 2^{\sum_{i=1}^m\left(-\ell q_i h(p_i) + (\ell q_i-d_i)h(p_i)\right)}\cdot 2^{\sum_{i=1}^m(d_ip_i - t_i)\log\left(\frac{1-p_i}{p_i}\right)} \\
&\hspace{-12pt}\overset{\eqref{close_entropy_sum}, \eqref{close_coordinates_sum}}{\leq} \prod_{i=1}^mq_i^{d_i}\cdot2^{-\ell H(W) + 2\ell^{1/2}\log\ell + 3\ell^{1/2}\log^2\ell} \leq \prod_{i=1}^mq_i^{d_i}\cdot \dfrac{2^{k}-1}{2^{\ell}}\cdot\ell^{-\log\ell} = E\cdot \ell^{-\log\ell}.
\end{align}
\endgroup
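The second equality in the derivation above rests on the per-subchannel identity $p^{t}(1-p)^{d-t} = 2^{-d\,h(p)}\cdot 2^{(dp-t)\log\frac{1-p}{p}}$ (the common factor $\prod_iq_i^{d_i}$ is unaffected). A small numerical check of this identity, with arbitrary illustrative parameters of our choosing:

```python
import math

def h(p):
    # binary entropy, base-2 logarithm
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def direct(ds, ps, ts):
    # prod_i p_i^{t_i} (1 - p_i)^{d_i - t_i}
    return math.prod(p ** t * (1 - p) ** (d - t) for d, p, t in zip(ds, ps, ts))

def rewritten(ds, ps, ts):
    # 2^{sum_i (-d_i h(p_i) + (d_i p_i - t_i) log((1-p_i)/p_i))}
    expo = sum(-d * h(p) + (d * p - t) * math.log2((1 - p) / p)
               for d, p, t in zip(ds, ps, ts))
    return 2 ** expo

ds, ps, ts = [10, 7, 5], [0.1, 0.25, 0.4], [2, 1, 3]
lhs, rhs = direct(ds, ps, ts), rewritten(ds, ps, ts)
```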
Now, combining this with Lemma \ref{mult_main_concentration_lem}, we obtain a concentration for \eqref{prob_of_y}:
\begin{cor}
\label{mult_final_conc_cor}
With probability at least $1 - 2\ell^{-\log\ell/8}$ over the choice of kernel $g\sim G$ and for any typical $\by$
\begin{equation}
\label{cor_main_concentration}
\left|2^k\cdot\P\nolimits^{(g)}[\bY=\by] - E\right| \leq 3\ell^{-\log\ell/8}\cdot E,
\end{equation}
where $E = \dfrac{2^k-1}{2^{\ell}}\prod\limits_{i=1}^mq_i^{d_i}$.
\end{cor}
Next, completely analogously, we derive the concentration for $\P\nolimits^{(g)}[\bY=\by|V_1=0]$, which corresponds to the numerator inside the entropy in \eqref{mult_entropy_ratio}. The only thing that changes is that the dimension is $k-1$ instead of $k$. We can state
\begin{corbis}{mult_final_conc_cor}
\label{mult_final_conc_cor_prime}
With probability at least $1 - 2\ell^{-\log\ell/8}$ over the choice of kernel $g\sim G$ and for any typical $\by$
\begin{equation}
\label{cor_main_concentration_prime}
\left|2^k\cdot\P\nolimits^{(g)}[V_1=0, \bY=\by] - \widetilde{E}\right| \leq 3\ell^{-\log\ell/8}\cdot \widetilde{E},
\end{equation}
where $\widetilde{E} = \dfrac{2^{k-1}-1}{2^{\ell}}\prod\limits_{i=1}^mq_i^{d_i}$.
\end{corbis}
Combining these two together and skipping the simple calculations, completely analogous to those of the BSC case, we derive
\begin{cor}
With probability at least $1 - 4\ell^{-\log\ell/8}$ over the choice of kernel $g\sim G$ and for any typical $\by$
\begin{equation}
\left|\dfrac{\P\nolimits^{(g)}[V_1=0, \bY=\by]}{\P\nolimits^{(g)}[\bY=\by]} - \dfrac12 \right| \leq \ell^{-\log\ell/9}.
\end{equation}
\end{cor}
Since $h(1/2 + x) \geq 1 - 4x^2$ for any $x\in (-1/2,1/2)$, we then derive for a typical $\by$:
\begin{equation}
\begin{aligned}
\E_g\big[ H^{(g)}(V_1|\bY=\by)\big] = \E_g\left[h\left(\dfrac{\P\nolimits^{(g)}[V_1=0,\bY=\by]}{\P\nolimits^{(g)}[\bY=\by]}\right)\right] &\geq (1 - 4\ell^{-\log\ell/8})\cdot (1- 4\ell^{-\log\ell/9})\\
&\geq 1 - 8\ell^{-\log\ell/9}.
\end{aligned}
\end{equation}
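The elementary inequality $h(1/2+x)\geq 1-4x^2$ used above can be checked numerically on a fine grid (a sanity check, not a proof):

```python
import math

def h(p):
    # binary entropy with the conventions h(0) = h(1) = 0
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# check h(1/2 + x) >= 1 - 4x^2 for x on a grid inside (-1/2, 1/2)
grid = [i / 1000 - 0.5 for i in range(1, 1000)]
ok = all(h(0.5 + x) >= 1 - 4 * x * x - 1e-12 for x in grid)
```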
Then in \eqref{mult_entropy_formula} we have
\begin{equation}
\label{mult_bound_exp_entropy}
\begin{aligned}
\E_g\big[H^{(g)}(V_1|\bY) \big] &= \sum_{\by\in\mathcal{Y}^{\ell}}\P[\bY=\by|\bV=\mathbi{0}]\E_g\big[ H^{(g)}(V_1|\bY=\by)\big]\\
&\geq \sum_{\by \text{ typical}}\P[\bY=\by|\bV=\mathbi{0}]\E_g\big[ H^{(g)}(V_1|\bY=\by)\big]\\
&\geq (1 - \ell^{-\log\ell})\cdot(1 - 8\ell^{-\log\ell/9})\\
&\geq 1 - 9\ell^{-\log\ell/9} \geq 1 - \ell^{-\log\ell/10},
\end{aligned}
\end{equation}
where we used that the probability to get a typical output on a zero input is at least $1 - \ell^{-\log\ell}$ by Lemma \ref{typical_lem}.
Finally, using the fact that $H^{(g)}(V_1|\bY) \leq 1$, Markov's inequality, and \eqref{mult_bound_exp_entropy}, we get
\begin{align*}
\P\limits_{g\sim G}\Big[H^{(g)}(V_1|\bY) \leq 1 - \ell^{-\frac{\log\ell}{20}} \Big]
= \P\big[ 1 - H^{(g)}(V_1|\bY)
\geq \ell^{-\frac{\log\ell}{20}} \big] \leq \dfrac{ \E\big[1 - H^{(g)}(V_1|\bY)\big]}{\ell^{-\log\ell/20}} \leq \ell^{-\log\ell/20}.
\end{align*}
This completes the proof of Theorem~\ref{thm:converse_Shannon_BMS} for the case of a BMS channel with bounded output alphabet size, assuming the typicality Lemma~\ref{typical_lem} and the concentration Lemma~\ref{two_concentrations_lem}, which we used in Lemma~\ref{multi_sum_lem}. We now turn to proving these.
\subsubsection{Proof that the typical set is indeed typical}
\label{typical_proof_sect}
\begin{proof}[Proof of Lemma \ref{typical_lem}]
We start by proving that \eqref{close_entropy_sum} is satisfied with high probability (over the randomness of the channel). Notice that $(d_1, d_2, \dots, d_m)$ is multinomially distributed by construction: for each of the $\ell$ transmitted bits, we independently choose the subchannel $W^{(i)}$ to use with probability $q_i$, for $i = 1, 2,\dots, m$, and $d_i$ is the number of times the subchannel $W^{(i)}$ was chosen. So indeed $(d_1, d_2, \dots, d_m) \sim \text{Mult}(\ell, q_1, q_2, \dots, q_m)$. The crucial property of multinomial random variables we are going to use is \textit{negative association} (\cite{NA_stat}, \cite{NA2}). The (simplified version of the) fact we are going to use about negatively associated random variables can be formulated as follows:
\begin{lem}[\cite{NA_stat}, Property P$_2$]\label{NA_lemma} Let $X_1, X_2,\dots, X_m$ be negatively associated random variables. Then, for every set of $m$ positive monotone non-decreasing functions $f_1,\dots, f_m$ it holds
\begin{equation}
\label{NA_property}
\E\left[\prod_{i=1}^mf_i(X_i)\right] \leq \prod_{i=1}^m \E[f_i(X_i)].
\end{equation}
\end{lem}
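For a small multinomial vector, the property \eqref{NA_property} can be verified by exact enumeration (an illustrative check with parameters chosen by us; the marginals of a multinomial are $\text{Binom}(\ell, q_i)$):

```python
import math
from itertools import product

# exact check of E[prod_i f_i(d_i)] <= prod_i E[f_i(d_i)]
# for a small multinomial vector
ell, q = 6, [0.2, 0.3, 0.5]
f = lambda x: x * x          # nonnegative and non-decreasing on {0,...,ell}

def mult_pmf(c):
    # multinomial pmf; zero unless the counts sum to ell
    if sum(c) != ell:
        return 0.0
    coef = math.factorial(ell) // math.prod(math.factorial(x) for x in c)
    return coef * math.prod(p ** x for p, x in zip(q, c))

lhs = sum(mult_pmf(c) * math.prod(f(x) for x in c)
          for c in product(range(ell + 1), repeat=len(q)))
# marginals are Binom(ell, q_i), so the RHS factorizes
rhs = math.prod(sum(math.comb(ell, k) * p ** k * (1 - p) ** (ell - k) * f(k)
                    for k in range(ell + 1))
                for p in q)
```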
We also use the fact that applying the monotone decreasing functions $g_i(x) = \ell q_i - x$ coordinate-wise to negatively associated random variables again yields negatively associated random variables (\cite{NA2}, Proposition 7). In other words, $(\ell q_1 - d_1, \ell q_2 - d_2, \dots, \ell q_m - d_m)$ are also negatively associated, so we can apply Lemma \ref{NA_lemma} to them.
\begin{sloppypar}
Let us now denote for convenience $\alpha_i = h(p_i)$ for $i=1, 2, \dots, m$, so that $0 \leq \alpha_i \leq 1$. Let also $X = \sum_{i=1}^m(\ell\cdot q_i - d_i)\alpha_i$. We can now start with a simple exponentiation and Markov's inequality: for any $a$ and any $t > 0$
\begin{equation}
\label{Chernoff-type1}
\P[X \geq a] = \P[e^{tX} \geq e^{ta}] \leq e^{-ta}\E\left[e^{tX}\right] = e^{-ta}\E\left[\prod_{i=1}^me^{t\cdot\alpha_i(\ell q_i - d_i)}\right] \leq e^{-ta}\prod_{i=1}^m\E\left[e^{t\cdot\alpha_i(\ell q_i - d_i)}\right],
\end{equation}
where in the last inequality we applied Lemma \ref{NA_lemma} for negatively associated random variables ${(\ell q_1 - d_1, \ell q_2 - d_2, \dots, \ell q_m - d_m)}$, as discussed above, and positive non-decreasing functions ${f_i(x) = e^{t\cdot\alpha_i\cdot x}}$, since $\alpha_i, t \geq 0$.
\end{sloppypar}
Next, consider the following claim, which follows from standard Chernoff-type arguments:
\begin{claim}
\label{cl:Chernoff_binom}
Let $Z\sim \text{Binom}(n, p)$, and $b > 0$. Then $\E[e^{-b\cdot Z}] \leq e^{np\cdot(e^{-b}-1)}$.
\end{claim}
\begin{proof} We can write $Z = \sum\limits_{j=1}^n Z_j$, where $Z_j \sim\text{Bern}(p)$ are independent Bernoulli random variables. Then
\begin{equation}
\label{eq:Chernoff_binom}
\begin{aligned}
\E\left[e^{-b\cdot Z}\right] &=
\E\left[\prod_{j=1}^{n}e^{-b\cdot Z_j}\right]=
\prod_{j=1}^{n}\E\left[e^{-b\cdot Z_j}\right] =
\left((1-p) + p\cdot e^{-b}\right)^n \leq e^{np(e^{-b} - 1)},
\end{aligned}
\end{equation}
where the only inequality uses the fact that $1 + x\leq e^x$ for any $x$.
\end{proof}
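Claim~\ref{cl:Chernoff_binom} is easy to confirm numerically: the pmf sum, the closed form $((1-p)+pe^{-b})^n$, and the bound $e^{np(e^{-b}-1)}$ can be compared directly (the test cases below are arbitrary values of our choosing):

```python
import math

def mgf_neg(n, p, b):
    # E[e^{-b Z}] for Z ~ Binom(n, p), summed directly from the pmf
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k) * math.exp(-b * k)
               for k in range(n + 1))

# Claim: E[e^{-b Z}] = ((1-p) + p e^{-b})^n <= e^{n p (e^{-b} - 1)}
cases = [(10, 0.3, 0.5), (25, 0.1, 1.0), (8, 0.5, 2.0)]
results = [(mgf_neg(n, p, b),
            ((1 - p) + p * math.exp(-b)) ** n,
            math.exp(n * p * (math.exp(-b) - 1))) for n, p, b in cases]
```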
Turning back to~\eqref{Chernoff-type1}, we bound the terms $\E\left[e^{t\cdot\alpha_i(\ell q_i - d_i)}\right]$ individually. The marginal distribution of $d_i$ is simply $\text{Binom}(\ell, q_i)$, so we can use Claim~\ref{cl:Chernoff_binom}. We derive:
\begin{equation}
\label{eq:Chernoff_binom2}\noeqref{eq:Chernoff_binom2}
\begin{aligned}
\E\left[e^{t\cdot\alpha_i(\ell q_i - d_i)}\right] = e^{t\alpha_i\ell q_i} \cdot \E\left[e^{-t\alpha_i\cdot d_i}\right] \overset{\eqref{eq:Chernoff_binom}}{\leq} e^{t\cdot\alpha_i\ell q_i}\cdot e^{\ell q_i\left(e^{-t\alpha_i}-1\right)}= e^{\ell q_i\left(t\alpha_i + e^{-t\alpha_i}-1\right)} \leq e^{\ell q_i\left(t + e^{-t} - 1 \right)},
\end{aligned}
\end{equation}
where the last inequality uses that $x + e^{-x}$ is increasing for $x\geq 0$ together with $0 \leq t\alpha_i \leq t$, as $t > 0$ and $0\leq \alpha_i \leq 1$. Plugging the above into \eqref{Chernoff-type1} and using $\sum_{i=1}^m q_i = 1$, we obtain
\begin{equation}
\label{eq:Chernoff_binom3}
\P[X\geq a] \leq e^{-ta}\prod_{i=1}^me^{\ell q_i\left(t + e^{-t}-1\right)}= e^{-ta}\cdot e^{\ell\left(t + e^{-t}-1\right)} \leq e^{-ta + \ell\frac{t^2}{2}},
\end{equation}
where we used $x + e^{-x} - 1 \leq \frac{x^2}{2}$ for any $x\geq 0$. Finally, by taking $a = 2\sqrt{\ell}\log\ell$, setting $t = a/\ell$, and recalling what we denoted by $X$ and $\alpha_i$ above, we immediately deduce
\begin{equation}
\P\left[\sum_{i=1}^m(\ell\cdot q_i - d_i)h(p_i) \geq 2\sqrt{\ell}\log\ell\right] \leq e^{-\frac{a^2}{2\ell}} = e^{-2\log^2\ell} \leq \ell^{-2\log\ell}.
\end{equation}
This means that the first typicality requirement~\eqref{close_entropy_sum} holds with very high probability (over the randomness of the channel).
\vspace{0.3cm}
Let us now prove that the second typicality condition~\eqref{close_coordinates_sum} holds with high probability. For that, we condition on the values of $d_1, d_2, \dots, d_m$. We will see that \eqref{close_coordinates_sum} holds with high probability for every choice of $d_1, d_2, \dots, d_m$, which clearly implies that it also holds with high probability overall.
So, fix the values of $d_1, d_2, \dots, d_m$. Define the random variable $Y = \sum_{i=1}^m (p_id_i - t_i)\log\left(\frac{1-p_i}{p_i}\right)$; our goal is to show that $Y$ is bounded above by $O(\sqrt{\ell}\log^2\ell)$ with high probability (over the randomness of the $t_i$'s). Given the conditioning on $d_1, d_2, \dots, d_m$, it is clear that $t_i \sim \text{Binom}(d_i, p_i)$ for all $i=1, 2,\dots,m$, and that they are all independent (recall that $d_i$ is the number of times subchannel $W^{(i)}$ is chosen, while $t_i$ is the number of ``flips'' within this subchannel).
We split the summation in $Y$ into two parts: let $T_1 = \{i\,:\, p_i \leq \frac1{\ell} \}$ and $T_2 = [m] \setminus T_1$. Then for any realization of $t_i$'s, we have $\sum\limits_{i\in T_1}(p_id_i - t_i)\log\left(\frac{1-p_i}{p_i}\right) \leq \sum\limits_{i\in T_1}p_id_i\log\left(\frac{1}{p_i}\right) \leq \sum\limits_{i\in T_1}\frac{d_i\log\ell}{\ell} \leq \log\ell$.
Denote the second part of the summation as $Y_2 = \sum_{i\in T_2} (p_id_i - t_i)\log\left(\frac{1-p_i}{p_i}\right)$. Notice that $\log\left(\frac{1-p_i}{p_i}\right) \leq \log\left(\frac{1}{p_i}\right) \leq \log\ell$ for $i\in T_2$. Denote then $\gamma_i = \log\left(\frac{1-p_i}{p_i}\right)/\log\ell$, and so $0\leq \gamma_i \leq 1$ for $i\in T_2$. Finally, let $\widetilde{Y_2} = Y_2/\log\ell = \sum_{i\in T_2}(p_id_i - t_i)\cdot\gamma_i$.
We now prove concentration for $\widetilde{Y_2}$ in exactly the same way as for $X$ above. Claim~\ref{cl:Chernoff_binom}, applied to $t_i \sim\text{Binom}(d_i, p_i)$ and $t\cdot\gamma_i > 0$ for any $t > 0$, gives $\E\left[e^{-t\gamma_i\cdot t_i}\right] \leq e^{d_ip_i(e^{-t\gamma_i}-1)}$, and so, similarly to~\eqref{Chernoff-type1}--\eqref{eq:Chernoff_binom3}, we derive
\begin{equation}
\label{eq:Chernoff_again}
\begin{aligned}
\P\left[\widetilde{Y_2} >\hspace{-1.5pt} a\right] \leq e^{-ta}\cdot\hspace{-2.5pt}\prod_{i\in T_2}e^{p_id_i\left(t\gamma_i + e^{-t\gamma_i} -1\right)} \leq e^{-ta}\cdot\hspace{-2.5pt}\prod_{i\in T_2}e^{p_id_i\left(t + e^{-t} -1\right)} \leq e^{-ta + \sum_{i\in T_2}p_id_i\cdot t^2/2} \leq e^{-ta + \ell t^2/2}
\end{aligned}
\end{equation}
for any $t > 0$, where we used $0 \leq \gamma_i \leq 1$ for $i\in T_2$, $p_i < 1$, and $\sum_{i\in T_2}d_i \leq \ell$. Therefore, taking again $a = 2\sqrt{\ell}\log\ell$ and $t = a/\ell$, we obtain
\begin{align}
\P\left[Y_2 \geq 2\sqrt{\ell}\log^2\ell\right] = \P\left[\widetilde{Y_2} \geq 2\sqrt{\ell}\log\ell\right] \leq \ell^{-2\log\ell}.
\end{align}
Since $Y \leq \log\ell + Y_2$, we conclude that $Y \leq 3\sqrt{\ell}\log^2\ell$ with probability at least $1 - \ell^{-2\log\ell}$ over the randomness of the channel.
\vspace{0.3cm}
Since both~\eqref{close_entropy_sum} and~\eqref{close_coordinates_sum} hold with probability at least $1 - \ell^{-2\log\ell}$, the union bound implies that these two conditions hold simultaneously with probability at least $1 - 2\ell^{-2\log\ell} \geq 1 - \ell^{-\log\ell}$.
\end{proof}
\subsubsection{Concentration Lemma}
\label{sec:concentration}
\begin{lem}
\label{two_concentrations_lem}
Let $\chi \sim\Omega = \text{Binom}(d_1, p_1)\times\text{Binom}(d_2, p_2)\times\dots\times\text{Binom}(d_m, p_m)$, where $d_i$'s are positive integers for $i\in[m]$, $p_i \leq 1/2$, $\sum_{i=1}^md_i = \ell$, and $m \leq \sqrt{\ell}$.
Then the following holds with probability at least $1 - \ell^{-(\log\ell)/4}$:
\begin{align}
\sum_{i=1}^md_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right) &\leq 10\ell^{1/2}\log^3\ell. \label{conc1}
\end{align}
\end{lem}
\begin{proof}
First, we split the index set $[1:m]$ into two parts. In the first part, the value of $d_i\cdot p_i$ is small, and the sum of $d_ih(p_i)$ will also be small. For the second part, where $d_i\cdot p_i$ is large enough, we will be able to apply concentration arguments. Denote:
\begin{equation}
\label{split_intervals_lem}
\begin{aligned}
F_1 &\coloneqq \left\{i\,:\ p_i \leq \frac{4\log^2\ell}{d_i}\right\},\\
F_2 &\coloneqq \{1, 2,\dots,m\} \setminus F_1.
\end{aligned}
\end{equation}
Then
\begin{align}
\sum_{i=1}^md_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right) &\leq \sum_{i\in F_1}d_ih(p_i) + \sum_{i\in F_2}d_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right). \label{split_sum1}
\end{align}
Let us deal with the summation over $F_1$ first. Split this set further: $F_1^{(1)} = \{i\in F_1 \,:\, d_i \geq 8\log^2\ell\}$, and $F_1^{(2)} = F_1\setminus F_1^{(1)}$. For any $i \in F_1^{(1)}$ we have $p_i\leq 1/2$, and thus $h(p_i) \leq 2p_i\log\frac{1}{p_i}$. For any $i\in F_1^{(2)}$ we just use $h(p_i) \leq 1$. Combining these, we obtain
\begin{align}
\sum_{i\in F_1}d_ih(p_i) \leq \sum_{i\in F_1^{(1)}} 2d_ip_i\log\dfrac{1}{p_i} + \sum_{i\in F_1^{(2)}}d_i &\leq \sum_{i\in F_1^{(1)}} 8\log^2\ell\cdot\log\left(\dfrac{d_i}{4\log^2\ell}\right) + \left\lvert F_1^{(2)}\right\rvert\cdot 8\log^2\ell\\
&\leq \left\lvert F_1^{(1)}\right\rvert\cdot 8\log^3\ell + \left\lvert F_1^{(2)}\right\rvert\cdot 8\log^2\ell \leq 8\ell^{1/2}\log^3\ell.
\label{split2_part2}
\end{align}
Therefore, the first part of the RHS of~\eqref{split_sum1} is always bounded by $8\ell^{1/2}\log^3\ell$. We now deal with the remaining summation over $i\in F_2$.
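The bound $h(p)\leq 2p\log\frac1p$ for $p\leq 1/2$, used for the $F_1^{(1)}$ terms, is another elementary inequality that can be checked on a grid (a sanity check only, not part of the argument):

```python
import math

def h(p):
    # binary entropy, base-2 logarithm
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# check h(p) <= 2 * p * log2(1/p) for p in (0, 1/2]; equality holds at p = 1/2
ok = all(h(p) <= 2 * p * math.log2(1 / p) + 1e-12
         for p in (i / 1000 for i in range(1, 501)))
```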
For any $i\in F_2$, we know that $d_ip_i\geq4\log^2\ell$. Now, for $\chi\sim\Omega$ we have by the multiplicative Chernoff bound
\begin{equation}
\label{indiv_Chernoff}
\P\big[|\chi_i - d_ip_i| \geq \sqrt{d_ip_i}\log\ell\big] \leq 2e^{-\log^2 \ell/3}
\le \ell^{-\log\ell/3} \qquad \text{if}\ \log\ell \leq \sqrt{d_ip_i},
\end{equation}
where the last inequality holds because the $\log$ in the exponent is to base $2$.
The condition $\log\ell \leq \sqrt{d_ip_i}$ is needed for the multiplicative Chernoff bound to hold.
Then, by union bound, we derive
\begin{equation}
\label{mult_Chernoff}
\P_{\chi\sim\Omega}\left[|\chi_i - d_ip_i| \geq \sqrt{d_ip_i}\log\ell \text{ for some } i \in F_2 \right] \leq |F_2|\cdot\ell^{-\log\ell/3} \leq \ell^{-\log\ell/3 + 1/2}.
\end{equation}
Define the sets $\mathcal{T}_1^{(i)}$ for all $i=1,2,\dots,m$ as follows:
\begin{equation}
\label{chernoff_intervals}
\begin{aligned}
&\mathcal{T}_1^{(i)}\coloneqq \left\{ s_i \in [0:d_i]\, :\ |s_i - d_ip_i| \leq \sqrt{d_ip_i}\log\ell\right\}, \quad &\text{for } i \in F_2;\\
&\mathcal{T}_1^{(i)}\coloneqq [0:d_i], \quad &\text{for } i \notin F_2.
\end{aligned}
\end{equation}
and let
\begin{equation}
\label{theta_def}
\theta_i \coloneqq \P[\chi_i\in\mathcal{T}_1^{(i)}].
\end{equation}
Then by \eqref{indiv_Chernoff} we have
\begin{equation}
\label{chernoff_probs}
\begin{aligned}
&\theta_i \geq 1 -\ell^{-\log\ell/3},\qquad &\text{for } i \in F_2;\\
&\theta_i = 1, \qquad &\text{for } i \notin F_2.
\end{aligned}
\end{equation}
Finally, define
\begin{equation}
\label{mult_chernoff_prob}
\theta \coloneqq \prod_{i=1}^m\theta_i = \prod_{i\in F_2}\theta_i = \prod_{i\in F_2}\P[\chi_i\in\mathcal{T}_1^{(i)}] = \P_{\chi\sim\Omega}[\chi_i \in \mathcal{T}_1^{(i)} \text{ for all } i \in F_2] \geq 1 - \ell^{-\log\ell/3 + 1/2},
\end{equation}
where the last inequality is a direct implication of \eqref{mult_Chernoff}.
We will now define a set of new probability distributions $\mathcal{D}_i$ for all $i=1,2,\dots,m$, as binomial distributions $\text{Binom}(d_i, p_i)$ restricted to intervals $\mathcal{T}_1^{(i)}$. Formally, let us write
\begin{equation}
\label{truncated_distr}
\begin{aligned}
\P_{\eta_i\sim\mathcal{D}_i}\big[\eta_i=x\big] = \begin{cases} 0, \quad &\text{if } x \notin \mathcal{T}_1^{(i)};\\
\P_{\chi_i\sim\text{Binom}(d_i, p_i)}\big[\chi_i=x\big]\cdot\theta_i^{-1}, \quad &\text{if } x \in \mathcal{T}_1^{(i)}.
\end{cases}
\end{aligned}
\end{equation}
(So to get $\mathcal{D}_i$ we just took a distribution Binom$(d_i, p_i)$, truncated it so it does not have any mass outside of $\mathcal{T}_1^{(i)}$, and rescaled appropriately.)
Next, define the product distribution $\mathcal{D} \coloneqq \bigtimes_{i=1}^m\mathcal{D}_i$ on the set $\mathcal{T}_1 \coloneqq \bigtimes_{i=1}^m\mathcal{T}_1^{(i)}$. Notice that for any subset $\mathcal{R}\subseteq \mathcal{T}_1$ it trivially holds that
\begin{equation}
\label{distributions_almost_equal}
\P_{\chi\sim\Omega}[\chi\in\mathcal{R}] = \P_{\eta\sim\mathcal{D}}[\eta\in\mathcal{R}] \cdot\theta.
\end{equation}
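In one dimension, the relation \eqref{distributions_almost_equal} is immediate from the definition of the truncated distribution; a small numerical illustration (the window and parameters below are ours):

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# D is Binom(n, p) restricted to a window T1 and renormalized by theta
n, p = 20, 0.3
T1 = [k for k in range(n + 1) if abs(k - n * p) <= 3.5]
theta = sum(binom_pmf(n, p, k) for k in T1)
D = {k: binom_pmf(n, p, k) / theta for k in T1}

R = [k for k in T1 if k >= n * p]              # an arbitrary subset of T1
lhs = sum(binom_pmf(n, p, k) for k in R)       # P_{Omega}[chi in R]
rhs = theta * sum(D[k] for k in R)             # theta * P_{D}[eta in R]
```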
Since $\theta$ is very close to $1$, we can essentially work with $\mathcal{D}$ instead of $\Omega$.
\vspace{0.25cm}
Recall that our goal is to show that $\sum_{i\in F_2}d_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right)$ (the second part of \eqref{split_sum1}) is bounded above by $O(\ell^{1/2}\log^3\ell)$ with high probability when $\chi\sim\Omega$. Instead, let us show that this sum is small with high probability when $\chi\sim\mathcal{D}$, and then use the arguments above to see that not much changes when $\chi\sim\Omega$.
\begin{claim}
\label{entropy_distortion_claim}
Let $i\in F_2$ and $\chi_i \sim \mathcal{D}_i$. Then
\begin{align}
\left|d_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right)\right| &\leq \sqrt{d_ip_i}\log^2\ell. \label{claim_ent1}
\end{align}
\end{claim}
\begin{proof} First, $\left|\frac{\chi_i}{d_i} - p_i\right| \leq \sqrt{\frac{p_i}{d_i}}\log\ell$ for $\chi_i\sim\mathcal{D}_i$ by the definition of the distribution $\mathcal{D}_i$. Now, for $i\in F_2$ we have $p_i \geq \frac{4\log^2\ell}{d_i}$, hence $\frac{p_i}2 \geq \sqrt{\frac{p_i}{d_i}}\log\ell$, and therefore $\frac{\chi_i}{d_i} \geq \frac{p_i}{2}$. Then, using the concavity of the binary entropy function, we obtain:
\begin{equation*}
\begin{aligned}
\left|h\left(\frac{\chi_i}{d_i}\right) - h(p_i)\right| &\leq \left|\dfrac{\chi_i}{d_i} - p_i\right|\cdot \max\left\{\dfrac{dh}{dx}(p_i), \dfrac{dh}{dx}\left(\frac{\chi_i}{d_i}\right) \right\} \\
&\leq \sqrt{\frac{p_i}{d_i}}\log\ell\cdot \dfrac{dh}{dx}\left(\frac{p_i}2\right) = \sqrt{\frac{p_i}{d_i}}\log \ell \cdot \log\dfrac{1-p_i/2}{p_i/2}\\
&\leq \sqrt{\frac{p_i}{d_i}}\log \ell \cdot \log\frac{2}{p_i} \leq \sqrt{\frac{p_i}{d_i}}\log\ell\cdot\log\left(\dfrac{d_i}{2\log^2\ell}\right) \leq \sqrt{\frac{p_i}{d_i}}\log^2 \ell,
\end{aligned}
\end{equation*}
and therefore \eqref{claim_ent1} follows.
\end{proof}
\vspace{0.3cm}
Let $\chi\sim \mathcal{D}$ here and further. Define for convenience new random variables $X_i = d_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right)$ for all $i \in F_2$, and let also $X = \sum_{i\in F_2}X_i = \sum_{i\in F_2}d_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right)$.
\begin{claim}
\label{sum_Hoeffding}
With probability at least $1 - \ell^{-\log\ell}$ it holds that
\[ X - \E[X] \leq \ell^{1/2}\log^3\ell \]
\end{claim}
\begin{proof}
All the $X_i$'s are clearly independent, and $X_i \in \left[- \sqrt{d_ip_i}\log^2\ell, \sqrt{d_ip_i}\log^2\ell \right]$ by Claim \ref{entropy_distortion_claim}. We can therefore apply Hoeffding's inequality for sums of independent bounded random variables, and obtain
\begin{equation}
\begin{aligned}
\P_{\chi\sim\mathcal{D}}\Big[X - \E[X] \geq \ell^{1/2}\log^3\ell\Big] &\leq \exp\left(-\dfrac{2\ell\log^6\ell}{\sum_{i\in F_2}(2\sqrt{d_ip_i}\log^2\ell)^2}\right) \\
&\leq e^{-\log^2\ell} \leq \ell^{-\log\ell},
\end{aligned}
\end{equation}
where we used in the last step that $\sum_{i=1}^md_i = \ell,\ p_i\leq 1/2,$ and $d_i \leq \ell$.
\end{proof}
So by now we have proved that $X = \sum_{i\in F_2}d_i\left(h(p_i) - h\left(\frac{\chi_i}{d_i}\right)\right)$ does not deviate much from its expectation. It remains to show that $\E[X]$ itself is not very large.
The following two claims show that the first moment and the mean absolute deviation of the distribution $\mathcal{D}_i$ are close to those of $\Omega_i$. They follow easily from the definition~\eqref{truncated_distr} of $\mathcal{D}_i$, and the proofs are deferred to Appendix~\ref{app:moments}.
\begin{claim}
\label{cl:exp_of_Di}
Let $i\in F_2$. Then $\left|\E\limits_{\chi_i \sim \mathcal{D}_i}\left[\frac{\chi_i}{d_i}\right] - p_i\right| \leq \frac1{d_i}$.
\end{claim}
\begin{claim}
\label{cl:mean_absolute}
Let $\chi_i \sim \mathcal{D}_i$ and $\eta_i \sim \Omega_i$ for $i\in F_2$. Then $\E\Big\lvert\chi_i - \E[\chi_i]\Big\rvert \leq \E\Big\lvert\eta_i - \E\left[\eta_i\right]\Big\rvert + 1$.
\end{claim}
These observations allow us to prove the following claim.
\begin{claim}
Let $i\in F_2$, and $\chi_i \sim \mathcal{D}_i$. Then $h\left(\E\left[\frac{\chi_i}{d_i}\right]\right) - \E\left[h\left(\frac{\chi_i}{d_i}\right)\right] \leq \frac{5\log\ell}{d_i}$.
\end{claim}
\begin{proof}
Unfortunately, Jensen's inequality goes in the wrong direction for us here. However, we can use a converse Jensen inequality from \cite{Dragomir}, which says the following:
\begin{lem}[Converse Jensen's inequality, \cite{Dragomir}, Corollary 1.8]\label{converse_Jensen} Let $f$ be a concave differentiable function on an interval $[a, b]$, and let $Z$ be a (discrete) random variable, taking values in $[a, b]$. Then
\[ 0 \leq f(\E[Z]) - \E[f(Z)] \leq \frac12\left(f'(a) - f'(b)\right)\cdot\E\left|Z - \E[Z]\right|. \]
\end{lem}
We apply it here for the concave binary entropy function $h$, and random variable $Z = \frac{\chi_i}{d_i}$ for $\chi_i\sim\mathcal{D}_i$, which takes values in $[a,b] \coloneqq \left[p_i - \sqrt{\frac{p_i}{d_i}}\log\ell, p_i + \sqrt{\frac{p_i}{d_i}}\log\ell \right]$. Recall also that for $i\in F_2$, $p_i \geq \frac{4\log^2\ell}{d_i}$ and then $\frac{p_i}2 \geq \sqrt{\frac{p_i}{d_i}}\log\ell$, therefore $a = p_i - \sqrt{\frac{p_i}{d_i}}\log\ell \geq \frac{p_i}{2}$. Using the mean value theorem, for some $c\in [a,b]$ we have
\[ h'(a) - h'(b) = (b-a)\cdot(-h''(c)) \leq 2\sqrt{\frac{p_i}{d_i}}\log\ell\cdot (-h''(c)).\]
But $(-h''(c)) = \frac{1}{c(1-c)\ln 2}\leq \frac{2}{c} \leq \frac2a \leq \frac{4}{p_i}$, thus
\[ h'(a) - h'(b) \leq \dfrac{8\log\ell}{\sqrt{d_ip_i}}. \]
Finally, Claim~\ref{cl:mean_absolute} gives $\E\left|Z - \E[Z]\right| \leq \E\left|\frac{Z_2}{d_i} - \E\left[\frac{Z_2}{d_i}\right]\right| + \frac1{d_i}$ for $Z_2\sim\text{Binom}(d_i, p_i)$, and so
\[ \E\left|Z - \E[Z]\right| \leq \frac1{d_i}\E\left|Z_2 - \E[Z_2]\right|+\frac1{d_i} \leq \frac1{d_i}\sqrt{\E[(Z_2 - \E[Z_2])^2]}+\frac1{d_i} = \sqrt{\frac{p_i(1-p_i)}{d_i}}+\frac1{d_i} \leq \sqrt{\frac{p_i}{d_i}} + \frac1{d_i}. \]
Putting all this together, Lemma \ref{converse_Jensen} gives us
\[ 0 \leq h\left(\E\left[\frac{\chi_i}{d_i}\right]\right) - \E\left[h\left(\frac{\chi_i}{d_i}\right)\right] \leq \dfrac12\cdot \dfrac{8\log\ell}{\sqrt{d_ip_i}} \cdot \left(\sqrt{\frac{p_i}{d_i}} + \frac1{d_i}\right) = \dfrac{4\log\ell}{d_i} + \dfrac{4\log\ell}{d_i\sqrt{d_ip_i}} \leq \dfrac{5\log\ell}{d_i}, \]
where the last step uses $\sqrt{p_id_i}\geq 2\log\ell$ for $i \in F_2$.
\end{proof}
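As a quick numerical illustration of Lemma~\ref{converse_Jensen} (not part of the proof), one can compare the Jensen gap of the binary entropy function with the converse bound for a small discrete random variable; the support points and probabilities below are arbitrary illustrative values.

```python
import math

def h(x):
    """Binary entropy function, base 2."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def h_prime(x):
    """Derivative of h: h'(x) = log2((1-x)/x)."""
    return math.log2((1 - x) / x)

# Z supported on a few points inside (0, 1); arbitrary illustrative values
vals = [0.12, 0.18, 0.25, 0.31]
probs = [0.2, 0.3, 0.4, 0.1]
a, b = min(vals), max(vals)

ez = sum(p * v for p, v in zip(probs, vals))             # E[Z]
ehz = sum(p * h(v) for p, v in zip(probs, vals))         # E[h(Z)]
mad = sum(p * abs(v - ez) for p, v in zip(probs, vals))  # E|Z - E[Z]|

gap = h(ez) - ehz                              # Jensen gap, >= 0 since h is concave
bound = 0.5 * (h_prime(a) - h_prime(b)) * mad  # converse Jensen bound

assert 0 <= gap <= bound
```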
We can now use the above claims and Proposition~\ref{prop:entropy_differ} to bound the expectation of $X$:
\begin{equation}
\begin{aligned}
\label{eq:X_exp}
\E[X] = \sum_{i\in F_2} d_i\left(h(p_i) - \E\left[h\left(\frac{\chi_i}{d_i}\right)\right]\right) &\leq \sum_{i\in F_2} d_i\left(h(p_i) - h\left(\E\left[\frac{\chi_i}{d_i}\right]\right) + \dfrac{5\log\ell}{d_i}\right) \\
&\leq \sum_{i\in F_2}d_i\left(h\left(\frac1{d_i}\right) + \frac{5\log\ell}{d_i}\right) \leq 7\ell^{1/2}\log\ell \leq \ell^{1/2}\log^3\ell.
\end{aligned}
\end{equation}
So we showed in Claim~\ref{sum_Hoeffding} that $X$ does not exceed its expectation by more than $\ell^{1/2}\log^3\ell$ with high probability (over $\chi\sim\mathcal{D}$), and in~\eqref{eq:X_exp} that $\E[X]$ is bounded by $\ell^{1/2}\log^3\ell$; therefore $X$ does not exceed $2\ell^{1/2}\log^3\ell$ with high probability. Specifically, this means that there exists $\mathcal{T}\subseteq \mathcal{T}_1$ such that $\P_{\chi\sim\mathcal{D}}[\chi\in\mathcal{T}] \geq 1 - \ell^{-\log\ell}$, and such that $\sum_{i\in F_2}d_i\left(h(p_i) - h\left(\frac{s_i}{d_i}\right)\right) \leq 2\ell^{1/2}\log^3\ell$ holds for any $\bs\in\mathcal{T}$. Taking into account that~\eqref{split2_part2} always holds, we conclude that $\sum_{i=1}^{m} d_i\left(h(p_i) - h\left(\frac{s_i}{d_i}\right)\right) \leq 10\ell^{1/2}\log^3\ell$ for any $\bs\in\mathcal{T}$. Finally, by \eqref{distributions_almost_equal} we also have
\begin{equation}
\label{prob_of_tau_is_big}
\P_{\chi\sim\Omega}[\chi\in\mathcal{T}] = \P_{\chi\sim\mathcal{D}}[\chi\in\mathcal{T}]\cdot\theta \geq \left(1 - \ell^{-\log\ell}\right)\left(1 - \ell^{-\log\ell/3 + 1/2}\right) \geq 1 - \ell^{-\log\ell/4},
\end{equation}
where we used $\log\ell \geq 8$.
\end{proof}
\subsection{Arbitrary alphabet size}
\label{sec:BMS_any_alphabet}
In this section we finish the proof of Theorem~\ref{thm:converse_Shannon_BMS} for the general BMS channel using the results from the previous section.
For BMS channels with large output alphabet size we will use binning of the output; however, we will do it in a way that \textit{upgrades} the channel, rather than degrades it (recall Definition~\ref{def:degrad}).
Specifically, we will employ the following statement:
\begin{prop}
\label{prop:binned_upgraded_channel}
Let $W$ be any BMS channel. Then there exists another BMS channel~$\widetilde{W}$ with the following properties:
\begin{enumerate}[label=(\roman*)]
\item Output alphabet size of $\widetilde{W}$ is at most $2\sqrt{\ell}$;
\item $\widetilde{W}$ is \textit{upgraded} with respect to $W$, i.e. $W \preceq \widetilde{W}$;
\item $H(\widetilde{W}) \geq H(W) - \dfrac{\log\ell}{\ell^{1/2}}$.
\end{enumerate}
\end{prop}
Before proving this proposition, we first show how to finish the proof of Theorem~\ref{thm:converse_Shannon_BMS} using it. So, consider any BMS channel $W$ with output alphabet size larger than $2\sqrt{\ell}$, and consider the channel $\widetilde{W}$ which satisfies properties (i)--(iii) from Proposition~\ref{prop:binned_upgraded_channel} with respect to $W$.
First of all, notice that $k \geq \ell(1 - H(W)) + 14\ell^{1/2}\log^3\ell \geq \ell\left(1 - H(\widetilde{W}) - \frac{\log\ell}{\ell^{1/2}}\right) + 14\ell^{1/2}\log^3\ell$, and thus ${k \geq \ell(1 - H(\widetilde{W})) + 13\ell^{1/2}\log^3\ell}$. Taking property (i) into consideration, it follows that the channel $\widetilde{W}$ satisfies all the conditions for the arguments of Section~\ref{sec:BMS_large_alphabet} to be applied, i.e. the statement of Theorem~\ref{thm:converse_Shannon_BMS} holds for $\widetilde{W}$. Therefore, we can argue that with probability at least $1 - \ell^{-\log\ell/20}$ over a random kernel $G$ it holds that $H(V_1\,|\,\widetilde{\bY}) \geq 1 - \ell^{-\log\ell/20}$, where $\widetilde{\bY} = \widetilde{W}^{\ell}(\bV\cdot G)$ is the output vector one would obtain by using the channel $\widetilde{W}$ instead of $W$, for $\bV \sim \{0,1\}^k$.
Now, let $W_1$ be the channel which ``proves'' that $\widetilde{W}$ is upgraded with respect to $W$, i.e. $W_1\left(\widetilde{W}(x)\right)$ and $W(x)$ are identically distributed for any $x\in\{0,1\}$. Trivially then, $W_1^{\ell}\left(\widetilde{W}^{\ell}(X)\right)$ and $W^{\ell}(X)$ are identically distributed for any random variable $X$ supported on $\{0,1\}^{\ell}$.
Next, observe that the following forms a Markov chain
\[ V_1 \to \bV \to \bV\cdot G \to \widetilde{W}^{\ell}(\bV G) \to W_1^{\ell}\left(\widetilde{W}^{\ell}(\bV G)\right), \]
where $\bV$ is distributed uniformly over $\{0,1\}^{k}$. But then the data-processing inequality gives
\[ I\left(V_1\,;\,W_1^{\ell}\left(\widetilde{W}^{\ell}(\bV G)\right) \right) \leq I\left(V_1\,;\,\widetilde{W}^{\ell}(\bV G)\right). \]
However, as we discussed above, $W_1^{\ell}\left(\widetilde{W}^{\ell}(\bV G)\right)$ and $W^{\ell}(\bV G)$ are identically distributed, and so
\[ I(V_1\,;\, \bY) = I\left(V_1\,;\,W^{\ell}(\bV G) \right) = I\left(V_1\,;\,W_1^{\ell}\left(\widetilde{W}^{\ell}(\bV G)\right) \right) \leq I\left(V_1\,;\,\widetilde{W}^{\ell}(\bV G)\right) = I(V_1\,;\,\widetilde{\bY}). \]
Therefore using $H(X|Y) = H(X) - I(X;Y)$ we derive that
\[ H(V_1\,|\,\bY) \geq H(V_1\,|\,\widetilde{\bY}) \geq 1 - \ell^{-\log\ell/20} \]
with probability at least $1 - \ell^{-\log\ell/20}$. This concludes the proof of Theorem~\ref{thm:converse_Shannon_BMS}.
\end{proof}
\vspace{0.8cm}
\begin{proof}[Proof of Proposition \ref{prop:binned_upgraded_channel}] We are going to describe how to construct such an upgraded channel $\widetilde{W}$. We again view $W$ as a convex combination of BSCs, as discussed in Section \ref{sec:BMS_large_alphabet}: let $W$ consist of $m$ underlying BSC subchannels $W^{(1)}, W^{(2)},\dots, W^{(m)}$, where subchannel $W^{(j)}$ is chosen with probability $q_j$ and has crossover probability $p_j$, with $0\leq p_1 \leq\dots\leq p_m \leq \frac12$. The subchannel $W^{(j)}$ can output $z^{(0)}_j$ or $z^{(1)}_j$, and the whole output alphabet is then $\mathcal{Y} = \{z^{(0)}_1, z^{(1)}_1, z^{(0)}_2, z^{(1)}_2, \dots, z^{(0)}_m, z^{(1)}_m\}$, $|\mathcal{Y}| = 2m$. It will be convenient to write the transmission probabilities of $W$ explicitly: for any $k \in [m]$, $c, x \in \{0,1\}$:
\begin{align}
\label{eq:W_def}
W\left(z^{(c)}_k\;\Big\lvert\;x\right) = \begin{cases} q_k\cdot (1-p_k), \qquad &x = c,\\
q_k\cdot p_k, \qquad &x \not= c.\\
\end{cases}
\end{align}
The key ideas behind the construction of $\widetilde{W}$ are the following:
\begin{itemize}[label={--}]
\item decreasing a crossover probability in any BSC (sub)channel always upgrades the channel, i.e. $\text{BSC}_{p_1} \preceq \text{BSC}_{p_2}$ for any $0 \leq p_2 \leq p_1 \leq \frac12$ (\cite[Lemma 9]{Tal_Vardy}). Indeed, one can simulate a flip of a coin with bias $p_1$ by first flipping a coin with bias $p_2$, and then flipping the result one more time with probability $q = \frac{p_1-p_2}{1 - 2p_2}$. In other words, $\text{BSC}_{p_1}(x)$ and $\text{BSC}_{q}\left(\text{BSC}_{p_2}(x)\right)$ are identically distributed for $x\in\{0,1\}$.
\item "binning" two BSC subchannels with the same crossover probability doesn't change the channel
(\cite[Corollary 10]{Tal_Vardy}).
\end{itemize}
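The coin-flip simulation in the first item can be verified directly: composing a flip of bias $p_2$ with an additional flip of probability $q = \frac{p_1 - p_2}{1 - 2p_2}$ yields an overall flip probability of exactly $p_1$. A minimal check, with arbitrary test values $p_1, p_2$:

```python
p1, p2 = 0.30, 0.10              # arbitrary values with 0 <= p2 <= p1 <= 1/2
q = (p1 - p2) / (1 - 2 * p2)     # extra flip probability from the text

# total flip probability of BSC_q applied after BSC_{p2}:
# the input is flipped exactly once, i.e. p2*(1-q) + (1-p2)*q
flip = p2 * (1 - q) + (1 - p2) * q
assert abs(flip - p1) < 1e-12
```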
Let us finally describe how to construct $\widetilde{W}$. Split the interval $[0, 1/2]$ evenly into $\sqrt{\ell}$ parts, i.e. let ${\theta_j = \frac{j-1}{2\sqrt{\ell}}}$ for $j = 1, 2, \dots, \sqrt{\ell}+1$, and consider the intervals $[\theta_j, \theta_{j+1})$ for $j = 1, 2, \dots, \sqrt{\ell}$ (include $1/2$ into the last interval). Now, to get $\widetilde{W}$, we first slightly decrease the crossover probabilities of all the BSC subchannels $W^{(1)}, W^{(2)},\dots, W^{(m)}$ so that they all become one of $\theta_1, \theta_2, \dots, \theta_{\sqrt{\ell}}$. After that we bin together the subchannels with the same crossover probabilities and let the resulting channel be $\widetilde{W}$. Formally, we define
\begin{align*}
T_j &\coloneqq \bigg\{ i \in [m]\ :\ p_i \in \big[\theta_j, \theta_{j+1}\big) \bigg\}, \quad\qquad j = 1, 2, \dots, \sqrt{\ell}-1, \\
T_{\sqrt{\ell}} &\coloneqq \bigg\{ i \in [m]\ :\ p_i \in \big[\theta_{\sqrt{\ell}}, \theta_{\sqrt{\ell}+1}\big] \bigg\}.
\end{align*}
So, $T_j$ is the set of indices of the subchannels of $W$ whose crossover probability we decrease to $\theta_j$. The probability distribution over the new, binned BSC subchannels $\widetilde{W^{(1)}}, \widetilde{W^{(2)}},\dots, \widetilde{W^{(\sqrt{\ell})}}$ of the channel $\widetilde{W}$ is then $(\widetilde{q_1}, \widetilde{q_2}, \dots, \widetilde{q_{\sqrt{\ell}}})$, where $\widetilde{q_j} \coloneqq \sum\limits_{i\in T_j} q_i$. The subchannel $\widetilde{W^{(j)}}$ has crossover probability $\theta_j$, and it can output one of two new symbols $\widetilde{z^{(0)}_j}$ or $\widetilde{z^{(1)}_j}$. The whole output alphabet is then $\widetilde{\mathcal{Y}} = \{\widetilde{z^{(0)}_1}, \widetilde{z^{(1)}_1}, \widetilde{z^{(0)}_2}, \widetilde{z^{(1)}_2}, \dots, \widetilde{z^{(0)}_{\sqrt{\ell}}}, \widetilde{z^{(1)}_{\sqrt{\ell}}}\}$, $|\widetilde{\mathcal{Y}}| = 2\sqrt{\ell}$. To be fully explicit, we describe $\widetilde{W}\, :\,\{0,1\}\to\widetilde{\mathcal{Y}}$ as follows: for any $j\in[\sqrt{\ell}]$ and any $b, x\in \{0,1\}$
\begin{align}
\label{eq:wW_def}
\widetilde{W}\left(\widetilde{z^{(b)}_j}\;\Big\lvert\;x\right) = \begin{cases} \sum\limits_{i\in T_j}q_i\cdot (1-\theta_j), \qquad &x = b,\\
\sum\limits_{i\in T_j}q_i\cdot \theta_j, \qquad &x \not= b.\\
\end{cases}
\end{align}
Property (i) on the output alphabet size for $\widetilde{W}$ then holds immediately. Let us verify (ii) by showing that $\widetilde{W}$ is indeed upgraded with respect to $W$.
One can imitate the usage of $W$ using $\widetilde{W}$ as follows: on input $x\in\{0,1\}$, feed it through $\widetilde{W}$ to get output $\widetilde{z_j^{(b)}}$ for some $b\in\{0,1\}$ and $j \in [\sqrt{\ell}]$. We then know that the subchannel $\widetilde{W^{(j)}}$ was used, which by construction corresponds to the usage of a subchannel $W^{(i)}$ for some $i\in T_j$. We then randomly choose an index $k$ from $T_j$, with each $i\in T_j$ chosen with probability $\dfrac{q_i}{\widetilde{q_j}}$; this determines that we are going to use the subchannel $W^{(k)}$ while imitating the usage of $W$. So far the input has been flipped with probability $\theta_j$ (since we used the subchannel $\widetilde{W^{(j)}}$), while we want it to be flipped with overall probability $p_k \geq \theta_j$, since we decided to use $W^{(k)}$. So the only thing we need to do is to ``flip'' $b$ to $(1-b)$ with probability $\frac{p_k - \theta_j}{1-2\theta_j}$, and then output $z_k^{(b)}$ or $z_k^{(1-b)}$ correspondingly.
Formally, we describe the channel $W_1\,:\,\widetilde{\mathcal{Y}} \to \mathcal{Y}$ which proves that $\widetilde{W}$ is upgraded with respect to $W$ by specifying all of its transmission probabilities: for all ${k \in [m]}$, ${j \in [\sqrt{\ell}]}$, ${b,c\in\{0,1\}}$ set
\begin{align}
\label{eq:W1_def}
W_1\left(z_k^{(c)} \;\Big\lvert\;\widetilde{z_j^{(b)}}\right) = \begin{cases}
0, \qquad &k \notin T_j \\
\dfrac{q_k}{\sum\limits_{i\in T_j}q_i}\cdot\left(1 - \dfrac{p_k - \theta_j}{1-2\theta_j} \right), \qquad &k\in T_j,\ b = c,\\
\dfrac{q_k}{\sum\limits_{i\in T_j}q_i}\cdot\left(\dfrac{p_k - \theta_j}{1-2\theta_j} \right), \qquad &k\in T_j,\ b \not= c.\\
\end{cases}
\end{align}
It is easy to check that $W_1$ is a valid channel, and that for any $k\in[m]$ and $c,x \in \{0,1\}$ it holds that
\begin{equation}
\label{eq:upgrad_calculation}
\sum_{j\in[\sqrt{\ell}],\;b\in\{0,1\}}\widetilde{W}\left(\widetilde{z_j^{(b)}}\,\Big\lvert\,x\right)W_1\left(z_k^{(c)} \;\Big\lvert\;\widetilde{z_j^{(b)}}\right) = W\left(z^{(c)}_k\;\Big\lvert\;x\right),
\end{equation}
which proves that $\widetilde{W}$ is indeed upgraded with respect to $W$. We prove the above equality in Appendix~\ref{app:upgraded_calc}.
It only remains to check that property (iii) also holds, i.e. that the entropy does not decrease too much when we upgrade the channel $W$ to $\widetilde{W}$. We have
\[ H\left(\widetilde{W}\right) = \sum_{j\in[\sqrt{\ell}]}\widetilde{q_j}h(\theta_j) =
\sum_{j\in[\sqrt{\ell}]}\left(\sum_{i \in T_j}q_i\right)h(\theta_j) = \sum_{k\in[m]} q_k h(\theta_{j_k}), \]
where we again denoted by $j_k$ the index from $[\sqrt{\ell}]$ for which $k \in T_{j_k}$. Therefore
\[ H(W) - H\left(\widetilde{W}\right) = \sum_{k\in [m]}q_k\big(h(p_k) - h(\theta_{j_k})\big) \leq \sum_{k\in [m]}q_k\big(h(\theta_{j_k+1}) - h(\theta_{j_k})\big), \]
since $p_k \in [\theta_{j_k}, \theta_{j_k+1}]$ as $k\in T_{j_k}$. Finally, since $\theta_{j+1} - \theta_j = \frac1{2\sqrt{\ell}}$, Proposition~\ref{prop:entropy_differ} gives
\begin{align}
H(W) - H\left(\widetilde{W}\right) \leq
\sum_{k\in [m]}q_k\big(h(\theta_{j_k+1}) - h(\theta_{j_k})\big) \leq
h\left(\dfrac1{2\sqrt{\ell}}\right) \leq 2\cdot\dfrac1{2\sqrt{\ell}}\log \left(2\sqrt{\ell}\right) \leq \dfrac{\log\ell}{\sqrt{\ell}}. &\qedhere
\end{align}
\end{proof}
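As a sanity check of the construction just given (purely illustrative, with arbitrary instance sizes $\ell = 16$ and $m = 10$), one can build $\widetilde{W}$ from~\eqref{eq:wW_def} and $W_1$ from~\eqref{eq:W1_def} for a random BSC mixture, and verify both the composition identity~\eqref{eq:upgrad_calculation} and the entropy bound of property (iii) numerically:

```python
import math, random

random.seed(0)
ell = 16                        # arbitrary, chosen so that sqrt(ell) is an integer
s = math.isqrt(ell)             # sqrt(ell) = number of binned BSC subchannels
theta = [(j - 1) / (2 * s) for j in range(1, s + 2)]  # theta_1, ..., theta_{sqrt(ell)+1}

# random BSC mixture W: subchannel k has weight q[k], crossover probability p[k]
m = 10
q = [random.random() for _ in range(m)]
total = sum(q)
q = [x / total for x in q]
p = sorted(random.uniform(0.0, 0.5) for _ in range(m))

def bin_index(pk):              # the j with pk in [theta_j, theta_{j+1}), 1-indexed
    return min(int(pk * 2 * s) + 1, s)

qt = [0.0] * (s + 1)            # qt[j] = weight of binned subchannel j (index 0 unused)
for qk, pk in zip(q, p):
    qt[bin_index(pk)] += qk

# composition identity: sum over (j, b) of Wtilde(ztilde_j^(b)|x) * W1(z_k^(c)|ztilde_j^(b))
# equals W(z_k^(c)|x); only the bin containing p_k contributes, since W1 vanishes elsewhere
for k in range(m):
    j = bin_index(p[k])
    flip = (p[k] - theta[j - 1]) / (1 - 2 * theta[j - 1])   # extra flip used by W1
    for x in (0, 1):
        for c in (0, 1):
            lhs = sum(
                qt[j] * ((1 - theta[j - 1]) if b == x else theta[j - 1])
                * (q[k] / qt[j]) * ((1 - flip) if b == c else flip)
                for b in (0, 1)
            )
            rhs = q[k] * ((1 - p[k]) if c == x else p[k])
            assert abs(lhs - rhs) < 1e-12

# property (iii): upgrading loses at most log(ell)/sqrt(ell) of entropy
def h(x):
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

HW = sum(qk * h(pk) for qk, pk in zip(q, p))
HWt = sum(qt[j] * h(theta[j - 1]) for j in range(1, s + 1))
assert HWt <= HW + 1e-12 and HW <= HWt + math.log2(ell) / math.sqrt(ell)
```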
\section{Suction at the ends}
\label{sec:suctions}
In this section we present the proof of Theorem~\ref{thm:kernel_seacrh_correct} in the case when the standard Ar{\i}kan kernel was chosen in Algorithm~\ref{algo:kernel_search} -- the so-called suction at the ends regime. Recall that, as we discussed in Section~\ref{sect:local}, this regime applies when the entropy of the channel $W$ falls outside the interval ${(\ell^{-4}, 1-\ell^{-4})}$, and the algorithm directly takes the kernel $K = A_2^{\otimes \log\ell}$, where $A_2 = \left( \begin{smallmatrix}1 & 0\\ 1 & 1\end{smallmatrix} \right)$ is the kernel of Ar{\i}kan's original polarizing transform, instead of trying out all the possible matrices. Note that multiplying by such a kernel $K$ is equivalent to applying Ar{\i}kan's $2 \times 2$ transform recursively $\log\ell$ times. Suppose we have a BMS channel $W$ with $H(W)$ very close to $0$ or $1$. For Ar{\i}kan's basic transform, by working with the channel Bhattacharyya parameter $Z(W)$ instead of the entropy $H(W)$, it is well known that one of the two Ar{\i}kan bit-channels has a $Z$ value that gets much closer (quadratically closer) to the boundary of the interval $(0,1)$~\cite{arikan-polar,Korada_thesis}.
Using these ideas, we prove in this section that, for large enough $\ell$, after $\log\ell$ iterations the basic transform decreases the average of the function $g_{\a}(\cdot)$ of the entropy by a factor of at least $\ell^{1/2}$.
Ar{\i}kan's basic transform takes one channel $W$ and splits it into a slightly worse channel $W^-$ and a slightly better channel $W^+$. The transform is then applied recursively to $W^-$ and $W^+$, creating channels $W^{--}, W^{-+}, W^{+-},$ and $W^{++}$. One can think of this process as a complete binary tree of depth $\log\ell$ with root node $W$, in which any node at level $i$ is of the form $W^{B_i}$ for some $B_i \in \{-, +\}^i$ and has two children $W^{B_i-}$ and $W^{B_i+}$. Denote $r = \log\ell$; then the channels at the leaves $\{W^{B_r}\}$, for all $B_r\in\{-,+\}^r$, are exactly the Ar{\i}kan subchannels of $W$ with respect to the kernel $K = A_2^{\otimes \log\ell}$. We are going to prove the following result:
\begin{lem}
\label{lem:suction_evolution}
Let $W$ be a BMS channel with $H(W) \notin (\ell^{-4}, 1 - \ell^{-4})$. Denote $r = \log\ell$; then, for $r\geq \dfrac1{\alpha}$,
\begin{equation}
\label{eq:suction_lem}
\sum_{B\in\{-,+\}^r}g_{\alpha}\left(H\left(W^{B}\right)\right) \leq \ell^{1/2}g_{\alpha}\left(H(W)\right).
\end{equation}
\end{lem}
Clearly, the above lemma implies the suction-at-the-ends case of Theorem~\ref{thm:kernel_seacrh_correct}, as we take $\log\ell \geq \frac1{\alpha}$.
For the analysis below, apart from the entropy of the channel, we will also use the Bhattacharyya parameter $Z(W)$:
\[ Z(W) = \sum_{y \in \mathcal{Y}}\sqrt{W(y\,|\,0)W(y\,|\,1)}, \]
together with the inequalities which connect it to the entropy:
\begin{align}
\label{eq:Z-H}
Z(W)^2 \leq H(W) \leq Z(W),
\end{align}
for any BMS channel $W$ (\cite{Korada_thesis, Arikan_Source}). The reason we use this parameter is the following relations, which show how the Bhattacharyya parameter changes under the basic transform (\cite{arikan-polar, ModernCoding, Korada_thesis, hassani-finite-scaling-paper-journal}):
\begin{align}
Z(W^+) &= Z(W)^2, \label{eq:Z+}\\
Z(W) \sqrt{2 - Z(W)^2} \leq Z(W^-) &\leq 2Z(W). \label{eq:Z-}
\end{align}
We will also use the conservation of conditional entropy under application of Ar{\i}kan's transform:
\begin{equation}
\label{eq:entropy_martingale}
H(W^+) + H(W^-) = 2H(W).
\end{equation}
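For the binary erasure channel these relations hold in closed form: if $W = \mathrm{BEC}(\varepsilon)$ then $Z(W) = H(W) = \varepsilon$, $W^+ = \mathrm{BEC}(\varepsilon^2)$ and $W^- = \mathrm{BEC}(2\varepsilon - \varepsilon^2)$, which gives a convenient toy family for sanity checks (the proofs below, of course, work for an arbitrary BMS channel):

```python
# For W = BEC(eps): Z(W) = H(W) = eps, W+ = BEC(eps^2), W- = BEC(2*eps - eps^2)
for i in range(1, 100):
    z = i / 100.0
    z_plus = z * z               # (eq:Z+) holds with equality
    z_minus = 2 * z - z * z      # exact value of Z(W-) for the BEC
    assert z * (2 - z * z) ** 0.5 <= z_minus <= 2 * z    # bounds (eq:Z-)
    assert abs((z_plus + z_minus) - 2 * z) < 1e-12       # entropy conservation for the BEC
```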
\begin{proof}[Proof of Lemma~\ref{lem:suction_evolution}] The proof is presented in the next two sections, as it is divided into two parts: the case when $H(W) \leq \ell^{-4}$ (suction at the lower end), and when $H(W) \geq 1 - \ell^{-4}$ (suction at the upper end).
\subsection{Suction at the lower end}
\label{sec:lower_suction}
In this case $H(W) \leq \ell^{-4}$, and thus $Z(W) \leq \ell^{-2} = 2^{-2r}$.
First, recursive application of~\eqref{eq:entropy_martingale} gives
\begin{equation}
\label{eq:lower_eq0}
\sum_{B\in\{-,+\}^r}H\left(W^{B}\right) = 2^rH(W),
\end{equation}
and since entropy is always nonnegative, this implies for any $B\in\{-, +\}^r$
\begin{equation}
\label{eq:lower_eq1}
H\left(W^B\right) \leq 2^rH(W).
\end{equation}
Denote now $k = \left\lceil\log\frac1{\alpha}\right\rceil$, and notice that $\log r \geq k-1$ since $r \geq \frac1{\alpha}$. For $B \in \{-, +\}^r$, define $wt_+(B)$ to be the number of $+$'s in $B$. We will split the summation in~\eqref{eq:suction_lem} into two parts: the part with $wt_+(B) < k$, and the part with $wt_+(B) \geq k$.
\medskip\noindent\textbf{First part.}
From~\eqref{eq:lower_eq1} we derive
\begin{equation}
\label{eq:lower_end_lower_sum}
\begin{aligned}
\sum_{wt_+(B) < k } g_{\alpha}\left(H\left(W^{B}\right)\right)
\leq \sum_{j=0}^{k-1} \binom{r}{j}g_{\alpha}\left(2^{r}H(W)\right)
\leq \log r\cdot\binom{r}{\log r}\cdot 2^{r\alpha}H(W)^{\alpha} \leq 2^{\log^2 r + r\alpha}\cdot H(W)^{\alpha},
\end{aligned}
\end{equation}
where we used $\binom{r}{\log r} \leq \frac{r^{\log r}}{(\log r)!}$; the fact that $g_{\alpha}$ is increasing on $\left(0, \frac12\right)$ together with ${2^{r}H(W) \leq \ell^{-3} < \frac12}$; and the fact that $g_\alpha(x) \leq x^{\alpha}$ for $x \in (0,1)$.
\medskip\noindent\textbf{Second part.} We are going to use the following observation, which can be proved by induction based on~\eqref{eq:Z+} and~\eqref{eq:Z-}:
\begin{claim}
\label{cl:Z_evolution}
Let $B \in \{-,+\}^r$ be such that the number of $+$'s in $B$ is equal to $s$. Then
\[ Z\left(W^B\right) \leq \left(2^{r-s}\cdot Z(W)\right)^{2^s}. \]
This corresponds to first using bound~\eqref{eq:Z-} $(r-s)$ times, and after that using equality~\eqref{eq:Z+} $s$ times, while walking \textbf{down} the recursive binary tree of channels.
\end{claim}
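On the BEC toy family the evolution of $Z$ along a branch is exact, so Claim~\ref{cl:Z_evolution} can be checked exhaustively for a small $r$:

```python
from itertools import product

def z_evolve(z, branch):         # exact Z evolution along a branch for the BEC
    for step in branch:
        z = z * z if step == '+' else 2 * z - z * z
    return z

r = 6                            # small r = log(ell), so all 2^r branches can be enumerated
z0 = 2.0 ** (-2 * r)             # Z(W) <= 2^{-2r}, as in the lower-suction regime
for branch in product('-+', repeat=r):
    s = branch.count('+')
    bound = (2 ** (r - s) * z0) ** (2 ** s)
    assert z_evolve(z0, branch) <= bound * (1 + 1e-9)
```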
Then, using Claim~\ref{cl:Z_evolution} along with~\eqref{eq:Z-H} and the fact that $Z(W) \leq \ell^{-2}\leq 2^{-2r}$, we obtain the following for any $B \in \{-, +\}^r$ with ${wt_+(B) = s\geq k}$:
\begin{equation}
\label{eq:lower_eq3}
\begin{aligned}
H\left(W^B\right) \leq Z\left(W^B\right) \leq \left(2^{r-s}\cdot Z(W)\right)^{2^s} &\leq 2^{(r-s)2^s}\cdot Z(W)^{2^s-2}\cdot H(W) \\
&\leq 2^{(r-s)2^s - 2r2^s + 4r}\cdot H(W) \\
&= 2^{-r2^s - s2^s + 4r}\cdot H(W) \\
&\leq 2^{-r2^k - k2^k + 4r}\cdot H(W).
\end{aligned}
\end{equation}
Therefore
\begin{equation}
\sum_{wt_+(B) \geq k}g_{\alpha}\left(H\left(W^{B}\right)\right) \leq \sum_{wt_+(B) \geq k}H\left(W^{B}\right)^{\alpha} \leq 2^r\cdot 2^{\alpha(-r2^k - k2^k + 4r)}\cdot H(W)^{\alpha}.
\end{equation}
Observe now the following chain of inequalities
\begin{align*}
\dfrac{r}{2} + 4r\alpha + 2 \leq r \leq r\cdot2^k\alpha \leq r\cdot2^k\alpha + k\cdot2^k\alpha,
\end{align*}
which holds trivially for $\alpha \leq \dfrac1{12}$, since then $r \geq \dfrac1{\alpha} \geq 12$. Therefore
\begin{equation}
r + \alpha(-r2^k-k2^k+4r) \leq \dfrac{r}2 - 2,
\end{equation}
and thus we obtain
\begin{equation}
\label{eq:lower_end_upper_sum}
\sum_{wt_+(B) \geq k}g_{\alpha}\left(H\left(W^{B}\right)\right) \leq 2^{r/2-2}\cdot H(W)^{\alpha}.
\end{equation}
\medskip\noindent\textbf{Overall bound.} Combining~\eqref{eq:lower_end_lower_sum} and~\eqref{eq:lower_end_upper_sum} we derive
\begin{equation}
\label{eq:lower_overall}
\begin{aligned}
\sum_{B\in\{-,+\}^r} g_{\alpha}\left(H\left(W^B\right)\right)
&\leq \left(2^{\log^2r + r\alpha} + 2^{r/2 - 2}\right)\cdot H(W)^{\alpha}\\
&\leq 2^{r/2}\cdot \dfrac{H(W)^{\alpha}}{2} \\
&\leq \ell^{1/2}g_{\alpha}(H(W)),
\end{aligned}
\end{equation}
where we used $\log^2r + r\alpha \leq \frac{r}{2} - 2$ for large enough $r$, and $\frac12 \leq (1-x)^{\alpha}$ for any $x \leq \frac12$. This proves Lemma~\ref{lem:suction_evolution} for the lower end case $H(W) \leq \ell^{-4}$.
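The lower-end bound can also be checked numerically on the BEC family, for which $H(W)$ evolves exactly under the basic transform; the sketch below assumes the symmetric form $g_{\alpha}(x) = \left(x(1-x)\right)^{\alpha}$, which is consistent with the properties of $g_{\alpha}$ used above but is an assumption of this illustration, not part of the proof:

```python
from itertools import product

def g(x, alpha):                 # assumed symmetric form of g_alpha
    return (x * (1 - x)) ** alpha

def h_evolve(eps, branch):       # for the BEC, H(W) = eps and the evolution is exact
    for step in branch:
        eps = eps * eps if step == '+' else 2 * eps - eps * eps
    return eps

alpha = 1 / 12
r = 12                           # the lemma requires r >= 1/alpha
ell = 2 ** r
h0 = float(ell) ** -4            # H(W) <= ell^{-4}: lower-end suction regime
total = sum(g(h_evolve(h0, b), alpha) for b in product('-+', repeat=r))
assert total <= ell ** 0.5 * g(h0, alpha)
```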
\subsection{Suction at the upper end}
\label{upper_suction}
Now consider the case $H(W) \geq 1 - \ell^{-4}$. The proof is going to be quite similar to the previous case, but now we are going to track the distance of $H(W)$ (and $Z(W)$) from $1$. Specifically, denote
\begin{equation}
\label{eq:upper_I-S}
\begin{aligned}
I(W) &= 1 - H(W),\\
S(W) &= 1 - Z(W),
\end{aligned}
\end{equation}
where $I(W)$ is actually the (symmetric) capacity of the channel, and $S(W)$ is just notation we use in this proof. Notice that $g_{\alpha}(x) = g_{\alpha}(1-x)$, therefore it suffices to prove~\eqref{eq:suction_lem} with capacities of the channels in place of entropies. Also notice that $I(W) \leq \ell^{-4}$ in the current case of suction at the upper end.
Let us now derive the relations between $I(W)$, $S(W)$, as well as evolution of $S(\cdot)$ for $W^+$ and $W^-$, similar to~\eqref{eq:Z-H}, \eqref{eq:Z+}, \eqref{eq:Z-}, and~\eqref{eq:entropy_martingale}. Inequalities in~\eqref{eq:Z-H} imply
\begin{equation}
\begin{aligned}
S(W) = 1 - Z(W) &\leq 1 - H(W) = I(W),\\
I(W) = 1 - H(W) &\leq 1 - Z(W)^2 \leq 2(1-Z(W)) = 2S(W),
\end{aligned}
\end{equation}
so let us combine this to write
\begin{equation}
\label{eq:S-I}
S(W) \leq I(W) \leq 2S(W).
\end{equation}
Next,~\eqref{eq:Z+} and~\eqref{eq:Z-} give
\begin{align}
\label{eq:S+}
S(W^+) &= 1 - Z(W)^2 \leq 2(1-Z(W)) \leq 2S(W),\\
\label{eq:S-}
S(W^-) &\leq 1 - Z(W)\sqrt{2 - Z(W)^2} \leq 2(1 - Z(W))^2 = 2S(W)^2,
\end{align}
where we used $1 - x\sqrt{2-x^2} \leq 2(1-x)^2$ for any $x\in(0,1)$.
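The elementary inequality used in the last step is easy to confirm on a grid:

```python
# check of 1 - x*sqrt(2 - x^2) <= 2*(1 - x)^2 on a grid of x in (0, 1)
for i in range(1, 1000):
    x = i / 1000.0
    assert 1 - x * (2 - x * x) ** 0.5 <= 2 * (1 - x) ** 2 + 1e-12
```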
Finally, it easily follows from~\eqref{eq:lower_eq0} that
\begin{equation}
\label{eq:upper_eq0}
\sum_{B\in\{-,+\}^r}I\left(W^{B}\right) = 2^rI(W),
\end{equation}
and since capacity is nonnegative as well, we also obtain for any $B\in\{-, +\}^r$
\begin{equation}
\label{eq:upper_cap_bound}
I\left(W^B\right) \leq 2^rI(W).
\end{equation}
We now proceed very similarly to the suction at the lower end case in Section~\ref{sec:lower_suction}: denote $k = \left\lceil\log\frac1{\alpha}\right\rceil$, and notice that $\log r \geq k-1$ since $r \geq \frac1{\alpha}$. For $B \in \{-, +\}^r$, define $wt_-(B)$ to be the number of $-$'s in $B$. We will split the summation in~\eqref{eq:suction_lem} (but with capacities of the channels instead of entropies) into two parts: the part with $wt_-(B) < k$, and the part with $wt_-(B) \geq k$.
\medskip\noindent\textbf{First part.}
From~\eqref{eq:upper_cap_bound} we derive, similarly to~\eqref{eq:lower_end_lower_sum},
\begin{equation}
\label{eq:upper_end_lower_sum}
\begin{aligned}
\sum_{wt_-(B) < k } g_{\alpha}\left(I\left(W^{B}\right)\right)
\leq \sum_{j=0}^{k-1} \binom{r}{j}g_{\alpha}\left(2^{r}I(W)\right)
\leq \log r\cdot\binom{r}{\log r}\cdot 2^{r\alpha}I(W)^{\alpha} \leq 2^{\log^2 r + r\alpha}\cdot I(W)^{\alpha}.
\end{aligned}
\end{equation}
\medskip\noindent\textbf{Second part.} Similarly to Claim~\ref{cl:Z_evolution}, one can show via induction using~\eqref{eq:S+} and~\eqref{eq:S-} the following
\begin{claim}
\label{cl:S_evolution}
Let $B \in \{-,+\}^r$ be such that the number of $-$'s in $B$ is equal to $s$. Then
\[ S\left(W^B\right) \leq 2^{2^s-1}\left(2^{r-s}\cdot S(W)\right)^{2^s}. \]
This corresponds to first using bound~\eqref{eq:S+} $(r-s)$ times, and after that using bound~\eqref{eq:S-} $s$ times, while walking \textbf{down} the recursive binary tree of channels.
\end{claim}
Using this claim with~\eqref{eq:S-I} and the fact that $S(W) \leq I(W) \leq \ell^{-4} = 2^{-4r}$, we obtain for any $B \in \{-, +\}^r$ with ${wt_-(B) = s\geq k}$
\begin{equation}
\label{eq:upper_eq3}
\begin{aligned}
I\left(W^B\right) \leq 2S\left(W^B\right) &\leq 2^{2^s}\cdot\left(2^{r-s}\cdot S(W)\right)^{2^s} &&\leq 2^{(r-s+1)2^s}\cdot S(W)^{2^s-1}\cdot I(W) \\
&\leq 2^{(r-s+1)2^s - 4r2^s + 4r}\cdot I(W)
&&= 2^{-2^s(3r+s-1) + 4r}\cdot I(W) \\
&\leq 2^{-2^k(3r+k-1) + 4r}\cdot I(W) &&\leq 2^{-r2^k}\cdot I(W),
\end{aligned}
\end{equation}
where the last inequality uses $4r \leq 2^k(2r + k - 1)$, which holds trivially for $k\geq 1$. Therefore
\begin{equation}
\label{eq:upper_end_upper_sum}
\sum_{wt_-(B) \geq k}g_{\alpha}\left(I\left(W^{B}\right)\right) \leq \sum_{wt_-(B) \geq k}I\left(W^{B}\right)^{\alpha} \leq 2^r\cdot 2^{-\alpha r2^k}\cdot I(W)^{\alpha} \leq I(W)^{\alpha},
\end{equation}
since $\alpha\cdot2^k \geq 1$ by the choice of $k$.
\medskip\noindent\textbf{Overall bound.} The bounds~\eqref{eq:upper_end_lower_sum} and~\eqref{eq:upper_end_upper_sum} give us
\begin{equation}
\label{eq:upper_overall}
\begin{aligned}
\sum_{B\in\{-,+\}^r} g_{\alpha}\left(H\left(W^B\right)\right) = \sum_{B\in\{-,+\}^r} g_{\alpha}\left(I\left(W^B\right)\right)
\leq \left(2^{\log^2r + r\alpha} + 1\right)\cdot I(W)^{\alpha} \leq \ell^{1/2}g_{\alpha}(H(W))
\end{aligned}
\end{equation}
for large enough $r$ when $H(W) \geq 1 - \ell^{-4}$. This completes the proof of Lemma~\ref{lem:suction_evolution}.
\end{proof}
\section{Code construction, encoding and decoding procedures} \label{sect:cons}
Before presenting our code construction and encoding/decoding procedures, we first explain the difference between the code construction and the encoding procedure. The objectives of code construction for polar-type codes are two-fold: first, find the $N\times N$ encoding matrix; second, find the set of noiseless bits under the successive decoder, which will carry the message bits.
On the other hand, by encoding we simply mean the procedure of obtaining the codeword $\bX_{[1:N]}$ by multiplying the information vector $\bU_{[1:N]}$ with the encoding matrix, where we only put information in the noiseless bits in $\bU_{[1:N]}$ and set all the frozen bits to be $0$.
As we will see at the end of this section, while the code construction has complexity polynomial in $N$, the encoding procedure only has complexity $O_{\ell}(N\log N)$.
For polar codes with a fixed invertible kernel $K\in \{0,1\}^{\ell\times\ell}$, the polarization process works as follows: We start with some BMS channel $W$. After applying the polar transform to $W$ using kernel $K$, we obtain $\ell$ bit-channels $\{W_i:i\in[\ell]\}$ as defined in \eqref{Arikan_subchannels}.
Next we apply the polar transform using kernel $K$ to each of these $\ell$ bit-channels, and we write the polar transform of $W_i$ as $\{W_{ij}:j\in[\ell]\}$.
Then we apply the polar transform to each of the $\ell^2$ bit-channels $\{W_{i_1,i_2}:i_1,i_2\in[\ell]\}$
and obtain $\{W_{i_1,i_2,i_3}:i_1,i_2,i_3\in[\ell]\}$, so on and so forth.
After $t$ rounds of polar transforms, we obtain $\ell^t$ bit-channels
$\{W_{i_1,\dots,i_t}:i_1,\dots,i_t\in[\ell]\}$, and one can show that these are the bit-channels seen by the successive decoder when decoding the corresponding polar codes constructed from kernel $K$.
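For the toy case $\ell = 2$ (Ar{\i}kan's kernel) over the BEC, where every bit-channel is again a BEC described by a single erasure probability, this recursion can be sketched in a few lines; this is purely illustrative, since the construction in this paper uses general $\ell\times\ell$ kernels:

```python
def polar_transform(eps):
    """One Arikan transform step for BEC(eps): the ell = 2 bit-channels."""
    return [2 * eps - eps * eps, eps * eps]        # worse channel, better channel

def bit_channels(eps, t):
    """Erasure probabilities of the ell^t bit-channels after t rounds."""
    channels = [eps]
    for _ in range(t):
        channels = [c for w in channels for c in polar_transform(w)]
    return channels

chans = bit_channels(0.5, 10)                      # 2^10 = 1024 bit-channels
assert len(chans) == 1024
# total entropy is conserved at every round (the martingale property)
assert abs(sum(chans) / len(chans) - 0.5) < 1e-9
# most bit-channels polarize towards entropy 0 or 1
polarized = sum(1 for c in chans if c < 0.1 or c > 0.9)
assert polarized > 0.6 * len(chans)
```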
\begin{algorithm}[t]
\caption{Degraded binning algorithm}
\label{algo:bin}
\DontPrintSemicolon
\SetAlgoLined
\KwInput{$W:\{0,1\}\to\mathcal{Y}$, bound $\mathsf{Q}$ on the output alphabet size after binning}
\KwOutput{$\widetilde{W}:\{0,1\}\to\widetilde{\mathcal{Y}}$, where $|\widetilde{\mathcal{Y}}|\le \mathsf{Q}$}
Initialize the new channel $\widetilde{W}$ with output symbols $\tilde{y}_1,\tilde{y}_2,\dots,\tilde{y}_{\mathsf{Q}}$ by setting $\widetilde{W}(\tilde{y}_i|x)=0$ for all $i\in[\mathsf{Q}]$ and $x\in\{0,1\}$
\For{$y\in\mathcal{Y}$}
{$p(0|y)\gets \frac{W(y|0)}{W(y|0)+W(y|1)}$
$i\gets \lceil \mathsf{Q} \cdot p(0|y) \rceil$
\If{$i=0$}
{$i\gets 1$
\tcp*{$i=0$ if and only if $p(0|y)=0$; we merge this single point into the next bin} }
$\widetilde{W}(\tilde{y}_i|0)\gets \widetilde{W}(\tilde{y}_i|0)+W(y|0)$
$\widetilde{W}(\tilde{y}_i|1)\gets \widetilde{W}(\tilde{y}_i|1)+W(y|1)$}
\Return $\widetilde{W}$
\end{algorithm}
For our purpose, we need to use polar codes with mixed kernels, and we need to search for a ``good'' kernel at each step of polarization.
We will also introduce new notation for the bit-channels in order to indicate the usage of different kernels for different bit-channels.
As mentioned in Sections~\ref{sect:encdec} and~\ref{sect:local}, we need to use a binning algorithm (Algorithm~\ref{algo:bin}) to quantize all the bit-channels we obtain in the code construction procedure. As long as we choose the parameter $\mathsf{Q}$ in Algorithm~\ref{algo:bin} to be a large enough polynomial of $N$, the quantized channel can be used as a very good approximation of the original channel. This is made precise by \cite[Proposition 13]{GX15}: For $W$ and $\widetilde{W}$ in Algorithm~\ref{algo:bin}, we have\footnote{Note that the binning algorithm (Algorithm 2) in \cite{GX15} has one minor difference from the binning algorithm (Algorithm~\ref{algo:bin}) in this paper: In \cite{GX15}, the binning algorithm outputs a channel with $\mathsf{Q}+1$ outputs in contrast to $\mathsf{Q}$ outputs in this paper. More precisely, lines 5--7 in Algorithm~\ref{algo:bin} of this paper are not included in the algorithm in \cite{GX15}, but one can easily check that this minor difference does not affect the proof at all.}
\begin{equation} \label{eq:ttt}
H(W) \le H(\widetilde{W})
\le H(W) + \frac{2\log \mathsf{Q}}{\mathsf{Q}}.
\end{equation}
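A direct implementation of Algorithm~\ref{algo:bin} is short. The sketch below represents a BMS channel as a list of pairs $(W(y|0), W(y|1))$, bins a random BSC mixture with $\mathsf{Q} = 100$, and checks the two-sided bound~\eqref{eq:ttt}; the instance is arbitrary and serves only as a sanity check:

```python
import math, random

def h2(x):
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bms_entropy(W):
    """H(W) = H(X|Y) for uniform X, where W is a list of pairs (W(y|0), W(y|1))."""
    return sum(0.5 * (w0 + w1) * h2(w0 / (w0 + w1)) for w0, w1 in W if w0 + w1 > 0)

def degraded_binning(W, Q):
    """Merge outputs whose posteriors p(0|y) fall into the same of Q bins."""
    Wb = [[0.0, 0.0] for _ in range(Q)]
    for w0, w1 in W:
        i = max(math.ceil(Q * w0 / (w0 + w1)), 1)   # bin index in [1, Q]
        Wb[i - 1][0] += w0
        Wb[i - 1][1] += w1
    return Wb

# a random BSC mixture with m subchannels, written out as output pairs
random.seed(2)
m = 30
ws = [random.random() for _ in range(m)]
total = sum(ws)
W = []
for wi in ws:
    qi, pi = wi / total, random.uniform(0.0, 0.5)
    W += [(qi * (1 - pi), qi * pi), (qi * pi, qi * (1 - pi))]

Q = 100
Wt = degraded_binning(W, Q)
HW, HWt = bms_entropy(W), bms_entropy(Wt)
assert HW - 1e-12 <= HWt <= HW + 2 * math.log2(Q) / Q   # the bound in (eq:ttt)
```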
Given a BMS channel $W$, our code construction works as follows:
\begin{enumerate}
\item {\bf Step 0:}
We first use Algorithm~\ref{algo:bin} to quantize/bin the output alphabet of $W$ such that the resulting (degraded) channel has at most $N^3$ outputs, i.e., we set $\mathsf{Q}=N^3$ in Algorithm~\ref{algo:bin}.
Note that the parameter $\mathsf{Q}$ can be chosen as any polynomial of $N$. By changing the value of $\mathsf{Q}$, we obtain a tradeoff between the decoding error probability and the gap to capacity; see Theorem~\ref{thm:main1} at the end of this section.
Here we choose the special case of $\mathsf{Q}=N^3$ to give a concrete example of code construction.
Next we use Algorithm~\ref{algo:kernel_search} in Section~\ref{sect:KC} to find a good kernel\footnote{We will prove in Proposition~\ref{prop:approx_accumulation} that the error parameter $\Delta$ in Algorithm~\ref{algo:kernel_search} can be chosen as $\Delta=\frac{6\ell \log N}{N^2}$ when we set $\mathsf{Q}=N^3$.} for the quantized channel and denote it as $K_1^{(0)}$.
Recall from Section~\ref{sect:mix} that a kernel is good if all but a $\tilde{O}(\ell^{-1/2})$ fraction of the bit-channels obtained after polar transform by this kernel have entropy $\ell^{-\Omega(\log\ell)}$-close to either $0$ or $1$.
The superscript $(0)$ in $K_1^{(0)}$ indicates that this is the kernel used in Step 0 of polarization. In this case, we use $\{W_i(B,K_1^{(0)}):i\in[\ell]\}$ to denote the $\ell$ bit-channels resulting from the polar transform of the quantized version of $W$ using kernel $K_1^{(0)}$.
Here $B$ stands for the binning operation, and the arguments in the brackets are the operations to obtain the bit-channel $W_i(B,K_1^{(0)})$ from $W$: first bin the outputs of $W$ and then perform the polar transform using kernel $K_1^{(0)}$.
For each $i\in[\ell]$, we again use Algorithm~\ref{algo:bin} to quantize/bin the output alphabet of $W_i(B,K_1^{(0)})$ such that the resulting (degraded) bit-channel
$W_i(B,K_1^{(0)},B)$ has at most $N^3$ outputs.
\item {\bf Step 1:} For each $i_1\in[\ell]$, we use Algorithm~\ref{algo:kernel_search} to find a good kernel for the quantized bit-channel $W_{i_1}(B,K_1^{(0)},B)$ and denote it as $K_{i_1}^{(1)}$.
The $\ell$ bit-channels resulting from the polar transform of $W_{i_1}(B,K_1^{(0)},B)$ using kernel $K_{i_1}^{(1)}$ are denoted as $\{W_{i_1,i_2}(B,K_1^{(0)},B,K_{i_1}^{(1)}):i_2\in[\ell]\}$. In this step, we will obtain $\ell^2$ bit-channels $\{W_{i_1,i_2}(B,K_1^{(0)},B,K_{i_1}^{(1)}):i_1,i_2\in[\ell]\}$. For each of them, we use Algorithm~\ref{algo:bin} to quantize/bin its output alphabet such that the resulting (degraded) bit-channels
$\{W_{i_1,i_2}(B,K_1^{(0)},B,K_{i_1}^{(1)},B):i_1,i_2\in[\ell]\}$ each have at most $N^3$ outputs. See Fig.~\ref{fig:tree} for an illustration of this procedure for the special case of $\ell=3$.
\item We repeat the polar transforms and binning operations at each step of the code construction. More precisely, at {\bf Step $j$} we have $\ell^j$ bit-channels
$$
\{W_{i_1,i_2,\dots,i_j}(B,K_1^{(0)},B,K_{i_1}^{(1)},B,\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)},B):i_1,i_2,\dots,i_j\in[\ell]\}.
$$
This notation is a bit messy, so we introduce some simplified notation for the bit-channels obtained with and without binning operations: We still use
$$
W_{i_1,i_2,\dots,i_j}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)})
$$
to denote the bit-channel obtained without the binning operations at all, and we use
$$
W_{i_1,i_2,\dots,i_j}^{\bin}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)})
$$
to denote the bit-channel obtained with binning operations performed at every step from Step $0$ to Step $j-1$, i.e.,
\begin{align*}
W_{i_1,i_2,\dots,i_j}^{\bin}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)})
:= W_{i_1,i_2,\dots,i_j}(B,K_1^{(0)},B,K_{i_1}^{(1)},B,\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)},B).
\end{align*}
Moreover, we use $W_{i_1,i_2,\dots,i_j}^{\bin *}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)})$ to denote the bit-channel obtained with binning operations performed at every step except for the last step, i.e.,
\begin{align*}
W_{i_1,i_2,\dots,i_j}^{\bin*}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)})
:= W_{i_1,i_2,\dots,i_j}(B,K_1^{(0)},B,K_{i_1}^{(1)},B,\dots,B,K_{i_1,\dots,i_{j-1}}^{(j-1)}).
\end{align*}
Next we use Algorithm~\ref{algo:kernel_search} to find a good kernel for each of them and denote the kernel as $K_{i_1,\dots,i_j}^{(j)}$. After applying polar transforms using these kernels, we obtain $\ell^{j+1}$ bit-channels
$$
\{W_{i_1,\dots,i_{j+1}}^{\bin*}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_j}^{(j)}):i_1,\dots,i_{j+1}\in[\ell]\}.
$$
Then we quantize/bin the output alphabets of these bit-channels using Algorithm~\ref{algo:bin} and obtain the following $\ell^{j+1}$ quantized bit-channels
$$
\{W_{i_1,\dots,i_{j+1}}^{\bin}(K_1^{(0)},K_{i_1}^{(1)},
\dots,K_{i_1,\dots,i_j}^{(j)}):i_1,\dots,i_{j+1}\in[\ell]\}.
$$
\item After {\bf step $t-1$}, we obtain $N=\ell^t$ quantized bit-channels
$$
\{W_{i_1,\dots,i_t}^{\bin}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{t-1}}^{(t-1)}):i_1,i_2,\dots,i_t\in[\ell]\},
$$
and we have also obtained all the kernels in each step of polarization. More precisely, we have $\ell^i$ kernels in step $i$, so from step $0$ to step $t-1$, we have $1+\ell+\dots+\ell^{t-1}=\frac{N-1}{\ell-1}$ kernels in total.
\item Find the set of good (noiseless) indices. More precisely, we use the shorthand notation\footnote{We omit the reference to the kernels in the notation $H_{i_1,\dots,i_t}(W)$ and $H_{i_1,\dots,i_t}^{\bin}(W)$.}
\begin{equation} \label{eq:defHi}
\begin{aligned}
H_{i_1,\dots,i_t}(W) &:=
H(W_{i_1,\dots,i_t}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{t-1}}^{(t-1)})) \\
H_{i_1,\dots,i_t}^{\bin}(W) &:=
H(W_{i_1,\dots,i_t}^{\bin}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{t-1}}^{(t-1)}))
\end{aligned}
\end{equation}
and define the set of good indices as
\begin{equation} \label{eq:Sgood}
\mathcal{S}_{\good}:=\left\{(i_1,i_2,\dots,i_t)\in[\ell]^t: H_{i_1,\dots,i_t}^{\bin}(W) \le \frac{7\ell \log N}{N^2} \right\}.
\end{equation}
\item Finally, we need to construct the encoding matrix from these $\frac{N-1}{\ell-1}$ kernels. The kernels we obtained in step $j$ are
$$
\{K_{i_1,\dots,i_j}^{(j)}:i_1,\dots,i_j\in[\ell]\}.
$$
For an integer $i\in[\ell^j]$, we write the $j$-digit $\ell$-ary expansion of $i-1$ as $(\tilde{i}_1,\tilde{i}_2,\dots,\tilde{i}_j)$, where $\tilde{i}_j$ is the least significant digit and $\tilde{i}_1$ is the most significant digit, and each digit takes a value in $\{0,1,\dots,\ell-1\}$. Let $(i_1,i_2,\dots,i_j):=(\tilde{i}_1+1,\tilde{i}_2+1,\dots,\tilde{i}_j+1)$, and define the mapping $\tau_j:[\ell^j]\to[\ell]^j$ as
\begin{equation} \label{eq:deftau}
\tau_j(i):=(i_1,i_2,\dots,i_j)
\text{~~for~} i\in[\ell^j] .
\end{equation}
This is a one-to-one mapping between $[\ell^j]$ and $[\ell]^j$, and we use the shorthand notation $K_i^{(j)}$ to denote $K_{\tau_j(i)}^{(j)}$ for $i\in[\ell^j]$.
For each $j\in\{0,1,\dots,t-1\}$, we define the block diagonal matrices $\overline{D}^{(j)}$ with size $\ell^{j+1}\times \ell^{j+1}$ and $D^{(j)}$ with size $N\times N$ as
\begin{equation} \label{eq:defD}
\overline{D}^{(j)}:=
\Diag(K_1^{(j)},K_2^{(j)},\dots,K_{\ell^j}^{(j)}) , \quad\quad\quad
D^{(j)}:=\underbrace{\Diag(\overline{D}^{(j)},\overline{D}^{(j)},\dots,\overline{D}^{(j)})}_{
\text{number of }\overline{D}^{(j)} \text{ is } \ell^{t-j-1}} .
\end{equation}
For $i\in[\ell^t]$, we write $\tau_t(i)=(i_1,\dots,i_t)$.
For $j\in[t-1]$, we
define the permutation $\pi^{(j)}$ on the set $[\ell^t]$ as
\begin{equation}\label{eq:defpi}
\pi^{(j)}(i):=\tau_t^{-1}(i_1,\dots,i_{t-j-1},i_t,i_{t-j},i_{t-j+1},\dots,i_{t-1})
\quad \forall i\in[\ell^t].
\end{equation}
By this definition, $\pi^{(j)}$ keeps the first $t-j-1$ digits of $\tau_t(i)$ unchanged and performs a right cyclic shift on the last $j+1$ digits, moving the last digit to the front of this group. Here are some concrete examples:
\begin{align*}
\pi^{(1)}(i) & =\tau_t^{-1}(i_1,\dots,i_{t-2},i_t,i_{t-1}), \\
\pi^{(2)}(i) & =\tau_t^{-1}(i_1,\dots,i_{t-3},i_t,i_{t-2},i_{t-1}), \\
\pi^{(3)}(i) & =\tau_t^{-1}(i_1,\dots,i_{t-4},i_t,i_{t-3},i_{t-2},i_{t-1}) , \\
\pi^{(t-1)}(i) & =\tau_t^{-1}(i_t,i_1,i_2,\dots,i_{t-1}) .
\end{align*}
For each $j\in[t-1]$, let $Q^{(j)}$ be the $\ell^t \times \ell^t$ permutation matrix corresponding to the permutation $\pi^{(j)}$, i.e., $Q^{(j)}$ is the permutation matrix such that
\begin{equation}\label{eq:defQ}
(U_1,U_2,\dots,U_{\ell^t})Q^{(j)}
=(U_{\pi^{(j)}(1)},U_{\pi^{(j)}(2)},\dots,U_{\pi^{(j)}(\ell^t)}) .
\end{equation}
Finally, for each $j\in[t]$, we define the $N\times N$ matrix
\begin{equation} \label{eq:defMj}
M^{(j)}:=D^{(j-1)}Q^{(j-1)}D^{(j-2)}Q^{(j-2)}\dots D^{(1)}Q^{(1)} D^{(0)} .
\end{equation}
Therefore, $M^{(j)},j\in[t]$ satisfy the following recursive relation:
$$
M^{(1)}=D^{(0)}, \quad\quad
M^{(j+1)}=D^{(j)}Q^{(j)}M^{(j)} .
$$
Our encoding matrix for code length $N=\ell^t$ is the submatrix of $M^{(t)}$ consisting of all the row vectors with indices $i$ such that $\tau_t(i)\in\mathcal{S}_{\good}$, where $\mathcal{S}_{\good}$ is defined in \eqref{eq:Sgood};
see the next paragraph for a detailed description of the encoding procedure.
\end{enumerate}
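To make the index bookkeeping above concrete, here is a minimal Python sketch (an illustration only, not part of the construction) of the maps $\tau_j$ from \eqref{eq:deftau} and $\pi^{(j)}$ from \eqref{eq:defpi}, with $1$-based indices as in the text:

```python
def tau(i, ell, j):
    """tau_j: [ell^j] -> [ell]^j.  Write i-1 in ell-ary with j digits
    (most significant first) and add 1 to every digit."""
    x, digits = i - 1, []
    for _ in range(j):
        digits.append(x % ell + 1)   # least significant digit first
        x //= ell
    return tuple(reversed(digits))

def tau_inv(digits, ell):
    """The inverse map [ell]^j -> [ell^j]."""
    x = 0
    for d in digits:
        x = x * ell + (d - 1)
    return x + 1

def pi(j, i, ell, t):
    """pi^(j): keep the first t-j-1 digits of tau_t(i) and cyclically
    shift the last j+1 digits, moving the last digit to their front."""
    d = list(tau(i, ell, t))
    head, tail = d[:t - j - 1], d[t - j - 1:]
    return tau_inv(head + [tail[-1]] + tail[:-1], ell)
```

For $\ell=3$ and $t=2$, $\pi^{(1)}$ swaps the two digits, e.g.\ $\pi^{(1)}(2)=4$; this is exactly the wiring from $\bV^{(1)}_{[1:9]}$ to $\bU^{(1)}_{[1:9]}$ in Fig.~\ref{fig:ill32}, where $U^{(1)}_2=V^{(1)}_4$.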
\begin{figure}
\centering
\begin{tikzpicture}
\node at (7, 9.4) (w0) {$W$};
\node [block, align=center] at (7, 7.6) (q0)
{Bin, then\\find $K_1^{(0)}$};
\node at (2.5, 5.8) (w1) {$W_1$};
\node at (7, 5.8) (w2) {$W_2$};
\node at (11.5, 5.8) (w3) {$W_3$};
\node [block, align=center] at (2.5, 4) (q1)
{Bin, then\\find $K_1^{(1)}$};
\node [block, align=center] at (7, 4) (q2) {Bin, then\\find $K_2^{(1)}$};
\node [block, align=center] at (11.5, 4) (q3)
{Bin, then\\find $K_3^{(1)}$};
\node at (1, 2) (w11) {$W_{1,1}$};
\node at (2.5, 2) (w12) {$W_{1,2}$};
\node at (4, 2) (w13) {$W_{1,3}$};
\node at (5.5, 2) (w21) {$W_{2,1}$};
\node at (7, 2) (w22) {$W_{2,2}$};
\node at (8.5, 2) (w23) {$W_{2,3}$};
\node at (10, 2) (w31) {$W_{3,1}$};
\node at (11.5, 2) (w32) {$W_{3,2}$};
\node at (13, 2) (w33) {$W_{3,3}$};
\node at (1, 1.5) {$\vdots$};
\node at (2.5, 1.5) {$\vdots$};
\node at (4, 1.5) {$\vdots$};
\node at (5.5, 1.5) {$\vdots$};
\node at (7, 1.5) {$\vdots$};
\node at (8.5, 1.5) {$\vdots$};
\node at (10, 1.5) {$\vdots$};
\node at (11.5, 1.5) {$\vdots$};
\node at (13, 1.5) {$\vdots$};
\draw[->, thick] (w0)--(q0);
\draw[->, thick] (q0)--(w1);
\draw[->, thick] (q0)--(w2);
\draw[->, thick] (q0)--(w3);
\draw[->, thick] (w1)--(q1);
\draw[->, thick] (w2)--(q2);
\draw[->, thick] (w3)--(q3);
\draw[->, thick] (q1)--(w11);
\draw[->, thick] (q1)--(w12);
\draw[->, thick] (q1)--(w13);
\draw[->, thick] (q2)--(w21);
\draw[->, thick] (q2)--(w22);
\draw[->, thick] (q2)--(w23);
\draw[->, thick] (q3)--(w31);
\draw[->, thick] (q3)--(w32);
\draw[->, thick] (q3)--(w33);
\end{tikzpicture}
\caption{Illustration of code construction for the special case of $\ell=3$.}
\label{fig:tree}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\node at (15, 9) (y9) {$Y_1$};
\node at (15, 8) (y8) {$Y_2$};
\node at (15, 7) (y7) {$Y_3$};
\node at (15, 6) (y6) {$Y_4$};
\node at (15, 5) (y5) {$Y_5$};
\node at (15, 4) (y4) {$Y_6$};
\node at (15, 3) (y3) {$Y_7$};
\node at (15, 2) (y2) {$Y_8$};
\node at (15, 1) (y1) {$Y_9$};
\node [block] at (13.5, 9) (w9) {$W$};
\node [block] at (13.5, 8) (w8) {$W$};
\node [block] at (13.5, 7) (w7) {$W$};
\node [block] at (13.5, 6) (w6) {$W$};
\node [block] at (13.5, 5) (w5) {$W$};
\node [block] at (13.5, 4) (w4) {$W$};
\node [block] at (13.5, 3) (w3) {$W$};
\node [block] at (13.5, 2) (w2) {$W$};
\node [block] at (13.5, 1) (w1) {$W$};
\node at (12, 9) (x9) {$X_1$};
\node at (12, 8) (x8) {$X_2$};
\node at (12, 7) (x7) {$X_3$};
\node at (12, 6) (x6) {$X_4$};
\node at (12, 5) (x5) {$X_5$};
\node at (12, 4) (x4) {$X_6$};
\node at (12, 3) (x3) {$X_7$};
\node at (12, 2) (x2) {$X_8$};
\node at (12, 1) (x1) {$X_9$};
\node [sblock] at (9.5, 8) (K3) {$K_1^{(0)}$};
\node [sblock] at (9.5, 5) (K2) {$K_1^{(0)}$};
\node [sblock] at (9.5, 2) (K1) {$K_1^{(0)}$};
\node at (7, 9) (u9) {$U_1^{(1)}$};
\node at (7, 8) (u8) {$U_2^{(1)}$};
\node at (7, 7) (u7) {$U_3^{(1)}$};
\node at (7, 6) (u6) {$U_4^{(1)}$};
\node at (7, 5) (u5) {$U_5^{(1)}$};
\node at (7, 4) (u4) {$U_6^{(1)}$};
\node at (7, 3) (u3) {$U_7^{(1)}$};
\node at (7, 2) (u2) {$U_8^{(1)}$};
\node at (7, 1) (u1) {$U_9^{(1)}$};
\node at (5, 9) (v9) {$V_1^{(1)}$};
\node at (5, 8) (v8) {$V_2^{(1)}$};
\node at (5, 7) (v7) {$V_3^{(1)}$};
\node at (5, 6) (v6) {$V_4^{(1)}$};
\node at (5, 5) (v5) {$V_5^{(1)}$};
\node at (5, 4) (v4) {$V_6^{(1)}$};
\node at (5, 3) (v3) {$V_7^{(1)}$};
\node at (5, 2) (v2) {$V_8^{(1)}$};
\node at (5, 1) (v1) {$V_9^{(1)}$};
\node [sblock] at (2.5, 8) (KK3) {$K_1^{(1)}$};
\node [sblock] at (2.5, 5) (KK2) {$K_2^{(1)}$};
\node [sblock] at (2.5, 2) (KK1) {$K_3^{(1)}$};
\node at (0, 9) (uu9) {$U_1$};
\node at (0, 8) (uu8) {$U_2$};
\node at (0, 7) (uu7) {$U_3$};
\node at (0, 6) (uu6) {$U_4$};
\node at (0, 5) (uu5) {$U_5$};
\node at (0, 4) (uu4) {$U_6$};
\node at (0, 3) (uu3) {$U_7$};
\node at (0, 2) (uu2) {$U_8$};
\node at (0, 1) (uu1) {$U_9$};
\draw[->, thick] (uu9)--(1.2,9);
\draw[->, thick] (uu8)--(1.2,8);
\draw[->, thick] (uu7)--(1.2,7);
\draw[->, thick] (uu6)--(1.2,6);
\draw[->, thick] (uu5)--(1.2,5);
\draw[->, thick] (uu4)--(1.2,4);
\draw[->, thick] (uu3)--(1.2,3);
\draw[->, thick] (uu2)--(1.2,2);
\draw[->, thick] (uu1)--(1.2,1);
\draw[->, thick] (3.8,9)--(v9);
\draw[->, thick] (3.8,8)--(v8);
\draw[->, thick] (3.8,7)--(v7);
\draw[->, thick] (3.8,6)--(v6);
\draw[->, thick] (3.8,5)--(v5);
\draw[->, thick] (3.8,4)--(v4);
\draw[->, thick] (3.8,3)--(v3);
\draw[->, thick] (3.8,2)--(v2);
\draw[->, thick] (3.8,1)--(v1);
\draw[->, thick] (v9.east)--(u9.west);
\draw[->, thick] (v8.east)--(u6.west);
\draw[->, thick] (v7.east)--(u3.west);
\draw[->, thick] (v6.east)--(u8.west);
\draw[->, thick] (v5.east)--(u5.west);
\draw[->, thick] (v4.east)--(u2.west);
\draw[->, thick] (v3.east)--(u7.west);
\draw[->, thick] (v2.east)--(u4.west);
\draw[->, thick] (v1.east)--(u1.west);
\draw[->, thick] (u9)--(8.2,9);
\draw[->, thick] (u8)--(8.2,8);
\draw[->, thick] (u7)--(8.2,7);
\draw[->, thick] (u6)--(8.2,6);
\draw[->, thick] (u5)--(8.2,5);
\draw[->, thick] (u4)--(8.2,4);
\draw[->, thick] (u3)--(8.2,3);
\draw[->, thick] (u2)--(8.2,2);
\draw[->, thick] (u1)--(8.2,1);
\draw[->, thick] (10.8,9)--(x9);
\draw[->, thick] (10.8,8)--(x8);
\draw[->, thick] (10.8,7)--(x7);
\draw[->, thick] (10.8,6)--(x6);
\draw[->, thick] (10.8,5)--(x5);
\draw[->, thick] (10.8,4)--(x4);
\draw[->, thick] (10.8,3)--(x3);
\draw[->, thick] (10.8,2)--(x2);
\draw[->, thick] (10.8,1)--(x1);
\draw[->, thick] (x9)--(w9);
\draw[->, thick] (x8)--(w8);
\draw[->, thick] (x7)--(w7);
\draw[->, thick] (x6)--(w6);
\draw[->, thick] (x5)--(w5);
\draw[->, thick] (x4)--(w4);
\draw[->, thick] (x3)--(w3);
\draw[->, thick] (x2)--(w2);
\draw[->, thick] (x1)--(w1);
\draw[->, thick] (w9)--(y9);
\draw[->, thick] (w8)--(y8);
\draw[->, thick] (w7)--(y7);
\draw[->, thick] (w6)--(y6);
\draw[->, thick] (w5)--(y5);
\draw[->, thick] (w4)--(y4);
\draw[->, thick] (w3)--(y3);
\draw[->, thick] (w2)--(y2);
\draw[->, thick] (w1)--(y1);
\end{tikzpicture}
\caption{Illustration of the encoding process $\bX_{[1:N]}=\bU_{[1:N]} M^{(t)}$ for the special case of $\ell=3$ and $t=2$. Here $\bX_{[1:N]}$ and $\bU_{[1:N]}$ are row vectors. All four kernels in this figure, $K_1^{(0)},K_1^{(1)},K_2^{(1)},K_3^{(1)}$, have size $3\times 3$, and the outputs of each kernel are obtained by multiplying the inputs with the kernel, e.g., $\bV_{[1:3]}^{(1)}=\bU_{[1:3]} K_1^{(1)}$.}
\label{fig:ill32}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\node at (5, 9) (v9) {$V_1^{(1)}$};
\node at (5, 8) (v8) {$V_2^{(1)}$};
\node at (5, 7) (v7) {$V_3^{(1)}$};
\node [sblock] at (2.5, 8) (KK3) {$K_1^{(1)}$};
\node at (0, 9) (uu9) {$U_1$};
\node at (0, 8) (uu8) {$U_2$};
\node at (0, 7) (uu7) {$U_3$};
\node [block] at (7.5, 9) (w9) {$W_1(K_1^{(0)})$};
\node [block] at (7.5, 8) (w8) {$W_1(K_1^{(0)})$};
\node [block] at (7.5, 7) (w7) {$W_1(K_1^{(0)})$};
\node at (10.5, 9) (y9) {$(Y_1,Y_2,Y_3)$};
\node at (10.5, 8) (y8) {$(Y_4,Y_5,Y_6)$};
\node at (10.5, 7) (y7) {$(Y_7,Y_8,Y_9)$};
\draw[->, thick] (uu9)--(1.2,9);
\draw[->, thick] (uu8)--(1.2,8);
\draw[->, thick] (uu7)--(1.2,7);
\draw[->, thick] (3.8,9)--(v9);
\draw[->, thick] (3.8,8)--(v8);
\draw[->, thick] (3.8,7)--(v7);
\draw[->, thick] (v9)--(w9);
\draw[->, thick] (v8)--(w8);
\draw[->, thick] (v7)--(w7);
\draw[->, thick] (w9)--(y9);
\draw[->, thick] (w8)--(y8);
\draw[->, thick] (w7)--(y7);
\end{tikzpicture}
\caption{The (stochastic) mapping from $\bU_{[1:3]}$ to $\bY_{[1:9]}$.}
\label{fig:top3}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\node at (5, 9) (v9) {$V_4^{(1)}$};
\node at (5, 8) (v8) {$V_5^{(1)}$};
\node at (5, 7) (v7) {$V_6^{(1)}$};
\node [sblock] at (2.5, 8) (KK3) {$K_2^{(1)}$};
\node at (0, 9) (uu9) {$U_4$};
\node at (0, 8) (uu8) {$U_5$};
\node at (0, 7) (uu7) {$U_6$};
\node [block] at (7.5, 9) (w9) {$W_2(K_1^{(0)})$};
\node [block] at (7.5, 8) (w8) {$W_2(K_1^{(0)})$};
\node [block] at (7.5, 7) (w7) {$W_2(K_1^{(0)})$};
\node at (10.5, 9) (y9) {$(V_1^{(1)},Y_1,Y_2,Y_3)$};
\node at (10.5, 8) (y8) {$(V_2^{(1)},Y_4,Y_5,Y_6)$};
\node at (10.5, 7) (y7) {$(V_3^{(1)},Y_7,Y_8,Y_9)$};
\draw[->, thick] (uu9)--(1.2,9);
\draw[->, thick] (uu8)--(1.2,8);
\draw[->, thick] (uu7)--(1.2,7);
\draw[->, thick] (3.8,9)--(v9);
\draw[->, thick] (3.8,8)--(v8);
\draw[->, thick] (3.8,7)--(v7);
\draw[->, thick] (v9)--(w9);
\draw[->, thick] (v8)--(w8);
\draw[->, thick] (v7)--(w7);
\draw[->, thick] (w9)--(y9);
\draw[->, thick] (w8)--(y8);
\draw[->, thick] (w7)--(y7);
\end{tikzpicture}
\caption{The (stochastic) mapping from $\bU_{[4:6]}$ to $(\bV_{[1:3]}^{(1)},\bY_{[1:9]})$.}
\label{fig:mid3}
\end{figure}
\begin{figure}[t!]
\centering
\begin{tikzpicture}
\node at (5, 9) (v9) {$V_7^{(1)}$};
\node at (5, 8) (v8) {$V_8^{(1)}$};
\node at (5, 7) (v7) {$V_9^{(1)}$};
\node [sblock] at (2.5, 8) (KK3) {$K_3^{(1)}$};
\node at (0, 9) (uu9) {$U_7$};
\node at (0, 8) (uu8) {$U_8$};
\node at (0, 7) (uu7) {$U_9$};
\node [block] at (7.5, 9) (w9) {$W_3(K_1^{(0)})$};
\node [block] at (7.5, 8) (w8) {$W_3(K_1^{(0)})$};
\node [block] at (7.5, 7) (w7) {$W_3(K_1^{(0)})$};
\node at (11.5, 9) (y9) {$(V_1^{(1)},V_4^{(1)},Y_1,Y_2,Y_3)$};
\node at (11.5, 8) (y8) {$(V_2^{(1)},V_5^{(1)},Y_4,Y_5,Y_6)$};
\node at (11.5, 7) (y7) {$(V_3^{(1)},V_6^{(1)},Y_7,Y_8,Y_9)$};
\draw[->, thick] (uu9)--(1.2,9);
\draw[->, thick] (uu8)--(1.2,8);
\draw[->, thick] (uu7)--(1.2,7);
\draw[->, thick] (3.8,9)--(v9);
\draw[->, thick] (3.8,8)--(v8);
\draw[->, thick] (3.8,7)--(v7);
\draw[->, thick] (v9)--(w9);
\draw[->, thick] (v8)--(w8);
\draw[->, thick] (v7)--(w7);
\draw[->, thick] (w9)--(y9);
\draw[->, thick] (w8)--(y8);
\draw[->, thick] (w7)--(y7);
\end{tikzpicture}
\caption{The (stochastic) mapping from $\bU_{[7:9]}$ to $(\bV_{[1:6]}^{(1)},\bY_{[1:9]})$.}
\label{fig:bot3}
\end{figure}
Once we obtain the matrix $M^{(t)}$ and the set $\mathcal{S}_{\good}$ in the code construction, the encoding procedure is standard; it is essentially the same as the original polar codes \cite{arikan-polar}.
Let $\bU_{[1:N]}$ be a random vector consisting of $N$ i.i.d. Bernoulli-$1/2$ random variables, and let $\bX_{[1:N]}=\bU_{[1:N]} M^{(t)}$.
Recall that we use $\{W_i(M^{(t)}):i\in[\ell^t]\}$ to denote the $\ell^t$ bit-channels resulting from the polar transform of $W$ using kernel $M^{(t)}$.
If we transmit the random vector $\bX_{[1:N]}$ through $N$ independent copies of $W$ and denote the channel outputs as $\bY_{[1:N]}$, then by definition, the bit-channel mapping from $U_i$ to $(\bU_{[1:i-1]},\bY_{[1:N]})$ is exactly $W_i(M^{(t)})$.
Therefore, if we use a successive decoder to decode the input vector $\bU_{[1:N]}$ bit by bit from all the channel outputs $\bY_{[1:N]}$ and all the previous input bits $\bU_{[1:i-1]}$, then $W_i(M^{(t)})$ is the channel seen by the successive decoder when it decodes $U_i$.
Clearly, $H(W_i(M^{(t)}))\approx 0$ means that the successive decoder can decode $U_i$ correctly with high probability.
For every $i\in[\ell^t]$, we write $\tau_t(i)=(i_1,i_2,\dots,i_t)$.
In Proposition~\ref{prop:eqv} below, we will show that $H(W_i(M^{(t)}))=H_{i_1,\dots,i_t}(W)$.
Then in Proposition~\ref{prop:approx_accumulation}, we further show that $H_{i_1,\dots,i_t}(W)\approx H_{i_1,\dots,i_t}^{\bin}(W)$. Therefore,
$H(W_i(M^{(t)}))\approx H_{i_1,\dots,i_t}^{\bin}(W)$.
By definition \eqref{eq:Sgood}, the set $\mathcal{S}_{\good}$ contains all the indices $(i_1,\dots,i_t)$ for which $H_{i_1,\dots,i_t}^{\bin}(W)\approx 0$, so for all $i$ such that $\tau_t(i)\in\mathcal{S}_{\good}$, we also have $H(W_i(M^{(t)}))\approx 0$, meaning that the successive decoder can decode all the bits $\{U_i:\tau_t(i)\in\mathcal{S}_{\good}\}$ correctly with high probability.
In the encoding procedure, we put all the information in the set of good bits $\{U_i:\tau_t(i)\in\mathcal{S}_{\good}\}$, and we set all the other bits to some pre-determined value, e.g., set all of them to $0$. It is clear that the generator matrix of this code is the submatrix of $M^{(t)}$ consisting of all the row vectors with indices $i$ such that $\tau_t(i)\in\mathcal{S}_{\good}$.
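As an illustration of the whole pipeline, the following sketch assembles $M^{(2)}$ for $\ell=3$, $t=2$ over GF(2) and encodes a message. The kernel \texttt{K} and the good-bit positions below are hypothetical placeholders: in the actual construction the kernels come from Algorithm~\ref{algo:kernel_search} and the good set from \eqref{eq:Sgood}.

```python
import numpy as np

ell, t, N = 3, 2, 9
K = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1]])   # hypothetical invertible kernel
kernels_step1 = [K, K, K]                         # placeholders for K_1^(1), K_2^(1), K_3^(1)

# D^(0): ell^{t-1} = 3 diagonal copies of the single step-0 kernel.
D0 = np.kron(np.eye(3, dtype=int), K)
# D^(1): Diag(K_1^(1), K_2^(1), K_3^(1)), repeated ell^{t-2} = 1 time.
D1 = np.zeros((N, N), dtype=int)
for b, Kb in enumerate(kernels_step1):
    D1[3 * b:3 * b + 3, 3 * b:3 * b + 3] = Kb

def pi1(k):
    """pi^(1) for t=2: swap the two ell-ary digits (1-based indices)."""
    i1, i2 = divmod(k - 1, ell)
    return i2 * ell + i1 + 1

# Q^(1) realizes (U_1,...,U_9) Q^(1) = (U_{pi(1)},...,U_{pi(9)}), as in (eq:defQ).
Q1 = np.zeros((N, N), dtype=int)
for k in range(1, N + 1):
    Q1[pi1(k) - 1, k - 1] = 1

M2 = (D1 @ Q1 @ D0) % 2                           # M^(2) = D^(1) Q^(1) D^(0)

S_good = [0, 3, 5, 6, 7, 8]                       # hypothetical zero-based good positions

def encode(info_bits):
    """Place the information bits on the good positions, freeze the rest to 0."""
    U = np.zeros(N, dtype=int)
    U[S_good] = info_bits
    return (U @ M2) % 2
```

Since the frozen bits are set to $0$, the map from information bits to codewords is linear, with generator matrix given by the rows of $M^{(2)}$ indexed by the good positions.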
\subsection{Analysis of bit-channels} \label{sect:bit}
We say that two channels $W_1:\{0,1\}\to\mathcal{Y}_1$ and $W_2:\{0,1\}\to\mathcal{Y}_2$ are equivalent if there is a one-to-one mapping $\pi$ between $\mathcal{Y}_1$ and $\mathcal{Y}_2$ such that $W_1(y_1|x)=W_2(\pi(y_1)|x)$ for all $y_1\in\mathcal{Y}_1$ and $x\in\{0,1\}$. Denote this equivalence relation as $W_1\equiv W_2$.
Then we have the following result.
\begin{prop} \label{prop:eqv}
For every $i\in[\ell^t]$, write $\tau_t(i)=(i_1,i_2,\dots,i_t)$. Then we always have
$$
W_i(M^{(t)}) \equiv
W_{i_1,\dots,i_t}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{t-1}}^{(t-1)}).
$$
\end{prop}
Before formally proving this proposition, we first use the special case of $t=2$ and $\ell=3$ to illustrate the main idea behind the proof.
In this case, we obtained one kernel $K_1^{(0)}$ in step $0$ and three kernels $K_1^{(1)},K_2^{(1)},K_3^{(1)}$ in step $1$.
See Fig.~\ref{fig:ill32} for an illustration of the encoding process $\bX_{[1:9]}=\bU_{[1:9]} M^{(2)}$.
In particular, we can see that
$$
\bV_{[1:9]}^{(1)}
=\bU_{[1:9]} D^{(1)}, \quad\quad
\bU_{[1:9]}^{(1)}
=\bV_{[1:9]}^{(1)} Q^{(1)}, \quad\quad
\bX_{[1:9]}
=\bU_{[1:9]}^{(1)} D^{(0)} .
$$
Therefore, we indeed have $\bX_{[1:9]}=\bU_{[1:9]} D^{(1)}Q^{(1)}D^{(0)}=\bU_{[1:9]} M^{(2)}$.
Assume that $\bU_{[1:9]}$ consists of $9$ i.i.d. Bernoulli-$1/2$ random variables. Since $D^{(1)},Q^{(1)},D^{(0)}$ are all invertible matrices, the random vectors $\bV_{[1:9]}^{(1)},\bU_{[1:9]}^{(1)}$ and $\bX_{[1:9]}$ also consist of i.i.d. Bernoulli-$1/2$ random variables.
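This invertibility can be checked mechanically; here is a sketch over GF(2), with a hypothetical kernel standing in for the ones found by the kernel search. If every kernel is invertible, then so are $D^{(1)}$, $Q^{(1)}$ and $D^{(0)}$, and the uniformity of the input vector is preserved at every stage.

```python
import numpy as np

def gf2_invertible(A):
    """Gaussian elimination over GF(2): True iff the square matrix A has full rank."""
    A = A.copy() % 2
    n = A.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col]), None)
        if pivot is None:
            return False
        A[[col, pivot]] = A[[pivot, col]]          # swap the pivot row into place
        for r in range(n):
            if r != col and A[r, col]:
                A[r] = (A[r] + A[col]) % 2         # eliminate the column elsewhere
    return True

K = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1]])    # hypothetical invertible kernel
D = np.kron(np.eye(3, dtype=int), K)               # block-diagonal matrix of kernels
perm = [0, 3, 6, 1, 4, 7, 2, 5, 8]                 # the digit-swap permutation pi^(1)
Q = np.eye(9, dtype=int)[perm]                     # the corresponding permutation matrix
```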
In order to analyze the bit-channels, we view Fig.~\ref{fig:ill32} from the right side to the left side.
First observe that the following three vectors
$$
(U_1^{(1)},U_2^{(1)},U_3^{(1)},Y_1,Y_2,Y_3),\quad\quad
(U_4^{(1)},U_5^{(1)},U_6^{(1)},Y_4,Y_5,Y_6),\quad\quad
(U_7^{(1)},U_8^{(1)},U_9^{(1)},Y_7,Y_8,Y_9)
$$
are independent and identically distributed (i.i.d.).
Given a channel $W_1:\mathcal{X}\to\mathcal{Y}$ and a pair of random variables $(X,Y)$ that take values in $\mathcal{X}$ and $\mathcal{Y}$ respectively, we write
$$
\P(X\to Y)\equiv W_1
$$
if $\P(Y=y|X=x)=W_1(y|x)$ for all $x\in\mathcal{X}$ and $y\in\mathcal{Y}$, where $\P(X\to Y)$ denotes the channel that takes $X$ as input and gives $Y$ as output.
By this definition, we have
$$
\P(U_1^{(1)}\to \bY_{[1:3]}) \equiv
\P(U_4^{(1)}\to \bY_{[4:6]}) \equiv
\P(U_7^{(1)}\to \bY_{[7:9]}) \equiv
W_1(K_1^{(0)}) .
$$
Since $V_1^{(1)}=U_1^{(1)},V_2^{(1)}=U_4^{(1)},V_3^{(1)}=U_7^{(1)}$, we also have
$$
\P(V_1^{(1)}\to \bY_{[1:3]}) \equiv
\P(V_2^{(1)}\to \bY_{[4:6]}) \equiv
\P(V_3^{(1)}\to \bY_{[7:9]}) \equiv
W_1(K_1^{(0)}) .
$$
Moreover, the following three vectors
$$
(V_1^{(1)},\bY_{[1:3]}), \quad\quad
(V_2^{(1)},\bY_{[4:6]}), \quad\quad
(V_3^{(1)},\bY_{[7:9]})
$$
are independent. Therefore, the (stochastic) mapping from $\bU_{[1:3]}$ to $\bY_{[1:9]}$ in Fig.~\ref{fig:ill32} can be represented in a more compact form in Fig.~\ref{fig:top3}.
From Fig.~\ref{fig:top3}, we can see that
\begin{align*}
& W_1(M^{(2)}) \equiv
\P(U_1 \to \bY_{[1:9]}) \equiv
W_{1,1}(K_1^{(0)},K_1^{(1)}) , \\
& W_2(M^{(2)}) \equiv
\P(U_2 \to (U_1,\bY_{[1:9]})) \equiv
W_{1,2}(K_1^{(0)},K_1^{(1)}) , \\
& W_3(M^{(2)}) \equiv
\P(U_3 \to (U_1,U_2,\bY_{[1:9]})) \equiv
W_{1,3}(K_1^{(0)},K_1^{(1)}) .
\end{align*}
Next we investigate $W_4(M^{(2)}),W_5(M^{(2)}),W_6(M^{(2)})$. Observe that
$$
\P(U_2^{(1)}\to(U_1^{(1)},\bY_{[1:3]} ))
\equiv
\P(U_5^{(1)}\to(U_4^{(1)},\bY_{[4:6]} ))
\equiv
\P(U_8^{(1)}\to(U_7^{(1)},\bY_{[7:9]} )) \equiv
W_2(K_1^{(0)}).
$$
Therefore,
$$
\P(V_4^{(1)}\to(V_1^{(1)},\bY_{[1:3]} ))
\equiv
\P(V_5^{(1)}\to(V_2^{(1)},\bY_{[4:6]} ))
\equiv
\P(V_6^{(1)}\to(V_3^{(1)},\bY_{[7:9]} )) \equiv
W_2(K_1^{(0)}).
$$
Moreover, since
$$
(V_1^{(1)},V_4^{(1)},\bY_{[1:3]} ), \quad\quad
(V_2^{(1)},V_5^{(1)},\bY_{[4:6]} ), \quad\quad
(V_3^{(1)},V_6^{(1)},\bY_{[7:9]} )
$$
are independent,
the (stochastic) mapping from $\bU_{[4:6]}$ to $(\bV_{[1:3]}^{(1)} , \bY_{[1:9]})$ in Fig.~\ref{fig:ill32} can be represented in a more compact form in Fig.~\ref{fig:mid3}.
Notice that there is a bijection between $\bU_{[1:3]}$ and $\bV_{[1:3]}^{(1)}$. Thus we can conclude from Fig.~\ref{fig:mid3} that
\begin{align*}
& W_4(M^{(2)}) \equiv
\P(U_4 \to (\bU_{[1:3]},\bY_{[1:9]})) \equiv
\P(U_4 \to (\bV_{[1:3]}^{(1)},\bY_{[1:9]})) \equiv
W_{2,1}(K_1^{(0)},K_2^{(1)}) , \\
& W_5(M^{(2)}) \equiv
\P(U_5 \to (\bU_{[1:4]},\bY_{[1:9]})) \equiv
\P(U_5 \to (U_4,\bV_{[1:3]}^{(1)},\bY_{[1:9]})) \equiv
W_{2,2}(K_1^{(0)},K_2^{(1)}) , \\
& W_6(M^{(2)}) \equiv
\P(U_6 \to (\bU_{[1:5]},\bY_{[1:9]})) \equiv
\P(U_6 \to (U_4,U_5,\bV_{[1:3]}^{(1)},\bY_{[1:9]})) \equiv
W_{2,3}(K_1^{(0)},K_2^{(1)}) .
\end{align*}
Finally, we can use the same method to show that
\begin{align*}
& \P(V_7^{(1)}\to(V_1^{(1)},V_4^{(1)},\bY_{[1:3]} ))
\equiv
\P(V_8^{(1)}\to(V_2^{(1)},V_5^{(1)},\bY_{[4:6]} )) \\
\equiv
& \P(V_9^{(1)}\to(V_3^{(1)},V_6^{(1)},\bY_{[7:9]} )) \equiv
W_3(K_1^{(0)}).
\end{align*}
Therefore, the (stochastic) mapping from $\bU_{[7:9]}$ to $(\bV_{[1:6]}^{(1)} , \bY_{[1:9]})$ in Fig.~\ref{fig:ill32} can be represented in a more compact form in Fig.~\ref{fig:bot3}.
Notice that there is a bijection between $\bU_{[1:6]}$ and $\bV_{[1:6]}^{(1)}$. Thus we can conclude from Fig.~\ref{fig:bot3} that
\begin{align*}
& W_7(M^{(2)}) \equiv
\P(U_7 \to (\bU_{[1:6]},\bY_{[1:9]})) \equiv
\P(U_7 \to (\bV_{[1:6]}^{(1)},\bY_{[1:9]})) \equiv
W_{3,1}(K_1^{(0)},K_3^{(1)}) , \\
& W_8(M^{(2)}) \equiv
\P(U_8 \to (\bU_{[1:7]},\bY_{[1:9]})) \equiv
\P(U_8 \to (U_7,\bV_{[1:6]}^{(1)},\bY_{[1:9]})) \equiv
W_{3,2}(K_1^{(0)},K_3^{(1)}) , \\
& W_9(M^{(2)}) \equiv
\P(U_9 \to (\bU_{[1:8]},\bY_{[1:9]})) \equiv
\P(U_9 \to (U_7,U_8,\bV_{[1:6]}^{(1)},\bY_{[1:9]})) \equiv
W_{3,3}(K_1^{(0)},K_3^{(1)}) .
\end{align*}
Now we have proved Proposition~\ref{prop:eqv} for the special case of $\ell=3$ and $t=2$. The proof for the general case follows the same idea, and we defer it to Appendix~\ref{app:proofeqv}.
\subsection{Complexity of code construction, encoding and decoding}
\begin{prop} \label{prop:complex}
The code construction has $N^{O_\ell(1)}$ complexity. Both the encoding and successive decoding procedures have $O_{\ell}(N\log N)$ complexity.
\end{prop}
\begin{proof}
The key point in our proof is that we treat $\ell$ as a (possibly very large) constant.
We start with the code construction and we first show that both Algorithm~\ref{algo:kernel_search} and Algorithm~\ref{algo:bin} have $\poly(N)$ time complexity.
In the worst case, we need to check all $2^{\ell^2}$ possible kernels in Algorithm~\ref{algo:kernel_search}, and for each kernel we need to calculate the conditional entropy of the $\ell$ subchannels. Since we always work with quantized channels whose output alphabet size is upper bounded by $N^3$, each subchannel of the quantized channels has no more than $2^{\ell}N^{3\ell}$ outputs. Therefore, the conditional entropy of these subchannels can be calculated in $\poly(N)$ time, so Algorithm~\ref{algo:kernel_search} has $\poly(N)$ complexity. After finding the good kernels, we need to use Algorithm~\ref{algo:bin} to quantize/bin the output alphabet of the subchannels produced by these good kernels. As mentioned above, the original alphabet size of these subchannels is no more than $2^{\ell}N^{3\ell}$. Therefore, Algorithm~\ref{algo:bin} also has $\poly(N)$ complexity.
At Step $i$, we use Algorithm~\ref{algo:kernel_search} $\ell^i$ times to find good kernels, and then we use Algorithm~\ref{algo:bin} $\ell^{i+1}$ times to quantize the bit-channels produced by these kernels, so in total we use Algorithm~\ref{algo:kernel_search} $\frac{N-1}{\ell-1}$ times and Algorithm~\ref{algo:bin} $\frac{\ell(N-1)}{\ell-1}$ times. Finally, finding the set $\mathcal{S}_{\good}$ only requires calculating the conditional entropy of the bit-channels in the last step, so this can also be done in polynomial time. Thus we conclude that the code construction has $\poly(N)$ complexity, although the degree of the polynomial depends on $\ell$.
In the encoding procedure,
we first form the vector $\bU_{[1:N]}$ by putting all the information in the bits $\{U_i:\tau_t(i)\in\mathcal{S}_{\good}\}$ and setting all the other bits $\{U_i:\tau_t(i)\notin\mathcal{S}_{\good}\}$ to $0$. Then
we multiply $\bU_{[1:N]}$ with the encoding matrix $M^{(t)}$ and obtain the codeword $\bX_{[1:N]}=\bU_{[1:N]}M^{(t)}$.
Since the matrix $M^{(t)}$ has size $N\times N$, a naive implementation of the encoding procedure would require $O(N^2)$ operations.
Fortunately, we can use \eqref{eq:defMj} to accelerate the encoding procedure. Namely, we first multiply $\bU_{[1:N]}$ with $D^{(t-1)}$, then multiply the result with $Q^{(t-1)}$, then multiply by $D^{(t-2)}$, so on and so forth. As mentioned above, for $j=0,1,\dots,t-1$, each $D^{(j)}$ is a block diagonal matrix with $N/\ell$ blocks on the diagonal, where each block has size $\ell\times\ell$. Therefore, multiplication with $D^{(j)}$ only requires $N\ell$ operations. By definition, $Q^{(j)}, j\in[t-1]$ are permutation matrices, so multiplication with them only requires $N$ operations. In total, we multiply with $2t-1=2\log_{\ell}N-1$ matrices. Therefore, the encoding procedure can be computed in $O_{\ell}(N\log N)$ time, where $O_{\ell}$ means that the constant in big-$O$ depends on $\ell$.
The decoding algorithm uses exactly the same idea as the algorithm in Ar{\i}kan's original paper \cite[Section~VIII-B]{arikan-polar}. Here we only use the special case of $\ell=3$ and $t=2$ in Fig.~\ref{fig:ill32} to explain how Ar{\i}kan's decoding algorithm works for large (and mixed) kernels, and we omit the proof for general parameters.
We start with the decoding of $U_1,U_2,U_3$ in Fig.~\ref{fig:ill32}. It is clear that decoding $U_1,U_2,U_3$ is equivalent to decoding $U_1^{(1)},U_4^{(1)},U_7^{(1)}$. Then the log-likelihood ratio (LLR) of each of these three bits can be calculated locally from only three output symbols. More precisely, the LLR of $U_1^{(1)}$ can be computed from $\bY_{[1:3]}$, the LLR of $U_4^{(1)}$ can be computed from $\bY_{[4:6]}$, and the LLR of $U_7^{(1)}$ can be computed from $\bY_{[7:9]}$. Therefore, the complexity of calculating each LLR only depends on the value of $\ell$. Since $\ell$ is considered as a constant, the calculation of each LLR also has constant time complexity (although the complexity is exponential in $\ell$).
The next step is to decode $\bU_{[4:6]}$ from $\bY_{[1:9]}$ together with $\bU_{[1:3]}$. This is equivalent to calculating the LLRs of $U_2^{(1)},U_5^{(1)},U_8^{(1)}$ given $\bY_{[1:9]}$ and $U_1^{(1)},U_4^{(1)},U_7^{(1)}$. This again can be done locally: To compute the LLR of $U_2^{(1)}$, we only need the values of $\bY_{[1:3]}$ and $U_1^{(1)}$; to compute the LLR of $U_5^{(1)}$, we only need the values of $\bY_{[4:6]}$ and $U_4^{(1)}$; to compute the LLR of $U_8^{(1)}$, we only need the values of $\bY_{[7:9]}$ and $U_7^{(1)}$. Finally, the decoding of $\bU_{[7:9]}$ from $\bY_{[1:9]}$ and $\bU_{[1:6]}$ can be decomposed into local computations in a similar way.
Using this idea, one can show that for general values of $\ell$ and $t$, the decoding can also be decomposed into $t=\log_{\ell}N$ stages, and in each stage, the decoding can further be decomposed into $N/\ell$ local tasks, each of which has constant time complexity (although the complexity is exponential in $\ell$). Therefore, the decoding complexity at each stage is $O_{\ell}(N)$ and the overall decoding complexity is $O_{\ell}(N\log N)$.
As a final remark, we mention that after calculating the LLRs of all $U_i$'s, we will only use the LLRs of the bits $\{U_i:\tau_t(i)\in\mathcal{S}_{\good}\}$. For these bits, we decode $U_i$ as $0$ if its LLR is larger than $0$ and as $1$ otherwise. Recall that in the encoding procedure, we have set all the other bits $\{U_i:\tau_t(i)\notin\mathcal{S}_{\good}\}$ to $0$, so for these bits we simply decode them as $0$.
\end{proof}
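The $O_\ell(N\log N)$ encoder from the proof can be sketched as follows. Every kernel is taken to be the same hypothetical $3\times 3$ matrix purely to keep the sketch short; the real construction uses the kernels found at each step. The stagewise computation (block-local kernel multiplications interleaved with the digit permutations $\pi^{(j)}$) agrees with the dense product $\bU_{[1:N]}M^{(t)}$:

```python
import numpy as np

ell, t = 3, 3
N = ell ** t
K = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1]])   # hypothetical invertible kernel

def pi(j, k):
    """pi^(j) from (eq:defpi): right cyclic shift of the last j+1 ell-ary digits."""
    d, x = [], k - 1
    for _ in range(t):
        d.append(x % ell)
        x //= ell
    d.reverse()                                    # most significant digit first
    head, tail = d[:t - j - 1], d[t - j - 1:]
    out = 0
    for dig in head + [tail[-1]] + tail[:-1]:
        out = out * ell + dig
    return out + 1

def block_mult(u):
    """Multiply a length-N row vector by block-diagonal D^(j): N*ell bit operations."""
    return (u.reshape(N // ell, ell) @ K % 2).reshape(N)

def fast_encode(u):
    """Apply D^(t-1), Q^(t-1), D^(t-2), ..., Q^(1), D^(0) stage by stage."""
    x = block_mult(u)
    for j in range(t - 1, 0, -1):
        x = np.array([x[pi(j, k) - 1] for k in range(1, N + 1)])   # multiply by Q^(j)
        x = block_mult(x)                                          # multiply by D^(j-1)
    return x

# Dense reference via the recursion M^(1) = D^(0), M^(j+1) = D^(j) Q^(j) M^(j).
D = np.kron(np.eye(N // ell, dtype=int), K)        # with identical kernels, all D^(j) coincide
M = D.copy()
for j in range(1, t):
    Q = np.zeros((N, N), dtype=int)
    for k in range(1, N + 1):
        Q[pi(j, k) - 1, k - 1] = 1
    M = (D @ Q @ M) % 2
```

Each stage costs $O(N\ell)$ bit operations, and there are $2t-1=2\log_\ell N-1$ stages in total, matching the $O_\ell(N\log N)$ bound in Proposition~\ref{prop:complex}.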
\subsection{Code rate and decoding error probability}
In \eqref{eq:defHi}, we have defined the conditional entropy for all the bit-channels obtained in the last step (Step $t-1$). Here we also define the conditional entropy for the bit-channels obtained in the previous steps. More precisely, for every $j\in[t]$ and every $(i_1,i_2,\dots,i_j)\in[\ell]^j$, we use the following shorthand notation:
\begin{align*}
H_{i_1,\dots,i_j}(W) &:=
H(W_{i_1,\dots,i_j}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)})) \\
H_{i_1,\dots,i_j}^{\bin}(W) &:=
H(W_{i_1,\dots,i_j}^{\bin}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)})) \\
H_{i_1,\dots,i_j}^{\bin*}(W) &:=
H(W_{i_1,\dots,i_j}^{\bin*}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{j-1}}^{(j-1)})) .
\end{align*}
According to \eqref{eq:ttt}, we have
\begin{equation} \label{eq:gpH}
H_{i_1,\dots,i_j}^{\bin*}(W) \le H_{i_1,\dots,i_j}^{\bin}(W)
\le H_{i_1,\dots,i_j}^{\bin*}(W) + \frac{6\log N}{N^3}
\end{equation}
for every $j\in[t]$ and every $(i_1,i_2,\dots,i_j)\in[\ell]^j$.
\begin{prop}
\label{prop:approx_accumulation}
For every $j\in[t]$ and $(i_1,i_2,\dots,i_j)\in[\ell]^j$, the conditional entropies $H_{i_1,\dots,i_j}(W)$ and $H_{i_1,\dots,i_j}^{\bin}(W)$ satisfy the following inequality:
\begin{equation} \label{eq:obg}
H_{i_1,\dots,i_j}(W) \le
H_{i_1,\dots,i_j}^{\bin}(W) \le
H_{i_1,\dots,i_j}(W) + \frac{6\ell \log N}{N^2}
\end{equation}
\end{prop}
\begin{proof}
Since the binning algorithm (Algorithm~\ref{algo:bin}) always produces a channel that is degraded with respect to the original channel,
the first inequality in \eqref{eq:obg} follows immediately by applying Proposition~\ref{prop:degrad_subchannel} recursively in our $t$-step code construction.
Now we prove the second inequality in \eqref{eq:obg}.
We will prove the following inequality by induction on $j$:
\begin{equation} \label{eq:glg}
H_{i_1,\dots,i_j}^{\bin}(W) \le H_{i_1,\dots,i_j}(W)
+ \frac{6\log N}{N^3}(1+\ell+\ell^2+\dots+\ell^j)
\quad\quad \forall (i_1,i_2,\dots,i_j)\in[\ell]^j .
\end{equation}
The base case $j=0$ is trivial. Now assume that the inequality holds for $j$; we prove it for $j+1$.
By the chain rule,
$$
\sum_{i_{j+1}=1}^{\ell}H_{i_1,\dots,i_j,i_{j+1}}^{\bin*}(W)
=\ell H_{i_1,\dots,i_j}^{\bin}(W),
\quad\quad
\sum_{i_{j+1}=1}^{\ell}H_{i_1,\dots,i_j,i_{j+1}}(W)
=\ell H_{i_1,\dots,i_j}(W).
$$
Therefore,
$$
\sum_{i_{j+1}=1}^{\ell}\Big(H_{i_1,\dots,i_j,i_{j+1}}^{\bin*}(W) -H_{i_1,\dots,i_j,i_{j+1}}(W) \Big)
=\ell \Big( H_{i_1,\dots,i_j}^{\bin}(W)
- H_{i_1,\dots,i_j}(W) \Big).
$$
Since every summand on the left-hand side is non-negative (binning only degrades a channel, so $H_{i_1,\dots,i_j,i_{j+1}}^{\bin*}(W)\ge H_{i_1,\dots,i_j,i_{j+1}}(W)$), we have for every $i_{j+1}\in[\ell]$
$$
H_{i_1,\dots,i_j,i_{j+1}}^{\bin*}(W) -H_{i_1,\dots,i_j,i_{j+1}}(W)
\le \ell \Big( H_{i_1,\dots,i_j}^{\bin}(W)
- H_{i_1,\dots,i_j}(W) \Big)
\le \frac{6\log N}{N^3} (\ell+\ell^2+\dots+\ell^{j+1}),
$$
where the second inequality follows from the induction hypothesis.
Combining this with \eqref{eq:gpH}, we obtain that
$$
H_{i_1,\dots,i_j,i_{j+1}}^{\bin}(W) \le H_{i_1,\dots,i_j,i_{j+1}}(W)
+ \frac{6\log N}{N^3}(1+\ell+\ell^2+\dots+\ell^{j+1}) .
$$
This establishes the inductive step and completes the proof of \eqref{eq:glg}.
The inequality \eqref{eq:obg} then follows directly from \eqref{eq:glg} by using the fact that $1+\ell+\dots+\ell^j<\ell N$ for all $j\le t$.
\end{proof}
\begin{thm} \label{thm:m1}
For arbitrarily small $\alpha>0$, if we choose a constant $\ell\ge\exp(\alpha^{-1.01})$ to be a power of $2$ and let $t=\log_{\ell} N$ grow, then
the codes constructed from the above procedure have decoding error probability $O_{\alpha}(\log N/N)$ under successive decoding and code rate at least $I(W)-N^{-1/2+7\alpha}$, where $N=\ell^t$ is the code length.
\end{thm}
\begin{proof}
By \eqref{eq:obg} and the definition of $\mathcal{S}_{\good}$ in \eqref{eq:Sgood}, we know that for every $(i_1,\dots,i_t)\in\mathcal{S}_{\good}$, we have $H_{i_1,\dots,i_t}(W)\le H_{i_1,\dots,i_t}^{\bin}(W)\le \frac{7\ell \log N}{N^2}$.
Then by Lemma 2.2 in \cite{Blasiok18}, we know that the ML decoding error probability of the bit-channel $W_{i_1,\dots,i_t}(K_1^{(0)},K_{i_1}^{(1)},\dots,K_{i_1,\dots,i_{t-1}}^{(t-1)})$ is also upper bounded by $\frac{7\ell \log N}{N^2}$. Since the cardinality of $\mathcal{S}_{\good}$ is at most $N$, we can conclude that the overall decoding error probability under the successive decoder is $O_{\alpha}(\log N/N)$ using the union bound.
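For concreteness, the union-bound step unpacks as follows, writing $P_e(\cdot)$ for the ML decoding error probability of a bit-channel; this is just the chain of estimates from the sentences above:

```latex
P_e(\text{code})
\;\le\; \sum_{(i_1,\dots,i_t)\in\mathcal{S}_{\good}} P_e\big(W_{i_1,\dots,i_t}\big)
\;\le\; |\mathcal{S}_{\good}|\cdot \frac{7\ell\log N}{N^2}
\;\le\; N\cdot \frac{7\ell\log N}{N^2}
\;=\; \frac{7\ell\log N}{N}
\;=\; O_{\alpha}\Big(\frac{\log N}{N}\Big),
```

where the last step uses that $\ell$ is a constant depending only on $\alpha$.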
Notice that $|\mathcal{S}_{\good}|$ is the code dimension. Therefore, we only need to lower bound $|\mathcal{S}_{\good}|$ in order to get the lower bound on the code rate.
Define another set
\begin{equation}\label{eq:sgprime}
\mathcal{S}_{\good}':=\left\{(i_1,i_2,\dots,i_t)\in[\ell]^t: H_{i_1,\dots,i_t}(W) \le \frac{\ell \log N}{N^2} \right\}.
\end{equation}
According to \eqref{eq:obg}, if $H_{i_1,\dots,i_t}(W)\le\frac{\ell\log N}{N^2}$, then
$H_{i_1,\dots,i_t}^{\bin}(W)\le \frac{7\ell \log N}{N^2}$.
Therefore, $\mathcal{S}_{\good}'\subseteq \mathcal{S}_{\good}$, so $|\mathcal{S}_{\good}|\ge |\mathcal{S}_{\good}'|$.
In Lemma~\ref{lm:acd} below, we will prove that $|\mathcal{S}_{\good}'|\ge N(I(W)-N^{-1/2+7\alpha})$. Therefore, $|\mathcal{S}_{\good}|\ge N(I(W)-N^{-1/2+7\alpha})$.
This completes the proof of the theorem.
\end{proof}
\begin{lem} \label{lm:acd}
If $\ell\ge\exp(\alpha^{-1.01})$ is a power of $2$, then the set $\mathcal{S}_{\good}'$ defined in \eqref{eq:sgprime} satisfies
$$
\left\lvert\mathcal{S}_{\good}'\right\rvert
\geq N\left(I(W) - N^{-\frac12 + 7\alpha } \right) .
$$
\end{lem}
\begin{proof}
The proof is the same as that of~\cite[Claim A.2]{Blasiok18}. Recall that in~\eqref{eq:strong_Markov}--\eqref{eq:Ega_exponential} we proved that
$$
\P\left[H^{(t)} \in \left(\frac{\ell\log N}{N^2}, 1 - \frac{\ell\log N}{N^2} \right)\right] \leq 2 \frac{N^{2\alpha}}{(\ell\log N)^{\alpha}}\cdot\lambda_{\alpha}^t,
$$
where $H^{(t)}$ is (marginally) the entropy of the random channel at the last level of the construction, i.e., $H^{(t)}$ is uniformly distributed over $H_{i_1,\dots,i_t}(W)$ for all possible $(i_1,i_2,\dots,i_t)\in[\ell]^t$, and $\lambda_{\alpha}$ is such that~\eqref{mult_decrease} holds for every channel $W'$ throughout the construction.
By Proposition~\ref{prop:approx_accumulation}, we can choose the error parameter $\Delta$ in Algorithm~\ref{algo:kernel_search} to be $\Delta=\frac{6\ell\log N}{N^2}$, which satisfies the condition $\Delta\le \ell^{-\log \ell}$ in Theorem~\ref{thm:kernel_seacrh_correct}.
Then Theorem~\ref{thm:kernel_seacrh_correct} and Remark~\ref{rmk:rmk} tell us that as long as $\log\ell\ge \alpha^{-1.01}$, Algorithm~\ref{algo:kernel_search} allows us to choose kernels such that $\lambda_{\alpha}\leq \ell^{-1/2+5\alpha}$, which gives
\begin{equation} \label{eq:wus}
\P\left[H^{(t)} \in \left(\frac{\ell\log N}{N^2}, 1 - \frac{\ell\log N}{N^2} \right)\right] \leq \frac{2 N^{-1/2 + 7\alpha}}{(\ell\log N)^{\alpha}}.
\end{equation}
On the other hand, conservation of entropy throughout the process implies $E\left[H^{(t)}\right] = H(W)$, and therefore by Markov's inequality
\begin{equation}
\P\left[H^{(t)} \geq 1 - \frac{\ell\log N}{N^2} \right] \leq \frac{H(W)}{1 - \frac{\ell\log N}{N^2}} \leq H(W) + \frac{2\ell\log N}{N^2}.
\end{equation}
Since $H(W) = 1 - I(W)$ for symmetric channels and $\left\lvert\mathcal{S}_{\good}'\right\rvert = N\cdot\P\left[H^{(t)} \leq \frac{\ell\log N}{N^2} \right]$, we have
\begin{align*}
\left\lvert\mathcal{S}_{\good}'\right\rvert &\geq N\left(1 - \frac{2 N^{-1/2 + 7\alpha}}{(\ell\log N)^{\alpha}} - H(W) - \frac{2\ell\log N}{N^2} \right) \\
&\geq N\left(I(W) - \frac{3 N^{-1/2 + 7\alpha}}{(\ell\log N)^{\alpha}}\right) \\
&\geq N\Big(I(W) - N^{-1/2 + 7\alpha}\Big). \qedhere
\end{align*}
\end{proof}
\subsection{Main theorem: Putting everything together}
\label{sect:main_together}
As we mentioned at the beginning of this section, the code construction presented above only takes the special case of $\mathsf{Q}=N^3$ as a concrete example, where $\mathsf{Q}$ is the upper bound on the output alphabet size after binning; see Algorithm~\ref{algo:bin}.
In fact, we can change the value of $\mathsf{Q}$ to be any polynomial in $N$, and this allows us to obtain a trade-off between the decoding error probability and the gap to capacity while maintaining the polynomial-time code construction as well as the $O_{\alpha}(N\log N)$ encoding and decoding complexity.
More precisely, we have the following theorem.
\begin{thm} \label{thm:main1}
For any BMS channel $W$, any $c>0$ and arbitrarily small $\alpha>0$, if we choose a constant $\ell\ge\exp(\alpha^{-1.01})$ to be a power of $2$ and set $\mathsf{Q}=N^{c+2}$ in the above code construction procedure, then
we can construct a code $\mathcal{C}$ with code length $N=\ell^t$ such that the following four properties hold as $t$ grows: (1) the code construction has $N^{O_\alpha(1)}$ complexity; (2) both encoding and decoding have $O_{\alpha}(N\log N)$ complexity; (3) the rate of $\mathcal{C}$ is $I(W)-O(N^{-1/2+(c+6)\alpha})$; (4) the decoding error probability of $\mathcal{C}$ is $O_{\alpha}(\log N/N^c)$ under successive decoding when $\mathcal{C}$ is used for channel coding over $W$.
\end{thm}
\begin{proof}
The proof of properties (1) and (2) is exactly the same as that of Proposition~\ref{prop:complex}.
Here we only briefly explain how to adjust the proof of Theorem~\ref{thm:m1} to show properties (3) and (4).
First, we change the definitions of $\mathcal{S}_{\good}$ and $\mathcal{S}_{\good}'$ to
\begin{align*}
\mathcal{S}_{\good} & :=\left\{(i_1,i_2,\dots,i_t)\in[\ell]^t: H_{i_1,\dots,i_t}^{\bin}(W) \le \frac{(2c+5)\ell \log N}{N^{c+1}} \right\}, \\
\mathcal{S}_{\good}' & :=\left\{(i_1,i_2,\dots,i_t)\in[\ell]^t: H_{i_1,\dots,i_t}(W) \le \frac{\ell \log N}{N^{c+1}} \right\}.
\end{align*}
The definition of $\mathcal{S}_{\good}$ immediately implies property (4). Next we prove property (3).
Since we change $\mathsf{Q}$ from $N^3$ to $N^{c+2}$,
inequality \eqref{eq:gpH} becomes
$$
H_{i_1,\dots,i_j}^{\bin*}(W) \le H_{i_1,\dots,i_j}^{\bin}(W)
\le H_{i_1,\dots,i_j}^{\bin*}(W) + \frac{2(c+2)\log N}{N^{c+2}} .
$$
As a consequence, inequality \eqref{eq:obg} in Proposition~\ref{prop:approx_accumulation} becomes
$$
H_{i_1,\dots,i_j}(W) \le
H_{i_1,\dots,i_j}^{\bin}(W) \le
H_{i_1,\dots,i_j}(W) + \frac{2(c+2)\ell\log N}{N^{c+1}} .
$$
This inequality tells us that $\mathcal{S}_{\good}'\subseteq \mathcal{S}_{\good}$, so $|\mathcal{S}_{\good}|\ge |\mathcal{S}_{\good}'|$.
Then we follow Lemma~\ref{lm:acd} to lower bound $|\mathcal{S}_{\good}'|$.
Inequality \eqref{eq:wus} now becomes
$$
\P\left[H^{(t)} \in \left(\frac{\ell\log N}{N^{c+1}}, 1 - \frac{\ell\log N}{N^{c+1}} \right)\right] \leq \frac{2 N^{-1/2 + (c+6)\alpha}}{(\ell\log N)^{\alpha}}.
$$
Therefore, we obtain that
$$
|\mathcal{S}_{\good}|\ge |\mathcal{S}_{\good}'|
\ge N\Big(I(W)-N^{-1/2 + (c+6)\alpha} \Big).
$$
This completes the proof of the theorem.
\end{proof}
\input{exp-error}
\section{Overview of our construction and analysis}
In order to better explain our work and situate it in the rich backdrop of related works on polar codes, we begin with some context and background on the phenomenon of channel polarization that lies at the heart of Ar\i kan's polar coding approach.
\subsection{Channel transforms, entropy polarization, and polar codes}
\label{subsec:polarization-intro}
\sloppy Consider an arbitrary binary-input memoryless symmetric (BMS)\footnote{We say that a channel $W:\{0,1\}\to\mathcal{Y}$ is a BMS channel if there is a permutation $\pi$ on the output alphabet $\mathcal{Y}$ satisfying i) $\pi^{-1}=\pi$ and ii) $W(y|1)=W(\pi(y)|0)$ for all $y\in\mathcal{Y}$.} channel $W\,:\,\{0,1\}\to\mathcal{Y}$, and an ${\ell \times \ell}$ invertible binary matrix $K$ (referred to as the \emph{kernel}). Suppose that we are transmitting a binary vector $\bU = (U_1, U_2, \dots, U_{\ell})$ uniformly chosen from $\{0,1\}^{\ell}$ in the following way: first, it is transformed into $\bX = \bU K$, which is then transmitted through $\ell$ copies of the channel $W$ to get the output $\bY = W^{\ell}(\bX) \in \mathcal{Y}^{\ell}$.
Now imagine decoding the input bits $U_i$ successively in the order of increasing $i$. This naturally leads to a binary-input channel $W_i : \{0,1\} \to \mathcal{Y}^{\ell}\times\{0,1\}^{i-1}$, for each $i\in [\ell]$, which is the channel ``seen'' by the bit $U_i$ when all the previous bits $\bU_{<i}$ and all the channel outputs $\bY\in \mathcal{Y}^{\ell}$ are known. Formally, the transition probabilities of this channel are
\begin{equation}
\label{Arikan_subchannels}
W_i(\bY, \bU_{<i}\, |\, U_i) = \dfrac1{2^{\ell - 1}} \sum_{\bV \in \{0,1\}^{\ell - i}} W^{\ell}\Big(\bY\, |\, (\bU_{<i}, U_i, \bV)K\Big),
\end{equation}
where $\bU_{<i} \in \{0,1\}^{i-1}$ denotes the first $(i-1)$ bits of $\bU$, and the sum is over all possible values $\bV \in \{0,1\}^{\ell-i}$ of the last $(\ell-i)$ bits of $\bU$. In this paper we will refer to the channel $W_i$ as ``Ar{\i}kan's bit-channel of $W$ with respect to $K$.''
A \emph{polarization transform} associated with the kernel $K$ is then defined as a transformation that maps $\ell$ copies of the channel $W$ to the channels $W_1$, $W_2, \dots, W_{\ell}$.
For a BMS channel $W$, we define $H(W)$ as the conditional entropy of the channel input random variable given the channel output random variable when the channel input has uniform distribution.
Since $K$ is invertible, the chain rule for entropy directly yields the \emph{entropy conservation property}: \begin{equation}
\label{eq:entropy_conserv}
\ell \cdot H(W) = H(\bX|\bY)
=H(\bU|\bY)
=\sum_{i=1}^{\ell}H(U_i|\bU_{<i},\bY)
=\sum_{i=1}^{\ell}H(W_i).
\end{equation}
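As an illustration (ours, not from the paper), \eqref{eq:entropy_conserv} can be verified numerically for $W=\mathrm{BSC}(p)$ with $\ell=2$ and Ar{\i}kan's kernel $K=\left(\begin{smallmatrix}1 & 0\\ 1 & 1\end{smallmatrix}\right)$; all helper names below are ours:

```python
import itertools
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bit_channel_entropies(p):
    """Entropies H(W_1), H(W_2) of Arikan's bit-channels of BSC(p)
    with respect to the 2x2 kernel K = [[1,0],[1,1]]."""
    def bsc(y, x):  # transition probability W(y|x) of BSC(p)
        return 1 - p if y == x else p

    # Exact joint distribution of (U1, U2, Y1, Y2) with uniform inputs,
    # where X1 = U1 xor U2 and X2 = U2 (i.e. X = U K over GF(2)).
    joint = {}
    for u1, u2, y1, y2 in itertools.product([0, 1], repeat=4):
        x1, x2 = u1 ^ u2, u2
        joint[(u1, u2, y1, y2)] = 0.25 * bsc(y1, x1) * bsc(y2, x2)

    def cond_entropy(target, given):
        """H(target | given), indices into the outcome tuple."""
        ent = 0.0
        outcomes = {}
        for key, pr in joint.items():
            g = tuple(key[i] for i in given)
            outcomes.setdefault(g, {0: 0.0, 1: 0.0})[key[target]] += pr
        for dist in outcomes.values():
            pg = dist[0] + dist[1]
            if pg > 0:
                ent += pg * h2(dist[1] / pg)
        return ent

    # H(W_1) = H(U1 | Y1, Y2);  H(W_2) = H(U2 | U1, Y1, Y2)
    return cond_entropy(0, (2, 3)), cond_entropy(1, (0, 2, 3))

hw1, hw2 = bit_channel_entropies(0.11)
# Entropy conservation: H(W_1) + H(W_2) = 2 * H(BSC(p)) = 2 * h2(p).
assert abs(hw1 + hw2 - 2 * h2(0.11)) < 1e-12
# Polarization: W_1 is strictly worse, W_2 strictly better than W.
assert hw2 < h2(0.11) < hw1
```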
If $K$ is invertible and is not upper-triangular under any column permutation (which we refer to as a \emph{mixing matrix}), then the bit-channels $W_1, W_2, \dots, W_{\ell}$ start \emph{polarizing}---some of them become better than $W$ (have smaller entropy), and some become worse. The standard approach is then to recursively apply the polarization transform of $K$ to these bit-channels. This naturally leads to an $\ell$-ary tree of channels. The $t$'th level of the tree corresponds to the linear transformation $K^{\otimes t}$, the $t$-fold Kronecker product of $K$.\footnote{Actually, the analysis is more convenient if one applies a bit-reversal permutation of the $U_i$'s, and indeed we do so also in this paper, but this is not important for our current discussion.}
In his landmark paper~\cite{arikan-polar}, Ar\i kan proved that when $K = \left(\begin{smallmatrix}1 & 0\\ 1 & 1\end{smallmatrix}\right)$, at
the $t$'th level, all but an $o(1)$ fraction of the channels (as $t \to \infty$) are either almost noiseless (have tiny entropy) or completely useless (have entropy very close to $1$).
To get capacity-achieving codes from polarization, the idea is to use the almost-noiseless channels, which constitute an $\approx I(W)$ fraction by conservation of entropy, to carry the message bits, and ``freeze'' the bits in the remaining positions to pre-determined values (e.g., all $0$s). Thus the generator matrix of the code consists of those rows of $K^{\otimes t}$ that correspond to the almost-noiseless positions. Ar\i kan presented a successive cancellation (SC) decoder and proved that it can be implemented using $O(N \log N)$ operations, where $N=\ell^t$ is the code length, thanks to the nice recursive structure of $K^{\otimes t}$.
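The generator-matrix construction can be sketched in a few lines of Python (ours; the bit-reversal permutation is omitted):

```python
def kron(A, B):
    """Kronecker product of 0/1 matrices given as lists of lists."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def kron_power(K, t):
    """The t-fold Kronecker power of the kernel K."""
    M = [[1]]
    for _ in range(t):
        M = kron(M, K)
    return M

K = [[1, 0], [1, 1]]          # Arikan's kernel
M = kron_power(K, 3)          # linear transform at level t = 3 (N = 8)
assert len(M) == 8 and all(len(row) == 8 for row in M)
assert M[7] == [1] * 8        # the last row of the power is all-ones
# The generator matrix keeps only the rows of the almost-noiseless
# positions; {3, 5, 6, 7} (0-indexed) are the four best positions
# when W = BEC(1/2), for example.
G = [M[i] for i in (3, 5, 6, 7)]
assert len(G) == 4
```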
For the parameters of the code, if one shows that at most $\delta_t$ fraction of the channels at the $t$'th level have entropies in the range $(\zeta_t,1-\zeta_t)$, then one (roughly) gets codes of length $2^t$, rate $I(W)-\delta_t-\zeta_t$, for which the SC decoder achieves decoding error probability $\zeta_t \ell^t$ for noise caused by $W$ (see, for example \cite[Theorem A.3]{Blasiok18}). Thus, one needs $\zeta_t$ sub-exponentially small in $t$ (i.e., at most $\exp(-\omega(t))$) to achieve good decoding error. For Ar\i kan's original $2 \times 2$ kernel, this was shown in \cite{arikan-telatar}. Korada, Sasoglu and Urbanke extended the analysis to arbitrary $\ell\times\ell$ mixing matrices over the binary field~\cite{KSU10}, and Mori and Tanaka established a similar claim over all finite fields~\cite{mori-tanaka}.
The fraction $\delta_t$ of \emph{unpolarized} channels (whose entropies fail to be sub-exponentially close to $0$ or $1$) governs the gap to capacity of polar codes. The above works established that $\lim_{t \to \infty} \delta_t = 0$, and thus polar codes achieve capacity asymptotically as the block length grows to infinity. However, they did not provide any bounds on the speed at which $\delta_t \to 0$ as a function of $t$, much less quantify a scaling exponent. Note that one would need to show $\delta_t \le O(\ell^{-t/\mu})$ to establish a scaling exponent of $\mu$, since the code length is $\ell^t$.
\subsection{Scaling exponents: prior work}
\label{subsec:prior-work}
For Ar\i kan's original kernel $\left(\begin{smallmatrix}1 & 0\\ 1 & 1\end{smallmatrix}\right)$, two independent works~\cite{hassani-finite-scaling-paper-journal,GX15} proved that $\delta_t$ drops to $0$ exponentially fast in $t$. This showed that Ar\i kan's polar codes have a finite scaling exponent (i.e., converge to capacity polynomially fast in the block length), making them the first codes with this important feature. Blasiok {\it et al.} generalized this result significantly in \cite{Blasiok18}, proving that the entire class of polar codes, based on arbitrary mixing matrices over any prime field as kernels, has finite scaling exponent.
For concrete upper bounds on the scaling exponent, the work of Hassani, Alishahi, and Urbanke~\cite{hassani-finite-scaling-paper-journal} had proved $\mu \le 6$ for Ar\i kan's original kernel. This was improved to $\mu \le 5.702$ in \cite{GB13}.
Mondelli, Hassani, and Urbanke~\cite{MHU16_unified} showed that $\mu \leq 4.714$ for any BMS channel $W$, and showed a better upper bound $\mu \leq 3.639$ for the case when $W$ is a binary erasure channel (BEC).
A \emph{lower bound} $\mu \ge 3.579$ appears in \cite{hassani-finite-scaling-paper-journal}, which suggests that polar codes based on Ar\i kan's original $2 \times 2$ kernel might inherently fall short of the optimal scaling exponent of $2$.
For larger kernels, effective upper bounds on the scaling exponent are harder to establish as the local evolution of the channels is more complex. In fact, to the best of our knowledge, there is no such explicit bound in the literature, for any kernel of size bigger than $2$.\footnote{Here we exclude special cases such as a block diagonal matrix with blocks of size at most $2$ which can be reduced to the $2 \times 2$ case but will only have a worse scaling exponent.}
The analysis of polar codes is a lot more tractable for the case of erasure channels, where symbols get erased (replaced by a `?' but never corrupted). Next we describe some results for erasure channels as well as the difficulty in extending these results to channels such as the BSC.
\subsection{Polar codes for erasure channels}
\label{subsec:polar-bec}
For the erasure channel, we have analyses of larger kernels and even codes with scaling exponent approaching $2$.
Binary $\ell \times \ell$ kernels for powers of two $\ell \le 64$ optimized for the binary erasure channel appear in \cite{Miloslavskaya-Trifonov,Fazeli-Vardy,Yao-Fazeli-Vardy}; a $64 \times 64$ kernel achieving $\mu < 3$ is reported in \cite{Yao-Fazeli-Vardy}.
Pfister and Urbanke proved in~\cite{Pfister-Urbanke} that the optimal scaling exponent $\mu=2$ can be approached if one considers transmission over the $q$-ary erasure channel for large alphabet size $q$. They used polar codes based on $q \times q$ kernels.
Fazeli, Hassani, Mondelli, and Vardy~\cite{FHMV17} then established a similar result for the more challenging and also more interesting case of $q=2$, i.e., for the binary erasure channel, using $\ell \times \ell$ kernels for large $\ell$. They pose proving an analogous result for arbitrary BMS channels as an important challenge. Their conjecture that this can be accomplished provided some of the impetus for our work. Our analysis structure follows a similar blueprint to \cite{FHMV17} though the technical ingredients become significantly more complex for channels other than the BEC, as explained next.
The polarization process for erasure channels has a particularly nice structure. If the initial channel $W$ is the binary erasure channel with erasure probability $z$ (denoted $\mathrm{BEC}(z)$), then the Ar{\i}kan channels $W_i$, $i\in [\ell]$, arising from the linear transformation by the kernel are also binary erasure channels (specifically, $\mathrm{BEC}(p_i(z))$ where $p_i(\cdot)$ are some polynomials of degree at most $\ell$).
Crucially, \emph{all} the channels in the recursive tree remain BECs. Therefore it suffices to prove the existence of a good polarizing kernel for the class of binary erasure channels, which is parameterized by a single number, the erasure probability, which also equals the entropy of the channel. As shown in \cite{FHMV17}, a random kernel works with good probability universally for all BECs.
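For Ar{\i}kan's $2\times 2$ kernel these polynomials are $p_1(z)=2z-z^2$ and $p_2(z)=z^2$; a quick numerical check (ours):

```python
def bec_bit_channels(z):
    """Erasure probabilities of the two Arikan bit-channels of BEC(z)
    under the kernel [[1,0],[1,1]]: the '-' channel recovers U1 only
    if neither output is erased; the '+' channel loses U2 only if
    both outputs are erased."""
    minus = 1 - (1 - z) ** 2     # p_1(z) = 2z - z**2
    plus = z ** 2                # p_2(z) = z**2
    return minus, plus

z = 0.3
minus, plus = bec_bit_channels(z)
assert abs(minus - (2 * z - z ** 2)) < 1e-15
# For the BEC, the erasure probability equals the channel entropy, so
# entropy conservation reads p_1(z) + p_2(z) = 2z:
assert abs(minus + plus - 2 * z) < 1e-15
```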
However, fundamentally the calculations for BEC revolve around the rank of various random subspaces, as decoding under the BEC is a linear-algebraic task. Moving beyond the BEC takes us outside the realm of linear algebra into information-theoretic settings where tight quantitative results are much harder to establish.
\subsection{The road to BSC: Using multiple kernels}
\label{sect:mix}
For the case when the initial channel $W$ is a BSC, a fundamental difficulty (among others) is that the channels in the recursion tree will no longer remain BSC (even after the first step). Further, to the best of our knowledge, the various channels that arise do not share a nice common exploitable structure.
Therefore, we have to think of the intermediate channels as arbitrary BMS channels, a very large and diverse class of channels. It is not clear (to us) whether there exists a single kernel that universally polarizes \emph{all} BMS channels at a rapid rate. Even if such a kernel exists, proving so seems out of reach of current techniques. Finally, even for a specific BMS channel, proving that a random kernel polarizes it fast enough requires some very strong quantitative bounds on the performance and limitations of random linear codes for channel coding. We next describe these issues; dealing with them constitutes the core of our contributions.
Since we are not able to establish that a single kernel could work for the whole construction universally, our idea is to pick different kernels, which depend on the channels that we face during the procedure. That way, by picking a suitable kernel for each channel in the tree, we can ensure that polarization is fast enough throughout the whole process.
Though we use different kernels in the code construction, all of them have the same size $\ell\times\ell$.
We say that a kernel is \emph{good} if all but a $\widetilde{O}(\ell^{-1/2})$ fraction of the bit-channels obtained after polar transform by this kernel have entropy $\ell^{-\Omega(\log\ell)}$-close to either $0$ or $1$.
Given a BMS channel $W$, the code construction consists of $t$ steps, from Step $0$ to Step $t-1$.
At Step $0$, we find a good kernel $K_1^{(0)}$ for the original channel $W$.
After the polar transform of $W$ using kernel $K_1^{(0)}$, we obtain $\ell$ bit-channels $W_1,\dots,W_{\ell}$. Then in Step $1$, we find good kernels for each of these $\ell$ bit-channels. More precisely, the good kernel for $W_i$ is denoted as $K_i^{(1)}$, and the bit-channels obtained by polar transforms of $W_i$ using kernel $K_i^{(1)}$ are denoted as $\{W_{i,j}:j\in[\ell]\}$; see Figure~\ref{fig:12305} for an illustration.
At Step $j$, we will have $\ell^j$ bit-channels $\{W_{i_1,\dots,i_j}:i_1,\dots,i_j\in[\ell]\}$. For each of them, we find a good kernel
$K_{i_1,\dots,i_j}^{(j)}$. After polar transform of $\{W_{i_1,\dots,i_j}:i_1,\dots,i_j\in[\ell]\}$ using these kernels, we will obtain $\ell^{j+1}$ bit-channels. Finally, after the last step (Step $t-1$), we will obtain $N=\ell^t$ bit-channels $\{W_{i_1,\dots,i_t}:i_1,\dots,i_t\in[\ell]\}$.
Using the good kernels we obtained in this code construction procedure, we can build an $N\times N$ matrix (or we can view it as a large kernel) $M^{(t)}$ such that the $N$ bit-channels resulting from the polar transform of the original channel $W$ using this large kernel $M^{(t)}$ are exactly $\{W_{i_1,\dots,i_t}:i_1,\dots,i_t\in[\ell]\}$.
We will say a few more words about this in Section~\ref{sect:encdec} and provide all the details in Section~\ref{sect:cons}.
We now define a random process by $W^{(0)} = W$ and $W^{(j)} = W^{(j-1)}_i$ for $i$ uniformly chosen from $[\ell]$, where $W^{(j-1)}_i$ is the $i$th Ar\i kan bit-channel of $W^{(j-1)}$ with respect to the appropriate kernel chosen in the construction phase.
In other words, this is a random process of going down the tree of channels, where a uniformly random child of the current node is taken at each step. Finally, define another random process $H^{(j)} \coloneqq H\left(W^{(j)}\right)$. Since every kernel in the construction is chosen to be invertible, $H^{(j)}$ is a martingale by the conservation of entropy property~\eqref{eq:entropy_conserv}. Marginally, $W^{(j)}$ is a uniformly random channel at the $j^{\text{th}}$ level of the channel tree, so $H^{(j)}$ is the entropy of such a randomly chosen channel.
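To make the martingale concrete in the simplest setting (ours, not the multi-kernel construction of this paper): for the BEC with Ar{\i}kan's $2\times2$ kernel, the two children of $\mathrm{BEC}(z)$ are $\mathrm{BEC}(2z-z^2)$ and $\mathrm{BEC}(z^2)$, and the process $H^{(j)}$ can be simulated directly:

```python
import random

def polarize_bec(z0, t, trials=20000, seed=0):
    """Simulate the entropy martingale H^(j) for BEC(z0) under Arikan's
    2x2 kernel: each step moves to a uniformly random child channel.
    Returns the fraction of trajectories whose entropy is still in
    (0.01, 0.99), i.e. unpolarized, after t steps."""
    rng = random.Random(seed)
    unpolarized = 0
    for _ in range(trials):
        z = z0  # for BEC, entropy == erasure probability
        for _ in range(t):
            z = z * z if rng.random() < 0.5 else 2 * z - z * z
        if 0.01 < z < 0.99:
            unpolarized += 1
    return unpolarized / trials

# Almost all channels polarize: the unpolarized fraction shrinks with t.
assert polarize_bec(0.5, 20) < polarize_bec(0.5, 5)
```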
\begin{figure}
\centering
\resizebox{0.48\textwidth}{!}{
\begin{tikzpicture}
\node at (7, 9.4) (w0) {\LARGE $W$};
\node [block] at (7, 7.6) (q0)
{\LARGE Find $K_1^{(0)}$};
\node at (2.5, 5.8) (w1) {\LARGE $W_1$};
\node at (7, 5.8) (w2) {\LARGE $W_2$};
\node at (11.5, 5.8) (w3) {\LARGE $W_3$};
\node [block] at (2.5, 4) (q1)
{\LARGE Find $K_1^{(1)}$};
\node [block] at (7, 4) (q2)
{\LARGE Find $K_2^{(1)}$};
\node [block] at (11.5, 4) (q3)
{\LARGE Find $K_3^{(1)}$};
\node at (1, 2) (w11) {\LARGE $W_{1,1}$};
\node at (2.5, 2) (w12) {\LARGE $W_{1,2}$};
\node at (4, 2) (w13) {\LARGE $W_{1,3}$};
\node at (5.5, 2) (w21) {\LARGE $W_{2,1}$};
\node at (7, 2) (w22) {\LARGE $W_{2,2}$};
\node at (8.5, 2) (w23) {\LARGE $W_{2,3}$};
\node at (10, 2) (w31) {\LARGE $W_{3,1}$};
\node at (11.5, 2) (w32) {\LARGE $W_{3,2}$};
\node at (13, 2) (w33) {\LARGE $W_{3,3}$};
\node at (1, 1.5) {\LARGE $\vdots$};
\node at (2.5, 1.5) {\LARGE $\vdots$};
\node at (4, 1.5) {\LARGE $\vdots$};
\node at (5.5, 1.5) {\LARGE $\vdots$};
\node at (7, 1.5) {\LARGE $\vdots$};
\node at (8.5, 1.5) {\LARGE $\vdots$};
\node at (10, 1.5) {\LARGE $\vdots$};
\node at (11.5, 1.5) {\LARGE $\vdots$};
\node at (13, 1.5) {\LARGE $\vdots$};
\draw[->, thick] (w0)--(q0);
\draw[->, thick] (q0)--(w1);
\draw[->, thick] (q0)--(w2);
\draw[->, thick] (q0)--(w3);
\draw[->, thick] (w1)--(q1);
\draw[->, thick] (w2)--(q2);
\draw[->, thick] (w3)--(q3);
\draw[->, thick] (q1)--(w11);
\draw[->, thick] (q1)--(w12);
\draw[->, thick] (q1)--(w13);
\draw[->, thick] (q2)--(w21);
\draw[->, thick] (q2)--(w22);
\draw[->, thick] (q2)--(w23);
\draw[->, thick] (q3)--(w31);
\draw[->, thick] (q3)--(w32);
\draw[->, thick] (q3)--(w33);
\end{tikzpicture}
}
\hfill
\resizebox{0.5\textwidth}{!}{
\begin{tikzpicture}
\node at (13.3, 9) (y9) {\LARGE $Y_1$};
\node at (13.3, 8) (y8) {\LARGE $Y_2$};
\node at (13.3, 7) (y7) {\LARGE $Y_3$};
\node at (13.3, 6) (y6) {\LARGE $Y_4$};
\node at (13.3, 5) (y5) {\LARGE $Y_5$};
\node at (13.3, 4) (y4) {\LARGE $Y_6$};
\node at (13.3, 3) (y3) {\LARGE $Y_7$};
\node at (13.3, 2) (y2) {\LARGE $Y_8$};
\node at (13.3, 1) (y1) {\LARGE $Y_9$};
\node [block] at (11.8, 9) (w9) {\LARGE $W$};
\node [block] at (11.8, 8) (w8) {\LARGE $W$};
\node [block] at (11.8, 7) (w7) {\LARGE $W$};
\node [block] at (11.8, 6) (w6) {\LARGE $W$};
\node [block] at (11.8, 5) (w5) {\LARGE $W$};
\node [block] at (11.8, 4) (w4) {\LARGE $W$};
\node [block] at (11.8, 3) (w3) {\LARGE $W$};
\node [block] at (11.8, 2) (w2) {\LARGE $W$};
\node [block] at (11.8, 1) (w1) {\LARGE $W$};
\node at (10.3, 9) (x9) {\LARGE $X_1$};
\node at (10.3, 8) (x8) {\LARGE $X_2$};
\node at (10.3, 7) (x7) {\LARGE $X_3$};
\node at (10.3, 6) (x6) {\LARGE $X_4$};
\node at (10.3, 5) (x5) {\LARGE $X_5$};
\node at (10.3, 4) (x4) {\LARGE $X_6$};
\node at (10.3, 3) (x3) {\LARGE $X_7$};
\node at (10.3, 2) (x2) {\LARGE $X_8$};
\node at (10.3, 1) (x1) {\LARGE $X_9$};
\node [sblock] at (7.8, 8) (K3) {\LARGE $K_1^{(0)}$};
\node [sblock] at (7.8, 5) (K2) {\LARGE $K_1^{(0)}$};
\node [sblock] at (7.8, 2) (K1) {\LARGE $K_1^{(0)}$};
\node [sblock] at (2.5, 8) (KK3) {\LARGE $K_1^{(1)}$};
\node [sblock] at (2.5, 5) (KK2) {\LARGE $K_2^{(1)}$};
\node [sblock] at (2.5, 2) (KK1) {\LARGE $K_3^{(1)}$};
\node at (0, 9) (uu9) {\LARGE $U_1$};
\node at (0, 8) (uu8) {\LARGE $U_2$};
\node at (0, 7) (uu7) {\LARGE $U_3$};
\node at (0, 6) (uu6) {\LARGE $U_4$};
\node at (0, 5) (uu5) {\LARGE $U_5$};
\node at (0, 4) (uu4) {\LARGE $U_6$};
\node at (0, 3) (uu3) {\LARGE $U_7$};
\node at (0, 2) (uu2) {\LARGE $U_8$};
\node at (0, 1) (uu1) {\LARGE $U_9$};
\draw[->, thick] (uu9)--(1.2,9);
\draw[->, thick] (uu8)--(1.2,8);
\draw[->, thick] (uu7)--(1.2,7);
\draw[->, thick] (uu6)--(1.2,6);
\draw[->, thick] (uu5)--(1.2,5);
\draw[->, thick] (uu4)--(1.2,4);
\draw[->, thick] (uu3)--(1.2,3);
\draw[->, thick] (uu2)--(1.2,2);
\draw[->, thick] (uu1)--(1.2,1);
\draw[red, line width=2pt] (3.8,9)--(4.5, 9);
\draw[red, line width=2pt] (3.8,8)--(4.5, 8);
\draw[red, line width=2pt] (3.8,7)--(4.5, 7);
\draw[blue, line width=2pt] (3.8,6)--(4.5, 6);
\draw[blue, line width=2pt] (3.8,5)--(4.5, 5);
\draw[blue, line width=2pt] (3.8,4)--(4.5, 4);
\draw[ao, line width=2pt] (3.8,3)--(4.5, 3);
\draw[ao, line width=2pt] (3.8,2)--(4.5, 2);
\draw[ao, line width=2pt] (3.8,1)--(4.5, 1);
\draw[red, line width=2pt] (4.5, 9)--(5.8, 9);
\draw[red, line width=2pt] (4.5, 8)--(5.8, 6);
\draw[red, line width=2pt] (4.5, 7)--(5.8, 3);
\draw[blue, line width=2pt] (4.5, 6)--(5.8, 8);
\draw[blue, line width=2pt] (4.5, 5)--(5.8, 5);
\draw[blue, line width=2pt] (4.5, 4)--(5.8, 2);
\draw[ao, line width=2pt] (4.5, 3)--(5.8, 7);
\draw[ao, line width=2pt] (4.5, 2)--(5.8, 4);
\draw[ao, line width=2pt] (4.5, 1)--(5.8, 1);
\draw[->, red, line width=2pt] (5.8, 9)--(6.5,9);
\draw[->, blue, line width=2pt] (5.8, 8)--(6.5,8);
\draw[->, ao, line width=2pt] (5.8, 7)--(6.5,7);
\draw[->, red, line width=2pt] (5.8, 6)--(6.5,6);
\draw[->, blue, line width=2pt] (5.8, 5)--(6.5,5);
\draw[->, ao, line width=2pt] (5.8, 4)--(6.5,4);
\draw[->, red, line width=2pt] (5.8, 3)--(6.5,3);
\draw[->, blue, line width=2pt] (5.8, 2)--(6.5,2);
\draw[->, ao, line width=2pt] (5.8, 1)--(6.5,1);
\draw[->, thick] (9.1,9)--(x9);
\draw[->, thick] (9.1,8)--(x8);
\draw[->, thick] (9.1,7)--(x7);
\draw[->, thick] (9.1,6)--(x6);
\draw[->, thick] (9.1,5)--(x5);
\draw[->, thick] (9.1,4)--(x4);
\draw[->, thick] (9.1,3)--(x3);
\draw[->, thick] (9.1,2)--(x2);
\draw[->, thick] (9.1,1)--(x1);
\draw[->, thick] (x9)--(w9);
\draw[->, thick] (x8)--(w8);
\draw[->, thick] (x7)--(w7);
\draw[->, thick] (x6)--(w6);
\draw[->, thick] (x5)--(w5);
\draw[->, thick] (x4)--(w4);
\draw[->, thick] (x3)--(w3);
\draw[->, thick] (x2)--(w2);
\draw[->, thick] (x1)--(w1);
\draw[->, thick] (w9)--(y9);
\draw[->, thick] (w8)--(y8);
\draw[->, thick] (w7)--(y7);
\draw[->, thick] (w6)--(y6);
\draw[->, thick] (w5)--(y5);
\draw[->, thick] (w4)--(y4);
\draw[->, thick] (w3)--(y3);
\draw[->, thick] (w2)--(y2);
\draw[->, thick] (w1)--(y1);
\end{tikzpicture}
}
\caption{The left figure illustrates the code construction, and the right figure shows the encoding procedure for the special case of $\ell=3$ and $t=2$. All the kernels in this figure have size $3\times 3$. One can show that the bit-channel $W_{i,j}$ in the left figure is exactly the channel mapping from $U_{3(i-1)+j}$ to $(\bU_{[1:3(i-1)+j-1]},\bY_{[1:9]})$ in the right figure.}
\label{fig:12305}
\end{figure}
\subsection{Analysis of polarization via recursive potential function}
The principle behind polarization is that for large enough $t$, almost all of the channels on the $t^{\text{th}}$ level of the tree from Figure~\ref{fig:tree} will be close to either the useless or the noiseless channel, i.e., their entropy will be very close to $1$ or $0$, respectively. To estimate how fast such polarization actually happens, one needs to bound the fraction of channels on the $t^{\text{th}}$ level that are not yet sufficiently polarized, i.e., $\P\Big[H^{(t)} \in (\zeta, 1 - \zeta)\Big]$ for some tiny threshold $\zeta$, and show that this fraction vanishes rapidly as $t$ grows.
Specifically, we have the following result (stated explicitly in~\cite[Theorem A.3]{Blasiok18}), already alluded to in Section~\ref{subsec:polarization-intro}: if for all $t$
\begin{equation}
\label{eq:strong_polarization}
\P[H^{(t)} \in (p\ell^{-t}, 1 - p\ell^{-t})] \leq D\cdot\beta^t,
\end{equation}
then this corresponds to a polar code with block length $N = \ell^{t}$, rate $(D\cdot\beta^t + p\ell^{-t})$-close to the capacity of the channel, and decoding error probability at most $p$ under the successive cancellation decoder
\footnote{For this part the reader should think of $p$ as being inverse polynomial (of fixed degree) in $N$. We will discuss improving the decoding error probability in Section~\ref{sec:overview:exponential-decoding}.}.
To track the fractions of polarized and non-polarized channels at each level of the construction, we use a potential function $g_{\a}(h) = (h(1-h))^{\alpha}$ for some small fixed $\alpha > 0$, which was also used in~\cite{MHU16_unified} and~\cite{FHMV17}. Specifically, we are going to track $\E[g_{\a}(H^{(t)})]$ as $t$ increases, since Markov's inequality implies
\begin{equation}
\label{eq:strong_Markov}
\P[ H^{(t)} \in (p\ell^{-t}, 1 - p\ell^{-t})] = \P [g_{\a}(H^{(t)}) \geq g_{\a}(p\ell^{-t})] \leq \dfrac{\E[g_{\a}(H^{(t)})]}{g_{\a}(p\ell^{-t})}\leq 2\left(\ell^t/p\right)^{\alpha}\cdot \E[g_{\a}(H^{(t)})].
\end{equation}
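The chain of inequalities in~\eqref{eq:strong_Markov} can be checked numerically. The following Python sketch uses a synthetic, hypothetical sample of bit-channel entropies (not one produced by an actual channel tree) and verifies that the fraction of unpolarized entropies never exceeds the Markov bound $\E[g_{\a}(H)]/g_{\a}(a)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(h, alpha):
    # Potential function g_alpha(h) = (h(1-h))^alpha.
    return (h * (1.0 - h)) ** alpha

alpha = 0.1
a = 0.01  # threshold, playing the role of p * ell^{-t}

# Synthetic entropy sample: mostly polarized, a small unpolarized bulk.
H = np.concatenate([rng.beta(0.05, 5.0, 9000),    # near 0
                    rng.beta(5.0, 0.05, 900),     # near 1
                    rng.uniform(0.2, 0.8, 100)])  # middle

frac_mid = np.mean((H > a) & (H < 1 - a))
# Since h(1-h) > a(1-a) exactly on (a, 1-a) for a < 1/2, Markov's
# inequality bounds the unpolarized fraction by E[g] / g(a).
markov_bound = np.mean(g(H, alpha)) / g(a, alpha)
```

This mirrors the argument above: polarization drives $\E[g_{\a}(H)]$ down, and the bound tightens accordingly.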
To upper bound $\E[g_{\a}(H^{(t)})]$ we choose kernels in our construction so that the average of the potential function over all the children of any channel in the tree decreases significantly relative to the potential function of that channel.
Precisely, we want for any channel $W'$ in the tree
\begin{equation}
\label{mult_decrease}
\E_{i\sim[\ell]}\Big[g_{\a}\left(H(W'_i)\right)\Big] \leq \lambda_{\alpha}\cdot g_{\a}\left(H(W')\right),
\end{equation}
where $W'_i$ are the children of $W'$ in the construction tree for $i\in[\ell]$, and the constant $\lambda_{\alpha}$ only depends on $\alpha$ and $\ell$, but is universal for all the channels in the tree (and thus for all the kernels chosen during the construction). If one can guarantee that~\eqref{mult_decrease} holds throughout the construction process, then for the martingale process $H^{(t)}$ we obtain
\begin{equation}
\begin{aligned}
\label{eq:Ega_evolution}
\E\Big[g_{\a}\left(H^{(t)}\right)\Big] &= \E\left[\underset{j\sim [\ell]}{\E}\Big[g_{\a}\left(H(W^{(t-1)}_j)\right) \Big]\, \bigg\rvert\, W^{(t-1)} \right]\\ &=
\E\left[\dfrac1{\ell}\dfrac{\sum_{j=1}^{\ell}g_{\a}\left(H(W^{(t-1)}_j)\right)}{g_{\a}\left(H(W^{(t-1)})\right)}\cdot g_{\a}\left(H(W^{(t-1)})\right) \, \bigg\rvert\, W^{(t-1)} \right] \\
&\leq \lambda_{\alpha}\cdot\E\Big[g_{\a}\left(H^{(t-1)}\right)\Big],
\end{aligned}
\end{equation}
and then inductively
\begin{equation}
\label{eq:Ega_exponential}
\E\Big[g_{\a}\left(H^{(t)}\right)\Big] \leq \lambda_{\alpha}\cdot \E\Big[g_{\a}\left(H^{(t-1)}\right)\Big] \leq \lambda_{\alpha}^2\cdot \E\Big[g_{\a}\left(H^{(t-2)}\right)\Big] \leq \cdots \leq \lambda_{\alpha}^t\, g_{\a}\left(H^{(0)}\right) \leq \lambda_{\alpha}^t.
\end{equation}
Then~\eqref{eq:strong_Markov} and~\eqref{eq:strong_polarization} imply the existence of a code with rate $O\left((N/p)^{\alpha}\cdot\lambda_{\alpha}^t\right)$-close to the capacity of the channel. Since our main task is to achieve a gap close to $N^{-1/2} = \ell^{-t/2}$, we need to argue that it is possible to choose kernels at each step of the construction so that~\eqref{mult_decrease} always holds for some $\alpha \to 0$ and $\lambda_{\alpha} \approx \ell^{-1/2}$.
\subsection{Sharp transition in polarization}
\label{sect:sharp}
The main technical contribution of this paper consists in showing that if $\ell$ is large enough, it is possible to choose kernels in the construction process for which $\lambda_{\alpha}$ is close to $\ell^{-1/2}$. Specifically, we prove that if $\ell$ is a power of $2$ such that $\log \ell = \Omega\left(\frac1{\alpha^{1.01}}\right)$, then it is possible to achieve
\begin{equation}
\label{eq:lambda_bound}
\lambda_{\alpha} \leq \ell^{-1/2 + 5\alpha}.
\end{equation}
To obtain such behavior, when choosing the kernel for the current channel $W'$ during the recursive process we distinguish between two cases:
\medskip \noindent {\bf Case 1: $W'$ is already very noisy or almost noiseless.} Such a regime is called \emph{suction at the ends} (following \cite{Blasiok18}), and it is known that polarization happens (much) faster in this case. So here we take the standard Ar\i kan polarization kernel $K = \left(\begin{smallmatrix}1 & 0\\ 1 & 1\end{smallmatrix}\right)^{\otimes \log\ell}$ and prove~\eqref{mult_decrease} with a geometric decrease factor $\lambda_{\alpha} \leq \ell^{-1/2}$.
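For intuition about the suction-at-the-ends behavior, the entropy martingale can be simulated in closed form for the binary erasure channel: Ar\i kan's $2\times 2$ kernel maps a BEC with erasure rate $z$ to the two children $\mathrm{BEC}(2z-z^2)$ and $\mathrm{BEC}(z^2)$, and $H(\mathrm{BEC}(z))=z$. The Python sketch below is an illustration only (the paper's construction uses larger mixed kernels and general BMS channels); it samples uniformly random root-to-leaf paths in the tree:

```python
import numpy as np

rng = np.random.default_rng(1)

def polarize_bec(z0=0.5, t=20, n_paths=20000):
    # One Arikan step on BEC(z) gives children BEC(2z - z^2) and
    # BEC(z^2), with H(BEC(z)) = z. Each of n_paths samples follows a
    # uniformly random root-to-leaf path of length t.
    z = np.full(n_paths, z0)
    for _ in range(t):
        worse = rng.random(n_paths) < 0.5
        z = np.where(worse, 2 * z - z ** 2, z ** 2)
    return z

H = polarize_bec()
mean_H = H.mean()                                 # martingale: stays ~ z0
polarized = np.mean((H < 1e-3) | (H > 1 - 1e-3))  # suction at the ends
```

Already at depth $t=20$ the overwhelming majority of paths sit within $10^{-3}$ of the endpoints, while the mean entropy is preserved.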
\medskip \noindent {\bf Case 2: $W'$ is neither very noisy nor almost noiseless.} We refer to this case as the \emph{variance in the middle} regime (following~\cite{Blasiok18} again). For such a channel we adopt the framework from~\cite{FHMV17} and show a \emph{sharp transition in polarization} for a random kernel $K$ and $W'$. Specifically, we prove that with high probability over $K \sim \{0,1\}^{\ell\times\ell}$ (for $\ell$ large enough) the following holds:
\begin{equation}
\label{eq:sharp_trans}
\begin{aligned}
&H(W'_i(K)) \leq \ell^{-\Omega(\log \ell)} &&\text{\normalfont{for}} && i \geq \ell\cdot H(W') + \Omega(\ell^{1/2}\log^3\ell),\\
&H(W'_i(K)) \geq 1 - \ell^{-\Omega(\log \ell)} &&\text{\normalfont{for}} && i \leq \ell\cdot H(W') - \Omega(\ell^{1/2}\log^3\ell).
\end{aligned}
\end{equation}
It then follows that only about a $\widetilde{O}(\ell^{-1/2})$ fraction of bit-channels are not polarized for some kernel $K$, which then easily translates into the desired bound~\eqref{eq:lambda_bound} on $\lambda_{\alpha}$. Note that we can always ensure that we take an invertible kernel $K$, since a random binary matrix is invertible with at least some constant probability.
Proving such a sharp transition constitutes the bulk of the technical work in this paper. In Section~\ref{BMS_section} we show that the inequalities in~\eqref{eq:sharp_trans} correspond to decoding a single bit of a message transmitted through $W'$ using a random linear code. The first set of inequalities in~\eqref{eq:sharp_trans} then corresponds to saying that one can decode this single bit with low error probability, with high probability over the randomness of the code, if the rate of the code is at least approximately $\ell^{-1/2}$ below the capacity of the channel (where $\ell$ is the block length of the code). This follows from the well-studied fact that random linear codes achieve Shannon capacity over any BMS channel (\cite{Gallager65}, \cite{Barg_Forney_correspond}).
The second set of inequalities, on the other hand, corresponds to saying that for random linear codes with rate exceeding capacity by at least $\approx \ell^{-1/2}$, even predicting a single bit of the message with a tiny advantage over a uniform guess is impossible. While it follows from the converse of Shannon's coding theorem that decoding the \emph{entire} message is not possible (even with small success probability) for \emph{any} code above capacity, it does not follow that one cannot recover \emph{a particular message bit} with accuracy noticeably better than random guessing. In fact, if we only want to decode a specific message bit and put no constraints on the code, then we can easily construct codes with rate substantially above capacity that still allow us to decode this specific bit with high probability. All we need to do is repeat the message bit sufficiently many times in the codeword, decode each copy from the corresponding channel output, and take a majority vote. The overall code rate does not even figure in this argument.
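The repetition-plus-majority argument is easy to quantify. For example, over a binary symmetric channel with crossover probability $0.11$ (capacity roughly $0.5$), repeating one message bit $25$ times and taking a majority vote recovers that bit with probability well above $0.99$, independently of the overall rate of the surrounding code; the parameter values below are purely illustrative:

```python
from math import comb

def majority_success(p_flip, r):
    # P(majority vote over r independent BSC(p_flip) copies is correct)
    # for odd r: at most (r - 1) / 2 of the copies may be flipped.
    return sum(comb(r, k) * p_flip ** k * (1 - p_flip) ** (r - k)
               for k in range((r - 1) // 2 + 1))

p_ok = majority_success(0.11, 25)  # BSC(0.11) has capacity ~ 0.5
```

The expected number of flips is $2.75$, so $13$ or more flips is a large-deviation event, and the single-bit error probability is astronomically small.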
Therefore, one can only hope that the converse theorem for bit-decoding holds for certain code ensembles, and for the purpose of this paper we restrict ourselves to the random linear code ensemble.
While the converse for bit-decoding in this case is certainly intuitive, establishing it in the strong quantitative form~\eqref{eq:sharp_trans} that we need, and for all BMS channels, turns out to be a challenging task. We describe some of the ideas behind our strong converse theorem for bit-decoding in Section~\ref{sect:outline}.
\subsection{Encoding and decoding} \label{sect:encdec}
Once we have obtained the kernels in the code construction (see Section~\ref{sect:mix}), the encoding procedure is fairly standard; see \cite{Presman15,Ye15,Gabry17,Benammar17,Wang18} for discussions of polar codes using multiple kernels.
As mentioned in Section~\ref{sect:mix}, we can build an $N\times N$ matrix $M^{(t)}:=D^{(t-1)}Q^{(t-1)}D^{(t-2)}Q^{(t-2)}\dots D^{(1)}Q^{(1)} D^{(0)}$, where the matrices $Q^{(1)}, Q^{(2)},\dots,Q^{(t-1)}$ are some permutation matrices, and $D^{(0)}, D^{(1)},\dots,D^{(t-1)}$ are block diagonal matrices. In particular, all the blocks on the diagonal of $D^{(j)}$ are the kernels that we obtained in Step $j$ of the code construction. We illustrate the special case of $\ell=3$ and $t=2$ in Figure~\ref{fig:12305}.
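The structure of $M^{(t)}$ can be made concrete in a toy case. The Python sketch below builds $M^{(2)}=D^{(1)}Q^{(1)}D^{(0)}$ for $\ell=2$ with every kernel equal to Ar\i kan's kernel; here $Q^{(1)}$ is taken to be the perfect-shuffle permutation, which is a plausible stand-in and not necessarily the exact permutation used in the paper:

```python
import numpy as np

def block_diag_kernels(kernels):
    # Block-diagonal matrix D^{(j)} with the given l x l kernels.
    l = kernels[0].shape[0]
    n = l * len(kernels)
    D = np.zeros((n, n), dtype=np.uint8)
    for b, K in enumerate(kernels):
        D[b * l:(b + 1) * l, b * l:(b + 1) * l] = K
    return D

def perfect_shuffle(n, l):
    # Permutation matrix interleaving the l blocks of size n // l.
    Q = np.zeros((n, n), dtype=np.uint8)
    m = n // l
    for i in range(n):
        Q[i, (i % m) * l + i // m] = 1
    return Q

K = np.array([[1, 0], [1, 1]], dtype=np.uint8)  # Arikan's kernel
l, t = 2, 2
N = l ** t
D0 = block_diag_kernels([K] * (N // l))
D1 = block_diag_kernels([K] * (N // l))
Q1 = perfect_shuffle(N, l)
M = (D1 @ Q1 @ D0) % 2  # M^{(2)} = D^{(1)} Q^{(1)} D^{(0)} over GF(2)
```

In this toy case $M^{(2)}$ coincides with Ar\i kan's transform $F^{\otimes 2}$ up to a bit-reversal permutation of the rows.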
We take a random vector $\bU_{[1:N]}$ consisting of $N=\ell^t$ i.i.d. Bernoulli-$1/2$ random variables and transmit the random vector $\bX_{[1:N]}$ through $N$ independent copies of $W$. Denote the output vector by $\bY_{[1:N]}$. Then for every $i\in[N]$, the bit-channel mapping from $U_i$ to $(\bU_{[1:i-1]},\bY_{[1:N]})$ is exactly $W_{i_1,\dots,i_t}$, where $(i_1,\dots,i_t)$ is the $\ell$-ary expansion of $i$.
We have shown that almost all of the $N$ bit-channels $\{W_{i_1,\dots,i_t}:i_1,\dots,i_t\in[\ell]\}$ become either noiseless or completely noisy. In the code construction, we can track $H(W_{i_1,\dots,i_t})$ for every $(i_1,\dots,i_t)\in[\ell]^t$, and this allows us to identify which $U_i$'s are noiseless under successive decoding. Then in the encoding procedure, we only put information in these noiseless $U_i$'s and set all the other $U_i$'s to some ``frozen'' value, e.g., $0$. This is equivalent to saying that the generator matrix of our code is the submatrix of $M^{(t)}$ consisting of the rows corresponding to indices $i$ of the noiseless $U_i$'s. In Section~\ref{sect:cons}, we will show that, similarly to classical polar codes, both the encoding and decoding of our code have $O(N\log N)$ complexity.
As a final remark, we mention that we need to quantize every bit-channel obtained in every step of the code construction. More precisely, we merge output symbols whose log-likelihood ratios are close to each other, so that after quantization the output alphabet size of every bit-channel is always polynomial in $N$. This allows us to construct the code in polynomial time; without quantization, the output alphabet size would eventually be exponential in $N$. We provide more details about this aspect, and how it affects the code construction and the analysis of decoding, in Section~\ref{sect:local} and Section~\ref{sect:cons}.
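A minimal sketch of such a quantization step is given below. The interface is hypothetical (the channel is represented as output-symbol probabilities under input $0$ together with their LLRs), and the paper's actual merging rule additionally controls the entropy degradation, which is not modeled here:

```python
import numpy as np

def quantize_channel(probs, llrs, n_bins=64, llr_cap=30.0):
    # Merge output symbols whose log-likelihood ratios fall into the
    # same bin; the output alphabet size is then at most n_bins.
    llrs = np.clip(llrs, -llr_cap, llr_cap)
    edges = np.linspace(-llr_cap, llr_cap, n_bins + 1)
    idx = np.clip(np.digitize(llrs, edges) - 1, 0, n_bins - 1)
    merged_p, merged_llr = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            merged_p.append(probs[mask].sum())
            # Representative LLR: probability-weighted bin average.
            merged_llr.append(np.average(llrs[mask], weights=probs[mask]))
    return np.array(merged_p), np.array(merged_llr)

rng = np.random.default_rng(3)
p = rng.random(10_000)
p /= p.sum()                      # P(y | input 0) over 10,000 symbols
llr = rng.normal(0.0, 8.0, 10_000)
qp, qllr = quantize_channel(p, llr)
```

Merging preserves the total output probability while collapsing the alphabet from $10{,}000$ symbols to at most $64$.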
\subsection{Inverse sub-exponential decoding error probability}
\label{sec:overview:exponential-decoding}
So far, the construction described above achieves only an inverse polynomial decoding error probability. One reason for this limitation is the quantization of the bit-channels, which leaves us with only approximations of the actual bit-channels. In particular, this means that we only track the parameters (entropy and Bhattacharyya parameter) of the bit-channels approximately, with an additive error inverse polynomial in the block length. This directly translates into claiming only an inverse polynomial decoding error probability.
In a recent work, Wang and Duursma~\cite{Wang-Duursma} showed that it is possible to achieve a good scaling exponent ($2 + O(\alpha)$) and an inverse sub-exponential decoding error probability ($\exp(-N^{\alpha})$) for polar codes simultaneously, using the idea of multiple kernels in the construction. However, the construction phase in~\cite{Wang-Duursma} tracks the exact bit-channels obtained in the $\ell$-ary tree of channels (without quantization), which means that the construction of such polar codes is no longer doable in polynomial time. This is because (most of) the exact bit-channels cannot even be described in a tractable way, since they have output alphabets of exponential size.
We combine our approach of using Ar\i kan's kernels for polarized bit-channels (Case $1$ in Section~\ref{sect:sharp}) with a strong analysis of polarization from~\cite{Wang-Duursma} to achieve a good scaling exponent, an inverse sub-exponential decoding error probability, and polynomial time complexity of construction, all at the same time. The main idea behind our argument is that even though we cannot track the exact bit-channels in the construction, we know how the basic Ar\i kan kernel evolves the parameters of the bit-channels. Then, if we start with a slightly polarized bit-channel and take a sufficient number of ``good'' branches of Ar\i kan's $2\times 2$ kernel, we end up with a strongly polarized channel. The crucial observation here is that it suffices to track only the approximation of the bit-channel to verify that it is slightly polarized, and no additional computation is needed to check how many ``good'' branches were taken in the tree of bit-channels. In this way, we show that it is possible to prove very strong polarization for bit-channels, which leads to a good decoding error probability, while still only tracking the approximations of the bit-channels, which keeps the construction complexity polynomial. All of these arguments, which lead to the main result of this paper, are made precise and proven in Section~\ref{sec:exponential-decoding}.
\vspace{-2ex}
% ------------------------------------------------------------------
% https://arxiv.org/abs/1704.02030
% Using stacking to average Bayesian predictive distributions
% ------------------------------------------------------------------
\begin{abstract}
The widely recommended procedure of Bayesian model averaging is flawed in the $\mathcal{M}$-open setting in which the true data-generating process is not one of the candidate models being fit. We take the idea of stacking from the point estimation literature and generalize to the combination of predictive distributions, extending the utility function to any proper scoring rule, using Pareto smoothed importance sampling to efficiently compute the required leave-one-out posterior distributions and regularization to get more stability. We compare stacking of predictive distributions to several alternatives: stacking of means, Bayesian model averaging (BMA), pseudo-BMA using AIC-type weighting, and a variant of pseudo-BMA that is stabilized using the Bayesian bootstrap. Based on simulations and real-data applications, we recommend stacking of predictive distributions, with BB-pseudo-BMA as an approximate alternative when computation cost is an issue.
\end{abstract}
\section{Introduction}
A general challenge in statistics is prediction in the presence of multiple candidate models or learning algorithms $\mathcal{M}=(M_1,\dots, M_K)$. Model selection---picking one model that can give optimal performance for future data---can be unstable and wasteful of information \citep[see, e.g.,][]{predictiveCompare}. An alternative is model averaging, which tries to find an optimal model combination in the space spanned by all individual models. In the Bayesian context, the natural target for prediction is a predictive distribution that is close to the true data generating distribution \citep{score,Vehtari+Ojanen:2012}.
Ideally, we prefer to attack the Bayesian model combination problem via continuous model expansion---forming a larger bridging model that includes the separate models $M_k$ as special cases \citep{gelman2011parameterization}---but in practice constructing such an expansion can require conceptual and computational effort, and so it makes sense to consider simpler tools that work with existing inferences from separately-fit models.
\subsection{Bayesian model averaging}
If the set of candidate models, taken together, represents a full generative model, then the Bayesian solution is simply to average the separate models, weighting each by its marginal posterior probability. This is called {\em Bayesian model averaging} (BMA) and is optimal if the prior is correct, that is, if the method is evaluated based on its frequency properties over the joint prior distribution of the models and their internal parameters \citep{madigan1996bayesian, hoeting1999}. If $y=(y_1, \dots,y_n)$ represents the observed data, then the posterior distribution for any quantity of interest $\Delta$ is,
$$p(\Delta | y)= \sum_{k=1}^K p(\Delta| M_k, y)p(M_k |y).$$
In this expression, each model is weighted by its posterior probability,
$$p(M_k | y) =\frac{p(y | M_k ) p(M_k)} {\sum_{k=1}^K p(y | M_k ) p(M_k)} ,$$
and this expression depends crucially on the marginal likelihood under each model,
$$p(y | M_k ) = \int\! p(y | \theta_k, M_k ) p(\theta_k | M_k) d \theta_k\,.$$
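Given the log marginal likelihoods $\log p(y\,|\,M_k)$, the posterior model probabilities follow by normalization. A small numerically stable Python sketch, with made-up input values:

```python
import numpy as np

def bma_weights(log_marginal_liks, log_priors=None):
    # p(M_k | y) is proportional to p(y | M_k) p(M_k);
    # compute stably in log space.
    lml = np.asarray(log_marginal_liks, dtype=float)
    if log_priors is not None:
        lml = lml + np.asarray(log_priors, dtype=float)
    lml = lml - lml.max()          # log-sum-exp stabilization
    w = np.exp(lml)
    return w / w.sum()

w = bma_weights([-105.2, -103.7, -110.9])  # illustrative log p(y | M_k)
```

With a uniform model prior, the weights reduce to the normalized exponentiated log marginal likelihoods, so the model with the largest marginal likelihood dominates.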
In Bayesian model comparison, the relationship between the true data generator and the model list $\mathcal{M}=(M_1,\dots, M_K)$ can be classified into three categories: $ \mathcal{M}$-closed, $ \mathcal{M}$-complete and $ \mathcal{M}$-open. We adopt the following definition from \citet{Bernardo+Smith:1994, key1999bayesian, clyde2013bayesian}:
\begin{itemize}
\item $ \mathcal{M}$-closed means the true data generating model is one of $M_k \in \mathcal{M}$, although it is unknown to researchers.
\item $\mathcal{M}$-complete refers to the situation where a true model exists but lies outside the model list $\mathcal{M}$. We still wish to use the models in $\mathcal{M}$ because of the tractability of computation or communication of results, compared with the actual belief model. Thus, one simply finds the member of $\mathcal{M}$ that maximizes the expected utility (with respect to the true model).
\item $\mathcal{M}$-open refers to the situation in which we know the true model $M_t$ is not in $\mathcal{M}$, but we cannot specify the explicit form $p(\tilde y|y) = p(\tilde y|M_t,y)$, whether because it is conceptually too difficult, because we lack the time or expertise, or because it is computationally intractable.
\end{itemize}
BMA is appropriate for the $\mathcal{M}$-closed case. In the $\mathcal{M}$-open and $\mathcal{M}$-complete cases, BMA will asymptotically select the single model in the list that is closest to the true data-generating process in Kullback-Leibler (KL) divergence.
Furthermore, in BMA, the marginal likelihood depends sensitively on the specified prior $p(\theta_k | M_k)$ for each model. For example, consider a problem where a parameter has been assigned a normal prior distribution with center 0 and scale 10, and where its estimate is likely to be in the range $(-1, 1)$. The chosen prior is then essentially flat, as would also be the case if the scale were increased to 100 or 1000. But such a change would divide the posterior probability of the model by roughly a factor of 10 or 100.
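This sensitivity is easy to check in a conjugate example. With $y\sim N(\theta,1)$ and prior $\theta \sim N(0,s^2)$, the marginal likelihood is $p(y)=N(y;0,1+s^2)$, so widening the prior scale from $10$ to $100$ divides $p(y)$ by roughly a factor of $10$, even though both priors are essentially flat over the plausible range of $\theta$; the sketch below verifies this numerically:

```python
from math import sqrt, exp, pi

def normal_pdf(x, var):
    return exp(-x * x / (2.0 * var)) / sqrt(2.0 * pi * var)

# y ~ N(theta, 1) with theta ~ N(0, s^2) gives p(y) = N(y; 0, 1 + s^2).
y = 0.5
m_scale10 = normal_pdf(y, 1.0 + 10.0 ** 2)    # prior scale s = 10
m_scale100 = normal_pdf(y, 1.0 + 100.0 ** 2)  # prior scale s = 100
ratio = m_scale10 / m_scale100                # ~ 10
```

The ratio is driven almost entirely by the $\sqrt{1+s^2}$ normalizing constant, not by the fit to the data.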
\subsection{Predictive accuracy}
From another direction, one can simply aim to minimize out-of-sample predictive error, or equivalently to maximize expected predictive accuracy. In this paper we propose a novel log score stacking method for combining Bayesian predictive distributions. As a side result, we also propose a simple model weighting scheme using Bayesian leave-one-out cross-validation.
\subsection{Stacking}
{\em Stacking} is a direct approach for averaging point estimates from multiple models. The idea originates with \cite{wolpert1992}, and \cite{breiman1996} gives more details for stacking weights under different conditions. In supervised learning where the data are $((x_i, y_i), i=1,\dots, n)$, and each model $M_k$ has a parametric form $\hat y_k=f_k(x | \theta_k)$, stacking is done in two steps \citep{ting1999}. In the first, baseline-level, step, each model is fitted separately and the leave-one-out (LOO) predictor $\hat f_k ^{(-i)}(x_i) =E(y_i | \hat \theta_{k ,y_{-i}}, M_k )$ is obtained for each model $k$ and each data point $i$. Cross-validation or bootstrapping is used to avoid overfitting \citep{leblanc1996}. In the second, meta-level, step, a weight for each model is obtained by minimizing the mean squared error, treating the leave-one-out predictors from the previous stage as covariates:
\begin{equation}\label{stackingMean}
\hat w=\arg \min_w \sum_{i=1}^n \left(y_i-\sum_k w_k \hat f_k^{(-i)}(x_i)\right) ^2.
\end{equation}
\cite{breiman1996} notes that a positivity constraint, $w_k \geq 0$ for $k=1,\dots, K$, or a $K-1$ simplex constraint, $w_k \geq 0,\ \sum_{k=1}^{K} w_k =1$, enforces a solution. Better predictions may be attainable using regularization \citep{merz1999,yang2014minimax}. Finally, the point prediction for a new data point with feature vector $\tilde x$ is $$ \hat {\tilde y}= \sum_{k=1}^K \hat w_k f_k(\tilde x | \hat \theta_{k ,y_{1:n}} ).$$
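The meta-level optimization in (\ref{stackingMean}) under the simplex constraint can be sketched as follows; projected gradient descent is one of several possible solvers, and the regularized variants cited above are not implemented here:

```python
import numpy as np

def simplex_project(v):
    # Euclidean projection onto {w : w_k >= 0, sum_k w_k = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def stacking_of_means(F, y, steps=2000, lr=0.01):
    # Minimize (1/n) sum_i (y_i - sum_k w_k F[i, k])^2 over the simplex,
    # where F[i, k] is model k's leave-one-out predictor for y_i.
    n, K = F.shape
    w = np.full(K, 1.0 / K)
    for _ in range(steps):
        grad = -2.0 * F.T @ (y - F @ w) / n
        w = simplex_project(w - lr * grad)
    return w

# Synthetic check: data generated as a 0.7 / 0.3 mix of two predictors.
rng = np.random.default_rng(4)
F = rng.normal(size=(200, 2))
y = F @ np.array([0.7, 0.3]) + rng.normal(0.0, 0.01, 200)
w_hat = stacking_of_means(F, y)
```

On this synthetic example the recovered weights are close to the generating mixture $(0.7, 0.3)$.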
It is not surprising that stacking typically outperforms BMA when the criterion is mean squared predictive error \citep{clarke2003}, because BMA is not optimized for this task. \citet{wong2004improvement} emphasize that the BMA weights reflect the fit to the data rather than the prediction accuracy. On the other hand, stacking is not widely used in Bayesian model combination because it only works with point estimates, not the entire posterior distribution \citep{hoeting1999}.
\citet{clyde2013bayesian} give a Bayesian interpretation for stacking by considering model combination as a decision problem when the true model $M_t$ is not in the model list. If the decision is of the form $a(y,w)=\sum_{k=1}^K w_k \hat y_k $, then the expected utility under quadratic loss is,
$$\mathrm{E}_ {\tilde y} (u( \tilde y, a(y,w) )| y ) =- \int || \tilde y- \sum_{k=1}^K w_k \hat {\tilde {y}}_k ||^2 p(\tilde y | y, M_t ) d\tilde y, $$
where $\hat {\tilde {y}}_k$ is the predictor of the new data $\tilde y$ under model $k$.
The stacking weights are the solution to the LOO estimator:
$$\hat w= {\arg\max} _w \frac{1}{n} \sum_{i=1}^n u(y_i, a(y_{-i}, w) ) ,$$
where $a(y_{-i}, w)=\sum_{k=1}^K w_k \mathrm{E}(y_i | y_{-i}, M_k)$.
The stacking predictor for new data $\tilde y$ is $ \sum_{k=1}^K \hat w_k \hat {\tilde {y}}_k$. The predictor $\hat {\tilde {y}}_k$ in the $k$-th model can be either the plug-in estimator,
$$ \mathrm{E}_k(\tilde y | \hat \theta_{k}, y )= \int \tilde y p (\tilde y |\hat \theta_{k, y}, M_k ) d\tilde y, $$
or the posterior mean,
$$\mathrm{E}_k(\tilde y | y ) = \int \tilde y p( \tilde y | \theta_{k} , M_k) p(\theta_k| y, M_k) d \tilde y d \theta_k. $$
\cite{le2016bayes} prove that the stacking solution is asymptotically the Bayes solution. Under some mild conditions on the distribution, the following asymptotic relation holds:
$$ \int l( \tilde y, a(y,w) ) p(\tilde y | y) d \tilde y - \frac{1}{n}\sum_{i=1}^n l( y_i, a(y_{-i}, w) ) \xlongrightarrow{\text{$L_2$}} 0, $$
where $l$ is the squared loss, $l(\tilde y,a)=(\tilde y-a)^2.$
They also prove that when the action is a predictive distribution, $a(y_{-i}, w)=\sum_{k=1}^K w_k p(y_i | y_{-i}, M_k)$, the asymptotic relation still holds for the negative logarithm scoring rule.
However, most of the early literature limited stacking to averaging \emph{point} predictions rather than \emph{predictive distributions}. In this paper, we extend stacking from minimizing squared error to maximizing proper scoring rules, thereby making stacking applicable to combining a set of Bayesian posterior predictive distributions. We argue this is the appropriate version of Bayesian model averaging in the $\mathcal{M}$-open situation.
\subsection{Fast leave-one-out cross-validation}
Exact leave-one-out cross-validation can be computationally costly. For example, in the econometrics literature, \citet{geweke2011optimal,geweke2012prediction} suggest averaging prediction models by maximizing the predictive log score, considering only time series due to the computational cost of exact LOO for general data structures. In the present paper we demonstrate that Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO) \citep{practicalPSIS} is a practical and efficient way to calculate the leave-one-out predictive densities $p(y_i | y_{-i}, M_k)$ needed to compute log score stacking weights.
\subsection{Akaike weights and pseudo Bayesian model averaging}
Leave-one-out cross-validation is related to various information criteria \citep[see, e.g.,][]{Vehtari+Ojanen:2012}. In the case of maximum likelihood estimates, leave-one-out cross-validation is asymptotically equal to Akaike's information criterion (AIC) \citep{Stone:1977a}.
Given $\mathrm{AIC} = -2\log(\text{maximum likelihood}) + 2\,(\text{number of parameters})$, \citet{akaike1978likelihood} proposed using $\exp(-\frac{1}{2} \mathrm{AIC})$ for model weighting \citep[see also][]{Burnham+Anderson:2002,wagenmakers2004aic}.
More recently, the Watanabe-Akaike information criterion (WAIC) \citep{Watanabe:2010d} and leave-one-out cross-validation estimates have also been used to compute model weights, following the idea of AIC weights.
In the Bayesian setting, \citeauthor{Geisser+Eddy:1979} (\citeyear{Geisser+Eddy:1979}; see also \citealt{Gelfand:1996}) proposed pseudo Bayes factors, where marginal likelihoods $p(y|M_k)$ are replaced with products of Bayesian leave-one-out cross-validation predictive densities $\prod_{i=1}^np(y_i | y_{-i}, M_k)$. Following the naming of \citeauthor{Geisser+Eddy:1979}, we call AIC-type weighting that uses Bayesian cross-validation predictive densities pseudo Bayesian model averaging (Pseudo-BMA).
In this paper we show that the uncertainty in the future data distribution should be taken into account when computing such weights. We propose an AIC-type weighting using the Bayesian bootstrap and the expected log predictive density (elpd), which we call Pseudo-BMA+ weighting. We show that although Pseudo-BMA+ weighting gives better results than regular BMA or Pseudo-BMA weighting (in the $\mathcal{M}$-open setting), it is still inferior to log score stacking. Due to its simplicity, we use the Pseudo-BMA+ weights as the initial guess for the optimization procedure in log score stacking.
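A sketch of this weighting scheme in Python, assuming the pointwise log predictive densities $\log p(y_i\mid y_{-i},M_k)$ have already been computed (e.g., by PSIS-LOO); the exact estimator defined later in the paper may differ in details:

```python
import numpy as np

rng = np.random.default_rng(2)

def pseudo_bma_plus(lpd_pointwise, n_draws=1000):
    # lpd_pointwise[k, i] = log p(y_i | y_{-i}, M_k).  Weights are
    # w_k proportional to exp(elpd_k), averaged over Bayesian-bootstrap
    # draws that reweight the n pointwise terms by Dirichlet(1,...,1).
    lpd = np.asarray(lpd_pointwise, dtype=float)
    K, n = lpd.shape
    w_sum = np.zeros(K)
    for _ in range(n_draws):
        alpha = rng.dirichlet(np.ones(n))
        z = n * (lpd @ alpha)      # bootstrap replicate of elpd_k
        z = z - z.max()
        w = np.exp(z)
        w_sum += w / w.sum()
    return w_sum / n_draws

# Toy example: model 1 is uniformly better by 0.3 per data point.
lpd = np.vstack([np.full(50, -1.0), np.full(50, -1.3)])
w = pseudo_bma_plus(lpd)
```

The Bayesian bootstrap spreads each weight over plausible realizations of the future data distribution, which regularizes the otherwise winner-take-all exponential weighting.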
\subsection{Other model weighting approaches}\label{reference-bma}
Besides BMA, stacking, and AIC-type weighting, other methods have been introduced to combine Bayesian models. \citet{gutierrez2005statistical} consider using a nonparametric prior in the decision problem stated above. Essentially they fit a mixture model with a Dirichlet process, yielding a posterior expected utility of,
$$ U_n (w_k, \theta_k)= \sum _{i=1}^{n} \sum_{ k=1}^K w_k f_k (y_i | \theta_k).$$
They then solve for the optimal weights $\hat w_k = \arg \max_{w_k, \theta_k}U_n (w_k, \theta_k)$.
\citet{dbd} propose model averaging using weights based on divergences from a reference model in the $\mathcal{M}$-complete setting. If the true data generating density function is known to be $f^*$, then an AIC-type (or Boltzmann-Gibbs-type) weight can be defined as,
\begin{equation} \label{reference-pseudo-bma}
w_k =\frac{\exp\bigl(-n \mathrm{KL}(f^*, f_k) \bigr)}{\sum_{k=1}^K \exp \bigl(-n \mathrm{KL}(f^*, f_{k})\bigr) }.
\end{equation}
The true model can be approximated by a reference model $M_0$ with density $f_0(\cdot \,|\, \theta_0)$ using nonparametric methods such as Gaussian processes or Dirichlet processes, and $\mathrm{KL}(f^*, f_k)$ can be estimated by its posterior mean,
$$ \mathrm{\widetilde{ KL}_1} (f_0, f_k)=\int \!\!\int \mathrm{KL} \Bigl( f_0(\cdot| \theta_0) , f_k(\cdot| \theta_k) \Bigr)p(\theta_k | y, M_k)p(\theta_0 | y, M_0) d\theta_k d\theta_0,$$
or by the Kullback-Leibler divergence for posterior predictive distributions,
$$ \mathrm{\widetilde{ KL}_2} (f_0, f_k)= \mathrm{KL} \Bigl( \int\! f_0(\cdot| \theta_0)p(\theta_0 | y, M_0)d\theta_0 , \int\! f_k(\cdot| \theta_k) p(\theta_k | y, M_k)d\theta_k \Bigr) .$$
Here, $\mathrm{\widetilde{ KL}_1}$ corresponds to Gibbs utility, which can be criticized for not using the posterior predictive distributions \citep{Vehtari+Ojanen:2012}, although asymptotically the two utilities are identical, and $\mathrm{\widetilde{ KL}_1}$ is often computationally simpler than $\mathrm{\widetilde{ KL}_2}$.
Let $p(\tilde y |y, M_k) = \int\! f_k(\tilde y| \theta_k) p(\theta_k | y, M_k)d\theta_k $, $k=0,\dots,K $, then
$$\mathrm{\widetilde{ KL}_2} (f_0, f_k)= -\int\! \log p(\tilde y |y, M_k) p(\tilde y |y, M_0) d \tilde y + \int\! \log p(\tilde y |y, M_0) p(\tilde y |y, M_0) d \tilde y $$
As the entropy of the reference model $\int\! \log p(\tilde y |y, M_0) p(\tilde y |y, M_0) d \tilde y$ is constant, the corresponding terms cancel out in the weight (\ref{reference-pseudo-bma}), leaving
$$w_k = \frac{ \exp \bigl(n \int\! \log p(\tilde y |y,M_k) p(\tilde y |y, M_0) d \tilde y \bigr) } {\sum_{k=1}^K \exp \bigl(n \int\! \log p(\tilde y |y, M_{k}) p(\tilde y |y, M_0) d \tilde y \bigr)}.$$
The weight is proportional to the exponentiated expected log predictive density, where the expectation is taken with respect to the reference model $M_0$. Compared with definition (\ref{P-BMA}) in Section \ref{Pseudo-BMA}, this method could be called Reference-Pseudo-BMA.
\section{Theory and methods}
We label classical stacking (\ref{stackingMean}) as \emph{stacking of means} because it combines the models by minimizing the mean squared error of the point estimate, that is, the $L_2$ distance between the posterior mean and the observed data.
In general, we can use any proper scoring rule (or, equivalently, its underlying divergence) to compare distributions; once a rule is chosen, stacking can be extended to combine whole predictive distributions.
\subsection{Stacking using proper scoring rules}
Adapting the notation of \cite{score}, let $Y$ be a random variable on the sample space $(\Omega, \mathcal{A})$ taking values in $(-\infty, \infty)$, and let $\mathcal{P}$ be a convex class of probability measures on $\Omega$. Any member of $\mathcal{P}$ is called a probabilistic forecast.
A function $S: \mathcal{P} \times \Omega \to \bar{ \mathrm{R}}$ defines a scoring rule if $S(P, \cdot)$ is $\mathcal{P}$-quasi-integrable for all $P \in \mathcal{P}$. In the continuous case, a distribution $P$ is identified with its density function $p$.
For two probabilistic forecasts $P$ and $Q$, we write $S(P, Q)=\int S(P, \omega) dQ(\omega) $. A scoring rule $S$ is called \emph{proper} if $S(Q,Q) \geq S(P,Q)$ for all $P, Q \in \mathcal{P}$, and \emph{strictly proper} if equality holds only when $P= Q $ almost surely. A proper scoring rule defines the divergence $d: \mathcal{P} \times \mathcal{P} \to [0, \infty) $ as $d(P,Q)=S(Q,Q)-S(P,Q)$.
For continuous variables, some popularly used scoring rules include:
\begin{enumerate}
\item {\em Quadratic score:} $\mathrm{QS}(p,y) = 2p(y)-||p||_2^2$ with the divergence $d(p,q)= ||p-q ||_2^2$.
\item {\em Logarithmic score:}
$\mathrm{LogS}(p,y)=\log(p(y)) $ with $d(p,q)=\mathrm{KL}(q,p).$
Under regularity conditions, the logarithmic score is the only proper local scoring rule.
\item {\em Continuous ranked probability score:}
$\mathrm{CRPS}(F,y)=-\int_\mathrm{R} (F(y') - 1(y' \geq y) ) ^2 dy'$ with $d(F, G)=\int_\mathrm{R} (F(y)-G(y))^2 dy $, where $F$ and $G$ are the corresponding distribution functions.
\item {\em Energy score:}
$\mathrm{ES}(P,y)= \frac{1}{2}\mathrm{E}_P || Y-Y' ||_2^\beta- \mathrm{E}_P|| Y-y||_2^\beta $, where $Y$ and $Y'$ are two independent random variables from distribution $P$.
When $\beta=2$, this becomes $\mathrm{ES}(P,y)= -||\mathrm{E}_P(Y)-y||^2.$
The energy score is strictly proper when $\beta \in (0,2)$ but not when $\beta=2$.
\item {\em Scoring rules depending on first and second moments:} Examples include $S(P, y)=-\log\mathrm{det} (\Sigma_P)- (y- \mu_P )^T \Sigma_P^{-1}(y-\mu_P )$, where $\mu_P $ and $\Sigma_P$ are the mean vector and covariance matrix of distribution $P$.
\end{enumerate}
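As a concrete check of these definitions, the following sketch (ours, not from any cited source) evaluates the logarithmic and quadratic scores for a univariate normal forecast $\mbox{N}(\mu, \sigma^2)$, using the standard identity $||p||_2^2 = 1/(2\sigma\sqrt{\pi})$ for a normal density, and verifies propriety by Monte Carlo.

```python
import math
import random

# Scores for a univariate normal forecast N(mu, sigma^2); helper names are ours.
def log_score(mu, sigma, y):
    # LogS(p, y) = log p(y)
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (y - mu) ** 2 / (2 * sigma ** 2)

def quadratic_score(mu, sigma, y):
    # QS(p, y) = 2 p(y) - ||p||_2^2; for a normal density, ||p||_2^2 = 1/(2 sigma sqrt(pi))
    return 2 * math.exp(log_score(mu, sigma, y)) - 1 / (2 * sigma * math.sqrt(math.pi))

# Propriety check by Monte Carlo: under data from N(0, 1), the true forecast
# should achieve a higher average score than a shifted forecast N(1, 1).
random.seed(1)
draws = [random.gauss(0.0, 1.0) for _ in range(20000)]
for score in (log_score, quadratic_score):
    true_avg = sum(score(0.0, 1.0, y) for y in draws) / len(draws)
    off_avg = sum(score(1.0, 1.0, y) for y in draws) / len(draws)
    assert true_avg > off_avg
```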
Now return to the problem of model combination after specifying the scoring rule $S$ and corresponding divergence $d$. The observed data are $y=(y_1, \dots , y_n)$. For simplicity, we remove all covariates $x$ from the notation. Suppose we have a set of probabilistic models $\mathcal{M}= ( M_1, \dots, M_K)$; the goal in stacking is then to find the optimal model in the convex linear combination family $\mathcal{C}= \big\{ \sum_{k=1}^{K}w_k p(\cdot | M_k) \mid \sum_{k=1}^K w_k=1, w_k \geq 0 \big\}$ whose divergence from the true data generating model, denoted by $p_t(\cdot|y)$, is minimized:
$$\min_{w} d \Bigl( \sum_{k=1}^K w_k p( \cdot | y, M_k ), p_t(\cdot | y) \Bigr) , $$
or, equivalently, to maximize the scoring rule of the predictive distribution,
\begin{equation} \label{stacking_population}
\max_{w} S\Bigl( \sum_{k=1}^K w_k p(\cdot | y, M_k), p_t(\cdot | y) \Bigr),
\end{equation}
where $ p( \tilde y | y, M_k)$ is the predictive density of $\tilde y$ in model $k$:
$$
p( \tilde y | y, M_k) = \int p( \tilde y | \theta_k, y, M_k)p(\theta_k | y, M_k)d \theta_k .
$$
We label the leave-$i$-out predictive density in model $k$ as
$$\hat p_{k, -i}(y_i)= \int p(y_i | \theta_k, M_k) p(\theta_k | y_{-i}, M_k) d\theta_k , $$
where $y_{-i}=(y_1,\dots, y_{i-1}, y_{i+1}, \dots, y_n)$.
Then we define the stacking weights as the solution to the following optimization problem:
\begin{equation} \label{stacking}
\max_w \frac{1}{n}\sum_{i=1}^n S\Bigl( \sum_{k=1}^K w_k \hat p_{k,-i}, y_i\Bigr) , \quad s.t. \quad w_k \geq 0, \quad \sum_{k=1}^K w_k=1.
\end{equation}
Finally, the combined estimate of the predictive density is
\begin{equation} \label{predictive}
\hat p(\tilde y |y)= \sum_{k=1}^K \hat w_k p(\tilde y|y, M_k).
\end{equation}
When using the logarithmic score (corresponding to the Kullback-Leibler divergence), we call this \emph{stacking of predictive distributions}:
\begin{equation*}
\max_w \frac{1}{n} \sum_{i=1}^n \log \sum_{k=1}^K w_k p(y_i | y_{-i}, M_k) , \quad s.t. \quad w_k \geq 0, \quad \sum_{k=1}^K w_k=1.
\end{equation*}
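As an illustration of how (\ref{stacking}) can be solved under the logarithmic score, here is a minimal sketch. Because the objective is a mixture log-likelihood in $w$ with the leave-one-out densities held fixed, simple EM-type multiplicative updates converge to the optimum of this concave problem; the function name and the assumption that the LOO densities are precomputed are ours.

```python
# Stacking of predictive distributions under the log score: a minimal sketch.
# p_hat[i][k] holds the leave-i-out predictive density p(y_i | y_{-i}, M_k),
# assumed precomputed (e.g., by approximate LOO). EM-type updates suffice
# because the objective is a mixture log-likelihood in w with fixed components.
def stacking_weights(p_hat, n_iter=1000):
    n, K = len(p_hat), len(p_hat[0])
    w = [1.0 / K] * K
    for _ in range(n_iter):
        w_new = [0.0] * K
        for row in p_hat:
            denom = sum(w[k] * row[k] for k in range(K))
            for k in range(K):
                w_new[k] += w[k] * row[k] / denom
        w = [v / n for v in w_new]
    return w

# Two models, each matching half of the data equally well: by symmetry the
# optimal weights are (1/2, 1/2).
p_hat = [[0.9, 0.1], [0.1, 0.9], [0.9, 0.1], [0.1, 0.9]]
w = stacking_weights(p_hat)
assert abs(w[0] - 0.5) < 1e-8 and abs(sum(w) - 1.0) < 1e-12
```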
The choice of scoring rule can depend on the underlying application. Stacking of means (\ref{stackingMean}) corresponds to the energy score with $\beta = 2$. The reasons why we prefer stacking of predictive distributions (corresponding to the logarithmic score) to stacking of means are: (i) the energy score with $\beta=2 $ is not a strictly proper scoring rule and can give rise to identification problems, and (ii) every proper local scoring rule is equivalent to the logarithmic score \citep{score}.
\subsection{Asymptotic behavior of stacking}
The stacking estimate (\ref{stacking_population}) finds the optimal predictive distribution within the convex set $\mathcal{C}=\big\{ \sum_{k=1}^{K}w_k p(\cdot | M_k ) \mid \sum_{k=1}^K w_k=1, w_k \geq 0 \big\} $, that is, the member of $\mathcal{C}$ closest to the data generating process with respect to the chosen scoring rule.
This is different from Bayesian model averaging, which asymptotically with probability 1 will select a single model: the one that is closest in KL divergence to the true data generating process.
Solving for the stacking weights in (\ref{stacking}) is an M-estimation problem. Under some mild conditions \citep{le2016bayes, clyde2013bayesian, key1999bayesian}, for either the logarithmic scoring rule or the energy score (negative squared error) and a given set of weights $w_1, \dots, w_K$, as the sample size $n \to \infty$, the following asymptotic limit holds:
$$
\frac{1}{n}\sum_{i=1}^n S\Bigl( \sum_{k=1}^K w_k \hat p_{k, -i} , y_i \Bigr) - \mathrm{E}_{\tilde y| y} S\Bigl( \sum_{k=1}^K w_k p(\tilde y| y , M_k) , \tilde y\Bigr) \xlongrightarrow{\text{$L_2$}} 0.
$$
Thus the leave-one-out score is a consistent estimator of the posterior predictive score. In this sense, the stacking weights are asymptotically the optimal combination weights.
In terms of \citet[][Section 3.3]{Vehtari+Ojanen:2012}, the proposed stacking with the log score is the $M_{*}$-optimal projection of the information in the actual belief model $M_{*}$ to $\hat{w}$, where explicit specification of $M_{*}$ is avoided by re-using the data as a proxy for the predictive distribution of the actual belief model, and the $w_k$ are the free parameters.
\subsection{Pareto smoothed importance sampling}
One challenge in calculating the stacking weights proposed in (\ref{stacking}) is that we need the
leave-one-out (LOO) predictive density,
$$p(y_i | y_{-i}, M_k)=\int p(y_i | \theta_k, M_k) p(\theta_k | y_{-i}, M_k) d\theta_k. $$
Exact LOO requires refitting each model $n$ times. To avoid this onerous computation, we use the following approximate method. We fit the $k$-th model to all the data, obtain simulation draws $\theta_k^s$ ($s=1,\dots, S$) from the full posterior $p(\theta_k|y, M_k)$, and calculate
\begin{equation} \label{ratio}
r_{i,k}^s =\frac{1} {p(y_i | \theta_k^s, M_k) } \propto \frac{p(\theta_k^s | y_{-i}, M_k)}{ p(\theta_k^s | y, M_k) }.
\end{equation}
The ratio $ r_{i,k}^s$ has a density function in its denominator and can be unstable due to a potentially long right tail. This problem can be resolved using Pareto smoothed importance sampling (PSIS). For each fixed model $k$ and data point $i$, we fit the generalized Pareto distribution to the 20\% largest importance ratios $ r_{i,k}^s$, and calculate the expected values of the order statistics of the fitted generalized Pareto distribution. We further truncate these values to get the smoothed importance weights $w_{i,k}^s$, which are used to replace the $ r_{i,k}^s$. For details of PSIS, see \cite{practicalPSIS}. In the end, the LOO importance sampling is performed using
$$p(y_i | y_{-i}, M_k ) \approx \frac{1}{ \frac{1}{S} \sum_{s=1}^{S} w_{i,k}^s } .$$
When stacking using the logarithmic score, we are combining each model's log predictive density. The PSIS estimate of the LOO expected log pointwise predictive density in the $k$-th model is,
$$ \mathrm { \widehat {elpd}_{loo}}^ k = \sum_{i=1}^n \log\left( \frac{ \sum _{s=1}^S w_{i,k}^s p(y_i | \theta_k^s, M_k) }{ \sum _{s=1}^S w_{i,k}^s } \right).$$
The reliability of the PSIS approximation can be assessed using the estimated shape parameter $\hat k$ of the generalized Pareto distribution. For left-out data points with $\hat k>0.7$, \cite{practicalPSIS} suggest replacing the PSIS approximation by exact LOO or $k$-fold cross-validation.
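For intuition about the mechanics, here is a deliberately simplified stand-in for PSIS (an illustration, not the actual algorithm): plain truncated importance sampling, capping each raw ratio at $\sqrt{S}$ times the mean ratio rather than Pareto-smoothing the largest 20\%.

```python
import math

# Simplified stand-in for PSIS-LOO (illustration only): truncated importance
# sampling. lik[s] = p(y_i | theta_k^s, M_k) evaluated at full-posterior draws;
# the raw ratios r^s = 1 / lik[s] are capped at sqrt(S) times their mean,
# whereas real PSIS instead smooths the top 20% with a generalized Pareto fit.
def loo_density_tis(lik):
    S = len(lik)
    r = [1.0 / l for l in lik]
    cap = math.sqrt(S) * sum(r) / S
    w = [min(x, cap) for x in r]
    # self-normalized importance-sampling estimate of p(y_i | y_{-i}, M_k)
    return sum(w_s * l for w_s, l in zip(w, lik)) / sum(w)

# Sanity checks: with constant likelihood the estimate is exact, and in general
# it stays between the smallest and largest pointwise likelihood values.
assert abs(loo_density_tis([0.5] * 100) - 0.5) < 1e-12
lik = [0.2, 0.8, 0.5, 0.6]
est = loo_density_tis(lik)
assert min(lik) <= est <= max(lik)
```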
One potential drawback of LOO is its large variance when the sample size is small. We see in the simulations that when the ratio of sample size to the effective number of parameters is small, the weighting can be unstable. How to adjust for this small-sample behavior is left for future research.
\subsection{Pseudo-BMA}\label{Pseudo-BMA}
We also consider an AIC-type weighting based on leave-one-out cross-validation estimates of the expected log predictive density. As mentioned in Section \ref{reference-bma}, these weights estimate the same quantities as those of \citet{dbd}, which are based on divergences from a reference model.
To maintain comparability with the given dataset and to make the differences in the scale of the effective number of parameters easier to interpret, we define the expected log pointwise predictive density (elpd) for a new dataset $\tilde y$ as a measure of predictive accuracy of a given model for the $n$ data points taken one at a time \citep{understanding}. In model $M_k$,
$$\mathrm{elpd}^k = \sum_{i=1}^{n} \int p_t(\tilde y_i) \log p(\tilde y_i |y, M_k) d \tilde y_i , $$
where $ p_t(\tilde y_i)$ denotes the true distribution of future data $\tilde y_i$.
Given observed data $y$, we estimate the elpd using LOO:
$$ \mathrm{\widehat {elpd}}^k_{\mathrm{loo}} = \sum_{i=1}^{n} \log p(y_i |y _{-i}, M_k) . $$
Then we define the Pseudo-BMA weighting for model $k$:
\begin{equation}\label{P-BMA}
w_k= \frac{ \exp( \mathrm{\widehat {elpd}}^k_{\mathrm{loo}} ) } { \sum_{k=1}^K \exp( \mathrm{\widehat {elpd}}^{k}_{\mathrm{loo}} ) }.
\end{equation}
However, this estimate does not take into account the uncertainty that results from having a finite number of proxy samples from the future data distribution.
Taking this uncertainty into account would regularize the weights,
moving them away from the extremes of 0 and 1.
The estimate $\mathrm{\widehat {elpd}}^k_{\mathrm{loo}}$ is defined as a sum of $n$ independent components, so it is trivial to compute its standard error from the standard deviation of the $n$ pointwise values \citep{vehtari2002bayesian}. Define
$$\mathrm{\widehat {elpd}}_{\mathrm{loo},i}^k=\log p(y_i | y_{-i }, M_k),$$
and then we can calculate
$$\mathrm{se} ( \mathrm{\widehat {elpd}}_{\mathrm{loo}}^k ) = \sqrt{ \sum_{i=1}^n (\mathrm{\widehat {elpd}}_{\mathrm{loo},i}^k- \mathrm{\widehat {elpd}}_{\mathrm{loo}}^k /n )^2 }.$$
A simple modification of the weights is to use a log-normal approximation:
$$w_k= \frac{ \exp\bigl( \mathrm{\widehat {elpd}}^k_{\mathrm{loo}} -\frac{1}{2} \mathrm{se} ( \mathrm{\widehat {elpd}}_{\mathrm{loo}}^k ) \bigr)} { \sum_{k=1}^K \exp\bigl(\mathrm{\widehat {elpd}}^k_{\mathrm{loo}} -\frac{1}{2} \mathrm{se} ( \mathrm{\widehat {elpd}}_{\mathrm{loo}}^k ) \bigr) }.$$
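In code, both the Pseudo-BMA weights (\ref{P-BMA}) and this adjustment are softmax transformations of (possibly penalized) $\mathrm{\widehat{elpd}}_{\mathrm{loo}}$ values; the max-shift below is the usual guard against numerical overflow. Function names and the example numbers are ours.

```python
import math

# Pseudo-BMA weights: a softmax over elpd_loo values, with the standard
# max-shift (log-sum-exp trick) for numerical stability.
def softmax(xs):
    m = max(xs)
    ex = [math.exp(x - m) for x in xs]
    s = sum(ex)
    return [e / s for e in ex]

def pseudo_bma(elpd_loo):
    return softmax(elpd_loo)

def pseudo_bma_lognormal(elpd_loo, se):
    # log-normal adjustment: penalize each model by half its standard error
    return softmax([e - 0.5 * s for e, s in zip(elpd_loo, se)])

elpd = [-520.3, -519.1, -530.8]          # illustrative values, not from the paper
w = pseudo_bma(elpd)
assert abs(sum(w) - 1.0) < 1e-12
assert w[1] == max(w)                    # highest elpd_loo gets the largest weight

# A large standard error can shrink a model's advantage:
w_adj = pseudo_bma_lognormal(elpd, se=[1.0, 8.0, 1.0])
assert w_adj[1] < w[1]
```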
Finally, the Bayesian bootstrap (BB) can be used to compute uncertainties related to the LOO estimation \citep{vehtari2002bayesian}. The Bayesian bootstrap \citep{rubin1981bayesian} makes a simple nonparametric approximation to the distribution of a random variable. Given samples $z_1,\dots , z_n$ of a random variable $Z$, it is assumed that the posterior probabilities of all observed $z_i$ follow a $\mbox{Dirichlet}(1,\dots,1)$ distribution and that values of $Z$ not observed have zero posterior probability. Sampling from the uniform Dirichlet distribution gives BB samples from the distribution of $Z$, and thus samples of any parameter of this distribution can be obtained.
In other words, each BB replication generates a set of posterior probabilities $\alpha_{1:n}$ for all observed $z_{1:n}$,
$$ \alpha_{1:n}\sim \mbox{Dirichlet}(\overbrace{1,\dots, 1}^{n}), \quad P(Z=z_i |\alpha)=\alpha_i. $$
This leads to one BB replication of any statistic $\phi(Z)$ that is of interest:
$$\hat \phi(Z | \alpha) = \sum_{i=1}^n \alpha_i \phi(z_i). $$
The distribution over all replicated $\hat \phi(Z |\alpha)$ (i.e., generated by repeated sampling of $\alpha$) produces an estimate of the distribution of $\phi(Z)$.
As the distribution of $\mathrm{\widehat {elpd}}^k_{\mathrm{loo},i }$ is often highly skewed, BB is likely to work better than the Gaussian approximation. In our model weighting, we can define
$$z_i^k = \mathrm{\widehat {elpd}}^k_{\mathrm{loo},i}, \quad i=1,\dots, n.$$
We sample vectors $(\alpha_{1,b},\dots ,\alpha_{n,b})_{b=1,\dots ,B}$ from the Dirichlet $(\overbrace{1,\dots, 1}^{n})$ distribution, and compute the weighted means,
$$ \bar z_b^k = \sum_{i=1}^n \alpha_{i,b}z_i^k. $$
Then a Bayesian bootstrap sample of $w_k$ with size $B$ is,
$$w_{k,b}= \frac{ \exp(n \bar{z}^k_b ) }{\sum_{k=1}^K\exp(n \bar{z}^{k}_b )},\quad {b=1,\dots ,B},$$
and the final adjusted weight of model $k$ is,
\begin{equation}
w_{k}= \frac{1}{B}\sum_{b=1}^B w_{k,b},
\end{equation}
which we call Pseudo-BMA+ weights.
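The Pseudo-BMA+ computation can be sketched as follows (our illustration; Dirichlet$(1,\dots,1)$ draws are obtained by normalizing i.i.d.\ Exponential$(1)$ variates).

```python
import math
import random

# Pseudo-BMA+ weights via the Bayesian bootstrap. z[k][i] holds the pointwise
# LOO value elpd_loo,i for model k; each replication draws one
# alpha ~ Dirichlet(1,...,1) by normalizing i.i.d. Exponential(1) variates.
def pseudo_bma_plus(z, B=1000, seed=0):
    rng = random.Random(seed)
    K, n = len(z), len(z[0])
    w = [0.0] * K
    for _ in range(B):
        g = [rng.expovariate(1.0) for _ in range(n)]
        tot = sum(g)
        alpha = [x / tot for x in g]                       # one Dirichlet draw
        zbar = [sum(a * zi for a, zi in zip(alpha, z[k])) for k in range(K)]
        m = max(zbar)
        ex = [math.exp(n * (zb - m)) for zb in zbar]       # stabilized softmax of n * zbar
        s = sum(ex)
        for k in range(K):
            w[k] += ex[k] / (s * B)
    return w

# Two models with identical pointwise elpd values must receive equal weight.
z = [[-1.2] * 20, [-1.2] * 20]
w = pseudo_bma_plus(z)
assert abs(w[0] - 0.5) < 1e-9 and abs(sum(w) - 1.0) < 1e-9
```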
\section{Simulation examples}
In this section, we first illustrate the advantage of stacking of predictive distributions with a Gaussian mixture model. Then we compare stacking, BMA, Pseudo-BMA, Pseudo-BMA+, and other averaging methods through a series of linear regression simulations, in which stacking gives the best performance in most cases. Finally, we apply stacking to two real datasets, averaging multiple models so as to better explain US Senate voting and well-switching patterns in Bangladesh.
\subsection{Gaussian mixture model}
This simple example helps us understand how BMA and stacking behave differently. It also illustrates the importance of the choice of scoring rule in combining distributions. Suppose the observed data $y=(y_i, i=1,\dots ,n)$ are i.i.d.\ draws from a normal distribution N$(3.4, 1)$, a fact not known to the data analyst, and there are 8 candidate models, $\mbox{N}(1, 1)$, $\mbox{N} (2, 1)$,\dots , $\mbox{N} (8, 1)$. This is an $\mathcal{M}$-open problem in that none of the candidates is the true model, and we have set the parameters so that the models are somewhat separate but not completely distinct in their predictive distributions.
For BMA with a uniform prior $\mbox{Pr}(M_k)= \frac{1}{8}, k=1,\dots ,8$, we can write the posterior distribution explicitly:
$$\hat w_k^{\mathrm{BMA}}=P(M_k | y) = \frac{\exp( - \frac{1}{2}\sum_{i=1}^n (y_i- \mu_k)^2 )}{ \sum_{k'} \exp(- \frac{1}{2} \sum_{i=1}^n (y_i- \mu_{k'})^2 ) },
$$
from which we see that $\hat w_3^{\mathrm{BMA}} \xlongrightarrow{\text{$P$}} 1$ and $\hat w_k^{\mathrm{BMA}} \xlongrightarrow{\text{$P$}} 0$ for $k\neq 3$ as sample size $n \to\infty $.
Furthermore, for any given $n$,
\begin{eqnarray*}
\mathrm {E}_{y\sim \mathrm{N}(\mu, 1)}[ \hat w_k^{\mathrm{BMA}}] &\propto& E_y \left(\exp(- \frac{1}{2} \sum_{i=1}^n (y_i- \mu_k)^2 ) \right) \\
&\propto& \left(\int_{-\infty}^{\infty} \!\!\exp \left(- \frac{1}{2}\left( (y- \mu_k)^2 +(y-\mu)^2 \right) \right) dy\right)^n\\
&\propto& \exp\left(- \frac{ n(\mu_k-\mu)^2}{4}\right),
\end{eqnarray*}
where $\mu=3.4$ and $\mu_k=k$ in this setting.
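This concentration can be checked numerically; the sketch below (ours) computes the exact posterior model weights through a max-shifted softmax of the Gaussian log-likelihoods and confirms that, with data from N$(3.4,1)$, essentially all weight falls on the candidate N$(3,1)$.

```python
import math
import random

# Exact BMA weights for the Gaussian example: candidates N(k, 1), k = 1..8,
# uniform prior. Log-weights are shifted by their maximum before exponentiating.
def bma_weights(y, means):
    log_w = [-0.5 * sum((yi - m) ** 2 for yi in y) for m in means]
    mx = max(log_w)
    ex = [math.exp(lw - mx) for lw in log_w]
    s = sum(ex)
    return [e / s for e in ex]

rng = random.Random(0)
y = [rng.gauss(3.4, 1.0) for _ in range(2000)]
w = bma_weights(y, means=list(range(1, 9)))
# BMA asymptotically concentrates on the single closest model in KL divergence,
# here N(3, 1):
assert w[2] == max(w) and w[2] > 0.99
```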
\begin{figure}
\includegraphics[width=\textwidth ] {dis34.pdf}
\vspace{-.3in}
\caption{ \em For the Gaussian mixture example, the predictive distribution $p( \tilde y |y)$ of BMA (green curve), stacking of means (blue) and stacking of predictive distributions (red). In each graph, the gray distribution represents the true model $\mbox{N}(3.4,1)$. Stacking of means matches the first moment but can ignore the distribution. For this $\mathcal{M}$-open problem, stacking of predictive distributions outperforms BMA as sample size increases. } \label{dis34}
\end{figure}
This example is simple in that there is no parameter to estimate within each of the models: $p( \tilde y | y , M_k)=p( \tilde y | M_k)$. Hence, in this case the weights from Pseudo-BMA and Pseudo-BMA+ are the same as the BMA weights, $\exp(\mathrm{\widehat{elpd}}_{\mathrm{loo}}^k)/\sum_{k'} \exp(\mathrm{\widehat{elpd}}_{\mathrm{{loo}}}^{k'}) $.
For stacking of means, we need to solve
$$\hat w =\arg \min_{w} \sum_{i=1}^n (y_i- \sum_{k=1}^8 w_k k )^2 , \quad s.t. \sum_{k=1}^8 w_k=1, \quad w_k \geq 0.$$
This is nonidentifiable because the solution contains any vector $\hat w$ satisfying $$\sum_{k=1}^8 \hat w_k=1, \quad \hat w_k \geq 0, \quad \sum_{k=1}^{8} \hat w_k k = \frac{1}{n}\sum_{i=1}^n y_i.$$ For point prediction, the stacked prediction is always $\sum_{k=1}^{8} \hat w_k k = \frac{1}{n}\sum_{i=1}^n y_i$, but different solutions lead to different predictive distributions $\sum_{k=1}^8 \hat w_k \mbox{N}(k,1)$. To obtain a unique and reasonable result, we reformulate the least-squares optimization as the following normal model and assign a uniform prior to $w$:
$$ y_i\sim \mbox{N}\left(\sum_{k=1}^8 w_k k , \sigma^2\right), \quad p(w_1,\dots ,w_8, \sigma )=1 . $$
We then use the posterior means of $w$ as the model weights.
For stacking of predictive distributions, we need to solve
$$\max_w \sum_{i=1}^n \log \left( \sum_{k=1}^8 w_k \exp\left(-\frac{(y_i - k)^2}{2} \right)\right), \quad s.t. \sum_{k=1}^8 w_k=1, \quad w_k \geq 0. $$
In fact, this example is a density estimation problem. \cite{smyth1998stacked} first applied stacking to nonparametric density estimation, which they called \emph{stacked density estimation}; it can now be viewed as a special case of our stacking method.
\begin{figure}
\includegraphics[width=\textwidth ] {score34.pdf}
\vspace{-.3in}
\caption{\em (a) The left panel shows the expected log predictive density of the predictive distribution under BMA, stacking of means, and stacking of predictive distributions. Stacking of predictive distributions performs best for moderate and large sample sizes. (b) The middle panel shows the mean squared error, treating the posterior mean of $\hat y$ as a point estimate. Stacking of predictive distributions gives almost the same optimal mean squared error as stacking of means, both of which perform better than BMA. (c) The right panel shows the expected log predictive density of stacking and BMA when more N$(4,1)$ models are added to the model list, with the sample size fixed at 15. All average log scores and errors are calculated over 500 repeated simulations, each with 200 test data points generated from the true distribution.} \label{score34}
\end{figure}
We compare the posterior predictive distribution $ \hat p( \tilde y | y) = \sum_k \hat w_k p( \tilde y | y , M_k) $ for these three methods of model averaging. Figure \ref{dis34} shows the predictive distributions in one simulation as the sample size $n$ varies from 3 to 200. We first notice that stacking of means (the middle row of graphs) gives an unappealing predictive distribution, even if its point estimate is reasonable. The broad and oddly spaced distribution here arises from the nonidentification of $w$, and it demonstrates the general point that stacking of means does not even try to match the shape of the predictive distribution. The top and bottom rows of graphs show how BMA picks the single model that is closest in KL divergence, while stacking picks a combination; the benefits of stacking become clear for large $n$.
In this trivial nonparametric case, stacking of predictive distributions is almost the same as fitting a mixture model, except for the absence of the prior. The true model N$(3.4, 1)$ is actually a convolution of the single models rather than a mixture, hence no approach can recover the true model from the model list. From Figure \ref{score34} we can compare the mean squared error and the mean logarithmic score of these three combination methods. The average log scores and errors are calculated over 500 repeated simulations, each with 200 test data points generated from the true distribution. The left panel shows the logarithmic score (or, equivalently, the expected log predictive density) of the predictive distribution. Stacking of predictive distributions always gives a larger score except for extremely small $n$.
The middle panel shows the mean squared error obtained by treating the posterior mean of the predictive distribution as a point estimate, even though this is not our focus. In this case, it is not surprising that stacking of predictive distributions gives almost the same optimal mean squared error as stacking of means, both of which are better than BMA. Two distributions that are close in KL divergence are close in each moment, while the reverse does not necessarily hold. This illustrates the necessity of matching the \emph{distribution}, rather than just the \emph{moments}, of the predictive distribution.
Finally, it is worth pointing out that stacking depends only on the space spanned by all the candidate models, while BMA or Pseudo-BMA weighting can be misled by model-list expansion. If we add another N$(4, 1)$ as the $9$th model in the list above, stacking will not change at all in theory, even though the objective becomes non-strictly-convex with infinitely many modes of the same height. For BMA, this is equivalent to putting double prior mass on the original $4$th model, which doubles its final weight. The right panel of Figure \ref{score34} shows this phenomenon: we fix the sample size $n$ at 15 and add more and more N$(4,1)$ models. As a result, BMA (or Pseudo-BMA weighting) puts more weight on N$(4,1)$ and behaves worse, while stacking is unchanged except for numerical fluctuation. This illustrates another benefit of stacking over BMA or Pseudo-BMA weights: we would not want a combination method that performs worse as the candidate model list expands, which can become a disaster when many similar weak models exist. We are not saying BMA can never work in this case; some methods have been proposed to help BMA overcome such drawbacks. For example, \cite{george2010dilution} establishes dilution priors to compensate for model space redundancy in linear models, putting less weight on models that are close to each other, and \cite{fokoue2011bias} introduce prequential model list selection to obtain an optimal model space. But we propose stacking as a more straightforward solution.
\subsection{Linear subset regressions}\label{reg}
The previous section demonstrated a simple example of combining several different nonparametric models. Now we turn to the parametric case. This example comes from \cite{breiman1996}, who compared stacking to model selection; here we work in a Bayesian framework.
Suppose the true model is
$$Y=\beta_1X_{1}+\dots +\beta_JX_{J}+\epsilon.$$
Here the errors $\epsilon$ are drawn independently from N$(0, 1)$, and the covariates $X_{j}$ are drawn independently from N$(5, 1)$. The number of predictors is $J=15$.
The coefficient $\beta $ is generated by
$$ \beta_j=\gamma\left( 1_{ | j-4 | <h} \, (h- |j-4|)^2 + 1_{|j-8|<h } \, ( h- |j-8| )^2+ 1_{|j-12|<h} \, ( h-|j-12| )^2 \right), $$
where $\gamma$ is determined by fixing the signal-to-noise ratio such that
$$\frac{\mathrm{Var}(\sum_j \beta_j X_j ) }{1+\mathrm{Var}(\sum_j \beta_j X_j )}=\frac{4}{5}. $$
The value $h$ determines the number of nonzero coefficients in the true model. For $h=1$, there are 3 ``strong'' coefficients. For $h=5$, there are 15 ``weak'' coefficients. In the following simulation, we fix $h=5$.
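The coefficient construction can be written out directly. In the sketch below (ours), $\gamma$ is obtained in closed form: since the covariates are independent with unit variance, $\mathrm{Var}(\sum_j \beta_j X_j) = \sum_j \beta_j^2$, so the signal-to-noise condition fixes $\sum_j \beta_j^2 = 4$.

```python
import math

# Coefficients for the subset-regression simulation. For independent covariates
# with unit variance, Var(sum_j beta_j X_j) = sum_j beta_j^2, so the condition
# Var / (1 + Var) = 4/5 fixes sum_j beta_j^2 = 4 and determines gamma.
def make_beta(J=15, h=5):
    base = [sum(max(h - abs(j - c), 0) ** 2 for c in (4, 8, 12))
            for j in range(1, J + 1)]
    gamma = 2.0 / math.sqrt(sum(b * b for b in base))
    return [gamma * b for b in base]

beta = make_beta(h=5)
var_signal = sum(b * b for b in beta)
assert abs(var_signal / (1 + var_signal) - 0.8) < 1e-12
assert sum(1 for b in beta if b > 0) == 15            # h = 5: 15 "weak" nonzero coefficients
assert sum(1 for b in make_beta(h=1) if b > 0) == 3   # h = 1: 3 "strong" coefficients
```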
We consider the following two cases:
\begin{figure}
\vspace{-.1in}
\centerline{\includegraphics[width=.9\textwidth ] {single_variable_regression.pdf}}
\caption{\em Mean log predictive density of 7 combination methods in the linear regression variable selection example: the $k$-th model is a univariate linear regression with the $k$-th variable. The mean log score is evaluated by 100 repeated experiments and 200 test data. } \label{univariate}
\vspace{-.1in}
\end{figure}
\begin{enumerate}
\item $\mathcal{M}$-open
Each subset contains only a single variable. Hence, the $k$-th model is a univariate linear regression with the $k$-th variable $X_k$. We have $K=J=15 $ different models in total. One advantage of stacking and Pseudo-BMA weighting is that they are not sensitive to the prior, so even a flat prior will work, whereas BMA can be sensitive to the prior. For each single model $M_k: Y \sim \mbox{N}(\beta_k X_k , \sigma^2) $, we set priors $\beta_k \sim \mbox{N}(0,10)$ and $\sigma \sim \mathrm{Gamma} (0.1,0.1) $.
\item $\mathcal{M}$-closed
Let model $k$ be the linear regression with subset $(X_1,\dots, X_k)$. Then there are still $K=15$ different models. Similarly, in model $M_k: Y \sim \mbox{N}( \sum_{j=1}^k \beta_j X_j , \sigma^2) $, we set priors $\beta_j \sim \mbox{N}(0,10)$, $j=1,\dots, k$, and $\sigma \sim \mathrm{Gamma} (0.1,0.1) $.
\end{enumerate}
In both cases, we have seven methods for combining predictive densities: (1) stacking of predictive distributions, (2) stacking of means, (3) Pseudo-BMA, (4) Pseudo-BMA+, (5) best model selection by mean LOO value, (6) best model selection by marginal likelihood, and (7) BMA. In each case, the final estimate of the predictive density of new data $\tilde y$ is the linear combination $p( \tilde y | y)= \sum_{k=1}^K \hat w_k p( \tilde y | M_k) $.
We generate a test dataset $(\tilde x_i ,\tilde y_i)$, $i=1,\dots,200$, from the underlying true distribution to calculate the out-of-sample score of the combined distribution under each method: $$\frac{1}{200}\sum_{i=1}^{200} \log \sum_{k=1}^K \hat w_k p( \tilde y_i | M_k) .$$ We repeat the simulation 100 times to estimate the expected predictive accuracy of each method.
\begin{figure}
\vspace{-.1in}
\centerline{ \includegraphics[width=.9\textwidth ] {linear_regression.pdf}}
\caption{\em Mean log predictive density of 7 combination methods in the linear regression variable selection example: the $k$-th model is a linear regression with the first $k$ variables. We evaluate the mean log score using 100 repeated experiments and 200 test data points. } \label{regression}
\vspace{-.1in}
\end{figure}
Figure \ref{univariate} shows the expected out-of-sample log predictive density for the seven methods, for a set of experiments with sample size $n$ ranging from 5 to 200. Stacking outperforms all other methods even for small $n$, and stacking of predictive distributions is asymptotically better than any other combination method. Pseudo-BMA+ weighting dominates naive Pseudo-BMA weighting. Finally, BMA performs similarly to Pseudo-BMA weighting, always better than any kind of model selection, but that advantage vanishes in the limit since BMA eventually concentrates on a single model. In this $\mathcal{M}$-open setting, we know model selection can never be optimal.
The results change when we move to the second case, in which the $k$-th model contains variables $X_1,\dots, X_k$, so that we are comparing models of differing dimensionality. The problem is $\mathcal{M}$-closed because the largest subset contains all the variables, and we have simulated data from this model. Figure \ref{regression} shows the mean log predictive density of the seven combination methods in this case. For a large sample size $n$, almost all methods recover the true answer (putting weight 1 on the full model), except BMA and model selection based on marginal likelihood. The poor performance of BMA comes from the parameter priors: recall that the optimality of BMA arises when averaging over the priors, not necessarily conditional on any particular chosen set of parameter values. There is in general no way to obtain a ``correct'' prior that accounts for model complexity in BMA over an arbitrary model space. Model selection by LOO can recover the true model, while selection by marginal likelihood cannot, due to the same prior problems. Once again, we see that BMA eventually becomes the same as model selection by marginal likelihood, which is asymptotically much worse than the other methods.
In this example, stacking is unstable for extremely small $n$. In fact, our computations for stacking of predictive distributions and Pseudo-BMA depend on the PSIS approximation to $\log p(y_i | y_{-i})$. If this approximation is crude, then the second-step optimization cannot be accurate. The estimated shape parameter $\hat k$ of the generalized Pareto distribution can be used to diagnose the accuracy of the PSIS approximation. In our method, we replace the PSIS approximation with exact LOO for any data points with estimated $\hat k > 0.7$ \citep[see ][]{practicalPSIS}.
\begin{figure}
\includegraphics[width=\textwidth ] {compare_elpd_and_elpd_real_loo.pdf}
\caption{\em Comparison of the mean elpd estimated by LOO and the mean elpd calculated from test data, for each model and each sample size in the simulation described above. The area of each dot represents the relative complexity of the model, as measured by the effective number of parameters divided by the sample size. } \label{PSIS}
\end{figure}
Figure \ref{PSIS} shows the comparison of the mean elpd estimated by LOO, $$\frac{1}{n} \mathrm{elpd}_{\mathrm{loo}}^k = \frac{1}{n} \sum_{i=1}^n\log p_k (y_i | y_{-i}),$$ and the mean elpd calculated using 200 independent test data,
$$ \frac{1}{n} \mathrm{elpd}_{\mathrm{test}}^k= \frac{1}{200} \sum_{i=1}^{200} \log p_k (\tilde y_i | y) $$ for each model $k$ and each sample size in the simulation described above. The area of each dot in Figure \ref{PSIS} represents the relative complexity of the model, as measured by the effective number of parameters divided by the sample size. We evaluate the effective number of parameters using LOO \citep{practicalPSIS}. The sample size $n$ varies from 30 to 200 and the number of variables is fixed at 20. Clearly, the relationship is far from the line $y=x$ for extremely small sample sizes, and the relative bias ratio ($\mathrm{elpd_{loo}} / \mathrm{elpd_{test}} $) depends on the complexity of the model. Empirically, we have found the approximation to be poor when the sample size is less than 5 times the number of parameters.
As a result, in the small-sample case, LOO can have relatively large variance, which makes stacking of predictive distributions and Pseudo-BMA/Pseudo-BMA+ weighting unstable, with performance improving quickly as $n$ grows.
\subsection{Comparison with mixture models}
Stacking is inherently a two-step procedure. In contrast, when fitting a mixture model, one estimates the model weights and the within-model parameters in the same step. In a mixture model, given a model list $\mathcal{M}=\{M_1, \dots, M_K\}$, each component of the mixture occurs with probability $w_k$. Marginalizing out the discrete assignments yields the joint likelihood
$$ p(y |w_{1:K}, \theta_{1:K} ) =\sum_{k=1}^K w_k p(y | \theta_k , M_k) . $$
The mixture model seems to be the most straightforward continuous model expansion. Nevertheless, there are several reasons why we may prefer stacking, even though the mixture model is a fully Bayesian approach. First, the computational cost of mixture models can be relatively large. If the true model is a mixture and the estimation of each component depends strongly on the others, then the extra computational cost is worth paying. In real applications, however, it is rarely possible to combine all plausible components; it is more likely that the researcher is combining several mixture models with different pre-specified numbers of components. The model space can always be extended, so this kind of full Bayesian inference is in general infeasible.
Second, if the individual models are close to one another and the sample size is small, the mixture model can face non-identification or instability problems unless a strong prior is added. Since the mixture model is relatively complex, this leads to poor small-sample behavior.
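As a small illustration (this is not the paper's code), the marginalized mixture likelihood above can be evaluated directly. The sketch below uses normal components and hypothetical weights and parameters:

```python
import math

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) at y."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_likelihood(y, weights, thetas):
    """p(y | w, theta) = sum_k w_k p(y | theta_k): the discrete model
    assignment z is marginalized out."""
    return sum(w * normal_pdf(y, mu, sigma)
               for w, (mu, sigma) in zip(weights, thetas))

# Hypothetical two-component example: weights on the simplex,
# components N(0, 1) and N(3, 1).
w = [0.7, 0.3]
theta = [(0.0, 1.0), (3.0, 1.0)]
print(mixture_likelihood(0.0, w, theta))
```

With a single component of weight 1, the mixture likelihood reduces to that component's density, which is a quick sanity check on the marginalization.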
\begin{figure}
\vspace{-.2in}
\includegraphics[width=\textwidth ] {compare_mixture.pdf}
\vspace{-.4in}
\caption{\em Log predictive density of the combined posterior distribution obtained by stacking of predictive distributions, BMA, Pseudo-BMA, Pseudo-BMA+, model selection by marginal likelihood, or mixture models. In each case, we evaluate the predictive density by taking the mean over 100 test data points and 100 repeated simulation experiments. The correlation of the variables ranges from $-0.3$ to 0.9, and sample sizes range from 4 to 50. Stacking of predictive distributions and Pseudo-BMA+ outperform mixture models in all cases. } \label{mixture}
\end{figure}
Figure \ref{mixture} shows a comparison of mixture model and our model averaging methods in a numerical experiment, in which the true model is
$$Y \sim \mbox{N}( \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 ,1), \quad \beta_k \text{ generated from } \mbox{N}(0,1),$$
and there are 3 candidate models, each containing one covariate:
$$M_k: Y \sim \mbox{N}( \beta_k X_k ,\sigma_k^2), \text{with a prior } \beta_k \sim \mbox{N}(0,1) , \quad k=1,2,3 .$$
In the simulation, we generate the design matrix with $\mathrm{Var}(X_i)=1$ and $\mathrm{Cor}(X_i, X_j)=\rho$; the parameter $\rho$ determines how correlated the candidate models are, and it ranges from $-0.3$ to 0.9.
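The paper does not give its simulation code; one way to generate such an equicorrelated design is to multiply independent standard normals by the Cholesky factor of the equicorrelation matrix, sketched here:

```python
import math
import random

def cholesky(A):
    """Lower-triangular L with L L^T = A, for A symmetric positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def equicorrelated_row(rho, p, rng):
    """One draw of (X_1, ..., X_p) with Var(X_i) = 1 and Cor(X_i, X_j) = rho.
    Valid whenever the equicorrelation matrix is positive definite
    (rho > -1/(p-1)), which covers the paper's range -0.3 to 0.9 for p = 3."""
    A = [[1.0 if i == j else rho for j in range(p)] for i in range(p)]
    L = cholesky(A)
    z = [rng.gauss(0.0, 1.0) for _ in range(p)]
    return [sum(L[i][k] * z[k] for k in range(p)) for i in range(p)]
```

Drawing many rows and computing the empirical correlation recovers $\rho$ up to Monte Carlo error.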
Figure \ref{mixture} shows that both the mixture model and single-model selection perform worse than any of the model averaging methods we suggest, even though the mixture model takes much longer to run (about 30 times longer) than stacking or Pseudo-BMA+. When the sample size is small, the mixture model is too complex to fit. Stacking of predictive distributions and Pseudo-BMA+ outperform all other methods once the sample size is moderate.
\cite{clarke2003} argues that the effect of (point estimation) stacking only depends on the space spanned by the model list, and hence suggests putting ``independent'' models in the list. Figure \ref{mixture} shows that high correlations do not hurt stacking and Pseudo-BMA+ in this example.
\subsection{Variational inference with different initial values}
In Bayesian inference, the posterior density of parameters $\theta=\{\theta_1, \dots, \theta_m \} $ given observed data $y=\{y_1, \dots, y_n \}$ can be difficult to compute. Variational inference can be used to give a fast approximation to $p(\theta | y)$ \citep[for a recent review, see][]{blei2017variational}. Within a family of distributions $\mathcal{Q}$, we try to find the member of that family that minimizes the Kullback--Leibler divergence to the true posterior:
\begin{equation} \label{VI}
q^*(\theta) = \arg\min_{q \in \mathcal{Q}} \mathrm{KL}\bigl(q(\theta), p(\theta | y) \bigr) = \arg\min_{q \in \mathcal{Q}} \bigl( \mathrm{E}_q \log q(\theta)-\mathrm{E}_q \log p (\theta, y) \bigr).
\end{equation}
One widely used variational family is the mean-field family, in which the parameters are assumed to be mutually independent: $\mathcal{Q}= \{ q(\theta): q(\theta_1, \dots,\theta_m )=\prod_{j=1}^{m} q_j (\theta_j) \}$. Recent progress has made it possible to run variational inference in a black-box way. For example, \citet{kucukelbir2016automatic} implement automatic differentiation variational inference in Stan. Assuming all parameters $\theta$ are continuous and the model likelihood is differentiable, it transforms $\theta$ into the real coordinate space ${\rm I\!R}^m$ through $\zeta=T(\theta)$ and uses the normal approximation $q(\zeta | \mu, \sigma^2)= \prod_{j=1}^m \hbox{N}(\zeta_j | \mu_j, \sigma_j^2 )$. Plugging this into (\ref{VI}) leads to an optimization problem over $(\mu , \sigma^2) $, which can be solved by stochastic gradient ascent. Under mild conditions it converges to a local optimum $ q^*(\theta)$. However, $q^*(\theta)$ may depend on the initialization, since the optimization problem is in general non-convex, particularly when the true posterior density $p(\theta|y )$ is multimodal.
Stacking of predictive distributions and Pseudo-BMA+ weighting can be used to average several sets of posterior draws coming from different approximation distributions. To do this, we repeat the variational inference $K$ times. In run $k$, we start from a random initial point and use stochastic gradient ascent to solve the optimization problem (\ref{VI}), ending up with an approximation distribution $q^*_k(\theta)$. Then we draw $S$ samples $\{\theta_k^{(1)}, \dots, \theta_k^{(S)}\}$ from $q^*_k$ and calculate the importance ratio $r_{i,k}^s$ defined in (\ref{ratio}) as
$$r_{i,k}^s = \frac{1}{p( y_i | \theta ^{(s)}_{k} )}. $$
After this, the remaining steps follow as before. We obtain stacking or Pseudo-BMA+ weights $w_k$ and average all the approximation distributions as $\sum_{k=1}^K w_k q^*_k $.
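A minimal sketch of this step, assuming we already have per-draw likelihood values $p(y_i \mid \theta^{(s)})$ for one set of draws (plain importance sampling, without the Pareto smoothing the paper actually uses):

```python
import math

def isloo_elpd(lik):
    """lik[s][i] = p(y_i | theta^(s)) for S posterior draws and n data points.
    With ratios r_i^s = 1 / p(y_i | theta^(s)), the self-normalized
    importance-sampling estimate of the LOO predictive density is
        p_hat(y_i | y_-i) = S / sum_s r_i^s,
    and we return sum_i log p_hat(y_i | y_-i)."""
    S, n = len(lik), len(lik[0])
    elpd = 0.0
    for i in range(n):
        ratios = [1.0 / lik[s][i] for s in range(S)]
        elpd += math.log(S / sum(ratios))
    return elpd
```

When all draws give the same likelihood value for a data point, the estimate reduces to the log of that value, which is an easy correctness check.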
\begin{figure}
\includegraphics[width=.9\textwidth ] {vb.pdf}
\vspace{-.2in}
\caption{\em (1) A multi-modal posterior distribution of $(\mu_1, \mu_2)$. (2--3) Posterior draws from variational inference with different initial values. (4--5) Averaged posterior distribution using stacking of predictive distributions and Pseudo-BMA+ weighting. } \label{vb}
\end{figure}
Figure \ref{vb} gives a simple numerical example showing that the averaging strategy helps account for the optimization uncertainty induced by the initial values. Suppose the data are two-dimensional, $y=(y^{(1)}, y^{(2)})$, and the parameter is $(\mu_1, \mu_2) \in {\rm I\!R}^2$. The likelihood $p(y | \mu_1, \mu_2)$ is given by
$$y^{(1)} \sim \mathrm{Cauchy} (\mu_1,1), \quad y^{(2)} \sim \mathrm{Cauchy} (\mu_2,1). $$
A prior is assigned on $\mu_1-\mu_2$:
$$\mu_1- \mu_2\sim \mbox{N}(0,1). $$
We generate two observations $(y^{(1)}_{1}=3, y^{(2)}_{1}=2 )$ and $(y^{(1)}_{2}= -2, y^{(2)}_{2}=-2)$. The first panel shows the true posterior distribution of $\mathbf{\mu}=(\mu_1, \mu_2)$, which is bimodal. We run mean-field normal variational inference in Stan, with the two initial values set to $(\mu_1, \mu_2)=(5,5)$ and $(-5, -5)$ separately. This produces two distinct approximation distributions, as shown in panels 2 and 3. We then draw 1000 samples from each of these two approximation distributions and use stacking or Pseudo-BMA+ to combine them. The lower two panels show the averaged posterior distribution. Though neither combination recovers the true distribution exactly, the averaged version is closer to it, as measured by KL divergence.
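The unnormalized log posterior of this toy example is easy to write down, and evaluating it confirms the bimodality (a sketch, not the Stan model itself):

```python
import math

def log_post(mu1, mu2):
    """Unnormalized log posterior for the two-observation Cauchy example:
    y^(1) ~ Cauchy(mu1, 1), y^(2) ~ Cauchy(mu2, 1),
    with prior mu1 - mu2 ~ N(0, 1)."""
    obs = [(3.0, 2.0),    # (y1^(1), y1^(2))
           (-2.0, -2.0)]  # (y2^(1), y2^(2))
    lp = 0.0
    for a, b in obs:
        lp += -math.log(1.0 + (a - mu1) ** 2)  # Cauchy(mu1, 1) log density, up to a constant
        lp += -math.log(1.0 + (b - mu2) ** 2)  # Cauchy(mu2, 1) log density, up to a constant
    lp += -0.5 * (mu1 - mu2) ** 2              # N(0, 1) prior on mu1 - mu2
    return lp
```

Evaluating near $(3,2)$ and near $(-2,-2)$ gives higher log posterior than at the origin in between, which is the bimodality visible in the first panel of Figure \ref{vb}.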
\subsection{Proximity and directional models of voting}
\cite{JOPO155} use US Senate voting data from 1988 to 1992 to study voters' preference for candidates who propose policies similar to the voters' own positions. They introduce two related variables that measure the distance between voters and candidates. \emph{Proximity voting comparison} represents the $i$-th voter's comparison between the candidates' ideological positions:
$$ U_i(D) -U_i(R)= (x_R - x_i)^2 - (x_D - x_i)^2, $$
where $x_i$ represents the $i$-th voter's preferred ideological position, and $x_D$ and $x_R$ represent the ideological positions of the Democratic and Republican candidates, respectively.
In contrast, the $i$-th voter's \emph {directional comparison} is defined by
$$ U_i(D)-U_i(R)=(x_D -X_N)(x_i -X_N)-(x_R -X_N)(x_i -X_N),$$
where $X_N$ is the neutral point of the ideology scale.
Finally, all these comparisons are aggregated at the party level, leading to two party-level variables, \emph{Democratic proximity advantage} and \emph{Democratic directional advantage}. The sample size is $n=94$.
For both of these variables, there are two ways to measure the candidates' ideological positions $x_D$ and $x_R$, which leads to two different datasets. In the \emph{Mean candidate} dataset, the positions are calculated by taking the average of all respondents' answers in the relevant state and year. In the \emph{Voter-specific} dataset, they are calculated using each respondent's own placements of the two candidates. In both datasets, there are 4 other party-level variables.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l cc cc cc cc}
& \multicolumn{2}{c}{\textit{\textbf{Full model}}} & \multicolumn{2}{c}{\textit{\textbf{BMA}}} & \multicolumn{2}{c}{\textit{\textbf{Stacking of}}} & \multicolumn{2}{c}{\textit{\textbf{Pseudo-BMA+ weighting}}} \\
& \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{\textit{\textbf{predictive distributions}}} & \multicolumn{2}{c}{} \\
& \textit{Mean} & \textit{Voter-} & \textit{Mean} & \textit{Voter-} & \textit{Mean} & \textit{Voter-} & \textit{Mean} & \textit{Voter-} \\
& \textit{Candidate} & \textit{specific} & \textit{Candidate} & \textit{specific} & \textit{Candidate} & \textit{specific} & \textit{Candidate} & \textit{specific} \\ \hline
\begin{tabular}{l}Dem.\\ prox.\\ adv.\end{tabular} & -3.05 (1.32) & -2.01 (1.06) & -0.22 (0.95) & 0.75 (0.68) & 0.00 (0.00) & 0.00 (0.00) & -0.02 (0.08) & 0.04(0.24) \\
\begin{tabular}{l}Dem.\\ direct. \\ adv.\end{tabular} & 7.95 (2.85) & 4.18 (1.36) & 3.58 (2.02) & 2.36 (0.84) & 2.56 (2.32) & 1.93 (1.16) & 1.60 (4.91) & 1.78 (1.22) \\
\begin{tabular}{l}Dem.\\ incumb.\\ adv.\end{tabular} & 1.06 (1.20) & 1.14 (1.19) & 1.61 (1.24) & 1.30 (1.24) & 0.48 (1.70) & 0.34 (0.89) & 0.66 (1.13) & 0.54 (1.03) \\
\begin{tabular}{l}Dem.\\ quality \\ adv.\end{tabular} & 3.12 (1.24) & 2.38 (1.22) & 2.96 (1.25) & 2.74 (1.22) & 2.20 (1.71) & 2.30 (1.52) & 2.05 (2.86) & 1.89 (1.61) \\
\begin{tabular}{l}Dem.\\ spend\\ adv.\end{tabular} & 0.27 (0.04) & 0.27 (0.04) & 0.32 (0.04) & 0.31 (0.04) & 0.31 (0.07) & 0.31 (0.03) & 0.31 (0.04) & 0.30 (0.04) \\
\begin{tabular}{l}Dem.\\ partisan\\ adv.\end{tabular} & 0.06 (0.05) & 0.06 (0.05) & 0.08 (0.06) & 0.07 (0.06) & 0.01 (0.04) & 0.00 (0.00) & 0.03 (0.05) & 0.03 (0.05) \\
Const. & 53.3 (1.2) & 52.0 (0.8) & 51.4 (1.0) & 51.6 (0.8) & 51.9 (1.1) & 51.6 (0.7) & 51.5 (1.2) & 51.4 (0.8)
\end{tabular}
}
\caption{\em Regression coefficient and standard error in the voting example, from the full model (columns 1--2), the subset regression model averaging using BMA (columns 3--4), stacking of predictive distributions (columns 5--6) and Pseudo-BMA+ (columns 7--8). \emph{Democratic proximity advantage} and \emph{Democratic directional advantage} are two highly correlated variables. \emph{Mean candidate} and \emph{Voter-specific} are two datasets that provide different measurements on candidates' ideological placement. }
\label{Senate}
\end{figure}
The two variables \emph{Democratic proximity advantage} and \emph{Democratic directional advantage} are highly correlated. \cite{montgomery2010bayesian} point out that Bayesian model averaging is an approach to helping arbitrate between competing predictors in a linear regression model. They average over all $2^6$ linear subset models excluding those that contain both \emph{Democratic proximity advantage} and \emph{Democratic directional advantage} (i.e., 48 models in total). Each subset regression has the form
$$ M_\gamma: y | X, \beta_0, \beta \sim N( \beta_0 + X_\gamma \beta_\gamma, \sigma^2) . $$
To account for the differing complexity of the subset models, they used the hyper-$g$ prior \citep{liang2012mixtures}. Let $\phi$ be the inverse of the variance, $\phi=\frac{1}{\sigma^2}$. The hyper-$g$ prior with a prespecified hyperparameter $\alpha$ is
\begin{align*}
p(\phi)&\propto \frac{1}{\phi}, \\
\beta | g, \phi, X &\sim \mbox{N} \Bigl(0, \frac{g}{\phi} (X^TX)^{-1}\Bigr), \\
p (g | \alpha ) &=\frac{ \alpha-2}{ 2} (1+g) ^{-\alpha/2}, \ g>0 .
\end{align*}
The first two columns of Figure \ref{Senate} show the linear regression coefficients as estimated using least squares. The remaining columns show the posterior mean and standard deviation of the regression coefficients using BMA, stacking of predictive distributions, and Pseudo-BMA+, respectively. Under all three averaging strategies, the coefficient of \emph{proximity advantage} is no longer statistically significantly negative, and the coefficient of \emph{directional advantage} is shrunk. As fit to these data, stacking puts near-zero weights on all subset models containing \emph{proximity advantage}, whereas Pseudo-BMA+ weighting always gives some weight to each model. In this example, averaging subset models by stacking or Pseudo-BMA+ weighting gives a way to deal with competing variables, which should be more reliable than BMA according to our previous argument.
\subsection{Predicting well-switching behavior in Bangladesh}
Many wells in Bangladesh and other South Asian countries are contaminated with natural arsenic. People whose wells have arsenic levels that exceed a certain threshold are encouraged to switch to nearby safe wells \citep[for background details, see Chapter 5 in][]{gelman2006data}. We analyze a dataset of $3020$ respondents to find factors predictive of well switching among households with unsafe wells. The outcome variable is
$$ y_i=
\begin{cases}
1& \text{ if household $i$ switched to a new well }\\
0& \text{ if household $i$ continued using its own well.}
\end{cases}$$
We consider the following inputs:
\begin{itemize}
\item The distance (in meters) to the closest known safe well,
\item The arsenic level of the respondent's well,
\item Whether any member of the household is active in the community association,
\item The education level of the head of the household.
\end{itemize}
We start with what we call Model 1, a simple logistic regression with all variables above as well as a constant term:
\[ \begin{split}
y\sim &\mathrm{Bernoulli} (\theta)\\
\theta= & \mathrm{logit}^{-1}(\beta_0+\beta_1 dist +\beta_2 arsenic+\beta_3 assoc+ \beta_4 edu)
\end{split} \]
Model 2 contains the interaction between distance and arsenic level.
$$\theta= \mathrm{logit}^{-1}(\beta_0+\beta_1 dist +\beta_2 arsenic+\beta_3 assoc+ \beta_4 edu +\beta_5 dist\times arsenic )$$
Furthermore, it makes sense to use a nonlinear model for the logit switching probability as a function of distance and arsenic level, which we can capture with splines. Our Model 3 contains B-splines for distance and arsenic level with polynomial degree 2,
$$\theta= \mathrm{logit}^{-1}(\beta_0+\beta_1 dist +\beta_2 arsenic+\beta_3 assoc+ \beta_4 edu +\alpha_{dis} B_{dis} +\alpha_{ars} B_{ars})$$
where $B_{dis}$ is the B-spline basis of distance, of the form $\bigl(B_{dis, 1} (dist), \dots, B_{dis, q} (dist) \bigr)$, and $\alpha_{dis}, \alpha_{ars} $ are vectors. We fix the number of knots at 10 for both distance and arsenic level. Models 4 and 5 are similar, with degree-3 and degree-5 B-splines, respectively.
\begin{figure}
\includegraphics[width=\textwidth ] {wells.pdf}
\vspace{-.3in}
\caption{\em Posterior mean, 50\% confidence interval, and 95\% confidence interval of the probability of switching from an unsafe well in Models 1--8. For each model, the switching probability is shown as a function of (a) the distance to the nearest safe well or (b) the arsenic level of the existing well. For each plot, the other input variable is held constant at different representative values. The model weights by stacking of predictive distributions and Pseudo-BMA+ are listed above each panel.} \label{well}
\end{figure}
Next, we can add a bivariate spline to capture a nonlinear interaction,
$$\theta= \mathrm{logit}^{-1}(\beta_0+\beta_1 dist +\beta_2 arsenic+\beta_3 assoc+ \beta_4 edu +\beta_5 dist\times arsenic+\alpha B_{dis, ars})$$
where $B_{dis, ars}$ is the bivariate spline basis, with degree $2\times 2$, $3\times 3$, and $5\times 5$ in Models $6$, $7$, and $8$, respectively.
Figure \ref{well} shows the inference results for all 8 models, summarized by the posterior mean, 50\% confidence interval, and 95\% confidence interval of the probability of switching from an unsafe well as a function of distance or arsenic level. The other variables, such as {\tt assoc} and {\tt edu}, are fixed at their means. It is not obvious how to pick the single best model. Spline models give a more flexible shape, but also introduce more variance into the posterior estimation.
\begin{figure}
\includegraphics[width=\textwidth ] {wells_combine.pdf}
\vspace{-.3in}
\caption{\em Posterior mean, 50\% confidence interval, and 95\% confidence interval of the probability of switching from an unsafe well in the combined model via stacking of predictive distributions. Pseudo-BMA+ weighting gives a similar result for the combination.} \label{well-combine}
\end{figure}
Finally, we run stacking of predictive distributions and Pseudo-BMA+ to combine these 8 models. The calculated model weights are listed above each panel in Figure \ref{well}. For both combination methods, Model 5 (univariate splines of degree 5) accounts for the majority share. It is also worth pointing out that although Model 8 is the most complicated, both stacking and Pseudo-BMA+ avoid overfitting by assigning a very small weight to it.
Figure \ref{well-combine} shows the posterior mean, 50\% confidence interval, and 95\% confidence interval of the switching probability in the stacking-combined model. Pseudo-BMA+ weighting gives a similar combination for this example. At first glance, the combination looks quite similar to Model 5, and the extra 0.09 stacking weight on Model 1 may seem unnecessary, since Model 1 is completely contained in Model 5 when $\alpha_{dis} =\alpha_{ars}=0$. However, Model 5 is not perfect: it predicts that the posterior mean of the switching probability decreases as a function of distance to the nearest safe well for very small distances. In fact, without further control, such boundary fluctuations are a typical drawback of higher-order splines. This decreasing trend near the left boundary is flatter in the combined distribution, since the combination includes some weight on the plain logistic regression (in the stacking weights) or on lower-order splines (in the Pseudo-BMA+ weights). In this example the sample size $n=3020$ is large, hence we have reason to believe stacking of predictive distributions gives a near-optimal combination.
\section{Discussion}
\subsection {Sparse structure and high dimensions}
\cite{yang2014minimax} propose to estimate a linear combination of point forecasts,
$f= \sum_ {k=1}^{K} w_k f_k$, using a Dirichlet aggregation prior,
$w \sim \mathrm{Dirichlet} \bigl( \frac{\alpha}{ K^\gamma}, \dots , \frac{\alpha}{ K^\gamma} \bigr)$, to pull toward sparsity, and they estimate the weights $w_k$ using adaptive regression rather than cross-validation. They show that the combination under this setting can achieve the minimax squared risk among all convex combinations,
$$ \sup_{f_1,\dots f_K \in F_0 } \inf _{\hat f} \sup _{f_\lambda^* \in F_\Gamma } E || \hat f- f_\lambda^* || , $$
where $F_0=\{ f: \|f\|_\infty \leq 1 \}.$
Similar to our problem, when the dimension of the model space is high, it can make sense to assign a strong prior to the weights in the estimation equation (\ref{stacking}) to improve the regularization, using a hierarchical prior to pull toward sparsity if that is desired.
\subsection{Constraints and regularity}
In point estimation stacking, the simplex constraint is the most widely used regularization so as to overcome potential problems with multicollinearity. \cite{clarke2003} suggests relaxing the constraint to make it more flexible.
When combining distributions, there is no need to worry about multicollinearity except in degenerate cases. But in order to guarantee a meaningful posterior predictive density, the simplex constraint becomes natural, which is satisfied automatically in BMA and Pseudo-BMA weighting. As mentioned in the previous section, stronger priors can be added.
Another assumption is that the separate posterior distributions are combined linearly, with weights that are positive and sum to 1.
There could be gains from going beyond convex linear combinations. For instance, in the subset regression example, where each individual model is a univariate regression, the true predictive distribution is a convolution, rather than a mixture, of the individual models' distributions. Both lead to the same additive model for the point estimate, so stacking of the means is always valid, but stacking of predictive distributions cannot recover the true model in the convolution case.
Our explanation is that when the model list is large, the convex span should be large enough to approximate the true model. This is also why we prefer adding stronger priors to make the estimation of the weights stable in high dimensions.
\subsection{General recommendations}
The methods discussed in this paper are all based on the idea of fitting models separately and then combining the estimated predictive distributions. This approach is limited in that it does not pool information between the different model fits: as such, it is only ideal when the $K$ different models being fit have nothing in common. But in that case we would prefer to fit a larger super-model that includes the separate models as special cases, perhaps using an informative prior distribution to ensure stability in inferences.
That said, in practice it is common for different sorts of models to be set up without any easy way to combine them, and in such cases it is necessary from a Bayesian perspective to somehow aggregate their predictive distributions. The often-recommended approach of Bayesian model averaging can fail catastrophically in that the required Bayes factors can depend entirely on arbitrary specifications of noninformative prior distributions. Stacking is a more promising general method in that it is directly focused on performance of the combined predictive distribution. Based on our theory, simulations, and examples, we recommend stacking (of predictive distributions) for the task of combining separately-fit Bayesian posterior predictive distributions. As an alternative, Pseudo-BMA+ is computationally cheaper and can serve as an initial guess for stacking. The computations can be done in R and Stan, and the optimization required to compute the weights connects directly to the predictive task.
\section*{Appendix A. Implementation in Stan and R}
The $n\times K$ matrix of cross-validated log likelihood values, $\{p(y_i | y_{-i} , M_k)\}_{i=1,\dots, n, k=1,\dots, K}$, can be computed from the generated quantities block in a Stan program, following the approach of \cite{practicalPSIS}. For the example in Section \ref{reg}, the $k$-th model is a linear regression with the $k$-th covariates. We put the corresponding Stan code in the file \texttt{regression.stan}:
\begin{verbatim}
data {
  int n;
  int p;
  vector[n] y;
  matrix[n, p] X;
}
parameters {
  vector[p] beta;
  real<lower=0> sigma;
}
transformed parameters {
  vector[n] theta;
  theta = X * beta;
}
model {
  y ~ normal(theta, sigma);
  beta ~ normal(0, 10);
  sigma ~ gamma(0.1, 0.1);
}
generated quantities {
  vector[n] log_lik;
  for (i in 1:n)
    log_lik[i] = normal_lpdf(y[i] | theta[i], sigma);
}
\end{verbatim}
In R we can simulate the likelihood matrices from all $K$ models and save them as a list:
\begin{verbatim}
library("rstan")
log_lik_list <- list()
for (k in 1:K){
  # Fit the k-th model with Stan; drop=FALSE keeps X[, k] as an n x 1 matrix
  fit <- stan("regression.stan",
              data=list(y=y, X=X[, k, drop=FALSE], n=length(y), p=1))
  log_lik_list[[k]] <- extract(fit)[["log_lik"]]
}
\end{verbatim}
The function \texttt{model\_weights()} in the \texttt{loo} package\footnote{ The package can be downloaded from https://github.com/stan-dev/loo/tree/yuling-stacking} in R computes model combination weights by stacking of predictive distributions, Pseudo-BMA, or Pseudo-BMA+ weighting. For Pseudo-BMA, we can choose whether to use the Bayesian bootstrap for further regularization, if computation time is not a concern.
\begin{verbatim}
model_weights_1 <- model_weights(log_lik_list, method="stacking")
model_weights_2 <- model_weights(log_lik_list, method="pseudobma", BB=TRUE)
\end{verbatim}
In one simulation with six models to combine, the output gives us the computed weights under each approach:
\begin{verbatim}
The stacking weights are:
[1,] "Model 1" "Model 2" "Model 3" "Model 4" "Model 5" "Model 6"
[2,] "0.25" "0.06" "0.09" "0.25" "0.35" "0.00"
The Pseudo-BMA+ weights using Bayesian Bootstrap are:
[1,] "Model 1" "Model 2" "Model 3" "Model 4" "Model 5" "Model 6"
[2,] "0.28" "0.05" "0.08" "0.30" "0.28" "0.00"
\end{verbatim}
For reasons discussed in the paper, we generally recommend stacking for combining separate Bayesian predictive distributions.
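For readers who want to see the stacking optimization itself, the objective can be maximized in a few lines of plain Python. The \texttt{loo} package uses a proper constrained optimizer; the exponentiated-gradient loop below is only an illustrative sketch of the same simplex-constrained problem:

```python
import math

def stacking_weights(lpd, iters=2000, eta=0.05):
    """lpd[i][k] = log p(y_i | y_{-i}, M_k): the n x K matrix of
    leave-one-out log predictive densities.
    Maximizes  sum_i log( sum_k w_k exp(lpd[i][k]) )  over the simplex
    by exponentiated-gradient ascent (multiplicative updates keep the
    weights positive and renormalization keeps them summing to 1)."""
    n, K = len(lpd), len(lpd[0])
    p = [[math.exp(v) for v in row] for row in lpd]
    w = [1.0 / K] * K
    for _ in range(iters):
        grad = [0.0] * K
        for i in range(n):
            denom = sum(w[k] * p[i][k] for k in range(K))
            for k in range(K):
                grad[k] += p[i][k] / denom
        w = [w[k] * math.exp(eta * grad[k] / n) for k in range(K)]
        z = sum(w)
        w = [v / z for v in w]
    return w
```

If one model dominates on every point, the weights concentrate on it; if two models each predict half of the data well, the optimum splits the weight between them, which is the behavior that distinguishes stacking from winner-take-all model selection.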
\vskip 0.2in
\bibliographystyle{ba}
% https://arxiv.org/abs/2107.01424
\title{On the semitotal dominating sets of graphs}
\begin{abstract}
A set $D$ of vertices in an isolate-free graph $G$ is a semitotal dominating set of $G$ if $D$ is a dominating set of $G$ and every vertex in $D$ is within distance $2$ of another vertex of $D$. The semitotal domination number of $G$ is the minimum cardinality of a semitotal dominating set of $G$ and is denoted by $\gamma_{t2}(G)$. In this paper, after computing the semitotal domination number of specific graphs, we count the number of semitotal dominating sets of arbitrary size in some graphs.
\end{abstract}
\section{Introduction}
A dominating set of a graph $G=(V,E)$ is any subset $S$ of $V$ such that every vertex not in $S$ is adjacent to at least one member of $S$.
The minimum cardinality of a dominating set of $G$ is called the domination number of $G$ and is denoted by $\gamma(G)$. This parameter has been extensively studied in the literature, and there are hundreds of papers concerned with domination.
For a detailed treatment of domination theory, the reader is referred to \cite{domination}. The concept of domination and its related invariants have been generalized in many ways. Among the best-known generalizations are total, independent, and connected domination, each with its corresponding domination number. Most of the papers published so far deal with structural aspects of domination, trying to determine exact expressions for $\gamma(G)$ or some upper and/or lower bounds for it. Until 2008, there were no papers concerned with the enumerative side of the problem.
Regarding the enumerative side of dominating sets, Alikhani and Peng \cite{saeid1} introduced the domination polynomial of a graph. The domination polynomial of a graph $G$ is the generating function for the number of dominating sets of $G$, i.e., $D(G,x)=\sum_{ i=1}^{|V(G)|} d(G,i) x^{i}$ (see \cite{euro,saeid1}). This polynomial and its roots have been actively studied in recent years (see, for example, \cite{Filomat}).
It is natural to count the number of other kinds of dominating sets (\cite{utilitas,weakly}). Motivated by these papers, we consider another type of dominating set in this paper.
A total dominating set, abbreviated a TD-set, of a graph $G$ with no isolated vertex is a set $D$ of vertices of $G$ such that every vertex in $V(G)$ is adjacent to at least one vertex in $D$. The total domination number of $G$, denoted by $\gamma_t(G)$, is the minimum cardinality of a TD-set of $G$. Total domination is now well studied in graph theory, and the literature on the subject has been surveyed and detailed in a book.
A set $D$ of vertices in an isolate-free graph $G$ is a semitotal dominating set of $G$ if $D$ is a dominating set of $G$ and every vertex in $D$ is within distance $2$ from another vertex of $D$.
The semitotal domination was introduced by Goddard, Henning and McPillan \cite{Goddard}, and studied further in \cite{semi1,semi2} and elsewhere.
The semitotal domination number of $G$ is the minimum cardinality of a semitotal dominating set of $G$ and is denoted by $\gamma_{t2}(G)$. It is easy to see from the definition that for any graph $G$ with no isolated vertices, $\gamma(G)\leq \gamma_{t2}(G)\leq \gamma_t(G)$. The definition also gives $\gamma_{t2}(G)\geq 2$; in this paper, however, we adopt the convention $\gamma_{t2}(K_n)=1$.
Recently, Henning, Pal and Pradhan \cite{DMGT} studied the semitotal domination number in block graphs.
They presented a linear time algorithm to
compute a minimum semitotal dominating set in block graphs. Also they studied the complexity of the semitotal domination problem.
A domination-critical (domination-super critical, respectively) vertex in a graph
$G$ is a vertex whose removal decreases (increases, respectively) the domination
number. Bauer et al. \cite{Bauer} introduced the concept of domination stability in graphs.
The domination stability, or just $\gamma$-stability, of a graph $G$ is the minimum number of vertices whose removal changes the domination number. Motivated by domination stability, we consider the semitotal domination stability of a graph.
In Section 2, we compute the semitotal domination number of specific graphs. In Section 3, we count the number of semitotal dominating sets of arbitrary size in some graphs. Finally in Section 4, we introduce semitotal domination stability of a graph and compute it for some graphs.
\section{Semitotal domination number of specific graphs}
In this section, we study the semitotal domination number of some specific graphs. Here, we recall some graph products. The {\it corona product} $G\circ H$ of two graphs $G$ and $H$ is defined as the graph obtained by taking one copy of $G$ and $\vert V(G)\vert $ copies of $H$ and joining the $i$-th vertex of $G$ to every vertex in the $i$-th copy of $H$.
The Cartesian product of graphs $G$ and $H$ is a graph denoted $G\Box H$ whose
vertex set is $V (G) \times V (H)$. Two vertices $(g, h)$ and $(g', h')$ are adjacent if
either $g = g'$ and $hh'\in E(H)$, or $gg' \in E(G)$ and $h = h'$.
The {\it join} of two graphs $G_1$ and $G_2$, denoted by $G_1\vee G_2$,
is a graph with vertex set $V(G_1)\cup V(G_2)$
and edge set $E(G_1)\cup E(G_2)\cup \{uv| u\in V(G_1)$ and $v\in V(G_2)\}$.
We begin by computing the semitotal domination number of some specific graphs, which is straightforward.
\begin{theorem} \label{Thm1}
\begin{enumerate}
\item[(i)]
For every $n\geq 3$, $\gamma_{t2}(P_{n})=\gamma_{t2}(C_{n})=\lceil \frac{2n}{5}\rceil$.
\item[(ii)] If $W_n$ is a wheel of order $n$, then $\gamma_{t2}(W_n)=\lceil \frac{n-1}{3}\rceil$.
\item[(iii)]
If $F_n$ is a friendship graph (join of $K_1$ and $nK_2$), then $\gamma_{t2}(F_n)=n.$
\item[(iv)] If $B_n$ is a book graph (the Cartesian product $K_{1,n}\square P_2$), then
$\gamma_{t2}(B_n)=n+1$.
\item[(v)]
$ \gamma_{t2}(K_{m,n})=\left\{
\begin{array}{ll}
\min\{m,n\}, & 2\leq m,n\leq 4\\
4, & m,n\geq 5
\end{array}\right.$
\end{enumerate}
\end{theorem}
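Part (i) can be checked by brute force for small $n$. The following Python sketch is ours, not part of the paper; it takes the definition of a semitotal dominating set from the abstract literally (a dominating set in which every chosen vertex lies within distance two of another chosen vertex), and the function name is an assumption of the sketch.

```python
from itertools import combinations
from math import inf

def semitotal_domination_number(n, edges):
    """Smallest dominating set in which every chosen vertex lies
    within distance 2 of another chosen vertex."""
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    # vertices at distance 1 or 2 from v
    ball2 = {v: {u for u in range(n)
                 if u != v and (u in adj[v] or adj[u] & adj[v])}
             for v in range(n)}
    for k in range(1, n + 1):
        for D in combinations(range(n), k):
            S = set(D)
            if all(v in S or adj[v] & S for v in range(n)) and \
               all(ball2[v] & (S - {v}) for v in S):
                return k
    return inf

path = lambda n: [(i, i + 1) for i in range(n - 1)]
cycle = lambda n: path(n) + [(n - 1, 0)]

for n in range(3, 11):
    expected = -(-2 * n // 5)  # ceil(2n/5)
    assert semitotal_domination_number(n, path(n)) == expected
    assert semitotal_domination_number(n, cycle(n)) == expected
```

The exhaustive search is exponential in $n$, so it only serves as a sanity check on small paths and cycles.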
The following theorem records the difference between $\gamma(G)$ and $\gamma_{t2}(G)$ for certain graphs; the proofs are straightforward.
\begin{theorem} \label{Thm2.2}
\begin{enumerate}
\item[(i)] \begin{equation*}
\gamma_{t2}(P_n)- \gamma(P_n)=\left\{
\begin{array}{ll}
0, & n=4,5,7,10\\
1, & 6\leq n \leq 22,\quad n\neq 7,10,18,21\\
2, & 23\leq n\leq 37,\quad n\neq 25,33,36\\
3, & 38\leq n\leq 52,\quad n\neq 40,48,51\\
4, & 53\leq n\leq 67,\quad n\neq 55,63,66\\
5, & 68\leq n\leq 82,\quad n\neq 70,78,81\\
\geq 6, & n\geq 83,\quad n\neq 85
\end{array}\right.
\end{equation*}
\item[(ii)]
For the Petersen graph $P$, $\gamma_{t2}(P)=\gamma(P)$.
\item[(iii)] $\gamma_{t2}(B_n)=\gamma(B_n)+n-1.$
\item[(iv)] $\gamma_{t2}(F_n)-\gamma(F_n)=n-1.$
\item[(v)] For the star graph $S_n=K_{1,n}$,
$\gamma_{t2}(S_n)-\gamma(S_n)=n-1.$
\item[(vi)] $\gamma_{t2}(W_n)-\gamma(W_n)=\lceil \frac{n-1}{3}\rceil-1.$
\item[(vii)]
For the complete bipartite graph $K_{m,n}$ with $m\leq n$,
\begin{equation*}
\gamma_{t2}(K_{m,n})-\gamma(K_{m,n})=\left\{
\begin{array}{ll}
m-2& 2\leq m\leq 4\\
2& m\geq 5
\end{array}\right.
\end{equation*}
\end{enumerate}
\end{theorem}
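Granting the closed form $\gamma_{t2}(P_n)=\lceil 2n/5\rceil$ from Theorem \ref{Thm1} and the classical $\gamma(P_n)=\lceil n/3\rceil$, part (i) reduces to a finite arithmetic check. The sketch below is ours (the helper names are ours as well):

```python
def ceil_div(a, b):
    return -(-a // b)

def diff(n):
    """gamma_t2(P_n) - gamma(P_n), via the closed forms
    ceil(2n/5) and ceil(n/3)."""
    return ceil_div(2 * n, 5) - ceil_div(n, 3)

for n in (4, 5, 7, 10):
    assert diff(n) == 0

claims = [  # (range of n, claimed difference, listed exceptions)
    (range(6, 23), 1, {7, 10, 18, 21}),
    (range(23, 38), 2, {25, 33, 36}),
    (range(38, 53), 3, {40, 48, 51}),
    (range(53, 68), 4, {55, 63, 66}),
    (range(68, 83), 5, {70, 78, 81}),
]
for rng, value, exceptions in claims:
    for n in rng:
        if n not in exceptions:
            assert diff(n) == value, n

# last row of the case analysis
assert all(diff(n) >= 6 for n in range(83, 500) if n != 85)
```

Since $\lceil 2(n+15)/5\rceil-\lceil (n+15)/3\rceil$ exceeds $\mathrm{diff}(n)$ by exactly one, the exceptional values repeat with period $15$, which explains the pattern of the listed exceptions.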
The following theorem concerns the semitotal domination number of the corona and join products of two graphs.
\begin{theorem}
\begin{enumerate}
\item[(i)]
If $G_1$ and $G_2$ are two graphs, then
$$\gamma_{t2}(G_1\circ G_2)\leq\gamma_{t2}(G_1)+\gamma_{t2}(G_2)\times(|V(G_1)|-\gamma_{t2}(G_1)).$$
Moreover, this inequality is sharp when $G_2$ is a complete graph.
\item[(ii)] For two non-complete graphs $G$ and $H$ of order at least three,
$$ \gamma_{t2}(G\vee H)=\min\{\gamma_{t2}(G), \gamma_{t2}(H),4\}.$$
\end{enumerate}
\end{theorem}
\noindent{\bf Proof.\ }
\begin{enumerate}
\item[(i)] By the construction of $G_1\circ G_2$, the vertices of a semitotal dominating set of $G_1$
dominate all vertices of the copies of $G_2$ adjacent to them. Suppose that $D$ is a minimum semitotal dominating set of $G_1$. Every vertex in $V(G_1)\setminus D$ is joined to one copy of $G_2$, and the vertices of that copy can be dominated by a semitotal dominating set of the copy. So
$\gamma_{t2}(G_1\circ G_2)\leq \gamma_{t2}(G_1)+\gamma_{t2}(G_2)\times(|V(G_1)|-\gamma_{t2}(G_1)).$
\item[(ii)] By the construction of $G\vee H$, every vertex of $G$ is adjacent to every vertex of $H$; therefore any semitotal dominating set of $G$, and likewise any semitotal dominating set of $H$, is a semitotal dominating set of $G\vee H$. If $\gamma_{t2}(G)>4$ and $\gamma_{t2}(H)>4$,
then four vertices suffice to dominate all vertices of $G\vee H$. So we have the result.\hfill $\square$\medskip
\end{enumerate}
\begin{theorem}
If $G$ is a complete graph and $H$ is an arbitrary non-complete graph, then
$$ \gamma_{t2}(G\vee H)=\gamma_{t2}(H).$$
\end{theorem}
\begin{proof}
Since every vertex of $G$ is adjacent to every vertex of $H$ in $G\vee H$, any semitotal dominating set of $H$ is a semitotal dominating set of $G\vee H$. On the other hand, any two vertices of $G$ are at distance one, and so $ \gamma_{t2}(G\vee H)=\gamma_{t2}(H)$.\hfill $\square$\medskip
\end{proof}
\begin{theorem}
$\gamma_{t2}(P_n\Box P_m) =\lceil \frac{2n}{5}\rceil\times \lceil \frac{m}{3}\rceil+ \lfloor\frac{m}{3}\rfloor \times (n-\lceil \frac{2n}{5}\rceil).$
\end{theorem}
\begin{proof}
By the construction of $P_n\Box P_m$, we have $m$ copies of $P_n$. The vertices of the second copy that are adjacent to the semitotal dominating set of the first copy are dominated by that set, and the remaining
vertices of the second copy can be dominated from the third copy. Continuing in this way through the remaining copies, the minimum number of vertices needed to dominate all vertices of $P_n\Box P_m$ is
$\lceil \frac{2n}{5}\rceil\times \lceil \frac{m}{3}\rceil+ \lfloor\frac{m}{3}\rfloor \times (n-\lceil \frac{2n}{5}\rceil).$\hfill $\square$\medskip
\end{proof}
\section{The number of semitotal dominating sets}
In this section, we consider the problem of counting the semitotal dominating sets of any given size in a graph $G$.
Let ${\mathcal D}_{t2}(G,i)$ be the family of
semitotal dominating sets of a graph $G$ with cardinality $i$, and let
$d_{t2}(G,i)=|{\mathcal D}_{t2}(G,i)|$. The generating function for the number of semitotal dominating sets of $G$ is the polynomial
$$D_{t2}(G,x)=\sum_{ i=1}^{|V(G)|} d_{t2}(G,i) x^{i},$$ which we call the semitotal domination polynomial of $G$.
Here, we count this kind of dominating set and study the semitotal domination polynomial for certain graphs.
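For small graphs the coefficients $d_{t2}(G,i)$ can be computed exhaustively. The Python sketch below is ours (it again uses the "within distance two" definition from the abstract, and the function name is an assumption); it is illustrated on the path $P_5$:

```python
from itertools import combinations

def semitotal_domination_polynomial(n, edges):
    """Coefficients d_t2(G, i) of D_t2(G, x), by exhaustive search."""
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    ball2 = {v: {u for u in range(n)
                 if u != v and (u in adj[v] or adj[u] & adj[v])}
             for v in range(n)}
    coeffs = {}
    for i in range(1, n + 1):
        count = 0
        for D in combinations(range(n), i):
            S = set(D)
            if all(v in S or adj[v] & S for v in range(n)) and \
               all(ball2[v] & (S - {v}) for v in S):
                count += 1
        if count:
            coeffs[i] = count
    return coeffs

# Demonstration on the path P_5 (vertices 0..4)
coeffs = semitotal_domination_polynomial(5, [(i, i + 1) for i in range(4)])
assert min(coeffs) == 2   # gamma_t2(P_5) = ceil(10/5) = 2
assert coeffs[2] == 1     # {1, 3} is the unique minimum semitotal dominating set
assert coeffs[5] == 1     # the full vertex set always qualifies here
```

The lowest-order term of $D_{t2}(G,x)$ is $x^{\gamma_{t2}(G)}$, which the demonstration confirms for $P_5$.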
\begin{theorem}
\begin{enumerate}
\item[(i)] For every $i\neq n$, $d_{t2}(K_{1,n},i)=0$, and $d_{t2}(K_{1,n},n)=1.$
\item[(ii)] For every $n\geq 3$, $D_{t2}(K_{1,n},x)=x^n$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item[(i)]
Since $\gamma_{t2}(K_{1,n})=n$, we have $d_{t2}(K_{1,n},i)=0$ for $i<n$. Since the central vertex is at distance one from every other vertex, it does not belong to any semitotal dominating set of $K_{1,n}$, and so $d_{t2}(K_{1,n},n)=1.$
\item[(ii)]
Follows from Part (i) and the definition of the semitotal domination polynomial. \hfill $\square$\medskip
\end{enumerate}
\end{proof}
\begin{theorem} For the complete bipartite graph $K_{m,n}$ with $m\leq 3$ and $m \leq n$, we have
\begin{equation*}
d_{t2}(K_{m,n},i)=\left\{
\begin{array}{ll}
0 & i \leq m-1\\
{m+n\choose m}-{n\choose m}-m{n\choose m-1} & i=m\\
{m+n\choose i}-{n\choose i}-m{n\choose i-1}-n{m\choose i-1} &i>m ,i\neq n\\
{m+n\choose n}-mn-n{m\choose n-1}& i=n
\end{array}\right.
\end{equation*}
\end{theorem}
\begin{proof}
If $m\leq 3$, then $\gamma_{t2}(K_{m,n})=m$, and so $d_{t2}(K_{m,n},i)=0$ for $i\leq m-1$. If $i\geq m$, the number of $i$-subsets of vertices is ${m+n\choose i}$, but since a vertex of the first part and a vertex of the second part are at distance one, some of these sets are not semitotal dominating sets. The numbers of $i$-sets which cannot be semitotal dominating sets are ${m\choose 1}\times {n\choose i-1}$, ${n\choose 1}\times {m \choose i-1}$, and, for $i\neq n$, ${n \choose i}$. So we have the result.\hfill $\square$\medskip
\end{proof}
Similarly, we have the following theorem:
\begin{theorem} For the complete bipartite graph $K_{m,n}$ with $4\leq m\leq n$, we have
\begin{equation*}
d_{t2}(K_{m,n},i)=\left\{
\begin{array}{ll}
0 & i \leq 3\\
{m+n\choose m}-{n\choose m}-m{n\choose m-1} & i=m\\
{m+n\choose i}-{n\choose i}-m{n\choose i-1}-n{m\choose i-1} &i>3 ,i\neq n\\
{m+n\choose n}-mn-n{m\choose n-1}& i=n
\end{array}\right.
\end{equation*}
\end{theorem}
\begin{theorem}
\begin{enumerate}
\item[(i)] For every $i\geq n\geq 2$, $ d_{t2}(F_n,i)= 2^{n} {n\choose i-n}.$
\item[(ii)] For every $n\geq 2$, $D_{t2}(F_n,x)=2^nx^n(1+x)^n$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item[(i)] Since $\gamma_{t2}(F_n)=n$, we have $d_{t2}(F_n,i)=0$ for $i<n$.
If $i\geq n$, then we first select one vertex from each of the $n$ triangles in $2^{n}$ ways,
and then select the remaining $i-n$ vertices in ${n\choose i-n}$ ways. Therefore $d_{t2}(F_n,i)= 2^{n} {n\choose i-n}$.
\item[(ii)]
It follows from Part (i) and the definition of the semitotal domination polynomial.\hfill $\square$\medskip
\end{enumerate}
\end{proof}
We need the following lemma to obtain more results:
\begin{lemma}{\rm \cite{Goddard}}
If $G$ is a connected graph on $n\geq 4$ vertices, then $\gamma_{t_2}(G)\leq \frac{n}{2}$.
\end{lemma}
Goddard, Henning, and McPillan in \cite{Goddard} characterized the trees whose semitotal domination number is exactly one-half their order. They defined a family $\mathcal{T}$ of trees as follows: let $H$ be a nontrivial tree and, for each vertex $v$ of $H$, add either a $P_2$ or a $P_4$ and identify $v$ with one end vertex of the path. They proved the following theorem:
\begin{theorem} \label{half}
Let $T$ be a tree of order $n\geq 4$. Then $\gamma_{t_2}(T) =\frac{n}{2}$ if and only if $T\in \mathcal{T} $ or $T=K_{1,3}$.
\end{theorem}
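The two extremal families in Theorem \ref{half} can be verified by brute force in the smallest cases, again reading "within distance $2$" as in the abstract. This sketch is ours, and the helper name `gamma_t2` is an assumption; it checks $K_{1,3}$ and the smallest member of $\mathcal{T}$ (obtained from $H=K_2$ by attaching a $P_2$ at each vertex):

```python
from itertools import combinations
from math import inf

def gamma_t2(n, edges):
    """Brute-force semitotal domination number on vertex set 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    ball2 = {v: {u for u in range(n)
                 if u != v and (u in adj[v] or adj[u] & adj[v])}
             for v in range(n)}
    for k in range(1, n + 1):
        for D in combinations(range(n), k):
            S = set(D)
            if all(v in S or adj[v] & S for v in range(n)) and \
               all(ball2[v] & (S - {v}) for v in S):
                return k
    return inf

# K_{1,3}: centre 0 joined to leaves 1, 2, 3; order n = 4
assert gamma_t2(4, [(0, 1), (0, 2), (0, 3)]) == 2          # = n/2

# A member of the family T: take H = K_2 on {0, 1} and attach a P_2
# at each vertex (new leaves 2 and 3); the result is the path 2-0-1-3
assert gamma_t2(4, [(0, 1), (0, 2), (1, 3)]) == 2          # = n/2
```

Both graphs attain the upper bound $n/2$ of the preceding lemma.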
Now, we state and prove the following result:
\begin{theorem}
For every tree $T\in \mathcal{T}$, $D_{t_2}(T,x)=D(T,x)$.
\end{theorem}
\begin{proof}
We must prove that $d_{t_2}(T,i)=d(T,i)$ for every $i\geq \gamma_{t_2}(T)$. Suppose that $i\geq \gamma_{t_2}(T)$. By Theorem \ref{half}, every dominating set of $T$ with cardinality $i\geq \gamma_{t_2}(T)$ is a semitotal dominating set of $T$. Therefore, we have the result. \hfill $\square$\medskip
\end{proof}
Goddard, Henning, and McPillan in \cite{Goddard} extended Theorem \ref{half} from trees to all graphs. For given graphs $G$ and $H$, form a copy of $H$ for every vertex of $G$ and identify one vertex in the copy of $H$ with the corresponding vertex in $G$. We denote this graph by $G\diamond H$ (see $P_5\diamond C_4$ in Figure \ref{diamond}). The following theorem characterizes the graphs with minimum degree at least $2$ whose semitotal domination number is exactly one-half their order.
\begin{theorem} \label{halfgraph}{\rm \cite{Goddard}}
Let $G$ be a connected graph of order $n\geq 4$ with minimum degree at least $2$. Then $\gamma_{t_2}(G) =\frac{n}{2}$ if and only if $G$ is $C_6$, $C_8$, a spanning subgraph of $K_4$, or $H\diamond C_4$ for some graph $H$.
\end{theorem}
Now, we have the following result:
\begin{theorem}
$D_{t_2}(H\diamond C_4,x)=D(H\diamond C_4,x)$.
\end{theorem}
\begin{proof}
We must prove that $d_{t_2}(H\diamond C_4,i)=d(H\diamond C_4,i)$ for every $i\geq \gamma_{t_2}(H\diamond C_4)$. Suppose that $i\geq \gamma_{t_2}(H\diamond C_4)$. By Theorem \ref{halfgraph}, every dominating set of $H\diamond C_4$ with cardinality $i\geq \gamma_{t_2}(H\diamond C_4)$ is a semitotal dominating set of $H\diamond C_4$. Therefore, we have the result. \hfill $\square$\medskip
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{diamond}
\caption{ \label{diamond} The graph $P_5\diamond C_4$. }
\end{center}
\end{figure}
A split graph is a graph whose vertices can be partitioned into a clique and an independent set. Figure \ref{split} shows a split graph partitioned into a clique (the subgraph induced by $\{1,2,3\}$) and an independent set (induced by $\{4,5\}$).
\begin{theorem}
If $G$ is a connected split graph with no dominating vertex, then
\[
D_t(G,x)=D_{t_2}(G,x)=D(G,x).
\]
\end{theorem}
\begin{proof}
First we show that $\gamma(G)=\gamma_{t_2}(G)=\gamma_t(G)$. It suffices to prove that $\gamma_t(G)\leq \gamma(G)$. Suppose that $V(G)=C\cup I$ is a partition of the vertices of $G$ into a clique $C$ and an independent set $I$. Among the minimum dominating sets of $G$, choose one, say $D$, containing as few vertices of $I$ as possible. If $D$ contains a vertex $v\in I$, pick a neighbour $u\in C$ of $v$; then $(D\setminus \{v\})\cup \{u\}$ is a minimum dominating set containing fewer vertices of $I$, a contradiction, so $D\subseteq C$. Since $G$ has no dominating vertex, every
dominating set contained in $C$ is a total dominating set, and so $\gamma_t(G) \leq \gamma(G)$. Hence every semitotal dominating set of cardinality $i$ of $G$ is a total dominating set of cardinality $i$ and a dominating set of $G$ with cardinality $i$. Therefore $d(G,i)=d_t(G,i)=d_{t2}(G,i)$, and we have the result. \hfill $\square$\medskip
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{split}
\caption{ \label{split} Example of split graph. }
\end{center}
\end{figure}
\section{Stability of semitotal domination number}
In this section, we introduce the semitotal domination stability of a graph and compute this parameter for some specific graphs.
\begin{definition}
Let $G$ be a graph of order $n\geq2$. The {\it semitotal domination stability}, $st_{\gamma_{t2}}(G)$, of a graph $G$ is the minimum number of vertices whose
removal changes the semitotal domination number.
\end{definition}
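For small graphs this parameter can be computed directly from the definition by trying all vertex subsets. The sketch below is ours (function names included); as a simplification it treats a graph that admits no semitotal dominating set after removal, e.g. one with an isolated vertex, as a change of the parameter:

```python
from itertools import combinations
from math import inf

def gamma_t2(vertices, edges):
    """Brute-force semitotal domination number; inf if no set exists."""
    vs = list(vertices)
    adj = {v: set() for v in vs}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    ball2 = {v: {u for u in vs
                 if u != v and (u in adj[v] or adj[u] & adj[v])}
             for v in vs}
    for k in range(1, len(vs) + 1):
        for D in combinations(vs, k):
            S = set(D)
            if all(v in S or adj[v] & S for v in vs) and \
               all(ball2[v] & (S - {v}) for v in S):
                return k
    return inf

def st_gamma_t2(n, edges):
    """Smallest k such that deleting some k vertices changes gamma_t2."""
    base = gamma_t2(range(n), edges)
    for k in range(1, n):
        for removed in combinations(range(n), k):
            rest = set(range(n)) - set(removed)
            kept = [(u, w) for u, w in edges if u in rest and w in rest]
            if gamma_t2(rest, kept) != base:
                return k
    return inf

# P_6 has n = 5k + 1, and deleting one end vertex already lowers gamma_t2
assert st_gamma_t2(6, [(i, i + 1) for i in range(5)]) == 1
```

The double exhaustive search makes this practical only for graphs of at most a dozen or so vertices.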
\begin{theorem}
If $m\leq n$, then
\begin{equation*}
st_{\gamma_{t2}}(K_{m,n})=\left\{
\begin{array}{ll}
0, & m=2\\
1, &3\leq m \leq 4\\
m-3, &m > 4\\
\end{array}\right.
\end{equation*}
\end{theorem}
\noindent{\bf Proof.\ }
Suppose that $3\leq m\leq 4$ and $m\leq n$. In this case $\gamma_{t2}(K_{m,n})=\min\{m,n\}=m$, and removing one vertex changes $\gamma_{t2}(K_{m,n})$. Therefore, in this case,
$st_{\gamma_{t2}}(K_{m,n})=1$.
Now suppose that $4<m\leq n$. In this case $\gamma_{t2}(K_{m,n})=4$, and the minimum number of vertices whose removal changes $\gamma_{t2}(K_{m,n})$ is $m-3$.\hfill $\square$\medskip
\begin{theorem}
\begin{equation*}
st_{\gamma_{t2}}(P_{n})=st_{\gamma_{t2}}(C_{n})=\left\{
\begin{array}{ll}
1, &n=5k+1,\quad n=5k+3\\
2, &n=5k+2, \quad n=5k-1\\
3, &n=5k.
\end{array}\right.
\end{equation*}
\end{theorem}
\noindent{\bf Proof.\ }
We know that $\gamma_{t2}(P_n)=\lceil \frac{2n}{5}\rceil$ (Theorem \ref{Thm1}). For $n=5k+1$ and $n=5k+3$, $\gamma_{t2}(P_{n-1})=\gamma_{t2}(P_{n})-1$, so in these cases $st_{\gamma_{t2}}(P_n)=1$.
Similar arguments give the result in the other cases. \hfill $\square$\medskip
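The case analysis behind "similar arguments" reduces to comparing $\lceil 2n/5\rceil$ with $\lceil 2(n-j)/5\rceil$, as in the proof. A Python sketch (ours; the name `st_from_formula` is an assumption, and it tracks only the repeated deletion of end vertices, as the proof does):

```python
def ceil_div(a, b):
    return -(-a // b)

def g(n):
    """gamma_t2(P_n) = ceil(2n/5), the closed form from Theorem 2.1."""
    return ceil_div(2 * n, 5)

def st_from_formula(n):
    """Smallest j with g(n - j) != g(n): the quantity the proof tracks."""
    return next(j for j in range(1, n) if g(n - j) != g(n))

for k in range(2, 20):
    assert st_from_formula(5 * k + 1) == 1
    assert st_from_formula(5 * k + 3) == 1
    assert st_from_formula(5 * k + 2) == 2
    assert st_from_formula(5 * k - 1) == 2
    assert st_from_formula(5 * k) == 3
```

This confirms the arithmetic of all five residue classes at once.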
\begin{theorem}
\begin{equation*}
st_{\gamma_{t2}}(W_{n})=\left\{
\begin{array}{ll}
1, &n=3k+2 \\
2, &n=3k \\
3, & n=3k+1 \\
\end{array}\right.
\end{equation*}
\end{theorem}
\noindent{\bf Proof.\ }
We know that $\gamma_{t2}(W_n)=\lceil\frac{n-1}{3}\rceil$ (Theorem \ref{Thm1}). Since
$\lceil\frac{3k+2-1}{3}\rceil=\lceil\frac{3k+1-1}{3}\rceil+1$
so for the case $n=3k+2$, $st_{\gamma_{t2}}(W_{n})=1$. With similar arguments we have the results for another cases. \hfill $\square$\medskip
\begin{theorem}
\begin{enumerate}
\item[(i)] If $5\leq n \leq 10$ and $m>n$, then
\begin{equation*}
st_{\gamma_{t2}}(P_{n}\vee P_{m})=\left\{
\begin{array}{ll}
1 &n=5k+1,\quad n=5k+3\\
2 &n=5k+2 ,\quad n=5k-1\\
3 &n=5k \\
\end{array}\right.
\end{equation*}
\item[(ii)] If $n>10$ and $n\leq m$, then $st_{\gamma_{t2}}(P_{n}\vee P_{m})=n-7$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item[(i)]
Suppose that $5\leq n \leq 10$ and $m>n$. Since $\gamma_{t2}(P_{n}\vee P_{m})=\gamma_{t2}(P_n)$,
in this case $st_{\gamma_{t2}}(P_{n}\vee P_{m})=st_{\gamma_{t2}}(P_{n})$.
\item[(ii)]
If $n>10$, then by Theorem \ref{Thm2.2}, $\gamma_{t2}(P_{n}\vee P_{m})=4$, and since $\gamma_{t2}(P_7)=\lceil\dfrac{2\times 7}{5}\rceil=3$, we conclude that $st_{\gamma_{t2}}(P_{n}\vee P_{m})=n-7$. \hfill $\square$\medskip
\end{enumerate}
\end{proof}
\begin{theorem}
$st_{\gamma_{t2}}(P_{n}\square P_{m})=\lceil\frac{2n}{5}\rceil.$
\end{theorem}
\begin{theorem}
\begin{enumerate}
\item[(i)]
If $F_n$ is a friendship graph, then $st_{\gamma_{t2}}(F_n)=2.$
\item[(ii)] If $B_n$ is a book graph (the Cartesian product $K_{1,n}\square P_2$), then
$st_{\gamma_{t2}}(B_n)=2$.
\item[(iii)] If $S_n$ is a star graph then
$st_{\gamma_{t2}}(S_n)=1$
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item[(i)] Since $\gamma_{t2}(F_n)=n$, we have $\gamma_{t2}(F_{n-1})=n-1$, and to obtain $F_{n-1}$ from $F_n$ we must remove two vertices. So $st_{\gamma_{t2}}(F_n)=2.$
\item[(ii)] Since $\gamma_{t2}(B_n)=n+1$, we have $\gamma_{t2}(B_{n-1})=n$, and to obtain $B_{n-1}$ from $B_n$ we must remove two vertices. So $st_{\gamma_{t2}}(B_n)=2.$
\item[(iii)] Since $\gamma_{t2}(S_n)=n$ and the central vertex is not in any semitotal dominating set, removing one leaf of $S_n$ yields $S_{n-1}$. Therefore $st_{\gamma_{t2}}(S_n)=1$.\hfill $\square$\medskip
\end{enumerate}
\end{proof}
% arXiv:2107.01424, ``On the semitotal dominating sets of graphs'' (math.CO)

% arXiv:math/0412102, ``The arithmetic and the geometry of Kobayashi hyperbolicity''
% Abstract: We survey the properties of Brody and Kobayashi hyperbolic manifolds.
\section{Introductory remarks about hyperbolicity}
In this section we define Brody and Kobayashi hyperbolicity and state
their basic properties. We also give examples of hyperbolic and
non-hyperbolic manifolds. The reader can refer to
\cite{lang:hyperbolic} and \cite{demailly:hyperbolic} for more
details.\footnote{After these notes were written, I discovered that O.
Debarre also has some very nice notes on hyperbolicity
\cite{debarre:hyperbolicity}. The reader is encouraged to consult
these notes for additional information on hyperbolic varieties,
especially about those varieties with ample cotangent bundle.} Let
$\mathbb{C}$ denote the complex plane. We use $X$ and $Y$ to denote complex
manifolds. We reserve $C$ for curves. \smallskip
\noindent {\bf Observation:} If $C$ is a curve of genus at least two,
then any holomorphic map $f: \mathbb{C} \rightarrow C$ is necessarily
constant. \smallskip
\noindent {\bf Proof:} Any map $f:\mathbb{C} \rightarrow C$ factors through
the universal cover of $C$, which is the unit disc. Such a map is
constant since by Liouville's Theorem every bounded entire function is
constant. $\Box$ \smallskip
In contrast, curves of genus zero and one do admit non-constant
holomorphic maps from $\mathbb{C}$. This property of higher genus curves can
be generalized to higher dimensional manifolds and leads to the
concept of Brody hyperbolicity.
\begin{definition}
A complex manifold $X$ is {\bf Brody hyperbolic} if there are no
non-constant holomorphic maps from $\mathbb{C}$ to $X$.
\end{definition}
{\bf Examples:} $\bullet$ Any variety of the form $\prod_{i=1}^n C_i$,
where $C_i$ are curves of genus at least two, is Brody
hyperbolic. More generally, a finite product of Brody hyperbolic
manifolds is Brody hyperbolic.
\smallskip
$\bullet$ If a complex manifold $X$ contains a rational curve or a
complex torus, then $X$ is not Brody hyperbolic. If $X$ contains
a rational curve, then the inclusion of $\mathbb{C}$ in the rational curve
followed by the inclusion of the rational curve in $X$ gives a
non-constant holomorphic map from $\mathbb{C}$ to $X$.
The universal cover of a $g$-dimensional complex torus is $\mathbb{C}^g$.
Taking the image of a general complex line in $\mathbb{C}^g$ under the
quotient map gives a non-constant holomorphic map from $\mathbb{C}$ into the
complex torus. If $X$ contains a complex torus, composing the
previous map with the inclusion of the complex torus in $X$ gives a
non-constant holomorphic map from $\mathbb{C}$ into $X$. \smallskip
$\bullet$ The blow-up of a variety contains rational curves, hence is not
Brody hyperbolic. Consequently, Brody hyperbolicity is not a birational invariant.
\smallskip
$\bullet$ Consider $X = \P^2 - \cup_{i=1}^4 l_i$ where $l_i$ are
general lines. Then $X$ is not Brody hyperbolic since there are
$\mathbb{C}^*$'s in $X$---take the intersection of $X$ with the line passing
through $l_1 \cap l_2$ and $l_3 \cap l_4$. \smallskip
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=lines.eps}
\end{center}
\caption{A Brody hyperbolic manifold.}
\label{lines}
\end{figure}
$\bullet$ Consider $Y = \P^2 - \cup_{i=1}^5 l_i$ where $l_i$ are the
lines pictured in Figure \ref{lines}. $Y$ is Brody hyperbolic.
Consider the pencil of lines in $\P^2$ based at one of the points
where three of the lines $l_i$ intersect. Restricting this pencil to
$Y$, we see that $Y$ is fibered over $\P^1$ punctured at three points
with fibers isomorphic to $\P^1$ punctured at three points. Composing
any holomorphic map from $\mathbb{C}$ with the map to the base of the
fibration, we conclude that the image of the map must lie in a fiber.
Since the fibers are Brody hyperbolic, the map must be constant.
Using the following easy lemma, which generalizes the idea of this
example, one can generate many examples of Brody hyperbolic manifolds.
\begin{lemma}\label{fibration}
Let $f: X \rightarrow Y$ be a smooth morphism of complex manifolds,
where $Y$ is Brody hyperbolic and $f^{-1} (y)$ is Brody hyperbolic
for every $y \in Y$. Then $X$ is Brody hyperbolic.
\end{lemma}
The Schwarz-Pick Lemma states that a holomorphic
map between two hyperbolic Riemann surfaces is either a local isometry
or distance decreasing. The hyperbolic distance between any two points
$p$ and $q$ on a hyperbolic Riemann surface $C$ is equal to the
shortest hyperbolic distance between lifts of the points $p$ and $q$
to the universal cover, the unit disc $\Delta$. Any holomorphic map
$f: \Delta \rightarrow C$ factors through the universal cover.
Consequently, the Schwarz-Pick Lemma implies that the infimum of the
hyperbolic distances between $p'$ and $q'$ in $\Delta$ for which there
exists a holomorphic map $f: \Delta \rightarrow C$ such that $f(p')=p$
and $f(q') = q$ is achieved when $f$ is the universal covering
map. Moreover, this infimum
is equal to the hyperbolic distance between $p$ and $q$. In this form
we can generalize the hyperbolic distance to higher dimensional
manifolds. \smallskip
Let $\Delta_r$ denote the disc of radius $r$ in the complex plane. We
will denote the unit disc by $\Delta$. If $X$ is a complex manifold,
then $X$ can be endowed with a pseudo-distance due to Kobayashi.
\begin{definition}
Given a tangent vector $\xi \in T_{X,x}$ at $x \in X$ we define its
{\it Kobayashi pseudo-norm} to be
$$k(\xi) := \inf_{\lambda} \{ \lambda : \exists \ f : \Delta
\rightarrow X, \ f(0) = x, \ \lambda f'(0) = \xi \}.$$
The {\it
Kobayashi pseudo-distance} $d_{X}$ is the geodesic pseudo-distance
obtained by integrating this pseudo-norm.
\end{definition}
\begin{figure}[htbp]
\begin{center}
\epsfig{figure=kob.eps}
\end{center}
\caption{Measuring the Kobayashi distance.}
\label{kob}
\end{figure}
More visually, we can describe how to compute the distance between any
two points $p,q \in X$ as follows. We find chains of maps $f_i :
\Delta \rightarrow X$ with two distinguished points $p_i, q_i$, where
$f_1(p_1) = p$, $f_n(q_n)= q$ and $f_i(q_i) = f_{i+1}(p_{i+1})$. The
Kobayashi pseudo-distance is the infimum of the sum of the hyperbolic
distances between $p_i$ and $q_i$ over all such chains. \smallskip
The Schwarz-Pick Lemma generalizes to holomorphic maps between higher
dimensional manifolds endowed with the Kobayashi pseudo-distance.
\begin{lemma}\label{Pick}
If $f: X \rightarrow Y$ is a holomorphic map between two complex
manifolds, then $d_X(p,q) \geq d_Y (f(p), f(q))$.
\end{lemma}
Unfortunately, $d_X$ does not have to be non-degenerate. \smallskip
\noindent {\bf Main Counterexample:} The Kobayashi pseudo-distance on
$\mathbb{C}$ is identically zero. To compute the distance between two points
$x$ and $y$ in $\mathbb{C}$, consider for each integer $n>1$ the functions
$f_n(z)= n(y-x)z + x$ from
the unit disc to $\mathbb{C}$. The function $f_n(z)$ maps 0
to $x$ and $1/n$ to $y$. Since the hyperbolic distance between $0$ and
$1/n$ in the unit disc tends to zero as $n$ tends to infinity, we
conclude that the Kobayashi pseudo-distance between $x$ and $y$ in
$\mathbb{C}$ is zero. \smallskip
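In terms of the Poincar\'e distance on the unit disc (with the normalization $d_{\Delta}(0,r)=\log\frac{1+r}{1-r}$; other references differ by a factor of $2$), the distance-decreasing property of Lemma \ref{Pick} applied to $f_n$ gives
$$ d_{\mathbb{C}}(x,y) \;\leq\; d_{\Delta}\!\left(0,\tfrac{1}{n}\right) \;=\; \log \frac{1+1/n}{1-1/n} \;\longrightarrow\; 0 \quad \text{as } n \to \infty. $$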
\begin{definition}
A complex manifold $X$ is called {\it Kobayashi hyperbolic} if its
Kobayashi pseudo-distance is non-degenerate.
\end{definition}
\noindent {\bf Kobayashi hyperbolicity implies Brody hyperbolicity. }
As a consequence of Lemma \ref{Pick} and the above example we see that
if a complex manifold $X$ admits a non-constant holomorphic map from
$\mathbb{C}$, then $X$ cannot be Kobayashi hyperbolic. In other words,
Kobayashi hyperbolicity implies Brody hyperbolicity. For compact
complex manifolds the converse of this result is true
(\cite{brody:hyperbolic}).
\begin{theorem}[Brody]\label{Br=Ko}
A compact complex manifold $X$ is Kobayashi hyperbolic if and only
if it is Brody hyperbolic.
\end{theorem}
\noindent{\bf Sketch of proof:} If $X$ is not Kobayashi hyperbolic, then there
exists a sequence of maps $f_n : \Delta \rightarrow X$ such that
$|df_n (0 )|$ tends to $\infty$, where $| \cdot |$ denotes a fixed
Hermitian metric on $X$. By rescaling the map we can assume that we
have a sequence of maps $f_n : \Delta_{r_n} \rightarrow X$ where
$|df_n(0)| = 1$ and the radii $r_n$ tend to $\infty$. \smallskip
Let $f:\Delta_r \rightarrow X$ be a holomorphic map with $|df(0)| =
c$. Let $f_t(z) = f(tz)$. Consider the function $$g(t)= \sup_{z \in
\Delta_r} |df_t (z)|.$$
The function $g(t)$ is increasing
for $0 \leq t \leq 1$ and continuous for $0 \leq t <1$. Moreover, as
$t$ tends to 1 from below $g(t)$ tends to $g(1)$. Hence there exists
$t$ in the interval $[0,1]$ and an automorphism $h$ of
$\Delta_r$ such that $$\sup_{z \in \Delta_r} |d(f_t \circ h)
(z)| = |d(f_t \circ h) (0)| = c .$$
This is known as the Brody
reparametrization lemma. \smallskip
Applying the Brody reparametrization lemma to our family of maps $f_n$, we
obtain
a new family of
maps $\tilde{f}_n :\Delta_{r_n} \rightarrow X$ whose derivatives are bounded
in norm by 1 in $\Delta_{r_n}$. Moreover, $|d \tilde{f}_n (0)| = 1$
for every $n$. We would like to show that we can select a subsequence
from this family that is uniformly convergent on every compact subset
of $\mathbb{C}$. Since the derivatives at $0$ all have norm 1, the limit is
a non-constant holomorphic map from $\mathbb{C}$ to
$X$. This violates Brody hyperbolicity. \smallskip
Using the compactness of $X$ we can check uniform convergence in a
neighborhood of every point $z_0$ in $\mathbb{C}$. Again using the compactness
of $X$ (and passing to a subsequence if necessary) we can assume the
family converges at $z_0$. In this situation the derivative bounds
imply that the family forms an equicontinuous family of
holomorphic maps around $z_0$. We can conclude that a subsequence
converges uniformly on compact sets by Ascoli's theorem. We thus
obtain a holomorphic map from $\mathbb{C}$ to $X$. $\Box$
\smallskip
Note that Brody's theorem does not have to hold for non-compact
manifolds (see \cite{brody:thesis}). Consider the domain in $\mathbb{C}^2$
given by
$$
D : = \{ (z,w) \ | \ |z|<1, \ |zw|<1 \ \mbox{and} \ |w| < 1 \
\mbox{if} \ z = 0 \}. $$
The projection of the domain $D$ to the first
coordinate gives rise to a family of discs parameterized by the unit
disc in the $z$-plane. Consequently, Lemma \ref{fibration} implies
that $D$ is Brody hyperbolic. On the other hand, when $z$ approaches
zero, the radius of the disc lying over $z$ tends to infinity. Using
this we can show that the Kobayashi pseudo-distance between $p =
(0,0)$ and $q =(0,w_0)$ in $D$ vanishes. We can find two points
$p'$ and $q'$ having the same $w$ coordinates as $p$ and $q$,
respectively, in a fiber close to zero. Since $p'$ and $q'$ are in a
very large disc, the hyperbolic distance between them is very small.
Since the hyperbolic distances between $p$ and $p'$ and $q$ and $q'$
are very small, the sum of the hyperbolic distances can be made
arbitrarily small. \smallskip
More explicitly, consider the three maps
$$f_1 (z) = (z,0), \ f_2(z) = \left( \frac{1}{n}, nz \right) , \ f_3
(z) = \left( \frac{1}{n} + \frac{1}{2} z, w_0 \right)$$
from the unit
disc into $D$. Note that $f_1(0)= (0,0)$, $f_1(1/n)=(1/n,0)$, $f_2(0)=
(1/n,0)$, $f_2(w_0/n) = (1/n, w_0)$ and $f_3(0)= (1/n,w_0)$,
$f_3(-2/n)= (0,w_0)$. Hence these maps form a chain of maps from the
unit disc to $D$ connecting $(0,0)$ to $(0,w_0)$. As $n$ tends to
infinity, the sum of the hyperbolic distances between the chosen
points in the unit disc tends to zero. We conclude that the Kobayashi
pseudo-distance between $(0,0)$ and $(0,w_0)$ is zero. Observe that
this example also shows that the analogue of Lemma \ref{fibration}
does not hold for Kobayashi hyperbolicity. \smallskip
Under suitable hypotheses on a subvariety $Y$ of $X$, we can assert
that $X-Y$, the complement of $Y$ in $X$, is Kobayashi hyperbolic. For
example, the following theorem is often useful in proving that the
complement of a subvariety is Kobayashi hyperbolic.
\begin{theorem}
Let $X$ be a compact variety and let $Y$ be a proper algebraic subset. If
$Y$ and $X - Y$ are Brody hyperbolic, then $X-Y$ is Kobayashi
hyperbolic.
\end{theorem}
\noindent {\bf Sketch of proof:} If $X-Y$ is not Kobayashi hyperbolic,
we can get a sequence of maps $f_n: \Delta_{r_n} \rightarrow X-Y$ that
converges to a holomorphic map $g$ from $\mathbb{C}$ to $X$. The question is
whether the image of $g$ intersects $Y$. The image cannot be
contained in $Y$ since $Y$ is Brody hyperbolic.
If we rule out that the image of $g$ intersects $Y$, then we obtain a
contradiction since $X-Y$ is Brody hyperbolic. One can prove that the
intersection points of the image of $g$ with $Y$ have to be isolated.
Suppose $g(z_0) \in Y$. Take a circle $S$ around $z_0$ such that $g(S)
\subset X-Y$. The winding number is zero because $f_n(\Delta_{r_n})$
does not meet $Y$. Hence $Y$ cannot intersect the image of the
interior of $S$, contradicting that $g(z_0) \in Y$. $\Box$ \smallskip
We now discuss some examples of Kobayashi hyperbolic
manifolds. From now on whenever we say hyperbolic without further
qualification, we will always mean Kobayashi hyperbolic. The
following example due to Green (\cite{green:hyperplane}) gives the
first non-trivial examples.
\begin{theorem}[Green]
The complement of $2n+1$ general hyperplanes in $\P^n$ is
hyperbolic.
\end{theorem}
Generalizing further one can ask whether the complement of a very
general irreducible hypersurface in $\P^n$ of large enough degree is
hyperbolic. Siu and Yeung answer this question positively in $\P^2$
(\cite{siuyeung:complement}). Here and in the following by `very
general' we will refer to the complement of the union of countably
many proper subvarieties.
\begin{theorem}[Siu-Yeung]\label{curve}
Let $C$ be a very general curve of sufficiently large degree
in $\P^2$. Then $\P^2 - C$ is Kobayashi hyperbolic.
\end{theorem}
Note that this theorem can be interpreted as a generalization of
Picard's theorem to higher dimensions. Recall that Picard's theorem
says that any entire map which omits two values is constant. This
theorem implies that any holomorphic map from $\mathbb{C}$ into $\P^n$ which
omits a (very general) hypersurface of a large enough degree is constant.
\smallskip
Siu and Yeung also prove the analogue of this result for the
complement of very general ample divisors in abelian varieties
(\cite{siuyeung:4} \cite{siuyeung:one}, \cite{siuyeung:2}
\cite{siuyeung:3}).
\begin{theorem}[Siu-Yeung]
Let $A$ be an abelian variety. Let $D$ be a very general ample
divisor in $A$. Then $A - D$ is Kobayashi hyperbolic.
\end{theorem}
\noindent There is a close relation between the hyperbolicity of
complements of divisors in projective space and the hyperbolicity of
general hypersurfaces in projective space. We have the following
theorem due to Siu (\cite{siu:hyperbolic}, \cite{siu:hypersurface}).
\begin{theorem}[Siu]\label{siuhyp}
A very general surface of sufficiently high degree in $\P^3$ is
Kobayashi hyperbolic.
\end{theorem}
\noindent In fact, Siu has recently generalized Theorems \ref{curve} and
\ref{siuhyp} to hypersurfaces in $\P^n$. Not all of the proofs
have appeared in print.
\begin{theorem}[Siu]
Let $X$ be a very general hypersurface of sufficiently large degree
in $\P^n$. Then $\P^n - X$ is Kobayashi hyperbolic.
\end{theorem}
\begin{theorem}[Siu]\label{higherdim}
A very general hypersurface of sufficiently high degree in $\P^n$ is
Kobayashi hyperbolic.
\end{theorem}
\noindent For surfaces in $\P^3$ the term `sufficiently high degree'
can be taken to mean at least 11. A very general
hypersurface of degree larger than $2n$ in $\P^n$ is expected to be
hyperbolic; however, the currently known bounds are much larger than the
expected ones. \smallskip
Finally, we observe that some properties of the tangent or cotangent
bundle of a complex manifold force the manifold to be hyperbolic. For
example, if $X$ admits a metric with negative sectional curvature; or
if $X$ has ample cotangent bundle; or if $X$ is the quotient of a
bounded domain in $\mathbb{C}^n$ by a free group action, then $X$ is Kobayashi
hyperbolic. \smallskip
As an amusing corollary we note that $M_g^0$, the moduli space of
automorphism-free, smooth curves of genus $g > 2$, is Kobayashi
hyperbolic since the Weil-Petersson metric is a metric on $M_g^0$ with
negative sectional curvature. Consequently, $M_g^0$ does not contain
any complete rational or elliptic curves. It would be interesting to
give lower bounds on the genus of complete curves contained in $M_g^0$.
\section{The geometry of Kobayashi hyperbolicity}
In this section we discuss some proven and conjectural geometric
characterizations of hyperbolicity. Hyperbolicity imposes strong
restrictions on the geometry of a variety. In particular it constrains
the type of subvarieties the variety can have.
\begin{proposition}
Let $X$ be a compact complex hyperbolic manifold with Hermitian
metric $\omega$. Then $\exists \ \epsilon > 0$ such that for any
reduced, irreducible curve $C \subset X$, the genus $g$ of the
normalization satisfies $$g \geq \epsilon \deg_{\omega} (C).$$
\end{proposition}
\noindent {\bf Sketch of proof:} If $X$ is hyperbolic, then there
exists a constant $\epsilon_0 > 0$ such that $d_X(\xi) \geq \epsilon_0 || \xi
||_{\omega}$ for every tangent vector $\xi$. Let $\nu: C^{\nu}
\rightarrow C$ be the normalization of a curve $C \subset X$. If we
denote the hyperbolic metric on $C^{\nu}$ by $k_{C^{\nu}}$, then the
Gauss-Bonnet formula implies that $$-\frac{1}{4} \int_{C^{\nu}}
\mbox{curv}(k_{C^{\nu}}) = -\frac{\pi}{2} \chi(C^{\nu}).$$
There is a
natural holomorphic map $i \circ \nu: C^{\nu} \rightarrow X$ obtained
by composing the normalization map $\nu$ from $C^{\nu}$ to $C$ with the
inclusion $i$ of $C$ in $X$. Since the Kobayashi distance can only
decrease under compositions of holomorphic maps, we conclude that
$$k_{C^{\nu}}(\xi) \geq \epsilon_0 ||(i \circ \nu)_*
(\xi)||_{\omega}$$
for any tangent vector $\xi \in T_{C^{\nu}}$.
Integrating both sides of the inequality yields the proposition. $\Box$
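The final integration step can be spelled out; the following is a sketch, assuming the normalization $\mbox{curv}(k_{C^{\nu}}) \equiv -4$ implicit in the Gauss-Bonnet formula above (the precise constants depend on this choice):

```latex
% Since X is hyperbolic it contains no rational or elliptic curves, so the
% normalization C^\nu has genus g \geq 2 and \chi(C^\nu) = 2 - 2g. Squaring
% the pointwise inequality k_{C^\nu}(\xi) \geq \epsilon_0 ||(i\circ\nu)_*(\xi)||_\omega
% gives a comparison of area forms, which integrates to:
\begin{align*}
\pi (g - 1) \;=\; -\frac{\pi}{2}\,\chi(C^{\nu})
 \;=\; -\frac{1}{4}\int_{C^{\nu}} \mbox{curv}(k_{C^{\nu}})
 \;\geq\; \epsilon_0^{2} \int_{C^{\nu}} (i \circ \nu)^{*}\omega
 \;=\; \epsilon_0^{2}\, \deg_{\omega}(C).
\end{align*}
% Hence g \geq 1 + (\epsilon_0^2/\pi)\deg_\omega(C), and in particular
% g \geq \epsilon \deg_\omega(C) with \epsilon = \epsilon_0^2/\pi.
```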
\smallskip
Lang has conjectured that the converse also holds. Let $X$ be a
compact, complex manifold endowed with a Hermitian metric. Lang
conjectures that if there exists a positive constant $\epsilon$ such
that for every reduced, irreducible curve $C$ in $X$ the ratio of the
genus of the normalization of $C$ to the degree of $C$ (with respect
to the Hermitian metric) is bounded below by $\epsilon$, then $X$ is
hyperbolic. \smallskip
If the geometric genus of every curve in a variety $X$ is bounded
below by some fixed positive multiple of its degree, then $X$ does not
admit any non-constant holomorphic maps from any abelian variety. More
generally, Lang has conjectured that a projective variety $X$ is
hyperbolic if and only if it does not admit any holomorphic maps from
an abelian variety. The latter conjecture, of course, implies the
former one for projective varieties. \smallskip
One can ask for the relation between varieties of general type and
hyperbolic varieties. Since varieties of general type can
contain rationally connected subvarieties or abelian subvarieties, an
arbitrary variety of general type cannot be hyperbolic. However, Lang
has conjectured that the existence of subvarieties which are not of
general type accounts for the failure of hyperbolicity.
\begin{conjecture}\label{generaltype}
A projective algebraic variety $X$ is hyperbolic if and only if
every subvariety of $X$ is of general type.
\end{conjecture}
Ein (\cite{ein:generaltype}) and later Voisin (\cite{voisin:clemens})
have shown that any subvariety of a very general hypersurface of
degree at least $2n+1$ in $\P^n$ is of general type. Combining their
theorem with this conjecture one obtains a conjectural sharp form of
Siu's Theorem \ref{higherdim}. \smallskip
Proving that a projective variety $X$ is hyperbolic usually has two
components. One has to show that there are no non-algebraic maps of
$\mathbb{C}$ into $X$ and that there are no rational or elliptic curves in
$X$. When the problem is broken into these two components, then some
progress can be made on each component of the problem under some
geometric restrictions. We now survey some of the results on these
questions. \smallskip
One of the first people to make important progress on hyperbolicity
questions was the French mathematician Andr\'e Bloch, even though at
the time hyperbolicity was not defined
(see \cite{bloch:hyperbolicity}). Bloch begins by determining the
Zariski closure of an entire map into a complex torus.
\begin{theorem}\label{abelian}
The Zariski closure of a holomorphic map $\mathbb{C} \rightarrow T$ to a
complex torus $T$ is the translate of a subtorus of $T$.
\end{theorem}
This theorem leads to a fairly complete understanding of the
hyperbolic subvarieties of complex tori.
\begin{corollary}
A subvariety $X$ of an abelian variety $A$ which does not contain
any translates of subtori of $A$ is hyperbolic.
\end{corollary}
Originally Bloch used these ideas to prove the following theorem often
referred to as Bloch's Theorem (\cite{bloch:hyperbolicity}).
\begin{theorem}[Bloch's Theorem]
Any holomorphic map of $\mathbb{C}$ into a smooth, compact K\"ahler variety $X$
whose irregularity ($h^0(X, \Omega_X^1)$) is bigger than its
dimension is analytically degenerate, i.e. its image lies in a
proper analytic subvariety.
\end{theorem}
{\bf Proof:} Recall that given a smooth, compact K\"ahler variety $X$, we
can associate to it a complex torus $\mbox{Alb}(X)$ of dimension
$h^0(X, \Omega_X^1)$ and a map $a: X \rightarrow \mbox{Alb}(X)$ with
the following universal property: for any complex torus $T$ and any
morphism $f: X \rightarrow T$, there exists a unique morphism $g:
\mbox{Alb}(X) \rightarrow T$ such that $f= g \circ a$. The complex
torus $\mbox{Alb}(X)$ is referred to as the {\it Albanese} variety of
$X$ and the
map $a$ is called the {\it Albanese map}. $\mbox{Alb}(X)$ is
unique up to isomorphism. \smallskip
Bloch's theorem follows by considering the Albanese map. Let $f: \mathbb{C}
\rightarrow X$ be a holomorphic map. Consider $a \circ f: \mathbb{C}
\rightarrow \mbox{Alb}(X)$. Since the irregularity of $X$ is larger
than the dimension of $X$, the image of $X$ under the Albanese map
is a proper subvariety of $\mbox{Alb}(X)$. By the universal property
of the Albanese variety and its uniqueness, the Albanese image of $X$
cannot be the translate of a subtorus. On the other hand, by
Theorem \ref{abelian} the Zariski closure of the image of $a \circ f$
is the translate of a subtorus. Hence the image of $\mathbb{C}$ under
the map $a \circ f$ has to be analytically degenerate in the Albanese
image of $X$. It follows that the image of the
original map $f: \mathbb{C} \rightarrow X$ has to be analytically degenerate.
$\Box$ \smallskip
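The two facts used in the proof can be recorded more explicitly; the following is a sketch, using the standard fact that the Albanese image $a(X)$ generates $\mbox{Alb}(X)$ as a group:

```latex
% (i) Dimension count: writing q = h^0(X, \Omega_X^1) = \dim \mbox{Alb}(X),
$$
\dim a(X) \;\leq\; \dim X \;<\; q \;=\; \dim \mbox{Alb}(X),
$$
% so a(X) is a proper subvariety of Alb(X).
% (ii) If a(X) were the translate of a subtorus T' \subsetneq Alb(X), then
% after translating we could factor a through T' \hookrightarrow Alb(X);
% the universal property applied to X \to T' would produce a torus strictly
% smaller than Alb(X) with the same universal property, contradicting the
% uniqueness of the Albanese variety.
```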
Using these ideas one also obtains information about the hyperbolicity
of complements of ample divisors in abelian varieties.
\begin{theorem}\label{abeliancomplement}
Let $A$ be an abelian variety. Let $D$ be an ample divisor that
does not contain any translates of abelian subvarieties. Then
$A-D$ is hyperbolic.
\end{theorem}
Recall that a holomorphic map from $\mathbb{C}$ into an algebraic variety is
called algebraically degenerate if the Zariski closure of the image
lies in a proper algebraic subvariety. More recently McQuillan
(\cite{mcquillan:foliation}) has proved the algebraic degeneracy of
entire maps into surfaces of general type with $c_1^2 > c_2$. More
precisely,
\begin{theorem}[McQuillan]
If $X$ is a surface of general type satisfying $c_1^2 > c_2$, then
all entire curves on $X$ are algebraically degenerate. In
particular, if $X$ does not contain any rational or elliptic curves,
then $X$ is hyperbolic.
\end{theorem}
McQuillan obtains this result by first proving a result about the
algebraic degeneracy of leaves of certain foliations on surfaces of
general type, then showing that under the assumptions on $X$ there
always is a foliation which contains the image of any entire map in
one of its leaves. \smallskip
Most ways of proving the algebraic degeneracy of entire maps into
complex projective manifolds depend on producing enough differential
relations on the manifold that every entire map has to satisfy. One
then shows that these relations have a small base locus. This forces
the holomorphic map to lie in this base locus. One often formalizes
these ideas in terms of jet bundles (see \cite{demailly:hyphyp},
\cite{demailly:hyperbolic} or any of Siu's papers cited above).
\smallskip
In the other direction, there has been extensive work on showing that
certain varieties do not contain any rational or elliptic curves. Here
we mention the work of G. Xu bounding below the geometric genus of any curve
on a very general surface of degree at least 5 in $\P^3$ (\cite{xu:bound},
see also Clemens' paper \cite{clemens:bound}).
\begin{theorem}[Xu]
On a very general surface of degree $d$ in $\P^3$, the geometric genus of
any curve is greater than or equal to $d(d-3)/2 - 2$. Tritangent plane
sections achieve this bound. For $d \geq 6$ the tritangent plane
sections are the only curves that achieve the bound.
\end{theorem}
A very simple consequence of the theorem is that a very general
hypersurface of degree at least 5 in $\P^3$ contains no rational or
elliptic curves. Thus the problem of showing the hyperbolicity of a
very general hypersurface in $\P^3$ of degree at least 5 reduces to
showing that any entire map into such a surface is algebraically
degenerate. \smallskip
Motivated by our discussion one can ask the following question:
\smallskip
\noindent {\bf Question:} Can the closure of the image of a
holomorphic map $\mathbb{C} \rightarrow \P^n$ be a variety of general type?
\smallskip
\noindent A negative answer to this question would imply one direction
of Conjecture \ref{generaltype}. If all the subvarieties of a compact
variety are of general type, then the variety would have to be
hyperbolic. At present this question seems very hard to answer.
\smallskip
We close this section with a discussion of how hyperbolicity varies in
families. Let $X$ be a compact $C^{\infty}$ manifold with Hermitian
metric $| \ |$. Brody proved that if one considers the various complex
structures that one can put on $X$, the set of those that are
hyperbolic is open in the analytic topology. More precisely, one can
consider the function
$$D(s) = \sup_{f \in \mbox{Hol} (\Delta, X_s)} |f'(0)|$$ on the moduli
space of complex structures parameterized by $S$. Brody proves
\begin{theorem}
$D(s)$ is a continuous function. Since the hyperbolic complex
structures correspond to those for which $D(s)$ is finite, the
hyperbolic complex structures form an open set in the analytic
topology.
\end{theorem}
Note that if one considers an algebraic family of quasiprojective
smooth varieties, then Brody's theorem may fail. In fact in
such a family (at least if we do not fix the $C^{\infty}$ type) the
set of fibers
that are Brody hyperbolic can be Zariski locally closed. Varying two of the
lines that meet a third at the same point in Figure \ref{lines}
provides such an example. \smallskip
Brody's theorem naturally raises the following question. \smallskip
\noindent {\bf Question:} Is hyperbolicity Zariski open in algebraic
families of projective varieties? \smallskip
\noindent This question, to the best of my knowledge, is still
open. A positive answer would have interesting geometric implications.
For example, it is not known whether the set of high degree surfaces
in $\P^3$ that contain rational or elliptic curves forms an algebraic
variety or a countable union of subvarieties of the space of surfaces.
A positive answer to the Zariski openness of hyperbolicity would
settle this and similar questions.
\section{The arithmetic of Kobayashi hyperbolicity}
In this section we summarize some conjectures, mainly due to Lang,
about rational points on hyperbolic varieties. We also give some
surprising consequences of the conjectures about the distribution of
rational points on curves. The main references for this section are
\cite{lang:hyperbolic}, \cite{caporaso:lang},
\cite{capjoemazur:points} and \cite{capjoemazur:lang}.
The arithmetic of curves of genus two or more differs drastically
from that of curves of genus zero and one. If $C$ is a curve
of genus zero or one defined over a number field $K$, then after
passing to a finite field extension $L$ there are infinitely many
$L$-rational points on $C$. In fact, $L$ can be chosen so that these
points are dense in the analytic topology. The Mordell-Faltings
Theorem stands in sharp contrast to this result.
\begin{theorem}[Mordell-Faltings]
Let $C$ be a smooth curve of genus $g \geq 2$ defined over a number
field $K$. Then $C$ has only finitely many rational points over any
finite field extension $L$ of $K$.
\end{theorem}
\begin{definition} We say a variety $V$ defined over a number field
$K$ is of {\it Mordell type} if $V$ contains only finitely many
rational points over any finite field extension of $K$.
\end{definition}
It is natural to ask which, if any, of the geometric properties we
described so far imply that a variety is of Mordell type. Clearly a
variety of Mordell type does not contain any rational or elliptic
curves. Similarly any map from an abelian variety to a variety of
Mordell type has to be constant. In view of these observations it is
not unreasonable to formulate the following conjecture due to Lang.
\begin{conjecture}\label{mordellic}
A complex projective variety $V$ defined over a number field is of
Mordell type if and only if it is hyperbolic.
\end{conjecture}
In fact Lang has conjectured much more precise statements.
\begin{conjecture}[weak form]\label{weak}
If $X$ is a variety of general type defined over a number field $K$,
then the set of $K$-rational points of $X$ is not Zariski dense.
\end{conjecture}
The conjecture that a hyperbolic manifold is of Mordell type follows
from Conjecture \ref{weak} and the geometric Conjecture
\ref{generaltype} by the following argument. Suppose a hyperbolic manifold
$X$ has infinitely many rational points over some finite extension. The
Zariski closure of these points then contains a positive-dimensional
subvariety with a Zariski dense set of rational points, which by
Conjecture \ref{weak} cannot be of general type. This contradicts
Conjecture \ref{generaltype}, which asserts that every subvariety of $X$ is
of general type. Lang has conjectured an even stronger statement.
\begin{conjecture}[strong form]\label{strong}
If $X$ is of general type, then there exists a proper algebraic
subset $Z$ of $X$, such that over any finite extension $L$ of $K$,
the number of $L$-rational points of $X-Z$ is finite.
\end{conjecture}
The Lang conjectures are currently open except for the case of curves
and more generally for subvarieties of abelian varieties. If true,
they would provide a fundamental understanding of the arithmetic of
varieties of general type. \smallskip
The Lang Conjectures \ref{weak} and \ref{strong} have some surprising
consequences for rational points on curves of genus at least two.
These consequences were investigated by Caporaso, Harris and Mazur in
the papers cited above.
\begin{theorem}
The weak form of Lang's conjecture implies that for every number
field $K$ and genus $g \geq 2$, there exists a constant $B(K,g)$
such that any curve of genus $g$ defined over $K$ has at most
$B(K,g)$ $K$-rational points.
\end{theorem}
The strong form of Lang's conjecture implies an even more surprising
bound.
\begin{theorem}
The strong Lang conjecture implies that for any $g \geq 2$ there
exists an integer $N(g)$ such that there are only finitely many
curves defined over a number field $K$ that have more than $N(g)$
$K$-rational points.
\end{theorem}
\noindent {\bf Ideas behind the proof:} These theorems depend on the
following geometric theorem proved by Caporaso, Harris and Mazur.
\begin{theorem}[Correlation] \label{correlation}
Let $f:X \rightarrow B$ be a proper morphism of irreducible and
reduced schemes whose general fiber is a smooth curve of genus at
least two. Then for $n$ sufficiently large, the $n$-th fiber product
$X_B^n$ admits a dominant rational map $h: X_B^n \dashrightarrow W$
to a variety of general type. Moreover, if $f:X \rightarrow B$ is
defined over a number field $K$, then $h$ and $W$ can also be defined
over $K$.
\end{theorem}
D. Abramovich proved a generalization of the Correlation Theorem for
morphisms with higher dimensional fibers of general type under
suitable hypotheses (see \cite{abramovich:correlation}). However, in
order to deduce the bounds on the rational points on curves, we only
need the case of curves. \smallskip
Caporaso, Harris and Mazur deduce the bounds on rational points on
curves by applying the Lang conjectures to a global family. More
precisely, they start with a family of curves $f : X \rightarrow B$
such that for every curve $C$ defined over $K$, there exists a
$K$-rational point on $B$ such that the fiber over it is isomorphic to
$C$ over $K$. \smallskip
Using Theorem \ref{correlation}, $X_B^n$, the $n$-th fiber product of
$X$ over $B$, admits a dominant rational map to a variety $W$ of general type.
By Lang's conjecture there exists a smallest closed, proper subvariety
$V$ of $X_B^n$ that contains all the $K$-rational points. \smallskip
By studying the successive images of the complement of $V$ under the
projections to various factors of $X_B^n$, they produce a non-empty
open subset $U$ of $B$ and an integer $N$ such that for every rational
point $b \in U$ the fiber over $b$ has at most $N$ points. Then by
Noetherian induction on the complement of $U$, they conclude the
uniform bound. $\Box$
\bibliographystyle{math}
\title{The homotopy invariance of the string topology loop product and string bracket}
\begin{abstract}
Let $M$ be a closed, oriented $n$-manifold, and $LM$ its free loop space. Chas and Sullivan defined a commutative algebra structure in the homology of $LM$, and a Lie algebra structure in its equivariant homology. These structures are known as the string topology loop product and string bracket, respectively. In this paper we prove that these structures are homotopy invariants in the following sense. Let $f : M_1 \to M_2$ be a homotopy equivalence of closed, oriented $n$-manifolds. Then the induced equivalence $Lf : LM_1 \to LM_2$ induces a ring isomorphism in homology, and an isomorphism of Lie algebras in equivariant homology. The analogous statement also holds true for any generalized homology theory $h_*$ that supports an orientation of the $M_i$'s.
\end{abstract}
\section*{Introduction}
The term ``string topology" refers to multiplicative structures on the (generalized) homology of spaces of paths and loops in a manifold. Let $M^n$ be a closed, oriented, smooth $n$-manifold. The basic ``loop homology algebra" is defined by a product
$$
\mu : H_*(LM) \otimes H_*(LM) \longrightarrow H_*(LM)
$$
of degree $-n$, and the ``string Lie algebra" structure is defined by a bracket
$$
[\,\, , \,\, ] : H^{S^1}_*(LM) \otimes H^{S^1}_*(LM) \longrightarrow H^{S^1}_*(LM)
$$
of degree $2-n$. These were defined in \cite{chassullivan}. Here $H^{S^1}_*(LM)$
refers to the equivariant homology, $ H^{S^1}_*(LM) = H_*(ES^1 \times_{S^1} LM)$.
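To fix the degree bookkeeping, it may help to spell out these conventions; the following sketch uses a regrading common in the string topology literature (the regraded notation $\mathbb{H}_*$ is a convenience we introduce here, not notation from the theorems below):

```latex
% For classes \alpha \in H_p(LM) and \beta \in H_q(LM), the degrees above read:
$$
\mu(\alpha \otimes \beta) \in H_{p+q-n}(LM), \qquad
[\,\alpha, \beta\,] \in H^{S^1}_{p+q+2-n}(LM).
$$
% With the regraded groups \mathbb{H}_*(LM) := H_{*+n}(LM), the loop
% product \mu becomes a degree zero, graded commutative, associative
% product, so (\mathbb{H}_*(LM), \mu) is a graded ring.
```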
More basic structures on the chain level were also studied in \cite{chassullivan}.
Furthermore, these structures were shown to exist for any multiplicative homology
theory $h_*$ that supports an orientation of $M$ (see \cite{cohenjones}).\footnote{The string bracket for generalized homology theories was not explicitly discussed in \cite{cohenjones}, although in Theorem 2 of that paper there is a homotopy theoretic action of Voronov's cactus operad given, which, by a result of Getzler \cite{getzler}, yields a Batalin-Vilkovisky structure on the generalized homology, $h_*(LM)$, when $M$ is $h_*$-oriented. According to \cite{chassullivan}, this is all that is needed to construct the string bracket. We will review this construction below.}
Alternative descriptions
of the basic structure were given in \cite{cohenjones} and \cite{chataur},
but in the end they all relied on various perspectives of intersection theory of
chains and homology classes.
The existence of various descriptions of these operations
leads to the following:
\begin{ques} {\sl To what extent are the string topology operations
sensitive to the smooth structure of the manifold,
or even the homeomorphism
structure?}
\end{ques}
The main goal of this paper is to settle this question in the case of two of the basic operations: the string topology loop product and string bracket. We will in fact prove more: we will show that
the loop homology algebra and string Lie algebra structures are oriented
{\it homotopy invariants.}
We remark that it is still not known whether the full range of string topology operations \cite{chassullivan}, \cite{chassullivan2}, \cite{sullivan}, \cite{cohengodin} are homotopy invariants. Indeed the third author has conjectured that they are not (see the postscript to \cite{ranickisullivan}). More about this point will be made in the remark after theorem 2 below.
\medskip To state the main result, let $h_*$ be a multiplicative
homology theory that supports an orientation of $M$. Being a multiplicative theory
means that the corresponding cohomology theory, $h^*$,
admits a cup product, or more precisely,
the representing spectrum of the theory is required to be a ring spectrum.
An {\it $h_*$-orientation} of a closed $n$-manifold $M$
can be viewed as a choice of fundamental class $[M] \in h_n(M)$ that
induces a Poincar\'e duality isomorphism.
\medskip
\begin{theorem}\label{main} Let $M_1$ and $M_2$ be closed, $h_*$-oriented $n$-manifolds. Let $f : M_1 \to M_2$ be an $h_*$-orientation preserving homotopy equivalence. Then the induced homotopy equivalence of loop spaces,
$Lf : LM_1 \to LM_2$ induces a ring isomorphism of loop homology algebras,
$$
\begin{CD}
(Lf)_*: h_*(LM_1) @>\cong >> h_*(LM_2).
\end{CD}
$$
Indeed it is an isomorphism of Batalin-Vilkovisky ($BV$) algebras. Moreover the induced map on equivariant homology,
$$
\begin{CD}
(Lf)_*: h_*^{S^1}(LM_1) @>\cong >> h_*^{S^1}(LM_2)
\end{CD}
$$
is an isomorphism of graded Lie algebras.
\end{theorem}
\medskip
Evidence for the above theorem came from the
results of \cite{cohenjones} and \cite{cohen} which said
that for simply connected manifolds $M$,
there is an isomorphism of graded
algebras,
$$
H_*(LM) \cong H^*(C^*(M), C^*(M)) \, ,
$$
where the right hand side is the Hochschild cohomology of $C^*(M), $
the differential graded algebra of singular cochains on $M$, with
multiplication given by cup product.
The Hochschild cohomology algebra is clearly a homotopy invariant.
However, the above isomorphism is defined in terms of the
Pontrjagin-Thom construction arising from the diagonal
embedding $M^S \subset M^T$ associated with each surjection of
finite sets $T \to S$. Consequently, since the Pontrjagin-Thom construction uses the smooth structure, this isomorphism {\it a priori}
seems to be sensitive to the smooth structure.
Without an additional argument, one can only conclude from this
isomorphism that the loop homology algebra of two homotopy equivalent
simply connected closed manifolds
are {\it abstractly} isomorphic. In summary, to prove
homotopy invariance in the sense of Theorem \ref{main},
one needs a different argument.
\medskip
The argument we present here does not need the
simple connectivity hypothesis. This should prove of particular interest in
the case of surfaces and $3$-manifolds. Our argument
uses the description of the loop product $\mu$ in terms of a
Pontrjagin-Thom collapse map of an embedding
$$
LM \times_M LM \hookrightarrow LM \times LM
$$
given in \cite{cohenjones}.
Here $LM \times_M LM$ is the subspace of $LM \times LM$
consisting of those pairs of loops $(\alpha, \beta)$, with $\alpha (0) = \beta (0)$. In this description we are thinking of the loop space as the space of piecewise smooth maps $[0,1] \to M$ whose values at $0$ and $1$ agree. This is a smooth, infinite dimensional manifold. The differential topology of such manifolds is discussed
in \cite{cohenstacey} and \cite{chataur}.
This description quickly reduces the proof of the theorem to the question of
whether the homotopy type of the complement
of this embedding, $(LM \times LM) - (LM \times_M LM)$ is a stable homotopy invariant
when considered as ``a space over'' $LM \times LM$.
By using certain pullback properties, the latter question is then further reduced
to the question of whether the complement of the diagonal embedding,
$\Delta : M \to M \times M$, or somewhat weaker,
the complement of the embedding
\begin{align}
\Delta_k : M &\to M \times M \times D^k \notag \\
x &\mapsto (x, x, 0) \notag
\end{align}
is a homotopy invariant when considered as a space over $M \times M$.
For this we develop the notion of relative smooth and Poincar\'e embeddings.
This is related to the classical theory of Poincar\'e embeddings
initiated by Levitt \cite{Levitt_poincare} and Wall \cite{Wall},
and further developed by the second author in \cite{Klein_haef} and
\cite{Klein_haef2}.
However, for our purposes, the results we need can be proved directly by elementary arguments. The results in Section 2 on relative embeddings
are rather fundamental, but don't appear in the literature. These results may be of independent interest, and furthermore, by proving them here, we make the paper self-contained.
\medskip
Early on in our investigation of this topic,
our methods led us to advertise the following question,
which is of interest independent of its applications to string topology.
\medskip
Let $F(M,q)$ be the configuration space of $q$-distinct,
ordered points in a closed manifold $M$.
\medskip
\begin{ques} {\sl Assume that $M_1$ and $M_2$ are homotopy equivalent, simply connected closed $n$-manifolds. Are $F(M_1, q)$ and $F(M_2, q)$
homotopy equivalent?}
\end{ques}
\medskip
One knows that these configuration spaces have isomorphic cohomologies
(\cite{milgbodig}), stable homotopy types (\cite{Aouina-Klein},
\cite{fcohen?})
and have homotopy equivalent loop spaces (\cite{cohengitler},
\cite{Levitt_arcs}).
But the homotopy invariance of the configuration spaces themselves
is not yet fully understood. For example, when $q =2$ and the manifolds
are $2$-connected, then one does have homotopy invariance
(\cite{Levitt_arcs}, \cite{Aouina-Klein}). On the other hand,
the simple connectivity assumption in the above question
is a necessity: a recent result of Longoni and Salvatore \cite{LS} shows that
for the homotopy equivalent lens spaces $L(7,1)$ and $L(7,2)$,
the configuration spaces $F(L(7,1),2)$ and
$F(L(7,2),2)$ have distinct homotopy types.
\medskip
This paper is organized as follows. In Section 1 we will reduce the proof of the main theorem to a question
about the homotopy invariance of the complement of the diagonal embedding,
$\Delta_k : M \to M \times M \times D^k$. In Section 2 we
develop the theory of relative smooth and Poincar\'e embeddings,
and then apply it to prove the homotopy invariance of these
configuration spaces, and complete the proof of
Theorem \ref{main}.
\medskip
\begin{flushleft} \bf Remark. \rm After the results of this paper were announced, two independent proofs of the homotopy invariance of the loop homology product were found by Crabb \cite{crabb} and by Gruher-Salvatore \cite{gruhersalvatore}.
\end{flushleft}
\begin{convent}
A finitely dominated pair of spaces $(X,\partial X)$ is
a {\it Poincare pair} of dimension $d$ if there
exists a pair $({\mathcal L},[X])$ consisting
of a rank one abelian local coefficient
system $\mathcal L$ on $X$ and a ``fundamental class''
$[X] \in H_d(X,\partial X;{\mathcal L})$
such that the cap product homomorphisms
$$
\cap [X]\: H^*(X;{\mathcal M}) \to
H_{d-*}(X,\partial X;{\mathcal L}\otimes {\mathcal M})
$$
and
$$
\cap [\partial X]\: H^*(\partial X;{\mathcal M}) \to H_{d-1-*}(\partial X;{\mathcal L}\otimes {\mathcal M})
$$
are isomorphisms for all local coefficient bundles ${\mathcal M}$ on $X$ (respectively
on $\partial X$). Here $[\partial X]\in H_{d-1}(\partial X;{\mathcal L})$ denotes
the image of $[X]$ under the evident boundary homomorphism.
If such a pair $({\mathcal L},[X])$ exists, then it is unique up to unique
isomorphism.
\end{convent}
\medskip
\section{A question about configuration spaces}
\medskip
In this section we state one of our main results about the homotopy invariance of certain configuration spaces, and then use it to prove Theorem \ref{main}. The theorem about configuration spaces will be proved in section 2.
\medskip
Using an identification of the tangent bundle $\tau_{M } $ with the normal bundle of the diagonal, $\Delta \: M \to M \times M $, we have an embedding of the disk bundle,
$$
D(\tau_{M }) \subset M \times M \, ,
$$
which is identified with a compact tubular neighborhood of
the diagonal. (To define the unit disk bundle, we use a fixed Euclidean structure on $\tau_M$.)
The closure of its complement will be denoted $F(M, 2)$. Notice that
the inclusion $F(M, 2) \subset M \times M - \Delta$ is a weak
equivalence. We therefore have a decomposition,
$$
M \times M = D(\tau_{M }) \cup_{S(\tau_{M })} F(M, 2),
$$
where $S(\tau_M) = \partial D(\tau_M)$ is the unit sphere bundle.
We now vary the configuration space in the following way.
Let $D^k$ be a closed unit disk, and consider the generalized diagonal embedding,
\begin{align}
\Delta_k : M &\to M \times M \times D^k \notag \\
x &\mapsto (x, x, 0). \notag
\end{align}
We may now identify the stabilized tangent bundle, $ \tau_M \oplus \epsilon^k $ with the normal bundle of this embedding, where $\epsilon^k$ is the trivial $k$-dimensional bundle.
This yields an embedding, $D(\tau_M \oplus \epsilon^k) \subset M \times M \times D^k$, which is identified with a closed tubular neighborhood of $\Delta_k$. The closure of its complement is denoted by $F_{D^k}(M, 2)$. The reader will notice that this is a model for the $k$-fold fiberwise suspension of the map $F(M,2) \to M \times M$. We now have a similar decomposition,
$$
M \times M \times D^k = D(\tau_M \oplus \epsilon^k) \cup_{S(\tau_M \oplus \epsilon^k)} F_{D^k}(M, 2).
$$
Notice furthermore, that the boundary, $\partial ( M \times M \times D^k) = M \times M \times S^{k-1}$ lies in the subspace, $F_{D^k}(M, 2)$. In other words we have a commutative
diagram,
\begin{equation}\label{m2k}
\begin{CD}
S(\tau_M \oplus \epsilon^k) @>>> F_{D^k}(M, 2) @<<< M \times M \times S^{k-1} \\
@VVV @VVV \\
D(\tau_M \oplus \epsilon^k) @>>> M \times M \times D^k
\end{CD}
\end{equation}
where the commutative square is a pushout square. We refer to this diagram as $M(k)_\bullet$. We think of this more functorially as follows.
Consider the partially ordered set $\mathcal{F}$, with
five objects, referred to as
$\emptyset$, $0$, $1$, $01$, and $b$, and the morphisms
are generated by the following commutative diagram
\begin{equation}\label{five}
\begin{CD}
\emptyset @>>> 1 @<<< b \\
@VVV @VVV \\
0 @>>> 01\, .
\end{CD}
\end{equation}
Notice that $01$ is a terminal object of this category.
\medskip
\begin{definition}\label{fivespace}
We define an $\mathcal{F}$-space to be a functor $X : \mathcal{F} \to \text{\rm Top}$,
where \text{Top} is the category of
topological spaces. The value of the functor at an object $S$ is
denoted $X_S$. It will sometimes be convenient to specify $X$
by maps of pairs
$$
(X_b,\emptyset) \to (X_1,X_\emptyset) \to (X_{01},X_0) \, ,
$$
where we are abusing notation slightly since the maps
$X_\emptyset \to X_1$ and $X_0 \to X_{01}$ need not be
inclusions.
A map (morphism) $\phi : X \to Y$ of $\mathcal{F}$-spaces is a
natural transformation of functors.
We say that $\phi$ is a weak equivalence, if it
is an object-wise weak homotopy equivalence, i.e., it gives a
weak homotopy equivalence $\phi_i :X_i \xrightarrow{\simeq}Y_i$
for each object $i \in \mathcal{F}$.
In general, we say that two $\mathcal{F}$-spaces
are weakly equivalent if there is a
finite zig-zag of morphisms connecting them,
$$
X = X^1 \xrightarrow{} X^2 \xleftarrow{ } X^3 \xrightarrow{ } \cdots \leftarrow X^i \rightarrow \cdots \rightarrow X^n = Y \, ,
$$
where each morphism is a weak equivalence. \end{definition}
Notice that diagram (\ref{m2k}) defines an $\mathcal{F}$-space for each closed manifold $M$ and each integer $k$. We call this $\mathcal{F}$-space $M(k)_\bullet$.
In particular, $M(k)_{01} = M \times M \times D^k$.
\medskip
The following is our main result about configuration spaces. It will be proved in section 2.
\medskip
\begin{theorem}\label{config} Assume $M_1$ and $M_2$ are closed
manifolds and that
$f : M_1 \to M_2$ is a homotopy
equivalence. Then for $k$ sufficiently large, the $\mathcal{F}$-spaces $M_1(k)_\bullet$ and $M_2(k)_\bullet$ are weakly equivalent in the following specific way.
There is an $\mathcal{F}$-space $\mathcal{T}_\bullet$ that takes values in spaces of the homotopy type of $CW$-complexes, and morphisms of $\mathcal{F}$-spaces,
$$
M_1(k)_\bullet \xrightarrow{\phi_1} \mathcal{T}_\bullet \xleftarrow{\phi_2} M_2(k)_\bullet
$$
satisfying the following properties:
\begin{enumerate}
\item The morphisms $\phi_1$ and $\phi_2$ are weak equivalences.
\item The terminal space $\mathcal{T}_{01}$ is defined as
$$
\mathcal{T}_{01} = T_{f\times f} \times D^k
$$
where $T_{f \times f}$ is the mapping cylinder
$(M_2 \times M_2) \cup_{f\times f}(M_1 \times M_1) \times I$.
Furthermore, on the terminal spaces, the morphisms $\phi_1 : M_1 \times M_1 \times D^k \to T_{f\times f} \times D^k$ and $\phi_2 : M_2 \times M_2 \times D^k \to T_{f\times f} \times D^k$ are given by $\iota_1 \times 1$ and $\iota_2 \times 1$, where for $j = 1,2$, $\iota_j : M_j \times M_j \to T_{f \times f}$ are the obvious inclusions as the two ends of the mapping cylinder.
\item The induced weak equivalence,
$$
\begin{CD}
D(\tau_{M_1} \oplus \epsilon^k)
= M_1(k)_0 @>\phi_1 > \simeq > \mathcal{T}_0
@<\phi_2<\simeq < M_2(k)_0 = D(\tau_{M_2} \oplus \epsilon^k)
\end{CD}
$$
is homotopic to the composition
$$\begin{CD}D(\tau_{M_1} \oplus \epsilon^k) @>\text{\rm project}>\simeq > M_1 @>f>\simeq > M_2
@>\text{\rm zero}>\simeq > D(\tau_{M_2} \oplus \epsilon^k) . \end{CD}$$
\end{enumerate}
\end{theorem}
\medskip
Notice that this theorem is a strengthening of the following homotopy invariance statement (see \cite{Aouina-Klein}).
\begin{corollary} Let $f : M_1 \to M_2$ be a homotopy equivalence of closed manifolds. Then for sufficiently large $k$, the configuration spaces, $F_{D^k}(M_1, 2)$ and $F_{D^k}(M_2, 2)$ are homotopy equivalent.
\end{corollary}
\medskip
As mentioned, we will delay the proof of Theorem \ref{config} until the next section. Throughout the rest of this section we will assume its validity, and will use it to prove
Theorem \ref{main}, as stated in the introduction.
\begin{proof}
Consider the equivalences of $\mathcal{F}$-spaces given in Theorem \ref{config}. Notice that we have the following commutative diagram of maps of pairs.
\begin{equation}\label{downstairs}
\begin{CD}
(M_1(k)_{01}, \, M_1(k)_b) @>>>
(M_1(k)_{01}, \, M_1(k)_1) @<<< (M_1(k)_0 , \, M_1(k)_\emptyset) \\
@V\phi_1 VV @VV\phi_1 V @VV\phi_1 V \\
(\mathcal{T}_{01}, \, \mathcal{T}_b) @>>> (\mathcal{T}_{01}, \, \mathcal{T}_1)
@<<< (\mathcal{T}_0 , \, \mathcal{T}_\emptyset) \\
@A\phi_2AA @AA\phi_2 A @AA\phi_2 A \\
(M_2(k)_{01}, \, M_2(k)_b) @>>> (M_2(k)_{01}, \,
M_2(k)_1) @<<< (M_2(k)_0 , \, M_2(k)_\emptyset) \, .
\end{CD}
\end{equation}
The vertical maps are weak homotopy equivalences of pairs, by Theorem \ref{config}. The horizontal maps are induced by the values of the $\mathcal{F}$-spaces on the morphisms in $\mathcal{F}$.
For ease of notation, for a pair $(A,B)$ we write $A/B$ for the homotopy cofiber (mapping cone) $A \cup cB$. By plugging in the values of these $\mathcal{F}$-spaces, and taking homotopy cofibers, we get a commutative diagram
\begin{equation}\label{downcofiber}
\begin{CD}
M_1 \times M_1 \times D^k/M_1 \times M_1 \times S^{k-1} @>>> M_1 \times M_1 \times D^k/ F_{D^k}(M_1, 2) @<\simeq << D(\tau_{M_1} \oplus \epsilon^k)/S(\tau_{M_1} \oplus \epsilon^k) \\
@V\phi_1 VV @VV\phi_1 V @VV\phi_1 V \\
\mathcal{T}_{01}/ \mathcal{T}_b @>>> \mathcal{T}_{01}/ \mathcal{T}_1 @<<\simeq<
\mathcal{T}_0 / \mathcal{T}_\emptyset \\
@A\phi_2AA @AA\phi_2 A @AA\phi_2 A \\
M_2 \times M_2 \times D^k/M_2 \times M_2 \times S^{k-1} @>>> M_2\times M_2 \times D^k/ F_{D^k}(M_2, 2) @<\simeq << D(\tau_{M_2} \oplus \epsilon^k)/S(\tau_{M_2} \oplus \epsilon^k)
\end{CD}
\end{equation}
The right hand horizontal maps are equivalences, because the commutative squares
defined by the $\mathcal{F}$-spaces $M_1(k)_\bullet$ and $M_2(k)_\bullet$ are pushouts, and therefore the commutative square defined by the $\mathcal{F}$-space $\mathcal{T}_\bullet$ is a homotopy pushout.
By inverting these homotopy equivalences, as well as those induced by $\phi_2$, we get a homotopy commutative square,
\begin{equation}
\begin{CD}\label{intdiag}
\Sigma^k((M_1 \times M_1)_+) @>m_1 >> \Sigma^k(M_1 ^{\tau_{M_1}})\\
@V f_k VV @VV f_k V \\
\Sigma^k((M_2 \times M_2)_+) @>m_2 >> \Sigma^k(M_2^{\tau_{M_2}})
\end{CD}
\end{equation} Here the maps $f_k$ have the homotopy type of $\phi_2^{-1} \circ \phi_1$. The right hand spaces are the suspensions of the Thom spaces of the tangent bundles of $M_1$ and $M_2$ respectively.
Notice that property (2) in Theorem \ref{config}, regarding the morphisms $\phi_1$ and $\phi_2$ and the mapping cylinder $\mathcal{T}_{01}$, implies that the left hand map
$f_k : \Sigma^k((M_1 \times M_1)_+) \to \Sigma^k((M_2 \times M_2)_+) $ is given by the $k$-fold suspension of the equivalence $f\times f : M_1 \times M_1 \xrightarrow{\simeq} M_2 \times M_2$.
Consider the right hand vertical equivalence, $f_k: \Sigma^k(M_1^{\tau_{M_1}}) \to \Sigma^k(M_2^{\tau_{M_2}}).$ By diagram (\ref{downstairs}), $f_k$ is induced by a map of pairs,
$(D(\tau_{M_1} \oplus \epsilon^k), \, S(\tau_{M_1} \oplus \epsilon^k)) \to (D(\tau_{M_2} \oplus \epsilon^k), \, S(\tau_{M_2} \oplus \epsilon^k)) $ which on the ambient space,
$D(\tau_{M_1} \oplus \epsilon^k) \to D(\tau_{M_2} \oplus \epsilon^k)$ is homotopic to the map determined by
$f : M_1 \to M_2$ as in property 3 of Theorem \ref{config}. Therefore the induced map in cohomology $(f_k)^* : h^*(\Sigma^k(M_2^{\tau_{M_2}})) \cong h^*(\Sigma^k(M_1^{\tau_{M_1}}))$
is an isomorphism as modules over $h^*(M_2)$, where the module structure on $h^*(\Sigma^k(M_1)^{\tau_{M_1}})$ is via the isomorphism $f^* :h^*(M_2) \cong h^*(M_1)$.
Moreover the isomorphism $(f_k)^* : h^*(\Sigma^k(M_2 ^{\tau_{M_2}})) \cong h^*(\Sigma^k(M_1 ^{\tau_{M_1}}))$ preserves the Thom class in cohomology. To see this, notice that the
horizontal maps in diagram (\ref{intdiag}) yield the intersection product in homology, after applying the Thom isomorphism. This implies
that the fundamental classes $\Sigma^k([M_i] \times 1) \in h_{k+n}(\Sigma^k((M_i \times M_i)_+))$ map to the Thom classes
in $h_{k+n}(\Sigma^k(M_i^{\tau_{M_i}})).$ Since the left hand vertical map is homotopic to $\Sigma^k (f \times f)$, and since the homotopy equivalence $f$ preserves the $h_*$-orientations, it preserves the fundamental classes.
Therefore by the commutativity of this diagram, $(f_k)_*$ preserves the Thom class. These facts imply that after applying the Thom isomorphism, the isomorphism $(f_k)^*$ is given by $f^* : h^*(M_2) \xrightarrow{\cong} h^*(M_1)$.
This observation will be useful, as we will eventually lift the map of $\mathcal{F}$-spaces given in Theorem \ref{config} to the level of loop spaces, and we will consider the analogue of diagram (\ref{intdiag}).
To understand why this is relevant, recall
from \cite{chassullivan}, \cite{cohenjones} that the loop homology product
$\mu : h_*(LM) \times h_*(LM) \to h_*(LM)$ can be defined in the following way.
Consider the pullback square
$$
\xymatrix{
LM \times_M LM \ar[r]^\iota \ar[d]_e & LM \times LM \ar[d]^{e \times e} \\
M \ar[r]_\Delta & M \times M
}
$$
where $e : LM \to M$ is the fibration given by evaluation at the basepoint: $e(\gamma) = \gamma (0)$. Let $\eta (\Delta) $ be a tubular neighborhood
of the diagonal embedding of $M$, and let $\eta (\iota)$ be the inverse image
of this neighborhood in $LM \times LM$. The normal bundle of $\Delta$ is the tangent bundle, $\tau_M$. Recall that the evaluation map $e : LM \to M$ is a locally trivial fiber bundle \cite{klingenberg}. Therefore the tubular neighborhood $\eta (\iota)$ of $\iota : LM \times_M LM \hookrightarrow LM \times LM$ is homeomorphic to the total space of the pullback $e^*(\tau_M)$ of the tangent bundle. We therefore have a Pontrjagin-Thom collapse map,
\begin{equation}\label{tau}
\tau : LM \times LM \longrightarrow LM \times LM /( (LM \times LM) - \eta (\iota)) \cong (LM \times_M LM)^{\tau_M}
\end{equation}
where $(LM \times_M LM)^{\tau_M}$ is the Thom space of the pullback $e^*(\tau_M) \to LM \times_M LM$.
Now as pointed out in \cite{chassullivan}, there is a natural map
$$
j\: LM \times_M LM \to LM
$$
given by sending a pair of loops $(\alpha, \beta)$ with the same starting point to the concatenation
of the loops, $\alpha *\beta$.
The loop homology product is then defined to be the composition
\begin{equation}\label{mu}
\begin{CD}
\mu_* \: h_*(LM \times LM) @> \tau_* >>
h_{*}((LM \times_M LM)^{\tau_M}) @>\cap u> \cong> h_{*-n}(LM \times_M LM) @> j_* >> h_{*-n}(LM)
\end{CD}
\end{equation} where $\cap u$ is the Thom isomorphism given by capping with the Thom class.
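For later bookkeeping we record the degrees in this composition. This is a routine check; the regraded notation $\mathbb{H}_*$ below is the standard convention from \cite{chassullivan} and is introduced here only for orientation.

```latex
% Degrees in (\ref{mu}): \tau_* preserves degree, the Thom isomorphism
% \cap u lowers it by n = \dim M, and j_* preserves it, so \mu_* lowers
% total degree by n.  Regrading via \mathbb{H}_*(LM) := h_{*+n}(LM)
% makes the loop product degree-preserving:
$$
\mu_* \: h_p(LM \times LM) \longrightarrow h_{p-n}(LM)\, , \qquad
\mathbb{H}_p(LM) \otimes \mathbb{H}_q(LM) \longrightarrow \mathbb{H}_{p+q}(LM)\, .
$$
```

Indeed, classes in $\mathbb{H}_p \otimes \mathbb{H}_q$ sit in $h_{p+n} \otimes h_{q+n}$, and the product lands in $h_{(p+n)+(q+n)-n} = h_{(p+q)+n} = \mathbb{H}_{p+q}$.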
\medskip
Now consider the fiber bundles, $LM_i \times LM_i \times D^k \to M_i \times M_i \times D^k = M_i(k)_{01}$ for $i = 1,2$.
By restricting this bundle to the spaces
$M_i(k)_j$, $j \in \text{\rm Ob}(\mathcal{F})$, we obtain
$\mathcal{F}$-spaces which we call $\mathcal{L} M_i(k)_\bullet$, for $i = 1,2$. So we have morphisms of $\mathcal{F}$-spaces, $e: \mathcal{L} M_i(k)_\bullet \to M_i(k)_\bullet$, which on every object are fiber bundles, and every morphism induces a pullback square.
Similarly, let $\mathcal{L} \mathcal{T}_\bullet$ be the $\mathcal{F}$-space obtained by restricting the fibration
$L(T_{f\times f}) \times D^k \xrightarrow{e \times 1}T_{f\times f} \times D^k = \mathcal{T}_{01}$
to the spaces $\mathcal{T}_j$, for $j \in \text{\rm Ob}(\mathcal{F})$.
The morphisms $\phi_i$ of Theorem \ref{config} then lift to give weak equivalences of $\mathcal{F}$-spaces, $\mathcal{L} \phi_i : \mathcal{L} M_i(k)_\bullet \to \mathcal{L} \mathcal{T}_\bullet$ that make the following diagram of $\mathcal{F}$-spaces commute:
\begin{equation}\label{ell}
\begin{CD}
\mathcal{L} M_1(k)_\bullet @>\mathcal{L}\phi_1 >> \mathcal{L}\mathcal{T}_\bullet @<\mathcal{L}\phi_2 << \mathcal{L} M_2(k)_\bullet \\
@VeVV @VVe V @VVeV \\
M_1(k)_\bullet @> \phi_1 >> \mathcal{T}_\bullet @< \phi_2 << M_2(k)_\bullet
\end{CD}
\end{equation}
The commutative diagram of maps of pairs (\ref{downstairs}) lifts to give a corresponding diagram
with spaces $\mathcal{L} M_i(k)_\bullet$ replacing $M_i(k)_\bullet$, and $\mathcal{L} \mathcal{T}_\bullet$ replacing $\mathcal{T}_\bullet$. There is also a corresponding commutative diagram of quotients, that lifts
the diagram (\ref{downcofiber}). The result is a homotopy commutative square, which lifts
square (\ref{intdiag}).
\begin{equation}
\begin{CD}\label{liftint}
\Sigma^k((LM_1\times LM_1)_+) @>\tau_1 >> \Sigma^k((LM_1 \times_{M_1} LM_1)^{\tau_{M_1}}) \\
@V \tilde f_k VV @VV \tilde f_k V \\
\Sigma^k((LM_2\times LM_2)_+) @>\tau_2 >> \Sigma^k((LM_2 \times_{M_2} LM_2)^{\tau_{M_2}})
\end{CD}
\end{equation} Here the maps $\tilde f_k$ have the homotopy type of $L\phi_2^{-1} \circ L\phi_1$.
Now as argued above, the description of the maps $L\phi_i : \mathcal{L} M_i(k)_{01} \to \mathcal{L} \mathcal{T}_{01}$, that is,
$LM_i \times LM_i \times D^k \to LT_{f\times f} \times D^k$ as the loop functor applied to the inclusion as the ends of the cylinder, implies that the map
$\tilde f_k : \Sigma^k((LM_1\times LM_1)_+) \to \Sigma^k((LM_2\times LM_2)_+) $ is homotopic to the $k$-fold suspension of $Lf \times Lf : LM_1 \times LM_1 \xrightarrow{\simeq} LM_2 \times LM_2.$ Moreover, in cohomology, the map $\tilde f_k^* : h^*(\Sigma^k(LM_2 \times_{M_2} LM_2)^{\tau_{M_2}}) \to h^*(\Sigma^k(LM_1 \times_{M_1} LM_1)^{\tau_{M_1}})$ preserves Thom classes because the bundles are pulled back from
bundles over $M_1$ and $M_2$ respectively, and as seen above, $(f_k)^* : h^*(\Sigma^k(M_2 ^{\tau_{M_2}})) \cong h^*(\Sigma^k(M_1 ^{\tau_{M_1}}))$ preserves
Thom classes. Also, since this map is, up to homotopy, induced by a map of pairs
$$
L\phi_2^{-1} \circ L\phi_1: (D(e^*(\tau_{M_1} \oplus \epsilon^k)), \, S(e^*(\tau_{M_1} \oplus \epsilon^k))) \to (D(e^*(\tau_{M_2} \oplus \epsilon^k)), \, S(e^*(\tau_{M_2} \oplus \epsilon^k)))
$$
it induces an isomorphism of $h^*(D(e^*(\tau_{M_2} \oplus \epsilon^k)))$-modules, where this ring acts on $ h^*(\Sigma^k(LM_1 \times_{M_1} LM_1)^{\tau_{M_1}})$ via the homomorphism $ (L\phi_2^{-1} \circ L\phi_1)^* : h^*(D(e^*(\tau_{M_2} \oplus \epsilon^k))) \xrightarrow{\cong} h^*(D(e^*(\tau_{M_1} \oplus \epsilon^k)))$. But by the lifting of property 3 in Theorem \ref{config}, this map is homotopic to the composition,
$$
D(e^*(\tau_{M_1} \oplus \epsilon^k)) \xrightarrow{\rm project} LM_1 \times_{M_1}LM_1 \xrightarrow{Lf \times Lf} LM_2 \times_{M_2}LM_2 \xrightarrow{\rm zero} D(e^*(\tau_{M_2} \oplus \epsilon^k)).
$$
Hence when one applies the Thom isomorphism to both sides, the isomorphism
$$\tilde f_k^* : h^*(\Sigma^k(LM_2 \times_{M_2} LM_2)^{\tau_{M_2}}) \to h^*(\Sigma^k(LM_1 \times_{M_1} LM_1)^{\tau_{M_1}})$$
is given by $(Lf \times Lf)^* : h^*(LM_2 \times_{M_2}LM_2) \xrightarrow{\cong} h^*(LM_1 \times_{M_1}LM_1)$.
\medskip
By the definition of the loop product (\ref{mu}), to prove that $(Lf)_* : h_*(LM_1) \to h_*(LM_2)$ is a ring isomorphism, we need to show that the diagram
$$
\begin{CD}
h_*(LM_1\times LM_1) @>(\tau_1)_* >> h_{*}((LM_1 \times_{M_1} LM_1)^{\tau_{M_1}}) @>\cap u> \cong> h_{*-n}((LM_1 \times_{M_1} LM_1)) @> j_* >> h_{*-n}(LM_1) \\
@V(Lf \times Lf)_* VV @V(\tilde f_k)_* VV @VV(Lf \times Lf)_* V @VV(Lf)_* V \\
h_*(LM_2\times LM_2) @>(\tau_2)_*>> h_{*}((LM_2 \times_{M_2} LM_2)^{\tau_{M_2}}) @>\cap u> \cong> h_{*-n}((LM_2 \times_{M_2} LM_2)) @> j_* >> h_{*-n}(LM_2)
\end{CD}
$$
commutes. We have now verified that the left and middle squares commute. But the right hand square obviously commutes. Thus $(Lf)_* : h_*(LM_1) \to h_*(LM_2)$ is a ring isomorphism as claimed.
To prove that $Lf$ is a map of $BV$-algebras, recall that the $BV$-operator $\Delta$ is defined in terms of the $S^1$-action. Clearly $Lf$ preserves this action, and hence induces an isomorphism of $BV$-algebras. This will imply that
$Lf$ induces an isomorphism of the string Lie algebras for the following reason. Recall the definition of the Lie bracket from \cite{chassullivan}. Given $\alpha \in h_p^{S^1}(LM)$ and $\beta \in h_q^{S^1}(LM)$, then the bracket $[\alpha, \beta]$ is the image of $\alpha \times \beta$ under the composition,
\begin{align}
h_p^{S^1}(LM) \times h_q^{S^1}(LM) &\xrightarrow{\text{\rm tr}_{S^1} \times \text{\rm tr}_{S^1}} h_{p+1}(LM) \times h_{q+1}(LM)\notag \\
&\xrightarrow{\rm loop\, product}
h_{p+q+2-n}(LM) \xrightarrow{j} h_{p+q+2-n}^{S^1}(LM).
\end{align}
Here $\text{\rm tr}_{S^1} \: h_*^{S^1}(LM) \to h_{*+1}(LM)$ is the $S^1$ transfer map
(called ``M" in \cite{chassullivan}),
and $j \: h_*(LM) \to h^{S^1}_*(LM)$ is the usual
map that descends nonequivariant homology to
equivariant homology (called ``E" in \cite{chassullivan}).
We refer the reader to \cite{Adem-Cohen-Dwyer} for a concise definition of the $S^1$-transfer.
We now know that $Lf$ preserves the loop product, and since it is an $S^1$-equivariant map, it preserves the transfer map $\text{\rm tr}_{S^1}$ and the map $j$. Therefore it preserves the string bracket.
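As a sanity check on degrees (a routine count from the composition displayed above): each $S^1$-transfer raises degree by one, the loop product lowers it by $n = \dim M$, and $j$ preserves it, so

```latex
% Degree count for the string bracket:
$$
[\,\cdot\,,\,\cdot\,] \: h_p^{S^1}(LM) \otimes h_q^{S^1}(LM)
\longrightarrow h_{(p+1)+(q+1)-n}^{S^1}(LM) = h_{p+q+2-n}^{S^1}(LM)\, ,
$$
```

which agrees with the degrees in the displayed composition.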
\end{proof}
\medskip
\section{Relative embeddings and the proof of Theorem \ref{main}}
Theorem \ref{config} reduces the proof of the
homotopy invariance of the loop product and the string bracket
(Theorem \ref{main})
to the homotopy invariance of the
${\cal F}$-spaces associated with the
diagonal embeddings of $M_1$ and $M_2$.
The goal of the present section is to prove
Theorem \ref{config}.
\subsection{Relative smooth embeddings}
Let $N$ be a compact smooth manifold of dimension $n$
whose boundary $\partial N$ comes equipped with a smooth manifold decomposition
$$
\partial N = \partial_0 N \cup \partial_1 N
$$
in which $\partial_0 N$ and $\partial_1 N$ are glued together
along their common boundary
$$
\partial_{01} N \,\, := \,\, \partial_0 N \cap \partial_1 N\, .
$$
Assume that $K$ is a space obtained
from $\partial_0 N$ by attaching a finite number of cells.
Hence we have a relative cellular complex
$$
(K,\partial_0 N)\, .
$$
It then makes sense to speak of the {\it relative dimension}
$$
\dim (K,\partial_0 N) \le k
$$ as being the maximum dimension of the
attached cells.
Let $$f\: K \to N$$ be a map of spaces which extends the identity map of $\partial_0 N$.
\begin{definition} We call these data, $(K, \partial_0N, f \: K\to N)$, a {\it relative smooth embedding problem}.
\end{definition}
\begin{definition}
A {\it solution} to the relative smooth embedding problem consists
of
\begin{itemize}
\item a codimension zero compact submanifold
$$
W \subset N
$$
such that $\partial W \cap \partial N = \partial_0 N$ and this intersection
is transversal, and
\item a homotopy of $f$, fixed
on $\partial_0 N$, to a map of the form
$$
\begin{CD}
K @>\sim >> W @>\subset >> N
\end{CD}
$$
in which the first map is a homotopy equivalence.
\end{itemize}
\end{definition}
\begin{lemma} \label{smooth}
If $2k < n$, then there is a solution to the relative smooth
embedding problem.
\end{lemma}
We remark that Lemma \ref{smooth} is essentially a simplified version of a result
of Hodgson \cite{Hodgson}, who strengthens it by $r$ dimensions when
the map $f$ is $r$-connected.
\begin{proof}[Proof of Lemma \ref{smooth}]
First assume that $K = \partial_0 N \cup D^k$ is the effect
of attaching a single $k$-cell to $\partial_0 N$. Then the restriction of
$f$ to the disk gives a map
$$
(D^k,S^{k-1}) \to (N,\partial_0 N)
$$
and, by transversality, we
can assume that its restriction $S^{k-1} \to \partial_0 N$
is a smooth embedding.
Applying transversality again, the map on $D^k$
can be generically deformed relative to
$S^{k-1}$ to a smooth embedding. Call the resulting embedding $g$.
Let $W$ be defined by taking a regular neighborhood of
$\partial_0 N \cup g(D^k) \subset N$. Then $g$ and $W$ give
the desired solution
in this particular case.
The general case is by induction on the
set of cells attached to $\partial_0 N$.
The point is that if a solution $W \subset N$
has already been achieved on a subcomplex $L$ of
$K$ given by deleting one of the top cells, then removing the interior of
$W$ from $N$ gives a new manifold $N'$, such that $\partial N'$
has a boundary decomposition. The
attaching map $S^{k-1} \to L$ can be deformed (again using transversality)
to a map into $\partial_0 N'$. Then we have reduced
to a situation of solving the problem for a map of
the form $D^k \cup \partial_0 N' \to N'$, which we know can be solved by the previous paragraph.
\end{proof}
\medskip
We now thicken the complex $K$ by crossing $\partial_0N$ with a disk. Namely,
for an integer $j \ge 0$, define the space
$$
K_j := K \cup_{\partial_0 N} (\partial_0 N) \times D^j\, ,
$$
where we use the inclusion $\partial_0 N \times 0 \subset
(\partial_0 N) \times D^j$ to form the amalgamated union.
Then the inclusion $(K,\partial_0 N) \subset (K_j,(\partial_0 N) \times D^j)$ is
a deformation retract, and the map $f\:K \to N$ extends in the evident
way to a map
$$
f_j \: K_j \to N \times D^j
$$
that is fixed on $(\partial_0 N) \times D^j$.
\begin{theorem} \label{smooth+j}
Let $f\: K \to N$ be as above, but without the dimension
restrictions. Then for sufficiently large $j \ge 0$, the
embedding problem for the map $f_j\: K_j \to N \times D^j$ admits a solution.
\end{theorem}
\begin{proof} The relative dimension of $(K_j, \, (\partial_0 N) \times D^j)$
is $k$, but for sufficiently large $j$ we have $2k < n + j$. The result follows from the previous lemma.
\end{proof}
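Spelled out, the dimension count in the proof above is the following routine observation: thickening by the disk factor leaves the relative dimension unchanged while the ambient dimension grows linearly in $j$.

```latex
% The relative dimension is unchanged by the thickening, while the
% ambient dimension grows with j:
$$
\dim (K_j,\, (\partial_0 N) \times D^j) = k\, , \qquad
\dim (N \times D^j) = n + j\, ,
$$
```

so the hypothesis $2k < n + j$ of Lemma \ref{smooth} holds as soon as $j > 2k - n$.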
\subsection{Relative Poincar\'e embeddings}
Now suppose more generally that $(N,\partial N)$
is a (finite) Poincar\'e pair of dimension $n$ equipped with
a {\it boundary decomposition}
such that $\partial_0 N$ is a smooth manifold. By this, we mean
we have an expression of the form
$$
\partial N \,\, = \,\, \partial_0 N \cup_{\partial_{01} N} \partial_1 N
$$
in which $\partial_0 N$ is a manifold with boundary $\partial_{01}N$ and
also $(\partial_1 N,\partial_{01}N)$ is a Poincar\'e pair. Furthermore,
we assume that the fundamental classes for each of these pairs glue
to a fundamental class for $\partial N$. These fundamental classes lie in ordinary homology.
As above, let
$$
f\: K \to N
$$
be a map which is fixed on $\partial_0 N$. We will assume
that the relative dimension of $(K,\partial_0 N)$ is
at most $n-3$. Call these data a
{\it relative Poincar\'e embedding problem}.
\begin{definition} A {\it solution} of a relative Poincar\'e embedding problem as above
consists of
\begin{itemize}
\item a Poincar\'e pair $(W,\partial W)$, and a Poincar\'e decomposition
$$
\partial W = \partial_0 N \cup_{\partial_{01} N} \partial_1 W
$$
such that each of the maps $\partial_{01} N \to \partial_1 W$ and
$\partial_0 N \to W$ is $2$-connected;
\item a Poincar\'e pair $(C,\partial C)$ with Poincar\'e
decomposition
$$
\partial C = \partial_1 W \cup_{\partial_{01} N} \partial_1 N \, ;
$$
\item a weak equivalence $h\: K \to W$ which is fixed on $\partial_0 N$;
\item a weak equivalence
$$
e\: W \cup_{\partial_1 W} C \to N
$$
which is fixed on $\partial N$, such that
$e\circ h$ is homotopic to $f$ by a homotopy fixing $\partial_0 N$.
\end{itemize}
\end{definition}
The above is depicted in the following schematic
homotopy decomposition of $N$:
\bigskip
$$
\begin{texdraw}
\lellip rx:2 ry:1
\htext (-2.27 0) {\small $\partial_0 N$}
\htext (-1 0) {$W$}
\htext (.03 0) {\small $\partial_1 W$}
\htext (1 0) {$C$}
\htext (2.03 0) {\small $\partial_1 N$}
\htext (-.9 .97) {\tiny $\partial_{01}N$}
\htext (-.95 -1.06) {\tiny $\partial_{01}N$}
\move (-.8 .91)
\clvec (0 .5)(.5 0)(-.8 -.9)
\end{texdraw}
$$
\bigskip
The space $C$ is called the {\it complement},
which is a Poincar\'e space with boundary $\partial_1 W \cup \partial_1 N$.
The above spaces
assemble to give a strictly commutative square which is
homotopy cocartesian:
\begin{equation} \label{the_diagram}
\xymatrix{
(\partial_1 W,\partial_{01} N ) \ar[r] \ar[d] & (C,\partial_1 N) \ar[d] \\
(W,\partial_0 N) \ar[r] & (N,\partial N)
}
\end{equation}
(compare \cite{Klein_haef2}).
From here through the rest of the paper we refer to
such a commutative square as a ``homotopy pushout".
\medskip
As above, we can construct maps $f_j \: K_j \to N \times D^j$, which define a family of relative Poincar\'e embedding problems. Our goal in this section is to prove the analogue of Theorem \ref{smooth+j}, showing that for sufficiently large $j$ one can find solutions to these problems.
\medskip
We begin with the following result, comparing the smooth and Poincar\'e relative embedding problems.
\begin{lemma} \label{smooth=>poincare} Assume that $M$ is a compact
smooth manifold equipped with a boundary decomposition. Let
$$
\phi\: (N;\partial_0 N,\partial_1 N) \to (M,\partial_0 M,\partial_1 M)
$$
be a homotopy equivalence whose restriction
$\partial_0 N \to \partial_0 M$ is a diffeomorphism.
Then the relative Poincar\'e embedding problem for $f$ admits
a solution if
the relative smooth embedding problem for $\phi\circ f$ admits a solution.
\end{lemma}
\begin{proof} A solution of the smooth problem,
together with a choice of homotopy inverse for $\phi$ extending
the inverse diffeomorphism on $\partial_0 M$, gives a solution
to the relative Poincar\'e embedding problem.
\end{proof}
\medskip
Now suppose that
$D(\nu) \to \partial_0 N$ is the unit disk bundle of the
normal bundle of an embedding of $\partial_0 N$ into Euclidean space, $\mathbb{R}^{n + \ell}$.
The zero section then gives an inclusion
$\partial_0 N \subset D(\nu)$.
Set
$$
K_\nu := K \cup_{\partial_0 N} D(\nu).
$$
Clearly, $K_\nu$ is canonically homotopy equivalent to $K$.
Assuming $\ell$ is sufficiently large, there exists a
Spivak normal fibration \cite{Spivak}
$$
S(\xi) \to N
$$
whose fibers have the homotopy type of an $(\ell-1)$-dimensional sphere.
Then, by the uniqueness of the
Spivak fibration \cite{Spivak},
we have a fiber homotopy equivalence over $\partial N$
$$
\begin{CD}
S(\nu) @> \simeq >> S(\xi_{|\partial_0 N}).
\end{CD}
$$
Let $D(\xi)$ denote the fiberwise cone of the fibration $S(\xi) \to N$. Then we
have a canonical map
$$
f_\nu\: K_\nu \to D(\xi)
$$
which is fixed on $D(\nu)$. Note that
$$
\partial D(\xi) = D(\nu) \cup S(\xi)
$$
is a decomposition of Poincar\'e spaces such that
$D(\nu)$ has the structure of a smooth manifold.
Let us set $\partial_0 D(\xi) := D(\nu)$ and
$\partial_1 D(\xi) := S(\xi)$.
Then the classical construction of the Spivak fibration (using
regular neighborhood theory in Euclidean space)
shows that there is a homotopy equivalence
$$
\begin{CD}
(D(\xi);\partial_0 D(\xi),\partial_1 D(\xi))
@> \sim >> (M;\partial_0 M,\partial_1 M)
\end{CD}
$$
in which $M$ is a compact codimension zero submanifold of
some Euclidean space. Furthermore, the restriction
$\partial_0 D(\xi) \to \partial_0 M$ is a diffeomorphism.
Consequently, by Lemma \ref{smooth} and Lemma \ref{smooth=>poincare}, we obtain
\begin{proposition} \label{f-nu} If the rank of $\nu$ is sufficiently large, then
the relative Poincar\'e embedding problem for
$f_\nu$ has a solution.
\end{proposition}
Let $\eta$ denote a choice of inverse
for $\xi$ in the Grothendieck group of reduced spherical fibrations over $N$.
For simplicity, we may assume that the fiber of
$\eta$ is a sphere of dimension $\dim N - 1$. Then $\xi$ restricted to
$\partial_0 N$ is fiber homotopy equivalent to
$\tau_{\partial_0 N} \oplus \epsilon$, where $\tau_{\partial_0 N}$ is
the tangent sphere bundle of $\partial_0 N$ and $\epsilon$ is the
trivial bundle with fiber $S^0$. For simplicity, we will
assume that $\xi$ restricted to $\partial_0 N$ has been identified
with $\tau_{\partial_0 N} \oplus \epsilon$. Similarly, we
will choose an identification of $\xi_{\partial_1 N}$ with
$\tau_{\partial_1 N} \oplus \epsilon$, where $\tau_{\partial_1 N}$
is any spherical fibration over $\partial_1 N$ that represents
the Spivak tangent fibration.
Since $\xi \oplus \eta $ is trivializable,
for some integer $j$ we get a homotopy equivalence
$$
(D(\xi \oplus \eta); D(\nu \oplus \tau_{\partial_0 N});
D(\nu\oplus \tau_{\partial_1 N}) \cup S(\xi\oplus \eta)) \to
(N\times D^j; (\partial_0 N) \times D^j;(\partial_1 N) \times D^j \cup
N \times S^{j-1})
$$
which restricts to a diffeomorphism
$D(\nu\oplus \tau_{\partial_0 N}) \to (\partial_0 N) \times D^j$.
Now a choice of solution of the relative Poincar\'e embedding problem
for $f_\nu$, as given by Proposition \ref{f-nu}, guarantees that
the relative problem for $f_{\nu\oplus \tau}$ has a solution.
But clearly, the latter is identified with the map
$f_j\: K_j \to N \times D^j$. Consequently, we have proven the following.
\begin{theorem} \label{solution} If $j$ is sufficiently large, then the
relative Poincar\'e embedding problem for $ f_j\: K_j \to N \times D^j$ has a solution.
\end{theorem}
\subsection{Application to diagonal maps and a proof of Theorem \ref{config}}
We now give a proof of Theorem \ref{config}. By the results of section 1, this will complete the proof of Theorem \ref{main}.
\medskip
Let $f \: M_1 \to M_2$ be a homotopy equivalence of
closed smooth manifolds.
Using an identification of the tangent bundle $\tau_{M_1} $ with the normal bundle of the diagonal, $\Delta \: M_1 \to M_1 \times M_1$, we have an embedding
$$
D(\tau_{M_1}) \subset M_1\times M_1\, ,
$$
which is identified with a compact tubular neighborhood of
the diagonal.
The closure of its complement will be denoted $F(M_1, 2)$. Notice that
the inclusion $F(M_1, 2) \subset M_1^{\times 2} - \Delta$ is a weak
equivalence of spaces over $M_1^{\times 2}$ (i.e., it is a morphism
of spaces over $M_1^{\times 2}$ whose underlying map of spaces
is a weak homotopy equivalence). Notice also that we have a decomposition
$$
M_1^{\times 2} = D(\tau_{M_1}) \cup_{S(\tau_{M_1})} F(M_1, 2).
$$
Making the same construction with $M_2$, we also have a decomposition
$$
M_2^{\times 2} = D(\tau_{M_2}) \cup_{S(\tau_{M_2})} F(M_2, 2) \, .
$$
Notice that since $f \: M_1 \to M_2$ is a homotopy equivalence,
the composite
$$
\begin{CD}
D(\tau_{M_1}) @> \text{projection} >> M_1 @> f >> M_2 @>
\text{zero section} >\hookrightarrow> D(\tau_{M_2})
\end{CD}
$$
is also a homotopy equivalence.
Let $T$ be the {\it mapping cylinder} of this composite map. Then we
have a pair
$$
(T,D(\tau_{M_1}) \amalg D(\tau_{M_2}))\, .
$$
Furthermore, up to homotopy, we have a preferred identification of $T$ with the mapping cylinder of $f$.
The map $f^{\times 2} \: M_1^{\times 2} \to M_2^{\times 2}$ also
has a mapping cylinder $T^{(2)}$ which contains the manifold
$$
\partial T^{(2)} := M_1^{\times 2} \amalg M_2^{\times 2}.
$$
Then $(T^{(2)},\partial T^{(2)})$ is a Poincar\'e pair.
Furthermore,
$$
\partial T^{(2)} = (D(\tau_{M_1}) \amalg D(\tau_{M_2})) \cup (F(M_1, 2) \amalg F(M_2, 2))
$$
is a manifold decomposition. Let us set $\partial_0 T^{(2)} =
D(\tau_{M_1}) \amalg D(\tau_{M_2})$ and $\partial_1 T^{(2)} = (F(M_1, 2) \amalg F(M_2, 2))$.
Since the diagram
$$
\xymatrix{
M_1 \ar[r]^f \ar[d]_{\Delta} & M_2
\ar[d]^{\Delta} \\
M_1 \times M_1 \ar[r]_{f\times f} & M_2 \times M_2
}
$$
commutes, we get an induced map of mapping cylinders.
This map, together with our preferred identification of the
cylinder of $f$ with
$T$, allows the construction of a map
$$
g\:T \to T^{(2)}
$$
which extends the identity map of $\partial_0 T^{(2)}$.
In other words, $g$ defines a relative Poincar\'e embedding problem.
By Theorem \ref{solution}, there exists an integer $j \gg 0$ such that
the associated relative Poincar\'e embedding problem
$$
g_j\: T_j \to T^{(2)} \times D^j
$$
has a solution. Here,
$$
T_j \,\, := \,\, T \cup_{\partial_0 T^{(2)}} (\partial_0 T^{(2)}) \times D^j\, ,
$$
and
$$
\partial_0(T^{(2)} \times D^j) := \left(D(\tau_{M_1}) \times D^j \right) \amalg \left(D(\tau_{M_2}) \times D^j \right) \qquad
\partial_1(T^{(2)} \times D^j) := F_{D^j}(M_1, \, 2) \amalg F_{D^j}(M_2, \,2).
$$
where, for convenience, we are redefining $F_{D^j}(M, \, 2)$
as $M \times M \times S^{j-1} \cup F(M,2) \times D^j$ (cf.\ \S 1).
This makes $T^{(2)}\times D^j$ a Poincar\'e space with boundary decomposition
$$
\partial (T^{(2)} \times D^j)\,\, = \partial_0 (T^{(2)} \times D^j) \cup
\partial_1 (T^{(2)} \times D^j)\, .
$$
\medskip
By Theorem \ref{solution}, a solution to this Poincar\'e embedding problem exists; it yields Poincar\'e pairs $(W, \partial W)$ and $(C, \partial C)$ with the following properties.
\begin{itemize}\label{solved}
\item
$
\partial W = \partial_0(T^{(2)}\times D^j) \cup \partial_1W$, where $\partial_0 ({T^{(2)}\times D^j}) = \left(D(\tau_{M_1}) \times D^j \right) \amalg \left(D(\tau_{M_2}) \times D^j \right)$ and $\partial_1 W \hookrightarrow W$ is 2-connected,
\item $\partial C = \partial_1 W \cup \partial_1(T^{(2)}\times D^j), $ where $\partial_1(T^{(2)}\times D^j) = F_{D^j}(M_1, \, 2) \amalg F_{D^j}(M_2, \,2).$ Notice that $\partial_{01}(T^{(2)}\times D^j) = \partial(D(\tau_{M_1} \times D^j)) \, \amalg \, \partial(D(\tau_{M_2} \times D^j)).$
\item
There is a weak equivalence, $h \: T_j \xrightarrow{\simeq} W$, fixed on $\left(D(\tau_{M_1}) \times D^j \right) \amalg \left(D(\tau_{M_2}) \times D^j \right)$.
\item There is a weak equivalence
$$
e \: W \cup_{\partial_1W}C \to T^{(2)}\times D^j
$$
which is fixed on $\partial(T^{(2)}\times D^j)$, such that $e \circ h$ is homotopic to
$g_j \: T_j \to T^{(2)}\times D^j$ by a homotopy fixing $\left(D(\tau_{M_1}) \times D^j \right) \amalg \left(D(\tau_{M_2}) \times D^j \right)$.
\end{itemize}
The above homotopy decomposition of $T^{(2)}\times D^j$ is indicated in the
following schematic diagram:
\medskip
$$
\btexdraw
\lvec (2 0)
\lvec (2 -3)
\move (2.25 -3)
\lvec (-.25 -3)
\move (0 -3)
\lvec (0 0)
\move (.4 0)
\clvec (.75 -1)(.5 -2.5)(.4 -3)
\move (1.2 0)
\clvec (1 -1)(1.5 -2.5)(1.2 -3)
\htext (.8 -1.8) {$W$}
\htext (.3 -1.4) {$C$}
\htext (1.5 -1.4) {$C$}
\htext (.4 .05) {\small $D(\tau_{M_1}) \times D^j$}
\htext (.4 -3.2) {\small $D(\tau_{M_2}) \times D^j$}
\htext (-.6 -.1) {\small $M_1^{\times 2} \times D^j$}
\htext (-.9 -3.1) {\small $M_2^{\times 2} \times D^j$}
\etexdraw
$$
\medskip
Furthermore, the complement $C$ and the
normal data $\partial_1 W$
of the solution sit in a commutative diagram
of pairs
$$
\xymatrix{
(M_1^{\times 2} \times S^{j-1},\emptyset) \ar[d]_\cap \ar[rr]^\subset_\sim
&& (T^{(2)}\times S^{j-1},\emptyset) \ar[d]^\cap &&
(M_2^{\times 2}\times S^{j-1},\emptyset) \ar[d]^\cap \ar[ll]_\supset^\sim \\
(F_{D^j}(M_1,2),S(\tau_{M_1} + \epsilon^j))
\ar[rr]^\subset \ar[d]_{\cap} && (C,\partial_1 W) \ar[d]^e &&
(F_{D^j} (M_2, 2),S(\tau_{M_2} + \epsilon^j)) \ar[d]^{\cap}
\ar[ll]_{\supset}\\
(M_1^{\times 2} \times D^j,D(\tau_{M_1}) \times D^j)
\ar[rr]^{\subset}_\sim && (T^{(2)} \times D^j,T_j)
&&
(M_2^{\times 2}\times D^j,D(\tau_{M_2}) \times D^j) \ar[ll]_{\supset}^\sim \, .
}
$$
Here each (horizontal)
arrow marked with $\sim$ is a weak homotopy equivalence.
Each column describes an ${\cal F}$-space (cf.\
Definition \ref{fivespace}). In fact, the outer columns
are the ${\cal F}$-spaces $M_i(j)$ described in \S1.
Furthermore, the horizontal maps describe morphisms of ${\cal F}$-spaces.
Consequently, to complete
the proof of Theorem \ref{config},
it suffices to show these morphisms of ${\cal F}$-spaces
are weak equivalences.
We are therefore reduced to showing that the horizontal arrows in the
second row are weak homotopy equivalences.
By symmetry, it will suffice
to prove that the left map in the second row,
$$(F_{D^j}(M_1,2),S(\tau_{M_1} + \epsilon^j)) \to (C,\partial_1 W)
$$
is a weak equivalence.
\medskip
We will prove that the map $F_{D^j}(M_1,2)\to C$
is a weak equivalence; the proof that $S(\tau_{M_1} + \epsilon^j) \to
\partial_1 W$ is a weak equivalence is similar and will be
left to the reader.
To do this, consider the following commutative diagram.
\begin{equation}\label{bigcommute}
\xymatrix{
F_{D^j}(M_1, 2) \ar[r]^= \ar[d]_{\cap} &F_{D^j}(M_1, 2)
\ar[d]^\cap \ar[r]^{\hookrightarrow} & C \ar[d]^e\\
M_1^{\times 2} \times D^j \ar[r]_>>>>{\hookrightarrow} &
( M_1^{\times 2} \times D^j) \cup_{ (D(\tau_{M_1}) \times D^j)} W \ar[r]_>>>>{\hookrightarrow}
& T^{(2)} \times D^j\, .
}
\end{equation}
\medskip
\begin{lemma}\label{push}
Each of the commutative squares in diagram
\eqref{bigcommute} is a homotopy pushout.
\end{lemma}
\medskip
Before we prove this lemma, we show how we will use it to complete the proof of Theorem \ref{config}.
\begin{proof}[Proof of Theorem \ref{config}] \rm
By the lemma, each of the squares
of this diagram is a homotopy pushout, and hence so is the outer square:
$$
\xymatrix{
F_{D^j}(M_1, 2) \ar[r]^{\hookrightarrow} \ar[d]_\cap & C \ar[d]^e \\
M_1^{\times 2} \times D^j \ar[r]_\hookrightarrow & T^{(2)} \times D^j.
}
$$
Now recall that $ T^{(2)}$ is the mapping cylinder of the homotopy equivalence,
$f^{\times 2}\: M_1^{\times 2} \to M_2^{\times 2}.$ Therefore the inclusion,
$M_1^{\times 2} \to T^{(2)}$ is an equivalence,
and hence so is the bottom horizontal
map in this pushout diagram, $M_1^{\times 2} \times D^j \hookrightarrow T^{(2)} \times D^j$.
Furthermore, the inclusion $ F_{D^j}(M_1, 2) \to M_1^{\times 2} \times D^j $ is $2$-connected, assuming the dimension of $M_1$ is $2$ or larger. Therefore, by the pushout property of this square and the Blakers--Massey theorem, we conclude that the top horizontal map
in this diagram, $ F_{D^j}(M_1, 2) \hookrightarrow C$, is a homotopy equivalence.
As described before, this is what was needed to complete the proof of
Theorem \ref{config}. \end{proof}
\begin{proof}[Proof of Lemma \ref{push}]
We first consider the right hand commutative square. By the properties of the solution
to the relative embedding problem given above in (\ref{solved}), we know that $e\: C \to T^{(2)}\times D^j$ extends to an equivalence, $e\: C \cup_{\partial_1W}W \xrightarrow{\simeq} T^{(2)}\times D^j$. Now notice that the intersection of $\partial_1 W$ with $F_{D^j}(M_1,2)$ is the boundary, $\partial(D(\tau_{M_1}) \times D^j)$. But
$$
(F_{D^j}(M_1,2)) \cup_{\partial(D(\tau_{M_1}) \times D^j)} W = ( M_1^{\times 2} \times D^j) \cup_{ (D(\tau_{M_1}) \times D^j)} W .
$$
This proves that the right hand square is a homotopy pushout.
We now consider the left hand diagram. Again, by using the properties of the solution of the relative embedding problem given above in (\ref{solved}), we know that
the homotopy equivalence $h \: T_j \xrightarrow{\simeq} W$ extends to a homotopy equivalence,
$$
h \: ( M_1^{\times 2} \times D^j) \cup_{ (D(\tau_{M_1}) \times D^j)} T_j \xrightarrow{\simeq} ( M_1^{\times 2} \times D^j) \cup_{ (D(\tau_{M_1}) \times D^j)} W.
$$
But by construction, $T_j$ is homotopy equivalent to the mapping cylinder of the composite homotopy equivalence, $D(\tau_{M_1})
\xrightarrow{\text{\rm project}} M_1 \xrightarrow{f} M_2 \xrightarrow{\text{zero section}} D(\tau_{M_2})$. This implies that the inclusion $D(\tau_{M_1}) \times D^j \hookrightarrow T_j$ is a homotopy equivalence, and so $ ( M_1^{\times 2} \times D^j) \cup_{ (D(\tau_{M_1}) \times D^j)} T_j $ is homotopy equivalent to $ M_1^{\times 2} \times D^j$. Thus the inclusion given by the bottom horizontal map in the square in question, $M_1^{\times 2} \times D^j \hookrightarrow (M_1^{\times 2} \times D^j) \cup_{ (D(\tau_{M_1}) \times D^j)} W$ is also a homotopy equivalence. Since the top horizontal map is the identity,
this square is also a homotopy pushout. This completes the proof of Lemma \ref{push}, which was the last step in the proof of Theorem \ref{config}. \end{proof}
| {
"timestamp": "2008-10-18T01:50:12",
"yymm": "0509",
"arxiv_id": "math/0509667",
"language": "en",
"url": "https://arxiv.org/abs/math/0509667",
"abstract": "Let M be a closed, oriented, n -manifold, and LM its free loop space.Chas and Sullivan defined a commutative algebra structure in the homology of LM, and a Lie algebra structure in its equivariant homology. These structures are known as the string topology loop product and string bracket, respectively.In this paper we prove that these structures are homotopy invariants in the following sense.Let f : M_1 \\to M_2 be a homotopy equivalence of closed, oriented n -manifolds. Then the induced equivalence, Lf : LM_1 \\to LM_2 induces a ring isomorphism in homology, and an isomorphism of Lie algebras in equivariant homology. The analogous statement also holds true for any generalized homology theory h_* that supports an orientation of the M_i 's.",
"subjects": "Geometric Topology (math.GT); Algebraic Topology (math.AT)",
"title": "The homotopy invariance of the string topology loop product and string bracket",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9814534327754854,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7083573415289687
} |
% https://arxiv.org/abs/2211.08973
\title{Smooth integers and the Dickman $\rho$ function}
\begin{abstract}
We establish an asymptotic formula for $\Psi(x,y)$ whose shape is $x \rho(\log x/\log y)$ times correction factors. These factors take into account the contributions of zeta zeros and prime powers, and the formula can be regarded as an (approximate) explicit formula for $\Psi(x,y)$. With this formula at hand we prove oscillation results for $\Psi(x,y)$, which resolve a question of Hildebrand on the range of validity of $\Psi(x,y) \asymp x\rho(\log x/\log y)$. We also address a question of Pomerance on the range of validity of $\Psi(x,y) \ge x \rho(\log x/\log y)$. Along the way we improve classical estimates for $\Psi(x,y)$ and, on the Riemann Hypothesis, uncover an unexpected phase transition of $\Psi(x,y)$ at $y=(\log x)^{3/2+o(1)}$.
\end{abstract}
\section{Introduction}
A positive integer is called $y$-friable (or $y$-smooth) if all its prime factors do not exceed $y$. We denote the number of $y$-friable integers up to $x$ by $\Psi(x,y)$. We assume throughout $x \ge y \ge 2$.
We denote by $\rho\colon [0,\infty) \to (0,\infty)$ the Dickman function, defined as $\rho(t)=1$ for $t \in [0,1]$, while for larger values it is defined via the delay differential equation $\rho'(t) = -\rho(t-1)/t$.
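On $[1,2]$ the delay differential equation integrates to $\rho(t)=1-\log t$, which gives a convenient check for numerics. The following Python sketch (ours, purely illustrative) integrates the equation by forward Euler, snapping the step so that the lag $t-1$ always lands exactly on the grid:

```python
import math

def dickman_rho(u, h=1e-4):
    """Dickman's rho via forward Euler on rho'(t) = -rho(t-1)/t, with rho = 1 on [0,1].

    h is snapped so that 1/h is an integer: the lag t - 1 then always falls
    exactly on the grid, so no interpolation is needed.
    """
    if u <= 1:
        return 1.0
    n = int(round(1 / h))                   # grid points per unit interval
    h = 1.0 / n
    vals = [1.0] * (n + 1)                  # vals[i] = rho(i * h); history on [0, 1]
    for i in range(int(round((u - 1) * n))):
        t = 1.0 + i * h
        vals.append(vals[-1] - h * vals[i] / t)   # vals[i] = rho(t - 1)
    return vals[-1]
```

With step $h=10^{-4}$ this reproduces $\rho(2)=1-\log 2\approx 0.30685$ to roughly four decimal places.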
Dickman \cite{dickman1930} showed that
\begin{equation}\label{eq:dickman}
\Psi(x,y) \sim x \rho \left( \log x / \log y \right) \qquad (x \to \infty)
\end{equation}
holds as long as $\log x/ \log y$ is bounded from above, that is, $y \ge x^{\varepsilon}$. For this reason, it is useful to introduce the parameter
\[ u := \log x /\log y.\]
\subsection{Hildebrand's work}
The range of validity of \eqref{eq:dickman} was considerably improved by de Bruijn \cite{debruijn1951}.
He stated his result in terms of the error term in the prime number theorem. Define $R(x)$ via
\[ \pi(x) = \mathrm{Li}(x) \left(1+O(R(x))\right)\]
where $\pi$ is the prime counting function and $\mathrm{Li}$ is the logarithmic integral. He used Buchstab's identity to show (essentially) that if $R(x) \ll_{\varepsilon} \exp(-(\log x)^{a-\varepsilon})$ ($a \in (0,1)$) then
\begin{equation}\label{eq:debruijn}
\Psi(x,y)= x \rho(u) \left(1 + O_{\varepsilon}\left( \log (u+1)/\log y \right)\right)
\end{equation}
holds uniformly in the range $\log y \ge (\log x)^{1/(1+a)+\varepsilon}$ for every $\varepsilon>0$. Using the Korobov--Vinogradov zero-free region for the Riemann zeta function one may take $a = 3/5$. Additionally, in the range of his results, he proved an asymptotic expansion for $\Psi(x,y)$ in (roughly) powers of $\log (u+1)/\log y$.
In \cite{Hildebrand1986}, Hildebrand extended de Bruijn's range qualitatively, showing that \eqref{eq:debruijn} holds uniformly in the range
\begin{equation}\label{eq:hildrange}
\log y \ge (\log \log x)^{\frac{1}{a}+\varepsilon}
\end{equation}
for every $\varepsilon>0$. Here $a$ is the same as defined above, so we may take $a=3/5$. In an earlier paper \cite{Hildebrand1984}, Hildebrand showed that RH implies a further qualitative improvement, namely that \eqref{eq:debruijn} holds in the wider range
\begin{equation}\label{eq:rhrange}
y \ge (\log x)^{2+\varepsilon}.
\end{equation}
The reverse implication is also true: if even the weaker estimate $\Psi(x,y)=x \rho(u)\exp(O_{\varepsilon}(y^{\varepsilon}))$
holds in the range \eqref{eq:rhrange} then RH must be true. Hildebrand's proofs rely on a beautiful identity \cite[p. 261]{Hildebrand1984}.
\begin{remark}
Hildebrand's conditional result does not give an asymptotic result when
\begin{equation}\label{eq:regime}
(\log x)^A \ge y \ge (\log x)^{2+\varepsilon},
\end{equation}
$A$ being an arbitrary number. Indeed, the error term $\log(u+1)/\log y$ is bounded away from $0$ when \eqref{eq:regime} holds. Hildebrand's result only gives an upper bound in this regime, and if $y \ge (\log x)^C$ for sufficiently large $C$ then also a lower bound is implied.
\end{remark}
\subsection{Hildebrand's conjecture}
In \cite[p. 290]{Hildebrand1986}, Hildebrand speculates that $\Psi(x,y) \sim x \rho(u)$ for $y \ge (\log x)^{2+\varepsilon}$ but not for $y \le (\log x)^{2-\varepsilon}$. Specifically, he writes
\begin{quote}\label{eq:quotehild}
If the Riemann hypothesis is assumed, the range for $u$ can be further extended to $ 1\le u \le \log x/(2+\varepsilon)\log \log x$, but it seems likely that then the critical limit is attained: it may be conjectured that for $\log y <(2-\varepsilon)\log \log x$, the relation $\Psi(x,x^{1/u}) \sim x \rho(u)$ no longer holds.
\end{quote}
This conjecture is repeated by Granville in \cite{Granville1989}, and in \cite[p. 258]{Granville1993} he writes
\begin{quote}
... and Hildebrand has even shown that (2.3) holds for all $y \ge \log^{2+\varepsilon} x$ if and only if the Riemann Hypothesis is true. However we do not believe that (2.1) can hold uniformly for $y=\log^{2-\varepsilon} x$ for any fixed $\varepsilon>0$.
\end{quote}
We confirm these speculations:
\begin{thm}\label{thm:hildconj}
Fix $\varepsilon \in (0,2)$. There are sequences $x_n \to \infty$, $y_n \to \infty$ with $y_n = (\log x_n)^{2-\varepsilon+o(1)}$ and
\[ \Psi(x_n,y_n) > x_n \rho\left(\log x_n / \log y_n\right) \exp\left( y_n^{\varepsilon/(2-\varepsilon)+o(1)} \right).\]
\end{thm}
The theorem is proved in the next section, as part of the stronger Proposition \ref{prop:mainosc}.
\subsection{Pomerance's question}
In \cite{Granville2008,Lichtman}, Pomerance asked whether
\begin{equation}\label{eq:pom}
\Psi(x,y) \ge x \rho(u)
\end{equation}
holds for all $x/2 \ge y \ge 1$. The motivation is related to de Bruijn's approximation to $\Psi(x,y)$, called $\Lambda(x,y)$, which in some ranges is strictly larger than $x \rho(u)$, see \S\ref{sec:debruijn}.
If RH is false, we show Pomerance's inequality fails infinitely often. If RH is true, we show it is true when $y \ge (\log x)^{2+\varepsilon}$ or $y \le (\log x)^{2-\varepsilon}$ (at least for $x \gg_{\varepsilon} 1$). Near $y=(\log x)^2$, the question lies beyond RH in a precise sense, but we indicate that a positive answer follows from a conjecture of Montgomery and Vaughan on the size of the remainder term in the prime number theorem.
\begin{thm}\label{thm:pom}
Fix $\varepsilon>0$.
\begin{enumerate}
\item Suppose $x \gg_{\varepsilon} 1$. Unconditionally, \eqref{eq:pom} holds in $(1-\varepsilon)x \ge y \ge \exp((\log \log x)^{5/3 +\varepsilon})$.
\item Suppose RH is not true. Fix $\sigma_0 \in (1-\Theta,\Theta)$ where $\Theta \in (1/2,1]$ is the supremum of the real parts of the zeros of $\zeta$. Then, there are sequences $x_n \to \infty$, $y_n \to \infty$ satisfying
$y_n = (\log x_n)^{1/(1-\sigma_0)+o(1)}$
and
\[ \Psi(x_n,y_n) < x_n \rho\left(\log x_n/ \log y_n\right) \exp(-y_n^{\Theta-\sigma_0-\varepsilon}).\]
\item If RH is true, \eqref{eq:pom} holds when $x(1-\varepsilon) \ge y \ge (\log x)^{2+\varepsilon}$ and $2\le y \le (\log x)^{2-\varepsilon}$, as long as $x \gg_{\varepsilon} 1$.
\item Suppose RH is true. Let $L \in \mathbb{R}$ be the following constant:
\[ L = \max_{v \in \mathbb{R}} e^v \left( -\log (-\zeta(1/2)) - \frac{1}{2}\int_{v}^{2v} e^{-r}r^{-1}dr \right)\approx -0.666217.\]
Let $\psi$ be the Chebyshev function.
A necessary condition for \eqref{eq:pom} to hold in $(\log x)^{3} \ge y \ge (\log x)^{3/2}$ is
\begin{equation}\label{eq:necc}
\liminf_{y \to \infty} \frac{\psi(y)-y}{\sqrt{y}\log y} \ge L.
\end{equation}
A sufficient condition for \eqref{eq:pom} to hold in $(\log x)^{3} \ge y \ge (\log x)^{3/2}$ for all $y \gg 1$ is that \eqref{eq:necc} holds with strict inequality.
\end{enumerate}
\end{thm}
The theorem is proved in \S\ref{sec:pom}. Note that RH implies
\begin{equation}\label{eq:vonkoch}
\psi(y)-y = O(\sqrt{y}(\log y)^2)
\end{equation}
as shown by von Koch in 1901 \cite[Thm. 13.1]{MV}, and this has not been improved since. It is believed that
\begin{equation}\label{eq:munsch} \liminf_{y \to \infty} \frac{\psi(y)-y}{\sqrt{y}(\log \log \log y)^2} = -\frac{1}{2\pi},
\end{equation}
see the discussion in \cite[p. 484]{MV}; \eqref{eq:munsch} implies that the limit considered in the last part of Theorem \ref{thm:pom} is $0$. Goldston and Suriajaya showed that sufficiently uniform versions of Montgomery's Pair Correlation Conjecture lead to improvements on \eqref{eq:vonkoch} which would also show the limit is $0$.
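The normalized error $(\psi(y)-y)/(\sqrt{y}\log y)$ appearing above is easy to experiment with directly. The following Python sketch (ours; a plain sieve, for illustration only) computes $\psi$ and this quotient:

```python
import math

def chebyshev_psi(y):
    """Chebyshev's psi(y): sum of log p over all prime powers p^k <= y."""
    y = int(y)
    sieve = [True] * (y + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(y ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    total = 0.0
    for p in range(2, y + 1):
        if sieve[p]:
            q = p
            while q <= y:          # count p, p^2, p^3, ... up to y
                total += math.log(p)
                q *= p
    return total

normalized = (chebyshev_psi(10 ** 5) - 10 ** 5) / (math.sqrt(10 ** 5) * math.log(10 ** 5))
```

Already at $y=10^5$ the normalized error is well below $1$ in absolute value, consistent with \eqref{eq:vonkoch}.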
\subsection*{Structure of paper}
In \S\ref{sec:int} we give some intuition for the behavior of $\Psi(x,y)/(x\rho(u))$ and then go on to prove Theorems \ref{thm:hildconj} and \ref{thm:pom} as well as a phase transition result (Theorem \ref{thm:phase}), new inequalities (Corollary \ref{cor:sandwich}, Theorem \ref{thm:ineq}) and a simple formula for $\Psi(x,y)/(x\rho(u))$ in a wide range given a zero-free strip for $\zeta$ (Theorem \ref{thm:strip}).
In Theorem \ref{thm:logsave} we study under RH in what range does $\Psi(x,y) \sim \Lambda(x,y)$ hold where $\Lambda$ is the de Bruijn approximation \cite{debruijn1951}, and as in Theorem \ref{thm:pom} the answer relates to the size of $(\psi(y)-y)/(\sqrt{y} \log y)$.
In \S\ref{sec:firststudy} and \S\ref{sec:Gstudy} we develop some standard material (including properties of the saddle point for $\zeta(s,y)$ introduced first in \cite{HildebrandTenenbaum1986}, and a variant of the truncated explicit formula for $\psi(y)$) that is needed for a subset of our results.
\subsection*{Conventions}
We use the convention where $C,c$ denote absolute positive constants which may change between different occurrences. The notation $A \ll B$ means $|A| \le C B$ for some absolute constant $C$, and $A\ll_{\varepsilon} B$ means $C$ may depend on $\varepsilon$. We write $A \asymp B$ to mean $C_1 B \le A \le C_2 B$ for some absolute positive constants $C_i$, and $A \asymp_{\varepsilon} B$ means $C_i$ may depend on $\varepsilon$. We write $A=\Theta(B)$ and $A=\Theta_{\varepsilon}(B)$ to mean $A\asymp B$ and $A \asymp_{\varepsilon} B$, respectively. If we differentiate a bivariate function, we always do so with respect to the first variable. Throughout, $L(x) = \exp((\log x)^{\frac{3}{5}}( \log \log x)^{-\frac{1}{5}})$.
\subsection*{Acknowledgements}
We are grateful to Sacha Mangerel for asking us about integer analogues of \cite{gorodetsky}. We thank Zeev Rudnick and James Maynard for comments on exposition. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 851318).
\section{A new approximation}\label{sec:proofs}
\subsection{Definitions and standard results}\label{sec:prelim}
We define $\xi\colon [1,\infty) \to [0,\infty)$ via $e^{\xi(u)}=1+u\xi(u)$, where $\xi(u)$ is the unique positive solution of this equation for $u>1$, and $\xi(1):=0$.
\begin{lem}\label{lem:xilem}\cite[Lem. 1]{HildebrandTenenbaum1986}\cite[Lem. 1]{Hildebrand1984}
For $u \ge 3$ we have
\begin{align}
\label{eq:xiasymp}\xi(u) &= \log u + \log \log u + O\left(\log \log u/ \log u\right),\\
\label{eq:derivxi} \xi'(u)^{-1} &= u\left(1+O\left( (\log u)^{-1}\right)\right).
\end{align}
\end{lem}
The first part of Lemma \ref{lem:xilem} shows
\begin{lem}\label{lem:sigmasize}
Let $\sigma=1-\xi(u)/\log y$. Fix $\varepsilon>0$. If $x\ge y\ge (1+\varepsilon) \log x$ and $x \gg_{\varepsilon} 1$ then
\begin{equation}\label{eq:sigmasymp}
\sigma= \frac{\log\left( \frac{y}{\log x}\right)}{\log y} \left(1 + O_{\varepsilon}\left( \frac{\log \log y}{\log y}\right)\right).
\end{equation}
\end{lem}
\begin{lem}\label{lem:sigmapositive}
Let $\sigma = 1-\xi(u)/\log y$, where $u=\log x/ \log y$. We have $\sigma \le 1$ with equality if and only if $u=1$. We have $\sigma \ge 0$ if and only if $y\ge 1+\log x$, and $\sigma=0$ if and only if $y=1+\log x$.
\end{lem}
\begin{proof}
The first part follows from $\xi$ being $0$ at $u=1$ and being strictly increasing. Next, we need to solve $\sigma \ge 0$, or $\log y\ge \xi(u)$. Again, since $\xi$ is strictly increasing, it actually suffices to solve $\log y= \xi(u)$. Exponentiating, this implies
\[ y = e^{\xi(u)} = 1+u\xi(u) = 1+u \log y= 1+\log x\]
as needed.
\end{proof}
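The defining equation of $\xi$ is easy to solve numerically, and doing so illustrates the lemma: choosing $\log y=\xi(u)$ (so that $\sigma=0$) forces $y=1+\log x$. A small Python sketch (ours), using bisection on $t\mapsto e^t-1-ut$:

```python
import math

def xi(u):
    """The nonzero root of e^t = 1 + u*t for u > 1 (the function xi above), by bisection."""
    f = lambda t: math.exp(t) - 1 - u * t
    lo, hi = 1e-12, 2 * math.log(u + 2) + 1    # f(lo) < 0 < f(hi) when u > 1
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Bisection is preferred over Newton here because $t\mapsto e^t-u$ vanishes at $t=\log u$, close to the rough starting guess $\xi(u)\approx\log u$.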
We define the entire function $I(s)$ by $ I(s)=\int_{0}^{s} (e^v-1)dv/v$. The following are standard identities.
\begin{lem}\cite[Eq. (III.5.56)]{Tenenbaum2015}\label{lem:Ippxi}
For $u \ge 1$ we have $I'(\xi(u))= u$ and $I''(\xi(u)) = 1/\xi'(u)$.
\end{lem}
\begin{lem}\cite[Thm. III.5.10]{Tenenbaum2015}\label{lem:rho i transform}
Let $\gamma$ be the Euler-Mascheroni constant. For all $s \in \mathbb{C}$,
\begin{equation}\label{eq:hat rho}
\hat{\rho}(s) := \int_{0}^{\infty} e^{-sv}\rho(v)\, dv = \exp\left( \gamma + I(-s) \right).
\end{equation}
\end{lem}
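Identity \eqref{eq:hat rho} can be checked numerically at $s=1$: the left side from the delay differential equation for $\rho$, the right side from a quadrature for $I(-1)$. The following Python sketch is ours and only illustrative (it truncates the Laplace integral at $v=15$, where $\rho$ is negligible):

```python
import math

EULER_GAMMA = 0.5772156649015329

def rho_table(T, h=1e-3):
    """Values of Dickman's rho on a grid of spacing h up to T (forward Euler)."""
    n = int(round(1 / h)); h = 1.0 / n
    vals = [1.0] * (n + 1)                       # rho = 1 on [0, 1]
    for i in range(int(round((T - 1) * n))):
        t = 1.0 + i * h
        vals.append(vals[-1] - h * vals[i] / t)  # rho'(t) = -rho(t-1)/t
    return vals, h

def I(s, m=100000):
    """I(s) = int_0^s (e^v - 1)/v dv by the midpoint rule (s real, s != 0)."""
    h = s / m
    return sum(math.expm1((k + 0.5) * h) / ((k + 0.5) * h) for k in range(m)) * h

vals, h = rho_table(15.0)
lhs = sum(math.exp(-k * h) * v for k, v in enumerate(vals)) * h  # int_0^inf e^{-v} rho(v) dv
rhs = math.exp(EULER_GAMMA + I(-1.0))                            # exp(gamma + I(-1))
```

Both sides come out near $0.803$ (indeed $\gamma+I(-1)=-E_1(1)$, with $E_1$ the exponential integral).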
\subsection{De Bruijn's approximation}\label{sec:debruijn}
Our results are based on a new approximation for $\Psi(x,y)$. It has consequences beyond resolving Hildebrand's and Pomerance's questions.
To put our approximation in context, we shall briefly discuss de Bruijn's approximation $\Lambda(x,y)$ \cite{debruijn1951}. For $x \not\in \mathbb{Z}$ it is given by
\[ \Lambda(x,y) := x\int_{\mathbb{R}} \rho\left(u-v\right)d\left(\lfloor y^v \rfloor/y^v\right),\]
otherwise $\Lambda(x,y)=\Lambda(x+,y)$ (one has $\Lambda(x,y)= \Lambda(x-,y)+O(1)$ if $x \in \mathbb{Z}$ \cite[p. 54]{debruijn1951}). We refer the reader to de Bruijn's original paper for the motivations for this definition. Integrating the definition by parts gives
\begin{equation}\label{eq:intbyparts}
\Lambda(x,y)=x\rho(u)-\{ x\}+\int_{0}^{u-1}(-\rho'(u-v))\{y^v\}y^{-v}dv
\end{equation}
when $x \notin \mathbb{Z}$. Due to $\rho$ being decreasing, the integral in the right-hand side of \eqref{eq:intbyparts} is non-negative, which motivates Pomerance's question.
Saias \cite[Lem. 4]{Saias1989}, improving on de Bruijn \cite{debruijn1951}, proved
\begin{equation}\label{eq:lambdayratio} \Lambda(x,y) = x\rho(u) \left(1+O_{\varepsilon}(\log(u+1)/\log y)\right)
\end{equation}
holds uniformly in $y \ge (\log x)^{1+\varepsilon}$, and that \cite[Thm.]{Saias1989}
\begin{equation}\label{eq:saiaslambdares}
\Psi(x,y) = \Lambda(x,y)\left(1 + O_{\varepsilon}\left(\exp\left(-(\log y)^{a-\varepsilon}\right)\right)\right)
\end{equation}
holds uniformly in the range \eqref{eq:hildrange}. In particular, one recovers Hildebrand's unconditional result. Saias indicates in \cite[p. 79]{Saias1989} that if one assumes RH then his proof gives
\begin{equation}\label{eq:saiasrh} \Psi(x,y) = \Lambda(x,y) \left(1+O_{\varepsilon}\left(y^{\varepsilon-1/2}\log x\right)\right)
\end{equation}
in the range $y \ge (\log x)^{2+\varepsilon}$. Implicit in the proof of Proposition 4.1 of La Bret\`eche and Tenenbaum \cite{LaBreteche2005} is the estimate
\begin{equation}\label{eq:LaBTT}
\Lambda(x,y) = x \rho(u) Z( 1-\xi(u)/\log y) \left(1 + O_{\varepsilon}( 1/\log x)\right)
\end{equation}
uniformly for $y \ge (\log x)^{1+\varepsilon}$ where
\[ Z(t) := \zeta(t)(t-1)/t, \qquad Z(1)=1,\]
$\zeta$ is the Riemann zeta function and $\xi$ is defined in \S\ref{sec:prelim}.
The function $Z$ originates in de Bruijn's work \cite[Eq. (2.8)]{debruijn1951}, where it is denoted by $K(t+1)$.
It is evident that $\lim_{t \to 0^+} Z(t)= \infty$. Moreover,
\begin{lem}
The function $Z$ is strictly decreasing in $(0,1]$.
\end{lem}
\begin{proof}
We have
\[ Z'(t) = ((\zeta(t)(t-1))'-\zeta(t)(t-1))/t^2.\]
The integral representation $\zeta(s)=s/(s-1) -s \int_{1}^{\infty}\{x\}dx/x^{1+s}$ for $\Re s >0$ \cite[Thm. 1.2]{MV} implies
$Z'(t)=-\int_{1}^{\infty}(x+1-t^2)\{x\}x^{-2-t}dx<0$.
\end{proof}
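The same integral representation makes $Z$ easy to evaluate numerically on $(0,1)$, giving an independent check of the lemma (and of $Z(1/2)=-\zeta(1/2)\approx 1.4603$). In the Python sketch below (ours), the fractional-part integral is computed segment by segment in closed form, with the tail modeled by $\{x\}\approx 1/2$:

```python
import math

def Z(t, N=20000):
    """Z(t) = zeta(t)(t-1)/t for t in (0,1), via
    zeta(s) = s/(s-1) - s * int_1^inf {x} x^{-1-s} dx, so Z(t) = 1 + (1-t) * J(t)."""
    J = 0.0
    for n in range(1, N):
        # int_n^{n+1} (x - n) x^{-1-t} dx in closed form
        J += ((n + 1) ** (1 - t) - n ** (1 - t)) / (1 - t) \
             + (n / t) * ((n + 1) ** (-t) - n ** (-t))
    J += N ** (-t) / (2 * t)       # tail, using the mean value {x} ~ 1/2
    return 1.0 + (1 - t) * J
```

The tail-model error is $O(N^{-1-t})$ by partial summation, so $N=20000$ already gives several correct digits.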
It follows from Saias' work and \eqref{eq:LaBTT} that under RH, the quantities $\Psi(x,y)$ and $x\rho(u)$ are \textit{not} asymptotic in the regime \eqref{eq:regime}, but still of the same order of magnitude.
\subsection{The function \texorpdfstring{$G$}{G} and informal discussion}\label{sec:int}
Our approximations will be given in terms of the function
\[ G(s,y) := \zeta(s,y)/F(s,y)\]
where
\[ \zeta(s,y):=\prod_{p \le y} (1-p^{-s})^{-1}=\sum_{n \text{ is }y\text{-friable}} n^{-s} \qquad (\Re s >0)\]
is the partial zeta function, and
\begin{equation}
F(s,y):=\zeta(s)(s-1)\log y \hat{\rho}((s-1)\log y) \qquad (s \in \mathbb{C})
\end{equation}
where $\hat{\rho}$ is the Laplace transform of the Dickman function, which is never zero by Lemma \ref{lem:rho i transform}. Hence the function $G$ is defined for every $s\in \mathbb{C}$ with $\Re s >0$ which is not a zero of $\zeta$.
The ratio $G$ arises naturally: $\zeta(s,y)/s$ appears as the Mellin transform of $x\mapsto \Psi(x,y)$ while $F(s,y)/s$ appears as the Mellin transform of $x \mapsto \Lambda(x,y)$. This latter fact is due to de Bruijn \cite[p. 54]{debruijn1951} (cf. \cite{Saias1989}). The ratio $G$ contains information on the ratio $\Psi(x,y)/\Lambda(x,y) \sim \Psi(x,y)/(x\rho(u)Z(\sigma))$.
We choose the following logarithm of $\zeta(s,y)$:
\[ \log \zeta(s,y) = \sum_{p \le y} (-\log(1-p^{-s})) = \sum_{n \text{ is }y\text{-friable}} \Lambda(n)/(n^{s}\log n),\]
so we can write $G$ as $G_1 G_2$ where
\begin{equation}
G_1(s,y) = \exp\big(\sum_{n \le y} \frac{\Lambda(n)}{n^{s}\log n}\big)/F(s,y),\,
\log G_2(s,y) = \sum_{k \ge 2} \sum_{y^{1/k} <p \le y} p^{-ks}/k.
\end{equation}
We use the decomposition into $G_1$ and $G_2$ throughout. Informally, for real $s \in (0,1)$ we show in \S\ref{sec:Gstudy} that
\[ \log G_1(s,y) \approx -\sum_{\rho}\frac{ y^{\rho-s}}{(\rho-s)\log y} \]
where the sum is over zeros of $\zeta$, and
\[ \log G_2(s,y) \approx y^{\max\{1-2s,1/2-s\}}/\log y. \]
The relevant value of $s$ when studying $\Psi(x,y)$ and $\Lambda(x,y)$ using their Mellin transforms is known to be close to $1-\log \log x /\log y$.
In particular, if we fix $A>1$ and consider
$y \approx (\log x)^A$,
then for the purposes of studying $\Psi(x,y)$ we care mostly about $s \approx 1-1/A$. At this point
\begin{align}
\log G_1\left(1-1/A,y\right) &\ll y^{\Theta-1+1/A+o(1)},\\
\log G_2\left(1-1/A,y\right) &\approx y^{\max\left\{ 1/A-1/2,2/A-1\right\}+o(1)}
\end{align}
where $\Theta$ is the supremum of the real parts of the zeros of $\zeta$.
Our understanding of $\log G_1$ relies on our understanding of the zeros. For instance, let us suppose RH holds. Then $\Theta=1/2$ and $\log G_1(1-1/A,y)$ is $o(1)$ if $A>2$. In the other direction we end up using Landau's oscillation theorem to show that not only is it bounded by $y^{1/A-1/2+o(1)}$, but that it can reach this order of magnitude \emph{with both signs} infinitely often, so $\log G_1(1-1/A,y)\neq o(1)$ once $A<2$. See \S\ref{sec:osc}.
The term $\log G_2(1-1/A,y)$ is elementary. For $A>2$, $\log G_2(1-1/A,y)$ is $o(1)$, while for $A<2$ it has a large positive contribution.
In summary, $A=2$ is a critical point for two different reasons: zeros and prime powers.
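Since $\log G_2$ involves only primes up to $y$, it can be computed exactly (a finite sum plus a rapidly convergent tail in $k$). The following Python sketch (ours) does so and compares against the order $y^{\max\{1-2s,1/2-s\}}/\log y$:

```python
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

def log_G2(s, y):
    """log G2(s, y) = sum_{k >= 2} sum_{y^{1/k} < p <= y} p^{-ks} / k, computed directly."""
    primes = primes_up_to(int(y))
    total = 0.0
    for k in range(2, 200):
        lo = y ** (1.0 / k)
        term = sum(p ** (-k * s) / k for p in primes if p > lo)
        total += term
        if term < 1e-15:           # terms decay geometrically in k
            break
    return total
```

At $y=10^5$ and $s=0.4$ the value is within a small constant factor of $y^{\max\{1-2s,1/2-s\}}/\log y$, as the informal discussion predicts.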
\subsection{Main formula}
Hildebrand and Tenenbaum proved the following asymptotic formula.
\begin{thm}\cite[Thms. 1, 2]{HildebrandTenenbaum1986}\label{thm:htsaddle}
Uniformly for $x \ge y \ge \log x$ we have
\begin{equation}\label{eq:HTsaddle} \Psi(x,y)=\frac{x^{\alpha}\zeta(\alpha,y)}{\alpha\sqrt{2\pi\phi_2(\alpha,y)}}\left(1 +O\left(u^{-1}\right) \right)
\end{equation}
where $\alpha >0$ is defined as the point at which $s\mapsto \zeta(s,y)x^s$ attains its minimum, and
\begin{equation}\label{eq:phi2}
\phi_2(\alpha,y):=\sum_{p \le y}\frac{p^{\alpha}(\log p)^2}{(p^{\alpha}-1)^2} = \big(1+\frac{\log x}{y} \big)\log x \log y\left(1+O\left( (\log(1+u))^{-1}\right)\right).
\end{equation}
The saddle point $\alpha$ satisfies, uniformly for $x \ge y \ge 2$,
\begin{equation}\label{eq:alphasize} \alpha = \frac{\log\left( 1+\frac{y}{\log x}\right)}{\log y}\left(1+O\left(\frac{\log \log y}{\log y}\right)\right).
\end{equation}
\end{thm}
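Theorem \ref{thm:htsaddle} is concrete enough to test directly for small parameters. The Python sketch below (ours) finds the saddle point $\alpha$ by bisection on $\sum_{p\le y}\log p/(p^{\alpha}-1)=\log x$, evaluates the main term of \eqref{eq:HTsaddle}, and compares it with an exact count of $y$-smooth integers:

```python
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

def psi_exact(x, primes):
    """Exact Psi(x, y): generate every y-smooth integer <= x from the prime list."""
    smooth = [1]
    for p in primes:
        extra = []
        for m in smooth:
            q = m * p
            while q <= x:
                extra.append(q)
                q *= p
        smooth.extend(extra)
    return len(smooth)

def ht_approx(x, primes):
    """Main term of the Hildebrand--Tenenbaum formula (eq:HTsaddle)."""
    logx = math.log(x)
    lo, hi = 1e-3, 2.0                       # bracket for the saddle point alpha
    for _ in range(100):
        a = (lo + hi) / 2
        if sum(math.log(p) / (p ** a - 1) for p in primes) > logx:
            lo = a
        else:
            hi = a
    a = (lo + hi) / 2
    log_zeta = -sum(math.log(1 - p ** (-a)) for p in primes)
    phi2 = sum(p ** a * math.log(p) ** 2 / (p ** a - 1) ** 2 for p in primes)
    return math.exp(a * logx + log_zeta) / (a * math.sqrt(2 * math.pi * phi2))
```

For $x=10^6$, $y=100$ (so $u=3$) the main term is already close to the true count; the assertion below only checks agreement within a factor of $2$, in line with the $1+O(u^{-1})$ error.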
The saddle point proof of Theorem \ref{thm:htsaddle} adapts to the study of $\rho$:
\begin{thm}\cite[Thm. III.5.13]{Tenenbaum2015}\label{thm:rho size}
For $u \ge 1$ we have
\begin{equation}\label{eq:rho and i}
\rho(u) = (\sqrt{2\pi I''(\xi(u))})^{-1}e^{\gamma-u\xi+I(\xi)}(1+O(u^{-1})).
\end{equation}
\end{thm}
By the definition of $F$, we can restate Theorem \ref{thm:rho size} as
\begin{cor}\label{cor:xrhozF}
Suppose $y \neq 1+\log x$. We have, for $\sigma=1-\xi(u)/\log y$,
\begin{equation}\label{eq:xrhoZsaddle}
x\rho(u)Z(\sigma) = \frac{x^{\sigma}F(\sigma,y)}{\sigma \sqrt{2\pi I''(\xi)(\log y)^2}} \left(1+O\left(u^{-1}\right)\right).
\end{equation}
\end{cor}
Throughout we use $\alpha$ and $\sigma$ as in Theorem \ref{thm:htsaddle} and Corollary \ref{cor:xrhozF}.
For $t \in (0,1]$ let
\begin{equation}\label{eq:fg}
f(t):= t\log x + \log F(t,y),\qquad
g(t):= t\log x + \log \zeta(t,y),
\end{equation}
and
\[ B(x,y):= \frac{\sigma}{\alpha}\frac{\sqrt{I''(\xi)(\log y)^2}}{\sqrt{\phi_2(\alpha,y)}}.\]
Observe also that
\begin{align}
g''(t)&=\sum_{p\le y} \frac{p^t (\log p)^2}{(p^t -1 )^2}>0,\\
f''(t)&= (\log (\zeta(t)(t-1)))'' + (\log y)^2I''((1-t)\log y).
\end{align}
\begin{proposition}[Main formula]\label{prop:first}
If $x \ge y > 1+ \log x$ then
\begin{align}
\frac{\Psi(x,y)}{x\rho(u)Z(\sigma)} &= G(\sigma,y) \exp\left( g(\alpha)-g(\sigma)\right) B(x,y) \left( 1 +O\left( u^{-1} \right)\right)\\
&=G(\alpha,y) \exp\left( f(\alpha)-f(\sigma)\right) B(x,y) \left( 1 +O\left( u^{-1} \right)\right).
\end{align}
\end{proposition}
\begin{proof}
We divide the left-hand side of \eqref{eq:HTsaddle} by the left-hand side of \eqref{eq:xrhoZsaddle}, and equate with the right-hand side of \eqref{eq:HTsaddle} divided by the right-hand side of \eqref{eq:xrhoZsaddle}. We then rearrange in two different ways via
\[ \frac{x^{\alpha}\zeta(\alpha,y)}{x^{\sigma}F(\sigma,y)} =\frac{\zeta(\alpha,y)}{F(\alpha,y)} \frac{x^{\alpha}F(\alpha,y)}{x^{\sigma}F(\sigma,y)}=\frac{\zeta(\sigma,y)}{F(\sigma,y)}\frac{x^{\alpha}\zeta(\alpha,y)}{x^{\sigma}\zeta(\sigma,y)}.\]
Finally, recall $G=\zeta/F$.
\end{proof}
If $u \to \infty$, Lemmas \ref{lem:xilem} and \ref{lem:Ippxi} show $I''(\xi(u)) \sim u$. If $y/\log x \to \infty$ and $u \to \infty$ then $\phi_2(\alpha,y) \sim \log x \log y$ by \eqref{eq:phi2}. Moreover, if $y/\log x \to \infty$ and $u \to \infty$ then $\alpha \sim \sigma$ by \eqref{eq:alphasize} and Lemma \ref{lem:xilem}. Hence
\[ B(x,y) \sim 1\]
if $u \to \infty$ and $y/ \log x \to \infty$.
In \S\ref{sec:firststudy} we study $B(x,y)$ in depth, which is not needed for Theorem \ref{thm:hildconj} but is needed for other results such as Theorem \ref{thm:ineq}.
The differences $g(\alpha)-g(\sigma)$ and $f(\alpha)-f(\sigma)$ are more delicate, but it is easy to determine their signs. Since $g'(\alpha)=0$ by definition and $g''(t) >0$, it follows that $g(\alpha)-g(\sigma) \le 0$. A similar argument works for $f(\alpha)-f(\sigma)$, but more care is needed because $f'(\sigma)$ is not $0$. A Taylor approximation shows
\[ f(\alpha)-f(\sigma) = (\alpha-\sigma)f'(\sigma) + (\alpha-\sigma)^2f''(t)/2\]
for some $t$ between $\alpha$ and $\sigma$. We have
\[ f'(\sigma) = \log x + (\log \zeta(\sigma)(\sigma-1))' -\log y I'(\xi) = (\log \zeta(\sigma)(\sigma-1))' = O(1).\]
Moreover, $f''(t) > 0$ by Lemma \ref{lem:gfdouble} and $\alpha=\sigma+o(1)$ by Lemma \ref{lem:ytoapower}. Hence
$f(\alpha)-f(\sigma) \ge o(1)$. We just established
\begin{cor}\label{cor:sandwich}
If $y/ \log x \to \infty$ and $u \to \infty$ when $x \to \infty$ then
\[ (1+o(1))G(\alpha,y) \le \frac{\Psi(x,y)}{x\rho(u)Z(\sigma)} \le (1+o(1))G(\sigma,y).\]
Fix $\varepsilon>0$. If $y \ge (1+\varepsilon)\log x$ and $x \gg_{\varepsilon} 1$ then
\[ G(\alpha,y) \ll_{\varepsilon} \frac{\Psi(x,y)}{x\rho(u)Z(\sigma)} \ll_{\varepsilon} G(\sigma,y).\]
\end{cor}
\begin{remark}\label{rem:var}
There is a variant of Proposition \ref{prop:first} proved in exactly the same way. Letting \[\tilde{f}(t):=t\log x+ \log \hat{\rho}((t-1)\log y), \quad \tilde{G}(s,y) := G(s,y) Z(s),\]
one has
\begin{align}
\frac{\Psi(x,y)}{x\rho(u)} &= \tilde{G}(\sigma,y) \exp\left( g(\alpha)-g(\sigma)\right) B(x,y) \left( 1 +O\left( u^{-1} \right)\right)\\
&=\tilde{G}(\alpha,y) \exp\big( \tilde{f}(\alpha)-\tilde{f}(\sigma)\big) B(x,y) \left( 1 +O\left( u^{-1} \right)\right).
\end{align}
On the one hand, $\tilde{f}'(t)$ is \emph{exactly} $0$ at $t=\sigma$ and $\tilde{f}(\alpha)-\tilde{f}(\sigma)$ is non-negative. On the other hand, $\tilde{G}$ is more complicated than $G$.
\end{remark}
In \S\ref{sec:firststudy} we study $g(\alpha)-g(\sigma)$ and $f(\alpha)-f(\sigma)$ which is needed for some of our later results, such as our phase transition result (Theorem \ref{thm:phase}), but not for Theorem \ref{thm:hildconj}. In Lemma \ref{lem:sharpen} we improve the error in Proposition \ref{prop:first} to $O((\alpha \log x)^{-1})$ which is needed e.g.\ in Theorem \ref{thm:ineq}, but not for Theorem \ref{thm:hildconj}. In view of Lemma \ref{lem:sharpen} and the estimate for $B$ in Lemma \ref{lem:Bsize}, we can drop the condition $u \to \infty$ in Corollary \ref{cor:sandwich}.
\subsection{Oscillations}
Recall $\sigma=\sigma(x,y)=1-\xi(u)/\log y$ and $\alpha=\alpha(x,y)$ are functions of $x$ and $y$. Given $y \ge 2$ and fixed $\sigma_0 \in (0,1)$, there is a unique $x$ with $\sigma(x,y)=\sigma_0$. It is determined by the relation
\[ (y^{1-\sigma_0}-1)/(1-\sigma_0) = \log x.\]
By Lemma \ref{lem:xilem} $\sigma_0 = 1-\log \log x/\log y+o(1)$ as $y \to \infty$, hence
\[ y=(\log x)^{1/(1-\sigma_0)+o(1)}.\]
Similarly, given $y\ge 2$ and fixed $\alpha_0>0$, there is a unique $x$ with $\alpha(x,y)=\alpha_0$, determined by
\[ -\frac{\zeta'(\alpha_0,y)}{\zeta(\alpha_0,y)} =\sum_{p \le y}\frac{\log p}{p^{\alpha_0}-1}= \log x.\]
We have the relation $\alpha_0 = 1-\log \log x/\log y+o(1)$ by \eqref{eq:alphasize}, so that $ y=(\log x)^{1/(1-\alpha_0)+o(1)}$.
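For orientation, let us verify the first determining relation. Assuming, as usual, that $\xi=\xi(u)$ is defined by $e^{\xi}=1+u\xi$, the condition $\sigma(x,y)=\sigma_0$ reads $\xi(u)=(1-\sigma_0)\log y$, and exponentiating gives
\[
y^{1-\sigma_0} = e^{\xi(u)} = 1 + u\,\xi(u) = 1+(1-\sigma_0)u\log y = 1+(1-\sigma_0)\log x,
\]
which is the relation $(y^{1-\sigma_0}-1)/(1-\sigma_0)=\log x$ displayed above.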
\begin{proposition}\label{prop:mainosc}
Let $\Theta\in [1/2,1]$ be the supremum of the real parts of zeros of $\zeta$. Fix $\varepsilon>0$.
\begin{enumerate}
\item Assume RH fails and fix $\sigma_0 \in (1-\Theta,\Theta)$. Given $y>2$, let $x=x(y)$ be the solution to $\sigma(x,y)= \sigma_0$. Then
\[ \Psi(x(y),y)\ll x \rho(u) \exp\left(\Omega_{-}\left( y^{\Theta-\sigma_0-\varepsilon} \right)\right).\]
\item Fix $\alpha_0 \in (0,\Theta)$. Given $y>2$, let $x=x(y)$ be the solution to $\alpha(x,y) = \alpha_0$.
Then (regardless of the truth of RH), for some $c_{\alpha_0}>0$,
\[ \Psi(x(y),y)\gg x \rho(u)\exp\left( c_{\alpha_0} y^{\max\left\{1-2\alpha_0,\frac{1}{2}-\alpha_0\right\}}/\log y \right) \exp\left(\Omega_{+}\left( y^{\Theta-\alpha_0-\varepsilon} \right)\right).\]
\end{enumerate}
\end{proposition}
\begin{proof}
Let us assume RH fails. Let us fix $\sigma_0 \in (1-\Theta,\Theta)$, and given $y$ let $x=x(y)$ be the solution to $\sigma(x,y)=\sigma_0$. We have $\Psi(x(y),y)\ll x\rho(u)G(\sigma_0,y)$ by Corollary \ref{cor:sandwich}.
We have $\log G(\sigma_0,y)= \log G_1(\sigma_0,y) + \log G_2(\sigma_0,y)$. For our fixed $\sigma_0$, Corollary \ref{cor:g2size} tells us that
\[ \log G_2(\sigma_0,y) \asymp y^{\max\left\{1-2\sigma_0, \frac{1}{2}-\sigma_0\right\}} / \log y\]
if $\sigma_0 \neq 1/2$, and that $\log G_2(\sigma_0,y) \asymp 1$ otherwise.
For $\log G_1(\sigma_0,y)$ we have $\log G_1 (\sigma_0,y) = \Omega_{\pm} (y^{\Theta-\sigma_0-\varepsilon})$
by Proposition \ref{prop:osc}. Since $y^{\Theta-\sigma_0-\varepsilon}$ dominates $\log G_2(\sigma_0,y)$ by our choice of $\sigma_0$, the first result follows.
We now fix $\alpha_0 \in (0,\Theta)$ and assume nothing about RH. We argue as before, except that now we exploit the fact that $\log G_2$ is positive.
\end{proof}
Applying the second part of the proposition with $1/(1-\alpha_0) = 2-\varepsilon$ proves Theorem \ref{thm:hildconj}. The reader who is only interested in understanding Proposition \ref{prop:mainosc} in depth can go directly to the short proofs of Corollary \ref{cor:g2size} and Proposition \ref{prop:osc} which do not require material from the rest of this section or the next one.
\subsection{\texorpdfstring{$\Psi(x,y)$}{Psi(x,y)} under a zero-free strip}
\begin{thm}\label{thm:strip}
Let $\Theta$ be the supremum of the real parts of zeros of $\zeta$ and suppose $\Theta<1$. Fix $\varepsilon>0$. If
\[x \ge y \ge (\log x)^{\frac{1}{2}\max\left\{ 3, (1-\Theta)^{-1}\right\}+\varepsilon}\]
then, as $x \to \infty$,
\begin{equation}\label{eq:before32}
\Psi(x,y)\sim x\rho(u)Z(\sigma) G(\sigma,y) \sim x\rho(u)Z(\sigma)G(\alpha,y).
\end{equation}
\end{thm}
\begin{proof}
Our starting point is Lemma \ref{lem:sharpen}. It suffices to show
\[ g(\alpha)-g(\sigma),\, f(\alpha)-f(\sigma)= o(1).\]
By Lemma \ref{lem:gdifffdiff},
\begin{align}
g(\sigma)-g(\alpha) &\asymp_{\varepsilon} (\sigma-\alpha)^2 \log x \log y,\\
f(\alpha)-f(\sigma) +o(1) &\asymp_{\varepsilon} (\sigma-\alpha)^2 \log x \log y+o(1).
\end{align}
By Lemma \ref{lem:alphadifforder},
\begin{equation}
(\log x \log y)(\sigma-\alpha) \ll_{\varepsilon} \left|G'(\alpha,y)/G(\alpha,y)\right|+ 1.
\end{equation}
By Proposition \ref{prop:logg1size},
\begin{equation}
G_1'(\alpha,y)/G_1(\alpha,y) \ll y^{\Theta-\alpha} (\log y)^2.
\end{equation}
By Lemma \ref{lem:logGderivaccurate},
\begin{equation}
G'_2(\alpha,y)/G_2(\alpha,y) \ll_{\varepsilon} \int_{\sqrt{y}}^{y} t^{-2\alpha}dt \ll y^{\max\left\{ 1-2\alpha,\frac{1}{2}-\alpha\right\}}\log y.
\end{equation}
Since $y^{1-\alpha} \ll u\log(u+1)$ by Lemma \ref{lem:ytoapower}, these estimates give the result.
\end{proof}
Suppose we knew that $\Theta \le 3/4$. Then, if $x \to \infty$ and $y \ge (\log x)^{2+\varepsilon}$, Theorem \ref{thm:strip} would tell us that
\[ \Psi(x,y) \sim x \rho(u) Z(\sigma) G(\sigma,y).\]
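The exponent in Theorem \ref{thm:strip} is easily evaluated at benchmark values of $\Theta$:
\[
\Theta=\tfrac{3}{4}:\ \ \tfrac{1}{2}\max\left\{3,(1-\tfrac{3}{4})^{-1}\right\}=\tfrac{1}{2}\max\{3,4\}=2,
\qquad
\Theta=\tfrac{1}{2}:\ \ \tfrac{1}{2}\max\{3,2\}=\tfrac{3}{2},
\]
so the hypothesis $\Theta\le 3/4$ yields the range $y\ge(\log x)^{2+\varepsilon}$, while RH (that is, $\Theta=1/2$) yields $y\ge(\log x)^{3/2+\varepsilon}$.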
\subsection{Phase transition}
Under RH, Theorem \ref{thm:strip} implies that, as $x \to \infty$,
\[ \Psi(x,y) \sim x \rho(u) Z(\sigma) G(\sigma,y)\]
for $x \ge y \ge (\log x)^{3/2+\varepsilon}$. The next theorem shows a different behavior emerges once
\[ y\asymp (\log x)^{3/2} (\log \log x)^{-1/2}.\]
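This scale is where the exponent appearing in the next theorem has order one: with $y=(\log x)^{3/2}(\log\log x)^{-1/2}$,
\[
y^2\log y = \frac{(\log x)^3}{\log\log x}\left(\frac{3}{2}\log\log x+O(\log\log\log x)\right)\asymp(\log x)^3,
\]
so $(\log x)^3/(y^2\log y)\asymp 1$ precisely at this threshold.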
\begin{thm}\label{thm:phase}
Assume RH. Fix $\varepsilon>0$. If $(1+\varepsilon)\log x\le y \le (\log x)^{2-\varepsilon}$ and $x\gg_{\varepsilon} 1$ then
\[ \frac{\Psi(x,y)}{x \rho(u)Z(\sigma)} = G(\sigma,y)\exp\big(-\Theta_{\varepsilon}\big( \frac{(\log x)^3}{y^2 \log y}\big)\big)= G(\alpha,y) \exp\big(\Theta_{\varepsilon}\big( \frac{(\log x)^3}{y^2 \log y}\big)\big).\]
\end{thm}
\begin{proof}
Our starting point is Proposition \ref{prop:first}. In the considered range, $\alpha \le 1/2-c_{\varepsilon}$ by \eqref{eq:alphasize}. According to Lemma \ref{lem:Bsize}, RH implies in our range that
\[ \log B(x,y) \ll_{\varepsilon}\frac{(\log y)^2}{\sqrt{y}}+ \frac{\log x}{y} = o\left( \frac{(\log x)^{3}}{y^2 \log y}\right).\]
It remains to study the differences $g(\alpha)-g(\sigma)$, $f(\alpha)-f(\sigma)$. By Lemma \ref{lem:gdifffdiff},
\begin{align}
g(\alpha)-g(\sigma) &\asymp_{\varepsilon} -(\sigma-\alpha)^2 \log x \log y,\\
f(\alpha)-f(\sigma)
&\asymp_{\varepsilon} (\sigma-\alpha)^2 \log x \log y + O(|\alpha-\sigma|).
\end{align}
By \eqref{eq:upperdifflogrh}, RH implies
\[ \alpha-\sigma \ll_{\varepsilon} \frac{\log (u+1)}{\sqrt{y}} + \frac{1}{\log x \log y} + \frac{\log x}{y \log y}=o\left( \frac{(\log x)^{3}}{y^2 \log y}\right)\]
in this range, so the term $O(|\alpha-\sigma|)$ is negligible. It remains to understand $(\sigma-\alpha)^2 \log x \log y$. By Lemma \ref{lem:alphadifforder},
\[ \sigma-\alpha \asymp_{\varepsilon} \big(\frac{G'}{G}(\alpha,y)+O(1)\big)/(\log x \log y).\]
By Proposition \ref{prop:logg1size}, \begin{equation}
G'_1(\alpha,y)/G_1(\alpha,y) \ll y^{\frac{1}{2}-\alpha} (\log y)^2.
\end{equation}
By Lemma \ref{lem:logGderivaccurate},
\begin{equation}
G_2'(\alpha,y)/G_2(\alpha,y) \asymp_{\varepsilon} -y^{1-2\alpha}.
\end{equation}
Since $y^{1-\alpha} \asymp u\log (u+1) \asymp \log x$ in this range by Lemma \ref{lem:ytoapower}, these estimates show $(\sigma-\alpha)^2 \log x \log y$ is of order of magnitude $(\log x)^3/(y^2 \log y)$.
\end{proof}
\subsection{Pomerance's question}\label{sec:pom}
Here we prove Theorem \ref{thm:pom}. The first part is essentially due to Saias. We claim
\begin{equation}\label{eq:lambdalower}
\Lambda(x,y) \ge x \rho(u) (1 + c\log (u+1)/\log y)
\end{equation}
holds for $(1-\varepsilon)x \ge y \ge (\log x)^{1+\varepsilon}$ if $c>0$ is sufficiently small. By \eqref{eq:LaBTT}, \eqref{eq:lambdalower} holds if $u \gg 1$. For bounded $u$ with $(1-\varepsilon)x \ge y$, we consider the contribution of $0\le v \le c/\log y$ to the integral on the right-hand side of \eqref{eq:intbyparts} to obtain \eqref{eq:lambdalower}. Now observe that the error term in Saias' estimate \eqref{eq:saiaslambdares} is smaller than $\log(u+1)/\log y$. This finishes the first part.
The second part of the theorem is just the first part of Proposition \ref{prop:mainosc}.
From now on we assume that RH holds. We have
\[ \rho(u) = \left( \frac{u\log u}{e}(1+o(1))\right)^{-u}\]
as $u \to \infty$ by \cite{debruijn19512}, which implies
$\Psi(x,y) \ge 1 > x \rho(u)$
for $y \le e\log x(1-\varepsilon)$ and $x \gg_{\varepsilon} 1$. This observation is due to Granville \cite[p. 270]{Granville2008}. So we may assume $y \ge 2\log x$. In the range $2\log x \le y \le (\log x)^{2-\varepsilon}$,
\[1/2-c_{\varepsilon}\ge \alpha,\sigma \ge c/\log y\]
by \eqref{eq:alphasize} and Lemma \ref{lem:sigmasize}. Pomerance's inequality follows from Theorem \ref{thm:phase} in this range. Indeed, the theorem shows
\[ \Psi(x,y) \ge x \rho(u) Z(\sigma)G(\alpha,y) \exp\bigg( c_{\varepsilon} \frac{(\log x)^3}{y^2 \log y}\bigg)\]
for $x \gg_{\varepsilon} 1$. All the terms to the right of $\rho(u)$ are $>1$. For the last one this is obvious. For $Z(\sigma)$ this follows by monotonicity:
\[Z(\sigma) \ge Z(1/2) > Z(1)=1.\]
For $G(\alpha,y)$, we use Proposition \ref{prop:logg1size},
\[ \log G_1(\alpha,y) \ll_{\varepsilon} y^{\frac{1}{2}-\alpha}\log y\]
and Corollary \ref{cor:g2size},
\[ \log G_2(\alpha,y) \ge c_{\varepsilon}y^{1-2\alpha}/\log y\]
to find
\[ \log G(\alpha,y) \ge c_{\varepsilon} y^{1-2\alpha}/\log y >0. \]
We now consider $x(1-\varepsilon)\ge y \ge (\log x)^{2+\varepsilon}$. By Saias' RH result \eqref{eq:saiasrh} and \eqref{eq:lambdalower},
\[ \frac{\Psi(x,y)}{x\rho(u)} = \frac{\Psi(x,y)}{\Lambda(x,y)} \frac{\Lambda(x,y)}{x\rho(u)} \ge \left(1 +c\frac{\log (u+1)}{\log y}\right)\left( 1+ O\left( \frac{\log x}{y^{1/3}}\right)\right) > 1 \]
if $x(1-\varepsilon)\ge y \ge (\log x)^4$ and $x \gg_{\varepsilon} 1$.
If $(\log x)^{2+\varepsilon} \le y \le (\log x)^4$, we use Theorem \ref{thm:strip} and the monotonicity of $Z$ to find
\[ \frac{\Psi(x,y)}{x\rho(u)} = Z(\sigma) G(\alpha,y)(1+o(1)) \ge Z(4/5) G(\alpha,y)(1+o(1))\]
as $x \to \infty$. We have $Z(4/5) > 1$, and $G(\alpha,y) \sim 1$ by Corollary \ref{cor:g2size} and \eqref{eq:rhlogg}, implying
\[ \Psi(x,y) > x \rho(u)\]
if $(\log x)^{2+\varepsilon} \le y \le (\log x)^4$ and $x \gg_{\varepsilon} 1$.
We now prove the last parts of the theorem, which deal with
\[(\log x)^{2+\varepsilon} \ge y \ge (\log x)^{2-\varepsilon}.\]
In this range, Theorem \ref{thm:strip} tells us
\[ \Psi(x,y) \sim x \rho(u) Z(\sigma) G(\sigma,y). \]
The asymptotic estimate for $\log G_2$ given in Corollary \ref{cor:g2size}, and the estimate for $\log G_1$ given in \eqref{eq:log1reals} yield
\[ \log G(\sigma,y) = \frac{1+o(1)}{2}\int_{\sqrt{y}}^{y} \frac{dt}{t^{2\sigma}\log t} - \frac{1}{\log y}\sum_{|\Im \rho|\le T}\frac{y^{\rho-\sigma}}{\rho-\sigma} + E\]
where
\[ E \ll y^{-\sigma} \left(\frac{y \log^2 (yT)}{T} + \log y+ \sum_{|\Im \rho|\le T} \left| \frac{y^{\rho}}{(\rho-\sigma)^2 \log^2 y}\right|\right)\]
for any choice of $T \ge 2$.
Here the summations are over non-trivial zeros of $\zeta$ up to height $T$. We take $T=y$. Recall $\sum_{\rho}1/|\rho|^2$ converges. It follows that
\[ \log G(\sigma,y) = \frac{1+o(1)}{2}\int_{\sqrt{y}}^{y} \frac{dt}{t^{2\sigma}\log t} - \frac{1}{\log y}\sum_{|\Im \rho|\le y}\frac{y^{\rho-\sigma}}{\rho} + O\left( \frac{y^{\frac{1}{2}-\sigma}}{\log y} \right).\]
We now recognize $\sum_{|\Im \rho|\le y} y^{\rho}/\rho$ as the error in the prime number theorem. Specifically,
\begin{equation}\label{eq:primestrunc} -\sum_{|\Im \rho|\le y}y^{\rho}/\rho = \psi(y)-y + O(\log^2 y)
\end{equation}
by the truncated explicit formula \cite[Thm. 12.5]{MV}, where $\psi$ is the Chebyshev function. Hence,
\begin{equation}\label{eq:logGpsi}
\log G(\sigma,y) = \frac{1+o(1)}{2}\int_{\sqrt{y}}^{y} \frac{dt}{t^{2\sigma}\log t} + \frac{\psi(y)-y}{y^{\sigma}\log y}+ O\bigg( \frac{y^{\frac{1}{2}-\sigma}}{\log y} \bigg).
\end{equation}
In summary, we want
\[\log Z(\sigma) + \frac{y^{\frac{1}{2}-\sigma}}{\log y} \left( \frac{\psi(y)-y}{\sqrt{y}}+ O(1)\right) +\frac{1+o(1)}{2}\int_{\sqrt{y}}^{y} \frac{dt}{t^{2\sigma}\log t}+ o(1)\]
to be non-negative.
We show that
\begin{equation}\label{eq:suff}
\liminf_{y \to \infty} \frac{\psi(y)-y}{\sqrt{y}\log y} > L
\end{equation}
is a sufficient condition, if $x \gg 1$. We consider three separate cases. If $(2\sigma-1)\log y$ tends to $\infty$ then
\[ \int_{\sqrt{y}}^{y} \frac{dt}{t^{2\sigma}\log t} \sim \frac{y^{\frac{1}{2}-\sigma}}{(\sigma-1/2)\log y}\]
by Lemma \ref{lem:intaccurate}. Thus,
\[ \log \left( \frac{\Psi(x,y)}{x\rho(u)}\right) = \log Z(\sigma)+ \frac{y^{\frac{1}{2}-\sigma}}{\log y} \frac{\psi(y)-y}{\sqrt{y}} +o(1). \]
If $x \gg 1$, \eqref{eq:suff} implies this is positive. If $(2\sigma-1)\log y$ tends to $-\infty$, a similar argument works, using Lemma \ref{lem:intaccurate} again.
The most delicate range is $(2\sigma-1)\log y= O(1)$. Here $Z(\sigma) \sim Z(1/2)$. Set
\[ \sigma= \frac{1}{2} + \frac{v}{\log y}\]
so that $v$ is bounded. We express $\log(\Psi(x,y)/(x\rho(u)))$ as a function of $y$ and $v$:
\[ \log\left( \frac{\Psi(x,y)}{x\rho(u)}\right) = \log Z(1/2) + e^{-v} \frac{\psi(y)-y}{\sqrt{y}\log y} + \frac{1}{2} \int_{v}^{2v} \frac{e^{-r}}{r}dr+ o(1).\]
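The integral here arises from the one in \eqref{eq:logGpsi} via the change of variables $t=e^{w}$ followed by $w=r\log y/(2v)$; we record the routine computation for convenience:
\[
\frac{1}{2}\int_{\sqrt{y}}^{y}\frac{dt}{t^{2\sigma}\log t}
=\frac{1}{2}\int_{\frac{1}{2}\log y}^{\log y}e^{(1-2\sigma)w}\,\frac{dw}{w}
=\frac{1}{2}\int_{v}^{2v}\frac{e^{-r}}{r}\,dr,
\]
since $(1-2\sigma)w=-2vw/\log y=-r$ and $dw/w=dr/r$.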
If \eqref{eq:suff} holds, we find by the definition of $L$ that the last expression is $\ge c$ for some positive $c$, if $y$ is sufficiently large. If instead
\begin{equation}
\liminf_{y \to \infty} \frac{\psi(y)-y}{\sqrt{y}\log y} < L
\end{equation}
then, by definition, we can find $v\in \mathbb{R}$ such that if $\sigma=1/2+v/\log y$ then
\[ \log \left(\frac{\Psi(x,y)}{x\rho(u)}\right) < -c \]
for some $c>0$, if $y$ is sufficiently large.
We record \eqref{eq:logGpsi} below.
\begin{thm}\label{thm:logsave}
Assume RH. In the range $x \ge y \ge (\log x)^{1+\varepsilon}$ we have
\begin{equation}\label{eq:logG}
\log G(\sigma,y) = \frac{1+o(1)}{2}\int_{\sqrt{y}}^{y} \frac{dt}{t^{2\sigma}\log t} + \frac{\psi(y)-y}{y^{\sigma}\log y}+ O\left( \frac{y^{\frac{1}{2}-\sigma}}{\log y} \right)
\end{equation}
as $x \to \infty$ where $\sigma=1-\xi(u)/\log y$. In particular,
\[ \Psi(x,y) \sim x\rho(u)Z(\sigma) \sim \Lambda(x,y)\]
holds when $y/(\log x \log \log x)^2 \to \infty$, and if
\begin{equation}\label{eq:beyondrh}
\psi(y)-y=o(\sqrt{y}\log y)
\end{equation}
is true then
\[ \Psi(x,y) \sim x\rho(u)Z(\sigma) \sim \Lambda(x,y)\]
holds when $y/(\log x)^2 \to \infty$ and this range is optimal.
\end{thm}
\begin{proof}
By \eqref{eq:LaBTT}, $x\rho(u) Z(\sigma) \sim \Lambda(x,y)$. By Theorem \ref{thm:strip}, in the range considered, $\Psi(x,y) \sim x \rho(u)Z(\sigma)$ is equivalent to $G(\sigma,y) \sim 1$. It remains to understand when $\log G(\sigma,y)= o(1)$ and we use \eqref{eq:logG} to do so. By definition of $\sigma$, $y^{1-\sigma} \asymp u\log(u+1)$, so
\[ \frac{y^{\frac{1}{2}-\sigma}}{\log y} \asymp \frac{u\log(u+1)}{y \log y}\]
is $o(1)$ when $y=O((\log x)^2)$. Von Koch's estimate \eqref{eq:vonkoch} implies
\[ \frac{\psi(y)-y}{y^{\sigma}\log y} \asymp \frac{u\log(u+1)}{y \log y}(\psi(y)-y) = O\left(\frac{\log x\log(u+1)}{\sqrt{y}}\right)\]
is $o(1)$ if $y/(\log x \log \log x)^2 \to \infty$, and \eqref{eq:beyondrh} implies
\[ \frac{\psi(y)-y}{y^{\sigma}\log y} =o(1)\]
if $y/(\log x)^2 \to \infty$.
Using $y^{1-\sigma}\asymp u\log(u+1)$ and Corollary \ref{cor:g2size}, we see that if $y/(\log x)^2 = \Theta(1)$ then $\sigma = 1/2 + O(1/\log y)$ and the integral in \eqref{eq:logG} is $\Theta(1)$, while if $y/(\log x)^2$ tends to $\infty$ then $(\sigma-1/2)\log y \to \infty$ and the integral goes to $0$.
\end{proof}
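As a consistency check for the threshold $y\asymp(\log x)^2$: in the polylogarithmic range $\log y\asymp\log\log x$ one has $u\log(u+1)\asymp\log x$, so $y^{1-\sigma}\asymp\log x$ and
\[
\left(\sigma-\tfrac{1}{2}\right)\log y=\log\frac{\sqrt{y}}{y^{1-\sigma}}=\log\frac{\sqrt{y}}{\log x}+O(1).
\]
Thus $\sigma-1/2$ changes sign precisely when $y$ passes $(\log x)^2$, in accordance with the fact that the integral in \eqref{eq:logG} has order $1$ exactly in the regime $y\asymp(\log x)^2$.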
\subsection{Inequality}
In \cite[Thm. III.5.21]{Tenenbaum2015}, Hildebrand and Tenenbaum showed
\begin{equation}\label{eq:HTineq}
\log \left(\frac{\Psi(x,y)}{x}\right) = \log \rho(u) \left(1 + O_{\varepsilon}\left( \exp(-(\log y)^{\frac{3}{5}-\varepsilon}) \right)\right)+O_{\varepsilon}\left( \frac{\log (u+1)}{\log y} \right)
\end{equation}
holds for $y \ge (\log x)^{1+\varepsilon}$. We offer an improvement in terms of range and error, which also shows \eqref{eq:HTineq} does not hold for $y \asymp \log x$.
\begin{thm}\label{thm:ineq}
Fix $\varepsilon>0$. Uniformly for $x \ge y \ge (1+\varepsilon)\log x$,
\begin{equation}\label{eq:intermsofG12}
\log \left(\frac{\Psi(x,y)}{xZ(\sigma)}\right) = \log \rho(u) \big(1+ O_{\varepsilon}\big( L(y)^{-c}\big)\big) +O_{\varepsilon}\left(\frac{1}{\alpha \log x}\right)+\log G_2(\sigma,y).
\end{equation}
If $y \ge \log x \cdot L(\log x)^C$ the term $\log G_2(\sigma,y)$ is absorbed in the existing errors. Otherwise, it contributes
\begin{align}
\log G_2(\sigma,y)&\asymp_{\varepsilon } (\log x)^2/(y\log y).
\end{align}
\end{thm}
\begin{proof}
Taking logs in Lemma \ref{lem:sharpen} we see
\begin{align}
\log \big( \frac{\Psi(x,y)}{x \rho(u)Z(\sigma)}\big) &= \log G(\sigma,y)+g(\alpha)-g(\sigma)+ \log B(x,y)+ O\left( \frac{1}{\alpha \log x}\right).
\end{align}
Recall $G = G_1 G_2$. By Proposition \ref{prop:logg1size},
\[\log G_1(\sigma,y) \ll y^{1-\sigma} L(y)^{-c}.\]
We have $y^{1-\sigma} \asymp u \log (u+1)$ which implies that
\[\log G_1(\sigma,y) \ll (-\log \rho(u)) L(y)^{-c}.\]
By Lemma \ref{lem:Bsize}, $\log B(x,y)$ can be absorbed in the error term $O(1/(\alpha \log x))$ and in the bound for $\log G_1(\sigma,y)$.
We have the estimate $g(\sigma)-g(\alpha)\ll_{\varepsilon} (\sigma-\alpha)^2 \log x \log y$ by Lemma \ref{lem:gdifffdiff} and the size of $\alpha-\sigma$ is studied in Lemma \ref{lem:alphadifforder}. We see that $g(\sigma)-g(\alpha)$ is also absorbed in the existing error terms. The term $\log G_2(\sigma,y)$ is studied in Corollary \ref{cor:g2size}, and the error term there is simplified using the definition of $\sigma$.
\end{proof}
De Bruijn found the asymptotics for $\log \Psi(x,y)$ uniformly for $x \ge y \ge 2$ \cite{debruijn1966}. In the range $y \ge (1+\varepsilon)\log x$, Theorem \ref{thm:ineq} strengthens his result when combined with the asymptotics for $\log G_2(\sigma,y)$ given in Corollary \ref{cor:g2size} and Lemma \ref{lem:logg2asymp}.
\section{Study of the main formula}\label{sec:firststudy}
Recall $\alpha$ and $\sigma$ were defined in Theorem \ref{thm:htsaddle} and Corollary \ref{cor:xrhozF}.
\begin{lem}\label{lem:ytoapower}
We have $y^{1-\sigma}\asymp u\log(u+1)$ uniformly for $x \ge y \ge 2$. Uniformly for $x \ge y > \log x$ we have $\sigma - \alpha = O(1/\log y)$ and so $y^{1-\alpha} \asymp u\log(u+1)$ as well.
\end{lem}
\begin{proof}
The first part follows from the definitions of $\sigma$ and $\xi$ and from Lemma \ref{lem:xilem}. The second part follows from \cite[Eq. (3.5)]{HildebrandTenenbaum1986}.
\end{proof}
\begin{lem}\label{lem:gfdouble}
Fix $2 \le k \le 5$. Let $I$ be the interval with endpoints $\alpha$ and $\sigma=1-\xi(u)/\log y$. Fix $\varepsilon>0$. Suppose $x \ge y \ge (1+\varepsilon)\log x$ and $x \gg_{\varepsilon} 1$. Uniformly for $t \in I$ we have
\begin{align}
f^{(k)}(t),\, g^{(k)}(t) &\asymp_{\varepsilon} (-1)^{k} \log x (\log y)^{k-1}.
\end{align}
Additionally, $f^{(2)}(t)$ and $g^{(2)}(t)$ are positive for $t\in (0,1]$.
\end{lem}
\begin{proof}
As shown in \cite[Lem. 4]{HildebrandTenenbaum1986},
\[ g^{(k)}(t) = (-1)^k\sum_{p \le y}(\log p)(p^{t}-1)^{-k}Q_{k-1}(p^{t}\log p)\]
for a polynomial $Q_{k-1}$ of degree $k-1$ and non-negative coefficients, so $(-1)^k g^{(k)}(t)$ is positive and monotone for $t>0$. Moreover, by the same lemma,
\[ g^{(k)}(\alpha)\asymp (-1)^k\log x (\log y)^{k-1}\]
uniformly for $x \ge y \ge \log x$.
It remains to show $g^{(k)}(\sigma)$ is also of order $(-1)^k\log x (\log y)^{k-1}$. Since $\sigma \gg_{\varepsilon} 1/\log y$, the same lemma applies as stated and shows that
\begin{equation}\label{eq:gk}
(\log y)^{k-1} \frac{y^{1-\sigma}-1}{1-\sigma} \ll_{\varepsilon} (-1)^{k} g^{(k)}(\sigma) \ll_{\varepsilon} (\log y)^{k-1}\sum_{p \le y} \frac{\log p}{p^{\sigma}-1}.
\end{equation}
The lower bound is, by definition of $\sigma$, $\log x (\log y)^{k-1}$. The sum in the upper bound is estimated in \cite[p. 552]{Tenenbaum2015} as
\[ \sum_{p \le y} \frac{\log p}{p^{\sigma}-1} \ll \frac{1}{1-y^{-\sigma}} \int_{1}^{y} t^{-\sigma}dt+O(1) = \frac{\log x}{1-y^{-\sigma}}+O(1) \ll_{\varepsilon} \log x\]
by definition of $\sigma$ and Lemma \ref{lem:sigmasize}. This finishes the bounds needed for $g$. For $f$,
\begin{align} f^{(k)}(t) &= (\log (\zeta(t)(t-1)))^{(k)}+(-\log y)^k I^{(k)}((1-t)\log y).
\end{align}
The function $I^{(k)}(r)=\sum_{i \ge 0} r^i/(i!(i+k))$ is monotone increasing for $r \ge 0$ and $I^{(k)}(0)=1/k$.
The expression $(\log (\zeta(t)(t-1)))^{(k)}$ is $O(1)$ for $t \in [0,1]$, and a computer calculation shows that for $k=2$ it is in $[-0.4,-0.1]$. This already shows $f^{(2)}(t)>0$. To obtain the order of magnitude for $(-1)^k f^{(k)}(t)$, observe $I^{(k)}(v) \asymp_k e^v/(v+1)$ and $e^{v}/(v+1) \asymp u$ as long as $v = \xi(u) + O(1)$, so we want to show
$(1-t)\log y = \xi(u)+O(1)$
for $t=\sigma$ and $t=\alpha$. For $t=\sigma$ it is trivial while for $t=\alpha$ it follows from Lemma \ref{lem:ytoapower}.
\end{proof}
We use Lemma \ref{lem:gfdouble} to study $\sigma-\alpha$, unconditionally and conditionally. Our estimates will benefit from introducing
\begin{equation}\label{eq:defH}
H(y,\alpha):=\frac{y^{1-2\alpha}-y^{\frac{1}{2}-\alpha}}{(1-2\alpha) \log y} >0
\end{equation}
which at $\alpha=1/2$ is defined by continuity, as the limit of the right-hand side. For $\alpha \le 1/2-\varepsilon$,
\[H(y,\alpha)\ll_{\varepsilon} y^{1-2\alpha}/\log y \ll_{\varepsilon} (\log x)^2/(y \log y) \]
by Lemma \ref{lem:ytoapower}.
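The displayed bound unwinds as follows; here we use $y^{1-\alpha}\asymp u\log(u+1)$ from Lemma \ref{lem:ytoapower} together with the bound $u\log(u+1)\ll_{\varepsilon}\log x$, valid since $\log(u+1)\ll\log y$ when $y\ge(1+\varepsilon)\log x$:
\[
y^{1-2\alpha}=\frac{\left(y^{1-\alpha}\right)^{2}}{y}\asymp\frac{(u\log(u+1))^{2}}{y}\ll_{\varepsilon}\frac{(\log x)^{2}}{y}.
\]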
\begin{lem}\label{lem:alphadifforder}
Fix $\varepsilon>0$. Suppose $x \ge y \ge (1+\varepsilon)\log x$ and $x \gg_{\varepsilon} 1$.
We have
\begin{align}\label{eq:alphadifforder}
\sigma-\alpha &\asymp_{\varepsilon} \frac{\frac{G'}{G}(\alpha,y)+C_{\sigma}}{\log x \log y},\\
\label{eq:Csigmadef}C_{\sigma}&:=(\log( \zeta(\sigma)(\sigma-1)))' =\Theta(1).
\end{align}
Unconditionally,
\begin{align}\label{eq:upperdifflog}
\sigma-\alpha &\asymp_{\varepsilon} \frac{H(y,\alpha)}{\log x} + O\left(\frac{1}{\log x \log y} +L(y)^{-c}\right)\\
& \ll_{\varepsilon} \frac{\log x}{y \log y} + \frac{1}{\log x \log y} +L(y)^{-c}.
\end{align}
Under RH,
\begin{equation}\label{eq:upperdifflogrh}
\sigma-\alpha \ll_{\varepsilon} \frac{\log(u+1)}{\sqrt{y}} + \frac{1}{\log x \log y} + \frac{H(y,\alpha)}{\log x}.
\end{equation}
\end{lem}
\begin{proof}
We have
\begin{equation} -\frac{\zeta'(\alpha,y)}{\zeta(\alpha,y)} = \log x,\qquad
-\frac{F'(\sigma,y)}{F(\sigma,y)} = \log x - C_{\sigma}.
\end{equation}
Writing $\zeta(s,y)$ as $F(s,y)$ times $G(s,y)$ we find that
\begin{equation}\label{eq:logderdiff} -\frac{F'(\alpha,y)}{F(\alpha,y)}+\frac{F'(\sigma,y)}{F(\sigma,y)}= \log x + \frac{G'(\alpha,y)}{G(\alpha,y)} -(\log x - C_{\sigma}) = \frac{G'(\alpha,y)}{G(\alpha,y)} + C_{\sigma}.
\end{equation}
By the mean value theorem, for some $t$ between $\alpha$ and $\sigma$ we have
\begin{equation}\label{eq:alphasigmamvt} -\frac{F'(\alpha,y)}{F(\alpha,y)}+\frac{F'(\sigma,y)}{F(\sigma,y)}= -(\alpha-\sigma) \left(\frac{F'}{F}\right)'(t).
\end{equation}
We have $(F'/F)'=f''$, and by Lemma \ref{lem:gfdouble}, $f''(t) \asymp_{\varepsilon} \log x \log y$. To conclude \eqref{eq:alphadifforder}, we compare \eqref{eq:logderdiff} and \eqref{eq:alphasigmamvt}.
We now show \eqref{eq:upperdifflog}. By \eqref{eq:alphasize} and Lemma \ref{lem:sigmasize}, $\sigma,\alpha \gg_{\varepsilon} 1/\log y$. By \eqref{eq:zerofree} and Lemma \ref{lem:logGderivaccurate},
\begin{align}
\frac{G'_1}{G_1}(\alpha,y) &\ll_{\varepsilon} y^{1-\alpha} L(y)^{-c},\\
\frac{G'_2}{G_2}(\alpha,y) &\asymp_{\varepsilon} -\frac{y^{1-2\alpha}-y^{\frac{1}{2}-\alpha}}{1-2\alpha}.
\end{align}
Hence, by \eqref{eq:alphadifforder},
\[ \sigma-\alpha \asymp_{\varepsilon} -\frac{\frac{y^{1-2\alpha}-y^{\frac{1}{2}-\alpha}}{1-2\alpha}+O\big( 1 + y^{1-\alpha} L(y)^{-c}\big)}{\log x\log y}.\]
By Lemma \ref{lem:ytoapower} this implies \eqref{eq:upperdifflog}. If RH holds, we use $(G'_1/G_1)(\alpha,y) \ll_{\varepsilon} y^{1/2-\alpha}(\log y)^2$ which is shown in \eqref{eq:rhlogg}.
\end{proof}
We use Lemma \ref{lem:alphadifforder} to improve on Lemma \ref{lem:gfdouble}.
\begin{lem}\label{lem:gfdoublebetter}
Fix $\varepsilon>0$. Suppose $x \ge y \ge (1+\varepsilon)\log x$ and $x \gg_{\varepsilon} 1$. Let $I$ be the interval with endpoints $\sigma=1-\xi(u)/\log y$ and $\alpha$. Then, uniformly for $t \in I$,
\begin{align}
g''(t) &=g''(\alpha) \left(1+O_{\varepsilon}\left( L(y)^{-c}+\frac{1}{\log x} + \frac{\log x}{y}\right)\right),\\
f''(t) &=f''(\sigma) \left(1+O_{\varepsilon}\left( L(y)^{-c}+\frac{1}{\log x} + \frac{\log x}{y}\right)\right).
\end{align}
\end{lem}
\begin{proof}
For any $t \in I$,
\[ g''(t) = g''(\alpha) + (t-\alpha)g^{(3)}(t_2)\]
for some $t_2\in I$. The estimates for $g''$ and $g^{(3)}$ in Lemma \ref{lem:gfdouble} imply
\[ g''(t) = g''(\alpha)\left( 1+ O_{\varepsilon}(|\alpha-\sigma|\log y)\right).\]
Plugging \eqref{eq:upperdifflog} here concludes the estimate for $g''$. As for $f''$, we write
\[ f''(t) = f''(\sigma) + (t-\sigma)f^{(3)}(t_3)\]
for some $t_3\in I$ and argue as before.
\end{proof}
We use Lemma \ref{lem:gfdoublebetter} to improve on Lemma \ref{lem:alphadifforder}.
\begin{cor}\label{cor:alphadifforder}
Fix $\varepsilon>0$. Suppose $x \ge y \ge (1+\varepsilon)\log x$ and $x \gg_{\varepsilon} 1$.
We have
\begin{equation}\label{eq:sigmaalphaequality}
\sigma-\alpha = \frac{\frac{G'}{G}(\alpha,y)+C_{\sigma}}{f''(\sigma)}\big(1 + O_{\varepsilon}\big( L(y)^{-c}+\frac{1}{\log x} + \frac{\log x}{y}\big)\big)
\end{equation}
where $C_{\sigma}$ is as defined in \eqref{eq:Csigmadef}. We have
\begin{align}\label{eq:gprimeg}
\frac{G'}{G}(\alpha,y) &= - H(y,\alpha)\log y \left( 1+O_{\varepsilon}\left( L(y)^{-c}+y^{-\alpha}\right)\right)\\&\qquad +O_{\varepsilon}\left( u\log(u+1) L(y)^{-c}\right).
\end{align}
Under RH, the right-most error in \eqref{eq:gprimeg} can be replaced with $y^{1/2-\alpha}(\log y)^2$.
\end{cor}
\begin{proof}
The equality \eqref{eq:sigmaalphaequality} follows by repeating the proof of Lemma \ref{lem:alphadifforder} and inputting the bounds for $f''$ given in Lemma \ref{lem:gfdoublebetter}.
The estimates for $G'/G$ follow from \eqref{eq:rhlogg} and Lemma \ref{lem:logGderivaccurate}.
\end{proof}
\begin{lem}\label{lem:gdoubfdoub}
Fix $\varepsilon>0$. Suppose $x \ge y \ge (1+\varepsilon)\log x$ and $x \gg_{\varepsilon} 1$. We have
\begin{equation}\label{eq:gfdiffs}
g^{(k)}(\alpha) -f^{(k)}(\sigma) \ll_{\varepsilon} |(\log G)^{(k)}(\alpha,y)|+|\alpha-\sigma|\log x (\log y)^k
\end{equation}
for $2 \le k \le 4$. In particular,
for $2 \le k \le 4$. In particular,
\begin{equation}\label{eq:gprimeg2}
\frac{g^{(k)}(\alpha) -f^{(k)}(\sigma)}{\log x (\log y)^{k-1}} \ll_{\varepsilon} L(y)^{-c} + \frac{1}{\log x} + \frac{\log x}{y}.
\end{equation}
Under RH,
\begin{equation}\label{eq:gprimeg2rh}
\frac{g^{(k)}(\alpha) -f^{(k)}(\sigma)}{\log x (\log y)^{k-1}} \ll_{\varepsilon} \frac{\log y \log(u+1)}{\sqrt{y}} + \frac{1}{\log x} + \frac{H(y,\alpha)}{u}.
\end{equation}
\end{lem}
\begin{proof}
We write
\[g^{(k)}(\alpha) = (f^{(k)}(\alpha)- f^{(k)}(\sigma)) + f^{(k)}(\sigma) + \left(\log G\right)^{(k)}(\alpha,y)\]
and replace $f^{(k)}(\alpha)-f^{(k)}(\sigma)$ by $(\alpha-\sigma)f^{(k+1)}(t)$ for $t$ between $\alpha$ and $\sigma$. Lemma \ref{lem:gfdouble} bounds $f^{(k+1)}(t)$ by $\ll_{\varepsilon} \log x(\log y)^{k}$. This yields \eqref{eq:gfdiffs}. To deduce \eqref{eq:gprimeg2} from \eqref{eq:gfdiffs} we estimate $\alpha-\sigma$ using Lemma \ref{lem:alphadifforder} and $(\log G)^{(k)}$ using \eqref{eq:zerofree}, \eqref{eq:rhlogg} and Lemma \ref{lem:logGderivaccurate}. We simplify the resulting bounds using $y^{1-\alpha} \ll u\log (u+1)$ from Lemma \ref{lem:ytoapower}.
\end{proof}
Applying Lemma \ref{lem:gdoubfdoub} with $k=2$, we find
\[ g''(\alpha) = \log^2 y I''(\xi) \big( 1+ O_{\varepsilon} \big( L(y)^{-c}+\frac{1}{\log x} + \frac{\log x}{y}\big)\big)\]
uniformly for $x \ge y \ge (1+\varepsilon)\log x$, which improves on \eqref{eq:phi2} if $y \gg \log x \log \log x$.
We now prove asymptotics for $B(x,y)$, $g(\sigma)-g(\alpha)$ and $f(\sigma)-f(\alpha)$.
\begin{lem}\label{lem:gdifffdiff}
Fix $\varepsilon>0$. If $x \ge y \ge (1+\varepsilon)\log x$ and $x \gg_{\varepsilon} 1$ then
\begin{align}
g(\sigma)-g(\alpha) &\asymp_{\varepsilon} (\sigma-\alpha)^2 \log x \log y,\\
f(\alpha)-f(\sigma) +o(1) &\asymp_{\varepsilon} (\sigma-\alpha)^2 \log x \log y+o(1)
\end{align}
where the $o(1)$ term is $(\log (\zeta(\sigma)(\sigma-1)))'(\alpha-\sigma)$. More accurately,
\begin{align}
g(\sigma)-g(\alpha) &=g''(\alpha)(\sigma-\alpha)^2 \left(1 + E_1\right)/2,\\
f(\alpha)-f(\sigma) &=(\log (\zeta(\sigma)(\sigma-1)))'(\alpha-\sigma)+f''(\sigma)(\alpha-\sigma)^2\left(1 + E_2\right)/2,
\end{align}
where
\begin{equation}
E_1,\,E_2 \ll L(y)^{-c}+ \frac{1}{\log x} + \frac{\log x}{y}
\end{equation}
and, under RH,
\begin{equation}
E_1,\,E_2 \ll\frac{\log y \log(u+1)}{\sqrt{y}}+ \frac{1}{\log x} + \frac{H(y,\alpha)}{u}
\end{equation}
where $H(y,\alpha)$ was defined in \eqref{eq:defH}.
\end{lem}
\begin{proof}
Approximating $g$ at $\alpha$ using a linear Taylor polynomial shows
\[ g(\sigma) = g(\alpha) + g'(\alpha)(\sigma-\alpha) +g''(t)(\sigma-\alpha)^2/2\]
for some $t$ between $\sigma$ and $\alpha$. By definition, $g'(\alpha)=0$, and $g''(t) \asymp_{\varepsilon} \log x \log y$ is shown in Lemma \ref{lem:gfdouble}. For a more accurate result, we use a quadratic approximation:
\[ g(\sigma) - g(\alpha)= \frac{g''(\alpha)(\sigma-\alpha)^2}{2} \bigg(1+O\bigg(\frac{g^{(3)}(t)|\sigma-\alpha|}{\log x \log y}\bigg)\bigg) \]
for some $t$ between $\sigma$ and $\alpha$. By Lemma \ref{lem:gfdouble}, $g^{(3)}(t)\ll_{\varepsilon} \log x (\log y)^2$ and we can bound $|\sigma-\alpha|$ using the estimates in \eqref{eq:upperdifflog} and \eqref{eq:upperdifflogrh}.
The same argument works for $f(\alpha)-f(\sigma)$ by Taylor-expanding $f$ at $\sigma$ and using $f'(\sigma)=(\log (\zeta(\sigma)(\sigma-1)))'$. Since $\alpha-\sigma$ goes to $0$, the term $(\alpha-\sigma)f'(\sigma)$ contributes $o(1)$ to $f(\alpha)-f(\sigma)$.
\end{proof}
Let
\[ h(t):= \frac{\log t}{\sqrt{1+t^{-1}}\log (1+t)}.\]
\begin{lem}\label{lem:Bsize}
If $y/\log x \to \infty$ then $B(x,y) \sim 1$. More precisely, if $x \ge y \ge (1+\varepsilon) \log x$ and $x \gg_{\varepsilon} 1$ then
\begin{equation}
B(x,y) = 1 +O_{\varepsilon}\left( L(y)^{-c}+ \frac{1}{\log x}+\frac{\log x}{y} \right).
\end{equation}
Recall $H(y,\alpha)$ was defined in \eqref{eq:defH}. Under RH,
\begin{equation}
B(x,y) = 1 +O_{\varepsilon}\left( \frac{\log y \log(u+1)}{\sqrt{y}}+\frac{1}{\log x} + \frac{H(y,\alpha)}{u}\right).
\end{equation}
If $y/\log x \to t \in (1,\infty)$ then $B(x,y) \sim h(t)$. More precisely, if $x \ge y \ge (1+\varepsilon) \log x$ and $x \gg_{\varepsilon} 1$ then
\[ B(x,y) = h\left(\frac{y}{\log x}\right) \left( 1 + O_{\varepsilon}\left( \frac{1}{\log(1+u)} + \frac{\log \log y}{\log y}\right)\right).\]
\end{lem}
\begin{proof}
We first study $\sigma/\alpha$. Writing this ratio as $1+ (\sigma-\alpha)/\alpha$, and inputting the estimate for $\alpha$ from \eqref{eq:alphasize} together with the estimates for $\sigma-\alpha$ given in Lemma \ref{lem:alphadifforder}, we obtain
\begin{equation}
\frac{\sigma}{\alpha}= 1 + O_{\varepsilon}\left( L(y)^{-c}+\frac{1}{\log x} + \frac{\log x}{y}\right)
\end{equation}
unconditionally and
\begin{equation}
\frac{\sigma}{\alpha}= 1 + O_{\varepsilon}\left( \frac{\log y\log (u+1)}{\sqrt{y}}+\frac{1}{\log x} + \frac{H(y,\alpha)}{u}\right)
\end{equation}
under RH. We may also simply divide the expressions for $\alpha$ and $\sigma$ given in \eqref{eq:alphasize} and in Lemma \ref{lem:sigmasize} to deduce an alternative estimate:
\[ \frac{\sigma}{\alpha} = \frac{\log \left( \frac{y}{\log x}\right)}{\log \left( 1+ \frac{y}{\log x}\right)} \left(1+O_{\varepsilon}\left(\frac{\log \log y}{\log y}\right)\right)\]
when $y \ge (1+\varepsilon)\log x$. We turn to $ (I''(\xi)(\log y)^2)/\phi_2(\alpha,y)$. This ratio can also be written as
\begin{equation}\label{eq:varbyvar}
\frac{I''(\xi)(\log y)^2}{\phi_2(\alpha,y)} = \frac{f''(\sigma) + O(1)}{g''(\alpha)}=1 +\frac{f''(\sigma)-g''(\alpha)+O(1)}{g''(\alpha)}.
\end{equation}
The denominator is $\asymp_{\varepsilon} \log x \log y$ according to Lemma \ref{lem:gfdouble}, and the numerator is estimated in Lemma \ref{lem:gdoubfdoub}, giving
\begin{equation}\label{eq:varbyvar2}
\frac{I''(\xi)(\log y)^2}{\phi_2(\alpha,y)} =1 +O_{\varepsilon}\left( L(y)^{-c}+ \frac{1}{\log x} + \frac{\log x}{y}\right)
\end{equation}
unconditionally and
\begin{equation}\label{eq:varbyvar3}
\frac{I''(\xi)(\log y)^2}{\phi_2(\alpha,y)} =1 +O_{\varepsilon}\left( \frac{\log y \log(u+1)}{\sqrt{y}}+ \frac{1}{\log x} + \frac{H(y,\alpha)}{u}\right)
\end{equation}
under RH. We can get an alternative estimate for $(I''(\xi)(\log y)^2)/\phi_2(\alpha,y)$ using \eqref{eq:phi2} for the denominator and
\[I''(\xi) = \xi'(u)^{-1}=u\left( 1 + O\left( (\log (1+u))^{-1}\right)\right)\]
for the numerator, see Lemmas \ref{lem:Ippxi} and \ref{lem:xilem}. Hence
\begin{equation}
\frac{I''(\xi)(\log y)^2}{\phi_2(\alpha,y)} = \bigg(1+ \frac{\log x}{y}\bigg)^{-1} \left(1+O\left((\log(1+u))^{-1}\right)\right).
\end{equation}
Combining the estimates for $\sigma/\alpha$ and $(I''(\xi)(\log y)^2)/\phi_2(\alpha,y)$ finishes the proof.
\end{proof}
\begin{lem}[Sharp formula]\label{lem:sharpen}
Suppose $y > 1+\log x$. The error term in Proposition \ref{prop:first} can be taken to be $O(1/(\alpha \log x))$.
\end{lem}
\begin{proof}
The range $2 \log x \ge y \ge \log x$ is already covered by Proposition \ref{prop:first}, because $\alpha \asymp 1/\log y$ when $y \asymp \log x$. We assume $y > 2\log x$ from now on.
Next we consider $\log y > \sqrt{\log x}$. By Lemmas \ref{lem:gdifffdiff} and \ref{lem:alphadifforder},
\[ g(\alpha)-g(\sigma), f(\alpha)-f(\sigma) \ll 1/\log x.\]
By \eqref{eq:zerofree} and Corollary \ref{cor:g2size},
\[\log G(\sigma,y), \log G(\alpha,y) \ll 1/\log x.\]
We conclude by appealing to \eqref{eq:LaBTT}. If $\log y \le \sqrt{\log x}$ and $y \ge 2\log x$, we make use of the Main Theorem of Saha, Sankaranarayanan and Suzuki \cite{Saha}, which in the current range gives
\[ \Psi(x,y) = \frac{x^{\alpha}\zeta(\alpha,y)}{\alpha\sqrt{2\pi \phi_2(\alpha,y)}} \left(1+ \frac{g^{(4)}(\alpha)}{8g^{(2)}(\alpha)^2}-\frac{5g^{(3)}(\alpha)^2}{24 g^{(2)}(\alpha)^3}+O\left( \frac{1}{\alpha \log x}\right)\right),\]
which strengthens Theorem \ref{thm:htsaddle}. We also use Smida's result \cite[Thm. 1]{Smida}
\[ \rho(u) = \frac{e^{\gamma-u\xi+I(\xi)}}{\sqrt{2\pi I''(\xi(u))} }\left(1+ \frac{\tilde{f}^{(4)}(\sigma)}{8\tilde{f}^{(2)}(\sigma)^2} - \frac{5\tilde{f}^{(3)}(\sigma)^2}{24\tilde{f}^{(2)}(\sigma)^3} + O(u^{-2})\right)\]
which strengthens Theorem \ref{thm:rho size}. Here $\tilde{f}(t)$ is as in Remark \ref{rem:var}, so $\tilde{f}^{(k)}(t)-f^{(k)}(t)=O(1)$ for $k=2,3,4$. We divide these two estimates to get the formulas in Proposition \ref{prop:first}, with the term $1+O(1/u)$ replaced with
\[1+ \frac{1}{8}\left( \frac{g^{(4)}(\alpha)}{g^{(2)}(\alpha)^2} - \frac{f^{(4)}(\sigma)}{f^{(2)}(\sigma)^2}\right) -\frac{5}{24} \left( \frac{g^{(3)}(\alpha)^2}{ g^{(2)}(\alpha)^3} -\frac{f^{(3)}(\sigma)^2}{ f^{(2)}(\sigma)^3}\right) +O\left(\frac{1}{\alpha \log x}\right).\]
This is estimated in Lemma \ref{lem:gdoubfdoub}.
\end{proof}
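To get a concrete feel for the quantities compared above, the following Python sketch (illustrative only, not part of the paper; the function names are ours) tabulates Dickman's $\rho$ from the equivalent integral equation $\rho(u)=\frac{1}{u}\int_{u-1}^{u}\rho(t)\,dt$ (with $\rho\equiv 1$ on $[0,1]$) and counts $y$-smooth integers by brute force.

```python
import math

def dickman_rho(u, n=1000):
    """Dickman's rho via the integral equation rho(u) = (1/u)*int_{u-1}^{u} rho,
    with rho = 1 on [0,1]; trapezoid rule on a grid of step 1/n."""
    if u <= 1:
        return 1.0
    h = 1.0 / n
    m = int(round(u * n))
    rho = [1.0] * (n + 1)               # rho on [0, 1]
    I = [k * h for k in range(n + 1)]   # I[k] = int_0^{k h} rho
    for k in range(n + 1, m + 1):
        uk = k * h
        # rho[k] = (I[k] - I[k-n]) / uk with I[k] = I[k-1] + (h/2)(rho[k-1]+rho[k]);
        # solve this linear equation for rho[k]
        rhs = (I[-1] + 0.5 * h * rho[-1] - I[k - n]) / uk
        r = rhs / (1.0 - 0.5 * h / uk)
        rho.append(r)
        I.append(I[-1] + 0.5 * h * (rho[-2] + r))
    return rho[m]

def smooth_count(x, y):
    """Brute-force Psi(x, y): number of n <= x whose largest prime factor is <= y."""
    largest = [0] * (x + 1)
    for p in range(2, x + 1):
        if largest[p] == 0:             # p is prime
            for q in range(p, x + 1, p):
                largest[q] = p
    return sum(1 for n in range(1, x + 1) if largest[n] <= y)
```

For instance $\rho(2)=1-\log 2$, and $\Psi(x,\sqrt{x})/x \to \rho(2)$, although the convergence is slow at computationally accessible sizes.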
\section{Study of \texorpdfstring{$G$}{G}}\label{sec:Gstudy}
\subsection{Formulas and bounds for \texorpdfstring{$G_1$}{G1}}
Given $x>0$, $s \in \mathbb{C}$, let
\[ S_1(x,s) := {\sum_{n \le x}}' \Lambda(n)/n^s,\]
where the prime on the summation indicates that if $x$ is a prime power, the last term of the sum should be multiplied by $1/2$. Landau \cite[p. 353]{Landau} established an explicit formula for $S_1(x,s)$ in terms of zeros of $\zeta$. We need a truncated version of it, which we give below and is surely known to experts. It can be proved e.g.\ by adapting \cite[Thm. 12.5]{MV} and such a proof is given in the appendix. Throughout, $\langle x \rangle$ is the distance of $x$ to the nearest prime power not equal to $x$.
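As a quick numerical sanity check of this definition (our own illustrative Python, with hypothetical helper names): at $s=0$ the sum $S_1(x,0)$ is Chebyshev's function $\psi(x)$, which is asymptotic to $x$ by the prime number theorem, and for $s \in (0,1)$ the main term of the explicit formula below predicts $S_1(x,s) \approx x^{1-s}/(1-s)$.

```python
import math

def mangoldt_table(x):
    """Von Mangoldt Lambda(n) for n <= x, via a smallest-prime-factor sieve."""
    spf = list(range(x + 1))
    for p in range(2, int(x**0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, x + 1, p):
                if spf[q] == q:
                    spf[q] = p
    lam = [0.0] * (x + 1)
    for n in range(2, x + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:                      # n = p^k is a prime power
            lam[n] = math.log(p)
    return lam

def S1(x, s):
    """S_1(x, s) = sum'_{n <= x} Lambda(n)/n^s; the prime on the sum means the
    n = x term is halved when x is a prime power."""
    lam = mangoldt_table(x)
    total = sum(lam[n] / n**s for n in range(2, x + 1))
    if lam[x] > 0:
        total -= 0.5 * lam[x] / x**s
    return total
```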
\begin{lem}\label{lem:truncatedpowers}
Suppose $\Re s\ge 0$. Uniformly for $x \ge 4$ and $T \ge 2+3|\Im s|$ we have
\begin{equation}\label{eq:S1}
S_1(x,s) = \frac{x^{1-s}}{1-s} - \frac{\zeta'(s)}{\zeta(s)}- \sum_{\substack{ \rho\\ |\Im (\rho +s)| \le T}} \frac{x^{\rho-s}}{\rho-s} + \sum_{k=1}^{\infty} \frac{x^{-2k-s}}{2k+s} +R_1(x,T,s)
\end{equation}
where the sum is over non-trivial zeros of $\zeta$ and
\begin{equation}\label{eq:R}
R_1(x,T,s) \ll (\log x)(x-1)^{- \Re s}\min\left\{1,\frac{x}{T\langle x \rangle}\right\}+ \frac{\log^2 (xT)}{T} \big( 2^{\Re s}x^{1-\Re s} + \frac{2^{-\Re s}}{\log x}\big).
\end{equation}
If $s=1$, the `main term' $x^{1-s}/(1-s) - \zeta'(s)/\zeta(s)$ should be interpreted as its limit at $s=1$.
\end{lem}
Let
\[ S_2(x,s) := {\sum_{n \le x}}' \Lambda(n)/(n^s \log n).\]
\begin{cor}\label{cor:S2}
Let $s \in [0,1]$. Uniformly for $x \ge 4$ and $T \ge 2$ we have
\begin{align}
S_2(x,s) &= I((1-s)\log x)+\gamma+\log \log x + \log(\zeta(s)(s-1)) \\
& \quad-\sum_{\substack{\rho \\ |\Im \rho |\le T}}\int_{0}^{\infty} \frac{x^{\rho-s-t}}{\rho-s-t}dt+ \sum_{k=1}^{\infty} \int_{0}^{\infty} \frac{x^{-2k-s-t}}{2k+s+t}dt +R_2(x,T,s),\\
\label{eq:R2}
R_2(x,T,s) &\ll x^{-s}\min\left\{1,\frac{x}{T\langle x \rangle}\right\}+ \frac{\log^2 (xT)}{T\log x} x^{1-s}.
\end{align}
\end{cor}
\begin{proof}
The starting observation for the proof, as in similar results \cite[Prop. 1]{Saias1989}, is
\[ S_2(x,s) =\int_{0}^{\infty} {\sum_{n \le x}}' \frac{\Lambda(n)}{n^{s+t}}dt = \int_{0}^{\infty} S_1(x,s+t)dt.\]
We integrate both sides of \eqref{eq:S1} along $\{ s+t: t \ge 0\}$. We may interchange sum and integral because the sum over $\rho$ is finite, while the integral of the $k$-sum converges absolutely.
It remains to show
\[ I((1-s)\log x)+ \gamma+\log (\log x \zeta(s)(s-1)) = \int_{0}^{\infty}\bigg( \frac{x^{1-s-t}}{1-s-t} - \frac{\zeta'(s+t)}{\zeta(s+t)}\bigg)dt.\]
If $s \neq 1$, this is Lemma III.5.9 of \cite{Tenenbaum2015} with $(s-1)\log x$ in place of $s$. If $s=1$, this follows by a continuity argument from the $s \neq 1$ case.
\end{proof}
The following are direct consequences of the definitions, Lemma \ref{lem:truncatedpowers} and Corollary \ref{cor:S2}.
\begin{lem}\label{lem:logG1}
Let $s \in [0,1]$. Uniformly for $x \ge 4$ and $T \ge 2$ we have, for $R_2$ estimated in \eqref{eq:R2},
\begin{align}
\log G_1(s,x) &= \mathbf{1}_{x \in \mathbb{N},\, \Lambda(x) \neq 0} \frac{\Lambda(x)}{2x^s \log x} -\sum_{\substack{\rho \\ |\Im \rho |\le T}}\int_{0}^{\infty} \frac{x^{\rho-s-t}}{\rho-s-t}dt\\&\quad + \sum_{k=1}^{\infty} \int_{0}^{\infty} \frac{x^{-2k-s-t}}{2k+s+t}dt +R_2(x,T,s).
\end{align}
Let $s \in \mathbb{C}$ with $\Re s \ge 0$. Uniformly for $x \ge 4$ and $T \ge 2+3|\Im s|$ we have
\begin{align}
-\frac{G'_1(s,x)}{G_1(s,x)} &= \mathbf{1}_{x \in \mathbb{N},\, \Lambda(x) \neq 0} \frac{\Lambda(x)}{2x^s} -\sum_{\substack{\rho \\ |\Im (\rho+s) |\le T}}\frac{x^{\rho-s}}{\rho-s} + \sum_{k=1}^{\infty} \frac{x^{-2k-s}}{2k+s}+R_1(x,T,s)
\end{align}
for $R_1$ estimated in \eqref{eq:R}.
\end{lem}
Applying Cauchy's integral formula to the second part of Lemma \ref{lem:logG1} in the form
\[ f^{(j)}(s) = \frac{j!}{2\pi i}\int_{|z-s|=\frac{\varepsilon}{\log x}} \frac{f(z)}{(z-s)^{j+1}} dz\]
we immediately obtain
\begin{cor}\label{cor:S1i}
Let $s \in [0,1]$ and fix $0\le i \le 5$. Uniformly for $x \ge 4$ and $T \ge 2$ we have
\begin{align}\label{eq:S1i}
-\frac{\partial^i}{\partial s^i}\frac{G_1'(s,x)}{G_1(s,x)} &= \mathbf{1}_{x \in \mathbb{N},\, \Lambda(x) \neq 0} \frac{\Lambda(x)(-\log x)^i}{2x^s}- \sum_{\substack{\rho\\ |\Im \rho| \le T}} \frac{\partial^i}{\partial s^i} \frac{x^{\rho-s}}{\rho-s}\\
& \quad +\frac{\partial^i}{\partial s^i}\sum_{k=1}^{\infty} \frac{x^{-2k-s}}{2k+s} +R_{1,i}(x,T,s),\\
R_{1,i}(x,T,s) &\ll (\log x)^{i+1} x^{- s}\min\left\{1,\frac{x}{T\langle x \rangle}\right\}+ \frac{\log^2 (xT)(\log x)^i}{T} x^{1-s}.
\end{align}
\end{cor}
\begin{lem}\label{lem:annoyingint}
Let $s\ge 0$ and let $\rho$ be a non-trivial zero of $\zeta$. Let $d :=\min_{t \ge 0}|\rho-s-t|$.
Uniformly for $x \ge 2$ we have
\begin{equation}\label{eq:intandmt}
\int_{0}^{\infty} \frac{x^{\rho-s-t}}{\rho-s-t}dt = \frac{x^{\rho-s}}{(\rho-s)\log x}+O\left( \frac{x^{\Re \rho-s}}{d|\rho-s|(\log x)^2}\right).
\end{equation}
\end{lem}
\begin{proof}
We write the integrand as
\[ \frac{x^{\rho-s-t}}{\rho-s} \frac{1}{1-\frac{t}{\rho-s}} = \frac{x^{\rho-s-t}}{\rho-s} \left ( 1 +O\left( \frac{t}{|\rho-s-t|}\right)\right) \]
and integrate.
\end{proof}
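The estimate \eqref{eq:intandmt} is easy to check numerically. The Python sketch below (ours, for illustration) takes for $\rho$ the first nontrivial zero $\tfrac12+14.134725i$ and compares a truncated version of the integral with the main term; the relative discrepancy is of size about $1/(|\rho-s|\log x)$, in line with the lemma.

```python
import cmath, math

def zero_integral(x, s, rho, T=12.0, n=24000):
    """Trapezoid rule for int_0^T x^(rho - s - t)/(rho - s - t) dt;
    the tail beyond T is O(x^(-T)), negligible for these parameters."""
    lnx = math.log(x)
    h = T / n
    f = lambda t: cmath.exp((rho - s - t) * lnx) / (rho - s - t)
    total = 0.5 * (f(0.0) + f(T)) + sum(f(i * h) for i in range(1, n))
    return total * h

rho = complex(0.5, 14.134725)           # first zero of zeta, approximately
x, s = 100.0, 0.5
lhs = zero_integral(x, s, rho)
main = cmath.exp((rho - s) * math.log(x)) / ((rho - s) * math.log(x))
```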
\begin{proposition}\label{prop:logg1size}
Let $s \in [0,1]$ and $x \ge 4$. If $\Theta \in [1/2,1]$ denotes the supremum of the real parts of zeros of $\zeta$ then, for $0 \le i \le 6$,
\begin{align}\label{eq:rhlogg}
(\log G_1(s,x))^{(i)} &\ll x^{\Theta-s} (\log x)^{i+1},\\
\label{eq:zerofree} (\log G_1(s,x))^{(i)} &\ll x^{1-s} L(x)^{-c}.
\end{align}
Also
\begin{align}\label{eq:log1reals}
\log G_1(s,x) &= -(\log x)^{-1}x^{-s}\left( \sum_{|\Im \rho| \le T} \frac{x^{\rho}}{\rho-s}\left( 1 + O\left( \frac{1}{|\rho-s|(\log x)}\right)\right) \right. \\
& \qquad \qquad \left.+ O\left(\frac{x\log^2 (xT)}{T}+ \log x \right)\right).
\end{align}
\end{proposition}
\begin{proof}
The estimates in \eqref{eq:rhlogg} follow by taking $T=\sqrt{x}$ in Lemma \ref{lem:logG1} and Corollary \ref{cor:S1i}, and using
\[\sum_{|\Im\rho |\le T} 1/|\rho-s| \ll (\log T)^2\]
coming from the fact that between height $N$ and $N+1$, $\zeta$ has $\ll \log N$ zeros. To prove \eqref{eq:zerofree} we take $T=L(x)^c$ and use the Vinogradov--Korobov zero-free region. The last part follows by simplifying Lemma \ref{lem:logG1}.
\end{proof}
\subsection{Oscillations of \texorpdfstring{$G_1$}{G1}}\label{sec:osc}
Given a function $A(x)$, its Mellin transform is
\[ \{\mathcal{M}A\}(s) = \int_{0}^{\infty} A(x)x^{s-1}dx.\]
The following proposition computes the Mellin transforms of
\begin{equation}
A_{1}(x) := \sum_{n \le x} \Lambda(n)/(n^{s_0}\log n),\qquad
A_{2}(x) := I((1-s_0)\log x)
\end{equation}
for fixed $s_0 \in (0,1)$. It is inspired by Mellin transform computations of Diamond and Pintz \cite{Diamond}, who studied the transform of
\[\sum_{p \le x} -\log\left(1-1/p\right)- \left(\log \log x+ \gamma\right)\]
in order to show oscillation of this difference.
\begin{proposition}\label{prop:mellin}
Fix $s_0 \in (0,1)$. We have, for $\Re s > 1-s_0$,
\begin{equation}
\{\mathcal{M}A_{1}\}(-s) = \frac{1}{s} \log \zeta(s+s_0), \qquad
\{\mathcal{M}A_{2}\}(-s) = \frac{1}{s} \log \frac{s}{s+s_0-1}.
\end{equation}
\end{proposition}
\begin{proof}
For $A_1$ this is \cite[Thm. 1.3]{MV}. For $A_2$, we start with \cite[Eq. (2.3)]{Diamond}:
\begin{equation}\label{eq:idendiamond} \log \frac{z+1}{z} = \int_{1}^{\infty} t^{-z} \frac{1-t^{-1}}{t \log t}dt
\end{equation}
for $\Re z >0$.
Applying \eqref{eq:idendiamond} with $z=(s_0+s-1)/(1-s_0)$ and performing the change of variables $t=x^{1-s_0}$, we obtain
\[ \log \frac{s}{s+s_0-1} = \int_{1}^{\infty} x^{-s} \frac{x^{1-s_0}-1}{x \log x}dx\]
for $\Re s > 1-s_0$. Integration by parts, inspired by \cite[p. 526]{Diamond}, shows
\[ \log \frac{s}{s+s_0-1} = s \int_{1}^{\infty}x^{-s-1} \int_{1}^{x} \frac{t^{1-s_0}-1}{t \log t}dt dx.\]
The change of variables $t=e^{v/(1-s_0)}$ shows that the inner integral equals $I((1-s_0)\log x)$.
\end{proof}
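After the substitution $t=e^{w}$, the identity \eqref{eq:idendiamond} becomes $\log\frac{z+1}{z}=\int_0^\infty e^{-zw}\frac{1-e^{-w}}{w}\,dw$, a Frullani-type integral that is easy to verify numerically; the sketch below is our own illustration, not taken from the paper.

```python
import math

def diamond_integral(z, W=30.0, n=300000):
    """Midpoint rule for int_0^W e^(-z w) (1 - e^(-w))/w dw, i.e. the identity of
    Diamond and Pintz after the substitution t = e^w; the tail beyond W is tiny
    for z >= 1 since the integrand decays like e^(-z w)."""
    h = W / n
    total = 0.0
    for i in range(n):
        w = (i + 0.5) * h
        # the integrand extends continuously to w = 0 with value 1
        total += math.exp(-z * w) * (1.0 - math.exp(-w)) / w
    return total * h
```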
\begin{proposition}\label{prop:osc}
Let $\Theta \in [1/2,1]$ be the supremum of the real parts of zeros of $\zeta$. Fix $s_0 \in (0,\Theta)$. For any $\varepsilon >0$ we have
\[ \log G_1 (s_0,x) = \Omega_{\pm} (x^{\Theta-s_0-\varepsilon}).\]
\end{proposition}
\begin{proof}
By Lemma \ref{lem:logG1},
\[ \log G_1(s_0,x) = \sum_{n \le x} \frac{\Lambda(n)}{n^{s_0}\log n} - I((1-s_0)\log x) + O_{s_0}(\log \log x).\]
Letting
\[ \Delta(x) = \sum_{n \le x} \frac{\Lambda(n)}{n^{s_0}\log n} - I((1-s_0)\log x)\]
it suffices to show that $\Delta(x) = \Omega_{\pm} (x^{\Theta-s_0-\varepsilon})$. We show one direction, namely that $\Delta(x) \ge x^{\Theta-s_0-\varepsilon}$ occurs infinitely often; the other inequality is established similarly. By Proposition \ref{prop:mellin},
\[ \{\mathcal{M}\Delta\}(-s) =\frac{1}{s}\log \frac{\zeta(s+s_0)(s+s_0-1)}{s}\]
for $\Re s > 1-s_0$, and it is easily seen that, if we let $B_a(x):=x^a$,
\[ \{\mathcal{M}B_a\}(-s) =\frac{1}{s-a}.\]
Suppose for the sake of contradiction that $\Delta(x)<x^{\Theta-s_0-\varepsilon}$ holds for all $x\ge X$ ($X$ may depend on $\varepsilon$). Let \[F(s):=\{\mathcal{M}(B_{\Theta-s_0-\varepsilon}-\Delta)\}(-s)=\int_{1}^{\infty} \left(x^{\Theta-s_0-\varepsilon}-\Delta(x)\right)x^{-s-1}dx.\]
Let $\sigma_c$ be the infimum of $\sigma$ for which $F(\sigma)$ converges. Then Lemma 15.1 of \cite{MV} (Landau's Oscillation Theorem) says that $F(s)$ is analytic in the half-plane $\Re s > \sigma_c$, but not at $s=\sigma_c$. However,
\[ F(s) =\frac{1}{s-(\Theta-s_0-\varepsilon)}-\frac{1}{s}\log \frac{\zeta(s+s_0)(s+s_0-1)}{s}. \]
This function has a simple pole at $s=\Theta-s_0-\varepsilon$, and is analytic for real $s>\Theta-s_0-\varepsilon$ through the inequalities $\zeta(\sigma)(\sigma-1)\in (1,\sigma)$ for all $\sigma>0$ \cite[Cor. 1.14]{MV}. So $\sigma_c$ must be $\Theta-s_0-\varepsilon$, implying $F(s)$ is analytic in the half-plane $\Re s > \Theta-s_0-\varepsilon$. However, by the definition of $\Theta$, the term $\log \zeta(s+s_0)$ has singularities at zeros of $\zeta$ with real part arbitrarily close to $\Theta-s_0 > \Theta-s_0-\varepsilon$, a contradiction.
\end{proof}
\subsection{Estimates for \texorpdfstring{$G_2$}{G2}}
We have $\log G_2 = \log G_{2,1} + \log G_{2,2}$ for
\begin{equation}
\log G_{2,1}(s,x) =\sum_{\sqrt{x} <p \le x} p^{-2s}/2, \qquad
\log G_{2,2}(s,x) =\sum_{k \ge 3}\sum_{x^{1/k} <p \le x} p^{-ks}/k.
\end{equation}
The Prime Number Theorem (PNT) with error term shows
\begin{lem}\label{lem:g21}
Uniformly for $x \ge 2$ and $s \in [0,1]$ we have
\begin{align}
\log G_{2,1}(s,x)&=\frac{1+ O( L(x)^{-c})}{2}\int_{\sqrt{x}}^{x} \frac{dt}{t^{2s}\log t}\\
&\asymp \frac{1}{\log x}\int_{\sqrt{x}}^{x} \frac{dt}{t^{2s}}= \frac{x^{1-2s}-x^{\frac{1}{2}-s}}{(1-2s)\log x}.
\end{align}
\end{lem}
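The lemma can be eyeballed numerically: the prime sum defining $\log G_{2,1}$ is indeed close to the PNT main term. The sketch below (our illustrative Python, with hypothetical helper names) compares the two at $x=10^5$, $s=1/4$.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def log_G21(s, x):
    """log G_{2,1}(s, x) = sum over primes sqrt(x) < p <= x of p^(-2s)/2."""
    return sum(p ** (-2.0 * s) / 2.0 for p in primes_up_to(x) if p * p > x)

def pnt_main_term(s, x, n=100000):
    """(1/2) * int_{sqrt(x)}^{x} t^(-2s)/log t dt, by the midpoint rule."""
    a, b = math.sqrt(x), float(x)
    h = (b - a) / n
    return 0.5 * h * sum((a + (i + 0.5) * h) ** (-2.0 * s)
                         / math.log(a + (i + 0.5) * h) for i in range(n))
```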
For $x \neq 0$ let $ \operatorname {Ei} (x)$ be the exponential integral, understood in the principal value sense:
\[ \operatorname {Ei} (x) =-\int_{-x}^{\infty}e^{-t}t^{-1}dt= \int_{-\infty}^{x}e^t t^{-1}dt=e^x x^{-1}\left(1+O(x^{-1})\right).\]
\begin{lem}\label{lem:intaccurate}
For $1/2 \neq s \in [0,1]$ we have
\begin{align}
\int_{\sqrt{x}}^{x} \frac{dt}{t^{2s}\log t} &= \operatorname {Ei} \left(\log x \left(1-2s\right)\right)- \operatorname {Ei} \left(\log x \left(\frac{1}{2}-s\right)\right) \\
&\sim
\begin{cases} \frac{x^{\frac{1}{2}-s}}{\log x \left(s-\frac{1}{2}\right)} &\text{if }(2s-1)\log x \to \infty, \\ \frac{x^{1-2s}}{\log x \left(1-2s\right)}&\text{if }(2s-1)\log x \to -\infty.\end{cases}
\end{align}
When $s=1/2$, the integral is $\log 2$.
When $s=1/2+O(1/\log x)$ the integral is $\asymp 1$.
\end{lem}
\begin{proof}
When $s \neq 1/2$, we perform the change of variables $v=(1-2s)\log t$ and use the asymptotics for $ \operatorname {Ei}$.
When $s=1/2$, we use the fact that $\log \log t$ is an antiderivative of $1/(t \log t)$.
\end{proof}
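A numerical check of the first identity in the lemma (our illustrative Python, not from the paper): $\operatorname{Ei}$ is computed from its power series $\operatorname{Ei}(x)=\gamma+\log|x|+\sum_{k\ge1}x^k/(k\cdot k!)$, which is adequate for the moderate positive arguments used here.

```python
import math

EULER_GAMMA = 0.5772156649015329

def Ei(x):
    """Ei(x) from the power series gamma + log|x| + sum_{k>=1} x^k/(k*k!);
    fine for moderate positive x, where there are no cancellation issues."""
    s, term = 0.0, 1.0
    for k in range(1, 200):
        term *= x / k                   # term = x^k / k!
        s += term / k
    return EULER_GAMMA + math.log(abs(x)) + s

def mid_integral(x, s, n=200000):
    """Midpoint rule for int_{sqrt(x)}^{x} t^(-2s)/log t dt."""
    a, b = math.sqrt(x), float(x)
    h = (b - a) / n
    return h * sum((a + (i + 0.5) * h) ** (-2.0 * s)
                   / math.log(a + (i + 0.5) * h) for i in range(n))
```

At $x=10^4$, $s=0.3$ the two sides of the identity agree to high accuracy, and at $s=1/2$ the integral is exactly $\log 2$.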
\begin{lem}\label{lem:g22}
Fix $\varepsilon>0$. For $x \ge 2$ and $1 \ge s \ge \varepsilon/\log x$,
\[ \log G_{2,2}(s,x) \ll_{\varepsilon} \frac{x^{1-3s}-x^{\frac{1}{3}-s}}{(1-3s)\log x} \asymp \begin{cases} \frac{x^{\frac{1}{3}-s}}{(3s-1)\log x} & \text{if }(3s-1)\log x \ge 1, \\ 1 & \text{if }|(3s-1)\log x| \le 1,\\ \frac{x^{1-3s}}{(1-3s)\log x} & \text{if }(3s-1)\log x \le -1.\end{cases}\]
\end{lem}
\begin{proof}
The same argument as in Lemma \ref{lem:g21} shows that the contribution of $k=3$ to $\log G_{2,2}(s,x)$ is acceptable, so we omit this case from now on.
We consider the contribution of $k\ge \max\{2/s,\log_2 x\}$ (base-$2$ logarithm). For such $k$,
\[ \sum_{x^{1/k}<p\le x} p^{-ks} \le 2^{-ks} + \sum_{p \ge 3} p^{-ks} \ll 2^{-ks} + \int_{2}^{\infty} t^{-ks}dt \ll 2^{-ks}.\]
Hence
\[ \sum_{k \ge \max\{2/s,\log_2 x\}}\sum_{x^{1/k}<p\le x} p^{-ks}/k \ll \sum_{k\ge \max\{2/s,\log_2 x\}} 2^{-ks}/k \ll x^{-s},\]
which is sufficiently small.
It remains to consider the contribution of $4 \le k \le \max\{2/s,\log_2 x\}$ to $\log G_{2,2}$. We show that primes $p \in (x^{1/4},x] \subseteq (x^{1/k},x]$ have an acceptable contribution. The assumption $s \ge \varepsilon/\log x$ implies $1/(1-t^{-s}) \ll_{\varepsilon} 1$ when $t \ge x^{1/4}$, and so
\[ \sum_{\max\{2/s,\log_2 x\} \ge k \ge 4} \sum_{x^{1/4}<p \le x} \frac{1}{p^{ks}k}\ll \sum_{k \ge 4} \int_{x^{1/4}}^{x} \frac{dt}{t^{ks}k\log t} \ll_{\varepsilon} \int_{x^{1/4}}^{x} \frac{dt}{t^{4s}\log t} \]
which is smaller than the bound we give. For the smaller primes, $x^{1/k}<p \le x^{1/4}$, we use
\[ \sum_{x^{1/k}<p\le x^{1/4}} p^{-ks} \ll x^{\frac{1}{4}-s}/\log x\]
which implies
\[ \sum_{\max\{2/s,\log_2x\} \ge k \ge 4} \sum_{x^{1/k}<p \le x^{1/4}} p^{-ks}/k \ll x^{\frac{1}{4}-s}\sum_{\max\{2/s,\log_2x\} \ge k \ge 4} 1/\log x \]
which is $\ll_{\varepsilon} x^{1/4-s}$.
\end{proof}
\begin{cor}\label{cor:g2size}
Fix $\varepsilon>0$. Suppose $x \ge 2$ and $1 \ge s\ge \varepsilon/\log x$. Then
\begin{align}\label{eq:mtg2}
\log G_2(s,x)&=\frac{1}{2}\int_{\sqrt{x}}^{x} \frac{dt}{t^{2s}\log t} \left( 1+O_{\varepsilon}\left( L(x)^{-c}+x^{-s}\right)\right) \\
&\asymp_{\varepsilon } (\log x)^{-1}\int_{\sqrt{x}}^{x} t^{-2s}dt = \frac{x^{1-2s}-x^{\frac{1}{2}-s}}{(1-2s)\log x}.
\end{align}
\end{cor}
In the same way we establish
\begin{lem}\label{lem:logGderivaccurate}
Fix $\varepsilon>0$ and $0 \le i \le 6$. For $x \ge 2$ and $1 \ge s \ge \varepsilon/\log x$,
\begin{align} \big(\frac{G_2'(s,x)}{G_2(s,x)}\big)^{(i)} &=(-1)^{i-1}2^{i}\int_{\sqrt{x}}^{x}(\log t)^{i} t^{-2s} dt \left( 1+O_{\varepsilon}\left( L(x)^{-c}+x^{-s}\right)\right) \\
&\asymp_{\varepsilon} (-\log x)^{i+1} \log G_{2}(s,x).
\end{align}
\end{lem}
\begin{lem}\label{lem:logg2asymp}
Fix $\varepsilon>0$. Suppose $1/10 \ge s \ge \varepsilon/\log x$. Then
\[ \log G_2(s,x)=\int_{\sqrt{x}}^{x} (-\log(1-t^{-s})-t^{-s}) \frac{dt}{\log t}(1+O_{\varepsilon}(L(x)^{-c})).\]
\end{lem}
\begin{proof}
The contribution of $k \ge \max\{2/s,\log_2 x\}$ is $\ll x^{-s}$ as in Lemma \ref{lem:g22}. We now consider $2 \le k < \max\{2/s,\log_2 x\}$. If $x^{1/k} < p \le \sqrt{x}$, we get a contribution of $\ll_{\varepsilon} x^{1/2-s}$ as in Lemma \ref{lem:g22}. We handle $2 \le k < \max\{2/s,\log_2 x\}$ and $\sqrt{x} < p \le x$ by the PNT, obtaining
\[\int_{\sqrt{x}}^{x} \sum_{2 \le k \le \max\{2/s,\log_2 x\}} \frac{t^{-ks}}{k} \frac{dt}{\log t} (1+O_{\varepsilon}(L(x)^{-c})). \]
We use $t^{s} -1 \gg_{\varepsilon} 1$ when $t \in [\sqrt{x},x]$ to complete the sum over $k$ at a negligible cost.
\end{proof}
% Source metadata: "Smooth integers and the Dickman $\rho$ function", Number Theory (math.NT), arXiv:2211.08973, https://arxiv.org/abs/2211.08973, 2022-11-30.
% arXiv:1508.02878, "Fullerenes with distant pentagons".
% Abstract: For each $d>0$, we find all the smallest fullerenes for which the least distance between two pentagons is $d$. We also show that for each $d$ there is an $h_d$ such that fullerenes with pentagons at least distance $d$ apart and any number of hexagons greater than or equal to $h_d$ exist. We also determine the number of fullerenes where the minimum distance between any two pentagons is at least $d$, for $1 \le d \le 5$, up to 400 vertices.

\section{Introduction}
A \textit{fullerene}~\cite{kroto_85} is a cubic plane graph where all faces are pentagons or hexagons. Euler's formula implies that a
fullerene with $n$ vertices contains exactly 12 pentagons and $n/2 - 10$ hexagons.
The
\textit{dual} of a fullerene is the plane graph
obtained by exchanging the roles of vertices and faces: the vertex set of the dual graph
is the set of faces of the original graph and two
vertices in the dual graph are adjacent if and only if the two faces share an edge
in the original graph.
The dual of a fullerene with $n$ vertices is a \textit{triangulation} (i.e.\ a plane graph where every face is a triangle) which contains 12 vertices with degree 5 and $n/2 - 10$ vertices with degree 6. The \textit{face-distance} between two pentagons is the distance between the corresponding vertices of degree 5 in the dual graph.
The first fullerene molecule (i.e.\ the $C_{60}$ ``buckyball'') was discovered in 1985 by Kroto et al.~\cite{kroto_85}. Among the fullerenes, the \textit{Isolated Pentagon Rule} (IPR) fullerenes are of special interest as they tend to be more stable~\cite{IPR_ref,IPR_ref2}. IPR fullerenes are fullerenes where no two pentagons share an edge, i.e.\ they have minimum face-distance at least~2. Raghavachari~\cite{raghavachari1992ground} argued that steric strain will be minimized when the pentagons are distributed as uniformly as possible and therefore proposed the \textit{uniform curvature rule} as an extension of the IPR rule. Also, more recently Rodr\'iguez-Fortea et al.~\cite{rodriguez2010maximum} proposed the maximum pentagon separation rule where they argue that the most suitable carbon cages are those with the largest separations among the 12 pentagons. These observations lead us to investigate the maximum separation between pentagons that can be achieved for a given number of atoms, or conversely how many atoms are needed to achieve a given separation.
We will refer to the least face-distance between pentagons of a fullerene as the \textit{pentagon separation} of the fullerene.
In the next section we determine the smallest fullerenes with a given pentagon separation. We also show that the minimum fullerenes for each $d$ are unique up to mirror image and that for each $d$ there is an $h_d$ such that fullerenes with pentagon separation at least $d$ and any number of hexagons greater than or equal to $h_d$ exist. The latter was already proven for $h_1$ (i.e., for all fullerenes) by Gr{\"u}nbaum and Motzkin in~\cite{grunbaum1963number} and for $h_2$ (i.e., for IPR fullerenes) by Klein and Liu in~\cite{klein1992theorems}.
Finally, we also determine the number of fullerenes of pentagon separation~$d$, for $1 \le d \le 5$, up to 400 vertices.
\section{Fullerenes with a given minimum pentagon separation}
\label{section:minnv_distant_pentagons}
In this section we determine the smallest fullerenes with a given pentagon separation.
We remind the reader of the icosahedral fullerenes~\cite{goldberg_37,coxeter_71}.
These fullerenes are uniquely determined by their Coxeter coordinates $(p,q)$ and are obtained by cutting an equilateral Goldberg triangle with coordinates $(p,q)$ from the hexagon lattice and gluing it to the faces of the icosahedron. As a Goldberg triangle with coordinates $(p,q)$ has $p^2 + pq + q^2$ vertices, an icosahedral fullerene with Coxeter coordinates $(p,q)$ has $20(p^2 + pq + q^2)$ vertices. Also note that an icosahedral fullerene with Coxeter coordinates $(p,q)$ has pentagon separation~$p+q$.
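These counts are easy to script; the sketch below (our own illustrative Python, not part of the paper) encodes the vertex formula and recovers two familiar cases: $C_{20}$ with Coxeter coordinates $(1,0)$ and the buckyball $C_{60}$ with coordinates $(1,1)$.

```python
def icosahedral_vertices(p, q):
    """Vertex count 20*(p^2 + p*q + q^2) of the icosahedral fullerene with
    Coxeter coordinates (p, q)."""
    return 20 * (p * p + p * q + q * q)

def pentagon_separation(p, q):
    """Pentagon separation of the icosahedral fullerene with coordinates (p, q)."""
    return p + q
```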
The smallest fullerene for $d=1$ is of course unique: the icosahedron $C_{20}$.
For larger~$d$, the minimal fullerenes are given in the next theorem.
\begin{theorem} \label{theorem:min_face_distance_nv}
For odd $d\ge 3$, the smallest fullerenes with pentagon separation
at least $d$ are the icosahedral fullerenes with Coxeter coordinates $(\lceil d/2\rceil,\lfloor d/2\rfloor)$ and $(\lfloor d/2\rfloor,\lceil d/2\rceil)$. These are mirror images and have $15d^2+5$ vertices.
For even $d$, the unique smallest fullerene with pentagon separation at least $d$ is the icosahedral fullerene with Coxeter coordinates $(d/2,d/2)$, which has $15d^2$ vertices.
\end{theorem}
\begin{proof}
\
\noindent\textbf{Proof in the case that $d\ge 3$ is odd:}
\\
The \textit{penta-hexagonal net} is the regular tiling of the plane where a central pentagon is surrounded by an infinite number of hexagons.
The number of faces at face-distance $k$ from the pentagon in the penta-hexagonal net is $5k$. So the number of faces at face-distance at most $k$ from the pentagon in the penta-hexagonal net is $\sum\limits_{i=1}^k 5i + 1 = 5k(k+1)/2 + 1$.
Figure~\ref{fig:d=7patch} shows this situation for $k=3$.
\begin{figure}[h!t]
\centering
\includegraphics[width=0.32\textwidth]{penthex7.pdf}
\caption{Patch for $d=7$ in the proof of Theorem~\ref{theorem:min_face_distance_nv}}
\label{fig:d=7patch}
\end{figure}
In a fullerene with pentagon separation at least~$d$, for odd~$d$, the sets of faces at face-distance at most $\lfloor d/2 \rfloor$ from each pentagon are pairwise disjoint.
Consequently the smallest such fullerenes we can hope to find consist of 12 copies of the above patch for $k=\lfloor d/2 \rfloor$, which comes to $15d^2+5$ vertices.
\begin{figure}[h!t]
\centering
\includegraphics[width=0.7\textwidth]{penthex7join.pdf}
\caption{Bad and good ways to join two patches for $d=7$}
\label{fig:d=7patchjoin}
\end{figure}
Since the patch boundary has no more than two consecutive vertices of degree~2, it is impossible to join any number of them into a larger patch with a boundary having more than two consecutive vertices of degree~2. Therefore, considering the complement, no union of these patches which is completable to a fullerene has more than two consecutive vertices of degree~3. Now, every way to overlap the boundaries of two patches produces three consecutive vertices of degree~3, such as indicated in the left side of Figure~\ref{fig:d=7patchjoin}, except for the way shown in the right side of Figure~\ref{fig:d=7patchjoin} or its mirror image. For each of these two starting points, there is only one way to attach a third patch to those two patches, and so on, leading to a unique completion in each case.
It is easy to see that these two fullerenes are the icosahedral fullerenes mentioned in the theorem.
\medskip
\begin{figure}[h!t]
\centering
\includegraphics[width=0.32\textwidth]{penthex6.pdf}
\caption{Patch with dangling edges for $d=6$ in the proof of
Theorem~\ref{theorem:min_face_distance_nv}}
\label{fig:d=6patch}
\end{figure}
\noindent\textbf{Proof in the case that $d$ is even:}\\
The proof in this case is similar except that we use a different type of patch.
In~\cite{cvetkovic_02} it was proven that the number of vertices at distance $k$ from the pentagon in the penta-hexagonal net is $5 \lfloor k/2 \rfloor + 5$.
So the total number of vertices at distance at most $k$ from the pentagon in the penta-hexagonal net is $\sum\limits_{i=0}^k (5 \lfloor i/2 \rfloor + 5) = 5 (\sum\limits_{i=0}^k \lfloor i/2 \rfloor + k + 1)$. If $k$ is even, $\sum\limits_{i=0}^k \lfloor i/2 \rfloor$ is equal to $k^2/4$. So the total number of vertices at distance at most $k$ from the pentagon in the penta-hexagonal net for even $k$ is $5(k^2/4 + k +1)$.
In a fullerene with pentagon separation at least $d$, for even $d$, the sets of vertices at distance at most $d-2$ from every pentagon are pairwise disjoint.
The case of $d=6$ is shown in Figure~\ref{fig:d=6patch}, excluding the ends of the dangling edges.
Therefore, the smallest such fullerene of pentagon separation $d$ we can hope to construct consists of 12 of these patches for $k=d-2$, joined together by identifying dangling edges. This would give us $15d^2$ vertices altogether.
\begin{figure}[h!t]
\centering
\includegraphics[width=0.7\textwidth]{penthex6join.pdf}
\caption{Bad and good ways to identify dangling edges for $d=6$}
\label{fig:d=6patchjoin}
\end{figure}
Since we are only permitted to create hexagons incident with the dangling
edges, dangling edges distance two apart in one patch can only be identified with
dangling edges distance two apart in another patch. Otherwise, a face of the wrong
size is created, such as the pentagon indicated in the left side of Figure~\ref{fig:d=6patchjoin}.
This allows us to join two adjacent patches in only one way, as shown by the
right side of Figure~\ref{fig:d=6patchjoin}. Extra patches can then be attached in
unique fashion, leading to a single fullerene that is easily seen to be the
icosahedral fullerene with Coxeter coordinates $(d/2,d/2)$.
\end{proof}
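The counting in the proof can be double-checked mechanically. In a cubic plane graph, Euler's formula $V-E+F=2$ together with $E=3V/2$ gives $V=2(F-2)$; combined with the patch counts above, this reproduces the vertex counts $15d^2+5$ and $15d^2$. The sketch below is our own illustrative Python.

```python
def patch_faces(k):
    """Faces at face-distance <= k from the pentagon in the penta-hexagonal net:
    5k(k+1)/2 + 1."""
    return 5 * k * (k + 1) // 2 + 1

def cubic_vertices(faces):
    """Cubic plane graph: V - E + F = 2 and E = 3V/2 give V = 2*(F - 2)."""
    return 2 * (faces - 2)

def patch_vertices(k):
    """Vertices at distance <= k from the pentagon, for even k: 5*(k^2/4 + k + 1)."""
    return 5 * (k * k // 4 + k + 1)

# odd d: 12 disjoint face-patches of radius (d-1)/2 give 15d^2 + 5 vertices
for d in range(3, 31, 2):
    assert cubic_vertices(12 * patch_faces((d - 1) // 2)) == 15 * d * d + 5

# even d: 12 disjoint vertex-patches of radius d-2 give 15d^2 vertices
for d in range(2, 31, 2):
    assert 12 * patch_vertices(d - 2) == 15 * d * d
```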
Next we will prove that for each $d$ there is an $h_d$ such that fullerenes with pentagon separation at least $d$ and any number of hexagons greater than or equal to $h_d$ exist. To prove this, we need Lemmas~\ref{lemma:boundarylength_plus1} and~\ref{lemma:add_hexagons_same_boundary}.
A \textit{fullerene patch} is a connected subgraph of a fullerene where all faces except one exterior face are also faces in the fullerene and all boundary vertices have degree 2 or 3 and all non-boundary vertices have degree 3. The \textit{boundary sequence} of a patch is the cyclic sequence of the degrees of the vertices in the boundary of a patch in clockwise
or counterclockwise order.
A \textit{cap}\index{cap} is a fullerene patch which contains 6 pentagons and has a boundary sequence of the form $(23)^l (32)^m$. Such a boundary is represented by the parameters $(l,m)$. In the literature, the vector $(l,m)$ is also called the \textit{chiral vector} (see~\cite{saito1998physical}).
\begin{lemma} \label{lemma:boundarylength_plus1}
Any cap with parameters $(l,0)$ can be transformed into a cap with parameters $(l,1)$ without decreasing the minimum face-distance between the pentagons of the cap.
\end{lemma}
\begin{proof}
Consider a cap with parameters $(l,0)$. If the cap does not contain a pentagon in its boundary, we remove $(l,0)$ rings of hexagons until there is a pentagon in the boundary of the cap.
In Figure~\ref{fig:change_cap_bound} we show how the $(l,0)$ cap which contains a boundary pentagon (see Figure~\ref{fig:change_cap_bound_step1}) can be transformed into a cap with parameters $(l,1)$ without decreasing the minimum face-distance between the pentagons. This is done by changing the boundary pentagon into a hexagon $h$, adding a ring of hexagons (see Figure~\ref{fig:change_cap_bound_step2}) and changing a hexagon in the boundary which is adjacent to $h$ into a pentagon (see Figure~\ref{fig:change_cap_bound_step3}).
\end{proof}
\begin{figure}[h!t]
\centering
\subfloat[]{\label{fig:change_cap_bound_step1}\includegraphics[width=0.7\textwidth]{change_cap_boundary_step1.pdf}}\\
\subfloat[]{\label{fig:change_cap_bound_step2}\includegraphics[width=0.7\textwidth]{change_cap_boundary_step2.pdf}}\\
\subfloat[]{\label{fig:change_cap_bound_step3}\includegraphics[width=0.7\textwidth]{change_cap_boundary_step3.pdf}}
\caption{Procedure to change a cap with parameters $(l,0)$ to a cap with parameters $(l,1)$. The bold edges in the figure have to be identified with each other.}
\label{fig:change_cap_bound}
\end{figure}
\begin{lemma} \label{lemma:add_hexagons_same_boundary}
Let $C$ be a cap with parameters $(l,m)$, where $l \neq 0$ and $m \neq 0$, consisting of $f$ faces. A cap $C'$ with the same parameters $(l,m)$ which contains $C$ as a subgraph and has $f+l$, respectively $f+m$, faces can be constructed from $C$ by adding $l$, respectively $m$, hexagons to $C$.
\end{lemma}
\begin{proof}
Let $C$ be a cap with parameters $(l,m)$ with $l \neq 0$ and $m \neq 0$. In Figure~\ref{fig:add_hexagons_same_cap_step} we show how a cap $C'$ with the same parameters $(l,m)$ which contains $C$ as a subgraph and has $f+l$ faces can be constructed from $C$ by adding $l$ hexagons to $C$.
A cap $C''$ with $f+m$ faces can be obtained in a completely analogous way by adding $m$ hexagons to $C$.
\end{proof}
\begin{figure}[h!t]
\centering
\subfloat[]{\label{fig:add_hexagons_same_cap_step1}\includegraphics[width=0.95\textwidth]{add_hexagons_same_cap_step1.pdf}}\\
\subfloat[]{\label{fig:add_hexagons_same_cap_step2}\includegraphics[width=0.95\textwidth]{add_hexagons_same_cap_step2.pdf}}\\
\caption{Procedure which adds $l$ hexagons to an $(l,m)$ cap without changing the boundary parameters. The bold edges in the figure have to be identified with each other.}
\label{fig:add_hexagons_same_cap_step}
\end{figure}
\begin{theorem} \label{theorem:min_face_distance_existence}
For each $d$ there is an $h_d$ such that fullerenes with pentagon separation at least $d$ and any number of hexagons greater than or equal to $h_d$ exist.
\end{theorem}
\begin{proof}
Let $F$ be an icosahedral fullerene with Coxeter coordinates $(\lceil d/2 \rceil, \lceil d/2 \rceil)$. In this fullerene the minimum face-distance between the pentagons is $2\lceil d/2 \rceil$.
Brinkmann and Schein~\cite{brinkmann_schein} have proven that every icosahedral fullerene with Coxeter coordinates $(p,q)$ contains a fullerene patch with 6 pentagons which is a subgraph of a cap with parameters $(3(p+2q),3(p-q))$. So $F$ contains a fullerene patch with 6 pentagons which is a subgraph of a cap with parameters $(9\lceil d/2 \rceil, 0)$.
It follows from~\cite{saito1998physical, ficon_04} that such a fullerene patch can be completed to a cap with parameters $(9\lceil d/2 \rceil, 0)$ by adding hexagons. It follows from Lemma~\ref{lemma:boundarylength_plus1} that this cap can then be transformed to a cap with parameters $(9\lceil d/2 \rceil, 1)$ without decreasing the minimum face-distance between the pentagons of the cap.
We form a fullerene $F'$ with pentagon separation at least $d$ by gluing together two copies of the $(9\lceil d/2 \rceil, 1)$ cap, inserting rings of hexagons matching the $(9\lceil d/2 \rceil, 1)$ boundary if necessary. Let $h_{F'}$ denote the number of hexagons of $F'$. Now a fullerene with pentagon separation at least $d$ and any number of hexagons greater than or equal to $h_{F'}$ can be obtained by repeatedly applying Lemma~\ref{lemma:add_hexagons_same_boundary} to $F'$.
\end{proof}
The counts of the number of fullerenes up to 400 vertices with pentagon separation at least $d$, for $1 \le d \le 5$, can be found in Tables~\ref{table:fuller_counts_1}-\ref{table:fuller_counts_4}. (Note that $d=1$ gives the set of all fullerenes and $d=2$ gives the set of all IPR fullerenes.) These counts were obtained by using the program \textit{buckygen}~\cite{fuller-paper, fuller-paper-ipr} (which can be downloaded from \url{http://caagt.ugent.be/buckygen/}) to generate all non-isomorphic fullerenes and then applying a separate program to compute their pentagon separation. Note that fullerenes which are mirror images of each other are considered to be in the same isomorphism class and are thus only counted once.
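The pentagon separation of a given fullerene can be computed by breadth-first search on its dual graph, whose vertices are the faces and whose edges join adjacent faces: it is the minimum dual-graph distance between two pentagons. The following Python sketch illustrates this; the adjacency-list input format and the function name are our own assumptions and are not taken from the program actually used for the tables.

```python
from collections import deque

def pentagon_separation(adj, pentagons):
    """Minimum face-distance between any two pentagons.

    adj: dict mapping a face id to the list of adjacent face ids
         (i.e. the dual graph of the fullerene).
    pentagons: iterable of the face ids that are pentagons.
    Sketch only; assumes the dual graph is already available.
    """
    pentagons = list(pentagons)
    best = float("inf")
    for src in pentagons:
        # plain BFS from this pentagon over the dual graph
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for p in pentagons:
            if p != src and p in dist:
                best = min(best, dist[p])
    return best
```

On a real fullerene dual the twelve pentagon vertices are fixed, so twelve BFS runs suffice regardless of the number of hexagons.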
Some of the fullerenes from Tables~\ref{table:fuller_counts_1}-\ref{table:fuller_counts_4} can also be downloaded from the \textit{House of Graphs}~\cite{hog} at \url{http://hog.grinvin.org/Fullerenes}.
Figures \ref{fig:smallest_d=3}-\ref{fig:smallest_d=5} show the smallest fullerenes with pentagon separation $d$, for $3 \le d \le 5$.
\begin{figure}[h!t]
\centering
\includegraphics[width=0.5\textwidth]{Fullerene_140_min_pent_dist3.pdf}
\caption{The icosahedral fullerene with Coxeter coordinates $(2,1)$. This fullerene and its mirror image are the smallest fullerenes with pentagon separation~3. They have 140 vertices.}
\label{fig:smallest_d=3}
\end{figure}
\begin{figure}[h!t]
\centering
\includegraphics[width=0.5\textwidth]{Fullerene_240_min_pent_dist4.pdf}
\caption{The icosahedral fullerene with Coxeter coordinates $(2,2)$. This is the smallest fullerene with pentagon separation 4 and has 240 vertices.}
\label{fig:smallest_d=4}
\end{figure}
\begin{figure}[h!t]
\centering
\includegraphics[width=0.5\textwidth]{Fullerene_380_min_pent_dist5.pdf}
\caption{The icosahedral fullerene with Coxeter coordinates $(3,2)$. This fullerene and its mirror image are the smallest fullerenes with pentagon separation~5. They have 380 vertices.}
\label{fig:smallest_d=5}
\end{figure}
\begin{table}
\centering
{\small
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
nv & nf & fullerenes & IPR fullerenes & pent.\, sep.\,${}\ge3$ & pent.\, sep.\,${}\ge4$ & pent.\, sep.\,${}\ge5$\\
\hline
20 & 12 & 1 & 0 & 0 & 0 & 0\\
22 & 13 & 0 & 0 & 0 & 0 & 0\\
24 & 14 & 1 & 0 & 0 & 0 & 0\\
26 & 15 & 1 & 0 & 0 & 0 & 0\\
28 & 16 & 2 & 0 & 0 & 0 & 0\\
30 & 17 & 3 & 0 & 0 & 0 & 0\\
32 & 18 & 6 & 0 & 0 & 0 & 0\\
34 & 19 & 6 & 0 & 0 & 0 & 0\\
36 & 20 & 15 & 0 & 0 & 0 & 0\\
38 & 21 & 17 & 0 & 0 & 0 & 0\\
40 & 22 & 40 & 0 & 0 & 0 & 0\\
42 & 23 & 45 & 0 & 0 & 0 & 0\\
44 & 24 & 89 & 0 & 0 & 0 & 0\\
46 & 25 & 116 & 0 & 0 & 0 & 0\\
48 & 26 & 199 & 0 & 0 & 0 & 0\\
50 & 27 & 271 & 0 & 0 & 0 & 0\\
52 & 28 & 437 & 0 & 0 & 0 & 0\\
54 & 29 & 580 & 0 & 0 & 0 & 0\\
56 & 30 & 924 & 0 & 0 & 0 & 0\\
58 & 31 & 1 205 & 0 & 0 & 0 & 0\\
60 & 32 & 1 812 & 1 & 0 & 0 & 0\\
62 & 33 & 2 385 & 0 & 0 & 0 & 0\\
64 & 34 & 3 465 & 0 & 0 & 0 & 0\\
66 & 35 & 4 478 & 0 & 0 & 0 & 0\\
68 & 36 & 6 332 & 0 & 0 & 0 & 0\\
70 & 37 & 8 149 & 1 & 0 & 0 & 0\\
72 & 38 & 11 190 & 1 & 0 & 0 & 0\\
74 & 39 & 14 246 & 1 & 0 & 0 & 0\\
76 & 40 & 19 151 & 2 & 0 & 0 & 0\\
78 & 41 & 24 109 & 5 & 0 & 0 & 0\\
80 & 42 & 31 924 & 7 & 0 & 0 & 0\\
82 & 43 & 39 718 & 9 & 0 & 0 & 0\\
84 & 44 & 51 592 & 24 & 0 & 0 & 0\\
86 & 45 & 63 761 & 19 & 0 & 0 & 0\\
88 & 46 & 81 738 & 35 & 0 & 0 & 0\\
90 & 47 & 99 918 & 46 & 0 & 0 & 0\\
92 & 48 & 126 409 & 86 & 0 & 0 & 0\\
94 & 49 & 153 493 & 134 & 0 & 0 & 0\\
96 & 50 & 191 839 & 187 & 0 & 0 & 0\\
98 & 51 & 231 017 & 259 & 0 & 0 & 0\\
100 & 52 & 285 914 & 450 & 0 & 0 & 0\\
102 & 53 & 341 658 & 616 & 0 & 0 & 0\\
104 & 54 & 419 013 & 823 & 0 & 0 & 0\\
106 & 55 & 497 529 & 1 233 & 0 & 0 & 0\\
108 & 56 & 604 217 & 1 799 & 0 & 0 & 0\\
110 & 57 & 713 319 & 2 355 & 0 & 0 & 0\\
112 & 58 & 860 161 & 3 342 & 0 & 0 & 0\\
114 & 59 & 1 008 444 & 4 468 & 0 & 0 & 0\\
\hline
\end{tabular}
}
\caption{Number of fullerenes for a given lower bound on the
pentagon separation. nv is the number of vertices and nf is the number of faces.}
\label{table:fuller_counts_1}
\end{table}
\begin{table}
\centering
{\small
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
nv & nf & fullerenes & IPR fullerenes & pent.\, sep.\,${}\ge3$ & pent.\, sep.\,${}\ge4$ & pent.\, sep.\,${}\ge5$\\
\hline
116 & 60 & 1 207 119 & 6 063 & 0 & 0 & 0\\
118 & 61 & 1 408 553 & 8 148 & 0 & 0 & 0\\
120 & 62 & 1 674 171 & 10 774 & 0 & 0 & 0\\
122 & 63 & 1 942 929 & 13 977 & 0 & 0 & 0\\
124 & 64 & 2 295 721 & 18 769 & 0 & 0 & 0\\
126 & 65 & 2 650 866 & 23 589 & 0 & 0 & 0\\
128 & 66 & 3 114 236 & 30 683 & 0 & 0 & 0\\
130 & 67 & 3 580 637 & 39 393 & 0 & 0 & 0\\
132 & 68 & 4 182 071 & 49 878 & 0 & 0 & 0\\
134 & 69 & 4 787 715 & 62 372 & 0 & 0 & 0\\
136 & 70 & 5 566 949 & 79 362 & 0 & 0 & 0\\
138 & 71 & 6 344 698 & 98 541 & 0 & 0 & 0\\
140 & 72 & 7 341 204 & 121 354 & 1 & 0 & 0\\
142 & 73 & 8 339 033 & 151 201 & 0 & 0 & 0\\
144 & 74 & 9 604 411 & 186 611 & 0 & 0 & 0\\
146 & 75 & 10 867 631 & 225 245 & 0 & 0 & 0\\
148 & 76 & 12 469 092 & 277 930 & 0 & 0 & 0\\
150 & 77 & 14 059 174 & 335 569 & 1 & 0 & 0\\
152 & 78 & 16 066 025 & 404 667 & 2 & 0 & 0\\
154 & 79 & 18 060 979 & 489 646 & 0 & 0 & 0\\
156 & 80 & 20 558 767 & 586 264 & 0 & 0 & 0\\
158 & 81 & 23 037 594 & 697 720 & 0 & 0 & 0\\
160 & 82 & 26 142 839 & 836 497 & 2 & 0 & 0\\
162 & 83 & 29 202 543 & 989 495 & 1 & 0 & 0\\
164 & 84 & 33 022 573 & 1 170 157 & 2 & 0 & 0\\
166 & 85 & 36 798 433 & 1 382 953 & 1 & 0 & 0\\
168 & 86 & 41 478 344 & 1 628 029 & 13 & 0 & 0\\
170 & 87 & 46 088 157 & 1 902 265 & 4 & 0 & 0\\
172 & 88 & 51 809 031 & 2 234 133 & 12 & 0 & 0\\
174 & 89 & 57 417 264 & 2 601 868 & 10 & 0 & 0\\
176 & 90 & 64 353 269 & 3 024 383 & 28 & 0 & 0\\
178 & 91 & 71 163 452 & 3 516 365 & 23 & 0 & 0\\
180 & 92 & 79 538 751 & 4 071 832 & 58 & 0 & 0\\
182 & 93 & 87 738 311 & 4 690 880 & 54 & 0 & 0\\
184 & 94 & 97 841 183 & 5 424 777 & 142 & 0 & 0\\
186 & 95 & 107 679 717 & 6 229 550 & 129 & 0 & 0\\
188 & 96 & 119 761 075 & 7 144 091 & 291 & 0 & 0\\
190 & 97 & 131 561 744 & 8 187 581 & 257 & 0 & 0\\
192 & 98 & 145 976 674 & 9 364 975 & 548 & 0 & 0\\
194 & 99 & 159 999 462 & 10 659 863 & 566 & 0 & 0\\
196 & 100 & 177 175 687 & 12 163 298 & 1 126 & 0 & 0\\
198 & 101 & 193 814 658 & 13 809 901 & 1 072 & 0 & 0\\
200 & 102 & 214 127 742 & 15 655 672 & 1 943 & 0 & 0\\
202 & 103 & 233 846 463 & 17 749 388 & 2 080 & 0 & 0\\
204 & 104 & 257 815 889 & 20 070 486 & 3 682 & 0 & 0\\
206 & 105 & 281 006 325 & 22 606 939 & 3 992 & 0 & 0\\
208 & 106 & 309 273 526 & 25 536 557 & 6 340 & 0 & 0\\
210 & 107 & 336 500 830 & 28 700 677 & 6 737 & 0 & 0\\
\hline
\end{tabular}
}
\caption{Number of fullerenes for a given lower bound on the
pentagon separation (continued). nv is the number of vertices and nf is the number of faces.}
\label{table:fuller_counts_2}
\end{table}
\begin{table}
\centering
{\small
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
nv & nf & fullerenes & IPR fullerenes & pent.\, sep.\,${}\ge3$ & pent.\, sep.\,${}\ge4$ & pent.\, sep.\,${}\ge5$\\
\hline
212 & 108 & 369 580 714 & 32 230 861 & 10 513 & 0 & 0\\
214 & 109 & 401 535 955 & 36 173 081 & 12 000 & 0 & 0\\
216 & 110 & 440 216 206 & 40 536 922 & 18 169 & 0 & 0\\
218 & 111 & 477 420 176 & 45 278 722 & 20 019 & 0 & 0\\
220 & 112 & 522 599 564 & 50 651 799 & 28 528 & 0 & 0\\
222 & 113 & 565 900 181 & 56 463 948 & 32 276 & 0 & 0\\
224 & 114 & 618 309 598 & 62 887 775 & 46 534 & 0 & 0\\
226 & 115 & 668 662 698 & 69 995 887 & 52 177 & 0 & 0\\
228 & 116 & 729 414 880 & 77 831 323 & 71 303 & 0 & 0\\
230 & 117 & 787 556 069 & 86 238 206 & 79 915 & 0 & 0\\
232 & 118 & 857 934 016 & 95 758 929 & 109 848 & 0 & 0\\
234 & 119 & 925 042 498 & 105 965 373 & 124 153 & 0 & 0\\
236 & 120 & 1 006 016 526 & 117 166 528 & 164 700 & 0 & 0\\
238 & 121 & 1 083 451 816 & 129 476 607 & 184 404 & 0 & 0\\
240 & 122 & 1 176 632 247 & 142 960 479 & 242 507 & 1 & 0\\
242 & 123 & 1 265 323 971 & 157 402 781 & 273 885 & 0 & 0\\
244 & 124 & 1 372 440 782 & 173 577 766 & 353 997 & 0 & 0\\
246 & 125 & 1 474 111 053 & 190 809 628 & 397 673 & 0 & 0\\
248 & 126 & 1 596 482 232 & 209 715 141 & 507 913 & 0 & 0\\
250 & 127 & 1 712 934 069 & 230 272 559 & 570 053 & 0 & 0\\
252 & 128 & 1 852 762 875 & 252 745 513 & 717 983 & 0 & 0\\
254 & 129 & 1 985 250 572 & 276 599 787 & 805 374 & 0 & 0\\
256 & 130 & 2 144 943 655 & 303 235 792 & 1 007 680 & 0 & 0\\
258 & 131 & 2 295 793 276 & 331 516 984 & 1 127 989 & 0 & 0\\
260 & 132 & 2 477 017 558 & 362 302 637 & 1 392 996 & 2 & 0\\
262 & 133 & 2 648 697 036 & 395 600 325 & 1 550 580 & 0 & 0\\
264 & 134 & 2 854 536 850 & 431 894 257 & 1 905 849 & 0 & 0\\
266 & 135 & 3 048 609 900 & 470 256 444 & 2 124 873 & 1 & 0\\
268 & 136 & 3 282 202 941 & 512 858 451 & 2 592 104 & 1 & 0\\
270 & 137 & 3 501 931 260 & 557 745 670 & 2 868 467 & 2 & 0\\
272 & 138 & 3 765 465 341 & 606 668 511 & 3 461 487 & 1 & 0\\
274 & 139 & 4 014 007 928 & 659 140 287 & 3 847 594 & 0 & 0\\
276 & 140 & 4 311 652 376 & 716 217 922 & 4 621 524 & 1 & 0\\
278 & 141 & 4 591 045 471 & 776 165 188 & 5 112 067 & 2 & 0\\
280 & 142 & 4 926 987 377 & 842 498 881 & 6 079 570 & 4 & 0\\
282 & 143 & 5 241 548 270 & 912 274 540 & 6 726 996 & 1 & 0\\
284 & 144 & 5 618 445 787 & 987 874 095 & 7 971 111 & 10 & 0\\
286 & 145 & 5 972 426 835 & 1 068 507 788 & 8 784 514 & 3 & 0\\
288 & 146 & 6 395 981 131 & 1 156 161 307 & 10 352 546 & 7 & 0\\
290 & 147 & 6 791 769 082 & 1 247 686 189 & 11 385 724 & 9 & 0\\
292 & 148 & 7 267 283 603 & 1 348 832 364 & 13 357 318 & 5 & 0\\
294 & 149 & 7 710 782 991 & 1 454 359 806 & 14 652 198 & 6 & 0\\
296 & 150 & 8 241 719 706 & 1 568 768 524 & 17 102 231 & 24 & 0\\
298 & 151 & 8 738 236 515 & 1 690 214 836 & 18 756 139 & 16 & 0\\
300 & 152 & 9 332 065 811 & 1 821 766 896 & 21 766 152 & 32 & 0\\
302 & 153 & 9 884 604 767 & 1 958 581 588 & 23 815 310 & 36 & 0\\
304 & 154 & 10 548 218 751 & 2 109 271 290 & 27 529 516 & 46 & 0\\
306 & 155 & 11 164 542 762 & 2 266 138 871 & 30 090 574 & 54 & 0\\
\hline
\end{tabular}
}
\caption{Number of fullerenes for a given lower bound on the
pentagon separation (continued). nv is the number of vertices and nf is the number of faces.}
\label{table:fuller_counts_3}
\end{table}
\begin{table}
\centering
{\small
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
nv & nf & fullerenes & IPR fullerenes & pent.\, sep.\,${}\ge3$ & pent.\, sep.\,${}\ge4$ & pent.\, sep.\,${}\ge5$\\
\hline
308 & 156 & 11 902 015 724 & 2 435 848 971 & 34 629 672 & 99 & 0\\
310 & 157 & 12 588 998 862 & 2 614 544 391 & 37 770 691 & 93 & 0\\
312 & 158 & 13 410 330 482 & 2 808 510 141 & 43 312 313 & 135 & 0\\
314 & 159 & 14 171 344 797 & 3 009 120 113 & 47 153 778 & 187 & 0\\
316 & 160 & 15 085 164 571 & 3 229 731 630 & 53 899 686 & 211 & 0\\
318 & 161 & 15 930 619 304 & 3 458 148 016 & 58 585 441 & 308 & 0\\
320 & 162 & 16 942 010 457 & 3 704 939 275 & 66 712 070 & 443 & 0\\
322 & 163 & 17 880 232 383 & 3 964 153 268 & 72 395 888 & 535 & 0\\
324 & 164 & 19 002 055 537 & 4 244 706 701 & 82 171 212 & 698 & 0\\
326 & 165 & 20 037 346 408 & 4 533 465 777 & 89 063 353 & 1 026 & 0\\
328 & 166 & 21 280 571 390 & 4 850 870 260 & 100 785 130 & 1 216 & 0\\
330 & 167 & 22 426 253 115 & 5 178 120 469 & 109 068 073 & 1 623 & 0\\
332 & 168 & 23 796 620 378 & 5 531 727 283 & 122 992 213 & 2 489 & 0\\
334 & 169 & 25 063 227 406 & 5 900 369 830 & 132 950 223 & 2 788 & 0\\
336 & 170 & 26 577 912 084 & 6 299 880 577 & 149 523 121 & 3 612 & 0\\
338 & 171 & 27 970 034 826 & 6 709 574 675 & 161 430 830 & 4 744 & 0\\
340 & 172 & 29 642 262 229 & 7 158 963 073 & 181 076 418 & 5 845 & 0\\
342 & 173 & 31 177 474 996 & 7 620 446 934 & 195 124 334 & 7 457 & 0\\
344 & 174 & 33 014 225 318 & 8 118 481 242 & 218 323 289 & 10 591 & 0\\
346 & 175 & 34 705 254 287 & 8 636 262 789 & 235 050 400 & 12 307 & 0\\
348 & 176 & 36 728 266 430 & 9 196 920 285 & 262 381 050 & 15 312 & 0\\
350 & 177 & 38 580 626 759 & 9 768 511 147 & 282 042 413 & 19 574 & 0\\
352 & 178 & 40 806 395 661 & 10 396 040 696 & 314 052 518 & 23 755 & 0\\
354 & 179 & 42 842 199 753 & 11 037 658 075 & 337 229 970 & 29 793 & 0\\
356 & 180 & 45 278 616 586 & 11 730 538 496 & 374 666 300 & 38 688 & 0\\
358 & 181 & 47 513 679 057 & 12 446 446 419 & 401 932 458 & 45 946 & 0\\
360 & 182 & 50 189 039 868 & 13 221 751 502 & 445 482 235 & 55 742 & 0\\
362 & 183 & 52 628 839 448 & 14 010 515 381 & 477 264 068 & 69 970 & 0\\
364 & 184 & 55 562 506 886 & 14 874 753 568 & 528 016 753 & 83 616 & 0\\
366 & 185 & 58 236 270 451 & 15 754 940 959 & 565 045 586 & 100 644 & 0\\
368 & 186 & 61 437 700 788 & 16 705 334 454 & 623 895 236 & 126 048 & 0\\
370 & 187 & 64 363 670 678 & 17 683 643 273 & 666 935 811 & 149 044 & 0\\
372 & 188 & 67 868 149 215 & 18 744 292 915 & 734 907 336 & 179 013 & 0\\
374 & 189 & 71 052 718 441 & 19 816 289 281 & 784 797 263 & 217 673 & 0\\
376 & 190 & 74 884 539 987 & 20 992 425 825 & 863 237 405 & 257 673 & 0\\
378 & 191 & 78 364 039 771 & 22 186 413 139 & 920 935 351 & 302 553 & 0\\
380 & 192 & 82 532 990 559 & 23 475 079 272 & 1 011 152 383 & 367 547 & 1\\
382 & 193 & 86 329 680 991 & 24 795 898 388 & 1 077 679 749 & 434 339 & 0\\
384 & 194 & 90 881 152 117 & 26 227 197 453 & 1 181 149 036 & 507 481 & 0\\
386 & 195 & 95 001 297 565 & 27 670 862 550 & 1 257 630 423 & 611 532 & 0\\
388 & 196 & 99 963 147 805 & 29 254 036 711 & 1 376 400 812 & 707 184 & 0\\
390 & 197 & 104 453 597 992 & 30 852 950 986 & 1 463 926 563 & 820 525 & 0\\
392 & 198 & 109 837 310 021 & 32 581 366 295 & 1 599 524 989 & 982 532 & 0\\
394 & 199 & 114 722 988 623 & 34 345 173 894 & 1 699 970 613 & 1 133 377 & 0\\
396 & 200 & 120 585 261 143 & 36 259 212 641 & 1 854 374 011 & 1 323 509 & 0\\
398 & 201 & 125 873 325 588 & 38 179 777 473 & 1 969 147 856 & 1 546 304 & 0\\
400 & 202 & 132 247 999 328 & 40 286 153 024 & 2 144 985 583 & 1 784 313 & 1\\
\hline
\end{tabular}
}
\caption{Number of fullerenes for a given lower bound on the
pentagon separation (continued). nv is the number of vertices and nf is the number of faces.}
\label{table:fuller_counts_4}
\end{table}
\begin{flushleft}
\textit{Acknowledgements:}
Jan Goedgebeur is supported by a Postdoctoral Fellowship of the Research Foundation Flanders (FWO). Brendan McKay is supported by the Australian Research Council. Most computations for this work were carried out using the Stevin Supercomputer Infrastructure at Ghent University. We also would like to thank Gunnar Brinkmann, Patrick Fowler and Jack Graver for useful suggestions.
\end{flushleft}
\bibliographystyle{plain}
% Generalizations of Chung-Feller Theorem (arXiv:0812.2978, https://arxiv.org/abs/0812.2978)
\begin{abstract}
The classical Chung-Feller theorem [2] tells us that the number of Dyck paths of length $n$ with flaws $m$ is the $n$-th Catalan number, independently of $m$. L. Shapiro [7] found the Chung-Feller properties for the Motzkin paths. In this paper, we find the connections between these two Chung-Feller theorems. We focus on the weighted versions of three classes of lattice paths and give the generalizations of the above two theorems. We prove the Chung-Feller theorems of Dyck type for these three classes of lattice paths and the Chung-Feller theorems of Motzkin type for two of these three classes. From the obtained results, we find the interesting fact that many lattice paths have the Chung-Feller properties of both Dyck type and Motzkin type.
\end{abstract}
\section{Introduction}
Let $\mathcal{S}$ be a subset of the set $\mathbb{Z}\times
\mathbb{Z}\setminus\{(0,0)\}$, where $\mathbb{Z}$ is the set of
integers. We call $\mathcal{S}$ the {\it step set}. Let $k$ be an
integer.
\begin{defn} An $(\mathcal{S},k)$-lattice path is a path in
$\mathbb{Z}\times \mathbb{Z}$ which:
(a) is made only of steps in $\mathcal{S}$;
(b) starts at $(0,0)$ and ends on the line $y=k$.\\
If it is made of $m$ steps and ends at $(r,k)$, we say that it is of
order $m$ and size $r$.
\end{defn}
Let $\mathscr{L}^k$ be the set of all $(\mathcal{S},k)$-lattice
paths. For short, we call $(\mathcal{S},0)$-lattice paths
$\mathcal{S}$-paths and write $\mathscr{L}^0$ as $\mathscr{L}$. Let
$w$ and $l$ be two mappings from $\mathcal{S}$ to $\mathbb{R}$,
where $\mathbb{R}$ is the set of real numbers. We say that $w$
and $l$ are the weight function and the length function of
$\mathcal{S}$ respectively. For any $s\in\mathcal{S}$, $w(s)$ and
$l(s)$ are called the {\it weight} and the {\it length} of the step
$s$ respectively. We can view an $(\mathcal{S},k)$-lattice path $L$
of order $m$ as a word $s_{1}s_{2}\ldots s_{m}$, where
$s_{j}\in\mathcal{S}$. In this word, let $s_{j}$ denote the $j$-th
letter from the left. Define the weight $w(L)$ and the length $l(L)$
of the path $L$ as
$$w(L)=\prod\limits_{j=1}^mw(s_{j})\text{ and }l(L)=\sum\limits_{j=1}^ml(s_{j}).$$ Moreover, we can consider
the path $L$ as a sequence of the points
$$(0,0)=(x_0,y_0),(x_1,y_1),(x_2,y_2),\ldots,(x_m,y_m),$$ where
$(x_{j},y_{j})$ is the end point of the step $s_{j}$ in the lattice
path $L$ for $j\geq 1$. Let $h(s_{j})=h_L(s_{j})=y_{j}$ for all
$j\geq 1$. We say that $h(s_{j})$ is the {\it height} of the step
$s_{j}$ in $L$. Define
$$\bar{l}(L)=\sum\limits_{s_{j}\in L,h(s_{j})\leq 0}l(s_{j}).$$
$\bar{l}(L)$ is called the {\it
non-positive length} of $L$. A {\it minimum point} is a point $(x_i,y_i)$ in the path $L$ such that
$y_i\leq y_j$ for all $j\neq i$; we set $m(L)=y_i$ and call $m(L)$ the
{\it minimum value} of $L$. An {\it absolute minimum point} is the
rightmost minimum point $(x_i,y_i)$, and its index $i$ is called the {\it
absolute minimum position}, denoted by $mp(L)$. Finally, we define
the {\it absolute minimum length} $ml(L)$ of the path $L$ as
$$ml(L)=\sum\limits_{1\leq j\leq mp(L)}l(s_j).$$
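As an illustration, all of these statistics can be computed mechanically from a list of steps. The following Python sketch is our own addition (the encoding of steps as $(dx,dy)$ pairs and the dictionaries for $w$ and $l$ are assumptions); it evaluates $w(L)$, $l(L)$, $\bar{l}(L)$, $m(L)$, $mp(L)$ and $ml(L)$ directly from the definitions above.

```python
def path_stats(steps, w, l):
    """Statistics of a lattice path given as a list of steps (dx, dy).

    w, l: dicts giving the weight and length of each step type.
    Returns (weight, length, nonpos_length, min_value, min_pos, min_length)
    following the definitions in the text.  Illustrative sketch only.
    """
    weight, length = 1, 0
    y = 0
    heights = []
    for s in steps:
        weight *= w[s]
        length += l[s]
        y += s[1]
        heights.append(y)          # h(s_j): height of the j-th step
    # non-positive length: sum of lengths of steps ending at height <= 0
    nonpos = sum(l[s] for s, h in zip(steps, heights) if h <= 0)
    m_val = min([0] + heights)     # the start point (0,0) also counts
    # absolute minimum position: rightmost index attaining the minimum
    mp = max([j + 1 for j, h in enumerate(heights) if h == m_val], default=0)
    ml = sum(l[s] for s in steps[:mp])
    return weight, length, nonpos, m_val, mp, ml
```

For the path $UDDU$ with the Dyck conventions $l((1,1))=1$, $l((1,-1))=0$, for instance, one gets $\bar{l}(L)=1$, $m(L)=-1$, $mp(L)=3$ and $ml(L)=1$.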
\begin{defn} An $(\mathcal{S},k)$-nonnegative path is an $(\mathcal{S},k)$-lattice path
which never goes below the line $y=k$.
\end{defn}
Let $\mathscr{N}^k$ be the set of all
$(\mathcal{S},k)$-nonnegative paths. For short, we call
$(\mathcal{S},0)$-nonnegative paths $\mathcal{S}$-nonnegative paths
and write $\mathscr{N}^0$ as $\mathscr{N}$.
Now, we set $\mathcal{S}=\{(1,1),(1,-1)\}$, $w(s)=1$ for
$s\in\mathcal{S}$, $l((1,1))=1$ and $l((1,-1))=0$. In this situation,
an $(\mathcal{S},0)$-nonnegative path is also called a Dyck path.
We may state the classical Chung-Feller Theorem [2] as follows:

{\it The number of $(\mathcal{S},0)$-lattice paths with length $n$
and non-positive length $m$ is equal to the number of Dyck paths
with length $n$, independently of $m$.}
It is well known that the number of Dyck paths with length $n$ is
the $n$-th Catalan number $c_n=\frac{1}{n+1}{2n\choose{n}}$. The
generating function $C(z):=\sum_{n\ge 0}c_n z^n$ satisfies the
functional equation $C(z)=1+zC(z)^2$ and
$C(z)=\frac{1-\sqrt{1-4z}}{2z}$ explicitly.
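For small $n$ the theorem is easy to verify by brute force. The sketch below is an illustration we added: it enumerates all words with $n$ up-steps and $n$ down-steps; since $l((1,1))=1$ and $l((1,-1))=0$, the non-positive length $m$ is simply the number of up-steps whose endpoint has height at most $0$, and every value $m=0,1,\ldots,n$ is attained by exactly $c_n$ paths.

```python
from itertools import permutations
from math import comb

def catalan(n):
    # c_n = (1/(n+1)) * C(2n, n)
    return comb(2 * n, n) // (n + 1)

def chung_feller_counts(n):
    """Distribution of the non-positive length m over all paths
    with n up-steps and n down-steps (l(U)=1, l(D)=0)."""
    counts = [0] * (n + 1)
    for word in set(permutations("U" * n + "D" * n)):
        y, m = 0, 0
        for step in word:
            y += 1 if step == "U" else -1
            if step == "U" and y <= 0:
                m += 1  # an up-step whose endpoint has height <= 0
        counts[m] += 1
    return counts
```

For $n=2$, for instance, the six paths split two-per-value over $m\in\{0,1,2\}$, in agreement with $c_2=2$.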
The Chung-Feller Theorem was proved by analytic methods in
[2]. T.~V.~Narayana [6] proved the Chung-Feller Theorem by
combinatorial methods. S.~P.~Eu et al.\ [3] proved the Chung-Feller
Theorem by using the Taylor expansions of generating functions and
gave a refinement of this theorem. In [4], they gave a strengthening
of the Chung-Feller Theorem and a weighted version for Schr\"{o}der
paths. Y.~M.~Chen [1] revisited the Chung-Feller Theorem by
establishing a bijection.
Moreover, if we set $\mathcal{S}=\{(1,1),(1,-1),(1,0)\}$, $w(s)=1$
and $l(s)=1$ for $s\in\mathcal{S}$, then the
$(\mathcal{S},0)$-nonnegative paths are the famous Motzkin paths. L.
Shapiro [7] found the following Chung-Feller phenomenon for the
Motzkin paths.

{\it The number of $(\mathcal{S},1)$-lattice paths with length $n+1$
and absolute minimum length $m$ is equal to the number of Motzkin
paths with length $n$, independently of $m$.}
It is well known that the number of Motzkin paths with length $n$ is
the $n$-th Motzkin number $m_n$. The generating function
$M(z):=\sum_{n\ge 0}m_n z^n$ satisfies $M(z)=1+zM(z)+z^2M(z)^2$ and
explicitly $M(z)=\frac{1-z-\sqrt{1-2z-3z^2}}{2z^2}$. Recently,
Shu-Chung Liu et al.\ [5] used a unified algebraic approach to prove
Chung-Feller theorems for Dyck and Motzkin paths and developed a
new method for finding combinatorial structures which have the
Chung-Feller property.
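Comparing coefficients in $M(z)=1+zM(z)+z^2M(z)^2$ gives the recurrence $m_{n+1}=m_n+\sum_{k=0}^{n-1}m_k m_{n-1-k}$ with $m_0=1$; the following short Python sketch (our addition) computes the sequence this way.

```python
def motzkin(n_max):
    """First n_max + 1 Motzkin numbers, from the recurrence
    implied by the functional equation M = 1 + z M + z^2 M^2."""
    m = [1]  # m_0 = 1
    for n in range(n_max):
        # m_{n+1} = m_n + sum_{k=0}^{n-1} m_k * m_{n-1-k}
        m.append(m[n] + sum(m[k] * m[n - 1 - k] for k in range(n)))
    return m
```

The first few values $1, 1, 2, 4, 9, 21, 51, \ldots$ match the coefficients of the closed form for $M(z)$ above.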
The direct motivations of this paper come from the following two
problems:
(1) When $\mathcal{S}=\{(1,1),(1,-1)\}$, $w(s)=1$ for
$s\in\mathcal{S}$, $l((1,1))=1$ and $l((1,-1))=0$, is the number of
$\mathcal{S}$-paths with length $n$ and absolute minimum length $m$
independent of $m$?

(2) When $\mathcal{S}=\{(1,1),(1,-1),(1,0)\}$, $w(s)=1$ and $l(s)=1$
for $s\in\mathcal{S}$, is the number of $\mathcal{S}$-paths with
length $n$ and non-positive length $m$ independent of $m$?
We find that the answers to both problems are yes. In fact, in
this paper we let $A$ and $B$ be two finite subsets of the set
$\mathbb{P}$, where $\mathbb{P}$ is the set of positive
integers, and consider the weighted versions of the following three
classes of lattice paths.
{\bf Class 1.}
$\mathcal{S}_1=\mathcal{S}_{A}\cup\mathcal{S}_{B}\cup\{(1,1)\}$,
where $\mathcal{S}_{A}=\{(2i-1,-1)\mid i\in A\}$ and
$\mathcal{S}_{B}=\{(2i,0)\mid i\in B\}.$
For any step $s\in
\mathcal{S}_1$, let
\begin{center}$l(s)=\left\{\begin{array}{lll}
i&\text{if}&s=(2i,0),\\
i-1&\text{if}&s=(2i-1,-1),\\
1&\text{if}&s=(1,1),
\end{array}\right.
$$w(s)=\left\{\begin{array}{lll}
b_i&\text{if}&s=(2i,0),\\
a_i&\text{if}&s=(2i-1,-1),\\
1&\text{if}&s=(1,1).\\
\end{array}\right.
$\end{center}
{\bf Class 2.}
$\mathcal{S}_2=\mathcal{S}_{A}\cup\mathcal{S}_{B}\cup\{(1,1)\}$,
where $\mathcal{S}_{A}=\{(i,-1)\mid i\in A\}$ and
$\mathcal{S}_{B}=\{(i,0)\mid i\in B\}.$
For any step $s\in
\mathcal{S}_2$, let
\begin{center} $l(s)=\left\{\begin{array}{lll}
i&\text{if}&s=(i,0)\text{ or }s=(i,-1),\\
1&\text{if}&s=(1,1),\\
\end{array}\right.
$ $w(s)=\left\{\begin{array}{lll}
b_i&\text{if}&s=(i,0),\\
a_i&\text{if}&s=(i,-1),\\
1&\text{if}&s=(1,1).\\
\end{array}\right.
$\end{center}
{\bf Class 3.}
$\mathcal{S}_3=\mathcal{S}_{A}\cup\mathcal{S}_{B}\cup\{(1,1)\}$,
where $\mathcal{S}_{A}=\{(1,-2i+1)\mid i\in A\}$ and
$\mathcal{S}_{B}=\{(2i,0)\mid i\in B\}.$
For any step $s\in \mathcal{S}_3$, let
\begin{center}
$l(s)=\left\{\begin{array}{lll}
i&\text{if}&s=(2i,0),\\
0&\text{if}&s=(1,-2i+1),\\
1&\text{if}&s=(1,1),\\
\end{array}\right.
$$w(s)=\left\{\begin{array}{lll}
b_i&\text{if}&s=(2i,0),\\
a_i&\text{if}&s=(1,-2i+1),\\
1&\text{if}&s=(1,1).\\
\end{array}\right.
$\end{center}
First, we give the definition of the pointed lattice paths.
Then we define two parameters on the pointed lattice paths:
non-positive pointed length and absolute minimum pointed length. So,
for any step set $\mathcal{S}$, we say that the pointed
$(\mathcal{S},k)$-lattice paths have the {\it Chung-Feller
property of Dyck type (resp.\ Motzkin type)} if the sum of the
weights of all the pointed $(\mathcal{S},k)$-lattice paths with
length $n$ and non-positive pointed length (resp.\ absolute minimum
pointed length) $m$ is independent of $m$. Finally, we prove the
Chung-Feller theorem of Dyck type for the above three classes of
lattice paths and the Chung-Feller theorem of Motzkin type for
Classes 1 and 2. From the obtained results, we find the interesting
fact that many lattice paths have the Chung-Feller properties of
both Dyck type and Motzkin type. These results tell us that there
are close relations between two parameters of lattice paths: the
non-positive pointed length and the absolute minimum pointed length.
This paper is organized as follows. In Section 2, we give the
definition of the pointed lattice path and the definitions of two
parameters on the pointed lattice path: non-positive pointed length
and absolute minimum pointed length. In Section 3, we prove the
Chung-Feller Theorem of Dyck type for Classes 1,2,3. In Section 4,
we prove the Chung-Feller Theorem of Motzkin type for Classes 1,2.
In Section 5, we give some interesting facts and problems.
\section{The pointed path}
Throughout the paper, we let the step set $\mathcal{S}_i$, as well as
the corresponding weight function $w$ and
length function $l$, be as defined in Class $i$ for $i=1,2,3$.
In this section, we will give the definition of the pointed lattice
path and the definitions of two parameters of the pointed lattice
path: non-positive pointed length and absolute minimum pointed
length.
Let $L=s_{1}s_{2}\ldots s_{m}$ be an $(\mathcal{S}_i,k)$-lattice
path with $l(s_{m})\geq 1$. Recall that $L$ can be viewed as a
sequence of the points
$$(0,0)=(x_{0},y_{0}),(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{m},y_{m}),$$ where
$(x_{j},y_{j})$ is the end point of the step $s_{j}$ in the lattice
path $L$ for $j\geq 1$. For Classes $1$ and $3$ let
$\dot{L}=[L,(x_{m}-2j,0)]$, and for Class $2$ let
$\dot{L}=[L,(x_{m}-j,0)]$, for some $0\leq j\leq l(s_{m})-1$;
moreover, let $p(\dot{L})=j$. $\dot{L}$ is called a {\it pointed
path} since it denotes the path $L$ marked by a point on the
$x$-axis, and $p(\dot{L})$ is called the {\it pointed length} of
$\dot{L}$. Let $\mathscr{M}_i^k$ denote the set of all pointed
$(\mathcal{S}_i,k)$-lattice paths in which the length of the final
step is at least $1$. For short we write $\mathscr{M}_i^0$ as
$\mathscr{M}_i$.
\begin{defn} Given a path
$\dot{L}\in{\mathscr{M}_i^1}$, let
$\overline{lp}(\dot{L})=\bar{l}(L)+p(\dot{L})$.
$\overline{lp}(\dot{L})$ is called the {\it non-positive pointed
length} of $\dot{L}$. Let $mlp(\dot{L})=ml(L)+p(\dot{L})$.
$mlp(\dot{L})$ is called the {\it absolute minimum pointed length}
of $\dot{L}$.
\end{defn}
\begin{exa} Let $\mathcal{S}=\{(1,1),(1,0),(5,-1)\}$, $l((1,1))=1$, $l((1,0))=1$ and $l((5,-1))=5$.
We draw a pointed $\mathcal{S}$-path $\dot{L}$ of length $14$ as
follows, where $*$ denotes the marked point.
\begin{center}
\includegraphics[width=10cm]{pointed.eps}\\
Fig.1. A pointed $\mathcal{S}$-path $\dot{L}$ of length $14$
\end{center}
Note that the path is in Class $2$. So, it is easy to see that $l(L)=14$,
$p(\dot{L})=2$, $\bar{l}(L)=8$, and $ml(L)=6$. Hence,
$\overline{lp}(\dot{L})=10$ and $mlp(\dot{L})=8$.
\end{exa}
For Classes $i=1,2,3$, define the generating functions
\begin{eqnarray*}D_i(y,z)&=&\sum\limits_{\dot{L}\in{\mathscr{M}_i^1}}w(L)y^{\overline{lp}(\dot{L})}z^{l(L)-1}\\
M_i(y,z)&=&\sum\limits_{\dot{L}\in{\mathscr{M}_i^1}}w(L)y^{mlp(\dot{L})}z^{l(L)-1}.
\end{eqnarray*}
Let $\bar{f}_{i;n,m}$ (resp.\ $\bar{g}_{i;n,m}$) be the sum of the
weights of the pointed $(\mathcal{S}_i,1)$-lattice paths
$\dot{L}\in\mathscr{M}_i^1$ with length $n+1$ and non-positive
pointed length (resp.\ absolute minimum pointed length) $m$ for
$(n,m)\neq (0,0)$, and let $\bar{f}_{i;0,0}=\bar{g}_{i;0,0}=1$. It is
easy to see that
\begin{eqnarray}D_i(y,z)=\sum\limits_{n\geq 0}\sum\limits_{m=
0}^n\bar{f}_{i;n,m}y^mz^n.\end{eqnarray} and
\begin{eqnarray}M_i(y,z)=\sum\limits_{n\geq 0}\sum\limits_{m=
0}^n\bar{g}_{i;n,m}y^mz^n.\end{eqnarray}
Let $\mathscr{N}_i$ be the set of all
$\mathcal{S}_i$-nonnegative paths. Define the generating function
$$F_i(z)=\sum\limits_{L\in\mathscr{N}_i}w(L)z^{l(L)}.$$
\begin{lem}\label{Dycktypegenerating} For Classes 1,2 and 3, we have
\\
(1) $F_1(z)=1+\left(\sum\limits_{i\in
B}b_{i}z^i\right)F_1(z)+\left(\sum\limits_{i\in A}a_{i}z^i\right)[F_1(z)]^2$\\
(2) $F_2(z)=1+\left(\sum\limits_{i\in
B}b_{i}z^i\right)F_2(z)+\left(\sum\limits_{i\in
A}a_{i}z^{i+1}\right)[F_2(z)]^2,$\\
(3) $F_3(z)=1+\left(\sum\limits_{i\in
B}b_{i}z^i\right)F_3(z)+\left(\sum\limits_{i\in
A}a_{i}z^i\right)[F_3(z)]^{i+1}. $
\end{lem}
\begin{proof} (1) Given a path $L\in \mathscr{N}_1$ and $L\neq
\emptyset$, we suppose that $s$ is the first step of ${L}$ and
discuss the following two cases:
{\it Case I.} $s=(2i,0)$ for some $i\in B$. We can decompose the
path $L$ into $sR$, where $R\in\mathscr{N}_1$. Note that $l(s)=i$
and $w(s)=b_i$. This provides the term $\left(\sum\limits_{i\in
B}b_iz^i\right)F_1(z)$.
{\it Case II.} $s=(1,1)$. Let $t$ be the first step returning to the
$x$-axis. We can decompose the path $L$ into $sRtQ$, where
$R,Q\in\mathscr{N}_1$ and $t=(2i-1,-1)$ for some $i\in A$. Note that
$l(s)=1$ and $w(s)=1$, $l(t)=i-1$ and $w(t)=a_i$. This provides the
term $\left(\sum\limits_{i\in A}a_iz^i\right)[F_1(z)]^2$.
(2) The proof is similar to that of (1).
(3) Given a path $L\in \mathscr{N}_3$ and $L\neq \emptyset$, we
suppose that $s$ is the first step of ${L}$ and discuss the
following two cases:
{\it Case I.} $s=(2i,0)$ for some $i\in B$. Similar to Case I in
(1), we can obtain the term $\left(\sum\limits_{i\in
B}b_iz^i\right)F_3(z)$.
{\it Case II.} $s\notin\mathcal{S}_{B}$. Let $t=(1,-2i+1)$ be the
first step returning to the $x$-axis for some $i\in A$. Let $s_j$ be
the first step $(1,1)$ of height $j$ for $1\leq j\leq i$. We can
decompose the path $L$ into $s_1R_1s_2R_2\ldots s_iR_itQ$, where
$R_j\in\mathscr{N}_3$ for all $j$ and $Q\in\mathscr{N}_3$. Note that
$l(s_j)=1$ and $w(s_j)=1$ for all $j$, $l(t)=0$ and $w(t)=a_i$. This
provides the term $\left(\sum\limits_{i\in
A}a_iz^i\right)[F_3(z)]^{i+1}$.
\end{proof}
For Classes $i=1,2,3$, let ${f}_{i;n}$ be the sum of the weights of
the $\mathcal{S}_i$-nonnegative paths with length $n$ for $n\geq 1$
and ${f}_{i;0}=1$. It is easy to see
\begin{eqnarray}F_i(z)=\sum\limits_{n\geq 0}{f}_{i;n}z^n.\end{eqnarray}
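The coefficients $f_{i;n}$ can be extracted from the functional equations of Lemma~\ref{Dycktypegenerating} by fixed-point iteration on truncated power series, since $A(z)$ and $B(z)$ have zero constant term and each iteration determines one further coefficient. The Python sketch below is our own illustration and handles the quadratic shape $F=1+B(z)F+A(z)F^2$ of parts (1) and (2); part (3), with the powers $[F_3(z)]^{i+1}$, would need a small extension. Specializing Class 2 to $A=B=\{1\}$ with $a_1=b_1=1$ gives the Motzkin equation $F=1+zF+z^2F^2$.

```python
def series_mul(a, b, n):
    """Product of two power series (coefficient lists), truncated to n terms."""
    c = [0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[: n - i]):
            c[i + j] += ai * bj
    return c

def solve_cap_equation(A_coeffs, B_coeffs, n):
    """Fixed-point iteration for F = 1 + B(z) F + A(z) F^2, truncated to n terms.

    A_coeffs, B_coeffs: coefficient lists of the polynomials A(z), B(z)
    (zero constant term).  Each round fixes one more coefficient of F.
    """
    F = [1] + [0] * (n - 1)
    for _ in range(n):
        BF = series_mul(B_coeffs, F, n)
        AF2 = series_mul(A_coeffs, series_mul(F, F, n), n)
        F = [(1 if k == 0 else 0) + BF[k] + AF2[k] for k in range(n)]
    return F
```

With $A(z)=z$ and $B(z)=0$ the same iteration solves $C=1+zC^2$ and recovers the Catalan numbers.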
\section{The Chung-Feller property of Dyck type}
In this section, we will prove the Chung-Feller Theorem of Dyck type
for Classes $1,2,3$. For $i=1,2,3$, let $\mathscr{P}_i$ be the set
of all the pointed $\mathcal{S}_i$-nonnegative path in which the
length of the final step is no less than $1$ and define the
generating function
$$P_i(y,z)=\sum\limits_{\dot{L}\in
\mathscr{P}_i}w(L)y^{p(\dot{L})}z^{l(L)}.$$
\begin{lem}\label{dycktypegeneragtingpoint} For Classes $1,2,3,$ we
have\\
(1) $P_1(y,z)=1+\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_1(z)+\left(\sum\limits_{i\in
A}a_iz^i\sum\limits_{j=0}^{i-2}y^{j}\right)[F_1(z)]^2$\\
(2) $ P_2(y,z)=1+\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_2(z)+\left(\sum\limits_{i\in
A}a_iz^{i+1}\sum\limits_{j=0}^{i-1}y^{j}\right)[F_2(z)]^2$\\
(3) $ P_3(y,z)=1+\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_3(z).$
\end{lem}
\begin{proof} (1) Given a path $\dot{L}\in \mathscr{P}_1$ with $\dot{L}\neq
\emptyset$, we suppose that $s$ is the final step of $\dot{L}$ and
$(x,0)$ is the final point which the path $\dot{L}$ reaches. Then
$\dot{L}=[L,(x-2j,0)]$ for some $0\leq j\leq l(s)-1$. We discuss the
following two cases:
{\it Case I.} $s=(2i,0)$ for some $i\in B$. We can decompose the
path $L$ into $Rs$, where $R\in\mathscr{N}_1$. Note that $l(s)=i$,
$w(s)=b_i$ and $j\in\{0,1,\ldots,i-1\}$. This provides the term
$\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_1(z)$.
{\it Case II.} $s=(2i-1,-1)$ for some $i\in A$ and $i\geq 2$. Let
$t$ be the right-most step leaving the $x$-axis. We can decompose
the path $L$ into $QtRs$, where $R,Q\in\mathscr{N}_1$ and $t=(1,1)$.
Note that $l(t)=1$, $w(t)=1$, $l(s)=i-1$, $w(s)=a_i$ and
$j\in\{0,1,\ldots,i-2\}$. This provides the term
$\left(\sum\limits_{i\in
A}a_iz^i\sum\limits_{j=0}^{i-2}y^{j}\right)[F_1(z)]^2$.
(2) The proof is similar to that of (1).
(3) For any $\dot{L}\in\mathscr{P}_3$, suppose $s$ is the final step
of $\dot{L}$. Clearly, $s\neq (1,1)$. Furthermore, we have
$s=(2i,0)$ for some $i\in B$ since $l(s)\geq 1$. Using the same
method as in Case I of (1), we obtain the desired identity.
\end{proof}
Now, we turn to $\mathcal{S}_i$-paths for Classes $i=1,2,3$. Let
$\mathscr{L}_i$ be the set of all the $\mathcal{S}_i$-paths. Define
the generating functions
$$G_i(y,z)=\sum\limits_{L\in\mathscr{L}_i}w(L)y^{\bar{l}(L)}z^{l(L)}.$$
\begin{lem}\label{dycktypegeneratingnofinal}
For Classes $1,2,3$, we have
\begin{eqnarray*}(1)~G_1(y,z)&=&1+\left(\sum\limits_{i\in
B}b_iy^iz^i\right)G_1(y,z)+\left(\sum\limits_{i\in
A}a_iy^iz^i\right)F_1(yz)G_1(y,z)\\
&&+\left(\sum\limits_{i\in
A}a_iy^{i-1}z^i\right)F_1(z)G_1(y,z),\\
\mbox{Equivalently,}~G_1(y,z)&=&\frac{1}{1-\sum\limits_{i\in
B}b_iy^iz^i-\left(\sum\limits_{i\in
A}a_iy^iz^i\right)F_1(yz)-\left(\sum\limits_{i\in
A}a_iy^{i-1}z^i\right)F_1(z)}\\
(2)~G_2(y,z)&=&1+\left(\sum\limits_{i\in
B}b_iy^iz^i\right)G_2(y,z)+\left(\sum\limits_{i\in
A}a_iy^{i+1}z^{i+1}\right)F_2(yz)G_2(y,z)\\
&&+\left(\sum\limits_{i\in
A}a_iy^{i}z^{i+1}\right)F_2(z)G_2(y,z),\\
\mbox{Equivalently,}~G_2(y,z)&=&\frac{1}{1-\sum\limits_{i\in
B}b_iy^iz^i-\left(\sum\limits_{i\in
A}a_iy^{i+1}z^{i+1}\right)F_2(yz)-\left(\sum\limits_{i\in
A}a_iy^{i}z^{i+1}\right)F_2(z)}\\
(3)~G_3(y,z)&=&1+\left(\sum\limits_{i\in
A}a_iz^{i}\sum\limits_{j=0}^{i}y^{i-j}[F_3(yz)]^{i-j}[F_3(z)]^j\right)G_3(y,z)\\
&&+\left(\sum\limits_{i\in B}b_iy^iz^i\right)G_3(y,z).\\
\mbox{Equivalently,}~G_3(y,z)&=&\frac{1}{1-\sum\limits_{i\in
B}b_iy^iz^i-\sum\limits_{i\in
A}a_iz^{i}\sum\limits_{j=0}^{i}y^{i-j}[F_3(yz)]^{i-j}[F_3(z)]^j}
\end{eqnarray*}
\end{lem}
\begin{proof} (1) Given a path ${L}\in \mathscr{L}_1$ and ${L}\neq
\emptyset$, we suppose that $s$ is the first step of ${L}$. We
discuss the following three cases:
{\it Case I.} $s=(2i,0)$ for some $i\in B$. We can decompose the
path $L$ into $sR$, where $R\in\mathscr{L}_1$. Note that $l(s)=i$,
$w(s)=b_i$ and $h(s)=0$. This provides the term
$\left(\sum\limits_{i\in B}b_iy^iz^i\right)G_1(y,z)$.
{\it Case II.} $s=(2i-1,-1)$ for some $i\in A$. Let $t=(1,1)$ be the
left-most step returning to the $x$-axis. We can decompose the path
$L$ into $s\overline{Q}tR$, where $R\in\mathscr{L}_1$, and if we
view $\overline{Q}$ as a word $s_1s_2\ldots s_r$ of the steps in
$\mathcal{S}_1$, then $Q=s_{r}s_{r-1}\ldots s_1\in\mathscr{N}_1$.
Note that $l(t)=1$, $w(t)=1$, $l(s)=i-1$, $w(s)=a_i$, $h(s)=-1$ and
$h(t)=0$. This provides the term $\left(\sum\limits_{i\in
A}a_iy^iz^i\right)F_1(yz)G_1(y,z)$.
{\it Case III.} $s=(1,1)$. Let $t=(2i-1,-1)$ be the first step
returning to the $x$-axis. We can decompose the path $L$ into
$sQtR$, where $R\in\mathscr{L}_1$ and $Q\in\mathscr{N}_1$. Note that
$l(t)=i-1$, $w(t)=a_i$, $l(s)=1$, $w(s)=1$, $h(s)=1$ and $h(t)=0$.
This provides the term $\left(\sum\limits_{i\in
A}a_iy^{i-1}z^i\right)F_1(z)G_1(y,z)$.
(2) The proof is similar to that of (1).
(3) Given a path $L\in \mathscr{L}_3$ and $L\neq \emptyset$, we
suppose that $s$ is the first step of ${L}$. We discuss the
following two cases:
{\it Case I.} $s=(2i,0)$ for some $i\in B$. As in Case I of
(1), we obtain the term $\left(\sum\limits_{i\in
B}b_iy^iz^i\right)G_3(y,z)$.
{\it Case II.} $s\notin\mathcal{S}_{B}$. Let $t=(1,-2i+1)$ be the
left-most step which passes the $x$-axis and has height $-i+j$ for
some $i\in A$ and $0\leq j\leq i$. Furthermore, let $s_r$ be the
first step $(1,1)$ with height $r$ for any $1\leq r\leq j$. Let
$u_r$ be the first step $(1,1)$ at the right of $t$ with height
$-r+1$ for any $1\leq r\leq i-j$. So, we can decompose the path $L$
into $s_1R_1s_2R_2\ldots
s_jR_jt\overline{Q}_1u_{i-j}\overline{Q}_2u_{i-j-1}\ldots
\overline{Q}_{i-j}u_1T$, where $R_r\in\mathscr{N}_3$ for all $1\leq
r\leq j$, $T\in\mathscr{L}_3$, and if we view $\overline{Q}_r$ as a
word $s'_{r,1}s'_{r,2}\ldots s'_{r,k}$ of the steps in
$\mathcal{S}_3$, then $Q_r=s'_{r,k}s'_{r,k-1}\ldots
s'_{r,1}\in\mathscr{N}_3$ for all $1\leq r\leq i-j$. Note that
$l(s_r)=1$, $w(s_r)=1$, $h(s_r)\geq 1$ for all $1\leq r\leq j$,
$l(t)=0$, $w(t)=a_i$, $h(t)\leq 0$, $l(u_r)=1$, $w(u_r)=1$ and
$h(u_r)\leq 0$ for all $1\leq r\leq i-j$. This provides the term
$\left(\sum\limits_{i\in
A}a_iz^{i}\sum\limits_{j=0}^{i}y^{i-j}[F_3(yz)]^{i-j}[F_3(z)]^j\right)G_3(y,z)$.
\end{proof}
Recall that $\mathscr{M}_i^1$ is the set of all the pointed
$(\mathcal{S}_i,1)$-paths in which the length of the final step is no
less than $1$ and
$D_i(y,z)=\sum\limits_{\dot{L}\in{\mathscr{M}_i^1}}w(L)y^{\overline{lp}(\dot{L})}z^{l(L)-1}$ for
$i=1,2,3$.
\begin{lem}\label{dycktypegeneratingtheorem} For Classes $i=1,2,3,$ we
have
\begin{eqnarray*}D_i(y,z)=G_i(y,z)P_i(y,z).
\end{eqnarray*}
\end{lem}
\begin{proof} For any $i=1,2,3$, let $\dot{L}\in \mathscr{M}_i^1$.
Let $s$ be the right-most step $(1,1)$ leaving the $x$-axis and reaching
the line $y=1$. We can decompose the path $\dot{L}$ into
$Rs\dot{Q}$, where $R\in\mathscr{L}_i$ and
$\dot{Q}\in\mathscr{P}_i$. Hence, $D_i(y,z)=G_i(y,z)P_i(y,z).$
\end{proof}
Now, we are in a position to prove the Chung-Feller theorem of Dyck
type for Classes 1,2,3.
\begin{thm}\label{dycktypechungfeller} For Classes $i=1,2,3,$ let
$\bar{f}_{i;n,m}$ be the sum of the weights of the pointed
$(\mathcal{S}_i,1)$-lattice paths which \\
(a.) have length $n+1$,\\
(b.) have non-positive pointed length $m$,\\
(c.) have the length of the final step no less than $1$.\\
Let ${f}_{i;n}$ be the sum of the weights of the
$\mathcal{S}_i$-nonnegative paths with length $n$. Then
$\bar{f}_{i;n,m}$ has the Chung-Feller property of Dyck type, i.e.,
$\bar{f}_{i;n,m}={f}_{i;n}$.
\end{thm}
\begin{proof} First, we consider Class 1. By Lemmas 3.1,
3.2 and 3.3, we have
\begin{eqnarray*}D_1(y,z)&=&G_1(y,z)P_1(y,z)\\
&=&\frac{1+\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_1(z)+\left(\sum\limits_{i\in
A}a_iz^i\sum\limits_{j=0}^{i-2}y^{j}\right)[F_1(z)]^2}{1-\sum\limits_{i\in
B}b_iy^iz^i-\left(\sum\limits_{i\in
A}a_iy^iz^i\right)F_1(yz)-\left(\sum\limits_{i\in
A}a_iy^{i-1}z^i\right)F_1(z)}\\
&=&\frac{\left[1+\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_1(z)+\left(\sum\limits_{i\in
A}a_iz^i\sum\limits_{j=0}^{i-2}y^{j}\right)[F_1(z)]^2\right]F_1(yz)}{\left[1-\sum\limits_{i\in
B}b_iy^iz^i-\left(\sum\limits_{i\in
A}a_iy^iz^i\right)F_1(yz)-\left(\sum\limits_{i\in
A}a_iy^{i-1}z^i\right)F_1(z)\right]F_1(yz)}\\
&=&\frac{\left[1+\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_1(z)+\left(\sum\limits_{i\in
A}a_iz^i\sum\limits_{j=0}^{i-2}y^{j}\right)[F_1(z)]^2\right]F_1(yz)}{1-\left(\sum\limits_{i\in
A}a_iy^{i-1}z^i\right)F_1(z)F_1(yz)}
\end{eqnarray*}
since Lemma 2.3 tells us $F_1(yz)-\left(\sum\limits_{i\in
B}b_iy^iz^i\right)F_1(yz)-\left(\sum\limits_{i\in
A}a_iy^iz^i\right)[F_1(yz)]^2=1$. Note that
\begin{eqnarray*}&&\left[1+\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_1(z)+\left(\sum\limits_{i\in
A}a_iz^i\sum\limits_{j=0}^{i-2}y^{j}\right)[F_1(z)]^2\right]F_1(yz)\\
&=&\left[1+\left(\sum\limits_{i\in
B}b_iz^i\frac{y^i-1}{y-1}\right)F_1(z)+\left(\sum\limits_{i\in
A}a_iz^i\frac{y^{i-1}-1}{y-1}\right)[F_1(z)]^2\right]F_1(yz)\\
&=&\frac{1}{y-1}\left[y-1+\left(\sum\limits_{i\in
B}b_iz^i(y^i-1)\right)F_1(z)+\left(\sum\limits_{i\in
A}a_iz^i(y^{i-1}-1)\right)[F_1(z)]^2\right]F_1(yz)\\
&=&\frac{1}{y-1}\left[yF_1(yz)+\left(\sum\limits_{i\in
B}b_iz^iy^i\right)F_1(z)F_1(yz)+\left(\sum\limits_{i\in
A}a_iz^iy^{i-1}\right)[F_1(z)]^2F_1(yz)-F_1(z)F_1(yz)\right]
\end{eqnarray*}
Furthermore, since $\left(\sum\limits_{i\in
B}b_iz^iy^i\right)F_1(yz)=F_1(yz)-\left(\sum\limits_{i\in
A}a_iy^iz^i\right)[F_1(yz)]^2-1$, we
get\begin{eqnarray*}&&\left[1+\left(\sum\limits_{i\in
B}b_iz^i\sum\limits_{j=0}^{i-1}y^{j}\right)F_1(z)+\left(\sum\limits_{i\in
A}a_iz^i\sum\limits_{j=0}^{i-2}y^{j}\right)[F_1(z)]^2\right]F_1(yz)\\
&=&\frac{1}{y-1}\left[yF_1(yz)-\left(\sum\limits_{i\in
A}a_iy^iz^i\right)[F_1(yz)]^2F_1(z)-F_1(z)+\left(\sum\limits_{i\in
A}a_iz^iy^{i-1}\right)[F_1(z)]^2F_1(yz)\right]\\
&=&\frac{\left[yF_1(yz)-F_1(z)\right]\left[1-\left(\sum\limits_{i\in
A}a_iy^{i-1}z^i\right)F_1(z)F_1(yz)\right]}{y-1}.
\end{eqnarray*}Hence, \begin{eqnarray*}D_1(y,z)&=&\frac{yF_1(yz)-F_1(z)}{y-1}\\
&=&\frac{y\sum\limits_{n\geq 0}f_{1;n}y^nz^n-\sum\limits_{n\geq
0}f_{1;n}z^n}{y-1}\\
&=&\sum\limits_{n\geq 0}f_{1;n}z^n\frac{y^{n+1}-1}{y-1}\\
&=&\sum\limits_{n\geq 0}f_{1;n}z^n\sum\limits_{m=0}^ny^m\\
&=&\sum\limits_{n\geq 0}\sum\limits_{m=0}^nf_{1;n}y^mz^n.
\end{eqnarray*}
This implies $\bar{f}_{1;n,m}=f_{1;n}$ for all $0\leq m\leq n$.
Similarly, we can prove the theorems for Classes $i=2,3$.\end{proof}
\begin{cor}(Chung-Feller.) Let $\mathcal{S}=\{(1,1),(1,-1)\}$, $w(s)=1$ for any $s\in\mathcal{S}$,
$l((1,1))=1$ and $l((1,-1))=0$. Then the number of
$(\mathcal{S},0)$-lattice paths with length $n$ and non-positive
length $m$ is the $n$-th Catalan number.
\end{cor}
\begin{proof} For any pointed
$(\mathcal{S},1)$-lattice path, we suppose that $s$ is the final
step in this path. Then $s=(1,1)$ since $l((1,-1))=0<1$. If we
delete the final step of this path and erase the marked point, we
will obtain an $(\mathcal{S},0)$-lattice path with length $n$ and
non-positive length $m$. By Theorem 3.4, the number of pointed
$(\mathcal{S},1)$-lattice paths with length $n+1$ and non-positive
pointed length $m$ in which the length of the final step is no less
than $1$ is equal to the number of the $\mathcal{S}$-nonnegative
paths with length $n$. By Lemma 2.3, we have $F_1(z)=1+z[F_1(z)]^2$
since $\mathcal{S}=\{(1,1),(1,-1)\}$, $w(s)=1$ for any
$s\in\mathcal{S}$, $l((1,1))=1$ and $l((1,-1))=0$. Hence, the number
of the $\mathcal{S}$-nonnegative paths with length $n$ is the
$n$-th Catalan number. This completes the proof.
\end{proof}
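The classical statement above is easy to confirm by brute force for small $n$. The sketch below (illustrative, not part of the paper) enumerates all $2n$-step paths with $n$ up-steps and $n$ down-steps; since $l((1,1))=1$ and $l((1,-1))=0$, the non-positive length is read here as the number of up-steps starting at a negative height, i.e., the steps of non-positive ending height that contribute to the statistic.

```python
# Brute-force check of the Chung-Feller corollary: for each m in {0,...,n},
# the number of 2n-step (+1/-1)-paths ending at 0 with flaw count m equals
# the n-th Catalan number.
from itertools import combinations
from math import comb

def flaw_distribution(n):
    counts = {m: 0 for m in range(n + 1)}
    for ups in combinations(range(2 * n), n):   # positions of the up-steps
        up_set, height, m = set(ups), 0, 0
        for step in range(2 * n):
            if step in up_set:
                if height < 0:   # an up-step starting below the x-axis
                    m += 1
                height += 1
            else:
                height -= 1
        counts[m] += 1
    return counts

for n in range(1, 6):
    catalan = comb(2 * n, n) // (n + 1)
    assert all(c == catalan for c in flaw_distribution(n).values())
print(flaw_distribution(3))   # {0: 5, 1: 5, 2: 5, 3: 5}
```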
\begin{cor}\label{mexample} Let $\mathcal{S}=\{(1,1),(1,-1),(1,0)\}$, $w(s)=1$ and $l(s)=1$ for any $s\in\mathcal{S}$.
Then the number of $(\mathcal{S},1)$-lattice paths
with length $n+1$ and non-positive length $m$ is the $n$-th Motzkin
number.
\end{cor}
\begin{proof} For any pointed
$(\mathcal{S},1)$-lattice path, we suppose that the final point in
this path is $(x,1)$. Then the marked point must be $(x,0)$ since
$l(s)=1$ for any $s\in\mathcal{S}$. So, we can erase the marked
point. By Theorem 3.4, the number of pointed
$(\mathcal{S},1)$-lattice paths with length $n+1$ and non-positive
pointed length $m$ is equal to the number of the
$\mathcal{S}$-nonnegative paths with length $n$. By Lemma 2.3, we
have $F_2(z)=1+zF_2(z)+z^2[F_2(z)]^2$ since
$\mathcal{S}=\{(1,1),(1,-1),(1,0)\}$, $w(s)=1$ and $l(s)=1$ for any
$s\in\mathcal{S}$. Hence, the number of the
$\mathcal{S}$-nonnegative paths with length $n$ is the $n$-th
Motzkin number. This completes the proof.
\end{proof}
\begin{center}
\includegraphics[width=10cm]{motzkindyck.eps}\\
Fig.2. An example of Corollary 3.6, where $n=4$
\end{center}
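Corollary 3.6 can likewise be verified by enumeration. Since $l(s)=1$ for every step, the non-positive length of a path is simply its number of steps ending at height $\leq 0$. The sketch below (illustrative; the function names are ours) confirms that among the $(n+1)$-step paths over $\{(1,1),(1,-1),(1,0)\}$ ending at height $1$, each value $m\in\{0,\ldots,n\}$ is attained by exactly the $n$-th Motzkin number of paths.

```python
# Brute-force check of the Motzkin-path corollary.
from itertools import product

def motzkin(n):
    # M_0 = 1, M_{i+1} = M_i + sum_k M_k M_{i-1-k}
    m = [1]
    for i in range(n):
        m.append(m[i] + sum(m[k] * m[i - 1 - k] for k in range(i)))
    return m[n]

def nonpositive_length_distribution(n):
    counts = {m: 0 for m in range(n + 1)}
    for steps in product((1, -1, 0), repeat=n + 1):
        if sum(steps) != 1:          # the path must end at height 1
            continue
        height, m = 0, 0
        for s in steps:
            height += s
            if height <= 0:          # a step ending at non-positive height
                m += 1
        counts[m] += 1
    return counts

for n in range(1, 7):
    assert all(c == motzkin(n)
               for c in nonpositive_length_distribution(n).values())
print(nonpositive_length_distribution(4))   # {0: 9, 1: 9, 2: 9, 3: 9, 4: 9}
```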
\begin{cor}\label{coro(5,-1)} Let $\mathcal{S}=\{(1,1),(5,-1),(1,-1)\}$, $w(s)=1$ for any $s\in\mathcal{S}$,
$l((1,1))=1$, $l((5,-1))=2$ and $l((1,-1))=0$. Then the number of
pointed $(\mathcal{S},1)$-lattice paths with length $n+1$ and
non-positive pointed length $m$ is equal to the number of
$\mathcal{S}$-nonnegative paths with length $n$.
\end{cor}
We omit the proof of Corollary 3.7. In Fig.3, we show an example of
this corollary with $n=3$, where $*$ denotes the marked point.
\begin{center}
\includegraphics[width=12cm]{fig2.eps}\\
Fig.3. An example of Corollary 3.7, where $n=3$.
\end{center}
\begin{cor}\label{coro(1,-3)} Let $\mathcal{S}=\{(1,1),(2,0),(1,-3)\}$, $w(s)=1$ for any $s\in\mathcal{S}$,
$l((1,1))=1$, $l((2,0))=1$ and $l((1,-3))=0$. Then the number of
pointed $(\mathcal{S},1)$-lattice paths with length $n+1$ and
non-positive pointed length $m$ is equal to the number of
$\mathcal{S}$-nonnegative paths with length $n$.
\end{cor}
We omit the proof of Corollary 3.8. In Fig.4, we show an example of
this corollary with $n=3$.
\begin{center}
\includegraphics[width=5cm]{fig4.eps}\\
Fig.4. An example of Corollary 3.8, where $n=3$.
\end{center}
\section{The Chung-Feller property of Motzkin
type} In this section, we will prove the Chung-Feller theorem of
Motzkin type for Classes 1,2. For $i=1,2,$ recall that an
$(\mathcal{S}_i,-k)$-nonnegative path is an
$(\mathcal{S}_i,-k)$-lattice path which never goes below the line
$y=-k$, where $k\geq 0$, and $\mathscr{N}_i^{-k}$ is the set of all the
$(\mathcal{S}_i,-k)$-nonnegative paths. Define the generating
functions
$$H_{i}^k(z)=\sum\limits_{L\in \mathscr{N}_i^{-k}}w(L)z^{l(L)}.$$
\begin{lem}\label{motzkintypegeneratingminimum} For Classes 1,2, we
have
\begin{eqnarray*}H_1^k(z)=[F_1(z)]^{k+1}\left[\sum\limits_{i\in
A}a_iz^{i-1}\right]^k\text{ and
}H_2^k(z)=[F_2(z)]^{k+1}\left[\sum\limits_{i\in A}a_iz^{i}\right]^k.
\end{eqnarray*}
\end{lem}
\begin{proof} For any path $L\in \mathscr{N}_i^{-k}$ with $L\neq\emptyset$, we
consider the first step $s_m$ with height $-m$, where $1\leq m\leq
k$. Thus we can decompose the path $L$ into
$L_0s_{1}{L}_1s_{2}\ldots {L}_{k-1}s_{k}{L}_{k}$, where
$L_r\in\mathscr{N}_i$ for all $0\leq r\leq k$ and
$s_{j}\in\mathcal{S}_{A}$ for all $j$. Thus,
\begin{eqnarray*}H_1^k(z)=[F_1(z)]^{k+1}\left[\sum\limits_{i\in
A}a_iz^{i-1}\right]^k\text{ and
}H_2^k(z)=[F_2(z)]^{k+1}\left[\sum\limits_{i\in A}a_iz^{i}\right]^k.
\end{eqnarray*}
\end{proof}
Now we focus on the generating functions
$M_i(y,z)=\sum\limits_{\dot{L}\in{\mathscr{M}_i^1}}w(L)y^{mlp(\dot{L})}z^{l(L)-1}$
for $i=1,2$.
\begin{lem}\label{motzkintypegeneratingtheorem} For Classes 1,2, we
have
\begin{eqnarray*}
M_1(y,z)&=&\frac{P_1(y,z)F_1(yz)}{1-\left[\sum\limits_{i\in
A}a_iy^{i-1}z^{i}\right][F_1(yz)F_1(z)]},
\end{eqnarray*}and
\begin{eqnarray*}
M_2(y,z)&=&\frac{P_2(y,z)F_2(yz)}{1-\left[\sum\limits_{i\in
A}a_iy^{i}z^{i+1}\right][F_2(yz)F_2(z)]}.
\end{eqnarray*}
\end{lem}
\begin{proof} For any $k\geq 0$, let ${\mathscr{M}_i^1}(k)$ be the set of all the
paths $L$ in the set ${\mathscr{M}_i^1}$ such that $m(L)=-k$, where
$m(L)$ is the minimum value of $L$. Clearly,
${\mathscr{M}_i^1}=\bigcup\limits_{k\geq 0}{\mathscr{M}_i^1}(k)$.
Given a path $\dot{L}\in{\mathscr{M}_i^1}(k)$ with $\dot{L}\neq \emptyset$,
using the absolute minimum position of $\dot{L}$, we can decompose $\dot{L}$
into $R(1,1)\dot{T}$, where $R\in\mathscr{N}_i^{-k}$. For the path
$\dot{T}$, we consider the rightmost step $(1,1)$ with height $-m$,
where $-1\leq m\leq k-1$. Thus we can decompose the path $T$ into
$L_{k-1}(1,1)L_{k-2}(1,1)\ldots L_{0}(1,1)\dot{Q}$, where $L_{j}$ is an
$\mathcal{S}_i$-nonnegative path for all $0\leq j\leq k-1$ and
$\dot{Q}\in \mathscr{M}_i$, where $\mathscr{M}_i$ is the set of all
the pointed $\mathcal{S}_i$-nonnegative paths. Hence, by Lemmas 3.1
and 4.1, we get \begin{eqnarray*} M_i(y,z)&=&\sum\limits_{k\geq
0}H_i^k(yz)[F_i(z)]^kz^kP_i(y,z).
\end{eqnarray*}Hence,
\begin{eqnarray*}
M_1(y,z)&=&P_1(y,z)F_1(yz)\sum\limits_{k\geq
0}[F_1(yz)]^{k}\left[\sum\limits_{i\in
A}a_iy^{i-1}z^{i-1}\right]^k[F_1(z)]^kz^k\\
&=&\frac{P_1(y,z)F_1(yz)}{1-\left[\sum\limits_{i\in
A}a_iy^{i-1}z^{i}\right][F_1(yz)F_1(z)]},
\end{eqnarray*}and
\begin{eqnarray*}
M_2(y,z)&=&P_2(y,z)F_2(yz)\sum\limits_{k\geq
0}[F_2(yz)]^{k}\left[\sum\limits_{i\in
A}a_iy^{i}z^{i}\right]^k[F_2(z)]^kz^k\\
&=&\frac{P_2(y,z)F_2(yz)}{1-\left[\sum\limits_{i\in
A}a_iy^{i}z^{i+1}\right][F_2(yz)F_2(z)]}.
\end{eqnarray*}
\end{proof}
Now, we can prove the following Chung-Feller theorem of Motzkin type
for Classes 1,2.
\begin{thm}\label{motzkintypechungfeller}
For Classes $i=1,2,$ let $\bar{g}_{i;n,m}$ be the sum of the weights
of the pointed
$(\mathcal{S}_i,1)$-lattice paths which \\
(a.) have length $n+1$,\\
(b.) have absolute minimum pointed length $m$,\\
(c.) have the length of the final step no less than $1$.\\
Let ${f}_{i;n}$ be the sum of the weights of the
$\mathcal{S}_i$-nonnegative paths with length $n$. Then
$\bar{g}_{i;n,m}$ has the Chung-Feller property of Motzkin type,
i.e., $\bar{g}_{i;n,m}={f}_{i;n}$.
\end{thm}
\begin{proof} In fact, the same computation as in the proof of Theorem 3.4 shows that
\begin{eqnarray*}M_i(y,z)&=&\frac{yF_i(yz)-F_i(z)}{y-1}
\end{eqnarray*}
for $i=1,2$. Hence, the theorem holds.\end{proof}
\begin{cor}(L. Shapiro) Let
$\mathcal{S}=\{(1,1),(1,-1),(1,0)\}$, $w(s)=1$ and $l(s)=1$ for any
$s\in\mathcal{S}$. Then the number of $(\mathcal{S},1)$-lattice
paths with length $n+1$ and absolute minimum pointed length $m$ is
the $n$-th Motzkin number.
\end{cor}
\begin{proof} For any pointed
$(\mathcal{S},1)$-lattice path, we suppose the final point in this
path is $(x,1)$. Then the marked point must be $(x,0)$ since
$l(s)=1$ for any $s\in\mathcal{S}$. So, we can erase the marked
point. By Theorem 4.3, the number of pointed
$(\mathcal{S},1)$-lattice paths with length $n+1$ and absolute
minimum pointed length $m$ is equal to the number of the
$\mathcal{S}$-nonnegative paths with length $n$. By Lemma 2.3, we
have $F_2(z)=1+zF_2(z)+z^2[F_2(z)]^2$ since
$\mathcal{S}=\{(1,1),(1,-1),(1,0)\}$, $w(s)=1$ and $l(s)=1$ for any
$s\in\mathcal{S}$. Hence, the number of the
$\mathcal{S}$-nonnegative paths with length $n$ is the $n$-th
Motzkin number. This completes the proof.
\end{proof}
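Shapiro's statement can also be confirmed by enumeration. The formal definition of the absolute minimum pointed length is given in Section 2; the sketch below (illustrative, not from the paper) uses one natural reading for this step set — the number of steps strictly before the last time the path attains its absolute minimum — and reproduces the uniform Motzkin distribution.

```python
# Brute-force check of the Motzkin-type (Shapiro) Chung-Feller property,
# reading the statistic as the position of the last absolute minimum.
from itertools import product

def motzkin(n):
    m = [1]
    for i in range(n):
        m.append(m[i] + sum(m[k] * m[i - 1 - k] for k in range(i)))
    return m[n]

def last_minimum_distribution(n):
    counts = {m: 0 for m in range(n + 1)}
    for steps in product((1, -1, 0), repeat=n + 1):
        if sum(steps) != 1:                  # the path must end at height 1
            continue
        heights = [0]
        for s in steps:
            heights.append(heights[-1] + s)
        low = min(heights)
        last_min = max(i for i, h in enumerate(heights) if h == low)
        counts[last_min] += 1                # l(s) = 1, so m = position
    return counts

for n in range(1, 7):
    assert all(c == motzkin(n)
               for c in last_minimum_distribution(n).values())
```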
\begin{cor}\label{dyckexample} Let $\mathcal{S}=\{(1,1),(1,-1)\}$, $w(s)=1$ for any
$s\in\mathcal{S}$, $l((1,1))=1$ and $l((1,-1))=0$. Then the number
of $\mathcal{S}$-paths with length $n$ and absolute minimum
pointed length $m$ in which the length of the final step is no less
than $1$ is the $n$-th Catalan number.
\end{cor}
\begin{proof} For any pointed
$(\mathcal{S},1)$-lattice path, we suppose the final step in this
path is $s$. Then $s=(1,1)$ since $l((1,-1))=0<1$. If we delete the
final step of this path and erase the marked point, we will obtain an
$\mathcal{S}$-path with length $n$ and absolute minimum length $m$.
By Theorem 4.3, the number of pointed $(\mathcal{S},1)$-lattice
paths with length $n+1$ and absolute minimum pointed length $m$ in
which the length of the final step is no less than $1$ is equal to
the number of the $\mathcal{S}$-nonnegative paths with length $n$.
By Lemma 2.3, we have $F_1(z)=1+z[F_1(z)]^2$ since
$\mathcal{S}=\{(1,1),(1,-1)\}$, $w(s)=1$ for any $s\in\mathcal{S}$,
$l((1,1))=1$ and $l((1,-1))=0$. Hence, the number of the
$\mathcal{S}$-nonnegative paths with length $n$ is the $n$-th
Catalan number. This completes the proof.
\end{proof}
\begin{center}
\includegraphics[width=6cm]{dyckpathmotzkin.eps}\\
Fig.5. An example of Corollary 4.5, where $n=2$
\end{center}
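The Dyck-path case admits the same kind of brute-force check. With $l((1,1))=1$ and $l((1,-1))=0$, one natural reading of the absolute minimum statistic (again an assumption on our part, since the formal definition is in Section 2) counts the up-steps occurring strictly before the last attainment of the path's absolute minimum; empirically, each value $m\in\{0,\ldots,n\}$ is attained by Catalan-many paths.

```python
# Brute-force check: among 2n-step (+1/-1)-paths ending at 0, the number of
# up-steps before the last absolute minimum takes each value in {0,...,n}
# exactly Catalan(n) times.
from itertools import combinations
from math import comb

def abs_min_distribution(n):
    counts = {m: 0 for m in range(n + 1)}
    for ups in combinations(range(2 * n), n):
        heights, up_set = [0], set(ups)
        for step in range(2 * n):
            heights.append(heights[-1] + (1 if step in up_set else -1))
        low = min(heights)
        last_min = max(i for i, h in enumerate(heights) if h == low)
        m = sum(1 for step in ups if step < last_min)  # up-steps before it
        counts[m] += 1
    return counts

for n in range(1, 6):
    catalan = comb(2 * n, n) // (n + 1)
    assert all(c == catalan for c in abs_min_distribution(n).values())
```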
\section{Conclusions}
By Theorems 3.4 and 4.3, the lattice paths in Classes 1 and 2 have
the Chung-Feller properties of both Dyck type and Motzkin type. We
only prove the Chung-Feller theorem of Dyck type for Class 3. In
fact, the lattice paths in Class 3 have the Chung-Feller property
of Motzkin type as well. We do not include this result in this paper
since the statements are rather complicated.
There are
many lattice paths which have the Chung-Feller properties of both
Dyck type and Motzkin type. For simplicity, we set
$\mathcal{S}=\{(1,1),(1,-1)\}$, $w(s)=1$ for any $s\in \mathcal{S}$,
$l((1,1))=1$ and $l((1,-1))=0$. Let $\theta$ be a mapping from
$\mathscr{L}$ to $\mathbb{N}$, where $\mathscr{L}$ is the set of all
the $(\mathcal{S},1)$-lattice paths; $\theta$ is called a parameter
on $(\mathcal{S},1)$-lattice paths. For any $0\leq m\leq n$, if the
number of $(\mathcal{S},1)$-lattice paths $L$ with length $n$
such that $\theta(L)=m$ is independent of $m$, then we say that
$\theta$ is a {\it Chung-Feller parameter} for
$(\mathcal{S},1)$-lattice paths. There are two known Chung-Feller
parameters on $(\mathcal{S},1)$-lattice paths: the non-positive pointed
length and the absolute minimum pointed length. To end this paper, we
propose a problem: are there other Chung-Feller parameters on
$(\mathcal{S},1)$-lattice paths?
https://arxiv.org/abs/1803.03705 | Geodesic Obstacle Representation of Graphs | An obstacle representation of a graph is a mapping of the vertices onto points in the plane and a set of connected regions of the plane (called obstacles) such that the straight-line segment connecting the points corresponding to two vertices does not intersect any obstacles if and only if the vertices are adjacent in the graph. The obstacle representation and its plane variant (in which the resulting representation is a plane straight-line embedding of the graph) have been extensively studied with the main objective of minimizing the number of obstacles. Recently, Biedl and Mehrabi (GD 2017) studied grid obstacle representations of graphs in which the vertices of the graph are mapped onto the points in the plane while the straight-line segments representing the adjacency between the vertices is replaced by the $L_1$ (Manhattan) shortest paths in the plane that avoid obstacles.In this paper, we introduce the notion of geodesic obstacle representations of graphs with the main goal of providing a generalized model, which comes naturally when viewing line segments as shortest paths in the Euclidean plane. To this end, we extend the definition of obstacle representation by allowing some obstacles-avoiding shortest path between the corresponding points in the underlying metric space whenever the vertices are adjacent in the graph. We consider both general and plane variants of geodesic obstacle representations (in a similar sense to obstacle representations) under any polyhedral distance function in $\mathbb{R}^d$ as well as shortest path distances in graphs. Our results generalize and unify the notions of obstacle representations, plane obstacle representations and grid obstacle representations, leading to a number of questions on such embeddings. |
\section{Introduction}
\label{sec:introduction}
\input{introduction.tex}
\section{Notation and Preliminaries}
\label{sec:prelimins}
\input{prelimins.tex}
\section{General Representations}
\label{sec:general}
\input{general.tex}
\section{Non-Crossing Representations}
\label{sec:nonCrossing}
\input{nonCrossing.tex}
\section{Graph Metrics}
\label{sec:graphMetric}
\input{graphMetric.tex}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we introduced the geodesic obstacle representation of graphs, providing a unified generalization of obstacle representations and grid obstacle representations. Our work leaves several problems open. As perhaps the main question, does every planar graph admit a non-crossing $\delta_k$-obstacle representation for some constant $k$? It would also be interesting to extend the classes of graphs for which non-crossing $\delta_k$-obstacle representations exist for small values of $k$. For graph metrics, given two graphs $G$ and $H$, is it \textsc{NP}-hard to decide if $G$ has an $H$-obstacle representation?
\subparagraph{Acknowledgement.} We thank Mark Keil for useful discussions on this problem.
\subsection{$\delta_1$-Obstacle Representations}
In this section, we show that the class of planar graphs that have plane
$\delta_1$-obstacle embeddings is exactly the class of planar bipartite graphs. Since a
graph has a plane $\delta_1$-obstacle embedding if, and only if, it can be
straight-line embedded without $x$-monotone paths of length 2, we prove a
slightly more general result about such embeddings.
\begin{theorem}
A combinatorial embedding of a planar graph (the counter-clockwise order of
the neighbours of each vertex) has a straight-line plane embedding with the
same neighbour orders and no $x$-monotone paths of length 2 if, and only if,
the graph is bipartite.
\end{theorem}
\begin{proof}
If the graph is not bipartite, then it has an odd cycle, whose embedding
must have at least one $x$-monotone path of length 2. On the other hand, we
show how to construct the desired straight-line plane embedding if the graph
is bipartite and has at least 3 vertices. The construction has three stages:
input transformations to simplify the embedding; the embedding itself; and
adaptation of the embedding to the original input.
The first input transformation adds vertices and edges to the graph so a
2-connected quadrilateralization results. The graph is first made connected
by repeatedly adding edges between outer vertices of different connected
components. To make it 2-connected, we need to deal with cut vertices. If
$u$ is a cut vertex, let $v$ be a neighbour of $u$ whose next vertex $w$ in
the counter-clockwise order around $u$ lies in a different 2-connected
component than $v$ and add a path of length 2 between $v$ and $w$, as in
Figure~\ref{figure:bipartite.route-cut-vertex}. This path addition merges
the 2-connected components of $v$ and $w$, so eventually a 2-connected graph
remains.
\begin{figure}[h]
\centering
{\includegraphics{bipartite-1.pdf}}
\caption{\label{figure:bipartite.route-cut-vertex}Routing around a cut
vertex.}
\end{figure}
Note that because the graph is 2-connected and has at least 3 vertices, a
counter-clockwise traversal of any face will not repeat edges or vertices.
Therefore, the neighbour ordering of the original graph is preserved if we
preserve the faces of this 2-connected combinatorial embedding, and this is
what the construction will achieve. To conclude this first input
transformation, we obtain a quadrilateralization by repeatedly inserting
edges between vertices three edges apart in faces with more than four
vertices, which is possible since faces all have an even number of vertices
greater than two.
For the second input transformation, while there are two quadrilaterals
sharing exactly two adjacent edges, we merge these quadrilaterals into one
by erasing these two edges and the isolated vertex, as in Figure
\ref{figure:bipartite.merging}. Note that no cut vertices are introduced.
Also, this merging reduces the number of quadrilaterals by one, so this
process eventually ends.
\begin{figure}[h]
\centering
{\includegraphics{bipartite-2.pdf}}
\caption{\label{figure:bipartite.merging}Merging quadrilaterals to avoid
this adjacency pattern.}
\end{figure}
Finally, for the third input transformation, while there is a vertex $v$
that is not part of the outer face, we first name the neighbours of $v$ in
counter-clockwise order as $w_0, \ldots, w_k$ and the vertex opposing $v$ in
the quadrilateral to the left of $\overrightarrow{v w_k}$ as $u$. Then we
remove $v$ and its incident edges and connect $u$ to $w_1, \ldots, w_{k -
1}$, as in Figure \ref{figure:bipartite.removing-internal-vertex}. Note that
no cut vertices are introduced here either and that the quadrilaterals
originally incident to $v$ did not share any edges but the ones incident to
$v$ due to our second input transformation. Furthermore, each of these steps
removes a vertex, so this process also terminates.
\begin{figure}[h]
\centering
{\includegraphics{bipartite-3.pdf}}
\caption{\label{figure:bipartite.removing-internal-vertex}Removing an
internal vertex.}
\end{figure}
The embedding is now easy to perform because we are dealing with
a 2-connected outerplanar graph, whose weak dual (ignoring the outer
face) is a tree. We can
thus remove leaves from this tree until a single vertex remains. Therefore,
we can start by embedding the quadrilateral corresponding to this vertex in
any feasible way and keep embedding quadrilaterals that share a single
(outer) edge with the current embedding, one at a time. These new
quadrilaterals are embedded as concave quadrilaterals small enough so that
they do not overlap with the rest of the embedding and so that their edges
have the same slope as the shared edge, as in Figure~\ref{figure:bipartite.leaf-embedding}. Because they have the same slope, no
$x$-monotone paths of length 2 are created.
\begin{figure}[h]
\centering
{\includegraphics{bipartite-4.pdf}}
\caption{\label{figure:bipartite.leaf-embedding}Attaching a new
quadrilateral to an edge.}
\end{figure}
Now we have an embedding with no $x$-monotone paths of length 2, but we need
to ``undo'' the transformations we did to the original graph in the reverse
order they were made. To undo a transformation of the third type (Figure~\ref{figure:bipartite.removing-internal-vertex}), note that $w_0, \ldots,
w_k$ must have been embedded all to the left or all to the right of $u$.
W.l.o.g., we assume the latter case. First we erase the edges from $u$ to
$w_1, \ldots, w_{k - 1}$ and we embed the vertex $v$ close to $u$, to the
left of $\overrightarrow{u w_0}$ and to the right of $\overrightarrow{u
w_k}$. If $v$ is close enough to $u$, there will be no crossings or change
in slope when we reinsert the edges from $v$ to $w_0, \ldots, w_k$, as in
Figure~\ref{figure:bipartite.undoing-internal-vertex-removal}.
\begin{figure}[h]
\centering
{\includegraphics{bipartite-5.pdf}}
\caption{\label{figure:bipartite.undoing-internal-vertex-removal}Undoing
an internal vertex removal.}
\end{figure}
Upon close inspection, the second type of transformation (Figure~\ref{figure:bipartite.merging}) can be interpreted as a transformation of
the third type where $k = 1$. The procedure just outlined may be then used
to obtain a valid embedding prior to these transformations. As for
transformations of the first type, they are simply vertex and edge
insertions, so removing these vertices and edges clearly does not create
$x$-monotone paths of length 2, providing us our desired embedding.
\end{proof}
\begin{corollary}
A planar graph has a straight-line $\delta_1$-obstacle embedding if, and
only if, it is bipartite.
\end{corollary}
\subsection{$\delta_2$-Obstacle Representations}
In this section, we focus on plane $\delta_2$-obstacle embeddings. Recall that these are equivalent to the non-blocking planar grid obstacle representation studied by Biedl and Mehrabi~\cite{BiedlM-GD17}. We begin with the positive result that all graphs of treewidth at most 2 (i.e., partial 2-trees) have plane $\delta_2$-obstacle embeddings.
\subparagraph{Treewidth.} A \emph{$k$-tree} is any graph that can be obtained in the following manner: we begin with a clique on $k+1$ vertices and then repeatedly select a subset of the vertices that forms a $k$-clique $K$ and add a new vertex adjacent to every element of $K$. The class of $k$-trees is exactly the set of edge-maximal graphs of treewidth $k$. A graph $G$ is called a \emph{partial $k$-tree} if it is a subgraph of some $k$-tree. The class of partial $k$-trees is exactly the class of graphs of treewidth at most $k$. We will make use of the following lemma, due to Dujmovi\'c and Wood~\cite{DujmovicW07}, in proving Theorem~\ref{thm:2-tree} and later in Section~\ref{subsec:higherK}.
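The construction above can be made concrete with a short sketch. The following Python snippet is our own illustration (not part of the paper's machinery): it builds a random $k$-tree by the stated rule, starting from a clique on $k+1$ vertices.

```python
# Illustrative sketch (not from the paper): build a random k-tree by the rule
# above -- start with a clique on k+1 vertices, then repeatedly pick an
# existing k-clique K and add a new vertex adjacent to every element of K.
import itertools
import random

def random_k_tree(k, n, seed=0):
    """Return the edge set of a random k-tree on vertices 0..n-1 (n >= k+1)."""
    rng = random.Random(seed)
    edges = set(itertools.combinations(range(k + 1), 2))  # initial (k+1)-clique
    cliques = [tuple(range(k + 1))]                        # known (k+1)-cliques
    for v in range(k + 1, n):
        K = rng.sample(rng.choice(cliques), k)             # a k-clique of the graph
        edges.update((u, v) for u in K)                    # attach v to all of K
        cliques.append(tuple(sorted(K)) + (v,))            # K plus v is a new clique
    return edges
```

Any subgraph of such an edge set is then a partial $k$-tree; for instance, a $2$-tree on $n$ vertices always has exactly $2n-3$ edges.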
\begin{lemma}[Dujmovi\'c and Wood~\cite{DujmovicW07}]
\label{lem:dujwood}
Every $k$-tree $G$ is either a clique on $k+1$ vertices or contains a non-empty independent set $S$ and a vertex $u\not\in S$, such that (i) $G\setminus S$ is a $k$-tree, (ii) $\deg_{G\setminus S}(u)=k$, and (iii) every element in $S$ is adjacent to $u$ and to $k-1$ elements of $N_{G\setminus S}(u)$.
\end{lemma}
\begin{theorem}
\label{thm:2-tree}
Every partial 2-tree has a plane straight-line $\delta_2$-obstacle embedding.
\end{theorem}
\begin{proof}
Let $G$ be a partial 2-tree. We can, without loss of generality, assume that $G$ is connected. If $|V(G)|< 4$, then the result is trivial, so we can assume $|V(G)|\ge 4$. We now proceed by induction on $|V(G)|$.
Let $T=T(G)$ be a 2-tree with vertex set $V(G)$ and that contains $G$. Apply Lemma~\ref{lem:dujwood} to find the vertex set $S$ and the vertex $u$. Let $x$ and $y$ be the neighbours of $u$ in $T\setminus S$. Now, apply induction to find a plane straight-line $\delta_2$-obstacle embedding of the graph $G'$ whose vertex set is $V(G')=V(G)\setminus S$ and whose edge set is $E(G')=E(G\setminus S)\cup\{ux,uy\}$. Denote by $S_x$ (resp., $S_y$) the neighbours of $x$ (resp., $y$) that belong to $S$.
Now, observe that, since $u$ has degree 2 in $G'$ and the edges $ux$
and $uy$ are in $G'$, this embedding does not contain any monotone path
of the form $uxw$ or $uyw$ for any $w\in V(G)\setminus\{u,x,y\}$.
Therefore, if we place the vertices in $S$ sufficiently close to $u$,
we will not create any monotone path of the form $ayw$ or $axw$ for
any $a\in S$ and any $w\in V(G)\setminus \{u,x,y\}$. What remains
is to show how to place the elements of $S$ in order to avoid unwanted
monotone paths of the form $uay$, $uax$, or $aub$ for any $a,b\in S$.
There are three cases to consider:
\begin{figure}[t]
\centering
\includegraphics[width=0.90\textwidth]{2treesBoth}
\caption{An illustration supporting the proof of Theorem~\ref{thm:2-tree}.}
\label{fig:2treesBoth}
\end{figure}
\begin{enumerate}
\item $x\in Q^2_i(u)$ and $y\in Q^2_{i+2}(u)$ for some $i\in\{0,\ldots,3\}$. W.l.o.g., assume that $Q^2_{i+3}(u)$ does not intersect the segment $xy$. Then, we can embed the elements of $S$ in $Q^2_{i+3}(u)$ without creating any new monotone paths; see Figure~\ref{fig:2treesBoth}(a).
\item $x,y\in Q^2_i(u)$ for some $i\in\{0,\ldots,3\}$. There are two subcases: \begin{inparaenum}[(i)] \item At least one of $ux$ or $uy$ is in $E(G)$. Suppose $ux\in E(G)$. Then we embed $S_x$ in $Q^2_i(u)$ and embed $S_y$ in $Q^2_{i+3}(u)$; see Figure~\ref{fig:2treesBoth}(b). The only monotone paths this creates are of the form $uax$ with $a\in S_x$, which is acceptable since $ux\in E(G)$. \item Neither $ux$ nor $uy$ is in $E(G)$. In this case, we embed all of $S$ in $Q^2_{i+2}(u)$ (see Figure~\ref{fig:2treesBoth}(c)). This does not create any new monotone paths. \end{inparaenum}
\item $x\in Q^2_i(u)$ and $y\in Q^2_{i+3}(u)$ for some $i\in\{0,\ldots,3\}$. We have three subcases to consider: \begin{inparaenum}[(i)] \item $|\{ux,uy\}\cap E(G)|=1$. In this case, assume $ux\in E(G)$. Then, we embed the vertices of $S_x$ in $Q^2_i(u)$ and we embed the vertices of $S_y$ in $Q^2_{i+1}(u)$. See Figure~\ref{fig:2treesBoth}(d). The only monotone paths this creates are of the form $uax$ with $a\in S_x$, which is acceptable since $ux\in E(G)$. \item $|\{ux,uy\}\cap E(G)|=2$. In this case, we embed the vertices of $S_x$ in $Q^2_i(u)$ and we embed the vertices of $S_y$ in $Q^2_{i+3}(u)$ (see Figure~\ref{fig:2treesBoth}(e)). The only monotone paths this creates are of the form $uax$ with $a\in S_x$ and $uby$ with $b\in S_y$, which is acceptable since $ux,uy\in E(G)$. \item $|\{ux,uy\}\cap E(G)|=0$. In this case, we embed all of $S$ into $Q^2_{i+1}(u)$ (see Figure~\ref{fig:2treesBoth}(f)). This does not create any new monotone paths. \end{inparaenum}
\end{enumerate}
This completes the proof of the theorem.
\end{proof}
We next show that not every planar 3-tree admits a plane $\delta_2$-obstacle embedding. To this end, we first need some preliminary results.
\begin{lemma}
\label{lem:labelling}
The vertices of any triangle $xyz$ can be labelled such that $y,z\in Q^2_i(x)$ for some $i\in\{0,\ldots,3\}$.
\end{lemma}
\begin{proof}
Consider the vertex $x$ and assume w.l.o.g. that $y\in Q^2_i(x)$, for some $i\in\{0,\dots,3\}$. Notice that $x\in Q^2_{i+2}(y)$. If $z\in Q^2_i(x)$ or $z\in Q^2_{i+2}(y)$, then we are done. Otherwise, we must have $x,y\in Q^2_{i+1}(z)$ or $x,y\in Q^2_{i+3}(z)$, which proves the lemma by a re-labelling.
\end{proof}
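To make the sector notation concrete, the following Python sketch is our own illustration: it assumes $Q^k_i(u)$ is the $i$-th of $2k$ equal half-open angular sectors around $u$, with sector $0$ starting at the positive $x$-axis. The paper's sectors may be oriented differently, but Lemma~\ref{lem:labelling} is invariant under rotation, so the brute-force check below still validates its combinatorial content.

```python
# Illustrative sketch under an assumed sector convention: Q^k_i(u) is modelled
# as the i-th of 2k equal angular sectors around u, sector 0 starting at the
# positive x-axis.
import math
import random

def sector(k, u, v):
    """Index i in {0, ..., 2k-1} of the sector Q^k_i(u) that contains v."""
    ang = math.atan2(v[1] - u[1], v[0] - u[0]) % (2 * math.pi)
    return int(ang // (math.pi / k))  # each sector spans an angle of pi/k

def labelling_exists(x, y, z):
    """Lemma 'labelling' (k=2): some vertex sees the other two in one quadrant."""
    return any(sector(2, a, b) == sector(2, a, c)
               for a, b, c in ((x, y, z), (y, x, z), (z, x, y)))

# Brute-force check of the lemma on random (generic-position) triangles.
rng = random.Random(42)
rnd = lambda: (rng.uniform(-1, 1), rng.uniform(-1, 1))
assert all(labelling_exists(rnd(), rnd(), rnd()) for _ in range(1000))
```

For $k=2$ this gives the four quadrants used in this subsection; for $k=3$ and $k=7$ it gives the six and fourteen sectors used later.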
A (1-level) \emph{subdivision} of a triangle $xyz$ is obtained by adding a vertex $w$ in the interior of $xyz$ and adding the edges $wx$, $wy$, $wz$. A $d$-level subdivision of $xyz$ is obtained by repeating this process recursively to a depth of $d$.
\begin{lemma}
\label{lem:level-1}
Let $G$ be a non-crossing $\delta_2$-obstacle embedding of some graph, and let $xyz$ be a three-cycle in $G$ embedded with $x\in Q^2_i(y)$ and $z\in Q^2_i(x)$. Then, $xyz$ does not contain a 3-level subdivision in its interior.
\end{lemma}
\begin{proof}
W.l.o.g., assume that $i=0$ and $x$ is above the edge $yz$. Consider the location of the vertex $w$ that subdivides $xyz$. There are three cases to consider:
\begin{enumerate}
\item The vertex $w$ is placed in $Q^2_0(x)$. In this case,
there will be a $\delta_2$-monotone path from $z$ to the vertex $w'$ that
subdivides $xyw$.
\item The vertex $w$ is placed in $Q^2_2(x)$. In this case,
there will be a $\delta_2$-monotone path from $y$ to the vertex $w'$ that
subdivides $xwz$.
\item The vertex $w$ is placed in $Q^2_3(x)$. In this case,
consider the vertex $w'$ that subdivides $zwy$. The preceding
two arguments prevent $w'$ from being placed in $Q^2_0(w)$
or $Q^2_2(w)$. However, placing $w'$ in $Q^2_3(w)$ creates a
monotone path from $x$ to $w'$.
\end{enumerate}
\end{proof}
\begin{lemma}
\label{lem:level-2}
Let $G$ be a non-crossing $\delta_2$-obstacle embedding of some graph, and let $xyz$ be a three-cycle in $G$ with $y,z\in Q^2_i(x)$ for some $i$. Then, $xyz$ does not contain a 4-level subdivision in its interior.
\end{lemma}
\begin{proof}
If $xyz$ already meets the criteria of Lemma~\ref{lem:level-1}, then we are done. Otherwise, any choice of location for the first-level subdivision vertex creates at least one triangle that does meet the criteria of Lemma~\ref{lem:level-1}, and this triangle cannot contain the required 3-level subdivision in its interior.
\end{proof}
\begin{theorem}
\label{thm:stellated}
There exists a planar 3-tree that does not have a non-crossing $\delta_2$-obstacle embedding.
\end{theorem}
\begin{proof}
Consider the graph $G$ that is a 5-level subdivision of a triangle. In any embedding of $G$, there is a triangle $xyz$ with a 4-level subdivision in its interior. The theorem then follows since, by Lemma~\ref{lem:labelling}, we can apply Lemma~\ref{lem:level-2} to $xyz$.
\end{proof}
Notice that the graph in Theorem~\ref{thm:stellated} has treewidth 3. Figure~\ref{fig:triangular} shows that the infinite triangular grid has a non-crossing $\delta_2$-obstacle embedding. This means that while not all planar graphs of treewidth 3 have a non-crossing $\delta_2$-obstacle embedding, there are planar graphs of treewidth $\Theta(\sqrt{n})$ that admit such an embedding.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\textwidth]{triangular-grid}
\end{center}
\caption{A plane $\delta_2$-obstacle embedding of the triangular grid.}
\label{fig:triangular}
\end{figure}
We next prove that even 4-connectivity does not guarantee the existence of non-crossing $\delta_2$-obstacle embeddings. The idea is to show that a 4-connected triangulation with a plane $\delta_2$-obstacle representation must admit a constrained 4-colouring, in the sense that the representation restricts which colours may be assigned to the neighbours of a vertex and in which order. We then exhibit a 4-connected triangulation that does not admit such a constrained 4-colouring. We next give the details. Let $G$ be a non-crossing $\delta_2$-obstacle representation of a 4-connected triangulation; we call a vertex of $G$ an \emph{outer vertex} if it is incident to the outerface of $G$.
\begin{lemma}
\label{lem:fourTypesOfVertices}
For every internal vertex $u$ of $G$, there is an $i\in\{0,\dots,3\}$ such that $u$ has exactly one neighbour in $Q^2_{i-1}(u)$, exactly one neighbour in $Q^2_{i+1}(u)$ and all remaining neighbours in $Q^2_i(u)$.
\end{lemma}
\begin{proof}
Suppose for a contradiction that this is not the case. Then, since $G$ is 4-connected, $u$ would have two non-adjacent neighbours $x$ and $y$ such that $x\in Q^2_j(u)$ and $y\in Q^2_{j+2}(u)$, for some $j\in\{0,\dots,3\}$. This means that the path $xuy$ is $xy$-monotone, but $x$ and $y$ are not adjacent in $G$ --- a contradiction.
\end{proof}
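As a sanity check of this classification, the following sketch (our own illustration, not the paper's code) detects the type of an internal vertex from the multiset of quadrant indices of its neighbours.

```python
# Illustrative sketch: detect the type i of an internal vertex u from the
# quadrant indices (0..3) of its neighbours, following the lemma above:
# exactly one neighbour in Q^2_{i-1}(u), exactly one in Q^2_{i+1}(u), and all
# remaining neighbours in Q^2_i(u).
from collections import Counter

def vertex_type(neigh_sectors):
    """Return the type i in {0,...,3}, or None if no such pattern exists."""
    count = Counter(neigh_sectors)
    for i in range(4):
        if (count[(i - 1) % 4] == 1 and count[(i + 1) % 4] == 1
                and count[i] == len(neigh_sectors) - 2):
            return i  # this also forces count[(i + 2) % 4] == 0
    return None

assert vertex_type([3, 0, 0, 0, 1]) == 0   # type 0: bulk in quadrant 0
assert vertex_type([0, 0, 2, 2]) is None   # two pairs in opposite quadrants
```

The lemma asserts that, in a plane $\delta_2$-obstacle embedding of a 4-connected triangulation, this function never returns \texttt{None} on an internal vertex.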
Lemma~\ref{lem:fourTypesOfVertices} classifies the internal vertices of $G$ into four types 0, 1, 2, and 3. For each internal vertex $u\in V(G)$, we define $c(u)$ as the \emph{type} of the vertex $u$. For an internal vertex $u$ with $c(u)=i$, Lemma~\ref{lem:fourTypesOfVertices} guarantees that $u$ has $\deg(u)-2$ of its neighbours in $Q^2_i(u)$ and no neighbours in $Q^2_{i+2}(u)$.
\begin{lemma}
\label{lem:typeOfTwoNeighbours}
For an internal vertex $u$ of $G$ with $c(u)=i$, for some $i\in\{0,\dots,3\}$, the neighbour $x$ (resp., $y$) of $u$ in $Q^2_{i+1}(u)$ (resp., $Q^2_{i-1}(u)$) is either an outer vertex or $c(x)=i-1$ (resp., $c(y)=i+1$). \end{lemma}
\begin{proof}
Note first that, since the neighbours of $u$ occupy only the sectors $Q^2_{i-1}(u)$, $Q^2_i(u)$ and $Q^2_{i+1}(u)$, the vertices $x$ and $y$ are consecutive in the cyclic order around $u$ and are therefore adjacent, as $G$ is a triangulation. Since $x\in Q^2_{i+1}(u)$, we have $u\in Q^2_{i-1}(x)$; moreover, $y\in Q^2_{i-1}(u)\subseteq Q^2_{i-1}(x)$. Hence $u$ and $y$ are two neighbours of $x$ that both lie in $Q^2_{i-1}(x)$, which means that $c(x)=i-1$, unless $x$ is an outer vertex. An analogous argument applies to the vertex $y$.
\end{proof}
Lemma~\ref{lem:typeOfTwoNeighbours} yields the following property for an internal vertex $u$ of $G$. If $c(u)=i$ and all neighbours of $u$ are internal (i.e., $u$ has no neighbour on the outerface), then $u$ has two consecutive neighbours $x$ and $y$ such that (i) $c(x)=i-1$ and $c(y)=i+1$, and (ii) $x$ comes before $y$ in the counter-clockwise ordering of the neighbours of $u$. See Figure~\ref{fig:2neighbours}. We call this the \emph{two-neighbour property} of $u$.
\begin{figure}[t]
\centering
\includegraphics[width=1.00\textwidth]{2neighbours}
\caption{The two neighbours $x$ and $y$ of $u$ in four cases depending on the type of $u$. The value beside each vertex denotes its type.}
\label{fig:2neighbours}
\end{figure}
\begin{lemma}
\label{lem:properColouring}
The partial function $c:V(G)\rightarrow\{0,1,2,3\}$ is a proper colouring of the internal vertices of $G$.
\end{lemma}
\begin{proof}
Let $u$ be an internal vertex of $G$ with $c(u)=i$. Then, by Lemma~\ref{lem:typeOfTwoNeighbours}, the neighbour $x$ (resp., $y$) of $u$ in $Q^2_{i+1}(u)$ (resp., in $Q^2_{i-1}(u)$) is either an outer vertex or $c(x)=i-1$ (resp., $c(y)=i+1$). Moreover, any vertex $z\in Q^2_i(u)$ has at least one neighbour (namely, $u$) in $Q^2_{i+2}(z)$ and so $z$ is either an outer vertex or $c(z)\neq i$.
\end{proof}
Now, consider the graph $H$ shown in Figure~\ref{fig:4connectedExample}(a), in which the label of each vertex denotes its colour.
\begin{lemma}
\label{lem:orderingOfHVertices}
Let $G$ be a planar graph that contains $H$ such that all the vertices of $H$ are internal in $G$. If $G$ has a plane $\delta_2$-obstacle embedding, then (referring to the graph $H$ shown in Figure~\ref{fig:4connectedExample}(a)) $c(e)=i$, $c(h)=i+1$, $c(g)=i+2$ and $c(f)=i+3$ up to rotation.
\end{lemma}
\begin{proof}
Since $G$ is planar and all the vertices of $H$ are internal in $G$, the two-neighbour property must hold for every vertex $u$ of $H$ by Lemma~\ref{lem:typeOfTwoNeighbours}. W.l.o.g., assume that $a=0$ and so $c\in\{1,2,3\}$ by Lemma~\ref{lem:properColouring}. We next consider these three cases.
\noindent{\bf Case I: $a=0$ and $c=1$.} Then, we consider the two cases for $b$. (i) If $b=3$, then $f\neq 1$ because otherwise $b$ cannot satisfy the two-neighbour property (i.e., $b$ cannot have two consecutive neighbours $x$ and $y$ such that $c(x)=2$, $c(y)=0$, and $x$ appears before $y$ in the counter-clockwise ordering around $b$). By Lemma~\ref{lem:properColouring}, $f=2$ and so we then have $g=0$. If $h=2$, then we must have $d=3$ in order to satisfy the two-neighbour property for $h$. This will then imply that $e=1$, but then the two-neighbour property does not hold for $d$. If $h=3$, then $d=2$ and $e=1$ by Lemma~\ref{lem:properColouring}. But then, $h$ does not have the two-neighbour property. (ii) If $b=2$, then we must have $g=3$ in order to satisfy the two-neighbour property for $b$. This implies that $f=1$ by proper colouring, but now the two-neighbour property does not hold for $g$.
\noindent{\bf Case II: $a=0$ and $c=2$.} We consider the two cases for $b$. (i) If $b=1$, then $g=0$. Notice that $g$ cannot be 3 because then $f=2$ by proper colouring, but then the two-neighbour property does not hold for $g$. Since $g=0$ and the colours 1 and 3 must appear at two consecutive neighbours of $c$ in counter-clockwise order, we must have $d=1$ and $h=3$. This gives $e=2$ by proper colouring, but then the two-neighbour property does not hold for $e$ (notice that $d=1$ and $h=3$, but they do not appear in counter-clockwise order around $e$). (ii) If $b=3$, then $g\in\{0,1\}$. If $g=0$, then the remaining vertices are forced to be $d=1$ and $h=3$ (by the two-neighbour property for $c$), and then $e=2$ and $f=1$ (by proper colouring) in this order. Similarly, if $g=1$, then $f=2$ (by the two-neighbour property for $b$), $e=3$ and $d=1$ (by the two-neighbour property for $a$), and $h=0$ by proper colouring. Observe that in either case $c(e)=i$, $c(h)=i+1$, $c(g)=i+2$ and $c(f)=i+3$ up to rotation, satisfying the ordering stated in the lemma.
\noindent{\bf Case III: $a=0$ and $c=3$.} We again consider the two cases for $b$. (i) If $b=1$, then we must have $g=0$ and $f=2$ (in order to satisfy the two-neighbour property for $b$). This will then imply that we have $d=2$ and $h=0$ (in order to satisfy the two-neighbour property for $c$). But then we cannot satisfy this property for $h$. (ii) If $b=2$, then we must have $g=1$ and $f=3$ (again, in order to satisfy the two-neighbour property for $b$). But then we cannot satisfy this property for $f$; notice that although $b=2$ and $a=0$, they do not appear in counter-clockwise order around $f$ as required.
Therefore, the only way $G$ can have a plane $\delta_2$-obstacle embedding is to have $a=0$ and $c=2$ in which case we get $c(e)=i$, $c(h)=i+1$, $c(g)=i+2$ and $c(f)=i+3$ up to rotation.
\end{proof}
\begin{theorem}
\label{thm:existsA4Connected}
There exists a 4-connected triangulation $G$ with maximum degree 7 that has no plane $\delta_2$-obstacle embedding.
\end{theorem}
\begin{proof}
Graph $G$ is shown in Figure~\ref{fig:4connectedExample}(b). First, it is easy to see that $G$ is planar, 4-connected and has maximum degree 7. Moreover, $G$ contains two copies of the graph $H$ ``attached'' to each other in its interior. By Lemma~\ref{lem:orderingOfHVertices}, the four vertices on the outerface of each of these copies of $H$ must be coloured in the cyclic order given in Lemma~\ref{lem:orderingOfHVertices}. However, since the two copies of $H$ share two of their outerface vertices, it is not possible to satisfy this ordering for both copies at the same time. As such, $G$ does not admit a plane $\delta_2$-obstacle embedding.
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[width=0.80\textwidth]{graphG}
\caption{(a) Graph $H$ supporting the proof of Lemma~\ref{lem:orderingOfHVertices}. (b) Graph $G$ supporting the proof of Theorem~\ref{thm:existsA4Connected}.}
\label{fig:4connectedExample}
\end{figure}
\subsection{Higher-$k$ $\delta_k$-Obstacle Representations}
\label{subsec:higherK}
In this section, we consider non-crossing $\delta_k$-obstacle embeddings for $k>2$. We start with planar 3-trees.
\begin{theorem}
\label{thm:6grid3Trees}
Every planar 3-tree has a plane $\delta_3$-obstacle embedding.
\end{theorem}
\begin{proof}
The proof is by induction on $n=|V(G)|$. However, our inductive hypothesis is slightly stronger: every $n$-vertex planar 3-tree has a plane $\delta_3$-obstacle embedding in which the neighbours of each vertex $u$ occupy at least 3 of the sectors $Q^3_0(u),\dots,Q^3_5(u)$. As the base case, the smallest planar 3-tree is the clique $K_4$ on four vertices, for which the standard planar drawing of $K_4$ satisfies the requirement. So, assume that $n>4$ and that every planar 3-tree with fewer than $n$ vertices has a plane $\delta_3$-obstacle embedding satisfying the stronger induction hypothesis.
When specialized to planar 3-trees, Lemma~\ref{lem:dujwood} says that every planar 3-tree is either $K_4$ or has a vertex $u$ and an independent set $S$ ($|S|\leq 3$) such that $G\setminus S$ is a 3-tree, $u$ has degree 3 in $G\setminus S$ with neighbours $x, y$ and $z$, and every vertex $r$ in $S$ forms a clique with exactly one of $uxy, uyz$ or $uzx$.
In the case $n>4$, we apply the previous result and recurse on $G\setminus S$. This gives us a plane $\delta_3$-obstacle embedding of $G\setminus S$. By our (stronger) induction hypothesis, there are two cases depending on the locations of $x, y$ and $z$ with respect to $u$. In both cases, the elements of $S$ are placed close enough to $u$ that we do not create any new $\delta_3$-monotone paths involving vertices other than those in $\{u,x,y,z\}\cup S$. Furthermore, since $\{u,x,y,z\}$ forms a clique, we only need to worry about (possibly) creating a new $\delta_3$-monotone path involving at least one vertex of $S$. We now consider two cases.
\begin{itemize}
\item No two neighbours of $u$ are in consecutive cones; e.g., $x\in Q^3_1(u), y\in Q^3_3(u)$ and $z\in Q^3_5(u)$. In this case, we add the elements of $S$ as shown in Figure~\ref{fig:6grid3Trees}(a).
\item Two neighbours of $u$ are in consecutive cones; e.g., $x\in Q^3_1(u), y\in Q^3_2(u)$ and $z\in Q^3_4(u)$. Then, we add the elements of $S$ as shown in Figure~\ref{fig:6grid3Trees}(b).
\end{itemize}
In both cases, we can verify that the (at most three) new neighbours of $u$ also satisfy the stronger inductive hypothesis.
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[width=0.80\textwidth]{6grid3Trees}
\caption{An illustration in support of the proof of Theorem~\ref{thm:6grid3Trees}.}
\label{fig:6grid3Trees}
\end{figure}
\subparagraph{3-connected cubic graphs.} Here, we show that every $3$-connected cubic planar graph has a plane $\delta_7$-obstacle embedding. The algorithm constructs a $\delta_7$-obstacle embedding by adding one vertex at a time according to a canonical ordering of the graph~\cite{Kant96}, and at each step it maintains a set of geometric invariants that guarantee its correctness. The key ingredients are the fact that each new vertex $v$ to be inserted has exactly two neighbors in the already constructed representation, together with the existence of a set of edges whose removal disconnects the representation into two parts, each containing one of the two neighbors of $v$. A sufficient stretching of these edges allows for a suitable placement of the vertex $v$. We next give the details.
A graph is \emph{cubic} if all its vertices have degree three (i.e., it is $3$-regular). Let $G=(V,E)$ be a $3$-connected plane graph; i.e., a $3$-connected planar graph with a prescribed planar embedding. Let $\delta = \{\mathcal{V}_1,\dots,\mathcal{V}_K\}$ be an ordered partition of $V$, that is, $\mathcal{V}_1 \cup \dots \cup \mathcal{V}_K = V$ and $\mathcal{V}_i \cap \mathcal{V}_j = \emptyset$ for $i \neq j$. Let $G_i$ be the subgraph of $G$ induced by $\mathcal{V}_1 \cup \dots \cup \mathcal{V}_i$ and denote by $C_i$ the outerface of $G_i$. The partition $\delta$ is a \emph{canonical ordering} of $G$ if
\begin{itemize}
\item $\mathcal{V}_1=\{v_1,v_2\}$, where $v_1$ and $v_2$ lie on the outerface of $G$ and $(v_1,v_2) \in E$.
\item $\mathcal{V}_K = \{v_n\}$, where $v_n$ lies on the outerface of $G$, $(v_1,v_n) \in E$, and $v_n \neq v_2$.
\item Each $C_i$ ($i > 1$) is a cycle containing $(v_1,v_2)$.
\item Each $G_i$ is $2$-connected and internally $3$-connected.
\item For each $i \in \{2, \dots, K-1\}$, one of the following conditions holds:
\begin{enumerate}
\item $\mathcal{V}_i$ is a \emph{singleton} $v_i$ which belongs to $C_i$ and has at least one neighbor in $G \setminus G_i$.
\item $\mathcal{V}_i$ is a \emph{chain} $\{v_i^1,\dots, v_i^l\}$, both $v_i^1$ and $v_i^l$ have exactly one neighbor each in $C_{i-1}$, and $v_i^2, \ldots, v_i^{l-1}$ have no neighbor in $C_{i-1}$. Since $G$ is $3$-connected, this implies that each $v_i^j$ has at least one neighbor in $G \setminus G_i$.
\end{enumerate}
\end{itemize}
Kant~\cite{Kant96} proved that every $3$-connected plane graph has a canonical ordering that can be computed in linear time. Observe that, if the graph $G$ is cubic and $\mathcal{V}_i$ is a singleton, then $v_i$ has exactly two neighbors in $C_i$ and therefore exactly one neighbor in $G \setminus G_i$. Similarly, if $\mathcal{V}_i$ is a chain, then all its vertices will have exactly one neighbor in $G \setminus G_i$, since they already have two neighbors in $G_i$. Therefore, for each $\mathcal{V}_i$, $i=2,\dots,K-1$, there are exactly two vertices in $G_{i-1}$ that are adjacent to $\mathcal{V}_i$: one adjacent to $v_i^1$ and one to $v_i^l$ if $\mathcal{V}_i$ is a chain, or both adjacent to $v_i$ if $\mathcal{V}_i$ is a singleton. We call them the \emph{leftmost} and \emph{rightmost predecessor} of $\mathcal{V}_i$, respectively.
Let $G$ be a $3$-connected cubic plane graph and let $\delta = \{\mathcal{V}_1,\dots,\mathcal{V}_K\}$ be a canonical ordering of $G$. For $i=2,\dots,K$, we call the edges in the set $B_i$, inductively defined as follows, the \emph{base edges} of $G_i$: for $i=2$, $B_2$ contains all edges of $G_2$; for $i>2$, if $\mathcal{V}_i$ is a singleton, then $B_i=B_{i-1}$, else $B_i=B_{i-1} \cup \{(v_i^1,v_i^2),\dots,(v_i^{l-1},v_i^l)\}$. Furthermore, every vertex of $G_i$ that has a neighbor in $G \setminus G_i$ is called an \emph{attaching vertex of $G_i$}. Note that an attaching vertex of $G_i$ belongs to $C_i$ and that $G_K$ has no attaching vertices. Two attaching vertices $u$ and $v$ of $G_i$ are \emph{consecutive} if there is no attaching vertex between them when walking from $u$ to $v$ along $C_i$ in the direction that does not pass through $v_1$ and $v_2$. We first need the following two results by Di Giacomo et al.~\cite{DiGiacomoLM18}.
\begin{lemma}[Di Giacomo et al.~\cite{DiGiacomoLM18}]
\label{lem:baseEdges}
For every pair of consecutive attaching vertices $u$ and $v$ of $G_i$, there exists a set of base edges $B_i(u,v)$ whose removal disconnects $G_i$ into two subgraphs, one containing $u$ and the other one containing $v$.
\end{lemma}
All edges of $G_i$ that are not base edges are called \emph{attaching edges}.
\begin{lemma}[Di Giacomo et al.~\cite{DiGiacomoLM18}]
\label{lem:exactlyOneBaseEdge}
For every pair of consecutive attaching vertices $u$ and $v$ of $G_i$, there exists exactly one base edge in the path from $u$ to $v$ along $C_i$, and thus all other edges in this path (if any) are attaching edges.
\end{lemma}
We are now ready to prove the following.
\begin{theorem}
\label{thm:3connectedCubic}
Every $3$-connected cubic plane graph has a plane $\delta_7$-obstacle embedding.
\end{theorem}
\begin{proof}
Let $\delta = \{\mathcal{V}_1,\dots,\mathcal{V}_K\}$ be a canonical ordering of a $3$-connected cubic plane graph $G$. The algorithm inductively constructs a drawing of $G$ by adding one set $\mathcal{V}_i$ at a time. The base case is a drawing of $G_2$. We denote by $\Gamma_{i}$ the drawing after the addition of $\mathcal{V}_i$ ($i=2,3,\dots,K$), i.e., the drawing of $G_i$. We prove that each drawing $\Gamma_i$ ($i=2,\dots,K-1$) satisfies the following invariants:
\begin{itemize}
\item[\textbf{I1.}] Every base edge $(u,v)$, assuming $u$ is below $v$, is such that $v$ is inside $Q^7_0(u)$ (resp., $Q^7_6(u)$) if $v$ is to the right (resp., left) of $u$.
\item[\textbf{I2.}] Every other edge $(u,v)$, assuming $u$ is below $v$, is such that $v$ is inside $Q^7_1(u)$ or $Q^7_2(u)$ (resp., $Q^7_4(u)$ or $Q^7_5(u)$) if $v$ is to the right (resp., left) of $u$.
\item[\textbf{I3.}] For every two consecutive attaching vertices $u$ and $v$, assuming $u$ is to the left of $v$, the path from $u$ to $v$ along $C_i$ is drawn $x$-monotone and consists of a (possibly empty) set of downward attaching edges, followed by the unique base edge (cf.\ Lemma~\ref{lem:exactlyOneBaseEdge}), followed by a (possibly empty) set of upward attaching edges.
\end{itemize}
\begin{figure}
\centering
\begin{minipage}[b]{.36\textwidth}
\centering
\includegraphics[width=\textwidth,page=1]{cubic}
\subcaption{~}\label{fig:cubic-g2-odd}{}
\end{minipage}
\hfil
\begin{minipage}[b]{.36\textwidth}
\centering
\includegraphics[width=\textwidth,page=2]{cubic}
\subcaption{~}\label{fig:cubic-g2-even}{}
\end{minipage}
\begin{minipage}[b]{.36\textwidth}
\centering
\includegraphics[width=\textwidth,page=3]{cubic}
\subcaption{~}\label{fig:cubic-vi-before}{}
\end{minipage}
\hfil
\begin{minipage}[b]{.36\textwidth}
\centering
\includegraphics[width=\textwidth,page=4]{cubic}
\subcaption{~}\label{fig:cubic-vi-after}{}
\end{minipage}
\caption{(a-b) Illustrations for the drawing of $G_2$ when $|\mathcal{V}_2|$ is (a) odd and (b) even. (c-d) Illustrations for the addition of $\mathcal{V}_i$ (c) before the stretching operation and (d) after the stretching operation.}
\end{figure}
We first draw $G_2$ as follows. We distinguish between two cases based on whether $\mathcal{V}_2$ contains an odd or even number of vertices. This distinction is needed to avoid monotone paths and to ensure visibility between $v_1$ and $v_2$; refer to Figures~\ref{fig:cubic-g2-odd} and~\ref{fig:cubic-g2-even} for illustrations. If $|\mathcal{V}_2|$ is odd, we set $v_1$ at point $(0,0)$ and $v_2$ at point $(1,0)$. We then draw each vertex $v_2^j$ ($j=1,\dots,l$) in the intersection of $Q^7_0(v_1)$ and $Q^7_7(v_2)$. Each vertex $v_2^j$ is placed inside the sector $Q^7_0(v_2^{j-1})$ if $j$ is odd, or inside the sector $Q^7_{13}(v_2^{j-1})$ if $j$ is even, where $v_2^{0}=v_1$. Also, we may assume that all vertices with even (resp., odd) $j$ have the same $y$-coordinate. If $|\mathcal{V}_2|$ is even, we set $v_1$ at point $(0,0)$ and $v_2$ at point $(1,-\epsilon)$, for a value of $\epsilon$ that guarantees that $v_2$ is inside $Q^7_{13}(v_1)$. We then draw each vertex $v_2^j$ ($j=1,\dots,l$) below $v_1$, above the line connecting $v_1$ and $v_2$, and between $v_1$ and $v_2$. Each vertex $v_2^j$ is placed inside the sector $Q^7_{13}(v_2^{j-1})$ if $j$ is odd, or inside the sector $Q^7_0(v_2^{j-1})$ if $j$ is even, where $v_2^{0}=v_1$. Also, we may assume that all vertices with even (resp., odd) $j$ have the same $y$-coordinate. By definition, all edges of $G_2$ are base edges (thus I2. trivially follows), and this construction guarantees I1. and I3.
Next, we draw each $\mathcal{V}_i$ as follows; refer to Figures~\ref{fig:cubic-vi-before} and~\ref{fig:cubic-vi-after} for illustrations. Let $u$ and $v$ be the leftmost and rightmost predecessors of $\mathcal{V}_i$, respectively. One can see that, by the properties of the canonical ordering together with I2., if $u$ is adjacent to a vertex $w$ in $G_{i-1}$ with $u$ in $Q^7_1(w)$ (resp., $Q^7_2(w)$), then there is no vertex $z$ in $G_{i-1}$ that is adjacent to $u$ and such that $u$ is in $Q^7_2(z)$ (resp., $Q^7_1(z)$). Hence, we can place the vertex of $\mathcal{V}_i$ adjacent to $u$ in either $Q^7_1(u)$ or $Q^7_2(u)$, say $Q^7_1(u)$, without creating any monotone path. Similarly, we can place the vertex of $\mathcal{V}_i$ adjacent to $v$ inside either $Q^7_4(v)$ or $Q^7_5(v)$, say $Q^7_4(v)$. Indeed, we can place all vertices of $\mathcal{V}_i$ in $Q^7_1(u)\cap Q^7_4(v)$. To this end, we need to ensure that $Q^7_1(u)\cap Q^7_4(v)\neq \emptyset$ and that $Q^7_1(u)\cup Q^7_4(v)$ does not intersect $\Gamma_{i-1}$. By I3., the path between $u$ and $v$ contains first a set of downward edges, followed by one base edge, and finally a set of upward edges. Thus, by sufficiently stretching the base edges in $B_{i-1}(u,v)$ (Lemma~\ref{lem:baseEdges}), both conditions can be satisfied. I2. is trivially preserved by the stretching operation. Concerning I1. and I3., note that stretching a base edge makes it nearly horizontal but does not change the sector in which the edge lies. After the stretching operation, we can safely draw $\mathcal{V}_i$ in $Q^7_1(u)\cap Q^7_4(v)$ if it is a singleton, or by using a technique similar to that used for $G_2$ if it is a chain.
Invariants I1. and I2. imply that $G_{K-1}$ admits a plane $\delta_7$-obstacle embedding. The addition of $\mathcal{V}_K$ requires some special care, since $v_K$ has three predecessors. One of these predecessors is $v_1$, by the definition of canonical ordering. Let $u$ and $v$ be the other two predecessors in the order they appear when walking clockwise along $C_{K-1}$ from $v_1$ to $v_2$. We place $v_K$ in $Q^7_3(u)\cap Q^7_4(v)$ similarly as we did for the previous sets (this may require stretching the base edges in $B_{K-1}(u,v)$). We now aim at realizing the edge $(v_1,v_K)$ such that $v_K$ is in $Q^7_2(v_1)$ (or equivalently in $Q^7_3(v_1)$). If $v_K$ already belongs to $Q^7_2(v_1)$, then we are done. Else, we can assume that the $y$-coordinate of $v_K$ is sufficiently large to guarantee that $v_K$ lies above $Q^7_2(v_1)$. Note that the other two edges incident to $v_1$ are base edges that belong to $G_2$, and thus we can move $v_1$ horizontally to the left until $v_K$ lies in $Q^7_2(v_1)$. This concludes the proof.
\end{proof}
\subparagraph{Bounded-degree planar graphs.} Here, we show that every planar graph with maximum degree $\Delta$ has a plane $\delta_{O(\Delta)}$-obstacle embedding. Informally speaking, the idea is to apply the algorithm of Keszegh et al.~\cite{KeszeghPP13} for drawing bounded-degree planar graphs with few slopes, and then to use this bounded set of slopes to define a polyhedral distance function with $k=O(\Delta)$. However, there might exist edges in this drawing with collinear endpoints, which are not allowed for our purpose. We resolve this by re-scaling the drawing and shifting the vertices according to a 4-colouring of the graph. We next give the details.
Let $G$ be a planar graph with maximum degree $\Delta$ and let $c=c(\Delta)$ be an integer that depends only on $\Delta$. Keszegh et al.~\cite{KeszeghPP13} proved that each vertex $u\in V(G)$ can be assigned a non-negative integer $\ell(u)$ such that (i) for any edge $uw\in E(G)$, $|\ell(u)-\ell(w)|\leq c$, and (ii) there exists a planar straight-line drawing of $G$ in which the $x$- and $y$-coordinates of every vertex $u$ are both divisible by $2^{\ell(u)-c}$, and each edge incident to a vertex $u$ has length at most $2^{\ell(u)+c}$. Notice that each edge in this drawing has a slope that is the same as that of some line segment whose endpoints are on a $2^{2c}\times 2^{2c}$ grid; this drawing thus uses a fixed number of slopes, for a fixed $\Delta$.
It remains to show how collinear edges are dealt with. First, we compute a 4-colouring $c: V(G)\to \{0,1,2^c+2,2^{2c}+2^c+4\}$ of the vertices of $G$. Next, we re-scale the grid to a $2^{6c}\times 2^{6c}$ grid. Then, for every vertex $u=(x_u,y_u)\in V(G)$, we move $u$ to the grid point with coordinates $(x_u,y_u+c(u))$; i.e., the $y$-coordinate of every vertex is increased by the value of its colour. In the following, we show that this transformation results in a drawing of $G$ with no collinear edges. For a vertex $u\in V(G)$, let $u'$ denote its new position.
Consider two edges $uv$ and $vw$ that were collinear before this transformation, and assume w.l.o.g. that $x_u<x_v<x_w$. Then $(x_v-x_u)/(y_v-y_u)=(x_w-x_v)/(y_w-y_v)$; hence we may write $x_w-x_v=a(x_v-x_u)$ and $y_w-y_v=a(y_v-y_u)$, where $a$ is a rational whose numerator and denominator both lie between 1 and $2^c$. If these two edges are still collinear after the transformation, then we must have
\[
\frac{x_v-x_u}{(y_v-y_u)+(c(v)-c(u))}=\frac{x_w-x_v}{(y_w-y_v)+(c(w)-c(v))},
\]
which simplifies to $c(w)-c(v)=a(c(v)-c(u))$. By the bound on $a$ and the values assigned to the colours, it is easy to verify that this equality cannot hold.
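This exhaustive check is small enough to run directly. The following sketch (function name ours; we take the illustrative value $c=3$) verifies that $c(w)-c(v)=a(c(v)-c(u))$ has no solution for distinct colours and any admissible $a$:

```python
from fractions import Fraction

def colouring_blocks_collinearity(c):
    """Verify that c(w)-c(v) = a*(c(v)-c(u)) has no solution when u, v, w
    receive distinct colours and a = num/den with 1 <= num, den <= 2^c."""
    colours = [0, 1, 2**c + 2, 2**(2 * c) + 2**c + 4]
    ratios = {Fraction(num, den)
              for num in range(1, 2**c + 1)
              for den in range(1, 2**c + 1)}
    for cu in colours:
        for cv in colours:
            for cw in colours:
                if len({cu, cv, cw}) < 3:
                    continue  # adjacent vertices receive distinct colours
                if Fraction(cw - cv, cv - cu) in ratios:
                    return False  # some admissible a satisfies the equality
    return True

assert colouring_blocks_collinearity(3)
```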
Now, consider two edges $uv$ and $vw$ that were non-collinear before this transformation, and assume w.l.o.g. that $x_u<x_v<x_w$. They remain non-collinear after the transformation because the difference between their slopes is bounded away from zero. Indeed, we know that
\[
\left|\frac{x_v-x_u}{y_v-y_u}-\frac{x_w-x_v}{y_w-y_v}\right|>\frac{1}{2^{2c}}.
\]
Since the grid is re-scaled by $2^{6c}$ and the largest value of a colour assigned to a vertex is $2^{2c}+2^c+4$, we can ensure that these two edges cannot become collinear after the transformation. Hence, we have the following theorem.
\begin{theorem}
Every planar graph with maximum degree $\Delta$ has a plane $\delta_{O(\Delta)}$-obstacle embedding.
\end{theorem}
| {
"timestamp": "2018-03-13T01:02:40",
"yymm": "1803",
"arxiv_id": "1803.03705",
"language": "en",
"url": "https://arxiv.org/abs/1803.03705",
"abstract": "An obstacle representation of a graph is a mapping of the vertices onto points in the plane and a set of connected regions of the plane (called obstacles) such that the straight-line segment connecting the points corresponding to two vertices does not intersect any obstacles if and only if the vertices are adjacent in the graph. The obstacle representation and its plane variant (in which the resulting representation is a plane straight-line embedding of the graph) have been extensively studied with the main objective of minimizing the number of obstacles. Recently, Biedl and Mehrabi (GD 2017) studied grid obstacle representations of graphs in which the vertices of the graph are mapped onto the points in the plane while the straight-line segments representing the adjacency between the vertices is replaced by the $L_1$ (Manhattan) shortest paths in the plane that avoid obstacles.In this paper, we introduce the notion of geodesic obstacle representations of graphs with the main goal of providing a generalized model, which comes naturally when viewing line segments as shortest paths in the Euclidean plane. To this end, we extend the definition of obstacle representation by allowing some obstacles-avoiding shortest path between the corresponding points in the underlying metric space whenever the vertices are adjacent in the graph. We consider both general and plane variants of geodesic obstacle representations (in a similar sense to obstacle representations) under any polyhedral distance function in $\\mathbb{R}^d$ as well as shortest path distances in graphs. Our results generalize and unify the notions of obstacle representations, plane obstacle representations and grid obstacle representations, leading to a number of questions on such embeddings.",
"subjects": "Computational Geometry (cs.CG)",
"title": "Geodesic Obstacle Representation of Graphs"
} |
https://arxiv.org/abs/1705.01652 | Polluted Bootstrap Percolation with Threshold Two in All Dimensions | In the polluted bootstrap percolation model, the vertices of a graph are independently declared initially occupied with probability p or closed with probability q. At subsequent steps, a vertex becomes occupied if it is not closed and it has at least r occupied neighbors. On the cubic lattice Z^d of dimension d>=3 with threshold r=2, we prove that the final density of occupied sites converges to 1 as p and q both approach 0, regardless of their relative scaling. Our result partially resolves a conjecture of Morris, and contrasts with the d=2 case, where Gravner and McDonald proved that the critical parameter is q/{p^2}. | \section{Introduction}\label{sec-intro}
Bootstrap percolation is a fundamental cellular
automaton model for nucleation and growth from sparse
random initial seeds. In this article we address how the
model is affected by the presence of pollution in the form
of sparse random permanent
obstacles.
Let $\mathbb Z^d$ be the set of $d$-vectors of integers, which we
call \df{sites}, and let $p,q\in[0,1]$ be parameters. In the
\df{initial} (time zero) configuration, each site is chosen
to have exactly one of three possible states:
$$\begin{cases}
\text{\df{closed}} & \text{with probability }q; \\
\text{\df{open} and \df{initially occupied}} & \text{with probability }p; \\
\text{\df{open} but not initially occupied} & \text{with probability }1-p-q.
\end{cases}
$$
Initial states are chosen independently for different sites. Closed sites represent pollution or obstacles, while occupied sites represent a growing agent.
The configuration evolves in discrete time steps $t=0,1,2,\ldots$ as follows.
As usual we make $\mathbb Z^d$ into a graph by declaring sites
$u,v\in\mathbb Z^d$ to be neighbors if $\|u-v\|_1=1$. The
\df{threshold} $r$ is an integer parameter. An open site $x$ that is unoccupied at time $t$
becomes occupied at time $t+1$ if and only if
\begin{equation}
\text{at least $r$ neighbors of $x$ are occupied}
\label{standard}
\end{equation}
at time $t$.
Closed sites remain closed forever and
cannot become occupied. Open sites remain open. Once a site is occupied, it remains
occupied. In the main cases of interest, $d\geq r\geq 2$.
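For concreteness, the dynamics just described can be simulated directly. The following minimal sketch (all names ours) runs the polluted model on a finite box, iterating the growth rule \eqref{standard} to its fixed point:

```python
import itertools
import random

def polluted_bootstrap(n, d, p, q, r=2, seed=0):
    """Polluted bootstrap percolation on the finite box {0,...,n-1}^d:
    each site is independently closed (prob. q), initially occupied
    (prob. p), or open and unoccupied; then iterate the growth rule
    until no further site can become occupied."""
    rng = random.Random(seed)
    sites = list(itertools.product(range(n), repeat=d))
    closed, occupied = set(), set()
    for x in sites:
        u = rng.random()
        if u < q:
            closed.add(x)
        elif u < q + p:
            occupied.add(x)

    def neighbors(x):
        for i in range(d):
            for s in (-1, 1):
                yield x[:i] + (x[i] + s,) + x[i + 1:]

    changed = True
    while changed:
        changed = False
        for x in sites:
            if x in occupied or x in closed:
                continue
            # standard rule: at least r occupied neighbors
            if sum(y in occupied for y in neighbors(x)) >= r:
                occupied.add(x)
                changed = True
    return occupied, closed
```

Since the rule is monotone, the naive fixed-point loop terminates, and closed sites never become occupied by construction.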
Bootstrap percolation without pollution (the case $q=0$ in our formulation) has a long and rich history with many surprises.
For $d\geq r \geq 1$, there is no phase transition in $p$, in the sense that every site of $\mathbb Z^d$ is eventually occupied almost surely for every $p>0$, as proved in \cite{van-enter} ($d=2$) and \cite{schonmann} ($d\geq 3$). The metastability properties of the model on finite regions are understood in great depth (see e.g.\ \cite{AL,Hol1,BBDM,GHM}), while a broad range of variant growth rules have also been explored (e.g.\ \cite{GG,DE,BDMS}). For further background see the discussion later in the introduction, and the excellent recent survey \cite{Mor}.
The polluted bootstrap model (i.e.\ the case $q>0$) was introduced by Gravner and McDonald \cite{GM} in 1997. The principal quantity of interest is the {\it final density\/} of occupied sites, i.e.\ the probability that the origin is eventually occupied, in the regime where $p$ and $q$ are both small. In dimension $d=2$ with threshold $r=2$, Gravner and McDonald proved that the final density is strongly dependent
on the relative scaling of $p$ and $q$. Specifically,
there exist constants $c,C>0$ such that, as $p\to 0$ and $q\to 0$ simultaneously,
$$
\P\bigl(\text{the origin
is eventually occupied}\bigr)\to
\begin{cases}
1, & \text{if } q<c p^2;\\
0, & \text{if } q>C p^2.
\end{cases}
$$
In this article we give the first rigorous treatment of the polluted bootstrap percolation model in dimensions $d\geq 3$. We take the threshold $r$ to be $2$. (Threshold $r=3$ is addressed in a companion paper \cite{GHS} by the current authors together with Sivakoff, as discussed below). Our main result is that, in contrast with dimension $d=2$, occupation prevails regardless of the $p$ versus $q$ scaling.
\begin{thm}\label{main}
Consider polluted bootstrap percolation on $\mathbb Z^d$ with $d\ge 3$, threshold
$r=2$, density $p>0$ of initially occupied sites,
and density $q>0$ of closed sites. We have
$$
\P\bigl(\text{\rm the origin
is eventually occupied}\bigr)\to 1\qquad\text{as }(p,q)\to (0,0).
$$
Moreover, the probability that the origin lies in an infinite
connected set of eventually occupied sites also tends to $1$.
The same statements hold for modified bootstrap percolation.
\end{thm}
In the above statement, a set of sites is called \df{connected} if it induces a connected subgraph of $\mathbb Z^d$. The \df{modified} bootstrap percolation model is a well-known variant of the standard model, in which the condition \eqref{standard} for a site to become occupied is replaced with:
\begin{equation*}
\text{for at least $r$ of the directions $i=1,\ldots, d$,
either $x-e_i$ or $x+e_i$ is occupied,}
\end{equation*}
where $e_i$ is the $i$th coordinate vector. (As before, closed sites cannot become occupied, and occupied sites remain occupied forever).
\cref{main} resolves Conjecture 4.6 of Morris \cite{Mor} in the key
case $r=2$. To be precise, this conjecture may be expressed as: for all $d>r \geq 1$, there exists an infinite connected eventually occupied set with probability at least $1/2$ for $(p,q)$ sufficiently close to $(0,0)$. The author states that the conjecture seems to be very difficult.
Defining
$\phi(p,q)=\phi_{d,r}(p,q)$ to be the probability that the origin is eventually occupied, it follows from the obvious monotonicities of the model that $\phi$ is (weakly) increasing in $p$ and decreasing in $q$. Therefore, the convergence in \cref{main} is equivalent to $\lim_{q\to 0} \lim_{p\to 0} \,\phi(p,q)=1$. This formulation will be reflected in our proof. We will show that for $q$ sufficiently small there is an infinite structure of open sites on which occupation can spread, no matter how small $p$, and that the density of this structure tends to $1$ as $q\to 0$. Our methods are very different from those in previous works on bootstrap percolation, and involve the technology of oriented surfaces introduced recently in \cite{DDGHS}.
Our result reveals an interesting phase transition. Let $r=2$ and $d\geq 3$ and consider the decreasing function $\phi^+(q):=\phi(0^+,q)=\lim_{p\to 0^+} \phi(p,q)$. \cref{main} implies that $\phi^+(q)>0$ for $q$ sufficiently close to $0$. On the other hand, standard arguments imply that $\phi^+(q)=0$ if $q$ exceeds one minus the critical probability $p_c^{\textrm{site}}(\mathbb Z^d)$ of site percolation. Therefore the critical probability
$$q_c:=\inf\{q: \phi^+(q)=0\}$$
is nontrivial. In fact, we show the following slightly stronger fact involving a strict inequality.
\begin{cor} \label{main-follow} Consider the setting of
\cref{main}. The critical value $q_c$ defined above satisfies
$0<q_c\le 1-p_c^{\text{\rm site}}(\mathbb Z^d)$. For $d=3$, the latter inequality is strict. The function $\phi^+$ vanishes
on $(q_c,1]$, is strictly positive
on $[0,q_c)$, and converges to $1$ as $q\to 0$.
\end{cor}
Our methods do not produce a good lower bound on $q_c$, and give
no information on the behavior of $\phi^+$ near $q_c$.
As mentioned earlier, the companion paper \cite{GHS} treats polluted bootstrap percolation with threshold $r=3$. The strongest result of \cite{GHS} is for the modified bootstrap percolation model with $d=r=3$. Similarly to the case $d=r=2$ of \cite{GM}, but in contrast with the $d>r=2$ case of \cref{main}, the final density here depends on the $p$ versus $q$ scaling, but now with a \emph{cube} law (modulo logarithmic factors). Specifically, as $p,q\to 0$, the final occupied density converges to $1$ if $q< c\,(p/\log p^{-1})^3$, and to $0$ if $q> C p^3$. Interestingly, the first of these bounds relies crucially on \cref{main} of the current article (together with a straightforward renormalization argument). The second bound (which is far from straightforward) again uses oriented surfaces, but in a completely different way: to block growth rather than to facilitate it.
We record some simple observations about other choices of the threshold $r$. For $d=r$, notwithstanding the detailed results of \cite{GM,GHS}, an easy argument rules out the conclusion $\lim_{(p,q)\to(0,0)} \phi(p,q)=1$ of \cref{main}. Indeed, if $p,q\to 0$ with $p=o(q^{2^d})$ then with high probability there exist $M<0<N$ such that no site in the box $\{0,1\}^{d-1}\times [M,N]$ is initially occupied but every site on the two ends $\{0,1\}^{d-1}\times \{M,N\}$ is closed. On this event, the origin cannot become occupied. (For the modified model, the same argument works even for the line $\{0\}^{d-1}\times [M,N]$, giving the same conclusion under the weaker assumption $p=o(q)$. Similar comparisons involving
$\{0,1\}^{d-d'}\times \mathbb Z^{d'}$ or $\{0\}^{d-d'}\times \mathbb Z^{d'}$ for $d>d'$ are available, which, when combined with the results of \cite{GM} for $d'=2$ or \cite{GHS} for $d'=3$, yield further improvements.) On the other hand, the case of threshold $r=1$ is easily understood via standard site percolation: the final occupied set is simply the union of all open clusters that contain initially occupied sites. (This observation is relevant to \cref{main-follow}.) Finally, thresholds $r>d$ are less interesting to us, since, even with no closed sites, there are finite sets such as $\{0,1\}^d$ that remain unoccupied forever if unoccupied initially, so $\lim_{(p,q)\to(0,0)} \phi(p,q)=0$.
\subsection*{Background}
Bootstrap percolation is an established model for nucleation and
metastability, and one of very few cellular automaton models with a well-developed mathematical theory. It has been applied in physics, biology, and
social science to various growth phenomena, including crack formation, crystal growth, and spread of information or infection. See \cite{GZH} for a recent example. Bootstrap percolation has been used in the rigorous analysis of other models such as sandpile and Ising models; see e.g.\ \cite{Mor}. The evolving set method in Markov mixing theory can be viewed as bootstrap percolation with a randomly varying threshold \cite{evolve}.
Bootstrap percolation was first considered on trees \cite{CLR}, but the lattice $\mathbb Z^d$ with its physics connotations has received the most attention. There has been recent interest in mean-field and power-law graphs, motivated in part by applications to social networks; see e.g.\ \cite{JLTV,amini2,koch}.
Polluted bootstrap percolation was introduced in \cite{GM} on the two dimensional lattice. Potential areas of application include the effects of impurities on crystal growth, of immunization on epidemics, or of interventions on spread of rumors.
Since \cite{GM}, rigorous progress on growth processes in random environments has been limited, and the case of polluted bootstrap percolation in three and higher dimensions has been entirely open until now.
Here are some examples of work on related models.
Investigation of asymptotic shapes in models related
to polluted bootstrap percolation with $r=1$ was
initiated in \cite{GMa}; a recent paper \cite{JLTV} studies
such processes on a complete graph with excluded edges;
and \cite{DEKNS} addresses a Glauber dynamics (which can be viewed
as a non-monotone version of bootstrap percolation) with
``frozen'' vertices. Polluted bootstrap percolation
and closely related models have been used in empirical studies
of complex networks with ``damaged'' vertices \cite{BDGM2, BDGM1}.
A key element in our proof will be the simple but powerful method of random oriented surfaces recently introduced in \cite{DDGHS}. This method has been further used and developed in a variety of contexts \cite{interface,ent,geom,embed,comb,bod-tex}, but ours is the first application to cellular automata so far as we are aware. A distinct application to polluted bootstrap percolation will appear in \cite{GHS}.
Another useful tool will be the results of \cite{LSS}
concerning domination of finitely dependent processes. A
random configuration $X=(X_v)_{v\in \mathbb Z^d}$ taking values in
$\{0,1\}^{\mathbb Z^d}$ is called \df{$m$-dependent} if
$(X_v)_{v\in A}$ and $(X_v)_{v\in B}$ are independent of
each other whenever the sets $A$ and $B$ are at distance
greater than $m$. The relevant result of \cite{LSS} is
that for any $p<1$ there exists $p'=p'(p,d,m)<1$ such that,
if $X$ is $m$-dependent and satisfies $\mathbb E X_v\geq p'$ for
all $v$, then $X$ stochastically dominates an i.i.d.\ process
with parameter $p$.
\subsection*{Outline of proof and organization}
The modified bootstrap percolation model is ``weaker'' than the standard model, in the sense that it is more difficult for a site to become occupied, so that for a given initial configuration, the occupied set for the modified model is a subset of that for the standard model at each time $t$. Therefore, it suffices to prove the conclusions of \cref{main} for the modified model. Moreover, we may without loss of generality assume that $d=3$. Indeed, for $d\geq 4$ we may restrict to the $3$-dimensional subspace $\mathbb Z^3\times\{0\}^{d-3}$. Any site that becomes occupied in the $d=3$ model restricted to the subspace also becomes occupied in the full model on $\mathbb Z^d$ (where in both cases $r=2$). Therefore, for the remainder of the paper we consider the modified bootstrap percolation model with $r=2$ on $\mathbb Z^3$ except where explicitly stated otherwise.
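Since the modified condition implies the standard condition \eqref{standard} for the same occupied set, the containment noted above can also be observed directly in simulation; the following sketch (names ours) compares the two closures on one random configuration:

```python
import itertools
import random

def bootstrap_fixed_point(n, d, r, occupied0, closed, modified):
    """Iterate the standard or modified bootstrap rule on {0,...,n-1}^d
    until no unoccupied open site can become occupied."""
    occ = set(occupied0)
    sites = list(itertools.product(range(n), repeat=d))

    def shift(x, i, s):
        return x[:i] + (x[i] + s,) + x[i + 1:]

    changed = True
    while changed:
        changed = False
        for x in sites:
            if x in occ or x in closed:
                continue
            if modified:
                # at least r directions i with x-e_i or x+e_i occupied
                score = sum(shift(x, i, -1) in occ or shift(x, i, 1) in occ
                            for i in range(d))
            else:
                # at least r occupied neighbors in total
                score = sum(shift(x, i, s) in occ
                            for i in range(d) for s in (-1, 1))
            if score >= r:
                occ.add(x)
                changed = True
    return occ

rng = random.Random(1)
n, d, r = 8, 3, 2
sites = list(itertools.product(range(n), repeat=d))
u = {x: rng.random() for x in sites}
closed = {x for x in sites if u[x] < 0.05}
occupied0 = {x for x in sites if 0.05 <= u[x] < 0.25}
standard_final = bootstrap_fixed_point(n, d, r, occupied0, closed, modified=False)
modified_final = bootstrap_fixed_point(n, d, r, occupied0, closed, modified=True)
assert modified_final <= standard_final  # the modified model grows no faster
```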
In the absence of closed sites, the two-dimensional bootstrap rule
fills $\mathbb Z^2$ from any positive density $p$ of occupied sites.
This suggests the following approach. For $q$ sufficiently small we may attempt to construct an infinite two-dimensional surface that avoids closed sites and behaves like $\mathbb Z^2$, in the sense that it also admits growth by the $r=2$ model for any $p>0$. In \cref{sec-open-curtains} we indeed construct an oriented surface, called a \emph{curtain}, with some of the required properties. In particular, starting from an infinite fully occupied half space of $\mathbb Z^3$, a curtain will become fully occupied almost surely for any $p>0$. The construction of the curtain itself does not involve $p$, and does not depend on the locations of initially occupied sites.
A curtain alone is not sufficient to prove \cref{main}, because a \emph{finite} occupied nucleus does not lead to indefinite growth on a curtain.
To address this, we will use a renormalization argument involving curtains with different orientations that intersect each other. This part of the argument \emph{will} involve $p$, in the determination of a length scale. In \cref{sec-sails} we construct the unit of our renormalization, which is a curtain restricted to a finite box, with carefully constrained geometry, and scaled to facilitate the required intersections. This modified curtain is called a \emph{sail}. The size of the box is chosen to be a power of $p^{-1}$, which allows the sail to contain sufficient initially occupied sites for growth similar to that on a curtain. In \cref{sec-activation} we use comparison methods to show that if two sails intersect appropriately then occupation is transmitted from one to the other. Finally, \cref{sec-renormalization} completes the renormalization argument, which involves comparison of an infinite network of sails with supercritical oriented percolation, together with ``sprinkling'' for the initial nucleation.
We conclude the paper with a list of open problems.
\subsection*{Notation and conventions}
As stated earlier, we work with the polluted modified bootstrap percolation model with threshold $r=2$ on $\mathbb Z^3$ unless stated otherwise. The cubic lattice, also denoted $\mathbb Z^3$, is the graph with vertex set $\mathbb Z^3$ and with an edge between sites $u$ and $v$ whenever $\|u-v\|_1=1$. When discussing sets of sites, connectivity and components always refer to this graph.
When describing subsets of $\mathbb Z^3$, intervals will be understood to denote their intersections with $\mathbb Z$, so $[a,b)$ denotes $[a,b)\cap\mathbb Z=\{a,a+1,\ldots,b-1\}$, etc. Let $\mathbb N$ be the set of nonnegative integers.
We will frequently wish to consider $2$-dimensional layers of $\mathbb Z^3$, which by convention will be taken perpendicular to the $3$rd coordinate. Thus, for $k\in\mathbb Z$ we define the $k$th \df{layer} to be
$$\Lambda_k:=\mathbb Z^2 \times \{k\}=\bigl\{x\in\mathbb Z^3:x_3=k\bigr\}.$$
Let $\langle \cdot,\cdot\rangle$ denote the standard inner product on $\mathbb Z^3$, and let $e_1,e_2,e_3$ be the standard coordinate vectors.
We will consider paths of various types, not always with nearest-neighbor steps. In general, a \df{path} is a finite or infinite sequence of sites $(\ldots,)x_0,x_1,\ldots,x_n(,\ldots)$. Its \df{steps} are the vectors $(\ldots,)x_1-x_0,x_2-x_1,\ldots,x_n-x_{n-1}(,\ldots)$. It is a nearest-neighbor path if all steps are of the form $\pm e_i$. It is self-avoiding if all its sites are distinct.
\section{Curtains}\label{sec-open-curtains}
In this section we introduce the oriented surfaces
underlying our construction in their pure form.
Later they will be modified by scaling and restricting to finite boxes.
\begin{defn} A \df{curtain} is a set $D\subset \mathbb Z^3$ satisfying
the following.
\begin{enumerate}
\item[(C1)] For any $k\in \mathbb Z$, the intersection
$D\cap\Lambda_k$ with layer $k$ is an infinite path
comprising steps $e_1$ and $-e_2$, with no three
consecutive steps in the same direction; i.e.\ no
$e_1, e_1, e_1$ or $-e_2, -e_2, -e_2$.
\item[(C2)] For all $x\in D$, either $x+(0,0,-1)\in D$ or
$x+(1,1,-1)\in D$.
\end{enumerate}
\end{defn}
\cref{fig-proto} in the next section shows the intersection
of a curtain with a box. The main goal of this section is
to construct an infinite open curtain when $q$ is
sufficiently small. This will be done by adapting the duality
technique introduced in \cite{DDGHS} for the construction of
Lipschitz surfaces. The curtain will form the outer
boundary of a set reachable by certain paths from a fixed
half space. Before giving the construction, we illustrate
the relevance of curtains to bootstrap percolation with the
following lemma. (Formally, the lemma will not be used in
the proof of \cref{main}. Instead we will use a more
specialized variant, \cref{growth-sail}.)
\begin{samepage}
\begin{lemma} \label{curtain-spread}
Let $D$ be a curtain. Suppose that for every $x\in D$, the
three sites $x$ and $x+(0,0,1)$ and $x+(-1,-1,1)$ are all
open. Moreover, suppose that for every $k\in\mathbb N$, the set $
(D\cap\Lambda_k)+e_3$ contains some initially occupied site.
If $D\cap\Lambda_0$ is initially entirely occupied, then
$D\cap \bigcup_{k\in \mathbb N} \Lambda_k$ becomes entirely occupied in the modified bootstrap
model on $\mathbb Z^3$.
\end{lemma}
\end{samepage}
\begin{figure}
\centering
\includegraphics[width=.4\textwidth]{two-paths}
\caption{An illustration of the proof of \cref{curtain-spread}.
Two consecutive layers of a curtain are shown from above. Large squares
are present in the upper layer, and small (red) squares in the lower layer.
The argument showing that the upper layer site marked with a star becomes occupied is indicated.
We consider the portion of the lower layer path shown by filled squares, and deduce that
all the upper layer sites marked with discs become occupied.
}\label{paths}
\end{figure}
\begin{proof}
By induction on the layer, it suffices to prove that
$D\cap\Lambda_1$ becomes entirely occupied. This verification
is given in two steps below, and is illustrated in
\cref{paths}. Let $Y:=(D\cap\Lambda_0)+e_3$ be the set above
the intersection with the bottom layer.
First we claim that every site in
$Y$ eventually becomes occupied.
Indeed,
$Y$ is connected, open, and
contains an occupied site, and
every $y\in Y$ has an occupied neighbor $y-e_3\notin Y$.
The claim therefore follows from the bootstrap rule.
We now claim that every site in $Y-(1,1,0)$ also eventually
becomes occupied. Indeed, consider such a site
$z=y-(1,1,0)$ where $y\in Y$. Since $Y$ is a path with the
properties given in (C1), there exist sites $z+a e_1$ and
$z+b e_2$ in $Y$, where $a,b\in[0,3]$. Moreover, the
intervening sites $z+i e_1$ and $z+j e_2$ for $i\in(0,a)$
and $j\in(0,b)$ are open by (C2), and each has a neighbor
in $Y$ distinct from $z+a e_1$ and
$z+b e_2$. Since all sites in $Y$ become occupied, so do all
these sites, whence so does $z$.
The proof is now concluded by observing that
$D\cap\Lambda_1\subseteq Y\cup(Y-(1,1,0))$.
\end{proof}
Now we proceed with the construction of a curtain.
A \df{permissible path} is a finite sequence of sites $x_0,\ldots,x_n\in\mathbb Z^3$ such that every step $x_{i+1}-x_i$ satisfies the following.
Either it is a \df{taxed} step, which is to say that $x_{i+1}$ is \emph{closed}, and $x_{i+1}-x_i$ equals
$$(1,1,0).$$
Otherwise, the step is \df{free}, that is, $x_{i+1}-x_i$ lies in
$$\Bigl\{(-1,0,0),(0,-1,0),(0,0,-1),(-2,1,0),(1,-2,0),(-1,-1,1)\Bigr\}$$
(with no restriction on the states of sites).
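The bookkeeping behind the forthcoming path-counting argument is that $h=(1,1,1)$ has scalar product $2$ with the taxed step and $-1$ with every free step; this is mechanical to confirm:

```python
taxed = (1, 1, 0)
free = [(-1, 0, 0), (0, -1, 0), (0, 0, -1), (-2, 1, 0), (1, -2, 0), (-1, -1, 1)]
h = (1, 1, 1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

assert dot(taxed, h) == 2                  # the taxed step gains 2
assert all(dot(s, h) == -1 for s in free)  # every free step loses 1
```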
Fix any (deterministic) set $H\subset\mathbb Z^3$ and let $A$ be
the (random) set reachable by permissible paths from $H$.
Then define the following outer boundary:
\begin{equation}\label{boundary}
D:=\bigl\{x\notin A: x-(1,1,0)\in A\bigr\}.
\end{equation}
\begin{comment}
An outer normal to $H$ is $h=(1,1,1)$. Observe that
$h$ has a positive scalar product $2$ with the taxed
step and negative scalar product $-1$ with any of the free
steps. This observation together with a path-counting argument
will show that almost surely there exists a point not in $A$
if $q$ is small enough. We will elaborate on this in
\cref{surface} below. We define
\end{comment}
\begin{lemma} \label{curtain} For any choice of $H$, the set $D$
is either empty or an open curtain.
\end{lemma}
The lemma is of course only useful when $D$ is nonempty.
This will be proved to hold under suitable circumstances in
\cref{surface} below.
\begin{proof}[Proof of \cref{curtain}] We must prove that if $D$ is nonempty then it is open and has properties
(C1) and (C2). Consider any $x\in D$. By translation invariance of the definition, we assume
without loss of generality that $x$ is
the origin $0=(0,0,0)$.
Clearly, $0$ is open,
since otherwise the taxed step from $(-1,-1,0)\in A$ would make
$0\in A$.
Turning to property (C1), we have $(-1,-1,0)\in A$
but $0\notin A$, so using the definition of free steps, $(-1,-2,0)\in A$ but
$(1,0,0)\notin A$.
We claim that either
$(1,0,0)\in D$ or $(0,-1,0)\in D$, but not both.
Indeed, if $(0,-1,0)\in A$ then $(1,0,0)\in D$, while
if $(0,-1,0)\notin A$ then $(0,-1,0)\in D$.
A similar argument shows that either $(-1,0,0)\in D$ or $(0,1,0)\in D$ but not both.
This shows that $D\cap \Lambda_0$ is a union of disjoint paths with steps $e_1$ and $-e_2$. To check the restriction on three consecutive steps, note that $(0,-3,0)\in A$ but $(2,-1,0)\notin A$, which implies $(0,-3,0),(3,0,0)\notin D$.
To show that there is only one path, note that $(-1,-1,0)$ is a sum of two free steps, so the diagonal $\{(k,k,0):k\in\mathbb Z\}$ is partitioned into an interval belonging to $A$ and an interval belonging to $A^C$. If $D$ is nonempty then both intervals are nonempty, and so the diagonal contains exactly one site in $D$.
To prove property (C2),
note that $(-1,-1,-1)\in A$ but $(1,1,-1)\notin A$. Consequently, if
$(0,0,-1)\notin A$, then $(0,0,-1)\in D$. On the other hand,
if $(0,0,-1)\in A$, then $(1,1,-1)\in D$.
\end{proof}
We now choose $H$ to be the half-space
$$H:=\{x: x_1+x_2+x_3\le 0\},$$
and let $A$ and $D$ be defined as above. Note that, by
property (C1), a curtain intersects the line
$\{(t,t,0):t\in\mathbb Z\}$ in exactly one site.
\begin{prop}\label{surface} There exist positive constants $q_0$ and $c$
such that the following holds. For $q<q_0$, the set $D$
constructed above is, almost surely, an open curtain.
Furthermore, the probability that $(1,1,0)\in D$ tends to
$1$ as $q\to 0$, while for any $q<q_0$, the probability
that $D$ intersects the ray $\{(t,t,0): t>k\}$ is less than
$e^{-ck}$ for all $k>0$.
\end{prop}
\begin{proof}
Let $h=(1,1,1)$. Observe that the scalar product
$\langle x, h\rangle$: equals $2$ when $x$ is the
taxed step; equals $-1$ when $x$ is a free step;
and is nonpositive when $x\in H$.
Fix $k\ge 1$ and suppose $(k,k,0)\in A$.
Then there exists a permissible path from $H$ to $(k,k,0)$. By erasing loops, we
may assume that the path is self-avoiding.
Let $n_F$ and $n_T$ be
the number of free and taxed steps
of the path, respectively, and let $n=n_F+n_T$ be the total length. As $\langle (k,k,0),h\rangle=2k$, the above observations about scalar products imply
$-n_F+2n_T\ge 2k$. It follows that $n\ge n_T\ge k$
and $n_T\ge (2k+n)/3$. Therefore,
\begin{equation}\label{surface-eq1}
\begin{aligned}
\P\bigl((k,k,0)\in A\bigr)&\le
\P(\text{there exists a path as above})\\
&\le \sum_{n\ge k}7^n q^{(2k+n)/3}\\
&= (7q)^k\cdot\sum_{n\ge 0}(7 q^{1/3})^{n}\\
&\le 8\cdot (7q)^k,
\end{aligned}
\end{equation}
provided $q<q_0:=8^{-3}$.
Note that $0\in A$, so that if $(k,k,0)\notin A$ then
$\{(t,t,0):t\in[1,k]\}$ intersects $D$ while
$\{(t,t,0):t>k\}$ does not. Thus, if $q<q_0$ then
\cref{curtain} implies that $D$ is almost surely nonempty,
and thus is an open curtain by \cref{curtain}. Equation
\cref{surface-eq1} also gives the claimed exponential
bound, since $8\cdot(7q)^k\leq (56 q_0)^k$. Taking $k=1$ in
\cref{surface-eq1}, we get $\P((1,1,0)\notin D)\le 56q$,
giving the second claim.
\end{proof}
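As a numerical sanity check of the bound \cref{surface-eq1}, one can truncate the series directly (the cutoff below is arbitrary):

```python
def tail_sum(q, k, nmax=2000):
    """Partial sum of 7^n q^{(2k+n)/3} over k <= n < nmax, computed in the
    factored form q^{2k/3} * sum_n (7 q^{1/3})^n used in the display."""
    return q ** (2 * k / 3) * sum((7 * q ** (1 / 3)) ** n for n in range(k, nmax))

q = 1e-3  # well below q_0 = 8^{-3}
for k in (1, 2, 5):
    assert tail_sum(q, k) <= 8 * (7 * q) ** k
```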
The results from this section are already strongly
suggestive of the conclusions of \cref{main}, although by
no means sufficient to prove them. Indeed, consider the
initial configuration consisting of the fully occupied
half-space $\{x: \langle x, e_3\rangle \le 0\}$, and
elsewhere product measure with densities $p$ and $q$ as
usual. It follows easily from
\cref{curtain-spread,curtain,surface} that the probability
that any fixed site $x\in \mathbb Z^3$ is eventually occupied
converges to $1$ as $(p,q)\to (0,0)$. Indeed, with high
probability $x$ lies in a curtain that has the properties
in \cref{curtain-spread}: the presence of the appropriately placed
open sites can be guaranteed via \cite{LSS}, while the presence of an occupied site in
each layer of the curtain holds almost surely. The
remaining difficulty in the proof of \cref{main} is the
need to replace the occupied half-space with a finite
nucleus.
\section{Sails}\label{sec-sails}
A \df{box} in $\mathbb Z^3$ is a Cartesian product of any three
integer intervals. Its \df{dimensions} are the
cardinalities of the three intervals (in order). An
\df{oriented box} is a box with a distinguished corner.
Fix an integer length scale $L$. This scale will be later
chosen to be a suitable function of $p$. A \df{brick} is an
oriented box of dimensions $4L$, $16L$, and $32L$, in any
order. Bricks will be the units of our renormalization. We
will formulate the required properties of bricks by
translating and scaling a smaller box. The \df{proto-brick}
$\widehat B$ is the oriented box $[0,4L)\times [0,4L)\times
[0,2L)$ with the distinguished corner at the origin.
We now formulate the key definition in our renormalization
argument. The idea is that the proto-brick contains a
suitably placed portion of a curtain, with properties
analogous to those in \cref{curtain-spread}, but restricted
to the proto-brick. See \cref{fig-proto} for an
illustration.
\begin{defn}
The proto-brick $\widehat B$ is \df{good} if
there exists a set $\widehat S\subseteq \widehat B$ with the following
properties:
\begin{enumerate}
\item[(G1)] all sites in the following set are open:
$$
\sigma(\widehat S):=\bigl\{x,\;x+(0,0,1),\;x+(-1,-1,1): x\in \widehat S\bigr\}\cap \widehat B;$$
\item[(G2)] $\widehat S$ satisfies (C2) in the definition of a
curtain except at the bottom layer: for all $x\in
\widehat S\setminus\Lambda_0$, either $x+(0,0,-1)\in \widehat S$ or
$x+(1,1,-1)\in \widehat S$;
\item[(G3)] $\widehat S\subseteq\{x: 3L< x_1+x_2+x_3<4L\}$;
\item[(G4)] for each layer $k\in[0,2L)$, the intersection
$\widehat S\cap\Lambda_k$ is an oriented path that starts on
$\{x:x_1=0\}$, ends on $\{x:x_2=0\}$ and makes steps
$-e_2$ or $e_1$ with no consecutive three steps of
the same type; and
\item[(G5)] for each layer except the top, there is an
occupied site immediately above its intersection with
$\widehat S$, i.e.\ $(\widehat S\cap \Lambda_k)+e_3$ contains an
occupied site for each $k\in [0,2L-1)$.
\end{enumerate}
\end{defn}
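Property (G4) for a single layer can be checked mechanically. The helper below is a hypothetical sketch (the name and interface are ours, not the paper's): it takes the sites of $\widehat S\cap\Lambda_k$ in path order and verifies the step and endpoint conditions.

```python
def satisfies_g4(path):
    """Check property (G4) for one layer: `path` lists the sites of the
    layer in order; steps must be e1 or -e2, with no three consecutive
    steps of the same type, starting on {x1=0} and ending on {x2=0}."""
    if path[0][0] != 0 or path[-1][1] != 0:
        return False
    steps = [tuple(b[i] - a[i] for i in range(3)) for a, b in zip(path, path[1:])]
    if any(s not in {(1, 0, 0), (0, -1, 0)} for s in steps):
        return False
    # forbid three consecutive identical steps
    return all(not (s == t == u) for s, t, u in zip(steps, steps[1:], steps[2:]))
```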
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{proto}
\caption{\label{fig-proto} A good proto-brick with $L=4$.
The set $\widehat S$ comprises the (sites in the centers of) colored cubes.
The origin is at the back corner, hidden by the
set. The third coordinate axis is vertical.}
\end{figure}
Next we scale up this definition to a brick, starting with
one in a standard location and orientation. Let $B$ be the
brick $[0,4L)\times [0,16L)\times [0,32L)$ with the
distinguished corner at the origin. For $x\in \widehat B$, define
the following subset of $B$:
$$
{\text{\tt cell}}(x)=(x_1,4x_2,16x_3)+\{0\}\times[0,4)\times [0,16).
$$
See \cref{fig-cell}. For a given configuration on $B$, we
define an \df{auxiliary configuration} on $\widehat B$ by
declaring a site $x\in \widehat B$ open if all sites in ${\text{\tt cell}}(x)$
are open; otherwise, we declare $x$ closed. We also call
$x$ initially occupied if all sites in ${\text{\tt cell}}(x)$ are
initially occupied. We call $B$ \df{good} if, in the
auxiliary configuration, $\widehat B$ is good. See
\cref{fig-cell}.
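A concrete rendering of this coarse-graining (a hypothetical helper, directly transcribing the definitions of ${\text{\tt cell}}$ and the auxiliary configuration above):

```python
def cell(x):
    """Sites of cell(x) = (x1, 4*x2, 16*x3) + {0} x [0,4) x [0,16)."""
    x1, x2, x3 = x
    return {(x1, 4 * x2 + j, 16 * x3 + k) for j in range(4) for k in range(16)}

def auxiliary_open(x, open_sites):
    """A site x of the proto-brick is open in the auxiliary
    configuration iff every site of cell(x) is open in B."""
    return cell(x) <= open_sites
```

Each cell contains $1\cdot 4\cdot 16=64$ sites, matching the probabilities $\widehat p=p^{64}$ and $\widehat q=1-(1-q)^{64}$ used later.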
If $B$ is good and $\widehat S$ is any set satisfying the above
conditions, then we call
$$
S=\bigcup_{x\in \widehat S} {\text{\tt cell}}(x)
$$
a \df{sail} for $B$. Thus $B$ is good if and only if it has a
sail.
Define the \df{tail} and \df{head} of $B$ to be its lower
and upper halves, $[0,4L)\times [0,16L)\times [0,16L)$ and
$[0,4L)\times [0,16L)\times [16L,32L)$ respectively. The
\df{tip} of $B$ is the box $[0,4L)\times [0,4L)\times
[16L,32L)$, which is a quarter of the head. See
\cref{fig-cell}. The \df{base} of $B$ is the bottom layer
of cells
$
\bigcup_{x\in \widehat B\cap\Lambda_0} {\text{\tt cell}}(x).
$
If $B$ is good and $S$ is a sail for $B$, then the
\df{head}, \df{tail}, \df{base}, and \df{tip} of $S$ are
the intersections of $S$ with the corresponding subsets of
$B$.
If the brick $B$ is good, and $S$ is a sail for $B$, then
we say that $S$ is \df{activated} by time $t$ if every site
in the head of $S$ is occupied at time $t$.
Now we transfer all the above definitions to an arbitrary
brick $B'$ by isometry. More precisely, let $\eta$ be an
isometry of $\mathbb Z^3$ that maps $B$ to $B'$, respecting the
distinguished corners. The head of $B'$ is the image under
$\eta$ of the head of $B$. The brick $B'$ is good if
applying $\eta^{-1}$ to the configuration makes $B$ good,
in which case a sail for $B'$ is an image under $\eta$ of a
sail for $B$ in that configuration, and so on.
\begin{figure}
\centering
{}\hfill
\includegraphics[width=.1\textwidth]{cell}\hfill
\raisebox{-1in}{\includegraphics[width=.4\textwidth]{sail-simple}}
\hfill{}
\caption{\emph{Left:} An example of a cell ${\text{\tt cell}}(x)$, which is a box of dimensions $(1,4,16)$.
\emph{Right:} A good brick $B$ and its sail $S$ for $L=4$.
This is obtained by scaling the set $\widehat S$ of
\cref{fig-proto} and replacing each of its sites with a cell.
The head of the brick is outlined in red, and its tip in green. Again,
the distinguished corner, the origin, is at the back, hidden by the sail.
}\label{fig-cell}
\end{figure}
We next show that with high probability a brick is good,
and moreover the sail can be chosen to contain a specific
site.
\begin{prop}\label{good-likely}
Assume $L=\lceil p^{-128}\rceil$. Then the probability
that $B$ is good and has a sail $S$ that contains the site
$x_0:=(L+1,4L+4,16L)$ converges to $1$ as $(p,q)\to (0,0)$.
\end{prop}
We remark that $L$ does not need to be such a large power
of $p^{-1}$; with suitable modifications to the definitions and proofs
(perhaps at the expense of increased complexity), order
$p^{-1}\log p^{-1}$ would suffice.
\begin{proof}[Proof of \cref{good-likely}]
Fix $\epsilon>0$. For most of the proof we will consider
the relevant event on the proto-brick $\widehat B$. Therefore let
$\widehat p=p^{64}$ and $\widehat q=1-(1-q)^{64}$, which are the
probabilities that a site is, respectively, initially
occupied and open in the auxiliary configuration. Let
$L=\lceil p^{-128}\rceil$.
Call a site $x$ \df{swell} if $x$ and $x+(0,0,1)$ and
$x+(-1,-1,1)$ are all open in the auxiliary configuration.
By the results of \cite{LSS}, the
configuration of swell sites dominates a product measure on
$\mathbb Z^3$ with parameter $1-\widehat q{\,}'$, where $\widehat q{\,}'=\widehat q{\,}'(\widehat q)\to 0$
as $\widehat q\to 0$.
Next, we apply \cref{surface} and translation invariance to
construct a swell curtain close to the half-space
$(L,L,L)+H$, rather than $H$. To be precise, translate the
configuration of swell sites by $-(L,L,L)$, construct the
set $D$ according to the last section, but using swell
sites in place of open sites, and translate it back by
$(L,L,L)$ to obtain a set $\widehat D$ of swell sites that
lies in $((L,L,L)+H)^C=\{x:x_1+x_2+x_3>3L\}$.
Let $E_1$ be the event that $\widehat D$ is a curtain and contains
the site $\widehat x_0=(1,1,0)+(L,L,L)$. By the construction of
$\widehat D$ in the previous section, $E_1$ is an increasing event
with respect to the configuration of swell sites. (This
follows because, in the notation of that section, the set
$A$ of sites reachable from $H$ via permissible paths is
decreasing). Therefore, by \cref{surface} and \cite{LSS},
there exists $q_1>0$ such that if $\widehat q<q_1$ then
$\P(E_1)>1-\epsilon$.
Moreover, by \cref{surface}, \cite{LSS}, and translation
invariance, for any deterministic $x=(x_1,x_2,x_3)$ with
$x_1+x_2+x_3=3L$ we have
\begin{equation}\label{good-likely-eq1}
\P\Bigl(\widehat D\cap\bigl\{x+(t,t,0):t\in[1,k]\bigr\}\ne \emptyset\Bigr)\ge 1-e^{-ck},
\qquad k>0,
\end{equation}
where $c>0$ is an absolute constant. (This event is again
increasing in the configuration of swell sites, by the
construction of $\widehat D$.)
Now let $$\widehat S:=\widehat D\cap \widehat B.$$
Let $E_2$ be the event that every $x\in \widehat S$ satisfies
$x_1+x_2+x_3<4L$. Then \cref{good-likely-eq1} and a union
bound imply that $\P(E_2)\geq 1- 16L^2\exp(-cL/2).$ Since
$L\to \infty$ as $p\to 0$ (i.e.\ as $\widehat p\to 0$), for $\widehat p$
sufficiently small we have $\P(E_2)\geq 1-\epsilon$.
We have shown that if $\widehat p$ and $\widehat q$ are both sufficiently
small then $\P(E_1 \cap E_2)\geq 1-2\epsilon$. On $E_1\cap
E_2$, the set $\widehat S$ satisfies properties (G1)--(G4) in the
definition of a good proto-brick. So far we have not
considered initially occupied sites (although the parameter
$p$ has appeared in the definition of the length scale
$L$). One way to sample the auxiliary configuration is as
follows. First declare each site closed independently with
probability $\widehat q$. Then, conditional on the resulting
configuration, declare each open site to be initially
occupied independently with probability $\widehat p/(1-\widehat q)$
($\geq \widehat p$). Let $E_3$ be the event that $\widehat S$ satisfies
property (G5). On $E_1\cap E_2$, each intersection with a
layer $\widehat S\cap\Lambda_k$ for $k\in[0,2L-1)$ contains at least
$L$ sites (by (G3) and (G4)). Moreover, all sites in
$(\widehat S\cap \Lambda_k)+e_3$ are open (by (G1)). Hence,
$$\P\bigl(E_3\mid E_1\cap E_2\bigr) \geq
1-2L(1-\widehat p)^L\ge 1-2L\exp(-\widehat p L).
$$
Since $L\sim \widehat p\,^{-2}$, this is at least $1-\epsilon$ for
$\widehat p$ sufficiently small.
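A quick numerical sanity check of this bound, under the simplifying assumption $L=\lceil\widehat p^{\,-2}\rceil$ (in the proof $L=\lceil p^{-128}\rceil$ with $\widehat p=p^{64}$, so $L\geq \widehat p^{\,-2}$):

```python
import math

def g5_failure_bound(p_hat):
    """The bound 2*L*exp(-p_hat*L) on P(E3 fails | E1, E2),
    evaluated with L = ceil(p_hat**-2), i.e. at the scale L ~ p_hat^-2."""
    L = math.ceil(p_hat ** -2)
    return 2 * L * math.exp(-p_hat * L)
```

Since $\widehat p L\approx \widehat p^{\,-1}\to\infty$, the bound decays rapidly as $\widehat p\to 0$.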
We have shown that for $\widehat p$ and $\widehat q$ sufficiently small,
with probability at least $1-3\epsilon$ the set $\widehat S$
satisfies (G1)--(G5) and contains $\widehat x_0$. Finally,
recalling the definition of the auxiliary configuration,
we deduce that for $p$ and $q$ sufficiently small, the
brick $B$ is likewise good and has a sail containing
$x_0\in{\text{\tt cell}}(\widehat x_0)$ with probability at least
$1-3\epsilon$.
\end{proof}
\section{Activation}\label{sec-activation}
Recall that a sail of a good brick is said to be activated
if its head is fully occupied (at some time). To enable
our renormalization argument, we now show that for
appropriately placed good bricks, activation of one sail
leads to activation of another.
Let $B$ be the brick in standard position as before, and
let $B'$ be a brick with dimensions $(32L, 4L, 16L)$ such
that the centroid of its tail coincides with the centroid
of the tip of $B$. (The idea is that the tip of $B$ cuts
the tail of $B'$ in two. There are eight possible choices of
$B'$: two possible boxes that share a tail, each with four possible
orientations. See \cref{fig-bricksm} in the next section
for examples.) Then we write $B\rhd B'$. Similarly for
any isometry $\eta$ of $\mathbb Z^3$ we write $\eta(B)\rhd
\eta(B')$.
\begin{prop}\label{triggering}
Let $B$ and $B'$ be as described above. Suppose that they
are both good and let $S$ and $S'$ be any respective sails.
In the modified bootstrap percolation model, if $S$ is
activated by some time, then $S'$ is activated by some
later time.
\end{prop}
We separate the proof into the following four lemmas,
starting with the underlying growth mechanism.
\begin{lemma}\label{growth-sail}
Suppose that the proto-brick $\widehat B$ is good, and let $\widehat S$
be any set satisfying the conditions in the definition of
good. Assume also that the intersection $\widehat S\cap \Lambda_0 $
with the bottom layer is entirely occupied initially, and
that $\mathbb Z^3\setminus \sigma(\widehat S)$ is entirely closed. Then
$\widehat S$ is entirely occupied at some time.
\end{lemma}
\begin{proof} The argument is essentially the same as for \cref{curtain-spread},
except that one must verify that the relevant sites lie in
the proto-brick. We prove by induction on $k=0,\ldots,
2L-1$ that the layer $\widehat S\cap\Lambda_k$ is eventually
occupied. For $k=0$ this holds by assumption.
Fix $k\geq 1$ and let $Y=(\widehat S\cap\Lambda_{k-1})+e_3$. Then
$Y$ becomes occupied, since it is connected and open, it
contains an occupied site, and it is adjacent to
$\widehat S\cap\Lambda_{k-1}$ which becomes occupied by the inductive
hypothesis. If $z\in \widehat S\cap\Lambda_k$ then either $z\in Y$
or $z=y-(1,1,0)$ where $y\in Y$. In the latter case there
exist $z+a e_1,\, z+b e_2\in Y$ with $a,b\in[0,3]$.
The bootstrap rule then guarantees that $z+ie_1$ and
$z+je_2$ become occupied for $i\in(0,a)$ and $j\in(0,b)$,
and then $z$ becomes occupied.
\end{proof}
The following comparison lemma states that cutting off part
of a configuration only increases the eventually occupied
set, provided we make the cut surface occupied. This will
enable us to make use of sails that intersect each other.
\begin{lemma}\label{sneaky}
Consider a set of sites $A$, and a subset $F\subseteq A$.
Let $B$ be a connected component of $A\setminus F$. Suppose
that every site in $A^C$ is closed but that the initial
configuration is otherwise arbitrary. Now alter the
initial configuration by making $F$ initially occupied but
$A\setminus (F\cup B)$ closed. The alteration (weakly)
increases the set of eventually occupied sites in $B$.
\end{lemma}
\begin{proof}
We proceed by induction on the time step. Suppose that at
all times prior to $t$, the set of occupied sites of $B$ in
the altered dynamics dominates the set in the original
dynamics. Assume that a site $x\in B$ becomes occupied in
the original dynamics at time $t$. Any neighbor of $x$
that was occupied in the original dynamics at time $t-1$
either lies in $B$, in which case it is also occupied in
the altered dynamics by the induction hypothesis, or it
lies in $F$, in which case it was \emph{initially} occupied
in the altered dynamics. Thus $x$ also becomes occupied in
the altered dynamics.
\end{proof}
\begin{lemma} \label{stretching} From any configuration on $B$,
form the auxiliary configuration on $\widehat B$, and perform the modified bootstrap percolation
dynamics from the auxiliary configuration with all sites
outside $\widehat B$ closed. If $x$ becomes occupied in the
auxiliary dynamics, then ${\text{\tt cell}}(x)$ becomes fully occupied
in the original dynamics.
\end{lemma}
\begin{proof}
This follows by straightforward induction on time step.
\end{proof}
Next we state a geometric fact about sails. Let
$A\subseteq\mathbb Z^3$ and let $F,B_1,B_2$ be disjoint subsets of
$A$. We say that $F$ separates $B_1$ and $B_2$ in $A$ if
$A\setminus F$ contains no nearest-neighbor path from $B_1$
to $B_2$.
\begin{lemma}\label{separation} Suppose that the brick $B$ is good. Then any
sail $S$ for $B$ separates, in the tip of $B$, the two
faces of the tip $\{0\}\times [0, 4L) \times [16L,32L)$ and
$\{4L-1\}\times [0, 4L) \times [16L,32L)$.
\end{lemma}
\begin{proof}
By property (G4) of a good proto-brick, the intersection of
$S$ with a layer $\Lambda_k$ is an oriented path, thickened by
conversion of sites to cells. Therefore its complement
$\Lambda_k\setminus S$ clearly has two components, $U_k$ and
$V_k$ say, which contain the intersections of the first and
second faces respectively with $\Lambda_k$, by (G3).
It remains to check that no site of $U_{k-1}$ is adjacent
to a site of $V_k$, and likewise for $V_{k-1}$ and $U_k$,
for $k=1,\ldots, 4L-1$. Since such adjacent sites would
differ by $e_3$, this is easily verified from property
(G2). (Also see \cref{paths}).
\end{proof}
Now we prove the main result of this section.
\begin{proof}[Proof of \cref{triggering}]
By definition of $S$ from $\widehat S$ and \cref{separation},
the head of $S$ separates the base
of $S'$ from the head of $S'$ in
$\sigma(S')$.
Consider the dynamics from the following new configuration.
Make every site outside $\sigma(S')$ closed. Make the
base of $S'$ occupied. Otherwise, retain the initial
configuration in $\sigma(S')$.
By \cref{stretching,growth-sail},
the entire sail $S'$ becomes occupied.
The proof is concluded by applying \cref{sneaky} to $\sigma(S')$.
\end{proof}
\section{Renormalization}\label{sec-renormalization}
In this section we prove the main result, \cref{main}, as well as \cref{main-follow}. We
start with a simple geometric ingredient. Recall that $B$
is the brick in standard position.
\begin{samepage}
\begin{lemma} \label{renormalization}
There exist bricks $B_i$, $B_i'$, for $i=1,2,3$, with
\begin{align*}
&B\rhd B_1\rhd B_2\rhd B_3,\\
&B\rhd B_1'\rhd B_2'\rhd B_3',
\end{align*}
such that $B$, $B_3$, and $B_3'$ are distinct and have the
same orientation. Furthermore, there exist vectors
$u,u'\in\mathbb Z^3$ and a constant $C$, none of them depending on
$L$, such that $B_3=B+Lu$ and $B_3'=B+Lu'$ (so in particular $Lu$ and $Lu'$
are the distinguished corners of $B_3$ and $B_3'$ respectively),
and all seven bricks lie within distance $CL$ of the origin.
\end{lemma}
\end{samepage}
\begin{figure}
\centering
\includegraphics[width=.36\textwidth]{steer}
\caption{\label{fig-bricksm} {Illustration
of the proof of \cref{renormalization}.}
The initial brick $B$ (at lower left) is yellow, and $B_1=B_1'$
is green. The red box depicts $B_2$ and $B_2'$, which
are the same box but with different orientations. Finally, the two blue bricks are
$B_3$ and $B_3'$. These are translations
of $B$ by $(10,22,22)L$ and $(22,22,22)L$.}
\end{figure}
\begin{proof} See \cref{fig-bricksm}.
Recall that $B$ has dimensions $(4L,16L,32L)$. We choose
$B_1$ and $B_1'$ equal to each other, with dimensions
$(32L,4L,16L)$, and satisfying $B\rhd B_1$. Then take $B_2$
and $B_2'$ to be the same box as each other, with
dimensions $(16L,32L,4L)$, but with different orientations
and in particular different tips. Finally take $B_3$ and $B_3'$ to be
suitable translations of $B$, as determined by these tips.
\end{proof}
\begin{proof}[Proof of \cref{main}]
As discussed in the introduction, it suffices to prove the case of the
modified bootstrap model on $\mathbb Z^3$.
We will compare with oriented
percolation in $\mathbb Z^2$. Let $L=\lceil p^{-128}\rceil$ and
let $u,u'\in\mathbb Z^3$ be as in \cref{renormalization}. Also fix
$\epsilon>0$. For $a=(a_1,a_2)\in\mathbb Z^2$, define the
associated brick
$$B(a):=B+L a_1 u+ La_2 u'.$$
Call the site $a$ \df{excellent} if the translations by
$La_1 u+ La_2 u'$ of the seven bricks $B,B_i,B_i'$ of
\cref{renormalization} are all good.
Suppose that $a$ and $b$ are excellent, so that in
particular the bricks $B(a)$ and $B(b)$ are good. Suppose
also that there is a path of excellent sites in $\mathbb Z^2$ from
$a$ to $b$ consisting of steps $e_1$ and $e_2$. (We call a
path with these steps \df{oriented}). Then by
\cref{renormalization,triggering}, if some sail of $B(a)$
is activated then any sail of $B(b)$ is activated at some
later time.
Let $E$ be the event that there exists an
excellent bi-infinite oriented path $\pi$ in $\mathbb Z^2$ containing $0=(0,0)$, and
that moreover $B=B(0)$ has a good sail containing
$x_0:=(L+1,4L+4,16L)$ in its head. By
\cref{renormalization}, the random configuration of
excellent sites is $m$-dependent for some fixed $m$ not
depending on $L$. Therefore, by \cite{LSS},
\cref{good-likely}, and the fact that oriented percolation
on $\mathbb Z^2$ has a nontrivial phase transition
(see e.g.~\cite{g}), if $p$ and $q$ are
sufficiently small then $\P(E)\geq 1-\epsilon$.
It remains to show that \emph{some} sail on the path is
activated, for which a rather crude sprinkling argument
will suffice. Assuming $2p+q<1$, we consider two coupled
initial configurations. The \df{level-$1$} configuration
has parameters $p$ and $q$ as before. Conditional on the
level-$1$ configuration, the \df{level-$2$} configuration
is obtained by adding some further occupied sites;
specifically, we declare each open site that was not
initially occupied at level $1$ to be initially occupied at
level $2$ independently with probability $p/(1-p-q)$, and
leave the configuration otherwise unchanged. The law of the
level-$2$ configuration is simply a product measure with
parameters $2p$ and $q$. Now condition on the level-$1$
configuration, and suppose that it is such that $E$ occurs
at level $1$. Fix an excellent oriented path $\pi$ as in
the definition of $E$, and let $\pi^+$ and $\pi^-$ be the
forward and backward halves of $\pi$ that start at $0$ and
end at $0$ respectively. Then for each site $a$ of $\pi^-$,
\emph{all} open sites in the brick $B(a)$ are initially
occupied at level $2$ with probability at least $p^{|B|}$,
independently for each such $a$. Therefore, conditionally
almost surely, some site $a$ on $\pi^-$ has this property,
which implies in particular that any sail of the associated
brick $B(a)$ is activated at level $2$.
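The per-site coupling behind this sprinkling argument can be sampled directly; the sketch below is a hypothetical illustration of the level-$1$/level-$2$ construction just described.

```python
import random

def sample_site(p, q, rng):
    """One site of the coupled level-1/level-2 configurations (needs 2p+q<1).
    Level 1 is product with parameters (p, q); level 2 additionally upgrades
    each open vacant site with probability p/(1-p-q), so its marginal law is
    product with parameters (2p, q).  Returns (level1_state, level2_state)."""
    u = rng.random()
    if u < q:
        return "closed", "closed"
    if u < q + p:
        return "occupied", "occupied"
    # open but vacant at level 1; sprinkle at level 2
    if rng.random() < p / (1 - p - q):
        return "open", "occupied"
    return "open", "open"
```

Empirically, the level-$2$ occupation frequency is close to $2p$, and every level-$1$ occupied site stays occupied at level $2$, as the coupling requires.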
We conclude that if $2p$ and $q$ are sufficiently small
then with probability at least $1-\epsilon$ there exists an
infinite sequence of distinct activated sails, each
intersecting the next, one of which contains $x_0$ in its
head. By translation invariance we conclude that with
probability at least $1-\epsilon$, the origin lies in an
infinite connected eventually occupied set, as required.
\end{proof}
\begin{proof}[Proof of \cref{main-follow}]
It follows
from \cref{main} that $\lim_{q\to 0^+} \phi^+(q)=1$,
which implies that $q_c>0$.
As $\phi^+$ is a decreasing function, it is
positive on $[0,q_c)$. Our remaining task is to prove the claimed
upper bound on $q_c$, for which it suffices to consider the \emph{standard} (as opposed to modified) bootstrap model with $r=2$ on $\mathbb Z^d$ for $d\geq 3$.
Call a site \df{$3$-open} if it is open and has at least
$3$ open sites among its $2d$ neighbors. Let $p_c'$ be the
critical probability for existence of an infinite connected
set of $3$-open sites in $\mathbb Z^d$. Then clearly $p_c'\geq
p_c^{\text{\rm site}}$. For $d=3$, the method of essential
enhancements \cite{AG,BBR} shows that $p_c'> p_c^{\text{\rm
site}}$. (The strict inequality is expected to hold for
$d\geq 4$ also, but no complete proof is available -- see
\cite{BBR}).
For any set $Z\subseteq\mathbb Z^d$, let the external boundary $\partial Z$ be the set of sites
in $\mathbb Z^d\setminus Z$ that have a neighbor in $Z$. Note that $|\partial Z|\le 2d|Z|$.
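The external boundary and the bound $|\partial Z|\le 2d|Z|$ are easy to render concretely (a hypothetical helper, for illustration only):

```python
def external_boundary(Z, d=3):
    """External boundary of Z in Z^d: sites of Z^d \\ Z with a neighbor in Z."""
    Z = set(Z)
    out = set()
    for x in Z:
        for j in range(d):
            for s in (-1, 1):
                y = tuple(x[i] + s if i == j else x[i] for i in range(d))
                if y not in Z:
                    out.add(y)
    return out
```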
Assume that no
site in $\partial Z$ is $3$-open, and that no site in $Z\cup\partial Z$ is initially occupied. Then we claim that no site
in $Z\cup\partial Z$ is ever occupied. Indeed, suppose on the contrary that
$x\in Z\cup \partial Z$ is a first site in the
set to become occupied, say at time $t\ge 1$. Then $x\in \partial Z$,
and so $x$ has
at most $2$ open neighbors, of which at least one is in $Z$,
which by assumption
is unoccupied at time $t-1$. So $x$ has at most one occupied neighbor at time $t-1$. Since $r=2$, this contradicts the assumption that $x$ becomes occupied at time
$t$.
Now let $q>1-p_c'$. Given the random configuration on $\mathbb Z^d$,
create an \df{adjusted} configuration by converting all closed sites among the origin and its $2d$ neighbors to open (but not initially occupied) sites.
Let $\mathcal Z$ be the maximal connected set of $3$-open sites
containing the origin in the adjusted configuration.
Clearly, $0<|\mathcal Z|<\infty$ almost surely. Then we have
\begin{equation*}
\begin{aligned}
&\P(0\text{ is eventually occupied})\\
&\le \P(0\text{ is eventually occupied starting from the adjusted configuration})\\
&\le \P(\text{$\mathcal Z\cup\partial\mathcal Z$ contains an initially occupied site})\\
&\le\sum_{k=1}^\infty \P\bigl(|\mathcal Z|=k\bigr) \bigl(1-(1-p)^{k+2dk}\bigr).
\end{aligned}
\end{equation*}
This tends to $0$ as $p\to 0^+$, by dominated convergence.
\end{proof}
\section{Open Problems} \label{sec-open}
Recall that $\phi(p,q)$ is the probability that the origin is eventually occupied
with densities $p$ and $q$ of initially occupied and closed sites respectively, and that we define $\phi^+(q)=\phi(0^+,q)$ and $q_c=\inf\{q: \phi^+(q)=0\}$.
\begin{enumerate}[(i)]
\item For which dimensions and thresholds $d> r\geq 3$ is it the case that $\phi(p,q)\to 1$ as $(p,q)\to (0,0)$?
As conjectured in \cite{Mor}, the answer ``all'' seems plausible.
(The current paper proves the cases $d>r=2$, while the conclusion fails for $d=r$.)
\item Where the convergence in (i) above does not hold
(presumably, only for $d=r$), suppose that $p,q\to 0$
in such a way that $ \log q/\log p\to\alpha. $ For
which $\alpha$ does $\phi$ converge to $0$, or to
$1$? The articles \cite{GM} and \cite{GHS} address
$d=r=2$ and $d=r=3$ respectively.
\item Is $\phi^+$ continuous at $q_c$?
\item Consider the critical value $q_c=q_c(d)$ as a function of dimension (with $r=2$, say).
Does $q_c$ approach $1$ as $d\to\infty$ and, if so, at what rate?
\item Let $T$ be the first time the origin is occupied.
What is the asymptotic behavior of $T$ as $p,q\to 0$? (For example, find ``close'' functions $f$ and $g$ of $p$ and $q$ for which $f\leq T\leq g$ with high probability).
\item For $r=2$ and $d=3$, consider
$$
\gamma(q):=\limsup_{p\to 0^+}p^{-1}\phi(p,q).
$$
Is there a $q>q_c$ for which this is infinite? If so, this would
distinguish the phase transition in the case $r=2$ from that of the case $r=1$ (where $q_c=1-p_c^{\text{\rm site}}(\mathbb Z^d)$, and $\gamma(q)$ is finite for all $q>q_c$.)
\end{enumerate}
\section*{Acknowledgements}
We thank David Sivakoff for many valuable discussions.
Janko Gravner was partially supported by the NSF grant DMS--1513340, Simons Foundation Award \#281309, and the Republic of Slovenia’s Ministry of Science program P1--285. He also gratefully acknowledges the hospitality of the Theory Group at Microsoft Research, where most of this work was completed.
\bibliographystyle{alphanum}
% End of arXiv:1705.01652, ``Polluted Bootstrap Percolation with Threshold Two in All Dimensions'' (math.PR; math-ph).
% https://arxiv.org/abs/1911.00574
\title{Local regularity result for an optimal transportation problem with rough measures in the plane}
\begin{abstract}
We investigate the properties of convex functions in the plane that satisfy a local inequality which generalizes the notion of sub-solution of the Monge-Amp\`ere equation for a Monge-Kantorovich problem with quadratic cost between non-absolutely continuous measures. For each measure, we introduce a discrete scale so that the measure behaves as an absolutely continuous measure up to that scale. Our main theorem then proves that such convex functions cannot exhibit any flat part at a scale larger than the corresponding discrete scales on the measures. This, in turn, implies a $C^1$ regularity result up to the discrete scale for the Legendre transform. Our result applies in particular to any Kantorovich potential associated to an optimal transportation problem between two measures that are (possibly only locally) sums of uniformly distributed Dirac masses. The proof relies on novel explicit estimates directly based on the optimal transportation problem, instead of the Monge-Amp\`ere equation.
\end{abstract}
\section{Introduction}
\subsection{Generalized Monge-Amp\`ere equation for rough measures}
In this paper, we investigate the properties of convex functions in $\mathbb R^2$ that can be seen as {\em local} one-sided Kantorovich potentials. More specifically, we consider a (continuous) convex function $\psi:\mathbb R^2\to \mathbb R$ that satisfies
\begin{equation} \label{eq:ineq1}
\mu(A) \leq \nu (\partial\psi(A))\quad \mbox{ for all Borel sets $A\subset \Omega$}
\end{equation}
where $\mu$ and $\nu$ are two non-negative Radon measures on $\mathbb R^2$ and $\Omega$ is an open bounded subset of $\mathbb R^2$.
We recall that the subdifferential $\partial \psi $ of the function $\psi$ is given by
$$
\partial\psi(x) = \left\{ z\in \mathbb R^2\, ;\, \psi(y)\geq \psi(x)+z\cdot(y-x) \mbox{ for all } y\in\mathbb R^2\right\}
$$
and that $\partial\psi(A) = \cup_{x\in A} \partial\psi(x)$.
We note that \eqref{eq:ineq1} is a {\em local condition} as it is only posed in a subset $\Omega\subset \mathbb R^2$.
As we will explain shortly,
Inequality \eqref{eq:ineq1} appears naturally when $\psi$ is a Kantorovich potential associated to the optimal transportation problem between two probability measures (although we do not need to require that $\mu$ and $\nu$ have the same mass in our paper).
When $\mu$ and $\nu$ satisfy
\begin{equation}\label{eq:munul}
d\mu(x) \geq \frac 1 \lambda dx \quad \mbox{ in } \Omega , \qquad d\nu(x) \leq \lambda dx\quad \mbox{ in } \partial\psi(\Omega)
\end{equation}
for some $\lambda>0$, then \eqref{eq:ineq1} implies that
$$ \frac 1{\lambda^2} |A| \leq |\partial\psi(A)| \quad \mbox{ for all Borel sets $A\subset \Omega$}$$
which says that $\psi$ is a solution (in the {\it Alexandrov sense}) of the one-sided Monge-Amp\`ere equation
\begin{equation}\label{eq:MA1}
\det (D^2 \psi(x)) \geq \frac 1{\lambda^2} \quad \mbox{ in } \Omega .
\end{equation}
In the case of absolutely continuous measures, inequality~\eqref{eq:ineq1} is hence a reformulation of sub-solutions to the Monge-Amp\`ere equation in terms of optimal transportation.
In dimension $2$, it is known that \eqref{eq:MA1} implies that $\psi$ is strictly convex in $\Omega$ (see Theorem \ref{th:strictConvexity2D} below) and the strict convexity of $\psi$ is the first step in the regularity theory for the solutions of the full Monge-Amp\`ere equation \eqref{eq:pureMongeAmpere} and the corresponding optimal transportation problem (see Section \ref{sec:ref} for further presentation of relevant results and references). In dimension $2$ and only in dimension $2$ (see below again), this is a {\em local property} in the sense that it does not require any boundary condition on $\psi$: Whenever $\psi$ satisfies \eqref{eq:MA1} on any subdomain $\Omega$, it is strictly convex in the interior of that subdomain, independently of what occurs outside of $\Omega$.
The goal of this paper is to extend this result when $\psi$ satisfies \eqref{eq:ineq1} with measures $\mu$ and $\nu$ that are not necessarily absolutely continuous with respect to the Lebesgue measure and satisfy some lower and upper bounds only up to a certain scale (Assumption \ref{ass:measures} below).
This includes the case where $\mu$ and $\nu$ are sums of uniformly distributed Dirac masses.
Our main theorem (Theorem \ref{th:flatpartbound}) implies in particular that $\psi$ is then strictly convex up to a certain scale (in Corollary \ref{cor:flat}, we derive a bound on the diameter of the flat parts of $\psi$).
Equivalently, our result says that the Legendre transform of $\psi$ is $C^1$ regular up to a certain scale (see Corollary \ref{cor:regularity}) on a subset of $\Omega$.
\medskip
{\bf Outline of the paper:}
Our first result, Theorem \ref{thm:optimal} in Section \ref{sec:optimal} below, makes precise the relation between inequality \eqref{eq:ineq1} and optimal transportation theory. It is proved in Section \ref{sec:proof} and is of independent interest. The strict convexity ``up to a certain scale'', which is the main topic of this paper, is then presented in Section \ref{sec:main} together with several consequences. This is followed by an overview of the existing literature and results related to our work (Section \ref{sec:ref}).
The proof of our main theorem is given in Section \ref{sec:proofthm}, while Sections \ref{sec:proofcor1} and \ref{sec:proofcor2} are devoted to the proofs of Corollaries \ref{cor:flat} and \ref{cor:regularity}.
\medskip
\subsection{Relation to Optimal transportation problem}\label{sec:optimal}
Inequality \eqref{eq:ineq1} is natural in the context of optimal transportation theory (in any dimension $n\geq 1$). To explain this connection, we first recall that given two probability measures $\mu \in \mathscr{P}(\mathbb R^n)$, $\nu \in \mathscr{P}(\mathbb R^n)$ the {\em Monge-Kantorovich Problem} with quadratic cost is concerned with the minimization problem
\begin{equation}
\label{eq:KantorovichQuad}
\min_{\pi\in \Pi(\mu,\nu)} \int_{ \mathbb R^n \times \mathbb R^n} |x-y|^2 d\pi(x,y)
\end{equation}
where
$\Pi(\mu,\nu)$ denotes the set of all probability measures $\pi \in \mathscr{P}( \mathbb R^n \times \mathbb R^n)$ with marginals $\mu$ and $\nu$, i.e. such that $\pi(A \times \mathbb R^n)=\mu(A)$ for all $A \subset \mathbb R^n$ and $\pi( \mathbb R^n \times B)=\nu(B)$ for all $B \subset \mathbb R^n$.
The existence of a minimizer $\pi$ for problem (\ref{eq:KantorovichQuad}) (which will be called an \textit{optimal transport plan} between the measures $\mu$ and $\nu$) is a classical result, see for example \cite{villani2008optimal}. Moreover, it is known that $\pi\in \Pi(\mu,\nu)$ is a minimizer for \eqref{eq:KantorovichQuad} {\it if and only if} it is supported on the graph of the subdifferential of a lower semi-continuous convex function $\psi$:
\begin{equation}\label{eq:supgraph}
\mathrm{supp}(\pi) \subset \mathrm{Graph}(\partial\psi):= \left\{(x,z)\in \mathbb R^n\times \mathbb R^n\; |\; z\in \partial\psi(x)\right\} \mbox{.}
\end{equation}
The function $\psi$ is called a Kantorovich potential for problem \eqref{eq:KantorovichQuad}, and one can easily check that it satisfies \eqref{eq:ineq1} (globally). Indeed, for all Borel sets $A\subset \mathbb R^n$, we can write
$$
\mu(A) = \int_A \int_{\mathbb R^n} d\pi = \int_A \int_{\partial \psi(A)} d\pi \leq \int_{\mathbb R^n} \int_{\partial \psi(A)} d\pi = \nu(\partial\psi(A)).
$$
We note that this inequality might be strict in general but that we can similarly prove the inequality
\begin{equation}\label{eq:ineq2}
\mu(\partial\psi^*(A)) \geq \nu(A)
\end{equation}
where $\psi^*$ denotes the Legendre transform defined by
$$\psi^*(z) = \sup_{x\in\mathbb R^n} \big( x\cdot z -\psi(x)\big).$$
Indeed, the duality relation $ z\in\partial\psi(x) \Leftrightarrow x\in \partial\psi^*(z)$
together with \eqref{eq:supgraph} implies
$ \mathrm{supp}(\pi) \subset \left\{(x,z)\in \mathbb R^n\times \mathbb R^n\; |\; x\in \partial\psi^*(z)\right\} .$
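A classical one-dimensional example illustrates how the Legendre transform exchanges corners and flat parts (the two defects of regularity that play a role in this paper): for $\psi(x)=|x|$ on $\mathbb R$, one computes
$$
\psi^*(z) = \sup_{x\in\mathbb R} \big( xz - |x| \big) =
\left\{
\begin{array}{ll}
0 & \mbox{ if $|z|\leq 1$,}\\
+\infty & \mbox{ if $|z|>1$,}
\end{array}
\right.
$$
so the corner of $\psi$ at the origin, where $\partial\psi(0)=[-1,1]$, corresponds to the flat part of $\psi^*$ on $[-1,1]$, in accordance with the duality relation $z\in\partial\psi(x)\Leftrightarrow x\in\partial\psi^*(z)$.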
\medskip
The purpose of this paper is to derive strict convexity estimates on $\psi$ when we assume only that \eqref{eq:ineq1} {\em holds locally} (and we do not require \eqref{eq:ineq2}).
If we assume that both \eqref{eq:ineq1} and \eqref{eq:ineq2} hold (locally), then we get some convexity estimates on $\psi^*$, which are equivalent to $C^1$ estimates on $\psi$.
A question that arises naturally is whether assuming \eqref{eq:ineq1} and \eqref{eq:ineq2} is equivalent to assuming that $\psi$ is associated to an optimal transport plan between $\mu$ and $\nu$.
The answer is of course straightforward for the Monge-Amp\`ere equation, since a function that is both a sub-solution and a super-solution is necessarily a solution. It is hence natural to expect \eqref{eq:ineq1} and \eqref{eq:ineq2} to imply a similar result, but the situation is more delicate. We recall in particular that the Monge-Kantorovich potential is not unique when the measures $\mu$ and $\nu$ are not absolutely continuous with respect to the Lebesgue measure.
To the authors' knowledge, this question has not been previously addressed in the literature. We will
show that if $\psi$ satisfies \eqref{eq:ineq1} and \eqref{eq:ineq2} globally (that is, in $\mathbb R^n$) and if $\mu$ and $\nu$ have finite second moments, then $\psi$ must indeed be a Kantorovich potential associated to the optimal transportation problem \eqref{eq:KantorovichQuad}.
More precisely, we prove the following result:
\begin{theorem}\label{thm:optimal}
Let $\mu,\;\nu$ be two probability measures on $\mathbb R^n$ with finite second moments:
$$
\int_{\mathbb R^n} |x|^2d\mu(x) + \int _{\mathbb R^n} |y|^2 d\nu(y) <\infty
$$
and let $\psi$ be a proper lower semi-continuous convex function such that
\eqref{eq:ineq1} and \eqref{eq:ineq2} hold for all measurable sets $A$. Then there exists an optimal plan $\pi\in \Pi(\mu,\nu)$ (minimizer of \eqref{eq:KantorovichQuad})
supported on $\Gamma = \mathrm{Graph}\,{\partial \psi}$ and the pair $(\psi,\psi^*)$ is then a minimizer for the dual problem
$$\inf \left\{ \int \phi\, d\mu +\int \varphi\, d\nu \, ;\, \phi(x)+\varphi(y) \geq x\cdot y \quad\forall (x,y)\right\}.$$
\end{theorem}
This result is completely independent of the regularity theory that we develop in the rest of this paper, but it clarifies the relation between inequality \eqref{eq:ineq1} and the optimal transportation problem.
The proof (see Section \ref{sec:proof}) relies on the approximation of the measures $\mu$ and $\nu$ by sums of Dirac masses to construct a plan $\pi\in \Pi(\mu,\nu)$ supported on $\Gamma = \mathrm{Graph}\,{\partial \psi}$ (the optimality of such a plan then follows from classical results from optimal transportation theory).
\subsection{Local convexity and regularity}\label{sec:main}
The main goal of this paper is to develop a regularity theory when we do not assume that $\mu$ and $\nu$ are absolutely continuous with respect to the Lebesgue measure, but that they satisfy the following assumption:
\begin{assumption}
\label{ass:measures}
Assume that there are constants $h_1,h_2>0$ and $\lambda_1,\lambda_2>0$, such that the measures $\mu$ and $\nu$ satisfy
\begin{equation}
\label{eq:MeasuresConditions}
\mu(R) \geq \frac{|R|}{\lambda_1} \mbox{\, and \,} \nu(R'\cap \partial\psi(\Omega)) \leq \lambda_2 |R'|
\end{equation}
for all rectangles $R \subset \Omega$ and $R' \subset \mathbb R^2$ whose side lengths are at least $h_1$ (for $R$) and $h_2$ (for $R'$), respectively.
\end{assumption}
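A simple example (ours, for illustration only) of a measure satisfying the first inequality in \eqref{eq:MeasuresConditions} is the rescaled lattice measure
$$
\mu = h_1^2 \sum_{k\in\mathbb Z^2} \delta_{h_1 k}.
$$
Any axis-parallel rectangle $R$ with side lengths $a,b\geq h_1$ contains at least $\lfloor a/h_1\rfloor\, \lfloor b/h_1\rfloor \geq \frac{ab}{4h_1^2}$ points of the lattice $h_1\mathbb Z^2$ (since $\lfloor t\rfloor\geq t/2$ for $t\geq 1$), so that $\mu(R)\geq |R|/4$, i.e. the lower bound holds with $\lambda_1=4$.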
This assumption is less restrictive than \eqref{eq:munul} and is
relevant in the framework of optimal transportation.
In fact, the original problem considered by Kantorovich in \cite{kantorovitch1958translocation} included measures $\mu$ and $\nu$ that are sums of Dirac masses rather than absolutely continuous measures.
This setting is also important for numerical applications:
Measures satisfying Assumption \ref{ass:measures} appear naturally when introducing discrete approximations of absolutely continuous measures with bounded densities, as is often done for computational purposes.
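To illustrate this discrete setting, consider two uniform discrete measures with the same number of atoms. By Birkhoff's theorem, the extreme points of $\Pi(\mu,\nu)$ are then (rescaled) permutation matrices, so the linear program \eqref{eq:KantorovichQuad} attains its minimum at a permutation; for small examples this can be checked by brute force. The following sketch (ours, purely illustrative) does so in dimension one:

```python
import itertools

# Uniform discrete measures mu and nu on the real line (equal weights 1/n).
x = [0.0, 1.0, 3.0]   # atoms of mu
y = [0.0, 2.0, 4.0]   # atoms of nu
n = len(x)

def transport_cost(perm):
    # Quadratic cost of the Monge coupling x[i] -> y[perm[i]].
    return sum((x[i] - y[perm[i]]) ** 2 for i in range(n)) / n

# By Birkhoff's theorem the Kantorovich linear program over Pi(mu, nu)
# attains its minimum at a (rescaled) permutation matrix, so it suffices
# to minimize over permutations.
best = min(itertools.permutations(range(n)), key=transport_cost)
optimal_cost = transport_cost(best)
```

For the quadratic cost on the real line, the optimal coupling is the monotone rearrangement, which is why the identity permutation is optimal in this example.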
In order to state our main result, we introduce the set
$$
\Omega^\delta = \{ x\in\Omega\, ;\, \mathrm{dist}(x,\partial\Omega)>\delta\}.
$$
The main result of this paper is the derivation of the technical inequality \eqref{eq:flatpartbound} below, which quantifies the strict convexity of $\psi$ up to a scale depending on $h_1$ and $h_2$:
\begin{theorem}[Strict convexity of Kantorovich potentials $\psi$]
\label{th:flatpartbound}
Let $\psi:\mathbb R^2\to\mathbb R$ be a convex function satisfying \eqref{eq:ineq1} for some measures $\mu$ and $\nu$ satisfying Assumption~\ref{ass:measures}.
Given $\delta>0$ and $(x,y) \in \Omega^\delta \times \Omega^\delta $, let $K$ be any constant satisfying
\begin{equation}\label{eq:defK}
K \geq \mathrm{diam} \, \partial \psi (U_{\delta})
\end{equation}
where $U_{\delta}$ is a $\delta$-neighborhood of the segment $[x,y]$ (i.e. $U_{\delta}=\displaystyle \cup_{z \in [x,y]} B_{\delta}(z)$) and define
$$ \eps = - \min_{t\in [0,1]} \Big( \psi((1-t)x+ty) -\big[(1-t)\psi(x)+t\psi(y)\big] \Big) \geq 0 .$$
There exists a universal constant $C$ such that if the half-length $\ell := |x-y|/2$ satisfies
\begin{equation}\label{eq:ellcond}
\ell\geq 2\,h_1,\quad \ell^{2}\geq C\,K \,\lambda_1\,\lambda_2\,h_2\,,
\end{equation}
then the following inequality holds:
\begin{equation}
\label{eq:flatpartbound}
{\ell^8} \, \log \left(1+\frac{\delta}{\gamma}\right)
\leq C\,\lambda_1^4\,\lambda_2^4\,K^8,
\end{equation}
provided
$$
\gamma:=\max \left\{ \frac{\eps }{K},2h_1, \frac{\ell h_2}{C\,K }\right\} \leq \delta/2.
$$
\end{theorem}
We immediately make the following remarks:
\begin{enumerate}
\item The logarithm in the left-hand side of \eqref{eq:flatpartbound} goes to infinity when $\gamma$ goes to zero, so Theorem~\ref{th:flatpartbound} provides a lower bound on $\gamma$ depending on the quantity $\frac{\ell^2}{C \lambda_1 \lambda_2 K ^2}$: indeed, either $\gamma > \delta / 2$, or inequality \eqref{eq:flatpartbound} provides a lower bound for $\gamma$.
With the notation of the theorem, we thus see that as long as \eqref{eq:ellcond} holds, we have
\begin{equation}
\label{eq:flatpartbound2}
\gamma \geq
\delta \min\left\{ \frac{1}{\exp \left( \frac{C^4 \lambda_1^4 \lambda_2^4 K^8}{ \ell^8} \right) -1} , \frac{1}{2} \right\} \mbox{.}
\end{equation}
Note that we can take $K= \mathrm {diam}\, \partial \psi (\Omega)$, which does not depend on $\ell$.
\item The conditions \eqref{eq:ellcond} are clearly satisfied in the absolutely continuous case $h_1=h_2=0$. In that case, \eqref{eq:flatpartbound2} gives a lower bound on $\gamma =\eps/K$ and implies the strict convexity of $\psi$. We actually recover a classical result, see Theorem \ref{th:strictConvexity2D} below.
\item The proof will make it clear that the assumption $(x,y) \in \Omega^\delta \times \Omega^\delta $ in the theorem is not necessary. The result holds for $(x,y) \in \Omega \times \Omega$ provided there is a rectangle $R_{\delta}(x,y)$, with base the line segment $[x,y]$ and height $\delta$, which is contained in $\Omega$. In this setting we can also take $K=\mathrm{diam} (\partial \psi(R_{\delta}(x,y)))$.
\item As mentioned above, the conditions \eqref{eq:ellcond} are trivially satisfied when $h_1=h_2=0$. When $h_1,h_2\neq 0$, some conditions on $\ell$ are clearly needed, since we expect the potential $\psi$ to have flat parts, so that $\eps=0$ when $\ell$ is small enough.
In the simple case where $\mu$ and $\nu$ are uniformly distributed Dirac masses (on lattices of characteristic lengths $h_1$ and $h_2$),
the first condition in \eqref{eq:ellcond} is necessary to have several lattice points in the set $U_{\delta}$, while the second condition in \eqref{eq:ellcond} guarantees that all those points cannot be sent onto a thin rectangle (of height $h_2$).
\item The result is consistent with the natural scaling of the problem. For example,
if we replace the measure $\nu$ by the new measure $\tilde \nu$ defined by
$ \tilde \nu (R) := \nu (\tau R)$ for some fixed $\tau>0$, then $\tilde \nu$ satisfies the conditions of
Assumption \ref{ass:measures} with $\tilde h_2 = \tau^{-1} h_2$ and $\tilde \lambda_2 = \tau^2 \lambda_2$.
Furthermore, the function $\tilde \psi = \tau^{-1} \psi$ is a Kantorovich potential associated to the measures $\mu$ and $\tilde \nu$ which satisfies inequality \eqref{eq:ineq1} (with $\tilde \nu$ instead of $\nu$). One can then check that the conditions \eqref{eq:ellcond} and the inequality \eqref{eq:flatpartbound} are unchanged by these transformations.
\end{enumerate}
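For the reader's convenience, the derivation of \eqref{eq:flatpartbound2} in the first remark can be spelled out: if $\gamma\leq\delta/2$, then \eqref{eq:flatpartbound} gives
$$
\log\left(1+\frac{\delta}{\gamma}\right) \leq \frac{C\,\lambda_1^4\,\lambda_2^4\,K^8}{\ell^8},
\qquad\mbox{hence}\qquad
\gamma \geq \frac{\delta}{\exp\left(\frac{C\,\lambda_1^4\,\lambda_2^4\,K^8}{\ell^8}\right)-1},
$$
while in the opposite case $\gamma>\delta/2$ the bound $\gamma\geq\delta/2$ holds trivially; taking the minimum of the two alternatives (and renaming the universal constant) yields \eqref{eq:flatpartbound2}.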
Theorem \ref{th:flatpartbound} provides a way to quantify how close $\psi$ is to being strictly convex.
For instance, we can use Theorem \ref{th:flatpartbound} to estimate the largest possible length of a ``flat part'' of $\psi$ by assuming that $\eps=0$ and using \eqref{eq:flatpartbound} to get an upper bound on $\ell$.
We get:
\begin{corollary}\label{cor:flat}
Under the conditions of Theorem \ref{th:flatpartbound}, assume furthermore that
$\varepsilon =0$ (that is, $\psi$ is affine on the segment $[x,y]$).
If $h_1\leq \delta/4$, $\sqrt{C \lambda_1 \lambda_2} h_2 \leq \delta$ and
\begin{equation}\label{eq:condellh2}
\frac{\ell h_2}{K }\leq \frac{C \delta}{2}
\end{equation}
then
\begin{equation}\label{eq:ellbound}
\ell \leq \max\left\{ 2 h_1, \sqrt{ C \lambda_1 \lambda_2 \, K h_2},
\frac{K \sqrt {C \lambda_1 \lambda_2}}{\left[\ln\left(\frac\delta{2 h_1}\right)\right]^{1/8}},
\frac{K \sqrt {C \lambda_1 \lambda_2}}{\left[ \ln \left(\frac{\delta}{\sqrt{C \lambda_1 \lambda_2} h_2}\right)\right]^{1/8}}
\right\} \mbox{.}
\end{equation}
\end{corollary}
We recall that we can take $K= \mathrm {diam}\, \partial\psi(\Omega)$, in which case \eqref{eq:condellh2} reads $ \ell h_2 \leq \frac{C\delta}{2}\, \mathrm {diam}\, \partial\psi(\Omega)$ and \eqref{eq:ellbound} gives
an upper bound on $\ell$ which depends only on the data of the problem and goes to zero when $\max\{ h_1,h_2\}\to 0$.
When $h_1=h_2=0$, Corollary \ref{cor:flat} gives $\ell=0$,
so we recover the classical result that $\psi$ must be strictly convex in that case (no flat parts).
We can also take $K= \mathrm{diam} \, \partial \psi (U_{\delta})$ (so that \eqref{eq:ellbound} is sharper)
in which case we note that if $h_2^2 \leq \frac{\delta \ell}{\lambda_1\lambda_2}$, then we can use the estimate \eqref{eq:opsilower} (Lemma \ref{lem:opsibound} below) to replace the condition \eqref{eq:condellh2} with the following condition, which does not depend on $\ell$:
\[
\sqrt {C \lambda_1 \lambda_2} h_2 \leq \frac{3\delta^{3/2}}{(\mathrm {diam}\, \Omega_2)^{1/2}}.
\]
Going back to Theorem \ref{th:flatpartbound}, we observe that the control it provides on the convexity of $\psi$ should imply some $C^1$ regularity, up to a length scale depending on $h_1$ and $h_2$, for the Legendre dual (or conjugate, see \cite{rockafellar2009variational}), defined for all $z\in\mathbb R^2$ by
\begin{equation}
\label{eq:LegendreTransform}
\psi^*(z) = \sup_{x\in\Omega} \big( x\cdot z -\psi(x)\big).
\end{equation}
Indeed, we can show:
\begin{corollary}[$C^1$ regularity of $\psi^*$]\label{cor:regularity}
Under the conditions of Theorem \ref{th:flatpartbound} and given $\delta>0$,
there exist functions $\rho(\ell)$, $\rho_1(\ell)$ and $\rho_2(\ell)$, monotone increasing, with limit $0$ as $\ell\to 0^+$,
and depending only on $\delta$, $\lambda_1\lambda_2$, $D=\mathrm{diam} \,\Omega$ and $K$ such that
for all $z,z'\in \partial \psi(\Omega^\delta)$, we have
\begin{equation}\label{eq:reg}
|x-x'| \leq \max \left(\rho(|z-z'|) , \rho_1( h_1),\rho_2(h_2) \right) \qquad \forall x\in \partial\psi^*(z),\; x'\in\partial\psi^*(z').
\end{equation}
In particular, if $h_1=h_2=0$, then $\psi^*$ is $C^1$ in $\partial \psi(\Omega^\delta)$, with the following explicit estimate on the modulus of continuity of $\nabla \psi^*$:
\begin{equation}\label{eq:oepslowerbd}
|\nabla{\psi^*}(z)-\nabla{\psi^*}(z')| \leq C\,\frac{\sqrt{\lambda_1\,\lambda_2}\,L_\infty}{\left(\log\left(1+\frac{1}{ C\,\sqrt{\lambda_1\,\lambda_2}\,|z-z'|}\right)\right)^{1/8}}
\end{equation}
where $L_\infty$ now denotes the Lipschitz bound of $\psi$ over $\Omega$.
\end{corollary}
We conclude this presentation of our result by observing that in Assumption \ref{ass:measures} we only require a lower bound on $\mu$ and an upper bound on $\nu$. Such bounds are all that we need to study the strict convexity of $\psi$.
Opposite bounds would be required to prove the $C^1$ regularity of $\psi$ up to a certain length scale (recall that $\psi^*$ is associated to an optimal transportation problem in which the roles of $\mu$ and $\nu$ are inverted).
More precisely, if we assume that
$$\mu(R\cap\Omega) \leq \lambda_2 |R| \quad \mbox{ and } \quad \nu(R')\geq \frac{1}{\lambda_1} |R'| $$
for all rectangles $R\subset \mathbb R^2$ and $R'\subset \Lambda$ (with dimensions above a certain scale), then using
inequality \eqref{eq:ineq2} instead of \eqref{eq:ineq1} our analysis yields
the $C^1$ regularity up to a certain scale for the potential $\psi$ on the set
$$
\Omega' = \bigcup_{\delta>0} \partial\psi^* (\Lambda^\delta)
$$
where $\Lambda^\delta = \{ x\in\Lambda\, ;\, \mathrm{dist}(x,\partial\Lambda)>\delta\}.$
\subsection{Brief overview of the literature}\label{sec:ref}
When the measures $\mu=f\,dx$ and $\nu = g\, dx$
are absolutely continuous and concentrated on the open sets $\Omega$ and $\Lambda$,
a classical result due to Brenier \cite{brenier1987decomposition, brenier1991polar} states that the solution of the minimization problem \eqref{eq:KantorovichQuad} is unique and is given by $\pi=(Id\times \nabla\psi)_{\#} \mu$, where $\psi:\mathbb R^n\to \mathbb R$ is a globally Lipschitz convex function such that
$\nabla\psi _{\#} \mu=\nu $.
If furthermore,
there exists $\lambda>0$ such that
$1/\lambda \leq f,g\leq \lambda$ on their respective supports, then $\psi$ satisfies
\begin{equation}
\label{eq:pureMongeAmpere}
\frac{1}{\lambda^2} \chi_{\Omega}\leq \det D^2 \psi \leq \lambda^2 \chi_{\Omega}
\end{equation}
in a weak sense (the {\it Brenier sense}) together with the boundary condition $\nabla\psi(\mathbb R^n)\subset \Lambda $ (see for instance \cite{caffarelli1992regularity, philippis2013regularity}).
Even in that case, it is classical that the regularity of $\psi$ requires some condition on the support of $g$ (for example if $\Omega$ is connected but $\Lambda$ is not, then the map $\nabla \psi$ must be discontinuous).
Caffarelli proved in \cite{caffarelli1992regularity} that if we further assume that
$\Lambda $ is {\it convex}, then $\psi$ is a strictly convex solution of the Monge-Amp\`ere equation \eqref{eq:pureMongeAmpere} in the following {\it Alexandrov sense}:
\begin{equation}
\label{eq:MAalex} \frac{1}{\lambda^2} | A\cap \Omega| \leq |\partial\psi(A)| \leq \lambda^2 |A\cap\Omega| \qquad \mbox{for any Borel set $A\subset \mathbb R^n$.}
\end{equation}
The regularity theory for Monge-Amp\`ere equation developed by Caffarelli \cite{caffarelli1990localization,caffarelli1991some,caffarelli1992boundary,caffarelli1992regularity,caffarelli1996boundary} for strictly convex solutions of \eqref{eq:MAalex} then implies that $\psi$ is $C^{1,\alpha}_{loc}$.
Even in this absolutely continuous framework, our result requires only inequality \eqref{eq:ineq1} (which is equivalent to the lower bound in \eqref{eq:MAalex}) and {\em no assumption on $\Lambda$}. We note that this inequality is always satisfied by Brenier's potential, while the upper bound in \eqref{eq:MAalex} requires further assumptions on $\Lambda$ (e.g. convexity) to hold.
When $\Lambda$ is non-convex, the convex potential $\psi$ cannot be expected to be $C^1$ everywhere.
However, partial regularity results that offer a useful comparison have been derived, first in dimension $2$ by Yu \cite{Yu07} and Figalli \cite{Figalli10}, and then generalized to higher dimensions in \cite{FigalliKim10} and to more general cost functions in \cite{philippisfigalli15}.
These results show in particular that there exists an open subset of $\Omega$ of full measure in which $\psi$ is $C^{1,\alpha}$ and strictly convex. In dimension $2$, a precise geometric description of the singular set can be given \cite{Figalli10}. Our argument provides explicit quantitative estimates in this direction which extend to measures that are not absolutely continuous.
In this paper, we do not need to assume that $\psi$ is associated to an optimal transportation problem with nice properties globally. We only require inequality \eqref{eq:ineq1} to hold for measures $\mu$ and $\nu$ that satisfy some lower and upper bounds in some subsets of their supports.
In dimension $3$ and higher, functions that satisfy \eqref{eq:pureMongeAmpere} in $\Omega$ might not be strictly convex, as shown by Pogorelov's classical counterexample \cite{gutierrez2001monge}, and
the regularity theory for such solutions requires appropriate assumptions on the boundary conditions \cite{caffarelli1990localization,caffarelli1991some,caffarelli1992boundary,caffarelli1996boundary}.
However, in dimension $2$ (which is the setting of our main result),
there is a simple proof of the strict convexity of smooth local solutions of \eqref{eq:pureMongeAmpere}, which was originally proved in \cite{aleksandrov1942smoothness} and \cite{heinz1959differentialungleichung} by Aleksandrov and Heinz independently (see also \cite{trudinger2008monge}).
That result can be formulated as follows:
\begin{theorem} [\cite{aleksandrov1942smoothness}, \cite{heinz1959differentialungleichung}]
\label{th:strictConvexity2D}
For $n=2$, let $ \psi \in C^2_{loc}(\Omega)$ satisfy
\begin{equation}\label{eq:MAS}
\det \, D^2\psi \geq \lambda^{-2} >0 \qquad\mbox{ in $\Omega$,}
\end{equation}
and assume that $\psi \geq 0$ in $\Omega$ and $\psi (x_0)=0$ for some $x_0$ in the interior of $\Omega$. Denote $\delta:=\mathrm{dist} (x_0, \partial \Omega) >0$ and let $H$ be any line passing through $x_0$.
Then for all $\ell \leq \frac{\delta}{2}$, the quantity
$$ \gamma = \frac { \sup_{x \in B_{\ell}(x_0) \cap H} \psi(x)}{\| \nabla \psi \|_{L^\infty(\Omega)}}
$$
satisfies
\begin{equation}\label{eq:estcontinuous}
\ell^2 \ln\left(1+ \frac{\delta}{\gamma}\right) \leq 8\lambda^2 \| \nabla \psi \|_{L^\infty(\Omega)} ^2.
\end{equation}
\end{theorem}
Inequality \eqref{eq:estcontinuous} implies the following estimate:
\begin{equation}
\label{eq:strictConvexity2D}
\sup_{x \in B_{\ell}(x_0) \cap H} \psi(x) \geq \frac{\| \nabla \psi \|_{\infty} \delta}{\exp \left(\displaystyle \frac{8 \lambda^2 \| \nabla \psi \|_{\infty}^2}{ \ell^2}\right)-1}>0
\end{equation}
for all $\ell \leq \frac{\delta}{2}$.
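Indeed, \eqref{eq:estcontinuous} can be rewritten as
$$
1+\frac{\delta}{\gamma} \leq \exp\left(\frac{8\lambda^2 \|\nabla\psi\|_\infty^2}{\ell^2}\right),
\qquad\mbox{that is,}\qquad
\gamma \geq \frac{\delta}{\exp\left(\frac{8\lambda^2\|\nabla\psi\|_\infty^2}{\ell^2}\right)-1},
$$
and \eqref{eq:strictConvexity2D} follows by multiplying by $\|\nabla\psi\|_\infty$ and recalling the definition of $\gamma$.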
Our Theorem \ref{th:flatpartbound} with $h_1=h_2=0$ gives a proof of this result, and in particular estimate \eqref{eq:estcontinuous}, when we assume only that $\psi$ is an {\it Alexandrov} solution of \eqref{eq:MAS}, that is
a convex function satisfying
$$ |\partial\psi(A)| \geq \frac{1}{\lambda^2} | A\cap \Omega| \quad \mbox{ for all Borel sets $A\subset \mathbb R^2$.}$$
But the main interest of our result is that we consider measures that may not be absolutely continuous with respect to the Lebesgue measure. In this case, the Kantorovich potential $\psi$ (which still exists but may not be unique) does not solve the Monge-Amp\`ere equation (either \eqref{eq:pureMongeAmpere} or \eqref{eq:MAalex}).
To our knowledge, no quantitative estimates on the convex function $\psi$ are known in this setting. Brenier's result does not apply (there might not be any measurable map $T$ such that $T _{\#} \mu=\nu $), and the Kantorovich potentials are expected to be neither strictly convex (they will have `flat parts') nor $C^1$ (they will have `corners').
Theorem \ref{th:flatpartbound} provides an inequality similar to \eqref{eq:estcontinuous} when \eqref{eq:MAS} is replaced by \eqref{eq:ineq1} with
measures satisfying Assumption \ref{ass:measures} with $\lambda^2=\lambda_1\lambda_2$ and $\gamma$ replaced by
$$ \max \left\{\gamma ,2h_1, \frac{\ell h_2}{C\,K }\right\} .$$
This implies in particular that (in dimension $2$) any Kantorovich potential $\psi$ is strictly convex up to some scale depending on $h_1$ and $h_2$ in any open set in which the lower and upper bounds \eqref{eq:MeasuresConditions} hold.
Although our result is similar to Theorem \ref{th:strictConvexity2D},
the proof is completely different since we cannot use the Monge-Amp\`ere equation \eqref{eq:MAS} in this non absolutely continuous setting and instead we must rely solely on the measure inequality \eqref{eq:ineq1}.
In the groundbreaking work of Caffarelli as well as in the partial regularity theory of
\cite{Yu07,Figalli10,FigalliKim10}, a key tool is the use of some variants of the maximum principle for the Monge-Amp\`ere equation and the use of appropriate barriers. This is not possible in our framework.
Instead, the proof of our Theorem \ref{th:flatpartbound} relies on the derivation of upper and lower bounds for an integral quantity defined in \eqref{eq:quantityToBound}-\eqref{eq:weight}.
Note that a variational approach (relying on optimal transportation arguments rather than using some barriers for Monge-Amp\`ere equation) to the partial regularity theory of \cite{FigalliKim10} was recently developed in \cite{Goldman17, Goldman18}.
\medskip
\medskip
It is natural to ask whether our result could be extended to dimension $n\geq 3$.
It turns out that even in the absolutely continuous case, the result of Theorem \ref{th:strictConvexity2D} does not hold in dimension $3$ and higher. Indeed, a classical example by Pogorelov shows that $\psi$ can have a flat part
and is thus not necessarily strictly convex
(see \cite{gutierrez2001monge}).
A natural extension of Theorem \ref{th:strictConvexity2D} can however be found in \cite[Theorem 2.34]{bonsante2017equivariant}: under conditions similar to those of Theorem \ref{th:strictConvexity2D} but in dimension $n\geq 3$,
the convex function $\psi$ cannot be affine on a set of dimension larger than or equal to $n/2$.
For the sake of completeness, we present in Appendix \ref{app:2} a short proof, based on the ideas of \cite{bonsante2017equivariant},
of the following quantitative estimate:
\begin{theorem}
\label{th:affinebound}
Let $n\geq 3$ and let $ \psi \in C^2$, $\psi \geq 0$, satisfy $ \det D^2\psi \geq \lambda^{-2}>0$, and assume that $\psi(x_0)=0$ with $\delta:=\mathrm{dist} (x_0, \partial \Omega) >0$. Let $H$ be an affine subspace of dimension $d$ passing through $x_0$. Then for all $\ell \leq \frac{\delta}{2}$, the quantity
$$ \gamma = \frac {\sup_{x \in B_{\ell}^n(x_0) \cap H} \psi(x) } {\| \nabla \psi \|_{L^\infty(\Omega_1)}}
$$
satisfies
\begin{equation}\label{eq:estgammand}
\ell^{2d} \varphi(\delta/\gamma) \leq C\lambda^2 \| \nabla \psi \|_{L^\infty(\Omega_1)}^n \delta^{2d-n}
\end{equation}
with $\varphi(s) := s^{2d-n}\int_0^{s} \frac{r^{n-d-1}}{( r +1 )^d}\, dr$.
\end{theorem}
We note that $\varphi$ satisfies $\lim_{s\to\infty} \varphi(s) = \infty$ if and only if $d\geq n/2$ and so \eqref{eq:estgammand} implies the following lower bound:
\begin{equation}
\label{eq:affinebound}
\sup_{x \in B_{\ell}^n(x_0) \cap H} \psi(x) \geq
\left\{
\begin{array}{ll}
\min\left\{ \delta \|\nabla \psi \|_{\infty}, \left( \frac{\ell^{2d}}{C\lambda^2 \|\nabla \psi \|_{\infty}^{2n-2d}} \right)^{\frac{1}{2d-n}} \right\}
& \mbox{ if $d > n/2$} \\[3pt]
\delta \|\nabla \psi \|_{\infty} \exp \left( -C \frac{\lambda^2 \|\nabla \psi \|_{\infty}^n}{ \ell^n} \right) & \mbox{ if $d = n/2$}.
\end{array}
\right.
\end{equation}
In view of this result, it seems that the main result of this paper (Theorem \ref{th:flatpartbound}) could be extended to higher dimensions, provided one considers affine subspaces of dimension $n/2$.
However, the basic tool of our proof, the integral quantity \eqref{eq:quantityToBound}-\eqref{eq:weight}, is not well suited for such a generalization, and a new quantity would need to be introduced.
This question will thus be addressed in a future work.
\section{Proof of Theorem \ref{th:flatpartbound}}\label{sec:proofthm}
\subsection{Preliminaries}
The Kantorovich problem with the quadratic cost function is invariant under rigid motions.
Up to a translation and a rotation of $\Omega$, we can thus assume that the points $x,y$ in Theorem \ref{th:flatpartbound}
are $a:=(-\ell,0)$ and $b:=(\ell,0)$ and that the rectangle $[-\ell,\ell] \times [0,\;\delta]$ is contained in $\Omega$.
Up to subtracting an affine function, we can also assume that $\psi$ satisfies
\begin{equation}\label{eq:convexfunc}
\psi(-\ell,0)=\psi(\ell,0)=0 \quad \mbox{ and } \quad 0\in\partial\psi([a,b]).
\end{equation}
Throughout the proofs, $x=(x_{\parallel},x_\perp)$ or $y=(y_\parallel,y_\perp)$ will denote points in $\Omega\subset \mathbb R^2$ with $x_{\parallel},\;y_\parallel$ the first coordinate parallel to the segment $[a,b]$. Similarly $z=(z_\parallel,z_\perp)$ will denote a point in $\partial\psi(\Omega)\subset \mathbb R^2$.
We will also use the following notation:
\begin{align}
R_{\delta} & =\{(x_{\parallel},\,x_\perp)\,|\;|x_{\parallel}|\leq \ell/2,\ 0 \leq x_\perp\leq \delta\}.
\end{align}
Furthermore,
\eqref{eq:convexfunc} implies that $0\in \partial \psi (U_{\delta})$ and so for any $
K \geq \mathrm{diam} \, \partial \psi (U_{\delta})$ we have
\begin{equation}\label{eq:opsi}
K\geq \| \partial \psi \|_{L^{\infty}( R_{\delta}) }= \sup_{y\in R_{\delta}} \sup_{z\in \partial\psi(y)}|z| .
\end{equation}
We also note that
\begin{equation}\label{eq:oeps}
\eps := - \min_{t \in [0,1]} \psi(t a + (1-t) b)\geq 0.
\end{equation}
Throughout the proofs, $C$ denotes a numerical constant, which depends only on the dimension $d=2$ and whose value may change from line to line in the calculations.
Before moving to the heart of the proof, we state the following simple lemma, which we will use repeatedly:
\begin{lemma}
\label{lemma:SubdifferentialBound}
Let $\psi:[-\ell,\ell] \times [0, 2 \delta] \rightarrow \mathbb{R}$ be a convex function satisfying \eqref{eq:convexfunc}, with $K$ and $\eps$ as in \eqref{eq:opsi} and \eqref{eq:oeps}.
Then for all $y$ such that $|y_\parallel|\leq \ell/2$ and $0\leq y_\perp \leq 2\delta$, we have
\[
|z_\parallel| \leq \frac{2}{\ell}\, \left(K\, |y_\perp|+\eps \right), \qquad \forall z\in \partial\psi(y).
\]
\end{lemma}
\begin{proof}
Consider any $y$ with $|y_\parallel|\leq \ell/2$, $0 \leq y_\perp \leq 2\,\delta$ and any $z\in\partial\psi(y)$.
Then we have by the definition of subdifferential
\[
\psi(b) \geq \psi(y) +z\cdot(b-y)=\psi(y)+z_\parallel\cdot (b_\parallel-y_\parallel) +z_\perp\cdot (b_\perp-y_\perp).
\]
Since $b_\parallel - y_\parallel = \ell - y_\parallel \geq \ell /2 $ and $b_\perp = 0 $, we deduce that
\begin{align*}
z_\parallel
& \leq \frac{1}{b_\parallel - y_\parallel}\, \left[z_\perp\, y_\perp +(\psi(b)-\psi(y))\right]\\
& \leq \frac{2}{\ell}\, \left[z_\perp\, y_\perp +(\psi(b)-\psi(y))\right]\\
& \leq \frac{2}{\ell}\left(K\, |y_\perp|+\eps \right),
\end{align*}
where we have used \eqref{eq:convexfunc} (so that $\psi(b)= 0$), \eqref{eq:oeps} (so that $\psi(y)\geq -\eps$) and the fact that $|z_\perp|\leq K $ (by \eqref{eq:opsi}). The same argument with $b$ replaced by $a=(-\ell,0)$, for which $y_\parallel - a_\parallel \geq \ell/2$, gives the corresponding bound on $-z_\parallel$, and the bound on $|z_\parallel|$ follows.
This completes the proof of Lemma \ref{lemma:SubdifferentialBound}.
\end{proof}
We conclude these preliminaries by noting that the quantity $ \mathrm{diam} \, \partial \psi (U_{\delta})$ {\em a priori} depends on $\ell$. We obviously have
\begin{equation}\label{eq:opsiupper}
\mathrm{diam} \, \partial \psi (U_{\delta}) \leq \mathrm {diam}\, \partial\psi(\Omega)
\end{equation}
and we can show the following lower bound:
\begin{lemma}\label{lem:opsibound}
If $h_1 \leq \min\left\{ \delta , \ell\right\}$ and
$h_2^2 < \frac{\delta \ell}{\lambda_1\lambda_2}$, then
\begin{equation}\label{eq:opsilower}
\mathrm{diam} \, \partial \psi (U_{\delta}) \geq \left( \frac{\delta \ell}{\lambda_1\lambda_2}\right)^{1/2}.
\end{equation}
\end{lemma}
\begin{proof}
Inequality \eqref{eq:ineq1} gives
\begin{align*}
\mu(U_\delta) \leq \nu(\partial \psi (U_\delta)).
\end{align*}
Since $U_\delta$ contains a rectangle with side lengths $\ell$ and $\delta$, both at least $h_1$,
Assumption \ref{ass:measures} implies
$$
\mu(U_\delta)\geq \frac{\ell \delta}{\lambda_1} ,\quad \mbox{ and } \quad \nu(\partial \psi (U_\delta)) \leq \lambda_2 \max \{(\mathrm{diam} \, \partial \psi (U_{\delta}))^2,h_2^2\}.
$$
We deduce
$$ \frac{\ell \delta}{\lambda_1\lambda_2} \leq \max \{(\mathrm{diam} \, \partial \psi (U_{\delta})) ^2,h_2^2\}$$
and the condition $h_2^2 < \frac{\delta \ell}{\lambda_1 \lambda_2}$ implies (\ref{eq:opsilower}).
\end{proof}
\subsection{Proof of Theorem \ref{th:flatpartbound}}
We now describe our strategy for proving Theorem \ref{th:flatpartbound}.
First, we note that since $\psi$ is a convex function in $\Omega$, it is differentiable on a subset $\widetilde\Omega\subset \Omega$ of full measure ($|\Omega\setminus \widetilde\Omega|=0$), see for instance~\cite{rockafellar1970convex}.
We can thus define a map $T:\Omega\to \mathbb R^2$ which satisfies
\begin{equation}\label{eq:T}
T(x):=\nabla \psi(x)\qquad \forall x\in \widetilde\Omega.
\end{equation}
\begin{comment}The last $d-n$ coordinates of $T$ will similarly play a different role from the first $n$ and therefore, as for points, we denote $T(x)=(T_\parallel(x),T_\perp(x))$ with $T_\parallel(x)=(T_1(x),\ldots,T_{n}(x))$. \end{comment}
Our proof of Theorem \ref{th:flatpartbound} relies on some careful estimates (upper and lower bounds) of the integral quantity
\begin{equation}
\label{eq:quantityToBound}
\int_{R_{\delta}\times R_{\delta}} |T_\perp(y)-T_\perp(x)|\, \varphi(x,y) \, dy \, dx
\end{equation}
where the
weight function $\varphi(x,y)$ is given by
\begin{equation}
\label{eq:weight}
\varphi(x,y)=\frac{1}{(x_\perp+\gamma)^{2}} \mathbf{1}_{\{\frac{1}{2} x_\perp\leq y_\perp \leq 2\, x_\perp\}},
\end{equation}
for some $\gamma>0$. \begin{comment} The function $\varphi$ allows us to assign more importance in the calculation to those points $x$ and $y$ with $|x_d|,\; |y_d|<<1$ that are closer to the minimal hyperplane of $\psi$.
\end{comment}
The exponent $2$ is chosen to obtain the right logarithmic divergence in the estimates.
\begin{comment}
We chose to limit $\varphi$ to the subset of $\mathbb R_+^{d}$ where $\frac{1}{2} x_d\leq y_d \leq 2\,x_d$ as this subset is convex, contrary to the simpler subset of $\mathbb R^{d}$ where we would impose $\frac{1}{2} |x_d|\leq |y_d| \leq 2\, |x_d|$. \end{comment}
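To make the role of the exponent $2$ concrete: for $2x_\perp\leq \delta$, integrating the indicator in \eqref{eq:weight} over $y_\perp$ leaves $\frac{3}{2}\,x_\perp/(x_\perp+\gamma)^2$, and
$$
\int_0^\delta \frac{3}{2}\,\frac{x_\perp}{(x_\perp+\gamma)^2}\,dx_\perp
= \frac{3}{2}\left[\log\left(1+\frac{\delta}{\gamma}\right)-\frac{\delta}{\delta+\gamma}\right],
$$
which diverges like $\log(\delta/\gamma)$ as $\gamma\to 0$. The following numerical check (ours, purely illustrative, ignoring the clipping of the strip at $y_\perp=\delta$) confirms this closed form:

```python
import math

def weight_integral(delta, gamma, steps=200_000):
    # Midpoint-rule approximation of
    #   int_0^delta (3/2) * x / (x + gamma)^2 dx,
    # the x_perp-integral left after integrating the indicator in y_perp.
    h = delta / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += 1.5 * x / (x + gamma) ** 2 * h
    return total

def closed_form(delta, gamma):
    # Exact value: (3/2) [ log(1 + delta/gamma) - delta/(delta + gamma) ].
    return 1.5 * (math.log(1.0 + delta / gamma) - delta / (delta + gamma))

num = weight_integral(1.0, 1e-3)
exact = closed_form(1.0, 1e-3)
```

The quadrature and the closed form agree, and evaluating `closed_form` for decreasing values of `gamma` exhibits the logarithmic growth.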
Using the notation of Theorem \ref{th:flatpartbound}, we will first prove the following upper bound, which does not require \eqref{eq:ineq1}:
\begin{proposition}
\label{prop:UpperBound}
Assume that $\psi:[-\ell,\ell]\times[0,2\delta]\rightarrow \mathbb{R}$ is a convex function satisfying \eqref{eq:convexfunc}.
Then there exists a universal constant $C>0$ such that the following inequality holds for all $\gamma\geq \eps/K$:
\begin{equation}\label{eq:upperbd}
\int_{R_{\delta}\times R_{\delta}} |T_\perp(y)-T_\perp(x)|\, \varphi(x,y) \, dy \, dx
\leq C\, K\, \ell^{2}\,\left( \left[\log\left(1+\frac{\delta}{\gamma}\right)\right]^{1/2} + 1 \right),
\end{equation}
where we recall that $K $ and $\eps $ satisfy \eqref{eq:opsi} and \eqref{eq:oeps}.
\end{proposition}
The proof of this upper bound is fairly straightforward (see Section \ref{sec:upper}) and only makes use of the convexity of $\psi$ and Lemma \ref{lemma:SubdifferentialBound}.
Next, we will prove the following lower bound for \eqref{eq:quantityToBound}:
\begin{proposition}
\label{prop:LowerBound}
Let $\psi$ be a convex function satisfying \eqref{eq:ineq1} for some measures $\mu$ and $\nu$ satisfying Assumption~\ref{ass:measures}.
Assume further that $\psi$ satisfies \eqref{eq:convexfunc}. There exists a universal constant $C$ such that, assuming that $\ell$ satisfies \eqref{eq:ellcond}, which we recall is
\[
\ell\geq 2\,h_1,\quad \ell^{2}\geq C\,K\,\lambda_1\,\lambda_2\,h_2,
\]
and defining
\begin{equation}\label{eq:gamma}
\gamma:=\max \left( \frac{\eps }{K},\;2\,h_1, \;\frac{\ell h_2}{C\,K }\right),
\end{equation}
then the following inequality holds
\begin{equation}\label{eq:LowerBound}\begin{split}
&\int_{R_{\delta}\times R_{\delta}} |T_\perp(y)-T_\perp(x)|\, \varphi (x,y)\, dy\, dx \\
&\qquad \qquad\geq
\frac{\ell^{4}}{C\,\lambda_1\,\lambda_2\,K }\,\left(1\wedge\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^2 }\right)\,\log \left(\frac{1}{2}+\frac{\delta}{2\,\gamma}\right),\\
\end{split}
\end{equation}
provided $\gamma <\delta$ and where we recall the notation $a\wedge b=\min(a,\,b)$.
\end{proposition}
The proof of this proposition, which is presented in Section \ref{sec:lower}, is more delicate.
This is where we use the fact that $\psi$ satisfies the Monge-Amp\`ere-like condition \eqref{eq:ineq1} with measures $\mu$ and $\nu$ satisfying~\eqref{eq:MeasuresConditions}.
\medskip
\begin{proof}[Proof of Theorem \ref{th:flatpartbound}]
The key to conclude the proof of Theorem \ref{th:flatpartbound} is that the bounds provided by Propositions \ref{prop:UpperBound} and \ref{prop:LowerBound} scale differently in $\ell$ and $\gamma$. Combining the two will hence naturally lead either to an upper bound on $\ell$ or to a lower bound on $\gamma$. More precisely we directly obtain from \eqref{eq:upperbd} and \eqref{eq:LowerBound} that
\[
\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}}\,\left(1\wedge\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}}\right)\,\log \left(\frac12+\frac{\delta}{2\,\gamma}\right)\leq C\, \left( \left[\log \left(1+\frac{\delta}{\gamma}\right) \right]^{1/2} + 1 \right).
\]
Since we assumed in the theorem that $\delta\geq 2\,\gamma$, we have $\log \left(1+\frac{\delta}{\gamma}\right)\leq C\,\log \left(\frac12+\frac{\delta}{2\,\gamma}\right)$ so that we can simplify the inequality above to:
\begin{equation}\label{eq:ghgj}
\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}\, }\,\left(1\wedge\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^2}\right)\,\left[\log \left(1+\frac{\delta}{\gamma}\right) \right]^{1/2}\leq C \left( 1 + \left[\log \left(1+\frac{\delta}{\gamma}\right) \right]^{-1/2} \right) \leq C.
\end{equation}
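For the reader's convenience, the comparison of logarithms used above is elementary: when $\delta\geq 2\,\gamma$ we have $\frac{1}{2}+\frac{\delta}{2\,\gamma}\geq \frac{3}{2}$, so that
\[
\log\left(1+\frac{\delta}{\gamma}\right) = \log 2 + \log\left(\frac{1}{2}+\frac{\delta}{2\,\gamma}\right) \leq \left(1+\frac{\log 2}{\log (3/2)}\right)\, \log\left(\frac{1}{2}+\frac{\delta}{2\,\gamma}\right).
\]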
Moreover we also get $\log \left(1+\frac{\delta}{\gamma}\right)\geq \log 3$ (still using the assumption that $\delta\geq 2\,\gamma$) so \eqref{eq:ghgj} gives
\[
\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}} \left( 1 \wedge \frac{\ell^2}{\lambda_1 \lambda_2 K^2} \right) \leq C [\log 3]^{-1/2},
\]
Since the left-hand side equals $\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}} \wedge \left(\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}}\right)^2$, this forces $\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}}\leq C$, and therefore
\[
\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}} \wedge \left(\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}} \right)^2 \geq \frac{1}{C} \left(\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}} \right)^2
\]
for some (different) constant $C$.
Together with \eqref{eq:ghgj}, this finally implies
$$
\left(\frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^2}\right)^2\,\left[\log \left(1+\frac{\delta}{\gamma}\right) \right]^{1/2}\leq C
$$
which completes the proof of Theorem \ref{th:flatpartbound}.
\end{proof}
\subsection{Upper bound: Proof of Proposition \ref{prop:UpperBound}}\label{sec:upper}
\begin{proof}[Proof of Proposition \ref{prop:UpperBound}]
We first assume that $\psi$ is $C^2$ so that all the computations below make sense.
We can write
\begin{align}
& \int_{R_{\delta}\times R_{\delta}} |T_\perp(y)-T_\perp(x)|\, \varphi(x,y) \, dy\, dx \nonumber \\
&\qquad\qquad = \int_{R_{\delta}\times R_{\delta}} \left|\int_{0}^{1} \nabla T_\perp (x+t(y-x))\cdot (y-x) \,dt\, \right| \varphi(x,y)\, dy\, dx\nonumber \\
&\qquad\qquad \leq \int_{R_{\delta}\times R_{\delta}} \int_{0}^{1} \left|\partial_\parallel T_\perp (x+t(y-x))\cdot (y_\parallel-x_\parallel) \right|\, dt \, \varphi(x,y)\,dy\,dx\nonumber \\
&\qquad\qquad \quad + \int_{R_{\delta}\times R_{\delta}} \int_{0}^{1} \left|\partial_\perp T_\perp(x+t(y-x)) \cdot (y_\perp-x_\perp) \right|\, dt \, \varphi(x,y)\,dy\,dx,\nonumber
\end{align}
where $\partial_\parallel$ denotes the derivative with respect to the parallel component and $\partial_\perp$ the derivative with respect to the perpendicular component. Using the symmetry of the expression in $x$ and $y$, we have
\begin{align}
& \int_{R_{\delta}\times R_{\delta}} |T_\perp(y)-T_\perp(x)| \varphi(x,y) \, dy\, dx \nonumber \\
&\leq 2\,\int_{R_{\delta}\times R_{\delta}} \int_{1/2}^{1} \left|\partial_\parallel T_\perp (x+t(y-x))\cdot(y_\parallel-x_\parallel) \right|\, dt \, \varphi(x,y)\,dy\,dx\nonumber \\
&\qquad\qquad \quad + 2\,\int_{R_{\delta}\times R_{\delta}} \int_{1/2}^{1} \left|\partial_\perp T_\perp(x+t(y-x))\cdot (y_\perp-x_\perp) \right|\, dt \, \varphi(x,y)\,dy\,dx.\label{eq:TT}
\end{align}
To bound the first term in the right-hand side,
we note that by definition of $R_\delta$, $|y_\parallel-x_\parallel|\leq \ell$ so that
using the change of variable $y\to z=x+t(y-x)$
\begin{align}
& \int_{R_{\delta}\times R_{\delta}} \int_{1/2}^{1} \left|\partial_\parallel T_\perp (x+t(y-x))\cdot (y_\parallel-x_\parallel) \right|\, \varphi(x,y)\, dt \,dy\,dx\nonumber \\
&\qquad\qquad \leq \ell\, \int_{R_{\delta}} \int_{1/2}^{1} \int_{R_{\delta}} \left|\partial_\parallel T_\perp (x+t(y-x)) \right| \, \varphi(x,y)\,dy\, dt\,dx \nonumber\\
&\qquad\qquad \leq \ell\, \int_{R_{\delta}} \int_{1/2}^{1} \int_{R_{\delta}} \left|\partial_\parallel T_\perp (z) \right|\, \mathbf{1}_{x+\frac{z-x}{t}\in R_{\delta}} \varphi\left(x,x+\frac{z-x}{t}\right)\frac{1}{t^2}\,dz\, dt\,dx \nonumber \\
& \qquad\qquad \leq \ell \int_{R_{\delta}} \left|\partial_\parallel T_\perp (z) \right| J_1(z) \,dz.\label{eq:term1}
\end{align}
Using the definition of $\varphi(x,y)$ (see \eqref{eq:weight})
and the notation
\[
\Omega_{x_\perp} =\left\{ y \in [-\ell/2,\ell/2] \times [0, \delta] ;\; \frac{1}{2}\, x_\perp \leq y_\perp \leq 2\,x_\perp \right\},
\]
we get that the weight $J_1(z)$ is equal to
\begin{align}
J_1(z) & = 2\,\int_{1/2}^{1} \int_{R_{\delta}} {\bf 1}_{x+\frac{z-x}{t}\in R_{\delta}} \varphi\left(x,x+\frac{z-x}{t}\right)\,dx \, dt\nonumber\\
& =
2\, \int_{1/2}^{1} \int_{R_{\delta}} \frac{1}{(|x_\perp|+\gamma)^{2}} {\bf 1}_{x+\frac{z-x}{t}\in \Omega_{x_\perp} } \,dx \, dt .
\nonumber
\end{align}
Observe that the definition of $\Omega_{x_\perp}$ is actually symmetric on $R_\delta$: $y\in \Omega_{x_\perp}$ iff $x\in \Omega_{y_\perp}$ since $x_\perp \geq 0$. Consequently $z\in \Omega_{x_\perp}$ implies that $x\in \Omega_{z_\perp}$ as $x\in R_\delta$ and
\[\begin{split}
J_1(z)&\leq \frac{C}{(|z_\perp|+\gamma)^{2}}\,\int_{1/2}^{1} \int_{R_{\delta}} {\bf 1}_{x+\frac{z-x}{t}\in \Omega_{x_\perp} } \,dx \, dt\leq \frac{C}{(|z_\perp|+\gamma)^{2}}\,\int_{\Omega_{z_\perp}}\,dx\\
&\leq \frac{C}{(|z_\perp|+\gamma)^{2}}\,\ell\,|z_\perp|\leq \frac{C\,\ell}{(|z_\perp|+\gamma)} .
\end{split}
\]
Going back to \eqref{eq:term1}, we find
\begin{equation}\label{eq:term11}
\int_{R_{\delta}\times R_{\delta}} \int_{1/2}^{1} \left|\partial_\parallel T_\perp (x+t(y-x))\cdot(y_\parallel-x_\parallel) \right|\, \varphi(x,y)dt \,dy\,dx \leq C\, \ell^2\, \int_{R_{\delta}}\frac{ \left|\partial_\parallel T_\perp (z) \right| }{(|z_\perp|+\gamma)} \,dz.
\end{equation}
Next, we note that the convexity of $\psi$ implies that the matrix
\[
\left[\begin{matrix}
&\partial_\parallel T_\parallel &\partial_\perp T_\parallel\\
&\partial_\parallel T_\perp &\partial_\perp T_\perp\\
\end{matrix}\ \right]
\]
is symmetric and non-negative with a non-negative determinant:
$\partial_\parallel T_\parallel (z) \partial_\perp T_\perp(z) - \partial_\parallel T_\perp \partial_\perp T_\parallel \geq 0$, which implies that $|\partial_\parallel T_\perp(z)| \leq |\partial_\parallel T_\parallel(z)|^{1/2}\,|\partial_\perp T_\perp(z)|^{1/2}$.
This lets us deduce, together with the Cauchy--Schwarz inequality, that
\begin{align}
\int_{R_{\delta}} \frac{ |\partial_\parallel T_\perp(z)|}{|z_\perp|+\gamma} dz & \leq \int_{R_{\delta}} \frac{|\partial_\parallel T_\parallel |^{1/2}}{(|z_\perp|+\gamma)} |\partial_\perp T_\perp(z)|^{1/2} \, dz\nonumber \\
& \leq \left[\int_{R_{\delta} } \frac{|\partial_\parallel T_\parallel(z)|}{(|z_\perp|+\gamma)^2} dz \right]^{1/2} \left[\int_{R_{\delta} } |\partial_\perp T_\perp(z)|\, dz\right]^{1/2}\label{nabla'Td} .
\end{align}
Using the fact that $\partial_\parallel T_\parallel \geq 0$ from the convexity of $\psi$,
\begin{align}
\int_{R_{\delta}} \frac{ \partial_\parallel T_\parallel (z) }{(z_\perp + \gamma)^2}\, d z & = \int_{0}^{ \delta} \frac{T_\parallel (\ell/2, z_\perp) - T_\parallel (-\ell/2, z_\perp)}{(z_\perp + \gamma)^2}\, d z_\perp \leq \frac{C}{\ell} \int_{0}^{\delta} \frac{K\, z_\perp + \eps}{(z_\perp + \gamma)^2}\, d z_\perp \nonumber\\
& \leq \frac{C\, K}{\ell} \int_{0}^{\delta} \frac{1}{z_\perp +\gamma}\, d z_\perp,
\label{partialjTj}
\end{align}
by using Lemma \ref{lemma:SubdifferentialBound} and the fact that $\gamma\geq \eps/K$.
Similarly, we have that $\partial_\perp T_\perp \geq 0$ so that
\begin{equation}
\int_{R_{\delta}} |\partial_\perp T_\perp (z)| dz \leq \int_{-\ell/2}^{\ell / 2} [T_\perp (z_\parallel, \delta)- T_\perp (z_\parallel, 0)] d z_\parallel \leq 2 K \ell.
\label{partialdTd}
\end{equation}
Combining \eqref{partialjTj} and \eqref{partialdTd} into \eqref{nabla'Td} and inserting the result into \eqref{eq:term11}, we conclude that
\begin{equation}\label{eq:term111}
\int_{R_{\delta}\times R_{\delta}} \int_{1/2}^{1} \left|\partial_\parallel T_\perp (x+t(y-x))\cdot(y_\parallel-x_\parallel) \right| \varphi(x,y)\,dt \,dy\,dx \leq C\, \ell^{2}\, K \,\left[ \int_{0}^{\delta} \frac{1}{z_\perp + \gamma}\, d z_\perp \right]^{1/2},
\end{equation}
which gives a bound for the first term in the right-hand side of \eqref{eq:TT}.
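The logarithmic factor in \eqref{eq:upperbd} originates in the explicit computation
\[
\int_{0}^{\delta} \frac{d z_\perp}{z_\perp + \gamma} = \log\left(\frac{\delta+\gamma}{\gamma}\right) = \log\left(1+\frac{\delta}{\gamma}\right).
\]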
\medskip
We now proceed similarly to bound the second term in the right-hand side of \eqref{eq:TT}. First we write, recalling that $ \partial_\perp T_\perp\geq 0$,
\begin{align*}
& \int_{R_{\delta}\times R_{\delta}} \int_{1/2}^{1} \left| \partial_\perp T_\perp(x+t(y-x))\,(y_\perp-x_\perp) \right|\,dt\, \varphi(x,y)\,dy\,dx\\
&\qquad\qquad \leq \int_{R_{\delta}} \int_{1/2}^{1} \int_{R_{\delta}} \partial_\perp T_\perp (x+t(y-x))\, |y_\perp-x_\perp|\, \varphi(x,y)\,dy\, dt\,dx. \nonumber
\end{align*}
Note that the definition of $\varphi$ in \eqref{eq:weight} implies that
\begin{align*}
|y_\perp-x_\perp|\, \varphi(x,y) & =\frac{ |y_\perp -x_\perp| }{(x_\perp+\gamma)^2} \mathbf{1}_{\{\frac{1}{2} x_\perp\leq y_\perp \leq 2 x_\perp \}} \\
& \leq \frac{1}{x_\perp +\gamma } \mathbf{1}_{\{\frac{1}{2} x_\perp \leq y_\perp \leq 2 x_\perp \}}.
\end{align*}
We perform the same change of variable $z=x+t(y-x)$ as after \eqref{eq:term1} to find that
\begin{align}
& \int_{R_{\delta}\times R_{\delta}} \int_{1/2}^{1} \left| \partial_\perp T_\perp (x+t(y-x))\,(y_\perp -x_\perp ) \right|\, dt\, \varphi(x,y)\,dy\,dx\nonumber \\
&\qquad\qquad \leq \int_{R_{\delta}} \int_{1/2}^{1} \int_{R_{\delta}} \partial_\perp T_\perp (z)\, \frac{1}{x_\perp +\gamma }\, \mathbf{1}_{x+\frac{z-x}{t}\in \Omega_{x_\perp}} \,dz\, dt\,dx \nonumber \\
& \qquad\qquad \leq \int_{R_{\delta}} \partial_\perp T_\perp (z)\, J_2(z) \,dz,\label{eq:term2}
\end{align}
with
\[
J_2(z) = 2\,\int_{1/2}^{1} \int_{R_{\delta}} \frac{1}{x_\perp +\gamma } \mathbf{1}_{x+\frac{z-x}{t}\in \Omega_{x_\perp }} \, dx\, dt.
\]
Proceeding as with the weight $J_1(z)$ above (the only difference lies in the power of $(x_\perp +\gamma)$), we use $\mathbf{1}_{x+\frac{z-x}{t}\in \Omega_{x_\perp}}\leq\mathbf{1}_{x\in \Omega_{z_\perp}}$ to find
\[
J_2(z)\leq \frac{C}{z_\perp +\gamma}\,|\Omega_{z_\perp}|\leq C\, \ell.
\]
Inserting this bound in \eqref{eq:term2}, we obtain
\begin{align}
& \int_{R_{\delta}\times R_{\delta}} \int_{1/2}^{1} \left| \partial_\perp T_\perp(x+t(y-x))\,(y_\perp-x_\perp) \right|\,dt \, \varphi(x,y)\,dy\,dx\nonumber \\
& \qquad \leq C\, \ell
\int_{R_{\delta}} \partial_\perp T_\perp (z)\,dz
= C\, \ell
\int_{-\ell / 2}^{\ell / 2} (T_\perp(z_\parallel,\delta)-T_\perp(z_\parallel,0))\, dz_\parallel \label{eq:secondtermreg}
\leq C\, K\, \ell^{2}.
\end{align}
Combining \eqref{eq:secondtermreg} and \eqref{eq:term111} in \eqref{eq:TT}, we obtain that
\begin{equation}
\int_{R_{\delta}\times R_{\delta}} |T_\perp(x)-T_\perp(y)|\,\varphi(x,y)\,dy\,dx\leq C\, K\, \ell^{2}\, \left( \left[\log\left(1+\frac{\delta}{\gamma}\right)\right]^{1/2} + 1 \right), \label{conclusionprop1}
\end{equation}
which proves the proposition when $\psi$ is $C^2$, so that $T$ is $C^1$.
\medskip
When $\psi$ is only convex but not $C^2$, we naturally introduce the convex function $\psi^\eta= \psi \star_x \rho_{\eta}$, where $\rho_{\eta}$ is a standard mollifier. We may then apply \eqref{conclusionprop1} to $\psi^\eta$ and find for $T^\eta=\nabla\psi^\eta$
\[
\int_{R_{\delta}\times R_{\delta}} |T_\perp^\eta(x)-T_\perp^\eta(y)|\,\varphi(x,y)\,dy\,dx\leq C\, K_\eta\, \ell^{2}\, \left( \left[\log\left(1+\frac{\delta}{\gamma}\right)\right]^{1/2} + 1\right),
\]
where we observe that, in this case, since we only integrate over $R_\delta$, $K_\eta$ is given by
\[
K_\eta=\sup_{R_\delta} |\nabla\psi^\eta|\leq \|\partial \psi\|_{L^\infty(R_{\delta})}\leq K,
\]
for $\eta<\delta$. At the same time, since $\psi$ is convex, $T=\nabla\psi$ belongs to $BV(R_{\delta})$ and therefore $\|T^\eta-T\|_{L^1(R_{\delta})}\to 0$ as $\eta\to 0$. Since $\varphi$ is bounded for any fixed $\gamma>0$, we may directly pass to the limit $\eta\to 0$ and obtain \eqref{conclusionprop1} for $T$.
\end{proof}
\subsection{Lower bound: Proof of Proposition \ref{prop:LowerBound}\label{sec:lower}}
We now turn to the proof of the lower bound \eqref{eq:LowerBound}. Given $x_\perp\in (0, \delta)$, we recall for convenience the definition of the set $\Omega_{x_\perp}$,
\[
\Omega_{x_\perp} =\left\{ y \in [-\ell / 2, \ell / 2] \times [0, \delta]\, ;\; \ \frac{1}{2}\, x_\perp \leq y_\perp \leq 2\, x_\perp \right\},
\]
together with the more restricted set
\[
\Lambda_{x_\perp} =\left\{ y \in [-\ell / 4, \ell / 4] \times [0, \delta]\, ;\; \ x_\perp \leq y_\perp \leq \frac32\, x_\perp \right\}.
\]
Since we are trying to show that $T_\perp(y)$ cannot be concentrated, instead of looking at $|T_\perp(y)-T_\perp(x)|$, we define, for $\xi \in \mathbb R$ and $\eta>0$, the set
\begin{equation}
\label{eq:subdifftoolarge}
\Omega_{x_\perp,\eta}=\left\{ y \in \Omega_{x_\perp} \,;\, |z_\perp-\xi| > \eta \mbox{ for all } z \in \partial \psi(y) \right\},
\end{equation}
which we sometimes denote by $\Omega_{x_\perp,\eta}(\xi)$ to emphasize its dependence on $\xi$.
Our first task, in Lemma \ref{lemma:etaExistence} below, is to show that for an appropriate value of $\eta$ and
for all $\xi\in \mathbb R$, the set $\Omega_{x_\perp,\eta}$ is non-empty, and more precisely $\Lambda_{x_\perp}\cap \Omega_{x_\perp,\eta}\neq\emptyset$. This will allow us to construct a half-cone within $\Omega_{x_\perp,\eta}$ in Lemma \ref{lemma:subdiffangle} and finally to obtain a lower bound for $|\Omega_{x_\perp,\eta}|$ in Lemma \ref{lemma:SetLowerBound}.
This will finally let us conclude the proof of Prop. \ref{prop:LowerBound} and obtain the lower bound \eqref{eq:LowerBound}.
\subsubsection{Non-emptiness of the set \texorpdfstring{$\Omega_{x_\perp,\eta}$}{}}
First we have the following lemma, which implies in particular that the set $\Omega_{x_\perp,\eta}(\xi)$ is not empty.
\begin{lemma}
\label{lemma:etaExistence}
Let $\psi$ be a convex function satisfying \eqref{eq:ineq1} for some measures $\mu$ and $\nu$ satisfying Assumption~\ref{ass:measures} and let $K$ satisfy \eqref{eq:defK}. There exists a universal constant $C$ such that defining
\begin{equation}\label{eq:etadef}
\eta := \frac{1}{C\, \lambda_1\, \lambda_2} \frac{ \ell^{2}}{K },
\end{equation}
and assuming furthermore that $\ell$ satisfies
\[
\ell \geq 2\, h_1,\quad \ell^{2} \geq C\, K\, \lambda_1\, \lambda_2\, h_2,
\]
then for all $x_\perp > \gamma=\max ( \frac{\eps }{K},\,2h_1,\, \frac{\ell h_2}{C\,K })$
and for all $\xi\in \mathbb R$, there is at least one point $y^* \in \Lambda_{x_\perp}$ such that for some $z \in \partial \psi(y^*)$ we have $|z_\perp-\xi| \geq 3 \eta$.
\end{lemma}
The idea of the proof is to look at the image of the set $\Lambda_{x_\perp}$ by the subdifferential $\partial\psi$.
By Lemma~\ref{lemma:SubdifferentialBound} this image
is bounded in the horizontal (i.e. $z_\parallel$) directions.
However, \eqref{eq:ineq1} together with Assumption \ref{ass:measures} gives a lower bound on the measure of this image, which is where the fact that $\psi$ is the Kantorovich potential for an optimal transportation problem is crucial.
Therefore the image cannot be too small in the vertical (i.e. $z_\perp$) directions, which is essentially the statement of Lemma \ref{lemma:etaExistence}.
The lower bounds on $\ell$ and $x_\perp$ in Lemma \ref{lemma:etaExistence} are necessary so that we can use \eqref{eq:MeasuresConditions} on the measures $\mu$ and $\nu$.
\begin{proof}[Proof of Lemma \ref{lemma:etaExistence}]
We start by noticing that for all $z\in \partial \psi (\Lambda_{x_\perp})$, Lemma~\ref{lemma:SubdifferentialBound} implies that
\[
|z_\parallel| \leq \frac{2}{\ell}\, \left(K\, \frac{3}{2}\,x_\perp +\eps \right) .
\]
This leads to defining the rectangle
\[
\tilde R= \left\{z \in \mathbb R^2\, ;\, |z_\parallel| \leq \max\left( \frac{2\,K }{\ell}\, \left(\frac{3}{2}\,x_\perp+\frac{\eps}{K } \right),h_2\right) \mbox{ and } |z_\perp-\xi| \leq \max(3\, \eta, h_2) \right\}.
\]
We show the existence of $y^*$ as in Lemma \ref{lemma:etaExistence} by contradiction: Suppose there is no such point $y^*$, then we must have that $\partial \psi (\Lambda_{x_\perp}) \subset \tilde R$ and inequality \eqref{eq:ineq1} gives
\begin{align}
\mu(\Lambda_{x_\perp}) \leq \nu(\partial\psi(\Lambda_{x_\perp})) \leq \nu(\tilde R\cap\partial\psi(\Omega) ).\label{eq:transportInequality}
\end{align}
We now want to use Assumption \ref{ass:measures} to estimate the left and right hand side of \eqref{eq:transportInequality}.
The rectangle $\Lambda_{x_\perp}$ has size $\left(\frac \ell 2\right)\times\frac {x_\perp} 2$.
Since $\ell\geq 2\,h_1$ and $x_\perp \geq \gamma \geq 2h_1$, the rectangle $\Lambda_{x_\perp}$ has size at least $h_1$ in all directions and Assumption \ref{ass:measures} (see \eqref{eq:MeasuresConditions}) implies that $\mu(\Lambda_{x_\perp})\geq |\Lambda _{x_\perp} | /\lambda_1$.
Similarly, the definition of the set $\tilde R$ guarantees that $\tilde R$ has size at least $h_2$ in all directions and so
$\nu(\tilde R\cap\partial\psi(\Omega)) \leq \lambda_2 |\tilde R|$. Equation~(\ref{eq:transportInequality}) thus yields
\begin{equation}\label{eq:LambdaB}
|\Lambda_{x_\perp}| \leq \lambda_1\, \lambda_2\, |\tilde R|.
\end{equation}
We now note that
\begin{equation}
|\Lambda_{x_\perp}| =C\,\ell \, x_\perp ,\label{LambdaB}
\end{equation}
while the assumption $x_\perp > \gamma$ with $\gamma\geq \frac{\eps}{K }$ and $\gamma\geq \ell\,h_2/C\,K$ implies
\begin{equation}
\begin{split}
|\tilde R| & = C \,\max\left( \frac{2K }{\ell} \left(\frac{3}{2}\, x_\perp +\frac{\eps}{K } \right),\;h_2\right)\, \left(\max (3\, \eta, h_2)\right)\\
&\leq C\, \left(\frac{ K }{\ell}\, x_\perp \right)\, \left(\max (3\, \eta, h_2)\right).
\end{split}
\label{intermed|B|}
\end{equation}
Together with the definition of $\eta$ this shows that
\begin{equation}
|\tilde R| \leq C\, \left(\frac{K}{\ell}\, x_\perp\right)\, \max \left(\frac{1}{C\, \lambda_1\, \lambda_2}\, \frac{\ell^2}{K},\; h_2\right).
\end{equation}
Equation (\ref{eq:LambdaB}) then proves that
\begin{equation}
C\, \ell\, x_\perp \leq \frac{\ell\, x_\perp}{C},
\end{equation}
which is a contradiction once $C$ is taken large enough, and this concludes the proof.
\end{proof}
\subsubsection{Lower Bound on \texorpdfstring{$|\Omega_{x_\perp,\eta}|$}{}}
We now show that the measure $|\Omega_{x_\perp,\eta}|$ is bounded from below.
We first need, as an intermediary result, the following lemma which only relies on the convexity of the function $\psi$. This lemma essentially states that if the subdifferentials at two points $y'$ and $y''$ are concentrated in the vertical direction around a common value $\xi$ and the segment $[y',\,y'']$ is almost vertical, then the subdifferential at any point of that segment also has to be concentrated.
We will later use this lemma together with Lemma \ref{lemma:etaExistence} to obtain contradictions and ensure the absence of concentration in the subdifferential over half a cone.
\begin{lemma} \label{lemma:subdiffangle}
Let $\psi$ satisfy \eqref{eq:convexfunc}, consider any $x_\perp\geq \gamma=\max\left(\frac{\eps}{K},\,2\,h_1,\, \frac{\ell\,h_2}{C\,K}\right)$ and fix any $\xi \in \mathbb{R}$.
Assume that $y',\;y'' \in \Omega_{x_\perp}$ are such that $y'\neq y''$ and
\[
\partial \psi (y') \cap \{ z \in \mathbb R^2\, ;\, |z_\perp-\xi| \leq \eta \} \neq \emptyset,\qquad \partial \psi (y'') \cap \{ z \in \mathbb R^2\, ;\, |z_\perp-\xi| \leq \eta \} \neq \emptyset.
\]
There exists a universal constant $C$ such that, if
\[
|\tan ((y',\;y''),\;e_\perp)| \leq \frac{\ell\, \eta}{C\, K\, x_\perp},
\]
where $((y',\,y''),\;e_\perp)$ denotes the angle between the segment $[y',\,y'']$ and the vertical direction $e_\perp$,
then for all $y=s\,y' +(1-s)\, y''$ with $s \in (0,1)$ we have
\[
\partial \psi (y) \subset \{ z \,;\, |z_\perp-\xi| \leq 2\, \eta \}.
\]
\end{lemma}
\begin{proof}
Take $z' \in \partial \psi(y') \cap \{ |z_\perp-\xi| \leq \eta \}$ and $y=s\, y' +(1-s)\, y''$ for some fixed $s \in (0,1)$. We can assume (without loss of generality) that $y'_\perp-y_\perp >0$ and $y''_\perp-y_\perp<0$.
For any $z\in \partial \psi (y)$, the convexity of $\psi$ implies (cyclical monotonicity of the sub-differential):
\[
(z'-z)\cdot (y'-y) \geq 0,
\]
and therefore
\[
(z'_\perp-z_\perp)\,(y'_\perp-y_\perp) \geq -(z'_\parallel-z_\parallel)(y'_\parallel-y_\parallel) \mbox{.}
\]
We hence deduce that
\[
z_\perp \leq z'_\perp +\frac{(z'_\parallel-z_\parallel)(y'_\parallel-y_\parallel) }{y'_\perp-y_\perp} \leq \xi+\eta +(|z'_\parallel|+|{z_\parallel}|)\,\frac{|y'_\parallel-{y}_\parallel| }{y'_\perp-y_\perp}\mbox{,}
\]
since $|z'_\perp-\xi|\leq\eta$.
Using now Lemma \ref{lemma:SubdifferentialBound}, we then get that
\begin{align*}
z_\perp & \leq \xi+\eta + \left[\frac{2}{\ell}\, \left(K\, |y'_\perp|+\eps \right)+\frac{2}{\ell}\, \left(K\, |y_\perp|+\eps \right)\right]\, \frac{|y'-y|}{y'_\perp-y_\perp}\\
& \leq \xi+\eta+\frac{C}{\ell}\,K\, x_\perp \, |\tan ((y'',y'),\,e_\perp)|\leq \xi +2\eta,
\end{align*}
by the definition of the tangent and where we used the fact that $x_\perp\geq \gamma\geq {\eps}/K$, that $y',\, y'' \in \Omega_{x_\perp}$ so $y\in \Omega_{x_\perp}$ and as a consequence $y'_\perp,\;y_\perp\leq 2\,x_\perp$.
Proceeding similarly using $y''$ instead of $y'$, we can get the inequality $z_\perp \geq \xi - 2\, \eta$ and the result follows.
\end{proof}
Using Lemmas \ref{lemma:etaExistence} and \ref{lemma:subdiffangle}, we can now get a lower bound on the measure of the set $\Omega_{x_\perp,\eta}(\xi)$ (which we recall is defined by \eqref{eq:subdifftoolarge}). This will be the key estimate in the proof of Proposition \ref{prop:LowerBound}.
%
\begin{lemma}
\label{lemma:SetLowerBound}
Let $\psi$ be a convex function satisfying \eqref{eq:ineq1} for some measures $\mu$ and $\nu$ satisfying Assumption~\ref{ass:measures}. Assume further that $\psi$ satisfies \eqref{eq:convexfunc}. Recall that $K$ satisfies \eqref{eq:defK} and that $\eta$ is defined by \eqref{eq:etadef}.
Assume furthermore that $\ell $ satisfies, for an appropriate universal constant $C$,
\[
\ell \geq 2 h_1,\quad \ell^{2} \geq C\,K\, \lambda_1\, \lambda_2\, h_2.
\]
Then, for all $x_\perp > \gamma=\max \left( \frac{\eps }{K},\;2\,h_1,\; \frac{\ell h_2}{C\,K }\right)$,
and for all $\xi\in \mathbb R$, one has the lower bound
\begin{equation}
\label{eq:OmegaLowerBound}
|\Omega_{x_\perp, \eta}(\xi)|\geq \frac{\ell\,x_\perp}{C}\,\min\left(1,\ \frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}}\right).
\end{equation}
\end{lemma}
\begin{proof}
Start by using Lemma \ref{lemma:etaExistence} to obtain the existence of a point $\tilde y \in \Lambda_{x_\perp}$ such that for some $\tilde z \in \partial \psi(\tilde y)$ we have $|\tilde z_\perp-\xi|\geq 3\, \eta$.
Define now $C_{\theta}$ as the cone (see figure \ref{fig:cones}) with vertex $\tilde y$ and angle $\theta$ with the vertical direction $e_\perp$, such that
\[
\tan \theta=\min\left(\displaystyle \frac{\ell}{2 x_\perp},\,\displaystyle \frac{\ell\, \eta}{C\, K x_\perp}\right).
\]
Define furthermore the truncated cone $S_\theta=\{y\in C_\theta\,|\; |y_\perp-\tilde y_\perp|\leq x_\perp/2\}$.
\begin{figure}[h]
\includegraphics[width=1\textwidth]{diagram.png}
\caption{Cones $C_{\theta}$ and $S_{\theta}$}
\label{fig:cones}
\end{figure}
We first observe that $S_\theta\subset \Omega_{x_\perp}$ as for any $y\in S_\theta$, we have first $\tilde y_\perp-x_\perp/2\leq y_\perp\leq \tilde y_\perp+x_\perp/2$ and hence $\frac{x_\perp}{2}\leq y_\perp\leq 2\,x_\perp$ since $x_\perp\leq \tilde y_\perp\leq \frac{3}{2}\,x_\perp$. Second, since $|\tilde y_\parallel|\leq \ell/4$, we have that
\[
|y_\parallel|\leq |\tilde y_\parallel|+|\tan \theta|\,|y_\perp-\tilde y_\perp|\leq \frac{\ell}{4}+|\tan \theta|\,\frac{x_\perp}{2}\leq \frac{\ell}{2},
\]
which is the reason for the condition $\tan \theta\leq \ell/(2\,x_\perp)$.
The next step is to use Lemma \ref{lemma:subdiffangle} to prove that $|\Omega_{x_\perp,\eta}\cap S_{\theta}|\geq |S_\theta|/2$. For this, consider any segment in the truncated cone $S_\theta$ passing through $\tilde y$, hence making an angle $\theta'\leq \theta$ with $e_\perp$. Denote by $L_{\theta'}^1$ and $L_{\theta'}^2$ the two halves of this segment on either side of $\tilde y$.
We can show that either $L_{\theta'}^1\subset \Omega_{x_\perp,\eta}$ or $L_{\theta'}^2\subset \Omega_{x_\perp,\eta}$. Indeed by contradiction, if this was not the case we would have some $y' \in L^1_{\theta'}\setminus \Omega_{x_\perp,\eta}$, $y'' \in L_{\theta'}^2\setminus \Omega_{x_\perp,\eta}$. By the definition of $\Omega_{x_\perp,\eta}$, there exists $z' \in \partial \psi(y')$ with $|z'_\perp-\xi|\leq \eta$ and similarly $z''\in \partial \psi(y'')$ with $|z''_\perp-\xi|\leq \eta$.
By definition,
\[ \tan(({y'},\,y''),\;e_\perp)=\tan\theta'\leq \tan\theta\leq \frac{\ell\,\eta}{C\,K\,x_\perp},
\]
and we can directly apply Lemma \ref{lemma:subdiffangle}. As $\tilde y$ is a convex combination of $y',\,y''$, this implies that $\partial\psi(\tilde y)\subset\{z\,|\;|z_\perp-\xi|\leq 2\,\eta\}$ contradicting the fact that $\tilde z\in \partial\psi(\tilde y)$ but $|\tilde z_\perp-\xi|\geq 3\,\eta$.
This proves as claimed that either $L_{\theta'}^1\subset \Omega_{x_\perp,\eta}$ or $L_{\theta'}^2\subset \Omega_{x_\perp,\eta}$; integrating over all possible segments with all possible angles, we deduce that $|\Omega_{x_\perp,\eta}\cap S_{\theta}|\geq |S_\theta|/2$.
To conclude the proof, it is hence enough to bound from below $|S_\theta|$,
\[
|S_\theta|=C \,x_\perp\,(x_\perp\,\tan\theta)\geq \frac{1}{C}\,x_\perp\,\ell\,\left(\min\left(1,\,\frac{\eta}{K}\right)\right).
\]
Using the definition of $\eta$ in \eqref{eq:etadef}, we eventually obtain that
\[
|S_\theta|\geq \frac{\ell\,x_\perp}{C}\,\min\left(1,\ \frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}\, }\right).
\]
\end{proof}
\subsubsection{Proof of Proposition \ref{prop:LowerBound}}
We now have all the estimates required to prove Proposition \ref{prop:LowerBound}:
\begin{proof}[Proof of Proposition \ref{prop:LowerBound}]
Using the definition of $\varphi$ as given in \eqref{eq:weight}, and the set $\Omega_{x_\perp,\eta}(\xi)$ introduced in \eqref{eq:subdifftoolarge}, we get
\begin{align*}
\int_{R_{\delta}\times R_{\delta}} |T_\perp(y)-T_\perp(x)|\, \varphi (x,y)\, dy\, dx
& = \int_{R_{\delta}} \frac{1}{(x_\perp+\gamma)^{2}}\, \int_{\Omega_{x_\perp}}|T_\perp(y)-T_\perp(x)| \, dy \, dx.\\
\end{align*}
Now fix $\xi=T_\perp(x)$ and calculate
\[
\int_{\Omega_{x_\perp}}|T_\perp(y)-T_\perp(x)|\,dy\geq \eta\,\int_{\Omega_{x_\perp,\eta}} dy=\eta\,|\Omega_{x_\perp,\eta}|.
\]
Observe that the assumptions on $\ell$ and the definition of $\gamma$ in Proposition \ref{prop:LowerBound} exactly coincide with Lemma~\ref{lemma:SetLowerBound}. Hence we may apply the lemma whenever $x_\perp>\gamma$ to find
\[
\begin{split}
&\int_{\Omega_{x_\perp}}|T_\perp(y)-T_\perp(x)|\,dy\geq
\frac{\ell\,x_\perp\,\eta}{C}\,\min\left(1, \ \frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}\, }\right)\\
&\qquad =\frac{\ell^{3}\,x_\perp}{C\,\lambda_1\,\lambda_2\,K }\,\min\left(1,\ \frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}}\right),\\
\end{split}
\]
by the definition of $\eta$.
This leads to
\begin{align*}
&\int_{R_{\delta}\times R_{\delta}} |T_\perp(y)-T_\perp(x)|\, \varphi (x,y)\, dy\, dx
\geq \int_{(R_{\delta}\cap\{x_\perp \geq \gamma\})\times R_{\delta}} |T_\perp(y)-T_\perp(x)|\, \varphi (x,y)\, dy\, dx\\
&\qquad\geq \frac{\ell^{3}}{C\,\lambda_1\,\lambda_2\,K }\,\min\left(1, \ \frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}\, }\right)\,\int_{R_{\delta}\cap\{ x_\perp \geq \gamma\}} \frac{dx}{(x_\perp+\gamma)}\\
&\qquad\geq \frac{\ell^{4}}{C\,\lambda_1\,\lambda_2\,K }\,\min\left(1, \ \frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}}\right)\,\int_\gamma^\delta \frac{dr}{r+\gamma},
\end{align*}
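Here the last integral is explicit:
\[
\int_\gamma^\delta \frac{dr}{r+\gamma} = \log\left(\frac{\delta+\gamma}{2\,\gamma}\right) = \log\left(\frac{1}{2}+\frac{\delta}{2\,\gamma}\right),
\]
which produces the logarithm in \eqref{eq:LowerBound}.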
and we may conclude that
\[\begin{split}
&\int_{R_{\delta}\times R_{\delta}} |T_\perp(y)-T_\perp(x)|\, \varphi (x,y)\, dy\, dx
\\
&\quad\geq \frac{\ell^{4}}{C\,\lambda_1\,\lambda_2\,K }\,\min\left(1,\ \frac{\ell^{2}}{\lambda_1\,\lambda_2\,K^{2}}\right)\,\log \left(\frac{1}{2}+\frac{\delta}{2\,\gamma}\right),
\end{split}
\]
which completes the proof.
\end{proof}
\section{Proof of Corollary \ref{cor:flat}}\label{sec:proofcor1}
\begin{proof}[Proof of Corollary \ref{cor:flat}]
We now have that $\eps=0$ and so
$$\gamma=\max \left\{ 2h_1,\frac{\ell h_2}{CK }\right\}.$$
If the length $\ell$ does not satisfy \eqref{eq:ellcond} then we have
\begin{equation}\label{eq:ellnotcond}
\mbox{either }\quad \ell < 2 h_1,\quad \mbox{ or } \quad \ell^2 < C \lambda_1 \lambda_2 \, K h_2,
\end{equation}
which gives the first two terms in \eqref{eq:ellbound}.
If $\ell$ satisfies \eqref{eq:ellcond}, then we note that
since $h_1 \leq \delta/4$, condition \eqref{eq:condellh2} implies
$ \gamma < \delta/2$
and so we can use Theorem \ref{th:flatpartbound} to find
\[
\left( \frac{ \ell^2}{C \lambda_1 \lambda_2 K^2}\right)^4 \ln \left(\frac{\gamma + \delta}{ \gamma}\right)
\leq 1.
\]
Setting $u:=\frac{\ell}{ K\sqrt {C \lambda_1 \lambda_2}}$, we rewrite this inequality as
\begin{equation}\label{eq:ineqflat1}
u^8 \ln \left(\frac{\gamma(u) + \delta}{\gamma(u)}\right)
\leq 1,
\end{equation}
where $\gamma(u)= \max \left\{ 2h_1,\sqrt { \frac {\lambda_1 \lambda_2} C} h_2 u \right\}$.
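For the reader's convenience, let us check by direct substitution that the second argument of the maximum is just a rewriting of $\frac{\ell\, h_2}{C\,K}$ in terms of $u$: since $\ell = u\, K\,\sqrt{C\,\lambda_1\,\lambda_2}$,
\[
\frac{\ell\, h_2}{C\,K} = \frac{u\, K\,\sqrt{C\,\lambda_1\,\lambda_2}\, h_2}{C\,K} = \sqrt{\frac{\lambda_1\,\lambda_2}{C}}\, h_2\, u.
\]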
When $\gamma(u)=2h_1\leq \delta/2 $, then \eqref{eq:ineqflat1} implies
\begin{equation}\label{eq:u1}
u \leq \left[\ln\left(\frac\delta{2 h_1}\right)\right]^{-1/8}.
\end{equation}
When $\gamma(u)= \sqrt { \frac {\lambda_1 \lambda_2} C} h_2 u $,
the assumption $ \frac{\sqrt{C \lambda_1 \lambda_2} h_2}{\delta} \leq 1$ implies $\gamma(u) \leq \frac{\delta u } {C}$ and so
the inequality \eqref{eq:ineqflat1} gives
$$
u^8 \ln \left(1+ \frac{C}{u}\right)\leq u^8 \ln \left(1+ \frac{\delta}{\gamma(u)}\right)\leq 1.
$$
Thus we obtain $u\leq C$
(since $C\geq 1$ and the map $u\mapsto u^8 \ln \left(1+ \frac{C}{u}\right)$ is increasing and unbounded).
It follows that $\gamma(u)= \sqrt { \frac {\lambda_1 \lambda_2} C} h_2 u \leq \sqrt{C\lambda_1\lambda_2} h_2$
and Inequality \eqref{eq:ineqflat1} then yields:
\begin{equation}\label{eq:u2}
u^8\leq \left[ \ln \left(1+ \frac{\delta}{\gamma(u)}\right)\right]^{-1}\leq
\left[ \ln \left(\frac{\delta}{\sqrt{C \lambda_1 \lambda_2} h_2}\right)\right]^{-1}.
\end{equation}
Inequalities \eqref{eq:u1} and \eqref{eq:u2} give the last two terms in \eqref{eq:ellbound},
which concludes the proof of this first corollary.
\end{proof}
\section{Proof of Corollary \ref{cor:regularity}}\label{sec:proofcor2}
Before proving Corollary \ref{cor:regularity}, we state the following lemma which is proved at the end of this section.
\begin{lemma}
\label{lemma:sub}
Let $\psi$ be a convex function on $\Omega$ and let $(x,x')\in \Omega \times\Omega $. Denote $\ell=|x-x'|$
and
\begin{equation}\label{eq:oeps2}
\eps = - \min_{t\in [0,1]} \left\{ \psi((1-t)x+tx') -[(1-t)\psi(x)+t\psi(x')] \right\}.
\end{equation}
Then, for any $z \in \partial \psi(x)$ and $z' \in \partial \psi (x')$, we have that
\begin{equation}
\label{eq:dualitybound}
|z-z'| \geq 2 \frac{\eps }{\ell} \mbox{.}
\end{equation}
\end{lemma}
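Although not needed for the argument, the bound \eqref{eq:dualitybound} can be sanity-checked numerically. The following Python sketch uses the hypothetical one-dimensional example $\psi(x)=x^2$ with $x=0$, $x'=1$ (all names are illustrative, not from the paper):

```python
# Hypothetical one-dimensional illustration of the lemma: psi(v) = v^2,
# x = 0, x' = 1, so that eps and the subgradients are explicit.
psi = lambda v: v * v
dpsi = lambda v: 2.0 * v              # the unique subgradient of psi here

x, xp = 0.0, 1.0
ell = abs(xp - x)

# eps = - min_t { psi((1-t)x + t x') - [(1-t)psi(x) + t psi(x')] }
ts = [i / 100000.0 for i in range(100001)]
eps = -min(psi((1 - t) * x + t * xp) - ((1 - t) * psi(x) + t * psi(xp))
           for t in ts)               # equals 1/4, attained at t = 1/2

z, zp = dpsi(x), dpsi(xp)             # z = 0, z' = 2
assert abs(eps - 0.25) < 1e-8
assert abs(z - zp) >= 2 * eps / ell   # 2 >= 0.5: the lemma's bound holds
```

Here $\eps=1/4$ and $|z-z'|=2\geq 2\eps/\ell=1/2$, with room to spare, consistent with the factor $4\eps$ obtained in the proof below.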
\begin{proof}[Proof of Corollary \ref{cor:regularity}]
We recall that
$$ x\in \partial\psi^*(z) \Longleftrightarrow z\in\partial\psi(x)$$
so we want to use \eqref{eq:flatpartbound2} to prove \eqref{eq:reg}. But in order to apply
\eqref{eq:flatpartbound2}, we first need to prove that the conditions of Theorem \ref{th:flatpartbound} are satisfied.
We will first prove that
\begin{equation}\label{eq:first}
\partial\psi^*(z) \subset \Omega^{\delta/2} \quad\forall z\in \partial \psi(\Omega^\delta).
\end{equation}
This is not obvious, since the definition of $\partial \psi(\Omega^\delta)$ only guarantees that there exists at least one
$\bar x\in \Omega^\delta$ such that $z\in\partial\psi(\bar x)$ (in other words
$ \partial\psi^*(z) \cap \Omega^{\delta}\neq \emptyset$).
We will prove \eqref{eq:first} by contradiction: Assume that there exists also $x\in \Omega\setminus \Omega^{\delta/2} $ such that
$z\in\partial\psi(x)$. Since $\psi$ is convex, this implies that $\psi$ must have a flat part along the segment $[\bar x,x]$. Indeed, the definition of the subdifferential implies that
$$ \psi(tx+(1-t)\bar x) \geq \psi(\bar x) +t z\cdot (x-\bar x) \qquad \forall t\in[0,1]$$
and
$$ \psi(tx+(1-t)\bar x) \geq \psi(x) +(1-t) z\cdot (\bar x-x)\qquad \forall t\in[0,1]$$
and a linear combination of those inequalities yields
$$
\psi(tx+(1-t)\bar x) \geq (1-t)\psi(\bar x) + t\psi(x)\qquad \forall t\in[0,1].
$$
The convexity of $\psi $ implies that we must have equality in this inequality.
After possibly replacing $x$ with the point $[\bar x,x] \cap \partial\Omega^{\delta/2}$, we deduce (since $\bar x\in \Omega^\delta$) that $\psi$ has a flat part of size at least $\delta/2$ in the set $\Omega^{\delta/2}.$
By Corollary \ref{cor:flat}, this is impossible if
$h_1\leq k_1(\delta)$ and $h_2\leq k_2(\delta)$ for some functions $k_1$, $k_2$ depending only on $\lambda_1\lambda_2$ and $L_\infty$.
This proves that \eqref{eq:first} must hold.
\medskip
Next, we use Theorem \ref{th:flatpartbound}. We denote $\ell = |x-x'|$ and
assume that $h_1\leq \delta$ and that $\ell$ satisfies:
\begin{equation}\label{eq:cond3}
\ell \geq 2 h_1,\quad \ell \geq \max\left\{ \sqrt{C \lambda_1 \lambda_2 \, K h_2 } , \frac{\lambda_1\lambda_2}{\delta} h_2\right\}
\end{equation}
Then $h_1$ and $h_2$ satisfy
\begin{equation}\label{eq:h1h2}
h_1\leq \min\left\{ \delta , \ell/2 \right\}, \quad h_2 \leq \min\left\{ \sqrt{\frac{\delta \ell}{\lambda_1\lambda_2}}, \frac{\ell^2}{C\lambda_1\lambda_2 \, K }\right\} .
\end{equation}
In particular, $h_1$ and $h_2$ satisfy \eqref{eq:ellcond}
and so we can apply Theorem \ref{th:flatpartbound} to get (see \eqref{eq:flatpartbound2})
\begin{equation}\label{eq:lhjk}
\max \left\{ \eps ,h_1 K , \ell h_2 \right\} \geq
\delta K \min\left\{ \frac{1}{\exp \left( \frac{C^4 \lambda_1^4 \lambda_2^4 K^8}{ \ell^8} \right) -1} , 1\right\}.
\end{equation}
Furthermore, under conditions \eqref{eq:h1h2} we can use
Lemma \ref{lem:opsibound} to write
\[
K \geq \left( \frac{\delta \ell}{\lambda_1\lambda_2}\right)^{1/2}.
\]
It follows that (recall that $D=\mathrm{diam}\, \Omega_1$),
\[
\frac{C^4 \lambda_1^4 \lambda_2^4 K^8}{\ell^8} \geq C^4 \left( \frac \delta \ell\right)^4 \geq C^4 \left( \frac \delta D \right)^4
\]
and so
\[
\frac{1}{\exp \left( \frac{C^4 \lambda_1^4 \lambda_2^4 K^8}{ \ell^8} \right) -1} \leq
\frac{1}{\exp \left( C^4 \left( \frac \delta D \right)^4 \right) -1}.
\]
We deduce (using \eqref{eq:lhjk}) that there exists a constant $C_0$, depending on $D$, $\delta$ and the dimension such that
\[
\max \left\{ \eps , K h_1 , D h_2 \right\} \geq
\frac {1} {C_0} \frac{K}{\exp \left( \frac{C^4 \lambda_1^4 \lambda_2^4 K^8}{ \ell^8} \right)-1}.
\]
We now observe the following elementary fact:
\begin{equation}
\forall a>0,\qquad u\mapsto \frac{u}{\exp \left( a u^8 \right) -1}\ \mbox{is monotone decreasing in } u.\label{stat:ospilin}
\end{equation}
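This elementary fact is easy to verify numerically; the following sketch checks the strict decrease of $u\mapsto u/(\exp(a\,u^8)-1)$ on a grid for a few arbitrary values of $a$:

```python
import math

# Numerical sanity check of the elementary fact: for any a > 0, the map
# u -> u / (exp(a * u^8) - 1) is decreasing in u (values of a arbitrary;
# the u-range is kept moderate to avoid overflow of exp).
def g(u, a):
    return u / math.expm1(a * u**8)   # expm1(v) = exp(v) - 1, accurate near 0

for a in (0.1, 1.0, 10.0):
    us = [0.1 + 0.002 * i for i in range(700)]   # u in [0.1, 1.498]
    vals = [g(u, a) for u in us]
    assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))
```

The analytic reason is that $e^v-1\leq v\,e^v$ for $v=a\,u^8$, which makes the derivative of the map negative for every $u>0$.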
This implies in particular that for all $\ell$ we have
\[
\frac{K }{\exp \left( \frac{C^4 \lambda_1^4 \lambda_2^4 K^8}{ \ell^8} \right) -1}
\geq \frac{L_\infty }{\exp \left( \frac{C^4 \lambda_1^4 \lambda_2^4 L_\infty^8}{ \ell^8} \right) -1},
\]
and so
\begin{equation}\label{eq:bbnm}
\max \left\{ \eps , K h_1 , D h_2 \right\} \geq
\sigma(\ell) : = \frac {1} {C_0} \frac{L_{\infty} }{\exp \left( \frac{C^4 \lambda_1^4 \lambda_2^4 L_{\infty}^8}{ \ell^8} \right)-1},
\end{equation}
where the function $\sigma(\ell) $ is monotone increasing and satisfies $\lim_{\ell\to 0^+} \sigma(\ell)=0$.
In other words
\begin{equation*}\label{eq:eoo}
\mbox{either } \frac{\eps}{\ell} \geq \frac{\sigma(\ell)}{\ell} \mbox{ or } K h_1 \geq \sigma(\ell)
\mbox{ or } D h_2 \geq \sigma(\ell).
\end{equation*}
Since both functions $\ell \mapsto \sigma(\ell)$ and $\ell \mapsto \frac{\sigma(\ell)}{\ell}$
are monotone increasing (for the second one, this is a consequence of \eqref{stat:ospilin} again),
we can introduce their inverses $\sigma_1$ and $\sigma_2$.
The conditions above
are then equivalent to
\[
\ell \leq \max\left\{ \sigma_2\left(\frac{\eps}{\ell}\right), \sigma_1(Kh_1),\sigma_1(D h_2)\right\}.
\]
Combining this with \eqref{eq:cond3}, we deduce that for all $\ell>0$ we have
\[
\ell
\leq \max \left\{ \sigma_2\left(\frac{\eps}{\ell} \right) , \max\left\{ \sigma_1(K h_1), 2h_1\right\} , \max \left\{ \sigma_1( D h_2), \sqrt{C \lambda_1 \lambda_2 \, L_\infty h_2},\frac{\lambda_1\lambda_2}{\delta} h_2 \right\} \right\}
\]
and the general result follows, recalling that $\ell=|x-x'|$ and that Lemma \ref{lemma:sub} gives
$|z-z'| \geq 2 \frac{\eps }{\ell}$.
It remains to treat the special case $h_1=h_2=0$, where we immediately obtain that
\[
|x-x'|\leq \sigma_2(|z-z'|),
\]
proving that for any given $z$ the sub-differential of $\psi^*$ is always reduced to one point (take $z=z'$ and any $x,\;x'\in \partial\psi^*(z)$). Consequently $\psi^*$ is $C^1$ as claimed.
To bound $\sigma_2$, we trivially observe that
\[
\frac{u}{\exp(a\,u^8)-1}\leq 2\,\frac{a^{-1/8}}{\exp(a\,u^8/2)-1}.
\]
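This inequality can likewise be checked numerically; in the sketch below, the grids and the values of $a$ are arbitrary:

```python
import math

# Numerical check of the elementary bound used to estimate sigma_2:
#   u / (exp(a u^8) - 1)  <=  2 a^{-1/8} / (exp(a u^8 / 2) - 1).
# After the substitution v = a^{1/8} u it reduces to v <= 2 (exp(v^8/2) + 1),
# which clearly holds for every v > 0.
for a in (0.5, 1.0, 2.0):
    for i in range(1, 300):
        u = 0.005 * i                      # u in (0, 1.495]
        lhs = u / math.expm1(a * u**8)
        rhs = 2.0 * a**(-0.125) / math.expm1(a * u**8 / 2.0)
        assert lhs <= rhs
```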
Consequently for some numerical constant $\tilde C$
\[
\sigma_2^{-1}(\ell)=2\,\frac{\sigma(\ell)}{\ell}\geq \frac {1} {\tilde C} \frac{\lambda_1^{-1/2}\,\lambda_2^{-1/2}}{\exp \left( \frac{\tilde C^4 \lambda_1^4 \lambda_2^4 L_{\infty}^8}{ \ell^8} \right)-1}.
\]
Therefore for some $\tilde C$
\[
\sigma_2(w)\leq \tilde C\,\frac{\sqrt{\lambda_1\,\lambda_2}\,L_\infty}{\left(\log\left(1+\frac{1}{\tilde C\,\sqrt{\lambda_1\,\lambda_2}\,w}\right)\right)^{1/8}},
\]
which concludes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:sub}]
By definition of $\eps $, there exists $y =(1-t)x+tx'$ for some $t \in (0,1)$ such that
\begin{equation}
\label{eq:affinemin}
\psi(y) = (1-t) \psi(x)+t \psi(x') - \eps.
\end{equation}
By the definition of the subdifferential, since $z \in \partial \psi(x)$ and $z' \in \partial \psi (x')$, and since $y-x=t(x'-x)$ and $y-x'=-(1-t)(x'-x)$, we have that
\begin{align}
\label{eq:subdiffmin}
\psi(y) & \geq \psi(x) + t\, z \cdot (x'-x) \\
\psi(y) & \geq \psi(x') - (1-t)\, z' \cdot (x'-x) \mbox{.}\nonumber
\end{align}
Plugging (\ref{eq:affinemin}) into the inequalities (\ref{eq:subdiffmin}) yields
\begin{align*}
z \cdot (x'-x) & \leq \psi(x')- \psi(x) -\frac{\eps }{t} \\
z' \cdot (x'-x) & \geq \psi(x')-\psi(x) + \frac{\eps }{1-t} \mbox{,}
\end{align*}
so that finally, subtracting the first inequality from the second and using $\frac{1}{t}+\frac{1}{1-t}\geq 4$ for $t\in(0,1)$, we get
\begin{equation*}
\ell\, |z-z'| \geq (z'-z) \cdot (x'-x) \geq \frac{\eps }{t}+ \frac{\eps }{1-t} \geq 4 \eps \mbox{,}
\end{equation*}
which gives $|z-z'|\geq 4\eps/\ell\geq 2\eps/\ell$, proving \eqref{eq:dualitybound},
\end{proof}
\section{Proof of Theorem \ref{thm:optimal}}\label{sec:proof}
Theorem \ref{thm:optimal} follows from the following result together with well known facts from the theory of optimal transportation:
\begin{theorem}\label{thm:sub}
Let $\mu,\;\nu$ be two probability measures on $\mathbb R^n$.
Assume $\Gamma\subset \mathbb R^{2n}$ is a closed set such that for any Borel set $O$ there holds:
\begin{equation}\begin{split}
&\mu(O)\leq \nu\left(\{y\in \mathbb R^n\,|\;\exists x\in O,\ (x,y)\in \Gamma\}\right),\\
&\nu(O)\leq \mu\left(\{x\in \mathbb R^n\,|\;\exists y\in O,\ (x,y)\in \Gamma\}\right).
\end{split}\label{assumemunu}
\end{equation}
Then there exists $\pi\in\Pi(\mu,\nu)$ concentrated on $\Gamma$ (that is $\mathrm{Supp}(\pi) \subset \Gamma$).
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:optimal}]
We note that when $\Gamma = \mathrm{Graph}\,{\partial \psi}$ for some convex function $\psi$, then \eqref{assumemunu} is equivalent to the conditions \eqref{eq:ineq1} and \eqref{eq:ineq2}.
Theorem \ref{thm:sub} thus implies that there exists $\pi\in\Pi(\mu,\nu)$ such that $\mathrm{Supp}(\pi) \subset \Gamma$.
The result then follows from a classical result of measure theory (see for example Theorem 2.12 in \cite{villani2003topics}).
\end{proof}
We now turn to the proof of Theorem \ref{thm:sub}. The first step is to prove the result when $\mu$ and $\nu$ are both sums of Dirac masses with identical mass:
$$ \mu = \frac 1 N \sum_{i=1}^N \delta_{x_i}, \quad \nu = \frac 1 N \sum_{j=1}^N \delta_{y_j}.$$
In that case, we define the $N\times N$ matrix $A=(A_{ij})$ by
$$ A_{ij} =
\left\{
\begin{array}{ll}
1 & \mbox{ if } (x_i,y_j)\in \Gamma, \\
0 & \mbox{ otherwise}
\end{array}
\right.
$$
and Theorem \ref{thm:sub} is equivalent to
\begin{proposition}\label{prop:sub1}
Let $A$ be an $N\times N $ matrix with $A_{ij}\in \{0,\;1\}$ and such that for any $I,\, J$ subsets of $\{1,\ldots,N\}$, we have
\begin{equation}
|I|\leq |\{j\,|\;\sum_{i\in I} A_{ij}>0\}|,\quad |J|\leq |\{i\,|\;\sum_{j\in J} A_{ij}>0\}|. \label{enoughmass}
\end{equation}
Then there exists a stochastic matrix $\pi=(\pi_{ij})$ such that
$\pi_{ij} \in \{0,1\}$, $\sum_j \pi_{ij}=1$, $\sum_i \pi_{ij}=1$ and satisfying $\pi_{ij}=0$ whenever $A_{ij}=0$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:sub1}]
The proof proceeds by induction on $N$, the result being obvious for $N=1$.
We distinguish two cases, according to whether or not there exists a strict subset $I_0$ or $J_0$ for which equality holds in \eqref{enoughmass}.
\bigskip
\noindent {\bf Case 1:}
We assume that for all strict subsets $I$ or $J$ of $\{1,\dots,N\}$, we have a strict inequality in \eqref{enoughmass}, that is
\[
|I|< |\{j\,|\;\sum_{i\in I} A_{ij}>0\}|,\quad |J|< |\{i\,|\;\sum_{j\in J} A_{ij}>0\}|.
\]
We then choose any $i_0,\,j_0$ such that $A_{i_0j_0}=1$. Up to relabeling the rows and columns of $A$, we can always assume that we can take $i_0=j_0=N$ and we consider the $(N-1)\times (N-1)$ matrix $\tilde A$ consisting of the first $N-1$ rows and columns of $A$.
We claim that $\tilde A$ satisfies \eqref{enoughmass}:
Indeed, for any nonempty $I\subset\{1,\ldots,N-1\}$ (the case $I=\emptyset$ being trivial), the strict inequality assumed in Case 1 gives
\begin{align*}
|I|
& \leq |\{j\in \{1,\ldots,N\}\,|\;\sum_{i\in I} A_{ij}>0\}|-1\\
& \leq |\{j\in \{1,\ldots,N-1\}\,|\;\sum_{i\in I} A_{ij}>0\}|\\
& =|\{j\,|\;\sum_{i\in I} \tilde A_{ij}>0\}|,
\end{align*}
and a similar inequality holds for $J\subset\{1,\ldots,N-1\}$.
Therefore by induction, there exists a stochastic $(N-1)\times (N-1)$ matrix $\tilde \pi$ with $\tilde \pi_{ij} \in \{0,1\}$, $\tilde\pi_{ij}=0$ if $\tilde A_{ij}=0$ and $\sum_i \tilde\pi_{ij}=1$, $\sum_j \tilde\pi_{ij}=1$.
We can then define the matrix $\pi$ by setting $\pi_{ij}=\tilde \pi_{ij}$ if $i\leq N-1$ and $j\leq N-1$,
$\pi_{N N}=1$ and $\pi_{ij}=0$ otherwise. It is straightforward to check that $\pi$ satisfies all requirements.
\bigskip
\noindent {\bf Case 2:}
We assume that there exists a strict subset $I_0$ or $J_0$ of $\{1,\dots,N\}$ for which we have equality in \eqref{enoughmass}. For example, we assume that there is a strict subset $I_0$ such that
\begin{equation}\label{eq:I0}
|I_0|= |\{j\,|\;\sum_{i\in I_0} A_{ij}>0\}|
\end{equation}
and we denote
$J_0=\{j\,|\;\sum_{i\in I_0} A_{ij}>0\}$.
Since $|I_0|=|J_0|$, we can define the square matrices $P$ and $Q$ by
\[
P_{ij}=A_{ij}\,\mathbb{I}_{i\in I_0, j\in J_0},\quad Q_{ij}=A_{ij}\,\mathbb{I}_{i\in I_0^c, j\in J_0^c},
\]
and we are going to show that $P$ and $Q$ satisfy \eqref{enoughmass} on $I_0\times J_0$ and $I_0^c\times J_0^c$ respectively.
We start with $P$: for any $I\subset I_0$, \eqref{enoughmass} implies:
\[
|I|\leq |\{j\,|\;\sum_{i\in I} A_{ij}>0\}|.
\]
The definition of $J_0$ implies that $A_{ij}=0$ if $i\in I_0$ and $j\not\in J_0$. Since $I\subset I_0$, we can thus write:
\[
\{j\,|\;\sum_{i\in I} A_{ij}>0\}=\{j\in J_0\,|\;\sum_{i\in I} A_{ij}>0\}=\{j\in J_0\,|\;\sum_{i\in I} P_{ij}>0\},
\]
and we deduce that
\begin{equation}
|I|\leq |\{j\in J_0\,|\;\sum_{i\in I} P_{ij}>0\}| \qquad \mbox{ for all $I\subset I_0$}\label{enoughmassM1}
\end{equation}
which shows that $P$ satisfies the first inequality in \eqref{enoughmass}. To prove the second inequality, we proceed by contradiction: Assume that there is a subset $J\subset J_0$ such that
\[
|J|> |\{i\in I_0\,|\;\sum_{j\in J} P_{ij}>0\}| = |\{i\in I_0\,|\;\sum_{j\in J} A_{ij}>0\}|,
\]
then denote $\tilde I=\{i\in I_0\,|\;\sum_{j\in J} A_{ij}>0\}$ and define $I'=I_0\setminus \tilde I$, $J'=J_0\setminus J$. Since $|J|>|\tilde I|$, we have that
\begin{equation}\label{eq:I'}
|I'|=|I_0|-|\tilde I|>|I_0|-|J|=|J_0|-|J|=|J'|.
\end{equation}
Consider any $j'$ s.t. $\sum_{i\in I'} A_{ij'}>0$. Since $A_{ij'}=0$ for $i\in I_0$ and $j'\not \in J_0$, we have that $j'\in J_0$. Next, let $i'\in I'$ be such that $A_{i'j'}>0$ (which exists by the choice of $j'$). Since $i'\in I'= I_0\setminus \tilde I$, the definition of $\tilde I$ implies that $A_{i'j}=0$ for all $j\in J$. So we must have $j'\not\in J$. We thus have $j'\in J'$.
We just proved that
\[
\{j'\,|\;\sum_{i'\in I'} A_{i'j'}>0\}=\{j'\in J_0\,|\;\sum_{i'\in I'} A_{i'j'}>0\}\subset J'.
\]
Together with \eqref{eq:I'}, this implies that
\[
|I'|>|J'|\geq |\{j'\,|\;\sum_{i'\in I'} A_{i'j'}>0\}|,
\]
which contradicts \eqref{enoughmass}. We can then conclude that $P$ satisfies \eqref{enoughmass} on $I_0\times J_0$.
\medskip
We now turn to $Q$ and start by considering any $J\subset J_0^c$. We recall that $A_{ij}=0$ for $i\in I_0$ and $j\in J\subset J_0^c$ and since $\{1,\ldots,N\}=I_0\cup I_0^c$ we have:
\[
\{i\,|\;\sum_{j\in J} A_{ij}>0\}=\{i\in I_0^c\,|\;\sum_{j\in J} A_{ij}>0\}.
\]
Applying \eqref{enoughmass} on $A$ for $J$, we get
\[
|J|\leq |\{i\,|\;\sum_{j\in J} A_{ij}>0\}|=|\{i\in I_0^c\,|\;\sum_{j\in J} Q_{ij}>0\}|,
\]
proving the corresponding half of \eqref{enoughmass} for $Q$.
Next, for any $I\subset I_0^c$, we apply \eqref{enoughmass} with $\bar I=I_0\cup I$:
\[
|I_0|+|I|=|\bar I|\leq |\bar J|,\quad \bar J=\{j\,|\;\sum_{i\in \bar I} A_{ij}>0\}.
\]
We recall that $J_0=\{j\,|\;\sum_{i\in I_0} A_{ij}>0\}$ and denote
$J=\{j\in J_0^c\,|\;\sum_{i\in I} A_{ij}>0\}$.
We have $J\cup J_0 \supset \{j \,|\;\sum_{i\in I} A_{ij}>0\}$, and
since
\[
\bar J=\{j\,|\;\sum_{i\in I_0} A_{ij}>0\}\bigcup \{j\,|\;\sum_{i\in I} A_{ij}>0\},
\]
we have $\bar J=J_0\cup J$.
This implies that $|I_0|+|I|\leq |\bar J|\leq |J_0|+|J|$, that is
$$
|I|\leq |J|=|\{j\in J_0^c\,|\;\sum_{i\in I} A_{ij}>0\}|,
$$
which completes the proof that $Q$ satisfies \eqref{enoughmass}.
\medskip
We can now complete the proof:
Since $I_0\neq\emptyset$ and $I_0\neq \{1,\ldots,N\}$, $P$ and $Q$ have dimensions strictly less than $N$ and we may apply our induction assumption. This gives us $p_{ij}$ on $I_0\times J_0$ and $q_{ij}$ on $I_0^c\times J_0^c$, stochastic matrices,
$$
\ \sum_j p_{ij}=1\quad \forall i\in I_0,\qquad \sum_{j} q_{ij}=1\quad \forall i\in I_0^c,$$
$$
\ \sum_i p_{ij}=1\quad \forall j\in J_0,\qquad \ \sum_i q_{ij}=1 \quad \forall j\in J_{0}^c,
$$
with $p_{ij}=0$ if $P_{ij}=0$ and $q_{ij}=0$ if $Q_{ij}=0$. We simply extend $p$ and $q$ by 0 on the whole $\{1,\ldots,N\}^2$ and define $\pi=p+q$.
\end{proof}
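Condition \eqref{enoughmass} is precisely Hall's marriage condition for the bipartite graph with adjacency matrix $A$, so the matrix $\pi$ of Proposition \ref{prop:sub1} can also be computed constructively. The sketch below uses the standard augmenting-path (Kuhn) matching algorithm rather than the induction of the proof; the function name and the sample matrix are hypothetical:

```python
# Constructive sketch of Proposition prop:sub1 via augmenting-path matching:
# condition (enoughmass) is Hall's condition, so a permutation matrix pi
# supported inside the support of A exists.
def permutation_in_support(A):
    n = len(A)
    match = [-1] * n                 # match[j] = row currently assigned to column j

    def augment(i, seen):
        # Try to assign row i, re-routing previously matched rows if needed.
        for j in range(n):
            if A[i][j] and j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    for i in range(n):
        if not augment(i, set()):
            return None              # Hall's condition fails: no such pi
    pi = [[0] * n for _ in range(n)]
    for j, i in enumerate(match):
        pi[i][j] = 1
    return pi

A = [[1, 1, 0],
     [0, 1, 0],
     [0, 1, 1]]
pi = permutation_in_support(A)
assert all(sum(row) == 1 for row in pi)            # row-stochastic
assert all(sum(col) == 1 for col in zip(*pi))      # column-stochastic
assert all(pi[i][j] <= A[i][j] for i in range(3) for j in range(3))
```

This augmenting-path route runs in polynomial time, whereas the inductive argument of the proof is better suited to the measure-splitting used in the next proposition.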
Proposition \ref{prop:sub1} proves Theorem \ref{thm:sub} when the measures $\mu$ and $\nu$ are
both sums of Dirac masses with identical mass.
Our next step is to extend that result to general sums of Dirac masses.
\begin{proposition}\label{prop:sub2}
Assume that
\[
\mu=\sum_{i=1}^{M_1} m_i\,\delta_{x_i},\quad \nu=\sum_{j=1}^{M_2} n_j\,\delta_{y_j}
\]
for some points $x_1,\ldots,x_{M_1}$ and $y_1,\ldots, y_{M_2}$ in $\mathbb R^n$ and some (positive) masses $m_1,\ldots,m_{M_1}$, and $n_1,\ldots,n_{M_2}$.
Then Theorem \ref{thm:sub} holds.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:sub2}]
By splitting points if needed, we can always assume that $M_1=M_2=M$ and that $m_i, n_j>0$ for all $i$, $j$.
We can also assume that $\Gamma$ is concentrated on $\bigcup_{i,j} \{(x_i,y_j)\}$. For this reason, we define $\gamma$ as the set of indices $(i,j)$ s.t. $(x_i,y_j)\in \Gamma$. In that discrete setting, Assumption \eqref{assumemunu} is then equivalent to
\begin{equation}
\begin{split}
&\forall I\subset \{1,\ldots,M\},\quad \sum_{i\in I} m_i\leq \sum_{\{j\,|\,\exists i\in I\ s.t.\ (i,j)\in \gamma\} } n_j,\\
& \forall J\subset \{1,\ldots,M\},\quad \sum_{j\in J} n_j\leq \sum_{\{i\,|\,\exists j\in J\ s.t.\ (i,j)\in \gamma\}} m_i.
\end{split}\label{assumemunudiscrete}
\end{equation}
In order to use the result of Proposition \ref{prop:sub1}, we want to approximate $\mu$ and $\nu$ by measures
$\mu_N$, $\nu_N$ that are sums of Dirac masses with identical mass.
To do this, given $N\geq 2\,M$, we split the mass $m_i$ at $x_i$ (respectively the mass $n_j$ at $y_j$) into $k(i)$ masses (respectively $l(j)$ masses) of size $\frac 1 N$ all located at that same point, with $k(i)$ and $l(j)$ such that
$$ m_i-\frac{1}{N}\leq \frac{k(i)}{N}\leq m_i,\quad n_j-\frac{1}{N}\leq \frac{l(j)}{N}\leq n_j$$
(we can always assume that $1/N\leq \inf(\inf_i m_i,\;\inf_j n_j)$ so that $k(i),l(j)>0$).
We note that we have some leftover mass $m_i-\frac{k(i)}{N}$ at each point. By summing over all $i$ (and all $j$), the total leftover masses are
\[
1- \sum_i \frac{k(i)}{N} = \frac{k(0)}{N} \in\left(0, \frac{M}{N}\right) ,\quad 1-\sum_j \frac{l(j)}{N} = \frac{l(0)}{N} \in\left(0, \frac{M}{N}\right),
\]
where $k(0)= N- \sum_i k(i)\leq M$ and $l(0)= N- \sum_j l(j)\leq M$.
We thus add $k(0)$ (resp. $l(0)$) masses $1/N$ at some point $x_0\neq x_i$ (resp. $y_0\neq y_j$).
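This splitting of the masses $m_i$ into atoms of size $1/N$ can be sketched in a few lines; the toy masses below are hypothetical, and any $N\geq 2M$ with $1/N\leq \min_i m_i$ works:

```python
from math import floor

# Sketch of the discretization step: split each mass m_i into k(i) atoms of
# size 1/N with m_i - 1/N <= k(i)/N <= m_i, and collect the leftover mass
# k(0)/N at an extra point x_0.  (Toy masses, chosen away from multiples of
# 1/N to keep the floor computation robust in floating point.)
m = [0.373, 0.214, 0.413]         # M = 3 point masses summing to 1
N = 100
k = [floor(N * mi) for mi in m]   # k(i) atoms of mass 1/N at x_i
k0 = N - sum(k)                   # leftover atoms placed at x_0

assert all(mi - 1.0 / N <= ki / N <= mi for mi, ki in zip(m, k))
assert 0 <= k0 <= len(m)          # at most M atoms of leftover mass
```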
We can write
\[
\mu_N=\frac{1}{N}\sum_{k=1}^{N} \delta_{\bar x_k},\quad \nu_N=\frac{1}{N}\sum_{l=1}^{N} \delta_{\bar y_l}
\]
where the points $\bar x_k$ (resp. $\bar y_l$) are the points $x_i$ or the additional distinct point $x_0$ (resp. the points $y_j$ or $y_0$) obtained from the subdivision above. It is useful to introduce $K(i)$ (resp. $L(j)$), the set of indices $k$ such that $\bar x_k=x_i$ (resp. the set of indices $l$ such that $\bar y_l=y_j$).
We have in particular $|K(i)| = k(i)$ and $|L(j)|=l(j)$.
Observe that $\mu_N$ and $\nu_N$ converge strongly to $\mu$ and $\nu$ when $N\to\infty$ since
\[
\int d|\mu_N-\mu|\leq \frac{2M}{N},\quad \int d|\nu_N-\nu|\leq \frac{2M}{N}
\]
(since there is a mass discrepancy of at most $1/N$ at each of the $M$ points $x_1,\dots,x_M$ and an additional mass of at most $M/N$ at $x_0$).
We now define $\gamma_N$ as
\[
\begin{split}
\gamma_N=&\left(\bigcup_{(i,j)\in \gamma} K(i)\times L(j)\right)\bigcup \left(K(0)\times (\{1,\ldots,N\}\setminus L(0))\right)\\
&\bigcup\left((\{1,\ldots,N\}\setminus K(0))\times L(0)\right).
\end{split}\]
The first component of $\gamma_N$ is the natural extension from $\gamma$: If $x_i$ and $y_j$ were connected, then any $\bar x_k$ s.t. $\bar x_k=x_i$ is still connected to any $\bar y_l$ with $\bar y_l=y_j$. Because we lose a bit of mass on the points $\bar x_k$ and $\bar y_l$, this may not be enough to ensure that \eqref{assumemunudiscrete} holds on $\mu_N$ and $\nu_N$.
For this reason, $K(0)$ and $L(0)$ serve as a mass reservoir: $K(0)$ is connected to all $l\geq 1$ and $L(0)$ to all $k\geq 1$, but of course $K(0)$ is not connected with $L(0)$.
We now check that \eqref{assumemunudiscrete} holds for this $\gamma_N$. Let $I$ be a subset of $\{1,\dots , N\}$. We consider three cases:
\noindent\underline{If $I\subset K(0)$} then $\{l\,|\, \exists k\in I\ s.t.\ (k,l)\in \gamma_N\}=\{1,\ldots,N\}\setminus L(0)$. Therefore
\[
\mu_N(I)\leq \mu_N(K(0))\leq \frac{M}{N}\leq 1-\frac{M}{N}=\nu_N(\{l,\;\exists k\in I\ s.t.\ (k,l)\in \gamma_N\}),
\]
since $N\geq 2\,M$.
\noindent\underline{If $I\cap K(0)\neq \emptyset$ but $I\not\subset K(0)$}, then $\{l\,|\, \exists k \in I\ s.t.\ (k,l)\in \gamma_N\}=\{1,\ldots,N\}$. Trivially
\[
\mu_N(I)\leq 1=\nu_N(\{l,\;\exists k\in I\ s.t.\ (k,l)\in \gamma_N\}).
\]
\noindent\underline{If $I\cap K(0)=\emptyset$}, then denote $\bar I=\{i\,|\,\ K(i)\cap I\neq\emptyset\}$. Observe that if $(k,l)\in \gamma_N$ and $k\in K(i),\;l\in L(j)$ then for any $k'\in K(i),\; l'\in L(j)$ one also has that $(k',l')\in \gamma_N$ by the definition of $\gamma_N$. Therefore
\begin{equation}
\{l\,|\,\exists k\in I\ s.t.\ (k,l)\in \gamma_N\}=L(0) \bigcup\left(\bigcup_{j,\; \exists i\in \bar I\ s.t.\ (i,j)\in \gamma} L(j)\right).\label{explicitinggammaNonI}
\end{equation}
Consequently by the definition of $\bar I$ we have
\[
\mu_N(I)=\frac{|I|}{N}\leq \sum_{i\in \bar I} \frac{k(i)}{N}
\]
and since $k(i)\leq N \, m_i$ from the construction of $\mu_N$ we deduce
\[
\mu_N(I)\leq \sum_{i\in \bar I} \frac{k(i)}{N}\leq \sum_{i\in \bar I} m_i=\mu(\bar I).
\]
Applying \eqref{assumemunu} to $\mu$, we get
\[
\mu_N(I)\leq \mu(\bar I)\leq \nu(\{j,\; \exists i\in \bar I\ s.t.\ (i,j)\in \gamma\})=\sum_{j,\; \exists i\in \bar I\ s.t.\ (i,j)\in \gamma} n_j.
\]
From the construction of $\nu_N$, we have $n_j\geq \frac{l(j)}{N}$ and
\[
\begin{split}
\mu_N(I)&\leq \sum_{j,\; \exists i\in \bar I\ s.t.\ (i,j)\in \gamma} \frac{l(j)}{N}+\sum_{j,\; \exists i\in \bar I\ s.t.\ (i,j)\in \gamma} \left(n_j-\frac{l(j)}{N}\right)\\
&\leq \sum_{j,\; \exists i\in \bar I\ s.t.\ (i,j)\in \gamma} \frac{l(j)}{N}+\sum_{j=1}^M \left(n_j-\frac{l(j)}{N}\right)\\
&\leq \sum_{j,\; \exists i\in \bar I\ s.t.\ (i,j)\in \gamma} \frac{l(j)}{N}+1-\sum_{j=1}^M \frac{l(j)}{N}\\
&\leq \sum_{j,\; \exists i\in \bar I\ s.t.\ (i,j)\in \gamma} \frac{l(j)}{N}+\frac{l(0)}{N}.
\end{split}
\]
Using \eqref{explicitinggammaNonI}, we deduce
\[
\mu_N(I)\leq \nu_N(\{l,\;\exists k\in I\ s.t.\ (k,l)\in \gamma_N\}),
\]
which proves that \eqref{assumemunu} holds for the measures $\mu_N$, $\nu_N$ and the set $\gamma_N$.
\medskip
We now apply Proposition \ref{prop:sub1} to $\mu_N$ and $\nu_N$. We obtain $\pi_N$ a transference plan
\[
\pi_N=\sum_{k,l} \frac{\pi_{k,l}^N}{N} \delta_{(\bar x_k,\;\bar y_l)},\quad \sum_l \pi_{k,l}^N=1,\quad \sum_k \pi_{k,l}^N=1,\quad \pi^N_{k,l}=0\ \mbox{if}\ (k,l)\not\in \gamma_N.
\]
Because the $\bar x_k$ and $\bar y_k$ are equal to the $x_i,\; y_j$ or to $x_0$, $y_0$, we can also express $\pi_N$ as
\begin{equation}
\pi_N=\sum_{i,j\geq 1} \bar \pi_{i,j}^N\, \delta_{(x_i,\;y_j)}+\sum_{j\geq 1} \bar\pi^N_{0,j}\,\delta_{( x_0,\; y_j)}+\sum_{i\geq 1} \bar\pi^N_{i,0}\,\delta_{( x_i,\; y_0)}.\label{piNfixedpoints}
\end{equation}
Moreover for any $i\geq 1$ and any $j\geq 1$,
\[
\sum_{j\geq 1} \bar \pi_{ij}^N+\bar\pi_{i0}^N=\frac{k(i) }{N}, \quad \sum_{i\geq 1} \bar \pi_{ij}^N+\bar\pi_{0j}^N=\frac{l(j)}{N}.
\]
By the construction of $\mu_N,\;\nu_N$, this yields that
\begin{equation}
m_i-\frac{1}{N}\leq\sum_{j\geq 1} \bar \pi_{ij}^N+\bar\pi_{i0}^N\leq m_i,\quad n_j-\frac{1}{N}\leq\sum_{i\geq 1} \bar \pi_{ij}^N+\bar\pi_{0j}^N\leq n_j.\label{boundpiN1}
\end{equation}
On the other hand,
\begin{equation}
\sum_j \bar \pi_{0j}^N=\frac{k(0)}{N}\leq \frac{M}{N},\quad \sum_i \bar \pi_{i0}^N=\frac{l(0)}{N}\leq \frac{M}{N}.
\label{boundpiN0}
\end{equation}
Since the points $x_i,\;y_j$ together with $x_0,\;y_0$ are fixed, we may simply pass to the limit in $\pi_N\to \pi$ using \eqref{piNfixedpoints} by extracting subsequences such that all $\bar\pi_{ij}^N\to \bar \pi_{ij}$ for all $i,\;j\geq 0$. By \eqref{boundpiN0}, we have that $\bar \pi_{0j}=\bar \pi_{i0}=0$ so that
\[
\pi=\sum_{i,j\geq 1} \bar \pi_{i,j}\, \delta_{(x_i,\;y_j)}.
\]
By \eqref{boundpiN1} and given that $\bar\pi_{i0}=\bar\pi_{0j}=0$, we get
\[
\sum_{j\geq 1} \bar \pi_{ij}=m_i,\quad \sum_{i\geq 1} \bar \pi_{ij}=n_j,
\]
and so $\pi$ is a transference plan between $\mu$ and $\nu$. It only remains to check that $\pi$ is concentrated on $\Gamma$: Given any $(i,j)\not\in \gamma$, then for any $N$ and any $k\in K(i),\;l\in L(j)$, we have that $(k,l)\not\in \gamma_N$. Therefore $\pi^N_{k,l}=0$ and
\[
\bar\pi_{ij}^N=\sum_{k\in K(i),\;l\in L(j)} \frac{\pi^N_{kl}}{N}=0,
\]
so that we also have that $\bar \pi_{ij}=0$, thus proving that $\pi$ is indeed concentrated on $\Gamma$.
\end{proof}
We are now ready to prove Theorem \ref{thm:sub}.
\begin{proof}[Proof of Theorem \ref{thm:sub}]
First of all, by density, we can assume that both $\mu$ and $\nu$ are compactly supported on some large ball $B(0,R)$.
Define $\Gamma_x$ and $\Gamma_y$ the projections of $\Gamma$,
\[
\Gamma_x=\{x\in \mathbb R^n\,|\;\exists y\in \mathbb R^n,\ (x,y)\in \Gamma\},\quad \Gamma_y=\{y\in \mathbb R^n\,|\;\exists x\in \mathbb R^n,\ (x,y)\in \Gamma\}.
\]
Of course $\Gamma_x,\;\Gamma_y$ are closed. Moreover $\mu$ is supported on $\Gamma_x$: if $O$ is an open set with $O\cap\Gamma_x=\emptyset$, then for any $y\in \mathbb R^n$ and any $x\in O$ we have $(x,\,y)\not\in \Gamma$, so
\[
\{y\in \mathbb R^n\,|\;\exists x\in O,\ (x,y)\in \Gamma\}=\emptyset,
\]
and therefore $\mu(O)=0$ by assumption \eqref{assumemunu}. Similarly $\nu$ is supported on $\Gamma_y$.
\medskip
For any $k\in \mathbb{N}$, we define the hypercubes $C_i^k$ of diameter $2^{-k}$, centered at points $x_i \in 2^{-k} \mathbb{Z}^n$ and that cover a fixed selected hypercube $C_0$ s.t. $B(0,R)\subset C_0$. This decomposition is obviously hierarchical since $C_i^k$ is composed of exactly $2^{n}$ small hypercubes $C_j^{k+1}$.
By shifting the hypercubes if necessary, we may assume that $\mu\left(\bigcup_i \partial{C}_{i}^k \right)=0$. For any $k$, we define an approximation $\mu_N$ with $N=C\,2^{kn}$ points,
\[
\mu_N=\sum_{i=1}^N m_i^N\,\delta_{x_i},\quad m_i^N=\mu(C_i^k).
\]
We also define $\nu_N$ in the same manner. Both $\mu_N$ and $\nu_N$ remain probability measures since $\sum_i m_i^N=\sum_i \mu(C_i^k)=\mu\left(\bigcup_i C_i^k\right)=1$, as $\mu\left(\bigcup_i \partial {C}_{i}^k \right)=0$ and $B(0,R)\subset \bigcup_i C_i^k$.
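The construction of $\mu_N$ amounts to binning the mass of $\mu$ into the cubes $C_i^k$. A minimal sketch for a discrete toy measure in the plane (the atoms are hypothetical, and the grid shift and boundary-mass condition of the text are ignored here):

```python
import math
from collections import Counter

# Sketch of the dyadic discretization: bin the atoms of a discrete measure mu
# into cubes of side 2^{-k} and place the mass mu(C_i^k) at one point per cube.
# Cube index = floor of the coordinates divided by the cube side.
def discretize(atoms, k):
    h = 2.0 ** (-k)
    out = Counter()
    for (x, y), mass in atoms:
        out[(math.floor(x / h), math.floor(y / h))] += mass
    return dict(out)

mu = [((0.10, 0.20), 0.5), ((0.11, 0.22), 0.3), ((0.80, 0.90), 0.2)]
mu_N = discretize(mu, k=2)        # cubes of side 1/4
assert abs(sum(mu_N.values()) - 1.0) < 1e-12   # still a probability measure
assert len(mu_N) == 2             # the first two atoms fall in the same cube
```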
Finally we define $\Gamma_N$ as the union $\bigcup_{(i,j)\in \gamma_N} C_i^k\times C_j^k$ where
\[
\gamma_N=\{(i,j)\,|\; \exists x\in C_i^k,\ \exists y\in C_j^k,\ (x,y)\in \Gamma\}.
\]
We observe of course that $\Gamma\subset \Gamma_N$ and that $d(\Gamma_N,\Gamma)\leq C\,2^{-k}\to 0$ as $N\to \infty$.
Consider now any subset $I$ of indices $i$ and define $O=\bigcup_{i\in I} C_i^k$.
By our construction and assumption \eqref{assumemunu}
\[
\mu(O)=\sum_{i\in I} m_i^N\leq \nu\left(\{y\in \mathbb R^n\,|\;\exists x\in O,\ (x,y)\in \Gamma\}\right).
\]
By the definition of $O$, we have
\[
\{y\in \mathbb R^n\,|\;\exists x\in O,\ (x,y)\in \Gamma\}=\bigcup_{i\in I} \{y\in \mathbb R^n\,|\;\exists x\in C_i^k,\ (x,y)\in \Gamma\}.
\]
And since $\Gamma\subset \Gamma_N$, we deduce
\[\begin{split}
\{y\in \mathbb R^n\,|\;\exists x\in O,\ (x,y)\in \Gamma\}&\subset \bigcup_{i\in I} \{y\in \mathbb R^n\,|\;\exists x\in C_i^k,\ (x,y)\in \Gamma_N\}\\
&\subset\bigcup_{\{j\,|\, \exists i\in I\;s.t.\;(i,j)\in \gamma_N\}} C_j^k,
\end{split}
\]
by the definition of $\gamma_N$. Hence, since $\nu\left(\bigcup_i \partial {C}_{i}^k \right)=0$
\[
\begin{split}
\nu\left(\{y\in \mathbb R^n\,|\;\exists x\in O,\ (x,y)\in \Gamma\}\right)& \leq \sum_{\{j\,|\, \exists i\in I\;s.t.\;(i,j)\in \gamma_N\}} \nu(C_j^k)= \sum_{\{j\,|\, \exists i\in I\;s.t.\;(i,j)\in \gamma_N\}} n_j^N.
\end{split}
\]
Therefore
\[
\sum_{i\in I} m_i^N\leq
\sum_{\{j\,|\, \exists i\in I\;s.t.\;(i,j)\in \gamma_N\}} n_j^N,
\]
which is the first inequality in \eqref{assumemunudiscrete}.
The proof is similar when reversing the roles of $\mu_N$ and $\nu_N$ and this allows us to conclude that $\mu_N$ and $\nu_N$ satisfy \eqref{assumemunu} with $\Gamma_N$.
\medskip
We can thus apply Proposition \ref{prop:sub2} to get the existence of a transference plan $\pi_N$ concentrated on $\Gamma_N$ and with marginals $\mu_N$ and $\nu_N$.
\medskip
By the tightness of $\pi_N$, we can extract a converging subsequence (still denoted by $N$ for simplicity) s.t. $\pi_N \to \pi$ for the weak-* topology of measures.
Trivially $\pi$ has marginals $\mu$ and $\nu$.
\medskip
To conclude the proof, we need to show that $\pi$ is concentrated on $\Gamma$: Consider any open set $O$ with $O\cap \Gamma=\emptyset$ and any continuous function $\phi$ on $\mathbb R^{2n}$ with compact support on $O$.
We claim that $\sup_{\Gamma_N} |\phi|\to 0$ as $N$ tends to $\infty$. Indeed assume, by contradiction, that there exist $(x_N,\;y_N)\in \Gamma_N$ and $\eta>0$ s.t. $|\phi(x_N,y_N)|>\eta$. Since $\Gamma_N\subset B(0,2R)\times B(0,2R)$, we can extract converging subsequences $x_N\to x$ and $y_N\to y$. Since $d(\Gamma_N,\Gamma)\to 0$, we have $(x,y)\in \Gamma$ and, since $\phi$ is continuous, $|\phi(x,y)|\geq\eta$. Recalling that $\phi$ has compact support in $O$ with $O\cap \Gamma=\emptyset$ gives a contradiction.
We thus have
\[
\int \phi\,d\pi_N=\int_{\Gamma_N} \phi\,d\pi_N\leq \sup_{\Gamma_N} |\phi|\longrightarrow 0,\quad \mbox{as}\ N\to \infty,
\]
which gives $\int \phi\,d\pi=0$, proving that $\pi$ is concentrated on $\Gamma$ and completing the proof of Theorem~\ref{thm:sub}.
\end{proof}
% Source: https://arxiv.org/abs/1909.10027
% Title: Invariant solutions of a nonlinear wave equation with a small dissipation obtained via approximate symmetries
% Abstract: In this paper, it is shown how a combination of approximate symmetries of a nonlinear wave equation with small dissipations and singularity analysis provides exact analytic solutions. We perform the analysis using the Lie symmetry algebra of this equation and identify the conjugacy classes of the one-dimensional subalgebras of this Lie algebra. We show that the subalgebra classification of the integro-differential form of the nonlinear wave equation is much larger than the one obtained from the original wave equation. A systematic use of the symmetry reduction method allows us to find new invariant solutions of this wave equation.
\section{Introduction}
A systematic computational method for constructing an approximate symmetry group for a given system of partial differential equations (PDEs) has been extensively developed by many authors, see e.g. \cite{Ames,Bluman1,Fushchich}. A broad review of recent developments in this subject can be found in such books as G. Bluman and S. Kumei \cite{Bluman2}, P. Olver \cite{Olver}, D. Sattinger and O. Weaver \cite{Sattinger}, B. Rozdestvenskii and N. Janenko \cite{Rozdestvenskii} and V. Baikov, R. Gazizov and N. Ibragimov \cite{Ibragimov,Ibragimov2}. Recently, M. Ruggieri and M. Speciale \cite{Ruggieri} determined the Lie algebras of approximate symmetries of nonlinear wave equations admitting a small perturbative dissipation. They discussed the generators of four different versions of the system of equations associated with the nonlinear wave equation
\begin{equation}
u_{tt}=\left[f(u)u_x\right]_x,
\label{i1}
\end{equation}
where $u(t,x)$ is a function of $t$ and $x$. They considered the following second-order PDE with a small dissipative term:
\begin{equation}
u_{tt}=\left[f(u)u_x\right]_x+\varepsilon\left[\lambda(u)u_t\right]_{xx},
\label{i2}
\end{equation}
where $\varepsilon\ll 1$ is a small parameter and $f$ and $\lambda$ are smooth functions of $u$. If we suppose that the function $u(t,x)$ can be written as
\begin{equation}
u(t,x,\varepsilon)=u_0(t,x)+\varepsilon u_1(t,x)+{\mathcal O}(\varepsilon^2),
\label{i3}
\end{equation}
where $u_0$ and $u_1$ are smooth functions of $t$ and $x$, then collecting the terms of orders $\varepsilon^0$ and $\varepsilon^1$ in equation (\ref{i2}) yields the following two equations:
\begin{equation}
u_{0,tt}-f(u_0)u_{0,xx}-f'(u_0)(u_{0,x})^2=0,
\label{bigeq1}
\end{equation}
and
\begin{equation}
\begin{split}
& u_{1,tt}-f(u_0)u_{1,xx}-f'(u_0)u_{0,xx}u_1-2f'(u_0)u_{0,x}u_{1,x}-f''(u_0)(u_{0,x})^2u_1 \\ &-\lambda''(u_0)(u_{0,x})^2u_{0,t}-\lambda'(u_0)u_{0,xx}u_{0,t}-2\lambda'(u_0)u_{0,x}u_{0,xt}-\lambda(u_0)u_{0,xxt}=0.
\end{split}
\label{bigeq2}
\end{equation}
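This order-by-order splitting can be verified symbolically. The sketch below uses SymPy with the illustrative choice $f(u)=u$, $\lambda(u)=u$ (chosen here only so that the Taylor expansions are exact; it is not one of the cases studied in this paper), and checks that the $\varepsilon^0$ and $\varepsilon^1$ coefficients of equation (\ref{i2}) reproduce (\ref{bigeq1}) and (\ref{bigeq2}):

```python
import sympy as sp

t, x, eps = sp.symbols('t x epsilon')
u0 = sp.Function('u0')(t, x)
u1 = sp.Function('u1')(t, x)

# substitute u = u0 + eps*u1 into (i2) with f(u) = u, lambda(u) = u
u = u0 + eps*u1
E = sp.expand(sp.diff(u, t, 2)
              - sp.diff(u*sp.diff(u, x), x)
              - eps*sp.diff(u*sp.diff(u, t), x, 2))

order0 = E.coeff(eps, 0)   # O(1) part
order1 = E.coeff(eps, 1)   # O(eps) part

# (bigeq1) and (bigeq2) specialised to f(u) = u, lambda(u) = u
# (so f' = lambda' = 1 and f'' = lambda'' = 0)
eq1 = sp.diff(u0, t, 2) - u0*sp.diff(u0, x, 2) - sp.diff(u0, x)**2
eq2 = (sp.diff(u1, t, 2) - u0*sp.diff(u1, x, 2) - sp.diff(u0, x, 2)*u1
       - 2*sp.diff(u0, x)*sp.diff(u1, x)
       - sp.diff(u0, x, 2)*sp.diff(u0, t)
       - 2*sp.diff(u0, x)*sp.diff(u0, x, t)
       - u0*sp.diff(u0, x, 2, t))

ok0 = sp.simplify(order0 - eq1) == 0
ok1 = sp.simplify(order1 - eq2) == 0
```

The same check goes through for any concrete smooth $f$ and $\lambda$ once their Taylor expansions in $\varepsilon$ are truncated at first order.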
The Lie symmetry algebra of equations (\ref{bigeq1}) and (\ref{bigeq2}) was identified for three separate cases \cite{Ruggieri}:
\begin{equation}
\begin{split}
(I):& \quad f(u_0)=f_0e^{\frac{1}{p}u_0} , \quad \lambda(u_0)=\lambda_0e^{\frac{1+s}{p}u_0}\\
(II):& \quad f(u_0)=f_0(u_0+q)^{\frac{1}{p}} , \quad \lambda(u_0)=\lambda_0(u_0+q)^{\frac{1+s}{p}-1}\\
(III):& \quad f(u_0)=f_0(u_0+q)^{-\frac{4}{3}} , \quad \lambda(u_0)=\lambda_0(u_0+q)^{-\frac{4}{3}}\\
\end{split}
\label{i4}
\end{equation}
In addition, equation (\ref{i1}) is equivalent to the following integro-differential system of equations:
\begin{equation}
\begin{split}
u_t-&v_x=0,\\
v_t-\Big{(}\int^{u}f(s)ds+&\varepsilon\lambda(u)v_x\Big{)}_x=0.
\end{split}
\label{i5}
\end{equation}
In the paper \cite{Ruggieri}, two different cases of equation (\ref{i5}) were considered:
\begin{equation}
\begin{split}
(IV):& \quad f(u_0)=f_0e^{\frac{1}{p}u_0} , \quad \lambda(u_0)=\lambda_0e^{\frac{1+s}{p}u_0}\\
(V):& \quad f(u_0)=f_0(u_0+q)^{\frac{1}{p}} , \quad \lambda(u_0)=\lambda_0(u_0+q)^{\frac{1+s}{p}-1}\\
\end{split}
\label{i6}
\end{equation}
and their Lie symmetry algebras were identified. The objectives of this work are the following. For each of the five cases listed in equations (\ref{i4}) and (\ref{i6}), we classify the one-dimensional subalgebras of the Lie symmetry algebra into conjugacy classes under the action of the associated Lie group. That is, we obtain a list of representative subalgebras of each Lie symmetry algebra ${\mathcal L}$ such that each one-dimensional subalgebra of ${\mathcal L}$ is conjugate to one and only one element of the list. In order to obtain these classifications, we make use of the results obtained by J. Patera and P. Winternitz in \cite{Patera}. For cases $(I)$ and $(II)$, we identify the Lie symmetry algebra as $2A_2$ from the list of Lie algebras of dimension $4$ found in \cite{Patera}. For case $(III)$, we first express the Lie symmetry algebra as a direct sum of two algebras, one of which is the three-dimensional algebra $A_{3,8}=su(1,1)$ found in \cite{Patera}. The Goursat method of twisted and non-twisted subalgebras is used to complete the classification \cite{Winternitz}. Next, we make systematic use of the symmetry reduction method to generate invariant solutions corresponding to the above-mentioned subalgebras. We then perform a subalgebra classification for the integro-differential equation (\ref{i5}) and give two examples of symmetry reductions for this case. Finally, we provide a physical interpretation of the obtained results.
\section{Subalgebra classification and invariant solutions}
\subsection{The case where $f(u_0)=f_0e^{\frac{1}{p}u_0}$ and $\lambda(u_0)=\lambda_0e^{\frac{1+s}{p}u_0}$}
We first consider the case where $f(u_0)=f_0e^{\frac{1}{p}u_0}$ and $\lambda(u_0)=\lambda_0e^{\frac{1+s}{p}u_0}$, where $f_0$, $\lambda_0$, $p$ and $s$ are constants and $p\neq 0$. For this case, equations (\ref{bigeq1}) and (\ref{bigeq2}) become
\begin{equation}
u_{0,tt}-f_0e^{\frac{1}{p}u_0}u_{0,xx}-\dfrac{f_0}{p}e^{\frac{1}{p}u_0}(u_{0,x})^2=0,
\label{bigeq3}
\end{equation}
and
\begin{equation}
\begin{split}
& u_{1,tt}-f_0e^{\frac{1}{p}u_0}u_{1,xx}-\dfrac{f_0}{p}e^{\frac{1}{p}u_0}u_{0,xx}u_1-\dfrac{2f_0}{p}e^{\frac{1}{p}u_0}u_{0,x}u_{1,x}-\dfrac{f_0}{p^2}e^{\frac{1}{p}u_0}(u_{0,x})^2u_1 \\ &-\lambda_0\left(\frac{1+s}{p}\right)^2e^{\frac{1+s}{p}u_0}(u_{0,x})^2u_{0,t}-\lambda_0\left(\frac{1+s}{p}\right)e^{\frac{1+s}{p}u_0}u_{0,xx}u_{0,t}\\ &-2\lambda_0\left(\frac{1+s}{p}\right)e^{\frac{1+s}{p}u_0}u_{0,x}u_{0,xt}-\lambda_0e^{\frac{1+s}{p}u_0}u_{0,xxt}=0.
\end{split}
\label{bigeq4}
\end{equation}
The Lie algebra of infinitesimal symmetries of equations (\ref{bigeq3}) and (\ref{bigeq4}) is spanned by the four generators \cite{Ruggieri}
\begin{equation}
\begin{split}
&X_1=\partial_t,\qquad X_2=\partial_x,\qquad X_3=t\partial_t+x\partial_x-u_1\partial_{u_1},\\ &X_4=x\partial_x+2p\partial_{u_0}+2su_1\partial_{u_1}.
\end{split}
\label{gen1}
\end{equation}
This Lie algebra is isomorphic to the algebra $2A_2$ given in Table II of \cite{Patera}. The list of conjugacy classes includes the following one-dimensional subalgebras:
\begin{equation}
\begin{split}
&\{X_1\}, \qquad \{X_4\}, \qquad \{X_2\}, \qquad \{X_3+aX_4\}, \qquad \{X_4-X_3+\varepsilon X_2\}, \\ &\{X_1+\varepsilon X_2\}, \qquad \{X_1+\varepsilon X_4\},
\end{split}
\label{subalg1}
\end{equation}
where $a\in \mathbb{R}$, $a\neq 0$ and $\varepsilon=\pm 1$. We proceed to use the symmetry reduction method to reduce the system of equations using each subalgebra given in the list (\ref{subalg1}).
{\bf 1.}\hspace{5mm} For the subalgebra $\{X_1\}$, we obtain the solution
\begin{equation}
\begin{split}
u_0(x)=p\ln{|x+C_1|}+C_2,\qquad u_1(x)=\dfrac{C_3}{x+C_1}+C_4,
\end{split}
\label{solution1}
\end{equation}
where $C_1$, $C_2$, $C_3$ and $C_4$ are constants. This is a singular logarithmic solution with one simple pole.
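Since this reduction is stationary, all $t$-derivatives in (\ref{bigeq3}) and (\ref{bigeq4}) vanish, and the solution can be checked directly. A SymPy sketch (illustrative, with $x+C_1>0$ assumed):

```python
import sympy as sp

x = sp.symbols('x')
p, f0, C1, C2, C3, C4 = sp.symbols('p f0 C1 C2 C3 C4', positive=True)

# stationary solution (solution1)
u0 = p*sp.log(x + C1) + C2
u1 = C3/(x + C1) + C4

E = sp.exp(u0/p)
# time-independent forms of (bigeq3) and (bigeq4): all t-derivatives drop
eq3 = -f0*E*sp.diff(u0, x, 2) - (f0/p)*E*sp.diff(u0, x)**2
eq4 = (-f0*E*sp.diff(u1, x, 2) - (f0/p)*E*sp.diff(u0, x, 2)*u1
       - (2*f0/p)*E*sp.diff(u0, x)*sp.diff(u1, x)
       - (f0/p**2)*E*sp.diff(u0, x)**2*u1)

ok3 = sp.simplify(eq3) == 0
ok4 = sp.simplify(eq4) == 0
```

Note that the dissipative terms proportional to $\lambda_0$ disappear automatically here because $u_{0,t}=0$.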
{\bf 2.}\hspace{5mm} For the subalgebra $\{X_4\}$, we obtain a dissipative solution of the form
\begin{equation}
\begin{split}
u_0(t,x)=F(t)+2p\ln{x},\qquad u_1(t,x)=x^{2s}G(t),
\end{split}
\label{solution2}
\end{equation}
where the functions $F(t)$ and $G(t)$ are given by the quadratures
\begin{equation}
\int\dfrac{dF}{\varepsilon(4p^2f_0e^{\frac{1}{p}F}+K_0)^{1/2}}=t-t_0,
\label{solution2e}
\end{equation}
for $F$ and
\begin{equation}
G=\mu\int\sqrt{4b(s+1)(2s+1)\int e^{\frac{1}{p}F}\left[f_0G+\lambda_0\varepsilon e^{\frac{s}{p}F}\left(4p^2f_0e^{\frac{1}{p}F}+K_0\right)^{1/2}\right]dF}\,dt,
\label{solution2f}
\end{equation}
where $K_0$ and $b$ are constants and $\mu=\pm 1$. Therefore,
\begin{equation}
u_1=x^{2s}\mu\int\sqrt{4b(s+1)(2s+1)\int e^{\frac{1}{p}F}\left[f_0G+\lambda_0\varepsilon e^{\frac{s}{p}F}\left(4p^2f_0e^{\frac{1}{p}F}+K_0\right)^{1/2}\right]dF}\,dt.
\end{equation}
{\bf 3.}\hspace{5mm} For the subalgebra $\{X_2\}$, we obtain the trivial linear (in $t$) solution
\begin{equation}
u_0(t)=C_1t+C_2,\qquad u_1(t)=C_3t+C_4
\label{solution3}
\end{equation}
where $C_1$, $C_2$, $C_3$ and $C_4$ are constants.
{\bf 4.}\hspace{5mm} For the subalgebra $\{X_3+aX_4\}$, equations (\ref{bigeq3}) and (\ref{bigeq4}) reduce to the system of third-order ordinary differential equations (ODEs)
\begin{equation}
(a+1)(a+2)\xi F_{\xi}+(a+1)^2\xi^2F_{\xi\xi}-2ap-f_0e^{\frac{1}{p}F}F_{\xi\xi}-\dfrac{f_0}{p}e^{\frac{1}{p}F}(F_{\xi})^2=0,
\label{bigeq3A}
\end{equation}
and
\begin{equation}
\begin{split}
&(2as-1)(2as-2)G-(a+1)(4as-a-4)\xi G_{\xi}+(a+1)^2\xi^2G_{\xi\xi}\\ &-f_0e^{\frac{1}{p}F}\left[G_{\xi\xi}+\dfrac{1}{p}F_{\xi\xi}G+\dfrac{2}{p}F_{\xi}G_{\xi}+\dfrac{1}{p^2}(F_{\xi})^2G\right]\\ &+\dfrac{\lambda_{0}(1+s)}{p}e^{\frac{1+s}{p}F}\Big{[}\dfrac{(a+1)(1+s)}{p}\xi(F_{\xi})^3-\dfrac{2ap(1+s)}{p}(F_{\xi})^2+(a+1)\xi F_{\xi}F_{\xi\xi}\\ &\hspace{3.5cm}-2apF_{\xi\xi}+2(a+1)(F_{\xi})^2+2(a+1)\xi F_{\xi}F_{\xi\xi}\Big{]}\\ &+\lambda_0e^{\frac{1+s}{p}F}\left[2(a+1)F_{\xi\xi}+(a+1)\xi F_{\xi\xi\xi}\right]=0,
\label{bigeq4A}
\end{split}
\end{equation}
where we have the self-similar symmetry variable $\xi=xt^{-a-1}$, and the functions
\begin{equation}
u_0=F(\xi)+2ap\ln{t}\qquad \mbox{and}\qquad u_1=t^{2as-1}G(\xi).
\end{equation}
For the special case of the subalgebra where $a=-1$, we obtain the singular logarithmic solution:
\begin{equation}
F(\xi)=2p\ln{\left(\frac{1}{\sqrt{f_0}}\xi+C_0\right)},
\label{solution4}
\end{equation}
where $C_0$ is a constant. The function $G$ satisfies the single second-order linear differential equation
\begin{equation}
\begin{split}
&-f_0\Delta^2G_{\xi\xi}-4\sqrt{f_0}\Delta G_{\xi}-2G+(2s+1)(2s+2)G\\ &+\dfrac{\lambda_0(1+s)\Delta^{2(1+s)}}{p}\left[\dfrac{4p^2}{\Delta^2}\left(\dfrac{2(1+s)}{f_0^2}-\dfrac{1}{f_0}\right)\right]=0,
\end{split}
\label{bigeq4AE}
\end{equation}
where $\Delta=\frac{1}{\sqrt{f_0}}\xi+C_0$. In the specific case where $\lambda_0=0$, we obtain the explicit solutions
\begin{equation}\label{earlydamp}
G=\xi^{-3/4}(C_1+C_2\ln{\xi})
\end{equation}
in the case where $s=-3/4$ and
\begin{equation}\label{otherearlydamp}
G=C_1\xi^{r_+}+C_2\xi^{r_-}
\end{equation}
where
\begin{equation}
r_{\pm}=\dfrac{-3\pm \sqrt{9-4[2-(2s+1)(2s+2)]}}{2}
\end{equation}
in the case where $s\neq -3/4$. The functions $G$ in equations (\ref{earlydamp}) and (\ref{otherearlydamp}) correspond respectively to the solutions
\begin{equation}\label{earlydampbis}
\begin{split}
& u_0=2p\ln{\left(\frac{1}{\sqrt{f_0}}xt^{-a-1}+C_0\right)}+2ap\ln{t},\\ & u_1=t^{2as-1}(xt^{-a-1})^{-3/4}(C_1+C_2\ln{(xt^{-a-1}}))
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
& u_0=2p\ln{\left(\frac{1}{\sqrt{f_0}}xt^{-a-1}+C_0\right)}+2ap\ln{t},\\ & u_1=t^{2as-1}(C_1(xt^{-a-1})^{r_+}+C_2(xt^{-a-1})^{r_-}).
\end{split}
\end{equation}
The solution (\ref{earlydampbis}) involves damping.
{\bf 5.}\hspace{5mm} For the subalgebra $\{X_4-X_3+\varepsilon X_2\}$, we get
\begin{equation}
u_0=F(\xi)-2p\ln{t},\qquad u_1=t^{-2s-1}G(\xi),
\label{solution5}
\end{equation}
where we have the symmetry variable $\xi=x+\varepsilon\ln{t}$. Here, $F$ satisfies the nonlinear equation
\begin{equation}
F_{\xi\xi}=\dfrac{1}{1-f_0e^{\frac{1}{p}F}}\left[\dfrac{f_0}{p}e^{\frac{1}{p}F}(F_{\xi})^2+\varepsilon F_{\xi}-2p\right],
\label{solution5A}
\end{equation}
and $G$ satisfies
\begin{equation}
\begin{split}
&\left(1-f_0e^{\frac{1}{p}F}\right)G_{\xi\xi}-\left(\varepsilon(4s+3)+\frac{2f_0}{p}e^{\frac{1}{p}F}F_{\xi}\right)G_{\xi}\\&+\left((2s+1)(2s+2)-\frac{f_0}{p}e^{\frac{1}{p}F}F_{\xi\xi}-\frac{f_0}{p^2}e^{\frac{1}{p}F}(F_{\xi})^2\right)G\\
&-\lambda_0\Bigg{[}\left(\dfrac{1+s}{p}\right)^2e^{\frac{1+s}{p}F}(F_{\xi})^2\left(\varepsilon F_{\xi}-2p\right)+\left(\dfrac{1+s}{p}\right)e^{\frac{1+s}{p}F}F_{\xi\xi}\left(\varepsilon F_{\xi}-2p\right)\\ &+2\varepsilon\left(\dfrac{1+s}{p}\right)e^{\frac{1+s}{p}F}F_{\xi}F_{\xi\xi}+\varepsilon e^{\frac{1+s}{p}F}F_{\xi\xi\xi} \Bigg{]}=0.
\end{split}
\end{equation}
In the specific case where $\lambda_0=0$ and $f_0=0$, we obtain the explicit solution
\begin{equation}\label{earlysolution1A}
u_0=K_1te^{\varepsilon x}+2\varepsilon px+K_2,\qquad\qquad u_1=K_3e^{(2s+1)x}t^{(2s+1)(\varepsilon-1)}+K_4e^{(2s+2)x}t^{(2s+1)(\varepsilon-1)}t^{\varepsilon}.
\end{equation}
Solution (\ref{earlysolution1A}) involves damping terms in the case when $\varepsilon=-1$. Otherwise, for $\varepsilon=1$, this solution may contain unbounded terms.
{\bf 6.}\hspace{5mm} For the subalgebra $\{X_1+\varepsilon X_2\}$, we have the travelling wave solution
\begin{equation}
u_0=u_0(\xi),\qquad u_1=u_1(\xi),
\label{solution6}
\end{equation}
where $\xi=x-\varepsilon t$. Here, $u_0$ can be determined implicitly by the equation
\begin{equation}
u_0-pf_0e^{\frac{1}{p}u_0}=K_0\xi+K_1.
\label{solution6AAA}
\end{equation}
In the case where $\lambda_0=0$, $u_1$ satisfies the second-order ODE
\begin{equation}
\begin{split}
&\left(1-f_0e^{\frac{1}{p}u_0}\right)u_{1,\xi\xi}-\dfrac{2f_0}{p}e^{\frac{1}{p}u_0}\dfrac{K_0}{1-f_0e^{\frac{1}{p}u_0}}u_{1,\xi}\\ &-\dfrac{f_0}{p}e^{\frac{1}{p}u_0}\left[\dfrac{K_0f_0e^{\frac{1}{p}u_0}+K_0^2}{p\left(1-f_0e^{\frac{1}{p}u_0}\right)^2}\right]u_1=0
\end{split}
\label{solution6BBB}
\end{equation}
which is linear in $u_1$.
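The implicit relation (\ref{solution6AAA}) follows from a first integral of the travelling-wave reduction of (\ref{bigeq3}): along solutions, the quantity $\left(1-f_0e^{\frac{1}{p}u_0}\right)u_{0,\xi}$ is constant (equal to $K_0$), and one further quadrature gives the implicit equation for $u_0$. A SymPy sketch of the conservation check (illustrative, not from the paper):

```python
import sympy as sp

xi = sp.symbols('xi')
f0, p = sp.symbols('f0 p', positive=True)
U = sp.Function('u0')(xi)

# travelling-wave form of (bigeq3): (1 - f0 e^{U/p}) U'' = (f0/p) e^{U/p} (U')^2
Upp = (f0/p)*sp.exp(U/p)*sp.diff(U, xi)**2/(1 - f0*sp.exp(U/p))

# candidate first integral I = (1 - f0 e^{U/p}) U'
I = (1 - f0*sp.exp(U/p))*sp.diff(U, xi)
# differentiate I and eliminate U'' using the ODE
dI = sp.diff(I, xi).subs(sp.diff(U, xi, 2), Upp)
ok = sp.simplify(dI) == 0
```

The same first-integral structure reappears for the travelling-wave reductions in the other cases below.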
{\bf 7.}\hspace{5mm} For the subalgebra $\{X_1+\varepsilon X_4\}$, we obtain the center wave solution
\begin{equation}
u_0=F(\xi)+2\varepsilon pt,\qquad u_1=e^{2\varepsilon st}G(\xi),
\label{solution7}
\end{equation}
where the symmetry variable is $\xi=xe^{-\varepsilon t}$, $F$ satisfies the equation
\begin{equation}
\xi F_{\xi}+\xi^2F_{\xi\xi}-f_0e^{\frac{1}{p}F}F_{\xi\xi}-\dfrac{f_0}{p}e^{\frac{1}{p}F}(F_{\xi})^2=0,
\label{solution7A}
\end{equation}
and $G$ satisfies the equation
\begin{equation}
\begin{split}
&\left(\xi^2-f_0e^{\frac{1}{p}F}\right)G_{\xi\xi}+\left((1-4s)\xi-\dfrac{2f_0}{p}e^{\frac{1}{p}F}F_{\xi}\right)G_{\xi}\\ &+\left(4s^2-\dfrac{f_0}{p}e^{\frac{1}{p}F}F_{\xi\xi}-\dfrac{f_0}{p^2}e^{\frac{1}{p}F}(F_{\xi})^2\right)G\\ &+\lambda_0\varepsilon e^{\frac{1+s}{p}F}\bigg{[}\left(\dfrac{1+s}{p}\right)^2\xi(F_{\xi})^3-\dfrac{2s(1+s)}{p}(F_{\xi})^2+3\left(\dfrac{1+s}{p}\right)\xi F_{\xi}F_{\xi\xi}\\ &-2sFF_{\xi\xi}+\xi F_{\xi\xi\xi}\bigg{]}=0.
\end{split}
\label{solution7B}
\end{equation}
In the case where $\lambda_0=0$ and $s=\frac{1\pm\sqrt{2}}{2}$, we obtain the periodic damping solution
\begin{equation}
u_0=p\ln{x}+\varepsilon pt-\frac{p}{2}\ln{f_0},\qquad u_1=e^{2\varepsilon st+\frac{1}{2}xe^{-\varepsilon t}}\left[C_1\cos{\left(\frac{\sqrt{7}}{2}xe^{-\varepsilon t}\right)}+C_2\sin{\left(\frac{\sqrt{7}}{2}xe^{-\varepsilon t}\right)}\right].
\label{solution7C}
\end{equation}
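The $u_0$ part of (\ref{solution7C}) can be checked directly against (\ref{bigeq3}); a SymPy sketch for the branch $\varepsilon=+1$ (the branch $\varepsilon=-1$ is analogous):

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
p, f0 = sp.symbols('p f0', positive=True)
eps = 1  # the epsilon = +1 branch

# u0 part of (solution7C)
u0 = p*sp.log(x) + eps*p*t - p*sp.log(f0)/2

# residual of (bigeq3)
lhs = (sp.diff(u0, t, 2)
       - f0*sp.exp(u0/p)*sp.diff(u0, x, 2)
       - (f0/p)*sp.exp(u0/p)*sp.diff(u0, x)**2)
ok = sp.simplify(lhs) == 0
```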
\subsection{The case where $f(u_0)=f_0(u_0+q)^{\frac{1}{p}}$ and $\lambda(u_0)=\lambda_0(u_0+q)^{\frac{1+s}{p}-1}$}
Next, we consider the case where $f(u_0)=f_0(u_0+q)^{\frac{1}{p}}$ and $\lambda(u_0)=\lambda_0(u_0+q)^{\frac{1+s}{p}-1}$, where $f_0$, $\lambda_0$, $p$, $q$ and $s$ are constants with $p\neq 0$. For this case, equations (\ref{bigeq1}) and (\ref{bigeq2}) become
\begin{equation}
u_{0,tt}-f_0(u_0+q)^{\frac{1}{p}}u_{0,xx}-\dfrac{f_0}{p}(u_0+q)^{\frac{1}{p}-1}(u_{0,x})^2=0,
\label{bigeq5}
\end{equation}
and
\begin{equation}
\begin{split}
& u_{1,tt}-f_0(u_0+q)^{\frac{1}{p}}u_{1,xx}-\dfrac{f_0}{p}(u_0+q)^{\frac{1}{p}-1}u_{0,xx}u_1-\dfrac{2f_0}{p}(u_0+q)^{\frac{1}{p}-1}u_{0,x}u_{1,x}\\ &-\dfrac{f_0}{p}\left(\dfrac{1}{p}-1\right)(u_0+q)^{\frac{1}{p}-2}(u_{0,x})^2u_1\\ &-\lambda_0\left(\dfrac{1+s}{p}-1\right)\left(\dfrac{1+s}{p}-2\right)(u_0+q)^{\frac{1+s}{p}-3}(u_{0,x})^2u_{0,t}\\ &-\lambda_0\left(\dfrac{1+s}{p}-1\right)(u_0+q)^{\frac{1+s}{p}-2}u_{0,xx}u_{0,t}\\ &-2\lambda_0\left(\dfrac{1+s}{p}-1\right)(u_0+q)^{\frac{1+s}{p}-2}u_{0,x}u_{0,xt}-\lambda_0(u_0+q)^{\frac{1+s}{p}-1}u_{0,xxt}=0.
\end{split}
\label{bigeq6}
\end{equation}
The Lie algebra of infinitesimal symmetries of equations (\ref{bigeq5}) and (\ref{bigeq6}) is spanned by the four generators \cite{Ruggieri}
\begin{equation}
\begin{split}
&X_1=\partial_t,\qquad X_2=\partial_x,\qquad X_3=t\partial_t+x\partial_x-u_1\partial_{u_1},\\ &X_4=x\partial_x+2p(u_0+q)\partial_{u_0}+2su_1\partial_{u_1}.
\end{split}
\label{gen2}
\end{equation}
This Lie algebra is isomorphic to the algebra $2A_2$ given in Table II of \cite{Patera}. The list of conjugacy classes includes the one-dimensional subalgebras:
\begin{equation}
\begin{split}
&\{X_1\}, \qquad \{X_4\}, \qquad \{X_2\}, \qquad \{X_3+aX_4\}, \qquad \{X_4-X_3+\varepsilon X_2\}, \\ &\{X_1+\varepsilon X_2\}, \qquad \{X_1+\varepsilon X_4\},
\end{split}
\label{subalg2}
\end{equation}
where $a\in \mathbb{R}$, $a\neq 0$ and $\varepsilon=\pm 1$. We obtain solutions of the equations by symmetry reduction using the different subalgebras in the list (\ref{subalg2}).
{\bf 8.}\hspace{5mm} For the subalgebra $\{X_1\}$, we obtain the explicit stationary solution
\begin{equation}
\begin{split}
u_0=\left(\dfrac{(p+1)(Kx+C)}{p}\right)^{\frac{p}{p+1}}-q,\qquad u_1=B_1(Kx+C)^{\frac{\sqrt{p}\lambda_1}{p+1}}+B_2(Kx+C)^{\frac{\sqrt{p}\lambda_2}{p+1}},
\end{split}
\label{solution8}
\end{equation}
where
\begin{equation}
\lambda_{1,2}=\dfrac{\dfrac{p-1}{\sqrt{p}}\pm\sqrt{\dfrac{(1-p)^2}{p}+4}}{2},
\label{solution8A}
\end{equation}
and $B_1$, $B_2$, $K$ and $C$ are constants. This solution involves a combination of powers of $x$.
{\bf 9.}\hspace{5mm} For the subalgebra $\{X_2\}$, we obtain the trivial linear (in $t$) solution
\begin{equation}
u_0=C_1t+C_2,\qquad u_1=C_3t+C_4,
\label{solution9}
\end{equation}
where $C_1$, $C_2$, $C_3$ and $C_4$ are constants.
{\bf 10.}\hspace{5mm} For the subalgebra $\{X_4\}$, we obtain
\begin{equation}
\begin{split}
u_0=x^{2p}F(t)-q,\qquad u_1=x^{2s}G(t),
\end{split}
\label{solution10}
\end{equation}
where
\begin{equation}
F=\left(\varepsilon\sqrt{f_0}(t-t_0)\right)^{-2p},
\label{solution10e}
\end{equation}
and $G$ satisfies the linear second-order ODE
\begin{equation}
G_{tt}-f_0(4s^2+6s+2)(\varepsilon\sqrt{f_0}(t-t_0))^{-2}G+2\lambda_0\varepsilon\sqrt{f_0}p(4s^2+6s+2)(\varepsilon\sqrt{f_0}(t-t_0))^{-2s-3}=0.
\label{solution10f}
\end{equation}
The function $F$ involves damping if $p>0$. In the specific case where $\lambda_0=0$, $t_0=0$ and either $s=-2$ or $s=\frac{1}{2}$, we obtain $G=C_1t^3+C_2t^{-2}$, so the solution is
\begin{equation}
u_0=x^{2p}\left(\varepsilon\sqrt{f_0}(t-t_0)\right)^{-2p}-q,\qquad u_1=x^{2s}\left(C_1t^3+C_2t^{-2}\right).
\end{equation}
In the specific case where $\lambda_0=0$, $t_0=0$ and either $s=1$ or $s=-\frac{5}{2}$, we obtain $G=C_1t^4+C_2t^{-3}$, so the solution is
\begin{equation}
u_0=x^{2p}\left(\varepsilon\sqrt{f_0}(t-t_0)\right)^{-2p}-q,\qquad u_1=x^{2s}\left(C_1t^4+C_2t^{-3}\right).
\end{equation}
These solutions involve combinations of powers of $x$ and $t$.
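With $\lambda_0=0$ and $t_0=0$, equation (\ref{solution10f}) reduces to the Euler equation $G_{tt}-kG/t^2=0$ with $k=4s^2+6s+2$ (the factor $f_0$ cancels against $(\varepsilon\sqrt{f_0}\,t)^{-2}$). A SymPy sketch confirming the values of $k$ and the two explicit $G$ profiles above:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s, C1, C2 = sp.symbols('s C1 C2')

# coefficient 4s^2 + 6s + 2 appearing in (solution10f)
k = 4*s**2 + 6*s + 2
k_at = {sp.Integer(-2): 6, sp.Rational(1, 2): 6,
        sp.Integer(1): 12, sp.Rational(-5, 2): 12}
ok_k = all(k.subs(s, sv) == kv for sv, kv in k_at.items())

# Euler equation G'' - k G / t^2 = 0 for k = 6 and k = 12
G6 = C1*t**3 + C2*t**-2
G12 = C1*t**4 + C2*t**-3
ok6 = sp.simplify(sp.diff(G6, t, 2) - 6*G6/t**2) == 0
ok12 = sp.simplify(sp.diff(G12, t, 2) - 12*G12/t**2) == 0
```

The exponents $\{3,-2\}$ and $\{4,-3\}$ are simply the roots of the indicial equations $r(r-1)=6$ and $r(r-1)=12$.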
{\bf 11.}\hspace{5mm} For the subalgebra $\{X_3+aX_4\}$, we get
\begin{equation}
\begin{split}
u_0=t^{2ap}F(\xi)-q,\qquad u_1=t^{2as-1}G(\xi),
\end{split}
\label{solution11}
\end{equation}
where the self-similar invariant has the form $\xi=xt^{-a-1}$, with $F=\dfrac{(a+1)^{2p}}{f_0^p}\xi^{2p}$ and $G=R\xi^{2s}$, where $R$ is a constant. Here, the following conditions have to be satisfied:
\begin{equation}
\begin{split}
&\mbox{(1) }\quad a(a+2)(2p+1)=0\\
&\mbox{(2) }\quad -2a(2as^2+4s^2+3as+6s+a+2)+4\lambda_0p(1+3s+2s^2)=0
\end{split}
\label{conditions11AE}
\end{equation}
Equation (\ref{solution11}) leads to a power function solution.
{\bf 12.}\hspace{5mm} For the subalgebra $\{X_4-X_3+\varepsilon X_2\}$, we get
\begin{equation}
u_0=t^{-2p}F(\xi)-q,\qquad u_1=t^{-2s-1}G(\xi),
\label{solution12}
\end{equation}
with symmetry variable $\xi=x+\varepsilon\ln{t}$. Here, $F$ satisfies the equation
\begin{equation}
\left(1-f_0F^{\frac{1}{p}}\right)F_{\xi\xi}-\dfrac{f_0}{p}F^{\frac{1}{p}-1}(F_{\xi})^2-\varepsilon(4p+1)F_{\xi}+2p(2p+1)F=0.
\label{solution12A}
\end{equation}
In the case where $p=-\frac{1}{2}$, we obtain the implicitly-defined function
\begin{equation}
-\varepsilon\ln{(A-\varepsilon F)}+\dfrac{f_0}{A^2}\left(\dfrac{A-\varepsilon F}{F}+\varepsilon\ln{\left(\dfrac{A-\varepsilon F}{F}\right)}\right)=\xi-\xi_0.
\label{solution12B}
\end{equation}
The equation for $G(\xi)$ in this case becomes
\begin{equation}
\begin{split}
&\left(1-f_0F^{-2}\right)G_{\xi\xi}+\left(-4s\varepsilon-3\varepsilon+4f_0F^{-3}F_{\xi}\right)G_{\xi}\\ &+\left((2s+1)(2s+2)+2f_0F^{-3}F_{\xi\xi}-6f_0F^{-4}(F_{\xi})^2\right)G\\ &-\lambda_0(2s+3)(2s+4)F^{-2s-5}(F_{\xi})^2\left[F+\varepsilon F_{\xi}\right]+\lambda_0(2s+3)F^{-2s-4}F_{\xi\xi}\left[F+\varepsilon F_{\xi}\right]\\ &+2\lambda_0(2s+3)F^{-2s-4}F_{\xi}\left[F_{\xi}+\varepsilon F_{\xi\xi}\right]-\lambda_0F^{-2s-3}\left[F_{\xi\xi}+\varepsilon F_{\xi\xi\xi}\right]=0.
\end{split}
\label{solution12C}
\end{equation}
If we further suppose that $\lambda_0=0$ and $f_0=0$, we obtain the solution
\begin{equation}
u_0=\varepsilon At^{-2p}-\varepsilon t^{-2p-1}e^{-\varepsilon x}e^{\varepsilon\xi_0}-q,\qquad u_1=C_1e^{\lambda_1x}t^{\varepsilon\lambda_1-2s-1}+C_2e^{\lambda_2x}t^{\varepsilon\lambda_2-2s-1},
\end{equation}
where
\begin{equation}
\lambda_1=\dfrac{4\varepsilon s+3\varepsilon+1}{2},\qquad \lambda_2=\dfrac{4\varepsilon s+3\varepsilon-1}{2}.
\end{equation}
{\bf 13.}\hspace{5mm} For the subalgebra $\{X_1+\varepsilon X_2\}$, we have the travelling wave solution
\begin{equation}
u_0=u_0(\xi),\qquad u_1=u_1(\xi),
\label{solution13}
\end{equation}
where $\xi=x-\varepsilon t$ is the symmetry variable. Here, $u_0$ satisfies
\begin{equation}
\left(1-f_0(u_0+q)^{\frac{1}{p}}\right)u_{0,\xi\xi}=\dfrac{f_0}{p}(u_0+q)^{\frac{1}{p}-1}(u_{0,\xi})^2,
\label{solution13A}
\end{equation}
and $u_1$ satisfies
\begin{equation}
\begin{split}
&\left(1-f_0(u_0+q)^{\frac{1}{p}}\right)u_{1,\xi\xi}-\dfrac{2f_0}{p}(u_0+q)^{\frac{1}{p}-1}u_{0,\xi}u_{1,\xi}\\ &-\dfrac{f_0}{p}\left[(u_0+q)^{\frac{1}{p}-1}u_{0,\xi\xi}+\left(\dfrac{1}{p}-1\right)(u_0+q)^{\frac{1}{p}-2}(u_{0,\xi})^2\right]u_1\\ &+\varepsilon\lambda_0\bigg{[}\left(\dfrac{1+s}{p}-1\right)\left(\dfrac{1+s}{p}-2\right)(u_0+q)^{\frac{1+s}{p}-3}(u_{0,\xi})^3\\ &+3\left(\dfrac{1+s}{p}-1\right)(u_0+q)^{\frac{1+s}{p}-2}u_{0,\xi}u_{0,\xi\xi}+(u_0+q)^{\frac{1+s}{p}-1}u_{0,\xi\xi\xi}\bigg{]}=0.
\end{split}
\label{solution13B}
\end{equation}
In the case where $\lambda_0=0$, we obtain the explicit solution
\begin{equation}
u_0=\dfrac{1}{(f_0)^p}-q,\quad\mbox{while}\quad u_1=u_1(\xi)\quad\mbox{is an arbitrary function of }\xi.
\end{equation}
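The reason $u_1$ remains arbitrary is that the constant value $u_0=f_0^{-p}-q$ makes the wave-speed factor $1-f_0(u_0+q)^{\frac{1}{p}}$ vanish, while every other term of (\ref{solution13B}) with $\lambda_0=0$ contains a derivative of $u_0$. A SymPy sketch of the first point:

```python
import sympy as sp

f0 = sp.symbols('f0', positive=True)
p = sp.symbols('p', real=True, nonzero=True)
q = sp.symbols('q', real=True)

# constant travelling-wave profile u0 = f0^{-p} - q
u0 = f0**(-p) - q
# factor multiplying u_{1,xi xi} in (solution13B)
factor = 1 - f0*(u0 + q)**(1/p)
ok = sp.simplify(factor) == 0
```

Consequently every term of (\ref{solution13B}) vanishes identically, regardless of the choice of $u_1(\xi)$.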
{\bf 14.}\hspace{5mm} For the subalgebra $\{X_1+\varepsilon X_4\}$, we obtain
\begin{equation}
u_0=x^{2p}F(\xi)-q,\qquad u_1=x^{2s}G(\xi),
\label{solution14}
\end{equation}
where the symmetry variable is $\xi=\ln{x}-\varepsilon t$ and $F$ satisfies the equation
\begin{equation}
\left(1-f_0F^{\frac{1}{p}}\right)F_{\xi\xi}-\dfrac{f_0}{p}F^{\frac{1}{p}-1}(F_{\xi})^2-f_0F^{\frac{1}{p}}(4p+3)F_{\xi}-2p(2p+1)f_0F^{\frac{1}{p}+1}=0,
\label{solution14A}
\end{equation}
and $G$ satisfies
\begin{equation}
\begin{split}
& \left(1-f_0F^{\frac{1}{p}}\right)G_{\xi\xi}-\left(f_0(4s-1)F^{\frac{1}{p}}+\dfrac{2f_0}{p}F^{\frac{1}{p}-1}\left(2pF+F_{\xi}\right)\right)G_{\xi}\\ & -\Bigg{(}2s(2s-1)f_0F^{\frac{1}{p}}+\dfrac{4sf_0}{p}F^{\frac{1}{p}-1}\left(2pF+F_{\xi}\right)\\ & +\dfrac{f_0}{p}F^{\frac{1}{p}-1}\left[2p(2p-1)F+(4p-1)F_{\xi}+F_{\xi\xi}\right]\\ & +\dfrac{f_0}{p}\left(\dfrac{1}{p}-1\right)F^{\frac{1}{p}-2}\left[4p^2F^2+4pFF_{\xi}+(F_{\xi})^2\right]\Bigg{)}G\\ & +\varepsilon\lambda_0\Bigg{[}\left(\dfrac{1+s}{p}-1\right)\left(\dfrac{1+s}{p}-2\right)F^{\frac{1+s}{p}-3}F_{\xi}\left(4p^2F^2+4pFF_{\xi}+(F_{\xi})^2\right)\\ & +\left(\dfrac{1+s}{p}-1\right)F^{\frac{1+s}{p}-2}F_{\xi}\left(2p(2p-1)F+(4p-1)F_{\xi}+F_{\xi\xi}\right)\\ & +2\left(\dfrac{1+s}{p}-1\right)F^{\frac{1+s}{p}-2}\left(2pF+F_{\xi}\right)\left(2pF_{\xi}+F_{\xi\xi}\right)\\ & +F^{\frac{1+s}{p}-1}\left(2p(2p-1)F_{\xi}+(4p-1)F_{\xi\xi}+F_{\xi\xi\xi}\right)\Bigg{]}=0.
\end{split}
\end{equation}
In the case where $p=-\frac{1}{2}$ and $\lambda_0=0$, we obtain the solution
\begin{equation}
u_0=x^{2p}\sqrt{f_0}-q,\qquad\qquad u_1=K_0x^{2s-r}e^{\varepsilon rt},\quad\mbox{where}\quad r=\dfrac{4s^2+6s+2}{4s+3}.
\end{equation}
\subsection{The case where $f(u_0)=f_0(u_0+q)^{-\frac{4}{3}}$ and $\lambda(u_0)=\lambda_0(u_0+q)^{-\frac{4}{3}}$}
We now consider the case where $f(u_0)=f_0(u_0+q)^{-\frac{4}{3}}$ and $\lambda(u_0)=\lambda_0(u_0+q)^{-\frac{4}{3}}$, where $f_0$, $\lambda_0$ and $q$ are constants. This corresponds to the special instance of the previous case (in subsection 2.2) in which $p=-\frac{3}{4}$ and $s=-\frac{3}{4}$. For this case, equations (\ref{bigeq1}) and (\ref{bigeq2}) become
\begin{equation}
u_{0,tt}-f_0(u_0+q)^{-\frac{4}{3}}u_{0,xx}+\dfrac{4}{3}f_0(u_0+q)^{-\frac{7}{3}}(u_{0,x})^2=0,
\label{bigeq7}
\end{equation}
and
\begin{equation}
\begin{split}
& u_{1,tt}-f_0(u_0+q)^{-\frac{4}{3}}u_{1,xx}+\dfrac{4}{3}f_0(u_0+q)^{-\frac{7}{3}}u_{0,xx}u_1+\dfrac{8}{3}f_0(u_0+q)^{-\frac{7}{3}}u_{0,x}u_{1,x}\\ &-\dfrac{28}{9}f_0(u_0+q)^{-\frac{10}{3}}(u_{0,x})^2u_1-\dfrac{28}{9}\lambda_0(u_0+q)^{-\frac{10}{3}}(u_{0,x})^2u_{0,t}\\ &+\dfrac{4}{3}\lambda_0(u_0+q)^{-\frac{7}{3}}u_{0,xx}u_{0,t}+\dfrac{8}{3}\lambda_0(u_0+q)^{-\frac{7}{3}}u_{0,x}u_{0,xt}-\lambda_0(u_0+q)^{-\frac{4}{3}}u_{0,xxt}=0.
\end{split}
\label{bigeq8}
\end{equation}
The Lie algebra of infinitesimal symmetries of equations (\ref{bigeq7}) and (\ref{bigeq8}) is spanned by the five generators \cite{Ruggieri}
\begin{equation}
\begin{split}
&X_1=\partial_t,\qquad X_2=\partial_x,\qquad X_3=t\partial_t+x\partial_x-u_1\partial_{u_1},\\ &X_4=x\partial_x-\dfrac{3}{2}(u_0+q)\partial_{u_0}-\dfrac{3}{2}u_1\partial_{u_1},\qquad X_5=x^2\partial_x-3x(u_0+q)\partial_{u_0}-3xu_1\partial_{u_1}.
\end{split}
\label{gen3}
\end{equation}
This Lie algebra is the direct sum
\begin{equation}
\{X_3-X_4,X_1\}\oplus\{X_4,X_2,X_5\},
\end{equation}
where $\{X_4,X_2,X_5\}$ is isomorphic to the three-dimensional algebra $A_{3,8}=su(1,1)$ given in Table I of \cite{Patera}. The classification of $A_{3,8}$ was found in \cite{Patera} and, in this paper, the Goursat method of twisted and non-twisted subalgebras is used to obtain the list of conjugacy classes for the complete Lie symmetry algebra. The one-dimensional subalgebras of the Lie algebra can be classified as follows:
\begin{equation}
\begin{split}
&\{X_3-X_4\}, \qquad \{X_1\}, \qquad \{X_2\}, \qquad \{X_4\}, \qquad \{X_2-X_5\},\\ & \{X_3-X_4+\varepsilon X_2\}, \qquad \{X_3+aX_4\}, \qquad \{X_3-X_4+a(X_2-X_5)\},\\ & \{X_1+\varepsilon X_2\}, \qquad \{X_1+\varepsilon X_4\}, \qquad \{X_1+\varepsilon(X_2-X_5)\},
\end{split}
\label{subalg3}
\end{equation}
where $a\in \mathbb{R}$, $a\neq 0$ and $\varepsilon=\pm 1$. We obtain the following solutions through symmetry reduction.
{\bf 15.}\hspace{5mm} For the subalgebra $\{X_3-X_4\}$, we obtain the power function solution
\begin{equation}
\begin{split}
u_0=\left(\dfrac{t}{x}\right)^{\frac{3}{2}}-q,\qquad u_1=\dfrac{t^{\frac{1}{2}}}{x^{\frac{3}{2}}},
\end{split}
\label{solution15}
\end{equation}
for the case where $f_0=1$ and $\lambda_0=0$. A second solution, obtained by making the hypothesis $F=C_0x^a$, is
\begin{equation}
u_0=f_0^{\frac{3}{4}}\left(\dfrac{t}{x}\right)^{\frac{3}{2}}-q,\qquad u_1=t^{\frac{1}{2}}\left[C_1x^{\frac{-3+\sqrt{\frac{140}{3}}}{2}}+C_2x^{\frac{-3-\sqrt{\frac{140}{3}}}{2}}-\dfrac{69}{280f_0^{\frac{1}{4}}}x^{-\frac{3}{2}}\right],
\label{solution15A}
\end{equation}
which constitutes a combination of monomial power functions.
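The first solution of (\ref{solution15}) can be substituted directly into (\ref{bigeq7}) and (\ref{bigeq8}); a SymPy sketch with $f_0=1$ and $\lambda_0=0$ (so the dissipative terms of (\ref{bigeq8}) drop):

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
q = sp.symbols('q')

# solution (solution15) with f0 = 1, lambda0 = 0
w = (t/x)**sp.Rational(3, 2)        # w = u0 + q
u0 = w - q
u1 = t**sp.Rational(1, 2)/x**sp.Rational(3, 2)

# residual of (bigeq7)
eq7 = (sp.diff(u0, t, 2) - w**sp.Rational(-4, 3)*sp.diff(u0, x, 2)
       + sp.Rational(4, 3)*w**sp.Rational(-7, 3)*sp.diff(u0, x)**2)

# residual of (bigeq8) with lambda0 = 0
eq8 = (sp.diff(u1, t, 2) - w**sp.Rational(-4, 3)*sp.diff(u1, x, 2)
       + sp.Rational(4, 3)*w**sp.Rational(-7, 3)*sp.diff(u0, x, 2)*u1
       + sp.Rational(8, 3)*w**sp.Rational(-7, 3)*sp.diff(u0, x)*sp.diff(u1, x)
       - sp.Rational(28, 9)*w**sp.Rational(-10, 3)*sp.diff(u0, x)**2*u1)

ok7 = sp.simplify(eq7) == 0
ok8 = sp.simplify(eq8) == 0
```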
{\bf 16.}\hspace{5mm} For the subalgebra $\{X_4\}$, we get the center wave solution
\begin{equation}
\begin{split}
&u_0=\dfrac{f_0^{\frac{3}{4}}t^{\frac{3}{2}}}{x^{\frac{3}{2}}}-q,\\
&u_1=\dfrac{C_1t^{\frac{1}{2}}}{x^{\frac{3}{2}}}+\dfrac{C_2t^{\frac{1}{2}}\ln{t}}{x^{\frac{3}{2}}}+\dfrac{3\lambda_0}{32f_0^{\frac{1}{4}}}\dfrac{t^{\frac{5}{2}}}{x^{\frac{3}{2}}}(2\ln{t}-1)-\dfrac{3\lambda_0}{16f_0^{\frac{1}{4}}}\dfrac{t^{\frac{1}{2}}}{x^{\frac{3}{2}}}\ln{t}.
\end{split}
\label{solution16}
\end{equation}
{\bf 17.}\hspace{5mm} For the subalgebra $\{X_2\}$, we obtain the trivial linear (in $t$) solution
\begin{equation}
u_0=C_1t+C_2,\qquad u_1=C_3t+C_4,
\label{solution17}
\end{equation}
where $C_1$, $C_2$, $C_3$ and $C_4$ are constants.
{\bf 18.}\hspace{5mm} For the subalgebra $\{X_1\}$, we have $u_0=u_0(x)$ and $u_1=u_1(x)$ (i.e. $u_0$ and $u_1$ are functions of $x$ only), where $u_0$ satisfies the equation
\begin{equation}
u_{0,xx}=\dfrac{4(u_{0,x})^2}{3(u_0+q)},
\label{solution18}
\end{equation}
and $u_1$ satisfies the equation
\begin{equation}
\begin{split}
u_{1,xx}=\dfrac{4}{3(u_0+q)}u_{0,xx}u_1+\dfrac{8}{3(u_0+q)}u_{0,x}u_{1,x}-\dfrac{28}{9(u_0+q)^2}(u_{0,x})^2u_1.
\end{split}
\label{solution18A}
\end{equation}
For the specific case when $q=0$, a first integration of equation (\ref{solution18}) gives $u_{0,x}=ku_0^{\frac{4}{3}}$, so that $u_0$ is obtained from the quadrature
\begin{equation}
\int u_0^{-\frac{4}{3}}du_0=-3u_0^{-\frac{1}{3}}=k(x-x_0),
\label{solution18B}
\end{equation}
that is, $u_0=-27k^{-3}(x-x_0)^{-3}$, and equation (\ref{solution18A}) becomes the second-order ordinary differential equation
\begin{equation}
u_{1,xx}=\dfrac{4k}{3}u_0^{\frac{1}{3}}\left(2u_{1,x}-ku_0^{\frac{1}{3}}u_1\right),
\label{solution18C}
\end{equation}
where $k$ is a constant.
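With $q=0$, equation (\ref{solution18}) admits the power-law profile $u_0=A(x-x_0)^{-3}$, and substituting it into (\ref{solution18A}) produces an Euler equation for $u_1$ with exponents $-3$ and $-4$. A SymPy sketch verifying both claims (illustrative, not from the paper):

```python
import sympy as sp

x = sp.symbols('x')
A, x0, B1, B2 = sp.symbols('A x0 B1 B2')

rho = x - x0
u0 = A*rho**-3                    # power-law profile, q = 0
u1 = B1*rho**-3 + B2*rho**-4      # Euler-equation solution

# residual of (solution18) with q = 0
eq18 = sp.diff(u0, x, 2) - 4*sp.diff(u0, x)**2/(3*u0)

# residual of (solution18A) with q = 0
eq18A = (sp.diff(u1, x, 2)
         - 4*sp.diff(u0, x, 2)*u1/(3*u0)
         - 8*sp.diff(u0, x)*sp.diff(u1, x)/(3*u0)
         + 28*sp.diff(u0, x)**2*u1/(9*u0**2))

ok18 = sp.simplify(eq18) == 0
ok18A = sp.simplify(eq18A) == 0
```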
{\bf 19.}\hspace{5mm} For the subalgebra $\{X_2-X_5\}$, we get
\begin{equation}
\begin{split}
u_0=(1-x^2)^{-\frac{3}{2}}F(t)-q,\qquad u_1=(1-x^2)^{-\frac{3}{2}}G(t),
\end{split}
\label{solution19}
\end{equation}
where the functions $F$ and $G$ of $t$ satisfy the equations
\begin{equation}
F_{tt}-3f_0F^{-\frac{1}{3}}=0,
\label{solution19A}
\end{equation}
and
\begin{equation}
G_{tt}+f_0F^{-\frac{4}{3}}G+\lambda_0 F^{-\frac{4}{3}}F_t=0.
\label{solution19B}
\end{equation}
In the case where $\lambda_0=0$, looking for solutions of the type $F=At^a$, $G=Bt^b$, we obtain the solution
\begin{equation}
u_0=(1-x^2)^{-\frac{3}{2}}(4f_0)^{3/4}t^{3/2}-q,\qquad u_1=(1-x^2)^{-\frac{3}{2}}Bt^{1/2},\qquad \mbox{where }B\mbox{ is a constant}.
\end{equation}
This solution involves a separation of the variables $x$ and $t$.
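The separated profiles above can be checked against (\ref{solution19A}) and (\ref{solution19B}) with $\lambda_0=0$; a SymPy sketch:

```python
import sympy as sp

t, f0, B = sp.symbols('t f0 B', positive=True)

# F = (4 f0)^{3/4} t^{3/2},  G = B t^{1/2}
F = (4*f0)**sp.Rational(3, 4)*t**sp.Rational(3, 2)
G = B*t**sp.Rational(1, 2)

# residuals of (solution19A) and (solution19B) with lambda0 = 0
okF = sp.simplify(sp.diff(F, t, 2) - 3*f0*F**sp.Rational(-1, 3)) == 0
okG = sp.simplify(sp.diff(G, t, 2) + f0*F**sp.Rational(-4, 3)*G) == 0
```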
{\bf 20.}\hspace{5mm} For the subalgebra $\{X_3-X_4+\varepsilon X_2\}$, we get
\begin{equation}
u_0=t^{\frac{3}{2}}F(\xi)-q,\qquad u_1=t^{\frac{1}{2}}G(\xi),
\label{solution20}
\end{equation}
where the functions $F$ and $G$ of the symmetry variable $\xi=x-\varepsilon\ln{t}$ satisfy the equations
\begin{equation}
\left(1-f_0F^{-\frac{4}{3}}\right)F_{\xi\xi}+\dfrac{4}{3}f_0F^{-\frac{7}{3}}(F_{\xi})^2-2\varepsilon F_{\xi}+\dfrac{3}{4}F=0,
\label{solution20A}
\end{equation}
and
\begin{equation}
\begin{split}
&\left(1-f_0F^{-\frac{4}{3}}\right)G_{\xi\xi}+\dfrac{8}{3}f_0F^{-\frac{7}{3}}F_{\xi}G_{\xi}\\ &+\left(\dfrac{4}{3}f_0F^{-\frac{7}{3}}F_{\xi\xi}-\dfrac{28}{9}f_0F^{-\frac{10}{3}}(F_{\xi})^2-\dfrac{1}{4}\right)G\\ &+\lambda_0F^{-\frac{10}{3}}\bigg{(}-\dfrac{2}{3}F(F_{\xi})^2+\dfrac{28}{9}\varepsilon(F_{\xi})^3+\dfrac{1}{2}F^2F_{\xi\xi}-4\varepsilon FF_{\xi}F_{\xi\xi}+\varepsilon F^2F_{\xi\xi\xi}\bigg{)}=0.
\end{split}
\label{solution20B}
\end{equation}
Here, $F(\xi)$ is the function such that
\begin{equation}
\left(\eta+2\varepsilon F\right)\eta'=1,
\end{equation}
where $F$ and $\eta$ obey the constraints
\begin{equation}
\eta=\eta(\zeta)=F_{\xi}\left(1-f_0F^{-4/3}\right)-2\varepsilon F,\qquad\mbox{and}\qquad \zeta=-\frac{3}{8}F^2+\frac{9}{8}f_0F^{2/3}.
\end{equation}
{\bf 21.}\hspace{5mm} For the subalgebra $\{X_3+aX_4\}$, we obtain
\begin{equation}
\begin{split}
u_0=t^{-\frac{3a}{2}}F(\xi)-q,\qquad u_1=t^{-\frac{3a+2}{2}}G(\xi),
\end{split}
\label{solution21}
\end{equation}
where $F$ and $G$ are functions of the self-similar symmetry variable $\xi=xt^{-a-1}$. Here, $F$ satisfies the equation
\begin{equation}
\begin{split}
&\left((a+1)^2\xi^2-f_0F^{-\frac{4}{3}}\right)F_{\xi\xi}+\dfrac{4}{3}f_0F^{-\frac{7}{3}}(F_{\xi})^2+2(a+1)(2a+1)\xi F_{\xi}\\ &+\dfrac{3a(3a+2)}{4}F=0.
\end{split}
\label{solution21A}
\end{equation}
In the case where $\lambda_0=0$ and either $a=0$ or $a=-2$, the function
\begin{equation}
F=f_0^{\frac{3}{4}}(a+1)^{-\frac{3}{2}}\xi^{-\frac{3}{2}}
\label{solution21AAA}
\end{equation}
is a solution with damping of equation (\ref{solution21A}). Substituting the function (\ref{solution21AAA}) and any arbitrary function $G(\xi)$ of the symmetry variable $\xi=xt^{-a-1}$ into (\ref{solution21}), we obtain a solution of the system consisting of equations (\ref{bigeq7}) and (\ref{bigeq8}) of the form
\begin{equation}
u_0=f_0^{\frac{3}{4}}(a+1)^{-\frac{3}{2}}x^{-\frac{3}{2}}t^{\frac{3}{2}}-q,\qquad u_1=t^{-\frac{3a+2}{2}}G(\xi),
\end{equation}
where $G$ is an arbitrary function of $\xi=xt^{-a-1}$.
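That (\ref{solution21AAA}) solves (\ref{solution21A}) for $a=0$ can be checked numerically. The following sketch evaluates the left-hand side of (\ref{solution21A}) on this function; the value $f_0=2$ and the test points are arbitrary choices made here for illustration, not taken from the text.

```python
# Numerical sketch: plug F(xi) = f0^(3/4) * (a+1)^(-3/2) * xi^(-3/2)
# into the left-hand side of (solution21A) for a = 0 and check that the
# residual vanishes.  f0 and the sample points are illustrative choices.

f0 = 2.0
a = 0.0

def F(xi):
    return f0 ** 0.75 * (a + 1) ** -1.5 * xi ** -1.5

def dF(xi):
    return -1.5 * f0 ** 0.75 * (a + 1) ** -1.5 * xi ** -2.5

def d2F(xi):
    return 3.75 * f0 ** 0.75 * (a + 1) ** -1.5 * xi ** -3.5

def residual(xi):
    # left-hand side of (solution21A)
    return (((a + 1) ** 2 * xi ** 2 - f0 * F(xi) ** (-4.0 / 3.0)) * d2F(xi)
            + (4.0 / 3.0) * f0 * F(xi) ** (-7.0 / 3.0) * dF(xi) ** 2
            + 2 * (a + 1) * (2 * a + 1) * xi * dF(xi)
            + (3 * a * (3 * a + 2) / 4.0) * F(xi))

print(max(abs(residual(xi)) for xi in (0.5, 1.0, 2.0, 5.0)))
```

The residual is zero up to floating-point rounding at every test point.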
{\bf 22.}\hspace{5mm} For the subalgebra $\{X_1+\varepsilon X_2\}$, we obtain the travelling wave solution
\begin{equation}
u_0=u_0(\xi),\qquad u_1=u_1(\xi),
\label{solution22}
\end{equation}
where we have $\xi=x-\varepsilon t$. Here, $u_0$ satisfies the equation
\begin{equation}
\left(1-f_0(u_0+q)^{-\frac{4}{3}}\right)u_{0,\xi\xi}+\dfrac{4}{3}f_0(u_0+q)^{-\frac{7}{3}}(u_{0,\xi})^2=0,
\label{solution22A}
\end{equation}
and $u_1$ satisfies
\begin{equation}
\begin{split}
&\left(1-f_0(u_0+q)^{-\frac{4}{3}}\right)u_{1,\xi\xi}+\dfrac{8}{3}f_0(u_0+q)^{-\frac{7}{3}}u_{0,\xi}u_{1,\xi}\\ &+f_0\left(\dfrac{4}{3}(u_0+q)^{-\frac{7}{3}}u_{0,\xi\xi}-\dfrac{28}{9}(u_0+q)^{-\frac{10}{3}}(u_{0,\xi})^2\right)u_1\\ &+\lambda_0\bigg{(}\dfrac{28}{9}\varepsilon(u_0+q)^{-\frac{10}{3}}(u_{0,\xi})^3-4\varepsilon(u_0+q)^{-\frac{7}{3}}u_{0,\xi}u_{0,\xi\xi}+\varepsilon(u_0+q)^{-\frac{4}{3}}u_{0,\xi\xi\xi}\bigg{)}=0.
\end{split}
\label{solution22B}
\end{equation}
Equation (\ref{solution22A}) states that the quantity $\left(1-f_0(u_0+q)^{-\frac{4}{3}}\right)u_{0,\xi}$ is a constant, $c$ say, so it can be solved implicitly through the quadrature
\begin{equation}
\int\left(1-f_0(u_0+q)^{-\frac{4}{3}}\right)du_0=c\left(\xi-\xi_0\right).
\end{equation}
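The structure of equation (\ref{solution22A}) can be probed numerically: its left-hand side is the total derivative with respect to $\xi$ of $\big(1-f_0(u_0+q)^{-4/3}\big)u_{0,\xi}$, so this combination is conserved along solutions. A sketch with a hand-rolled RK4 integrator follows; the parameters and initial data are illustrative choices made here, not values from the text.

```python
# Numerical sketch: integrate (solution22A) with RK4 and check that
#   c(xi) = (1 - f0*(u0+q)^(-4/3)) * u0_xi
# stays constant.  f0, q and the initial data are illustrative choices.

f0, q = 1.0, 0.0

def rhs(state):
    u, p = state  # p = u0_xi
    w = u + q
    d2u = -(4.0 / 3.0) * f0 * w ** (-7.0 / 3.0) * p ** 2 \
          / (1.0 - f0 * w ** (-4.0 / 3.0))
    return (p, d2u)

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def c_of(state):
    u, p = state
    return (1.0 - f0 * (u + q) ** (-4.0 / 3.0)) * p

state = (3.0, 1.0)  # chosen so that 1 - f0*(u0+q)^(-4/3) != 0
c0 = c_of(state)
for _ in range(1000):
    state = rk4_step(state, 1e-3)
drift = abs(c_of(state) - c0)
print(drift)
```

The drift of the conserved quantity over the whole integration interval is at the level of the integrator's rounding error.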
{\bf 23.}\hspace{5mm} For the subalgebra $\{X_1+\varepsilon X_4\}$, we get
\begin{equation}
u_0=x^{-\frac{3}{2}}F(\xi)-q,\qquad u_1=x^{-\frac{3}{2}}G(\xi),
\label{solution23}
\end{equation}
where we have the symmetry variable $\xi=t-\varepsilon\ln{x}$, $x>0$. Here, $F$ and $G$ satisfy the equations
\begin{equation}
\left(1-f_0F^{-\frac{4}{3}}\right)F_{\xi\xi}+\dfrac{4}{3}f_0F^{-\frac{7}{3}}(F_{\xi})^2-\dfrac{3}{4}f_0F^{-\frac{1}{3}}=0,
\label{solution23A}
\end{equation}
and
\begin{equation}
\begin{split}
&\left(1-f_0F^{-\frac{4}{3}}\right)G_{\xi\xi}+\dfrac{8}{3}f_0F^{-\frac{7}{3}}F_{\xi}G_{\xi}\\ &+\left(\dfrac{4}{3}f_0F^{-\frac{7}{3}}F_{\xi\xi}-\dfrac{28}{9}f_0F^{-\frac{10}{3}}(F_{\xi})^2+\dfrac{1}{4}f_0F^{-\frac{4}{3}}\right)G\\ &+\lambda_0\bigg{(}\dfrac{1}{4}F^{-\frac{4}{3}}F_{\xi}-\dfrac{28}{9}F^{-\frac{10}{3}}(F_{\xi})^3+4F^{-\frac{7}{3}}F_{\xi}F_{\xi\xi}-F^{-\frac{4}{3}}F_{\xi\xi\xi}\bigg{)}=0,
\end{split}
\label{solution23B}
\end{equation}
respectively.
Equation (\ref{solution23A}) can be solved implicitly through the quadrature
\begin{equation}
\int\dfrac{f_0F^{-\frac{4}{3}}-1}{\sqrt{\frac{9}{4}(f_0)^2F^{-\frac{2}{3}}+\frac{9}{4}f_0F^{\frac{2}{3}}+K}}dF=\xi-\xi_0.
\end{equation}
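The quadrature reflects a first integral of (\ref{solution23A}): writing $G=(1-f_0F^{-4/3})F_\xi$, the equation gives $G_\xi=\tfrac{3}{4}f_0F^{-1/3}$, so $K=G^2-\tfrac{9}{4}f_0^2F^{-2/3}-\tfrac{9}{4}f_0F^{2/3}$ is constant along solutions, which is the radicand above. A pointwise numerical sketch (the value $f_0=1$ and the sample pairs $(F,F_\xi)$ are arbitrary choices made here):

```python
# Pointwise sketch: eliminate F_xixi using (solution23A) and check that
#   K = G^2 - (9/4)*f0^2*F^(-2/3) - (9/4)*f0*F^(2/3),  G = (1-f0*F^(-4/3))*F_xi,
# has zero xi-derivative along solutions.  Test data are arbitrary.

f0 = 1.0

def dK_dxi(F, P):
    # P = F_xi; F_xixi solved for from (solution23A)
    Fxx = ((3.0 / 4.0) * f0 * F ** (-1.0 / 3.0)
           - (4.0 / 3.0) * f0 * F ** (-7.0 / 3.0) * P ** 2) \
          / (1.0 - f0 * F ** (-4.0 / 3.0))
    G = (1.0 - f0 * F ** (-4.0 / 3.0)) * P
    G_xi = (1.0 - f0 * F ** (-4.0 / 3.0)) * Fxx \
           + (4.0 / 3.0) * f0 * F ** (-7.0 / 3.0) * P ** 2
    return (2.0 * G * G_xi
            + 1.5 * f0 ** 2 * F ** (-5.0 / 3.0) * P
            - 1.5 * f0 * F ** (-1.0 / 3.0) * P)

print(max(abs(dK_dxi(F, P)) for F in (2.0, 3.0, 5.0)
                            for P in (-1.0, 0.5, 2.0)))
```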
{\bf 24.}\hspace{5mm} For the subalgebra $\{X_1+\varepsilon(X_2-X_5)\}$, we have
\begin{equation}
u_0=(x^2-1)^{-\frac{3}{2}}F(\xi)-q,\qquad u_1=(x^2-1)^{-\frac{3}{2}}G(\xi),
\label{solution24}
\end{equation}
where we have the symmetry variable
\begin{equation}
\xi=\varepsilon t+\frac{1}{2}\ln{\left(\frac{x-1}{x+1}\right)}.
\end{equation}
Here, $F$ and $G$ satisfy the equations
\begin{equation}
\left(1-f_0F^{-\frac{4}{3}}\right)F_{\xi\xi}+\dfrac{4}{3}f_0F^{-\frac{7}{3}}(F_{\xi})^2-3f_0F^{-\frac{1}{3}}=0,
\label{solution24A}
\end{equation}
and
\begin{equation}
\begin{split}
&\left(1-f_0F^{-\frac{4}{3}}\right)G_{\xi\xi}+\dfrac{8}{3}f_0F^{-\frac{7}{3}}F_{\xi}G_{\xi}\\ &+\left(\dfrac{4}{3}f_0F^{-\frac{7}{3}}F_{\xi\xi}-\dfrac{28}{9}f_0F^{-\frac{10}{3}}(F_{\xi})^2+f_0F^{-\frac{4}{3}}\right)G\\ &+\lambda_0\varepsilon\bigg{(}F^{-\frac{4}{3}}F_{\xi}-\dfrac{28}{9}F^{-\frac{10}{3}}(F_{\xi})^3+4F^{-\frac{7}{3}}F_{\xi}F_{\xi\xi}-F^{-\frac{4}{3}}F_{\xi\xi\xi}\bigg{)}=0.
\end{split}
\label{solution24B}
\end{equation}
Equation (\ref{solution24A}) can be solved implicitly through the quadrature
\begin{equation}
\int\dfrac{f_0F^{-\frac{4}{3}}-1}{\sqrt{9(f_0)^2F^{-\frac{2}{3}}+9f_0F^{\frac{2}{3}}+K}}dF=\xi-\xi_0.
\end{equation}
{\bf 25.}\hspace{5mm} For the subalgebra $\{X_3-X_4+a(X_2-X_5)\}$, we consider the case where $a=\frac{1}{2}$. We obtain
\begin{equation}
u_0=(x-1)^{-3}F(\xi)-q,\qquad u_1=(x-1)^{-2}(x+1)^{-1}G(\xi),
\label{solution25}
\end{equation}
where the rational symmetry variable is $\xi=t(x-1)(x+1)^{-1}$. Here, $F$ satisfies the equation
\begin{equation}
\left(1-4f_0\xi^2F^{-\frac{4}{3}}\right)F_{\xi\xi}+\dfrac{16}{3}f_0\xi^2F^{-\frac{7}{3}}(F_{\xi})^2-8f_0\xi F^{-\frac{4}{3}}F_{\xi}=0.
\label{solution25A}
\end{equation}
A particular solution is
\begin{equation}
F=2^{\frac{3}{2}}f_0^{\frac{3}{4}}\xi^{\frac{3}{2}}.
\label{solution25B}
\end{equation}
In the case where $\lambda_0=0$ and $a=\frac{1}{2}$, substituting the function (\ref{solution25B}) and an arbitrary function $G(\xi)$ of the symmetry variable $\xi=t(x-1)(x+1)^{-1}$ into (\ref{solution25}) yields a solution of the system consisting of equations (\ref{bigeq7}) and (\ref{bigeq8}) of the form
\begin{equation}
u_0=2^{\frac{3}{2}}f_0^{\frac{3}{4}}t^{\frac{3}{2}}(x-1)^{-\frac{3}{2}}(x+1)^{-\frac{3}{2}}-q.
\end{equation}
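That (\ref{solution25B}) solves (\ref{solution25A}) can again be checked numerically; the leading coefficient vanishes identically on this function since $F^{4/3}=4f_0\xi^2$. In the sketch below, $f_0=1.5$ and the test points are arbitrary choices made here for illustration.

```python
# Numerical sketch: plug F(xi) = 2^(3/2) * f0^(3/4) * xi^(3/2) into the
# left-hand side of (solution25A) and check that the residual vanishes.

f0 = 1.5

def F(xi):
    return 2.0 ** 1.5 * f0 ** 0.75 * xi ** 1.5

def dF(xi):
    return 1.5 * 2.0 ** 1.5 * f0 ** 0.75 * xi ** 0.5

def d2F(xi):
    return 0.75 * 2.0 ** 1.5 * f0 ** 0.75 * xi ** -0.5

def residual(xi):
    # left-hand side of (solution25A)
    return ((1.0 - 4.0 * f0 * xi ** 2 * F(xi) ** (-4.0 / 3.0)) * d2F(xi)
            + (16.0 / 3.0) * f0 * xi ** 2 * F(xi) ** (-7.0 / 3.0) * dF(xi) ** 2
            - 8.0 * f0 * xi * F(xi) ** (-4.0 / 3.0) * dF(xi))

print(max(abs(residual(xi)) for xi in (0.5, 1.0, 2.0, 5.0)))
```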
\section{Subalgebra classification and solutions for the integro-differential case}
The system (\ref{i5}) given by the equations
\begin{equation}
\begin{split}
u_t-v_x&=0,\\
v_t-\left(\int^{u}f(s)\,ds+\varepsilon\lambda(u)v_x\right)_x&=0,
\end{split}
\label{integralformA}
\end{equation}
is the potential system for equation (\ref{i2}) in the sense that its compatibility condition is given by equation (\ref{i2}). Here, we have
\begin{equation}
u(t,x,\varepsilon)=u_0(t,x)+\varepsilon u_1(t,x)+{\mathcal O}(\varepsilon^2)\quad\mbox{ and }\quad v(t,x,\varepsilon)=v_0(t,x)+\varepsilon v_1(t,x)+{\mathcal O}(\varepsilon^2).
\label{uandvform}
\end{equation}
The approximate Lie algebra of infinitesimal symmetries of equation (\ref{integralformA}) is spanned by the five generators \cite{Ruggieri}
\begin{equation}
\begin{split}
&X_1=\partial_t,\qquad X_2=\partial_x,\qquad X_3=\partial_{v_0},\qquad X_4=\partial_{v_1},\\ &X_5=t\partial_t+x\partial_x-u_1\partial_{u_1}-v_1\partial_{v_1}
\end{split}
\label{gen4}
\end{equation}
For two specific cases of $f(u_0)$ and $\lambda(u_0)$, we also have an additional generator $X_6$. Specifically:
\begin{itemize}
\item For the case where $f(u_0)=f_0e^{u_0/p}$ and $\lambda(u_0)=\lambda_0e^{(1+s)u_0/p}$, we have\\ $X_6=x\partial_x+2p\partial_{u_0}+v_0\partial_{v_0}+2su_1\partial_{u_1}+(2s+1)v_1\partial_{v_1}$
\item For the case where $f(u_0)=f_0(u_0+q)^{\frac{1}{p}}$ and $\lambda(u_0)=\lambda_0(u_0+q)^{\frac{1+s}{p}-1}$, we have $X_6=x\partial_x+2p(u_0+q)\partial_{u_0}+(2p+1)v_0\partial_{v_0}+2su_1\partial_{u_1}+(2s+1)v_1\partial_{v_1}$
\end{itemize}
For both cases, we obtain a classification of 63 conjugacy classes of one-dimensional subalgebras, which we list in the Appendix.
\subsection{The case where $f(u_0)=f_0e^{u_0/p}$ and $\lambda(u_0)=\lambda_0e^{(1+s)u_0/p}$}
Here, $f_0$, $\lambda_0$, $p$ and $s$ are constants. In this case, we have the additional symmetry generator
\begin{equation}
X_6=x\partial_x+2p\partial_{u_0}+v_0\partial_{v_0}+2su_1\partial_{u_1}+(2s+1)v_1\partial_{v_1}
\label{gen4A}
\end{equation}
Performing a symmetry reduction corresponding to the subalgebra $\{X_6\}$, we obtain the solution
\begin{equation}
u_0=F(t)+2p\ln{x},\qquad v_0=xF_t,\qquad u_1=x^{2s}H(t),\qquad v_1=\dfrac{x^{2s+1}}{2s+1}H_t
\label{solution26}
\end{equation}
where
\begin{equation}
\int\dfrac{dF}{\sqrt{4p^2f_0e^{\frac{F}{p}}+K}}=t-t_0
\label{solution26A}
\end{equation}
and $H(t)$ satisfies the equation
\begin{equation}
H_{tt}+f_0e^{\frac{F}{p}}(-4s^2-6s-2)H-\dfrac{\lambda_0(1+s)}{p}e^{\frac{1+s}{p}F}\sqrt{4p^2f_0e^{\frac{F}{p}}+K}(4ps+2p)=0
\label{solution26B}
\end{equation}
In the case where $\lambda_0=0$ and $K=0$, we obtain
\begin{equation}
F=-2p\ln{\left(\sqrt{f_0}(t_0-t)\right)},
\end{equation}
and equation (\ref{solution26B}) becomes
\begin{equation}\label{laterequation2B}
H_{tt}+f_0\left(-4s^2-6s-2\right)\left[-2p\ln{\left(\sqrt{f_0}(t_0-t)\right)}\right]^{-2}H=0.
\end{equation}
Therefore, we obtain the solution
\begin{equation}
u_0=-2p\ln{\left(\sqrt{f_0}(t_0-t)\right)}+2p\ln{x},\qquad v_0=\dfrac{2px}{t_0-t},\qquad u_1=x^{2s}H(t),\qquad v_1=\dfrac{x^{2s+1}}{2s+1}H_t
\end{equation}
where $H$ satisfies (\ref{laterequation2B}).
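The closed form of $F$ can be checked against the $K=0$ case of the quadrature (\ref{solution26A}), which after differentiation reads $F_t=\sqrt{4p^2f_0e^{F/p}}$ for $t<t_0$. A numerical sketch, with parameter values chosen here purely for illustration:

```python
import math

# Sketch: verify that F(t) = -2p*ln(sqrt(f0)*(t0 - t)) satisfies
# F_t = sqrt(4*p^2*f0*exp(F/p)) (the K = 0 quadrature, differentiated).
# p, f0, t0 and the test points are illustrative choices.

p, f0, t0 = 1.5, 2.0, 1.0

def F(t):
    return -2.0 * p * math.log(math.sqrt(f0) * (t0 - t))

def F_t(t, h=1e-6):
    return (F(t + h) - F(t - h)) / (2.0 * h)  # central difference

def gap(t):
    return abs(F_t(t) - math.sqrt(4.0 * p ** 2 * f0 * math.exp(F(t) / p)))

print(max(gap(t) for t in (0.0, 0.25, 0.5)))
```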
\subsection{The case where $f(u_0)=f_0(u_0+q)^{\frac{1}{p}}$ and $\lambda(u_0)=\lambda_0(u_0+q)^{\frac{1+s}{p}-1}$}
Here, $f_0$, $\lambda_0$, $p$, $q$ and $s$ are constants. In this case, we have the additional symmetry generator
\begin{equation}
X_6=x\partial_x+2p(u_0+q)\partial_{u_0}+(2p+1)v_0\partial_{v_0}+2su_1\partial_{u_1}+(2s+1)v_1\partial_{v_1}
\label{gen4B}
\end{equation}
Performing a symmetry reduction corresponding to the subalgebra $\{X_6\}$, we obtain the solution
\begin{equation}
u_0=x^{2p}F(t)-q,\qquad v_0=\dfrac{x^{2p+1}}{2p+1}F_t,\qquad u_1=x^{2s}H(t),\qquad v_1=\dfrac{x^{2s+1}}{2s+1}H_t
\label{solution27}
\end{equation}
where
\begin{equation}
\int\sqrt{\dfrac{2p+1}{2f_0p(4p^2-2p+1)F^{\frac{1}{p}+2}}}dF=t-t_0
\label{solution27A}
\end{equation}
and $H(t)$ satisfies the equation
\begin{equation}
\begin{split}
&H_{tt}-f_0\left(2s(2s-1)+2(2p-1)+8s+4p\left(\frac{1}{p}-1\right)\right)F^{\frac{1}{p}}H\\ &-\lambda_0\bigg{[}\left(\dfrac{1+s}{p}-1\right)\left(\dfrac{1+s}{p}-2\right)(4p^2)+\left(\dfrac{1+s}{p}-1\right)(2p)(2p-1)\\ &+2\left(\dfrac{1+s}{p}-1\right)(4p^2)+(2p)(2p-1)\bigg{]}F^{\frac{1+s}{p}-1}F_t=0
\end{split}
\label{solution27B}
\end{equation}
In the specific case where $p=\frac{1}{2}$, $s=-\frac{3}{2}$ and $t_0=0$, we obtain the explicit solution
\begin{equation}
\begin{split}
& u_0=x^{2p}\left(-\sqrt{\dfrac{2}{f_0}}\right)\left(\dfrac{1}{t}\right)-q,\qquad v_0=\dfrac{x^{2p+1}}{2p+1}\sqrt{\dfrac{2}{f_0}}\left(\dfrac{1}{t^2}\right),\\ & u_1=C_1t^{r_1}x^{2s}+C_2t^{r_2}x^{2s}+\dfrac{\sqrt{2}f_0^{\frac{3}{2}}\lambda_0x^{2s}t^2}{2(f_0-2)},\\ & v_1=\dfrac{x^{2s+1}}{2s+1}\left[C_1r_1t^{r_1-1}+C_2r_2t^{r_2-1}+\dfrac{\sqrt{2}f_0^{\frac{3}{2}}\lambda_0t}{2(f_0-2)}\right].
\end{split}
\label{solution27C}
\end{equation}
where $r_1=\frac{1}{2}\left(1+\sqrt{1+\frac{16}{f_0}}\right)$ and $r_2=\frac{1}{2}\left(1-\sqrt{1+\frac{16}{f_0}}\right)$.
\section{Concluding Remarks}
In this paper, the approximate symmetry analysis of a nonlinear wave equation with small dissipation has been performed. Based on the Lie symmetry approach, we determined subalgebras of dimension one and reduced the perturbed system of PDEs to systems of ODEs. These ODEs could often be explicitly integrated in terms of known functions, or at least their singularity structure could be investigated using well-known methods. In particular, for ODEs of second and third order, it is possible to determine whether they are of the Painlev\'e type (i.e. whether all of their critical points are fixed and independent of the initial data). This approach has achieved a systematic classification of equations and invariant solutions from the group-theoretical point of view. The solutions obtained include elementary solutions (constant and algebraic solutions involving one or two simple poles), combinations of monomial powers of $x$ and $t$, solutions admitting damping and going to zero for large values of $t$, and solutions given by quadratures. This analysis can be applied to more general hydrodynamic systems admitting dissipation terms like viscosity and could lead to some new understanding of the problem of solving the Navier-Stokes system through the use of approximate symmetries.
\noindent {\bf Acknowledgements}\\
AMG's work was supported by a research grant from NSERC of Canada. AJH wishes to thank the Mathematical Physics Laboratory of the Centre de Recherches Math\'{e}matiques, Universit\'{e} de Montr\'{e}al, for the opportunity to participate in this research.
% https://arxiv.org/abs/1103.4125
\title{The geometric stability of Voronoi diagrams with respect to small changes of the sites}
\begin{abstract}
Voronoi diagrams appear in many areas in science and technology and have numerous applications. They have been the subject of extensive investigation during the last decades. Roughly speaking, they are a certain decomposition of a given space into cells, induced by a distance function and by a tuple of subsets called the generators or the sites. Consider the following question: does a small change of the sites, e.g., of their position or shape, yield a small change in the corresponding Voronoi cells? This question is by all means natural and fundamental, since in practice one approximates the sites either because of inexact information about them, because of inevitable numerical errors in their representation, for simplification purposes and so on, and it is important to know whether the resulting Voronoi cells approximate the real ones well. The traditional approach to Voronoi diagrams, and, in particular, to (variants of) this question, is combinatorial. However, it seems that there has been a very limited discussion in the geometric sense (the shape of the cells), mainly an intuitive one, without proofs, in Euclidean spaces. We formalize this question precisely, and then show that the answer is positive in the case of $\R^d$, or, more generally, in (possibly infinite dimensional) uniformly convex normed spaces, assuming there is a common positive lower bound on the distance between the sites. Explicit bounds are given, and we allow infinitely many sites of a general form. The relevance of this result is illustrated using several pictures and many real-world and theoretical examples and counterexamples.
\end{abstract}
\section{Introduction}\label{sec:Intro}
\subsection{Background} The Voronoi diagram (the Voronoi tessellation, the Voronoi decomposition, the Dirichlet tessellation) is one of the basic structures
in computational geometry. Roughly speaking, it is a certain decomposition of a given space $X$ into cells, induced by a distance function and by a tuple of subsets $(P_k)_{k\in K}$, called the generators or the sites. More precisely, the Voronoi cell $R_k$ associated with the site $P_k$ is the set
of all the points in $X$ whose distance to $P_k$ is not greater than their distance to the union of the other sites $P_j$.
Voronoi diagrams appear in a huge number of fields in science and technology and have many applications. They have been the subject of research for at least 160 years, starting formally with L. Dirichlet \cite{Dirichlet} and G. Voronoi \cite{Voronoi}, and of extensive research during the last 40 years. For several well written surveys on Voronoi diagrams which contain extensive bibliographies and many applications, see \cite{Aurenhammer}, \cite{AurenhammerKlein}, \cite{OBSC}, and \cite{VoronoiWeb}.\\
\noindent Consider the following question:
\begin{question}\label{ques:main}
Does a small change of the sites, e.g., of their position or shape, yield a small change in the corresponding Voronoi cells?
\end{question}
This question is by all means natural, because in practice, no matter which algorithm is being used for the computation of the Voronoi cells, one approximates the sites either because of lack of exact information about them, because of inevitable numerical errors occurring when a site is represented in an analog or a digital manner, for simplification purposes and so on, and it is important to know whether the resulting Voronoi cells approximate the real ones well.
For instance, consider the Voronoi diagram whose sites are either
shops (or large shopping centers), antennas, or other facilities in some city/district such as post offices. See Figures \ref{fig:Shops}-\ref{fig:ShopsReality}.
\begin{figure*}
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
{\includegraphics[scale=0.78]{figure1.eps}}
\end{center}
\caption{10 shopping centers (or post offices) in a flat city. Each shopping center is represented by a point. }
\label{fig:Shops}
\end{minipage}%
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
{\includegraphics[scale=0.78]{figure2.eps}}
\end{center}
\caption{In reality each shopping center/post office is not a point and its location is approximated. The combinatorial structure is somewhat different and the Voronoi cells are not exactly polygons, but still, their shapes are almost the same as in Figure \ref{fig:Shops}.}
\label{fig:ShopsReality}
\end{minipage}%
\end{figure*}
Each Voronoi cell is the domain of influence of its site and it can be used for various purposes, among them estimating the number of potential customers \cite{EconomyFacilityPNAS} or understanding the spreading patterns of mobile phone viruses \cite{VoronoiVirus}. In reality, each site has a somewhat vague shape, and its real location is not known exactly. However, to simplify matters we regard each site as a point (or a finite collection of points if we consider firms of shops) located more or less near the real location. As a result, the resulting cells only approximate the real ones, but we hope that the approximation will be good in the geometric sense, i.e., that the shapes of the corresponding real and approximate cells will be almost the same. (See Section \ref{sec:examples} for many additional examples, including ones with infinitely many sites or in higher/infinite dimensional spaces.) As the counterexamples in Section \ref{sec:CounterExamples} show, it is definitely not obvious that this is the case.
A similar question to Question \ref{ques:main} can be asked regarding any geometric structure/algorithm, and, in our opinion, it is a fundamental question which is analogous to the question about the stability of the solution of a differential equation with respect to small changes in the initial conditions.
The traditional approach to Voronoi diagrams, and, in particular, to (variants of) Question \ref{ques:main}, is combinatorial. For instance, as already mentioned in Aurenhammer \cite[p. 366]{Aurenhammer}, the combinatorial structure of Voronoi diagrams (in the case of the Euclidean distance with point sites), i.e., the structure of vertices, edges and so
on, is not stable under continuous motion of the sites, but it is stable ``most of the time''. A more extensive discussion about this issue, still with point sites but possibly in higher dimensions, can be found in Weller \cite{Weller}, Vyalyi et al. \cite{VGT}, and Albers et al. \cite{AGMR}.
However, it seems that this question, in the geometric sense, has been raised or discussed only rarely in the context of Voronoi diagrams. In fact, after a comprehensive survey of the literature about Voronoi diagrams, we have found only very few places that have a very brief, particular, and intuitive discussion which is somewhat related to this question. The following cases were mentioned: the Euclidean plane with finitely many sites \cite{Kaplan}, the Euclidean plane with finitely many point sites
\cite[p. 366]{Aurenhammer}, and the $d$-dimensional Euclidean space with finitely many point sites \cite{AGMR} (see also some of the references therein). It was claimed there without proofs and exact definitions that the Voronoi cells have a continuity property: a small change in the position or the shape of the sites yields a small change in the corresponding Voronoi cells.
Another continuity property was discussed by Groemer \cite{Groemer} in the context of the geometry of numbers. He considered Voronoi diagrams generated by a lattice of points in a $d$-dimensional Euclidean space, and proved that if a sequence of lattices converges to a certain lattice (meaning that the basis elements which generate the lattices converge with respect to the Euclidean distance to the basis which generates the limit lattice), then the corresponding Voronoi cells of the origin converge, with respect to the Hausdorff distance, to the cell of the origin of the limit lattice. His result is, in a sense and in a very particular case, a stability result, but it definitely does not answer Question \ref{ques:main} (which, actually, was not asked at all in \cite{Groemer}) for several reasons: first, usually the sites or the perturbed ones do not form a (infinite) lattice. Second, in many cases they are not points (singletons). Third, a site is usually different from the perturbed site (in \cite{Groemer} the discussed sites equal $\{0\}$). In this connection, we also note that Groemer's proof is very restricted to the above setting and it uses arguments based on compactness and no explicit bounds are given.
It is quite common in the computational geometry literature to assume ``ideal conditions'', say infinite precision in the computation, exact and simple input, and so on. These conditions are somewhat non-realistic. Issues related to the stability of geometric structures under small perturbations of their building blocks (not necessarily the geometric stability) are not so common in the literature, but they can be found in several places, e.g., in \cite{AGGKKRS, AryaMalamatosMount, AttaliBoissonnatEdelsbrunner, BandyopadhyaySnoeyink, ChazalCohenSteinerLieutier, ChoiSeidel, SteinerEdelsbrunerHarer, FortuneStability, HarPeled, Khanban, LofflerPhD, LofflerKreveld, GuibasSalesinStolfi, SugiharaIriInagakiImai}. However, in many of the above places the discussion has combinatorial characteristics and there are several restrictive assumptions: for instance, the underlying setting is usually a finite dimensional space (in many cases only $\R^2$ or $\R^3$), with the Euclidean distance, and with finitely many objects of a specific form (merely points in many cases). In addition, the methods are restricted to this setting. In contrast, the infinite dimensional case or the case of (possibly infinitely many) general objects or general norms have never been considered.
\subsection{Contribution of this paper} We discuss the question of stability of Voronoi diagrams with respect to small changes of the corresponding sites. We first formalize this question precisely, and then show that the answer is positive in the case of $\R^d$, or, more generally, in the case of (possibly infinite dimensional) uniformly convex normed spaces, assuming there is a common positive lower bound on the distance between the sites. Explicit bounds are presented, and we allow infinitely many sites of a general form. We also present several counterexamples which show that the assumptions formulated in the main result are crucial. We illustrate the relevance of this result using several pictures and many real-world and theoretical examples and counterexamples. To the best of our knowledge, the main result and the approach used for deriving it are new. Two of our main tools are: a new representation theorem which characterizes the Voronoi cells as a collection of line segments and a new geometric lemma which provides an explicit geometric estimate.
\subsection{The structure of the paper} In Section \ref{sec:Definitions} we present the basic definitions and notations. Exact formulation of Question \ref{ques:main} and informal description of the main result are given in Section \ref{sec:FormalInformal}. The relevance of the main result is illustrated using many theoretical and real-world examples in Section \ref{sec:examples}. The main result is presented in Section \ref{sec:Outline}, and we discuss briefly some aspects related to its proof. In Section \ref{sec:CounterExamples} we present several interesting counterexamples showing that the assumptions imposed in the main result are crucial. We end the paper in Section \ref{sec:Concluding} with several concluding remarks. Since the proof of the main result is quite long and technical, and because the main goal of this paper is to introduce the issue and to discuss it in a qualitative manner, rather than going deep into technical details, proofs were omitted from the main body of the text. Full proofs can be found in the appendix (Section \ref{sec:appendix}) and a preliminary version in \cite{ReemPhD}.
\section{Notation and basic definitions}\label{sec:Definitions}
In this section we present our notation and basic definitions. In the main discussion we consider a closed and convex set $X\neq \emptyset$ in some uniformly convex normed space $(\widetilde{X},|\cdot|)$ (see Definition \ref{def:UniformlyConvex} below), real or complex, finite or infinite dimensional. The induced metric is $d(x,y)=|x-y|$. We assume that $X$ is not a singleton, for otherwise everything is trivial. We denote by $[p,x]$ and $[p,x)$ the closed and half open line segments connecting $p$ and $x$, i.e., the sets $\{p+t(x-p): t\in [0,1]\}$ and $\{p+t(x-p): t\in [0,1)\}$ respectively. The (possibly empty) boundary of $X$ with respect to the affine hull spanned by $X$ is denoted by $\partial X$. The open ball with center $x\in X$ and radius $r>0$ is denoted by $B(x,r)$.
\begin{defin}\label{def:dom}
Given two nonempty subsets $P,A\subseteq X$, the dominance region
$\dom(P,A)$ of $P$ with respect to $A$ is the set of all $x\in X$
whose distance to $P$ is not greater than their distance to $A$, i.e.,
\begin{equation*}
\dom(P,A)=\{x\in X: d(x,P)\leq d(x,A)\}.
\end{equation*}
Here $d(x,A)=\inf\{d(x,a): a\in A\}$ and in general we denote $d(A_1,A_2)=\inf\{d(a_1,a_2): a_1\in A_1,\,a_2\in A_2\}$ for any nonempty subsets $A_1,A_2$.
\end{defin}
\begin{defin}\label{def:Voronoi}
Let $K$ be a set of at least 2 elements (indices), possibly
infinite. Given a tuple $(P_k)_{k\in K}$ of nonempty subsets
$P_k\subseteq X$, called the generators or the sites, the Voronoi diagram induced by this tuple is the tuple $(R_k)_{k\in K}$ of non-empty subsets
$R_k\subseteq X$, such that for all $k\in K$,
\begin{equation*}
R_k=\dom(P_k,{\underset{j\neq k}\bigcup P_j})
=\{x\in X: d(x,P_k)\leq d(x,P_j)\,\,\forall j\in K ,\, j\neq k \}.
\end{equation*}
In other words, the Voronoi cell $R_k$ associated with the site $P_k$ is the set of all $x\in X$ whose distance to $P_k$ is not greater than their distance to the union of the other sites $P_j$.
\end{defin}
In general, the Voronoi diagram induces a decomposition of $X$ into its Voronoi cells and the rest. If $K$ is finite, then the union of the cells is the whole space. However, if $K$ is infinite, then there may be a ``neutral cell'': for example, if $X$ is the Euclidean plane, $K=\N=\{1,2,3,\ldots\}$ and $P_k=\R\times \{1/k\}$, then no point in the lower half-plane $\R\times (-\infty,0]$ belongs to any Voronoi cell. In the above definition and the rest of the paper we ignore the neutral cell.
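The neutral-cell example above is easy to probe numerically: with $P_k=\R\times\{1/k\}$ one has $d(x,P_k)=|x_2-1/k|$, and for a point with $x_2\le 0$ these distances strictly decrease in $k$, so no index $k$ can satisfy all the defining inequalities of $R_k$. A small sketch (the sample point and the truncation at $k\le 20$ are illustrative choices):

```python
# Sketch of the neutral-cell example: with sites P_k = R x {1/k} in the
# Euclidean plane, d(x, P_k) = |x_2 - 1/k|.  For a point with x_2 <= 0
# the distances 1/k - x_2 decrease strictly in k, so no cell contains x.

def dist_to_site(x2, k):
    return abs(x2 - 1.0 / k)

x2 = -1.0  # a point in the lower half-plane
dists = [dist_to_site(x2, k) for k in range(1, 21)]
# x belongs to R_k only if d(x,P_k) <= d(x,P_j) for every j != k;
# here each index k is beaten by k+1, so membership fails for every k checked.
beaten = all(dists[k] > dists[k + 1] for k in range(len(dists) - 1))
print(beaten)
```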
We now recall the definition of strictly and uniformly convex spaces.
\begin{defin}\label{def:UniformlyConvex}\label{page:UniConvDef}
A normed space $(\widetilde{X},|\cdot|)$ is said to be strictly convex if for all $x,y\in \wt{X}$ satisfying $|x|=|y|=1$ and $x\neq y$, the inequality $|(x+y)/2|<1$ holds. $(\widetilde{X},|\cdot|)$ is said to be uniformly convex if for any $\epsilon\in (0,2]$ there exists $\delta\in (0,1]$ such that for all $x,y\in \wt{X}$, if $|x|=|y|=1$ and $|x-y|\geq \epsilon$, then $|(x+y)/2|\leq 1-\delta$.
\end{defin}
Roughly speaking, if the space is uniformly convex, then for any $\epsilon>0$ there exists a uniform positive lower bound on how deep the midpoint between any two unit vectors must penetrate the unit ball, assuming the distance between them is at least $\epsilon$. In general normed spaces the penetration is not necessarily positive, since the unit sphere may contain line segments. $\R^2$ with the max norm $|\cdot|_{\infty}$ is a typical example for this. A uniformly convex space is always strictly convex, and if it is also finite dimensional, then the converse is true too. The $m$-dimensional Euclidean space $\R^m$, or more generally, inner product spaces, the sequence spaces $\ell_p$, the Lebesgue spaces $L_p(\Omega)$ ($1<p<\infty$), and a uniformly convex product of a finite number of uniformly convex spaces, are all examples of uniformly convex spaces. See Clarkson \cite{Clarkson} and, for instance, Goebel-Reich \cite{GoebelReich} and Lindenstrauss-Tzafriri \cite{LindenTzafriri} for more information about uniformly convex spaces.
From the definition of uniformly convex spaces we can obtain a function which
assigns to the given $\epsilon$ a corresponding value $\delta(\epsilon)$. There are several ways to obtain such a function, but for our purpose we only need $\delta$ to be increasing, and to satisfy $\delta(0)=0$ and $\delta(\epsilon)>0$ for any $\epsilon\in (0,2]$. One choice, which is not necessarily the most convenient one, is the modulus of convexity, which is the function $\delta:[0,2]\to[0,1]$ defined by
\begin{equation*}
\displaystyle{\delta(\epsilon)=\inf\{1-|(x+y)/2|: |x-y|\geq \epsilon,\,|x|=|y|=1\}}.\label{eq:delta}
\end{equation*}
For specific spaces we can take more convenient functions. For instance, for the spaces $L_p(\Omega)$ or $\ell_p\,$, $1<p<\infty$, we can take
\begin{equation*}
\begin{array}{l}
\delta(\epsilon)=1-\left(1-(\epsilon/2)^p\right)^{1/p},\,\, \textnormal{for}\,\,p\geq 2,\\
\delta(\epsilon)=1-\left(1-(\epsilon/2)^q\right)^{1/q},\,\, \textnormal{for}\,\,1<p\leq 2\,\,
\textnormal{and}\,\,\frac{1}{p}+\frac{1}{q}=1.
\end{array}
\end{equation*}
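For $p=2$ the displayed formula is exact in any inner product space: the parallelogram law gives $|(x+y)/2|^2=1-(|x-y|/2)^2$ whenever $|x|=|y|=1$. The following sketch checks this in the Euclidean plane on random pairs of unit vectors (the sampling is an illustrative check made here, not a proof):

```python
import math, random

# Sketch: in the Euclidean plane, 1 - |(x+y)/2| equals
# delta(eps) = 1 - sqrt(1 - (eps/2)^2) with eps = |x-y|, for unit x, y,
# in agreement with the displayed formula for p = 2.

random.seed(0)

def delta_formula(eps):
    return 1.0 - math.sqrt(max(0.0, 1.0 - (eps / 2.0) ** 2))

worst = 0.0
for _ in range(1000):
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    x = (math.cos(a), math.sin(a))
    y = (math.cos(b), math.sin(b))
    eps = math.hypot(x[0] - y[0], x[1] - y[1])
    mid = math.hypot((x[0] + y[0]) / 2.0, (x[1] + y[1]) / 2.0)
    worst = max(worst, abs((1.0 - mid) - delta_formula(eps)))
print(worst)
```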
We finish this section with the definition of the Hausdorff distance, a definition which is essential for the rest of the paper.
\begin{defin}\label{def:Hausdorff}
Let $(X,d)$ be a metric space. Given two nonempty sets $A_1,A_2\subseteq X$, the Hausdorff distance between them is defined by
\begin{equation*}
D(A_1,A_2)=\max\{\sup_{a_1\in A_1}d(a_1,A_2),\sup_{a_2\in A_2}d(a_2,A_1)\}.
\end{equation*}
\end{defin}
Note that the Hausdorff distance $D(A_1,A_2)$ is definitely different from the usual distance $d(A_1,A_2)=\inf\{d(a_1,a_2): a_1\in A_1,\,a_2\in A_2\}$. As a matter of fact, $D(A_1,A_2)\leq \epsilon$ if and only if $d(a_1,A_2)\leq \epsilon$ for any $a_1\in A_1$, and $d(a_2,A_1)\leq \epsilon$ for any $a_2\in A_2$. In addition, if $D(A_1,A_2)<\epsilon$, then for any $a_1\in A_1$ there exists $a_2\in A_2$ such that $d(a_1,a_2)<\epsilon$, and for any $b_2\in A_2$ there exists $b_1\in A_1$ such that $d(b_2,b_1)<\epsilon$. These properties explain why the Hausdorff distance is the natural distance to be used when discussing approximation and stability in the context of sets: suppose that our resolution is at most $r$, i.e., we are not able to distinguish between two points whose distance is at most some given positive number $r$. If it is known that $D(A_1,A_2)<r$, then we cannot distinguish between the sets $A_1$ and $A_2$, at least not by inspections based only on distance measurements. As a result of the above discussion, the intuitive phrase ``two sets have almost the same shape'' can be formulated precisely: the Hausdorff distance between the sets is smaller than some given positive parameter (note that a set and a rigid transformation of it usually have different shapes).
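Definition \ref{def:Hausdorff} translates directly into code for finite point sets. The sketch below (the two example sets are arbitrary choices made here) also illustrates how different $D(A_1,A_2)$ can be from $d(A_1,A_2)$: the sets share a point, so their usual distance is $0$, yet their Hausdorff distance is large.

```python
import math

# Direct transcription of the Hausdorff distance for finite planar point
# sets: D(A,B) = max( sup_{a in A} d(a,B), sup_{b in B} d(b,A) ).

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_point_set(p, A):
    return min(d(p, a) for a in A)

def set_distance(A, B):
    # the usual distance d(A,B) = inf over pairs
    return min(d(a, b) for a in A for b in B)

def hausdorff(A, B):
    return max(max(d_point_set(a, B) for a in A),
               max(d_point_set(b, A) for b in B))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 0.0), (0.0, 3.0)]
print(set_distance(A, B), hausdorff(A, B))  # prints: 0.0 3.0
```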
\section{Exact formulation of the main question and informal formulation of the main result}\label{sec:FormalInformal}
The exact formulation of Question \ref{ques:main} is based on the concept of Hausdorff distance for reasons which were explained at the end of the previous section.
\begin{question}
Suppose that $(P_k)_{k\in K}$ is a tuple of non-empty sets in $X$. Let $(R_k)_{k\in K}$ be the corresponding Voronoi diagram. Is it true that a small change of the sites yields a small change in the corresponding Voronoi cells, where both changes are measured with respect to the Hausdorff distance? More precisely, is it true that for any $\epsilon>0$ there exists $\Delta>0$ such that for any tuple $(P'_k)_{k\in K}$, the condition $D(P_k,P'_k)<\Delta$ for each $k\in K$ implies that $D(R_k,R'_k)<\epsilon$ for each $k\in K$, where $(R'_k)_{k\in K}$ is the Voronoi diagram of $(P'_k)_{k\in K}$?
\end{question}
The main result (Theorem \ref{thm:stabilityUC}) says that the answer is positive. Here is an informal description of it:
\begin{answer}
Suppose that the underlying subset $X$ is a closed and convex set of a (possibly infinite dimensional) uniformly convex normed space $\wt{X}$. Suppose that a certain boundedness condition on the distance between points in $X$ and the sites holds, e.g., when $X$ is bounded or when the sites form a (distorted) lattice. If there is a common positive lower bound on the distance between the sites, and the distance to each of them is attained, then indeed a small enough change of the (possibly infinitely many) sites yields a small change of the corresponding Voronoi cells, where both changes are measured with respect to the Hausdorff distance; in other words, the shapes of the real cells and the corresponding perturbed ones are almost the same. Moreover, explicit bounds on the changes can be derived and they hold simultaneously for all the cells. There are counterexamples which show that the assumptions imposed above are crucial.
\end{answer}
The condition that the distance to a site is attained holds, e.g., when the site is either a closed subset contained in a (translation of a) finite dimensional space, or a compact set, or a convex and closed subset in a uniformly convex Banach space. The sites can always be assumed to be closed, since the distance and the Hausdorff distance preserve their values when the involved subsets are replaced by their closures. The ``certain boundedness condition on the distance between points in $X$ and the sites'' is a somewhat technical condition expressed in \eqref{eq:BallRhokThm} (see also Remark \ref{rem:rho}).
\section{The relevance of the main result}\label{sec:examples}
In Section \ref{sec:Intro} we explained why Question \ref{ques:main} is natural and fundamental, and mentioned the real-world example of a Voronoi diagram induced by shops/cellular antennas. The goal of this section is to illustrate further the relevance of the main result using a (far from being exhaustive) list of real-world and theoretical examples. In these examples the shape or the position of the real sites are obviously approximated, and the main result (Theorem \ref{thm:stabilityUC}) ensures that the approximate Voronoi cells and the real ones have almost the same shape, and no unpleasant phenomenon such as the one described in Figures \ref{fig:InStability000}-\ref{fig:InStability1Full} can occur.
One example is in molecular biology for modeling protein structure
(Richards \cite{Richards}, Kim et al. \cite{KKCRCP}, Bourquard et al. \cite{VoronoiBiology2}), where the sites are either the atoms of a protein or specially selected points in the amino acids, and they are approximated by spheres/points. Another example is related to collision detection and robot motion (Goralski-Gold \cite{GoralskiGold}, Schwartz et al.
\cite{SchwartzSharirHopcroft}), where the sites are the (static or dynamic) obstacles located in an environment in which a vehicle/airplane/ship/robot/satellite should move. A third example is in solid state physics (Ashcroft-Mermin \cite{AshcroftMermin}; here the common terms are ``the first Brillouin zone'' or ``the Wigner-Seitz cell'' instead of ``the Voronoi cell''), where the sites are infinitely many point atoms in a (roughly) periodic structure which represents a crystal. A fourth example is in material engineering (Li-Ghosh \cite{LiGhosh}), where the sites are cracks in a material.
A fifth example is in numerical simulations of various dynamical phenomena, e.g., gas, fluid or urban dynamics (Slotterback et al. \cite{GranularMatter}, Mostafavi et al. \cite{VoronoiSpatial}). Here the sites are certain points/shapes taken from the sampled data of the simulated phenomena, and the cells help to cluster and analyze the data continuously. A sixth example is in astrophysics (Springel et al. \cite{DarkMatterGalactic}) where the (point) sites are actually huge regions in the universe (of diameter equal to hundreds of light years) used in simulations performed for understanding the behavior of (dark) matter in the universe. A seventh example is in image processing and computer graphics, where the sites are either certain important features/parts in an image (Tagare et al. \cite{TagareJaffeDuncan}, Dobashi et al. \cite{DobashiHagaJohanNishita}, Sabha-Dutr\'e \cite{SabhaDutre}) used for processing/analyzing it, or they are points from a useful configuration such as (an approximate) centroidal Voronoi diagram (CVD) which induces cells having good shapes (Du et al. \cite{VoronoiCVD_Review}, Liu et al. \cite{Graphics_CVD}, Faustino-Figueiredo \cite{FaustinoFigueiredo}).
An eighth example is in computational geometry, and it is actually a large collection of familiar problems in this field where Voronoi cells appear
and are used, possibly indirectly: (approximate) nearest neighbor searching/the post office problem, cluster analysis, (approximate) closest pairs, motion planning, finding (approximate) minimum spanning trees, finding good triangulations, and so on. See, e.g., Aurenhammer \cite{Aurenhammer}, Aurenhammer-Klein \cite{AurenhammerKlein}, Clarkson \cite{ClarksonNN}, Indyk \cite{Indyk}, and Okabe et al. \cite{OBSC}. Here the sites are either points or other shapes, and the space is usually $\R^n$ with some norm. In some of the above problems our stability result is clearly related because of the analysis being used (e.g., cluster analysis) or because the position/shapes of the sites are approximated (e.g., motion planning, the post office problem). However, it may be related also because in many of the previous problems the difficulty/complexity of finding an exact solution is too high, so one is forced to use approximate algorithms, to impose a general position assumption, and so on. Now, by slightly perturbing the sites (moving points, replacing a non-smooth curve by an approximating smooth one, etc.) one may obtain much simpler and more tractable configurations, and then, using the geometric stability of the Voronoi cells, one may estimate how well the obtained solution approximates the best/real one.
As for a theoretical example of a different nature, we mention Kopeck\'a et al. \cite{KopeckaReemReich} in which the stability results described here have been used, in particular, for proving the existence of a zone diagram (a concept which was first introduced by Asano et al. \cite{AMTn} in the Euclidean plane with point sites) of finitely many compact sites which are strictly contained in a (large) compact and convex subset of a uniformly convex space, and also for proving interesting properties of Voronoi cells there.
Another example is for the infinite dimensional Hilbert space $L_2(I)$ for some $I$ (perhaps an interval or a finite dimensional space): functions in it are used in signal processing and in many other areas in science and technology. In practice the signals (functions) can be distorted, e.g., because of noise, and in addition, they are approximated by finite series (e.g., finite Fourier series) or integrals (e.g., Fourier transform). Given several signals, the (approximate) Voronoi cell of a given signal may help, at least in theory, to cluster or analyze data related to the sites. Such an analysis can be done also when the signal is considered as a point in a finite dimensional space of a high dimension. See, for instance, Conway-Sloane \cite[pp. 66-69, 451-477]{ConwaySloane} (coding) and Shannon \cite{Shannon} (communication) for a closely related discussion (in \cite{Shannon} Voronoi diagrams are definitely used in various places, but without their explicit name).
We mention several additional examples related to our stability result, sometimes in a somewhat unexpected way. For instance, Voronoi diagrams of infinitely many sites generated by a Poisson process (Okabe et al. \cite[pp. 39, 291-410]{OBSC}), Voronoi diagrams of atom nuclei used for the mathematical analysis of stability phenomena in matter (Lieb-Yau \cite{LiebYau}), Voronoi diagrams of infinitely many lattice points in a multi-dimensional Euclidean space which appear in the original works of Dirichlet \cite{Dirichlet} and Voronoi \cite{Voronoi} (see also Groemer \cite{Groemer} and Gruber-Lekkerkerker \cite{GruberLek} regarding the geometry of numbers and quadratic forms; Groemer used his stability result for deriving the Mahler compactness theorem \cite{Mahler}),
and packing problems such as the Kepler conjecture and the Dodecahedral conjecture (Hales \cite{HalesKepler},\cite{KeplerDCG}, Hales-McLaughlin \cite{HalesMcLaughlin}; because of continuity arguments needed in the proof) or those described in Conway-Sloane \cite{ConwaySloane}.
\section{The main result and some aspects related to its proof}\label{sec:Outline}
In this section we formulate the main result and discuss briefly issues related to its proof. See also the remarks after
Theorem \ref{thm:stabilityUC} for several relevant clarifications.
\begin{thm}\label{thm:stabilityUC}
Let $(\wt{X},|\cdot|)$ be a uniformly convex normed space. Let $X\subseteq \wt{X}$ be closed and convex. Let $(P_k)_{k\in K}$, $(P'_k)_{k\in K}$ be two given tuples of nonempty subsets of $X$ with the property that the distance between each $x\in X$ and each $P_k,P'_k$ is attained. For each $k\in K$ let $A_k=\bigcup_{j\neq k}P_j,\,A'_k=\bigcup_{j\neq k}P'_j$. Suppose that the following conditions hold:
\begin{equation}\label{eq:eta}
\eta:=\inf\{d(P_k,P_j): j,k\in K, j\neq k\}>0,
\end{equation}
\begin{multline}\label{eq:BallRhokThm}
\exists \rho\in (0,\infty)\,\, \textnormal{such that for all}\,\,k\in K\,\,\textnormal{and for all}\,\, x\in X\,\,\\
\textnormal{the open ball}\,\, B(x,\rho)\,\,\textnormal{intersects} \,\,A_k.
\end{multline}
For each $k\in K$ let $R_k=\dom(P_k,A_k),R'_k=\dom(P'_k,A'_k)$ be, respectively, the Voronoi cells associated with the original site $P_k$ and the perturbed one $P'_k$. Then for each $\epsilon\in (0,\eta/6)$ there exists $\Delta>0$ such that if $D(P_k,P'_k)<\Delta$ for each $k\in K$, then $D(R_k,R'_k)<\epsilon$ for each $k\in K$.
\end{thm}
See Figures \ref{fig:Stability_0005_lpi_Before}, \ref{fig:Stability_0005_lpi_After} for an illustration. The pictures were produced using the algorithm described in \cite{ReemISVD09}.
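A brute-force, grid-based sanity check of the stability phenomenon can be carried out in the Euclidean plane (the sites, the perturbation, and the grid below are hypothetical choices for illustration only; the theorem itself is far more general). Cells are discretized by assigning each grid point to its nearest site, and the perturbed and unperturbed cells are compared in the Hausdorff distance:

```python
# Grid-based illustration: slightly moving a point site yields cells that are
# close in the Hausdorff distance (hypothetical Euclidean example).
import math

def cells(sites, pts):
    # discretized Voronoi cells: assign each grid point to its nearest site(s)
    out = [[] for _ in sites]
    for p in pts:
        dists = [min(math.dist(p, s) for s in site) for site in sites]
        m = min(dists)
        for k, dk in enumerate(dists):
            if dk == m:              # boundary points may lie in several cells
                out[k].append(p)
    return out

def hausdorff(S1, S2):
    return max(max(min(math.dist(a, b) for b in S2) for a in S1),
               max(min(math.dist(a, b) for b in S1) for a in S2))

n = 20
pts = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]
sites = [[(0.25, 0.25)], [(0.75, 0.75)]]
moved = [[(0.27, 0.25)], [(0.75, 0.75)]]     # one site shifted by 0.02
gaps = [hausdorff(R, Rp)
        for R, Rp in zip(cells(sites, pts), cells(moved, pts))]
print([round(g, 3) for g in gaps])           # small: of the order of shift + grid step
```

The computed gaps are of the order of the perturbation plus the grid resolution, in agreement with the qualitative statement of Theorem \ref{thm:stabilityUC}.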
\begin{figure*}[t]
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
{\includegraphics[scale=0.75]{figure3.eps}}
\end{center}
\caption{Illustration of Theorem \ref{thm:stabilityUC}: five sites in a square in $(\R^2,\ell_p)$ where the parameter is $p=3.14159$.}
\label{fig:Stability_0005_lpi_Before}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
{\includegraphics[scale=0.75]{figure4.eps}}
\end{center}
\caption{The sites have been slightly perturbed: the two points have merged, the ``sin'' has shrunk, and so on. The cells have changed only slightly.}
\label{fig:Stability_0005_lpi_After}
\end{minipage}
\end{figure*}
\begin{remark}\label{rem:rho}
The assumption mentioned in \eqref{eq:BallRhokThm} may seem somewhat complicated at first glance, but it actually expresses a certain uniform boundedness condition on the distance between any point in $X$ and its neighboring sites. No matter which point $x\in X$ and which site $P_k$ are chosen, the distance between $x$ and the collection of other sites $P_j,\,j\neq k$ cannot be arbitrarily large. A sufficient condition for it to hold is when a uniform bound on the diameter of the cells (including the neutral one, if it is nonempty) is known in advance, e.g., when $X$ is bounded or when the sites form a (distorted) lattice. But \eqref{eq:BallRhokThm} can hold even if the cells are not bounded, e.g., when the setting is the Euclidean plane and $P_k=\R\times \{k\}$ where $k$ runs over all integers.
\end{remark}
\begin{remark}\label{rem:Delta}
In general, we have $\Delta=O(\epsilon^2)$. However, if there is a positive lower bound on the distance between the sites and the boundary of $X$ (relative to the affine hull spanned by $X$), i.e., if the sites are strictly contained in the (relative) interior of $X$, then actually the better estimate $\Delta=O(\epsilon)$ holds. The constants inside the big $O$ can be described explicitly: when $\Delta=C\epsilon^2$ we can take
\begin{equation*}
C=\frac{1}{16(\rho+5\eta/12)}\cdot\delta\left(\frac{\eta}{12\rho+5\eta}
\right),
\end{equation*}
and when $\Delta=C\epsilon$ we can take
\begin{equation*}
C=\min\left\{\displaystyle{ \frac{1}{16}\delta\left(\frac{\eta}{12\rho+5\eta}\right), \frac{d(\bigcup_{k\in K}P_k,\partial X)}{8(\rho+\eta/6)}}\right\}.
\end{equation*}
In the second case, in addition to $\epsilon <\eta/6$, the inequality
$\epsilon\leq 8\cdot d(\bigcup_{k\in K}P_k,\partial X)$ should be satisfied too.
\end{remark}
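To make the bound of Remark \ref{rem:Delta} concrete, one can evaluate the constant $C$ in $\Delta=C\epsilon^2$ for sample parameters. The sketch below assumes, purely for illustration, the modulus of convexity of a Euclidean (Hilbert) space, $\delta(\epsilon)=1-\sqrt{1-(\epsilon/2)^2}$, and hypothetical values of $\rho$ and $\eta$:

```python
# Illustrative evaluation of the constant C from the estimate Delta = C*eps^2,
# assuming the Euclidean modulus of convexity delta(e) = 1 - sqrt(1 - (e/2)^2)
# and hypothetical geometric parameters rho, eta.
import math

def delta_euclidean(e):
    return 1.0 - math.sqrt(1.0 - (e / 2.0) ** 2)

def C_quadratic(rho, eta):
    return delta_euclidean(eta / (12 * rho + 5 * eta)) / (16 * (rho + 5 * eta / 12))

rho, eta = 10.0, 1.0        # sample values (hypothetical)
eps = eta / 12              # some eps in (0, eta/6)
Delta = C_quadratic(rho, eta) * eps ** 2
print(C_quadratic(rho, eta))
print(Delta)                # the admissible perturbation size: tiny, but explicit
```

The resulting $\Delta$ is very small, which illustrates the remark's point that the estimate, while explicit, is far from optimal.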
The proof of Theorem \ref{thm:stabilityUC} is quite long and technical, and hence it is given in the appendix. Despite this, we want to say a few words about the proof and some of the difficulties which arise along the way. First, as the counterexamples mentioned in Section \ref{sec:CounterExamples} show, one must take into account all the assumptions mentioned in the formulation of the theorem.
Second, in order to arrive at the generality described in the theorem,
one is forced to avoid many familiar arguments used in computational geometry and elsewhere, such as Euclidean arguments (standard angles, trigonometric functions, normals, etc.), arguments based on lower envelopes and algebraic arguments (since the intersections between the graphs generating the lower envelope may be complicated and since the boundaries of the cells may not be algebraic), arguments based on continuous motion of points, arguments based on finite dimensional properties such as compactness (since in infinite dimensional spaces closed and bounded balls are not compact), arguments based on finiteness (since we allow infinitely many sites and sites consisting of infinitely many points) and so on. Our way to overcome these difficulties is to use a new representation theorem for dominance regions as a collection of line segments (Theorem \ref{thm:domInterval} below) and a new geometric lemma (Lemma \ref{lem:StrictSegment} below) which enables us to arrive at the explicit bounds mentioned in the theorem. As a matter of fact, we are not aware of any other way to obtain these explicit bounds even in a square in the Euclidean plane with point sites. These tools also allow us to overcome the difficulty of a potentially infinite accumulated error due to the possibility of infinitely many sites/sites with infinitely many points/infinite dimension.
\begin{thm}\label{thm:domInterval}
Let $X$ be a closed and convex subset of a normed space $(\widetilde{X},|\cdot|)$, and
let $P,A\subseteq X$ be nonempty. Suppose that for all $x\in X$ the distance between $x$ and $P$ is attained. Then $\dom(P,A)$ is a union of line segments starting at the points of $P$. More precisely, given $p\in P$ and a unit vector $\theta$, let
\begin{multline*}
T(\theta,p)=\sup\{t\in [0,\infty): p+t\theta\in X\,\,\mathrm{and}\,\,\\
d(p+t\theta,p)\leq d(p+t\theta,A)\}.
\end{multline*}
Then
\begin{equation*}\label{eq:dom}
\dom(P,A)=\bigcup_{p\in P}\bigcup_{|\theta|=1}[p,p+T(\theta,p)\theta].
\end{equation*}
When $T(\theta,p)=\infty$, the notation $[p,p+T(\theta,p)\theta]$ means the ray $\{p+t\theta: t\in [0,\infty)\}$.
\end{thm}
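For point sites in the Euclidean plane the quantity $T(\theta,p)$ can be computed numerically by bisection along the ray, since the set of admissible $t$ is a segment. The following sketch uses a hypothetical example with $X=[-10,10]^2$, $p=(0,0)$ and $A=\{(2,0)\}$, where $T$ is easy to check by hand:

```python
# Numerical sketch of T(theta, p) from the representation theorem, for point
# sites in the Euclidean plane with X = [-10, 10]^2 (hypothetical example).
import math

X_BOUND = 10.0

def in_X(q):
    return abs(q[0]) <= X_BOUND and abs(q[1]) <= X_BOUND

def T(theta, p, A, t_max=1000.0, iters=60):
    def ok(t):
        q = (p[0] + t * theta[0], p[1] + t * theta[1])
        return in_X(q) and math.dist(q, p) <= min(math.dist(q, a) for a in A)
    lo, hi = 0.0, t_max
    if ok(hi):
        return hi
    for _ in range(iters):          # bisection: ok(t) holds exactly on [0, T]
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo

p, A = (0.0, 0.0), [(2.0, 0.0)]
print(round(T((1.0, 0.0), p, A), 6))   # 1.0: the ray stops at the bisector {x = 1}
print(round(T((0.0, 1.0), p, A), 6))   # 10.0: the ray is stopped only by the boundary of X
```

The two printed values match the hand computation: towards the other site the segment ends at the bisector, while in the orthogonal direction it runs to the boundary of $X$.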
\begin{lem}\label{lem:StrictSegment}
Let $(\widetilde{X},|\cdot|)$ be a uniformly convex normed space and let
$A\subseteq \widetilde{X}$ be nonempty. Suppose that $y,p\in \widetilde{X}$ satisfy $d(y,p)\leq d(y,A)$ and $d(p,A)>0$. Let $x\in [p,y)$. Let $\sigma\in (0,\infty)$ be arbitrary. Then $d(x,p)<d(x,A)-r$ for any $r>0$ satisfying
\begin{equation*}
r\leq\!\min\!\left\{\sigma,\frac{4d(p,A)}{10},d(y,x)\delta\left(\frac{d(p,A)}{10(d(x,A)+\sigma+d(y,x))}\right)\!\right\}\!.
\end{equation*}
\end{lem}
The proof of Lemma \ref{lem:StrictSegment} is based on the strong triangle inequality of Clarkson
\cite[Theorem 3]{Clarkson}. It is interesting to note that although this inequality appeared in Clarkson's very famous paper \cite{Clarkson}, it seems to have been almost totally forgotten: despite a comprehensive search of the literature, we have found evidence of it only in \cite{Clarkson} and later in \cite{Plant}.
\section{Counterexamples}\label{sec:CounterExamples}
\begin{figure*}
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
{\includegraphics[scale=0.8]{figure5.eps}}
\end{center}
\caption{Four sites in a square in $(\R^2,\ell_{\infty})$. The cell of $P_1=\{(0,0)\}$ is displayed. The other sites
are $P_2=\{(2,0)\}$, $P_3=\{(-2,0)\}$, $P_4=\{(0,-2)\}$.}
\label{fig:InStability000} \label{page:InStability000}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
{\includegraphics[scale=0.8]{figure6.eps}}
\end{center}
\caption{Now either $P_4$ is the square $[-\beta,\beta]\times [-2-\beta,-2+\beta]$ or $P_1=\{(\beta,\beta)\}$, $\beta>0$ arbitrary small. The two lower rays have disappeared. No stability.}
\label{fig:InStability001}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
{\includegraphics[scale=0.8]{figure7.eps}}
\end{center}
\caption{The full diagram of Figure \ref{fig:InStability000}. Note the large intersection between cells 1, 2, and 3. To emphasize this intersection, each cell is represented as a union of rays (see Theorem~\ref{thm:domInterval} for more information) and several rays are highlighted.}
\label{fig:InStability0Full}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
{\includegraphics[scale=0.8]{figure8.eps}}
\end{center}
\caption{The full diagram of Figure \ref{fig:InStability001} when $P_1=\{(\beta,\beta)\}$. Cells 2,3 have been (significantly) changed too.}
\label{fig:InStability1Full}
\end{minipage}
\end{figure*}
In this section we mention several counterexamples which show that the assumptions in Theorem \ref{thm:stabilityUC} are essential.
If the space is not uniformly convex, then the Voronoi cells may not be stable as shown in Figures
\ref{fig:InStability000}-\ref{fig:InStability1Full}. Here the setting is point sites in a square in $\R^2$ with the max norm.
The positive common lower bound expressed in \eqref{eq:eta} is necessary even in a square in the Euclidean plane. Consider $X=[-10,10]^2$, $P_{1,\beta}=\{(0,\beta)\}$ and $P_{2,\beta}=\{(0,-\beta)\}$, where $\beta\in [0,1]$. As long as $\beta>0$, the cell $\dom(P_{1,\beta},P_{2,\beta})$ is the upper half of $X$. However, if $\beta=0$, then $\dom(P_{1,0},P_{2,0})$ is $X$. A more interesting example occurs when considering in $X$ the rectangle $P_{1,\beta}=[-a,a]\times [-10,-\beta]$ and the line segment $P_2=[-10,10]\times \{0\}$, where $a,\beta\in [0,1]$. If $\beta=0$, then $d(P_{1,0},P_2)=0$, and the cell $\dom(P_{1,0},P_2)$ contains the rectangle $[-a,a]\times [0,10]$. However, if $\beta>0$, then this cell does not contain this rectangle.
The assumption expressed in \eqref{eq:BallRhokThm} is essential even in the Euclidean plane with two points. Indeed, given $\beta\geq 0$, let $P_{1,\beta}=\{(\beta,1)\}$ and $P_2=\{(0,-1)\}$. Then $\dom(P_{1,0},P_2)$ is the upper half-plane. However, if $\beta>0$, then the half-plane $\dom(P_{1,\beta},P_2)$ contains points $(x,y)$ with $y\to -\infty$. Thus the Hausdorff distance between the original cell $\dom(P_{1,0},P_2)$ and the perturbed one $\dom(P_{1,\beta},P_2)$ is $\infty$, so there can be no stability.
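The last counterexample is easy to verify numerically. In the sketch below (an illustration of the example above, with the hypothetical value $\beta=0.1$), membership of a point in $\dom(P_{1,\beta},P_2)$ is just a comparison of two Euclidean distances, and points with arbitrarily negative $y$-coordinate turn out to belong to the perturbed cell while lying far from the unperturbed one:

```python
# Check of the counterexample: for beta > 0 the perturbed cell dom(P1, P2)
# contains points with arbitrarily negative y-coordinate.
import math

def in_dom_P1(q, beta):
    # q belongs to dom(P1, P2) iff it is at least as close to P1 = (beta, 1)
    # as to P2 = (0, -1)
    return math.dist(q, (beta, 1.0)) <= math.dist(q, (0.0, -1.0))

beta = 0.1
for t in [10.0, 100.0, 1000.0]:
    q = (t, -beta * t / 4)            # drifts downward along the tilted bisector side
    print(q, in_dom_P1(q, beta))      # True, although q leaves the upper half-plane
    print(beta * t / 4)               # distance from q to the unperturbed cell {y >= 0}
```

For $\beta=0$ the same points are rejected, so the perturbed and unperturbed cells indeed differ by an unbounded amount.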
\section{Concluding Remarks}\label{sec:Concluding}
We conclude the paper with the following remarks.
First, despite the counterexamples mentioned above, some of the assumptions can be weakened, with some caution. For instance, under a certain compactness assumption and a certain geometric condition that the sites should satisfy, the main result can be generalized to normed spaces which are not uniformly convex. A particular case of these assumptions is when the space is $m$-dimensional with the $|\cdot|_{\infty}$ norm, the sites are positively separated, and no two points of different sites are on a hyperplane perpendicular to one of the standard axes.
Second, an interesting (but not immediate) consequence of the main result is that it implies the stability of the (multi-dimensional) volume, namely a small change in the sites yields a small change in the volumes of the corresponding Voronoi cells.
Third, it can be shown that the function $T$ defined in Theorem~\ref{thm:domInterval} has a certain continuity property when the space is uniformly convex, which in turn expresses a continuity property of the boundary of the cells.
Fourth, the estimate for $\Delta$ from Remark \ref{rem:Delta} is definitely not optimal and can be improved, but, as simple examples show, $\Delta$ cannot be much larger, and its magnitude should be taken into account when performing a relevant analysis. There is nothing strange in this; the situation is analogous to the familiar case of real-valued functions. For instance, consider $f:\R\to\R$ defined by $f(x)=0$ for $x<0$, $f(x)=(1/\beta)x$ for $x\in [0,\beta]$, and $f(x)=1$ for $x>\beta$, where $\beta>0$ is given. Although $f$ is continuous, a change of the argument near $x=0$ by more than $\beta$ causes a large change in $f$.
\section*{Acknowledgments}
I want to thank Dan Halperin, Maarten L{\"o}ffler, and Simeon Reich for helpful discussions regarding some of the references. I also thank all the reviewers for their comments.
\bibliographystyle{amsplain}
% https://arxiv.org/abs/2212.13440
\title{Contraction and $k$-contraction in Lurie systems with applications to networked systems}
\begin{abstract}
A Lurie system is the interconnection of a linear time-invariant system and a nonlinear feedback function. We derive a new sufficient condition for $k$-contraction of a Lurie system. For $k=1$, our sufficient condition reduces to the standard stability condition based on the bounded real lemma and a small gain condition. However, Lurie systems often have more than a single equilibrium and are thus not contractive with respect to any norm. For $k=2$, our condition guarantees a well-ordered asymptotic behaviour of the closed-loop system: every bounded solution converges to an equilibrium, which is not necessarily unique. We demonstrate our results by deriving a sufficient condition for $k$-contraction of a general networked system, and then applying it to guarantee $k$-contraction in a Hopfield neural network, a nonlinear opinion dynamics model, and a 2-bus power system.
\end{abstract}
\section{Introduction}
Consider a nonlinear system obtained by connecting
a linear time-invariant~(LTI) system with state vector~$x\in \mathbb R^n$, input~$u\in\mathbb R^m$ and output~$y\in \mathbb R^q$:
\begin{equation}\label{initial}
\begin{array}{l}
\dot x(t)=Ax(t)+ Bu(t) ,\\%[1em]
y(t)=Cx(t) ,
\end{array}
\end{equation}
with a time-varying
nonlinear feedback control
\[u(t)=-\Phi(t,y(t))
\]
(see Fig.~\ref{fig:lurie}). The resulting closed-loop system
\begin{equation}\label{eq:closed_loop1}
\dot x(t) = Ax(t) - B\Phi(t,Cx(t))
\end{equation}
is known as a Lurie (sometimes written Lure, Lur'e or Lurye) system after the Russian mathematician Anatolii Isakovich Lurie.
The non-trivial and well-studied absolute
stability problem
is to prove that the closed-loop system is asymptotically stable for any $\Phi$ belonging to a certain class of nonlinear functions, e.g., the class of sector-bounded functions~\citep[Ch.~7]{khalil_book}.
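As a minimal, hypothetical illustration of the closed loop~\eqref{eq:closed_loop1} (not an example from the paper), the sketch below simulates $\dot x = Ax - B\Phi(Cx)$ with a sector-bounded nonlinearity $\Phi(y)=\tanh(y)$ and a Hurwitz matrix $A$, using forward Euler integration:

```python
# Minimal simulation sketch of the Lurie closed loop xdot = Ax - B*Phi(Cx),
# with Phi(y) = tanh(y), a sector [0, 1] nonlinearity (hypothetical example).
import math

A = [[0.0, 1.0], [-2.0, -3.0]]   # Hurwitz
B = [0.0, 1.0]
C = [1.0, 0.0]

def f(x):
    y = C[0] * x[0] + C[1] * x[1]
    u = -math.tanh(y)                      # feedback u = -Phi(y)
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]

x, dt = [1.0, 1.0], 1e-3
for _ in range(20000):                     # forward Euler over t in [0, 20]
    dx = f(x)
    x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
print(x)                                   # close to the origin
```

In this particular (contractive) example the trajectory converges to the unique equilibrium at the origin; the multi-equilibrium situations discussed below are precisely those where such convergence to a single point fails.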
In the 1940s and 1950s, M. Aizerman and R. Kalman
conjectured that for certain classes of non-linear functions the absolute stability problem
can be reduced to the stability analysis of certain classes of linear systems.
These conjectures are now known to be false. However, the study of the absolute stability problem has led to many important developments, including:
(1)~sufficient conditions for absolute stability in terms of
the transfer function of the linear system and their graphical interpretations~\citep{khalil_book,vidyasagar2002nonlinear};
(2)~passivity-based analysis of interconnected systems, and the so-called Zames–Falb multipliers~\citep{CARRASCO20161};
(3)~the theory of integral quadratic constraints~(IQCs)~\citep{ICQ};
and
(4)~the formulation of an optimal control approach in the stability analysis of switched linear systems~(see the survey paper~\citep{MARGALIOT20062059}).
Several authors studied~\eqref{eq:closed_loop1} using contraction theory. A system is called contractive if any two trajectories approach each other at an exponential rate. In particular, if an equilibrium exists then it is unique and globally exponentially asymptotically stable.
\cite{Smith1986haus} derived a sufficient condition for what is now known as $\alpha$-contraction, with~$\alpha$ real, with respect to (w.r.t.) Euclidean norms, applied it to a Lurie system, and demonstrated the results by bounding the Hausdorff dimension of attractors of the Lorenz system. However, his sufficient condition is
highly conservative, especially for large-scale systems. \cite{Andrieu2019LMIContract} provide a linear matrix inequality (LMI) sufficient condition for contraction w.r.t.
Euclidean norms under differential sector bound or monotonicity assumptions on the non-linearity (see also~\cite[Theorem 3.24]{bullo_contraction} for a similar condition under different assumptions), and use it to design controllers which guarantee contraction of the closed-loop system. \cite{control_syn_vincent} showed that the designed controllers yield a closed-loop system with the desirable property of
infinite gain margin. \cite{Proskurnikov2022GeneralizedSLemma} provide a sufficient condition for contraction w.r.t. non-Euclidean norms (see also~\cite{AD-AVP-FB:21k} where this question was studied in the context of recurrent neural networks). However,
a Lurie system may have more than a single equilibrium point (see, e.g.~\citep{MIRANDAVILLATORO201876}),
and then it is not contractive w.r.t. any norm.
Following the seminal work of~\cite{muldowney1990compound},
\cite{kordercont} recently
introduced the notion of $k$-contractive systems.
Classical contractivity implies that under the phase flow of the system the tangent vectors to the phase space contract exponentially fast; $k$-contractivity implies that the same property holds for elements of $k$-exterior powers of the tangent spaces. Roughly speaking, this is equivalent to the fact that the flow of the variational equation contracts $k$-dimensional parallelotopes at an exponential rate. In particular, a~$1$-contractive system is just a contractive system.
However, a system that is $k$-contractive, with~$k>1$, may not be contractive in the standard sense. For example, every bounded solution of a time-invariant $2$-contractive system
converges to an equilibrium point, which may not be unique~\citep{li1995}. Thus, 2-contraction may be useful for analyzing multi-stable systems that cannot be analyzed using standard contraction theory.
The basic tools required to define and study~$k$-contractivity are the~$k$-multiplicative and~$k$-additive compounds of a matrix.
The reason for this is simple:
$k$-multiplicative compounds provide information on the volume of parallelotopes generated by~$k$ vectors, and
$k$-additive compounds describe the dynamics of $k$-multiplicative compounds when the generating vectors follow
a linear dynamics~\citep{comp_long_tutorial}.
Here, we derive a novel sufficient condition for~$k$-contracti\-vi\-ty of a Lurie system. A unique feature of this condition is that it
combines an algebraic Riccati inequality~(ARI)
that includes $k$-additive compounds of the matrices of the~LTI, and a kind of gain condition on the Jacobian of the nonlinear function~$\Phi$. We refer to this special ARI as the~$k$-ARI.
In the special case~$k=1$, the~$k$-ARI reduces to the standard Hamilton-Jacobi
inequality appearing in the small gain theorem~\cite[Ch.~5]{khalil_book}, and our contraction condition reduces to a small-gain sufficient condition for standard contraction. However, for~$k>1$
our condition provides new results.
We demonstrate this by deriving a simple sufficient condition for~$k$-contraction of a general networked system and then applying it to a Hopfield network, a nonlinear opinion dynamics model, and a 2-bus power system. These systems are typically multi-stable,
and thus
cannot be analyzed using standard contraction theory. Nevertheless, for the case~$k=2$ our sufficient condition still guarantees a well-ordered global
behaviour: any bounded solution converges to an equilibrium point, that is not necessarily unique.
We use standard notation. For a square matrix~$A\in\mathbb C^{n\times n}$, $\operatorname{tr} (A)$ is the trace of~$A$, and~$\det (A)$ is the determinant
of~$A$.
$A^*$ is the conjugate
transpose of~$A$. If~$A$ is real then
this is just the transpose of~$A$, denoted
$A^T$.
A symmetric matrix~$P \in \mathbb R^{n \times n}$ is called positive definite [positive semi-definite] if $x^TPx > 0$ [$x^TPx \ge 0$] for all~$x \in \mathbb R^n\setminus\{0\}$. Such matrices are denoted by $P \succ 0$ and $P \succeq 0$, respectively. For~$A \in \mathbb C^{n \times m}$, we use~$\sigma_1(A) \ge \dots \ge \sigma_{\min\{n,m\}}(A) \ge 0$ to
denote the ordered singular values of~$A$, that is, the ordered
square roots of the eigenvalues of~$A^*A$ if $m<n$, or of~$AA^*$, otherwise. The~$n\times n$ identity matrix is denoted by~$I_n$.
The~$L_2$ norm of a vector~$x$ is~$|x|_2:=(x^Tx)^{1/2}$. For two integers
$i\leq j$, we let~$[i,j]:=\{i,i+1,\dots,j\}$.
The remainder of this paper is organized as follows. The next section reviews two basic tools used to guarantee $k$-contraction: matrix compounds and matrix measures. Section~\ref{sec:main}
presents and discusses the main result.
Section~\ref{sec:proof} proves the main result, based on two auxiliary results that may be of independent interest.
Section~\ref{sec:networked} describes an application of our main result to a networked system and demonstrates
how this can be used to analyze $k$-contraction in
a Hopfield network, a non-linear opinion dynamics model, and a 2-bus power system.
The final section concludes.
\begin{figure}
\centering
\begin{tikzpicture}[
block/.style = {draw, rectangle, thick, minimum height=2em, minimum width=3em},
sum/.style = {draw, circle, thick, minimum width=2em}]
\node[block, minimum height=1.25cm, minimum width=2.4cm] (LTI) {$\begin{aligned}\dot{x} &= Ax + Bu \\ y &= Cx\end{aligned}$} ;
\node[block, minimum height=1cm, minimum width=1cm] (nonlin) [below=1cm of LTI] {$\Phi$} ;
\node[sum, left=1cm of LTI] (fbsum) {};
\draw[->, thick] (LTI.east) -- node[above, midway] {$y $} +(1,0) coordinate(LTIaux) |- (nonlin.east);
\draw[->, thick] (LTIaux) -- +(1,0);
\draw[->, thick] (nonlin.west) -| (fbsum.south) node[below left] {$-$};
\draw[->, thick] (fbsum.east) -- (LTI.west) node[midway,above] {$u $};
\draw[<-, thick] (fbsum.west) node[above left] {$+$} -- +(-1,0) node[left] {$0$};
\end{tikzpicture}
\caption{Block diagram of a Lurie system.}
\label{fig:lurie}
\end{figure}
\section{Preliminaries}
In this section, we review several known definitions and results on matrix compounds and matrix measures that will be used in Section~\ref{sec:main}.
\subsection{Matrix compounds}
Let~$Q_{k,n}$ denote the set
of increasing sequences of~$k$ numbers from~$[1,n]$
ordered lexicographically.
For example,~$Q_{2,3} =
\{ (1,2), (1,3), (2,3) \}
$.
For~$A\in\mathbb R^{n\times m}$ and~$k\in[1,\min\{n,m\}]$,
a \emph{minor of order~$k$} of~$A$ is the determinant of some~$k \times k$
submatrix of~$A$.
Consider the~$\binom{n}{k}\times \binom{m}{k} $
minors of order~$k$ of~$A$.
Each such minor is defined by a set of row indices~$\kappa^i \in Q_{k,n}$ and column indices~$\kappa^j\in
Q_{k,m}$. This minor
is denoted by~$A(\kappa^i|\kappa^j)$.
For example, for~$A=\begin{bmatrix} 1&2 \\ -1 &3 \\0&3
\end{bmatrix}$, we have
$
A((1,3) |(1,2))=\det \begin{bmatrix} 1&2\\0&3
\end{bmatrix} =3.
$
\begin{Definition}\label{def:multi}
The~$k$-\emph{multiplicative compound matrix}
of~$A\in\mathbb R^{n\times m}$, denoted~$A^{(k)}$, is the~$\binom{n}{k}\times \binom{m}{k}$ matrix
whose entries are all the minors of order~$k$: the entry corresponding to~$\kappa^i \in Q_{k,n}$ and~$\kappa^j \in Q_{k,m}$ is~$A(\kappa^i|\kappa^j)$, with the index sets ordered lexicographically.
\end{Definition}
For example, for~$n = m =3$ and~$k=2$, we have
\[
A^{(2)}= \begin{bmatrix}
A((1,2)|(1,2)) & A((1,2)|(1,3)) & A((1,2)|(2,3))\\
A((1,3)|(1,2)) & A((1,3)|(1,3)) & A((1,3)|(2,3))\\
A((2,3)|(1,2)) & A((2,3)|(1,3)) & A((2,3)|(2,3))
\end{bmatrix}.
\]
Definition~\ref{def:multi} has several implications. First, if~$A$ is square then~$(A^T)^{(k)} = (A^{(k)})^T$, and in particular if~$A$ is symmetric then so is~$A^{(k)}$.
Also, $A^{(1)}=A$
and if~$A \in \mathbb{R}^{n \times n}$ then~$A^{(n)}=\det(A)$.
If~$D$ is an~$n\times n$ diagonal matrix, i.e.~$D=\operatorname{diag}(d_1,\dots,d_n)$ then
$
D^{(k)}=\operatorname{diag}(d_1\dots d_k, d_1\dots d_{k-1}d_{k+1},\dots,d_{n-k+1}\dots d_n)
$.
In particular, every eigenvalue of~$D^{(k)}$ is the product of~$k$ eigenvalues of~$D$.
In the special case~$D= p I_n$, with~$p \in \mathbb R$, we have that~$(pI_n)^{(k)}=p^k I_r$, with~$r:=\binom{n}{k}$.
The \emph{Cauchy-Binet formula} (see, e.g.,~\cite[Thm.~1.1.1]{total_book}) asserts
that
\begin{equation}\label{eq:cbf}
(AB)^{(k)}=A^{(k)} B^{(k)}
\end{equation}
for any $A \in \mathbb R^{n \times p}$, $B \in \mathbb R^{p \times m}$, $k \in [1,\min\{n,p,m\} ]$.
This justifies the term \emph{multiplicative compound}.
When~$n=p=m=k $, Eq.~\eqref{eq:cbf} becomes the familiar formula~$\det(AB)=\det(A)\det(B)$.
If~$A$ is $n\times n $ and non-singular then~\eqref{eq:cbf} implies that $ I_n^{(k)}=(AA^{-1})^{(k)}=A^{(k)} (A^{-1})^{(k)}$,
so~$A^{(k)} $ is also non-singular with~$(A^{(k)})^{-1}=(A^{-1})^{(k)} $.
Another implication of~\eqref{eq:cbf}
is that if~$A\in\mathbb R^{n\times n}$ with eigenvalues~$\lambda_1,\dots,\lambda_n$
then the eigenvalues of~$A^{(k)}$ are all the~$\binom{n}{k}$ products:
\[
\lambda_{i_1} \lambda_{i_2} \dots \lambda_{i_k}, \text{ with } 1\leq i_1 < i_2 <\dots < i_k\leq n.
\]
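As a numerical sanity check (not part of the development; the helper name \texttt{mult\_compound} is ours), the compound of Definition~\ref{def:multi}, the Cauchy-Binet formula, and the spectral property above can be verified directly:

```python
# Sketch: build A^{(k)} from Definition 1 and check the Cauchy-Binet formula
# and the eigenvalue-product property numerically.
import itertools
import numpy as np

def mult_compound(A, k):
    """k-multiplicative compound: the matrix of all k-minors of A,
    with row/column index sets in lexicographic order."""
    n, m = A.shape
    rows = list(itertools.combinations(range(n), k))
    cols = list(itertools.combinations(range(m), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
k = 2

# Cauchy-Binet: (AB)^{(k)} = A^{(k)} B^{(k)}
assert np.allclose(mult_compound(A @ B, k),
                   mult_compound(A, k) @ mult_compound(B, k))

# Spectral property, checked on a symmetric matrix (real eigenvalues):
# the eigenvalues of S^{(2)} are all pairwise products of eigenvalues of S.
S = A + A.T
eig = np.linalg.eigvalsh(S)
prods = np.sort([eig[i] * eig[j] for i, j in itertools.combinations(range(4), 2)])
assert np.allclose(prods, np.sort(np.linalg.eigvalsh(mult_compound(S, k))))
```

Note that \texttt{itertools.combinations} enumerates index sets in exactly the lexicographic order used for~$Q_{k,n}$.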
The usefulness of the $k$-multiplicative compound in analyzing
$k$-contraction follows from the relation between the $k$-compound and the volume of~$k$-parallelotopes. To explain this,
fix~$k$ vectors~$x^1,\dots,x^k \in \mathbb R^n$. The parallelotope generated by these vectors (and the zero vertex) is
\[
\mathcal{P}(x^1 , \dots,x^k) := \left\{\sum_{i=1}^k r_i x^i \, | \, r_i \in [0,1] \text { for all } i \right\},
\]
(see Fig.~\ref{fig:parallelogram}).
Let
\[
X:=\begin{bmatrix} x^1&\dots&x^k
\end{bmatrix} \in \mathbb R^{n\times k}.
\]
The volume of~$\mathcal{P}(x^1,\dots,x^k)$ satisfies~\citep[Chapter~IX]{Gantmacher_vol1}:
\begin{equation}\label{eq:voldet}
\vol (\mathcal{P}(x^1,\dots,x^k))= |X^{(k)} |_2.
\end{equation}
Note that since~$X\in\mathbb R^{n\times k}$, the dimensions of~$X^{(k)}$ are~$\binom{n}{k}\times 1$, that is, $X^{(k)}$ is a column vector.
\begin{Example}
Consider the case~$n=3$, $k=2$, $x^1=\begin{bmatrix} a& 0& 0
\end{bmatrix}^T$, and~$x^2=\begin{bmatrix} 0& b& 0
\end{bmatrix}^T$, with~$a,b\in \mathbb R$. Then
$X=\begin{bmatrix}
a&0 \\0&b \\0&0
\end{bmatrix}
$, so
$X^{(2)}=\begin{bmatrix}
ab&0&0
\end{bmatrix}^T
$, and~$|X^{(2)}|_2=|ab|$.
\end{Example}
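For~$n=3$, $k=2$, Eq.~\eqref{eq:voldet} can also be checked against the classical cross-product formula for the area of a parallelogram; the vectors below are arbitrary example data:

```python
# Sketch: for n = 3, k = 2, |X^{(2)}|_2 equals the parallelogram area |x1 x x2|.
import itertools
import numpy as np

x1 = np.array([1.0, 2.0, 0.5])   # arbitrary example vectors
x2 = np.array([0.0, 1.0, 3.0])
X = np.column_stack([x1, x2])    # 3 x 2

# X^{(2)} is the column vector of the C(3,2) = 3 minors of order 2.
X2 = np.array([np.linalg.det(X[list(r), :])
               for r in itertools.combinations(range(3), 2)])

# The 2-minors coincide, up to sign, with the cross-product components,
# so both norms give the area of the parallelogram.
assert np.isclose(np.linalg.norm(X2), np.linalg.norm(np.cross(x1, x2)))
```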
\begin{figure}
\centering
\begin{tikzpicture}
\draw[dashed,fill opacity=0.5,fill=green!10] (0,0,0)--(3,0.25,0)--(4.5,1.25,1)--(1.5,1,1)--cycle;
\draw[dashed,fill opacity=0.5,fill=green!10] (1,2,0)--(4,2.25,0)--(5.5,3.25,1)--(2.5,3,1)--cycle;
\draw[dashed,fill opacity=0.5,fill=green!10] (3,0.25,0)--(4.5,1.25,1)--(5.5,3.25,1)--(4,2.25,0)--cycle;
\draw[dashed,fill opacity=0.5,fill=green!10] (0,0,0)--(1.5,1,1)--(2.5,3,1)--(1,2,0)--cycle;
\draw[dashed,fill opacity=0.5,fill=green!10] (1.5,1,1)--(4.5,1.25,1)--(5.5,3.25,1)--(2.5,3,1)--cycle;
\draw[dashed,fill opacity=0.5,fill=green!10] (0,0,0)--(3,0.25,0)--(4,2.25,0)--(1,2,0)--cycle;
\draw[thick,->] (0,0,0)--(3,0.25,0) node[right]{$x^1$};
\draw[thick,->] (0,0,0)--(1,2,0) node[above]{$x^2$};
\draw[thick,->] (0,0,0)--(1.5,1,1) node[above left=-0.1cm]{$x^3$};
\draw (0,0,0) node[below]{0};
\draw (2.55,1.35,0.5) node[]{$\mathcal{P}(x^1,x^2,x^3)$};
\end{tikzpicture}
\caption{The parallelotope~$\mathcal{P}(x^1,x^2,x^3)$ generated by the vectors~$x^1,x^2$, and~$x^3$ (and the zero vertex).}
\label{fig:parallelogram}
\end{figure}
In the special case~$k=n$, Eq.~\eqref{eq:voldet}
becomes the well-known formula
\begin{align*}
\vol (\mathcal{P}(x^1,\dots,x^n))&= |X^{(n)} |_2\\&=|\det(X)|.
\end{align*}
When the vertices of the parallelotope
evolve according to a linear time-varying dynamics, the evolution of the $k$-multiplicative compound depends on another algebraic construction called the $k$-additive compound.
\begin{Definition}\label{def:addi_comp}
The $k$-\emph{additive compound matrix} of~$A \in \mathbb R^{n \times n}$
is defined by
\begin{align}
A^{[k]} := \frac{d}{d \varepsilon} (I_n+\varepsilon A)^{(k)} |_{\varepsilon=0} .
\end{align}
\end{Definition}
Note that this implies that~$ A^{[k]} =
\frac{d}{d \varepsilon}
(\exp(A\varepsilon ))^{(k)}|_{\varepsilon=0}$.
\begin{Example}
Suppose that~$A=pI_n$, with~$p\in\mathbb R$. Then
\begin{align*}
( I_n+\varepsilon A)^{(k)} &=( (1+\varepsilon p ) I_n)^{(k)} \\
&=(1+\varepsilon p )^k I_r,
\end{align*}
where~$r:=\binom{n}{k}$, so
\begin{align*}
(p I_n)^{[k]} &= \frac{d}{d \varepsilon} (1+\varepsilon p )^k I_r |_{\varepsilon=0}\\
&=k p I_r.
\end{align*}
\end{Example}
Definition~\ref{def:addi_comp} implies that~$A^{[1]}=A$,
$A^{[n]}=\operatorname{tr}(A)$,
and that
\begin{equation}\label{eq:poyrt}
(I_n+\varepsilon A)^{(k)}= I_r+\varepsilon A^{[k]} +o(\varepsilon ) ,
\end{equation}
where~$r:=\binom{n}{k}$.
Thus,~$\varepsilon A^{[k]}$ is the first-order term in the Taylor series of~$(I+\varepsilon A)^{(k)}$.
Also, if~$A$ is square then~$(A^T)^{ [k]} = (A^{[k]})^T$, and in particular if~$A$ is symmetric then so is~$A^{[k]}$.
\begin{Example}\label{exa:sdig}
If~$D=\operatorname{diag}(d_1,\dots,d_n)$ then
$
(I+\varepsilon D)^{(k)}=\operatorname{diag} \left ( \prod_{i=1}^k (1+\varepsilon d_i)
,\dots,\prod_{i=n-k+1}^n (1+\varepsilon d_i) \right ),
$
so~\eqref{eq:poyrt} gives
$
D^{[k]}=\operatorname{diag}( \sum_{i=1}^k d_i
,\dots, \sum_{i=n-k+1}^n d_i
) .
$
In particular, every eigenvalue of~$D^{[k]}$ is the sum of~$k$ eigenvalues of~$D$.
\end{Example}
More generally, if~$A\in\mathbb R^{n\times n}$ with eigenvalues~$\lambda_1,\dots,\lambda_n$
then the eigenvalues of~$A^{ [ k ] }$ are all the~$\binom{n}{k}$ sums:
\[
\lambda_{i_1} + \lambda_{i_2} + \dots + \lambda_{i_k} , \text{ with } 1\leq i_1 < i_2 <\dots < i_k\leq n.
\]
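Definition~\ref{def:addi_comp} and the spectral-sum property can be checked numerically; the sketch below (helper names are ours) approximates~$A^{[k]}$ by a central difference of~$(I+\varepsilon A)^{(k)}$:

```python
# Sketch: approximate A^{[k]} from Definition 2 by a central difference of
# (I + eps*A)^{(k)}, then check linearity and the eigenvalue-sum property.
import itertools
import numpy as np

def mult_compound(A, k):
    """k-multiplicative compound: all k-minors in lexicographic order."""
    n, m = A.shape
    rows = list(itertools.combinations(range(n), k))
    cols = list(itertools.combinations(range(m), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

def add_compound(A, k, eps=1e-6):
    """Numerical k-additive compound: d/d(eps) of (I + eps*A)^{(k)} at eps = 0."""
    I = np.eye(A.shape[0])
    return (mult_compound(I + eps * A, k) - mult_compound(I - eps * A, k)) / (2 * eps)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
k = 2

# Linearity: (A + B)^{[k]} = A^{[k]} + B^{[k]}
assert np.allclose(add_compound(A + B, k),
                   add_compound(A, k) + add_compound(B, k))

# On a symmetric matrix (real spectrum): the eigenvalues of S^{[2]} are
# all pairwise sums of eigenvalues of S.
S = A + A.T
eig = np.linalg.eigvalsh(S)
sums = np.sort([eig[i] + eig[j] for i, j in itertools.combinations(range(4), 2)])
assert np.allclose(sums, np.sort(np.linalg.eigvalsh(add_compound(S, k))), atol=1e-6)
```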
It follows from~\eqref{eq:poyrt} and the properties of the multiplicative compound that~$
(A+B)^{[k]}= A^{[k]}+B^{[k]}
$ for any~$A,B\in\mathbb R^{n\times n}$,
thus justifying the term \emph{additive compound}. In fact, the mapping~$A\to A^{[k] }$ is linear~\citep{schwarz1970}.
Note that if~$Q\in\mathbb R^{n\times n}$ is positive definite then it is symmetric with positive eigenvalues and thus~$Q^{(k)}$ and~$Q^{[k]}$ are symmetric with positive eigenvalues, so they are also positive definite.
Below we will use the following relations. Let~$A \in \mathbb R^{n \times n}$. If $U \in \mathbb R^{p \times n}$ and~$V\in \mathbb R^{n \times p}$ then
\begin{equation}\label{eq:k_mul_coor_trans}
(U A V )^{ (k) } = U^{(k)} A^{ (k) } V^{ (k) } ,
\end{equation}
and if, in addition, $UV=I_p$ then combining this with Definition~\ref{def:addi_comp} gives
\begin{equation}\label{eq:k_add_coor_trans}
(U A V )^{[k]} = U^{(k)}A^{[k]} V ^{(k)} .
\end{equation}
For more on the applications of compound matrices to
systems and control theory, see e.g.~\citep{wu2021diagonal, margaliot2019revisiting, ofir2021sufficient, ron_DAE,grussler2022variation,LI1999191}, and the recent tutorial by~\cite{comp_long_tutorial}.
\subsection{Matrix measures}
Matrix measures (also called logarithmic norms~\citep{strom1975logarithmic}) provide an easy-to-check sufficient condition for contraction~\citep{sontag_cotraction_tutorial}.
Consider a norm~$|\cdot|:\mathbb R^n\to\mathbb R_+$.
The induced matrix norm~$\|\cdot\|:\mathbb R^{n\times n}\to\mathbb R_+$ is
defined by~$\|A\| := \max_{|x|=1} |Ax| $, and the
induced matrix measure $\mu(\cdot):\mathbb R^{n\times n}\to\mathbb R $ is
defined by
\[
\mu(A) := \lim_{\varepsilon \downarrow 0} \frac{\|I + \varepsilon A\| - 1}{\varepsilon} .
\]
The matrix measure is sub-additive, i.e.~$\mu(A+B)\leq\mu(A)+\mu(B)$. Also,~$\mu(c I_n)=c$ for any~$c\in \mathbb R$.
Denote the~$L_2$ norm by
$|x|_2:=(x^Tx)^{1/2}$.
The corresponding matrix norm
is~$\| A \|_2=\left (\lambda_{\max}(A^TA)\right)^{1/2} $,
and the
corresponding matrix
measure is~\citep{vidyasagar2002nonlinear}:
\begin{equation}\label{eq:matirxm} \begin{aligned}
\mu_2(A) = (1/2) \lambda_{\max}\left( A + A^T \right) ,
\end{aligned} \end{equation}
where~$\lambda_{\max} (S)$ denotes the largest eigenvalue of the symmetric matrix~$S$.
For an invertible matrix~$H \in\mathbb R^{n\times n}$, a scaled~$L_2$ norm is defined by~$|x|_{2,H} :=|H x|_2$, and the induced matrix measure is
\begin{align}\label{eq:weight_mat_meas}
\mu_{2,H}(A)& =
\mu_2(H A H^{-1} )\nonumber\\&
= (1/2) \lambda_{\max}\left(H A H^{-1} +
(H A H^{-1} )^T \right).
\end{align}
In particular,
\begin{equation}\label{eq:weight_mat_meas_symmet}
\mu_{2,H}(A)
= \lambda_{\max}\left(H A H^{-1} \right) \text{ if } A=A^T \text{ and } H=H^T.
\end{equation}
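These formulas are easy to verify numerically; the sketch below (the helpers \texttt{mu2} and \texttt{mu2\_scaled} are our own names) compares~\eqref{eq:matirxm} with the limit definition of~$\mu$ and checks two standard properties:

```python
# Sketch: compare the L2 matrix measure formula mu_2(A) = (1/2) lambda_max(A + A^T)
# with the limit definition, and evaluate a scaled measure.
import numpy as np

def mu2(A):
    """mu_2(A) = (1/2) * lambda_max(A + A^T)."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

def mu2_scaled(A, H):
    """mu_{2,H}(A) = mu_2(H A H^{-1}) for invertible H."""
    return mu2(H @ A @ np.linalg.inv(H))

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])

# Limit definition: mu(A) = lim_{eps -> 0+} (||I + eps*A||_2 - 1) / eps
eps = 1e-7
approx = (np.linalg.norm(np.eye(2) + eps * A, 2) - 1.0) / eps
assert abs(mu2(A) - approx) < 1e-5

# Sub-additivity and mu(c*I) = c
B = np.array([[0.0, 2.0], [1.0, -3.0]])
assert mu2(A + B) <= mu2(A) + mu2(B) + 1e-12
assert np.isclose(mu2(3.0 * np.eye(2)), 3.0)
```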
As shown in~\citep{kordercont}, a sufficient condition for the system~$\dot x=f(t,x)$ to be $k$-contractive is that~$\mu((J(t,x))^{[k]})\leq-\eta<0$ for all~$t,x$, where~$J:=\frac{\partial}{\partial x}f$ is the Jacobian of the vector field~$f$. For~$k=1$, this reduces to the standard sufficient condition for contraction, namely, $\mu(J (t,x)) \leq -\eta<0$ for all~$t,x$.
In other words,~$1$-contraction is just contraction.
\begin{Example}
Consider the LTI system
\begin{equation}\label{eq:lti}
\dot x(t)=Ax(t).
\end{equation}
If~$\mu(A ^{[1]} ) <0$ for some matrix measure~$\mu$ then~$A^{[1]}=A$ is Hurwitz, and thus every solution of~\eqref{eq:lti} converges to the unique equilibrium at the origin.
If~$\mu(A^{[2]}) <0$ for some matrix measure~$\mu$ then~$A^{[2]}$ is Hurwitz.
Thus, the sum of any two eigenvalues of~$A$ has a negative real part. In particular,~$A$ cannot have a non-zero purely imaginary eigenvalue, and zero can be an eigenvalue of~$A$ with multiplicity at most one, so
any bounded solution of~\eqref{eq:lti} converges to an equilibrium point.
\end{Example}
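A minimal numerical illustration of the gap between contraction and 2-contraction, with an eigenvalue set we chose for illustration:

```python
# Sketch: D = diag(1, -3, -3) is not Hurwitz, yet every pairwise sum of its
# eigenvalues is negative, so D^{[2]} (the diagonal matrix of pairwise sums)
# is Hurwitz.
import itertools
import numpy as np

d = np.array([1.0, -3.0, -3.0])   # eigenvalues chosen for illustration
pair_sums = [d[i] + d[j] for i, j in itertools.combinations(range(3), 2)]

assert d.max() > 0          # D itself is not Hurwitz
assert max(pair_sums) < 0   # but D^{[2]} is Hurwitz
```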
Note that if~$A,T\in\mathbb R^{n\times n}$, with~$T$ non-singular, then
\begin{align}\label{eq:mu2add}
\mu_{2,T^{(k)} } (A^{[k]} ) & = \mu_{2 } (T^{(k)}A^{[k]} (T^{(k)}) ^{-1} )\nonumber\\
&=\mu_2( (TAT^{-1})^ { [k]} ),
\end{align}
where the last equality follows from~\eqref{eq:k_add_coor_trans}.
\section{Main result}\label{sec:main}
In this section, we derive a sufficient condition for $k$-contraction of the closed-loop system~\eqref{eq:closed_loop1}.
We assume that~$\Phi$ is continuously differentiable and denote its Jacobian
by~$J_\Phi(t,y) := \frac{\partial \Phi}{\partial y}(t,y)$. The Jacobian of~\eqref{eq:closed_loop1} is then
\begin{equation}\label{eq:closed_loop_jac}
J(t,x): = A - BJ_\Phi(t,Cx)C,
\end{equation}
so
\[
J^{[k]}=A^{[k]}-(BJ_\Phi C)^{[k]}.
\]
Guaranteeing that~$\mu(J^{[k]})\leq-\eta<0$ is non-trivial due to the term~$(BJ_\Phi C)^{[k]}$. Our goal is to find a sufficient condition guaranteeing that~$\mu_2(J^{[k]})\leq-\eta<0$
that satisfies the following properties: (1) it decomposes, as much as possible,
into a condition on the linear subsystem and a condition on the nonlinearity~$\Phi$; (2) it reduces for~$k=1$ to a standard sufficient condition for contraction; and (3) for~$k>1$ it is strictly weaker than the
standard sufficient condition for contraction.
We can now state our main result.
\begin{Theorem}\label{thm:lure_suff}
Consider the Lurie system~\eqref{eq:closed_loop1}.
Fix~$k \in [1,n]$.
Suppose that
there exist~$\eta_1,\eta_2\in \mathbb R$ and~$P \in \mathbb R^{n \times n}$, where~$P =QQ $ with~$Q \succ 0$, such that
\begin{align}\label{eq:k_riccati_Pk}
& P^{(k)}A^{[k]} + (A^{[k]})^TP^{(k)} + \eta_1 P^{(k)} \\
&+ Q^{(k)}\left((QBB^TQ)^{[k]} + (Q^{-1}C^TCQ^{-1})^{[k]}\right)Q^{(k)} \preceq 0 , \nonumber
\end{align}
and
\begin{equation}\label{cond:k_smallgain}
\sum_{i=1}^k \sigma_i^2(C Q^{-1}) \left( \sigma_i^2(J_\Phi(t,y))-1\right ) \leq - \eta_2 ,
\end{equation}
for all $t \ge 0, y \in \mathbb R^q$. Then the Jacobian of the closed-loop system~\eqref{eq:closed_loop1} satisfies
\[
\mu_{2,Q^{(k)}}(J^{[k]}(t,x)) \leq -(\eta_1 + \eta_2)/2 \text{ for all } t \ge 0, x \in \mathbb R^n.
\]
In particular, if~$\eta_1 + \eta_2 > 0$, then the closed-loop system~\eqref{eq:closed_loop1} is $k$-contractive with rate~$(\eta_1 + \eta_2)/2$ w.r.t. the scaled $L_2$ norm~$|z|_{2,Q^{(k)}}=|Q^{(k)}z|_2$.
\end{Theorem}
Before proving this result (see Section~\ref{sec:proof}), we give several comments.
We refer to condition~\eqref{eq:k_riccati_Pk} as the~$k$-ARI. Note that this condition only involves the matrices $A,B,C$ defining the linear subsystem.
Condition~\eqref{cond:k_smallgain} includes both the matrices~$C,Q$ and the Jacobian of the non-linear function. However, if
the small gain condition~$\sigma_1(J_\Phi)\leq 1 $ holds
then~\eqref{cond:k_smallgain} holds with~$\eta_2=0$. More generally, if~$\sigma_1(J_\Phi)$ is uniformly
bounded by some constant~$\beta>0$,
then we can always scale the closed-loop system~\eqref{eq:closed_loop1}
so
that the small gain condition holds by considering
\begin{equation}\label{initial_q}
\begin{array}{l}
\dot x=Ax+ \beta Bu ,\\
y=Cx,\\
u=-\frac1{\beta}\Phi(t,y).
\end{array}
\end{equation}
Now applying Thm.~\ref{thm:lure_suff} yields the following result.
\begin{Corollary}
Suppose that
\[
\sigma_1(J_\Phi(t,y))\leq \beta \text{ for all } t\geq 0
\text{ and } y \in \mathbb R^q,
\]
and that
there exist~$\eta_1 >0 $ and~$P \in \mathbb R^{n \times n}$, where~$P =QQ $, with~$Q \succ 0$, such that
\begin{align} \label{eq:k_riccati_Pk_2}
& P^{(k)}A^{[k]} + (A^{[k]})^TP^{(k)} + \eta_1 P^{(k)}\nonumber \\
&
+ Q^{(k)}\left(\beta^{2}(QBB^TQ)^{[k]} + (Q^{-1}C^TCQ^{-1})^{[k]}\right)Q^{(k)} \preceq 0 .
\end{align}
Then the closed-loop system~\eqref{eq:closed_loop1} is $k$-contractive with rate~$\eta_1/2$ w.r.t. the scaled $L_2$ norm~$|z|_{2,Q^{(k)}}=|Q^{(k)}z|_2$.
\end{Corollary}
\begin{Remark}
Note that when~$k=1$, Eq.~\eqref{eq:k_riccati_Pk} holds for some~$\eta_1 > 0$ if and only if the familiar ARI
\begin{equation}\label{eq:fari}
P A + A^TP + P BB^T P + C^TC \prec 0
\end{equation}
holds. Assuming that the LTI subsystem is minimal,~\eqref{eq:fari} holds if and only if $A$ is Hurwitz and the $H_\infty$ norm of the~LTI subsystem is smaller than one~\cite[Chapter~5]{khalil_book}. Similarly,~\eqref{cond:k_smallgain} holds for some~$\eta_2 \geq 0$ if and only if~$\|J_\Phi\|_2 \le 1$, so in the special case~$k=1$ Thm.~\ref{thm:lure_suff} becomes a small-gain sufficient condition for standard contraction.
\end{Remark}
\begin{Remark}
Let
\begin{align}\label{eq:defmatr}
R &:= QAQ^{-1} +Q^{-1}A^TQ + {\eta_1}{k^{-1}} I_n + QB B^TQ\nonumber \\&+ Q^{-1}C^TCQ^{-1} .
\end{align}
Then
\begin{align*}
R^{[k]}&= Q^{(k)}A^{[k]}(Q^{(k)})^{-1} + (Q^{(k)})^{-1}(A^{[k]})^TQ^{(k)} + \eta_1 I_r\\& + (QBB^TQ)^{[k]} + (Q^{-1}C^TCQ^{-1})^{[k]} ,
\end{align*}
and this implies that
condition~\eqref{eq:k_riccati_Pk} can be written more succinctly as
\begin{equation}\label{eq:rkk}
R^{[k]} \preceq 0.
\end{equation}
If $\lambda_1\ge\lambda_2\ge\dots\ge\lambda_n$ is the (real) spectrum of the symmetric matrix $R$, then~\eqref{eq:rkk} can be written as $\sum_{i=1}^k\lambda_i\leq 0$.
Consider the particular choice~$P=p I_n$, with~$p>0$. Then~$Q=p^{1/2} I_n$, so
\[
R=A+A^T+\eta_1k^{-1}I_n +p B B^T +p^{-1} C^T C ,
\]
and~\eqref{eq:rkk} becomes
\begin{equation}\label{eq:almostricca}
A^{[k]}+(A^{[k]})^T +\eta_1 I_r+p (BB^T)^{[k]} +p^{-1}(C^TC)^{[k]}\preceq 0.
\end{equation}
\end{Remark}
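The eigenvalue-sum form of~\eqref{eq:rkk} is convenient to test numerically; in the hypothetical example below the condition fails for~$k=2$ but holds for~$k=3$:

```python
# Sketch: for symmetric R, R^{[k]} <= 0 iff the sum of the k largest
# eigenvalues of R is <= 0 (R here is a hypothetical example).
import numpy as np

R = np.diag([1.0, -0.4, -2.0])
lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # decreasing order

assert lam[:2].sum() > 0   # the condition fails for k = 2 ...
assert lam[:3].sum() < 0   # ... but holds for k = 3
```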
It is natural to expect that a sufficient condition for~$k$-contraction also guarantees~$\ell$-contraction for any~$\ell > k$ (see \citep{kordercont,wu2020generalization}). The next result shows that this is indeed so for the conditions in Theorem~\ref{thm:lure_suff}.
\begin{prop}
Suppose that the conditions in Theorem~\ref{thm:lure_suff} hold for some integer~$k\geq 1$ and~$\eta_1,\eta_2\geq 0$.
Then they hold for any~$\ell>k $ with the same~$\eta_1,\eta_2$.
\end{prop}
\begin{pf}
Suppose that there exists~$P=QQ$, with~$Q\succ 0$, such that~\eqref{eq:k_riccati_Pk} and~\eqref{cond:k_smallgain}
hold with~$\eta_1,\eta_2\geq 0$. Fix an integer~$\ell>k$.
Recall that
condition~\eqref{eq:k_riccati_Pk} is equivalent to
$\sum_{i=1}^k \lambda_i(R) \leq 0$, where~$R$ is the symmetric
matrix defined in~\eqref{eq:defmatr}. Since the eigenvalues of~$R$ are ordered in decreasing
order, we have~$\lambda_k(R) \leq 0$ and thus~$\lambda_j(R) \leq 0$ for any~$j>k$.
Hence, $\sum_{i=1}^\ell \lambda_i(R) \leq 0$, so condition~\eqref{eq:k_riccati_Pk}
also holds when we replace~$k$ by~$\ell$.
Similarly, since the singular values of~$J_\Phi$ are ordered in decreasing order, it
follows from~\eqref{cond:k_smallgain}
that~$ \sigma_k^2(J_\Phi(t,y))<1$, and thus~$\sigma_j^2(J_\Phi(t,y))<1$ for any~$j>k$, so
\begin{align*}
\sum_{i=1}^\ell \sigma_i^2(C Q^{-1}) &\left( \sigma_i^2(J_\Phi(t,y))-1\right )\\
& \leq \sum_{i=1}^k \sigma_i^2(C Q^{-1}) \left( \sigma_i^2(J_\Phi(t,y))-1\right ) \\
& \leq - \eta_2 ,
\end{align*}
so condition~\eqref{cond:k_smallgain} holds when we replace~$k$ by~$\ell$.
\hfill{\qed}
\end{pf}
\section{Proof of main result}\label{sec:proof}
This section is devoted to the proof
of Thm.~\ref{thm:lure_suff}. This requires two auxiliary results.
We begin by restating the ARI~\eqref{eq:k_riccati_Pk} using scaled~$L_2$ matrix measures. Recall that given~$P,Q \succ 0$ such that~$P = QQ$, the inequality~$PA + A^TP \prec 0$ holds iff~$\mu_{2,Q}(A)<0$~\citep{sontag_cotraction_tutorial}.
In other words,~$A$ is Hurwitz iff it is contractive w.r.t. some scaled~$L_2$ norm.
The following result generalizes this from a Lyapunov inequality to a Riccati inequality, and from contraction to~$k$-contraction.
\begin{lem}[ARI and scaled~$L_2$ norm]\label{lem:riccati_lognorm}
Let $A \in \mathbb R^{n \times n}, B \in \mathbb R^{n \times m}, C \in \mathbb R^{q \times n}$. Fix~$k \in [1,n]$. Then the following two conditions are equivalent:
\begin{enumerate}
\item there exist~$\eta_1 \in \mathbb R$ and~$P \in \mathbb R^{n \times n}$, where~$P =QQ $ with~$Q \succ 0$, such that
the ARI~\eqref{eq:k_riccati_Pk} holds.
\item the scaled~$L_2$ norm of~$A^{[k]}$ satisfies
\begin{align}\label{eq:muric}
2\mu_{2,Q^{(k)}}\left(A^{[k]} \right ) &\le -\eta_1 -\mu_2\left ((QBB^TQ)^{[k]} \right )\nonumber\\& - \mu_2\left ((Q^{-1}C^TCQ^{-1})^{[k]} \right ).
\end{align}
\end{enumerate}
\end{lem}
\begin{pf}
Suppose that the $k$-ARI~\eqref{eq:k_riccati_Pk} holds.
Substituting~$P=QQ$ and using the Cauchy-Binet formula gives
\begin{align}
& Q^{(k)}Q^{(k)}A^{[k]} + (A^{[k]})^T Q^{(k)} Q^{(k)} + \eta_1 Q^{(k)} Q^{(k)} \nonumber \\
&+ Q^{(k)}\left((QBB^TQ)^{[k]} + (Q^{-1}C^TCQ^{-1})^{[k]} \right )
Q^{(k)} \preceq 0. \label{eq:midd}
\end{align}
Multiplying~\eqref{eq:midd} on the left- and on the right-hand sides by the symmetric matrix~$(Q^{(k)})^{-1}=(Q^{-1} ) ^{(k)}$
yields
\begin{align*}
&Q^{(k)}A^{[k]}(Q^{-1} )^{(k)} + (Q^{-1} ) ^{(k)}(A^{[k]})^T Q^{(k)} + \eta_1 I_r\\& + (QBB^TQ)^{[k]} + (Q^{-1}C^TCQ^{-1})^{[k]} \preceq 0,
\end{align*}
where $r := \binom{n}{k}$.
Combining this with~\eqref{eq:k_add_coor_trans} gives
\begin{align*}
& ( QAQ^{-1} +Q^{-1}A^TQ )^{[k]} + \eta_1 I_r + (QB B^TQ)^{[k]} \\&+ (Q^{-1}C^TCQ^{-1})^{[k]} \preceq 0,
\end{align*}
and using~\eqref{eq:weight_mat_meas} yields
\begin{align*}
2\mu_{2,Q^{(k)}} \left(A^{[k]} \right ) &\le -\eta_1 - \mu_{2}\left((QB B^TQ)^{[k]} \right) \\&- \mu_2\left((Q^{-1}C^TCQ^{-1})^{[k]} \right ).
\end{align*}
We conclude that the
$k$-ARI~\eqref{eq:k_riccati_Pk} implies~\eqref{eq:muric}. The proof of the converse implication is very similar and is thus omitted.
This completes the proof of Lemma~\ref{lem:riccati_lognorm}.~\hfill{\qed}
\end{pf}
\begin{Remark}
Note that for~$k=1$, Eq.~\eqref{eq:k_riccati_Pk} holds for some~$\eta_1 > 0$ if and only if the ARI~\eqref{eq:fari} holds, whereas~\eqref{eq:muric}
becomes
\[
2\mu_{2,Q}(A) \le -\eta_1 -\mu_2\left ( QBB^TQ \right ) - \mu_2\left (Q^{-1}C^TCQ^{-1} \right ).
\]
For~$k=n$, the $k$-ARI~\eqref{eq:k_riccati_Pk} reduces to
\begin{align*}
& 2\operatorname{tr}(A) \det(P) + \eta_1 \det(P) + (\det(Q))^2 \operatorname{tr}(QBB^T Q)\\& + (\det(Q))^{2}\operatorname{tr}(Q^{-1}C^TCQ^{-1}) \le 0 ,
\end{align*}
whereas~\eqref{eq:muric}
becomes
\[
2\operatorname{tr}(A) \le -\eta_1 -\operatorname{tr}(QBB^TQ) - \operatorname{tr}(Q^{-1}C^TCQ^{-1}).
\]
Since~$\det(P) = (\det(Q))^2>0$, these two inequalities are indeed equivalent.
\end{Remark}
The next result gives an upper bound on the scaled $L_2$ matrix measure of the product of two matrices. It is based on the following simple completing-the-square identity:
\begin{equation}\label{eq:square_comp}
AB + B^TA^T = (A^T + B)^T(A^T + B) - AA^T - B^TB,
\end{equation}
for any two matrices~$A \in \mathbb R^{n \times m}$ and~$B \in \mathbb R^{m \times n}$.
\begin{lem}[Matrix measure of a product of matrices]\label{lem:lognorm_prod}
Let $A \in \mathbb R^{n \times m}, B \in \mathbb R^{m \times n}$. Fix~$k \in [1,n]$. Let~$T \in \mathbb R^{n \times n}$ be invertible. Then
\begin{align*}
2\mu_{2,T^{(k)}}(-(AB)^{[k]}) &\le \mu_2((TAA^TT^T)^{[k]}) \\&+ \mu_2(((T^{-1})^T B^TBT^{-1})^{[k]}).
\end{align*}
\end{lem}
\begin{pf}
We begin by considering
\begin{align}\label{eq:samear}
2 \mu_2(-AB) &= \lambda_{\max} \left ( -AB-B^TA^T \right )\nonumber\\
&= \lambda_{\max} \left ( AA^T+B^TB-
(A^T+B)^T(A^T+B) \right )\nonumber\\
&\leq \lambda_{\max} \left ( AA^T \right ) +\lambda_{\max} \left ( B^TB \right )\nonumber\\
&=\mu_2(AA^T)+\mu_2(B^T B) .
\end{align}
By~\eqref{eq:square_comp} and linearity of the~$k$-additive compound,
\begin{align*}
(AB)^{[k]} +( B^TA^T)^{[k]}&=( (A^T + B)^T(A^T + B) )^{[k]}\\&- (AA^T)^{[k]} -( B^TB)^{[k]},
\end{align*}
and thus arguing as in~\eqref{eq:samear} gives
\begin{equation}\label{eq:samear_k}
2\mu_2(-(AB)^{[k]}) \le \mu_2((AA^T)^{[k]}) + \mu_2((B^TB)^{[k]}).
\end{equation}
By~\eqref{eq:mu2add},
\begin{align*}
\mu_{2,T^{(k)}}(-(AB)^{[k]}) &= \mu_2(-(TABT^{-1})^{[k]})\\
&=\mu_2(-(\bar A \bar B)^{[k]}) ,
\end{align*}
where~$\bar A:= TA$ and~$\bar B:=BT^{-1}$, and
applying~\eqref{eq:samear_k}
gives
\begin{align*}
2 \mu_{2,T^{(k)}}(-(AB)^{[k]}) & \leq \mu_2((\bar A \bar A^T)^{[k]}) + \mu_2((\bar B^T \bar B)^{[k]}).
\end{align*}
This
completes the proof of Lemma~\ref{lem:lognorm_prod}.~\hfill{\qed}
\end{pf}
\begin{Remark}
Note that for~$A=-I_n$,~$B=T=I_n$, we get
\[2 \mu_{2,T^{(k)}}(-(AB)^{[k]}) = 2k,
\]
whereas
\[
\mu_2((TAA^TT^T)^{[k]}) + \mu_2((T^{-T}B^TBT^{-1})^{[k]}) = 2k,
\]
so in this case the bound in Lemma~\ref{lem:lognorm_prod} is tight.
\end{Remark}
\begin{Remark}
The main
tool in the proof of Lemma~\ref{lem:lognorm_prod} is the inequality~\eqref{eq:samear}.
For the case~$k=1$, we can generalize~\eqref{eq:samear} to arbitrary matrix measures as follows:
\begin{align*}
2\mu(-AB) &\le 2\|-AB\| \\
&\le 2 \|A\|\|B\| \\
&=2 \sqrt{\|A\|^2\|B\|^2} \\
&\le \|A\|^2 + \|B\|^2 ,
\end{align*}
where the last step follows from the inequality of arithmetic and geometric means. However, it is not obvious how this can be extended to $\mu( -(AB)^{[k]})$, where~$\mu$ is a general matrix measure.
\end{Remark}
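The key inequality~\eqref{eq:samear} is also easy to validate numerically on random data (the helper \texttt{mu2}, implementing~\eqref{eq:matirxm}, is our own name):

```python
# Sketch: numerical check of 2*mu_2(-AB) <= mu_2(A A^T) + mu_2(B^T B)
# on random matrices.
import numpy as np

def mu2(M):
    """mu_2(M) = (1/2) * lambda_max(M + M^T)."""
    return np.linalg.eigvalsh((M + M.T) / 2).max()

rng = np.random.default_rng(1)
for _ in range(100):
    A = rng.standard_normal((3, 4))
    B = rng.standard_normal((4, 3))
    assert 2 * mu2(-A @ B) <= mu2(A @ A.T) + mu2(B.T @ B) + 1e-9
```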
We can now prove Theorem~\ref{thm:lure_suff}.
\begin{pf}
Let~$Q \succ 0$ be such that $P = QQ$, and let $J$ be the Jacobian defined in~\eqref{eq:closed_loop_jac}. By the linearity of the additive compound and sub-additivity of matrix measures, we have
\begin{align*}
2\mu_{2,Q^{(k)}}(J^{[k]}) &= 2\mu_{2,Q^{(k)}}((A - BJ_\Phi C)^{[k]}) \\
&= 2\mu_{2,Q^{(k)}}( A ^{[k]} - (BJ_\Phi C)^{[k]}) \\
&\le 2\mu_{2,Q^{(k)}}(A^{[k]}) + 2\mu_{2,Q^{(k)}}(-(BJ_\Phi C)^{[k]}) ,
\end{align*}
and applying Lemma~\ref{lem:riccati_lognorm} gives
\begin{align*}
2\mu_{2,Q^{(k)}}(J^{[k]})
&\le -\eta_1 -\mu_2\left ((QBB^TQ)^{[k]} \right ) \\& - \mu_2\left ((Q^{-1}C^TCQ^{-1})^{[k]}\right ) \\& + 2\mu_{2,Q^{(k)}}(-(BJ_\Phi C)^{[k]})
.
\end{align*}
By Lemma~\ref{lem:lognorm_prod},
\begin{align*}
2 \mu_{2,Q^{(k)}}(-(BJ_\Phi C)^{[k]}) &\leq \mu_2 ( ( QBB^T Q^T )^{[k]} )\\& + \mu_2(
( Q^{-T} C^T J_\Phi ^T J_\Phi C Q ^{-1} )^{[k]}
) ,
\end{align*}
so
\begin{align}\label{eq:mulj}
2 \mu_{2,Q^{(k)}}(J^{[k]})
&\le -\eta_1 -\mu_2((Q^{-1} C^T C Q^{-1})^{[k]}) \nonumber\\
&\quad +\mu_2((Q^{-1} C^T J_\Phi^T J_\Phi C Q^{-1})^{[k]}) \nonumber\\
&= -\eta_1 -\sum_{i=1}^k \sigma_i^2(C Q^{-1}) + \sum_{i=1}^k \sigma_i^2(J_\Phi C Q^{-1}) .
\end{align}
Recall that for any~$A\in\mathbb R^{m\times p},B\in\mathbb R^{p\times n}$, we have
\begin{equation}\label{eq:sigmai2}
\sum_{i=1}^k \sigma_i^s(AB) \leq
\sum_{i=1}^k (\sigma_i(A)\sigma_i(B))^s
\end{equation}
for any $k\in[1,\min\{m,p,n\}]$, $s>0$
\citep[Thm.~3.3.14]{Horn1991TopicsMatrixAna}.
Applying this to~\eqref{eq:mulj} and using~\eqref{cond:k_smallgain}
gives
\begin{align*}
2\mu_{2,Q^{(k)}}(J^{[k]}) &\le -\eta_1 + \sum_{i=1}^k \sigma_i^2(CQ^{-1})( \sigma_i^2(J_\Phi)-1) \\
&\leq - \eta_1 - \eta_2 ,
\end{align*}
so if~$\eta_1 + \eta_2 > 0$ then
the closed-loop system is~$k$-contractive with rate~$ (\eta_1 + \eta_2)/2$ w.r.t. the scaled~$L_2$ norm~$|z|_{2,Q^{(k)}}= |Q^{(k)} z|_2$. This completes the proof of Theorem~\ref{thm:lure_suff}.~\hfill{\qed}
\end{pf}
\begin{Remark}\label{rem:scalar_P_C}
Note that for the particular case~$P=pI_n$, with~$p>0$, we have~$Q=p^{1/2} I$, and Eq.~\eqref{eq:mulj} gives
\begin{align*}
2 \mu_{2,Q^{(k)}}(J^{[k]})
\le &
-\eta_1 -p^{-1}\sum_{i=1}^k \sigma_i^2(C ) + p^{-1}\sum_{i=1}^k \sigma_i^2(J_\Phi C ) .
\end{align*}
If, furthermore,~$C=I_n$ (i.e., the output of the~LTI is~$y=x$) then this gives
\begin{align*}
2 \mu_{2,Q^{(k)}}(J^{[k]})
\le &
-\eta_1 -p^{-1} k + p^{-1}\sum_{i=1}^k \sigma_i^2(J_\Phi ),
\end{align*}
so in this case a sufficient condition for $k$-contraction is
\begin{equation}\label{eq:sumsuff}
\sum_{i=1}^k \sigma_i^2(J_\Phi (t,y) ) < k+ {\eta_1}{p },
\end{equation}
for all~$t\geq 0,y\in\mathbb R^n$.
\end{Remark}
\section{An application: $k$-contraction in a networked~system}\label{sec:networked}
We now apply our main result to analyze the global behaviour of several models including Hopfield networks, a nonlinear opinion dynamics model, and a 2-bus power system. The first step is to consider
a general networked dynamical system
\begin{equation}\label{eq:net_sys}
\dot x(t) = -D x (t)+ W_1f \left (W_2x(t) \right )+v,
\end{equation}
where
$x \in \Omega \subseteq \mathbb R^n$, $D=\operatorname{diag}(d_1,\dots,d_n)$ is a diagonal matrix,
$W_1 \in \mathbb R^{n \times m}, W_2 \in \mathbb R^{q \times n} $ are matrices of interconnection weights, $v\in\mathbb R^n$ is a constant ``offset'' vector, and $f : \mathbb R^q \to \mathbb R^m$. In the context of neural network models, the~$f_i$s are the neuron activation functions. More generally, they may represent functions that are bounded or saturated
and thus non-linear. We assume that the state space~$\Omega$ is convex and that~$f$ is continuously differentiable. Let
\begin{equation*}
J_f(z) = \begin{bmatrix}
\frac{\partial f_1}{\partial z_1}(z) & \dots & \frac{\partial f_1}{\partial z_q}(z) \\
\vdots & \ddots & \vdots \\
\frac{\partial f_m}{\partial z_1}(z) & \dots & \frac{\partial f_m}{\partial z_q}(z)
\end{bmatrix}
\end{equation*}
denote the Jacobian of~$f$.
Intuitively speaking, it is clear that as we take all the~$d_i$s larger the system becomes ``more stable''.
The next result rigorously
formalizes this by providing
a sufficient condition for $k$-contraction based on Theorem~\ref{thm:lure_suff}.
\begin{Theorem}
\label{thm:net_k_contract}
Consider~\eqref{eq:net_sys}. Fix~$k \in [1,n]$, and let
\begin{equation}\label{eq:alphak}
\alpha_k:=\frac{1}{k}\min\left\{ d_{i_1}+\dots+d_{i_k}\, | \, 1\leq i_1<\dots<i_k\leq n \right \}.
\end{equation}
If $ \alpha_k>0 $ and
\begin{equation}\label{cond:net_k_smallgain}
\|J_f(W_2x)\|_2^2 \sum_{i=1}^k \sigma_i^2(W_1)\sigma_i^2(W_2) < \alpha_k^2 k \text{ for all } x \in \Omega,
\end{equation}
then~\eqref{eq:net_sys} is $k$-contractive.
Furthermore, if these conditions hold for~$k=2$ then every bounded trajectory of~\eqref{eq:net_sys} converges to an equilibrium point (which is not necessarily unique).
\end{Theorem}
\begin{Remark}
Note that condition~\eqref{cond:net_k_smallgain} does not require explicitly computing any~$k$-compounds. This is useful, as for a matrix~$A\in\mathbb R^{n\times n}$ the~$k$-compounds have dimensions~$\binom{n}{k}\times\binom{n}{k}$, and this may be quite large (see also~\cite{Dalin2022Duality_kcont}).
The condition~$\alpha_k>0$ is equivalent to requiring that
the sum of every~$k$ eigenvalues of~$D$ is positive. For~$k=1$, this amounts to requiring that~$D$ is a positive diagonal matrix, but for~$k>1$ some of the~$d_i$s may be negative, as long as the sum of every~$k$ of the~$d_i$s is positive.
\end{Remark}
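To illustrate the remark, here is a short computation of~$\alpha_k$ from~\eqref{eq:alphak} for a hypothetical rate vector with one negative entry:

```python
# Sketch: alpha_k for a hypothetical D = diag(3, 2, -0.5); alpha_1 < 0
# (one rate is negative), yet alpha_2 > 0, so the k = 2 condition can apply.
import itertools

def alpha_k(d, k):
    """(1/k) * min over all k-subsets of the sum of the chosen d_i."""
    return min(sum(c) for c in itertools.combinations(d, k)) / k

d = [3.0, 2.0, -0.5]
assert alpha_k(d, 1) < 0 < alpha_k(d, 2)
```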
\begin{pf}
The proof is based on Theorem~\ref{thm:lure_suff}. We first represent~\eqref{eq:net_sys} as a Lurie system. By~\eqref{cond:net_k_smallgain}, there exists~$\gamma\in\mathbb R$ satisfying
\begin{equation}\label{eq:gammprop}
0<\gamma < \alpha_k \text{ and } \|J_f(z)\|_2^2 \sum_{i=1}^k \sigma_i^2(W_1) \sigma_i^2(W_2)< \gamma^2 k.
\end{equation}
We can represent~\eqref{eq:net_sys} as
the interconnection of the LTI system with $(A,B,C)=(-D, \gamma I_n, I_n)$ and the nonlinearity~$\Phi(y) := -\gamma^{-1}W_1f(W_2 y) -\gamma^{-1}v$, that is,
\begin{align}\label{eq:lui}
\dot x&= - D x + \gamma u,\nonumber\\
y&=x,\nonumber\\
u&= \gamma^{-1} W_1f(W_2 y )+\gamma^{-1}v.
\end{align}
For this Lurie system, the~$k$-ARI~\eqref{eq:k_riccati_Pk} holds for some~$P=QQ$, with~$Q \succ 0$, and~$\eta_1>0$ if and only if
\begin{equation}\label{eq:k_riccati_net}
- P^{(k)}D^{[k]}
-D^{[k]} P^{(k)}+ Q^{(k)}(\gamma^2 P + P^{-1})^{[k]}Q^{(k)} \prec 0.
\end{equation}
Taking~$P = p I_n$, with~$p>0$, gives
\begin{equation}\label{eq:k_riccati_scalar}
\left( -2 D^{[k]}+
( \gamma^2 p + p^{-1}) k I_r \right ) p^k \prec 0.
\end{equation}
By definition,~$\alpha_k k$
is a lower bound on the diagonal entries of~$D^{[k]}$. Thus, Eq.~\eqref{eq:k_riccati_scalar}
holds for any~$p>0$ such that
\[
-2\alpha_k+
\gamma^2 p + p^{-1} < 0,
\]
and this indeed admits a solution~$p>0$ since~$\alpha_k > 0$ and~$\gamma < \alpha_k$ (indeed, the left-hand side is minimized over~$p>0$ at~$p=1/\gamma$, where it equals~$2\gamma-2\alpha_k<0$). We conclude that there exists a matrix~$P = p I_n$, with~$p>0$, and a scalar~$\eta_1 > 0$ for which the~$k$-ARI~\eqref{eq:k_riccati_Pk} holds.
We now show that~\eqref{cond:net_k_smallgain} implies that~\eqref{cond:k_smallgain} holds for some~$\eta_2>0$.
Since~$P=pI_n$ and~$C=I_n$, we may apply the result in Remark~\ref{rem:scalar_P_C}. Consider
\begin{align*}
\sum_{i=1}^k \sigma_i^2(J_\Phi)
& = \sum_{i=1}^k \sigma_i^2 (-\gamma^{-1} W_1 J_f W_2)\\
&\leq\gamma^{-2} \sum_{i=1}^k \sigma_i^2 ( W_1 J_f ) \sigma_i^2 ( W_2 ) \\
& \leq \gamma^{-2} \sigma_1^2 ( J_f ) \sum_{i=1}^k \sigma_i^2 ( W_1 ) \sigma_i^2 ( W_2 ) \\
&<k,
\end{align*}
where the first inequality follows from~\eqref{eq:sigmai2},
and the second from~\eqref{eq:gammprop}.
We conclude that the sufficient condition~\eqref{eq:sumsuff} holds,
and Theorem~\ref{thm:lure_suff} implies that~\eqref{eq:net_sys} is $k$-contractive.
Suppose now that~\eqref{cond:net_k_smallgain} holds with~$k=2$. Then~\eqref{eq:net_sys} is 2-contractive. If in addition~$f$ is uniformly bounded, then all the trajectories of~\eqref{eq:net_sys} are bounded, and by known results on time-invariant 2-contractive systems~\citep{li1995} we then have that every trajectory converges to an equilibrium point. This completes the proof of Theorem~\ref{thm:net_k_contract}.~\hfill{\qed}
\end{pf}
\begin{Remark}
In the special case where~$D=\alpha I_n$, the networked dynamical system becomes
\begin{equation}\label{eq:net_sys_DI}
\dot x = -\alpha x + W_1f(W_2x),
\end{equation}
and the sufficient condition for~$k$-contraction is
\begin{equation}\label{cond:net_k_smallgain_special}
\alpha>0 \text{ and } \|J_f(W_2x)\|_2^2 \sum_{i=1}^k \sigma_i^2(W_1)\sigma_i^2(W_2) < \alpha^2 k ,
\end{equation}
for all $x \in \Omega$.
Note also that if either~$f=0$ or~$W_1=0$ or~$W_2=0$ then~\eqref{cond:net_k_smallgain_special} holds for~$k=1$ (and thus for any~$k\in [1,n]$). This is reasonable, as in this case we have~$\dot x=-\alpha x$, and this is indeed $k$-contractive for any~$k\geq 1$.
\end{Remark}
We now apply Theorem~\ref{thm:net_k_contract}
to three specific models: a Hopfield network,
a nonlinear opinion dynamics system, and a 2-bus power system. All these applications are typically multi-stable, that is, they admit more than a single equilibrium point, and thus are not contractive (i.e., not $1$-contractive) w.r.t. any norm. However, our results may still be applied to prove~$k$-contraction with~$k>1$.
\subsection{2-Contraction in Hopfield networks}
A particular example of a networked system in the form~\eqref{eq:net_sys} is the well-known Hopfield network~\citep{hopfield_net}:
\begin{equation}\label{eq:hopfield}
\dot x = -\alpha x + W f(x).
\end{equation}
The stability of this model has been studied extensively, including using contraction theory~\citep{cont_hopfield}. However,
the network has often been
used as an associative memory, where each equilibrium corresponds to a stored pattern (see, e.g.,~\cite{krotov2016}). Hence, the network is naturally multi-stable and thus not contractive (i.e., not 1-contractive) w.r.t. any norm. Here we consider the typical choice of $\tanh(\cdot)$ as the activation function, i.e., taking
\begin{equation}\label{Eq:tabhh}
f(x) = \begin{bmatrix}
\tanh(x_1)&\dots&\tanh(x_n)
\end{bmatrix}^T.
\end{equation}
\begin{Corollary}\label{coro:hopfield}
If
\begin{align}\label{eq:rhp}
\sqrt{\sigma_1^2(W) + \sigma_2^2(W)} < \sqrt{2}\alpha
\end{align}
then every solution of~\eqref{eq:hopfield} with~$f$ given in~\eqref{Eq:tabhh}
converges to an equilibrium point.
\end{Corollary}
\begin{pf}
First, note that it follows from~\eqref{eq:hopfield} and~\eqref{Eq:tabhh} that every solution of the Hopfield network is bounded.
Second, note that~\eqref{eq:hopfield} is a special case of~\eqref{eq:net_sys_DI} with~$W_1=W$ and~$W_2=I_n$, so we can
apply Theorem~\ref{thm:net_k_contract} to the Hopfield network model. In this case,~\eqref{eq:alphak}
gives~$\alpha_k=\alpha$ for all~$k$, so~\eqref{cond:net_k_smallgain}
becomes~$\alpha>0$ and~$
\sum_{i=1}^k \sigma_i^2(W ) < \alpha^2 k$.
In the particular case~$k=2$ this is equivalent to~\eqref{eq:rhp}, and this implies that every bounded solution converges to an equilibrium point.~\hfill{\qed}
\end{pf}
The next example demonstrates that Corollary~\ref{coro:hopfield} may be used to analyze the case where
the network is multi-stable, and thus it is certainly not contractive (i.e., not~$1$-contractive) w.r.t. any norm. We consider the case~$n=3$, as then we can plot the system trajectories.
\begin{Example}
Consider a fully connected Hopfield network with~$3$ neurons, i.e. Eq.~\eqref{eq:hopfield} with~$W = \frac{1}{3} 1_3 1_3^T$, where~$1_n \in \mathbb R^n$ is a column vector of ones. In this case, it can be shown that for~$0 <\alpha < 1$, the network has at least three equilibrium points, namely,
$e^1=0$, $e^2 = z \cdot 1_3$, where~$z>0$ is such that~$\alpha z = \tanh(z)$ (guaranteed to exist since $\alpha \in (0,1)$), and~$e^3=-e^2$. Thus the network is multistable, and so it is not $1$-contractive with respect to any norm. If, in addition,~$\alpha > 1/\sqrt{2}$, then the conditions of Corollary~\ref{coro:hopfield} hold, since~$\sigma_1^2(W) = 1$ and $\sigma_2^2(W) = 0$, so~$\sigma_1^2(W) + \sigma_2^2(W) < 2 \alpha ^2$. Fig.~\ref{fig:hopfield} shows several trajectories of the
system with~$\alpha=0.71>1/\sqrt{2}$. It may be seen that as expected, every solution converges to an equilibrium point.
\end{Example}
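The claims in the example can be checked numerically; here is an illustrative Python sketch (ours; the paper's own scripts are in MATLAB) verifying the singular-value condition and locating the nonzero equilibrium via bisection:

```python
import numpy as np

# Check the Corollary conditions for the fully connected example: W = (1/3)*ones(3,3).
n, alpha = 3, 0.71
W = np.ones((n, n)) / n
s = np.linalg.svd(W, compute_uv=False)   # singular values: 1, 0, 0 (W has rank one)
print(s[0]**2 + s[1]**2 < 2 * alpha**2)  # condition (eq:rhp): True

# The nonzero equilibria solve alpha*z = tanh(z); bisection on g(z) = tanh(z) - alpha*z,
# which is positive at z = 0.5 and negative at z = 3 for alpha in (0, 1).
g = lambda z: np.tanh(z) - alpha * z
lo, hi = 0.5, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
print(round(lo, 3))  # ~1.153, matching the equilibria reported in the figure caption
```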
\begin{figure}
\centering
\includegraphics[width=\linewidth]{hopfield.eps}
\caption{Several trajectories of a Hopfield network with~$n=3$ and~$\alpha=0.71$. For these parameters, $e^1=0$, $e^2 \approx 1.153\cdot 1_3$, and~$e^3 \approx -1.153\cdot 1_3$ are equilibrium points (marked by circles). Initial conditions are marked with crosses. }
\label{fig:hopfield}
\end{figure}
\begin{comment}
MATLAB CODE FOR HOPFIELD NETWORK FIGURE:
set(0,'defaulttextinterpreter','latex');
set(0,'defaultAxesTickLabelInterpreter','latex');
set(0,'defaultfigurecolor',[1 1 1]);
set(0,'defaultaxesfontsize',16);
set(0,{'DefaultAxesXColor','DefaultAxesYColor','DefaultAxesZColor','DefaultTextColor'},...
{'k','k','k','k'});
n = 3;
W = 1/n*ones(n);
d = 0.71;
e1 = fsolve(@(x) d*x - tanh(x), 0);
e2 = fsolve(@(x) d*x - tanh(x), 1);
e3 = -e2;
figure();
for i = 1:10
[~,x] = ode45(@(t,x) -d*x + W*tanh(x), [0 15], randn(3,1));
plot3(x(:,1), x(:,2), x(:,3), 'LineWidth', 1); hold on;
scatter3(x(1,1),x(1,2),x(1,3),'xk', 'LineWidth', 1);
end
E = [e1;e2;e3];
scatter3(E,E,E,'ok', 'LineWidth', 1);
grid on;
xlabel('$x_1$', 'Interpreter', 'latex');
ylabel('$x_2$', 'Interpreter', 'latex');
zlabel('$x_3$', 'Interpreter', 'latex');
\end{comment}
\subsection{An application to a nonlinear opinion dynamics model}
\cite{opinion_leonard} considered
a nonlinear opinion dynamics model in the form
\begin{equation}\label{eq:opin}
\dot x_ i (t)=-d_i x_i +u_i f \left( \sum_{j =1 }^n a_{ij} x_j (t) \right)+b_i,\quad i\in[1,n] ,
\end{equation}
where~$d_i >0$, and~$f:\mathbb R\to\mathbb R$ is an odd saturating function.
Here~$x_i$ represents the opinion of agent~$i$, the term~$ \sum_{j =1 }^n a_{ij} x_j$ is the cue obtained from all the agents that communicate over a network with weights~$a_{ij}$, the term~$-d_i x_i$
represents a ``forgetting'' term, the parameter~$u_i$ determines how ``attentive'' agent~$i$ is to the opinions of the other agents, and~$b_i$ is a constant offset term.
\cite{opinion_leonard} showed that the nonlinear function~$f$ in the model
introduces many behaviours that cannot be captured by linear consensus systems. In particular, the model undergoes a pitchfork bifurcation as~$u$ grows larger: if~$u$ exceeds a certain threshold depending on the topology of the interconnection network, then the model has multiple equilibrium points, several of which are stable. However, Bizyaeva et al. only proved local stability. In this section, we use Theorem~\ref{thm:net_k_contract} to study $k$-contraction in this model; for $k=2$, this yields a global convergence result.
To apply our results, note that~\eqref{eq:opin} can be written as in~\eqref{eq:net_sys}
with~$D=\operatorname{diag}(d_1,\dots,d_n)$, $W_1=\operatorname{diag}(u_1,\dots,u_n)$,
$W_2=A=\{a_{ij}\}_{i,j=1}^n$, and~$v=b=\begin{bmatrix}b_1&\dots&b_n
\end{bmatrix}^T$.
Applying Theorem~\ref{thm:net_k_contract} yields the following result.
\begin{Corollary}\label{coro:opinion_k_cont}
Consider~\eqref{eq:opin} and assume, without loss of generality, that the state variables are ordered such that~$u_1^2\geq\dots\geq u_n^2$. Fix~$k \in [1,n]$, and let
\[
\alpha:=\frac{1}{k}\min\left\{ d_{i_1}+\dots+d_{i_k}\, | \, 1\leq i_1<\dots<i_k\leq n \right \} .
\]
If $\alpha>0 $ and
\begin{equation}\label{cond:net_k_smallgain_opinion}
\|J_f(Ax)\|_2^2 \sum_{i=1}^k u_i^2 \sigma_i^2(A) < \alpha^2 k \text{ for all } x \in \Omega
\end{equation}
then~\eqref{eq:opin} is $k$-contractive.
Furthermore, if~$f$ is uniformly bounded and~\eqref{cond:net_k_smallgain_opinion} holds with $k=2$ then every trajectory of~\eqref{eq:opin} converges to an equilibrium point (which is not necessarily unique).
\end{Corollary}
\begin{Example}\label{exa:opinion}
Consider~\eqref{eq:opin} with $n=3$ agents,
$D = I_3$, $W_1 = u I_3$, with~$u > 0$, $b = \begin{bmatrix} 0.2 & 0 & -0.2 \end{bmatrix}^T$,
connection matrix
\begin{equation*}
A = \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix} - \begin{bmatrix}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0
\end{bmatrix},
\end{equation*}
and $f$ as in~\eqref{Eq:tabhh}. It then follows from Corollary~\ref{coro:opinion_k_cont} that the system is $k$-contractive if
\begin{equation}
u^2\sum_{i=1}^k \sigma_i^2(A) < k.
\end{equation}
In this case,
$\sigma_1^2(A)=3+2\sqrt{2} $,
$\sigma_2^2(A)= 1$,
and~$\sigma_3^2(A)= 3 - 2\sqrt{2} $, so
the system is $1$-contractive for $u < (1 + \sqrt{2})^{-1} \approx 0.414$, it is $2$-contractive for $u < \sqrt{ \frac{2}{4+2\sqrt{2}}} \approx 0.541$, and $3$-contractive for $u < \sqrt{ \frac{3}{7}} \approx 0.655$. Several trajectories of this model with~$u=0.5$ (for which the system is 2-contractive) are shown in Fig.~\ref{fig:opinion}. It may be seen that there exist at least two equilibrium points, so the system is indeed not 1-contractive for these parameter values, and every trajectory converges to an equilibrium.
\end{Example}
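The singular values and contraction thresholds stated in the example can be verified directly; an illustrative Python sketch (ours, not from the paper):

```python
import numpy as np

# The connection matrix of the example: identity minus the path-graph adjacency matrix.
A = np.eye(3) - np.array([[0., 1., 0.],
                          [1., 0., 1.],
                          [0., 1., 0.]])
s2 = np.linalg.svd(A, compute_uv=False)**2   # descending: 3+2*sqrt(2), 1, 3-2*sqrt(2)
print(np.allclose(s2, [3 + 2*np.sqrt(2), 1.0, 3 - 2*np.sqrt(2)]))  # True

# k-contraction threshold on u: u < sqrt(k / sum_{i=1}^k sigma_i^2(A))
for k in (1, 2, 3):
    print(k, round(float(np.sqrt(k / s2[:k].sum())), 3))  # 0.414, 0.541, 0.655
```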
\begin{figure}
\centering
\includegraphics[width=\linewidth]{opinion.eps}
\caption{Numerical simulation of several trajectories of the opinion dynamics model in Example~\ref{exa:opinion} with $u=0.5$. Initial conditions are marked with crosses.}
\label{fig:opinion}
\end{figure}
\begin{comment}
MATLAB CODE FOR BOUND CALCULATIONS AND NUMERICAL SIMULATION:
set(0, 'defaulttextinterpreter', 'latex');
set(0, 'defaultAxesTickLabelInterpreter','latex');
set(0, 'defaultfigurecolor', [1 1 1]);
set(0, 'defaultaxesfontsize', 11);
set(0, {'DefaultAxesXColor', 'DefaultAxesYColor', 'DefaultAxesZColor', 'DefaultTextColor'}, ...
{'k', 'k', 'k', 'k'});
n = 3;
D = eye(n);
gamma = -1;
alpha = 1;
W2 = [alpha, gamma, 0 ;
gamma, alpha, gamma ;
0 , gamma, alpha];
u = 0.5;          % attention gain used in Example exa:opinion
W1 = u*eye(n);    % W1 is used in the ode45 call below but was never defined
v = [0.2;0;-0.2];
E = svd(W2);
for k = (1:3)
sqrt(k / sum(E(1:k).^2))
end
figure();
for i = 1:10
[~,x] = ode45(@(t,x) -x + W1*tanh(W2*x) + v, [0 15], 2*rand(3,1) - 1);
plot3(x(:,1), x(:,2), x(:,3), 'LineWidth', 1); hold on;
scatter3(x(1,1),x(1,2),x(1,3),'xk', 'LineWidth', 1);
end
grid on;
xlabel('$x_1$', 'Interpreter', 'latex');
ylabel('$x_2$', 'Interpreter', 'latex');
zlabel('$x_3$', 'Interpreter', 'latex');
\end{comment}
\subsection{An application to power systems}
We now use our results to provide a global stability result for a power system consisting of two interconnected synchronous generators (see Fig.~\ref{fig:2bus}), based on the so-called Network-Reduced Power System (NRPS) model~\citep{Sauer1998power}. A useful approach for analysing
the stability of the NRPS model, based on
singular perturbation theory,
was first proposed by~\cite{Dorfler2012kuramoto}, and recently extended by~\cite{Weiss2019StabilityNRPS}. In this approach, the NRPS is related to a nonuniform Kuramoto model, for which global stability results can be derived. However, since the approach is based on singular perturbations, it only guarantees local (exponential) stability for the NRPS model and a weaker boundedness result for all trajectories starting in a certain invariant set. In this section, we focus on the case of a system with two generators, and combine the boundedness result with a sufficient condition for 2-contraction, yielding an explicit expression for an invariant set in which convergence to an equilibrium is guaranteed.
\begin{figure}[t]
\centering
\begin{circuitikz}[european,busbar/.style = {draw, rectangle, thick, minimum height=2em, minimum width=0em, inner sep=0em}]
\def\normalcoord(#1){coordinate(#1)}
\def\showcoord(#1){node[circle, red, draw, inner sep=1pt, pin={[red, overlay, inner sep=0.5pt, font=\tiny, pin distance=0.1cm, pin edge={red, overlay}]45:#1}](#1){}}
\let\coord=\normalcoord
\node[busbar,label={[above]Bus 1}](bus1){};
\draw (bus1.west) to[short] ++(-0.5,0) node[vsourcesinshape,anchor=north,rotate=-90](SG1){};
\draw ($(bus1.west)!0.5!(bus1.south west)$) to[short] ++(-0.25,0) to[short] ++(0,-0.1) node[vee](){};
\draw (bus1.east) to[generic,l_={\parbox{2cm}{\centering Transmission\\Line}}] ++(3,0) node[busbar,label={[above]Bus 2}](bus2){};
\draw (bus2.east) to[short] ++(0.5,0) node[vsourcesinshape,anchor=north,rotate=90](SG2){};
\draw ($(bus2.east)!0.5!(bus2.south east)$) to[short] ++(0.25,0) to[short] ++(0,-0.1) node[vee](){};
\end{circuitikz}
\caption{Schematic description of the 2-bus power system.}
\label{fig:2bus}
\end{figure}
Following the network reduced power system model, the system under study is described by
\begin{align}\label{eq:2bus}
M_1 \dot \omega_1(t) &= p_1 -R_1 \omega_1(t) - a \sin(\delta(t) + \varphi), \nonumber\\
M_2 \dot \omega_2(t) &= p_2 -R_2 \omega_2(t) + a \sin(\delta(t) - \varphi), \nonumber\\
\dot \delta(t) &= \omega_2(t) - \omega_1(t),
\end{align}
where~$\omega_1,\omega_2 :\mathbb R_+\to \mathbb R$ are the rotor rotational frequencies of the two generators,~$\delta : \mathbb R_+\to \mathbb R$ is the phase angle of the second generator in reference to the first,~$R_i > 0, i=1,2$, are the damping coefficients,~$M_i >0, i=1,2$, are the inertia constants, $p_1,p_2 > 0$ are the constant power consumption at each bus, and~$a > 0$ and~$\varphi \in (-\pi/2, \pi/2)$ describe the nominal voltages of the generators and the admittance of the transmission line (see~\cite{Weiss2019StabilityNRPS} for a detailed derivation of this model).
\begin{Corollary}\label{coro:2bus_2cont}
Suppose that~$a > \max\{M_1,M_2\}$. If
\begin{equation}\label{cond:2bus_2cont}
3a^2\left(1 + |\cos(2\varphi)|\right) < \left(\frac{\displaystyle \min_i\{M_i\}}{\displaystyle \max_i \{M_i\}}\right)^2\min_{i } \frac{R_i^2}{2},
\end{equation}
then~\eqref{eq:2bus} is 2-contractive.
\end{Corollary}
\begin{pf}
Our proof is based on Theorem~\ref{thm:net_k_contract}. First note that
we can write~\eqref{eq:2bus} as the networked system~\eqref{eq:net_sys} with:
$x=\begin{bmatrix}
\omega _1&\omega _2&\delta \end{bmatrix}^T $,
$D = \operatorname{diag}(R_1/M_1,R_2/M_2,0)$, $v=\begin{bmatrix}\frac{p_1}{M_1} &\frac{p_2}{M_2}&0\end{bmatrix}^T$, $W_1 = \operatorname{diag}(-a/M_1,a/M_2,1)$,
$
W_2 = \begin{bmatrix}
0 & 0 & 1 \\
-1 & 1 & 0
\end{bmatrix},
$ so that~$W_2x=\begin{bmatrix}
\delta & \omega_2-\omega_1 \end{bmatrix}^T$,
and \[ f(z) = \begin{bmatrix}
\sin(z_1 + \varphi) \\
\sin(z_1 - \varphi) \\
z_2
\end{bmatrix}.
\]
Thus,~\eqref{eq:alphak} gives
\[
\alpha_2=\frac{1}{2} \min_i \left \{\frac{R_i}{M_i} \right\} ,
\]
and
\begin{equation*}
J_f(z) = \begin{bmatrix}
\cos(z_1 + \varphi) & 0 \\
\cos(z_1 - \varphi) & 0 \\
0 & 1
\end{bmatrix},
\end{equation*}
so
\begin{align*}
(J_f(z))^TJ_f(z) & = \begin{bmatrix}
\cos^2(z_1+\varphi)+\cos^2(z_1-\varphi) & 0\\
0 &1
\end{bmatrix}\\
&=
\begin{bmatrix}
1+\cos(2z_1)\cos(2\varphi) & 0\\
0 &1
\end{bmatrix},
\end{align*}
and thus
\begin{align*}
\|J_f(z)\|_2^2&=\lambda_{\max}\left((J_f(z))^TJ_f(z)\right)\\& \le 1 + |\cos(2\varphi)| .
\end{align*}
Furthermore, the ordered singular values of~$W_1$ are
\[
\frac{a}{\min\{M_1,M_2\}},\; \frac{a}{\max\{M_1,M_2\}}, \;1 ,
\]
and the singular values of~$W_2$ are $ \sqrt{2},\;1 $. Substituting all these
values in~\eqref{cond:net_k_smallgain} gives
\begin{align*}
\| & J_f (W_2x)\|_2^2 \sum_{i=1}^2 \sigma_i^2(W_1)\sigma_i^2(W_2) \\
&\leq \left(1 + |\cos(2\varphi)|\right) \left( \frac{2a^2}{(\min\{M_i\})^2} + \frac{a^2}{(\max\{M_i\})^2} \right) \\
&\le \frac{3a^2}{(\min\{M_i\})^2}\left(1 + |\cos(2\varphi)|\right) \\
&< \frac{1}{(\max\{M_i\})^2}\min_{i } \frac{R_i^2}{2} \\
&\le \min_{i } \frac{R_i^2}{2M_i^2}\\
&= 2\alpha_2^2,
\end{align*}
where the strict inequality follows from~\eqref{cond:2bus_2cont}. Therefore,~\eqref{cond:net_k_smallgain} holds with $k=2$.~\hfill{\qed}
\end{pf}
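The two facts used in the proof, the singular values of $W_2$ and the bound on $\|J_f(z)\|_2^2$, can be checked numerically; a short illustrative Python sketch (ours; the paper's own scripts are in MATLAB):

```python
import numpy as np

# Singular values of W2 are sqrt(2) and 1:
W2 = np.array([[0., 0., 1.],
               [-1., 1., 0.]])
print(np.allclose(np.linalg.svd(W2, compute_uv=False), [np.sqrt(2), 1.0]))  # True

# ||J_f(z)||_2^2 <= 1 + |cos(2*phi)| over a grid of z1 values:
phi = 0.3  # an arbitrary test value in (-pi/2, pi/2)
worst = 0.0
for z1 in np.linspace(-np.pi, np.pi, 721):
    Jf = np.array([[np.cos(z1 + phi), 0.],
                   [np.cos(z1 - phi), 0.],
                   [0., 1.]])
    worst = max(worst, np.linalg.norm(Jf, 2)**2)  # spectral norm squared
print(worst <= 1 + abs(np.cos(2*phi)) + 1e-9)  # True; attained where cos(2*z1) = 1
```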
Note in particular that the system will always be 2-contractive if the damping coefficients are large enough or if the inertia constants are small enough.
We now combine Corollary~\ref{coro:2bus_2cont} and~\cite[Thm. 4.1]{Weiss2019StabilityNRPS} to find an explicit region of convergence. We first require a few definitions. Let
\begin{align*}
\Gamma_{\min} &:= 2\min_i\frac{a}{R_i}\cos(\varphi), \\
\varphi_{\max} &:= |\varphi|, \\
\Gamma_{\text{crit}} &:= \frac{1}{\cos(\varphi_{\max})}\left(\left|\frac{p_1}{R_1} - \frac{p_2}{R_2}\right| + 2 \max_i \frac{a}{R_i}\sin(\varphi)\right).
\end{align*}
Assume that~$\Gamma_{\min} > \Gamma_{\text{crit}}$. Define~$\gamma_{\min} \in [0, \frac{\pi}{2} - \varphi_{\max})$ as the unique solution of
\[
\sin(\gamma_{\min}) = \frac{\Gamma_{\text{crit}}}{\Gamma_{\min}}\cos(\varphi_{\max}),
\]
and set $\gamma_{\max} := \pi - \gamma_{\min}$.
\begin{Corollary}\label{coro:2bus_stability}
If~\eqref{cond:2bus_2cont} holds, then any solution~$\begin{bmatrix} \omega_1(t)&\omega_2(t)&\delta(t)\end{bmatrix}^T$ of~\eqref{eq:2bus} satisfying
\[
|\delta(0) - 2\pi s| < \gamma_{\max} \text{ for some }s \in \mathbb Z
\]
converges to an equilibrium point.
\end{Corollary}
\begin{pf}
Suppose that $x(t)=\begin{bmatrix}\omega_1(t)&\omega_2(t)&\delta(t)\end{bmatrix}^T$ is a solution of~\eqref{eq:2bus} satisfying~$|\delta(0) - 2\pi s| < \gamma_{\max}$. Then, by items (1) and (3) of~\cite[Theorem 4.1]{Weiss2019StabilityNRPS},~$x(t)$ is a bounded trajectory. Since~\eqref{cond:2bus_2cont} holds, the system~\eqref{eq:2bus} is 2-contractive, so $x(t)$ is a bounded trajectory of a 2-contractive system, and therefore converges to an equilibrium point.~\hfill{\qed}
\end{pf}
\section{Conclusion}
We derived a sufficient condition for~$k$-contraction of Lurie systems. For~$k=1$, this reduces to the standard small gain sufficient condition for contraction. However, often Lurie systems admit more than a single equilibrium point, and are thus not contractive (that is, not~$1$-contractive) with respect to
any norm.
Our condition may still be used to guarantee a well-ordered behaviour of the closed-loop system. For example, establishing that a time-invariant system is~$2$-contractive implies that
any bounded solution converges to an equilibrium, which is not necessarily unique. Such a property is important, for example, in dynamical models of associative memories, where every equilibrium corresponds to a stored memory.
Our results suggest several possible research directions.
First, an important advantage of ARIs is that they can be recast as linear matrix inequalities, for which efficient numerical solvers exist. An interesting question is whether this remains true for the $k$-ARIs developed here.
Second, several criteria
for the asymptotic stability of a Lurie system, e.g., the Popov criterion and the circle criterion, can be stated
in terms of the transfer function of the linear subsystem. It may be of interest to relate the conditions in Theorem~\ref{thm:lure_suff}
to the transfer function of a linear system with $k$-compound matrices.
{\sl Acknowledgments.} We thank Rami Katz for helpful comments.
\bibliographystyle{abbrvnat}
% arXiv:2212.13440 -- Contraction and $k$-contraction in Lurie systems with applications to networked systems (eess.SY)
% arXiv:2106.06012 -- Learning distinct features helps, provably
\begin{abstract}
We study the diversity of the features learned by a two-layer neural network trained with the least squares loss. We measure the diversity by the average $L_2$-distance between the hidden-layer features and theoretically investigate how learning non-redundant distinct features affects the performance of the network. To do so, we derive novel generalization bounds depending on feature diversity based on Rademacher complexity for such networks. Our analysis proves that more distinct features at the network's units within the hidden layer lead to better generalization. We also show how to extend our results to deeper networks and different losses.
\end{abstract}
\section{Introduction}
Neural networks are a powerful class of non-linear function approximators that have been successfully used to tackle a wide range of problems. They have enabled breakthroughs in many tasks, such as image classification \citep{krizhevsky2012imagenet}, speech recognition \citep{hinton2012deep}, and anomaly detection \citep{golan2018deep}.
However, neural networks are often over-parameterized, i.e., have more parameters than data. As a result, they tend to overfit to the training samples and not generalize well on unseen examples \citep{goodfellow2016deep}. Avoiding overfitting has been extensively studied \citep{neyshabur2018role,nagarajan2019uniform,poggio2017theory,dziugaite2017computing,foret2020sharpness} and various approaches and strategies have been proposed, such as data augmentation \citep{goodfellow2016deep,zhang2018mixup}, regularization \citep{kukavcka2017regularization,bietti2019kernel,arora2019implicit}, and Dropout \citep{hinton2012improving,lee2019meta,li2016improved}, to close the gap between the empirical loss and the expected loss.
Formally, the output of a neural network consisting of $P$ layers can be defined as follows:
\begin{equation}
f(\vx;\tW) = \rho^P\big(\mW^P \rho^{P-1}( \cdots \rho^2(\mW^2 \rho^1(\mW^1 \vx)) \cdots )\big),
\end{equation}
where $\rho^i(\cdot)$ is the element-wise activation function, e.g., \textit{ReLU} or \textit{Sigmoid}, of the $i^{th}$ layer, and $\tW= \{\mW^1,\dots,\mW^P\}$ are the weights of the network, with the superscript denoting the layer. By defining $\Phi(\cdot)=\rho^{P-1}( \cdots \rho^2(\mW^2 \rho^1(\mW^1 \cdot) )) $, the output of the neural network becomes
\begin{equation}
f(\vx;\tW) = \rho^P(\mW^P\Phi(\vx)),
\end{equation}
where $\Phi(\vx)= [\phi_1(\vx),\cdots, \phi_M(\vx)] $ is the $M$-dimensional feature representation of the input $\vx$. In this way, neural networks can be interpreted as a two-stage approach: a representation-learning stage, i.e., learning $\Phi$, followed by a final prediction layer. Both parts are jointly optimized.
Learning a rich and diverse set of features, i.e., the first stage, is critical for achieving top performance \citep{arpit2017closer,cogswell2015reducing}. Studying the different properties of the learned features is an active field of research \citep{deng2021discovering,kornblith2021better,du2020few}. For example, \cite{du2020few} showed theoretically that learning a good feature representation can be helpful in few-shot learning. In this paper, we focus on the diversity of the features. In particular, we propose to quantify the diversity of the feature set $\{ \phi_1(\cdot),\cdots, \phi_M(\cdot)\}$ using the average pairwise $L_2$-distance between their outputs. Formally, given a dataset $\{\vx_i\}_{i=1}^{N}$, we have
\begin{equation} \label{div_diff}
diversity =\frac{1}{N} \sum_{k=1}^N \frac{1}{2M(M-1)} \sum_{i \neq j}^M ( \phi_i(\vx_k) - \phi_j(\vx_k))^2.
\end{equation}
Intuitively, $diversity$ measures how distinct the learned features are. If the mappings learned by two different units are redundant, then, given the same input, both units yield similar outputs. This results in a low $L_2$-distance and, consequently, a low diversity.
In contrast, if the mapping learned by each unit is distinct, the corresponding average distances to the outputs of the other units within the layer are high, which yields a high global diversity.
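The diversity measure in~\eqref{div_diff} can be computed directly from the matrix of hidden activations; here is an illustrative NumPy sketch (ours, not from the paper):

```python
import numpy as np

# Phi has shape (N, M): row k holds the M hidden-unit outputs phi_1(x_k), ..., phi_M(x_k).
def diversity(Phi):
    N, M = Phi.shape
    diff2 = (Phi[:, :, None] - Phi[:, None, :])**2  # (N, M, M): ordered pairs i != j
    return float((diff2.sum(axis=(1, 2)) / (2 * M * (M - 1))).mean())

print(diversity(np.ones((5, 4))))           # 0.0: identical (redundant) features
print(diversity(np.array([[0., 1., 2.]])))  # 1.0: distinct features
```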
To confirm this intuition and further motivate the analysis of this attribute, we conduct empirical simulations. We track the diversity of the representation of the last hidden layer, as defined in \eqref{div_diff}, during the training of three different ResNet \citep{he2016deep} models on CIFAR10 \citep{krizhevsky2009learning}. The results are reported in Figure \ref{cifar_div}. Indeed, diversity consistently increases during the training for all the models. This shows that, in order to solve the task at hand, neural networks learn distinct features. \\
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{cifar_div_resnet.jpg}
\caption{Preliminary empirical results for additional motivation to theoretically understand feature diversity. The figure shows diversity versus the number of epochs for three different ResNet models trained on CIFAR10 dataset.}
\label{cifar_div}
\end{figure}
\textbf{Our contributions:} In this paper, we theoretically investigate diversity in the neural network context and study how learning non-redundant features affects the performance of the model. We derive a bound on the generalization gap that is inversely proportional to the proposed diversity measure, showing that learning distinct features helps. In our analysis, we focus on a simple neural network model with one hidden layer trained with the mean squared error. This configuration is simple; however, it has been shown to be convenient and insightful for theoretical analysis \citep{deng2021adversarial,du2020few,bubeck2021universal}. Moreover, we show how to extend our theoretical analysis to different losses and different network architectures.
Our contributions can be summarized as follows:
\begin{itemize}
\item We analyse the effect of feature diversity on the generalization error bound of a neural network. The analysis is presented in Section \ref{sec_theor}. In Theorem \ref{theorm1}, we derive an upper bound on the generalization gap that is inversely proportional to the diversity factor. Thus, we provide theoretical evidence that learning distinct features can help reduce the generalization error.
\item We extend our analysis to different losses and general multi-layer networks. These results are presented in Theorems \ref{theorm2}, \ref{theorm3}, \ref{theorm_multi}, \ref{theorm4}, and \ref{theorm5}.
\end{itemize}
\textbf{Outline of the paper:} The rest of the paper is organized as follows: Section \ref{pre} summarizes the preliminaries for our analysis. Section \ref{sec_theor} presents our main theoretical results along with the proofs. Section \ref{sec_theorext} extends our results for different settings. Section \ref{con} concludes the work with a discussion and several open problems.
\section{PRELIMINARIES} \label{pre}
Generalization theory \citep{shalev2014understanding,kawaguchi2017generalization} focuses on the relation between the empirical loss defined as
\begin{equation} \label{eq:loss}
\hat{L}(f) = \frac{1}{N} \sum_{i=1}^{N} l\big(f(\vx_i;\tW),y_i \big),
\end{equation}
and the expected risk, for any $f$ in the hypothesis class $\mathcal{F}$, defined as
\begin{equation}
L(f) = \E_{(\vx,y)\sim \mathcal{Q}} [l(f(\vx),y)],
\end{equation}
where $\mathcal{Q}$ is the underlying distribution of the dataset and $y_i$ is the label of $\vx_i$. Let $f^*= \argmin_{f\in \mathcal{F}} L(f)$ be the expected risk minimizer and $\hat{f}= \argmin_{f\in \mathcal{F}} \hat{L}(f)$ be the empirical risk minimizer. We are interested in the estimation error, i.e., $L(\hat{f}) - L(f^*)$, defined as the gap in the expected loss between the two minimizers \citep{barron1994approximation}. The estimation error represents how well an algorithm can learn. It usually depends on the complexity of the hypothesis class and the number of training samples \citep{barron1993universal,zhai2018adaptive}.
Several techniques have been proposed to quantify the generalization error, such as Probably Approximately Correct (PAC) learning \citep{shalev2014understanding,valiant1984theory}, the VC dimension \citep{sontag1998vc}, and the Rademacher complexity \citep{shalev2014understanding}. The Rademacher complexity has been widely used, as it usually leads to tighter generalization error bounds than the other metrics \citep{sokolic2016lessons,neyshabur2018role,golowich2018size}. The formal definition of the empirical Rademacher complexity is as follows:
\begin{definition} \citep{shalev2014understanding,bartlett2002rademacher}
For a given dataset with N samples $\mathcal{D} = \{\textbf{x}_i,y_i\}_{i=1}^N$ generated by a distribution $\mathcal{Q}$ and for a model space $\mathcal{F} : \mathcal{X} \rightarrow \mathbb{R}$ with a single dimensional output, the empirical Rademacher complexity $\mathcal{R}_N(\mathcal{F})$ of the set $\mathcal{F}$ is defined as follows:
\begin{equation}
\mathcal{R}_N(\mathcal{F}) = \E_\sigma \bigg[ \sup_{f \in \mathcal{F} } \frac{1}{N} \sum_{i=1}^{N} \sigma_i f(\vx_i) \bigg],
\end{equation}
where the variables $\sigma = \{\sigma_1, \cdots, \sigma_N \}$ are independent uniform random variables in $\{ -1,1 \}$.
\end{definition}
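As an illustrative aside (ours, not from the paper), the empirical Rademacher complexity can be estimated by Monte Carlo for simple classes where the supremum over $f$ has a closed form; for the linear class $\{\vx \mapsto \langle w, \vx\rangle : \|w\|_2 \leq 1\}$, the Cauchy--Schwarz inequality gives the supremum explicitly:

```python
import numpy as np

# For F = {x -> <w, x> : ||w||_2 <= 1}, Cauchy-Schwarz gives
# sup_f (1/N) sum_i sigma_i f(x_i) = || (1/N) sum_i sigma_i x_i ||_2.
rng = np.random.default_rng(0)
N, d, trials = 50, 5, 2000
X = rng.normal(size=(N, d))                   # N samples in R^d
vals = np.empty(trials)
for t in range(trials):
    sigma = rng.choice([-1.0, 1.0], size=N)   # i.i.d. Rademacher signs
    vals[t] = np.linalg.norm(sigma @ X) / N
print(round(vals.mean(), 3))  # Monte-Carlo estimate of R_N(F); it decays like 1/sqrt(N)
```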
In this work, we rely on the Rademacher complexity to study diversity. We recall the following three lemmas related to the Rademacher complexity and the generalization error:
\begin{lemma} \label{complemma} \citep{bartlett2002rademacher} For $\mathcal{F} \in \mathbb{R}^{\mathcal{X}}$, assume that $g:\mathbb{R} \xrightarrow{} \mathbb{R}$ is a $L_{g}$-Lipschitz continuous function and $\mathcal{A} = \{g \circ f : f \in \mathcal{F} \}$. Then we have
\begin{equation}
\mathcal{R}_N(\mathcal{A}) \leq L_{g} \mathcal{R}_N(\mathcal{F}).
\end{equation}
\end{lemma}
\begin{lemma} \label{radddd_bound} \citep{xie2015generalization} The Rademacher complexity $\mathcal{R}_N(\mathcal{F})$ of the hypothesis class $\mathcal{F} = \{ f| f(\vx) = \sum_{m=1}^M v_m \phi_m(\vx) = \sum_{m=1}^M v_m \phi(\vw_m^T\vx) \}$ can be upper-bounded as follows:
\begin{equation}
\mathcal{R}_N(\mathcal{F} ) \leq \frac{2L_{\rho}C_{134} M}{ \sqrt{N}} + \frac{C_4 |\phi(0)|M}{\sqrt{N}},
\end{equation}
where $C_{134}=C_1C_3C_4$ and $\phi(0)$ is the output of the activation function at the origin; the constants $C_1$, $C_3$, $C_4$, and $L_{\rho}$ are specified in the setup of Section \ref{sec_theor}.
\end{lemma}
\begin{lemma} \label{mainlemma} \citep{bartlett2002rademacher}
With a probability of at least $1 - \delta$,
\begin{equation}
L(\hat{f}) - L(f^*) \leq 4 \mathcal{R}_N(\mathcal{A}) + B \sqrt{ \frac{2 \log(2/ \delta)}{N}},
\end{equation}
where $B\geq \sup_{\vx,y,f} |l(f(\vx),y)| $ and $\mathcal{R}_N(\mathcal{A})$ is the Rademacher complexity of the loss set $\mathcal{A}$.
\end{lemma}
Lemma \ref{mainlemma} upper-bounds the generalization error using the Rademacher complexity defined over the loss set and $\sup_{\vx,y,f} |l(f(\vx),y)| $. Our analysis aims at expressing this bound in terms of diversity, in order to understand how it affects the generalization.
In order to study the effect of diversity on the generalization, given a layer with $M$ units $\{ \phi_1(\cdot), \cdots, \phi_M(\cdot) \}$, we make the following assumption:
\begin{assumption}
Given any input $\vx$, we have
\begin{equation}
\frac{1}{2M(M-1)} \sum_{n \neq m}^M ( \phi_n(\vx) - \phi_m(\vx))^2 \geq d^2_{min}.
\end{equation}
\end{assumption}
$d_{min}$ lower-bounds the average $L_2$-distance between the different units' activations within the same representation layer. Intuitively, if several pairs of neurons $n$ and $m$ have similar outputs, the corresponding $L_2$ distances are small. Thus, the lower bound $d_{min}$ is also small, and the units within this layer are considered redundant and ``not diverse''. Otherwise, if the average distance between the different pairs is large, the corresponding $d_{min}$ is large and the units are considered ``diverse''. By studying how the lower bound $d_{min}$ affects the generalization of the model, we can analyse how diversity theoretically affects the performance of neural networks. In the rest of the paper, we derive generalization bounds for neural networks using $d_{min}$.
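The diversity quantity in the assumption is straightforward to compute from a layer's activations on one input. A minimal sketch (the function name and toy data are our own):

```python
import numpy as np

def avg_pairwise_sq_dist(phi):
    """Average squared L2 distance between unit activations for one input:
    (1 / (2 M (M-1))) * sum over ordered pairs n != m of (phi_n - phi_m)^2."""
    phi = np.asarray(phi, dtype=float)
    M = phi.shape[0]
    diffs = phi[:, None] - phi[None, :]            # all pairwise differences
    return (diffs ** 2).sum() / (2 * M * (M - 1))  # diagonal terms are zero

# Two units whose activations differ by c give c^2 / 2:
# the sum over the two ordered pairs is 2 c^2, divided by 2*2*1 = 4.
print(avg_pairwise_sq_dist([0.0, 3.0]))        # -> 4.5
# Identical units are maximally redundant: the measure is 0.
print(avg_pairwise_sq_dist([1.0, 1.0, 1.0]))   # -> 0.0
```

Taking the infimum of this quantity over inputs yields a valid $d_{min}^2$ for the assumption.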
\section{Learning distinct features helps} \label{sec_theor}
In this section, we derive generalization bounds for neural networks depending on their diversity. Here, we consider a simple neural network with one hidden layer with $M$ neurons and a one-dimensional output, trained for a regression task. The full characterization of the setup can be summarized as follows:
\begin{itemize}
\item The activation function of the hidden layer, $\rho(\cdot)$, is a positive $L_{\rho}$-Lipschitz continuous function.
\item The input vector $\vx\in \mathbb{R}^D$ satisfies $||\vx||_2 \le C_1$ and the output scalar $y\in \mathbb{R}$ satisfies $|y| \le C_2$.
\item The weight matrix $\mW=[\vw_1,\vw_2,\cdots,\vw_M] \in \mathbb{R}^{D\times M} $ connecting the input to the hidden layer satisfies $||\vw_m||_2 \le C_3$.
\item The weight vector $\vv \in \mathbb{R}^{M} $ connecting the hidden-layer to the output satisfies $||\vv||_\infty \le C_4$.
\item The hypothesis class is $\mathcal{F} = \{ f| f(\vx) = \sum_{m=1}^M v_m \phi_m(\vx) = \sum_{m=1}^M v_m \rho(\vw_m^T\vx) \}$.
\item The loss function set is $\mathcal{A}= \{ l| l(f(\vx),y) = \frac{1}{2}|f(\vx)-y|^2 \} $.
\item Given an input $\vx$, $ \frac{1}{2M(M-1)} \sum_{n \neq m}^M ( \phi_n(\vx) - \phi_m(\vx))^2 \geq d_{min}^2$.
\end{itemize}
Our main goal is to analyse the generalization error bound of the neural network and to see how its upper bound is linked to the diversity of the different units, expressed by $d_{min}$. The main result of the paper is presented in Theorem \ref{theorm1}. Our proof consists of three steps: First, we derive a novel bound for the hypothesis class $\mathcal{F}$ depending on $d_{min}$. Then, we use this bound to derive bounds for the loss class $\mathcal{A}$ and its Rademacher complexity $\mathcal{R}_N(\mathcal{A})$. Finally, we plug all the derived bounds into Lemma \ref{mainlemma} to complete the proof of Theorem \ref{theorm1}.
The first step of our analysis is presented in Lemma \ref{supf}:
\begin{lemma} \label{supf}
We have
\begin{equation}
\sup_{\vx,f \in \mathcal{F} } |f(\vx)| \leq \sqrt{\mathcal{J}},
\end{equation}
where $ \mathcal{J} = C_4^2 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$ and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$.
\end{lemma}
\begin{proof}
\begin{small}
\begin{multline} \label{equ:inequality1}
f^2(\vx) = \left(\sum_{m=1}^M v_m \phi_m(\vx)\right)^2 \leq \left(\sum_{m=1}^M ||\vv||_\infty \phi_m(\vx)\right)^2 = ||\vv||^2_\infty \left(\sum_{m=1}^M \phi_m(\vx)\right)^2 \leq C_4^2 \left(\sum_{m=1}^M \phi_m(\vx)\right)^2 \\
= C_4^2 \left(\sum_{m,n} \phi_m(\vx) \phi_n(\vx)\right) = C_4^2 \left( \sum_{m}\phi_m(\vx)^2 + \sum_{m\neq n} \phi_n(\vx) \phi_m(\vx) \right).
\end{multline}
\end{small}
We have $\sup_{\vw,\vx} \phi_m(\vx) = \sup_{\vw,\vx} \rho(\vw^T\vx) \leq \sup (L_{\rho}|\vw^T\vx| + \phi(0)) $, because $\rho$ is $L_{\rho}$-Lipschitz. Thus, $||\phi||_\infty \leq L_{\rho} C_1C_3 + \phi(0)=C_5 $. For the first term in \eqref{equ:inequality1}, we have $ \sum_{m}\phi_m(\vx)^2 \leq M(L_{\rho} C_1C_3 + \phi(0))^2 =MC^2_5 $. The second term, using the identity \\ $\phi_m(\vx) \phi_n(\vx) = \frac{1}{2}\left(\phi_m(\vx)^2 + \phi_n(\vx)^2 - (\phi_m(\vx) - \phi_n(\vx))^2 \right) $, can be rewritten as
\small
\begin{equation}
\sum_{m\neq n} \phi_m(\vx) \phi_n(\vx) = \frac{1}{2} \sum_{m\neq n} \left( \phi_m(\vx)^2 + \phi_n(\vx)^2 - \Big(\phi_m(\vx) - \phi_n(\vx)\Big)^2 \right).
\end{equation}
In addition, by the diversity assumption, we have $\frac{1}{2}\sum_{m\neq n} (\phi_m(\vx) - \phi_n(\vx))^2 \geq M(M-1)d_{min}^2$. Thus, we have:
\begin{equation}
\sum_{m\neq n} \phi_m(\vx) \phi_n(\vx) \leq \frac{1}{2} \sum_{m\neq n} (2C_5^2) - M(M-1) d_{min}^2
= M(M-1) (C_5^2 -d_{min}^2) .
\end{equation}
By putting everything back to \eqref{equ:inequality1}, we have:
\begin{equation}
f^2(\vx) \leq C_4^2 \Big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2) \Big) = \mathcal{J}.
\end{equation}
Thus,
$\sup_{\vx,f} |f(\vx)| \leq \sqrt{\sup_{\vx,f} f(\vx)^2} \leq \sqrt{\mathcal{J}}. $
\end{proof}
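The chain of inequalities above can also be checked numerically: if $d_{min}^2$ is taken to be the realized average pairwise squared distance for a given input, then $f(\vx)^2 \leq \mathcal{J}$ must hold for that input. A sketch with a ReLU activation ($L_\rho = 1$, $\phi(0)=0$, ReLU is positive and 1-Lipschitz) and illustrative constants of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
D, M, C1, C3, C4 = 5, 8, 1.0, 1.0, 1.0
L_rho, phi0 = 1.0, 0.0            # ReLU: 1-Lipschitz, nonnegative, rho(0)=0
C5 = L_rho * C1 * C3 + phi0

for _ in range(200):
    # Draw a network and an input satisfying the norm constraints.
    W = rng.normal(size=(D, M))
    W *= C3 / np.maximum(np.linalg.norm(W, axis=0), C3)   # ||w_m||_2 <= C3
    v = rng.uniform(-C4, C4, size=M)                      # ||v||_inf <= C4
    x = rng.normal(size=D)
    x *= C1 / max(np.linalg.norm(x), C1)                  # ||x||_2 <= C1
    phi = np.maximum(W.T @ x, 0.0)                        # hidden activations
    f2 = (v @ phi) ** 2
    # Realized diversity of this input; it is a valid d_min^2 for it.
    diffs = phi[:, None] - phi[None, :]
    d2 = (diffs ** 2).sum() / (2 * M * (M - 1))
    J = C4 ** 2 * (M * C5 ** 2 + M * (M - 1) * (C5 ** 2 - d2))
    assert f2 <= J + 1e-9
print("Lemma bound f(x)^2 <= J held on all sampled networks")
```

This is only a spot check of the inequality, not a proof; the proof is the derivation above.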
Note that in Lemma \ref{supf}, we have expressed the upper-bound of $\sup_{\vx,f} |f(\vx)|$ in terms of $d_{min}$. Using this bound, we can now find an upper-bound for $\sup_{\vx,f,y} |l(f(\vx),y)|$ in the following lemma:
\begin{lemma} \label{supl}
We have
\begin{equation}
\sup_{\vx,y,f} |l(f(\vx),y)| \leq \frac{1}{2} (\sqrt{\mathcal{J}} + C_2)^2.
\end{equation}
\end{lemma}
\begin{proof}
We have $\sup_{\vx,y,f} |f(\vx) - y| \leq \sup_{\vx,y,f} ( |f(\vx)| + |y|) \leq \sqrt{\mathcal{J}} + C_2$.
Thus, \\ $\sup_{\vx,y,f} |l(f(\vx),y)| \leq \frac{1}{2} (\sqrt{\mathcal{J}} + C_2)^2 $.
\end{proof}
Next, using the results of Lemmas \ref{complemma}, \ref{radddd_bound}, and \ref{supl}, we can derive a bound for the Rademacher complexity of $\mathcal{A}$. We have now expressed all the elements of Lemma \ref{mainlemma} using the diversity term $d_{min}$. By plugging the bounds derived in Lemmas \ref{supf} and \ref{supl} into Lemma \ref{mainlemma}, we obtain Theorem \ref{theorm1}.
\begin{theorem} \label{theorm1}
With probability at least $(1 - \delta)$, we have
\begin{equation}
L(\hat{f}) - L(f^*) \leq \Big(\sqrt{\mathcal{J}} + C_2\Big)\frac{A}{\sqrt{N}}
+ \frac{1}{2} ( \sqrt{\mathcal{J}} + C_2)^2 \sqrt{\frac{2 \log(2/ \delta)}{N}},
\end{equation}
where $C_{134}=C_1C_3C_4$, $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$, $A=4\Big(2L_{\rho} C_{134}+ C_4 |\phi(0)| \Big)M$, and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$.
\end{theorem}
\begin{proof}
Given that $l(\cdot)$ is $K$-Lipschitz with constant $K = \sup_{\vx,y,f} |f(\vx) - y| \leq \sqrt{\mathcal{J}} + C_2$, and using Lemma \ref{complemma}, we can show that $\mathcal{R}_N(\mathcal{A})\leq K \mathcal{R}_N(\mathcal{F}) \leq (\sqrt{\mathcal{J}} + C_2) \mathcal{R}_N(\mathcal{F})$.
For $ \mathcal{R}_N(\mathcal{F})$, we use the bound found in Lemma \ref{radddd_bound}. Using Lemmas \ref{mainlemma} and \ref{supl}, we have
\begin{multline}
L(\hat{f}) - L(f^*) \leq 4 \Big(\sqrt{\mathcal{J}} + C_2\Big) \Big(2L_{\rho} C_{134}+ C_4 |\phi(0)| \Big) \frac{M}{\sqrt{N}} + \frac{1}{2}( \sqrt{\mathcal{J}} + C_2)^2 \sqrt{\frac{2 \log(2/ \delta)}{N}},
\end{multline}
where $C_{134}=C_1C_3C_4$, $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2) \big)$, and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$. Thus, setting $A=4\Big(2L_{\rho} C_{134}+ C_4 |\phi(0)| \Big)M$ completes the proof.
\end{proof}
Theorem \ref{theorm1} provides an upper-bound for the generalization gap. We note that it is a decreasing function of $d_{min}$. Thus, this suggests that a higher $d_{min}$, i.e., more diverse activations, yields a lower generalization error bound. This shows that learning distinct features helps in the neural network context.
We note that the bound in Theorem \ref{theorm1} is non-vacuous in the sense that it converges to zero when the number of training samples $N$ goes to infinity. Moreover, we note that in this paper, we do not claim to reach a tighter generalization bound for neural networks in general \citep{rodriguez2021tighter,jiang2019fantastic,neyshabur2017exploring,dziugaite2017computing}. Our main claim is that we derive a generalization bound which depends on the diversity of learned features, as measured by $d_{min}$. To the best of our knowledge, this is the first work that performs such a theoretical analysis based on the average $L_2$-distance between the units within the hidden layer.
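For intuition, the right-hand side of Theorem \ref{theorm1} can be evaluated directly as a function of $d_{min}$; with the other constants fixed it decreases monotonically on $[0, C_5]$. A sketch with illustrative constants of our own choosing:

```python
import numpy as np

def theorem1_bound(d_min, N, M=64, C1=1.0, C2=1.0, C3=1.0, C4=1.0,
                   L_rho=1.0, phi0=0.0, delta=0.05):
    """Evaluate the right-hand side of Theorem 1 for given constants."""
    C5 = L_rho * C1 * C3 + phi0
    C134 = C1 * C3 * C4
    J = C4 ** 2 * (M * C5 ** 2 + M * (M - 1) * (C5 ** 2 - d_min ** 2))
    A = 4 * (2 * L_rho * C134 + C4 * abs(phi0)) * M
    return ((np.sqrt(J) + C2) * A / np.sqrt(N)
            + 0.5 * (np.sqrt(J) + C2) ** 2 * np.sqrt(2 * np.log(2 / delta) / N))

# The bound shrinks as the features become more diverse (larger d_min <= C5).
ds = np.linspace(0.0, 1.0, 11)
bounds = [theorem1_bound(d, N=10_000) for d in ds]
print(bounds[0], bounds[-1])
assert all(b1 >= b2 for b1, b2 in zip(bounds, bounds[1:]))
```

The monotone decrease in $d_{min}$ is the qualitative content of the theorem; the absolute values depend entirely on the assumed constants.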
\subsection*{Connection to prior studies}
Theoretical analysis of the properties of the features learned by neural network models is an active field of research. Feature representation has been theoretically studied in the context of few-shot learning in \citep{du2020few}, where the advantage of learning a good representation in the case of scarce data was demonstrated. \cite{arora2020provable} demonstrated similar sample-complexity benefits of representation learning in the context of imitation learning. \cite{wang2021towards} developed similar findings for the self-supervised learning task.
\cite{JMLR_ss} derived novel bounds showing the statistical benefits of multitask representation learning in linear Markov Decision Process. Opposite to the aforementioned works, the main focus of this paper is not on the large sample complexity problems. Instead, we focused on the feature diversity in the learned representation and showed that learning distinct features leads to better generalization.
Another line of research related to our work is weight diversity in neural networks \citep{yu2011diversity,bao2013incoherent,xie2015generalization,xie2017diverse,kwok2012priors}. Diversity in this context is defined based on the dissimilarity between the weight components using, e.g., the cosine distance or the weight matrix covariance \citep{xie2017uncorrelation}. In \citep{xie2015generalization}, theoretical benefits of weight diversity have been demonstrated. We note that, in our work, diversity is defined in a fundamentally different way. We do not consider dissimilarity between the parameters of the neural network. Our main scope is the feature representation and, to this end, diversity is defined based on the $L_2$ distance between the feature maps directly, not the weights. Empirical analysis of the deep representations of neural networks has drawn attention lately \citep{deng2021discovering,kornblith2021better,cogswell2015reducing}. For example, \cite{cogswell2015reducing} showed empirically that learning decorrelated features reduces overfitting. However, a theoretical understanding of the phenomenon is lacking. Here, we close this gap by studying how feature diversity affects generalization.
\section{Extensions} \label{sec_theorext}
In this section, we show how to extend our theoretical analysis for classification, for general multi-layer network, and for different losses.
\subsection{Binary classification}
Here, we extend our analysis of the effect of learning a diverse feature representation on the generalization error to the case of a binary classification task, i.e., $y \in \{-1,1\} $. We consider the special cases of a hinge loss and a logistic loss. To derive diversity-dependent generalization bounds for these cases, similarly to the proofs of Lemmas 7 and 8 in \cite{xie2015generalization}, we can show the following two lemmas:
\begin{lemma} \label{lemmaa2}
Using the hinge loss, we have with probability at least $(1 - \delta)$
\begin{multline}
L(\hat{f}) - L(f^*) \leq 4 \Big(2L_{\rho}C_{134} + C_4 |\phi(0)| \Big) \frac{M}{\sqrt{N}} + (1+\sqrt{\mathcal{J}}) \sqrt{\frac{2 \log(2/ \delta)}{N}},
\end{multline}
where $C_{134}=C_1C_3C_4$, $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$, and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$.
\end{lemma}
\begin{lemma} \label{lemmaa3}
Using the logistic loss $l(f(x),y) = \log(1 + \ve^{-yf(x)})$, we have with probability at least $(1 - \delta)$
\begin{multline}
L(\hat{f}) - L(f^*) \leq \frac{4}{1 + \ve^{-\sqrt{\mathcal{J}}}} \Big(2L_{\rho}C_{134} + C_4 |\phi(0)|\Big) \frac{M}{\sqrt{N}}
+ \log(1+\ve^{\sqrt{\mathcal{J}}}) \sqrt{\frac{2 \log(2/ \delta)}{N}},
\end{multline}
where $C_{134}=C_1C_3C_4$, $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$, and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$. \end{lemma}
Using the above lemmas, we can now derive diversity-dependent bounds for the binary classification case. The extensions of Theorem \ref{theorm1} in the cases of a hinge loss and a logistic loss are presented in Theorems \ref{theorm2} and \ref{theorm3}, respectively.
\begin{theorem} \label{theorm2}
Using the hinge loss, with probability at least $(1 - \delta)$, we have
\begin{equation}
L(\hat{f}) - L(f^*) \leq A/\sqrt{N} + (1+\sqrt{\mathcal{J}}) \sqrt{\frac{2 \log(2/ \delta)}{N}},
\end{equation}
where $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$, $A=4 \Big(2L_{\rho}C_{134} + C_4 |\phi(0)| \Big)M$, and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$.
\end{theorem}
\begin{theorem} \label{theorm3}
Using the logistic loss $l(f(x),y) = \log(1 + \ve^{-yf(x)})$, with probability at least $(1 - \delta)$, we have
\begin{equation}
L(\hat{f}) - L(f^*) \leq \frac{A }{(1 + \ve^{-\sqrt{\mathcal{J}}})\sqrt{N}} + \log(1+\ve^{\sqrt{\mathcal{J}}}) \sqrt{\frac{2 \log(2/ \delta)}{N}},
\end{equation}
where $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$, $A=4 \Big(2L_{\rho}C_{134} + C_4 |\phi(0)| \Big)M$, and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$.
\end{theorem}
As we can see, also for the binary classification task, the generalization bounds for the hinge and logistic losses are decreasing with respect to $d_{min}$. Thus, learning distinct features can improve the generalization also in binary classification.
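As a sanity check, the logistic-loss bound of Theorem \ref{theorm3} can be evaluated numerically (reading the exponent as $\ve^{-\sqrt{\mathcal{J}}}$, since $\mathcal{J} \geq 0$); it too decreases as $d_{min}$ grows. The constants below are illustrative, not from the paper:

```python
import numpy as np

def logistic_bound(d_min, N, M=64, C1=1.0, C3=1.0, C4=1.0,
                   L_rho=1.0, phi0=0.0, delta=0.05):
    """Right-hand side of the logistic-loss bound (Theorem 3)."""
    C5 = L_rho * C1 * C3 + phi0
    C134 = C1 * C3 * C4
    J = C4 ** 2 * (M * C5 ** 2 + M * (M - 1) * (C5 ** 2 - d_min ** 2))
    A = 4 * (2 * L_rho * C134 + C4 * abs(phi0)) * M
    sJ = np.sqrt(J)
    # First term uses the Lipschitz constant 1 / (1 + e^{-sqrt(J)});
    # second term uses sup of the loss, log(1 + e^{sqrt(J)}).
    return (A / ((1 + np.exp(-sJ)) * np.sqrt(N))
            + np.log1p(np.exp(sJ)) * np.sqrt(2 * np.log(2 / delta) / N))

b0 = logistic_bound(0.0, 10_000)   # no diversity guarantee
b1 = logistic_bound(1.0, 10_000)   # maximal diversity (d_min = C5)
print(b0, b1)
```

The drop from `b0` to `b1` comes mostly from the $\log(1+\ve^{\sqrt{\mathcal{J}}})$ term, which is roughly linear in $\sqrt{\mathcal{J}}$ for large $\mathcal{J}$.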
\subsection{Multi-layer networks}
Here, we extend our result to networks with $P>1$ hidden layers. We assume that the pair-wise distances between the activations within layer $p$ are lower-bounded by $d_{min}^{(p)}$. In this case, the hypothesis class can be defined recursively. In addition, we assume that $||\mW^{(p)}||_{\infty} \leq C^{(p)}_3$ for every $\mW^{(p)}$, i.e., the weight matrix of the $p$-th layer. The main theorem is then extended as follows:
\begin{theorem} \label{theorm_multi}
With probability of at least $ (1 - \delta)$, we have
\begin{equation}
L(\hat{f}) - L(f^*) \leq (\sqrt{\mathcal{J}^P} + C_2) \frac{A}{\sqrt{N}}
+ \frac{1}{2}\left( \sqrt{\mathcal{J}^P} + C_2 \right)^2 \sqrt{\frac{2 \log(2/ \delta)}{N}},
\end{equation}
where $A= 4\big( (2L_{\rho})^P C_1 C^0_3 \prod^{P-1}_{p=0} \sqrt{M^{(p)}} C_3^{(p)} + |\phi(0)|\sum_{p=0}^{P-1} (2L_{\rho})^{P-1-p} \prod^{P-1}_{j=p} \sqrt{M^{(j)}} C_3^{(j)} \big) $, and $\mathcal{J}^P$ is defined recursively by $ \mathcal{J}^0 = C_3^{0} C_1 $ and \\ $\mathcal{J}^{p}= M^{(p)} {C_3^{(p)}}^2 \big( {M^{p-1}}^2 \big(L_{\rho} \sqrt{\mathcal{J}^{p-1}} + \phi(0)\big)^2 - M^{p-1}(M^{p-1}-1) \big(d_{min}^{(p)}\big)^2 \big)$, for $p=1, \dots,P$.
\end{theorem}
\begin{proof}
Lemma 5 in \cite{xie2015generalization} provides an upper-bound for the multi-layer hypothesis class. We denote by $\vv^{(p)}$ the outputs of the $p$-th hidden layer before applying the activation function:
\begin{equation}
\vv^0 = [{\vw_1^{0}}^T\vx, \cdots , {\vw^{0}_{M^0}}^T\vx ],
\end{equation}
\begin{equation}
\vv^{(p)} = [\sum_{j=1}^{M^{p-1}} w^{(p)}_{j,1} \phi(v^{p-1}_j), \cdots , \sum_{j=1}^{M^{p-1}} w^{(p)}_{j,M^{(p)}} \phi(v^{p-1}_j) ],
\end{equation}
\begin{equation}
\vv^{(p)} = [{\vw_1^{(p)}}^T \boldsymbol{\phi}^{(p)}, ..., {\vw^{(p)}_{M^{(p)}}}^T \boldsymbol{\phi}^{(p)} ],
\end{equation}
where $\boldsymbol{\phi}^{(p)}= [ \phi(v^{p-1}_1), \cdots, \phi(v^{p-1}_{M^{p-1}}) ]$. We have
$||\vv^{(p)}||^2_2 = \sum_{m=1}^{M^{(p)}} ({\vw^{(p)}_m}^T \boldsymbol{\phi}^{(p)})^2 $
and
${\vw^{(p)}_m}^T \boldsymbol{\phi}^{(p)} \leq C_3^{(p)} \sum_n \phi^{(p)}_n $. Thus,
\begin{equation}
||\vv^{(p)}||^2_2 \leq \sum_{m=1}^{M^{(p)}} ( C_3^{(p)} \sum_n \phi^{(p)}_n )^2 = M^{(p)} {C_3^{p}}^2 (\sum_n \phi^{(p)}_n )^2
= M^{(p)} {C_3^{p}}^2 \sum_{mn} \phi^{(p)}_m \phi^{(p)}_n.
\end{equation}
We use the same decomposition trick for $\phi^{(p)}_m \phi^{(p)}_n$ as in the proof of Lemma \ref{supf}. We need to bound $\sup_\vx \phi^{(p)}_j $: \\
\begin{equation}
\sup_\vx \phi^{(p)}_j \leq \sup(L_{\rho} |v^{p-1}_j|+ \phi(0))
\leq L_{\rho} ||\vv^{p-1}||_2 + \phi(0).
\end{equation}
Thus, we have
\begin{equation}
||\vv^{(p)}||^2_2 \leq M^{(p)} {C_3^{p}}^2 \big( {M^{p-1}}^2 (L_{\rho} ||\vv^{p-1}||_2 + \phi(0))^2
- M^{p-1}(M^{p-1}-1) \big(d_{min}^{(p)}\big)^2 \big) \leq \mathcal{J}^{p}.
\end{equation}
We have found a recursive bound for $||\vv^{(p)}||^2_2$ (using $||\vv^{p-1}||_2 \leq \sqrt{\mathcal{J}^{p-1}}$ inductively), and we note that for $p=0$ we have
$||\vv^0||^2_2 \leq ||\mW^0||_{\infty} C_1 \leq C^0_3 C_1 = \mathcal{J}^0 $. Thus,
\begin{equation}
\sup_{\vx,f^P \in \mathcal{F}^P} |f(\vx)| = \sup_{\vx,f^P \in \mathcal{F}^P} |\vv^P| \leq \sqrt{\mathcal{J}^P}.
\end{equation}
By replacing the variables in Lemma \ref{mainlemma}, we have
\begin{multline}
L(\hat{f}) - L(f^*) \leq 4(\sqrt{\mathcal{J}^P} + C_2) \Bigg( \frac{(2L_{\rho})^P C_1 C^0_3}{\sqrt{N}} \prod^{P-1}_{p=0} \sqrt{M^{(p)}} C_3^{(p)} \\
+ \frac{ |\phi(0)|}{\sqrt{N}}\sum_{p=0}^{P-1} (2L_{\rho})^{P-1-p} \prod^{P-1}_{j=p} \sqrt{M^j} C_3^j \Bigg)
+ \frac{1}{2}\left( \sqrt{\mathcal{J}^P} + C_2 \right)^2 \sqrt{\frac{2 \log(2/ \delta)}{N}}.
\end{multline}
Taking $A= 4\bigg( (2L_{\rho})^P C_1 C^0_3 \prod^{P-1}_{p=0} \sqrt{M^{(p)}} C_3^{(p)} + |\phi(0)|\sum_{p=0}^{P-1} (2L_{\rho})^{P-1-p} \prod^{P-1}_{j=p} \sqrt{M^j} C_3^j \bigg) $ completes the proof.
\end{proof}
In Theorem \ref{theorm_multi}, we see that $\mathcal{J}^P$ is decreasing with respect to $d^{(p)}_{min}$. This extends our results to the multi-layer neural network case.
\subsection{ Multiple outputs}
Finally, we consider the case of a neural network with a multi-dimensional output, i.e., $\vy \in R^D$. In this case, we can extend Theorem \ref{theorm1} with the following two theorems:
\begin{theorem} \label{theorm4}
For a multivariate regression trained with the squared error, there exists a constant $A$ such that, with probability at least $(1 - \delta)$, we have
\begin{equation}
L(\hat{f}) - L(f^*) \leq (\sqrt{\mathcal{J}} + C_2)\frac{A}{\sqrt{N}} + \frac{D}{2}( \sqrt{\mathcal{J}} + C_2)^2 \sqrt{\frac{2 \log(2/ \delta)}{N}}
\end{equation}
where $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$, $ C_5 =L_{\rho} C_1C_3 + \phi(0)$, and $A= 4D \Big(2L_{\rho} C_{134}+ C_4 |\phi(0)|\Big)M$.
\end{theorem}
\begin{proof}
The squared loss $ \frac{1}{2}||f(\vx) - \vy||_2^2 $ can be decomposed into $D$ terms $\frac{1}{2} (f(\vx)_k - y_k)^2$. Using Theorem \ref{theorm1}, we can derive the bound for each term and thus, we have:
\begin{equation}
L(\hat{f}) - L(f^*) \leq 4D(\sqrt{\mathcal{J}} + C_2) \Big(2L_{\rho} C_{134}+ C_4 |\phi(0)|\Big) \frac{M}{\sqrt{N}} + \frac{D}{2}( \sqrt{\mathcal{J}} + C_2)^2 \sqrt{\frac{2 \log(2/ \delta)}{N}},
\end{equation}
where $C_{134}=C_1C_3C_4$, $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$, and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$. Taking $A= 4D \Big(2L_{\rho} C_{134}+ C_4 |\phi(0)|\Big)M$ completes the proof.
\end{proof}
\begin{theorem} \label{theorm5}
For a multi-class classification task using the cross-entropy loss, there exists a constant $A$ such that, with probability at least $(1 - \delta)$, we have
\begin{multline}
L(\hat{f}) - L(f^*) \leq \frac{A}{(D-1+\ve^{-2\sqrt{\mathcal{J}}})\sqrt{N}} + \log\Big(1 + (D-1) \ve^{2\sqrt{\mathcal{J}}}\Big) \sqrt{\frac{2 \log(2/ \delta)}{N}}
\end{multline}
where $\mathcal{J}= C^2_4 \big( MC^2_5 + M(M-1) (C_5^2 -d_{min}^2 ) \big)$ and $ C_5 =L_{\rho} C_1C_3 + \phi(0)$, and $A=4D(D-1)\left(2L_{\rho}C_{134} + C_4 |\phi(0)|\right)M $.
\end{theorem}
\begin{proof}
Using Lemma 9 in \cite{xie2015generalization}, we have $\sup_{f,\vx,\vy} l \leq \log\big(1 + (D-1) \ve^{2\sqrt{\mathcal{J}}}\big) $ and $l$ is $\frac{D-1}{D-1+\ve^{-2\sqrt{\mathcal{J}}}}$-Lipschitz. Thus, using the decomposition property of the Rademacher complexity, we have
\small
\begin{equation}
\mathcal{R}_N(\mathcal{A}) \leq \frac{4D(D-1)}{D-1+\ve^{-2\sqrt{\mathcal{J}}}} \left(2L_{\rho}C_{134} + C_4 |\phi(0)|\right)\frac{M}{\sqrt{N}}.
\end{equation}
Taking $A=4D(D-1)\left(2L_{\rho}C_{134} + C_4 |\phi(0)|\right)M $ completes the proof.
\end{proof}
Theorems \ref{theorm4} and \ref{theorm5} extend our result to the multi-dimensional regression and classification tasks, respectively. Both bounds decrease as the diversity factor $d_{min}$ increases. We note that for the classification task, the upper-bound is exponentially decreasing with respect to $d_{min}$. This shows that learning a diverse and rich feature representation yields a tighter generalization gap and, thus, theoretically guarantees a stronger generalization performance.
\section{Discussions and open problems} \label{con}
In this paper, we showed how the diversity of the features learned by a two-layer neural network trained with the least-squares loss affects generalization. We quantified the diversity by the average $L_2$-distance between the hidden-layer features, and we derived novel diversity-dependent generalization bounds based on Rademacher complexity for such models. The derived bounds are decreasing in the diversity term, thus demonstrating that more distinct features within the intermediate layer can lead to better generalization. We also showed how to extend our results to deeper networks and different losses.
The bound found in Theorem \ref{theorm1} suggests that the generalization gap, with respect to diversity, scales as $\sim (C_5^2 - d^2_{min})/\sqrt{N}$. We validate this finding empirically in Figure \ref{fig_theorydiv}. We train a two-layer neural network on the MNIST dataset for 100 epochs using SGD with a learning rate of $0.1$ and a batch size of 256. We show the generalization gap, i.e., test error $-$ train error, and the theoretical bound, i.e., $ (C_5^2 - d^2_{min})/\sqrt{N}$, for different training set sizes. $d_{min}$ is the lower bound of diversity; empirically, its square can be estimated as the minimum feature diversity over the training data $S$: $ d_{min}^2= \min_{\vx \in S} \frac{1}{2M(M-1)} \sum_{n \neq m}^M ( \phi_n(\vx) - \phi_m(\vx))^2 $. We experiment with different sizes of the intermediate layer, namely 128, 256, and 512. The average results over 5 random seeds are reported for different training sizes in Figure \ref{fig_theorydiv}, showing that the theoretical bound correlates consistently well (correlation $>$ 0.9939) with the generalization error.
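The empirical estimate of $d_{min}$ described above amounts to a minimum over training samples of the average pairwise squared distance. A sketch of this estimator and of the tracked quantity $(C_5^2 - d^2_{min})/\sqrt{N}$, run on stand-in random features rather than real MNIST activations:

```python
import numpy as np

def estimate_d2_min(Phi):
    """Phi: (num_samples, M) array of hidden activations over a dataset.
    Returns the minimum over samples of the average pairwise squared
    distance, i.e. the empirical estimate of d_min^2 used for the bound."""
    M = Phi.shape[1]
    diffs = Phi[:, :, None] - Phi[:, None, :]              # (S, M, M)
    per_sample = (diffs ** 2).sum(axis=(1, 2)) / (2 * M * (M - 1))
    return per_sample.min()

rng = np.random.default_rng(0)
Phi = rng.uniform(0.0, 1.0, size=(100, 32))   # stand-in for ReLU features
d2 = estimate_d2_min(Phi)
C5, N = 1.0, 10_000
proxy = (C5 ** 2 - d2) / np.sqrt(N)           # the tracked theoretical term
print(d2, proxy)
```

On real activations, `Phi` would be the hidden-layer outputs over the training set, recomputed at each evaluation point.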
\begin{figure}[t]
\centering
\includegraphics[width=0.33\linewidth]{mnist_diversityboundColt1_128_corr_0.9947586098669323.jpg}
\includegraphics[width=0.33\linewidth]{mnist_diversityboundColt1_256_corr_0.9938945932167453.jpg}
\includegraphics[width=0.32\linewidth]{mnist_diversityboundColt1_512_corr_0.9953041418048238.jpg}
\caption{Generalization gap, i.e., test error $-$ train error, and the theoretical bound, i.e., $ (C_5^2 - d^2_{min})/\sqrt{N}$, as a function of the number of training samples on MNIST dataset for neural networks with intermediate layer sizes from left to right: 128 (correlation=0.9948), 256 (correlation=0.9939), and 512 (correlation=0.9953). The theoretical term has been scaled in the same range as the generalization gap. All results are averaged over 5 random seeds. }
\label{fig_theorydiv}
\end{figure}
As shown in Figure \ref{cifar_div}, diversity increases for neural networks along the training phase. To further investigate this observation, we conduct additional experiments on the ImageNet \citep{russakovsky2015imagenet} dataset using 4 different state-of-the-art models: \textbf{ResNet50} and \textbf{ResNet101}, i.e., the standard ResNet model \citep{he2016deep} with 50 and 101 layers, \textbf{ResNext50} \citep{xie2017aggregated}, and \textbf{WideResNet50} \citep{BMVC2016_87} with 50 layers. All models are trained with SGD using the standard training protocol \citep{zhang2018mixup,huang2017densely,cogswell2015reducing}. We track the diversity, as defined in \eqref{div_diff}, of the features of the last intermediate layer. The results are shown in Figure \ref{training_div} (a) and (b). As can be seen, SGD, without any explicit regularization, implicitly optimizes diversity and converges toward regions with high feature distinctness. These observations suggest the following conjecture:
\begin{conjecture} \label{implicit_div}
Standard training with SGD implicitly optimizes the diversity of intermediate features.
\end{conjecture}
\begin{figure}[t]
\centering
\includegraphics[width=0.33\linewidth]{imagenet_div_resnet.jpg}
\includegraphics[width=0.33\linewidth]{imagenet_div_resnext_wideresnet.jpg}
\includegraphics[width=0.32\linewidth]{div_vs_depth_mnist.jpg}
\caption{From left to right: (a)-(b) Tracking the diversity during the training for different models on ImageNet. (c) Final diversity as a function of depth for different models on MNIST. }
\label{training_div}
\end{figure}
Studying the fundamental properties of SGD is extremely important to understand generalization in deep learning \citep{kawaguchi2019gradient,kalimeris2019sgd,volhejn2020does,zou2021benefits,pmlr-v99-ji19a}. Conjecture \ref{implicit_div} suggests a new implicit bias of SGD, showing that it favors regions with high feature diversity.
Another research question related to diversity that is worth investigating is: \textit{How does the network depth affect diversity?} To answer this question, we conduct an empirical experiment using the MNIST dataset \citep{lecun1998gradient}. We use fully connected networks (FCNs) with ReLU activation and different depths (1 to 12). We experiment with three models with different widths, namely FCN-256, FCN-512, and FCN-1024, with 256, 512, and 1024 units per layer, respectively. We measure the final diversity of the last hidden layer for the different depths. The average results over 5 random seeds are reported in Figure \ref{training_div} (c). Interestingly, in this experiment, increasing the depth consistently leads to learning more distinct features and higher diversity for the different models. However, by looking at Figure \ref{cifar_div}, we can see that more parameters do not always lead to higher diversity. This suggests the following open question:
\begin{OpenProblem} \label{deep_div}
When do more parameters or more depth lead to higher diversity?
\end{OpenProblem}
Understanding the difference between shallow and deep models and why deeper models generalize better is one of the puzzles of deep learning \citep{liao2019generalization,kawaguchi2019depth,poggio2017theory}. The insights gained by studying Open Problem \ref{deep_div} can lead to a novel key advantage of depth: deeper models are able to learn a richer and more diverse set of features.
Another interesting line of research is adversarial robustness
\citep{NEURIPS2019_36ab6265,wu2021wider,liao2019generalization,mao2020multitask}. Intuitively, learning distinct features can lead to a richer representation and, thus, more robust networks. However, the theoretical link is missing. This leads to the following open problem:
\begin{OpenProblem} \label{prob_corr}
Can the theoretical tools proposed in this paper be used to prove benefits of feature diversity for adversarial robustness?
\end{OpenProblem}
We hope that our work inspires further investigations into each of these problems, as we keep exploring the different properties of neural networks.
\acks{This work has been supported by the NSF-Business Finland
Center for Visual and Decision Informatics (CVDI) project
AMALIA. The work of Jenni Raitoharju was funded by the Academy of Finland (project 324475). }
| {
"timestamp": "2022-02-16T02:01:47",
"yymm": "2106",
"arxiv_id": "2106.06012",
"language": "en",
"url": "https://arxiv.org/abs/2106.06012",
"abstract": "We study the diversity of the features learned by a two-layer neural network trained with the least squares loss. We measure the diversity by the average $L_2$-distance between the hidden-layer features and theoretically investigate how learning non-redundant distinct features affects the performance of the network. To do so, we derive novel generalization bounds depending on feature diversity based on Rademacher complexity for such networks. Our analysis proves that more distinct features at the network's units within the hidden layer lead to better generalization. We also show how to extend our results to deeper networks and different losses.",
"subjects": "Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)",
"title": "Learning distinct features helps, provably",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9814534333179648,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7083573360463054
} |
https://arxiv.org/abs/1909.09043 | A note on minimal art galleries | We will consider some extensions of the polygonal art gallery problem. In a recent paper Morrison has shown the smallest (9 sides) example of an art gallery that cannot be observed by guards placed in every third corner. Author also mentioned two related problems, for which the minimal examples are not known. We will show that a polygonal fortress such that its exterior cannot be guarded by sentries placed in every second vertex has at least 12 sides. Also, we will show an example of three-dimensional polyhedron such that its inside cannot be covered by placing guard in every vertex which has both fewer vertices and faces than previously known. | \section{Introduction}
The original art gallery problem is posed as follows: given a polygon with $n$ sides, choose $x$ points called guards inside it such that any point of the polygon can be observed by at least one guard (precisely, for any $p$ in the polygon there exists a guard $q$ such that the line segment $\overline{pq}$ is contained in the polygon). It has been proven by Chv\'atal that in general $x=\lfloor n / 3 \rfloor$ guards are enough and there exist galleries for which this limit cannot be lowered. Later, Fisk proved that guarding with $\lfloor n /3 \rfloor$ guards can be achieved by placing guards only in vertices of the polygon. However, simply placing a guard in every third vertex is not always a successful strategy, and Morrison \cite{RM} showed that the minimal example for which this strategy does not work has 9 vertices.
We will first consider the fortress problem: given a polygon with $n$ sides, choose $x$ points called guards such that for any point $p$ outside of the polygon there exists a guard $q$ such that the line segment $\overline{pq}$ is outside the polygon. It has been proven by O'Rourke and Wood \cite{JO} that $\lceil n / 2 \rceil$ guards (placed in vertices) suffice and are sometimes necessary to fulfill this task. Our goal is to prove that a minimal example that cannot be guarded with the simple strategy of placing a guard in every second vertex is 12-sided.
The second problem we will address is the three-dimensional art gallery problem: given a polyhedron, choose points called guards inside it such that for any point $p$ in the polyhedron there exists a guard $q$ such that the line segment $\overline{pq}$ is contained in the polyhedron. In contrast to the previously mentioned problems, placing guards in vertices is not an optimal strategy; there are known examples of polyhedra which are not guarded even when a guard is placed at every vertex, notably the Octoplex \cite{TSM} with 56 vertices and 30 faces. We present an example having 24 vertices and 26 faces; however, we were not able to prove that it is minimal.
Throughout the paper, by ``$A$ is visible from $B$'' we mean that the segment $\overline{AB}$ does not intersect the border of the polytope under discussion, so it is either fully contained in the union of the interior and the border of the polytope (when talking about art galleries) or in the union of the border and the complement of the polytope (when talking about fortresses). In particular, this means that the segment may ``touch'' the border without intersecting it.
By ``$A$ is observed'' we mean that there exists a guard for which $A$ is visible. \ensuremath{\mathrm{Conv}\left(F\right)}\ denotes the convex hull of $F$.
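The visibility notion above can be made concrete with a short sketch. The helper names and the L-shaped test polygon below are our own illustration, not part of the paper, and for simplicity the sketch only tests for proper crossings, ignoring the degenerate ``touching'' cases that the definition explicitly allows:

```python
def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def proper_crossing(p1, p2, q1, q2):
    # True iff segments p1p2 and q1q2 cross at interior points of both
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(a, b, polygon):
    # a and b see each other iff segment ab properly crosses no polygon edge
    n = len(polygon)
    return not any(proper_crossing(a, b, polygon[i], polygon[(i + 1) % n])
                   for i in range(n))

# A hypothetical L-shaped gallery: the reflex corner at (1, 1) blocks some sightlines.
L_gallery = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
```

With this polygon, a guard near $(0.1, 0.1)$ sees $(1.5, 0.5)$, but the reflex corner hides $(0.5, 1.9)$ from $(1.9, 0.5)$.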
\section{Fortress}
\begin{lem} \label{convex}
Let $F$ be an $n$-gon with a guard placed at every second vertex. If there is a point outside $F$ that cannot be observed by any guard, then this point is not visible from any point outside of \ensuremath{\mathrm{Conv}\left(F\right)}.
\end{lem}
\begin{proof}
Assume that there exists a point $X$ that is not observed by any guard, yet is visible from outside of \ensuremath{\mathrm{Conv}\left(F\right)}. This means that we can pick two vertices $A,B$ such that the whole of $F$ lies inside the angle $\angle AXB$. Because $X$ is not observed, there are no guards at $A$ or $B$; hence $\overline{AB}$ is not an edge of $F$. Let $C,D$ denote the vertices of $F$ such that $\overline{BC},\overline{BD}$ are edges of $F$; without loss of generality $\angle XBC < \angle XBD$. As guards are placed at every second vertex, there must be a guard at $C$.
Find a vertex $B'\neq B$ of $F$ lying inside $\triangle BXC$ such that $\angle B'XB$ is minimal. There are no edges intersecting $\overline{BX}$ or $\overline{BC}$, so $B'$ must be visible from $X$, as there cannot exist any edge hiding it. Hence $B'$ carries no guard. We can repeat this process with $B'$ in place of $B$, obtaining an infinite sequence of distinct vertices of $F$, each without a guard. This is a contradiction, as $F$ has only $n$ vertices.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{lemat.png}
\caption{Red dots are guards. Going from $B$, after each guard there must be a segment hiding it from $X$.}
\label{fig:lemat}
\end{figure}
\end{proof}
\begin{thm}
Let $F$ be an $n$-gon with $n<12$. Then, for some choice of the starting vertex, placing a guard on every second vertex, all points outside $F$ are observed.
\end{thm}
\begin{proof}
Let us assume that we have a polygon $F$ with fewer than 12 sides such that, for any choice of initial vertex, placing a guard at every second vertex always leaves some unobserved point outside of $F$.
Pick a point $O$ which is a vertex of both $F$ and \ensuremath{\mathrm{Conv}\left(F\right)}. We color the vertices of $F$ with two alternating colors (red and blue), starting with a red $O$ and going clockwise. If we place guards at every second vertex starting with $O$, then by our hypothesis we can find a point $X$ which is visible only from blue vertices. If we place guards starting with the first vertex after $O$, then we can find a point $Y$ which is visible only from red vertices (excluding $O$ if $n$ is odd).
Starting clockwise from $O$, we create the sequence $a_1,a_2,\ldots,a_p$ of blue vertices from which $X$ is visible, and let $b_i$ be the first vertex after $a_i$. Again going clockwise from $O$, we create the sequence $c_1,c_2,\ldots,c_q$ of red vertices from which $Y$ is visible, and let $d_i$ be the first vertex after $c_i$. By Lemma \ref{convex}, $X,Y$ are inside \ensuremath{\mathrm{Conv}\left(F\right)}, so each is visible from at least three vertices. Notice that all $a_i$ must lie on the border of one compact component of $\ensuremath{\mathrm{Conv}\left(F\right)} \backslash F$; also, by Lemma \ref{convex}, no $a_i$ can be on the border of \ensuremath{\mathrm{Conv}\left(F\right)}, or $X$ would be visible from outside of \ensuremath{\mathrm{Conv}\left(F\right)}. The same reasoning applies to the $c_i$, so the only points from these four sequences that may belong to the border of \ensuremath{\mathrm{Conv}\left(F\right)}\ are $b_p$ and $d_q$. Even if $b_p$ is on the border of \ensuremath{\mathrm{Conv}\left(F\right)}, it is still in the same compact component as $a_p$, and the same applies to $d_q$ and $c_q$.
Notice that if $X,Y$ are in different compact components of $\ensuremath{\mathrm{Conv}\left(F\right)} \backslash F$ (Figure \ref{fig:two}), then all four sequences are disjoint, giving $F$ at least 12 vertices; so from now on we may assume that $X,Y$ are in the same compact component of $\ensuremath{\mathrm{Conv}\left(F\right)} \backslash F$. This means that $b_p,d_q$ cannot simultaneously be on the border of \ensuremath{\mathrm{Conv}\left(F\right)}, so there must be at least two vertices of \ensuremath{\mathrm{Conv}\left(F\right)}\ which do not belong to any of the four sequences.
\newpage
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{twocomp.png}
\caption{Example of configuration with $X$ and $Y$ being in different compact components.}
\label{fig:two}
\end{figure}
The next step is to prove that $\left(b_i\right)$ and $\left(c_i\right)$ have at most one common element (and the same is true for $\left(a_i\right)$ and $\left(d_i\right)$). So assume we have $j<k$ such that $b_j,b_k$ belong to the sequence $\left(c_i\right)$, which means that $Y$ is visible from them.
There are two possibilities, depicted in Figure \ref{fig:xycabab}.
In both cases, if we pick a vertex $C$ between $b_j$ and $a_k$ (inclusive) such that $\angle XYC$ is minimal, then this vertex is visible both from $X$ and from $Y$ (in the first case $C=b_j$).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{XYCabab.png}
\caption{Two possible configurations when $\left(b_i\right)$ and $\left(c_i\right)$ have more than one common element. Green lines are segments that do not intersect the border of $F$.}
\label{fig:xycabab}
\end{figure}
Hence, the set $\left\{a_1,\ldots,a_p,b_1,\ldots,b_p,c_1,\ldots,c_q,d_1,\ldots,d_q \right\}$ has at least $2\cdot(p+q-1)$ elements. Because $p,q \ge 3$, we get at least 10 distinct vertices of $F$ in one connected component of $\ensuremath{\mathrm{Conv}\left(F\right)} \backslash F$. As we mentioned earlier, there must be at least two further vertices that are not elements of these sequences, so $n \ge 12$.
\end{proof}
\newpage
\begin{thm}
There exists a 12-gon $F$ such that, for any choice of the starting vertex, placing a guard on every second vertex leaves some point outside $F$ unobserved.
\end{thm}
\begin{proof}
The Leszek-the-dog-fortress\footnote{As pointed out by P. Miska, this shape resembles a certain cartoon character.} is an example of such a polygon (Figure \ref{fig:12gon}): the red area is visible only from red vertices, and the blue area only from blue vertices.
\begin{figure}[h]
\centering
\includegraphics{12gon.png}
\caption{The Leszek-the-dog-fortress: an example of a 12-sided fortress that cannot be guarded by placing a guard at every second vertex.}
\label{fig:12gon}
\end{figure}
It is worth noting that this fortress can be guarded by 4 guards.
\end{proof}
There is a big difference in placing guards at every second vertex depending on whether $n$ is even or odd. For even $n$ there are only two ways to place the guards, so for this strategy to fail we need only two ``hard to observe'' points outside $F$. However, when $n$ is odd there are exactly $n$ different ways to place the guards, and there is always one edge with guards on both ends. Hence the question:
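The count of distinct placements is easy to verify directly. The following sketch is our own illustration (not from the paper), assuming vertices are labeled $0,\dots,n-1$ around the polygon:

```python
def every_second_vertex(n, start):
    # Guard set when a guard is placed on every second vertex of an n-gon,
    # beginning at vertex `start`; ceil(n/2) guards are used.
    k = (n + 1) // 2
    return frozenset((start + 2 * i) % n for i in range(k))

def distinct_placements(n):
    # Number of distinct guard sets over all possible starting vertices.
    return len({every_second_vertex(n, s) for s in range(n)})
```

For even $n$ this returns 2 and for odd $n$ it returns $n$, matching the discussion above; for odd $n$ the last guard lands on the vertex just before the starting one, which is exactly the edge with guards on both ends.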
\begin{que}
What is the smallest odd $n$ for which there exists an $n$-gon such that, for any choice of the starting vertex, placing a guard on every second vertex leaves some point outside unobserved?
\end{que}
The smallest example we could find (Figure \ref{fig:odd}) has $n=21$, and we were not able to prove it is minimal. In fact, it is just two copies of the previous example connected together, so, depending on where the starting vertex is, the strategy fails for at least one of the copies.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{odd.png}
\caption{Two connected Leszek-the-dog-fortresses form a 21-gon which cannot be guarded by placing a guard on every second vertex: for any choice of starting vertex, at least one green area is not observed.}
\label{fig:odd}
\end{figure}
\section{Three-dimensional gallery}
In this section we consider guards observing the interior of a polyhedron. The main difference, and our focus, is the fact that some such galleries are not entirely observed even when a guard is placed at every vertex. One known example is the Octoplex.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{octoplex.png}
\includegraphics[width=0.4\textwidth]{octo_projections.png}
\caption{The Octoplex. The isometric projection is Figure 3.19 in \cite{TSM}.}
\label{fig:octo}
\end{figure}
This polyhedron has 56 vertices and 30 faces. It is constructed from a 20-by-20-by-20 cube by removing six rectangular cuboids of varying sizes, and its center $q$ cannot be observed from any vertex; for details see \cite{TSM}. In a first attempt at finding a smaller polyhedron with the same property, we tried to modify the Octoplex: we noticed that there are four pairs of faces, each pair sharing an edge, that are completely invisible from $q$. By cutting out additional parts from the cube we obtained the ``truncated Octoplex'' (Figure \ref{fig:tocto}).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{trucnated_octoplex.png}
\includegraphics[width=0.4\textwidth]{truncating_projection.png}
\caption{The truncated Octoplex and its orthogonal projection. Green lines show that some parts of the polyhedron are hidden behind the front/back rectangular faces, so we can cut the red part out without the risk of creating any new vertices in the area visible from the center.}
\label{fig:tocto}
\end{figure}
This polyhedron has 48 vertices and 26 faces: we removed 4 faces and 8 vertices, and 8 other vertices have been slightly moved; however, it is easy to check that they are still in a blind spot. Further attempts to truncate the Octoplex were unsuccessful because of the lack of symmetry: the cutouts from the cube have different widths and heights, and repeating the same operation on the other sides would cause the 8 moved vertices to become visible from the center.
To fix the symmetries we took a ``regular'' Octoplex, with cutouts of the same size. The problem with this shape is that the corners of the cube are visible from its center. However, we first performed a partial truncation to get rid of all 8-sided faces, which produced many additional vertices, and as the last step we sliced off the corners of the cube, leaving nice triangular faces in place of the corners. We called this new polytope the \"Uberoctoplex\footnote{Thanks to K. Łasocha for the winning idea.} (Figure \ref{fig:uber}).
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{uberoctoplex.png}
\includegraphics[width=0.4\textwidth]{uber_projection.png}
\caption{The \"Uberoctoplex and its orthogonal projection. Red vertices are hidden from the center by the red rectangular faces in front and back. Other vertices are hidden by other rectangular faces in the same way.}
\label{fig:uber}
\end{figure}
This polyhedron has 26 faces but only 24 vertices; however, we were unable to prove that this is the minimal number of vertices or faces. Later we noticed that the same partial truncation and corner cutting can be applied to the usual Octoplex, resulting in a similar but less regular shape which still has the same property.
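Since each of these solids is topologically a sphere (a cube with indentations cut into it), Euler's formula $V - E + F = 2$ recovers the edge counts from the vertex and face counts quoted above. The sketch below is a quick sanity check of the stated numbers (the derived edge counts are our own, not quoted in the text):

```python
# Vertex and face counts quoted in the text; assuming each solid is genus 0,
# Euler's formula V - E + F = 2 gives the edge count as E = V + F - 2.
shapes = {
    "Octoplex": (56, 30),
    "truncated Octoplex": (48, 26),
    "Uberoctoplex": (24, 26),
}
edges = {name: v + f - 2 for name, (v, f) in shapes.items()}
```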
Recipe for The \"Uberoctoplex (figure \ref{fig:recipe}): Take a 20-by-20-by-20 cube, similarly to the Octoplex from each side cut out a prism, but with trapezoid as base. The trapezoid has height 4 and bases 12 and 10. Now, near each corner of cube there are 7 vertices including the corner itself, choose those three of them that are farther away from corner and make a cut with plane determined by them. Add oil, fry, season to taste.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{uberoctoplex2.png}
\caption{Stages of creating \"Uberoctoplex from cube.}
\label{fig:recipe}
\end{figure}
Sadly, after some research we found that we were not the first to find the \"Uberoctoplex: an image depicting it can be found on the internet dating back to 2007 \cite{IK}. We contacted I. Karonen, the author of this image, and he said that he found this example on his own ``as it's in some ways a fairly natural construction'' (and we certainly agree), but he is unsure whether anyone described it earlier.
As we were not able to prove that this is a minimal example of such a polyhedron, two questions remain open:
\begin{que}
Is there a polyhedron with fewer than 24 vertices that cannot be guarded by placing a guard at every vertex?
\end{que}
\begin{que}
Is there a polyhedron with fewer than 26 faces that cannot be guarded by placing a guard at every vertex?
\end{que}
\bibliographystyle{unsrt}
https://arxiv.org/abs/1906.03058
Title: Robust subgaussian estimation of a mean vector in nearly linear time
Abstract: We construct an algorithm, running in time $\tilde{\mathcal O}(N d + uK d)$, which is robust to outliers and heavy-tailed data and which achieves the subgaussian rate from [Lugosi, Mendelson]
\begin{equation}\label{eq:intro_subgaus_rate} \sqrt{\frac{{\rm Tr}(\Sigma)}{N}}+\sqrt{\frac{||\Sigma||_{op}K}{N}} \end{equation}
with probability at least $1-\exp(-c_0K)-\exp(-c_1 u)$, where $\Sigma$ is the covariance matrix of the informative data, $K\in\{1, \ldots, N\}$ is a parameter (the number of block means) and $u>0$ is another parameter of the algorithm. This rate is achieved when $K\geq c_1 |\mathcal O|$, where $|\mathcal O|$ is the number of outliers in the database, and under the only assumption that the informative data have a second moment. The algorithm is fully data-dependent and does not use in its construction the proportion of outliers nor the rate above. Its construction combines recently developed tools for median-of-means estimators and covering semi-definite programming [Chen, Diakonikolas, Ge; Peng, Tangwongsan, Zhang].
\section{Introduction on the robust mean vector estimation problem}
\label{sec:introduction_on_the_mean_vector_problem}
Estimating the mean of a random variable in a $d$-dimensional space when given some of its realizations is arguably the oldest and most fundamental problem of statistics. In the past few years, it has received considerable attention from two communities: the Statistics \cite{AIHPB_2012__48_4_1148_0,minsker2015geometric,MR3845006,CG,lugosi2019sub,MR3851758,LMSL,hopkins2018sub,Bartlett19} and Computer Science \cite{MR3631028,MR3945261,diakonikolas2018robustly,diakonikolas2016robust,MR3826316,MR3909639,MR3909640} communities. Both communities consider the problem of \textit{robust mean estimation}, focusing mainly on different definitions of robustness.
In recent years, many efforts have been made by the Statistics community toward the construction of estimators performing in a \textit{subgaussian way} for heavy-tailed data. Such estimators achieve the same statistical properties as the empirical mean of an $N$-sample of i.i.d. Gaussian variables $\cN(\mu, \Sigma)$, where $\mu\in\bR^d$ and $\Sigma\succeq0$ is the covariance matrix. In that case, for a given confidence $1-\delta$, the subgaussian rate as defined in \cite{lugosi2019sub} is (up to an absolute multiplicative constant)
\begin{equation}\label{eq:sub_gauss}
r_\delta = \sqrt{\frac{\Tr(\Sigma)}{N}} + \sqrt{\frac{||\Sigma||_{op} \log(1/\delta)}{N}}
\end{equation}where $\Tr(\Sigma)$ is the trace of $\Sigma$ and $||\Sigma||_{op}$ is the operator norm of $\Sigma$. Indeed, it follows from Borell-TIS's inequality (see Theorem~7.1 in \cite{Led01} or pages 56-57 in \cite{MR2814399}) that with probability at least $1-\delta$,
\begin{equation*}
\norm{\bar X_N-\mu}_2 = \sup_{\norm{v}_2\leq 1}\inr{\bar X_N-\mu,v}\leq \E \sup_{\norm{v}_2\leq 1}\inr{\bar X_N-\mu,v} + \sigma \sqrt{2\log(1/\delta)}
\end{equation*}where $\sigma =\sup_{\norm{v}_2\leq1} \sqrt{\E\inr{\bar X_N-\mu, v}^2}$. It is straightforward to check that $\E \sup_{\norm{v}_2\leq 1}\inr{\bar X_N-\mu,v}\leq \sqrt{\Tr(\Sigma)/N}$ and $\sigma=\sqrt{\norm{\Sigma}_{op}/N}$, which leads to the rate in \eqref{eq:sub_gauss} (up to the constant $\sqrt{2}$ on the second term in \eqref{eq:sub_gauss}). In most of the recent works, the effort has been made to achieve the rate $r_\delta$ for i.i.d. heavy-tailed data, even under the minimal requirement that the data only have a second moment. Under this second-moment assumption alone, the empirical mean cannot achieve the rate \eqref{eq:sub_gauss} and one needs to consider other procedures\footnote{Under only a second-moment assumption, the empirical mean achieves the rate $\sqrt{\Tr(\Sigma)/(\delta N)}$, which cannot be improved in general.}. Over the years, several procedures have been proposed to achieve this goal: a Le Cam test estimator, called a tournament estimator, in \cite{lugosi2019sub}, a minmax median-of-means estimator in \cite{LMSL} and a PAC-Bayesian estimator in \cite{CG}. The first two are based on the median-of-means principle, which we will also use.
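For concreteness, the subgaussian rate \eqref{eq:sub_gauss} is straightforward to evaluate numerically. The sketch below (our own illustration, with an arbitrary diagonal $\Sigma$) computes $r_\delta$ from a covariance matrix, a sample size and a confidence level:

```python
import math
import numpy as np

def subgaussian_rate(Sigma, N, delta):
    """r_delta = sqrt(Tr(Sigma)/N) + sqrt(||Sigma||_op * log(1/delta) / N)."""
    tr = float(np.trace(Sigma))
    # for a PSD matrix the operator norm is the top eigenvalue (= top singular value)
    op = float(np.linalg.norm(Sigma, ord=2))
    return math.sqrt(tr / N) + math.sqrt(op * math.log(1.0 / delta) / N)

Sigma = np.diag([4.0, 1.0, 1.0])   # Tr(Sigma) = 6, ||Sigma||_op = 4
r = subgaussian_rate(Sigma, N=1000, delta=0.01)
```

Note how the first term depends on the whole spectrum through the trace, while the confidence level $\delta$ only multiplies the operator-norm term.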
On the other side, the Computer Science community mostly considers a different definition of robustness and targets a different goal. In many recent CS papers, algorithms (not only estimators) have been constructed and proved to be robust with respect to a \textit{contamination} of the dataset, that is, when some of the data are replaced by other data which may have nothing to do with the original batch. This covers the Huber $\eps$-contamination model, but also adversarial data, which has received considerable attention recently in the deep learning community. Moreover, the Computer Science community looks at the problem of robust mean estimation from an algorithmic perspective, considering in particular the running time. A typical result in this line of research is Theorem~1.3 from \cite{MR3909640}, which we now recall.
\begin{Theorem}[Theorem~1.3, \cite{MR3909640}]\label{theo:diakonikolas} Let $X_1, \ldots, X_N$ be random vectors in $\bR^d$. We assume that there is a partition $\{1, \ldots, N\}=\cO\cup\cI$ such that nothing is assumed on $(X_i)_{i\in\cO}$ and $(X_i)_{i\in\cI}$ are independent with mean $\mu$ and covariance matrix $\Sigma \preceq \sigma^2 I_d$. We assume that $\eps=|\cO|/N$ is such that $0<\eps<1/3$ and $N\gtrsim d \log(d) /\eps$. There exists an algorithm running in $\tilde \cO(Nd)/{\rm poly}(\eps)$ which outputs $\hat \mu_\eps$ such that with probability at least $9/10$, $\norm{\hat \mu_\eps - \mu}_2\lesssim \sigma \sqrt{\eps}$.
\end{Theorem}
The first result proving the existence of a polynomial time algorithm robust to contamination may be found in \cite{MR3631028}. Theorem~\ref{theo:diakonikolas} improves upon many existing results since it achieves the optimal information theoretic-lower bound with a (nearly) linear-time algorithm.
Finally, there are two recent papers in which both algorithmic and statistical considerations are important. In \cite{hopkins2018sub,Bartlett19}, algorithms achieving the subgaussian rate in \eqref{eq:sub_gauss} have been constructed. They both run in polynomial time: $\cO(N^{24} + Nd)$ for \cite{hopkins2018sub} and $\cO(N^4+N^2d)$ for \cite{Bartlett19} (see \cite{Bartlett19} for more details on these running times). They do not consider a contamination of the dataset, even though their results easily extend to this setup. Some other estimators proposed in the Statistics literature are very fast to compute, but they do not achieve the optimal subgaussian rate from \eqref{eq:sub_gauss}. A typical example is Minsker's geometric median estimator \cite{minsker2015geometric}, which achieves the rate $\sqrt{\Tr(\Sigma) \log(1/\delta)/N}$ in linear time $\tilde \cO(Nd)$. The latter three papers all use the median-of-means principle. We will use this principle, but only to construct a starting point (which will simply be the coordinate-wise median) and to compute the step size (where we only use the one-dimensional definition of the median along the descent direction). What we mainly borrow from the literature on MOM estimators is the advantage of working with local block means instead of the data themselves. We identify two such advantages: a stochastic one and a computational one (see Remark~\ref{rem:effects_blocks} below).
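Minsker's estimator mentioned above is built on the geometric median, which can be computed quickly by the classical Weiszfeld fixed-point iteration. The sketch below is our own minimal implementation (not the paper's algorithm); it sidesteps the degenerate case of an iterate landing exactly on a data point with a small numerical guard:

```python
import numpy as np

def geometric_median(points, iters=200, eps=1e-9):
    # Weiszfeld iteration: repeatedly average the points with weights
    # 1/distance; the fixed point minimizes the sum of Euclidean distances.
    y = points.mean(axis=0)
    for _ in range(iters):
        dist = np.maximum(np.linalg.norm(points - y, axis=1), eps)
        w = 1.0 / dist
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y
```

For the four corners of a square the iteration returns the center, the point minimizing the total distance to the corners.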
Robust mean estimation was raised in pioneering works in robust statistics by Huber \cite{MR0161415,MR2488795}, Tukey \cite{MR0120720,MR0133937} and Hampel \cite{MR0359096,MR0301858}. Their concern was more about robustness to model misspecification and about the breakdown point property (the ``smallest amount of contamination necessary to upset an estimator entirely'', taken from \cite{MR1193313}). The computational problem connected to this issue was not of primary interest, even though it was already raised, for instance, in Section~5.3 of \cite{MR1193313} for the construction of Tukey contours (a $d$-dimensional notion of quantiles).
The aim of this work is to show that a single algorithm can answer all three problems: robustness to heavy-tailed data, robustness to contamination, and computational cost. In this article, we construct an algorithm running in time $\tilde\cO(N d + u\log(1/\delta) d)$ which outputs an estimator of the true mean achieving the subgaussian rate \eqref{eq:sub_gauss} with confidence $1-\delta$ (for $\exp(-c_0N)\leq \delta\leq \exp(-c_1|\cO|)$) on a corrupted database and under a second-moment assumption only. It is therefore robust to heavy-tailed data and to contamination. Our approach takes ideas from both communities: the median-of-means principle, which has recently been used in the Statistics community, and an SDP relaxation from \cite{MR3909640} which can be computed fast. The baseline idea is to construct $K$ equal-size groups of data from the $N$ given ones and to compute their empirical means $\bar X_k, k=1, \ldots, K$. These $K$ empirical means are used successively to find a robust descent direction thanks to an SDP relaxation from \cite{MR3909640}. We prove the robust subgaussian statistical property of the resulting descent algorithm under only the following assumption.
\begin{Assumption}\label{assum:first}There exists a partition $\cI\cup\cO=\{1, \ldots, N\}$ of the dataset $(X_i)_{i \leq N}$ such that 1) nothing is assumed on $(X_i)_{i\in\cO}$ and 2) $(X_i)_{i\in\cI}$ are independent with mean $\mu$ and covariance $\E\big[(X_i-\mu) (X_i-\mu)^\top\big] \preceq \Sigma$, where $\Sigma$ is a given (unknown) covariance matrix.
\end{Assumption}
Assumption~\ref{assum:first} covers the two concepts of robustness considered in the Statistics and Computer Science communities, since the \textit{informative data} (the data indexed by $\cI$) are only assumed to have a second moment and there are $|\cO|$ outliers on which we make no assumption. Our aim is to show that the rate of convergence \eqref{eq:sub_gauss}, which is the rate achieved by the empirical mean in the ideal i.i.d. Gaussian case, can be achieved in the corrupted and heavy-tailed setup of Assumption~\ref{assum:first} with a fast algorithm.\\
The paper is organized as follows. In the next section, we give a high-level description of the algorithm and of its statistical and computational performance. In Section~3, we prove its statistical properties and give a precise definition of the algorithm. In Section~4, we study the statistical performance of the SDP relaxation at the heart of the descent direction. In Section~5, we fully characterize its computational cost. In Section~\ref{sec:adaptive_choice_of_}, we construct a procedure that achieves the same statistical properties and automatically adapts to the number of outliers.
\section{Construction of the algorithms and main result}
\label{sec:construction_of_the_algorithms_and_main_result}
The construction of our robust subgaussian descent procedure uses two ideas. The first one comes from the median-of-means (MOM) approach, which has recently received a lot of attention in the statistics and machine learning communities \cite{MR3124669,LO,MR3576558,MS,minsker2015geometric}. The MOM approach \cite{MR702836,MR1688610,MR855970,MR762855} often yields robust estimation strategies (though usually at a high computational cost). Let us give the general idea behind this approach: we first randomly split the data into $K$ equal-size blocks $B_1,\ldots ,B_K$ (if $K$ does not divide $N$, we just remove some data). We then compute the empirical mean within each block: for $k=1,\ldots,K$,
\begin{equation*}
\bar{X}_k=\frac{1}{|B_k|}\sum_{i \in B_k} X_i
\end{equation*}
where we set $|B_k|=\Card(B_k)=N/K$. In the one-dimensional case, we then take the median of the latter $K$ empirical means to construct a robust \emph{and subgaussian} estimator of the mean \cite{MR3576558}. Matters are more complicated in the multi-dimensional case, where there is no \textit{definitive} equivalent of the one-dimensional median but several candidates: the coordinate-wise median, the geometric median (also known as the Fermat point), and the Tukey median, among many others (see \cite{small1990survey}). The strength of this approach is the robustness of the median operator, which leads to good statistical properties even on corrupted databases. For the construction of our algorithm, we actually only use the idea of grouping the data and computing the $K$ means $\bar X_k, k=1, \ldots, K$.
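A minimal sketch of the block-means construction, with the coordinate-wise median as the aggregation step (the sample sizes, the heavy-tailed test distribution and the outlier pattern below are our own illustration, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 1200, 5, 30
X = rng.standard_t(df=2.5, size=(N, d))   # heavy-tailed informative data, mean 0
X[:5] += 100.0                            # a few adversarial outliers

idx = rng.permutation(N)[: (N // K) * K]  # random split into K equal-size blocks
blocks = np.array_split(idx, K)
block_means = np.array([X[b].mean(axis=0) for b in blocks])

mom = np.median(block_means, axis=0)      # coordinate-wise median of block means
```

On such a corrupted sample, the coordinate-wise median of means typically stays much closer to the true mean $0$ than the plain empirical mean, which is dragged away by the outliers.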
Finding good descent directions in reasonable time in the heavy-tailed and corrupted scenario of Assumption~\ref{assum:first} is a main issue. A construction has been proposed in \cite{Bartlett19} which also uses an SDP relaxation, costing $\cO(N^4+N^2d)$ to compute. Our approach uses an SDP relaxation as well, based on a different SDP. It starts from the observation that $\mu$ is the solution of the minimization problem $\min_{\nu\in\bR^d}f(\nu)$, where $f:\nu\in\bR^d\to \frac{1}{2}\norm{\E X - \nu}_2^2$ and $X$ is any random vector with mean $\mu$. One way to approach $\mu$ is therefore to run a gradient descent algorithm using $f$ as an objective function: from $x_c\in\bR^d$ we move to the next iterate $x_c- \theta \nabla f(x_c)$, where $\theta\geq0$ is a step size. Since $\nabla f(x_c) = x_c-\bE X$, for $\theta=1$ the latter algorithm reaches the target mean $\mu$ in one step, which is not surprising given that $x_c-\E X$ is the best descent direction towards $\E X$ starting from $x_c$. We can also rewrite this as a matrix problem: the top eigenvector of
\begin{equation}\label{eq:minimizing_pb}
\argmax_{M\succeq 0, \Tr(M)=1}\inr{M, (\E X-x_c)(\E X-x_c)^\top}
\end{equation}is given (up to sign) by $\frac{x_c-\E X}{\norm{x_c-\E X}_2}$, which is the best descent direction we are looking for. \\
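The rank-one case is easy to check numerically: the top eigenvector of $(\E X-x_c)(\E X-x_c)^\top$ is, up to sign, the normalized direction $x_c-\E X$, and one gradient step of length $\|x_c-\E X\|_2$ lands on the mean. A small sketch, with an arbitrary choice of $\mu$ and $x_c$ for illustration:

```python
import numpy as np

mu = np.array([3.0, -1.0, 2.0])   # plays the role of E X (unknown in practice)
x_c = np.zeros(3)                 # current iterate

# The maximizer of <M, (E X - x_c)(E X - x_c)^T> over {M >= 0, Tr(M) = 1} is
# the rank-one projector onto E X - x_c; its top eigenvector is the direction.
M = np.outer(mu - x_c, mu - x_c)
v1 = np.linalg.eigh(M)[1][:, -1]  # eigenvector for the largest eigenvalue

direction = (x_c - mu) / np.linalg.norm(x_c - mu)  # best descent direction
```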
Of course, we do not know $(\E X-x_c)(\E X-x_c)^\top$ in \eqref{eq:minimizing_pb}, but we are given a database of $N$ data points $X_1, \ldots, X_N$ (among which $|\cI|$ have mean $\mu$). We use these data to estimate, in a robust way, the unknown quantity $(\E X-x_c)(\E X-x_c)^\top$ in \eqref{eq:minimizing_pb}. Ideally, we would like to identify the \textit{informative data} and then use $(1/|\cI|)\sum_{i\in\cI}(X_i-x_c)(X_i-x_c)^\top$, or its block-means version $(1/|\cK|)\sum_{k\in\cK}(\bar X_k-x_c)(\bar X_k-x_c)^\top$ where $\cK=\{k:B_k\cap \cO=\emptyset\}$, to estimate this quantity, but this information is not available either.
To address this problem, we use a tool introduced in \cite{MR3909640}, adapted to the block means. The idea is to endow each block mean $\bar{X}_k$ with a weight $\omega_k$ taken in $\Delta_{K}$, defined as
\begin{equation*}
\Delta_{K}=\left\{( \omega_k)_{k=1}^K: 0 \leq \omega_k \leq \frac{1}{9K/10}, \sum_{k=1}^K \omega_k =1 \right\}.
\end{equation*}Ideally, we would like to assign weight $0$ to every block mean $\bar{X}_k$ corrupted by outliers, but we cannot do so since $\cK$ is unknown. To overcome this issue, we learn the optimal weights by considering the following minmax optimization problem
\begin{equation}\label{formule1} \tag{$E_{x_c}$}
\underset{M \succeq 0, \Tr(M)=1}{\text{max}} \ \underset{w \in \Delta_{K}}{\text{min}} \ \inr{M, \sum_{k=1}^K \omega_k (\bar{X}_k-x_c)(\bar{X}_k-x_c)^\top}.
\end{equation}This is the dual problem from \cite{MR3909640} adapted to the block means. The key insight from \cite{MR3909640} is that an approximating solution $M_c$ of the maximization problem in \eqref{formule1} can be obtained in reasonable time using a covering SDP approach \cite{MR3909640,PTZ12} (see Section~\ref{sec:SDP}). We expect a solution (in $M$) of \eqref{formule1} to be close to a solution of the maximization problem in \eqref{eq:minimizing_pb} -- which is $M^*=(\mu-x_c)(\mu-x_c)^\top/\norm{\mu-x_c}_2^2$ -- and the same for their top eigenvectors (up to the sign).
At a high level, the robust descent algorithm outputs $\hat \mu_K$ after at most $\log d$ iterations of the form $x_c - \theta_c v_1$, where $v_1$ is a top eigenvector of an approximating solution $M_c$ of problem \eqref{formule1} and $\theta_c$ is a step size. It starts at the coordinate-wise median of the means $\bar X_1, \ldots, \bar X_K$. In Algorithm~\ref{algo:final}, we define precisely the step size and the stopping criterion (they require too much notation to be defined at this stage). The algorithm outputs the vector $\hat \mu_K$; its running time and statistical performance are gathered in the following result.
\begin{Theorem}\label{theo:main}
Grant Assumption~\ref{assum:first}. Let $K\in\{1,\ldots, N \}$ be the number of equal-size blocks and assume that $K\geq 300 |\cO|$. Let $u\in\bN^*$ be a parameter of the covering SDP used at each descent step. With probability at least $1-\exp(-K/180000)-(1/10)^u$, the descent algorithm finishes in time $\tilde \cO(Nd+K u d)$ and outputs $\hat \mu_K$ such that
\begin{equation*}
\norm{\hat \mu_K - \mu}_2\leq 808 \left(1200 \sqrt{\frac{\Tr(\Sigma)}{N}} + \sqrt{\frac{1200\norm{\Sigma}_{op}K}{N}}\right).
\end{equation*}
\end{Theorem}
To keep the proof of Theorem~\ref{theo:main} as simple as possible, we did not optimize the constants. Theorem~\ref{theo:main} generalizes and improves Theorem~\ref{theo:diakonikolas} in several ways. First, we improve the confidence from the constant $9/10$ to an exponentially large confidence $1-\exp(-c_0K)$. We obtain the result for any covariance structure $\Sigma$, and $\hat \mu_K$ does not require the knowledge of $\Sigma$ for its construction. We obtain a result which holds for any $N$ (even below the sample complexity). The construction of $\hat \mu_K$ does not require the knowledge of the exact proportion of outliers $\eps$ in the dataset, unlike $\hat\mu_\eps$ in Theorem~\ref{theo:diakonikolas}; we only need to know that $K\gtrsim |\cO|$. Moreover, using a Lepskii adaptation method, it is also possible to choose $K$ automatically and therefore to adapt to the proportion of outliers, provided we have some extra knowledge of $\Tr(\Sigma)$ and $\norm{\Sigma}_{op}$ (see Section~\ref{sec:adaptive_choice_of_} for more details). Finally, if we only care about a constant confidence such as $9/10$, our runtime does not depend on $\eps$ and is nearly linear, $\tilde \cO(Nd)$. We also refer the reader to Corollary~\ref{coro:lepski} for further comparison with Theorem~\ref{theo:diakonikolas}.
\begin{Remark}[Nearly-linear time]
We identify two important situations where the algorithm from Theorem~\ref{theo:main} runs in nearly-linear time, that is, in $\tilde\cO(Nd)$. First, when the number of outliers is known to be less than $\sqrt{N}$, we can choose $K\leq \sqrt{N}$ and $u=K$. In that case, the algorithm runs in $\tilde\cO(Nd)$ and the subgaussian rate is achieved with probability at least $1-2\exp(-c_0K)$ for some constant $c_0$ (see also Corollary~\ref{coro:lepski_2_subgaussian} for a version of this result that is adaptive in $K$). Another widely investigated situation is when we only want a constant confidence like $9/10$. In that case, one may choose $u=1$ and any value of $K\in[N]$ (so any number of outliers can be handled) to achieve the subgaussian rate with constant probability in nearly-linear time $\tilde\cO(Nd)$ (see also Corollary~\ref{coro:lepski} for a version of this result that is adaptive in $K$).
\end{Remark}
Theorem~\ref{theo:main} improves the results of \cite{hopkins2018sub,Bartlett19}, since $\hat \mu_K$ runs faster than the polynomial-time algorithms of \cite{hopkins2018sub} and \cite{Bartlett19}, whose runtimes are $\cO(N^{24} + Nd)$ and $\cO(N^4 + Nd)$ respectively. The algorithm $\hat \mu_K$ also does not require the knowledge of $\Tr(\Sigma)$ and $\norm{\Sigma}_{op}$. Finally, Theorem~\ref{theo:main} provides running-time guarantees on the algorithm, unlike \cite{lugosi2019sub,LMSL,CG}, and it improves upon the statistical performance of \cite{minsker2015geometric}.
\section{Proof of the statistical performance in Theorem~\ref{theo:main}}
\label{sec:proof_of_the_statistical_performance_in_theorem_theo:main}
In this section, we prove the statistical performance of $\hat \mu_K$ stated in Theorem~\ref{theo:main}. We first identify an event $\cE$ on which we derive a rate of convergence of the order of \eqref{eq:sub_gauss}. This event is also used to bound the running time of $\hat \mu_K$ in the next section, as announced in Theorem~\ref{theo:main}.
\begin{Proposition}\label{Prop:MatriceLemme}
Denote by $\cE$ the event on which, for every matrix $M \succeq 0$ such that $\Tr(M)=1$, there are at least $9K/10$ of the blocks for which $\norm{M^{1/2} (\bar{X}_k- \mu)}_2 \leq 8r$, where
\begin{equation}\label{eq:def_r}
r = 1200 \sqrt{\frac{\Tr(\Sigma)}{N}} + \sqrt{\frac{1200\norm{\Sigma}_{op}K}{N}}.
\end{equation} If Assumption~\ref{assum:first} holds and $K\geq 300|\cO|$, then $\bP[\cE]\geq 1-\exp(-K/180000)$.
\end{Proposition}
Proposition~\ref{Prop:MatriceLemme} contains all the stochastic arguments used in this paper (constants have not been optimized). In other words, once $\cE$ has been identified, none of the remaining arguments involve any other stochastic tools. Before proving Proposition~\ref{Prop:MatriceLemme}, let us first state a result that is of interest beyond our problem.
\begin{Corollary}\label{coro:dual_isometry}
On the event $\cE$, for all $M\in\bR^{d\times d}$ such that $M\succeq 0$ and $\Tr(M)=1$, there are at least $9K/10$ blocks such that for all $x_c\in\bR^d$,
\begin{equation}\label{eq:coro_dual_isometry}
\norm{M^{1/2}(\mu - x_c)}_2 - 8r\leq \norm{M^{1/2}(\bar X_k - x_c)}_2\leq \norm{M^{1/2}(\mu - x_c)}_2 + 8r.
\end{equation}
\end{Corollary}
Let us now turn to the proof of Proposition~\ref{Prop:MatriceLemme}. We first remark that if we were to consider only matrices $M$ of rank $1$, Proposition~\ref{Prop:MatriceLemme} would boil down to showing that for all $v\in\cS_2^{d-1}$ (the unit sphere of $\ell_2^d$), on more than $9K/10$ of the blocks, $|\inr{v,\bar X_k-\mu}|\leq 8r$. This is a ``classical'' result in the MOM literature, proved in \cite{lugosi2019sub} and \cite{LMSL}. We now recall this result, together with the short proof from \cite{LMSL}, for completeness; we will use it to prove Proposition~\ref{Prop:MatriceLemme}.
\begin{Lemma}\label{lemm:VecteurLemme}
Grant Assumption~\ref{assum:first} and assume that $K\geq 300|\cO|$. With probability at least $1- \exp(-K/180000)$, for all $v\in\cS_2^{d-1}$, there are at least $99K/100$ of the blocks $k$ such that $|\inr{v,\bar{X}_k-\mu}| \leq r$.
\end{Lemma}
{\bf Proof. {\hspace{0.2cm}}}
We want to show that with probability at least $1-\exp(-K/180000)$, for all $v\in \cS_2^{d-1}$,
\begin{equation*}
\sum_{k\in[K]} I(|\inr{\bar{X}_k - \mu, v}|> r)\leq K/100.
\end{equation*}We take $\cK=\{k\in[K]: B_k\cap \cO=\emptyset\}$. We define $\phi(t) = 0 $ if $t\leq1/2$, $\phi(t) = 2(t-1/2)$ if $1/2\leq t\leq 1$ and $\phi(t) = 1$ if $t\geq1$. We have $I(t\geq1)\leq \phi(t)\leq I(t\geq1/2)$ for all $t\in\bR$ and so
\begin{align*}
&\sum_{k\in \cK} I(|\inr{\bar{X}_k - \mu, v}|> r)\leq \sum_{k\in\cK} I(|\inr{\bar{X}_k - \mu, v}|> r) - \bP[|\inr{\bar{X}_k - \mu, v}|> r/2] + \bP[|\inr{\bar{X}_k - \mu, v}|> r/2]\\
&\leq \sum_{k\in\cK}\phi\left(\frac{|\inr{\bar{X}_k - \mu, v}|}{r}\right) - \bE \phi\left(\frac{|\inr{\bar{X}_k - \mu, v}|}{r}\right) + \bP[|\inr{\bar{X}_k - \mu, v}|> r/2]\\
& \leq \sup_{v\in \cS_2^{d-1}}\left(\sum_{k\in\cK}\phi\left(\frac{|\inr{\bar{X}_k - \mu, v}|}{r}\right) - \bE \phi\left(\frac{|\inr{\bar{X}_k - \mu, v}|}{r}\right) \right)+ \sum_{k\in\cK} \bP[|\inr{\bar{X}_k - \mu, v}|> r/2].
\end{align*}For all $k\in\cK$, we have
\begin{align*}
\bP[|\inr{\bar{X}_k - \mu, v}|> r/2]\leq \frac{\bE \inr{\bar{X}_k - \mu, v}^2}{(r/2)^2} \leq \frac{4Kv^\top \Sigma v}{Nr^2}\leq \frac{4K\sup_{v\in \cS_2^{d-1}}v^\top \Sigma v}{Nr^2} = \frac{4K\norm{\Sigma}_{op}}{Nr^2}\leq \frac{1}{300}
\end{align*}because $r^2\geq 1200K \norm{\Sigma}_{op}/N$. Next, using the bounded difference inequality (Theorem~6.2 in \cite{MR3185193}), the symmetrization argument and the contraction principle (Chapter~4 in \cite{MR2814399}), with probability at least $1-\exp(-K/180000)$,
\begin{align*}
&\sup_{v\in \cS_2^{d-1}}\left(\sum_{k\in\cK}\phi\left(\frac{|\inr{\bar{X}_k - \mu, v}|}{r}\right) - \bE \phi\left(\frac{|\inr{\bar{X}_k - \mu, v}|}{r}\right) \right)\\
& \leq \bE \sup_{v\in \cS_2^{d-1}}\left(\sum_{k\in\cK}\phi\left(\frac{|\inr{\bar{X}_k - \mu, v}|}{r}\right) - \bE \phi\left(\frac{|\inr{\bar{X}_k - \mu, v}|}{r}\right) \right)+ \sqrt{\frac{|\cK|K}{360000}}\\
&\leq \frac{4K}{Nr} \bE \sup_{v\in \cS_2^{d-1}} \inr{v, \sum_{i\in \cup_{k\in\cK}B_k}\eps_i (X_i-\mu)} + \sqrt{\frac{|\cK|K}{360000}}\\
& = \frac{4K}{\sqrt{N}r}\bE \norm{\frac{1}{\sqrt{N}}\sum_{i\in \cup_{k\in\cK}B_k}\eps_i (X_i-\mu)}_{2} + \sqrt{|\cK|K/360000}\leq \frac{K}{300}
\end{align*}because $r\geq 1200 \bE \norm{\sum_{i\in \cup_{k\in\cK}B_k}\eps_i (X_i-\mu)}_2/\sqrt{N}$ since
\begin{equation*}
\bE \norm{\frac{1}{\sqrt{N}}\sum_{i\in \cup_{k\in\cK}B_k}\eps_i (X_i-\mu)}_{2}\leq \sqrt{\bE \norm{\frac{1}{\sqrt{N}}\sum_{i\in \cup_{k\in\cK}B_k}\eps_i (X_i-\mu)}_2^2}=\sqrt{\frac{|\cup_{k\in\cK}B_k|}{N}}\sqrt{\Tr(\Sigma)}\leq \sqrt{\Tr(\Sigma)}.
\end{equation*}
As a consequence, when $K\geq 300 |\cO|$, with probability at least $1-\exp(-K/180000)$, for all $v\in \cS_2^{d-1}$,
\begin{equation*}
\sum_{k\in [K]} I(|\inr{\bar{X}_k - \mu, v}|> r)\leq |\cO| + \frac{|\cK|}{300} + \frac{K}{300}\leq \frac{K}{100}.
\end{equation*}
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
\textbf{Proof of Proposition~\ref{Prop:MatriceLemme}:} Let $M\in\bR^{d\times d}$ be such that $M\succeq 0$ and $\Tr(M)=1$. Set $\mathcal{A}_M=\{k \in [K]: \norm{M^{1/2}(\bar{X}_k-\mu)}_2 \geq 8 r\}$ and assume that $|\cA_M|\geq 0.1K$. Let $G$ be a Gaussian vector in $\bR^d$ with mean $0$ and covariance matrix $M$ (independent of $X_1, \ldots, X_N$). We consider the random variable $Z=\sum_{k\in[K]} I\left(|\inr{\bar{X}_k-\mu,G}| > 5r\right)$. We work conditionally on $X_1, \ldots, X_N$ in this paragraph. For all $k\in[K]$, $ \inr{\bar{X}_k-\mu,G} $ is a centered Gaussian variable with variance $\sigma_k^2:=\norm{M^{1/2}(\bar{X}_k-\mu)}_2^2$. In particular, for all $k\in\cA_M$, if we denote by $g$ a standard real-valued Gaussian variable, we have $\bP_G\left[|\inr{\bar{X}_k-\mu,G}| >5r\right]\geq \bP_G\left[|\inr{\bar{X}_k-\mu,G}| > 5\sigma_k/8\right] = 2 \bP[g>5/8]\geq 0.528$ (where $\bP_G$ (resp. $\bE_G$) denotes the probability (resp. expectation) w.r.t. $G$ conditionally on $X_1, \ldots, X_N$). Hence, $\E_G Z\geq 0.528|\cA_M|\geq 0.0528 K$. Since $|Z|\leq K$ a.s., it follows from the Paley-Zygmund inequality (see Proposition~3.3.1 in \cite{MR1666908}) that
\begin{equation*}
\bP_G[Z> 0.01K]\geq \frac{(\E_G Z-0.01K)^2}{\E_G Z^2}\geq (0.0428)^2 =0.0018.
\end{equation*} Moreover, it follows from the Borell-TIS inequality (see Theorem~7.1 in \cite{Led01} or pages 56-57 in \cite{MR2814399}) that with probability at least $1-\exp(-8)$, $\norm{G}_2\leq \E \norm{G}_2 + 4\sqrt{\norm{M}_{op}}$. Since $\E\norm{G}_2 \leq \sqrt{\Tr(M)}\leq 1$ and $\norm{M}_{op} \leq\Tr(M)\leq1$, we get $\norm{G}_2\leq 5$ with probability at least $1- \exp(-8) \geq 0.9996$. Since $0.9996+0.0018 > 1$, there exists a vector $G_M\in\bR^d$ such that $\norm{G_M}_2\leq 5$ and $\sum_{k\in[K]} I\left(|\inr{\bar{X}_k-\mu,G_M}| > 5 r\right) > 0.01 K$. We recall that this holds under the assumption that $|\cA_M|\geq 0.1K$.
Next, we denote by $\Omega_0$ the event on which, for all $v\in\cS_2^{d-1}$, there are at least $99K/100$ blocks such that $|\inr{\bar X_k-\mu,v}|\leq r$. We know from Lemma~\ref{lemm:VecteurLemme} that $\bP[\Omega_0]\geq 1-\exp(-K/180000)$. Let us place ourselves on the event $\Omega_0$ for the rest of the proof. Let $M\in\bR^{d\times d}$ be such that $M\succeq 0$ and $\Tr(M)=1$, and assume that $|\cA_M|\geq 0.1K$. It follows from the first paragraph of the proof that there exists $G_M\in\bR^d$ such that $\norm{G_M}_2\leq 5$ and $\sum_{k\in[K]} I\left(|\inr{\bar{X}_k-\mu,G_M}| > 5 r\right) >0.01 K$. Given that we work on the event $\Omega_0$, setting $v_M=G_M/\norm{G_M}_2$, for more than $99K/100$ blocks we have $|\inr{\bar{X}_k-\mu,v_M}| \leq r$, and so $|\inr{\bar{X}_k-\mu,G_M}| \leq \norm{G_M}_2r\leq 5r$, which contradicts the fact that $\sum_{k\in[K]} I\left(|\inr{\bar{X}_k-\mu,G_M}| > 5 r\right) >0.01 K$. Therefore, we necessarily have $|\cA_M|\leq 0.1K$, which concludes the proof.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
\textbf{Proof of Corollary~\ref{coro:dual_isometry}:}
Let us assume that the event $\cE$ holds for the rest of the proof. Let $M\in\bR^{d\times d}$ be such that $M\succeq0$ and $\Tr(M)=1$. Let $\cK_M = \{k\in[K]: \norm{M^{1/2}(\bar X_k - \mu)}_2\leq 8r\}$. On the event $\cE$, we have $|\cK_M|\geq 9K/10$. Let $x_c\in\bR^d$. For all $k\in\cK_M$, we have $\norm{M^{1/2}(\bar X_k-\mu)}_2\leq 8r$, and so, by the triangle inequality,
\begin{align*}
\norm{M^{1/2}(\bar X_k - x_c)}_2 &\in\left[\norm{M^{1/2}(\mu - x_c)}_2 - \norm{M^{1/2}(\bar X_k-\mu)}_2, \norm{M^{1/2}(\mu - x_c)}_2 + \norm{M^{1/2}(\bar X_k-\mu)}_2\right]\\
&\subset \left[\norm{M^{1/2}(\mu - x_c)}_2 - 8r, \norm{M^{1/2}(\mu - x_c)}_2 + 8r\right].
\end{align*}
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
Let us now turn to the study of the optimization problem \eqref{formule1} on the event $\cE$. As in \cite{MR3909640}, we denote by $OPT_{x_c}$ the optimal value of \eqref{formule1} and by $h_{x_c}: M \to \underset{\omega \in \Delta_{K}}{\text{min}} \ \braket{M, \sum_k \omega_k (\bar{X}_k-x_c)(\bar{X}_k-x_c)^\top}$ its objective function, to be maximized over the constraint set $\{M\in\bR^{d\times d}:M\succeq 0, \Tr(M)=1\}$.
\begin{Remark}\label{rem:objective_val}
For a given $M$, the optimal choice of $\omega\in\Delta_K$ in the definition of $h_{x_c}(M)$ is straightforward: one just has to put the maximum possible weight on the $9K/10$ smallest values among $\inr{M, (\bar{X}_k-x_c)(\bar{X}_k-x_c)^\top}, k\in[K]$. Formally, we set $\mathcal{S}_M= \sigma(\{1,2,\cdots, 9K/10 \})$, where $\sigma$ is a permutation of $[K]$ that arranges the $(\bar{X}_k- x_c)^\top M (\bar{X}_k- x_c), k\in[K]$, in ascending order:
\begin{align*}
(\bar{X}_{\sigma(1)}- x_c)^\top M (\bar{X}_{\sigma(1)}- x_c) \leq (\bar{X}_{\sigma(2)}- x_c)^\top & M (\bar{X}_{\sigma(2)}- x_c) \leq \cdots\leq (\bar{X}_{\sigma(K)}- x_c)^\top M (\bar{X}_{\sigma(K)}- x_c).
\end{align*}
Then we get $h_{x_c}(M)=(1/|\mathcal{S}_M|)\sum_{k \in \mathcal{S}_M} (\bar{X}_k- x_c)^\top M (\bar{X}_k- x_c)$.
\end{Remark}
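On the computational side, Remark~\ref{rem:objective_val} says that evaluating $h_{x_c}(M)$ only requires sorting $K$ quadratic forms and averaging the $9K/10$ smallest. A minimal numpy sketch (the function name is ours, and we assume $9K/10$ is an integer as in the paper):

```python
import numpy as np

def h_xc(M, block_means, x_c):
    """Objective h_{x_c}(M): average of the 9K/10 smallest quadratic
    forms (Xbar_k - x_c)^T M (Xbar_k - x_c) over the K blocks."""
    K = len(block_means)
    diffs = block_means - x_c                       # shape (K, d)
    quad = np.einsum('kd,de,ke->k', diffs, M, diffs)  # K quadratic forms
    m = (9 * K) // 10                               # |S_M|
    return np.sort(quad)[:m].mean()                 # mean of the m smallest
```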
The first lemma deals with the optimal value of \eqref{formule1} when the current point $x_c$ is far from $\mu$.
\begin{Lemma}\label{DistanceLemme}
On the event $\cal E$, for all $x_c\in\bR^d$, if $\norm{x_c-\mu}_2> 16r$ then $$(8/9)(\norm{x_c-\mu}_2-8r)^2\leq OPT_{x_c} \leq (\norm{x_c-\mu}_2+8r)^2.$$
\end{Lemma}
{\bf Proof. {\hspace{0.2cm}}}
Let $M$ be a matrix such that $M \succeq 0$ and $\Tr(M)=1$. Set $\cK_M = \{k\in[K]: \norm{M^{1/2}(\bar X_k - \mu)}_2\leq 8 r\}$. On the event $\cE$, we have $|\cK_M|\geq 9K/10$ and it follows from the proof of Corollary~\ref{coro:dual_isometry} that for all $k\in\cK_M$ and all $x_c\in\bR^d$,
\begin{equation}\label{eq:inter_lemma_distance_opt}
\norm{M^{1/2}(\mu - x_c)}_2 - 8r\leq \norm{M^{1/2}(\bar X_k - x_c)}_2\leq \norm{M^{1/2}(\mu - x_c)}_2 + 8r.
\end{equation} Then we define a weight vector $ \tilde \omega \in \Delta_{K}$ by setting for all $k\in[K]$
\begin{equation*}
\tilde \omega_k = \left\{
\begin{array}{ll}
1/|\cK_M|& \mbox{if } k \in \cK_M\\
0 & \mbox{else.}
\end{array}
\right.
\end{equation*}It follows from the definition of $h_{x_c}$ and \eqref{eq:inter_lemma_distance_opt} that
\begin{equation}\label{eq:inter_2_lemma_distance_opt}
h_{x_c}(M) \leq \sum_{k\in[K]} \tilde \omega_k (\bar{X}_k- x_c)^\top M (\bar{X}_k- x_c) = \frac{1}{|\cK_M|}\sum_{k\in\cK_M}\norm{M^{1/2}(\bar X_k-x_c)}_2^2 \leq \left(\norm{M^{1/2}(\mu - x_c)}_2 + 8r\right)^2.
\end{equation}Taking the maximum over all $M\in\bR^{d\times d}$ such that $M\succeq 0$ and $\Tr(M)=1$ on both sides of the latter inequality yields the right-hand side inequality of Lemma~\ref{DistanceLemme}.
For the left-hand side inequality of Lemma~\ref{DistanceLemme}, let $x_c\in\bR^d$ be such that $\norm{x_c-\mu}_2> 16r$, and let $M$ be such that $M\succeq 0$ and $\Tr(M)=1$. We use the notation and observation from Remark~\ref{rem:objective_val}: since $|\cK_M|\geq 9K/10$ and $|\cS_M|=9K/10$, we have $|\cK_M \cap \cS_M| \geq 8K/10$, so that it follows from Corollary~\ref{coro:dual_isometry} that
\begin{align*}
h_{x_c}(M)&=\frac{1}{9K/10}\sum_{k \in \mathcal{S}_M} \norm{M^{1/2} (\bar{X}_k- x_c)}_2^2 \geq \frac{1}{9K/10} \sum_{k \in \cK_M \cap \mathcal{S}_M} \norm{ M^{1/2} (\bar{X}_k- x_c)}_2^2\\
&\geq \frac{8K/10}{9K/10} \left( \norm{M^{1/2}(\mu- x_c)}_2- 8r\right)^2.
\end{align*} Then, taking the maximum over all $M\succeq0$ such that $\Tr(M)=1$ on both sides finishes the proof.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
The next result shows that the top eigenvector of an approximating solution to \eqref{formule1} is aligned with the best possible descent direction $(\mu-x_c)/\norm{\mu-x_c}_2$. It is taken from the proof of Lemma~3.3 in \cite{MR3909640}; we reproduce a short proof for completeness.
\begin{Proposition}\label{Prop:direction}
On the event $\cE$, if $M$ is a matrix such that $M \succeq 0$, $\Tr(M)=1$ and $h_{x_c}(M) \geq (\beta \norm{x_c-\mu}_2 +8r)^2$ for some $1/\sqrt{2}\leq \beta\leq1$, then any top eigenvector $v_1$ of $M$ satisfies
$$ \left|\inr{v_1,\frac{x_c-\mu}{\norm{x_c-\mu}_2}} \right| >\sqrt{2\beta^2-1}.$$
\end{Proposition}
{\bf Proof. {\hspace{0.2cm}}}
Let $M$ be a matrix such that $M \succeq 0$, $\Tr(M)=1$ and $h_{x_c}(M) \geq (\beta \norm{x_c-\mu}_2 +8r)^2$ for some $1/\sqrt{2}\leq \beta\leq1$. We know from the proof of Lemma~\ref{DistanceLemme} (see Equation~\eqref{eq:inter_2_lemma_distance_opt}) that
$ h_{x_c}(M) \leq \left(\norm{M^{1/2} (\mu- x_c)}_2+8r \right)^2$. This implies that $\norm{M^{1/2}(\mu- x_c)}_2^2 \geq \beta^2 \norm{\mu-x_c}_2^2$.
Let $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_d\geq 0$ denote the eigenvalues of $M$ and let $v_1,\ldots,v_d$ denote corresponding eigenvectors. The conditions on $M$ imply that $\sum_{j} \lambda_j = 1$ and that $\cB_M=(v_1, \ldots, v_d)$ is an orthonormal basis of $\bR^d$. We set $v=(\mu-x_c) / \norm{\mu-x_c}_2$ and decompose $v$ in $\cB_M$ as $v= \sum_j \alpha_j v_j$ with $\sum_j \alpha_j^2 = 1$. Using this decomposition, we have $ v^\top M v = \sum_j \lambda_j \alpha_j^2$. First, $\lambda_1 = \lambda_1 \sum_j \alpha_j^2 \geq \sum_j \lambda_j \alpha_j^2 \geq \beta^2$, so $\lambda_1 \geq \beta^2$. Moreover, since $\sum_{j} \lambda_j = 1$, we have $\beta^2 \sum_j \alpha_j^2 \leq \sum_j \lambda_j \alpha_j^2 \leq \lambda_1 \alpha_1^2 +(1-\lambda_1)(1-\alpha_1^2) \leq \alpha_1^2+ (1-\beta^2)\sum_j \alpha_j^2$, so $\alpha_1^2 \geq 2\beta^2-1$. Since $\alpha_1=\inr{v_1, v}$, we get the result.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
Proposition~\ref{Prop:direction} is the first tool we need to construct a descent algorithm, since it provides a descent/ascent direction (depending on the sign of the top eigenvector of an approximate solution to \eqref{formule1}). It remains to specify three other quantities to fully characterize our algorithm: a starting point, a step size and a stopping criterion. We begin with the starting point: here we simply use the coordinate-wise median-of-means. The following statistical guarantee on the coordinate-wise median-of-means is known or folklore, but we want to emphasize that in our case it holds on the event $\cE$. This again shows that $\cE$ is the only event we need to analyze all the building blocks of our algorithm. We recall that the coordinate-wise median-of-means is the estimator $\hat\mu^{(0)}\in\bR^d$ whose coordinates are, for all $j\in[d]$, $\hat \mu^{(0)}_j = {\rm med}(\bar X_{k,j}:k\in[K])$, where $\bar X_{k,j}$ is the $j$-th coordinate of the block mean $\bar X_k$ for all $k\in[K]$.
\begin{Proposition}\label{prop:coordinate_wise_MOM}
On the event $\cE$, we have $\norm{\hat \mu^{(0)} - \mu}_2\leq 8\sqrt{d} r$.
\end{Proposition}
{\bf Proof. {\hspace{0.2cm}}}
Let us place ourselves on the event $\cE$ for the whole proof. For every direction $v\in\cS_2^{d-1}$, there are at least $9K/10$ blocks $k$ such that $|\inr{\bar X_k-\mu, v}|\leq 8 r$. In particular, for all $j\in[d]$, we have $|\inr{\bar X_k - \mu, e_j}|\leq 8r$ on at least $9K/10$ blocks, where $(e_1, \ldots, e_d)$ is the canonical basis of $\bR^d$; that is, for at least $9K/10$ blocks, $|\bar X_{k,j}-\mu_j|\leq 8r$. Since $9K/10 > K/2$, more than half of the values $\{\bar X_{k,j}:k\in[K]\}$ lie in $[\mu_j-8r, \mu_j+8r]$, so their median $\hat \mu^{(0)}_j$ also lies in this interval. We therefore have $\norm{\hat \mu^{(0)} - \mu}_\infty\leq 8r$, and so $\norm{\hat \mu^{(0)} - \mu}_2\leq 8r\sqrt{d}$.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
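For concreteness, the starting point $\hat\mu^{(0)}$ can be computed as follows (a minimal numpy sketch with a function name of our choosing; we assume equal-size blocks as in the paper):

```python
import numpy as np

def coordinatewise_mom(X, K):
    """Coordinate-wise median-of-means: split the N samples into K
    equal-size blocks, average each block, and take the coordinate-wise
    median of the K block means."""
    N, d = X.shape
    # equipartition (dropping the remainder if K does not divide N)
    block_means = X[: (N // K) * K].reshape(K, N // K, d).mean(axis=1)
    return np.median(block_means, axis=0)
```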
Proposition~\ref{prop:coordinate_wise_MOM} guarantees that, starting from the coordinate-wise median-of-means, we are off by a factor proportional to $\sqrt{d}$ from the optimal rate $r$. This will play a key role in the analysis of the number of steps needed to reach $\mu$ within the optimal rate $r$: if we prove a geometric decay of the distance to $\mu$ along the descent steps, then only $\log d$ steps (up to a multiplicative constant) are enough to reach $\mu$ within a distance of the order of $r$. \\
Let us now specify the step size used at each iteration. At the current point $x_c$, we compute a top eigenvector $v_1$ of an approximating solution $M$ to \eqref{formule1} (i.e., $M$ such that $h_{x_c}(M)\geq (\beta\norm{x_c-\mu}_2 + 8r)^2$ for some $1/\sqrt{2}\leq \beta\leq 1$). The next iterate is $x_{c+1} = x_c - \theta_c v_1$, where the step size is
\begin{equation}\label{eq:step_size_def}
\theta_c = - {\rm Med}\left(\inr{\bar X_k - x_c, v_1}:k\in[K]\right).
\end{equation} In particular, since $\theta_c v_1$ does not depend on the sign of $v_1$ (the product $\theta_c v_1$ is unchanged if we replace $v_1$ by $-v_1$), it does not matter which top eigenvector of $M$ we choose.
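In code, the step size \eqref{eq:step_size_def} and the update are one line each (a numpy sketch; the function name is ours):

```python
import numpy as np

def descent_step(block_means, x_c, v1):
    """One update x_{c+1} = x_c - theta_c * v1 with the median step size
    theta_c = -Med(<Xbar_k - x_c, v1> : k in [K])."""
    theta = -np.median((block_means - x_c) @ v1)
    return x_c - theta * v1
```

As noted in the text, the update is invariant under $v_1 \to -v_1$: `descent_step(bm, x, -v1)` returns the same point as `descent_step(bm, x, v1)`.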
Let us now prove that the distance to $\mu$ decays geometrically along the iterations while $x_c$ is far from $\mu$. Again, this result is proved on the event $\cE$.
\begin{Proposition}\label{prop:geometric_decay}
On the event $\cE$, the following holds. Let $x_c\in\bR^d$ be the current point of the algorithm. Assume that $M$ is an approximating solution of \eqref{formule1}, that is, $M$ is such that $h_{x_c}(M)\geq (\beta\norm{x_c-\mu}_2 + 8r)^2$ for some $0.78\leq \beta\leq 1$, and let $v_1$ be one of its top eigenvectors. Then, we have
\begin{equation*}
\norm{x_{c+1} - \mu}_2^2\leq 0.8 \norm{x_c - \mu}_2^2 + 64r^2
\end{equation*} when $x_{c+1} = x_c - \theta_c v_1$ for $\theta_c$ defined in \eqref{eq:step_size_def}.
\end{Proposition}
{\bf Proof. {\hspace{0.2cm}}} Let us assume that the event $\cE$ holds up to the end of the proof. Let $M$ be an approximating solution to \eqref{formule1} such that $h_{x_c}(M)\geq (\beta\norm{x_c-\mu}_2 + 8r)^2$ for some $0.78\leq \beta\leq 1$ and let $v_1$ be a top eigenvector of $M$.
In direction $v_1$, there are at least $9K/10$ blocks such that $|\inr{\bar X_k - \mu, v_1}|\leq 8r$; since this is more than half of the blocks, we get
\begin{equation}\label{eq:prop_theta_c}
|\theta_c - \inr{x_c-\mu,v_1}| = |{\rm Med}\left(\inr{\mu-\bar X_k,v_1}:k\in[K]\right)|\leq {\rm Med}\left(|\inr{\mu-\bar X_k,v_1}|:k\in[K]\right)\leq 8r.
\end{equation}
Let $v= (\mu-x_c)/\norm{\mu-x_c}_2$ denote the optimal normalized descent direction. We write $v = \lambda_1 v_1 + \lambda_2 v_1^\perp$, where $v_1^\perp$ is a unit vector orthogonal to $v_1$. We have $\lambda_1^2+\lambda_2^2=1$, and it follows from Proposition~\ref{Prop:direction} that $|\lambda_1| = |\inr{v_1, v}|>\sqrt{2\beta^2-1}$. We conclude that
\begin{align*}
\norm{x_{c+1} - \mu}_2^2 & = \norm{x_c-\mu - \theta_c v_1}_2^2 = \norm{(\inr{x_c-\mu, v_1} -\theta_c)v_1 + \inr{x_c-\mu, v_1^\perp} v_1^\perp}_2^2\\
&= (\inr{x_c-\mu, v_1} -\theta_c)^2 +\inr{x_c-\mu, v_1^\perp}^2 \leq (8 r)^2 + \lambda_2^2 \norm{x_c-\mu}_2^2.
\end{align*} Since $\lambda_2^2 = 1- \lambda_1^2<2-2\beta^2 < 0.8$, we get the result.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
We now have almost all the building blocks needed to fully characterize the algorithm. The last step is to find a stopping rule. The idea we use to design such a rule is based on Proposition~\ref{prop:geometric_decay}: when the current point $x_c$ is not in an $\ell_2^d$-neighborhood of $\mu$ of radius of the order of $r$, the $\ell_2^d$-distance between the next iterate $x_{c+1}$ and $\mu$ is less than $\sqrt{0.81}$ times the $\ell_2^d$-distance between $x_c$ and $\mu$. We therefore have a geometric decay of the distance to $\mu$ along the iterations until we reach an $\ell_2^d$-neighborhood of $\mu$ of radius proportional to $r$. Starting from the coordinate-wise median-of-means, which lies in an $8\sqrt{d}r$-neighborhood of $\mu$ (see Proposition~\ref{prop:coordinate_wise_MOM}), we only have to perform $\log(8\sqrt{d})/\log(1/\sqrt{0.81})$ iterations to output a current point which is $r$-close to $\mu$ w.r.t. the $\ell_2^d$-norm.
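Quantitatively, unrolling the recursion of Proposition~\ref{prop:geometric_decay} from the starting bound of Proposition~\ref{prop:coordinate_wise_MOM} gives, after $T$ iterations,
\begin{equation*}
\norm{x_T-\mu}_2^2 \leq 0.8^T \norm{x_0-\mu}_2^2 + 64r^2\sum_{t=0}^{T-1} 0.8^t \leq 0.8^T \cdot 64 d\, r^2 + 320 r^2,
\end{equation*}
so that $\norm{x_T-\mu}_2 = \cO(r)$ as soon as $0.8^T d\leq 1$, that is, after $T$ of the order of $\log d$ iterations.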
We are now in a position to write an ``almost final'' pseudo-code of our algorithm. In the next section, we will dive a bit deeper into this pseudo-code (in particular, into the covering SDP algorithm used to construct an approximating solution to \eqref{formule1}) in order to provide a final pseudo-code together with its total running time.
\vspace{0.7cm}
\begin{algorithm}[H]\label{algo:almost_final}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}\SetKw{Or}{or}
\SetKw{Return}{Return}
\Input{$X_1, \ldots, X_N$ and a number $K$ of blocks}
\Output{A robust subgaussian estimator of $\mu$}
\BlankLine
Construct an equipartition $B_1\sqcup \cdots \sqcup B_K=\{1,\cdots,N\}$\\
Construct the $K$ empirical means $\bar{X}_k=(K/N)\sum_{i\in B_k}X_i, k\in[K]$\\
Compute $\hat\mu^{(0)}$ the coordinate-wise median-of-means and put $x_c \leftarrow \hat \mu^{(0)}$\\
\For{$T=1, 2, \cdots, \log(8\sqrt{d})/\log(1/\sqrt{0.81})$}{
Compute $M_c$ an approximating solution to \eqref{formule1} such that
\begin{equation*}
h_{x_c}(M_c)\geq \left(0.78\norm{x_{c} - \mu}_2 + 8r\right)^2
\end{equation*}\\
Compute $v_1$ a top eigenvector of $M_{c}$\\
Compute a step size $\theta_{c} = -\Med\left(\inr{\bar X_k-x_{c},v_1}:k\in[K]\right)$\\
Update $x_c\leftarrow x_{c}-\theta_{c} v_1$\\}
\Return $x_{c}$
\caption{``Almost final'' pseudo-code of the robust sub-gaussian estimator of $\mu$}
\end{algorithm}
\vspace{0.7cm}
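For intuition only, the loop of Algorithm~\ref{algo:almost_final} can be simulated in a few lines. Step \textbf{5} (the covering-SDP approximating solution $M_c$) is replaced below by a heuristic stand-in, the top eigenvector of the second-moment matrix of the $9K/10$ block means closest to the current point, so this sketch illustrates the descent mechanics but does not carry the guarantees of Theorem~\ref{theo:main}:

```python
import numpy as np

def robust_descent(X, K, n_iter=30):
    """Illustrative descent: coordinate-wise MOM start, then iterate
    x <- x - theta * v1.  Heuristic stand-in for the SDP step: v1 is the
    top eigenvector of the second-moment matrix of the 9K/10 closest
    block means (not the certified covering-SDP construction)."""
    N, d = X.shape
    block_means = X[: (N // K) * K].reshape(K, N // K, d).mean(axis=1)
    x = np.median(block_means, axis=0)      # coordinate-wise median-of-means
    m = (9 * K) // 10
    for _ in range(n_iter):
        diffs = block_means - x
        keep = np.argsort(np.einsum('kd,kd->k', diffs, diffs))[:m]
        second_moment = diffs[keep].T @ diffs[keep] / m
        v1 = np.linalg.eigh(second_moment)[1][:, -1]   # top eigenvector
        theta = -np.median(diffs @ v1)                 # median step size
        x = x - theta * v1
    return x
```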
Algorithm~\ref{algo:almost_final} is ``almost'' our final algorithm. There is one last step to check carefully: given a current point $x_c$, we need a way to construct $M_c$ satisfying ``$h_{x_c}(M_c)\geq \left(0.78\norm{x_{c} - \mu}_2 + 8r\right)^2$'' without knowing $r$ or $\mu$. This is the last issue we need to address in order to explain how step \textbf{5} of Algorithm~\ref{algo:almost_final} can be realized in a fully data-dependent way and in reasonable time. It is answered in the next section, together with the computation of the running time.
\section{Solving (approximatively) the SDP \eqref{formule1}}
\label{sec:SDP}
The aim of this section is to show that, on the event $\cE$, it is possible to construct in reasonable time a matrix $M_c$ such that ``$h_{x_c}(M_c)\geq \left(0.78\norm{x_{c} - \mu}_2 + 8r\right)^2$'' without any information beyond the data. To that end, we construct in an efficient way an approximating solution to the optimization problem \eqref{formule1} using a covering SDP, as in \cite{MR3909640}. The main result of this section is the following.
\begin{Theorem}\label{theo:approx_sol_sdp}Let $u\in\bN^*$.
On the event $\cE$, for every $x_c\in\bR^d$ such that $\norm{x_c-\mu}_2\geq 800 r$, we can compute, in time $\tilde\cO(K u d)$ and with probability $> 1 -(1/10)^{u+5}/\sqrt{d}$, either:
\begin{itemize}
\item A matrix $M_c$ such that
\begin{equation*}
h_{x_c}(M_c)\geq \left(0.78\norm{x_{c} - \mu}_2 + 8r\right)^2
\end{equation*}
\item Or directly a subgaussian estimate of $\mu$, using only the block means $\bar X_1, \ldots, \bar X_K$ as inputs.
\end{itemize}
\end{Theorem}
Theorem~\ref{theo:approx_sol_sdp} answers the last issue raised at the end of Section~\ref{sec:proof_of_the_statistical_performance_in_theorem_theo:main} and provides the running time of step \textbf{5} of Algorithm~\ref{algo:almost_final}. It therefore completes the proof that there exists a fully data-driven robust subgaussian algorithm for the estimation of a mean vector under Assumption~\ref{assum:first} alone (the total running time of Algorithm~\ref{algo:almost_final} is studied in Section~\ref{sec:final}).
\begin{Remark}
Theorem~\ref{theo:approx_sol_sdp} states that we find either an approximating solution $M_c$ to \eqref{formule1} or a good estimate of $\mu$ (at the current point $x_c$). As we will see in this section, the second case is degenerate: it is not the typical situation.
\end{Remark}
We now turn to the proof of Theorem~\ref{theo:approx_sol_sdp}. It is decomposed into several lemmas adapted from techniques developed in \cite{MR3909640} to approximately solve the positive semi-definite problem \eqref{formule1} in polynomial time. To that end, we first introduce the following covering SDP
\begin{equation} \label{formule2} \tag{$C_\rho$}
\begin{aligned}
& \text{minimize}
& & \Tr(M^\prime)+ \norm{y^\prime}_1 \\
& \text{subject to}
& & M^\prime \succeq 0 , \ y^\prime \geq 0 , \\
& & & \forall k\in [K], \ \rho(\bar{X}_k- x_c)^\top M^\prime (\bar{X}_k- x_c) + (9K/10) \, y_k^\prime \geq 1
\end{aligned}
\end{equation}where $\rho>0$ is a parameter that we will show how to fine-tune later. Then, we show that, for a good choice of $\rho$, we can turn a good approximating solution to \eqref{formule2} into a good approximating solution to \eqref{formule1}.
We denote by $g(\rho)$ the optimal objective value of \eqref{formule2}. We begin with a first lemma that links the two optimization problems \eqref{formule1} and \eqref{formule2}. The proof can be found in Lemma 4.2 of \cite{MR3909640}; we adapt it here for our purpose.
\begin{Lemma}\label{SDP1}
Let $\rho>0$. From a feasible solution $(M^\prime, y^\prime)$ for \eqref{formule2} that achieves $\Tr(M^\prime)+ \norm{y^\prime}_1 \leq 1$, we can construct a feasible solution for \eqref{formule1} with objective value $\geq 1/\rho$ (and conversely).
\end{Lemma}
{\bf Proof. {\hspace{0.2cm}}}
We first note that the optimization problem \eqref{formule1} is equivalent to the following one:
\begin{equation} \label{formule3} \tag{$\tilde E_{x_c}$}
\begin{aligned}
& \text{maximize}
& & z-\frac{\norm{y}_1}{9K/10} \\
& \text{subject to}
& & M \succeq 0 , \ \Tr(M)=1 , \ y \geq 0 , \ z\geq0 \\
& & & \forall k \in[K], \ (\bar{X}_k- x_c)^\top M (\bar{X}_k- x_c) + \ y_k \geq z
\end{aligned}
\end{equation}
Indeed, for a given $M\succeq0$ such that $\Tr(M)=1$, one can notice that the optimal value is achieved in \eqref{formule3} for $y_k=\max(0,z-(\bar{X}_k- x_c)^\top M (\bar{X}_k- x_c)), k\in[K]$ and $z=\mathcal{Q}_{9/10}\left( (\bar{X}_k- x_c)^\top M (\bar{X}_k- x_c) \right)$ the $9/10$-th quantile of $\{(\bar{X}_k- x_c)^\top M (\bar{X}_k- x_c):k\in[K]\}$, so that $z-\norm{y}_1/(9K/10)=h_{x_c}(M)$ which gives the equivalence between \eqref{formule1} and \eqref{formule3}.
Then, once a feasible solution $(M^\prime, y^\prime)$ for \eqref{formule2} that achieves $\Tr(M^\prime)+ \norm{y^\prime}_1 \leq 1$ is obtained, by taking $M=M^\prime/\Tr(M^\prime)$, $z=1/(\rho \Tr(M^\prime))$ and $y=(9K/10)/(\rho \Tr(M^\prime)) y^\prime$, we get the desired result (and the converse follows from inverting those relations).
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
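The change of variables in the proof of Lemma~\ref{SDP1} is mechanical and can be sketched as follows (numpy sketch, names ours); the returned triplet satisfies $z-\norm{y}_1/(9K/10)\geq 1/\rho$ whenever $\Tr(M^\prime)+\norm{y^\prime}_1\leq 1$:

```python
import numpy as np

def covering_to_Exc(M_prime, y_prime, rho, K):
    """Turn a feasible pair (M', y') of (C_rho) with Tr(M')+||y'||_1 <= 1
    into a feasible triplet (M, z, y) of the reformulation (tilde E_{x_c}),
    following the proof of Lemma (SDP1): M = M'/Tr(M'), z = 1/(rho Tr(M')),
    y = (9K/10)/(rho Tr(M')) * y'."""
    t = np.trace(M_prime)
    M = M_prime / t
    z = 1.0 / (rho * t)
    y = (9 * K / 10) / (rho * t) * y_prime
    return M, z, y
```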
From Lemma~\ref{SDP1}, it is enough to solve \eqref{formule2} -- for a good choice of $\rho$ -- to find a good approximating solution to \eqref{formule1}. It therefore remains to find such a good $\rho$. To do so, we rely on the next two lemmas, the first of which is adapted from Lemma 4.3 in \cite{MR3909640}.
\begin{Lemma}\label{SDP2}
For every $\rho>0$ and every $\alpha \in (0,1)$, $g( (1-\alpha)\rho) \geq g(\rho) \geq (1-\alpha) g( (1-\alpha)\rho)$.
\end{Lemma}
{\bf Proof. {\hspace{0.2cm}}} A feasible pair $(M^\prime, y^\prime)$ for $(C_{(1-\alpha)\rho})$ is feasible for $(C_{\rho})$, which gives the first inequality. If $(M^\prime, y^\prime)$ is a feasible pair for $(C_{\rho})$, then $(M^\prime/(1-\alpha), y^\prime/(1-\alpha))$ is a feasible pair for $(C_{(1-\alpha)\rho})$, which gives the second inequality.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
It follows from Lemma~\ref{SDP2} that $g$ is continuous and non-increasing, and (from Lemma~\ref{SDP1}, using both sides of the implication, we have that $g(\rho)\leq1$ iff $1/\rho\geq OPT_{x_c}$) that $g(1/OPT_{x_c})=1$. So in order to find a good solution, we must find a $\rho$ such that $g(\rho)$ is as close to $1$ as possible. Unfortunately, we do not know how to solve \eqref{formule2} exactly for a given $\rho>0$, but we can compute efficiently a good approximation $(M^\prime, y^\prime)$ and a top eigenvector of $M^\prime$ thanks to the following result, which can be found in \cite{PTZ12} and is detailed in \cite{MR3909640} (see Section~4 and Remark~3.4).
\begin{Lemma}[\cite{PTZ12}]\label{SDP4} Let $u\geq1$ be an integer. For every $\rho > 0$ and every fixed $\eta > 0$, we can find, with probability $>1-(1/10)^{u+10}/d$, a feasible solution to \eqref{formule2} that is $\eta$-close to the optimum, that is to say a feasible pair $(M^\prime, y^\prime)$ such that $\Tr(M^\prime)+\norm{y^\prime}_1 \leq (1+\eta)g(\rho)$, in time $ \tilde{\mathcal{O}}(uKd)$. Moreover, it is possible to find a top eigenvector of $M^\prime$ in time $\tilde\cO(Kd)$.
\end{Lemma}
We run $(u+ 3 \log(d)+10)$ independent repetitions of the (randomized) algorithm from \cite{PTZ12}, which has a runtime of $ \tilde{\mathcal{O}}(Kd)$ and outputs an $\eta$-close feasible solution with probability $9/10$. By keeping the output with the smallest objective value, we obtain an $\eta$-close feasible solution with probability $1-(1/10)^{u+3 \log(d)+10}$, in time $ \tilde{\mathcal{O}}(uKd)$, proving Lemma~\ref{SDP4}. Let us call $\texttt{ALG}_\rho$ the algorithm from Lemma~\ref{SDP4}, which takes as input $((\bar{X}_k)_{k=1}^K, x_c, \rho, \eta, u)$ and returns a feasible pair $(M^\prime, y^\prime)$ for \eqref{formule2} satisfying $\Tr(M^\prime)+\norm{y^\prime}_1 \leq (1+\eta)g(\rho)$ in time $\tilde{\mathcal{O}}(uKd)$, with probability $>1-(1/10)^{u+10}/d$. Next, in order to find a good $\rho$, we need some additional information on the function $g$; we obtain it on the event $\cE$.
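The confidence-amplification trick used above is generic: repeat a randomized minimizer independently and keep the best output. A sketch (the solver interface is ours, not from \cite{PTZ12}; since the covering SDP is a minimization, ``best'' means smallest objective):

```python
def amplify(randomized_solver, n_runs):
    """Run a randomized minimizer n_runs times independently and keep the
    output with the smallest objective value.  If a single run is
    eta-close to the optimum with probability 9/10, the best of n_runs
    independent runs is eta-close with probability 1 - (1/10)^n_runs."""
    best_sol, best_val = None, float('inf')
    for _ in range(n_runs):
        sol, val = randomized_solver()   # returns (solution, objective)
        if val < best_val:
            best_sol, best_val = sol, val
    return best_sol, best_val
```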
\begin{Lemma}\label{SDP3}
On the event $\cal E$, for all $x_c\in\bR^d$, if $\norm{x_c-\mu}_2> 8r$ then
\begin{equation*}
g(\rho) \leq \frac{1}{\rho \ OPT_{x_c}} \left(1 +\rho OPT_{x_c} \left(\frac{9(\norm{x_c-\mu}_2+8r)^2}{8(\norm{x_c-\mu}_2-8r)^2}-1\right)\right).
\end{equation*}
\end{Lemma}
{\bf Proof. {\hspace{0.2cm}}} We use the same notation as in the proof of Lemma~\ref{SDP1}. For any $\nu > 0$, we can choose a triplet $(z, y, M)$ feasible for \eqref{formule3} such that $z-\norm{y}_1/(9K/10) > OPT_{x_c}-\nu$. On the event $\cal E$, Lemma~\ref{DistanceLemme} yields $OPT_{x_c} > (8/9)(\norm{x_c-\mu}_2-8r)^2$ and we have from Corollary~\ref{coro:dual_isometry} that
\begin{equation*}
z=\mathcal{Q}_{9/10}\left( (\bar{X}_k- x_c)^\top M (\bar{X}_k- x_c) \right) =\mathcal{Q}_{9/10}\left(\norm{M^{1/2}(\bar{X}_k- x_c)}_2^2\right) \leq \left(\norm{M^{1/2}(x_c-\mu)}_2+8r\right)^2\leq (\norm{x_c-\mu}_2+8r)^2
\end{equation*}because $M\succeq0$ and $\Tr(M)=1$. Let $M^\prime= M/(\rho z), y^\prime= y/[z(9K/10)]$. We have
\begin{align*}
g(\rho) &\leq \Tr(M^\prime) + \norm{y^\prime}_1 \leq \frac{1+ \rho \norm{y}_1/(9K/10)}{\rho z}\\ &< \frac{1+\rho(z-OPT_{x_c} +\nu)}{\rho z }
\leq \frac{1+\rho \nu +\rho OPT_{x_c}\left(\frac{ 9 (\norm{x_c-\mu}_2+8r)^2}{ 8(\norm{x_c-\mu}_2-8r)^2}-1\right) }{\rho (OPT_{x_c}-\nu)}.
\end{align*}
By taking $\nu \rightarrow 0$, we get the result.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
\textbf{Proof of Theorem~\ref{theo:approx_sol_sdp}.} Let us place ourselves on the event $\cE$ so that we can apply Lemma~\ref{SDP3}. Let $x_c\in\bR^d$ and assume that $\norm{x_c-\mu}_2 > 800 r$. It follows from Lemma~\ref{SDP3} that $g(\rho) \leq 1/(\rho \ OPT_{x_c}) + 0.171$. Therefore, if we can find a $\rho$ such that $g(\rho)\geq 1-\epsilon + 0.171$ for some $0<\eps<1$, then necessarily $1/\rho \geq OPT_{x_c} (1-\epsilon)$. Let us take $\epsilon= 0.173$ and $\eta =0.0001$. Then, if $\texttt{ALG}_\rho$ returns a feasible pair $(M^\prime, y^\prime)$ for \eqref{formule2} such that $ 0.9981\leq \Tr(M^\prime)+\norm{y^\prime}_1 \leq 1$, then, since $0.9981> 1.0001\times 0.998 =(1+\eta)(1-\eps+0.171)$, we will know that, with probability $>1-(1/10)^{u+10}/d$,
\begin{equation*}
(1+\eta)g(\rho)\geq \Tr(M^\prime)+\norm{y^\prime}_1\geq (1+\eta)(1-\eps+0.171)
\end{equation*}hence $1/\rho \geq OPT_{x_c}(1-\epsilon)$, and by Lemma \ref{SDP1}, we can construct a feasible solution $M_c$ for \eqref{formule1} with objective value satisfying $h_{x_c}(M_c)\geq OPT_{x_c}(1-\epsilon)$. Next, using Lemma~\ref{DistanceLemme}, we obtain that when $\norm{x_{c} - \mu}_2\geq 800r$
\begin{equation*}
h_{x_c}(M_c)\geq OPT_{x_c}(1-\epsilon)\geq(1-\epsilon)(8/9)\left(\norm{x_{c} - \mu}_2 - 8r\right)^2\geq \left(0.78\norm{x_{c} - \mu}_2 + 8r\right)^2
\end{equation*}for $\eps= 0.173$, solving step~\textbf{5} from Algorithm~\ref{algo:almost_final}.
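The numerical constants used in this step can be verified directly (a Python sanity check, not part of the proof; $a$ stands for the ratio $\norm{x_c-\mu}_2/r$):

```python
eps, eta = 0.173, 0.0001

# Threshold used above: 0.9981 exceeds (1 + eta) * (1 - eps + 0.171).
assert 0.9981 > (1 + eta) * (1 - eps + 0.171)

# Rate bound: (1 - eps) * (8/9) * (a - 8)^2 >= (0.78 * a + 8)^2
# whenever a = ||x_c - mu||_2 / r >= 800.
for a in [800, 1000, 10**4, 10**6]:
    assert (1 - eps) * (8 / 9) * (a - 8) ** 2 >= (0.78 * a + 8) ** 2
```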
Therefore, it only remains to show how to find a $\rho$ such that $\texttt{ALG}_\rho$ returns a pair $(M^\prime, y^\prime)$ (feasible for \eqref{formule2}) satisfying $ 0.9981\leq \Tr(M^\prime)+\norm{y^\prime}_1 \leq 1$. We first do so assuming that we have access to an initial $\rho_0$ such that $\texttt{ALG}_{\rho_0}$ returns a feasible pair $(M^\prime, y^\prime)$ for \eqref{formule2} (for $\rho=\rho_0$) with $\Tr(M^\prime) + \norm{y^\prime}_1\leq 1$, and to a maximal number $T$ of iterations (we will see later how to choose such a $\rho_0$ and $T$). The following algorithm (a binary search), taking as input $(\bar{X}_1, \ldots, \bar{X}_K, x_c, \rho_0,u, T)$, returns a feasible pair $(M^\prime, y^\prime)$ for \eqref{formule2} with $ 0.9981\leq \Tr(M^\prime)+\norm{y^\prime}_1 \leq 1$ (when $T$ is large enough). This is simply due to the fact that $g$ is continuous, non-increasing, $g(0)=10/9>1$ and $g(\rho)\leq2/8$ when $\rho\to+\infty$ and $\norm{x_c-\mu}_2>800r$ (because of Lemma~\ref{SDP3}). For this to work, we need that at each iteration, $\texttt{ALG}_{\rho}$ returns a feasible pair $(M^\prime, y^\prime)$ for \eqref{formule2} (for the current $\rho$) with $\Tr(M^\prime) + \norm{y^\prime}_1\leq (1+ 0.0001) g(\rho)$. We suppose that this is the case for the rest of the proof; by a union bound, it happens with probability at least $1- T (1/10)^{u+10}/d$.
\vspace{0.7cm}
\begin{algorithm}[H]\label{algo:binarySearch}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}\SetKw{Or}{or}
\SetKw{Return}{Return}
\Input{$\bar{X}_1, \ldots, \bar{X}_K$, $x_c$, $\rho_0$, $u$, $T$}
\Output{A feasible pair $(M^\prime, y^\prime)$ for \eqref{formule2} satisfying $ 0.9981\leq \Tr(M^\prime)+\norm{y^\prime}_1 \leq 1$}
\BlankLine
$\rho_m \leftarrow 0$, $\rho_M \leftarrow \rho_0$, $V \leftarrow objective(\texttt{ALG}_{\rho_0}(u))$ , $i \leftarrow 0 $\\
\While{$ V \notin[0.9981,1] $ and $i <T$}{\If{$V<0.9981$}{$\rho_M \leftarrow (\rho_M+\rho_m)/2$} \Else{$\rho_m \leftarrow (\rho_M+\rho_m)/2$}
$V \leftarrow objective(\texttt{ALG}_{\frac{\rho_m+\rho_M}{2}}(u))$ , $i \leftarrow i+1$}
\Return $\texttt{ALG}_{\frac{\rho_m+\rho_M}{2}}(u)$
\caption{The \texttt{BinarySearch} algorithm to find a $\rho$ so that $\texttt{ALG}_\rho$ returns a pair $(M^\prime, y^\prime)$ (feasible for \eqref{formule2}) satisfying $ 0.9981\leq \Tr(M^\prime)+\norm{y^\prime}_1 \leq 1$.}
\end{algorithm}
\vspace{0.7cm}
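The bisection logic of Algorithm~\ref{algo:binarySearch} can be sketched as follows (a minimal illustration: \texttt{g} is a placeholder for the map $\rho\mapsto objective(\texttt{ALG}_\rho)$, here taken as an exact non-increasing function rather than a randomized SDP solver):

```python
def binary_search_rho(g, rho0, T, lo=0.9981, hi=1.0):
    """Bisection for a continuous non-increasing g with g(0+) > hi and
    g(rho0) <= hi: return a rho whose value lands in [lo, hi] (or the
    last midpoint if the iteration budget T is exhausted)."""
    rho_m, rho_M = 0.0, rho0
    for _ in range(T):
        rho = (rho_m + rho_M) / 2
        v = g(rho)
        if lo <= v <= hi:
            return rho
        if v < lo:            # value too small: rho is too large
            rho_M = rho
        else:                 # value too large: rho is too small
            rho_m = rho
    return (rho_m + rho_M) / 2

# Example with g(rho) = 1/(rho * OPT): the target window is reached when
# 1/rho is within a small factor of OPT (here OPT = 4).
rho = binary_search_rho(lambda r: 1.0 / (4.0 * r), rho0=10.0, T=200)
```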
If we can find such a $\rho_0$ (i.e., one for which $\texttt{ALG}_{\rho_0}$ returns a feasible pair $(M^\prime, y^\prime)$ for \eqref{formule2} with $\Tr(M^\prime) + \norm{y^\prime}_1\leq 1$) and a large enough number of iterations $T$ in \texttt{BinarySearch}, Algorithm \ref{algo:binarySearch} returns a feasible pair $(M^\prime, y^\prime)$ for \eqref{formule2} from which we can construct an approximate solution $M_c$ for \eqref{formule1} with objective value $h_{x_c}(M_c)$ larger than $\left(0.78\norm{x_{c} - \mu}_2 + 8r\right)^2$ whenever $\norm{x_c-\mu}_2 \geq 800 r$. This is exactly what we expect in step \textbf{5} of Algorithm~\ref{algo:almost_final}. The final step that remains is to show how one can get such a $\rho_0$ and $T$ using only the block means $(\bar X_k)_{k=1}^K$ in $\tilde\cO(Nd+uKd)$.
Let us consider $\hat\mu^{(0)}$ the coordinate-wise median(-of-means) and let us define $\delta= \Med( \norm{\bar{X}_k-\hat\mu^{(0)}}_2:k\in[K])$ -- both quantities can be computed in $\tilde{\mathcal{O}}(Kd)$. On the event $\cE$, it follows from Corollary~\ref{coro:dual_isometry} (for $M=I_d/d$) and Proposition~\ref{prop:coordinate_wise_MOM} that $\delta \leq 16 \sqrt{d} \times r$. So if one takes $\rho_0= d/ \delta^2 \geq 1/[(16)^2r^2]$, and if $\norm{x_c-\mu}_2 > 800 r$, Lemma~\ref{DistanceLemme} and Lemma~\ref{SDP3} guarantee that $OPT_{x_c}\geq (8/9)\left(\norm{x_c-\mu}_2-8r\right)^2\geq (8/9)(792)^2r^2$ and so
\begin{equation*}
g(\rho_0)\leq \frac{1}{\rho_0\ OPT_{x_c}} + 0.171 \leq \frac{16^2}{(8/9)(792)^2} + 0.171<0.18
\end{equation*}
so the objective value returned by $\texttt{ALG}_{\rho_0}$ is at most $(1+\eta)g(\rho_0)<1.0001\times 0.18<1$ (for the same choice of $\eta=0.0001$).
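The two initialization quantities are straightforward to compute from the block means; a minimal NumPy sketch (illustrative, with our own function name):

```python
import numpy as np

def init_search(block_means):
    """Coordinate-wise median of the block means (the starting point
    hat{mu}^(0)), the median distance delta from the block means to it,
    and the initial parameter rho_0 = d / delta**2; all in O(K d log K)."""
    X = np.asarray(block_means, dtype=float)       # shape (K, d)
    mu0 = np.median(X, axis=0)                     # coordinate-wise median
    delta = np.median(np.linalg.norm(X - mu0, axis=1))
    rho0 = X.shape[1] / delta ** 2
    return mu0, delta, rho0
```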
Now we tackle the question of the number $T$ of iterations, which is crucial for the runtime. We know from Lemma~\ref{SDP2} and Lemma~\ref{SDP3} that the interval $I$ of all $\rho$'s such that $ 0.9981 \leq objective(\texttt{ALG}_{\rho}) \leq 1$ has length at least $0.001/OPT_{x_c}$ when $\norm{x_c-\mu}_2 > 800 r$. Indeed, since $g(\rho)\leq objective(\texttt{ALG}_{\rho})\leq (1+\eta)g(\rho)$, if $\rho$ is such that $0.9981\leq g(\rho)\leq1/(1+\eta)$ then $ 0.9981 \leq objective(\texttt{ALG}_{\rho}) \leq 1$. Now, if we let $\rho_1>0$ and $0<\alpha<1$ be such that $g(\rho_1)=0.9981$ and $g((1-\alpha)\rho_1)=1/(1+\eta)$, the interval $I$ has length at least $\alpha \rho_1$. Moreover, from Lemma~\ref{SDP2} we have $1/(1+\eta)\leq g((1-\alpha)\rho_1)\leq g(\rho_1)/(1-\alpha)$ and so $0.9981=g(\rho_1)\geq (1-\alpha)/(1+\eta)$, i.e. $\alpha\geq 1-0.9981(1+\eta)>0.001$. Finally, since $g(\rho_1)\leq 1$, $g(1/OPT_{x_c})=1$ and $g$ is non-increasing, we conclude that $\rho_1\geq 1/OPT_{x_c}$ and so the length of $I$ is at least $\alpha \rho_1\geq 0.001/OPT_{x_c}$.
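The lower bound on $\alpha$ can again be checked numerically (a sanity check, not part of the proof):

```python
eta = 0.0001
# From 0.9981 = g(rho_1) >= (1 - alpha)/(1 + eta), the interval-length
# parameter satisfies alpha >= 1 - 0.9981 * (1 + eta) > 0.001.
alpha_lower = 1 - 0.9981 * (1 + eta)
assert alpha_lower > 0.001
```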
So, in the case where $\norm{x_c-\mu}_2>800r$, $\log_2(\rho_0 \times OPT_{x_c}/0.001)$ iterations are enough to ensure that \texttt{BinarySearch} outputs a pair $(M^\prime, y^\prime)$ (from $\texttt{ALG}_{\rho}$ for a well-chosen $\rho$) feasible for \eqref{formule2} and such that $ 0.9981\leq \Tr(M^\prime)+\norm{y^\prime}_1 \leq 1$. Moreover, on the event $\cE$ it is possible to show that all iterates $x_c$ along the algorithm satisfy $\norm{x_c-\mu}_2 < C \sqrt{d}r$ for a constant $C \leq 800$ (we may take this as an induction hypothesis for the first iterates $x_c$, and the proof of Theorem~\ref{theo:main} below in Section~\ref{sec:final} shows that it still holds for $x_{c+1}$). So if $\delta > r/d$ then $\rho_0 <d^3/r^2 $, and since $OPT_{x_c} < (C^2 d +8) r^2$ (this follows from Lemma~\ref{DistanceLemme}), the binary search ends within $T = \log_2(\tilde C d^4)$ iterations with $\tilde C < 10^6$. \\
Thus, if the binary search has not ended in that time, we have either $\delta < r/d$ (which is a degenerate case) or $\norm{x_c-\mu}_2 < 800 r$ (or both). If $\norm{x_c-\mu}_2 > 800 r$ and $\delta < r/d$, then, taking $\rho_1= 1/ (d \delta)^2$, Lemma \ref{SDP3} shows that the objective value of $\texttt{ALG}_{\rho_1}$ is smaller than $1/2$. So, if we cannot end our binary search in time $\log_2(\tilde C d^4)$, we compute $ \texttt{ALG}_{1/(d \delta)^2} $: if its objective value is smaller than $1$, this means that $1/(d \delta)^2 > 1/OPT_{x_c}$, hence $\delta < \sqrt{(C^2 d +8)}\, r/d < (C+1)r / \sqrt{d} $. We notice that on $\cal E$, $\norm{\hat\mu^{(0)}-\mu}_2 < \delta+8r $, so if the objective value of $ \texttt{ALG}_{1/(d \delta)^2}$ is smaller than $1$, then $\hat\mu^{(0)}$ is a good estimate of $\mu$. If on the contrary the objective value of $ \texttt{ALG}_{\rho_1}$ is larger than $1$, it means that $\norm{x_c-\mu}_2 < 800 r$, so we stop the algorithm and return $x_c$. \\
Let us now write in pseudo-code the procedure we just described. This is an algorithm, named \texttt{SolveSDP}, running in $\tilde\cO(Kud)$, which takes as inputs $\bar{X}_1, \ldots, \bar{X}_K$, $x_c$, $u$ and which outputs, on the event $\cE$, with probability $>1-\log(\tilde C d^4)(1/10)^{u+10}/d$, for every $x_c\in\bR^d$ such that $\norm{x_c-\mu}_2\geq 800 r$, either a matrix $M_c$ such that
\begin{equation*}
h_{x_c}(M_c)\geq \left(0.78\norm{x_{c} - \mu}_2 + 8r\right)^2
\end{equation*}
or a subgaussian estimate of $\mu$. It therefore describes step~\textbf{5} from Algorithm~\ref{algo:almost_final}.
\vspace{0.3cm}
\begin{algorithm}[H]\label{algo:solveur}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}\SetKw{Or}{or}
\SetKw{Return}{Return}
\Input{$\bar{X}_1, \ldots, \bar{X}_K$, $x_c$ and $u$}
\Output{A feasible solution for \eqref{formule1}}
\BlankLine
Compute $\hat\mu^{(0)}$, compute $\delta$\\
$T \leftarrow \log_2(\tilde C d^4)$, $\rho_0 \leftarrow d/ \delta^2$\\
$(M^\prime, y^\prime) \leftarrow$ BinarySearch($\bar{X}_1, \ldots, \bar{X}_K$, $x_c$, $\rho_0$, $u$, $T$)\\
\If{$\Tr(M^\prime)+\norm{y^\prime}_1 \in[0.9981,1]$ }{$M\leftarrow M^\prime/\Tr(M^\prime)$\\ \Return (True, $M$)}
\Else{\If{ $\texttt{ALG}_{1/(d \delta)^2} <1$}{\Return(False, $\hat\mu^{(0)}$) } \Else{\Return(False, $x_c$)}}
\caption{SolveSDP}
\end{algorithm}
\begin{Remark}\label{rem:effects_blocks}[Two advantages of block means]
During the whole algorithm, we solve the program \eqref{formule2} up to a factor $(1+ \eta)$ where $ \eta $ is \emph{fixed} (here we take it equal to $0.0001$). This differs crucially from the work of \cite{MR3909640}, where $\eta$ depends on the fraction of outliers, which degrades the performance of the algorithm in Lemma \ref{SDP4}, the true running time being $\tilde \cO(Kd/\text{Poly}(\eta))$. This is another advantage of using the block means instead of the data themselves. Indeed, using blocks of data, we work with a constant fraction of corrupted blocks (we took it equal to $1/10$), so the approximation parameter used to approximately solve \eqref{formule2} can be taken equal to a constant (we took $\eta=0.0001$), unlike in \cite{MR3909640} where $\eta$ depends on $\eps=|\cO|/N$. Taking the block means therefore has two advantages: a stochastic one, which is to exhibit a subgaussian behavior for $9K/10$ blocks even under an $L_2$-moment assumption, and a computational one, which is to make the proportion of corrupted blocks constant.
\end{Remark}
\section{The final algorithm and its computational cost: proof of Theorem~\ref{theo:main}.}
\label{sec:final}
We are now in a position to fully describe our robust subgaussian descent algorithm running in $\tilde \cO(Nd+uKd)$. One may check that its construction is fully data-dependent, in particular, we do not need to know the value of $r$ or the proportion of outliers.
\vspace{0.7cm}
\begin{algorithm}[H]\label{algo:final}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}\SetKw{Or}{or}
\SetKw{Return}{Return}
\Input{$X_1, \ldots, X_N$ and $K\in[N]$ and $u\in\bN^*$}
\Output{A robust subgaussian estimator of $\mu$}
\BlankLine
Construct an equipartition $B_1\sqcup \cdots \sqcup B_K=\{1,\cdots,N\}$\\
Construct the $K$ empirical means $\bar{X}_k=(K/N)\sum_{i\in B_k}X_i, k\in[K]$\\
Compute $\hat\mu^{(0)}$ the coordinate-wise median\\
$x_c \leftarrow \hat \mu^{(0)}$, Bool $\leftarrow$ True, $T \leftarrow 0$ \\
\While{Bool and $T <\log(8\sqrt{d})/\log(1/0.81)$}{Bool, $A$ $\leftarrow$ SolveSDP($\bar{X}_1, \ldots, \bar{X}_K$, $x_c$, $u$)\\
\If{Bool}{$M \leftarrow A$ \\ Compute $v_1$ a top eigenvector of $M$\\
Compute a step size $\theta_{c} = -\Med\left(\inr{\bar X_k-x_{c},v_1}:k\in[K]\right)$\\
Update $x_{c}\leftarrow x_{c}-\theta_{c} v_1$ \\ $T\leftarrow T+1$\\ }\Else{$x_c \leftarrow A$}}
\Return $x_c$
\caption{Final Algorithm: covSDPofMeans}
\end{algorithm}
\vspace{0.7cm}
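One iteration of the descent (steps 9--11 of the loop above) can be sketched as follows (illustrative Python; \texttt{v1} is assumed to be the unit top eigenvector produced from \texttt{SolveSDP}, which we take here as given):

```python
import numpy as np

def descent_step(x_c, block_means, v1):
    """One robust descent update: move x_c along the unit direction v1 by
    theta = -Med(<Xbar_k - x_c, v1> : k in [K]); the median makes the
    step size insensitive to the corrupted blocks."""
    proj = (np.asarray(block_means, dtype=float) - x_c) @ v1
    theta = -np.median(proj)
    return x_c - theta * v1

# Three honest blocks around (5, 0) plus two corrupted ones: the median
# step still recenters x_c on the honest blocks' first coordinate.
x_new = descent_step(
    np.zeros(2),
    [[5.0, 0.0], [5.0, 0.0], [5.0, 0.0], [100.0, 0.0], [-50.0, 0.0]],
    np.array([1.0, 0.0]),
)
```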
\textbf{Proof of Theorem~\ref{theo:main}.} From Theorem~\ref{theo:approx_sol_sdp}, we know that on $\cE $, when $\norm{x_c-\mu}_2> 800r$, we get, with probability $>1-(1/10)^{u+5}/\sqrt{d}$, an $M_c$ such that $ h_{x_c}(M_c)\geq \left(0.8\norm{x_{c} - \mu}_2 + 8r\right)^2$ (or directly a subgaussian estimate, in which case our work is done). Proposition \ref{prop:geometric_decay} states that in that case $\norm{x_{c+1} - \mu}_2^2\leq 0.8 \norm{x_c - \mu}_2^2 + 64r^2 \leq 0.81 \norm{x_c - \mu}_2^2$. So we have geometric decay, and Proposition \ref{prop:coordinate_wise_MOM} guarantees that our starting point is at most $8\sqrt{d} r$ away from the mean, so that in at most $\log(8\sqrt{d})/\log(1/0.81)$ steps the algorithm outputs a current point which is $r$-close to $\mu$, with probability $>1-(1/10)^{u+5} \log(8\sqrt{d})/(\log(1/0.81)\sqrt{d})>1-(1/10)^u$ (by a union bound).
The last thing to do is to control what happens when $\norm{x_c-\mu}_2< 800r$. In this case, we have no guarantees on $v_1$, but using a similar argument as in the proof of Proposition~\ref{prop:geometric_decay} we know that
\begin{equation}\label{eq:prop_theta_c}
|\theta_c - \inr{x_c-\mu,v_1}| = |{\rm Med}\left(\inr{\mu-\bar X_k,v_1}:k\in[K]\right)|\leq {\rm Med}\left(|\inr{\mu-\bar X_k,v_1}|:k\in[K]\right)\leq 8r
\end{equation} and (for some $v_1^\perp$ a normalized orthogonal vector to $v_1$)
\begin{align*}
\norm{x_{c+1} - \mu}_2^2 & = \norm{x_c-\mu - \theta_c v_1}_2^2 = \norm{(\inr{x_c-\mu, v_1} -\theta_c)v_1 + \inr{x_c-\mu, v_1^\perp} v_1^\perp}_2^2\\
&= (\inr{x_c-\mu, v_1} -\theta_c)^2 +\inr{x_c-\mu, v_1^\perp}^2 \leq (8 r)^2 + \norm{x_c-\mu}_2^2 .
\end{align*} Hence, $\norm{x_{c+1} - \mu}_2 \leq 8 r + \norm{x_c-\mu}_2 $. Therefore, in the worst case scenario where $\norm{x_c-\mu}_2 \leq 800r $ at the last iteration, the algorithm outputs the next iterate $\hat \mu_K = x_{c+1}$, so that $\norm{\hat \mu_K - \mu}_2\leq 808r$.
We end this proof with the computation of the running time of Algorithm~\ref{algo:final}. We detail the computational cost of each line of Algorithm~\ref{algo:final}: line~\textbf{1} costs $N$, line~\textbf{2} costs $Nd$, line~\textbf{3} costs $\cO(d K \log(K))$. The while loop in line~\textbf{5} runs at most $\log d$ times (up to a constant), so the computational costs of all remaining lines of Algorithm~\ref{algo:final} are at worst to be multiplied by $\log d$. Line~\textbf{6} costs $\log(\tilde C d^4)$ steps, each of cost $\tilde \cO(Kud)$ (which comes from Lemma \ref{SDP4}). Line~\textbf{9} can be computed in $\tilde\cO(Kd)$ thanks to Lemma~\ref{SDP4}. Finally, line~\textbf{10} costs $\cO(Kd)$. Other lines take time at most $d$. We thus recover the running time announced in Theorem~\ref{theo:main}.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
\section{Adaptive choice of $K$}
\label{sec:adaptive_choice_of_}
Given a number of blocks $K\in\{1, \ldots, N\}$, a parameter $u\geq1$ (so that the covering SDP from \cite{PTZ12} (used in Lemma~\ref{SDP4}) is run $u+3 \log d+10$ times) and the dataset $\{X_1, \ldots, X_N\}$, Algorithm~\ref{algo:final} returns a vector $\hat \mu_K$ in $\bR^d$ and Theorem~\ref{theo:main} ensures that $\hat \mu_K$ estimates the true mean $\mu$ at the subgaussian rate \eqref{eq:intro_subgaus_rate} with large probability as long as $K\geq 300|\cO|$. As a consequence, we have certified statistical guarantees for $\hat\mu_K$ only when some a priori knowledge on the number $|\cO|$ of outliers is available (such as ``the corruption of this database is less than $5\%$'') or if we choose $K$ of order $N$ -- but, in this latter case, the rate \eqref{eq:intro_subgaus_rate} may be too pessimistic. The aim of this section is to overcome this issue by constructing a procedure which automatically adapts to the number of outliers. The resulting procedure satisfies the same statistical bounds as $\hat\mu_K$ for all $K\geq 300|\cO|$ without knowing $|\cO|$ (up to constants).
The adaptation method we use is based on the Lepski method \cite{MR1091202,MR1147167} which is another tool used by the ``MOM community'' since \cite{lugosi2019sub}. The price we pay for this adaptation is the a priori knowledge of the rate \eqref{eq:intro_subgaus_rate} for all $K$ which means that we know in advance $\Tr(\Sigma)$ and $\norm{\Sigma}_{op}$ -- this is for instance the case when it is known that $\Sigma$ is the identity matrix $I_d$. Of course, one can design robust estimators for $\Tr(\Sigma)$ (see \cite{Jules_Guillaume_1}) and $\norm{\Sigma}_{op}$ but this requires stronger assumptions that we want to avoid at this stage.
Lepski's method proceeds as follows. We set for all $K\in\{1, \ldots, N\}$ and all $j\in\{0,1,\ldots, \log_2N\}$
\begin{equation*}
r_K^* = 808\left(1200\sqrt{\frac{\Tr(\Sigma)}{N}} + \sqrt{\frac{1200\norm{\Sigma}_{op}K}{N}}\right) \mbox{ and } r^{(j)} = r^*_{\lceil N/2^j\rceil}
\end{equation*}the rate of convergence from Theorem~\ref{theo:main}. For a given parameter $u_j\in\bN^*$, we construct from Algorithm~\ref{algo:final}
\begin{equation}\label{eq:def_esti_lepski}
\hat \mu^{(j)}\leftarrow covSDPofMeans(X_1,\ldots,X_N, K=\lceil N/2^j\rceil, u=u_j).
\end{equation}
The classical Lepski method considers the largest $J$ such that $\cap_{j=0}^J B_2(\hat \mu^{(j)}, r^{(j)})$ is nonempty and then takes any point $\hat \mu$ in this nonempty intersection. A standard analysis of Lepski's method shows that $\hat \mu$ estimates $\mu$ at the rate $r_K^*$ (up to an absolute constant) simultaneously for all $K\in \{300 |\cO|, \ldots, N\}$ without knowing $|\cO|$. Given that checking that several $\ell_2^d$-balls intersect may not be straightforward, we use a slightly modified version of Lepski's method, as described in the following algorithm.
\vspace{0.7cm}
\begin{algorithm}[H]\label{algo:lepski}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}\SetKwInOut{Init}{init}\SetKw{Or}{or}
\SetKw{Return}{Return}
\Input{$X_1, \ldots, X_N$ and $\{u_j:j=0,1,2,\ldots,\log_2 N\}\subset \bN^*$}
\Output{A robust subgaussian estimator of $\mu$ with adaptive choice of $K$}
\Init{$J=0$ and $\hat \mu^{(0)} = covSDPofMeans(X_1,\ldots,X_N, K=N, u=u_0)$}
\BlankLine
\While{$\norm{\hat \mu^{(J)} - \hat \mu^{(j)}}_2\leq r^{(J)} + r^{(j)} , j= J-1,J-2,\ldots, 0$}{
$J\leftarrow J+1$\\
$\hat \mu^{(J)}\leftarrow covSDPofMeans(X_1,\ldots,X_N, K=\lceil N/2^J\rceil, u=u_J)$
}
\Return $\hat \mu^{(J)}$
\caption{Adaptive choice of $K$ in covSDPofMeans}
\end{algorithm}
\vspace{0.7cm}
Unlike the traditional Lepski method, we check that $\hat\mu^{(J)}$ is in $\cap_{j=0}^{J-1} B_2(\hat \mu^{(j)}, r^{(J)} +r^{(j)})$ instead of checking that $\cap_{j=0}^J B_2(\hat \mu^{(j)}, r^{(j)})$ is nonempty -- this simplifies the adaptation step. It is also possible to speed up the whole procedure by constructing the block means iteratively. Indeed, given that we consider a dyadic grid for $K$, i.e. $K\in\{N,\lceil N/2\rceil,\lceil N/4\rceil, \ldots\}$, for all $j\in\bN$, we can construct the block means $\{\bar X_k^{(j+1)},k=1,\ldots, \lceil N/2^{j+1}\rceil\}$ at step $K=\lceil N/2^{j+1}\rceil$ from the block means of the previous step $K=\lceil N/2^{j}\rceil$ by simply averaging two successive block means: $\bar X_k^{(j+1)}\leftarrow (\bar X_{2k}^{(j)} + \bar X_{2k+1}^{(j)})/2$.
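The pairwise-averaging trick can be sketched as follows (a NumPy sketch; the handling of a possible leftover odd block is our own simplification):

```python
import numpy as np

def halve_block_means(block_means):
    """From the block means at dyadic level j, build those at level j+1
    by averaging successive pairs (0-indexed: Xbar_k <- (Xbar_{2k} +
    Xbar_{2k+1}) / 2); any leftover odd block is dropped."""
    X = np.asarray(block_means, dtype=float)
    K = (len(X) // 2) * 2          # largest even number of blocks
    return (X[0:K:2] + X[1:K:2]) / 2
```

This avoids recomputing each level's block means from the raw data, so the whole dyadic family costs only $\cO(Kd)$ beyond the first level.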
Let us now turn to the statistical analysis of the output $\hat \mu^{(\hat J)}$ from Algorithm~\ref{algo:lepski} where
\begin{equation*}
\hat J =\max\left(J\in\{0,1,\ldots, \log_2 N\}: \hat \mu^{(J)}\in\cap_{j=0}^{J-1} B_2(\hat \mu^{(j)}, r^{(J)} + r^{(j)})\right).
\end{equation*}
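The selection rule defining $\hat J$ can be sketched as follows (illustrative Python; \texttt{estimates} and \texttt{radii} stand for $(\hat\mu^{(j)})_j$ and $(r^{(j)})_j$):

```python
import numpy as np

def lepski_select(estimates, radii):
    """Largest index J such that estimates[J] lies in every ball
    B(estimates[j], radii[J] + radii[j]) for j < J, scanning upward and
    stopping at the first failure, as in the while loop above."""
    J = 0
    for cand in range(1, len(estimates)):
        e = np.asarray(estimates[cand], dtype=float)
        if any(np.linalg.norm(e - np.asarray(estimates[j], dtype=float))
               > radii[cand] + radii[j] for j in range(cand)):
            break
        J = cand
    return J
```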
\begin{Theorem}\label{theo:lepski}
Let $\{u_j:j=0,1,2,\ldots,\log_2 N\}\subset \bN^*$ be the family of parameters used to construct the family of estimators $\{\hat \mu^{(j)}, j=0, 1, \ldots\}$ in Algorithm~\ref{algo:lepski} (see also \eqref{eq:def_esti_lepski}). For all $K\in\{600|\cO|, \ldots, N\}$, with probability at least
\begin{equation}\label{eq:proba_lepski}
1-2\exp(-K/360000)-\sum_{j=0}^{\log_2(N/(K-1))}(1/10)^{u_j}
\end{equation} the output $\hat \mu^{(\hat J)}$ of Algorithm~\ref{algo:lepski} is such that $\norm{\hat \mu^{(\hat J)} - \mu}_2\leq 3 r^*_K$.
\end{Theorem}
{\bf Proof. {\hspace{0.2cm}}}
For all $j\in\{0,1,\ldots, \log_2 N\}$ denote by $\cE_j$ the event on which Theorem~\ref{theo:main} is valid for $K=\lceil N/2^{j}\rceil$ and for $u=u_j$: that is, on $\cE_j$, if $\lceil N/2^{j}\rceil\geq 300|\cO|$ then $\norm{\hat \mu^{(j)}-\mu}_2\leq r^{(j)}$, and $\bP[\cE_j]\geq 1-\exp(-\lceil N/2^{j}\rceil/180000)-(1/10)^{u_j}$. Let $K\in\{600|\cO|, \ldots, N\}$ and $J\in\{0,1,\ldots, \log_2 N\}$ be such that $\lceil N/2^{J}\rceil\leq K <\lceil N/2^{J-1}\rceil$. On the event $\cap_{j=0}^J \cE_{j}$, we have $\norm{\hat \mu^{(j)}-\mu}_2\leq r^{(j)}$ for all $j=0,1,\ldots,J$; in particular, for all $j=0,1,\ldots, J-1$, $\norm{\hat \mu^{(J)} - \hat \mu^{(j)}}_2\leq r^{(J)} + r^{(j)}$ and so $\hat \mu^{(J)}\in\cap_{j=0}^{J-1} B_2(\hat \mu^{(j)}, r^{(J)} +r^{(j)})$. As a consequence, $\hat J\geq J$ and therefore $\norm{\hat \mu^{(\hat J)} - \hat \mu^{(J)}}_2\leq r^{(\hat J)} + r^{(J)}\leq 2 r^{(J)}\leq 2 r_K^*$. Together with $\norm{\hat \mu^{(J)}-\mu}_2\leq r^{(J)}\leq r^*_K$, this yields $\norm{\hat \mu^{(\hat J)} - \mu}_2\leq 3 r^*_K$. Finally, we have
\begin{align*}
\bP[\cap_{j=0}^J \cE_{j}] \geq 1-\sum_{j=0}^J \left(\exp(-\lceil N/2^{j}\rceil/180000)+(1/10)^{u_j}\right)\geq 1-2\exp(-K/360000)-\sum_{j=0}^{\log_2(N/(K-1))}(1/10)^{u_j}.
\end{align*}
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
We can see in Algorithm~\ref{algo:lepski} that $\hat \mu^{(\hat J)}$ does not use any information on the number of outliers $|\cO|$ for its construction but it can still estimate $\mu$ at the optimal rate $r^*_K$ for all deviation parameters $K$ in $\{600|\cO|, \ldots, N\}$. The maximum total running time of Algorithm~\ref{algo:lepski} is achieved when $\hat J = \log_2N$; in that case, it is at most $\tilde\cO(Nd + \sum_{j=0}^{\log_2N} \lceil N/2^j\rceil u_j d)$. In particular, if one chooses $u_j = 2^j$ for all $j=0,1,\ldots, \log_2N$ then the total running time for the construction of $\hat \mu^{(\hat J)}$ is nearly-linear $\tilde \cO(Nd)$. For this choice of $u_j$, the probability deviation in \eqref{eq:proba_lepski} is constant and so one should choose the smallest possible $K$ allowed in Theorem~\ref{theo:lepski}, that is $K=600|\cO|$. Let us write formally this result.
\begin{Corollary}\label{coro:lepski}
If one takes $u_j=2^j$ for all $j=0,1,\ldots, \log_2N$ in Algorithm~\ref{algo:lepski} then, in nearly-linear time $\tilde\cO(Nd)$, with probability at least $1-2\exp(-600|\cO|/360000)-1/11$, the output $\hat \mu^{(\hat J)}$ from Algorithm~\ref{algo:lepski} satisfies
\begin{equation}\label{eq:rate_coro_lepski}
\norm{\hat\mu^{(\hat J)} - \mu}_2\leq 2r^*_{600|\cO|}=1616\left(1200\sqrt{\frac{\Tr(\Sigma)}{N}} + 850\sqrt{\frac{\norm{\Sigma}_{op}|\cO|}{N}}\right).
\end{equation}
\end{Corollary}
In particular, considering the setup from Theorem~\ref{theo:diakonikolas}, if $|\cO| = \eps N$ for some $\eps\leq 1/600$ then the rate achieved by $\hat\mu^{(\hat J)}$ in Corollary~\ref{coro:lepski} is of the order of
\begin{equation*}
\sqrt{\frac{\Tr(\Sigma)}{N}} + \sqrt{\norm{\Sigma}_{op}\eps}
\end{equation*}which is like $\sqrt{\norm{\Sigma}_{op}\eps}$ when $N\geq (\Tr(\Sigma)/\norm{\Sigma}_{op})/\eps$. As a consequence, the result from Corollary~\ref{coro:lepski} improves the one from Theorem~\ref{theo:diakonikolas} by removing an extra $\log d$ factor in the sample complexity in the case considered in Theorem~\ref{theo:diakonikolas} that is when $\Sigma\preceq \sigma^2 I_d$. Moreover, Corollary~\ref{coro:lepski} also shows that the sample complexity depends on the \textit{effective rank} $\Tr(\Sigma)/\norm{\Sigma}_{op}$ of $\Sigma$. This ratio can be much smaller than $d$ if the spectrum of $\Sigma$ decays sufficiently fast. Finally, Corollary~\ref{coro:lepski} also covers the case where the sample size $N$ is less than the sample complexity -- that is when $N\leq (\Tr(\Sigma)/\norm{\Sigma}_{op})/\eps$. In that case, the estimation rate is given by $\sqrt{\Tr(\Sigma)/N}$ which is the complexity coming from the estimation of $\mu$ in the none corrupted case. As a consequence, Corollary~\ref{coro:lepski} exhibits a phase transition happening at $N\sim (\Tr(\Sigma)/\norm{\Sigma}_{op})/\eps$ above which corruption is the main source of estimation mistakes and below which corruption does not play any role.
Corollary~\ref{coro:lepski} covers the case where $\hat\mu^{(\hat J)}$ is computed in nearly-linear time with statistical guarantees holding with constant probability. In the following final result, we show that $\hat\mu^{(\hat J)}$ can estimate $\mu$ at the optimal rate $r^*_K$ for all $K\geq 600|\cO|$ with a subgaussian deviation $1-2\exp(-K/360000)$ if we perform more iterations $u_j$ of the covering SDP from Lemma~\ref{SDP4}. The price we pay for this subgaussian behavior of $\hat\mu^{(\hat J)}$ is in the total running time, which goes from nearly-linear $\tilde\cO(Nd)$ to $\tilde\cO(N^2d)$ by taking $u_j = \lceil N/2^j\rceil$ for $j=0,1,\ldots, \log_2N$ ($u_j=N$ would do as well). We state this formally in the next corollary, which follows directly from Theorem~\ref{theo:lepski}.
\begin{Corollary}\label{coro:lepski_2_subgaussian}
If one takes $u_j=\lceil N/2^j\rceil$ for all $j=0,1,\ldots, \log_2N$ in Algorithm~\ref{algo:lepski} then, in time $\tilde\cO(N^2d)$, for all $K\geq 600|\cO|$, with probability at least $1-4\exp(-K/360000)$, the output $\hat \mu^{(\hat J)}$ from Algorithm~\ref{algo:lepski} satisfies
\begin{equation}\label{eq:rate_coro_lepski_2}
\norm{\hat\mu^{(\hat J)} - \mu}_2\leq 2r^*_{K}=1616\left(1200\sqrt{\frac{\Tr(\Sigma)}{N}} + \sqrt{\frac{1200\norm{\Sigma}_{op}K}{N}}\right).
\end{equation}
\end{Corollary}
As a consequence, $\hat \mu^{(\hat J)}$ is a subgaussian estimator of $\mu$ for the whole range of $K$ from $600|\cO|$ to $N$, which can handle up to $|\cO|$ outliers in the database (even when $|\cO|\sim N$) and can be constructed in time $\tilde\cO(N^2d)$. It does not require any knowledge of $|\cO|$ for its construction.
\vspace{0.7cm}
\textbf{Acknowledgements:} We would like to thank Yeshwanth Cherapanamjeri, Ilias Diakonikolas, Yihe Dong, Nicolas Flammarion, Sam Hopkins and Jerry Li for helpful comments on our work.
\begin{footnotesize}
\bibliographystyle{plain}
% https://arxiv.org/abs/1809.09442 -- Local biquandles and Niebrzydowski's tribracket theory
\section*{Introduction}
Invariants of knots and knotted surfaces defined in terms of colorings by
algebraic structures have a long history, including colorings by algebraic
structures such as groups, quandles, biquandles and more \cite{ElhamdadiNelson}.
Enhancements, called cocycle invariants, of these invariants using cocycles
in cohomology theories
associated to the coloring structures were popularized in the late 1990s
in papers such as \cite{CarterJelsovskyKamadaLangfordSaito03} and have been
a topic of much study ever since.
In \cite{Niebrzydowski0}, colorings of the planar complement of a
knot diagram by algebraic structures now known as \textit{(knot-theoretic)
ternary quasigroups} were introduced and used to define invariants of knots.
In \cite{Niebrzydowski1,Niebrzydowski2}, a (co)homology theory for these structures
was introduced and used to enhance the coloring invariants.
In \cite{NeedellNelson16}, an algebraic structure known as \textit{biquasile}
was introduced by the first author and used to
define oriented link invariants via colorings of certain graphs obtained
from oriented link diagrams. These invariants were enhanced with Boltzmann
weights in \cite{ChoiNeedellNelson17} and used to distinguish orientable
surface-links in \cite{KimNelson}. Biquasile colorings can be understood
in terms of ternary quasigroup colorings, and these Boltzmann weights can
be understood as enhancement by cocycles in ternary quasigroup cohomology.
In more recent papers by the first author, knot-theoretic ternary quasigroup
structures have been studied and generalized in terms of ternary operations
called \textit{Niebrzydowski tribrackets}. In \cite{PicoNelson}, tribracket
coloring invariants are extended to the case of oriented virtual links. In
\cite{GravesNelsonTamagawa} tribracket colorings are extended to $Y$-oriented
trivalent spatial graphs and handlebody-links, and in \cite{NeedellNelsonShi},
enhancements of tribracket colorings by structures called
\textit{tribracket modules} are defined.
In this paper, we introduce a new algebraic structure called \textit{local
biquandles} and show how colorings of oriented classical link diagrams
and of broken surface diagrams are related to tribracket colorings. We
define a (co)homology theory for local biquandles and show that it is isomorphic
to Niebrzydowski's tribracket (co)homology.
This implies that Niebrzydowski's (co)homology theory can be interpreted
similarly to biquandle (co)homology theory, since our local biquandle (co)homology theory
is analogous to biquandle (co)homology theory.
Moreover, through the isomorphism between the two cohomology groups,
we show that Niebrzydowski's cocycle invariants and local biquandle cocycle
invariants are the same. We provide examples of cocycles
and computations of these cocycle invariants.
The paper is organized as follows: In Section~\ref{Preliminaries}, we review the definitions of links, surface-links and tribrackets, and we introduce local biquandles.
In Section~\ref{Local biquandle homology groups and cocycle invariants}, we define local biquandle (co)homology groups, colorings using local biquandles and cocycle invariants.
In Section~\ref{Niebrzydowski's work}, we summarize Niebrzydowski's work in \cite{Niebrzydowski0, Niebrzydowski1,Niebrzydowski2}, that is, we review Niebrzydowski's (co)homology groups, colorings using tribrackets and cocycle invariants.
In Section~\ref{Correspondence between our work and Niebrzydowski's work}, our main results are stated and proved, that is, we show that local biquandle (co)homology groups are isomorphic to Niebrzydowski's ones, and local biquandle cocycle invariants are the same as Niebrzydowski's ones.
We provide some examples in Section~\ref{Examples}.
\section{Preliminaries} \label{Preliminaries}
\subsection{Knots, links, connected diagrams}
Throughout this paper, a knot/link means an oriented classical knot/link.
A knot diagram is always connected.
A link diagram with at least two components is said to be {\it connected} if every component intersects another component.
It is known that between two connected diagrams $D$ and $D'$ that represent the same link, there exists a finite sequence of connected diagrams and oriented Reidemeister moves that transforms $D$ to $D'$, i.e., there exists
\[
D=D_0 \overset{R_0} {\longrightarrow} D_1 \overset{R_1} {\longrightarrow}\cdots \overset{R_{i-1}} {\longrightarrow}D_i \overset{R_i} {\longrightarrow} \cdots \overset{R_{n-1}} {\longrightarrow} D_n =D',
\]
where for each $i\in \{0,1,\ldots ,n\}$, $R_i$ is an oriented Reidemeister move, and $D_i$ is a connected diagram of a link.
For a diagram $D$, we remove a small neighborhood of each crossing and call each connected component of the remainder a {\it semi-arc} of $D$.
In this paper, for a link diagram $D$, $\mathcal{SA}(D)$ means the set of semi-arcs of $D$ and $\mathcal{R}(D)$ means the set of connected regions of $\mathbb R^2\setminus D$.
For each semi-arc $x$ of a connected diagram $D$, we set two parallel copies $x^{(+)}$ and $x^{(-)}$ of $x$ as depicted in Figure~\ref{semi-arc}, where we note that $x^{(+)}$ (resp. $x^{(-)}$) lies on the right side (resp. left side) of $x$.
The {\it $2$-parallel $\widetilde{D}$} of $D$ is the union of the two copies taken over all semi-arcs of $D$, that is, $\widetilde{D}=\bigsqcup \mathcal{SA}(\widetilde{D})$, see Figure~\ref{semi-arc}, where throughout this paper, $\mathcal{SA}(\widetilde{D})$ means the set $\{x^{(\varepsilon)} ~|~ x\in \mathcal{SA}(D), \varepsilon \in \{+, -\}\}$.
\begin{figure}
\begin{center}
\includegraphics[clip,width=8cm]{semi-arc.pdf}
\label{semi-arc}
\end{center}
\end{figure}
\subsection{Surface-knots, surface-links, connected diagrams}
A {\it surface-knot} is an oriented closed surface locally flatly embedded in $\mathbb R^4$. A {\it surface-link} is a disjoint union of surface-knots. We note that every surface-knot is a surface-link.
Two surface-links are said to be {\it equivalent} if they can be deformed into each other through an isotopy of $\mathbb R^4$.
A {\it diagram} of a surface-link is its image by a regular projection, from $\mathbb R^4$ to $\mathbb R^3$, equipped with the height information for each double point curve, where the height information is represented by removing small neighborhoods of lower double point curves. Then a diagram is composed of four kinds of local pictures depicted in Figure~\ref{multiplepoint}, and the indicated points are called a {\it regular point}, a {\it double point}, a {\it triple point} and a {\it branch point}, respectively.
It is known that two surface-link diagrams represent the same surface-link if and only if they are related by a finite sequence of Roseman moves, see \cite{Roseman} for details.
A surface-knot diagram is always connected.
A surface-link diagram with at least two components is said to be {\it connected} if every component intersects another component.
It is known that between two connected diagrams $D$ and $D'$ that represent the same surface-link, there exists a finite sequence of connected diagrams and oriented Roseman moves that transforms $D$ to $D'$.
\begin{figure}
\begin{center}
\includegraphics[clip,width=10cm]{multiple-point.pdf}
\label{multiplepoint}
\end{center}
\end{figure}
For a surface-link diagram $D$, we remove small neighborhoods of the double point curves and call each connected component of the remainder a {\it semi-sheet} of $D$.
In this paper, for a surface-link diagram $D$, $\mathcal{SS}(D)$ means the set of semi-sheets of $D$ and $\mathcal{R}(D)$ means the set of connected regions of $\mathbb R^3 \setminus D$.
For a semi-sheet $x$ of a surface-link diagram $D$, we assign to $x$ the normal orientation $n_x$ such that the triple $(o_1, o_2, n_x)$, where $(o_1, o_2)$ is the orientation of $D$, coincides with the right-handed orientation of $\mathbb R^3$; in this way the normal orientations represent the orientation of $D$.
\subsection{Tribrackets}
\begin{definition}\label{def:horizontal}
A {\it knot-theoretic horizontal-ternary-quasigroup} is a pair of a set $X$ and a ternary operation $[\, ]: X^3 \to X; (a,b,c) \mapsto [a,b,c]$ satisfying the following property:
\begin{enumerate}
\item[] \hspace{-0.5cm}($\mathcal{H}$1) For any $a,b,c\in X$,
\begin{itemize}
\item[(i)] there exists a unique $d_1\in X$ such that $[a,b,d_1]=c$,
\item[(ii)] there exists a unique $d_2\in X$ such that $[a,d_2,b]=c$,
\item[(iii)] there exists a unique $d_3 \in X$ such that $[d_3,a,b]=c$.
\end{itemize}
\item[] \hspace{-0.5cm}($\mathcal{H}$2) For any $a,b,c,d \in X$, it holds that
\[
\begin{array}{l}
[b,[a,b,c],[a,b,d]] = [c,[a,b,c],[a,c,d]]=[d,[a,b,d],[a,c,d]].
\end{array}
\]
\end{enumerate}
We call the operation $[\,]$ a {\it horizontal-tribracket}.
\end{definition}
The axioms of a knot-theoretic horizontal-ternary-quasigroup $(X, [\,])$ arise from the oriented Reidemeister moves of link diagrams; this is seen by considering an assignment of an element of $X$ to each region of a link diagram satisfying the condition depicted in Figure~\ref{coloring2}.
See Figure~\ref{RmoveIII} for the correspondence between the Reidemeister move of type III and the axiom ($\mathcal{H}$2) of a knot-theoretic horizontal-ternary-quasigroup $(X, [\,])$.
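Small examples of horizontal-tribrackets can be verified by brute force. As an illustration (our example, not taken from this paper), operations of the form $[a,b,c]=-tsa+tb+sc$ on $\mathbb Z_n$ with $t,s$ invertible satisfy the axioms; the sketch below checks ($\mathcal{H}$1) and ($\mathcal{H}$2) exhaustively for $t=2$, $s=3$ on $\mathbb Z_5$ (so $ts\equiv 1$):

```python
from itertools import product

N = 5
def br(a, b, c):
    # candidate horizontal-tribracket [a,b,c] on Z_5 (illustrative example)
    return (-a + 2*b + 3*c) % N

# (H1): each of the three division equations has a unique solution.
for a, b, c in product(range(N), repeat=3):
    assert sum(1 for d in range(N) if br(a, b, d) == c) == 1
    assert sum(1 for d in range(N) if br(a, d, b) == c) == 1
    assert sum(1 for d in range(N) if br(d, a, b) == c) == 1

# (H2): the three bracketed expressions agree for all a, b, c, d.
for a, b, c, d in product(range(N), repeat=4):
    assert br(b, br(a, b, c), br(a, b, d)) \
        == br(c, br(a, b, c), br(a, c, d)) \
        == br(d, br(a, b, d), br(a, c, d))
print("(H1) and (H2) hold")
```

Editing the function br lets one test any candidate ternary operation on a small set in the same way.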
\begin{figure}
\begin{center}
\includegraphics[clip,width=7.0cm]{coloring2.pdf}
\label{coloring2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[clip,width=6.0cm]{RmoveIII.pdf}
\label{RmoveIII}
\end{center}
\end{figure}
\begin{definition} \label{s-ternary operation}
A {\it knot-theoretic vertical-ternary-quasigroup} is a pair of a set $X$ and a ternary operation $\langle\, \rangle: X^3 \to X; (a,b,c) \mapsto \langle a,b,c \rangle $ satisfying the following property:
\begin{itemize}
\item[] \hspace{-0.5cm}($\mathcal{V}$1) For any $a,b,c\in X$,
\begin{itemize}
\item[(i)] there exists a unique $d_1\in X$ such that $\langle a,b,d_1\rangle=c$,
\item[(ii)] there exists a unique $d_2\in X$ such that $\langle a, d_2, b \rangle=c$,
\item[(iii)] there exists a unique $d_3\in X$ such that $\langle d_3,a,b \rangle=c$.
\end{itemize}
\item[] \hspace{-0.5cm}($\mathcal{V}$2) For any $a,b,c,d \in X$, it holds that
\begin{itemize}
\item[(i)] $\big\langle a , \langle a,b,c\rangle , \langle \langle a, b, c\rangle , c, d \rangle \big \rangle =\big\langle a, b, \langle b, c, d \rangle \big \rangle $ and
\item[(ii)] $\big\langle \langle a, b, c \rangle, c, d \big \rangle =\big\langle \langle a, b, \langle b, c, d\rangle \rangle, \langle b, c, d \rangle , d \big\rangle.$
\end{itemize}
\end{itemize}
We call the operation $\langle\, \rangle$ a {\it vertical-tribracket}.
\end{definition}
The axioms of a vertical-tribracket likewise arise from the oriented Reidemeister moves of link diagrams; this is seen by considering an assignment of an element of $X$ to each region of a link diagram satisfying the condition depicted in Figure~\ref{coloring3} (see Definition~\ref{def:regioncoloring1}).
\begin{figure}
\begin{center}
\includegraphics[clip,width=7.0cm]{coloring3.pdf}
\label{coloring3}
\end{center}
\end{figure}
\begin{remark}\label{rem:correspondence}
Horizontal- and vertical-tribrackets are induced from each other as follows:
Each vertical-tribracket $\langle\, \rangle$ satisfies the condition
\begin{itemize}
\item[] \hspace{-0.5cm}($\mathcal{V}$1)-(i) For any $a,b,c \in X$, there exists a unique $d\in X$ such that $\langle a, b, d \rangle =c$,
\end{itemize}
and hence, by defining a horizontal-tribracket $[\,]: X^3\to X$ by $ (a,b,c) \mapsto d(=:[a,b,c])$, $[\, ]$ is induced from $\langle\, \rangle$. We call $[\, ]$ the {\it corresponding horizontal-tribracket} of $\langle\, \rangle$.
On the other hand, each horizontal-tribracket $[\,]$ satisfies the condition
\begin{itemize}
\item[] \hspace{-0.5cm}($\mathcal{H}$1)-(i) For any $a,b,c \in X$, there exists a unique $d\in X$ such that $[a,b,d]=c$,
\end{itemize}
and hence, by defining a vertical-tribracket $\langle\, \rangle: X^3\to X$ by $ (a,b,c) \mapsto d(=:\langle a,b,c \rangle)$, $\langle\, \rangle$ is induced from $[\, ]$. We call $\langle\, \rangle$ the {\it corresponding vertical-tribracket} of $[\, ]$.
\end{remark}
The next lemma follows from Remark~\ref{rem:correspondence}; see also Figure~\ref{tribracketsfomula1}.
\begin{lemma}\label{lem:tribrackets1}
For a set $X$, let $[\, ]$ be a horizontal-tribracket on $X$ and $\langle\, \rangle$
the corresponding vertical-tribracket of $[\,]$. Then for any $a,b,c,d\in X$, we have
\[
\mbox{\rm (1)}~~c=\langle a,b,[a,b,c]\rangle\quad \mathrm{and}\quad \mbox{\rm (2)}~~
d=[a,b,\langle a,b,d\rangle].\]
\end{lemma}
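For a concrete sanity check of the correspondence and of identity (1) above (an illustration of ours, not from the paper), take the linear horizontal-tribracket $[a,b,c]=-a+2b+3c$ on $\mathbb Z_5$; solving $[a,b,d]=c$ for $d$ gives the corresponding vertical-tribracket $\langle a,b,c\rangle = 2a+b+2c$:

```python
from itertools import product

N = 5
def br(a, b, c):  return (-a + 2*b + 3*c) % N   # sample horizontal-tribracket
def vbr(a, b, c): return (2*a + b + 2*c) % N    # its corresponding vertical-tribracket

for a, b, c in product(range(N), repeat=3):
    # vbr(a,b,c) is the unique d with [a,b,d] = c
    assert br(a, b, vbr(a, b, c)) == c
    # identity (1): c = <a,b,[a,b,c]>
    assert vbr(a, b, br(a, b, c)) == c
print("correspondence and identity (1) verified on Z_5")
```

The first assertion also covers identity (2), since $d=[a,b,\langle a,b,d\rangle]$ is the same statement with the variables renamed.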
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{coloring4.pdf}
\label{tribracketsfomula1}
\end{center}
\end{figure}
\begin{lemma}\label{lem:tribrackets2}
For a set $X$, let $[\, ]$ be a horizontal-tribracket on $X$ and $\langle\, \rangle$
the corresponding vertical-tribracket of $[\,]$. Then for any $a,b,c,d\in X$, we have
\begin{itemize}
\item[(1)]
\begin{eqnarray*}
{}\big[b,d,[a,b,c]\big] & = & \big[\langle a,b,d\rangle, d, [a,\langle a,b,d\rangle, c]\big]\\
& = &\big[c,[a,b,c],[a,\langle a,b,d\rangle, c]\big],\\
\end{eqnarray*}
\item[(2)]
\begin{eqnarray*}
{}\big[a,\langle a,b,d\rangle, c\big] & = & \big\langle c,[a,b,c], [b,d,[a,b,c]]\big\rangle\\
& = & \big\langle \langle a,b,d\rangle, d,[b,d,[a,b,c]]\big\rangle.
\end{eqnarray*}
\end{itemize}
\end{lemma}
\begin{proof}
(1) \ We have
\[
\begin{array}{ll}
\big[c,[a,b,c],[a,\langle a,b,d\rangle, c]\big]\\[3pt]
= \big[\langle a,b,d\rangle, [a,b,\langle a,b,d\rangle], [a,\langle a,b,d\rangle, c]\big]\\[3pt]
= \big[\langle a,b,d\rangle, d, [a,\langle a,b,d\rangle, c]\big],\\
\end{array}
\]
where the first equality follows from the second equality of ($\mathcal{H}$2) of Definition~\ref{def:horizontal} and the second equality follows from (2) of Lemma~\ref{lem:tribrackets1}.
We leave the proof of the other equality of (1) to the reader, see also Figure~\ref{tribracketsfomula2}.
(2) \ We have
\[
\begin{array}{ll}
\big\langle \langle a,b,d\rangle, d,[b,d,[a,b,c]]\big\rangle\\[3pt]
= \big\langle \langle a,b,\langle b,d,[b,d,[a,b,c]]\rangle \rangle, \langle b,d,[b,d,[a,b,c]]\rangle,[b,d,[a,b,c]] \big\rangle\\[3pt]
=\big\langle \langle a,b,[a,b,c]\rangle,[a,b,c], [b,d,[a,b,c]]\big\rangle\\[3pt]
=\big\langle c,[a,b,c], [b,d,[a,b,c]]\big\rangle,\\
\end{array}
\]
where the first equality follows from (ii) of ($\mathcal{V}$2) of Definition~\ref{s-ternary operation} and the second and third equalities follow from (1) of Lemma~\ref{lem:tribrackets1}. We leave the proof of the other equality of (2) to the reader, see also Figure~\ref{tribracketsfomula2}.
\end{proof}
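Both identities of the lemma can likewise be confirmed numerically for the illustrative linear tribracket $[a,b,c]=-a+2b+3c$ on $\mathbb Z_5$ (whose corresponding vertical-tribracket is $\langle a,b,c\rangle=2a+b+2c$); this is only a sanity check on one sample, not a proof:

```python
from itertools import product

N = 5
def br(a, b, c):  return (-a + 2*b + 3*c) % N   # sample horizontal-tribracket
def vbr(a, b, c): return (2*a + b + 2*c) % N    # its vertical-tribracket

for a, b, c, d in product(range(N), repeat=4):
    e = vbr(a, b, d)                 # e = <a,b,d>
    m = br(b, d, br(a, b, c))        # common value in identity (1)
    # identity (1): three expressions agree
    assert m == br(e, d, br(a, e, c)) == br(c, br(a, b, c), br(a, e, c))
    # identity (2): three expressions agree
    assert br(a, e, c) == vbr(c, br(a, b, c), m) == vbr(e, d, m)
print("Lemma identities (1) and (2) verified on Z_5")
```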
\begin{figure}
\begin{center}
\includegraphics[width=13.0cm]{RmoveIII3.pdf}
\label{tribracketsfomula2}
\end{center}
\end{figure}
\begin{remark}
In \cite{Niebrzydowski1}, Niebrzydowski defined
a knot-theoretic ternary quasigroup $(X, T)$ with the left division $\mathcal{L}$, the middle division $\mathcal{M}$ and the right division $\mathcal{R}$.
A horizontal-tribracket $[\,]$ in this paper coincides with the right division $\mathcal{R}$ in \cite{Niebrzydowski1}, and
a vertical-tribracket $\langle\, \rangle$ in this paper coincides with his tribracket $T$ in \cite{Niebrzydowski1}.
In \cite{NeedellNelson16, ChoiNeedellNelson17}, binary operations related to vertical-tribrackets are defined.
\end{remark}
\subsection{Local biquandles}
\begin{definition}\label{def:localbiquandle}
A {\it local biquandle} is a triple $(X, \{\uline{\star}_a\}_{a\in X}, \{\oline{\star}_a\}_{a\in X} )$ of a set $X$ and two families of operations $\uline{\star}_a, \oline{\star}_a: (\{a\} \times X)^2 \to X^2$
satisfying the following property:
\begin{itemize}
\item[] \hspace{-0.5cm}($\mathcal{L}$1) For any $a,b,c \in X$,
\begin{itemize}
\item[(i)] the first component of the result of $(a, b) \uline{\star}_a (a,c)$ is $c$,
\item[(ii)] the first component of the result of $(a, b) \oline{\star}_a (a,c)$ is $c$,
\item[(iii)] the second component of the result of $(a, b) \uline{\star}_a (a,c)$ coincides with that of the result of $(a, c) \oline{\star}_a (a,b)$.
\end{itemize}
\item[] \hspace{-0.5cm}($\mathcal{L}$2)
\begin{itemize}
\item[(i)] For any $a, b\in X$, the map $\uline{\star}_a(a, b) :\{a\}\times X \to \{b\}\times X$ sending $(a,c)$ to $(a,c)\uline{\star}_a(a,b)$ is bijective.
\item[(ii)]
For any $a, b\in X$, the map $\oline{\star}_a(a,b):\{a\}\times X \to \{b\}\times X$ sending $(a,c)$ to $(a,c)\oline{\star}_a(a,b)$ is bijective.
\item[(iii)]
The map $S: \bigcup_{a\in X}(\{a\} \times X)^2\to \bigcup_{d\in X} (X\times \{d\})^2$ defined by $S\big((a,b),(a,c)\big)=\big((a,c)\oline{\star}_a(a,b),(a,b)\uline{\star}_a(a,c) \big)$ is bijective.
\end{itemize}
\item[] \hspace{-0.5cm}($\mathcal{L}$3)
For any $a,b,c,d\in X$, it holds that
\begin{itemize}
\item[(i)] $\big((a,b)\uline{\star}_a(a,c)\big)\uline{\star}_c\big((a,d)\oline{\star}_a(a,c)\big)=\big((a,b)\uline{\star}_a(a,d)\big)\uline{\star}_d\big((a,c)\uline{\star}_a(a,d)\big)$,
\item[(ii)]
$\big((a,b)\oline{\star}_a(a,c)\big)\uline{\star}_c\big((a,d)\oline{\star}_a(a,c)\big)=\big( (a,b)\uline{\star}_a(a,d)\big)\oline{\star}_d\big((a,c)\uline{\star}_a(a,d)\big)$,
\item[(iii)] $\big((a,b)\oline{\star}_a(a,c)\big)\oline{\star}_c\big((a,d)\uline{\star}_a(a,c)\big)=\big((a,b)\oline{\star}_a(a,d)\big)\oline{\star}_d\big((a,c)\oline{\star}_a(a,d)\big)$.
\end{itemize}
\end{itemize}
In this paper, for simplicity, we often omit the subscript by $a$ as $\uline{\star}=\uline{\star}_a$, $\oline{\star}=\oline{\star}_a$, $\{\uline{\star}\}=\{\uline{\star}_a\}_{a\in X}$ and $\{\oline{\star}\}=\{\oline{\star}_a\}_{a\in X}$ unless it causes confusion.
\end{definition}
\noindent We note that ($\mathcal{L}$1) implies the following:
\begin{itemize}
\item[] \hspace{-0.5cm}($\mathcal{L}$1') for any $a, b\in X$, it holds that $(a,b)\uline{\star}_a (a,b) =(a,b)\oline{\star }_a (a,b)$.
\end{itemize}
The property ($\mathcal{L}$1') and the axioms ($\mathcal{L}$2) and ($\mathcal{L}$3) are analogous to the axioms of a biquandle, see \cite{FennRourkeSanderson92, KR02} for the definition of a biquandle.
\begin{proposition}\label{prop:localbqandhorizontaltqg}
\begin{itemize}
\item[(1)] Given a local biquandle $(X, \{\uline{\star}_a\}_{a\in X}, \{\oline{\star}_a\}_{a\in X} )$, we have the knot-theoretic horizontal-ternary-quasigroup $(X, [\, ])$ whose horizontal-tribracket $[\,]$ sends $(a,b,c)$ to the second component of the result of $(a, b) \uline{\star}_a (a,c)$ (or $(a, c) \oline{\star}_a (a,b)$).
\item[(2)] Given a knot-theoretic horizontal-ternary-quasigroup $(X, [\, ])$, we have the local biquandle $(X, \{\uline{\star}_a\}_{a\in X}, \{\oline{\star}_a\}_{a\in X} )$ whose operations $\uline{\star}_a$ and $\oline{\star}_a$ are defined by
\[
\begin{array}{l}
(a, b) \uline{\star}_a (a,c) = (c, [a,b,c]), \mbox{ and }\\[5pt]
(a,b) \oline{\star}_a (a,c) = (c,[a,c,b]).
\end{array}
\]
\item[(3)] The correspondences of (1) and (2) of this proposition are inverses of each other.
\end{itemize}
\end{proposition}
\begin{proof}
(1) Suppose that we have a local biquandle $(X, \{\uline{\star}_a\}_{a\in X}, \{\oline{\star}_a\}_{a\in X} )$. We show that the map $[\, ]: X^3 \to X$ sending $(a,b,c)$ to the second component of the result of $(a, b) \uline{\star}_a (a,c)$ (or $(a, c) \oline{\star}_a (a,b)$) satisfies the axioms of horizontal-tribrackets in Definition~\ref{def:horizontal}.
For $a,b,c\in X$,
since the map $\uline{\star}_a(a,b): \{a\} \times X \to \{b\} \times X $ is bijective by axiom ($\mathcal{L}$2)-(i) of Definition~\ref{def:localbiquandle}, there exists a unique $d\in X$ such that $(b,c)=(a,d) \uline{\star}_a (a,b)= (b, [a,d,b])$, that is, $[a,d,b] =c$, which verifies axiom ($\mathcal{H}$1)-(ii) of Definition~\ref{def:horizontal}. Similarly, axioms ($\mathcal{L}$2)-(ii) and ($\mathcal{L}$2)-(iii) of Definition~\ref{def:localbiquandle} yield axioms ($\mathcal{H}$1)-(i) and ($\mathcal{H}$1)-(iii) of Definition~\ref{def:horizontal}, respectively.
For $a,b,c, d \in X$, we have
\[
\begin{array}{ll}
\big([a,c,d], [c, [a,b,c], [a,c,d]]\big)\\[3pt]
=\big((a,b)\uline{\star}_a(a,c)\big)\uline{\star}_c\big((a,d)\oline{\star}_a(a,c)\big)\\[3pt]
=\big((a,b)\uline{\star}_a(a,d)\big)\uline{\star}_d\big((a,c)\uline{\star}_a(a,d)\big)\\[3pt]
=\big([a,c,d], [d,[a,b,d],[a,c,d]]\big)\\
\end{array}
\]
by the axiom ($\mathcal{L}$3)-(i) of Definition~\ref{def:localbiquandle},
\[
\begin{array}{ll}
\big([a,b,d], [b, [a,b,c], [a,b,d]]\big)\\[3pt]
=\big((a,c)\oline{\star}_a(a,b)\big)\uline{\star}_b\big((a,d)\oline{\star}_a(a,b)\big)\\[3pt]
=\big( (a,c)\uline{\star}_a(a,d)\big)\oline{\star}_d\big((a,b)\uline{\star}_a(a,d)\big)\\[3pt]
=\big([a,b,d], [d,[a,b,d],[a,c,d]]\big)\\
\end{array}
\]
by the axiom ($\mathcal{L}$3)-(ii) of Definition~\ref{def:localbiquandle}, and
\[
\begin{array}{ll}
\big([a,b,c], [c, [a,b,c], [a,c,d]]\big)\\[3pt]
=\big((a,d)\oline{\star}_a(a,c)\big)\oline{\star}_c\big((a,b)\uline{\star}_a(a,c)\big)\\[3pt]
=\big((a,d)\oline{\star}_a(a,b)\big)\oline{\star}_b\big((a,c)\oline{\star}_a(a,b)\big)\\[3pt]
=\big([a,b,c], [b,[a,b,c],[a,b,d]]\big)\\
\end{array}
\]
by the axiom ($\mathcal{L}$3)-(iii) of Definition~\ref{def:localbiquandle}.
Thus axiom ($\mathcal{H}$2) of Definition~\ref{def:horizontal} follows from the equality of the second components. We leave the proofs of (2) and (3) to the reader.
\end{proof}
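Part (2) of the proposition can also be checked mechanically on a small example. The sketch below (our illustration, not from the paper) builds the corresponding local biquandle of the linear tribracket $[a,b,c]=-a+2b+3c$ on $\mathbb Z_5$ and verifies axioms ($\mathcal{L}$1) and ($\mathcal{L}$3) exhaustively; ($\mathcal{L}$2) follows from the unique solvability in ($\mathcal{H}$1):

```python
from itertools import product

N = 5
def br(a, b, c): return (-a + 2*b + 3*c) % N   # sample horizontal-tribracket

def under(p, q):            # (a,x) understar_a (a,y) = (y, [a,x,y])
    (a, x), (a2, y) = p, q
    assert a == a2
    return (y, br(a, x, y))

def over(p, q):             # (a,x) overstar_a (a,y) = (y, [a,y,x])
    (a, x), (a2, y) = p, q
    assert a == a2
    return (y, br(a, y, x))

for a, b, c, d in product(range(N), repeat=4):
    # (L1): first components are as required; mixed second components agree
    assert under((a, b), (a, c))[0] == c == over((a, b), (a, c))[0]
    assert under((a, b), (a, c))[1] == over((a, c), (a, b))[1]
    # (L3)-(i)
    assert under(under((a, b), (a, c)), over((a, d), (a, c))) == \
           under(under((a, b), (a, d)), under((a, c), (a, d)))
    # (L3)-(ii)
    assert under(over((a, b), (a, c)), over((a, d), (a, c))) == \
           over(under((a, b), (a, d)), under((a, c), (a, d)))
    # (L3)-(iii)
    assert over(over((a, b), (a, c)), under((a, d), (a, c))) == \
           over(over((a, b), (a, d)), over((a, c), (a, d)))
print("(L1) and (L3) verified")
```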
For a local biquandle $(X, \{\uline{\star}\} , \{\oline{\star}\})$, we call $(X,[\,])$ in (1) of Proposition~\ref{prop:localbqandhorizontaltqg} the {\it corresponding knot-theoretic horizontal-ternary-quasigroup of $(X, \{\uline{\star}\} , \{\oline{\star}\})$}. For a knot-theoretic horizontal-ternary-quasigroup $(X,[\,])$, we call $(X, \{\uline{\star}\} , \{\oline{\star}\})$ in (2) of Proposition~\ref{prop:localbqandhorizontaltqg} the {\it corresponding local biquandle of $(X,[\,])$}.
\section{Local biquandle homology groups and cocycle invariants}\label{Local biquandle homology groups and cocycle invariants}
\subsection{Local biquandle homology groups}
Let $(X, \{\uline{\star}\} , \{\oline{\star}\})$ be a local biquandle, and $(X,[\,])$ the corresponding knot-theoretic horizontal-ternary-quasigroup of $(X, \{\uline{\star}\} , \{\oline{\star}\})$.
Let $n \in \mathbb Z$.
Let $C_n(X)$ be the free $\mathbb Z$-module generated by the elements of
\[
\bigcup_{a\in X} (\{a\} \times X)^n =\big\{\big( (a,b_1), (a,b_2), \ldots , (a,b_n) \big) ~|~ a, b_1, \ldots , b_n \in X \big\}
\]
if $n\geq 1$, and $C_n(X)=0$ otherwise.
We define a homomorphism $\partial_n : C_n (X) \to C_{n-1} (X)$ by
\begin{align}
&\partial_n \Big( \big( (a,b_1), \ldots , (a,b_n) \big) \Big) = \sum_{i=1}^{n} (-1)^i \big\{ \big( (a,b_1), \ldots, \widehat{(a, b_i)}, \ldots , (a,b_n) \big) \notag \\
&- \big( (a,b_1)\uline{\star} (a, b_i), \ldots, (a,b_{i-1})\uline{\star} (a, b_i), (a,b_{i+1})\oline{\star} (a, b_i) ,\ldots , (a,b_n) \oline{\star} (a, b_i)\big) \big\} \notag \\
&= \sum_{i=1}^{n} (-1)^i \big\{ \big( (a,b_1), \ldots, \widehat{(a, b_i)}, \ldots , (a,b_n) \big) \notag \\
&- \big( (b_i, [a, b_1, b_i] ), \ldots, (b_i, [a, b_{i-1}, b_i]), (b_i, [a, b_i, b_{i+1}]) ,\ldots , (b_i, [a, b_i, b_{n}])\big) \big\} \notag
\end{align}
if $n\geq 2$, and $\partial_n=0$ otherwise, where $\widehat{x}$ means that the entry $x$ is removed.
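The boundary map can be implemented directly from this formula. The following sketch (our code, not the authors') represents chains as integer-valued Counters over generators, uses the illustrative tribracket $[a,b,c]=-a+2b+3c$ on $\mathbb Z_5$, and confirms $\partial_2\circ\partial_3=0$ on every generator by brute force:

```python
from collections import Counter
from itertools import product

N = 5
def br(a, b, c): return (-a + 2*b + 3*c) % N   # sample horizontal-tribracket

def boundary(gen):
    """Boundary of a single generator ((a,b1),...,(a,bn)), as a Counter."""
    out = Counter()
    n = len(gen)
    if n < 2:                      # partial_n = 0 for n <= 1
        return out
    a = gen[0][0]
    bs = [x[1] for x in gen]
    for i in range(n):             # term i+1 of the sum, sign (-1)^(i+1)
        sign = (-1) ** (i + 1)
        out[gen[:i] + gen[i+1:]] += sign        # entry (a,b_i) deleted
        bi = bs[i]
        out[tuple((bi, br(a, bs[j], bi)) if j < i else (bi, br(a, bi, bs[j]))
                  for j in range(n) if j != i)] -= sign
    return out

def boundary_of_chain(chain):
    out = Counter()
    for g, coeff in chain.items():
        for h, c in boundary(g).items():
            out[h] += coeff * c
    return out

# partial_2 o partial_3 = 0 on every generator of C_3(Z_5)
for a, b1, b2, b3 in product(range(N), repeat=4):
    dd = boundary_of_chain(boundary(((a, b1), (a, b2), (a, b3))))
    assert all(v == 0 for v in dd.values())
print("partial o partial = 0 verified in degree 3")
```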
\begin{lemma}\label{lem:chain complex}
$C_*(X)=\{C_n(X), \partial_n\}_{n\in \mathbb Z}$ is a chain complex, i.e., for any $n \in \mathbb Z$, $\partial_n\circ \partial_{n+1} =0$ holds.
\end{lemma}
\begin{proof}
This is shown by a direct calculation as in the case of biquandle chain complexes, see \cite{CarterElhamdadiSaito04, CenicerosElhamdadiGreenNelson14}.
\end{proof}
Let $D_n(X)$ be a submodule of $C_n(X)$ which is generated by the elements of
\[
\Big\{\big( (a,b_1), \ldots , (a,b_n) \big) \in \bigcup_{a\in X} (\{a\} \times X)^n ~\Big|~ \mbox{$b_i =b_{i+1}$ for some $i\in \{1, \ldots , n-1 \}$} \Big\}.
\]
\begin{lemma}
$D_*(X)=\{D_n(X), \partial_n\}_{n\in \mathbb Z}$ is a subchain complex of $C_*(X)$, i.e., for any $n \in \mathbb Z$, $\partial_n (D_n(X)) \subset D_{n-1} (X)$ holds.
\end{lemma}
\begin{proof}
This is shown by a direct calculation as in the case of biquandle subchain complexes, see \cite{CarterElhamdadiSaito04, CenicerosElhamdadiGreenNelson14}.
\end{proof}
\noindent Therefore the chain complex $$C_*^{\rm LB} (X)=\{C_n^{\rm LB}(X):=C_n(X)/D_n(X), \partial_n^{\rm LB}:=\partial_n\}_{n\in \mathbb Z}$$ is induced.
We call the homology group $H_n^{\rm LB} (X)$ of $C_*^{\rm LB} (X)$ the \textit{$n$th local biquandle homology group} of $(X, \{\uline{\star}\} , \{\oline{\star}\})$.
For an abelian group $A$, we define the chain and cochain complexes by
\[
\begin{array}{l}
C_n^{\rm LB}(X; A)=C_n^{\rm LB}(X) \otimes A, \quad \partial_n^{\rm LB} \otimes {\rm id} \mbox{ and }\\[5pt]
C_{\rm LB}^n(X; A) ={\rm Hom}(C_n^{\rm LB}(X); A), \quad \delta^n_{\rm LB} \mbox{ s.t. }\delta^n_{\rm LB}(f)=f \circ \partial_{n+1}^{\rm LB}.
\end{array}
\]
Let $C_\ast^{\rm LB}(X; A)=\{C_n^{\rm LB}(X; A), \partial_n^{\rm LB}\otimes {\rm id}\}_{n\in \mathbb Z}$ and $C_{\rm LB}^\ast(X; A)=\{C_{\rm LB}^n(X; A), \delta^n_{\rm LB}\}_{n\in \mathbb Z}$. The \textit{$n$th homology group} $H_n^{\rm LB}(X; A)$ and \textit{$n$th cohomology group} $H^n_{\rm LB}(X; A)$ of $(X, \{\uline{\star}\} , \{\oline{\star}\})$ with coefficient group $A$ are defined by
\[
H_n^{\rm LB}(X; A)=H_n(C_\ast^{\rm LB}(X; A)) \qquad {\rm and} \qquad H_{\rm LB}^n(X; A)=H^n(C^\ast_{\rm LB}(X; A)).
\]
Note that we omit the coefficient group $A$ if $A=\mathbb Z$ as usual.
We remark that these definitions are analogous to those in biquandle (co)homology theory.
\subsection{Semi-arc colorings of link diagrams, cocycle invariants}\label{subsection:linkinvariant}
Let $(X, \{\uline{\star}\} , \{\oline{\star}\})$ be a local biquandle, and $(X,[\,])$ the corresponding knot-theoretic horizontal-ternary-quasigroup of $(X, \{\uline{\star}\} , \{\oline{\star}\})$. Let $D$ be a connected diagram of a link.
\begin{definition}
A \textit{semi-arc $X^2$-coloring} of $D$ is a map $C: \mathcal{SA}(D) \to X^2$ satisfying the following condition:
\begin{itemize}
\item For a crossing composed of under-semi-arcs $u_1, u_2$ and over-semi-arcs $o_1, o_2$ as depicted in Figure~\ref{coloring1}, let $C(u_1)=(a_1,b), C(o_1)=(a_2, c)$. Then
\begin{itemize}
\item $a_1=a_2$,
\item $C(u_2) = C(u_1) \uline{\star} C(o_1)= (a,b) \uline{\star} (a,c) = (c, [a,b,c])$, and
\item $C(o_2) = C(o_1) \oline{\star} C(u_1)=(a,c) \oline{\star} (a,b) = (b, [a,b,c])$
\end{itemize}
hold, where $a= a_1 =a_2$, see Figure~\ref{coloring1}.
\begin{figure}
\begin{center}
\includegraphics[clip,width=10.0cm]{coloring1.pdf}
\label{coloring1}
\end{center}
\end{figure}
\end{itemize}
We denote by ${\rm Col}_{X^2}^{\rm SA} (D) $ the set of semi-arc $X^2$-colorings of $D$. We call $C(x)$ for a semi-arc $x$ the {\it color} of $x$.
We call a pair $(D,C)$ of a diagram $D$ and a semi-arc $X^2$-coloring $C$ of $D$ a {\it semi-arc $X^2$-colored diagram}.
\end{definition}
\begin{remark}\label{Rem:coloringtranslation}
As shown in Figure~\ref{coloring5}, a semi-arc $X^2$-coloring $C$ of a connected diagram $D$ induces a semi-arc $X$-coloring $\widetilde{C}$ of the $2$-parallel $\widetilde{D}$, that is, a map $\widetilde{C}: \mathcal{SA}(\widetilde{D}) \to X$ is obtained from $C$ by setting $\widetilde{C}(x^{(+)})=a$ and $\widetilde{C}(x^{(-)})=b$ for each semi-arc $x$ of $D$ such that $C(x)=(a,b)$.
Moreover, the semi-arc $X$-coloring $\widetilde{C}$ of $\widetilde{D}$ induces a region $X$-coloring $\overline{C}$ of $D$, that is, a map $\overline{C}: \mathcal{R}(D) \to X$ is obtained from $\widetilde{C}$ by setting $\overline{C}(r) = \widetilde{C}(x^{(\varepsilon)})$ for a region $r\in \mathcal{R}(D)$ and a semi-arc $x^{(\varepsilon)} \in \mathcal{SA}(\widetilde{D})$ that is on the boundary of $r$, where this construction is well-defined only when we use a connected diagram.
We also note that the induced region coloring satisfies the condition depicted in Figures~\ref{coloring2} and \ref{coloring3}, see also Definition~\ref{def:regioncoloring1}.
Thus we have a map $T: {\rm Col}_{X^2}^{\rm SA} (D) \to {\rm Col}_{X}^{\rm R}(D); C \mapsto \overline{C}$, which we call the {\it translation map from ${\rm Col}_{X^2}^{\rm SA} (D) $ to ${\rm Col}_{X}^{\rm R}(D)$}, where ${\rm Col}_{X}^{\rm R}(D)$ means the set of region $X$-colorings of $D$ that satisfy the condition depicted in Figures~\ref{coloring2} and \ref{coloring3}.
For a semi-arc $X^2$-coloring $C$, we call $\overline{C}=T(C)$ the {\it corresponding region $X$-coloring} of $C$ through $T$.
Since the translation map $T$ is invertible, we have the inverse translation map $T^{-1}: {\rm Col}_{X}^{\rm R}(D) \to {\rm Col}_{X^2}^{\rm SA} (D)$, and
for a region $X$-coloring $C$, we call $T^{-1} (C)$ the {\it corresponding semi-arc $X^2$-coloring} of $C$ through $T^{-1}$.
\begin{figure}
\begin{center}
\includegraphics[clip,width=12cm]{coloring5.pdf}
\label{coloring5}
\end{center}
\end{figure}
\end{remark}
\begin{proposition}\label{prop:coloring1}
Let $D$ and $D'$ be connected diagrams of links.
If $D$ and $D'$ represent the same link, then there exists a bijection between
${\rm Col}_{X^2}^{\rm SA} (D) $ and ${\rm Col}_{X^2}^{\rm SA} (D') $.
\end{proposition}
\begin{proof}
Let $D$ and $D'$ be connected diagrams such that $D'$ is obtained from $D$ by a single Reidemeister move.
Let $E$ be a $2$-disk in which the move is applied.
Let $C$ be a semi-arc $X^2$-coloring of $D$.
We define a semi-arc $X^2$-coloring $C'$ of $D'$, corresponding to $C$, by $C'(e) = C(e)$ for each
semi-arc $e$ outside $E$. The colors, with respect to $C'$, of the semi-arcs appearing in $E$
are then uniquely determined, see Figure~\ref{RmoveII}. Thus we have a bijection ${\rm Col}_{X^2}^{\rm SA} (D) \to {\rm Col}_{X^2}^{\rm SA} (D') $ that maps $C$ to $C'$.
\begin{figure}
\begin{center}
\includegraphics[clip,width=12cm]{RmoveII.pdf}
\label{RmoveII}
\end{center}
\end{figure}
\end{proof}
Next, we show how to obtain a cocycle invariant by using the semi-arc $X^2$-colorings of a connected diagram.
Let $C$ be a semi-arc $X^2$-coloring of $D$.
We define the local chain
$w(D, C; \chi) \in C^{\rm LB}_2 (X)$ at each crossing $\chi$ by
$$w(D, C; \chi) ={\rm sign}(\chi) \big((a,b), (a,c)\big)$$ when $C(u_1)=(a,b)$ and $C(o_1)=(a,c)$, where $u_1$ and $o_1$ are the under-semi-arc and the over-semi-arc as depicted in Figure~\ref{coloring1}.
We define a chain by
$$\displaystyle W(D, C)=\sum_{\chi \in \{\mbox{\small crossings of $D$}\}} w(D, C; \chi) \in C^{\rm LB}_2 (X).$$
\begin{lemma}
The chain $W(D, C)$ is a $2$-cycle of $C_\ast ^{\rm LB}(X)$.
\end{lemma}
\begin{proof}
This can be shown similarly as in the case of biquandles.
More precisely, for each positive crossing $\chi$ as depicted in the left of Figure~\ref{coloring1}, we have
\begin{align*}
\partial_2^{\rm LB} \Big(w(D, C; \chi)\Big) &=\partial_2^{\rm LB} \Big(\big((a, b), (a, c)\big)\Big) \notag\\
&=-(a, c)+(a, c)\oline{\star}(a, b)+(a, b)-(a, b)\uline{\star}(a, c) \notag\\
&=-(a, c)+(b, [a, b, c])+(a, b)-(c, [a, b, c]),
\end{align*}
the positive (resp. negative) terms of which can be assigned to the incoming (resp. outgoing) semi-arcs around $\chi$ so that, for each semi-arc around $\chi$, the color of the semi-arc coincides with the assigned pair.
The same holds for a negative crossing.
This ensures that the two ends of each semi-arc have the same pair of elements of $X$, but with opposite signs. Thus, we have $\partial_2^{\rm LB}(W(D, C))=0$.
\end{proof}
Let $A$ be an abelian group. For a $2$-cocycle $\theta \in C^2_{\rm LB}(X; A)$, we define
\[
\begin{array}{l}
\mathcal{H}(D)=\{[W(D, C)] \in H^{\rm LB}_2(X) \ | \ C \in {\rm Col}_{X^2}^{\rm SA} (D) \}, \mbox{ and} \\[5pt]
\Phi_{\theta}(D)=\{\theta(W(D, C)) \in A \ | \ C \in {\rm Col}_{X^2}^{\rm SA} (D) \}
\end{array}
\]
as multisets. Then we have the following theorem:
\begin{theorem}
Let $L$ be the link represented by the connected diagram $D$. Then $\mathcal{H}(D)$ and $\Phi_{\theta}(D)$ are invariants of $L$.
\end{theorem}
\begin{proof}
It is sufficient to show that $[W(D, C)] \in H^{\rm LB}_2(X)$ is unchanged under each semi-arc $X^2$-colored Reidemeister move. Here we show this property for the case of a Reidemeister move of type III. We leave the proof of the other cases to the reader.
\begin{figure}
\begin{center}
\includegraphics[clip,width=11cm]{RmoveIII2.pdf}
\label{RmoveIII2}
\end{center}
\end{figure}
Let $(D, C)$ and $(D', C')$ be semi-arc $X^2$-colored, connected diagrams of $L$ that differ by the Reidemeister move of type III shown in Figure~\ref{RmoveIII2}. We then have
\begin{eqnarray*}
&&W(D, C) \\
&&=\big((a, b), (a, c)\big)+\big((a, c), (a, d)\big)+\big((c, [a, b, c]), (c, [a, c, d])\big)+\mathcal{C}\\
&&=\big((a, b), (a, c)\big)+\big((a, c), (a, d)\big)+\big((a, b)\uline{\star}(a, c), (a, d)\oline{\star}(a, c)\big)+\mathcal{C},
\end{eqnarray*}
\begin{eqnarray*}
&&W(D', C')\\
&&=\big((a, b), (a, d)\big)+\big((b, [a, b, c]), (b, [a, b, d])\big)+\big((d, [a, b, d]), (d, [a, c, d])\big)+\mathcal{C}\\
&&=\big((a, b), (a, d)\big)+\big((a, c)\oline{\star}(a, b), (a, d)\oline{\star}(a, b)\big)+\big((a, b)\uline{\star}(a, d), (a, c)\uline{\star}(a, d)\big)+\mathcal{C},
\end{eqnarray*}
for some chain $\mathcal{C}$ in $C_2^{\rm LB}(X)$. On the other hand,
\begin{eqnarray*}
\partial_3^{\rm LB}\Big(\big((a, b), (a, c), (a, d)\big)\Big)&=&-\big((a, c), (a, d)\big)+\big((a, c)\oline{\star}(a, b), (a, d)\oline{\star}(a, b)\big)\\
&&+\big((a, b), (a, d)\big)-\big((a, b)\uline{\star}(a, c), (a, d)\oline{\star}(a, c)\big)\\
&&-\big((a, b), (a, c)\big)+\big((a, b)\uline{\star}(a, d), (a, c)\uline{\star}(a, d)\big),
\end{eqnarray*}
which implies that
$$W(D', C')-W(D, C)=\partial_3^{\rm LB}\Big(\big((a, b), (a, c), (a, d)\big)\Big) \in {\rm Im}\, \partial_3^{\rm LB}.$$ Therefore we have $[W(D, C)]=[W(D', C')] \in H^{\rm LB}_2(X)$.
\end{proof}
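The cancellation in this proof is purely formal term-matching, which can be double-checked mechanically. The following sketch (our own illustration, not part of the paper's formalism) encodes $W(D,C)$, $W(D',C')$ and $\partial_3^{\rm LB}\big(\big((a,b),(a,c),(a,d)\big)\big)$ as signed formal sums, using the rules $(a,b)\uline{\star}(a,c)=(c,[a,b,c])$ and $(a,c)\oline{\star}(a,b)=(b,[a,b,c])$; the function names and the linear tribracket on $\mathbb{Z}/5$ are assumptions made only for illustration.

```python
from itertools import product

N = 5

def tb(a, b, c):
    # Hypothetical linear stand-in for the horizontal tribracket [a,b,c] on Z/5.
    # The cancellation verified below is purely formal, so any map X^3 -> X works.
    return (b + c - a) % N

def ustar(x, y):
    # (a,b) understar (a,c) = (c, [a,b,c])
    a, b = x
    c = y[1]
    return (c, tb(a, b, c))

def ostar(x, y):
    # (a,c) overstar (a,b) = (b, [a,b,c])
    a, c = x
    b = y[1]
    return (b, tb(a, b, c))

def add(chain, term, coef=1):
    # accumulate a signed formal sum of generators, dropping zero coefficients
    chain[term] = chain.get(term, 0) + coef
    if chain[term] == 0:
        del chain[term]

def riii_identity(a, b, c, d):
    x, y, z = (a, b), (a, c), (a, d)
    W, Wp, bd = {}, {}, {}
    for t in [(x, y), (y, z), (ustar(x, y), ostar(z, y))]:          # W(D, C)
        add(W, t)
    for t in [(x, z), (ostar(y, x), ostar(z, x)),
              (ustar(x, z), ustar(y, z))]:                          # W(D', C')
        add(Wp, t)
    # boundary of the 3-chain ((a,b),(a,c),(a,d)), term by term
    add(bd, (y, z), -1)
    add(bd, (ostar(y, x), ostar(z, x)), 1)
    add(bd, (x, z), 1)
    add(bd, (ustar(x, y), ostar(z, y)), -1)
    add(bd, (x, y), -1)
    add(bd, (ustar(x, z), ustar(y, z)), 1)
    diff = dict(Wp)
    for t, k in W.items():
        add(diff, t, -k)
    return diff == bd

assert all(riii_identity(*q) for q in product(range(N), repeat=4))
```

Since both sides are built from the same six star-expressions, the check succeeds generator by generator, independently of the particular tribracket chosen.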
\begin{proposition}
For cohomologous $2$-cocycles $\theta, \theta' \in C^2_{\rm LB}(X; A)$, we have $\Phi_\theta(D)=\Phi_{\theta'}(D)$.
\end{proposition}
\begin{proof}
Since $\theta$ and $\theta'$ are cohomologous, we have $\theta - \theta' \in {\rm Im} \ \delta^1$, and hence, there exists a homomorphism $\phi : C_1^{\rm LB}(X) \to A$ such that $\theta - \theta'=\delta^1_{\rm LB}(\phi )=\phi \circ \partial_2^{\rm LB}$. Then for each semi-arc $X^2$-coloring $C$ of $D$, we have
\begin{align*}
\theta(W(D, C)) - \theta'(W(D, C))&=(\theta - \theta')(W(D, C))\\
&=(\phi \circ \partial_2^{\rm LB}) (W(D, C))\\
&=\phi \big( \partial_2^{\rm LB} (W(D, C))\big)\\
&=0.
\end{align*}
Thus we have $\Phi_\theta(D)=\Phi_{\theta'}(D)$.
\end{proof}
\subsection{Semi-sheet colorings of surface-link diagrams, cocycle invariants}\label{subsection:surfaceinvariant}
For surface-links, we can also define semi-sheet $X^2$-colorings of diagrams and the corresponding cocycle invariants.
In this subsection, we briefly give these definitions and state the analogous properties.
Let $(X, \{\uline{\star}\} , \{\oline{\star}\})$ be a local biquandle, and $(X,[\,])$ the corresponding knot-theoretic horizontal-ternary-quasigroup of $(X, \{\uline{\star}\} , \{\oline{\star}\})$. Let $D$ be a connected diagram of a surface-link $F$.
\begin{definition}
A \textit{semi-sheet $X^2$-coloring} of $D$ is a map $C: \mathcal{SS}(D) \to X^2$ satisfying the following condition:
\begin{itemize}
\item For a double point curve composed of under-semi-sheets $u_1, u_2$ and over-semi-sheets $o_1, o_2$ as depicted in Figure~\ref{doublepoint1}, let $C(u_1)=(a_1,b), C(o_1)=(a_2, c)$. Then
\begin{itemize}
\item $a_1=a_2$,
\item $C(u_2) = C(u_1) \uline{\star} C(o_1)= (a,b) \uline{\star} (a,c) = (c, [a,b,c])$, and
\item $C(o_2) = C(o_1) \oline{\star} C(u_1)=(a,c) \oline{\star} (a,b) = (b, [a,b,c])$
\end{itemize}
hold, where $a= a_1 =a_2$, see Figure~\ref{doublepoint1}.
\begin{figure}[ht]
\begin{center}
\includegraphics[clip,width=5.0cm]{doublepoint.pdf}
\caption{The coloring condition at a double point curve}
\label{doublepoint1}
\end{center}
\end{figure}
\end{itemize}
We denote by ${\rm Col}_{X^2}^{\rm SS} (D) $ the set of semi-sheet $X^2$-colorings of $D$. We call $C(x)$ for a semi-sheet $x$ the {\it color} of $x$. We call a pair $(D,C)$ of a diagram $D$ and a semi-sheet $X^2$-coloring $C$ of $D$ a {\it semi-sheet $X^2$-colored diagram}.
\end{definition}
\begin{remark}\label{Rem:coloringtranslation2}
As shown in Remark~\ref{Rem:coloringtranslation} and Figure~\ref{doublepoint2}, we have the bijective {\it translation maps} $T: {\rm Col}_{X^2}^{\rm SS} (D) \to {\rm Col}_{X}^{\rm R}(D); C \mapsto \overline{C}$ and $T^{-1}: {\rm Col}_{X}^{\rm R}(D) \to {\rm Col}_{X^2}^{\rm SS} (D); \overline{C}\mapsto C $, where ${\rm Col}_{X}^{\rm R}(D)$ means the set of region colorings of $D$ that satisfy the condition depicted in the right of Figure~\ref{doublepoint2} and Figure~\ref{doublepoint3}.
For a semi-sheet $X^2$-coloring $C$, we call $\overline{C}=T (C)$ the {\it corresponding region $X$-coloring } of $C$ through $T$.
For a region $X$-coloring $C$, we call $\overline{C}=T^{-1} (C)$ the {\it corresponding semi-sheet $X^2$-coloring } of $C$ through $T^{-1}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[clip,width=9.0cm]{doublepoint2.pdf}
\caption{The correspondence between semi-sheet $X^2$-colorings and region $X$-colorings}
\label{doublepoint2}
\end{center}
\end{figure}
\end{remark}
\begin{proposition}\label{prop:coloring2}
Let $D$ and $D'$ be connected diagrams of surface-links.
If $D$ and $D'$ represent the same surface-link, then there exists a bijection between
${\rm Col}_{X^2}^{\rm SS} (D) $ and ${\rm Col}_{X^2}^{\rm SS} (D') $.
\end{proposition}
Next, we show how to obtain a cocycle invariant by using semi-sheet $X^2$-colorings of a connected diagram.
Let $C$ be a semi-sheet $X^2$-coloring of $D$.
We define the local chain
$w(D, C; \tau) \in C^{\rm LB}_3 (X)$ at each triple point $\tau$ by
$$w(D, C; \tau) ={\rm sign}(\tau) \big((a,b), (a,c), (a,d)\big)$$ when $C(b_1)=(a,b), C(m_1)=(a,c)$ and $C(t_1)=(a,d)$, where $b_1$, $m_1$ and $t_1$ are the bottom-, middle- and top-semi-sheets at $\tau$, respectively, as depicted in Figure~\ref{triplepoint1}.
\begin{figure}[ht]
\begin{center}
\includegraphics[clip,width=10cm]{triplepoint.pdf}
\caption{The bottom-, middle- and top-semi-sheets at a triple point}
\label{triplepoint1}
\end{center}
\end{figure}
Let $A$ be an abelian group. We define a chain by
$$\displaystyle W(D, C)=\sum_{\tau \in \{\mbox{\small triple points of $D$}\}} w(D, C; \tau) \in C^{\rm LB}_3 (X).$$
\begin{lemma}
The chain $W(D, C)$ is a $3$-cycle of $C_* ^{\rm LB}(X)$.
\end{lemma}
For a $3$-cocycle $\theta \in C^3_{\rm LB}(X;A)$, we define
\[
\begin{array}{l}
\mathcal{H}(D)=\{[W(D, C)] \in H^{\rm LB}_3(X) \ | \ C \in {\rm Col}_{X^2}^{\rm SS} (D) \}, \mbox{ and} \\[5pt]
\Phi_{\theta}(D)=\{\theta(W(D, C)) \in A \ | \ C \in {\rm Col}_{X^2}^{\rm SS} (D) \}
\end{array}
\]
as multisets. Then we have the following theorem:
\begin{theorem}
$\mathcal{H}(D)$ and $\Phi_{\theta}(D)$ are invariants of $F$.
\end{theorem}
\begin{proposition}
For cohomologous $3$-cocycles $\theta, \theta' \in C^3_{\rm LB}(X; A)$, we have $\Phi_\theta(D)=\Phi_{\theta'}(D)$.
\end{proposition}
\section{Niebrzydowski's work}
\label{Niebrzydowski's work}
In \cite{Niebrzydowski1} (see also \cite{Niebrzydowski2}), Niebrzydowski studied knot-theoretic vertical-tribrackets $\langle\, \rangle$ (under some conditions) to define region colorings of link diagrams, and he introduced a homology theory for knot-theoretic ternary quasigroups.
In this section, we review his idea with our notation.
\subsection{Homology groups}\label{subsec:NHom}
Let $(X, \langle\, \rangle )$ be a knot-theoretic vertical-ternary-quasigroup.
Let $n \in \mathbb Z$.
Let $C_n^{\rm Nie}(X)$ be the free $\mathbb Z$-module generated by the elements of $X^{n+2}$
if $n\geq 0$, and $C_n^{\rm Nie}(X)=0$ otherwise.
We define a homomorphism $\partial_n^{\rm Nie} : C_n^{\rm Nie} (X) \to C_{n-1}^{\rm Nie} (X)$ by
\begin{align}
&\partial_n^{\rm Nie} \big( ( a_0, a_1, \ldots , a_{n+1} ) \big) \notag \\
&= \sum_{i=0}^{n} (-1)^i \Big\{ \big( y_{(i,1)}, y_{(i,2)}, \ldots , y_{(i,i)}, a_{i+1}, a_{i+2}, \ldots , a_{n+1} \big)\notag \\
&\hspace{2cm}- \big( a_0, a_1, \ldots , a_i, y_{(i,i+1)},y_{(i,i+2)},\ldots , y_{(i,n)} \big) \Big\} \notag
\end{align}
if $n> 0$, and $\partial_n^{\rm Nie}=0$ otherwise, where
\[
y_{(i,j)} = \left\{
\begin{array}{ll}
\langle a_{j-1}, a_j, y_{(i,j+1)}\rangle & (j<i),\\[5pt]
\langle a_{j-1}, a_j, a_{j+1} \rangle & (j=i, i+1),\\[5pt]
\langle y_{(i,j-1)}, a_j, a_{j+1} \rangle & (j>i+1).
\end{array}
\right.
\]
\begin{example} For $a,b,c,d \in X$,
\[
\begin{array}{rcl}
\partial_1^{\rm Nie}\big((a,b,c)\big) &=& (-1)^0\{ (b,c)-(a, \langle a,b,c\rangle)\}+(-1)^1\{ (\langle a,b,c\rangle,c)-(a, b)\}\\[3pt]
&=&(b,c)-(a, \langle a,b,c\rangle)-(\langle a,b,c\rangle,c)+(a, b), \\[3pt]
\partial_2^{\rm Nie}\big((a,b,c,d)\big) &=& (-1)^0\{ (b,c,d)-(a, \langle a,b,c\rangle, \langle \langle a,b,c\rangle ,c,d \rangle)\}\\[3pt]
&&+(-1)^1\{ (\langle a,b,c\rangle,c,d)-(a, b, \langle b,c,d\rangle )\}\\[3pt]
&&+(-1)^2\{(\langle a,b , \langle b,c,d \rangle \rangle, \langle b,c,d \rangle, d)-(a,b,c)\} \\[3pt]
&=& (b,c,d)-(a, \langle a,b,c\rangle, \langle \langle a,b,c\rangle ,c,d \rangle)- (\langle a,b,c\rangle,c,d)\\[3pt]
&&+(a, b, \langle b,c,d\rangle )+(\langle a,b , \langle b,c,d \rangle \rangle, \langle b,c,d \rangle, d)-(a,b,c).\\
\end{array}
\]
\end{example}
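Since $\partial_n^{\rm Nie}$ is defined by a somewhat involved recursion, a small computational sketch may help. The code below (our own illustration, not part of the paper) implements $y_{(i,j)}$ and $\partial_n^{\rm Nie}$ literally; the linear Dehn-type vertical tribracket $\langle a,b,c\rangle = b+c-a$ on $\mathbb{Z}/5$ and all function names are assumptions made for illustration. It checks $\partial_1^{\rm Nie}\circ\partial_2^{\rm Nie}=0$ on every generator of $C_2^{\rm Nie}(X)$, matching the example above.

```python
from itertools import product

N = 5

def vt(a, b, c):
    # illustrative linear (Dehn-type) vertical tribracket <a,b,c> = b + c - a on Z/5
    return (b + c - a) % N

def y_val(t, i, j):
    # the recursively defined y_{(i,j)} for a generator t = (a_0, ..., a_{n+1})
    if j == i or j == i + 1:
        return vt(t[j-1], t[j], t[j+1])
    if j < i:
        return vt(t[j-1], t[j], y_val(t, i, j+1))
    return vt(y_val(t, i, j-1), t[j], t[j+1])

def add(chain, term, coef=1):
    # accumulate a signed formal sum of generators, dropping zero coefficients
    chain[term] = chain.get(term, 0) + coef
    if chain[term] == 0:
        del chain[term]

def boundary(t):
    # partial_n^Nie applied to a single generator t in X^{n+2}
    n = len(t) - 2
    ch = {}
    for i in range(n + 1):
        first = tuple(y_val(t, i, j) for j in range(1, i+1)) + t[i+1:]
        second = t[:i+1] + tuple(y_val(t, i, j) for j in range(i+1, n+1))
        add(ch, first, (-1) ** i)
        add(ch, second, -((-1) ** i))
    return ch

# partial_1 after partial_2 vanishes on every generator of C_2^Nie(Z/5)
for t in product(range(N), repeat=4):
    total = {}
    for s, k in boundary(t).items():
        for u, m in boundary(s).items():
            add(total, u, k * m)
    assert total == {}
```

For instance, `boundary((0, 1, 2))` reproduces the four-term expansion of $\partial_1^{\rm Nie}\big((a,b,c)\big)$ in the example with $a,b,c = 0,1,2$.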
\begin{remark}
We modified the definitions of the chain group $C_{-1}^{\rm Nie}(X)$ and the boundary map $\partial_0^{\rm Nie}$: Niebrzydowski defined $C_{-1}^{\rm Nie}(X)$ to be the free $\mathbb Z$-module generated by the elements of $X$, and
$\partial_0^{\rm Nie}$ by $\partial_0^{\rm Nie}\big((a_0,a_1)\big) = (a_1)-(a_0)$. This modification does not affect the $2$- or $3$-cocycle conditions or his cocycle invariants defined in Subsections~\ref{subsec: Region colorings of link diagrams, cocycle invariants} and \ref{subsec: Region colorings of surface-link diagrams, cocycle invariants}.
\end{remark}
\begin{lemma}{\rm (\cite{Niebrzydowski1})}
$C_*^{\rm Nie}(X)=\{C_n^{\rm Nie}(X), \partial_n^{\rm Nie}\}_{n\in \mathbb Z}$ is a chain complex, i.e., for any $n \in \mathbb Z$, $\partial_n^{\rm Nie}\circ \partial_{n+1}^{\rm Nie} =0$ holds.
\end{lemma}
Let $D_n^{\rm Nie}(X)$ be the submodule of $C_n^{\rm Nie}(X)$ that is generated by the elements of
\[
\big\{ (a_0, a_1,\ldots , a_{n+1}) \in X^{n+2} ~|~ \mbox{ $\langle a_{j-1}, a_j, a_{j+1} \rangle = a_j$ for some $j\in \{1, \ldots , n \} $ } \big\}.
\]
\begin{lemma}{\rm (\cite{Niebrzydowski1})} \label{lemma:degenerate(N)}
$D_*^{\rm Nie}(X)=\{D_n^{\rm Nie}(X), \partial_n^{\rm Nie}\}_{n\in \mathbb Z}$ is a subchain complex of $C_*^{\rm Nie}(X)$, i.e., for any $n \in \mathbb Z$, $\partial_n^{\rm Nie} (D_n^{\rm Nie}(X)) \subset D_{n-1}^{\rm Nie} (X)$ holds.
\end{lemma}
\noindent Therefore the quotient chain complex
$$C_*^{\rm N} (X)=\{C_n^{\rm N}(X):=C_n^{\rm Nie}(X)/D_n^{\rm Nie}(X), \partial_n^{\rm N} := \partial_n^{\rm Nie} \}_{n\in \mathbb Z}$$
is induced.
The homology group $H_n^{\rm N} (X)$ of $C_*^{\rm N} (X)$ is called the \textit{$n$th knot-theoretic ternary quasigroup homology} of $(X, \langle\, \rangle )$.
For an abelian group $A$, we define the chain and cochain complexes by
\[
\begin{array}{l}
C_n^{\rm N} (X; A)=C_n^{\rm N}(X) \otimes A, \quad \partial_n^{\rm N} \otimes {\rm id} \mbox{ and }\\[5pt]
C_{\rm N}^n(X; A) ={\rm Hom}(C_n^{\rm N}(X), A), \quad \delta^n_{\rm N} \mbox{ s.t. }\delta^n_{\rm N}(f)=f \circ \partial_{n+1}^{\rm N}.
\end{array}
\]
Let $C_\ast^{\rm N}(X; A)=\{C_n^{\rm N}(X; A), \partial_n^{\rm N}\otimes {\rm id}\}_{n\in \mathbb Z}$ and $C_{\rm N}^\ast(X; A)=\{C_{\rm N}^n(X; A), \delta^n_{\rm N}\}_{n\in \mathbb Z}$.
The $n$th homology group $H_n^{\rm N}(X; A)$ and $n$th cohomology group $H^n_{\rm N}(X; A)$ of $(X, \langle\, \rangle )$ with coefficient group $A$ are defined by
\[
H_n^{\rm N}(X; A)=H_n(C_\ast^{\rm N}(X; A)) \qquad {\rm and} \qquad H_{\rm N}^n(X; A)=H^n(C^\ast_{\rm N}(X; A)).
\]
Note that we omit the coefficient group $A$ if $A=\mathbb Z$ as usual.
\subsection{Region colorings of link diagrams, cocycle invariants}\label{subsec: Region colorings of link diagrams, cocycle invariants}
Let $(X, \langle\, \rangle )$ be a knot-theoretic vertical-ternary-quasigroup.
Let $D$ be a diagram of a link $L$.
\begin{definition}\label{def:regioncoloring1}
A \textit{region $X$-coloring} of $D$ is a map $C: \mathcal{R}(D) \to X$ satisfying the following condition:
\begin{itemize}
\item For a crossing with regions $r_1$, $r_2$, $r_3$ and $r_4$ near it as depicted in Figure~\ref{coloring3}, let $C(r_1)=a$, $C(r_2)=b$ and $C(r_3)= c$. Then $C(r_4) = \langle a,b,c \rangle $ holds, see Figure~\ref{coloring3}.
\end{itemize}
We denote by ${\rm Col}_{X}^{\rm R} (D) $ the set of region $X$-colorings of $D$.
We call $C(r)$ for a region $r$ the {\it color} of $r$.
We call a pair $(D,C)$ of a diagram $D$ and a region $X$-coloring $C$ of $D$ a {\it region $X$-colored diagram}.
\end{definition}
\begin{proposition}{\rm (\cite{Niebrzydowski1})}
Let $D$ and $D'$ be diagrams of links.
If $D$ and $D'$ represent the same link, then there exists a bijection between
${\rm Col}_{X}^{\rm R} (D) $ and ${\rm Col}_{X}^{\rm R} (D') $.
\end{proposition}
Next, we show how to obtain a cocycle invariant by using region $X$-colorings of a diagram.
Let $C$ be a region $X$-coloring of $D$.
We define the local chain
$w^{\rm N}(D, C; \chi) \in C^{\rm N}_1 (X)$ at each crossing $\chi$ by
$$w^{\rm N}(D, C; \chi) ={\rm sign}(\chi) \big(a,b,c\big)$$ when $C(r_1)=a, C(r_2)=b$ and $C(r_3)=c$, where $r_1$, $r_2$ and $r_3$ are the regions near $\chi$ as depicted in Figure~\ref{coloring3}.
We define a chain by
$$\displaystyle W^{\rm N}(D, C)=\sum_{\chi \in \{\mbox{\small crossings of $D$}\}} w^{\rm N}(D, C; \chi) \in C^{\rm N}_1 (X).$$
\begin{lemma}{\rm (\cite{Niebrzydowski1})}
The chain $W^{\rm N}(D, C)$ is a $1$-cycle of $C_* ^{\rm N}(X)$.
\end{lemma}
Let $A$ be an abelian group. For a $1$-cocycle $\theta \in C^1_{\rm N}(X; A)$, we define
\[
\begin{array}{l}
\mathcal{H}^{\rm N}(D)=\{[W^{\rm N}(D, C)] \in H^{\rm N}_1(X) \ | \ C \in {\rm Col}_{X}^{\rm R} (D) \}, \mbox{ and} \\[5pt]
\Phi_{\theta}^{\rm N}(D)=\{\theta(W^{\rm N}(D, C)) \in A \ | \ C \in {\rm Col}_{X}^{\rm R} (D) \}
\end{array}
\]
as multisets.
\begin{theorem}{\rm (\cite{Niebrzydowski1})}
$\mathcal{H}^{\rm N}(D)$ and $\Phi_{\theta}^{\rm N}(D)$ are invariants of $L$.
\end{theorem}
\subsection{Region colorings of surface-link diagrams, cocycle invariants}\label{subsec: Region colorings of surface-link diagrams, cocycle invariants}
Let $(X, \langle\, \rangle )$ be a knot-theoretic vertical-ternary-quasigroup.
Let $D$ be a diagram of a surface-link $F$.
\begin{definition}\label{def:regioncoloring2}
A \textit{region $X$-coloring} of $D$ is a map $C: \mathcal{R}(D) \to X$ satisfying the following condition:
\begin{itemize}
\item For a double point curve with regions $r_1$, $r_2$, $r_3$ and $r_4$ near it as depicted in Figure~\ref{doublepoint3}, let $C(r_1)=a$, $C(r_2)=b$ and $C(r_3)= c$. Then $C(r_4) = \langle a,b,c \rangle $ holds, see Figure~\ref{doublepoint3}.
\begin{figure}[ht]
\begin{center}
\includegraphics[clip,width=5.0cm]{doublepoint3.pdf}
\caption{The region coloring condition at a double point curve}
\label{doublepoint3}
\end{center}
\end{figure}
\end{itemize}
We denote by ${\rm Col}_{X}^{\rm R} (D) $ the set of region $X$-colorings of $D$.
We call $C(r)$ for a region $r$ the {\it color} of $r$.
We call a pair $(D,C)$ of a diagram $D$ and a region $X$-coloring $C$ of $D$ a {\it region $X$-colored diagram}.
\end{definition}
\begin{proposition}{\rm (\cite{Niebrzydowski1})}
Let $D$ and $D'$ be diagrams of surface-links.
If $D$ and $D'$ represent the same surface-link, then there exists a bijection between
${\rm Col}_{X}^{\rm R} (D) $ and ${\rm Col}_{X}^{\rm R} (D') $.
\end{proposition}
Next, we show how to obtain a cocycle invariant by using region $X$-colorings of a diagram.
Let $C$ be a region $X$-coloring of $D$.
We define the local chain
$w^{\rm N}(D, C; \tau) \in C^{\rm N}_2 (X)$ at each triple point $\tau$ by
$$w^{\rm N}(D, C; \tau) ={\rm sign}(\tau) \big(a,b,c,d\big)$$ when $C(r_1)=a, C(r_2)=b$, $C(r_3)=c$ and $C(r_4)=d$, where $r_1$, $r_2$, $r_3$ and $r_4$ are the regions near $\tau$ as depicted in Figure~\ref{triplepoint2}.
\begin{figure}[ht]
\begin{center}
\includegraphics[clip,width=10cm]{triplepoint2.pdf}
\caption{The regions near a triple point}
\label{triplepoint2}
\end{center}
\end{figure}
We define a chain by
$$\displaystyle W^{\rm N}(D, C)=\sum_{\tau \in \{\mbox{\small triple points of $D$}\}} w^{\rm N}(D, C; \tau) \in C^{\rm N}_2 (X).$$
\begin{lemma}{\rm (\cite{Niebrzydowski1})}
The chain $W^{\rm N}(D, C)$ is a $2$-cycle of $C_* ^{\rm N}(X)$.
\end{lemma}
Let $A$ be an abelian group. For a $2$-cocycle $\theta \in C^2_{\rm N}(X; A)$, we define
\[
\begin{array}{l}
\mathcal{H}^{\rm N}(D)=\{[W^{\rm N}(D, C)] \in H^{\rm N}_2(X) \ | \ C \in {\rm Col}_{X}^{\rm R} (D) \}, \mbox{ and} \\[5pt]
\Phi_{\theta}^{\rm N}(D)=\{\theta(W^{\rm N}(D, C)) \in A \ | \ C \in {\rm Col}_{X}^{\rm R} (D) \}
\end{array}
\]
as multisets.
\begin{theorem}{\rm (\cite{Niebrzydowski1})}
$\mathcal{H}^{\rm N}(D)$ and $\Phi_{\theta}^{\rm N}(D)$ are invariants of $F$.
\end{theorem}
\section{Correspondence between our work and Niebrzydowski's work}
\label{Correspondence between our work and Niebrzydowski's work}
\subsection{Correspondence between (co)homology groups}\label{subsec:correspondence1}
From now on, $x|_{a\mapsto b}$ denotes the element obtained from $x$ by replacing $a$ with $b$.
Let $X$ be a set, $[\, ]$ a horizontal-tribracket on $X$, and $\langle\, \rangle$ the corresponding vertical-tribracket of $[\, ]$.
Define a homomorphism
$\varphi_n: C_n^{\rm LB}(X) \to C_{n-1}^{\rm N}(X)$ by
\[
\varphi_n \Big(\big((a,b_1), (a, b_2), \ldots , (a ,b_n) \big)\Big) = (z_0, z_1, \ldots , z_n)
\]
if $n\geq 1$, and $\varphi_n=0$ otherwise, where
\[
\left\{
\begin{array}{ll}
z_0=a,\\
z_1=b_1,\\
z_i=[z_{i-2}, z_{i-1}, z_{i-1}|_{b_{i-1} \mapsto b_i}] & (i\in\{2,\ldots, n \}).
\end{array}
\right.
\]
Define a homomorphism
$\psi_n: C_{n-1}^{\rm N}(X) \to C_{n}^{\rm LB}(X)$ by
\[
\psi_n \big((a, a_1, a_2, \ldots , a_n)\big) = \big((a,w_1), (a,w_2), \ldots , (a, w_n)\big)
\]
if $n\geq 1$, and $\psi_n=0$ otherwise, where
\[
\left\{
\begin{array}{ll}
w_1=a_1,\\
w_2=\langle a,a_1,a_2\rangle,\\
w_i= w_{i-1} |_{a_{i-1} \mapsto \langle a_{i-2}, a_{i-1}, a_i \rangle}& (i\in \{3,\ldots , n\}).
\end{array}
\right.
\]
\begin{example} For $a,b,c,d \in X$,
\[
\begin{array}{rcl}
\varphi_2 \Big(\big((a,b), (a,c) \big)\Big) &=& \big(a,b, [a,b,c]\big), \\
\varphi_3 \Big(\big((a,b), (a,c), (a,d) \big)\Big) &=& \big(a,b, [a,b,c], [b,[a,b,c],[a,b,d]]\big),\\
\psi_2 \Big(\big(a,b,c \big)\Big) &=& \big((a,b), (a, \langle a,b,c\rangle)\big),\\
\psi_3 \Big(\big(a,b,c ,d \big)\Big) &=&\big((a,b), (a, \langle a,b,c\rangle), (a, \langle a,b,\langle b,c,d\rangle \rangle)\big). \\
\end{array}
\]
\end{example}
We postpone the proof of the fact that $\varphi_n$ and $\psi_n$ are chain maps and the inverses of each other
to Subsection~\ref{subsec:chainmaps}.
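The recursions defining $\varphi_n$ and $\psi_n$ can also be checked numerically. The sketch below (our own illustration) realizes them for an assumed linear pair of mutually corresponding tribrackets $[a,b,c]=b+c-a$ and $\langle a,b,c\rangle=a-b+c$ on $\mathbb{Z}/5$ (these satisfy $[a,b,\langle a,b,c\rangle]=\langle a,b,[a,b,c]\rangle=c$), and verifies that $\varphi_n$ and $\psi_n$ invert each other in the degrees $n=2,3$ used for the invariants; the function names are ours.

```python
from itertools import product

N = 5

def ht(a, b, c):
    # assumed linear horizontal tribracket [a,b,c] = b + c - a on Z/5
    return (b + c - a) % N

def vt(a, b, c):
    # the corresponding vertical tribracket: <a,b,[a,b,c]> = c and [a,b,<a,b,c>] = c
    return (a - b + c) % N

def zval(a, bs):
    # z_i from the definition of phi_n, computed from the prefix (b_1, ..., b_i);
    # the substitution b_{i-1} |-> b_i swaps the last entry of the prefix
    if len(bs) == 0:
        return a
    if len(bs) == 1:
        return bs[0]
    return ht(zval(a, bs[:-2]), zval(a, bs[:-1]), zval(a, bs[:-2] + (bs[-1],)))

def phi(tup):
    # phi_n(((a,b_1), ..., (a,b_n))) = (z_0, z_1, ..., z_n)
    a = tup[0][0]
    bs = tuple(p[1] for p in tup)
    return tuple(zval(a, bs[:k]) for k in range(len(bs) + 1))

def wval(a, sq):
    # w_i from the definition of psi_n, computed from the prefix (a_1, ..., a_i)
    if len(sq) == 1:
        return sq[0]
    if len(sq) == 2:
        return vt(a, sq[0], sq[1])
    return wval(a, sq[:-2] + (vt(sq[-3], sq[-2], sq[-1]),))

def psi(tup):
    # psi_n((a, a_1, ..., a_n)) = ((a,w_1), ..., (a,w_n))
    a, sq = tup[0], tuple(tup[1:])
    return tuple((a, wval(a, sq[:k])) for k in range(1, len(sq) + 1))

# phi_n and psi_n are mutually inverse in degrees n = 2, 3
for n in (2, 3):
    for t in product(range(N), repeat=n + 1):
        assert phi(psi(t)) == t
        lb = tuple((t[0], x) for x in t[1:])
        assert psi(phi(lb)) == lb
```

For example, `phi(((0, 1), (0, 2)))` returns `(0, 1, ht(0, 1, 2))`, matching the displayed formula for $\varphi_2$.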
This implies that for each $n\in\mathbb Z$, we have the induced isomorphism $\varphi^*_n: H_n^{\rm LB}(X) \to H_{n-1}^{\rm N}(X)$ defined by
\[
\varphi^*_n \Big(\Big[\big((a,b_1), (a, b_2), \ldots , (a ,b_n) \big)\Big]\Big) = \Big[\varphi_n\Big(\big((a,b_1), (a, b_2), \ldots , (a ,b_n) \big)\Big)\Big],
\]
and the inverse isomorphism $\psi^*_n: H_{n-1}^{\rm N}(X) \to H_{n}^{\rm LB}(X)$ defined by
\[
\psi^*_n \Big(\Big[(a, a_1, a_2, \ldots , a_n)\Big]\Big) = \Big[ \psi_n \Big( (a, a_1, a_2, \ldots , a_n) \Big) \Big].
\]
Let $A$ be an abelian group.
The bijective chain map $\varphi_n$ induces the bijective chain map $\varphi_n\otimes {\rm id}: C_n^{\rm LB}(X;A) \to C_{n-1}^{\rm N}(X;A)$, and hence, we have an isomorphism $\varphi^*_n \otimes {\rm id}: H_n^{\rm LB}(X;A) \to H_{n-1}^{\rm N}(X;A)$.
The bijective chain map $\varphi_n$ induces the bijective cochain map $\varphi^n: C^{n-1}_{\rm N}(X;A) \to C^n_{\rm LB}(X;A)$ defined by $\varphi^n (f) = f\circ \varphi_n$, and hence, we have an isomorphism
${\varphi}_*^n: H^{n-1}_{\rm N}(X;A) \to H^n_{\rm LB}(X;A)$.
Thus we have the following theorem.
\begin{theorem}\label{mainthm1}
For any $n\in \mathbb Z$, we have
\[
H_n^{\rm LB}(X; A) \cong H_{n-1}^{\rm N}(X; A) \mbox{ and }
H^n_{\rm LB}(X; A) \cong H^{n-1}_{\rm N}(X; A).
\]
\end{theorem}
\subsection{Correspondence between cocycle invariants of links}
Let $\varphi_2: C_2^{\rm LB}(X) \to C_1^{\rm N}(X)$ be the chain map defined in Subsection~\ref{subsec:correspondence1}, and $\psi_2$ the inverse chain map of $\varphi_2$.
Suppose that we have exactly the same situation as in Subsection~\ref{subsection:linkinvariant}.
Let $\langle\, \rangle: X^3\to X$ be the corresponding vertical-tribracket of the horizontal-tribracket $[\, ]$, and $\overline{C}$ the corresponding region $X$-coloring of the semi-arc $X^2$-coloring $C$ through the translation map $T$ defined in Remark~\ref{Rem:coloringtranslation}.
\begin{figure}[ht]
\begin{center}
\includegraphics[clip,width=8cm]{doublepoint4.pdf}
\caption{The correspondence of local chains at a crossing}
\label{doublepoint4}
\end{center}
\end{figure}
At a crossing $\chi$ of $D$ as depicted in the upper part of Figure~\ref{doublepoint4}, we have
\[
\begin{array}{ll}
w^{\rm N}(D, \overline{C}; \chi)
&={\rm sign}(\chi) \big(a,b,[a,b,c]\big)\\[3pt]
&=\varphi_2 \Big({\rm sign}(\chi)\big((a,b),(a,c)\big)\Big)\\[3pt]
&=\varphi_2\big(w(D, C; \chi)\big),
\end{array}
\]
which implies that $W^{\rm N}(D, \overline{C}) =\varphi_2\big(W(D, C)\big)$. Thus we have
\[
\begin{array}{ll}
\mathcal{H}^{\rm N}(D)&=\Big\{\big[W^{\rm N}(D, \overline{C})\big] ~\Big|~ \ \overline{C} \in {\rm Col}_{X}^{\rm R} (D) \Big\}\\[4pt]
&=\Big\{\varphi_2^{\ast}\big(\big[W(D, C)\big]\big) ~\Big|~ \ C \in {\rm Col}_{X^2}^{\rm SA} (D) \Big\}\\[4pt]
&=\varphi_2^{\ast}\big(\mathcal{H}(D)\big).
\end{array}
\]
For the 2-cocycle $\theta: C^{\rm LB}_2(X) \to A$, set $\overline{\theta}: C^{\rm N}_1(X) \to A$ by $\overline{\theta}=\theta \circ \psi_2$, which is a $1$-cocycle of $C_{\rm N}^{\ast}(X)$. Then we have
\[
\begin{array}{ll}
\overline{\theta}\big(w^{\rm N}(D, \overline{C}; \chi)\big)&= \theta \circ \psi_2 \Big(\varphi_2\big(w(D, C; \chi)\big)\Big)\\[3pt]
&=\theta \big(w(D, C; \chi)\big),
\end{array}
\]
which implies $\overline{\theta}\big(W^{\rm N}(D, \overline{C})\big)=\theta(W(D, C))$.
Thus we have $\Phi_{\overline{\theta}}^{\rm N}(D) =\Phi_{\theta}(D)$.
Therefore, Niebrzydowski's invariants $\mathcal{H}^{\rm N}(L)$ and $\Phi_{\overline{\theta}}^{\rm N}(L)$ can be obtained from our invariants $\mathcal{H}(L)$ and $\Phi_{\theta}(L)$, respectively.
Conversely, suppose that we have exactly the same situation as in Subsection~\ref{subsec: Region colorings of link diagrams, cocycle invariants}.
We may assume that the diagram $D$ is connected.
Let $[\,]: X^3 \to X$ be the corresponding horizontal-tribracket of the vertical-tribracket $\langle\, \rangle$, and $\overline{C}$ the corresponding semi-arc $X^2$-coloring of the region $X$-coloring $C$ through the translation map $T^{-1}$ defined in Remark~\ref{Rem:coloringtranslation}.
At a crossing $\chi$ of $D$ depicted in the lower part of Figure~\ref{doublepoint4}, we have
\[
\begin{array}{ll}
w(D, \overline{C}; \chi)
&={\rm sign}(\chi)\big((a,b),(a,\langle a,b,c\rangle)\big)\\[3pt]
&=\psi_2 \big({\rm sign}(\chi) (a,b,c)\big)\\[3pt]
&=\psi_2\big(w^{\rm N}(D, C; \chi)\big),
\end{array}
\]
which implies that $W(D, \overline{C}) =\psi_2\big(W^{\rm N}(D, C)\big)$. Thus we have
\[
\begin{array}{ll}
\mathcal{H}(D)&=\Big\{\big[W(D, \overline{C})\big] ~\Big|~ \ \overline{C} \in {\rm Col}_{X^2}^{\rm SA} (D) \Big\}\\[4pt]
&=\Big\{\psi_2^{\ast}\big(\big[W^{\rm N}(D, C)\big]\big) ~\Big|~ \ C \in {\rm Col}_{X}^{\rm R} (D) \Big\}\\[4pt]
&=\psi_2^{\ast}\big(\mathcal{H}^{\rm N}(D)\big).
\end{array}
\]
For the 1-cocycle $\theta: C^{\rm N}_1(X) \to A$, set $\overline{\theta}: C^{\rm LB}_2(X) \to A$ by $\overline{\theta}=\theta \circ \varphi_2$, which is a 2-cocycle of $C_{\rm LB}^{\ast}(X)$. Then we have
\[
\begin{array}{ll}
\overline{\theta}\big(w(D, \overline{C}; \chi)\big)&=\theta \circ \varphi_2 \Big( \psi_2\big( w^{\rm N}(D, C; \chi)\big)\Big)\\[3pt]
&=\theta \big( w^{\rm N}(D, C; \chi)\big),
\end{array}
\]
which implies $\overline{\theta}\big(W(D, \overline{C})\big)=\theta\big(W^{\rm N}(D, C)\big)$.
Thus we have $\Phi_{\overline{\theta}}(D) =\Phi_{\theta}^{\rm N}(D)$.
Therefore, our invariants $\mathcal{H}(L)$ and $\Phi_{\overline{\theta}}(L)$ can be obtained from Niebrzydowski's invariants $\mathcal{H}^{\rm N}(L)$ and $\Phi_{\theta}^{\rm N}(L)$, respectively.
As a consequence, we have the following theorem.
\begin{theorem}
Let $X$ be a set, and let $[\, ]$ and $\langle\, \rangle$ be a horizontal- and a vertical-tribracket on $X$, respectively, that correspond to each other.
Let $\theta$ and $\theta'$ be a $2$-cocycle of $C^{*}_{\rm LB} (X)$ and a $1$-cocycle of $C^*_{\rm N}(X)$, respectively, that correspond to each other through the isomorphisms $\varphi^2_*$ and $\psi^2_*$.
Let $L$ be a link.
Then we have
\[
\varphi_2^* \big(\mathcal{H}(L)\big) = \mathcal{H}^{\rm N}(L) \mbox{ and } \psi_2^* \big(\mathcal{H}^{\rm N}(L)\big) = \mathcal{H}(L),
\]
and
\[
\Phi_{\theta}(L)=\Phi_{\theta'}^{\rm N}(L).
\]
\end{theorem}
\subsection{Correspondence between cocycle invariants of surface-links}
Let $\varphi_3: C_3^{\rm LB}(X) \to C_2^{\rm N}(X)$ be the chain map defined in Subsection~\ref{subsec:correspondence1}, and $\psi_3$ the inverse chain map of $\varphi_3$.
Suppose that we have exactly the same situation as in Subsection~\ref{subsection:surfaceinvariant}.
Let $\langle\, \rangle: X^3\to X$ be the corresponding vertical-tribracket of the horizontal-tribracket $[\, ]$, and $\overline{C}$ the corresponding region $X$-coloring of the semi-sheet $X^2$-coloring $C$ through the translation map $T$ defined in Remark~\ref{Rem:coloringtranslation2}.
\begin{figure}[ht]
\begin{center}
\includegraphics[clip,width=12cm]{triplepoint3.pdf}
\caption{The correspondence of local chains at a triple point}
\label{triplepoint3}
\end{center}
\end{figure}
At a triple point $\tau$ of $D$ as depicted in the upper part of Figure~\ref{triplepoint3}, we have
\[
\begin{array}{ll}
w^{\rm N}(D, \overline{C}; \tau)
&={\rm sign}(\tau) \Big(a,b,[a,b,c],\big[b,[a,b,c],[a,b,d]\big]\Big) \\[3pt]
&=\varphi_3 \Big({\rm sign}(\tau)\big((a,b),(a,c), (a,d)\big)\Big)\\[3pt]
&=\varphi_3\big(w(D, C; \tau)\big),
\end{array}
\]
which implies that $W^{\rm N}(D, \overline{C}) =\varphi_3\big(W(D, C)\big)$.
Thus we have
\[
\begin{array}{ll}
\mathcal{H}^{\rm N}(D)&=\Big\{\big[W^{\rm N}(D, \overline{C})\big] ~\Big|~ \ \overline{C} \in {\rm Col}_{X}^{\rm R} (D) \Big\}\\[4pt]
&=\Big\{\varphi_3^{\ast}\big(\big[W(D, C)\big]\big) ~\Big|~ \ C \in {\rm Col}_{X^2}^{\rm SS} (D) \Big\}\\[4pt]
&=\varphi_3^{\ast}\big(\mathcal{H}(D)\big).
\end{array}
\]
For the 3-cocycle $\theta: C^{\rm LB}_3(X) \to A$, set $\overline{\theta}: C^{\rm N}_2(X) \to A$ by $\overline{\theta}=\theta \circ \psi_3$, which is a $2$-cocycle of $C_{\rm N}^{\ast}(X)$. Then we have
\[
\begin{array}{ll}
\overline{\theta}\big(w^{\rm N}(D, \overline{C}; \tau)\big)&=\theta \circ \psi_3 \Big(\varphi_3\big(w(D, C; \tau)\big) \Big)\\[3pt]
&=\theta \big(w(D, C; \tau)\big),
\end{array}
\]
which implies $\overline{\theta}\big(W^{\rm N}(D, \overline{C})\big)=\theta(W(D, C))$.
Thus we have $\Phi_{\overline{\theta}}^{\rm N}(D) =\Phi_{\theta}(D)$.
Therefore, Niebrzydowski's invariants $\mathcal{H}^{\rm N}(F)$ and $\Phi_{\overline{\theta}}^{\rm N}(F)$ can be obtained from our invariants $\mathcal{H}(F)$ and $\Phi_{\theta}(F)$, respectively.
Conversely, suppose that we have exactly the same situation as in Subsection~\ref{subsec: Region colorings of surface-link diagrams, cocycle invariants}.
We may assume that the diagram $D$ is connected.
Let $[\,]: X^3 \to X$ be the corresponding horizontal-tribracket of the vertical-tribracket $\langle\, \rangle$, and $\overline{C}$ the corresponding semi-sheet $X^2$-coloring of the region $X$-coloring $C$ through the translation map $T^{-1}$ defined in Remark~\ref{Rem:coloringtranslation2}.
At a triple point $\tau$ of $D$ depicted in the lower part of Figure~\ref{triplepoint3}, we have
\[
\begin{array}{ll}
w(D, \overline{C}; \tau)
&={\rm sign}(\tau)\Big(\big(a,b\big),\big(a,\langle a,b,c\rangle\big),\big(a,\langle a,b,\langle b,c,d\rangle \rangle \big)\Big)\\[3pt]
&=\psi_3 \big({\rm sign}(\tau) (a,b,c,d)\big)\\[3pt]
&=\psi_3\big(w^{\rm N}(D, C; \tau)\big),
\end{array}
\]
which implies that $W(D, \overline{C}) =\psi_3\big(W^{\rm N}(D, C)\big)$. Thus we have
\[
\begin{array}{ll}
\mathcal{H}(D)&=\Big\{\big[W(D, \overline{C})\big] ~\Big|~ \ \overline{C} \in {\rm Col}_{X^2}^{\rm SS} (D) \Big\}\\[4pt]
&=\Big\{\psi_3^{\ast}\big(\big[W^{\rm N}(D, C)\big]\big) ~\Big|~ \ C \in {\rm Col}_{X}^{\rm R} (D) \Big\}\\[4pt]
&=\psi_3^{\ast}\big(\mathcal{H}^{\rm N}(D)\big).
\end{array}
\]
For the 2-cocycle $\theta: C^{\rm N}_2(X) \to A$, set $\overline{\theta}: C^{\rm LB}_3(X) \to A$ by $\overline{\theta}=\theta \circ \varphi_3$, which is a 3-cocycle of $C_{\rm LB}^{\ast}(X)$. Then we have
\[
\begin{array}{ll}
\overline{\theta}\big(w(D, \overline{C}; \tau)\big)&=\theta \circ \varphi_3 \Big(\psi_3\big(w^{\rm N}(D, C; \tau)\big)\Big)\\[3pt]
&=\theta \big( w^{\rm N}(D, C; \tau)\big),
\end{array}
\]
which implies $\overline{\theta}\big(W(D, \overline{C})\big)=\theta\big(W^{\rm N}(D, C)\big)$.
Thus we have $\Phi_{\overline{\theta}}(D) =\Phi_{\theta}^{\rm N}(D)$.
Therefore, our invariants $\mathcal{H}(F)$ and $\Phi_{\overline{\theta}}(F)$ can be obtained from Niebrzydowski's invariants $\mathcal{H}^{\rm N}(F)$ and $\Phi_{\theta}^{\rm N}(F)$, respectively.
As a consequence, we have the following theorem.
\begin{theorem}
Let $X$ be a set, and let $[\, ]$ and $\langle\, \rangle$ be a horizontal- and a vertical-tribracket on $X$, respectively, that correspond to each other.
Let $\theta$ and $\theta'$ be a $3$-cocycle of $C^{*}_{\rm LB} (X)$ and a $2$-cocycle of $C^*_{\rm N}(X)$, respectively, that correspond to each other through the isomorphisms $\varphi^3_*$ and $\psi^3_*$.
Let $F$ be a surface-link.
Then we have
\[
\varphi_3^* \big(\mathcal{H}(F)\big) = \mathcal{H}^{\rm N}(F) \mbox{ and } \psi_3^* \big(\mathcal{H}^{\rm N}(F)\big) = \mathcal{H}(F),
\]
and
\[
\Phi_{\theta}(F)=\Phi_{\theta'}^{\rm N}(F).
\]
\end{theorem}
\subsection{Chain maps $\varphi_n$ and $\psi_n$}\label{subsec:chainmaps}
In this subsection, we will discuss the homomorphisms $\varphi_n$ and $\psi_n$ defined in Subsection~\ref{subsec:correspondence1}.
In particular, we show that they are chain maps between the chain complexes $C_*^{\rm LB} (X)$ and $C_*^{\rm N} (X)$, and that they are the inverses of each other.
\begin{notation} In this subsection, we use the following notations.
\begin{itemize}
\item The bold angle bracket $\blangle a_0, a_1, a_2, \ldots , a_n \brangle$ means
$$\Bigg\langle a_{0}, a_1, \bigg\langle a_{1}, a_{2}, \Big\langle a_2, a_3, \big\langle \cdots \big\langle a_{n-3}, a_{n-2}, \langle a_{n-2}, a_{n-1}, a_n\rangle \big\rangle \cdots \big\rangle \Big\rangle\bigg\rangle\Bigg\rangle.$$
\item The augmented bold square bracket $\blsq a_0, a_1, a_2, \ldots , a_n; b \brsq$ means
$$\Bigg[ a_{n-1}, a_n, \bigg[ a_{n-2}, a_{n-1}, \Big[\cdots \Big[ a_2, a_3, \big[ a_1, a_2, [a_0, a_1, b] \big] \Big] \cdots \Big]\bigg]\Bigg].$$
\end{itemize}
\end{notation}
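These nested brackets are straightforward to implement directly from their definitions. The sketch below (our own illustration; the linear pair $\langle a,b,c\rangle = b+c-a$, $[a,b,c]=a-b+c$ on $\mathbb{Z}/5$ is an assumed example of corresponding tribrackets, and all function names are ours) encodes both notations and numerically spot-checks the unnesting identity (1) and the cancellation identity (3) of Lemma~\ref{lem:tribracketfomula} below.

```python
from itertools import product

N = 5

def vt(a, b, c):
    # assumed linear vertical tribracket <a,b,c> = b + c - a on Z/5
    return (b + c - a) % N

def ht(a, b, c):
    # horizontal tribracket [a,b,c] = a - b + c, the third-slot inverse of vt:
    # vt(a, b, ht(a, b, c)) == c and ht(a, b, vt(a, b, c)) == c
    return (a - b + c) % N

def bangle(seq):
    # the bold angle bracket <<a_0, ..., a_n>>, nested from the inside out
    if len(seq) == 3:
        return vt(*seq)
    return vt(seq[0], seq[1], bangle(seq[1:]))

def bsq(seq, b):
    # the augmented bold square bracket [[a_0, ..., a_n; b]]:
    # apply [a_0,a_1,-] innermost, then [a_1,a_2,-], ..., [a_{n-1},a_n,-] outermost
    r = b
    for j in range(len(seq) - 1):
        r = ht(seq[j], seq[j+1], r)
    return r

# identity (1): <<a_0..a_n>> = <<a_0..a_i, <<a_i..a_n>> >>, here for n = 4, i = 1, 2
for seq in product(range(N), repeat=5):
    for i in range(1, 3):
        assert bangle(seq) == bangle(seq[:i+1] + (bangle(seq[i:]),))

# identity (3): <<a_0..a_n, [[a_0..a_n; b]] >> = b, here for n = 1, 2, 3
for n in (1, 2, 3):
    for seq in product(range(N), repeat=n + 1):
        for b in range(N):
            assert bangle(seq + (bsq(seq, b),)) == b
```

Identity (1) holds for any choice of $\langle\,\rangle$, while identity (3) uses only the third-slot inversion between $\langle\,\rangle$ and $[\,]$.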
Let $X$ be a set, $[\, ]$ a horizontal-tribracket on $X$, and $\langle\, \rangle$
the corresponding vertical-tribracket of $[\,]$.
\begin{lemma}\label{lem:tribracketfomula}
For nonnegative integers $m, n$ and $i$, let $a_0, a_1, \ldots, a_{\max\{m, n\}}, b, x_{i+1},$ $x_{i+2}, \ldots , x_{m}, y_{i+1}, \ldots , y_{n}\in X$.
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})}
\item \label{formula1-1} For $n\geq 2$ and $i\leq n-2$, we have
\[
\begin{array}{l}
\blangle a_0, a_1, \ldots , a_n \brangle = \blangle a_0, a_1, \ldots ,a_{i}, \blangle a_i, a_{i+1}, \ldots , a_n \brangle \brangle.
\end{array}
\]
\item \label{formula1-2} For $n\geq 2$, we have
\[
\begin{array}{l}
\blangle a_0, a_1, \ldots , a_n \brangle = \blangle a_0, \langle a_0, a_1, a_2 \rangle, a_2, a_3,\ldots , a_n \brangle.
\end{array}
\]
\item\label{formula1-3} For $n\geq 1$, we have
$$\blangle a_0, a_1,\ldots , a_n ,\blsq a_0, a_1,\ldots , a_n; b \brsq \brangle =b. $$
\item \label{formula1-4} For $m\geq 1$, $n\geq 2$ and $0\leq i \leq \min\{ m-1, n-2\}$, we have
$$\begin{array}{l}
\blsq a_0, a_1,\ldots , a_{i}, x_{i+1}, \ldots ,x_{m} ; \blangle a_0, a_1,\ldots , a_i, y_{i+1}, \ldots , y_n \brangle \brsq \\[5pt]
=\blsq a_{i},x_{i+1}, \ldots ,x_{m} ; \blangle a_i, y_{i+1}, \ldots , y_n \brangle \brsq.
\end{array}
$$
In particular,
$$\begin{array}{l}
\blsq a_0, a_1,\ldots , a_{m} ; \blangle a_0, a_1,\ldots , a_n \brangle \brsq \\[5pt]
=\left\{\begin{array}{cl}
\blangle a_m, a_{m+1},\ldots , a_n \brangle &(m<n-1),\\[5pt]
a_n&(m=n-1),\\[5pt]
\blsq a_{n-1}, a_n,\ldots , a_m; a_n \brsq &(m>n-1). \\
\end{array}
\right.
\end{array}
$$
\end{enumerate}
\end{lemma}
\begin{proof}
(\ref{formula1-1}) This follows easily from the definition of the bold angle bracket.
(\ref{formula1-2}) We have
\[
\begin{array}{l}
\blangle a_0, a_1, \ldots , a_n \brangle \\[5pt]
=\Big\langle a_0, a_1, \big\langle a_1, a_2, \blangle a_2, \ldots , a_n \brangle \big\rangle \Big\rangle \\[5pt]
=\Big\langle a_0, \langle a_0 , a_1, a_2\rangle , \big\langle\langle a_0 , a_1, a_2\rangle, a_2, \blangle a_2, \ldots , a_n \brangle \big\rangle \Big\rangle \\[5pt]
=\blangle a_0, \langle a_0 , a_1, a_2\rangle , a_2 , \blangle a_2, \ldots , a_n \brangle \brangle \\[5pt]
= \blangle a_0, \langle a_0, a_1, a_2 \rangle, a_2, a_3,\ldots , a_n \brangle,
\end{array}
\]
where the first and third equalities follow from the definition of the bold angle bracket, the second equality follows from (i) of ($\mathcal{V}2$) of Definition~\ref{s-ternary operation}, and the last equality follows from (\ref{formula1-1}) of Lemma~\ref{lem:tribracketfomula}.
(\ref{formula1-3}) We prove this formula by induction on $n\geq 1$.
When $n=1$, we have
$\blangle a_0, a_1 ,\blsq a_0, a_1; b \brsq \brangle =\big\langle a_0, a_1 ,[ a_0, a_1, b ] \big\rangle =b $ by (1) of Lemma~\ref{lem:tribrackets1}.
Assuming the formula holds for some $n\geq 1$, we have
$$
\begin{array}{l}
\blangle a_0, a_1,\ldots , a_{n+1} ,\blsq a_0, a_1,\ldots , a_{n+1}; b \brsq \brangle \\[5pt]
=\blangle a_0, a_1, \ldots , a_{n}, \bigg \langle a_{n}, a_{n+1}, \Big [ a_{n}, a_{n+1}, \blsq a_0, a_1,\ldots , a_{n}; b \brsq \Big ] \bigg\rangle \brangle\\[5pt]
=\blangle a_0, a_1,\ldots , a_{n} ,\blsq a_0, a_1,\ldots , a_{n}; b \brsq \brangle \\[5pt]
=b,
\end{array}
$$
where the first equality follows from the definitions, the second equality follows from (1) of Lemma~\ref{lem:tribrackets1}, and the third equality follows from the assumption.
(\ref{formula1-4})
We show the first equality only, and we leave the proof of the other formulas to the reader.
For $k<i$, we have
$$\begin{array}{l}
\blsq a_k, a_{k+1},\ldots , a_{i}, x_{i+1}, \ldots ,x_{m} ; \blangle a_k, a_{k+1},\ldots , a_i, y_{i+1}, \ldots , y_n \brangle \brsq \\[7pt]
=\Bigg[ x_{m-1} , x_m,\bigg[ \cdots \Big[ a_{k+1}, \bullet_{k+2}, \big[a_k, a_{k+1}, \blangle a_k, a_{k+1},\ldots , a_i, y_{i+1}, \ldots , y_n \brangle \big] \Big] \cdots \bigg] \Bigg]\\[11pt]
=\Bigg[ x_{m-1} , x_m,\bigg[ \cdots \Big[ a_{k+1}, \bullet_{k+2}, \big[a_k, a_{k+1}, \big\langle a_k, a_{k+1}, \blangle a_{k+1},\ldots , a_i, y_{i+1}, \ldots , y_n \brangle \big\rangle \big] \Big] \cdots \bigg] \Bigg]\\[11pt]
=\Bigg[ x_{m-1} , x_m,\bigg[ \cdots \Big[ a_{k+1}, \bullet_{k+2}, \blangle a_{k+1}, \ldots , a_i, y_{i+1}, \ldots , y_n \brangle \Big] \cdots \bigg] \Bigg]\\[11pt]
=\blsq a_{k+1},\ldots , a_{i}, x_{i+1}, \ldots ,x_{m} ; \blangle a_{k+1},\ldots , a_i, y_{i+1}, \ldots , y_n \brangle \brsq ,
\end{array}
$$
where the first, second, and fourth equalities follow from the definitions; for the third equality, we applied (2) of Lemma~\ref{lem:tribrackets1} to the innermost square bracket and the outermost angle bracket. Here $\bullet_{k+2}$ stands for $a_{k+2}$ if $k+2\leq i$, and for $x_{k+2}$ otherwise.
By repeating the above operation, we have
$$\begin{array}{l}
\blsq a_0, a_1,\ldots , a_{i}, x_{i+1}, \ldots ,x_{m} ; \blangle a_0, a_1,\ldots , a_i, y_{i+1}, \ldots , y_n \brangle \brsq \\[5pt]
=\blsq a_1,\ldots , a_{i}, x_{i+1}, \ldots ,x_{m} ; \blangle a_1,\ldots , a_i, y_{i+1}, \ldots , y_n \brangle \brsq \\[5pt]
=\cdots =\blsq a_{i},x_{i+1}, \ldots , x_{m} ; \blangle a_i, y_{i+1}, \ldots , y_n \brangle \brsq. \\[5pt]
\end{array}
$$
\end{proof}
For a positive integer $n$ and for $a(=a_0), a_1, \ldots , a_n, b_1, \ldots , b_n \in X$, we set $z_i~(i\in \{0,1,\ldots ,n \})$ by
\[
\left\{
\begin{array}{ll}
z_0=a,\\
z_1=b_1,\\
z_i=[z_{i-2}, z_{i-1}, z_{i-1}|_{b_{i-1} \mapsto b_i}] & (i\in\{2,\ldots, n \}),
\end{array}
\right.
\]
we set $w_i~(i\in \{1,\ldots ,n \})$ by
\[
\left\{
\begin{array}{ll}
w_1=a_1,\\
w_2=\langle a,a_1,a_2\rangle,\\
w_i= w_{i-1} |_{a_{i-1} \mapsto \langle a_{i-2}, a_{i-1}, a_i \rangle}& (i\in \{3,\ldots , n\}),
\end{array}
\right.
\]
and we set $y_{(i,j)}~(i\in \{0,\ldots ,n-1\},\ j\in \{1,\ldots ,n-1\})$ by
\[
y_{(i,j)} = \left\{
\begin{array}{ll}
\langle a_{j-1}, a_j, y_{(i,j+1)}\rangle & (j<i),\\[5pt]
\langle a_{j-1}, a_j, a_{j+1} \rangle & (j=i, i+1),\\[5pt]
\langle y_{(i,j-1)}, a_j, a_{j+1} \rangle & (j>i+1).
\end{array}
\right.
\]
We then have the following lemmas.
\begin{lemma}\label{lem:zw}
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})}
\item \label{formula2-1} When $i\geq 2$, $z_i=\blsq z_0, z_1, \ldots , z_{i-1}; b_i \brsq$.
\item \label{formula2-2} When $j\geq 2$, $w_j=\blangle a, a_1, \ldots , a_j \brangle$.
\end{enumerate}
\end{lemma}
\begin{proof}
(\ref{formula2-1}) We show this formula by induction on $i\in \{2,\ldots , n\}$.
When $i=2$, we have
$z_2= [z_0, z_1, z_1|_{b_1 \mapsto b_2}] = [z_0,z_1, b_2] = \blsq z_0,z_1; b_2 \brsq $.
Assuming the formula holds for some $i \geq 2$,
we have
\[
\begin{array}{ll}
z_{i+1} &= \big[z_{i-1}, z_i, z_i|_{b_{i} \mapsto b_{i+1}}\big]\\[5pt]
& =\big[z_{i-1}, z_i, \blsq z_0, z_1, \ldots , z_{i-1}; b_i \brsq \Big|_{b_{i} \mapsto b_{i+1}}\big] \\[5pt]
& =\big[z_{i-1}, z_i, \blsq z_0, z_1, \ldots , z_{i-1}; b_{i+1} \brsq \big] \\[5pt]
& =\blsq z_0, z_1, \ldots , z_{i-1}, z_i ; b_{i+1} \brsq,
\end{array}
\]
where the first and last equalities follow from the definitions, the second equality follows from the induction hypothesis, and the third equality follows from the definition of the substitution.
(\ref{formula2-2}) This follows easily from the definition of $w_j$.
\end{proof}
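The closed forms in Lemma~\ref{lem:zw} can likewise be checked in a concrete model (again our assumption: the heap operations $\langle a,b,c\rangle = ab^{-1}c$ and $[a,b,c]=ba^{-1}c$ on $S_4$); the formal substitutions $z_{i-1}|_{b_{i-1}\mapsto b_i}$ and $w_{i-1}|_{a_{i-1}\mapsto \langle a_{i-2},a_{i-1},a_i\rangle}$ are modeled by recomputing the term with the modified parameter list.

```python
# Same hypothetical heap model on S_4 as before (our assumption):
#   <a, b, c> = a b^{-1} c,   [a, b, c] = b a^{-1} c.
# Formal substitutions are modeled by recomputing with a modified parameter list.
import itertools
import random

PERMS = list(itertools.permutations(range(4)))

def comp(p, q): return tuple(p[q[k]] for k in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for k, v in enumerate(p):
        r[v] = k
    return tuple(r)

def ang(a, b, c): return comp(a, comp(inv(b), c))   # <a, b, c>
def sq(a, b, c):  return comp(b, comp(inv(a), c))   # [a, b, c]

def bang(seq):      # bold angle bracket
    return seq[1] if len(seq) == 2 else ang(seq[0], seq[1], bang(seq[1:]))

def bsq(seq, b):    # bold square bracket
    return sq(seq[0], seq[1], b) if len(seq) == 2 else sq(seq[-2], seq[-1], bsq(seq[:-1], b))

def z(i, a, bs):
    # z_0 = a, z_1 = b_1, z_i = [z_{i-2}, z_{i-1}, z_{i-1}|_{b_{i-1} -> b_i}]
    if i == 0:
        return a
    if i == 1:
        return bs[0]
    bs2 = list(bs)
    bs2[i - 2] = bs[i - 1]          # the substitution b_{i-1} -> b_i
    return sq(z(i - 2, a, bs), z(i - 1, a, bs), z(i - 1, a, bs2))

def w(j, a, al):
    # w_1 = a_1, w_2 = <a, a_1, a_2>, w_j = w_{j-1}|_{a_{j-1} -> <a_{j-2}, a_{j-1}, a_j>}
    if j == 1:
        return al[0]
    if j == 2:
        return ang(a, al[0], al[1])
    al2 = list(al)
    al2[j - 2] = ang(al[j - 3], al[j - 2], al[j - 1])
    return w(j - 1, a, al2)

random.seed(1)
ok = True
for _ in range(100):
    n = random.randint(2, 6)
    a = random.choice(PERMS)
    bs = [random.choice(PERMS) for _ in range(n)]    # b_1, ..., b_n
    al = [random.choice(PERMS) for _ in range(n)]    # a_1, ..., a_n
    for i in range(2, n + 1):
        # Lemma zw (1): z_i = [[z_0, ..., z_{i-1}; b_i]]
        ok &= z(i, a, bs) == bsq([z(k, a, bs) for k in range(i)], bs[i - 1])
        # Lemma zw (2): w_j = <<a, a_1, ..., a_j>>
        ok &= w(i, a, al) == bang([a] + al[:i])
print(ok)
```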
\begin{lemma}\label{lem:y}
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})}
\item \label{formula3-5} When $j\leq i-1$,
$$\big[ a_{j-1}, a_{j}, y_{(i-1,j)} \big] =
\left\{ \begin{array}{ll}
y_{(i-1, j+1)} & (j<i-1), \\[3pt]
a_i & (j=i-1).
\end{array}
\right.
$$
\item \label{formula3-2} When $j\leq i-1$,
$$\blangle a_{j-1}, a_{j},\ldots , a_{i} \brangle =y_{(i-1,j)}. $$
In particular, $w_i = y_{(i-1, 1)}$.
\item \label{formula3-1} When $j\geq i+1$,
$$\blangle a_{i-1}, a_{i},\ldots , a_{j} \brangle = \blangle a_{i-1}, y_{(i-1,i)}, y_{(i-1,i+1)},\ldots , y_{(i-1,j-1)} \brangle . $$
\item \label{formula3-3} When $j\leq i-1$,
$$ \Big[ a, w_j, w_i \Big] =\left\{
\begin{array}{ll}
\blangle y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1,j+1)} \brangle & (j<i-1),\\[5pt]
\blangle y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1,i-1)}, a_i \brangle & (j=i-1).
\end{array}
\right. $$
\item \label{formula3-4} When $j\geq i+1$,
$$ \Big[ a, w_i, w_j \Big] =
\blangle y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1,i-1)}, a_i , a_{i+1}, \ldots , a_j \brangle.$$
\end{enumerate}
\end{lemma}
\begin{proof}
(\ref{formula3-5}) This follows from (2) of Lemma~\ref{lem:tribrackets1} as
$$
\big[a_{j-1}, a_{j} , y_{(i-1,j)} \big] =\big[a_{j-1}, a_{j} , \langle a_{j-1}, a_{j}, y_{(i-1,j+1)} \rangle \big]
=y_{(i-1,j+1)}.
$$
(\ref{formula3-2}) This follows easily from the definition of $y_{(i,j)}$.
(\ref{formula3-1})
For $i\leq k \leq j-3$,
\[
\begin{array}{l}
\blangle a_{i-1}, y_{(i-1, i)}, \ldots, y_{(i-1, k)}, a_{k+1}, \ldots , a_{j} \brangle \\[5pt]
= \blangle a_{i-1}, y_{(i-1, i)}, \ldots, y_{(i-1, k)}, \blangle y_{(i-1, k)}, a_{k+1}, \ldots , a_{j} \brangle \brangle\\[5pt]
= \blangle a_{i-1}, y_{(i-1, i)}, \ldots, y_{(i-1, k)}, \blangle y_{(i-1, k)}, \langle y_{(i-1, k)} ,a_{k+1}, a_{k+2} \rangle, a_{k+2}, \ldots , a_{j} \brangle \brangle \\[5pt]
= \blangle a_{i-1}, y_{(i-1, i)}, \ldots, y_{(i-1, k)}, \blangle y_{(i-1, k)}, y_{(i-1, k+1)}, a_{k+2}, \ldots , a_{j} \brangle \brangle \\[5pt]
=\blangle a_{i-1}, y_{(i-1, i)}, \ldots, y_{(i-1, k+1)}, a_{k+2}, \ldots , a_{j} \brangle,
\end{array}
\]
where the first and fourth equalities follow from (\ref{formula1-1}) of Lemma~\ref{lem:tribracketfomula}, the second equality follows from (\ref{formula1-2}) of Lemma~\ref{lem:tribracketfomula}, and
the third equality follows from the definition of $y_{(i,j)}$.
By repeating the above operation, we have
\[
\begin{array}{l}
\blangle a_{i-1}, a_{i},\ldots , a_{j} \brangle \\[5pt]
= \blangle a_{i-1}, \langle a_{i-1}, a_{i}, a_{i+1} \rangle , a_{i+1}, \ldots , a_{j} \brangle\\[5pt]
= \blangle a_{i-1}, y_{(i-1, i)}, a_{i+1}, \ldots , a_{j} \brangle \\[5pt]
= \cdots = \blangle a_{i-1}, y_{(i-1,i)}, y_{(i-1,i+1)},\ldots , y_{(i-1,j-2)}, a_{j-1}, a_j \brangle \\[5pt]
= \blangle a_{i-1}, y_{(i-1,i)}, y_{(i-1,i+1)},\ldots , y_{(i-1,j-2)}, \langle y_{(i-1,j-2)}, a_{j-1}, a_j \rangle \brangle \\[5pt]
= \blangle a_{i-1}, y_{(i-1,i)}, y_{(i-1,i+1)},\ldots , y_{(i-1,j-2)}, y_{(i-1,j-1)} \brangle , \\[5pt]
\end{array}
\]
where the first equality follows from (\ref{formula1-2}) of Lemma~\ref{lem:tribracketfomula}, the second and last equalities follow from the definition of $y_{(i,j)}$, and the second equality from the last follows from (\ref{formula1-1}) of Lemma~\ref{lem:tribracketfomula}.
(\ref{formula3-3}) For $k\leq j-3$, we have
\[
\begin{array}{l}
\Big[ a_k , \blangle a_k, a_{k+1}, \ldots , a_j \brangle , y_{(i-1,k+1)} \Big]\\[5pt]
=\Big[ a_k , \big\langle a_k, a_{k+1} , \blangle a_{k+1}, a_{k+2}, \ldots , a_j \brangle \big\rangle , y_{(i-1,k+1)} \Big]\\[5pt]
=\Big \langle y_{(i-1,k+1)} , \big[a_k, a_{k+1} , y_{(i-1,k+1)} \big] , \Big[ a_{k+1} ,\blangle a_{k+1}, a_{k+2}, \ldots , a_j \brangle, \big[a_k, a_{k+1} , y_{(i-1,k+1)} \big] \Big] \Big\rangle \\[5pt]
=\Big \langle y_{(i-1,k+1)} , y_{(i-1,k+2)}, \Big[ a_{k+1} ,\blangle a_{k+1}, a_{k+2}, \ldots , a_j \brangle, y_{(i-1,k+2)}\Big] \Big\rangle ,
\end{array}
\]
where the first equality follows from the definition of the bold angle bracket, the second equality follows from (2) of Lemma~\ref{lem:tribrackets2}, and the third equality follows from (\ref{formula3-5}) of Lemma~\ref{lem:y}.
Hence, by repeating the above operation, for $j<i-1$, we have
\[
\begin{array}{l}
\Big[ a, w_j, w_i \Big] \\[5pt]
= \Big[ a, \blangle a, a_1, \ldots , a_j \brangle , y_{(i-1,1)} \Big]\\[5pt]
=\cdots
= \blangle y_{(i-1,1)} , y_{(i-1,2)}, \ldots , y_{(i-1,j-1)} , \Big[ a_{j-2}, \blangle a_{j-2}, a_{j-1} , a_j \brangle , y_{(i-1,j-1)} \Big] \brangle \\[5pt]
= \blangle y_{(i-1,1)} , y_{(i-1,2)}, \ldots , y_{(i-1,j)} , y_{(i-1,j+1) } \brangle,
\end{array}
\]
where the first equality follows from (\ref{formula2-2}) of Lemma~\ref{lem:zw} and (2) of Lemma~\ref{lem:y}, and the last equality follows from the definition of the bold angle bracket, (2) of Lemma~\ref{lem:tribrackets2}, and (\ref{formula3-5}) of Lemma~\ref{lem:y}
as
\[
\begin{array}{l}
\Big[ a_{j-2}, \blangle a_{j-2}, a_{j-1} , a_j \brangle , y_{(i-1,j-1)} \Big] \\[5pt]
=\Big[ a_{j-2}, \big \langle a_{j-2}, a_{j-1} , a_j \big \rangle , y_{(i-1,j-1)} \Big] \\[5pt]
= \Big\langle y_{(i-1,j-1)}, \big[a_{j-2}, a_{j-1}, y_{(i-1,j-1)}\big], \big[a_{j-1}, a_j, [a_{j-2}, a_{j-1}, y_{(i-1,j-1)}] \big] \Big\rangle \\[5pt]
= \Big\langle y_{(i-1,j-1)}, y_{(i-1,j)} , \big[a_{j-1}, a_j, y_{(i-1,j)} \big] \Big\rangle \\[5pt]
= \Big\langle y_{(i-1,j-1)}, y_{(i-1,j)} , y_{(i-1,j+1)} \Big\rangle.
\end{array}
\]
Similarly, for $j=i-1$, we have
\[
\begin{array}{l}
\Big[ a, w_{i-1}, w_i \Big] \\[5pt]
=\cdots
= \blangle y_{(i-1,1)} , y_{(i-1,2)}, \ldots , y_{(i-1,i-2)} , \Big[ a_{i-3}, \blangle a_{i-3}, a_{i-2} , a_{i-1} \brangle , y_{(i-1,i-2)} \Big] \brangle \\[5pt]
= \blangle y_{(i-1,1)} , y_{(i-1,2)}, \ldots , y_{(i-1,i-2)} , \Big\langle y_{(i-1,i-2)}, y_{(i-1,i-1)} , \big[a_{i-2}, a_{i-1}, y_{(i-1,i-1)} \big] \Big\rangle \brangle \\[5pt]
= \blangle y_{(i-1,1)} , y_{(i-1,2)}, \ldots , y_{(i-1,i-1)} , a_i \brangle,
\end{array}
\]
where the second equality from the last follows from (2) of Lemma~\ref{lem:tribrackets2} and (1) of Lemma~\ref{lem:y}, and the last equality follows from (\ref{formula3-5}) of Lemma~\ref{lem:y}.
(\ref{formula3-4}) For $k\leq i-3$, we have
\[
\begin{array}{l}
\Big[ a_k , y_{(i-1, k+1)} , \blangle a_k, a_{k+1}, \ldots , a_j \brangle \Big]\\[5pt]
= \Big[ a_k , \big\langle a_k, a_{k+1}, y_{(i-1, k+2)} \big\rangle , \blangle a_k, a_{k+1}, \ldots , a_j \brangle \Big]\\[5pt]
= \Big\langle y_{(i-1, k+1)} , y_{(i-1, k+2)}, \big[a_{k+1}, y_{(i-1,k+2)}, [a_{k}, a_{k+1}, \blangle a_k, a_{k+1}, \ldots , a_j \brangle ]\big] \Big\rangle\\[5pt]
= \bigg\langle y_{(i-1, k+1)} , y_{(i-1, k+2)}, \Big[a_{k+1}, y_{(i-1,k+2)}, \big[a_{k}, a_{k+1}, \langle a_{k}, a_{k+1}, \blangle a_{k+1}, \ldots , a_j \brangle \rangle \big]\Big] \bigg\rangle\\[5pt]
= \blangle y_{(i-1, k+1)} , y_{(i-1, k+2)}, \big[a_{k+1}, y_{(i-1,k+2)}, \blangle a_{k+1}, \ldots , a_j \brangle \big] \brangle,\\[5pt]
\end{array}
\]
where the first and third equalities follow from the definitions, the second equality follows from (2) of Lemma~\ref{lem:tribrackets2} and the definition of $y_{(i,j)}$, and the fourth equality follows from (2) of Lemma~\ref{lem:tribrackets1}.
Hence, by repeating the above operation, we have
\[
\begin{array}{l}
\Big[ a, w_i, w_j \Big] \\[5pt]
= \Big[ a, y_{(i-1, 1)} , \blangle a, a_1, \ldots , a_j \brangle \Big]\\[5pt]
=\cdots= \blangle y_{(i-1, 1)} , \ldots ,y_{(i-1, i-1)}, \big[a_{i-2}, y_{(i-1,i-1)}, \blangle a_{i-2}, \ldots , a_j \brangle \big] \brangle\\[5pt]
= \blangle y_{(i-1, 1)} , \ldots ,y_{(i-1, i-1)}, \Big[a_{i-2}, \langle a_{i-2},a_{i-1}, a_{i} \rangle, \blangle a_{i-2}, \ldots , a_j \brangle \Big] \brangle\\[5pt]
= \blangle y_{(i-1, 1)} , \ldots ,y_{(i-1, i-1)}, \Big\langle y_{(i-1,i-1)}, a_i, \big[a_{i-1}, a_i, [ a_{i-2}, a_{i-1}, \blangle a_{i-2}, \ldots , a_j \brangle ] \big] \Big\rangle \brangle\\[5pt]
= \blangle y_{(i-1, 1)} , \ldots ,y_{(i-1, i-1)}, \Big\langle y_{(i-1,i-1)}, a_i, \big[a_{i-1}, a_i, [ a_{i-2}, a_{i-1}, \langle a_{i-2}, a_{i-1}, \blangle a_{i-1}, \ldots , a_j \brangle\rangle ] \big] \Big\rangle \brangle\\[5pt]
= \blangle y_{(i-1, 1)} , \ldots ,y_{(i-1, i-1)}, \Big\langle y_{(i-1,i-1)}, a_i, \blangle a_{i}, \ldots , a_j \brangle \Big\rangle \brangle\\[5pt]
= \blangle y_{(i-1, 1)} , \ldots ,y_{(i-1, i-1)}, a_i, \ldots , a_j \brangle,\\[5pt]
\end{array}
\]
where the first equality follows from (2) of Lemma~\ref{lem:zw} and (\ref{formula3-2}) of Lemma~\ref{lem:y},
the third and fifth equalities from the last follow from the definitions,
the fourth equality from the last follows from (2) of Lemma~\ref{lem:tribrackets2}, the second equality from the last follows from (2) of Lemma~\ref{lem:tribrackets1}, and the last equality follows from (\ref{formula1-1}) of Lemma~\ref{lem:tribracketfomula}.
\end{proof}
We recall that in Subsection~\ref{subsec:correspondence1}, the homomorphism
$\varphi_n: C_n^{\rm LB}(X) \to C_{n-1}^{\rm N}(X)$ was defined by
\[
\varphi_n \Big(\big((a,b_1), (a, b_2), \ldots , (a ,b_n) \big)\Big) = (z_0, z_1, \ldots , z_n)
\]
if $n\geq 1$, and $\varphi_n=0$ otherwise.
The homomorphism
$\psi_n: C_{n-1}^{\rm N}(X) \to C_{n}^{\rm LB}(X)$ was defined by
\[
\psi_n \big((a, a_1, a_2, \ldots , a_n)\big) = \big((a,w_1), (a,w_2), \ldots , (a, w_n)\big)
\]
if $n\geq 1$, and $\psi_n=0$ otherwise.
\begin{lemma}\label{lem:welldefined}
\begin{itemize}
\item[(1)] $\varphi_n$ is well-defined.
\item[(2)] $\psi_n$ is well-defined.
\end{itemize}
\end{lemma}
\begin{proof}
(1) We regard $\varphi_n$ as a homomorphism from $C_n(X)$ to $C_{n-1}^{\rm Nie}(X)$.
It suffices to show that $\varphi_{n} \big(D_{n}(X)\big) \subset D_{n-1}^{\rm Nie}(X)$.
For $\big((a,b_1), (a,b_2), \ldots , (a,b_n)\big) \in \bigcup_{a\in X} (\{a\} \times X)^n $, suppose $b_{i}=b_{i+1}$ for some $i \in \{1, 2, \ldots, n-1\}$.
We then have
\[
\begin{array}{ll}
&\varphi_{n}\Big(\big((a,b_1), (a,b_2), \ldots , (a,b_n)\big)\Big)\\[5pt]
&=\big( z_0, z_1, \cdots, z_{i-1}, z_i, [z_{i-1}, z_i, z_i |_{b_{i} \mapsto b_{i+1}}], z_{i+2}, \cdots, z_n \big) \\[5pt]
&=\big( z_0, z_1, \cdots, z_{i-1}, z_i, [z_{i-1}, z_i, z_i ], \cdots, z_n \big) .
\end{array}
\]
Here we have $\langle z_{i-1}, z_i, [z_{i-1}, z_i, z_i ] \rangle=z_i$ by (1) of Lemma~\ref{lem:tribrackets1}, which implies that $\varphi_{n} \Big(\big((a,b_1), (a,b_2), \ldots , (a,b_n)\big)\Big) \in D_{n-1}^{\rm Nie}(X)$, i.e. $\varphi_{n} \big(D_{n}(X)\big) \subset D_{n-1}^{\rm Nie}(X)$.

(2) We regard $\psi_n$ as a homomorphism from $C_{n-1}^{\rm Nie}(X)$ to $C_n(X)$.
It suffices to show that $\psi_{n} \big(D_{n-1}^{\rm Nie}(X)\big) \subset D_{n}(X)$.
For $(a, a_1, a_2, \ldots , a_n)\in X^{n+1}$, suppose $\langle a_{j-1}, a_j, a_{j+1}\rangle =a_j$ for some $j \in \{ 2, 3, \ldots, n-1\}$.
We then have
\[
\begin{array}{ll}
&\psi_n \big((a, a_1, a_2, \ldots , a_n)\big)\\[5pt]
&= \big((a,w_1), (a,w_2), \ldots ,(a,w_j), (a,w_{j+1}=w_j) , (a, w_{j+2}), \ldots , (a, w_n)\big)
\end{array}
\]
since $w_{j+1}=w_{j}|_{a_{j} \mapsto \langle a_{j-1}, a_j, a_{j+1}\rangle } =w_{j}|_{a_{j} \mapsto a_j }=w_{j}$. Hence there is a pair of neighboring components which are equal. It follows that $\psi_n \big((a, a_1, a_2, \ldots , a_n)\big) \in D_{n}(X)$, i.e. $\psi_{n} \big(D_{n-1}^{\rm Nie}(X) \big) \subset D_{n}(X)$.
\end{proof}
\begin{lemma}\label{lem:inverses}
$\varphi_n$ and $\psi_n$ are the inverses of each other.
\end{lemma}
\begin{proof}
We first show that $\psi_n \circ \varphi_n ={\rm id}$. We have
\[
\begin{array}{ll}
&\psi_n \circ \varphi_n \Big(\big((a,b_1), (a,b_2), \ldots , (a,b_n)\big)\Big)\\[5pt]
&=\psi_n \Big( (a, z_1, \ldots , z_n) \Big)\\[5pt]
&=\big((a,w'_1), (a,w'_2), \ldots , (a, w'_n)\big),\\[5pt]
\end{array}
\]
where $w_1'=z_1 = b_1$, and for $i\geq2$,
\[
\begin{array}{ll}
w'_i&=\blangle a, z_1, \ldots , z_i {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle}\\[5pt]
&=\blangle a, z_1, \ldots , z_{i-1}, \blsq a, z_1, \ldots , z_{i-1}; b_i \brsq {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle}\\[5pt]
&=b_i, \\[5pt]
\end{array}
\]
where the first and second equalities respectively follow from (\ref{formula2-2}) and (\ref{formula2-1}) of Lemma~\ref{lem:zw}, and the third equality follows from (\ref{formula1-4}) of Lemma~\ref{lem:tribracketfomula}.
We next show that $\varphi_n \circ \psi_n ={\rm id}$. We have
\[
\begin{array}{ll}
&\varphi_n \circ \psi_n \big((a, a_1, a_2, \ldots , a_n)\big)\\[5pt]
&=\varphi_n \Big( \big((a,w_1), (a,w_2), \ldots , (a, w_n)\big) \Big)\\[5pt]
&=(a, z'_1, \ldots , z'_n). \\[5pt]
\end{array}
\]
Here we show that $z'_{i}=a_{i}$ for $i \in \{1, 2, \ldots, n\}$ by induction on $i$. When $i=1$, $z'_{1}=w_{1}=a_{1}$.
Assuming that $z'_{k}=a_{k}$ for $k \leq {i-1}$, we have
\[
\begin{array}{ll}
z'_{i}&=\blsq a, z'_1, \ldots , z'_{i-1}; w_{i} \brsq \\[5pt]
&=\blsq a, a_1, \ldots , a_{i-1}; \blangle a, a_1, \ldots , a_{i} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq \\[5pt]
&=a_{i}, \\[5pt]
\end{array}
\]
where the first equality follows from (\ref{formula2-1}) of Lemma~\ref{lem:zw}, the second equality follows from the assumption and (\ref{formula2-2}) of Lemma~\ref{lem:zw}, and the third equality follows from (\ref{formula1-4}) of Lemma~\ref{lem:tribracketfomula}.
\end{proof}
Now we consider the composition $\varphi_{n-1} \circ \partial_n^{\rm LB} \circ \psi_n$.
For an integer $n\geq 2$, we have
\begin{align*}
&\varphi_{n-1} \circ \partial_n^{\rm LB} \circ \psi_n \big( (a, a_1, a_2, \ldots , a_n) \big) \notag \\[3pt]
&= \varphi_{n-1} \circ \partial_n^{\rm LB}\Big( \big((a,w_1), (a,w_2), \ldots , (a, w_n)\big) \Big)\notag \\[3pt]
&= \varphi_{n-1}\Big(\sum_{i=1}^{n} (-1)^i \big\{ \big( (a, w_1 ), \ldots, (a, w_{i-1}), (a, w_{i+1}), \ldots , (a,w_n) \big) \notag \\[3pt]
&\hspace{2.5cm}- \big( (w_i, [a, w_1, w_i]=:b_{(i,1)} ), \ldots, (w_i, [a, w_{i-1}, w_i]=:b_{(i,i-1)}), \notag \\[3pt]
&\hspace{3cm} (w_i, [a, w_i, w_{i+1}]=:b_{(i,i+1)}) ,\ldots , (w_i, [a, w_i, w_{n}]=:b_{(i,n)})\big) \big\} \Big) \notag \\[3pt]
&=\sum_{i=1}^{n} (-1)^i (u_{(i,0)}, u_{(i,1)}, \ldots, u_{(i,i-1)}, u_{(i,i+1)}, \ldots , u_{(i,n)})
\\[3pt]
&\hspace{1cm}+\sum_{i=1}^{n} (-1)^{i+1} (v_{(i,0)}, v_{(i,1)}, \ldots, v_{(i,i-1)}, v_{(i,i+1)}, \ldots , v_{(i,n)}),
\end{align*}
where
\[
u_{(1,j)}=\left\{
\begin{array}{ll}
a & (j=0),\\
w_2 & (j=2),\\
\big[ u_{(1,0)} , u_{(1,2)} , u_{(1,2)} |_{w_2 \mapsto w_3}\big] & (j=3),\\[3pt]
\big[ u_{(1,j-2)}, u_{(1,j-1)} , u_{(1,j-1)}|_{w_{j-1}\mapsto w_j}\big] & (j\geq 4),
\end{array}
\right.
\]
and for $i\geq 2$,
\[
u_{(i,j)}=\left\{
\begin{array}{ll}
a & (j=0),\\
w_1 & (j=1),\\
\big[ u_{(i,i-2)} , u_{(i,i-1)} , u_{(i,i-1)} |_{w_{i-1} \mapsto w_{i+1}}\big] & (j=i+1),\\[3pt]
\big[ u_{(i,i-1)} , u_{(i,i+1)} , u_{(i,i+1)} |_{w_{i+1} \mapsto w_{i+2}}\big] & (j=i+2),\\[3pt]
\big[ u_{(i,j-2)}, u_{(i,j-1)} , u_{(i,j-1)}|_{w_{j-1}\mapsto w_j}\big] & (\mbox{otherwise}),
\end{array}
\right.
\]
and where
\[
v_{(1,j)}=\left\{
\begin{array}{ll}
w_1 & (j=0),\\
b_{(1,2)} & (j=2),\\
\big[ v_{(1,0)} , v_{(1,2)} , v_{(1,2)} |_{b_{(1,2)} \mapsto b_{(1,3)}}\big] & (j=3),\\[3pt]
\big[ v_{(1,j-2)}, v_{(1,j-1)} , v_{(1,j-1)}|_{b_{(1,j-1)}\mapsto b_{(1,j)}}\big] & (j\geq 4),
\end{array}
\right.
\]
and for $i\geq 2$,
\[
v_{(i,j)}=\left\{
\begin{array}{ll}
w_i & (j=0),\\
b_{(i,1)} & (j=1),\\
\big[ v_{(i,i-2)} , v_{(i,i-1)} , v_{(i,i-1)} |_{b_{(i, i-1)} \mapsto b_{(i, i+1)}}\big] & (j=i+1),\\[3pt]
\big[ v_{(i,i-1)} , v_{(i,i+1)} , v_{(i,i+1)} |_{b_{(i, i+1)} \mapsto b_{(i, i+2)}}\big] & (j=i+2),\\[3pt]
\big[ v_{(i,j-2)}, v_{(i,j-1)} , v_{(i,j-1)}|_{b_{(i, j-1)}\mapsto b_{(i,j)}}\big] & (\mbox{otherwise}).
\end{array}
\right.
\]
We recall that in Subsection~\ref{subsec:NHom}, a homomorphism $\partial_{n-1}^{\rm N} : C_{n-1}^{\rm N} (X) \to C_{n-2}^{\rm N} (X)$ was defined by
\begin{align}
&\partial_{n-1}^{\rm N} \big( ( a, a_1, \ldots , a_{n} ) \big) \notag \\
&= \sum_{i=0}^{n-1} (-1)^i \big( y_{(i,1)}, y_{(i,2)}, \ldots , y_{(i,i)}, a_{i+1}, a_{i+2}, \ldots , a_{n} \big)\notag \\
&\hspace{1cm}+ \sum_{i=0}^{n-1} (-1)^{i+1} \big( a, a_1, \ldots , a_i, y_{(i,i+1)},y_{(i,i+2)},\ldots , y_{(i,n-1)} \big) \notag
\end{align}
if $n> 0$, and $\partial_{n-1}^{\rm N}=0$ otherwise.
\begin{lemma}\label{lem:chainmap}
$\varphi_n$ and $\psi_n$ are chain maps between the chain complexes $C_*^{\rm LB} (X)$ and $C_*^{\rm N} (X)$.
\end{lemma}
\begin{proof}
It suffices to show that $\varphi_{n-1} \circ \partial_n^{\rm LB} \circ \psi_n = \partial_{n-1}^{\rm N}$ for $n\geq 2$.
We will show that
\begin{subequations}
\begin{align}
&(u_{(i,0)}, u_{(i,1)}, \ldots, u_{(i,i-1)}, u_{(i,i+1)}, \ldots , u_{(i,n)})\notag \\
&= \big( a, a_1, \ldots , a_{i-1}, y_{(i-1,i)},y_{(i-1,i+1)},\ldots , y_{(i-1,n-1)} \big) \label{eq:1a}
\end{align}
\end{subequations}
and
\begin{subequations}
\begin{align}
& (v_{(i,0)}, v_{(i,1)}, \ldots, v_{(i,i-1)}, v_{(i,i+1)}, \ldots , v_{(i,n)})\notag \\
&=\big( y_{({i-1},1)}, y_{(i-1,2)}, \ldots , y_{(i-1,i-1)}, a_{i}, a_{i+1}, \ldots , a_{n} \big). \label{eq:1b}
\end{align}
\end{subequations}
First we consider equality (\ref{eq:1a}) in the case $i \geq 2$.
For the first $i$ components, we have
\[
\begin{array}{l}
(u_{(i,0)}, u_{(i,1)}, \ldots, u_{(i,i-1)}) =\varphi_{i-1}\circ \psi_{i-1} \big( (a, a_1,\ldots , a_{i-1}) \big)\\[3pt]
={\rm id} \big((a, a_1,\ldots , a_{i-1}) \big)= (a, a_1,\ldots , a_{i-1}).
\end{array}
\]
For $u_{(i,i+1)}$, we have
\[
\begin{array}{ll}
u_{(i,i+1)}&= \blsq u_{(i,0)}, u_{(i,1)}, \ldots , u_{(i,i-1)}; w_{i+1} \brsq \\[5pt]
&= \blsq a, a_1,\ldots , a_{i-1} ; \blangle a, a_1, \ldots , a_{i+1} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq\\[5pt]
&= \blangle a_{i-1}, a_{i} , a_{i+1} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle}\\[5pt]
&= \langle a_{i-1}, a_{i} , a_{i+1} \rangle\\[5pt]
&=y_{(i-1, i)},
\end{array}
\]
where the first and second equalities respectively follow from (1) and (2) of Lemma~\ref{lem:zw}, the third equality follows from (\ref{formula1-4}) of Lemma~\ref{lem:tribracketfomula}, and the fourth and fifth equalities follow from the definitions.
For $j \geq i+2$, we show that $u_{(i, j)}=y_{(i-1, j-1)}$ by induction on $j$. Assuming that $u_{(i, k)}=y_{(i-1, k-1)}$ for $i+1 \leq k \leq j-1$, we have
\[
\begin{array}{ll}
u_{(i,j)}&= \blsq u_{(i,0)}, u_{(i,1)}, \ldots , u_{(i,i-1)}, u_{(i,i+1)}, \ldots , u_{(i,j-1)}; w_{j} \brsq \\[5pt]
&= \blsq a, a_1,\ldots , a_{i-1}, y_{(i-1, i)}, \ldots , y_{(i-1, j-2)} ; \blangle a, a_1, \ldots , a_{j} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq\\[5pt]
&= \blsq a_{i-1}, y_{(i-1, i)}, \ldots , y_{(i-1, j-2)} ; \blangle a_{i-1}, \ldots , a_{j} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq\\[5pt]
&= \blsq a_{i-1}, y_{(i-1, i)}, \ldots , y_{(i-1, j-2)} ; \blangle a_{i-1}, y_{(i-1, i)}, \ldots , y_{(i-1, j-2)} , y_{(i-1, j-1)} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq\\[5pt]
&= y_{(i-1, j-1)},\\[5pt]
\end{array}
\]
where the first equality follows from (1) of Lemma~\ref{lem:zw}, the second equality follows from the assumption, the third and fifth equalities follow from (4) of Lemma~\ref{lem:tribracketfomula}, and the fourth equality follows from (3) of Lemma~\ref{lem:y}.
Next we consider equality (\ref{eq:1b}) in the case $i \geq 2$. For $0 \leq j \leq i-2$, we show that $v_{(i, j)}=y_{(i-1, j+1)}$ by induction on $j$. When $j=0$, we have
\[
\begin{array}{ll}
v_{(i,0)}&=w_i\\[5pt]
&= \blangle a, a_1, \ldots , a_{i} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle}\\[5pt]
&= y_{(i-1, 1)},\\[5pt]
\end{array}
\]
where the first equality follows from the definition, the second equality follows from (2) of Lemma~\ref{lem:zw}, and the third equality follows from (2) of Lemma~\ref{lem:y}.
When $j=1$, we have
\[
\begin{array}{ll}
v_{(i,1)}&=b_{(i, 1)}\\[5pt]
&= [a, w_1, w_{i}]\\[5pt]
&= \blsq a, a_1; \blangle a, a_1, \ldots , a_{i} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq \\[5pt]
&= \blangle a_1, a_2, \ldots , a_{i} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle}\\[5pt]
&= y_{(i-1, 2)},\\[5pt]
\end{array}
\]
where the first and second equalities follow from the definitions, the third equality follows from (2) of Lemma~\ref{lem:zw} and the definition, the fourth equality follows from (4) of Lemma~\ref{lem:tribracketfomula}, and the fifth equality follows from (2) of Lemma~\ref{lem:y}.
Assuming that $v_{(i, k)}=y_{(i-1, k+1)}$ for $k \leq j-1$, we have
\[
\begin{array}{ll}
v_{(i,j)}&= \blsq v_{(i,0)}, v_{(i,1)}, \ldots , v_{(i,j-1)}; b_{(i,j)} \brsq \\[5pt]
&= \blsq y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1, j)} ; [ a, w_j, w_i ] \brsq \\[5pt]
&= \blsq y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1, j)} ; \blangle y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1, j+1)} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq\\[5pt]
&= y_{(i-1, j+1)},\\[5pt]
\end{array}
\]
where the first equality follows from (1) of Lemma~\ref{lem:zw}, the second equality follows from the assumption and the definition, the third equality follows from (4) of Lemma~\ref{lem:y}, and the fourth equality follows from (4) of Lemma~\ref{lem:tribracketfomula}.
When $j=i-1$, we have
\[
\begin{array}{ll}
v_{(i,i-1)}&= \blsq v_{(i,0)}, v_{(i,1)}, \ldots , v_{(i,i-2)}; b_{(i,i-1)} \brsq \\[5pt]
&= \blsq y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1, i-1)} ; [ a, w_{i-1}, w_i ] \brsq \\[5pt]
&= \blsq y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1, i-1)} ; \blangle y_{(i-1,1)}, y_{(i-1,2)}, \ldots , y_{(i-1, i-1)}, a_i {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq\\[5pt]
&= a_i ,\\[5pt]
\end{array}
\]
where the first equality follows from (1) of Lemma~\ref{lem:zw}, the second equality follows from the assumption and the definition, the third equality follows from (4) of Lemma~\ref{lem:y}, and the fourth equality follows from (4) of Lemma~\ref{lem:tribracketfomula}.
For $j \geq i+1$, we show that $v_{(i,j)}=a_j$ by induction on $j$. When $j=i+1$, we have
\[
\begin{array}{ll}
v_{(i,i+1)}&= \blsq v_{(i,0)}, \ldots ,v_{(i,i-2)}, v_{(i,i-1)}; b_{(i,i+1)} \brsq \\[5pt]
&= \blsq y_{(i-1,1)}, \ldots , y_{(i-1, i-1)}, a_i ; [a, w_i, w_{i+1}] \brsq \\[5pt]
&= \blsq y_{(i-1,1)}, \ldots , y_{(i-1, i-1)}, a_i ; \blangle y_{(i-1,1)}, \ldots , y_{(i-1, i-1)}, a_i, a_{i+1} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq \\[5pt]
&= a_{i+1} ,\\[5pt]
\end{array}
\]
where the first equality follows from (1) of Lemma~\ref{lem:zw}, the second equality follows from the assumption and the definition, the third equality follows from (5) of Lemma~\ref{lem:y}, and the fourth equality follows from (4) of Lemma~\ref{lem:tribracketfomula}. Assuming that $v_{(i,k)}=a_k$ for $i+1 \leq k \leq j-1$, we have
\[
\begin{array}{ll}
v_{(i,j)}&= \blsq v_{(i,0)}, \ldots ,v_{(i,i-2)}, v_{(i,i-1)}, v_{(i,i+1)}, \ldots , v_{(i,j-1)}; b_{(i,j)} \brsq \\[5pt]
&= \blsq y_{(i-1,1)}, \ldots , y_{(i-1, i-1)}, a_i, a_{i+1}, \ldots , a_{j-1}; [a, w_i, w_{j}] \brsq \\[5pt]
&= \blsq y_{(i-1,1)}, \ldots , y_{(i-1, i-1)}, a_i, a_{i+1}, \ldots , a_{j-1};\\
& \hspace{50pt} \blangle y_{(i-1,1)}, \ldots , y_{(i-1, i-1)}, a_i, a_{i+1}, \ldots , a_j {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle}\hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \hspace{-2.1mm} {\Big\rangle} \brsq \\[5pt]
&= a_{j} ,\\[5pt]
\end{array}
\]
where the first equality follows from (1) of Lemma~\ref{lem:zw}, the second equality follows from the assumption and the definition, the third equality follows from (5) of Lemma~\ref{lem:y}, and the fourth equality follows from (4) of Lemma~\ref{lem:tribracketfomula}.
Therefore we have
\begin{align*}
&\varphi_{n-1} \circ \partial_n^{\rm LB} \circ \psi_n \big( (a, a_1, a_2, \ldots , a_n) \big) \notag \\
&=\sum_{i=1}^{n} (-1)^i (u_{(i,0)}, u_{(i,1)}, \ldots, u_{(i,i-1)}, u_{(i,i+1)}, \ldots , u_{(i,n)})
\\
&\hspace{1cm}+\sum_{i=1}^{n} (-1)^{i+1} (v_{(i,0)}, v_{(i,1)}, \ldots, v_{(i,i-1)}, v_{(i,i+1)}, \ldots , v_{(i,n)}) \\
&= \sum_{i=1}^{n} (-1)^i \big( a, a_1, \ldots , a_{i-1}, y_{(i-1,i)},y_{(i-1,i+1)},\ldots , y_{(i-1,n-1)} \big) \notag \\
&\hspace{1cm}+ \sum_{i=1}^{n} (-1)^{i+1} \big( y_{({i-1},1)}, y_{(i-1,2)}, \ldots , y_{(i-1,i-1)}, a_{i}, a_{i+1}, \ldots , a_{n} \big)\notag \\
&= \sum_{i=0}^{n-1} (-1)^{i+1} \big( a, a_1, \ldots , a_i, y_{(i,i+1)},y_{(i,i+2)},\ldots , y_{(i,n-1)} \big) \notag \\
&\hspace{1cm}+ \sum_{i=0}^{n-1} (-1)^i \big( y_{(i,1)}, y_{(i,2)}, \ldots , y_{(i,i)}, a_{i+1}, a_{i+2}, \ldots , a_{n} \big) \notag \\
&=\partial_{n-1}^{\rm N} \big( ( a, a_1, \ldots , a_{n} ) \big).
\end{align*}
We leave the proof in the case $i=1$ to the reader.
\end{proof}
The next theorem follows from Lemmas~\ref{lem:welldefined}, \ref{lem:inverses} and \ref{lem:chainmap}.
\begin{theorem}
$\varphi_n$ and $\psi_n$ are chain maps, and they are the inverses of each other.
\end{theorem}
\section{Examples}
\label{Examples}
\begin{example} (Alexander local biquandles)
Let $X=R$ be a commutative ring with identity. Then for any
choice of two units $x,y\in R$, we have a horizontal-tribracket known as an
\textit{Alexander tribracket} defined by
\[[a,b,c]=xb+yc-xya.\]
This determines a local biquandle structure on $X^2$ by the maps
\begin{eqnarray*}
(a,b)\uline{\star}(a,c) & = & (c,xb+yc-xya)\\
(a,b)\oline{\star}(a,c) & = & (c,xc+yb-xya).
\end{eqnarray*}
For example, in the Alexander local biquandle on $X=\mathbb{Z}_5$ with
$x=3$ and $y=2$,
we have
\[(1,3)\uline{\star}(1,4) = (4,3(3)+2(4)-(3)(2)1)=(4,9+8-6)=(4,1)\]
and
\[(1,3)\oline{\star}(1,4) = (4,3(4)+2(3)-(3)(2)1)=(4,12+6-6)=(4,2).\]
\end{example}
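As a quick sanity check, the two computations above can be reproduced in a few lines of Python. This is a minimal sketch under our own naming: `under` and `over` stand for the maps written with the underlined and overlined stars, and all arithmetic is reduced mod 5.

```python
# Alexander local biquandle on Z_5 with x = 3, y = 2, as in the example.
P = 5
x, y = 3, 2

def tribracket(a, b, c):
    """Alexander tribracket [a, b, c] = x*b + y*c - x*y*a (mod 5)."""
    return (x * b + y * c - x * y * a) % P

def under(p, q):
    """(a, b) understar (a, c) = (c, [a, b, c])."""
    (a, b), (_, c) = p, q
    return (c, tribracket(a, b, c))

def over(p, q):
    """(a, b) overstar (a, c) = (c, x*c + y*b - x*y*a)."""
    (a, b), (_, c) = p, q
    return (c, (x * c + y * b - x * y * a) % P)

print(under((1, 3), (1, 4)))  # (4, 1), as computed in the example
print(over((1, 3), (1, 4)))   # (4, 2)
```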
\begin{example}
Let $X=G$ be a group. Then we have a horizontal-tribracket known as a
\textit{Dehn tribracket} defined by
\[[a,b,c]=ba^{-1}c.\]
This determines a local biquandle structure on $X^2$ by the maps
\begin{eqnarray*}
(a,b)\uline{\star}(a,c) & = & (c,ba^{-1}c)\\
(a,b)\oline{\star}(a,c) & = & (c,ca^{-1}b).
\end{eqnarray*}
\end{example}
\begin{example}
More generally, we can represent a tribracket on a finite set
$X=\{1,2,\dots, n\}$ with an operation $3$-tensor,
i.e. a list of $n$ $n\times n$ matrices, with the rule that the
element $[i,j,k]$ is the entry in the $i$th matrix's $j$th row and
$k$th column. Then \texttt{python} calculations show that there are
two horizontal-tribrackets on a set of two elements, given by
\[\left[\left[\begin{array}{rr} 1& 2 \\2 & 1\end{array}\right],
\left[\begin{array}{rr}2 & 1 \\ 1 & 2\end{array}\right]\right]
\]
and
\[\left[
\left[\begin{array}{rr}2 & 1 \\ 1 & 2\end{array}\right],
\left[\begin{array}{rr} 1& 2 \\2 & 1\end{array}\right]
\right]
\]
and twelve tribrackets on the set of three elements.
\end{example}
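The tensor-lookup convention just described is easy to encode. The sketch below (variable names are ours) stores the first of the two displayed 2-element tribrackets as nested lists, shifting the 1-based labels to Python's 0-based indices.

```python
# Operation 3-tensor convention: [i, j, k] is the entry in the i-th
# matrix's j-th row and k-th column.  T holds the first of the two
# 2-element horizontal-tribrackets displayed above.
T = [
    [[1, 2],
     [2, 1]],
    [[2, 1],
     [1, 2]],
]

def bracket(i, j, k):
    """Look up [i, j, k] for labels i, j, k in {1, 2}."""
    return T[i - 1][j - 1][k - 1]

print(bracket(1, 1, 1))  # 1
print(bracket(2, 1, 2))  # 1
```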
Theorem~\ref{mainthm1} says that there is an isomorphism between the $(n-1)$st
Niebrzydowski (co)homology of a vertical-tribracket on $X$ and its corresponding
$n$th local biquandle (co)homology. We can represent an element
of $C_{\rm LB}^n(X)$ with an $(n+1)$-tensor whose entry in position
$(z_0,z_1,\dots, z_n)$ is the coefficient of $\chi_{(z_0,z_1,\dots, z_n)}$.
\begin{example}
In \cite{NeedellNelson16}, an algebraic structure called a \textit{biquasile}
was introduced, consisting of a pair of quasigroup operations
$\ast,\cdot:X\times X\to X$ satisfying the conditions
\[\begin{array}{rcl}
a\ast(x\cdot (y\ast(a\cdot b))) & = & (a\ast(x\cdot y))\ast(x\cdot (y\ast((a\ast(x\cdot y))\cdot b))) \\
y\ast((a\ast (x\cdot y))\cdot b) & = & (y\ast(a\cdot b))\ast((a\ast (x\cdot (y\ast(a\cdot b))))\cdot b).
\end{array}\]
A biquasile determines a vertical-tribracket by
\[\langle a,b,c\rangle =b\ast(a\cdot c).\]
For example, the biquasile structure on $X=\{1,2\}$ given by
\[\begin{array}{r|rr}
\ast & 1 & 2 \\ \hline
1 & 1 & 2 \\
2 & 2 & 1\\
\end{array} \quad
\begin{array}{r|rr}
\cdot & 1 & 2 \\ \hline
1 & 2 & 1 \\
2 & 1 & 2\\
\end{array}
\]
yields the tribracket with operation tensor
\[\left[\left[\begin{array}{rr}
2 & 1\\
1 & 2
\end{array}\right],
\left[\begin{array}{rr}
1 & 2\\
2 & 1
\end{array}\right]\right].\]
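The displayed tensor can be recovered mechanically from the two operation tables via $\langle a,b,c\rangle = b\ast(a\cdot c)$. A minimal sketch (the dictionary encodings of the tables are our own):

```python
# Biquasile on X = {1, 2}: operation tables from the example above.
star = {(1, 1): 1, (1, 2): 2, (2, 1): 2, (2, 2): 1}
dot  = {(1, 1): 2, (1, 2): 1, (2, 1): 1, (2, 2): 2}

def vbracket(a, b, c):
    """Vertical tribracket <a, b, c> = b * (a . c)."""
    return star[(b, dot[(a, c)])]

# Rebuild the operation 3-tensor and compare with the one displayed.
tensor = [[[vbracket(a, b, c) for c in (1, 2)] for b in (1, 2)] for a in (1, 2)]
print(tensor)  # [[[2, 1], [1, 2]], [[1, 2], [2, 1]]]
```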
In \cite{ChoiNeedellNelson17}, the notion of \textit{Boltzmann enhancements}
for biquasile colorings of oriented knots and links was considered and conditions
for a function $f:X\times X\times X\to X$ to define a Boltzmann weight were
identified. These correspond to local biquandle 2-cocycles in our present
notation.
For example, the Boltzmann weight $\phi$ with $\mathbb{Z}_5$
coefficients in Example 5 in \cite{ChoiNeedellNelson17} corresponds to the local
biquandle $2$-cocycle specified by the 3-tensor
\[\phi=\left[\left[\begin{array}{rr}
0 & 2\\
3 & 0
\end{array}\right],
\left[\begin{array}{rr}
0 & 4\\
3 & 0
\end{array}\right]\right]\]
or alternatively
\[\phi=2\chi_{((1,1),(1,2))}+3\chi_{((1,2),(1,1))}+4\chi_{((2,1),(2,2))}+3\chi_{((2,2),(2,1))}\]
where
\[\chi_{\vec{x}}(\vec{y})=\left\{\begin{array}{ll}
1 & \vec{x}=\vec{y}\\
0 & \vec{x}\ne\vec{y}
\end{array}\right.\]
for $\vec{x},\vec{y}\in\{\left((a,b),(a,c)\right)\ |\ a,b,c\in X\}$.
\end{example}
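The translation between the 3-tensor form of $\phi$ above and its characteristic-function form is a matter of listing the nonzero entries: an entry $v$ in position $(a,b,c)$ contributes $v\,\chi_{((a,b),(a,c))}$. A sketch of this bookkeeping:

```python
# Boltzmann weight phi from the example, as a 3-tensor over Z_5.
phi = [
    [[0, 2],
     [3, 0]],
    [[0, 4],
     [3, 0]],
]

# Collect the nonzero entries as coefficients of characteristic functions.
terms = {}
for a in (1, 2):
    for b in (1, 2):
        for c in (1, 2):
            v = phi[a - 1][b - 1][c - 1]
            if v != 0:
                terms[((a, b), (a, c))] = v

print(terms)
# {((1, 1), (1, 2)): 2, ((1, 2), (1, 1)): 3, ((2, 1), (2, 2)): 4, ((2, 2), (2, 1)): 3}
```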
\begin{remark}
We have defined the cocycle invariant as a multiset of diagram weights over the
set of colorings of a diagram. It is common in the literature
(see \cite{CarterJelsovskyKamadaLangfordSaito03, ElhamdadiNelson} etc.) to encode a multiset in a polynomial format by
summing a formal variable $u$ to the power of each element in the multset,
effectively rewriting multiplicities as coefficients and elements as exponents.
For example, the multiset $\{0,0,2,2,2,4\}$ becomes
\[u^0+u^0+u^2+u^2+u^2+u^4=2+3u^2+u^4.\]
This notation is convenient since polynomials are easier to compare
visually than multisets, and evaluating the polynomial
at $u=1$ recovers the number of colorings.
\end{remark}
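The multiset-to-polynomial bookkeeping in the remark amounts to counting multiplicities, which `collections.Counter` does directly (the function name `to_polynomial` is ours):

```python
# Rewrite a multiset as polynomial data: multiplicities become
# coefficients and elements become exponents.
from collections import Counter

def to_polynomial(multiset):
    """Return {exponent: coefficient} for the polynomial sum of u**m."""
    return dict(Counter(multiset))

coeffs = to_polynomial([0, 0, 2, 2, 2, 4])
print(coeffs)                # {0: 2, 2: 3, 4: 1}, i.e. 2 + 3u^2 + u^4
print(sum(coeffs.values()))  # 6: evaluating at u = 1 counts the colorings
```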
\begin{example}
Let $X=\{1,2,3\}$ and consider the horizontal-tribracket defined by the 3-tensor
\[\left[
\left[\begin{array}{rrr}
2& 3& 1 \\
3& 1& 2 \\
1& 2& 3
\end{array}\right],\left[\begin{array}{rrr}
1& 2& 3 \\
2& 3& 1 \\
3& 1& 2
\end{array}\right],\left[\begin{array}{rrr}
3& 1& 2 \\
1& 2& 3 \\
2& 3& 1
\end{array}\right]
\right].\]
Our \texttt{python} computations reveal $3$-cocycles in $H^3_{\rm LB}(X;\mathbb{Z}_5)$
including
\[\theta=
\left[
\left[\begin{array}{rrr}
0 & 0 & 0 \\
1 & 0 & 3 \\
1 & 3 & 0
\end{array}\right],\left[\begin{array}{rrr}
0 & 0 & 0 \\
1 & 0 & 0 \\
1 & 1 & 0
\end{array}\right],\left[\begin{array}{rrr}
0 & 3 & 3 \\
3 & 0 & 2 \\
3 & 4 & 0
\end{array}\right]\right].
\]
Let us illustrate the computation of the cocycle enhancement invariant
for the Hopf link using this cocycle. There are 27 semi-arc $X^2$- (and region $X$-)colorings of
the Hopf link; for each, we must compute the diagram weight. Writing the
colorings of the diagrams in terms of 2-parallels, we have the following
weight contributions at each crossing:
\[\includegraphics{lemma3.pdf}\]
Then for example the coloring below has diagram weight
$\theta(1,1,1)+\theta(1,1,1)=0+0=0$,
\[\includegraphics{lemma4.pdf}\]
while the coloring
\[\includegraphics{lemma5.pdf}\]
has diagram weight
$\theta(1,2,1)+\theta(1,1,2)=1+0=1$.
Repeating for all 27 colorings, we obtain the multiset invariant value
\[\Phi_{\theta}(L2a1)=
\{0,0,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1\}\]
or in polynomial form, $\Phi_{\theta}(L2a1)=9+18u$.
The invariant takes the following values on links with
up to seven crossings:
\[
\begin{array}{r|l}
\Phi_{\theta}(L) & L \\ \hline
9 & L6a4 \\
27 & L5a1, L7a1, L7a3, L7a4 \\
9+18u & L2a1, L7a5, L7a6 \\
9+18u^2 & L4a1, L6a1, L7a2 \\
9+18u^3 & L6a2, L6a3, L7n1 \\
9+54u^2+18u^3 & L6a5, L6n1 \\
45+18u+18u^2 & L7a7 \\
14+3u+2u^2+4u^3+4u^4 & L7n2 \\
\end{array}
\]
\end{example}
\begin{example}
In Example 4.17 of \cite{Niebrzydowski2}, a coloring of the trefoil knot by the
Alexander tribracket with $[a,b,c]=a+b-c$, i.e. given by the 3-tensor
\[
\left[
\left[\begin{array}{rrr}
1 & 3 & 2 \\
2 & 1 & 3 \\
3 & 2 & 1
\end{array}\right], \left[\begin{array}{rrr}
2 & 1 & 3 \\
3 & 2 & 1 \\
1 & 3 & 2
\end{array}\right], \left[\begin{array}{rrr}
3 & 2 & 1 \\
1 & 3 & 2 \\
2 & 1 & 3
\end{array}\right]
\right]
\]
was observed to have a nontrivial weight sum with respect to the
cocycle which in our notation is
\[\theta=
\left[
\left[\begin{array}{rrr}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 1 & 0
\end{array}\right], \left[\begin{array}{rrr}
0 & 0 & 1 \\
0 & 0 & 0 \\
1 & 0 & 0
\end{array}\right], \left[\begin{array}{rrr}
0 & 2 & 0 \\
1 & 0 & 0 \\
2 & 0 & 0
\end{array}\right]
\right].\]
Our \texttt{python} computations reveal that the right-handed trefoil $3_1$
knot has $\Phi_{\theta}(3_1)=9+18u$ while the left-handed trefoil $\overline{3_1}$
has $\Phi_{\theta}(\overline{3_1})=9+18u^2$, distinguishing the two and showing that
these invariants can distinguish mirror images. We note also that the
square knot and granny knot are distinguished by this invariant with
$\Phi_{\theta}(3_1\#3_1)=45+18u+18u^2$ and
$\Phi_{\theta}(3_1\#\overline{3_1})=9+36u+36u^2$.
For prime knots with eight or fewer crossings, the nontrivial values are listed
in the table.
\[
\begin{array}{r|l}
\Phi_{\theta}(K) & K \\ \hline
27 & 6_1, 8_{10} \\
9+18u & 3_1, 7_4, 7_7, 8_{15}, 8_{21} \\
9+18u^2 & 8_5, 8_{19} \\
15+6u+6u^2 & 8_{11} \\
18+6u+3u^2 & 8_{20} \\
33+24u+24u^2 & 8_{18} \\
\end{array}
\]
\end{example}
\begin{example}
Let $X$ be the set $\{1,2,3\}$ with horizontal-tribracket given by the 3-tensor
\[\left[
\left[\begin{array}{rrr}
2 & 3 & 1 \\
1 & 2 & 3 \\
3 & 1 & 2
\end{array}\right],\left[\begin{array}{rrr}
3 & 1 & 2 \\
2 & 3 & 1 \\
1 & 2 & 3
\end{array}\right],\left[\begin{array}{rrr}
1 & 2 & 3 \\
3 & 1 & 2 \\
2 & 3 & 1
\end{array}\right]
\right].\]
With the cocycle with $\mathbb{Z}_3$ coefficients specified by
\[\theta=\left[
\left[\begin{array}{rrr}
0 & 0 & 0 \\
2 & 0 & 2 \\
2 & 2 & 0
\end{array}\right],\left[\begin{array}{rrr}
0 & 1 & 2 \\
2 & 0 & 2 \\
1 & 0 & 0
\end{array}\right],\left[\begin{array}{rrr}
0 & 0 & 1 \\
2 & 0 & 0 \\
1 & 1 & 0
\end{array}\right]
\right], \]
we compute the following invariant values for prime knots with up to 8 crossings
and prime links with up to seven crossings. Note in particular that since the
trivial invariant values for links of two and three components are 27 and 81
respectively, this example detects the non-triviality of every link on the list.
\[\begin{array}{r|l}
\Phi_{\theta}(K) & K \\\hline
9 & 4_1,5_1,5_2,6_2,6_3,7_1,7_2,7_3,7_5,7_6, 8_1, 8_2, 8_3,8_4,8_6,8_7,8_8,8_9,
8_{12}, 8_{13}, 8_{14}, 8_{16}, 8_{17} \\
27 & 6_1, 8_{10}\\
9+18u & 3_1, 7_4, 7_7, 8_{15}, 8_{21} \\
9+18u^2 & 8_5, 8_{19} \\
13+4u+10u^2 & 8_{11} \\
16+4u+7u^2& 8_{20} \\
25+28u+28u^2 & 8_{18}
\end{array}\]
\[\begin{array}{r|l}
\Phi_{\theta}(L) & L \\\hline
9 & L2a1, L4a1, L5a1, L6a2, L6a4, L6n1, L7a2,L7a3, L7a4, L7a6, L7a7,L7n1, L7n2 \\
9+18u & L6a1 \\
9+18u^2 & L6a3, L6a5, L7a1 \\
13+10u+4u^2 & L7a5
\end{array}\]
\end{example}
\section*{Acknowledgments}
The first author was supported by Simons Foundation collaboration grant 316709.
The second author was supported by JSPS KAKENHI Grant Number 16K17600.
| {
    "timestamp": "2019-02-19T02:19:50",
    "yymm": "1809",
    "arxiv_id": "1809.09442",
    "language": "en",
    "url": "https://arxiv.org/abs/1809.09442",
    "abstract": "We introduce a new algebraic structure called \\textit{local biquandles} and show how colorings of oriented classical link diagrams and of broken surface diagrams are related to tribracket colorings. We define a (co)homology theory for local biquandles and show that it is isomorphic to Niebrzydowski's tribracket (co)homology. This implies that Niebrzydowski's (co)homology theory can be interpreted similarly to biquandle (co)homology theory. Moreover, through the isomorphism between the two cohomology groups, we show that Niebrzydowski's cocycle invariants and local biquandle cocycle invariants are the same.",
    "subjects": "Geometric Topology (math.GT)",
    "title": "Local biquandles and Niebrzydowski's tribracket theory"
} |
https://arxiv.org/abs/1806.09062 | A simplified and unified generalization of some majorization results | We consider positive, integral-preserving linear operators acting on $L^1$ space, known as stochastic operators or Markov operators. We show that, on finite-dimensional spaces, any stochastic operator can be approximated by a sequence of stochastic integral operators (such operators arise naturally when considering matrix majorization in $L^1$). We collect a number of results for vector-valued functions on $L^1$, simplifying some proofs found in the literature. In particular, matrix majorization and multivariate majorization are related in $\mathbb{R}^n$. In $\mathbb{R}$, these are also equivalent to convex function inequalities. | \section{Introduction}\label{sec:Intro}
In this work, we connect several generalizations of majorization in reference to vector-valued measurable functions; notably, matrix majorization, multivariate majorization, mixing distance, $f$-divergence, and coarse graining.
While some results are known, they appear rather obscure in the literature; we also simplify arguments when possible.
We first recall the definition of (vector) majorization: if $x, y\in \mathbb{R}^n$, we say $x$ is \emph{majorized} by $y$, denoted $x\prec y$, if \begin{eqnarray*}
\sum_{j=1}^{k}x^{\downarrow}_{j}\leq \sum_{j=1}^{k}y^{\downarrow}_{j}\quad \forall k\in \{1,\dots,n-1\}
\end{eqnarray*}
with equality when $k=n$, where $x$ has been reordered so that $x^{\downarrow}_{1}\geq x^{\downarrow}_{2}\geq \cdots \geq x^{\downarrow}_{n}$ (and similarly for $y$). A well-known theorem of Hardy, Littlewood, and P\'{o}lya states that $x\prec y$ is equivalent to the existence of a doubly stochastic matrix $S$ such that $x=Sy$ \cite[Theorem 8]{HLP}.
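The partial-sum test in this definition is straightforward to implement. A minimal sketch (the function name `majorized` is ours; exact integer or rational inputs avoid floating-point issues in the final equality check):

```python
# Test whether x is majorized by y: compare partial sums of the
# decreasing rearrangements, with equality for the full sums.
def majorized(x, y):
    """Return True iff x is majorized by y (x, y of equal length)."""
    assert len(x) == len(y)
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    sx = sy = 0
    for k in range(len(xs) - 1):
        sx, sy = sx + xs[k], sy + ys[k]
        if sx > sy:
            return False
    return sum(xs) == sum(ys)

print(majorized([1, 1, 1], [3, 0, 0]))  # True: (1,1,1) is majorized by (3,0,0)
print(majorized([2, 1, 0], [1, 1, 1]))  # False
```

By the Hardy–Littlewood–Pólya theorem quoted above, a `True` result is equivalent to the existence of a doubly stochastic $S$ with $x=Sy$.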
Consider now two matrices $R\in M_{m\times n}(\mathbb{R})$ and $T\in M_{p\times n}(\mathbb{R})$. We say $R$ is \emph{matrix majorized} by $T$, denoted $R\prec T$ (when matrix majorization must be distinguished from vector majorization, it is sometimes denoted $\prec_d$ or $\prec_S$), if there exists a column stochastic matrix $S\in M_{m\times p}(\mathbb{R})$ such that $R= ST$. For more information on matrix majorization, see \cite{dahl1999}; restricting to the special case where $m=p$ and $S$ is doubly stochastic yields a more restrictive ordering called multivariate majorization, see \cite[Chapter 15]{MOA}. Matrix majorization has recently been generalized to quantum majorization between bipartite states \cite{Gour2017}.
We denote by $L^1(X, \mu)$, or simply $L^1(X)$ if the measure $\mu$ is clear from context, the set of all measurable functions $f:X\to\mathbb{R}$ satisfying $\int_X|f|\,d\mu<\infty$.
If $f\in L^1(X)$, the distribution function of $f$ is defined by $d_f(s)= \mu (\{x: f(x)>s\} )$ for all real $s$,
and the decreasing rearrangement of $f$ is defined by
\begin{eqnarray*}
f^\downarrow (t)&=& \inf \{s: d_f(s)\leq t\},\quad \quad\quad 0 \leq t \leq \mu( X)\\
&=& \sup \{s: d_f(s)> t\}, \quad \quad\quad 0 \leq t \leq \mu(X).
\end{eqnarray*}
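For a nonnegative simple function both quantities can be computed directly. A small Python sketch (the $(\textnormal{value}, \textnormal{measure})$ representation is our own illustrative choice, and we only evaluate $f^\downarrow$ on $[0, \mu(X))$):

```python
# Distribution function and decreasing rearrangement of a nonnegative simple
# function, stored as (value, measure) pairs over disjoint sets.
def d_f(pairs, s):
    # d_f(s) = mu({x : f(x) > s})
    return sum(m for v, m in pairs if v > s)

def f_down(pairs, t):
    # f_down(t) = inf{s : d_f(s) <= t}; valid for 0 <= t < mu(X)
    cum = 0.0
    for v, m in sorted(pairs, key=lambda p: -p[0]):
        cum += m
        if t < cum:
            return v
    raise ValueError("t must lie in [0, mu(X))")

# f = 1 on a set of measure 2 and f = 3 on a set of measure 1:
pairs = [(1.0, 2.0), (3.0, 1.0)]
print(d_f(pairs, 2.0))     # 1.0 (only the set where f = 3 counts)
print(f_down(pairs, 0.5))  # 3.0
print(f_down(pairs, 1.5))  # 1.0
```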
We are now in a position to define continuous majorization. Typically the word ``continuous'' is dropped when it is clear from context.
\begin{definition}\label{def:cont} Let $(X, \mu)$ and $(Y, \nu)$ be finite measure spaces for which $a=\mu( X)=\nu(Y)$.
If $f\in L^1(X,\mu)$ and $g\in L^1( Y, \nu)$ satisfy
\begin{eqnarray*}
\int_0^t f^\downarrow d x&\leq & \int_0^t g^\downarrow d x\quad \forall t:\, 0\leq t\leq a\\
\textnormal{and }\int_0^a f^\downarrow d x&=& \int_0^a g^\downarrow d x,
\end{eqnarray*}
where the integration is with respect to Lebesgue measure, then we say that $f$ is majorized by $g$, denoted $f\prec g$.
\end{definition}
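This definition can be checked numerically for simple functions: since $t\mapsto\int_0^t f^\downarrow$ is piecewise linear, it suffices to compare the two cumulative integrals at the breakpoints of either rearrangement (both vanish at $t=0$). A sketch under that representation (function and variable names are ours):

```python
# Check f ≺ g for simple functions given as (value, length) pairs with
# equal total measure a.
def cum_int_down(pairs, t):
    # integral of the decreasing rearrangement over [0, t]
    total = 0.0
    for v, m in sorted(pairs, key=lambda p: -p[0]):
        step = min(m, t)
        total += v * step
        t -= step
    return total

def breakpoints(pairs):
    ts, c = [], 0.0
    for _, m in sorted(pairs, key=lambda p: -p[0]):
        c += m
        ts.append(c)
    return ts

def cont_majorized(f, g, tol=1e-9):
    a = sum(m for _, m in f)
    if abs(cum_int_down(f, a) - cum_int_down(g, a)) > tol:
        return False          # total integrals must agree
    # both cumulative integrals are piecewise linear, so checking the union
    # of their breakpoints suffices
    return all(cum_int_down(f, t) <= cum_int_down(g, t) + tol
               for t in breakpoints(f) + breakpoints(g))

f = [(2.0, 2.0)]              # f ≡ 2 on a set of measure 2
g = [(3.0, 1.0), (1.0, 1.0)]  # g takes values 3 and 1 on sets of measure 1
print(cont_majorized(f, g))   # True: the "flatter" function is majorized
print(cont_majorized(g, f))   # False
```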
Following \cite{dahl1999}, we define the positively homogeneous, subadditive functionals on $\mathbb{R}^n$, also called \emph{sublinear} functionals, to be all functionals $\psi$ satisfying $\psi(\lambda x)=\lambda\psi(x)$ and $\psi(x+y)\leq \psi(x)+\psi(y)$ for all $x, y\in \mathbb{R}^n$ and all $\lambda\geq 0$.
Part of \cite[Theorem 3.3]{dahl1999} shows that if $R\in M_{m\times n}(\mathbb{R})$ and $T\in M_{p\times n}(\mathbb{R})$, then $R\prec T$ is equivalent to $\sum_{j=1}^m\psi(r_j)\leq \sum_{j=1}^p\psi(t_j)$ for all sublinear functionals $\psi$, where $r_j$ is the $j$th row of the matrix $R$, and similarly for $t_j$.
Given a measure space $(X,\mu)$, let $L^1(X, \mu, \mathbb{R}^n)$, or simply $L^1(X, \mathbb{R}^n)$, denote the set of all measurable functions $f$ from $(X,\mu)$ to $\mathbb{R}^n$ that satisfy $\int_X \vert f\vert d\mu <\infty$, where $\vert f \vert(x)=\sum_{k=1}^n |f_k(x)|$.
The notion of a stochastic matrix was generalized to a stochastic operator on $L^1([0,1])$ in \cite{RSS1980}; we provide the corresponding definition for a stochastic operator from $L^1(Y, \nu)$ to $L^1(X, \mu)$. Such an operator is sometimes referred to as a \emph{Markov operator} in the literature \cite{LMbook}.
\begin{definition}
Let $(X,\mu)$ and $(Y,\nu)$ be $\sigma$-finite measure spaces. A linear operator $S:L^1(Y)\to L^1(X)$ is called a {\em stochastic operator} if
\begin{enumerate}
\item $S$ is positive (that is, $S$ takes positive elements to positive elements), and
\item $\int_X Sf d\mu = \int_Y f d\nu, \quad \forall f\in L^1(Y)$.
\end{enumerate}
Moreover, if in addition to the two conditions above, $\mu(X)=\nu(Y)<\infty$ and $S1=1$, then $S$ is called a {\em doubly stochastic operator}.
\end{definition}
We have the following lemma which will be useful later on.
\begin{lemma} \label{absineq} Let $(X,\mu )$ and $(Y,\nu)$ be $\sigma$-finite measure spaces. Let $f\in L^1(Y)$ and $S:L^{1}(Y)\to L^{1}(X)$ be a stochastic operator, then $\int_{X}\vert (Sf)(t)\vert d\mu(t)\le \int_{Y}\vert f(y)\vert d\nu(y)$. \end{lemma}
\begin{proof} Let $f_+(y)=\max(f(y),0)$ and $f_-(y)=\max(-f(y),0)$, so that $f=f_+-f_-$ and, since $S$ is linear and positive, $\vert Sf\vert=\vert Sf_+-Sf_-\vert\le Sf_++Sf_-$. Then \begin{eqnarray*} \int_{X}\vert Sf(x) \vert d\mu(x) &\le &\int_{X} S(f_+)(x)d\mu(x) + \int_{X} S(f_-)(x)d\mu(x)\\ &=& \int_{Y}f_+(y)d\nu(y)+\int_{Y}f_-(y)d\nu(y)\\ & =& \int_{Y}\vert f(y)\vert d\nu(y).\end{eqnarray*} \end{proof}
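In the finite setting with counting measures, a stochastic operator is just a column-stochastic matrix, and the lemma says that it contracts the $\ell^1$ norm. A quick randomized sanity check (all data are arbitrary):

```python
import random

# Lemma check under counting measures: a column-stochastic matrix S
# satisfies ||Sf||_1 <= ||f||_1.
random.seed(3)
m, p = 4, 7
S = [[random.random() for _ in range(p)] for _ in range(m)]
for j in range(p):                      # normalize columns to sum to 1
    c = sum(S[i][j] for i in range(m))
    for i in range(m):
        S[i][j] /= c
f = [random.uniform(-3, 3) for _ in range(p)]
Sf = [sum(S[i][j] * f[j] for j in range(p)) for i in range(m)]
print(sum(abs(t) for t in Sf) <= sum(abs(t) for t in f) + 1e-12)  # True
```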
Note that the absolute value function is a nonnegative sublinear functional on $\mathbb{R}$; we will later show that a similar inequality holds for all nonnegative sublinear functionals on $\mathbb{R}^n$. We first need to describe how $S$ acts on an element of $ L^1(Y,\mathbb{R}^n)$.
If $f=(f_1,f_2,...,f_n)\in L^1(Y,\mathbb{R}^n)$, then $S$ acts componentwise on $f$; that is, $Sf=(Sf_1,Sf_2,...,Sf_n)$.
\begin{definition} \label{exp} Let $(X,\mu)$ be a $\sigma$-finite measure space and let $\mathcal{P}=\{E_i\}_{i\in \mathbb{N}}$ be a partition of $X$ into disjoint measurable sets of finite measure. We define $M_{\mathcal{P}}$ to be the operator which maps every $f\in L^1(X)$ to $\sum_{i\in \mathbb{N}} a_i \chi_{E_i}$ where $a_i=\frac{1}{\mu(E_i)}\int_{E_i}f(x)d\mu(x)$ if $E_i$ has positive measure and $a_i=0$ if $E_i$ has measure zero.
\end{definition}
It is easy to verify that $M_{\mathcal{P}}$ in Definition \ref{exp} is a stochastic operator on $L^1(X)$; in fact, it maps $1\mapsto 1$, and is therefore a doubly stochastic operator whenever $\mu(X)<\infty$.
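On a finite measure space the averaging operator $M_{\mathcal{P}}$ is easy to exhibit concretely. The following sketch (the point masses, cells, and test function are our own arbitrary choices) verifies that it preserves integrals and fixes the constant function $1$:

```python
# The averaging operator M_P on a six-point space with point masses mu,
# partitioned into cells; on each cell it averages f with respect to mu.
def M_P(f, mu, cells):
    out = [0.0] * len(f)
    for cell in cells:
        m = sum(mu[i] for i in cell)
        a = sum(f[i] * mu[i] for i in cell) / m if m > 0 else 0.0
        for i in cell:
            out[i] = a
    return out

mu = [1.0, 2.0, 1.0, 0.5, 0.5, 1.0]
f = [3.0, 0.0, 1.0, 4.0, 4.0, 2.0]
cells = [[0, 1], [2, 3, 4], [5]]
g = M_P(f, mu, cells)

integral = lambda u: sum(ui * mi for ui, mi in zip(u, mu))
print(abs(integral(g) - integral(f)) < 1e-12)  # integral preserved: True
print(M_P([1.0] * 6, mu, cells) == [1.0] * 6)  # M_P(1) = 1: True
```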
Let $K$ be a convex set. A function $f:K\rightarrow K$ is \emph{affine} if $f(\lambda x +(1-\lambda)y)=\lambda f(x)+(1-\lambda)f(y)$ for all $x, y\in K$ and all $\lambda\in (0,1)$. Affine transformations on measure spaces are used to define coarse graining, a relation on the measurement statistics coming from two positive operator valued measures \cite{BQ1990, BQ1993, Heinonen, HZbook, Zan}. The stochastic operators are exactly the affine transformations between the nonnegative faces of the unit balls of the respective $L^1$ spaces; note that the $L^1$ norm is one of the few norms for which the nonnegative elements of the unit ball form a face.
The following theorem is a combination of two well-known results in the literature.
\begin{theorem}\label{HLP}
If $f\in L^1(X,\mu)$, $g\in L^1(Y,\nu)$ where $\mu(X)=\nu(Y)< \infty$, then the following are equivalent:
\begin{enumerate}
\item
$f\prec g$, as in Definition \ref{def:cont}.
\item For all convex functions $\phi:\mathbb{R}\to \mathbb{R}$,
$$\int_X \phi(f)d\mu\leq \int_Y \phi(g)d\nu.$$
\item
There exists a doubly stochastic operator $D:L^1(Y)\to L^1(X)$ such that $f=Dg$.
\end{enumerate}
\end{theorem}
\begin{proof}
The equivalence of (1) and (2) was proved by Chong \cite[Theorem 2.5]{Chong}, and the equivalence of (2) and (3) by Day \cite[Theorem 4.9]{Day}.
\end{proof}
Of particular interest are the integral operators which are stochastic or doubly stochastic.
\begin{definition}
A \emph{stochastic kernel} is a measurable function $S:X\times Y\to[0,\infty)$ such that $\int_X S(x,y)d\mu(x)=1$ for almost all $y\in Y$.
A \emph{doubly stochastic kernel} is a stochastic kernel with the additional property that $\int_Y S(x,y)d\nu(y)=1$ for almost all $x\in X$.
\end{definition}
\begin{definition} An integral operator $M$ from $L^1(Y)$ to $L^1(X)$ given by $Mg=\int_{Y}S(x,y)g(y)d\nu(y)$ is said to be a \emph{stochastic integral operator} (resp.\ \emph{doubly stochastic integral operator}) if $S(x,y)$ is a stochastic kernel (resp.\ a doubly stochastic kernel).
\end{definition}
All stochastic integral operators are stochastic operators, and all doubly stochastic integral operators are doubly stochastic operators. The converse of either statement is false: the identity operator is a doubly stochastic operator, but it is neither a stochastic nor a doubly stochastic integral operator.
\section{Convex function inequalities}
We now discuss some properties of convex and sublinear functionals.
\begin{proposition} \label{prop:sublinHLP} Let $K$ be a convex cone of $\mathbb{R}^n$. Let $\phi:K \to \mathbb{R}$ be a convex functional. Then $\psi (v,x):=x\phi(\frac{v}{x})$ is a sublinear functional on the cone $K\times (0,\infty)$.
\end{proposition}
\begin{proof}
For $\lambda>0$, we have $\psi(\lambda(v,x))=\lambda x \phi (\frac{\lambda v}{\lambda x})=\lambda x \phi (\frac{v}{x})=\lambda \psi(v,x)$.
Next, let $v_1,v_2\in K$ and $x_1,x_2\in (0,\infty)$. Then by convexity of $\phi$ we have
$$\phi\Big(\frac{v_1+v_2}{x_1+x_2}\Big)=\phi\Big(\frac{x_1v_1 }{x_1(x_1+x_2)}+ \frac{x_2v_2}{x_2(x_1+x_2)}\Big)\leq \frac{x_1}{x_1+x_2}\phi\Big(\frac{v_1}{x_1}\Big)+\frac{x_2}{x_1+x_2}\phi\Big(\frac{v_2}{x_2}\Big).$$
Multiplying both sides by $x_1+x_2$ gives $\psi(v_1+v_2,x_1+x_2)\leq \psi(v_1,x_1)+\psi(v_2,x_2)$, so $\psi$ is a sublinear functional on $K\times (0,\infty)$.
\end{proof}
Due to issues with convergence, we have avoided considering $x=0$ in the above proposition, whence the restriction to $ (0,\infty)$. However, one can consider $x\rightarrow 0^+$ to obtain the recession function $\phi_\infty$. Note that the converse of the above proposition is immediate; any sublinear functional on a convex set is automatically convex on that set.
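The proposition can be spot-checked numerically. The following sketch verifies positive homogeneity and subadditivity of the perspective $\psi(v,x)=x\phi(v/x)$ for the convex choice $\phi(t)=t^2$ (the choice of $\phi$ and the sampling ranges are ours):

```python
import random

# psi(v, x) = x * phi(v/x) with phi(t) = t^2, i.e. psi(v, x) = v^2 / x.
phi = lambda t: t * t
psi = lambda v, x: x * phi(v / x)

random.seed(0)
ok = True
for _ in range(1000):
    v1, v2 = random.uniform(-5, 5), random.uniform(-5, 5)
    x1, x2 = random.uniform(0.1, 5), random.uniform(0.1, 5)
    lam = random.uniform(0.1, 5)
    # positive homogeneity: psi(lam*(v, x)) = lam * psi(v, x)
    ok = ok and abs(psi(lam * v1, lam * x1) - lam * psi(v1, x1)) < 1e-8
    # subadditivity: psi((v1, x1) + (v2, x2)) <= psi(v1, x1) + psi(v2, x2)
    ok = ok and psi(v1 + v2, x1 + x2) <= psi(v1, x1) + psi(v2, x2) + 1e-8
print(ok)  # True
```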
\begin{proposition}\label{thm:Lipseq}
Let $K$ be a convex subset of $\mathbb{R}^n$. For any continuous convex nonnegative functional $\phi$ on $K$, there exists an increasing sequence of Lipschitz convex nonnegative functionals $\{\phi_k\}_{k=1}^\infty$ that converges pointwise to it on $K$. If further, $K$ is a convex cone and $\phi$ is sublinear, then $\{\phi_k\}_{k=1}^\infty$ can be taken to be sublinear.
\end{proposition}
If $K$ is a closed convex cone in $\mathbb{R}^n$, then in fact every continuous sublinear functional $\phi:K\to \mathbb{R}$
is Lipschitz.
To prove Theorem \ref{contmaj}, we require the generalization of Jensen's inequality to the multivariate case; see \cite[Proposition 16.C.1]{MOA}.
\begin{theorem}[Multivariate Jensen's inequality]\label{thm:Jensen} Let $(X, \mu)$ be a probability space. Let $\phi:\mathbb{R}^n\to \mathbb{R}$ be a convex function and let $f\in L^1(X, \mathbb{R}^n)$. Then $\phi(\int_{X}f(x) d\mu(x) )\le \int_{X}\phi(f(x)) d\mu(x)$.
\end{theorem}
If $\phi$ is a sublinear function, we no longer require the measure to be a probability measure.
\begin{theorem}[Roselli-Willem inequality]\label{RW}\cite[Theorem 6]{RW} Let $(X, \mu)$ be an arbitrary measure space. Let $\phi:\mathbb{R}^n\to \mathbb{R}$ be a sublinear function and let $f\in L^1(X, \mathbb{R}^n)$. Then $\phi(\int_{X}f(x) d\mu(x) )\le \int_{X}\phi(f(x)) d\mu(x)$.
\end{theorem}
The related concept of $f$-divergence (which we call $\phi$-divergence since $\phi$ is convex) was introduced by Csisz{\'a}r \cite{Csiszar} and was studied extensively by statisticians \cite[Chapter 2]{CohenBook} and \cite{CD05, JC15, LV06, MM13, SV}. Similar concepts with different names have also appeared in the Physics literature \cite[Chapter 6]{QIStext} and \cite{Morimoto, Zan}.
\begin{definition} Let $(X,\mu)$ be a measure space and $V$ be a real vector space. Let $\phi$ be a real valued convex function on $V$. Let $f:X \to V$ and let $h:X\to (0,\infty)$ with all functions being measurable. Then the $\phi$-divergence of $f$ with respect to $h$ is $\int_X h\phi (\frac{f}{h}) \,d\mu$.
\end{definition}
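A standard instance: on a finite space with counting measure, taking $\phi(t)=t\log t$ turns the $\phi$-divergence of $f$ with respect to $h$ into the Kullback--Leibler divergence. A short sketch (function names are ours):

```python
import math

def phi_divergence(phi, f, h):
    # sum_x h(x) * phi(f(x)/h(x)) on a finite space with counting measure
    return sum(hx * phi(fx / hx) for fx, hx in zip(f, h))

phi_kl = lambda t: t * math.log(t)   # convex on (0, inf)
f = [0.5, 0.25, 0.25]                # two probability mass functions
h = [1 / 3, 1 / 3, 1 / 3]
kl = phi_divergence(phi_kl, f, h)    # equals sum_x f(x) * log(f(x)/h(x))
print(round(kl, 6))                  # 0.058892 (nats)
```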
The following result is useful in relating $\phi$-divergence inequalities with sublinear functional integral inequalities:
\begin{theorem} \label{thm:all} Let $(X,\mu)$ and $(Y,\nu)$ be measure spaces, $f\in L^1(X,K)$, $g\in L^1(Y,K)$, $h\in L^1(X,(0,\infty))$ and $k\in L^1(Y,(0,\infty))$, where $K$ is a convex cone. Define $F:X \to K \times \mathbb{R}$ and $G:Y \to K \times \mathbb{R}$ as $F(x)=(f(x),h(x))$ and $G(y)=(g(y),k(y))$. The following are equivalent.
\begin{enumerate}
\item\label{i:fdiv} The $\phi$-divergence of $f$ with respect to $h$ is less than or equal to that of $g$ with respect to $k$; that is, $\int_X \phi (\frac{f}{h})h d\mu\leq \int_Y \phi (\frac{g}{k})k d\nu$, for all convex functions $\phi:K\to \mathbb{R}$.
\item\label{i:sublin} $\int_X \psi (F(x))d\mu(x)\leq \int_Y \psi (G(y))d\nu(y)$ for all sublinear functionals $\psi:K\times (0,\infty)\to\mathbb{R}$.
\end{enumerate} \end{theorem}
\begin{proof}
(\ref{i:fdiv})$\Rightarrow $ (\ref{i:sublin}):
Let $\psi:K \times (0,\infty) \to \mathbb{R}$ be a sublinear functional and let $\phi(v)=\psi(v,1)$ for all $v\in K$; then $\phi$ is a real valued convex function on $K$. By the positive homogeneity of $\psi$ we have $\psi(F(x))=\psi(f(x),h(x))=h(x)\,\psi\big(\tfrac{f(x)}{h(x)},1\big)=\phi\big(\tfrac{f(x)}{h(x)}\big)h(x)$, and therefore
$\begin{array}{rcl}
\int_X \psi (F(x))d\mu(x)
& = & \int_X \phi (\frac{f(x)}{h(x)})h(x)d\mu(x)\\
& \le & \int_Y \phi (\frac{g(y)}{k(y)})k(y)d\nu(y)\\
& = & \int_Y \psi (G(y))d\nu(y).
\end{array}$
(\ref{i:sublin})$\Rightarrow $ (\ref{i:fdiv}): This implication is a straightforward application of Proposition \ref{prop:sublinHLP}. Let $\phi $ be any real-valued convex functional on $K$. Then $\psi (v,x)=x\phi(\frac{v}{x})$ is a sublinear functional on $K\times (0,\infty)$, and we have
$\begin{array}{rcl}
\int_X \phi (\frac{f(x)}{h(x)})h(x)d\mu(x) & = & \int_X \psi(f(x),h(x))d\mu(x)\\
& \le & \int_Y \psi (g(y),k(y))d\nu(y)\\
& = & \int_Y \phi (\frac{g(y)}{k(y)})k(y)d\nu(y).
\end{array}$
\end{proof}
\section{A generalization of matrix majorization}
With the notion of a stochastic operator from $L^1(Y, \nu)$ to $L^1(X, \mu)$, we can generalize the definition of matrix majorization:
\begin{definition}\label{defn:matrixmaj}
Let $(X,\mu)$ and $(Y,\nu)$ be measure spaces. Let $f=(f_1, f_2, \dots, f_n)\in L^1(X,\mathbb{R}^n)$ and $g=(g_1, g_2, \dots, g_n)\in L^1(Y,\mathbb{R}^n)$. Then we say that $f$ is \emph{matrix majorized} by $g$, denoted $f\prec_M g$, if there exists a stochastic operator $S$ such that $f=S(g)$; i.e., $f_k=Sg_k$ for all $k=1, \dots, n$.
\end{definition}
It is straightforward to check that matrix majorization between measurable functions in $L^1$ is a reflexive, transitive relation and therefore a preorder; this generalizes the corresponding result for matrices in \cite[Theorem 3.3]{dahl1999}.
The term matrix majorization was coined by Dahl \cite{dahl1999}. To see that our formulation generalizes Dahl's, we restrict ourselves to the special case where $X=\{ 1,2,...,m\}$ and $Y=\{ 1,2,...,p\}$ are finite sets and $\mu$ and $\nu$ are counting measures. We can represent each function $f=(f_1,f_2,...,f_n)\in L^1(X,\mathbb{R}^n)$ as an $n$ by $m$
matrix $A_{f}$ whose $k$th column is $f(k)$. Then $f$ is matrix majorized by $g$ if and only if there exists a row stochastic matrix $S$ such that $A_{f}=A_{g}S$; the latter is Dahl's formulation of matrix majorization on matrices.
We now note that part of \cite[Theorem 3.3]{dahl1999} can now be rephrased as follows:
\begin{theorem} Let $X=\{ 1,2,...,m\}$, $Y=\{ 1,2,...,p\}$, $f\in L^1(X,\mathbb{R}^n)$ and $g\in L^1(Y,\mathbb{R}^n)$. Then $f$ is matrix majorized by $g$ if and only if $\sum_{k=1}^m \phi (f(k)) \le \sum_{k=1}^p \phi (g(k))$ for all sublinear functionals $\phi: \mathbb{R}^n\to \mathbb{R} $. \end{theorem}
This suggests the following one-sided extension to the general case.
\begin{theorem} \label{contmaj} Let $(X,\mu)$ and $(Y,\nu)$ be measure spaces, $f\in L^1(X,\mathbb{R}^n)$ and $g\in L^1(Y,\mathbb{R}^n)$. If there exists a stochastic kernel $S(x,y): X\times Y\rightarrow [0,\infty)$ such that $f(x)=\int_Y S(x,y)g(y)d\nu(y)$ then $\int_X \phi (f(x))d\mu(x) \le \int_Y \phi (g(y))d\nu(y)$ for all sublinear functionals $\phi: \mathbb{R}^n\to \mathbb{R}$. \end{theorem}
\begin{proof}
Suppose there exists a stochastic kernel $S(x,y): X\times Y\rightarrow [0,\infty)$ such that $f(x)=\int_Y S(x,y)g(y)d\nu(y)$, and let $\phi :\mathbb{R}^n\to \mathbb{R}$ be sublinear.
Hence by using Theorem \ref{RW} and Fubini's Theorem
$\begin{array}{rcl}
\int_X \phi (f(x))d\mu(x)& = &\int_X \phi (\int_Y S(x,y)g(y)d\nu(y))d\mu(x) \\
&\le & \int_X \int_Y \phi (S(x,y)g(y))d\nu(y)d\mu(x)\\
& = & \int_X \int_Y S(x,y)\phi(g(y))d\nu(y)d\mu(x)\\
& = & \int_Y (\int_X S(x,y)d\mu(x))\phi(g(y))d\nu(y)\\
& = & \int_Y \phi(g(y))d\nu(y),
\end{array}$
as desired.
\end{proof}
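In the finite setting (counting measures) the theorem can be sanity-checked with the Euclidean norm, which is sublinear; the kernel condition $\int_X S(x,y)d\mu(x)=1$ becomes column sums equal to one. A randomized sketch (dimensions and data are arbitrary):

```python
import math
import random

# If f(x) = sum_y S(x, y) g(y) with column sums of S equal to 1, then
# sum_x phi(f(x)) <= sum_y phi(g(y)) for the sublinear phi = Euclidean norm.
random.seed(1)
m, p, n = 4, 6, 3
S = [[random.random() for _ in range(p)] for _ in range(m)]
for j in range(p):                      # normalize columns to sum to 1
    c = sum(S[i][j] for i in range(m))
    for i in range(m):
        S[i][j] /= c
g = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(p)]
f = [[sum(S[i][j] * g[j][k] for j in range(p)) for k in range(n)]
     for i in range(m)]
norm = lambda v: math.sqrt(sum(t * t for t in v))
lhs = sum(norm(fi) for fi in f)
rhs = sum(norm(gj) for gj in g)
print(lhs <= rhs + 1e-12)  # True
```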
Note that, if we take $X=Y=[0,1]$ in Theorem \ref{contmaj}, then this is nearly the definition of mixing distance \cite[Definition 1a]{RSS1978}, except that the authors of \cite{RSS1978} allow $\phi$ to be any convex function (or certain subsets of the convex functions), whereas we work with sublinear (positively homogeneous convex) functionals in accordance with \cite{dahl1999}. This similarity hints at a connection between matrix majorization and the mixing distance.
\begin{lemma}\label{seqpart} Let $(X,\mu)$ be a $\sigma$-finite measure space and let $f\in L^1(X)$. Then there exists a sequence of partitions $\{\mathcal{P}_n\}_{n=1}^\infty$ of $X$ into disjoint sets of finite measure such that $\{M_{\mathcal{P}_n}f\}_{n=1}^\infty$ converges to $f$ in the $L^1$ norm. \end{lemma}
\begin{proof} Let $\mathcal{P}_n=\{E_j \}_{j=0}^{2n^2+1}$ where $E_0=\{ x\in X: f(x)<-n\}$, $E_j=\{x\in X: f(x)\in [-n+\frac{j-1}{n},-n+\frac{j}{n})\} $ for $1\le j\le 2n^2$, and $E_{2n^2+1}=\{ x\in X: f(x)\ge n\}$. If $E_j$ has infinite measure for some $j\in \{0, \dots, 2n^2+1\}$ then, since $X$ is $\sigma$-finite, $E_j$ is a countable disjoint union of sets of finite measure; replace each such $E_j$ in the partition by these sets of finite measure. It is then easy to verify that $\{M_{\mathcal{P}_n}f\}_{n=1}^\infty$ converges to $f$ in the $L^1$ norm. \end{proof}
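To illustrate the construction, here is a discretized version on $X=[0,1]$: grouping points by which level interval of width $1/n$ contains $f(x)$ forces $\Vert M_{\mathcal{P}_n}f - f\Vert_1 \le 1/n$. (The grid discretization and the test function are our own illustrative choices.)

```python
# Level-set partition on X = [0,1], discretized into a fine grid; averaging
# over each cell changes f by at most the bin width 1/n in the L^1 norm.
N = 10_000                    # grid resolution; each point carries mass 1/N
xs = [(i + 0.5) / N for i in range(N)]
f = [x * x for x in xs]       # an arbitrary test function with values in [0, 1)

def level_partition_error(n):
    cells = {}
    for i in range(N):
        j = int((f[i] + n) * n)   # index of the level interval containing f(x)
        cells.setdefault(j, []).append(i)
    err = 0.0
    for idx in cells.values():
        a = sum(f[i] for i in idx) / len(idx)   # cell average of f
        err += sum(abs(f[i] - a) for i in idx) / N
    return err

print(level_partition_error(2) <= 1 / 2)    # True
print(level_partition_error(10) <= 1 / 10)  # True
```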
We note that if $\mathcal{P}=\{E_i\}_{i\in \mathbb{N}}$ and $\mathcal{Q}=\{F_j\}_{j\in \mathbb{N}}$ are two partitions of $X$ into disjoint sets of finite measure, then we can form the intersection partition $\mathcal{P}\cap \mathcal{Q}=\{E_i\cap F_j: i,j\in \mathbb{N}\}$.
\begin{lemma}\label{seqpart2} Let $(X,\mu)$ be a $\sigma$-finite measure space and let $V$ be a finite dimensional subspace of $L^1(X)$. Then there exists a sequence of partitions $\{\mathcal{P}_n\}_{n=1}^\infty$ of $X$ into disjoint sets of finite measure such that $\{M_{\mathcal{P}_n}f\}_{n=1}^\infty$ converges to $f$ in the $L^1$ norm for all $f\in V$. \end{lemma}
\begin{proof} The proof is by induction on the dimension of $V$, the base case being Lemma \ref{seqpart}. Suppose the claim holds for subspaces of dimension $d$ and let $V$ be a subspace of dimension $d+1$. Let $S$ be a subspace of $V$ of dimension $d$; by the induction hypothesis there exists a sequence of partitions $\{\mathcal{P}_n\}_{n=1}^\infty$ of $X$ into disjoint sets of finite measure such that $\{M_{\mathcal{P}_n}f\}_{n=1}^\infty$ converges to $f$ in the $L^1$ norm for all $f\in S$. Now let $g\in V$ with $g\not \in S$; by Lemma \ref{seqpart} there exists a sequence of partitions $\{\mathcal{Q}_n\}_{n=1}^\infty$ into disjoint sets of finite measure such that $\{M_{\mathcal{Q}_n}g\}_{n=1}^\infty$ converges to $g$ in the $L^1$ norm. It is easy to see that the sequence of intersection partitions $\{\mathcal{R}_n=\mathcal{P}_n\cap \mathcal{Q}_n\}_{n=1}^\infty$ satisfies the property that $\{M_{\mathcal{R}_n}f\}_{n=1}^\infty$ converges to $f$ in the $L^1$ norm for all $f\in V$. \end{proof}
\begin{theorem} \label{sto:markov} Let $(X,\mu )$ and $(Y,\nu)$ be $\sigma$-finite measure spaces. Let $S:L^{1}(Y)\to L^{1}(X)$ be a stochastic operator and let $V$ be a finite dimensional subspace of $L^{1}(Y)$. Then there exists a sequence of stochastic integral operators from $L^{1}(Y)$ to $L^{1}(X)$ which converges to $S$ on $V$. \end{theorem}
\begin{proof} Let $\mathcal{P}=\{E_i\}_{i\in \mathbb{N}}$ be a partition of $X$ into disjoint sets of finite measure. Then $M_{\mathcal{P}}S$ is a stochastic operator from $L^{1}(Y)\to L^{1}(X)$; we will show that it is a stochastic integral operator. Fix $x\in X$. Then there exists a unique $k$ such that $x\in E_k$. Define the functional $g_x(f)=(M_{\mathcal{P}}Sf)(x)$. Then $$\begin{array}{rcl} \vert g_x(f)\vert &=& \vert \frac{1}{\mu(E_k)}\int_{E_k}(Sf(t))d\mu(t)\vert \\
& \le& \frac{1}{\mu(E_k)}\int_{X}\vert (Sf)(t)\vert d\mu(t) \\ & \le & \frac{1}{\mu(E_k)}\int_{Y}\vert f(y)\vert d\nu(y)\quad \textnormal{by Lemma \ref{absineq} }\\ & = &\frac{1}{\mu(E_k)}\Vert f\Vert_1.\end{array}$$ Hence $g_x$ is a bounded (and positive) linear functional on $L^1(Y)$, so by the Riesz representation theorem there exists a nonnegative function $h_x\in L^{\infty}(Y)$ such that $g_x(f)=\int_Y f(y)h_x(y)d\nu(y)$. Now let $K_{\mathcal{P}}(x,y)=h_x(y)$ for all $x\in X$ and all $y\in Y$. Since $h_x$ depends only on the cell containing $x$, we may write $K_{\mathcal{P}}(x,y)=\sum_{i\in \mathbb{N}}\chi_{E_i}(x)h_i(y)$ where $h_i=h_x$ for any $x\in E_i$, so $K_{\mathcal{P}}(x,y)$ is measurable. Then $(M_{\mathcal{P}}Sf)(x)=\int_Y K_{\mathcal{P}}(x,y)f(y)d\nu(y)$.
Since $M_{\mathcal{P}}S$ is a stochastic operator, we have
$$\int_X ((M_{\mathcal{P}}S)f)(x) d\mu(x)=\int_Y f(y)d\nu(y)\quad \forall f\in L^1(Y)$$ and by using Fubini's Theorem, we find
$$\begin{array}{rcl}
\int_Y f(y)d\nu(y) &=& \int_X ((M_{\mathcal{P}}S)f)(x) d\mu(x)\\ &=& \int_X\int_Y K_{\mathcal{P}}(x,y)f(y)d\nu(y)d\mu(x)\\
&=&\int_Y f(y) (\int_X K_{\mathcal{P}}(x,y)d\mu(x))d\nu(y).
\end{array}$$
Therefore $\int_X K_{\mathcal{P}}(x,y)d\mu(x)=1$ for almost all $y\in Y$.
Since $V$ is a finite dimensional subspace of $L^{1}(Y)$, the image $S(V)$ is a finite dimensional subspace of $L^{1}(X)$. By Lemma \ref{seqpart2} there is a sequence of partitions $\{\mathcal{P}_n\}_{n=1}^\infty$ of $X$ such that $M_{\mathcal{P}_n}h\to h$ in $L^1$ for every $h\in S(V)$; in particular, $M_{\mathcal{P}_n}Sf\to Sf$ for every $f\in V$. Since each $M_{\mathcal{P}_n}S$ is a stochastic integral operator, the result follows.
\end{proof}
We also have a doubly stochastic version of this theorem:
\begin{theorem}
Let $(X,\mu )$ and $(Y,\nu)$ be finite measure spaces, let $D:L^1(Y)\to L^1(X)$ be a doubly stochastic operator, and let $V$ be a finite dimensional subspace of $L^1(Y)$. Then there exists a sequence of doubly stochastic integral operators from $L^1(Y)$ to $L^1(X)$ which converges to $D$ on $V$.
\end{theorem}
\begin{proof}
Let $\mathcal{P}=\{E_i\}_{i=1}^n$ be a partition of $X$ into disjoint sets of finite measure.
The operator $M_\mathcal{P}:L^1(X)\to L^1(X)$ from Definition \ref{exp} is a doubly stochastic operator, and since the composition of doubly stochastic operators is a doubly stochastic operator, $M_\mathcal{P}D$ is a doubly stochastic operator. By a proof similar to that of Theorem \ref{sto:markov}, for all $f\in L^1(Y)$ we have $(M_{\mathcal{P}}Df)(x)=\int_Y K_{\mathcal{P}}(x,y)f(y)d\nu(y)$ with $\int_X K_{\mathcal{P}}(x,y)d\mu(x)=1$ for almost all $y\in Y$. Taking $f=1$ and using $M_{\mathcal{P}}D1=1$ gives $\int_Y K_{\mathcal{P}}(x,y)d\nu(y)=1$ for almost all $x\in X$, hence $M_{\mathcal{P}}D$ is a doubly stochastic integral operator. The approximation of $D$ on $V$ now follows exactly as in the proof of Theorem \ref{sto:markov}.
\end{proof}
\begin{theorem}\label{main2}
Let $(X,\mu)$ and $(Y,\nu)$ be two $\sigma$-finite measure spaces, $f\in L^1(X,\mathbb{R}^n)$ and $g\in L^1(Y,\mathbb{R}^n)$. If
$f$ is matrix majorized by $g$,
then
$$\int_X \phi (f(x))d\mu(x)\leq \int_Y \phi (g(y))d\nu(y)$$ for all nonnegative sublinear functionals
$ \phi: \mathbb{R}^n\rightarrow [0,\infty)$.
\end{theorem}
\begin{proof}
Let $g=(g_1,...,g_n)$ and $V=\operatorname{span}\{g_1,..., g_n\}$. Since $V$ is a finite dimensional subspace of $L^1(Y)$, Theorem \ref{sto:markov} yields a sequence of stochastic integral operators $\{S_k\}_{k=1}^\infty$ which converges on $V$ to the stochastic operator $S$ given by Definition \ref{defn:matrixmaj}.
Now by using Theorem \ref{contmaj}, for each $k\in\mathbb{N}$ we obtain $\int_X \phi(S_k g)d\mu(x)\leq \int_Y \phi(g)d\nu(y)$ for all sublinear functionals $\phi$.
Since $\phi$ is a finite-valued sublinear functional on $\mathbb{R}^n$, it is Lipschitz; let $c\geq 0$ be a Lipschitz constant for $\phi$. Then for all $k\in\mathbb{N}$,
$$\int_{X}|\phi(S_kg)-\phi(Sg)|d\mu(x) \leq c\sum_{j=1}^n\int_X|S_kg_j-Sg_j|d\mu(x).$$
Since $\lim_{k\to \infty} S_kg_j=Sg_j$ in $L^1$ for all $j$, the left hand side must go to zero which means that $\lim_{k\to \infty }\int_{X}\phi(S_kg)d\mu(x)=\int_{X}\phi(Sg)d\mu(x) $. Therefore we have
$$\int_X \phi(f)d\mu(x)=\int_X \phi(S g)d\mu(x)\leq \int_Y \phi(g)d\nu(y).$$
\end{proof}
We note that a special case of this result is a slight generalization of a theorem of Alberti; when $X=Y$ and $\mu=\nu$, Theorem \ref{main2} reduces to one direction of the following result which was proved using methods from the theory of von Neumann algebras.
\begin{theorem} \cite[Theorem 1]{Alberti}
Let $(X,\mu)$ be a $\sigma$-finite measure space and $f,g\in L^1(X,\mathbb{R}^n)$. Then
$f$ is matrix majorized by $g$,
if and only if
$$\int_X \phi (f(x))d\mu(x)\leq \int_X \phi (g(x))d\mu(x)$$ for all nonnegative sublinear functionals
$ \phi: \mathbb{R}^n\rightarrow [0,\infty)$.
\end{theorem}
We do not know if the converse to Theorem \ref{main2} holds for arbitrary measures.
We now consider a generalization of majorization known as multivariate majorization. In the setting of $\mathbb{R}^n$, we can show (Theorem \ref{thm:DSmultivar}) that matrix majorization and multivariate majorization are strongly related.
\begin{definition}
Let $(X,\mu)$ and $(Y,\nu)$ be finite measure spaces, $f\in L^1(X,\mathbb{R}^n)$, and $g\in L^1(Y,\mathbb{R}^n)$. Then $f$ is \emph{multivariate majorized} by $g$ if there exists a doubly stochastic operator $D:L^1(Y)\to L^1(X)$ such that $f=Dg$.
\end{definition}
\begin{theorem}\label{thm:DSmultivar}
Let $(X,\mu)$ and $(Y,\nu)$ be finite measure spaces, $f\in L^1(X,\mu ,\mathbb{R}^n)$, $g\in L^1(Y,\nu ,\mathbb{R}^n)$, $h\in L^1(X,\mu, (0,\infty))$, and $k\in L^1(Y,\nu, (0,\infty))$. The following are equivalent:
\begin{enumerate}
\item \label{i1} $(f_1, f_2, \dots, f_n, h)$ is matrix majorized by $(g_1, g_2, \dots, g_n, k)$; i.e.,
there exists a stochastic operator $S:L^1(Y,\nu)\to L^1(X,\mu)$ such that $Sg_i=f_i$ for all $i=1,...,n$ and $Sk=h$,
\item \label{i2}$\left(\frac{f_1}{h}, \frac{f_2}{h}, \dots, \frac{f_n}{h}\right)$ is multivariate majorized by $\left(\frac{g_1}{k}, \frac{g_2}{k}, \dots, \frac{g_n}{k}\right)$ with respect to measures $\alpha$ and $\beta$ where the measures $\alpha$ and $\beta$ are defined by
$\alpha=h\, d\mu$ and $\beta=k\, d\nu$; i.e.,
there exists a doubly stochastic operator $D:L^1(Y,\beta)\to L^1(X,\alpha)$ such that $D\frac{g_i}{k}=\frac{f_i}{h}$ for all $i=1,...,n$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $T_s$ denote the multiplication operator which maps any function $f$ to the product $sf$.
(\ref{i1}) $\Rightarrow (\ref{i2})$:
Suppose there exists a stochastic operator $S:L^1(Y,\nu)\rightarrow L^1(X,\mu)$ such that $Sg_i=f_i$ for all $i=1, \dots, n$ and $Sk=h$. We now show that $\left(\frac{f_1}{h}, \frac{f_2}{h}, \dots, \frac{f_n}{h}\right) \in L^1(X, \alpha)$ is multivariate majorized by $\left(\frac{g_1}{k}, \frac{g_2}{k}, \dots, \frac{g_n}{k}\right) \in L^1(Y, \beta)$.
The multiplication operator $T_{1/h}$ is a stochastic operator from $L^1(X,\mu)$ to $L^1(X,\alpha)$; note that $f_i\in L^1(X,\mu)$ and $T_{1/h}(f_i)=\frac{f_i}{h}$ for all $i=1, \dots, n$. Similarly, $T_{k}$ is a stochastic operator from $L^1(Y,\beta)$ to $L^1(Y,\nu)$, with $\frac{g_i}{k}\in L^1(Y,\beta)$ and $T_{k}(\frac{g_i}{k})=g_i$ for all $i=1, \dots, n$. Construct $D=T_{1/h}ST_{k}:L^1(Y,\beta)\to L^1(X,\alpha)$, which is a stochastic operator since it is a composition of stochastic operators. Furthermore, $D1=T_{1/h}ST_{k}1=T_{1/h}Sk=T_{1/h}h=1$, so $D$ is a doubly stochastic operator, and $D\frac{g_i}{k}=T_{1/h}Sg_i=T_{1/h}f_i=\frac{f_i}{h}$ for all $i=1,\dots,n$.
(\ref{i2}) $\Rightarrow (\ref{i1})$: Assume there exists a doubly stochastic operator $D:L^1(Y, \beta)\to L^1(X,\alpha)$ such that $\frac{f_i}{h}=D\frac{g_i}{k}$ for all $i=1,\dots,n$. The multiplication operator $T_{h}:L^1(X,\alpha)\to L^1(X,\mu)$ is stochastic, with $T_{h}(\frac{f_i}{h})=f_i$ for all $i=1, \dots, n$; similarly, $T_{1/k}:L^1(Y,\nu)\to L^1(Y,\beta)$ is stochastic, with $T_{1/k}(g_i)=\frac{g_i}{k}$ for all $i=1, \dots, n$. Construct $S=T_{h}DT_{1/k}:L^1(Y,\nu)\to L^1(X,\mu)$. Since $D$ is a (doubly) stochastic operator and the composition of stochastic operators is a stochastic operator, $S$ is a stochastic operator; moreover, $Sg_i=T_hD\frac{g_i}{k}=T_h\frac{f_i}{h}=f_i$ for all $i=1,\dots,n$, and $Sk=T_hDT_{1/k}k=T_hD1=T_h1=h$.
\end{proof}
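The construction $D=T_{1/h}ST_{k}$ in the proof above can be checked numerically in the finite setting: with $S$ a column-stochastic matrix (a stochastic operator under counting measures) and $h=Sk$, the resulting $D$ fixes $1$ and preserves the weighted integrals with $\alpha_i=h_i$ and $\beta_j=k_j$. A sketch with arbitrary data:

```python
import random

# D acts on u in L^1(Y, beta) by (Du)_i = (1/h_i) * sum_j S_ij * k_j * u_j.
random.seed(2)
m, p = 3, 5
S = [[random.random() for _ in range(p)] for _ in range(m)]
for j in range(p):                      # normalize columns to sum to 1
    c = sum(S[i][j] for i in range(m))
    for i in range(m):
        S[i][j] /= c
k = [random.uniform(0.5, 2) for _ in range(p)]
h = [sum(S[i][j] * k[j] for j in range(p)) for i in range(m)]   # h = S k
D = lambda u: [sum(S[i][j] * k[j] * u[j] for j in range(p)) / h[i]
               for i in range(m)]

ones = D([1.0] * p)
print(all(abs(t - 1.0) < 1e-12 for t in ones))  # D(1) = 1: True

u = [random.uniform(-1, 1) for _ in range(p)]
Du = D(u)
lhs = sum(hi * di for hi, di in zip(h, Du))     # integral w.r.t. alpha
rhs = sum(kj * uj for kj, uj in zip(k, u))      # integral w.r.t. beta
print(abs(lhs - rhs) < 1e-12)                   # integral preserved: True
```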
In the setting of $\mathbb{R}$ we can show that matrix majorization, multivariate majorization, and the convex function inequalities are all strongly related. The following theorem can be viewed as a simplified version of the result on mixing distance in \cite{RSS1978}.
\begin{theorem}
Let $(X,\mu)$ and $(Y,\nu)$ be finite measure spaces, $f\in L^1(X,\mathbb{R})$, $g\in L^1(Y,\mathbb{R})$, $h\in L^1(X,(0,\infty))$, $k\in L^1(Y,(0,\infty))$, with $\int_X h \,d\mu=\int_Y k \,d\nu$. The following are equivalent:
\begin{enumerate}
\item \label{i:S} There exists a stochastic operator $S:L^1(Y, \nu)\rightarrow L^1(X, \mu)$ such that $Sg=f$ and $Sk=h$.
\item \label{i:C} For all real valued convex functions on $\mathbb{R}$,
$$\int_X \phi \left(\frac{f}{h}\right)h \,d\mu\leq \int_Y \phi \left(\frac{g}{k}\right)k \,d\nu. $$
\item \label{i:DS}
There exists a doubly stochastic $D:L^1(Y, \beta)\to L^1(X, \alpha)$ such that $D\left(\frac{g}{k}\right)=\frac{f}{h}$, where the measures $\alpha$ and $\beta$ are defined by
$\alpha=h\, d\mu$ and $\beta=k\, d\nu$.
\end{enumerate}
\end{theorem}
\begin{proof}
Both (\ref{i:C}) and (\ref{i:DS}) are equivalent, by Theorem \ref{HLP}, to $\frac{f}{h}\prec \frac{g}{k}$ with respect to the measures $\alpha$ and $\beta$ (note that $\alpha(X)=\int_X h \,d\mu=\int_Y k \,d\nu=\beta(Y)$).
The equivalence of (\ref{i:S}) and (\ref{i:DS}) follows from Theorem \ref{thm:DSmultivar} with $n=1$.
\end{proof}
\section*{Acknowledgements}
S.M.\ was supported by a travel grant from the Iranian Ministry of Science Research and Technology. She gratefully acknowledges the University of Guelph and Brandon University for the time she spent at the respective universities during the course of this work. R.P.\ was supported by NSERC Discovery Grant number 400550. S.P.\ was supported by NSERC Discovery Grant number 1174582, the Canada Foundation for Innovation (CFI) grant number 35711, and the Canada Research Chairs (CRC) Program grant number 231250. The authors thank the anonymous referee for their helpful comments.
\begin{bibdiv}
\begin{biblist}
\bibitem{Alberti} P.M.~Alberti, \emph{A note on stochastic operators on $L^1$-spaces and convex functions}, J.\ Math.\ Anal.\ Appl.\ \textbf{130}(2) (1988), pp.~556-563.
\bibitem{BQ1993} P.\ Busch and R.\ Quadt, \emph{Concepts of coarse graining in quantum mechanics}, Int.\ J.\ Theor.\ Phys.\ \textbf{32}(12) (1993), pp.\ 2261-2269.
\bibitem{BQ1990} P.\ Busch and R.\ Quadt, \emph{On Ruch's principle of decreasing mixing distance in classical statistical physics}, J.\ Stat.\ Phys.\ \textbf{61}(1/2) (1990), pp.\ 311-328.
\bibitem{CD05} P.~Cerone and S.S.~Dragomir, \emph{Approximation of the integral mean divergence and $f$-divergence via mean results}, Math.\ Comput.\ Modelling \textbf{42}(1-2) (2005), pp.~207-219.
\bibitem{CohenBook} J.E.~Cohen, J.H.B.~Kempermann, and G.~Zb\u {a}ganu, \emph{Comparisons of stochastic matrices with applications in information theory, statistics, economics and population}. Birkh\"{a}user: Boston, 1998.
\bibitem{Csiszar} I. Csisz{\'a}r, \emph{Information measures of difference of probability distributions and indirect observations}, Studia Sci. Math. Hungar., 2, (1967)
pp.\ 299--318.
\bibitem{Chong} K.M.\ Chong, \emph{Some extensions of a theorem of Hardy, Littlewood and P{\'o}lya and their applications}, Canadian Journal of Mathematics, 26, (1974) pp.\ 1321-1340.
\bibitem{dahl1999}G.\ Dahl, \emph{Matrix majorization},
Lin.\ Alg.\ Appl., \textbf{288} (1999), pp.\ 53-73.
\bibitem{Day} P.W.\ Day, \emph{Decreasing rearrangements and doubly stochastic operators},
Trans.\ Amer.\ Math.\ Soc.\ \textbf{178} (1973), pp.\ 383-392.
\bibitem{Gour2017} G.\ Gour, \emph{Quantum majorization and a complete set of entropic conditions
for quantum thermodynamics}, arXiv preprint arXiv:1708.04302 (2017).
\bibitem{HLP} G.H.~Hardy, J.E.~Littlewood, and G.~P{\'o}lya, \emph{Some simple inequalities satisfied by convex functions},
{Messenger Math}, \textbf{58} (1929), pp.\ 145-152.
\bibitem{QIStext} M.~Hayashi, S.~Ishizaka, A.~Kawachi, G.~Kimura, and T.~Ogawa, \emph{Introduction to quantum information science}. Springer-Verlag: Berlin, 2014.
\bibitem{Heinonen} T.~Heinonen, \emph{Optimal measurements in quantum mechanics}, Phys.\ Lett.\ A, \textbf{346} (2005), pp.\ 77-86.
\bibitem{HZbook} T.\ Heinosaari and M.~Ziman, \emph{The mathematical language of quantum theory: from uncertainty to entanglement}, Cambridge Univ.\ Press: New York (2011).
\bibitem{JC15} K.C.~Jain and P.~Chhabra, \emph{New information inequalities on new generalized $f$-divergence and applications}, Le Matematiche \textbf{70}(2) (2015), pp.~271-281.
\bibitem{LMbook} A.~Lasota and M.C.~Mackey, \emph{Chaos, fractals and noise: stochastic aspects of dynamics}, 2nd ed.\ (1994), Springer: New York
\bibitem{LV06} F.~Liese and I.~Vajda, \emph{On divergences and informations in statistics and information theory}, IEEE Trans.\ Information Theory \textbf{52}(10) (2006), pp.~4394-4412.
\bibitem{MOA}A.W.\ Marshall, I.\ Olkin, and B.C.\ Arnold, \emph{Inequalities: theory of majorization and its applications}, 2nd ed.\ (2011), Springer: New York.
\bibitem{Morimoto} T.\ Morimoto, \emph{Markov processes and the H-theorem}, J.\ Phys.\ Soc.\ Jpn, \textbf{18} (1963), pp.~328-331.
\bibitem{MM13} M.S.~Moslehian and M.~Kian, \emph{Non-commutative $f$-divergence functional}, Math.\ Nachr.\ \textbf{286}(14-15) (2013), pp.~1514-1529.
\bibitem{RW} P.\ Roselli and M.\ Willem, \emph{A convexity inequality}, Amer.\ Math.\ Monthly \ \textbf{109} (2002), pp.\ 64-70.
\bibitem{RSS1978} E.\ Ruch, R.\ Schranner, and T.H.\ Seligman, \emph{The mixing distance}, J.\ Chem.\ Phys.\ \textbf{69}(1) (1978), pp.\ 386-392.
\bibitem{RSS1980} E.\ Ruch, R.\ Schranner, and T.H.\ Seligman, \emph{Generalization of a theorem by Hardy, Littlewood, and P\'olya}, J.\ Math.\ Anal.\ Appl.\ \textbf{76} (1980), pp.\ 222-229.
\bibitem{SV} I.\ Sason, and S. Verd{\'u}, \emph{$ f $-divergence inequalities}, IEEE Trans.\ Information Theory, \textbf{62} (2016), pp.\ 5973--6006.
\bibitem{Zan} S.~Zanzinger, \emph{On informational divergences for general statistical theories}. Int.\ J.\ Theor.\ Phys.\ \textbf{37}(1) (1998), pp.~357-363.
\end{biblist}
\end{bibdiv}
\end{document}
https://arxiv.org/abs/1212.3893

A sufficient condition for congruency of orbits of Lie groups and some applications

Abstract: We give a sufficient condition for isometric actions to have the congruency of orbits, that is, all orbits are isometrically congruent to each other. As applications, we give simple and unified proofs for some known congruence results, and also provide new examples of isometric actions on symmetric spaces of noncompact type which have the congruency of orbits.

\section{Introduction}
Isometric actions of Lie groups on Riemannian manifolds $M$
and submanifold geometry of their orbits are fundamental topics in geometry.
In this paper, we consider isometric actions which have the congruency of orbits,
that is, actions all of whose orbits are isometrically congruent to each other.
The congruency of orbits is very useful
for studying the submanifold geometry of orbits,
since it then suffices to study a single orbit.
It has been known that the following isometric actions have the congruency of orbits:
\begin{enumerate}
\item the actions of $\mathrm{U}(1)$ on spheres $\mathbb{S}^{2n+1}$ which induce the Hopf fibrations,
\item the actions of $N$ on hyperbolic spaces which induce the horosphere foliations,
\item the actions of $S_V$ on symmetric spaces of noncompact type $M = G/K$,
where $S_V$ are some codimension one subgroups of $AN$ (\cite[Proposition 3.1]{BT03}, see Section~4 for details), and
\item the actions of $N$ on symmetric spaces of noncompact type $M=G/K$ which induce horocycle foliations (\cite[Corollary 6.5]{BDT}).
\end{enumerate}
Note that, for a symmetric space of noncompact type $M$,
we denote by $G$ the identity component of the isometry group of $M$,
and by $G=KAN$ the Iwasawa decomposition.
In this paper, we obtain a sufficient condition for isometric actions
to have the congruency of orbits (Lemma \ref{main}).
Indeed, our sufficient condition
is stated in terms of Lie algebras, which makes it very practical to apply.
The first applications of our sufficient condition are
simple and unified proofs for the congruency of orbits
for all of the above mentioned actions.
In Section~3 we show the congruency of orbits in the case of the Hopf fibrations (1).
In Section~4 we
prove the congruency of orbits for a class of actions,
which contains the actions (2), (3) and (4).
As the second application of our sufficient condition, in
Section~5, we
provide new examples of isometric actions which have the congruency of orbits.
Namely, we
prove the congruency of
orbits
of $S_\Phi$ on symmetric spaces of noncompact type $M = G/K$
(Proposition \ref{orbits_S_Phi}),
where
$S_\Phi$ denotes the solvable part of
the
parabolic subgroup
$Q_\Phi$ of $G$.
Recently, the groups $S_\Phi$
have played very important roles in the study of submanifolds of $M$
(\cite{BDT, BT10, T},
see also a survey \cite{Tsurvey}).
Among others, it is remarkable
that the orbit $(S_\Phi).o$ is always minimal in $M$ and
is Einstein with respect to the induced metric (\cite{T}),
where $o$ is
the origin of $M = G/K$.
Hence,
as a corollary of the congruency of orbits of $S_{\Phi}$,
we have the following:
all orbits of $S_\Phi$ are minimal submanifolds in $M$,
and are Einstein with respect to the induced metrics.
We are interested in studying further geometric properties of
$S_{\Phi}$-orbits.
For the study, the congruency of $S_{\Phi}$-orbits is quite useful,
since we have only to consider one orbit.
Furthermore, our sufficient condition should be
useful
for further studies of isometric actions on symmetric spaces of noncompact type,
since some interesting actions do satisfy our sufficient condition,
and hence it can be applied to the study of the geometry of their orbits.
Throughout this paper,
we denote by $\Isom(M)$ the isometry group of a Riemannian manifold $M$,
and by $\Lie(G)$ the Lie algebra of a Lie group $G$.
\section{A key lemma}
In this section, we give a sufficient condition for isometric actions to have the congruency of orbits on Riemannian manifolds.
\begin{Lem} \label{main}
Let $M$ be a Riemannian manifold and
$S$ be a connected Lie subgroup of $\Isom(M)$ with $\Lie(S) = \fr{s}$,
and assume
that $S$ acts transitively on $M$.
If $\fr{s}'$ is an ideal of $\fr{s}$,
then all orbits of $S'$ in $M$ are isometrically congruent to each other,
where $S'$ is the connected Lie subgroup of $S$ with $\Lie(S') = \fr{s}'$.
\end{Lem}
\begin{proof}
Take any $p, q \in M$.
We shall show that the orbits $S' . p $ and $S' . q $ are isometrically congruent.
Owing to transitivity of the action of $S$, there exists $g \in S$ such that $p = g.q$ holds.
Since
$S$ and $S'$ are connected and $\fr{s}'$ is an ideal in $\fr{s}$,
one knows that
$S'$ is a normal subgroup of $S$ (see for instance \cite[Theorem 3.48]{W}).
Hence one has $g^{-1} S' g = S'$.
Thus we obtain
\begin{align*}
g. (S'.q) = g. (g^{-1} S' g). q = S'. (g. q) = S'.p,
\end{align*}
which implies $S' . p $ and $S' . q $ are isometrically congruent.
\end{proof}
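To see the conjugation identity $g.(S'.q)=S'.(g.q)$ at work in a familiar example (our own illustration, anticipating the horosphere examples of Section~4): for the hyperbolic plane in the upper half-plane model, $S=AN$ consists of the isometries $z\mapsto az+b$ with $a>0$, which act simply transitively, and the translation subgroup $N=\{z\mapsto z+b\}$ is normal in $S$. The sketch below encodes such an isometry as the pair $(a,b)$ (an encoding of our own choosing) and checks that $g n g^{-1}$ is again a translation:

```python
def compose(g, h):
    """(a, b) encodes the upper half-plane isometry z -> a*z + b, a > 0."""
    (a1, b1), (a2, b2) = g, h
    return (a1 * a2, a1 * b2 + b1)

def inverse(g):
    a, b = g
    return (1.0 / a, -b / a)

def conjugate(g, h):
    return compose(compose(g, h), inverse(g))

g = (2.5, -0.7)        # generic element of S = AN
t = (1.0, 3.0)         # element of N: the horizontal translation z -> z + 3
a, b = conjugate(g, t)
# g t g^{-1} is the translation z -> z + a*3, so N is normal in S
assert abs(a - 1.0) < 1e-12 and abs(b - 2.5 * 3.0) < 1e-12
```

In particular, $g$ carries the $N$-orbit (horosphere) through $q$ onto the $N$-orbit through $g.q$, exactly as in the proof of the lemma.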
\section{Hopf fibrations}
In this section,
by applying
Lemma~\ref{main},
we give a simple proof for
the congruency of orbits of the actions
$\mathrm{U}(1)$ on spheres $\mathbb{S}^{2n+1}$
which induce Hopf fibrations.
Let $\mathbb{S}^{2n+1}$ be the unit sphere in $\mathbb{C}^{n+1}$.
Consider
the natural action of the unitary group $\mathrm{U}(n+1)$ on $\mathbb{S}^{2n+1}$,
and let $\mathrm{U}(1)$ be the center of $\mathrm{U}(n+1)$.
It is well-known that the orbit space of the action of $\mathrm{U}(1)$
on $\mathbb{S}^{2n+1}$ satisfies
\begin{align*}
\mathrm{U}(1) \backslash \mathbb{S}^{2n+1}
= \mathrm{U}(1) \backslash \mathrm{U}(n+1) / \mathrm{U}(n)
= \mathrm{U}(n+1) / ( \mathrm{U}(1) \times \mathrm{U}(n))
= \mathbb{C}\textrm{P}^n .
\end{align*}
The natural projection
from $\mathbb{S}^{2n+1}$ onto $\mathbb{C}\textrm{P}^n$ provides
the \textit{Hopf fibration}.
We now show the congruency of orbits of the action above, as an application of Lemma~\ref{main}.
\begin{Prop} \label{Hopf}
Under the action of $\mathrm{U}(1)$ on $\mathbb{S}^{2n+1}$ defined above, all orbits of $\mathrm{U}(1)$ in $\mathbb{S}^{2n+1}$ are isometrically congruent to each other.
\end{Prop}
\begin{proof}
Recall that $\mathrm{U}(n+1)$ acts transitively on $\mathbb{S}^{2n+1}$.
We know that $\mathfrak{u}(1)$ is an ideal in $\mathfrak{u}(n+1)$, or that $\mathrm{U}(1)$ is a normal subgroup of $\mathrm{U}(n+1)$.
Hence
the proof easily follows from
Lemma~\ref{main}.
\end{proof}
When $n=1$, the proof of Proposition~\ref{Hopf} can be found,
for example, in \cite[Section 2]{S}.
We note that its proof depends on the fact that
$\mathbb{S}^{3}$ can be identified with $\mathrm{Sp}(1)$
equipped with the bi-invariant metric.
Hence, our sufficient condition gives another proof,
which can also be applied to an arbitrary $n$.
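As a quick numerical sanity check of Proposition~\ref{Hopf} (our own illustration, not part of the paper): the $\mathrm{U}(1)$-orbit of $p\in\mathbb{S}^{2n+1}$ is $\{e^{i\theta}p\}$, and since $\lvert e^{i\theta}p-e^{i\varphi}p\rvert=\lvert e^{i\theta}-e^{i\varphi}\rvert$, the pairwise-distance profile of an orbit does not depend on the base point:

```python
import numpy as np

rng = np.random.default_rng(0)
thetas = np.linspace(0, 2 * np.pi, 50, endpoint=False)

def orbit_distances(p):
    """Pairwise distances within the U(1)-orbit {e^{i theta} p}."""
    orbit = np.exp(1j * thetas)[:, None] * np.asarray(p)[None, :]
    diff = orbit[:, None, :] - orbit[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def random_point(n):
    """A random point of S^{2n+1}, viewed as the unit sphere of C^{n+1}."""
    p = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
    return p / np.linalg.norm(p)

# the distance profile of an orbit is independent of the base point (n = 3: S^7)
p, q = random_point(3), random_point(3)
assert np.allclose(orbit_distances(p), orbit_distances(q))
```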
\section{Horospheres and their generalizations}
In this section, we give
further applications
of Lemma~\ref{main},
which
provide
the congruency of orbits of certain isometric actions
on Riemannian symmetric spaces of noncompact type and of arbitrary rank.
The result of this section
contains, as special cases,
simple and unified proofs of
the congruency of orbits of (2), (3) and (4) mentioned in Section~1.
First of all, we recall some fundamental notions of symmetric spaces of noncompact type.
Refer to \cite{H, K}.
Let $M = G/K$ be a connected Riemannian symmetric space of noncompact type,
where $G$ is the identity component of $\Isom(M)$,
and $K$ is the isotropy subgroup of $G$ at some point $o$, called the origin.
Let us denote by $\fr{g}$ and $\fr{k}$ the Lie algebras of $G$ and $K$, respectively,
and by $\fr{p}$ the orthogonal complement of $\fr{k}$ with respect to the Killing form $B$ of $\fr{g}$.
One thus obtains that $\fr{g} = \fr{k} \oplus \fr{p}$, the Cartan decomposition of $\fr{g}$.
Denote by $\theta$ the corresponding Cartan involution.
We then introduce a positive definite inner product on $\fr{g}$ by $\langle X, Y \rangle := -B(X, \theta Y)$.
Let $\fr{a}$ be a maximal abelian subspace of $\fr{p}$ and denote the dual space of $\fr{a}$ by $\fr{a}^*$.
Then we define
\begin{align*}
\fr{g}_\lambda := \{ X \in \fr{g} :
\ad(H)X = \lambda(H)X \text{ for all } H \in \fr{a} \}
\end{align*}
for each $\lambda \in \fr{a}^*$,
and call $\lambda \in \fr{a}^* \setminus \{ 0 \}$ a \textit{restricted root} if $\fr{g}_\lambda \ne \{ 0 \}$.
Denote by $\Sigma$ the set of restricted roots.
Let $\Lambda$ be a set of simple roots of $\Sigma$,
and then denote by $\Sigma^+$ the set of positive roots associated with $\Lambda$.
Let us define
\begin{align*} \textstyle
\fr{n} := \bigoplus_{\lambda \in \Sigma^+} \fr{g}_\lambda , \quad
\fr{s} := \fr{a} \oplus \fr{n} ,
\end{align*}
which yields that $\fr{g} = \fr{k} \oplus \fr{a} \oplus \fr{n}$,
the Iwasawa decomposition of $\fr{g}$.
Note that
$\fr{n}$ is a nilpotent subalgebra and $\fr{s}$ is a solvable subalgebra.
Furthermore, one can check easily that
\begin{align} \label{eq_2_1}
[\fr{s}, \fr{s}] = \fr{n}.
\end{align}
Let $S$ be the connected Lie subgroup of $G$ with $\Lie(S) = \fr{s}$.
This solvable subgroup $S$ is simply connected and acts simply transitively on $M$.
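For readers who want to experiment, the containment $[\fr{s}, \fr{s}] \subset \fr{n}$ from (\ref{eq_2_1}) can be checked numerically in the standard example $\fr{g} = \fr{sl}(3, \mathbb{R})$, where $\fr{a}$ consists of traceless diagonal matrices and $\fr{n}$ of strictly upper-triangular ones. The following sketch (our own illustration, not from the paper) verifies this on random elements:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_s(n=3):
    """Random element of s = a + n: an upper-triangular traceless matrix."""
    X = np.triu(rng.normal(size=(n, n)))
    X[n - 1, n - 1] -= np.trace(X)
    return X

X, Y = random_s(), random_s()
Z = X @ Y - Y @ X
# the bracket lands in n (strictly upper triangular), confirming [s, s] ⊆ n
assert np.allclose(Z, np.triu(Z, 1))
assert abs(np.trace(Z)) < 1e-12
```

The check only confirms the inclusion $[\fr{s}, \fr{s}] \subseteq \fr{n}$, of course; the reverse inclusion follows from the root space structure.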
We now give examples of isometric actions on $M$, which have the congruency of orbits.
Denote by $\ominus$ the orthogonal complement with respect to $\langle , \rangle$.
\begin{Prop} \label{orbits_S_V}
Let $V$ be any linear subspace of $\fr{a}$ and
define $\fr{s}_V := \fr{s} \ominus V = (\fr{a} \ominus V) \oplus \fr{n}$.
Then all orbits of $S_V$ in $M$ are isometrically congruent to each other,
where $S_V$ is the connected Lie subgroup of $S$ with $\Lie(S_V) = \fr{s}_V$.
\end{Prop}
\begin{proof}
Recall that $S$ acts transitively on $M$.
Then, by Lemma~\ref{main}, we have only to prove that $\fr{s}_V$ is an ideal in $\fr{s}$.
It follows easily from (\ref{eq_2_1}) and the definitions of $\fr{s}$ and $\fr{s}_V$ that
\begin{align*}
[\fr{s}, \fr{s}_V] \subset [\fr{s}, \fr{s}] = \fr{n} \subset \fr{s}_V .
\end{align*}
Thus $\fr{s}_V$ is an ideal in $\fr{s}$.
This completes the proof.
\end{proof}
The actions of $S_V$ on $M$ have been studied in \cite{BDT},
and are always hyperpolar.
Furthermore, the class of $S_V$-actions contains some interesting subclasses.
\begin{Remark}
Although the proof of Proposition~\ref{orbits_S_V} is very easy,
it gives simple and unified proofs of some known results as follows.
\begin{enumerate}
\item
In the case when $\rank M = 1$ and $\dim V = 1$,
Proposition~\ref{orbits_S_V} gives a proof of the congruency of horospheres
in hyperbolic spaces.
Indeed horospheres coincide with $N$-orbits,
where $N$ stands for the connected Lie subgroup of $G$ with $\Lie(N) = \fr{n}$.
Hence, setting $V := \fr{a}$ we have $N = S_V$.
\item
In the case when $\rank M > 1$ and $\dim V = 1$,
Proposition~\ref{orbits_S_V} gives another proof of \cite[Proposition 3.1]{BT03}.
We note that the proof in \cite{BT03} involves some geometric arguments,
but our proof is purely Lie algebraic.
We also note that the action of $S_V$ on $M$ is of cohomogeneity one.
\item
In the case when $\rank M > 1$ and $\dim V = \rank M$,
Proposition~\ref{orbits_S_V} also gives another proof of \cite[Corollary 6.5]{BDT}.
Note that $S_V = N$ in this case, whose orbits form the horocycle foliation.
\end{enumerate}
\end{Remark}
In the case when $\rank M > 1$ and $\dim V$ is generic,
Proposition~\ref{orbits_S_V} is a slight extension of the above-mentioned results.
\section{The solvable parts of parabolic subgroups}
In this section we introduce
new
examples of isometric actions having the congruency of orbits on Riemannian symmetric spaces of noncompact type.
They are induced by the solvable parts of parabolic subalgebras.
We first review parabolic subalgebras (we refer to \cite{K}).
We use the notations in Section~4.
It is known that there
is a one-to-one correspondence between
proper
subsets $\Phi \subsetneq \Lambda$
and the conjugacy classes of parabolic subalgebras of $\fr{g}$.
The correspondence is given as follows.
For each $\Phi \subsetneq \Lambda$,
let $\Sigma_\Phi$ be the root subsystem of $\Sigma$ generated by $\Phi$,
that is, $\Sigma_\Phi$ is the intersection of $\Sigma$ and the linear span of $\Phi$,
and put $\Sigma_\Phi^+ := \Sigma_\Phi \cap \Sigma^+$.
Then, let us define
\begin{align*}
\textstyle
\fr{q}_\Phi := \fr{g}_0 \oplus
\left( \bigoplus_{\beta \in \Sigma_\Phi \cup \Sigma^+} \fr{g}_\beta \right) ,
\end{align*}
which is a parabolic subalgebra of $\fr{g}$.
It is then clear, but remarkable, that
\begin{align} \label{eq_3_1}
\fr{s} \subset \fr{q}_\Phi.
\end{align}
We shall give the solvable part of $\fr{q}_\Phi$.
Consider the \textit{Langlands decomposition} $\fr{q}_\Phi = \fr{m}_\Phi \oplus \fr{a}_\Phi \oplus \fr{n}_\Phi$,
where
\begin{align*}
\fr{a}_\Phi
& = \{ H \in \fr{a} : \alpha(H) = 0 \text{ for all } \alpha \in \Phi \} , \\
\fr{m}_\Phi
& = (\fr{g}_0 \ominus \fr{a}_\Phi) \oplus
\textstyle
( \bigoplus_{\beta \in \Sigma_\Phi^+} \fr{g}_\beta) , \\
\fr{n}_\Phi
& = \textstyle
\bigoplus_{\lambda \in \Sigma^+ \setminus \Sigma_\Phi^+} \fr{g}_\lambda .
\end{align*}
Let us define $\fr{s}_\Phi := \fr{a}_\Phi \oplus \fr{n}_\Phi$,
which is called the solvable part of the parabolic subalgebra $\fr{q}_\Phi$.
By definition, one has
\begin{align*}
\fr{s}_\Phi \subset \fr{s} .
\end{align*}
Furthermore, it follows
from \cite[Proposition 7.78]{K} that $\fr{s}_\Phi$ is
an ideal
in $\fr{q}_\Phi$, that is,
\begin{align} \label{eq_3_2}
[\fr{s}_\Phi , \fr{q}_\Phi] \subset \fr{s}_\Phi.
\end{align}
We are now in the position to state the main result of this section.
\begin{Thm} \label{orbits_S_Phi}
Let $\Phi \subsetneq \Lambda$,
and $\fr{s}_\Phi$ be the solvable part of
the
parabolic subalgebra $\fr{q}_\Phi$.
Then all orbits of $S_\Phi$ in $M$ are isometrically congruent to each other,
where $S_\Phi$ is the connected Lie subgroup of $S$ with $\Lie(S_\Phi) = \fr{s}_\Phi$.
\end{Thm}
\begin{proof}
As in Proposition~\ref{orbits_S_V},
we have only to check that $\fr{s}_\Phi$ is an ideal in $\fr{s}$.
Indeed,
it follows from (\ref{eq_3_1}) and (\ref{eq_3_2}) that
\begin{align}
[\fr{s} , \fr{s}_\Phi] \subset [\fr{q}_\Phi , \fr{s}_\Phi] \subset \fr{s}_\Phi,
\end{align}
which completes the proof.
\end{proof}
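As with (\ref{eq_2_1}), the key containment $[\fr{s}, \fr{s}_\Phi] \subset \fr{s}_\Phi$ can be verified numerically in the example $\fr{g} = \fr{sl}(3, \mathbb{R})$ with $\Phi = \{\alpha_1\}$ (our own illustration; here $\fr{a}_\Phi = \ker \alpha_1$ and $\fr{n}_\Phi = \fr{g}_{\alpha_2} \oplus \fr{g}_{\alpha_1 + \alpha_2}$, spanned by the elementary matrices $E_{23}$ and $E_{13}$):

```python
import numpy as np

def E(i, j, n=3):
    """Elementary matrix with a 1 in position (i, j)."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

def bracket(X, Y):
    return X @ Y - Y @ X

# s = a + n for sl(3, R): traceless diagonal plus strictly upper triangular
s_basis = [np.diag([1., -1., 0.]), np.diag([0., 1., -1.]),
           E(0, 1), E(0, 2), E(1, 2)]
# Phi = {alpha_1}: a_Phi = ker(alpha_1), n_Phi = g_{alpha_2} + g_{alpha_1+alpha_2}
sphi_basis = [np.diag([1., 1., -2.]), E(0, 2), E(1, 2)]

A = np.stack([M.ravel() for M in sphi_basis], axis=1)  # 9 x 3 basis matrix
for X in s_basis:
    for Y in sphi_basis:
        b = bracket(X, Y).ravel()
        coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
        assert np.linalg.norm(A @ coeff - b) < 1e-12  # [s, s_Phi] ⊆ s_Phi
```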
The proof of Theorem~\ref{orbits_S_Phi} is
easy,
but it provides many new examples of isometric actions on $M$
which have the congruency of orbits.
Note that there are $2^r - 1$ proper subsets of $\Lambda$,
where $r$ denotes
the rank of $M$.
We also note that the orbits $(S_\Phi).o$ through the origin $o$ have been studied in \cite{T}.
Indeed, it is proved that $(S_\Phi).o$ is always minimal in $M$ and is Einstein with respect to the induced metric.
Hence, combining these facts with Theorem~\ref{orbits_S_Phi}, we readily obtain the following result.
\begin{Cor}\label{Cor}
All orbits of $S_\Phi$ are minimal in $M$,
and Einstein with respect to the induced metrics.
\end{Cor}
https://arxiv.org/abs/1102.3515

On Gromov's Method of Selecting Heavily Covered Points

Abstract: A result of Boros and Füredi ($d=2$) and of Bárány (arbitrary $d$) asserts that for every $d$ there exists $c_d>0$ such that for every $n$-point set $P\subset \R^d$, some point of $\R^d$ is covered by at least $c_d{n\choose d+1}$ of the $d$-simplices spanned by the points of $P$. The largest possible value of $c_d$ has been the subject of ongoing research. Recently Gromov improved the existing lower bounds considerably by introducing a new, topological proof method. We provide an exposition of the combinatorial component of Gromov's approach, in terms accessible to combinatorialists and discrete geometers, and we investigate the limits of his method. In particular, we give tighter bounds on the \emph{cofilling profiles} for the $(n-1)$-simplex. These bounds yield a minor improvement over Gromov's lower bounds on $c_d$ for large $d$, but they also show that the room for further improvement through the cofilling profiles alone is quite small. We also prove a slightly better lower bound for $c_3$ by an approach using an additional structure besides the cofilling profiles. We formulate a combinatorial extremal problem whose solution might perhaps lead to a tight lower bound for $c_d$.

\section{Introduction}
Let $P\subset {\mathbb{R}}^2$ be a set of $n$ points in general position (i.e., no three points collinear). Boros and F\"uredi~\cite{BorosFuredi:PlanarSelectionLemma-84} showed that there always exists a point $a\in{\mathbb{R}}^2$ contained in a positive fraction of all the $\binom{n}{3}$ triangles spanned by $P$,
namely, in at least $\frac{2}{9}\binom{n}{3} - O(n^2)$ triangles.
(Generally we cannot assume $a\in P$, as the example of points
in convex position shows.)
This result was generalized by B\'ar\'any~\cite{Barany:GeneralizationCarathoedory-1982} to point sets in arbitrary fixed dimension:
\begin{theorem}[\textbf{B\'ar\'any~\cite{Barany:GeneralizationCarathoedory-1982}}]
\label{thm:Barany}
Let $P$ be a set of $n$ points in general position in ${\mathbb{R}}^d$ (i.e., no $d+1$ or fewer of the points are affinely dependent). Then there exists a point in ${\mathbb{R}}^d$ that is contained in at least
$$ c^\textup{aff}_d \cdot \binom{n}{d+1} - O(n^d)$$
$d$-dimensional simplices spanned by the points in $P$, where the constant $c^\textup{aff}_d>0$ (as well as the constant implicit in the $O$-notation) depends only on $d$.
\end{theorem}
The largest possible value of $c^\textup{aff}_d$ has been the subject of ongoing research. With some abuse of notation, we will henceforth denote by $c^\textup{aff}_d$ the largest possible constant for which Theorem~\ref{thm:Barany} holds true.
\paragraph{Upper bounds.} Bukh, Matou\v{s}ek and Nivasch~\cite{BukhMatousekNivasch:StabbingSimplices-2010} showed that\footnote{In \cite{BukhMatousekNivasch:StabbingSimplices-2010}, a different normalization is used, in the sense that simplices are counted in the form $c_d \cdot n^{d+1} - O(n^d)$. This means that the bounds on $c_d$ given in \cite{BukhMatousekNivasch:StabbingSimplices-2010} have to be multiplied by $(d+1)!$ to match our normalization.}
\begin{equation}
\label{eq:lower-bound-c_d}
c^\textup{aff}_d \leq \frac{(d+1)!}{(d+1)^{(d+1)}}\sim \frac{\sqrt{2\pi d}}{e^d} = e^{-\Theta(d)}
\end{equation}
by constructing examples of $n$-point sets $P$ in ${\mathbb{R}}^d$ for which no point in ${\mathbb{R}}^d$ is contained in more than
$(\tfrac{n}{d+1})^{d+1} -O(n^d)$ of the $d$-simplices spanned by $P$.
\paragraph{Lower Bounds.} The result of Boros and F\"uredi says that $c^\textup{aff}_2 \geq 2/9$, which matches the upper bound in (\ref{eq:lower-bound-c_d}). For general $d$, B\'ar\'any's proof yields
$$c^\textup{aff}_d \geq \frac{1}{(d+1)^d},$$
which is smaller than (\ref{eq:lower-bound-c_d}) by a factor of $d!$. In \cite{Wagner:Thesis-2003}, the lower bound was improved by a factor of roughly $d$ to
$$c^\textup{aff}_d \geq \frac{d^2 +1}{(d+1)^{d+1}},$$
but this narrowed the huge gap between upper and lower bounds only slightly. Moreover, Bukh, Matou\v{s}ek and Nivasch showed that the method in \cite{Wagner:Thesis-2003} cannot be pushed farther.
An improvement of the lower bound for $c_3$ by a clever elementary geometric
argument was recently achieved by Basit et al.~\cite{Basit-al}.
We refer to \cite{BukhMatousekNivasch:StabbingSimplices-2010} for a more detailed discussion of the problem and related results.
\paragraph{Gromov's results.}
Recently, Gromov~\cite{Gromov:SingularitiesExpandersTopologyOfMaps2-2010} introduced a new, topological proof method, which improves on the previous
lower bounds considerably and which, moreover, applies to a more
general setting, described next.
An $n$-point set $P \subseteq {\mathbb{R}}^d$ determines an affine map $T$ from the $(n-1)$-dimensional simplex $\Delta^{n-1} \subseteq {\mathbb{R}}^{n-1}$ to ${\mathbb{R}}^d$ as follows. Label the vertices of $\Delta^{n-1}$ by $V=\{v_1,\ldots, v_n\}$, and the points as $P=\{p_1,\ldots, p_n\}$. Then $T$ is given by mapping $v_i$ to $p_i$, $1\leq i \leq n$, and by interpolating linearly on the faces of $\Delta^{n-1}$.
Thus, B\'ar\'any's result can be restated by saying that for any \emph{affine} map $T \colon \Delta^{n-1} \rightarrow {\mathbb{R}}^d$, there exists a point in ${\mathbb{R}}^d$ that is contained in the $T$-images of at least $c^\textup{aff}_d \cdot \binom{n}{d+1} -O(n^d)$ many $d$-dimensional faces of $\Delta^{n-1}$.
Gromov shows that, more generally\footnote{In fact, Gromov's approach is still much more general than this and applies to continuous maps from arbitrary finite simplicial complexes $X$ to arbitrary $d$-dimensional manifolds $Y$. The method yields lower bounds for the maximum number of $d$-simplices of $X$ whose images share a common point as long as $X$ has certain \emph{expansion properties}. This will be briefly explained in Section~\ref{subsec:abstract} below.}, the following is true:
\begin{theorem}[\textbf{\cite{Gromov:SingularitiesExpandersTopologyOfMaps2-2010}}]
\label{thm:Gromov-Simplex-Selection}
For an arbitrary \emph{continuous} map $T \colon \Delta^{n-1} \rightarrow {\mathbb{R}}^d$, there exists a point in ${\mathbb{R}}^d$ that is contained in the $T$-images of at least $c^\textup{top}_d \cdot \binom{n}{d+1}-O(n^d)$ many $d$-faces of $\Delta^{n-1}$, where $c^\textup{top}_d>0$ is a constant that depends only on $d$.
\end{theorem}
By the same abuse of notation as before, $c^\textup{top}_d$ will also henceforth denote the largest constant for which Theorem~\ref{thm:Gromov-Simplex-Selection} holds. In particular, $c^\textup{aff}_d \geq c^\textup{top}_d$.
Gromov's method gives
\begin{equation}
\label{eq:c_d-top}
c^\textup{aff}_d \geq c^\textup{top}_d \geq \frac{2d}{(d+1)! (d+1)}.
\end{equation}
For $d=2$, this yields the tight bound $c^\textup{top}_2 = c^\textup{aff}_2 =2/9$.
For general $d$, Gromov's result improves
on the earlier bounds by a factor exponential in $d$, but it
is still of order $e^{-\Theta(d\log d)}$
and thus far from the upper bound.
One of the goals of this paper is to provide an exposition of the combinatorial component of Gromov's approach, in terms accessible to combinatorialists and discrete geometers.
\medskip
Very recently, after a preliminary version of this paper was written and circulated, Karasev~\cite{Karasev:Gromov-2010} found a very short and elegant proof of Gromov's bound (\ref{eq:c_d-top}) for \emph{affine} maps.
Karasev's proof, which he himself describes as a ``decoded and refined'' version of Gromov's proof, combines probabilistic and topological arguments, but he avoids the heavy topological machinery applied in Gromov's proof and only uses the elementary notion of the degree of a piecewise smooth map between spheres.
Karasev's argument can be modified and extended so that it covers the case of arbitrary continuous maps into ${\mathbb{R}}^d$ (but not yet into general $d$-dimensional manifolds). Furthermore, the combinatorial and the topological aspects of the argument, which are intertwined in Karasev's proof, can be split into two independent parts. In this way, any combinatorial improvement on the cofilling profiles or on the pagoda problem introduced in Section~\ref{sec:c3} immediately implies improved bounds for Theorem~\ref{thm:Gromov-Simplex-Selection} also via this simpler topological route. This will be discussed in more detail in a separate note.
\paragraph{Coboundaries and {cofilling} profiles.} We need two basic notions, \emph{coboundaries} and \emph{cofilling profiles}, which have their roots in cohomology but which can be defined in elementary and purely combinatorial terms.
Let $V$ be a fixed set of $n$ elements, w.l.o.g., $V=[n]:=\{1,2,\ldots, n\}$. We will always assume that $n$ is sufficiently large.
In topological terms, we think of the $(n-1)$-dimensional simplex $\Delta^{n-1}$ as a combinatorial object (an abstract simplicial complex), namely, as the system of all subsets of $V$. Thus, given a subset $f \subseteq V$, we will also sometimes refer to $f$ as a \emph{face} of $\Delta^{n-1}$, and the \emph{dimension} of a face is defined as $\dim f:= |f|-1$.
Let $E\subseteq {V\choose d}$ be a system of (unordered) $d$-tuples,
or in other words, of $(d-1)$-dimensional faces.
We write $\|E\|:= |E|/{n\choose d}$ for the \emph{normalized
size} of $E$; one can also interpret $\|E\|$ as
the probability that a random $d$-tuple lies in~$E$.
The notation $\|E\|$ implicitly refers to
$d$ and $n$, which have to be understood from the context.
The \emph{coboundary} $\delta E$ is the system of those $(d+1)$-tuples
$f\in {V\choose d+1}$ that contain an odd number of the $e\in E$.
(For $d=2$, this notion and some of the following considerations
are related to \emph{Seidel switching} and \emph{two-graphs},
which are notions studied in combinatorics---see the end of
Section~\ref{s:seidel}
for an explanation and references.)
We emphasize that
$\delta E$ also depends on the ground set $V$, and sometimes we may write $\delta_V E$
instead of $\delta E$ to avoid ambiguities.
Many different $E$'s may have the same coboundary.
We call $E$ \emph{minimal} if $\|E\|\le \|E'\|$ for every $E'$
with $\delta E'=\delta E$.
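These definitions are easy to experiment with on small ground sets. The following brute-force sketch (our own, for illustration only; the minimality test is feasible only for tiny $n$) computes coboundaries and tests minimality, using the edge-cut example $d=1$, $V=\{0,\dots,4\}$ discussed below:

```python
from itertools import combinations

def coboundary(E, V, d):
    """delta E: (d+1)-subsets of V containing an odd number of members of E."""
    Eset = {frozenset(e) for e in E}
    return {frozenset(f) for f in combinations(sorted(V), d + 1)
            if sum(frozenset(e) in Eset for e in combinations(f, d)) % 2 == 1}

def is_minimal(E, V, d):
    """Brute force: no strictly smaller E' has the same coboundary (tiny V only)."""
    target = coboundary(E, V, d)
    all_faces = list(combinations(sorted(V), d))
    return not any(coboundary(Ep, V, d) == target
                   for r in range(len(list(E)))
                   for Ep in combinations(all_faces, r))

V, E = range(5), [(0,), (1,)]
cut = coboundary(E, V, 1)          # the edge cut between {0,1} and {2,3,4}
assert len(cut) == 6
# E and its "complement" share the same coboundary, but E is the smaller one
assert cut == coboundary([(2,), (3,), (4,)], V, 1)
assert is_minimal(E, V, 1)
```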
We define the \emph{{cofilling} profile}%
\footnote{Gromov uses the notation $\|(\partial^{d-1})^{-1}_{\rm fil}\|(\beta)$ for
what we would write as $\varphi^{-1}_d(\beta)/\beta$.
Actually, he does not take the $\liminf$, which we use in order
to avoid dealing with small values of~$n$.}
as follows:
$$
\varphi_d(\alpha):=\liminf_{|V|=n\to\infty}\min \{\|\delta E\|: E\subseteq {\textstyle \binom{V}{d}} \mbox{ minimal}, \|E\|\ge \alpha\}.
$$
Equivalently, one can also view this notion as follows. Suppose we are given a system $F \subseteq \binom{V}{d+1}$, and we are guaranteed that $F$ is a coboundary, i.e., that $F=\delta E$ for some $E$. Now we want to know the smallest possible
(normalized) size $\|E\|$ of an $E$ with $F=\delta E$,
as a function of $\|F\|$. It suffices to consider minimal $E$'s, and
$\varphi_d(\alpha) \geq \beta$ means that if we are forced to take
$\| E\| \geq \alpha$, then we must have $\|F\| \geq \beta$.
We also remark that there is no minimal $E$ with $\|E\| > 1/2$ (see Section~\ref{sec:basics}), so formally, $\varphi_d(\alpha)=\infty$ for $\alpha >1/2$
(since we take the minimum over an empty set).
As a warm-up, let us consider the case $d=1$: here we can view a system $S\subseteq \binom{V}{1}$ simply as a subset $S\subseteq V$, and $\delta S$
is the set of edges of the complete bipartite graph with
color classes $S$ and $V\setminus S$; in graph theory,
one also speaks of the \emph{edge cut} determined by $S$
in the complete graph on $V$.
\immfig{edgecut}
The minimality of $S$
simply means that $|S| \leq n/2$. It follows that
$\varphi_1(\alpha)=2\alpha(1-\alpha)$, $0\le\alpha\le\frac12$.
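A numerical check of this formula (our own illustration): a minimal $S$ with $|S|=k\le n/2$ has $\|\delta S\| = k(n-k)/\binom{n}{2}$, which tends to $2\alpha(1-\alpha)$ as $n\to\infty$ with $k/n\to\alpha$:

```python
from math import comb

def cut_ratio(k, n):
    """Normalized coboundary size of an edge cut with |S| = k."""
    return k * (n - k) / comb(n, 2)

# converges to phi_1(alpha) = 2 alpha (1 - alpha) as n -> infinity
n = 2000
for k in range(1, n // 2 + 1, 137):
    alpha = k / n
    assert abs(cut_ratio(k, n) - 2 * alpha * (1 - alpha)) < 1e-3
```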
For general $d$, the following basic bound for $\varphi_d$ was observed by Gromov, and independently by Linial, Meshulam, and Wallach \cite{LinialMeshulam:HomologicalConnectivityRandom2Complexes-2006,MeshulamWallach:HomologicalConnectivityRandomComplexes-2009} (and maybe by others). In our terminology, it can be stated as follows:
\begin{lemma}[\textbf{Basic {Cofilling} Bound}]
\label{lem:basic}
For every $d\geq 1$ and all $\alpha\in [0,1]$,
$$
\varphi_d(\alpha)\ge \alpha.
$$
\end{lemma}
We will recall a simple combinatorial proof of this bound, along with other basic properties of the coboundary operator, in Section~\ref{sec:basics}. A simple example shows that the basic bound is attained with equality for
$\alpha=\tfrac{(d+1)!}{(d+1)^{(d+1)}}\approx e^{-(d+1)}$,
but for smaller $\alpha$, improvements are possible,
as we will discuss later.
In Section~\ref{sec:topological-outline}, we present an outline of the topological part of Gromov's argument, which yields the following general lower bound.
\begin{prop}[\textbf{\cite{Gromov:SingularitiesExpandersTopologyOfMaps2-2010}}]
\label{prop:GromovFillingBaranyConstant}
For every $d\geq 1$,
$
c^\textup{top}_d\ge \varphi_{d}(\tfrac12 \varphi_{d-1}(\tfrac13 \varphi_{d-2}(\ldots \tfrac 1d \varphi_1(\tfrac1{d+1})\ldots))).
$
\end{prop}
Theorem~\ref{thm:Gromov-Simplex-Selection} follows from this proposition
by using $\varphi_1(\alpha)=2\alpha(1-\alpha)$ and the basic bound for
all $d\ge 2$.
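This derivation is easy to check mechanically. The sketch below (our own illustration, not from the paper) evaluates the nested expression of Proposition~\ref{prop:GromovFillingBaranyConstant} using $\varphi_1$ exactly and the basic bound for $\varphi_2,\dots,\varphi_d$, and confirms that it reproduces the closed form $\tfrac{2d}{(d+1)!(d+1)}$ from (\ref{eq:c_d-top}):

```python
from math import factorial

def phi1(alpha):
    # exact cofilling profile in dimension 1: phi_1(alpha) = 2 alpha (1 - alpha)
    return 2.0 * alpha * (1.0 - alpha)

def gromov_bound(d):
    """phi_d(1/2 phi_{d-1}(1/3 ... 1/d phi_1(1/(d+1)) ...)), with phi_1 exact
    and the basic bound phi_k(alpha) >= alpha for 2 <= k <= d."""
    val = phi1(1.0 / (d + 1))
    for k in range(2, d + 1):
        val = val / (d + 2 - k)   # the factor 1/(d+2-k); basic bound keeps val
    return val

for d in range(2, 8):
    assert abs(gromov_bound(d) - 2 * d / (factorial(d + 1) * (d + 1))) < 1e-12
```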
Better bounds on the $c^\textup{top}_d$ would immediately follow from
Proposition~\ref{prop:GromovFillingBaranyConstant} if
one could improve on the basic cofilling bound
in an appropriate range of $\alpha$'s;
this is a purely combinatorial question (and a quite nice one, in
our opinion).
We establish the following lower bounds for $\varphi_2$ and $\varphi_3$:
\begin{theorem}
\label{thm:cofilling-2} For $d=2$ and all $\alpha\le\frac14$,
we have the lower bound
$$
\varphi_2(\alpha) \ge \tfrac34\left(1-\sqrt{1-4\alpha}\right)(1-4\alpha)=
\tfrac 32\alpha-\tfrac92\alpha^2-3\alpha^3-O(\alpha^4).
$$
\end{theorem}
Fig.~\ref{f:f2bounds} shows a plot of this lower bound.
\labfigw{f2bounds}{12cm}{The lower and upper bounds on $\varphi_2(\alpha)$:
the straight line is the basic bound,
the top one the upper bound (Proposition~\ref{p:ubbb}),
and the bottom (most curved) is the
lower bound (Theorem~\ref{thm:cofilling-2}).}
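As a quick numerical sanity check of Theorem~\ref{thm:cofilling-2} (not part of any proof), one can confirm that the closed form and the stated expansion agree up to the $O(\alpha^4)$ term for small $\alpha$, and that the bound indeed exceeds the basic bound $\varphi_2(\alpha)\ge\alpha$ there; the sketch below is our own:

```python
import math

def lb2(a):
    # the lower bound of Theorem 'cofilling-2', valid for alpha <= 1/4
    return 0.75 * (1 - math.sqrt(1 - 4 * a)) * (1 - 4 * a)

a = 0.01
series = 1.5 * a - 4.5 * a**2 - 3 * a**3   # the stated expansion
assert abs(lb2(a) - series) < 1e-6          # agreement up to O(a^4)
assert lb2(a) > a                           # beats the basic bound for small a
```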
\begin{theorem}
\label{t:3ub}
For $d=3$ and $\alpha$ sufficiently small,
$$
\varphi_3(\alpha)\ge \frac 43\alpha- O(\alpha^2)
$$
(with a constant that could be made explicit).
\end{theorem}
These theorems will be proved in Sections~\ref{sec:cofilling-2} and \ref{sec:cofilling-ge2}, respectively.
They do not improve on Gromov's lower bounds
for $c_3$, for example, since they do not beat the basic bound
for the values of $\alpha$ needed in
Proposition~\ref{prop:GromovFillingBaranyConstant} for $d$ small.
However, they do apply if we take $d$ sufficiently large
in Proposition~\ref{prop:GromovFillingBaranyConstant},
and so they at least show that Gromov's lower bound
on $c^\textup{top}_d$ is not tight for large~$d$.
After the research reported in this paper was completed, Kr\'a{l'} et
al.~\cite{KMS} proved the lower bound $\varphi_2(\alpha)\ge
\frac97\alpha(1-\alpha)$. This is better than the bound of
Theorem~\ref{thm:cofilling-2} for $\alpha$ larger than approximately $0.0626$,
and it does improve on Gromov's lower bound for~$c_3$.
The bounds in Theorems~\ref{thm:cofilling-2}
and~\ref{t:3ub} may look like only minor improvements over the
basic bound, but it turns out that they have the right order
of magnitude for $\alpha$ tending to~$0$. Indeed,
we have the following upper bound on the $\varphi_d$'s.
\begin{prop}\label{p:ubbb}
For all $d\geq 1$ and $\alpha \leq \frac{1}{d+1}$,
let $\sigma\in [0,1)$ be the smallest positive number with
$\alpha=d!\sigma(\frac{1-\sigma}d)^{d-1}$. Then
$$\varphi_d(\alpha) \leq \tfrac{d+1}d\alpha(1-\sigma),$$
and consequently, $\varphi_d(\alpha)\le \frac{d+1}d\alpha$.
\end{prop}
These bounds are plotted in Fig.~\ref{f:upperboundplots}.
\labfigw{upperboundplots}{10cm}{
Plots of the upper bounds from Proposition~\ref{p:ubbb}
for $d=2,3,4$, together with the basic bound.}
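For $d=2$, the defining equation of Proposition~\ref{p:ubbb} reads $\alpha=\sigma(1-\sigma)$ and can be solved explicitly as $\sigma=\frac12\bigl(1-\sqrt{1-4\alpha}\bigr)$, which makes the upper bound easy to evaluate. The following is a hedged numerical sketch of our own, not code from the paper:

```python
import math

def upper2(a):
    """Upper bound of the proposition for d = 2: solve alpha = sigma(1-sigma)
    for the smaller root sigma, then return (3/2) * alpha * (1 - sigma)."""
    sigma = (1 - math.sqrt(1 - 4 * a)) / 2
    assert abs(sigma * (1 - sigma) - a) < 1e-12   # sigma solves the equation
    return 1.5 * a * (1 - sigma)

a = 0.1
assert a < upper2(a) <= 1.5 * a   # the weaker consequence phi_2 <= (3/2) alpha
```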
Proposition~\ref{p:ubbb} follows from a simple example, whose special
case with $\alpha=\frac1{d+1}$ has already been noted in
\cite{Gromov:SingularitiesExpandersTopologyOfMaps2-2010}
and in \cite{LinialMeshulam:HomologicalConnectivityRandom2Complexes-2006,MeshulamWallach:HomologicalConnectivityRandomComplexes-2009}.
We present the example and its analysis in Section~\ref{s:multipart}.
We conjecture that the bound in Proposition~\ref{p:ubbb}
is the truth, and moreover, that the example mentioned above
is essentially the only possible extremal example.
However, a proof may be challenging even for the $d=2$ case.
On the other hand, we believe that a suitable extension of
the proof of Theorem~\ref{t:3ub} may provide a bound of the form
$\varphi_d(\alpha)\ge \frac{d+1}d\alpha-o(\alpha)$ as $\alpha\to 0$.
At present it seems that such an extension would
be highly technical and complicated.
In view of the upper bound $\varphi_d(\alpha)\le
\frac{d+1}d\alpha$, Gromov's lower bound on $c_d$
cannot be improved by more than a factor of roughly $d$
using Proposition~\ref{prop:GromovFillingBaranyConstant}
alone.
In Section~\ref{sec:c3},
we introduce a somewhat different approach, which goes beyond
Proposition~\ref{prop:GromovFillingBaranyConstant} and uses
additional combinatorial structure, and we show that
it can provide a slightly better lower bound for $c_3$.
We formulate a combinatorial extremal problem whose solution might perhaps lead to a tight lower bound for the~$c^\textup{top}_d$.
\section{Basics}
\label{sec:basics}
\label{s:multipart}
\label{s:seidel}
\heading{Linearity of the coboundary.}
For systems $E,E' \subseteq\binom{V}{d}$ of $d$-tuples, $E+E'$ means the symmetric difference.
The coboundary operator is (well known and easily checked to be)
\emph{linear} with respect to this operation, i.e.,
$$\delta(E+E')=\delta E+\delta E',$$
and we have
$$\delta\delta E=0,$$
where $0$ means the empty system of $(d+2)$-tuples.
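Both identities are easy to check by brute force on small instances. The sketch below uses an illustrative `coboundary` helper of our own (an assumption of this sketch, not a library function):

```python
import random
from itertools import combinations

def coboundary(E, V, d):
    """Z_2 coboundary: the (d+1)-subsets of V that contain an odd
    number of the d-subsets in E."""
    E = {frozenset(e) for e in E}
    return {f for f in map(frozenset, combinations(V, d + 1))
            if sum(e <= f for e in E) % 2 == 1}

random.seed(1)
V, d = range(6), 2
tuples = list(map(frozenset, combinations(V, d)))
E  = {e for e in tuples if random.random() < 0.5}
E2 = {e for e in tuples if random.random() < 0.5}

# linearity: delta(E + E') = delta E + delta E'  (here + is symmetric difference)
assert coboundary(E ^ E2, V, d) == coboundary(E, V, d) ^ coboundary(E2, V, d)
# chain complex property: delta delta E = 0
assert coboundary(coboundary(E, V, d), V, d + 1) == set()
```

The second identity holds because every $(d+2)$-set contains each of its $d$-subsets in exactly two of its $(d+1)$-subsets, an even number.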
\heading{Cochains, coboundaries, and cocycles.}
A system $E\subseteq \binom{V}{d}$ of $d$-tuples is also sometimes called a $(d-1)$-dimensional \emph{cochain}\footnote{Strictly speaking, a cochain with ${\mathbb{Z}}_2$-coefficients in the simplex $\Delta^{n-1}$.}, or simply $(d-1)$-cochain. Later on, when working with systems of tuples of various arities, it will be convenient to use this terminology. A cochain $E$ is called a \emph{cocycle} if $\delta E=0$, and it is a \emph{coboundary} if it can be written as $E=\delta D$ for some $(d-2)$-cochain $D$.
In algebraic terms, a $(d-1)$-cochain $E\subseteq \binom{V}{d}$ can be identified with a $0/1$-vector indexed by the elements of $\binom{V}{d}$, which we can interpret as an element of the vector space ${\mathbb{Z}}_2^{\binom{V}{d}}$ over the $2$-element field ${\mathbb{Z}}_2$, and the symmetric difference $+$ corresponds to the usual addition in this vector space. The coboundary operators are linear maps between these spaces (mapping $(d-1)$-cochains to $d$-cochains). The $(d-1)$-cocycles are precisely the elements of the kernel of this linear map, and the $d$-coboundaries are the elements of the image. The property $\delta\delta =0$ is usually called the \emph{chain complex property}.
\paragraph{Minimality.} As we have seen, every coboundary is also a cocycle, i.e., if $F=\delta E \in \binom{V}{d+1}$, then $\delta F=0$.
In our setting, this is a complete characterization, i.e., $F \in \binom{V}{d+1}$ is a coboundary if and only if $\delta F=0$. Topologically speaking, this is because the $(n-1)$-simplex has zero cohomology.
We stress that there is one exceptional case, namely,
$d=0$. There are two $0$-cocycles $F\in \binom{V}{1}$, namely, $F=V$,
the set of all vertices, and $F=0$, the empty set of vertices.
However, \emph{by definition}, $0$ is considered to be a coboundary,
but $V$ is not. In topological terms, this is because we are working
with ordinary, \emph{non-reduced} cohomology.\footnote{We remark that from a combinatorial point of view, it may be more natural also to consider the subsets of $\binom{V}{0}$. There are two such subsets, namely $\{\varepsilon \}$ (the singleton set containing the unique $0$-tuple $\varepsilon$ of vertices of $V$) and $0$ (the empty set of $0$-tuples). For reduced cohomology, one defines $\delta \{\varepsilon \} = V$, so that the exceptional case in the characterization of cocycles disappears, but for the topological theorem that Gromov applies, it is important to work with unreduced cohomology.}
This will be formally important in Section~\ref{subsec:simplicialsets}.
Because of the equivalence of cocycles and coboundaries, for $d-1 \geq 1$,
minimality of $E$ can be equivalently characterized
as follows: $E$ is minimal if $\|E\|\le \|E+\delta D\|$ for
every $D\subseteq {V\choose d-1}$.
Thus, minimality means that $E$ contains at most half
of the $d$-tuples from each system of the form $\delta D$.
Consequently, if $E$ is minimal and $E'\subseteq E$,
then $E'$ is minimal as well.
It also follows from the alternative characterization of minimality that every minimal $E$ satisfies $\|E\| \leq 1/2$.
To see this, let $D=\{x\}$ be a singleton set consisting of a single $(d-1)$-tuple $x$. Then $|E \cap \delta D| \leq 1/2 | \delta D|$ means that $x$ is incident to at most $|\delta D|/2=\frac{n-d+1}{2}$ many $d$-tuples $e \in E$. Summing up over all $x \in \binom{V}{d-1}$, we get $d |E| \leq \binom{n}{d-1} (n-d+1)/2$, and hence $|E| \leq \binom{n}{d}/2$.
As was remarked above, minimality refers to a fixed vertex set $V$.
If a minimal $E \subseteq \binom{V}{d}$ happens to be contained
in $\binom{W}{d}$ then, considered as a set of subsets of $W$, it need
no longer be minimal. For example, suppose we partition $V$ into three vertex sets $V_1, V_2, V_3$ of size $n/3$ each and let $E=\{ \{v_1,v_2\}: v_1\in V_1, v_2 \in V_2\}$, the (edge set of the) complete bipartite graph between $V_1$ and $V_2$. Then it is not hard to check that $E$ is minimal as a subset of $\binom{V}{2}$, but not as a subset of $\binom{V_1 \cup V_2}{2}$.
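The dependence of minimality on the ambient vertex set can be verified directly for the smallest instance of this example, $|V_i|=2$. The brute-force check below, with our own illustrative helpers, is exponential in $n$ and only meant for tiny instances:

```python
from itertools import combinations

def coboundary(E, V, d):
    """Z_2 coboundary: (d+1)-subsets of V containing an odd number
    of the d-subsets in E."""
    E = {frozenset(e) for e in E}
    return {f for f in map(frozenset, combinations(V, d + 1))
            if sum(e <= f for e in E) % 2 == 1}

def is_minimal(E, V, d):
    """Check |E| <= |E + delta D| for every system D of (d-1)-subsets of V."""
    E = {frozenset(e) for e in E}
    lower = list(map(frozenset, combinations(V, d - 1)))
    for r in range(len(lower) + 1):
        for D in combinations(lower, r):
            if len(E ^ coboundary(D, V, d - 1)) < len(E):
                return False
    return True

V1, V2 = {0, 1}, {2, 3}
E = {frozenset({a, b}) for a in V1 for b in V2}   # complete bipartite graph
assert is_minimal(E, range(6), 2)       # minimal inside V = {0,...,5}
assert not is_minimal(E, range(4), 2)   # not minimal inside V1 union V2
```

(In the second case the cut determined by $D=V_1$ equals $E$ itself, so switching by it empties $E$.)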
\paragraph{Links of vertices.}
For a vertex $v\in V$, let us write $E_v:=\{e\in E: v\in e\}$.
The \emph{link} of $v$ in $E$ is the system of $(d-1)$-tuples
$\mathop {\rm lk}\nolimits(v,E):=\{e\setminus \{v\}: e\in E_v\}$.
It is easy to check the following formula for the coboundary of
the link:
\begin{equation}\label{e:link-cobo}
\delta\mathop {\rm lk}\nolimits(v,E) = E_v + \mathop {\rm lk}\nolimits(v,\delta E_v)
\end{equation}
(the $+$ on the right-hand side is actually a disjoint union in this case).
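The formula can be confirmed by brute force on a random instance, again with illustrative helpers of our own (not library functions):

```python
import random
from itertools import combinations

def coboundary(E, V, d):
    """Z_2 coboundary: (d+1)-subsets of V containing an odd number
    of the d-subsets in E."""
    E = {frozenset(e) for e in E}
    return {f for f in map(frozenset, combinations(V, d + 1))
            if sum(e <= f for e in E) % 2 == 1}

def link(v, E):
    """The link of v in E: drop v from each member of E containing v."""
    return {e - {v} for e in E if v in e}

random.seed(2)
V, d, v = range(6), 2, 0
E = {e for e in map(frozenset, combinations(V, d)) if random.random() < 0.5}
Ev = {e for e in E if v in e}

lhs = coboundary(link(v, E), V, d - 1)
rhs = Ev ^ link(v, coboundary(Ev, V, d))
assert lhs == rhs        # delta lk(v,E) = E_v + lk(v, delta E_v)
```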
\paragraph{The basic bound for the filling profile.}
We now recall a combinatorial proof of the basic lower bound for $\varphi_d$.
\begin{proof}[Proof of Lemma~\ref{lem:basic}]
Let $F\subseteq{V\choose d+1}$ be a coboundary, and let $\beta:=\|F\|$.
We define the \emph{normalized degree} of $v$ as
$\|\mathop {\rm lk}\nolimits(v,F)\|=|\mathop {\rm lk}\nolimits(v,F)|/{n\choose d}$.
A simple double counting
shows that the average normalized degree of a vertex is $\frac{n-d}{n}\beta\le\beta$.
In particular, there exists a vertex $v$ of normalized degree at most $\beta$;
so we fix such a $v$ and
we set $E:=\mathop {\rm lk}\nolimits(v,F)$. We will check that $\delta E=F$.
Let $F_{\setminus v}:=F\setminus F_v$. Since $F$ is a coboundary,
we have $\delta F=0$, and thus $\delta F_v=\delta F_{\setminus v}$.
Let us consider an arbitrary $(d+1)$-tuple $f$ and distinguish two cases.
If $v\in f$, then it is easily seen that $f\in \delta E$
is equivalent to $f\in F$.
Next, let $v\not\in f$. Assuming $f\in\delta E$, we have
$f^+:=f\cup\{v\}\in \delta F_v$. Using $\delta F_v=\delta F_{\setminus v}$,
we get $f^+\in \delta F_{\setminus v}$. Since the sets of $F_{\setminus v}$
avoid $v$, there is only one set of $F_{\setminus v}$ that may be contained
in $f^+$, namely~$f$. So $f\in F$. This argument can be reversed,
showing that $f\in F$ implies $f\in\delta E$.
Hence $\delta E=F$ with $\|E\|\le\beta$; consequently, any minimal $E'$
with $\delta E'=F$ satisfies $\|E'\|\le\|E\|\le\|\delta E'\|$, which is
the claimed bound.
\end{proof}
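The filling step in this proof is concrete enough to run: starting from a coboundary $F=\delta E_0$, the link of any vertex is again a filling of $F$, and a vertex of minimum normalized degree yields one of normalized size at most $\|F\|$. The helpers below are our own illustrative sketch:

```python
import random
from math import comb
from itertools import combinations

def coboundary(E, V, d):
    E = {frozenset(e) for e in E}
    return {f for f in map(frozenset, combinations(V, d + 1))
            if sum(e <= f for e in E) % 2 == 1}

def link(v, E):
    return {e - {v} for e in E if v in e}

random.seed(3)
n, d = 7, 2
V = range(n)
E0 = {e for e in map(frozenset, combinations(V, d)) if random.random() < 0.5}
F = coboundary(E0, V, d)                   # a d-coboundary, hence a cocycle

v = min(V, key=lambda u: len(link(u, F)))  # vertex of minimum normalized degree
E = link(v, F)
assert coboundary(E, V, d) == F            # the link is again a filling of F
# its normalized size is at most ||F||, as in the proof of the basic bound
assert len(E) / comb(n, d) <= len(F) / comb(n, d + 1)
```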
\paragraph{The upper bound example. }
Here we prove Proposition~\ref{p:ubbb}, the upper bound
on the {cofilling} profile $\varphi_d$.
Let $\sigma \in [0,1]$ be a parameter. We partition the vertex set $V$
into $V_1,\ldots,V_{d+1}$,
where $|V_1|=\sigma n$ and the remaining vertices are divided evenly, i.e.,
$|V_i|= \frac{1-\sigma}d n$, $i>1$ (we ignore divisibility issues).
Let $E$ consist of all $d$-tuples that use exactly
one point from each of $V_1,\ldots,V_d$; see Fig.~\ref{fig:UB-cofilling}.
We have $\|E\|=:\alpha=d!\sigma(\frac{1-\sigma}d)^{d-1}$.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=1]{UpperBoundExample}
\caption{The upper bound example in the case $d=3$. The solid triangle depicts a typical element of $E$, and the dashed segments complete it to a tetrahedron that is a typical element of $\delta E$.\label{fig:UB-cofilling}}
\end{center}
\end{figure}
Then $F:=\delta E$ is the complete $(d+1)$-partite system on
$V_1,\ldots,V_{d+1}$, and $\|\delta E\|=\frac{d+1}d\alpha(1-\sigma)$.
This matches the quantitative bounds in Proposition~\ref{p:ubbb},
and it remains to check the minimality of $E$,
which is easy (and stated by Gromov and by Meshulam et al.\ without
proof): Let us consider some $E'$ with $F=\delta E'$.
Every $f\in F$ contains at least one
$e\in E'$, while every $e\in E'$ is contained in at most
$M:=\max_i{|V_i|}$ many $f\in F$. So $|E'|\ge |F|/M=|E|$.
\ProofEndBox\smallskip
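The structure of the example is easy to confirm computationally for $d=2$: with $E$ the complete bipartite system between $V_1$ and $V_2$, the coboundary $\delta E$ comes out as exactly the complete tripartite triple system. This is a sketch with our own `coboundary` helper; note that the normalized counts match the stated formulas only asymptotically, for large $n$:

```python
from itertools import combinations

def coboundary(E, V, d):
    E = {frozenset(e) for e in E}
    return {f for f in map(frozenset, combinations(V, d + 1))
            if sum(e <= f for e in E) % 2 == 1}

V1, V2, V3 = {0, 1}, {2, 3, 4}, {5, 6, 7}        # sigma = 1/4, d = 2, n = 8
V = range(8)
E = {frozenset({a, b}) for a in V1 for b in V2}   # one point from each of V1, V2
F = coboundary(E, V, 2)
tripartite = {frozenset({a, b, c}) for a in V1 for b in V2 for c in V3}
assert F == tripartite    # delta E is the complete tripartite system
```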
\paragraph{On Seidel's switching. } In combinatorics,
a \emph{two-graph} is a set $F\subseteq {V\choose 3}$ of triples
with $\delta F=0$ (i.e., a cocycle in our terminology).
As we have mentioned, this is equivalent to $F$ being a coboundary,
i.e., to the existence of some $E\subseteq
{V\choose 2}$ with $F=\delta E$. The system of all possible
$E'$ with $\delta E'=F$ is called the \emph{Seidel switching class}
of~$E$.
Two-graphs and Seidel's switching were introduced by Van Lint and Seidel
\cite{vL-S} and further studied by many authors,
because of their connections with equiangular lines,
strongly regular graphs, and interesting finite groups, for example
(for surveys see, e.g., Seidel and Taylor
\cite{SeidelTaylor} or Hage \cite{Hage-thesis}).
Numerous authors investigated the computational complexity
of various problems related to Seidel's switching
(we refer to \cite{Jeli-al} for citations).
For us, the following result is of particular interest:
It is NP-complete to decide if a given $E\subseteq {V\choose 2}$
is minimal (in its Seidel switching class), as was proved
by Jel\'{\i}nkov\'a et al.~\cite{Jeli-al}. Their reduction
produces only $E$'s with $\|E\|\ge\frac 12$ and
$\|E'\|\ge\frac14$ for all $E'$ in the same switching class;
however, recently Jel\'{\i}nek (private communication, September 2010)
was able to modify the reduction, showing that
the problem remains NP-complete even if we restrict
to $E$'s with $\|E\|\le c$, for every fixed $c>0$.
This shows that minimal sets have a complicated structure, and
one cannot expect to find a reasonable characterization.
\section{An Outline of Gromov's Topological Approach}
\label{sec:topological-outline}
In this section, we give a rather informal and elementary outline of the topological part of Gromov's approach
(Sections~2.2, 2.4, 2.5, and 2.6 of \cite{Gromov:SingularitiesExpandersTopologyOfMaps2-2010}). This outline
is not directly related to the new (combinatorial) results of the present paper.
We strive to keep the discussion as elementary as possible for as long as possible. For this reason, we restrict ourselves to the most basic (affine) setting of a finite set $P$ of $n$ points in general position in ${\mathbb{R}}^d$, which allows us to describe most steps in the argument in simple geometric terms.
As remarked above, Gromov's method applies in much more general situations. In Section~\ref{subsec:abstract}, we briefly discuss the more general setting and also include some remarks as to how our elementary discussion
would be formulated in more standard topological terms.
We begin with an outline of our outline, by means of an example.
\begin{example}
Let $P=\{p_1,\ldots, p_5\}$ be the set of five points in ${\mathbb{R}}^2$ depicted by bold dots in
Figure~\ref{fig:PlanarAffineExample}, and let $V=[5]:=\{1,2,3,4,5\}$.
\begin{figure}[tb]
\centerline{\includegraphics[scale=0.7]{5PointExample}}
\caption{A set of five labeled points in general position in ${\mathbb{R}}^2$ (the image of $\Delta^4$ under an affine map).\label{fig:PlanarAffineExample}}
\end{figure}
Consider the three points $x,y,z$ marked by crosses in the picture. These three points are in general position w.r.t. $P$, in the sense that they do not lie on any line segment spanned by $P$, and no point of $P$ lies on any of the line segments spanned by $x,y,z$.
Let $F_x=\{ \{1,2,3\}, \{1,2,4\}, \{1,2,5\}\}$ be the set of all triples $\{i,j,k\} \in \binom{V}{3}$ such that $x$ lies in the triangle $p_i p_j p_k$, and let $F_y=\{\{1,2,3\},\{2,3,4\},\{2,3,5\}\}$ and $F_z=\{\{1,2,3\}, \{1,3,5\},$ $\{2,3,4\},\{3,4,5\}\}$ be defined analogously as the (index sets of) triangles containing $y$ and $z$, respectively.
Let $F_{xy}=\{ \{2,4\}, \{2,5\}\}$ be the set of pairs $\{i,j\} \in \binom{V}{2}$ such that the line segment $p_i p_j$ intersects the line segment $xy$, and let $F_{yz}=\{\{3,5\}\}$ and $F_{xz}=\{ \{1,5\}, \{2,4\}, \{4,5\}\}$ be defined analogously.
Finally, let $F_{xyz}= \{5\}$ be the set of indices $i \in V$ such that $p_i$ lies in the triangle $xyz$ (here, we identify elements $i \in V$ with singleton sets $\{i\} \subseteq V$ to simplify notation).
The basic observation is that these sets satisfy the relations
$$
\delta F_x = \delta F_y = \delta F_z =0,
$$
$$
F_x + F_y = \delta F_{xy},\quad F_y + F_z =\delta F_{yz}, \quad F_x+ F_z =\delta F_{xz},
$$
and
$$
F_{xy} + F_{yz} + F_{xz} = \delta F_{xyz}.
$$
It is straightforward to verify this in the specific example at hand, but it may in fact be easier---and an instructive
exercise---to check that these relations are not a coincidence, but hold in general for any finite set $P \subseteq {\mathbb{R}}^2$ and any triple of points $x,y,z$,
assuming only general position. Moreover,
similar facts hold in ${\mathbb{R}}^d$.
We discuss a proof of the general case below.
\end{example}
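All of the stated relations in the example are finite ${\mathbb{Z}}_2$ computations and can be replayed mechanically, using an illustrative `coboundary` helper of our own:

```python
from itertools import combinations

def coboundary(E, V, d):
    """Z_2 coboundary: (d+1)-subsets of V containing an odd number
    of the d-subsets in E."""
    E = {frozenset(e) for e in E}
    return {f for f in map(frozenset, combinations(V, d + 1))
            if sum(e <= f for e in E) % 2 == 1}

V = range(1, 6)
fs = lambda *xs: {frozenset(x) for x in xs}
Fx  = fs({1,2,3}, {1,2,4}, {1,2,5})
Fy  = fs({1,2,3}, {2,3,4}, {2,3,5})
Fz  = fs({1,2,3}, {1,3,5}, {2,3,4}, {3,4,5})
Fxy, Fyz, Fxz = fs({2,4}, {2,5}), fs({3,5}), fs({1,5}, {2,4}, {4,5})
Fxyz = fs({5})

for F in (Fx, Fy, Fz):                       # each F is a 2-cocycle
    assert coboundary(F, V, 3) == set()
assert Fx ^ Fy == coboundary(Fxy, V, 2)      # F_x + F_y = delta F_xy
assert Fy ^ Fz == coboundary(Fyz, V, 2)
assert Fx ^ Fz == coboundary(Fxz, V, 2)
assert Fxy ^ Fyz ^ Fxz == coboundary(Fxyz, V, 1)
```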
Informally speaking and in very general terms, the structure of Gromov's topological approach can be summarized as follows. Each step of the argument will be discussed in more detail in a separate subsection below. We fix $d\geq 1$ (the target dimension) and $V=[n]$ (the vertex set of the $(n-1)$-simplex $\Delta^{n-1}$).
\begin{enumerate}
\item We define a topological space $\cocyc{d}=\cocyc{d}(\Delta^{n-1})$, the space of $d$-dimensional cocycles (of the $(n-1)$-simplex).
\indent {\small This space is a \emph{simplicial set}, i.e., a space built of vertices, edges, triangles, and higher-dimensional simplices like a simplicial complex, but simplices are allowed to be glued to each other and to themselves in more general ways (in a first approximation, simplicial sets can be thought of as higher-dimensional analogues of multigraphs with loops, while simplicial complexes are higher-dimensional analogues of simple graphs).
The vertices of $\cocyc{d}$ are the $d$-dimensional cocycles $F\subseteq \binom{V}{d+1}$. The edges of $\cocyc{d}$ correspond to relations of the form $F_1 + F_2 = \delta F_{12}$, where $F_{12}\subseteq \binom{V}{d}$. The triangles of $\cocyc{d}$ correspond to triples of $d$-dimensional cocycles $F_1$, $F_2$, $F_3$, edges between them, and a relation of the form $F_{12} + F_{23} + F_{31} = \delta F_{123}$, where $F_{123} \subseteq \binom{V}{d-1}$, etc. }
We stress that $\cocyc{d}$ depends only on $n$ and $d$ and is defined purely combinatorially. Moreover, as a combinatorial object, it is huge. For instance, the number of vertices of $\cocyc{d}$ ($d$-dimensional cocycles of the simplex) is $2^{\binom{n-1}{d}}$.
\item With every labeled $n$-point set $P\subseteq {\mathbb{R}}^d$, we associate a particular subspace\footnote{Formally, it would be more precise to regard $\mathcal{W}$ as a $d$-dimensional ${\mathbb{Z}}_2$-homology cycle, i.e., as a formal ${\mathbb{Z}}_2$-linear combination of $d$-simplices in $\cocyc{d}$ such that $\partial\mathcal{W}=0$. However, since we are working with ${\mathbb{Z}}_2$-coefficients, we can simply think of $\mathcal{W}$ as a subspace, given as the union of those $d$-simplices that appear an odd number of times in the formal sum.} $\mathcal{W} =\mathcal{W}(P) \subseteq \cocyc{d}$.
{\small A concrete way of doing this is to choose a triangulation $\mathcal{T}$ of ${\mathbb{R}}^d$ that is in general position w.r.t. $P$ (i.e., no $k$-dimensional simplex of $\mathcal{T}$ intersects any $\ell$-dimensional simplex spanned by $P$ if $k+\ell< d$). With every vertex $x$ of $\mathcal{T}$ we associate the set
$$F_x:=\{ f=\{i_0,i_1,\ldots ,i_d\} \in \textstyle{\binom{V}{d+1}}: x \in p_{i_0} p_{i_1}\ldots p_{i_d}\}$$
of (indices of) $d$-simplices spanned by $P$ that contain $x$. As indicated by the example, each such $F_x$ is a cocycle, i.e., a vertex of $\cocyc{d}$ (but not all vertices of $\cocyc{d}$ may be of this special form).
With every edge $xy$ of $\mathcal{T}$, we associate the set $F_{xy}$ of $d$-tuples from $V$ such that the corresponding points of $P$ span a $(d-1)$-simplex that intersects $xy$. As in the example, we get the relation $F_x + F_y =\delta F_{xy}$, and hence an edge of $\cocyc{d}$.
Similarly, each $k$-dimensional simplex of the triangulation gives rise to a $k$-simplex of $\cocyc{d}$ (but not all $k$-simplices in $\cocyc{d}$ may be of this form). We define the subspace $\mathcal{W}$ to consist of all simplices of $\cocyc{d}$ that are obtained from an odd number of simplices of $\mathcal{T}$ using this construction. (In principle, different simplices of the triangulation $\mathcal{T}$ may yield the same $k$-simplex in $\cocyc{d}$.)}
\item It follows from a theorem in algebraic topology, the Almgren--Dold--Thom Theorem, that the subspace $\mathcal{W}$ is not contractible inside $\cocyc{d}$.
\item If we choose the triangulation $\mathcal{T}$ sufficiently finely,
then for every point $a\in {\mathbb{R}}^d$, there is a vertex $x$ of $\mathcal{T}$ with $F_a = F_x$. Thus, if no point of ${\mathbb{R}}^d$ is covered by ``many'' $d$-simplices of $P$, then all sets $F_x$ are ``small.'' If this is so,
then by purely combinatorial means, we can define a concrete way of contracting the subspace $\mathcal{W}$ to a single point inside $\cocyc{d}$; see Figure~\ref{fig:IllustrationContraction}. This is a contradiction.
\begin{figure}
\begin{center}
\includegraphics{winz-a}\quad \includegraphics{winz-b}
\caption{A schematic illustration of the last two steps of the argument: $\mathcal{W}$ is not contractible inside $\cocyc{d}$, but if no point in ${\mathbb{R}}^d$ were covered by sufficiently many $d$-simplices of $P$ then we could contract $\mathcal{W}$ inside $\cocyc{d}$ to a single point.\label{fig:IllustrationContraction}}
\end{center}
\end{figure}
Thus, some point must be covered by many $d$-simplices.
\end{enumerate}
We now proceed to discuss the above steps in more detail.
\subsection{Simplicial Sets and the Space of Cocycles}
\label{subsec:simplicialsets}
\emph{Simplicial sets}\footnote{These objects also have many other names commonly found in the literature, including \emph{complete semisimplicial complexes} \cite{EilenbergZilber:SemisimplicialComplexes-1950}. Gromov uses the terminology \emph{semisimplicial spaces}.
}
are a generalization of simplicial complexes. As in the case of a simplicial complex, a simplicial set is built from $0$-simplices (vertices), $1$-simplices (edges), $2$-simplices (triangles), and higher-dimensional simplices.
One starts with the vertices, then glues each edge to one or two vertices by its endpoints, then one attaches triangles to vertices or edges along their boundaries, etc. In contrast to simplicial complexes, the attaching may involve various identifications. For instance, both endpoints of an edge may be attached to the same vertex, and two or more $i$-simplices in a simplicial set may have the same boundary. In this respect, simplicial sets can be thought of, in a first approximation, as higher-dimensional analogues of multigraphs with loops. On the other hand, in contrast to general cell complexes, there are restrictions as to what kind of attaching maps are allowed%
\footnote{Roughly speaking, one can think of each of the original simplices as having an ordered set of vertices. The attaching maps are linear maps induced by weakly monotone (order-preserving) maps between the vertex sets of the simplices.}, which makes simplicial sets more combinatorial than general cell complexes. We refer the reader to the article by Friedman \cite{Friedman:IntroductionSimplicialSets-2008}
for a very clear and accessible introduction to simplicial sets and to the book by May \cite{May:SimplicialObjects-1992} for a detailed treatment (further references can be found in Friedman's article).
The key object in Gromov's method is the \emph{space of $d$-dimensional cocycles}, which we denote\footnote{
For those readers who wish to read \cite{Gromov:SingularitiesExpandersTopologyOfMaps2-2010} in conjunction with the present one, we remark that Gromov uses the notation $\textrm{cl}^d$ or $\textrm{cl}^d_{\textup{sms}}$ for the space we denote by $\cocyc{d}$.} by $\cocyc{d}=\cocyc{d}(\Delta^{n-1})$ and which is a simplicial set defined as follows.
The vertices of $\cocyc{d}$ are the $d$-dimensional cocycles (of the simplex $\Delta^{n-1}$), i.e., subsets $F\subseteq \binom{V}{d+1}$ such that $\delta F=0$.
The edges of $\cocyc{d}$ are given by two (not necessarily distinct) $d$-cocycles $F_1$ and $F_2$ and a set $F_{12}\subseteq \binom{V}{d}$ such that $\delta F_{12}=F_1+F_2$. We stress that $F_{12}$ and at least one of the $F_i$ are necessary in order to uniquely define an edge in $\cocyc{d}$. If there is an $F_{12}'\neq F_{12}$ with $\delta F_{12}'=F_1+F_2$, then it defines a different edge of $\cocyc{d}$ connecting the same pair of vertices. On the other hand, if $F_1'$ and $F_2'$ are another pair of $d$-cocycles with $\delta F_{12} =F_1'+F_2'$, then the same $F_{12}$ yields a different edge of $\cocyc{d}$ connecting a different pair of vertices.
In the next step, a triangle of $\cocyc{d}$ is given by a triple of $d$-cocycles $F_1$, $F_2$, $F_3$, a triple of sets $F_{ij}\subseteq \binom{V}{d}$ and a set $F_{123} \subseteq \binom{V}{d-1}$ such that
\begin{enumerate}
\item[(i)] $\delta F_{ij}= F_i + F_j$, $1\leq i < j \leq 3$, and
\item[(ii)] $\delta F_{123} = F_{12}+ F_{13} + F_{23}$.
\end{enumerate}
The $F_i$ and the $F_{ij}$ define three (not necessarily distinct) vertices and three (not necessarily distinct) edges of $\cocyc{d}$ that form the boundary of a triangle, and together with this other data, $F_{123}$ defines a triangle glued in along that boundary. Again, there may be other $F_{123}'$ with the same coboundary, which define different triangles glued to the same boundary (a higher-dimensional analogue of a multiedge), and there may be a different set of $F_i'$ and/or $F_{ij}'$ which also satisfy conditions (i) and (ii); if so, they yield, together with $F_{123}$, a different triangle of $\cocyc{d}$, glued to a different boundary.
One can continue this definition inductively\footnote{In the beginning of \cite[Section 2.2]{Gromov:SingularitiesExpandersTopologyOfMaps2-2010} Gromov also gives an equivalent definition along the lines of the usual formal viewpoint of simplicial sets as functors from the category of finite totally ordered sets and monotone maps to the category of sets, as in \cite{May:SimplicialObjects-1992}.} for simplices of arbitrary dimension $r$. The case of $(d+1)$-simplices in $\cocyc{d}$ deserves special attention, however (due to the exceptional behavior of the coboundary operator in dimension zero, i.e., the fact that $V$ is not considered a coboundary, which was mentioned in Section~\ref{sec:basics}). A $(d+1)$-simplex in $\cocyc{d}$ is given by the following data (see Figure~\ref{fig:3SimplexZ2}): for each $i=0,1,\ldots,d$
and each $A\in \binom{[d+2]}{i+1}$, there is
a set $F_A \subseteq \binom{V}{d+1-i}$
(i.e., a set of $(d-i)$-dimensional faces) such that
$$\delta F_A =\sum_{B \in \partial A} F_B \qquad \textrm{ for } 0 \leq i \leq d,$$
and
$$\sum_{A \in \binom{[d+2]}{d+1}} F_A=0.$$
\begin{figure}[tb]
\begin{center}
\includegraphics{3SimplexZ2a}
\caption{An illustration of a $3$-dimensional simplex in $\cocyc{2}$. It is given by four $2$-dimensional cocycles $F_i \subseteq \binom{V}{3}$, six sets $F_{ij} \subseteq \binom{V}{2}$, and four sets $F_{ijk} \subseteq \binom{V}{1}$ (with pairwise distinct indices $i,j,k$ running between $1$ and $4$) that satisfy the relations $\delta F_{ij}= F_i + F_j$, $\delta F_{ijk}=F_{ij}+F_{jk}+F_{ik}$, and $\sum_{ijk} F_{ijk}=0$.
\label{fig:3SimplexZ2}}
\end{center}
\end{figure}
\subsection{Intersections and Cocycles}
Let $P =\{p_1,p_2,\ldots, p_n\} \subseteq {\mathbb{R}}^d$ be a labeled set of $n$ points in general position. We think of $V=[n]$ as the set of ``labels'' of $P$. The goal of this section is to define the subspace $\mathcal{W}=\mathcal{W}(P) \subseteq \cocyc{d}$ for this set $P$.
Let $A=a_0a_1\ldots a_k$ be a $k$-dimensional simplex in ${\mathbb{R}}^d$ that is in general position w.r.t. $P$, i.e., no $i$-face of $A$ intersects any $(d-i-1)$-simplex spanned by $P$, $0\leq i \leq k$. We define
$$F_A:= \{\{i_0, i_{1},\ldots, i_{d-k} \} \in {\textstyle \binom{V}{d+1-k}}: A \cap p_{i_0}p_{i_1}\ldots p_{i_{d-k}} \neq \emptyset\}.
$$
That is, we consider the $(d-k)$-simplices spanned by $P$ that are intersected by $A$. Each such simplex is of the form $p_{i_0}p_{i_1}\ldots p_{i_{d-k}}$ for some $(d+1-k)$-tuple $\{i_0, i_1, \ldots, i_{d-k} \} \in {\textstyle \binom{V}{d+1-k}}$ of labels, and $F_A$ consists precisely of these tuples. For simplicity, we will also say that $F_A$ corresponds to the set of $(d-k)$-simplices of $P$ intersected by $A$.
Thus, for a point $x \in {\mathbb{R}}^d$ in general position w.r.t.\ $P$, $F_x =\{ \{i_0,i_1,\ldots, i_d\} \in \binom{V}{d+1}: x \in p_{i_0}p_{i_1}\ldots p_{i_d}\}$ corresponds to the set of $d$-simplices of $P$ that contain $x$. If $xy$ is a segment
in general position, then $F_{xy}$ corresponds to the set of $(d-1)$-simplices of $P$ that intersect $xy$, etc.
As remarked above, the sets $F_x$ are always cocycles, i.e., $\delta F_x=0$, and the sets $F_{xy}$ satisfy
$F_x + F_y =\delta F_{xy}$. More generally, we have:
\begin{lemma}
\label{lem:intersection-duality}
Let $A=a_0a_1\ldots a_k$ be a $k$-simplex in ${\mathbb{R}}^d$ that is in general position w.r.t.\ $P$.
Then
$$\delta F_A=F_{\partial A}:=F_{A_0} + F_{A_1} + \ldots + F_{A_k},$$
where $A_i=a_0\ldots a_{i-1} a_{i+1}\ldots a_k$ is the $(k-1)$-face of $A$ obtained by dropping vertex $a_i$.
\end{lemma}
\begin{proof}
Consider a $(d-k+2)$-tuple $f \subseteq V$ corresponding to a $(d-k+1)$-dimensional simplex $\sigma$ spanned by $P$.
By general position, this $(d-k+1)$-dimensional simplex $\sigma$ is either disjoint from $A$, or it intersects $A$ in a line segment. Each endpoint of this line segment $\sigma \cap A$ is of one of two types: either it arises as the intersection $\sigma \cap A_i$ of $\sigma$ with some facet $A_i$ of $A$, or as the intersection $\sigma_j \cap A$ of $A$ with some facet $\sigma_j$ of $\sigma$. If the intersection $\sigma \cap A$ is empty or if both endpoints are of the same type, then $f$ does not contribute to either side of the claimed identity. If there is one endpoint of each type, then $f$ contributes to both sides of the identity.
\end{proof}
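For $d=2$, the lowest-dimensional instances of the lemma ($\delta F_x=0$ and $F_x+F_y=\delta F_{xy}$) can be tested on random configurations using standard orientation predicates; random coordinates are in general position with probability one. The helpers below are our own sketch, not code from the paper:

```python
import random
from itertools import combinations

def coboundary(E, V, d):
    E = {frozenset(e) for e in E}
    return {f for f in map(frozenset, combinations(V, d + 1))
            if sum(e <= f for e in E) % 2 == 1}

def ccw(a, b, c):
    # True if a, b, c make a left turn (standard orientation predicate)
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]) > 0

def in_triangle(x, a, b, c):
    return ccw(a, b, x) == ccw(b, c, x) == ccw(c, a, x)

def cross(p, q, a, b):
    # do segments pq and ab properly cross?
    return ccw(p, q, a) != ccw(p, q, b) and ccw(a, b, p) != ccw(a, b, q)

random.seed(4)
pt = lambda: (random.random(), random.random())
n = 7
P = [pt() for _ in range(n)]
x, y = pt(), pt()
V = range(n)

Fx = {frozenset(t) for t in combinations(V, 3)
      if in_triangle(x, *(P[i] for i in t))}
Fy = {frozenset(t) for t in combinations(V, 3)
      if in_triangle(y, *(P[i] for i in t))}
Fxy = {frozenset(t) for t in combinations(V, 2)
       if cross(x, y, *(P[i] for i in t))}

assert coboundary(Fx, V, 3) == set()         # F_x is a cocycle
assert Fx ^ Fy == coboundary(Fxy, V, 2)      # F_x + F_y = delta F_xy
```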
If we apply the preceding lemma to a $(k-1)$-face $A_i$ of $A$, we see that
$$\delta F_{A_i}= \sum_{j\neq i} F_{A_{ij}},$$ where $A_{ij}$ is the $(k-2)$-face of $A$ obtained by dropping the vertices $a_i$ and $a_j$.
Iterating this, we see that $A$, together with all its faces, defines a $k$-dimensional simplex in
$\cocyc{d}$, which we denote by $\Delta_P(A)$. Note that, in particular, the vertices of $\Delta_P(A)$ correspond to the $d$-cocycles $F_{a_i}$ associated with the vertices $a_i$ of $A$.
\smallskip
Now we proceed to define the space $\mathcal{W}$. We fix a $d$-dimensional bounding simplex $B$ that contains $P$ in its interior. We also choose and fix a triangulation of $B$ that is in general position with respect to $P$ and that is
\emph{sufficiently fine}, in the sense that
\begin{enumerate}
\item[(a)] for every point $q\in {\mathbb{R}}^d$ there is a vertex $x$ of $\mathcal{T}$ with $F_q=F_x$, and
\item[(b)] any simplex $A$ in the triangulation of dimension $\dim A=k>0$ intersects $o(n^{d-k+1})$ of the $(d-k)$-simplices of $P$, i.e., $\|F_A\|=o(1)$.
\end{enumerate}
Now we complete this triangulation of $B$ to a triangulation $\mathcal{T}$ of the $d$-dimensional sphere\footnote{The somewhat ad-hoc device of introducing a bounding simplex and passing to a triangulation of the sphere can be avoided by using so-called homology with infinite supports (as Gromov does), but we opted for the ad-hoc method to keep the treatment more elementary.} $\mathbb{S}^d$ by adding a point at infinity and coning from this point over the boundary of $B$; see Figure~\ref{fig:triangulation}.
\begin{figure}[tb]
\begin{center}
\includegraphics{Triangulation}
\caption{\label{fig:triangulation} A set of five points in the plane and the line segments spanned by them (depicted by bold dots and solid segments), and a triangulation $\mathcal{T}$ of a bounding simplex $B$ plus a vertex at infinity (depicted by crosses and dashed lines).}
\end{center}
\end{figure}
We define a subspace $\mathcal{W}$ of $\cocyc{d}$ by taking the formal sum of the $d$-simplices $\Delta_P(A)$ over all $d$-simplices $A$ of $\mathcal{T}$ (note that for all simplices $A$ involving the vertex at infinity, we have $F_A=0$). In other words, a $d$-simplex of $\cocyc{d}$ is included in $\mathcal{W}$ if it is equal to $\Delta_P(A)$ for an odd number of $d$-simplices $A$ of $\mathcal{T}$. (Formally, in homological terminology, $\mathcal{W}$ is a $d$-dimensional simplicial cycle in $\cocyc{d}$.) We stress that $\mathcal{W}$ is determined by the $d$-simplices of $\mathcal{T}$, not by the vertices of $\mathcal{T}$.
As we have described it, the subspace $\mathcal{W}$ of $\cocyc{d}$ depends not only on $P$, but also on the triangulation $\mathcal{T}$ that we have chosen. It turns out that for any two choices of triangulations, the corresponding subspaces $\mathcal{W}$ are equivalent in a suitable sense (any two such cycles are homologous); see Section~\ref{subsec:abstract}.
\subsection{Nontriviality}
The key fact upon which Gromov's method hinges is that the subspace $\mathcal{W}$ defined in the previous subsection is always nontrivial, in the following sense:
\begin{quotation}
\textbf{Key Fact.} The subspace $\mathcal{W}$ defined above cannot be contracted inside $\cocyc{d}$.
\end{quotation}
(More formally, one has the stronger statement that $\mathcal{W}$ is homologically nontrivial, i.e., that it is not a homological boundary inside $\cocyc{d}$.)
Gromov deduces this fact from what he calls \emph{the algebraic version of the Almgren--Dold--Thom theorem} (see \cite[Section~2.2]{Gromov:SingularitiesExpandersTopologyOfMaps2-2010}), but
we have not been able to locate the ADT theorem in a suitable form and with proof in the literature.\footnote{The paper by Almgren \cite{Almgren:HomotopyGroupsIntegralCycleGroups-1962} that Gromov cites works in the setting of geometric measure theory and is about integral currents and cycles; an older paper by Dold and Thom \cite{DoldThom-Quasifaserungen-1958} works in the setting of simplicial sets, but only establishes a special case. It may well be that the theorem is well-known and clear to experts in the field, but not to us.} So
we will treat the Key Fact as a black box in our presentation.
\subsection{Coning in the Space of Cocycles and the Proof of Proposition~\ref{prop:GromovFillingBaranyConstant}}\label{s:coning}
To prove Proposition~\ref{prop:GromovFillingBaranyConstant}, one argues that if $\| F_y\|$ were ``too small'' for all vertices $y$ of the triangulation $\mathcal{T}$, then the space $\mathcal{W}$ could be contracted to a point inside $\cocyc{d}$---a contradiction.
In order to show contractibility, we have the following simple \emph{coning argument}: Suppose there is a vertex $o$ in $\cocyc{d}$ such that we can inductively construct a \emph{cone} $o\ast \mathcal{W}$ in $\cocyc{d}$.
That is, suppose we do the following, by induction on the dimension $i$: For each $i$-simplex $\tau$ that is a face of at least one $d$-face in $\mathcal{W}$, select an $(i+1)$-simplex $o\ast \tau$ in $\cocyc{d}$ that has $\tau$ as an $i$-face and $o$ as the remaining vertex, in such a way that
\begin{equation}\label{eq:coning}
\partial (o \ast \tau)=\tau + o \ast \partial \tau.
\end{equation}
This condition means that our choices for higher-dimensional faces have to be consistent with what we have already committed to for lower-dimensional faces. We note that if $\cocyc{d}$ were just a simplicial complex, then for each $\tau$ there would be either a unique choice for $o\ast \tau$, or none at all, but for simplicial sets, there may be many choices. We also remark that $o\ast \tau$ may be a degenerate simplex, in the sense that $o$ already appears as a vertex of $\tau$.
Given a coning, one can contract $\mathcal{W}$ to $o$, by continuously moving each simplex $\tau$ of $\mathcal{W}$ towards $o$ inside $o\ast \tau$.
To perform the coning for $\mathcal{W}$, we need to fix a vertex $o$ in $\cocyc{d}$ and, for every $k$-simplex $A$ of the triangulation $\mathcal{T}$,
choose a $(k+1)$-simplex $o\ast \Delta_P(A)$ in such a way that the coning condition (\ref{eq:coning}) is satisfied. That is, inductively, for every $k$-simplex $A$ of $\mathcal{T}$, we have to choose a $(d-k-1)$-cochain $F_{Ao} \subseteq \binom{V}{d-k}$ such that
$$\delta F_{Ao} = F_A + \sum_{B} F_{Bo},$$
where the sum is over all $(k-1)$-faces $B$ of $A$.
Using the {cofilling} profile of $\Delta^{n-1}$, we show that we can do this if all $\|F_y\|$ are small, thus obtaining a contradiction.
We choose $o$ to be the vertex of $\cocyc{d}$ corresponding to the zero $d$-cocycle $0$ in $\Delta^{n-1}$. For every vertex $y$ of the triangulation,
let $F_y$ be the corresponding vertex in $\cocyc{d}$
($d$-cocycle in $\Delta^{n-1}$). We pick an arbitrary
\emph{minimal} $(d-1)$-cochain $F_{yo}$ with $\delta F_{yo} = F_y$ $(=F_y +0)$.
By minimality, we have $\|F_y\| \geq \varphi_d (\|F_{yo}\|)$.
Next, consider an edge $xy$ in the triangulation $\mathcal{T}$. The corresponding $(d-1)$-cochain $ F_{xy}$ satisfies
$\delta F_{xy}= F_x + F_y$ (Lemma~\ref{lem:intersection-duality}).
It follows that
$$\delta (F_{xy} + F_{xo} + F_{yo})= F_x+F_y+F_x+F_y=0.$$
Now we pick a minimal $(d-2)$-cochain $F_{xyo}$
such that $\delta F_{xyo} = F_{xy} + F_{xo} + F_{yo}$.
It follows that $\|F_{xy} + F_{xo} + F_{yo}\| \geq \varphi_{d-1}(\|F_{xyo}\|)$. Moreover, by our choice of the triangulation, we
have $\|F_{xy}\|=o(1)$. Thus, up to an $o(1)$ additive error, $\|F_{xo} + F_{yo}\| \geq \varphi_{d-1}(\|F_{xyo}\|)$, hence
$$\max\{\|F_{xo}\|,\| F_{yo}\|\} \geq \tfrac{1}{2} \varphi_{d-1}(\|F_{xyo}\|),$$
and so
$$\max\{ \|F_x\|, \|F_y\|\} \geq \varphi_d(\tfrac{1}{2} \varphi_{d-1}(\|F_{xyo}\|))$$
(where we suppress $o(1)$ additive error terms in both formulas).
\begin{figure}[tb]
\begin{center}
\includegraphics{coning}
\caption{\label{fig:coning} An illustration of the coning for the boundary of a triangle $xyz$ of $\mathcal{T}$. We use a simplified labeling of the resulting simplices in $\cocyc{d}$. For example, the $0$-chain $F_{xzo}$ by itself does not determine a triangle of $\cocyc{d}$, but only does so together with the $2$-cocycles $o, F_x, F_z$ and the $1$-chains $F_{xz}, F_{xo}, F_{zo}$ that are already determined. We also stress that the coning involves making choices, e.g. between $1$-cochains $F_{yo}$ and $F_{yo}'$ that determine different edges connecting $F_y$ and $o$, and once we have made a choice, we must stick to it when choosing higher-dimensional faces.}
\end{center}
\end{figure}
In the next step, we consider a triangle $xyz$ of $\mathcal{T}$.
By our assumption on $\mathcal{T}$, we have $\|F_{xyz}\|=o(1)$.
By the choice of $F_{xyo}$, $F_{yzo}$, and $F_{zxo}$,
and using Lemma~\ref{lem:intersection-duality}, we obtain
$$\delta (F_{xyz} + F_{xyo} + F_{yzo}+F_{zxo})=0,$$
and so we can choose a minimal $(d-3)$-chain $F_{xyzo}$ with
$$\delta F_{xyzo} = F_{xyz} + F_{xyo} + F_{yzo}+F_{zxo}.$$
See Figure~\ref{fig:coning} for an illustration. Reasoning as before, we obtain $\max\{ \|F_{xyo}\|, \| F_{yzo}\| , \|F_{zxo}\|\} \geq \tfrac{1}{3} \varphi_{d-2}(\|F_{xyzo}\|)$, and hence
$$\max\{ \|F_x\|, \|F_y\|, \|F_z\|\} \geq \varphi_d(\tfrac{1}{2} \varphi_{d-1}(\tfrac{1}{3} \varphi_{d-2}(\|F_{xyzo}\|))).$$
We can continue this argument by induction. In the final step, consider a $d$-face $A$ of the triangulation.
Corresponding to it, there is a $0$-cochain $F_A$. Moreover, for every $(d-1)$-face $B$ of $A$, we have already constructed a $0$-cochain $F_{Bo}$ such that
$$ \max_{x\in B} \|F_x\| \geq \varphi_{d}(\tfrac12 \varphi_{d-1}(\tfrac13 \varphi_{d-2}(\ldots \tfrac 1d \varphi_1(\|F_{Bo}\|)\ldots)))$$
and $\delta (F_A + \sum_B F_{Bo})=0$. Thus, $F_A + \sum_B F_{Bo}$ is a $0$-cocycle, and so
it must be either $0$ or all of $V$, the whole vertex set of $\Delta^{n-1}$.
In the former case, we can complete the coning for $A$ by setting
$F_{Ao}=0$. If we could do this for all $d$-faces $A$ of $\mathcal{T}$, we would
be able to complete the coning, thus reaching a contradiction.
Therefore, there must be a $d$-face $A$ such that $F_A + \sum_B F_{Bo}=V$. Since $\|F_A\|=o(1)$, it follows that for some $B\subset A$,
we must have $\|F_{Bo}\| \geq \tfrac{1}{d+1}$. Maximizing over all vertices of $\mathcal{T}$, we conclude
$$ \max_{x} \|F_x\| \geq \varphi_{d}(\tfrac12 \varphi_{d-1}(\tfrac13 \varphi_{d-2}(\ldots \tfrac 1d \varphi_1(\tfrac{1}{d+1})\ldots))).$$
This completes (our outline of) the proof of Proposition~\ref{prop:GromovFillingBaranyConstant} in the affine case.
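To get a feeling for the size of the resulting constant, one can evaluate this nested expression numerically. The following Python sketch is our own illustration (the names \texttt{composed\_bound} and \texttt{basic} are ours, not from the text): it computes the composition for an arbitrary profile, and plugging in only the basic cofilling bound $\varphi_i(\alpha)\ge\alpha$ (which is used later in the text for vertex links) collapses the bound to $1/(d+1)!$.

```python
def composed_bound(d, phi):
    """Evaluate phi_d(1/2 phi_{d-1}(1/3 phi_{d-2}(... 1/d phi_1(1/(d+1)) ...)))
    for a given profile phi(i, alpha)."""
    x = 1.0 / (d + 1)          # innermost argument 1/(d+1)
    for i in range(1, d + 1):  # apply phi_1, phi_2, ..., phi_d from the inside out
        x = phi(i, x)
        if i < d:
            x *= 1.0 / (d - i + 1)  # the factors 1/d, ..., 1/3, 1/2
    return x

# With the basic bound phi_i(alpha) = alpha, the composition collapses to 1/(d+1)!.
basic = lambda i, a: a
print(composed_bound(2, basic))  # 1/6 for d = 2
```

Any improvement of the profiles $\varphi_i$ beyond the basic bound feeds directly into this composition.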
\subsection{Gromov's Method in the Topological Setting}
\label{subsec:abstract}
As was mentioned in the introduction, the setting of an $n$-point set $P \subseteq {\mathbb{R}}^d$ considered in the previous subsections corresponds to an affine map from the $(n-1)$-simplex $\Delta^{n-1}$ into ${\mathbb{R}}^d$.
Gromov's method applies to much more general situations. More precisely, it allows the setting to be generalized in several ways,
as we now sketch.
\begin{enumerate}
\item The simplex $\Delta^{n-1}$ can be replaced by an arbitrary (finite) simplicial complex $X$.
What is needed are lower bounds on the {cofilling} profile of $X$, which is defined as follows:
For $k\geq 0$, let $X_k$ be the set of $k$-dimensional faces of $X$. For each $k$, we have
the coboundary operator of $X$ which maps a subset $E\subseteq X_k$ to
$$\delta E:=\{f\in X_{k+1}: f \textrm{ contains an odd number of } e\in E\}.$$
We can identify $E \subseteq X_k$ with a $0/1$-vector indexed by $X_k$, i.e., with an element of the vector space ${\mathbb{Z}}_2^{X_k}$ over the $2$-element field ${\mathbb{Z}}_2$. In more usual (co)homological terminology,
this latter vector space is denoted by $C^{k}(X;{\mathbb{Z}}_2)$ and called the space of \emph{$k$-dimensional cochains} (with ${\mathbb{Z}}_2$-coefficients), and $\delta$ is a linear map $C^k(X;{\mathbb{Z}}_2)\rightarrow C^{k+1}(X;{\mathbb{Z}}_2)$.
For a $k$-dimensional cochain $E$, we define $\|E\|:=|E|/|X_k|$ as the normalized support size of $E$ as before,
and
$$
\varphi^X_d(\alpha):=\min \{\|\delta E\|: E\in {\textstyle C^{d-1}(X;{\mathbb{Z}}_2)}
\mbox{ minimal}, \|E\|\ge \alpha\},
$$
where $E$ is minimal if $\|E\| \leq \|E+ \delta D\|$ for all $D\in C^\ast(X;{\mathbb{Z}}_2)$.
The space $\cocyc{d}(X)$ of $d$-dimensional cocycles of $X$ is defined completely analogously to the case of the simplex.
\item The target space ${\mathbb{R}}^d$ can be replaced by an arbitrary triangulated $d$-dimensional manifold $Y$, or, even more generally, a ${\mathbb{Z}}_2$-homology manifold. What we need is to be able to compute \emph{intersection numbers} (modulo 2) between $k$-dimensional chains and $(d-k)$-dimensional chains. Equivalently, we need that Poincar\'e duality (with ${\mathbb{Z}}_2$ coefficients) holds in~$Y$.
\item Instead of affine maps, we can allow for
arbitrary continuous maps $T\colon X\rightarrow Y$. One can think of $T$ as a piecewise linear map in general position; this is not necessary for the argument, but may help the reader's intuition. For instance, for such a map, the $T$-image of any $(d-k)$-simplex $\sigma$ of $X$ intersects any $k$-simplex $A$ of $Y$ in a finite number of points in the relative interior of $A$, and the (algebraic, ${\mathbb{Z}}_2$) intersection number of $T(\sigma)$ and $A$ is defined as the number of intersection points modulo~$2$.
\item Finally, instead of cohomology with ${\mathbb{Z}}_2$-coefficients, one can work with
other coefficient rings in the argument. Potentially, this might lead
to stronger bounds. We will not discuss this generalization, since
on the one hand, it is straightforward, and on the other hand, we would
have to talk about orientations everywhere.
\end{enumerate}
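For concreteness, the ${\mathbb{Z}}_2$ coboundary operator from item 1 can be explored by brute force on tiny examples. The following Python sketch is ours (we encode a cochain as a set of frozensets, and take the full simplex as the complex); it computes $\delta E$ and checks $\delta\delta=0$.

```python
from itertools import combinations

def coboundary(E, V):
    """Z_2 coboundary over the full simplex on vertex set V: all (s+1)-subsets
    of V containing an odd number of members of E (all members are s-subsets)."""
    s = len(next(iter(E)))
    return {f for f in map(frozenset, combinations(V, s + 1))
            if sum(e <= f for e in E) % 2 == 1}

V = range(4)
E = {frozenset({0, 1})}            # a single edge of the simplex on 4 vertices
dE = coboundary(E, V)              # the two triangles {0,1,2} and {0,1,3}
assert len(dE) == 2
assert coboundary(dE, V) == set()  # delta delta = 0 over Z_2
```

The normalized size $\|E\|$ is then simply \texttt{len(E)} divided by the number of $s$-subsets of $V$.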
The basic structure of the proof remains the same, but we have to change the definition of the cochains $F_A$ appropriately, as follows: If $A$ is a $k$-dimensional simplex in general position, one defines $F_A$ as the set of $(d-k)$-simplices of $X$ whose images under $T$ have odd intersection number with $A$.
Another way of interpreting this construction is as follows: Every $k$-simplex $A$ defines, via Poincar\'e duality, a $(d-k)$-cochain in $Y$, and this $(d-k)$-cochain pulls back under $T$ to a cochain $F_A$ in $X$.
The basic identity
\begin{equation}
\label{eq:boundary-coboundary-intersection}
\delta F_A=F_{\partial A}:=F_{A_0} + F_{A_1} + \ldots + F_{A_k}
\end{equation}
still holds. (This just says that on the level of chains and cochains, Poincar\'e duality exchanges boundary and coboundary operators).
In other words, every $k$-simplex $A$ in general position defines a $k$-simplex $\Delta_T(A)$ in $\cocyc{d}(X)$ via the intersection number construction. If $Y$ is compact, i.e., has a finite triangulation, then the subspace $\mathcal{W}=\mathcal{W}(T) \subseteq \cocyc{d}(X)$ is defined as the formal ${\mathbb{Z}}_2$-linear combination of the $d$-simplices $\Delta_T(A)$, where $A$ ranges over all $d$-simplices in the triangulation of $Y$. In other words, a $d$-simplex of $\cocyc{d}(X)$ belongs to $\mathcal{W}$ if it equals $\Delta_T(A)$ for an odd number of $d$-simplices $A$ of the triangulation of $Y$.
An equivalent way of defining $\mathcal{W}$ is as follows: The basic identity (\ref{eq:boundary-coboundary-intersection}) implies that the map $A\mapsto \Delta_T(A)$ commutes with the boundary operator, i.e., it is a chain map (with ${\mathbb{Z}}_2$-coefficients) from $Y$ to $\cocyc{d}(X)$. Thus, this map also induces a map in homology.
Let $[Y]$ denote the fundamental $d$-dimensional homology class (over ${\mathbb{Z}}_2$) of $Y$. If we fix a triangulation of $Y$, we can take a representative $d$-cycle for $[Y]$ that is the formal ${\mathbb{Z}}_2$-linear combination of all the $d$-simplices of the triangulation. (If $Y$ is not compact, as in the case $Y={\mathbb{R}}^d$, then we have to work with homology with infinite supports.) Then $\mathcal{W}=\mathcal{W}(T)$ is defined as the image of $[Y]$ under the map $\Delta_T$, i.e., formally, it is a $d$-dimensional cycle (with ${\mathbb{Z}}_2$-coefficients) in $\cocyc{d}(X)$.
As before, the Almgren--Dold--Thom theorem implies that this cycle is homologically nontrivial, so it cannot be contracted.
On the other hand, if every point of $Y$ were covered by the $T$-images of ``too few'' $d$-simplices of $X$, then the same combinatorial coning as before would yield a contradiction (with the precise meaning of ``too few'' depending on the {cofilling} profile of $X$). Again, the combinatorial coning can also be viewed as an actual topological contraction of $\mathcal{W}$, considered as a subspace of $\cocyc{d}(X)$, to the point $o$.
If $Y$ is unbounded, or if we are guaranteed that there is a point of $Y$ that is not covered by the image of $T$, then the same argument as before shows that
$$ \max_{y} \|F_y\| \geq \varphi^X_{d}(\tfrac12 \varphi^X_{d-1}(\tfrac13 \varphi^X_{d-2}(\ldots \tfrac 1d \varphi^X_1(\tfrac{1}{d+1})\ldots))),$$
where the maximum is over all vertices $y$ of the triangulation of $Y$.
(If $Y$ is bounded and every point in $Y$ is covered by the $T$-images of some $d$-simplices of $X$, then we have to choose the apex $o$ of the coning differently (not as the zero cocycle), which yields a weaker bound.)
In particular, if $X$ is a \emph{${\mathbb{Z}}_2$-cohomological expander}, i.e., if $\varphi_i^X(\alpha)/\alpha$ is bounded away from zero for all $i$, then there is some point of $Y$ that is covered by a positive fraction of all $d$-simplices of $X$ (with a constant depending on the cofilling profile of $X$).
We remark that for the coning argument, we again choose the triangulation $\mathcal{T}$ to be sufficiently fine with respect to the map $T$, i.e., we assume that for every simplex $A$ of the triangulation of dimension $\dim A >0$, we have $\|F_A\|=o(1)$. If the map $T$ is very complicated (i.e., if we need a very fine subdivision of $X$ to approximate $T$ by a PL map), then the triangulation $\mathcal{T}$ may require a huge number of simplices, so it is important that the whole argument is completely independent of the number of simplices in the triangulation $\mathcal{T}$.
\section{The {cofilling} profile in the \boldmath$d=2$ case}
\label{sec:cofilling-2}
Here we prove Theorem~\ref{thm:cofilling-2}, which asserts that
$\varphi_2(\alpha) \ge f(\alpha):=\tfrac34\left(1-\sqrt{1-4\alpha}\right)(1-4\alpha)$.
Since $d=2$, we deal with the size of the coboundary
for an edge set $E$ of a graph. The minimality of $E$ means that
\emph{no edge cut has density more than~$\frac12$} in this graph;
in other words, for every $S\subseteq V$, the number of edges
of $E$ going between $S$ and $V\setminus S$ is at most
$\frac 12|S|\cdot|V\setminus S|$.
As we have remarked at the end of Section~\ref{s:seidel},
the minimality of $E$ is a complicated property, computationally
hard to test, for example. So we will only use it for
singleton sets $S$.
Thus, we will actually show that $\|\delta E\|\ge
f(\alpha)$ (ignoring terms tending to $0$ as $n\to\infty$)
for every $E$ with $\|E\|\ge\alpha$ and
$\deg_E(v)\le \frac n2$ for all $v\in V$ (where $\deg_E(v)$ denotes
the number of neighbors of $v$ in the graph $(V,E)$).
Before proceeding with the proof of Theorem~\ref{thm:cofilling-2},
let us remark that this relaxation (i.e., ignoring
all non-singleton $S$) already prevents us from obtaining
a tight bound for $\varphi_2$. For example, let us partition $V$ into
sets $V_1,V_2,V_3$, where $|V_1|=\alpha n$ and $|V_2|=\frac n2$,
and let $E$ consist of all edges connecting
$V_1$ to $V_2$.
This $E$ is not minimal, but it does satisfy the degree condition.
One easily checks that $\|E\|=\alpha$ and $\|\delta E\|=3 \alpha(\frac12-\alpha)=\frac 32\alpha-3\alpha^2$, which is smaller than the suspected
tight bound for $\varphi_2$ from Proposition~\ref{p:ubbb}.
However, at least the leading term is correct.
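This example is small enough to check by brute force. The following Python sketch (our own code) takes one concrete size, $n=12$ with $|V_1|=2$, $|V_2|=6$, $|V_3|=4$: since $E$ is bipartite it has no triangles, and every $f\in\delta E$ consists of an edge of $E$ plus a vertex of $V_3$.

```python
from itertools import combinations

n = 12
V1, V2, V3 = range(0, 2), range(2, 8), range(8, 12)  # |V2| = n/2, |V1| = alpha*n
E = {frozenset({u, v}) for u in V1 for v in V2}      # complete bipartite V1-V2

# delta E: triples containing an odd number (here: exactly one) of edges of E
dE = [f for f in combinations(range(n), 3)
      if sum(e <= set(f) for e in E) % 2 == 1]

# each f in delta E is an edge of E together with a vertex of V3
assert len(dE) == len(E) * len(V3)   # 12 * 4 = 48
```

Normalizing $|\delta E|=|E|\cdot|V_3|$ by $\binom n3$ recovers the density $\frac32\alpha-3\alpha^2$ up to $O(1/n)$ terms.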
Now we proceed with the proof. Given $E$, let $m_i$
denote the number of triples $f\in {V\choose 3}$ that contain exactly
$i$ edges of $E$, $i=1,2,3$; we have $|\delta E|= m_1+m_3$ by
definition. An easy inclusion-exclusion consideration
shows that
\begin{equation}\label{e:pieform}
|\delta E| = (n-2)|E|-\sum_{v\in V}\deg_E(v)(\deg_E(v)-1)+ 4t,
\end{equation}
where $t$ denotes the number of triangles in the graph $(V,E)$.
Indeed, to check (\ref{e:pieform}), it suffices to discuss
how many times a triple $f\in{V\choose 3}$ containing
exactly $i$ edges $e\in E$ contributes to the right-hand side,
$i=1,2,3$. For $i=1$, such an $f$ is only counted once in the term
$(n-2)|E|$, which counts the number of ordered pairs $(e,v)$,
where $e\in E$ and $v\in V\setminus e$. A triple $f$ with $i=2$
is counted twice in the term $(n-2)|E|$, but it is also counted twice
in $\sum_{v\in V}\deg_E(v)(\deg_E(v)-1)$, and thus its total contribution is
zero. Finally, for $i=3$, where $f$ induces a triangle,
it is counted three
times in $(n-2)|E|$, six times in $\sum_{v\in V}\deg_E(v)(\deg_E(v)-1)$,
and four times in $4t$, so altogether it contributes $+1$
as it should.
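The identity (\ref{e:pieform}) can also be confirmed numerically on a random graph; the following Python sketch (our own code) compares the brute-force count of $\delta E$ over all triples with the inclusion-exclusion expression.

```python
import random
from itertools import combinations

random.seed(1)
n = 10
V = range(n)
E = {frozenset(p) for p in combinations(V, 2) if random.random() < 0.4}

deg = {v: sum(v in e for e in E) for v in V}
t = sum(all(frozenset(p) in E for p in combinations(f, 2))
        for f in combinations(V, 3))                      # number of triangles
delta_E = sum(sum(frozenset(p) in E for p in combinations(f, 2)) % 2 == 1
              for f in combinations(V, 3))                # triples with 1 or 3 edges

rhs = (n - 2) * len(E) - sum(d * (d - 1) for d in deg.values()) + 4 * t
assert delta_E == rhs                                     # the identity (e:pieform)
```

The equality holds exactly for every graph, not just in expectation, as the case analysis above shows.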
As the next simplification, we will ignore the triangles,
as well as the difference between $\deg_E(v)(\deg_E(v)-1)$
and $\deg_E(v)^2$, and we will
use (\ref{e:pieform}) in the form
\begin{equation}\label{e:truncpie}
|\delta E| \ge (n-2)|E|-\sum_{v\in V}\deg_E(v)^2.
\end{equation}
Since $|E|$ is given, it remains to maximize $\sum_{v\in V}\deg_E(v)^2$,
which is done in the next lemma.
\begin{lemma}\label{l:lobo2}
Let $\alpha\le \frac 14$.
Let $(V,E)$ be a graph on $n$ vertices
with $|E|=\alpha{n\choose 2}$ and $\deg_E(v)\le \frac n2$ for all
$v\in V$. Then
$$
\sum_{v\in V}\deg_E(v)^2 \le \left(\tfrac\sigma4
(1+2\sigma-4\sigma^2)+o(1)\right)n^3,
$$
where $\sigma=(1-\sqrt{1-4\alpha})/2$.
\end{lemma}
\heading{Proof. } Let $(V,E)$ be the given graph with $n$ vertices and $|E|=\alpha\binom{n}{2}$ edges.
By a sequence of transformations that do not change the number of edges and that do not
decrease the sum of squared degrees, we convert it to a particular
form.
Let us number the vertices $v_1,\ldots,v_n$ so that
$d_1\ge d_2\ge \cdots\ge d_n$, where $d_i:=\deg_E(v_i)$.
We note that for $d_i\ge d_j$, we have $(d_i+1)^2+(d_j-1)^2 > d_i^2+d_j^2$,
and thus a transformation that changes $d_i$ to $d_i+1$ and $d_j$ to $d_j-1$ and
leaves the rest of the degrees unchanged increases the sum of squared degrees.
For an edge $e=\{v_i,v_j\}$ with $i<j$, we call $v_i$
the \emph{left end} of $e$ and $v_j$ the \emph{right end}.
\begin{enumerate}
\item[(i)]
{\em Let $k$ be such that $d_1=d_2=\cdots=d_k=\lfloor \frac n2\rfloor$,
while $d_{k+1}<\lfloor \frac n2\rfloor$. Then we may assume
that the left ends of all edges are among $v_1,\ldots,v_{k+1}$.}
Indeed, if there is an edge $\{v_i,v_j\}$ with $k+1<i<j$,
we can replace it with the edge $\{v_{k+1},v_j\}$.
This increases $\sum_{i=1}^{k+1} d_i$ (and possibly increases
$k$), so after finitely many
steps, we achieve the required condition.
\item[(ii)]
{\em We may assume that every two vertices among $v_1,\ldots,v_k$ are connected.}
Proof: We may assume that (i) holds.
Since we assume $\alpha\le \frac14$, we have $k\le \lfloor \frac n2\rfloor$.
Let us suppose $\{v_i,v_j\}\not\in E$, $1\le i<j\le k$.
Since $d_i=d_j=\lfloor \frac n2\rfloor$, each of $v_i,v_j$
is connected to at least two vertices among $v_{k+1},\ldots,v_n$.
So we may assume $\{v_i,v_{\ell}\}\in E$, $\{v_j,v_m\}\in E$,
$\ell,m\ge k+1$, $\ell\ne m$. We also have $\{v_{\ell},v_m\}\not\in E$
(according to (i)). Thus, we can delete the edges
$\{v_i,v_{\ell}\}$ and $\{v_j,v_{m}\}$ and add the edges
$\{v_i,v_j\}$ and $\{v_{\ell},v_m\}$.
\immfig{eswitch}
This increases the number of edges on $\{v_1,\ldots,v_k\}$,
which cannot decrease by the transformations in (i), so after
finitely many steps, we achieve both (i) and (ii).
\item[(iii)] {\em We may assume that the right neighbors of each $v_i$, $1\le i\le k$,
form a contiguous interval $v_{i+1},v_{i+2},\ldots,v_{\lfloor n/2\rfloor+1}$.}
Indeed, if $v_i$ is connected to some $v_{\ell+1}$ and not to $v_{\ell}$,
$\ell>k$, we can replace the edge $\{v_i,v_{\ell+1}\}$
with $\{v_i,v_{\ell}\}$. This increases the sum of squared vertex degrees,
and thus after finitely many steps, we can achieve (i)--(iii).
\end{enumerate}
A graph satisfying (i)--(iii) is almost completely determined
by its number of edges, except possibly for the neighbors
of the vertex $v_{k+1}$. Each of $v_1,\ldots,v_k$
is connected to the first $\frac n2$ vertices,
and there are no other edges, except possibly for those
incident to $v_{k+1}$. Counting the left ends of edges,
we have $\alpha{n\choose2}=|E|=kn/2-{k\choose2}+O(n)$.
Writing $k=\sigma n$, we obtain $\sigma=(1-\sqrt{1-4\alpha})/2+o(1)$.
The sum of the squared degrees is then
$k\frac{n^2}4+(\frac n2-k)k^2+O(n^2)=\left(
\frac\sigma 4(1+2\sigma-4\sigma^2)+o(1)\right)n^3$.
Lemma~\ref{l:lobo2} is proved.
\ProofEndBox\smallskip
\medskip
Theorem~\ref{thm:cofilling-2} now follows immediately from Lemma~\ref{l:lobo2} using (\ref{e:truncpie}). \ProofEndBox\smallskip
\heading{A promising relaxation? }
As we have seen, in order to establish the tightness
of the upper bound from Proposition~\ref{p:ubbb},
one has to use the minimality condition in a stronger way
than we did in the above proof. On the other hand, it
seems possible that the other relaxation we have made in that
proof, namely, ignoring triangles, need not cost us anything.
In other words, while
in $\delta E$ we count triples containing 1 or 3 edges,
perhaps the example in Proposition~\ref{p:ubbb}
also minimizes, over all minimal $E$ of a given size,
the number of triples containing exactly one edge.
This might be easier to prove, and triangles would be dealt with
implicitly, since the example has no triangles.
\heading{On Gromov's ``$\frac23$-bound''. }
Sec.~3.7 of~\cite{Gromov:SingularitiesExpandersTopologyOfMaps2-2010}
claims the bound
$\|(\partial^1)^{-1}_{\rm fil}\|(\beta)\le\frac2{3(1-\sqrt{\beta})}$
(which would yield
$
\varphi_2(\alpha) \ge \tfrac 32\alpha -(\tfrac 32)^{3/2}\alpha^{3/2}+\tfrac98\alpha^2-O(\alpha^{5/2})
$).
The argument as given doesn't seem to work, however
(although it is also possible that we misunderstood something).
It is supposed to be based on an inequality
(the fifth displayed formula
in Sec.~3.7, derived from the Loomis--Whitney inequality),
which seems correct and is re-stated for $i=1$ two lines below.
In the language of graphs, the $i=1$ case is equivalent to
$\sum_{\{u,v\}\in{V\choose 2}} \deg_E(u)\deg_E(v)\le 2\frac {n-1}n|E|^2$.
However, the proof of the ``$\frac23$-bound'' below
seems to employ a similar inequality for $i=2$, which would
claim that $\sum_{\{u,v\}\in{V\choose 2}}\sqrt{\deg_E(u)\deg_E(v)}\le
2|E|^{3/2}$. This is false, though (a graph as in the upper bound
example, i.e., a complete bipartite graph with parts of very
unequal size, is a counterexample). Probably this kind of proof
can be saved, since it seems sufficient to take the
last sum over $\{u,v\}\in E$, instead of all pairs, and then
such an inequality is apparently true (but doesn't seem
to follow from Loomis--Whitney in a direct way).
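The failure of the $i=2$-type inequality is easy to check concretely. Taking the star $K_{1,m}$ as an extreme case of a complete bipartite graph with very unequal parts, the following Python sketch (our own code) verifies for $m=100$ that the $i=1$ inequality holds while the $\sqrt{\cdot}$-version over all pairs fails.

```python
from itertools import combinations
from math import sqrt

m = 100                      # star K_{1,m}: center 0 joined to leaves 1..m
n = m + 1
deg = [m] + [1] * m          # degrees: center has degree m, every leaf degree 1
num_edges = m

pairs = list(combinations(range(n), 2))
lhs_i1 = sum(deg[u] * deg[v] for u, v in pairs)          # i = 1 left-hand side
lhs_i2 = sum(sqrt(deg[u] * deg[v]) for u, v in pairs)    # i = 2 analogue

assert lhs_i1 <= 2 * (n - 1) / n * num_edges ** 2   # the (true) i = 1 inequality
assert lhs_i2 > 2 * num_edges ** 1.5                # the i = 2 analogue fails
```

Here the leaf-leaf pairs alone contribute $\binom m2\approx m^2/2$ to the $\sqrt{\cdot}$-sum, which already exceeds $2|E|^{3/2}=2m^{3/2}$ for large $m$.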
\section{The {cofilling} profile for \boldmath$d>2$}
\label{sec:cofilling-ge2}
In this section we prove Theorem~\ref{t:3ub}, a lower bound
on $\varphi_3$. We begin with an auxiliary fact concerning
links of vertices.
\begin{obs}\label{o:minlink}
If $E\subseteq {V\choose d}$ is a minimal
system, then $\mathop {\rm lk}\nolimits(v,E)$ is also minimal, for every vertex $v\in V$.
\end{obs}
\heading{Proof. }
For a set system $F$ on $V$, let us write $F_{\setminus v}:=
\{s\in F:v\not\in s\}$.
We want to verify that for each $C\subseteq {V\choose d-2}$,
$\mathop {\rm lk}\nolimits(v,E)$ contains at most half of $\delta C$. The sets of
$\mathop {\rm lk}\nolimits(v,E)$ do not contain $v$, and so only sets of $(\delta C)_{\setminus v}$
may belong to $\mathop {\rm lk}\nolimits(v,E)$. But we have $(\delta C)_{\setminus v} =
(\delta C_{\setminus v})_{\setminus v}$, and so we may restrict our attention
to systems $C$ whose sets all avoid~$v$.
Let $D:=C*v:=\{c\cup \{v\}:c\in C\}$. We have $\delta D=
((\delta C)_{\setminus v})*v$. Now $E$ contains at most half of the sets
of $\delta D$ by minimality, so $\mathop {\rm lk}\nolimits(v,E)$ contains
at most half of the sets of $(\delta C)_{\setminus v}$ as claimed.
\ProofEndBox\smallskip
\bigskip
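The identity $\delta D=((\delta C)_{\setminus v})*v$ used in the proof can be confirmed by brute force on small instances. The following Python sketch is our own encoding (cochains as sets of frozensets, coboundary taken over the full simplex).

```python
from itertools import combinations

def coboundary(F, V):
    """Z_2 coboundary over the full simplex on V: (s+1)-subsets of V
    containing an odd number of members of F (all members are s-subsets)."""
    s = len(next(iter(F)))
    return {g for g in map(frozenset, combinations(V, s + 1))
            if sum(f <= g for f in F) % 2 == 1}

V = range(6)
v = 5
C = {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 3})}  # pairs avoiding v

D = {c | {v} for c in C}                                  # the join D = C * v
lhs = coboundary(D, V)                                    # delta D
rhs = {g | {v} for g in coboundary(C, V) if v not in g}   # ((delta C) \ v) * v
assert lhs == rhs
```

For instance, $\{0,1,4\}\in\delta C$ avoids $v$, and correspondingly $\{0,1,4,5\}\in\delta D$.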
In the proof of Theorem~\ref{t:3ub}, we consider a minimal
system $E\subseteq {V\choose 3}$, $\|E\|=\alpha$,
and we want to show that it has a large coboundary.
Conceptually, the proof splits into two cases:
The first one deals with the situation where most of the
triples $e\in E$ are incident to vertices of
very large degrees. The second one concerns the situation
where the maximum vertex degree is not much larger than the
average vertex degree.
\heading{Dealing with high-degree vertices. }
We begin with the first case, with a significant share
of high-degree vertices. Here we rely on the basic cofilling bound,
which we are going to apply to the links
of high-degree vertices (the links are minimal by
Observation~\ref{o:minlink}). From the sets in the coboundaries of the links
we are going to obtain sets of $\delta E$; some care is needed
to avoid counting a single $f\in \delta E$ several times.
It is interesting to note that in this way we get a lower bound for
$\varphi_3$ that has a correct limit behavior as $\alpha\to 0$,
although for the links we employ
the basic cofilling bound, which is far from correct
for small $\alpha$'s. This can be explained as follows:
for those systems $E$ that are near-extremal for $\varphi_3$,
the relevant vertex links are so large that the basic cofilling bound
is almost tight for them.
We present the first part of the proof
for $d$-tuples instead of triples, since specializing
to triples would not make the argument any simpler.
\begin{lemma}\label{l:highdeg}
Let $E$ be a minimal
system of $d$-tuples on $V=\{v_1,v_2,\ldots,v_n\}$,
let $\alpha:=\|E\|$,
let $r=\beta n$ be a parameter, and let
$E_{\rm hi}\subseteq E$ consist of those $e\in E$ that
contain at least one vertex among $v_1,\ldots,v_r$.
Let $F_{\rm hi}$ be the set of those $f\in\delta E$ that contain
at least one vertex among $v_1,\ldots,v_r$.
Then
$$
\|F_{\rm hi}\| \ge \frac {d+1}d \alpha_{\rm hi} -
\frac{(d+1)d}2 \beta^2-(d+1)\alpha\beta-
O(n^{-1}),
$$
where $\alpha_{\rm hi}:=\|E_{\rm hi}\|$.
\end{lemma}
\heading{Proof. } Let $v\in V$ be a vertex, and let
us write $L_v:=\mathop {\rm lk}\nolimits(v,E)$.
Then, by Observation~\ref{o:minlink}, $L_v$ is minimal,
and thus $\|\delta L_v\|\ge \|L_v\|$ by the basic bound on
$\varphi_{d-1}$. In terms of cardinalities, we
can write this inequality as
$|\delta L_v|\ge \frac{n-d+1}d|L_v|$.
Let us consider some $e\in \delta L_v$. We observe that
if $v\not\in e$ and $e\not\in E$, then $e\cup\{v\}\in\delta E$.
There are exactly $|L_v|$ sets $e\in \delta L_v$ that contain $v$,
and so the number of $f\in \delta E$ that contain $v$
is at least
\begin{equation}\label{e:overcount}
\frac {n-d+1}d |L_v|-|L_v|-|E|.
\end{equation}
To prove the lemma, we would like to sum this bound over
$v=v_1,v_2,\ldots,v_r$, but in this way, one $f\in\delta E$
might be counted several times. In order to avoid this,
for each $v_i$ we will count only those $f\in \delta E$ that
contain $v_i$ \emph{and avoid $v_1,\ldots,v_{i-1}$}.
In this way, from the term (\ref{e:overcount}) for $v=v_i$ we
need to subtract the number of $f\in \delta E$ that contain
both $v_i$ and some $v_j$, $j<i$. A trivial upper bound
on this number is $(i-1){n-2\choose d-1}$.
Hence
\begin{eqnarray*}
|F_{\rm hi}|&\ge& \sum_{i=1}^r\left((\tfrac{n-d+1}d-1)|L_{v_i}|- (i-1){n-2\choose d-1}
-|E|\right)\\
&\ge&
\frac nd \biggl(\sum_{i=1}^r |L_{v_i}|\biggr)- \frac{\beta^2}2 n^2
{n\choose d-1} - \beta n|E|-
O(n^d).
\end{eqnarray*}
Now $\sum_{i=1}^r |L_{v_i}|\ge |E_{\rm hi}|$.
We finally divide by $n\choose d+1$ in order to pass to the normalized
size measure $\|.\|$, and we obtain the lemma.
\ProofEndBox\smallskip
\heading{Dealing with low-degree vertices. }
Next, we will show that if the vertex degrees of $E$
do not exceed the average vertex degree by too much,
then $\delta E$ is even significantly larger than in
the upper bound example from Proposition~\ref{p:ubbb}.
Here it is important for the argument that we deal with triples.
In this case, we are going to count the sets of the coboundary
using two-term inclusion-exclusion, similar to the case $d=2$
in the preceding section. This leads to bounding from above
the sum of squares of the degrees of \emph{pairs} of vertices.
For this, by a suitable double counting, we use the assumption of low
\emph{vertex} degrees, and also the fact that the degrees
of all pairs are bounded by $\frac n2$, which follows
from the minimality of~$E$.
\begin{lemma}\label{l:low3}
Let $d=3$ and let $E\subseteq {V\choose 3}$.
Suppose that $\deg_E(p)\le \frac n2$ for
each pair $p=\{u,v\}$ of vertices, and that
$\deg_E(v)\le \sigma{n\choose 2}$ for each vertex~$v$.
Then
$$
\|\delta E\|\ge (2-O(\sigma^{1/3}))\|E\|.
$$
\end{lemma}
\heading{Proof. } Similar to the graph case, we will count
only those $4$-tuples in $\delta E$ that contain exactly
one $e\in E$. For each $e\in E$, we count $n-d$ potential
$4$-tuples, and we subtract $1$ for each $e'\in E$ sharing
a pair with $e$. Thus,
$$
|\delta E|\ge (n-d)|E|-\sum_{p\in {V\choose2}} \deg_E(p)^2.
$$
We need to estimate the second term.
We choose a threshold parameter $\tau$, which we think
of as being much larger than $\sigma$ but still small,
and we call a pair $p$ \emph{heavy} if $\deg_E(p)\ge \tau n$,
and \emph{light} otherwise.
Each $e\in E$ shares a light pair with at most $3\tau n$ other $e'\in E$,
and so the contribution of light pairs is bounded as follows:
$$
\sum_{p \mathrm{~light}} \deg_E(p)^2 \le 3\tau n |E|.
$$
Let $E_k$ be the set of $e\in E$ with exactly $k$ heavy pairs,
$k=0,1,2,3$. Since each heavy pair has degree at most $\frac n2$,
reasoning as above, we can bound the contribution of the heavy
pairs as
$$
\sum_{p \mathrm{~heavy}} \deg_E(p)^2 \le \frac n2 (|E_1|+2|E_2|+3|E_3|).
$$
We aim to show that $E_2\cup E_3$ is small.
Let us consider a vertex $v\in V$ and see how many
$e\in E_2\cup E_3$ can be incident to it. More precisely,
we want to estimate $m_v$, the number of $e\in E_2\cup E_3$
that have two heavy pairs incident
to~$v$.
Let $\deg_E(v)=\sigma_v {n\choose 2}$,
where $\sigma_v\le\sigma$, and let us consider the graph $G_v:=(V,\mathop {\rm lk}\nolimits(v,E))$.
The heavy pairs incident to $v$ correspond to the \emph{heavy vertices}
of $G_v$, i.e., vertices of degree at least $\tau n$, and $m_v$ is
the number of pairs in $G_v$ connecting
two heavy vertices. By simple counting, $G_v$ has at most
$\frac{\sigma_v n}\tau$ heavy vertices, and thus
$m_v\le (\frac{\sigma_v n}\tau)^2/2 \le \frac{\sigma}{\tau^2} |\mathop {\rm lk}\nolimits(v,E)|$.
Summing over all vertices $v$, we have
$|E_2\cup E_3|\le \frac{3\sigma}{\tau^2} |E|$.
Altogether we thus have
$$
\sum_{p\in {V\choose 2}} \deg_E(p)^2\le 3\tau n |E| +
\frac n2 |E|+ O(\tfrac {\sigma}{\tau^2})n|E|.
$$
Finally, setting $\tau := \sigma^{1/3}$ and normalizing
by ${n\choose 3}$, we obtain the claim of the lemma.
\ProofEndBox\smallskip
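The counting inequality at the start of the proof can be checked empirically. The following sketch (ours, with illustrative parameters) verifies $|\delta E|\ge (n-3)|E|-\sum_p\deg_E(p)^2$ on a random system of triples, where $\delta E$ is taken to be the set of $4$-subsets of $V$ containing an odd number of members of $E$.

```python
import itertools
import random

# Sanity check for d = 3: |delta E| >= (n-3)|E| - sum_p deg_E(p)^2,
# where delta E is the set of 4-subsets of V containing an odd
# number of triples of E. Parameters here are illustrative.
random.seed(0)
n = 10
V = range(n)
E = set(random.sample(list(itertools.combinations(V, 3)), 12))

deg = {}  # degrees of pairs of vertices
for e in E:
    for p in itertools.combinations(e, 2):
        deg[p] = deg.get(p, 0) + 1

delta_E = [
    q for q in itertools.combinations(V, 4)
    if sum(t in E for t in itertools.combinations(q, 3)) % 2 == 1
]
lhs = len(delta_E)
rhs = (n - 3) * len(E) - sum(d * d for d in deg.values())
assert lhs >= rhs
```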
\begin{proof}[Proof of Theorem~\ref{t:3ub}]
This is a straightforward consequence of Lemmas~\ref{l:highdeg} and~\ref{l:low3}. We consider a minimal $E\subseteq {V\choose 3}$
with $\|E\|=\alpha$. We enumerate the vertices of $V$
as $v_1,\ldots,v_n$ in the order of decreasing degrees.
For a suitable parameter $\beta>\alpha$ (depending on $\alpha$),
we set $r:=\beta n$, we let $E_{\rm hi}$ be those $e\in E$
that contain a vertex among $v_1,\ldots,v_r$, and let
$E_{\rm lo}:=E\setminus E_{\rm hi}$.
We have $\sum_{i=1}^n\deg_E(v_i) = 3|E|$, and so
the degrees of the vertices $v_r,v_{r+1},\ldots,v_n$
are bounded from above by $\frac{3|E|}r$.
Hence Lemma~\ref{l:low3} with $\sigma:= \alpha/\beta$
gives $\|\delta E_{\rm lo}\|\ge (2-O((\alpha/\beta)^{1/3})) \alpha_{\rm lo}$,
where $\alpha_{\rm lo}:=\|E_{\rm lo}\|$.
Let $F_{\rm hi}$ be the set of those $f\in \delta E$ that
contain a vertex among $v_1,\ldots,v_r$; then Lemma~\ref{l:highdeg}
yields $\|F_{\rm hi}\|\ge \frac 43\alpha_{\rm hi} - O(\beta^2)$,
where $\alpha_{\rm hi}:=\|E_{\rm hi}\|$
(the $\alpha\beta$ term in the lemma is insignificant since
we assume $\alpha\le\beta$).
We now observe that if some $f\in\delta E_{\rm lo}$ does not
contain any of $v_1,\ldots,v_r$, then it belongs to
$\delta E$ (since it cannot contain
any $e\in E_{\rm hi}$). The number of $f\in \delta E_{\rm lo}$
that do contain some vertex among $v_1,\ldots,v_r$
is bounded by $r |E_{\rm lo}|$. Altogether we thus have
\begin{eqnarray*}
\|\delta E\| &\ge & \|F_{\rm hi}\|+\|\delta E_{\rm lo}\|-O(\beta\alpha_{\rm lo})\\
&\ge &\frac 43\alpha_{\rm hi} - O(\beta^2)
+\left(2-O((\alpha/\beta)^{1/3})\right) \alpha_{\rm lo}\\
&\ge& \frac 43\alpha -O(\beta^2) + \left(\frac 23 -
O((\alpha/\beta)^{1/3})\right)\alpha_{\rm lo}.
\end{eqnarray*}
If we set $\beta := C\alpha$ for a sufficiently large constant $C$,
then the term $O((\alpha/\beta)^{1/3})$ becomes smaller than
$\frac23$, and the whole term involving
$\alpha_{\rm lo}$ is nonnegative.
Thus, we are left with $\|\delta E\|\ge \frac 43\alpha -O(\alpha^2)$
as claimed.
\end{proof}
\section{Pagodas and a Better Bound on $\boldsymbol{c_3}$}
\label{sec:c3}
We recall the lower bound for the B\'ar\'any constant $c_d$
from Proposition~\ref{prop:GromovFillingBaranyConstant}:
\begin{equation}\label{e:grr}
c_d\ge \varphi_{d}(\tfrac12 \varphi_{d-1}(\tfrac13 \varphi_{d-2}(\ldots \tfrac 1d \varphi_1(\tfrac1{d+1})\ldots))).
\end{equation}
As a case study, we will concentrate on $c_3$,
the first open case. The various numerical bounds are as follows.
\begin{itemize}
\item Gromov's bound, obtained from (\ref{e:grr}) via
the basic cofilling bound and the precise value of $\varphi_1$,
is
$$
c_3\ge \frac1{16}= 0.0625.
$$
\item For comparison, the best lower bound achieved by other methods,
due to Basit et al.~\cite{Basit-al}, is
$$c_3\ge 0.05448.$$
\item With \emph{maximal optimism}, assuming that the upper bounds
of Proposition~\ref{p:ubbb} are tight for $\varphi_2$
and $\varphi_3$, (\ref{e:grr}) would give
$$
c_3\conjge 0.0877695.
$$
\item However, the best upper bound, which we suspect to be the
truth, only gives
$$
c_3\le 4!/4^4=\tfrac 3{32}= 0.09375.
$$
\end{itemize}
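The arithmetic behind these constants is easy to reproduce. In the sketch below we assume (as the derivation later in this section suggests) that the basic cofilling bound amounts to $\varphi_d(\alpha)\ge\alpha$ for $d\ge 2$, while $\varphi_1(\alpha)=2\alpha(1-\alpha)$ is exact; under this assumption, the chain (\ref{e:grr}) for $d=3$ indeed yields $\frac1{16}$.

```python
from fractions import Fraction as F
from math import factorial

# Gromov's chain (e:grr) for d = 3, assuming the basic cofilling
# bound is phi_d(alpha) >= alpha for d >= 2 (so phi_2 and phi_3 act
# as the identity here) and phi_1(alpha) = 2 alpha (1 - alpha) exactly.
def phi1(a):
    return 2 * a * (1 - a)

inner = F(1, 3) * phi1(F(1, 4))   # (1/3) * phi_1(1/4) = (1/3) * 3/8 = 1/8
gromov = F(1, 2) * inner          # phi_2 leaves 1/8 unchanged; halve to 1/16
assert gromov == F(1, 16)         # phi_3 leaves 1/16 unchanged
assert float(gromov) == 0.0625

# the upper bound 4!/4^4 = 3/32
upper = F(factorial(4), 4 ** 4)
assert upper == F(3, 32) and float(upper) == 0.09375
```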
Here we will show how the lower bound for $c_3$ can be improved
\emph{beyond} (\ref{e:grr}). The specific number we achieve is not
very impressive: $c_3\ge 0.06332$. However, it is important
that the proof relies only on the basic cofilling bounds on
$\varphi_2$ and $\varphi_3$. \emph{If} better lower bounds
on $\varphi_2$ or $\varphi_3$ could be proved in suitable ranges
of $\alpha$, which would improve the lower bound (\ref{e:grr}),
we would automatically get a further (slight) improvement
from the proof below; in this sense, the method is ``orthogonal''
to bounds on the cofilling profiles.
We begin by returning to the argument in Section~\ref{s:coning}
that proves (\ref{e:grr}), and for simplicity, we specialize to the $d=3$ case.
In that argument, we considered a tetrahedron $wxyz$ in the triangulation
$\mathcal{T}$, with the corresponding set systems (cochains)
$F_A\subseteq {V\choose 5-|A|}$ for all nonempty sets $A\subseteq S:=\{w,x,y,z\}$.
We have $\|F_A\|=o(1)$ unless $|A|=1$.
We also produced the set systems $F_{Ao}\subseteq {V\choose 4-|A|}$
satisfying the relations
\begin{equation}\label{e:eqq1}
\delta F_{Ao}=F_A+\sum_B F_{Bo},
\end{equation}
where the sum is over all $B\subset A$ of size $|A|-1$.
Moreover, the crux of the argument was the existence
of a tetrahedron $wxyz$ for which we also have
\begin{equation}\label{e:eqq2}
F_S+\sum_{B\in {S\choose 3}} F_{Bo}=V.
\end{equation}
We introduce the notation $X\approx Y$ for sets $X,Y$ of
$k$-tuples, meaning that $|X+Y|=o(n^{k})$, or in other words, $\|X+Y\|=o(1)$.
We eliminate the sets $F_A$ with $|A|\ne 1$
from our considerations, since they are all small;
then (\ref{e:eqq1}) and (\ref{e:eqq2}) become $\approx$
relations among the various $F_{Ao}$.
We introduce a definition reflecting these relations;
we chose to call the resulting object, a structure
made of cochains of various dimensions, a \emph{pagoda}.
In order to make the notation more intuitive,
we distinguish sets of various cardinalities
by different letters (with indices), writing $V$ for
sets of vertices (0-cochains), $E$ for sets of edges (1-cochains),
$F$ for sets of triples (2-cochains), and $G$ for sets of $4$-tuples.
We also change the
indexing of the sets into an (isomorphic but) more convenient one.
This leads to the following definition.
\begin{sloppypar}
\begin{definition}
A ($3$-dimensional) \emph{pagoda} over $V$ consists of
vertex sets $V_1,V_2,V_3,V_4\subseteq V$, edge sets
$E_{12},E_{13},\ldots,E_{34}\subseteq{V\choose 2}$,
sets $F_{123}$, $F_{124}$, $F_{134}$, $F_{234} \subseteq{V\choose 3}$
of triples, and a set $G=G_{1234}\subseteq {V\choose 4}$ of $4$-tuples
(the \emph{top} of the pagoda).
The sets $V_i$, $E_{ij}$ and $F_{ijk}$ are \emph{minimal}
and they satisfy the following relations
(here $i,j,k$ denote mutually distinct indices):
$$
V_1+V_2+V_3+V_4\approx V,\
\delta V_i\approx \sum_j E_{ij},\
\delta E_{ij} \approx \sum_k F_{ijk},\
\delta F_{ijk}\approx G.
$$
\end{definition}
\end{sloppypar}
Thus, the argument of Proposition~\ref{prop:GromovFillingBaranyConstant}
shows that $c_3\ge c^\textup{top}_3\ge \liminf_{|V|\to\infty}\min \|G\|$,
where the minimum is over tops $G$ of pagodas over~$V$.
We know of \emph{no example} of a pagoda
whose top is smaller than the best known upper bound for $c_3$,
i.e., $\|G\|=\tfrac 3{32}$.
So it is possible that the value of $c_3$ can be determined precisely
using a combinatorial analysis of pagodas. Unfortunately,
we have only a much weaker result.
\begin{prop}\label{p:c3bitbetter} The top $G$ of every pagoda
satisfies, for all $n$ sufficiently large, $\|G\|\ge\frac 1{16}+\eps_0$,
for a positive constant $\eps_0> 0.00082$. Consequently,
$c_3\ge c^\textup{top}_3\ge \frac 1{16}+\eps_0> 0.06332$.
\end{prop}
A similar, but
more complicated, argument probably also works for higher-dimensional
pagodas.
\heading{Remark. }
Developing the ideas from the forthcoming
proof of Proposition~\ref{p:c3bitbetter}, one can set up a rather
complicated optimization problem, whose optimum provides
a lower bound for $c^\textup{top}_3$. However, solving this (non-convex)
optimization problem rigorously
seems rather difficult. Numerical computations
indicate that the optimum is approximately
$0.0703125$. This would be an improvement much more significant
than the one in Proposition~\ref{p:c3bitbetter}, but still
far from the suspected true value of~$c_3$.
\heading{Proof of Proposition~\ref{p:c3bitbetter}. }
We begin with an outline of the argument.
We consider a pagoda
with $\|G\|\le \frac 1{16}+\eps_0$, where $\eps_0\ge 0$
is an as yet unspecified
(small) parameter. Reasoning essentially as in the
derivation of (\ref{e:grr}) and
using the basic bound for $\varphi_3$ and $\varphi_2$
and the true value of $\varphi_1$, we find that
no $\|E_{ij}\|$ and no $\|\delta E_{ij}\|$
may be significantly larger than $\frac 18$, and
no $V_i$ may occupy much more than $\frac14$
of the vertex set. Hence the $V_i$ are almost disjoint
and have sizes close to $\frac14$. Then one can argue that almost
all edges of $E_{12}$, say, have to go between $V_1$ and $V_2$,
as in the following picture:
\immfig{triangletrick0}
Then, however, the coboundary $\delta E_{12}$ contains almost all
triples of the form indicated in the picture, and thus
$\|\delta E_{12}\|$ is close to $\frac3{16}$, rather than
to $\frac18$, which is a contradiction showing that
$\eps_0$ cannot be taken arbitrarily small.
Now we proceed with a more detailed and more quantitative argument.
Since $\|G\|\le \frac1{16}+\eps_0$ and $G\approx\delta F_{ijk}$,
we have $\|F_{ijk}\|\le \frac 1{16}+\eps_0$ according to the basic
bound for $\varphi_3$ (ignoring, for simplicity, the $o(1)$ terms
coming from $\approx$). Further, since each $\delta E_{ij}$
is a sum of two $F_{ijk}$'s, the basic bound for $\varphi_2$
gives
$$
\|E_{ij}\|\le \tfrac 18+2\eps_0,\ \ \ 1\le i<j\le 4.
$$
Similarly, since each $\delta V_i$ is the sum of three
$E_{ij}$'s, and since $\varphi_1(\alpha)=2\alpha(1-\alpha)$,
we get
\begin{equation}\label{e:Viub}
\|V_i\|\le \tfrac 14+\eps_1,
\end{equation}
where $\eps_1=\frac14\left(1-\sqrt{1-48\eps_0}\right)=
6\eps_0+72\eps_0^2+O(\eps_0^3)$
is given by the equation $\varphi_1(\frac14+\eps_1)=3(\frac 18+2\eps_0)$.
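The closed form for $\eps_1$ and its expansion can be verified numerically; a minimal check (ours):

```python
import math

# phi_1 is known exactly: phi_1(alpha) = 2 alpha (1 - alpha)
phi1 = lambda a: 2 * a * (1 - a)

def eps1(e0):
    # the root of phi_1(1/4 + eps1) = 3 * (1/8 + 2*e0)
    return (1 - math.sqrt(1 - 48 * e0)) / 4

# defining equation holds
for e0 in (1e-3, 1e-4, 1e-5):
    assert abs(phi1(0.25 + eps1(e0)) - 3 * (0.125 + 2 * e0)) < 1e-12

# series expansion: eps1 = 6*e0 + 72*e0^2 + O(e0^3)
e0 = 1e-6
assert abs(eps1(e0) - (6 * e0 + 72 * e0 ** 2)) < 1e-13
```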
Next, let $E^{\rm dbl}_{ij}$
be the set of edges $e\in E_{ij}$ that belong to both
$\delta V_i$ and $\delta V_j$, and let $E^{\rm sgl}_{ij}$
consist of those $e\in E_{ij}$ that belong to exactly
one of $\delta V_i,\delta V_j$.
Using $\delta V_i\approx \sum_{j} E_{ij}$ and summing
over $i=1,\ldots,4$, we have
\begin{equation}\label{e:sgldbl}
\sum_{i=1}^4\|\delta V_i\|\le 2\sum_{i<j}\|E^{\rm dbl}_{ij}\|
+ \sum_{i<j}\|E^{\rm sgl}_{ij}\|.
\end{equation}
For every $i$, we have, using (\ref{e:Viub}),
$$
\|V_i\|\ge 1-\sum_{j\ne i}\|V_j\|\ge
1-3(\tfrac 14+\eps_1)\ge \tfrac 14-3\eps_1.
$$
Hence the left-hand side of (\ref{e:sgldbl}) is at least
$4\varphi_1(\frac14-3\eps_1)=4(\frac 38-\eps_2)$,
where $\eps_2$
is defined by the last equality (we get
$\eps_2=18\eps_0+864\eps_0^2+O(\eps_0^3)$).
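Similarly, $\eps_2$ has the exact closed form $\eps_2=3\eps_1+18\eps_1^2$, since $\varphi_1(\frac14-3\eps_1)=\frac38-3\eps_1-18\eps_1^2$; the following sketch (ours) confirms this along with the stated expansion.

```python
import math

phi1 = lambda a: 2 * a * (1 - a)
eps1 = lambda e0: (1 - math.sqrt(1 - 48 * e0)) / 4
# eps2 is defined by 4 * phi1(1/4 - 3*eps1) = 4 * (3/8 - eps2)
eps2 = lambda e0: 0.375 - phi1(0.25 - 3 * eps1(e0))

e0 = 1e-6
# exact closed form: eps2 = 3*eps1 + 18*eps1^2
assert abs(eps2(e0) - (3 * eps1(e0) + 18 * eps1(e0) ** 2)) < 1e-14
# series expansion: eps2 = 18*e0 + 864*e0^2 + O(e0^3)
assert abs(eps2(e0) - (18 * e0 + 864 * e0 ** 2)) < 1e-12
```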
Further, for every $i<j$ we have $\|E^{\rm dbl}_{ij}\|+\|E^{\rm sgl}_{ij}\|
\le \|E_{ij}\|\le\frac 18+2\eps_0$. Summing this inequality over
the six pairs $i,j$ and subtracting the result from (\ref{e:sgldbl}), we arrive at
$$
4(\tfrac 38-\eps_2)-\tfrac 68-6\cdot 2\eps_0\le \sum_{i<j}\|E^{\rm dbl}_{ij}\|.
$$
Hence there are $i$ and $j$ with
$$\|E^{\rm dbl}_{ij}\|\ge\tfrac 18-
\tfrac23\eps_2-2\eps_0.
$$
Let us fix the notation so that $i=1$ and $j=2$ have this
property.
Using $\|E^{\rm dbl}_{12}\|+\|E^{\rm sgl}_{12}\|\le \frac 18+2\eps_0$
again, we further obtain
\begin{equation}\label{e:Esgl}
\|E^{\rm sgl}_{12}\|\le 4\eps_0+\tfrac23\eps_2.
\end{equation}
Let us divide the edges in $E^{\rm dbl}_{12}$ into two subsets
$A$ and $B$, where $A$ consists of the edges $e\in E^{\rm dbl}_{12}$
that have one endpoint in $V_1\setminus V_2$ and the
other in $V_2\setminus V_1$. Then each edge in $B:=E^{\rm dbl}_{12}
\setminus A$ necessarily connects a vertex of
$I:=V_1\cap V_2$ to a vertex in $V\setminus (V_1\cup V_2)$.
The plan is now to show that, for $\eps_0$ sufficiently small,
the edges of $A$ ``contribute'' many triples to $\delta E_{12}$.
First we check that $B$ is small, and we begin by bounding
the size of $I$: We have $V=V_1\cup\cdots\cup V_4$,
by inclusion-exclusion we get $1\le \|V_1\|+\|V_2\|-\|I\|+
\|V_3\|+\|V_4\|$, and thus $\|I\|\le \sum_{i=1}^4\|V_i\|-1
\le 4\eps_1$. By the minimality of $E_{12}$, each vertex in $I$
has degree at most $\frac n2$, and so
$\|B\|\le \|I\|\le 4\eps_1$. Therefore,
\begin{equation}\label{e:Abound}
\|A\|\ge \|E^{\rm dbl}_{12}\|-\|B\|\ge \tfrac18-\tfrac 23\eps_2-2\eps_0-4\eps_1.
\end{equation}
Now let us consider an edge $e=\{u,v\}\in A$ and a vertex
$w\in V\setminus (V_1\cup V_2)$. The triple $\{u,v,w\}$ belongs
to $\delta E_{12}$ unless one of the edges $\{u,w\}$ and
$\{v,w\}$ lies in $E_{12}$; see the following illustration:
\immfig{triangletrick}
The (normalized) number of
``candidate triples'' $\{u,v,w\}$ with $\{u,v\}\in A$
and $w\in V\setminus (V_1\cup V_2)$ is
$3\|A\|\cdot \|V\setminus (V_1\cup V_2)\|$. Moreover, if
one of the edges $\{u,w\},\{v,w\}$ lies in $E_{12}$, then
it lies in $E^{\rm sgl}_{12}$. At the same time,
a given edge $e'\in E^{\rm sgl}_{12}$ may ``kill''
at most $\max(|V_1\setminus V_2|,|V_2\setminus V_1|)\le
(\frac 14+\eps_1)n$ candidate triples $\{u,v,w\}$.
Hence
$$
\|\delta E_{12}\|\ge 3\|A\|(\tfrac 12-2\eps_1) - 3\|E^{\rm sgl}_{12}\|(\tfrac 14+\eps_1)\ge \tfrac 3{16}-f(\eps_0),
$$
where, employing (\ref{e:Esgl}) and (\ref{e:Abound}), we calculate that
$f(\eps_0)= 6\eps_0+\tfrac{27}4\eps_1-24\eps_1^2+\tfrac 32\eps_2-
2\eps_1\eps_2=\tfrac{147}2\eps_0+702\eps_0^2+O(\eps_0^3)$.
Since we have assumed $\|\delta E_{12}\|\le\tfrac 18+2\eps_0$,
we finally obtain $f(\eps_0)+2\eps_0\ge\tfrac1{16}$.
The numerical lower bound in the proposition follows from this.
\ProofEndBox\smallskip
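The numerical claim can be reproduced from the exact formulas for $\eps_1$, $\eps_2$ and $f$: solving $f(\eps_0)+2\eps_0=\frac1{16}$ by bisection (a sketch, ours) gives $\eps_0\approx 0.00082$ and hence the stated bounds.

```python
import math

def eps1(e0):
    # root of phi_1(1/4 + eps1) = 3 * (1/8 + 2*e0)
    return (1 - math.sqrt(1 - 48 * e0)) / 4

def eps2(e0):
    # exact closed form from 4*phi_1(1/4 - 3*eps1) = 4*(3/8 - eps2)
    e1 = eps1(e0)
    return 3 * e1 + 18 * e1 ** 2

def f(e0):
    # the exact expression for f(eps_0) from the proof
    e1, e2 = eps1(e0), eps2(e0)
    return 6 * e0 + 27 * e1 / 4 - 24 * e1 ** 2 + 1.5 * e2 - 2 * e1 * e2

# solve f(e0) + 2*e0 = 1/16 by bisection on [0, 0.01]
lo, hi = 0.0, 0.01
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) + 2 * mid < 1 / 16:
        lo = mid
    else:
        hi = mid
eps0_star = lo
assert eps0_star > 0.00082            # hence eps_0 > 0.00082
assert 1 / 16 + eps0_star > 0.06332   # hence c_3 > 0.06332
```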
\bibliographystyle{alpha}
| {
"timestamp": "2011-02-18T02:01:12",
"yymm": "1102",
"arxiv_id": "1102.3515",
"language": "en",
"url": "https://arxiv.org/abs/1102.3515",
"abstract": "A result of Boros and Füredi ($d=2$) and of Bárány (arbitrary $d$) asserts that for every $d$ there exists $c_d>0$ such that for every $n$-point set $P\\subset \\R^d$, some point of $\\R^d$ is covered by at least $c_d{n\\choose d+1}$ of the $d$-simplices spanned by the points of $P$. The largest possible value of $c_d$ has been the subject of ongoing research. Recently Gromov improved the existing lower bounds considerably by introducing a new, topological proof method. We provide an exposition of the combinatorial component of Gromov's approach, in terms accessible to combinatorialists and discrete geometers, and we investigate the limits of his method. In particular, we give tighter bounds on the \\emph{cofilling profiles} for the $(n-1)$-simplex. These bounds yield a minor improvement over Gromov's lower bounds on $c_d$ for large $d$, but they also show that the room for further improvement through the {\\cofilling} profiles alone is quite small. We also prove a slightly better lower bound for $c_3$ by an approach using an additional structure besides the {\\cofilling} profiles. We formulate a combinatorial extremal problem whose solution might perhaps lead to a tight lower bound for $c_d$.",
"subjects": "Combinatorics (math.CO); Computational Geometry (cs.CG)",
"title": "On Gromov's Method of Selecting Heavily Covered Points"
} |
https://arxiv.org/abs/2009.05100 | The Complete Positivity of Symmetric Tridiagonal and Pentadiagonal Matrices | We provide a decomposition that is sufficient in showing when a symmetric tridiagonal matrix $A$ is completely positive. Our decomposition can be applied to a wide range of matrices. We give alternate proofs for a number of related results found in the literature in a simple, straightforward manner. We show that the cp-rank of any irreducible tridiagonal doubly stochastic matrix is equal to its rank. We then consider symmetric pentadiagonal matrices, proving some analogous results, and providing two different decompositions sufficient for complete positivity. We illustrate our constructions with a number of examples. | \section{Preliminaries}
All matrices herein will be real-valued. Let $A$ be an $n\times n$ symmetric tridiagonal matrix:
$$A=\begin{pmatrix}a_1&b_1&&&& \\ b_1 & a_2 & b_2 &&&\\ &\ddots&\ddots&\ddots&&& \\& &\ddots&\ddots&\ddots&& \\ &&&b_{n-3}&a_{n-2}&b_{n-2}& \\&&&&b_{n-2}&a_{n-1}&b_{n-1} \\&&&&&b_{n-1}&a_n \end{pmatrix}.$$
We are often interested in the case where $A$ is also doubly stochastic, in which case we have
$a_{i}=1-b_{i-1}-b_{i}$ for $i=1, 2,\ldots,n$, with the convention that $b_0=b_n=0$. It is easy to see that if a tridiagonal matrix is doubly stochastic, it must be symmetric, so the additional hypothesis of symmetry can be dropped in that case.
We are interested in positivity conditions for symmetric tridiagonal and pentadiagonal matrices. A stronger condition than positive semidefiniteness, known as complete positivity, has applications in a variety of areas of study, including block designs, maximin efficiency-robust tests, modelling DNA evolution, and more
\cite[Chapter 2]{CP}, as well as recent use in mathematical optimization and quantum information theory (see \cite{Nathaniel} and the references therein).
With this motivation in mind, we study the positivity (in various forms) of symmetric tridiagonal and pentadiagonal matrices, where we highlight the important case when the matrix is also doubly stochastic.
%
Although it is NP-hard to determine if a given matrix is completely positive \cite{NPhard}, in Section~\ref{sec:td_CP} we provide a construction that is sufficient to show that a given symmetric tridiagonal
matrix is completely positive. We provide a number of examples illustrating the utility of this construction.
%
The literature on completely positive matrices often considers the cp-rank, or the factorization index, of a completely positive matrix, which is the minimal number of rank-one matrices in the decomposition showing complete positivity; e.g. Chapter 3 of \cite{CP} is devoted to this topic. We show that for irreducible tridiagonal doubly stochastic matrices, our decomposition is minimal. It should be noted that it is known that acyclic doubly non-negative matrices are completely
positive \cite{BH}, and this result has been generalized to bipartite doubly non-negative matrices \cite{BG}. Our Proposition~\ref{PD_CP} is an independent discovery of a special case of this result, using a simpler method of proof.
As a natural extension of the tridiagonal case, we generalize many of our results to symmetric pentadiagonal matrices in Section~\ref{sec:penta}. While a construction analogous to that for the tridiagonal setting works in the pentadiagonal setting, we also provide an alternate, more involved, construction that works in many cases when the original construction does not.
\section{Tridiagonal matrices}\label{sec:td}
\subsection{Basic Properties of tridiagonal doubly stochastic matrices}\label{sec:td_basic}
Tridiagonal doubly stochastic matrices arise in the literature in a number of areas, in particular with respect to the study of Markov chains and in majorization theory. The facial structure of the set of all tridiagonal doubly stochastic matrices, which is a subpolytope of the Birkhoff polytope of $n\times n$ doubly stochastic matrices, is explored in \cite{Dahl}
with a connection to majorization. In \cite{Niezgoda}, the author develops relations involving sums of Jensen functionals to compare tuples of vectors; a tridiagonal doubly stochastic matrix is used to demonstrate their results.
In the study of mixing rates for Markov chains the assumption of symmetry in the transition matrix is sometimes seen, as in \cite{Boyd2009}. Other times, the Markov chain is assumed to be a path \cite{CihanAkar, Boyd2006} leading to a tridiagonal transition matrix. The properties of symmetric doubly stochastic matrices are explored in \cite{PereiraVali}, where majorization relations are given for the eigenvalues. Properties related to the facial structure of the polytope of tridiagonal doubly stochastic matrices can be found in \cite{FonsecaMarques, CostaFonseca}. In the former, alternating parity sequences are used to express the number of vertices of a given face, and in the latter, the number of $q$-faces of the polytope for arbitrary $n$ is determined for $q= 1,2,3$.
One can ask under what conditions a tridiagonal doubly stochastic matrix $A$ is positive semidefinite. It is known that a symmetric diagonally dominant matrix $A$ with non-negative diagonal entries is positive semidefinite. Thus, in our case, if
\begin{equation}\label{eq1} b_{i-1}+b_{i}\leq 0.5 \end{equation} for all $i=1, 2,\ldots,n$, with $b_0=b_{n}=0$, then $A$ is diagonally dominant, and hence $A$ is positive semidefinite. So \eqref{eq1} is sufficient for positive semidefiniteness of a tridiagonal doubly stochastic matrix. However, the following matrix is a tridiagonal doubly stochastic matrix that is positive semidefinite, showing that \eqref{eq1} is not necessary:
\begin{eqnarray*}
\begin{pmatrix}
0.6&0.4& 0& 0\\
0.4& 13/30& 1/6& 0\\
0& 1/6& 13/30& 0.4\\
0& 0& 0.4 &0.6
\end{pmatrix}.
\end{eqnarray*}
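This can be confirmed numerically; a quick check (ours) that the matrix is doubly stochastic, violates \eqref{eq1} in its middle rows, and is nonetheless positive semidefinite (it is in fact singular):

```python
import numpy as np

M = np.array([
    [0.6, 0.4, 0.0, 0.0],
    [0.4, 13/30, 1/6, 0.0],
    [0.0, 1/6, 13/30, 0.4],
    [0.0, 0.0, 0.4, 0.6],
])

# doubly stochastic: all rows and columns sum to 1
assert np.allclose(M.sum(axis=0), 1) and np.allclose(M.sum(axis=1), 1)
# violates condition (eq1) in row 2: b_1 + b_2 = 0.4 + 1/6 > 0.5
assert 0.4 + 1/6 > 0.5
# yet positive semidefinite (smallest eigenvalue is 0 up to rounding)
assert np.linalg.eigvalsh(M).min() > -1e-12
```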
We now present some results related to the eigenvalues of tridiagonal doubly stochastic matrices, which can be deduced from standard facts in the literature.
%
We note that since tridiagonal doubly stochastic matrices are symmetric, their eigenvalues are real. Further, since the matrices are doubly stochastic, they always have $1$ as an eigenvalue (at least once), with corresponding eigenvector $\textbf{1}$ (the all-ones vector).
If $\lambda$ is an eigenvalue of a stochastic matrix, it is well known that $|\lambda|\leq 1$. In our context, we note that if $\lambda$ is an eigenvalue of a tridiagonal doubly stochastic matrix $A$, then $-1\leq \lambda \leq 1$. The fact that $\lambda\in \mathbb R$ follows immediately from the fact that a tridiagonal doubly stochastic matrix is symmetric.
\begin{lemma}\label{Gersh2}
Let $A$ be a tridiagonal doubly stochastic matrix. The eigenvalues of $A$ all lie in $[-1,1]$.
\end{lemma}
In fact, we can say something stronger: The set of all possible eigenvalues of tridiagonal doubly stochastic matrices is $[-1,1]$.
\begin{proposition}\label{prop:eigval}
Let $n\geq 2$. Then $\lambda$ is an eigenvalue of an $n \times n$ tridiagonal doubly stochastic matrix if and only if $\lambda\in [-1,1]$.
\end{proposition}
\begin{proof}
Suppose $\lambda\in [-1,1]$ is arbitrary. The $2\times 2$ tridiagonal doubly stochastic matrix $A=\begin{pmatrix} a & b \\b & a\end{pmatrix}$
with $a+b=1$, $a\in [0,1]$, has eigenvalues $1$ and $2a-1$. So choose $a$ such that $2a-1=\lambda$, i.e.\ $a=(\lambda+1)/2$. Then $\lambda$ is an eigenvalue of the constructed matrix $A$. For $n>2$, note that we can construct an $n\times n$ tridiagonal doubly stochastic matrix via $A \oplus B$, where $B$ is an $(n-2) \times (n-2)$ tridiagonal doubly stochastic matrix, and the constructed matrix $A \oplus B$ has $\lambda$ as an eigenvalue (if $v$ is an eigenvector corresponding to $\lambda$ for the matrix $A$, then $v\oplus \mathbf{0}_{n-2}$, where $\mathbf{0}_{n-2}$ is the $(n-2)$-dimensional zero vector,
is an eigenvector corresponding to $\lambda$ for $A\oplus B$). Thus one can construct a tridiagonal doubly stochastic matrix of arbitrary size having the prescribed eigenvalue $\lambda$.
The converse follows from Lemma~\ref{Gersh2}.
\end{proof}
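The construction in the proof is easy to exercise; a minimal sketch (the helper name is ours, and the padding block $B$ is taken to be the identity matrix, which is trivially tridiagonal doubly stochastic):

```python
import numpy as np

def ds_tridiag_with_eigenvalue(lam, n):
    """n x n tridiagonal doubly stochastic matrix with eigenvalue lam,
    built as in the proof above: a 2x2 block with a = (lam+1)/2,
    padded with an (n-2) x (n-2) identity block."""
    a = (lam + 1) / 2
    A = np.eye(n)
    A[:2, :2] = [[a, 1 - a], [1 - a, a]]
    return A

A = ds_tridiag_with_eigenvalue(-0.3, 5)
# doubly stochastic and has -0.3 among its eigenvalues
assert np.allclose(A.sum(axis=0), 1) and np.allclose(A.sum(axis=1), 1)
assert min(abs(np.linalg.eigvalsh(A) - (-0.3))) < 1e-12
```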
\subsection{Complete Positivity}\label{sec:td_CP}
\begin{definition}\label{def:cp}
An $n\times n$ real matrix $A$ is \emph{completely positive} if it can be decomposed as $A=VV^T$, where $V$ is an $n\times k$ entrywise non-negative matrix, for some $k$.
\end{definition}
Equivalently, one can define $A$ to be completely positive provided $A=\sum_{i=1}^kv_iv_i^T$, where $v_i$ are entrywise non-negative vectors (namely, the columns of $V$).
Completely positive matrices are positive semidefinite and symmetric entrywise non-negative; such matrices are called \emph{doubly non-negative}. Doubly non-negative matrices are completely positive for $n\leq 4$, while doubly non-negative matrices that are not completely positive exist for all $n\geq 5$; see \cite{Berman88} and the references therein. In other words, the set of all completely positive matrices forms a strict subset of the set of all doubly non-negative matrices for $n\geq 5$.
We outline below a construction producing the completely positive decomposition $A=\sum_iv_iv_i^T$, which can be found by assuming that, since $A$ is tridiagonal, each $v_i$ should have only two nonzero entries (the $i$-th and $(i+1)$-th entries), and brute-force solving for these entries from the equation $A=VV^T$; these values can also be found somewhat indirectly, assuming our initial condition is zero, through a construction of pairwise completely positive matrices in \cite[Theorem 4]{Nathaniel} by taking both matrices to be $A$.
For a given $n\times n$ symmetric tridiagonal matrix $A$, define the set $\{v_i\}_{i=0}^n$ of $n+1$\, $n$-dimensional vectors where the $j$-th component of $v_i$, denoted $(v_i)_j$, is recursively defined by
\begin{equation}\label{veqn}
(v_i)_j= \begin{cases}
\sqrt{a_i-((v_{i-1})_i)^2} & j=i \\
b_i/(v_i)_i & j=i+1 \\
0 & otherwise
\end{cases}
\end{equation}
with initial condition $v_{0}=\begin{pmatrix} a_0&0& \dots & 0\end{pmatrix}^T$. This construction yields
\begin{eqnarray*}
v_{1}&=&\begin{pmatrix} \sqrt{a_1-a_0^2}& \frac{b_1}{\sqrt{a_1-a_0^2}}&0& \dots & 0\end{pmatrix}^T\\
v_2&=&\begin{pmatrix} 0 & \sqrt{a_2-\frac{b_1^2}{a_1-a_0^2}}& \frac{b_2}{\sqrt{a_2-\frac{b_1^2}{a_1-a_0^2}}}& 0 & \dots & 0\end{pmatrix}^T\\
v_3&=&\begin{pmatrix}0 &0&\sqrt{a_3-\frac{b_2^2}{a_2-\frac{b_1^2}{a_1-a_0^2}}}& \frac{b_3}{\sqrt{a_3-\frac{b_2^2}{a_2-\frac{b_1^2}{a_1-a_0^2}}}}& 0 & \dots & 0 \end{pmatrix}^T, \textnormal{ etc.}
\end{eqnarray*}
The constant $a_0$ must satisfy $a_0\geq 0$, however it is worth noting that certain values of $a_0$ (the most obvious case being $a_0^2 = a_1$) can lead to some of the $v_i$ vectors being ill-defined.
\begin{proposition}\label{tridiag_decomp}
Let $A$ be an $n\times n$ symmetric tridiagonal matrix and $a_0\geq 0$. Then $A=\sum_{i=0}^nv_iv_i^T$ with the $v_i$ as defined in Equation~\eqref{veqn}. If, in addition, all the components of each $v_i$ are well-defined and non-negative, then $A$ is completely positive.
\end{proposition}
We note that if $A$ is entrywise non-negative, which includes the case of $A$ being doubly stochastic, then whenever the entries of the $v_i$ are all real, they are automatically non-negative.
\begin{proof}
Consider a symmetric tridiagonal matrix $A$ such that the vectors in Equation~\eqref{veqn} are well-defined. Let $V_i=v_iv_i^T$ for $i=0,1,\dots,n$ and $\tilde{A}=\sum_{i=0}^nV_i$. We wish to show that $\tilde{A}=A$. From the definition of the $v_i$ given in Equation~\eqref{veqn}, each $V_i$ is tridiagonal with only up to four nonzero entries and so $\tilde{A}$ itself is tridiagonal. Now, consider a component $\tilde{a}_{j,j+1}$ of $\tilde{A}$, where $j=1,2,\dots,n-1$. The only $V_i$ that will have a nonzero entry in the $(j,j+1)$-th component will be $V_j$ as $v_j$ is the only vector with both the $j$ and $(j+1)$-th components being nonzero. The $(j,j+1)$-th component of $V_j$ is in fact $b_j$ and so $\tilde{a}_{j,j+1}=b_j$. By symmetry, we also have $\tilde{a}_{j+1,j}=b_{j}$. Now consider a component on the diagonal of $\tilde{A}$: $\tilde{a}_{jj}$, where $j=1,2,\dots,n$. The only $V_i$ that have nonzero entries in the $(j,j)$-th component will be $V_j$ and $V_{j-1}$, with respective values $a_j-((v_{j-1})_j)^2$ and $((v_{j-1})_j)^2$. Clearly then $\tilde{a}_{jj}=a_j$ for $j=1,2,\dots,n$. Therefore $A=\tilde{A}=\sum_{i=0}^nv_iv_i^T$; i.e.\ $A$ is completely positive.
\end{proof}
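The recursion of Equation~\eqref{veqn} is straightforward to implement; the sketch below (function name and error handling are ours) builds the columns $v_0,\ldots,v_n$ of $V$ and checks the decomposition on the first matrix of the example that follows.

```python
import numpy as np

def tridiag_cp_factor(a, b, a0=0.0):
    """Attempt the decomposition A = V V^T of Equation (veqn).
    a: diagonal (length n), b: off-diagonal (length n-1), a0 >= 0.
    Returns V of shape n x (n+1), whose columns are v_0, ..., v_n;
    raises ValueError when some v_i is ill-defined."""
    n = len(a)
    V = np.zeros((n, n + 1))
    V[0, 0] = a0            # v_0 = (a0, 0, ..., 0)^T
    prev = a0               # (v_{i-1})_i
    for i in range(1, n + 1):
        rad = a[i - 1] - prev ** 2
        if rad < 0:
            raise ValueError("ill-defined: negative radicand at i=%d" % i)
        d = rad ** 0.5
        V[i - 1, i] = d     # (v_i)_i
        if i < n:
            if d == 0:
                raise ValueError("ill-defined: division by zero at i=%d" % i)
            prev = b[i - 1] / d
            V[i, i] = prev  # (v_i)_{i+1}
    return V

# the first 5x5 matrix from the example below, with a0 = 0
a = [0.75, 0.5, 0.5, 0.5, 0.75]
b = [0.25] * 4
V = tridiag_cp_factor(a, b)
A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
assert np.allclose(V @ V.T, A) and (V >= 0).all()
```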
Method 5.3 of \cite{DD} equates showing that a circular matrix $A$ is completely positive with finding a solution to an optimization problem using a recurrence relation involving $a_i$ and $b_i$, similar to Equation~\eqref{eq1}.
\begin{example}\label{5x5}
Consider the $5\times 5$ case, which is the first (in terms of smallest dimension) non-trivial case. For the matrices
\begin{eqnarray*}
A=\begin{pmatrix}
3/4&1/4&0 & 0 & 0\\
1/4&1/2&1/4& 0 & 0\\
0 & 1/4 & 1/2& 1/4 & 0\\
0 & 0 & 1/4& 1/2 & 1/4\\
0 & 0 & 0 & 1/4 & 3/4
\end{pmatrix} \quad \textnormal{and}\quad B= \begin{pmatrix}
7/9&2/9&0 & 0 & 0\\
2/9&5/9&2/9 & 0 & 0\\
0 & 2/9 & 7/9& 0 & 0\\
0 & 0 & 0& 8/9 & 1/9\\
0 & 0 & 0 & 1/9 & 8/9
\end{pmatrix}
\end{eqnarray*}
our construction with $a_0=0$ gives $A=VV^T$ and $B=WW^T$ where
\begin{eqnarray*}
V=\begin{pmatrix}
\frac{1}{2}\sqrt{3}&0&0 & 0 & 0 \\
\frac{1}{2\sqrt{3}}&\frac{1}{2}\sqrt{\frac{5}{3}}&0 & 0 & 0 \\
0 & \frac{1}{2}\sqrt{\frac{3}{5}}& \frac{1}{2}\sqrt{\frac{7}{5}}& 0 & 0\\
0 & 0 & \frac{1}{2}\sqrt{\frac{5}{7}}& \frac{3}{2\sqrt{7}} & 0\\
0 & 0 & 0 & \frac{1}{6}\sqrt{7} & \frac{1}{3}\sqrt{5}
\end{pmatrix} \quad \textnormal{and}\quad
W=\begin{pmatrix}
\frac{1}{3}\sqrt{7}&0&0 & 0 & 0\\
\frac{2}{3\sqrt{7}}&\frac{1}{3}\sqrt{\frac{31}{7}}&0 & 0 & 0\\
0 & \frac{2}{3}\sqrt{\frac{7}{31}}& \sqrt{\frac{21}{31}}& 0 & 0\\
0 & 0 & 0& \frac{2}{3}\sqrt{2} & 0\\
0 & 0 & 0 & \frac{1}{6\sqrt{2}} & \frac{1}{2}\sqrt{\frac{7}{2}}
\end{pmatrix}.
\end{eqnarray*}
Therefore the matrices $A$ and $B$ are completely positive. Note that $V$ and $W$ should be $5\times 6$ matrices; however, the selection of $a_0=0$ forces $v_0$ to be the zero vector and as such the first column of both $V$ and $W$ is all zeroes, so it can be omitted. For this reason, choosing $a_0=0$ often leads to a much simpler decomposition.
It is important to emphasise here that a decomposition proving that a matrix $A$ is completely positive is in general not unique. In particular, the choice of $a_0$ can lead to different decompositions, assuming they are still well-defined.
For example, if we had instead chosen $a_0=3/4$, the matrix
\begin{eqnarray*}
\tilde{W}=\begin{pmatrix}
\frac{3}{4}&\frac{1}{12}\sqrt{31}&0&0 & 0 & 0\\
0&\frac{8}{3\sqrt{31}}&\frac{1}{3}\sqrt{\frac{91}{31}}&0 & 0 & 0\\
0&0 & \frac{2}{3}\sqrt{\frac{31}{91}}& \sqrt{\frac{57}{91}}& 0&0 \\
0&0 & 0 & 0& \frac{2}{3}\sqrt{2} & 0\\
0&0 & 0 & 0 & \frac{1}{6\sqrt{2}} & \frac{1}{2}\sqrt{\frac{7}{2}}
\end{pmatrix}
\end{eqnarray*}
works in the decomposition of $B$.
\end{example}
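Both decompositions of $B$ given in the example can be confirmed numerically:

```python
import numpy as np

r = np.sqrt
B = np.array([
    [7/9, 2/9, 0, 0, 0],
    [2/9, 5/9, 2/9, 0, 0],
    [0, 2/9, 7/9, 0, 0],
    [0, 0, 0, 8/9, 1/9],
    [0, 0, 0, 1/9, 8/9],
])
# W from the construction with a0 = 0
W = np.array([
    [r(7)/3, 0, 0, 0, 0],
    [2/(3*r(7)), r(31/7)/3, 0, 0, 0],
    [0, 2*r(7/31)/3, r(21/31), 0, 0],
    [0, 0, 0, 2*r(2)/3, 0],
    [0, 0, 0, 1/(6*r(2)), r(7/2)/2],
])
# W-tilde from the construction with a0 = 3/4
Wt = np.array([
    [3/4, r(31)/12, 0, 0, 0, 0],
    [0, 8/(3*r(31)), r(91/31)/3, 0, 0, 0],
    [0, 0, 2*r(31/91)/3, r(57/91), 0, 0],
    [0, 0, 0, 0, 2*r(2)/3, 0],
    [0, 0, 0, 0, 1/(6*r(2)), r(7/2)/2],
])
assert np.allclose(W @ W.T, B) and np.allclose(Wt @ Wt.T, B)
```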
If our matrix is in block form but our decomposition does not work, we may employ the technique illustrated in the example below: treating each block separately.
\begin{example}\label{triBlockEx}
Consider the matrix
\begin{eqnarray*}
C=\begin{pmatrix}
1 & 0 & 0 & 0 & 0\\
0 & 1/2&1/2&0 & 0\\
0 & 1/2&1/2&0 & 0\\
0 & 0 & 0 & 1/2 & 1/2\\
0 & 0 & 0 & 1/2 & 1/2
\end{pmatrix}.
\end{eqnarray*}
Since we have $b_1=0$, this gives $(v_1)_2=0$. Therefore we also have $(v_3)_3=\sqrt{a_3-\frac{b_2^2}{a_2}}=\sqrt{1/2-1/2}=0$. Hence, regardless of our choice of $a_0$, the component $(v_3)_4$ will never be well-defined. To get around this, consider $C$ as the block matrix
\begin{eqnarray*}
C=\begin{pmatrix}
C_1 & 0_{3,2}\\
0_{2,3} & C_2
\end{pmatrix}.
\end{eqnarray*}
where $0_{n,m}$ denotes the $n\times m$ all-zeros matrix and
\begin{eqnarray*}
C_1=\begin{pmatrix}
1 & 0 & 0\\
0&1/2&1/2\\
0&1/2&1/2
\end{pmatrix} \quad \textnormal{and}\quad
C_2=\begin{pmatrix}
1/2&1/2\\
1/2&1/2\\
\end{pmatrix}.
\end{eqnarray*}
The matrices $C_1$ and $C_2$, on the other hand, we have no trouble decomposing. Choosing $a_0=0$ we obtain
\begin{eqnarray*}
V_1=\begin{pmatrix}
1&0&0\\
0&\frac{1}{\sqrt{2}} & 0\\
0&\frac{1}{\sqrt{2}}&0
\end{pmatrix} \quad \textnormal{and}\quad
V_2=\begin{pmatrix}
\frac{1}{\sqrt{2}} & 0\\
\frac{1}{\sqrt{2}}&0
\end{pmatrix}
\end{eqnarray*}
where $C_1=V_1V_1^T$ and $C_2=V_2V_2^T$. Therefore
\begin{eqnarray*}
V=\begin{pmatrix}
V_1 & 0_{3,2}\\
0_{2,3} & V_2
\end{pmatrix}
=\begin{pmatrix}
1&0&0&0&0\\
0&\frac{1}{\sqrt{2}} & 0&0&0\\
0&\frac{1}{\sqrt{2}}&0&0&0\\
0&0&0&\frac{1}{\sqrt{2}} & 0\\
0&0&0&\frac{1}{\sqrt{2}}&0
\end{pmatrix}
\end{eqnarray*}
where $C=VV^T$ and hence $C$ is completely positive.
\end{example}
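As a quick numerical sanity check, the identity $C=VV^T$ from the example above can be verified in plain Python (this snippet and its helper name are ours, not part of the paper):

```python
import math

# The 5x5 matrix C from the example and the block-diagonal matrix V
# assembled from V_1 and V_2 (zero columns retained for clarity).
r2 = 1 / math.sqrt(2)
C = [
    [1, 0, 0, 0, 0],
    [0, 0.5, 0.5, 0, 0],
    [0, 0.5, 0.5, 0, 0],
    [0, 0, 0, 0.5, 0.5],
    [0, 0, 0, 0.5, 0.5],
]
V = [
    [1, 0, 0, 0, 0],
    [0, r2, 0, 0, 0],
    [0, r2, 0, 0, 0],
    [0, 0, 0, r2, 0],
    [0, 0, 0, r2, 0],
]

def gram(M):
    """Return M @ M^T for a list-of-lists matrix M."""
    return [[sum(ri[k] * rj[k] for k in range(len(ri))) for rj in M] for ri in M]

VVt = gram(V)
assert all(abs(VVt[i][j] - C[i][j]) < 1e-12 for i in range(5) for j in range(5))
print("C = V V^T verified; every entry of V is non-negative.")
```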
The decomposition given by Equation~\eqref{veqn} leads to the following result, which includes tridiagonal doubly stochastic positive definite matrices. Corollary~4.11 of \cite{BH} encompasses this result, but its proof relies on deletion of a leaf from a connected acyclic graph; our method of proof is more direct, does not involve graph theory, and relies solely on Equation~\eqref{veqn} and observing the leading principal minors of the matrix.
\begin{proposition}\label{PD_CP}
If $A$ is a symmetric tridiagonal doubly non-negative matrix, then $A$ is completely positive.
\end{proposition}
\begin{proof}
Sylvester's criterion tells us that for a real symmetric matrix $A$, positive definiteness is equivalent to all leading principal minors of $A$ being positive.
Note that, with $a_0=0$, all square roots in the denominators of the entries in Equation~(\ref{veqn}) are well-defined precisely when all leading principal minors of $A$ are positive. Indeed,
taking $a_0=0$ in the construction of Equation~\eqref{veqn}, we find the following.
For $v_1$ to be well-defined, we have $a_1> 0$, which is the $1\times1$ leading principal minor.
For $v_2$ to be well-defined, we have $\displaystyle a_2-\frac{b_1^2}{a_1}> 0$, which is equivalent to $\displaystyle a_1a_2-b_1^2> 0$, which is the $2\times 2$ leading principal minor.
For $v_3$ to be well-defined, we have $\displaystyle a_3-\frac{b_2^2}{a_2-\frac{b_1^2}{a_1}}> 0$, which is equivalent to $\displaystyle a_1a_2a_3-a_3b_1^2-a_1b_2^2> 0$, which is the $3\times 3$ leading principal minor.
Continuing in this manner, the result follows.
\end{proof}
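The recursion behind this proof can be sketched in code. The following is a minimal illustration, reconstructed from the formulas above under the convention $v_0=(a_0,0,\dots,0)^T$ with $a_0=0$; the function name and the test matrix are ours, and Equation~\eqref{veqn} itself remains the authoritative definition:

```python
import math

def tridiag_cp_vectors(a, b, a0=0.0, tol=1e-12):
    """Sketch of the recursion behind Equation (veqn) with initial value a0:
    v_0 = (a0, 0, ..., 0) and, for i = 1, ..., n,
      (v_i)_i     = sqrt(a_i - ((v_{i-1})_i)^2),
      (v_i)_{i+1} = b_i / (v_i)_i.
    Returns the vectors of the decomposition (v_0 included when a0 != 0)."""
    n = len(a)
    vs = []
    v_prev = [a0] + [0.0] * (n - 1)
    if a0 != 0:
        vs.append(v_prev[:])
    for i in range(n):
        v = [0.0] * n
        diag = a[i] - v_prev[i] ** 2
        if diag < -tol:
            raise ValueError("decomposition ill-defined at index %d" % (i + 1))
        v[i] = math.sqrt(max(diag, 0.0))
        if i + 1 < n:
            v[i + 1] = b[i] / v[i]  # requires (v_i)_i > 0
        vs.append(v)
        v_prev = v
    return vs

# A diagonally dominant (hence positive definite) tridiagonal doubly
# stochastic matrix: b = (1/5, 1/10), a_i = 1 - b_{i-1} - b_i.
b = [0.2, 0.1]
a = [0.8, 0.7, 0.9]
vs = tridiag_cp_vectors(a, b)
A = [[sum(v[i] * v[j] for v in vs) for j in range(3)] for i in range(3)]
expected = [[0.8, 0.2, 0.0], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]]
assert all(abs(A[i][j] - expected[i][j]) < 1e-12 for i in range(3) for j in range(3))
print("A = sum v_i v_i^T recovered with non-negative components.")
```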
\begin{definition}\label{irred}
Let $A$ be an $n\times n$ matrix. $A$ is said to be \emph{reducible} if it can be transformed, via a simultaneous permutation of its rows and columns, to a block upper triangular matrix with square diagonal blocks of size $<n$. $A$ is \emph{irreducible} if it is not reducible.
\end{definition}
Note that in the context of tridiagonal doubly stochastic matrices, irreducibility is equivalent to $b_i>0$ for all $i=1, \dots, n-1$, i.e.\ to $A$ not being expressible as the direct sum of smaller tridiagonal doubly stochastic matrices. When considering whether or not a tridiagonal doubly stochastic matrix $A$ is completely positive, we may assume $A$ is irreducible. Indeed, if $A$ were a direct sum of smaller doubly stochastic matrices---implying that some $b_i=0$---we could consider these smaller doubly stochastic matrices separately. The $V$ corresponding to $A$ in the decomposition would have the same direct sum structure: it would be the direct sum of the $V$'s corresponding to the smaller doubly stochastic matrices. If some $b_i=0$, then the corresponding $v_i$ has only one nonzero element. The matrix $B$ in Example 1 is a direct sum of two doubly stochastic matrices, and $C$ in Example 2 is a direct sum of three doubly stochastic matrices.
It is clear that if a matrix $A$ is completely positive, then it is automatically positive semidefinite.
Taussky's theorem \cite[Theorem II]{Taussky} allows us to use the eigenvalues of a given tridiagonal doubly stochastic matrix to characterize a partial converse statement.
\begin{theorem}(Taussky’s Theorem) Let $A$ be an $n\times n$ irreducible matrix. An eigenvalue of $A$ cannot lie on the boundary of a Gershgorin disk unless it lies on the boundary of every Gershgorin disk.
\end{theorem}
Equivalently, Taussky's theorem states that if $A$ is an irreducible, diagonally dominant matrix with at least one inequality of the diagonal dominance being strict (in the context of tridiagonal doubly stochastic matrices, this means that \eqref{eq1} holds with strict inequality for at least one $i$), then $A$ is nonsingular. This is in fact the original formulation of the theorem in \cite{Taussky}.
Recall that if a matrix $A$ is symmetric, diagonally dominant, and all its diagonal entries are non-negative, then $A$ is positive semidefinite.
\begin{proposition}\label{PSD_PD}
Let $A$ be an $n\times n$ irreducible tridiagonal doubly stochastic matrix. If $n\geq 3$ and $A$ is diagonally dominant, then $A$ is positive definite or, equivalently, $A$ is nonsingular.
\end{proposition}
\begin{proof} If $A$ is diagonally dominant, then $b_i+b_{i+1}\leq 0.5$ for all $i=0,1,2,\ldots,n-1$, with the convention $b_0=b_{n}=0$. Suppose $0$ is an eigenvalue of $A$. Then by the Gershgorin circle theorem, there exists some $i$ such that $b_i+b_{i+1}=0.5$, meaning that $0$ is an eigenvalue lying on the boundary of a disk, so by Taussky's Theorem every disk must have $0$ on its boundary; that is, $b_i+b_{i+1}=0.5$ for all $i$. In particular $b_0+b_1=0.5$ gives $b_1=0.5$, which forces $b_2=0$, contradicting the assumption that $A$ is irreducible.
\end{proof}
Note that the tridiagonal doubly stochastic matrix $\begin{pmatrix} 1/2 & 1/2\\ 1/2& 1/2\end{pmatrix}$ is the only $2\times 2$ tridiagonal doubly stochastic matrix that is positive semidefinite, without being positive definite (that is, it is the only $2\times 2$ positive semidefinite tridiagonal doubly stochastic matrix with zero as an eigenvalue). One can see this from \eqref{eq1} and the equivalent formulation of Taussky's theorem. One can verify that it is completely positive with $V=1/\sqrt{2}(1\,\, 1)^T$.
A number of corollaries follow from Proposition~\ref{PSD_PD}.
\begin{corollary}\label{cor:1/2}
Let $A$ be an $n\times n$ tridiagonal doubly stochastic matrix with $n\geq 3$. If $A$ is diagonally dominant, with zero as an eigenvalue, then $A$ must be reducible with at least one block of the form $\begin{pmatrix} 1/2 & 1/2\\ 1/2& 1/2\end{pmatrix}$.
\end{corollary}
The following Corollary is a more general statement than our previous Proposition~\ref{PD_CP}. This result appears to be known (e.g.\ it is mentioned in \cite[Section 3]{Dahl}), yet we are unaware of a proof in the literature. Given some subtleties in, and the length of, the proof, we have provided the details herein, which culminate in the corollary below. We note that \cite[Example 2]{CPgraphs} states that all tridiagonal doubly stochastic matrices are completely positive, which is not true in general without the assumption of diagonal dominance.
\begin{corollary}\label{TridiagDS_CP=PSD}
Let $A$ be an $n\times n$ tridiagonal doubly stochastic matrix. If $A$ is diagonally dominant, then $A$ is completely positive.
\end{corollary}
\begin{proof}
From the discussion following Definition~\ref{irred}, we can assume that $A$ is irreducible. If $A$ has a zero eigenvalue, then either $A=\begin{pmatrix} 1/2 & 1/2\\ 1/2& 1/2\end{pmatrix}$, which is completely positive with $V=1/\sqrt{2}(1\,\, 1)^T$, or by Corollary~\ref{cor:1/2}, the block corresponding to the zero eigenvalue is $\begin{pmatrix} 1/2 & 1/2\\ 1/2& 1/2\end{pmatrix}$, and the $V$ corresponding to $A$ will have a direct sum decomposition with corresponding $V_i=1/\sqrt{2}(1\,\, 1)^T$.
So, we may assume that $A$ has no zero eigenvalue. By Proposition~\ref{PSD_PD}, $A$ is therefore positive definite, and hence completely positive by Proposition~\ref{PD_CP}.
\end{proof}
Proposition 3.2 of \cite{CP} states that the
cp-rank of a matrix $A$ (that is, the minimal number $k$ in Definition~\ref{def:cp}) is greater than or equal to the rank of $A$.
We show that the cp-rank of any tridiagonal doubly stochastic matrix is the same as the rank, or equivalently, Equation~(\ref{veqn}) provides a way to construct a minimal rank-one decomposition.
\begin{corollary}
The cp-rank of any irreducible tridiagonal doubly stochastic matrix is equal to its rank.
\end{corollary}
\begin{proof}
We can let $a_0=0$ as long as the matrix is positive definite by the proof of Proposition~\ref{PD_CP}. Therefore, for an $n\times n$ matrix $A$, the number of nonzero summands in Equation~(\ref{veqn}) is at most $n$.
An irreducible tridiagonal doubly stochastic matrix is singular if and only if it is the $2\times 2$ matrix with all entries equal to $1/2$. Indeed, singularity means that the rows are linearly dependent, but this is impossible for $n\geq 3$ since irreducibility of a tridiagonal doubly stochastic matrix is equivalent to $b_i>0$ for all $i=1, \dots, n-1$ (and thus the rank of such a matrix is $n$). As previously stated, in the case of the $2\times 2$ all-$1/2$ matrix (which has rank $1$), our decomposition gives a $2\times 1$ matrix $V=1/\sqrt{2}(1\,\, 1)^T$ (equivalently, a single vector $v=1/\sqrt2(1\,\, 1)^T$). For $n\geq 3$, an irreducible tridiagonal doubly stochastic matrix has rank $n$, and our decomposition gives $n$ vectors $v_1, \dots, v_n$.
\end{proof}
\section{Symmetric pentadiagonal matrices}\label{sec:penta}
Let $A$ be an $n\times n$ symmetric pentadiagonal matrix:
$$A=\begin{pmatrix}a_1&b_1&c_1&&&& \\ b_1 & a_2 & b_2&c_2 &&&\\ c_1& b_2 & a_3 & b_3&c_3 &&\\ &\ddots&\ddots&\ddots&&& \\& &\ddots&\ddots&\ddots&\ddots& \\ &&c_{n-4}&b_{n-3}&a_{n-2}&b_{n-2}&c_{n-2} \\
&&&c_{n-3}&b_{n-2}&a_{n-1}&b_{n-1} \\&&&&c_{n-2}&b_{n-1}&a_{n} \end{pmatrix}. $$
If $A$ is also doubly stochastic, we have
$a_{i}=1-(b_{i-1}+b_{i}+c_{i-2}+c_i)$ for $i=1,2,\ldots,n$, where $b_k=0$ for $k\leq 0$ or $k\geq n$ and $c_\ell=0$ for $\ell\leq 0$ or $\ell\geq n-1$.
Unlike in the tridiagonal matrix setting, being doubly stochastic does not immediately imply symmetry, and thus we assume symmetry as an additional hypothesis.
\subsection{Basic Properties of symmetric pentadiagonal doubly stochastic matrices}
Many of the arguments from Section~\ref{sec:td_basic} carry through into the pentadiagonal setting.
As in the tridiagonal case (Lemma~\ref{Gersh2}), the eigenvalues of a symmetric pentadiagonal doubly stochastic matrix are bounded between $-1$ and $1$.
\begin{lemma}\label{Gersh4}
Let $A$ be a symmetric pentadiagonal doubly stochastic matrix. The eigenvalues of $A$ all lie in $[-1,1]$.
\end{lemma}
Again, as in the case of tridiagonal doubly stochastic matrices (Proposition~\ref{prop:eigval}), any value in $[-1,1]$ can be realized as an eigenvalue of an $n\times n$ symmetric pentadiagonal doubly stochastic matrix. One can, if desired, use the bona fide pentadiagonal matrix\[A=\begin{pmatrix}a& b & b\\b & a & b\\b&b&a
\end{pmatrix}\] for the $n\geq 3$ cases.
\begin{proposition}\label{prop:eigval2}
Let $n\geq 3$. $\lambda$ is an eigenvalue of an $n \times n$ symmetric pentadiagonal doubly stochastic matrix if and only if $\lambda\in [-1,1]$.
\end{proposition}
\subsection{Complete Positivity}
We now provide a construction similar to that for tridiagonal doubly stochastic matrices, to provide a sufficient condition for when a symmetric pentadiagonal doubly stochastic matrix $A$ is completely positive. Define the set $\{v_i\}_{i=-1}^n$ of $n+2$\, $n$-dimensional vectors where the $j$-th component of $v_i$, denoted $(v_i)_j$ (where $j=1, \dots, n$), is recursively defined by
\begin{equation}\label{veqn_pent}
(v_i)_j= \begin{cases}
\sqrt{a_i-[((v_{i-1})_i)^2+((v_{i-2})_i)^2]} & j=i \\
\frac{b_i-(v_{i-1})_j(v_{i-1})_{j-1}}{(v_i)_i} & j=i+1 \\
c_i/(v_i)_i & j=i+2 \\
0 & \textnormal{otherwise}
\end{cases}
\end{equation}
with initial conditions $v_{-1}=\begin{pmatrix} a_{-1}&0& \dots & 0\end{pmatrix}^T$ and $v_{0}=\begin{pmatrix} a_{0}&b_{0}&0& \dots & 0\end{pmatrix}^T$. This construction yields
\begin{eqnarray*}
v_{1}&=&\begin{pmatrix} \sqrt{a_1-(a_0^2+a_{-1}^2)}& \frac{b_1-b_0a_0}{\sqrt{a_1-(a_0^2+a_{-1}^2)}}& \frac{c_1}{\sqrt{a_1-(a_0^2+a_{-1}^2)}}&0& \dots & 0\end{pmatrix}^T\\
v_2&=&\begin{pmatrix} 0 & \sqrt{a_2-\left(\frac{(b_1-b_0a_0)^2}{a_1-(a_0^2+a_{-1}^2)}+b_0^2\right)}& \frac{b_2-\frac{c_1(b_1-b_0a_0)}{a_1-(a_0^2+a_{-1}^2)}}{\sqrt{a_2-\left(\frac{(b_1-b_0a_0)^2}{a_1-(a_0^2+a_{-1}^2)}+b_0^2\right)}}& \frac{c_2}{\sqrt{a_2-\left(\frac{(b_1-b_0a_0)^2}{a_1-(a_0^2+a_{-1}^2)}+b_0^2\right)}}& 0 & \dots & 0\end{pmatrix}^T\\
&\textnormal{ etc.}&
\end{eqnarray*}
Similar to the tridiagonal case, the constants $a_{-1},a_0,$ and $b_0$ are taken to be non-negative numbers, with the caveat that some collections of initial values will render the decomposition ill-defined.
\begin{proposition}\label{penta_decomp}
Let $A$ be a symmetric pentadiagonal matrix such that the vectors $v_i$ defined in Equation~\eqref{veqn_pent} are well-defined.
Then $A=\sum_{i=-1}^nv_iv_i^T$. If, in addition, all the components of each $v_i$ are non-negative, then $A$ is completely positive.
\end{proposition}
\begin{proof}
The proof is similar to the tridiagonal case. Consider a symmetric pentadiagonal $n\times n$ matrix $A$ such that the vectors in Equation~\eqref{veqn_pent} are well-defined. Let $V_i=v_iv_i^T$ for $i=-1,\dots,n$ and $\tilde{A}=\sum_{i=-1}^nV_i$. We wish to show that $\tilde{A}=A$. From the definition of the $v_i$ given in Equation~\eqref{veqn_pent}, each $v_i$ has at most three nonzero components, so each $V_i$ is symmetric and pentadiagonal with at most nine nonzero entries, and hence $\tilde{A}$ itself is symmetric and pentadiagonal.
Now, consider a component $\tilde{a}_{j,j+1}$ of $\tilde{A}$, where $j=1,2,\dots,n-1$. The only $V_i$ that can have a nonzero entry in the $(j,j+1)$-th component are $V_{j-1}$ and $V_j$, as $v_{j-1}$ and $v_j$ are the only vectors whose $j$-th and $(j+1)$-th components are both nonzero. The $(j,j+1)$-th component of $V_{j-1}+V_j$ is $(v_{j-1})_j(v_{j-1})_{j+1}+(v_j)_j(v_j)_{j+1}$, which simplifies to $b_j$, and so $\tilde{a}_{j,j+1}=b_j$. By symmetry, we also have $\tilde{a}_{j+1,j}=b_{j}$.
Next, consider a component $\tilde{a}_{j,j+2}$ of $\tilde{A}$, where $j=1,2,\dots,n-2$. The only $V_i$ that can have a nonzero entry in the $(j,j+2)$-th component is $V_{j}$, as $v_{j}$ is the only vector whose $j$-th and $(j+2)$-th components are both nonzero. The value of $(v_j)_j$ is the same as the denominator of $(v_j)_{j+2}$, and so we simply obtain $\tilde{a}_{j,j+2}=c_j$. By symmetry, we also have $\tilde{a}_{j+2,j}=c_j$.
Finally, consider a diagonal component $\tilde{a}_{jj}$, where $j=1,2,\dots,n$. The only $V_i$ that can have nonzero entries in the $(j,j)$-th component are $V_{j-2}$, $V_{j-1}$, and $V_j$, and the sum of the respective values is precisely $a_j$, so $\tilde{a}_{jj}=a_j$ for $j=1,2,\dots,n$. Therefore $A=\tilde{A}=\sum_{i=-1}^nv_iv_i^T$. When all components of the $v_i$ are non-negative, this exhibits $A$ as completely positive.
\end{proof}
When using Equation~\eqref{veqn_pent} to find a decomposition of a pentadiagonal matrix, it is simplest to choose the initial vectors $v_{-1}$ and $v_0$ to both be the zero vector. However, Example~\ref{ex:nonzero_init_conds} shows that it is sometimes necessary to choose nonzero initial conditions in order to prove that the given matrix is completely positive.
\begin{example}\label{ex:nonzero_init_conds}
Consider the matrix
\[A=\begin{pmatrix}
3/4 & 1/8 & 1/8 & 0 & 0 \\
1/8 & 3/4 & 0 & 1/8 & 0 \\
1/8 & 0 & 1/2 & 13/40 & 1/20 \\
0 & 1/8 & 13/40 & 1/2 & 1/20 \\
0 & 0 & 1/20 & 1/20 & 9/10 \\
\end{pmatrix}.\]
Using Equation~\eqref{veqn_pent} with initial vectors $v_{-1}$ and $v_0$ both taken to be the zero vector, we compute the matrix $V$ such that $A=VV^T$ to be
\[V=\begin{pmatrix}
0 & 0 & \frac{\sqrt{3}}{2} & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{4 \sqrt{3}} & \frac{\sqrt{\frac{35}{3}}}{4} & 0 & 0 & 0 \\
0 & 0 & \frac{1}{4 \sqrt{3}} & -\frac{1}{4 \sqrt{105}} & \frac{\sqrt{\frac{67}{35}}}{2} & 0 & 0 \\
0 & 0 & 0 & \frac{\sqrt{\frac{3}{35}}}{2} & \frac{23}{\sqrt{2345}} & \frac{\sqrt{\frac{339}{335}}}{2} & 0 \\
0 & 0 & 0 & 0 & \frac{\sqrt{\frac{7}{335}}}{2} & \frac{7 \sqrt{\frac{3}{37855}}}{2} & \sqrt{\frac{101}{113}} \\
\end{pmatrix}.\]
We note that the component $(v_2)_3$ is negative and hence this decomposition cannot be used to prove that $A$ is completely positive.
It is not surprising that taking the initial conditions to be all zero does not work: if both $v_{-1}$ and $v_0$ are zero vectors, i.e.\ $a_{-1}=a_0=b_0=0$, then $(v_2)_3\geq 0$ is equivalent to $a_1b_2\geq b_1c_1$. But in $A$, $b_1=c_1=1/8$ while $b_2=0$, so $a_1b_2< b_1c_1$.
If we instead use the initial conditions $v_{-1}=\begin{pmatrix} 0&0& \dots & 0\end{pmatrix}^T$ and $v_{0}=\begin{pmatrix} \frac 1 2&\frac 1 4&0& \dots & 0\end{pmatrix}^T$, we obtain the decomposition
$A=WW^T$, where \[W=\begin{pmatrix}
0 & \frac{1}{2} & \frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 \\
0 & \frac{1}{4} & 0 & \frac{\sqrt{11}}{4} & 0 & 0 & 0 \\
0 & 0 & \frac{1}{4 \sqrt{2}} & 0 & \frac{\sqrt{\frac{15}{2}}}{4} & 0 & 0 \\
0 & 0 & 0 & \frac{1}{2 \sqrt{11}} & \frac{13}{5 \sqrt{30}} & \frac{\sqrt{\frac{4157}{165}}}{10} & 0 \\
0 & 0 & 0 & 0 & \frac{\sqrt{\frac{2}{15}}}{5} & \frac{23 \sqrt{\frac{11}{62355}}}{10} & \frac{\sqrt{\frac{14861}{4157}}}{2} \\
\end{pmatrix}.\]
This decomposition shows that $A$ is in fact completely positive.
\end{example}
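The computation in the example above can be reproduced mechanically. Below is a sketch of the recursion in Equation~\eqref{veqn_pent} (the function name and the 0-indexing conventions are ours), applied with the initial data $a_{-1}=0$, $a_0=\tfrac12$, $b_0=\tfrac14$:

```python
import math

def penta_cp_vectors(a, b, c, a_m1=0.0, a_0=0.0, b_0=0.0):
    """Sketch of the recursion in Equation (veqn_pent).
    a, b, c are the diagonals (1-indexed in the paper, 0-indexed here);
    a_m1, a_0, b_0 are the initial data defining v_{-1} and v_0."""
    n = len(a)
    v_prev2 = [a_m1] + [0.0] * (n - 1)       # v_{-1}
    v_prev1 = [a_0, b_0] + [0.0] * (n - 2)   # v_0
    vs = [v for v in (v_prev2, v_prev1) if any(v)]
    for i in range(n):
        v = [0.0] * n
        v[i] = math.sqrt(a[i] - (v_prev1[i] ** 2 + v_prev2[i] ** 2))
        if i + 1 < n:
            v[i + 1] = (b[i] - v_prev1[i + 1] * v_prev1[i]) / v[i]
        if i + 2 < n:
            v[i + 2] = c[i] / v[i]
        vs.append(v)
        v_prev2, v_prev1 = v_prev1, v
    return vs

# Diagonals of the 5x5 matrix A in the example above.
a = [3/4, 3/4, 1/2, 1/2, 9/10]
b = [1/8, 0, 13/40, 1/20]
c = [1/8, 1/8, 1/20]
vs = penta_cp_vectors(a, b, c, a_0=1/2, b_0=1/4)
A = [[sum(v[i] * v[j] for v in vs) for j in range(5)] for i in range(5)]
expected = [
    [3/4, 1/8, 1/8, 0, 0],
    [1/8, 3/4, 0, 1/8, 0],
    [1/8, 0, 1/2, 13/40, 1/20],
    [0, 1/8, 13/40, 1/2, 1/20],
    [0, 0, 1/20, 1/20, 9/10],
]
assert all(abs(A[i][j] - expected[i][j]) < 1e-10 for i in range(5) for j in range(5))
assert all(x >= 0 for v in vs for x in v)
print("A = sum v_i v_i^T with non-negative components: A is completely positive.")
```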
\begin{example}\label{ex:pentaBlockDiag}
As an analogue to Example~\ref{triBlockEx} consider the matrix
\[A=\begin{pmatrix}
1/2 & 1/4 & 1/4 & 0 & 0 & 0 \\
1/4 & 1/2 & 1/4 & 0 & 0 & 0 \\
1/4 & 1/4 & 1/2 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/2 & 1/2 & 0 \\
0 & 0 & 0 & 1/2 & 1/2 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
\end{pmatrix}.\]
For this matrix, the construction we outline through Equation~\eqref{veqn_pent} will never give a well-defined decomposition regardless of the choice of initial conditions. To see why this is the case, note that $c_2=b_3=c_3=0$. Here we may assume that we have chosen initial conditions such that $(v_1)_1$, $(v_2)_2$, and $(v_3)_3$ are nonzero (otherwise the decomposition would already be ill-defined). From this we immediately obtain
\begin{eqnarray*}
\begin{tabular}{l}
$(v_2)_4=\frac{c_2}{(v_2)_2}=0$\\[0.5cm]
$(v_3)_4=\frac{b_3-(v_2)_4(v_2)_3}{(v_3)_3}=0$\\[0.5cm]
$(v_3)_5=\frac{c_3}{(v_3)_3}=0$.\\[0.5cm]
\end{tabular}
\end{eqnarray*}
Therefore we can compute the following components of $v_4$ to be:
\begin{eqnarray*}
\begin{tabular}{l}
$(v_4)_4=\sqrt{a_4-[((v_{3})_4)^2+((v_{2})_4)^2]}=\sqrt{a_4}=\frac{1}{\sqrt{2}}$\\[0.5cm]
$(v_4)_5=\frac{b_4-(v_3)_5(v_3)_4}{(v_4)_4}=\frac{b_4}{\sqrt{a_4}}=\frac{1}{\sqrt{2}}$\\[0.5cm]
$(v_4)_6=\frac{c_4}{(v_4)_4}=0$.\\[0.5cm]
\end{tabular}
\end{eqnarray*}
Now, all six of the components that have been calculated so far are completely independent of the initial conditions (except for the requirement that all previous components were well-defined). Therefore the vector $v_5$ will be independent of the initial conditions. We then find that
\[(v_5)_5=\sqrt{a_5-(((v_4)_5)^2 + ((v_3)_5)^2)}=\sqrt{\frac{1}{2}-\left(\left(\frac{1}{\sqrt{2}}\right)^2+0\right)}=0.\]
Hence $(v_5)_6$ will not be well-defined.
Similar to Example~\ref{triBlockEx}, we can still make use of our construction to prove that $A$ is completely positive by considering $A$ as the block diagonal matrix
\[A=\begin{pmatrix}
A_1 & 0_{3,2} & 0_{3,1}\\
0_{2,3} & A_2 & 0_{2,1}\\
0_{1,3} & 0_{1,2} &A_3\\
\end{pmatrix}\]
where
\[A_1=\begin{pmatrix}
1/2 & 1/4 & 1/4 \\
1/4 & 1/2 & 1/4 \\
1/4 & 1/4 & 1/2\\
\end{pmatrix},\quad
A_2=\begin{pmatrix}
1/2 & 1/2 \\
1/2 & 1/2
\end{pmatrix},\quad
A_3=\begin{pmatrix}
1
\end{pmatrix}
\]
From here we can find a decomposition for the three matrices $A_1$, $A_2$, and $A_3$ separately. We find that
\[V_1=\begin{pmatrix}
0 & 0 & \frac{1}{\sqrt{2}} & 0 & 0 \\
0 & 0 & \frac{1}{2 \sqrt{2}} & \frac{\sqrt{\frac{3}{2}}}{2} & 0 \\
0 & 0 & \frac{1}{2 \sqrt{2}} & \frac{1}{2 \sqrt{6}} & \frac{1}{\sqrt{3}} \\
\end{pmatrix}, \quad
V_2=\begin{pmatrix}
0&\frac{1}{\sqrt{2}} & 0 \\
0&\frac{1}{\sqrt{2}} & 0 \\
\end{pmatrix}, \quad
V_3=\begin{pmatrix}1\end{pmatrix}\]
where $A_1=V_1V_1^T$, $A_2=V_2V_2^T$, and $A_3=V_3V_3^T$. A decomposition for $A$ can then be formed by creating the block diagonal matrix
\[V=\begin{pmatrix}
V_1 & 0_{3,3} & 0_{3,1}\\
0_{2,5} & V_2 & 0_{2,1}\\
0_{1,5} & 0_{1,3} &V_3\\
\end{pmatrix}=
\begin{pmatrix}
0 & 0 & \frac{1}{\sqrt{2}} & 0 & 0 &0&0&0&0\\
0 & 0 &\frac{1}{2 \sqrt{2}} & \frac{\sqrt{\frac{3}{2}}}{2} &0& 0&0&0&0 \\
0 & 0 & \frac{1}{2 \sqrt{2}} & \frac{1}{2 \sqrt{6}} & \frac{1}{\sqrt{3}}&0&0&0&0 \\
0&0&0&0&0&0&\frac{1}{\sqrt{2}} & 0&0 \\
0&0&0&0&0&0&\frac{1}{\sqrt{2}} & 0&0 \\
0&0&0&0&0&0&0&0&1\end{pmatrix}\]
and noting that $A=VV^T$. This proves that $A$ is completely positive.
Similar to Example~\ref{5x5}, we note that for any matrix $M$, if $M$ has columns consisting entirely of zeros, these columns can be removed from $M$ without changing the value of $MM^T$. Therefore we can simplify $V$ to the $6\times 5$ matrix below, rather than the $6\times 9$ matrix above:
\[V=\begin{pmatrix}
\frac{1}{\sqrt{2}} & 0 & 0 &0&0\\
\frac{1}{2 \sqrt{2}} & \frac{\sqrt{\frac{3}{2}}}{2} & 0&0&0 \\
\frac{1}{2 \sqrt{2}} & \frac{1}{2 \sqrt{6}} & \frac{1}{\sqrt{3}}&0&0 \\
0&0&0&\frac{1}{\sqrt{2}} & 0 \\
0&0&0&\frac{1}{\sqrt{2}} &0 \\
0&0&0&0&1\end{pmatrix}\]
\end{example}
We leave a result analogous to Proposition~\ref{PD_CP} in the setting of symmetric pentadiagonal doubly stochastic matrices as an open problem. Example~\ref{ex:nonzero_init_conds} shows that there is a connection between the entries of $A$ and how one should choose $v_{-1}$ and $v_0$; however, this connection is not immediately clear in general. Consider the matrix $A'$ which is equal to $A$ except for the following entries:
\begin{eqnarray*}
{a}'_{11}&=&a_1-(a_0^2+a_{-1}^2)\\
{a}'_{22}&=&a_2-b_0^2\\
{a}'_{21}&=&{a}'_{12}=b_1-b_0a_0.
\end{eqnarray*}
Note that $A={A}'$ provided the initial conditions are zero. If $A'$ is positive definite, then all of its leading principal minors are positive. However, this does not appear to be enough to conclude that $A$ is completely positive in this setting. Indeed, Equation~\eqref{veqn_pent} yields $(v_2)_3=\frac{b_2-\frac{c_1(b_1-b_0a_0)}{a_1-(a_0^2+a_{-1}^2)}}{\sqrt{a_2-\left(\frac{(b_1-b_0a_0)^2}{a_1-(a_0^2+a_{-1}^2)}+b_0^2\right)}}$, and $(v_2)_3>0$ is equivalent to $b_2-\frac{c_1(b_1-b_0a_0)}{a_1-(a_0^2+a_{-1}^2)}>0$ (assuming the denominator of $(v_2)_3$ is well-defined). This expression is equivalent to requiring that the $3\times 3$ leading principal submatrix of $A'$ with the last row and second last column removed, has positive determinant.
In general, requiring that $(v_i)_{i+1}>0$, assuming the denominator is well-defined, is equivalent to requiring that the $(i+1)\times (i+1)$ leading principal submatrix of $A'$ with the last row and second last column removed, has positive determinant.
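For the matrix $A$ of Example~\ref{ex:nonzero_init_conds} with zero initial conditions (so that $A'=A$), this determinant criterion can be checked directly in exact arithmetic (the variable names below are ours):

```python
from fractions import Fraction as F

# Relevant diagonals of the matrix A from Example ex:nonzero_init_conds
# (a_1, b_1, b_2, c_1 in the paper's notation).
a1, b1, b2, c1 = F(3, 4), F(1, 8), F(0), F(1, 8)

# With zero initial conditions A' = A, and (v_2)_3 >= 0 holds iff the 3x3
# leading principal submatrix with its last row and second-last column
# removed, i.e. [[a_1, c_1], [b_1, b_2]], has non-negative determinant.
det = a1 * b2 - c1 * b1
assert det == F(-1, 64)  # negative, matching the negative entry of V above
print("criterion detects the ill-suited zero initial conditions:", det)
```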
\subsection{Alternate Construction}
As Example~\ref{ex:nonzero_init_conds} illustrates, there can be some trial and error when it comes to finding a decomposition with all components being positive. Selecting initial conditions that can achieve this may be difficult or even impossible in certain cases. One workaround to this is in the case where the given matrix is block diagonal, as in Example~\ref{ex:pentaBlockDiag}.
Another technique, which applies even when the given matrix is not block diagonal, can be used if decomposing $A$ directly as described by Equation~(\ref{veqn}) or (\ref{veqn_pent}) does not yield results; we describe it in this section. The main idea is to find matrices $\tilde{A}$ and $\hat{A}$ such that $A=\tilde{A}+\hat{A}$ and then decompose $\tilde{A}$ and $\hat{A}$ using Equation~(\ref{veqn}) or (\ref{veqn_pent}). If $\tilde{A}$ and $\hat{A}$ are completely positive with decompositions $\tilde{A}=VV^T$ and $\hat{A}=WW^T$, then $A$ has the decomposition given by the matrix $\begin{pmatrix}V & W\end{pmatrix}$, which is simply the matrix formed by the columns of $V$ followed by the columns of $W$. Below, we provide a construction that gives $A$ as a sum of two specified positive semidefinite matrices $\tilde{A}$ and $\hat{A}$ that is often convenient to consider, though in general other choices also work.
Let $A$ be an $n\times n$ symmetric pentadiagonal doubly stochastic matrix. Recall the convention that $b_0=b_{n}=c_{-1}=c_{0}=c_{n-1}=c_{n}=0$. Define the $n\times n$ matrix $\tilde{A}$ to be the matrix with components $\tilde{a}_{ii}=\frac{1}{2}-b_i-b_{i-1}$ for $i\in\{1,\dots,n\}$, $\tilde{a}_{i,i+2}=\tilde{a}_{i+2,i}=c_i$, and all other components being zero. We similarly define the $n\times n$ matrix $\hat{A}$ to be the matrix with components $\hat{a}_{ii}=\frac{1}{2}-c_i-c_{i-2}$ for $i\in\{1,\dots,n\}$, $\hat{a}_{i,i+1}=\hat{a}_{i+1,i}=b_i$, and all other components being zero.
We find that $\tilde{A}+\hat{A}=A$, as desired.
If $A$ is diagonally dominant, $\tilde{A}$ and $\hat{A}$ are diagonally dominant as well, and hence also positive semidefinite.
As $\tilde{A}$ and $\hat{A}$ are much simpler than $A$, finding decompositions for both $\tilde{A}$ and $\hat{A}$ with all positive components is often much simpler (if it is possible), as the next example shows.
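The construction of $\tilde{A}$ and $\hat{A}$ is straightforward to implement. The sketch below (function name ours) builds both matrices from the diagonals $b$ and $c$ using exact rational arithmetic and checks that $\tilde{A}+\hat{A}$ is doubly stochastic with the expected diagonal; the diagonals used are those of the $4\times 4$ matrix appearing in the next example:

```python
from fractions import Fraction as F

def split_pentadiagonal(b, c, n):
    """A_tilde carries the diagonal contribution 1/2 - b_i - b_{i-1} plus the
    second off-diagonal c; A_hat carries 1/2 - c_i - c_{i-2} plus the first
    off-diagonal b (conventions b_0 = b_n = c_{-1} = c_0 = c_{n-1} = c_n = 0)."""
    bb = lambda i: b[i - 1] if 1 <= i <= n - 1 else F(0)
    cc = lambda i: c[i - 1] if 1 <= i <= n - 2 else F(0)
    half = F(1, 2)
    A_t = [[F(0)] * n for _ in range(n)]
    A_h = [[F(0)] * n for _ in range(n)]
    for i in range(1, n + 1):
        A_t[i - 1][i - 1] = half - bb(i) - bb(i - 1)
        A_h[i - 1][i - 1] = half - cc(i) - cc(i - 2)
        if i + 2 <= n:
            A_t[i - 1][i + 1] = A_t[i + 1][i - 1] = cc(i)
        if i + 1 <= n:
            A_h[i - 1][i] = A_h[i][i - 1] = bb(i)
    return A_t, A_h

b_diag = [F(1, 3), F(1, 156), F(17, 52)]
c_diag = [F(1, 12), F(1, 13)]
A_t, A_h = split_pentadiagonal(b_diag, c_diag, 4)
S = [[A_t[i][j] + A_h[i][j] for j in range(4)] for i in range(4)]
assert all(sum(row) == 1 for row in S)  # doubly stochastic row sums
assert [S[i][i] for i in range(4)] == [F(7, 12), F(7, 12), F(7, 12), F(31, 52)]
print("A_tilde + A_hat reproduces A exactly.")
```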
\begin{example}
Consider the matrix
\[A=\begin{pmatrix}
7/12&1/3&1/12&0\\
1/3&7/12&1/156&1/13\\
1/12&1/156&7/12&17/52\\
0&1/13&17/52&31/52
\end{pmatrix}.\]
Since the matrix has dimension $4$ and is doubly non-negative, it must be completely positive by \cite{Berman88}. However, if we try to decompose $A$ using the all-zero vectors as our initial conditions, we obtain
\[\begin{pmatrix}
0 & 0 & \frac{\sqrt{\frac{7}{3}}}{2} & 0 & 0 & 0 \\
0 & 0 & \frac{2}{\sqrt{21}} & \frac{\sqrt{\frac{11}{7}}}{2} & 0 & 0 \\
0 & 0 & \frac{1}{2 \sqrt{21}} & -\frac{15}{26 \sqrt{77}} & \frac{\sqrt{\frac{4217}{11}}}{26} & 0 \\
0 & 0 & 0 & \frac{2 \sqrt{\frac{7}{11}}}{13} & \frac{2491}{26 \sqrt{46387}} & 4 \sqrt{\frac{101}{4217}} \\
\end{pmatrix}.\]
Note the single negative entry. We can try using different initial conditions, but taking a guess-and-check approach is not an ideal strategy. Instead, now consider the matrices $\tilde{A}$ and $\hat{A}$:
\[\tilde{A}=\begin{pmatrix}
1/6 & 0 & 1/12 & 0 \\
0 & 25/156 & 0 & 1/13 \\
1/12 & 0 & 1/6 & 0 \\
0 & 1/13 & 0 & 9/52 \\
\end{pmatrix},\quad
\hat{A}=\begin{pmatrix}
5/12 & 1/3 & 0 & 0 \\
1/3 & 11/26 & 1/156 & 0 \\
0 & 1/156 & 5/12 & 17/52 \\
0 & 0 & 17/52 & 11/26 \\
\end{pmatrix}\]
Decomposing both of these we obtain
\[V=\begin{pmatrix}
0 & 0 & \frac{1}{\sqrt{6}} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{5}{2 \sqrt{39}} & 0 & 0 \\
0 & 0 & \frac{1}{2 \sqrt{6}} & 0 & \frac{1}{2 \sqrt{2}} & 0 \\
0 & 0 & 0 & \frac{2 \sqrt{\frac{3}{13}}}{5} & 0 & \frac{\sqrt{\frac{177}{13}}}{10} \\
\end{pmatrix},\quad
W=\begin{pmatrix}
0 & 0 & \frac{\sqrt{\frac{5}{3}}}{2} & 0 & 0 & 0 \\
0 & 0 & \frac{2}{\sqrt{15}} & \sqrt{\frac{61}{390}} & 0 & 0 \\
0 & 0 & 0 & \frac{\sqrt{\frac{5}{4758}}}{2} & \frac{5 \sqrt{\frac{317}{4758}}}{2} & 0 \\
0 & 0 & 0 & 0 & \frac{17 \sqrt{\frac{183}{8242}}}{5} & \frac{2 \sqrt{\frac{4286}{4121}}}{5} \\
\end{pmatrix}\]
where $\tilde{A}=VV^T$ and $\hat{A}=WW^T$. One can check that $A=\begin{pmatrix}V & W\end{pmatrix}\begin{pmatrix}V & W\end{pmatrix}^T$, where $\begin{pmatrix}V & W\end{pmatrix}$ is taken to be the following matrix (after deleting the unnecessary all-zero columns):
\begin{equation}
\begin{pmatrix}V & W\end{pmatrix}=\begin{pmatrix}
\frac{1}{\sqrt{6}} & 0 & 0 & 0 & \frac{\sqrt{\frac{5}{3}}}{2} & 0 & 0 & 0 \\
0 & \frac{5}{2 \sqrt{39}} & 0 & 0 & \frac{2}{\sqrt{15}} & \sqrt{\frac{61}{390}} & 0 & 0 \\
\frac{1}{2 \sqrt{6}} & 0 & \frac{1}{2 \sqrt{2}} & 0&0 & \frac{\sqrt{\frac{5}{4758}}}{2} & \frac{5 \sqrt{\frac{317}{4758}}}{2} & 0 \\
0 & \frac{2 \sqrt{\frac{3}{13}}}{5} & 0 & \frac{\sqrt{\frac{177}{13}}}{10} & 0 & 0 & \frac{17 \sqrt{\frac{183}{8242}}}{5} & \frac{2 \sqrt{\frac{4286}{4121}}}{5} \\
\end{pmatrix}
\end{equation}
This shows that $A$ is completely positive.
\end{example}
\section*{Acknowledgements}
S.P.~is supported by NSERC Discovery Grant number 1174582, the Canada Foundation for Innovation (CFI) grant number 35711, and the Canada Research Chairs (CRC) Program grant number 231250. S.P.~would like to thank Rajesh Pereira for helpful discussions.
| {
"timestamp": "2021-03-11T02:24:35",
"yymm": "2009",
"arxiv_id": "2009.05100",
"language": "en",
"url": "https://arxiv.org/abs/2009.05100",
"abstract": "We provide a decomposition that is sufficient in showing when a symmetric tridiagonal matrix $A$ is completely positive. Our decomposition can be applied to a wide range of matrices. We give alternate proofs for a number of related results found in the literature in a simple, straightforward manner. We show that the cp-rank of any irreducible tridiagonal doubly stochastic matrix is equal to its rank. We then consider symmetric pentadiagonal matrices, proving some analogous results, and providing two different decompositions sufficient for complete positivity. We illustrate our constructions with a number of examples.",
"subjects": "Combinatorics (math.CO)",
"title": "The Complete Positivity of Symmetric Tridiagonal and Pentadiagonal Matrices"
} |
https://arxiv.org/abs/1911.08213 | Cohomology of contact loci | We construct a spectral sequence converging to the cohomology with compact support of the m-th contact locus of a complex polynomial. The first page is explicitly described in terms of a log resolution and coincides with the first page of McLean's spectral sequence converging to the Floer cohomology of the m-th iterate of the monodromy, when the polynomial has an isolated singularity. Inspired by this connection, we conjecture that if two germs of holomorphic functions are embedded topologically equivalent, then the Milnor fibers of their tangent cones are homotopy equivalent. | \section{Introduction and main results}
The motivation for this note comes from a question of Seidel, and a subsequent question by McLean. In \cite{DL} Denef and Loeser proved that the Euler characteristic of the contact loci of a complex polynomial $f$ coincides with the Lefschetz number of the $m$-th iterate of the monodromy of $f$. It was observed by Seidel that this in turn coincides with the Euler characteristic of the Floer cohomology of the same monodromy iterate, in the case when $f$ has an isolated singularity, and motivated him to ask what is the relation between the cohomology of the $m$-th contact locus and the Floer cohomology of the $m$-th iterate of the monodromy. McLean \cite{Mclean} constructed a spectral sequence converging to the Floer cohomology of the $m$-th iterate of the monodromy of $f$, whose first page is completely described in terms of a log resolution of $f$, and asked if there is a similar spectral sequence converging to the cohomology of the $m$-th contact locus of $f$. Our main result is an affirmative answer to this question, if one takes compactly supported cohomology for the contact locus.
It is natural to conjecture that the Floer cohomology of the $m$-th iterate of the monodromy of $f$ is isomorphic to the compactly supported cohomology of the $m$-th contact locus of $f$, and that this isomorphism comes from an isomorphism of McLean spectral sequence with ours. Besides giving very different interpretations of the same object, if true, our conjecture would endow Floer cohomology of the $m$-th iterate of the monodromy with a mixed Hodge structure, and would show that McLean's spectral sequence satisfies the same non-trivial degeneration properties as our spectral sequence.
The conjecture is true if $m$ is the multiplicity of an isolated hypersurface singularity. Inspired by this observation, we conjecture also that if two germs of holomorphic functions are embedded topologically equivalent, then the Milnor fibers of their tangent cones are homotopy equivalent.
To state the main result, let $X$ be a smooth complex algebraic variety of dimension $d$. For each $m\ge 0$, the $m$-th jet scheme $\mathscr{L}_m(X)$ of $X$ is the variety parametrizing morphisms $$\gamma:\mathrm{Spec}\, \mathbb C[t]/(t^{m+1} ) \to X$$ of schemes over $\mathbb{C}$. We denote by $\gamma(0)$ the center of a jet $\gamma$, that is, the image in $X$ of the closed point of $\mathrm{Spec}\, \mathbb{C}[t]/(t^{m+1})$.
Let $$f\colon X\to \mathbb{C}$$ be a non-invertible regular function, that is, not a unit in the ring of regular functions, or equivalently, a regular function with non-empty zero locus. For a jet $\gamma\in\mathscr{L}_m(X)$ we denote by $f(\gamma)$ the truncated power series given by the image of $s$ under the morphism of $\mathbb{C}$-algebras
$$\mathbb{C}[s]\xrightarrow{\gamma^\#\circ f^\#}\mathbb{C}[t]/(t^{m+1})$$
corresponding to the composition $f\circ \gamma$.
Fix a non-empty Zariski closed subset $\Sigma$ in $X_0=f^{-1}(0)$. The {\it $m$-th (restricted) contact locus of $f$} is defined to be
$$
\mathscr X_{m}(f,\Sigma):=\{\gamma\in \mathscr{L}_m(X) \mid { \gamma(0)\in \Sigma \text{ and }} f(\gamma)\equiv t^{m}\pmod{t^{m+1}}\}.
$$
For this to be non-empty, we assume that $m>0$.
Let $h:Y\to X$ be a log resolution of $(f,\Sigma)$, that is, a proper morphism from a smooth variety $Y$ such that $E=h^{-1}(X_0)$ and $h^{-1}(\Sigma)$ are divisors with simple normal crossings and the restriction $h \colon Y \setminus E \to X\setminus X_0$ is an isomorphism. By Hironaka's theorem, we can and will assume that $h$ is a composition of blow-ups along smooth centers. We denote by $E_i$, with $i$ in $S$, the irreducible components of $E$. We define $$m_i=\mathrm{ord}_fE_i\quad\text{ and }\quad\nu_i=\mathrm{ord}_{K_{Y/X}}E_i +1,$$
where $K_{Y /X}$ is the relative canonical divisor defined by the vanishing of $\det dh$.
We assume that $h$ is {\em $m$-separating}, that is, $m_i+m_j>m$ whenever $E_i\cap E_j\neq \emptyset$ for distinct $i,j\in S$. By Lemma \ref{lemMsep} below, such a log resolution always exists.
We set
$$A:=\{i\in S\mid h(E_i)\subset\Sigma\},$$
$$S_m:=\{i\in A \mid m_i \text{ divides } m\},$$ and $$k_i:=m/m_i$$ for each $i\in S_m$ if $S_m$ is non-empty. Fix a tuple of integers $w=(w_i)_{i\in S}$ with $w_i\ge 0$ such that the divisor $$W=-\sum_{i\in S} w_i E_i$$ is relatively very ample for $h$. We can, and will, assume that $w_i=0$ if $E_i$ is not an exceptional divisor. For an integer $p$, we let
$$S_{m,p}:=\{i\in S_m \mid w_ik_i= -p \}$$
if $S_m$ is non-empty, otherwise we set $S_{m,p}$ to be the empty set as well.
Let $E_i^{\circ}=E_i\setminus \cup_{j\ne i}E_j$. Then there exists an unramified cyclic cover $\tilde{E_i^{\circ}}\rightarrow E_i^\circ$ of degree $m_i$, given locally in a neighborhood $U$ in $Y$ of a point in $E_i^{\circ}$ by
$$
\{(z,P)\in \mathbb{C}\times (E_i^{\circ}\cap U) \mid z^{m_i}=u(P)^{-1}\},
$$
where $f\circ h=u\cdot y_i^{m_i}$ with $y_i$ a local equation for $E_i$ and $u$ an invertible regular function on $U$.
Our main result is the following:
\begin{theorem}\label{main}
Let $f:X\to\mathbb{C}$ be a non-invertible regular function on a smooth variety $X$ of dimension $d$. Let $\Sigma$ be a non-empty closed subset of $f^{-1}(0)$, $m> 0$ an integer, and $h:Y\to X$ an $m$-separating log resolution of $(f,\Sigma)$.
Then there is a cohomological spectral sequence
$$E_1^{p,q}=\bigoplus_{i\in S_{m,p}} H_{2(d(m+1)-k_i\nu_i-1)-(p+q)} (\tilde{E^{\circ}_i},\mathbb{Z})\quad\Rightarrow\quad H^{p+q}_c\left(\mathscr X_{m}(f,\Sigma),\mathbb{Z}\right)$$
converging to the cohomology with compact support of the $m$-th contact locus of $f$.
\end{theorem}
\begin{proposition}
\label{degen} After tensoring with $\mathbb{Q}$, only the first $d$ pages of the spectral sequence $\{E_r, d_r\}_{r\ge 1}$ can contain non-zero differentials.
\end{proposition}
\begin{example}
(i) If $f=x^r\in\mathbb{C}[x]$ for some integer $r>0$, and $\Sigma$ is the origin in $X=\mathbb{C}$, then the identity map is the only log resolution for $(f,\Sigma)$ and it is $m$-separating for any $m>0$. Then
$$
E_1^{p,q}\simeq\left\{
\begin{array}{cl}
0& \text{ if }r\nmid m\text{ or }(p,q)\neq (0,2(m-\frac{m}{r})),\\
\mathbb{Z}^r& \text{ if }r\mid m\text{ and }(p,q)=(0, 2(m-\frac{m}{r})).
\end{array}
\right.
$$
Thus the theorem implies that
$$
H_c^{*}\left(\mathscr X_{m}(f,\Sigma),\mathbb{Z}\right)\simeq\left\{
\begin{array}{cl}
0& \text{ if }r\nmid m\text{ or }*\neq 2(m-\frac{m}{r}),\\
\mathbb{Z}^r& \text{ if }r\mid m\text{ and } *=2(m-\frac{m}{r}).
\end{array}
\right.
$$
This is compatible with the isomorphism
$$
\mathscr X_{m}(f,\Sigma)\simeq\left\{
\begin{array}{cl}
\emptyset & \text{ if }r\nmid m,\\
\mu_r\times\mathbb{C}^{m-\frac{m}{r}} & \text{ if }r\mid m,
\end{array}
\right.
$$
where $\mu_r$ is the group of $r$-th roots of unity; this isomorphism is easily checked directly.
(ii) In the case that $f$ is a product of linear polynomials on $X=\mathbb{C}^d$, it is shown in \cite{BT} that the spectral sequence degenerates at $E_1$.
\end{example}
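As a sanity check, the $E_1$-page in part (i) of the example above can be traced through the definitions. Here $h$ is the identity, $E=rE_0$ with $E_0=\{0\}$, so $m_0=r$, $\nu_0=1$, and $w_0=0$ since $E_0$ is not exceptional. Hence $S_m=\{0\}$ precisely when $r\mid m$, with $k_0=m/r$, and $S_{m,p}=\emptyset$ unless $p=-w_0k_0=0$. The cover $\tilde{E}^{\circ}_0=\{z\in\mathbb{C}\mid z^r=1\}=\mu_r$ has $H_*(\mu_r,\mathbb{Z})=\mathbb{Z}^r$ concentrated in degree $0$, and
$$2(d(m+1)-k_0\nu_0-1)-(p+q)=2\Big(m-\frac{m}{r}\Big)-q,$$
which vanishes exactly for $q=2\big(m-\frac{m}{r}\big)$, recovering the displayed $E_1$-page.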
\begin{rem} If $X=\mathbb{C}^d$ and, additionally, $f$ has an isolated singularity at $x$ with $\Sigma=\{x\}$, McLean \cite[Theorem 1.2]{Mclean} showed that there exists a spectral sequence
\begin{equation}\label{eqMcL}
'\!E^{p,q}_1=\bigoplus_{i\in S_{m,p}} H_{d-1-2k_i\nu_i-(p+q)} (\tilde{E^{\circ}_i},\mathbb{Z})\quad \Rightarrow \quad HF^*(\phi^m,+)
\end{equation}
converging to the Floer cohomology of the $m$-th iterate of the monodromy $\phi$ on the Milnor fiber of $f$. We note that $E_1$ in this case differs from $'\!E_1$ by a $(2dm+d-1)$-shift in the total degree $p+q$, hence up to relabelling, the two pages are the same.
\end{rem}
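The size of the shift in the remark above is a direct computation: for every $i\in S_{m,p}$,
$$\big(2(d(m+1)-k_i\nu_i-1)\big)-\big(d-1-2k_i\nu_i\big)=2dm+d-1,$$
which is independent of $i$, so the homological degrees indexing $E_1$ and $'\!E_1$ differ by the constant $2dm+d-1$ in the total degree $p+q$.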
\begin{conjecture} If $X=\mathbb{C}^d$ and $f$ has an isolated singularity $\Sigma=\{x\}$, the two spectral sequences $\{E_r, d_r\}_{r\ge 1}$ and $\{'\!E_r, '\!d_r\}_{r\ge 1}$ are isomorphic, and
$$
HF^{*}(\phi^m,+) \simeq H^{*+2dm+d-1}_c\left(\mathscr X_{m}(f,\Sigma),\mathbb{Z}\right).
$$
\end{conjecture}
The conjecture is true if $m$ is the multiplicity of $f$ at the singularity. More generally:
\begin{prop}\label{propMult}
Let $$f=f_m+f_{m+1}+\ldots$$ be a polynomial in $d$ variables vanishing at the origin, where $f_i$ are the homogeneous components of degree $i$, and $m>0$ is the multiplicity of $f$ at the origin. Let $\Sigma=\{0\}$. Then,
$$
H_c^*(\mathscr X _m(f,\Sigma),\mathbb{Z})\simeq H_{2(dm-1)-*}(F,\mathbb{Z}),
$$
where $F\simeq \{f_m=1\}$ is the Milnor fiber at the origin of the initial form $f_m$ of $f$. If in addition $f$ has an isolated singularity at the origin, then also
$$HF^{*-2dm-d+1}(\phi^m,+)\simeq H_{2(dm-1)-*}(F,\mathbb{Z}).$$
\end{prop}
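For $d=1$ and $f=x^r$, Proposition \ref{propMult} with $m=r$ is consistent with part (i) of the example above: here $F=\{x^r=1\}\simeq\mu_r$, so
$$H_c^*(\mathscr X_r(f,\Sigma),\mathbb{Z})\simeq H_{2(r-1)-*}(\mu_r,\mathbb{Z}),$$
which is $\mathbb{Z}^r$ for $*=2(r-1)=2\big(r-\frac{r}{r}\big)$ and $0$ otherwise, as computed there.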
Recall that Zariski's Problem A, the multiplicity conjecture, states that the multiplicity is an embedded topological invariant of a hypersurface singularity. On the other hand, Zariski's Problem B, see \cite{Za}, has counterexamples: there exist embedded topologically equivalent hypersurface singularities (even in families) such that the topology of the tangent cones changes drastically, see \cite{J1,J2}. However, inspired by Proposition \ref{propMult}, we observed that in all these examples the homology of the Milnor fiber of the tangent cone stays the same. Therefore we dare to conjecture that the same is true in general:
\begin{conjecture}
Let $f, g:(\mathbb{C}^d,0)\rightarrow (\mathbb{C},0)$ be two germs of holomorphic functions. If $f$ and $g$ are embedded topologically equivalent, then the Milnor fibers of their initial forms are homotopy equivalent.
\end{conjecture}
The conjecture holds if $f, g$, and their initial forms have isolated singularities, by \cite[Corollary 2.4]{LR}.
\section{Proof of the main results}
We denote by $\mathscr{L}(X)$ the space of arcs on $X$. Recall that an arc on $X$ is a morphism $\gamma:\mathrm{Spec}\, \bC\llbracket t \rrbracket\to X$ of $\mathbb{C}$-schemes. We denote by
$$
\pi_m:\mathscr{L}(X)\to \mathscr{L}_m(X)\quad\text{ and }\quad \pi_{m}^l:\mathscr{L}_l(X)\to \mathscr{L}_m(X)
$$
the truncation morphisms, for $0\le m\le l$. We let from now on $$\mathscr{X}_m:=\mathscr{X}_m(f,\Sigma),$$ and
$$
\mathscr{X}_m^\infty :=\pi_m^{-1}(\mathscr{X}_m )\quad\text{ and }\quad \mathscr{X}_m^l :=(\pi_m^l)^{-1}(\mathscr{X}_m ).
$$
For an arc $\gamma$ on $X$, let $\gamma(0):=\pi_0(\gamma)$ denote the center of $\gamma$, that is, the image of the closed point of $\mathrm{Spec}\, \bC\llbracket t \rrbracket$ under $\gamma$, and by $f(\gamma)(t)$ we denote the power series associated to the composition $f\circ \gamma$.
If $\gamma\in \mathscr X_{m}^\infty $, then clearly $f(\gamma)(t)\ne 0$, that is, the generic point of the arc $\gamma$ lies in $X\setminus X_0$. Thus, by the valuative criterion of properness applied to the map $h:Y\rightarrow X$, there exists a unique lifting of $\gamma$ to an arc $\tilde{\gamma}$ on $Y$. We define
$$\mathscr X_{m,i}^\infty :=\{\gamma\in \mathscr X_{m}^\infty \mid \tilde{\gamma}(0)\in E_i^{\circ}\}.$$
\begin{lemma}\label{lemD}
There is a decomposition into mutually disjoint subsets
$$\mathscr X_{m}^\infty =\bigsqcup_{i\in S_m} \mathscr X_{m,i}^\infty .$$
Moreover, each $ \mathscr X_{m,i}^\infty $ with $i$ in $S_m$ is a constructible cylinder in $\mathscr{L}(X)$, that is, the inverse image under $\pi_m$ of a constructible subset of $\mathscr{L}_m(X)$.
\end{lemma}
\begin{proof} If $\gamma$ is in $\mathscr X_{m}^\infty $, then $\tilde\gamma(0)$ must lie in some $E_i$ with $i\in A$. If $\tilde\gamma(0)$ also lies in $E_j$ for some $j\in S$ with $i\ne j$, then $\tilde\gamma(0)\in E_i\cap E_j$ and so $E_i\cap E_j \ne \emptyset$. Since the resolution is $m$-separating, $m_i+m_j>m$. Hence $\gamma$ has contact order $\ge m_i+m_j>m$ with $f$, which is a contradiction. Thus $\tilde\gamma(0)\in E_i^\circ$. Moreover, if $\tilde\gamma$ has contact order $k$ with $E_i$, then $\gamma$ has contact order $km_i=m$ with $f$, so $m_i$ divides $m$ and $i\in S_m$. The disjoint decomposition follows.
Alternatively, the decomposition follows from \cite[Theorem A]{ELM}, by noting that
$$\mathscr X_{m,i}^\infty =h_\infty (\text{Cont}^{\mu}(E))\cap \mathscr X_{m}^\infty $$
in the notation of {\it loc. cit.}, see Section \ref{secApp} below, where
$$
h_\infty:\mathscr{L}(Y)\rightarrow \mathscr{L}(X)
$$
is the induced map on arc spaces, $\mu=(\mu_j)_{j\in S}$ with $\mu_j=0$ if $j\ne i$ and $\mu_i=k_i$,
and
$$
\text{Cont}^{\mu}(E) =\{\lambda\in \mathscr{L}(Y)\mid \text{ord}_\lambda E_j = \mu_j\text{ for all }j\in S\}.
$$
Moreover $h_\infty (\text{Cont}^{\mu}(E))$ is a constructible cylinder in $\mathscr{L}(X)$ by {\it loc. cit.}, and hence $\mathscr X_{m,i}^\infty $ is also a constructible cylinder.
\end{proof}
\begin{rem}\label{remEL} We can say more precisely that each $ \mathscr X_{m,i}^\infty $ with $i$ in $S_m$ is the pull-back of a constructible subset of $\mathscr{L}_l(X)$ for any $l\ge \max\{2k_i(\nu_i-1), k_i(\nu_i-1)+m\}$. This follows from the proof of \cite[Corollary 1.7]{ELM}.
\end{rem}
Let now $l$ be a positive integer so that each $ \mathscr X_{m,i}^\infty $ with $i$ in $S_m$ is the pull-back of a constructible subset of $\mathscr{L}_l(X)$. We let $$\mathscr X_{m,i}^l :=\pi_l(\mathscr X_{m,i}^\infty ).$$ Then $\mathscr X_{m,i}^l $ is a constructible subset of $\mathscr{L}_l(X)$ such that
$$
\mathscr X_{m,i}^\infty = \pi_l^{-1}(\mathscr X_{m,i}^l )
$$
and
$$\mathscr X_{m}^l =\bigsqcup_{i\in S_m} \mathscr X_{m,i}^l .$$
We construct a filtration $F_p\mathscr X_m^\infty $ of $\mathscr X_m^\infty $. For an integer $p$, we let
$$F_{p}\mathscr X_m^\infty := \bigsqcup_{i\in S_m,\, w_ik_i\geq -p} \mathscr X_{m,i}^\infty ,$$
$$\ F_{(p)}\mathscr X_m^\infty := F_{p}\mathscr X_{m}^\infty\setminus F_{p-1}\mathscr X_{m}^\infty=\bigsqcup_{i\in S_{m,p}}\mathscr X_{m,i}^\infty .$$
We define similarly a filtration $F_p\mathscr X_m^l $ of $\mathscr X_m^l $ by replacing $\infty$ with $l$.
\begin{lemma}\label{lm21} Given $m> 0$, for all $l\gg 0$ we have: the set $F_p\mathscr X_m^l $ is Zariski closed in $\mathscr X_m^l $ for every integer $p$.
\end{lemma}
\begin{proof}
For $l\gg 0$ we have that $F_p\mathscr X_m^\infty $ is the pullback of $F_p\mathscr X_m^l$ by the truncation map $\pi_l:\mathscr{L}(X)\to \mathscr{L}_l(X)$.
Since $X$ is smooth, it admits a finite cover by affine Zariski open subsets, each of which is an \'etale open subset of $\mathbb{C}^d$. Since the assertion is local on $X$ in the Zariski topology, we can assume that $X$ itself is an \'etale open subset of $\mathbb{C}^d$. Then the projection $\pi_l$ is a trivial fibration. Hence, it is enough to prove that $F_{p}\mathscr X_m^\infty$ is closed in $\mathscr X_{m}^\infty $.
Since $X$, $Y$ are smooth and $h$ is proper birational, we have $h_*\mathcal{O}_Y\simeq\mathcal{O}_X$. Since $X$ is affine there is an isomorphism of rings of global regular functions
$$
h^\#:\Gamma(X,\mathcal{O}_X) \xrightarrow{\sim}\Gamma(Y,\mathcal{O}_Y),\quad \phi\mapsto \phi\circ h.
$$
Since $-W=\sum_{i\in S}w_iE_i$ is an effective divisor, $\mathcal{O}_Y(W)$ is a sheaf of ideals of $\mathcal{O}_Y$. One has thus an ideal of regular functions on $X$
$$
\mathcal{I} :=(h^\#)^{-1}(\Gamma(Y,\mathcal{O}_Y(W))).
$$
If $\gamma$ is an arc on $X$ not completely lying in $X_0$, then its lift $\tilde{\gamma}$ to an arc of $Y$ satisfies
$$\text{ord}_\gamma \mathcal{I} =\text{ord}_{\tilde{\gamma}}(-W),$$
since $\mathcal{O}_Y(W)$ is generated by its global sections. Then the set $F_{p}\mathscr X_m^\infty $ can be expressed as
$$F_{p}\mathscr X_m^\infty =\mathscr X_{m}^\infty\cap \{\gamma\in \mathscr{L}(X) \mid \mathrm{ord}_{\gamma} (\mathcal{I}) \geq -p\},$$
which is closed in $\mathscr X_{m}^\infty $.
\end{proof}
\begin{rem}\label{remBBB}
One can determine a precise lower bound for $l$ in terms of $m$, $m_i$, and $w_i$, similarly to Remark \ref{remEL}. From now on we fix $l$ as in Lemma \ref{lm21}.
\end{rem}
\begin{lemma}\label{lm22}
For any integer $p$ and any $i\in S_{m,p}$, the set $\mathscr X_{m,i}^l $ is Zariski closed in $F_{(p)}\mathscr X_{m}^l $.
\end{lemma}
\begin{proof}
Suppose by contradiction that $\mathscr X_{m,i}^l $ is not closed in $F_{(p)}\mathscr X_{m}^l $ for some $i\in S_{m,p}$. Then there exist $j\in S_{m,p}$ different from $i$, and $\gamma\in \mathscr X_{m,j}^l $ such that $\gamma$ is in the closure $\overline{\mathscr X_{m,i}^l }$ of $\mathscr{X}_{m,i}^l $ in $F_{(p)}\mathscr X_{m}^l $. Since $\mathscr X_{m,i}^l $ is a constructible set, the usual curve selection lemma applies, see \cite{Mi}. That is, there exists a complex analytic curve germ
$$\alpha\colon (\mathbb{C},0)\to (\overline{\mathscr X_{m,i}^l },\gamma),\quad s\mapsto \alpha(s)$$
such that $\alpha(0)=\gamma$, and $\alpha(s)\in \mathscr X_{m,i}^l $ for all $s\neq 0$ close to $0$.
Fix a local section of the truncation morphism $\pi_l:\mathscr{L}(X)\rightarrow\mathscr{L}_l(X)$ in a neighborhood of $\gamma$. Via this section, we view now $\gamma$ as an arc on $X$, and thus $\alpha$ defines a complex analytic surface germ, i.e. a wedge,
$$\alpha\colon (\mathbb{C}^2,(0,0))\to (X,\gamma(0)),\quad (t,s)\mapsto \alpha(t,s)$$
such that
\begin{itemize}
\item[(a)] $\alpha_0(t):=\alpha(t,0)=\gamma$,
\item[(b)] $\alpha_s(t):=\alpha(t,s)$ is an arc lifting to $Y$ with center on $E^{\circ}_i$ for all $s\neq 0$.
\end{itemize}
Consider the following diagram
\begin{displaymath}
\xymatrix{
&Y\ar[d]^{h}\ar@{<--}[dl]_{\beta} \\
\mathbb{C}^2\ar[r]_{\alpha}&X}
\end{displaymath}
which defines the meromorphic map $\beta=h^{-1}\circ \alpha$. The map $\beta$ cannot be holomorphic: if it were, then $\beta(t,s)$ would equal $\tilde{\alpha}_s(t)$ for all $s,t$, where $\tilde{\alpha}_s$ is the unique lifting of $\alpha_s$ to $Y$. The latter is, however, not even continuous in $s$: the lifting $\tilde{\alpha}_0$ of $\alpha_0$ has center $\tilde{\alpha}_0(0)\in E^{\circ}_j$, while the lifting $\tilde{\alpha}_s$ of $\alpha_s$ has center $\tilde{\alpha}_s(0)\in E^{\circ}_i$ for all $s\neq 0$. Hence the map $\beta$ has a non-trivial locus of indeterminacy, which, by a theorem of Remmert \cite[p.~333]{Re}, is a complex analytic subspace of codimension $\ge 2$, since $\mathbb{C}^2$ is normal. By Hironaka's theorem, the locus of indeterminacy of $\beta$ can be resolved by a sequence of blow-ups:
\begin{displaymath}\label{diag1}
\xymatrix{
Z\ar[r]^{\bar{\beta}}\ar[d]_{\sigma}&Y\ar[d]^{h}\ar@{<--}[dl]_{\beta} \\
\mathbb{C}^2\ar[r]_{\alpha}&X.
}
\end{displaymath}
Here $Z$ can be defined as a log resolution $Z\rightarrow \mathbb{C}^2\times_XY$ of the locus where the natural holomorphic map $\mathbb{C}^2\times_XY\rightarrow \mathbb{C}^2$ fails to be biholomorphic.
Let $F=\sigma^{-1}(0)=\cup_{j\in J} F_j$ be the exceptional divisor of $\sigma$. Let $L_{s}$ be the line $\{(t,s)\mid t\in \mathbb{C}\}$ in $\mathbb{C}^2$. Let $$\sigma^{*}L_0=\sum_{j\in J} b_jF_j+\tilde{L}_0$$ be the total transform of $L_0$ under $\sigma$, where $\tilde{L}_0$ is the strict transform of $L_0$. For $s\ne 0$, $L_s$ does not meet the origin, and so $\sigma^{*}L_s=\tilde{L}_s$. The composition $$Z\to \mathbb{C}^2\overset{\mathrm{pr}_2}{\to}\mathbb{C}$$ gives by definition a rational equivalence between the cycles determined by its fibers. Hence the cycles $\sigma^{*}L_s$ and $\sigma^{*}L_0$ are rationally equivalent for any $s$. As a consequence we have an equality of intersection products
$$[\sigma^{*}L_s]\cdot \bar{\beta}^{*}W=[\sigma^{*}L_0]\cdot \bar{\beta}^{*}W.$$
Note that $\bar{\beta}^{*}W$ is supported on the exceptional divisor $F$, since $W$ is supported on the exceptional divisor $E$. Hence $\bar{\beta}^{*}W$ is a compact cycle, and thus the intersection product is well-defined. We have the following equalities:
$$[\sigma^{*}L_s]\cdot \bar{\beta}^{*}W=[\tilde{L}_s]\cdot \bar{\beta}^{*}W=\bar{\beta}_{*}\tilde{L}_s\cdot [W]=\mathrm{ord}_{\tilde{\alpha}_s}(W)=p,$$
and
\begin{align*}
[\sigma^{*}L_0]\cdot \bar{\beta}^{*}W&=[\tilde{L}_0]\cdot \bar{\beta}^{*}W+\sum b_j[F_j]\cdot \bar{\beta}^{*}W\\
&=\bar{\beta}_{*}[\tilde{L}_0]\cdot W+(\sum b_j \bar{\beta}_{*}F_j)\cdot W\\
&=\mathrm{ord}_{\tilde{\gamma}}(W)+(\sum b_j \bar{\beta}_{*}F_j)\cdot W\\
&=p+(\sum b_j \bar{\beta}_{*}F_j)\cdot W,
\end{align*}
where $\tilde{\gamma}=\tilde{\alpha}_0$.
Hence we obtain that $$(\sum b_j \bar{\beta}_{*}F_j)\cdot W=0.$$
Since the $b_j$ are non-negative, by Kleiman's ampleness criterion we obtain $b_j=0$ for all $j$ such that $\bar{\beta}(F_j)$ is not collapsed to a point. But this means that the map $\beta$ has no indeterminacy, which is a contradiction.
\end{proof}
For what follows, we fix a possibly larger value of $l$ than before, one such that Lemma \ref{lemLoo1} applies.
\begin{lemma}\label{lm23}
For every $i\in S_m$, $\mathscr X^l_{m,i} $ is a smooth complex variety of dimension $d(l+1)-k_i\nu_i-1$, and it has the same homotopy type as $\tilde{E}_i^{\circ}$.
\end{lemma}
\begin{proof}
We consider the induced morphisms
$$
h_\infty:\mathscr{L}(Y)\rightarrow \mathscr{L}(X)\quad\text{and}\quad h_l\colon \mathscr{L}_l(Y)\to \mathscr{L}_l(X)$$ and denote
$$\mathscr Y^\infty_{m,i}:=h_\infty^{-1}\left( \mathscr X^\infty_{m,i} \right)
\quad\text{and}\quad
\mathscr Y^l_{m,i}:=h_l^{-1}\left( \mathscr X^l_{m,i} \right).$$
Note that
$$\mathscr Y^\infty_{m,i}=\{{\gamma}\in \mathscr{L}(Y)\mid (f\circ h)({\gamma})= t^{m} + \text{(higher order terms)} \in\bC\llbracket t \rrbracket \text{ and }\gamma(0)\in E_i^\circ\}.$$
The morphism
$$\pi_0:\mathscr Y^\infty_{m,i} \rightarrow {E}_i^\circ,\quad \gamma\mapsto \gamma(0)$$
factorizes through the cyclic cover $\tilde{E}_i^\circ\rightarrow E_i^\circ$ and a morphism
$$
\tilde{\pi}_0:\mathscr Y^\infty_{m,i} \rightarrow \tilde{E}_i^\circ
$$
which we define as follows. Let $U$ be any open neighborhood in $Y$ of a point in $E^{\circ}_i$ such that $f\circ h=u\cdot y_i^{m_i}$ in $U$, where $y_i$ is a local equation for $E_i$ and $u$ is an invertible regular function on $U$. Then the restriction of $\tilde{\pi}_0$ to the open subset $(\pi_0)^{-1}(U)\cap \mathscr Y^\infty_{m,i}$ is given by
$$\tilde{\pi}_0({\gamma}):=\left(\mathrm{ac}(y_i({\gamma})),{\gamma}(0)\right),$$
where $\mathrm{ac}(y_i({\gamma}))$ denotes the coefficient of the lowest order power of $t$ in the power series $y_i({\gamma})$. To check that the image of $\tilde{\pi}_0$ lies indeed in $\tilde{E}_i^\circ$, note that the power series $(f\circ h)(\gamma)$ is
$$
t^m + \text{(higher order terms)} = u(\gamma)\cdot y_i(\gamma)^{m_i}.
$$
Thus
$$
(\text{ac}(y_i(\gamma)))^{m_i} = (\text{ac}(u(\gamma)))^{-1} = u(\gamma(0))^{-1}.
$$
Define
$$
\tilde{\pi}^l_0:\mathscr Y^l_{m,i}\rightarrow \tilde{E}_i^\circ
$$
similarly to $\tilde{\pi}_0$. Since $\mathscr X^\infty_{m,i} =\pi_l^{-1}(\mathscr X^l_{m,i} )$, one has $\mathscr Y^\infty_{m,i} =\pi_l^{-1}(\mathscr Y^l_{m,i} )$ as well, where we abuse notation and denote also by $\pi_l$ the map $\mathscr{L}(Y)\rightarrow\mathscr{L}_l(Y)$. Since $l$ is sufficiently large, cf. Remark \ref{remBBB}, it follows that $\tilde{\pi}_0$ factorizes through $\tilde{\pi}^l_0$.
We consider now the following diagram
\begin{displaymath}
\xymatrix{
\mathscr Y^l_{m,i}\ar[r]^{h_l\;\;\;}\ar[d]_{\tilde{\pi}_0^l}&\mathscr X^l_{m,i} \\
\tilde E^{\circ}_i&
}
\end{displaymath}
In this diagram, the morphism $h_l$ is a locally trivial fibration with fiber $\mathbb{C}^{(\nu_i-1)k_i}$, by Lemma \ref{lemLoo1} below. The proof of the lemma is then completed by the following observation: the morphism $\tilde{\pi}_0^l$ is a locally trivial fibration with fiber $\mathbb{C}^{dl-k_i}$. To prove this claim, fix an open neighborhood $U$ in $Y$ of a point $P_0$ in $E_i^\circ$ as above. Note that $\tilde{E}_i^\circ\cap (\mathbb{C}\times U)$ is the restriction above $E_i^\circ$ of the \'etale cyclic cover
$$
\tilde{U}=\{(z,P)\in \mathbb{C}\times U\mid z^{m_i}=u(P)^{-1}\} \xrightarrow{p_2} U.
$$
Let $(z_0,P_0)$ be a fixed point in $\tilde{E}_i^\circ\cap\tilde{U}$ above $P_0$, and let $\Omega$ be a small open neighborhood of $(z_0,P_0)$ in $\tilde{U}$. Note that the projection onto the first coordinate defines an invertible regular function on $\tilde{U}$, whose inverse we denote by $\tilde{u}$. The function $\tilde{u}$ plays the role of $u^{1/m_i}$, the latter not necessarily being well-defined on $U$; that is, $\tilde{u}$ satisfies $\tilde{u}^{m_i}=u\circ p_2$. Then $\tilde{y}_i:=\tilde{u}y_i$ is a local equation for $\tilde{E}_i^\circ$ in $\Omega$, and $f\circ h \circ p_2 = \tilde{y}_i^{m_i}$. Since $\tilde{y}_i$ is smooth, it forms part of an \'etale local system of coordinates on $\Omega$. Since the formation of jet schemes is compatible with \'etale morphisms by \cite[Lemma 4.2]{DL99}, it follows that $\tilde{\pi}^l_0$ is trivialized above $\tilde{E}_i^\circ\cap \Omega$ with fiber isomorphic to
$$\{ \gamma \in \mathscr{L}_l(\Omega) \mid \gamma(0)=(z_0,P_0)\text{ and } \mathrm{ac}(y_i({\gamma}))= z_0 \text{ and } \tilde{y}_i^{m_i}(\gamma)\equiv t^{m}\text{ mod } (t^{m+1}) \}\simeq$$
$$\simeq\{ \gamma \in \mathscr{L}_l(\Omega) \mid \gamma(0)=(z_0,P_0)\text{ and } \tilde{y}_i(\gamma)\equiv t^{k_i}\text{ mod } (t^{k_i+1}) \}\simeq$$
$$\simeq \{ \gamma \in \mathscr{L}_l(\mathbb{C}^d) \mid \gamma(0)=0\text{ and } x_1(\gamma)\equiv t^{k_i}\text{ mod } (t^{k_i+1}) \} \simeq\mathbb{C}^{dl-k_i}.$$
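Combining the two fibrations yields the claimed dimension:
$$\dim \mathscr X^l_{m,i}=\underbrace{(d-1)}_{\dim\tilde{E}_i^\circ}+\underbrace{(dl-k_i)}_{\text{fiber of }\tilde{\pi}_0^l}-\underbrace{(\nu_i-1)k_i}_{\text{fiber of }h_l}=d(l+1)-k_i\nu_i-1.$$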
\end{proof}
\begin{lemma}\label{lemLoo1} The morphism $h_l: \mathscr Y^l_{m,i}\rightarrow \mathscr X^l_{m,i}$ is a Zariski locally-trivial fibration with fiber $\mathbb{C}^{(\nu_i-1)k_i}$ for $l\gg 0$.
\end{lemma}
\begin{proof} As in the proof of Lemma \ref{lemD}, the map $\mathscr Y^l_{m,i}\rightarrow \mathscr X^l_{m,i}$ is obtained by base-change from the map
$$
\pi_l(\text{\rm Cont}^\mu(E))\rightarrow h_l\pi_l(\text{\rm Cont}^\mu(E)),
$$
since $\mathscr X^l_{m,i}=h_l\pi_l(\text{\rm Cont}^\mu(E))\cap \mathscr X^l_m$ for $l\gg 0$.
Thus the claim follows immediately from Theorem \ref{lemLoo} in the Appendix.
\end{proof}
\begin{theorem}\label{main2} Let $f:X\to\mathbb{C}$ be a non-invertible regular function on a smooth complex algebraic variety $X$ of dimension $d$, $\Sigma$ a non-empty closed subset of $f^{-1}(0)$, $m> 0$ an integer, and $h:Y\to X$ an $m$-separating log resolution of $(f,\Sigma)$. For all $l\gg 0$, there is a cohomological spectral sequence
$$E_1^{p,q}=\bigoplus_{i\in S_{m,p}} H_{2(d(l+1)-k_i\nu_i-1)-(p+q)} (\tilde{E^{\circ}_i},\mathbb{Z})$$
converging to $H^{p+q}_c\left(\mathscr X_{m}^l(f,\Sigma),\mathbb{Z}\right)$.
\end{theorem}
\begin{proof}
By Lemma \ref{lm21}, there is a finite filtration of $\mathscr X^l_{m} =\mathscr X_{m}^l(f,\Sigma)$ by Zariski closed subsets
$$\mathscr X^l_{m} =F_0\mathscr X^l_{m} \supset F_{-1}\mathscr X^l_{m} \supset\cdots\supset F_{p}\mathscr X^l_{m} \supset F_{p-1}\mathscr X^l_{m} \supset\cdots.$$
This induces a spectral sequence converging to $H^{*}_c(\mathscr X^l_{m} ,\mathbb{Z})$ such that $$E_1^{p,q}=H^{p+q}_c(F_{(p)}\mathscr X^l_{m} ,\mathbb{Z}).$$
It follows from Lemma \ref{lm22} that $F_{(p)}\mathscr X^l_{m} =\bigsqcup_{i\in S_{m,p}}\mathscr X^l_{m,i} $ is a decomposition of $F_{(p)}\mathscr X^l_{m} $ into disjoint Zariski closed, hence also open, subsets. Hence we have $$H^{p+q}_c(F_{(p)}\mathscr X^l_{m} ,\mathbb{Z})\simeq\bigoplus_{i\in S_{m,p}} H^{p+q}_{c} (\mathscr X^l_{m,i} ,\mathbb{Z}).$$
By the smoothness of $\mathscr X^l_{m,i} $ proven in Lemma \ref{lm23} and Poincar\'e duality, this is in turn isomorphic to
$$\bigoplus_{i\in S_{m,p}} H_{2(d(l+1)-k_i\nu_i-1)-(p+q)} (\mathscr X^l_{m,i} ,\mathbb{Z}).$$
Since $\mathscr X^l_{m,i} $ and $\tilde{E}^{\circ}_i$ are homotopy equivalent by Lemma \ref{lm23}, we obtain
$$H^{p+q}_c(F_{(p)}\mathscr X^l_{m} ,\mathbb{Z})\simeq \bigoplus_{i\in S_{m,p}} H_{2(d(l+1)-k_i\nu_i-1)-(p+q)} (\tilde{E}^{\circ}_i,\mathbb{Z}).$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{main}]
Since $X$ is smooth it admits a cover $\{X_i\}_{i\in I}$ by Zariski open subsets such that each $X_i$ is an \'etale open subset of $\mathbb{C}^d$. Then, for each $i\in I$ there is an isomorphism $\mathscr{L}_l(X_i)\simeq \mathscr{L}_m(X_i)\times\mathbb{C}^{d(l-m)}$ such that the truncation morphism $\pi^l_m:\mathscr{L}_l(X_i)\rightarrow\mathscr{L}_m(X_i)$ is the first projection. We conclude that $\pi^l_m:\mathscr{L}_l(X)\rightarrow\mathscr{L}_m(X)$ is a Zariski locally trivial fiber bundle, with fiber $\mathbb{C}^{d(l-m)}$. Since $\mathscr{X}^l_m =(\pi^l_m)^{-1}\mathscr{X}_m $, we have that $\mathscr{X}^l_m$ is a locally trivial fiber bundle over $\mathscr{X}_m $ with fiber $\mathbb{C}^{d(l-m)}$. The trivializing open subsets are $\mathscr{X}_m\cap \mathscr{L}_m(X_i)$ for $i\in I$.
Let $c_l$ and $c_m$ be the constant maps sending $\mathscr{X}^l_m$ and $\mathscr{X}_m $ to a point respectively. The spectral sequence of the composition of the functors $R(c_l)_{!}=R(c_m)_{!}R(\pi^l_m)_!$ has $$E_2^{p,q}=H^p_c(\mathscr{X}_m,R^q(\pi^l_m)_!\mathbb{Z}_{\mathscr{X}^l_{m}})$$
and converges to
$$
H^{p+q}_c(\mathscr{X}_m^l,\mathbb{Z}).
$$
Observe that $H^q_c(\mathbb{C}^{d(l-m)},\mathbb{Z})\simeq \mathbb{Z}$ if $q=2d(l-m)$, and it is equal to $0$ otherwise. Then $R^q(\pi^l_m)_!\mathbb{Z}_{\mathscr{X}^l_{m}}$ is $0$ if $q\ne 2d(l-m)$. By Poincar\'e-Verdier duality \cite[Theorem 3.3.10]{Di} over the ring of coefficients $\mathbb{Z}$, the rank one local system $R^{2d(l-m)}(\pi^l_m)_!\mathbb{Z}_{\mathscr{X}^l_{m}}$ is the dual of the local system with fibers $H_0(\mathbb{C}^{d(l-m)},\mathbb{Z})= \mathbb{Z}\cdot [\textup{pt}]$ given by $\pi^l_m$ on $\mathscr{X}_m$, where $[\textup{pt}]$ is the $0$-homology class of a point in a fiber of $\pi^l_m$. Since the affine space is path-connected, every loop in $\mathscr{X}_m$ sends $[\textup{pt}]$ to itself, and hence $R^{2d(l-m)}(\pi^l_m)_!\mathbb{Z}_{\mathscr{X}^l_{m}}$ is the constant local system $\mathbb{Z}$ on $\mathscr{X}_m$. Therefore we obtain
$$
H^*_c(\mathscr{X}^l_m ,\mathbb{Z}) \simeq H^{*-2d(l-m)}_c(\mathscr{X}_m ,\mathbb{Z}).
$$
Thus, replacing $q$ with $q+2d(l-m)$ in Theorem \ref{main2}, one obtains the claim.
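Indeed, after this substitution the homological degree in Theorem \ref{main2} becomes
$$2(d(l+1)-k_i\nu_i-1)-\big(p+q+2d(l-m)\big)=2(d(m+1)-k_i\nu_i-1)-(p+q),$$
which is the degree appearing in Theorem \ref{main}.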
\end{proof}
\begin{proof}[Proof of Proposition~\ref{degen}]
The spectral sequence of a filtration by closed algebraic subsets of a complex algebraic variety can be lifted to the category of mixed Hodge structures by \cite[Lemma 3.8]{A}. Then the spectral sequence from Theorem \ref{main2}, after tensoring with $\mathbb{Q}$, is a spectral sequence of $\mathbb{Q}$-mixed Hodge structures once the Tate twists are taken into account. The rational structure is necessary in order to lift the Poincar\'e duality, cf. \cite{Ak}. More precisely, the proof of Lemma~\ref{lm23} shows that there is an isomorphism of $\mathbb{Q}$-mixed Hodge structures
$$H^k_c(\mathscr{X}^l_{m,i} ,\mathbb{Q})\simeq H^{2d_i-k}(\tilde{E}_i^\circ,\mathbb{Q})^\vee\otimes\mathbb{Q}(-d_i)$$
for every integer $k$, where $d_i=\dim \mathscr{X}^l_{m,i} $. Since $\tilde{E}^{\circ}_i$ is a smooth variety of dimension $d-1$, it follows that the weights on the non-zero groups $H^{k}_c(F_{(p)}\mathscr{X}^l_{m} ,\mathbb{Q})$ are contained in
$$[k-d+1,k] \cap [0,k].$$
Since this interval contains at most $d$ integers and the differentials of the spectral sequence are morphisms of mixed Hodge structures, the result follows.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{propMult}]
We prove that for every $q$, there is at most one non-zero term among $E^{p,q}_1$, and respectively among $'\!E^{p,q}_1$.
Let $h_0: Y_0\rightarrow X=\mathbb{C}^d$ be the blowup at the origin. Let $E_0$ be the exceptional divisor. Let $D=f^{-1}(0)$, and let $\tilde{D}$ be the strict transform of $D$. Then
$$
\tilde{D}\cap E_0 \simeq f_m^{-1}(0)\subset \mathbb{P}^{d-1}\simeq E_0.
$$
Moreover, since $f_m$ is a homogeneous polynomial, the closed subset $F=\{f_m=1\}$ of $\mathbb{C}^d$ is the Milnor fiber of $f_m$. The associated map $$F\rightarrow \mathbb{P}^{d-1}\setminus f_m^{-1}(0)$$ to the complement in $\mathbb{P}^{d-1}$ of the zero locus of $f_m$, sending a point to the line connecting it to the origin, displays $F$ as the cyclic cover $\tilde{E}_0^\circ$ of degree $m$ of $E_0^\circ=E_0\setminus (\tilde{D}\cap E_0)$.
Let $h:Y\rightarrow X$ be an $m$-separating log resolution of $(f,\Sigma)$ factoring through $Y_0$, with $\Sigma$ consisting of the origin only. Slightly abusing notation, we denote by $E_0$ also the strict transform of $E_0$. Then the multiplicity of every exceptional divisor $E_i$ in $E=h^{-1}(0)$ is $\ge m$, with equality only if $i=0$. Therefore the index set
$$
S_{m,p}=\left\{
\begin{array}{cc}
\{0\} & \text{ if } p=-w_0,\\
\emptyset & \text{ if } p\ne -w_0.
\end{array}
\right.
$$
One obtains thus for every $q$ that
$$
E_1^{-w_0,q}= H_{2(dm-1)-(-w_0+q)}(\tilde{E}_0^\circ,\mathbb{Z}) \simeq H_c^{-w_0+q}(\mathscr X_m(f,\Sigma),\mathbb{Z})
$$
by Theorem \ref{main},
and
$$
'\!E^{-w_0,q}_1= H_{-1-d-(-w_0+q)}(\tilde{E}_0^\circ,\mathbb{Z}) \simeq HF^{-w_0+q}(\phi^m,+)
$$
by (\ref{eqMcL}). The claim now follows.
\end{proof}
Finally, we provide a proof of \cite[Lemma 2.4]{Mclean}, since the proof in {\it loc. cit.} contains a false claim (namely, the inequality $a_Y-b_Y<a_{Y'}-b_{Y'}$ does not necessarily hold).
\begin{lemma}\label{lemMsep}
Let $X$ be a smooth complex algebraic variety of dimension $d$, $f$ a non-invertible regular function on $X$, $\Sigma$ a non-empty closed subset of $f^{-1}(0)$, and $m>0$ an integer. There exist $m$-separating log resolutions of $(f,\Sigma)$.
\end{lemma}
\begin{proof}
Let $h:Y\rightarrow X$ be a log resolution of $(f,\Sigma)$ and let $E=(f\circ h)^{-1}(0)=\sum_{i\in S}m_iE_i$ be the pull-back of the divisor $f^{-1}(0)$ with $E_i$ the irreducible components of $E$. For $I\subset S$, let $E_I=\cap_{i\in I}E_i$.
Let $\Delta$ be the dual complex of the simple normal crossings divisor $E$, whose vertices are labelled by $S$, and for every irreducible component of an $E_I$ with $I\subset S$ one attaches a $(|I|-1)$-dimensional cell. Each vertex $i\in S$ of $\Delta$ comes with the associated multiplicity $m_i$. For a cell $\sigma$ of $\Delta$, the multiplicity $m_\sigma$ is defined as the sum of the multiplicities of its vertices. Define
$$M(\Delta)=\min\{m_\sigma \mid \sigma\text{ is a 1-cell of }\Delta\}$$
and
$$
\mathcal{M}(\Delta)=\{ \text{ 1-cells } \sigma\text{ of }\Delta\mid m_\sigma=M(\Delta)\}.
$$
Consider the blowup $Y'$ of $Y$ along a non-empty intersection $E_i\cap E_j$ corresponding to a 1-cell $\sigma$ in $\mathcal{M}(\Delta)$. Let $\Delta'$ be the associated dual complex. Then $\Delta'$ is the stellar subdivision of $\Delta$ at $\sigma$, see \cite[\S 9]{K}. The multiplicity of the new vertex in $\Delta'$ is $m_\sigma=m_i+m_j$. The multiplicities of the other vertices remain the same. Therefore $M(\Delta')\ge M(\Delta)$.
If $M(\Delta')=M(\Delta)$, then $\mathcal{M}(\Delta')= \mathcal{M}(\Delta)\setminus\{\sigma\}$. We repeat the blow-up process at most $|\mathcal{M}(\Delta)|-1$ more times. The resulting modification $Y'\rightarrow X$ then satisfies $M(\Delta')>M(\Delta)$.
Repeating this process, we achieve after finitely many steps that $M(\Delta')>m$. \end{proof}
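The combinatorics of this argument can be made concrete on the 1-skeleton of the dual complex. The following Python sketch is a toy model (not part of the proof, and the example graph and multiplicities are hypothetical): vertices carry multiplicities, a stellar subdivision of a 1-cell $(i,j)$ replaces it by a new vertex of multiplicity $m_i+m_j$ joined to both endpoints, and repeatedly subdividing a minimal 1-cell drives $M(\Delta)$ above any prescribed $m$.

```python
# Toy model of the stellar-subdivision process on the 1-skeleton of the
# dual complex: vertices carry multiplicities m_i, edges are the 1-cells,
# and "blowing up" an edge (i, j) replaces it by a new vertex of
# multiplicity m_i + m_j joined to both endpoints.

def subdivide(vertices, edges, edge):
    """Stellar subdivision of `edge`; returns updated (vertices, edges)."""
    i, j = edge
    new = max(vertices) + 1
    vertices = dict(vertices)
    vertices[new] = vertices[i] + vertices[j]
    edges = [e for e in edges if e != edge] + [(i, new), (new, j)]
    return vertices, edges

def M(vertices, edges):
    """Minimal multiplicity m_sigma over the 1-cells."""
    return min(vertices[i] + vertices[j] for i, j in edges)

def make_m_separating(vertices, edges, m):
    """Subdivide minimal edges until every 1-cell has multiplicity > m.
    Each subdivision replaces a minimal edge by two strictly heavier ones,
    so M is non-decreasing and eventually exceeds m."""
    while M(vertices, edges) <= m:
        target = next(e for e in edges
                      if vertices[e[0]] + vertices[e[1]] == M(vertices, edges))
        vertices, edges = subdivide(vertices, edges, target)
    return vertices, edges

# A triangle of exceptional divisors with multiplicities 1, 2, 3.
verts = {0: 1, 1: 2, 2: 3}
edgs = [(0, 1), (1, 2), (0, 2)]
verts, edgs = make_m_separating(verts, edgs, m=10)
assert M(verts, edgs) > 10
```

Termination mirrors the proof: the two edges created by subdividing a minimal edge both have multiplicity strictly larger than $M(\Delta)$, so the finitely many minimal 1-cells are exhausted one at a time and $M$ then strictly increases.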
\section{Appendix}\label{secApp}
The goal of this appendix is to prove Theorem \ref{lemLoo}, a particular case of the local triviality claimed in \cite[Lemma 9.2]{Loo}, whose proof was deemed ``easy'' and not included there.
The notation in this appendix is independent of the notation from the introduction.
For a smooth complex variety $X$, an ideal subsheaf $I$ of $\mathcal{O}_X$ corresponding to a closed subscheme $Z$, and a natural number $m$, we denote the (non-restricted) $m$-contact locus by
$$
\text{\rm Cont}^m(Z)=\text{\rm Cont}^m(I)=\{\gamma\in\mathscr{L}(X)\mid \mathrm{ord}_\gamma(I)=m\}.
$$
For a reduced closed subscheme with the decomposition $D=\cup_i D_i$ into irreducible components, and for a tuple $\mathbf{m}=(m_i)_i$ of natural numbers, the (non-restricted) multi-contact loci are defined as
$$
\text{\rm Cont}^\mathbf{m}(D)=\{\gamma\in\mathscr{L}(X)\mid \mathrm{ord}_\gamma(D_i)=m_i\text{ for all } i\}.
$$
We recall that by $\pi_l:\mathscr{L}(X)\rightarrow\mathscr{L}_l(X)$ we mean the truncation map from arcs to $l$-jets on $X$, and for a morphism $\mu:Y\rightarrow X$, the associated maps on arcs and jets are denoted by $\mu_\infty:\mathscr{L}(Y)\rightarrow\mathscr{L}(X)$ and $\mu_l:\mathscr{L}_l(Y)\rightarrow\mathscr{L}_l(X)$, respectively.
The contact loci are cylinders since $\text{\rm Cont}^m(Z)=\pi_l^{-1}\pi_l (\text{\rm Cont}^m(Z))$ for $l\ge m$, and $\text{\rm Cont}^\mathbf{m}(D)=\pi_l^{-1}\pi_l (\text{\rm Cont}^\mathbf{m}(D))$ for $l\ge\max\{m_i\}$.
\begin{lemma}\label{lemmaBl1}
Let $2\le \nu\le d$ be integers and let $$\mu:Y=\mathbb{A}^d\rightarrow X=\mathbb{A}^d$$ be the map given in coordinates by $x_i=y_dy_i$ for $i=1,\ldots, \nu-1$, and $x_i=y_i$ for $i=\nu,\ldots, d$. Let $m\in\mathbb{N}$. Then the map
$$\mu_l:\pi_l(\text{\rm Cont}^m(y_d))\rightarrow \mu_l\pi_l(\text{\rm Cont}^m(y_d))$$
is a trivial fibration with fiber $\mathbb{C}^{(\nu-1)m}$ for $l\ge m$.
\end{lemma}
\begin{proof} Note that $\mu:Y\rightarrow X$ is an affine chart of the blowing up of $X$ at a linear subspace of codimension $\nu$, and $y_d$ is a defining function for the exceptional divisor. Consider the exact sequence
\begin{equation}\label{eqHO2}
\mu^*\Omega_X\xrightarrow{d\mu} \Omega_Y \rightarrow \Omega_{Y/X}\rightarrow 0.
\end{equation}
Then
$$
d\mu=\left(
\begin{array}{cc}
y_d I_{\nu-1} & A\\
O & I_{d-\nu+1}
\end{array}
\right)
$$
where $I_k$ is the $k\times k$ identity matrix, $O$ is a zero matrix, and $A$ is the matrix with only the last row nonzero
$$
A=\left(
O
\begin{array}{c}
y_1\\
\vdots\\
y_{\nu-1}
\end{array}
\right).
$$
Hence, multiplying $d\mu$ on the left by the appropriate invertible matrix, we can effect a change of basis so that
$$
d\mu=\left(
\begin{array}{cc}
y_d I_{\nu-1} & O\\
O & I_{d-\nu+1}
\end{array} \right).$$
For any arc $\gamma\in \text{\rm Cont}^m(y_d)$, one therefore has
$$
\gamma^*(d\mu)=
\left(
\begin{array}{cc}
u_\gamma t^m I_{\nu-1} & O\\
O & I_{d-\nu+1}
\end{array} \right),
$$
for some invertible $u_\gamma\in \bC\llbracket t \rrbracket$.
It is shown in the proof of \cite[Lemma 9.2]{Loo} that the fiber of $$\mu_l:\pi_l(\text{\rm Cont}^m(y_d))\rightarrow \mu_l\pi_l(\text{\rm Cont}^m(y_d))$$ through $\pi_l(\gamma)$ is the vector space
$$
K_l(\gamma):=\mathrm{Hom}_{\bC\llbracket t \rrbracket}(\gamma^*\Omega_{Y/X}, \bC\llbracket t \rrbracket/(t^{l+1})).
$$
This is the kernel of the linear map
$$
\mathrm{Hom}_{\bC\llbracket t \rrbracket}(\gamma^*\Omega_{Y}, \bC\llbracket t \rrbracket/(t^{l+1})) \rightarrow \mathrm{Hom}_{\bC\llbracket t \rrbracket}((h\circ\gamma)^*\Omega_{X}, \bC\llbracket t \rrbracket/(t^{l+1}))
$$
induced by $\gamma^*(d\mu)$ via the exact sequence (\ref{eqHO2}). Hence
$$
K_l(\gamma) =\left[t^{l-m+1}\bC\llbracket t \rrbracket/(t^{l+1})\right]^{\times (\nu-1)}\times \{0\}^{\times (d-\nu+1)}
$$
does not depend on the choice of $\gamma$, and it is isomorphic to $\mathbb{C}^{(\nu-1)m}$ as a $\mathbb{C}$-vector space.
\end{proof}
\begin{proof}[Second proof of Lemma \ref{lemmaBl1}]
It suffices to prove the case $d=\nu=2$, because the general case for $d$ and $\nu$ is similar. Let us consider the morphism $\mu: \mathbb A^2\to \mathbb A^2$ given by $(x,y)\mapsto(xy,y)$. By definition, $\text{\rm Cont}^m(y)$ is the subscheme of $\mathscr L(\mathbb A^2)$ consisting of $(\varphi,\psi)$ with $\psi$ of order $m$. Thus we can identify $\pi_l(\text{\rm Cont}^m(y))$ with
$$\{(a_0,\dots,a_l; b_m,\dots,b_l)\in \mathbb A^{2l-m+2} \mid b_m\not=0\},$$
and identify $\mu_l: \pi_l(\text{\rm Cont}^m(y))\rightarrow \mu_l\pi_l(\text{\rm Cont}^m(y))$ with
$$(a_0,\dots,a_l; b_m,\dots,b_l)\mapsto (a_0b_m,a_0b_{m+1}+a_1b_m,\dots,a_0b_l+\cdots+a_{l-m}b_m; b_m, \dots,b_l).$$
We now consider the morphism
$$(\mu_l,\mathrm{pr}_{\mathbb A^m}):\pi_l(\text{\rm Cont}^m(y))\rightarrow \mu_l\pi_l(\text{\rm Cont}^m(y))\times \mathbb A^m$$
sending $(a_0,\dots,a_l; b_m,\dots,b_l)$ to
$$(a_0b_m,a_0b_{m+1}+a_1b_m,\dots,a_0b_l+\cdots+a_{l-m}b_m; b_m, \dots,b_l; a_{l-m+1},\dots, a_l).$$
This morphism is an isomorphism because given $(u_m,\dots,u_l; b_m,\dots,b_l; a_{l-m+1},\dots, a_l)$ in the target, the following system of linear equations, in variables $(a_0,\dots,a_{l-m})$,
\begin{equation*}
\begin{cases}
a_0b_m &=\ u_m\\
a_0b_{m+1}+a_1b_m &=\ u_{m+1}\\
\cdots \cdots &\quad\ \cdots\\
a_0b_l+\cdots+a_{l-m}b_m&=\ u_l
\end{cases}
\end{equation*}
is a Cramer system due to $b_m\not=0$.
\end{proof}
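The system above is lower triangular with diagonal entries $b_m\neq 0$, so it is solved by forward substitution. The following Python sketch (an illustrative check with randomly chosen small data, not part of the proof) recovers $a_0,\ldots,a_{l-m}$ exactly from the truncated product coefficients $u_k=\sum_i a_ib_{k-i}$.

```python
from fractions import Fraction

def solve_coeffs(u, b):
    """Recover (a_0, ..., a_{l-m}) from u_k = sum_i a_i * b_{k-i},
    where b = (b_m, ..., b_l) with b_m != 0 and u = (u_m, ..., u_l).
    The system is lower triangular with diagonal b_m: a Cramer system."""
    assert b[0] != 0
    a = []
    for k in range(len(u)):
        # Subtract the already-known part of the k-th convolution sum.
        s = sum(a[i] * b[k - i] for i in range(k))
        a.append((u[k] - s) / Fraction(b[0]))
    return a

# Illustrative data: coefficients of a truncated product of power series.
a_true = [2, -1, 3]
b = [5, 1, 4]  # b_m = 5 != 0
u = [sum(a_true[i] * b[k - i] for i in range(k + 1)) for k in range(3)]
assert solve_coeffs(u, b) == a_true
```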
\begin{lemma}\label{lemBC1}
Let $\mu:Y\rightarrow X=\mathbb{A}^d$ be the blowing up of $X$ along a linear subspace $Z=\mathbb{A}^{d-\nu}$. Let $E$ be the exceptional divisor. Let $m\in\mathbb{N}$. Then the map
$$\mu_l:\pi_l(\text{\rm Cont}^m(E))\rightarrow \mu_l\pi_l(\text{\rm Cont}^m(E))$$
is a Zariski locally-trivial fibration with fiber $\mathbb{C}^{(\nu-1)m}$ for $l\ge m$.
\end{lemma}
\begin{proof}
In this case, $\mu_\infty(\text{\rm Cont}^m(E))=\text{\rm Cont}^m(Z)$ since $\mu$ is a log resolution of $(X,Z)$, by \cite[Theorem A]{ELM}.
There is an open covering $Y=\cup_{i=1}^\nu Y_i$ with $Y_i\simeq \mathbb{A}^d$ such that the restriction $\mu:Y_i\rightarrow X$ is as in Lemma \ref{lemmaBl1} for a suitable choice of coordinates. For each $i$, the set
$$
\pi_l(\text{\rm Cont}^m(E\cap Y_i))=\{\gamma\in \pi_l(\text{\rm Cont}^m(E))\mid \gamma(0)\in Y_i\}
$$
is open in $\pi_l(\text{\rm Cont}^m(E))$.
We show now that the set
$
\mu_l\pi_l(\text{\rm Cont}^m(E\cap Y_i))
$
is also open in $\mu_l\pi_l(\text{\rm Cont}^m(E))$. It consists of truncations $\pi_l(\gamma)$ of arcs $\gamma\in\text{\rm Cont}^m(Z)$ with the tangent line at the center $\gamma(0)$ contained in the normal directions to $Z$ given by $E\cap Y_i$.
Letting $Z$ be the zero locus of $I=(x_1,\ldots,x_\nu)$, where $x_1,\ldots ,x_d$ are coordinate functions on $X$, the last condition means the following. The exceptional divisor $E\simeq\mathbb{P}^{\nu-1}\times Z$ is the projectivization of the normal bundle of $Z$ in $X$, which is trivial. An arc $\gamma$ centered on $Z$ gives a well-defined point of $E$, the center of the unique lifting of $\gamma$ to an arc on $Y$,
$$
\tilde{\gamma}(0)=\left.\frac{d\gamma}{dt}\right\rvert_{t=0} \times \gamma(0)\quad\in\quad \mathbb{P}^{\nu-1}\times Z=E.
$$
where
$$
\left.\frac{d\gamma}{dt}\right\rvert_{t=0} = \left.\left[\frac{d (x_1(\gamma))}{dt},\ldots ,\frac{d (x_\nu(\gamma))}{dt}\right]\right\rvert_{t=0} \quad\in\quad\mathbb{P}^{\nu-1}
$$
is the point with homogeneous coordinates given by the coefficients of $t^{v-1}$ in the derivatives of $x_i(\gamma)(t)$ with respect to $t$, with $$v=\min\{\mathrm{ord}_\gamma x_i\mid i=1,\ldots,\nu\}=\mathrm{ord}_\gamma(I).$$
Thus, for a fixed $i$ in $\{1,\ldots, \nu\}$, the point $\tilde{\gamma}(0)$ belongs to $E\setminus (E\cap Y_i)$ if and only if the respective homogeneous coordinate of $(d\gamma/dt)\rvert_{t=0}$ is zero. That is, if and only if $\mathrm{ord}_\gamma x_i>v$. The last condition is a Zariski-closed condition on arcs, as well as on $l$-jets for $l\ge v$.
Thus for a fixed $i$ in $\{1,\ldots, \nu\}$,
$$
\mu_l\pi_l(\text{\rm Cont}^m(E\cap Y_i))=\mu_l\pi_l(\text{\rm Cont}^m(E))\cap \{\gamma\in\mathscr{L}_l(X)\mid \gamma(0)\in Z \text{ and }\mathrm{ord}_\gamma x_i\le m\}
$$
is open in $\mu_l\pi_l(\text{\rm Cont}^m(E))$ for $l\ge m$.
Since
$$
\mu_l:\pi_l(\text{\rm Cont}^m(E\cap Y_i))\rightarrow\mu_l\pi_l(\text{\rm Cont}^m(E\cap Y_i))
$$
is a trivial fibration with fiber $\mathbb{C}^{(\nu-1)m}$ by Lemma \ref{lemmaBl1}, the claim follows.
\end{proof}
\begin{lemma}\label{lemBlow}
Let $\mu:Y\rightarrow X$ be the blowing up of a smooth variety $X$ along a smooth subvariety $Z$ of codimension $\nu$. Let $E$ be the exceptional divisor. Let $m\in\mathbb{N}$. Then the map
$$\mu_l:\pi_l(\text{\rm Cont}^m(E))\rightarrow \mu_l\pi_l(\text{\rm Cont}^m(E))$$
is a Zariski locally-trivial fibration with fiber $\mathbb{C}^{(\nu-1)m}$ for $l\ge m$.
\end{lemma}
\begin{proof}
It is enough to cover $X$ by open affine subsets $U$ and prove the claim for the restriction $\mu^{-1}(U)\rightarrow U$ instead of $Y\rightarrow X$. Let $\nu$ be the codimension of $Z$. Since the sequence of locally free sheaves
$$
0\rightarrow I_Z/I_Z^2\rightarrow \Omega_{X} \rightarrow \Omega_Z \rightarrow 0
$$
is exact, we can find $U$ and sections $x_1,\ldots, x_d$ of $\mathcal{O}_X(U)$ with $x_1,\ldots, x_\nu$ generating $I_Z(U)$, $dx_1,\ldots, dx_d$ trivializing $\Omega_X(U)$, and $dx_1,\ldots, dx_\nu$ trivializing $\Omega_Z(U)$. Thus we obtain a base change diagram
$$
\xymatrix{
Z\cap U \ar[r]\ar[d] & \mathbb{A}^{d-\nu} \ar[d]\\
U \ar[r] & \mathbb{A}^{d}.
}
$$
with the horizontal morphism being \'etale and the vertical ones closed immersions. Thus $\mu^{-1}(U)\rightarrow U$ is the \'etale base change of the blowing up of $\mathbb{A}^d$ along the linear subspace $\mathbb{A}^{d-\nu}$. The claim follows from Lemma \ref{lemBC1} and the compatibility of jet schemes with \'etale morphisms.
\end{proof}
In what follows, the definition of a log resolution is more relaxed than in the introduction: we do not assume it anymore to be an isomorphism outside a fixed closed locus.
\begin{theorem}\label{lemLoo}
Let $\mu:Y\rightarrow X$ be a log resolution of an ideal subsheaf $I$ of $\mathcal{O}_X$ obtained by successively blowing up smooth centers. Let $I\cdot\mathcal{O}_Y=\mathcal{O}_Y(-\sum_i m_iE_i)$, where $E_i$ are the irreducible components of the zero locus $E$ of $I\cdot\mathcal{O}_Y$, and $m_i\in \mathbb{N}$. Let $\mathbf{k}=(k_i)_i$ be a tuple of natural numbers. Then for $l\gg 0$ the map
$$
\mu_l:\pi_l(\text{\rm Cont}^\mathbf{k}(E))\rightarrow\mu_l\pi_l(\text{\rm Cont}^\mathbf{k}(E))
$$
is a Zariski locally-trivial fibration with fiber $\mathbb{C}^e$ with $e=\sum_ik_i\cdot\mathrm{ord}_{K_{Y/X}}E_i$.
\end{theorem}
\begin{proof}
The proof of \cite[Lemma 9.2]{Loo} covers the claim, except that the fibration is only shown there to be piecewise locally trivial.
Factor $\mu:Y\rightarrow X$ into
$$
Y=Y^N\xrightarrow{\mu^N} Y^{N-1}\rightarrow\ldots\xrightarrow{\mu^2} Y^1\xrightarrow{\mu^1} Y^0=X,
$$
with $$\mu^j:Y^j\rightarrow Y^{j-1}$$ the blowing up along a smooth closed subvariety $Z^{j-1}$ of $Y^{j-1}$. Let $E^j$ be the exceptional divisor introduced by $\mu^j$.
Let $$\mu^{j,k}=\mu^k\circ\mu^{k+1}\circ\ldots\circ\mu^j:Y^j\rightarrow Y^k $$ for $k\le j$.
Define $$\mathscr{Y}^j=\mu^{N,j}_\infty (\text{\rm Cont}^\mathbf{k}(E))\quad\subset \mathscr{L}(Y^j),$$
so that we obtain a tower of surjective maps
$$
\text{\rm Cont}^\mathbf{k}(E)=\mathscr{Y}^N \rightarrow \mathscr{Y}^{N-1}\rightarrow\ldots\rightarrow\mathscr{Y}^1\rightarrow\mathscr{Y}^0=\mu_\infty(\text{\rm Cont}^\mathbf{k}(E))
$$
induced by the maps $\mu^j_\infty:\mathscr{L}(Y^j)\rightarrow\mathscr{L}(Y^{j-1})$.
By \cite[Theorem 2.1]{ELM} applied to the proper birational map $\mu^{N,j}:Y=Y^N\rightarrow Y^j$, each $\mathscr{Y}^j$ is a cylinder. The proof of this fact also shows that $\text{\rm Cont}^\mathbf{k}(E)=\mathscr{Y}^N$ is a union of fibers of $\mu^{N,j}_\infty:\mathscr{L}_\infty(Y^N)\rightarrow\mathscr{L}_\infty(Y^j)$ for each $j$. That is,
$\mathscr{Y}^N=(\mu^{N,j}_\infty)^{-1}(\mathscr{Y}^j)$ for all $j$. Thus $$\mathscr{Y}^j=(\mu^j_\infty)^{-1}(\mathscr{Y}^{j-1})$$ for all $j$ as well. Since $\mu^{N,j}$ is a log resolution of $(Y^j,Z^{j})$ and of $(Y^j,E^j)$, by {\it loc. cit.} we have that
$$
\mathscr{Y}^j\subset \text{\rm Cont}^{p_j}(Z^{j})\quad\text{ and }\quad \mathscr{Y}^j\subset \text{\rm Cont}^{q_j}(E^j),
$$
for
$$
p_j=\sum_i k_i\cdot \mathrm{ord}_{Z^{j}}E_i\quad\text{ and }\quad q_j=\sum_ik_i\cdot\mathrm{ord}_{E^j}E_i = p_{j-1}.
$$
The cylinder property allows us to draw the same conclusions for $l$-jets for $l\gg 0$. Define $$\mathscr{Y}^j_l:=\pi_l(\mathscr{Y}^j).$$ Then for $l\gg 0$,
$$
\mathscr{Y}^j=\pi_l^{-1}(\mathscr{Y}^j_l)
$$
and we have a tower of surjective maps
$$
\pi_l(\text{\rm Cont}^\mathbf{k}(E))=\mathscr{Y}^N_l \rightarrow \mathscr{Y}^{N-1}_l\rightarrow\ldots\rightarrow\mathscr{Y}^1_l\rightarrow\mathscr{Y}^0_l=\mu_l\pi_l(\text{\rm Cont}^\mathbf{k}(E))
$$
induced by the maps $\mu^j_l:\mathscr{L}_l(Y^j)\rightarrow\mathscr{L}_l(Y^{j-1})$, such that
$$
\mathscr{Y}^j_l=(\mu^j_l)^{-1}(\mathscr{Y}^{j-1}_l),
$$
$$
\mathscr{Y}^j_l\subset \pi_l(\text{\rm Cont}^{p_j}(Z^{j}))\quad\text{ and }\quad \mathscr{Y}^j_l\subset \pi_l(\text{\rm Cont}^{q_j}(E^j)).
$$
Therefore the map $\mathscr{Y}^j_l\rightarrow\mathscr{Y}^{j-1}_l$ is obtained by base change from the map
$$
\pi_l(\text{\rm Cont}^{q_j}(E^j))\rightarrow \mu^j_l\pi_l(\text{\rm Cont}^{q_j}(E^j))=\pi_l(\text{\rm Cont}^{p_{j-1}}(Z^{j-1})),
$$
which is Zariski locally-trivial by Lemma \ref{lemBlow}. Thus each map $\mathscr{Y}^j_l\rightarrow\mathscr{Y}^{j-1}_l$ is a Zariski locally-trivial fibration, and so the composition $\mathscr{Y}^N_l\rightarrow\mathscr{Y}^0_l$ is as well.
\end{proof}
\noindent
{\bf Acknowledgement.} We thank J. Sebag and the referees for useful comments.
N.B. was partly supported by the grants STRT/13/005 and Methusalem METH/15/026 from KU Leuven, G097819N and G0F4216N from FWO.
J.F.B. was supported by ERCEA 615655 NMST Consolidator Grant, MINECO by the project
reference MTM2016-76868-C2-1-P (UCM), by the Basque Government through the BERC 2018-2021 program and Gobierno Vasco Grant IT1094-16, by the Spanish Ministry of Science, Innovation and Universities: BCAM Severo Ochoa accreditation SEV-2017-0718 and by Bolsa Pesquisador Visitante Especial (PVE) - Ciencias sem Fronteiras/CNPq Project number: 401947/2013-0.
L.Q.T. was supported by the grants mentioned for J.F.B.
H.D.N. was supported by the grants mentioned for J.F.B, by Juan de la Cierva Incorporaci\'on IJCI-2016-29891, and the National Foundation for Science and Technology Development (NAFOSTED), Grant number 101.04-2019.316, Vietnam.
% Application of semidefinite programming to maximize the spectral gap produced by node removal (https://arxiv.org/abs/1301.1503)
\section{Introduction}
An undirected and unweighted network (i.e., graph) on $N$ nodes is equivalent to an $N \times N$
symmetric
adjacency matrix $A=(A_{ij})$,
where $A_{ij}=1$ when nodes (also called vertices) $i$ and $j$ form
a link (also called edge), and
$A_{ij}=0$ otherwise. We define the Laplacian matrix of the network by
\begin{equation}
L\equiv D-A,
\end{equation}
where
$D$ is the $N \times N$ diagonal matrix in which the $i$th diagonal
element is equal to $\sum_{j=1}^N A_{ij}$, i.e., the degree of node $i$.
When the network is connected, the eigenvalues of $L$
satisfy
\begin{equation}
\lambda_1=0 < \lambda_2 \le \cdots \le \lambda_N.
\end{equation}
The eigenvalue $\lambda_2$ is called spectral gap or algebraic connectivity
and characterizes various dynamics on networks including synchronizability
\cite{Almendral2007NJP,Arenas2008PhysRep,Donetti2006JSM},
speed of synchronization \cite{Almendral2007NJP},
consensus dynamics \cite{Olfati2007IEEE}, the speed of convergence of the
Markov chain to the stationary density
\cite{Cvetkovic2010book,Donetti2006JSM}, and
the first-passage time of the random walk \cite{Donetti2006JSM}.
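For concreteness, the Laplacian and its spectral gap can be computed directly from the adjacency matrix. The following is a minimal numpy sketch; the path graph used here is a hypothetical example, not one of the networks studied below.

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A of a symmetric 0/1 adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

def spectral_gap(A):
    """Second-smallest Laplacian eigenvalue (algebraic connectivity)."""
    eig = np.linalg.eigvalsh(laplacian(A))
    return eig[1]

# A path graph on 4 nodes: 0 - 1 - 2 - 3.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1

# For the path P_n, lambda_2 = 2 - 2*cos(pi/n).
assert np.isclose(spectral_gap(A), 2 - 2 * np.cos(np.pi / 4))
```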
Because a large $\lambda_2$ is often considered to be desirable, e.g., for
strong synchrony and high speed of convergence,
maximization of $\lambda_2$ by changing networks under certain
constraints is important in applications.
In the present work, we consider the problem of maximizing
the spectral gap by removing a specified number, $N_{\rm del}$, of nodes from a given network.
We assume that an appropriate choice of $N_{\rm del}$ nodes keeps the
network connected.
A heuristic algorithm for this task in which nodes are sequentially removed is proposed in
\cite{Watanabe2010pre}. In this study, we explore a mathematical programming
approach. We propose two algorithms
using semidefinite programming and numerically
compare their performance with that of the
sequential algorithm proposed in \cite{Watanabe2010pre}.
\section{Methods}
We start by introducing notation.
First, the binary variable $x_i$ ($1\le i\le N$)
takes a value of $0$ if node $i$ is one of the $N_{\rm
del}$ removed nodes and $1$ if node $i$ survives the removal.
Our goal is to determine $x_i$ ($1\le i\le N$) that maximizes $\lambda_2$
under the constraint
\begin{equation}
\sum_{i=1}^N x_i=N-N_{\rm del}.
\label{eq:constraint sum xi}
\end{equation}
Second, we define $\tilde{L}_{ij}$ as the $N \times N$ Laplacian matrix
generated by a single link $(i,j)\in E$, where $E$
is the set of links. In other words,
the ($i$,$i$) and ($j$,$j$) elements of $\tilde{L}_{ij}$ are
equal to 1, the ($i$,$j$) and ($j$,$i$) elements of $\tilde{L}_{ij}$
are equal to $-1$, and all the other elements of
$\tilde{L}_{ij}$ are equal to 0. It should be noted that
\begin{equation}
L=\sum_{1\le i<j\le N; (i,j)\in E}\tilde{L}_{ij}.
\label{eq:L sum}
\end{equation}
Third, $J$ denotes the $N \times N$ matrix in which all the $N^2$ elements are equal
to unity. Fourth, $E_i$ denotes the $N \times N$ diagonal matrix in which
the $(i,i)$ element is equal to unity and all the other $N^2-1$ elements are equal to 0.
After the removal of $N_{\rm del}$ nodes,
we do not decrease the size of the Laplacian. Instead,
we remove $\tilde{L}_{ij}$
from the summation on the RHS of \EQ\eqref{eq:L sum}
if
node $i$ or $j$ has been removed from the network.
The Laplacian of the remaining network, if connected, has $N_{\rm del}+1$ zero
eigenvalues. The corresponding zero eigenvectors are given by
$\bm u^{(0)}\equiv
(1 \cdots 1)^{\top}$ and $\bm e_i$, where $\top$ denotes the transposition,
$\bm e_i$ is the unit column vector
in which the $i$th element is equal to 1 and the other $N-1$ elements are
equal to 0, and $i$ is the index of one of the $N_{\rm del}$ removed nodes.
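This zero-eigenvalue count can be verified numerically. The following numpy sketch (the six-node cycle and the removed set are hypothetical examples) builds the full-size Laplacian by dropping $\tilde{L}_{ij}$ for links touching removed nodes and checks that it has $N_{\rm del}+1$ zero eigenvalues when the remaining network stays connected.

```python
import numpy as np

def reduced_laplacian(N, edges, removed):
    """Full-size (N x N) Laplacian keeping only links whose endpoints
    both survive the removal."""
    L = np.zeros((N, N))
    for i, j in edges:
        if i in removed or j in removed:
            continue
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

# Cycle on 6 nodes; removing node 0 leaves a connected path.
N = 6
edges = [(i, (i + 1) % N) for i in range(N)]
removed = {0}
eig = np.linalg.eigvalsh(reduced_laplacian(N, edges, removed))
n_zero = np.sum(np.isclose(eig, 0.0, atol=1e-9))
assert n_zero == len(removed) + 1  # N_del + 1 zero eigenvalues
```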
We formulate a nonlinear eigenvalue optimization
problem, which we call EIGEN, as follows:
\begin{equation*}
{\rm maximize} \; t
\quad \mbox{ subject to \EQ\eqref{eq:constraint sum xi} and}
\end{equation*}
\begin{equation}
-t I + \sum_{i<j; (i,j)\in E}x_i x_j \tilde{L}_{ij} + \alpha J
+ \beta \sum_{i=1}^N (1-x_i) E_i \succeq 0,
\label{eq:constraint 1}
\end{equation}
and $x_i
\in \{0, 1\}\quad (1\le i\le N)$,
where $\succeq 0$ indicates that the LHS is a positive semidefinite matrix.
The positive semidefinite constraint \EQ\eqref{eq:constraint 1}
is derived from a standard prescription in semidefinite programming
for optimization of an extreme eigenvalue of a matrix.
Maximizing $t$ is equivalent
to maximizing the smallest eigenvalue of the matrix given by the
sum of the second, third, and fourth terms on the LHS
of \EQ\eqref{eq:constraint 1}.
Without the third and fourth terms on the LHS of
\EQ\eqref{eq:constraint 1}, the optimal solution would be trivially equal to
$t=0$ because the Laplacian of any network has 0 as the smallest
eigenvalue. Because $J=\bm u^{(0)}\bm u^{(0)\top}$ and $J\bm u^{(0)}=N\bm u^{(0)}$, the third term
shifts a zero eigenvalue to $\approx \alpha N$. We should take
a sufficiently large $\alpha>0$ such that the zero eigenvalue is shifted to a
value larger than the spectral gap of the remaining network, denoted
by $\tilde{\lambda}_2$. This technique was introduced in
\cite{Cvetkovic1999LNCS} for solving the traveling salesman problem.
For each removed node $i$ (i.e., $x_i=0$),
the matrix
represented by the second term on the LHS of \EQ\eqref{eq:constraint 1}
has a zero eigenvalue associated with eigenvector $\bm e_i$.
The fourth term shifts this zero eigenvalue
to $\approx \beta$. Note that the fourth term disappears
for the remaining $N - N_{\rm
del}$ nodes because $x_i=1$ for the remaining nodes.
If the shifted eigenvalues are larger than $\tilde{\lambda}_2$,
the solution to the problem stated above
returns the $N_{\rm del}$ nodes whose removal maximizes $\tilde{\lambda}_2$.
The second term on the LHS of \EQ\eqref{eq:constraint 1} represents
a nonlinear constraint. To linearize
the problem in terms of the variables,
we follow a conventional prescription
to introduce auxiliary variables
\begin{equation}
X_{ij}\equiv x_i x_j,
\end{equation}
where $1\le
i\le j\le N$
\cite{Grotschel1986jctsb,Lovasz1979IEEE,Lovasz1991SiamO}
(also reviewed in \cite{Goemans1997MP}).
Because $x_i$ is binary, $x_i (1-x_i)=0$ holds.
Therefore, we require $X_{ii}=x_i^2 = x_i$. In the following discussion,
we use $x_i$ in place of $X_{ii}$.
We define the $(N+1) \times (N+1)$ matrix
\begin{equation}
Y\equiv \begin{bmatrix}
1 & \bm x^{\top}\\ \bm x & X
\end{bmatrix},
\label{eq:Y}
\end{equation}
where $\bm x \equiv (x_1\; \ldots \; x_N)^{\top}$,
the ($i,i$) element of the $N \times N$ matrix $X$ is equal to $x_i$, and
the ($i,j$) element ($i\neq j$) of $X$ is equal to $X_{ij}$.
By allowing $x_i$ and $X_{ij}$ ($1\le i< j\le N$) to take any
continuous value between 0 and 1,
we define the relaxed problem named SDP1 as follows:
\begin{equation*}
{\rm maximize} \; t \quad \mbox{ subject to \EQ\eqref{eq:constraint sum xi} and}
\end{equation*}
\begin{align}
-t I + \sum_{i<j; (i,j)\in E}X_{ij} \tilde{L}_{ij} +& \alpha J
+ \beta \sum_{i=1}^N (1-x_i) E_i \succeq 0,
\label{eq:main constraint SDP1}\\
Y \succeq & 0.\label{eq:Y>=0}
\end{align}
Note that \EQ\eqref{eq:Y>=0} implies
$0 \le x_i \le 1$ $(1\le i\le N)$ and that
SDP1 relaxes the original problem in that
$x_i$ and $X_{ij}$ are allowed
to take continuous values
while \EQ\eqref{eq:Y>=0} is imposed.
The method that we propose here for approximately maximizing
the spectral gap is to remove the $N_{\rm del}$ nodes
corresponding to the $N_{\rm del}$ smallest values among
$x_1$, $\ldots$, $x_N$ in the optimal solution of SDP1.
SDP1 involves $N(N+1)/2+1$ variables (i.e., $t$, $x_i$, and $X_{ij}$
with $i<j$). In fact, $X_{ij}$ for $(i,j)\notin E$ does not appear in the main
positive semidefinite constraint \EQ\eqref{eq:main constraint SDP1}
and is constrained only through \EQ\eqref{eq:Y>=0}.
Because a given network is typically sparse, this implies
that there are many redundant variables in SDP1. To exploit the
sparsity and thus to save time and memory space,
a technique based on matrix completion might be useful
\cite{Fukuda2000SiamO,Nakata2003MP}. In this paper, however, we
propose another relaxation SDP2 for this purpose.
To linearize the second term on the LHS of
\EQ\eqref{eq:constraint 1}, we take advantage of four inequalities
$x_i x_j\ge 0$, $x_i(1-x_j)\ge 0$, $(1-x_i)x_j\ge 0$, and $(1-x_i)(1-x_j)\ge 0$ that must be satisfied for any link $(i, j)\in E$. By defining $X_{ij}\equiv x_i x_j$, as in the case of SDP1, we obtain the following four linear constraints \cite{Padberg1989MP}:
\begin{align}
X_{ij}\ge & 0,\label{eq:SDP2 linear constraint 1}\\
x_i-X_{ij}\ge & 0,\label{eq:SDP2 linear constraint 2}\\
x_j-X_{ij}\ge & 0,\label{eq:SDP2 linear constraint 3}\\
1-x_i-x_j+X_{ij}\ge & 0.\label{eq:SDP2 linear constraint 4}
\end{align}
SDP2 is defined by replacing \EQ\eqref{eq:Y>=0} by \EQS\eqref{eq:SDP2
linear constraint 1}--\eqref{eq:SDP2 linear constraint 4}, where only
the pairs $(i,j)\in E$ are considered. Note that
\EQS\eqref{eq:SDP2 linear constraint 1}--\eqref{eq:SDP2
linear constraint 4} guarantee
$0\le x_i\le 1$ ($1\le i\le N$).
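These four inequalities are the standard linearization of a product of binary variables \cite{Padberg1989MP}. As a quick illustrative check (a Python sketch; the numerical values are hypothetical), they hold with $X_{ij}=x_ix_j$ on all binary points, and jointly they rule out values of $x_i$ outside $[0,1]$.

```python
def mccormick_ok(xi, xj, Xij):
    """Constraints (10)-(13) linearizing Xij = xi * xj."""
    return (Xij >= 0 and xi - Xij >= 0 and xj - Xij >= 0
            and 1 - xi - xj + Xij >= 0)

# Exact on binary points ...
for xi in (0, 1):
    for xj in (0, 1):
        assert mccormick_ok(xi, xj, xi * xj)

# ... and no choice of Xij can certify xi = 1.2 > 1 as feasible:
# the constraints force Xij <= xj and Xij >= xi + xj - 1 > xj.
assert not any(mccormick_ok(1.2, 0.5, X / 10) for X in range(13))
```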
We remove the $N_{\rm del}$ nodes
corresponding to the $N_{\rm del}$ smallest values among
$x_1$, $\ldots$, $x_N$ in the optimal solution of SDP2.
Numerically, SDP2 is
much easier to solve than SDP1 for two reasons. First, the number of
variables is smaller in SDP2 than in SDP1. In SDP2, $X_{ij}$ is
defined only on the links, whereas in SDP1 it is defined for all the pairs $1\le
i<j\le N$. In sparse networks,
the number of variables is $O(N^2)$ for SDP1 and $O(N)$ for SDP2.
Second, the positive semidefinite constraint, which is much more
time-consuming to solve than a linear constraint of comparable size,
is smaller in SDP2 than in SDP1. While SDP1 and SDP2 share
the $N \times N$ positive semidefinite constraint
\eqref{eq:main constraint SDP1},
SDP1 involves an additional positive semidefinite constraint \eqref{eq:Y>=0} of size
$(N+1) \times (N+1)$.
To determine the values of $\alpha$ and $\beta$, we consider
the matrix
represented by the sum of the second, third, and fourth terms on the
LHS of \EQ\eqref{eq:constraint 1}.
A straightforward calculation shows that the eigenvalues of
this matrix are given by the
$N-N_{\rm del}-1$ positive eigenvalues of the Laplacian of the remaining
network, ($N_{\rm
del}-1$)-fold $\beta$, and $\beta+
\left[\alpha N - \beta \pm \sqrt{(\alpha N
-\beta)^2 + 4 N_{\rm del} \alpha\beta} \right]/2$. For a
fixed $\beta$, we should select $\alpha$ to maximize
$\beta+
\left[\alpha N - \beta - \sqrt{(\alpha N
-\beta)^2 + 4 N_{\rm del} \alpha\beta} \right]/2$, which is
always smaller than eigenvalue $\beta$.
We set
\begin{equation}
\alpha=\frac{\beta}{N}
\end{equation}
to simplify the expression
of this eigenvalue to $\beta(1 - \sqrt{N_{\rm del}/N})$
while approximately maximizing this eigenvalue.
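This eigenvalue bookkeeping can be checked numerically. The sketch below (numpy; the six-node cycle, removal set, and parameter values are hypothetical examples) assembles the matrix for a binary assignment and compares its spectrum with the formula above.

```python
import numpy as np

def penalty_matrix(N, edges, removed, alpha, beta):
    """Sum of the 2nd, 3rd, and 4th terms on the LHS of the EIGEN
    constraint for a binary assignment (x_i = 0 iff i is removed)."""
    M = np.zeros((N, N))
    for i, j in edges:
        if i in removed or j in removed:
            continue
        M[i, i] += 1
        M[j, j] += 1
        M[i, j] -= 1
        M[j, i] -= 1
    M += alpha * np.ones((N, N))          # alpha * J
    for i in removed:
        M[i, i] += beta                   # beta * (1 - x_i) * E_i
    return M

# Small example: a 6-cycle with two nodes removed (remaining part connected).
N, Ndel = 6, 2
edges = [(i, (i + 1) % N) for i in range(N)]
removed = {0, 1}
alpha, beta = 0.5, 3.0

eig = np.sort(np.linalg.eigvalsh(penalty_matrix(N, edges, removed, alpha, beta)))

# Predicted spectrum: positive eigenvalues of the remaining network's
# Laplacian, (Ndel - 1) copies of beta, and the two roots
# beta + [alpha*N - beta +/- sqrt((alpha*N - beta)^2 + 4*Ndel*alpha*beta)]/2.
pos = np.linalg.eigvalsh(penalty_matrix(N, edges, removed, 0.0, 0.0))
pos = pos[np.abs(pos) > 1e-9]
disc = np.sqrt((alpha * N - beta) ** 2 + 4 * Ndel * alpha * beta)
roots = [beta + (alpha * N - beta - disc) / 2,
         beta + (alpha * N - beta + disc) / 2]
predicted = np.sort(np.concatenate([pos, [beta] * (Ndel - 1), roots]))
assert np.allclose(eig, predicted)
```

With $\alpha=\beta/N$ the smaller root simplifies to $\beta(1-\sqrt{N_{\rm del}/N})$, matching the expression in the text.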
We have the following bounds for the optimal solution
to the original problem.
We denote by $\tilde{\lambda}_2^{\rm opt}$ the optimal solution,
i.e., the maximum spectral gap with $N_{\rm del}$ nodes removed.
We denote by $\tilde{\lambda}_2^{\rm SDP}$
the smallest positive eigenvalue of the network
obtained by the proposed method; the proposed method removes
the $N_{\rm del}$ nodes
corresponding to the $N_{\rm del}$ smallest values of
$x_1$, $\ldots$, $x_N$ in the optimal solution of SDP1 or SDP2.
Obviously, $\tilde{\lambda}_2^{\rm SDP}$ is a lower bound for
$\tilde{\lambda}_2^{\rm opt}$.
On the other hand, the optimal value, $\max t$, of SDP1 or SDP2
serves as an upper bound for $\tilde{\lambda}_2^{\rm opt}$,
as long as $\beta$ satisfies
$\tilde{\lambda}_2^{\rm opt} \leq \beta(1 - \sqrt{N_{\rm del}/N} )$.
This follows from the facts
that the optimal value of EIGEN with such a $\beta$ value
coincides with $\tilde{\lambda}_2^{\rm opt}$
and both SDP1 and SDP2 are a relaxation of EIGEN.
We can summarize our observation as follows:
$\tilde{\lambda}_2^{\rm SDP} \leq \tilde{\lambda}_2^{\rm opt}
\leq \max t$.
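For small networks, these bounds can be benchmarked against brute force. The following Python sketch is an illustrative exhaustive search (our own, with a hypothetical test graph) that computes $\tilde{\lambda}_2^{\rm opt}$ exactly by enumerating all removal sets that keep the network connected.

```python
import itertools
import numpy as np

def best_removal_exhaustive(A, n_del):
    """Exact maximizer of the spectral gap over all removals of n_del
    nodes (feasible only for small N; disconnecting removals are skipped)."""
    N = A.shape[0]
    best_gap, best_set = -np.inf, None
    for rem in itertools.combinations(range(N), n_del):
        keep = [v for v in range(N) if v not in rem]
        sub = A[np.ix_(keep, keep)]
        eig = np.linalg.eigvalsh(np.diag(sub.sum(axis=1)) - sub)
        if eig[1] > 1e-9 and eig[1] > best_gap:  # connected and better
            best_gap, best_set = eig[1], rem
    return best_set, best_gap

# Hypothetical test graph: a 6-cycle. Removing any single node leaves a
# path P_5, whose gap is 2 - 2*cos(pi/5).
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
rem, gap = best_removal_exhaustive(A, 1)
assert np.isclose(gap, 2 - 2 * np.cos(np.pi / 5))
```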
\section{Numerical results}
In this section, we
apply SDP1 and SDP2 to some synthetic and real networks.
We implement SDP1 and SDP2 using the free software package
SeDuMi 1.3 that runs on MATLAB 7.7.0.471 (R2008b) \cite{SeDuMi}.
We compare the performance of SDP1 and SDP2 with that of the optimal
sequential method, which is a heuristic method proposed in
\cite{Watanabe2010pre}. In the optimal sequential method, we
numerically calculate the spectral gap for the network
obtained by the removal of one node; we
do this for all possible choices of a node to be removed.
Subsequently, we remove the node whose removal yields the
largest spectral gap. Then, for the remaining network composed of $N-1$
nodes, we determine the second node to be
removed in the same way. We repeat this procedure until $N_{\rm del}$ nodes have been removed.
The first example network
is the well-known karate club social network, in which a node represents a member of the club and a link represents casual interaction between two members \cite{Zachary1977JAR}. The network has $N=34$ nodes and 78 links. We set $\beta=2$.
The spectral gaps obtained by the different node removal methods
are shown in \FIG\ref{fig:results}(a) as a function of $N_{\rm del}$.
Up to $N_{\rm del}=5$, the optimal sequential method yields the
exact solution, as do SDP1 and SDP2. For $N_{\rm del}\ge 6$, we could
not obtain the exact solution by the exhaustive search
because of the combinatorial
explosion. For $7\le N_{\rm del}\le 16$, SDP1 and SDP2 perform worse than the
optimal sequential method. However, for $N_{\rm del}\ge 17$, both SDP1 and SDP2 outperform
the optimal sequential method. SDP1 and SDP2 found efficient combinations of removed nodes that the optimal sequential method could not find.
Second, we test the three methods against the largest connected component of the undirected and unweighted version of a macaque cortical network \cite{SpornsZwi04ni}. The network has
$N=71$ nodes and 438 links. We set $\beta=2$.
The spectral gaps obtained by the different methods
are shown in \FIG\ref{fig:results}(b).
Up to $N_{\rm del}=4$, the optimal sequential method yields the
exact solution, as do SDP1 and SDP2. For $N_{\rm del}\ge 5$, we could
not obtain the exact solution because of the combinatorial
explosion. For $N_{\rm del}\ge 5$, SDP1 and SDP2 perform worse than the
optimal sequential method. Consistent with the poor performance of SDP1 and SDP2,
the final values of $x_i$ ($1\le i\le N$)
are not bimodally distributed around 0 and 1 as SDP1 and SDP2 implicitly suppose.
The distribution is rather
unimodal except for the first three values of $x_i$ that are close to
0. The ten values of $x_i$ when $N_{\rm del}=5$, in ascending
order, are as follows: $x_{33}=0.1086$, $x_{62}=0.1531$, $x_{53}=0.1589$,
$x_{1}=0.4813$, $x_{2}=0.5246$, $x_{8}=0.5591$, $x_{7}=0.6449$,
$x_{24}=0.7866$, $x_{51}=0.8749$, and $x_{63}=0.8931$ in SDP1, and
$x_{53}=0.000$, $x_{33}=0.145$, $x_{62}=0.177$, $x_{2}=0.585$,
$x_{1}=0.588$, $x_{8}=0.610$, $x_{7}=0.668$, $x_{24}=0.708$,
$x_{5}=0.738$, and $x_{4}=0.937$ in SDP2.
\begin{figure}
\centering
\includegraphics[width=6cm]{lam2-sdp-karate}
\includegraphics[width=6cm]{lam2-sdp-macaque}
\includegraphics[width=6cm]{lam2-sdp-ba-n150-m2}
\includegraphics[width=6cm]{lam2-sdp-cele-wgapj}
\caption{Spectral gap as a function of the number of removed
nodes for four networks. (a) Karate club social network with $N=34$ nodes.
(b) Macaque cortical network with $N=71$ nodes.
(c) Barab\'{a}si--Albert scale-free network with $N=150$ nodes.
(d) \textit{C.~elegans} neural network with $N=279$
nodes.}
\label{fig:results}
\end{figure}
The third network is a network with $N=150$ nodes generated by the Barab\'{a}si--Albert scale-free network model \cite{Barabasi99sci}. The growth of the network starts with a connected pair of nodes, and each incoming node is assumed to have two links. The generated network has 297 links. We set $\beta=2$. For this network and the following one, SDP1 cannot be applied because $N$ is too large. Therefore, we only compare the performance of SDP2 with that of the optimal sequential method. The results shown in
\FIG\ref{fig:results}(c) indicate that SDP2 outperforms the optimal sequential method when $N_{\rm del}\ge 7$.
The fourth network is the largest connected component of the
\textit{C.~elegans} neural network \cite{wormatlas,Chen06pnas}. Two
nodes are regarded as being connected when they are connected by a
chemical synapse or gap junction. We ignore the direction and weight
of links. The network has $N=279$ nodes and 2287 links. We set
$\beta=2.5$.
The results for SDP2 and the optimal sequential method are shown
in \FIG\ref{fig:results}(d).
Although the spectral gap gradually increases with
$N_{\rm del}$ for SDP2, SDP2
performs poorly as compared to the optimal sequential method for this example.
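For concreteness, the sequential baseline used above can be sketched as follows: at each step, delete the single node whose removal maximizes the spectral gap (the second-smallest Laplacian eigenvalue) of the remaining graph. This is an illustrative NumPy sketch, not the authors' code; the 4-node path graph is our own toy example.

```python
import numpy as np

def laplacian(adj):
    """Combinatorial Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def spectral_gap(adj):
    """Second-smallest Laplacian eigenvalue (zero if disconnected)."""
    return np.sort(np.linalg.eigvalsh(laplacian(adj)))[1]

def greedy_removal(adj, n_del):
    """Sequentially delete the node that maximizes the spectral gap.

    Returned indices refer to the current (shrinking) matrix.
    """
    removed = []
    for _ in range(n_del):
        # try every remaining node and keep the best deletion
        gaps = [spectral_gap(np.delete(np.delete(adj, i, 0), i, 1))
                for i in range(adj.shape[0])]
        best = int(np.argmax(gaps))
        removed.append(best)
        adj = np.delete(np.delete(adj, best, 0), best, 1)
    return removed, spectral_gap(adj)

# Toy example: path graph 0-1-2-3.  Deleting an end node leaves a
# 3-path with gap 1; deleting an interior node disconnects the graph.
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
removed, gap = greedy_removal(path, 1)
```

Exhaustive search over all $\binom{N}{N_{\rm del}}$ subsets replaces the greedy loop when $N_{\rm del}$ is small enough, which is the "exact solution" referred to above.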
\section{Discussion}
We proposed a method to maximize the spectral gap using semidefinite
programming. The two proposed algorithms have a firmer
mathematical foundation as compared to the heuristic numerical method
(i.e., optimal sequential method). The proposed algorithms
performed better than the heuristic method for two networks especially for large
$N_{\rm del}$ and worse for the other two networks. For the former two networks, we could find the solutions in the situations in which the exhasutive search is computationally formidable. Up to our numerical efforts, our algorithms seem to be efficient for sparse networks.
We should be careful about the choice of $\beta$.
If $\beta$ is too large, SDP1 and SDP2 would
result in $x_i\approx N_{\rm del}/N$ ($1\le i\le N$). This is because
setting $x_i= N_{\rm del}/N$ ($1\le i\le N$)
makes the fourth term on the
LHS of \EQ\eqref{eq:constraint 1} equal
to $\beta \frac{N-N_{\rm del}}{N}I$, which increases all the
eigenvalues, including the spectral gap of the remaining network,
by $\beta \frac{N-N_{\rm del}}{N}$.
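The shift property invoked here — adding $cI$ to a symmetric matrix moves every eigenvalue up by exactly $c$ — can be checked numerically. This is our own illustrative sketch (the 3-node Laplacian is a toy example, and $c$ stands in for $\beta\frac{N-N_{\rm del}}{N}$):

```python
import numpy as np

# Laplacian of the path graph 0-1-2
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
c = 0.7  # stands in for beta * (N - N_del) / N

plain = np.linalg.eigvalsh(L)
shifted = np.linalg.eigvalsh(L + c * np.eye(3))
# every eigenvalue, including the spectral gap, moves up by exactly c
assert np.allclose(shifted, plain + c)
```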
In contrast, if $\beta$ is smaller than $\tilde{\lambda}_2$,
SDP1 and SDP2 would maximize a false eigenvalue
originating from the fourth term
on the LHS of \EQ\eqref{eq:constraint 1}.
To enhance the performance of SDP1 and SDP2, it
may be useful to abandon the convexity of the problem. For example,
we could try replacing $(1 - x_i)$ in the fourth term by $(1 - x_i)^p$ and gradually
increase $p$ from unity. When $p>1$, the problem is no longer convex.
Accordingly, the existence of a unique solution and the convergence of
the proposed algorithm are no longer guaranteed.
Nevertheless, we may be able to track the optimal solution $\bm x$ by the
Newton method while we gradually increase $p$
(see p.5 and p.63 in \cite{BendsoeSigmundbook}).
An alternative extension is to add
$-p \sum_{i=1}^N x_i (1-x_i)$ to the objective function to be maximized (i.e.,
$t$).
When $p>0$, the convexity is violated. However, we may be able to adopt
a procedure similar to the method explained above, i.e.,
start with
$p=0$ and gradually increase $p$ to track the solution by
the Newton method.
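To see why a penalty of the form $-p\sum_i x_i(1-x_i)$ pushes the variables toward $\{0,1\}$ as $p$ grows, consider a one-dimensional toy problem (our own illustration, unrelated to the actual SDP):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)

def minimizer(p):
    """Grid minimizer of (x - 0.6)^2 + p * x * (1 - x) over [0, 1]."""
    return x[np.argmin((x - 0.6) ** 2 + p * x * (1.0 - x))]

# Without the penalty the minimizer is fractional; with a strong
# enough penalty the objective becomes concave in x and the
# minimizer snaps to the nearest binary value.
assert abs(minimizer(0.0) - 0.6) < 1e-6
assert minimizer(2.0) == 1.0
```

Gradually increasing $p$ from $0$ and re-solving from the previous solution is precisely the continuation strategy sketched above.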
\subsubsection*{Acknowledgments.}
Naoki Masuda acknowledges the financial support of the Grants-in-Aid for Scientific
Research (no. 23681033) from MEXT, Japan.
This research is also partially supported by the Aihara Project, the FIRST
program from JSPS and by Global COE Program ``The research and training
center for new development in mathematics'' from MEXT.
(End of arXiv:1301.1503, ``Application of semidefinite programming to maximize the spectral gap produced by node removal''.)

arXiv:1304.2809 --- On partial sparse recovery

\textbf{Abstract.} We consider the problem of recovering a partially sparse solution of an underdetermined system of linear equations by minimizing the $\ell_1$-norm of the part of the solution vector which is known to be sparse. Such a problem is closely related to a classical problem in Compressed Sensing where the $\ell_1$-norm of the whole solution vector is minimized. We introduce analogues of restricted isometry and null space properties for the recovery of partially sparse vectors and show that these new properties are implied by their original counterparts. We show also how to extend recovery under noisy measurements to the partially sparse case.

\section{Introduction}\label{sec:introduction}
\IEEEPARstart{I}{n} Compressed Sensing one is interested in recovering a sparse
solution~$\bar x\in\RR^N$ of an underdetermined system of the form $y=A \bar x$, given
a vector $y\in\RR^k$ and a matrix $A \in \RR^{k\times N}$
with far fewer rows than columns $(k\ll N)$.
A direct approach is to minimize the number of non-zero components of~$x$, i.e.,
the $\ell_0$-\emph{norm} of $x$ (which is defined
as $\|\vad\|_0=|\{i:\vad_i\neq 0\}|$ but, strictly speaking, is not a norm),
\begin{equation}\label{minl0fourier}
\min \|x\|_0 \quad \operatorname{s.t.}\quad A x= y.
\end{equation}
Since (\ref{minl0fourier}) is known to be NP-hard, a tractable approximation is commonly considered, obtained by replacing the
non-convex $\ell_0$-\emph{norm} with a convex surrogate.
Recent results indicate that the $\ell_1$-norm can serve as such an approximation
(see~\cite{ECandes_2006} for a survey on some of this material).
Hence (\ref{minl0fourier}) is replaced
by the following optimization problem
\begin{equation}\label{minl1}
\min \|x\|_1 \quad \operatorname{s.t.}\quad Ax=y.
\end{equation}
Note that~(\ref{minl1}) is equivalent to a linear program and thus is much
easier to solve than~(\ref{minl0fourier}).
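The LP reformulation splits each $|x_i|$ with an auxiliary variable $t_i\geq|x_i|$ and minimizes $\sum_i t_i$. A minimal sketch using SciPy's \texttt{linprog} (assuming SciPy is available; the tiny $2\times4$ instance is our own example):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP in variables (x, t)."""
    k, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # minimize sum(t)
    I = np.eye(N)
    # |x_i| <= t_i  <=>  x_i - t_i <= 0  and  -x_i - t_i <= 0
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([A, np.zeros((k, N))])
    bounds = [(None, None)] * N + [(0, None)] * N   # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:N]

A = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_bar = np.array([0., 0., 3., 0.])     # sparse ground truth
x_hat = basis_pursuit(A, A @ x_bar)
assert np.allclose(A @ x_hat, A @ x_bar, atol=1e-6)   # feasible
assert abs(np.abs(x_hat).sum() - 3.0) < 1e-6          # optimal l1 is 3
```

Note that this tiny instance does not satisfy any recovery condition, so only feasibility and $\ell_1$-optimality are checked, not recovery of $\bar x$ itself.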
In this paper we consider the case
(see~\cite{NVaswani_WLu_2011,MPFriedlander_etal_2010,LJacques_2010})
when it is known a priori that the solution vector consists of two parts,
only one of which is sparse; in other words,
we have $x=(x_1,x_2)$, where $x_1 \in\RR^{N-r}$
is sparse and $x_2 \in\RR^r$ is possibly dense.
A natural generalization of problem (\ref{minl1}) to this setting of
partially sparse recovery is given by
\begin{eqnarray}
\min \|x_1\|_1 & \operatorname{s.t.} & A_1 x_1 + A_2 x_2 = y,\label{minpartiall1}
\end{eqnarray}
where $A=(A_1, A_2)$, $A_1 \in \RR^{k \times (N-r)}$, and
$A_2 \in \RR^{k \times r}$. We will refer to this setting as {\em partially sparse recovery of size $N-r$}.
One of the key applications of partially sparse recovery is image reconstruction~\cite{NVaswani_WLu_2011},
but such problems also arise naturally in sparse
Hessian recovery~\cite{ABandeira_KScheinberg_LNVicente_2010}.
Vaswani and Lu~\cite{NVaswani_WLu_2011} gave a first sufficient
condition for partially sparse recovery.
Later, Friedlander et al.~\cite{MPFriedlander_etal_2010} proposed a weaker sufficient condition
and covered the extension to the noisy case.
After obtaining our results we were directed to the work of Jacques~\cite{LJacques_2010} who addressed the noisy case, deriving another sufficient condition for partially sparse recovery. His conditions guarantee the same recovery as ours but, as far as we can tell, are not the simple extensions of the NSP and RIP properties.
The conditions in~\cite{NVaswani_WLu_2011,MPFriedlander_etal_2010,LJacques_2010} are
somewhat weaker than the known
restricted isometry property for general sparse recovery, which is natural since the case of partial sparsity can be considered as a case of general sparsity where part of the support of the solution is known in advance.
The contribution of our paper is to introduce
the analogues of restricted isometry and null space properties for the case of partial sparsity.
We prove that these new properties are sufficient for partially sparse recovery
(including the noisy case) and are implied by the original conditions of fully sparse
recovery. We show that it is possible to guarantee recovery of a partially sparse signal using
Gaussian random matrices with the number of measurements an order smaller than the one necessary for general recovery.
\subsection{Notation}
We will use the following notation in this paper. $[N]$ denotes the set of integers
$\{1, \ldots, N\}$, and $[N]^{(s)}$ denotes the set of all subsets of $[N]$ of cardinality $s\leq N$. If $A$ is a matrix, then by $\NNN(A)$ and $\RRR(A)$ we denote the null and range spaces of $A$, respectively. We say that
a vector $x$ is $s-$sparse if at most $s$ components of $x$ are non-zero. This is also denoted by $\|x\|_0\leq s$. Given $v\in\RR^N$ and $S\subseteq[N]$, $v_S\in\RR^N$ denotes the vector defined by $(v_S)_i=v_i$, $i\in S$, and $(v_S)_i=0$, $i\notin S$.
\section{Sparse recovery in compressed sensing}\label{sec:CS}
One of the main questions addressed by Compressed Sensing is under what conditions on
the matrix~$A$ can every sparse vector $\bar x$ be recovered by solving problem~(\ref{minl1}) given~$A$ and the
right hand side $y=A \bar x$.
The next definition is a well known characterization of such
matrices (see, e.g., \cite{ACohen_WDahmen_RDeVore_2009,DDonoho_XHuo_2001}).
\begin{definition}[Null Space Property]\label{def:NSP}
The matrix $A\in\RR^{k\times N}$ is said to satisfy the Null Space Property
(NSP) of order $s$ if, for every $v\in \NNN(A) \setminus \{0\}$ and for
every $S~\in~[N]^{(s)}$, one has
\begin{equation}\label{vsmenor12v}
\|v_S\|_1 \; < \; \frac12\|v\|_1.
\end{equation}
\end{definition}
It is well known that NSP is a necessary and sufficient condition for the recovery of an $s$-sparse vector $\bar x$ (see \cite{HRauhut_2010}).
\begin{theorem}\label{teoremaNSP}
The matrix $A$ satisfies the Null Space Property of order $s$ if and only if,
for every $s-$sparse vector $\bar x$, problem~(\ref{minl1}) with $y=A \bar x$
has a unique solution, given by $x = \bar x$.
\end{theorem}
It is difficult to analyze whether NSP is satisfied. On the other hand, the
\emph{Restricted Isometry Property} (RIP), introduced in~\cite{ECandes_TTao_2006},
is considerably more
useful and insightful, although it provides only sufficient conditions for recovery with~(\ref{minl1}).
We present below the definition of the
\emph{RIP Constant}.
\begin{definition}[Restricted Isometry Property Constant]\label{defRIP}
One says that $\delta_s>0$ is the Restricted Isometry Property Constant,
or \emph{RIP} constant, of order $s$ of the matrix
$A\in\RR^{k\times N}$ if $\delta_s$ is the smallest positive real number such that:
\begin{equation}\label{defdeltasRIP}\left(1-\delta_s\right)\|x\|_2^2 \; \leq \;
\|Ax\|_2^2 \; \leq \; \left(1+\delta_s\right)\|x\|_2^2\end{equation}
for every $s-$sparse vector $x$.
\end{definition}
The following theorem (see, e.g.,~\cite{EJCandes_2009}) provides a useful
sufficient condition for successful recovery by~(\ref{minl1}).
\begin{theorem}\label{teoremaRIP} \cite{EJCandes_2009}
Let $A\in\RR^{k\times N}$ and $2s < k$. If
$\delta_{2s}<\sqrt{2}-1$,
where $\delta_{2s}$ is the RIP constant of $A$ of order
$2s$, then, for every $s-$sparse vector
$\bar x$, problem~(\ref{minl1}) with $y=A \bar x$ has a unique solution,
given by $x = \bar x$.
\end{theorem}
It is known that RIP is satisfied with some probability if the
entries of the matrix are randomly generated
(see, e.g.,~\cite{RBaraniuk_MDavenport_RDeVore_MWakin_2008}) according
to some distribution such as a sub-Gaussian. However, it is in general computationally hard to check whether
it is satisfied by a certain realization matrix~\cite{Bandeira_etal_hardRIP},
and it is still an open problem to find such matrices deterministically when the underlying
system is highly underdetermined (see~\cite{Bandeira_etal_FlatRIP}).
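The concentration behind RIP for random matrices can be illustrated empirically: for $A$ with i.i.d.\ $\mathcal{N}(0,1/k)$ entries and any fixed unit-norm sparse $x$, $\|Ax\|_2^2$ concentrates around $\|x\|_2^2=1$. This is our own illustrative sketch (dimensions and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
k, N, s = 200, 50, 5

# fixed s-sparse unit-norm test vector
x = np.zeros(N)
x[:s] = 1.0 / np.sqrt(s)

# ||Ax||^2 over many independent Gaussian matrices with variance 1/k
samples = []
for _ in range(300):
    A = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, N))
    samples.append(np.linalg.norm(A @ x) ** 2)

mean_ratio = float(np.mean(samples))
# E||Ax||^2 = ||x||^2 = 1; per-sample fluctuations are O(1/sqrt(k))
assert 0.9 < mean_ratio < 1.1
```

This only checks a single fixed $x$; the RIP itself is the much stronger uniform statement over all $s$-sparse vectors, which is why verifying it for a given realization is computationally hard.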
\section{Partial sparse recovery}\label{sec:partialCS}
In this section we consider the following extension of the NSP
to the case of partially sparse recovery.
\begin{definition}[Partial Null Space Property]\label{def:NSPpartial}
We say that $A=(A_1, A_2)$ satisfies the Null Space Property (NSP) of order $s-r$ for partially sparse recovery of size $N-r$ with $r\leq s$
if $A_2$ is full column rank (${\cal N}(A_2)=\{0\}$) and for every $v_1\in\RR^{N-r}\setminus \{0\}$ such that $A_1v_1\in \RRR(A_2)$ and every $S\in[N-r]^{(s-r)}$, we have
\begin{equation}\label{def:NSPpartialigualdade1}
\|(v_1)_S\|_1 \; < \; \frac12\|v_1\|_1.
\end{equation}
\end{definition}
Note that when $r=0$, the partial NSP naturally reduces to the
NSP in Definition~\ref{def:NSP}.
Wang and Yin~\cite{supportdetection} have suggested a stronger NSP
adapted to a setting where the location of the
partial support is not known.
The new property is a necessary and
sufficient condition for any solution of (\ref{minpartiall1})
with $y=A\bar x$ to satisfy $x=\bar x$ whenever $\bar x_1$ is
appropriately sparse.
\begin{theorem}\label{teoremaNSPpartial}
The matrix $A=(A_1, A_2)$ satisfies the Null Space Property of order $s-r$
for Partially Sparse Recovery of size $N-r$ if and only if for every
$\bar x=(\bar x_1,\bar x_2)$ such that $\bar x_1\in\RR^{N-r}$ is $(s-r)-$sparse and
$\bar x_2\in\RR^r$,
problem~(\ref{minpartiall1}) with $y=A \bar x$ has a unique solution, given by
$(x_1,x_2)= (\bar x_1,\bar x_2)$.
\end{theorem}
\begin{proof}
The proof follows the steps of the proof of \cite[Theorem~2.3]{HRauhut_2010} with appropriate
modifications. Let us assume first that for any vector
$(\bar x_1,\bar x_2)\in\RR^N$, where $\bar x_1$ is an $(s-r)-$sparse
vector and $\bar x_2\in\RR^r$, the minimizer $(x_1,x_2)$ of
$\|x_1\|_1$ subject to
$A_1 x_1+A_2 x_2 = A \bar x$ satisfies $x_1= \bar x_1$.
Consider any $v_1 \neq 0$ such that $A_1v_1\in\RRR(A_2)$. Then consider
minimizing $\|x_1\|_1$ subject to $A_1 x_1+A_2 x_2=A_1 (v_1)_S+A_2 v_2$ for any
$v_2 \in\RR^r$ and for any $S\in[N-r]^{(s-r)}$. By the assumption, the corresponding minimizer $(x_1,x_2)$
satisfies $x_1=(v_1)_S$.
Since $A_1v_1\in\RRR(A_2)$, there exists $ u_2$ such that $A_1 (-(v_1)_{S^c}) + A_2 u_2 = A_1 (v_1)_S + A_2 v_2$. As $-(v_1)_{S^c}\neq (v_1)_S$ (otherwise $v_1$ would be zero),
$(-(v_1)_{S^c}, u_2)$ is not the minimizer of $\| x_ 1 \|_1$ subject to
$A_1 x_1 + A_2 x_2 = A_1 (v_1)_S + A_2 v_2$, hence, $\|(v_1)_{S^c}\|_1>\|(v_1)_{S}\|_1$
and~(\ref{def:NSPpartialigualdade1}) holds.
Let us now assume that $A$ satisfies the NSP of order $s-r$
for partially sparse recovery of size $N-r$ (Definition \ref{def:NSPpartial}).
Then, given a vector $(\bar x_1, \bar x_2)\in\RR^N$, where $\bar x_1$ is $(s-r)-$sparse and $\bar x_2\in\RR^r$, and a vector $(u_1,u_2)\in\RR^N$ with
$u_1 \neq \bar x_1$ and satisfying $A_1 u_1 + A_2 u_2 = A_1 \bar x_1 + A_2 \bar x_2$, consider
$(v_1,v_2)=\left((\bar x_1-u_1),(\bar x_2-u_2)\right)\in \NNN(A)$, which implies $A_1v_1\in\RRR(A_2)$
and $v_1\neq 0$.
Thus, setting $S$ to be the support of $\bar x_1$, one has that
\begin{eqnarray*}
\|\bar x_1\|_1&\leq&\|\bar x_1-(u_1)_S\|_1+\|(u_1)_S\|_1\\[1ex]
&=&\|(v_1)_S\|_1+\|(u_1)_S\|_1
<\|(v_1)_{S^c}\|_1+\|(u_1)_S\|_1\\ [1ex]
&=&\|-(u_1)_{S^c}\|_1+\|(u_1)_S\|_1
=\|u_1\|_1,
\end{eqnarray*}
(the strict inequality coming from (\ref{def:NSPpartialigualdade1})),
guaranteeing that all solutions $(x_1,x_2)$ of~(\ref{minpartiall1}) with $y=A \bar x$ satisfy $x_1= \bar x_1$.
It remains to note that $x_2 = \bar x_2$ is
uniquely determined by solving
$A_2 x_2=y-A_1 \bar x_1$ if and only if $A_2$ is full column rank.
\end{proof}
We now define an extension of the RIP to the partially sparse
recovery setting.
For this purpose, let $A=(A_1, A_2)$ be as considered above, under the
assumption that $A_2$ has full column rank. Let
\begin{equation}\label{projM}
{\cal P} \; = \; I-A_2\left(A_2^\top A_2\right)^{-1}A_2^\top
\end{equation}
be the matrix of the orthogonal projection from $\RR^k$ onto
$\RRR\left(A_2\right)^\bot$. Then, the problem of recovering
$(\bar x_1, \bar x_2)$, where $\bar x_1$ is an $(s-r)-$sparse vector satisfying
$A_1 \bar x_1+A_2 \bar x_2=y$, can be stated as the problem of recovering an
$(s-r)-$sparse vector $x_1= \bar x_1$ satisfying $\left({\cal P}A_1\right)x_1={\cal P}y$
and then recovering $x_2 = \bar x_2$ satisfying $A_2x_2=y-A_1\bar x_1$.
The solution of the resulting linear system in the second step exists and is unique
given that
$A_2$ has full column rank and $({\cal P}A_1) \bar x_1={\cal P}y$. Note that
the first step is now reduced to the classical setting of Compressed Sensing.
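The two-step procedure can be checked on a small random instance (our own illustrative NumPy sketch; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
k, N, r = 10, 20, 3
A1 = rng.standard_normal((k, N - r))
A2 = rng.standard_normal((k, r))   # full column rank almost surely

# P = I - A2 (A2^T A2)^{-1} A2^T, the projector onto range(A2)^perp
P = np.eye(k) - A2 @ np.linalg.inv(A2.T @ A2) @ A2.T
assert np.allclose(P @ P, P)       # idempotent
assert np.allclose(P @ A2, 0.0)    # annihilates range(A2)

# step 1 eliminates x2: (P A1) x1 = P y for any consistent y
x1_bar = np.zeros(N - r); x1_bar[0] = 2.0   # sparse part
x2_bar = rng.standard_normal(r)             # dense part
y = A1 @ x1_bar + A2 @ x2_bar
assert np.allclose(P @ A1 @ x1_bar, P @ y)

# step 2 recovers x2 by least squares once x1 is known
x2_rec, *_ = np.linalg.lstsq(A2, y - A1 @ x1_bar, rcond=None)
assert np.allclose(x2_rec, x2_bar)
```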
This motivates the following definition of RIP for partially sparse recovery.
\begin{definition}[Partial RIP]\label{defpartialRIP}
We say that $\delta_{s-r}^r>0$ is the Partial Restricted Isometry
Property Constant of order $s-r$ of the
matrix $A=(A_1, A_2)\in\RR^{k\times N}$, for recovery of size $N-r$ with $r\leq s$, if
$A_2$ is full column rank and
$\delta_{s-r}^r$
is the RIP constant of order $s-r$ (see Definition~\ref{defRIP}) of the
matrix ${\cal P}A_1$, where ${\cal P}$ is given by (\ref{projM}).
\end{definition}
Again, when $r=0$ the Partial RIP reduces to the RIP of
Definition~\ref{defRIP}. We also note that, given a matrix
$A=(A_1, A_2)\in\RR^{k\times N}$ with Partial RIP constant $\delta_{2(s-r)}^r$ of
order $2(s-r)$ for recovery of size $N-r$, satisfying
$\delta_{2(s-r)}^r<\sqrt{2}-1$, Theorems~\ref{teoremaNSP} and~\ref{teoremaRIP},
guarantee that ${\cal P}A_1$ satisfies the NSP of order $s-r$. Thus,
given $\bar x=(\bar x_1, \bar x_2)$ such that $\bar x_1\in\RR^{N-r}$ is $(s-r)-$sparse
and $\bar x_2\in\RR^r$, $\bar x_1$ can be recovered by minimizing the
$\ell_1$-norm of $x_1$ subject to $({\cal P}A_1)x_1={\cal P}A \bar x$ and,
recalling that $A_2$ is full-column rank, $x_2 = \bar x_2$ is uniquely
determined by $A_2x_2=y-A_1 \bar x_1$. (In particular, this implies
that $A$ satisfies the NSP of order $s-r$ for partially sparse recovery
of size $N-r$.)
\section{Partially sparse recovery implied by fully sparse recovery conditions}\label{sec:partial-total}
We are now interested in showing that partially sparse recovery is achievable
under the conditions which guarantee fully sparse recovery.
In particular we will show that the NSP and RIP imply, respectively, the partial NSP and the partial RIP.
We first establish the relationship between
the corresponding null space properties.
\begin{theorem}\label{th:NSP-partial-total}
If a given matrix $A$ satisfies the NSP of order $s$ then it satisfies
the NSP for partially sparse recovery of order $s-r$ for any $r\leq s$.
\end{theorem}
\begin{proof}
Let $A=(A_1,A_2)$ satisfy the
NSP of order $s$. First we note that since $r \leq s$, the NSP implies that $A_2$ is full column rank. Let $v_1\in\RR^{N-r}$ be a non-zero vector such that
$A_1v_1\in\RRR(A_2)$
and let $T\in[N-r]^{(s-r)}$.
Since there exists $v_2$ such that $A_1v_1+A_2v_2=0$, we have that $v = (v_1,v_2) \in {\cal N}(A)\setminus \{0\}$,
and
therefore by setting $S=T\cup ([N]\setminus[N-r])$ and by using the NSP,
$\|(v_1)_T\|_1+\|v_2\|_1 = \|v_S\|_1 < \frac12\|v\|_1 = \frac12\|v_1\|_1+\frac12\|v_2\|_1.$
Thus,
\(\|(v_1)_T\|_1 \; \leq \; \|(v_1)_T\|_1+\frac12\|v_2\|_1 \; < \; \frac12\|v_1\|_1,\)
and $A$ satisfies the NSP of order $s-r$ for partially
sparse recovery of size $N-r$.
\end{proof}
The partial RIP is also implied by the RIP, with no increase in the RIP constant.
\begin{theorem}\label{th:RIP-partial-total}
Let $\delta_s>0$ and $A=(A_1,A_2)$ satisfy the following property:
For every $(s-r)$-sparse vector $x_1\in\RR^{N-r}$ and $x_2\in\RR^r$ we have
\begin{equation}\label{ineqThRIP}
(1-\delta_s)\|x\|_2^2 \leq \|Ax \|_2^2 \leq (1+\delta_s)\|x\|_2^2,
\end{equation}
where $x = (x_1, x_2)$. Then $A$ satisfies the
partial RIP of order $s-r$ with $\delta_{s-r}^r\leq\delta_s$ for partially sparse recovery of size $N-r$, for any $r\leq s$.
\end{theorem}
\begin{proof} First we note that setting $x_1=0$ in (\ref{ineqThRIP}) gives $\|A_2x_2\|_2^2\geq(1-\delta_s)\|x_2\|_2^2$, so $A_2$ has full column rank. Consider now any given $(s-r)-$sparse vector $x_1 \in \RR^{N-r}$.
By setting $x_2 = - \left(A_2^\top A_2\right)^{-1}A_2^\top A_1 x_1$, one obtains
$(1-\delta_s) \|x_1\|_2^2 \leq
\left(1-\delta_s\right) \left( \|x_1\|_2^2+\|x_2\|_2^2 \right)
\leq \|A_1x_1 + A_2x_2\|_2^2 = \| {\cal P} A_1 x_1 \|^2$.
On the other hand, the choice $x_2 = 0$ provides
$\|{\cal P} A_1x_1\|_2^2 \leq
\|A_1x_1\|_2^2 \leq \left(1+\delta_s\right) \|x_1\|_2^2$.
We have thus arrived at the conditions of Definition~\ref{defpartialRIP}.
\end{proof}
\begin{corollary}
Let $A=(A_1,A_2)$ satisfy the
RIP of order $s$ with the RIP constant $\delta_s$. Then $A$ satisfies the
partial RIP of order $s-r$ with $\delta_{s-r}^r\leq\delta_s$ for partially sparse recovery of size $N-r$, for any $r\leq s$.
\end{corollary}
\section{Partial (and total) compressibility recovery with noisy measurements}
In most realistic applications the observed measurement vector $y$
often contains noise and the true signal vector $\bar x$ is
not sparse
but rather compressible, meaning that most components
are very small but not necessarily zero. It is known, however, that Compressed Sensing is robust to noise and can
approximately recover compressible vectors.
This statement is formalized in the following theorem taken from~\cite{EJCandes_2009}.
\begin{theorem}\label{th:noisyclassical}
Assume that the matrix $A\in\RR^{k\times N}$ satisfies RIP with
the RIP constant $\delta_{2s}$ such that
\(
\delta_{2s} \; < \; \sqrt{2} - 1.
\)
For any $\bar{x} \in\RR^N$, let noisy measurements $y=A \bar{x} +\epsilon$ be given
satisfying $\| \epsilon \|_2\leq\eta$. Let $x^\#$ be a solution of
\begin{equation}\label{noisyl1_app}
\min_{x \in\RR^N}\|x\|_1 \quad \mbox{s.t.} \quad \|Ax-y\|_2\leq\eta.
\end{equation}
Then
\begin{equation}\label{noisyl1_result}
\|x^\# - \bar{x}\|_2 \; \leq \; c\eta + d\frac{\sigma_s(\bar x)_1}{\sqrt{s}},
\end{equation}
for constants $c,d$ only depending on the RIP constant, and where
$\sigma_s(\bar x)_1=\min_{x:\, \|x\|_0\leq s}\|x-\bar{x}\|_1$.
\end{theorem}
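The quantity $\sigma_s(\bar x)_1$ is simply the $\ell_1$-norm of the $N-s$ smallest-magnitude entries of $\bar x$, i.e., the $\ell_1$ error of the best $s$-term approximation. A quick sketch (the toy vector is our own example):

```python
import numpy as np

def best_s_term_error(x, s):
    """sigma_s(x)_1: l1 distance from x to the nearest s-sparse vector."""
    mags = np.sort(np.abs(x))       # ascending magnitudes
    return mags[:len(x) - s].sum()  # keep the s largest, sum the rest

x = np.array([5.0, -3.0, 1.0, 0.5])
assert best_s_term_error(x, 2) == 1.5   # drop entries 1.0 and 0.5
assert best_s_term_error(x, 4) == 0.0   # x itself is 4-sparse
```

In particular, $\sigma_s(\bar x)_1=0$ exactly when $\bar x$ is $s$-sparse, so the noiseless sparse case is recovered from (\ref{noisyl1_result}) by taking $\eta=0$.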
The following theorem provides an analogous result for the partially
sparse recovery setting introduced
in Section~\ref{sec:partialCS}.
\begin{theorem} \label{th:noisy_partial}
Assume that the matrix $A=\left(A_1,A_2\right)\in\RR^{k\times N}$
satisfies partial RIP of order $2(s-r)$ for recovery of size $N-r$ with the RIP constant $\delta_{2(s-r)}^r<\sqrt{2}-1$.
For any $\bar{x}=(\bar{x}_1,\bar{x}_2)\in\RR^{N}$,
let noisy measurements $y=A \bar{x}+\epsilon$ be given satisfying $\|\epsilon \|_2\leq\eta$. Let $x^\ast=(x_1^\ast,x_2^\ast)$ be
a solution of
\begin{equation}\label{noisyl1_partial}
\min_{x=(x_1,x_2)\in\RR^{N}}\|x_1\|_1 \quad \text{s.t.} \quad \|Ax-y\|_2\leq\eta.
\end{equation}
Then
\begin{equation}\label{noisyl1_resultpart}
\|x_1^\ast - \bar{x}_1 \|_2 \; \leq \; c\eta + d\frac{\sigma_{s-r}(\bar{x}_1)_1}{\sqrt{s-r}},
\end{equation}
and
\begin{equation} \label{noisyl1_resultpart_2}
\|x_2^\ast - \bar{x}_2\|_2 \; \leq \; C_2 \left(2\eta+ C_1 \left(c\eta + d\frac{\sigma_{s-r}(\bar{x}_1)_1}{\sqrt{s-r}}\right)\right),
\end{equation}
for constants $c,d$ only depending on $\delta_{2(s-r)}^r$, and where $C_1$ and $C_2$ are given by
\(
C_1 = \|A_1\|_2,
\)
and
\(
C_2 = \|A_2^\dagger\|_2,
\)
(Recall that, since $A_2$ has full column rank, $A_2^\dagger = (A_2^\top A_2)^{-1} A_2^\top$ and $C_2 > 0$.)
\end{theorem}
\begin{proof}
From Theorem~\ref{th:RIP-partial-total}, the matrix $\mathcal{P}A_1$, where $\mathcal{P}$ is given by~(\ref{projM}),
satisfies the condition of Theorem~\ref{th:noisyclassical}. Thus,
since $\mathcal{P}$ is a projection matrix,
$
\| \mathcal{P} A_1 \bar{x}_1 - \mathcal{P}y\| = \| \mathcal{P} A \bar{x} - \mathcal{P}y\|
\leq \| A \bar{x} - y \| \; \leq \; \eta,
$
and a solution~$x_1^\#$ of
\begin{equation}\label{noisyl1_proj}
\min_{x_1\in\RR^{N-r}}\|x_1\|_1 \quad \text{s.t.} \quad \| \mathcal{P} A_1x_1 - \mathcal{P}y\|_2\leq\eta,
\end{equation}
satisfies
\begin{equation}\label{noisyl1_proj_result}
\|x_1^\# - \bar{x}_1\|_2 \; \leq \; c\eta + d\frac{\sigma_{s-r}(\bar{x}_1)_1}{\sqrt{s-r}}.
\end{equation}
Now, we will prove that the solutions of problems~(\ref{noisyl1_partial}) and~(\ref{noisyl1_proj}) coincide
in their $x_1$ parts,
completing thus the proof of~(\ref{noisyl1_resultpart}).
Let $(x^*_1,x^\ast_2)$ be a feasible point of~(\ref{noisyl1_partial}). Again,
since $\mathcal{P}$ is a projection matrix, we obtain that
\begin{eqnarray*}
\|\mathcal{P} A_1x^\ast_1- \mathcal{P}y\|_2 &=&
\| \mathcal{P} ( A_1x_1^\ast+A_2x_2^\ast-y)\|_2 \\ &\leq& \|A_1x_1^\ast+A_2x_2^\ast-y\|_2 \;\;\; \leq \;\;\; \eta,
\end{eqnarray*}
which proves that $x^\ast_1$ is a feasible point of~(\ref{noisyl1_proj}). Now let $x^\#_1$ be a feasible point of (\ref{noisyl1_proj}). Since $I- \mathcal{P}$ projects (orthogonally)
onto the column space of~$A_2$ there must exist an $x_2^\#$ such that
$ A_2x_2^\#=(I-\mathcal{P})(y-A_1x_1^\#)$, and then
$\|A_1x_1^\#+A_2x_2^\#-y\|_2 = \| \mathcal{P} A_1x_1^\#- \mathcal{P} y\|_2 \leq \eta$.
Therefore $(x^\#_1,x^\#_2)$ is a feasible point of~(\ref{noisyl1_partial}).
Hence we have proved that any solution of problem~(\ref{noisyl1_partial}) is, in its first block $x_1^\ast$, also a solution of problem~(\ref{noisyl1_proj}),
and the inequality~(\ref{noisyl1_resultpart}) results directly from~(\ref{noisyl1_proj_result}).
We now use this inequality to bound the error on the reconstruction of $\bar{x}_2$.
Since both $\bar x$ and $x^\ast$ satisfy the measurement constraint $\| Ax - y \|_2 \leq \eta$, we have that
$\|A_1(x_1^\ast-\bar{x}_1)+A_2(x_2^\ast-\bar{x}_2)\|_2 \leq 2\eta,$
and thus $\|A_2(x_2^\ast-\bar{x}_2)\|_2 \leq 2\eta+\|A_1(x_1^\ast-\bar{x}_1)\|_2$.
Using the definitions of $C_1$ and $C_2$ we have
$\|x_2^\ast-\bar{x}_2\|_2 \leq C_2 \left( 2\eta+C_1\|x_1^\ast-\bar{x}_1\|_2 \right)$,
and the result~(\ref{noisyl1_resultpart_2}) follows from bounding $\|x_1^\ast-\bar{x}_1\|_2$
by~(\ref{noisyl1_resultpart}) in this last inequality.
\end{proof}
The condition on the matrix $A$ imposed in the previous theorem involved only its partial
RIP constant.
In the next proposition we describe how one can bound the constants $C_1$ and $C_2$ in terms of the RIP constant of $A$ (the proof is simple and is omitted, see also \cite{COSAMP}).
\begin{proposition}
Consider the RIP constant~$\delta_{s}$ of order~$s$ of $A=\left(A_1,A_2\right)\in\RR^{k\times N}$. The constants $C_1$ and $C_2$ of Theorem~\ref{th:noisy_partial} satisfy
\(
C_1 \; \leq \; \sqrt{1+\delta_{s}}
\)
and
\(
C_2 \; \leq \; \frac{1}{\sqrt{1-\delta_{s}}}.
\)
\end{proposition}
\section{Matrices with Partial RIP}
In this section we investigate regimes of $N$, $s$, and $k$ for which random Gaussian matrices satisfy partial RIP.
Similar results can be obtained for other families of random matrices, like sub-Gaussian or Bernoulli matrices.
\begin{theorem}
Let $0<\delta<1$ and $r\leq s$. Let $A =(A_1,A_2)$ with $A_1\in\RR^{k\times (N-r)}$
and $A_2 \in \RR^{k\times r}$ have independent Gaussian entries
with variance $1/k$. Then,
as long as
\begin{eqnarray}
k > \frac{2\times48}{3\delta^2 - \delta^3}\left( (s-r)\log\left(\frac{N-r}{s-r}e\right) + s \log\left(\frac{12}{\delta}\right)\right), \label{boundbelowforkwithC}
\end{eqnarray}
$A=(A_1,A_2)$ satisfies partial RIP of order $s-r$ with $\delta_{s-r}^r\leq\delta$ for partially sparse recovery of size $N-r$, with high probability.
\end{theorem}
\begin{proof}
Given a particular sparsity pattern, the probability that (\ref{ineqThRIP}) does not hold is at most
(see~\cite[Lemma 5.1]{RBaraniuk_MDavenport_RDeVore_MWakin_2008})
\[
2\left(12/\delta\right)^se^{-\left(\frac{\delta^2}{16}-\frac{\delta^3}{48}\right)k}.
\]
There are ${N-r \choose s-r} \leq \left(\frac{N-r}{s-r}e\right)^{s-r}$ different sparsity patterns (see, e.g., \cite{RBaraniuk_MDavenport_RDeVore_MWakin_2008}). Let $\mathcal{P}$ denote the probability that $A=(A_1,A_2)$ does not satisfy the partial RIP of order $s-r$ with $\delta_{s-r}^r=\delta$ for partially sparse recovery of size $N-r$. For this to happen, (\ref{ineqThRIP}) has to fail for at least one sparsity pattern. Setting $\beta = \frac{\delta^2}{16}-\frac{\delta^3}{48}$ and using a union bound, we obtain
\begin{eqnarray*}
\mathcal{P} &\leq &e^{(s-r)\log\left(\frac{N-r}{s-r}e\right)}2\left(\frac{12}{\delta}\right)^se^{-\beta k} \\
&\leq &2e^{\left((s-r)\log\left(\frac{N-r}{s-r}e\right)+s\log\left(\frac{12}{\delta}\right)-\beta k\right)} \\
&\leq &2e^{-\beta\left[k - \frac1\beta\left((s-r)\log\left(\frac{N-r}{s-r}e\right)+s\log\left(\frac{12}{\delta}\right)\right)\right]} \\
&\leq &2e^{-\left[(s-r)\log\left(\frac{N-r}{s-r}e\right)+s\log\left(\frac{12}{\delta}\right)\right]}, \\
&\leq &2\left(\frac{N-r}{s-r}e\right)^{-(s-r)}\left(\frac{12}{\delta}\right)^{-s},
\end{eqnarray*}
where the second to last inequality was obtained using (\ref{boundbelowforkwithC}).
It is easy to see that at least one of $\left(e(N-r)/(s-r)\right)^{-(s-r)}$ and $\left(\frac{12}{\delta}\right)^{-s}$ goes to zero polynomially with $N$; thus $\mathcal{P}\leq \OOO\left(N^{-\OOO(1)}\right)$.
\end{proof}
Note that the condition (\ref{boundbelowforkwithC}) can be asymptotically smaller than the one found in the classical case $r=0$. If, e.g., $s-r=\OOO(1)$ then $(\ref{boundbelowforkwithC})$ just requires $k = \OOO(s + \log(N-r))$ instead of the classical $k = \OOO(s \log(N/s))$.
\section{Concluding Remarks}
In some applications of Compressed Sensing one may be interested in
a sparse (or compressible) vector whose support is partially known in advance.
In such a setting we show that one can
consider the $\ell_1$-minimization of the part of the vector for which
the support is not known. We have shown that such a sparse recovery can be then
ensured under conditions that are potentially weaker than those assumed for the
full approach.
We have explored this feature to show that
it is possible to guarantee partial sparse recovery (with Gaussian random matrices)
for an order of measurements below the one necessary for general recovery.
\section*{Acknowledgments}
We would like to thank Rachel Ward (Math. Dept.,
UT at Austin) for interesting discussions on the topic of this paper. We also acknowledge the referees for helping us improve the paper.
\bibliographystyle{IEEEtran}
(End of arXiv:1304.2809, ``On partial sparse recovery''.)

arXiv:2007.02008 --- A spectral extremal problem on graphs with given size and matching number

\textbf{Abstract.} Brualdi and Hoffman (1985) proposed the problem of determining the maximal spectral radius of graphs with given size. In this paper, we consider the Brualdi-Hoffman type problem of graphs with given matching number. The maximal $Q$-spectral radius of graphs with given size and matching number is obtained, and the corresponding extremal graphs are also determined.

\section{Introduction}\label{s-1}
Unless stated otherwise, we follow \cite{Boundy2008,Cvetkovic2010} for terminology and notation. All graphs considered here are simple and undirected. Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. The order of $G$ is the number of vertices in $V(G)$. The number of edges in $E(G)$ is called the size of $G$, denoted by $m(G)$. The signless Laplacian matrix of $G$ is denoted by $Q(G)$.
The largest eigenvalue of $Q(G)$ is called the $Q$-spectral radius of $G$, written $q(G)$.
The spectral extremal problem is a classical topic in spectral graph theory:
it asks for the extremal value of a spectral parameter of graphs under given constraints.
In 1985, Brualdi and Hoffman proposed the following spectral extremal problem:
\begin{prob}{(Brualdi-Hoffman Problem)}\label{B-H problem}
For graphs of size $m$, what is the maximum of a spectral parameter?
\end{prob}
When $m$ is equal to $\binom{k}{2}$ for some integer $k$, Brualdi and Hoffman \cite{Brualdi1985} determined the maximal spectral radius of graphs of size $m$. Moreover, Brualdi and Hoffman proposed a conjecture for any $m$.
\begin{conj}{(Brualdi and Hoffman \cite{Brualdi1985})}
If $m=\binom{k}{2}+s$, where $s<k$, the maximal spectral radius of a graph $G$ of size $m$ is attained by taking the complete graph $K_{k}$ with $k$ vertices and adding a new vertex which is joined to $s$ of the vertices of $K_{k}$.
\end{conj}
Friedland \cite{Friedland1985} confirmed the conjecture in some special cases, and it was finally proved by Rowlinson \cite{Rowlinson1988}. However, the Brualdi-Hoffman problem becomes more difficult when further constraints are imposed, for instance when both the size and the order are fixed. Brualdi and Solheid \cite{Brualdi1986} considered the maximal spectral radius of connected graphs of order $n$ and size $m=n+k$, where $k\geq 0$. For $0\leq k\leq 5$ and $n$ sufficiently large, they proved that the extremal graph is the graph obtained from the star $K_{1,n-1}$ by adding edges from a pendant vertex to the other pendant vertices. Brualdi and Solheid conjectured that the same conclusion holds for every $k$, provided $n$ is sufficiently large with respect to $k$. Later, Cvetkovi\'{c} and Rowlinson \cite{Cvetkovic1988} answered this conjecture affirmatively. However, for general positive integers $m$ and $n$,
the Brualdi-Hoffman problem for graphs of order $n$ and size $m$ is still open. In order to determine the extremal graph under the additional constraint on the order, many analyses of the extremal graph have been presented (see, for example, \cite{Andelic2010,Andelic2011,Bhattacharya2008,Petrovic2015,Simic2004,Simic2010}).
Therefore, it is interesting to investigate the Brualdi-Hoffman problem under additional constraints.
Matching theory is a basic subject in graph theory, with many important applications in theoretical chemistry and combinatorial optimization \cite{Lovasz1986}. Recall that a matching in a graph is a set of pairwise nonadjacent edges. The number of edges in a maximum matching of a graph $G$ is called the matching number of $G$, denoted by $\beta(G)$. The matching number of a graph is closely related to its spectral parameters. In \cite{O2010}, O and Cioab\v{a} presented connections between the eigenvalues and the matching number of a regular graph. Cioab\v{a} and Gregory \cite{Cioaba2007} obtained spectral sufficient conditions for the existence of a large matching in regular graphs. A graph contains a perfect matching if its matching number is half of its order. Several spectral sufficient conditions guaranteeing that a graph has a perfect matching were proved in \cite{Brouwer2005,Cioaba2005,Cioaba2009}.
The matching number and ($Q$-) spectral radius of graphs were investigated in many papers (see \cite{Chang2003,Feng2007a,Feng2007,Hou2002,Lin2007,Li2014,Shen2017,Yu2008}).
Inspired by these observations, we consider the Brualdi-Hoffman problem under an additional constraint on the matching number, that is,
\begin{prob}\label{problem}
For graphs of size $m$ and matching number $\beta$, what is the maximum of the $Q$-spectral radius?
\end{prob}
We remark that Problem \ref{problem} is trivial when $\beta=1$. Let $G$ be a graph with size $m$ and matching number $\beta=1$. Thus, $G$ is a star or a triangle (only for $m=3$) with possibly some isolated vertices.
In particular, if $m=3$, then the maximum of the $Q$-spectral radius is $q(K_{1,3})=q(K_3)=4$. If $m\neq 3$, then the maximum of the $Q$-spectral radius is $q(K_{1,m})=m+1$.
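These closed-form values are easy to verify numerically. The sketch below (plain Python; the helper name \texttt{q\_spectral\_radius} is ours, not from the paper) approximates the $Q$-spectral radius by power iteration on $Q(G)=D(G)+A(G)$, which is entrywise nonnegative with positive diagonal for a graph without isolated vertices, so the iteration converges to the Perron value.

```python
def q_spectral_radius(n, edges, iters=500):
    """Approximate the Q-spectral radius of a graph on vertices 0..n-1
    by power iteration on the signless Laplacian Q = D + A."""
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
        deg[u] += 1; deg[v] += 1
    x = [1.0] * n
    q = 0.0
    for _ in range(iters):
        # y = Q x, computed row by row from degrees and adjacency lists
        y = [deg[i] * x[i] + sum(x[j] for j in adj[i]) for i in range(n)]
        q = max(y)              # sup-norm estimate of the Perron value
        x = [yi / q for yi in y]
    return q

star = [(0, i) for i in range(1, 6)]                    # K_{1,5}, so m = 5
print(q_spectral_radius(6, star))                        # ≈ 6.0 = m + 1
print(q_spectral_radius(3, [(0, 1), (1, 2), (0, 2)]))    # K_3: ≈ 4.0
```

For $K_{1,m}$ the $Q$-spectrum is $\{m+1,\,1^{(m-1)},\,0\}$, so the iteration converges geometrically at rate $1/(m+1)$.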
Let $S_{a,b,c}$ denote the graph obtained from a vertex $v_1$ by attaching $a$ pendant edges, $b$ pendant paths of length $2$
and $c$ pendant triangles (see \textcolor[rgb]{0.00,0.07,1.00}{Fig. \ref{fig1}}).
When $\beta\geq 2$, Problem \ref{problem} is solved in the following theorem.
\begin{thm} \label{th1}
Let $G$ be a graph of matching number $\beta\geq2$ and size $m\geq \beta$.
Then $q(G)\leq q(S_{a,b,c})$,
with equality if and only if $G\cong S_{a,b,c}$ with possibly some isolated edges and isolated vertices.
Moreover,\\
(i) if $m\geq 3\beta-1$, then $a=m-3\beta+3$, $b=0$ and $c=\beta-1$;\\
(ii) if $m\leq 3\beta-2$ and $m-\beta$ is odd, then $a=b=1$ and $c=\frac{m-\beta-1}2$;\\
(iii) if $m\leq 3\beta-2$ and $m-\beta$ is even, then $a=1$, $b=0$ and $c=\frac{m-\beta}2$.
\end{thm}
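For concreteness, the case analysis of Theorem \ref{th1} can be packaged as a small routine. The function below is our own sketch: the number $d$ of isolated edges accompanying $S_{a,b,c}$ is taken from the proof in the next section (it equals $0$ in case (i)), and the internal assertions check the two identities the proof relies on, namely the edge count $a+2b+3c+d=m$ and the matching number $b+c+d+1=\beta$.

```python
def extremal_parameters(m, beta):
    """(a, b, c, d): per Theorem 1, S_{a,b,c} together with d isolated
    edges attains the maximal Q-spectral radius among graphs of size m
    and matching number beta (d is read off from the proof)."""
    assert beta >= 2 and m >= beta
    if m >= 3 * beta - 1:                       # case (i)
        a, b, c, d = m - 3 * beta + 3, 0, beta - 1, 0
    elif (m - beta) % 2 == 1:                   # case (ii): m <= 3*beta - 2
        a, b, c = 1, 1, (m - beta - 1) // 2
        d = (3 * beta - m - 3) // 2
    else:                                       # case (iii)
        a, b, c = 1, 0, (m - beta) // 2
        d = (3 * beta - m - 2) // 2
    # sanity checks: total size and matching number are as claimed
    assert a + 2 * b + 3 * c + d == m
    assert b + c + d + 1 == beta
    return a, b, c, d

print(extremal_parameters(10, 3))   # (4, 0, 2, 0): case (i)
```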
The proof of Theorem \ref{th1} is provided in the next section. Before proceeding further, let us recall some definitions and notations.
Let $G$ be a graph with signless Laplacian matrix $Q(G)$ and $Q$-spectral radius $q(G)$. It is well-known that
$$q(G)=\max_{\|Y\|=1}Y^{T}Q(G)Y=X^{T}Q(G)X=\sum_{uv\in E(G)}(x_{u}+x_{v})^{2},$$
where the maximum is attained at a nonnegative unit eigenvector $X$, which is called the principal eigenvector of $Q(G)$.
\begin{figure}[t]
\centering \setlength{\unitlength}{2.4pt}
\begin{center}
\unitlength 0.8mm
\linethickness{0.4pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(59.368,49.236)(0,0)
\put(22.368,24.368){\circle*{4}}
\put(22.368,30.368){\makebox(0,0)[cc]{$v_1$}}
\put(43.868,47.618){\makebox(0,0)[cc]{$u_1$}}
\multiput(22.368,24.368)(-.0337078652,.0389245586){623}{\line(0,1){.0389245586}}
\put(1.118,48.618){\line(0,-1){13.75}}
\multiput(1.118,34.868)(.0666144201,-.0336990596){319}{\line(1,0){.0666144201}}
\put(1.118,47.618){\circle*{4}}
\put(1.118,34.618){\circle*{4}}
\multiput(22.118,24.9)(-.0690789474,-.0317171053){304}{\line(-1,0){.0690789474}}
\put(1.118,16.118){\line(0,-1){13.5}}
\multiput(1.118,1.368)(.0337301587,.0369047619){630}{\line(0,1){.0369047619}}
\put(1.118,16.118){\circle*{4}}
\put(1.118,1.368){\circle*{4}}
\put(1.118,29.118){\circle*{1.2}}
\put(1.118,25.368){\circle*{1.2}}
\put(1.118,21.618){\circle*{1.2}}
\put(22.368,24.618){\line(2,3){15.5}}
\put(37.868,47.618){\circle*{4}}
\put(37.868,41.868){\circle*{1.2}}
\put(37.868,33.118){\circle*{1.2}}
\put(37.868,37.368){\circle*{1.2}}
\multiput(22.368,24.868)(.140625,.033482143){112}{\line(1,0){.140625}}
\put(37.868,28.618){\circle*{4}}
\multiput(22.368,24.118)(.094551282,-.033653846){156}{\line(1,0){.094551282}}
\put(37.868,18.868){\line(1,0){21}}
\put(37.868,18.868){\circle*{4}}
\put(58.868,18.868){\circle*{4}}
\multiput(22.118,25.118)(.0336956522,-.052173913){460}{\line(0,-1){.052173913}}
\put(37.868,1.118){\line(1,0){21}}
\put(58.868,1.368){\circle*{4}}
\put(37.868,1.368){\circle*{4}}
\put(37.868,14.618){\circle*{1.2}}
\put(37.868,10.868){\circle*{1.2}}
\put(37.868,7.368){\circle*{1.2}}
\end{picture}
\end{center}
\caption{\footnotesize{The graph $S_{a,b,c}$, where $a,b,c\geq0$}.}\label{fig1}
\end{figure}
\section{Proofs}
Let $\mathfrak{G}_{m,~\geq\beta}$ be the set of graphs of size $m$ with at least $\beta$ independent edges.
In order to prove \textcolor[rgb]{0.00,0.07,1.00}{Theorem \ref{th1}},
we first look for the extremal graph with maximal $Q$-spectral radius among all graphs in $\mathfrak{G}_{m,~\geq\beta}$.
\begin{lem}\label{le1}
Let $G$ be a non-empty graph and $X$ be the principal eigenvector of $Q(G)$ with coordinate $x_v$ corresponding to $v\in V(G)$.
Assume that $u_1u_2\in E(G)$ and $v_1v_2\notin E(G)$.
If $x_{v_1}+x_{v_2}\geq x_{u_1}+x_{u_2}$ and $x_{v_1}+x_{v_2}>0$, then $q(G-u_1u_2+v_1v_2)>q(G)$.
\end{lem}
\begin{proof}
For convenience, let $G'=G-u_1u_2+v_1v_2$. Then
$$q(G')-q(G)\geq X^TQ(G')X-X^TQ(G)X=(x_{v_1}+x_{v_2})^2-(x_{u_1}+x_{u_2})^2\geq0.$$
Thus, $q(G')\geq q(G).$
If $q(G')=q(G),$
then $q(G')=X^TQ(G')X$ and hence $X$ is also the principal eigenvector of $Q(G')$.
We may assume that $v_2\notin\{u_1,u_2\}$, since $\{v_1,v_2\}\neq \{u_1,u_2\}$.
Then $d_{G'}(v_2)=d_G(v_2)+1$.
Note that $q(G)x_v=d_{G}(v)x_v+\sum_{w\in N_{G}(v)}x_w$ for any $v\in V(G)$.
Thus,
$$q(G')x_{v_2}=d_{G'}(v_2)x_{v_2}+\sum_{w\in N_{G'}(v_2)}x_w=
d_{G}(v_2)x_{v_2}+\sum_{w\in N_{G}(v_2)}x_w+x_{v_1}+x_{v_2}>q(G)x_{v_2},$$
which contradicts $q(G')=q(G)$.
Therefore, $q(G')>q(G)$.
\end{proof}
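Lemma \ref{le1} is easy to illustrate numerically on a toy instance of our own choosing: take $G=P_5$, remove the end edge $u_1u_2=\{0,1\}$ and insert the non-edge $v_1v_2=\{1,3\}$. By the symmetry of the principal eigenvector of $Q(P_5)$ we have $x_3=x_1>x_0$, so the hypothesis $x_{v_1}+x_{v_2}\geq x_{u_1}+x_{u_2}$ with $x_{v_1}+x_{v_2}>0$ holds, and the lemma predicts a strict increase of the $Q$-spectral radius.

```python
def q_radius_and_vector(n, edges, iters=500):
    """Power iteration on the signless Laplacian Q = D + A; returns the
    approximate Q-spectral radius and (sup-normalized) principal vector."""
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
        deg[u] += 1; deg[v] += 1
    x = [1.0] * n
    q = 0.0
    for _ in range(iters):
        y = [deg[i] * x[i] + sum(x[j] for j in adj[i]) for i in range(n)]
        q = max(y)
        x = [yi / q for yi in y]
    return q, x

P5 = [(0, 1), (1, 2), (2, 3), (3, 4)]
q1, x = q_radius_and_vector(5, P5)
assert x[1] + x[3] > x[0] + x[1]          # hypothesis of Lemma 1 holds
G2 = [(1, 2), (2, 3), (3, 4), (1, 3)]     # G - u1u2 + v1v2 (vertex 0 isolated)
q2, _ = q_radius_and_vector(5, G2)
print(q1, q2)                              # q2 > q1, as Lemma 1 predicts
```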
We now introduce a special matching of a non-empty graph $G$.
\begin{defi}\label{de1}
Let $G$ be a non-empty graph and $X$ be the principal eigenvector of $Q(G)$ with coordinate $x_v$ corresponding to $v\in V(G)$.
A matching $\{u_1v_1, u_2v_2,\ldots, u_{\beta(G)} v_{\beta(G)}\}$ of $G$ is said to be extremal to $X$ and denoted by $M^*(G)$,
if $$\sum_{u_iv_i\in M^*(G)}(x_{u_i}+x_{v_i})^2=\max_M\sum_{uv\in M}(x_u+x_v)^2,$$
where $M$ ranges over all maximum matchings of $G$.
\end{defi}
Let $G^*$ be the extremal graph with maximal $Q$-spectral radius among all graphs in $\mathfrak{G}_{m,~\geq\beta}$,
and $X^*$ be the principal eigenvector of $Q(G^*)$ with coordinate $x^*_v$ corresponding to $v\in V(G^*)$.
The following lemma establishes a property of the extremal matching of $G^*$.
\begin{lem}\label{le2}
Let $M^*(G^*)=\{u_1v_1, u_2v_2,\ldots, u_{\beta(G^*)} v_{\beta(G^*)}\}$ and $V^*=\{u_i,v_i~|~i=1,2,\ldots, \beta(G^*)\}$.
Then $x^*_w\leq \min_{v\in V^*}x^*_v$ for any vertex $w\in V(G^*)\setminus V^*$.
\end{lem}
\begin{proof}
Without loss of generality, we may assume that $x^*_{u_1}=\min_{v\in V^*}x^*_v$.
Let $w$ be an arbitrary vertex in $V(G^*)\setminus V^*$. It suffices to show that $x^*_w\leq x^*_{u_1}$.
Suppose to the contrary that $x^*_w>x^*_{u_1}$.
If $w$ is adjacent to $v_1$ in $G^*$, then
$$\sum_{u_iv_i\in M^*(G)}(x^*_{u_i}+x^*_{v_i})^2<\sum_{uv\in (M^*(G^*)\setminus \{u_1v_1\})\cup \{wv_1\}}(x^*_u+x^*_v)^2,$$
which contradicts the definition of $M^*(G^*)$. Thus, $wv_1\notin E(G^*)$.
Let $G=G^*-u_1v_1+wv_1$. Clearly, $G\in\mathfrak{G}_{m,~\geq\beta}$. Since $X^*$ is a nonnegative vector, we have $x^*_w>x^*_{u_1}\geq0$, and so $x^*_w+x^*_{v_1}>x^*_{u_1}+x^*_{v_1}$.
It follows from \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}} that $q(G)>q(G^*)$, which contradicts the maximality of $q(G^*)$.
This completes the proof.
\end{proof}
Let $M^*(G^*)$ and $V^*$ be the sets defined in \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le2}}.
Set $x^*_{v_1}=\max_{v\in V^*}x^*_v$. We define two edge subsets $E_1(G^*)$ and $E_2(G^*)$ of $G^*$,
where $E_1(G^*)=M^*(G^*)\cup \{v_1v~|~v\in N_{G^*}(v_1)\}$ and $E_2(G^*)=E(G^*)\setminus E_1(G^*).$
The following theorem gives a preliminary characterization of the extremal graph $G^*$.
\begin{thm}\label{th2}
If $m(G^*)\geq \beta(G^*)+5$, then $d_{G^*}(u_1)=1$, and $G^*$ is isomorphic to $S_{a,b,c}$
with possibly some isolated edges and isolated vertices.
\end{thm}
\begin{proof}
Recall that $x^*_{v_1}=\max_{v\in V^*}x^*_v$.
By \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le2}},
we have $x^*_{v_1}=\max_{v\in V(G^*)}x^*_v$.
For convenience, let $E_i=E_i(G^*)$ for $i\in\{1,2\}$.
If $E_2=\emptyset$, then the statement holds immediately.
Suppose that $E_2\neq\emptyset$. Let us define two graphs $G_1$ and $G_2$ as follows:
(i) add isolated vertices $w_1,w_2,\ldots,w_{|E_2|}$ to $G^*$
such that $V(G_1)=V(G_2)=V(G^*)\cup\{w_i~|~i=1,2,\ldots,|E_2|\};$
(ii) $E(G_1)=E_1\cup E_2$ and $E(G_2)=E_1\cup E_2'$, where $E_2'=\{v_1w_i~|~i=1,2,\ldots,|E_2|\}.$ \\
Let $X_1=({X^*}^T,0,0,\ldots,0)^T$,
where the number of extended zero-components is $|E_2|$.
Clearly, $q(G_1)=q(G^*)$ and $X_1$ is the principal eigenvector of $Q(G_1)$.
Let $X_2$ be the principal eigenvector of $Q(G_2)$ with coordinate $x_v$ corresponding to $v\in V(G_2)$.
Then
\begin{eqnarray*}
{X_1}^TX_2[q(G_2)-q(G_1)] &=& {X_1}^Tq(G_2)X_2-[q(G_1)X_1]^TX_2\\
&=& {X_1}^TQ(G_2)X_2-[Q(G_1)X_1]^TX_2\\
&=& {X_1}^T[Q(G_2)-Q(G_1)]X_2.
\end{eqnarray*}
Thus,
\begin{eqnarray}\label{eq1}
{X_1}^TX_2[q(G_2)-q(G_1)]=\sum_{v_1w_i\in E_2'}(x^*_{v_1}+0)(x_{v_1}+x_{w_i})-\sum_{uv\in E_2}(x^*_u+x^*_v)(x_u+x_v).
\end{eqnarray}
Note that each $w_i$ is a pendant vertex of $G_2$. Then $q(G_2)x_{w_i}=x_{w_i}+x_{v_1},$
that is,
\begin{eqnarray}\label{eq2}
x_{w_i}=\frac{x_{v_1}}{q(G_2)-1}
\end{eqnarray}
for $i=1,2,\ldots, |E_2|$.
On the other hand, we have
\begin{eqnarray}\label{eq3}
x^*_u+x^*_v\leq 2x^*_{v_1}
\end{eqnarray}
for any $uv\in E_2$, since $x^*_{v_1}=\max_{v\in V(G^*)}x^*_v$.
Moreover, we can see that $d_{G_2}(v)\leq2$ for any $v\in V(G_2)\setminus \{v_1\}$.
Let $x_{v^*}=\max_{v\in V(G_2)\setminus \{v_1\}}x_v$.
Then $$q(G_2)x_{v^*}=d_{G_2}(v^*)x_{v^*}+\sum_{v\in N_{G_2}(v^*)}x_v\leq 3x_{v^*}+x_{v_1},$$
that is, $x_{v^*}\leq\frac{x_{v_1}}{q(G_2)-3}$.
This implies that
\begin{eqnarray}\label{eq4}
x_u+x_v\leq 2x_{v^*}\leq\frac{2x_{v_1}}{q(G_2)-3}
\end{eqnarray}
for any $uv\in E_2$.
Combining (\ref{eq1}), (\ref{eq2}), (\ref{eq3}) and (\ref{eq4}), we have
\begin{eqnarray}\label{eq5}
{X_1}^TX_2[q(G_2)-q(G_1)]\geq\left[\frac{q(G_2)}{q(G_2)-1}-\frac{4}{q(G_2)-3}\right]|E_2|x^*_{v_1}x_{v_1}.
\end{eqnarray}
By the definition of $G_2$, we can see that $m(G_2)=m(G^*)$ and $\beta(G_2)=\beta(G^*)$.
Thus, $G_2$ also belongs to $\mathfrak{G}_{m,~\geq\beta}$.
Furthermore, we have $q(G_2)\leq q(G^*)=q(G_1)$, since $q(G^*)$ is maximal.
Note that $x^*_{v_1}x_{v_1}>0$,
since $x^*_{v_1}=\max_{v\in V(G^*)}x^*_v$
and $v_1$ belongs to the unique connected component of $G_2$ other than an isolated vertex or edge.
By (\ref{eq5}), we have
$\frac{q(G_2)}{q(G_2)-1}\leq\frac{4}{q(G_2)-3}$, i.e., $q^2(G_2)-7q(G_2)+4\leq0$.
Thus,
\begin{eqnarray}\label{eq6}
q(G_2)\leq\frac{7+\sqrt{33}}2<7.
\end{eqnarray}
Recall that $m(G^*)\geq\beta(G^*)+5$, that is, $|E(G^*)\setminus M^*(G^*)|\geq5$.
Note that $u_1v_1\in E(G_2)$ and all the edges of $E(G^*)\setminus M^*(G^*)$ become edges incident to $v_1$ in $G_2$.
It follows that $d_{G_2}(v_1)\geq6$. This implies that $q(G_2)\geq q(K_{1,6})=7$,
which contradicts (\ref{eq6}).
Therefore, $E_2=\emptyset$ and the statement holds.
\end{proof}
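The final contradiction rests on two numerical facts: the larger root of $q^2-7q+4=0$ is $\frac{7+\sqrt{33}}2\approx6.372<7$, while a vertex of degree $6$ forces $q(G_2)\geq q(K_{1,6})=7$. Both are quickly checked (the power-iteration helper is our own sketch):

```python
import math

def q_spectral_radius(n, edges, iters=500):
    # power iteration on the signless Laplacian Q = D + A
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
        deg[u] += 1; deg[v] += 1
    x = [1.0] * n
    q = 0.0
    for _ in range(iters):
        y = [deg[i] * x[i] + sum(x[j] for j in adj[i]) for i in range(n)]
        q = max(y)
        x = [yi / q for yi in y]
    return q

root = (7 + math.sqrt(33)) / 2
print(root)                                  # ≈ 6.3723 < 7
assert abs(root ** 2 - 7 * root + 4) < 1e-9  # solves q^2 - 7q + 4 = 0
K16 = [(0, i) for i in range(1, 7)]          # star with a degree-6 center
print(q_spectral_radius(7, K16))             # ≈ 7.0 = q(K_{1,6})
```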
In the following, we consider the case $m(G^*)\leq \beta(G^*)+4$.
We now give an ordering of the edges in an extremal matching of $G^*$.
\begin{defi}\label{de2}
An ordering $u_1v_1, u_2v_2, \ldots, u_{\beta(G^*)} v_{\beta(G^*)}$ of the edges in $M^*(G^*)$
is said to be proper to $X^*$ if it satisfies the following conditions for each $1\leq i\leq \beta(G^*)$ (with $i\leq\beta(G^*)-1$ in (ii) and (iii)):\\
(i) $x^*_{v_i}\geq x^*_{u_i}$;\\
(ii) $x^*_{v_i}\geq x^*_{v_{i+1}}$;\\
(iii) $x^*_{u_i}\geq x^*_{u_{i+1}}$ if $x^*_{v_i}=x^*_{v_{i+1}}$.
\end{defi}
In the following, we may assume that $u_1v_1,u_2v_2,\ldots,u_{\beta(G^*)} v_{\beta(G^*)}$
is a proper ordering of $M^*(G^*)$. Then we have the following results.
\begin{lem}\label{le3}
Let $\beta(G^*)\geq2$ and $i,j\in \{1,2,\ldots,\beta(G^*)\}$ with $i<j$.
Then $x^*_{u_i}\geq x^*_{v_j}$ if and only if $\{u_i,v_i,u_j,v_j\}$ induces two isolated edges or a copy of $K_4$.
\end{lem}
\begin{proof}
Let $H$ be the subgraph of $G^*$ induced by $\{u_i,v_i,u_j,v_j\}$.
Firstly, suppose that $x^*_{u_i}\geq x^*_{v_j}$ and $H\ncong 2 K_2$.
Then there exists an edge $uv\in E(H)\setminus \{u_iv_i,u_jv_j\}$.
Note that $q(G^*)x^*_u=d_{G^*}(u)x^*_u+\sum_{w\in N_{G^*}(u)}x^*_w.$
If $x^*_u=0$, then $x^*_w=0$ for any $w\in N_{G^*}(u)$.
This implies that $v_1\notin N_{G^*}(u)$, since $x^*_{v_1}>0$.
Let $G=G^*-uv+uv_1$. It is obvious that $M^*(G^*)\subseteq E(G)$, and hence $G\in \mathfrak{G}_{m,~\geq\beta}$.
Since $x^*_u+x^*_{v_1}\geq x^*_u+x^*_v$, it follows from \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}} that $q(G)>q(G^*)$, a contradiction.
Therefore, $x^*_u>0$.
If $v_iv_j\notin E(H)$, we define $G'=G^*-uv+v_iv_j$.
Note that $uv\neq u_iv_i$.
By \textcolor[rgb]{0.00,0.07,1.00}{Definition \ref{de2}}, we have $x^*_{v_i}+x^*_{v_j}\geq x^*_u+x^*_v$.
Similarly as above, we have $G'\in \mathfrak{G}_{m,~\geq\beta}$ and $q(G')>q(G^*)$.
Therefore, $v_iv_j\in E(H)$.
Recall that $x^*_{u_i}\geq x^*_{v_j}$. Then $x^*_{u_i}+x^*_{u_j}\geq x^*_{v_j}+x^*_{u_j}$.
\textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}} implies that $u_iu_j\in E(H)$,
otherwise, $q(G^*-v_ju_j+u_iu_j)>q(G^*)$, and $\beta(G^*-v_ju_j+u_iu_j)\geq\beta(G^*)$
since $(M^*(G^*)\setminus \{u_iv_i,u_jv_j\})\cup \{u_iu_j,v_iv_j\}$ is a matching of $G^*-v_ju_j+u_iu_j$.
Furthermore, note that $x^*_{u_i}+x^*_{v_j}\geq x^*_{u_i}+x^*_{u_j}$ and $x^*_{v_i}+x^*_{u_j}\geq x^*_{u_i}+x^*_{u_j}$.
By \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}}, we have $u_iv_j, v_iu_j\in E(H)$.
It follows that $H\cong K_4$.
Conversely, assume that $x^*_{u_i}<x^*_{v_j}$.
We claim that either $u_iu_j\in E(H)$ or $v_iv_j\in E(H)$.
Otherwise, let $G''=G^*-u_iv_i-u_jv_j+u_iu_j+v_iv_j$.
Then $\beta(G'')\geq\beta(G^*)$ and hence $G''\in \mathfrak{G}_{m,~\geq\beta}$.
Moreover,
\begin{eqnarray*}
q(G'')-q(G^*) &\geq& {X^*}^T[Q(G'')-Q(G^*)]X^*\\
&=& (x^*_{u_i}+x^*_{u_j})^2+(x^*_{v_i}+x^*_{v_j})^2-(x^*_{u_i}+x^*_{v_i})^2-(x^*_{u_j}+x^*_{v_j})^2\\
&=& 2(x^*_{u_i}x^*_{u_j}+x^*_{v_i}x^*_{v_j})-2(x^*_{u_i}x^*_{v_i}+x^*_{u_j}x^*_{v_j})\\
&=& 2(x^*_{v_j}-x^*_{u_i})(x^*_{v_i}-x^*_{u_j}).
\end{eqnarray*}
According to \textcolor[rgb]{0.00,0.07,1.00}{Definition \ref{de2}}, we have $x^*_{v_i}\geq x^*_{v_j}\geq x^*_{u_j}$.
If $x^*_{v_i}=x^*_{u_j}$, then $x^*_{v_i}=x^*_{v_j}$. Now by the ordering rule (iii),
we have $x^*_{u_i}\geq x^*_{u_j}$, a contradiction. Thus, $x^*_{v_i}>x^*_{u_j}$.
Combining with $x^*_{u_i}<x^*_{v_j}$, we have
\begin{eqnarray}\label{eq7}
q(G'')-q(G^*)\geq2(x^*_{v_j}-x^*_{u_i})(x^*_{v_i}-x^*_{u_j})>0,
\end{eqnarray}
a contradiction. Thus, the claim holds and $H\ncong 2 K_2$.
If $H\cong K_4$, we define $M=(M^*(G^*)\setminus \{u_iv_i,u_jv_j\})\cup \{u_iu_j,v_iv_j\}$.
Then $M$ is a maximum matching of $G^*$. However, by the above discussion, $$(x^*_{u_i}+x^*_{u_j})^2+(x^*_{v_i}+x^*_{v_j})^2>(x^*_{u_i}+x^*_{v_i})^2+(x^*_{u_j}+x^*_{v_j})^2,$$
which contradicts that $M^*(G^*)$ is extremal. Hence, $H\ncong K_4$.
This completes the proof.
\end{proof}
\begin{figure}[t]
\centering \setlength{\unitlength}{2.4pt}
\begin{center}
\unitlength 0.8mm
\linethickness{0.4pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(185.5,53.368)(0,0)
\put(49.5,38.25){\line(1,0){.25}}
\put(0,27){\makebox(0,0)[cc]{}}
\put(185.25,53.368){\line(1,0){.25}}
\put(135.75,42.118){\makebox(0,0)[cc]{}}
\put(182.75,37.868){\line(1,0){.25}}
\put(133.25,26.618){\makebox(0,0)[cc]{}}
\put(151.25,39.368){\line(1,0){.25}}
\put(101.75,28.118){\makebox(0,0)[cc]{}}
\put(116.25,38.868){\line(1,0){.25}}
\put(66.75,27.618){\makebox(0,0)[cc]{}}
\put(30,39.368){\line(0,-1){26}}
\put(62,39.368){\line(0,-1){26.25}}
\multiput(62,39.118)(-.04182879377,-.03372243839){771}{\line(-1,0){.04182879377}}
\put(30.25,39.368){\line(6,-5){31.5}}
\put(62.25,13.118){\circle*{4}}
\put(62,38.618){\circle*{4}}
\put(67.25,39.118){\makebox(0,0)[cc]{$u_2$}}
\put(67.25,13.118){\makebox(0,0)[cc]{$v_2$}}
\put(24.25,39.118){\makebox(0,0)[cc]{$u_1$}}
\put(30,13){\line(1,0){32}}
\put(30,13.118){\circle*{4}}
\put(30,38.618){\circle*{4}}
\put(24.25,13.118){\makebox(0,0)[cc]{$v_1$}}
\put(45.75,0){\makebox(0,0)[cc]{$H_1$}}
\put(97,38.618){\line(0,-1){26}}
\put(129,38.618){\line(0,-1){26.25}}
\put(129,12.368){\line(-1,0){32.25}}
\put(97.25,38.618){\circle*{4}}
\put(97,13.118){\circle*{4}}
\put(129,38.618){\circle*{4}}
\put(91.25,39.118){\makebox(0,0)[cc]{$u_1$}}
\put(91.25,13.118){\makebox(0,0)[cc]{$v_1$}}
\put(128.75,12.25){\circle*{4}}
\put(133.5,39.118){\makebox(0,0)[cc]{$u_2$}}
\put(133.5,13.118){\makebox(0,0)[cc]{$v_2$}}
\put(128.5,12.25){\line(1,0){31.75}}
\put(160,12){\line(0,1){25.5}}
\put(160,38.618){\circle*{4}}
\put(164.25,39.118){\makebox(0,0)[cc]{$u_3$}}
\put(164.25,13.118){\makebox(0,0)[cc]{$v_3$}}
\qbezier(97,12.75)(129.375,0)(160.25,12.25)
\put(129.25,0){\makebox(0,0)[cc]{$H_2$}}
\put(160,13.118){\circle*{4}}
\end{picture}
\end{center}
\caption{\footnotesize{The subgraphs $H_1$ and $H_2$ of $G^*$.}}\label{fig2}
\end{figure}
\begin{lem}\label{le4}
If $E_2(G^*)\neq\emptyset$,
then $G^*$ contains either $H_1$ or $H_2$ as a subgraph
(see \textcolor[rgb]{0.00,0.07,1.00}{Fig. \ref{fig2}}).
\end{lem}
\begin{proof}
$E_2(G^*)\neq\emptyset$ implies that $\beta(G^*)\geq2$.
It follows from \textcolor[rgb]{0.00,0.07,1.00}{Definition \ref{de2}} and \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le2}} that,
$x^*_{v_1}=\max_{v\in V(G^*)}x^*_v$ and $x^*_{v_2}=\max_{w\in V(G^*)\setminus \{u_1,v_1\}}x^*_w$.
Let $uv\in E_2(G^*)$. Then $uv\neq u_1v_1$ and hence $x^*_{v_1}+x^*_{v_2}\geq x^*_u+x^*_v$.
Note that $x^*_{v_1}>0$.
If $v_1v_2\notin E(G^*)$, then by \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}},
we have $q(G^*-uv+v_1v_2)>q(G^*)$. Note that $G^*-uv+v_1v_2$ also belongs to $\mathfrak{G}_{m,~\geq\beta}$.
Thus, we get a contradiction to the maximality of $q(G^*)$.
Therefore, $v_1v_2\in E(G^*)$.
If $x^*_{u_1}\geq x^*_{v_2}$, then by \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le3}},
$\{u_1,v_1,u_2,v_2\}$ induces a copy of $K_4$. It follows that $H_1$ is a subgraph of $G^*$.
Now suppose that $x^*_{v_2}>x^*_{u_1}.$ Then $x^*_{v_2}=\max_{w\in V(G^*)\setminus \{v_1\}}x^*_w$.
Moreover, $\max\{x^*_{u_1}, x^*_{v_3}\}=\max_{w\in V(G^*)\setminus \{v_1,v_2,u_2\}}x^*_w$.
This indicates that $x^*_{v_2}+\max\{x^*_{u_1}, x^*_{v_3}\}\geq x^*_u+x^*_v$,
since $u\neq v_1, v\neq v_1$ and $uv\neq u_2v_2$.
Thus, \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}} implies that either $v_2u_1\in E_2(G^*)$ or $v_2v_3\in E_2(G^*)$.
Notice that $x^*_{v_1}\geq x^*_{v_2}>0$.
If $v_2u_1\in E_2(G^*)$, then $u_2v_1\in E_2(G^*)$
(otherwise, $\beta(G^*-u_2v_2+u_2v_1)\geq \beta(G^*)$ and $q(G^*-u_2v_2+u_2v_1)>q(G^*)$).
Therefore, $H_1\subseteq G^*$.
If $v_2v_3\in E_2(G^*)$, then $v_1v_3\in E_2(G^*)$
(otherwise, $\beta(G^*-v_2v_3+v_1v_3)\geq \beta(G^*)$ and $q(G^*-v_2v_3+v_1v_3)>q(G^*)$).
Therefore, $H_2\subseteq G^*$.
\end{proof}
\begin{thm}\label{th3}
If $m(G^*)\leq \beta(G^*)+4$, then $d_{G^*}(u_1)=1$, and $G^*$ is isomorphic to $S_{a,b,c}$
with possibly some isolated edges and isolated vertices.
\end{thm}
\begin{proof}
Recall that $E_1(G^*)=M^*(G^*)\cup \{v_1v~|~v\in N_{G^*}(v_1)\}$ and $E_2(G^*)=E(G^*)\setminus E_1(G^*).$
It is clear that the statement holds if $E_2(G^*)=\emptyset$.
Now assume that $E_2(G^*)\neq\emptyset$.
Then by \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le4}},
$G^*$ contains either $H_1$ or $H_2$ as a subgraph (see \textcolor[rgb]{0.00,0.07,1.00}{Fig. \ref{fig2}}).
This indicates that $m(G^*)\geq \beta(G^*)+3$.
In particular, if $m(G^*)=\beta(G^*)+3$,
then $G^*$ is isomorphic to either $H_1$ or $H_2$ with possibly some isolated edges and isolated vertices.
A simple calculation shows that $q(H_1)=q(H_2)=3+\sqrt{5}.$
On the other hand, let $G=S_{2,0,1}\cup (\beta(G^*)-2) K_2$.
Clearly, $\beta(G)=\beta(G^*)$ and $m(G)=\beta(G^*)+3=m(G^*)$.
However, $$q(G)=q(S_{2,0,1})\approx 5.3234>3+\sqrt{5}=q(G^*),$$ a contradiction.
Thus, $E_2(G^*)=\emptyset$.
It remains to consider the case $m(G^*)=\beta(G^*)+4$. Let $G'=S_{3,0,1}\cup (\beta(G^*)-2) K_2$.
Clearly, $\beta(G')=\beta(G^*)$ and $m(G')=\beta(G^*)+4=m(G^*)$.
Therefore, $$q(G^*)\geq q(G')=q(S_{3,0,1})>q(K_{1,5})=6,$$
since $K_{1,5}$ is a proper subgraph of $S_{3,0,1}$.
If $\{u_1,v_1,u_2,v_2\}$ induces a $K_4$,
then $G^*$ is isomorphic to $K_4$ with possibly some isolated edges and vertices.
However, $q(G^*)=q(K_4)=6$, which contradicts that $q(G^*)>6$.
Thus, $\{u_1,v_1,u_2,v_2\}$ does not induce a $K_4$.
Firstly, assume that $H_1\subseteq G^*$. Then $u_1u_2\notin E(G^*)$.
Let $uv$ be the unique edge which is not yet determined in $G^*$.
Note that $x^*_{v_1}=\max_{v\in V(G^*)}x^*_v>0$.
Thus by \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}},
$uv$ is an edge incident to $v_1$, say, $u=v_1$.
If $v\in V^*$, then $v\in \{u_j,v_j\}$ for some $j\geq3$.
Let $H$ be the subgraph induced by $\{u_1,v_1,u_j,v_j\}$.
Then $H\ncong 2K_2$ since $v_1v\in E(H)\setminus\{u_1v_1,u_jv_j\}$; and $H\ncong K_4$ since $m(G^*)=\beta(G^*)+4$.
By \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le3}}, we have $x^*_{u_1}<x^*_{v_j}.$
Thus $x^*_{v_2}+x^*_{v_j}>x^*_{v_2}+x^*_{u_1}$.
By \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}},
we have $q(G^*-v_2u_1+v_2v_j)>q(G^*)$, a contradiction.
Therefore, $v\notin V^*$, that is, $v_1v$ is a pendant edge.
Hence, $G^*$ is isomorphic to $H_3$ with possibly some isolated edges and isolated vertices
(see \textcolor[rgb]{0.00,0.07,1.00}{Fig. \ref{fig3}}).
Secondly, assume that $H_2\subseteq G^*$.
Let $uv$ be the unique edge which is not yet determined in $G^*$.
If $u\in V^*$ and $v\notin V^*$, then by \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le2}},
$x^*_v\leq x^*_{u_2}$ and hence $x^*_{v_1}+x^*_{u_2}\geq x^*_u+x^*_v$.
By \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}}, we have $q(G^*-uv+v_1u_2)>q(G^*)$, a contradiction.
Thus, $u,v\in V^*$. If $u\in\{u_j,v_j\}$ for some $j\geq4$, then $v=v_1$,
since $x^*_{v_1}=\max_{w\in V^*}x^*_w$. Now, $\{u_2,v_2,u_j,v_j\}$ induces a copy of $2 K_2$.
By \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le3}}, we conclude that $x^*_{u_2}\geq x^*_u$.
Hence, $x^*_{u_2}+x^*_{v_1}\geq x^*_u+x^*_{v_1}$.
Thus we have $q(G^*-uv_1+u_2v_1)>q(G^*)$, a contradiction.
Therefore, $u,v\in \{u_i,v_i~|~i=1,2,3\}$.
Moreover, $uv\notin \{v_iv_j~|~1\leq i<j\leq3\}$, since $v_1v_2, v_2v_3,v_1v_3\in E(H_2)$.
If $uv\in\{u_iu_j~|~1\leq i<j\leq3\}$, say $uv=u_1u_2$, then $x^*_{v_1}+x^*_{u_2}\geq x^*_u+x^*_v$.
And hence $q(G^*-uv+v_1u_2)>q(G^*)$, a contradiction.
It follows that $u\in\{u_1,u_2,u_3\}$ and $v\in\{v_1,v_2,v_3\}.$
Thus, $G^*$ is isomorphic to $H_4$ with possibly some isolated edges and isolated vertices
(see \textcolor[rgb]{0.00,0.07,1.00}{Fig. \ref{fig3}}).
Notice that $H_3\subseteq H_4$. Thus, in both of above cases, we have
$q(G^*)\leq q(H_4)\approx5.9452,$ which contradicts that $q(G^*)>6$.
Therefore, $E_2(G^*)=\emptyset$ and the statement follows.
\end{proof}
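The explicit constants appearing in this proof are easy to reproduce. In the sketch below (the constructor \texttt{S} and the helper are ours), $S_{a,b,c}$ is built with center $0$, and $H_1$ is taken as $K_4$ minus the edge $u_1u_2$, following Fig. \ref{fig2}; power iteration then recovers $q(H_1)=3+\sqrt5\approx5.2361$, $q(S_{2,0,1})\approx5.3234$ and $q(S_{3,0,1})\approx6.20>6$.

```python
import math

def q_spectral_radius(n, edges, iters=2000):
    # power iteration on the signless Laplacian Q = D + A
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
        deg[u] += 1; deg[v] += 1
    x = [1.0] * n
    q = 0.0
    for _ in range(iters):
        y = [deg[i] * x[i] + sum(x[j] for j in adj[i]) for i in range(n)]
        q = max(y)
        x = [yi / q for yi in y]
    return q

def S(a, b, c):
    """(order, edge list) of S_{a,b,c}: center 0 with a pendant edges,
    b pendant paths of length 2, and c pendant triangles."""
    edges, nxt = [], 1
    for _ in range(a):                                    # pendant edges
        edges.append((0, nxt)); nxt += 1
    for _ in range(b):                                    # pendant paths
        edges += [(0, nxt), (nxt, nxt + 1)]; nxt += 2
    for _ in range(c):                                    # pendant triangles
        edges += [(0, nxt), (0, nxt + 1), (nxt, nxt + 1)]; nxt += 2
    return nxt, edges

H1 = [(0, 1), (2, 3), (1, 3), (0, 3), (1, 2)]  # K_4 minus one edge
print(q_spectral_radius(4, H1), 3 + math.sqrt(5))  # both ≈ 5.2361
print(q_spectral_radius(*S(2, 0, 1)))              # ≈ 5.3234
print(q_spectral_radius(*S(3, 0, 1)))              # ≈ 6.20 (> 6)
```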
\begin{figure}[t]
\centering \setlength{\unitlength}{2.4pt}
\begin{center}
\unitlength 0.8mm
\linethickness{0.4pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(185.5,53.368)(0,0)
\put(49.5,38.25){\line(1,0){.25}}
\put(0,27){\makebox(0,0)[cc]{}}
\put(185.25,53.368){\line(1,0){.25}}
\put(135.75,42.118){\makebox(0,0)[cc]{}}
\put(182.75,37.868){\line(1,0){.25}}
\put(133.25,26.618){\makebox(0,0)[cc]{}}
\put(151.25,39.368){\line(1,0){.25}}
\put(101.75,28.118){\makebox(0,0)[cc]{}}
\put(116.25,38.868){\line(1,0){.25}}
\put(66.75,27.618){\makebox(0,0)[cc]{}}
\put(30,39.368){\line(0,-1){26}}
\put(62,39.368){\line(0,-1){26.25}}
\multiput(62,39.118)(-.04182879377,-.03372243839){771}{\line(-1,0){.04182879377}}
\put(30.25,39.368){\line(6,-5){31.5}}
\put(62.25,13.118){\circle*{4}}
\put(62,38.868){\circle*{4}}
\put(30,13){\line(1,0){32}}
\put(30,12.75){\circle*{4}}
\put(30,39.25){\circle*{4}}
\put(97,38.618){\line(0,-1){26}}
\put(129,38.618){\line(0,-1){26.25}}
\put(129,12.368){\line(-1,0){32.25}}
\put(97.25,38.618){\circle*{4}}
\put(97,12.618){\circle*{4}}
\put(129,38.118){\circle*{4}}
\put(128.75,12.25){\circle*{4}}
\put(128.5,12.25){\line(1,0){31.75}}
\put(160,12){\line(0,1){25.5}}
\put(160,37.5){\circle*{4}}
\qbezier(97,12.75)(129.375,0)(160.25,12.25)
\put(160,12){\circle*{4}}
\put(44.75,0){\makebox(0,0)[cc]{$H_3$}}
\put(22.75,40.5){\line(1,0){.25}}
\put(23,40.5){\line(0,1){0}}
\put(23,40.5){\line(0,1){0}}
\put(129.5,0){\makebox(0,0)[cc]{$H_4$}}
\multiput(129,38)(-.04264099037,-.03370013755){727}{\line(-1,0){.04264099037}}
\multiput(29.75,13)(-.03369565217,.03804347826){690}{\line(0,1){.03804347826}}
\put(7.118,39.618){\circle*{4}}
\end{picture}
\end{center}
\caption{\footnotesize{The subgraphs $H_3$ and $H_4$ of $G^*$.}}\label{fig3}
\end{figure}
In the following, we give the proof of \textcolor[rgb]{0.00,0.07,1.00}{Theorem \ref{th1}}.
\medskip
\noindent\textbf{Proof of Theorem \ref{th1}.}
Recall that $G^*$ is the graph with maximal $Q$-spectral radius among all graphs in $\mathfrak{G}_{m,~\geq\beta}$,
where $\beta\geq2$. According to \textcolor[rgb]{0.00,0.07,1.00}{Theorems \ref{th2}-\ref{th3}},
$G^*$ is the disjoint union of $S_{a,b,c}$ with $d$ isolated edges and some isolated vertices,
where $a\geq1$ and $b,c,d\geq0$
(see \textcolor[rgb]{0.00,0.07,1.00}{Fig. \ref{fig1}}).
Let $v_1$ be the vertex of maximal degree in $G^*$ and $u_1v_1$ be a pendant edge.
Clearly, $x^*_{v_1}=\max_{v\in V(G^*)}x^*_v$ and $\beta(G^*)=b+c+d+1$.
We first claim that $\beta(G^*)=\beta$.
Suppose to the contrary that $\beta(G^*)\geq \beta+1\geq3.$
If $b\geq1$, we define a new graph $G$ by replacing a pendant path of length $2$ with two pendant edges incident to $v_1$.
If $d\geq1$, we define $G$ by replacing an isolated edge with a pendant edge incident to $v_1$.
Note that $a\geq1$. In both cases, we can see $\beta(G)=\beta(G^*)-1\geq \beta$, that is, $G\in\mathfrak{G}_{m,~\geq\beta}$.
However, by \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}},
we have $q(G)>q(G^*)$, a contradiction. Thus, $b=d=0$ and hence $\beta(G^*)=c+1$.
This implies that $c\geq2$ and $d_{G^*}(v_1)\geq5$. Hence, $q(G^*)\geq q(K_{1,5})=6$.
Let $\{v_1,u_2,v_2\}$ induce a triangle in $G^*$.
We now define $G$ by adding an isolated vertex $w$ and replacing the edge $u_2v_2$ with a pendant edge $wv_1$.
Then $\beta(G)=\beta(G^*)-1\geq \beta$. Note that $x^*_{u_2}=x^*_{v_2}=\frac{x^*_{v_1}}{q(G^*)-3}\leq \frac{x^*_{v_1}}{3}$.
Thus, $(x^*_{v_1}+0)^2>(x^*_{u_2}+x^*_{v_2})^2$. By \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}},
we have $q(G)>q(G^*)$, a contradiction.
The claim holds.
Assume that $a\geq2$.
Then $d=0$, otherwise, we can define a new graph $G'$ by replacing an isolated edge with a pendant edge incident to $u_1$.
Thus, $\beta(G')=\beta(G^*)$ and by \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}},
$q(G')>q(G^*),$ a contradiction.
Moreover, $b=0$, otherwise, let $v_1v_2u_2$ be a pendant path of length $2$ and define $G'=G^*-v_2u_2+v_2u_1$.
Observe that $\beta(G')=\beta(G^*)$ and $(q(G^*)-1)x^*_{u_1}=x^*_{v_1}\geq x^*_{v_2}=(q(G^*)-1)x^*_{u_2}$.
We have $x^*_{v_2}+x^*_{u_1}\geq x^*_{v_2}+x^*_{u_2}$.
By \textcolor[rgb]{0.00,0.07,1.00}{Lemma \ref{le1}}, $q(G')>q(G^*)$, a contradiction.
Therefore, $c=\beta-1$ and $a=m-3c=m-3\beta+3$.
This implies that $m\geq 3\beta-1$, since $a\geq2$.
It remains to consider the case $a=1$. Now,
\begin{eqnarray}\label{eq8}
m=2b+3c+d+1.
\end{eqnarray}
Note that $\beta=b+c+d+1$. Thus we have
\begin{eqnarray}\label{eq9}
b+2c=m-\beta.
\end{eqnarray}
Suppose that $b\geq2$, and say $v_1v_2u_2$ and $v_1v_3u_3$ are two pendant paths of length $2$.
Define $G''=G^*-u_2v_2-u_3v_3+u_2u_3+v_2v_3$. Then $\beta(G'')=\beta(G^*)$.
Moreover, by symmetry of $G^*$,
we have $x^*_{v_2}=x^*_{v_3}$ and $x^*_{u_2}=x^*_{u_3}=\frac{x^*_{v_2}}{q(G^*)-1}<x^*_{v_2}$.
Similar to \textcolor[rgb]{0.00,0.07,1.00}{Inequality (\ref{eq7})},
we have $$q(G'')-q(G^*)\geq2(x^*_{v_3}-x^*_{u_2})(x^*_{v_2}-x^*_{u_3})>0,$$
a contradiction. Therefore, $b\leq1$.
Combining with \textcolor[rgb]{0.00,0.07,1.00}{Equalities (\ref{eq8})-(\ref{eq9})},
if $m-\beta$ is odd, then $b=1$, $c=\frac{m-\beta-1}2$ and $d=\frac{-m+3\beta-3}2$;
if $m-\beta$ is even, then $b=0$, $c=\frac{m-\beta}2$ and $d=\frac{-m+3\beta-2}2$.
Both cases imply that $m\leq 3\beta-2$.
This completes the proof.\hfill$\Box$
| {
"timestamp": "2020-07-07T02:05:25",
"yymm": "2007",
"arxiv_id": "2007.02008",
"language": "en",
"url": "https://arxiv.org/abs/2007.02008",
"abstract": "Brualdi and Hoffman (1985) proposed the problem of determining the maximal spectral radius of graphs with given size. In this paper, we consider the Brualdi-Hoffman type problem of graphs with given matching number. The maximal $Q$-spectral radius of graphs with given size and matching number is obtained, and the corresponding extremal graphs are also determined.",
"subjects": "Combinatorics (math.CO)",
"title": "A spectral extremal problem on graphs with given size and matching number",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9857180664944447,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7083314811791449
} |
https://arxiv.org/abs/1905.13292 | Spanning Trees and Domination in Hypercubes | Let $L(G)$ denote the maximum number of leaves in any spanning tree of a connected graph $G$. We show the (known) result that for the $n$-cube $Q_n$, $L(Q_n) \sim 2^n = |V(Q_n)|$ as $n\rightarrow \infty$. Examining this more carefully, consider the minimum size of a connected dominating set of vertices $\gamma_c(Q_n)$, which is $2^n-L(Q_n)$ for $n\ge2$. We show that $\gamma_c(Q_n)\sim 2^n/n$. We use Hamming codes and an "expansion" method to construct leafy spanning trees in $Q_n$. | \section{Introduction}\label{sec:Over}
The $n$-cube graph $Q_n$ has $2^n$ vertices, the strings
$a_1\ldots a_n$ on $n$ bits, where two vertices
are adjacent if and only if their strings differ in exactly
one coordinate (where one vertex has 0 and the other has 1).
The $n$-cube is frequently used as a structure for computer
networks, where there are $2^n$ processors corresponding
to the vertices of $Q_n$. An efficient way to connect all
of the processors, so that they all communicate with each
other, is to take a spanning tree in $Q_n$.
With this in mind, S. Bezrukov imagined it would be
interesting to construct such spanning trees with many
leaves (degree one vertices).
At the IWOCA conference (Duluth, 2014), Bezrukov
proposed the following problem:
Letting $L(G)$
denote the maximum number of leaves in any spanning tree
of a connected simple graph $G$, what can one say about
$L(Q_n)$?
He shared this problem in notes~\cite{Bez}.
For a spanning tree, the non-leaf vertices are connected,
so form a tree themselves, which we may think of as
the backbone of the tree: All vertices are either in
this backbone, or are leaves adjacent to it.
Bezrukov's question then is equivalent to constructing
a spanning tree of the hypercube with the smallest
backbone.
Notice that the opposite question,
finding the {\em minimum\/} number of
leaves in a spanning tree, is easy: By a simple induction
$Q_n$ has a Hamilton path for all $n\ge1$.
This path is a spanning tree with just two leaves.
We are interested in the other extreme, {\it maximizing\/} the
number of leaves.
Our problem is closely related to the subject of
domination in graphs.
A subset $W$ of the vertex set $V$ of a graph $G=(V,E)$
is a {\em dominating set\/} if
every vertex is either in $W$ or adjacent to
some vertex in $W$.
The {\em domination number\/} $\gamma(G)$ is the
minimum size of any dominating set.
Note that if one pulls off the leaves from a spanning
tree $T$ for a connected graph $G=(V,E)$ with at least
three vertices, then the
remaining vertices $W$ form a dominating set, and,
moreover, what remains of $T$ still connects them.
That is, $W$ forms a connected dominating set.
Conversely, from any connected dominating set we can
span them with a tree and attach any other vertices as
leaves to obtain a spanning tree.
The minimum size of a connected dominating set of $G$ is
called the {\em connected domination number\/} $\gamma_c(G)$.
We see that maximizing the number of leaves of
any spanning tree of such $G$ corresponds
to minimizing the size of a connected dominating set.
From this discussion we obtain for such $G$
\[
L(G) + \gc(G) = |V(G)|.
\]
The simple ordering relationship between these parameters is
\[
1\le \gamma(G)\le \gamma_c(G)\le |V|.
\]
For example, one can readily check that
for the four-cycle $Q_2$,
$\gamma=\gamma_c=L=2$, while for the ordinary
cube~$Q_3$,
$\gamma=2, \gamma_c=L=4$.
For larger $n$ more than half the vertices can be leaves.
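These small values can be confirmed by brute force. The following short Python sketch (our test harness; the helper names are ours, not from the paper) encodes vertices of $Q_n$ as integers and searches vertex subsets in increasing size:

```python
# Brute-force verification of the stated values for Q_2 and Q_3.
# Vertices are integers 0..2^n - 1; bit i is coordinate i.
from itertools import combinations

def neighbors(v, n):
    # flip each of the n bits in turn
    return [v ^ (1 << i) for i in range(n)]

def is_dominating(S, n):
    S = set(S)
    return all(v in S or any(u in S for u in neighbors(v, n))
               for v in range(2 ** n))

def is_connected(S, n):
    S = set(S)
    stack, seen = [next(iter(S))], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(u for u in neighbors(v, n) if u in S)
    return seen == S

def gamma(n, connected=False):
    # smallest (connected) dominating set, by exhaustive search
    for size in range(1, 2 ** n + 1):
        for S in combinations(range(2 ** n), size):
            if is_dominating(S, n) and (not connected or is_connected(S, n)):
                return size

assert gamma(2) == 2 and gamma(2, connected=True) == 2
assert gamma(3) == 2 and gamma(3, connected=True) == 4
```

For $Q_3$ the search confirms that no connected dominating set of size $3$ exists, even though $\{000,111\}$ dominates.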
The earliest paper we can find that investigates the connected
domination number of a graph is by Sampathkumar and Walikar (1979)~\cite{SamWal}.
Several studies investigate bounding $L(G)$ for classes of graphs
$G$, such as those with given
minimum degree \cite{Sto,GriKleSha,KleWes,GriWu}.
Caro {\em et al.\/}~\cite{CarWesYus} study both parameters,
and provide more references.
Many papers concern algorithms for finding leafy trees
(or small connected dominating sets).
Searching online we discovered several papers concerning
domination in hypercubes.
These were often done independently of other studies.
The 1990 dissertation of Jha~\cite{Jha} gives a good
general upper bound on $\gamma(Q_n)$,
which is just twice the easy lower bound.
Arumugam and Kala~\cite{AruKal} (1998) focus on domination in
hypercubes.
Duckworth {\em et al.}~\cite{DDGZ} (2001) give good general bounds
on $L(Q_n)$. It follows that $L(Q_n)\sim 2^n$.
It means asymptotically there is a spanning tree for the
hypercube in which almost all vertices are leaves.
It is nicer to restate their results in terms of connected
domination:
\begin{theorem}\label{thm:previous}~\cite{DDGZ}
\begin{itemize}
\item Lower bound: For $n\ge2$, $\frac{\gcqn}{2^n} \ge \frac{1}{n}$
\item Upper bound: As $n\rightarrow\infty$, $\frac{\gcqn}{2^n} \le (1 + o(1)) \frac{2}{n}$
\end{itemize}
\end{theorem}
Another 2012 study of hypercubes~\cite{CheSyu} gives values of
$\gcqn$ for small $n$, but unfortunately its formula for
general $n$, stated without proof, is far from correct.
Mane and Waphare~\cite{ManWap} investigate several
generalizations of domination numbers of hypercubes.
The 2017 Master's thesis of Kubo\v n~\cite{Kub}
considers domination in hypercubes, and uses some of the
same methods as in this paper.
In the next section, we present simple general lower bounds on
$\gamma(Q_n)$ and $\gcqn$.
In Section~\ref{sec:Hamming} we describe the Hamming code
construction that gives a ``perfect dominating set'' for
$Q_n$ when $n$ is of the form $2^k-1$.
We give a method to produce a small connected dominating
set, given a dominating set, that leads to an upper bound
on $\gcqn$ for $n=2^k-1$.
A simple inductive method we call doubling is used to
give upper bounds on $\gamma(Q_n)$ and $\gcqn$ for
general~$n$ in Section~\ref{sec:Doubling}.
Where we make new progress is by introducing
in Section~\ref{sec:Exp} a new method
we call expansion, in which we take a minimum dominating
set in each of $2^j$ copies of $Q_N$ and connect them
appropriately to obtain a small connected dominating set in
$Q_n$, where $n=N+j$.
Choosing $N$ and $j$ wisely improves
the best previous upper bound on
$\gcqn$ by a factor of 2. Indeed, in Section~\ref{sec:Main}
we settle the leading asymptotic behavior of $\gcqn$:
\begin{theorem}\label{thm:main}
As $n\rightarrow\infty$,
$\frac{\gcqn}{2^n} = (1+o(1))\frac{1}{n}.$
\end{theorem}
Restating this in terms of the maximum number of
leaves, it means
\[
L(Q_n)= (1-\frac{1}{n}+o(\frac{1}{n}))2^n.
\]
We conclude with suggestions for further study and
acknowledgements of valuable ideas and support of this
project.
\section{Domination Lower Bounds}\label{sec:Lower}
Let us note some easy lower bounds on our domination parameters for the hypercube $Q_n$.
\begin{proposition}\label{thm:lower}
\begin{itemize}
\item For $n\ge1$, $\gamma(Q_n)\ge 2^n/(n+1).$
\item For $n\ge2$, $\gcqn\ge (2^n-2)/(n-1)\ge 2^n/n.$
\end{itemize}
\end{proposition}
\begin{proof}
A single vertex can dominate at most itself and its $n$ neighbors, leading to the
lower bound on $\gamma(Q_n)$.
Next, consider a connected dominating set of $Q_n$ of size $c$. There is a tree $T$
on these $c$ vertices using $c-1$ edges from $Q_n$. The sum of degrees of these $c$
vertices has $2c-2$ accounted for by $T$. It means that the number of additional
vertices (dominated by those in $T$) is at most $nc-2(c-1)$. But there are
$2^n-c$ vertices besides $T$. Rearranging terms gives the stated inequality on
$c$, hence the lower bounds on $\gcqn$.
\end{proof}
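The counting step can be sanity-checked numerically. The sketch below (our arithmetic, not part of the original argument) finds the smallest $c$ with $c + nc - 2(c-1) \ge 2^n$ and confirms that it matches $\lceil (2^n-2)/(n-1) \rceil$ and is at least $2^n/n$:

```python
# Sanity check of the degree count in the proof: a tree on c vertices inside
# Q_n uses 2(c-1) degree slots internally, so the c chosen vertices can cover
# at most c + nc - 2(c-1) vertices in total, which must reach 2^n.
def cd_lower_bound(n):
    c = 1
    while c + n * c - 2 * (c - 1) < 2 ** n:
        c += 1
    return c

for n in range(2, 16):
    lb = cd_lower_bound(n)
    assert lb == -(-(2 ** n - 2) // (n - 1))   # ceil((2^n - 2)/(n - 1))
    assert lb * n >= 2 ** n                    # hence lb >= 2^n / n
```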
\section{Hamming Code}\label{sec:Hamming}
The famous Hamming code gives an elegant construction of a ``perfect dominating set''
in $Q_n$ when $n=2^k-1$ for some integer $k\ge1$.
This means it achieves the lower bound on $\gamma(Q_n)$ in Proposition~\ref{thm:lower}.
Viewing the vertices of $Q_n$ for such an $n$ as $n$-dimensional vectors over
$GF(2)$, the code consists of the $2^{n-k}$ vectors in the row space of a
$(n-k)\times n$ matrix built as follows:
The first $n-k$ columns form the identity matrix, while the rows of the other
$k$ columns consist of all $n-k=2^k-k-1$
vectors of length $k$ with weight (number of ones) at least 2.
The difference between any two vectors in this row space is then a
nonzero vector in the row space, and hence a nonempty sum of rows of the
matrix. By design, such a sum will always have weight at least three.
Consequently, the $2^{n-k}$ stars in $Q_n$ that are centered at the vectors
in the row space are disjoint. Each star is a $K_{1,n}$. By counting,
we see that these stars partition the vertices of $Q_n$.
They form a minimum dominating set for $Q_n$.
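The construction can be carried out explicitly for $k=3$, $n=7$. The sketch below (our encoding choices, not code from the paper) builds the generator matrix described above, enumerates its row space over $GF(2)$, and verifies that the $16$ stars partition $Q_7$:

```python
# Hamming code construction for k = 3, n = 7: identity in the first n-k
# columns, all weight >= 2 vectors of length k in the last k columns.
from itertools import combinations, product

k = 3
n = 2 ** k - 1                               # 7
tails = [t for t in product([0, 1], repeat=k) if sum(t) >= 2]
assert len(tails) == n - k                   # 2^k - k - 1 = 4 rows
rows = []
for i, tail in enumerate(tails):
    row = [0] * (n - k)
    row[i] = 1
    rows.append(tuple(row) + tail)

# Row space over GF(2): XOR of every subset of rows.
code = set()
for r in range(len(rows) + 1):
    for subset in combinations(rows, r):
        w = [0] * n
        for row in subset:
            w = [a ^ b for a, b in zip(w, row)]
        code.add(tuple(w))
assert len(code) == 2 ** (n - k)             # 16 codewords

def to_int(v):
    return sum(b << i for i, b in enumerate(v))

# Perfect domination: every vertex of Q_7 lies in exactly one star.
covered = []
for c in code:
    ci = to_int(c)
    covered.append(ci)
    covered.extend(ci ^ (1 << i) for i in range(n))
assert len(covered) == 2 ** n and len(set(covered)) == 2 ** n
```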
As Bezrukov pointed out when he proposed his problem about $L(Q_n)$, for such $n$
we only have to add some edges between leaves of different stars to obtain a
spanning tree with many leaves. After all, $Q_n$ is connected, and all edges not
used in the stars are between leaves of stars (different stars, in fact).
If we have $c$ components, we need to add $c-1$ edges to obtain a spanning tree;
here, $c=2^{n-k}$.
At worst, each additional edge costs us two new leaves--it would be less, if we are
able to use several edges from the same leaf.
When we finish, we have a spanning tree where the non-leaves are the $c$ star centers
from the Hamming code, as well as at most $2c-2$ vertices that were star leaves.
In fact, we can use this method for any connected simple graph $G$ to build a
spanning tree. Starting from a minimum dominating set of $c$ vertices, the stars
centered at those vertices cover the entire vertex set (though in general they
are not disjoint, and dominating vertices could even be adjacent). One can add
at most $c-1$ edges between stars to create a spanning tree. We obtain this
general bound:
\begin{proposition}\label{thm:ggc}
Let $G$ be a connected simple graph. Then
\[
\gamma_c(G)\le 3\gamma(G)-2.
\]
\end{proposition}
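Since the proposition applies to every connected graph, it can be spot-checked on random small instances. The sketch below (our test harness; it assumes nothing beyond the statement) computes $\gamma$ and $\gamma_c$ by brute force:

```python
# Spot check of gamma_c(G) <= 3*gamma(G) - 2 on random small connected graphs.
import random
from itertools import combinations

def bfs_component(adj, start, allowed):
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v in seen or v not in allowed:
            continue
        seen.add(v)
        stack.extend(adj[v])
    return seen

def gamma_and_gamma_c(adj):
    # exhaustive search for the smallest dominating / connected dominating set
    V = set(adj)
    best = {False: None, True: None}
    for size in range(1, len(V) + 1):
        for S in combinations(sorted(V), size):
            Sset = set(S)
            if not all(v in Sset or Sset & adj[v] for v in V):
                continue
            if best[False] is None:
                best[False] = size
            if best[True] is None and bfs_component(adj, S[0], Sset) == Sset:
                best[True] = size
        if best[False] is not None and best[True] is not None:
            return best[False], best[True]

random.seed(1)
for _ in range(30):
    nv = random.randint(3, 7)
    # random connected graph: a random spanning tree plus extra edges
    adj = {v: set() for v in range(nv)}
    for v in range(1, nv):
        u = random.randrange(v)
        adj[u].add(v); adj[v].add(u)
    for u, v in combinations(range(nv), 2):
        if random.random() < 0.3:
            adj[u].add(v); adj[v].add(u)
    g, gc = gamma_and_gamma_c(adj)
    assert gc <= 3 * g - 2
```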
Applying this to our Hamming code construction, we obtain
\begin{proposition}\label{thm:hamming}
Let $n=2^k-1$, where the integer $k\ge1$. Then
$\gamma(Q_n)=2^{n-k}=2^n/(n+1)$, and $\gcqn< (3/(n+1))2^n.$
\end{proposition}
For this Hamming code case $n=2^k-1$ our tree construction can be
viewed this way: Starting from a perfect dominating set in $Q_n$, we
take the corresponding $C=2^n/(n+1)$ stars $K_{1,n}$ and add
$C-1$ edges to form a tree with many leaves.
Since all edges for the star centers are used already, each
edge we add will join leaves from two different stars. At worst, we give up
$2(C-1)$ star leaves (they become part of the tree backbone), plus
the backbone contains the $C$ star centers.
This gives us a connected dominating set of size at most $3C-2\sim 3(2^n/n)$.
If we are fortunate, we don't have to pick two new leaves for each
successive additional edge: It could be that one or both leaves are already
in the backbone. However, for each of the $C$ stars we must give up at least
one leaf, in order that the stars connect in the spanning tree. It means
that the connected dominating set we construct will have at least
$2C\sim 2(2^n/n)$ vertices.
\section{Doubling}\label{sec:Doubling}
So far, we have constructed leafy trees in the $n$-cube only when $n$ has the
special form $2^k-1$. The $(n+1)$-cube can be viewed as built from two copies
of $Q_n$, with a matching of edges joining the corresponding vertices from each
copy. This is true for any value of $n$, not just the special values where the
Hamming code exists.
If we take a dominating set for each copy of $Q_n$, we get a dominating
set for $Q_{n+1}$. Moreover, if we take the same connected dominating set
for each copy, it gives a dominating set for $Q_{n+1}$ that is connected.
We see this simply by adding the edge joining the two copies of a vertex in
the connected dominating set for $Q_n$. We record these observations about
doubling:
\begin{proposition}\label{thm:doubling}
For all $n\ge1$, $\gamma(Q_{n+1})\le 2\gamma(Q_n)$, and
$\gamma_c(Q_{n+1})\le 2\gcqn$.
\end{proposition}
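The doubling step is easy to check mechanically. In the sketch below (our vertex encoding: bit $i$ of an integer is coordinate $i$), we double a connected dominating set of $Q_3$ into one for $Q_4$:

```python
# Doubling: the two copies of a connected dominating set of Q_n, joined by
# the matching edges, give a connected dominating set of Q_{n+1}.
def neighbors(v, n):
    return [v ^ (1 << i) for i in range(n)]

def is_conn_dom(S, n):
    S = set(S)
    dom = all(v in S or any(u in S for u in neighbors(v, n))
              for v in range(2 ** n))
    seen, stack = set(), [next(iter(S))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(u for u in neighbors(v, n) if u in S)
    return dom and seen == S

D3 = {0b000, 0b100, 0b110, 0b111}   # a path; connected dominating in Q_3
assert is_conn_dom(D3, 3)
D4 = D3 | {v | 0b1000 for v in D3}  # the two copies inside Q_4
assert is_conn_dom(D4, 4)
assert len(D4) == 2 * len(D3)
```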
Now suppose $n$ lies between two consecutive values where the Hamming code
construction of the last section applies, say $n=N+j$, where $k\ge1$,
$N=2^k-1$, and $0\le j\le 2^k$.
We apply the doubling proposition $j$ times, starting from
$Q_N$, and obtain:
\[
\gamma(Q_n)\le 2^j \frac{2^N}{N+1}=\frac{N+j}{N+1} \frac{2^n}{n}< 2\frac{2^n}{n}.
\]
It follows that
\[
\gamma(Q_n)/2^n<2/n\rightarrow 0,
\]
as $n\rightarrow\infty$.
This matches the bound given by Jha~\cite{Jha}.
For connected domination we apply Proposition~\ref{thm:ggc} and obtain:
\[
\gcqn<3\gamma(Q_n)<6\frac{2^n}{n}.
\]
It follows that
\[
\frac{\gcqn}{2^n} < \frac{6}{n} \rightarrow0,
\]
as $n\rightarrow\infty$, confirming our earlier
assertion that there are spanning trees
for hypercubes with almost all vertices being leaves.
Of course, Theorem~\ref{thm:previous} got a better bound than this on
$\gcqn/2^n$; our
main result will do even better.
Let us summarize our findings so far. The domination problem for
$Q_n$ is solved by the Hamming code for $n=N=2^k-1$.
Then as $n=N+j$ grows with $j, 0\le j\le 2^k$, our upper bound
on $\gamma(Q_n)/(2^n/n)$ grows from around 1 to around 2.
However, at $j=2^k$, we have the next Hamming code case,
$n=2^{k+1}-1$, and it is better to switch again to the Hamming
code construction.
It means we have a sawtooth function upper bound, rising from 1 to 2
as $n$ increases, then abruptly dropping back down to 1 and rising
again. Of course, each tooth covers an interval of length about
$2^k$, so the teeth get wider with $k$.
Owing to our upper bound Proposition~\ref{thm:ggc}, for
connected domination $\gcqn$ has a similar sawtooth upper
bound, but each tooth rises from value 3 to 6.
\section{Expansion}\label{sec:Exp}
We introduce a new method of tree construction that takes advantage
of small dominating sets to produce smaller connected dominating
sets in $Q_n$.
This will bring down our upper bound for
connected domination, and eventually allow us to solve our
problem asymptotically.
For constructing a spanning tree,
the Hamming code bound punished us by potentially using up
so many leaves to connect the stars.
If we repeatedly double the construction, then it repeats
this penalty over and over. A better idea could then be
to select one copy (or ``layer") of the base hypercube,
add edges to connect the stellar clusters in just that
layer, and then connect all the copies of each star
center to the one in the special layer.
Describing this explicitly,
let $N=2^k-1$, and $n=N +j$,
where $0\le j\le 2^k$.
Partition the vertices of $Q_n$ into $2^j$ ``layers''
according to the last $j$ coordinates of the vertices
$(a_1,\ldots,a_n)$.
Each layer induces a $Q_N$, and its vertices are partitioned
into $|C|=2^{N-k}$ stars, according to the Hamming code
partition of $Q_{N}$.
For each star $S$ in the partition of $Q_N$, there are
$2^j$ vertices, one in each layer, that are centers of
the stars corresponding to star $S$. The centers all
agree in their first $N$ coordinates, so together induce
a subgraph $Q_j$.
By adding $2^j-1$ edges these stars (copies of $S$)
can be connected into a tree.
We now have a forest of $|C|=2^{N-k}$ such trees.
We connect these trees by adding $|C|-1$ edges,
which may as well all be
in the layer ending with 0's.
Each such edge adds at most two vertices
to the connected dominating set we construct.
It is similar to how we
connected the stars in the Hamming code construction.
We record the result of our expansion construction:
\begin{proposition}\label{thm:Exp}
Let $n=N+j$, where $N=2^k-1$ and $1\le j\le 2^k$.
Then $\gamma_c(Q_n)\le 2^j|C|+2(|C|-1),$
where $C$ is the set of $2^{N-k}$ codewords for
the Hamming code in $Q_{N}$.
\end{proposition}
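For $N=3$ the construction can be run end to end: the Hamming code in $Q_3$ is $C=\{000,111\}$, and a single connecting edge in layer $0$ suffices. The sketch below (our encoding; the connector edge $001$--$011$ is one valid choice, not dictated by the paper) verifies the resulting set and the size bound for $j=1,\dots,4$:

```python
# Expansion construction for N = 3, C = {000, 111}: low N bits give the
# position inside a layer, high j bits give the layer.
def neighbors(v, n):
    return [v ^ (1 << i) for i in range(n)]

def is_conn_dom(S, n):
    S = set(S)
    dom = all(v in S or any(u in S for u in neighbors(v, n))
              for v in range(2 ** n))
    seen, stack = set(), [next(iter(S))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(u for u in neighbors(v, n) if u in S)
    return dom and seen == S

N, C = 3, [0b000, 0b111]
for j in range(1, 5):
    n = N + j
    # all 2^j copies of each star center ...
    D = {c | (layer << N) for c in C for layer in range(2 ** j)}
    # ... plus one connecting edge (001 -- 011) between the stars in layer 0
    D |= {0b001, 0b011}
    assert is_conn_dom(D, n)
    assert len(D) == 2 ** j * len(C) + 2 * (len(C) - 1)
```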
We have seen that $\gcqn/2^n\ge1/n$ for all $n\ge2$.
It would
be nice if we could find a tree construction for $Q_n$ that
has so many leaves that its backbone (connected dominating set)
comes close to achieving the lower bound, acting asymptotically
like a perfect dominating set: What we want is that
$\gcqn/(2^n/n)\rightarrow 1$ as $n\rightarrow\infty$.
Expansion allows us to come much closer to this goal.
Here is what we can show now:
\begin{theorem}\label{thm:ExpAsy}
For all $n\ge1$, $\gamma_c(Q_n)/2^n<2/n$.
For all $n\ge3$, $\gamma_c(Q_n)/2^n>1/n.$
We have
$\liminf_{n\rightarrow\infty} \gcqn/(2^n/n)=1$.
\end{theorem}
\begin{proof}
We have $n, N, k, j$ as above. Proposition~\ref{thm:Exp} gives us
\begin{align*}
\gamma_c(Q_n)
&\le 2^j|C|+2(|C|-1)\\
&< (2^j+2)|C|\\
&=(2^j+2)(2^{N-k})\\
&=(2^n + 2^{N+1})/2^k .
\end{align*}
We rewrite this as
\[
\frac{\gcqn}{2^n/n}<\left(1+\frac{1}{2^{j-1}}\right)
\left(1+\frac{j-1}{2^k}\right).
\]
In our range $1\le j\le 2^k$, the first term in the product
on the right starts at 2 and decreases exponentially quickly
towards 1.
The second term starts at 1 and grows linearly to just below 2
at the end of this range.
For $k\ge2$, the product is at most $2$ throughout this whole range
in $j$; the remaining cases $n\le3$ are checked directly
($\gamma_c(Q_1)=1$, $\gamma_c(Q_2)=2$, $\gamma_c(Q_3)=4$).
This gives the first statement of the theorem.
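Both the exactness of the rewriting and the bound on the product are easy to confirm numerically for $k\ge2$ (a quick check of our own, mirroring the display above):

```python
# Check that (2^n + 2^{N+1})/2^k divided by 2^n/n equals the displayed
# product, and that the product stays at most 2 over 1 <= j <= 2^k.
def max_product(k):
    N = 2 ** k - 1
    worst = 0.0
    for j in range(1, 2 ** k + 1):
        n = N + j
        raw = (2 ** n + 2 ** (N + 1)) / 2 ** k      # (2^j + 2) * 2^{N-k}
        product = (1 + 1 / 2 ** (j - 1)) * (1 + (j - 1) / 2 ** k)
        assert abs(raw / (2 ** n / n) - product) < 1e-9 * product
        worst = max(worst, product)
    return worst

for k in range(2, 8):
    assert max_product(k) <= 2 + 1e-12
```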
The second statement, the lower bound on $\gcqn/2^n$, follows
easily from Proposition~\ref{thm:lower}.
For the third statement, we select values of $n$ for which we
can show that $\gcqn/(2^n/n)$ approaches 1. Specifically, given $k$ take
$j=k+1$, so that $n=2^k+k$. Then in the upper bound inequality
above on $\gcqn/(2^n/n)$, both terms in the product are small
(slightly above 1), and their product $\rightarrow 1$ as
$k\rightarrow\infty$.
The $\liminf$ statement follows.
\end{proof}
An interesting observation is that for $n$ of the
form $2^k-1$, the Hamming code exists, but the
corresponding spanning tree construction for $Q_n$
we described earlier only guarantees that
$\gamma_c(Q_n)/(2^n/n)$ is at most
$3$ for such $n$. We can do better, constructing a
tree that reduces the bound for such $n$ to 2,
by taking the Hamming
construction for $2^{k-1}-1$ and applying the expansion
method with $j=2^{k-1}$.
Nevertheless, we are still seeking to do better,
aiming to construct trees that bring the bound
down to 1 asymptotically.
\section{Main result}\label{sec:Main}
We have shown how to construct spanning trees for
hypercubes $Q_n$ with many leaves--the proportion of
the $2^n$ vertices that are not leaves is at most
roughly $2/n$. The idea is to take a Hamming code and
then expand.
Now observe that the expansion idea can be used starting
from {\em any\/} values of $N$, not just a Hamming code value
$2^k-1$, and from {\em any\/} dominating set $C$ in $Q_N$, to
produce a connected dominating set for $Q_n$, $n=N+j$:
Set $C$ gives a partition of $Q_N$ into
stars. For each star center (vertex in the dominating
set), add edges to connect the $2^j$ copies of the vertex.
In the original $Q_N$ add edges to connect the stars.
We now have a spanning tree for $Q_n$.
Denote by $D$ its backbone, a connected
dominating set in $Q_n$.
We get an upper bound on $|D|$ as in
Proposition~\ref{thm:Exp}.
Assuming $|C|$ is minimum-sized, we get that
\[
\gcqn< (2^j+2)\gamma(Q_N).
\]
Given $n$ large, let $j$ be an integer close to
$\log n$ (logarithm base 2), and take $N=n-j$.
Then the display above implies that
$\gcqn/(2^n/n)$ is bounded above approximately by
$\gamma(Q_N)/(2^N/N)$.
So an upper bound function for the domination number,
shifted to the right by $\log n$,
yields an approximate upper bound function on
connected domination.
In particular, if it holds that for domination
$\gamma(Q_n)/(2^n/n)$ tends towards 1, its
lower limit, then the same will be true for the similar
expression for connected domination!
Fortunately, what we need is proven
in the 1997
book {\em Covering Codes\/} by Cohen, Honkala, Litsyn, and
Lobstein~\cite{CHLL}, p.332. They attribute the result to Kabatyanskii
and Panchenko~\cite{KabPan} (1988). The proof relies on
various coding constructions, including $q$-ary Hamming codes
for prime powers $q$. It also depends on results on the
density of primes.
We include their result on the domination number as the first
part of our Main Theorem below. It is restated for convenient
comparison
to our result for connected domination number, the second part,
which can be viewed as a strengthening of the first part.
\begin{theorem}\label{thm:Main}
The domination ratio for hypercubes satisfies~\cite{KabPan}
\[
{\lim_{n\rightarrow\infty}
\frac{\gamma(Q_n)}{2^n/n}=1.}
\]
The connected domination ratio satisfies
\[
{\lim_{n\rightarrow\infty}
\frac{\gamma_c(Q_n)}{2^n/n}=1.}
\]
\end{theorem}
\begin{proof}
As noted above, the first statement is proven in the literature.
What is new is the second part, which is a stronger statement.
Building on Theorem~\ref{thm:ExpAsy}, it suffices to give an upper bound on
$\gcqn/(2^n/n)$ that goes to 1 as $n\rightarrow\infty$.
As in the discussion above, given $n$
we take $j$ close to $\log n$ and $N=n-j$.
Given $\varepsilon>0$ we have that for all sufficiently large $n$
(and $N$) that
\[
\frac{\gamma(Q_N)}{2^N/N}<1+\varepsilon.
\]
Applying this in the discussion above gives us, for all
sufficiently large $n$,
\[
\frac{\gcqn}{2^n/n}<(1+\varepsilon)^2,
\]
and the second part follows.
\end{proof}
Formulating this equivalently in terms of leaves in spanning trees,
we obtain:
\begin{corollary}\label{thm:leaves}
As $n\rightarrow\infty$,
$L(Q_n)=2^n (1 - \frac{1}{n} + o(\frac{1}{n})).$
\end{corollary}
\section{Further Study}\label{sec:Concl}
Here are some ideas for continuing research.
We were not able to give a simple enough proof of the
domination result that $\gamma(Q_n)/(2^n/n)\rightarrow1$.
We were hoping to give a self-contained proof of our main
result. The proof in the literature of this domination
result relies on rather technical explicit coding constructions.
It would be nice if one could devise an algorithm, or use
probabilistic arguments, to prove the existence of
dominating sets in the hypercube that are as small as
the theorem guarantees.
Another question asked by Bezrukov~\cite{Bez} remains
open: For $n=2^k-1$, starting from the stars given
by the Hamming code, how can one add edges to form a
tree with the most leaves (the smallest connected
dominating set)? We have seen that for large $k$
the connected dominating set will have
size between 2 and 3 times $2^n/n$. How can one
add edges efficiently, to get close to the lower bound?
What can one say about a more general class of graphs?
For instance, one could consider domination and
connected domination in a generalized grid (box) graph,
such as a Cartesian product of $n$ paths on $p$ vertices.
This graph on $p^n$ vertices is the hypercube when
$p=2$. Perhaps the more natural graph to study is
a product of $n$ cycles on $p$ vertices.
Note that for $p=4$ it is the same graph as $Q_{2n}$.
Edenfield~\cite{Ede} recently studied products of
cycles and products of complete graphs, both
generalizations of the hypercube questions in this paper.
Joshua Cooper suggests considering powers of graphs.
That is, for a graph $G=(V,E)$, such as the hypercube,
fix integer $r>0$ and consider the same questions as
before, but for the graph $G^r$: This graph also
has vertex set $V$, but now edges join vertices at distance
at most $r$ in $G$. This is motivated by covering
codes of radius $r$.
\section{Acknowledgements}\label{sec:Ack}
We express gratitude to Sergei Bezrukov for sharing his
questions and notes that led to this study.
Particular thanks go to colleague Joshua Cooper
for bringing to our attention the key domination result
$\gamma(Q_n)\sim 2^n/n$ as presented
in~\cite{CHLL}.
We appreciate support for travel to Taiwan to work on this
project from Mathematics Research Promotion Center Grant 108-17 and
Ministry of Science and Technology Grant 107-2115-M-005-002-MY2.
https://arxiv.org/abs/1204.5551

A lower bound on seller revenue in single buyer monopoly auctions

Abstract: We consider a monopoly seller who optimally auctions a single object to a single potential buyer, with a known distribution of valuations. We show that a tight lower bound on the seller's expected revenue is $1/e$ times the geometric expectation of the buyer's valuation, and that this bound is uniquely achieved for the equal revenue distribution. We show also that when the valuation's expectation and geometric expectation are close, then the seller's expected revenue is close to the expected valuation.

\section{Introduction}
Consider a monopoly seller, selling a single object to a single
potential buyer. We assume that the buyer has a valuation for the
object which is unknown to the seller, and that the seller's
uncertainty is quantified by a probability distribution, from which it
believes the buyer picks its valuation.
Assuming that the seller wishes to maximize its expected revenue,
Myerson~\cite{myerson1981optimal} shows that the optimal incentive
compatible mechanism involves a simple one-time offer: the seller
(optimally) chooses a price and offers the buyer to buy the object for
this price; the assumption is that the buyer accepts the offer if its
valuation exceeds this price. Myerson's seminal paper has become a
classical result in auction theory, with numerous follow-up studies. A
survey of this literature is beyond the scope of this paper (see,
e.g.,~\cite{krishna2009auction,klemperer1999auction}).
The expected seller revenue is an important, simple intrinsic
characteristic of the valuation distribution. A natural question is
its relation with various other properties of the distribution. For
example, can seller revenue be bounded given such characterizations of
the valuation as its expectation, entropy, etc.? An immediate upper
bound on seller revenue is the buyer's expected valuation. In fact,
the seller can extract the buyer's expected valuation only if the
seller knows the buyer's valuation exactly, i.e., if the distribution
over valuations is a point mass.
Lower bounds on seller revenue are important in the study of
approximations to Myerson auctions (see., e.g., Hartline and
Roughgarden~\cite{hartline2009simple}, Daskalakis and
Pierrakos~\cite{daskalakis2011simple}). A general lower bound on the
seller's revenue is known when the distribution of the buyer's
valuation has a monotone hazard rate; in this case, the seller's
expected revenue is at least $1/e$ times the expected valuation (see
Hartline, Mirrokni and Sundararajan~\cite{hartline2008optimal}, as
well as Dhangwatnotai, Roughgarden and
Yan~\cite{dhangwatnotai2010revenue}).
This bound does not hold in general: as an extreme example, the equal
revenue distribution discussed below has infinite expectation but
finite seller revenue. The family of monotone hazard rate
distributions does not include many important distributions such the
Pareto distribution or other power law distributions, or in fact any
distribution that doesn't have a very thin tail, vanishing at least
exponentially. The above mentioned lower bound for monotone hazard
rate distributions does not apply to these distributions, and indeed
it seems that the literature lacks any similar, general lower bounds
on seller revenue.
The {\em geometric expectation} of a positive random variable $X$ is
$\geo{X} = \exp(\E{\log X})$ (see,
e.g.,~\cite{paolella2006fundamental}). We show that a general lower
bound on the seller's expected revenue is $1/e$ times the geometric
expectation of the valuation. Equivalently, the (natural) logarithm
of the expected seller revenue is greater than or equal to the
expectation of the logarithm of the valuation, minus one. This bound
holds for any distribution of positive valuations. Notably, the {\em
regularity} condition, which often appears in the context of Myerson
auctions, is not required here. This result is a new and perhaps
unexpected connection between two natural properties of distributions:
the geometric expectation and expected seller revenue.
We show that this bound is tight in the following sense: for a fixed
value of the geometric mean, there is a unique cumulative distribution
function (CDF) of the buyer's valuation for which the bound is
achieved; this distribution is the equal revenue distribution, with
CDF of the form $F(v) = 1-c/v$ for $v \geq c$. This distribution is
``special'' in the context of single buyer Myerson auctions, as it is
the only one where seller revenue is identical for all prices.
The ratio between expected valuation and expected seller revenue is a
natural measure of the uncertainty of the valuation
distribution. Also, the discrepancy between the geometric expectation
and the (arithmetic) expectation of a positive random variable is a
well known measure of its dispersion. Hence, when the ratio between
the expectations is close to one, one would expect the amount of
uncertainty to be low and therefore seller revenue to be close to the
expected valuation. We show that this is indeed the case: when the
buyer's valuation has finite expectation, and the geometric
expectation is within a factor of $1-\delta$ of the expectation, then
seller revenue is within a factor of $1-2^{4/3}\delta^{1/3}$ of the
expected valuation. Similarly, it is easy to show that when the
variance of the valuation approaches zero then seller revenue also
approaches the expected valuation.
\section{Definitions and results}
We consider a seller who wishes to sell a single object to a single
potential buyer. The buyer has a valuation $V$ for the object which is
picked from a distribution with CDF $F$, i.e. $F(v) = \P{V \leq v}$.
We assume that $V$ is positive, so that $\P{V \leq 0} = 0$ or $F(0) =
0$. We otherwise make no assumptions on the distribution of $V$; it
may be atomic or non-atomic, have or not have an expectation, etc.
The seller offers the object to the buyer for a fixed price $p$. The
buyer accepts the offer if $p < V$, in which case the seller's revenue
is $p$. Otherwise, i.e., if $p \geq V$, then the seller's revenue is
0. Thus, the seller's expected revenue for price $p$, which we denote
by $\uup{p}{V}$, is given by
\begin{align}
\label{eq:u-p}
\uup{p}{V} = p\P{p < V} = p(1-F(p)).
\end{align}
We define
\begin{align}
\label{eq:u-s-max}
\uu{V} = \sup_p\uup{p}{V} = \sup_p p(1-F(p)).
\end{align}
When this supremum is achieved for some price $p$ then $\uu{V}$ is the
seller's maximal expected revenue, achieved in the optimal Myerson
auction with price $p$.
We define the {\em geometric expectation} (see,
e.g.,~\cite{paolella2006fundamental}) of a positive real random
variable $X$ by $\geo{X} = \exp\left(\E{\log X}\right)$. Note that
$\geo{X} \leq \E{X}$ by Jensen's inequality, and that equality is
achieved only for point mass distributions, i.e., when the buyer's
valuation is some fixed number. Note that likewise $\uu{V} \leq
\E{V}$, again with equality only for point mass distributions.
The equal revenue distribution
with parameter $c$ has the following CDF:
\begin{align}
\label{eq:tight}
\Phi_c(p) = \begin{cases}0&p \leq c\\ 1-\frac{c}{p}& p>c\end{cases}.
\end{align}
It is called ``equal revenue'' because if $V_c$ has CDF $\Phi_c$ then
$\uup{p}{V_c} = \uu{V_c}$ for all $p \geq c$.
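As a quick illustration (our code, following the definitions above), one can verify numerically that the revenue curve of $\Phi_c$ is flat at $c$ for prices above the parameter and strictly smaller below it:

```python
# Revenue under the equal revenue CDF Phi_c: p(1 - Phi_c(p)) = c for p >= c.
def Phi(p, c):
    return 0.0 if p <= c else 1.0 - c / p

def revenue(p, c):
    return p * (1.0 - Phi(p, c))

c = 2.5
for p in [c, 3.0, 10.0, 1e6]:
    assert abs(revenue(p, c) - c) < 1e-9   # u(p, V_c) = c for p >= c
for p in [0.1, 1.0, 2.4]:
    assert revenue(p, c) == p and p < c    # below c the offer always sells
```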
Our main result is the following theorem.
\begin{theorem}\label{thmGeometricLowerBound}
Let $V$ be a positive random variable. Then $\uu{V} \geq
\frac{1}{e}\geo{V}$, with equality if and only if $V$ has the equal
revenue CDF $\Phi_c$ with $c = \uu{V}$.
\end{theorem}
\begin{proof}
Let $V$ be a positive random variable with CDF $F$. By
Eq.~\ref{eq:u-s-max} we have that
\begin{align}
\label{eq:basic}
\log \uu{V} \geq \log p + \log(1-F(p))
\end{align}
for all $p$. We now take the expectation of both sides with respect
to $p \sim F$:
\begin{align}
\label{eq:expectations}
\int_0^\infty \log \uu{V} dF(p) \geq \int_0^\infty \log p \,dF(p) +
\int_0^\infty\log(1-F(p)) dF(p).
\end{align}
Since $\uu{V}$ is a constant, the l.h.s.\ equals $\log \uu{V}$. The first
addend on the r.h.s.\ is simply $\E{\log V}$. The second is $\E{\log
(1-F(V))}$; note that $F(V)$ is distributed uniformly on $[0,1]$
whenever $F$ is continuous, and that therefore
\begin{align*}
\E{\log (1-F(V))} = \int_0^1\log(1-x)dx = -1.
\end{align*}
Hence Eq.~\ref{eq:expectations} becomes:
\begin{align*}
\log \uu{V} \geq \E{\log V} - 1,
\end{align*}
and
\begin{align*}
\uu{V} \geq \frac{1}{e}\exp(\E{\log V}) = \frac{1}{e}\geo{V}.
\end{align*}
To see that $\uu{V}=\frac{1}{e}\geo{V}$ only for the equal revenue
distribution with parameter $\uu{V}$, note that we have equality in
Eq.~\ref{eq:basic} for all $p$ in the support of $F$ if and only if
$F=\Phi_c$ for some $c$, and that therefore we have equality in
Eq.~\ref{eq:expectations} if and only if $F=\Phi_c$ for some
$c$. Finally, a simple calculation yields that $c=\uu{V}$.
\end{proof}
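As a numerical sanity check of the theorem (the uniform distribution below is an illustrative choice, not an example from the paper), for $V$ uniform on $[0,1]$ both sides are known exactly: $\uu{V} = \sup_p p(1-p) = 1/4$, while $\geo{V} = \exp\left(\int_0^1 \log x \, dx\right) = e^{-1}$, so the bound reads $1/4 \geq e^{-2}$:

```python
import math

# V uniform on [0, 1]:  F(p) = p on [0, 1], so u_p(V) = p(1 - p).
# Maximal revenue over a price grid; the true optimum is 1/4 at p = 1/2.
u = max(p / 1000 * (1 - p / 1000) for p in range(1001))

# Geometric expectation: exp(E[log V]) = exp(-1).
geo = math.exp(-1.0)

# Theorem: u(V) >= geo(V) / e, i.e. 1/4 >= e^{-2} ~ 0.1353.
assert u >= geo / math.e
```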
Note that this proof in fact demonstrates a stronger statement, namely
that the expected revenue is at least $\frac{1}{e}\geo{V}$ for a
seller picking a random price from the distribution of
$V$. Dhangwatnotai, Roughgarden and
Yan~\cite{dhangwatnotai2010revenue} use similar ideas to show lower
bounds on revenue, for valuation distributions with monotone hazard
rates.
We next show that when the geometric expectation approaches the
(arithmetic) expectation then the seller revenue also approaches the
expectation.
\begin{theorem}
Let $V$ be a positive random variable with finite expectation, and
let $\geo{V}=(1-\delta)\E{V}$. Then $\uu{V} \geq
\left(1-2^{4/3}\delta^{1/3}\right)\E{V}$.
\end{theorem}
\begin{proof}
Let $V$ be a positive random variable with finite expectation, and
denote $1-\delta = \frac{\geo{V}}{\E{V}}$. We normalize $V$ so that
$\E{V} = 1$, and prove the claim by showing that $\uu{V} \geq
1-2^{4/3}\delta^{1/3}$.
Consider the random variable $V-1-\log V$. Since $\E{V}=1$, we have
that $\E{V-1-\log V} = -\log \geo{V} = -\log(1-\delta)$. Since $x -
1 \geq \log x$ for all $x>0$, the random variable $V-1-\log V$ is
non-negative. Hence, by Markov's inequality,
\begin{align*}
\P{V-1-\log V \geq -k \log (1-\delta)} \leq \frac{1}{k},
\end{align*}
or
\begin{align}
\label{eq:concentration}
\P{Ve^{1-V} \leq (1-\delta)^k} \leq \frac{1}{k}.
\end{align}
This inequality is a concentration result, showing that when
$\delta$ is small then $Ve^{1-V}$ is unlikely to be much less than
one. However, for our end we require a concentration result on $V$
rather than on $Ve^{1-V}$; that will enable us to show that the
seller can sell with high probability for a price close to the
arithmetic expectation. To this end, we will use the {\em Lambert
$W$ function}, which is defined at $x$ as the solution of the
equation $W(x)e^{W(x)} = x$; we take the principal branch, so that
$-W(-c/e) \in (0,1]$ for $c \in (0,1]$. Since $v \mapsto ve^{1-v}$
is increasing on $(0,1]$, the event $\{V \leq
-W(-(1-\delta)^k/e)\}$ implies the event of
Eq.~\ref{eq:concentration}, and we arrive at
\begin{align*}
\P{V \leq -W\left(-(1-\delta)^k/e\right)} \leq \frac{1}{k},
\end{align*}
which is the concentration result we needed: $V$ is unlikely to be
small when $\delta$ is small. It follows that by setting the price
at $-W\left(-(1-\delta)^k/e\right)$, the seller sells with
probability at least $1-1/k$, and so
\begin{align*}
\uu{V} \geq -W\left(-(1-\delta)^k/e\right) \cdot \Big(1-1/k\Big).
\end{align*}
Now, an upper bound on $W$ is the
following~\cite{corless1996lambertw}:
\begin{align*}
W(x) \leq -1+\sqrt{2(ex+1)},
\end{align*}
and so
\begin{align*}
\uu{V} \geq \Big(1-\sqrt{2(1-(1-\delta)^k)}\Big) \cdot
\Big(1-1/k\Big)
\geq \Big(1-\sqrt{2\delta k}\Big) \cdot
\Big(1-1/k\Big).
\end{align*}
Setting $k=(2\delta)^{-1/3}$ we get
\begin{align*}
\uu{V}
\geq \Big(1-(2\delta)^{1/2} (2\delta)^{-1/6}\Big) \cdot \Big(1-(2\delta)^{1/3}\Big) \geq 1-2(2\delta)^{1/3}.
\end{align*}
\end{proof}
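The bound can be exercised on a concrete family; the two-point distribution below is a hypothetical example chosen only for this sketch. With $V \in \{1-a, 1+a\}$ equiprobable, $\E{V}=1$, $\geo{V}=\sqrt{1-a^2}$, and for small $a$ the optimal revenue is $1-a$ (obtained by prices just below the low valuation):

```python
import math

# Hypothetical two-point valuation: V = 1-a or 1+a, each w.p. 1/2, so E[V] = 1.
a = 0.1
values, probs = [1 - a, 1 + a], [0.5, 0.5]

# Geometric expectation geo(V) = exp(E[log V]) = sqrt(1 - a^2), and delta.
geo = math.exp(sum(q * math.log(v) for v, q in zip(values, probs)))
delta = 1.0 - geo

# For discrete V, sup_p p(1 - F(p)) = max_i v_i * P(V >= v_i)
# (approached by prices just below each support point).
u = max(v * sum(q for w, q in zip(values, probs) if w >= v) for v in values)

# Theorem: u(V) >= (1 - 2^{4/3} delta^{1/3}) * E[V].
bound = 1.0 - 2 ** (4 / 3) * delta ** (1 / 3)
assert u >= bound
```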
\section{Open questions}
It may very well be possible to show tighter {\em upper} bounds for
$\uu{V}$, using continuous entropy. For example, let $V$ have
expectation $1$ and entropy at least $1$. Then $\uu{V}$ is
at most $1/e$: in fact, it is equal to $1/e$ since, by maximum entropy
arguments, there is only one distribution on $\R^+$ (the exponential
with expectation $1$) that satisfies both conditions, and for this
distribution $\uu{V}=1/e$.
One could hope that it is likewise possible to prove upper bounds on
$\uu{V}$, given that $V$ has expectation $1$ and entropy at least $h <
1$; intuitively, the entropy constraint should force $V$ to spread
rather than concentrate around its expectation, decreasing the
seller's expected revenue.
\section{Acknowledgments}
We would like to thank Elchanan Mossel for commenting on a preliminary
version of this paper. We owe a debt of gratitude to the anonymous
reviewer who helped us improve the paper significantly through many
helpful suggestions.
This research is supported by ISF grant 1300/08, and by a Google
Europe Fellowship in Social Computing.
\bibliographystyle{elsarticle-num}

% arXiv:1204.5551 [cs.GT]: ``A lower bound on seller revenue in single buyer
% monopoly auctions.''
% Abstract: We consider a monopoly seller who optimally auctions a single
% object to a single potential buyer, with a known distribution of valuations.
% We show that a tight lower bound on the seller's expected revenue is $1/e$
% times the geometric expectation of the buyer's valuation, and that this
% bound is uniquely achieved for the equal revenue distribution. We show also
% that when the valuation's expectation and geometric expectation are close,
% then the seller's expected revenue is close to the expected valuation.
% arXiv:1109.3115 [math.SG]: ``Log-concavity of complexity one Hamiltonian
% torus actions.''
% Abstract: Let $(M,\omega)$ be a closed $2n$-dimensional symplectic manifold
% equipped with a Hamiltonian $T^{n-1}$-action. Then the
% Atiyah-Guillemin-Sternberg convexity theorem implies that the image of the
% moment map is an $(n-1)$-dimensional convex polytope. In this paper, we show
% that the density function of the Duistermaat-Heckman measure is log-concave
% on the image of the moment map.

\section{Introduction}
In statistical physics, the relation $S(E) = k \log{W(E)}$ is
called Boltzmann's principle, where $W(E)$ is the number of states
with given values of the macroscopic parameters $E$ (such as
energy, temperature, etc.), $k$ is Boltzmann's constant, and $S$
is the entropy of the system, which measures the degree of
disorder in the system. For additive quantities $E$, it is well
known that the entropy is always a concave function (see \cite{O1}
for more details).
In symplectic setting, consider a Hamiltonian $G$-manifold $(M,\omega)$
with the moment map $\mu : M \rightarrow \mathfrak{g^*}$.
The Liouville measure $m_L$ is defined by
$$m_L(U) := \int_U \frac{\omega^n}{n!} $$ for any open set $U \subset M$.
Then the push-forward measure $m_{\DHfunction} := \mu_* m_L$, called
the \textit{Duistermaat-Heckman measure}, can be regarded as a
measure on $\mathfrak{g^*}$ such that for any Borel subset $B
\subset \mathfrak{g^*}$, $m_{\DHfunction}(B) = \int_{\mu^{-1}(B)}
\frac{\omega^n}{n!}$ tells us how many states of our system
have momenta in $B$. By the Duistermaat-Heckman theorem \cite{DH},
$m_{\DHfunction}$ can be expressed in terms of its density function
$\DHfunction(\xi)$ with respect to the Lebesgue measure on
$\mathfrak{g^*}$. Therefore the concavity of the entropy of a given
Hamiltonian system on $(M,\omega)$ can be interpreted as the
log-concavity of $\DHfunction(\xi)$ on the image of $\mu$. A.
Okounkov \cite{O2} proved that the density function of the
Duistermaat-Heckman measure is log-concave on the image of the
moment map for the maximal torus action when $(M,\omega)$ is the
co-adjoint orbit of some classical Lie groups. In \cite{Gr}, W.
Graham showed the log-concavity of the density function of the
Duistermaat-Heckman measure also holds for any K\"{a}hler manifold
admitting a holomorphic Hamiltonian torus action. V. Ginzburg and A.
Knudsen independently conjectured that log-concavity holds for
arbitrary Hamiltonian $G$-manifolds, but this turns out to be false
in general, as shown by Y. Karshon \cite{K1}. Further related works
can be found in \cite{L} and \cite{C}.
As noted in \cite{K1} and \cite{Gr}, log-concavity holds for Hamiltonian toric (i.e.
complexity zero) actions, and Y. Lin dealt with log-concavity of
complexity two Hamiltonian torus actions in \cite{L}. However, there
is no result on log-concavity of complexity one Hamiltonian torus
actions, which is why we restrict our attention to complexity one.
From now on, we assume that $(M,\omega)$ is a $2n$-dimensional
closed symplectic manifold with an effective Hamiltonian $T^{n-1}$-action.
Let $\mu : M \rightarrow \mathfrak{t}^*$ be the corresponding moment
map where $\mathfrak{t}^*$ is a dual of the Lie algebra of
$T^{n-1}$. By Atiyah-Guillemin-Sternberg convexity theorem, the
image of the moment map $\mu(M)$ is an $(n-1)$-dimensional convex
polytope in $\mathfrak{t}^*$. By the Duistermaat-Heckman theorem
\cite{DH}, we have $$ m_{\DHfunction} = \DHfunction(\xi)d\xi $$
where $d\xi$ is the Lebesgue measure on $\mathfrak{t}^* \cong
\R^{n-1}$ and $\DHfunction(\xi)$ is a continuous piecewise
polynomial function of degree less than 2 on $\mathfrak{t}^*$. Our
main theorem is as follows.
\begin{theorem}\label{main}
Let $(M,\omega)$ be a $2n$-dimensional closed symplectic manifold
equipped with a Hamiltonian $T^{n-1}$-action with the moment map
$\mu : M \rightarrow \mathfrak{t}^*$. Then the density function of the Duistermaat-Heckman measure is log-concave on $\mu(M)$.
\end{theorem}
\section{Proof of Theorem \ref{main}}
Let $(M, \omega)$ be a $2n$-dimensional closed symplectic manifold,
and let the $(n-1)$-dimensional torus $T$ act on $(M,\omega)$ in a
Hamiltonian fashion. Denote by $\mathfrak{t}$ the Lie algebra of
$T.$ For a moment map $\mu : M \rightarrow \mathfrak{t}^*$ of the
$T$-action, define the Duistermaat-Heckman function $\DHfunction :
\mathfrak{t}^* \rightarrow \R$ by
\begin{equation*}
\DHfunction (\xi) = \int_{M_\xi} \omega_\xi
\end{equation*}
where $M_\xi$ is the reduced space $\mu^{-1}(\xi)/T$
and $\omega_\xi$ is the corresponding reduced symplectic form on $M_\xi$.
Now we define the x-ray of our action. Let $T_1, \cdots, T_N$ be
the subgroups of $T^{n-1}$ that occur as stabilizers of points in
$M^{2n}$, and let $M_i$ be the set of points whose stabilizer is
$T_i.$ By relabeling, we may assume that each $M_i$ is connected
and that the stabilizer of every point in $M_i$ is $T_i.$ Then
$M^{2n}$ is the disjoint union of the $M_i$'s. It is also well
known that each $M_i$ is open and dense in its closure, and that
this closure is a component of the fixed set $M^{T_i}.$ Let
$\mathfrak{M}$ be the set of the $M_i$'s. Then the \textit{x-ray}
of $(M^{2n}, \omega, \mu)$ is defined as the set of the
$\mu(\overline{M_i})$'s. We recall a basic lemma.
\begin{lemma}\cite[Theorem 3.6]{GS}
\label{lemma: perpendicular subalgebra}
Let $\mathfrak{h}$ be the Lie algebra of $T_i.$ Then $\mu(M_i)$ is
locally of the form $x+\mathfrak{h}^\perp$ for some $x \in
\mathfrak{t}^*.$
\end{lemma}
By this lemma, $\dim_\R \mu(M_i) = m$ for $(n-1-m)$-dimensional
$T_i.$ Each image $\mu(\overline{M_i})$ (resp. $\mu(M_i)$) is called
an \textit{$m$-face} (resp. \textit{an open $m$-face}) of the x-ray
if $T_i$ is $(n-1-m)$-dimensional. Our interest is mainly in open
$(n-2)$-faces of the x-ray, i.e. codimension one in
$\mathfrak{t}^*.$ Figure \ref{figure: illustrate} shows an example of an
x-ray with $n=3$, where the thick lines are $(n-2)$-faces. Now we can
prove the main theorem.
\begin{figure}[ht]
\begin{center}
\begin{pspicture}(-2, -1)(5, 3.5)\footnotesize
\pspolygon[fillstyle=solid,fillcolor=lightgray, linewidth=1.5pt](0,
0)(4, 0)(1, 3)(0, 3)(0, 0)
\psline[linewidth=1.5pt](1, 0)(1, 3)
\psline[linewidth=1.5pt](3, 0)(0, 3)
\psline[linewidth=0.5pt, linestyle=dashed](-1, 0.5)(0, 1)
\psline[linewidth=0.5pt, linestyle=dashed](-0.4, -0.6)(-2, 2.5)
\psline[linewidth=0.5pt, linestyle=dashed](2, 2)(3, 2.5)
\psline[linewidth=0.5pt](-0.8, 0.6)(-0.9, 0.8)(-1.1, 0.7)
\uput[r](3, 2.5){$\mathfrak{k}^*+const$}
\uput[l](-2, 2.5){$\mathfrak{t}^{\prime *}$}
\psline[linewidth=1.7pt, linestyle=dashed](0, 1)(2, 2)
\psline[linewidth=0.5pt](0.4, 1.4)(1.4, 2)
\psdots[dotsize=5pt](0.4, 1.4)(1.4, 2)
\uput[u](0.4, 1.4){$x_0$} \uput[u](1.4, 2){$x_1$}
\psdots[dotsize=5pt](0.6, 1.3)(1.6, 1.8)
\uput[d](0.6, 1.3){$\xi_0$}
\uput[dr](1.6, 1.8){$\xi_1$}
\end{pspicture}
\end{center}
\caption{\label{figure: illustrate} Proof of Theorem \ref{main}}
\end{figure}
\begin{proof}[Proof of Theorem \ref{main}]
When $n=2,$ the result follows from \cite[Lemma 2.19]{K2}. So, we
assume $n \ge 3.$ Pick two arbitrary points $x_0, x_1$ in the image
of $\mu.$ We must show that
\begin{equation}
\label{equation: log-concave formular} t \log \big(\DHfunction (x_1)
\big)+(1-t) \log \big(\DHfunction (x_0) \big) \le \log
\big(\DHfunction ( t x_1 + (1-t) x_0 ) \big)
\end{equation}
for each $t \in [0, 1].$ Put $x_t = t x_1 + (1-t) x_0.$
Let us fix a decomposition $T=S^1 \times \cdots \times S^1.$ By the
decomposition, we identify $\mathfrak{t}$ with $\R^{n-1},$ and
$\mathfrak{t}$ carries the usual Riemannian metric $\langle ,
\rangle_0$ which is a bi-invariant metric. This metric gives the
isomorphism
\begin{equation*}
\iota : \mathfrak{t} \rightarrow \mathfrak{t}^*, ~ X \mapsto \langle
\cdot, X \rangle_0.
\end{equation*}
For a small $\epsilon
>0,$ pick two regular values $\xi_i$ in the ball $B(x_i, \epsilon)$
for $i=0, 1$ which satisfy the following two conditions:
\begin{itemize}
\item[i.] $\xi_1-\xi_0 \in \iota(\Q^{n-1}),$
\item[ii.] the line $L$ containing $\xi_0, \xi_1$
in $\mathfrak{t}^*$ meets each open $m$-face
transversely for $m=1, \cdots, n-2.$
\end{itemize}
Transversality guarantees that the line does not meet any open
$m$-face for $m \le n-3.$ Put
\begin{equation*}
\xi_t = t \xi_1 + (1-t) \xi_0 \text{ and } X = \iota^{-1} (\xi_1 -
\xi_0).
\end{equation*}
Let $\mathfrak{k} \subset \mathfrak{t}$ be the one-dimensional
subalgebra spanned by $X.$ By i., $\mathfrak{k}$ is the Lie
algebra of a circle subgroup of $T,$ call it $K.$ Let
$\mathfrak{t}^\prime$ be the orthogonal complement of $\mathfrak{k}$
in $\mathfrak{t}.$ Again by i., $\mathfrak{t}^\prime$ is the Lie
algebra of an $(n-2)$-dimensional subtorus of $T,$ call it
$T^\prime.$ Let
\begin{equation*}
p: \mathfrak{t}^* \rightarrow \mathfrak{t}^{\prime
*} = \iota (\mathfrak{t}^\prime)
\end{equation*}
be the orthogonal projection along $\mathfrak{k}^* = \iota
(\mathfrak{k}).$ If we put $\mu^\prime = p \circ \mu,$ then
$\mu^\prime : M \rightarrow \mathfrak{t}^{\prime *}$ is a moment map
of the restricted $T^\prime$-action on $M.$ Put $\xi^\prime =
p(\xi_t),$ which is independent of $t \in [0, 1]$ since $\xi_1 -
\xi_0 \in \mathfrak{k}^*.$
We now show that $\xi^\prime$ is a regular value of $\mu^\prime,$
that is, that each point $x \in \mu^{\prime
-1}(\xi^\prime)$ is a regular point of $\mu^\prime.$ By ii. and
Lemma \ref{lemma: perpendicular subalgebra}, the stabilizer $T_x$ is
finite or one-dimensional. If $T_x$ is finite, then $x$ is a regular
point of $\mu$ so that it is also a regular point of $\mu^\prime.$
If $T_x$ is one-dimensional, then $\mu(x)$ is a point of an open
$(n-2)$-face $\mu(M_i)$ such that $x \in M_i.$ Let $\mathfrak{h}$ be
the Lie algebra of $T_i = T_x.$ By Lemma \ref{lemma: perpendicular
subalgebra}, $p(d\mu(T_x M_i))=p(\mathfrak{h}^\perp),$ and the
kernel $\mathfrak{k}$ of $p$ is not contained in
$\mathfrak{h}^\perp$ by transversality. So, $p(\mathfrak{h}^\perp)$
is the whole $\mathfrak{t}^{\prime *}$ because $\dim
\mathfrak{h}^\perp = \dim \mathfrak{t}^{\prime *},$ and this means
that $x$ is a regular point of $\mu^\prime.$ Therefore, we have
shown that $\xi^\prime$ is a regular value of $\mu^\prime.$
Since $\xi^\prime$ is a regular value, the preimage $\mu^{\prime
-1}(\xi^\prime)$ is a manifold and $T^\prime$ acts almost freely on
it, i.e. stabilizers are finite. So, if we denote by
$M_{\xi^\prime}$ the symplectic reduction $\mu^{\prime
-1}(\xi^\prime)/T^\prime,$ then it becomes a symplectic orbifold
carrying the induced symplectic $T/T^\prime$-action. Observe
that the image of $\mu^{\prime -1}(\xi^\prime)$ under $\mu$ is the
thick dashed line in Figure \ref{figure: illustrate}. Since $K/(K
\cap T^\prime) \cong T/T^\prime,$ we will regard $K/(K \cap
T^\prime)$ and $\mathfrak{k}$ as $T/T^\prime$ and its Lie algebra,
respectively. The map $\mu_X := \langle \mu, X \rangle$ induces a
map on $M_{\xi^\prime}$ by $T$-invariance of $\mu,$ call it just
$\mu_X$ where $\langle ~, ~ \rangle : \mathfrak{t}^* \times
\mathfrak{t} \rightarrow \R$ is the evaluation pairing. Then, we can
observe that $\mu_X$ is a Hamiltonian of the $K/(K \cap
T^\prime)$-action on $M_{\xi^\prime},$ and that $M_{\xi_t}$ is
symplectomorphic to the symplectic reduction of $M_{\xi^\prime}$ at
the regular value $\langle \xi_t, X \rangle$ with respect to
$\mu_X.$ If we denote by $\DHfunction_X$ the Duistermaat-Heckman
function of $\mu_X : M_{\xi^\prime} \rightarrow \R,$ then we have
$\DHfunction (\xi_t) = \DHfunction_X (\langle \xi_t, X \rangle)$ for
$t \in [0, 1].$ Since $M_{\xi^\prime}$ is a four-dimensional
symplectic orbifold with Hamiltonian circle action, $\DHfunction_X$
is log-concave by Lemma \ref{orbifoldlogconcave} below. Since the
$\xi_i$ may be chosen arbitrarily close to the $x_i$ and
$\DHfunction$ is continuous by \cite{DH}, letting $\epsilon
\rightarrow 0$ yields (\ref{equation: log-concave formular}) from
the log-concavity of $\DHfunction_X.$
\end{proof}
\begin{lemma}\label{orbifoldlogconcave}
Let $(N,\sigma)$ be a closed four-dimensional Hamiltonian $S^1$-orbifold. Then the density function of the Duistermaat-Heckman measure is
log-concave.
\end{lemma}
\begin{proof}
Let $\phi : N \rightarrow \R$ be a moment map. Then the density function $\DHfunction : \textrm{Im}\phi \rightarrow \R_{\geq 0}$ of the Duistermaat-Heckman measure is given by
$$ \DHfunction(t) = \int_{N_t} \sigma_t $$
for any regular value $t \in \textrm{Im} \phi$. Let $(a,b) \subset \textrm{Im} \phi$ be an open interval consisting of regular values of $\phi$ and fix $t_0 \in (a,b)$.
By the Duistermaat-Heckman theorem \cite{DH}, $[\sigma_t] - [\sigma_{t_0}] = -e(t-t_0)$ for any $t \in (a,b)$, where $e$ is the Euler class of the $S^1$-fibration $\phi^{-1}(t_0) \rightarrow \phi^{-1}(t_0) / S^1$. Therefore
$$ \DHfunction'(t) = - \int_{N_t} e $$ and $$ \DHfunction''(t) = 0 $$ for any $t \in (a,b)$. Note that $\DHfunction(t)$ is log-concave on $(a,b)$ if and only if it satisfies $\DHfunction(t)\cdot \DHfunction''(t) - \DHfunction'(t)^2 \leq 0$ for all $t \in (a,b)$. Hence $\DHfunction(t)$ is log-concave on any open interval consisting of regular values.
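Spelled out: on such an interval $\DHfunction'' \equiv 0$, so wherever $\DHfunction > 0$ we have
\begin{align*}
\left(\log \DHfunction\right)''(t) \;=\; \frac{\DHfunction(t)\,\DHfunction''(t)-\DHfunction'(t)^2}{\DHfunction(t)^2} \;=\; -\left(\frac{\DHfunction'(t)}{\DHfunction(t)}\right)^2 \;\leq\; 0,
\end{align*}
which is exactly the log-concavity criterion just stated.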
Let $c$ be any critical value of $\phi$ in the interior of $\textrm{Im}\phi$. Then it is enough to show that the jump of $(\log{\DHfunction})'$ is negative at $c$.
First, we will show that the jump of the value $ \DHfunction'(t) = - \int_{N_t} e $ is negative at $c$. Choose a small $\epsilon > 0$ such that
$(c-\epsilon, c+\epsilon)$ does not contain a critical value except for $c$.
Let $N_c$ be the symplectic cut of $\phi^{-1}[c-\epsilon,c+\epsilon]$ at the two boundary levels, so that $N_c$ becomes a closed Hamiltonian $S^1$-orbifold whose maximum fixed component is the reduced space $N_{c+\epsilon}$ and whose minimum is $N_{c-\epsilon}$.
Using the Atiyah-Bott-Berline-Vergne localization formula for orbifolds \cite{M}, we have
$$0 = \int_{N_c} 1 = \sum_{p \in N^{S^1} \cap \phi^{-1}(c)} \frac{1}{d_p} \frac{1}{p_1p_2 \lambda^2} + \int_{N_{c-\epsilon}} \frac {1}{\lambda + e_-} + \int_{N_{c+\epsilon}} \frac {1}{- \lambda - e_+}$$
which, comparing coefficients of $\lambda^{-2}$, is equivalent to
$$ \sum_{p \in N^{S^1} \cap \phi^{-1}(c)} \frac{1}{d_p}\,\frac{1}{p_1p_2} = \int_{N_{c-\epsilon}} e_- - \int_{N_{c+\epsilon}}e_+,$$
where $d_p$ is the order of the local group of $p$, $p_1$ and $p_2$ are the weights of the tangential $S^1$-representation on $T_pN$, and
$e_-$ ($e_+$ respectively) is the Euler class of $\phi^{-1}(c-\epsilon)$ ($\phi^{-1}(c+\epsilon)$ respectively). Since $c$ is in the interior of
$\textrm{Im}\phi$, we have $p_1p_2 < 0 $ for any $p \in N^{S^1} \cap \phi^{-1}(c)$. Hence the jump of $ \DHfunction'(t) = - \int_{N_t} e $ is negative
at $c$, which implies that the jump of $(\log \DHfunction)'(t) = \frac{\DHfunction'(t)}{\DHfunction(t)}$ is negative at $c$ (by continuity of $\DHfunction(t)$). This finishes the proof.
\end{proof}
\bigskip
\bibliographystyle{amsalpha}
% arXiv:1111.4587: ``A Complete Characterization of the Gap between Convexity
% and SOS-Convexity.''
% Abstract: Our first contribution in this paper is to prove that three
% natural sum of squares (sos) based sufficient conditions for convexity of
% polynomials, via the definition of convexity, its first order
% characterization, and its second order characterization, are equivalent.
% These three equivalent algebraic conditions, henceforth referred to as
% sos-convexity, can be checked by semidefinite programming whereas deciding
% convexity is NP-hard. If we denote the set of convex and sos-convex
% polynomials in $n$ variables of degree $d$ with $\tilde{C}_{n,d}$ and
% $\tilde{\Sigma C}_{n,d}$ respectively, then our main contribution is to
% prove that $\tilde{C}_{n,d}=\tilde{\Sigma C}_{n,d}$ if and only if $n=1$ or
% $d=2$ or $(n,d)=(2,4)$. We also present a complete characterization for
% forms (homogeneous polynomials) except for the case $(n,d)=(3,4)$ which is
% joint work with G. Blekherman and is to be published elsewhere. Our result
% states that the set $C_{n,d}$ of convex forms in $n$ variables of degree $d$
% equals the set $\Sigma C_{n,d}$ of sos-convex forms if and only if $n=2$ or
% $d=2$ or $(n,d)=(3,4)$. To prove these results, we present in particular
% explicit examples of polynomials in
% $\tilde{C}_{2,6}\setminus\tilde{\Sigma C}_{2,6}$ and
% $\tilde{C}_{3,4}\setminus\tilde{\Sigma C}_{3,4}$ and forms in
% $C_{3,6}\setminus\Sigma C_{3,6}$ and $C_{4,4}\setminus\Sigma C_{4,4}$, and a
% general procedure for constructing forms in
% $C_{n,d+2}\setminus\Sigma C_{n,d+2}$ from nonnegative but not sos forms in
% $n$ variables and degree $d$. Although for disparate reasons, the remarkable
% outcome is that convex polynomials (resp. forms) are sos-convex exactly in
% cases where nonnegative polynomials (resp. forms) are sums of squares, as
% characterized by Hilbert.

\section{Introduction}
\subsection{Nonnegativity and sum of squares}
One of the cornerstones of real algebraic geometry is Hilbert's
seminal paper in 1888~\cite{Hilbert_1888}, where he gives a
complete characterization of the degrees and dimensions for
which nonnegative polynomials can be written as sums of squares of
polynomials. In particular, Hilbert proves in~\cite{Hilbert_1888}
that there exist nonnegative polynomials that are not sums of
squares, although explicit examples of such polynomials appeared
only about 80 years later and the study of the gap between
nonnegative and sums of squares polynomials continues to be an
active area of research to this day.
Motivated by a wealth of new applications and a modern viewpoint
that emphasizes efficient computation, there has also been a great
deal of recent interest from the optimization community in the
representation of nonnegative polynomials as sums of squares
(sos). Indeed, many fundamental problems in applied and
computational mathematics can be reformulated as either deciding
whether certain polynomials are nonnegative or searching over a
family of nonnegative polynomials. It is well-known however that
if the degree of the polynomial is four or larger, deciding
nonnegativity is an NP-hard problem (this follows, e.g., as
an immediate corollary of NP-hardness of deciding matrix
copositivity~\cite{nonnegativity_NP_hard}). On the other hand, it
is also well-known that deciding whether a polynomial can be
written as a sum of squares can be reduced to solving a
semidefinite program, for which efficient algorithms, e.g. based
on interior point methods, are available. The general strategy of
the so-called ``sos relaxation'' has therefore been to replace the
intractable nonnegativity requirements with the more tractable sum
of squares requirements, which obviously provide a sufficient
condition for polynomial nonnegativity.
Relatively recent applications of sum of squares
relaxations span areas as diverse as control
theory~\cite{PhD:Parrilo},~\cite{PositivePolyInControlBook},
quantum computation~\cite{Pablo_Sep_Entang_States}, polynomial
games~\cite{Pablo_poly_games}, combinatorial
optimization~\cite{Stability_number_SOS}, and many others.
\subsection{Convexity and sos-convexity}
Aside from nonnegativity, \emph{convexity} is another fundamental
property of polynomials that is of both theoretical and practical
significance. Perhaps most notably, presence of convexity in an
optimization problem often leads to tractability of finding global
optimal solutions. Consider for example the problem of finding the
unconstrained global minimum of a polynomial. This is an NP-hard
problem in general~\cite{Minimize_poly_Pablo}, but if we know a
priori that the polynomial to be minimized is convex, then every
local minimum is global, and even simple gradient descent
methods can find a global minimum. There are other scenarios where
one would like to decide convexity of polynomials. For example, it
turns out that the $d$-th root of a degree $d$ polynomial
is a norm if and only if the polynomial is homogeneous, positive
definite, and convex~\cite{Blenders_Reznick}. Therefore, if we can
certify that a homogeneous polynomial is convex and definite, then
we can use it to define a norm, which is useful in many
applications. In many other practical settings, we might want to
\emph{parameterize} a family of convex polynomials that have
certain properties, e.g., that serve as a convex envelope for a
non-convex function, approximate a more complicated function, or
fit some data points with minimum error. In the field of robust
control for example, it is common to \aaan{use} convex Lyapunov
functions to prove stability of uncertain dynamical systems
described by difference inclusions. Therefore, the ability
to efficiently search over convex polynomials would lead to
algorithmic ways of constructing such Lyapunov functions.
The question of determining the computational complexity of
deciding convexity of polynomials appeared in 1992 on a list of
seven open problems in complexity theory for numerical
optimization~\cite{open_complexity}. In a recent joint work with
A. Olshevsky and J. N. Tsitsiklis, we have shown that the problem
is strongly NP-hard even for polynomials of degree
four~\cite{NPhard_Convexity_arxiv}. If testing membership in the
set of convex polynomials is hard, searching or optimizing over
them is obviously also hard. This result, like any other hardness
result, stresses the need for good approximation algorithms that
can deal with many instances of the problem efficiently.
The focus of this work is on an algebraic notion known as
\emph{sos-convexity} (introduced formally by Helton and Nie
in~\cite{Helton_Nie_SDP_repres_2}), which is a sufficient
condition for convexity of polynomials based on a sum of
squares decomposition of the Hessian matrix; see
Definition~\ref{def:sos.convex}. As we will briefly review in
Section~\ref{sec:prelims}, the problem of deciding if a given
polynomial is sos-convex amounts to solving a single semidefinite
program.
Besides its computational implications, sos-convexity is an
appealing concept since it bridges the geometric and algebraic
aspects of convexity. Indeed, while the usual definition of
convexity is concerned only with the geometry of the epigraph, in
sos-convexity this geometric property (or the nonnegativity of the
Hessian) must be certified through a ``simple'' algebraic
identity, namely the sum of squares factorization of the Hessian.
The original motivation of Helton and Nie for defining
sos-convexity was in relation to the question of semidefinite
representability of convex sets~\cite{Helton_Nie_SDP_repres_2}.
But this notion has already appeared in the literature in a number
of other
settings~\cite{Lasserre_Jensen_inequality},~\cite{Lasserre_Convex_Positive},~\cite{convex_fitting},~\cite{Chesi_Hung_journal}.
In particular, there has been much recent interest in the role of
convexity in semialgebraic geometry
~\cite{Lasserre_Jensen_inequality},~\cite{Blekherman_convex_not_sos},~\cite{Monique_Etienne_Convex},~\cite{Lasserre_set_convexity}
and sos-convexity is a recurrent figure in this line of research.
\subsection{Contributions and organization of the paper}
The main contribution of this work is to establish the counterpart
of Hilbert's characterization of the gap between nonnegativity and
sum of squares for the notions of convexity and sos-convexity. We
start by presenting some background material in
Section~\ref{sec:prelims}. In
Section~\ref{sec:equiv.defs.of.sos.convexity}, we prove an
algebraic analogue of a classical result in convex analysis, which
provides three equivalent characterizations for sos-convexity
(Theorem~\ref{thm:sos.convexity.3.equivalent.defs}). This result
substantiates the fact that sos-convexity is \emph{the} right sos
relaxation for convexity. In Section~\ref{sec:first.examples}, we
present some examples of convex polynomials that are not
sos-convex. In Section~\ref{sec:full.characterization}, we provide
the characterization of the gap between convexity and
sos-convexity (Theorem~\ref{thm:full.charac.polys} and
Theorem~\ref{thm:full.charac.forms}).
Subsection~\ref{subsec:proof.equal.cases} includes the proofs of
the cases where convexity and sos-convexity are equivalent and
Subsection~\ref{subsec:proof.non.equal.cases} includes the proofs
of the cases where they are not. In particular,
Theorem~\ref{thm:minimal.2.6.and.3.6} and
Theorem~\ref{thm:minimal.3.4.and.4.4} present explicit examples of
convex but not sos-convex polynomials that have dimension and
degree as low as possible, and
Theorem~\ref{thm:conv_not_sos_conv_forms_n3d} provides a general
construction for producing such polynomials in higher degrees.
Some concluding remarks and an open problem are presented in
Section~\ref{sec:concluding.remarks}.
\section{Preliminaries}\label{sec:prelims}
\subsection{Background on nonnegativity and sum of squares}\label{subsec:nonnegativity.sos.basics}
A (multivariate) \emph{polynomial} $p\mathrel{\mathop:}=p(x)$ in
variables $x\mathrel{\mathop:}=(x_1,\ldots,x_n)^T$ is a function
from $\mathbb{R}^n$ to $\mathbb{R}$ that is a finite linear
combination of monomials:
\begin{equation}\nonumber
p(x)=\sum_{\alpha}c_\alpha x^\alpha=\sum_{(\alpha_1,
\ldots, \alpha_n)} c_{\alpha_1,\ldots,\alpha_n}
x_1^{\alpha_1} \cdots x_n^{\alpha_n},
\end{equation}
where the sum is over $n$-tuples of nonnegative integers
$\alpha\mathrel{\mathop:}=(\alpha_1,\ldots,\alpha_n)$. We
will be concerned throughout with polynomials with real
coefficients, i.e., we will have $c_\alpha\in\mathbb{R}$. The ring
of polynomials in $n$ variables with real coefficients is denoted
by $\mathbb{R}[x]$. The \emph{degree} of a monomial $x^\alpha$ is
equal to $\alpha_1 + \cdots + \alpha_n$. The degree of a
polynomial $p\in\mathbb{R}[x]$ is defined to be the highest degree
of its component monomials. A polynomial $p$ is said to be
\emph{nonnegative} or \emph{positive semidefinite (psd)} if
$p(x)\geq0$ for all $x\in\mathbb{R}^n$. Clearly, a necessary
condition for a polynomial to be psd is for its degree to be even.
We say that $p$ is a \emph{sum of squares (sos)}, if there exist
polynomials $q_{1},\ldots,q_{m}$ such that
$p=\sum_{i=1}^{m}q_{i}^{2}$. We denote the set of psd (resp. sos)
polynomials in $n$ variables and degree $d$ by $\tilde{P}_{n,d}$
(resp. $\tilde{\Sigma}_{n,d}$). Any sos polynomial is clearly psd,
so we have $\tilde{\Sigma}_{n,d}\subseteq \tilde{P}_{n,d}$.
A \emph{homogeneous polynomial} (or a \emph{form}) is a polynomial
where all the monomials have the same degree. A form $p$ of degree
$d$ is a homogeneous function of degree $d$ since it satisfies
$p(\lambda x)=\lambda^d p(x)$ for any scalar
$\lambda\in\mathbb{R}$. We say that a form $p$ is \emph{positive
definite} if $p(x)>0$ for all $x\neq0$ in $\mathbb{R}^n$.
Following standard notation, we denote the set of psd (resp. sos)
homogeneous polynomials in $n$ variables and degree $d$ by
$P_{n,d}$ (resp. $\Sigma_{n,d}$). Once again, we have the obvious
inclusion $\Sigma_{n,d}\subseteq P_{n,d}$. All of the four sets
$\Sigma_{n,d}, P_{n,d}, \tilde{\Sigma}_{n,d}, \tilde{P}_{n,d}$ are
closed convex cones. The closedness of the sum of squares cone may
not be so obvious. This fact was first proved by
Robinson~\cite{RobinsonSOS}. We will make crucial use of it in the
proof of Theorem~\ref{thm:sos.convexity.3.equivalent.defs} in the
next section.
Any form of degree $d$ in $n$ variables can be ``dehomogenized''
into a polynomial of degree $\leq d$ in $n-1$ variables by setting
$x_n=1$. Conversely, any polynomial $p$ of degree $d$ in $n$
variables can be ``homogenized'' into a form $p_h$ of degree $d$
in $n+1$ variables, by adding a new variable $y$, and letting $
p_h(x_1,\ldots,x_n,y)\mathrel{\mathop:}=y^{d} \, p\left({x_1}/{y},
\ldots, {x_n}/{y}\right)$. The properties of being psd and sos
are preserved under homogenization and
dehomogenization~\cite{Reznick}.
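To make the homogenization operation concrete, the following Python sketch (using an arbitrary quadratic of our choosing, not an example from the text) checks both defining identities numerically:

```python
# Homogenization of p(x) = x^2 + 3x + 5 (degree d = 2) into
# p_h(x, y) = y^2 * p(x / y) = x^2 + 3xy + 5y^2.
def p(x):
    return x**2 + 3*x + 5

def p_h(x, y):
    return x**2 + 3*x*y + 5*y**2

# Dehomogenizing by setting y = 1 recovers p.
for x in [-2.0, 0.0, 1.5, 7.0]:
    assert p_h(x, 1.0) == p(x)

# p_h is homogeneous of degree 2: p_h(ax, ay) = a^2 * p_h(x, y).
for a in [0.5, 2.0, -3.0]:
    assert abs(p_h(a*1.0, a*2.0) - a**2 * p_h(1.0, 2.0)) < 1e-9
```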
A very natural and fundamental question, which as we mentioned
earlier was answered by Hilbert, is to understand \aaan{for} what
dimensions and degrees nonnegative polynomials (or forms) can
\aaan{always} be represented as sums of squares, i.e., for what
values of $n$ and $d$ we have
$\tilde{\Sigma}_{n,d}=\tilde{P}_{n,d}$ or $\Sigma_{n,d}=P_{n,d}$.
Note that because of the argument in the last paragraph, we have
$\tilde{\Sigma}_{n,d}=\tilde{P}_{n,d}$ if and only if
$\Sigma_{n+1,d}=P_{n+1,d}$. Hence, it is enough to answer the
question just for polynomials or just for forms\aaa{.}
\begin{theorem}[Hilbert,~\cite{Hilbert_1888}]\label{thm:Hilbert}
$\tilde{\Sigma}_{n,d}=\tilde{P}_{n,d}$ if and only if $n=1$ or
$d=2$ or $(n,d)=(2,4)$. Equivalently, $\Sigma_{n,d}=P_{n,d}$ if
and only if $n=2$ or $d=2$ or $(n,d)=(3,4)$.
\end{theorem}
The proofs of $\tilde{\Sigma}_{1,d}=\tilde{P}_{1,d}$ and
$\tilde{\Sigma}_{n,2}=\tilde{P}_{n,2}$ are relatively simple and
were known before Hilbert. On the other hand, the proof of the
fairly surprising fact that $\tilde{\Sigma}_{2,4}=\tilde{P}_{2,4}$
(or equivalently $\Sigma_{3,4}=P_{3,4}$) is more involved. We
refer the interested reader to
\cite{NewApproach_Hilbert_Ternary_Quatrics},
\cite{Scheiderer_ternary_quartic},
\cite{Choi_Lam_extremalPSDforms}, and references in~\cite{Reznick}
for some modern expositions and alternative proofs of this result.
Hilbert's other main contribution was to show that these are the
only cases where nonnegativity and sum of squares are equivalent
by giving a nonconstructive proof of existence of polynomials in
$\tilde{P}_{2,6}\setminus\tilde{\Sigma}_{2,6}$ and
$\tilde{P}_{3,4}\setminus\tilde{\Sigma}_{3,4}$ (or equivalently
forms in $P_{3,6}\setminus\Sigma_{3,6}$ and
$P_{4,4}\setminus\Sigma_{4,4}$). From this, it follows with simple
arguments that in all higher dimensions and degrees there must
also be psd but not sos polynomials; see~\cite{Reznick}. Explicit
examples of such polynomials appeared in the 1960s starting from
the celebrated Motzkin form~\cite{MotzkinSOS}:
\begin{equation}\label{eq:Motzkin.form}
M(x_1,x_2,x_3)=x_1^4x_2^2+x_1^2x_2^4-3x_1^2x_2^2x_3^2+x_3^6,
\end{equation}
which belongs to $P_{3,6}\setminus\Sigma_{3,6}$, and continuing a
few years later with the Robinson form~\cite{RobinsonSOS}:
\begin{equation}\label{eq:Robinston.form}
R(x_1,x_2,x_3,x_4)=x_1^2(x_1-x_4)^2+x_2^2(x_2-x_4)^2+x_3^2(x_3-x_4)^2+2x_1x_2x_3(x_1+x_2+x_3-2x_4),
\end{equation}
which belongs to $P_{4,4}\setminus\Sigma_{4,4}$.
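While the absence of an sos decomposition is an algebraic statement that cannot be certified by sampling, the nonnegativity of these two forms is easy to sanity-check numerically; the sketch below evaluates both on a grid of sample points:

```python
import itertools

def motzkin(x1, x2, x3):
    return x1**4*x2**2 + x1**2*x2**4 - 3*x1**2*x2**2*x3**2 + x3**6

def robinson(x1, x2, x3, x4):
    return (x1**2*(x1 - x4)**2 + x2**2*(x2 - x4)**2 + x3**2*(x3 - x4)**2
            + 2*x1*x2*x3*(x1 + x2 + x3 - 2*x4))

# Both forms are nonnegative everywhere on a grid of sample points...
grid = [i / 2.0 for i in range(-4, 5)]
assert all(motzkin(*pt) >= -1e-9 for pt in itertools.product(grid, repeat=3))
assert all(robinson(*pt) >= -1e-9 for pt in itertools.product(grid, repeat=4))
# ...yet each has nontrivial zeros, so they are psd but not positive definite.
assert motzkin(1, 1, 1) == 0
assert robinson(1.0, 1.0, 0.0, 1.0) == 0
</antml>```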
Several other constructions of psd polynomials that are not sos
have appeared in the literature since. An excellent survey
is~\cite{Reznick}. See also~\cite{Reznick_Hilbert_construciton}
and~\cite{Blekherman_nonnegative_and_sos}.
\subsection{Connection to semidefinite programming and matrix
generalizations}\label{subsec:sos.sdp.and.matrix.generalize} As we
remarked before, what makes sum of squares an appealing concept
from a computational viewpoint is its relation to semidefinite
programming. It is well-known (see e.g. \cite{PhD:Parrilo},
\cite{sdprelax}) that a polynomial $p$ in $n$ variables and of
even degree $d$ is a sum of squares if and only if there exists a
positive semidefinite matrix $Q$ (often called the Gram matrix)
such that
$$p(x)=z^{T}Qz,$$
where $z$ is the vector of monomials of degree up to $d/2$
\begin{equation}\label{eq:monomials}
z=[1,x_{1},x_{2},\ldots,x_{n},x_{1}x_{2},\ldots,x_{n}^{d/2}].
\end{equation}
The set of all such matrices $Q$ is the feasible set of a
semidefinite program (SDP). For fixed $d$, the size of this
semidefinite program is polynomial in $n$. Semidefinite programs
can be solved with arbitrary accuracy in polynomial time. There
are several implementations of semidefinite programming solvers,
based on interior point algorithms among others, that are very
efficient in practice and widely used; see~\cite{VaB:96} and
references therein.
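As a toy illustration of the Gram matrix correspondence (the polynomial and the matrix $Q$ below are hand-picked for this sketch, not produced by an SDP solver), consider $p(x)=x^4+2x^2+1=(x^2+1)^2$ with $z=(1,x,x^2)^T$:

```python
# p(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2.  With z = (1, x, x^2)^T, one valid
# Gram matrix is Q = v v^T for v = (1, 0, 1)^T.  Q is psd by construction,
# and z^T Q z = (v^T z)^2 = (1 + x^2)^2 = p(x), certifying that p is sos.
v = [1.0, 0.0, 1.0]
Q = [[vi * vj for vj in v] for vi in v]

def p(x):
    return x**4 + 2*x**2 + 1

def zQz(x):
    z = [1.0, x, x**2]
    return sum(Q[i][j] * z[i] * z[j] for i in range(3) for j in range(3))

for x in [-3.0, -0.5, 0.0, 1.0, 2.5]:
    assert abs(zQz(x) - p(x)) < 1e-9
```

In general an SDP solver searches over all psd matrices $Q$ satisfying the linear constraints that match coefficients; here the decomposition is known in closed form.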
The notions of positive semidefiniteness and sum of squares of
scalar polynomials can be naturally extended to polynomial
matrices, i.e., matrices with entries in $\mathbb{R}[x]$. We say
that a symmetric polynomial matrix $U(x)\in \mathbb{R}[x]^{m
\times m}$ is positive semidefinite if $U(x)$ is positive
semidefinite in the matrix sense for all $x\in \mathbb{R}^n$, i.e.,
if $U(x)$ has nonnegative eigenvalues for all $x\in \mathbb{R}^n$.
It is straightforward to see that this condition holds if and only
if the scalar polynomial $y^{T}U(x)y$ in $m+n$ variables $[x; y]$
is psd. A homogeneous polynomial matrix $U(x)$ is said to be
positive definite, if it is positive definite in the matrix sense,
i.e., has positive eigenvalues, for all $x\neq0$ in
$\mathbb{R}^n$. The definition of an sos-matrix is as follows
\cite{Kojima_SOS_matrix}, \cite{Symmetry_groups_Gatermann_Pablo},
\cite{matrix_sos_Hol}.
\begin{definition}\label{def:sos-matrix}
A symmetric polynomial matrix $U(x)\in~\mathbb{R}[x]^{m \times
m}$,$ \ x\in \mathbb{R}^n,$ is an \emph{sos-matrix} if there
exists a polynomial matrix $V(x)\in \mathbb{R}[x]^{s \times m}$
for some $s\in\mathbb{N}$, such that $U(x)~=~V^{T}(x)V(x)$.
\end{definition}
It turns out that a polynomial matrix $U(x)\in \mathbb{R}[x]^{m
\times m}$, $\ x\in~\mathbb{R}^n,$ is an sos-matrix if and only if
the scalar polynomial $y^{T}U(x)y$ is a sum of squares in
$\mathbb{R}[x; y]$; see~\cite{Kojima_SOS_matrix}. This is a useful
fact because in particular it gives us an easy way of checking
whether a polynomial matrix is an sos-matrix by solving a
semidefinite program. Once again, it is obvious that being an
sos-matrix is a sufficient condition for a polynomial matrix to be
positive semidefinite.
\subsection{Background on convexity and sos-convexity}\label{subsec:convexity.sos.convexity.basics}
A polynomial $p$ is (globally) convex if for all $x$ and $y$ in
$\mathbb{R}^n$ and all $\lambda \in [0,1]$, we have
\begin{equation}\label{eq:convexity.defn.}
p(\lambda x+(1-\lambda)y)\leq \lambda p(x)+(1-\lambda)p(y).
\end{equation}
Since polynomials are continuous functions, the inequality in
(\ref{eq:convexity.defn.}) holds if and only if it holds for a
fixed value of $\lambda\in(0,1)$, say, $\lambda=\frac{1}{2}$. In
other words, $p$ is convex if and only if
\begin{equation}\label{eq:convexity.with.lambda.0.5}
p\left(\textstyle{\frac{1}{2}}
x+\textstyle{\frac{1}{2}}y\right)\leq \textstyle{\frac{1}{2}
p(x)}+\textstyle{\frac{1}{2}}p(y)
\end{equation} for all $x$ and
$y$; see e.g.~\cite[p. 71]{Rudin_RealComplexAnalysis}. Except for
the trivial case of linear polynomials, an odd degree polynomial
is clearly never convex.
For the sake of direct comparison with a result that we derive in
the next section
(Theorem~\ref{thm:sos.convexity.3.equivalent.defs}), we recall
next a classical result from convex analysis on the first and
second order characterization of convexity. The proof can be found
in many convex optimization textbooks, e.g.~\cite[p.
70]{BoydBook}. The theorem is of course true for any twice
differentiable function, but for our purposes we state it for
polynomials.
\begin{theorem}\label{thm:classical.first.2nd.order.charac.}
Let $p\mathrel{\mathop:}=p(x)$ be a polynomial. Let $\nabla
p\mathrel{\mathop:}=\nabla p(x)$ denote its gradient and let
$H\mathrel{\mathop:}=H(x)$ be its Hessian, i.e., the $n \times n$
symmetric matrix of second derivatives. Then the following are
equivalent.
\textbf{(a)} $p\left(\textstyle{\frac{1}{2}}
x+\textstyle{\frac{1}{2}}y\right)\leq \textstyle{\frac{1}{2}
p(x)}+\textstyle{\frac{1}{2}}p(y),\quad \forall
x,y\in\mathbb{R}^n$; (i.e., $p$ is convex).
\textbf{(b)} $p(y) \geq p(x)+\nabla p(x)^T(y-x),\quad \forall
x,y\in\mathbb{R}^n$\aaan{; (i.e., $p$ lies above the supporting
hyperplane at every point).}
\textbf{(c)} $y^TH(x)y\geq0,\quad \forall x,y\in\mathbb{R}^n$;
(i.e., $H(x)$ is a positive semidefinite polynomial matrix).
\end{theorem}
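The three characterizations can be spot-checked numerically for a concrete convex polynomial; here we take $p(x)=x^4$, an arbitrary choice for illustration:

```python
def p(x):      return x**4
def grad_p(x): return 4*x**3   # gradient (here univariate: p')
def hess_p(x): return 12*x**2  # Hessian (here a 1x1 matrix: p'')

pts = [-2.0, -0.3, 0.0, 1.0, 1.7]
for x in pts:
    for y in pts:
        # (a) midpoint convexity
        assert p(0.5*x + 0.5*y) <= 0.5*p(x) + 0.5*p(y) + 1e-9
        # (b) first-order condition: p lies above its tangent lines
        assert p(y) >= p(x) + grad_p(x)*(y - x) - 1e-9
        # (c) second-order condition: y^T H(x) y >= 0
        assert hess_p(x) * y**2 >= 0
```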
Helton and Nie proposed in~\cite{Helton_Nie_SDP_repres_2} the
notion of \emph{sos-convexity} as an sos relaxation for the second
order characterization of convexity (condition \textbf{(c)}
above).
\begin{definition}\label{def:sos.convex}
A polynomial $p$ is \emph{sos-convex} if its Hessian
$H\mathrel{\mathop:}=H(x)$ is an sos-matrix.
\end{definition}
With what we have discussed so far, it should be clear that
sos-convexity is a sufficient condition for convexity of
polynomials \aaa{and} can be checked with semidefinite
programming. In the next section, we will show some other natural
sos relaxations for polynomial convexity, which will turn out to
be equivalent to sos-convexity.
We end this section by introducing some final notation:
$\tilde{C}_{n,d}$ and $\tilde{\Sigma C}_{n,d}$ will respectively
denote the set of convex and sos-convex polynomials in $n$
variables and degree $d$; $C_{n,d}$ and $\Sigma C_{n,d}$ will
respectively denote the set of convex and sos-convex homogeneous
polynomials in $n$ variables and degree $d$. Again, these four
sets are closed convex cones and we have the obvious inclusions
$\tilde{\Sigma C}_{n,d}\subseteq\tilde{C}_{n,d}$ and $\Sigma
C_{n,d}\subseteq C_{n,d}$.
\section{Equivalent algebraic relaxations for convexity of
polynomials}\label{sec:equiv.defs.of.sos.convexity} An obvious way
to formulate alternative sos relaxations for convexity of
polynomials is to replace every inequality in
Theorem~\ref{thm:classical.first.2nd.order.charac.} with its sos
version. In this section we examine how these relaxations relate
to each other. We also comment on the size of the resulting
semidefinite programs.
Our result below can be thought of as an algebraic analogue of
Theorem~\ref{thm:classical.first.2nd.order.charac.}.
\begin{theorem} \label{thm:sos.convexity.3.equivalent.defs}
Let $p\mathrel{\mathop:}=p(x)$ be a polynomial of degree $d$ in
$n$ variables with its gradient and Hessian denoted respectively
by $\nabla p\mathrel{\mathop:}=\nabla p(x) $ and
$H\mathrel{\mathop:}=H(x)$. Let $g_{\lambda}$, $g_\nabla$, and
$g_{\nabla^2}$ be defined as
\begin{equation} \label{eq:defn.g_lambda.g_grad.g_grad2}
\begin{array}{lll}
g_{\lambda}(x,y)&=&(1-\lambda)p(x)+\lambda p(y)-p((1-\lambda)
x+\lambda y),\\
g_\nabla(x,y)&=&p(y)-p(x)-\nabla p(x)^T(y-x), \\
g_{\nabla^2}(x,y)&=&y^{T}H(x)y.
\end{array}
\end{equation}
Then the following are equivalent:
\textbf{(a)} \ $g_{\frac{1}{2}}(x,y)$ is sos\footnote{The
constant $\frac{1}{2}$ in $g_{\frac{1}{2}}(x,y)$ of condition
\textbf{(a)} is arbitrary and chosen for convenience. One can show
that $g_{\frac{1}{2}}$ being sos implies that $g_{\lambda}$ is sos
for any fixed $\lambda\in[0,1]$. Conversely, if $g_{\lambda}$ is
sos for some $\lambda\in(0,1)$, then $g_{\frac{1}{2}}$ is sos. The
proofs are similar to the proof of \textbf{(a)$\Rightarrow$(b)}.
}.
\textbf{(b)} \ $g_\nabla(x,y)$ is sos.
\textbf{(c)} \ $g_{\nabla^2}(x,y)$ is sos; (i.e., $H(x)$ is an
sos-matrix).
\end{theorem}
\begin{proof}
\textbf{(a)$\Rightarrow$(b)}: Assume $g_{\frac{1}{2}}$ is sos. We
start by proving that $g_{\frac{1}{2^k}}$ will also be sos for any
integer $k\geq2$. A little bit of straightforward algebra yields
the relation
\begin{equation}\label{eq:g_diatic_relation}
g_{\frac{1}{2^{k+1}}}(x,y)=\textstyle{\frac{1}{2}}g_{\frac{1}{2^{k}}}(x,y)+g_{\frac{1}{2}}\left(x,\textstyle{\frac{2^{k}-1}{2^{k}}}x+\textstyle{\frac{1}{2^{k}}}y\right).
\end{equation}
The second term on the right hand side of
(\ref{eq:g_diatic_relation}) is always sos because
$g_{\frac{1}{2}}$ is sos. Hence, this relation shows that for any
$k$, if $g_{\frac{1}{2^k}}$ is sos, then so is
$g_{\frac{1}{2^{k+1}}}$. Since for $k=1$, both terms on the right
hand side of (\ref{eq:g_diatic_relation}) are sos by assumption,
induction immediately gives that $g_{\frac{1}{2^k}}$ is sos for
all $k$.
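Relation (\ref{eq:g_diatic_relation}) is a polynomial identity valid for any $p$; the following sketch spot-checks it numerically for a few values of $k$, with an arbitrary test polynomial of our choosing:

```python
def p(t):
    return t**3 - 2*t + 1  # an arbitrary test polynomial

def g(lam, x, y):
    # g_lambda(x, y) = (1 - lambda) p(x) + lambda p(y) - p((1 - lambda) x + lambda y)
    return (1 - lam)*p(x) + lam*p(y) - p((1 - lam)*x + lam*y)

for k in [1, 2, 3]:
    for (x, y) in [(0.0, 1.0), (-2.0, 3.0), (1.5, -0.5)]:
        lhs = g(1.0 / 2**(k + 1), x, y)
        rhs = 0.5*g(1.0 / 2**k, x, y) + g(0.5, x, (2**k - 1)/2**k * x + y/2**k)
        assert abs(lhs - rhs) < 1e-9
```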
Now, let us rewrite $g_{\lambda}$ as
\begin{equation}\nonumber
g_{\lambda}(x,y)=p(x)+\lambda(p(y)-p(x))-p(x+\lambda(y-x)).
\end{equation}
We have
\begin{equation}\label{eq:g.lambda.rewritten}
\frac{g_{\lambda}(x,y)}{\lambda}=p(y)-p(x)-\frac{p(x+\lambda(y-x))-p(x)}{\lambda}.
\end{equation}
Next, we take the limit of both sides of
(\ref{eq:g.lambda.rewritten}) by letting
$\lambda=\frac{1}{2^k}\rightarrow0$ as $k\rightarrow\infty$.
Because $p$ is differentiable, the right hand side of
(\ref{eq:g.lambda.rewritten}) will converge to $g_\nabla$. On the
other hand, our preceding argument implies that
$\frac{g_{\lambda}}{\lambda}$ is an sos polynomial (of degree $d$
in $2n$ variables) for any $\lambda=\frac{1}{2^k}$. Moreover, as
$\lambda$ goes to zero, the coefficients of
$\frac{g_{\lambda}}{\lambda}$ remain bounded since the limit of
this sequence is $g_\nabla$, which must have bounded coefficients
(see (\ref{eq:defn.g_lambda.g_grad.g_grad2})). By closedness of
the sos cone, we conclude that the limit $g_\nabla$ must be sos.
\textbf{(b)$\Rightarrow$(a)}: Assume $g_\nabla$ is sos. It is easy
to check that
\begin{equation} \nonumber
g_{\frac{1}{2}}(x,y)=\textstyle{\frac{1}{2}}g_\nabla\left(\textstyle{\frac{1}{2}}x+\textstyle{\frac{1}{2}}y,x\right)+\textstyle{\frac{1}{2}}g_\nabla\left(\textstyle{\frac{1}{2}}x+\textstyle{\frac{1}{2}}y,y\right),
\end{equation}
and hence $g_{\frac{1}{2}}$ is sos.
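This identity is again independent of the particular polynomial; the sketch below verifies it numerically for an arbitrary univariate example:

```python
def p(t):  return t**4 + t**3 - 2*t  # an arbitrary test polynomial
def dp(t): return 4*t**3 + 3*t**2 - 2  # its derivative

def g_half(x, y):
    return 0.5*p(x) + 0.5*p(y) - p(0.5*x + 0.5*y)

def g_grad(x, y):
    # g_nabla(x, y) = p(y) - p(x) - <grad p(x), y - x>
    return p(y) - p(x) - dp(x)*(y - x)

for (x, y) in [(0.0, 1.0), (-2.0, 3.0), (1.5, -0.5)]:
    m = 0.5*x + 0.5*y  # the midpoint
    assert abs(g_half(x, y) - (0.5*g_grad(m, x) + 0.5*g_grad(m, y))) < 1e-9
```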
\textbf{(b)$\Rightarrow$(c)}: Let us write the second order Taylor
approximation of $p$ around $x$:
\begin{equation}\nonumber
\begin{array}{ll}
p(y)=&p(x)+\nabla^{T}p(x)(y-x) \\
\ &+\frac{1}{2}(y-x)^{T}H(x)(y-x)+o(||y-x||^2).
\end{array}
\end{equation}
After rearranging terms, letting $y=x+\epsilon z$ (for
$\epsilon>0$), and dividing both sides by $\epsilon^2$ we get:
\begin{equation}\label{eq:Taylor.second.order.rearranged}
(p(x+\epsilon
z)-p(x))/\epsilon^2-\nabla^{T}p(x)z/\epsilon=\frac{1}{2}z^{T}H(x)z+1/\epsilon^2o(\epsilon^2||z||^2).
\end{equation}
The left hand side of (\ref{eq:Taylor.second.order.rearranged}) is
$g_\nabla(x,x+\epsilon z)/\epsilon^2$ and therefore for any fixed
$\epsilon>0$, it is an sos polynomial by assumption. As we take
$\epsilon\rightarrow0$, by closedness of the sos cone, the left
hand side of (\ref{eq:Taylor.second.order.rearranged}) converges
to an sos polynomial. On the other hand, as the limit is taken,
the term $\frac{1}{\epsilon^2}o(\epsilon^2||z||^2)$ vanishes and
hence we have that $z^TH(x)z$ must be sos.
\textbf{(c)$\Rightarrow$(b)}: Following the strategy of the proof
of the classical case in~\cite[p. 165]{Tits_lec.notes}, we start
by writing the Taylor expansion of $p$ around $x$ with the
integral form of the remainder:
\begin{equation}\label{eq:Taylor.expan.Cauchy.rem}
p(y)=p(x)+\nabla^{T}p(x)(y-x)+\int_0^1(1-t)(y-x)^{T}H(x+t(y-x))(y-x)dt.
\end{equation}
Since $y^{T}H(x)y$ is sos by assumption, for any $t\in[0,1]$ the
integrand
$$(1-t)(y-x)^{T}H(x+t(y-x))(y-x)$$ is an sos polynomial of degree $d$ in $x$ and
$y$. From (\ref{eq:Taylor.expan.Cauchy.rem}) we have
$$g_\nabla=\int_0^1(1-t)(y-x)^{T}H(x+t(y-x))(y-x)dt.$$
It then follows that $g_\nabla$ is sos because integrals of sos
polynomials, if they exist, are sos. \aaa{To see the latter fact,
note that we can write the integral as a limit of a sequence of
Riemann sums by discretizing the interval $[0,1]$ over which we
are integrating. Since every finite Riemann sum is an sos
polynomial of degree $d$, and since the sos cone is closed, it
follows that the limit of the sequence must be sos.}
\end{proof}
We conclude that conditions \textbf{(a)}, \textbf{(b)}, and
\textbf{(c)} are equivalent sufficient conditions for convexity of
polynomials, and can each be checked with a semidefinite program
as explained in
Subsection~\ref{subsec:sos.sdp.and.matrix.generalize}. It is easy
to see that all three polynomials $g_{\frac{1}{2}}(x,y)$,
$g_\nabla(x,y)$, and $g_{\nabla^2}(x,y)$ are polynomials in $2n$
variables and of degree $d$. (Note that each differentiation
reduces the degree by one.) Each of these polynomials has a
specific structure that can be exploited for formulating smaller
SDPs. For example, the symmetries
$g_{\frac{1}{2}}(x,y)=g_{\frac{1}{2}}(y,x)$ and
$g_{\nabla^2}(x,-y)=g_{\nabla^2}(x,y)$ can be taken advantage of
via symmetry reduction techniques developed
in~\cite{Symmetry_groups_Gatermann_Pablo}.
The issue of symmetry reduction aside, we would like to point out
that formulation \textbf{(c)} (which was the original definition
of sos-convexity) can be significantly more efficient than the
other two conditions. The reason is that the polynomial
$g_{\nabla^2}(x,y)$ is always quadratic and homogeneous in $y$ and
of degree $d-2$ in $x$. This makes $g_{\nabla^2}(x,y)$ much more
sparse than $g_{\frac{1}{2}}(x,y)$ and $g_\nabla(x,y)$, which have
degree $d$ both in $x$ and in $y$. Furthermore, because of the
special bipartite structure of $y^TH(x)y$, only monomials of the
form \aaa{$x^\alpha y_i$ (i.e., linear in $y$)} will appear in the
vector of monomials (\ref{eq:monomials}). This in turn reduces the
size of the Gram matrix, and hence the size of the SDP. It is
perhaps not too surprising that the characterization of convexity
based on the Hessian matrix is a more efficient condition to
check. \aaa{After all, at a given point $x$, the property of
having nonnegative curvature in every direction is a local
condition, whereas characterizations \textbf{(a)} and \textbf{(b)}
both involve global conditions.}
\begin{remark}
There has been yet another proposal for an sos relaxation for
convexity of polynomials in~\cite{Chesi_Hung_journal}. However, we
have shown in~\cite{AAA_PP_CDC10_algeb_convex} that the condition
in~\cite{Chesi_Hung_journal} is at least as conservative as the
three conditions in
Theorem~\ref{thm:sos.convexity.3.equivalent.defs} and also
significantly more expensive to check.
\end{remark}
\begin{remark}\label{rmk:sos-convexity.restriction}
Just like convexity, the property of sos-convexity is preserved
under restrictions to affine subspaces. This is perhaps most
directly seen through characterization \textbf{(a)} of
sos-convexity in
Theorem~\ref{thm:sos.convexity.3.equivalent.defs}, by also noting
that sum of squares is preserved under restrictions. Unlike
convexity however, if a polynomial is sos-convex on every line (or
even on every proper affine subspace), this does not imply that
the polynomial is sos-convex.
\end{remark}
As an application of
Theorem~\ref{thm:sos.convexity.3.equivalent.defs}, we use our new
characterization of sos-convexity to give a short proof of an
interesting lemma of Helton and Nie.
\begin{lemma}\label{lem:helton.nie.sos-convex.then.sos}\emph{(Helton and Nie~\cite[Lemma 8]{Helton_Nie_SDP_repres_2})}.
Every sos-convex form is sos.
\end{lemma}
\begin{proof}
Let $p$ be an sos-convex form of degree $d$. We know from
Theorem~\ref{thm:sos.convexity.3.equivalent.defs} that
sos-convexity of $p$ is equivalent to the polynomial
$g_{\frac{1}{2}}(x,y)=\textstyle{\frac{1}{2}
p(x)}+\textstyle{\frac{1}{2}}p(y)-p\left(\textstyle{\frac{1}{2}}
x+\textstyle{\frac{1}{2}}y\right)$
being sos. But since sos is preserved under restrictions and
$p(0)=0$, this implies that
$$g_{\frac{1}{2}}(x,0)=\textstyle{\frac{1}{2}
p(x)}-p(\frac{1}{2}x)=\left(\frac{1}{2}-(\frac{1}{2})^d
\right)p(x)$$ is sos.
\end{proof}
Note that the same argument also shows that convex forms are psd.
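The restriction identity used in the proof is easy to verify numerically; the sketch below does so for a sample quartic form of our choosing:

```python
# For a form p of degree d, p(x/2) = (1/2)^d p(x), so
# g_half(x, 0) = (1/2) p(x) + (1/2) p(0) - p(x/2) = (1/2 - (1/2)^d) p(x).
d = 4
def p(x1, x2):
    return x1**4 + x1**2*x2**2 + x2**4  # a sample form of degree d = 4

def g_half_at_zero(x1, x2):
    return 0.5*p(x1, x2) + 0.5*p(0, 0) - p(0.5*x1, 0.5*x2)

for (x1, x2) in [(1.0, 0.0), (2.0, -1.0), (-0.5, 3.0)]:
    assert abs(g_half_at_zero(x1, x2) - (0.5 - 0.5**d)*p(x1, x2)) < 1e-12
```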
\section{Some constructions of convex but not sos-convex
polynomials}\label{sec:first.examples} It is natural to ask
whether sos-convexity is not only a sufficient condition for
convexity of polynomials but also a necessary one. In other words,
could it be the case that if the Hessian of a polynomial is
positive semidefinite, then it must factor? To give a negative
answer to this question, one has to prove existence of a convex
polynomial that is not sos-convex, i.e., a polynomial $p$ for which
one (and hence all) of the three polynomials $g_{\frac{1}{2}},
g_\nabla,$ and $g_{\nabla^2}$ in
(\ref{eq:defn.g_lambda.g_grad.g_grad2}) are psd but not sos. Note
that existence of psd but not sos polynomials does not imply
existence of convex but not sos-convex polynomials on its own. The
reason is that the polynomials $g_{\frac{1}{2}}, g_\nabla,$ and
$g_{\nabla^2}$ all possess a very special
structure.\footnote{There are many situations where requiring a
specific structure on polynomials makes psd equivalent to sos. As
an example, we know that there are forms in
$P_{4,4}\setminus\Sigma_{4,4}$. However, if we require the forms
to have only even monomials, then all such nonnegative forms in 4
variables and degree 4 are sums of
squares~\cite{Even_quartics_4vars_sos}.} For example, $y^TH(x)y$
has the structure of being quadratic in $y$ and a Hessian in $x$.
(Not every polynomial matrix is a valid Hessian.) The Motzkin or
the Robinson polynomials in (\ref{eq:Motzkin.form}) and
(\ref{eq:Robinston.form}) for example are clearly not of this
structure.
In an earlier paper, we presented the first example of a convex
but not sos-convex
polynomial~\cite{AAA_PP_not_sos_convex_journal},\cite{AAA_PP_CDC09_HessianNotFactor}\footnote{Assuming
P$\neq$NP, and given the NP-hardness of deciding polynomial
convexity~\cite{NPhard_Convexity_arxiv}, one would expect to see
convex polynomials that are not sos-convex. However, our first
example in~\cite{AAA_PP_not_sos_convex_journal} appeared before
the proof of NP-hardness~\cite{NPhard_Convexity_arxiv}. Moreover,
from complexity considerations, even assuming P$\neq$NP, one
cannot conclude existence of convex but not sos-convex polynomials
for any finite value of the number of variables $n$.}:
\begin{equation}\label{eq:first.convex.not.sos.convex}
\begin{array}{rlll}
p(x_1,x_2,x_3)&=&32x_1^8+118x_1^6x_2^2+40x_1^6x_3^2+25x_1^4x_2^4-43x_1^4x_2^2x_3^2-35x_1^4x_3^4+3x_1^2x_2^4x_3^2
\\
\\ \quad&\
&-16x_1^2x_2^2x_3^4+24x_1^2x_3^6+16x_2^8+44x_2^6x_3^2+70x_2^4x_3^4+60x_2^2x_3^6+30x_3^8.
\end{array}
\end{equation}
As we will see later in this paper, this form, which lives in
$C_{3,8}\setminus\Sigma C_{3,8}$, turns out to be an example in the
smallest possible number of variables but not in the smallest
degree. We next present another example of a convex but not
sos-convex form that has not previously appeared in print. The example
is in $C_{6,4}\setminus\Sigma C_{6,4}$ and by contrast to the
previous example, it will turn out to be minimal in the degree but
not in the number of variables. What is nice about this example is
that unlike the other examples in this paper it has not been
derived with the assistance of a computer and semidefinite
programming:
\begin{equation}\label{eq:clean.quartic.convex.not.sos.convex}
\begin{array}{rlll}
q(x_1,\ldots,x_6)&=&x_1^4+x_2^4+x_3^4+x_4^4+x_5^4+x_6^4 \\ \\
\quad&\
&+2(x_1^2x_2^2+x_1^2x_3^2+x_2^2x_3^2
+x_4^2x_5^2+x_4^2x_6^2+x_5^2x_6^2)\\ \\
\quad&\ & +\frac{1}{2}(x_1^2x_4^2+x_2^2x_5^2+x_3^2x_6^2)+
x_1^2x_6^2+x_2^2x_4^2+x_3^2x_5^2 \\ \\\quad&\ &
-(x_1x_2x_4x_5+x_1x_3x_4x_6+x_2x_3x_5x_6).
\end{array}
\end{equation}
The proof that this polynomial is convex but not sos-convex can be
extracted from~\cite[Thm. 2.3 and Thm.
2.5]{NPhard_Convexity_arxiv}. In there, a general procedure is
described for producing convex but not sos-convex quartic forms
from \emph{any} example of a psd but not sos biquadratic form. The
biquadratic form that has led to the form above is that of Choi
in~\cite{Choi_Biquadratic}.
Also note that the example in
(\ref{eq:clean.quartic.convex.not.sos.convex}) shows that convex
forms that possess strong symmetry properties can still fail to be
sos-convex. The symmetries in this form are inherited from the
rich symmetry structure of the biquadratic form of Choi
(see~\cite{Symmetry_groups_Gatermann_Pablo}). In general,
symmetries are of interest in the study of positive semidefinite
and sums of squares polynomials because the gap between psd and
sos can often behave very differently depending on the symmetry
properties; see e.g.~\cite{Symmetric_quartics_sos}.
\section{Characterization of the gap between convexity and
sos-convexity}\label{sec:full.characterization}
Now that we know there exist convex polynomials that are not
sos-convex, our final and main goal is to give a complete
characterization of the degrees and dimensions in which such
polynomials can exist. This is achieved in the next theorem.
\begin{theorem}\label{thm:full.charac.polys}
$\tilde{\Sigma C}_{n,d}=\tilde{C}_{n,d}$ if and only if $n=1$ or
$d=2$ or $(n,d)=(2,4)$.
\end{theorem}
We would also like to have such a characterization for homogeneous
polynomials. Although convexity is a property that is in some
sense more meaningful for nonhomogeneous polynomials than for
forms, one motivation for studying convexity of forms is in their
relation to norms~\cite{Blenders_Reznick}. Also, in view of the
fact that we have a characterization of the gap between
nonnegativity and sums of squares both for polynomials and for
forms, it is very natural to seek the same result for convexity
and sos-convexity. The next theorem presents this characterization
for forms.
\begin{theorem}\label{thm:full.charac.forms}
$\Sigma C_{n,d}=C_{n,d}$ if and only if $n=2$ or $d=2$ or
$(n,d)=(3,4)$.
\end{theorem}
The result $\Sigma C_{3,4}=C_{3,4}$ of this theorem is joint work
with G. Blekherman and is to be presented in full detail
in~\cite{AAA_GB_PP_Convex_ternary_quartics}. The remainder of this
paper is solely devoted to the proof of
Theorem~\ref{thm:full.charac.polys} and the proof of
Theorem~\ref{thm:full.charac.forms} except for the case
$(n,d)=(3,4)$. Before we present these proofs, we shall make two
important remarks.
\begin{remark}\label{rmk:difficulty.homogz.dehomogz} {\bf Difficulty with homogenization and dehomogenization.}
Recall from Subsection~\ref{subsec:nonnegativity.sos.basics} and
Theorem~\ref{thm:Hilbert} that characterizing the gap between
nonnegativity and sum of squares for polynomials is equivalent to
accomplishing this task for forms. Unfortunately, the situation is
more complicated for convexity and sos-convexity and that is the
reason why we are presenting Theorems~\ref{thm:full.charac.polys}
and~\ref{thm:full.charac.forms} as separate theorems. The
difficulty arises from the fact that unlike nonnegativity and sum
of squares, convexity and sos-convexity are not always preserved
under homogenization. (Or equivalently, the properties of being
not convex and not sos-convex are not preserved under
dehomogenization.) In fact, any convex polynomial that is not psd
will no longer be convex after homogenization. This is because
convex forms are psd but the homogenization of a non-psd
polynomial is a non-psd form. Even if a convex polynomial is psd,
its homogenization may not be convex. For example the univariate
polynomial $10x_1^4-5x_1+2$ is convex and psd, but its
homogenization $10x_1^4-5x_1x_2^3+2x_2^4$ is not
convex.\footnote{What is true however is that a nonnegative form
of degree $d$ is convex if and only if the $d$-th root of its
dehomogenization is a convex function~\cite[Prop.
4.4]{Blenders_Reznick}.} To observe the same phenomenon for
sos-convexity, consider the trivariate form $p$ in
(\ref{eq:first.convex.not.sos.convex}) which is convex but not
sos-convex and define $\tilde{p}(x_2,x_3)=p(1,x_2,x_3)$. Then, one
can check that $\tilde{p}$ is sos-convex (i.e., its $2\times 2$
Hessian factors) even though its homogenization which is $p$ is
not sos-convex~\cite{AAA_PP_not_sos_convex_journal}.
\end{remark}
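The univariate counterexample in the remark above can be checked numerically; the sketch below verifies convexity and positivity of $10x_1^4-5x_1+2$ and exhibits a direction of negative curvature for its homogenization:

```python
# p(x) = 10x^4 - 5x + 2 is convex (p'' = 120x^2 >= 0) and psd: its unique
# critical point is at x = 0.5 (where p' = 40x^3 - 5 vanishes) with
# p(0.5) = 0.125 > 0.  Its homogenization q(x1, x2) = 10x1^4 - 5x1 x2^3 + 2x2^4,
# however, is not convex: the Hessian entry q_{x2 x2} = -30 x1 x2 + 24 x2^2
# is -6 < 0 at (x1, x2) = (1, 1).
def p(x):
    return 10*x**4 - 5*x + 2

assert p(0.5) == 0.125                          # global minimum value; p is psd
assert all(120*x**2 >= 0 for x in [-1, 0, 2])   # p'' >= 0; p is convex

def hess_q(x1, x2):
    return [[120*x1**2,           -15*x2**2],
            [-15*x2**2, -30*x1*x2 + 24*x2**2]]

H = hess_q(1.0, 1.0)
y = (0.0, 1.0)
# y^T H y < 0, so q fails the second-order convexity condition.
assert y[0]**2*H[0][0] + 2*y[0]*y[1]*H[0][1] + y[1]**2*H[1][1] < 0
</antml>```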
\begin{remark}\label{rmk:resemb.to.Hilbert} {\bf Resemblance to the result of Hilbert.} The reader
may have noticed from the statements of Theorem~\ref{thm:Hilbert}
and Theorems~\ref{thm:full.charac.polys}
and~\ref{thm:full.charac.forms} that the cases where convex
polynomials (forms) are sos-convex are exactly the same cases
where nonnegative polynomials are sums of squares! We shall
emphasize that as far as we can tell, our results do not follow
(except in the simplest cases) from Hilbert's result stated in
Theorem~\ref{thm:Hilbert}. Note that the question of convexity or
sos-convexity of a polynomial $p(x)$ in $n$ variables and degree
$d$ is about the polynomials $g_{\frac{1}{2}}(x,y),
g_\nabla(x,y),$ or $g_{\nabla^2}(x,y)$ defined in
(\ref{eq:defn.g_lambda.g_grad.g_grad2}) being psd or sos. Even
though these polynomials still have degree $d$, it is important to
keep in mind that they are polynomials \emph{in $2n$ variables}.
Therefore, there is no direct correspondence with the
characterization of Hilbert. To make this more explicit, let us
consider for example one particular claim of
Theorem~\ref{thm:full.charac.forms}: $\Sigma C_{2,4}=C_{2,4}$. For
a form $p$ in 2 variables and degree 4, the polynomials
$g_{\frac{1}{2}}, g_\nabla,$ and $g_{\nabla^2}$ will be forms in 4
variables and degree 4. We know from Hilbert's result that in this
situation psd but not sos forms do in fact exist. However, for the
forms in 4 variables and degree 4 that have the special structure
of $g_{\frac{1}{2}}, g_\nabla,$ or $g_{\nabla^2}$, psd turns out
to be equivalent to sos.
\end{remark}
The proofs of Theorems~\ref{thm:full.charac.polys}
and~\ref{thm:full.charac.forms} are broken into the next two
subsections. In Subsection~\ref{subsec:proof.equal.cases}, we
provide the proofs for the cases where convexity and sos-convexity
are equivalent. Then in
Subsection~\ref{subsec:proof.non.equal.cases}, we prove that in
all other cases there exist convex polynomials that are not
sos-convex.
\subsection{Proofs of Theorems~\ref{thm:full.charac.polys} and~\ref{thm:full.charac.forms}: cases where $\tilde{\Sigma C}_{n,d}=\tilde{C}_{n,d}, \Sigma
C_{n,d}=C_{n,d}$}\label{subsec:proof.equal.cases}
When proving equivalence of convexity and sos-convexity, it turns
out to be more convenient to work with the second order
characterization of sos-convexity, i.e., with the form
$g_{\nabla^2}(x,y)=y^TH(x)y$ in
(\ref{eq:defn.g_lambda.g_grad.g_grad2}). The reason for this is
that this form is always quadratic in $y$, and this allows us to
make use of the following key theorem, henceforth referred to as
the ``biform theorem''.
\begin{theorem}[e.g.~\cite{CLRrealzeros}]\label{thm:biform.thm}
Let $f\mathrel{\mathop:}=f(u_1,u_2,v_1,\ldots,v_m)$ be a form in
the variables $u\mathrel{\mathop:}=(u_1,u_2)^T$ and
$v\mathrel{\mathop:}=(v_1,\ldots,v_m)^T$ that is a quadratic form
in $v$ for fixed $u$ and a form (of \aaa{any} degree) in $u$ for
fixed $v$. Then $f$ is psd if and only if it is sos.\footnote{Note
that the results $\Sigma_{2,d}=P_{2,d}$ and $\Sigma_{n,2}=P_{n,2}$
are both special cases of this theorem.}
\end{theorem}
The biform theorem has been proven independently by several
authors. See~\cite{CLRrealzeros} and~\cite{SOS_KYP} for more
background on this theorem and in particular~\cite[Sec.
7]{CLRrealzeros} for an elegant proof and some refinements. We now
proceed with our proofs which will follow in a rather
straightforward manner from the biform theorem.
\begin{theorem}
$\tilde{\Sigma C}_{1,d}=\tilde{C}_{1,d}$ for all $d$. $\Sigma
C_{2,d}=C_{2,d}$ for all $d$.
\end{theorem}
\begin{proof}
For a univariate polynomial, convexity means that the second
derivative, which is another univariate polynomial, is psd. Since
$\tilde{\Sigma}_{1,d}=\tilde{P}_{1,d}$, the second derivative must
be sos. Therefore, $\tilde{\Sigma C}_{1,d}=\tilde{C}_{1,d}$. To
prove $\Sigma C_{2,d}=C_{2,d}$, suppose we have a convex bivariate
form $p$ of degree $d$ in variables
$x\mathrel{\mathop:}=(x_1,x_2)^T$. The Hessian
$H\mathrel{\mathop:}=H(x)$ of $p$ is a $2\times 2$ matrix whose
entries are forms of degree $d-2$. If we let
$y\mathrel{\mathop:}=(y_1,y_2)^T$, convexity of $p$ implies that
the form $y^TH(x)y$ is psd. Since $y^TH(x)y$ meets the
requirements of the biform theorem above with
$(u_1,u_2)=(x_1,x_2)$ and $(v_1,v_2)=(y_1,y_2)$, it follows that
$y^TH(x)y$ is sos. Hence, $p$ is sos-convex.
\end{proof}
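To illustrate the mechanism of this proof, the following sympy sketch (our own toy example, not taken from the paper) verifies for a concrete convex bivariate quartic form that $y^TH(x)y$ is quadratic in $y$ and a form of degree $d-2$ in $x$, i.e., a biform, and spot-checks its nonnegativity at random points.

```python
import random
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# A concrete convex bivariate quartic form (hypothetical toy example).
p = (x1**2 + x2**2)**2

H = sp.hessian(p, (x1, x2))
y = sp.Matrix([y1, y2])
biform = sp.expand((y.T * H * y)[0])

# y^T H(x) y is quadratic in y and a form of degree d - 2 = 2 in x,
# so it meets the hypotheses of the biform theorem.
deg_in_y = sp.Poly(biform, y1, y2).total_degree()
deg_in_x = sp.Poly(biform, x1, x2).total_degree()

# Spot-check positive semidefiniteness at random points (p is convex).
random.seed(0)
min_val = min(
    biform.subs({x1: random.uniform(-1, 1), x2: random.uniform(-1, 1),
                 y1: random.uniform(-1, 1), y2: random.uniform(-1, 1)})
    for _ in range(200)
)
```

A random spot-check is of course only a sanity check; the proof above certifies nonnegativity everywhere via the biform theorem.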
\begin{theorem}
$\tilde{\Sigma C}_{n,2}=\tilde{C}_{n,2}$ for all $n$. $\Sigma
C_{n,2}=C_{n,2}$ for all $n$.
\end{theorem}
\begin{proof}
Let $x\mathrel{\mathop:}=(x_1,\ldots,x_n)^T$ and
$y\mathrel{\mathop:}=(y_1,\ldots,y_n)^T$. Let
$p(x)=\frac{1}{2}x^TQx+b^Tx+c$ be a quadratic polynomial. The
Hessian of $p$ in this case is the constant symmetric matrix $Q$.
Convexity of $p$ implies that $y^TQy$ is psd. But since
$\Sigma_{n,2}=P_{n,2}$, $y^TQy$ must be sos. Hence, $p$ is
sos-convex. The proof of $\Sigma C_{n,2}=C_{n,2}$ is identical.
\end{proof}
\begin{theorem}
$\tilde{\Sigma C}_{2,4}=\tilde{C}_{2,4}$.
\end{theorem}
\begin{proof}
Let $p(x)\mathrel{\mathop:}=p(x_1,x_2)$ be a convex bivariate
quartic polynomial. Let $H\mathrel{\mathop:}=H(x)$ denote the
Hessian of $p$ and let $y\mathrel{\mathop:}=(y_1,y_2)^T$. Note
that $H(x)$ is a $2\times 2$ matrix whose entries are (not
necessarily homogeneous) quadratic polynomials. Since $p$ is
convex, $y^TH(x)y$ is psd. Let $\bar{H}(x_1,x_2,x_3)$ be a
$2\times 2$ matrix whose entries are obtained by homogenizing the
entries of $H$. It is easy to see that $y^T\bar{H}(x_1,x_2,x_3)y$
is then the form obtained by homogenizing $y^TH(x)y$ and is
therefore psd. Now we can employ the biform theorem
(Theorem~\ref{thm:biform.thm}) with $(u_1,u_2)=(y_1,y_2)$ and
$(v_1,v_2,v_3)=(x_1,x_2,x_3)$ to conclude that
$y^T\bar{H}(x_1,x_2,x_3)y$ is sos. But upon dehomogenizing by
setting $x_3=1$, we conclude that $y^TH(x)y$ is sos. Hence, $p$ is
sos-convex.
\end{proof}
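The homogenization step in this proof is easy to check symbolically. The sketch below (with a hypothetical convex bivariate quartic of our own choosing; assumes sympy) homogenizes each quadratic entry of the Hessian with $x_3$ up to degree $d-2=2$ and confirms that setting $x_3=1$ recovers the original entries.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# A hypothetical convex (nonhomogeneous) bivariate quartic.
p = x1**4 + x2**4 + x1**2 + x2**2 + x1*x2

H = sp.hessian(p, (x1, x2))

def homogenize(entry, deg, s):
    """Pad every monomial of `entry` with powers of s up to total degree `deg`."""
    poly = sp.Poly(entry, x1, x2)
    return sp.expand(sum(c * x1**a * x2**b * s**(deg - a - b)
                         for (a, b), c in poly.terms()))

# Build Hbar(x1, x2, x3) by homogenizing each entry of H to degree 2.
Hbar = H.applyfunc(lambda e: homogenize(e, 2, x3))

all_homogeneous = all(sp.Poly(e, x1, x2, x3).is_homogeneous for e in Hbar)
# Dehomogenizing by setting x3 = 1 recovers the original Hessian.
recovers = sp.simplify(Hbar.subs(x3, 1) - H) == sp.zeros(2, 2)
```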
\begin{theorem}[Ahmadi, Blekherman, Parrilo~\cite{AAA_GB_PP_Convex_ternary_quartics}]
$\Sigma C_{3,4}=C_{3,4}$.
\end{theorem}
Unlike Hilbert's results $\tilde{\Sigma}_{2,4}=\tilde{P}_{2,4}$
and $\Sigma_{3,4}=P_{3,4}$ which are equivalent statements and
essentially have identical proofs, the proof of $\Sigma
C_{3,4}=C_{3,4}$ is considerably more involved than the proof of
$\tilde{\Sigma C}_{2,4}=\tilde{C}_{2,4}$. Here, we briefly point
out why this is the case and refer the reader
to~\cite{AAA_GB_PP_Convex_ternary_quartics} for more details.
If $p(x)\mathrel{\mathop:}=p(x_1,x_2,x_3)$ is a ternary quartic
form, its Hessian $H(x)$ is a $3\times 3$ matrix whose entries are
quadratic forms. In this case, we can no longer apply the biform
theorem to the form $y^TH(x)y$. In fact, the matrix
\begin{equation}\nonumber
C(x)=\begin{bmatrix} x_1^2+2x_2^2&-x_1x_2&-x_1x_3 \\ \\
-x_1x_2&x_2^2+2x_3^2&-x_2x_3 \\ \\
-x_1x_3&-x_2x_3&x_3^2+2x_1^2
\end{bmatrix},
\end{equation}
due to Choi~\cite{Choi_Biquadratic} serves as an explicit example
of a $3\times 3$ matrix with quadratic form entries that is
positive semidefinite but not an sos-matrix; i.e., $y^TC(x)y$ is
psd but not sos. However, the matrix $C(x)$ above is \emph{not} a
valid Hessian; i.e., it cannot be the matrix of the second
derivatives of any polynomial. If it were, the third partial
derivatives would have to commute, whereas here we have in
particular
$$\frac{\partial C_{1,1}(x)}{\partial x_3}=0\neq-x_3=\frac{\partial C_{1,3}(x)}{\partial
x_1}.$$ It is rather remarkable that with the additional
requirement of being a valid Hessian, the form $y^TH(x)y$ turns
out to be psd if and only if it is
sos~\cite{AAA_GB_PP_Convex_ternary_quartics}.
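Both claims about Choi's matrix are easy to check by computer. The sketch below (assuming sympy and numpy) spot-checks positive semidefiniteness of $C(x)$ at random points and verifies symbolically that the mixed third partials fail to commute, so $C$ is not a valid Hessian.

```python
import random
import numpy as np
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Choi's matrix: psd with quadratic-form entries, but not an sos-matrix.
C = sp.Matrix([
    [x1**2 + 2*x2**2, -x1*x2,           -x1*x3],
    [-x1*x2,           x2**2 + 2*x3**2, -x2*x3],
    [-x1*x3,          -x2*x3,            x3**2 + 2*x1**2],
])

# Spot-check that C(x) is positive semidefinite at random points.
Cnum = sp.lambdify((x1, x2, x3), C, 'numpy')
random.seed(0)
min_eig = min(
    np.linalg.eigvalsh(np.array(Cnum(random.uniform(-1, 1),
                                     random.uniform(-1, 1),
                                     random.uniform(-1, 1)), dtype=float)).min()
    for _ in range(200)
)

# If C were the Hessian of some polynomial q, then
# dC_{1,1}/dx3 = q_{x1 x1 x3} would equal dC_{1,3}/dx1 = q_{x1 x3 x1}.
lhs = sp.diff(C[0, 0], x3)   # = 0
rhs = sp.diff(C[0, 2], x1)   # = -x3
```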
\subsection{Proofs of Theorems~\ref{thm:full.charac.polys} and~\ref{thm:full.charac.forms}: cases where $\tilde{\Sigma C}_{n,d}\subset\tilde{C}_{n,d}, \Sigma
C_{n,d}\subset C_{n,d}$}\label{subsec:proof.non.equal.cases}
The goal of this subsection is to establish that the cases
presented in the previous subsection are the \emph{only} cases
where convexity and sos-convexity are equivalent. We will first
give explicit examples of convex but not sos-convex
polynomials/forms that are ``minimal'' jointly in the degree and
dimension and then present an argument for all dimensions and
degrees higher than those of the minimal cases.
\subsubsection{Minimal convex but not sos-convex polynomials/forms}
The minimal examples of convex but not sos-convex
polynomials (resp. forms) turn out to belong to $\tilde{C}_{2,6}\setminus\tilde{\Sigma C}_{2,6}$ and $\tilde{C}_{3,4}\setminus\tilde{\Sigma C}_{3,4}$ (resp. $C_{3,6}\setminus\Sigma C_{3,6}$ and $C_{4,4}\setminus\Sigma C_{4,4}$).
Recall from Remark~\ref{rmk:difficulty.homogz.dehomogz} that we
lack a general argument for going from convex but not sos-convex
forms to polynomials or vice versa. Because of this, one would need to
present four different polynomials in the sets mentioned above
and prove that each polynomial is (i) convex and (ii) not
sos-convex. This is a total of eight arguments to make which is
quite cumbersome. However, as we will see in the proofs of Theorems~\ref{thm:minimal.2.6.and.3.6} and~\ref{thm:minimal.3.4.and.4.4} below, we have been able to find examples that
act ``nicely'' with respect to particular ways of dehomogenization. This will allow us to reduce the
total number of claims we have to prove from eight to four.
The polynomials that we are about to present next have been
found with the assistance of a computer and by employing some ``tricks'' with semidefinite
programming.\footnote{We do not elaborate here on how this was exactly done. The interested reader is referred to~\cite[Sec. 4]{AAA_PP_not_sos_convex_journal}, where a similar technique for formulating semidefinite programs that can search over a convex subset of the (non-convex) set of convex but not sos-convex polynomials is explained. The approach in~\cite{AAA_PP_not_sos_convex_journal} however does not lead to examples that are minimal.} In this process, we have made use of software
packages YALMIP~\cite{yalmip}, SOSTOOLS~\cite{sostools}, and the
SDP solver SeDuMi~\cite{sedumi}, which we acknowledge here. To
make the paper relatively self-contained and to emphasize the
fact that using \emph{rational sum of squares certificates} one
can make such computer assisted proofs fully formal, we present the proof of Theorem~\ref{thm:minimal.2.6.and.3.6} below in the appendix. On the other hand, the proof of Theorem~\ref{thm:minimal.3.4.and.4.4}, which is very similar in style to the proof of Theorem~\ref{thm:minimal.2.6.and.3.6}, is largely omitted to save space. All of the proofs are available in electronic
form and in their entirety at \texttt{http://aaa.lids.mit.edu/software} or at \aaan{\texttt{http://arxiv.org/abs/1111.4587}}.
\begin{theorem}\label{thm:minimal.2.6.and.3.6} $\tilde{\Sigma C}_{2,6}$ is a proper subset of
$\tilde{C}_{2,6}$. $\Sigma C_{3,6}$ is a proper subset of
$C_{3,6}$.
\end{theorem}
\begin{proof}
We claim that the form
\begin{equation}\label{eq:minim.form.3.6}
\begin{array}{lll}
f(x_1,x_2,x_3)&=&77x_1^6-155x_1^5x_2+445x_1^4x_2^2+76x_1^3x_2^3+556x_1^2x_2^4+68x_1x_2^5+240x_2^6-9x_1^5x_3 \\
\ &\ &\ \\
\ &\ &-1129x_1^3x_2^2x_3+62x_1^2x_2^3x_3+1206x_1x_2^4x_3-343x_2^5x_3+363x_1^4x_3^2+773x_1^3x_2x_3^2 \\
\ &\ &\ \\
\ &\ &+891x_1^2x_2^2x_3^2-869x_1x_2^3x_3^2+1043x_2^4x_3^2-14x_1^3x_3^3-1108x_1^2x_2x_3^3-216x_1x_2^2x_3^3\\
\ &\ &\ \\
\ &\ &-839x_2^3x_3^3+721x_1^2x_3^4+436x_1x_2x_3^4+378x_2^2x_3^4+48x_1x_3^5-97x_2x_3^5+89x_3^6
\end{array}
\end{equation}
belongs to $C_{3,6}\setminus\Sigma C_{3,6}$, and the
polynomial\footnote{The polynomial $f(x_1,x_2,1)$ turns out to be
sos-convex, and therefore does not do the job. One can of course
change coordinates, and then in the new coordinates perform the
dehomogenization by setting $x_3=1$.}
\begin{equation}\label{eq:minim.poly.2.6}
\tilde{f}(x_1,x_2)=f(x_1,x_2,1-\frac{1}{2}x_2)
\end{equation}
belongs to $\tilde{C}_{2,6}\setminus\tilde{\Sigma C}_{2,6}$. Note
that since convexity and sos-convexity are both preserved under
restrictions to affine subspaces (recall
Remark~\ref{rmk:sos-convexity.restriction}), it suffices to show
that the form $f$ in (\ref{eq:minim.form.3.6}) is convex and the
polynomial $\tilde{f}$ in (\ref{eq:minim.poly.2.6}) is not
sos-convex. Let $x\mathrel{\mathop:}=(x_1,x_2,x_3)^T$,
$y\mathrel{\mathop:}=(y_1,y_2,y_3)^T$,
$\tilde{x}\mathrel{\mathop:}=(x_1,x_2)^T$,
$\tilde{y}\mathrel{\mathop:}=(y_1,y_2)^T$, and denote the Hessian
of $f$ and $\tilde{f}$ respectively by $H_f$ and $H_{\tilde{f}}$.
In the appendix, we provide rational Gram matrices which prove
that the form
\begin{equation}\label{eq:y.H_f.y.xi^2.3.6.example}
(x_1^2+x_2^2)\cdot y^TH_f(x)y
\end{equation}
is sos. This, together with nonnegativity of $x_1^2+x_2^2$ and
continuity of $y^TH_f(x)y$, implies that $y^TH_f(x)y$ is psd.
Therefore, $f$ is convex. The proof that $\tilde{f}$ is not
sos-convex proceeds by showing that $H_{\tilde{f}}$ is not an
sos-matrix via a separation argument. In the appendix, we present
a separating hyperplane that leaves the appropriate sos cone on
one side and the polynomial
\begin{equation}\label{eq:y.H_f_tilda.y.xi^2.2.6.example}
\tilde{y}^TH_{\tilde{f}}(\tilde{x})\tilde{y}
\end{equation}
on the other.
\end{proof}
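The convexity claim for $f$ can at least be spot-checked numerically. The sketch below (assuming sympy and numpy; a sanity check only, not a substitute for the rational sos certificate in the appendix) transcribes $f$ from (\ref{eq:minim.form.3.6}), confirms it is a sextic form, and checks that its Hessian has no significantly negative eigenvalue at random sample points.

```python
import random
import numpy as np
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# The convex but not sos-convex form f in C_{3,6} \ Sigma C_{3,6}.
f = (77*x1**6 - 155*x1**5*x2 + 445*x1**4*x2**2 + 76*x1**3*x2**3
     + 556*x1**2*x2**4 + 68*x1*x2**5 + 240*x2**6 - 9*x1**5*x3
     - 1129*x1**3*x2**2*x3 + 62*x1**2*x2**3*x3 + 1206*x1*x2**4*x3
     - 343*x2**5*x3 + 363*x1**4*x3**2 + 773*x1**3*x2*x3**2
     + 891*x1**2*x2**2*x3**2 - 869*x1*x2**3*x3**2 + 1043*x2**4*x3**2
     - 14*x1**3*x3**3 - 1108*x1**2*x2*x3**3 - 216*x1*x2**2*x3**3
     - 839*x2**3*x3**3 + 721*x1**2*x3**4 + 436*x1*x2*x3**4
     + 378*x2**2*x3**4 + 48*x1*x3**5 - 97*x2*x3**5 + 89*x3**6)

poly_f = sp.Poly(f, x1, x2, x3)

# Evaluate the Hessian numerically and record the smallest eigenvalue
# seen over random points in the cube [-1, 1]^3.
Hf = sp.lambdify((x1, x2, x3), sp.hessian(f, (x1, x2, x3)), 'numpy')
random.seed(0)
min_eig = min(
    np.linalg.eigvalsh(np.array(Hf(random.uniform(-1, 1),
                                   random.uniform(-1, 1),
                                   random.uniform(-1, 1)), dtype=float)).min()
    for _ in range(200)
)
```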
\begin{theorem}\label{thm:minimal.3.4.and.4.4} $\tilde{\Sigma C}_{3,4}$ is a proper subset of
$\tilde{C}_{3,4}$. $\Sigma C_{4,4}$ is a proper subset of
$C_{4,4}$.
\end{theorem}
\begin{proof}
We claim that the form
\begin{equation}\label{eq:minim.form.4.4}
\begin{array}{lll}
h(x_1,\ldots,x_4)&=&1671x_1^4-4134x_1^3x_2-3332x_1^3x_3+5104x_1^2x_2^2+4989x_1^2x_2x_3+3490x_1^2x_3^2 \\
\ &\ &\ \\
\ &\ &-2203x_1x_2^3-3030x_1x_2^2x_3-3776x_1x_2x_3^2-1522x_1x_3^3+1227x_2^4-595x_2^3x_3 \\
\ &\ &\ \\
\ &\ &+1859x_2^2x_3^2+1146x_2x_3^3+979x_3^4+1195728x_4^4-1932x_1x_4^3-2296x_2x_4^3 \\
\ &\ &\ \\
\ &\ &-3144x_3x_4^3+1465x_1^2x_4^2-1376x_1^3x_4-263x_1x_2x_4^2+2790x_1^2x_2x_4+2121x_2^2x_4^2 \\
\ &\ &\ \\
\ &\ &-292x_1x_2^2x_4-1224x_2^3x_4+2404x_1x_3x_4^2+2727x_2x_3x_4^2-2852x_1x_3^2x_4 \\
\ &\ &\ \\
\ &\ &-388x_2x_3^2x_4-1520x_3^3x_4+2943x_1^2x_3x_4-5053x_1x_2x_3x_4
+2552x_2^2x_3x_4 \\
\ &\ &\ \\
\ &\ & +3512x_3^2x_4^2
\end{array}
\end{equation}
belongs to $C_{4,4}\setminus\Sigma C_{4,4}$, and the polynomial
\begin{equation}\label{eq:minim.poly.3.4}
\tilde{h}(x_1,x_2,x_3)=h(x_1,x_2,x_3,1)
\end{equation}
belongs to $\tilde{C}_{3,4}\setminus\tilde{\Sigma C}_{3,4}$. Once
again, it suffices to prove that $h$ is convex and $\tilde{h}$ is
not sos-convex. Let $x\mathrel{\mathop:}=(x_1,x_2,x_3,x_4)^T$,
$y\mathrel{\mathop:}=(y_1,y_2,y_3,y_4)^T$, and denote the Hessian
of $h$ and $\tilde{h}$ respectively by $H_h$ and $H_{\tilde{h}}$.
The proof that $h$ is convex is done by showing that the form
\begin{equation}\label{eq:y.H_h.y.xi^2.4.4.example}
(x_2^2+x_3^2+x_4^2)\cdot y^TH_h(x)y
\end{equation}
is sos.\footnote{The choice of multipliers in
(\ref{eq:y.H_f.y.xi^2.3.6.example}) and
(\ref{eq:y.H_h.y.xi^2.4.4.example}) is motivated by a result of
Reznick in~\cite{Reznick_Unif_denominator}.} The proof that
$\tilde{h}$ is not sos-convex is done again by means of a
separating hyperplane.
\end{proof}
\subsubsection{Convex but not sos-convex polynomials/forms in all higher degrees and
dimensions}\label{subsubsec:increasing.degree.and.vars} Given a
convex but not sos-convex polynomial (form) in $n$ variables, it
is very easy to argue that such a polynomial (form) must also
exist in a larger number of variables. If $p(x_1,\ldots,x_n)$ is a
form in $C_{n,d}\setminus\Sigma C_{n,d}$, then
$$\bar{p}(x_1,\ldots,x_{n+1})=p(x_1,\ldots,x_n)+x_{n+1}^d$$
belongs to $C_{n+1,d}\setminus\Sigma C_{n+1,d}$. Convexity of
$\bar{p}$ is obvious since it is a sum of convex functions. The
fact that $\bar{p}$ is not sos-convex can also easily be seen from
the block diagonal structure of the Hessian of $\bar{p}$: if the
Hessian of $\bar{p}$ were to factor, it would imply that the
Hessian of $p$ should also factor. The argument for going from
$\tilde{C}_{n,d}\setminus\tilde{\Sigma C}_{n,d}$ to
$\tilde{C}_{n+1,d}\setminus\tilde{\Sigma C}_{n+1,d}$ is identical.
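The block-diagonal structure used in this argument can be illustrated on a concrete instance (our own toy form $p$; assumes sympy):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# A toy form p in n = 2 variables of degree d = 4 (hypothetical choice).
p = x1**4 + x1**2*x2**2 + x2**4
pbar = p + x3**4   # pbar = p + x_{n+1}^d

H = sp.hessian(pbar, (x1, x2, x3))

# The Hessian of pbar is block diagonal: the new row/column carries only
# d(d-1) x_{n+1}^{d-2} on the diagonal, and the top-left block is the
# Hessian of p.
off_block = [H[0, 2], H[2, 0], H[1, 2], H[2, 1]]
corner = H[2, 2]
top_block_matches = H[:2, :2] == sp.hessian(p, (x1, x2))
```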
Unfortunately, an argument for increasing the degree of convex but
not sos-convex forms seems to be significantly more difficult to
obtain. In fact, we have been unable to come up with a natural
operation that would produce a \aaan{form} in
$C_{n,d+2}\setminus\Sigma C_{n,d+2}$ from a form in
$C_{n,d}\setminus\Sigma C_{n,d}$. We will instead take a different
route: we are going to present a general procedure for going from
a form in $P_{n,d}\setminus\Sigma_{n,d}$ to a form in
$C_{n,d+2}\setminus\Sigma C_{n,d+2}$. This will serve our purpose
of constructing convex but not sos-convex forms in higher degrees
and is perhaps also of independent interest in itself. For
instance, it can be used to construct convex but not sos-convex
forms that inherit structural properties (e.g. symmetry) of the
known examples of psd but not sos forms. The procedure is
constructive modulo the value of two positive constants ($\gamma$
and $\alpha$ below) whose existence will be shown
nonconstructively.\footnote{The procedure can be thought of as a
generalization of the approach in our earlier work
in~\cite{AAA_PP_not_sos_convex_journal}.}
Although the proof of the general case is no different, we present
this construction for the case $n=3$. The reason is that it
suffices for us to construct forms in $C_{3,d}\setminus\Sigma
C_{3,d}$ for $d$ even and $\geq 8$. These forms together with the
two forms in $C_{3,6}\setminus\Sigma C_{3,6}$ and
$C_{4,4}\setminus\Sigma C_{4,4}$ presented in
(\ref{eq:minim.form.3.6}) and (\ref{eq:minim.form.4.4}), and with
the simple procedure for increasing the number of variables cover
all the values of $n$ and $d$ for which convex but not sos-convex
forms exist.
For the remainder of this section, let
$x\mathrel{\mathop:}=(x_1,x_2,x_3)^T$ and
$y\mathrel{\mathop:}=(y_1,y_2,y_3)^T$.
\begin{theorem}\label{thm:conv_not_sos_conv_forms_n3d}
Let $m\mathrel{\mathop:}=m(x)$ be a ternary form of degree $d$
(with $d$ necessarily even and $\geq 6$) satisfying the following
three requirements:
\begin{description}
\item[R1:] $m$ is positive definite. \item[R2:] $m$ is not a sum
of squares. \item[R3:] The Hessian $H_m$ of $m$ is positive
definite at the point $(1,0,0)^T$.
\end{description}
Let $g\mathrel{\mathop:}=g(x_2,x_3)$ be any bivariate form of
degree $d+2$ whose Hessian is positive definite. \\ Then, there
exists a constant $\gamma>0$, such that the form $f$ of degree
$d+2$ given by
\begin{equation}\label{eq:construction.of.f(x)}
f(x)=\int_0^{x_1}\int_0^s m(t,x_2,x_3) dt ds + \gamma g(x_2,x_3)
\end{equation}
is convex but not sos-convex.
\end{theorem}
\aaa{The form $f$ in (\ref{eq:construction.of.f(x)}) is just a
specific polynomial that when differentiated twice with respect to
$x_1$ gives $m$. The reason for this construction will become
clear once we present the proof of this theorem. Before we do
that, let us comment on how one can get}
examples of forms $m$ and $g$ that satisfy the requirements of the
theorem. The choice of $g$ is in fact very easy. We can e.g. take
\begin{equation}\nonumber
g(x_2,x_3)=(x_2^2+x_3^2)^{\frac{d+2}{2}},
\end{equation}
which has a positive definite Hessian. As for the choice of $m$,
essentially any psd but not sos ternary form can be turned into a
form that satisfies requirements {\bf R1}, {\bf R2}, and {\bf R3}.
Indeed if the Hessian of such a form is positive definite at just
one point, then that point can be taken to $(1,0,0)^T$ by a change
of coordinates without changing the properties of being psd and
not sos. If the form is not positive definite, then it can be made so
by adding a small enough multiple of a positive definite form to
it. For concreteness, we construct in the next lemma a family of
forms that together with the above theorem will give us convex but
not sos-convex ternary forms of any degree $\geq 8$.
\begin{lemma}\label{lem:choice.of.m(x)}
For any even degree $d\geq 6$, there exists a constant $\alpha>0$,
such that the form
\begin{equation}\label{eq:choice.of.m(x)}
m(x)=x_1^{d-6}(x_1^2x_2^4+x_1^4x_2^2-3x_1^2x_2^2x_3^2+x_3^6)+\alpha(x_1^2+x_2^2+x_3^2)^{\frac{d}{2}}
\end{equation}
satisfies the requirements {\bf R1}, {\bf R2}, and {\bf R3} of
Theorem~\ref{thm:conv_not_sos_conv_forms_n3d}.
\end{lemma}
\begin{proof}
The form $$x_1^2x_2^4+x_1^4x_2^2-3x_1^2x_2^2x_3^2+x_3^6$$ is the
familiar Motzkin form in (\ref{eq:Motzkin.form}) that is psd but
not sos~\cite{MotzkinSOS}. For any even degree $d\geq 6$, the form
$$x_1^{d-6}(x_1^2x_2^4+x_1^4x_2^2-3x_1^2x_2^2x_3^2+x_3^6)$$ is a
form of degree $d$ that is clearly still psd and less obviously
still not sos; see~\cite{Reznick}. This together with the fact
that $\Sigma_{n,d}$ is a closed cone implies existence of a small
positive value of $\alpha$ for which the form $m$ in
(\ref{eq:choice.of.m(x)}) is positive definite but not a sum of
squares, hence satisfying requirements {\bf R1} and {\bf R2}.
Our next claim is that for any positive value of $\alpha$, the
Hessian $H_m$ of the form $m$ in (\ref{eq:choice.of.m(x)})
satisfies
\begin{equation}\label{eq:Hessian.at.1.0.0.}
H_m(1,0,0)=\begin{bmatrix} c_1 & 0 & 0 \\ 0 & c_2 & 0 \\
0& 0& c_3
\end{bmatrix}
\end{equation}
for some positive constants $c_1,c_2,c_3$, therefore also passing
requirement {\bf R3}. To see the above equality, first note that
since $m$ is a form of degree $d$, its Hessian $H_m$ will have
entries that are forms of degree $d-2$. Therefore, the only
monomials that can survive in this Hessian after setting $x_2$ and
$x_3$ to zero are multiples of $x_1^{d-2}$. It is easy to see that
an $x_1^{d-2}$ monomial in an off-diagonal entry of $H_m$ would
lead to a monomial in $m$ that is not even. On the other hand, the
form $m$ in (\ref{eq:choice.of.m(x)}) only has even monomials.
This explains why the off-diagonal entries of the right hand side
of (\ref{eq:Hessian.at.1.0.0.}) are zero. Finally, we note that
for any positive value of $\alpha$, the form $m$ in
(\ref{eq:choice.of.m(x)}) includes positive multiples of $x_1^d$,
$x_1^{d-2}x_2^2$, and $x_1^{d-2}x_3^2$, which lead to positive
multiples of $x_1^{d-2}$ on the diagonal of $H_m$. Hence, $c_1,
c_2$, and $c_3$ are positive.
\end{proof}
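Requirement {\bf R3} and the diagonal structure (\ref{eq:Hessian.at.1.0.0.}) can be verified symbolically. The sketch below (assuming sympy) does this for $d=8$ and $\alpha=1$; note that this $\alpha$ is only used to check {\bf R3}, which holds for any $\alpha>0$, and need not be small enough for {\bf R2}.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

d, alpha = 8, 1  # any even d >= 6; alpha = 1 suffices to check R3
motzkin = x1**2*x2**4 + x1**4*x2**2 - 3*x1**2*x2**2*x3**2 + x3**6
m = x1**(d - 6)*motzkin + alpha*(x1**2 + x2**2 + x3**2)**(d//2)

# The Hessian of m at the point (1, 0, 0) should be diagonal with
# positive diagonal entries.
Hm = sp.hessian(m, (x1, x2, x3))
Hm_at_e1 = Hm.subs({x1: 1, x2: 0, x3: 0})
```

For this choice of $d$ and $\alpha$, a direct computation gives the diagonal entries $c_1=56$, $c_2=10$, and $c_3=8$.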
Next, we state two lemmas that will be employed in the proof of
Theorem~\ref{thm:conv_not_sos_conv_forms_n3d}.
\begin{lemma}\label{lem:y.Hm.y>0.on.some.part}
Let $m$ be a trivariate form satisfying the requirements {\bf R1}
and {\bf R3} of Theorem~\ref{thm:conv_not_sos_conv_forms_n3d}. Let
$H_{\hat{m}}$ denote the Hessian of the form $\int_0^{x_1}\int_0^s
m(t,x_2,x_3) dt ds$. Then, there exists a positive constant
$\delta,$ such that
$$y^TH_{\hat{m}}(x)y>0$$ on the set
\begin{equation}\label{eq:the.set.S}
\mathcal{S}\mathrel{\mathop:}=\{(x,y) \ |\ ||x||=1, ||y||=1, \
(x_2^2+x_3^2<\delta \ \mbox{or}\ y_2^2+y_3^2<\delta)\}.
\end{equation}
\end{lemma}
\begin{proof}
We observe that when $y_2^2+y_3^2=0$, we have
$$y^TH_{\hat{m}}(x)y=y_1^2m(x),$$ which by requirement {\bf R1} is
positive when $||x||=||y||=1$. By continuity of the form
$y^TH_{\hat{m}}(x)y$, we conclude that there exists a small
positive constant $\delta_y$ such that $y^TH_{\hat{m}}(x)y>0$ on
the set
$$\mathcal{S}_y\mathrel{\mathop:}=\{(x,y) \ |\ ||x||=1, ||y||=1, \ y_2^2+y_3^2<\delta_y\}.$$
Next, we leave it to the reader to check that
$$H_{\hat{m}}(1,0,0)=\frac{1}{d(d-1)}H_m(1,0,0).$$ Therefore, when $x_2^2+x_3^2=0$, requirement {\bf R3} implies that
$y^TH_{\hat{m}}(x)y$ is positive when $||x||=||y||=~1$. Appealing
to continuity again, we conclude that there exists a small
positive constant $\delta_x$ such that $y^TH_{\hat{m}}(x)y>0$ on
the set
$$\mathcal{S}_x\mathrel{\mathop:}=\{(x,y) \ |\ ||x||=1, ||y||=1, \
x_2^2+x_3^2<\delta_x\}.$$ If we now take
$\delta=\min\{\delta_y,\delta_x\}$, the lemma is established.
\end{proof}
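The identity $H_{\hat{m}}(1,0,0)=\frac{1}{d(d-1)}H_m(1,0,0)$ left to the reader can be confirmed symbolically on a concrete instance. The sketch below (assuming sympy) uses the form $m$ from Lemma~\ref{lem:choice.of.m(x)} with $d=8$ and $\alpha=1$, builds $\hat{m}$ by iterated integration, and also double-checks that $\partial^2\hat{m}/\partial x_1^2=m$.

```python
import sympy as sp

x1, x2, x3, t, s = sp.symbols('x1 x2 x3 t s')

d = 8
motzkin = x1**2*x2**4 + x1**4*x2**2 - 3*x1**2*x2**2*x3**2 + x3**6
m = x1**(d - 6)*motzkin + (x1**2 + x2**2 + x3**2)**(d//2)

# mhat(x) = int_0^{x1} int_0^s m(t, x2, x3) dt ds
mhat = sp.integrate(sp.integrate(m.subs(x1, t), (t, 0, s)), (s, 0, x1))

H_mhat = sp.hessian(mhat, (x1, x2, x3))
H_m = sp.hessian(m, (x1, x2, x3))

# The two Hessians at (1, 0, 0) should agree up to the factor 1/(d(d-1)).
delta = (H_mhat.subs({x1: 1, x2: 0, x3: 0})
         - H_m.subs({x1: 1, x2: 0, x3: 0})/(d*(d - 1)))
```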
The last lemma that we need has already been proven in our earlier
work in~\cite{AAA_PP_not_sos_convex_journal}.
\begin{lemma}[\cite{AAA_PP_not_sos_convex_journal}]\label{lem:sos.mat.then.minor.sos}
All principal minors of an sos-matrix are sos
polynomials.\footnote{As a side note, we remark that the converse
of Lemma~\ref{lem:sos.mat.then.minor.sos} is not true even for
polynomial matrices that are valid Hessians. For example, all 7
principal minors of the $3\times 3$ Hessian of the form $f$ in
(\ref{eq:minim.form.3.6}) are sos polynomials, even though this
Hessian is not an sos-matrix.}
\end{lemma}
We are now ready to prove
Theorem~\ref{thm:conv_not_sos_conv_forms_n3d}.
\begin{proof}[Proof of Theorem~\ref{thm:conv_not_sos_conv_forms_n3d}]
We first prove that the form $f$ in
(\ref{eq:construction.of.f(x)}) is not sos-convex. By
Lemma~\ref{lem:sos.mat.then.minor.sos}, if $f$ was sos-convex,
then all diagonal elements of its Hessian would have to be sos
polynomials. On the other hand, we have from
(\ref{eq:construction.of.f(x)}) that
$$\frac{\partial^2 f(x)}{\partial x_1^2}=m(x),$$ which by requirement {\bf R2} is not sos. Therefore $f$ is not sos-convex.
It remains to show that there exists a positive value of $\gamma$
for which $f$ becomes convex. Let us denote the Hessians of $f$,
$\int_0^{x_1}\int_0^s m(t,x_2,x_3) dt ds$, and $g$, by $H_f$,
$H_{\hat{m}}$, and $H_g$ respectively. So, we have
$$H_f(x)=H_{\hat{m}}(x)+\gamma H_g(x_2,x_3).$$ (Here, $H_g$ is a
$3\times 3$ matrix whose first row and column are zeros.)
Convexity of $f$ is of course equivalent to nonnegativity of the
form $y^TH_f(x)y$. Since this form is bi-homogeneous in $x$ and
$y$, it is nonnegative if and only if $y^TH_f(x)y\geq0$ on the
bi-sphere
$$\mathcal{B}\mathrel{\mathop:}=\{(x,y) \ | \ ||x||=1,
||y||=1\}.$$ Let us decompose the bi-sphere as
$$\mathcal{B}=\mathcal{S}\cup\bar{\mathcal{S}},$$
where $\mathcal{S}$ is defined in (\ref{eq:the.set.S}) and
\begin{equation}\nonumber
\bar{\mathcal{S}}\mathrel{\mathop:}=\{(x,y) \ |\ ||x||=1,
||y||=1, x_2^2+x_3^2\geq\delta, y_2^2+y_3^2\geq\delta\}.
\end{equation}
Lemma~\ref{lem:y.Hm.y>0.on.some.part} together with positive
definiteness of $H_g$ imply that $y^TH_f(x)y$ is positive on
$\mathcal{S}$. As for the set $\bar{\mathcal{S}}$, let
\begin{equation}\nonumber
\beta_1=\min_{(x,y)\in\bar{\mathcal{S}}} y^TH_{\hat{m}}(x)y,
\end{equation}
and
\begin{equation}\nonumber
\beta_2=\min_{(x,y)\in\bar{\mathcal{S}}} y^TH_g(x_2,x_3)y.
\end{equation}
By the assumption of positive definiteness of $H_g$, we have
$\beta_2>0$. If we now let $$\gamma>\frac{|\beta_1|}{\beta_2},$$
then $$\min_{(x,y)\in\bar{\mathcal{S}}}
y^TH_f(x)y>\beta_1+\frac{|\beta_1|}{\beta_2}\beta_2\geq0.$$ Hence
$y^TH_f(x)y$ is nonnegative (in fact positive) everywhere on
$\mathcal{B}$ and the proof is completed.
\end{proof}
Finally, we provide an argument for existence of bivariate
polynomials of degree $8,10,12,\ldots$ that are convex but not
sos-convex.
\begin{corollary}\label{cor:bivariate.polys.8.10.12...}
Consider the form $f$ in (\ref{eq:construction.of.f(x)})
constructed as described in
Theorem~\ref{thm:conv_not_sos_conv_forms_n3d}. Let
$$\tilde{f}(x_1,x_2)=f(x_1,x_2,1).$$
Then, $\tilde{f}$ is convex but not sos-convex.
\end{corollary}
\begin{proof}
The polynomial $\tilde{f}$ is convex because it is the restriction
of a convex function. It is not difficult to see that
$$\frac{\partial^2 \tilde{f}(x_1,x_2)}{\partial x_1^2}=m(x_1,x_2,1),$$
which is not sos. Therefore from
Lemma~\ref{lem:sos.mat.then.minor.sos} $\tilde{f}$ is not
sos-convex.
\end{proof}
Corollary~\ref{cor:bivariate.polys.8.10.12...} together with the
two polynomials in $\tilde{C}_{2,6}\setminus\tilde{\Sigma
C}_{2,6}$ and $\tilde{C}_{3,4}\setminus\tilde{\Sigma C}_{3,4}$
presented in (\ref{eq:minim.poly.2.6}) and
(\ref{eq:minim.poly.3.4}), and with the simple procedure for
increasing the number of variables described at the beginning of
Subsection~\ref{subsubsec:increasing.degree.and.vars} cover all
the values of $n$ and $d$ for which convex but not sos-convex
polynomials exist.
\section{Concluding remarks and an open problem}\label{sec:concluding.remarks}
To conclude our paper, we would like to point out some
similarities between nonnegativity and convexity that deserve
attention: (i) both nonnegativity and convexity are properties
that only hold for even degree polynomials, (ii) for quadratic
forms, nonnegativity is in fact equivalent to convexity, (iii)
both notions are NP-hard to check exactly for degree 4 and larger,
and most strikingly (iv) nonnegativity is equivalent to sum of
squares \emph{exactly} in dimensions and degrees where convexity
is equivalent to sos-convexity. It is unclear to us whether there
can be a deeper and more unifying reason explaining these
observations, in particular, the last one which was the main
result of this paper.
Another intriguing question is to investigate whether one can give
a direct argument proving the fact that $\tilde{\Sigma
C}_{n,d}=\tilde{C}_{n,d}$ if and only if $\Sigma
C_{n+1,d}=C_{n+1,d}$. This would eliminate the need for studying
polynomials and forms separately, and in particular would provide
a short proof of the result $\Sigma C_{3,4}=C_{3,4}$ given
in~\cite{AAA_GB_PP_Convex_ternary_quartics}.
Finally, an open problem related to this work is to \emph{find an
explicit example of a convex form that is not a sum of squares}.
Blekherman~\cite{Blekherman_convex_not_sos} has shown via volume
arguments that for degree $d\geq 4$ and asymptotically for large
$n$ such forms must exist, although no examples are known. In
particular, it would \aaa{be} interesting to determine the
smallest value of $n$ for which such a form exists. We know from
Lemma~\ref{lem:helton.nie.sos-convex.then.sos} that a convex form
that is not sos must necessarily be not sos-convex. Although our
several constructions of convex but not sos-convex polynomials
pass this necessary condition, the polynomials themselves are all
sos. The question is particularly interesting from an optimization
viewpoint because it implies that the well-known sum of squares
relaxation for minimizing
polynomials~\cite{Shor},~\cite{Minimize_poly_Pablo} \aaa{is not
always} exact even for the easy case of minimizing convex
polynomials.
\bibliographystyle{abbrv}
https://arxiv.org/abs/2006.00742

\title{Error bounds for overdetermined and underdetermined generalized centred simplex gradients}

\begin{abstract}
Using the Moore--Penrose pseudoinverse, this work generalizes the gradient approximation technique called centred simplex gradient to allow sample sets containing any number of points. This approximation technique is called the \emph{generalized centred simplex gradient}. We develop error bounds and, under a full-rank condition, show that the error bounds have order $O(\Delta^2)$, where $\Delta$ is the radius of the sample set of points used. We establish calculus rules for generalized centred simplex gradients, introduce a calculus-based generalized centred simplex gradient and confirm that error bounds for this new approach are also order $O(\Delta^2)$. We provide several examples to illustrate the results and some benefits of these new methods.
\end{abstract}

\section{Introduction}\label{sec:intro}
Derivative-free optimization (DFO) focuses on the study of optimization algorithms that do not use first-order information within the algorithm. Recent advances in their applications, convergence analysis and practical implementations have fuelled a surge in DFO research (see \cite{Audet2014,audet2017derivative,Conn2009,Custodio2017,HareNutiniTesfamariam2013,larson_menickelly_wild_2019} and citations therein).
One of the broad classes of DFO algorithms is model-based DFO methods. These methods rely on accurately approximating first-order information using only function evaluations and then using the approximations within classical optimization algorithms. For example, using linear interpolation on function values from $n+1$ well-poised sample points in $\operatorname{\mathbb{R}}^n$ creates a linear model of the objective function. The gradient of this linear model, called the {\em simplex gradient}, provides an approximation of the true gradient \cite{Bortz1998,Kelley1999}.
The error bound comparing the simplex gradient and the true gradient dates back to the late 1990s and is known to be order $O(\Delta)$, where $\Delta$ is the radius of the sample set of evaluated points \cite{Kelley1999}. This error bound is critical in showing convergence of many first-order model-based methods
\cite[Ch.\ 10 \& 11]{audet2017derivative}.
Simplex gradients, and their associated error bound, are not limited to the setting where exactly $n+1$ interpolation points are used in $\operatorname{\mathbb{R}}^n$. In \cite{Conn2008}, the authors study the construction of simplex gradients consisting of $n+1$ interpolation points in $\operatorname{\mathbb{R}}^n$, and in \cite{Conn22008}, they extend those results to the cases of fewer (underdetermined models) and more (overdetermined models) than $n+1$ points. Most notably, they establish error bounds for these cases and find them to be order $O(\Delta)$. These results were further elaborated in \cite{Regis2015}.
Many other methods of approximating gradients exist \cite{Billups2013,Oeuvray2007,Powell2004,Regis2005,Schonlau1998,Wild2011}. Central to this work is the {\em centred simplex gradient}, which is created by retaining the original points in the sample set and adding their reflection through the reference point (see Definition \ref{def:GCSG}). This creates an average of two simplex gradients. Interestingly, the accuracy of the centred simplex gradient is $O(\Delta^2)$ \cite{Kelley1999}. However, this error bound is only established for the determined case, using exactly $2n$ function evaluations in $\operatorname{\mathbb{R}}^n$. The primary goal of this paper is to establish the error bound for the centred simplex gradient for the underdetermined and overdetermined cases. This is accomplished using the Moore--Penrose pseudoinverse to define a \emph{generalized centred simplex gradient (GCSG)}, which allows centred simplex gradients to be constructed using a sample set of any finite size.
Returning briefly to our discussion of the simplex gradient, \cite{hare2020,Regis2015} develop calculus rules for the generalized simplex gradient. A secondary goal of this paper is the extension of these results to the GCSG. This provides the concept of the \emph{generalized centred simplex calculus gradient (GCSCG)}. We examine this novel gradient approximation and prove that it retains the $O(\Delta^2)$ accuracy of the centred simplex gradient. Some benefits of the new techniques are illustrated through examples.
The structure of the paper is the following. In Section \ref{sec:prel}, we introduce notation and basic definitions. In Section \ref{sec:errorbounds}, we show that generalized centred simplex gradients inherit the order of accuracy $O(\Delta^2)$. We present two error bounds, depending on the number of points in the sample set. In Section \ref{sec:calc}, we present the calculus rules for the GCSG. In Section \ref{sec:centredsimplexcalculusgradient}, we define the GCSCG, based on the calculus rules from the previous section. We prove that each of these new techniques has order of accuracy $O(\Delta^2)$. We provide examples, showing some benefits of the GCSCG compared to the GCSG in certain situations. Section \ref{sec:conclusion} summarizes the work accomplished and suggests some topics to explore in future research.
\section{Preliminaries} \label{sec:prel}
Unless otherwise stated, we use the standard notation found in \cite{rockwets}. The domain of a function $f$ is denoted by $\operatorname{dom}(f)$. The transpose of a matrix $A$ is denoted by $A^\top$. We work in finite-dimensional space $\operatorname{\mathbb{R}}^n$ with inner product $x^\top y=\sum_{i=1}^nx_iy_i$ and induced norm $\|x\|=\sqrt{x^\top x}$. We use angle brackets $\langle\cdots\rangle$ to contain an ordered set of vectors. The identity matrix is denoted by $\operatorname{Id}$. We denote by $B(x,\Delta)$ the open ball centred about $x$ with radius $\Delta$. The set of all linear combinations of the vectors in a set $S$ is denoted by $\operatorname{span} S$. \medskip\\
We next list some definitions and background results.
\begin{df}[Jacobian]
Given a differentiable function $f: \operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}^p$, the \emph{Jacobian} of $f$,
written $J_f$, is the $p \times n$ matrix of all first-order partial derivatives of $f$:
$$J_f=\left[\begin{array}{c c c}\frac{\partial
f}{\partial x_1}&\cdots&\frac{\partial f}{\partial
x_n}\end{array}\right]=\left[\begin{array}{c c c}
\frac{\partial f_1}{\partial x_1}&\cdots&\frac{\partial f_1}{\partial x_n}\\
\vdots&\ddots&\vdots\\
\frac{\partial f_p}{\partial x_1}&\cdots&\frac{\partial f_p}{\partial
x_n}\end{array}\right].$$
\end{df}
\begin{df}[Lipschitz continuity] A function $f: \operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ is said to be \emph{Lipschitz continuous} with Lipschitz constant $L\geq0$ if for all $x,y\in\operatorname{dom} (f)$,
$$|f(y)-f(x)|\leq L\|y-x\|.$$If for every $x\in\operatorname{dom} (f)$ there exists a neighbourhood $U$ of $x$ such that $f$ restricted to $U$ is Lipschitz continuous, then $f$ is said to be \emph{locally Lipschitz continuous}.
\end{df}
We remind the reader that for nonsquare matrices, a generalization of the matrix inverse is the pseudoinverse. The best-known matrix pseudoinverse, and the one central to the results of this work, is the Moore--Penrose pseudoinverse.
\begin{df}[Moore--Penrose pseudoinverse] \label{def:mpinverse}
Let $A\in\operatorname{\mathbb{R}}^{n\times m}$. The \emph{Moore--Penrose pseudoinverse} of $A$, denoted by $A^\dagger$, is the unique matrix in $\operatorname{\mathbb{R}}^{m \times n}$ that satisfies the following four equations:
\begin{align*}
AA^\dagger A&=A,\\ A^\dagger AA^\dagger&=A^\dagger,\\ (AA^\dagger)^\top&=AA^\dagger,\\ (A^\dagger A)^\top&=A^\dagger A.
\end{align*}
\end{df}
The Moore--Penrose inverse $A^\dagger$ is not always an inverse of $A$, but the following two properties hold.
\begin{itemize}
\item If $A$ has full column rank $m$, then $A^\dagger$ is a left-inverse of $A$, that is, $A^\dagger A=\operatorname{Id}_m$.
\item If $A$ has full row rank $n$, then $A^\dagger$ is a right-inverse of $A$, that is, $AA^\dagger=\operatorname{Id}_n$.
\end{itemize}
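As a quick illustration (not part of the original development), the four Penrose equations and the left-inverse property above can be checked numerically with NumPy's \texttt{numpy.linalg.pinv}; the matrix \texttt{A} below is an arbitrary full-column-rank example of our own choosing.

```python
import numpy as np

# Arbitrary 3x2 matrix with full column rank (m = 2).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
A_dag = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse, shape 2x3

# The four defining Penrose equations.
assert np.allclose(A @ A_dag @ A, A)
assert np.allclose(A_dag @ A @ A_dag, A_dag)
assert np.allclose((A @ A_dag).T, A @ A_dag)
assert np.allclose((A_dag @ A).T, A_dag @ A)

# Full column rank: A_dag is a left inverse of A (but not a right inverse here,
# since A has rank 2 < 3).
left = A_dag @ A
```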
In order to define the generalized simplex gradient and the GCSG in the sequel, we use the following sets, matrices and terminology.
\begin{df}[Simplex notation]\label{df:simplex}Given $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ and an ordered sample set of distinct points
\begin{align*}
\operatorname{\mathcal{X}}&=\langle x^0,x^1,\ldots,x^m\rangle=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle\subseteq \operatorname{dom} (f),
\intertext{we define}
\operatorname{\mathcal{X}}^-&=\langle x^0,x^0-d^1,\ldots,x^0-d^m\rangle\subseteq \operatorname{dom} (f),\\
\Delta&=\max\limits_{i \,\in\{1,\ldots,m\}}\|d^i\| \mbox{\emph{, the radius of }}\operatorname{\mathcal{X}},\\
S=S(\operatorname{\mathcal{X}})&=[d^1~~\cdots~~d^m]\in\operatorname{\mathbb{R}}^{n\times m},\\
\delta^s=\delta^s_f(\operatorname{\mathcal{X}})&=\left[\begin{array}{c}f(x^1)-f(x^0)\\f(x^2)-f(x^0)\\\vdots\\f(x^m)-f(x^0)\end{array}\right]\in\operatorname{\mathbb{R}}^m,\\
\delta^c=\delta^c_f(\operatorname{\mathcal{X}})&=\frac{1}{2}\left[\begin{array}{c}f(x^0+d^1)-f(x^0-d^1)\\f(x^0+d^2)-f(x^0-d^2)\\\vdots\\f(x^0+d^m)-f(x^0-d^m)\end{array}\right]\in\operatorname{\mathbb{R}}^m.
\end{align*}
\end{df}
\begin{df}
Let $\operatorname{\mathcal{X}}=\langle x^0, x^0+d^1, \dots, x^0+d^m \rangle \subseteq \operatorname{dom}(f)$ be an ordered set of $m+1$ distinct points in $\operatorname{\mathbb{R}}^n.$ Then we classify the set $\operatorname{\mathcal{X}}$ in exactly one of the following cases:
\begin{itemize}
\item \emph{overdetermined case} if $m>n$ and $\operatorname{rank} S=n$;
\item\emph{determined case} if $m=n$ and $\operatorname{rank} S=n$;
\item\emph{underdetermined case} if $m<n$ and $\operatorname{rank} S=m$;
\item\emph{undetermined case} if $S$ does not have full rank.
\end{itemize}
\end{df}
Note that the points in $\operatorname{\mathcal{X}}$ are assumed to be distinct for the rest of this paper.
\begin{df}[Generalized simplex gradient] Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ and let $\operatorname{\mathcal{X}}$ be an ordered set in $\operatorname{dom}(f).$ The \emph{generalized simplex gradient} (GSG) of $f$ over $\operatorname{\mathcal{X}}$ is denoted by $\nabla^sf(\operatorname{\mathcal{X}})$ and defined by
\begin{equation*}\label{eq:simplex}\nabla^s f(\operatorname{\mathcal{X}})=(S^\top)^\dagger\delta^s.\end{equation*}
\end{df}
\begin{df}[Generalized centred simplex gradient]\label{def:GCSG}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ and let $\operatorname{\mathcal{X}}$ be an ordered set such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^{-}$ is in $\operatorname{dom}(f)$. The \emph{generalized centred simplex gradient} (GCSG) of $f$ over $\operatorname{\mathcal{X}}$ is denoted by $\nabla^cf(\operatorname{\mathcal{X}})$ and defined by
\begin{equation*}\label{eq:centred}\nabla^c f(\operatorname{\mathcal{X}})=(S^\top)^\dagger\delta^c.\end{equation*}
\end{df}
It is easy to show that an equivalent way to compute the GCSG is by using the average of two GSGs:
\begin{equation}\label{eq:altcentred}\nabla^c f(\operatorname{\mathcal{X}})=\frac{1}{2}(\nabla^s f(\operatorname{\mathcal{X}})+\nabla^s f(\operatorname{\mathcal{X}}^-)).\end{equation}
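For readers who wish to experiment, here is a small numerical sketch of \eqref{eq:altcentred} using NumPy; the helper names \texttt{gsg} and \texttt{gcsg} and the test function are our own illustrative choices, not notation from the paper.

```python
import numpy as np

def gsg(f, x0, S):
    """Generalized simplex gradient (S^T)^dagger @ delta^s over directions S."""
    delta_s = np.array([f(x0 + d) - f(x0) for d in S.T])
    return np.linalg.pinv(S.T) @ delta_s

def gcsg(f, x0, S):
    """Generalized centred simplex gradient (S^T)^dagger @ delta^c."""
    delta_c = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in S.T])
    return np.linalg.pinv(S.T) @ delta_c

f = lambda y: np.sin(y[0]) + y[0] * y[1] ** 2
x0 = np.array([0.5, -0.3])
# Three directions in R^2 (overdetermined case); note S(X^-) is simply -S.
S = np.array([[0.1, 0.0, 0.1],
              [0.0, 0.1, -0.1]])

cent = gcsg(f, x0, S)                          # (S^T)^dagger @ delta^c
avg = 0.5 * (gsg(f, x0, S) + gsg(f, x0, -S))   # average of two GSGs
```

The two computations agree to floating-point precision, matching \eqref{eq:altcentred}.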
Next, we introduce a proposition that clarifies the relation between the GSG and the GCSG and provides a case where both approaches are equal. We require the following two lemmas first.
\begin{lem}
Let $A \in \operatorname{\mathbb{R}}^{n \times m}$. Then $(A^\top)^\dagger=(A^\dagger)^\top.$
\end{lem}
\begin{proof}
By direct verification, $(A^\dagger)^\top$ satisfies the four equations of Definition \ref{def:mpinverse} characterizing the pseudoinverse of $A^\top$; by uniqueness, $(A^\top)^\dagger=(A^\dagger)^\top$.
\end{proof}
\begin{lem} Let $A \in \operatorname{\mathbb{R}}^{n \times m}$ have full row rank. Then $\begin{bmatrix} A&-A\end{bmatrix}^\dagger=\frac{1}{2}\begin{bmatrix} A^\dagger\\-A^\dagger\end{bmatrix}.$
\end{lem}
\begin{proof}
Since $\begin{bmatrix} A &-A\end{bmatrix}$ has full row rank, we have
\begin{align*}
\begin{bmatrix} A&-A\end{bmatrix}^\dagger&=\begin{bmatrix} A&-A \end{bmatrix}^\top \left ( \begin{bmatrix} A&-A\end{bmatrix} \begin{bmatrix} A&-A \end{bmatrix}^\top \right )^{-1}\\
&=\begin{bmatrix} A^\top\\-A^\top \end{bmatrix} \left ( \begin{bmatrix} A&-A \end{bmatrix} \begin{bmatrix} A^\top\\-A^\top \end{bmatrix} \right )^{-1}\\
&=\begin{bmatrix} A^\top\\-A^\top \end{bmatrix} \left ( 2 \begin{bmatrix} AA^\top \end{bmatrix} \right )^{-1}\\
&=\frac{1}{2}\begin{bmatrix} A^\top (AA^\top)^{-1}\\-A^\top (AA^\top)^{-1} \end{bmatrix}\\
&=\frac{1}{2}\begin{bmatrix} A^\dagger\\-A^\dagger \end{bmatrix}.\qedhere
\end{align*}
\end{proof}
\begin{prop}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n \to \operatorname{\mathbb{R}}$ and let $\operatorname{\mathcal{X}}=\langle x^0, x^0+d^1, \dots, x^0+d^m \rangle$ be an ordered set such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^{-} \subseteq \operatorname{dom}(f)$ and $S$ has full row rank (determined and overdetermined cases). Let $\mathcal{Y}=\left \langle x^0, x^0+d^1, \dots, x^0+d^m, x^0-d^1, \dots, x^0-d^m \right \rangle$. Then
\begin{align*}
\nabla^s f(\operatorname{\mathcal{Y}})&=\nabla^c f(\operatorname{\mathcal{X}}).
\end{align*}
\end{prop}
\begin{proof}
We have
\begin{align*}
\nabla^s f(\operatorname{\mathcal{Y}})&=(S(\operatorname{\mathcal{Y}})^\top)^\dagger \delta^s_f (\operatorname{\mathcal{Y}})\\
&=\left (\begin{bmatrix} S(\operatorname{\mathcal{X}}) &-S(\operatorname{\mathcal{X}})\end{bmatrix}^\top\right )^\dagger \begin{bmatrix} \delta^s_f(\operatorname{\mathcal{X}})\\\delta^s_f(\operatorname{\mathcal{X}}^{-})\end{bmatrix}=\left (\begin{bmatrix} S(\operatorname{\mathcal{X}}) &-S(\operatorname{\mathcal{X}})\end{bmatrix}^\dagger\right )^\top \begin{bmatrix} \delta^s_f(\operatorname{\mathcal{X}})\\\delta^s_f(\operatorname{\mathcal{X}}^{-})\end{bmatrix}\\
&=\frac{1}{2}\begin{bmatrix} S(\operatorname{\mathcal{X}})^\dagger\\-S(\operatorname{\mathcal{X}})^\dagger\end{bmatrix}^\top \begin{bmatrix} \delta^s_f(\operatorname{\mathcal{X}})\\\delta^s_f(\operatorname{\mathcal{X}}^{-})\end{bmatrix}=\frac{1}{2}\begin{bmatrix} (S(\operatorname{\mathcal{X}})^\top)^\dagger& -(S(\operatorname{\mathcal{X}})^\top)^\dagger \end{bmatrix}\begin{bmatrix} \delta^s_f(\operatorname{\mathcal{X}})\\\delta^s_f(\operatorname{\mathcal{X}}^{-})\end{bmatrix}\\
&=\frac{1}{2}\left ( (S(\operatorname{\mathcal{X}})^\top)^\dagger \delta^s_f(\operatorname{\mathcal{X}}) -(S(\operatorname{\mathcal{X}})^\top)^\dagger \delta^s_f(\operatorname{\mathcal{X}}^{-}) \right )\\
&=\frac{1}{2}\left ( (S(\operatorname{\mathcal{X}})^\top)^\dagger \delta^s_f(\operatorname{\mathcal{X}}) +(S(\operatorname{\mathcal{X}}^{-})^\top)^\dagger \delta^s_f(\operatorname{\mathcal{X}}^{-}) \right )=\nabla^c f(\operatorname{\mathcal{X}}).\qedhere
\end{align*}
\end{proof}
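The proposition can be confirmed numerically in a few lines; the function and sample set below are our own illustrative choices (determined case, $S$ full row rank).

```python
import numpy as np

def gsg(f, x0, S):
    """Generalized simplex gradient (S^T)^dagger @ delta^s over directions S."""
    delta_s = np.array([f(x0 + d) - f(x0) for d in S.T])
    return np.linalg.pinv(S.T) @ delta_s

f = lambda y: y[0] ** 3 - 2 * y[0] * y[1]
x0 = np.array([1.0, 0.5])
S = np.array([[0.2, 0.0],
              [0.0, 0.2]])          # full row rank (determined case)

# Y doubles the sample set with the reflected points, so S(Y) = [S  -S].
SY = np.hstack([S, -S])
gs_on_Y = gsg(f, x0, SY)                          # GSG over Y
gc_on_X = 0.5 * (gsg(f, x0, S) + gsg(f, x0, -S))  # GCSG over X
```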
Notice that the GCSG is defined over an ordered set of points. The reason is that, in the overdetermined case, changing which point plays the role of the reference point $x^0$ can change the value of the GCSG. The following example illustrates this situation.
\begin{ex} Consider the sets $\operatorname{\mathcal{X}}=\langle -1, 0, 1 \rangle$ and $\operatorname{\mathcal{X}}_\alpha=\langle 0, 1, -1 \rangle,$ and the function $f: \operatorname{\mathbb{R}} \to \operatorname{\mathbb{R}}: y \mapsto y^4.$ Then
\begin{align*}
\nabla^c f(\operatorname{\mathcal{X}})&=(S(\operatorname{\mathcal{X}})^\top)^\dagger \delta^c_f(\operatorname{\mathcal{X}})\\
&=\begin{bmatrix} 1\\2\end{bmatrix}^\dagger \frac{1}{2} \begin{bmatrix} 0 -16\\1-81\end{bmatrix}=-17.6
\intertext{and}
\nabla^c f(\operatorname{\mathcal{X}}_\alpha)&=\begin{bmatrix} 1 \\-1 \end{bmatrix}^\dagger \frac{1}{2} \begin{bmatrix} 1 - 1\\ 1 -1 \end{bmatrix} =0.
\end{align*}
\end{ex}
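The two values in this example are easy to reproduce numerically; the helper \texttt{gcsg\_1d} below is our own shorthand for the GCSG of a univariate function.

```python
import numpy as np

f = lambda y: y ** 4

def gcsg_1d(x0, dirs):
    """GCSG of f over <x0, x0+d1, ..., x0+dm> in R^1: (S^T)^dagger @ delta^c."""
    S_T = np.array(dirs, dtype=float).reshape(-1, 1)  # S^T, shape m x 1
    delta_c = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in dirs])
    return (np.linalg.pinv(S_T) @ delta_c).item()

# X = <-1, 0, 1>: reference point -1, directions d = 1 and d = 2.
g1 = gcsg_1d(-1.0, [1.0, 2.0])   # -17.6, as in the example
# X_alpha = <0, 1, -1>: reference point 0, directions d = 1 and d = -1.
g2 = gcsg_1d(0.0, [1.0, -1.0])   # 0.0
```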
\section{Error bounds for the GCSG} \label{sec:errorbounds}
This section is dedicated to developing the $O(\Delta^2)$ upper bounds on the error for the GCSG. There are two cases to consider separately: first the determined and overdetermined cases, then the underdetermined case. These settings yield different results because the number of linearly independent vectors in $S$ differs.
\subsection{Determined and overdetermined cases}
An error bound for the determined case of the GCSG is established in \cite[Theorem 9.13]{audet2017derivative}, with accuracy measured in terms of $\Delta$; this bound has order $O(\Delta^2)$. We show that the bound extends to the overdetermined case. To that end, we present Lemma \ref{lem1}, which relies on the multidimensional second-order Taylor theorem below.
\begin{thm}\emph{\cite[Section 4.3]{Lax2017}}\label{thm:Taylor} Suppose $f:\operatorname{dom}(f)\subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ is a $\mathcal{C}^{3}$ function on the open ball $B(x^0,\overline{\Delta}).$ Then for $x^0+d$ in the ball,
\begin{align*}
f(x^0+d)&=f(x^0)+\nabla f(x^0)^\top d+ \frac{1}{2} d^\top \nabla^2 f(x^0) d+R_2(x^0, d).
\end{align*}
Moreover,
\begin{align*}
\vert R_2(x^0, d) \vert \leq \frac{L}{6} \Vert d \Vert^3,
\end{align*}
where $L$ is the Lipschitz constant of the Hessian $\nabla^2 f.$
\end{thm}
\begin{lem}\label{lem1}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(x^0,\overline{\Delta})$ and denote by $L$ the Lipschitz constant of $\nabla^2f$. Then for any $d$ such that $x^0\pm d\in B(x^0,\overline{\Delta})$, we have
\begin{equation}\label{eq1}|f(x^0+d)-f(x^0-d)-2\nabla f(x^0)^\top d|\leq\frac{L}{3}\|d\|^3.\end{equation}
\end{lem}
\begin{proof}
From Theorem \ref{thm:Taylor}, we know that
\begin{align}f(x^0+d)&=f(x^0)+\nabla f(x^0)^\top d+\frac{1}{2}d^\top\nabla^2f(x^0)d+R_2(x^0, d),\label{eq2}\\
f(x^0-d)&=f(x^0)-\nabla f(x^0)^\top d+\frac{1}{2}d^\top\nabla^2f(x^0)d+R_2(x^0, -d).\label{eq3}
\end{align}
Subtracting \eqref{eq3} from \eqref{eq2}, we find
\begin{align*}
f(x^0+d)-f(x^0-d)-2\nabla f(x^0)^\top d&=R_2(x^0, d)-R_2(x^0, -d)\\
\Rightarrow |f(x^0+d)-f(x^0-d)-2\nabla f(x^0)^\top d|&=|R_2(x^0, d)-R_2(x^0, -d)|.
\end{align*}
Also from Theorem \ref{thm:Taylor}, we know that
\begin{equation}\label{eq6}|R_2(x^0, \pm d)|\leq\frac{L\|d\|^3}{6}.\end{equation}
Therefore, by \eqref{eq6} and the triangle inequality, we obtain \eqref{eq1}.
\end{proof}
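A small numerical check of the bound \eqref{eq1} (our own illustration): for the univariate $f(t)=t^3$ the Hessian $f''(t)=6t$ is Lipschitz with $L=6$, and a short expansion shows the centred difference error is exactly $2d^3$, so the bound $(L/3)\|d\|^3=2|d|^3$ is attained with equality.

```python
import numpy as np

f = lambda t: t ** 3          # f''(t) = 6t is Lipschitz with constant L = 6
df = lambda t: 3 * t ** 2
L = 6.0

x0 = 0.7
errors, bounds = [], []
for d in [0.5, 0.25, 0.125]:
    # left-hand side of the Lemma: |f(x0+d) - f(x0-d) - 2 f'(x0) d|
    lhs = abs(f(x0 + d) - f(x0 - d) - 2 * df(x0) * d)
    errors.append(lhs)
    bounds.append(L / 3 * d ** 3)   # the Lemma's bound (L/3) ||d||^3
```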
Now we are ready for our first error bound result, for the determined and overdetermined cases.
\begin{thm}\label{thm:det}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(x^0,\overline{\Delta}),$ and denote by $L$ the Lipschitz constant of $\nabla^2f$. Let $\operatorname{\mathcal{X}}=\left \langle x^0, x^1, \dots, x^m \right \rangle$ be an ordered set with radius $\Delta<\overline{\Delta}$ and let $\operatorname{rank} S=n$ (determined and overdetermined cases).
Then
\begin{equation*}\label{eq:determinederror}\|\nabla^c f(\operatorname{\mathcal{X}})-\nabla f(x^0)\|\leq\frac{L\sqrt m}{6}\left\|(\widehat{S}^\top)^\dagger\right\|\Delta^2,\end{equation*}
where $\widehat{S}=S/\Delta$.
\end{thm}
\begin{proof}
We have
\begin{align*}
\|\delta^c_f-S^\top\nabla f(x^0)\|&=\sqrt{\sum\limits_{i=1}^m\left(\frac{f(x^0+d^i)-f(x^0-d^i)}{2}-(d^i)^\top\nabla f(x^0)\right)^2}\\
&\leq \sum\limits_{i=1}^m\left\vert \frac{f(x^0+d^i)-f(x^0-d^i)}{2}-(d^i)^\top\nabla f(x^0)\right \vert.
\end{align*}
Now using Lemma \ref{lem1} and the definition of $\Delta$, we have
\begin{equation*}\label{eq7}\|\delta^c_f-S^\top\nabla f(x^0)\|\leq\frac{L\sqrt m\Delta^3}{6}.\end{equation*}
Since $S^\top$ has full column rank, $(S^\top)^\dagger$ is a left inverse of $S^\top$. Thus,
\begin{align*}
\|\nabla^c f(\operatorname{\mathcal{X}})-\nabla f(x^0)\|&=\left\|(S^\top)^\dagger\delta^c_f-(S^\top)^\dagger S^\top\nabla f(x^0)\right\|\\
&\leq\left\|(S^\top)^\dagger\right\|\|\delta^c_f-S^\top\nabla f(x^0)\|\\
&\leq\left\|(S^\top)^\dagger\right\|\frac{L\sqrt m\Delta^3}{6}\\
&=\left\|(\widehat{S}^\top)^\dagger\right\|\frac{L\sqrt m\Delta^2}{6}.\qedhere
\end{align*}
\end{proof}
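The $O(\Delta^2)$ rate of Theorem \ref{thm:det} can be observed numerically: halving $\Delta$ should roughly quarter the error. The sketch below (our own, with an arbitrary smooth test function and an overdetermined direction set) tracks the error as $\Delta$ shrinks.

```python
import numpy as np

def gcsg(f, x0, S):
    """Generalized centred simplex gradient (S^T)^dagger @ delta^c."""
    delta_c = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in S.T])
    return np.linalg.pinv(S.T) @ delta_c

f = lambda y: np.exp(y[0]) * np.cos(y[1])
grad = lambda y: np.array([np.exp(y[0]) * np.cos(y[1]),
                           -np.exp(y[0]) * np.sin(y[1])])
x0 = np.array([0.2, 0.4])
S0 = np.array([[1.0, 0.0, 0.5],
               [0.0, 1.0, 0.5]])   # m = 3 points in R^2: overdetermined

errs = []
for k in range(4):
    Delta = 0.1 / 2 ** k
    errs.append(np.linalg.norm(gcsg(f, x0, Delta * S0) - grad(x0)))

# Successive ratios should approach 4, consistent with O(Delta^2).
ratios = [errs[i] / errs[i + 1] for i in range(3)]
```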
\subsection{Underdetermined case}
When developing the underdetermined case, we obtain the error bound by considering $f$ not on all of $\operatorname{\mathbb{R}}^n$, but on the subspace spanned by the columns of $S$. Suppose $\mathbb{U}=\operatorname{span} S\neq\operatorname{\mathbb{R}}^n$ (i.e.\ $\operatorname{rank} S<n$). Note that $\mathbb{U}+x^0$ is the affine hull of $\operatorname{\mathcal{X}}$, which we denote by $\operatorname{aff}\operatorname{\mathcal{X}}$. Since all of our sample points lie in $\mathbb{U}+x^0$, it is unreasonable to expect to estimate gradients accurately outside of this affine hull. The following example demonstrates this problem.
\begin{ex}\label{ex:proj}
Let $f:\operatorname{\mathbb{R}}^3\to\operatorname{\mathbb{R}}: y\mapsto ay_1+(a+1)y_2+(4-a)y_3$, $a\in\operatorname{\mathbb{R}}$. Consider the sample set$$\operatorname{\mathcal{X}}=\left\langle x^0,x^1,x^2\right\rangle=\left\langle\left[\begin{array}{c}0\\0\\0\end{array}\right],\left[\begin{array}{c}1\\0\\1\end{array}\right],\left[\begin{array}{c}0\\1\\1\end{array}\right]\right\rangle.$$Then regardless of the value of $a$, we have
$$f(x^0)=0,\qquad f(x^1)=a+4-a=4,\qquad f(x^2)=a+1+4-a=5.$$
Therefore, it is impossible to determine $\nabla f$ using $\operatorname{\mathcal{X}}$.
\end{ex}
We wish to create an approximate gradient that is accurate on the subspace $\mathbb{U}$; in Example \ref{ex:proj}, $\mathbb{U}=\operatorname{span} S=\{y \, : \, y_3=y_1+y_2\}$. Define $\operatorname{Proj}_{\mathbb{U}}$ as the projection operator onto $\mathbb{U}\subseteq\operatorname{\mathbb{R}}^n$. We restrict $f$ to the domain $\mathbb{U}$ by defining the following function:
$$f_{\mathbb{U}}(y)=f(x^0+\operatorname{Proj}_{\mathbb{U}}(y-x^0)).$$Then by \cite[Lemma 4.2]{hare2019chain}, we have the following useful relationship between the projection of the gradient of $f$ onto $\mathbb{U}$ and the so-called $\mathbb{U}$-gradient of $f$ denoted by $\nabla f_{\mathbb{U}}$:
$$\nabla f_{\mathbb{U}}=\operatorname{Proj}_{\mathbb{U}}\nabla f.$$
The function $f_{\mathbb{U}}$ effectively restricts the domain of $f$ to the subspace $\mathbb{U}$ and provides us with gradient information on $\operatorname{aff}\operatorname{\mathcal{X}}$.
\begin{ex} Let $f:\operatorname{\mathbb{R}}^3\to\operatorname{\mathbb{R}}:y \mapsto a^\top y=a_1y_1+a_2y_2+a_3y_3$ and$$\operatorname{\mathcal{X}}=\left\langle x^0,x^1,x^2\right\rangle=\left\langle\left[\begin{array}{c}0\\0\\0\end{array}\right],
\left[\begin{array}{c}1\\0\\1\end{array}\right],\left[\begin{array}{c}0\\1\\1\end{array}\right]\right\rangle.$$We have $$S=\left[\begin{array}{c c}1&0\\0&1\\1&1\end{array}\right].$$
By \emph{\cite[\S 2.2 eq. (3)]{vusmoothness} (see \eqref{eq:projsum2} below)}, the projection of $y$ onto $\mathbb{U}$ is given by
$$\operatorname{Proj}_{\mathbb{U}}y=\frac{1}{3} \begin{bmatrix} 2&-1&1\\-1&2&1\\1&1&2\end{bmatrix} \begin{bmatrix} y_1\\y_2\\y_3 \end{bmatrix}=\frac{1}{3}\left[\begin{array}{c}2y_1-y_2+y_3\\-y_1+2y_2+y_3\\y_1+y_2+2y_3\end{array}\right].$$
Thus,
\begin{align*}f_{\mathbb{U}}(y)=f(x^0+\operatorname{Proj}_{\mathbb{U}}(y))&=a_1\frac{2y_1-y_2+y_3}{3}+a_2\frac{-y_1+2y_2+y_3}{3}+a_3\frac{y_1+y_2+2y_3}{3}\\
&=\frac{2a_1-a_2+a_3}{3}y_1+\frac{-a_1+2a_2+a_3}{3}y_2+\frac{a_1+a_2+2a_3}{3}y_3\end{align*}
and $$\nabla f_{\mathbb{U}}(y)=\frac{1}{3}\left[\begin{array}{c}2a_1-a_2+a_3\\-a_1+2 a_2+a_3\\a_1+a_2+2a_3\end{array}\right].$$
On the other hand, we have
\begin{align*}
\operatorname{Proj}_{\mathbb{U}}\nabla f(y)=\operatorname{Proj}_{\mathbb{U}}a=\frac{1}{3} \begin{bmatrix} 2&-1&1\\-1&2&1\\1&1&2\end{bmatrix} \left[\begin{array}{c}a_1\\a_2\\a_3\end{array}\right]=\frac{1}{3}\left[\begin{array}{c}2a_1-a_2+a_3\\-a_1+2 a_2+a_3\\a_1+a_2+2a_3\end{array}\right].
\end{align*}
\end{ex}
In the underdetermined case below, we will work with the Moore--Penrose pseudoinverse $(S^\top)^\dagger$. By \cite[\S 2.2 eq. (3)]{vusmoothness}, if $S$ has full column rank, then
\begin{equation}\label{eq:projsum2}
\operatorname{Proj}_{\mathbb{U}}y=S(S^\top S)^{-1}S^\top y=(S^\top)^\dagger S^\top y.
\end{equation}
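As a quick numerical check of \eqref{eq:projsum2} (our own illustration), both expressions for $\operatorname{Proj}_{\mathbb{U}}$ can be evaluated for the matrix $S$ of the example above and compared with the explicit $\frac{1}{3}$-matrix computed there.

```python
import numpy as np

S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # full column rank; U = span S is 2-dim in R^3

P1 = S @ np.linalg.inv(S.T @ S) @ S.T     # S (S^T S)^{-1} S^T
P2 = np.linalg.pinv(S.T) @ S.T            # (S^T)^dagger S^T
P_expected = np.array([[2, -1, 1],
                       [-1, 2, 1],
                       [1, 1, 2]]) / 3.0  # projection matrix from the example
```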
With this in mind, we are ready to provide an error bound for the underdetermined case.
\begin{thm} \label{thm:under}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(x^0,\overline{\Delta})$ and denote by $L$ the Lipschitz constant of $\nabla^2f$.
Let $\operatorname{\mathcal{X}}=\langle x^0,x^1,\ldots,x^m\rangle$ be an ordered set with radius $\Delta<\overline{\Delta}$. Let $\operatorname{rank} S=m<n$ (underdetermined case) and let $\mathbb{U}=\operatorname{span} S.$ Define $\widehat{S}=S/\Delta$. Then
$$
\|\nabla^c f (\operatorname{\mathcal{X}}) - \nabla f_{\mathbb{U}}(x^0)\| =
\|\operatorname{Proj}_{\mathbb{U}}\nabla^c f(\operatorname{\mathcal{X}})-\operatorname{Proj}_{\mathbb{U}}\nabla f(x^0)\|
\leq \frac{L\sqrt m}{6}\left\|(\widehat{S}^\top)^\dagger\right\|\Delta^2.$$
\end{thm}
\begin{proof} By definition, $\nabla f_{\mathbb{U}}(x^0)=\operatorname{Proj}_{\mathbb{U}}\nabla f(x^0)$. Also, note that every point of $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^{-}$ lies in $\operatorname{aff} \operatorname{\mathcal{X}}$, so $f$ and $f_{\mathbb{U}}$ agree on all sample points. Hence $\nabla^c f(\operatorname{\mathcal{X}})=\nabla^c f_{\mathbb{U}} (\operatorname{\mathcal{X}}),$ and by definition, $\nabla^c f_{\mathbb{U}}(\operatorname{\mathcal{X}})= \operatorname{Proj}_{\mathbb{U}} \nabla^c f(\operatorname{\mathcal{X}}).$ Thus, the first equality holds.
From the definition of Lipschitz continuity, restricting $f$ to $\mathbb{U}=\operatorname{span} S\subseteq\operatorname{\mathbb{R}}^n$ does not alter $\nabla^2f$ from being $L$-Lipschitz. The projection of a point onto $\mathbb{U}$ is given by \eqref{eq:projsum2}, which yields
\begin{align*}
\|\operatorname{Proj}_{\mathbb{U}}\nabla^cf(\operatorname{\mathcal{X}})-\operatorname{Proj}_{\mathbb{U}}\nabla f(x^0)\|&=\|(S^\top)^\dagger S^\top(S^\top)^\dagger\delta^c_f-(S^\top)^\dagger S^\top\nabla f(x^0)\|.
\end{align*}
Since $S$ has full column rank, $S^\top$ has full row rank, which yields $S^\top (S^\top)^\dagger =\operatorname{Id}_m$. Hence,
\begin{align*}
\|\operatorname{Proj}_{\mathbb{U}}\nabla^cf(\operatorname{\mathcal{X}})-\operatorname{Proj}_{\mathbb{U}}\nabla f(x^0)\|&=\|(S^\top)^\dagger\delta^c_f-(S^\top)^\dagger S^\top\nabla f(x^0)\|\\
&\leq\|(S^\top)^\dagger\|\|\delta^c_f-S^\top\nabla f(x^0)\|.
\end{align*}
Using Lemma \ref{lem1}, we have
\begin{align*}
\|\operatorname{Proj}_{\mathbb{U}}\nabla^cf(\operatorname{\mathcal{X}})-\operatorname{Proj}_{\mathbb{U}}\nabla f(x^0)\|&\leq \|(S^\top)^\dagger\|\frac{L\sqrt m}{6}\Delta^3\\
&=\|(\widehat{S}^\top)^\dagger\|\frac{L\sqrt m}{6}\Delta^2.\qedhere
\end{align*}
\end{proof}
We observe that this result is almost identical to that of Theorem \ref{thm:det}, except that it holds in the reduced space rather than in $\operatorname{\mathbb{R}}^n$. With only $m<n$ linearly independent vectors in $S$, this is the best result possible for the underdetermined case.
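The underdetermined bound of Theorem \ref{thm:under} can also be observed numerically: with $m<n$ directions, the GCSG lies in $\mathbb{U}$ and approaches $\operatorname{Proj}_{\mathbb{U}}\nabla f(x^0)$ at rate $O(\Delta^2)$. The function and direction set below are our own illustrative choices.

```python
import numpy as np

def gcsg(f, x0, S):
    """Generalized centred simplex gradient (S^T)^dagger @ delta^c."""
    delta_c = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in S.T])
    return np.linalg.pinv(S.T) @ delta_c

f = lambda y: y[0] ** 2 + np.sin(y[1]) + y[2] ** 3
grad = lambda y: np.array([2 * y[0], np.cos(y[1]), 3 * y[2] ** 2])
x0 = np.array([0.3, 0.1, -0.2])

S0 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])       # m = 2 < n = 3: underdetermined case
P = np.linalg.pinv(S0.T) @ S0.T   # Proj onto U = span S (scaling by Delta cancels)

errs = []
for Delta in [0.1, 0.05]:
    g = gcsg(f, x0, Delta * S0)
    errs.append(np.linalg.norm(g - P @ grad(x0)))
```

Note that `g` itself lies in $\mathbb{U}$, so projecting it changes nothing, matching the first equality in the theorem.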
\section{Calculus rules}\label{sec:calc}
In this section, we provide calculus rules for the GCSG. Throughout this section, all functions have a domain contained in $\operatorname{\mathbb{R}}^n$ and map to $\operatorname{\mathbb{R}}$. The calculus formulae for the GCSG follow directly from the calculus rules for the GSG presented in \cite{hare2020} by using \eqref{eq:altcentred}. Before introducing the calculus rules for the GCSG, let us recall the definition of the \emph{product difference vector} and the calculus rules for the GSG (Table \ref{tab:calcrulesgsg}) introduced in \cite{hare2020}.
\begin{df}[Product difference vector]
Let $\operatorname{\mathcal{X}}=\langle x^0, x^0+d^1, \dots, x^0+d^m \rangle$ be an ordered set of $m+1$ distinct points contained in $\operatorname{dom}(f)$ and $\operatorname{dom}(g)$. The \emph{product difference vector} of $f$ and $g$ over $\operatorname{\mathcal{X}}$ is denoted by $\delta^s_{f|g}(\operatorname{\mathcal{X}})$ and defined by
\[
\delta_{f | g}^s (\operatorname{\mathcal{X}})=\begin{bmatrix}(f(x^0+d^1)-f(x^0))(g(x^0+d^1)-g(x^0))\\(f(x^0+d^2)-f(x^0))(g(x^0+d^2)-g(x^0)) \\\vdots\\(f(x^0+d^m)-f(x^0))(g(x^0+d^m)-g(x^0))\end{bmatrix}.
\]
\end{df}
\begin{table}[ht]
\caption{\textbf{Calculus rules for the GSG}}\label{tab:calcrulesgsg}
\center{
\begin{tabular}{ |p{1.5cm}||p{6.8cm}|p{7.1cm}| }
\hline
\textbf{Rule}& \textbf{Formula} &\textbf{$E^s$}\\
\hline
Product of $2$ & $f(x^0)\nabla^s g(\operatorname{\mathcal{X}})+g(x^0)\nabla^s f(\operatorname{\mathcal{X}})+E^s_{fg}$&$\left (S^\top\right )^\dagger \delta_{f | g}^s$\\
Product of $k$&$\sum_{i=1}^{k}\left(\prod_{j\neq i}f_j (x^0) \right)\nabla^s f_i(\operatorname{\mathcal{X}})+ E^s_{f_1\cdots f_k}$ &$\left ( S^\top \right )^{\dagger} \left ( \delta^s_{f_1\cdots f_k} -
\sum_{i=1}^{k}\left(\prod_{j \neq i}f_j (x^0)\right )\delta^s_{f_i} \right )$\\
Positive power &$k[f(x^0)]^{k-1}\nabla^s f(\operatorname{\mathcal{X}})+E^s_{f^k}$ & $\left ( S^\top \right )^{\dagger}\left ( \sum_{i=1}^{k-1}[f(x^0)]^{k-1-i}\delta^s_{f|f^i} \right )$\\
Negative power &$-k[f(x^0)]^{-k-1} \nabla^s f(\operatorname{\mathcal{X}})-E^s_{f^{-k}}$&$\frac{\left ( S^\top \right )^{\dagger}}{[f(x^0)]^k}\left ( k \delta^s_{\frac{1}{f}|f} - \sum_{i=1}^{k-1}[f(x^0)]^{1+i} \delta^s_{f^{-1}|f^{-i}} \right )$\\
Quotient&$\frac{g(x^0)\nabla^s f(\operatorname{\mathcal{X}})-f(x^0)\nabla^s g(\operatorname{\mathcal{X}})}{[g(x^0)]^2} -E^s_{\frac{f}{g}}$ &$\frac{\left ( S^\top \right )^{\dagger}}{g(x^0)} \delta^s_{\frac{f}{g}|g}$ \\
\hline
\end{tabular}}
\end{table}
Note that the GCSG is a linear operator: the GSG is a linear operator \cite[Proposition 9]{Regis2015}, and the GCSG is the average of two GSGs. Using linearity, we can adapt the product rule for the GSG to a product rule for the GCSG.
\begin{prop}[GCSG product rule]\label{prop:productrule} Let $\operatorname{\mathcal{X}}=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle$ be an ordered set of $m+1$ points such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^{-}$ is in $\operatorname{dom}(f)$ and $\operatorname{dom}(g)$. Then
\begin{equation*}\nabla^c(fg)(\operatorname{\mathcal{X}})=f(x^0)\nabla^c g(\operatorname{\mathcal{X}})+g(x^0)\nabla^c f(\operatorname{\mathcal{X}})+E_{fg}^c(\operatorname{\mathcal{X}}),\end{equation*}
where $E_{fg}^c(\operatorname{\mathcal{X}})=\frac{1}{2}\left(E^s_{fg}(\operatorname{\mathcal{X}})+E^s_{fg} (\operatorname{\mathcal{X}}^-)\right)$.
\end{prop}
\begin{proof}
We have
\footnotesize\begin{align*}
\nabla^c (fg)(\operatorname{\mathcal{X}})&=\frac{1}{2}(\nabla^s(fg)(\operatorname{\mathcal{X}})+\nabla^s (fg)(\operatorname{\mathcal{X}}^-))\\
&=\frac{1}{2}(f(x^0)\nabla^s g(\operatorname{\mathcal{X}})+g(x^0)\nabla^s f(\operatorname{\mathcal{X}})+E^s_{fg}(\operatorname{\mathcal{X}})+f(x^0)\nabla^s g(\operatorname{\mathcal{X}}^-)+g(x^0)\nabla^s f(\operatorname{\mathcal{X}}^-)+E^s_{fg}(\operatorname{\mathcal{X}}^-))\\
&=f(x^0)\frac{1}{2}(\nabla^s g(\operatorname{\mathcal{X}})+\nabla^s g(\operatorname{\mathcal{X}}^-))+g(x^0)\frac{1}{2}(\nabla^s f(\operatorname{\mathcal{X}})+\nabla^s f(\operatorname{\mathcal{X}}^-))+\frac{1}{2}(E^s_{fg} (\operatorname{\mathcal{X}})+E^s_{fg} (\operatorname{\mathcal{X}}^-))\\
&=f(x^0)\nabla^c g(\operatorname{\mathcal{X}})+g(x^0)\nabla^c f(\operatorname{\mathcal{X}})+E_{fg}^c(\operatorname{\mathcal{X}}).\qedhere
\end{align*}
\normalsize
\end{proof}
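As a numerical sanity check of Proposition \ref{prop:productrule} (our own illustration), the error term $E^c_{fg}$ can be computed from the GSG product-rule error $E^s_{fg}=(S^\top)^\dagger\delta^s_{f|g}$ of Table \ref{tab:calcrulesgsg}, and the identity then holds to floating-point precision.

```python
import numpy as np

def gsg(f, x0, S):
    delta_s = np.array([f(x0 + d) - f(x0) for d in S.T])
    return np.linalg.pinv(S.T) @ delta_s

def gcsg(f, x0, S):
    # GCSG as the average of two GSGs; S(X^-) = -S.
    return 0.5 * (gsg(f, x0, S) + gsg(f, x0, -S))

def E_s_fg(f, g, x0, S):
    """GSG product-rule error term (S^T)^dagger @ delta^s_{f|g}."""
    d_fg = np.array([(f(x0 + d) - f(x0)) * (g(x0 + d) - g(x0)) for d in S.T])
    return np.linalg.pinv(S.T) @ d_fg

f = lambda y: y[0] ** 2 + y[1]
g = lambda y: np.exp(y[0] - y[1])
fg = lambda y: f(y) * g(y)
x0 = np.array([0.4, -0.1])
S = 0.2 * np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, -1.0]])

lhs = gcsg(fg, x0, S)
rhs = (f(x0) * gcsg(g, x0, S) + g(x0) * gcsg(f, x0, S)
       + 0.5 * (E_s_fg(f, g, x0, S) + E_s_fg(f, g, x0, -S)))
```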
The averaging technique used in Proposition \ref{prop:productrule} can be used to create calculus rules for the product of $k$ functions, positive powers, and negative powers. We omit proofs for the next three results, as they are straightforward.
\begin{cor}[GCSG product rule, $k$ functions]
Let $f_i:\operatorname{dom}(f_i) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ for all $i\in\{1,2,\ldots,k\}$, $k\geq 2$ and let $\operatorname{\mathcal{X}}=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle$ be an ordered set of $m+1$ points such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^{-}$ is in $\operatorname{dom}(f_i)$ for all $i$. Then
\begin{equation*}\nabla^c(f_1\cdots f_k)(\operatorname{\mathcal{X}})=\sum\limits_{i=1}^k\prod\limits_{j\neq i}f_j(x^0)\nabla^c f_i(\operatorname{\mathcal{X}})+E_{f_1\cdots f_k}^c(\operatorname{\mathcal{X}}),\end{equation*}
where
\begin{equation*}E_{f_1\cdots f_k}^c (\operatorname{\mathcal{X}})=\frac{1}{2}\left(E_{f_1\cdots f_k}^s (\operatorname{\mathcal{X}})+E_{f_1\cdots f_k}^s (\operatorname{\mathcal{X}}^-)\right).\end{equation*}
\end{cor}
\begin{cor}[GCSG power rule]
Let $\operatorname{\mathcal{X}}=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle$ be an ordered set of $m+1$ points such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^{-}$ is in $\operatorname{dom}(f)$. Let $k\in\operatorname{\mathbb{N}}$. Then
\begin{equation*}\nabla^c f^k(\operatorname{\mathcal{X}})=k(f(x^0))^{k-1}\nabla^c f(\operatorname{\mathcal{X}})+E_{f^k}^c(\operatorname{\mathcal{X}}),\end{equation*}
where
\begin{equation*}E_{f^k}^c(\operatorname{\mathcal{X}})=\frac{1}{2}\big(E_{f^k}^s(\operatorname{\mathcal{X}})+E_{f^k}^s(\operatorname{\mathcal{X}}^-)\big).\end{equation*}
\end{cor}
\begin{prop}[GCSG quotient rule] \label{prop:gcsgquotient}
Let $\operatorname{\mathcal{X}}=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle$ be an ordered set of $m+1$ points such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^-$ is in $\operatorname{dom}(f) \cap \operatorname{dom}(g)$ and for which $g(x^0)$, $g(x^0\pm d^1)$, $\ldots$, $g(x^0\pm d^m)$ are all nonzero. Then
\begin{equation*}\nabla^c\left(\frac{f}{g}\right)(\operatorname{\mathcal{X}})=\frac{g(x^0)\nabla^c f(\operatorname{\mathcal{X}})-f(x^0)\nabla^c g(\operatorname{\mathcal{X}})}{g^2(x^0)}-E_{\frac{f}{g}}^c (\operatorname{\mathcal{X}}),\end{equation*}
where
\begin{equation*}E_{\frac{f}{g}}^c (\operatorname{\mathcal{X}})=\frac{1}{2}\Big(E_{\frac{f}{g}}^s (\operatorname{\mathcal{X}})+E_{\frac{f}{g}}^s (\operatorname{\mathcal{X}}^-)\Big).\end{equation*}
\end{prop}
\begin{cor}[GCSG power rule, negative exponent] \label{cor:gcsgpowerneg}
Let $\operatorname{\mathcal{X}}=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle$ be an ordered set of $m+1$ points such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^-$ is in $\operatorname{dom}(f)$ and for which $f(x^0)$, $f(x^0\pm d^1), \ldots, f(x^0\pm d^m)$ are all nonzero. Let $k\in\operatorname{\mathbb{N}}$. Then
\begin{equation*}\nabla^c f^{-k}(\operatorname{\mathcal{X}})=-k f^{-k-1}(x^0)\nabla^c f(\operatorname{\mathcal{X}})-E_{f^{-k}}^c (\operatorname{\mathcal{X}}),\end{equation*}
where
\begin{equation*}E_{f^{-k}}^c (\operatorname{\mathcal{X}})=\frac{1}{2}\left(E_{f^{-k}}^s(\operatorname{\mathcal{X}})+E_{f^{-k}}^s (\operatorname{\mathcal{X}}^-)\right).\end{equation*}
\end{cor}
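The centred-difference machinery underlying these rules is easy to test numerically. Below is a minimal Python sketch (the helper name \texttt{gcsg} and the test data are ours, not from the text) of the defining formula $\nabla^c f(\operatorname{\mathcal{X}})=(S^\top)^\dagger\delta^c_f(\operatorname{\mathcal{X}})$; since centred differences are exact on quadratics, the GCSG recovers the gradient of a quadratic exactly.

```python
import numpy as np

def gcsg(f, x0, D):
    """Generalized centred simplex gradient: (S^T)^dagger @ delta_f^c,
    where S = [d^1 ... d^m] and delta_f^c has entries
    (f(x0 + d^i) - f(x0 - d^i)) / 2."""
    S = np.column_stack(D)
    delta = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in D])
    return np.linalg.pinv(S.T) @ delta

# A quadratic test function: centred differences are exact here.
f = lambda y: y[0]**2 + y[1]**2
x0 = np.array([1.0, 2.0])
D = [np.array([0.1, 0.0]), np.array([0.0, 0.1])]
grad = gcsg(f, x0, D)   # exact gradient (2, 4)
```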
\noindent Our final calculus rule for the GCSG is the chain rule. For this, we require some additional notation. Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^p\to\operatorname{\mathbb{R}}$ and $g: \operatorname{dom}(g) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}^p$, where
\begin{equation*}g(y)=\left[\begin{array}{c}g_1(y)\\g_2(y)\\\vdots\\g_p(y)\end{array}\right]\in\operatorname{\mathbb{R}}^p.\end{equation*}
Let $\operatorname{\mathcal{X}}=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle$ be an ordered set of $m+1$ distinct points such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^-$ is in $\operatorname{dom}(g)$ and define
\begin{align*}
g(\operatorname{\mathcal{X}})&=\langle g(x^0),g(x^0+d^1),\ldots,g(x^0+d^m)\rangle\\
&=\langle g(x^0),g(x^0)+h^1,\ldots,g(x^0)+h^m\rangle,\\
g(\operatorname{\mathcal{X}})^{-}&= \langle g(x^0), g(x^0)-h^1, \ldots, g(x^0)-h^m \rangle
\end{align*}
where $h^i=g(x^0+d^i)-g(x^0)$ for all $i\in\{1,2,\ldots,m\}$, to be ordered sets of $m+1$ points such that $g(\operatorname{\mathcal{X}}) \cup g(\operatorname{\mathcal{X}})^-$ is in $\operatorname{dom}(f)$. Denote
\begin{align*}
S_g=S(g(\operatorname{\mathcal{X}}))&=[g(x^0+d^1)-g(x^0)~~\cdots~~g(x^0+d^m)-g(x^0)]\\
&=[h^1~~\cdots~~h^m]\in\operatorname{\mathbb{R}}^{p\times m}.
\end{align*}
Now we introduce the generalized centred simplex Jacobian of $g$ over $\operatorname{\mathcal{X}}$.
\begin{df}[Generalized centred simplex Jacobian] Define the function $g: \operatorname{dom}(g) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}^p$, $y\mapsto[g_1(y)~~g_2(y)~~\cdots~~g_p(y)]^\top$. Let $\operatorname{\mathcal{X}}$ be an ordered set of $m+1$ points such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^-$ is in $\operatorname{dom}(g)$. Then the \emph{generalized centred simplex Jacobian} of $g$ over $\operatorname{\mathcal{X}}$, denoted by $J^c_g(\operatorname{\mathcal{X}})$, is the $p\times n$ real matrix defined by
\begin{equation*}
J^c_g (\operatorname{\mathcal{X}})=\left[\begin{array}{c}\nabla^c g_1(\operatorname{\mathcal{X}})^\top\\\nabla^c g_2(\operatorname{\mathcal{X}})^\top\\\vdots\\\nabla^c g_p(\operatorname{\mathcal{X}})^\top\end{array}\right].
\end{equation*}
\end{df}
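As a sketch of this definition (our own illustration, with hypothetical helper names), the generalized centred simplex Jacobian can be assembled by stacking the GCSGs of the components of $g$ as rows:

```python
import numpy as np

def gcsg(f, x0, D):
    S = np.column_stack(D)
    delta = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in D])
    return np.linalg.pinv(S.T) @ delta

def centred_simplex_jacobian(g_components, x0, D):
    """Stack the GCSG of each component g_i as a row: a p x n matrix."""
    return np.vstack([gcsg(gi, x0, D) for gi in g_components])

# g : R^2 -> R^3, componentwise
g = [lambda y: y[1] - 2*y[0],
     lambda y: y[0] + y[1],
     lambda y: y[0]*y[1] + y[1]]
x0 = np.array([1.0, 2.0])
D = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
J = centred_simplex_jacobian(g, x0, D)   # rows (-2, 1), (1, 1), (2, 2)
```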
\noindent With these terms defined, we are ready to present the GCSG chain rule. Note that
\begin{align*}
\delta^c_f (g(\operatorname{\mathcal{X}}))&=\frac{1}{2}\left[\begin{array}{c}f(g(x^0)+h^1)-f(g(x^0)-h^1)\\\vdots\\f(g(x^0)+h^m)-f(g(x^0)-h^m)\end{array}\right]\\
\neq \delta^c_{f \circ g} (\operatorname{\mathcal{X}})&=\frac{1}{2}\left[\begin{array}{c}f(g(x^0+d^1))-f(g(x^0-d^1))\\\vdots\\f(g(x^0+d^m))-f(g(x^0-d^m))\end{array}\right].
\end{align*}
For this reason, the chain rule for the GCSG cannot be obtained by an approach similar to the one used for the GSG \cite[Theorem 15]{hare2020}.
\begin{prop}[GCSG chain rule]
Let the functions $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^p\to\operatorname{\mathbb{R}}$, $g:\operatorname{dom}(g) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}^p$ and let $\operatorname{\mathcal{X}}=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle$ be an ordered set of $m+1$ points such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^-$ is in $\operatorname{dom}(g)$ and $g(\operatorname{\mathcal{X}}) \cup g(\operatorname{\mathcal{X}})^-$ is in $\operatorname{dom}(f)$. Then
\begin{equation*}
\nabla^c (f\circ g)(\operatorname{\mathcal{X}})=J^c_g (\operatorname{\mathcal{X}})^\top\nabla^c f(g(\operatorname{\mathcal{X}}))-E,
\end{equation*}
where
\begin{align*}
E&=(S^\top)^\dagger\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger\widetilde{E}+(S^\top)^\dagger\widehat{E}\delta^c_f(g(\operatorname{\mathcal{X}}))-(S^\top)^\dagger\widehat{E}\widetilde{E},\\
\widetilde{E}&=\delta^c_f(g(\operatorname{\mathcal{X}}))-\delta^c_{f \circ g}(\operatorname{\mathcal{X}}),\\
\widehat{E}&=\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger-\operatorname{Id}.
\end{align*}
\end{prop}
\begin{proof}
Since $\widehat{E}=\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger-\operatorname{Id}$, we have $\operatorname{Id}=\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger-\widehat{E}$, so
\begin{align*}
\nabla^c (f\circ g)(\operatorname{\mathcal{X}})&=(S^\top)^\dagger\delta^c_{f \circ g}(\operatorname{\mathcal{X}})\\
&=(S^\top)^\dagger\big(\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger-\widehat{E}\big)\delta^c_{f \circ g}(\operatorname{\mathcal{X}}).
\end{align*}Substituting $\delta^c_{f \circ g}(\operatorname{\mathcal{X}})=\delta^c_f(g(\operatorname{\mathcal{X}}))-\widetilde{E}$ yields
\begin{equation*}
\nabla^c(f\circ g)(\operatorname{\mathcal{X}})=(S^\top)^\dagger\big(\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger-\widehat{E}\big)\big(\delta^c_f(g(\operatorname{\mathcal{X}}))-\widetilde{E}\big).
\end{equation*}
Expanding the right-hand side, we obtain
\small\begin{align*}
\nabla^c (f\circ g)(\operatorname{\mathcal{X}})&=(S^\top)^\dagger\left[\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger\delta^c_f (g(\operatorname{\mathcal{X}}))-\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger\widetilde{E}-\widehat{E}\delta^c_f(g(\operatorname{\mathcal{X}}))+\widehat{E}\widetilde{E}\right]\\
&=J^c_g(\operatorname{\mathcal{X}})^\top\nabla^c f(g(\operatorname{\mathcal{X}}))-(S^\top)^\dagger\delta^c_g(\operatorname{\mathcal{X}})(S_g^\top)^\dagger\widetilde{E}-(S^\top)^\dagger\widehat{E}\delta^c_f(g(\operatorname{\mathcal{X}}))+(S^\top)^\dagger\widehat{E}\widetilde{E}\\
&=J^c_g(\operatorname{\mathcal{X}})^\top\nabla^c f(g(\operatorname{\mathcal{X}}))-E.\qedhere
\end{align*}\normalsize
\end{proof}
\section{The generalized centred simplex calculus gradient} \label{sec:centredsimplexcalculusgradient}
In the previous section, we developed calculus rules for the GCSG, each consisting of the classical calculus rule plus an error term $E$. In this section, we show that the $E$ terms can be dropped from all of these rules, creating new gradient approximation techniques whose error bounds remain $O(\Delta^2)$. We name these techniques the \emph{generalized centred simplex calculus gradient (GCSCG)}.
Table \ref{tab1} below summarizes the calculus results of Section \ref{sec:calc}.
\begin{table}[H]
\caption{Calculus rules for the GCSG}\label{tab1}
\begin{tabular}{|l||l|l|l|}\hline
Rule&Formula&$E^c$&$E^s$\\\hline
\scriptsize\pbox{2cm}{Product\\$fg$}&\scriptsize $f(x^0)\nabla^c g+g(x^0)\nabla^c f+E_{fg}^c$&\scriptsize $\frac{1}{2}\left(E^s_{fg}(\operatorname{\mathcal{X}})+E^s_{fg}(\operatorname{\mathcal{X}}^-)\right)$&\scriptsize $(S^\top)^\dagger\delta^s_{f|g}$\\\hline
\scriptsize\pbox{2cm}{Product\\ $f_1\Compactcdots f_k$}&\scriptsize $\sum\limits_{i=1}^k\prod\limits_{j\neq i}f_j(x^0)\nabla^c f_i+E_{f_1\Compactcdots f_k}^c$&\scriptsize $\frac{1}{2}\left(E_{f_1\Compactcdots f_k}^s(\operatorname{\mathcal{X}})+E_{f_1\Compactcdots f_k}^s(\operatorname{\mathcal{X}}^-)\right)$&\scriptsize $(S^\top)^\dagger\left(\delta_{f_1\Compactcdots f_k}^s-\sum\limits_{i=1}^k\prod\limits_{j\neq i}f_j(x^0)\delta^s_{f_i} \right)$\\\hline
\scriptsize Power&\scriptsize $kf^{k-1}(x^0)\nabla^c f+E_{f^k}^c$&\scriptsize $\frac{1}{2}\left(E_{f^k}^s(\operatorname{\mathcal{X}})+E_{f^k}^s (\operatorname{\mathcal{X}}^-)\right)$&\scriptsize$(S^\top)^\dagger\left(\sum\limits_{i=1}^{k-1}f^{k-1-i}(x^0)\delta_{f|f^{-i}}^s\right)$ \\\hline
\scriptsize Quotient&\scriptsize $\frac{g(x^0)\nabla^c f-f(x^0)\nabla^c g}{g^2(x^0)}-E_{\frac{f}{g}}^c$&\scriptsize $\frac{1}{2}\left(E_{\frac{f}{g}}^s(\operatorname{\mathcal{X}})+E_{\frac{f}{g}}^s (\operatorname{\mathcal{X}}^-)\right)$&\scriptsize$\frac{(S^\top)^\dagger}{g(x^0)}\delta_{\frac{f}{g}|g}^s$ \\\hline
\scriptsize\pbox{2cm}{Negative\\power}&\scriptsize$-kf^{-k-1}(x^0)\nabla^c f-E_{f^{-k}}^c$&\scriptsize$\frac{1}{2}\left(E_{f^{-k}}^s(\operatorname{\mathcal{X}})+E_{f^{-k}}^s (\operatorname{\mathcal{X}}^-)\right)$&\scriptsize$\frac{(S^\top)^\dagger}{f^k(x^0)}\left(k\delta_{\frac{1}{f}|f}^s-\sum\limits_{i=1}^{k-1}f^{1+i}(x^0)\delta_{f^{-1}|f^{-i}}^s\right)$\\\hline
\scriptsize Chain&\scriptsize $(J^c_g)^\top\nabla^c f(g(\operatorname{\mathcal{X}}))-E_{f\circ g}^c$&\scriptsize\pbox{4cm}{$(S^\top)^\dagger\delta^c_g (\operatorname{\mathcal{X}})(S_g^\top)^\dagger\widetilde{E}+$\\$(S^\top)^\dagger\widehat{E}\delta^c_f (g(\operatorname{\mathcal{X}}))-(S^\top)^\dagger\widehat{E}\widetilde{E}$} &\scriptsize$(S^\top)^\dagger\left(S_g^\top(S_g^\top)^\dagger-\operatorname{Id}\right)\delta^s_f (g(\operatorname{\mathcal{X}}))$ \\\hline
\end{tabular}
\end{table}
\noindent We write $\nabla^{cc}$ for the GCSCG. We now formalize its defining formulae.
\begin{df}[GCSCG]Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}, g: \operatorname{dom}(g)\subseteq \operatorname{\mathbb{R}}^n \to \operatorname{\mathbb{R}}$ and let $\operatorname{\mathcal{X}}=\langle x^0,x^0+d^1,\ldots,x^0+d^m\rangle$ be an ordered set such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^-$ is in $\operatorname{dom}(f) \cap \operatorname{dom}(g)$.
The GCSCG of $fg$ over $\operatorname{\mathcal{X}}$ is
\begin{equation}\label{eq:cc1}\nabla^{cc} (fg)(\operatorname{\mathcal{X}})=f(x^0)\nabla^c g(\operatorname{\mathcal{X}})+g(x^0)\nabla^c f(\operatorname{\mathcal{X}}).\end{equation}
Let $f_i:\operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ for all $i\in\{1,\ldots,k\}$, $k\geq 2,$ and let $\operatorname{\mathcal{X}}$ be an ordered set such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^-$ is in $\operatorname{dom}(f_i)$ for all $i$. The GCSCG of $f_1\cdots f_k$ over $\operatorname{\mathcal{X}}$ is
\begin{equation}\label{eq:cc2}\nabla^{cc} (f_1\cdots f_k)(\operatorname{\mathcal{X}})=\sum\limits_{i=1}^k\prod\limits_{j\neq i}f_j(x^0)\nabla^c f_i(\operatorname{\mathcal{X}}).\end{equation}
The GCSCG of $f^k$ over $\operatorname{\mathcal{X}}$ is
\begin{equation}\label{eq:cc3}\nabla^{cc} f^k(\operatorname{\mathcal{X}})=kf^{k-1}(x^0)\nabla^c f(\operatorname{\mathcal{X}}),\end{equation}where $f(x^0)$ is nonzero whenever $k-1<0$.\medskip\\
Let $g(x^0)\neq0$. The GCSCG of $\frac{f}{g}$ over $\operatorname{\mathcal{X}}$ is
\begin{equation}\label{eq:cc4}\nabla^{cc} \left(\frac{f}{g}\right)(\operatorname{\mathcal{X}})=\frac{g(x^0)\nabla^c f(\operatorname{\mathcal{X}})-f(x^0)\nabla^c g(\operatorname{\mathcal{X}})}{g^2(x^0)}.\end{equation}
Let $a\in (0, \infty)$. The GCSCG of $a^f$ over $\operatorname{\mathcal{X}}$ is
\begin{equation}\label{eq:cc6}\nabla^{cc} a^{f(\operatorname{\mathcal{X}})}=a^{f(x^0)}\nabla^c f(\operatorname{\mathcal{X}})\ln a.\end{equation}
Let $f(x^0)\neq0$ and $a\in (0, \infty)$. The GCSCG of $\log_af$ over $\operatorname{\mathcal{X}}$ is
\begin{equation}\label{eq:cc7}\nabla^{cc} \log_af(\operatorname{\mathcal{X}})=\frac{1}{f(x^0)\ln a}\nabla^c f(\operatorname{\mathcal{X}}).\end{equation}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^p\to\operatorname{\mathbb{R}}$ and $g:\operatorname{dom}(g)\subseteq\operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}^p$. Let $\operatorname{\mathcal{X}}$ be an ordered set such that $\operatorname{\mathcal{X}} \cup \operatorname{\mathcal{X}}^-$ is in $\operatorname{dom}(g)$ and $g(\operatorname{\mathcal{X}}) \cup g(\operatorname{\mathcal{X}})^-$ is in $\operatorname{dom}(f).$ The GCSCG of $f\circ g$ over $\operatorname{\mathcal{X}}$ is
\begin{equation}\label{eq:cc5}\nabla^{cc} (f\circ g)(\operatorname{\mathcal{X}})=J^c_g (\operatorname{\mathcal{X}})^\top\nabla^c f(g(\operatorname{\mathcal{X}})).\end{equation}
\end{df}
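The formulae above are straightforward to implement. The following Python sketch (helper names are ours, not from the text) applies the GCSCG product rule \eqref{eq:cc1} to a linear and a quadratic factor; since both factors are polynomials of degree at most two, the approximation is exact here.

```python
import numpy as np

def gcsg(f, x0, D):
    S = np.column_stack(D)
    delta = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in D])
    return np.linalg.pinv(S.T) @ delta

def gcscg_product(f, g, x0, D):
    """GCSCG product rule: f(x0) * grad^c g + g(x0) * grad^c f, no E term."""
    return f(x0) * gcsg(g, x0, D) + g(x0) * gcsg(f, x0, D)

f = lambda y: y[0] + 2*y[1]     # linear
g = lambda y: y[0] * y[1]       # quadratic
x0 = np.array([1.0, 1.0])
D = [np.array([0.1, 0.0]), np.array([0.0, 0.1])]
approx = gcscg_product(f, g, x0, D)
# True gradient of (y1 + 2 y2) * y1 * y2 at (1, 1) is (4, 5).
```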
We point out that the GCSCG is less restrictive than the GCSG. For instance, \eqref{eq:cc3} only requires $f(x^0)$ to be nonzero when $k-1<0$, which is not sufficient in Corollary \ref{cor:gcsgpowerneg}. The quotient rule presented in \eqref{eq:cc4} requires only $g(x^0)$ to be nonzero, whereas Proposition \ref{prop:gcsgquotient} requires all of $g(x^0), g(x^0\pm d^1), \dots, g(x^0\pm d^m)$ to be nonzero. Lastly, the GCSG $\nabla^c\ln f(\operatorname{\mathcal{X}})$ requires $f$ to be positive for all points in $\operatorname{\mathcal{X}}$ and $\operatorname{\mathcal{X}}^-$, so that $\delta_{\ln f}^c(\operatorname{\mathcal{X}})$ is well-defined. However, only $f(x^0)$ must be nonzero in \eqref{eq:cc7} and $f$ is not restricted at any other point in $\operatorname{\mathcal{X}}$ or $\operatorname{\mathcal{X}}^-$.\par The preceding seven equations of approximate gradients are summarized in Table \ref{tab2} below for quick reference.
\begin{table}[H]\centering
\caption{Calculus rules for the GCSCG}\label{tab2}
\begin{tabular}{|l||l l|}\hline
Rule&Formula $\nabla^{cc}$&\\\hline
Product $fg$&$f(x^0)\nabla^c g(\operatorname{\mathcal{X}})+g(x^0)\nabla^c f(\operatorname{\mathcal{X}})$&\eqref{eq:cc1}\\\hline
Product $f_1\Compactcdots f_k$&$\sum\limits_{i=1}^k\prod\limits_{j\neq i}f_j(x^0)\nabla^c f_i(\operatorname{\mathcal{X}})$&\eqref{eq:cc2}\\\hline
Power&$kf^{k-1}(x^0)\nabla^c f(\operatorname{\mathcal{X}})$&\eqref{eq:cc3}\\\hline
Quotient&$\frac{g(x^0)\nabla^c f(\operatorname{\mathcal{X}})-f(x^0)\nabla^c g(\operatorname{\mathcal{X}})}{g^2(x^0)}$&\eqref{eq:cc4}\\\hline
Exponential&$a^{f(x^0)}\nabla^c f(\operatorname{\mathcal{X}})\ln a$&\eqref{eq:cc6}\\\hline
Logarithmic&$\frac{1}{f(x^0)\ln a}\nabla^c f(\operatorname{\mathcal{X}})$&\eqref{eq:cc7}\\\hline
Chain&$J^c_g (\operatorname{\mathcal{X}})^\top\nabla^c f(g(\operatorname{\mathcal{X}}))$&\eqref{eq:cc5}\\\hline
\end{tabular}
\end{table}
\noindent The next step is to show that the GCSCG has controlled error.
\subsection{Error bounds for the GCSCG}
In this section, we demonstrate that the GCSCG is a valid approximation method, in the sense that the error between each approximation and the true gradient at $x^0$ admits an explicit bound. Furthermore, we show that these error bounds are all $O(\Delta^2)$. We provide examples along the way to illustrate the accuracy gained by using the GCSCG. The four propositions below follow from applying Theorems \ref{thm:det} and \ref{thm:under} to the appropriate results from Section \ref{sec:calc}. We prove Proposition \ref{prop:nccfgerrorbound} as a demonstration and omit the proofs of the other three results. In the following propositions, we use $\mathbb{U}=\operatorname{span} S \subseteq \operatorname{\mathbb{R}}^n.$ Note that if $S$ has full row rank, then $\mathbb{U}=\operatorname{\mathbb{R}}^n$ and $f_\mathbb{U}=f.$
\begin{prop}[$\nabla^{cc} (fg)$ error bound] \label{prop:nccfgerrorbound}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}, g:\operatorname{dom}(g) \subseteq \operatorname{\mathbb{R}}^n \to \operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(x^0, \overline{\Delta})$ and denote by $L_f$ and $L_g$ the Lipschitz constants of $\nabla^2 f$ and $\nabla^2 g$.
Let $\operatorname{\mathcal{X}}=\langle x^0, x^1, \ldots, x^m \rangle$ be an ordered set with radius $\Delta<\overline{\Delta}$ such that $S$ has full rank. Let $\mathbb{U}=\operatorname{span} S$. Then
\begin{equation*}
\|\nabla^{cc} (fg)(\operatorname{\mathcal{X}})-\nabla(fg)_{\mathbb{U}}(x^0)\|\leq\frac{\sqrt m}{6}\big(L_g|f(x^0)|+L_f|g(x^0)|\big)\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2.
\end{equation*}
\end{prop}
\begin{proof}
Note that $f(x^i)=f_{\mathbb{U}}(x^i)$ and $g(x^i)=g_{\mathbb{U}}(x^i)$ for all $i \in \{0, 1, \ldots, m\}.$ We have
\begin{align*}
&\|\nabla^{cc}(fg)(\operatorname{\mathcal{X}})-\nabla(fg)_{\mathbb{U}}(x^0)\|\\
=&\|f(x^0)\nabla^c g(\operatorname{\mathcal{X}})+g(x^0)\nabla^c f(\operatorname{\mathcal{X}})-f_{\mathbb{U}}(x^0)\nabla g_{\mathbb{U}}(x^0)-g_{\mathbb{U}}(x^0)\nabla f_{\mathbb{U}}(x^0)\|\\
\leq&|f(x^0)|\|\nabla^c g(\operatorname{\mathcal{X}})-\nabla g_{\mathbb{U}}(x^0)\|+|g(x^0)|\|\nabla^c f(\operatorname{\mathcal{X}})-\nabla f_\mathbb{U}(x^0)\|\\
\leq&\frac{\sqrt m}{6}\left(L_g|f(x^0)|+L_f|g(x^0)|\right)\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2
\end{align*}
by Theorem \ref{thm:det} or Theorem \ref{thm:under} as appropriate.
\end{proof}
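The $O(\Delta^2)$ behaviour of this bound can be observed numerically. In the sketch below (our own; the test functions are arbitrary $\mathcal{C}^3$ choices, not from the text), halving $\Delta$ reduces the GCSCG product-rule error by roughly a factor of four.

```python
import numpy as np

def gcsg(f, x0, D):
    S = np.column_stack(D)
    delta = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in D])
    return np.linalg.pinv(S.T) @ delta

# Non-quadratic C^3 test functions, so the error bound is nontrivial.
f = lambda y: np.sin(y[0]) * np.cos(y[1])
g = lambda y: np.exp(y[0] - y[1])
x0 = np.array([0.3, 0.7])

def grad_fg(y):
    # Hand-computed gradient of (f*g)(y) = sin(y1) cos(y2) exp(y1 - y2).
    e = np.exp(y[0] - y[1])
    return np.array([e * np.cos(y[1]) * (np.cos(y[0]) + np.sin(y[0])),
                     -e * np.sin(y[0]) * (np.sin(y[1]) + np.cos(y[1]))])

errors = []
for Delta in (0.1, 0.05):
    D = [np.array([Delta, 0.0]), np.array([0.0, Delta])]
    approx = f(x0) * gcsg(g, x0, D) + g(x0) * gcsg(f, x0, D)
    errors.append(np.linalg.norm(approx - grad_fg(x0)))
ratio = errors[0] / errors[1]   # close to 4 for an O(Delta^2) method
```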
\begin{prop}[$\nabla^{cc} (f_1\Compactcdots f_k)$ error bound]\label{prop:ccproduct}
Let $f_i:\operatorname{dom}(f_i) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(x^0, \overline{\Delta})$ and denote by $L_i$ the Lipschitz constants of $\nabla^2 f_i$ for each $i$.
Let $\operatorname{\mathcal{X}}=\langle x^0, x^1, \ldots, x^m \rangle$ be an ordered set with radius $\Delta<\overline{\Delta}$ such that $S$ has full rank. Let $\mathbb{U}= \operatorname{span} S.$ Then
\begin{equation*}\|\nabla^{cc} (f_1\Compactcdots f_k)(\operatorname{\mathcal{X}})-\nabla(f_1\Compactcdots f_k)_\mathbb{U}(x^0)\|\leq\frac{\sqrt m}{6}\sum\limits_{i=1}^k\prod\limits_{j\neq i}|f_j(x^0)|L_i\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2.\end{equation*}
\end{prop}
\noindent In some situations, the error bound above is zero. Corollary \ref{cor:ccproduct} below gives sufficient conditions for the GCSCG $\nabla^{cc}(f_1\Compactcdots f_k)$ to be exact.
\begin{cor}\label{cor:ccproduct}Let the assumptions of Proposition \ref{prop:ccproduct} hold. If any of the following holds:
\begin{itemize}
\item[(a)]$f_i(x^0)=f_j(x^0)=0$ for some $i,j\in\{1,\ldots,k\}, i\neq j$;
\item[(b)]$f_i$ is a polynomial of order less than three for all $i\in\{1,\ldots,k\}$;
\item[(c)]$f_i$ is a polynomial of order less than three and $f_i(x^0)=0$ for some $i\in\{1,\ldots,k\}$,
\end{itemize}then
\begin{equation*}
\nabla^{cc} (f_1\Compactcdots f_k)(\operatorname{\mathcal{X}})=\nabla(f_1\Compactcdots f_k)_\mathbb{U}(x^0).
\end{equation*}
\end{cor}
\noindent When $S$ has full rank, Corollary \ref{cor:ccproduct} tells us that if just one of the $k$ functions is a polynomial of degree at most two and equals zero at $x^0$, then $\nabla^{cc} (f_1\Compactcdots f_k)(\operatorname{\mathcal{X}})$ equals $\nabla (f_1\Compactcdots f_k)_\mathbb{U}(x^0),$ regardless of the nature of the other $k-1$ functions. The same conclusion holds if just two of the $k$ functions vanish at $x^0$, no matter the form of the other functions.
\begin{prop}[$\nabla^{cc} f^k$ error bound]\label{prop:power}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ be $\mathcal{C}^3$ on $B(x^0, \overline{\Delta})$ and denote by $L$ the Lipschitz constant of $\nabla^2 f$. Let $f(x^0)\neq0$ whenever $k-1<0$.
Let $\operatorname{\mathcal{X}}=\langle x^0, x^1, \ldots, x^m \rangle$ be an ordered set with radius $\Delta<\overline{\Delta}$ such that $S$ has full rank. Let $\mathbb{U}=\operatorname{span} S$ and $k \in \operatorname{\mathbb{R}}$. Then
\begin{equation*}
\|\nabla^{cc} (f^k)(\operatorname{\mathcal{X}})-\nabla (f^k)_\mathbb{U}(x^0)\|\leq\frac{L\sqrt m}{6}|k||f(x^0)|^{k-1}\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2.
\end{equation*}
\end{prop}
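As an illustrative sketch of Proposition \ref{prop:power} (our own example, with hypothetical helper names), for a quadratic $f$ we have $L=0$, so the GCSCG of $f^k$ matches the true gradient:

```python
import numpy as np

def gcsg(f, x0, D):
    S = np.column_stack(D)
    delta = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in D])
    return np.linalg.pinv(S.T) @ delta

f = lambda y: 1.0 + y[0] * y[1]   # quadratic, so L = 0 and the bound is zero
k = 3
x0 = np.array([1.0, 2.0])
D = [np.array([0.1, 0.0]), np.array([0.0, 0.1])]
approx = k * f(x0)**(k - 1) * gcsg(f, x0, D)           # GCSCG of f^k
exact = k * f(x0)**(k - 1) * np.array([x0[1], x0[0]])  # chain rule by hand
```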
\begin{prop}[$\nabla^{cc} \left(\frac{f}{g}\right)$ error bound]\label{prop:quotient}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}, g:\operatorname{dom}(g)\subseteq \operatorname{\mathbb{R}}^n \to \operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(x^0,\overline{\Delta})$ and denote by $L_f, L_g$ the Lipschitz constants of $\nabla^2 f, \nabla^2 g$ respectively.
Let $\operatorname{\mathcal{X}}=\langle x^0, x^1, \ldots, x^m \rangle$ be an ordered set with radius $\Delta<\overline{\Delta}$ such that $S$ has full rank. Let $\mathbb{U}=\operatorname{span} S$. Assume $g(x^0)\neq0$. Then
\begin{equation*}
\left\|\nabla^{cc} \left(\frac{f}{g}\right)(\operatorname{\mathcal{X}})-\nabla\left(\frac{f}{g}\right)_\mathbb{U}(x^0)\right\|\leq\frac{\sqrt m}{6}\left(L_f\left|\frac{1}{g(x^0)}\right|+L_g\left|\frac{f(x^0)}{g^2(x^0)}\right|\right)\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2.
\end{equation*}
\end{prop}
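A quick numerical sketch of the quotient rule (our own example, not from the text): when $f$ and $g$ are polynomials of degree at most two, $L_f=L_g=0$ and the GCSCG quotient \eqref{eq:cc4} is exact.

```python
import numpy as np

def gcsg(f, x0, D):
    S = np.column_stack(D)
    delta = 0.5 * np.array([f(x0 + d) - f(x0 - d) for d in D])
    return np.linalg.pinv(S.T) @ delta

f = lambda y: y[0]**2 + y[1]    # quadratic
g = lambda y: y[0] + y[1]       # linear, g(x0) = 3 != 0
x0 = np.array([1.0, 2.0])
D = [np.array([0.2, 0.0]), np.array([0.0, 0.2])]
approx = (g(x0) * gcsg(f, x0, D) - f(x0) * gcsg(g, x0, D)) / g(x0)**2
# True gradient of (y1^2 + y2) / (y1 + y2) at (1, 2) is (1/3, 0).
```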
The error bound involving the chain rule ($\nabla^{cc} (f\circ g)$) requires new techniques, so we include the proof of Proposition \ref{prop:cccomp}.
\begin{prop}[$\nabla^{cc} (f\circ g)$ error bound]\label{prop:cccomp}
Let $g:\operatorname{dom}(g) \subseteq \operatorname{\mathbb{R}}^n \to \operatorname{\mathbb{R}}^p$, $f: \operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^p \to \operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(g(x^0),\overline{\Delta}_g)$ and $B(x^0,\overline{\Delta})$ respectively and denote by $L_{\nabla^2 f} , L_{\nabla^2 g}$ the Lipschitz constants of $\nabla^2 f$ and $\nabla^2 g$. Denote by $L_{g_i}$ the Lipschitz constant of $g_i$ on $B(x^0,\overline{\Delta})$ for each $i \in \{1,2, \dots, p\}.$ Let $\operatorname{\mathcal{X}}=\langle x^0, x^1, \ldots, x^m \rangle$ be an ordered set with radius $\Delta<\overline{\Delta}$ and let $g(\operatorname{\mathcal{X}})=\langle g(x^0), g(x^1), \ldots, g(x^m) \rangle$ be an ordered set with radius $\Delta_g<\overline{\Delta}_g$. Assume that $S$ and $S_g$ have full rank. Let $\mathbb{U}=\operatorname{span} S \subseteq \operatorname{\mathbb{R}}^n$ and $\mathbb{V}= \operatorname{span} S_g \subseteq \operatorname{\mathbb{R}}^p.$ Then
\footnotesize\begin{equation*}
\|\nabla^{cc} (f\circ g)(\operatorname{\mathcal{X}})-\nabla(f\circ g)_\mathbb{U} (x^0)\|\leq \frac{\sqrt{m}\;p}{6}\big(\sqrt{m} \; L_{g_*} \;L_{\nabla^2 f}\big\|(\widehat{S}_g^\top)^\dagger\big\|+ \Vert \nabla f(g(x^0)) \Vert L_{\nabla^2 g_*} \big)\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2_{*},
\end{equation*}\normalsize
where
\begin{align*}
\Delta_{*}&= \max\left \{ \Delta, \Delta_g \right\},\\
L_{g_{*}}&= \max \{L_{ g_i}: i=1, \dots, p\},\\
L_{ \nabla^2 g_{*}}&= \max \{L_{\nabla^2 g_i}: i=1, \dots, p\}.
\end{align*}
\end{prop}
\begin{proof}
We have
\begin{align*}
\Vert \nabla^{cc} (f \circ g)(\operatorname{\mathcal{X}})-\nabla(f\circ g)_\mathbb{U}(x^0)\Vert&=\Vert \left (J_g^c(\X) \right )^\top \nabla^c f(g(\operatorname{\mathcal{X}})) - \left (J_{g_\uu}(x^0) \right )^\top \nabla f_\mathbb{V} \left ( g_\mathbb{U} \left ( x^0\right ) \right )\Vert.
\end{align*}
Note that $g_\mathbb{U}(x^0)=g(x^0+\operatorname{Proj}_\mathbb{U}(x^0-x^0))=g(x^0).$ We obtain
\begin{align*}
&\Vert \nabla^{cc} (f \circ g)(\operatorname{\mathcal{X}})-\nabla(f\circ g)_\mathbb{U}(x^0)\Vert\\
=&\Vert \left (J_g^c(\X) \right )^\top \nabla^c f(g(\operatorname{\mathcal{X}})) - \left (J_{g_\uu}(x^0) \right )^\top \nabla f_\mathbb{V} \left ( g \left ( x^0\right ) \right )\Vert\\
=&\Vert \left (J_g^c(\X) \right )^\top \nc f\left ( g(\X)\right ) - \left (J_g^c(\X) \right )^\top \nabla f_{\vv}\left (g\left (x^0 \right ) \right )\\
\quad&+\left (J_g^c(\X) \right )^\top \nabla f_{\vv}\left (g\left (x^0 \right ) \right ) - \left (J_{g_\uu}(x^0) \right )^\top \nabla f_{\mathbb{V}} \left ( g\left ( x^0\right ) \right )\Vert\\
\leq&\Vert \left (J_g^c(\X) \right )^\top \Vert \Vert \nc f\left ( g(\X)\right ) -\nabla f_{\vv}\left (g\left (x^0 \right ) \right ) \Vert + \Vert \nabla f_{\vv}\left (g\left (x^0 \right ) \right ) \Vert \Vert \left ( J_g^c(\X)- J_{g_\uu}(x^0) \right )^\top \Vert.
\end{align*}
Let us find a bound for $\Vert \nc f\left ( g(\X)\right )-\nabla f_{\vv}\left (g\left (x^0 \right ) \right ) \Vert.$ If $S_g$ has full column rank, then
\begin{align*}
\Vert \nc f\left ( g(\X)\right )-\nabla f_{\vv}\left (g\left (x^0 \right ) \right ) \Vert &\leq \frac{\sqrt{m}}{6} L_{\nabla^2 f}\big\|(\widehat{S}_g^\top)^\dagger\big\|\Delta^2_g
\end{align*}
by Theorem \ref{thm:under}. If $S_g$ has full row rank, then
\begin{align*}
\nabla f_{\vv}\left (g\left (x^0 \right ) \right )&=\operatorname{Proj}_\mathbb{V} \nabla f (g(x^0))\\
&=(S_g^\top)^\dagger S_g^\top \nabla f (g(x^0))\\
&=\nabla f (g(x^0)).
\end{align*}
Hence
\begin{align*}
\Vert \nc f\left ( g(\X)\right )-\nabla f_{\vv}\left (g\left (x^0 \right ) \right ) \Vert &= \Vert \nc f\left ( g(\X)\right ) -\nabla f (g(x^0)) \Vert\\
&\leq \frac{\sqrt{m}}{6} L_{\nabla^2 f}\big\|(\widehat{S}_g^\top)^\dagger\big\|\Delta^2_g
\end{align*}
by Theorem \ref{thm:det}. Next we find a bound for $\Vert \left (J_g^c(\X)-J_{g_\uu}(x^0) \right )^\top \Vert.$ We have
\begin{align*}
\Vert \left (J_g^c(\X)-J_{g_\uu}(x^0) \right )^\top \Vert &= \left \Vert \begin{bmatrix}\left (\nabla^c g_1(\operatorname{\mathcal{X}})-\nabla (g_1)_{\mathbb{U}}(x^0) \right )^\top\\ \vdots \\ \left ( \nabla^c g_p(\operatorname{\mathcal{X}}) -\nabla (g_p)_{\mathbb{U}} (x^0) \right )^\top \end{bmatrix}^\top \right \Vert\\
&\leq \Vert \nabla^c g_1(\operatorname{\mathcal{X}})-\nabla (g_1)_{\mathbb{U}}(x^0) \Vert + \dots + \Vert \nabla^c g_p(\operatorname{\mathcal{X}})-\nabla (g_p)_{\mathbb{U}} (x^0) \Vert.
\end{align*}
If $S$ has full row rank, then $\nabla (g_i){_\mathbb{U}}(x^0)=\nabla (g_i)(x^0)$ for all $i \in \{1, 2, \dots, p\}.$ By Theorem \ref{thm:det}, we obtain
\begin{align}
&\Vert \nabla^c g_1(\operatorname{\mathcal{X}})-\nabla (g_1)_{\mathbb{U}}(x^0) \Vert + \dots + \Vert \nabla^c g_p(\operatorname{\mathcal{X}})-\nabla (g_p)_{\mathbb{U}} (x^0) \Vert \notag\\
&\leq \frac{\sqrt{m}}{6} \big\|(\widehat{S}^\top)^\dagger\big\|\left ( L_{\nabla^2 g_1} +\dots +L_{\nabla^2 g_p} \right )\Delta^2 \notag\\
&\leq \frac{\sqrt{m}\;p}{6} L_{ \nabla^2 g_*}\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2. \label{eq:bound}
\end{align}
If $S$ has full column rank, then \eqref{eq:bound} is obtained by Theorem \ref{thm:under}. Finally, let us find a bound for $\Vert \left (J_g^c(\X) \right )^\top \Vert$. We have
\begin{align*}
\Vert \left (J_g^c(\X) \right )^\top \Vert&=\left \Vert \begin{bmatrix}\nabla^c g_1(\operatorname{\mathcal{X}})^\top\\ \vdots \\ \nabla^c g_p(\operatorname{\mathcal{X}})^\top \end{bmatrix}^\top \right \Vert\\
&\leq \Vert \nabla^c g_1(\operatorname{\mathcal{X}}) \Vert + \dots + \Vert \nabla^c g_p(\operatorname{\mathcal{X}})\Vert\\
&\leq \big\|(\widehat{S}^\top)^\dagger\big\|\left \Vert \frac{\delta^c_{g_1}(\operatorname{\mathcal{X}})}{\Delta} \right \Vert+ \dots + \big\|(\widehat{S}^\top)^\dagger\big\|\left \Vert \frac{\delta^c_{g_p}(\operatorname{\mathcal{X}})}{\Delta} \right \Vert\\
&\leq\big\|(\widehat{S}^\top)^\dagger\big\|\sqrt{m} \; L_{g_1}+ \dots +\big\|(\widehat{S}^\top)^\dagger\big\|\sqrt{m} \; L_{g_p}\\
&\leq \sqrt{m}\;p \; L_{g_*}\big\|(\widehat{S}^\top)^\dagger\big\|.
\end{align*}
All together,
\begin{align*}
&\Vert \nabla^{cc} (f \circ g)(\operatorname{\mathcal{X}})-\nabla(f\circ g)_\mathbb{U}(x^0)\Vert\\
\leq&\sqrt{m}\;p \; L_{ g_*}\big\|(\widehat{S}^\top)^\dagger\big\|\frac{\sqrt{m}}{6} L_{\nabla^2 f}\big\|(\widehat{S}_g^\top)^\dagger\big\|\Delta_g^2+ \Vert \nabla f_{\mathbb{V}}(g(x^0)) \Vert \frac{\sqrt{m}\;p}{6} L_{ \nabla^2 g_*} \big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2\\
\leq&\frac{\sqrt{m}\;p}{6}\big(\sqrt{m} \; L_{g_*} \;L_{\nabla^2 f}\big\|(\widehat{S}_g^\top)^\dagger\big\|+ \Vert \nabla f_{\mathbb{V}}(g(x^0)) \Vert L_{\nabla^2 g_*} \big)\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2_{*}.
\end{align*}
Since $\Vert \nabla f_{\mathbb{V}}(g(x^0)) \Vert \leq \Vert \nabla f (g(x^0)) \Vert,$ we obtain the final result. \qedhere
\end{proof}
\noindent Analysing the error bound of Proposition \ref{prop:cccomp}, we find two cases where it is zero. Corollary \ref{cor:cccomp} presents these cases.
\begin{cor}\label{cor:cccomp}
Let the assumptions of Proposition \ref{prop:cccomp} hold. If either of the following holds:
\begin{itemize}
\item[(a)]$g$ is a constant function;
\item[(b)]$f$ and $g$ are polynomials of order less than three,
\end{itemize}then
\begin{equation*}
\nabla^{cc} (f\circ g)(\operatorname{\mathcal{X}})=\nabla(f\circ g)_\mathbb{U}(x^0).
\end{equation*}
\end{cor}
\begin{ex}
Consider $f:\operatorname{\mathbb{R}}\to\operatorname{\mathbb{R}}:y\mapsto y^2$, $g:\operatorname{\mathbb{R}}\to\operatorname{\mathbb{R}}:y\mapsto y^2+1$ and $\operatorname{\mathcal{X}}=\langle2,3\rangle$. We compute the absolute error for $\nabla^{cc} (f \circ g)(\operatorname{\mathcal{X}})$ and the value of $\nabla^c (f \circ g)(\operatorname{\mathcal{X}}).$ Note that $S$ and $S_g$ have full row rank. Hence, $\nabla (f \circ g)_\mathbb{U} (x^0)=\nabla (f \circ g)(x^0).$ We have
\begin{align*}
\nabla^{cc} (f\circ g)(\operatorname{\mathcal{X}})&=J^c_g (\operatorname{\mathcal{X}})^\top\nabla^c f(g(\operatorname{\mathcal{X}}))\\
&=(S^\top)^\dagger\delta^c_g (\operatorname{\mathcal{X}})(S_g^\top)^\dagger\delta^c_f (g(\operatorname{\mathcal{X}}))\\
&=1\cdot\frac{1}{2}(10-2)\frac{1}{5}\cdot\frac{1}{2}(100-0)\\
&=40
\end{align*}and the true derivative $\nabla (f \circ g)(x^0)$ is
\begin{equation*}
\frac{d}{dy}(y^2+1)^2\Big\vert_{y=2}=40.
\end{equation*}
Therefore, the absolute error $\| \nabla^{cc} (f \circ g)(\operatorname{\mathcal{X}})-\nabla (f \circ g)(x^0)\|=0.$ The error bound in Proposition \ref{prop:cccomp} is also equal to zero since $f$ and $g$ are quadratic functions. The GCSG $\nabla^c(f\circ g)(\operatorname{\mathcal{X}})$ does not return the exact value of the derivative. Indeed, we have
\begin{align*}
\nabla^c(f\circ g)(\operatorname{\mathcal{X}})&=(S^\top)^\dagger\delta^c_{f \circ g}(\operatorname{\mathcal{X}})\\
&=1\cdot\frac{1}{2}\big((9+1)^2-(1+1)^2\big)=48.
\end{align*}
\end{ex}
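The computations of this example can be reproduced in a few lines of Python (a sketch of our own; the variable names are hypothetical):

```python
f = lambda y: y**2
g = lambda y: y**2 + 1
x0, d = 2.0, 1.0                 # X = <2, 3>, so S = [1] and X^- = <2, 1>
h = g(x0 + d) - g(x0)            # h = 5, so S_g = [5]

# GCSCG chain rule: J_g^c(X)^T * grad^c f(g(X))
jac = 0.5 * (g(x0 + d) - g(x0 - d)) / d            # = 4
grad_f = 0.5 * (f(g(x0) + h) - f(g(x0) - h)) / h   # = 10
gcscg = jac * grad_f                               # = 40, the exact derivative

# Plain GCSG of the composition misses the exact value:
gcsg_comp = 0.5 * (f(g(x0 + d)) - f(g(x0 - d))) / d  # = 48
```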
The next example demonstrates Corollary \ref{cor:cccomp}.
\begin{ex}
Consider $f:\operatorname{\mathbb{R}}^3 \to \operatorname{\mathbb{R}}:y \mapsto \alpha(y_1^2+y_2^2+y_3^2),$ $\alpha \in \operatorname{\mathbb{R}},$ $g:\operatorname{\mathbb{R}}^2 \to \operatorname{\mathbb{R}}^3:y \mapsto \begin{bmatrix} y_2-2y_1&y_1+y_2&y_1y_2+y_2\end{bmatrix}^\top$ and the sample set $\operatorname{\mathcal{X}}=\left \langle \begin{bmatrix} 1\\2\end{bmatrix}, \begin{bmatrix} 2\\2 \end{bmatrix}, \begin{bmatrix} 1\\3 \end{bmatrix}\right \rangle.$ We compute the absolute error for $\nabla^{cc} (f \circ g)(\operatorname{\mathcal{X}}).$ Note that $g(\operatorname{\mathcal{X}})=\left \langle \begin{bmatrix} 0\\3\\4\end{bmatrix}, \begin{bmatrix} -2\\4\\6\end{bmatrix}, \begin{bmatrix} 1\\4\\6\end{bmatrix}\right \rangle$ and
\begin{align*}
S&=\begin{bmatrix} 1&0\\0&1 \end{bmatrix}, \quad S_g=\begin{bmatrix} -2&1\\1&1\\2&2\end{bmatrix}.
\end{align*}
We see that $S$ has full row rank and $S_g$ has full column rank, so $\nabla (f \circ g)_\mathbb{U}(x^0)=\nabla (f \circ g)(x^0)$. We obtain
\begin{align*}
\nabla^{cc} (f \circ g)(\operatorname{\mathcal{X}})&=(J^c_g(\operatorname{\mathcal{X}}))^\top \nabla^c f(g(\operatorname{\mathcal{X}}))\\
&=\begin{bmatrix} -2&1&2\\1&1&2\end{bmatrix}\begin{bmatrix} 0\\ 4.4\alpha \\8.8\alpha \end{bmatrix}\\
&=\alpha\begin{bmatrix} 22\\22\end{bmatrix},
\end{align*}
and
\begin{align*}
\nabla (f \circ g)(x^0)&=(J_g(x^0))^\top \nabla f(g(x^0))\\
&=\begin{bmatrix} -2&1&2\\1&1&2\end{bmatrix} \begin{bmatrix} 0\\ 6\alpha \\ 8\alpha \end{bmatrix}\\
&=\alpha \begin{bmatrix} 22\\ 22\end{bmatrix}.
\end{align*}
Therefore, the absolute error is
\begin{align*}
\|\nabla^{cc} (f\circ g)(\operatorname{\mathcal{X}})-\nabla(f\circ g)_\mathbb{U}(x^0)\|&=\|\nabla^{cc} (f\circ g)(\operatorname{\mathcal{X}})-\nabla(f\circ g)(x^0)\|=0.
\end{align*}
The error bound in Proposition \ref{prop:cccomp} is also equal to zero since $f$ and $g$ are quadratic functions.
\end{ex}
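This example, too, can be verified numerically (our own sketch; \texttt{alpha} is an arbitrary choice of $\alpha$):

```python
import numpy as np

alpha = 1.5                                   # any real coefficient
f = lambda y: alpha * (y @ y)                 # f : R^3 -> R
g = lambda y: np.array([y[1] - 2*y[0], y[0] + y[1], y[0]*y[1] + y[1]])

x0 = np.array([1.0, 2.0])
D = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
S = np.column_stack(D)                        # the 2 x 2 identity here
H = [g(x0 + d) - g(x0) for d in D]            # h^1 = (-2,1,2), h^2 = (1,1,2)
Sg = np.column_stack(H)

# GCSG of f over g(X), using g(X)^- = <g(x0), g(x0) - h^1, g(x0) - h^2>
delta_f = 0.5 * np.array([f(g(x0) + h) - f(g(x0) - h) for h in H])
grad_f = np.linalg.pinv(Sg.T) @ delta_f       # (0, 4.4a, 8.8a)

# Centred simplex Jacobian of g: row i is the GCSG of g_i
delta_g = 0.5 * np.array([g(x0 + d) - g(x0 - d) for d in D])
Jc = (np.linalg.pinv(S.T) @ delta_g).T        # rows (-2,1), (1,1), (2,2)

gcscg = Jc.T @ grad_f                         # (22a, 22a), the exact gradient
```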
The next two propositions are proved using the same technique as Proposition \ref{prop:nccfgerrorbound}.
\begin{prop}[$\nabla^{cc} a^f$ error bound]\label{prop:ccexp}
Let $f:\operatorname{dom}(f)\subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(x^0,\overline{\Delta})$ and denote by $L$ the Lipschitz constant of $\nabla^2 f.$
Let $\operatorname{\mathcal{X}}=\langle x^0, x^1, \ldots, x^m \rangle$ be an ordered set with radius $\Delta<\overline{\Delta}$ such that $S$ has full rank. Let $\mathbb{U}=\operatorname{span} S$ and $a>0$. Then
\begin{equation*}
\big\|\nabla^{cc} a^{f(\operatorname{\mathcal{X}})}-\nabla a^{f_\mathbb{U}(x^0)}\big\|\leq\big|a^{f(x^0)}\ln a\big|\frac{L\sqrt m}{6}\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2.
\end{equation*}
\end{prop}
\noindent Examining the error bound of Proposition \ref{prop:ccexp}, we see that it is zero whenever $L=0$. This is the case when $f$ is a polynomial of order less than three. The following two examples illustrate this in the determined and the underdetermined case, respectively.
\begin{ex}
Consider $f:\operatorname{\mathbb{R}}^2\to\operatorname{\mathbb{R}}:y\mapsto y_1^2+y_2^2$ and $\operatorname{\mathcal{X}}=\left\langle\left[\begin{array}{c}1\\1\end{array}\right],\left[\begin{array}{c}2\\1\end{array}\right],\left[\begin{array}{c}1\\2\end{array}\right]\right\rangle$. We compute the absolute error for $\nabla^{cc} e^{f(\operatorname{\mathcal{X}})}$ and the value of $\nabla^c e^{f(\operatorname{\mathcal{X}})}.$ Note that $S$ has full row rank, hence, $\nabla e^{f_\mathbb{U}(x^0)}=\nabla e^{f(x^0)}.$ We have
\begin{align*}
\nabla^{cc} e^{f(\operatorname{\mathcal{X}})}&=e^{f(x^0)}\nabla^c f(\operatorname{\mathcal{X}})\\
&=e^2(S(\operatorname{\mathcal{X}})^\top)^\dagger\delta_f^c(\operatorname{\mathcal{X}})\\
&=e^2\left[\begin{array}{c c}1&0\\0&1\end{array}\right]\frac{1}{2}\left[\begin{array}{c}5-1\\5-1\end{array}\right]\\
&=\left[\begin{array}{c}2e^2\\2e^2\end{array}\right]\approx\left[\begin{array}{c}14.78\\14.78\end{array}\right],
\end{align*}
and
\begin{align*}
\nabla e^{f(x^0)}&=\nabla e^{y_1^2+y_2^2}=[2e^2~~2e^2]^\top.
\end{align*}
So the absolute error is equal to zero. The error bound in Proposition \ref{prop:ccexp} is also equal to zero since $f$ is a quadratic function. Also,
\begin{align*}
\nabla^c e^{f(\operatorname{\mathcal{X}})}&=\left(\widehat{S}^\top\right)^\dagger\delta_{e^f}^c(\operatorname{\mathcal{X}})\\
&=\left[\begin{array}{c c}1&0\\0&1\end{array}\right]\frac{1}{2}\left[\begin{array}{c}e^5-e^1\\e^5-e^1\end{array}\right]\approx\left[\begin{array}{c}72.85\\72.85\end{array}\right].
\end{align*}
\end{ex}
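The computation above can be checked numerically. The sketch below is an illustrative implementation (the helper names `centred_delta` and `gcsg` are ours, not the paper's) of the generalized centred simplex gradient and of the calculus-based approximation of $\nabla e^f$:

```python
import numpy as np

def centred_delta(func, X):
    """Centred differences (func(x^i) - func(2x^0 - x^i)) / 2, i = 1..m."""
    x0 = X[0]
    return np.array([(func(xi) - func(2 * x0 - xi)) / 2 for xi in X[1:]])

def gcsg(func, X):
    """Generalized centred simplex gradient (S^T)^dagger delta^c(X)."""
    x0 = X[0]
    S = np.column_stack([xi - x0 for xi in X[1:]])
    return np.linalg.pinv(S.T) @ centred_delta(func, X)

f = lambda y: y[0] ** 2 + y[1] ** 2
X = [np.array([1.0, 1.0]), np.array([2.0, 1.0]), np.array([1.0, 2.0])]

# Calculus-based GCSCG of e^f: e^{f(x^0)} times the GCSG of f
grad_cc = np.exp(f(X[0])) * gcsg(f, X)              # [2e^2, 2e^2], about [14.78, 14.78]
grad_true = np.exp(f(X[0])) * np.array([2.0, 2.0])  # true gradient of e^f at x^0

# Plain GCSG applied directly to e^f, which is much less accurate here
grad_c = gcsg(lambda y: np.exp(f(y)), X)            # about [72.85, 72.85]
```

With this sample set the calculus-based value matches the true gradient exactly, while the plain GCSG of $e^f$ differs substantially, in line with the example.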
\begin{ex}
Consider $f:\operatorname{\mathbb{R}}^2\to\operatorname{\mathbb{R}}:y\mapsto y_1^2+y_2^2$ and $\operatorname{\mathcal{X}}=\left\langle\left[\begin{array}{c}1\\1\end{array}\right],\left[\begin{array}{c}2\\1\end{array}\right] \right\rangle$. We compute the absolute error for $\nabla^{cc} e^{f(\operatorname{\mathcal{X}})}.$ Note that $S$ has full column rank. We obtain
\begin{align*}
\nabla^{cc} e^{f(\operatorname{\mathcal{X}})}&=e^{f(x^0)}\nabla^c f(\operatorname{\mathcal{X}})\\
&=\begin{bmatrix}2e^2\\0\end{bmatrix}.
\end{align*}
Also,
\begin{align*}
\nabla e^{f_\mathbb{U} (x^0)}&=e^{f_\mathbb{U}(x^0)}\nabla f_\mathbb{U}(x^0)\\
&=e^{f(x^0)} \operatorname{Proj}_\mathbb{U} \nabla f(x^0)\\
&=e^2 \begin{bmatrix} 1&0\end{bmatrix}^\dagger \begin{bmatrix} 1&0 \end{bmatrix} \begin{bmatrix} 2\\2\end{bmatrix}=\begin{bmatrix} 2e^2\\0\end{bmatrix},
\end{align*}
so the absolute error is
\begin{equation*}
\left\|\nabla^{cc} e^{f(\operatorname{\mathcal{X}})}-\nabla e^{f_\mathbb{U}(x^0)}\right\|=0.
\end{equation*}
The error bound in Proposition \ref{prop:ccexp} is also zero since $f$ is a quadratic function.
\end{ex}
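This underdetermined example can also be checked numerically; in the sketch below (an illustration, not the paper's code) the reference value is $e^{f(x^0)}$ times the projection of the true gradient onto $\mathbb{U}=\operatorname{span} S$:

```python
import numpy as np

f = lambda y: y[0] ** 2 + y[1] ** 2
X = [np.array([1.0, 1.0]), np.array([2.0, 1.0])]
x0 = X[0]

S = np.column_stack([xi - x0 for xi in X[1:]])      # 2x1: only direction e_1 is sampled
delta = np.array([(f(xi) - f(2 * x0 - xi)) / 2 for xi in X[1:]])

grad_c = np.linalg.pinv(S.T) @ delta                # GCSG of f: [2, 0]
grad_cc = np.exp(f(x0)) * grad_c                    # [2e^2, 0]

# Reference: e^{f(x^0)} times the projection of grad f(x^0) onto U = span(S)
proj_U = S @ np.linalg.pinv(S)
grad_ref = np.exp(f(x0)) * (proj_U @ np.array([2.0, 2.0]))

assert np.allclose(grad_cc, grad_ref)               # absolute error is 0
```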
\begin{prop}[$\nabla^{cc} \log_af(\operatorname{\mathcal{X}})$ error bound]\label{prop:cclog}
Let $f:\operatorname{dom}(f) \subseteq \operatorname{\mathbb{R}}^n\to\operatorname{\mathbb{R}}$ be $\mathcal{C}^{3}$ on $B(x^0, \overline{\Delta})$ with $f(x^0)\neq0$, and denote by $L$ the Lipschitz constant of $\nabla^2 f$ on $B(x^0, \overline{\Delta}).$ Let $\operatorname{\mathcal{X}}=\langle x^0, x^1, \ldots, x^m \rangle $ be an ordered set with radius $\Delta<\overline{\Delta}$ such that $S$ has full rank. Let $\mathbb{U}=\operatorname{span} S$ and $a>0$. Then
\begin{equation*}
\|\nabla^{cc} \log_af(\operatorname{\mathcal{X}})-\nabla\log_af_\mathbb{U}(x^0)\| \leq \left|\frac{1}{f(x^0)\ln a}\right|\frac{L\sqrt m}{6}\big\|(\widehat{S}^\top)^\dagger\big\|\Delta^2.
\end{equation*}
\end{prop}
\noindent Examining the error bound of Proposition \ref{prop:cclog}, we see that it is zero whenever $f$ is a polynomial of order less than three. The example below illustrates this situation.
\begin{ex}
Consider $f:\operatorname{\mathbb{R}}^2\to\operatorname{\mathbb{R}}:y\mapsto y_1^2+2y_2^2-3$ and $\operatorname{\mathcal{X}}=\left\langle\left[\begin{array}{c}2\\2\end{array}\right],\left[\begin{array}{c}3\\2\end{array}\right],\left[\begin{array}{c}2\\3\end{array}\right]\right\rangle$. We compute the absolute error for $\nabla^{cc} \ln f(\operatorname{\mathcal{X}})$ and the value of $\nabla^c \ln f(\operatorname{\mathcal{X}}).$ Note that $S$ has full row rank, so $\nabla \ln f_\mathbb{U} (x^0)=\nabla \ln f(x^0).$ We have
\begin{align*}
\nabla^{cc} \ln f(\operatorname{\mathcal{X}})&=\frac{1}{9}\left[\begin{array}{c c}1&0\\0&1\end{array}\right]\frac{1}{2}\left[\begin{array}{c}14-6\\19-3\end{array}\right]\\
&=\left[\begin{array}{c}\frac{4}{9}\\\frac{8}{9}\end{array}\right],
\end{align*}and the true gradient is
\begin{equation*}
\nabla\ln f(x^0)=\left[\begin{array}{c}\frac{1}{9}\cdot4\\\frac{1}{9}\cdot8\end{array}\right]=\left[\begin{array}{c}\frac{4}{9}\\\frac{8}{9}\end{array}\right].
\end{equation*}
Therefore, the absolute error is equal to zero. The error bound in Proposition \ref{prop:cclog} is also zero since $f$ is a quadratic function. The GCSG is
\begin{align*}
\nabla^c \ln f(\operatorname{\mathcal{X}})&=\left[\begin{array}{c c}1&0\\0&1\end{array}\right]\frac{1}{2}\left[\begin{array}{c}\ln14-\ln6\\\ln19-\ln3\end{array}\right]\\
&\approx\left[\begin{array}{c}0.4236\\0.9229\end{array}\right].
\end{align*}
\end{ex}
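As with the previous examples, the numbers above can be reproduced numerically (an illustrative sketch, not the paper's code):

```python
import numpy as np

f = lambda y: y[0] ** 2 + 2 * y[1] ** 2 - 3
X = [np.array([2.0, 2.0]), np.array([3.0, 2.0]), np.array([2.0, 3.0])]
x0 = X[0]

S = np.column_stack([xi - x0 for xi in X[1:]])
delta_f = np.array([(f(xi) - f(2 * x0 - xi)) / 2 for xi in X[1:]])

# Calculus-based GCSCG of ln f: (1 / f(x^0)) times the GCSG of f
grad_cc = (np.linalg.pinv(S.T) @ delta_f) / f(x0)     # [4/9, 8/9]
grad_true = np.array([2 * x0[0], 4 * x0[1]]) / f(x0)  # grad ln f(x^0) = grad f(x^0) / f(x^0)

# Plain GCSG applied directly to ln f
delta_ln = np.array([(np.log(f(xi)) - np.log(f(2 * x0 - xi))) / 2 for xi in X[1:]])
grad_c = np.linalg.pinv(S.T) @ delta_ln               # about [0.4236, 0.9229]
```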
\begin{comment}
\section{Numerical experiment} \label{sec:numericalexperiment}
In this section, we investigate the numerical accuracy of the GCSCG using the functions defined in \cite{more1981testing}. Four different gradient approximation techniques are employed for comparison: the GSG $\nabla^s$, generalized simplex calculus gradient $\nabla^{sc}$, GCSG $\nabla^c$ and GCSCG $\nabla^{cc}$.\par First, we test the chain rule. We recall the definitions of these four gradient approximations for the composition $f\circ g$:
\begin{align*}
\nabla^s (f\circ g)(\operatorname{\mathcal{X}})&=(S^\top)^\dagger\delta^s_{f \circ g} (\operatorname{\mathcal{X}}),\\
\nabla^{sc}(f\circ g)(\operatorname{\mathcal{X}})&=J^s_g(\operatorname{\mathcal{X}})^\top\nabla^s f(g(\operatorname{\mathcal{X}})),\\
\nabla^c (f\circ g)(\operatorname{\mathcal{X}})&=(S^\top)^\dagger\delta^c_{f \circ g}(\operatorname{\mathcal{X}}),\\
\nabla^{cc} (f\circ g)(\operatorname{\mathcal{X}})&=J^c_g (\operatorname{\mathcal{X}})^\top\nabla^c f(g(\operatorname{\mathcal{X}})).
\end{align*}
Second, we test the product rule. The four approximations for the product of $k$ functions are:
\begin{align*}
\nabla^s (f_1\Compactcdots f_k)(\operatorname{\mathcal{X}})&=(S^\top)^\dagger\delta_{f_1\Compactcdots f_k}^s (\operatorname{\mathcal{X}}),\\
\nabla^{sc}(f_1\Compactcdots f_k)(\operatorname{\mathcal{X}})&=\sum\limits_{i=1}^k\prod\limits_{j\neq i}f_j(x^0)\nabla^s f_i(\operatorname{\mathcal{X}}),\\
\nabla^c (f_1\Compactcdots f_k)(\operatorname{\mathcal{X}})&=(S^\top)^\dagger \delta_{f_1\Compactcdots f_k}^c (\operatorname{\mathcal{X}}),\\
\nabla^{cc} (f_1\Compactcdots f_k)(\operatorname{\mathcal{X}})&=\sum\limits_{i=1}^k\prod\limits_{j\neq i}f_j(x^0)\nabla^c f_i(\operatorname{\mathcal{X}}).
\end{align*}
The sample set of points used on every problem is $\operatorname{\mathcal{X}}=\langle x^0,x^0+e_1,\ldots,x^0+e_n\rangle$, where $x^0$ is the starting point defined in \cite{more1981testing}. The parameter $\beta$ is defined to be a real number in the interval $(0,1]$. The role of $\beta$ is to shrink the matrix $S=[e_1~~e_2~~\cdots~~e_n]$. Our goal is to determine the largest value of $\beta$ necessary to obtain a relative error less than $10^{-3}$. To achieve this goal, we do the following for all four gradient approximations.
\begin{enumerate}
\item Let $\beta=1$. Compute the approximate gradient and the resulting relative error. If the relative error is less than $10^{-3}$, stop and return $\beta=1$.
\item Compute the approximate gradient using $\beta\in\{10^{-1},10^{-2},\ldots,10^{-8}\}$ and the resulting relative errors, until a value is found that gives a relative error less than $10^{-3}$. If none of these values provides such a relative error, return {\tt error}.
\item Apply a bisection method with a tolerance of $10^{-7}$ to find the highest value of $\beta$ that returns a relative error less than $10^{-3}$.
\end{enumerate}
We present our findings in Tables \ref{tab3} and \ref{tab4}. In the tables, a number is boldfaced if no other method does better on that particular problem. A boldfaced and underlined number means that not only does no other method do better, but also the value of $\beta$ is at least one order of magnitude higher than the second-best method.\par Table \ref{tab3} displays our results regarding the chain rule. The outer function used is $f:\operatorname{\mathbb{R}}^k\to\operatorname{\mathbb{R}}:y\mapsto\sum_{i=1}^k y_i^2$. In the table, the dimension of the domain of the inner function $g$ is given by $n$ and the dimension of the codomain by $k$.
\begin{table}[H]\centering
\caption{Values of $\beta$ that obtain relative error $<10^{-3}$}\label{tab3}
\begin{tabular}{|l|l l|l|l|l|l|}\hline
Function&$n$&$k$&$\nabla^s (f\circ g)$&$\nabla^{sc}(f\circ g)$&$\nabla^c (f\circ g)$&$\nabla^{cc} (f\circ g)$\\\hline\hline
1. Rosenbrock&2&2&3.46e-04&3.46e-04&2.20e-02&\textbf{\underline{1}}\\
2. Freudenstein&2&2&7.64e-04&7.64e-04&4.16e-02&\textbf{\underline{1.63e-01}}\\
3. PowellBS&2&2&2.00e-07&2.00e-07&\textbf{\underline{1}}&\textbf{\underline{1}}\\
4. BrownBS&2&3&\textbf{\underline{1}}&\textbf{\underline{1}}&\textbf{\underline{1}}&\textbf{\underline{1}}\\
5. Beale&2&3&8.10e-04&8.10e-04&\textbf{3.19e-2}&1.87e-02\\
6. Jenrich&2&4&5.38e-04&5.38e-04&9.56e-03&\textbf{\underline{2.27e-02}}\\
7. Helical&3&3&6.69e-03&6.69e-03&\textbf{\underline{6.65e-02}}&\textbf{\underline{6.65e-02}}\\
8. Bard&3&15&1.28e-03&1.28e-03&4.27e-02&\textbf{6.77e-02}\\
9. Gaussian&3&15&2.09e-06&2.09e-06&7.48e-03&\textbf{\underline{3.04e-01}}\\
10. Meyer&3&16&7.73e-05&7.73e-05&\textbf{\underline{1}}&\textbf{\underline{1}}\\
11. Gulf&3&20&4.72e-04&4.72e-04&8.01e-03&\textbf{\underline{1.11e-02}}\\
12. Box3D&3&3&2.37e-02&2.37e-02&\textbf{6.41e-01}&5.61e-01\\
13. PowellS&4&4&1.27e-03&1.27e-03&6.24e-02&\textbf{\underline{1}}\\
14. Wood&4&6&2.18e-03&2.18e-03&1.01e-01&\textbf{\underline{1}}\\
15. Kowalik&4&11&4.75e-05&4.75e-05&2.66e-02&\textbf{4.80e-02}\\
16. Brown&4&4&1.85e-02&1.85e-02&8.84e-01&\textbf{\underline{1}}\\
17. Osborne1&5&33&4.88e-06&4.88e-06&2.09e-04&\textbf{4.48e-04}\\
18. Biggs&6&6&7.78e-04&7.78e-04&1.30e-01&\textbf{4.42e-01}\\
19. Osborne2&11&65&2.50e-04&2.50e-04&\textbf{\underline{1.27e-02}}&6.45e-03\\
20. Watson&31&31&1.44e-04&1.44e-04&4.26e-02&\textbf{\underline{1}}\\
21. RosenbrockE&4&4&4.52e-04&4.52e-04&2.53e-02&\textbf{\underline{1}}\\
22. PowellExt&8&8&1.28e-03&1.28e-03&6.29e-02&\textbf{\underline{1}}\\
23. Penalty1&4&5&2.56e-03&2.56e-03&1.47e-01&\textbf{\underline{1}}\\
24. Penalty2&6&12&6.33e-04&6.33e-04&2.92e-02&\textbf{\underline{1}}\\
25. VariablyDim&7&9&2.31e-03&2.31e-03&1.05e-01&\textbf{\underline{1}}\\
26. Trigonometric&7&7&2.49e-04&2.49e-04&6.45e-03&\textbf{\underline{7.75e-02}}\\
27. BrownAlm&9&9&1.41e-01&1.41e-01&\textbf{\underline{1}}&\textbf{\underline{1}}\\
28. DiscreteBnd&5&5&9.47e-06&9.47e-06&1.56e-02&\textbf{\underline{2.65e-01}}\\
29. DiscreteInt&3&3&1.29e-04&1.29e-04&2.13e-02&\textbf{\underline{1.47e-01}}\\
30. BroydenTri&5&5&3.54e-04&3.54e-04&2.74e-02&\textbf{\underline{1}}\\
31. BroydenBan&8&8&4.73e-04&4.73e-04&2.05e-02&\textbf{6.68e-02}\\
32. LinearFR&10&13&4.00e-03&4.00e-03&\textbf{\underline{1}}&\textbf{\underline{1}}\\
33. LinearR1&10&10&1.35e-02&1.35e-02&\textbf{\underline{1}}&\textbf{\underline{1}}\\
34. LinearR1W0&10&10&1.19e-02&1.19e-02&\textbf{\underline{1}}&\textbf{\underline{1}}\\
35. Chebyquad&4&5&1.31e-04&1.31e-04&5.48e-03&\textbf{\underline{1}}\\\hline\hline
\textbf{Average}&4.8&9.7&3.53e-02&3.53e-02&2.74e-01&\textbf{6.08e-01}\\
\textbf{Median}&4&6&6.33e-04&6.33e-04&4.26e-02&\textbf{\underline{1}}\\\hline
\end{tabular}
\end{table}
\begin{table}[H]\centering
\caption{Values of $\beta$ that obtain relative error $<10^{-3}$}\label{tab4}
\begin{tabular}{|l|l l|l|l|l|l|}\hline
Function&$n$&$k$&$\nabla^s (f_1\Compactcdots f_k)$&$\nabla^{sc}(f_1\Compactcdots f_k)$&$\nabla^c(f_1\Compactcdots f_k)$&$\nabla^{cc} (f_1\Compactcdots f_k)$\\\hline\hline
1. Rosenbrock&2&2&1.33e-03&2.79e-03&7.83e-02&\textbf{\underline{1}}\\
2. Freudenstein&2&2&6.83e-04&2.65e-04&1.75e-02&\textbf{4.03e-02}\\
3. PowellBS&2&2&3.68e-04&\textbf{\underline{1}}&2.71e-02&\textbf{\underline{1}}\\
4. BrownBS&2&3&9.99e-04&\textbf{\underline{1}}&\textbf{\underline{1}}&\textbf{\underline{1}}\\
5. Beale&2&3&6.81e-04&1.70e-03&2.72e-02&\textbf{8.42e-02}\\
6. Jenrich&2&4&2.75e-04&6.04e-04&1.68e-02&\textbf{2.25e-02}\\
7. Helical&3&3&5.00e-03&\textbf{\underline{1}}&\textbf{\underline{1}}&\textbf{\underline{1}}\\
8. Bard&3&15&1.97e-04&3.93e-03&7.65e-03&\textbf{\underline{8.51e-02}}\\
9. Gaussian&3&15&2.33e-07&1.90e-03&9.23e-06&\textbf{\underline{5.27e-02}}\\
10. Meyer&3&16&5.71e-06&\textbf{\underline{1}}&2.29e-04&\textbf{\underline{1}}\\
11. Gulf&3&3&1.87e-04&8.77e-04&1.04e-02&\textbf{1.10e-02}\\
12. Box3D&3&3&1.71e-02&3.01e-02&\textbf{6.91e-01}&5.96e-01\\
13. PowellS&4&4&1.18e-03&1.12e-03&3.30e-02&\textbf{\underline{1}}\\
14. Wood&4&6&2.86e-03&\textbf{\underline{1}}&2.00e-01&\textbf{\underline{1}}\\
15. Kowalik&4&11&3.49e-05&6.24e-04&9.03e-04&\textbf{\underline{1.69e-02}}\\
16. Brown&4&4&8.47e-03&4.04e-02&3.43e-01&\textbf{\underline{1}}\\
17. Osborne1&5&33&2.69e-07&1.39e-05&1.04e-05&\textbf{\underline{4.65e-04}}\\
18. Biggs&6&6&7.05e-05&8.23e-03&3.39e-03&\textbf{\underline{2.06e-01}}\\
19. Osborne2&11&65&7.59e-06&1.43e-03&3.28e-04&\textbf{\underline{3.81e-02}}\\
20. Watson&2&31&5.77e-03&\textbf{\underline{1}}&5.77e-03&\textbf{\underline{1}}\\
21. RosenbrockE&4&4&1.35e-03&2.83e-03&7.89e-02&\textbf{\underline{1}}\\
22. PowellExt&8&8&1.18e-03&1.12e-03&3.30e-02&\textbf{\underline{1}}\\
23. Penalty1&4&5&1.48e-02&\textbf{\underline{1}}&1.72e-01&\textbf{\underline{1}}\\
24. Penalty2&6&12&1.17e-03&2.20e-03&5.24e-02&\textbf{\underline{1}}\\
25. VariablyDim&7&9&3.01e-03&5.61e-02&1.20e-01&\textbf{\underline{1}}\\
26. Trigonometric&7&7&5.81e-05&8.79e-05&1.73e-03&\textbf{\underline{7.75e-02}}\\
27. BrownAlm&9&9&2.11e-02&\textbf{\underline{1}}&8.78e-01&\textbf{\underline{1}}\\
28. DiscreteBnd&5&5&2.16e-05&4.12e-02&8.26e-04&\textbf{\underline{4.36e-01}}\\
29. DiscreteInt&3&3&5.17e-04&5.00e-03&3.38e-02&\textbf{\underline{1.39e-01}}\\
30. BroydenTri&5&5&3.34e-04&2.27e-03&3.09e-02&\textbf{\underline{1}}\\
31. BroydenBan&8&8&6.19e-04&1.10e-03&2.30e-02&\textbf{6.68e-02}\\
32. LinearFR&10&13&8.65e-03&\textbf{\underline{1}}&6.29e-02&\textbf{\underline{1}}\\
33. LinearR1&10&10&1.50e-03&\textbf{\underline{1}}&5.90e-02&\textbf{\underline{1}}\\
34. LinearR1W0&10&10&1.70e-03&\textbf{\underline{1}}&6.81e-02&\textbf{\underline{1}}\\
35. Chebyquad&2&2&3.33e-04&\textbf{\underline{1}}&1.05e-02&\textbf{\underline{1}}\\\hline\hline
\textbf{Average}&4.8&9.7&2.90e-03&3.49e-01&1.45e-01&\textbf{6.24e-01}\\
\textbf{Median}&4&6&6.83e-04&5.00e-03&3.09e-02&\textbf{\underline{1}}\\\hline
\end{tabular}
\end{table}
\end{comment}
\section{Conclusion} \label{sec:conclusion}
Generalized centred simplex gradients provide a formula to approximate gradients regardless of the number of points in the sample set. In the underdetermined case, an error bound of order $O(\Delta^2)$ is obtained by restricting the function to a subspace of $\operatorname{\mathbb{R}}^n.$ In the overdetermined case, the error bound remains of order $O(\Delta^2)$. We then showed that calculus rules for generalized centred simplex gradients can be written in a form similar to those for the true gradients, plus an error term $E.$ Removing the term $E$ from the calculus rules leads to new approaches to approximating gradients that also have error bounds of order $O(\Delta^2)$. If the true objective function is linear or quadratic, then the new approaches achieve perfect accuracy. Corollaries \ref{cor:ccproduct} and \ref{cor:cccomp} provide several cases where the product rule and the chain rule are perfectly accurate.
Recent work has focused on reducing the computation time and storage space needed to use simplex gradients. In \cite{Coope2019Efficient}, Coope and Tappenden start from the observation that a simplex gradient in $\operatorname{\mathbb{R}}^n$ can require $O(n^3)$ operations and $O(n^2)$ storage units, and reduce both to $O(n)$ under reasonable conditions. A valuable next step would be to determine whether these techniques also work for generalized centred simplex gradients. Another future research direction would be to investigate error bounds when the matrix $S$ does not have full rank (the undetermined case).
% https://arxiv.org/abs/1702.07077
\title{Min-Oo conjecture for fully nonlinear conformally invariant equations}
\begin{abstract}
In this paper we show rigidity results for super-solutions to fully nonlinear elliptic conformally invariant equations on subdomains of the standard $n$-sphere $\mathbb S^n$ under suitable conditions along the boundary. We emphasize that our results do not require any concavity assumption on the fully nonlinear equations we work with. This proves rigidity for compact connected locally conformally flat manifolds $(M,g)$ with boundary such that the eigenvalues of the Schouten tensor satisfy a fully nonlinear elliptic inequality and whose boundary is isometric to a geodesic sphere $\partial D(r)$, where $D(r)$ denotes a geodesic ball of radius $r\in (0,\pi/2]$ in $\mathbb S^n$, and totally umbilical with mean curvature bounded below by the mean curvature of this geodesic sphere. Under the above conditions, $(M,g)$ must be isometric to the closed geodesic ball $\overline{D(r)}$. As a byproduct, in dimension $2$ our methods provide a new proof of Toponogov's Theorem about the rigidity of compact surfaces carrying a shortest simple geodesic. Roughly speaking, Toponogov's Theorem is equivalent to a rigidity theorem for spherical caps in the hyperbolic three-space $\mathbb H^3$. In fact, we extend it to obtain rigidity for super-solutions to certain Monge-Amp\`ere equations.
\end{abstract}
\section{Introduction}
In 1995, Min-Oo \cite{O}, inspired by the work of Schoen and Yau \cite{SY1,SY2} on the Positive Mass Theorem, conjectured that if $(M^n,g)$
is a compact Riemannian manifold with boundary such that the scalar curvature of $M$ is at least $n(n-1)$ and whose boundary $\partial M$ is totally geodesic and
isometric to the standard sphere, then $M$ is isometric to the closed hemisphere
$\overline{\mathbb S^n_+}$ equipped with the standard round metric.
The analogous statement of the Min-Oo conjecture for $\mathbb R^n$
(instead of $\overline{\mathbb S^n_+}$ as in the original conjecture above)
was proved in 2002 (see \cite{Miao} and \cite{ST}).
On the other hand, a counterexample to the Min-Oo conjecture was given by Brendle, Marques and Neves in 2011 in \cite{BMN}.
The Min-Oo conjecture on $\overline{\mathbb S^n_+}$ among metrics conformal to the standard metric on the hemisphere was proved by Hang and Wang in \cite{HW}. Namely:
\begin{theorem} [Hang-Wang \cite{HW}]\label{HW}
Let $g=e^{2\rho}g_0$ be a $C^2$ metric on the unit closed hemisphere $\overline{\mathbb S^n_+}$, where $g_0$ denotes the standard round metric. Assume that
\begin{itemize}
\item[(a)] $R_g\geq n(n-1)$, and
\item[(b)] the boundary is totally geodesic and isometric to the standard $\mathbb S^{n-1}$.
\end{itemize}
Then $g$ is isometric to $g_0$.
\end{theorem}
We point out here that Hang and Wang also established
a Ricci curvature version of the Min-Oo conjecture in \cite{HW2}.
Recently, Spiegel \cite{SP} showed a scalar curvature rigidity theorem for locally conformally flat manifolds with boundary in the spirit of Min-Oo's conjecture, which extends Hang-Wang's Theorem. To be more precise, let $p\in \mathbb S^n$, $0< r\leq\frac{\pi}{2}$ and
\[
D(p,r):=\{x\in\mathbb S^n \, : \,\, d_{g_0}(x,p)<r\}
\]
be the geodesic ball of radius $r$ centered at $p$ in $\mathbb S^n$. Let $H_{r}=\cot(r)$ be the mean curvature of the boundary $ \partial D(p,r)$, measured with respect to the inward orientation. Note that $\partial D(p,r)$ is isometric to a sphere of radius $\sin(r)$.
\begin{theorem} [Spiegel \cite{SP}]\label{Sp}
Let $(M^n,g)$, $n\geq3$, be a compact connected locally conformally flat Riemannian manifold with boundary. Assume that
\begin{itemize}
\item[(a)] $R_g\geq n(n-1)$, and
\item[(b)] the boundary $\partial M$ is umbilic with mean curvature $H_g \geq H_r$ and isometric to $\partial D(p,r)$, $0< r \leq \pi/2$. Here, the mean curvature is measured with respect to the inward orientation.
\end{itemize}
Then $(M,g)$ is isometric to $\overline{D(p,r)}$ with the standard metric.
\end{theorem}
\begin{remark}
Spiegel also proved that the assumption on the mean curvature in the theorem above can be dropped provided $M$ is simply-connected and $r=\frac{\pi}{2}$. See Remark 1.3 in \cite{SP}. Therefore, Theorem \ref{Sp} is an extension of Theorem \ref{HW}.
\end{remark}
Theorem \ref{Sp} is sharp in $r$ in the sense that one can construct counterexamples on $\overline{D(p,r)}$ for $\pi/2 < r <\pi $ (cf. \cite{HW}).
We are interested in Min-Oo's conjecture for compact connected locally conformally flat Riemannian manifolds $(M^n,g)$ satisfying a more general curvature condition. It is well known that the scalar curvature is, up to a constant, the sum of the eigenvalues of the Schouten tensor ${\rm Sch}_g$. In fact, if $\lambda (p)=(\lambda_1(p),\ldots, \lambda_n (p))$ denotes its eigenvalues, then
\begin{equation}\label{Eq:ScalarSchouten}
{\rm Trace}(g^{-1}{\rm Sch}_g)=\lambda_1(p)+\cdots +\lambda_n(p)= \frac{R(g)}{2 (n-1)}\,.
\end{equation}
It is natural to ask whether Min-Oo's conjecture holds when one considers a more general function of the eigenvalues of the Schouten tensor instead of the scalar curvature.
In order to establish properly our main result, we need to define the type of curvature function for the eigenvalues of the Schouten tensor that we will consider. First, let us recall the notion of elliptic data originally introduced by Caffarelli, Nirenberg and Spruck \cite{CNS}; we use the theory developed by Li and Li for conformal equations (cf. \cite{LiLi1,LiLi2}). Consider the convex cones
\begin{equation*}
\begin{split}
\Gamma_{n} =& \{x\in\mathbb R^{n} \, : \, \, x_{i}>0, \,\, i=1,\ldots,n\}, \\
\Gamma_{1}= & \left\{ x\in\mathbb R^{n} \, : \,\, x_{1}+\cdots+x_{n}>0\right\} .
\end{split}
\end{equation*}
Let $\Gamma\subset\mathbb R^{n}$ be a symmetric open convex cone and $f\in C^{1}\left(\Gamma\right)\cap C^0\left(\overline{\Gamma}\right)$. We say that $(f,\Gamma)$ is an {\it elliptic data} if the pair $(f,\Gamma)$ satisfies
\begin{enumerate}
\item $ \Gamma_{n} \subset \Gamma \subset \Gamma_{1} $,
\item $f$ is symmetric,
\item $f>0$ in $\Gamma$,
\item $f|_{\partial\Gamma}=0$,
\item $f$ is homogeneous of degree 1,
\item $\nabla f(x)\in\Gamma_{n}$ for all $x\in\Gamma$,
\item $f(1,\ldots ,1)= 2 $.
\end{enumerate}
Let $(M,g) $ be a Riemannian manifold. Then, given an elliptic data $(f, \Gamma )$ we say that $g$ is a {\it supersolution} to $(f,\Gamma)$ if
\begin{equation*}
f(\lambda _g(p)) \geq 1, \, \, \lambda _g (p) \in \Gamma \text{ for all } p \in M ,
\end{equation*}where $\lambda _g(p)=(\lambda_1(p),\ldots, \lambda_n (p) )$ is composed by the eigenvalues of the Schouten tensor of $g$ at $p \in M$.
It is well known that the Schouten tensor of the standard $n$-sphere is $ {\rm Sch}_{g_0} = \frac{1}{2} g_0 $; hence, condition (7) above says that we are normalizing the functional $f$ to be $1$ when considering the Schouten tensor of the standard sphere, i.e.,
$$ f(1/2 , \ldots , 1/2) = 2^{-1}f(1,\ldots ,1) = 1 ,$$ where we have used that $f$ is homogeneous of degree one.
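For a concrete instance, take $f(x)=2\big(\sigma_k(x)/\binom{n}{k}\big)^{1/k}$, a standard normalization of the $k$-th elementary symmetric polynomial. The Python sketch below (an illustration; the helper names are ours) spot-checks condition (7), the value $f(1/2,\ldots,1/2)=1$, and the degree-1 homogeneity of condition (5):

```python
import numpy as np
from itertools import combinations
from math import comb

def sigma_k(x, k):
    """k-th elementary symmetric polynomial of the entries of x."""
    return sum(np.prod([x[i] for i in idx]) for idx in combinations(range(len(x)), k))

def f_k(x, k):
    """sigma_k normalized so that f_k(1,...,1) = 2, as in condition (7)."""
    n = len(x)
    return 2.0 * (sigma_k(x, k) / comb(n, k)) ** (1.0 / k)

n, k = 5, 2
ones = np.ones(n)
assert abs(f_k(ones, k) - 2.0) < 1e-12        # condition (7)
assert abs(f_k(0.5 * ones, k) - 1.0) < 1e-12  # value on the eigenvalues of Sch_{g_0}

x = np.array([0.3, 1.1, 0.7, 2.0, 0.5])       # a sample point of Gamma_n
assert abs(f_k(3.0 * x, k) - 3.0 * f_k(x, k)) < 1e-10  # homogeneity of degree 1
```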
In this paper, we prove that Min-Oo's conjecture holds for super-solutions to elliptic
data $(f, \Gamma)$ in locally conformally flat manifolds.
Namely, we prove the following result.
\begin{quote}
{\bf Theorem A.} \label{theoremA} {\it Let $(M^n,g)$ be a compact connected locally conformally flat Riemannian manifold with boundary $\partial M$. Let $(f, \Gamma)$ be an elliptic data and assume that $g$ is a supersolution to $(f,\Gamma)$ in $M$, i.e.,
$$f(\lambda _g(p)) \geq 1, \, \, \lambda _g(p)\in\Gamma \text{ for all } p\in M .$$
Assume that $\partial M$ is umbilical with mean curvature $H_g \geq H_r $ and isometric to $\partial D(p,r)$, $0<r\leq \pi/2$. Then $(M,g)$ is isometric to $\overline{D(p,r)}$ with the standard metric.
}
\end{quote}
\begin{remark}
We can also prove that the assumption on the mean curvature in the theorem above can be dropped provided $M$ is simply-connected and $r=\frac{\pi}{2}$.
\end{remark}
\medskip
We emphasize that in our theorem above \emph{no concavity assumption on $f$ is needed}.
Of special interest is the case where we consider $\sigma_k(\lambda(p))$, the $k$-th elementary symmetric polynomial of the eigenvalues $\lambda_1 (p),\ldots,\lambda_n (p)$. However, in
these cases, and in fact for all concave $f$ ($\sigma_k^{1/k}$ is concave), the result
follows from the theorem of Spiegel. Indeed, we only need to
prove that under the additional concavity assumption of $f$ in $\Gamma$, one has
\[
f(\lambda) \leq R_g/[n(n-1)], \textrm{ for all } \lambda\in \Gamma.
\]
The above inequality can be proved as follows. By the homogeneity of $f$ (Euler's relation),
$\sum_{i=1}^n f_{\lambda_i}(\lambda)\,\lambda_i= f(\lambda)$; in particular,
$\sum_{i=1}^n f_{\lambda_i}(1,\ldots, 1)=f(1,\ldots, 1)=2$ and, in view of the symmetry of $f$,
$f_{\lambda_i}(1, \ldots, 1) = \frac 2 n$, $i=1, \ldots, n$. By the concavity of $f$ we get
\[
f(\lambda) \leq f(1,\ldots, 1) + \sum_{i=1}^n f_{\lambda_i}(1,\ldots, 1)\,(\lambda_i-1) = R_g/[n(n-1)].
\]
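This bound can be spot-checked numerically for a concrete concave elliptic data, say $f(\lambda)=2\big(\sigma_2(\lambda)/\binom{n}{2}\big)^{1/2}$, where the inequality reduces to Maclaurin's inequality; by \eqref{Eq:ScalarSchouten}, $R_g/[n(n-1)]=\frac{2}{n}\sum_i\lambda_i$. The script below is an illustration under these assumptions:

```python
import numpy as np
from itertools import combinations
from math import comb

def sigma(x, k):
    """k-th elementary symmetric polynomial."""
    return sum(np.prod([x[i] for i in idx]) for idx in combinations(range(len(x)), k))

rng = np.random.default_rng(0)
n = 6
for _ in range(500):
    lam = rng.uniform(0.01, 5.0, size=n)     # a random point of Gamma_n
    f_val = 2.0 * (sigma(lam, 2) / comb(n, 2)) ** 0.5
    scal = 2.0 * sigma(lam, 1) / n           # = R_g / [n(n-1)]
    assert f_val <= scal + 1e-10             # f(lambda) <= R_g / [n(n-1)]
```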
Our approach relies on a geometric method developed by the third author, G\'alvez and Mira
in \cite{EGM} and further developments contained in \cite{AE,BEQ,BQZ,CE,Esp}, where
conformal metrics on spherical domains are represented by hypersurfaces in the hyperbolic space.
In order to reduce our problem on locally conformally flat manifolds to conformal metrics on
subdomains of the sphere, we use results contained in the work of Spiegel \cite{SP} and Li and
Nguyen \cite{LiNg} based on the deep theory by Schoen and Yau \cite{SY3} on the developing map
of a locally conformally flat manifold. Hence, combining these results, we show that Theorem A is
equivalent to a rigidity result for horospherically concave hypersurfaces with boundary in the
Hyperbolic space $\mathbb H^{n+1}$. In particular, in dimension $n=2$, these methods provide a
new proof to Toponogov's Theorem \cite{Top} and, in fact, we can extend it.
\subsection*{Acknowledgments}
The authors are grateful to the referee for his/her
valuable comments and suggestions, which have improved this article.
\section{Preliminaries}\label{elliptic}
We will establish in this section the necessary tools we will use along this paper.
\subsection{Representation formula and regularity}
Here we recover the hypersurface interpretation of conformal metrics on the sphere developed in
\cite{BEQ,EGM}. Let us denote by $\mathbb{L}^{n+2}$ the Minkowski spacetime, that is, the vector
space $\mathbb R ^{n+2}$ endowed with the Minkowski spacetime metric $\meta{}{}$ given by
$$
\meta{\bar{x}}{\bar{x}} = - x_0 ^2 + \sum _{i=1}^{n+1} x_i ^2,
$$
where $\bar{x} \equiv (x_0 , x_1 , \ldots , x_{n+1})\in \mathbb R^{n+2}$.
Then hyperbolic space, de Sitter spacetime and positive null cone are given, respectively, by the hyperquadrics
\begin{equation*}
\begin{split}
\mathbb{H} ^{n+1} &= \set{ \bar{x} \in \mathbb L ^{n+2} : \, \meta{\bar{x}}{\bar{x}} = -1, \, x_0 >0}\\
d\mathbb S^{n+1}_1 &= \set{ \bar{x} \in \mathbb L ^{n+2} : \, \meta{\bar{x}}{\bar{x}} = 1}\\
\mathbb{N}^{n+1}_+ &= \set{ \bar{x} \in \mathbb L ^{n+2} : \, \meta{\bar{x}}{\bar{x}} = 0, \, x_0 >0}.
\end{split}
\end{equation*}
Let $\phi:M^n\to \mathbb{H}^{n+1} \subset \mathbb{L} ^{n+2}$ be an isometric immersion of an oriented hypersurface, with orientation $\eta :M^n\to d\mathbb S^{n+1}_1 \subset \mathbb{L} ^{n+2}$. We define the associated light cone map as
$$\psi := \phi - \eta : M^n \to \mathbb{N} ^{n+1}_+ \subset \mathbb{L} ^{n+2} .$$
If we write $\psi = (\psi _0 , \ldots , \psi _{n+1})$, consider the map $G$ (the hyperbolic Gauss map) given by:
$$ G = \frac{1}{\psi _0}(\psi _1 , \ldots , \psi _{n+1}) : M \to \mathbb S ^n. $$
Hence, if we label $e^{\rho}:=\psi _0$ (the hyperbolic support function), we get
$$ \psi = e^{\rho} (1 , G) \in \mathbb{L} ^{n+2}.$$
Set $\Sigma := \phi(M^n)\subset \mathbb H^{n+1}$ with orientation $\eta$. We say that $\Sigma$ is horospherically concave if $\Sigma$ lies (locally) around any point $p\in \Sigma$ strictly in the concave side of the tangent horosphere at $p$ and its normal points into the concave side of the tangent horosphere.
\begin{theorem}[\cite{EGM}]
Let $\phi:\Omega\subset \mathbb S^n \to \mathbb H^{n+1}$ be an oriented piece of horospherically concave hypersurface with orientation $\eta : \Omega \to d\mathbb S ^{n+1}_1$ and hyperbolic Gauss map $G(x) = x$. Then
\begin{equation}\label{formula}
\phi(x) = \frac{e^\rho}{2}\big(1+e^{-2\rho}(1+ \|\nabla \rho \|^2)\big)(1,x)+e^{-\rho}(0,-x+\nabla\rho),
\end{equation}and its orientation is given by
\begin{equation}\label{orientation}
\eta (x) = \phi (x) - e^{\rho}(1,x).
\end{equation}
Moreover, the eigenvalues $\lambda_i$ of the Schouten tensor of $g=e^{2\rho}g_0$ and the principal curvatures $k_i$ of $\phi$ are related by
$$\lambda_i=\frac 1 2 - \frac 1 {1+k_i}.$$
Conversely, given a conformal metric $g = e^{2\rho}g_0$ defined on a domain of the sphere $\Omega\subset \mathbb S^n$ such that the eigenvalues of its Schouten tensor are all less than $1/2$, then the map $\phi$ given by \eqref{formula} defines an immersed, horospherically concave hypersurface in $\mathbb H^{n+1}$ with orientation \eqref{orientation} whose hyperbolic Gauss map is $G(x) = x$ for $x \in \Omega$.
Here, the connection $\nabla $ and the norm $\norm{\cdot} $ are with respect to the standard metric $g_0$ on $\mathbb S ^n$.
\end{theorem}
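As a numerical sanity check of the representation formula: for an arbitrary point $x\in\mathbb S^n$, a sample value of $\rho$ and a tangent vector playing the role of $\nabla\rho$ (so that $\langle x,\nabla\rho\rangle=0$), the maps $\phi$, $\eta$ and $\psi=\phi-\eta$ should take values in $\mathbb H^{n+1}$, $d\mathbb S^{n+1}_1$ and $\mathbb N^{n+1}_+$, respectively. The data chosen below are illustrative:

```python
import numpy as np

def mink(u, v):
    """Minkowski product on L^{n+2}: -u_0 v_0 + sum_{i>=1} u_i v_i."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

rng = np.random.default_rng(1)
n = 3
x = rng.normal(size=n + 1)
x /= np.linalg.norm(x)                       # a point of S^n inside R^{n+1}
grad = rng.normal(size=n + 1)
grad -= np.dot(grad, x) * x                  # make it tangent: <x, grad> = 0
rho = 0.7                                    # an arbitrary sample value of rho(x)

# Representation formula phi and orientation eta from the theorem
a = 0.5 * np.exp(rho) * (1 + np.exp(-2 * rho) * (1 + np.dot(grad, grad)))
phi = a * np.concatenate(([1.0], x)) + np.exp(-rho) * np.concatenate(([0.0], -x + grad))
eta = phi - np.exp(rho) * np.concatenate(([1.0], x))
psi = phi - eta                              # light cone map e^rho (1, x)

assert abs(mink(phi, phi) + 1) < 1e-12       # phi lies in H^{n+1}
assert abs(mink(eta, eta) - 1) < 1e-12       # eta lies in de Sitter space
assert abs(mink(psi, psi)) < 1e-12           # psi lies on the null cone
```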
Let $\Omega \subset \mathbb S ^n$ be a relatively compact domain with smooth boundary. Given $\rho \in C^{2} (\overline{\Omega})$, the above representation formula says that $\phi$ and $\eta$ are $C^1$ maps and $\Sigma := \phi (\overline \Omega) \subset \mathbb{H}^{n+1}$ is a compact hypersurface with boundary $\partial \Sigma = \phi(\partial \Omega)$ whose tangent plane varies $C^1$. Moreover, the corresponding conformal metric $g= e^{2\rho} g_0$ on $\Omega$ is the horospherical metric associated to $\Sigma$. Observe that, since $\rho \in C^2(\overline \Omega)$, the eigenvalues of the Schouten tensor associated to $g=e^{2\rho} g_0$ are continuous in $\Omega$ and hence there exists $t >0$ so that the eigenvalues of the Schouten tensor associated to $g_t = e^{2(\rho +t)} g_0$ are less than $1/2$.
In the Poincar\'{e} ball model of $\mathbb H ^{n+1}$, the representation formula (cf. \cite{AE}) is given by
\begin{equation*}
\varphi_{t}(x)=\frac{1-e^{-2\rho _t(x)}+\|\nabla e^{-\rho _t}(x) \|^{2}}{ \left( 1+e^{-\rho_t(x)} \right)^{2}+\|\nabla e^{-\rho_t}(x) \|^{2}}x-\frac{1}{ \left( 1+e^{-\rho_t(x)} \right)^{2}+\|\nabla e^{-\rho_t}(x) \|^{2}}\nabla\left( e^{-2\rho_t}\right)(x).
\end{equation*}
Set $\epsilon = e^{-t}$. Then
$$f (x, \epsilon):= - \frac{2(e^{\rho (x)}+ \epsilon )}{ \left( e^{ \rho(x) }+\epsilon \right)^{2}+ \epsilon ^2 \, \|\nabla \rho(x) \|^{2}}$$and
$$ g (x,\epsilon) = \frac{2 \epsilon }{ \left( e^{ \rho(x) }+\epsilon \right)^{2}+\epsilon ^2 \, \|\nabla \rho(x) \|^{2}} $$belong to $C^1(\overline{\Omega}\times [0,+\infty))$ and are smooth in $\epsilon$; moreover, the vector field $\nabla \rho$ is $C^1$ in $\overline{\Omega}$, since $\rho \in C^2(\overline \Omega)$. Thus,
$$ \varphi _\epsilon (x) = x + \epsilon \left( f (x,\epsilon) x + g (x,\epsilon) \nabla \rho (x)\right) \in \mathbb{B}^{n+1} \subset \mathbb R ^{n+1} $$belongs to $C^1(\overline \Omega)$; in particular, the vector field
$$ Y(x,\epsilon) := f (x,\epsilon) x + g (x,\epsilon) \nabla \rho (x) $$belongs to $C^1(\overline \Omega \times [0,+\infty))$.
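For the reader's convenience, let us indicate the algebra behind the expressions for $f$ and $g$ (this verification is only a sanity check and is not used later). Writing $e^{-\rho_t} = \epsilon \, e^{-\rho}$, the common denominator in the formula for $\varphi_t$ becomes
$$ \left( 1+e^{-\rho _t} \right)^{2}+\|\nabla e^{-\rho _t} \|^{2} = e^{-2\rho}\left( (e^{\rho}+\epsilon)^2 + \epsilon^2 \, \|\nabla \rho \|^2\right), $$
while the numerator of the coefficient of $x$ minus this denominator equals
$$ \left( 1-e^{-2\rho _t}+\|\nabla e^{-\rho _t} \|^{2}\right) - \left( 1+e^{-\rho _t} \right)^{2}-\|\nabla e^{-\rho _t} \|^{2} = -2\epsilon e^{-\rho}\left( 1+\epsilon e^{-\rho}\right) = -2\epsilon \, e^{-2\rho}\left( e^{\rho}+\epsilon\right). $$
Hence the coefficient of $x$ equals $1+\epsilon f(x,\epsilon)$, and since $\nabla \left( e^{-2\rho_t}\right) = -2\epsilon^2 e^{-2\rho}\nabla \rho$, the gradient term equals $\epsilon \, g(x,\epsilon)\nabla \rho$.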
Let $\tilde Y : \mathbb S ^n \times [0,+\infty ) \to \mathbb R ^{n+1}$ be a Lipschitz extension of $Y $ so that
$\left.\tilde Y \right._{|\overline\Omega \times [0,+\infty)} =Y $. Therefore, the corresponding extension map
$$ \tilde \varphi : \mathbb S ^n \times [0, +\infty ) \to \mathbb R^{n+1} $$is Lipschitz in $x$ and smooth in $\epsilon $, satisfies $\tilde \varphi (x,\epsilon) = \varphi_\epsilon (x)$ for all $(x,\epsilon ) \in \overline \Omega \times (0,+\infty)$, and $\tilde \varphi (x,0) = x$; i.e., $\tilde\varphi _0 (\cdot)= \tilde \varphi (\cdot , 0)$ is the identity map, which is an embedding of the sphere $\mathbb S ^n$ into $\mathbb R ^{n+1}$. Since $\tilde \varphi _\epsilon : \mathbb S ^n \to \mathbb R ^{n+1}$ is a Lipschitz deformation of an embedding, from \cite{FukNak}, there exists $\epsilon _0 >0$ so that $\tilde \varphi _\epsilon : \mathbb S ^n \to \mathbb R ^{n+1} $ is an embedding for all $\epsilon \in [0, \epsilon _0 )$. Thus, summarizing all we have done in this subsection, we obtain:
\begin{lemma}[\cite{AE,BEQ,EGM}]\label{KeyLemma}
Let $\Omega \subset \mathbb S ^n$ be a relatively compact domain with smooth boundary and $\rho \in C^2 (\overline \Omega)$. Then, there exists $t >0$ so that the horospherically concave hypersurface $\phi _t : \overline \Omega \to \mathbb{H} ^{n+1}$ given by \eqref{formula} is a compact embedded hypersurface $\Sigma _t = \phi _t (\overline\Omega)$ with boundary $\partial \Sigma _t = \phi _ t (\partial \Omega)$. Moreover, the eigenvalues of the Schouten tensor of its associated horospherical metric $g_t := e^{2(\rho +t)}g_0$ are less than $1/2$.
\end{lemma}
It is important to recall the connection between isometries of the hyperbolic space ${\rm Iso}(\mathbb H^{n+1}) $ and conformal diffeomorphisms of the sphere ${\rm Conf}(\mathbb S^n)$. It is well-known that each isometry $T\in {\rm Iso}(\mathbb H^{n+1})$ induces a unique conformal diffeomorphism $\Phi \in {\rm Conf}(\mathbb S^n)$.
Let $T\in {\rm Iso}(\mathbb H^ {n+1})$ be an isometry and $\Phi \in {\rm Conf}(\mathbb S ^n)$ be the unique conformal diffeomorphism associated to $T$. Then, given a horospherically concave hypersurface $\Sigma \subset \mathbb H ^{n+1}$ with horospherical metric $g$, one can see that (cf. \cite{Esp}) the horospherical metric $\tilde g$ associated to $\tilde \Sigma = T(\Sigma)$ is given by $\tilde g = \Phi ^* g$. Conversely, given a conformal metric $g$ on a subdomain of the sphere with associated hypersurface $\Sigma$ (given by the representation formula under the appropriate conditions), the horospherically concave hypersurface $\tilde \Sigma $ associated to the conformal metric $\tilde g = \Phi ^* g$ is given by $\tilde \Sigma = T(\Sigma)$.
\subsection{Locally conformally flat metrics and developing map}
Let $(M^n,g)$, $n\geq 3$, be a Riemannian manifold with a $C^k$-metric $g$. We say that $(M,g)$ is locally conformally flat if for every point $p \in M$ there exist a neighborhood $U$ of $p$ and $\varphi \in C^k (U)$ such that the metric $e^{2\varphi} g$ is flat on $U$. An immersion $\Psi: (M, g) \rightarrow (N, h)$ is a conformal immersion if we can write $\Psi^* h = e^{2\varphi} g$ for some function $\varphi$.
If $(M, g)$ is a locally conformally flat manifold, it is well known that there exists a conformal map $\Psi : M \rightarrow \mathbb S^n$, called the {\it developing map}, which is unique up to conformal transformations of $\mathbb S ^n$.
When $M$ is compact and simply-connected with umbilical boundary, Spiegel \cite{SP} proved that the developing map can be taken to be a diffeomorphism onto the closed hemisphere $\overline{\mathbb S^n_+}$.
If $M$ is not simply-connected, we can pass to the universal covering $\tilde{M}$ to obtain a developing map $\Psi : \tilde{M} \rightarrow \mathbb S^n$ which is, under some assumptions, injective. In fact, Li and Nguyen \cite{LiNg} showed the following theorem:
\begin{theorem}\label{SP}
Let $(M, g)$ be a compact connected locally conformally flat manifold with boundary. Assume that $M$ has positive scalar curvature and that $\partial M$ is umbilic and simply-connected with non-negative mean curvature. Let $\Pi: \tilde{M}\rightarrow M$ be the universal covering. Then there exists an injective conformal map $\Psi : \tilde{M} \rightarrow \mathbb S^n$ which is a conformal diffeomorphism onto its image. The image is of the form
$$\Omega = \Omega (\epsilon_i , p_i , \Lambda) := \mathbb S ^n\backslash \left( \bigcup\limits_{i} D(p_i ,\epsilon_i)\cup \Lambda\right),$$
where the $D(p_i ,\epsilon_i)$ are geodesic balls in $\mathbb S^n$ centered at $p_i$ of radius $\epsilon _i$ with disjoint closures and $\Lambda$ is the so-called limit set, a closed subset of Hausdorff dimension at most $\frac{n-2}{2}$.
\end{theorem}
For the sake of completeness we include their proof here.
\begin{proof}
Actually, Theorem \ref{SP} is a consequence of Theorem 1.4 in \cite{LiNg}. To see this, note that here one has the additional hypothesis that $\partial M$ is simply connected. Hence, the two points which need to be checked under this additional hypothesis are: (1) the closed balls $\bar{D}(p_i ,\epsilon_i)$ in \cite{LiNg} are mutually disjoint, and (2) the set $G=\mathbb S ^n\backslash \left( \bigcup\limits_{i} D(p_i ,\epsilon_i)\cup \Lambda\right)$ in \cite{LiNg} is simply connected.
Point (1) is a consequence of Property (ii) in \cite[Theorem 1.4]{LiNg} and some facts from point-set topology. First, $\Psi^{-1}(\partial M) =\cup_{i}(\partial D(p_i ,\epsilon_i) \setminus \Lambda)$ and, as $\Lambda$ is closed and its $(n - 2)$-dimensional Hausdorff measure is zero, $\partial D(p_i ,\epsilon_i) \setminus \Lambda$ is (path-)connected for every $i$. This implies, in view of Property (ii) in \cite[Theorem 1.4]{LiNg}, that the connected components of $\Psi^{-1}(\partial M)$ are exactly the sets $\partial D(p_i ,\epsilon_i) \setminus \Lambda$. Second, for any connected component $X$ of $\Psi^{-1}(\partial M)$, the map $\Psi: X\rightarrow \partial M$ is a covering map. Since $\partial M$ is simply connected, each such $X$ is homeomorphic to $\partial M$, and so $X$ is compact. Thus, $\partial D(p_i ,\epsilon_i) \cap \Lambda$ is empty for every $i$ and, in view of Property (ii) in \cite[Theorem 1.4]{LiNg}, the balls $\bar{D}(p_i ,\epsilon_i)$ are mutually disjoint.
Let us turn to point (2). If $\Lambda$ is empty, the collection $\{D(p_i ,\epsilon_i)\}$ of balls must be finite thanks to Property (iii) in \cite[Theorem 1.4]{LiNg}, in which case the simple connectedness of $G$ is clear. Assume that $\Lambda$ is non-empty. Recall that $\Psi$ is constructed in \cite{LiNg} as the covering map from the universal cover $\tilde{M}_2 \subset \mathbb{S}^n$ of the double $M_2$ of $M$, still denoted by $\Psi$ here, and $G$ is a connected component of $\Psi^{-1}(M)$. Let $\Lambda_2:=\mathbb{S}^n\setminus \tilde{M}_2$, so that $\Lambda=\Lambda_2 \cap(\mathbb{S}^n \setminus (\cup D(p_i ,\epsilon_i)))$.
Suppose first that $D(p_i ,\epsilon_i) \cap\Lambda_2\neq\emptyset$ for every $i$. In this case, as $G=\tilde{M}_2 \setminus(\cup D(p_i ,\epsilon_i))$, $\partial D(p_i ,\epsilon_i)\subset G$ (due to the simple connectedness of $\partial M$ as in point (1)) and $D(p_i ,\epsilon_i) \cap (\mathbb{S}^n \setminus \tilde{M}_2)\neq \emptyset$ for each $i$, there is clearly a retraction from $\tilde{M}_2$ onto $G$. The simple connectedness of $G$ follows from that of $\tilde{M}_2$.
Assume now that $D(p_{i_0} ,\epsilon_{i_0})\cap \Lambda_2=\emptyset$ for some $i_0$. We have $\Psi^{-1}(M_2 \setminus M) \subset \cup(D(p_i ,\epsilon_i) \setminus \Lambda_2)$. Hence, as $\Psi$ is a local homeomorphism and by Property (iii) in \cite[Theorem 1.4]{LiNg},
\[
\Psi^{-1}(\overline{M_2 \setminus M}) \subset \overline{\Psi^{-1}(M_2 \setminus M)}\cap(\mathbb{S}^n \setminus \Lambda_2)\subset \cup (\bar{D}(p_i ,\epsilon_i)\setminus \Lambda_2).
\]
As the balls $\bar{D}(p_i ,\epsilon_i)$ are disjoint, the above implies that there is a connected component of $\Psi^{-1}(\overline{M_2 \setminus M})$ lying entirely in $D(p_{i_0} ,\epsilon_{i_0})$ which covers $\overline{M_2 \setminus M}$, a copy of $M$. We can use this set in place of the original set $G$ to run the argument; in this case $\Lambda\subset D(p_{i_0} ,\epsilon_{i_0})\cap\Lambda_2=\emptyset$ is empty, and we are done as above.
\end{proof}
Note that, since we are assuming $\lambda_g(p)\in \Gamma$ for all $p\in M$ and $\Gamma \subset \Gamma_1$, we have that $R_g>0$. Therefore, under the conditions of Theorem A, we can apply Theorem \ref{SP}.
\section{The case of the hemisphere}\label{hemisphere}
We begin by considering the simplest case, namely conformal metrics on the hemisphere. This case will illuminate the geometric ideas contained in the proof.
\begin{theorem}\label{Th:Hemi}
Let $(f, \Gamma)$ be an elliptic data and let $g=e^{2\rho}g_0$, $\rho \in C^2 (\overline{\mathbb S^n_+})$, $n\geq 3$, be a supersolution to $(f,\Gamma)$ on the closed hemisphere $\overline{\mathbb S^n_+}$, i.e.,
$$f(\lambda_g(p)) \geq 1, \, \, \lambda _g(p)\in\Gamma \text{ for all } p\in \mathbb{S}^n_+ .$$
Assume that the boundary $\partial \mathbb S^n_+$, with respect to $g$, is isometric to the standard sphere $\mathbb{S}^{n-1}$. Then $g=\Phi^*g_0$, where $\Phi \in {\rm Conf}(\mathbb S^n)$ preserves $\overline{\mathbb S ^n _+}$.
\end{theorem}
\begin{proof}
First, since $(\partial \mathbb S ^n _+ , g)$ is isometric to $\mathbb S ^{n-1}$, Obata's Theorem yields a conformal diffeomorphism $\tilde \Phi \in {\rm Conf}(\mathbb S ^{n-1})$ so that $g_{|\partial \mathbb S ^n_+}= \tilde \Phi ^* \left.g_0\right. _{|\partial \mathbb S ^n_+}$ along $\partial \mathbb S ^n _+$. Observe that $\tilde \Phi$ can be extended to a conformal diffeomorphism $\Phi \in {\rm Conf}(\mathbb S^n)$ so that $\Phi (\mathbb S^n _+) = \mathbb S ^n _+$ and $\Phi _{| \partial \mathbb S ^n_+} = \tilde \Phi$. Hence, up to the conformal diffeomorphism $\Phi$, we can assume that $g= g_0$ along $\partial \mathbb S ^n _+$. In other words,
\begin{equation}\label{RB}
\rho = 0 \text{ on } \partial \mathbb S ^n _+ .
\end{equation}
Moreover, since $\partial \mathbb S ^n _+$ is totally geodesic with respect to $g_0$ and $g$ is conformal to $g_0$, $\partial \mathbb S ^n _+$ is totally
umbilical with respect to $g$, in particular, the mean curvature along $\partial \mathbb S ^n _+$ with respect to $g$ is given by
\begin{equation}\label{MB}
H_g:= -e^{-\rho}\frac{\partial \rho}{\partial \nu} = -\frac{\partial \rho}{\partial \nu} \text{ on } \partial \mathbb S ^n _+ ,
\end{equation}where $\nu = e_{n+1}$ is the inward normal along $\partial \mathbb S ^n_+$.
Let $P \subset \mathbb{H}^{n+1}$ be the totally geodesic hyperplane whose boundary at infinity is the equator of the upper hemisphere, i.e., $\partial _\infty P = \partial \mathbb S ^n _+$. Denote by $P^+$ (resp. $P^-$) the connected component of $\mathbb{H}^{n+1}\setminus P$ that contains the north pole (resp. south pole) at its boundary at infinity. Also, denote by $P(b)$, $b\in \mathbb R$, the equidistant to $P$ at distance $b$. Note that $P(b)\subset P^+$ when $b>0$ and $P(b)\subset P^-$ when $b<0$. We define $P(b)^+$ (resp. $P(b)^-$) as the connected component of $\mathbb H ^{n+1}\setminus P(b)$ containing the north pole (resp. south pole) in its boundary at infinity. Clearly, $\partial _\infty P(b) = \partial _\infty P = \partial \mathbb S ^n _+ $ for all $b \in \mathbb R$.
Now, we fix $t>0$ as in Lemma \ref{KeyLemma} such that the eigenvalues of the Schouten tensor of $g_t=e^{2(\rho +t)}g_0$ satisfy $\lambda^t_i(x)<1/2$ for all $x\in \overline{\mathbb S^n_+}$ and the compact horospherically concave hypersurface with boundary $\Sigma _t=\phi _t(\mathbb S^n_+) \subset \mathbb H^{n+1} \subset \mathbb L ^{n+2}$ given by the representation formula (\ref{formula}) associated to $\rho _t =\rho +t$ is embedded.
Given $p\in \mathbb{H}^{n+1}$ we denote by $d_{\mathbb H^{n+1}}(p, P)$ the signed distance to $P$, that is, it is positive if $p \in P^+$ and negative if $p\in P^-$. Then, taking $t>0$ big enough in Lemma \ref{KeyLemma} we can assume that $\Sigma _t$ is above $P(m)$, i.e., $\Sigma _t \subset \overline{P(m)^+}$, where $m = {\rm min}\set{ d_{\mathbb H^{n+1}}(p, P) \, : \, \, p \in \partial \Sigma _t} $. In fact, one can check (cf. \cite[Section 2.4]{AE} for details) that $m= {\rm min}\set{ {\rm arc}\sinh (-e^{-t}H_g(x)) \, : \,\, x \in \partial \mathbb S ^n _+ } $.
Observe that \eqref{RB} implies
\begin{equation}\label{Rt}
\rho_t = t \text{ and } \frac{\partial \rho _t}{\partial \nu} = \frac{\partial \rho }{\partial \nu} \text{ on } \partial \mathbb S ^n _+ .
\end{equation}
We claim:
\begin{quote}
{\bf Claim A:} {\it Let $\gamma : \mathbb R \to \mathbb H^{n+1}$ be the complete geodesic (parametrized by arc-length) joining the south and north poles. Let $\mathcal C _t$ be the solid cylinder in $\mathbb H ^{n+1}$ of axis $\gamma$ and radius $t$. Then, $\partial \Sigma _t$ lies outside the interior of $\mathcal C _t$, and $\partial \Sigma _t \cap \mathcal C _t \subset P$. Moreover, if $\partial \Sigma _t \cap \mathcal C _t \neq \emptyset$ then at such points $\Sigma _t$ is orthogonal to $P$.}
\end{quote}
\begin{proof}[Proof of Claim A]
Note that, for $x\in \partial \mathbb S^n_+$ we have $\rho_t(x)=t$, and hence $\phi _t(x) \in \mathcal H(x,t)$, where $\mathcal H(x,t)$ is the horosphere whose point at
infinity is $x$ and whose signed distance to the origin is $t>0$ (see \cite{BEQ}). This proves the first part of the claim.
To finish the proof, we must check that $\Sigma _t$ is orthogonal to $P$ at any point where $\frac{\partial \rho}{\partial \nu} (x) =0$. By \eqref{orientation}, the unit normal along $\Sigma_t$ is given by
$$ \eta _t(x) = \frac{e^{-\rho-t}}{2} \big(1+\norm{\nabla \rho }^2 - e^{2(\rho+t)} \big) (1,x) + e^{-\rho-t}(0,-x+ \nabla \rho) ,$$and the normal along $P$ is given by $ n(p) = (0,e_{n+1}) $ for all $ p \in P$. Since $\meta{(1,x)}{(0,e_{n+1})}=0$ for $x \in \partial \mathbb S ^n _+$ and $\nu = e_{n+1}$, we have
$$ \meta{\eta_t (x)}{n(\phi_t (x))} = e^{-\rho-t}\,\frac{\partial \rho}{\partial \nu}(x) = 0 ,$$that is, $\Sigma _t$ is orthogonal to $P$ at $\phi_t(x)$.
\end{proof}
Let $(1, {\bf 0}):=(1,0, \ldots , 0) \in \mathbb H ^{n+1} \subset \mathbb L ^{n+2}$ be the origin in the hyperboloid model (note that such point corresponds to the actual origin in the Poincar\'e ball model). Denote by $S_t \subset \mathbb{H}^{n+1}$ the geodesic sphere centered at the origin $(1,{\bf 0})$ of radius $t$.
It is easy to see that its horospherical metric is given by $\tilde g _ t = e^{2t} g_0$ (cf. \cite{Esp}). Consider the half-sphere $S_t ^+ = S_t \cap \overline{P^+}$ and observe that $S_t ^+$ is orthogonal to $P$ along the boundary $\partial S_t^+$.
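As a cross-check (included only for the reader's convenience), the identity $\tilde g_t = e^{2t} g_0$ is consistent with the relation between Schouten eigenvalues and principal curvatures stated above: the geodesic sphere $S_t$ has principal curvatures $k_i = \coth (t)$, hence
$$ \lambda_i = \frac{1}{2} - \frac{1}{1+\coth (t)} = \frac{1}{2} - \frac{\sinh (t)}{e^{t}} = \frac{e^{-2t}}{2}, $$
which are precisely the eigenvalues of the Schouten tensor of the dilated round metric $e^{2t}g_0$.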
Let $T_s :\mathbb H ^{n+1} \to \mathbb H ^{n+1}$ be the hyperbolic translation at distance $s$ along $\gamma$, an isometry of $\mathbb H ^{n+1}$ with $T_s((1, {\bf 0}))= \gamma (s)$. By Claim A, it is clear that $T_s(S_t^+\setminus \partial S_t ^+) \cap \partial \Sigma _t = \emptyset $ for all $s \in \mathbb R$.
Let $\Phi _s \in {\rm Conf}(\mathbb S ^n)$ be the unique conformal diffeomorphism associated to $T_s$. Set $S_{t,s} := T_s (S_t)$ for all $s \in \mathbb R$; then the horospherical metric associated to $S_{t,s}$ is given by $\tilde g _{t,s} = e^{2t} \Phi _s ^* g_0$ in $\mathbb S ^n$. Denote by $\tilde \rho _{t,s} \in C^{\infty}(\mathbb S ^n )$ the horospherical support function associated to $S_{t,s}$, i.e., $\tilde g _{t,s}= e^{2\tilde \rho _{t,s}} g_0$.
Let $\hat g_{t,s}$ be the restriction of $\tilde g _{t,s}$ to $\overline{\mathbb S ^n_+}$, i.e., $\left.\tilde g _{t,s}\right. _{|\mathbb S ^n _+} = \hat g_{t,s}$, and $\hat \rho _{t,s}$ the restriction of $\tilde \rho _{t,s}$ to $\overline{\mathbb S ^n _+}$.
Consider $\bar s\in \mathbb R$ so that $S_{t,s} ^+ \cap \Sigma _t= \emptyset$ for all $s < \bar s$. Increasing $s$ from $\bar s$ to $+\infty$, we must find a first instant $s_0$ so that $S_{t,s_0} ^+$ touches $\Sigma _t$ tangentially. If $S_{t,s_0} ^+ $ does not coincide with $\Sigma _t$ identically, such a tangential point must be either an interior point of $\Sigma _t$ or a point of $\partial \Sigma_t$. In the latter case we must necessarily have $s_0=0$ by the second part of Claim A.
\begin{quote}
{\bf Claim B:} {\it $\rho _t \geq \hat \rho _{t,s_0}$ on $\overline{\mathbb S ^n _+}$.}
\end{quote}
\begin{proof}[Proof of Claim B]
From Claim A we have that $\mathcal H (x,r)$ either does not touch $S_{t,s_0}$ or touches it tangentially, for all $x \in \partial \mathbb S ^n _+$ and all $r\geq t$. This says that $\rho _t \geq \hat \rho _{t,s_0}$ on $\partial \mathbb S ^n _+$, because $\Sigma_t$ is horospherically concave.
Now, let us prove that $\rho _t \geq \hat \rho _{t,s_0}$ on $\mathbb S ^n _+$. Assume there exists $x \in \mathbb S ^n _+$ so that $\rho _t (x)< \hat \rho _{t,s_0}(x)$. Then, as pointed out above, the horosphere $\mathcal H(x,\hat \rho _{t,s_0}(x))$ does not touch $\Sigma _t$ and touches $S _{t,s_0}$ at one point $q$. Observe that $\mathcal H(x,\hat \rho _{t,s_0}(x)- \delta) $ does not touch $\Sigma _t$ for any $\delta < \hat \rho _{t,s_0}(x)-\rho _t (x)$. Denote by $\beta _1 $ the geodesic ray joining $q$ and the point at infinity $x \in \mathbb S ^n _+$; this ray is completely contained in the horoball determined by $\mathcal H(x,\hat \rho _{t,s_0}(x))$ and hence $\beta _1 \cap \Sigma _t =\emptyset$. Denote by $\beta _2$ the geodesic joining $\gamma (s_0)$ with the south pole ${\bf s}\in \mathbb S ^n$; then $\Sigma _t \cap \beta _2 = \emptyset$, since otherwise we would contradict the fact that $S_{t,s_0}$ is the first sphere of contact with $\Sigma _t$ coming from infinity. Finally, denote by $\beta _3$ the geodesic arc joining $\gamma (s_0)$ and $q$. Consider the piecewise smooth curve $\beta = \beta_1 \cup \beta _2 \cup \beta _3$ and observe that $\beta $ is homotopic to $\gamma$; moreover, $\partial \Sigma _t$ is homotopic to $\partial \mathbb S ^n _+$, which implies that the linking number of $\beta $ and $\partial \Sigma _t$ is $\pm 1$ (depending on the orientation), that is, they must intersect. The only possibility is that they intersect in the interior of $\beta _2$; however, this implies that $\Sigma _t$ and $S_{t,s_0}$ have a transverse intersection, contradicting that $S_{t,s_0} $ is the first sphere of contact. Thus, $\rho _t \geq \hat \rho _{t,s_0}$ on $ \mathbb S ^n _+$.
\end{proof}
Note that the Schouten tensor is invariant under a constant conformal rescaling, ${\rm Sch}_{e^{2t}g} = {\rm Sch}_{g}$ as a $(0,2)$-tensor, so its eigenvalues with respect to the metric satisfy $\lambda _{g_t} = e^{-2t}\lambda _g$. Since the elliptic data is homogeneous of degree one, we have that $g_t$ satisfies
$$ f(\lambda _{g_t}(p)) = f (e^{-2t}\lambda _g(p))\geq e^{-2t} \text{ for all } p\in \mathbb S^n_+,$$and the horospherical metric of $S_{t,s} ^+$ satisfies
$$f(\lambda _{\hat g _{t,s}}(p))= f(e^{-2t}\lambda _{g_0}(p)) = e^{-2t}f(1/2,\ldots,1/2)=e^{-2t} \text{ for all } p\in \mathbb S^n_+,$$that is,
$$ f(\lambda _{g_t}(p)) \geq f(\lambda _{\hat g_{t,s}}(p)) \text{ for all } p\in \mathbb S^n_+ . $$
Thus, if $S_{t,s_0} ^+$ intersects $\Sigma _t$ at an interior point, this contradicts the strong maximum principle (see Lemma \ref{SMP} in the Appendix). Observe that Lemma \ref{SMP} requires both horospherical support functions to be positive, which might not hold here. To overcome this, we can either dilate at the beginning with $t$ big enough so that $\rho _t >0$, or translate $\Sigma _t$ and $S_{t,s_0}$ at distance $\abs{s_0}$ using $T_{\abs{s_0}}$. Then the new horospherical support functions are positive, they coincide at some interior point and they differ along the boundary; all these conditions are preserved since $T_{\abs{s_0}}$ is an isometry.
Therefore, there remains the case where $S_{t,s_0} ^+$ intersects $\Sigma _t$ at a boundary point. Since in this case $s_0 =0$, the argument above shows that $\rho _t \geq t $ on $\overline{\mathbb S^n _+} $. Indeed, $\rho _t \geq \hat \rho _{t,s}$ on $ \overline{\mathbb S ^n _+}$ for all $s<0$, and letting $s \to 0$ one easily sees that $\hat \rho _{t,s} \to \hat \rho _t := t $.
If $\partial \Sigma _t \cap P = \emptyset$, then $S_{t,s} \cap \partial \Sigma _t = \emptyset$ for all $s \in \mathbb R$. Hence, we can translate $S_t$ towards the north pole until we find a first contact point with $\Sigma _t$; such a point must be an interior point. However, as above, this contradicts the strong maximum principle.
Therefore, by Claim A, there exists $x \in \partial \mathbb S^n _+$ so that
$$ \frac{\partial \rho _t}{\partial \nu} (x)=0,$$hence, by the Hopf Lemma (cf. Lemma \ref{HL} in the Appendix), we obtain that $\rho _t \equiv t$ in $\overline{\mathbb S ^n _+}$. Thus, $g_t = \tilde g _t$ and hence $g= g_0$.
\end{proof}
The same ideas work on geodesic balls in $\mathbb S ^n$ of radius $r< \pi/2$. However, in this situation we must impose an extra condition on the mean curvature along the boundary. Geometrically, in the previous result we compared $\Sigma _t$ with the half-sphere $S_t ^+$; now we compare with a smaller spherical cap of $S_t$ that depends on $r$.
First, observe that the geodesic ball $D({\bf n}, r) \subset (\mathbb S ^n , g_0)$ of radius $r$ centered at the north pole satisfies that $\partial D({\bf n}, r)$ is isometric to $\mathbb S ^{n-1} (\sin (r))$ and the mean curvature of $\partial D({\bf n}, r)$ with respect to the inward orientation is $\cot (r) $.
Second, the horospherical metric associated to the geodesic sphere $S_t \subset \mathbb H ^{n+1}$ centered at the origin (in the Poincar\'e ball Model) of radius $t$ is just the dilated metric $\tilde g _t = e^{2\tilde \rho _t} g_0= e^{2t} g_0 $ and, from the representation formula \eqref{formula}, it is parametrized by
$$ \tilde \phi _t (x) = (\cosh (t) , \sinh (t) \, x) \text{ for all } x \in \mathbb S ^n .$$
In particular, $$H_{\tilde g _t} (x) = e^{-t} \cot (r) \textrm{ for all } x \in \partial D ({\bf n}, r).$$
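Let us note in passing (this remark is not part of the original computation) that this is consistent with the conformal transformation law for the mean curvature in the sign convention of \eqref{MB}: for a conformal factor $e^{2\varphi}$ one has $H_{e^{2\varphi}g_0} = e^{-\varphi}\big( H_{g_0} - \frac{\partial \varphi}{\partial \nu}\big)$, so for the constant factor $\varphi \equiv t$ the normal derivative vanishes and
$$ H_{e^{2t}g_0} = e^{-t} H_{g_0} = e^{-t}\cot (r) \quad \text{along } \partial D({\bf n},r). $$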
Now, let $P _r$ be the totally geodesic hyperplane in $\mathbb H ^{n+1}$ whose boundary at infinity coincides with the boundary of $D({\bf n} ,r)$, that is, $\partial _\infty P_r = \partial D ({\bf n}, r)$. Set $S^+_{r,t} = \tilde \phi _t (\overline{D({\bf n},r)})$. Hence, arguing as above, one can check that
$$ \tilde \phi _t (x) \in \mathcal H (x,t) \cap P_r({\rm arc}\sinh (-e^{-t}\cot (r))), \text{ for all } x \in \partial D ({\bf n}, r) ,$$and $S^+_{r,t} \subset \overline{P_r({\rm arc}\sinh (-e^{-t}\cot (r)))^+ } $.
Denoting by $\mathcal B (x, t)$ the open horoball determined by $\mathcal H (x,t)$ we observe that
$$
\mathcal D (a) := P_r({\rm arc}\sinh (-e^{-t}\cot (r))) \setminus \bigcup _{x \in \partial D({\bf n} ,r)} \mathcal B (x, t)
$$
is a closed ball in $P_r({\rm arc}\sinh (-e^{-t}\cot (r)))$ of radius $a>0$ depending on $r$ and $t$ and centered at $q_0 = P_r({\rm arc}\sinh (-e^{-t}\cot (r))) \cap \gamma (\mathbb R) $, where $\gamma$ is the complete geodesic in $\mathbb H ^{n+1}$ joining the south and north poles. Let $\bar a >0$ be the unique positive number so that
$$ \mathcal C (\bar a) \cap P_r({\rm arc}\sinh (-e^{-t}\cot (r))) = \partial \mathcal D(a) \subset P_r({\rm arc}\sinh (-e^{-t}\cot (r))) ,$$where $\mathcal C (\bar a)$ is the hyperbolic cylinder in $\mathbb H ^{n+1}$ of axis $\gamma$ and radius $\bar a$, i.e., those points at distance $\bar a$ from $\gamma$.
The exact value of $\bar a$ is not important; however, it can be computed explicitly. The important observation is the following. Let $P_r({\rm arc}\sinh (-e^{-t}\cot (r))) ^-$ be the halfspace determined by $P_r({\rm arc}\sinh (-e^{-t}\cot (r)))$ containing the south pole at its boundary at infinity; then
\begin{equation}\label{OutCylinder}
\mathcal C (\bar a) \cap P_r({\rm arc}\sinh (-e^{-t}\cot (r)))^- \cap \mathcal H (x ,t) = \emptyset \text{ for all } x \in \partial D ({\bf n} ,r) .
\end{equation}
Let $\hat g_{t}$ be the restriction of $\tilde g _{t}$ to $\overline{D({\bf n},r)}$, i.e., $\left.\tilde g _{t}\right. _{|D({\bf n},r)} = \hat g_{t}$, and $\hat \rho _{t}$ the restriction of $\tilde \rho _{t}$ to $\overline{D({\bf n},r)}$. Then, it holds
\begin{equation}\label{BConditions}
\hat \rho _{t} = t \text { and } \frac{\partial \hat \rho _{t}}{\partial \nu} = 0 \text{ on } \partial D({\bf n},r) ,
\end{equation}where $\nu$ is the inward normal along $\partial D({\bf n},r)$.
After the proof of Theorem \ref{Th:rsmall} we will explain, geometrically, the necessity of the condition on the mean curvature.
\begin{theorem}\label{Th:rsmall}
Let $(f, \Gamma)$ be an elliptic data and let $g=e^{2\rho}g_0$, $\rho \in C^2 (\overline{\mathbb S^n_+})$, be a supersolution to $(f,\Gamma)$ in the closed hemisphere $\overline{\mathbb S^n_+}$, i.e.,
$$f(\lambda_g(p)) \geq 1, \, \, \lambda _g(p)\in\Gamma \text{ for all } p\in \mathbb{S}^n_+ .$$
Assume that the boundary $\partial \mathbb S ^n_+$ with respect to $g$ is umbilic with mean curvature $H_g\geq {\rm cot}(r)$ and isometric to $\mathbb{S}^{n-1}(\sin r)$ for some $r \in (0,\pi/2)$, here $\mathbb{S}^{n-1}(\sin r)$ denotes the standard sphere of radius $\sin r$.
Then, there exists a conformal diffeomorphism $\Phi \in {\rm Conf}(\mathbb S^n)$ so that $(\overline{\mathbb S^n_+} , \Phi^* g )$ is isometric to $\overline{D ({\bf n}, r)}$, where $D ({\bf n}, r)$ is the geodesic ball in $\mathbb S ^n$ with respect to the standard metric $g_0$ centered at the north pole ${\bf n}$ of radius $r$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{Th:rsmall}]
Using Obata's Theorem as in the previous case, up to a conformal diffeomorphism, we can assume that $ g = e^{2\rho} g_0 $ is defined on $\overline{D({\bf n}, r)}$ and that $\rho = 0$ on $\partial D ({\bf n} ,r)$. Moreover, the mean curvature of $\partial D ({\bf n} ,r)$ with respect to $g$ is given by
\begin{equation}\label{MB2}
{\rm cot}(r)\leq H_g:= -e^{-\rho}\frac{\partial \rho}{\partial \nu} + \cot(r) \text{ on } \partial D ({\bf n} ,r) .
\end{equation}
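Observe that, as $e^{-\rho}>0$, condition \eqref{MB2} is simply equivalent to a sign condition on the normal derivative:
$$ \frac{\partial \rho}{\partial \nu} \leq 0 \ \text{ on } \partial D ({\bf n}, r), \qquad \text{with } \ \frac{\partial \rho}{\partial \nu}(x) = 0 \ \text{ if, and only if, } \ H_g(x)=\cot (r) . $$
This elementary reformulation is the form in which the boundary assumption will be used below.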
Now, as we have done above, we fix $t>0$ such that the eigenvalues of the Schouten tensor of $g_t=e^{2(\rho +t)}g_0$ satisfy $\lambda^t_i(x)<1/2$ for all $x\in \overline{D({\bf n},r)}$, and we denote by
$\Sigma _t=\phi _t(\overline{D({\bf n},r)}) \subset \mathbb H^{n+1} \subset \mathbb L ^{n+2}$ the compact embedded horospherically concave hypersurface with boundary given by the representation formula (\ref{formula}) associated to $\rho _t =\rho +t$. In particular, $\rho_t = t$ along $\partial D ({\bf n},r)$.
As we have seen above, $\phi _t(x) \in \mathcal H(x,\rho_t(x))$, where $\mathcal H(x,\rho_t(x))$ is the horosphere whose point at infinity is $x$ and whose signed distance to the origin is $\rho_t(x)$. Moreover, for $x \in \partial D({\bf n},r)$, the mean curvature $H_g (x)$ determines the equidistant hypersurface to $P_r$ containing $\phi _t (x )$, that is,
$$ \phi _t(x) \in \mathcal H(x,\rho_t(x)) \cap P_r({\rm arc}\sinh (-e^{-t}H_g(x))).$$ In particular, $ \partial \Sigma _t \subset \overline{P _r({\rm arc}\sinh (-e^{-t}\cot (r)))^- }$. Hence,
\begin{quote}
{\bf Claim:} {\it $\partial \Sigma _t$ lies outside the interior of $\mathcal C (\bar a)$. Moreover, $\phi _t (x) \in \partial \Sigma _t \cap \mathcal C (\bar a) $ for some $x \in \partial D({\bf n} ,r)$ if, and only if, $\dfrac{\partial \rho _t}{\partial \nu} =0$ at $x \in \partial D({\bf n} ,r)$.}
\end{quote}
\begin{proof}[Proof of Claim]
From \eqref{OutCylinder}, the boundary $\partial \Sigma _t$ lies outside the interior of $\mathcal C (\bar a)$. Moreover, $\partial \Sigma _t$ touches $\mathcal C (\bar a)$ at $\phi _t (x)$ for some $x \in \partial D({\bf n} ,r)$ if, and only if, $H_g (x) = \cot(r)$ and, from \eqref{MB2}, this is equivalent to $\frac{\partial \rho _t}{\partial \nu}(x) =0$. This finishes the proof of the claim.
\end{proof}
Now, consider the metric $\hat g _t = e^{2 \hat \rho _t} g_0$ on $D({\bf n},r)$ defined above, which satisfies \eqref{BConditions}. Comparing $\rho _t$ and $\hat \rho _t$ in the same way as in Theorem \ref{Th:Hemi}, we conclude that $\rho _t \equiv \hat \rho _t$ on $D({\bf n} ,r)$. This proves the theorem.
\end{proof}
The condition on the mean curvature is fundamental to ensure that $\partial \Sigma _t$ does not touch the interior of $\mathcal C(\bar a)$. If, at some point $x\in \partial D({\bf n}, r)$, the mean curvature were smaller than $\cot (r)$, the point $\phi _t (x)$ might lie in the interior of $\mathcal C (\bar a)$. Hence, when we compare $\Sigma _t$ with the spherical cap, the first contact point could be an interior point of the spherical cap and a point of $\partial \Sigma _t$, and hence we cannot apply the maximum principle.
Finally, we establish our main result in this section:
\begin{theorem}\label{Th:General}
Let $p_i \in \mathbb S ^n$ and $\epsilon _ i >0 $, $i=1,\ldots , k$, be so that the closed geodesic balls $\overline{D(p_i,\epsilon _i)}\subset \mathbb S^n$ are pairwise disjoint. Set $\Omega := \mathbb S^n \setminus \bigcup _{i=1}^k D(p_i,\epsilon _i)$ and let $\Lambda \subset \Omega$ be a closed subset with empty interior.
Let $(f, \Gamma)$ be an elliptic data and let $g=e^{2\rho}g_0$, $\rho \in C^2 (\overline{\Omega}\setminus \Lambda )$, be a supersolution to $(f,\Gamma)$ in $\Omega \setminus \Lambda $, i.e.,
$$f(\lambda_g(p)) \geq 1, \, \, \lambda _g(p)\in\Gamma \text{ for all } p\in \Omega \setminus \Lambda .$$
Assume that $g$ is complete in $\overline{\Omega}\setminus \Lambda $ and the Schouten tensor of $g$ is bounded.
Assume that each boundary component $\partial D(p_i,\epsilon _i)$ with respect to $g$ is umbilic with mean curvature $H_g\geq {\rm cot}(r)$ and isometric to $\mathbb{S}^{n-1}(\sin (r))$ for some $r \in (0,\pi/2]$, here $\mathbb{S}^{n-1}(\sin (r))$ denotes the standard sphere of radius $\sin (r)$.
Then, there exists a conformal diffeomorphism $\Phi \in {\rm Conf}(\mathbb S^n)$ so that $(\overline{\Omega}\setminus \Lambda , \Phi^* g )$ is isometric to $\overline{D ({\bf n}, r)}$, where $D ({\bf n}, r)$ is the geodesic ball in $\mathbb S ^n$ with respect to the standard metric $g_0$ centered at the north pole ${\bf n}$ of radius $r$.
\end{theorem}
The condition on $\Lambda$ having empty interior is superfluous. Under the conditions above, following ideas contained in \cite{BEQ}, one can prove that $\Lambda $ must have empty interior.
After the proof of Theorem \ref{Th:General} we will explain the necessity of $H_g \geq 0$ when $(\partial \mathbb S ^n _+ ,g)$ is isometric to $\mathbb S ^{n-1}$ in the case of multiple boundary components, in contrast to Theorem \ref{Th:Hemi}.
\begin{proof}[Proof of Theorem \ref{Th:General}]
Since $|{\rm Sch}_g|<+\infty$ and $g$ is a complete metric, following the results in \cite{BEQ} (see also \cite{BQZ}), there exists $t>0$ such that the associated horospherically concave hypersurface
\[
\Sigma_t = \phi_t(\Omega\setminus \Lambda) \subset \mathbb{H}^{n+1}
\]
is properly embedded with boundary and $\partial _{\infty}\Sigma_t = \Lambda$. Without loss of generality we can assume that $\Sigma _t$ is locally convex with respect to the canonical orientation $\eta _t$ by taking $t$ big enough.
Observe that, up to a conformal diffeomorphism $\Phi \in {\rm Conf} (\mathbb S^n)$, we can assume that one connected component of $\partial \Omega$ is $\partial \mathbb{S}^{n}_+$. Consider the case where this component, with the metric $g$, is isometric to $(\mathbb{S}^{n-1},g_0)$; the case where it is isometric to $(\partial \mathbb{D}(r),g_0)$ is analogous. Observe that at the beginning of the proof of Theorem \ref{Th:Hemi} we applied a conformal transformation to ensure that $\rho = 0$ along $\partial \mathbb{S}^{n}_+$. Here we can ensure $\rho = 0$ along one connected component of the boundary (of course, not along all of them); we assume that $\Gamma_1=\partial \mathbb{S}^{n}_+$ is this component. Observe that after applying this conformal diffeomorphism we can assume $\Phi (\Omega) \subset \mathbb{S}^{n}_+$. Now consider the half-sphere $S^+_t\subset P^+$ as in Theorem \ref{Th:Hemi}. We only need to prove that $S^+_t$ does not touch any other boundary component.
As we did in Theorem \ref{Th:Hemi}, consider the hyperbolic translation $T_s : \mathbb H ^{n+1} \to \mathbb H ^{n+1}$ and set $S^+_{t,s} = T_s (S^+_t)$. Then, there exists $s_0 <0$ so that $\Sigma _t \cap S^+_{t,s} = \emptyset$ for all $s \leq s_0$. Then, we increase $s$ towards $0$ until the first contact point with $\Sigma _t$. If this first contact occurs either at an interior point, or at a boundary point when $s=0$, then $\Sigma _t$ equals $S^+_t$ by the maximum principle, as in Theorem \ref{Th:Hemi}.
Therefore, we need only show that the first contact point does not occur at an interior point of $S^+ _{t,s}$, for some $s\in [s_0 ,0]$, and a boundary point of $\Sigma _t$. Assume this happens, and let $\Gamma_2 \subset \partial\Omega\setminus \Gamma_1$ be the boundary component that $S^+_{t,s}$ touches.
Let $p \in \mathbb S ^n _+$ and $\epsilon >0$, with $\overline{D(p,\epsilon)}\subset \mathbb S ^n _+$, be such that $\Gamma _2 = \phi _t (\partial D(p,\epsilon))$. Let $P$ and $Q$ be the totally geodesic hyperplanes in $\mathbb H ^{n+1}$ whose boundaries at infinity are
$$ \partial _\infty P = \partial \mathbb S ^n_+ \text{ and } \partial _\infty Q = \partial D(p, \epsilon) . $$
Let $Q^+ $ be the halfspace determined by $Q$ whose boundary at infinity contains $p \in \partial _\infty Q^+$. Let $\phi _t (x) =q \in \Gamma _2 \cap S^+_{t,s}$ be a first contact point. Let $\eta _t$ and $\tilde \eta _{t,s}$ be the canonical orientations of $\Sigma _t$ and $S^+_{t,s}$, respectively. Then, $\phi _t (x) \in Q({\rm arc}\sinh (e^{-t}H_g(x)))$, where $Q({\rm arc}\sinh (e^{-t}H_g(x)))$ is the equidistant hypersurface to $Q$ at distance ${\rm arc}\sinh (e^{-t}H_g(x))$ contained in $Q^+$.
Since we are assuming that the mean curvature $H_{g}$ is non-negative along $\partial D(p,\epsilon)$, the normal $\eta _t (q)$ points towards $\overline{Q^+}$; it could belong to the tangent bundle of $Q$ if $H_g (x)=0$, where $q = \phi _t (x)$. Now, since $S^+_{t,s}$ is convex with respect to the canonical orientation $\tilde \eta _{t,s}$, the normal $\tilde \eta _{t,s} (q)$ points towards $Q^-$. Since we are assuming that $q$ is the first contact point, the only possibility is that $\eta _t (q) = - \tilde \eta _{t,s} (q)$. However, if this were the case, since $\Sigma _t$ and $S^+ _{t,s}$ are locally convex and their tangent hyperplanes at $q$ coincide, they must lie (locally) on opposite sides of this tangent hyperplane; in other words, $S^+_{t,s}$ approaches $\Sigma _t$ from its concave side, which is a contradiction. Hence, in any case, the first contact point does not occur at an interior point of $S^+ _{t,s}$, for some $s\in [s_0 ,0]$, and a boundary point of $\Sigma _t$.
This finishes the proof of Theorem \ref{Th:General}.
\end{proof}
Observe that the condition $H_g \geq 0$ is essential in Theorem \ref{Th:General}, in contrast to Theorem \ref{Th:Hemi}. The reason is that this condition determines the direction of the canonical orientation $\eta _t$ at the contact point. If the mean curvature were negative at some point, both $\eta _t$ and $\tilde \eta _{t,s}$ would point toward the same halfspace $Q^-$ at the contact point $q$, and we could not reach a contradiction.
\section{Proof of Theorem A}
Now, we are ready to prove our main result. For simplicity, we divide the proof into two cases.
\subsection{$M$ is simply-connected}
\begin{proof}
First, we prove Theorem A under the condition that $M$ is simply-connected. In this case, there exists a developing map $\Psi : M \rightarrow \mathbb S ^n$. Since $\partial M$ is umbilic, and umbilicity is a conformal invariant, the image of $\partial M$ must be umbilic in $\mathbb S ^n$. Hence, $\partial M$ is contained in a hypersphere $\mathcal S \subseteq \mathbb S ^n$. Note that, in fact, $\Psi _{|_{\partial M}}: \partial M \rightarrow \mathcal S$ is a diffeomorphism. Composing with a conformal diffeomorphism of $\mathbb S ^n$, if necessary, we can assume that $\mathcal S$ is the equator $\partial \mathbb S _+^n = \{x \in \mathbb S ^n\,; \,\,x_{n+1} = 0\}$. Now, consider the double manifold $\hat{M}=M\bigcup\limits_{\partial M}(-M)$, where we write $-M$ for the second copy of $M$ in $\hat{M}$ in order to distinguish it from $M$ itself. We extend $\Psi$ to a map $\hat{\Psi}: \hat{M}\rightarrow \mathbb S ^n$ in a natural way: we write $\Psi = (\Psi_1 , \ldots , \Psi_{n+1} )$ and set
$$
\hat{\Psi}(x) :=
\begin{cases}
\Psi(x) & \text{if } x\in M, \\
(\Psi_1(x), \ldots , \Psi_{n}(x),-\Psi_{n+1}(x) ) & \text{if } x\in -M.
\end{cases}
$$
Then $\hat{\Psi}$ is well-defined and continuous because $\Psi_{n+1}(x) = 0$ for $x \in \partial M$. Moreover, it is a local homeomorphism. It follows that $\hat{\Psi}$ is a homeomorphism and hence $\Psi$ is injective. Furthermore, the image of $\Psi$ is either $\overline{\mathbb S ^n_+}$ or $\overline{\mathbb S ^n _-}$. Let ${\bf n}$ and ${\bf s} =-{\bf n}$ denote the north and south poles of $\mathbb S^n$. By composing $\Psi$ with a conformal diffeomorphism of $\mathbb S^n$, we may assume that the image of $\Psi$ is $\overline{\mathbb S ^n _+}$.
Now, we can push forward the metric $g$ on $M$ to $\overline{\mathbb S^n _+}$ via $\Psi$, setting $\tilde g = (\Psi ^{-1})^*g$, and we obtain a metric conformal to the standard metric on the sphere such that the boundary $\partial \mathbb S ^n_+$ is, with respect to $\tilde g$, umbilic with mean curvature $H_{\tilde g}\geq {\rm cot}(r)$ and isometric to $\mathbb{S}^{n-1}(\sin r)$ for some $r \in (0,\pi/2]$, where $\mathbb{S}^{n-1}(\sin r)$ denotes the standard sphere of radius $\sin r$. Therefore, either Theorem \ref{Th:Hemi} if $r =\pi/2$ (in this case we do not need to assume $H_g \geq 0$) or Theorem \ref{Th:rsmall} if $r\in (0, \pi/2)$ implies that $\tilde g$ is isometric (up to a conformal diffeomorphism) to $\overline{D({\bf n},r)}$. This concludes the proof of Theorem A in the simply-connected case.
\end{proof}
\subsection{$M$ is not simply-connected}
In this case, we will use Theorem \ref{SP}: there exists an injective conformal diffeomorphism $\Psi : M \to \Omega \setminus \Lambda $, where $\Omega = \Omega (\epsilon_i , p_i ) := \mathbb S ^n\backslash \left( \bigcup\limits_{i} D(p_i ,\epsilon_i)\right)$, the $D(p_i ,\epsilon_i)$ are geodesic balls in $\mathbb S^n$ centered at $p_i$ of radius $\epsilon _i$ with disjoint closures, and $\Lambda$ is a closed subset of Hausdorff dimension at most $\frac{n-2}{2}$.
Hence, as we did above, we can push forward the metric on $M$ to $\Omega \setminus \Lambda $, setting $\tilde g = (\Psi ^{-1}) ^*g$; this metric is conformal to the standard metric on the sphere, it is complete (cf. \cite[Section 2]{LiNg}), and its Schouten tensor is bounded, since the Schouten tensor of $g$ is bounded on $M$. Moreover, the boundary conditions on $g$ imply that each boundary component $\partial D(p_i,\epsilon _i)$ is, with respect to $\tilde g$, umbilic with mean curvature $H_{\tilde g}\geq {\rm cot}(r)$ and isometric to $\mathbb{S}^{n-1}(\sin (r))$ for some $r \in (0,\pi/2]$, where $\mathbb{S}^{n-1}(\sin (r))$ denotes the standard sphere of radius $\sin (r)$.
Therefore, Theorem \ref{Th:General} implies that there exists a conformal diffeomorphism $\Phi \in {\rm Conf}(\mathbb S^n)$ so that $(\overline{\Omega}\setminus \Lambda , \Phi^* \tilde g )$ is isometric to $\overline{D ({\bf n}, r)}$, where $D ({\bf n}, r)$ is the geodesic ball in $\mathbb S ^n$ with respect to the standard metric $g_0$ centered at the north pole ${\bf n}$ of radius $r$. In particular, $\Lambda = \emptyset$ and the boundary has exactly one connected component. Via $\Psi$, this implies that $M$ is simply connected. This concludes the proof of Theorem A.
\section{Rigidity for hypersurfaces in $\mathbb H ^{n+1}$}
Now, we will see how our results in Section \ref{hemisphere} apply to hypersurfaces $\Sigma$ in $\mathbb H ^{n+1}$. We establish here a simplified version of the most general statement we could prove, one which is geometrically more appealing.
First, we define the geometric setting. Let $P_i \subset \mathbb H ^{n+1}$, $i=1,\ldots, m$, be pairwise disjoint totally geodesic hyperplanes and let $\mathcal O (m)$ be the connected component of $\mathbb H^{n+1}\setminus \bigcup _{i=1}^m P_i$ whose boundary is $\partial \mathcal O (m) = \bigcup _{i=1}^m P_i$. Fix $r\geq 0$ and denote by $P_i(r)$ the equidistant hypersurface to $P_i$ at distance $r$ so that $P_i (r) \subset \mathbb H^{n+1}\setminus \mathcal O (m)$. Assume that $P_i (r)$, $i=1, \ldots , m$, are pairwise disjoint and denote by $\mathcal O (m, r)$ the connected component of $\mathbb H^{n+1}\setminus \bigcup _{i=1}^m P_i (r)$ whose boundary is $\partial \mathcal O (m, r) = \bigcup _{i=1}^m P_i (r)$. Observe that the boundary at infinity $\overline{\Omega(m)} := \partial _\infty \mathcal O (m,r) \subset \mathbb S ^n $ satisfies $\partial \Omega (m)= \bigcup _{i=1}^m \partial D(p_i ,\epsilon _i)$ for certain $p_i \in \mathbb S^n$ and $\epsilon _i >0$. Moreover, we orient each $P_i$ so that the normal $N_i$ along $P_i$ points into $\mathcal O (m,r)$. A domain $\mathcal O (m,r)$ satisfying the above conditions is called an {\it $(m,r)-$domain}.
Second, we define how the hypersurface $\Sigma$ sits into an $(m,r)-$domain. Let $\Sigma \subset \mathbb H ^{n+1}$ be a properly embedded hypersurface with boundary. We say that $\Sigma $ {\it sits into an $(m,r)-$domain}, denoted by $\Sigma \subset \mathcal O (m,r)$, if
\begin{itemize}
\item $\Sigma \setminus \partial \Sigma \subset \mathcal O (m,r)$,
\item $\partial \Sigma = \bigcup _{i=1}^m \mathcal S _i$, where each $\mathcal S _i $ is homeomorphic to $\mathbb S ^{n-1}$ and $\mathcal S _i \subset P _i (r)$,
\item let $\mathcal D _i \subset P_i(r)$ be the domain bounded by $\mathcal S _i$ in $P_i (r)$; the orientation $\eta$ of $\Sigma$ is the one pointing into the domain $W \subset \mathbb H ^{n+1}$ bounded by $\Sigma \cup \left( \bigcup_{i=1}^m \mathcal D _i \right)$, and
\item $\partial _\infty \Sigma \subset \Omega (m)$.
\end{itemize}
Third, we set the type of elliptic inequality the hypersurface will satisfy. We recall the definition of elliptic data for a hypersurface in $\mathbb H ^{n+1}$ (cf. \cite[Section 4]{BEQ} and references therein). Let
$$\Gamma ^* _n =\{ (x_1, \ldots , x_n) \in \field{R} ^n \, : \,\, x_i >1 \, \text{ for all } i \}$$and
$$\Gamma ^* _1 =\{ (x_1, \ldots , x_n) \in \field{R} ^n \, : \,\, \sum _{i=1}^nx_i >n \}.$$
Consider a symmetric function $\mathcal W (x_1 , \ldots , x_n)$ with $\mathcal W (1, \ldots , 1) = 0$ and let $\Gamma ^*$ be an open connected component of
$$ \{ (x_1, \ldots , x_n) \in \field{R} ^n \, : \,\, \mathcal W (x_1, \ldots ,x_n ) >0 \}. $$
We say that $(\mathcal W , \Gamma ^* , \kappa _0) $, $\kappa_0 >0$, is an elliptic data if the following conditions hold:
\begin{enumerate}
\item $\Gamma ^*_n \subset \Gamma ^* \subset \Gamma ^* _1$,
\item $\mathcal W$ is symmetric,
\item $\mathcal W>0$ in $\Gamma^*$,
\item $\mathcal W |_{\partial\Gamma^*}=0$,
\item $\dfrac{\partial \mathcal W}{\partial x_i}>0$ for all $ i=1,\ldots, n$,
\item $\mathcal W (\kappa_0, \ldots , \kappa_0) =1$.
\end{enumerate}
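As an illustrative sanity check of conditions (1)--(6) (our own example, not one used in this paper), take $n=2$, $\mathcal W(x_1,x_2)=\sqrt{(x_1-1)(x_2-1)}$, $\Gamma^* = \Gamma^*_2$ and $\kappa_0 = 2$; the axioms can be verified numerically:

```python
import math

# Illustrative candidate (ours, not from the paper): n = 2,
# W(x1, x2) = sqrt((x1 - 1)(x2 - 1)) on Gamma* = Gamma*_2 = {x1 > 1, x2 > 1},
# with normalization point kappa_0 = 2.
def W(x1, x2):
    return math.sqrt((x1 - 1.0) * (x2 - 1.0))

pts = [(1.5, 2.0), (2.0, 3.0), (1.1, 5.0)]  # sample points inside Gamma*_2

symmetric = all(abs(W(a, b) - W(b, a)) < 1e-12 for a, b in pts)    # (2) symmetry
positive = all(W(a, b) > 0 for a, b in pts)                        # (3) W > 0 in Gamma*
vanishes = W(1.0, 3.0) == 0.0 == W(2.0, 1.0)                       # (4) W = 0 on the boundary
h = 1e-6                                                           # (5) monotonicity in each x_i
monotone = all(W(a + h, b) > W(a, b) and W(a, b + h) > W(a, b) for a, b in pts)
normalized = abs(W(2.0, 2.0) - 1.0) < 1e-12                        # (6) W(kappa_0, kappa_0) = 1
inclusion = all(a > 1 and b > 1 and a + b > 2 for a, b in pts)     # (1) Gamma*_2 inside Gamma*_1
```

All six flags come out true for this candidate, so $(\mathcal W, \Gamma^*_2, 2)$ is an elliptic data in the sense above.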
Then, given an elliptic data $(\mathcal W, \Gamma ^* ,\kappa _0)$ we say that an oriented hypersurface $\Sigma \subset \mathbb H^{n+1}$ is a {\it supersolution} to $(\mathcal W, \Gamma ^* ,\kappa _0)$ if
$$ \mathcal W (k (p)) \geq 1, \, \, k (p) \in \Gamma ^* \text{ for all } p\in \Sigma ,$$where $k(p):= (k_1 (p), \ldots, k_n (p))$ denotes the vector of principal curvatures of $\Sigma$ at $p\in \Sigma$ with respect to the chosen orientation.
We have now established the geometric configuration. In order to state our main result appropriately, we need to introduce some notation.
Fix $r \geq 0$ and $\kappa _0 >1$. Let $S_p(\kappa _0)$ be the totally umbilic geodesic sphere centered at $p\in \mathbb H^{n+1}$ whose principal curvatures (with respect to the inward orientation) are equal to $\kappa _0$. Let $P(r)$ be an equidistant hypersurface to a totally geodesic hyperplane $P$, and denote by $P(r)^+$ the convex component of $\mathbb H^{n+1}\setminus P(r)$. Let $p_r \in \mathbb H ^{n+1}$ be a point so that $S(\kappa_0,r)^+ := S_{p_r} (\kappa_0) \cap P(r)^+$ makes a constant angle $\alpha (r)={\rm arc}\cos\left( -\frac{r}{\sqrt{1+r^2}}\right)$ with $P(r)$; the angle here is measured between the inward normal along the geodesic sphere and the normal along $P(r)$ pointing into the convex side.
\begin{definition}
We say that $\Sigma$ is a {\it $(\kappa_0,r)-$spherical cap} if $\Sigma = S(\kappa_0,r)^+$ up to an isometry of $\mathbb H^{n+1}$.
\end{definition}
Recall that the inradius of a closed embedded hypersurface $\mathcal S$ in $P(r)$, denoted by ${\rm InRad}(\mathcal S, P(r))$, is the radius of the largest geodesic ball in $P(r)$ contained in the domain bounded by $\mathcal S$ in $P(r)$. Then, we set $\imath (\kappa _0 ,r) := {\rm InRad}(\partial S(\kappa _0 ,r)^+ ,P(r)) >0$.
It is clear that $(\kappa_0,r)-$spherical caps will be the model hypersurfaces to compare with in the next result.
\begin{theorem}\label{capillar}
Fix $m \in \mathbb N \cup \set{0} $ and $r \geq 0$. Consider an $(m,r)-$domain $\mathcal O (m,r)$ and let $\Sigma \subset \mathcal O (m,r)$ be a properly embedded hypersurface sitting into it.
Let $(\mathcal W, \Gamma ^* ,\kappa _0)$ be an elliptic data and assume that $\Sigma$ is a supersolution to $(\mathcal W, \Gamma ^* ,\kappa _0)$. Assume that along the boundary $\Sigma$ satisfies:
\begin{itemize}
\item $ \meta{\eta (x)}{N_i(x)} \leq -\frac{r}{\sqrt{1+r ^2}}$ for each $x\in \mathcal S_i $.
\item ${\rm InRad}(\mathcal S_{i_0} ,P_{i_0}(r)) \geq \imath (\kappa _0 ,r) $ for some $i_0 \in \set{1,\ldots ,m}$.
\end{itemize}
Then, $\Sigma $ is, up to an isometry of $\mathbb H^{n+1}$, a $(\kappa_0,r)-$spherical cap.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{capillar}]
The proof follows from the arguments given in Theorem \ref{Th:General}. In this case, we only need to compare with the $(\kappa_0,r)-$spherical cap.
\end{proof}
\begin{remark}
We can drop the embeddedness hypothesis in Theorem \ref{capillar}, as long as $\Sigma \cup \left( \bigcup_{i=1}^m \mathcal D _i \right)$ is Alexandrov embedded.
\end{remark}
\section{Toponogov type theorem}
In this section, we proceed as in Espinar--G\'alvez--Mira \cite{EGM} in order to define the Schouten tensor for a two-dimensional domain endowed with a metric $g$ conformal to the standard metric $g_0$ on $\mathbb{S}^2$. Consider $g=e^{2\rho}g_0$, where $\rho\in C^{2}(\Omega)$ is defined on a domain $\Omega \subset \mathbb{S}^2$. In this case, we define the Schouten tensor ${\rm Sch}_g$ of $g$ by the following relation:
$$
{\rm Sch_g} +\nabla ^2\rho + \frac{1}{2}\|\nabla \rho \|^2g_0={\rm Sch}_{g_0} + \nabla\rho \otimes \nabla \rho
$$
where $\nabla$ and $\nabla ^2$ are the gradient and the Hessian with respect to the metric $g_0$, respectively, and $\| \cdot \|$ denotes the norm with respect to $g_0$. Consider then $\lambda_g=(\lambda_1,\lambda_2)$, where $\lambda_i$, $i=1,2$, are the eigenvalues of the Schouten tensor given by the expression above. Note that if $f(x,y)=x+y$ then
\[
f(\lambda_1,\lambda_2)=\frac{R_g}{2(n-1)}=K\,,
\]
since $n=2$, where $K$ is the Gaussian curvature of $g=e^{2\rho}g_0$. Hence, the Liouville problem (i.e., the Yamabe problem in dimension $n=2$) is a particular case of more general elliptic problems for conformal metrics on $\mathbb{S}^2$. Moreover, we can consider the Min-Oo conjecture for these more general elliptic problems and view Toponogov's Theorem as a particular case of it.
Of particular interest is the square root of the product of the eigenvalues, i.e., $f(x,y) = \sqrt{xy}$. It is clear that $(f,\Gamma _2)$ is an elliptic data and, if we consider $g=e^{2\rho} g_0$, $\rho \in C^2 ( \Omega )$, satisfying $f(\lambda _g) \geq 1$, then $\rho$ is a supersolution to the Monge-Amp\`ere type equation
$$ e^{-4\rho}{\rm det}_{g_0} \left(\nabla ^2\rho - \nabla\rho \otimes \nabla \rho - \frac{1}{2}(1- \|\nabla \rho \|^2)g_0 \right) \geq 1 .$$
This is the subject of our next result.
\begin{theorem}
Let $(f,\Gamma) $ be an elliptic data. Let $(M^2,g)$ be a compact surface with smooth boundary such that $f(\lambda_g)\geq1$. Suppose the geodesic curvature $k$ and the length $L$ of the boundary $\partial M$ (w.r.t. $g$) satisfy $k\geq c\geq0$ and $L=\frac{2\pi}{\sqrt{1+c^2}}$, respectively. Then $(M^2,g)$ is isometric to a disc of radius $r={\rm arc}\cot(c)$ in $\mathbb{S}^2$.
\end{theorem}
\begin{proof}
Since $(f,\Gamma) $ is elliptic we have that $K>0$, where $K$ is the Gaussian curvature of $(M,g)$. Hence, since the geodesic curvature $k$ of the boundary satisfies $k\geq c\geq0$, it follows from the Gauss-Bonnet formula that
$$
2\pi\chi(M)=\int_{M}K\,dv_{M}+\int_{\partial M}k\,ds>0\,,
$$
where $\chi(M)$ is the Euler number of $M$. Therefore $M$ is a disc. By the Riemann mapping theorem, $(M^2,g)$ is conformally equivalent to the unit disc $\mathbb{D}=\{(x,y)\in \mathbb{R}^2\,:\,\,x^2+y^2 \leq 1\}$ with the flat metric $ds_0^2$. Since $(\mathbb{D}, ds_0^2)$ is conformally equivalent to $(\mathbb{S}^2_+,g_0)$, we can assume, without loss of generality, that $M=\overline{\mathbb{S}^2_+}$ and write $g=e^{2\rho}g_0$ with $\rho\in C^{2}(M)$, where $g_0$ denotes the standard metric on $\mathbb{S}^2_+$. Moreover, along $\partial M$, $\rho$ satisfies
\[
\frac{\partial \rho}{\partial \nu}=-ke^{\rho}\leq -ce^{\rho}\,.
\]
Moreover, since $L = \frac{2\pi}{\sqrt{1+c^2}}$, we can reparametrize $\overline{\mathbb{S}^2_+}$ so that $\rho = - \ln \sqrt{1+c^2}$ along $\partial \mathbb{S}^2_+$. Now, arguing as in the proofs of Theorems \ref{Th:Hemi} and \ref{Th:rsmall}, we obtain that $(M^2,g)$ is isometric to a disc of radius $r={\rm arc}\cot(c)$ in $\mathbb{S}^2$.
\end{proof}
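The numerology in the statement matches that of a geodesic disc of radius $r={\rm arc}\cot(c)$ in $\mathbb S^2$: such a disc has boundary length $2\pi\sin(r)=\frac{2\pi}{\sqrt{1+c^2}}$, boundary geodesic curvature $\cot(r)=c$, and satisfies the Gauss--Bonnet formula with $\chi=1$. A quick numerical verification of these identities (an illustration only, not part of the proof):

```python
import math

# For a geodesic disc of radius r in the unit sphere S^2 (curvature K = 1):
#   boundary length:     L(r) = 2*pi*sin(r)
#   geodesic curvature:  k(r) = cot(r)
#   area:                A(r) = 2*pi*(1 - cos(r))
# With c = cot(r) and r in (0, pi/2], one gets L = 2*pi/sqrt(1 + c^2), which is
# the length hypothesis of the theorem, and Gauss-Bonnet reads A + k*L = 2*pi.
def disc_data(c):
    r = math.atan2(1.0, c)  # r = arccot(c), in (0, pi/2] for c >= 0
    L = 2.0 * math.pi * math.sin(r)
    k = 0.0 if r == math.pi / 2 else 1.0 / math.tan(r)
    A = 2.0 * math.pi * (1.0 - math.cos(r))
    return r, L, k, A

checks = []
for c in [0.0, 0.5, 1.0, 3.0]:
    r, L, k, A = disc_data(c)
    checks.append(abs(L - 2.0 * math.pi / math.sqrt(1.0 + c * c)) < 1e-12)  # L = 2*pi/sqrt(1+c^2)
    checks.append(abs(k - c) < 1e-12)                                       # k = c along the boundary
    checks.append(abs(A + k * L - 2.0 * math.pi) < 1e-12)                   # Gauss-Bonnet, chi(disc) = 1
all_ok = all(checks)
```

In particular $c=0$ recovers the hemisphere ($r=\pi/2$, geodesic boundary of length $2\pi$), consistent with Theorem \ref{Th:Toponogov} below.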
As a direct consequence of the result above, we obtain the following version of the Toponogov Theorem.
\begin{theorem}\label{Th:Toponogov}
Let $(f,\Gamma)$ be an elliptic data. Let $(M^2,g)$ be a closed surface such that $f(\lambda_g)\geq1$. Assume that there exists a simple closed geodesic in $M$ with length $2\pi$. Then $(M^2,g)$ is isometric to the standard sphere $\mathbb{S}^2$.
\end{theorem}
\begin{proof}
Suppose that $\gamma$ is a simple closed geodesic in $M$ with length $2\pi$. We cut $M$ along $\gamma$ to obtain two compact surfaces with the geodesic $\gamma$ as their common boundary. The result follows from applying the previous theorem to either of these two compact surfaces with boundary.
\end{proof}
\section{Appendix A: comparison principle}
In this appendix we collect some results contained in \cite{JinLiLi,LiLi1,LiLi2} to make this paper as self-contained as possible. Specifically, we will use \cite[Lemma 6.1]{JinLiLi} and its proof, which relies on the strong maximum principle and the Hopf Lemma developed in \cite{LiLi1,LiLi2}. We can summarize these results as follows:
\begin{lemma}[Strong Maximum Principle]\label{SMP}
Let $(f,\Gamma) $ be an elliptic data. Let $g_i = e^{2 \rho _i} g_0$, $\rho _i \in C^2 (\Omega) \cap C^{1}(\overline \Omega)$ for $\Omega \subset \mathbb S ^n$, be two conformal metrics so that
\begin{itemize}
\item $f(\lambda _{g_1}(p)) \geq f(\lambda _{g_2}(p)) , \,\, \lambda _{g_i} (p)\in \Gamma, \, i=1,2, \, \text{ for all } p \in \Omega$,
\item $\rho _1 , \rho _2 >0$.
\end{itemize}
If $\rho _1 - \rho _2 >0 $ on $\partial \Omega$ then $\rho _1 - \rho _2 >0$ on $\Omega$.
\end{lemma}
And
\begin{lemma}[Hopf Lemma]\label{HL}
Let $(f,\Gamma) $ be an elliptic data. Let $g_i = e^{2 \rho _i} g_0$, $\rho _i \in C^2 (\Omega)\cap C^{1}(\overline \Omega)$ for $\Omega \subset \mathbb S ^n$, be two conformal metrics so that
\begin{itemize}
\item $f(\lambda _{g_1}(p)) \geq f(\lambda _{g_2}(p)) , \,\, \lambda _{g_i} (p) \in \Gamma, \, i=1,2, \, \text{ for all } p \in \Omega$,
\item $\rho _1 \geq \rho _2 > 0$.
\end{itemize}
If $\frac{\partial}{\partial \eta}(\rho _1 - \rho _2 ) \leq 0 $ at $p \in \partial \Omega$ then $\rho _1 = \rho _2 $ on $\Omega$.
\end{lemma}
We should point out that the results in \cite{JinLiLi} do not require $f$ to be homogeneous of degree one. Also, in \cite{JinLiLi} the authors assumed $f \in C^\infty (\Gamma ) \cap C^0 (\bar \Gamma)$, but $f \in C^1 (\Gamma ) \cap C^0 (\bar \Gamma)$ suffices.
\bibliographystyle{amsplain}
% Sobolev orthogonal polynomials on product domains (arXiv:1406.0762)
\section{Introduction}
\setcounter{equation}{0}
Let $w_i(x)$ be a nonnegative weight function defined on an interval $[a_i,b_i]$, where $i =1,2$. Let $W$ be
the product weight function
\begin{equation} \label{eq:W}
W(x,y):= w_1(x) w_2(y), \qquad (x,y) \in \Omega : = [a_1,b_1] \times [a_2,b_2].
\end{equation}
The purpose of this paper is to study orthogonal polynomials with respect to the inner product
\begin{equation} \label{eq:ipd-1}
{\langle} f ,g {\rangle}_S = \iint\limits_{\Omega} \nabla f(x,y)\cdot \nabla g(x,y)\, W(x,y) \,dx\, dy + {\lambda} f(c_1,c_2)g(c_1,c_2),
\end{equation}
where ${\lambda} > 0$ and $(c_1,c_2)$ is a fixed point, typically a corner point of the product domain $\Omega$.
Sobolev orthogonal polynomials of one variable have been extensively studied (see the survey \cite{MX}).
In particular, polynomials that are orthogonal with respect to the one--variable analogue of the inner product
\eqref{eq:ipd-1} were analyzed in \cite{KLJ97}. In
contrast, the study of such polynomials in several variables is a fairly recent affair. In \cite{X08}, one of the earliest
studies in several variables, Sobolev orthogonal polynomials with respect to an inner product similar to \eqref{eq:ipd-1} on
the unit ball of $\mathbb{R}^d$ are constructed, where the discrete part could also be replaced by the integral on the boundary
of the ball. The
motivation of \cite{X08} came from a question from engineering that requires control over the gradient. Such
inner products appear naturally in the analysis of spectral methods for numerical solutions of partial differential equations
(cf. \cite{LX}), which motivates our study.
For the ordinary inner product on the product domain,
\begin{equation} \label{eq:ipd-2}
{\langle} f ,g {\rangle}_W = \iint\limits_{\Omega} f(x,y) g(x,y)\, W(x,y) \,dx\, dy,
\end{equation}
it is immediate that a basis of orthogonal polynomials of degree $n$ is given by
$p_k(w_1;x)p_{n-k}(w_2;y)$, $0\leqslant k \leqslant n$, where $p_k(w;x)$ denotes the orthogonal polynomial of degree $k$ with
respect to $w$. A moment's reflection shows, however, that Sobolev orthogonal polynomials with respect to the
inner product \eqref{eq:ipd-1} do not possess product structure. Our goal in this paper is to study the orthogonal
structure for the inner product \eqref{eq:ipd-1} on the product domain.
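A minimal computation makes the failure of product structure concrete (our own normalization: product Legendre weight, i.e., Gegenbauer with ${\alpha}={\beta}=1/2$, corner $(c_1,c_2)=(1,1)$, and ${\lambda}=1$): the degree-one product orthogonal polynomials $x$ and $y$ are not $S$-orthogonal to constants, nor to each other, because the gradient terms vanish while the point evaluation contributes ${\lambda}$.

```python
from sympy import symbols, integrate, diff, Rational, S

x, y = symbols('x y')

def ip_S(f, g, lam=S(1)):
    # Sobolev inner product (as in (1.2)) with normalized Legendre product
    # weight W = 1/4 on [-1,1]^2, point (c1, c2) = (1, 1), lambda = 1 (our choice)
    grad_dot = diff(f, x) * diff(g, x) + diff(f, y) * diff(g, y)
    main = integrate(integrate(grad_dot * Rational(1, 4), (x, -1, 1)), (y, -1, 1))
    return main + lam * f.subs({x: 1, y: 1}) * g.subs({x: 1, y: 1})

# x = p_1(w_1; x) p_0(w_2; y) and y = p_0(w_1; x) p_1(w_2; y): the gradient
# integrals vanish, so only the point-evaluation term lambda survives.
v1 = ip_S(x, S(1))  # <x, 1>_S = lambda, not 0
v2 = ip_S(x, y)     # <x, y>_S = lambda, not 0
```

Thus the classical product basis fails to be $S$-orthogonal already in degree one.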
Our main result provides a way to construct a basis of Sobolev orthogonal polynomials, complemented with an
algorithm that computes both orthogonal polynomials and their $L^2$ norm, when both weight functions $w_1$
and $w_2$ are self-coherent, which means that their monic orthogonal polynomials satisfy the relations of the form
\begin{equation} \label{eq:cohen-poly}
p_n (x)= \frac{p'_{n+1}(x)}{n+1} + a_n p'_{n} (x) + b_n p'_{n-1}(x), \qquad n \geqslant 1.
\end{equation}
Weight functions, or measures, that are self-coherent have been studied extensively and characterized. They are essentially the classical measures. In \cite{MBP} the authors proved that \eqref{eq:cohen-poly} characterizes classical orthogonal polynomials.
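For instance (an illustrative check, not part of the paper's algorithm), for monic Legendre polynomials, i.e., Gegenbauer with ${\alpha}=1/2$, relation \eqref{eq:cohen-poly} at $n=3$ holds with $a_3=0$ and $b_3=-3/35$, as one can confirm symbolically:

```python
from sympy import symbols, legendre, Poly, Rational, diff, expand, solve

x, a, b = symbols('x a b')

def monic_legendre(n):
    # Monic Legendre polynomial (Gegenbauer with alpha = 1/2)
    return Poly(legendre(n, x), x).monic().as_expr()

p2, p3, p4 = monic_legendre(2), monic_legendre(3), monic_legendre(4)

# Self-coherence at n = 3:  p_3 = p_4'/4 + a_3 * p_3' + b_3 * p_2'
residual = expand(p3 - diff(p4, x) / 4 - a * diff(p3, x) - b * diff(p2, x))

# Solve for (a_3, b_3) by matching coefficients of powers of x
sol = solve(Poly(residual, x).coeffs(), [a, b])
```

The solution $a_3=0$ reflects the symmetry of the Legendre weight, and the values agree with the classical identity $(2n+1)P_n = P'_{n+1}-P'_{n-1}$ rewritten in monic form.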
Our approach is to express the Sobolev orthogonal polynomials with respect to the inner product ${\langle} \cdot,\cdot {\rangle}_S$
in terms of a family of product polynomials, which are not, however, the product orthogonal polynomials with respect to \eqref{eq:ipd-2}, but product polynomials of the form $q_k(w_1;x) q_{n-k}(w_2;y)$, where $q_k(w)$ takes the form of
the right hand side of \eqref{eq:cohen-poly} without the derivative. In order to keep the idea transparent, we will not
work with the most general case to which our method applies, but will work primarily with two examples, product
Laguerre weight functions and product Gegenbauer weight functions, for which we work out our algorithms explicitly.
Some of our results can no doubt be extended from two variables to several variables. We choose to stay
with two variables to avoid complicated notation and keep the algorithm practical.
The paper is organized as follows. In the next section, we recall the basics for orthogonal polynomials of several
variables, and describe our strategy for constructing Sobolev orthogonal polynomials for the product
weight functions. The construction is worked out explicitly in the case of product Laguerre weight in
Section 3 and in the case of product Gegenbauer weight in Section 4.
\section{Constructing bases for Sobolev orthogonal polynomials}
\setcounter{equation}{0}
The basics of orthogonal polynomials in several variables are given in the first subsection. Sobolev
orthogonal polynomials for product measures are described in the second subsection, and the
strategy for constructing an orthogonal basis is discussed in the third subsection.
\subsection{Orthogonal polynomials of two variables}
Let $\Pi^2$ denote the space of polynomials in two real variables and, for $n = 0,1,2,\ldots$, let $\Pi_n^2$ denote the
subspace of polynomials of (total) degree at most $n$ in $\Pi^2$. For an inner product ${\langle} \cdot, \cdot {\rangle}$ defined
on $\Pi^2$, a polynomial $P \in \Pi_n^2$ is said to be orthogonal if ${\langle} P, Q{\rangle} =0$ for all $Q \in \Pi_{n-1}^2$.
Let $\mathcal{V}_n^2$ denote the space of orthogonal polynomials of total degree $n$ with respect to ${\langle} \cdot, \cdot {\rangle}$.
It is known that
$$
\dim \Pi_n^2 = \binom{n+2}{n} \quad \hbox{and} \quad \dim \mathcal{V}_n^2 = n+1.
$$
The space $\mathcal{V}_n^2$ can have many different bases. A basis $\{P_k^n: 0\leqslant k\leqslant n\}$ of $\mathcal{V}_n^2$
is called mutually orthogonal if ${\langle} P_k^n, P_j^n{\rangle} =0$ for $k \ne j$ and it is called orthonormal if, in addition,
${\langle} P_k^n, P_k^n{\rangle} =1$. Another polynomial basis that is of interest is the monic basis, for which $P_k^n(x,y) = x^{n-k} y^k
+ R_k^n(x,y)$, where $R_k^n \in \Pi_{n-1}^2$, $0\leqslant k \leqslant n$. It is often convenient to use the vector notation
$$
\mathbb{P}_n = \big(P^n_{0}, P^n_{1}, \ldots, P^n_{n} \big)^{{\mathsf{T}}},
$$
considered as a column vector, which we also regard as a set of orthogonal polynomials of degree $n$. In this
notation, ${\langle} \mathbb{P}_n, \mathbb{P}_m^{\mathsf{T}} {\rangle} =
\mathbf{H}_n \delta_{n,m}$, where $\mathbf{H}_n$ is a matrix of size $(n+1) \times (n+1)$, necessarily symmetric and positive definite. If the set $\mathbb{P}_n$ contains a mutually orthogonal basis then $\mathbf{H}_n$
is diagonal, and if it is orthonormal then $\mathbf{H}_n$ is the identity matrix.
For $W(x,y) = w_1(x) w_2(y)$ as in \eqref{eq:W}, we consider the inner product
$$
{\langle} f, g{\rangle}_W = c \int_\Omega f(x,y) g(x,y) W(x,y) dxdy,
$$
where $c$ is a normalization constant of $W$ so that ${\langle} 1, 1 {\rangle}_W =1$. A basis of $\mathcal{V}_n^2 (W)$ is given by
the product polynomials
\begin{equation} \label{eq:productOP}
P_k^n(x,y) := p_{n-k}(w_1;x)p_k(w_2;y), \qquad 0 \leqslant k \leqslant n,
\end{equation}
where $p_k(w_i;x) = x^k+ \ldots$ denotes the monic orthogonal polynomial with respect to $w_i$ on $[a_i,b_i]$.
Then $P_k^n$ is the monic orthogonal polynomial and $\{P_k^n: 0\leqslant k \leqslant n\}$ forms a mutually orthogonal basis of
$\mathcal{V}_n^2(W)$.
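This mutual orthogonality is immediate from Fubini's theorem; as a concrete check (our choice: Legendre weights and $n=2$), the three product polynomials of degree two are pairwise orthogonal and orthogonal to $\Pi_1^2$:

```python
from sympy import symbols, legendre, Poly, integrate

x, y = symbols('x y')

def monic_legendre(n, var):
    # Monic Legendre polynomial in the given variable
    return Poly(legendre(n, var), var).monic().as_expr()

def ip_W(f, g):
    # <f, g>_W for the normalized product Legendre weight W = 1/4 on [-1,1]^2
    return integrate(integrate(f * g, (x, -1, 1)), (y, -1, 1)) / 4

# Degree-2 monic product basis: P_k^2 = p_{2-k}(w_1; x) p_k(w_2; y), k = 0, 1, 2
basis = [monic_legendre(2 - k, x) * monic_legendre(k, y) for k in range(3)]

# Pairwise orthogonality, and orthogonality to Pi_1 (spanned by 1, x, y)
mutual = all(ip_W(basis[i], basis[j]) == 0
             for i in range(3) for j in range(i + 1, 3))
lower = all(ip_W(P, m) == 0 for P in basis for m in (1, x, y))
```

Each double integral factors into two univariate Legendre integrals, at least one of which vanishes.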
\subsection{Sobolev orthogonal polynomials}
For $i =1,2$, let $w_i$ be a weight function defined on the interval $[a_i,b_i]$, where $a_i = -\infty$ and $b_i = \infty$ are
infinity. For the product weight function $W$ in \eqref{eq:W}, let $\mathcal{V}_n^2(S)$ denote the space of Sobolev
orthogonal polynomials of degree $n$ with respect to the inner product ${\langle} \cdot,\cdot {\rangle}_S$ defined in
\eqref{eq:ipd-1}. Most of our work will be carried out for the following two examples.
\begin{exam} \label{ex:Laguerre}
For ${\alpha} > -1$, let $w_{\alpha}$ be the Laguerre weight function
$$
w_{\alpha}(x):= x^{\alpha} e^{-x}, \qquad x \in \mathbb{R}_+:= [0,\infty).
$$
For ${\alpha},{\beta} > -1$, let $W_{{\alpha},{\beta}}$ be the product Laguerre weight function defined by
$$
W_{{\alpha},{\beta}} (x,y):= w_{\alpha}(x)w_{\beta}(y), \qquad (x,y) \in \Omega: = \mathbb{R}_+^2.
$$
There is only one finite corner point of $\Omega$, and we consider the inner product
\begin{equation} \label{eq:Laguerre}
{\langle} f, g {\rangle}_{S}= c_{{\alpha},{\beta}} \int_{\mathbb{R}_+^2} \nabla f(x,y)\cdot \nabla g(x,y)\, W_{{\alpha},{\beta}}(x,y) \,dx\, dy + {\lambda} f(0,0)g(0,0),
\end{equation}
where ${\lambda} > 0$ is a fixed constant and $c_{{\alpha},{\beta}}= 1/ \int_{\mathbb{R}_+^2} W_{{\alpha},{\beta}}(x,y) \,dx\, dy $.
\end{exam}
\begin{exam} \label{ex:Gegen}
For ${\alpha} > -1/2$, let $u_{\alpha}$ be the Gegenbauer weight function
$$
u_{\alpha}(x):= (1-x^2)^{{\alpha} - \f12}, \qquad x \in [-1,1].
$$
For ${\alpha},{\beta} > -1/2$, let $U_{{\alpha},{\beta}}$ be the product Gegenbauer weight function defined by
$$
U_{{\alpha},{\beta}} (x,y):= u_{\alpha}(x)u_{\beta}(y), \qquad (x,y) \in \Omega: = [-1,1]^2.
$$
There are four corner points of $\Omega$ and we consider the inner product
\begin{align} \label{eq:Gegenbauer}
{\langle} f, g {\rangle}_{S}= c_{{\alpha},{\beta}} \int_{-1}^1 \int_{-1}^1 \nabla f(x,y)\cdot \nabla g(x,y)\, U_{{\alpha},{\beta}}(x,y) \,dx\, dy + {\lambda} f(1,1)g(1,1),
\end{align}
where ${\lambda} > 0$ is a fixed constant and $c_{{\alpha},{\beta}}= 1/ \int_{\Omega} U_{{\alpha},{\beta}}(x,y) \,dx\, dy $.
\end{exam}
For the inner product ${\langle} \cdot,\cdot {\rangle}_S$ in \eqref{eq:ipd-1}, we denote its main part by
\begin{align}\label{eq:ipd-nabla}
{\langle} f,g {\rangle}_\nabla : = & c \int_\Omega \nabla f(x,y) \cdot \nabla g(x,y) W(x,y) dxdy \\
= & {\langle} \partial_1f , \partial_1 g {\rangle}_W + {\langle} \partial_2 f , \partial_2 g {\rangle}_W. \notag
\end{align}
This is a bilinear form and it is an inner product on the linear space $\Pi^2 \backslash \mathbb{R}$ of polynomials
having a zero constant term. Let
$$
\mathcal{V}_n^2(S):= \mathcal{V}_n^2(S,W) \quad \hbox{and} \quad \mathcal{V}_n^2(\nabla):= \mathcal{V}_n^2(\nabla,W)
$$
denote the linear spaces of orthogonal polynomials of total degree $n$ associated with ${\langle} \cdot,\cdot {\rangle}_S$
and ${\langle} \cdot,\cdot {\rangle}_\nabla$, respectively.
Let $\mathsf{S}_k^n$ be the monic orthogonal polynomial of degree $n$ in $\mathcal{V}_n^2(S)$ that satisfies
$\mathsf{S}_k^n(x,y) - x^{n-k}y^k \in \Pi_{n-1}^2$ for $0 \leqslant k \leqslant n$. Likewise, for $n \geqslant 1$,
let $S_k^n$ be a monic orthogonal polynomial in $\mathcal{V}_n^2(\nabla)$.
\begin{thm}\label{sobolev-basis}
For $n\geqslant 1$, let $\{S_k^n: 0\leqslant k \leqslant n\}$ denote a monic orthogonal basis of
$\mathcal{V}_n^2(\nabla)$. Then, the monic orthogonal basis $\{\mathsf{S}_k^n: 0 \leqslant k \leqslant n\}$ of $\mathcal{V}_n^2(S)$
is given by $\mathsf{S}_0^0(x,y) =1$ and
$$
\mathsf{S}_k^n(x,y) = S_k^n(x,y) - S_k^n(c_1,c_2), \quad n\geqslant 1.
$$
\end{thm}
\begin{proof}
Since $\mathsf{S}_k^n$ and $S_k^n$ differ only by a constant, $\nabla \mathsf{S}_k^n = \nabla S_k^n$. Moreover, $\mathsf{S}_k^n(c_1,c_2) =0$, so the point-evaluation term in ${\langle} \cdot,\cdot {\rangle}_S$ vanishes and ${\langle} \mathsf{S}_k^n, \mathsf{S}_j^m {\rangle}_S = {\langle} S_k^n, S_j^m {\rangle}_\nabla$ for $n, m \geqslant 1$, which is zero for $(k,n) \neq (j,m)$ by the orthogonality of the $S_k^n$ in $\mathcal{V}_n^2(\nabla)$. Finally, ${\langle} \mathsf{S}_k^n, 1 {\rangle}_S = {\lambda} \, \mathsf{S}_k^n(c_1,c_2) = 0$, so $\mathsf{S}_k^n$ is also orthogonal to constants.
\end{proof}
This theorem shows that we only need to work with the bilinear form ${\langle} \cdot, \cdot {\rangle}_\nabla$ and on the linear space $\Pi^2 \backslash \mathbb{R}$. Observe that the orthogonal polynomials in $\mathcal{V}_n^2(\nabla)$ are determined up to an additive constant
$c$. Indeed, for any constant $c$, the polynomial $S_k^n + c$ is also a monic orthogonal polynomial in $\mathcal{V}_n^2(\nabla)$.
By Theorem \ref{sobolev-basis}, however, we only need to determine $S_k^n$ up to a constant. For convenience, we adopt
the following notation for two functions that are equal up to a constant:
$$
f(x,y) \c= g(x,y) \qquad \hbox{if} \quad f(x,y) - g(x,y) \equiv c,
$$
where $c \in \mathbb{R}$ is a generic constant.
\subsection{Strategy for constructing Sobolev orthogonal polynomials}
In order to construct the polynomial $S_k^n$, we expand it in terms of a known basis of polynomials denoted by
$\{Q_j^m: 0\leqslant j \leqslant m \leqslant n\}$,
\begin{equation} \label{eq:Skn=Qjm}
S_k^n (x,y) = \sum_{m=0}^n \sum_{j=0}^m a_{j,m}(k) Q_j^m (x,y),
\end{equation}
and determine the coefficients $a_{j,m}(k)$ by orthogonality. Since $S_k^n$ is determined only up to a constant, the equality in \eqref{eq:Skn=Qjm} is understood in the sense of $\c=$.
The choice of $Q_j^m$ clearly matters. An
obvious choice is the basis of product orthogonal polynomials $P_k^n$ in \eqref{eq:productOP}. This basis,
however, is not a good choice since we need to work with derivatives of the basis elements. This is where the
notion of coherent pair comes in.
A weight function $w$ defined on the real line is called self-coherent if its monic orthogonal polynomials $p_n(w)$
satisfy the relation
\begin{equation}\label{eq:coherent}
p_n(w;x) = \frac{p_{n+1}'(w;x)}{n+1} + a_n p_{n}'(w;x), \qquad n \geqslant 0,
\end{equation}
for some constants $a_n$. Furthermore, $w$ is called symmetric self-coherent, if $w$ is an even function and
its monic orthogonal polynomials $p_n(w)$ satisfy the relation
\begin{equation}\label{eq:symm-coherent}
p_n(w;x) = \frac{p_{n+1}'(w;x)}{n+1} + b_n p_{n-1}'(w;x), \qquad n \geqslant 1.
\end{equation}
More generally, we can call $w$ self-coherent if it satisfies \eqref{eq:cohen-poly}, that is,
$$
p_n(w;x) = \frac{p_{n+1}'(w;x)}{n+1} + a_n p_{n}'(w;x)+ b_n p_{n-1}'(w;x), \qquad n \geqslant 1.
$$
If $w$ is self-coherent, we denote by $q_n(w)$ the polynomial of degree $n$ defined by
\begin{equation}\label{eq:q_n}
q_{n} (w;x) = p_{n}(w;x) + n a_{n-1} p_{n-1}(w;x) + n b_{n-1} p_{n-2}(w;x),
\quad n \geqslant 1,
\end{equation}
where, by convention, $p_{-1}(w;x)=0$, so that the last term vanishes for $n=1$. It follows
directly from the definition that $q_n(w)$ is monic and
$$
q_n'(w;x) = n p_{n-1}(w;x).
$$
Notice that self-coherent orthogonal polynomials are essentially, up to a linear change of variable, the classical orthogonal polynomials (Jacobi, Laguerre and Hermite) as was proved in \cite{MBP}.
We now define the polynomials $Q_j^m$ of two variables by
\begin{equation}\label{eq:Q_jm}
Q_k^n(x,y): = q_{n-k}(w_1;x) q_k(w_2;y), \qquad 0 \leqslant k \leqslant n, \quad n =0,1,\ldots.
\end{equation}
The derivatives of $Q_k^n$ can be given explicitly in terms of product orthogonal polynomials $P_j^m$
in \eqref{eq:productOP}.
\begin{lem} \label{lem:Qkn}
Let $\partial_i$ denote the $i$-th partial derivative. Then
\begin{align*}
& \partial_1 Q_0^n(x,y) = n p_{n-1}(w_1;x) = n P_0^{n-1}(x,y) \quad \hbox{and} \quad \partial_2 Q_0^n (x,y) =0, \\
& \partial_1 Q_n^n(x,y) =0 \quad \hbox{and} \quad \partial_2 Q_n^n (x,y) = n p_{n-1}(w_2;y) = n P_{n-1}^{n-1}(x,y).
\end{align*}
Furthermore, for $1 \leqslant k \leqslant n-1$,
\begin{align*}
\partial_1 Q_k^n & = (n-k) \left( P_k^{n-1} +k a_{k-1}(w_2) P_{k-1}^{n-2} + k b_{k-1}(w_2) P_{k-2}^{n-3}\right), \\
\partial_2 Q_k^n & = k \left(P_{k-1}^{n-1} + (n-k) a_{n-k-1}(w_1) P_{k-1}^{n-2} +(n-k) b_{n-k-1}(w_1) P_{k-1}^{n-3}\right).
\end{align*}
\end{lem}
\begin{proof}
For $0 \leqslant k \leqslant n-1$, it follows directly from the definition of $Q_k^n$ that
$$
\partial_1 Q_k^n(x,y) = q_{n-k}'(w_1;x) q_k(w_2;y) = (n-k) p_{n-k-1}(w_1;x) q_k(w_2;y).
$$
Substituting $q_k(w_2;y)$ by its definition \eqref{eq:q_n}, the identity for $\partial_1 Q_k^n$ follows from
the definition of $P_j^m$. The other identities are proved similarly.
\end{proof}
Let $\mathbb{Q}_n = (Q_0^n, \ldots, Q_n^n)^{\mathsf{T}}$ and $\mathbb{S}_n = (S_0^n, \ldots, S_n^n)^{\mathsf{T}}$ denote the column
vector of polynomials $Q_k^n$ and $S_k^n$, respectively. Furthermore, let $e_i$ denote the standard Euclidean
coordinate vector whose $i$-th element is 1 and all other elements are 0.
\begin{thm} \label{thm:Q=S}
For $0 \leqslant k \leqslant n$, there exist real numbers $a_{i,k}$ and $b_{i,k}$ such that
\begin{equation} \label{eq:Qkn=Skn}
Q_k^n(x,y) \c= S_k^n(x,y) + \sum_{i=0}^{n-1} a_{i,k} S_i^{n-1}(x,y) + \sum_{i=0}^{n-2} b_{i,k} S_i^{n-2}(x,y).
\end{equation}
Moreover, in the case of $k =0$ and $k=n$, we have, respectively,
\begin{equation} \label{eq:Q0n=S0n}
S_0^n (x,y) \c= Q_0^n(x,y) \quad \hbox{and} \quad S_n^n (x,y) \c= Q_n^n(x,y).
\end{equation}
In terms of vector notation, \eqref{eq:Qkn=Skn} can be written as
\begin{equation} \label{eq:Qn=Sn}
\mathbb{Q}_n \c= \mathbb{S}_n + \mathbf{A}_{n-1} \mathbb{S}_{n-1} + \mathbf{B}_{n-2} \mathbb{S}_{n-2},
\end{equation}
where $\mathbf{A}_{n-1}$ and $\mathbf{B}_{n-2}$ are matrices of the form
$$
\mathbf{A}_{n-1}=\left[ \begin{array}{ccc}
0 & \dots & 0 \\ \hline & & \\
& \widetilde{\mathbf{A}}_{n-1} & \\
& & \\ \hline 0 & \dots & 0
\end{array}
\right] \quad \hbox{and} \quad \mathbf{B}_{n-2}=\left[ \begin{array}{ccc}
0 & \dots & 0 \\ \hline & & \\
& \widetilde{\mathbf{B}}_{n-2} & \\
& & \\ \hline 0 & \dots & 0
\end{array}
\right].
$$
Here $\widetilde \mathbf{A}_{n-1}$ and $\widetilde \mathbf{B}_{n-2}$ are matrices of size $(n-1)\times n$ and $(n-1) \times (n-1),$ respectively.
\end{thm}
\begin{proof}
If $k=0$ and $P$ is any polynomial in $\Pi_{n-1}^2$, then, by Lemma \ref{lem:Qkn},
$$
{\langle} Q_0^n, P {\rangle}_\nabla = {\langle} P_0^{n-1}, \partial_1 P {\rangle}_W = 0.
$$
Since the space $\{\partial_1 P: P \in \Pi_{n-1}^2\}$ is $\Pi_{n-2}^2$, this shows that
$Q_0^n \in \mathcal{V}_n^2(\nabla)$ and it is equal to $S_0^n$ as it is monic. The proof for $S_n^n$ is
similar. Moreover, if $1 \leqslant k \leqslant n-1$, it follows from Lemma \ref{lem:Qkn} that
$$
{\langle} Q_k^n, P {\rangle}_\nabla = {\langle} \partial_1 Q_k^{n}, \partial_1 P {\rangle}_W+ {\langle} \partial_2 Q_k^{n}, \partial_2 P {\rangle}_W =0
$$
for any polynomial $P$ of degree at most $n-3$. Consequently, $Q_k^n$ can be written as a linear combination of
the Sobolev orthogonal polynomials of degree $n, n-1$ and $n-2$. Since both $Q_k^n$ and $S_k^n$ are monic by
definition, \eqref{eq:Qkn=Skn} follows.
\end{proof}
To determine the matrices $\mathbf{A}_{n-1}$ and $\mathbf{B}_{n-2}$, we need to work with specific weight functions. The
simplest cases are the product Laguerre polynomials for which $\mathbf{B}_{n-2} =0$ and the product Gegenbauer
polynomials for which $\mathbf{A}_{n-1} =0$. These two cases will be worked out in detail in the next two sections.
\section{The product Laguerre weight}
\setcounter{equation}{0}
In this section we consider the product of Laguerre weight functions and the inner product
\eqref{eq:Laguerre}. The Laguerre polynomials are defined by (cf. \cite[Chapt V]{Szego})
\begin{eqnarray*}
L_n^{{\alpha}}(x) := \frac{({\alpha}+1)_n}{n!}{}_1F_1(-n;{\alpha}+1;x) = \frac{(-1)^n}{n!} x^n + \cdots
\end{eqnarray*}
and their orthogonality is given by
$$
{\langle} L_n^{{\alpha}}, L_m^{{\alpha}} {\rangle}_{w_{\alpha}} := \frac{1}{\Gamma({\alpha}+1)}\int_{0}^{+\infty} L_n^{{\alpha}}(x)L_m^{\alpha}(x)\,w_{\alpha}(x) dx
= \frac{({\alpha}+1)_n}{n!} {\delta}_{n,m},
$$
where $(a)_n = a(a+1) \cdots (a+n-1),$ $n\geqslant 1,$ $(a)_0= 1,$ is the Pochhammer symbol. Furthermore, they satisfy the relation
(\cite[p. 102]{Szego})
$$
L_n^{{\alpha}}(x) = -\frac{d}{dx}\, L_{n+1}^{{\alpha}}(x) + \frac{d}{dx} \, L_n^{{\alpha}}(x),
$$
which shows that the Laguerre weight function $w_{\alpha}$ is self-coherent. The monic Laguerre orthogonal polynomial
$p_n(w_{\alpha})$ and its squared $L^2$ norm are given by
$$
p_n(w_{\alpha}; x) := (-1)^n \,n!\, L_n^{{\alpha}}(x), \qquad h_n^{\alpha}: = {\langle} p_n(w_{\alpha}), p_n(w_{\alpha}){\rangle}_{w_{\alpha}} = n! \, ({\alpha}+1)_n.
$$
From these relations, it follows readily that the polynomial
$$
q_n(w_{\alpha}; x):= p_n(w_{\alpha}; x) + n p_{n-1}(w_{\alpha}; x)
$$
satisfies $q_n'(w_{\alpha}; x) = n p_{n-1}(w_{\alpha};x)$ for $n =0,1,2, \ldots$
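This derivative identity is easy to confirm symbolically. The sketch below (an illustration of ours, with an arbitrary sample value of ${\alpha}$) checks $q_n'(w_{\alpha};x) = n\,p_{n-1}(w_{\alpha};x)$ for the first few degrees.

```python
import sympy as sp

x = sp.symbols('x')
a = sp.Rational(1, 2)  # arbitrary sample value of alpha > -1

def p(n):
    """Monic Laguerre polynomial p_n(w_a; x) = (-1)^n n! L_n^a(x)."""
    return sp.expand((-1) ** n * sp.factorial(n) * sp.assoc_laguerre(n, a, x))

for n in range(1, 7):
    q = p(n) + n * p(n - 1)  # q_n = p_n + n p_{n-1}
    assert sp.expand(sp.diff(q, x)) == sp.expand(n * p(n - 1))
```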
We are now ready to define the polynomials in two variables for the product Laguerre weight function
$W_{{\alpha},{\beta}}$ on $\mathbb{R}_+^2$, with ${\alpha},{\beta} > -1$. We again denote the orthogonal polynomials by $P_k^n$,
$$
P_k^n(x,y) := p_{n-k}(w_{\alpha}; x) p_k(w_{\beta}; y), \qquad 0 \leqslant k \leqslant n.
$$
It follows readily that these are mutually orthogonal polynomials and
\begin{equation}\label{eq:hkn-Laguerre}
h_k^n: = {\langle} P_k^n, P_k^n {\rangle}_{W_{{\alpha},{\beta}}} = h^{{\alpha}}_{n-k}\,h^{{\beta}}_{k}=(n-k)!\,k!\,({\alpha}+1)_{n-k}\,({\beta}+1)_k.
\end{equation}
We also define the monic polynomial $Q_k^n$ by
$$
Q_k^n(x,y) := q_{n-k}(w_{\alpha}; x) q_k(w_{\beta}; y), \qquad 0 \leqslant k \leqslant n.
$$
In this setting, the partial derivatives in Lemma \ref{lem:Qkn} take the following form.
\begin{lem} \label{lem:Q-Laguerre}
For $1\leqslant k\leqslant n-1$, the following formulas hold:
\begin{align*}
\partial_1 \, Q_k^n(x,y) & = (n-k) \left[P_k^{n-1}(x,y) + k\, P_{k-1}^{n-2}(x,y)\right], \\
\partial_2 \, Q_k^n(x,y) & = k \left[P_{k-1}^{n-1}(x,y) + (n-k) \,P_{k-1}^{n-2}(x,y)\right].
\end{align*}
\end{lem}
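These formulas can be confirmed symbolically. The following sketch (ours, for illustration) checks the case $n=4$, $k=2$ at arbitrary sample values of ${\alpha}$ and ${\beta}$.

```python
import sympy as sp

x, y = sp.symbols('x y')
al, be = sp.Rational(1, 2), sp.Rational(1, 3)  # arbitrary sample alpha, beta > -1

def p(k, a, v):  # monic Laguerre p_k(w_a; v) = (-1)^k k! L_k^a(v)
    return sp.expand((-1) ** k * sp.factorial(k) * sp.assoc_laguerre(k, a, v))

def q(k, a, v):  # q_k = p_k + k p_{k-1}
    return p(k, a, v) + (k * p(k - 1, a, v) if k >= 1 else 0)

def P(k, n):  # product basis P_k^n(x,y)
    return p(n - k, al, x) * p(k, be, y)

def Q(k, n):  # Q_k^n(x,y) = q_{n-k}(w_alpha; x) q_k(w_beta; y)
    return sp.expand(q(n - k, al, x) * q(k, be, y))

n, k = 4, 2
d1 = (n - k) * (P(k, n - 1) + k * P(k - 1, n - 2))
d2 = k * (P(k - 1, n - 1) + (n - k) * P(k - 1, n - 2))
assert sp.expand(sp.diff(Q(k, n), x) - d1) == 0
assert sp.expand(sp.diff(Q(k, n), y) - d2) == 0
```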
Recall that $\mathcal{V}_n^2(\nabla, W_{{\alpha},{\beta}})$, $n \geqslant 1$, is the space of Sobolev orthogonal polynomials with respect to the bilinear form ${\langle} \cdot, \cdot {\rangle}_\nabla$ defined in \eqref{eq:ipd-nabla}. Let $S_k^n = x^{n-k} y^k + \cdots$ be
a monic orthogonal polynomial in $\mathcal{V}_n^2(\nabla, W_{{\alpha},{\beta}})$. Then relation \eqref{eq:Qn=Sn} becomes
\begin{equation}\label{eq:Q-S-Laguerre}
\mathbb{Q}_n \c= \mathbb{S}_n + \mathbf{A}_{n-1} \mathbb{S}_{n-1}.
\end{equation}
Our goal is to show how $\mathbf{A}_{n-1}$ can be explicitly computed. To this end, we need explicit formulas for
the inner products of the gradients of the polynomials $Q_k^n$. In the following we write ${\langle} \cdot,\cdot{\rangle} = {\langle} \cdot,\cdot{\rangle}_{W_{{\alpha},{\beta}}}$.
\begin{lem}\label{lem:Qoc} For $0\leqslant i \leqslant n$ and $0\leqslant l\leqslant m$,
\begin{align*}
{\langle} Q^n_i, Q^m_l {\rangle}_\nabla
=& \left[ l (m-l)^2 \, h^{m-2}_{l-1} \, {\delta}_{i,l-1} + l^2 (m-l) \, h^{m-2}_{l-1} \, {\delta}_{i,l} \right] {\delta}_{n,m-1} \\
&+ \left[(m-l)^2 \, h^{m-1}_{l} \, {\delta}_{i,l} +2 l^2 (m-l)^2 \, h^{m-2}_{l-1} \, {\delta}_{i,l} + l^2 h^{m-1}_{l-1} \, {\delta}_{i,l} \right] {\delta}_{n,m} \\
& + \left[ (l+1) (m-l)^2 \, h^{m-1}_{l} \, {\delta}_{i-1,l} + l^2 (m+1-l)\, h^{m-1}_{l-1} \, {\delta}_{i,l} \right] {\delta}_{n,m+1}.
\end{align*}
In particular,
\begin{align*}
{\langle} Q^n_0, Q^m_l {\rangle}_\nabla &= (m-1)^2 \, h^{m-2}_{0} \, {\delta}_{l,1} \, {\delta}_{n,m-1} + m^2 \, h^{m-1}_{0}\, {\delta}_{l,0} \, {\delta}_{n,m}, \\
{\langle} Q^n_n, Q^m_l {\rangle}_\nabla &= (m-1)^2 \, h^{m-2}_{m-2} \, {\delta}_{l,n} \, {\delta}_{n,m-1} + m^2 \, h^{m-1}_{m-1} \, {\delta}_{l,n} \, {\delta}_{n,m}.
\end{align*}
\end{lem}
\begin{proof}
Directly from the definition,
$$
{\langle} Q^n_i, Q^m_l {\rangle}_\nabla = {\langle} \nabla Q^n_i, \nabla Q^m_l {\rangle} ={\langle} \partial_1 Q^n_i, \partial_1 Q^m_l {\rangle} + {\langle} \partial_2 Q^n_i, \partial_2 Q^m_l {\rangle}.
$$
By Lemmas \ref{lem:Qkn} and \ref{lem:Q-Laguerre}, the inner product ${\langle} \partial_j Q^n_i, \partial_j Q^m_l {\rangle}$
can be computed by the orthogonality of $P_k^n$ and \eqref{eq:hkn-Laguerre}. For example,
\begin{align*}
{\langle} \partial_1 Q^n_i, \partial_1 Q^m_l {\rangle} = & (n-i) (m-l) \, {\langle} P^{n-1}_{i}, P^{m-1}_{l} {\rangle}
+ l (n-i) (m-l) \, {\langle} P^{n-1}_{i}, P^{m-2}_{l-1} {\rangle} \\
& + i (n-i) (m-l) \, {\langle} P^{n-2}_{i-1}, P^{m-1}_{l} {\rangle} + i l (n-i) (m-l) \, {\langle} P^{n-2}_{i-1}, P^{m-2}_{l-1} {\rangle} \\
= & (n-i) (m-l) \, h^{n-1}_{i} \, {\delta}_{i,l} \, {\delta}_{n,m} + l (n-i) (m-l) \, h^{n-1}_{i} \, {\delta}_{i,l-1} \, {\delta}_{n,m-1} \\
& + i (n-i) (m-l) \, h^{n-2}_{i-1} \, {\delta}_{i-1,l} \, {\delta}_{n-1,m} + i l (n-i) (m-l) \, h^{n-2}_{i-1} \, {\delta}_{i,l} \, {\delta}_{n,m}
\end{align*}
The other terms are computed similarly.
\end{proof}
\begin{cor}\label{cor:Qni-Qml}
For $0\leqslant i \leqslant n$, $0\leqslant l\leqslant m$, and $m\leqslant n-1$, it holds that
$$
{\langle} Q^n_i, Q^m_l {\rangle}_\nabla
= \left[ (l+1) (m-l)^2 \, h^{m-1}_{l} \, {\delta}_{i-1,l} + l^2 (m+1-l)\, h^{m-1}_{l-1} \, {\delta}_{i,l} \right] {\delta}_{n-1,m}.\\
$$
In particular,
\begin{align*}
{\langle} Q^n_0, Q^m_l {\rangle}_\nabla = 0 \quad \hbox{and} \quad {\langle} Q^n_n, Q^m_l {\rangle}_ \nabla = 0, \qquad m<n.
\end{align*}
\end{cor}
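As an illustration of Lemma \ref{lem:Qoc} and Corollary \ref{cor:Qni-Qml} (a sketch of ours, not needed for the proofs), the inner products can also be evaluated by direct integration; the following sympy check uses ${\alpha}={\beta}=0$, for which $c_{0,0}=1$ and $h_0^1 = 1$.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
W = sp.exp(-x - y)  # alpha = beta = 0, so c_{0,0} = 1

def p(k, v):  # monic Laguerre, alpha = 0
    return sp.expand((-1) ** k * sp.factorial(k) * sp.laguerre(k, v))

def q(k, v):  # q_k = p_k + k p_{k-1}
    return sp.expand(p(k, v) + k * p(k - 1, v)) if k >= 1 else sp.Integer(1)

def Q(k, n):  # Q_k^n(x,y) = q_{n-k}(x) q_k(y)
    return q(n - k, x) * q(k, y)

def ip_nabla(f, g):
    grad = sp.diff(f, x) * sp.diff(g, x) + sp.diff(f, y) * sp.diff(g, y)
    return sp.integrate(sp.expand(grad * W), (x, 0, sp.oo), (y, 0, sp.oo))

# i = l = 1, m = 2, n = 3: the formula gives l^2 (m+1-l) h_{l-1}^{m-1} = 1*2*1 = 2
assert ip_nabla(Q(1, 3), Q(1, 2)) == 2
# special case of the corollary: <Q_0^n, Q_l^m>_nabla = 0 for m < n
assert ip_nabla(Q(0, 3), Q(1, 2)) == 0
```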
To determine the matrix $\mathbf{A}_{n-1}$, we will need explicit forms of the following two matrices:
$$
\mathbf{C}_n: = {\langle} \mathbb{Q}_{n+1},\mathbb{Q}_n^{\mathsf{T}} {\rangle}_{\nabla} \quad \hbox{and}\quad \mathbf{D}_n: = {\langle} \mathbb{Q}_n,\mathbb{Q}_n^{\mathsf{T}} {\rangle}_{\nabla}.
$$
\begin{lem}
For $n =0,1,2,\ldots$, $\mathbf{D}_n$ is a diagonal matrix
\begin{equation}\label{Dn}
\mathbf{D}_n= \mathrm{diag}\{d_0^n, d_1^n, \ldots, d_n^n\},
\end{equation}
where
\begin{align*}
d_j^n = (n-j)^2 \, h_j ^{n-1} + j^2 \, h_{j-1}^{n-1} + 2j^2 (n-j)^2 \, h_{j-1}^{n-2}, \quad 0 \leqslant j \leqslant n,
\end{align*}
with $h_j^m$ as given in \eqref{eq:hkn-Laguerre}, and $\mathbf{C}_n$, of size $(n+2) \times (n+1)$, is a bidiagonal matrix,
\begin{equation}\label{eq:matrixC}
\mathbf{C}_n =
\left[ \begin{matrix}
0 & 0 & & \cdots & 0 \\
c_{1,0}^n & c_{1,1}^n & & & \\
& c_{2,1}^n & c_{2,2}^n & & \\
& & \ddots & \ddots & \\
& & & c_{n,n-1}^n & c_{n,n}^n \\
0 & \cdots & & 0 & 0
\end{matrix}
\right],
\end{equation}
where
\begin{align*}
c_{i,i}^n &= i^2 (n-i+1)\, h_{i-1}^{n-1}, \qquad 1\leqslant i \leqslant n,\\
c_{i+1,i}^n &= (i+1) (n-i)^2\, h_{i}^{n-1}, \qquad 0\leqslant i \leqslant n-1.
\end{align*}
\end{lem}
\begin{proof}
The formula for $\mathbf{D}_n$ follows directly from Lemma \ref{lem:Qoc}. Furthermore, by
Corollary \ref{cor:Qni-Qml}, for $1\leqslant i\leqslant n-1$,
$$
{\langle} \nabla Q^n_i, \nabla Q^{n-1}_l {\rangle}
= (l+1) (n-1-l)^2 \, h^{n-2}_{l} \, {\delta}_{i,l+1} + l^2 (n-l)\, h^{n-2}_{l-1} \, {\delta}_{i,l},
$$
which shows that $\mathbf{C}_n$ is a bidiagonal matrix whose first and last rows are zero.
\end{proof}
We are now ready to determine the matrix $\mathbf{A}_{n-1}$ in \eqref{eq:Q-S-Laguerre}.
\begin{thm} \label{thm_3.5}
Let $\mathbf{H}_n^\nabla: = {\langle} \mathbb{S}_n, \mathbb{S}_n^{{\mathsf{T}}} {\rangle}_\nabla$. Then $\mathbf{H}_n^\nabla$ satisfies the recursive relation
\begin{align}
\mathbf{H}_n^{\nabla}&= \mathbf{D}_n - \mathbf{C}_{n-1} (\mathbf{H}_{n-1}^{\nabla})^{-1} \mathbf{C}_{n-1}^{{\mathsf{T}}} \label{eq:H-rec-Lag},
\end{align}
where the iteration is initialized with $\mathbf{H}_1^\nabla = \mathbf{I}$, the $2 \times 2$ identity matrix.
Furthermore, for $n =1,2,\ldots$,
the matrix $\mathbf{A}_n$ in \eqref{eq:Q-S-Laguerre} is determined by
\begin{align}
\mathbf{A}_n = \mathbf{C}_{n} (\mathbf{H}_n^{\nabla})^{-1}. \label{eq:A-rec-Lag}
\end{align}
\end{thm}
\begin{proof}
Using the orthogonality of $\mathbb{S}_n$ and
the fact that $S_k^n - Q_k^n \in \Pi_{n-1}^2$, we obtain from \eqref{eq:Q-S-Laguerre} that
\begin{align*}
{\langle} \mathbb{S}_{n+1}, \mathbb{S}_{n}^{{\mathsf{T}}}{\rangle}_\nabla &= {\langle} \mathbb{Q}_{n+1}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - \mathbf{A}_n {\langle} \mathbb{S}_{n}, \mathbb{S}_{n}^{{\mathsf{T}}}{\rangle}_\nabla \\
& = {\langle} \mathbb{Q}_{n+1}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - \mathbf{A}_n {\langle} \mathbb{Q}_{n}, \mathbb{S}_{n}^{{\mathsf{T}}}{\rangle}_\nabla \\
& = {\langle} \mathbb{Q}_{n+1}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - \mathbf{A}_n {\langle} \mathbb{Q}_{n}, (\mathbb{Q}_{n}-\mathbf{A}_{n-1}\,\mathbb{S}_{n-1})^{\mathsf{T}} {\rangle}_\nabla,
\end{align*}
where we have used \eqref{eq:Q-S-Laguerre} once more. Hence, it follows that
\begin{align*}
{\langle} \mathbb{S}_{n+1}, \mathbb{S}_{n}^{{\mathsf{T}}}{\rangle}_\nabla &= {\langle} \mathbb{Q}_{n+1}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - \mathbf{A}_n {\langle} \mathbb{Q}_{n}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla
+ \mathbf{A}_n {\langle} \mathbb{Q}_{n}, \mathbb{S}_{n-1}^{{\mathsf{T}}}{\rangle}_\nabla \mathbf{A}_{n-1}^{{\mathsf{T}}} \\
&= {\langle} \mathbb{Q}_{n+1}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - \mathbf{A}_n {\langle} \mathbb{Q}_{n},\mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla + \mathbf{A}_n {\langle} \mathbb{Q}_{n}, \mathbb{Q}_{n-1}^{{\mathsf{T}}}{\rangle}_\nabla \mathbf{A}_{n-1}^{{\mathsf{T}}}.
\end{align*}
Consequently, from ${\langle} \mathbb{S}_{n+1}, \mathbb{S}_{n}^{{\mathsf{T}}}{\rangle}_\nabla=0$ we obtain
\begin{equation} \label{An-rec}
\mathbf{A}_n \left[{\langle} \mathbb{Q}_{n}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - {\langle} \mathbb{Q}_{n}, \mathbb{Q}_{n-1}^{{\mathsf{T}}}{\rangle}_ \nabla \mathbf{A}_{n-1}^{{\mathsf{T}}} \right]=
{\langle} \mathbb{Q}_{n+1}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla.
\end{equation}
Next we compute $\mathbf{H}_n^{\nabla} = {\langle} \mathbb{S}_n, \mathbb{S}_n^{{\mathsf{T}}} {\rangle}_\nabla$ by using \eqref{eq:Q-S-Laguerre} and the orthogonality of $\mathbb{S}_n$,
\begin{align} \label{Hn-rec}
\mathbf{H}_n^{\nabla} & = {\langle} \mathbb{Q}_n, \mathbb{S}_n^{{\mathsf{T}}} {\rangle}_ \nabla = {\langle} \mathbb{Q}_n, ( \mathbb{Q}_n-\mathbf{A}_{n-1} \mathbb{S}_{n-1})^{{\mathsf{T}}} {\rangle} _\nabla \\
& = {\langle} \mathbb{Q}_n, \mathbb{Q}_n^{{\mathsf{T}}} {\rangle}_\nabla - {\langle} \mathbb{Q}_n, \mathbb{S}_{n-1}^{{\mathsf{T}}} {\rangle}_\nabla \mathbf{A}_{n-1}^{{\mathsf{T}}} \notag \\
& = {\langle}\mathbb{Q}_n, \mathbb{Q}_n^{{\mathsf{T}}} {\rangle}_ \nabla - {\langle} \mathbb{Q}_n, \mathbb{Q}_{n-1}^{{\mathsf{T}}} {\rangle}_\nabla \mathbf{A}_{n-1}^{{\mathsf{T}}}. \notag
\end{align}
Since $\mathbf{H}_n^\nabla$ is nonsingular, substituting the above relation into \eqref{An-rec} proves \eqref{eq:A-rec-Lag}.
Furthermore, substituting \eqref{eq:A-rec-Lag} into \eqref{Hn-rec} shows that $\mathbf{H}_n^{\nabla}$ satisfies
\begin{align*}
\mathbf{H}_n^{\nabla}& = {\langle}\mathbb{Q}_n, \mathbb{Q}_n^{{\mathsf{T}}} {\rangle}_ \nabla - {\langle} \mathbb{Q}_n, \mathbb{Q}_{n-1}^{{\mathsf{T}}} {\rangle}_\nabla
( {\langle}\mathbb{Q}_{n},\mathbb{Q}_{n-1}^{\mathsf{T}} {\rangle}_\nabla (\mathbf{H}_{n-1}^{\nabla})^{-1})^{{\mathsf{T}}},
\end{align*}
which simplifies to \eqref{eq:H-rec-Lag} from the symmetry of $\mathbf{H}_{n-1}^{\nabla}$, and therefore completes the proof.
\end{proof}
The theorem shows that $\mathbf{H}_n^\nabla$, hence $\mathbf{A}_n$, can be determined iteratively.
Since $S_0^n = Q_0^n$ and $S_n^n = Q_n^n$, we only need to determine $S_k^n$ for $1 \leqslant k \leqslant n-1$. This
additional information is reflected in the matrix structure, as shown in Theorem \ref{thm:Q=S} and \eqref{eq:matrixC},
$$
\mathbf{A}_{n-1}=\left[
\begin{array}{ccc} 0 & \dots & 0 \\ \hline & & \\
& \widetilde{\mathbf{A}}_{n-1} & \\ & & \\ \hline 0 & \dots & 0
\end{array} \right]
\quad \hbox{and} \quad
\mathbf{C}_{n-1} = \left[
\begin{array}{ccc} 0 & \dots & 0 \\ \hline & & \\
& \widetilde \mathbf{C}_{n-1} & \\
& & \\ \hline 0 & \dots & 0
\end{array} \right],
$$
where $\widetilde{\mathbf{A}}_{n-1}$ and $\widetilde \mathbf{C}_{n-1}$ are matrices of size $(n-1)\times n$. These suggest a further
simplification in the iteration, which we now explore.
The matrix structure shows that
$$
\mathbf{H}_{n}^{\nabla} = \mathbf{D}_n - \mathbf{C}_{n-1} \mathbf{A}_{n-1}^{{\mathsf{T}}}
= \left[
\begin{array}{ccc}
d_0^n & & \\
& \widetilde{\mathbf{D}}_{n} & \\
& & d_n^{n}
\end{array}
\right] - \left[
\begin{array}{ccc}
0 & \cdots & 0 \\
\vdots & \widetilde \mathbf{C}_{n-1} \widetilde{\mathbf{A}}_{n-1}^{{\mathsf{T}}} & \vdots \\
0 & \cdots & 0
\end{array}
\right],
$$
which shows that the matrix $\mathbf{H}_n^\nabla$ takes the form
\begin{equation} \label{eq:htHn}
\mathbf{H}_n^\nabla = \left [ \begin{matrix} d_0^n & & 0 \\
& \widehat \mathbf{H}_n^\nabla & \\ 0 & & d_n^n \end{matrix} \right]
\quad \hbox{with} \quad \widehat{\mathbf{H}}_{n}^{\nabla} = \widetilde{\mathbf{D}}_n - \widetilde \mathbf{C}_{n-1} \widetilde{\mathbf{A}}_{n-1}^{{\mathsf{T}}}.
\end{equation}
Consequently, we only need to determine $\widehat \mathbf{H}_n^\nabla$. Let us further write
$$
\widetilde \mathbf{C}_n = \left[\begin{array}{c|c|c}
c_{1,0}^n & & 0 \\
\vdots & \widehat{\mathbf{C}}_{n} & \vdots \\
0 & & c_{n,n}^{n}
\end{array}
\right] \quad \hbox{with}\quad \widehat{\mathbf{C}}_{n} =
\left[\begin{matrix}
c_{1,1}^n & & & \bigcirc \\
c_{2,1}^n & c_{2,2}^n & & \\
& \ddots & \ddots & \\
& & c_{n-1,n-2}^n & c_{n-1,n-1}^n \\
\bigcirc & & & c_{n,n-1}^n
\end{matrix}\right].
$$
It then follows from $\mathbf{A}_n = \mathbf{C}_{n} \left(\mathbf{H}_{n}^{\nabla}\right)^{-1}$ at \eqref{eq:A-rec-Lag} that
$$
\widetilde{\mathbf{A}}_{n} = \widetilde \mathbf{C}_{n}
\left[
\begin{matrix}
(d_0^n)^{-1} & \ldots & 0 \\
& \left(\widehat{\mathbf{H}}_{n}^{\nabla}\right)^{-1} & \\
0 & \ldots & (d_n^{n})^{-1}
\end{matrix}
\right] = \left[ \begin{array}{c|c|c}
1 & & 0 \\ \vdots & \widehat{\mathbf{C}}_{n} \left(\widehat{\mathbf{H}}_{n}^{\nabla}\right)^{-1} & \vdots \\ 0 & & 1
\end{array}
\right],
$$
where we have used the fact that $c_{1,0}^{n} = d_0^n = n^2 h_0^{n-1}$ and $c_{n,n}^{n} = d_n^n = n^2 h_{n-1}^{n-1}$,
which follow directly from their explicit formulas. Consequently, we see that $\widetilde \mathbf{A}_n$ is of the form
\begin{equation} \label{eq:htAn}
\widetilde{\mathbf{A}}_{n} = \left[ \mathbf{e}_1 | \widehat{\mathbf{A}}_{n} | \mathbf{e}_n \right] \quad \hbox{with} \quad
\widehat{\mathbf{A}}_{n} = \widehat{\mathbf{C}}_{n} \left(\widehat{\mathbf{H}}_{n}^{\nabla}\right)^{-1},
\end{equation}
where $\mathbf{e}_1, \mathbf{e}_{n}$ are, respectively, the first and the last vector in the canonical basis of $\mathbb{R}^{n}$.
Consequently, it follows that
$$
\widetilde \mathbf{C}_{n-1} \widetilde{\mathbf{A}}_{n-1}^{{\mathsf{T}}} =d_{0}^{n-1} \mathbf{e}_1 \mathbf{e}_1^{{\mathsf{T}}} + \widehat{\mathbf{C}}_{n-1} \widehat{\mathbf{A}}_{n-1}^{{\mathsf{T}}} +
d_{n-1}^{n-1} \mathbf{e}_{n-1} \mathbf{e}_{n-1}^{{\mathsf{T}}}.
$$
We finally conclude by \eqref{eq:htHn} that the matrix $\widehat \mathbf{H}_n^{\nabla}$ satisfies the relation
$$
\widehat{\mathbf{H}}_{n}^{\nabla} = \widehat \mathbf{D}_n - \widehat \mathbf{C}_{n-1} \widehat{\mathbf{A}}_{n-1}^{{\mathsf{T}}},
$$
where $\widehat \mathbf{D}_n$ is the diagonal matrix
$$
\widehat{\mathbf{D}}_{n} = \widetilde \mathbf{D}_n - d_{0}^{n-1} \mathbf{e}_1 \mathbf{e}_1^{{\mathsf{T}}} - d_{n-1}^{n-1} \mathbf{e}_{n-1} \mathbf{e}_{n-1}^{{\mathsf{T}}}.
$$
Summing up, we have proved the following proposition.
\begin{prop}
Let $\widehat \mathbb{Q}_n : = (Q_1^n, \ldots, Q_{n-1}^n)^{\mathsf{T}}$ and $\widehat \mathbb{S}_n : =
(S_1^n, \ldots, S_{n-1}^n)^{\mathsf{T}}$. Then $\widehat \mathbf{H}_n^\nabla = {\langle} \widehat \mathbb{S}_n, \widehat \mathbb{S}_n^{\mathsf{T}} {\rangle}_\nabla$.
Furthermore, for $n =2,3,\ldots$,
\begin{equation} \label{eq:whQ=whS}
\widehat \mathbb{Q}_n \c= \widehat \mathbb{S}_n + \left[ \mathbf{e}_1 \big \vert \widehat{\mathbf{A}}_{n-1} \big \vert \mathbf{e}_{n-1} \right] \mathbb{S}_{n-1},
\end{equation}
where the matrices $\widehat \mathbf{A}_n$ of size $n \times (n-1)$ and $\widehat \mathbf{H}_n^{\nabla}$ of size $(n-1)\times(n-1)$ are determined
iteratively by
$$
\widehat{\mathbf{A}}_{n} = \widehat{\mathbf{C}}_{n} \big(\widehat{\mathbf{H}}_{n}^{\nabla}\big)^{-1} \quad \hbox{and}\quad
\widehat{\mathbf{H}}_{n}^{\nabla} = \widehat \mathbf{D}_n - \widehat \mathbf{C}_{n-1} \widehat{\mathbf{A}}_{n-1}^{{\mathsf{T}}}
$$
for $n=2,3,\ldots,$ with the starting point $\widehat \mathbf{A}_1 = 0$, so that $\widehat \mathbf{H}_2^{\nabla} = \widehat \mathbf{D}_2$.
\end{prop}
\begin{exam}
In the case of ${\alpha} = {\beta} =0$, the iterative algorithm gives
\begin{align*}
\widehat \mathbf{A}_2 & = \left[\begin{matrix} 1 \\ 1 \end{matrix}\right], \quad
\widehat \mathbf{H}_2 =\left[\begin{matrix} 2 \end{matrix}\right], \\
\widehat \mathbf{A}_3 & = \frac 1 4\left[\begin{matrix} 5 & 1 \\ 4 & 4 \\ 1 & 5 \end{matrix}\right], \quad
\widehat \mathbf{H}_3 =\left[\begin{matrix} 10 & -2 \\ -2 & 10 \end{matrix}\right], \\
\widehat \mathbf{A}_4 & = \frac 1{56}\left[\begin{matrix} 90 & 24 & 6 \\ 53 & 72 &11 \\ 11 & 72 & 53 \\
6 & 24 & 90 \end{matrix}\right], \quad
\widehat \mathbf{H}_4 = \left[\begin{matrix} 93 & -12 & -3 \\ -12 & 48 & -12 \\ -3 & -12 & 93 \end{matrix}\right].
\end{align*}
\end{exam}
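The matrices $\widehat \mathbf{H}_n^\nabla$ can be reproduced by a short exact-arithmetic script implementing the reduced iteration (a sketch for ${\alpha}={\beta}=0$; the helper names are ours, not the paper's notation).

```python
from fractions import Fraction
from math import factorial

def h(k, n):  # h_k^n for alpha = beta = 0, cf. eq. (hkn-Laguerre)
    return Fraction(factorial(n - k) * factorial(k)) ** 2

def d(j, n):  # diagonal entries of D_n; guards skip terms with zero coefficient
    s = Fraction(0)
    if j <= n - 1:
        s += (n - j) ** 2 * h(j, n - 1)
    if j >= 1:
        s += j ** 2 * h(j - 1, n - 1)
    if 1 <= j <= n - 1:
        s += 2 * j ** 2 * (n - j) ** 2 * h(j - 1, n - 2)
    return s

def hatC(n):  # interior block of C_n: rows 1..n, columns 1..n-1
    def entry(i, l):
        if i == l:
            return i ** 2 * (n - i + 1) * h(i - 1, n - 1)
        if i == l + 1:
            return (l + 1) * (n - l) ** 2 * h(l, n - 1)
        return Fraction(0)
    return [[entry(i, l) for l in range(1, n)] for i in range(1, n + 1)]

def hatD(n):  # hat D_n = tilde D_n - d_0^{n-1} e_1 e_1^T - d_{n-1}^{n-1} e_{n-1} e_{n-1}^T
    M = [[d(j, n) if i == j else Fraction(0) for j in range(1, n)] for i in range(1, n)]
    M[0][0] -= d(0, n - 1)
    M[n - 2][n - 2] -= d(n - 1, n - 1)
    return M

def solve(A, B):  # Gauss-Jordan solve of A X = B over the rationals
    m = len(A)
    M = [Ar[:] + Br[:] for Ar, Br in zip(A, B)]
    for c in range(m):
        pr = next(r for r in range(c, m) if M[r][c] != 0)
        M[c], M[pr] = M[pr], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(m):
            if r != c and M[r][c] != 0:
                M[r] = [u - M[r][c] * w for u, w in zip(M[r], M[c])]
    return [row[m:] for row in M]

def T(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(u * w for u, w in zip(row, col)) for col in zip(*B)] for row in A]

Hs, As = {2: hatD(2)}, {}  # hat H_2 = hat D_2 since hat A_1 = 0
for n in range(2, 5):
    As[n] = T(solve(Hs[n], T(hatC(n))))  # hat A_n = hat C_n (hat H_n)^{-1}
    CA = matmul(hatC(n), T(As[n]))
    Hs[n + 1] = [[u - w for u, w in zip(r1, r2)] for r1, r2 in zip(hatD(n + 1), CA)]

assert Hs[2] == [[2]] and As[2] == [[1], [1]]
assert Hs[3] == [[10, -2], [-2, 10]]
assert Hs[4] == [[93, -12, -3], [-12, 48, -12], [-3, -12, 93]]
```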
Once the matrices $\widehat \mathbf{A}_n$ are determined, the relation \eqref{eq:whQ=whS} can be used to
determine the Sobolev orthogonal polynomials $\mathbb{S}_n$ iteratively, since
$$
\widehat \mathbb{S}_n \c= \widehat \mathbb{Q}_n - Q_0^{n-1} \mathbf{e}_1 - Q_{n-1}^{n-1} \mathbf{e}_{n-1} - \widehat{\mathbf{A}}_{n-1} \widehat \mathbb{S}_{n-1},
$$
where we have used $S_0^{n-1} = Q_0^{n-1}$ and $S_{n-1}^{n-1} = Q_{n-1}^{n-1}$.
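For instance, with ${\alpha}={\beta}=0$ and $\widehat \mathbf{A}_2 = (1,1)^{\mathsf{T}}$ from the iteration, a few lines of sympy carry out this recursion for $n=2,3$ (a sketch with helper names of our own choosing).

```python
import sympy as sp

x, y = sp.symbols('x y')

def p(k, v):  # monic Laguerre, alpha = 0: p_k = (-1)^k k! L_k(v)
    return sp.expand((-1) ** k * sp.factorial(k) * sp.laguerre(k, v))

def q(k, v):  # q_k = p_k + k p_{k-1}
    return sp.expand(p(k, v) + k * p(k - 1, v)) if k >= 1 else sp.Integer(1)

def Q(k, n):  # Q_k^n(x,y) = q_{n-k}(x) q_k(y)
    return sp.expand(q(n - k, x) * q(k, y))

# n = 2: hat S_2 = hat Q_2 - Q_0^1 e_1 - Q_1^1 e_1 (hat A_1 = 0)
S12 = sp.expand(Q(1, 2) - Q(0, 1) - Q(1, 1))
# n = 3: hat S_3 = hat Q_3 - Q_0^2 e_1 - Q_2^2 e_2 - hat A_2 hat S_2
S13 = sp.expand(Q(1, 3) - Q(0, 2) - S12)
S23 = sp.expand(Q(2, 3) - Q(2, 2) - S12)

assert S12 == x*y - x - y
assert S13 == x**2*y - x**2 - 3*x*y + 3*x + y
assert S23 == S13.subs([(x, y), (y, x)], simultaneous=True)  # S_{n-k}^n(x,y) = S_k^n(y,x)
```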
We could also determine the polynomials $S_k^n$ directly by solving a linear system of equations. For this
purpose, we fix $k$, $1\leqslant k\leqslant n-1$, write
\begin{equation}\label{eq:Skn2=Qkn2}
S^n_k(x,y) \c= Q^n_k(x,y)+\sum_{j=1}^{n-1} \sum_{i=0}^j a_i^j \, Q_i^j(x,y)
\end{equation}
and determine the coefficients $a_i^j$ by the orthogonality conditions ${\langle} S_k^n, Q_l^m {\rangle}_\nabla =0$ for
$0 \leqslant l \leqslant m \leqslant n-1$, which are equivalent to the linear system of equations
$$
\sum_{j=1}^{n-1} \sum_{i=0}^j a_i^j \, {\langle} Q_i^j, Q^m_l {\rangle}_\nabla
= - {\langle} Q_k^n, Q^m_l {\rangle}_\nabla, \quad 0 \leqslant l \leqslant m \leqslant n-1.
$$
By Lemma \ref{lem:Qoc}, these equations become
\begin{align*}
& l(m-l)^2 h_{l-1}^{m-2}\, a_{l-1}^{m-1} + l^2(m-l) h_{l-1}^{m-2}\, a_l^{m-1} \\
&\qquad +\left[(m-l)^2 h_l^{m-1}+2(m-l)^2 l^2 h_{l-1}^{m-2}+l^2 h_{l-1}^{m-1}\right]\, a_l^m \\
&\qquad + l^2(m-l+1)h_{l-1}^{m-1}\, a_l^{m+1}+(l+1)(m-l)^2h_l^{m-1} \, a_{l+1}^{m+1}\\
&=-[(l+1)(m-l)^2h_l^{m-1} {\delta}_{k-1,l}+l^2(m+1-l)h_{l-1}^{m-1}{\delta}_{k,l}]{\delta}_{m,n-1}.
\end{align*}
Observe that for $m=n-1$ the terms involving $a_l^{m+1}$ and $a_{l+1}^{m+1}$ on the left-hand side do not appear, since $a_{l}^{n} = 0$ by definition.
Using $h_{l-1}^{m-1}=(m-l)({\alpha} +m-l) h_{l-1}^{m-2}$ and $h_l^{m-1}=l({\beta} +l) h_{l-1}^{m-2}$, the above
equations can be simplified to
\begin{align}\label{eq:rr}
&(m-l)\, a_{l-1}^{m-1} + l\, a_l^{m-1} +\left[l {\alpha} + (m-l){\beta} +4l(m-l) \right]\, a_l^m \\
& \qquad + l(m-l+1)({\alpha} +m-l) \, a_l^{m+1}+(l+1)(m-l) ({\beta} +l) \, a_{l+1}^{m+1} \notag \\
&= -[(l+1)(m-l) ({\beta} + l) {\delta}_{k,l+1}+l(m+1-l)({\alpha} +m-l) {\delta}_{k,l}]{\delta}_{m,n-1}. \notag
\end{align}
The indices $(l,m)$ of the unknowns $a_l^m$ range over the lattice $\Lambda_n: = \{(l,m): 0 \leqslant l \leqslant m \leqslant n-1\}$. For each $(l,m)$, the
equation \eqref{eq:rr} involves $a_l^m$ and its four neighbors $a_{l-1}^{m-1}$, $a_l^{m-1}$, $a_l^{m+1}$, and
$a_{l+1}^{m+1}$ in the lattice. In particular, for $l=0$ and $l=m$ we obtain the equations
\begin{align*}
a_0^m + a_1^{m+1} = - {\delta}_{k,1} \,{\delta}_{m,n-1},\qquad
a_m^m + a_m^{m+1} = - {\delta}_{k,m} \, {\delta}_{m,n-1}.
\end{align*}
By $a_{l}^{n}=0$, these equations can be written in an equivalent way as
\begin{align} \label{eq:ic}
\begin{split}
& a_0^{n-1} = -{\delta}_{k,1}, \qquad a_{n-1}^{n-1} = -{\delta}_{k,n-1}, \\
& a_0^m + a_1^{m+1} = 0, \qquad a_m^m + a_m^{m+1} = 0, \qquad 1\leqslant m \leqslant n-2.
\end{split}
\end{align}
These provide the boundary relations for the lattice $\Lambda_n$. Together, \eqref{eq:rr} and \eqref{eq:ic}
form a linear system of equations that can be solved for $\{a_l^m: 0\leqslant l\leqslant m \leqslant n-1\}$. Furthermore,
the relations in \eqref{eq:ic} allow us to combine some of the terms in the sum \eqref{eq:Skn2=Qkn2}.
We summarize the above consideration into the following proposition.
\begin{prop}
For $1\leqslant k \leqslant n-1$, the monic Sobolev polynomials are given by
\begin{align*}
S^n_k \c= & Q_k^n-{\delta}_{k,1}\, Q_0^{n-1}-{\delta}_{k,n-1}\, Q_{n-1}^{n-1}+ \sum_{j=1}^{n-2}a_0^j\,(Q_0^j-Q_1^{j+1}) \\
& + \sum_{j=1}^{n-2}a_j^j\,(Q_j^j-Q_j^{j+1})+ \sum_{j=4}^{n-1}\sum_{i=4}^j a_{i-2}^j \,Q_{i-2}^j,
\end{align*}
where the coefficients $a_i^j$ are solutions of \eqref{eq:rr} and \eqref{eq:ic}.
\end{prop}
\begin{exam}
For the case of ${\alpha}={\beta}=0$, the monic Laguerre--Sobolev orthogonal polynomials satisfy the relation
$$
S_{n-k}^n (x,y) = S_k^n(y,x), \qquad 0 \leqslant k \leqslant n.
$$
The following are these polynomials in lower degrees: $S_0^1(x,y) = x$,
\begin{align*}
&S_0^2(x,y) = x(x-2), \quad S_1^2(x,y) = x y -x-y, \\
&S_0^3(x,y) = x(x^2-6x +6), \quad S_1^3(x,y) = x^2 y - x^2 - 3 x y +3 x +y.
\end{align*}
\end{exam}
\begin{rem}
In the case of ${\alpha}={\beta}=0$ we have (see equation (5.2.1) in \cite{Szego})
$$
q_n(w_0;x) = (-1)^n \,n!\, L_n^{-1}(x) = (-1)^{n-1} \,(n-1)!\, x L_{n-1}^{1}(x),
$$
and therefore the constant term in $q_n(w_0;x)$ always vanishes for $n \geqslant 1$. Consequently,
in this case, the equations that hold \textit{modulo a constant}, that is, with $\c=$, in Theorem \ref{thm:Q=S} can
be replaced by the usual equal sign.
\end{rem}
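The vanishing constant term can be checked directly from the explicit Laguerre coefficients. The sketch below (the function name is ours; the coefficient formula $L_n^{({\alpha})}(x) = \sum_{k=0}^n \binom{n+{\alpha}}{n-k}\frac{(-x)^k}{k!}$ is classical) verifies in exact arithmetic that $L_n^{-1}(0)=0$ and that $n\,L_n^{-1}(x) = -x\,L_{n-1}^{1}(x)$ for small $n$:

```python
from fractions import Fraction as F
from math import comb, factorial

def laguerre(n, a):
    # ascending coefficients of L_n^{(a)}(x) = sum_k binom(n+a, n-k) (-x)^k / k!
    # (a is an integer here; math.comb returns 0 when n - k > n + a)
    return [F((-1) ** k * comb(n + a, n - k), factorial(k)) for k in range(n + 1)]

for n in range(1, 9):
    Ln = laguerre(n, -1)
    assert Ln[0] == 0                                   # constant term vanishes
    lhs = [n * c for c in Ln]                           # n L_n^{(-1)}(x)
    rhs = [F(0)] + [-c for c in laguerre(n - 1, 1)]     # -x L_{n-1}^{(1)}(x)
    assert lhs == rhs
```

Both assertions are normalization-independent, so they check the identity regardless of which monic constant is used for $q_n(u_0;\cdot)$.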
\section{The product Gegenbauer weight}
\setcounter{equation}{0}
In this section we study the product of Gegenbauer (or ultraspherical) weight functions
and the inner product \eqref{eq:Gegenbauer}. Let
$$
u_{{\alpha}}(x) := (1 - x^2)^{{\alpha} - \frac{1}{2}}, \qquad {\alpha} > -\tfrac{1}{2}.
$$
The classical Gegenbauer polynomials $C_n^{\lambda}$, defined by (\cite[Chapt IV]{Szego})
$$
C_n^{{\alpha}}(x) := \binom{n+2{\alpha}-1}{n}{}_2F_1\left(-n,n+2{\alpha};{\alpha}+\frac{1}{2};\frac{1-x}{2}\right) = 2^n\binom{n+{\alpha}-1}{n} \,x^n + \cdots
$$
are orthogonal with respect to the inner product
$$
{\langle} f, g {\rangle}_{u_{\alpha}} := \frac{\Gamma({\alpha}+1)}{\Gamma({\alpha}+1/2)\Gamma(1/2)}\int_{-1}^{1} f(x)g(x)\,u_{\alpha}(x) dx.
$$
More precisely, they satisfy
\begin{align*}
{\langle} C_n^{{\alpha}}, C_m^{{\alpha}} {\rangle}_{u_{\alpha}} = \frac{2^{1-2{\alpha}}\,{\alpha}\,\sqrt{\pi}}{\Gamma({\alpha}+1/2)\,\Gamma({\alpha})}\,\frac{\Gamma(n+2{\alpha})}{(n+{\alpha})\,n!} {\delta}_{n,m}.
\end{align*}
Moreover, they are self-coherent since they satisfy (\cite[(4.7.29) in p. 83]{Szego})
$$
2\,(n+{\alpha}) \,C_n^{{\alpha}}(x) = \frac{d}{dx}\, \left[C_{n+1}^{{\alpha}}(x) - C_{n-1}^{{\alpha}}(x)\right],\quad n\ge1.
$$
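This identity can be confirmed coefficientwise by generating the $C_n^{{\alpha}}$ from the classical three-term recurrence $(n+1)\,C_{n+1}^{{\alpha}}(x) = 2(n+{\alpha})\,x\,C_n^{{\alpha}}(x) - (n+2{\alpha}-1)\,C_{n-1}^{{\alpha}}(x)$, a standard fact we assume here; the helper names below are ours. A minimal sketch:

```python
from fractions import Fraction as F

def gegenbauer(N, a):
    # C_0^a, ..., C_N^a as ascending coefficient lists, built from the
    # recurrence (n+1) C_{n+1} = 2(n+a) x C_n - (n+2a-1) C_{n-1}
    C = [[F(1)], [F(0), 2 * a]]
    for n in range(1, N):
        xC = [F(0)] + C[n]
        prev = C[n - 1] + [F(0)] * (len(xC) - len(C[n - 1]))
        C.append([(2 * (n + a) * s - (n + 2 * a - 1) * t) / (n + 1)
                  for s, t in zip(xC, prev)])
    return C

def deriv(p):
    return [k * c for k, c in enumerate(p)][1:]

# check 2(n+a) C_n = d/dx [C_{n+1} - C_{n-1}] for several a
for a in (F(1, 2), F(1), F(5, 2)):
    C = gegenbauer(9, a)
    for n in range(1, 9):
        pad = C[n - 1] + [F(0)] * (len(C[n + 1]) - len(C[n - 1]))
        diff = [u - v for u, v in zip(C[n + 1], pad)]
        assert deriv(diff) == [2 * (n + a) * c for c in C[n]]
```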
Monic Gegenbauer orthogonal polynomials $p_n(u_{\alpha})$ are defined by
$$
p_n(u_{\alpha}; x) := 2^{-n} \,\binom{n+{\alpha}-1}{n}^{-1}\, C_n^{{\alpha}}(x),$$
and their $L^2$ norms are given by
$$ h_n^{\alpha}: = {\langle} p_n(u_{\alpha}), p_n(u_{\alpha}){\rangle}_{u_{\alpha}} = \frac{2^{1-2{\alpha}-2n}\,\sqrt{\pi}\,n!\,\Gamma({\alpha}+1)\,
\Gamma(n+2{\alpha})}{\Gamma({\alpha}+1/2)\,\Gamma(n+{\alpha})\,\Gamma(n+{\alpha}+1)}.
$$
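For ${\alpha}=1$ the norm formula reduces to $h_n^1 = 4^{-n}$, which can be verified exactly: the normalized weight $\frac{2}{\pi}\sqrt{1-x^2}$ has Catalan-number moments, and the monic polynomials satisfy $p_{n+1} = x\,p_n - \frac14\,p_{n-1}$ (both standard facts we assume; the helper names are ours). A minimal sketch:

```python
from fractions import Fraction as F
from math import comb

def monic_p(N):
    # monic Gegenbauer polynomials for alpha = 1 (monic Chebyshev U):
    # p_{n+1} = x p_n - p_{n-1}/4
    p = [[F(1)], [F(0), F(1)]]
    for n in range(1, N):
        xp = [F(0)] + p[n]
        prev = p[n - 1] + [F(0)] * (len(xp) - len(p[n - 1]))
        p.append([s - F(1, 4) * t for s, t in zip(xp, prev)])
    return p

def moment(k):
    # (2/pi) int_{-1}^1 x^k sqrt(1-x^2) dx: Catalan(k/2)/4^(k/2) for even k
    if k % 2:
        return F(0)
    m = k // 2
    return F(comb(2 * m, m), (m + 1) * 4 ** m)

def ip(u, v):
    # inner product against the normalized weight (2/pi) sqrt(1-x^2)
    prod = [F(0)] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, c in enumerate(v):
            prod[i + j] += a * c
    return sum(c * moment(k) for k, c in enumerate(prod))

p = monic_p(6)
assert all(ip(p[n], p[n]) == F(1, 4 ** n) for n in range(7))
assert ip(p[2], p[4]) == 0 and ip(p[1], p[3]) == 0
```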
From these relations, we deduce that the polynomial
$$
q_n(u_{\alpha}; x):= p_n(u_{\alpha}; x) + n\,b_{n-1}(\alpha)\, p_{n-2}(u_{\alpha}; x),
$$
where
$$
b_{n-1}(\alpha) = - \frac{(n-1)}{4\,(n+{\alpha}-1)\,(n+{\alpha}-2)}, \quad n\ge 2,
$$
satisfies $q_n'(u_{\alpha}; x) = n \, p_{n-1}(u_{\alpha};x)$ for $n =1,2, \ldots$
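The relation $q_n'(u_{\alpha}; x) = n\,p_{n-1}(u_{\alpha};x)$ can be verified in exact arithmetic for several values of ${\alpha}$, generating the monic polynomials from the three-term recurrence $p_{n+1} = x\,p_n - c_n\,p_{n-1}$ with $c_n = \frac{n(n+2{\alpha}-1)}{4(n+{\alpha})(n+{\alpha}-1)}$ (a standard fact we assume here; the helper names are ours):

```python
from fractions import Fraction as F

def monic_gegenbauer(N, a):
    # monic p_0..p_N from p_{n+1} = x p_n - c_n p_{n-1}
    p = [[F(1)], [F(0), F(1)]]
    for n in range(1, N):
        c = F(n) * (n + 2 * a - 1) / (4 * (n + a) * (n + a - 1))
        xp = [F(0)] + p[n]
        prev = p[n - 1] + [F(0)] * (len(xp) - len(p[n - 1]))
        p.append([s - c * t for s, t in zip(xp, prev)])
    return p

def q(p, n, a):
    # q_n = p_n + n b_{n-1}(a) p_{n-2} with b_{n-1}(a) = -(n-1)/(4(n+a-1)(n+a-2))
    bc = -F(n - 1) / (4 * (n + a - 1) * (n + a - 2))
    out = list(p[n])
    for k, ck in enumerate(p[n - 2]):
        out[k] += n * bc * ck
    return out

for a in (F(1), F(3, 2), F(2)):
    p = monic_gegenbauer(8, a)
    for n in range(2, 9):
        dq = [k * c for k, c in enumerate(q(p, n, a))][1:]
        assert dq == [n * c for c in p[n - 1]]   # q_n' = n p_{n-1}
```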
We define the product Gegenbauer weight function $U_{{\alpha},{\beta}}(x,y):= u_{\alpha}(x)\, u_{\beta}(y)$ on $[-1,1]\times [-1,1]$
for ${\alpha},{\beta} > -1/2$ and define monic product polynomials
$$
P_k^n(x,y) := p_{n-k}(u_{\alpha}; x) \, p_k(u_{\beta}; y), \qquad 0 \leqslant k \leqslant n.
$$
These are mutually orthogonal polynomials, and
\begin{equation}\label{eq:hkn-Gegenbauer}
h_k^n: = {\langle} P_k^n, P_k^n {\rangle}_{U_{{\alpha},{\beta}}} = h^{{\alpha}}_{n-k}\,h^{{\beta}}_{k}.
\end{equation}
We also define the monic polynomial $Q_k^n$ by
$$
Q_k^n(x,y) := q_{n-k}(u_{\alpha}; x) \,q_k(u_{\beta}; y), \qquad 0 \leqslant k \leqslant n.
$$
In this setting, the partial derivatives of $Q_k^n$ computed in Lemma \ref{lem:Qkn} become the following.
\begin{lem} \label{lem:Q-Gegenbauer}
For $1\leqslant k\leqslant n-1$,
\begin{align*}
\partial_1 \, Q_k^n(x,y) & = (n-k) \left[P_k^{n-1}(x,y) + k\,b_{k-1}(\beta)\, P_{k-2}^{n-3}(x,y)\right], \\
\partial_2 \, Q_k^n(x,y) & = k \left[P_{k-1}^{n-1}(x,y) + (n-k)\,b_{n-k-1}(\alpha) \,P_{k-1}^{n-3}(x,y)\right].
\end{align*}
\end{lem}
Denote by $\mathcal{V}_n^2(\nabla, U_{{\alpha},{\beta}})$, $n \ge 1$, the space of Sobolev orthogonal polynomials
with respect to the bilinear form ${\langle} \cdot, \cdot {\rangle}_\nabla$ defined in \eqref{eq:ipd-nabla}, and
let $S_k^n = x^{n-k} y^k + \cdots$ be
the monic orthogonal polynomials in $\mathcal{V}_n^2(\nabla, U_{{\alpha},{\beta}})$. In this case, relation \eqref{eq:Qn=Sn}
becomes
\begin{equation}\label{eq:Q-S-Gegenbauer}
\mathbb{Q}_n \c= \mathbb{S}_n + \mathbf{B}_{n-2} \mathbb{S}_{n-2}.
\end{equation}
To compute $\mathbf{B}_{n-2}$ explicitly, we need explicit formulas for
the inner products of the gradients of the polynomials $Q_k^n$. In order to simplify the expressions,
from now on we will write ${\langle} \cdot,\cdot{\rangle} = {\langle} \cdot,\cdot{\rangle}_{U_{{\alpha},{\beta}}}$.
\begin{lem}\label{lem:Qoc-G} For $0\leqslant i\leqslant n$ and $0\leqslant l \leqslant m$,
\begin{align*}
{\langle} Q^n_i, Q^m_l {\rangle}_\nabla
=& {\delta}_{n,m+2} \left[ (m-l)^2(l+2)\, b_{l+1}(\beta) \, h^{m-1}_{l} \, {\delta}_{i,l+2} \right. \\
& \qquad \left. + l^2 (m-l+2)\, b_{m-l+1}(\alpha) \, h^{m-1}_{l-1} \, {\delta}_{i,l} \right] \\
& + {\delta}_{n,m} \left[(m-l)^2 \, h^{m-1}_{l} \, {\delta}_{i,l} + l^2 (m-l)^2\,b^2_{l-1}(\beta) \, h^{m-3}_{l-2} \, {\delta}_{i,l} \right. \\
& \qquad \left. + l^2 h^{m-1}_{l-1} \, {\delta}_{i,l} + l^2 (m-l)^2\,b^2_{m-l-1}(\alpha) \, h^{m-3}_{l-1} \, {\delta}_{i,l} \right] \\
& + {\delta}_{n,m-2} \left[ l (m-l)^2\, b_{l-1}(\beta) \, h^{m-3}_{l-2} \, {\delta}_{i,l-2} \right. \\
& \qquad \left. + l^2 (m-l)\, b_{m-l-1}(\alpha)\, h^{m-3}_{l-1} \, {\delta}_{i,l} \right].
\end{align*}
In particular,
\begin{align*}
{\langle} Q^n_0, Q^m_l {\rangle}_\nabla &= 2 (m-2)^2\, b_1(\beta) \, h^{m-1}_{0} \, {\delta}_{l,2} \, {\delta}_{n,m-2} + m^2 \, h^{m-1}_{0} \, {\delta}_{l,0}\, {\delta}_{n,m}. \\
{\langle} Q^n_n, Q^m_l {\rangle}_\nabla &= 2(m-2)^2\, b_1(\alpha) \, h_{m-1}^{m-1}\,{\delta}_{l,n}\, {\delta}_{n,m-2} + m^2 \, h^{m-1}_{m-1} \, {\delta}_{l,n}\, {\delta}_{n,m}.
\end{align*}
\end{lem}
The proof is analogous to that of Lemma \ref{lem:Qoc}.
\begin{cor}\label{cor:Qni-Qml-G}
For $0\leqslant i \leqslant n$, $0\leqslant l\leqslant m$, and $m\leqslant n-1$, it holds that
\begin{align*}
{\langle} Q^n_i, Q^m_l {\rangle}_\nabla
=\, & {\delta}_{n,m+2} \left[ (m-l)^2(l+2)\, b_{l+1}(\beta) \, h^{m-1}_{l} \, {\delta}_{i,l+2} \right. \\
& \qquad \left.+ l^2 (m-l+2)\, b_{m-l+1}(\alpha) \, h^{m-1}_{l-1} \, {\delta}_{i,l} \right].
\end{align*}
In particular, \begin{align*}
{\langle} Q^n_0, Q^m_l {\rangle}_\nabla &= 0 \quad \hbox{and} \quad {\langle} Q^n_n, Q^m_l {\rangle}_ \nabla = 0, \qquad m<n.
\end{align*}
\end{cor}
To determine the matrix $\mathbf{B}_{n-2}$, we will need explicit forms of the following two matrices:
$$
\mathbf{C}_n: = {\langle} \mathbb{Q}_{n+2},\mathbb{Q}_n^{\mathsf{T}} {\rangle}_{\nabla} \quad \hbox{and}\quad \mathbf{D}_n: = {\langle} \mathbb{Q}_n,\mathbb{Q}_n^{\mathsf{T}} {\rangle}_{\nabla}.
$$
\begin{lem}
For $n =0,1,2,\ldots$, $\mathbf{D}_n$ is a diagonal matrix
\begin{equation}
\mathbf{D}_n= \mathrm{diag}\{d_0^n, d_1^n, \ldots, d_n^n\},
\end{equation}
where, for $0 \leqslant j \leqslant n$,
\begin{align*}
d_j^n &= (n-j)^2 \, h_j ^{n-1} + j^2 (n-j)^2\, b^2_{j-1}(\beta) \, h_{j-2}^{n-3} \\
&\quad + j^2 \, h_{j-1}^{n-1} + j^2 (n-j)^2\, b^2_{n-j-1}(\alpha) \, h_{j-1}^{n-3},
\end{align*}
with $h_j^m$ as given in \eqref{eq:hkn-Gegenbauer}, and $\mathbf{C}_n$ is a bidiagonal matrix of size $(n+3) \times (n+1)$,
\begin{equation}\label{eq:matrixC-G}
\mathbf{C}_n =
\left[ \begin{matrix}
0 & 0 & 0 & \cdots & 0 \\
0 & c_{1,1}^n & 0 & & \\
c_{2,0}^n & 0 & c_{2,2}^n & & \\
& \ddots & \ddots & \ddots & \\
& & & 0 & c_{n,n}^n \\
& & & c_{n+1,n-1}^n & 0 \\
0 & \cdots & & 0 & 0
\end{matrix}
\right],
\end{equation}
where
\begin{align*}
c_{l,l}^n &= {\langle} Q^{n+2}_l, Q^n_l {\rangle}_{\nabla} =
l^2 (n-l+2)\, b_{n-l+1}({\alpha})\, h_{l-1}^{n-1}, \qquad 0 \leqslant l \leqslant n\\
c_{l+2,l}^n &= {\langle} Q^{n+2}_{l+2}, Q^n_l {\rangle}_{\nabla} =
(l+2) (n-l)^2\, b_{l+1}({\beta})\, h_l^{n-1}, \qquad 0\leqslant l \leqslant n.
\end{align*}
\end{lem}
\begin{proof}
The formula for $\mathbf{D}_n$ follows directly from Lemma \ref{lem:Qoc-G}. Furthermore, by
Corollary \ref{cor:Qni-Qml-G}, for $0\leqslant i\leqslant n+2$,
$$
{\langle} Q^{n+2}_i, Q^n_l {\rangle}_{\nabla}
= (l+2)(n-l)^2\, b_{l+1}({\beta}) \, h^{n-1}_{l} \, {\delta}_{i,l+2} + l^2 (n-l+2)\, b_{n-l+1}({\alpha})\, h^{n-1}_{l-1} \, {\delta}_{i,l},
$$
which shows that $\mathbf{C}_n$ is a bidiagonal matrix and its first and last row are zero.
\end{proof}
Now we can compute the matrix $\mathbf{B}_{n-2}$ in \eqref{eq:Q-S-Gegenbauer}.
\begin{thm}
Let $\mathbf{H}_n^\nabla: = {\langle} \mathbb{S}_n, \mathbb{S}_n {\rangle}_\nabla$. Then $\mathbf{H}_n^\nabla$ satisfies the recursive relation
\begin{align}
\mathbf{H}_n^{\nabla}&= \mathbf{D}_n - \mathbf{C}_{n-2} (\mathbf{H}_{n-2}^{\nabla})^{-1} \mathbf{C}_{n-2}^{{\mathsf{T}}} \label{eq:H-rec-Geg},
\end{align}
where the iteration is initiated by $\mathbf{H}_1^\nabla = \mathbf{I}$, the identity matrix,
and $\mathbf{H}_2^\nabla = \mathbf{D}_2$.
Furthermore, for $n =1,2,\ldots$,
the matrix $\mathbf{B}_n$ in \eqref{eq:Q-S-Gegenbauer} is determined by
\begin{align}
\mathbf{B}_n = \mathbf{C}_{n} (\mathbf{H}_n^{\nabla})^{-1}. \label{eq:B-rec-Geg}
\end{align}
\end{thm}
\begin{proof} This is similar to the proof of Theorem~\ref{thm_3.5}. Using \eqref{eq:Q-S-Gegenbauer} twice we obtain
\begin{align*}
{\langle} \mathbb{S}_{n+2}, \mathbb{S}_{n}^{{\mathsf{T}}}{\rangle}_\nabla &= {\langle} \mathbb{Q}_{n+2}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - \mathbf{B}_n {\langle} \mathbb{S}_{n}, \mathbb{S}_{n}^{{\mathsf{T}}}{\rangle}_\nabla \\
& = {\langle} \mathbb{Q}_{n+2}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - \mathbf{B}_n {\langle} \mathbb{Q}_{n}, (\mathbb{Q}_{n}-\mathbf{B}_{n-2}\,\mathbb{S}_{n-2})^{\mathsf{T}} {\rangle}_\nabla\\
&= {\langle} \mathbb{Q}_{n+2}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - \mathbf{B}_n {\langle} \mathbb{Q}_{n},\mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla + \mathbf{B}_n {\langle} \mathbb{Q}_{n}, \mathbb{Q}_{n-2}^{{\mathsf{T}}}{\rangle}_\nabla \mathbf{B}_{n-2}^{{\mathsf{T}}}.
\end{align*}
From ${\langle} \nabla \mathbb{S}_{n+2}, \nabla \mathbb{S}_{n}^{{\mathsf{T}}}{\rangle}=0$ we then deduce
\begin{equation} \label{Bn-rec}
{\langle} \mathbb{Q}_{n+2}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla = \mathbf{B}_n \left[{\langle} \mathbb{Q}_{n}, \mathbb{Q}_{n}^{{\mathsf{T}}}{\rangle}_\nabla - {\langle} \mathbb{Q}_{n}, \mathbb{Q}_{n-2}^{{\mathsf{T}}}{\rangle}_ \nabla \mathbf{B}_{n-2}^{{\mathsf{T}}} \right].
\end{equation}
Next we compute $\mathbf{H}_n^{\nabla} = {\langle} \mathbb{S}_n, \mathbb{S}_n^{{\mathsf{T}}} {\rangle}_\nabla$ by using \eqref{eq:Q-S-Gegenbauer} and the orthogonality of $\mathbb{S}_n$,
\begin{align} \label{Hn-Geg-rec}
\mathbf{H}_n^{\nabla} & = {\langle} \mathbb{Q}_n, \mathbb{S}_n^{{\mathsf{T}}} {\rangle}_ \nabla = {\langle} \mathbb{Q}_n, ( \mathbb{Q}_n-\mathbf{B}_{n-2} \mathbb{S}_{n-2})^{{\mathsf{T}}} {\rangle} _\nabla \\
& = {\langle}\mathbb{Q}_n, \mathbb{Q}_n^{{\mathsf{T}}} {\rangle}_ \nabla - {\langle} \mathbb{Q}_n, \mathbb{Q}_{n-2}^{{\mathsf{T}}} {\rangle}_\nabla \mathbf{B}_{n-2}^{{\mathsf{T}}}. \notag
\end{align}
Since $\mathbf{H}_n^\nabla$ is nonsingular, substituting the above relation into \eqref{Bn-rec} proves \eqref{eq:B-rec-Geg}.
Finally, substituting \eqref{eq:B-rec-Geg} into \eqref{Hn-Geg-rec} shows \eqref{eq:H-rec-Geg}.
\end{proof}
The previous theorem shows that $\mathbf{H}_n^\nabla$ and $\mathbf{B}_n$ can be determined iteratively.
Since $S_0^n \c= Q_0^n$ and $S_n^n \c= Q_n^n$, we only need to determine $S_k^n$ for $1 \leqslant k \leqslant n-1$. The matrix structure reflects this information, as shown in Theorem \ref{thm:Q=S} and \eqref{eq:matrixC-G}; in fact we have
$$
\mathbf{B}_{n-2}=\left[
\begin{array}{ccc} 0 & \dots & 0 \\ \hline & & \\
& \widetilde{\mathbf{B}}_{n-2} & \\ & & \\ \hline 0 & \dots & 0
\end{array} \right]
\quad \hbox{and} \quad
\mathbf{C}_{n-2} = \left[
\begin{array}{ccc} 0 & \dots & 0 \\ \hline & & \\
& \widetilde \mathbf{C}_{n-2} & \\
& & \\ \hline 0 & \dots & 0
\end{array} \right],
$$
where $\widetilde{\mathbf{B}}_{n-2}$ and $\widetilde \mathbf{C}_{n-2}$ are matrices of size $(n-1)\times (n-1)$.
We now proceed as in Section 3 to simplify the iteration process.
In block form, the recursion reads
$$
\mathbf{H}_{n}^{\nabla} = \mathbf{D}_n - \mathbf{C}_{n-2} \mathbf{B}_{n-2}^{{\mathsf{T}}}
= \left[
\begin{array}{ccc}
d_0^n & & \\
& \widetilde{\mathbf{D}}_{n} & \\
& & d_n^{n}
\end{array}
\right] - \left[
\begin{array}{ccc}
0 & \cdots & 0 \\
\vdots & \widetilde \mathbf{C}_{n-2} \widetilde{\mathbf{B}}_{n-2}^{{\mathsf{T}}} & \vdots \\
0 & \cdots & 0
\end{array}
\right],
$$
which shows that the matrix $\mathbf{H}_n^\nabla$ takes the form
\begin{equation} \label{eq:htHn-G}
\mathbf{H}_n^\nabla = \left [ \begin{matrix} d_0^n & & 0 \\
& \widehat \mathbf{H}_n^\nabla & \\ 0 & & d_n^n \end{matrix} \right]
\quad \hbox{with} \quad \widehat{\mathbf{H}}_{n}^{\nabla} = \widetilde{\mathbf{D}}_n - \widetilde \mathbf{C}_{n-2} \widetilde{\mathbf{B}}_{n-2}^{{\mathsf{T}}},
\end{equation}
and we only need to determine $\widehat \mathbf{H}_n^\nabla$. If we write
$$
\widetilde \mathbf{C}_n = \left[\begin{array}{c|c|c}
0 & & 0 \\
c_{2,0}^n & & 0 \\
\vdots & \widehat{\mathbf{C}}_{n} & \vdots \\
0 & & c_{n,n}^{n}\\
0 & & 0
\end{array}
\right] \quad \hbox{with}\quad
\widehat{\mathbf{C}}_{n} =
\left[ \begin{matrix}
c_{1,1}^n & 0 & & \\
0 & c_{2,2}^n & & \\
c_{3,1}^n & 0 & & \\
& \ddots & \ddots & \ddots \\
& & 0 & c_{n-1,n-1}^n \\
& & c_{n,n-2}^n & 0 \\
& & 0 & c_{n+1,n-1}^n
\end{matrix}
\right],
$$
then from $\mathbf{B}_n = \mathbf{C}_{n} \left(\mathbf{H}_{n}^{\nabla}\right)^{-1}$ at \eqref{eq:B-rec-Geg} we conclude
$$
\widetilde{\mathbf{B}}_{n} = \widetilde \mathbf{C}_{n}
\left[
\begin{matrix}
(d_0^n)^{-1} & \ldots & 0 \\
& \left(\widehat{\mathbf{H}}_{n}^{\nabla}\right)^{-1} & \\
0 & \ldots & (d_n^{n})^{-1}
\end{matrix}
\right] = \left[ \begin{array}{c|c|c}
0 & & 0 \\
2 b_1({\beta}) & & 0 \\ \vdots & \widehat{\mathbf{C}}_{n} \left(\widehat{\mathbf{H}}_{n}^{\nabla}\right)^{-1} & \vdots \\
0 & & 2 b_1({\alpha}) \\
0 & & 0
\end{array}
\right],
$$
where we use
\begin{align*}
c_{2,0}^{n} &= 2 b_1({\beta})\, n^2 h_0^{n-1}, \qquad d_0^n = n^2 h_0^{n-1}, \\
c_{n,n}^{n} &= 2 b_1({\alpha})\, n^2 h_{n-1}^{n-1}, \qquad d_n^n = n^2 h_{n-1}^{n-1}.
\end{align*}
Consequently, we see that $\widetilde \mathbf{B}_n$ is of the form
\begin{equation} \label{eq:htBn}
\widetilde{\mathbf{B}}_{n} = \left[ 2 b_1({\beta}) \mathbf{e}_2 | \widehat{\mathbf{B}}_{n} | 2 b_1({\alpha}) \mathbf{e}_{n} \right] \quad \hbox{with} \quad
\widehat{\mathbf{B}}_{n} = \widehat{\mathbf{C}}_{n} \left(\widehat{\mathbf{H}}_{n}^{\nabla}\right)^{-1},
\end{equation}
where $\mathbf{e}_2$ and $\mathbf{e}_{n}$ are, respectively, the second and the second-to-last vectors of the canonical basis
of $\mathbb{R}^{n+1}$. It follows that
$$
\widetilde \mathbf{C}_{n-2} \widetilde{\mathbf{B}}_{n-2}^{{\mathsf{T}}} = 4 b_1^2({\beta})d_{0}^{n-2} \mathbf{e}_2 \mathbf{e}_2^{{\mathsf{T}}} + \widehat{\mathbf{C}}_{n-2} \widehat{\mathbf{B}}_{n-2}^{{\mathsf{T}}} +
4 b_1^2({\alpha}) d_{n-2}^{n-2} \mathbf{e}_{n-2} \mathbf{e}_{n-2}^{{\mathsf{T}}}.
$$
We finally conclude by \eqref{eq:htHn-G} that the matrix $\widehat \mathbf{H}_n^{\nabla}$ satisfies the relation
$$
\widehat{\mathbf{H}}_{n}^{\nabla} = \widehat \mathbf{D}_n - \widehat \mathbf{C}_{n-2} \widehat{\mathbf{B}}_{n-2}^{{\mathsf{T}}},
$$
where $\widehat \mathbf{D}_n$ is the diagonal matrix
$$
\widehat{\mathbf{D}}_{n} = \widetilde \mathbf{D}_n - 4 b_1^2({\beta}) d_{0}^{n-2} \mathbf{e}_2 \mathbf{e}_2^{{\mathsf{T}}} - 4 b_1^2({\alpha}) d_{n-2}^{n-2} \mathbf{e}_{n-2} \mathbf{e}_{n-2}^{{\mathsf{T}}}.
$$
Summing up, we have proved the following proposition.
\begin{prop}
Let $\widehat \mathbb{Q}_n : = (Q_1^n, \ldots, Q_{n-1}^n)$ and $\widehat \mathbb{S}_n : =
(S_1^n, \ldots, S_{n-1}^n)$. Then $\widehat \mathbf{H}_n^\nabla = {\langle} \widehat \mathbb{S}_n, \widehat \mathbb{S}_n^{\mathsf{T}} {\rangle}_\nabla$.
Furthermore, for $n =3,4,\ldots$,
\begin{equation} \label{eq:whQ=whS-G}
\widehat \mathbb{Q}_n \c= \widehat \mathbb{S}_n + \left[ 2 b_1({\beta}) \mathbf{e}_2 \big \vert \widehat{\mathbf{B}}_{n-2} \big \vert 2 b_1({\alpha}) \mathbf{e}_{n-2} \right] \mathbb{S}_{n-2},
\end{equation}
where the matrices $\widehat \mathbf{B}_n$ of size $(n+1) \times (n-1)$ and $\widehat \mathbf{H}_n^{\nabla}$ of size $(n-1)\times(n-1)$ are determined
iteratively by
$$
\widehat{\mathbf{B}}_{n} = \widehat{\mathbf{C}}_{n} \big(\widehat{\mathbf{H}}_{n}^{\nabla}\big)^{-1} \quad \hbox{and}\quad
\widehat{\mathbf{H}}_{n}^{\nabla} = \widehat \mathbf{D}_n - \widehat \mathbf{C}_{n-2} \widehat{\mathbf{B}}_{n-2}^{{\mathsf{T}}}
$$
for $n=3,4,\ldots,$ with the initial condition $\widehat{\mathbf{B}}_{1} = 0$.
\end{prop}
\begin{exam}
In the case of ${\alpha} = {\beta} =1$ we have $b_1(1) = -\frac{1}{8}$, and the iterative algorithm gives
\begin{align*}
\widehat \mathbf{B}_2 &= -\frac 1 8\left[\begin{matrix} 1 \\ 0 \\ 1 \end{matrix}\right], \quad \widehat \mathbf{H}_2 =\left[\begin{matrix} \frac{1}{2} \end{matrix}\right], \\
\widehat \mathbf{B}_3 &= -\frac{1}{20} \left[\begin{matrix} 1 & 0 \\ 0 & 4 \\ 4 & 0 \\
0 & 1 \end{matrix}\right], \quad
\widehat \mathbf{H}_3 = \frac{5}{16}\left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right] \\
\widehat \mathbf{B}_4 &= -\frac{1}{880}\left[\begin{matrix} 21 & 0 & 1 \\ 0 & 110 & 0 \\ 198 & 0 & 198 \\
0 & 110 & 0 \\ 1 & 0 & 21 \end{matrix}\right], \quad
\widehat \mathbf{H}_4 = \frac{1}{128}\left[\begin{matrix} 21 & 0 & -1 \\ 0 & 16 & 0 \\ -1 & 0 & 21 \end{matrix}\right].
\end{align*}
\end{exam}
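The example above can be reproduced with a short computation. The sketch below (all helper names are ours) implements the recursions $\mathbf{H}_n^\nabla = \mathbf{D}_n - \mathbf{C}_{n-2}(\mathbf{H}_{n-2}^\nabla)^{-1}\mathbf{C}_{n-2}^{\mathsf{T}}$ and $\mathbf{B}_n = \mathbf{C}_n(\mathbf{H}_n^\nabla)^{-1}$ in exact rational arithmetic, specialized to ${\alpha}={\beta}=1$, where $h_k^n = 4^{-n}$ and $b_m(1) = -\frac{1}{4(m+1)}$:

```python
from fractions import Fraction as F

def h(k, n):          # h_k^n for alpha = beta = 1
    return F(1, 4 ** n) if 0 <= k <= n else F(0)

def b(m):             # b_m(1); taken as 0 for m < 1
    return F(-1, 4 * (m + 1)) if m >= 1 else F(0)

def D(n):             # diagonal matrix D_n with entries d_j^n
    M = [[F(0)] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        M[j][j] = ((n - j) ** 2 * h(j, n - 1)
                   + j ** 2 * (n - j) ** 2 * b(j - 1) ** 2 * h(j - 2, n - 3)
                   + j ** 2 * h(j - 1, n - 1)
                   + j ** 2 * (n - j) ** 2 * b(n - j - 1) ** 2 * h(j - 1, n - 3))
    return M

def C(n):             # bidiagonal (n+3) x (n+1) matrix C_n
    M = [[F(0)] * (n + 1) for _ in range(n + 3)]
    for l in range(n + 1):
        M[l][l] = l ** 2 * (n - l + 2) * b(n - l + 1) * h(l - 1, n - 1)
        M[l + 2][l] = (l + 2) * (n - l) ** 2 * b(l + 1) * h(l, n - 1)
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv(A):           # exact Gauss-Jordan inverse over the rationals
    n = len(A)
    M = [list(A[i]) + [F(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n):
        r = next(k for k in range(c, n) if M[k][c] != 0)
        M[c], M[r] = M[r], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]
        for k in range(n):
            if k != c:
                f = M[k][c]
                M[k] = [x - f * y for x, y in zip(M[k], M[c])]
    return [row[n:] for row in M]

def Hnab(n):          # H_n = D_n - C_{n-2} H_{n-2}^{-1} C_{n-2}^T
    if n == 1:
        return [[F(1), F(0)], [F(0), F(1)]]
    if n == 2:
        return D(2)
    Cm = C(n - 2)
    T = matmul(matmul(Cm, inv(Hnab(n - 2))), [list(r) for r in zip(*Cm)])
    Dn = D(n)
    return [[Dn[i][j] - T[i][j] for j in range(n + 1)] for i in range(n + 1)]
```

The interior block of `Hnab(n)` is $\widehat{\mathbf{H}}_n^\nabla$; for instance, `Hnab(3)[1][1] == F(5, 16)` and `Hnab(4)[1][1] == F(21, 128)`, in agreement with the example.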
Once the matrices $\widehat \mathbf{B}_n$ are determined, the relation \eqref{eq:whQ=whS-G} can be used to
determine the Sobolev orthogonal polynomials $\mathbb{S}_n$ iteratively, since
$$
\widehat \mathbb{S}_n \c= \widehat \mathbb{Q}_n - Q_0^{n-2} 2 b_1({\beta}) \mathbf{e}_2 - Q_{n-2}^{n-2} 2 b_1({\alpha}) \mathbf{e}_{n-2} - \widehat{\mathbf{B}}_{n-2} \widehat \mathbb{S}_{n-2},
$$
where we have used $S_0^{n} \c= Q_0^{n}$ and $S_{n}^{n} \c= Q_{n}^{n}$.
\begin{exam}
For the case of ${\alpha}={\beta}=1$, the monic Gegenbauer--Sobolev orthogonal polynomials satisfy the relation
$$
S_{n-k}^n (x,y) = S_k^n(y,x), \qquad 0 \leqslant k \leqslant n.
$$
The following are these polynomials in lower degrees:
\begin{align*}
&S_0^1(x,y) = x\\
&S_0^2(x,y) = x^2, \quad S_1^2(x,y) = x y, \\
&S_0^3(x,y) = x (x^2-\frac{3}{4}), \quad S_1^3(x,y) = (x^2 - \frac{1}{4}) y, \\
&S_0^4(x,y) = x^2(x^2-1), \quad S_1^4(x,y) = x (x^2 - \frac{5}{8}) y, \quad S_2^4(x,y) = x^2 y^2 -
\frac{1}{4} x^2 - \frac{1}{4} y^2.
\end{align*}
\end{exam}
\begin{rem} In contrast to the Laguerre case with ${\alpha} = {\beta} = 0$, we need the \textit{modulo constant} relation, $\c=$, in Theorem \ref{thm:Q=S} for the Gegenbauer case. Note, however, that this is not a real limitation,
since our main goal is to construct a basis for $\mathcal{V}_n^2(S)$, for which the additive constant does not
matter, as shown in Theorem \ref{sobolev-basis}.
\end{rem}
% Source: https://arxiv.org/abs/1908.07092
% Title: Linear stability analysis for large dynamical systems on directed random graphs
Scientists use networks to characterize the causal interactions between the constituents of large dynamical systems \cite{barrat2008dynamical, newman2010networks, barthelemy2018spatial, dorogovtsev2013evolution, barabasi2016network}. An important question is how network architecture affects the stability of stationary states in large dynamical systems. This question is crucial to understand, inter alia, systemic risk in financial markets, stability of ecosystems, or power outages in power grids. Indeed, the
spreading of debt between financial institutions is affected by the
architecture of the network of liabilities between these institutions~\cite{gai2010contagion, haldane2011systemic, bardoscia2017pathways}; ecologists aim to understand how the occurrence of major changes in ecological communities \cite{mccann2000diversity, bascompte2009disentangling, bastolla2009architecture, allesina2012stability, haas2019subpopulations} --- such as the microbiome community in the human gut~\cite{coyte2015ecology} --- is affected by the architecture of food webs; and engineers study how the topology of a power-grid network affects the risk of power outages~\cite{motter2013spontaneous}. In these examples, a stable stationary state is beneficial and associated with a well-functioning system, such as, a flourishing economy or a healthy individual, as is the case of the human gut example. On the other hand, the instability of the stationary state is associated with a period of economic crisis or disease. Hence, if we identify network features that stabilise large dynamical systems, then we could use these insights to reduce risk and instability in these systems.
A theory for the stability of large systems of interacting degrees of freedom in the vicinity of a stationary state has been introduced by May \cite{gardner1970connectance, may1972will}. In May's approach, one considers $n$ degrees of freedom $\vec{y}(t) = (y_1(t), y_2(t),\dots, y_n(t)) \in \mathbb{R}^n$ that evolve according to a set of randomly coupled linear differential equations
\begin{eqnarray}
\partial_t y_j(t) = \sum^n_{k=1}y_k(t) A_{kj}, \label{eq:linx}
\end{eqnarray}
where $t\geq 0$ is the time variable and $A_{kj}$ are the entries of a square random matrix $\mathbf{A}_n$ of size $n$. Notice that the null vector $\vec{y}(t) = 0$ is a fixed point or {\it stationary state} of the dynamics (\ref{eq:linx}). We consider random matrices instead of deterministic matrices because we are interested in the typical behaviour of an ensemble of systems rather than in the dynamics of one given system.
Since Eqs.~(\ref{eq:linx}) are linear, the dynamics of $\vec{y}(t)$
is governed by the eigenvalues $\lambda_j(\bA_n)$ ($j=1,\dots,n$) and their associated right eigenvectors $\vec{R}_j $ and left eigenvectors $\vec{L}_j$,
\begin{eqnarray}
\vec{y}(t) = \sum^n_{j=1} \left( \vec{R}_j \cdot \vec{y}(0)\right) \: e^{\lambda_j t} \vec{L}_j \label{jkla}
\end{eqnarray}
If all eigenvalues have negative real parts, then $\lim_{t\rightarrow \infty} \vec{y}(t) = 0$ and the stationary state is stable. On the other hand, if there exists at least one eigenvalue with a positive real part, then the stationary state is unstable. Hence, the question of stability of the stationary state $\vec{y}(t) = 0$ boils down to verifying whether the eigenvalue with the largest real part of a random matrix is negative.
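This criterion is easy to illustrate on a toy example of our own (not taken from the literature): integrating \eqref{eq:linx} with an explicit Euler scheme for a $2\times 2$ matrix whose eigenvalues are both negative, and for one with a positive eigenvalue; the helper names are ours.

```python
# Toy illustration of the stability criterion: y(t) -> 0 when all
# eigenvalues of A have negative real parts, and |y(t)| grows otherwise.
# Dynamics dy_j/dt = sum_k y_k A_{kj}, integrated with explicit Euler.

def simulate(A, y0, dt=0.001, T=5.0):
    y = list(y0)
    for _ in range(int(T / dt)):
        y = [y[j] + dt * sum(y[k] * A[k][j] for k in range(len(y)))
             for j in range(len(y))]
    return y

def norm(v):
    return sum(x * x for x in v) ** 0.5

A_stable = [[-1.0, 2.0], [0.0, -3.0]]    # eigenvalues -1 and -3
A_unstable = [[1.0, 2.0], [0.0, -3.0]]   # eigenvalues +1 and -3

assert norm(simulate(A_stable, [1.0, 1.0])) < 0.05     # decays ~ e^{-t}
assert norm(simulate(A_unstable, [1.0, 1.0])) > 10.0   # grows ~ e^{+t}
```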
Random matrix theory provides mathematical methods to study the properties of the eigenvalue $\lambda_1(\bA_n)$ with the largest real part for $n \gg 1$, which we call the {\it leading eigenvalue}.
Fortunately, $\lambda_1(\bA_n)$ often converges to a deterministic value $\lambda_1$ for $n\rightarrow \infty$ \cite{may1972will, tao2012topics}. This implies that an ensemble of dynamical systems of the form (\ref{eq:linx}) may exhibit for large $n$ a phase transition between a stable and an unstable phase: if the asymptotic value $\lambda_1$ is negative, then the dynamical system is linearly stable, whereas if $\lambda_1$ is positive, then the system is linearly unstable. The random-matrix-theory approach for the stability analysis of large dynamical systems aims to compute the asymptotic eigenvalue $\lambda_1$ as a function of the parameters that define the random matrix ensemble~$\bA_n$.
Random matrices are also useful to investigate the linear stability of stationary states in a set of randomly coupled {\it non-linear} differential equations~\cite{may1972will}. According to the Hartman-Grobner theorem \cite{grobman1959homeomorphism, hartman1960lemma}, Eqs.~(\ref{eq:linx}) yield a very good approximation for the dynamics of $n$ degrees of freedom $\vec{x}(t) = (x_1(t), \dots,x_n(t))$ in a nonlinear dynamical system $\partial_t \vec{x}(t) =f[\vec{x}(t)]$ in the vicinity of a fixed point $\vec{x}^\ast$, for which $f[\vec{x}^\ast] = 0$ with $f$ a generic function that couples the degrees of freedom. In this setting, $\mathbf{A}$ is the Jacobian of $f$ and $\vec{y}(t) = \vec{x}(t)-\vec{x}^\ast$ is the deviation vector. Randomly coupled non-linear differential equations have been used to model neural networks \cite{sompolinsky1988chaos, del2013synchronization, marti2018correlations}, ecological communities \cite{diederich1989replicators, faust2012microbial, bucci2014towards}, protein signalling networks \cite{voit2000computational, nphysHens}, financial markets~\cite{PhysRevE.97.052312}, and synchronization of coupled oscillators \cite{arenas2008synchronization}. Relations of this type often contain a large number of fixed points \cite{fyodorov2016nonlinear, biroli2018marginally}, and the equation (\ref{jkla}) describes the dynamics in the vicinity of one given fixed point $\vec{x}^\ast$.
One of the simplest random-matrix models, used by May in his original paper \cite{may1972will}, is composed of off-diagonal entries $A_{kj}$ that are independent and identically distributed (i.i.d.)~random variables with a probability distribution $p_A(a)$, and the diagonal entries are set to $A_{jj} = -d$, where $d$ is a real-valued quantity that may depend on $n$. We call this random-matrix ensemble the {\it i.i.d.~random matrix model}. In this model, the leading eigenvalue $\lambda_1$ is given by \cite{girko1985circular, bai1997circular, gotze2010circular, tao2010random, bordenave2012around, tao2013outliers}
\begin{eqnarray}
\lambda_1 = \left\{\begin{array}{ccc} n \langle A \rangle \left(1+ o_n(1)\right) - d && \langle A \rangle > 0 , \\
\sqrt{n \langle A^2 \rangle}\left(1+ o_n(1)\right) -d && \langle A \rangle \leq 0 , \end{array} \right. \label{eq:lambda1IID}
\end{eqnarray}
where $o_n(1)$ denotes the little-$o$ notation, see section~3.1 in Ref.~\cite{cormen2009introduction}. The leading eigenvalue thus only depends on
the mean value $\langle A \rangle = \int {\rm d}a \:a \: p_A(a)$ and the second moment $\langle A^2 \rangle = \int {\rm d}a\:a^2 p_A(a)$ of
the distribution $p_A(a)$, exhibiting a high degree of {\it universality}. The result (\ref{eq:lambda1IID}) describes how interactions between degrees of freedom can destabilise a large complex system. There exist two
qualitatively different regimes: for $\langle A \rangle > 0$, $\lambda_1$ is an outlier and it is proportional
to $n$; for $\langle A \rangle <0$, $\lambda_1$ is located at the boundary of the continuous spectrum and it is proportional to $\sqrt{n}$.
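The scaling $\lambda_1 \approx n\langle A\rangle - d$ in the regime $\langle A \rangle > 0$ can be probed numerically with power iteration; the sketch below uses parameters of our choosing (off-diagonal entries uniform on $[0,1]$, so $\langle A \rangle = 1/2$, and $d=0$).

```python
import random

# Estimate the leading eigenvalue of an i.i.d. random matrix with
# positive mean entries by power iteration; expect lambda_1 ~ n <A> = n/2.

random.seed(0)
n = 200
A = [[random.random() if i != j else 0.0 for j in range(n)]
     for i in range(n)]

v = [1.0] * n
lam = 1.0
for _ in range(100):
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = max(abs(x) for x in w)   # converges to lambda_1 for this positive matrix
    v = [x / lam for x in w]

assert abs(lam - 0.5 * n) < 5.0
```

The deviation from $n/2$ is $O(1)$ here, consistent with the $o_n(1)$ correction in the formula above.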
The random-matrix-theory approach to the linear stability of large complex systems has gained significant traction in recent years, mainly in the fields of ecology and neuroscience. With random matrices one can study how statistical properties of the interactions in a system affect its stability; this approach is complementary to mathematical models that rely on a low-dimensional representation of a large complex system.
For example, the i.i.d.~random matrix model has been generalized in order to describe how the stability of ecosystems depends on predator-prey interactions \cite{allesina2012stability}, hierarchical interactions \cite{allesina2015predicting}, modularity \cite{grilli2016modularity}, and species abundances \cite{gibbs2018effect}. In neuroscience, the i.i.d.~random matrix model has been generalized in order to study how the asymptotic dynamics of neural networks is influenced by Dale's principle \cite{rajan2006eigenvalue}, balance conditions on the excitatory and inhibitory synaptic connections to a neuron \cite{rajan2006eigenvalue, gudowska2018synaptic}, cell-type specific interactions \cite{PhysRevLett.114.088101, PhysRevE.93.022302, kuczala2016eigenvalue}, and partial random network structure \cite{ahmadian2015properties}. Other applications are phase separation in multiple component fluids \cite{sear2003instabilities} and the stability of a large economy \cite{moran2019will}. Note that all of the models mentioned so far share the common feature that they are defined on a {\it dense graph}, in the sense that the average number of nonzero elements in each row or column of $\mathbf{A}_n$ diverges as a function of~$n$.
The random-matrix-theory approach for the linear stability of dynamical systems, although clearly powerful, has been criticized since the original paper of May. First, there is the problem that complex systems defined on dense graphs are unstable if the number of degrees of freedom $n$ is large enough \cite{gardner1970connectance, may1972will}, since the leading eigenvalue diverges as a function of $n$. This behaviour is unrealistic, since real systems are often large and stable~\cite{mccann2000diversity}. A second critique is that the i.i.d.~random matrix model, and its extensions discussed in the previous paragraph, can only account for random networks which are formed by nodes that interact with a finite probability, independently of the system size $n$. These models cannot account for the nonrandom features observed in real systems~\cite{bastolla2009architecture, newman2010networks}, such as, degree distributions that may have power-law tails \cite{amaral2000classes, RevModPhys.74.47, dunne2002food, clauset2009power}.
A natural approach to resolve these two issues is to consider {\it sparse} random matrices $\mathbf{A}_n$. Each row and column of a sparse random matrix contains a finite number of non-zero elements, even in the limit of $n\rightarrow \infty$, such that $\mathbf{A}_n$ is composed of a total number $O(n)$ of non-zero matrix entries. Sparse random matrices can take into account the nonrandom structures observed in real-world systems, such as, networks with a prescribed degree distribution \cite{dorogovtsev2003spectra, rogers2008cavity, rogers2009cavity, bordenave2010resolvent, rogers2010spectral, neri2016eigenvalue, amir2016non, metz2018spectra} or with recurrent motifs \cite{metz2011spectra, bolle2013spectra, newman2019spectra, aceituno2019universal}. Constraints on the degree distribution of a network are incorporated through constraints on the number of non-zero matrix entries in the columns and rows of $\mathbf{A}_n$. As an important consequence, dynamical systems associated with sparse random matrices can be stable even for large values of $n$: the leading eigenvalue $\lambda_1$ is finite since any degree of freedom interacts with a finite number of others. Hence, differential equations coupled through sparse random matrices can describe real-world networks and their dynamics is stable in the limit of large~$n$.
In the present paper we focus on the development of exact mathematical methods to study the stability of large dynamical systems defined on sparse random graphs. To this aim, we use the
spectral theory for sparse non-Hermitian random matrices \cite{rogers2009cavity, neri2016eigenvalue, metz2018spectra}.
For sparse random matrices, the eigenvalue distribution is not universal and must be computed numerically \cite{rogers2009cavity, metz2018spectra}. However, the leading eigenvalue, as well as the statistics of the components of its associated right and left eigenvectors, exhibit universal properties and can be treated analytically \cite{neri2016eigenvalue}. Since the stability of large dynamical systems is governed by the leading eigenvalue, sparse random matrices provide a useful avenue to study how network architecture affects the stability of large systems.
The aim of the present paper is to provide a better understanding of the theory for the leading eigenvalue of sparse random matrices introduced in \cite{neri2016eigenvalue}, from a theoretical and from a more practical point of view. On the theoretical side,
we derive explicit analytical expressions for the leading eigenvalue and the first moment of its associated right and left eigenvectors in the case of oriented random graphs with prescribed degree distributions that may allow for {\it correlations} between indegrees and outdegrees; this is a generalization of the results presented in~\cite{neri2016eigenvalue}, which are valid in the absence of degree correlations. We obtain these results from a set of recursion relations for the components of right and left eigenvectors, which we derive using the {\it Schur} formula. From a practical point of view, we illustrate the theory with a large body of data obtained from numerical experiments, and we also challenge the theoretical results by considering adjacency matrices of graphs with power-law degree distributions and adjacency matrices of graphs with a small mean degree.
Subsequently, we apply the theory to the linear stability of large dynamical systems described by a set of randomly coupled differential equations and identify which network properties stabilize the stationary points in these systems. Finally, we discuss extensions of our theory beyond the setup of oriented random matrices.
The outline of the paper is the following. In Sec.~\ref{eq:modelDef} we define the random matrices and the spectral quantities that we study in this paper, and in Sec.~\ref{eq:theory} we present our main results for these random matrices. In
Sec.~\ref{sec:der} we derive the main results and we also present the theory we use to derive these results. In Sec.~\ref{sec:examples} we compare the theoretical results with numerical data for large matrices, while in Sec.~\ref{sec:app} we apply the theory by analysing the stability of stationary states in networked systems. In Sec.~\ref{sec:ext} we discuss extensions of the theory, presented in Sec.~\ref{sec:der}, to the cases of adjacency matrices with diagonal disorder and adjacency matrices of non-oriented graphs. Finally, in Sec.~\ref{sec:discu} we present a discussion of the main results.
Appendix~\ref{sec:finite} details the algorithm we use to generate graphs with a prescribed degree distribution, and in Appendix~\ref{AppendixC}, we discuss the percolation theory for the largest strongly connected component of a directed graph. In Appendix~\ref{sec:rec} we use the Schur formula to derive a set of recursive relations for the components of right (left) eigenvectors of a random matrix with a tree-like topology.
\subsection{Notation}
We use lower case symbols for deterministic variables, e.g., $x$ and $y$. We write (column) vectors as $\vec{x}$ and $\vec{y}$, while their adjoint vectors are $\vec{x}^\dagger$ and $\vec{y}^\dagger$.
Matrices are written in boldface, e.g., $\mathbf{x}$ and $\mathbf{y}$. If we want to emphasize the dependency on the matrix size $n$, then we write $\mathbf{x}_n$ and $\mathbf{y}_n$.
We write random variables in upper case, e.g., $X$ and $Y$. The probability distribution of a random variable $X$ is denoted by $p_X(x)$.
There are a few exceptions to the use of upper case letters to represent random quantities. For example, we use the notation $\lambda_j(\mathbf{A})$ to denote the $j$-th eigenvalue of a random matrix $\mathbf{A}$, and we write $p_X(x;\mathbf{A})$ for the probability distribution of a random variable $X$ that depends on the matrix $\mathbf{A}$. We denote averages with respect to the distribution $p_{\mathbf{A}}(\mathbf{a})$ by $\langle \cdot\rangle$.
\section{System set up and definitions} \label{eq:modelDef}
In this section we define the random matrices and the spectral properties that we study in this paper.
\subsection{Adjacency matrices of weighted, oriented, and simple random graphs with a prescribed degree distribution}\label{sec:modeldef}
In this paper we study the spectral properties of random matrices of the form
\begin{eqnarray}
\bA_n = - d \:\mathbf{1}_n + \bJ_n\circ\bC_n, \label{eq:model}
\end{eqnarray}
where $\mathbf{1}_n$ is the identity matrix, $\bJ_n$ is a square matrix with real entries $J_{jk}\in \mathbb{R}$ that are i.i.d.~random variables drawn from an arbitrary probability distribution $p_J$, and where $\bC_n$ is the adjacency matrix of an oriented simple random graph $\mathcal{G}$ with a prescribed degree distribution \cite{molloy1995critical, newman2001random, dorogovtsev2013evolution}. The parameter $d$ is a real constant and $\circ$ denotes the Hadamard product, i.e., $[\bJ_n\circ\bC_n]_{jk} = J_{jk}C_{jk}$. The
$j$ and $k$ indices fulfill $j,k\in [n]$, where $[n] = \left\{1,2,\ldots,n\right\}$.
Since the graph is simple, the entries of its adjacency matrix satisfy $C_{jk}\in\left\{0,1\right\}$ and $C_{jj} = 0$.
We use the convention that if $C_{jk}=1$, then the graph $\mathcal{G}$ has an edge directed from node $j$ to node $k$. Therefore, the
indegree of the $j$-th node equals the number of non-zero elements in the $j$-th column,
\begin{eqnarray}
K^{\rm in}_j := \sum^n_{k=1}C_{kj},
\end{eqnarray}
and the outdegree $K^{\rm out}_j$ is given by the number of non-zero elements in the $j$-th row,
\begin{eqnarray}
K^{\rm out}_j := \sum^n_{k=1}C_{jk}.
\end{eqnarray}
The in-neighbourhood $\partial^{\rm in}_j$ and out-neighbourhood $\partial^{\rm out}_j$ of node $j$ are defined by
\begin{eqnarray}
\partial^{\rm in}_j := \left\{k\in[n]: C_{kj}=1\right\} , \label{eq:pin} \\
\quad \partial^{\rm out}_j := \left\{k\in[n]: C_{jk}=1\right\} ,\label{eq:pout}
\end{eqnarray}
and
\begin{eqnarray}
\partial_j := \partial^{\rm in}_j \cup \partial^{\rm out}_j,
\end{eqnarray}
is the neighbourhood of node $j$.
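For concreteness, the degree and neighbourhood definitions above can be evaluated directly from an adjacency matrix; the following Python sketch uses a small hypothetical oriented graph (the matrix {\tt C} is an assumed example, and nodes are indexed from $0$ rather than $1$).

```python
import numpy as np

# Toy oriented adjacency matrix C of a 4-node graph (an assumed example):
# an edge j -> k is encoded as C[j, k] = 1, following the paper's convention.
C = np.array([[0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

# Outdegree of node j: number of non-zero entries in row j.
k_out = C.sum(axis=1)
# Indegree of node j: number of non-zero entries in column j.
k_in = C.sum(axis=0)

def neighbourhoods(C, j):
    # In- and out-neighbourhoods of node j, as in the definitions above.
    out_nb = set(np.flatnonzero(C[j, :]))   # nodes k with C[j, k] = 1
    in_nb = set(np.flatnonzero(C[:, j]))    # nodes k with C[k, j] = 1
    return in_nb, out_nb
```

Note that $\sum_j K^{\rm in}_j = \sum_j K^{\rm out}_j$ holds automatically, since both sums count the edges.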
A directed graph is {\it oriented} when $C_{jk}C_{kj} = 0$ for any pair of nodes.
We say that $\mathcal{G}$ is a random graph with a {\it prescribed degree distribution} if (i) the degrees $(K^{\rm in}_j, K^{\rm out}_j)$ are i.i.d.~random variables with a joint probability distribution $p_{K^{\rm in}, K^{\rm out}}(k^{\rm in}, k^{\rm out})$ and with the additional constraint $\sum^{n}_{j=1}K^{\rm in}_j = \sum^n_{j=1}K^{\rm out}_j$; (ii) given a certain degree sequence $\left\{K^{\rm in}_j, K^{\rm out}_j\right\}^n_{j=1}$, the nodes are connected randomly, and hence the edges of $\mathcal{G}$ are generated by the configuration model \cite{molloy1995critical, newman2001random, dorogovtsev2013evolution}. In Appendix~\ref{sec:finite} we describe in detail the algorithm we use to sample random graphs with a prescribed degree distribution.
In the specific case of $J_{jk} = 1$ and $d=0$, random matrices defined by Eq.~(\ref{eq:model}) are the adjacency matrices of oriented and simple random graphs \cite{newman2010networks, bollobas2013modern, bollobas2001random}.
The variables $J_{jk}$ are the weights associated with the links of the graph with adjacency matrix $\bC_n$, and hence for $J_{jk} \neq 1$ the random matrix $\mathbf{A}_n$ is the adjacency matrix of a weighted graph. The constant parameter $d$ affects the spectral properties of $\mathbf{A}_n$ only in a trivial manner, but it is important when discussing the stability of dynamical systems on graphs.
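A minimal numerical sketch of assembling a matrix of the form of Eq.~(\ref{eq:model}) may help fix the conventions; the weight distribution and parameter values below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_A(C, d, sample_J):
    # Assemble A_n = -d*1_n + J_n o C_n for a given oriented adjacency
    # matrix C; sample_J draws the i.i.d. weights (an assumed choice of p_J).
    n = C.shape[0]
    J = sample_J((n, n))               # i.i.d. weights J_jk
    return -d * np.eye(n) + J * C      # Hadamard product keeps only edges of C

# Hypothetical example: a single edge 0 -> 1 with Gaussian weights of mean 1.
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])
A = build_A(C, d=0.5, sample_J=lambda size: rng.normal(1.0, 0.2, size))
```

Only the entries $J_{jk}$ with $C_{jk}=1$ survive the Hadamard product, so the remaining weights never enter the spectrum.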
\subsection{Spectral observables}\label{specobs}
Here we define the spectral observables of random matrices that are relevant for the study of the stability of dynamical systems.
The eigenvalues $\left\{\lambda_\alpha(\bA)\right\}_{\alpha \in [n]}$ are defined as the complex roots of the algebraic equation ${\rm det}(\bA - \lambda \mathbf{1}_n) = 0$.
We sort the eigenvalues in decreasing order, i.e., ${\rm Re}[\lambda_1(\bA )]\geq {\rm Re}[\lambda_2(\bA )] \geq \ldots \geq {\rm Re}[\lambda_n(\bA )]$. If there exists a degenerate eigenvalue, then it appears multiple times in this sequence. If there are two or more eigenvalues with the same real part, then we sort them based on their imaginary part. We call $\lambda_{1}$ the {\it leading} eigenvalue and
$\lambda_2$ the {\it subleading} eigenvalue.
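This ordering convention is easy to implement; the sketch below (our own illustrative code, not taken from the paper) sorts the eigenvalues of a matrix by decreasing real part, breaking ties by the imaginary part.

```python
import numpy as np

def sorted_eigs(A):
    # Sort eigenvalues by decreasing real part; eigenvalues with equal
    # real part are ordered by their imaginary part.
    lam = np.linalg.eigvals(A)
    order = np.lexsort((-lam.imag, -lam.real))  # last key is the primary key
    return lam[order]

# lam[0] is then the leading eigenvalue lambda_1, lam[1] the subleading one.
A = np.array([[0.0, 2.0],
              [2.0, 0.0]])    # eigenvalues +2 and -2
lam = sorted_eigs(A)
```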
A right eigenvector $\vec{R}_\alpha(\bA)$ and a left eigenvector $\vec{L}_\alpha(\bA)$ associated with $\lambda_{\alpha}$ fulfil
\begin{eqnarray}
\bA\, \vec{R}_\alpha = \lambda_{\alpha}\, \vec{R}_\alpha,\quad {\rm and} \quad \vec{L}^\dagger_\alpha\, \bA = \lambda_{\alpha}\, \vec{L}^\dagger_\alpha. \label{eq:eigvDef}
\end{eqnarray}
We use the notation $R_{\alpha,j}$ and $L_{\alpha,j}$, with $j\in[n]$, for the components (or entries) of the
right and left eigenvectors, respectively. In order to uniquely define the right and left eigenvectors associated with
a nondegenerate eigenvalue, we require that the eigenvectors are biorthonormal,
\begin{eqnarray}
\vec{L}_{\beta}\cdot \vec{R}_{\alpha} = \delta_{\alpha\beta}, \quad \alpha,\beta \in [n], \label{eq:norm}
\end{eqnarray}
we take the convention that
\begin{eqnarray}
{\rm Im}\left[\sum^n_{j=1}R_{\alpha,j} \right] = 0,\quad {\rm Re}\left[\sum^n_{j=1}R_{\alpha,j} \right] \geq 0, \label{eq:convR}
\end{eqnarray}
and we set
\begin{eqnarray}
\sum^n_{j=1}|R_{\alpha,j}|^2 = n. \label{eq:convR2}
\end{eqnarray}
Note that in this convention the norm $\sum^n_{j=1}|L_{\alpha,j}|^2$ and the complex phase of $\sum^n_{j=1}L_{\alpha,j} $ are functions of the entries of~$\bA$.
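The conventions (\ref{eq:norm})--(\ref{eq:convR2}) can be imposed numerically on any right/left eigenvector pair; the following sketch is one possible implementation (interpreting the dot product in Eq.~(\ref{eq:norm}) as a plain, non-conjugated product is our assumption).

```python
import numpy as np

def normalize_pair(r, l):
    # Impose the conventions above on a right/left eigenvector pair of a
    # nondegenerate eigenvalue (an illustrative sketch).
    n = r.size
    # Fix the global phase of r so that sum_j r_j is real and nonnegative.
    s = r.sum()
    if s != 0:
        r = r * (np.conj(s) / abs(s))
    # Scale r so that sum_j |r_j|^2 = n.
    r = r * np.sqrt(n / np.vdot(r, r).real)
    # Rescale l to enforce biorthonormality l . r = 1; its norm is then
    # fixed by the matrix, as noted in the text.
    l = l / (l @ r)
    return r, l
```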
The spectrum is the set
\begin{eqnarray}
\sigma(\bA_n) := \left\{\lambda \in \mathbb{C}:{\rm det}\left(\bA_n - \lambda \mathbf{1}_n\right) = 0\right\}
\end{eqnarray}
of eigenvalues of $\bA_n$. For finite $n$, $\sigma(\bA_n)$ is discrete, whereas for large $n$, $\sigma(\bA_n)$ often converges to a deterministic set
\begin{eqnarray}
\sigma = \lim_{n\rightarrow \infty}\sigma(\bA_n),
\end{eqnarray}
which can contain continuous and discrete parts.
We now specify the different parts that the spectrum $\sigma$ can have. The discrete part can consist of outlier eigenvalues, eigenvalues with infinite multiplicity, and a discrete spectrum that is dense in a region of the complex plane. We will be mainly interested in outlier eigenvalues, which are defined as follows.
Let $b(\lambda^\ast, \epsilon):= \left\{\lambda \in \mathbb{C}: |\lambda^\ast - \lambda|<\epsilon\right\}$ be the open ball with radius $\epsilon$ centered at the element $\lambda^\ast$ of the complex plane. We say that $\lambda_{\rm isol}\in \sigma$ is an {\it outlier eigenvalue} if there exists an $\epsilon>0$ such that $\sigma \cap b(\lambda_{\rm isol}, \epsilon) = \left\{\lambda_{\rm isol}\right\}$ and if the algebraic multiplicity of $\lambda_{\rm isol}$ is finite. The continuous part of the spectrum can be decomposed into an {\it absolutely continuous} part $\sigma_{\rm ac}$, which is a set of non-zero Lebesgue measure, and a singular continuous part, which is a set of zero Lebesgue measure. Note that the different parts of the spectrum $\sigma$ are defined by applying the Lebesgue-decomposition theorem to the empirical spectral distribution \cite{reed2012methods, aizenman2015random}.
We also study the statistics of the components $R_{\alpha, i}$ and $L_{\alpha, i}$ of the right and left eigenvectors, respectively. To this aim, we define the random variables $R_{\alpha}$ and $L_{\alpha}$, which are sampled uniformly at random from the entries of $\vec{R}_{\alpha}$ and $\vec{L}_{\alpha}$, respectively. When we consider the properties of $R_{\alpha}$ and $L_{\alpha}$ for an arbitrary eigenvalue, we omit the rank $\alpha$ and write simply $R_{\alpha} = R$ and $L_{\alpha} = L$. If $R$ and $L$ refer to an outlier, then we use the notation $R_{\rm isol}$ and $L_{\rm isol}$; if $R$ and $L$ refer to an eigenvalue located at the boundary of $\sigma_{\rm ac}$, then we use $R_{\rm b}$ and $L_{\rm b}$.
The distributions of the random variables $R$ and $L$ are
\begin{eqnarray}
p_{R}(r|\mathbf{A}) &= \frac{1}{n}\sum^n_{i=1}\delta(r-R_{ i})
\label{eq:DefEigv}
\end{eqnarray}
and
\begin{eqnarray}
p_{L}(l|\mathbf{A}) &= \frac{1}{n}\sum^n_{i=1}\delta(l-L_{ i}), \label{eq:DefEigv2}
\end{eqnarray}
respectively,
where $\delta(z)$ is the Dirac-delta distribution in the complex plane. In the limit $n\rightarrow \infty$,
the distributions $p_{R}(r|\mathbf{A})$ and $p_{L}(l|\mathbf{A})$ converge to deterministic limits $p_{R}(r)$ and $p_{L}(l)$.
We denote the moments of the limiting distributions $p_{R}(r)$ and $p_{L}(l)$ by
\begin{eqnarray}
\langle R^m \rangle = \int {\rm d}^2r\: p_{R}(r)\, r^m , \quad \langle L^m\rangle = \int {\rm d}^2l\: p_{L}(l)\, l^m, \nonumber\\
\end{eqnarray}
where ${\rm d}^2r = {\rm d}{\rm Re}(r) {\rm d}{\rm Im}(r)$ and ${\rm d}^2l = {\rm d}{\rm Re}(l) {\rm d}{\rm Im}(l)$.
\subsection{Ensemble parameters and universality of spectral quantities}\label{def:EnsembleParam}
The random matrix ensemble (\ref{eq:model}) depends on the following parameters: the distribution $p_J$ of weights, the joint distribution $p_{K^{\rm in}, K^{\rm out}}$ of indegrees and outdegrees, the real number $d$, and the size $n$.
We often use the moments of $p_J$ and $p_{K^{\rm in}, K^{\rm out}}$ to specify a random matrix model.
The $m$-th moment of $p_J$ is defined by
\begin{eqnarray}
\langle J^m\rangle := \int^{\infty}_{-\infty}{\rm d}x \:x^m\: p_J(x),
\end{eqnarray}
and the $(m,\ell)$-th moment of $p_{K^{\rm in}, K^{\rm out}}$ is
\begin{eqnarray}
\lefteqn{\langle \left(K^{\rm in}\right)^m \left(K^{\rm out}\right)^\ell \rangle } && \nonumber\\
&& := \sum^{\infty}_{k^{\rm in}=0}\sum^{\infty}_{k^{\rm out}=0}p_{K^{\rm in}, K^{\rm out}}\left(k^{\rm in}, k^{\rm out} \right)\left(k^{\rm in}\right)^m \left(k^{\rm out}\right)^\ell.\nonumber\\
\end{eqnarray}
Important quantities are the {\it mean degree}
\begin{eqnarray}
c := \langle K^{\rm in}\rangle = \langle K^{\rm out}\rangle \label{eq:meanDef}
\end{eqnarray}
and the {\it degree correlation coefficient}
\begin{eqnarray}
\rho := \frac{\langle K^{\rm in}K^{\rm out}\rangle - c^2 }{c^2} . \label{eq:assort}
\end{eqnarray}
The mean degree is the average number of edges that enter or leave a random vertex in the graph.
The parameter $c \langle J \rangle $ is a measure of the average interaction strength felt by a degree of freedom in a dynamical system defined by Eq.~(\ref{eq:linx}).
The degree correlation coefficient $\rho$ characterises the correlations between indegrees and outdegrees of a random vertex in the graph; note that this quantity is similar to the assortativity coefficient that considers the correlations between degrees of a randomly drawn edge in the graph, see section 8.7 on assortative mixing in \cite{newman2010networks}. If $\langle K^{\rm in}_jK^{\rm out}_j\rangle = \langle K^{\rm in}_j\rangle \langle K^{\rm out}_j\rangle$, then $\rho = 0$, which means that indegrees and outdegrees are uncorrelated. If $\rho>0$ ($\rho<0$), then the indegrees and outdegrees are positively (negatively) correlated.
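Both ensemble parameters are straightforward to estimate from an adjacency matrix; the short sketch below is an illustrative implementation of Eqs.~(\ref{eq:meanDef}) and (\ref{eq:assort}).

```python
import numpy as np

def degree_stats(C):
    # Mean degree c and degree correlation coefficient rho, estimated
    # from the in- and out-degree sequences of an adjacency matrix C.
    k_in = C.sum(axis=0)
    k_out = C.sum(axis=1)
    c = k_in.mean()          # equals k_out.mean(), since both count edges
    rho = (np.mean(k_in * k_out) - c**2) / c**2
    return c, rho

# For a directed ring every node has k_in = k_out = 1, so c = 1 and rho = 0.
n = 5
C = np.zeros((n, n))
for j in range(n):
    C[j, (j + 1) % n] = 1.0
c, rho = degree_stats(C)
```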
Here we say that a spectral quantity of a random matrix is {\it universal} if it converges, for $n \rightarrow \infty$, to a deterministic limit that depends only on the first few moments of $p_J$ and $p_{K^{\rm in}, K^{\rm out}}$.
\section{Main results} \label{eq:theory}
In this section we present analytical results for the following spectral properties of random matrices defined by Eq.~(\ref{eq:model}) in the limit of large $n$:
the eigenvalue outliers $\lambda_{\rm isol}$, the boundary $\lambda_b\in\partial \sigma_{\rm ac}$ of the continuous part of the spectrum, the eigenvalue with the largest real part~$\lambda_1$, and the first moments of the right (left) eigenvectors associated with these eigenvalues.
Our theoretical results hold for infinitely large oriented random matrices with a prescribed degree distribution provided that $c (\rho + 1)>1$ and the moments of the distributions $p_{K^{\rm in}, K^{\rm out}}$ and $p_J$ are finite. The condition $c (\rho + 1)>1$ is required because otherwise the spectrum of $\mathbf{A}_n$ converges to a pure-point spectrum, which follows from the fact that oriented random graphs with $c(\rho+1)<1$ do not have a giant strongly connected component and therefore ${\rm Tr}[\bA^{m}] = 0$ for all $m\in \mathbb{N}$. Indeed, $c (\rho + 1) = 1$ is the critical percolation point for the strongly connected component of oriented graphs (see Appendix~\ref{AppendixC} or Ref.~\cite{dorogovtsev2001giant} for more details).
The moments of the degree distribution $p_{K^{\rm in}, K^{\rm out}}$ and of the distribution of weights $p_J$ are required to be finite, since otherwise the spectral quantities defined in Sec.~\ref{specobs} may not have a well-defined limit. In fact, if the tail of the degree distribution is a power-law characterized by a sufficiently small exponent, then the first two moments of $\lambda_1$ may diverge for $n \rightarrow \infty$.
\subsection{Outlier eigenvalue} \label{outliersub}
If $c (\rho + 1)>1$ and $\langle J^2\rangle <c (\rho + 1)\langle J\rangle^2$, then the matrix ensemble (\ref{eq:model}) has one real outlier located at
\begin{eqnarray}
\lambda_{\rm isol} = -d + c (\rho + 1) \langle J \rangle \label{eq:outlier},
\end{eqnarray}
and the corresponding entries of the eigenvectors $\vec{R}_{\rm isol}$ and $\vec{L}_{\rm isol}$ are real. Moreover, the first moments of $R_{\rm isol}$ satisfy
\begin{eqnarray}
\frac{\langle R_{\rm isol}\rangle^2 }{\langle R^2_{\rm isol}\rangle} = \frac{c^3 (\rho + 1) [c(\rho + 1)\langle J\rangle^2- \langle J^2\rangle]}{ c^2(\rho+1)^2\langle J\rangle^2 [\langle (K^{\rm out})^2\rangle -c] + \langle J^2\rangle \rho^{\rm out}_2 } , \nonumber\\
\label{eq:R}
\end{eqnarray}
where
\begin{eqnarray}
\rho^{\rm out}_2 = \langle K^{\rm in}(K^{\rm out})^2\rangle - c(1+\rho) \langle (K^{\rm out})^2\rangle.
\end{eqnarray}
The mean value of $L_{\rm isol}$ is given by an analogous equation
\begin{eqnarray}
\frac{\langle L_{\rm isol}\rangle^2 }{\langle L^2_{\rm isol}\rangle} = \frac{c^3 (\rho + 1) [c(\rho + 1)\langle J\rangle^2- \langle J^2\rangle]}{ c^2(\rho+1)^2\langle J\rangle^2 [\langle (K^{\rm in})^2\rangle -c] + \langle J^2\rangle \rho^{\rm in}_2 } , \nonumber\\
\label{eq:L}
\end{eqnarray}
where
\begin{eqnarray}
\rho^{\rm in}_2 = \langle K^{\rm out}(K^{\rm in})^2\rangle - c(1+\rho) \langle (K^{\rm in})^2\rangle.
\end{eqnarray}
Notice that $\rho=0$ and $\rho^{\rm in}_2 = \rho^{\rm out}_2 = 0$ for random graphs with uncorrelated indegrees and outdegrees, and therefore we recover in this special case the results in \cite{neri2016eigenvalue}.
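The prediction (\ref{eq:outlier}) is easy to probe numerically. The following Monte-Carlo sketch uses an oriented Erd\H{o}s--R\'enyi-type graph with essentially uncorrelated indegrees and outdegrees ($\rho\approx 0$); all parameter values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n, c, d = 1000, 10.0, 0.0
mask = rng.random((n, n)) < c / n
C = (mask & ~mask.T).astype(float)      # drop reciprocal pairs: oriented graph
np.fill_diagonal(C, 0.0)
J = rng.normal(1.0, 0.3, (n, n))        # weights with <J> = 1, <J^2> = 1.09
A = -d * np.eye(n) + J * C

# Empirical c and rho, and the prediction lambda_isol = -d + c*(1+rho)*<J>:
k_in, k_out = C.sum(axis=0), C.sum(axis=1)
c_emp = k_in.mean()
rho_emp = (np.mean(k_in * k_out) - c_emp**2) / c_emp**2
lam_pred = -d + c_emp * (1.0 + rho_emp) * 1.0
lam_max = np.linalg.eigvals(A).real.max()
```

Here the outlier condition is comfortably satisfied ($\langle J^2\rangle = 1.09 \ll c(1+\rho)\langle J\rangle^2 \approx 10$), so the largest real part should sit near $\lambda_{\rm isol}\approx 10$, well outside the boundary radius $\sqrt{c(1+\rho)\langle J^2\rangle}\approx 3.3$.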
\subsection{Eigenvalues at the boundary of the continuous part of the spectrum} \label{sec:bound}
If $c (\rho + 1)>1$, then the spectrum $\sigma$ of the model (\ref{eq:model}) has a continuous part.
The boundary $\partial \sigma_{\rm ac}$ of the continuous part consists of points $\lambda_{\rm b}$ that obey
\begin{eqnarray}
\frac{ c (\rho + 1)}{|\lambda_{\rm b}+d|^2}\langle J^2 \rangle =1 .\label{eq:boundary}
\end{eqnarray}
Regarding the components of the eigenvectors associated with $\lambda_b\in \partial\sigma_{\rm ac}$, we need to distinguish between
the cases where $\lambda_{\rm b}\notin \mathbb{R}$ and $\lambda_b \in \mathbb{R}$. In the former case, $R_{\rm b}$ and $L_{\rm b}$ are complex random variables that
fulfill
\begin{eqnarray}
\langle R_{\rm b}\rangle &= \langle R^2_{\rm b}\rangle &= 0, \\
\langle L_{\rm b}\rangle &= \langle L^2_{\rm b}\rangle &=0.
\end{eqnarray}
If $\lambda_b \in \mathbb{R}$, then the eigenvector components are real-valued random variables that fulfill
\begin{eqnarray}
\langle R_{\rm b}\rangle &= 0, \\
\langle L_{\rm b}\rangle &= 0,
\end{eqnarray}
and the second moments $\langle R^2_{\rm b}\rangle= 1$ and $\langle L^2_{\rm b}\rangle >0 $. Recall that the latter are fixed by the normalization convention we have chosen in Sec.~\ref{specobs}.
\subsection{The leading eigenvalue} \label{sec:lead}
From the results in Secs.~\ref{outliersub} and \ref{sec:bound} we readily obtain expressions for the leading eigenvalue $\lambda_1$.
If $c (\rho + 1)>1$, then the leading eigenvalue
\begin{eqnarray}
\lambda_1 = \left\{\begin{array}{ccc} -d + c (\rho + 1) \langle J \rangle &{\rm if}& \langle J \rangle > \sqrt{\frac{\langle J^2 \rangle}{c(\rho + 1)}}, \\ -d + \sqrt{c (\rho + 1) \langle J^2 \rangle}&{\rm if}& \langle J \rangle \leq \sqrt{\frac{\langle J^2 \rangle}{c (\rho + 1)}}.\end{array}\right. \nonumber\\ \label{eq:lambda1}
\end{eqnarray}
Thus, $\lambda_1$ can be either an outlier or an eigenvalue located at the boundary of the continuous part of the spectrum: for a positive mean value $\langle J \rangle>0$, $\lambda_1$ is
an outlier if $c(\rho + 1) > \langle J^2 \rangle/\langle J \rangle^2$ and $\lambda_1\in \partial \sigma_{\rm ac}$
otherwise. Notice that, if the leading eigenvalue is an outlier, then its value is independent of $\langle J^2 \rangle$, whereas
if the leading eigenvalue is located at $\partial \sigma_{\rm ac}$, then its value depends on $\langle J^2 \rangle$. This will be an important feature when discussing the stability analysis
of dynamical systems.
Let us consider the behaviour of Eq.~(\ref{eq:lambda1}) in a few specific cases. If we set $c(\rho+1) = n$ and $J = A$, then Eq.~(\ref{eq:lambda1}) recovers the expression (\ref{eq:lambda1IID}) for i.i.d.~random matrices. However, note that the formula~(\ref{eq:lambda1}) holds for graphs with $c \in O_n(1)$ and therefore the correspondence holds only formally.
For oriented random matrices without correlations between indegrees and outdegrees ($\rho=0$), Eq.~(\ref{eq:lambda1}) reduces to the expression for $\lambda_1$ derived in Ref.~\cite{neri2016eigenvalue}. Finally, in the case of adjacency matrices of oriented random graphs where $J_{jk}=1$, $\lambda_1$ is always an outlier given by $\lambda_1 = c (\rho + 1)$. In the limit $c (\rho + 1)\rightarrow 1^+$, where the giant strongly connected component vanishes, the outlier coalesces with the continuous part of the spectrum.
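Since Eq.~(\ref{eq:lambda1}) is a simple piecewise expression, a small helper function (an illustrative sketch, not code from the paper) makes the two regimes explicit:

```python
import numpy as np

def lambda1(c, rho, d, mJ, mJ2):
    # Leading eigenvalue of Eq. (eq:lambda1): outlier regime when
    # <J> > sqrt(<J^2>/(c(1+rho))), boundary regime otherwise.
    ce = c * (1.0 + rho)                  # effective branching ratio
    if mJ > np.sqrt(mJ2 / ce):
        return -d + ce * mJ               # lambda_1 is the outlier
    return -d + np.sqrt(ce * mJ2)         # lambda_1 lies on the boundary
```

With $J=1$ the leading eigenvalue is always the outlier $c(1+\rho)-d$, while for zero-mean weights it always sits on the boundary of the continuous spectrum.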
We also consider the first moments $\langle R_{\rm 1}\rangle$ and $\langle L_{\rm 1}\rangle$ of the eigenvectors associated with $\lambda_1$. Since either $\lambda_1 = \lambda_{\rm isol}$ or $\lambda_1 = \lambda_{\rm b}$, we readily obtain
\begin{eqnarray}
\frac{\langle R_{\rm 1}\rangle}{\langle |R_{\rm 1}|^2\rangle} = \left\{\begin{array}{ccc} \langle R_{\rm isol}\rangle/\langle |R_{\rm isol}|^2\rangle &{\rm if}&
\langle J \rangle > \sqrt{\frac{\langle J^2 \rangle}{c(\rho + 1)}}, \\ 0&{\rm if}& \langle J \rangle \leq
\sqrt{\frac{\langle J^2 \rangle}{c (\rho + 1)}}.\end{array}\right. \nonumber\\ \label{eq:R1}
\end{eqnarray}
An analogous expression holds for the left eigenvector.
\subsection{Spectral gap} \label{sec:sublead}
The spectral gap is the difference $\lambda_1 - {\rm Re}[\lambda_2]$ between the leading eigenvalue and the real part of the subleading eigenvalue. From the results in Secs.~\ref{outliersub}, \ref{sec:bound} and \ref{sec:lead} we readily obtain expressions for the spectral gap.
If $c (\rho + 1)>1$, then
\begin{eqnarray}
\lefteqn{\lambda_1 - {\rm Re}[\lambda_2] }&&
\nonumber\\
&=& \left\{\begin{array}{ccc} c (\rho + 1) \langle J \rangle - \sqrt{c (\rho + 1) \langle J^2 \rangle} &{\rm if}& \langle J \rangle > \sqrt{\frac{\langle J^2 \rangle}{c(\rho + 1)}}, \\ 0 &{\rm if}& \langle J \rangle \leq \sqrt{\frac{\langle J^2 \rangle}{c (\rho + 1)}},\end{array}\right. \nonumber\\ \label{eq:lambdaSG}
\end{eqnarray}
and \begin{eqnarray}
\langle R_{\rm 2}\rangle = \langle L_{\rm 2}\rangle = 0 .\label{eq:R2}
\end{eqnarray}
\subsection{Relation with the Perron-Frobenius theorem} \label{sec:Perron}
Here we discuss how our results are related to the celebrated Perron-Frobenius theorem~\cite{horn2012matrix}, which states that the leading eigenvalue $\lambda_1$ of a nonnegative matrix, as well as the components of its right (left) eigenvector, are nonnegative numbers. In other words, the Perron-Frobenius theorem implies that $R_{1,j}\geq 0$ for all $j=1,2,\ldots,n$.
Interesting conclusions about the localization of eigenvectors of $\mathbf{A}$ are drawn if we combine the Perron-Frobenius theorem with the result (\ref{eq:R1}).
If $c(\rho+1) \leq \langle J^2 \rangle/\langle J \rangle^2$ and $c(\rho+1)>1$, such that $\lambda_1$ is part of $\partial \sigma_{\rm ac}$, then $\langle R_1 \rangle =0$ and $\langle R_{1}^2 \rangle = 1$, see Eq.~(\ref{eq:convR2}). Since according to the Perron-Frobenius theorem $R_1\geq 0$, we obtain that $R_1=0$ holds with probability one.
The two conditions $\lim_{n\rightarrow \infty} \langle R_1(\mathbf{A}_n)\rangle = 0$ and $\lim_{n\rightarrow \infty} \langle R^2_1(\mathbf{A}_n)\rangle = 1$ can be simultaneously valid provided that a few components of the eigenvector $\vec{R}_1(\bA)$ diverge, such that $\lim_{n\rightarrow \infty} \langle R^2_1(\mathbf{A}_n)\rangle\neq \langle \lim_{n\rightarrow \infty} R^2_1(\mathbf{A}_n)\rangle$.
Hence, (\ref{eq:R1}) and the Perron-Frobenius theorem imply that for nonnegative matrices for which the conditions $c(\rho+1) \leq \langle J^2 \rangle/\langle J \rangle^2$ and $c(\rho+1)>1$ are fulfilled, the right eigenvector $\vec{R}_1$ associated with the leading eigenvalue is localized on a few nodes.
\section{Mathematical derivation of the main results} \label{sec:der}
We use the theory of Ref.~\cite{neri2016eigenvalue}, which is based on the {\it cavity method} \cite{mezard2003cavity, mezard2001bethe, rogers2008cavity, bordenave2010resolvent, metz2018spectra}, to derive the analytical expressions (\ref{eq:outlier}-\ref{eq:R1}) for the spectral properties of random oriented matrices.
The cavity method is a mathematical technique to study properties of graphical models defined on random graphs that have a local tree-like structure.
The cavity method is closely related to the objective method \cite{aldous2004objective, bordenave2010resolvent} and belief propagation \cite{bickson2008gaussian, weiss2000correctness, yedidia2003understanding}.
The cavity method, as applied to the present problem, consists of three steps. First, we derive a set of recursion relations for the components of right (left) eigenvector of adjacency matrices of tree-like graphs. In a second step, we obtain a set of recursion relations for the eigenvector distributions $p_R$ and $p_L$ of infinitely large random matrices that are locally tree-like. Finally, we obtain the main results~(\ref{eq:outlier}-\ref{eq:R2}) from the solutions of certain fixed-point equations for the eigenvector moments, which follow from the recursive distributional equations for $p_R$ and $p_L$.
In the next subsection, we explain concepts such as tree matrices and locally tree-like matrices, which are important for the cavity method. In the subsequent subsections, we implement the aforementioned steps of the cavity method. Without loss of generality, we focus on the right eigenvectors, since the left eigenvectors of $\mathbf{A}$ are the right eigenvectors of~$\mathbf{A}^T$.
\subsection{Tree matrices and locally tree-like matrices}
Let $\mathcal{G}$ be an undirected graph represented by a given symmetric adjacency matrix. The graph $\mathcal{G}$ is a tree if it is connected and does not
contain cycles \cite{bollobas2013modern} and $\mathcal{G}$ is a forest if it is the union of several isolated trees \cite{bollobas2013modern}.
Let $\mathbf{A}_n$ be a matrix and let $\mathbf{C}_n$ be its associated adjacency matrix, i.e., $C_{kj}=1$ when $A_{kj}\neq 0$ and $C_{kj}=0$ when $A_{kj}=0$. We define the matrix
$\tilde{\mathbf{C}}_n$, with entries $\tilde{C}_{jk} = {\rm max}\left\{C_{jk}, C_{kj}\right\}$. Note that $\tilde{\mathbf{C}}_n$ is the adjacency matrix of an undirected simple graph. We say that the matrix $\mathbf{A}_n$ is a {\it tree matrix} if $\tilde{\mathbf{C}}_n$ is the adjacency matrix of a tree.
We say that a sequence $\left\{\mathbf{A}_n\right\}_{n\in \mathbb{N}}$ of matrices is {\it locally tree-like} if in the limit $n\rightarrow \infty$ each finite neighbourhood of a node, chosen uniformly at random, is a tree with probability one \cite{bordenave2010resolvent}.
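The tree-matrix condition can be checked mechanically; the sketch below (our own illustrative implementation) symmetrizes the support of $\mathbf{A}_n$ and tests whether the result is the adjacency matrix of a tree.

```python
import numpy as np

def is_tree_matrix(A):
    # Build C~ with entries max(C_jk, C_kj) from the support of A,
    # ignoring the diagonal, and test the tree property:
    # connected with exactly n - 1 undirected edges.
    C = (A != 0).astype(float)
    np.fill_diagonal(C, 0.0)
    Ct = np.maximum(C, C.T)
    n = C.shape[0]
    n_edges = int(Ct.sum()) // 2
    # Connectivity: (I + C~)^n has no zero entry iff the graph is connected.
    reach = np.linalg.matrix_power(np.eye(n) + Ct, n)
    return n_edges == n - 1 and bool((reach > 0).all())

path = np.array([[0.0, 1.0, 0.0],      # oriented path 0 -> 1 -> 2: a tree
                 [0.0, 0.0, 1.0],
                 [0.0, 0.0, 0.0]])
cycle = np.array([[0.0, 1.0, 0.0],     # oriented 3-cycle: not a tree
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])
```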
\subsection{Recursion relations for the eigenvector elements of oriented tree-like matrices $\bA_n$}
Let $\lambda$ be an eigenvalue of the matrix $\bA_n$ and let $\vec{R}$ be the right eigenvector associated with $\lambda$. Equation (\ref{eq:eigvDef}) implies that
\begin{eqnarray}
R_j &=& \frac{1}{\lambda-A_{jj}} \sum_{k\neq j}A_{jk}R_k \nonumber\\
&=& \frac{1}{\lambda+d} \sum_{k\in \partial^{\rm out}_j}J_{jk} R_k , \label{eq:rel}
\end{eqnarray}
for all $j \in \left\{1, 2,\ldots,n\right\}$.
In general, the random variables $R_k$ are correlated with the entries $J_{jk}$ and the degree $K^{\rm out}_j$, and therefore Eq.~(\ref{eq:rel}) is not directly useful to derive a self-consistent distributional equation. However, if $\mathbf{A}_n$ is an oriented tree matrix, or a large oriented locally tree-like matrix, then $R_k$ is statistically independent of $A_{jk}$ and $K^{\rm out}_j$, and the relation (\ref{eq:rel}) can be closed.
The statistical independence between $R_k$ and $A_{jk}$ can be understood using a recursive argument.
Let $\bA^{(j)}_{n-1}$ be the submatrix obtained from $\mathbf{A}_n$ by deleting its $j$-th column and row, and let $\vec{R}^{(j)}$ be the right eigenvector of $\bA^{(j)}_{n-1}$ associated with $\lambda$. Then, for oriented tree matrices \cite{neri2016eigenvalue} (see also Appendix~\ref{sec:rec})
\begin{eqnarray}
R_k = R^{(j)}_k \label{eq:relx}
\end{eqnarray}
for all pairs of nodes $(j,k)$ with $A_{jk}\neq 0$, where $R^{(j)}_k$ is the $k$-th element of the right eigenvector $\vec{R}^{(j)}$.
Note that we have assumed that $\lambda$ is an eigenvalue of both $\mathbf{A}_n$ and $\bA^{(j)}_{n-1}$, which is reasonable when $n$ is large enough
and $\lambda$ is not inside $\sigma_{\rm ac}$.
The relations (\ref{eq:rel}) and (\ref{eq:relx}) imply that
\begin{eqnarray}
R^{(j)}_k &=&\frac{1}{\lambda+d} \sum_{\ell\in \partial^{\rm out}_k}J_{k\ell} R^{(k)}_\ell \label{eq:relxxx}
\end{eqnarray}
for all $k\in[n]$ and $j\in \partial^{\rm in}_k$.
Since we are interested in the statistics of $R$, we will also use the relation
\begin{eqnarray}
R_j &=&\frac{1}{\lambda+d} \sum_{k\in \partial^{\rm out}_j}J_{jk} R^{(j)}_k ,\label{eq:relxx}
\end{eqnarray}
which also follows from (\ref{eq:rel}) and (\ref{eq:relx}).
In the next subsection we use the relations (\ref{eq:relxxx}) and (\ref{eq:relxx}) to derive a set of self-consistent equations for the distributions of
$R^{(j)}_k$ and $R_j$.
\subsection{Recursion relations for the distribution of eigenvector elements in infinitely large random locally tree-like oriented matrices} \label{subsec:c}
We apply the recursion relations (\ref{eq:relxx}) and (\ref{eq:relxxx}) to random matrices of the form of Eq.~(\ref{eq:model}) in the limit $n\rightarrow \infty$. In this limit, the random matrices of Eq.~(\ref{eq:model}) are locally tree-like.
Since we are interested in the limit where $\mathbf{A}_n$ becomes infinitely large, it is useful to introduce the distributions of right eigenvector elements
\begin{eqnarray}
p_R(r|\bA) = \frac{1}{n}\sum^n_{j=1} \delta(r-R_j) \label{eq:uniform}
\end{eqnarray}
and
\begin{eqnarray}
q_{R}(r|\bA) = \frac{1}{c\:n}\sum_{k=1}^n \sum_{j \in \partial^{\rm in}_{k}} \delta(r-R_k^{(j)} ), \label{eq:out}
\end{eqnarray}
where $c$ is the mean degree. We obtain the distribution $p_R(r|\bA)$ by selecting a node $j$ uniformly at random and asking for the corresponding eigenvector element $R_j$, whereas we obtain the distribution $q_{R}(r|\bA)$ by selecting an edge $j\rightarrow k$ uniformly at random and asking for the eigenvector element $R_k^{(j)}$.
The limiting distributions $p_R$ and $q_R$ for large $n$ solve the recursive distributional equations
\begin{eqnarray}
\lefteqn{p_R(r) = \sum^{\infty}_{k^{\rm in}=0} \sum^{\infty}_{k^{\rm out}=0}p_{K^{\rm in}, K^{\rm out}}(k^{\rm in}, k^{\rm out}) }&&
\nonumber\\
&& \int \prod^{k^{\rm out}}_{j=1}{\rm d}^2 r_j q_R(r_j) \int \prod^{k^{\rm out}}_{j=1}{\rm d} x_j p_J(x_j) \delta \left[r - \frac{\sum^{k^{\rm out}}_{j=1}x_j r_j }{\lambda+d}\right] \nonumber\\ \label{eq:pRec}
\end{eqnarray}
and
\begin{eqnarray}
\lefteqn{q_R(r) = \sum^{\infty}_{k^{\rm in}=0} \sum^{\infty}_{k^{\rm out}=0}p_{K^{\rm in}, K^{\rm out}}(k^{\rm in}, k^{\rm out}) \frac{k^{\rm in}}{c} }&&
\nonumber\\
&& \int \prod^{k^{\rm out}}_{j=1}{\rm d}^2 r_j q_R(r_j) \int \prod^{k^{\rm out}}_{j=1}{\rm d} x_j p_J(x_j) \delta \left[r - \frac{\sum^{k^{\rm out}}_{j=1}x_j r_j }{\lambda+d}\right].\label{eq:qRec} \nonumber\\
\end{eqnarray}
Equations (\ref{eq:pRec}) and (\ref{eq:qRec}) are obtained from the recursion relations (\ref{eq:relxxx}) and (\ref{eq:relxx}), respectively. We have used the fact that the random variables on the right-hand side of (\ref{eq:relxxx}) and (\ref{eq:relxx}) are independent and that random graphs as defined in Sec.~\ref{eq:modelDef} have no boundary, which implies that all nodes are equivalent \cite{aldous2004objective}. Notice that Eqs.~(\ref{eq:pRec}) and (\ref{eq:qRec}) do not apply to large tree graphs because of the presence of a (large) boundary.
In the special case where
\begin{eqnarray}
p_{K^{\rm in}, K^{\rm out}}(k^{\rm in}, k^{\rm out}) = p_{K^{\rm in}}\left(k^{\rm in}\right)p_{K^{\rm out}}\left(k^{\rm out}\right),
\end{eqnarray}
we recover the results in \cite{neri2016eigenvalue} because $p_R(r)=q_R(r)$.
We are interested in solutions to the relations (\ref{eq:pRec}) and (\ref{eq:qRec}) that are {\it normalizable}, i.e.,
\begin{eqnarray}
\int {\rm d}^2 r \: p_R(r)|r|^2 \in (0,\infty).
\end{eqnarray}
The relations (\ref{eq:pRec}) and (\ref{eq:qRec}) admit two types of normalizable solutions, those associated with eigenvalue outliers $\lambda = \lambda_{\rm isol}$ and those associated with values $\lambda = \lambda_{\rm b}$ located at the boundary of the continuous part of the spectrum. As a consequence we can obtain expressions for the outliers $\lambda_{\rm isol}$ and the boundary $\partial \sigma_{\rm ac}$ by identifying values of $\lambda$ for which the relations (\ref{eq:pRec}) and (\ref{eq:qRec}) admit normalizable solutions. This is the program we will pursue in the next subsection.
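At the level of the first and second moments, this program can be carried out in a few lines of code. The sketch below (with hypothetical moment values chosen only for illustration, not taken from any ensemble in this paper) evaluates the outlier and the boundary radius implied by the normalizability conditions derived in the next subsection:

```python
import numpy as np

# Hypothetical moment values, chosen only for illustration:
# mean degree c, <K_in K_out>, <J>, <|J|^2>, and the diagonal shift d.
c, kk, mJ, mJ2, d = 2.0, 6.0, 1.0, 1.5, 0.0

# First-moment condition: <K_in K_out> <J> / (c (lambda + d)) = 1
lambda_isol = kk * mJ / c - d                  # outlier eigenvalue

# Second-moment condition: <K_in K_out> <|J|^2> / (c |lambda + d|^2) = 1
radius_b = np.sqrt(kk * mJ2 / c)               # boundary radius |lambda_b + d|

gapped = lambda_isol + d > radius_b            # is the outlier outside the continuous spectrum?
```

For these illustrative moments the outlier lies outside the boundary, so the spectrum is gapped.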
\subsection{Solutions to the recursion relations} \label{sec:solr}
In this subsection we obtain analytical results for the boundary of the continuous part of the spectrum, $\lambda_b\in\partial\sigma_{\rm ac}$, and outlier eigenvalue $\lambda_{\rm isol}$, by identifying values of $\lambda$ for which the relations (\ref{eq:pRec}) and (\ref{eq:qRec}) admit a normalizable solution.
Since Eqs.~(\ref{eq:pRec}) and (\ref{eq:qRec}) are linear distributional equations, we can derive a closed set of fixed-point equations for the low-order moments of $R$ (and, analogously, of the left-eigenvector elements $L$). In order to distinguish averages with respect to $p_R$ and $q_R$, we introduce the definitions
\begin{eqnarray}
\langle f(R) \rangle = \int {\rm d}^2 r p_R(r) f(r)
\end{eqnarray}
and
\begin{eqnarray}
\langle f(R) \rangle_q = \int {\rm d}^2 r q_R(r) f(r) ,
\end{eqnarray}
where $f$ is an arbitrary function.
From Eq.~(\ref{eq:qRec}), we obtain that
\begin{eqnarray}
\langle R \rangle_q &=& \frac{\langle K_{\rm in}K_{\rm out}\rangle}{c(\lambda+d)} \langle J \rangle \langle R\rangle_q ,\label{eq:o}\\
\langle R^2 \rangle_q &=& \frac{\langle K_{\rm in}K_{\rm out}\rangle}{c(\lambda+d)^2} \langle J^2 \rangle \langle R^2\rangle_q \nonumber\\
&& + \frac{\langle K_{\rm in}K_{\rm out}(K_{\rm out}-1)\rangle}{c (\lambda+d)^2} \langle J \rangle^2 \langle R\rangle^2_q, \label{eq:aa}\\
\langle |R|^2 \rangle_q &=& \frac{\langle K_{\rm in}K_{\rm out}\rangle}{c|\lambda+d|^2}\langle |J|^2 \rangle \langle |R|^2\rangle_q \nonumber\\ && + \frac{\langle K_{\rm in}K_{\rm out}(K_{\rm out}-1)\rangle}{c |\lambda+d|^2} |\langle J \rangle|^2 |\langle R\rangle_q |^2,\label{eq:b}
\end{eqnarray}
and from Eq.~(\ref{eq:pRec}) we obtain
\begin{eqnarray}
\langle R \rangle &=& \frac{c}{\lambda+d} \langle J \rangle \langle R\rangle_q, \label{eq:ox}\\
\langle R^2 \rangle &=& \frac{c}{(\lambda+d)^2} \langle J^2 \rangle \langle R^2\rangle_q \nonumber\\
&& + \frac{\langle K^2_{\rm out}\rangle -c}{ (\lambda+d)^2} \langle J \rangle^2 \langle R\rangle^2_q, \\
\langle |R|^2 \rangle &=& \frac{c}{|\lambda+d|^2}\langle |J|^2 \rangle \langle |R|^2\rangle_q \nonumber\\ && + \frac{\langle K^2_{\rm out}\rangle -c}{ |\lambda+d|^2} |\langle J \rangle|^2 |\langle R\rangle_q |^2.\label{eq:bx}
\end{eqnarray}
The relations (\ref{eq:o}-\ref{eq:bx}) admit three kinds of solutions. The first type of solution is obtained when $\langle R \rangle_q\neq 0$. We denote this
solution by $\lambda = \lambda_{\rm isol}$ and $R = R_{\rm isol}$, since it identifies the outliers of the random-matrix ensemble. In this case, (\ref{eq:o}) implies that
\begin{eqnarray}
\frac{\langle K_{\rm in}K_{\rm out}\rangle}{c(\lambda_{\rm isol}+d)} \langle J \rangle = 1,
\end{eqnarray}
which gives the result (\ref{eq:outlier}) for the outlier eigenvalue. Since $\lambda_{\rm isol}\in\mathbb{R}$, it holds that $R_{\rm isol}\in \mathbb{R}$.
Consequently, we obtain Eq.~(\ref{eq:R}) for $\langle R_{\rm isol} \rangle$ by solving Eqs.~(\ref{eq:o}-\ref{eq:bx}) at $\lambda = \lambda_{\rm isol}$.
The second type of solution is obtained when $\langle R \rangle_q =0$ and $\lambda\notin \mathbb{R}$. We denote this solution as $\lambda = \lambda_{\rm b}$ and $R = R_{\rm b}$. Solving (\ref{eq:b}) we obtain the relation
\begin{eqnarray}
\frac{\langle K_{\rm in}K_{\rm out}\rangle}{c|\lambda_{\rm b}+d|^2}\langle |J|^2 \rangle = 1,
\end{eqnarray}
which leads to Eq.~(\ref{eq:boundary}) if we use the degree correlation coefficient $\rho$ as defined in~(\ref{eq:assort}). In this case, $R_{\rm b}$ is a complex random variable and its first two moments are zero.
The third type of solution is obtained when $\langle R \rangle_q = 0$ and $\lambda\in \mathbb{R}$; we denote this solution
as $\lambda = \lambda_{\rm r}$ and $R = R_{\rm r}$. Solving (\ref{eq:b}) we obtain
\begin{eqnarray}
\frac{\langle K_{\rm in}K_{\rm out}\rangle}{c(\lambda_{\rm r}+d)^2}\langle |J|^2 \rangle = 1 .
\end{eqnarray}
For this solution we have that $\langle R_{\rm r} \rangle = 0$, but $\langle R^2_{\rm r} \rangle \neq 0$, with its value depending on the normalization of $R_{\rm r}$.
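The recursive distributional equations can also be solved numerically by a population-dynamics iteration. The sketch below (an illustration under simplifying assumptions, not the implementation used in this paper) iterates Eq.~(\ref{eq:qRec}) for the unweighted Poissonian ensemble with $\rho=0$ and $d=0$ at $\lambda=\lambda_{\rm isol}=c$, where Eq.~(\ref{eq:aa}) predicts $\langle R^2\rangle_q/\langle R\rangle_q^2 = c/(c-1)$:

```python
import numpy as np

rng = np.random.default_rng(2)
c, lam, N = 2.0, 2.0, 20000        # unweighted Poisson ensemble, rho = 0, d = 0: lambda_isol = c
r = np.ones(N)                     # population of samples representing q_R

for _ in range(30):
    k = rng.poisson(c, size=N)                   # K_out of each new member (Poisson, since rho = 0)
    idx = rng.integers(0, N, size=int(k.sum()))  # draw sum(k) members from the old population
    owner = np.repeat(np.arange(N), k)           # which new member each draw belongs to
    sums = np.zeros(N)
    np.add.at(sums, owner, r[idx])               # sum_{j=1}^{K_out} r_j for each new member
    r = sums / lam
    r /= r.mean()                                # the equation is linear in r: fix the scale <R>_q = 1

moment_ratio = float(np.mean(r**2))  # should approach c/(c-1) = 2 for c = 2
```

The second-moment map $m_2 \mapsto m_2/c + 1$ is a contraction, so a few dozen sweeps suffice for convergence up to sampling noise.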
\begin{figure*}[t]
\centering
\hspace{-0.5cm}
\subfigure[Eigenvalues of an adjacency matrix with the Poissonian degree distribution (\ref{eq:neg1}) and $\rho = 0$. ]
{\includegraphics[width=0.4\textwidth]{figure1a.pdf}\label{fig:adjneg1}}
\subfigure[Eigenvalues of an adjacency matrix with the Poissonian degree distribution (\ref{eq:neg1}) and $\rho = -0.3$. ]
{\includegraphics[width=0.4\textwidth]{figure1b.pdf}\label{fig:adjneg2}}
\subfigure[Eigenvalues $\lambda_1$ and $\mathrm{Re}(\lambda_2)$ as a function of $\rho$. ]
{\includegraphics[width=0.4\textwidth]{figure1c.pdf}\label{fig:adjneg3}}
\subfigure[Mean of the elements of the right eigenvector associated with $\lambda_1$.]
{\includegraphics[width=0.4\textwidth]{figure1d.pdf}\label{fig:adjneg4}}
\caption{{\it Effect of negative $\rho$ on the spectra of adjacency matrices of oriented random graphs with a prescribed degree distribution.} Spectra of adjacency matrices of Poissonian (\ref{eq:neg1}) and geometric (\ref{eq:neg2}) oriented random graphs with mean degree $c=2$ and diagonal entries $d=0$ are presented.
Direct diagonalization results for matrices of size $n=4000$ (markers) are compared with the theoretical results for infinitely large matrices presented in Sec.~\ref{eq:theory} (lines). Figs.~\ref{fig:adjneg1} and \ref{fig:adjneg2}: eigenvalues $\lambda(\mathbf{A})$ of the adjacency matrices of two Poissonian random graphs with $\rho=0$ and $\rho=-0.3$ are presented. The red line is the boundary $\lambda_b$ given by (\ref{eq:boundary}). Fig.~\ref{fig:adjneg3}: the sample means $\overline{\lambda}_1$ and $\overline{{\rm Re}[\lambda_2]}$ are compared with the theoretical results $\lambda_1 = 2(1+\rho)$ and $|\lambda_{\rm b}| = \sqrt{2(1+\rho)}$ for $\rho>-0.5$ from Sec.~\ref{eq:theory}.
Fig.~\ref{fig:adjneg4}: direct diagonalization results for $\overline{\mathcal{R}}$ are compared with the theoretical results from Sec.~\ref{eq:theory}, viz., $\frac{\langle R_1\rangle}{\sqrt{\langle |R_1|^2\rangle}} = \sqrt{ \frac{1 + 2\rho }{ 2+\rho -2\rho^2 } } $ for the Poissonian ensemble with $\rho\geq-0.5$, $\frac{\langle R_{\rm isol}\rangle }{\sqrt{\langle R^2_{\rm isol}\rangle}} = \sqrt{\frac{1+2\rho }{ 2 (2 +\rho - 2\rho^2)} }$ for the geometric ensemble with $\rho\geq-0.5$, and $\frac{\langle R_1\rangle}{\sqrt{\langle |R_1|^2\rangle}} = 0$ if $\rho<-0.5$. In Figs.~\ref{fig:adjneg3} and \ref{fig:adjneg4} direct diagonalization results are sample means over $1000$ matrix realizations and error bars represent the sample standard deviations.
} \label{fig1}
\end{figure*}
\section{Examples} \label{sec:examples}
The theoretical results in Sec.~\ref{eq:theory} are conjectured to hold for ensembles of the type (\ref{eq:model}) provided that
$c (\rho + 1)>1$, the moments of $p_{K^{\rm in}, K^{\rm out}}$ and $p_J$ are finite, and $n$ is large enough. In this section we compare the theoretical results with direct diagonalization results for matrices of finite size $n\sim O(10^3)$. Such numerical experiments reveal the magnitude of finite-size effects, which is important for applications because real-world systems are finite. Moreover, in order to better understand the limitations of the theory, we also consider ensembles for which
$c (\rho + 1)<1$ or for which the moments of the degree distribution diverge, such that the theory in Sec.~\ref{eq:theory} does not apply. In these cases we are interested in the deviations between the formulas in Sec.~\ref{eq:theory} and the results of numerical experiments.
Our numerical experiments are designed as follows. First, we sample random matrices from an ensemble of the type (\ref{eq:model}) using the algorithm presented in Appendix~\ref{sec:finite}. Subsequently, we diagonalize the matrix samples with the subroutine {\it gsl\_eigen\_nonsymmv} from the GNU Scientific Library, which computes the eigenvalues of a matrix together with the entries of the associated right eigenvectors (see the GNU Scientific Library website, \texttt{https://www.gnu.org/software/gsl/}, for more information).
Finally, in order to test the theory in Sec.~\ref{eq:theory}, we compute for each matrix sample
the leading eigenvalue $\lambda_1(\mathbf{A})$, the real part of the subleading eigenvalue $\lambda_2(\mathbf{A})$, and the
observable
\begin{eqnarray}
\mathcal{R}(\mathbf{A}) = \frac{\sum^{n}_{j=1} R_{1,j}(\mathbf{A}) }{\sqrt{\sum^{n}_{j=1}|R_{1,j}(\mathbf{A})|^2 }} , \label{eq:mathcalR}
\end{eqnarray}
which quantifies the mean value of the components of the right eigenvector associated with $\lambda_1(\mathbf{A})$. Before we compute $\mathcal{R}(\mathbf{A})$ through the above equation, we rotate all the elements $R_{1,j}(\mathbf{A})$ by a constant phase $e^{i \theta}$, such that the empirical mean $\sum^n_{j=1}R_{1,j}(\mathbf{A})$ is a positive real number, in accordance with our conventions in Eq.~(\ref{eq:convR}).
Finally, we compute the mean values $\overline{\lambda}_1$, $\overline{\lambda}_2$, and
$\overline{\mathcal{R}}$ of the sampled populations, together with the standard deviations for each quantity.
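For readers who wish to reproduce this pipeline without GSL, the same observables can be computed with any dense nonsymmetric eigensolver. The following sketch uses {\it numpy} on a small directed Erd\H{o}s-R\'{e}nyi-like test matrix (our illustration, not the code used for the figures):

```python
import numpy as np

def spectral_observables(A):
    # Leading and subleading eigenvalue (ordered by real part) and the
    # normalized mean eigenvector component, cf. Eq. (eq:mathcalR) in the text.
    eigvals, eigvecs = np.linalg.eig(A)
    order = np.argsort(eigvals.real)[::-1]
    lam1, lam2 = eigvals[order[0]], eigvals[order[1]]
    r1 = eigvecs[:, order[0]]
    r1 = r1 * np.exp(-1j * np.angle(r1.sum()))   # rotate so the empirical mean is real and positive
    R = r1.sum().real / np.sqrt(np.sum(np.abs(r1) ** 2))
    return lam1, lam2, R

rng = np.random.default_rng(0)
n, c = 300, 2.0
A = (rng.random((n, n)) < c / n).astype(float)   # a directed Erdos-Renyi-like test matrix
np.fill_diagonal(A, 0.0)
lam1, lam2, R = spectral_observables(A)
```

The phase rotation implements the sign convention of Eq.~(\ref{eq:convR}) before the ratio in Eq.~(\ref{eq:mathcalR}) is evaluated.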
The present section is organized into three subsections.
In Secs.~\ref{subSec:rhoN} and~\ref{subSec:rhoP} we consider adjacency matrices of oriented random graphs with negative degree correlations ($\rho<0$) and positive degree correlations ($\rho>0$), respectively. In these subsections we focus on random graph ensembles with a prescribed degree distribution having finite moments, namely, Poissonian and geometric random graphs. For such ensembles we expect that the theoretical results of Section~\ref{eq:theory} will be well corroborated by direct diagonalization results as long as $c (\rho + 1)>1$.
In Subsection~\ref{subSec:powerLaw} we apply the theoretical results of Section~\ref{eq:theory} to adjacency matrices of oriented random graphs with power-law degree distributions that have divergent moments and, therefore, we expect to observe deviations between numerical experiments and theoretical results, which we aim to understand.
\subsection{Adjacency matrices of oriented random graphs with negative degree correlations ($\rho\leq 0$)} \label{subSec:rhoN}
Here we apply the theoretical results of Section~\ref{eq:theory} to adjacency matrices of oriented random graphs with negative correlations between indegrees and outdegrees, i.e., for $\rho \leq 0$.
We consider {\it Poissonian} random graphs --- also called Erd\H{o}s-R\'{e}nyi random graphs --- and
{\it geometric} random graphs with mean degree $c>0$ and degree correlation coefficient $\rho\in[-1,0]$.
For Poissonian random graphs, the prescribed degree distribution is
\begin{eqnarray}
\lefteqn{p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out}) = (1+\rho) \: p_{\rm p}(k^{\rm in};c) p_{\rm p}(k^{\rm out};c)} \quad \quad&&
\nonumber\\
&& - \frac{\rho}{2} \left[ \delta_{k^{\rm in},0}\: p_{\rm p}(k^{\rm out};2c) + \delta_{k^{\rm out},0} \: p_{\rm p}(k^{\rm in};2c) \right], \label{eq:neg1}
\end{eqnarray}
where $k^{\rm in},k^{\rm out}\in\left\{0,1,\ldots,n-1\right\}$, where
\begin{eqnarray}
p_{\rm p}(k;c) = \frac{1}{\mathcal{N}_{\rm p}}\frac{c^{k}}{k!}, \label{eq:pP}
\end{eqnarray}
and where $\mathcal{N}_{\rm p} = \sum^{n-1}_{k=0}c^k/k!$. For $n\rightarrow \infty$, $p_{\rm p}(k;c)$ is the Poisson distribution with mean degree $c$ and $\mathcal{N}_{\rm p} =e^c$.
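Since the two delta terms in (\ref{eq:neg1}) force one of the two degrees to zero, the distribution is a three-component mixture with $\langle K^{\rm in} K^{\rm out}\rangle = (1+\rho)c^2$ in the limit $n\rightarrow\infty$. This suggests the following sampling sketch (an illustration only; the graph-generation algorithm we actually use is described in Appendix~\ref{sec:finite}):

```python
import numpy as np

def sample_degrees(n, c, rho, rng):
    # Mixture form of (eq:neg1) for rho <= 0: with prob. 1+rho draw independent
    # Poisson(c) degrees; with prob. -rho/2 the pair (0, Poisson(2c)); with
    # prob. -rho/2 the pair (Poisson(2c), 0). Truncation at n-1 is neglected.
    u = rng.random(n)
    k_in = rng.poisson(c, size=n)
    k_out = rng.poisson(c, size=n)
    m1 = u < -rho / 2
    m2 = (u >= -rho / 2) & (u < -rho)
    k_in[m1] = 0
    k_out[m1] = rng.poisson(2 * c, size=int(m1.sum()))
    k_in[m2] = rng.poisson(2 * c, size=int(m2.sum()))
    k_out[m2] = 0
    return k_in, k_out

rng = np.random.default_rng(1)
c, rho = 2.0, -0.3
k_in, k_out = sample_degrees(200000, c, rho, rng)
kk_hat = float(np.mean(k_in * k_out))   # should be close to (1 + rho) c^2 = 2.8
```

Both marginal mean degrees remain equal to $c$, while the product moment is depressed by the factor $1+\rho$.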
For geometric random graphs, the prescribed degree distribution is
\begin{eqnarray}
\lefteqn{p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out}) = (1+\rho) \: p_{\rm g}(k^{\rm in};c) p_{\rm g}(k^{\rm out};c)} &&
\nonumber\\
&& - \frac{\rho}{2} \left[ \delta_{k^{\rm in},0} \: p_{\rm g}(k^{\rm out};2c) + \delta_{k^{\rm out},0} \: p_{\rm g}(k^{\rm in};2c) \right], \label{eq:neg2}
\end{eqnarray}
where $k^{\rm in},k^{\rm out}\in\left\{0,1,\ldots,n-1\right\}$, where
\begin{eqnarray}
p_{\rm g}(k;c) = \frac{1}{\mathcal{N}_{\rm g}}\left(\frac{c}{1+c}\right)^k, \label{eq:expP}
\end{eqnarray}
and where $\mathcal{N}_{\rm g} = \sum^{n-1}_{k=0}\left(\frac{c}{1+c}\right)^k$. For $n\rightarrow \infty$, $p_{\rm g}(k;c) $ is the geometric distribution with mean degree $c$ and $\mathcal{N}_{\rm g} = c+1$.
Throughout this subsection, we consider unweighted graphs for which $J_{jk}=1$ for all $j \neq k$, and thus
\begin{eqnarray}
p_J(x) = \delta(x-1).
\end{eqnarray}
Since a nonzero $d$ results in a constant shift of all eigenvalues by $-d$, i.e.~$\lambda_j\rightarrow \lambda_j - d$, we set for simplicity $d=0$.
In Fig.~\ref{fig1} we compare direct diagonalization results with the theoretical results given by Eqs.~(\ref{eq:outlier}), (\ref{eq:boundary}), (\ref{eq:R}), (\ref{eq:lambda1}), (\ref{eq:R1}) and (\ref{eq:lambdaSG}). The results in Fig. \ref{fig1} show how the degree correlation coefficient $\rho$ affects the spectral properties of adjacency matrices of oriented random graphs with mean degree $c=2$.
In Figs.~\ref{fig:adjneg1}-\ref{fig:adjneg2}, we provide a global picture of the spectra of adjacency matrices of Poissonian random graphs: we compare the spectra of a matrix with $\rho=0$ and a matrix with $\rho=-0.3$. We observe how negative degree correlations contract the spectrum: for $\rho=-0.3$ the leading eigenvalue is smaller and the spectrum concentrates closer to the origin.
In the bulk of the spectrum, we observe two types of eigenvalues, namely, those that are located in the center of the spectrum and have a regular spacing, and those that are located along the rim of the spectrum and are randomly spaced. In the limit of large $n$, the former give rise to the pure-point spectrum, while the eigenvalues along the rim yield the continuous spectrum. We observe that the pure-point spectrum is smaller in Fig.~\ref{fig:adjneg1} than in Fig.~\ref{fig:adjneg2}, which is consistent with the fact that the size of the giant strongly connected component increases as a function of $\rho$.
In Fig.~\ref{fig:adjneg3} we present a more detailed analysis of the behaviour of the leading eigenvalue $\lambda_1$ and of the subleading eigenvalue $\lambda_2$ as a function of $\rho$. As discussed in Sec.~\ref{eq:theory}, for the adjacency matrices of unweighted random graphs with $c(\rho+1)>1$, it holds that $\lambda_1 = \lambda_{\rm isol}$ and ${\rm Re}[\lambda_2] = {\rm max}\left\{{\rm Re}[\lambda_{\rm b}]: \lambda_{\rm b}\in \partial \sigma_{\rm ac}\right\}$, which is well corroborated by the numerical results in Fig.~\ref{fig:adjneg3}. Oriented random graphs do not have a giant strongly connected component when $c(\rho+1)<1$ (see Appendix~\ref{AppendixC}), and therefore Eqs.~(\ref{eq:outlier}), (\ref{eq:boundary}), (\ref{eq:lambda1}) and (\ref{eq:lambdaSG}) do not apply in this regime. In Fig.~\ref{fig:adjneg3}, we observe that $\lambda_1 = \lambda_2$ at the critical percolation threshold $\rho= -1+1/c= -0.5$, as predicted by the theory. In the regime $\rho<-0.5$, we observe large sample-to-sample fluctuations in $\lambda_1$ because the generated graph is a disconnected union of a large number of small isolated subgraphs.
In Fig.~\ref{fig:adjneg4} we present a systematic study of the first moment $\langle R_1 \rangle$ of the eigenvector $\vec{R}_1$ associated with the leading eigenvalue, which is an outlier for $\rho\geq -0.5$. The theoretical result (\ref{eq:R}) is well corroborated by direct diagonalization results for the observable $\overline{\mathcal{R}}$, defined in Eq.~(\ref{eq:mathcalR}).
Interestingly, we observe that $\overline{\mathcal{R}}$ behaves as the order parameter of a continuous phase transition, reminiscent of spin models: for $c(\rho+1)>1$ we obtain $\langle R_1 \rangle>0$ and the system can be considered ferromagnetic, whereas for $c(\rho+1)<1$ we have $\langle R_1 \rangle=0$ and the system can be considered spin-glass-like. A similar type of behaviour has been found in sparse symmetric random matrices \cite{kabashima2010cavity, kabashima2012first, takahashi2014fat, susca2019top}.
Taken together, the results in Fig. \ref{fig1} illustrate how the leading eigenvalue of the adjacency matrix of an oriented random graph increases as a function of $\rho$. Thus, $\lambda_1$ can be significantly reduced by rewiring a graph in such a manner that the correlation between indegrees and outdegrees decreases. As a consequence, we can already expect that dynamical systems coupled through oriented graphs characterized by negative values of $\rho$ are more stable in comparison to random graph models with $\rho > 0$.
In the next subsection we apply the theoretical results of Sec.~\ref{eq:theory} to the adjacency matrices of {\it weighted} random graphs with a positive $\rho$.
\subsection{Adjacency matrices of oriented weighted random graphs with positive degree correlations ($\rho\geq 0$)} \label{subSec:rhoP}
We illustrate the theoretical results of Sec.~\ref{eq:theory} for the adjacency matrices of weighted oriented random graphs with a positive $\rho$. We consider again Poissonian and geometric random graphs. The {\it Poissonian} ensemble with positive $\rho$ has a prescribed degree distribution
\begin{eqnarray}
\lefteqn{p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out})} && \nonumber\\
&& = (1-c\rho) p_{\rm p}(k^{\rm in}) p_{\rm p}(k^{\rm out})
+ c\rho \:p_{\rm p}(k^{\rm out}) \delta_{k^{\rm in}, k^{\rm out}}, \label{eq:posAss}
\end{eqnarray}
where $\rho\in[0,1/c]$, and $p_{\rm p}$ is the truncated Poisson distribution defined by Eq.~(\ref{eq:pP}). The {\it geometric} ensemble has the prescribed degree distribution
\begin{eqnarray}
\lefteqn{p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out})} && \nonumber\\
&& = \left(1- \frac{c \rho}{c+1}\right) p_{\rm g}(k^{\rm in}) p_{\rm g}(k^{\rm out})
+ \frac{c \rho}{c+1} \:p_{\rm g}(k^{\rm out}) \delta_{k^{\rm in}, k^{\rm out}}, \nonumber\\ \label{eq:posAss2}
\end{eqnarray}
where $\rho\in[0,(c+1)/c]$, and $p_{\rm g}$ is the truncated geometric distribution defined by Eq.~(\ref{eq:expP}).
The off-diagonal matrix entries $J_{jk}$ are i.i.d.~random variables drawn either from a Gaussian distribution
\begin{eqnarray}
p_J(x) = \frac{1}{\sqrt{2\pi v^2}} e^{-\frac{(x-\mu)^2}{2v^2}},
\label{gauss}
\end{eqnarray}
or from a bimodal distribution
\begin{eqnarray}
p_J(x) = b \delta(x-x_0) + (1-b)\delta(x+x_0),
\label{bim}
\end{eqnarray}
with the parametrization $x_0 = \sqrt{\mu^2+v^2}$ and $2b = 1 + \mu/x_0$. The parameters $\mu$ and $v$ denote, respectively, the mean and the standard
deviation of each distribution defined above.
Again, without loss of generality, we can set $d=0$.
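As a consistency check of this parametrization, note that $\langle J\rangle = (2b-1)x_0 = \mu$ and $\langle J^2\rangle = x_0^2$, so the variance is $x_0^2 - \mu^2 = v^2$:

```python
import numpy as np

mu, v = 1.0, 1.2                     # mean and standard deviation used below
x0 = np.sqrt(mu**2 + v**2)
b = 0.5 * (1.0 + mu / x0)            # weight of the atom at +x0 in (eq:bim)

mean_J = b * x0 - (1.0 - b) * x0     # <J> = (2b - 1) x0
var_J = x0**2 - mean_J**2            # <J^2> = x0^2, since both atoms sit at distance x0
```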
\begin{figure*}[t]
\centering
\hspace{-0.5cm}
\subfigure[Eigenvalues of an adjacency matrix with the Poissonian degree distribution (\ref{eq:posAss}) and $\rho = 0$. ]
{\includegraphics[width=0.4\textwidth]{figure2a.pdf}\label{fig:2a}}
\subfigure[Eigenvalues of an adjacency matrix with the Poissonian degree distribution (\ref{eq:posAss}) and $\rho = 0.5$. ]
{\includegraphics[width=0.4\textwidth]{figure2b.pdf}\label{fig:2b}}
\subfigure[Leading eigenvalue $\lambda_1$ as a function of $\rho$. ]
{\includegraphics[width=0.4\textwidth]{figure2c.pdf}\label{fig:2c}}
\subfigure[Spectral gap as a function of $\rho$.]
{\includegraphics[width=0.4\textwidth]{figure2d.pdf}\label{fig:2d}}
\subfigure[ Mean of the right eigenvector components associated with $\lambda_1$ for Poissonian random graphs]
{\includegraphics[width=0.4\textwidth]{figure2e.pdf}\label{fig:2e}}
\subfigure[Mean of the right eigenvector components associated with $\lambda_1$ for geometric random graphs]
{\includegraphics[width=0.4\textwidth]{figure2f.pdf}\label{fig:2f}}
\caption{ {\it Effect of positive $\rho$ on the spectral properties of adjacency matrices of oriented weighted random graphs with prescribed degree distributions.}
The figure shows results for adjacency matrices of Poissonian and geometric oriented weighted random graphs with degree distributions defined by Eqs.~(\ref{eq:posAss}) and (\ref{eq:posAss2}), respectively, and with mean degree $c=2$. The off-diagonal weights are drawn from a Gaussian or a bimodal distribution with mean $\langle J \rangle = 1$ and
standard deviation $\sqrt{\langle J^2\rangle -\langle J \rangle^2} = 1.2$ (see Eqs.~(\ref{gauss}) and (\ref{bim})). The diagonal weights are set to zero, i.e., $d=0$.
Direct diagonalization results for matrices of size $n=4000$ (markers) are compared with the theoretical results (lines) for $n \rightarrow \infty$ presented in Sec.~\ref{eq:theory}. Figures \ref{fig:2a} and \ref{fig:2b} show the eigenvalues of the adjacency matrices of two Poissonian random graphs with $\rho=0$ and $\rho=0.5$, respectively, and with a Gaussian distribution for the off-diagonal weights. The red line is the theoretical result for $\lambda_b$ given by Eq.~(\ref{eq:boundary}).
In Figs.~\ref{fig:2c}-\ref{fig:2f}, the theoretical results for the leading eigenvalue $\lambda_1$, the spectral gap $\lambda_1 - {\rm Re}[\lambda_2] $, and the first moment of the right eigenvector $\langle R_1\rangle$ are compared with the sample means $\overline{{\rm Re}[\lambda_1]}$, $\overline{{\rm Re}[\lambda_1]} - \overline{{\rm Re}[\lambda_2]}$ and $\overline{\mathcal{R}}$, obtained from the direct diagonalization of $1000$ random matrices of size $n=4000$ (except for the blue circles in Fig.~\ref{fig:2e}, which are for $n=1000$). The error bars denote sample standard deviations. } \label{fig2}
\end{figure*}
In Fig.~\ref{fig2} we analyze how positive values of $\rho$ affect the spectral properties of adjacency matrices of oriented weighted random graphs. We compare the spectral properties for different values of $\rho$ and fixed parameters $c=2$, $\mu=1$, and $v = 1.2$. We
compare theoretical results from Sec.~\ref{eq:theory} (lines) with direct diagonalization results for matrices of size $n=4000$ (markers).
In Figs.~\ref{fig:2a} and \ref{fig:2b} we provide a global picture of the spectra of Poissonian random graphs by comparing the spectrum of a graph without degree correlations ($\rho=0$, Fig.~\ref{fig:2a}) with the spectrum of a graph with positive degree correlations ($\rho = 0.5$, Fig.~\ref{fig:2b}). In the latter case, the correlations are perfect in the sense that $K^{\rm in}_j = K^{\rm out}_j$ for each node $j$. The direct diagonalization results corroborate well the formula (\ref{eq:boundary}) for the boundary of the continuous part of the spectrum. We also note that the leading eigenvalue $\lambda_1(\mathbf{A})$ increases as a function of $\rho$. Moreover, $\lambda_1(\mathbf{A})$ is located at the boundary $\partial \sigma_{\rm ac}$ for $\rho=0$ (Fig.~\ref{fig:2a}), whereas $\lambda_1(\mathbf{A})$ is an outlier for $\rho=0.5$ (Fig.~\ref{fig:2b}).
In Figs.~\ref{fig:2c} and \ref{fig:2d} we provide a more detailed view of the eigenvalues $\lambda_1$ and $\lambda_2$. We observe that both eigenvalues $\lambda_1$ and $\lambda_2$ are monotonically increasing functions of $\rho$, and that there is a continuous transition from a gapless phase for $\rho< \langle J^2\rangle/(c\langle J\rangle^2) -1 \approx 0.22$ to a gapped phase for $\rho> \langle J^2\rangle/(c\langle J\rangle^2) -1$. We observe that the values of $\lambda_1$ and $\lambda_2$ are universal, in the sense that they depend on the distributions $p_J$ and $p_{K^{\rm in},K^{\rm out}}$ only through the parameters $c$, $\rho$, $\langle J\rangle$ and $\langle J^2\rangle$. Theoretical results are again well corroborated by direct diagonalization results, although finite-size effects are more significant for the spectral gap.
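The location of this transition follows from equating the outlier with the boundary radius: with $\langle K^{\rm in}K^{\rm out}\rangle = c^2(1+\rho)$ for these ensembles, $\lambda_{\rm isol} = c(1+\rho)\langle J\rangle$ meets $|\lambda_{\rm b}| = \sqrt{c(1+\rho)\langle J^2\rangle}$ precisely at $\rho^* = \langle J^2\rangle/(c\langle J\rangle^2)-1$. A short check for the parameters used in the figure:

```python
import numpy as np

c, mu, v = 2.0, 1.0, 1.2
mJ, mJ2 = mu, mu**2 + v**2                 # <J> = 1, <J^2> = 2.44 for both weight distributions

rho_star = mJ2 / (c * mJ**2) - 1.0         # the gap opens for rho > rho_star

# At rho_star the outlier touches the boundary of the continuous spectrum:
lam_isol = c * (1.0 + rho_star) * mJ
radius_b = np.sqrt(c * (1.0 + rho_star) * mJ2)
```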
Finally, Figs.~\ref{fig:2e} and \ref{fig:2f} compare the theoretical result for $\langle R_1\rangle$ from Sec.~\ref{eq:theory} with the sample average $\overline{\mathcal{R}}$ of the quantity $\mathcal{R}$ defined in Eq.~(\ref{eq:mathcalR}). In the gapless phase we have $\langle R_1 \rangle = 0$, while in the gapped phase we obtain $\langle R_1 \rangle > 0$, which is once more reminiscent of a continuous phase transition between a spin-glass phase and a ferromagnetic phase.
We observe that in the gapped phase direct diagonalization results are in very good agreement with the theoretical expressions, whereas in the gapless phase there are significant deviations between theory and direct diagonalization. These deviations are due to finite-size effects, which are significant because of our convention to normalize the eigenvectors with Eq.~(\ref{eq:convR}).
Nevertheless, direct diagonalization results slowly converge to the theoretical values as the matrix size $n$ increases.
\subsection{Adjacency matrices of random graphs with power-law degree distributions} \label{subSec:powerLaw}
In this subsection we analyze the spectral properties of the adjacency matrices of power-law random graphs. Note that the theoretical results in Sec.~\ref{eq:theory} are conjectured to hold for random matrices for which the moments of the degree distribution are finite. For power-law random graphs, the moments of the degree distribution diverge and it is therefore {\it a priori} not clear whether we can apply the theory of Sec.~\ref{eq:theory} in this case.
We consider two ensembles of power-law random graphs, namely, an ensemble without correlations between indegrees and outdegrees ($\rho=0$), and an ensemble with perfect degree correlations ($\rho > 0$), where $K^{\rm in}_j = K^{\rm out}_j$ for all nodes $j$. The ensemble without degree correlations has the prescribed degree distribution
\begin{eqnarray}
p_{\rm deg}\left(k_{\rm in},k_{\rm out}\right) =\frac{k^{-a}_{\rm in} k^{-a}_{\rm out}}{\mathcal{N}^2_{\rm pow}}, \label{eq:ensemblePow1}
\end{eqnarray}
with $k_{\rm in},k_{\rm out}\in\left\{1,2,\ldots,n-1\right\}$ and the normalization $\mathcal{N}_{\rm pow} = \sum^{n-1}_{k=1}k^{-a}$. The ensemble with perfect degree correlations has the prescribed degree distribution
\begin{eqnarray}
p_{\rm deg}\left(k_{\rm in},k_{\rm out}\right) = \frac{k^{-a}_{\rm in}}{\mathcal{M}_{\rm pow}} \delta_{k_{\rm in},k_{\rm out}}, \label{eq:ensemblePow2}
\end{eqnarray}
with $k_{\rm in},k_{\rm out}\in\left\{1,2,\ldots,(n-1)/2\right\}$ and the normalization $\mathcal{M}_{\rm pow} = \sum^{(n-1)/2}_{k=1}k^{-a}$.
The mean degree is given by
\begin{eqnarray}
c = \zeta(a-1)/\zeta(a)
\end{eqnarray}
provided that $a > 2$ and $n\rightarrow \infty$, with $\zeta(x)$ the Riemann zeta function.
The ensemble of Eq.~(\ref{eq:ensemblePow1}) has
\begin{eqnarray}
\rho = 0
\end{eqnarray}
if $a>2$ and $n\rightarrow \infty$, while the ensemble of
Eq.~(\ref{eq:ensemblePow2}) is characterized by
\begin{eqnarray}
\rho = \frac{\zeta(a-2)\zeta(a)}{\zeta^2(a-1)} -1
\end{eqnarray}
if $a>3$ and $n\rightarrow \infty$.
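These expressions are straightforward to evaluate numerically. The sketch below approximates $\zeta(x)$ by truncated summation (adequate here, since all arguments exceed $1$; a library routine such as {\it scipy.special.zeta} would be preferable in general) and evaluates $c$ and $\rho$ at an illustrative exponent $a=3.5$:

```python
import numpy as np

def zeta(x, kmax=500000):
    # Crude Riemann zeta by truncated summation; fine for the arguments used here.
    k = np.arange(1, kmax + 1, dtype=float)
    return float(np.sum(k ** (-x)))

a = 3.5                                               # illustrative exponent only
c = zeta(a - 1) / zeta(a)                             # mean degree, valid for a > 2
rho = zeta(a - 2) * zeta(a) / zeta(a - 1)**2 - 1.0    # correlation of ensemble (eq:ensemblePow2), a > 3
```

As $a$ decreases towards the divergence thresholds, both $c$ and $\rho$ grow without bound, which is the mechanism behind the deviations discussed below.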
We consider unweighted power-law random graphs, with diagonal entries $A_{kk} = -d =0$ and off-diagonal matrix entries $J_{jk}=1$, for all $j,k \in \{ 1,\dots,n \}$.
Power-law random graphs are interesting from a practical point of view, since degree distributions of real-world systems often have tails that are fitted well by power-law distributions~\cite{amaral2000classes, RevModPhys.74.47, dunne2002food, clauset2009power}. From a theoretical point of view, we expect that the analytic expressions in Sec.~\ref{eq:theory} will {\it not} describe well the spectral properties of random matrices with power-law degree distributions when $a$ is small enough, since these random graph models display {\it finite-size effects} and large {\it fluctuations} in the properties of their local neighbourhoods.
We now resort to direct diagonalization in order to gain a better understanding of the statistics of the leading eigenvalue
of power-law random graphs.
In Fig.~\ref{fig:4a} we plot the sample mean $\overline{\lambda_1}$ of the leading eigenvalue $\lambda_{1}(\mathbf{A})$ and the sample mean $\overline{{\rm Re}[\lambda_2]}$ of the real part of the subleading eigenvalue $\lambda_{2}(\mathbf{A})$ for the ensemble defined by Eq.~(\ref{eq:ensemblePow1}), with $\rho=0$. We observe that the theoretical expressions (\ref{eq:outlier}) and (\ref{eq:boundary}) for $\lambda_{\rm isol}$ and $|\lambda_{\rm b}|$, which are the leading and the subleading eigenvalue, respectively, are in very good correspondence with direct diagonalization results when $a\gtrsim 3$. In the regime $a \lesssim 3$, we observe significant deviations between theory and numerical experiments. Such deviations are expected, since $c\rightarrow \infty$ for $a\rightarrow 2^+$, and therefore the theoretical expressions for $\lambda_{\rm isol}$ and $|\lambda_{\rm b}|$ also diverge for $a\rightarrow 2^+$. Analogously, in Fig.~\ref{fig:4b} we present results for $\overline{\lambda_1}$ and $\overline{{\rm Re}[\lambda_2]}$ for the ensemble defined by Eq.~(\ref{eq:ensemblePow2}), with $\rho > 0$. In this case, the theory works well when $a\gtrsim 4$, whereas for $a \lesssim 4$ we observe significant deviations. Indeed, for $a\rightarrow 3^+$ the degree correlation coefficient $\rho$ diverges, and therefore the theoretical expressions for $\lambda_{\rm isol}$ and $|\lambda_{\rm b}|$ also diverge. Overall, these results show that the relations (\ref{eq:outlier}) and (\ref{eq:boundary}) work remarkably well for power-law random graphs.
Finally, in Figs.~\ref{fig:4c} and \ref{fig:4d} we compare the theoretical expression for $\langle R_{\rm isol}\rangle$, shown in Eq.~(\ref{eq:R}), with the empirical mean $\overline{\mathcal{R}}$ obtained from numerically diagonalizing matrices of sizes $n=2000$ and $n=4000$. There is a reasonable correspondence between theoretical results and numerical experiments, considering that power-law random graphs exhibit significant finite-size effects and fluctuations. Interestingly, when decreasing $a$ and thus increasing $c$, the normalized mean $\langle R_{\rm isol}\rangle/\sqrt{\langle R^2_{\rm isol}\rangle}$ vanishes at $a = 3$ and $a = 4$ for the ensembles defined by the degree distributions (\ref{eq:ensemblePow1}) and (\ref{eq:ensemblePow2}), respectively.
Since the Perron-Frobenius theorem applies to this ensemble, this is a transition from a delocalized phase ($\langle R_{\rm isol}\rangle/\sqrt{\langle R^2_{\rm isol}\rangle}>0$) to a localized phase ($\langle R_{\rm isol}\rangle/\sqrt{\langle R^2_{\rm isol}\rangle} = 0$), as argued in Sec.~\ref{sec:Perron}. In other words, the leading eigenvector is {\it localized} when the exponent $a$ that characterizes the decay of the power-law degree distribution is small enough.
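The theoretical curves used in the comparison above can be evaluated directly from the Riemann zeta function. The following minimal Python sketch computes $\lambda_{\rm isol}=\zeta(a-1)/\zeta(a)$ and $|\lambda_{\rm b}|=\sqrt{\lambda_{\rm isol}}$ for the $\rho=0$ ensemble, and $\lambda_{\rm isol}=\zeta(a-2)/\zeta(a-1)$ for the $\rho>0$ ensemble; the truncated-sum-plus-tail implementation of $\zeta$ is an illustrative stand-in for a library routine such as `scipy.special.zeta`.

```python
def zeta(s, terms=100_000):
    """Riemann zeta for s > 1: truncated sum plus an integral tail estimate."""
    partial = sum(k ** (-s) for k in range(1, terms + 1))
    tail = terms ** (1 - s) / (s - 1)  # approximates the neglected tail
    return partial + tail

def theory_rho0(a):
    """Outlier and boundary radius for the rho = 0 power-law ensemble."""
    lam_isol = zeta(a - 1) / zeta(a)       # leading (outlier) eigenvalue
    lam_b = lam_isol ** 0.5                # |lambda_b| = sqrt(lambda_isol)
    return lam_isol, lam_b

def theory_rho_pos(a):
    """Outlier and boundary radius for the rho > 0 power-law ensemble."""
    lam_isol = zeta(a - 2) / zeta(a - 1)
    return lam_isol, lam_isol ** 0.5
```

For $a=5$ and $\rho=0$ this gives $\lambda_{\rm isol}=\zeta(4)/\zeta(5)\approx 1.0438$, consistent with the curves shown in Fig.~\ref{fig:4a}.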
\begin{figure*}[t]
\centering
\hspace{-0.5cm}
\subfigure[Leading eigenvalues for power-law graphs with degree distribution (\ref{eq:ensemblePow1}) ($\rho=0$)]
{\includegraphics[width=0.4\textwidth]{figure3a.pdf}\label{fig:4a}}
\subfigure[Leading eigenvalues for power-law graphs with degree distribution (\ref{eq:ensemblePow2}) ($\rho>0$)]
{\includegraphics[width=0.4\textwidth]{figure3c.pdf}\label{fig:4b}}
\subfigure[First moment of the eigenvector associated with the leading eigenvalue for power-law graphs with degree distribution (\ref{eq:ensemblePow1}) ($\rho=0$)]
{\includegraphics[width=0.4\textwidth]{figure3b.pdf}\label{fig:4c}}
\subfigure[First moment of the eigenvector associated with the leading eigenvalue for power-law graphs with degree distribution (\ref{eq:ensemblePow2}) ($\rho>0$)]
{\includegraphics[width=0.4\textwidth]{figure3d.pdf}\label{fig:4d}}
\caption{{\it Properties of the leading and subleading eigenvalue of power-law random graphs.}
Spectral properties of the adjacency matrices of power-law random graphs with prescribed degree distributions (\ref{eq:ensemblePow1}) or (\ref{eq:ensemblePow2}); the former has $\rho=0$ and the latter $\rho>0$. The off-diagonal weights are set to $J_{jk} = 1$ and the diagonal weights to zero, $d=0$.
Direct diagonalization results are the means of a sampled population of matrices of size $n=2000$ or $n=4000$ (markers) --- with population sizes of $2000$ and $1000$, respectively --- and are compared with the equations derived in
Sec.~\ref{eq:theory} (lines). The error bar denotes the standard deviation of the population.
Figs.~\ref{fig:4a} and \ref{fig:4b}: direct diagonalization results for the (sub)leading eigenvalue in the random graph models (\ref{eq:ensemblePow1}) and (\ref{eq:ensemblePow2}) are compared with $|\lambda_{\rm b}|^2 = \lambda_{\rm isol} = \zeta(a-1)/\zeta(a)$ and $|\lambda_{\rm b}|^2 = \lambda_{\rm isol} = \zeta(a-2)/\zeta(a-1)$, respectively.
Fig.~\ref{fig:4c}: direct diagonalization results for $\overline{\mathcal{R}}$ in the model (\ref{eq:ensemblePow1}) are compared with $\frac{\langle R_{\rm isol} \rangle}{\sqrt{\langle R^2_{\rm isol} \rangle}}= \sqrt{\frac{\zeta(a-1)[\zeta(a-1)-\zeta(a)]}{\zeta(a)[\zeta(a-2) - \zeta(a-1)]}}$ if $a> 3$ and $\frac{\langle R_{\rm isol} \rangle}{\sqrt{\langle R^2_{\rm isol} \rangle}}= 0$ if $a<3$.
Fig.~\ref{fig:4d}: direct diagonalization results $\overline{\mathcal{R}}$ in the model (\ref{eq:ensemblePow2}) are compared with $\frac{\langle R_{\rm isol} \rangle}{\sqrt{\langle R^2_{\rm isol} \rangle}} = \sqrt{\frac{ \zeta(a-1)\zeta(a-2) [\zeta(a-2)- \zeta(a-1)]}{ \zeta(a) \zeta(a-1)\zeta(a-3) - \zeta^2(a-1) \zeta(a-2) +(\zeta(a-1)-\zeta(a)) \zeta^2(a-2) }}$ if $a>4$ and $\frac{\langle R_{\rm isol} \rangle}{\sqrt{\langle R^2_{\rm isol} \rangle}}= 0$ if $a<4$.
} \label{fig4}
\end{figure*}
\section{Stability of complex systems}\label{sec:app}
We use the results from Sec.~\ref{eq:theory} to analyse the stability of stationary states in large networked systems whose Jacobian matrix $\mathbf{A}$ is modeled by
random matrices defined by Eq.~(\ref{eq:model}). In this
case, Eq.~(\ref{eq:linx}) takes the form
\begin{eqnarray}
\partial_t y_j(t) = \sum^n_{k=1; (k\neq j)} C_{kj} J_{kj} y_k(t) - d\: y_j(t) , \label{eq:linT}
\end{eqnarray}
where $d$ represents the strength of self-regulation at each node of the network.
By requiring
that $d>0$, the stationary state $\vec{y}=0$ is stable in the absence of interactions between the degrees of freedom. However, when the constituents of the system interact strongly enough, small perturbations or fluctuations around the fixed point $\vec{y}=0$ can propagate through the system and the stationary state can become unstable due to these interactions. In this section, we present a quantitative study of how the architecture of the network of interactions between the constituents of the system affects the system stability.
The stability of a networked system can be studied with spectral methods.
Indeed, the solution of the linear Eq.~(\ref{eq:linT}) is given by (\ref{jkla}), which implies that the fixed point $\vec{y} = 0$
is stable if ${\rm Re}[\lambda_j] < 0$ for all $j \in \{ 1,\dots,n \}$.
Hence, the long-time behaviour of $\vec{y}(t)$ is governed by the leading eigenvalue $\lambda_1$ and its associated right and left eigenvectors: If ${\rm Re}[\lambda_1] < 0$, then $\lim_{t \rightarrow \infty} \|\vec{y}(t)\| = 0$ and the stationary state is stable. On the other hand, if ${\rm Re}[\lambda_1] > 0$, then $\vec{y}(t)= e^{\lambda_1 t}\left((\vec{R}_1\cdot \vec{y}(0))\vec{L}_1 + O(e^{(\lambda_2-\lambda_1)t})\right)$ and $\vec{y} = 0$ is unstable. Notice that the destabilizing mode takes the form of the left eigenvector $\vec{L}_1$. The right and left eigenvectors associated with the leading eigenvalue thus contain valuable information about the nature of the modes that destabilize the system. For instance,
if the eigenvector $\vec{L}_1$ has a positive mean $\langle L_1\rangle>0$, then the instability is reminiscent of a ferromagnetic phase whereas if
$\langle L_1\rangle=0$, then the instability is reminiscent of a spin-glass phase~\cite{mezard2001bethe, mezard2003cavity, franz2001ferromagnet}.
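The spectral stability criterion described above can be illustrated with a short Python sketch; the two Jacobian matrices below are hypothetical examples, not matrices drawn from the ensemble of this paper.

```python
import numpy as np

# The fixed point y = 0 of dy/dt = A y is stable iff every eigenvalue of the
# Jacobian A has a negative real part.
def is_stable(A):
    return bool(np.max(np.linalg.eigvals(A).real) < 0)

A_weak = np.array([[-1.0, 0.5], [0.2, -1.0]])    # weak coupling: stable
A_strong = np.array([[-1.0, 4.0], [4.0, -1.0]])  # strong coupling: unstable
```

Here the self-regulation $d=1$ on the diagonal stabilizes the weakly coupled system, while the strongly coupled one has an eigenvalue with positive real part.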
We study here the stability of large systems
coupled through oriented networks defined by the random matrices of the type (\ref{eq:model})
using the analytical expressions for $\lambda_1$ and $\langle R_1 \rangle$, given by Eqs.~(\ref{eq:lambda1}) and (\ref{eq:R1}) in Sec.~\ref{eq:theory}, respectively.
First of all, note that the leading eigenvalue of a networked system converges to a finite value when $n$ diverges, in contrast to the leading eigenvalue (\ref{eq:lambda1IID}) of the mean-field model studied by May~\cite{may1972will}, which diverges for increasing $n$. As a consequence, networked models are stable in the limit of large $n$, which seems to resolve the diversity-stability debate \cite{mccann2000diversity}. However, it remains of interest to study how network architecture affects the stability of large dynamical systems, since the leading eigenvalue $\lambda_1$ will depend on the network structure.
Interestingly, for the interaction networks defined in Sec.~\ref{eq:theory}, the stability of the stationary state is solely governed by three parameters that characterize the network architecture: the {\it effective mean degree} $c(1+\rho)$ that characterizes the effective number of degrees of freedom each node in the network interacts with; the {\it coefficient of variation} $v_J := \sqrt{\langle J^2 \rangle - \langle J \rangle^2}/\langle J \rangle$ that characterizes the fluctuations in the coupling strengths between the constituents of the system; and the {\it effective interaction strength} $\alpha := \langle J \rangle/d$ that quantifies the relative strength of the interactions with regard to the rate~$d$ of self-regulation. Remarkably, the system stability characterized by the leading eigenvalue $\lambda_1$ only depends on these three parameters, and thus enjoys a high degree of universality.
In order to better understand how the three parameters $c(1+\rho)$, $v_J$ and $\alpha$ govern the stability of dynamical systems on oriented graphs, we present in Fig.~\ref{fig5} the phase diagram of the system in the $(v_J,c(1+\rho))$ plane for fixed values of $\alpha\in[0,1]$ and for $c(1+\rho)>1$. The reason we choose these parameter regimes is that for $\alpha >1$ there exists no stable phase, and for $c(1+\rho) <1$ the graph does not have a giant strongly connected component; in the latter regime the system falls apart in the sense that it is a union of a large number of small isolated subsystems, and thus we are no longer considering the linear stability of a large system of interacting degrees of freedom.
The phase diagram denotes the critical connectivity $c^\ast$ (black lines) that separates the stable phase (${\rm Re}[\lambda_1]<0$), for systems at low connectivity $c(1+\rho)$, from the unstable phase (${\rm Re}[\lambda_1]>0$), for systems at high connectivity $c(1+\rho)$. The critical line is determined by the function
\begin{eqnarray}
c_\ast = \left\{ \begin{array}{ll}1/\alpha, & v^2_J<1/\alpha-1, \\
v^2_\ast/v^2_J, & v^2_J\in [1/\alpha-1, v^2_\ast], \\
\mbox{(no stable phase)}, & v^2_J > v^2_\ast, \end{array} \right. \label{eq:critConn}
\end{eqnarray}
that provides us with the effective connectivity $c(\rho+1)$ at ${\rm Re}[\lambda_1]=0$ as a function of $\alpha$ and $v_J$; in formula (\ref{eq:critConn}) we have used the symbol $v^2_\ast = \frac{1-\alpha^2}{\alpha^2}$. Since the critical connectivity is finite for all values of $\alpha$ and $v_J$, it follows that for large enough $c(1+\rho)$ any dynamical system is unstable, which is consistent with the results of May~\cite{may1972will} stating that any large enough fully connected system is unstable.
However, as we see from Eq.~(\ref{eq:critConn}) and Fig.~\ref{fig5}, the phase transition to the stable phase at low connectivities has three qualitatively different regimes, which we discuss in the following paragraphs.
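The piecewise form (\ref{eq:critConn}) of the critical connectivity is straightforward to implement; the sketch below encodes the regime with no stable phase as `None`, and the parameter values in the usage comments are illustrative.

```python
# Critical connectivity c_* as a function of the effective interaction
# strength alpha and the squared coefficient of variation vJ2 = v_J^2,
# following the piecewise expression of the text. Returns None when no
# stable phase exists (v_J^2 > v_*^2).
def critical_connectivity(alpha, vJ2):
    vstar2 = (1.0 - alpha**2) / alpha**2
    if vJ2 < 1.0 / alpha - 1.0:
        return 1.0 / alpha          # gapped regime: outlier-controlled
    if vJ2 <= vstar2:
        return vstar2 / vJ2         # gapless regime: boundary-controlled
    return None                     # no stable phase
```

For instance, at $\alpha=0.5$ one has $1/\alpha-1=1$ and $v^2_\ast=3$, so $c_\ast=2$ for $v^2_J=0.5$, $c_\ast=3/2$ for $v^2_J=2$, and no stable phase for $v^2_J=4$.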
The critical value $v_\ast$ separates a regime at $v_J>v_\ast$, which does not have a stable phase, from a regime at $v_J<v_\ast$, which has a stable phase at low enough connectivity $c(\rho+1)>1$. Hence, for small enough fluctuations in the interaction strengths ($v_J<v_\ast$) it is possible to stabilize the system by rewiring edges in the graph until the negative correlations between indegrees and outdegrees are large enough. Stabilizing the system by rewiring edges is however not possible when $v_J>v_\ast$.
Moreover, the regime $v_J<v_\ast$ consists of two distinct regimes: a {\it gapped} regime, which appears when the fluctuations in the interaction strengths are small ($v^2_J<1/\alpha-1$), and a {\it gapless} regime, which appears when the fluctuations in the interaction strengths are large ($v^2_J>1/\alpha-1$). In Fig.~\ref{fig5}, these two regimes are separated by the red dotted line. In the gapped regime the leading eigenvalue is an outlier and the critical connectivity $c^\ast$ is independent of $v^2_J$. This implies that fluctuations do not affect the system stability when the leading eigenvalue is an outlier. On the other hand, in the gapless regime the leading eigenvalue is part of the boundary of the continuous spectrum and the critical connectivity $c^\ast$ decreases as $1/v^2_J$. In this regime, fluctuations in the interaction strengths render the system less stable. The differences between the gapped and gapless regimes can be understood in terms of the nature of the destabilizing mode. In the gapped regime, the mode that destabilizes the system is ferromagnetic, i.e., $\langle L_1\rangle>0$, whereas in the gapless regime, the mode that destabilizes the system is spin-glass-like, i.e., $\langle L_1\rangle=0$. Hence, increasing the fluctuations $v_J$ for fixed values of the mean strength $\alpha$ does not affect the ferromagnetic mode, which gives an intuitive understanding of why the location of the outlier is independent of $v_J$.
Finally, we can quantify the overall stability of systems coupled through random matrices (\ref{eq:model}) in terms
of a single parameter $a_{\rm stab}$, defined as the area in Fig.~\ref{fig5} where the system is stable and $c(1+\rho) > 1$.
The quantity $a_{\rm stab}$ is given by
\begin{eqnarray}
a_{\rm stab} &=& \frac{1}{\alpha} \sqrt{\frac{ 1- \alpha}{\alpha} } \left( 1 - \sqrt{\alpha (1 + \alpha) } \right) \nonumber \\
&+& \frac{1}{\alpha^2} \left[ \tanh^{-1}{\left( \sqrt{\frac{1 - \alpha^2 }{\alpha^2} } \right) } - \tanh^{-1}{\left( \frac{1 - \alpha}{ \alpha} \right) } \right]. \nonumber
\end{eqnarray}
The area $a_{\rm stab}$ is a monotonically decreasing function of $\alpha$, which approaches $a_{\rm stab} \rightarrow 0$ as $\alpha \rightarrow 1$ and
$a_{\rm stab} \rightarrow \infty$ as $\alpha \rightarrow 0$. Thus, increasing the average interaction strength between the elements
of a networked system, in the sense of letting $\langle J \rangle$ approach $d$, makes the system less stable.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figure4.pdf}\label{fig:rho0}
\caption{{\it Phase diagram for the stability of dynamical systems (\ref{eq:linT}) on oriented networks (\ref{eq:model})}. The stability diagram is universal and only depends on three parameters: an effective connectivity $c(\rho+1)$, the coefficient of variation $v_J = \sqrt{\langle J^2\rangle - \langle J \rangle^2}/\langle J\rangle$ and the mean interaction strength $\alpha = \langle J\rangle/d$. Black lines
separate the unstable phase at large effective
connectivity $c(\rho+1)$ from the stable phase at small connectivity $c(\rho+1)$ for a given value of $\alpha$. The red line separates the gapped phase at small $v_J$ from a gapless phase at high~$v_J$. } \label{fig5}
\end{figure}
\section{Extensions}\label{sec:ext}
Here we extend the theory in Sec.~\ref{sec:der} to random matrices with diagonal disorder and non-oriented random matrices. We present relations for their outliers, for the boundary of the continuous part of the spectrum, and for the associated right and left eigenvectors.
\subsection{Random matrices with diagonal disorder}
We consider random matrices of the form
\begin{eqnarray}
\bA_n = - \mathbf{D}_n + \bJ_n\circ\bC_n, \label{eq:modelxx}
\end{eqnarray}
where $\bJ_n$ and $\bC_n$ are defined in exactly the same way as in (\ref{eq:model}), but where $\mathbf{D}_n$ is now a diagonal matrix with entries $[\mathbf{D}_n]_{jj} = D_j$ that are i.i.d.~random variables with a probability distribution $p_D(x)$. We assume that the support of $p_D$ is a compact subset of the real line.
In this case, the theory of Sec.~\ref{sec:der} applies with some slight modifications. We first derive a set of relations that are equivalent to (\ref{eq:relxxx}) and (\ref{eq:relxx}), but that take into consideration the fact that the diagonal elements in (\ref{eq:modelxx}) are not constant. The eigenvector elements $R^{(j)}_k$ satisfy now the relations
\begin{eqnarray}
R^{(j)}_k &=&\frac{1}{\lambda+D_k} \sum_{\ell\in \partial^{\rm out}_k}J_{k\ell} R^{(k)}_\ell \label{eq:relDx}
\end{eqnarray}
for all $j\in \partial^{\rm in}_k$ and $k\in [n]$, and
\begin{eqnarray}
R_j &=&\frac{1}{\lambda+D_j} \sum_{k \in \partial^{\rm out}_j}J_{jk} R^{(j)}_k \label{eq:relD}
\end{eqnarray}
for all $j\in [n]$. Note that if $D_{k} = d$, then (\ref{eq:relDx}) and (\ref{eq:relD}) are identical to (\ref{eq:relxxx}) and (\ref{eq:relxx}).
Following the same ensemble averaging procedure as laid out in Sec.~\ref{sec:solr}, we obtain
\begin{eqnarray}
\langle R \rangle_q &=& \frac{\langle K_{\rm in}K_{\rm out}\rangle}{c} \Big\langle \frac{1}{\lambda+D}\Big\rangle \langle J \rangle \langle R\rangle_q ,\label{eq:oD}\\
\langle R^2 \rangle_q &=& \frac{\langle K_{\rm in}K_{\rm out}\rangle}{c} \Big\langle \frac{1}{(\lambda+D)^2}\Big\rangle \langle J^2 \rangle \langle R^2\rangle_q \nonumber\\
&& + \Big\langle\frac{\langle K_{\rm in}K_{\rm out}(K_{\rm out}-1)\rangle}{c(\lambda+D)^2} \Big\rangle \langle J \rangle^2 \langle R\rangle^2_q,\nonumber \\
\langle |R|^2 \rangle_q &=& \frac{\langle K_{\rm in}K_{\rm out}\rangle}{c}\Big\langle \frac{1}{|\lambda+D|^2} \Big\rangle \langle |J|^2 \rangle \langle |R|^2\rangle_q \nonumber\\ && + \Big\langle\frac{ \langle K_{\rm in}K_{\rm out}(K_{\rm out}-1)\rangle}{c |\lambda+D|^2}\Big\rangle |\langle J \rangle|^2 |\langle R\rangle|^2_q , \label{eq:bD}
\end{eqnarray}
which extend the relations (\ref{eq:o}-\ref{eq:b}) to the case with diagonal disorder, and in the specific case of $p_D(x) = \delta(x-d)$ we recover (\ref{eq:o}-\ref{eq:b}).
The outliers of (\ref{eq:modelxx}) solve
\begin{eqnarray}
c(\rho+1)\langle J \rangle\Big \langle \frac{1}{\lambda_{\rm isol}+D}\Big\rangle = 1,
\end{eqnarray}
the boundary of the continuous part of the spectrum consists of $\lambda_{\rm b}\in\mathbb{C}$ for which \begin{eqnarray}
c(\rho+1)\langle |J|^2 \rangle\Big \langle \frac{1}{|\lambda_{\rm b}+D|^2}\Big\rangle = 1,
\end{eqnarray}
and the moments of the right eigenvectors associated with either $\lambda = \lambda_{\rm isol}$ or $\lambda = \lambda_{\rm b}$ are given by
\begin{eqnarray}
\langle R \rangle &=& \Big\langle \frac{c}{\lambda+D} \Big\rangle \langle J \rangle \langle R\rangle_q, \label{eq:oxD}\\
\langle R^2 \rangle &=& \Big\langle \frac{c}{(\lambda+D)^2}\Big\rangle \langle J^2 \rangle \langle R^2\rangle_q \nonumber\\
&& + (\langle K^2_{\rm out}\rangle -c)\Big\langle \frac{1}{ (\lambda+D)^2} \Big\rangle \langle J \rangle^2 \langle R\rangle^2_q, \\
\langle |R|^2 \rangle &=& \Big\langle \frac{c}{|\lambda+D|^2} \Big\rangle \langle |J|^2 \rangle \langle |R|^2\rangle_q \nonumber\\ && + (\langle K^2_{\rm out}\rangle -c) \Big\langle \frac{1}{ |\lambda+D|^2} \Big\rangle |\langle J \rangle|^2 |\langle R\rangle_q |^2.\label{eq:bxD}
\end{eqnarray}
The relations (\ref{eq:oxD}-\ref{eq:bxD}) generalize the relations (\ref{eq:ox}-\ref{eq:bx}) for the case of constant $D = d$, and the relations derived in \cite{neri2016eigenvalue} for graphs without degree correlations, i.e., $p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out}) = p_{K^{\rm in}}(k^{\rm in}) p_{K^{\rm out}}(k^{\rm out}) $.
\subsection{Non-oriented random matrices}
We consider random matrices of the form
\begin{eqnarray}
\bA_n = -d\:\mathbf{1}_n + \tilde{\bJ}_n\circ\tilde{\bC}_n , \label{eq:modelxxxxx}
\end{eqnarray}
where $\tilde{\bC}_n$ is the adjacency matrix of a symmetric random graph with a prescribed degree distribution $p_{\rm deg}(k)$, and where $\tilde{\mathbf{J}}_n$ is a random matrix with zero entries on the diagonal and with off-diagonal pairs $(\tilde{J}_{jk},\tilde{J}_{kj})$ that are i.i.d.~random variables with distribution $p_{\tilde{J}_1, \tilde{J}_2}(x,y)$, which has the symmetry property $p_{\tilde{J}_1, \tilde{J}_2}(x,y) = p_{\tilde{J}_1, \tilde{J}_2}(y,x)$. Note that if $p_{\tilde{J}_1, \tilde{J}_2}(x,y) = \frac{1}{2}p_{J}(x)\delta(y) + \frac{1}{2}p_{J}(y)\delta(x)$, then (\ref{eq:modelxxxxx}) is a specific case of the oriented model (\ref{eq:model}) with degree distribution
\begin{eqnarray}
\lefteqn{ p_{K^{\rm in}, K^{\rm out}}(k^{\rm in}, k^{\rm out}) }&& \nonumber \\
&& = \sum^{\infty}_{k=0} \frac{p_{\rm deg}(k)}{2^k} \sum^{k}_{m=0}\frac{k!}{m!(k-m)!} \delta_{k^{\rm in}, m}\delta_{k^{\rm out}, k-m},
\end{eqnarray}
whereas if $p_{\tilde{J}_1, \tilde{J}_2}(x,y) = \delta(x-y)p_J(x)$ then $\mathbf{A}_n$ is a symmetric matrix.
As derived in Appendix~\ref{sec:rec}, the entries $R^{(j)}_k$ of a right eigenvector of the matrix $\mathbf{A}^{(j)}_{n-1}$ with eigenvalue $\lambda$ satisfy
\begin{eqnarray}
R^{(j)}_k &=& -G^{(j)}_k \sum_{\ell\in \partial_k\setminus j}J_{k\ell} R^{(k)}_\ell \label{eq:relOx},
\end{eqnarray}
for all $k\in [n]$ and $j\in \partial_{k}$,
where
\begin{eqnarray}
G^{(j)}_k &=& \frac{1}{-\lambda - d+ \sum_{\ell\in \partial_k\setminus j}J_{k\ell} G^{(k)}_\ell J_{\ell k} }, \label{eq:Oxx}
\end{eqnarray}
is the $k$-th diagonal element of the resolvent matrix $(\mathbf{A}^{(j)}_{n-1}-\lambda \mathbf{1}_{n-1})^{-1}$ of $\mathbf{A}^{(j)}_{n-1}$. Analogously, the entries $R_j$ of a right eigenvector of the matrix $\mathbf{A}_{n}$ satisfy the relations
\begin{eqnarray}
R_j &=& -G_j \sum_{k \in \partial_j}J_{jk} R^{(j)}_k \label{eq:relO},
\end{eqnarray}
where
\begin{eqnarray}
G_j &=& \frac{1}{-\lambda - d+ \sum_{k\in \partial_j}J_{jk} G^{(j)}_k J_{kj} }. \label{eq:O} \end{eqnarray}
Note that in the special case of oriented random matrices, $J_{jk}J_{kj}=0$ for all pairs $(j,k)$, so that $G^{(j)}_k = G_k = \frac{1}{-\lambda-d}$,
and therefore for oriented matrices the relations (\ref{eq:relOx}) and (\ref{eq:relO}) are equivalent to the relations (\ref{eq:relxxx}) and (\ref{eq:relxx}).
In order to perform the limit $n\rightarrow \infty$, we define the joint distributions
\begin{eqnarray}
p_{R,G}(r, g|\bA) = \frac{1}{n}\sum^n_{j=1} \delta(r-R_j) \delta(g-G_j)\label{eq:uniformO}
\end{eqnarray}
and
\begin{eqnarray}
q_{R,G}(r,g|\bA) = \frac{1}{c\:n}\sum_{k=1}^n \sum_{j \in \partial_{k}} \delta(r-R_k^{(j)} )\delta(g-G^{(j)}_k). \nonumber\\ \label{eq:outP}
\end{eqnarray}
Following an analogous approach as in Sec.~\ref{subsec:c}, we use
the recursion relations (\ref{eq:relOx}) and (\ref{eq:Oxx}) to derive the recursive distributional equation
\begin{eqnarray}
\lefteqn{q_{R,G}(r,g) = \sum^{\infty}_{k=0} \frac{k\: p_{{\rm deg}}(k)}{c}} &&
\nonumber \\
&& \times \int \prod^{k-1}_{\ell=1} {\rm d}^2g_{\ell}{\rm d}^2r_{\ell} \: q_{R, G}(r_{\ell},g_{\ell}) \int \prod^{k-1}_{\ell=1} {\rm d}x_{\ell} \: {\rm d}y_{\ell} \:p_{\tilde{J}_1, \tilde{J}_2}(x_{\ell},y_{\ell})
\nonumber\\
&& \times \delta\left(r + \frac{\sum^{k-1}_{\ell=1}x_\ell r_{\ell}}{-\lambda - d + \sum^{k-1}_{\ell=1}x_\ell g_{\ell} y_\ell }\right) \nonumber\\
&& \times \delta\left(g - \frac{1}{-\lambda - d + \sum^{k-1}_{\ell=1}x_\ell g_{\ell} y_\ell }\right).
\nonumber\\ \label{eq:distriO1}
\end{eqnarray}
Analogously, we use the relations (\ref{eq:relO}) and (\ref{eq:O}) to obtain the distributional equation
\begin{eqnarray}
\lefteqn{p_{R,G}(r,g) = \sum^{\infty}_{k=0}p_{{\rm deg}}(k) } &&
\nonumber \\
&& \times \int \prod^{k}_{\ell=1} {\rm d}^2g_{\ell}{\rm d}^2r_{\ell} \: q_{R, G}(r_{\ell},g_{\ell}) \int \prod^{k}_{\ell=1} {\rm d}x_{\ell} \: {\rm d}y_{\ell} \:p_{\tilde{J}_1, \tilde{J}_2}(x_{\ell},y_{\ell}) \nonumber \\
&& \times \delta\left(r + \frac{\sum^{k}_{\ell=1}x_\ell r_{\ell}}{-\lambda - d + \sum^{k}_{\ell=1}x_\ell g_{\ell} y_\ell}\right) \nonumber\\
&& \times \delta\left(g - \frac{1}{-\lambda - d + \sum^{k}_{\ell=1}x_\ell g_{\ell} y_\ell }\right).\label{eq:distriO2}
\end{eqnarray}
The Eqs.~(\ref{eq:distriO1}-\ref{eq:distriO2}) can be solved with a population dynamics algorithm, as described in Refs.~\cite{abou1973selfconsistent, mezard2001bethe, metz2010localization}. As before, the outliers $\lambda_{\rm isol}$ and the boundary $\lambda_{\rm b}$ of the continuous part of the spectrum are found as values of $\lambda$ for which the relations (\ref{eq:distriO1}-\ref{eq:distriO2}) admit a normalizable solution. Moreover, for a given value of $\lambda\in\partial \sigma_{\rm ac}$, the relations (\ref{eq:distriO1}-\ref{eq:distriO2}) provide us with the distribution $p_R(r) = \int {\rm d}^2 g \:p_{R,G}(r,g)$ of the entries of the right eigenvector associated with $\lambda$. In the special case of symmetric matrices, the relations (\ref{eq:distriO1}-\ref{eq:distriO2}) are equivalent to those derived in Refs.~\cite{kabashima2010cavity, kabashima2012first, takahashi2014fat, susca2019top}.
\section{Discussion}\label{sec:discu}
Random matrices have been used to study the linear stability of large dynamical systems of interacting degrees of freedom~\cite{may1972will,sear2003instabilities, rajan2006eigenvalue, allesina2012stability, allesina2015predicting, grilli2016modularity, ahmadian2015properties, PhysRevLett.114.088101, PhysRevE.93.022302, kuczala2016eigenvalue, gibbs2018effect, gudowska2018synaptic, moran2019will}. A common feature of these models is that each constituent interacts with a number of degrees of freedom that increases with system size, and therefore the system is unstable when it is large enough. It is however more realistic to consider systems defined on sparse graphs for which each constituent interacts with a finite number of other constituents, independent of the system size. For models on sparse graphs, system stability is independent of system size and the question that arises is how network architecture affects system stability. In this paper we have developed a mathematical method to address this problem.
For dynamical systems defined on oriented random graphs with a prescribed degree distribution, as defined in Sec.~\ref{eq:modelDef}, we have shown that system stability is governed by only three network parameters: the effective mean degree $c(\rho+1)$, the coefficient of variation $v_J = \sqrt{\langle J^2\rangle -\langle J\rangle^2}/\langle J\rangle$ and the relative interaction strength $\alpha = \langle J \rangle/d$. This result follows from the analytical expression (\ref{eq:lambda1}) for the leading eigenvalue of the adjacency matrix of the graph of interactions between the constituents of the system.
From the phase diagram we obtain the following interesting conclusions. First, negative correlations between indegrees and outdegrees stabilise large dynamical systems, whereas the mean coupling strength $\alpha$ and fluctuations $v_J$ in the couplings render systems less stable. Second, when the fluctuations $v_J$ of the coupling strengths are small enough, the stability is controlled by an outlier and is independent of $v_J$. On the other hand, when $v_J$ is large enough, then the leading eigenvalue is determined by the boundary of the continuous part of the spectrum and the system stability decreases as a function of $v_J$. Moreover, in the first scenario the unstable mode is ferromagnetic ($\langle L_1 \rangle>0$) whereas in the second scenario it is spin-glass-like ($\langle L_1\rangle=0$). Finally, systems with fluctuations $v_J$ larger than the critical value $v_\ast = \sqrt{\frac{1-\alpha^2}{\alpha^2}}$ do not contain a stable phase, no matter how strong the negative correlations between indegrees and outdegrees are.
Our results rely on a spectral theory for the eigenvalue outliers and the boundary of the continuous part of the spectrum of large sparse non-Hermitian random matrices. This theory also provides us with the statistics of the entries of the right and left eigenvectors associated with outlier eigenvalues or eigenvalues located at the boundary of the continuous spectrum. Because spectra of directed graphs appear in various research areas, the spectral theory presented in this paper can also be applied to problems other than the linear stability analysis of randomly coupled differential equations. A first example of an application is the stability of dynamical systems in discrete time \cite{hastings1982may}, which are relevant for the systemic risk of networks of banks connected through financial contracts~\cite{bardoscia2017pathways}. For discrete-time systems the stability is controlled by the spectral radius $r(\mathbf{A}) = {\rm max}\left\{|\lambda_1|, |\lambda_2|,\ldots, |\lambda_n| \right\}$: when $r(\mathbf{A})>1$ the system is unstable and when
$r(\mathbf{A})<1$ it is stable. A second application is the analysis of spectral algorithms that use the right or left eigenvector associated with the (sub)leading eigenvalue, e.g., spectral clustering algorithms \cite{pentney2005spectral, krzakala2013spectral}, centrality measures based on eigenvectors \cite{bonacich2001eigenvector, langville2011google, ermann2015google}, or the low-rank matrix estimation problem~\cite{lesieur2017constrained}. Moreover, detectability thresholds of spectral algorithms often depend on the location of the leading and subleading eigenvalue \cite{krzakala2013spectral, bordenave2015non,zdeborova2016statistical, kawamoto2018algorithmic}. A third application is the analysis of continuous phase transitions on networks for which the leading eigenvalue or the spectral radius of a nonsymmetric matrix determines the phase transition threshold; examples are the threshold for the onset of a susceptible-infected-susceptible epidemic \cite{li2013epidemic} or the percolation transition \cite{karrer2014percolation, hamilton2014tight}. A fourth application is the analysis of stochastic processes: the stationary state of a Markov process is the right (or left) eigenvector of the leading eigenvalue of a Markov matrix \cite{margiotta2019glassy} and the values of the cumulant generating function of a time additive observable can be expressed as the leading eigenvalue of a Markov matrix \cite{donsker1975asymptotic, donsker1975asymptotic2, donsker1976asymptotic3, donsker1983asymptotic4, de2016rare}. Finally, we remark that the subleading eigenvalue provides information about the finite-time dynamics of a set of randomly coupled differential equations \cite{tarnowski2019universal}, and not only about their asymptotic stability. Taken together, we conclude that the spectral theory presented in this paper can be used in various contexts.
% --- arXiv metadata ---
% arXiv:1908.07092, https://arxiv.org/abs/1908.07092 (2019-08-21)
% Title: Linear stability analysis for large dynamical systems on directed random graphs
% Subjects: Statistical Mechanics (cond-mat.stat-mech); Disordered Systems and Neural Networks (cond-mat.dis-nn); Social and Information Networks (cs.SI); Physics and Society (physics.soc-ph); Populations and Evolution (q-bio.PE)
% Abstract: We present a linear stability analysis of stationary states (or fixed points) in large dynamical systems defined on random directed graphs with a prescribed distribution of indegrees and outdegrees. We obtain two remarkable results for such dynamical systems: First, infinitely large systems on directed graphs can be stable even when the degree distribution has unbounded support; this result is surprising since their counterparts on nondirected graphs are unstable when system size is large enough. Second, we show that the phase transition between the stable and unstable phase is universal in the sense that it depends only on a few parameters, such as, the mean degree and a degree correlation coefficient. In addition, in the unstable regime we characterize the nature of the destabilizing mode, which also exhibits universal features. These results follow from an exact theory for the leading eigenvalue of infinitely large graphs that are locally tree-like and oriented, as well as, for the right and left eigenvectors associated with the leading eigenvalue. We corroborate analytical results for infinitely large graphs with numerical experiments on random graphs of finite size. We discuss how the presented theory can be extended to graphs with diagonal disorder and to graphs that contain nondirected links. Finally, we discuss the influence of small cycles and how they can destabilize large dynamical systems when they induce strong enough feedback loops.
% --- arXiv metadata ---
% arXiv:2007.08956, https://arxiv.org/abs/2007.08956
% Title: Vertex distinction with subgraph centrality: a proof of Estrada's conjecture and some generalizations
% Abstract: Centrality measures are used in network science to identify the most important vertices for transmission of information and dynamics on a graph. One of these measures, introduced by Estrada and collaborators, is the beta-subgraph centrality, which is based on the exponential of the matrix beta*A, where A is the adjacency matrix of the graph and beta is a real parameter ("inverse temperature"). We prove that for algebraic beta, two vertices with equal beta-subgraph centrality are necessarily cospectral. We further show that two such vertices must have the same degree and eigenvector centralities. Our results settle a conjecture of Estrada and a generalization of it due to Kloster, Kral and Sullivan. We also discuss possible extensions of our results.
\section{Introduction}
Centrality measures have been used to determine the importance of a vertex in a graph, with many applications in biology, finance, sociology, epidemiology, and more generally in network science. Among many such measures, we focus here on subgraph centrality, which is based on counting the number of closed walks of different lengths passing through each node. This measure has been
successfully used in the study of protein-protein interaction networks, in the analysis of traffic and other transportation networks, and in several studies of brain networks, to name just a few applications; see, for instance, \cite{BKcentralitymeasures, Estrada, EPHwalkentropies, EHstatisticalmechanics,EHcommunicability, EHB12, ERVsubgraphdefinition}.
Let $\G=(V,E)$ be a simple undirected graph with $|V|=n$ vertices and adjacency matrix $A$. Later we will also consider the case where $\G$ is a directed and weighted graph.
The $\beta$-subgraph centrality with $\beta \geq 0$ is defined as $[e^{\beta A}]_{ii}$ for each vertex $i$ of $\G$.
It was introduced by Estrada and Rodr\'iguez-Vel\'azquez in \cite{ERVsubgraphdefinition} for $\beta=1$,
as a node centrality measure. Two years later, Estrada and Hatano \cite{EHstatisticalmechanics} introduced a generalization of it involving the tuneable parameter $\beta$.
The idea is to write $e^{\beta A}$ as a power series expansion:
\begin{equation}
\begin{aligned}
e^{\beta A}\, =\;& I+ \beta A+ \frac{\beta^2}{2}A^2 + \frac{\beta^3}{3!} A^3+\cdots,\\
[e^{\beta A}]_{ii}\, = \;& 1 + \beta[A]_{ii} + \frac{\beta^2}{2} [A^2]_{ii} + \frac{\beta^3}{3!} [A^3]_{ii}+\cdots.
\end{aligned}
\label{espansione_expA}
\end{equation}
As we have anticipated, the $\beta$-subgraph centrality of node $i$ is then given by $[e^{\beta A}]_{ii}$. Hence, the $\beta$-subgraph centrality is closely related to the number of closed walks starting from $i$, since the number of such walks of length $r$ is $[A^r]_{ii}$. By weighting walks of length $r$ by $\beta^r/r!$, longer closed walks are penalized: nodes that are visited by many short closed walks are considered important. The role of the parameter $\beta$, known as the
``inverse temperature,'' is to give more or less weight to walks
of a given length, and also to model situations where the network is subject to some external ``stress''.
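As a concrete illustration (a numerical sketch, not part of the original text; it assumes Python with NumPy, and the graph, the value of $\beta$, and the series truncation order are arbitrary choices), one can compute $[e^{\beta A}]_{ii}$ both from the spectral decomposition of $A$ and from the truncated power series above, and observe that a vertex-transitive graph such as the 4-cycle gives the same value at every vertex:

```python
import numpy as np
from math import factorial

# 4-cycle C4: vertex-transitive, hence walk-regular
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
beta = 0.5

# [e^{beta A}]_ii via the spectral decomposition (A is real symmetric)
w, Q = np.linalg.eigh(A)
sc = np.diag(Q @ np.diag(np.exp(beta * w)) @ Q.T)

# the same quantity from the truncated power series expansion
series = sum(beta**r / factorial(r) * np.linalg.matrix_power(A, r)
             for r in range(40)).diagonal()

assert np.allclose(sc, series)
assert np.allclose(sc, sc[0])   # all vertices have equal centrality
```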
In the above-mentioned applications in network science, it is often useful to determine when two ``different'' vertices have the same centrality measure. As a first step, one can ask which graphs have all vertices with equal subgraph centrality. Highly symmetric graphs, such as vertex-transitive graphs, satisfy this condition; a wider class of graphs satisfying it is that of walk-regular graphs. It was conjectured that there are no others:
\begin{conj*}Given $\beta>0$, a graph $\G$ has all vertices with the same $\beta$-subgraph centrality if and only if $\G$ is walk-regular.
\end{conj*}
In this paper we will show that this conjecture is true for all algebraic $\beta$, by proving a stronger result: we will show that if $\beta$ is algebraic, two vertices $i,j$ have the same $\beta$-subgraph centrality if and only if they are cospectral. This implies the conjecture because a graph is walk-regular if and only if all its vertices are cospectral.
\vspace{0.2cm}
In Section 2, we give all the necessary definitions and show various formulations of the conjecture. In Section 3 we introduce the Lindemann-Weierstrass Theorem. In Section 4 we prove the main result (Theorem \ref{main_thm}) and Theorem \ref{prop_main}, which is the key element for its proof. In Section 5 we discuss some generalizations of our results, and possible further developments.
\section{Preliminaries}
It is convenient to introduce the following terminology.
\begin{defin}$\G$ is \textit{$\beta$-subgraph regular} if $\forall \; i,j\in V$, $[e^{\beta A}]_{ii} = [e^{\beta A}]_{jj}$.
\end{defin}
In \cite{ERVsubgraphdefinition} examples were given of graphs with vertices with equal degree, eigenvector, closeness and betweenness centralities, but different 1-subgraph centralities. This led to the following conjecture:
\begin{conj}[Estrada, Rodr\'iguez-Vel\'azquez \cite{ERVsubgraphdefinition}]
Let $\G$ be a $1$-subgraph regular graph. Then the degree, closeness, eigenvector, and betweenness centralities are also identical for all nodes.
\label{conjec1}
\end{conj}
Some counterexamples for the closeness and betweenness centralities were found independently by Rombach and Porter \cite{RPdiscrimination} and by Stevanovi\'c \cite{Stevanovic}, but the conjecture remained open for degree and eigenvector centralities.
\vspace{0.2cm}
We recall the following definition:
\begin{defin} $\G$ is \textit{walk-regular} if $\forall \; i,j\in V$ and for every positive integer $r$, $[A^r]_{ii} = [A^r]_{jj}$.
\end{defin}
From the power series expansion of eq.~(\ref{espansione_expA}), it follows immediately that a walk-regular graph is also $\beta$-subgraph regular for all $\beta$. From here on, we assume that $\beta \ne 0$ to avoid trivialities.
A related quantity is the \textit{walk entropy} of a graph \cite{EPHwalkentropies,EHstatisticalmechanics}, defined as
$$S(\G,\beta) = - \sum_{i=1}^n p_i \ln p_i, \quad p_i=
\frac{[e^{\beta A}]_{ii}}{\text{Tr} [e^{\beta A}]}\,.$$
It is easy to see that the walk entropy is maximized (and equal to $\ln n$) if and only if the graph $\G$ is $\beta$-subgraph regular.
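The walk entropy itself is easy to compute numerically. The following sketch (our own helper, assuming NumPy; the name `walk_entropy` and the test graphs are illustrative) checks that a walk-regular graph attains the maximal value $\ln n$ while a non walk-regular one stays strictly below it:

```python
import numpy as np

def walk_entropy(A, beta):
    w, Q = np.linalg.eigh(A)                             # A real symmetric
    diag = np.diag(Q @ np.diag(np.exp(beta * w)) @ Q.T)  # [e^{beta A}]_ii
    p = diag / diag.sum()
    return -(p * np.log(p)).sum()

# path P3 is not walk-regular; triangle K3 is vertex-transitive, hence walk-regular
P3 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
K3 = np.ones((3, 3)) - np.eye(3)

assert walk_entropy(P3, 1.0) < np.log(3)              # strictly below ln n
assert np.isclose(walk_entropy(K3, 1.0), np.log(3))   # maximal value ln n
```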
In \cite{EPHwalkentropies} it was conjectured that $\G$ is walk-regular if and only if $\G$ is $\beta$-subgraph regular for all $\beta > 0$. This was proved true by Benzi in the following stronger form:
\begin{teo}[Benzi \cite{Benzi}, Theorem 2.2]
A graph $\G$ is walk-regular if and only if $\G$ is $\beta$-subgraph regular for all $\beta \in I \subseteq \R$, where $I$ is any set of real numbers containing an accumulation point.
\end{teo}
In the same paper, it was conjectured that if a graph $\G$ is $\beta$-subgraph regular for only one value of $\beta$, then it is necessarily walk-regular (in \cite{Estrada} this was also conjectured for the special case $\beta=1$). The general case was shown to be false by Sullivan et al. in \cite{HKSwalkregularity,KKScounterexample},
who exhibited an (infinite) family of non walk-regular (indeed, non degree-regular) graphs, for each of which there exists a value of $\beta$ such that the graph is $\beta$-subgraph regular; incidentally, this counterexample also falsified an incorrect proof of the above-mentioned
conjecture given in \cite{EdP}.
Nevertheless, for any non walk-regular graph $\G$ there can be only finitely many values of $\beta$ that make $\G$ $\beta$-subgraph regular.
In \cite{KKScounterexample}, the following conjecture was put forth:
\begin{conj}[Kloster, Kr\'al, Sullivan \cite{KKScounterexample}, Conjecture 5]
A graph $\G$ is walk-regular if and only if there exists a rational $\beta > 0$ such that $\G$ is $\beta$-subgraph regular.
\label{conjec5}
\end{conj}
We will show that this conjecture is true in a stronger form, requiring only that $\beta$ be an algebraic number. Moreover, our result applies not just to undirected graphs, but also to directed graphs with diagonalizable adjacency matrices.
We recall that in the case of a directed graph the interpretation of $[A^r]_{ii}$ in terms of closed walks remains valid, provided
that a ``closed walk" is understood as a directed walk starting and ending at the same vertex.
\vspace{0.3cm}
For $\G$ either a directed or an undirected graph, we introduce the following terminology.
\begin{defin}Two vertices $i,j$ of $\G$ are \textit{cospectral} if for every integer $r >0$, $[A^r]_{ii} = [A^r]_{jj}$.
\end{defin}
Observe that by the Cayley-Hamilton Theorem, it suffices to check $n-1$ values of $r$ to determine cospectrality. If there exists an automorphism $\varphi$ of $\G$ such that $\varphi(i)=j$, then $i,j$ are cospectral; however, there are examples of cospectral vertices that are not related by any automorphism. One such example can be found in \cite{Schwenk}, which was the first paper to make use of cospectral vertices, although without defining them explicitly. See \cite{GSstronglycospectral} for many other equivalent conditions for two vertices to be cospectral.
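In computations, cospectrality of a pair of vertices can therefore be decided by comparing finitely many diagonal entries of powers of $A$. A small sketch (assuming NumPy; the helper name `cospectral` and the choice of the path graph $P_4$ are our own):

```python
import numpy as np

def cospectral(A, i, j):
    # by the Cayley-Hamilton theorem it suffices to compare
    # [A^r]_ii with [A^r]_jj for r = 1, ..., n-1
    n = A.shape[0]
    P = A.copy()
    for _ in range(1, n):
        if not np.isclose(P[i, i], P[j, j]):
            return False
        P = P @ A
    return True

# path graph P4:  0 - 1 - 2 - 3
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

assert cospectral(A, 0, 3)       # swapped by the flip automorphism
assert cospectral(A, 1, 2)
assert not cospectral(A, 0, 1)   # different degrees, so not cospectral
```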
\begin{defin} Two vertices $i,j$ of $\G$ are \textit{$\beta$-subgraph equivalent} if $[e^{\beta A}]_{ii} = [e^{\beta A}]_{jj}$.
\end{defin}
From the Taylor series expansion it is clear that if $i,j$ are cospectral, then they are $\beta$-subgraph equivalent for all $\beta$. We will show that for $\G$ an undirected graph, or a directed graph with diagonalizable adjacency matrix, if $\beta$ is an algebraic number and $i,j$ are $\beta$-subgraph equivalent, then they are cospectral.
This implies a proof of Conjecture \ref{conjec5}, because a graph is walk-regular if and only if all its vertices are cospectral, and it is $\beta$-subgraph regular if and only if all its vertices are $\beta$-subgraph equivalent.
\section{Algebraic numbers and the Lindemann-Weierstrass Theorem}
We recall that $a\in \C$ is an algebraic number if there exists a nonzero polynomial $p(x)\in \Q[x]$ such that $p(a)=0$. The set of all algebraic numbers is a field and it will be denoted by $\Qalg$.
\begin{prop}Let $B$ be an $n \times n$ matrix with all its entries $B_{ij} \in \Qalg$. Let $\Ker(B)\subseteq \C^n$ be the null-space of $B$ with $\dim \Ker(B)=d \geq 1$. Then there exists a basis $\{v_1,\ldots , v_d\}$ of $\Ker(B)$ such that all the entries of each vector are algebraic numbers.
\label{prop_kernel}
\end{prop}
\begin{proof}We can see $B$ as a matrix over the field $\Qalg$. Gaussian elimination holds
on every ground field, so we can apply it to the rows of $B$ and find a basis of the null-space $\{v_1,\ldots , v_d\}$, with $v_i \in \Qalg^n$.
Since $\Qalg \subseteq \C$, we have that $\{v_1,\ldots , v_d\}$ is also a basis for $\Ker(B)$ when viewed as a complex-valued vector space.
\end{proof}
\begin{prop}Let $B$ be an $n\times n$ matrix with all its entries $B_{ij} \in \Qalg$. If $B$ is non-singular, then the inverse matrix $B^{-1}$ has all its entries in $\Qalg$.
\label{prop_inverse}
\end{prop}
\begin{proof} The inverse of $B$ can be computed explicitly: $[B^{-1}]_{ij} = (-1)^{i+j}\, \frac{\det(B\setminus(j,i))}{\det(B)} $, where $\det(B\setminus(j,i))$ is the minor of the matrix obtained by removing row $j$ and column $i$. It is clear that
$\det(B)$ and $\det(B\setminus(j,i))$ are both algebraic numbers, so $[B^{-1}]_{ij} \in \Qalg$ for every $i,j$.
\end{proof}
We now introduce the Lindemann-Weierstrass Theorem, which is the central ingredient of the main result. The theorem, proved in 1885 by combining the work of Hermite, Lindemann and Weierstrass, is a milestone of transcendental number theory. We state it here in a formulation due to Baker \cite{Baker}.
\begin{teo}[Lindemann-Weierstrass]
Let $a_1, \ldots, a_n$ be distinct algebraic numbers. Then the exponentials $e^{a_1},\ldots, e^{a_n}$ are linearly independent over the algebraic numbers. In other words, for every choice of $c_1,\ldots, c_n \in \overline{\Q}$, not necessarily distinct, we have:
\begin{equation}
c_1 e^{a_1} \,+ \,\cdots \,+\, c_n e^{a_n}\,=\,0 \; \; \; \iff \; \;\; c_i = 0 \;\; \;\;\forall\; \;1\leq i \leq n. \end{equation}
\end{teo}
\begin{proof}
See for instance \cite{Siegel} for a proof with a historical perspective, \cite{Baker} for a simpler argument, or \cite{Nathanson} for a proof using Padé approximants.
\end{proof}
Notice that the theorem has many important consequences, e.g., the transcendence of $e$ (choosing $a_1=1$ and $a_2=0$) and the transcendence of $\pi$ (choosing $a_1=i \pi$ and $a_2=0$). For the history of the Lindemann-Weierstrass Theorem, see \cite{Brezinski}.
\section{Main result}
Let $\G$ be a graph with adjacency matrix $A$, and assume that $A$ is diagonalizable, say $A=Q D Q^{-1}$. We are mostly interested in undirected graphs, where the latter property is always true (because $A$ is real symmetric); nonetheless, we can extend the result at least to directed graphs with diagonalizable adjacency matrix.
\vspace{0.3cm}
Let $q_{ij}$ be the $(i,j)^{th}$ entry of $Q$ and $\widehat{q}_{ij}$ that of $Q^{-1}$. Let $(\lambda_1,\ldots, \lambda_n)=\text{diag}(D)$ be the (possibly non-distinct) eigenvalues of $A$. We will use $(\mu_1,\ldots ,\mu_d)$ with $d\leq n$ to denote the eigenvalues without repetition, with $\mu_i$ of multiplicity $m_i$. Up to permutation, we can assume $\lambda_1=\cdots=\lambda_{m_1}=\mu_1$, $\lambda_{m_1+1}=\cdots=\lambda_{m_1+m_2}=\mu_2$, and so on. For ease of notation, let $\II_h=\{ k \,|\, \lambda_k =\mu_h\}$ be the set of indices $k$ for which $\lambda_k=\mu_h$.
\vspace{0.3cm}
From $A^r = Q D^r Q^{-1}$ and $e^{\beta A} = Q \,e^{\beta D} \, Q^{-1}$ we can group equal eigenvalues together to obtain:
\begin{equation}
\begin{aligned}
[A^r]_{ii} = &\sum\limits_{k=1}^n \,q_{ik}\;\widehat{q}_{ki} \, \lambda_k^r = \left(\,\sum\limits_{k\in\II_1} q_{ik}\;\widehat{q}_{ki} \right)\mu_1^r + \cdots + \left(\,\sum\limits_{k\in\II_d} q_{ik}\;\widehat{q}_{ki} \right)\mu_d^r=\\
=&\;C_{1\,i} \;\mu_1^r \,+\, C_{2\,i}\; \mu_2^r\,+\,\cdots\,+\,C_{d\,i} \;\mu_d^r\,;
\end{aligned}
\label{Ar_autoval}
\end{equation}
\begin{equation}[e^{\beta A}]_{ii} = \sum\limits_{k=1}^n \,q_{ik}\;\widehat{q}_{ki} \;e^{\beta\lambda_k} = C_{1\,i}\;e^{\beta\mu_1} \,+\,C_{2\,i}\;e^{\beta\mu_2} \,+\, \cdots \,+\, C_{d\,i}\;e^{\beta\mu_d}\,
\label{expA_autoval}
\end{equation}
where $C_{h\,i}= \sum\limits_{k\in\II_h} q_{ik}\;\widehat{q}_{ki}$. The next theorem is the key step in the proof of the main result.
\begin{teo}
Let $\G$ be a directed graph with adjacency matrix $A$, and assume that $A$ is diagonalizable. Let $\beta\neq 0$ be an algebraic number. If two vertices $i,j$ are $\beta$-subgraph equivalent, then they are cospectral.
\label{prop_main}
\end{teo}
\begin{proof}
We would like to apply the Lindemann-Weierstrass Theorem, so we need to prove that (with the above notation) the exponents $\beta \mu_h$ and the coefficients $C_{h\,i} = \sum\limits_{k\in\II_h} q_{ik}\;\widehat{q}_{ki} $ are all algebraic numbers.
\vspace{0.3cm}
The entries of $A$ are either 0 or 1, so the characteristic polynomial $P_A(x)=\det(xI-A)$ has integer coefficients. The roots of $P_A(x)$ are $\mu_1,\ldots ,\mu_d$, so they all are algebraic numbers. Since $\beta\in\Qalg$, then also $\beta\mu_1,\ldots, \beta\mu_d$ are algebraic numbers.
\vspace{0.3cm}
Observe that the coefficients $C_{h\,i}$ in equations (\ref{Ar_autoval}), (\ref{expA_autoval}) can be obtained for many possible choices of $Q$, as long as $A=QDQ^{-1}$ holds. We will construct an appropriate $Q$ with algebraic numbers in all entries.
\vspace{0.25cm}
For every eigenvalue $\mu_h$, let $B=A-\mu_h I$. Using Proposition \ref{prop_kernel} we can find vectors $\{v_1,\ldots ,v_{m_h}\}$ which form a basis of $\Ker(B)$ and such that all their components are in $\Qalg$. Hence, we can
construct a matrix $Q$ which has the $m_h$ columns relative to the eigenvalue $\mu_h$
equal to the above-defined vectors $\{v_1,\ldots ,v_{m_h}\}$.
$Q$ has all the entries in $\Qalg$, and so does its inverse $Q^{-1}$ by Proposition \ref{prop_inverse}. This implies that $\forall \, h,i$, the coefficients $C_{h\,i}=\sum\limits_{k\in\II_h} q_{ik}\;\widehat{q}_{ki} $ are algebraic numbers.
We can now prove the result. The hypothesis is $[e^{\beta A}]_{ii} = [e^{\beta A}]_{jj}$ which we can write as in equation (\ref{expA_autoval}) as:
\begin{equation*}
\begin{aligned}
[e^{\beta A}]_{ii} = \sum\limits_{k=1}^n \,q_{ik}\;\widehat{q}_{ki} \;e^{\beta\lambda_k} &= \,C_{1\,i}\;e^{\beta\mu_1}\, +\,C_{2\,i}\;e^{\beta\mu_2} \,+ \,\cdots \,+\, C_{d\,i}\;e^{\beta\mu_d},\\
[e^{\beta A}]_{jj} = \sum\limits_{k=1}^n \,q_{jk}\;\widehat{q}_{kj} \,e^{\beta\lambda_k} &=\, C_{1\,j}\;e^{\beta\mu_1} \,+\,C_{2\,j}\;e^{\beta\mu_2}\, +\, \cdots\, +\, C_{d\,j}\;e^{\beta\mu_d},
\end{aligned}
\end{equation*}
\begin{equation*}
0=[e^{\beta A}]_{ii}-[e^{\beta A}]_{jj} =\, (C_{1\,i}-C_{1\,j})\;e^{\beta\mu_1} \,+\, \cdots\, +\, (C_{d\,i}-C_{d\,j})\;e^{\beta\mu_d}.
\end{equation*}
Since for every $h$ the exponents $\beta\mu_h$ and coefficients $C_{h\,i},\, C_{h\,j}$ are algebraic numbers, and also $\beta \mu_h$ are all distinct because $\beta \neq 0$ and $\mu_h$ are pairwise distinct, we can apply the Lindemann-Weierstrass Theorem to obtain that $C_{h\,i}=C_{h\,j}\;\; \forall \, 1\leq h\leq d$.
From this it follows that for all positive integers $r$,
\begin{equation}
\begin{aligned}
[A^r]_{ii} =&\;C_{1\,i} \;\mu_1^r + C_{2\,i}\; \mu_2^r+\cdots+C_{d\,i} \;\mu_d^r=\\
=&\;C_{1\,j} \;\mu_1^r + C_{2\,j}\; \mu_2^r+\cdots+C_{d\,j} \;\mu_d^r=[A^r]_{jj}\,,
\end{aligned}
\end{equation}
which means that $i,j$ are cospectral in $\G$. The proof is complete.
\end{proof}
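The spectral decompositions of $[A^r]_{ii}$ and $[e^{\beta A}]_{ii}$ used in the proof are easy to verify numerically. The following sketch (assuming NumPy; the star $K_{1,3}$ and the value of $\beta$ are arbitrary choices) groups equal eigenvalues, forms the coefficients $C_{h\,i}$, and checks both identities; it also confirms that the three cospectral leaves get equal $\beta$-subgraph centrality:

```python
import numpy as np

# star K_{1,3}: eigenvalues -sqrt(3), 0 (multiplicity 2), sqrt(3)
A = np.array([[0., 1., 1., 1.],
              [1., 0., 0., 0.],
              [1., 0., 0., 0.],
              [1., 0., 0., 0.]])
beta = 0.7
lam, Q = np.linalg.eigh(A)            # A = Q diag(lam) Q^T, Q orthogonal

# distinct eigenvalues mu_h and coefficients C_{h,i} = sum_{k in I_h} q_{ik}^2
mus, C = [], []
for l in lam:
    if not any(np.isclose(l, m) for m in mus):
        mus.append(l)
        C.append((Q[:, np.isclose(lam, l)]**2).sum(axis=1))
mus, C = np.array(mus), np.array(C)   # C[h, i]

expA = Q @ np.diag(np.exp(beta * lam)) @ Q.T
assert np.allclose(np.diag(expA), C.T @ np.exp(beta * mus))
for r in range(1, 6):
    assert np.allclose(np.diag(np.linalg.matrix_power(A, r)), C.T @ mus**r)

# the three leaves are cospectral, hence beta-subgraph equivalent
assert np.allclose(np.diag(expA)[1:], np.diag(expA)[1])
```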
\textbf{Remark.} Observe that if $\G$ is an undirected graph, then its adjacency matrix $A$ is symmetric and therefore it is diagonalizable; hence the result of Theorem \ref{prop_main} can be applied.
\vspace{0.3cm}
We will now prove Conjectures \ref{conjec1} and \ref{conjec5}, stated in Section 2:
\begin{teo}[Main Result]
Let $\beta>0$ be an algebraic number and let $\G$ be a connected undirected graph with adjacency matrix $A$.
\begin{enumerate}
\item $\G$ is $\beta$-subgraph regular if and only if $\G$ is walk-regular.
\item If two vertices $i$, $j$ are $\beta$-subgraph equivalent, then the degree and eigenvector centralities of $i$ and $j$ are equal.
\item If $\G$ is $\beta$-subgraph regular, then the degree and eigenvector centralities are also identical for all nodes.
\end{enumerate}
\label{main_thm}
\end{teo}
\begin{proof}
\textbf{(1)} If $\G$ is walk-regular, then by the Taylor series expansion of $[e^{\beta A}]_{ii}$ it follows that $\G$ is $\beta$-subgraph regular for every $\beta\in \R$.
If $\G$ is $\beta$-subgraph regular for $\beta \in \Qalg$, this means that $\forall \,i,j $ we have $[e^{\beta A}]_{ii} = [e^{\beta A}]_{jj}$. By Theorem \ref{prop_main}, we have that $[A^r]_{ii} = [A^r]_{jj}$ for every $r>0$ and for every $i,j$, which is the definition of walk-regularity.\\
\textbf{(2)} The degree centrality of $i$ is the number of edges incident to $i$, which equals $[A^2]_{ii}$. Since Theorem \ref{prop_main} implies that $[A^r]_{ii} = [A^r]_{jj}$ for every integer $r>0$, it follows that $[A^2]_{ii}=[A^2]_{jj}$.
\vspace{0.2cm}
Let us take $Q$ as in the proof of Theorem \ref{prop_main}. Since $A$ is real symmetric, $Q$ can be transformed into an orthogonal matrix still satisfying $A=QDQ^{-1}$ by applying Gram-Schmidt orthogonalization and column normalization; these operations preserve the algebraicity of its entries. So $Q^{-1}=Q^\T$ and the coefficients are simply $C_{h\,i}=\sum\limits_{k\in\II_h} q_{ik}\;\widehat{q}_{ki} = \sum\limits_{k\in\II_h} q_{ik}^2$.
Up to permutation, we can assume that $\lambda_1$ is the eigenvalue with the greatest absolute value. Since $\G$ is undirected and connected, by the Perron-Frobenius Theorem \cite{perronfrobenius}, $\lambda_1$ is a simple eigenvalue with a non-negative eigenvector $(q_{1\,1},\ldots ,q_{n\,1})^\T$. The eigenvector centrality of vertex $i$ is defined as $q_{i\,1}$.
\vspace{0.2cm}
In the proof of Theorem \ref{prop_main} we have obtained that for $i,j$ which are $\beta$-subgraph equivalent, $C_{1\,i}=C_{1\,j}$. Since $\lambda_1$ is a simple eigenvalue, $C_{1\,i}=q_{i\,1}^2$ and $C_{1\,j}=q_{j\,1}^2$. We conclude that $q_{i\,1}=q_{j\,1}$ because they are both non-negative, proving that $i$ and $j$ have the same eigenvector centrality.
\vspace{0.2cm}
\textbf{(3)} It follows from point 2 and the fact that in a $\beta$-subgraph regular graph all vertices are $\beta$-subgraph equivalent.
\end{proof}
\textbf{Remark.} Point 1 implies that the value(s) of $\beta$ in the counterexample found in \cite{KKScounterexample} are necessarily transcendental numbers.
\section{Generalizations and remarks}
For any sufficiently regular function $f$ (analytic at $0$ with radius of convergence greater than $\rho(A)$, the spectral radius of $A$) the matrix function $f(A)$ can be calculated using the Taylor series expansion.
Defining the \textit{diagonal entry function} as $f_D(i)=[f(A)]_{ii}$, it is possible to obtain properties of the graph and of the vertices $i,j$ by comparing $f_D(i)$ and $f_D(j)$. The subgraph centrality is the special case obtained by taking $f(x)=e^{\beta x}$. Other functions have also been studied in the literature, for example $f(x)=\frac{1}{1-\alpha x}$ (with $0<\alpha < \frac{1}{\rho(A)}$), which gives the resolvent subgraph centrality; see, for instance, \cite{EHnetwork}.
If two vertices $i,j$ are cospectral, then by power series expansion it follows that $f_D(i)=[f(A)]_{ii}=[f(A)]_{jj}=f_D(j)$: this means that $i$ and $j$ cannot be distinguished by any diagonal entry function. However, the function $f(x)=e^{\beta x}$ with algebraic $\beta$ has the ``maximum resolution'' among all diagonal entry functions: by Theorem \ref{prop_main}, two non cospectral vertices must have different $\beta$-subgraph centralities.
No ``simple'' function is yet known that can always distinguish vertices up to graph automorphism. Nevertheless, the subgraph centrality can distinguish non cospectral vertices, and that is the limit for any diagonal entry function. \\
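As a sanity check of this limitation, any concrete diagonal entry function assigns equal values to cospectral vertices. Here is a sketch with the resolvent subgraph centrality $f(x)=\frac{1}{1-\alpha x}$ (assuming NumPy; the helper name `resolvent_centrality` and the choice of $\alpha$ are our own):

```python
import numpy as np

def resolvent_centrality(A, alpha):
    # f(x) = 1/(1 - alpha x), valid for 0 < alpha < 1/rho(A)
    n = A.shape[0]
    return np.diag(np.linalg.inv(np.eye(n) - alpha * A))

# path P4: vertices 0,3 and 1,2 are cospectral pairs
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
rho = max(abs(np.linalg.eigvals(A)))
rc = resolvent_centrality(A, 0.5 / rho)

assert np.isclose(rc[0], rc[3])   # cospectral vertices are indistinguishable
assert np.isclose(rc[1], rc[2])
```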
We can observe that the proof of Theorem \ref{prop_main} does not need $A$ to be the adjacency matrix of a graph, but only that the roots of $P_A(x)$ and the eigenvectors of $A$ are algebraic. This is true if all the entries of $A$ are rational (or even algebraic) numbers.
\begin{prop}
Let $A\in \Qalg^{n\times n}$ be a diagonalizable matrix. If for $1\leq i,j\leq n$ and $\beta \in \Qalg$ we have that $[e^{\beta A}]_{ii}= [e^{\beta A}]_{jj}$, then for every integer $r>0$ we have $[A^r]_{ii}= [A^r]_{jj}$.
\end{prop}
We can see $A$ as the adjacency matrix of a weighted directed graph, with algebraic weights (possibly negative).
\\
The next question is whether the result can be generalized to a non-diagonalizable matrix $A$ (both for $A$ the adjacency matrix of a directed graph, and more generally for any $A$ with algebraic entries).
We have been able to obtain a partial answer to this question.
\begin{prop}
Let $A\in \Qalg^{n\times n}$ with Jordan normal form $J$, i.e. $A=QJQ^{-1}$. Assume that $\lambda_1$ has index $\leq 2$ (its largest Jordan block has size $\leq 2$) and all other eigenvalues have index 1.
If $[e^{\beta A}]_{ii}= [e^{\beta A}]_{jj}$, then for every integer $ r>0$ we have $[A^r]_{ii}= [A^r]_{jj}$.
\end{prop}
\begin{proof}
The Jordan normal form of $A=QJQ^{-1}$ is the following, with $m$ copies of the block $J_1$:
$$J=Q^{-1}AQ=\begin{pmatrix}
J_1 & & & \\
& \ddots & & \\
& & J_1 & \\
& & & \lambda_{2m+1} &\\
& & & & \ddots \\
& & & & & \lambda_n
\end{pmatrix} , \hspace{1cm}
J_1=\begin{pmatrix}
\lambda_1 & 1 \\
0& \lambda_1
\end{pmatrix}.$$
To calculate $A^r$ and $e^{\beta A}$, we need $J^r$ and $e^{\beta J}$, which are block-diagonal with the blocks relative to $J_1$ equal to:
$$J_1^r=\begin{pmatrix}
\lambda_1^r & r \lambda_1^{r-1} \\
0& \lambda_1^r
\end{pmatrix} \hspace{1cm}
e^{\beta J_1} = \begin{pmatrix}
e^{\beta \lambda_1} & \beta e^{\beta \lambda_1}\\
0& e^{\beta \lambda_1}
\end{pmatrix}$$
We thus obtain:
$$[A^r]_{ii} = \sum\limits_{k=1}^n \,q_{ik} \;\widehat{q}_{ki}\; \lambda_k^r + \sum\limits_{l=1}^{m} \,q_{i\,2l-1} \;\,\widehat{q}_{2l\,i} \; r \lambda_1^{r-1},$$
$$[e^{\beta A}]_{ii} = \sum\limits_{k=1}^n \,q_{ik} \;\widehat{q}_{ki} \;e^{\beta \lambda_k} + \sum\limits_{l=1}^{m} \,q_{i\,2l-1}\,\; \widehat{q}_{2l\,i}\;\beta e^{\beta \lambda_1}.$$
Setting in this case $C_{h\,i} = \sum\limits_{k\in \II_h} q_{ik} \;\widehat{q}_{ki}$, by Lindemann-Weierstrass Theorem we have that $C_{h\,i } = C_{h\,j}$ for all $h\geq 2$. By looking at the coefficient of $e^{\beta \lambda_1}$ we obtain:
$$C_{1\,i} + \sum\limits_{l=1}^{m}\, q_{i\,2l-1} \;\widehat{q}_{2l\,i}\,\beta = C_{1\,j} + \sum\limits_{l=1}^{m} \; q_{j\,2l-1} \;\widehat{q}_{2l\,j}\,\beta$$
Using the relation $I=QQ^{-1}$, we have $1=I_{ii} = \sum\limits_{k=1}^n\,q_{ik}\, \widehat{q}_{ki}=\sum\limits_{h=1}^n C_{h\,i}$. Using the same relation for $I_{jj}$, we obtain that $C_{1\,i} = C_{1\,j}$, and so $\sum\limits_{l=1}^{m} \,q_{i\,2l-1}\; \widehat{q}_{2l\,i} = \sum\limits_{l=1}^{m} \, q_{j\,2l-1}\; \widehat{q}_{2l\,j}$. From this it follows that $[A^r]_{ii}=[A^r]_{jj}$ for all $r>0$, as desired.
\end{proof}
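The closed forms for $J_1^r$ and $e^{\beta J_1}$ used in the proof can be double-checked symbolically; here is a small sketch (assuming Python with SymPy, with arbitrary sample values for $\lambda_1$, $\beta$ and $r$):

```python
import sympy as sp

lam1, beta, r = sp.Integer(2), sp.Rational(1, 3), 5
J1 = sp.Matrix([[lam1, 1], [0, lam1]])

# J_1^r = [[lam1^r, r*lam1^(r-1)], [0, lam1^r]]
assert J1**r == sp.Matrix([[lam1**r, r * lam1**(r - 1)], [0, lam1**r]])

# e^{beta J_1} = e^{beta lam1} * [[1, beta], [0, 1]]
E = (beta * J1).exp()
expected = sp.exp(beta * lam1) * sp.Matrix([[1, beta], [0, 1]])
assert sp.simplify(E - expected) == sp.zeros(2, 2)
```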
We believe that the result is true for all non-diagonalizable matrices, so we set forth the following conjecture:
\begin{conj}
Let $A$ be the adjacency matrix of a directed, unweighted graph, with $A$ non-diagonalizable. If for two vertices $i,j$ and for $\beta\in \Qalg$ we have $[e^{\beta A}]_{ii} = [e^{\beta A}]_{jj}$, then for every integer $r>0$ we have $[A^r]_{ii} = [A^r]_{jj}$.
\end{conj}
Considering the application of the Lindemann-Weierstrass Theorem, it is quite possible that this conjecture holds for all non-diagonalizable matrices $A \in \Qalg^{n\times n}$ as well.
\section{Acknowledgements}
The authors would like to thank Michele Benzi for bringing this problem to their attention, for useful discussions on the topic and for his very helpful review of the manuscript, and an anonymous referee for helpful comments and suggestions.
| {
"timestamp": "2021-06-08T02:27:37",
"yymm": "2007",
"arxiv_id": "2007.08956",
"language": "en",
"url": "https://arxiv.org/abs/2007.08956",
"abstract": "Centrality measures are used in network science to identify the most important vertices for transmission of information and dynamics on a graph. One of these measures, introduced by Estrada and collaborators, is the $\\beta$-subgraph centrality, which is based on the exponential of the matrix $\\beta A$, where $A$ is the adjacency matrix of the graph and $\\beta$ is a real parameter (\"inverse temperature\"). We prove that for algebraic $\\beta$, two vertices with equal $\\beta$-subgraph centrality are necessarily cospectral. We further show that two such vertices must have the same degree and eigenvector centralities. Our results settle a conjecture of Estrada and a generalization of it due to Kloster, Král and Sullivan. We also discuss possible extensions of our results.",
"subjects": "Combinatorics (math.CO); Social and Information Networks (cs.SI)",
"title": "Vertex distinction with subgraph centrality: a proof of Estrada's conjecture and some generalizations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9857180673335565,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7083314758416753
} |
https://arxiv.org/abs/1703.06214 | Free 2-step nilpotent Lie algebras and indecomposable modules | Given an algebraically closed field $F$ of characteristic 0 and an $F$-vector space $V$, let $L(V)=V\oplus\Lambda^2(V)$ denote the free 2-step nilpotent Lie algebra associated to $V$. In this paper, we classify all uniserial representations of the solvable Lie algebra $\mathfrak g=\langle x\rangle\ltimes L(V)$, where $x$ acts on $V$ via an arbitrary invertible Jordan block. | \section{Introduction}
\label{intro}
We fix throughout an algebraically closed field $F$ of characteristic zero. All Lie
algebras and representations considered in this paper are assumed
to be finite dimensional over $F$, unless explicitly stated
otherwise.
According to \cite{M} (see also \cite{GP}), the task of classifying all indecomposable modules of an arbitrary Lie algebra is daunting.
However, in recent years there has been significant progress in classifying certain types of indecomposable modules for various
families of Lie algebras. See \cite{C1, C2, CGS, CPS, CS1, CS2, CS3, CMS, DdG, DR, J}, for example. The classification of all uniserial modules (those having a unique composition series) of distinguished classes of Lie algebras has been especially successful (see \cite{CPS,CGS,CS1,CS3}, for instance).
In this paper, we make a further contribution in this direction by classifying all uniserial representations of the solvable Lie algebra $\g=\langle x\rangle\ltimes L(V)$,
where $V$ is a vector space, $L(V)=V\oplus\Lambda^2(V)$ is the free 2-step nilpotent Lie algebra associated to $V$,
and $x$ acts on $V$ via a single Jordan block $J_n(\la)$, with $\la\neq 0$. The case $n=1$, when $\Lambda^2(V)=0$, is covered in \cite{CS2},
so we will focus attention on the case $n>1$.
We say that a uniserial representation $R:\g\to\gl(U)$ is
\emph{relatively faithful} if $\ker(R)\cap\Lambda^2(V)$ is properly
contained in $\Lambda^2(V)$ and $\ker(R)\cap V=(0)$. It suffices
to consider the case when $R$ is relatively faithful, for if
$\Lambda^2(V)\subseteq \ker(R)$ then \cite{CPS} applies, if
$V\subseteq \ker(R)$ we may appeal to \cite{CS1}, and if
$(0)\neq \ker(R)\cap V\neq V$, we are led to consider a uniserial
representation $\overline{R}$ of $\langle \overline{x}\rangle\ltimes L(\overline{V})$,
where $\overline{V}$ is a factor of~$V$ by an $x$-invariant subspace,
$\overline{x}$ acts on $\overline{V}$ via an invertible Jordan block $J_m(\la)$, $1\leq m<n$,
and $\ker(\overline{R})\cap\overline{V}=(0)$.
Our main results are as follows.
In \S\ref{sec1} we define a family of relatively faithful uniserial representations of $\g$ (the case $\la=0$ being allowed).
Explicitly, let $v_0,\dots,v_{n-1}$ be a basis of $V$ such that
$$
[x,v_0]=\la v_0+v_1, [x,v_1]=\la v_1+v_2,\dots, [x,v_{n-1}]=\la v_{n-1}.
$$
Given a triple $(a,b,c)$ of positive integers satisfying
$$
a+b=n+1,\; c\leq a\quad\text{ or }\quad c+b=n+1,\; a\leq c,
$$
two matrices $M\in M_{a\times b}$ and $N\in M_{b\times c}$ such that $$M_{a,1}\neq 0\text{ and }N_{b,1}\neq 0,$$
and a scalar $\al\in F$, we define a representation
$R=R_{a,b,c,M,N,\al}:\g\to\gl(d)$, $d=a+b+c$, in block form, in the following manner:
$$
R(x)=A=\left(
\begin{array}{ccc}
J^a(\al) & 0 & 0 \\
0 & J^b(\al-\la) & 0 \\
0 & 0 & J^c(\al-2\la) \\
\end{array}
\right),
$$
where $J^p(\be)$ denotes the upper triangular Jordan block of size $p$ and eigenvalue~$\be$,
$$
R(v_k)=(\ad_{\gl(d)} A-\la 1_{\gl(d)})^k \left(
\begin{array}{ccc}
0 & M & 0 \\
0 & 0 & N \\
0 & 0 & 0 \\
\end{array}
\right),\quad 0\leq k\leq n-1,
$$
$$
R(v\wedge w)=[R(v),R(w)],\quad v,w\in V.
$$
The representation~$R$ is always uniserial. It is also relatively faithful, except for an extreme case,
as described in Definition \ref{defeabc}. The length of~$R$, as defined in Definition \ref{dele}, is equal to 3 (it coincides
with the number of Jordan blocks of $R(x)$ in this case).
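For a small concrete instance, the defining relations can be verified by direct computation. The following sketch (assuming Python with SymPy; the sample values $a=2$, $b=c=1$, $\la=3$, $\al=1$ and the admissible matrices $M$, $N$ are our own choices) builds $R(x)$ and $R(v_k)$ as above, and checks that $(\ad_\g x-\la 1_\g)^n v_0$ acts as zero and that the images of $V$ generate a 2-step nilpotent algebra:

```python
import sympy as sp

lam, al = sp.Integer(3), sp.Integer(1)   # sample values of lambda, alpha
a, b, c = 2, 1, 1                        # a + b = n + 1 with n = 2, and c <= a
n, d = a + b - 1, a + b + c

def J(p, beta):                          # upper triangular Jordan block J^p(beta)
    return sp.Matrix(p, p, lambda i, j: beta if j == i else (1 if j == i + 1 else 0))

A = sp.diag(J(a, al), J(b, al - lam), J(c, al - 2 * lam))

M = sp.Matrix([[0], [1]])                # a x b block with M_{a,1} != 0
N = sp.Matrix([[1]])                     # b x c block with N_{b,1} != 0
X = sp.zeros(d, d)
X[0:a, a:a + b] = M
X[a:a + b, a + b:d] = N

ad = lambda Y: A * Y - Y * A             # ad_{gl(d)} A
Rv = [X]                                 # R(v_k) = (ad A - lam)^k X
for k in range(n):
    Rv.append(ad(Rv[-1]) - lam * Rv[-1])

assert Rv[n] == sp.zeros(d, d)           # (ad x - lam)^n v_0 acts as zero
W = Rv[0] * Rv[1] - Rv[1] * Rv[0]        # R(v_0 wedge v_1) = [R(v_0), R(v_1)]
assert W != sp.zeros(d, d)               # Lambda^2(V) is not entirely killed
for k in range(n):
    assert Rv[k] * W == W * Rv[k]        # [u, v wedge w] acts as zero
```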
Conjugating all $R(y)$, $y\in \g$, by a suitable block diagonal matrix commuting with~$A$, one may normalize $R$,
in the sense of Definition \ref{defeabc}.
In \S\ref{sec3} we prove, for $\la\neq 0$, that every relatively faithful uniserial representation of $\g$
is isomorphic to one and only one normalized representation $R_{a,b,c,M,N,\al}$ of non-extreme type. This requires, in particular,
proving that $\g$ has no relatively faithful uniserial representations of length $>3$. This is our most challenging obstacle, and it is overcome in Theorem \ref{main1}. The ideas behind the proof of Theorem \ref{main1} are somewhat subtle
and are presented independently in \S\ref{secnueva}.
We would be very interested in knowing the classification of all uniserial modules of $\g$ when $\lambda=0$
(the case when $\g$ is nilpotent), but this seems to be a very difficult task.
In \S\ref{sec2} we determine when $R_{a,b,c,M,N,\al}$ is faithful (for arbitrary $\lambda$). It turns out that $R_{a,b,c,M,N,\al}$ is faithful
if and only if
$$
(a,b,c)\in\{(n,1,n),(n-1,2,n-1),(n,1,n-1),(n-1,1,n)\}.
$$
The sufficiency part of this result is fairly delicate. Most of the work towards it is done in Proposition \ref{fieln-2}.
The case $n=3$ and $(a,b,c)=(2,2,2)$ is special, in the sense that it is the only faithful uniserial representation of $\g$
where all blocks are square (in this case of size~2). This case is intimately related to a representation of
the truncated current Lie algebra $\sl(2)\otimes F[t]/(t^3)$.
In \S\ref{secnew} we provide a generalization of our faithfulness result, stated without reference to Lie algebras
or their representations.
Our general notation, basic concepts and preliminary material can all be found in \S\ref{sec0}, \S\ref{sec1} and \S\ref{sec2}.
\section{The Lie algebra $\g$}\label{sec0}
We fix throughout a vector space $V$. There is a unique Lie algebra structure on
$$
L(V)=V\oplus\Lambda^2(V)
$$
such that
$$
[v,w]=v\wedge w,\quad v,w\in V
$$
and
$$
[u,v\wedge w]=0, \quad u,v,w\in V.
$$
The Lie algebra $L(V)$ is the \emph{free 2-step nilpotent Lie algebra associated to $V$}.
In particular we have the following straightforward lemma.
\begin{lemma}\label{2free} Let $\h$ be a Lie algebra and let $\Omega:V\to\h$ be a linear map satisfying
\begin{equation*}
[\Omega(V),[\Omega(V),\Omega(V)]]=0.
\end{equation*}
Then $\Omega$ has a unique extension to a homomorphism of Lie algebras $\Omega':L(V)\to\h$.
\end{lemma}
Given a Lie algebra $\h$ and a representation $\h\to\gl(V)$, we can make $\Lambda^2(V)$ into an $\h$-module via:
$$
x(v\wedge w)=xv\wedge w+v\wedge xw,\quad x\in\h, v,w\in V.
$$
This gives a representation $\h\to\gl(L(V))$ whose image we readily see to be in $\Der(L(V))$. This produces the Lie algebra
$$
\h\ltimes L(V).
$$
For the remainder of the paper we set
$$
\g=\langle x\rangle\ltimes L(V),
$$
where $x\in\gl(V)$.
\section{Relatively faithful uniserial representations of $\g$}\label{sec1}
Given $p\geq 1$ and $\al\in F$,
we write $J_p(\al)$ (resp. $J^p(\al)$) for the lower (resp. upper) triangular Jordan block of size $p$ and eigenvalue $\al$.
We suppose throughout this section that $\g=\langle x\rangle \ltimes L(V)$, where $x\in\gl(V)$ acts on $V$ via a single, lower triangular, Jordan block, say $J_n(\la)$ with $n>1$, relative to a basis
$v_0,\dots,v_{n-1}$ of $V$. The case $\la=0$ is allowed. Then $\g$ has the following defining relations:
\begin{equation}
\label{rela}
[v,w]=v\wedge w,\quad v,w\in V,
\end{equation}
\begin{equation}
\label{rela2}
[u,v\wedge w]=0, \quad u,v,w\in V,
\end{equation}
\begin{equation}
\label{rew}
[x,v_0]=\la v_0+v_1, [x,v_1]=\la v_1+v_2,\dots, [x,v_{n-1}]=\la v_{n-1}.
\end{equation}
We may translate (\ref{rew}) as
\begin{equation}
\label{rela3}
(\ad_\g\, x -\lambda 1_\g)^k v_0=v_k,\quad 0\leq k\leq n-1,
\end{equation}
and
\begin{equation}\label{rela4}
(\ad_\g x -\lambda 1_\g)^n v_0=0.
\end{equation}
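To illustrate these relations in the smallest case $n=2$, note that $\g$ is then $4$-dimensional, with basis $x,v_0,v_1,v_0\wedge v_1$ and non-zero brackets
$$
[x,v_0]=\la v_0+v_1,\quad [x,v_1]=\la v_1,\quad [v_0,v_1]=v_0\wedge v_1,\quad
[x,v_0\wedge v_1]=2\la\,(v_0\wedge v_1),
$$
where the last bracket follows from the derivation rule $x(v\wedge w)=xv\wedge w+v\wedge xw$.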
\begin{definition}\label{dele}
Let $U$ be a non-zero $\g$-module. Let $U_1$ be the subspace of $U$ annihilated by
$[\g,\g]$. Since $[\g,\g]$ is an ideal of $\g$, it is clear that $U_1$ is a $\g$-submodule of $U$. Moreover, since $[\g,\g]$ acts via nilpotent
operators on $U$, Engel's theorem ensures that $U_1\neq 0$. We then choose $U_2$ so that $U_2/U_1$ is the subspace of $U/U_1$ annihilated by $[\g,\g]$, and so on. This gives rise to a strictly increasing sequence of $\g$-submodules of $U$, namely
$$
0\subset U_1\subset U_2\subset\cdots\subset U_\ell=U.
$$
We define the \emph{length} of $U$ to be $\ell$.
Note that, since $\g$ is solvable and $F$ is algebraically closed,
the length of a Jordan--H\"older composition series of $U$ is $\dim U$.
\end{definition}
\begin{definition}\label{defeabc} Let $(a,b,c)$ be a triple of positive integers satisfying
\begin{equation}
\label{cond}
a+b=n+1,\; c\leq a\quad\text{ or }\quad c+b=n+1,\; a\leq c,
\end{equation}
let $M\in M_{a\times b}$, $N\in M_{b\times c}$ be such that $$M_{a,1}\neq 0\text{ and }N_{b,1}\neq 0,$$
and let $\al\in F$. Associated to this data we define a linear transformation
$R=R_{a,b,c,M,N,\al}:\g\to\gl(d)$, $d=a+b+c$, in block form, as follows:
\begin{equation}
\label{pet}
R(x)=A=\left(
\begin{array}{ccc}
J^a(\al) & 0 & 0 \\
0 & J^b(\al-\la) & 0 \\
0 & 0 & J^c(\al-2\la) \\
\end{array}
\right),
\end{equation}
\begin{equation}
\label{pet2}
R(v_k)=(\ad_{\gl(d)} A-\la 1_{\gl(d)})^k \left(
\begin{array}{ccc}
0 & M & 0 \\
0 & 0 & N \\
0 & 0 & 0 \\
\end{array}
\right),\quad 0\leq k\leq n-1,
\end{equation}
\begin{equation}
\label{pet3}
R(v\wedge w)=[R(v),R(w)],\quad v,w\in V.
\end{equation}
We refer to $M$ and $N$ as \emph{normalized} if the last rows of $M$ and $N$ are equal to the first
canonical vectors of $F^b$ and $F^c$, respectively, and the first column of~$M$ is equal to the last canonical
vector of $F^a$. In this case, we say that $R$ itself is \emph{normalized}.
If $R$ is normalized, we say that $R$ is of \emph{extreme type} if $n$ is odd,
$a=1$, $c=1$ and $N_{i,1}=0$ for all even $i$.
\end{definition}
Conjugating all
$R(y)$, $y\in \g$, by a suitable block diagonal matrix commuting with~$A$, it is always possible to normalize $R$,
as seen in \cite[Lemma 2.5]{CPS}.
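For instance, take $n=2$ and $(a,b,c)=(2,1,1)$, which satisfies (\ref{cond}), together with the normalized choices $M=(0,1)^t\in M_{2\times 1}$ and $N=(1)\in M_{1\times 1}$. Writing $E_{i,j}$ for the matrix units of $\gl(4)$, formulas (\ref{pet})-(\ref{pet3}) give
$$
R(x)=\left(
\begin{array}{cccc}
\al & 1 & 0 & 0 \\
0 & \al & 0 & 0 \\
0 & 0 & \al-\la & 0 \\
0 & 0 & 0 & \al-2\la \\
\end{array}
\right),\quad
R(v_0)=E_{2,3}+E_{3,4},\quad R(v_1)=E_{1,3},\quad R(v_0\wedge v_1)=-E_{1,4}.
$$
Since $(a,b,c)=(n,1,n-1)$, this representation is in fact faithful by Theorem \ref{main0}.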
\begin{prop}\label{normali} The linear map $R_{a,b,c,M,N,\al}$ is a uniserial
representation.
\end{prop}
\begin{proof} It follows from Lemma \ref{2free} that (\ref{pet2})-(\ref{pet3}) define a Lie homomorphism $L(V)\to\gl(d)$.
By (\ref{cond}), we have $a+b\leq n+1$ and $b+c\leq n+1$, so \cite[Proposition 2.2]{CPS} ensures that the relations (\ref{rela3}) and (\ref{rela4})
are preserved, whence $R$ is a representation. Since $M_{a,1}\neq 0$ and $N_{b,1}\neq 0$, $R$ is clearly uniserial.
\end{proof}
\begin{prop}\label{normal2} Assume $\lambda\ne0$. The normalized representations $R_{a,b,c,M,N,\al}$ are pairwise non-isomorphic.
A normalized representation $R_{a,b,c,M,N,\al}$ is relatively faithful if and only if it is not of extreme type.
\end{prop}
\begin{proof} Considering the eigenvalues of the image of $x$ as well as their multiplicities, the only possible isomorphisms are easily seen to be between $R_{a,b,c,M,N,\al}$ and $R_{a,b,c,M',N',\al}$. Suppose $T\in\GL(d)$, $d=a+b+c$, satisfies
$$
T R_{a,b,c,M,N,\al}(y) T^{-1}=R_{a,b,c,M',N',\al}(y),\quad y\in\g.
$$
Then $T$ commutes with $R_{a,b,c,M,N,\al}(x)=J^a(\al)\oplus J^{b}(\al-\la)\oplus J^c(\al-2\la)$, and therefore $T=T_1\oplus T_2\oplus T_3$,
where $T_1,T_2,T_3$ are polynomials in $J^a(0),J^b(0),J^c(0)$, respectively, with non-zero constant term.
This means that every superdiagonal
of~$T_i$, $1\leq i\leq 3$, has equal entries. Using this feature of $T_1,T_2,T_3$ in
$$
T R_{a,b,c,M,N,\al}(v_0) =R_{a,b,c,M',N',\al}(v_0)T
$$
together with the fact that $M,N$ and $M',N'$ are normalized, we readily find that~$T$ is a scalar operator, whence $M=M'$ and $N=N'$.
Since $a+b=n+1$ or $b+c=n+1$, \cite[Proposition 2.2]{CPS} yields $\ker(R)\cap V=(0)$.
It remains to determine when $\Lambda^2(V)\subseteq \ker(R)$. By \cite[Theorem 3.2]{CPS}, this can only happen when $n$ is odd, $a=1$, $c=1$,
in which case direct computation forces $N_{i,1}=0$ for all even $i$.
\end{proof}
\section{Determining the faithful uniserial representations of $\g$}\label{sec2}
We assume throughout this section that $\g=\langle x\rangle \ltimes L(V)$, where $x$ acts on $V$ via a single lower Jordan
block $J_n(\la)$, $n>1$, relative to a basis $v_0,\dots,v_{n-1}$ of $V$.
\begin{definition} Given a sequence $(d_1,\dots,d_\ell)$ of positive integers, we view
every $M\in M_d$, for $d=d_1+\cdots+d_\ell$, as partitioned into $\ell^2$ blocks
$M(i,j)\in M_{d_i\times d_j}$, $1\leq i,j\leq \ell$. For $0\leq i\leq \ell-1$,
by the $i$th superdiagonal of $M$ we mean the blocks $M(1,1+i),M(2,2+i),\dots,M(\ell-i,\ell)$,
and we say that $M$ is an $i$-diagonal block
matrix if all other blocks of $M$ are equal to 0.
We refer to $M$ as block upper triangular if $M(i,j)=0$ for all $i>j$ and as block
strictly upper triangular if $M(i,j)=0$ for all $i\geq j$.
\end{definition}
\begin{definition} Given an integer $\ell>2$, a sequence of positive integers $(d_1,\dots,d_\ell)$, and a scalar $\al\in F$,
a representation $R:\g\to\gl(d)$ is said to be \emph{standard} relative to $(\ell,(d_1,\dots,d_\ell),\al)$
if the following conditions hold:
$$d_1+\cdots+d_\ell=d;\quad d_{i}+d_{i+1}\leq n+1\text{ for all }i;$$
$R(x)$ is the 0-diagonal block matrix
$$
A=J^{d_1}(\alpha)\oplus J^{d_2}(\alpha-\la)\oplus\cdots\oplus J^{d_\ell}(\alpha-(\ell-1)\la);
$$
every $R(v)$, $v\in V$, is a 1-diagonal block matrix; every block in the first superdiagonal of $R(v_0)$
has non-zero bottom left entry.
Let $M_1,\dots,M_{\ell-1}$ denote the blocks in the first superdiagonal of $R(v_0)$.
We say that $R$ is \emph{normalized standard} relative to $(\ell,(d_1,\dots,d_\ell),\al)$ if,
in addition to the above conditions, the last row of each $M_i$ is equal to the first canonical vector,
and the first column of $M_{1}$ is the last canonical vector.
\end{definition}
Note that a standard representation $R$ is always uniserial, and its length,
as defined in Definition \ref{dele}, is equal to $\ell$.
Observe also that if $R$ is a standard representation then every $R(v\wedge w)$, $v,w\in V$,
is a 2-diagonal block matrix.
\begin{lemma}\label{funk} Given an integer $\ell>2$, a sequence of positive integers $(d_1,\dots,d_\ell)$, and a scalar $\al\in F$,
let $R:\g\to\gl(d)$ be a standard representation relative to them. Then $\ker(R)\cap V=(0)$
if and only if $d_{i}+d_{i+1}=n+1$ for at least one $i$.
\end{lemma}
\begin{proof} Since the $x$-invariant subspaces of $V$ form a chain, we have $\ker(R)\cap V=(0)$ if and only if $v_{n-1}\notin\ker(R)$,
which is equivalent to $d_{i}+d_{i+1}=n+1$ for some $i$, by \cite[Proposition 2.2]{CPS}.
\end{proof}
\begin{lemma}\label{duy} Given an integer $\ell>2$, a sequence of positive integers $(d_1,\dots,d_\ell)$,
and a scalar $\al\in F$,
let $R:\g\to\gl(d)$ be a standard (resp. normalized standard) representation relative to them.
Then the dual representation is similar to a representation $T:\g\to\gl(d)$ that is standard
(resp. normalized standard) relative to $(\ell, (d_\ell,\dots,d_1),(\ell-1)\la-\al)$.
Moreover, $R$ is faithful (resp. relatively faithful) if and only if so is $T$.
\end{lemma}
\begin{proof} This is straightforward.
\end{proof}
\begin{prop}\label{fieln-2} Given an integer $n\geq 2$, let $(p_1,\dots,p_{n-1}),(q_1,\dots,q_{n-1})\in F^{n-1}$
be such that $p_j+q_j\ne0$ for all $j$, and
let $z,w\in F$ be non-zero.
Associated to these data, we consider matrices
$$
P_0,\dots,P_{n-1}\in M_{n-1\times 2},\qquad Q_0,\dots,Q_{n-1}\in M_{2\times n-1},
$$
having the following structure:
\[\small
\begin{array}{lll}
P_0=\left(
\begin{array}{cc}
* & * \\
\vdots & \vdots \\
* & * \\
z & * \\
\end{array}
\right), &
P_1=\left(
\begin{array}{cc}
* & * \\
\vdots & \vdots \\
* & * \\
z & * \\
0 & -p_1z \\
\end{array}
\right), &
P_2=\left(
\begin{array}{cc}
* & * \\
\vdots & \vdots \\
* & * \\
z & * \\
0 & -p_2z \\
0 & 0 \\
\end{array}
\right), \\
P_3=\left(
\begin{array}{cc}
* & * \\
\vdots & \vdots \\
* & * \\
z & * \\
0 & -p_3z \\
0 & 0 \\
0 & 0 \\
\end{array}
\right),\dots, &
P_{n-2}=\left(
\begin{array}{cc}
z & * \\
0 & -p_{n-2}z\\
0 & 0 \\
\vdots & \vdots \\
0 & 0 \\
\end{array}
\right), &
P_{n-1}=\left(
\begin{array}{cc}
0 & -p_{n-1}z\\
0 & 0 \\
\vdots & \vdots \\
0 & 0 \\
\end{array}
\right),
\end{array}
\]
\medskip
\[\small
\begin{array}{ll}
Q_{0}= \left(
\begin{array}{cccc}
* & * & \dots & * \\
w & * & \dots & * \\
\end{array}
\right), &
Q_{1}= \left(
\begin{array}{ccccc}
q_1w & * & * & \dots & * \\
0 & -w & * & \dots & * \\
\end{array}
\right), \\[5mm]
Q_2= \left(
\begin{array}{cccccc}
0 & -q_2w & * & *& \dots & * \\
0 & 0 & w & * &\dots & * \\
\end{array}
\right), &
Q_{3}= \left(
\begin{array}{cccccc}
0 & 0 & q_3w & * & \dots & * \\
0 & 0 & 0 & -w & \dots & * \\
\end{array}
\right),\dots, \\[5mm]
Q_{n-2}= \left(
\begin{array}{cccc}
0\; \dots\; 0 & (-1)^{n-3}q_{n-2}w & * \\
0\; \dots\; 0 & 0 & (-1)^{n-2}w \\
\end{array}
\right), &
Q_{n-1}=\left(
\begin{array}{cc}
0\; \dots\; 0 & (-1)^{n-2}q_{n-1}w \\
0\; \dots\; 0 & 0 \\
\end{array}
\right). \\[5mm]
\end{array}
\]
Then the matrices $T_{i,j}\in M_{n-1}$, $0\leq i<j\leq n-1$, defined by
$$
T_{i,j}=P_iQ_j-P_jQ_i, \quad 0\leq i<j\leq n-1,
$$
are linearly independent.
\end{prop}
\begin{proof} By induction on $n$. In the base case $n=2$, we have
$$
P_0=\begin{pmatrix}
z & *
\end{pmatrix}, \;
P_1=\begin{pmatrix}
0 & -p_1z
\end{pmatrix},\;
Q_0=\left(
\begin{array}{c}
* \\
w \\
\end{array}
\right), Q_1=\left(
\begin{array}{c}
q_1w \\
0 \\
\end{array}
\right).
$$
Therefore
$$
T_{0,1}=\begin{pmatrix}(p_1+q_1)wz\end{pmatrix}\neq 0.
$$
Assume that $n>2$ and that the result is true for $m=n-1$. Let
\begin{equation*}
\mathcal{T}=\underset{0\leq i<j\leq n-1}\sum\al_{i,j} T_{i,j}
\end{equation*}
and assume $\mathcal{T}=0$. We wish to show that
\begin{equation}
\label{alcero-2}
\al_{i,j}=0,\quad 0\leq i<j\leq n-1.
\end{equation}
It suffices to show that
\begin{equation}
\label{alcero}
\al_{0,j}=0,\quad 1\leq j\leq n-1.
\end{equation}
Indeed, assume we have proven (\ref{alcero}). Since $\mathcal{T}=0$, we obtain
\begin{equation}
\label{alcero2}
\underset{1\leq i<j\leq n-1}\sum\al_{i,j} T_{i,j}=0.
\end{equation}
Let $P'_0,\dots,P'_{m-1}\in M_{m-1\times 2}$ and $Q'_0,\dots,Q'_{m-1}\in M_{2\times m-1}$
be the matrices
obtained by deleting the last rows of $P_1,\dots,P_{n-1}$ and the first columns of $Q_1,\dots,Q_{n-1}$,
and let $T'_{i,j}=P'_iQ'_j-P'_jQ'_i$, $0\leq i<j\leq m-1$.
It follows automatically from (\ref{alcero2}) that
\begin{equation*}
\underset{0\leq i<j\leq m-1}\sum\al'_{i,j} T'_{i,j}=0,
\end{equation*}
where $\al'_{i,j}=\al_{i+1,j+1}$ and, from the inductive hypothesis, we conclude
\begin{equation}
\label{alcero4}
\al_{i,j}=0,\quad 1\leq i<j\leq n-1.
\end{equation}
We may now obtain (\ref{alcero-2}) from (\ref{alcero}) and (\ref{alcero4}).
We proceed to prove (\ref{alcero}). In fact we will prove by induction on $k\le n-1$ that $\alpha_{i,j}=0$ whenever $i<j$ and $i+j\leq k$.
The base case $k=1$ is straightforward. Indeed, from $\mathcal{T}_{n-1,1}=\alpha_{0,1}(p_1+q_1)wz$, we infer $\alpha_{0,1}=0$.
Suppose $1<k\leq n-1$ and assume that $\alpha_{i,j}=0$ whenever $i<j$ and $i+j\leq k-1$. Using this, a direct computation reveals that, for $i-j=n-1-k$, we have
\[
\mathcal{T}_{i,j}=\begin{cases}
(-1)^j(\alpha_{j,k-j}\,q_{j}-\alpha_{j-1,k+1-j}\,p_{k+1-j})\,wz, & \text{if $1\le j<\frac{k}2$;} \\[2mm]
-(-1)^{\frac{k}{2}}\,\alpha_{\frac{k}{2}-1,\frac{k}{2}+1}\,p_{\frac{k}{2}+1}\,wz, & \text{if $j=\frac{k}2$;} \\[2mm]
-(-1)^{\frac{k+1}{2}}\,\alpha_{\frac{k-1}{2},\frac{k+1}{2}}\big(q_{\frac{k+1}{2}}+p_{\frac{k+1}{2}}\big)\,wz, & \text{if $j=\frac{k+1}2$;} \\[2mm]
-(-1)^{\frac{k+2}{2}}\,\alpha_{\frac{k}{2}-1,\frac{k}{2}+1}\,q_{\frac{k}{2}+1}\,wz, & \text{if $j=\frac{k}2+1$;} \\[2mm]
-(-1)^j(\alpha_{k-j,j}\,q_{j}-\alpha_{k+1-j,j-1}\,p_{k+1-j})\,wz, & \text{if $\frac{k}2+1<j\le n-1$;}
\end{cases}
\]
that is,
\[
\begin{array}{cc|cl}
i&j & \mathcal{T}_{i,j}/wz \\
\hline & \\[-2mm]
n-k & 1 & -\alpha_{1,k-1}q_{1}+\alpha_{0,k}p_{k} \\[2mm]
n-k+1 & 2 & \alpha_{2,k-2}q_{2}-\alpha_{1,k-1}p_{k-1} \\[2mm]
n-k+2 & 3 & -\alpha_{3,k-3}q_{3}+\alpha_{2,k-2}p_{k-2} \\[1mm]
\vdots & \vdots & \vdots \\[1mm]
n-1-\frac{k}{2} & \frac{k}{2} & -(-1)^{\frac{k}{2}}\,\alpha_{\frac{k}{2}-1,\frac{k}{2}+1}\,p_{\frac{k}{2}+1} &\text{(if $k$ is even)}\\[2mm]
n-1-\frac{k-1}{2} & \frac{k+1}{2} & -(-1)^{\frac{k+1}{2}}\,\alpha_{\frac{k-1}{2},\frac{k+1}{2}}\big(q_{\frac{k+1}{2}}+p_{\frac{k+1}{2}}\big)&\text{(if $k$ is odd)} \\[2mm]
n-1-\frac{k-2}{2} & \frac{k+2}{2} & -(-1)^{\frac{k+2}{2}}\,\alpha_{\frac{k}{2}-1,\frac{k}{2}+1}\,q_{\frac{k}{2}+1} & \text{(if $k$ is even)} \\[1mm]
\vdots & \vdots & \vdots \\[1mm]
n-3 & k-2 & -(-1)^{k-2}(\alpha_{2,k-2}\,q_{k-2}-\alpha_{3,k-3}\,p_{3}) \\[2mm]
n-2 & k-1 & -(-1)^{k-1}(\alpha_{1,k-1}\,q_{k-1}-\alpha_{2,k-2}\,p_{2}) \\[2mm]
n-1 & k & -(-1)^k(\alpha_{0,k}\,q_{k}-\alpha_{1,k-1}\,p_{1}) \\
\end{array}
\]
Since, by hypothesis, $p_j+q_j\ne0$ for all $j$
(which in turn implies that, for each $j$, either $p_j$ or $q_j$ is non-zero), we obtain that (\ref{alcero}) holds.
\end{proof}
\begin{theorem}\label{main0} A representation $R_{a,b,c,M,N,\al}$ of $\g$ is faithful if and only if
\begin{equation}
\label{conabc}
(a,b,c)\in\{(n,1,n),(n-1,2,n-1),(n,1,n-1),(n-1,1,n)\}.
\end{equation}
\end{theorem}
\begin{proof} We divide the proof into two parts.
{\sc Necessity.} Suppose the representation $R=R_{a,b,c,M,N,\al}:\g\to\gl(d)$ is faithful, where $d=a+b+c$.
Let $S$ be the subspace of $\gl(d)$ of all matrices
$$
\left(
\begin{array}{ccc}
0 & 0 & P \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right),\quad P\in M_{a\times c}.
$$
Letting $A$ be as in (\ref{pet}), we view $S$ as an $F[t]$-module via $\ad_{\gl(d)}A-2\la 1_{\gl(d)}$.
As in \cite[Proposition 2.1]{CPS}, we see that $\ad_{\gl(d)}A-2\la 1_{\gl(d)}$ acts nilpotently on $S$ with nilpotency degree $a+c-1$.
On the other hand, we may view $\Lambda^2(V)$ as an $F[t]$-module via $\ad_\g x -2\la 1_\g$. Direct computation (alternatively, we may use the theory
of $\sl(2)$-modules) reveals that $\ad_\g x -2\la 1_\g$ acts on $\Lambda^2(V)$ with nilpotency degree $2n-3$. Indeed, we have
\begin{equation}
\label{forme}
(\ad_\g x -2\la 1_\g)^m(v\wedge w)=\underset{i+j=m}\sum {{m}\choose {i}} (x-\la 1_V)^i v\wedge (x-\la 1_V)^j w.
\end{equation}
Set $m=2n-3$ in (\ref{forme}) and take $v=v_p$ and $w=v_q$ with $0\leq p<q\leq n-1$. Then the right hand side of (\ref{forme}) is equal to 0
(including the extreme case $p=0,q=1$, which produces ${{2n-3}\choose {n-1}} v_{n-1}\wedge v_{n-1}=0$). Next set $m=2n-4$ in (\ref{forme}) and take $v=v_0$ and $w=v_1$. Then the right hand side of (\ref{forme}) is equal to
$$
\left [{{2n-4}\choose {n-1}}-{{2n-4}\choose {n-2}}\right ]v_{n-1}\wedge v_{n-2}\neq 0.
$$
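This difference is indeed non-zero: by the standard identity ${{2k}\choose {k}}-{{2k}\choose {k+1}}=C_k$ for the Catalan numbers, it equals $-C_{n-2}$; for instance, $n=3$ gives ${2\choose 2}-{2\choose 1}=-1$.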
Since $R$ is faithful, restricting $R$ to $\Lambda^2(V)$ yields a linear monomorphism $T:\Lambda^2(V)\to S$. It follows from \cite[Lemma 3.1]{CPS}
that $T$ commutes with the indicated actions of $F[t]$, so that $T$ is a monomorphism of $F[t]$-modules. It follows from above that
\begin{equation}
\label{ace}
2n-3\leq a+c-1.
\end{equation}
On the other hand, by (\ref{cond}), we have $a+b=n+1$ or $c+b=n+1$.
By duality (see Lemma \ref{duy}), we may
assume that $a+b=n+1$.
Suppose, if possible, that $b+c<n$. As the $x$-invariant subspaces of $V$ form a chain, it follows from \cite[Proposition 2.2]{CPS} that blocks $(2,3)$ of $R(v_{n-1})$ and $R(v_{n-2})$ are equal to 0 (alternatively, appeal to a direct computation based on (\ref{pet}) and (\ref{pet2})).
Then (\ref{pet3}) yields $R(v_{n-2}\wedge v_{n-1})=0$, a contradiction. We infer $b+c\geq n$. It follows from (\ref{cond}) that $b+c=n$ or $b+c=n+1$. In the second case $c=a$, so (\ref{ace}) gives $a\geq n-1$, whence
$$
(a,b,c)\in\{(n,1,n),(n-1,2,n-1)\}.
$$
In the first case $c=a-1$, so (\ref{ace}) gives $a\geq n-\frac{1}{2}$, hence $a=n$ and $(a,b,c)=(n,1,n-1)$.
{\sc Sufficiency.} We wish to show that $R=R_{a,b,c,M,N,\al}$ is faithful whenever (\ref{conabc}) holds.
By duality (see Lemma \ref{duy}), we may restrict to the cases
\begin{equation}
\label{set_abc}
(a,b,c)\in\{(n,1,n),(n-1,2,n-1),(n,1,n-1)\}.
\end{equation}
We will write $P(y),Q(y)$ and $T(y)$ for blocks $(1,2)$, $(2,3)$ and $(1,3)$ of $R(y)$, $y\in\g$, respectively.
By Proposition \ref{normal2}, $R$ is relatively faithful (it follows from \eqref{set_abc} that,
after normalizing $R$, we are not in the extreme case)
and thus $R$ is faithful if and only if the matrices $T(v_i\wedge v_j)$, $0\leq i<j\leq n-1$, are linearly independent.
$\bullet$ $(a,b,c)=(n-1,2,n-1)$.
Set $(p_1,\dots,p_{n-1})=(q_1,\dots,q_{n-1})=(1,\dots,n-1)$ and,
for $i=0,\dots,n-1$, let $P_i=P(v_i)\in M_{n-1\times 2}$ and $Q_i=Q(v_i)\in M_{2\times n-1}$.
It is not difficult to see that these vectors and matrices satisfy the hypothesis of
Proposition \ref{fieln-2} and thus, considering \eqref{pet3}, we obtain that
$$T(v_i\wedge v_j)=P(v_i)Q(v_j)-P(v_j)Q(v_i)=P_iQ_j-P_jQ_i=T_{i,j}, \quad 0\leq i<j\leq n-1,$$
are linearly independent.
$\bullet $ $(a,b,c)=(n,1,n)$. Note that $T(v_i\wedge v_j)$, $0\leq i<j\leq n-1$, form the canonical
basis of the space $\mathfrak{so}(n)$ of all $n\times n$ skew-symmetric matrices.
$\bullet $ $(a,b,c)=(n,1,n-1)$. Again, $T(v_i\wedge v_j)$, $0\leq i<j\leq n-2$, form the canonical
basis of $\mathfrak{so}(n-1)$, viewed as the subspace of $\mathfrak{so}(n)$ of matrices with zero first row and last column. On the other hand,
noting that $Q(v_{n-1})=0$, we see that $T(v_i\wedge v_{n-1})$, $0\leq i<n-1$, form the (opposite of the) canonical basis of $F^{n-1}$,
viewed as top left corner, say $C$, of $M_n$. Since $\mathfrak{so}(n-1)\cap C=(0)$, the result follows.
\end{proof}
\begin{exa}\label{sl2lindo}{\rm An interesting example occurs when $n=3$ and $a=b=c=2$. The construction above then yields
a faithful module of a very special nature: it is the
only faithful uniserial module of $\g$ in which all the blocks are square.
Take $\la=\al=0$ (the other cases are easy modifications).
Given a Lie algebra $L$ and an associative commutative algebra $A$, we know that $L\otimes A$ is a Lie algebra under
$[x\otimes a,y\otimes b]=[x,y]\otimes ab$. Moreover, if $R_1:L\to\gl(V_1)$ and $R_2:A\to\gl(V_2)$ are representations,
then $R_1\otimes R_2:L\otimes A\to\gl(V_1\otimes V_2)$ is a representation.
Now take $L=\sl(2)$, with standard basis $E,H,F$, and $A=F[t]/(t^3)$.
Let $R_1$ be the irreducible representation of highest weight 1 and
let $R_2$ be the regular representation.
If we restrict the representation $R_1\otimes R_2$ to the subalgebra of $\sl(2)\otimes F[t]/(t^3)$
generated by $\{E\otimes 1, F\otimes t\}$ (which is isomorphic to $\g$)
we obtain the case $n=3$ and $a=b=c=2$ of the above construction.
}
\end{exa}
\section{Faithfulness in purely matrix terms}\label{secnew}
The following general version of Theorem \ref{main0} is stated in purely matrix terms. Given integers $a,b\geq 1$, let $\Phi_{a,b}:M_{a\times b}\to M_{a\times b}$ be the nilpotent linear operator defined by
$$
\Phi_{a,b}(X)=J^a(0)X-XJ^b(0).
$$
We will write $\Phi$ instead of $\Phi_{a,b}$ when no confusion is possible.
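For instance, when $a=b=2$ we have $J^2(0)=E_{1,2}$ and
$$
\Phi(E_{2,1})=E_{1,1}-E_{2,2},\quad \Phi^2(E_{2,1})=-2E_{1,2},\quad \Phi^3(E_{2,1})=0,
$$
so $\Phi_{2,2}$ has nilpotency degree $3=a+b-1$, in agreement with \cite[Proposition 2.1]{CPS}.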
\begin{theorem}\label{lidep}
Given a triple $(a,b,c)$ of positive integers and
a pair $(P,Q)$ of matrices such that $P\in M_{a\times b}$, $Q\in M_{b\times c}$, we
define the matrices $P_i$, $Q_i$, $T_{i,j}$ by
$$
P_i=\Phi^i(P),\; Q_i=\Phi^i(Q),\quad i\geq 0,
$$
$$
T_{i,j}=P_iQ_j-P_jQ_i,\quad 0\leq i<j,
$$
and set
$$
n=\max\{a+b-1,b+c-1\}.
$$
Then $P_i=Q_i=0$ for $i\ge n$, and the set $\mathcal{T}=\{T_{i,j}:0\leq i<j\leq n-1\}$ is linearly independent if and only if
exactly one of the following three conditions holds:
$$
P_{a,1}\neq 0, Q_{b,1}\neq 0\text{ and }(a,b,c)\in\{(n,1,n),(n-1,2,n-1),(n,1,n-1),(n-1,1,n)\},
$$
$$
P_{a,1}=0, P_{a-1,1}\neq 0, Q_{b,1}\neq 0\text{ and } (a,b,c)=(n,1,n),
$$
$$
P_{a,1}\neq 0, Q_{b,1}=0, Q_{b,2}\neq 0\text{ and } (a,b,c)=(n,1,n).
$$
\end{theorem}
\begin{proof} The case $n=1$ is obvious, so we assume $n>1$.
It follows from \cite[Proposition 2.2]{CPS} that $P_{i}=Q_{i}=0$ for $i\ge n$.
If $P_{a,1}=0$ and $Q_{b,1}=0$ then \cite[Proposition 2.1]{CPS} implies $P_{n-1}=Q_{n-1}=0$
and thus $\mathcal{T}$ is linearly dependent.
For the remainder of the proof we assume that $P_{a,1}\neq 0$ or $Q_{b,1}\neq 0$. Three cases arise.
\medskip
Case 1: $P_{a,1}\neq 0$ and $Q_{b,1}\neq 0$. By Theorem \ref{main0}, the set $\mathcal{T}$ is
linearly independent if and only if $(a,b,c)\in\{(n,1,n),(n-1,2,n-1),(n,1,n-1),(n-1,1,n)\}$.
\medskip
Case 2: $P_{a,1}=0$ and $Q_{b,1}\neq 0$. Suppose first that $\mathcal{T}$ is linearly independent.
The necessity part of the proof of Theorem \ref{main0} still implies that $(a,b,c)$ belongs to
$\{(n,1,n),(n-1,2,n-1),(n,1,n-1),(n-1,1,n)\}$. We will show that $P_{a-1,1}\neq 0$ and
$(a,b,c)=(n,1,n)$.
Since $P_{a,1}=0$, \cite[Proposition 2.1]{CPS} implies that $P_{n-1}=0$.
If $b+c<n+1$ then $Q_{n-1}=0$, by \cite[Proposition 2.2]{CPS},
so $T_{i,n-1}=0$ for all $0\leq i<n-1$, a contradiction.
Thus $b+c=n+1$. Since $P_{a,1}=0$, every entry of $P_{n-2}$, except
perhaps for its top right entry, is equal to 0. By construction, $Q_{n-1}$ shares this property. Since
$$
T_{n-2,n-1}=P_{n-2}Q_{n-1}-P_{n-1}Q_{n-2}=P_{n-2}Q_{n-1}\neq 0,
$$
we infer $b=1$ and thus $c=n$.
Moreover, if $a<n$ then $b=1$, $P_{a,1}=0$ and \cite[Proposition 2.1]{CPS} imply
$P_{n-2}=0$, so $T_{n-2,n-1}=0$, a contradiction.
Therefore $a=n$. Finally, if $P_{n-1,1}=0$ we obtain again $P_{n-2}=0$. Thus $P_{n-1,1}\neq 0$.
Conversely, suppose $(a,b,c)=(n,1,n)$ and $P_{a-1,1}\neq 0$. By deleting the last row of $P$
and arguing as in Case 1 for $(a',b',c')=(n-1,1,n)$, we obtain that $\mathcal{T}$ is linearly independent.
Case 3: $P_{a,1}\neq 0$ and $Q_{b,1}=0$. This is completely analogous to Case 2.
\end{proof}
\section{Lemmata}\label{secnueva}
Recall the meaning of $\Phi$ given in \S\ref{secnew}.
\begin{lemma}\label{kernel}
Let $Y\in M_{a,b}$. Then $\Phi(Y)=0$ if and only if
\begin{equation}\label{aleb}
Y=\begin{pmatrix}
0 &\cdots &0 & \nu_1& \nu_2 &\cdots & \nu_a \\
0 &\cdots &0 & 0 & \nu_1 &\ddots & \vdots \\
\vdots & &\vdots & \vdots &\vdots &\ddots& \nu_2 \\
0 &\cdots &0 & 0 & 0 &\cdots &\nu_1
\end{pmatrix},\text{ if }a\leq b,
\end{equation}
\begin{equation}\label{blea}
Y=
\begin{pmatrix}
\mu_1& \mu_2 &\cdots & \mu_b \\
0 & \mu_1 &\ddots & \vdots \\
\vdots &\vdots &\ddots& \mu_2 \\
0 & 0 &\cdots &\mu_1 \\
0 & \cdots &\cdots & 0\\
\vdots & \cdots &\cdots &\vdots \\
0 &\cdots &\cdots &0\\
\end{pmatrix},\text{ if }b\leq a
\end{equation}
for some $\mu_i,\nu_i\in F$.
\end{lemma}
\begin{proof} View $M_{a,b}$ as an $\sl(2)$-module as in the proof of \cite[Proposition 2.1]{CPS}.
The nullity of $\Phi$ is the number $m=\min\{a,b\}$ of irreducible $\sl(2)$-submodules of $M_{a,b}$. On the other
hand, if $m=a$ (resp. $m=b$) we readily verify that $Y$ as in (\ref{aleb}) (resp. (\ref{blea})) satisfies $\Phi(Y)=0$.
\end{proof}
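For example, when $(a,b)=(2,3)$, formula (\ref{aleb}) says that
$$
\ker\Phi=\left\{\left(
\begin{array}{ccc}
0 & \nu_1 & \nu_2 \\
0 & 0 & \nu_1 \\
\end{array}
\right)\,:\,\nu_1,\nu_2\in F\right\},
$$
a $2$-dimensional space, matching the nullity $\min\{a,b\}=2$ computed in the proof.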
We say that $X\in M_{a,b}$ is a \emph{lowest matrix} if $X_{a,1}=1$.
\begin{lemma}\label{lemma1}
Let $X_1\in M_{a,b_1}$, $X_2\in M_{b_2,c}$, $Y_1\in M_{b_1,c}$ and $Y_2\in M_{a,b_2}$.
Assume that $X_1$ and $X_2$ are lowest matrices, that
$$
(Y_1,Y_2)\ne(0,0),\;\Phi(Y_1)=0,\Phi(Y_2)=0,
$$
and set
\[
Z=X_1Y_1-Y_2X_2.
\]
If $Z=0$ then $a\le b_2$, $c\le b_1$ and
\begin{equation}\label{eq.Z_0}
Y_2=\begin{pmatrix}
0 &\cdots &0 & \nu_1& \nu_2 &\cdots & \nu_a \\
0 &\cdots &0 & 0 & \nu_1 &\ddots & \vdots \\
\vdots & &\vdots & \vdots &\vdots &\ddots& \nu_2 \\
0 &\cdots &0 & 0 & 0 &\cdots &\nu_1
\end{pmatrix},\qquad
Y_1=
\begin{pmatrix}
\mu_1& \mu_2 &\cdots & \mu_c \\
0 & \mu_1 &\ddots & \vdots \\
\vdots &\vdots &\ddots& \mu_2 \\
0 & 0 &\cdots &\mu_1 \\
0 & \cdots &\cdots & 0\\
\vdots & \cdots &\cdots &\vdots \\
0 &\cdots &\cdots &0\\
\end{pmatrix},
\end{equation}
with $\mu_1=\nu_1\ne0$.
\end{lemma}
\begin{proof} If $Y_1\neq 0$, let $C_i$, $1\leq i\leq c$, be the first column of $Y_1$ that is non-zero.
By Lemma \ref{kernel}, we have
$$
C_i=\left(
\begin{array}{c}
\mu \\
0 \\
\vdots \\
0 \\
\end{array}
\right),\quad \mu\neq 0.
$$
Since $X_1$ is a lowest matrix, it follows that column $i$ of $X_1Y_1$ is equal to
$$
\left(
\begin{array}{c}
* \\
\vdots \\
*\\
\mu \\
\end{array}
\right),\quad \mu\neq 0.
$$
If $Y_2\neq 0$, let $R_j$, $1\leq j\leq b_2$, be the last row of $Y_2$ that is non-zero.
By Lemma~\ref{kernel}, we have
$$
R_j=(0,\dots,0,\nu),\quad\nu\neq 0.
$$
Since $X_2$ is a lowest matrix, it follows that row $j$ of $Y_2X_2$ is equal to
$$
(\nu,*,\dots,*),\quad\nu\neq 0.
$$
Since $(Y_1,Y_2)\neq 0$ and $Z=0$, we infer from the above that $Y_1\neq 0$ and $Y_2\neq 0$. If either
$a>b_2$ or $Y_2$ does not have full rank, then Lemma \ref{kernel} implies that the last row of $Y_2$ is 0, so by the above $Z_{a,i}=\mu$, a contradiction.
Similarly, if either $c>b_1$ or $Y_1$ does not have full rank, then Lemma \ref{kernel} implies that the first column of $Y_1$ is~0, so by the above $Z_{j,1}=-\nu$, a contradiction. Thus $a\le b_2$, $c\le b_1$ and, by Lemma \ref{kernel}, $Y_1$ and $Y_2$ are as described in (\ref{eq.Z_0})
with $\mu_1\neq 0$, $\nu_1\neq 0$. Since $Z_{a,1}=0$, we infer $\mu_1=\nu_1$.
\end{proof}
Given integers $a,b\geq 1$ and $\alpha\in F$ we consider matrices $f(\alpha),g(\alpha),h(\alpha)\in M_{a,b}$ of respective forms
$$
\left(\begin{array}{cccc}
0 & \dots & 0 & \alpha \\
0 & \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \dots & 0 & 0\\
\end{array}
\right), \left(\begin{array}{cccc}
0 & \dots & 0 & * \\
\vdots & \vdots & \vdots & \vdots \\
0 & \dots & 0 & * \\
0 & \dots & 0 & \alpha\\
\end{array}
\right),\left(\begin{array}{cccc}
\alpha & * & \dots & * \\
0 & \dots & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & \dots & \dots & 0\\
\end{array}
\right),
$$
where the entries $*$ will play no role whatsoever.
\begin{prop}\label{reduccion}
Given $\al\in F$ and a sequence $(d_1,d_2,d_3,d_4)$ of positive integers, let $\h$ be
the subalgebra of $\gl(d)$, $d=d_1+d_2+d_3+d_4$, generated by $A$ and $X$, where
-- $A\in\gl(d)$ is the $0$-diagonal block matrix
\[
A= J^{d_1}(\alpha)\oplus J^{d_2}(\alpha-\lambda)\oplus J^{d_3}(\alpha-2\lambda)\oplus J^{d_4}(\alpha-3\lambda),
\]
-- $X\in\gl(d)$ is a $1$-diagonal block matrix whose blocks (1,2), (2,3), (3,4) satisfy
\[
X(1,2)_{d_1,1}=X(2,3)_{d_2,1}=X(3,4)_{d_3,1}=1.
\]
Then $Y(1,4)=0$ for all $Y\in \h$ if and only if $(d_1,d_2,d_3,d_4)=(1,1,1,1)$.
\end{prop}
\begin{proof} {\sc Sufficiency.} Suppose $(d_1,d_2,d_3,d_4)=(1,1,1,1)$. Then $[A,X]=\la X$, so $Y(1,3)=Y(2,4)=Y(1,4)=0$ for all $Y\in\h$.
\smallskip
\noindent {\sc Necessity.} Suppose $Y(1,4)=0$ for all $Y\in\h$. Given $(i,j)$, $1\leq i<j\leq 4$, we set
$$
D_{i,j}=(-1)^{d_j-1}{{d_i+d_j-2}\choose{d_i-1}}.
$$
Let
$$m=\max\{d_1+d_2,d_2+d_3,d_3+d_4\}\text{ and }Z=(\ad_{\gl(d)} A-\la 1_{\gl(d)})^{m-2}(X)\in\h.
$$
Then $Z$ is a 1-diagonal block matrix, where
$$
Z(1,2)=\delta_{m,d_1+d_2}f(D_{1,2}),\; Z(2,3)=\delta_{m,d_2+d_3}f(D_{2,3}),\;Z(3,4)=\delta_{m,d_3+d_4}f(D_{3,4}).
$$
Set $U=[X,Z]$. Then $U$ is a 2-diagonal block matrix, where
$$
U(1,3)=\delta_{m,d_2+d_3}g(D_{2,3})-\delta_{m,d_1+d_2}h(D_{1,2}),
$$
$$
U(2,4)=\delta_{m,d_3+d_4}g(D_{3,4})-\delta_{m,d_2+d_3}h(D_{2,3}).
$$
Note that $U=0$ if and only if $(d_1,d_2,d_3,d_4)=(1,1,1,1)$. Suppose, if possible, that $(d_1,d_2,d_3,d_4)\neq (1,1,1,1)$.
Choose $k$ as large as possible such that $V=(\ad_{\gl(d)} A-\la 1_{\gl(d)})^k(U)\neq 0$. By hypothesis, $[X,V]=0$,
so Lemma \ref{lemma1} implies $\mathrm{rank}\,V(1,3)=d_1\le d_3$ and $\mathrm{rank}\,V(2,4)=d_4\le d_2$ (*). Several cases arise:
\smallskip
\noindent{\it Case 1.} $d_1+d_2=d_2+d_3>d_3+d_4$. We have $d_1=d_3$,
$d_4=\mathrm{rank}\,V(2,4)=1$ and $d_1=\mathrm{rank}\,V(1,3)\leq 2$.
From $d_4=1$ we infer $V=U$. Whether $d_1=1$ or $d_1=2$, we readily see
that the condition $\mu_1=\nu_1$ from Lemma \ref{lemma1} is violated.
\smallskip
\noindent{\it Case 1'.} $d_3+d_4=d_2+d_3>d_1+d_2$. This is dual to Case 1, and hence impossible.
\medskip
\noindent{\it Case 2.} $d_1+d_2=d_3+d_4>d_2+d_3$. Then $d_1>d_3$ and $d_4>d_2$, contradicting (*).
\smallskip
\noindent{\it Case 3.} $d_1+d_2>d_2+d_3,d_3+d_4$. Then $d_1>d_3$, contradicting (*).
\smallskip
\noindent{\it Case 3'.} $d_3+d_4>d_2+d_3,d_1+d_2$. Then $d_4>d_2$, contradicting (*).
\smallskip
\noindent{\it Case 4.} $d_2+d_3>d_1+d_2,d_3+d_4$. In this case, $d_1=\mathrm{rank}\, V(1,3)=1$
and $d_4=\mathrm{rank}\,V(2,4)=1$, whence $V=U$.
We readily see that the condition $\mu_1=\nu_1$ from Lemma \ref{lemma1} is violated.
\smallskip
\noindent{\it Case 5.} $d_1+d_2=d_2+d_3=d_3+d_4$. We have $d_3=d_1=\mathrm{rank}\, V(1,3)\leq 2$ as well as $d_2=d_4=\mathrm{rank}\,V(2,4)\leq 2$. If $(d_1,d_2)=(2,2)$ then $k=1$ and thus $\mathrm{rank}\, V(1,3)=1=\mathrm{rank}\,V(2,4)$, contradicting (*). Whether $(d_1,d_2)=(2,1)$ or $(d_1,d_2)=(1,2)$, we have $V=U$, and we readily see that the condition $\mu_1=\nu_1$ from Lemma \ref{lemma1} is violated.
\end{proof}
\section{Classifying the relatively faithful uniserial representations of $\g$}\label{sec3}
We assume throughout this section that $\g=\langle x\rangle \ltimes L(V)$, where $x$ acts on $V$ via a single lower Jordan
block $J_n(\la)$, $n>1$, relative to a basis $v_0,\dots,v_{n-1}$ of $V$.
\begin{prop}\label{ensu} Suppose $\lambda\neq 0$ and let $T:\g\to\gl(U)$ be a relatively faithful uniserial
representation of dimension $d$.
Then there is a basis $\B$ of $U$, an integer $\ell>2$, a sequence of positive integers $(d_1,\dots,d_\ell)$
satisfying $d_1+\cdots+d_\ell=d$, and a scalar $\al\in F$,
such that the matrix representation $R:\g\to\gl(d)$ associated to $\B$ is normalized standard relative
to $(\ell, (d_1,\dots,d_\ell),\al)$.
\end{prop}
\begin{proof} Noting that $[\g,\g]=V\oplus \Lambda^2(V)$ and $[[\g,\g],[\g,\g]]=\Lambda^2(V)$,
the proof of \cite[Theorem 3.2]{CPS} applies almost verbatim to yield the desired result.
\end{proof}
\begin{theorem}\label{main1} Suppose $\lambda\neq 0$. Then $\g$ has no relatively
faithful uniserial representations of length $>3$.
\end{theorem}
\begin{proof} Let $T:\g\to\gl(U)$ be a relatively faithful uniserial representation of dimension $d$. By Proposition \ref{ensu},
there is a basis $\B$ of $U$, an integer $\ell>2$, a sequence of positive integers $(d_1,\dots,d_\ell)$
satisfying $d_1+\cdots+d_\ell=d$,
and a scalar $\al\in F$ such that the matrix representation $R:\g\to\gl(d)$ associated to $\B$
is normalized standard relative to $(\ell, (d_1,\dots,d_\ell),\al)$.
Suppose, if possible, that $\ell>3$. By Lemma \ref{funk}, there is some $i$ such that $d_i+d_{i+1}=n+1$.
Since $\ell>3$, we may consider the representation of $\g$, say $S$, obtained from $R$ by choosing any set of four
contiguous indices taken from $\{1,\dots,\ell\}$ including $i$ and $i+1$. Then $\ker(S)\cap V=(0)$ by Lemma \ref{funk}.
Moreover, $\Lambda^2(V)$ is not contained in $\ker(S)$ because $S$ involves a non-zero 2-diagonal block matrix, as indicated in the proof of Proposition \ref{reduccion}.
We may thus assume without loss of generality that $\ell=4$ and $(d_1,d_2,d_3,d_4)\neq (1,1,1,1)$.
Since $R$ is a representation and $\Lambda^2 V$ commutes with $V$,
it follows from the shape of the matrices in $R(\g)$ that block $(1,4)$
of $R(y)$ is zero for all $y\in\g$, which contradicts Proposition \ref{reduccion}.
\end{proof}
\begin{theorem}\label{main2} Suppose $\lambda\neq 0$. Then every
relatively faithful uniserial representation of $\g$ is isomorphic to one and
only one normalized representation $R_{a,b,c,M,N,\al}$ of non-extreme type.
\end{theorem}
\begin{proof} Let $T:\g\to\gl(U)$ be a relatively faithful uniserial representation of dimension $d$. By Proposition \ref{ensu}, there
is a basis $\B$ of $U$, an integer $\ell>2$, a sequence of positive integers $(d_1,\dots,d_\ell)$ satisfying
$d_1+\cdots+d_\ell=d$, and a scalar $\al\in F$ such that the matrix representation $R:\g\to\gl(d)$ associated
to $\B$ is normalized standard relative to $(\ell, (d_1,\dots,d_\ell),\al)$.
Theorem \ref{main1} gives $\ell=3$.
Set $(a,b,c)=(d_1,d_2,d_3)$. We have $a+b\leq n+1$ and $b+c\leq n+1$, with equality holding in at least one case, by Lemma \ref{funk}.
Thus $a+b=n+1$ and $c\leq a$, or $b+c=n+1$ and $a\leq c$. It follows that $R$ is isomorphic to $R_{a,b,c,M,N,\al}$, where $M$ and $N$
are the blocks in the first superdiagonal of $R(v_0)$, and $R_{a,b,c,M,N,\al}$ is of non-extreme type by Proposition \ref{normal2}.
Uniqueness follows from Proposition \ref{normal2}.
\end{proof}
\section{Further cases}
We assume throughout this section that $\g=\langle x\rangle \ltimes L(V)$, where $x\in\GL(V)$.
When the Jordan decomposition of $x$ acting on $V$ has more than one block, other representations are possible. As an illustration, let $m,n\geq 1$, let $\la,\mu\in F$ (we allow the case $\la=\mu$), and suppose $v_0,\dots,v_{n-1},w_0,\dots,w_{m-1}$ is a basis of $V$ relative to which
$$
[x,v_0]=\la v_0+v_1, [x,v_1]=\la v_1+v_2,\dots, [x,v_{n-1}]=\la v_{n-1},
$$
$$
[x,w_0]=\mu w_0+w_1, [x,w_1]=\mu w_1+w_2,\dots, [x,w_{m-1}]=\mu w_{m-1}.
$$
Let $(a,b,c)$ be a triple of positive integers satisfying
$$a+b=n+1,\; b+c=m+1,
$$
suppose $M\in M_{a\times b}$ and $N\in M_{b\times c}$ satisfy $M_{a,1}\neq 0$ and $N_{b,1}\neq 0$, and let $\al\in F$.
We may then define the uniserial representation $S=S_{a,b,c,M,N,\al}:\g\to\gl(d)$, $d=a+b+c$, as follows:
$$
S(x)=A=\left(
\begin{array}{ccc}
J^a(\al) & 0 & 0 \\
0 & J^b(\al-\la) & 0 \\
0 & 0 & J^c(\al-\la-\mu) \\
\end{array}
\right),
$$
$$
S(v_k)=(\ad_{\gl(d)} A-\la 1_{\gl(d)})^k \left(
\begin{array}{ccc}
0 & M & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right),\quad 0\leq k\leq n-1,
$$
$$
S(w_k)=(\ad_{\gl(d)} A-\mu 1_{\gl(d)})^k \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & N \\
0 & 0 & 0 \\
\end{array}
\right),\quad 0\leq k\leq m-1.
$$
The fact that $a+b=n+1$ and $b+c=m+1$, together with \cite[Proposition 2.2]{CPS}, ensures that $\ker(S)\cap V=(0)$. Moreover,
since $S(v_0\wedge w_{b-1})\neq 0$, it follows that $\Lambda^2(V)$ is not contained in $\ker(S)$. Thus, $S$ is relatively faithful.
We may imbed $\g$ as a subalgebra of $\g'=\langle x'\rangle\ltimes L(V')$, where $x'$ has Jordan decomposition
$$
J_{n_1}(\la)\oplus\cdots\oplus J_{n_e}(\la)\oplus J_{m_1}(\mu)\oplus\cdots\oplus J_{m_f}(\mu),
$$
where
$$
n=n_1\geq\dots \geq n_e,\quad m=m_1\geq \dots \geq m_f,
$$
$$
n_2\leq n-2,\; n_3\leq n-4,\; n_4\leq n-6,\dots,\; n_e\leq n-2(e-1),
$$
$$
m_2\leq m-2,\; m_3\leq m-4,\; m_4\leq m-6,\dots,\; m_f\leq m-2(f-1),
$$
$$
e\leq \min\{a,b\},\; f\leq \min\{b,c\}.
$$
Then \cite[Theorem 4.1]{CPS} ensures that we may extend the above representation $S$ of $\g$
to a uniserial representation $S'$ of $\g'$ in such a way that we still have $\ker(S')\cap V'=(0)$.
Since $\Lambda^2(V)$ is not contained in $\ker(S)$, it follows automatically that $\Lambda^2(V')$ is not contained in $\ker(S')$.
Thus, $S'$ is also relatively faithful.
If $n>1$ (resp. $m>1$) then $S$ (and therefore $S'$) is not faithful, as all wedges $v_i\wedge v_j$ (resp. $w_i\wedge w_j$)
are in the kernel of $S$.
The case $n=1$ and $m=1$ leads to the representation $S_{\al}:\g\to\gl(3)$, given by
$$
x\mapsto \left(
\begin{array}{ccc}
\alpha & 0 & 0 \\
0 & \alpha-\la & 0 \\
0 & 0 & \alpha-\la-\mu \\
\end{array}
\right),
$$
$$
v_0\mapsto \left(
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right),
w_0\mapsto \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0 \\
\end{array}
\right), v_0\wedge w_0 \mapsto \left(
\begin{array}{ccc}
0 & 0 & 1 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right).
$$
This is a faithful uniserial representation.
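As a quick numerical sanity check (ours, not part of the paper's argument), the bracket relations defining $S_\al$ in the case $n=m=1$, namely $[x,v_0]=\la v_0$, $[x,w_0]=\mu w_0$ and $[v_0,w_0]=v_0\wedge w_0$, can be verified directly on the matrices above. The helper functions and the sample values of $\al,\la,\mu$ are our own choices.

```python
# Sanity check (ours) of the bracket relations for the 3-dimensional
# representation S_alpha above, with n = m = 1.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    n = len(A)
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

def scale(c, A):
    return [[c * entry for entry in row] for row in A]

al, la, mu = 5, 2, 3  # arbitrary sample parameters

Sx  = [[al, 0, 0], [0, al - la, 0], [0, 0, al - la - mu]]
Sv0 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
Sw0 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
Svw = [[0, 0, 1], [0, 0, 0], [0, 0, 0]]  # image of v_0 wedge w_0

assert bracket(Sx, Sv0) == scale(la, Sv0)       # [x, v_0] = la v_0
assert bracket(Sx, Sw0) == scale(mu, Sw0)       # [x, w_0] = mu w_0
assert bracket(Sv0, Sw0) == Svw                 # [v_0, w_0] = v_0 ^ w_0
assert bracket(Sx, Svw) == scale(la + mu, Svw)  # [x, v_0 ^ w_0]
```

The last assertion reflects that $[x, v\wedge w]=[x,v]\wedge w + v\wedge [x,w]=(\la+\mu)\,v\wedge w$ here.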
Suppose next that $x$ acts diagonalizably on $V$, as in the preceding example.
Depending on the nature of the eigenvalues of $x$, there may be other examples of relatively faithful uniserial representations.
Indeed, let $\g=\langle x\rangle\ltimes L(V)$,
where $n>1$, $\la\in F$ and $v_1,\dots,v_n$ is a basis of $V$ such that $$[x,v_1]=i_1\la v_1,\;[x,v_2]=i_2\la v_2,\dots,[x,v_n]=i_n\la v_n,$$
for positive integers $1=i_1<i_2<\dots<i_n$. Setting $p=i_n+2$ and $J=J^p(0)$, we may then define the uniserial representation $T:\g\to\gl(p)$,
as follows:
$$
x\mapsto \mathrm{diag}(\al,\al-\la,\dots,\al-(p-1)\la),
$$
$$
v_1\mapsto J^{i_1},\; v_2\mapsto J^{i_2},\dots,v_{n-1}\mapsto J^{i_{n-1}}, v_n\mapsto \be E^{1,p-1}+\ga E^{2,p}.
$$
Here we require $\be\neq\ga$ to ensure that $\Lambda^2(V)$ is not contained in $\ker(T)$. Since $\ker(T)\cap V=(0)$,
it follows that $T$ is relatively faithful. Note that $T$ is only faithful when $n=2$.
% Metadata: arXiv:1703.06214, ``Free 2-step nilpotent Lie algebras and indecomposable modules'' (math.RT).
% arXiv:1411.6606 --- ``A Baxter class of a different kind, and other bijective results using tableau sequences ending with a row shape''
\begin{abstract}
Tableau sequences of bounded height have been central to the analysis of $k$-noncrossing set partitions and matchings. We show here that families of sequences that end with a row shape are particularly compelling and lead to some interesting connections. First, we prove that hesitating tableaux of height at most two ending with a row shape are counted by Baxter numbers. This permits us to define three new Baxter classes which, remarkably, do not obviously possess the antipodal symmetry of other known Baxter classes. We then conjecture that oscillating tableaux of height bounded by $k$ ending in a row are in bijection with Young tableaux of height bounded by $2k$. We prove this conjecture for $k$ at most eight by a generating function analysis. Many of our proofs are analytic in nature, so there are intriguing combinatorial bijections to be found.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
The counting sequence for Baxter permutations, whose elements
\[
B_n=\sum_{k=1}^n\frac{\binom{n+1}{k-1}\binom{n+1}{k}\binom{n+1}{k+1}}{\binom{n+1}{1}\binom{n+1}{2}}
\] are known as Baxter
numbers, is a fascinating combinatorial entity, enumerating a
diverse selection of combinatorial classes and resurfacing in many
contexts which do not \emph{a priori\/} appear connected. The recent
comprehensive survey of Felsner, Fusy, Noy and Orden~\cite{FeFuNoOr11}
finds many structural commonalities among these seemingly diverse
families of objects.
Here we describe three new combinatorial classes which are enumerated
by Baxter numbers. These classes have combinatorial bijections between
them, but {\bf do not} share many of the properties of the other known
Baxter classes aside from a natural Catalan subclass. Rather, they
connect to the restricted arc diagram family of objects, whose study
was launched by Chen, Deng, Du, Stanley and Yan~\cite{Chetal07}. We
have discovered a bridge between classic Baxter objects and sequences
of tableaux, and consequently walks in Weyl chambers. The main focus
of this article is tableau sequences that end in a (possibly empty)
row shape: as these correspond to a family of lattice walks that end
on a boundary, we are able to deduce enumerative results and find
surprising connections to other known combinatorial
structures. Another consequence of this construction is a new
generating tree description for Baxter numbers.
Our main results are Theorem~\ref{thm:Main} and
Theorem~\ref{thm:Baxter}. They use the integer lattice region $W_k=\{(x_1,
\dots, x_k): x_1>x_2>\dots>x_k>0,\ x_i\in\mathbb{Z}\}$. All of the other
combinatorial classes referenced in the statement are defined in the
next section.
\begin{theorem}
\label{thm:Main}
The following classes are in bijection for all nonnegative integers
$m, n$ and $k$:
\begin{enumerate}
\item $\mathcal{H}^{(k)}_{n,m}$: the set of hesitating tableaux of length
$2n$, with maximum height bounded by~$k$, starting from $\emptyset$, ending in a strip of length
$m$;
\item $\mathcal{L}^{(k)}_{n,m}$: the set of $W_k$-hesitating lattice walks of
length $2n$, starting at $\delta=(k, k-1, \dots, 1)$ ending at
$(m+k, k-1, \dots, 1)$;
\item $\Omega^{(k)}_{n,m}$: open partition diagrams of length $n$
with~$m$ open arcs, with no enhanced $(k+1)$-nesting nor future enhanced~$(k+1)$-nesting.
\end{enumerate}
\end{theorem}
The equivalence (1) $\equiv$ (2) follows almost immediately from the
proof of Theorem~3.6 of~\cite{Chetal07}. We discuss it in
Section~\ref{sec:bijections}. We then prove the equivalence (1)
$\equiv$ (3), by providing an explicit bijection. It is a slight
modification of the bijection $\overline{\phi}$ used in the proof of
Theorem~4.3 of~\cite{Chetal07}.
It is well known that both $\mathcal{L}^{(1)}_{n,0}$ and
$\Omega^{(1)}_{n,0}$ are of cardinality $C_{n}$, the Catalan number of index $n$.
Furthermore, we have the following theorem, proved in Section~\ref{sec:Baxter}.
\begin{theorem}
\label{thm:Baxter}
Let $\ell^{(k)}_{m}(n)=|\mathcal{L}^{(k)}_{n,m}|$. The total number of walks in
$\mathcal{L}^{(2)}_{n,m}$, summed over all possible values of $m$, is equal to the Baxter number of index
$n+1$. That is, \[\sum_{m=0}^n \ell^{(2)}_{m}(n)=B_{n+1}.\]
\end{theorem}
The related class of \emph{oscillating} tableaux that end in a row also gives
rise to some interesting results.
\begin{theorem}
The following classes are in bijection:
\begin{enumerate}
\item Oscillating tableaux of length~$n$, maximum height bounded by~$k$ ending in
a strip of length $m$;
\item The set of $W_k$-oscillating lattice walks of length $n$ ending
at $(m+k, k-1, \dots, 1)$;
\item Open matching diagrams of length $n$ with no~$(k+1)$-nesting, nor future~$(k+1)$-nesting, with $m$ open arcs.
\end{enumerate}
\label{thm:Main2}
\end{theorem}
The proof of this theorem is analogous to the proof of the
previous theorem, and we do not present it here. Rather, we are highly
intrigued by a fourth class that also appears to be in bijection.
\begin{conjecture}
The set of $W_k$-oscillating lattice walks of length $n$ ending
at the boundary $\{(m+k, k-1, \dots, 1 ): m\geq 0\}$ is in bijection
with standard Young tableaux of size $n$, with height bounded by~$2k$.
\label{thm:conjecture}
\end{conjecture}
As far as we can tell, this was first conjectured by
Burrill~\cite{Burr14}. Using the lattice path characterization, we
access generating function expressions, and prove the conjecture
for $k\leq 8$. We discuss the strong evidence for Conjecture~\ref{thm:conjecture}
in Section~\ref{sec:Conjecture}.
\section{The Combinatorial Classes}
\label{sec:combclasses}
We now describe the combinatorial classes mentioned in the two main
theorems.
\subsection{Tableaux families}
There are three tableaux families that we consider.
\begin{description}
\item[oscillating tableau] This is a sequence of Ferrers diagrams such
that at every stage a box is either added, or deleted. The sequences
start from an empty shape, and have a specified ending shape. The
size is the length of the sequence.
\item[standard Young tableau] This is an oscillating tableau where
boxes are only added and never removed.
\item[hesitating tableau] This is a variant of the oscillating
tableaux. They are even length sequences of Ferrers diagrams that
start from an empty tableaux. The sequence is composed of pairs of moves
of the form: (i) do nothing then add a
box; (ii) remove a box then do nothing; or (iii) add a box
then remove a box.
\end{description}
In each case, if no diagram in the sequence is of height $k+1$, we say that the
tableau has height bounded by $k$. This is how we arrive
at the set $\mathcal{H}^{(k)}_{n,m}$ of hesitating tableaux of
length $2n$, with height bounded by~$k$, ending in the
shape~$(m)$\footnote{Here, $(0)\equiv\emptyset$}.
\subsection{Lattice walks}
We focus on two different lattice path families. We are interested in walks
in the region\footnote{Note that this model is a reparametrization of the
region considered in~\cite{Chetal07}. We slightly shift the notation of
\cite{BoXi05}.}~$W_k=\{(x_1,x_2, \dots,
x_k): x_i \in \mathbb{Z}, x_1>x_2>\dots>x_k> 0 \}$ starting at the
point $\delta=(k, k-1, \dots, 1)$. Let $e_i$ be the elementary vector
with a 1 at position $i$ and 0 elsewhere. We permit steps that do
nothing, which we call ``stay steps''.
The class of \emph{$W_k$-oscillating walks\/} starts at $\delta$ and
takes steps of type $e_i$ or $-e_i$, for $1\leq i\leq k$. We define
$\mathcal{O}^{(k)}_{n,m}$ to be the set of oscillating walks in $W_k$ of
length $n$, ending at the point $me_1+\delta$.
A \emph{$W_k$-hesitating walk\/} is of even length, and steps occur in the
following pairs: (i) a stay step followed by an $e_i$ step; (ii) a
$-e_i$ step followed by a stay step; (iii) an $e_i$ step followed by a $-e_j$ step.
We focus on~$\mathcal{L}^{(k)}_{n,m}$: the set of hesitating lattice walks of
length $2n$ in $W_k$, starting at $(k, k-1, \dots, 1)$ and ending at $(m+k, k-1, \dots,1)$.
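The small cases can be checked by brute force. The following sketch (ours; helper names are our own choices) enumerates $W_2$-hesitating walks by tracking endpoint multiplicities, requiring every intermediate point to lie in $W_2$, and sums over all ending points $(m+2,1)$; the totals match $B_{n+1}$, as asserted in Theorem~\ref{thm:Baxter}.

```python
# Brute-force count of W_2-hesitating walks of length 2n from (2,1)
# back to the axis {(m+2, 1) : m >= 0}, summed over m (a check, ours).
from itertools import product

STAY = (0, 0)
E = [(1, 0), (0, 1)]

def step_pairs():
    pairs = []
    for e in E:
        pairs.append((STAY, e))               # (i) stay, then +e_i
        pairs.append(((-e[0], -e[1]), STAY))  # (ii) -e_i, then stay
    for e, f in product(E, E):                # (iii) +e_i, then -e_j
        pairs.append((e, (-f[0], -f[1])))
    return pairs

def in_region(p):          # W_2 = { x1 > x2 > 0 }
    return p[0] > p[1] > 0

def count_hesitating(n):
    walks = {(2, 1): 1}
    for _ in range(n):
        nxt = {}
        for p, c in walks.items():
            for s1, s2 in step_pairs():
                q = (p[0] + s1[0], p[1] + s1[1])
                if not in_region(q):
                    continue
                r = (q[0] + s2[0], q[1] + s2[1])
                if in_region(r):
                    nxt[r] = nxt.get(r, 0) + c
        walks = nxt
    return sum(c for p, c in walks.items() if p[1] == 1)

print([count_hesitating(n) for n in (1, 2, 3, 4)])  # -> [2, 6, 22, 92]
```

For $n=2$ this recovers the six objects listed in Table~\ref{fig:allsizetwo}.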
\subsection{Open arc diagrams}
Matchings and set partitions are two combinatorial classes that have
natural representations using arc diagrams. In the arc diagram
representation of a set partition of~$\{1, 2, \ldots, n\}$, a row of increasing
vertices is labelled from~$1$ to~$n$. A partition block~$\{a_1, a_2, \ldots, a_j\}$,
ordered~$a_1<a_2<\ldots<a_j$, is represented by the arcs~$(a_1,a_2), (a_2,a_3), \ldots,(a_{j-1}, a_j)$ which are always drawn above the
row of vertices. The set partition~$\pi=\{\{1,3,7\},\{2,8\},\{4\},\{5,6\}\}$ is depicted
as an arc diagram in Figure~\ref{fig:expartition}. Matchings are
represented similarly, with each pair contributing an arc.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=0.4]
\foreach \x in {1,2,...,8}{
\node[pnt, label=below:{$\x$}](\x) at (\x,0){};}
\draw[bend left=45](1) to (3) to (7);
\draw[bend left=45](2) to (8);
\draw[bend left=45](5) to (6);
\end{tikzpicture}
\caption{The set partition~$\pi=\{\{1,3,7\},\{2,8\},\{4\},\{5,6\}\}$}
\label{fig:expartition}
\end{figure}
A set of $k$ arcs $(i_1, j_1), \dots, (i_k, j_k)$ form a
\emph{$k$-nesting} if $i_1<i_2<\dots <i_k<j_k<\dots<j_2<j_1$. They
form an \emph{enhanced $k$-nesting} if $i_1<i_2<\dots
<i_k\leq j_k<\dots<j_2<j_1$. They form a \emph{$k$-crossing} if
$i_1<i_2<\dots<i_k<j_1<j_2<\dots<j_k$. Figure~\ref{fig:3nest}
illustrates a $3$-nesting, an enhanced $3$-nesting, and a
$3$-crossing. A partition is $k$-nonnesting if it does not contain any
collection of edges that form a $k$-nesting.
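These pattern definitions translate directly into a brute-force test over $k$-subsets of arcs (a sketch, ours; the function names are not from the paper).

```python
# Brute-force detection of k-nestings, enhanced k-nestings and
# k-crossings in a list of arcs (i, j), following the definitions above.
from itertools import combinations

def has_k_nesting(arcs, k, enhanced=False):
    """True if some k arcs satisfy i_1 < ... < i_k < j_k < ... < j_1
    (with i_k <= j_k instead of i_k < j_k when enhanced=True)."""
    for sub in combinations(sorted(arcs), k):
        i_s = [a for a, b in sub]
        j_s = [b for a, b in sub]
        inner = i_s[-1] <= j_s[-1] if enhanced else i_s[-1] < j_s[-1]
        if (inner and all(i_s[t] < i_s[t + 1] for t in range(k - 1))
                and all(j_s[t] > j_s[t + 1] for t in range(k - 1))):
            return True
    return False

def has_k_crossing(arcs, k):
    """True if some k arcs satisfy i_1 < ... < i_k < j_1 < ... < j_k."""
    for sub in combinations(sorted(arcs), k):
        i_s = [a for a, b in sub]
        j_s = [b for a, b in sub]
        if (i_s[-1] < j_s[0]
                and all(i_s[t] < i_s[t + 1] for t in range(k - 1))
                and all(j_s[t] < j_s[t + 1] for t in range(k - 1))):
            return True
    return False

# Arcs of the set partition {{1,3,7},{2,8},{4},{5,6}} of Figure 1:
arcs = [(1, 3), (3, 7), (2, 8), (5, 6)]
assert has_k_nesting(arcs, 3)       # (2,8), (3,7), (5,6)
assert not has_k_nesting(arcs, 4)
assert has_k_crossing(arcs, 2)      # (1,3) and (2,8)
assert has_k_nesting([(1, 4), (2, 2)], 2, enhanced=True)  # i_2 = j_2
```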
\begin{figure}[h!]
\center
\begin{tikzpicture}[scale=0.5]
\node[pnt] at (0,0)(1){};
\node[pnt] at (1,0)(2){};
\node[pnt] at (2,0)(3){};
\node[pnt] at (3,0)(4){};
\node[pnt] at (4,0)(5){};
\node[pnt] at (5,0)(6){};
\node[pnt] at (7,0)(1c){};
\node[pnt] at (8,0)(2c){};
\node[pnt] at (9,0)(3c){};
\node[pnt] at (10,0)(4c){};
\node[pnt] at (11,0)(5c){};
\node[pnt] at (13,0)(1b){};
\node[pnt] at (14,0)(2b){};
\node[pnt] at (15,0)(3b){};
\node[pnt] at (16,0)(4b){};
\node[pnt] at (17,0)(5b){};
\node[pnt] at (18,0)(6b){};
\node[pnt] at (20,0)(1d){};
\node[pnt] at (21,0)(2d){};
\node[pnt] at (22,0)(3d){};
\node[pnt] at (23,0)(4d){};
\draw(1) to [bend left=45] (6);
\draw(2) to [bend left=45] (5);
\draw(3) to [bend left=45] (4);
\draw(1c) to [bend left=45] (5c);
\draw(2c) to [bend left=45] (4c);
\draw(1b) to [bend left=45] (4b);
\draw(2b) to [bend left=45] (5b);
\draw(3b) to [bend left=45] (6b);
\draw(1d) to [bend left=45] (24,0.5);
\draw(2d) to [bend left=45] (4d);
\end{tikzpicture}
\caption{A $3$-nesting, an enhanced $3$-nesting, a $3$-crossing, and a
future enhanced $3$-nesting}
\label{fig:3nest}
\end{figure}
Recently, Burrill, Elizalde, Mishna and Yen~\cite{Buetal12}
generalized arc diagrams by permitting semi-arcs: in the diagrams,
each arc must have a left endpoint, but not necessarily a right
endpoint. The notion of $k$-nesting is generalized to
a~\emph{future~$k$-nesting}, which is a pattern with an open arc to
the left of a~$(k-1)$-nesting, and a future enhanced $k$-nesting is
defined similarly. An example is given in
Figure~\ref{fig:3nest}. Burrill~\emph{et al.}~\cite{Buetal12}
conjectured that open partition diagrams with no future enhanced
3-nestings, nor enhanced 3-nestings were counted by Baxter numbers.
\section{Bijections}
\label{sec:bijections}
We now give the bijections to prove Theorem~\ref{thm:Main}.
Table~\ref{fig:allsizetwo} illustrates the different mappings in the case of $n=2$,
$k=2$.
\begin{table}[h!]
\center
\begin{tabular}{l||llll}
& {\bf Tableaux} $\mathcal{H}^{(2)}_{2,m}$
& {\bf Walks} $\mathcal{L}^{(2)}_{2,m}$
& {\bf Open partitions} $\Omega^{(2)}_{2,m}$&\\[2mm]
$m=0$
&$\emptyset\, \square\, \emptyset\, \square\, \emptyset $ &
(2,1)-(3,1)-(2,1)-(3,1)-(2,1) &
\begin{tikzpicture}[scale=0.4]
\foreach \x in {1,2}{\node[pnt] at (\x, 0){};};
\end{tikzpicture}\\
&$\emptyset\, \emptyset\, \square\, \emptyset\, \emptyset $ &
(2,1)-(2,1)-(3,1)-(2,1)-(2,1) &
\begin{tikzpicture}[scale=0.4]
\foreach \x in {1,2}{\node[pnt] at (\x, 0){};};
\draw[bend left=65](1,0) to (2,0);
\end{tikzpicture}\\[2mm]\hline
&$\emptyset\, \square\, \emptyset\, \emptyset\, \square\,$ &
(2,1)-(3,1)-(2,1)-(2,1)-(3,1) &
\begin{tikzpicture}[scale=0.4]
\foreach \x in {1,2}{\node[pnt] at (\x, 0){};};
\draw[bend left=45](2,0) to (3,.5);
\end{tikzpicture}\\
$m=1$
&$\emptyset\, \emptyset\, \square\, \square\hspace{-0.5mm}\square \, \square$ &
(2,1)-(2,1)-(3,1)-(4,1)-(3,1)
&
\begin{tikzpicture}[scale=0.4]
\foreach \x in {1,2}{\node[pnt] at (\x, 0){};};
\draw[bend left=45](1,0) to (2,0);
\draw[bend left=45](2,0) to (3,0.5);
\end{tikzpicture}\\
&$\emptyset\, \emptyset\, \square\, \begin{tikzpicture}[scale=0.21]\draw(0,0) to (1,0) to (1,1) to (0,1) to (0,0);
\draw(1,1) to (1,2) to (0,2) to (0,1); \end{tikzpicture} \, \square $ &
(2,1)-(2,1)-(3,1)-(3,2)-(3,1)
&\begin{tikzpicture}[scale=0.4]
\foreach \x in {1,2}{\node[pnt] at (\x, 0){};};
\draw[bend left=20](1,0) to (3,.8);
\end{tikzpicture}\\[2mm]\hline
$m=2$&$\emptyset\, \emptyset\, \square\, \square\,
\square\hspace{-0.5mm}\square $ &
(2,1)-(2,1)-(3,1)-(3,1)-(4,1) &
\begin{tikzpicture}[scale=0.4]
\foreach \x in {1,2}{\node[pnt] at (\x, 0){};};
\draw[bend left=45](1,0) to (3,1);
\draw[bend left=45](2,0) to (3,0.5);
\end{tikzpicture}\\
&&\\
\end{tabular}
\caption{The six objects of size~$n=2$ for each class in Theorem~\ref{thm:Main} }
\label{fig:allsizetwo}
\end{table}
\subsection{Tableaux to walks}
The bijection between $\mathcal{H}^{(k)}_{n,m}$ and
$\mathcal{L}^{(k)}_{n,m}$ is a straightforward consequence of the
proof of Theorem~3.6 of~\cite{Chetal07}. The authors give the bijection
explicitly for vacillating tableaux, but the results follow directly
for the case of hesitating tableaux. In their bijection, a shape
$\lambda=(\lambda_1, \dots, \lambda_k)$ in a hesitating tableau
corresponds to a point in position $\delta+\lambda=(\lambda_1+k,
\lambda_2+k-1, \dots, \lambda_k+1)$ in the hesitating walk.
\subsection{Arc diagrams and tableaux}
We modify Chen~\emph{et al.}'s bijection between set partitions and
hesitating tableaux to incorporate the possibility that an edge is not
closed. In this abstract, we simply describe how to modify Marberg's
presentation in~\cite{Marb13} of the bijection to handle open arcs. We
refer readers to that article, or to Chen~\emph{et al.} to get the
full description.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.5]
\foreach \x in {1,2,...,5}{
\node[pnt, label=below:{$\x$}](\x) at (\x,0){};}
\node[pnt, label=right:{$8$}](8) at (6,.5){};
\node[pnt, label=right:{$7$}](7) at (6,1.2){};
\node[pnt, label=right:{$6$}](6) at (6,1.9){};
\draw[bend left=45](1) to (3);
\draw[bend left=45](3) to (7);
\draw[bend left=35](2) to (6);
\draw[bend left=25](5) to (8);
\end{tikzpicture}
\caption{Preparing an open diagram for the bijection.}
\label{fig:expartition2}
\end{figure}
\begin{proposition} The set~$\Omega_{n,m}^{(k)}$ of open partition
diagrams of size~$n$ with~$m$ open arcs, avoiding both
enhanced~$(k+1)$-nestings and future enhanced~$(k+1)$-nestings, is in bijection
with hesitating tableaux of length~$2n$ of maximum height~$k$,
ending with a single row of length~$m$.
\end{proposition}
\begin{proof}
We prove this by describing how to process open arcs. We assume the
reader has some familiarity with the original bijection, which allows
us to keep the presentation brief. We read diagrams
from left to right, and insert and delete cells under the same rules as
Marberg~\cite{Marb13}. The insertion depends on the right
endpoints of the arcs. If the diagram has $n$ vertices, we incrementally label the
right endpoints of the open arcs with labels $n+1$ to $n+m$ from left
to right, as seen in Figure~\ref{fig:expartition2}.
Recall how vertices are processed in this bijection. Each vertex
corresponds to two steps; if~$i$ is a left endpoint, first do nothing
and then insert its (possibly artificial) right endpoint into the
tableau; if~$i$ is a right endpoint, delete it from the tableau and
then do nothing; if~$i$ is both a right endpoint of some arc~$(i', i)$
and left endpoint of~$(i, i'')$, first insert~$i''$ into the tableau
and then delete~$i$. If $i$ is a fixed point, first insert $i$, then
delete $i$. The insertion is done with RSK, and the deletion is simple
cell deletion.
In this process, the values corresponding to open arcs are inserted in
the tableau in order, and are never deleted. By RSK, they will all be
in the same row. Once the other points are deleted, all that will
remains is a single row of length $m$.
The inverse map of Marberg (and ultimately Chen \emph{et al.}) is similarly
adapted.
\end{proof}
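The row insertion used in each insertion step above is classical Schensted bumping. Here is a minimal sketch (ours), showing insertion only; the full bijection also performs the cell deletions described in the proof.

```python
# Minimal Schensted row insertion (sketch, ours): insert a value into
# the first row; if some entry is strictly larger, it is bumped into
# the next row, and so on.
from bisect import bisect_right

def rsk_insert(tableau, value):
    rows = [list(r) for r in tableau]
    for row in rows:
        pos = bisect_right(row, value)     # leftmost entry strictly larger
        if pos == len(row):
            row.append(value)              # value fits at the end of this row
            return rows
        row[pos], value = value, row[pos]  # bump and continue below
    rows.append([value])                   # bumped value starts a new row
    return rows

T = []
for v in [3, 1, 2]:
    T = rsk_insert(T, v)
print(T)  # -> [[1, 2], [3]]
```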
\section{When $k=2$ the complete generating function is Baxter}
\label{sec:Baxter}
\subsection{Baxter permutations}
Baxter numbers comprise entry A001181 of the Online Encyclopedia of
Integer Sequences (OEIS)~\cite{oeis}, and their main interpretation is the
number of permutations $\sigma$ of length $n$ such that there are no indices
$i<j<k$ satisfying
$\sigma(j+1)<\sigma(i)<\sigma(k)<\sigma(j)\quad\text{or}\quad\sigma(j)<\sigma(k)<\sigma(i)<\sigma(j+1).
$
They were introduced by Baxter~\cite{Baxter64} in a
question about compositions of commuting functions, and have since been found in
many places in combinatorics.
Chung~\emph{et al.}~\cite{ChGrHoKl78} found the explicit formula for
the $n^\text{th}$ Baxter number given in Section~1, and also gave a
second order linear recurrence:
\begin{equation}\label{eqn:baxrec}
8(n+2)(n+1)B_{n+1} + (7n^2+49n+82) B_{n+2}-(n+6)(n+5) B_{n+3} =0.
\end{equation}
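Both formulas are easy to check numerically; the sketch below (ours) uses Python's `math.comb`. Note that, with the closed form's indexing $B_1=1$, $B_2=2$, $B_3=6,\dots$, the recurrence relates $B_{n+1}$, $B_{n+2}$ and $B_{n+3}$, which matches the shifted generating function $\sum_n B_{n+1}t^n$ used below.

```python
# Numerical check (ours) of the closed form for the Baxter numbers and
# of the second-order recurrence, in the shifted indexing noted above.
from math import comb

def baxter(n):
    num = sum(comb(n + 1, k - 1) * comb(n + 1, k) * comb(n + 1, k + 1)
              for k in range(1, n + 1))
    return num // (comb(n + 1, 1) * comb(n + 1, 2))

assert [baxter(n) for n in range(1, 8)] == [1, 2, 6, 22, 92, 422, 2074]

for n in range(20):
    lhs = (8 * (n + 2) * (n + 1) * baxter(n + 1)
           + (7 * n * n + 49 * n + 82) * baxter(n + 2)
           - (n + 6) * (n + 5) * baxter(n + 3))
    assert lhs == 0
```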
We prove Theorem~\ref{thm:Baxter} by a generating function
argument. We use the notation $\overline{x}=\frac{1}{x}$, and work in
the ring of formal series $\mathbb{Q}[x, \overline{x}] [\![t]\!]$.
The operator $PT_x$ (resp.\ $NT_x$) extracts positive (resp.\ negative)
powers of $x$ in series of $\mathbb{Q}[x, \overline{x}] [\![t]\!]$. We
relate this to the diagonal operator $\Delta$ defined by $\Delta
\sum_{i,j \in \mathbb{N}} f_{i,j} x^it^j = \sum_n f_{n,n} t^n$.
A bivariate series in $\mathbb{Q}[x] [\![t]\!]$ is D-finite with
respect to $t$ and $x$ if the set of its partial derivatives spans a
finite-dimensional vector space over the field of rational functions
$\mathbb{Q}(x,t)$. Lipshitz~\cite{Lips88} proved that if $F(x,t)\in
\mathbb{Q}[x] [\![t]\!]$ is D-finite then so is $f(t)=\Delta
F(x,t)$. This result is effective, and creative telescoping strategies
have resulted in efficient algorithms for computing such
differential equations~\cite{Chyz00, BoLaSa13}: that is, given a
system of differential equations satisfied by $F(x,t)$, one can
compute the linear differential equation satisfied by $\Delta
F(x,t)=f(t)$. There are several implementations in various
computer algebra systems, such as \textsf{Mgfun} by Chyzak in Maple~\cite{Chyz94}, and the \textsf{HolonomicFunctions}
package of Koutschan in Mathematica~\cite{Kout10}.
Roughly, the strategy is to use elimination in an Ore algebra or an ansatz of undetermined coefficients
to compute differential operators annihilating the multivariate integral
\begin{equation} \frac{1}{2\pi i} \int_\Omega \frac{F(x,t/x)}{x} dx = \Delta F(x,t), \end{equation}
where $\Omega$ is an appropriate contour in $\mathbb{C}$ containing the origin.
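For intuition, here is the diagonal operator on the simplest D-finite example (ours, not from the paper): $F(x,t)=1/(1-x-t)$ satisfies $F=1+(x+t)F$, so its coefficients obey the Pascal recursion $f_{i,j}=f_{i-1,j}+f_{i,j-1}$, and the diagonal $\Delta F=\sum_n \binom{2n}{n}t^n$ is the (algebraic, hence D-finite) central binomial series.

```python
# Diagonal of F(x,t) = 1/(1-x-t): build the coefficient table from the
# functional equation F = 1 + (x+t)F, then read off the main diagonal.
from math import comb

N = 8
f = [[0] * N for _ in range(N)]   # f[i][j] = [x^i t^j] F
for i in range(N):
    for j in range(N):
        if i == j == 0:
            f[i][j] = 1
        else:
            f[i][j] = (f[i - 1][j] if i else 0) + (f[i][j - 1] if j else 0)

assert [f[n][n] for n in range(N)] == [comb(2 * n, n) for n in range(N)]
```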
\subsection{A generating function for hesitating walks ending on an axis}
We recall Proposition~12 of Bousquet-M\'elou and Xin~\cite{BoXi05}; as
noted earlier, we have shifted our indices, and our presentation of
their results is duly adapted. Here $Q_k$ is the first quadrant
$Q_k=\{(x_1, \dots, x_k): x_i> 0\}$.
\begin{proposition}[Bousquet-M\'elou and Xin, Proposition 12 of~\cite{BoXi05}]
For any starting and ending points $\lambda$ and $\mu$ in $W_k$, the
number of $W_k$-hesitating walks going from $\lambda$ to $\mu$ can
be expressed in terms of the number of $Q_k$ hesitating walks as
follows:
\[
w_k(\lambda, \mu, n) = \sum_{\pi\in S_k} (-1)^\pi q_k(\lambda, \pi(\mu), n),
\]
where $(-1)^\pi$ is the sign of $\pi$ and $\pi(\mu_1, \dots,
\mu_k)=(\mu_{\pi(1)}, \dots, \mu_{\pi(k)})$.
\end{proposition}
The result is proved by a classic reflection argument using a simple
sign reversing involution between pairs of walks; the walks restricted
to $W_k$ appear as fixed points.
We restrict to $k=2$, and define
\[
H(x;t) = \sum_{i,n} q_2((2,1), (i,1), 2n) x^it^n\quad \text{and}\quad
V(y;t) = \sum_{i,n} q_2((2,1), (1,i), 2n) y^it^n.
\]
By applying the proposition we see immediately that
\begin{equation}
\label{eqn:hv}
W(x,t)=\sum_{i,n} w_2((2,1), (i,1), 2n)\, x^it^n = H(x;t)-V(x;t).
\end{equation}
Their explicit Proposition~13 is key to our solution.
\begin{proposition}[Bousquet-M\'elou and Xin, Proposition~13
of~\cite{BoXi05}]
\label{thm:BX}
The series $H(x;t)$ and $V(y;t)$ which count $Q_2$-hesitating walks
of even length ending on the $x$-axis and on the $y$-axis, satisfy
\begin{eqnarray}
xH(x;t) &= PT_x \frac{Y}{t(1+x)} (x^2-Y^2/x^2+Y/x^3)\\
\overline{x}^2V(\overline{x};t)&= NT_x \frac{Y}{t(1+x)} (x^2-Y^2/x^2+Y/x^3),
\end{eqnarray}
where $Y$ is the algebraic function
\[Y= \frac{-tx^2+(1-2t)x -t\sqrt{{t}^{2}
{x}^{4}-2\,t{x}^{3}+ \left( -2\,{t}^{2}-4\,t+1 \right) {x}^{2}-
2\,tx+{t}^{2}}}{2t\left( 1+x \right) }.\]
\end{proposition}%
Using this, we can express $W(x,t)$ as a diagonal.
Define
\begin{equation}
G(x,t)=\frac{Y}{t(1+x)}\left(x^2-Y^2/x^2+Y/x^3\right).
\end{equation}
\begin{lemma}
\label{thm:setup}
The generating function for $W_2$-hesitating
walks of even length ending on the $x$-axis, defined by
\[
W(t)=\sum_{i,n} w_2 ((2,1), (i,1), 2n)\, t^n,
\]
\]
satisfies
\begin{equation}
W(t) = \Delta \left( \left(G\left(\frac{1}{x},xt\right)-G(x,xt)\right)
\frac{1}{1-x} \right).
\end{equation}
\end{lemma}
\begin{proof} We remark that $W(t)=H(1;t)-V(1;t)$ by
Equation~\ref{eqn:hv}. We apply Proposition~\ref{thm:BX}, and
then convert $PT$ and $NT$ operators to diagonal operations by
straightforward series manipulation:
\begin{equation}
H(1;t)=\left.PT_x G(x,t)\right|_{x=1} = \Delta \left(G\left(\frac{1}{x},xt\right) \frac{1}{1-x} \right),\qquad V(1;t)= \Delta \left(G(x,xt) \frac{1}{1-x}\right).
\end{equation}
\end{proof}
\begin{theorem}\label{thm:baxter}
Let $B_n$ be the number of Baxter permutations of size
$n$. Then $W(t)=\sum B_{n+1} t^n$.
\end{theorem}
\begin{proof}
We prove this using effective computations for D-finite closure
properties. Applying the Mathematica package of
Koutschan~\cite{Kout10} to our expression for $W(t)$ as a
diagonal, we calculate that it satisfies the differential equation
$\mathcal{L} \cdot W(t) =0$, where $\mathcal{L}$ is the differential
operator
\begin{align*}
\mathcal{L} := & t^4 \left( t+1 \right) \left( 8\,t-1 \right) {{\it D_t}}^{5}+{t}^{
3} \left( -20+147\,t+176\,{t}^{2} \right) {{\it D_t}}^{4} +4\,{t}^{2}
\left( -30+241\,t+304\,{t}^{2} \right){{\it D_t}}^{3} \\
&+12\,t \left( -
20+191\,t+256\,{t}^{2} \right) {{\it D_t}}^{2}
+24\, \left( -5+72\,t+104
\,{t}^{2} \right) {\it D_t}+240+384\,t.
\end{align*}
Furthermore, Equation~\ref{eqn:baxrec} implies that the
shifted generating function for the Baxter numbers written above also
satisfies this differential equation. As the solution space of the
differential operator $\mathcal{L}$ is a vector space of dimension 5,
to prove equality it is sufficient to show that the first five terms
of the generating functions are equal. In fact, we have verified that
the two series agree for over 200 terms.
\end{proof}
\subsection{A new generating tree}
\label{sec:GeneratingTree}
A \emph{generating tree} for a combinatorial class expresses recursive
structure in a rooted plane tree with labelled nodes. The objects of
size~$n$ are each uniquely generated, and the set of objects of size $n$
comprise the $n^\text{th}$ level of the tree. They are useful for
enumeration, and for showing that two classes are in bijection. One
consequence of Theorem~\ref{thm:Baxter} is a new generating tree
construction for Baxter objects.
Several different formalisms exist for generating trees,
notably~\cite{BaBoDeFlGaGo02}. The central properties are as
follows. Every object $\gamma$ in a combinatorial class $\mathcal{C}$ is
assigned a label $\ell(\gamma)\in\mathbb{Z}^k$, for some fixed
$k$. There is a rewriting rule on these labels with the property that
if two nodes have the same label then the ordered list of labels of
their children is also the same. We consider labels that are pairs of
positive integers, specified by~$\{\ell_\text{Root}: [i,j]
\rightarrow \operatorname{Succ}([i,j])\}$, where $\ell_\text{Root}$
is the label of the root.
Two generating trees for Baxter objects are known in the literature,
and one consequence of Theorem~\ref{thm:Main} is a third, using the
generating tree for $\Omega^{(k)}$ given by Burrill~\emph{et al.}~\cite{Buetal12}. This
tree differs from the other two already at the third level,
illustrating a very different decomposition of the objects. For the
three different systems, we give the succession rules, and the first five
levels of each tree (unlabelled), in Figure~\ref{fig:trees}.
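Succession rules of this kind are easy to check mechanically. The following sketch (our own illustration in Python, not part of the paper) grows the first rule of Figure~\ref{fig:trees}, $\{[1,1];[i,j]\rightarrow [1,j+1],\dots,[i,j+1],[i+1,j],\dots,[i+1,1]\}$, level by level and counts the nodes per level, recovering the Baxter numbers $1,2,6,22,92,\dots$

```python
# Level-by-level expansion of the succession rule from [BoBoFu10]:
# root [1,1];  [i,j] -> [1,j+1], ..., [i,j+1], [i+1,j], ..., [i+1,1]
def children(label):
    i, j = label
    # i children whose second coordinate grows, then j children whose first grows
    return [(a, j + 1) for a in range(1, i + 1)] + \
           [(i + 1, b) for b in range(j, 0, -1)]

def level_sizes(levels):
    current = [(1, 1)]          # the root label
    sizes = []
    for _ in range(levels):
        sizes.append(len(current))
        current = [c for lab in current for c in children(lab)]
    return sizes

print(level_sizes(6))  # [1, 2, 6, 22, 92, 422] -- the Baxter numbers
```

A node labelled $[i,j]$ has exactly $i+j$ children, so the tree can be expanded level by level without storing the objects themselves.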
\begin{figure}\center
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth, height=2cm]{BoBoFu10}
{\tiny $\{[1,1];[i,j]\rightarrow [1,j+1], \dots, [i, j+1], [i+1, j], \dots [i+1, 1]\}$}
\end{subfigure}\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth, height=2cm]{BoGu14}
{\tiny$\{[0,2];
[i,j]\rightarrow [0, j], \dots , [i-1, j], [1, j+1], \dots, [i+j-1, 2]\}$}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth,height=2cm]{BuElMiYe12}
{\tiny \[
\begin{array}{rll}
\{[0,0];[i,j]\rightarrow
&[i,i], [i +1,j]\\
&[i,j],[i,j+1],\dots,[i,i-1], &\mathrm{if}\, i>0\\
&[i-1,j],[i-1,j+1],\dots,[i-1,i-1], &\mathrm{if}\,i>0 \\
&[i , j -1], [i-1, j-1]&\mathrm{if}\, i > 0, \mathrm{and}\, j > 0\}.\\
\end{array} \]}
\end{subfigure}
\caption{The first five levels of each of the three Baxter generating
trees, respectively from~\cite{BoBoFu10}, \cite{BoGu14} and~\cite{Buetal12}.}
\label{fig:trees}
\end{figure}
\section{Standard Young tableaux of bounded height}
\label{sec:Conjecture}
Chen \emph{et al.}~\cite{Chetal07} give an analogy comparing the
relationship between oscillating tableaux and irreducible
representations of the Brauer algebra to the relationship of standard
Young tableaux and the symmetric group. Indeed, there are many connections
between standard Young tableaux of bounded height and oscillating
tableaux of bounded height, but we were unable to find any
consideration of the following conjecture.
\begin{conjecture}
The set of oscillating lattice walks of length~$n$ in $W_k$ ending
at the boundary $\{(m+k, k-1, \dots, 1 ): m\geq 0\}$ is in bijection
with the set of standard Young tableaux of size~$n$, of height
bounded by~$2k$.
\end{conjecture}
The case of $k=1$ is straightforward. An oscillating walk in~$W_1$ is
simply a sequence of $e_1$ and $-e_1$ steps such that at any point the
number of $e_1$ steps taken is at least the number of
$-e_1$ steps. There is a simple bijection to standard Young tableaux of
height at most 2: given an oscillating walk as a sequence of steps
$w=w_1,w_2,\dots, w_n$, the standard Young tableau is filled by
placing entry~$j$ in the top row of the tableau if $w_j=e_1$, and in
the second row if $w_j=-e_1$. However, the interaction between the
steps is less straightforward in higher dimensions.
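The $k=1$ bijection just described fits in a few lines of code (our own sketch, not from the paper), reading the walk left to right and placing each step index in the appropriate row:

```python
def walk_to_tableau(walk):
    """Map a sequence of +1/-1 steps (an oscillating walk in W_1, i.e.
    every prefix sum is nonnegative) to a two-row standard Young tableau,
    following the bijection described above: entry j goes in the top row
    if step j is e_1, and in the second row if it is -e_1."""
    rows = [[], []]
    height = 0
    for j, step in enumerate(walk, start=1):
        height += step
        assert height >= 0, "not a walk in W_1"
        rows[0 if step == 1 else 1].append(j)
    return rows

# The walk e1, e1, -e1, e1, -e1 gives the tableau with rows [1,2,4], [3,5].
print(walk_to_tableau([1, 1, -1, 1, -1]))
```

The prefix-sum condition on the walk is exactly what guarantees that each column of the resulting tableau increases downwards.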
Eric Fusy remarked to us that the inverse RSK bijection maps standard
Young tableaux to involutions, as the images of pairs of identical
tableaux. This does not directly give our matching diagrams, but it is
a promising candidate for a bijection.
To give evidence for this conjecture, we first give expressions for
the exponential generating functions using determinants of matrices
filled with Bessel functions. For any fixed $k$ the equivalence can
be verified in this manner, and we have done so for $k\leq 8$. It is possible
that fully explicit manipulations might also yield a proof.
Next we express Young tableaux as walks, and use standard
generating function techniques for the ordinary generating
functions. This gives us a different characterization of the bijection
in terms of walks, and two expressions for the generating functions as
diagonals of rational functions. The technical details of the
constructions are reserved for the long version of this article.
\subsection{A determinant approach}
Our first approach to settling this conjecture is a direct appeal
to two recent enumerative results. Throughout this section,
$b_j(x):=I_j(2x)=\sum_{n\geq 0}\frac{x^{2n+j}}{n!(n+j)!}$, where $I_j$ is the
modified Bessel function of the first kind of order~$j$.
Let $\tilde{Y}_{k}(t)$ be the exponential generating function for
the class of standard Young tableaux with height bounded by
$k$. Formulas for $\tilde{Y}_k(t)$ follow from works of Gordon,
Houten, Bender and Knuth~\cite{Gord71, GoHo68, BeKn72}, which depend
on the parity of~$k$. We are only interested in the even values here.
\begin{theorem}[\cite{Gord71, GoHo68, BeKn72}]
The exponential generating function for the class of standard Young
tableaux of height bounded by~$2k$ is given by
\[
\tilde{Y}_{2k}(t)= \det[b_{i-j}(t)+b_{i+j-1}(t)]_{1\leq i,j\leq k}.
\]
\end{theorem}
Around the same time, Grabiner and Magyar~\cite{GrMa93} determined an
exponential generating function for~$\mathcal{O}_n({\lambda}; {\mu})$,
the number of oscillating lattice walks of length~$n$
from~${\lambda}$ to~${\mu}$, which stay within the~$k$-dimensional Weyl
chamber~$W_k$ and take steps in the positive or negative unit coordinate
vectors.
\begin{theorem}[Grabiner-Magyar~\cite{GrMa93}] For fixed~${\lambda},
{\mu} \in W_k$, the exponential
generating function for $\mathcal{O}_n({\lambda}; {\mu})$ satisfies
\[
O_{{\lambda}, {\mu}}(t)= \sum_{n\geq 0} |\mathcal{O}_n({\lambda};
{\mu}) |\frac{t^n}{n!}
= \det \left( b_{{\mu_i}-{\lambda_j}}(2t)-b_{{\mu_i}+{\lambda_j}}(2t)\right)_{1\leq i, j\leq k}.
\]
\end{theorem}
We specialize the start and end positions as
${\lambda}=\delta:=(k, k-1, \ldots, 1)$ and~${\mu}=
\delta+me_1=(k+m, k-1, \ldots, 1)$. We are
interested in the sum over all values of $m$, and
define $\tilde{O}_k(t):=\sum_{m\geq 0}O_{{\delta}, me_1+\delta}(t)$.
Using their result we deduce the following.
\begin{proposition}
The exponential generating function for the class of oscillating
tableaux ending with a row shape is the finite sum
\[
\tilde{O}_k(t)= \sum_{u=0}^{k-1}(-1)^u \sum_{\ell=u}^{2k-1-2u} I_{\ell} \det \left(I_{i-j}-I_{kd-i-j}\right)_{0 \leq i \leq k-1,\, i\neq u,\, 1\leq j \leq k-1}.
\]
\end{proposition}
This follows from the fact that the infinite sum which arises from direct
application of Grabiner and Magyar's formula telescopes,
using also the identity $b_{-k}=b_k$.
Our conjecture is equivalent to
$\tilde{O}_k(t)=\tilde{Y}_{2k}(t)$. Here are the first two values:
\[
\tilde{O}_1(t)= \tilde{Y}_2(t)= b_0+b_1, \quad \tilde{O}_2(t) = \tilde{Y}_4(t)=b_0^2+b_0b_1+b_0b_3-2b_1b_2-b_2^2-b_1^2+b_1b_3.
\]
The determinants are easy to compute for small
values, and they agree for $k\leq 8$.
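The $k=1$ case can also be confirmed coefficient by coefficient (a quick sanity check of ours, using the classical fact that there are $\binom{n}{\lfloor n/2\rfloor}$ standard Young tableaux of size $n$ with at most two rows):

```python
from math import factorial, comb

def b(j, n):
    """Coefficient of t^n in b_j(t) = sum_m t^(2m+j) / (m! (m+j)!)."""
    if (n - j) % 2 or n < j:
        return 0
    m = (n - j) // 2
    return 1 / (factorial(m) * factorial(m + j))

# n! * [t^n] (b_0 + b_1) should count standard Young tableaux of size n
# with at most 2 rows, i.e. binomial(n, floor(n/2)).
for n in range(1, 12):
    coeff = round(factorial(n) * (b(0, n) + b(1, n)))
    assert coeff == comb(n, n // 2)
print("b_0 + b_1 matches C(n, n//2) for n = 1..11")
```

The even and odd coefficients come from $b_0$ and $b_1$ respectively, which is why both terms are needed in $\tilde{Y}_2$.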
\subsection{A diagonal approach}
\label{sec:diag}
Using Theorem~\ref{thm:Main2}, we can reformulate the conjecture to be
strictly in terms of lattice paths by viewing standard Young tableaux as
oscillating tableaux with no deleting steps.
\begin{conjecture}
The set of oscillating lattice walks of length $n$ in $W_k$ starting
at $\delta=(k, k-1, \dots, 1)$ and ending at the
boundary $\{me_1+\delta: m\geq 0\}$ is in bijection with the set of
oscillating lattice walks of length $n$ in $W_{2k}$, using only
positive steps ($e_j$), starting at $\delta$ and ending anywhere in
the region.
\end{conjecture}
To approach this, we consider the two sets of walks separately, using
standard enumeration techniques: namely, the orbit sum method and
results on reflectable walks in Weyl chambers. Some of the enumerative parallels
of these strategies in this context are discussed
in~\cite{MeMi14c}. The advantage of these diagonal representations is
potential access to asymptotic enumeration formulas, and possibly
alternative combinatorial representations. All of the generating
functions are D-finite, and we can use the work of~\cite{BoLaSa13}
to determine bounds on the shape of the annihilating differential
equation.
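Before stating the result, it may help to recall the diagonal operator: $\Delta$ applied to $F(z,t)=\sum_{i,j} f_{i,j}z^it^j$ yields $\sum_n f_{n,n}t^n$. A toy example of ours (not from the paper): for $F=1/(1-z-t)$ one has $f_{i,j}=\binom{i+j}{i}$, so $\Delta F=\sum_n \binom{2n}{n}t^n$, which can be checked by expanding the series.

```python
from math import comb

def diagonal_coeffs(N):
    """Expand F(z,t) = 1/(1-z-t) via the recurrence f[i][j] = f[i-1][j] + f[i][j-1]
    (obtained from (1-z-t)F = 1) and return the diagonal coefficients f[n][n]."""
    f = [[0] * N for _ in range(N)]
    f[0][0] = 1
    for i in range(N):
        for j in range(N):
            if i + j > 0:
                f[i][j] = (f[i - 1][j] if i else 0) + (f[i][j - 1] if j else 0)
    return [f[n][n] for n in range(N)]

# Delta(1/(1-z-t)) = sum_n C(2n,n) t^n, the central binomial series
assert diagonal_coeffs(8) == [comb(2 * n, n) for n in range(8)]
print(diagonal_coeffs(8))
```

The diagonals appearing below are of the same nature, only with more variables and a more elaborate numerator.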
\begin{theorem}
\label{thm:osc}
The ordinary generating function for oscillating walks starting at $\delta$ and
ending on the boundary $\{me_1+\delta: m\geq 0\}$, is given by the
following formula:
\[ O_k(t)=\Delta \left[ \frac{t^{2k-1}(z_3 z_4^2 \cdots
z_k^{k-2})(z_1+1)\prod_{1\leq j<i \leq k} (z_i-z_j)(z_iz_j-1)
\cdot \prod_{2 \leq i \leq k} (z_i^2 -1)}{1-t(z_1\cdots z_k)(z_1 +
\ensuremath{\overline{z}}_1 + \cdots + z_k + \ensuremath{\overline{z}}_k)} \right]. \]
\end{theorem}
The proof of Theorem~\ref{thm:osc} is a rather direct application of
Gessel and Zeilberger's formula for reflectable walks in Weyl
chambers.
By applying an orbit sum analysis, we derive the following
expression. There are still some points to verify in the computation,
so although we believe it, and have verified it up to $k=8$, this form
remains a conjecture.
\begin{conjecture}The ordinary generating function for standard Young
tableau of height at most $k$ is
\[ Y_k(t) = \Delta \left( \frac{(z_1\cdots z_{k-1})\Phi(\overline{\ensuremath{\mathbf{z}}})}{(1-t(z_1\cdots z_{k-1})S(\ensuremath{\mathbf{z}}))(1-z_1)\cdots(1-z_{k-1})} \right), \]
where
\[ S(\ensuremath{\mathbf{z}}) = \ensuremath{\overline{z}}_1 + \ensuremath{\overline{z}}_1z_2 + \cdots + \ensuremath{\overline{z}}_{k-1}z_{k-2} + z_{k-1}, \]
and
\[ \Phi(\ensuremath{\mathbf{z}}) = \frac{(z_1z_{k-1}-1)}{(z_1\cdots z_{k-1})^{k-1}}
\prod_{j=1}^{k-2}(z_1z_j - z_{j+1})\prod_{j=2}^{k-1}(z_{k-1}z_j -
z_{j-1})\prod_{j=1}^{k-3}\prod_{i=j+2}^{k-1}(z_jz_i-z_{j+1}z_{i-1}). \]
\end{conjecture}
In order to prove the conjecture, giving an explicit diagonal representation for the generating function of the number of
standard Young tableaux of a given height, it is sufficient to prove that for every $k$
{\small
\[ \sum_{\sigma \in \mathcal{G}} \operatorname{sgn}(\sigma)\, \sigma(z_1\cdots z_{k-1}) = \frac{(z_1z_{k-1}-1)}{(z_1\cdots z_{k-1})^{k-1}}
\prod_{j=1}^{k-2}(z_1z_j - z_{j+1})\prod_{j=2}^{k-1}(z_{k-1}z_j -
z_{j-1})\prod_{j=1}^{k-3}\prod_{i=j+2}^{k-1}(z_jz_i-z_{j+1}z_{i-1}),\] }
where $\mathcal{G}$ is the finite group of rational transformations of
$\mathbb{R}^{k-1}$ generated by{\small
\begin{align*}
\phi_1: (z_1,\dots,z_{k-1}) &\mapsto (\ensuremath{\overline{z}}_1z_2,z_2,\dots,z_{k-1}) \\
(1 < r < k-1) \quad \phi_r: (z_1,\dots,z_{k-1}) &\mapsto (z_1,\dots,z_{r-1},\quad z_{r-1}\ensuremath{\overline{z}}_rz_{r+1}, \quad z_{r+1}, \dots,z_{k-1}) \\
\phi_{k-1}: (z_1,\dots,z_{k-1}) &\mapsto (z_1,\dots,z_{k-2},z_{k-2}\ensuremath{\overline{z}}_{k-1}),
\end{align*}}
which acts on $f \in \mathbb{R}[z_1,\dots,z_{k-1}]$ by $\sigma(f(z_1,\dots,z_{k-1})) = f(\sigma(z_1,\dots,z_{k-1}))$. It can be shown that $\mathcal{G}$ is isomorphic to the symmetric group $S_k$; however, the action of its elements on terms such as $\sigma(z_1\cdots z_{k-1})$ is less transparent.
\section{Conclusion}
\label{sec:Conclusion}
These problems are clearly begging for combinatorial proofs. There is
a huge number of candidate Baxter objects to choose from, and
one can hope for bijections under which the known Catalan subclasses are preserved.
The diagonal expressions in Section~\ref{sec:diag}, which are of
interest in their own right, are also compelling in a wider
context. We are quite interested in the conjecture of Christol on
whether or not every globally bounded D-finite series can be
expressed as a diagonal of a rational function. The class of standard
Young tableaux of bounded height is a very intriguing case, and
this follows on from our study of lattice walks~\cite{MeMi14c}, in which we
consider diagonal expressions for known D-finite
classes.
\section*{Acknowledgements}
We are extremely grateful to Eric Fusy, Julien Courtiel, Sylvie
Corteel, Lily Yen, Yvan le Borgne and Sergi Elizalde for stimulating
conversations, and important insights.
% https://arxiv.org/abs/1203.3868
\title{Optimal covers with Hamilton cycles in random graphs}
\begin{abstract}
A packing of a graph $G$ with Hamilton cycles is a set of edge-disjoint Hamilton cycles in $G$. Such packings have been studied intensively, and recent results imply that a largest packing of Hamilton cycles in $G_{n,p}$ a.a.s.~has size $\lfloor \delta(G_{n,p})/2 \rfloor$. Glebov, Krivelevich and Szab\'o recently initiated research on the `dual' problem, where one asks for a set of Hamilton cycles covering all edges of $G$. Our main result states that for $\log^{117}n/n \le p \le 1-n^{-1/8}$, a.a.s.~the edges of $G_{n,p}$ can be covered by $\lceil \Delta(G_{n,p})/2 \rceil$ Hamilton cycles. This is clearly optimal and improves an approximate result of Glebov, Krivelevich and Szab\'o, which holds for $p \ge n^{-1+\varepsilon}$. Our proof is based on a result of Knox, K\"uhn and Osthus on packing Hamilton cycles in pseudorandom graphs.
\end{abstract}
\section{Introduction}
Given graphs $H$ and $G$, an $H$-decomposition of $G$ is a set of edge-disjoint copies of $H$ in $G$ which cover all edges of $G$.
The study of such decompositions forms an important area of Combinatorics but it is notoriously difficult.
Often an $H$-decomposition does not exist (or it may be out of reach of current methods).
In this case, the natural approach is to study the packing and covering versions of the problem.
Here an \emph{$H$-packing} is a set of edge-disjoint copies of $H$ in $G$ and an \emph{$H$-covering}
is a set of (not necessarily edge-disjoint) copies of $H$ covering all the edges of $G$.
An $H$-packing is \emph{optimal} if it has the largest possible size and an $H$-covering is \emph{optimal} if it has the smallest possible size.
The two problems of finding (nearly) optimal packings and coverings may be viewed as `dual' to each other.
By far the most famous problem of this kind is the Erd\H{o}s-Hanani problem on packing and covering a complete $r$-uniform hypergraph with $k$-cliques,
which was solved by R\"odl~\cite{Rodlnibble}. In this case, it turns out that the (asymptotic) covering and packing versions of the problem are trivially equivalent
and the solutions have approximately the same value.
Packings of Hamilton cycles in random graphs $G_{n,p}$ were first studied by Bollob\'as and Frieze~\cite{BF85}.
(Here $G_{n,p}$ denotes the binomial random graph on $n$ vertices with edge probability $p$.)
Recently, the problem of finding optimal packings of edge-disjoint Hamilton cycles in a random graph has received a large amount
of attention, leading to its complete solution in a series of papers by several authors (see below for more details on the history of the problem).
The size of a packing of Hamilton cycles in a graph $G$ is obviously at most $\lfloor \delta(G)/2 \rfloor$, and this trivial bound turns out to be tight in the case of $G_{n,p}$
for \emph{any} $p$.
The covering version of the problem was first investigated by Glebov, Krivelevich and Szab\'o~\cite{GKS}.
Note that the trivial bound on the size of an optimal covering of a graph $G$ with Hamilton cycles is $\lceil \Delta(G)/2 \rceil$.
They showed that for $p \ge n^{-1+\varepsilon}$, this bound is a.a.s.~approximately tight, i.e.~in this range,
a.a.s.~the edges of $G_{n,p}$ can be covered with $(1+o(1))\Delta(G_{n,p})/2$ Hamilton cycles.
Here we say that a property $A$ holds a.a.s.~(asymptotically almost surely), if the probability that $A$ holds tends to $1$ as $n$ tends to infinity.
The authors of~\cite{GKS} also conjectured that their approximate bound could be extended to any $p \gg \log n/n$.
We are able to go further and prove the corresponding exact bound, unless $p$ tends to $0$ or $1$ rather quickly.
\begin{thm}\label{thm:main-result}
Suppose that $G\sim G_{n,p}$, where $\frac{\log^{117}n}{n}\le p\le1-n^{-1/8}$.
Then a.a.s.~the edges of $G$ can be covered by $\ceil{\Delta(G)/2}$
Hamilton cycles.
\end{thm}
Note that the exact bound fails when $p$ is sufficiently large. Indeed, let $n\ge 5$ be odd and take $p = 1 - n^{-2}$. Then with $\Omega(1)$ probability\COMMENT{This happens with probability
\[\binom{n}{2}\left(1-\frac{1}{n^{2}}\right)^{\binom{n}{2}-1}\frac{1}{n^2} \ge \frac{1}{4}\left(1-\frac{1}{n^2}\right)^{n^2} \ge \frac{1}{4}e^{-1-\frac{1}{n^2}} \ge \frac{1}{100}.\]}, $G\sim G_{n,p}$ is the complete graph with one edge $uv$ removed. We claim that in this case, $G$ cannot be covered by $(n-1)/2$ Hamilton cycles. Suppose such a cover exists. Then exactly one edge is contained in more than one Hamilton cycle in the cover. But $u$ and $v$ both have odd degrees, and hence are both incident to an edge contained in more than one Hamilton cycle. Since $uv \notin E(G)$, these edges must be distinct and we have a contradiction.
Note also that even though our result does not hold for $p > 1 - n^{-1/8}$, it still implies the conjecture of~\cite{GKS} in this range. Indeed, if $G \sim G_{n,p}$ with $p > 1 - n^{-1/8}$, we may simply partition $G$ into two edge-disjoint graphs uniformly at random and apply Theorem~\ref{thm:main-result} to each one to a.a.s. cover $G$ with $(1+o(1))n/2$ Hamilton cycles.
Unlike the situation with the Erd\H{o}s-Hanani problem, the packing and covering problems are not equivalent in the case of Hamilton cycles.
However, they do turn out to be closely related, so we now summarize the known results leading to the solution of the packing problem for Hamilton cycles in
random graphs. Here `exact' refers to a bound of $\lfloor \delta(G_{n,p})/2 \rfloor$, and $\varepsilon$ is a positive constant.
$$
\begin{array}{l|l|l}
\mbox{authors} & \mbox{range of } p & \mbox{result} \\
\hline
\mbox{Ajtai, Koml\'os \& Szemer\'edi~\cite{Komlos}} & \delta(G_{n,p}) =2 & \mbox{exact} \\
\mbox{Bollob\'as \& Frieze~\cite{BF85}} & \delta(G_{n,p}) \mbox{ bounded} & \mbox{exact} \\
\mbox{Frieze \& Krivelevich~\cite{fk}} & p \mbox{ constant} & \mbox{approx.} \\
\mbox{Frieze \& Krivelevich~\cite{FK08}} & p=\frac{(1+o(1))\log n}{n} & \mbox{exact} \\
\mbox{Knox, K\"uhn \& Osthus~\cite{AHDoRG}} & p \gg \frac{\log n}{n} & \mbox{approx.} \\
\mbox{Ben-Shimon, Krivelevich \& Sudakov~\cite{BKS}} & \frac{(1+o(1))\log n}{n}\le p\le \frac{1.02\log n}{n} & \mbox{exact} \\
\mbox{Knox, K\"uhn \& Osthus~\cite{Knox2011e}} & \frac{\log^{50}n}{n} \le p \le 1- n^{-1/5} & \mbox{exact} \\
\mbox{Krivelevich \& Samotij~\cite{KrS}} & \frac{\log n}{n} \le p \le n^{-1 + \varepsilon} & \mbox{exact} \\
\mbox{K\"uhn \& Osthus~\cite{KOappl}} & p \ge 2/3 & \mbox{exact} \\
\end{array}
$$
In particular, the results in~\cite{BF85,Knox2011e,KrS,KOappl} (of which~\cite{Knox2011e,KrS} cover the main range)
together show that for any $p$, a.a.s.~the size of an optimal packing of Hamilton cycles in $G_{n,p}$ is $\lfloor \delta(G_{n,p})/2 \rfloor$.
This confirms a conjecture of Frieze and Krivelevich~\cite{FK08} (a stronger conjecture was made in~\cite{fk}).
The result in~\cite{KOappl} is based on a recent result of K\"uhn and Osthus~\cite{monster} which guarantees the existence of a Hamilton decomposition in every
regular `robustly expanding' digraph. The main application of the latter was the proof (for large tournaments)
of a conjecture of Kelly that every regular tournament has a Hamilton decomposition.
But as discussed in~\cite{monster,KOappl}, the result in~\cite{monster} also has a number of further applications to packings of Hamilton cycles in dense graphs and (quasi-)random graphs.
Recall that the above results imply an optimal packing result for any $p$.
However, for the covering version, we need $p$ to be large enough to ensure the existence of at least one Hamilton cycle
before we can find any covering at all. This is the reason for the restriction $p \gg \log n/n$ in the conjecture of Glebov, Krivelevich and Szab\'o~\cite{GKS}
mentioned above.
However, they asked the intriguing question whether this might extend to $p$ which is closer to the threshold $\log n/n$
for the appearance of a Hamilton cycle in a random graph.
In fact, it would be interesting to know whether a `hitting time' result holds. For this, consider the well-known `evolutionary' random graph process $G_{n,t}$:
Let $G_{n,0}$ be the empty graph on $n$ vertices. Consider a random ordering of the edges of $K_n$. Let $G_{n,t}$
be obtained from $G_{n,t-1}$ by adding the $t$th edge in the ordering.
Given a property $\mathcal{P}$, let $t(\mathcal{P})$ denote the \emph{hitting time} of $\mathcal{P}$, i.e.~the smallest $t$ so that $G_{n,t}$ has $\mathcal{P}$.
\begin{question}
\label{con:hittime}
Let $\mathcal{C}$ denote the property that an optimal covering of a graph $G$ with Hamilton cycles has size $\lceil \Delta(G)/2 \rceil$.
Let $\mathcal{H}$ denote the property that a graph $G$ has a Hamilton cycle.
Is it true that a.a.s.~$t(\mathcal{C})=t(\mathcal{H})$?
\end{question}
Note that $\mathcal{C}$ is not monotone. In fact, it is not even the case that for all $t>t(\mathcal{C})$, $G_{n,t}$ a.a.s.~has $\mathcal{C}$. Taking $n\ge 5$ odd and $t=\binom{n}{2}-1$, $G_{n,t}$ is the complete graph with one edge removed -- which, as noted above, cannot be covered by $(n-1)/2$ Hamilton cycles. It would be interesting to determine (approximately) the ranges of $t$ for which a.a.s.~$G_{n,t}$ has $\mathcal{C}$.
The approximate covering result of Glebov, Krivelevich and Szab\'o~\cite{GKS} uses the approximate packing result in~\cite{AHDoRG} as a tool.
More precisely, their proof applies the result in~\cite{AHDoRG} to obtain an almost optimal packing.
Then the strategy is to add a comparatively small number of Hamilton cycles which cover the remaining edges.
Instead, our proof of Theorem~\ref{thm:main-result} is based on the main technical lemma (Lemma 47) of the exact packing result
in~\cite{Knox2011e}. This is stated as Lemma~\ref{lem:prcovering} in the current paper and (roughly) states the following: Suppose we are given a regular graph $H$
which is close to being pseudorandom
and a pseudorandom graph $G_1$, where $G_1$ is allowed to be surprisingly sparse compared to $H$.
Then we can find a set of edge-disjoint Hamilton cycles in $G_1 \cup H$ covering all edges of $H$.
Our proof involves several successive applications of this result, where we eventually cover all edges of $G_{n,p}$.
In addition, our proof crucially relies on the fact that in the range of $p$ we consider,
there is a small but significant gap between the degree of the unique vertex $x_0$ of maximum degree and the other vertex degrees
(and the same holds for the vertex of minimum degree). This means that for all vertices $x \neq x_0$, we can afford to
cover a few edges incident to $x$ more than once. The analogous observation for the minimum degree was exploited in~\cite{Knox2011e} as well.
The result in~\cite{GKS} also holds for quasi-random graphs of edge density at least $n^{-1+\varepsilon}$, provided that they have an almost optimal packing of Hamilton cycles. It would be interesting to obtain such
results for sparser quasi-random graphs too.
In fact, the result in~\cite{Knox2011e} does apply in a quasi-random setting (see Theorem~48 in~\cite{Knox2011e}), but the assumptions are quite restrictive and it is not clear to which
extent they can be used to prove results for $(n,d,\lambda)$-graphs, say. Note that even if the assumptions of~\cite{Knox2011e} could be weakened, our results would still not immediately generalise to $(n,d,\lambda)$-graphs.
This paper is organized as follows: In the next section, we collect several results and definitions regarding pseudorandom graphs, mainly from~\cite{Knox2011e}.
In Section~\ref{sec:tutte}, we apply Tutte's Theorem to give results which enable us to add a small number of
edges to certain almost-regular graphs in order to turn them into regular graphs (without increasing the maximum degree).
Finally, in Section~\ref{sec:proof} we put together all these tools to prove Theorem~\ref{thm:main-result}.
\section{Pseudorandom graphs}
The purpose of this section is to collect all the properties of $G_{n,p}$ that we need for our proof of Theorem~\ref{thm:main-result}.
Throughout the rest of the paper, we always assume that $n$ is sufficiently large for our estimates to hold. In particular, some
of our lemmas only hold for sufficiently large~$n$, but we do not state this explicitly. We write $\log$ for the natural logarithm and
$\log^a n$ for $(\log n)^a$. Given functions $f,g:\mathbb{N}\to \mathbb{R}$, we write $f=\omega(g)$ if $f/g\to \infty$ as $n\to \infty$.
We denote the average degree of a graph $G$ by $d(G)$.
We will need the following Chernoff bound (see e.g.~Theorem~2.1 in~\cite{JLR}).
\begin{lem}\label{lem:chernoff}
Suppose that $X\sim \mathrm{Bin}(n,p)$.
For any $0<a<1$ we have $$\mathbb{P}(X \le (1-a)\mathbb{E}X) \le e^{-\frac{a^2}{3}\mathbb{E}X}.$$
\end{lem}
The following notion was first introduced by Thomason~\cite{T87}.
\begin{defn}\label{jumbled}
Let $p,\beta\ge 0$ with $p\le 1$. A graph $G$ is $(p,\beta)$-jumbled
if for all non-empty $S\subseteq V(G)$ we have $$\left|e_{G}(S)-p\binom{|S|}{2}\right|<\beta|S|.$$
\end{defn}
We will also use the following immediate consequence of Definition~\ref{jumbled}. Suppose that
$G$ is a $(p,\beta)$-jumbled graph and $X,Y\subseteq V(G)$ are disjoint. Then
\begin{equation}\label{eq:jumbled}
\left|e(X,Y)- p|X||Y|\right|\le 2\beta(|X|+|Y|).
\end{equation}
To see this, note that $e(X,Y)=e(X\cup Y)-e(X)-e(Y)$. Now~(\ref{eq:jumbled}) follows from Definition~\ref{jumbled}
by applying the triangle inequality.
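Definition~\ref{jumbled} is easy to probe numerically (a sketch of ours with arbitrary small parameters and a fixed seed; for such tiny $n$ the bound $\beta=2\sqrt{np(1-p)}$ is very loose, so the check is mainly illustrative of the definition):

```python
import random
from itertools import combinations
from math import comb, sqrt

def jumbled_check(n, p, seed):
    """Sample G(n,p) and verify the (p, beta)-jumbled condition with
    beta = 2*sqrt(n*p*(1-p)), exhaustively over all nonempty vertex
    subsets S (feasible only for very small n)."""
    rng = random.Random(seed)
    adj = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}
    beta = 2 * sqrt(n * p * (1 - p))
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            eS = sum(1 for e in combinations(S, 2) if frozenset(e) in adj)
            # Definition: |e(S) - p*C(|S|,2)| < beta*|S| for all nonempty S
            if abs(eS - p * comb(size, 2)) >= beta * size:
                return False
    return True

print(jumbled_check(10, 0.5, seed=42))  # True for this sample
```

For genuinely pseudorandom behaviour one needs $n$ large, where the exhaustive check is of course infeasible; property (P1) below asserts exactly this condition for $G_{n,p}$.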
The following notion was introduced in~\cite{Knox2011e}.%
\COMMENT{This is not equivalent to requiring that if $G$ has degree sequence $d_{1}<d_{2}<\ldots<d_{n}$,
we have $d_{i}-d_{1}\ge 2(i-1)$ for all $i\le\log^{2}n+1$. We could e.g. have that $d_2=d_3=\delta+3$ and
$d_i\ge \delta+2\log^2 n$ for all $i\ge 4$.}
\begin{defn}
Let $G$ be a graph on $n$ vertices. For a set $T\subseteq V(G)$,
let $\overline{d}_{G}(T):=\frac{1}{|T|}\sum_{t\in T} d_G(t)$ be the average degree of the vertices of
$T$ in $G$. Then $G$ is \emph{strongly $2$-jumping} if for
all non-empty $T\subseteq V(G)$ we have \[
\overline{d}_{G}(T)\ge\delta(G)+\min\{|T|-1,\log^{2}n\}.\]
\end{defn}
Note that a strongly $2$-jumping graph $G$ is `$2$-jumping', i.e.~it has a unique vertex of minimum degree and all other vertices
have degree at least $\delta(G)+2$.
The next definition collects (most of) the pseudorandomness properties that we need.
\begin{defn}\label{pseudodef}
A graph $G$ on $n$ vertices is \emph{$p$-pseudorandom} if all of
the following hold:
\begin{itemize}
\item[(P1)] $G$ is $(p,2\sqrt{np(1-p)})$-jumbled.
\item[(P2)] For any disjoint $S,T\subseteq V(G)$,
\begin{enumerate}
\item if $\left(\frac{1}{|S|}+\frac{1}{|T|}\right)\frac{\log n}{p}\ge\frac{7}{2}$,
then $e_{G}(S,T)\le2(|S|+|T|)\log n$,
\item if $\left(\frac{1}{|S|}+\frac{1}{|T|}\right)\frac{\log n}{p}\le\frac{7}{2}$,
then $e_{G}(S,T)\le7|S||T|p$.
\end{enumerate}
\item[(P3)] For any $S\subseteq V(G)$,
\begin{enumerate}
\item if $\frac{\log n}{|S|p}\ge\frac{7}{4}$, then $e(S)\le2|S|\log n$,
\item if $\frac{\log n}{|S|p}\le\frac{7}{4}$, then $e(S)\le\frac{7}{2}|S|^{2}p$.
\end{enumerate}
\item[(P4)] We have $np-2\sqrt{np\log n}\le\delta(G)\le np-200\sqrt{np(1-p)}$.
\item[(P5)] We have $\Delta(G)\le np+2\sqrt{np\log n}$.
\item[(P6)] $G$ is strongly 2-jumping.
\end{itemize}
\end{defn}
The following definition is essentially the same, except that some of the bounds are more restrictive.
\begin{defn}
A graph $G$ on $n$ vertices is \emph{strongly $p$-pseudorandom} if all of
the following hold:%
\COMMENT{We could replace the lower bound on $\delta(G)$ in (SP4) by $np-\frac{15}{8}\sqrt{np\log n}$, but we don't use this stronger bound.}
\begin{itemize}
\item[(SP1)] $G$ is $(p,\frac{3}{2}\sqrt{np(1-p)})$-jumbled.
\item[(SP2)] For any disjoint $S,T\subseteq V(G)$,
\begin{enumerate}
\item if $\left(\frac{1}{|S|}+\frac{1}{|T|}\right)\frac{\log n}{p}\ge\frac{7}{2}$,
then $e_{G}(S,T)\le \frac{3}{2}(|S|+|T|)\log n$,
\item if $\left(\frac{1}{|S|}+\frac{1}{|T|}\right)\frac{\log n}{p}\le\frac{7}{2}$,
then $e_{G}(S,T)\le6|S||T|p$.
\end{enumerate}
\item[(SP3)] For any $S\subseteq V(G)$,
\begin{enumerate}
\item if $\frac{\log n}{|S|p}\ge\frac{7}{4}$, then $e(S)\le \frac{3}{2}|S|\log n$,
\item if $\frac{\log n}{|S|p}\le\frac{7}{4}$, then $e(S)\le 3|S|^{2}p$.
\end{enumerate}
\item[(SP4)] We have $np-2\sqrt{np\log n}\le\delta(G)\le np-200\sqrt{np(1-p)}$.
\item[(SP5)] We have $\Delta(G)\le np + \frac{15}{8}\sqrt{np\log n}$.
\item[(SP6)] $G$ is strongly $2$-jumping.
\end{itemize}
\end{defn}
The following lemma is an immediate consequence of Lemmas~9--11, 13 and~14 from \cite{Knox2011e}.%
\COMMENT{The version on our homepage does not include the better bounds yet.}
\begin{lem}\label{lem:rgsarepr}
Let $G\sim G_{n,p}$, where $48^{2}\log^{7}n/n\le p\le 1-36\log^{\frac{7}{2}}n/\sqrt{n}$.
Then $G$ is strongly $p$-pseudorandom with probability at least $1-11/\log n$.
\end{lem}
The next observation shows that if we add a few edges at some vertex $x_0$ of a strongly pseudorandom graph such that
none of these edges is incident to the unique vertex of minimum degree, then we obtain a graph which is still pseudorandom.
\begin{lem}\label{lem:changingvertices}
Suppose that $G$ is a strongly $p$-pseudorandom graph with $p,1-p=\omega\left(1/n\right)$.
Let $y_1$ be the (unique) vertex of minimum degree in $G$ and let $x_0\neq y_1$ be any other vertex.
Let $F$ be a collection of edges of $K_n$ not contained in $G$ which are incident to $x_{0}$ but not to $y_1$
and such that $|F|\le \sqrt{np\log n}/8.$ Then the graph $G+F$ is $p$-pseudorandom.
\end{lem}
\begin{proof}
Let $G':=G+F$.
Clearly, (SP4) and (SP6) are not affected by adding the edges of $F$, so $G'$ satisfies (P4) and (P6).
The bound on $|F|$ together with (SP5) immediately imply that $G'$ satisfies (P5). %
We now show that $G'$ satisfies (P1).
Indeed, for any $S\subseteq V(G')$, (SP1) implies that
\begin{eqnarray*}
\left|e_{G'}(S)-p\binom{|S|}{2}\right| & \le & \left|e_{G'}(S)-e_{G}(S)\right|+\left|e_{G}(S)-p\binom{|S|}{2}\right|\\
& \le & |S|+\frac{3}{2}\sqrt{np(1-p)}|S|
\le 2\sqrt{np(1-p)}|S|.
\end{eqnarray*}
To check (P2), suppose that $S,T\subseteq V(G')$ are disjoint. Without loss of generality we may assume that
$|S|\le|T|$. First suppose $\left(\frac{1}{|S|}+\frac{1}{|T|}\right)\frac{\log n}{p}\ge\frac{7}{2}$.
Then (i) of (SP2) implies that
\[e_{G'}(S,T)\le e_{G}(S,T)+|T|\le\frac{3}{2}\left(|S|+|T|\right)\log n+|T|\le2\left(|S|+|T|\right)\log n,\]
as required. Now suppose that $\left(\frac{1}{|S|}+\frac{1}{|T|}\right)\frac{\log n}{p}\le\frac{7}{2}$.
Then (ii) of (SP2) implies that%
\COMMENT{Indeed, we have:
$\frac{1}{|S|}\cdot\frac{\log n}{p}\le\frac{7}{2}\Rightarrow p|S|\ge\frac{2}{7}\log n>1$ as desired.}
\[e_{G'}(S,T)\le e_{G}(S,T)+|T|\le|T|\left(6p|S|+1\right)\le 7|S||T|p.\]
So (ii) of (P2) holds.
The proof that (P3) holds is essentially the same.%
\COMMENT{Now suppose $S\subseteq V(G)$ - we will show that $G'$ satisfies the
relevant upper bound on $e(S)$. First suppose $\frac{\log n}{|S|p}\ge\frac{7}{4}$
- then we have by (SP4) $e_{G'}(S)\le e_{G}(S)+|S|\le\frac{3}{2}|S|\log n+|S|\le2|S|\log n$
as desired. Now suppose $\frac{\log n}{|S|p}\le\frac{7}{4}$. Then
we have by (SP4) $e_{G'}(S)\le e_{G}(S)+|S|\le|S|\left(3p|S|+1\right)$
and so we must show $1\le\frac{p|S|}{2}$. But we have
$\frac{1}{|S|}\cdot\frac{\log n}{p}\le\frac{7}{4}\Rightarrow p|S|\ge\frac{4}{7}\log n>2$
as desired.}
\end{proof}
We say that a graph $G$ on $n$ vertices is \emph{$u$-downjumping} if it has
a unique vertex $x_{0}$ of maximum degree, and $d(x_{0})\ge d(x)+u$
for all $x\ne x_{0}$.
The following result follows from Lemma~17 in~\cite{Knox2011e} by considering complements.
The latter lemma in turn follows easily from Theorem~3.15 in~\cite{B84}.
\begin{lem}\label{lem:downjumping}
Let $G\sim G_{n,p}$ with $p,1-p=\omega\left(\log n/n\right)$.
Then a.a.s.~$G$ is $5\frac{\sqrt{np(1-p)}}{\log n}$-downjumping.
\end{lem}
The next result is intuitively obvious, but due to possible correlations between vertex degrees, it does merit some justification.
\begin{lem}\label{lem:maxmin}
Suppose that $\log^2 n/n < p' \le p \le 1- \log^2 n/n$, that $p'\le 1/2$ and that $G\sim G_{n,p}$.
Let $H$ be a random subgraph of $G$ obtained by including each edge of $G$ into $H$ with probability $p'/p$.
Then a.a.s.~$G$ contains a unique vertex $x_0$ of maximum degree and $x_0$ does not have minimum degree in $H$.
\end{lem}
\proof
Fix any $\varepsilon>0$.
Let $A$ be the event that $G$ contains a unique vertex $x_0$ of maximum degree and that $d_H(x_0)=\delta(H)$.
Let $f:=np'- \sqrt{np'\log \log n }$. Let $B$ be the event that $\delta(H) \le f$.
Note that $H \sim G_{n,p'}$. So Corollary 3.13 of~\cite{Bollobasbook} implies that $\mathbb{P}(\overline{B}) \le \varepsilon$.
Let $C$ be the event that $G$ contains a unique vertex $x_0$ of maximum degree and that $d_H(x_0) \le f$ and note that $A \cap B \subseteq C$.
Note also that $\mathbb{P}(A) \le \mathbb{P}(A \cap B )+ \mathbb{P}(\overline{B}) \le \mathbb{P}(C)+ \varepsilon$.
We say that a graph $F$ on $n$ vertices is \emph{typical} if $\Delta(F) \ge np$ and there is a unique vertex
of degree $\Delta(F)$. Now let $D$ be the event that $G$ is typical. Then Corollary 3.13 of~\cite{Bollobasbook} and
Lemma~\ref{lem:downjumping} together imply that
$\mathbb{P}(\overline{D}) \le \varepsilon$. For any fixed graph $F$ on $n$ vertices, let $E_F$ denote the event that $G=F$.
Then $\mathbb{P}(C)\le \varepsilon+ \sum_{F \colon F \ {\rm typical}} \mathbb{P}(C \mid E_F) \mathbb{P}(E_F)$.
Suppose that $E_F$ holds, where $F$ is typical. Let $N:=d_G(x_0)$ (note that $E_F$ determines $N$ and $x_0$).
Whether the event $C$ holds is now determined by a sequence of $N$ Bernoulli trials, each with success probability $p'/p$.
So let $X \sim \mbox{Bin}(N,p'/p)$. Then
$\mathbb{E}(X)=N(p'/p) \ge p'n$, which implies that%
\COMMENT{This holds since the function $x-\sqrt{x\log\log n}$ is monotone increasing if $x\ge \log \log n/4$.}
$f \le \mathbb{E}(X)(1-\sqrt{\log \log n/\mathbb{E}(X)})$.
Then an application of Lemma~\ref{lem:chernoff} gives us
$$
\mathbb{P}(C \mid E_F)=\mathbb{P}(X \le f) \le e^{-\log \log n /3} \le \varepsilon.
$$
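Here Lemma~\ref{lem:chernoff} (whose precise form we do not restate) is applied with $t:=\sqrt{\mathbb{E}(X)\log\log n}$: since $f\le\mathbb{E}(X)-t$ by the inequality displayed above, a Chernoff-type bound of the form
\[
\mathbb{P}\left(X\le\mathbb{E}(X)-t\right)\le e^{-\frac{t^{2}}{3\mathbb{E}(X)}}=e^{-\frac{\log\log n}{3}}
\]
yields the stated estimate.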
So $\mathbb{P}(C) \le 2\varepsilon$, which in turn implies that $\mathbb{P}(A) \le 3\varepsilon$. Since $\varepsilon$ was arbitrary, this implies the result.
\noproof\bigskip
Hefetz, Krivelevich and Szab\'o~\cite{HKS} proved a criterion for Hamiltonicity which requires only a rather weak quasirandomness notion.
We will use a special case of their Theorem~1.2.
In that theorem, given a set $S$ of vertices in a graph $G$, we let $N(S)$ denote the external neighbourhood of $S$,
i.e.~the set of all those vertices $x\notin S$ for which there is some vertex $y\in S$ with $xy\in E(G)$.
Also, we say that $G$ is \emph{Hamilton-connected} if for any pair $x,y$ of distinct vertices
there is a Hamilton path with endpoints $x$ and $y$.
\begin{thm}
Suppose that $G$ is a graph on $n$ vertices which satisfies the following:
\begin{itemize}
\item[(HP1)] For every $S \subseteq V(G)$ with $|S| \le n/\sqrt{\log n}$, we have $|N(S)| \ge 20 |S|$.
\item[(HP2)] $G$ contains at least one edge between any two disjoint subsets $A,B \subseteq V(G)$ with $|A|,|B| \ge n/\log n$.%
\COMMENT{so this is the theorem with $d =20$ and where we bound the logs to make the statement cleaner.}
\end{itemize}
Then $G$ is Hamilton-connected.
\end{thm}
\begin{thm}\label{lem:hamilton-connected}
Let $G \sim G_{n,p}$ with $\log ^8 n /n \le p \le 1-n^{-1/3}$, and let $x_{0}$ be a vertex of maximum degree in $G$.
Then a.a.s.~$G-x_{0}$ is Hamilton-connected.
\end{thm}
\proof
It suffices to check that $G-x_0$ satisfies (HP1) and (HP2).
For $p$ in the above range, these properties are well known to hold a.a.s.~for $G$ with room to spare and so also hold for $G-x_0$.
For completeness we point out explicit references.
To check (HP1), first note that Lemma~\ref{lem:rgsarepr} implies that $G$ is $p$-pseudorandom.
So Corollary~37 of~\cite{Knox2011e} applied with $A_x:=N_G(x) \setminus \{x_0\}$ now implies that (HP1) holds.
(HP2) is a special case of Theorem~2.11 in~\cite{Bollobasbook} -- the latter guarantees a.a.s.~the existence of many edges between
$A$ and $B$.%
\COMMENT{When using (i) in Corollary 36, note that $\sum_{x \in S} |A_x|\ge s(np/2)\ge s\log^8 n/2$ and so
$|N(S)|=t\ge \sum_{x \in S} |A_x|/4\log n\ge s\log^6 n$.
When using~(ii), note that $\sum_{x \in S} |A_x|/7sp \ge s (np/2)/7sp =n/14$.
The results in HKS checking HP1 and HP2 do not seem to be formally applicable in our setting.}
\noproof\bigskip
\section{Extending graphs into regular graphs}\label{sec:tutte}
The aim of this section is to show that whenever $H$ is a graph which satisfies certain conditions and $G$ is
a $p$-pseudorandom graph on the same vertex set which is edge-disjoint from $H$, then $G$ contains a spanning subgraph $H'$
whose degree sequence complements that of $H$, i.e.~such that $H\cup H'$ is $\Delta(H)$-regular.
The conditions on $H$ that we need are the following:
\begin{itemize}
\item $H$ has even maximum degree.
\item $H$ is $\sqrt{np}$-downjumping.
\item $H$ satisfies $\Delta(H)-\delta(H)\le (np\log n)^{5/7}$.
\end{itemize}
In order to show this we will use Tutte's $f$-factor theorem, for which we need to introduce the following notation.
Given a graph $G=(V,E)$ and a function $f:V\rightarrow\mathbb{N}\cup \{0\}$,
an \emph{$f$-factor} of $G$ is a subgraph $G'$ of $G$ such
that $d_{G'}(v)=f(v)$ for all $v\in V$. Our approach will then be to
set $f(v):=\Delta(H)-d_{H}(v)$ and attempt to find an $f$-factor
in the pseudorandom graph $G$. The following result of Tutte~\cite{Tutte1, Tutte2}
gives a necessary and sufficient condition for a graph to contain
an $f$-factor.
\begin{thm}\label{thm:Tutte}
A graph $G=(V,E)$ has an $f$-factor if and only
if for every two disjoint subsets $X,Y\subseteq V$, there are at most
\[\sum_{x\in X}f(x)+\sum_{y\in Y}(d(y)-f(y))-e(X,Y)\]
connected components $K$ of $G-X-Y$ such that
\[\sum_{x\in K}f(x)+e(K,Y)\]
is odd.
\end{thm}
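As a sanity check, consider the special case $f\equiv1$, so that an $f$-factor is a perfect matching. Restricting to $Y=\emptyset$, the condition of Theorem~\ref{thm:Tutte} becomes
\[
o(G-X)\le|X|\qquad\text{for every }X\subseteq V,
\]
where $o(G-X)$ denotes the number of odd components of $G-X$; this is precisely the condition in Tutte's classical 1-factor theorem.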
When applying this result, we will often bound the number of components $K$ of $G-X-Y$ for which
$\sum_{x\in K}f(x)+e(K,Y)$ is odd by the total number of components of $G-X-Y$.
The next lemma (which is a special case of Lemma~20 in~\cite{Knox2011e}) implies that there are at most $|X|+|Y|$ such components.
\begin{lem}\label{lem:reghelper}
Let $G=(V,E)$ be a $p$-pseudorandom graph on $n$ vertices with $pn\ge \log n$. Then for any nonempty
$B\subseteq V$, the number of components of $G[V\setminus B]$ is at most $|B|$. In particular, $G$ is connected.
\end{lem}
The following lemma guarantees an $f$-factor in a pseudorandom graph, as long as $\sum_{v\in V}f(v)$ is even, $f(v)$ is not too large and for
all but at most one vertex $f(v)$ is not too small either. (Clearly, the requirement that $\sum_{v\in V}f(v)$ is even is necessary.)%
\COMMENT{Before we had that $\sqrt{np}\le f(v)\le(np\log n)^{\frac{5}{7}}$ holds where the lower bound might fail for a bounded number $C$ of vertices.
But the case when $|X|<C$ and $Y=\emptyset$ was missing. So I changed the lemma since we are only using it in the case when $C=1$ anyway.}
\begin{lem}\label{lem:ffactors}
Let $G=(V,E)$ be a $p$-pseudorandom graph on $n$ vertices with%
\COMMENT{Before we also had $p\le 1/2$, but this seems not to be needed for the proof.}
$pn\ge \log^{21}n$, and let $f:V\rightarrow\mathbb{N}\cup \{0\}$
be a function such that $\sum_{v\in V}f(v)$ is even. Suppose that $G$ contains a vertex $x_0$ such that
$f(x_0)$ is even and such that
$$ f(x_0)\le (np\log n)^{\frac{5}{7}} \ \ \ \text{and} \ \ \ \sqrt{np}\le f(v)\le(np\log n)^{\frac{5}{7}} \ \ \text{for all} \ \ v\in V\setminus\{x_0\}.$$
Then $G$ has an $f$-factor.
\end{lem}
\begin{proof}
Given two disjoint sets $X,Y\subseteq V$, we define $\alpha_{f}(X,Y)$
to be the number of connected components $K$ of $G-X-Y$ such that
\[\sum_{x\in K}f(x)+e(K,Y)\] is odd. We also define
\[\beta_{f}(X,Y):=\sum_{x\in X}f(x)+\sum_{y\in Y}\left(d(y)-f(y)\right)-e(X,Y).\]
By \prettyref{thm:Tutte}, it then suffices to prove that $\alpha_{f}(X,Y)\le\beta_{f}(X,Y)$.
We will first show that $\alpha_{f}(X,Y)\le|X|+|Y|$. If either $X$ or $Y$
is nonempty, this follows immediately from \prettyref{lem:reghelper}.
If both $X$ and $Y$ are empty, then we must show that $\alpha_{f}(\emptyset,\emptyset)=0$. But
this holds since $G$ is connected by \prettyref{lem:reghelper},
and $\sum_{x\in V}f(x)$ is even by hypothesis. Hence $\alpha_{f}(X,Y)\le|X|+|Y|$
in all cases.
Hence if
\begin{equation}\label{eq:betaXY}
\beta_{f}(X,Y)\ge|X|+|Y|
\end{equation}
holds, then we have $\alpha_{f}(X,Y)\le\beta_{f}(X,Y)$ and we are done. If $X=Y=\emptyset$, (\ref{eq:betaXY}) holds. So it remains to consider the following cases.
\begin{caseenv}
\item $|X|\le 1$.
\smallskip

Suppose first that $Y=\emptyset$. Since the case $X=Y=\emptyset$ has already been dealt with,
we then have $|X|=1$; let $x$ denote the unique vertex in $X$.
In this case Lemma~\ref{lem:reghelper} implies that $G-x=G-X-Y$ is connected.
If $x=x_0$ then $\sum_{v\in V\setminus\{x\}} f(v)= \sum_{v\in V} f(v)-f(x)$ is even. Thus
$\alpha_{f}(X,Y)=0$ and so $\beta_{f}(X,Y)\ge \alpha_{f}(X,Y)$, as desired. If $x\neq x_0$
then $\beta_{f}(X,Y)=f(x)\ge \sqrt{np}\ge 1\ge \alpha_{f}(X,Y)$, as desired.
Thus we may assume that $Y\neq \emptyset$. Then
\begin{eqnarray*}
\beta_{f}(X,Y) & \ge & \sum_{y\in Y}\left(d(y)-f(y)\right)-|X||Y|\\
& \stackrel{({\rm P4})}{\ge} & \left(np-2\sqrt{np\log n}-(np\log n)^{\frac{5}{7}}\right)|Y|-|Y|\\
& \ge & \frac{np}{2}|Y|\ge |X|+|Y|
\end{eqnarray*}
and so~(\ref{eq:betaXY}) holds.
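To see the final line of the chain above, note that $np\ge\log^{21}n$ gives $\log n\le(np)^{\frac{1}{21}}$, and hence
\[
2\sqrt{np\log n}\le2(np)^{\frac{11}{21}}\le\frac{np}{8}
\quad\text{and}\quad
(np\log n)^{\frac{5}{7}}\le(np)^{\frac{16}{21}}\le\frac{np}{8},
\]
so that $np-2\sqrt{np\log n}-(np\log n)^{\frac{5}{7}}-1\ge\frac{np}{2}$; moreover $\frac{np}{2}|Y|\ge|X|+|Y|$ since $|X|\le1$ and $np\ge4$.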
\item $|X|>1$ and $|Y|\le\frac{1}{4}|X|(np)^{-\frac{3}{14}}\log^{-\frac{5}{7}}n$.
\smallskip
Since $\sum_{y\in Y}d(y)\ge e(X,Y)$ it follows that in this case we have
\begin{align*}
\beta_{f}(X,Y) & \ge \sum_{x\in X}f(x)-\sum_{y\in Y}f(y)
\ge (|X|-1)\sqrt{np}-|Y|(np\log n)^{\frac{5}{7}}\\
& \ge \frac{\sqrt{np}}{2}|X|-\frac{\sqrt{np}}{4}|X|\ge 2|X|\ge |X|+|Y|,
\end{align*}
and so~(\ref{eq:betaXY}) holds.
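The middle steps here are routine: $|X|\ge2$ gives $(|X|-1)\sqrt{np}\ge\frac{\sqrt{np}}{2}|X|$, while the assumed upper bound on $|Y|$ gives
\[
|Y|(np\log n)^{\frac{5}{7}}\le\frac{|X|}{4(np)^{\frac{3}{14}}\log^{\frac{5}{7}}n}\cdot(np)^{\frac{5}{7}}\log^{\frac{5}{7}}n=\frac{\sqrt{np}}{4}|X|;
\]
the final two inequalities then follow since $\frac{\sqrt{np}}{4}\ge2$.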
\item $1 < |X| \leq \frac{n}{2}$ and $|Y|>\frac{1}{4}|X|(np)^{-\frac{3}{14}}\log^{-\frac{5}{7}}n$.
\smallskip
It follows by (P1) and~(\ref{eq:jumbled}) that
\[e(X,Y)\le p|X||Y|+4\sqrt{np}(|X|+|Y|).\]
Thus
\begin{eqnarray}
\beta_{f}(X,Y)-\alpha_{f}(X,Y) & \ge & \sum_{y\in Y}\left(d(y)-f(y)\right)-e(X,Y)-|X|-|Y|\nonumber\\
& \stackrel{({\rm P4})}{\ge} & \left(np-2\sqrt{np\log n}-(np\log n)^{\frac{5}{7}}\right)|Y|-p|X||Y|-5\sqrt{np}(|X|+|Y|)\nonumber\\
& \ge & \left(p(n-|X|)-2(np\log n)^{\frac{5}{7}}\right)|Y|-5\sqrt{np}|X|\label{eq:beta2}\\
& \ge & \left(\frac{np}{2}-2(np\log n)^{\frac{5}{7}}\right)|Y|-5\sqrt{np}|X|\nonumber\\
& \ge & \frac{1}{4}\left(\frac{(np)^{\frac{11}{14}}}{2\log^{\frac{5}{7}}n}-22\sqrt{np}\right)|X|\ge 0\nonumber,
\end{eqnarray}
as desired.%
\COMMENT{For the last inequality we use that $(np)^{11/14}\ge 44(np)^{1/2}\log^{5/7}n$ since $np\gg \log^{5/2}n$.}
\item $|X|>\frac{n}{2}$ and $|Y|>\frac{1}{4}|X|(np)^{-\frac{3}{14}}\log^{-\frac{5}{7}}n$.
\smallskip
In this case we have
\[n-|X|\ge|Y|\ge\frac{|X|}{4(np)^{\frac{3}{14}}\log^{\frac{5}{7}}n}\ge\frac{n^{\frac{11}{14}}}{8p^{\frac{3}{14}}\log^{\frac{5}{7}}n}.\]
But as in the previous case, one can show that~(\ref{eq:beta2}) still holds and so
\begin{align*}
\beta_{f}(X,Y)-\alpha_{f}(X,Y) & \ge \left(p(n-|X|)-2(np\log n)^{\frac{5}{7}}\right)|Y|-5\sqrt{np}|X|\\
& \ge \left(\frac{(np)^{\frac{11}{14}}}{8\log^{\frac{5}{7}}n}-2(np\log n)^{\frac{5}{7}}\right)|Y|-5\sqrt{np}|X|\\
& \ge \frac{\left(np\right)^{\frac{11}{14}}}{9\log^{\frac{5}{7}}n}|Y|-5\sqrt{np}|X|\\
& \ge \left(\frac{(np)^{\frac{4}{7}}}{36\log^{\frac{10}{7}}n}-5\sqrt{np}\right)|X|\ge 0,
\end{align*}
as desired.%
\COMMENT{For the third inequality we use that $(np)^{11/14}\ge 144 (np)^{5/7}\log^{10/7} n$ as $np\ge \log^{21}n$.
Similarly, the final inequality holds since $(np)^{4/7}\ge 180(np)^{1/2}\log^{10/7} n$, i.e. $(np)^{1/14}\ge 180\log^{10/7} n$
as $np\ge \log^{21}n$.}
\end{caseenv}
This completes the proof of the lemma.
\end{proof}
\begin{cor}\label{cor:exactregularise}
Let $G$ be a $p$-pseudorandom graph on $n$ vertices, where $pn\ge \log^{21}n$.
Suppose that $H$ is a graph on $V(G)$ which satisfies the following conditions:
\begin{itemize}
\item $H$ is $\sqrt{np}$-downjumping.
\item If $x_0$ is the unique vertex of maximum degree in $H$ then $H-x_0$ and $G-x_0$ are edge-disjoint.
\item $\Delta(H)$ is even.
\item $\Delta(H)-\delta(H)\le(np\log n)^{\frac{5}{7}}$.
\end{itemize}
Then there exists a $\Delta(H)$-regular graph $H'$ such that $H\subseteq H'\subseteq G\cup H$.
\end{cor}
\begin{proof}
Define $f(v):=\Delta(H)-d_{H}(v)$ for all $v\in V(G)$.
Then
\[\sum_{v\in V}f(v)=n\Delta(H)-\sum_{v\in V}d_{H}(v),\]
which is even, since $\Delta(H)$ is even and $\sum_{v\in V}d_{H}(v)=2e(H)$. Moreover $f(x_{0})=0$ and our assumptions on~$H$ imply that
\[\sqrt{np}\le f(v)\le\Delta(H)-\delta(H)\le(np\log n)^{\frac{5}{7}}\]
for all $v\in V\setminus\{x_{0}\}$.
We may therefore apply \prettyref{lem:ffactors} to find an $f$-factor $G'$ in~$G$. Then $H':=H\cup G'$
is a $\Delta(H)$-regular graph as desired.
\end{proof}
\section{Proof of Theorem~\ref{thm:main-result}}\label{sec:proof}
The main tool for our proof of Theorem~\ref{thm:main-result} is the following result from \cite[Lemma~47]{Knox2011e}.
Roughly speaking, it asserts that given a regular graph $H_0$ which is contained in a pseudorandom graph $G$
and given a pseudorandom subgraph $G_0$ of $G$ which is allowed to be quite sparse compared to $H_0$, we can find a set of edge-disjoint Hamilton cycles in
$H_0 \cup G_0$ which cover all edges of $H_0$. For technical reasons, instead of a single pseudorandom graph $G_0$,
in its proof we actually need to consider a union of several edge-disjoint pseudorandom graphs $G_1,\dots,G_{2m+1}$, where $m$ is of order $\log n/\log\log n$.
\begin{lem}\label{lem:prcovering}
Suppose that $p_{0}\ge\frac{\log^{14}n}{n}$ and
$p_{1}\ge\frac{(np_{0})^{\frac{3}{4}}\log^{\frac{5}{2}}n}{n}$. Let
$m:=\frac{\log(n^{2}p_{1})}{\log\log n}$, and for all $i\in[2m+1]$ set
$p_{i}:=p_{1}$ if $i$ is odd, and $p_{i}:=10^{10}p_{1}$ if $i$ is even.
Let $G$ be a $p_{0}$-pseudorandom graph on $n$ vertices. Suppose that
$G_{1},\ldots,G_{2m+1}$ are pairwise edge-disjoint spanning subgraphs of $G$
such that each $G_{i}$ is $p_{i}$-pseudorandom. Moreover, for all $i\in[2m+1]$,
let $H_{i}$ be an even-regular spanning subgraph of $G_{i}$ with
$\delta(G_{i})-1\le d(H_{i})\le\delta(G_{i})$. Suppose that $H_{0}$ is
an even-regular spanning subgraph of $G$ which is edge-disjoint from $\bigcup_{i=1}^{2m+1}H_{i}$.
Then there exists a collection $\mathcal{HC}$ of edge-disjoint Hamilton cycles such that the union $HC := \bigcup \mathcal{HC}$
of all these Hamilton cycles satisfies $H_{0}\subseteq HC\subseteq\bigcup_{i=0}^{2m+1}H_{i}$.
\end{lem}
The following lemma is a special case of Lemma 22(ii) of \cite{Knox2011e}. Given $p_i$-pseudo\-random
graphs $G_i$ as in Lemma~\ref{lem:prcovering}, it allows us to find the even-regular spanning subgraphs
$H_i$ required by Lemma~\ref{lem:prcovering}.
\begin{lem}
\label{lem:rfactors}Let $G$ be a $p$-pseudorandom graph on $n$
vertices such that $p,1-p=\omega\left(\log^{2}n/n\right)$.
Then $G$ has an even-regular spanning subgraph $H$ with $\delta(G)-1\le d(H)\le\delta(G)$.
\end{lem}
The next lemma ensures that $G\sim G_{n,p}$ contains a collection of Hamilton cycles which cover all edges of $G$ except for
some edges at the vertex $x_0$ of maximum degree and such that every edge at $x_0$ is covered at most once.
Theorem~\ref{thm:main-result} will then be an easy consequence of this lemma and Theorem~\ref{lem:hamilton-connected}.
\begin{lem}\label{thm:defactomainresult}
Let $G\sim G_{n,p}$, where $\frac{\log^{117}n}{n}\le p\le 1-n^{-\frac{1}{8}}$.
Then a.a.s.~$G$ has a unique vertex $x_0$ of degree $\Delta(G)$ and there
exist a collection $\mathcal{HC}$ of Hamilton cycles in $G$ and
a collection $F$ of edges incident to $x_{0}$ such that
\begin{itemize}
\item[{\rm (i)}] every edge of $G-F$ is covered by some Hamilton cycle in $\mathcal{HC}$;
\item[{\rm (ii)}] no edge in $F$ is covered by a Hamilton cycle in $\mathcal{HC}$;
\item[{\rm (iii)}] no edge incident to $x_{0}$ is covered by more than one
Hamilton cycle in $\mathcal{HC}$.
\end{itemize}
\end{lem}
Note that in Lemma~\ref{thm:defactomainresult}, we have $|\mathcal{HC}| = (\Delta(G) - |F|)/2$.
The strategy of our proof of Lemma~\ref{thm:defactomainresult} is as follows.
We split $G\sim G_{n,p}$ into three edge-disjoint random graphs $G_1$, $G_2$ and $R$ such that the density of $G_1$ is
almost $p$ and both $G_2$ and $R$ are much sparser.
It turns out we may assume that the vertex $x_0$ of maximum degree in $G$ also has maximum degree in $G_1$.
We then apply Corollary~\ref{cor:exactregularise} in order to extend $G_1$ into a $\Delta(G_1)$-regular
graph by using some edges of $R$. Next we apply Lemma~\ref{lem:prcovering} in order to cover this regular graph
with edge-disjoint Hamilton cycles, using some edges of~$G_2$.
Let $H_2$ be the subgraph of $R\cup G_2$ which is not
covered by these Hamilton cycles. Again, we can make sure that $x_0$ is still the vertex of maximum degree in~$H_2$.
We now apply Corollary~\ref{cor:exactregularise} again in order to extend $H_2$ into a $\Delta(H_2)$-regular
graph $H_2'$ by using edges of a random subgraph $R'$ of~$G_1$ (i.e.~edges which we have already covered by Hamilton cycles).
Finally, we would like to apply Lemma~\ref{lem:prcovering} in order to cover this regular graph by edge-disjoint
Hamilton cycles, using edges of another sparse random subgraph $G'$ of~$G_1$. However, this means that in the last step we might use edges of $G'$ at $x_0$,
i.e.~edges which have already been covered with edge-disjoint Hamilton cycles. Clearly, this would violate condition~(iii) of the lemma.
We overcome this problem as follows: at the beginning, we delete all those
edges at $x_0$ from $G_1$ which lie in $G'$, and then we regularize and cover the graph $H_1$ thus obtained from $G_1$ as before, instead of $G_1$ itself.
However, we have to ensure that $x_0$ is still the vertex of maximum degree in $H_1$. This forces us to make $G'$ quite sparse: the average degree of $G'$
needs to be significantly smaller than the gap between $d_G(x_0)=\Delta(G)$ and the degree of the next vertex, i.e.~significantly smaller than
$\sqrt{np(1-p)}/\log n$. Unfortunately it turns out that such a choice would make $G'$ too sparse to apply Lemma~\ref{lem:prcovering} in order to cover $H_2$.
Thus the above two `iterations' are not sufficient to prove the lemma (where each iteration consists of an application of Corollary~\ref{cor:exactregularise} to regularize and
then an application of Lemma~\ref{lem:prcovering} to cover). But with three iterations, the above approach can be made to work.
\medskip
\noindent
\emph{Proof of Lemma~\ref{thm:defactomainresult}.}
Lemmas~\ref{lem:rgsarepr} and~\ref{lem:downjumping} imply that a.a.s.~$G$ satisfies the following two conditions:
\begin{itemize}
\item[(a)] $G$ is $p$-pseudorandom.
\item[(b)] $G$ is $5u$-downjumping, where $u:=\frac{\sqrt{np(1-p)}}{\log n}.$
\end{itemize}
Note that
\begin{equation}\label{eq:pnu}
(np)^{\frac{27}{64}}\log^{\frac{259}{32}}n=\frac{\sqrt{np(1-p)}}{\log n}\cdot\frac{\log^{\frac{291}{32}}n}{(np)^{\frac{5}{64}}\sqrt{1-p}}\le\frac{u}{2}.
\end{equation}
Indeed, to see the last inequality note that either $1-p\ge 1/2$ and $(np)^{\frac{5}{64}}\ge\log^{\frac{292}{32}}n$
or%
\COMMENT{This is equivalent to $np \ge \log^{2\cdot 292/5}n=\log^{116.8} n$. So this is the point where we need that $np\ge \log^{117} n$.
For the next calculation we need the upper bound on~$p$.}
$(np)^{\frac{5}{64}}\ge (n/2)^{\frac{5}{64}}$
and $\sqrt{1-p}\ge n^{-\frac{1}{16}}$. So here we use the bounds on~$p$ in the lemma. Define
\begin{eqnarray*}
p_{2} & := & \frac{(np)^{\frac{3}{4}}\log^{\frac{7}{2}}n}{n}\ge\frac{\log^{91}n}{n},\\
p_{3} & := & \frac{(np_{2})^{\frac{3}{4}}\log^{\frac{7}{2}}n}{n}=\frac{(np)^{\frac{9}{16}}\log^{\frac{49}{8}}n}{n}\ge\frac{\log^{71}n}{n},\\
p'_{3} & := & 1600p_3,\\
p_{4} & := & \frac{(np_{3})^{\frac{3}{4}}\log^{\frac{7}{2}}n}{n}=\frac{(np)^{\frac{27}{64}}\log^{\frac{259}{32}}n}{n}\ge\frac{\log^{57}n}{n},\\
p_{1} & := & p-2p_{2}-p_{3},\\
m_{i} & := & \frac{\log(n^{2}p_{i})}{\log\log n} \ \ \ \textnormal{for all} \ \ \ 2 \leq i \leq 4,\\
p_{(i,j)} & := & \begin{cases}
\frac{p_{i}}{(10^{10}+1)m_{i}+1} & \textnormal{ if } 2 \leq i \leq 4 \text{ and if } j \in [2m_i+1] \textnormal{ is odd},\\
\frac{10^{10}p_{i}}{(10^{10}+1)m_{i}+1} & \textnormal{ if } 2 \leq i \leq 4 \text{ and if } j \in [2m_i+1] \textnormal{ is even.}
\end{cases}
\end{eqnarray*}
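A quick exponent count justifies the lower bounds claimed above. Since $np\ge\log^{117}n$, we have
\[
np_{2}=(np)^{\frac{3}{4}}\log^{\frac{7}{2}}n\ge\log^{\frac{351}{4}+\frac{7}{2}}n=\log^{\frac{365}{4}}n\ge\log^{91}n,
\]
and similarly $np_{3}\ge\log^{\frac{1151}{16}}n\ge\log^{71}n$ and $np_{4}\ge\log^{\frac{3677}{64}}n\ge\log^{57}n$.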
Now form random subgraphs of $G$ as follows. First partition $G$
into edge-disjoint random graphs $G_{1}$, $G_{2}$, $G_{3}$ and $R_{2}$
such that $G_{i}\sim G_{n,p_{i}}$ for $i = 1,2,3$ and $R_{2}\sim G_{n,p_{2}}$. (This can be done by randomly including each edge $e$ of $G$ into
precisely one of $G_{1}$, $G_{2}$, $G_{3}$ and $R_{2}$, where the probability that $e$ is included into $G_i$ is $p_i/p$ and
the probability that $e$ is included into $R_2$ is $p_2/p$, independently of all other edges of $G$.) We then choose
edge-disjoint random subgraphs $R'_{2}$, $R_{4}$ and $G_{4}$ of $G_{1}$
with $R'_{2}\sim G_{n,p_{2}}$, $R_{4}\sim G_{n,p_{4}}$, and $G_{4}\sim G_{n,p_{4}}$.
(Since $p_1\ge p_2+2p_4$ this can be done similarly to before.)
Next we choose a random subgraph $G'_3$ of $G_2$ such that $G'_{3}\sim G_{n,p'_{3}}$.
To summarize, we thus have the following containments, where $\dot{\cup}$ denotes the edge-disjoint union of graphs:
$$
G=G_1 \ \dot{\cup} \ G_2 \ \dot{\cup} \ G_3 \ \dot{\cup} \ R_2 \ \ \ \mbox{and} \ \ \
G_1 \supseteq R'_2 \ \dot{\cup}\ R_4 \ \dot{\cup}\ G_4 \ \ \ \mbox{and} \ \ \
G_2 \supseteq G'_3.
$$
Finally, for each $i\in\{2,3,4\}$, we partition $G_{i}$ into edge-disjoint random
subgraphs $G_{(i,1)},\ldots,G_{(i,2m_{i}+1)}$ with $G_{(i,j)}\sim G_{n,p_{(i,j)}}$.
Lemma~\ref{lem:rgsarepr} and a union bound imply that a.a.s.~the following conditions hold:
\begin{itemize}
\item[(c)] $G_{i}$ is $p_{i}$-pseudorandom for all $i=1,\dots,4$.
\item[(d)] $G_{(i,j)}$ is $p_{(i,j)}$-pseudorandom for all $i=2,3,4$ and all $j\in [2m_i+1]$.
\item[(e)] $R_{2}$ and $R'_{2}$ are $p_{2}$-pseudorandom, and $R_{4}$ is $p_{4}$-pseudorandom.
\item[(f)] $R_2\cup G_{2}\cup R'_{2}\cup G_{3}$
is strongly $(3p_{2}+p_{3})$-pseudorandom and $G'_3\cup G_{3}\cup R_{4}\cup G_{4}$
is strongly $(p'_3+p_{3}+2p_{4})$-pseudorandom.
\end{itemize}
Since $R_2\cup G_{2}\cup R'_{2}\cup G_{3}\sim G_{n,3p_{2}+p_{3}}$ and $G'_3\cup G_{3}\cup R_{4}\cup G_{4}\sim G_{n,p'_3+p_{3}+2p_{4}}$,
Lemma~\ref{lem:maxmin} implies that a.a.s.~the following condition holds:
\begin{itemize}
\item[(g)] Let $x_0$ be the unique vertex of maximum degree of $G$. Then $x_0$ is not the vertex of minimum degree
in $R_2\cup G_{2}\cup R'_{2}\cup G_{3}$ or $G'_3\cup G_{3}\cup R_{4}\cup G_{4}$.
\end{itemize}
It follows that a.a.s.~conditions (a)--(g) are all satisfied; in the remainder of the proof we will thus assume that they are.
We can apply \prettyref{lem:rfactors}
for each $i=2,3,4$ and each $j\in [2m_i+1]$ to obtain an even-regular spanning subgraph $H_{(i,j)}$ of $G_{(i,j)}$
with $\delta(G_{(i,j)})-1\le d(H_{(i,j)})\le\delta(G_{(i,j)})$.
As indicated earlier, our strategy consists of the following three iterations. The purpose of the first iteration is to cover all the edges of $G_1$. To do this, we will apply Corollary~\ref{cor:exactregularise} in order to extend $G_1$ into a regular graph $H'_1$, using some edges of $R_2$. (Actually we will first set aside a set $F_1$ of edges of $G_1$ at $x_0$, but this will still leave $x_0$ the vertex of maximum degree in $H_1:=G_1-F_1$. In particular, $F_1$ will contain the set $F^*$ of all edges of $G_4$ at $x_0$.) We will then apply Lemma~\ref{lem:prcovering} to cover $H'_1$ with edge-disjoint Hamilton cycles, using some edges of $G_2$.
The purpose of the second iteration is to cover all the edges of $G_2\cup R_2$ not already covered in the first iteration -- we denote this remainder by $H_2$. It turns out that $x_0$ will still be the vertex of maximum degree in $H_2$. If $\Delta(H_2)$ is odd, then we will add one edge from $F_1\setminus F^*$ to $H_2$ to obtain a graph $H'_2$ of even maximum degree. Otherwise, we simply let $H'_2:=H_2$. We extend $H'_2$ into a regular graph $H''_2$ using Corollary~\ref{cor:exactregularise} and some edges of $R'_2$, then cover $H''_2$ with edge-disjoint Hamilton cycles using Lemma~\ref{lem:prcovering} and some edges of $G_3$.
The purpose of the third iteration is to cover all the edges of $G_3$ not already covered in the second iteration -- we denote this remainder by $H_3$. We first add some (so far unused) edges from $F_1 \setminus F^*$ to $H_3$ in order to make $x_0$ the unique vertex of maximum degree. Let $H'_3$ denote the resulting graph. We then extend $H'_3$ into a regular graph $H''_3$ using Corollary~\ref{cor:exactregularise} and some edges of $R_4$, and finally cover $H''_3$ with edge-disjoint Hamilton cycles using Lemma~\ref{lem:prcovering} and some edges of~$G_4$.
It is in this iteration that we make use of $G_3'$, for technical reasons. It turns out that $G_3 \cup G_4 \cup R_4$ is so sparse that adding the required edges from $F_1 \setminus F^*$ may destroy its pseudorandomness, rendering it unsuitable as a choice of $G$ in Lemma~\ref{lem:prcovering}. Since the only role of $G$ in Lemma~\ref{lem:prcovering} is that of a `container' for the other graphs, this issue is easy to solve by adding a slightly denser random graph to $G_3 \cup G_4 \cup R_4$, namely $G_3'$.
Note that we did not use any edges of $R'_2$ at $x_0$ when turning $H'_2$ into $H''_2$ since $x_0$ is a vertex of maximum degree in $H'_2$. Similarly, we did not use any edges of $R_4$ at $x_0$ when turning $H'_3$ into $H''_3$. Moreover, $F^*$ was the set of all edges of $G_4$ at $x_0$ and no edge in $F^*$ was covered in the first two iterations.
Altogether this means that we do not cover any edge at $x_0$ more than once.
Note that in the second and third iterations, the graphs $R'_2$ and $R_4$ we use for regularising consist of edges we have already covered. In the second iteration, this turns out to be a convenient way of controlling the difference between the maximum and minimum degree of $H_3$ (which might have been about $\Delta(G) - \delta(G)$ if we had used uncovered edges). In the third iteration, there are simply no more uncovered edges available.
Having outlined our strategy, we now return to the actual proof.
We claim that $x_{0}$ is the unique vertex of maximum degree in $G_{1}$ and that $G_{1}$ is $4u$-downjumping. Indeed, for all
$x\ne x_{0}$ we have
\begin{align*}
d_{G_{1}}(x) & = d_{G}(x)-d_{G_2\cup G_3\cup R_2}(x) \stackrel{({\rm b})}{\le} d_{G}(x_{0})-5u-d_{G_2\cup G_3\cup R_2}(x)\\
& =d_{G_1}(x_{0})+d_{G_2\cup G_3\cup R_2}(x_0)-5u-d_{G_2\cup G_3\cup R_2}(x)\\
& \le d_{G_{1}}(x_{0})+\Delta(G_{2})+\Delta(G_{3})+\Delta(R_{2})-5u-\delta(G_{2})-\delta(G_{3})-\delta(R_{2})\\
& \le d_{G_{1}}(x_{0})-\left(5u-12\sqrt{np_{2}\log n}\right),
\end{align*}
where the last inequality follows from the facts that both $G_2$ and $R_2$ are $p_2$-pseudorandom,
$G_3$ is $p_3$-pseudorandom, $p_3\le p_2$ as well as from (P4) and~(P5).
But
\begin{equation}\label{eq:p2gap}
\sqrt{np_{2}\log n}=(np)^{\frac{3}{8}}\log^{\frac{9}{4}}n\stackrel{(\ref{eq:pnu})}{\le} \frac{u}{2} \cdot (np)^{-\frac{3}{64}}
\le \frac{u}{\log n}.
\end{equation}
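For the equality in (\ref{eq:p2gap}), note that
\[
np_{2}\log n=(np)^{\frac{3}{4}}\log^{\frac{9}{2}}n,
\]
whose square root is $(np)^{\frac{3}{8}}\log^{\frac{9}{4}}n$; the final inequality in (\ref{eq:p2gap}) holds since $(np)^{\frac{3}{64}}\ge\log^{\frac{351}{64}}n\ge\log n$.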
Altogether this shows that $d_{G_1}(x)\le d_{G_1}(x_0)-4u$ for all $x\neq x_0$. Thus $G_1$ is $4u$-downjumping and $x_0$ is
the unique vertex of maximum degree in $G_{1}$, as desired.
Note that
\begin{equation}\label{eq:DeltaG4}
\Delta(G_{4})\le 2np_{4}=2(np)^{\frac{27}{64}}\log^{\frac{259}{32}}n \stackrel{(\ref{eq:pnu})}{\le} u.
\end{equation}
Let $F^*$ be the set of all edges of $G_4$ which are incident to~$x_0$. Thus $|F^*|\le u$ by~(\ref{eq:DeltaG4}).
Choose a set $F_{1}$ of edges incident to $x_{0}$ in $G_{1}$ such that $F^*\subseteq F_1$,
\begin{equation}\label{eq:sizeF1}
3u-1\le|F_{1}|\le 3u,
\end{equation}
and such that $\Delta(G_{1}-F_{1})$ is even. Note that we used (\ref{eq:DeltaG4}) and thus the full strength of~(\ref{eq:pnu})
(in the sense that it would no longer hold if we replace 117 by 116 in the lower bound on $p$ stated in Lemma~\ref{thm:defactomainresult})
in order to be able to guarantee that $F^*\subseteq F_1$. So this is the point where we need the bounds on~$p$ in the lemma.
Let $H_{1}:=G_{1}-F_{1}$. Thus $H_{1}$ is still $u$-downjumping.
Our next aim is to apply \prettyref{cor:exactregularise} in order to extend $H_1$ into
a $\Delta(H_1)$-regular graph $H_1'$, using some of the edges of $R_{2}$. So we need to check that the conditions
in \prettyref{cor:exactregularise} are satisfied. But since $G_1$ is $p_1$-pseudorandom we have
\begin{align}
\Delta(H_{1})-\delta(H_{1}) & \le \Delta(G_{1})-\delta(G_{1}) \stackrel{({\rm P4}),({\rm P5})}{\le } 4\sqrt{n p_1\log n}\nonumber\\
& \le 4\sqrt{n p\log n} = 4(np_{2})^{\frac{2}{3}}\log^{-\frac{11}{6}}n \le (np_{2}\log n)^{\frac{5}{7}}.\label{eq:pnp_2n}
\end{align}
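The equality in (\ref{eq:pnp_2n}) is again a matter of exponents:
\[
(np_{2})^{\frac{2}{3}}=\left((np)^{\frac{3}{4}}\log^{\frac{7}{2}}n\right)^{\frac{2}{3}}=(np)^{\frac{1}{2}}\log^{\frac{7}{3}}n,
\]
so $4(np_{2})^{\frac{2}{3}}\log^{-\frac{11}{6}}n=4(np)^{\frac{1}{2}}\log^{\frac{7}{3}-\frac{11}{6}}n=4\sqrt{np\log n}$.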
Moreover $p_2 \ge \log^{21}n/n$ and $H_{1}$
is $u$-downjumping and so $\sqrt{np_{2}}$-downjumping by~(\ref{eq:p2gap}).
Since $R_2$ is $p_2$-pseudorandom we may therefore apply \prettyref{cor:exactregularise}
to find a regular graph $H_{1}'$ of degree $\Delta(H_1)$ with $H_{1}\subseteq H_{1}'\subseteq H_{1}\cup R_{2}$.
Next, we wish to apply \prettyref{lem:prcovering} in order to cover $H_1'$ with edge-disjoint Hamilton cycles. Note that for every $1 \leq j \leq 2 m_2 + 1$
\begin{equation}\label{eq:p2j}
np_{(2,j)}\ge\frac{np_{2}}{(10^{10}+1)m_{2}+1}\ge\frac{(np)^{\frac{3}{4}}\log^{\frac{7}{2}}n\log\log n}{10^{11}\log n}\ge(np)^{\frac{3}{4}}\log^{\frac{5}{2}}n.
\end{equation}
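To verify the first inequality in (\ref{eq:p2j}), note that $p_{2}\le1$ implies $m_{2}=\log(n^{2}p_{2})/\log\log n\le2\log n/\log\log n$, so that
\[
(10^{10}+1)m_{2}+1\le\frac{10^{11}\log n}{\log\log n}.
\]
The final inequality in (\ref{eq:p2j}) then holds since $\log\log n\ge10^{11}$ for $n$ sufficiently large, as we may assume throughout.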
So we can apply \prettyref{lem:prcovering} with $G$, $H'_1$, $G_{(2,1)},\dots,G_{(2,2m_2+1)}$ and $H_{(2,1)},\dots,H_{(2,2m_2+1)}$
playing the roles of $G$, $H_0$, $G_1,\dots,G_{2m+1}$ and $H_1,\dots,H_{2m+1}$ to obtain a collection $\mathcal{HC}_{1}$ of edge-disjoint
Hamilton cycles such that the union $HC_1 := \bigcup \mathcal{HC}_{1}$ of these Hamilton
cycles satisfies $$H_{1}'\subseteq HC_{1}\subseteq H_{1}'\cup\bigcup_{j=1}^{2m_2+1} H_{(2,j)}\subseteq H'_1\cup G_2.$$
Write $H_{2}:=(G_{2}\cup R_{2})\setminus E(HC_{1})$ for the uncovered
remainder of $G_{2}\cup R_{2}$. Note that
\begin{itemize}
\item[(HC1)] no edge of $G$ incident to $x_{0}$ is covered more than once in $\mathcal{HC}_{1}$;
\item[(HC1$'$)] $HC_{1}$ contains no edges from $F_1$.
\end{itemize}
Our next aim is to extend $H_{2}$ into a regular graph $H'_2$ using some of the edges of $R'_{2}$. We will then use some of the edges
of $G_3$ in order to find edge-disjoint Hamilton cycles which cover $H'_2$.
Note that
\begin{equation}\label{eq:degreesH2}
d_{H_{2}}(x)=d_{H_{1}}(x)+d_{R_{2}\cup G_{2}}(x)-2|\mathcal{HC}_{1}|
\end{equation}
for all $x\in V(G)$.
Together with the fact that $H_1$ is $u$-downjumping this implies that for all $x\neq x_0$ we have
\begin{align*}
d_{H_{2}}(x_{0})-d_{H_{2}}(x) & = (d_{H_{1}}(x_{0})-d_{H_{1}}(x))+(d_{R_{2}\cup G_{2}}(x_{0})-d_{R_{2}\cup G_{2}}(x))\\
& \ge u-(\Delta(R_{2})+\Delta(G_{2})-(\delta(R_{2})+\delta(G_{2})))\\
& \ge u-8\sqrt{np_{2}\log n} \stackrel{(\ref{eq:p2gap})}{\ge} \sqrt{np_{2}}.
\end{align*}
(For the second inequality we used the fact that both $R_2$ and $G_2$ are $p_2$-pseudo\-random together with (P4) and~(P5).)
Thus $x_0$ is the unique vertex of maximum degree in $H_2$ and $H_2$ is $\sqrt{np_{2}}$-downjumping.
If $\Delta(H_2)$ is odd, let $H'_2$ be obtained from $H_2$ by adding some edge from $F_1\setminus F^*$.
Condition~(g) ensures that we can choose this edge in such a way that it is not incident to the unique vertex of minimum degree in the
$(3p_2+p_3)$-pseudorandom graph $R_2\cup G_2\cup R'_2\cup G_3$. Let $F'_1$ be the set consisting of this edge.
If $\Delta(H_2)$ is even, let $H'_2:=H_2$ and $F'_1:=\emptyset$.
In both cases, let $F_2:=F_1\setminus F'_1$ and note that $H'_2$ is still $\sqrt{np_{2}}$-downjumping.
Moreover,
\begin{eqnarray*}
\Delta(H_{2}')-\delta(H_{2}') & \le & \Delta(H_{2})-\delta(H_{2})+1\\
& \stackrel{(\ref{eq:degreesH2})}{\le} & \Delta(H_{1})+\Delta(G_{2})+\Delta(R_{2})-\delta(H_{1})-\delta(G_{2})-\delta(R_{2})+1\\
& \le & \Delta(G_{1})+\Delta(G_{2})+\Delta(R_{2})-\delta(G_{1})-\delta(G_{2})-\delta(R_{2})+1\\
& \le & 4\sqrt{np_1\log n}+8\sqrt{np_2\log n}+1\le 5\sqrt{np\log n}\\
& \le & (np_{2}\log n)^{\frac{5}{7}}.
\end{eqnarray*}
(For the fourth inequality we used the facts that $G_1$ is $p_1$-pseudorandom and both $R_2$ and $G_2$ are $p_2$-pseudorandom
together with (P4) and~(P5). The final inequality follows similarly to~(\ref{eq:pnp_2n}).)
Furthermore, note that $E(H'_2)\cap E(R'_2)\subseteq F'_1$ and so $H'_2-x_0$ and $R'_2-x_0$ are edge-disjoint.
Thus we may apply \prettyref{cor:exactregularise} to find a regular graph $H_{2}''$ of degree $\Delta(H_{2}')$
with $H_{2}'\subseteq H_{2}''\subseteq H_{2}'\cup R'_{2}$. Since
$x_{0}$ is of maximum degree in $H_{2}'$, we have the following:
\textno
No edge from $R'_{2}$ incident to $x_{0}$ was
added to $H'_2$ in order to obtain $H_{2}''$. &(\dagger)
Let $G^*_2:=(R_2\cup G_2\cup R'_2\cup G_3)+F'_1$. Our choice of $F'_1$ and condition~(f) together ensure that we can
apply \prettyref{lem:changingvertices} with $R_2\cup G_2\cup R'_2\cup G_3$ and $F'_1$ playing the roles of $G$ and $F$
to see that $G^*_2$ is $(3p_2+p_3)$-pseudorandom.
Note that for every $1 \leq j \leq 2 m_3 + 1$
$$
np_{(3,j)}\ge(4np_{2})^{\frac{3}{4}}\log^{\frac{5}{2}}n\ge(n(3p_{2}+p_{3}))^{\frac{3}{4}}\log^{\frac{5}{2}}n,
$$
where the first inequality follows similarly to~(\ref{eq:p2j}).
Hence we may apply Lemma~\ref{lem:prcovering}
with $G^*_2$, $H''_2$, $G_{(3,1)},\dots,G_{(3,2m_3+1)}$ and $H_{(3,1)},\dots,H_{(3,2m_3+1)}$
playing the roles of $G$, $H_0$, $G_1,\dots,G_{2m+1}$ and $H_1,\dots,H_{2m+1}$ to obtain a collection $\mathcal{HC}_{2}$ of edge-disjoint
Hamilton cycles such that the union $HC_2 := \bigcup \mathcal{HC}_{2}$ of these Hamilton
cycles satisfies
$$
H_{2}''\subseteq HC_{2}\subseteq H_{2}''\cup\bigcup_{j=1}^{2m_3+1} H_{(3,j)} \subseteq H_2'' \cup G_3.
$$
We now have the following properties:
\begin{itemize}
\item[(HC2)] no edge of $G$ incident to $x_{0}$ is covered more than once in $\mathcal{HC}_{1}\cup\mathcal{HC}_{2}$;
\item[(HC2$'$)] $HC_{1} \cup HC_2$ contains no edges from $F_2$;
\item[(HC2$''$)] $\mathcal{HC}_{1}\cup\mathcal{HC}_{2}$ covers all edges in $(G_1-F_2) \cup G_2 \cup R_2$.
\end{itemize}
Indeed, to see (HC2), first note that ($\dagger$) implies that all edges incident to $x_{0}$ in $HC_{2}$ are contained in $H_{2}'\cup G_{3}$
and thus in $(H_2+F'_1) \cup G_3$, which is edge-disjoint from $HC_{1}$.
Now (HC2) follows from (HC1) together with the fact that the Hamilton cycles in $\mathcal{HC}_{2}$ are pairwise edge-disjoint.
Write $H_{3}:=G_{3}\setminus E(HC_{2})$ for the subgraph of $G_3$ which is not covered by the Hamilton cycles in $\mathcal{HC}_{2}$.%
\COMMENT{Recall that $R_2 \subseteq G_1$ and that all edges of $H_1 = G_1 -(F_1\setminus F'_1)$
have been covered by $\mathcal{HC}_{1}\cup\mathcal{HC}_{2}$.
So all the edges of $R'_2$ apart from those lying in $F_2$ are covered by $\mathcal{HC}_{1}\cup\mathcal{HC}_{2}$.
So unlike $R_2$ in the previous iteration, we do not need to consider $R_2'$ this time. Alternatively, note that (unlike $R_2$), $R_2'$ is not part of the partition of $G$, which
is why we don't need to consider it explicitly when covering $G$.}
Our final aim is to extend $H_{3}$ into a regular graph $H'_3$ using some of the edges of $R_{4}$.
We will then use the edges of $G_4$ in order to find edge-disjoint Hamilton cycles which cover $H'_3$
(and thus the edges of $G_3$ not covered so far).
Note that for all $x\in V(G)$
\[d_{H_{3}}(x)=d(H_{2}'')+d_{G_{3}}(x)-2|\mathcal{HC}_{2}|.\]
Together with the fact that $G_3$ is $p_3$-pseudorandom this implies
that
\begin{equation}\label{eq:Delta3}
\Delta(H_{3})-\delta(H_{3})=\Delta(G_{3})-\delta(G_{3}) \stackrel{({\rm P4}),({\rm P5})}{\le } 4\sqrt{np_{3}\log n}.
\end{equation}
Thus we can add a set $F'_2\subseteq F_2\setminus F^*$ of edges at $x_0$ to $H_3$ to ensure that
$x_0$ is the unique vertex of maximum degree in the graph $H'_3$ thus obtained from $H_3$,
that $H'_3$ is $\sqrt{np_{4}}$-downjumping, $\Delta(H_{3}')$ is even and such that
\begin{equation}\label{eq:sizeF'2}
|F'_2|\le 4\sqrt{np_{3}\log n}+\sqrt{np_4}+1\le 5\sqrt{np_{3}\log n}\le \sqrt{np_{2}\log n}\stackrel{(\ref{eq:p2gap})}{\le}\frac{u}{\log n}.
\end{equation}
Note that $|F_2\setminus F^*|=|F_1\setminus (F'_1\cup F^*)|\ge 2u-2$ by~(\ref{eq:sizeF1}) and since $|F^*|\le u$ by~(\ref{eq:DeltaG4}).
So we can indeed choose such a set $F'_2$.
Moreover, condition~(g) ensures that we can choose $F'_2$ in such a way that it contains no edge which is incident
to the unique vertex of minimum degree in the
$(p'_3+p_3+2p_4)$-pseudorandom graph $G'_3\cup G_3\cup R_4\cup G_4$. Let $F_3:=F_2\setminus F'_2$ and note that
\begin{align*}
\Delta(H_{3}')-\delta(H_{3}') & \le \Delta(H_{3})-\delta(H_{3})+ \sqrt{np_4}+1 \stackrel{(\ref{eq:Delta3})}{\le} 5\sqrt{np_{3}\log n}
= 5(np_4)^{\frac{2}{3}}\log^{-\frac{11}{6}} n\\
& \le (np_{4}\log n)^{\frac{5}{7}}.
\end{align*}
Furthermore, $E(H'_3)\cap E(R_4)\subseteq F'_2$ and so $H'_3-x_0$ and $R_4-x_0$ are edge-disjoint.
Since also $p_4 \ge \log^{21}n/n$, we may apply \prettyref{cor:exactregularise}
to obtain a regular graph $H_{3}''$ of degree $\Delta(H_{3}')$
such that $H_{3}'\subseteq H_{3}''\subseteq H_{3}'\cup R_{4}$. Note that since
$x_{0}$ is of maximum degree in $H_{3}'$, we have the following:
\textno
No edge from $R_{4}$ incident to $x_{0}$ was added to $H'_3$ in order to obtain $H_{3}''$.
& (\star)
Let $G^*_3:=(G'_3\cup G_3\cup R_4\cup G_4)+F'_2$. Since $|F'_2|\le 5\sqrt{np_{3}\log n}=\sqrt{np'_{3}\log n}/8$ by~(\ref{eq:sizeF'2}),
we may apply \prettyref{lem:changingvertices} with $G'_3\cup G_3\cup R_4\cup G_4$ and $F'_2$ playing the roles of $G$ and $F$
to see that $G^*_3$ is $(p'_3+p_3+2p_4)$-pseudorandom.
Note that for every $1 \leq j \leq 2 m_4 + 1$
$$
np_{(4,j)}\ge(4np'_{3})^{\frac{3}{4}}\log^{\frac{5}{2}}n\ge(n(p'_3+p_{3}+2p_{4}))^{\frac{3}{4}}\log^{\frac{5}{2}}n,
$$
where the first inequality follows similarly to~(\ref{eq:p2j}). Recall that $F^*$ denotes the set of all those edges of $G_4$
which are incident to $x_0$. Since $F'_2\cap F^*=\emptyset$,
$H''_3$ and $G_4$ are edge-disjoint (and so $H''_3, H_{(4,1)},\dots,H_{(4,2m_4+1)}$ are pairwise edge-disjoint).
Thus we can apply \prettyref{lem:prcovering}
with $G^*_3$, $H''_3$, $G_{(4,1)},\dots,G_{(4,2m_4+1)}$ and $H_{(4,1)},\dots,H_{(4,2m_4+1)}$
playing the roles of $G$, $H_0$, $G_1,\dots,G_{2m+1}$ and $H_1,\dots,H_{2m+1}$ to obtain a collection $\mathcal{HC}_{3}$ of edge-disjoint
Hamilton cycles such that the union $HC_3 := \bigcup \mathcal{HC}_{3}$ of these Hamilton
cycles satisfies
$$
H_{3}''\subseteq HC_{3}\subseteq H_{3}''\cup\bigcup_{j=1}^{2m_4+1} H_{(4,j)} \subseteq H_3'' \cup G_4.
$$
We claim that no edge of $G$ incident to $x_{0}$ is covered more than once in
$\mathcal{HC}:=\mathcal{HC}_{1}\cup\mathcal{HC}_{2}\cup\mathcal{HC}_{3}$. Indeed, (HC2) implies that this was the case for
$\mathcal{HC}_{1}\cup\mathcal{HC}_{2}$. Moreover, recall that the Hamilton cycles in $\mathcal{HC}_{3}$ are pairwise edge-disjoint.
In addition, ($\star$) implies that all edges incident to $x_{0}$ in $HC_{3}$ are contained in
$$
H'_{3}+F^*=H_3+F_2'+F^* \subseteq H_3 + F_2.
$$
So (HC2$'$) implies that none of these edges lies in $HC_{1}\cup HC_{2}$, which proves the claim.
Note that (HC2$''$) and the definition of $\mathcal{HC}_{3}$ together imply that $\mathcal{HC}$ covers all edges of $G-F_3$.
Let $F\subseteq F_3$ be the set of uncovered edges.
Then $F$ and $\mathcal{HC}$ are as required in the lemma.
\noproof\bigskip
We remark that for the final application of \prettyref{lem:prcovering} in the proof of Lemma~\ref{thm:defactomainresult}
it would have been enough to consider $G_3\cup R_4\cup G_4$
instead of $G'_3\cup G_3\cup R_4\cup G_4$ (since $H''_3$ and all the $G_{(4,j)}$ are contained in $(G_3\cup R_4\cup G_4)+ F'_2$).
However, we would not have been able to apply \prettyref{lem:changingvertices} in this case since $|F'_2| > \sqrt{np_{3}\log n}/8$.
Introducing $G'_3$ ensures that the conditions of \prettyref{lem:changingvertices} are satisfied (and this is the only purpose of $G'_3$).
We can now combine Theorem~\ref{lem:hamilton-connected} and Lemma~\ref{thm:defactomainresult}
in order to prove Theorem~\ref{thm:main-result}.
\medskip
\noindent
\emph{Proof of Theorem~\ref{thm:main-result}.}
Lemma~\ref{thm:defactomainresult} implies that a.a.s.~$G$ contains
a collection $\mathcal{HC}$ of Hamilton cycles and a collection $F$ of edges incident to the unique vertex $x_0$
of maximum degree such that no edge of $G$ incident to $x_{0}$ is contained in more than one Hamilton cycle in $\mathcal{HC}$
and such that the Hamilton cycles in $\mathcal{HC}$ cover precisely the edges of $G-F$.
Moreover, by Theorem~\ref{lem:hamilton-connected}, a.a.s.~$G-x_0$ is Hamilton-connected.
If $|F|$ is odd, we add one edge of $G-F$ incident to $x_0$ to~$F$. We still denote the resulting set of edges by~$F$.
Let $r:=|F|/2$ and $e_1e'_1,\dots,e_re'_r$ be pairs of edges such that $F$ is the union of all these $2r$ edges.
Since $G-x_0$ is Hamilton-connected, for each $1 \le i \le r$ there exists a
Hamilton cycle $C_i$ of $G$ containing both $e_i$ and $e'_i$. Then $\mathcal{HC}\cup \{C_1,\dots,C_r\}$ is a collection of
$\lceil \Delta(G)/2\rceil$ Hamilton cycles covering $G$, as desired.
\noproof\bigskip
Using further iterations in the proof of Lemma~\ref{thm:defactomainresult}, one could reduce the exponent $117$ in Lemma~\ref{thm:defactomainresult}
(and thus in Theorem~\ref{thm:main-result}). One further iteration would lead to an exponent of $60$, while the effect of yet further iterations quickly becomes
insignificant.%
\COMMENT{We set $p_5= (np)^{(3/4)^4} (\log n)^{\frac{259}{32} \cdot \frac{3}{4}+ \frac{7}{2}} \sim (np)^{1/2-0.18} (\log n)^{10.57-1}$,
and check that the analogue of~(\ref{eq:pnu}) still holds.
The upper bound can also be improved with a further iteration. With the methods of~\cite{KOappl}, it can even be improved to $1-O(1/n)$,
however the remaining very dense case needs an extra argument, so all this does not seem worth mentioning or doing}
% Source: ``Optimal covers with Hamilton cycles in random graphs'', arXiv:1203.3868 (math.CO).
% Source: ``Spanning trees at the connectivity threshold'', arXiv:2010.15519.
% Abstract: We present an explicit connected spanning structure that appears in a random graph just above the connectivity threshold with high probability.
\section{Introduction}
\label{sec:intro}
The \defn{binomial random graph} $G(n,p)$ is a graph on $n$ vertices, in which every pair of vertices is connected independently with probability $p$.
It is a well known and thoroughly studied model (see, e.g.,~\cites{Bol,FK,JLR}).
A fundamental result, due to Erd\H{o}s and R\'enyi~\cite{ER59}, is that $G(n,p)$ exhibits a sharp threshold for connectivity at $p=p(n)=\log{n}/n$\footnote{Here and later the logarithms have natural base.}.
More precisely, setting $p=(\log{n}+f(n))/n$ and $G\sim G(n,p)$, if $f(n)\to\infty$ then $G$ is with high probability\footnote{With probability tending to $1$ as $n$ tends to infinity.} (\whp{}) connected, and if $f(n)\to-\infty$ then $G$ is \whp{} not connected.
This threshold coincides with the threshold for the disappearance of isolated vertices (vertices of degree $0$), and, in fact, isolated vertices are the bottleneck for connectivity in a stronger sense.
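The sharpness of this threshold is easy to observe in simulation. The following sketch (not part of the original argument; the choices $n=800$ and $f(n)=\pm 6$ are arbitrary illustrative parameters) samples $G(n,p)$ with a union--find structure on either side of $\log n/n$:

```python
import math, random

def gnp_connected(n, p, rng):
    """Sample G(n, p) and report whether it is connected, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    components = n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:           # each pair is an edge independently
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    components -= 1
    return components == 1

rng = random.Random(0)
n = 800
p_above = (math.log(n) + 6) / n   # f(n) = +6: typically connected
p_below = (math.log(n) - 6) / n   # f(n) = -6: typically disconnected
connected_above = sum(gnp_connected(n, p_above, rng) for _ in range(5))
connected_below = sum(gnp_connected(n, p_below, rng) for _ in range(5))
```

With these parameters, almost every sample above the threshold is connected (the probability of an isolated vertex is roughly $e^{-6}$), while below the threshold one expects hundreds of isolated vertices per sample.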
Evidently, a connected graph has a spanning tree.
This raises the natural question of {\em which} spanning trees can we expect to find in a connected random graph.
The standard proof for the connectivity threshold is obtained using the dual definition of connectivity, namely, by showing that \whp{} there is an edge in every cut; this seems to provide no hint of which trees appear above the threshold.
One can check, however, that just above the connectivity threshold a random graph contains $\Theta(\log{n})$ vertices of degree $1$.
Obviously, these vertices must all be leaves in any spanning tree.
In particular, one cannot expect to find a Hamilton path (or any spanning tree with a constant number of leaves).
In this work, we present an \emph{explicit} spanning tree that appears in a random graph just above the connectivity threshold \whp{}.
In fact, we prove something stronger, by presenting a concrete unicyclic connected spanning subgraph.
Let $n,t,\ell$ be integers such that $t\cdot (\ell+1) \le n$. A \defn{KeyChain} with parameters $n,t,\ell$, denoted $\mathsf{KC}(n,t,\ell)$, is a cycle on $n-t$ vertices with additional $t$ vertices of degree $1$ (``keys") which have distinct neighbours in that cycle, where the distance between two consecutive such neighbours is $\ell$.
Formally, it is the graph $H=(V,E)$ with $V=[n]$ and
\[
E = \{ \{i,i+1\} \mid i \in [n-t-1] \}
\cup \{ \{n-t,1\} \}
\cup \{ \{i\ell ,n-t+i\} \mid i\in [t] \}.
\]
See \cref{fig:keychain} for an example.
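For concreteness, the edge set in the formal definition can be generated directly. The sketch below (illustrative only; vertex labels follow the definition above) builds $\mathsf{KC}(n,t,\ell)$ and tallies the degree sequence:

```python
from collections import Counter

def keychain_edges(n, t, ell):
    """Edge set of KC(n, t, ell): a cycle on the vertices 1..n-t, with the
    t 'keys' n-t+1, ..., n attached at the cycle vertices ell, 2*ell, ..., t*ell."""
    assert t * (ell + 1) <= n
    cycle = [(i, i + 1) for i in range(1, n - t)] + [(n - t, 1)]
    keys = [(i * ell, n - t + i) for i in range(1, t + 1)]
    return cycle + keys

edges = keychain_edges(24, 5, 3)                 # the example from the figure
deg = Counter(v for e in edges for v in e)       # degree of each vertex
```

In particular $\mathsf{KC}(n,t,\ell)$ has exactly $n$ edges (so it is unicyclic), $t$ vertices of degree $1$ and $t$ attachment vertices of degree $3$.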
Consider the following sequence: $a_1 = 1,\ a_{j+1} = \ceil{a_j \cdot \log n / 100}$, and set $j_0$ to be the minimum index $j$ for which $a_j \ge 10n / \log n$.
Let further $t:=\floor{\log{n}}$ and $\ell=2j_0$.
\begin{figure*}[t]
\captionsetup{width=0.879\textwidth,font=small}
\centering
\begin{tikzpicture}
\tikzset{vertex/.style={fill,circle,inner sep=1.5pt}}
\foreach \i in {1,2,...,24} {
\node[vertex] (\i) at (\i*15:4 and 1) {};
\draw (\i) -- (\i*15+15:4 and 1);
}
\foreach \i in {1,2,...,5} {
\node[vertex] (k\i) at ($(\i*45-60:4 and 1)+(\i*45-60:1)$) {};
\draw (k\i) -- (\i*45-60:4 and 1);
}
\end{tikzpicture}
\caption{$\mathsf{KC}(24,5,3)$.}
\label{fig:keychain}
\end{figure*}
\begin{theorem}\label{thm:kc}
Let $p$ be such that $np - \log{n} \rightarrow \infty$.
Then, $G\sim G(n,p)$ \whp{} contains $\mathsf{KC}(n,t,\ell)$ as a subgraph.
\end{theorem}
As $\mathsf{KC}(n,t,\ell)$ is connected and spanning,
and in fact contains $\left\lceil \frac{n-t}{2} \right\rceil$ distinct (unlabelled) spanning trees when $n$ is sufficiently big,
\cref{thm:kc} gives a set of distinct trees, all of which appear in $G\sim G(n,p)$ \whp{}.
Our proof provides, however, further spanning (but not necessarily connected) graphs that appear \whp{} in random graphs above the connectivity threshold (such as KeyChains on linearly many vertices alongside a disjoint cycle spanning the remaining set of vertices).
\subsection{Discussion}
\paragraph{Hamiltonicity}
Koml\'os and Szemer\'edi~\cite{KS83} and independently Bollob\'as~\cite{Bol84} showed that the threshold for the appearance of a Hamilton cycle in random graphs is $p=(\log{n}+\log\log{n})/n$.
This coincides with the threshold for the disappearance of vertices of degree $1$, an obvious obstacle in obtaining a Hamilton cycle.
In particular, if $np-\log{n}-\log{\log{n}}\to\infty$ and $G\sim G(n,p)$, then $G$ contains a Hamilton path as a spanning tree.
Thus, our result is interesting, and perhaps surprising, for values of $p$ which satisfy $np=\log{n}+f(n)$, where $f(n)\to\infty$ but $f(n)-\log{\log{n}}\not\to\infty$.
Our result states that in this intermediate regime, we can still construct a \emph{specific} tree which contains a path that spans almost all of the vertices of the graph.
\paragraph{Spanning trees in random graphs}
\Cref{thm:kc} gives an explicit set of trees, each of which spans a binomial random graph just above the connectivity threshold \whp{}.
Which other spanning trees can we expect to find around the same edge density?
Recently Montgomery~\cite{Mon19} solved a conjecture by Kahn (see~\cite{KLW16}) according to which every $n$-vertex tree of maximum degree at most $\Delta$ appears in $G(n,p)$ \whp{} if $np\ge C\log{n}$ for $C=C(\Delta)$.
Earlier, Hefetz, Krivelevich and Szab\'o~\cite{HKS12} showed that every bounded degree $n$-vertex tree with either linearly many leaves or a linearly long bare path appears in $G(n,p)$ \whp{} if $np\ge (1+\varepsilon)\log{n}$ for some $\varepsilon>0$.
We wish to stress that our declared goal in this paper is to present one concrete family of spanning trees appearing typically at the threshold for connectivity, and this goal is achieved in \cref{thm:kc}. Undoubtedly a wider class of spanning trees can be shown to appear \whp{} for this value of $p(n)$ using similar (but possibly more involved) techniques and arguments, but we decided not to pursue this goal here, aiming rather for relative simplicity.
The question of which trees are likely to appear in $G(n,p)$ if one only requires $np-\log{n}\to\infty$ therefore remains open.
Another possible extension of our main result would be to consider the random graph process, and to try and prove that some concrete spanning subgraph (say, our KeyChain) appears typically at the very moment the graph becomes connected. This would imply our result by the standard connections between random graph processes and models $G(n,p), G(n,m)$. Again we chose not to push in this direction, preferring simplicity.
\paragraph{The maximum common subgraph problem}
Finding a maximum common subgraph of two graphs
(sometimes called \emph{maximum common edge subgraph}, or MCES)
is an $\mathbf{NP}$-hard problem
(for example, there is a trivial polynomial reduction from the Hamiltonicity problem to MCES)
with real-world applications in computer science and in chemistry (see, e.g.,~\cites{Bok81,RGW02}).
\Cref{thm:kc} can be thought of as an explicit (connected) subgraph
which is typically common to random graphs past the connectivity threshold.
In particular, writing $M(G_1,G_2)$ for the size of a maximum common subgraph of $G_1,G_2$, \cref{thm:kc} gives $M(G_1,G_2)\ge n$ \whp{} for independently sampled $G_1,G_2\sim G(n,p)$, where $np-\log{n}\to\infty$.
The following proposition, whose proof appears in \cref{sec:mcs}, shows that this bound is asymptotically tight.
\begin{proposition}\label{prop:mcs}
For every $\varepsilon>0$ there exists $\delta>0$ for which the following holds.
Suppose $p\le n^{-1+\delta}$, and let $G_1,G_2\sim G(n,p)$ be two independent random graphs.
Then, \whp{}, $M(G_1,G_2)\le(1+\varepsilon)n$.
\end{proposition}
\subsection{Proof outline}
We begin by listing useful properties of random graphs just above the connectivity threshold.
With these properties in hand, we continue as follows.
First, we take care of all ``keys'' of the KeyChain:
these consist of all vertices of degree $1$ in the graph, in addition to some vertices of degree $2$.
We aim to connect neighbours of these keys by equal-length paths, constructing a comb-like graph.
Eventually, we would want to connect the ends of the comb by a path which spans the remaining vertices.
This cannot be done naively, however, since some of the remaining vertices may have most (or all) of their neighbours inside the comb.
To overcome this difficulty we make a preparatory step in which we put aside small degree vertices with their neighbours.
This is stated precisely in \cref{sec:const}.
The ``comb'' is then found in the set of the remaining vertices.
\paragraph{Organisation of the paper}
We start by reviewing some preliminaries in \cref{sec:prem}.
In \cref{sec:random} we list and prove useful (and mostly standard) properties of random graphs just above the connectivity threshold.
In \cref{sec:const} we prepare our graph, handling future ``keys'' and small degree vertices, construct the ``comb'' and close it to a KeyChain, concluding the proof of \cref{thm:kc}.
We finish by a quick proof of \cref{prop:mcs} in \cref{sec:mcs}.
\section{Preliminaries}
\label{sec:prem}
In this section we provide several definitions and results to be used in this paper.
\subsection{Notation} \label{subsec:prem:notation}
The following graph theoretic notation is used.
For a graph $G=(V,E)$ and two disjoint vertex subsets $U,W\subseteq V$, we let $E_G(U,W)$ denote the set of edges of $G$ with one endpoint in $U$ and the other in $W$, and let $e_G(U,W)=|E_G(U,W)|$.
Similarly, $E_G(U)$ denotes the set of edges spanned by a subset $U$ of $V$, and $e_G(U)$ stands for $|E_G(U)|$.
The (external) neighbourhood of a vertex subset $U$, denoted by $N_G(U)$, is the set of vertices in $V\setminus U$ adjacent to a vertex of $U$, and for a vertex $v\in V$ we set $N_G(v)=N_G(\{v\})$.
The degree of a vertex $v\in V$, denoted by $d_G(v)$, is its number of incident edges.
For an integer $0 \leq i < |V|$, we let $D_i = D_i(G)$ be the set of vertices of degree $i$ in $G$,
and let $D_{\le i}=D_{\le i}(G):=\bigcup_{j=0}^i D_j$.
Finally, we let $n_i = n_i(G) := |D_i|$ and $n_{\le i} = n_{\le i}(G) := |D_{\le i}|$.
In the above notation, we sometimes omit the subscript $G$ if the graph $G$ is clear from the context.
We occasionally suppress the rounding notation to simplify the presentation. For two functions $f=f(n)$, $g=g(n)$ we write $f\sim g$ to indicate that $f = (1+o(1))g$.
\subsection{Probabilistic bounds} \label{subsec:prem:prob}
We will make use of the following useful bound (see, e.g.,~\cite{JLR}*{Chapter 2}).
\begin{theorem}[Chernoff bounds]\label{thm:chernoff}
Let $X=\sum_{i=1}^n X_i$, where $X_i\sim\Dist{Bernoulli}(p_i)$ are independent, and let $\mu=\E{X}=\sum_{i=1}^n p_i$.
Let $0<\alpha<1<\beta$.
Then
\begin{align*}
\pr(X\le \alpha\mu) &\le \exp(-\mu(\alpha\log{\alpha}-\alpha+1)),\\
\pr(X\ge \beta\mu) &\le \exp(-\mu(\beta\log{\beta}-\beta+1)).
\end{align*}
\end{theorem}
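As a quick numerical sanity check of the lower-tail bound (illustrative only; the parameters $\mathrm{Bin}(200,0.3)$ and $\alpha=1/2$ are arbitrary), one can compare it with the exact binomial tail:

```python
import math

def exact_lower_tail(n, p, k):
    """P(Bin(n, p) <= k), summed exactly from the probability mass function."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, p, alpha = 200, 0.3, 0.5
mu = n * p
# Chernoff lower-tail bound from the theorem: exp(-mu(alpha log alpha - alpha + 1))
chernoff_bound = math.exp(-mu * (alpha * math.log(alpha) - alpha + 1))
exact = exact_lower_tail(n, p, int(alpha * mu))
```

Here the bound evaluates to roughly $10^{-4}$ and dominates the exact tail probability, as it must.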
The following are trivial yet useful bounds.
\begin{claim}\label{cl:bin:uptail}
Let $X\sim\Dist{Bin}(n,p)$ with $\mu=np$ and let $1\le k\le n$.
Then
\begin{equation*}
\pr(X\ge k) \le \left(\frac{enp}{k}\right)^k.
\end{equation*}
\end{claim}
For a proof see, e.g.,~\cite{FKMP18}.
\begin{claim}\label{cl:bin:lowtail}
Let $X\sim\Dist{Bin}(n,p)$ with $\mu=np$, write $q=1-p$ and let $1\le k\le np/q$.
Then
\begin{equation*}
\pr(X\le k) \le \left(\frac{enp}{kq}\right)^k e^{-np}.
\end{equation*}
\end{claim}
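The upper-tail claim admits the same kind of numerical check (again illustrative; $\mathrm{Bin}(100,0.1)$ and $k=30$ are arbitrary choices):

```python
import math

def exact_upper_tail(n, p, k):
    """P(Bin(n, p) >= k), summed exactly from the probability mass function."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p, k = 100, 0.1, 30
claim_bound = (math.e * n * p / k) ** k   # the bound (enp/k)^k from the claim
exact = exact_upper_tail(n, p, k)
```

Note the bound is only useful when $k > enp$, where $(enp/k)^k < 1$; with the values above it is about $0.05$, far above the true tail.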
\subsection{P\'{o}sa's lemma and corollaries} \label{subsec:prem:posa}
For an overview of the rotation--extension technique, we refer the reader to~\cites{Kri16}.
\begin{lemma}[P\'{o}sa's lemma~\cite{Pos76}]\label{lem:Posa}
Let $G$ be a graph, let $P = v_0,\dots,v_t$ be a longest path in $G$, and let $R$ be the set of all $v \in V(P)$ such that there exists a path $P'$ in $G$ with $V(P') = V(P)$ and with endpoints $v_0$ and $v$.
Then $|N(R)| \leq 2|R|-1$.
\end{lemma}
Recall that a non-edge of $G$ is called a \defn{booster} if adding it to $G$ creates a graph which is either Hamiltonian or whose longest path is longer than that of $G$.
For a positive integer $k$ and a positive real $\alpha$ we say that a graph $G=(V,E)$ is a \defn{$(k,\alpha)$-expander} if $|N(U)|\ge\alpha|U|$ for every set $U\subseteq V$ of at most $k$ vertices.
The following is a widely-used fact stating that $(k,2)$-expanders have many boosters. For a proof see, e.g.,~\cite{Kri16}.
\begin{lemma}\label{lem:posa:boosters}
Let $G$ be a connected $(k,2)$-expander which contains no Hamilton cycle.
Then $G$ has at least $(k+1)^2/2$ boosters.
\end{lemma}
It is not hard to see that an $(n/4,2)$-expander on $n$ vertices is connected, a fact that we will use later on.
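The expansion condition in this definition can be verified by brute force on small examples. The sketch below (not used in the proofs; exponential in the number of vertices, so for small graphs only) checks every vertex subset of size at most $k$:

```python
from itertools import combinations

def is_k_alpha_expander(adj, k, alpha):
    """Check |N(U)| >= alpha * |U| for every vertex set U with 1 <= |U| <= k.
    adj maps each vertex to its set of neighbours."""
    vertices = list(adj)
    for size in range(1, k + 1):
        for U in combinations(vertices, size):
            # external neighbourhood: neighbours of U that lie outside U
            neighbourhood = set().union(*(adj[u] for u in U)) - set(U)
            if len(neighbourhood) < alpha * size:
                return False
    return True

# The cycle C_8: every singleton expands by a factor of 2, but adjacent pairs do not.
cycle8 = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
```

For instance, $C_8$ is a $(1,2)$-expander but not a $(2,2)$-expander, since a pair of adjacent cycle vertices has only two external neighbours.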
We will also use the following deterministic lemma from \cite{GKM21}.
\begin{lemma}[\cite{GKM21}*{Lemma 4.6}]\label{lem:expander}
Let $m,d \geq 1$ be integers, and let $H$ be a graph on $h\ge 4m$ vertices satisfying the following properties:
\begin{enumerate}
\item $\delta(H) \geq 2$;
\item No vertex $v \in V(H)$ with $d(v) < d$ is contained in a $3$- or a $4$-cycle, and every two distinct vertices $u,v \in V(H)$ with $d(u),d(v) < d$ are at distance at least $5$ apart;
\item Every set $F \subseteq V(H)$ of size at most $5 m$ spans at most $d|F| / 10$ edges;
\item There is an edge between every pair of disjoint sets $F_1,F_2 \subseteq V(H)$ of size $m$ each.
\end{enumerate}
Then $H$ is a $(h/4,2)$-expander (and, in particular, it is connected).
\end{lemma}
\section{Properties of random graphs}
\label{sec:random}
We now present and prove some typical properties of random graphs above the connectivity threshold, to be used in the proof of \cref{thm:kc}.
The properties we present are fairly standard, and the proofs are mainly technical and may get tedious, so a first-time reader may wish to skip them.
Denote
\[
\textsc{Small} = \textsc{Small}(G) := D_{\le\log{n}/10}(G).
\]
Additionally, set
\[\gamma=10^{-4}.\]
\begin{lemma}\label{lem:gnp:prop}
Let $1\ll f(n)\ll \log\log{n}$, $p=(\log{n}+f(n))/n$ and $G\sim G(n,p)$.
Then, \whp{}, $G$ has the following properties:
\begin{description}[leftmargin=!,labelwidth=\widthof{\bfseries (P1)}]
\item[\namedlabel{P:maxdeg}{(P1)}]
$\Delta(G) \leq 10\log n$;
\item[\namedlabel{P:numkeys}{(P2)}]
$|D_1|\le\log{n}\le|D_2|$;
\item[\namedlabel{P:smalldist}{(P3)}]
There is no path of length at most $0.2\log n / \log \log n$ in $G$ whose (possibly identical) endpoints lie in $\textsc{Small}$;
\item[\namedlabel{P:smallsmall}{(P4)}]
$|\textsc{Small}\cup N(\textsc{Small})|\le n^{0.6}$;
\item[\namedlabel{P:smallsets}{(P5)}]
There is no $U\subseteq V(G)$ with $|U|\leq 10n/\log n$ and $e(U,V(G)\setminus U) \geq |U|\log n/11$ such that $|N(U)| \leq |U|\log n/18$;
\item[\namedlabel{P:local:sparse}{(P6)}]
Every set $U \subseteq V(G)$ of size at most $\gamma n/5000$ spans at most $\gamma\log n\cdot|U|/1000$ edges;
\item[\namedlabel{P:intersect}{(P7)}]
For every $U,W\subseteq V(G)$ disjoint with $10 n/\log n\leq |U|,|W|\leq n/9$, $|N(U)\cap N(W)|\geq n/9$;
\item[\namedlabel{P:pseudorandom}{(P8)}]
For every $U,W\subseteq V(G)$ disjoint with $|U|,|W|\geq \frac{\gamma n}{25000}$, $|E(U,W)|\ge \frac{1}{2}|U||W|\log{n}/n$;
\end{description}
\end{lemma}
\begin{proof}[Proof of \ref{P:maxdeg}]
Since $d(v)\sim\Dist{Bin}(n-1,p)$ for every $v\in V(G)$, we have
\[
\pr(d(v)\ge 10\log{n})
\le \binom{n}{10\log{n}}p^{10\log{n}}
\le \left(\frac{enp}{10\log{n}}\right)^{10\log{n}} = o(1/n),
\]
and the statement follows by the union bound.
\end{proof}
\begin{proof}[Proof of \ref{P:numkeys}]
The probability that a given vertex is of degree $1$ is at most $np(1-p)^{n-2}=o(\log{n}/n)$.
Thus $\E[n_1]=o(\log{n})$ and by Markov's inequality $\pr(n_1>\log{n})=o(1)$.
The expectation of $n_2$ is $n\cdot \binom{n-1}{2} p^2 (1-p)^{n-3}=\omega(\log{n})$.
On the other hand, the variance is
\begin{equation*}
\begin{aligned}
\Var \left[ n_2 \right]
& = \E\left[ {n_2} ^2\right] - \E\left[ {n_2}\right] ^2 \\
& = -\E\left[ {n_2}\right] ^2 + \E\left[ {n_2}\right]
+ \sum_{\substack{(u,v)\in V(G)^2\\u\ne v}} \pr\left( d_G(u) = d_G(v) = 2 \right) \\
& = (-1 + o(1))\cdot \E\left[ {n_2}\right] ^2 + n(n-1)\cdot \left( \binom{n-2}{2}^2p^4(1-p)^{2n-7} + np^3(1-p)^{2n-6} \right) \\
& = o\left( \E\left[ {n_2}\right] ^2 \right) ,
\end{aligned}
\end{equation*}
which, by Chebyshev's inequality, implies that $\pr(n_2 < \log n) = o(1)$.
\end{proof}
\begin{proof}[Proof of \ref{P:smalldist}]
Write $L:=0.2\log{n}/\log{\log{n}}$.
Let $1\le\ell\le L$ and let $P=(v_0,\ldots,v_\ell)$ be a sequence of $\ell+1$ distinct vertices from $V(G)$, with the one possible exception $v_0=v_\ell$.
Suppose first that $v_0\ne v_\ell$.
Let $S:= V(G) \setminus \{v_0,v_1,v_{\ell-1},v_\ell\}$,
let $\mathcal{A}_P$ be the event that $P$ is contained in $G$ as a path, and
let $\mathcal{A}_d$ be the event that $d(\{v_0,v_\ell\},S) \le \log{n}/5$.
By \cref{thm:chernoff} with $\alpha = \frac{\log{n}}{5\cdot (2n-5)p} = 1/10 + o(1)$, we obtain that
\[
\pr(\mathcal{A}_d)
\le \exp \left( -(2n-5)p \cdot ( \alpha \log{\alpha} - \alpha +1 ) \right)
\le n^{-1.3}.
\]
The events $\mathcal{A}_P,\mathcal{A}_d$ are independent, hence $\pr(\mathcal{A}_P\land \mathcal{A}_d) \le p^\ell n^{-1.3}$.
Let $\mathcal{A}$ be the event that there exists a path $P=v_0,\ldots,v_\ell$ with $1\le\ell\le L$ in $G$ such that $\mathcal{A}_P$ holds and $d(v_0),d(v_\ell)\le\log{n}/10$; for a fixed $P$ the latter degree condition implies $\mathcal{A}_d$, so the probability for this $P$ is at most $\pr(\mathcal{A}_P\land \mathcal{A}_d)$.
By the union bound, summing over all sequence lengths and all sequences, we get
\[
\pr(\mathcal{A})
\le \sum_{\ell=1}^L n^{\ell+1}p^\ell n^{-1.3}
\le \sum_{\ell=1}^L n^{\frac{\log{(np)}}{\log{n}}\cdot \ell - 0.3}
\le L\cdot n^{\frac{1.1\log{\log{n}}}{\log{n}}\cdot L -0.3}
\le L\cdot n^{-1/20} = o(1).
\]
The case $v_0=v_\ell$ is similar.
Let $S:=V(G)\setminus \{v_1,v_{\ell-1}\}$ and let $\mathcal{A}_d$ be the event $d(v_0,S)\le\log{n}/10$.
Once again by \cref{thm:chernoff}, $\pr(\mathcal{A}_d)\le n^{-0.6}$, and the events $\mathcal{A}_P,\mathcal{A}_d$ are independent, hence $\pr(\mathcal{A}_P\land \mathcal{A}_d)\le p^\ell n^{-0.6}$.
Let $\mathcal{A}'$ be the event that there exists a cycle $P$ of length $3\le\ell\le L$ such that $\mathcal{A}_P$ and $d(v_0)\le\log{n}/10$.
By the union bound, $\pr(\mathcal{A}')\le \sum_{\ell=3}^L n^\ell p^\ell n^{-0.6}=o(1)$.
Finally, observe that $\pr(G\in \ref{P:smalldist}) \ge 1-\pr (\mathcal{A}) - \pr (\mathcal{A} ')$, and so the statement is obtained.
\end{proof}
\begin{proof}[Proof of \ref{P:smallsmall}]
Every vertex $v$ satisfies $d(v)\sim\Dist{Bin}(n-1,p)$ with mean $(1+o(1))\log{n}$. Thus, by \cref{thm:chernoff} we have $\pr(d(v)\le\log{n}/10)\le n^{-0.6}$, and therefore by Markov's inequality $|\textsc{Small}|\le n^{0.5}$ \whp{}.
By the definition of $\textsc{Small}$ we have that $|\textsc{Small}\cup N(\textsc{Small})|\le n^{0.6}$ \whp{}.
\end{proof}
\begin{proof}[Proof of \ref{P:smallsets}]
For $U\subseteq V(G)$ with $|U|=k$, let $\mathcal{A}_U$ be the event that $e(U,V(G)\setminus U)\ge k\log{n}/11$ and $|N(U)| \le k\log{n}/18$.
On this event, there exists $W\subseteq V(G)$ of size $k\log{n}/18$, disjoint from $U$, which contains $N(U)$.
Denote this event by $\mathcal{A}_{U,W}$.
Evidently, using \cref{cl:bin:uptail},
\[
\begin{aligned}
\pr(\mathcal{A}_{U,W})
&\le \pr\left( \Dist{Bin}(k^2\log{n}/18,p) \ge k\log{n}/11 \right)
\cdot (1-p)^{k(n-k-k\log{n}/18)}\\
&\le\left( \frac{11ek p}{18} \right)^{k\log{n}/11}
\cdot e^{-pk(n-k-k\log{n}/18)}
\le \left( \frac{11ek p}{18} \right)^{k\log{n}/11}
\cdot e^{-0.4npk}.
\end{aligned}
\]
For $k\le 10n/\log{n}$ let $\mathcal{A}_k$ denote the event that there exists $U$ with $|U|=k$ for which $\mathcal{A}_U$ occurs.
By the union bound over all possible choices of $U,W$ we have that
\[
\begin{aligned}
\pr(\mathcal{A}_k) &\le \binom{n}{k+k\log{n}/18} \binom{k+k\log{n}/18}{k}
\cdot\pr(\mathcal{A}_{U,W})\\
&\le \left(\frac{en}{k+k\log{n}/18}\right)^{(1/18+o(1))k\log{n}}
\cdot \log^k{n}
\cdot\pr(\mathcal{A}_{U,W})\\
&\le \left(\frac{50n}{k\log{n}}\right)^{(1/18+o(1))k\log{n}}
\cdot
\left( \frac{11ek p}{18} \right)^{k\log{n}/11}
\cdot e^{-0.4npk}\\
&\le \left[\left(\frac{n}{k\log{n}}\right)^{1/18-1/11+o(1)}
\cdot 50^{1/17}
\cdot 2^{1/11}
\cdot e^{-0.4}\right]^{k\log{n}}\\
&\le \exp\left( k\log{n}\cdot \left(
\log(10)/25 + \log(50)/17 + \log(2)/11 - 0.4
\right) \right) = e^{-\Omega(k\log{n})}.
\end{aligned}
\]
Finally, let $\mathcal{A}$ denote the event that there exists $U$ with $1\le |U|\le 10n/\log{n}$ for which $\mathcal{A}_U$ occurs.
By the union bound over all possible cardinalities $k=|U|$ we have
\[
\pr(\mathcal{A})
\le \sum_{k=1}^{10n/\log{n}}\pr(\mathcal{A}_k)
\le \sum_{k=1}^{\infty} e^{-\Omega(k\log{n})} = o(1).\qedhere
\]
\end{proof}
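The bracketed constant in the final display of this proof can be sanity-checked numerically; the following quick script (ours, not part of the proof) confirms that it is negative:

```python
import math

# the constant multiplying k*log(n) in the exponent of the final display
c = math.log(10) / 25 + math.log(50) / 17 + math.log(2) / 11 - 0.4
assert c < 0  # approximately -0.0148, so the bound is indeed e^{-Omega(k log n)}
```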
\begin{proof}[Proof of \ref{P:local:sparse}]
Set $\gamma'=\gamma/5000$.
By \cref{cl:bin:uptail}, for every $U\subseteq V(G)$ and every $m\ge 1$, the probability that $|E(U)|\ge m$ is at most $(e|U|^2p/m)^m$.
Hence, by the union bound, the probability that there is a subset $U\subseteq V(G)$ violating \ref{P:local:sparse} is at most
\begin{equation*}
\begin{aligned}
&\phantom{\le{}} \sum_{k=1}^{\gamma' n}
\binom{n}{k}\left(\frac{ekp}{5\gamma'\log{n}}\right)^{5\gamma'\log{n}\cdot k}\\
&\le \sum_{k=1}^{\gamma' n}
\left(\left(\frac{en}{k}\right)
\cdot \left(\frac{0.6k}{\gamma' n}\right)^{5\gamma'\log{n}}\right)^k\\
&\le \sum_{k=1}^{\gamma' n}
\left(\frac{e}{\gamma'} \cdot 0.6^{\Omega(\log{n})}\right)^k
= \sum_{k=1}^{\gamma' n} o(1)^k = o(1),
\end{aligned}
\end{equation*}
and the statement follows.
\end{proof}
\begin{proof}[Proof of \ref{P:intersect}]
If $U,W\subseteq V(G)$ are disjoint with $10 n/\log n\leq |U|,|W|\leq n/9$, and $|N(U)\cap N(W)| < n/9$, then there is a set $X\subseteq V(G)\setminus (U\cup W)$ of size $0.6n$ such that every vertex $x\in X$ is connected to no vertex in $U$, or connected to no vertex in $W$. By the union bound, the probability of this is at most
\begin{equation*}
\begin{aligned}
&\sum_{k=\frac{10n}{\log n}}^{n/9}
\sum_{s=\frac{10n}{\log n}}^{n/9}
\binom{n}{k} \binom{n}{s} \binom{n}{0.6n}
\cdot \left( (1-p)^k + (1-p)^s \right) ^{0.6n} \\
&\leq n^2\cdot \left( 9e \right)^{2n/9}
\cdot (2e)^{0.6n}
\cdot 2^{0.6 n}
\cdot \exp \left( -\frac{10n}{\log n}p\cdot 0.6n \right) \\
&\leq \exp \left(
\left( o(1) + \log (9e)/2 + 2\log 2 + 1 - 10 \right) \cdot 0.6 n
\right)
= o(1),
\end{aligned}
\end{equation*}
and the statement follows.
\end{proof}
\begin{proof}[Proof of \ref{P:pseudorandom}]
If $U,W\subseteq V(G)$ are disjoint, and $|U|,|W| \geq \gamma n/25000$, then by \cref{thm:chernoff}, the probability that $e(U,W) < \frac{1}{2}|U||W|\log n/n \leq \frac{2}{3}\E (e(U,W))$ is at most
\[
\exp \left( -\Omega(\E (e(U,W)))\right)
= \exp \left(-\Omega (|U||W|\log n /n) \right)
= \exp \left(-\Omega (n\log n)\right) .
\]
By the union bound, the probability that such $U,W$ exist is at most
\[
3^n\cdot \exp (-\Omega (n\log n)) = o(1).\qedhere
\]
\end{proof}
The following lemma describes yet another property of $G(n,p)$, one which merits some further explanation.
Recall that $G\sim G(n,p)$ is not assumed to be Hamiltonian, and in the relevant regime typically does not contain a Hamilton path.
We will need, however, to find a path (and, in fact, many such paths) which spans a large predetermined portion of its vertices.
It is not hard to see that subgraphs spanned by carefully chosen (large) sets of vertices are (very) good expanders.
Our way to argue that they are Hamiltonian, however, will be to show that they contain {\em sparser} expanders.
To this end, we will use the method of ``random sparsification'' which has become a fairly standard tool in the study of Hamiltonicity (see, e.g.,~\cites{BKS11mb,BKS11p,Kri16,FKMP18}).
The main idea behind this, which is the essence of the next lemma, is that we can show that \whp{} $G$ contains a booster with respect to {\em every} sparse expander that it contains.
\begin{lemma}\label{lem:boosters}
Let $1\ll f(n)\ll \log\log{n}$, $p=(\log{n}+f(n))/n$ and $G\sim G(n,p)$.
Then, \whp{},
for every $W \subseteq V(G)$ of size $|W|=h \geq n/10$ and for every $(h/4,2)$-expander $H$ on $W$ which is a subgraph of $G$ with at most $\gamma n \log n/100$ edges, $G$ contains a booster with respect to $H$.
\end{lemma}
\noindent For later reference we name the property described above
{\bf \namedlabel{P:boosters}{(P9)}}.
\begin{proof}
Fix $W\subseteq V(G)$ such that $h:=|W|\ge n/10$, and let $H$ be an $(h/4,2)$-expander on $W$ with $m$ edges which is a subgraph of $G$.
Recall that $H$ is connected.
Hence, by \cref{lem:posa:boosters} we know that there are at least $n^2/3200$ pairs of vertices of $W$ which are boosters with respect to $H$.
Thus, the probability that $G[W]$ contains $H$ but no booster thereof is at most
\[
p^m \cdot (1-p)^{n^2/3200}
\le p^m \cdot \left(1-\frac{\log{n}}{n}\right)^{n^2/3200}
\le \left(\frac{2\log{n}}{n}\right)^m
\cdot \exp\left(-n\log{n}/3200\right).
\]
Write $\beta=\gamma/100$.
As there are at most $2^n$ choices for $W$ and at most $\binom{n^2}{m}\le\left(en^2/m\right)^m$ choices for $H$ for each $1\le m\le \beta n\log{n}$, we have, by the union bound, that the probability that there exist such $W$ and $H$ for which $G[W]$ does not contain a booster with respect to $H$ is at most
\begin{equation}\label{eq:lem:boosters}
2^n \cdot \exp\left(-n\log{n}/3200\right)
\cdot \sum_{m=1}^{\beta n\log{n}} \left(\frac{2en\log{n}}{m}\right)^m.
\end{equation}
Set $g(m)=(2en\log{n}/m)^m$ and observe that $g'(m)=g(m)\cdot(\log(2en\log{n}/m)-1)$, which is positive for $1\le m\le \beta n\log{n}$.
Thus, the sum in \eqref{eq:lem:boosters} can be bounded from above by
\[
\beta n\log{n}\cdot \left(2e/\beta\right)^{\beta n\log{n}}
= \exp\left( (\beta\log(2e/\beta) + o(1))n\log{n} \right),
\]
which, recalling that $\beta=\gamma/100=10^{-6}$, is smaller than $\exp(n\log{n}/3201)$,
and thus \eqref{eq:lem:boosters} tends to $0$ as $n\to\infty$.
\end{proof}
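The closing numerical claim is easy to verify; this snippet (ours, just a sanity check of the arithmetic) confirms that $\beta\log(2e/\beta)$ is well below $1/3201$ for $\beta=10^{-6}$:

```python
import math

beta = 1e-6  # beta = gamma/100 with gamma = 10^{-4}
rate = beta * math.log(2 * math.e / beta)
assert rate < 1 / 3201  # approx 1.55e-05 versus 1/3201 approx 3.12e-04
```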
\section{Constructing a KeyChain}
\label{sec:const}
In \cref{sec:random} we have identified useful properties which are satisfied by random graphs \whp{}.
In this section we will assume our graph possesses these properties, and show that this deterministically implies that the graph contains a KeyChain.
For convenience, let us repeat some definitions from \cref{sec:intro,sec:random}.
Consider the following sequence: $a_1 = 1,\ a_{j+1} = \ceil{a_j \cdot \log n / 100}$, and set $j_0$ to be the minimum index $j$ for which $a_j \ge 10n / \log n$.
Let further
\[
t:=\floor{\log{n}}\quad\text{and}\quad \ell=2j_0.
\]
Finally, recall that $\gamma=10^{-4}$.
In this section we prove the following lemma, which, when put together with \cref{lem:gnp:prop,lem:boosters}, completes the proof of \cref{thm:kc}.
\begin{lemma}\label{lem:kc}
Any $n$-vertex graph $G$ satisfying Properties \ref{P:maxdeg}--\ref{P:boosters}
contains $\mathsf{KC}(n,t,\ell)$ as a subgraph.
\end{lemma}
Our plan is as follows.
We want to construct a ``comb'', which consists of $t$ keys (all vertices of degree $1$, complemented by vertices of degree $2$ if needed), and equal-length paths between neighbours of consecutive keys.
We will then want to connect the endpoints of the comb by a path which spans the remaining set of vertices.
As hinted in the introduction, we cannot carelessly do so, as some vertices outside the comb might have most or all of their neighbours inside the comb.
Instead, we have to make a preparatory step, in which we put aside vertices of small degree (except the future ``keys'' of the KeyChain) along with their neighbourhoods.
This preparatory step is \cref{lem:prep}.
Given the partition in \cref{lem:prep}, we construct a comb in its large part (in \cref{lem:paths}), and then connect the endpoints with a path that spans the remaining set of vertices.
This is depicted in \cref{fig:hamkc}.
~
In the next lemmas we assume $G$ is an $n$-vertex graph satisfying Properties \ref{P:maxdeg}--\ref{P:boosters}.
\begin{lemma}\label{lem:prep}
There exist a partition $V(G)=V^\star\cup V'$ with $|V^\star|\sim 2\gamma n$ and a set $K\subseteq V'$ with $|K|=t$ for which the following holds:
\begin{description}
\item[(a)] $D_1(G)\subseteq K\subseteq D_{\le 2}(G)$;
\item[(b)] $N(K)\subseteq V'$;
\item[(c)] $d(v,V^\star)\le 200\gamma\log{n}$ and $d(v,V')\ge \log{n}/20$ for every $v\in V'\setminus K$;
\item[(d)] If $K\subseteq X\subseteq V'$ satisfies $|X|\le n/2$ and $w_1,w_2\in X\setminus K$ then there exist $z_1\sim w_1$ and $z_2\sim w_2$ in $V(G)\setminus X$, and a Hamilton path from $z_1$ to $z_2$ in $G[V(G)\setminus X]$.
\end{description}
\end{lemma}
Here, $K$ will be the set of keys for our future construction, and $V'$ will host the comb, with conditions {\bf (b)} and {\bf (c)} ensuring that its construction is indeed possible. Finally, condition {\bf (d)} ensures that the comb can be extended into a copy of $\mathsf{KC}(n,t,\ell)$, by plugging the comb in as $X$ and the two endpoints of the comb's path as $w_1,w_2$.
The proof of \cref{lem:prep} is based on two ingredients.
The first ingredient (\cref{lem:partition}) takes care of the actual partition promised by \cref{lem:prep}. In this step we take measures to ensure that our partition satisfies the desired conditions. In particular, vertices of $\textsc{Small} \setminus K$, and their neighbours, are placed in $V^\star$. This serves a dual purpose: we ensure that the minimum degree of the graph spanned by $V'$ is at least logarithmic, thus aiding us with the construction of the comb, and that the minimum degree after removing the comb is at least 2, which is a necessary condition for the completion of the comb into $\mathsf{KC}(n,t,\ell)$.
The second ingredient (\cref{lem:ham}), which is the core of the proof, gives {\bf (d)}, by showing that inside these ``well-prepared'' sets, one can find Hamilton paths with linearly many distinct endpoints, emerging from a given vertex.
\begin{lemma}\label{lem:partition}
There exist disjoint sets $K,U_1,U_2\subseteq V(G)$ with $|K|=t$ and $|U_1|,|U_2|\sim\gamma n$ for which the following holds.
Write $V'=V(G)\setminus(U_1\cup U_2)$.
Then
\begin{description}
\item[(a)] $D_1(G)\subseteq K\subseteq D_{\le 2}(G)$;
\item[(b)] For every $v\in K$, $N(v)\subseteq V'\setminus K$;
\item[(c)] If $v\notin\textsc{Small}$ then $\gamma\log{n}/100 \le d(v,U_1),d(v,U_2)\le 100\gamma \log n$ and $d(v,V')\ge \log{n}/20$;
\item[(d)] If $v\in\textsc{Small}\setminus K$ then $v$ and all of its neighbours are in $U_1$.
\end{description}
\end{lemma}
\begin{proof}
The proof involves an application of the symmetric form of the Local Lemma (see, e.g.,~\cite{AS}*{Chapter~5}; a similar application appears in~\cite{HKS12} and in~\cite{GKM21}).
Write $V=V(G)$, $\alpha=1/10$ and $s=1/\gamma$. Let $r=\floor{n/s}\sim\gamma n$ and let $A_1,\ldots,A_r,Z$ be a partition of the vertices of $G$ into $r$ ``blobs'' $A_i$ of size $s$ each, and an extra set $Z$ with $0\le |Z|< s$.
For $j\in[r]$ let $(x_j^1,x_j^2)$ be a uniformly chosen pair of distinct vertices from $A_j$.
For $i=1,2$ define $U_i'=\{x_j^i\}_{j=1}^r$.
Clearly, $|U_1'|=|U_2'|=r$ and $U_1'\cap U_2'=\varnothing$.
For every $v\notin\textsc{Small}$ let $\mathcal{B}_v^-$ be the event that $d(v,U_i') < 2\gamma\alpha^2\log{n}$ for some $i=1,2$, and let $\mathcal{B}_v^+$ be the event that $d(v,U_i') > \gamma\alpha^{-2}\log{n}/2$ for some $i=1,2$.
For such $v$, let $L(v)$ be the set of blobs that contain neighbours of $v$,
namely, $L(v)=\{A_j:\ N(v)\cap A_j\ne\varnothing\}$.
For $j\in[r]$ write $n_j(v)=|N(v)\cap A_j|$,
and note that $\sum_j n_j(v) \ge d(v)-s \ge \alpha\log{n}/2$.
For $i=1,2$ and $j\in[r]$ let $\chi_j^i(v)$ be the indicator of the event that $x_j^i$ is a neighbour of $v$, and note that $\E\chi_j^i(v)=\gamma n_j(v)$.
Observe that for $i=1,2$, $d(v,U_i') = \sum_j \chi_j^i(v)$, hence $\E[d(v,U_i')]=\gamma\sum_j n_j(v)\ge\gamma\alpha\log{n}/2$.
Thus, by \cref{thm:chernoff}, $\pr(\mathcal{B}_v^-)\le n^{-c_-}$ for some $c_->0$.
Similarly, by \ref{P:maxdeg}, $\E[d(v,U_i')]=\gamma\sum_j n_j(v)\le \gamma\alpha^{-1}\log{n}$.
Thus, by \cref{thm:chernoff}, $\pr(\mathcal{B}_v^+)\le n^{-c_+}$ for some $c_+>0$.
We conclude that for $\mathcal{B}_v=\mathcal{B}_v^-\cup\mathcal{B}_v^+$ we have $\pr(\mathcal{B}_v)\le n^{-c}$ for some $c>0$.
For two distinct vertices $u,v\notin\textsc{Small}$ say that $u,v$ are \defn{related} if $L(u)\cap L(v)\ne\varnothing$.
For a vertex $u\notin\textsc{Small}$ let $R(u)$ be the set of vertices in $V\setminus\textsc{Small}$ which are related to $u$, and note that $|R(u)|\le s\Delta(G)^2$, which is, by \ref{P:maxdeg}, at most $C\log^2{n}$ for some $C>0$.
Note that $\mathcal{B}_u$ is mutually independent of the set of events $\{\mathcal{B}_v\mid v\in (V\setminus\textsc{Small})\setminus R(u)\}$.
We now apply the symmetric case of the Local Lemma: observing that $e n^{-c} \cdot C\log^2{n}<1$ (for large enough $n$), we get that with positive probability, none of the events $\mathcal{B}_v$ occur.
We choose $U_1',U_2'$ to satisfy this.
Choose a set $K$ of size $t$ arbitrarily to satisfy \textbf{(a)}; this is possible due to \ref{P:numkeys}.
Write $K^+=K\cup N(K)$ and $S^+=\textsc{Small}\cup N(\textsc{Small})\supseteq K^+$.
Note that $|K^+|\le 3\log{n}$ by~\ref{P:numkeys} and that $|S^+|\le n^{0.6}$ by~\ref{P:smallsmall}.
Define $U_1=(U_1'\cup S^+)\setminus K^+$ and $U_2=U_2'\setminus S^+$ and observe that $|U_1|,|U_2|\sim \gamma n$.
Due to \ref{P:smalldist}, $N(\textsc{Small}\setminus K)\subseteq S^+\setminus K^+$, hence
the construction satisfies \textbf{(d)}.
Let $v\notin\textsc{Small}$.
The fact that $G$ satisfies \ref{P:smalldist} implies that $v$ has at most one neighbour in $S^+$.
Thus, for every $v\notin\textsc{Small}$ and $i=1,2$ it holds that $|d(v,U_i)-d(v,U_i')|\le 1$, and, in addition, $d(v,V')\ge d(v,V\setminus(U_1'\cup U_2'))-1 \ge \alpha\log{n}/2$, hence \textbf{(c)} is satisfied.
By the discussion above and by \ref{P:smalldist}, \textbf{(b)} is also satisfied.
We have thus proved the statement.
\end{proof}
\begin{lemma}\label{lem:ham}
If $W\subseteq V(G)$ is a vertex subset such that $|W| \geq n/10$, $\delta(G[W])\ge 2$, and such that for every $v\in W$ we have $d(v,W)\ge \min\{ d(v), \gamma \log n/100\}$,
then for every $w \in W$ there exists $R\subseteq W$ with $|R|\ge n/40$ such that for each $y\in R$, there is a Hamilton path in $G[W]$ whose endpoints are $w$ and $y$.
\end{lemma}
\begin{proof}
Write $\beta=\gamma/100$ and set $d_0=\beta\log{n}$.
Let $W\subseteq V(G)$ satisfy $h:=|W| \geq n/10$ and $\delta(G[W])\ge 2$, and assume that for every $v\in W$ we have $d(v,W)\ge \min\{ d(v), \beta \log n\}$.
We select a random edge subgraph $H$ of $G[W]$ as follows.
For each $v\in W$, if $d(v,W)\le d_0$ set $E(v)=E_G(v,W)$;
otherwise, namely if $d(v,W)>d_0$, then set $E(v)$ to be a (uniformly) selected set of $d_0$ random edges of $G[W]$ which are incident to $v$.
Let $H=(W,E_H)$ with $E_H=\bigcup_{v\in W} E(v)$.
Observe that $|E(H)|\le h\cdot d_0\le \beta n\log{n}/10$.
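This degree-capped sampling step can be sketched in a few lines (an illustrative toy with made-up names, not the paper's notation; `adj` maps each vertex of $W$ to its neighbour set within $W$):

```python
import random

def sparsify(adj, d0, rng=random.Random(0)):
    """Union over vertices of min(d(v), d0) incident edges, as undirected pairs."""
    kept = set()
    for v, nbrs in adj.items():
        # keep everything at low-degree vertices, sample d0 incident edges elsewhere
        chosen = sorted(nbrs) if len(nbrs) <= d0 else rng.sample(sorted(nbrs), d0)
        kept.update(frozenset((v, u)) for u in chosen)
    return kept

# On a star K_{1,5} with cap d0 = 2, each leaf keeps its only edge, so the
# union contains all 5 star edges even though the centre samples just 2.
star = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
assert len(sparsify(star, 2)) == 5
```

As in the proof, every vertex retains at least $\min\{d(v,W),d_0\}$ of its incident edges, while the union has at most $|W|\cdot d_0$ edges.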
We now show that $H$ is, with positive probability, a (connected) $(h/4,2)$-expander.
Taking $d=d_0$ and $m=\beta n/250$, and noting that $h\ge n/10\ge 4m$, it is enough to show that $H$ satisfies, with positive probability, Conditions 1--4 in \cref{lem:expander}.
For the first condition, note that $\delta(H)\ge \min\{d_0,\delta(G[W])\}\ge 2$.
The second condition holds as it holds for $G$ by \ref{P:smalldist} (since $d\le \log{n}/10$), and clearly also for every subgraph thereof.
Similarly, noting that $5m=\beta n/50$, the third condition holds as it holds for $G$ by \ref{P:local:sparse}.
We move on to prove the fourth condition of \cref{lem:expander}.
Let $F_1,F_2\subseteq W$ with $|F_1|,|F_2|\ge m$.
By \ref{P:pseudorandom} we know that $|E(F_1,F_2)|\ge cn\log{n}$ for $c=10^{-6}\beta^2$.
For $u\in F_1$ for which $d_G(u,F_2)\ge 1$ let $\mathcal{A}_u$ be the event that none of the edges of $E(u)$ is incident to a vertex of $F_2$.
By the construction of $H$, if $d_G(u,W)\le d_0$ then $\pr(\mathcal{A}_u)=0$.
On the other hand, if $d_G(u,W)>d_0$ then, using \ref{P:maxdeg},
\[
\begin{aligned}
\pr(\mathcal{A}_u)
&\le \binom{d_G(u,W)-d_G(u,F_2)}{d_0} / \binom{d_G(u,W)}{d_0}
= \prod_{i=0}^{d_0-1} \frac{d_G(u,W)-d_G(u,F_2)-i}{d_G(u,W)-i}\\
&\le \left( 1-\frac{d_G(u,F_2)}{d_G(u,W)} \right)^{d_0}
\le \left( 1-\frac{d_G(u,F_2)}{10\log{n}} \right)^{d_0}
\le \exp\left( -d_G(u,F_2)\cdot\beta/10 \right).
\end{aligned}
\]
Note also that the events $\mathcal{A}_u$, for different $u\in F_1$, are independent.
Thus,
\[
\pr(E_H(F_1,F_2)=\varnothing)
\le \exp\left(-\frac{\beta}{10}\sum_{u\in F_1} d_G(u,F_2)\right)
= \exp\left( -\frac{\beta}{10} |E_G(F_1,F_2)| \right)
\le e^{-c'n\log{n}}
\]
for $c'=10^{-7}\beta^3$.
By taking the union bound over all at most $2^{2n}$ choices of $F_1,F_2$, we see that Condition~4 of \cref{lem:expander} holds \whp{}.
Our next step is to show that $G[W]$ is Hamiltonian.
Fix a subgraph $H$ of $G[W]$ which is an $(h/4,2)$-expander with at most $\beta n\log{n}/10$ edges, as constructed above.
To find a Hamilton cycle in $G[W]$ we define a sequence $H_0,H_1,\ldots,H_h$ of subgraphs of $G[W]$ as follows.
Set $H_0=H$.
For each $i\ge 0$, if $H_i$ is Hamiltonian then set $H_{i+1}=H_i$; otherwise, let $e_i$ be a booster of $H_i$ which is contained in $G[W]$.
Note that such a booster is guaranteed to exist by \ref{P:boosters}, as $|E(H_i)|\le |E(H)|+i \le \beta n\log{n}/10+h \le \beta n\log{n}$.
Evidently, one cannot add $h$ boosters to a graph on $h$ vertices sequentially without making it Hamiltonian, hence $H_h$ is a Hamiltonian subgraph of $G[W]$.
Now, let $w\in W$, and let $P$ be a Hamilton path in $G[W]$ with $w$ being one of its endpoints.
Let $R$ be the set of endpoints $y$ of Hamilton paths of $G[W]$ with endpoints $w$ and $y$.
Evidently, as $G[W]$ is Hamiltonian, $R$ is not empty.
Moreover, by \cref{lem:Posa} we have $|N_{G[W]}(R)|\le 2|R|-1$.
Since $G[W]$ is an $(h/4,2)$-expander (as it contains the expander $H$), it must be the case that $|R|>h/4\ge n/40$, so the assertion of the lemma holds.
\end{proof}
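The endpoint set $R$ above is produced by Pósa's rotation technique. As a toy illustration (made-up names; for brevity we deduplicate by endpoint, which may under-count reachable endpoints in general), rotations with a fixed endpoint can be enumerated as follows:

```python
def posa_endpoints(adj, path):
    """Endpoints reachable from `path` by Posa rotations, keeping path[0] fixed.

    adj maps each vertex to its neighbour set; path is a Hamilton path (list),
    so every neighbour of the current endpoint lies on the path.
    """
    seen = {path[-1]}
    stack = [tuple(path)]
    while stack:
        p = stack.pop()
        pos = {v: i for i, v in enumerate(p)}
        for u in adj[p[-1]]:
            i = pos[u]
            if i >= len(p) - 2:  # rotating at the last edge changes nothing
                continue
            newp = p[:i + 1] + tuple(reversed(p[i + 1:]))  # new endpoint p[i+1]
            if newp[-1] not in seen:
                seen.add(newp[-1])
                stack.append(newp)
    return seen

# The 5-cycle with Hamilton path 0-1-2-3-4 and fixed endpoint 0:
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert posa_endpoints(cycle, [0, 1, 2, 3, 4]) == {1, 4}
```

Here $R=\{1,4\}$ and $|N(R)|=|\{0,2,3\}|=3=2|R|-1$, so the bound from \cref{lem:Posa} is attained.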
We are now ready to prove \cref{lem:prep}.
\begin{proof}[Proof of \cref{lem:prep}]
Let $K,U_1,U_2$ be the disjoint subsets of $V=V(G)$ obtained in \cref{lem:partition}.
Set $V^\star= U_1\cup U_2$ and $V'=V\setminus V^\star$ (so $K\subseteq V'$ is of size $t$ and $D_1\subseteq K\subseteq D_{\le 2}$, hence \textbf{(a)} is satisfied).
Note that \textbf{(b)} and \textbf{(c)} are also satisfied by \cref{lem:partition}.
Let $K\subseteq X\subseteq V'$ satisfy $|X|\le n/2$ and let $w_1,w_2\in X\setminus K$.
Write $V''=V'\setminus X$ and partition $V''=V''_1\cup V''_2$ as equally as possible.
For $i=1,2$, let $W_i=V''_i\cup U_i$, and choose a neighbour $z_i$ of $w_i$ in $W_i$; this is possible since $d(w_i,U_i)\ge \gamma\log{n}/100$ by the condition in \cref{lem:partition}.
Note that $|W_i|\ge n/5$ and for every $v\in W_i$ it holds that $d(v,W_i)\ge \min\{ d(v), \gamma\log{n}/100\}$, hence by \cref{lem:ham} there exists a set $R_i\subseteq W_i$ with $|R_i|\ge n/40$ such that for every $y\in R_i$ there is a Hamilton path spanning $W_i$ from $z_i$ to $y$.
In view of \ref{P:pseudorandom}, there exists an edge $e$ between $R_1$ and $R_2$ with endpoints $y_i\in R_i$, say.
For $i=1,2$, denote by $Q_{y_i}$ the Hamilton path of $G[W_i]$ between $z_i$ and $y_i$.
We now construct a Hamilton path in $G[V(G)\setminus X]$ as follows (as depicted in~\cref{fig:ham}):
\[
z_1 \xrightarrow{Q_{y_1}} y_1 \xrightarrow{e} y_2 \xrightarrow{Q_{y_2}} z_2,
\]
hence \textbf{(d)} is satisfied.
\end{proof}
\begin{figure*}[t]
\captionsetup{width=0.879\textwidth,font=small}
\centering
\includegraphics{ham.pdf}
\caption{Visualisation of the proof of \cref{lem:prep}.}
\label{fig:ham}
\end{figure*}
Let $V(G)=V^\star\cup V'$ be the partition obtained by \cref{lem:prep}, and let $K\subseteq V'$ be the set of size $t$ obtained by it.
The following lemma guarantees that $V'$ contains a copy of the KeyChain's ``comb'', which when put together with property \textbf{(d)} from \cref{lem:prep} guarantees the existence of a copy of $\mathsf{KC}(n,t,\ell)$ in $G$.
Write $K=\{x_1,\ldots,x_t\}$, and for each $i\in[t]$ let $w_i$ be an arbitrary neighbour of $x_i$ in $V'$ (there exist such neighbours due to the properties of $K,V'$, and they are distinct due to \ref{P:smalldist}).
Set $Q=\{w_1,\ldots,w_t\}$.
Recall the definitions of $a_j$ and $j_0$ from \cref{sec:intro}.
\begin{lemma}\label{lem:paths}
There is a sequence of paths $P_1,...,P_{t-1} \subseteq G[V']$ for which the following holds:
\begin{enumerate}
\item The endpoints of $P_i$ are $\{w_i,w_{i+1}\}$ for all $1\le i \le t-1$;
\item The length of $P_i$ is exactly $\ell$ for all $1\le i \le t-1$;
\item $V\left( P_i \right) \cap V\left( P_{i+1} \right) = \left\{ w_{i+1} \right\}$ for all $1\le i < t-1$, and $V\left( P_i \right) \cap V\left( P_j \right) = \varnothing$ for all $1\le i,j \le t-1$ such that $|i-j|>1$.
\end{enumerate}
\end{lemma}
\begin{proof}
For $U\subseteq V^{\prime}$, set $N^{\prime}(U):= N_G(U) \cap V^{\prime}$ (and similarly $N^{\prime}(v):= N_G(v)\cap V^{\prime}$ for $v\in V^{\prime}$). By \ref{P:smalldist}, $Q \cap \textsc{Small} = \varnothing$, and $N^{\prime}(w_i) \cap N^{\prime}(w_j) = \varnothing$ for all $i \neq j$.
For each $1\leq i\leq t$ let $Y_i,Z_i \subseteq N^{\prime}(w_i)$ be arbitrary disjoint subsets of size $|Y_i| = |Z_i| = a_2$ (such sets exist, by the construction of $V^{\prime}$ in \cref{lem:prep}).
We now construct the required paths sequentially.
For $1\le i<t$ we assume that $P_1,P_2,...,P_{i-1}$ have already been constructed, and construct a path $P_i$ with the desired properties.
Additionally, we construct $P_i$ to be such that its internal vertices do not belong to $K':=K\cup Q \cup \left( \bigcup_{k=1}^{t-1} \left( Y_k \cup Z_{k+1} \right) \right)$ (and, accordingly, assume that the internal vertices of $P_1,P_2,...,P_{i-1}$ do not belong to $Y_i, Z_{i+1}$).
Set $S_2 := Y_i$, $T_2 := Z_{i+1}$.
Now, for $3\le j \leq \ell /2$, given the sets $S_2,\ldots,S_{j-1},T_2,\ldots,T_{j-1}\subseteq V^{\prime}$ we construct sets $S_j,T_j\subseteq V^{\prime}$ with the following properties:
\begin{itemize}
\item $S_j \subseteq N^{\prime}\left( S_{j-1} \right)$, $T_j \subseteq N^{\prime}\left( T_{j-1} \right)$;
\item $|S_j| = |T_j| = a_j$;
\item $S_j \cap \left( \bigcup\limits_{k=1}^{i-1} V(P_k) \right) = T_j \cap \left( \bigcup\limits_{k=1}^{i-1} V(P_k) \right) = \varnothing$;
\item $S_j \cap \left( \bigcup\limits_{k=2}^{j-1} \left( S_k \cup T_k \right) \right) = T_j \cap \left( \bigcup\limits_{k=2}^{j-1} \left( S_k \cup T_k \right) \right) = \varnothing$;
\item $S_j \cap K' = T_j \cap K'= S_j \cap T_j = \varnothing$.
\end{itemize}
We make the following observation, obtained from properties \ref{P:smallsets},\ref{P:local:sparse} and from the construction of $K,V^{\prime},V^*$ in \cref{lem:prep}:
if $U \subseteq V^{\prime} \setminus K$ is of size at most $10n/\log n$, then $|N^{\prime}(U)| \geq |U|\cdot \log n/30$.
Indeed, assume otherwise; then
\[
|N(U)| \leq |N'(U)| + 200\gamma |U|\log n \leq \left( \frac{1}{30} + \frac{1}{50}\right) |U| \log n \leq |U|\log n/18.
\]
On the other hand, since $|U| \leq \frac{10n}{\log n} \leq \gamma n/5000$, by \ref{P:local:sparse}, $U$ spans at most $\gamma |U| \log n/1000$ edges, and since $U \cap \textsc{Small} = \varnothing$, we have
\[
e(U,V(G)\setminus U) \ge \sum_{u\in U}d(u) - 2\cdot e(U) \geq |U|\log n/11,
\]
a contradiction to \ref{P:smallsets}.
%
Therefore, since $|S_{j-1}| = |T_{j-1}| = a_{j-1} \leq a_{\ell /2 -1} < 10n / \log n$, we have
\[
|N^{\prime}(S_{j-1})|,|N^{\prime}(T_{j-1})| \geq a_{j-1} \cdot \log n / 30.
\]
This inequality implies the existence of two disjoint subsets $S^{\prime} ,T^{\prime}$ of $N^{\prime}(S_{j-1}),N^{\prime}(T_{j-1})$, respectively, of size at least $a_{j-1}\cdot \log n/60$. In addition, recalling that $\ell = o(\log n)$, we get
\[
\left| \left( \bigcup\limits_{k=1}^{i-1}V(P_k) \right) \cup \left( \bigcup\limits_{k=2}^{j-1} \left( S_k \cup T_k \right) \right) \right| \leq i\cdot \ell + 2\cdot j \cdot a_{j-1} = o(a_{j-1} \cdot \log n) .
\]
We now wish to make sure that we can choose large enough subsets of $S',T'$ which do not intersect $K'$.
To this end, note that by \ref{P:smalldist}, $N^{\prime}(S_2) \cap K', N^{\prime}(T_2) \cap K' \subseteq Q$ and $|Q|=o(a_2\log{n})$,
and for $j>3$, $|K'|=O(\log^2{n})=o(a_{j-1}\log{n})$, so for every $3\le j\le\ell/2$ we have
\[
|N'(S_{j-1})\cap K'|,|N'(T_{j-1})\cap K'| =o( a_{j-1}\cdot\log n).
\]
So overall we get
\[
\left|
\left( S^{\prime} \cup T^{\prime} \right)
\cap \left(
K' \cup \left( \bigcup\limits_{k=1}^{i-1}V(P_k) \right)
\cup \left(
\bigcup\limits_{k=2}^{j-1} \left( S_k \cup T_k \right)
\right)
\right)
\right| = o(a_{j-1}\cdot \log n),
\]
which implies that there are subsets $S_j,T_j$ of $S^{\prime} , T^{\prime}$ with all the listed properties.
Finally, observe that $|S_{\ell /2}| = |T_{\ell /2}| = a_{\ell /2}$, and therefore
\[
10n / \log n \leq |S_{\ell /2}|, |T_{\ell /2}| \leq \frac{10n}{\log n} \cdot \left\lceil \frac{\log n}{100} \right\rceil \leq \frac{n}{9},
\]
and therefore, by \ref{P:intersect},
\[
|N^{\prime}(S_{\ell /2}) \cap N^{\prime}(T_{\ell /2})| \geq |N_G(S_{\ell /2}) \cap N_G(T_{\ell /2})| - |V^*| \geq \left( \frac{1}{9}-3\gamma \right) \cdot n \geq n/10,
\]
which implies that $N^{\prime}(S_{\ell /2}) \cap N^{\prime}(T_{\ell /2})$ contains a vertex that is not a member of $\left( \bigcup_{k=1}^{i-1}V(P_k) \right) \cup \left( \bigcup_{k=2}^{\ell /2-1} \left( S_k \cup T_k \right) \right)$.
By the definitions of $S_2,...,S_{\ell /2},T_2,...,T_{\ell /2}$, this proves that there is a path $P_i$ of length $\ell$ between $w_i$ and $w_{i+1}$ with all our desired properties.
\end{proof}
We can now conclude the proof of \cref{lem:kc}.
Indeed, let $P_1,\ldots,P_{t-1}$ be the paths found in \cref{lem:paths}, and let $X=K\cup \bigcup_{i=1}^{t-1}V(P_i)$ be the vertex set of the ``comb'', consisting of the paths together with the edges $\{x_i,w_i\}$ for $i\in[t]$.
By \cref{lem:prep}, there exist neighbours $z_1\sim w_1$ and $z_t\sim w_t$ outside the comb, and a Hamilton path in $G[V(G)\setminus X]$ between $z_1$ and $z_t$.
The union of the comb, the edges $\{w_1,z_1\}$ and $\{w_t,z_t\}$ and the Hamilton path constitutes a copy of $\mathsf{KC}(n,t,\ell)$ in $G$ (see \cref{fig:hamkc}). \qed
\begin{figure*}[t]
\captionsetup{width=0.879\textwidth,font=small}
\centering
\includegraphics{hamkc.pdf}
\caption{Visualisation of the proof of \cref{lem:kc}.}
\label{fig:hamkc}
\end{figure*}
\section{Maximum common subgraph}\label{sec:mcs}
In this short section we prove \cref{prop:mcs}.
\begin{proof}[Proof of \cref{prop:mcs}]
We may assume that $\varepsilon>0$ is small enough.
Let $\delta=\delta(\varepsilon)>0$ be a constant to be chosen later, and let $p=n^{-1+\delta}$.
Let $m=(1+\varepsilon)n$, and let $\mathcal{A}_m$ be the event that there exists a subgraph $H$ of $G_1$ with $m$ edges which is also a subgraph of $G_2$.
By the union bound over the possible choices of $H$ and the permutations of the vertices of $G_2$, we obtain
\[
\pr(\mathcal{A}_m) \le \binom{\binom{n}{2}}{m}\cdot n!\cdot p^{2m}
\le \left(
\left(\frac{enp^2}{2(1+\varepsilon)} \right)^{1+\varepsilon} \cdot n
\right)^n
\le \left(2\cdot n^{(-1+2\delta)(1+\varepsilon)+1} \right)^n.
\]
Taking $\delta=\delta(\varepsilon)>0$ small enough ($\delta \le \varepsilon /3$ suffices), the last term vanishes.
\end{proof}
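For completeness, the choice of $\delta$ in the last step can be checked directly by expanding the exponent from the final display:

```latex
(-1+2\delta)(1+\varepsilon)+1
  = -\varepsilon + 2\delta(1+\varepsilon)
  \le -\varepsilon + \tfrac{2\varepsilon}{3}(1+\varepsilon)
  = \tfrac{\varepsilon}{3}\left(2\varepsilon-1\right) < 0
  \qquad\text{whenever } \delta\le\varepsilon/3 \text{ and } \varepsilon<\tfrac12,
```

so the base $2\cdot n^{(-1+2\delta)(1+\varepsilon)+1}$ is $n^{-\Omega(1)}$ and its $n$-th power tends to $0$.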
| https://arxiv.org/abs/2010.15519 | Spanning trees at the connectivity threshold | We present an explicit connected spanning structure that appears in a random graph just above the connectivity threshold with high probability. |
https://arxiv.org/abs/1509.03990 | Raising The Bar For Vertex Cover: Fixed-parameter Tractability Above A Higher Guarantee | We investigate the following above-guarantee parameterization of the classical Vertex Cover problem: Given a graph $G$ and $k\in\mathbb{N}$ as input, does $G$ have a vertex cover of size at most $(2LP-MM)+k$? Here $MM$ is the size of a maximum matching of $G$, $LP$ is the value of an optimum solution to the relaxed (standard) LP for Vertex Cover on $G$, and $k$ is the parameter. Since $(2LP-MM)\geq{LP}\geq{MM}$, this is a stricter parameterization than those---namely, above-$MM$ and above-$LP$---which have been studied so far. We prove that Vertex Cover is fixed-parameter tractable for this stricter parameter $k$: We derive an algorithm which solves Vertex Cover in time $O^{*}(3^{k})$, pushing the envelope further on the parameterized tractability of Vertex Cover.

\section{The Algorithm}\label{sec:algorithm}
In this section we describe our algorithm which solves \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace in
\(\ensuremath{\mathcal{O}^{\star}}\xspace(3^{\hat{k}})\) time. We start with an overview of the
algorithm. We then state the reduction and branching rules which
we use, and prove their correctness. We conclude the section by
proving that the algorithm correctly solves \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace and that it
runs within the stated running time.
\subsection{Overview}\label{sec:overview}
The \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace problem can be stated as:
\defparproblem{\name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace(\name{VCAL-P}\xspace)}%
{A graph \(G\)
and \(\hat{k}\in{}\mathbb{N}\).}%
{\(\hat{k}\)}%
{Let \(k=(2LP(G)-MM(G))+\hat{k}\). Is \(OPT(G)\leq{}k\)?}
At its core the algorithm is a simple branching algorithm. Indeed,
we employ only two basic branching steps which are both
``natural'' for \name{Vertex Cover}\xspace: We either branch on the two end-points of an
edge, or on a vertex \(v\) and a pair \(u,w\) of its neighbours.
We set \(\hat{k}\)
as the \emph{measure} for analyzing the running time of the
algorithm. We ensure that this measure decreases by \(1\) in each
branch of a branching step in the algorithm. Since
\(OPT(G)\geq(2LP(G)-MM(G))\) for every graph \(G\)---see
\autoref{lem:lower_bound}---and \(k\geq{}OPT(G)\) holds---by
definition---for a \textsc{Yes}\xspace instance, it follows that
\(\hat{k}=(k+MM(G)-2LP(G))\) is never negative for a \textsc{Yes}\xspace
instance. Hence we can safely terminate the branching at depth
\(\hat{k}\). Since the worst-case branching factor is three, we
get that the algorithm solves \name{VCAL-P}\xspace in time
\(\ensuremath{\mathcal{O}^{\star}}\xspace(3^{\hat{k}})\).
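As a concrete illustration of the quantities involved, the following toy brute-force script (our own sketch; the helper name and the enumeration shortcut are not from this paper, though half-integrality of the Vertex Cover LP is classical) computes \(OPT\), \(MM\) and \(LP\) for a small graph and confirms the guarantee \(OPT\geq 2LP-MM\), which is tight on a triangle:

```python
from itertools import combinations, product

def vc_quantities(n, edges):
    """Brute-force OPT, MM and LP for a tiny graph on vertices 0..n-1."""
    # OPT: smallest r admitting a vertex cover of size r
    opt = next(r for r in range(n + 1)
               if any(all(u in s or v in s for u, v in edges)
                      for s in map(set, combinations(range(n), r))))
    # MM: largest set of pairwise-disjoint edges
    mm = max(r for r in range(len(edges) + 1)
             for m in combinations(edges, r)
             if len({x for e in m for x in e}) == 2 * r)
    # LP: the relaxation has a half-integral optimum, so it suffices to
    # scan x in {0, 1/2, 1}^n (stored doubled to stay integral)
    lp = min(sum(x) / 2 for x in product((0, 1, 2), repeat=n)
             if all(x[u] + x[v] >= 2 for u, v in edges))
    return opt, mm, lp

opt, mm, lp = vc_quantities(3, [(0, 1), (1, 2), (0, 2)])  # a triangle
assert (opt, mm, lp) == (2, 1, 1.5)
assert opt >= 2 * lp - mm  # tight here: 2*1.5 - 1 = 2
```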
\input{pseudocode.tex}
The algorithm---see \autoref{algorithm} on
page~\pageref{algorithm}---modifies the input graph \(G\) in
various ways as it runs. In our description of the algorithm we
will sometimes, for the sake of brevity, slightly abuse the
notation and use \(G\) to refer to the ``current'' graph at each
point in the algorithm. Since the intended meaning will be clear
from the context, this should not cause any confusion.
The crux of the algorithm is the manner in which it finds in \(G\)
a small structure---such as an edge---on which to branch such that
the measure \(\hat{k}\) drops on each branch. Clearly, picking an
arbitrary edge---say---and branching on its end-points will not
suffice: this will certainly reduce the budget \(k\) by one on
each branch, but we will have no guarantees on how the values
\(MM(G)\) and \(LP(G)\) change, and so we cannot be sure that
\(\hat{k}\) drops. Instead, we apply a two-pronged strategy to
find a small structure in the graph which gives us enough control
over how the values \(k,MM(G),LP(G)\) change, in such a way as to
ensure that \(\hat{k}=(k+MM(G)-2LP(G))\) drops in each branch.
First, we employ a set of three reduction rules which give us
control over how \(LP(G)\) changes when we pick a vertex into the
solution and delete it from \(G\). These rules have the following
nice properties:
\begin{itemize}
\item Each rule is sound---see \autoref{sec:redrules}---and can be
applied in polynomial time;
\item Applying any of these rules does not increase the measure
\(\hat{k}\), and;
\item If none of these rules applies to \(G\), then we can delete
up to two vertices from \(G\) with an assured drop of \(0.5\)
\emph{per deleted vertex} in the value of \(LP(G)\).
\end{itemize}
In this we follow the approach of Narayanaswamy et
al.~\cite{NarayanaswamyRamanRamanujanSaurabh2012} who came up with
this strategy to find a small structure on which to branch so that
their measure---which was \((k-LP)\)---would reduce on each
branch. Indeed, we \emph{reuse} three of their reduction rules,
after proving that these rules do not increase our measure
\(\hat{k}\).
While these rules give us control over how \(LP(G)\) changes when
we delete a vertex, they do not give us such control over
\(MM(G)\). That is: let \(G\) be a graph to which none of these
rules applies and let \(v\) be a vertex of \(G\). Picking \(v\)
into a solution and deleting it from \(G\) to get \(G'\) would (i)
reduce \(k\) by \(1\), and (ii) make \(LP(G')=LP(G)-0.5\), but
will \emph{not} result in a drop in \(\hat{k}\) \emph{if it so
happens} that the matching number of \(G'\) is the same as that
of \(G\). Thus for our approach to succeed we need to be able to
consistently find, in a graph reduced with respect to the
reduction rules, a small structure---say, an edge---on which to
branch, such that deleting a small set of vertices from this
structure would reduce the matching number of the graph by at
least \(1\).
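The failure mode described above is easy to see on a toy example (a brute-force sketch for illustration; not the paper's algorithm): deleting a vertex need not reduce the matching number at all.

```python
from itertools import combinations

def mm(edges):
    # maximum matching size by brute force (tiny graphs only)
    for size in range(len(edges), 0, -1):
        for m in combinations(edges, size):
            verts = [x for e in m for x in e]
            if len(verts) == len(set(verts)):
                return size
    return 0

# the path 0-1-2 has matching number 1; deleting the end-point 0
# still leaves the edge {1,2}, so the matching number does not drop
path = [(0, 1), (1, 2)]
print(mm(path), mm([e for e in path if 0 not in e]))  # 1 1
```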
The search for such a structure led us to the second and novel
part of our approach, namely the use of the classical
\emph{Gallai-Edmonds decomposition} of graphs
(\autoref{def:gallai_edmonds_decomposition}). We found that by
first reducing the input graph with respect to the three reduction
rules and then carefully choosing edges to branch based on the
Gallai-Edmonds decomposition of the resulting graph, we could make
the measure \(\hat{k}\) drop on every branch. We describe the
reduction and branching rules in the next two subsections.
\input{reduction_rules.tex}
\input{branching_rules.tex}
\input{analysis.tex}
\subsection{Putting it All Together: Correctness and Running Time Analysis}
The correctness of our algorithm and the claimed bound on its
running time follow more or less directly from the above
discussion.
\begin{proof}[Proof of \autoref{thm:main}]
We claim that \autoref{algorithm} solves \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace in
\(\ensuremath{\mathcal{O}^{\star}}\xspace(3^{\hat{k}})\) time. The correctness of the algorithm
follows from the fact that both the reduction rules and the
branching rules are sound---\autoref{lem:rules_sound_fast} and
\autoref{lem:branching_sound}. This means that (i) no reduction
rule ever converts a \textsc{Yes}\xspace instance into a \textsc{No}\xspace instance or
\emph{vice versa}, and (ii) each branching rule outputs \emph{at
least one} \textsc{Yes}\xspace instance when given a \textsc{Yes}\xspace instance as input,
and \emph{all} \textsc{No}\xspace instances when given a \textsc{No}\xspace instance as
input. It follows that if the algorithm outputs \textsc{Yes}\xspace or \textsc{No}\xspace,
then the input instance must also have been \textsc{Yes}\xspace or \textsc{No}\xspace,
respectively.
The running time bound follows from three factors. Firstly, all
the reduction rules are safe, and so they never increase the
measure \(\hat{k}\)---see
\autoref{lem_rule_one_safe},~\autoref{lem_rule_two_safe},
\autoref{lem:rule_three_safe},
and~\autoref{lem:result_of_reduction_rules}. Also, each
reduction rule can be executed in polynomial time, and since
each reduction rule reduces the number of vertices in the graph
by at least one, they can be exhaustively applied in polynomial
time. Secondly, each branch of each branching rule reduces the
measure by at least \(1\)---\autoref{lem:measure_drop}---from
which we get that each path from the root of the recursion
tree---where the measure is the original value of
\(\hat{k}\)---to a leaf---where the measure first becomes zero
or less---has length at most \(\hat{k}\). Further, the largest
branching factor is \(3\), which means that the number of nodes
in the recursion tree is \(O(3^{\hat{k}})\). Thirdly, we know
that the computation at each node in the recursion tree takes
polynomial time. This follows from \autoref{lem:branching_sound}
for the nodes where we do branching. As for the leaf nodes:
Since we know that the measure will never be negative for a \textsc{Yes}\xspace
instance (\autoref{lem:lower_bound}), since we can check for the
applicability of each branching rule in polynomial time, and
since each branching rule reduces the measure by at least one,
we can solve the instance at each leaf node in polynomial time.
\end{proof}
\subsection{The Branching Rules}
Let \((G=(V,E),k)\) be the instance of \name{Vertex Cover}\xspace obtained after
exhaustively applying the reduction rules to the input instance,
and let \(\hat{k}=k+MM(G)-2LP(G)\). Then \(\surplus{G}\geq{}2\).
If \(\hat{k}<0\) then the instance \((G,\hat{k})\) of \name{VCAL-P}\xspace is
trivially a \textsc{No}\xspace instance (see \autoref{sec:overview}), and we
return \textsc{No}\xspace and stop. In the remaining case \(\hat{k}\geq0\), and
we apply one of two branching rules to the instance to reduce the
measure \(\hat{k}\). Each of our branching rules takes an instance
\((G,\hat{k})\) of \name{VCAL-P}\xspace, runs in polynomial time, and outputs
either two or three instances
\((G_{1},\hat{k}_{1}),(G_{2},\hat{k}_{2})[,(G_{3},\hat{k}_{3})]\)
of \name{VCAL-P}\xspace. The algorithm then recurses on each of these instances
in turn. We say that a branching rule is \emph{sound} if the
following holds: \((G,\hat{k})\) is a \textsc{Yes}\xspace instance of \name{VCAL-P}\xspace if
and only if \emph{at least one} of the (two or three) instances
output by the rule is a \textsc{Yes}\xspace instance of \name{VCAL-P}\xspace. We now present the
branching rules, prove that they are sound, and show that the
measure \(\hat{k}\) drops by at least \(1\) on each branch of each
rule.
Before starting with the branching, we compute the
\emph{Gallai-Edmonds decomposition} of the graph \(G=(V,E)\). This
can be done in polynomial time~\cite{LovaszPlummerBook2009} and
yields a partition of the vertex set \(V\) into three parts
\(V=O\uplus{}I\uplus{}P\), one or more of which may be
empty---see~\autoref{def:gallai_edmonds_decomposition}. We then
branch on edges in the graph which are carefully chosen with
respect to this partition. \autoref{bra:IP_branch} applies to the
graph \(G\) if and only if the induced subgraph \(G[I\cup{}P]\)
contains at least one edge.
\begin{branching}\label{bra:IP_branch}
Branch on the two end-points of an edge \(\{u,v\}\) in the
induced subgraph \(G[I\cup{}P]\). More precisely, the two
branches generate two instances as follows:
\begin{description}
\item[Branch 1:] \(G_{1}\gets{}(G\setminus{}\{u\}),k_{1}\gets(k-1),\hat{k_{1}}\gets{}k_{1}+MM(G_{1})-2LP(G_{1})\)
\item[Branch 2:] \(G_{2}\gets{}(G\setminus{}\{v\}),k_{2}\gets(k-1),\hat{k_{2}}\gets{}k_{2}+MM(G_{2})-2LP(G_{2})\)
\end{description}
This rule outputs the two instances \((G_{1},\hat{k}_{1})\) and \((G_{2},\hat{k}_{2})\).
\end{branching}
\noindent \autoref{bra:O_branch} applies to the graph \(G\) if and
only if \autoref{bra:IP_branch} does not apply to \(G\).
\begin{branching}\label{bra:O_branch}
Pick a vertex \(u\in{O}\) of \(G\) which
has at least two neighbours \(v,w\in{O}\). Construct the graph
\(G'=G\setminus{}\{u\}\) and compute its
Gallai-Edmonds decomposition \(O'\uplus{}I'\uplus{}P'\). Pick an
edge \(\{x,y\}\) in \(G'[P']\). The three branches of this
branching rule generate three instances as follows:
\begin{description}
\item[Branch 1:] \(G_{1}\gets{}(G\setminus{}\{v,w\}),k_{1}\gets(k-2),\hat{k_{1}}\gets{}k_{1}+MM(G_{1})-2LP(G_{1})\)
\item[Branch 2:] \(G_{2}\gets{}(G'\setminus{}\{x\}),k_{2}\gets(k-2),\hat{k_{2}}\gets{}k_{2}+MM(G_{2})-2LP(G_{2})\)
\item[Branch 3:] \(G_{3}\gets{}(G'\setminus{}\{y\}),k_{3}\gets(k-2),\hat{k_{3}}\gets{}k_{3}+MM(G_{3})-2LP(G_{3})\)
\end{description}
This rule outputs the three instances \((G_{1},\hat{k}_{1})\),
\((G_{2},\hat{k}_{2})\), and \((G_{3},\hat{k}_{3})\).
\end{branching}
Note that in the pseudocode of \autoref{algorithm} we have not,
for the sake of brevity, written out the three branches of the
second rule. Instead, we have used a boolean switch (\(reduce\))
as a third argument to the procedure to simulate this behaviour.
\begin{lemma}\label{lem:branching_sound}
Both branching rules are sound, and each can be applied
in time polynomial in the size of the input \((G,k)\).
\end{lemma}
\begin{proof}
We first prove that the rules can be applied in polynomial
time. Recall that we can compute the Gallai-Edmonds
decomposition of graph \(G\) in polynomial
time~\cite{LovaszPlummerBook2009}.
\begin{description}
\item[\autoref{bra:IP_branch}:] Follows more or less directly
from the definition of the branching rule.
\item[\autoref{bra:O_branch}:] It is not difficult to see that
the applicability of this rule can be checked in polynomial
time. If the rule does apply, then the set \(P\) is empty and
the set \(I\) is an independent set in \(G\)
(\autoref{cor:gallai_edmonds_more_properties}). By
\autoref{lem:egbipart}, there is at least one vertex in the
set \(O\) which has at least two neighbours in \(O\). Hence
vertices \(u,v,w\) of the kind required in the rule
necessarily exist in \(G\). From \autoref{lem:egbipart} we
also get that an edge \(\{x,y\}\) of the required kind also
exists. Given the existence of these, it is not difficult to
see that the rule can be applied in polynomial time.
\end{description}
We now show that the rules are sound. Recall that a branching
rule is sound if the following holds: any instance
\((G,\hat{k})\) of \name{VCAL-P}\xspace is a \textsc{Yes}\xspace instance if and only if at
least one of the instances obtained by applying the rule to
\((G,\hat{k})\) is a \textsc{Yes}\xspace instance.
\begin{description}
\item[\autoref{bra:IP_branch}:] The soundness of this rule is
not difficult to see since it is a straightforward branching
on the two end-points of an edge. Nevertheless, we include a
full argument for the sake of completeness. So let
\((G,\hat{k})\) be a \textsc{Yes}\xspace instance of \name{VCAL-P}\xspace, and let
\((G_{1},\hat{k_{1}}),(G_{2},\hat{k_{2}})\) be the two
instances obtained by applying \autoref{bra:IP_branch} to
\((G,\hat{k})\). Since \((G,\hat{k})\) is a \textsc{Yes}\xspace instance, the
graph \(G\) has a vertex cover, say \(S\), of size at most
\(k=2LP(G)-MM(G)+\hat{k}\). Then
\((S\cap\{u,v\})\neq{}\emptyset\). Suppose \(u\in{}S\). Then
\(S_{1}=S\setminus{}\{u\}\) is a vertex cover of the graph
\(G_{1}=G\setminus{}\{u\}\), of size at most
\(k_{1}=(k-1)\). It follows that for
\(\hat{k_{1}}=k_{1}+MM(G_{1})-2LP(G_{1})\),
\((G_{1},\hat{k_{1}})\) is a \textsc{Yes}\xspace instance of \name{VCAL-P}\xspace. A
symmetric argument gives us the \textsc{Yes}\xspace instance
\((G_{2},\hat{k_{2}})\) of \name{VCAL-P}\xspace for the case \(v\in{}S\).
Conversely, suppose \((G_{1},\hat{k_{1}})\) is a \textsc{Yes}\xspace instance
of \name{VCAL-P}\xspace where
\(G_{1}=G\setminus{}\{u\},k_{1}=(k-1),\hat{k_{1}}=k_{1}+MM(G_{1})-2LP(G_{1})\). Then
graph \(G_{1}\) has a vertex cover, say \(S_{1}\), of size at
most \(k_{1}=(k-1)\). \(S=S_{1}\cup\{u\}\) is then a vertex
cover of graph \(G\) of size at most \(k\), and so
\((G,\hat{k})\), where \(\hat{k}=k+MM(G)-2LP(G)\) is a \textsc{Yes}\xspace
instance of \name{VCAL-P}\xspace as well. An essentially identical argument
works for the case when \((G_{2},\hat{k_{2}})\) is a \textsc{Yes}\xspace
instance of \name{VCAL-P}\xspace.
\item[\autoref{bra:O_branch}:] It is not difficult to see that
this rule is sound, either, since it consists of exhaustive
branching on a vertex \(u\). We include the arguments for
completeness. So let \((G,\hat{k})\) be a \textsc{Yes}\xspace instance of
\name{VCAL-P}\xspace, and let
\((G_{1},\hat{k_{1}}),(G_{2},\hat{k_{2}}),(G_{3},\hat{k_{3}})\)
be the three instances obtained by applying
\autoref{bra:O_branch} to \((G,\hat{k})\). Since
\((G,\hat{k})\) is a \textsc{Yes}\xspace instance, the graph \(G\) has a
vertex cover, say \(S\), of size at most
\(k=2LP(G)-MM(G)+\hat{k}\). We consider two cases:
\(u\in{S}\), and \(u\notin{S}\). First consider the case
\(u\notin{S}\). Then all the neighbours of \(u\) must be in
\(S\). In particular, \(\{v,w\}\subseteq{S}\). It follows that
the set \(S\setminus\{v,w\}\) is a vertex cover of the graph
\(G_{1}=(G\setminus\{v,w\})\), of size at most
\(k_{1}=(k-2)\). Hence we get that for
\(\hat{k_{1}}=k_{1}+MM(G_{1})-2LP(G_{1})\),
\((G_{1},\hat{k_{1}})\) is a \textsc{Yes}\xspace instance of \name{VCAL-P}\xspace.
Now consider the case \(u\in{S}\). Then the set
\(S'=S\setminus\{u\}\) is a vertex cover of the graph
\(G'=G\setminus\{u\}\), of size at most \(k-1\). Since
\(\{x,y\}\) is an edge in the graph \(G'\), we get that
\((S'\cap\{x,y\})\neq\emptyset\). Suppose \(x\in{}S'\). Then
\(S_{2}=(S'\setminus\{x\})\) is a vertex cover of the graph
\(G_{2}=G'\setminus{}\{x\}\), of size at most
\(k_{2}=(k-2)\). It follows that for
\(\hat{k_{2}}=k_{2}+MM(G_{2})-2LP(G_{2})\),
\((G_{2},\hat{k_{2}})\) is a \textsc{Yes}\xspace instance of \name{VCAL-P}\xspace. A
symmetric argument gives us the \textsc{Yes}\xspace instance
\((G_{3},\hat{k_{3}})\) of \name{VCAL-P}\xspace for the case \(y\in{}S'\).
Conversely, suppose \((G_{1},\hat{k_{1}})\) is a \textsc{Yes}\xspace instance
of \name{VCAL-P}\xspace where
\(G_{1}=G\setminus\{v,w\},k_{1}=(k-2),\hat{k_{1}}=k_{1}+MM(G_{1})-2LP(G_{1})\). Then
graph \(G_{1}\) has a vertex cover, say \(S_{1}\), of size at
most \(k_{1}=(k-2)\). \(S=S_{1}\cup\{v,w\}\) is then a vertex
cover of graph \(G\) of size at most \(k\), and so
\((G,\hat{k})\) where \(\hat{k}=k+MM(G)-2LP(G)\), is a \textsc{Yes}\xspace
instance of \name{VCAL-P}\xspace as well. Essentially identical arguments
work for the cases when \((G_{2},\hat{k_{2}})\) or
\((G_{3},\hat{k_{3}})\) is a \textsc{Yes}\xspace instance of \name{VCAL-P}\xspace.
\end{description}
\end{proof}
Our choice of vertices on which we branch ensures that the measure
drops by at least one on each branch of the algorithm.
\begin{lemma}\label{lem:measure_drop}
Let \((G,\hat{k})\) be an input given to one of the branching
rules, and let \((G_{i},\hat{k}_{i})\) be an instance output by
the rule. Then \(\hat{k}_{i}\leq(\hat{k}-1)\).
\end{lemma}
\begin{proof}
Recall that by definition, \(\hat{k}=k+MM(G)-2LP(G)\). We
consider each branching rule. We reuse the notation from the
description of each rule.
\begin{description}
\item[\autoref{bra:IP_branch}:] Consider \textbf{Branch 1}.
Since \(u\in(I\cup{P})\), \emph{every} maximum matching of
graph \(G\) saturates vertex \(u\)
(\autoref{thm:gallai_edmonds_properties}). Hence we get that
\(MM(G_{1})=(MM(G)-1)\). Since \(\surplus{G}\geq2\) we
get---from \autoref{lem:surplus_lp_drop}---that
\(LP(G_{1})=(LP(G)-\frac{1}{2})\). And since \(k_{1}=(k-1)\)
by definition, we get that
\(\hat{k}_{1} = k_{1}+MM(G_{1})-2LP(G_{1}) =
(k-1)+(MM(G)-1)-2(LP(G)-\frac{1}{2}) =k+MM(G)-2LP(G)-1 =
(\hat{k}-1)\). An essentially identical argument applied to
the (symmetrical) \textbf{Branch 2} tells us that
\(\hat{k}_{2}=(\hat{k}-1)\).
\item[\autoref{bra:O_branch}:]Consider \textbf{Branch 1}. Since
\(\{u,v,w\}\subseteq{O}\) and \(v,w\) are neighbours of vertex
\(u\), we get that vertices \(v\) and \(w\) belong to the same
connected component of the induced subgraph \(G[O]\) of
\(G\). It follows
(\autoref{cor:gallai_edmonds_more_properties}) that
\(MM(G_{1})\leq(MM(G)-1)\). Since \(\surplus{G}\geq2\) and
\(G_{1}=G\setminus\{v,w\}\) we get---from
\autoref{lem:surplus_lp_drop}---that
\(LP(G_{1})=(LP(G)-1)\). And since \(k_{1}=(k-2)\) by
definition, we get that
\(\hat{k}_{1} = k_{1}+MM(G_{1})-2LP(G_{1})\leq
(k-2)+(MM(G)-1)-2(LP(G)-1) = k+MM(G)-2LP(G)-1 = (\hat{k}-1)\).
Now consider \textbf{Branch 2}. Since \(u\in{O}\) we
get---from the definition of the Gallai-Edmonds
decomposition---that \(MM(G')=MM(G)\). Now since \(x\in{P'}\)
we get---see \autoref{thm:gallai_edmonds_properties}---that
\(MM(G_{2})=(MM(G')-1)=(MM(G)-1)\). Since \(\surplus{G}\geq2\)
and \(G_{2}=G\setminus\{u,x\}\) we get---from
\autoref{lem:surplus_lp_drop}---that
\(LP(G_{2})=(LP(G)-1)\). And since \(k_{2}=(k-2)\) by
definition, we get that
\(\hat{k}_{2} = k_{2}+MM(G_{2})-2LP(G_{2}) =
(k-2)+(MM(G)-1)-2(LP(G)-1) = k+MM(G)-2LP(G)-1 =
(\hat{k}-1)\). An essentially identical argument applied to
the (symmetrical) \textbf{Branch 3} tells us that
\(\hat{k}_{3}=(\hat{k}-1)\).
\end{description}
\end{proof}
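The bookkeeping in the proof above is pure arithmetic and can be spot-checked mechanically; the sample values of \(k\), \(MM\), and \(LP\) below are arbitrary and not derived from any particular graph.

```python
# spot-check of the measure arithmetic: on each branch k, MM, and LP
# change so that k + MM - 2*LP drops by exactly one
for k, MM, LP in [(5, 3, 3.5), (7, 4, 5.0), (2, 2, 2.5)]:
    khat = k + MM - 2 * LP
    # Rule 1, Branch 1: k drops by 1, MM by 1, LP by 1/2
    assert (k - 1) + (MM - 1) - 2 * (LP - 0.5) == khat - 1
    # Rule 2, all branches: k drops by 2, MM by (at least) 1, LP by 1
    assert (k - 2) + (MM - 1) - 2 * (LP - 1) == khat - 1
print("measure drops by exactly 1 on each branch")
```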
\section{Conclusion}\label{sec:conclusion}
Motivated by an observation of Lov\'{a}sz and Plummer, we derived
the new lower bound \(2LP(G)-MM(G)\) for the vertex cover number
of a graph \(G\). This bound is at least as large as the bounds
\(MM(G)\) and \(LP(G)\) which have hitherto been used as lower
bounds for investigating above-guarantee parameterizations of \name{Vertex Cover}\xspace.
We took up the parameterization of the \name{Vertex Cover}\xspace problem above our
``higher'' lower bound \(2LP(G)-MM(G)\), which we call the
Lov\'{a}sz-Plummer lower bound for \name{Vertex Cover}\xspace. We showed that \name{Vertex Cover}\xspace remains
fixed-parameter tractable even when parameterized above the
Lov\'{a}sz-Plummer bound. The main result of this work is an
\(\ensuremath{\mathcal{O}^{\star}}\xspace(3^{\hat{k}})\) algorithm for \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace.
The presence of both \((-2LP(G))\) and \(MM(G)\)---in addition to
the ``solution size''---in our measure made it challenging to find
structures on which to branch; we had to be able to control each
of these values and their interplay in order to ensure a drop in
the measure at each branch. The main new idea which we employed
for overcoming this hurdle is the use of the Gallai-Edmonds
decomposition of graphs for finding structures on which to branch
profitably. The main technical effort in this work has been
expended in proving that our choice of vertices/edges from the
Gallai-Edmonds decomposition actually work in the way we
want. Note, however, that the branching rules themselves are very
simple; it is only the analysis which is involved.
The most immediate open problem is whether we can improve on the
base \(3\) of the \textrm{\textup{FPT}}\xspace running time. Note that any such
improvement directly implies \textrm{\textup{FPT}}\xspace algorithms of the same running
time for \name{Above-Guarantee Vertex Cover}\xspace and \name{Vertex Cover Above LP}\xspace. Tempted by this implication, we have
tried to bring this base down, but so far without success. Another
question which suggests itself is: Is this the best lower bound
for vertex cover number above which \name{Vertex Cover}\xspace is \textrm{\textup{FPT}}\xspace? How far can we
push the lower bound before the problem becomes intractable?
\section{Introduction}\label{sec:introduction}
The input to the \name{Vertex Cover}\xspace problem consists of an undirected graph \(G\)
and an integer \(k\), and the question is whether \(G\) has a
\emph{vertex cover}---a subset \(S\subseteq{}V(G)\) of vertices
such that every edge in \(G\) has at least one end-point in
\(S\)---of size at most \(k\). This problem is among Karp's
original list of 21 \textrm{\textup{NP-complete}}\xspace problems~\cite{Karp1972}; it is also one
of the best-studied problems in the field of parameterized
algorithms and
complexity~\cite{ChenKanjXia2010,Lampis2011,MishraRamanSaurabhSikdarSubramanian2011,RamanRamanujanSaurabh2011}.
The input to a parameterized version of a classical decision
problem consists of two parts: the classical input and a specified
\emph{parameter}, usually an integer, usually denoted by the
letter \(k\). A fixed-parameter tractable (\textrm{\textup{FPT}}\xspace)
algorithm~\cite{DowneyFellows2013,FlumGroheBook} for this
parameterized problem is one which solves the underlying decision
problem in time \(f(k)\cdot{}n^{c}\) where (i) \(f\) is a
computable function of \(k\) alone, (ii) \(n\) is the size of the
(classical) input instance, and (iii) \(c\) is a constant
independent of \(n\) and \(k\). This running time is often written
as \(\ensuremath{\mathcal{O}^{\star}}\xspace(f(k))\); the \ensuremath{\mathcal{O}^{\star}}\xspace notation hides constant-degree
polynomial factors in the running time. A parameterized problem
which has an \textrm{\textup{FPT}}\xspace algorithm is itself said to be (in) \textrm{\textup{FPT}}\xspace.
The ``standard'' parameter for \name{Vertex Cover}\xspace is the number \(k\) which comes
as part of the input and represents the \emph{size of the vertex
cover} for which we look---hence referred to, loosely, as the
``solution size''. This is the most extensively studied
parameterization of
\name{Vertex Cover}\xspace~\cite{BalasubramanianFellowsRaman1998,ChandranGrandoni2005,ChenKanjXia2010,Lampis2011,NiedermeierRossmanith1999}. Starting
with a simple two-way branching algorithm (folklore) which solves
the problem in time \(\ensuremath{\mathcal{O}^{\star}}\xspace(2^{k})\) and serves as the \emph{de
facto} introduction-cum-elevator-pitch to the field, a number of
\textrm{\textup{FPT}}\xspace algorithms with improved running times have been found for
\name{Vertex Cover}\xspace, the current fastest of which solves the problem in
\(\ensuremath{\mathcal{O}^{\star}}\xspace(1.2738^{k})\) time~\cite{ChenKanjXia2010}. It is also
known that unless the Exponential Time Hypothesis (ETH) fails,
there is no algorithm which solves \name{Vertex Cover}\xspace in \(\ensuremath{\mathcal{O}^{\star}}\xspace(2^{o(k)})\)
time~\cite{ImpagliazzoPaturiZane2001}.
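The folklore two-way branching algorithm mentioned above fits in a few lines; the sketch below (illustration only, with none of the kernelization or refined branching of the faster algorithms) decides \name{Vertex Cover}\xspace in \(\ensuremath{\mathcal{O}^{\star}}\xspace(2^{k})\) time.

```python
def vc(edges, k):
    # folklore O*(2^k) branching: some end-point of any uncovered
    # edge must be in the cover; try both choices and recurse
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    return (vc([e for e in edges if u not in e], k - 1) or
            vc([e for e in edges if v not in e], k - 1))

cycle5 = [(i, (i + 1) % 5) for i in range(5)]
print(vc(cycle5, 2), vc(cycle5, 3))  # False True
```

The recursion tree is binary with depth at most \(k\), which gives the \(\ensuremath{\mathcal{O}^{\star}}\xspace(2^{k})\) bound.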
This last point hints at a fundamental drawback of this
parameterization of \name{Vertex Cover}\xspace: In cases where the size of a smallest
vertex cover (called the \emph{vertex cover number}) of the input
graph is ``large''---say, \(\Omega(n)\) where \(n\) is the number
of vertices in the graph---one cannot hope to use these algorithms
to find a smallest vertex cover ``fast''. This is, for instance,
the case when the input graph \(G\) has a large \emph{matching},
which is a set of edges of \(G\) no two of which share an
end-point. Observe that since each edge in a matching has a
distinct representative in any vertex cover, the size of a largest
matching in \(G\) is a \emph{lower bound} on the vertex cover
number of \(G\). So when input graphs have matchings of size
\(\Omega(n)\)---which is, in a real sense, \emph{the most common
case} by far (see, e.g.,~\cite[Theorem
7.14]{Bollobas2001})---\textrm{\textup{FPT}}\xspace algorithms of the kind described in
the previous paragraph take \(\Omega(c^{n})\) time for some
constant \(c\), and one cannot hope to improve this to the form
\(\ensuremath{\mathcal{O}}\xspace(c^{o(n)})\) unless ETH fails. Put differently: consider the
standard parameterization of \name{Vertex Cover}\xspace, and let \(MM\) denote the size
of a maximum matching---the \emph{matching number}---of the input
graph. Note that we can find \(MM\) in polynomial
time~\cite{Edmonds1965}. When \(k<MM\) the answer is trivially
\textsc{No}\xspace, and thus such instances are uninteresting, and when
\(k\geq{}MM\), \textrm{\textup{FPT}}\xspace algorithms for this parameterization of the
problem are impractical for those (most common) instances for
which \(MM=\Omega(n)\).
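The distinct-representative argument behind the lower bound, and the triviality of instances with \(k<MM\), can be checked by brute force on a toy graph (an illustration only, not a polynomial-time procedure):

```python
from itertools import combinations

def mvc(n, edges):
    # vertex cover number by brute force
    for size in range(n + 1):
        for s in combinations(range(n), size):
            if all(u in s or v in s for u, v in edges):
                return size

def mm(edges):
    # matching number by brute force
    for size in range(len(edges), 0, -1):
        for m in combinations(edges, size):
            verts = [x for e in m for x in e]
            if len(verts) == len(set(verts)):
                return size
    return 0

path4 = [(0, 1), (1, 2), (2, 3)]    # path on 4 vertices
print(mm(path4), mvc(4, path4))     # 2 2
# any k below the matching number is a trivial No instance
assert all(mvc(4, path4) > k for k in range(mm(path4)))
```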
Such considerations led to the following alternative
parameterization of \name{Vertex Cover}\xspace, where the parameter is the ``excess''
above the matching number:
\defparproblem{\name{Above-Guarantee Vertex Cover}\xspace(\name{AGVC}\xspace)}%
{A graph \(G\)
and \(k_{\mu}\in{}\mathbb{N}\).}%
{\(k_{\mu}\)}%
{Does \(G\) have a vertex cover of size at most \(MM+k_{\mu}\)?}
The parameterized complexity of \name{AGVC}\xspace was
settled by Razgon and O'Sullivan~\cite{RazgonOSullivan2008} in
2008; they showed that the problem is \textrm{\textup{FPT}}\xspace and can be solved in
\(\ensuremath{\mathcal{O}^{\star}}\xspace(15^{k_{\mu}})\) time. A sequence of faster \textrm{\textup{FPT}}\xspace
algorithms followed: In 2011, Raman et
al.~\cite{RamanRamanujanSaurabh2011} improved the running time to
\(\ensuremath{\mathcal{O}^{\star}}\xspace(9^{k_{\mu}})\), and then Cygan et
al.~\cite{CyganPilipczukPilipczukWojtaszczyk2011,CyganPilipczukPilipczukWojtaszczyk2013}
improved it further to \(\ensuremath{\mathcal{O}^{\star}}\xspace(4^{k_{\mu}})\). In 2012,
Narayanaswamy et al.~\cite{NarayanaswamyRamanRamanujanSaurabh2012}
developed a faster algorithm which solved \name{AGVC}\xspace in time
\(\ensuremath{\mathcal{O}^{\star}}\xspace(2.618^{k_{\mu}})\). Lokshtanov et
al.~\cite{LokshtanovNarayanaswamyRamanRamanujanSaurabh2014}
improved on this to obtain an algorithm with a running time of
\(\ensuremath{\mathcal{O}^{\star}}\xspace(2.3146^{k_{\mu}})\). This is currently the fastest \textrm{\textup{FPT}}\xspace
algorithm for \name{Above-Guarantee Vertex Cover}\xspace.
The algorithms of Narayanaswamy et al. and Lokshtanov et al. in
fact solve a ``stricter'' parameterization of \name{Vertex Cover}\xspace. Let \(LP\)
denote the minimum value of a solution to the \emph{linear
programming relaxation} of the standard LP formulation of \name{Vertex Cover}\xspace
(see \autoref{sec:preliminaries} for definitions).
Then \(LP\) is a lower bound on the vertex cover number of the
graph. Narayanaswamy et al. introduced the following
parameterization\footnote{To be precise, Narayanaswamy et al. used
the value \(\lceil{}LP\rceil\) instead of just \(LP\) in their
definition of \name{Vertex Cover Above LP}\xspace, but this makes no essential difference.}
of \name{Vertex Cover}\xspace, ``above'' \(LP\):
\defparproblem{\name{Vertex Cover Above LP}\xspace(\name{VCAL}\xspace)}%
{A graph \(G\)
and
\(k_{\lambda}\in{}\mathbb{N}\).}%
{\(k_{\lambda}\)}%
{Does \(G\) have a vertex cover of size at most
\(LP+k_{\lambda}\)?}
The two algorithms solve \name{VCAL}\xspace in times
\(\ensuremath{\mathcal{O}^{\star}}\xspace(2.618^{k_{\lambda}})\) and
\(\ensuremath{\mathcal{O}^{\star}}\xspace(2.3146^{k_{\lambda}})\), respectively. Since the
inequality \(LP\geq{}MM\) holds for every graph we get that \name{VCAL}\xspace
is a \emph{stricter} parameterization of \name{Vertex Cover}\xspace, in the sense that
these algorithms for \name{VCAL}\xspace directly imply algorithms which solve
\name{AGVC}\xspace in times \(\ensuremath{\mathcal{O}^{\star}}\xspace(2.618^{k_{\mu}})\) and
\(\ensuremath{\mathcal{O}^{\star}}\xspace(2.3146^{k_{\mu}})\), respectively. To see this, consider
an instance \((G,k_{\mu})\) of \name{AGVC}\xspace where \(MM\) is the matching
number and \(VC_{opt}\) is the (unknown) vertex cover number of
the input graph \(G\), and let \(x=MM+k_{\mu}\). The question is
then whether \(VC_{opt}\leq{}x\). To resolve this, find the value
\(LP\) for the graph \(G\) (in polynomial time) and set
\(k_{\lambda}=x-LP\). Now we have that
\(VC_{opt}\leq{}x\iff{}VC_{opt}\leq{}LP+k_{\lambda}\), and we can
check if the latter inequality holds---that is, we can solve
\name{VCAL}\xspace---in \(\ensuremath{\mathcal{O}^{\star}}\xspace(2.3146^{k_{\lambda}})\) time using the
algorithm of Lokshtanov et al. Now
\(LP\geq{}MM\implies(x-LP)\leq(x-MM)\implies{k_{\lambda}}\leq{}k_{\mu}\),
and so this algorithm runs in \(\ensuremath{\mathcal{O}^{\star}}\xspace(2.618^{k_{\mu}})\) time as
well.
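The parameter transfer just described is a one-line computation once \(MM\) and \(LP\) are known. The sketch below uses brute-force stand-ins for those two (polynomial-time) subroutines, and ignores the rounding caveat from the footnote about \(\lceil{}LP\rceil\); it is an illustration, not the algorithm of Lokshtanov et al.

```python
from itertools import combinations, product

def mm(edges):
    # matching number, brute force (tiny graphs only)
    for size in range(len(edges), 0, -1):
        for m in combinations(edges, size):
            verts = [x for e in m for x in e]
            if len(verts) == len(set(verts)):
                return size
    return 0

def lp(n, edges):
    # LP optimum; a half-integral optimal solution always exists,
    # so searching x in {0, 1/2, 1}^n suffices on tiny graphs
    best = float(n)
    for x in product((0, 0.5, 1), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):
            best = min(best, sum(x))
    return best

def agvc_to_vcal(n, edges, k_mu):
    # budget x = MM + k_mu; the equivalent VCAL parameter is x - LP,
    # which never exceeds k_mu since LP >= MM
    return mm(edges) + k_mu - lp(n, edges)

cycle5 = [(i, (i + 1) % 5) for i in range(5)]
k_mu = 2
print(agvc_to_vcal(5, cycle5, k_mu))  # at most k_mu
```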
This leads us naturally to the next question: can we push this
further? Is there an even stricter lower bound for \name{Vertex Cover}\xspace, such that
\name{Vertex Cover}\xspace is still fixed-parameter tractable when parameterized above
this bound? To start with, it is not clear that a stricter lower
bound even exists for \name{Vertex Cover}\xspace: what could such a bound possibly look
like? It turns out that we can indeed derive such a bound; we are
then left with the task of resolving tractability above this
stricter bound.
\medskip\noindent\textbf{Our Problem.} Motivated by an observation
of Lov\'{a}sz and Plummer~\cite{LovaszPlummerBook2009} we
show---see \autoref{lem:lower_bound}---that the quantity
\((2LP-MM)\) is a lower bound on the vertex cover number of a
graph, and since \(LP\geq{}MM\) we get that
\((2LP-MM)\geq{}LP\). This motivates the following
parameterization of \name{Vertex Cover}\xspace:
\defparproblem{\name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace(\name{VCAL-P}\xspace)}%
{A graph \(G\)
and \(\hat{k}\in{}\mathbb{N}\).}%
{\(\hat{k}\)}%
{Does \(G\) have a vertex cover of size at most
\((2LP-MM)+\hat{k}\)?}
Since \((2LP-MM)\geq{}LP\) we get, following similar arguments as
described above, that \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace is a stricter parameterization than
\name{Vertex Cover Above LP}\xspace. Further, \((k_{\lambda}-\hat{k})\) could be as large as
\((LP-MM)\) and---to the best of our knowledge---this difference
cannot be expressed as a function of \(k_{\lambda}\) alone for the
purpose of solving \name{Vertex Cover}\xspace. These facts justify our choice of
parameter: \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace is indeed a stricter parameterization than both
\name{Above-Guarantee Vertex Cover}\xspace and \name{Vertex Cover Above LP}\xspace, and its tractability does not follow directly
from known results.
\noindent\textbf{Our Results.} The main result of this work is that \name{Vertex Cover}\xspace is
fixed-parameter tractable even when parameterized above this
stricter lower bound:
\begin{theorem}\label{thm:main}
\name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace is fixed-parameter tractable and can be solved in
\(\ensuremath{\mathcal{O}^{\star}}\xspace(3^{\hat{k}})\) time.
\end{theorem}
By the discussions above, this directly implies similar \textrm{\textup{FPT}}\xspace
algorithms for the two weaker parameterizations:
\begin{corollary}\label{cor:main}
\name{Vertex Cover Above LP}\xspace can be solved in \(\ensuremath{\mathcal{O}^{\star}}\xspace(3^{k_{\lambda}})\) time, and
\name{Above-Guarantee Vertex Cover}\xspace can be solved in \(\ensuremath{\mathcal{O}^{\star}}\xspace(3^{k_{\mu}})\) time.
\end{corollary}
\medskip\noindent\textbf{Our Methods.} We now sketch the main
ideas behind our \textrm{\textup{FPT}}\xspace algorithm for \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace; the details are in
\autoref{sec:algorithm}. Let \(k=(2LP-MM)+\hat{k}\) denote the
``budget'', which is the maximum size of a vertex cover in whose
existence we are interested; we want to find if there is a vertex
cover of size at most \(k\). At its core, our algorithm is a
simple branching algorithm which (i) manages to drop the
\emph{measure} \(\hat{k}\) by at least \(1\) on each branch, and
(ii) has a worst-case branching factor of \(3\). To achieve this
kind of branching we need to find, in polynomial time,
constant-sized structures in the graph, on which we can branch
profitably. It turns out that none of the ``local'' branching
strategies traditionally used to attack \name{Vertex Cover}\xspace---from the simplest
``pick an edge and branch on its end-points'' to more involved
ones based on complicated structures (e.g., those of Chen et
al.~\cite{ChenKanjXia2010})---serve our purpose. All of these
branching strategies give us a drop in \(k\) in each
branch---because we pick one or more vertices into the
solution---but give us \emph{no control} over how \(LP\) and
\(MM\) change.
So we turn to the more recent ideas of Narayanaswamy et
al.~\cite{NarayanaswamyRamanRamanujanSaurabh2012} who solve a
similar problem: they find a way to \emph{preprocess} the input
graph using reduction rules in such a way that branching on small
structures in the resulting graph would make \emph{their measure}
\((k-LP)\) drop on each branch. We find that their reduction rules
do not increase our measure, and hence that we can safely apply
these rules to obtain a graph where we can pick up to two vertices
in each branch and still have control over how \(LP\) changes.
The \emph{branching rules} of Narayanaswamy et al., however, are
not of use to us: these rules help control the drop in \(LP\), but
they provide no control over how \(MM\) changes. Note that for our
measure \(\hat{k}\) to drop we need, roughly speaking, (i) a good
drop in \(k\), (ii) a small drop in \(LP\), and (iii) a \emph{good
drop} in \(MM\). None of the branching strategies of
Narayanaswamy et al. or Lokshtanov et al. (or others in the
literature which we tried) help with this.
To get past this point we look at the classical
\emph{Gallai-Edmonds decomposition} of the reduced graph, which
can be computed in polynomial
time~\cite{Gallai1963,Gallai1964,Edmonds1965,LovaszPlummerBook2009}. We
prove that by carefully choosing edges to branch based on this
decomposition, we can ensure that both \(LP\) and \(MM\) change in
a way which gives us a net drop in the measure \(\hat{k}\). The
key ingredient and the most novel aspect of our algorithm---as
compared to existing algorithms for \name{Vertex Cover}\xspace---is the way in which we
exploit the Gallai-Edmonds decomposition to find small
structures---edges and vertices---on which we can branch
profitably. While this part is almost trivial to implement, most
of the technical effort in the paper has gone into proving that
our choices are correct. See \autoref{algorithm_outline} for an
outline which highlights the new parts, and \autoref{algorithm} on
page~\pageref{algorithm} for the complete algorithm.
\begin{algorithm}[t]
\caption{An outline of the algorithm for \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace.}\label{algorithm_outline}
\begin{algorithmic}[1]
\Function{VCAL-P}{$(G,\hat{k})$}
\State Exhaustively apply the three reduction rules of Narayanaswamy et al. to
\((G,\hat{k})\).
\State Let \((G,\hat{k})\) denote the resulting instance on which no rule applies.
\If{\((G,\hat{k})\) is a trivial instance}
\State \textbf{return} \texttt{True} or \texttt{False} as appropriate.
\EndIf
\State Compute the Gallai-Edmonds decomposition \(V(G)=O\uplus{}I\uplus{}P\) of \(G\).
\If {\(G[I\cup{}P]\) contains at least one edge \(\{u,v\}\)}
\State Branch on the edge \(\{u,v\}\). \(\hat{k}\) drops by
\(1\) on each branch.
\Else\Comment{Now \(P=\emptyset\).}
\State Branch on a vertex \(u\in{}O=V(G)\) and two of its neighbours \(v,w\in{O}\):
\State \hspace{\algorithmicindent}When we pick both of \(v,w\) into the
solution in one branch, \(\hat{k}\) drops by \(1\).
\State \hspace{\algorithmicindent}When we pick \(u\) into the
solution in the other branch, \(\hat{k}\) \emph{may not}
drop. We find a suitable edge in \(G'=(G\setminus{}u)\) and
branch on its end-points to make \(\hat{k}\) drop.
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
\section{Preliminaries}\label{sec:preliminaries}
We use \(\uplus\) to denote the disjoint union of sets. All our
graphs are undirected and simple. \(V(G)\) and \(E(G)\) denote,
respectively, the vertex and edge sets of a graph \(G\). \(G[X]\)
is the subgraph of \(G\) \emph{induced} by a vertex subset
\(X\subseteq{V(G)}\):
\(G[X]=(X,F)\;;\;F=\{\{v,w\}\in{E(G)}\;;\;v,w\in{X}\}\).
\(MM(G)\) is the matching number of graph \(G\), and \(OPT(G)\) is
the vertex cover number of \(G\). A matching \(M\) in graph \(G\)
\emph{saturates} each vertex which is an end-point of an edge in
\(M\), and \emph{exposes} every other vertex in \(G\). \(M\) is a
\emph{perfect matching} if it saturates all of \(V(G)\). \(M\) is
a \emph{near-perfect matching} if it saturates all but one vertex
of \(V(G)\). Graph \(G\) is said to be \emph{factor-critical} if
for each \(v\in{V(G)}\) the induced subgraph
\(G[(V(G)\setminus\{v\})]\) has a perfect matching.
For \(X\subseteq{V(G)}\), \(N(X)\) is the set of neighbours of
\(X\) which are not in \(X\):
\(N(X)=\{v\in{(V(G)\setminus{X})}\;;\;\exists{w}\in{X}\;:\;\{v,w\}\in{E(G)}\}\).
\(X\subseteq{V(G)}\) is an \emph{independent set} in graph \(G\)
if no edge in \(G\) has both its end-points in \(X\). The
\emph{surplus} of an independent set \(X\subseteq{V(G)}\) is
\(\surplus{X}=(|N(X)|-|X|)\). The \emph{surplus of a graph
\(G\)}, \(\surplus{G}\), is the minimum surplus over all
independent sets in \(G\). Graph \(G\) is a \emph{bipartite
graph} if \(V(G)\) can be partitioned as \(V(G)=X\uplus{Y}\)
such that every edge in \(G\) has exactly one end point in each of
the sets \(X,Y\). Hall's Theorem tells us that a bipartite graph
\(G=((X\uplus{Y}),E)\) contains a matching which saturates all
vertices of the set \(X\) if and only if
\(\;\forall{S}\subseteq{X}\;:\;|N(S)|\geq|S|\). K\"{o}nig's
Theorem tells us that for a bipartite graph \(G\),
\(OPT(G)=MM(G)\).
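For concreteness, Hall's condition can be checked constructively: Kuhn's augmenting-path algorithm computes a maximum matching of a bipartite graph, and by K\"{o}nig's Theorem its size equals the vertex cover number. The following Python sketch (an illustration only, not part of our algorithm; the adjacency structure is a made-up example) demonstrates this on the 4-cycle viewed as a bipartite graph:

```python
def bipartite_mm(left, adj):
    """Kuhn's augmenting-path algorithm.

    `adj` maps each left-part vertex to its right-part neighbours;
    returns the size of a maximum matching.
    """
    match = {}  # right vertex -> matched left vertex

    def augment(u, seen):
        # try to match u, possibly re-routing previously matched vertices
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in left)

# C4 as a bipartite graph with parts {0, 2} and {1, 3}
print(bipartite_mm([0, 2], {0: [1, 3], 2: [1, 3]}))  # 2
```

By K\"{o}nig's Theorem the answer \(2\) is also the vertex cover number of \(C_4\).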
The \emph{linear programming (LP) relaxation} of the standard LP
formulation for \name{Vertex Cover}\xspace for a graph \(G\) (the \emph{relaxed \name{Vertex Cover}\xspace LP
for \(G\)} for short), denoted \(LPVC(G)\), is:
\begin{alignat*}{2}\label{vclp}
\text{minimize } & \sum_{v\in{V(G)}}x_{v}\ \\
\text{subject to } & x_u + x_v \geq 1\ &,\ & \{u,v\} \in E(G)\\
& 0\leq{x_v}\leq1\ &,\ & v\in{V(G)}
\end{alignat*}
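For intuition, this LP and the associated combinatorial quantities can be evaluated by brute force on tiny graphs. The Python sketch below (an illustration only; it relies on the half-integrality of the LP, discussed below, to restrict each \(x_v\) to \(\{0,\frac{1}{2},1\}\)) computes \(MM\), \(LP\), and \(OPT\) for the 5-cycle and exhibits the chain \(MM(G)\leq{LP(G)}\leq{OPT(G)}\):

```python
from itertools import combinations, product

def brute_opt(n, edges):
    # vertex cover number, by enumerating vertex subsets by size
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(u in S or v in S for u, v in edges):
                return k

def brute_mm(n, edges):
    # matching number, by enumerating edge subsets from largest to smallest
    for k in range(len(edges), 0, -1):
        for M in combinations(edges, k):
            vs = [v for e in M for v in e]
            if len(vs) == len(set(vs)):  # no vertex repeated: M is a matching
                return k
    return 0

def brute_lp(n, edges):
    # LP value; enumerating x_v in {0, 1/2, 1} is exact by half-integrality
    best = float('inf')
    for x in product((0, 0.5, 1), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):
            best = min(best, sum(x))
    return best

E = [(i, (i + 1) % 5) for i in range(5)]  # the 5-cycle
print(brute_mm(5, E), brute_lp(5, E), brute_opt(5, E))  # 2 2.5 3
```

On the 5-cycle all three inequalities in \(MM\leq{LP}\leq{OPT}\) are strict, and \(2LP-MM=3=OPT\).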
A \emph{feasible solution} to this LP is an assignment of values
to the variables \(x_{v}\;;\;v\in{V(G)}\) which satisfies all the
conditions in the LP, and an \emph{optimum solution} is a feasible
solution which minimizes the value of the objective function
\(\sum_{v\in{V(G)}}x_{v}\). We use \(w(x)\) to denote the value
(of the objective function) of a feasible solution \(x\) to
\(LPVC(G)\), and \(LP(G)\) to denote the value of an optimum
solution to \(LPVC(G)\). \(OPT(G)\) and \(MM(G)\) are then the
values of optimum solutions to the \emph{integer} programs
corresponding to \(LPVC(G)\) and to its LP \emph{dual},
respectively~\cite{BourjollyPulleyblank1989}. It follows that for
any graph \(G\), \(MM(G)\leq{LP(G)}\leq{OPT(G)}\). Our stronger
lower bound for \(OPT(G)\) is motivated by a similar bound due to
Lov\'{a}sz and
Plummer~\cite[Theorem~6.3.3]{LovaszPlummerBook2009}:
\begin{lemma}\label{lem:lower_bound}
For any graph \(G\), \(OPT(G)\geq(2LP(G)-MM(G))\).
\end{lemma}
\begin{proof}
Let \(S\) be a smallest vertex cover of graph \(G\), and let
\(H=((S\uplus(V(G)\setminus{S})),F)\) be the bipartite subgraph
of \(G\) with
\(F=\{\{u,v\}\in{E(G)}\;;\;u\in{S},v\in(V(G)\setminus{S})\}\). That
is, the vertex set of \(H\) is \(V(G)\) with the bipartition
\((S,(V(G)\setminus{S}))\), and the edge set of \(H\) consists
of exactly those edges of \(G\) which have one end-point in
\(S\) and the other in \((V(G)\setminus{S})\). Let \(T\) be a
smallest vertex cover of graph \(H\). Then \(|T|=OPT(H)=MM(H)\),
where the second equality follows from K\"{o}nig's
Theorem. Consider the following assignment \(y\) of values to
variables \(y_{v}\;;\;v\in{V(G)}\):
\[
y_{v}=
\begin{cases}
1 & \text{if } v\in(S\cap{T}) \\
\frac{1}{2} & \text{if }v\in((S\cup{T})\setminus(S\cap{T}))\\
0 & \text{otherwise}.
\end{cases}
\]
Observe that \(0\leq{y_{v}}\leq1\) for each \(v\in{V(G)}\). We
claim that \(y\) is a feasible solution to \(LPVC(G)\). Indeed,
since \(S\) is a vertex cover of \(G\), every edge
\(\{u,v\}\in{E(G)}\) must have at least one end-point in
\(S\). If \(\{u,v\}\cap(S\cap{T})\neq\emptyset\) then \(y\)
assigns the value \(1\) to at least one of \(y_{u},y_{v}\), and
so we have that \(y_{u}+y_{v}\geq1\). Otherwise, if
\(\{u,v\}\subseteq(S\setminus(S\cap{T}))\) then \(y\) assigns
the value \(\frac{1}{2}\) to both of \(y_{u},y_{v}\), and so we
have that \(y_{u}+y_{v}=1\). In the only remaining case, exactly
one of \(\{u,v\}\) is in \(S\), and the other vertex is in
\((V(G)\setminus{S})\). Without loss of generality, suppose
\(\{u,v\}\cap{S}=\{u\},v\in(V(G)\setminus{S})\). Then
\(\{u,v\}\in{E(H)}\) and \(u\notin{T}\), hence we get---since
\(T\) is a vertex cover of \(H\)---that \(v\in{T}\). Thus
\(\{u,v\}\subseteq((S\cup{T})\setminus(S\cap{T}))\), and so
\(y\) assigns the value \(\frac{1}{2}\) to both of
\(y_{u},y_{v}\) and we have that \(y_{u}+y_{v}=1\). Thus the
assignment \(y\) satisfies all the conditions in the
\(LPVC(G)\), and hence is a feasible solution to
\(LPVC(G)\). Thus \(w(y)\geq{LP(G)}\).
Observe now that
\[w(y) = \frac{|S|+|T|}{2} = \frac{OPT(G)+OPT(H)}{2} =
\frac{OPT(G)+MM(H)}{2} \leq \frac{OPT(G)+MM(G)}{2},\]
where the first equality follows from the way we defined \(y\),
and the inequality follows from the observation that the
matching number of the subgraph \(H\) of \(G\) cannot be
\emph{larger} than that of \(G\) itself. Putting these together
we get that \(LP(G)\leq{w(y)}\leq\frac{OPT(G)+MM(G)}{2}\), which
in turn gives the bound in the lemma.
\end{proof}
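The construction in the proof can be replayed numerically. In the Python sketch below (brute force on the 5-cycle, an illustration only) we compute a smallest cover \(S\) of \(G\), a smallest cover \(T\) of the bipartite subgraph \(H\), and check that the resulting assignment \(y\) is feasible with \(w(y)=\frac{|S|+|T|}{2}\):

```python
from itertools import combinations

E = [(i, (i + 1) % 5) for i in range(5)]  # the 5-cycle C5
V = range(5)

def min_vertex_cover(verts, edges):
    # smallest vertex cover by enumeration (fine for tiny graphs)
    for k in range(len(verts) + 1):
        for S in combinations(verts, k):
            if all(u in S or v in S for u, v in edges):
                return set(S)

S = min_vertex_cover(V, E)                          # smallest cover of G
H = [(u, v) for u, v in E if (u in S) != (v in S)]  # edges between S and V \ S
T = min_vertex_cover(V, H)                          # |T| = OPT(H) = MM(H) by Koenig
y = {v: 1 if v in S & T else 0.5 if v in S | T else 0 for v in V}
assert all(y[u] + y[v] >= 1 for u, v in E)          # y is feasible for LPVC(G)
assert sum(y.values()) == (len(S) + len(T)) / 2     # w(y) = (|S| + |T|) / 2
print(len(S), len(T), sum(y.values()))              # 3 2 2.5
```

Here \(w(y)=2.5\geq{LP(C_5)}\), and the bound of the lemma is tight: \(2LP(C_5)-MM(C_5)=5-2=3=OPT(C_5)\).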
For any graph \(G\) there exists an \emph{optimum} solution to
\(LPVC(G)\) in which
\(x_{v}\in\{0,\frac{1}{2},1\}\;;\;v\in{V(G)}\)~\cite{NemhauserTrotter1974}. Such
an optimum solution is called a \emph{half-integral solution} to
\(LPVC(G)\), and we can find such a solution in polynomial
time~\cite{NemhauserTrotter1975}. Whenever we refer to an optimum
solution to \(LPVC(G)\) in the rest of the paper, we mean a
half-integral solution. Given a half-integral solution \(x\) to
\(LPVC(G)\), we define \(V_{i}^{x}=\{v\in{V(G)}\;;\;x_{v}=i\}\)
for each \(i\in\{0,\frac{1}{2},1\}\). For any optimal
half-integral solution \(x\) we have that
\(N(V^{x}_{0})=V^{x}_{1}\). Given a graph \(G\) as input we can,
in polynomial time, compute an optimum half-integral solution
\(x\) to \(LPVC(G)\) such that for the induced subgraph
\(H=G[V_{1/2}^{x}]\), setting all variables to the value
\(\frac{1}{2}\) is the \emph{unique} optimum solution to
\(LPVC(H)\)~\cite{NemhauserTrotter1975}. Graphs which satisfy the
latter property must have positive surplus, and conversely:
\begin{lemma}\textup{\textbf{\cite{Pulleyblank1979}}}\label{lem:all-halves_positive_surplus}
For any graph \(G\), all-\(\frac{1}{2}\) is the unique optimum
solution to \(LPVC(G)\) if and only if \(\surplus{G}>0\).
\end{lemma}
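For intuition, the surplus of a small graph can be computed directly from the definition. The Python sketch below (pure brute force, an illustration only) finds \(\surplus{C_5}=1\) and \(\surplus{K_4}=2\); by \autoref{lem:all-halves_positive_surplus}, all-\(\frac{1}{2}\) is therefore the unique optimum of the relaxed LP for both graphs:

```python
from itertools import combinations

def surplus(n, edges):
    # minimum of |N(X)| - |X| over nonempty independent sets X, by brute force
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    best = None
    for k in range(1, n + 1):
        for X in combinations(range(n), k):
            if any(v in nbrs[u] for u in X for v in X):
                continue  # X is not an independent set
            NX = set().union(*(nbrs[u] for u in X)) - set(X)
            s = len(NX) - k
            best = s if best is None else min(best, s)
    return best

c5 = [(i, (i + 1) % 5) for i in range(5)]
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(surplus(5, c5), surplus(4, k4))  # 1 2
```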
In fact, the surplus of a graph \(G\) is a lower bound on the
number of vertices that we can delete from \(G\), and have a
\emph{guaranteed} drop of \emph{exactly} \(\frac{1}{2}\) \emph{per
deleted vertex} in \(LP(G)\):
\begin{lemma}\label{lem:surplus_lp_drop}
Let \(G\) be a graph with \(\surplus{G}\geq{s}\). Then deleting
any subset of \(s\) vertices from \(G\) results in a graph
\(G'\) such that \(LP(G')=LP(G)-\frac{s}{2}\).
\end{lemma}
\begin{proof}
The proof is by induction on \(s\). Let \(n=|V(G)|\). The case
\(s=0\) is trivially true. Suppose \(s=1\). Then by
\autoref{lem:all-halves_positive_surplus} all-\(\frac{1}{2}\) is the
unique optimum solution to \(LPVC(G)\), and so
\(LP(G)=\frac{n}{2}\). Let \(v\in{V(G)}\), and let
\(G'=G[V(G)\setminus\{v\}]\). Then \(|V(G')|=(n-1)\). Since
all-\(\frac{1}{2}\) is a \emph{feasible} solution to
\(LPVC(G')\), we get that
\(LP(G')\leq\frac{(n-1)}{2}=LP(G)-\frac{1}{2}\). If possible,
let \(x'\) be an optimum solution for \(LPVC(G')\) such that
\(w(x')<\frac{(n-1)}{2}\).
From the half-integrality property of relaxed \name{Vertex Cover}\xspace LP
formulations we get that \(w(x')\leq(\frac{n}{2}-1)\). Now we
can assign the value \(1\) to vertex \(v\) and the values \(x'\)
to the remaining vertices of graph \(G\), to get a solution
\(x\) such that \(w(x)=\frac{n}{2}=LP(G)\). Thus \(x\) is an
\emph{optimum} solution to \(LPVC(G)\) which is \emph{not}
all-\(\frac{1}{2}\), a contradiction. So we get that
\(LP(G')=LP(G)-\frac{1}{2}\), proving the case \(s=1\).
For the induction step, let \(s\geq2\). Then by
\autoref{lem:all-halves_positive_surplus} all-\(\frac{1}{2}\) is
the unique optimum solution to \(LPVC(G)\), and so
\(LP(G)=\frac{n}{2}\). Let \(v\) be an arbitrary vertex in
\(G\), and let \(G'=G[V(G)\setminus\{v\}]\). Since deleting a
single vertex from \(G\) cannot cause the surplus of \(G\) to
drop by more than \(1\), we get that
\(\surplus{G'}\geq(s-1)\geq1\). So from
\autoref{lem:all-halves_positive_surplus} we get that
\(LP(G')=\frac{(n-1)}{2}\). Applying the induction hypothesis
to \(G'\) and \(s-1\), we get that deleting any subset of
\(s-1\) vertices from \(G'\) results in a graph \(G''\) such
that
\(LP(G'') = LP(G')-\frac{(s-1)}{2} =
\frac{(n-1)}{2}-\frac{(s-1)}{2} = \frac{(n-s)}{2}\). This
completes the induction step.
\end{proof}
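As a sanity check of \autoref{lem:surplus_lp_drop}, consider \(K_4\), which has surplus \(2\): deleting any two vertices leaves a single edge, and \(LP\) drops from \(2\) to \(1\), i.e. by exactly \(\frac{1}{2}\) per deleted vertex. A brute-force Python sketch (tiny graphs only, an illustration exploiting half-integrality):

```python
from itertools import product, combinations

def lp(verts, edges):
    # exact for tiny graphs: by half-integrality, x_v in {0, 1/2, 1} suffices
    verts = sorted(verts)
    best = float('inf')
    for xs in product((0, 0.5, 1), repeat=len(verts)):
        x = dict(zip(verts, xs))
        if all(x[u] + x[v] >= 1 for u, v in edges):
            best = min(best, sum(xs))
    return best

K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
base = lp(range(4), K4)  # 2.0: the all-1/2 solution on four vertices
for pair in combinations(range(4), 2):
    rest = [v for v in range(4) if v not in pair]
    E = [(u, v) for u, v in K4 if u in rest and v in rest]
    assert lp(rest, E) == base - 1  # surplus(K4) = 2, so LP drops by 2 * (1/2)
print(base)  # 2.0
```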
In any optimal half-integral solution to the LP, the vertices
which get the value \(1\) can be matched to distinct vertices which
get the value \(0\).
\begin{lemma}\label{lem:bimatch}
Let \(G\) be a graph, and let \(x\) be an optimal half-integral
solution to \(LPVC(G)\). Let
\(H=((V^{x}_{1}\uplus{V^{x}_{0}}),F)\) be the bipartite subgraph
of \(G\) where
\(F=\{\{u,v\}\in{E(G)}\;;\;u\in{V^{x}_{1}},v\in{V^{x}_{0}}\}\). Then
there exists a (maximum) matching of \(H\) which saturates all
of $V^x_1$.
\end{lemma}
\begin{proof} It is enough to show that for every
\(X\subseteq{V^{x}_{1}}\) the inequality \(|N(X)|\geq|X|\) holds
in the graph \(H\), and the rest will follow by Hall's Theorem.
Suppose, for a contradiction, that there exists some
\(X\subseteq{V^{x}_{1}}\) such that
\(|N(X)|<|X|\) in \(H\). Now consider a solution \(x^{*}\) to
\(LPVC(G)\) in which all the vertices of \(G\) have same values
as in \(x\), \emph{except} that all vertices in \(X\cup{N(X)}\)
get the value \(\frac{1}{2}\). It is not difficult to verify
that \(x^{*}\) is a feasible solution to \(LPVC(G)\). Now
\(w(x^{*})=w(x)+\frac{|N(X)|-|X|}{2}<w(x)\), which is a
contradiction since we assumed that \(x\) is a solution to
\(LPVC(G)\) with the minimum value. The lemma follows.
\end{proof}
We make critical use of the classical Gallai-Edmonds decomposition
of graphs.
\begin{definition}[Gallai-Edmonds decomposition]\label{def:gallai_edmonds_decomposition}
The Gallai-Edmonds decomposition of a graph \(G\) is a partition
of its vertex set \(V(G)\) as \(V(G)=O\uplus{I}\uplus{P}\)
where:
\begin{itemize}
\item
\(O = \{v\in{V(G)}\;;\; \text{ some \emph{maximum} matching of
} G \text{ leaves } v \text{ exposed}\}\)
\item \(I = N(O)\)
\item \(P = V(G)\setminus(I\cup{O})\)
\end{itemize}
\end{definition}
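The definition can be applied directly to tiny graphs. The Python sketch below (pure brute force, an illustration only) enumerates all maximum matchings, takes \(O\) to be the vertices exposed by at least one of them, and derives \(I\) and \(P\); on the path \(0\)-\(1\)-\(2\) it returns \(O=\{0,2\}\), \(I=\{1\}\), \(P=\emptyset\):

```python
from itertools import combinations

def gallai_edmonds(n, edges):
    # brute-force decomposition for tiny graphs, straight from the definition
    matchings = []
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            vs = [v for e in M for v in e]
            if len(vs) == len(set(vs)):  # M is a matching
                matchings.append((k, set(vs)))
    mm = max(k for k, _ in matchings)
    saturated = [sat for k, sat in matchings if k == mm]
    # O: vertices exposed by at least one maximum matching
    O = {v for v in range(n) if any(v not in sat for sat in saturated)}
    I = {v for v in range(n) if v not in O
         and any((u, v) in edges or (v, u) in edges for u in O)}
    P = set(range(n)) - O - I
    return O, I, P

O, I, P = gallai_edmonds(3, [(0, 1), (1, 2)])  # the path 0-1-2
print(O, I, P)  # {0, 2} {1} set()
```

On a triangle (which is factor-critical) the same sketch puts every vertex into \(O\), matching part (1) of \autoref{thm:gallai_edmonds_properties}.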
We now list a few of the many useful properties of Gallai-Edmonds
decompositions.
\begin{theorem}\textup{\textbf{\cite{LovaszPlummerBook2009,Gallai1964,Gallai1963,Edmonds1965}}}\label{thm:gallai_edmonds_properties}
The Gallai-Edmonds decomposition of a graph \(G\) is unique, and
can be computed in polynomial time in the size of \(G\). Let
\(V(G)=O\uplus{I}\uplus{P}\) be the Gallai-Edmonds decomposition
of \(G\). Then the following hold:
\begin{enumerate}
\item Every component of the induced subgraph \(G[O]\) is
factor-critical.
\item A matching \(M\) in graph \(G\) is a \emph{maximum} matching
of \(G\) if and only if:
\begin{enumerate}
\item For each connected component \(H\) of the induced subgraph
\(G[O]\), the edge set \(M\cap{E(H)}\) forms a \emph{near
perfect} matching of \(H\);
\item For each vertex \(i\in{I}\) there exists some vertex
\(o\in{O}\) such that \(\{i,o\}\in{M}\), and;
\item The edge set \(M\cap{E(G[P])}\) forms a \emph{perfect}
matching of the induced subgraph \(G[P]\).
\end {enumerate}
\item In particular: Any maximum matching \(M\) of \(G\) is a
disjoint union of (i) a perfect matching of \(G[P]\), (ii)
near-perfect matchings of each component of \(G[O]\), and (iii)
an edge from each vertex in \(I\) to a distinct component of
\(G[O]\).
\item The Stability Lemma: For a vertex \(v\in{V(G)}\) let
\(G-v=G[V(G)\setminus\{v\}]\). Let \(O(H),I(H),P(H)\) denote the
three parts in the Gallai-Edmonds decomposition of a graph
\(H\).
\begin{itemize}
\item Let \(v\in{O}\). Then
\(O(G-v)\subseteq(O\setminus\{v\})\), \(I(G-v)\subseteq{I}\),
and \(P(G-v)\supseteq{P}\).
\item Let \(v\in{I}\). Then \(O(G-v)=O\),
\(I(G-v)=(I\setminus\{v\})\), and \(P(G-v)=P\).
\item Let \(v\in{P}\). Then \(O(G-v)\supseteq{O}\),
\(I(G-v)\supseteq{I}\), and
\(P(G-v)\subseteq(P\setminus\{v\})\).
\end{itemize}
\end {enumerate}
\end{theorem}
\begin{corollary}\label{cor:gallai_edmonds_more_properties}
Let \(V(G)=O\uplus{I}\uplus{P}\) be the Gallai-Edmonds
decomposition of graph \(G\). Then the following hold:
\begin{enumerate}
\item If \(I\cup{P}\) is an independent set in \(G\), then
\(P=\emptyset\).
\item Let \(v,w\in{O}\) be two vertices which are part of the
same connected component of the induced subgraph \(G[O]\), and
let \(G'=G[(V(G)\setminus\{v,w\})]\). Then
\(MM(G')\leq(MM(G)-1)\).
\end{enumerate}
\end{corollary}
\begin{proof}
We prove each statement.
\begin{enumerate}
\item From the assumption, \(G[P]\) contains no edges. By
\autoref{thm:gallai_edmonds_properties}, \(G[P]\) has a
perfect matching. Both of these can hold simultaneously only
when \(P=\emptyset\).
\item Let \(C\) be the connected component of \(G[O]\) which
contains both \(v\) and \(w\). From part (3) of
\autoref{thm:gallai_edmonds_properties} we get that any
maximum matching of graph \(G\) which \emph{exposes} vertex
\(v\) contains a \emph{perfect} matching of the subgraph
\(C''=C[(V(C)\setminus\{v\})]\). Therefore, every maximum
matching of \(G\) which survives in
\(G''=G[(V(G)\setminus\{v\})]\) contains a perfect matching of
\(C''\). It follows that if we delete \(w\in{V(C'')}\) as
well from \(G''\) to get \(G'\), then the matching number
reduces by at least one, since no perfect matching of \(C''\)
can survive the deletion of vertex \(w\in{V(C'')}\).
\end{enumerate}
\end{proof}
When the set \(I\cup{P}\) is independent in a graph of surplus at
least \(2\), we get more properties for the set \(O\):
\begin{lemma}\label{lem:egbipart}
Let \(G\) be a graph with \(\surplus{G}\geq2\), and let
\(V(G)=O\uplus{I}\uplus{P}\) be the Gallai-Edmonds decomposition
of graph \(G\). If \(I\cup{P}\) is an independent set in \(G\),
then:
\begin{enumerate}
\item There is at least one vertex \(o\in{O}\) which has at
least two neighbours in the set \(O\).
\item Let \(v\in{O}\) be a neighbour of some vertex
\(o\in{O}\). Let \(G'= G[(V(G)\setminus\{o\})]\), and let
\(V(G')=O'\uplus{I'}\uplus{P'}\) be the Gallai-Edmonds
decomposition of graph \(G'\). Then the induced subgraph
\(G'[P']\) contains at least one edge.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove each statement.
\begin{enumerate}
\item Each component of \(G[O]\) is factor critical
(\autoref{thm:gallai_edmonds_properties}), and so has an odd
number of vertices. So to prove this part it is enough to
show that there is at least one component in \(G[O]\) which
has at least two (and hence, at least three) vertices.
Suppose to the contrary that each component in \(G[O]\) has
exactly one vertex. Then the set \(O\) is an independent set
in graph \(G\). Since the set \(I\cup{P}\) is independent by
assumption, we get from
\autoref{cor:gallai_edmonds_more_properties} that
\(P=\emptyset\). Thus \(G\) is a \emph{bipartite} graph with
\(O\) and \(I\) being two parts of the bipartition. In
particular, \(N(O)\subseteq{I}\) and \(N(I)\subseteq{O}\) both
hold. But since \(\surplus{G}\geq2\) this implies that both
\(|I|\geq(|O|+2)\) and \(|O|\geq(|I|+2)\) hold simultaneously,
which cannot happen. The claim follows.
\item Observe that by the definition of the set \(O\), we have
that \(MM(G')=MM(G)\). If \(G'[P']\) contains no edge, then
the fact that \(G'[P']\) has a perfect matching
(\autoref{thm:gallai_edmonds_properties}) implies that
\(P'=\emptyset\). Together with the Stability Lemma (see
\autoref{thm:gallai_edmonds_properties}) this implies that
\(v\in{O'}\) in graph \(G'\). Then by the definition of the
set \(O'\) there is a maximum matching \(MM'\) of \(G'\) which
exposes the vertex \(v\). But then the matching \(MM'\)
together with the edge \(\{o,v\}\) forms a matching in graph
\(G\) of size \(MM(G')+1=MM(G)+1\), a contradiction. Therefore
\(G'[P']\) must contain at least one edge.
\end{enumerate}
\end{proof}
\subsection{The Reduction Rules}\label{sec:redrules}
Given an instance \((G,\hat{k})\) of \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace our algorithm first
computes the number \(k=(2LP(G)-MM(G))+\hat{k}\) so that \((G,k)\)
is the equivalent instance of (classical)
\name{Vertex Cover}\xspace. Each of our three reduction rules takes an instance
\((G=(V,E),k)\) of \name{Vertex Cover}\xspace, runs in polynomial time, and outputs an
instance \((G'=(V',E'),k')\) of \name{Vertex Cover}\xspace. We say that a reduction rule
is \emph{sound} if it always outputs an \emph{equivalent}
instance. That is, if it is always the case that \(G\) has a
vertex cover of size at most \(k\) if and only if \(G'\) has a
vertex cover of size at most \(k'\). We say that a reduction rule
is \emph{safe} if it never increases the measure \(\hat{k}\). That
is, if it is always the case that
\(k'+MM(G')-2LP(G')\leq{}k+MM(G)-2LP(G)\).
In the algorithm we apply these reduction rules
\emph{exhaustively}, in the order they are presented. That is, we
take the input instance \((G,k)\) and apply the first among
\autoref{red:NT_reduction}, \autoref{red:edgeinneighbour}, and
\autoref{red:struction} which \emph{applies}---see definitions
below---to \((G,k)\) to obtain a modified instance \((G',k')\). We
now set \(G\gets{G'},k\gets{k'}\) and repeat this procedure, till
none of the rules applies to the instance \((G,k)\). We say that
such an instance is \emph{reduced} with respect to our reduction
rules. The point of these reduction rules is that they help us
push the \emph{surplus} of the input graph to at least \(2\) in
polynomial time, while not increasing the measure \(\hat{k}\).
\begin{reduction}\label{red:NT_reduction} Compute an optimal solution
\(x\) to LPVC(\(G\)) such that all-\(\frac{1}{2}\) is the unique
optimum solution to LPVC(\(G[V^{x}_{1/2}]\)). Set
\(G'=G[V^{x}_{1/2}],k'=k-|V^{x}_{1}|\).
\end{reduction}
\noindent \autoref{red:NT_reduction} \emph{applies} to \((G,k)\)
if and only if all-\(\frac{1}{2}\) is \emph{not} the unique
solution to LPVC(\(G\)).
\begin{reduction}\label{red:edgeinneighbour}
If there is an independent set $Z\subseteq{}V(G)$ such that
\(\surplus{Z}=1\) and \(N(Z)\) is \emph{not} an independent set
in \(G\), then set \(G'=G\setminus{(Z\cup{}N(Z))},k'=k-|N(Z)|\).
\end{reduction}
\noindent \autoref{red:edgeinneighbour} \emph{applies} to
\((G,k)\) if and only if (i) \autoref{red:NT_reduction} does
\emph{not} apply, and (ii) \(G\) has a vertex subset \(Z\) with
the properties stated in \autoref{red:edgeinneighbour}.
\begin{reduction} \label{red:struction}
If there is an independent set $Z\subseteq{}V(G)$ such that
\(\surplus{Z}=1\) and \(N(Z)\) \emph{is} an independent set in
\(G\), then remove $Z$ from \(G\) and \emph{identify} the
vertices of $N(Z)$---that is, delete all of \(N(Z)\), add a new
vertex \(z\), and make \(z\) adjacent to all vertices of the set
\((N(N(Z))\setminus{Z})\)---to get \(G'\), and set \(k'=k-|Z|\).
\end{reduction}
\noindent \autoref{red:struction} \emph{applies} to \((G,k)\) if
and only if (i) neither of the previous rules applies, and (ii)
\(G\) has a vertex subset \(Z\) with the properties stated in
\autoref{red:struction}.
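To illustrate \autoref{red:struction}: on the 5-cycle with \(Z=\{v_0\}\) (so \(\surplus{Z}=1\) and \(N(Z)=\{v_1,v_4\}\) is independent), identifying \(N(Z)\) yields a triangle, and indeed \(OPT\) drops from \(3\) to \(2=3-|Z|\). A Python sketch of the graph operation (an illustration only; naming the new vertex \texttt{z} is our own convention):

```python
def struction(vertices, edges, Z):
    # Rule 3: delete Z, identify N(Z) into a new vertex 'z' adjacent to N(N(Z)) \ Z
    nbrs = {v: set() for v in vertices}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    NZ = set().union(*(nbrs[u] for u in Z)) - set(Z)
    NNZ = set().union(*(nbrs[u] for u in NZ)) - set(Z) - NZ
    keep = set(vertices) - set(Z) - NZ
    new_edges = [(u, v) for u, v in edges if u in keep and v in keep]
    new_edges += [('z', v) for v in sorted(NNZ)]  # the identified vertex
    return keep | {'z'}, new_edges

C5 = [(i, (i + 1) % 5) for i in range(5)]
V2, E2 = struction(range(5), C5, {0})
print(V2 == {2, 3, 'z'}, len(E2))  # True 3: a triangle on {2, 3, 'z'}
```

The budget updates as \(k'=k-|Z|\), consistent with \(OPT\) dropping from \(3\) on \(C_5\) to \(2\) on the triangle.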
All our reduction rules are due to Narayanaswamy et
al.~\cite[Preprocessing Rules 1 and
2]{NarayanaswamyRamanRamanujanSaurabh2012}. The soundness of these
rules and their running-time bounds also follow directly from
their work. We need to argue, however, that these rules are safe
for our measure \(\hat{k}\).
\begin{lemma}\label{lem:rules_sound_fast}\textup{\textbf{\cite{NarayanaswamyRamanRamanujanSaurabh2012}}}
All the three reduction rules are sound, and each can be applied
in time polynomial in the size of the input \((G,k)\).
\end{lemma}
It remains to show that none of these reduction rules increases
the measure \(\hat{k}=k+MM(G)-2LP(G)\). That is, let
\((G=(V,E),k)\) be an instance of \name{Vertex Cover}\xspace to which one of these rules
applies, and let \((G'=(V',E'),k')\) be the instance obtained by
applying the rule to \((G,k)\). Then we have to show, for each
rule, that \(\hat{k}'=k'+MM(G')-2LP(G')\leq{}\hat{k}\) holds. We
establish this by examining how each rule changes the values
\(k\), \(MM\) and \(LP\). In each case, let \(x\) be an optimum
half-integral solution to \(LPVC(G)\) such that
all-\(\frac{1}{2}\) is the unique optimum solution to
\(LPVC(G[V^{x}_{1/2}])\), and let \(LP(G)\) be the (optimum) value
of this solution. Also, let \(x'\) be an optimum half-integral
solution to \(LPVC(G')\), and let \(LP(G')\) be the value of this
solution.
\begin{lemma}\label{lem_rule_one_safe}
\autoref{red:NT_reduction} is safe.
\end{lemma}
\begin{proof}
From the definition of the rule we get that
\(k'=k-|V^{x}_{1}|\). Now since \(x'\equiv\frac{1}{2}\) is the
unique optimum solution to \(LPVC(G')\), and
\(G'=G[V^{x}_{1/2}]\), we get that \(LP(G')=LP(G)-|V^{x}_{1}|\),
and hence that \(2LP(G')=2LP(G)-2|V^{x}_{1}|\). From
\autoref{lem:bimatch} and the construction of the graph \(G'\)
we get that
\(MM(G)\ge
MM(G')+|V^{x}_{1}|\),
and hence that \(MM(G')\leq{}MM(G)-|V^{x}_{1}|\). Putting these
together, we get that \(\hat{k}'\leq\hat{k}\).
\end{proof}
\begin{lemma}\label{lem_rule_two_safe}
\autoref{red:edgeinneighbour} is safe.
\end{lemma}
\begin{proof}
From the definition of the rule we get that \(k'=k-|N(Z)|\). We
bound the other two quantities.
\begin{claim}
\(LP(G')\geq{}LP(G)-|N(Z)|+\frac{1}{2}\).
\end{claim}
\begin{proof}[Proof of the claim.]
Since \autoref{red:NT_reduction} did not apply to \((G,k)\) we
know that all-\(\frac{1}{2}\) is the unique optimum solution
to \(LPVC(G)\). It follows from this and the construction of
graph \(G'\) that
\(LP(G')=\sum_{u\in{}V'}x'(u)=LP(G)-\frac{1}{2}(|Z|+|N(Z)|)+\frac{1}{2}(|V^{x'}_{1}|-|V^{x'}_{0}|)\).
Adding and subtracting \(\frac{1}{2}(|N(Z)|)\), we get
\(LP(G')=LP(G)-|N(Z)|+\frac{1}{2}(|N(Z)|-|Z|)+\frac{1}{2}(|V^{x'}_{1}|-|V^{x'}_{0}|)
=LP(G)-|N(Z)|+\frac{1}{2}(|N(Z)|+|V^{x'}_{1}|)-\frac{1}{2}(|Z|+|V^{x'}_{0}|)\).
Now since \(V(G')=(V(G)\setminus{}(Z\cup{}N(Z)))\) and
\(V^{x'}_{0}\subseteq{}V(G')\), we get that
\(Z\cup{}V^{x'}_{0}\) is an independent set in \(G\), and that
\(N(Z\cup{}V^{x'}_0)=N(Z)\cup{}V^{x'}_{1}\) in \(G\) (Recall
that \(N(V^{x}_{0})=V^{x}_{1}\) for any half-integral optimal
solution \(x\).). Since
\(\surplus{G}\geq{}1\)---\autoref{lem:all-halves_positive_surplus}---we
get that in \(G\),
\(|N(Z\cup{}V^{x'}_{0})|-|Z\cup{}V^{x'}_{0}|\geq{}1\), which
gives \((|N(Z)|+|V^{x'}_{1}|)-(|Z|+|V^{x'}_{0}|)\geq{}1\), and
so we get that
\(\frac{1}{2}(|N(Z)|+|V^{x'}_{1}|)-\frac{1}{2}(|Z|+|V^{x'}_{0}|)\geq{}\frac{1}{2}\).
Substituting this in the equation for \(LP(G')\), we get that
\(LP(G')\geq{}LP(G)-|N(Z)|+\frac{1}{2}\).
\end{proof}
Now we bound the drop in \(MM(G)\).
\begin{claim}
\(MM(G')\leq{}MM(G)-|Z|\).
\end{claim}
\begin{proof}[Proof of the claim.]
Consider the bipartite graph \(\hat{G}\) obtained from the
induced subgraph \(G[Z\cup{}N(Z)]\) of \(G\) by deleting every
edge which has both endpoints in \(N(Z)\). Observe that since
\(\surplus{Z}=1\) in \(G\), we get that
$|N(X)|\geq|X|+1\;\forall{}X\subseteq{}Z$, both in \(G\) and
in \(\hat{G}\). Hence by Hall's Theorem we get that
\(\hat{G}\) contains a matching saturating \(Z\), and hence
that \(MM(\hat{G})=|Z|\). But from the construction of graph
\(G'\) we get that
\(MM(G)\geq{}MM(G')+MM(\hat{G})\). Substituting for
\(MM(\hat{G})\), we get that \(MM(G')\leq{}MM(G)-|Z|\).
\end{proof}
Putting all these together, we get that
\(\hat{k}'=k'+MM(G')-2LP(G')\leq{}(k-|N(Z)|)+(MM(G)-|Z|)-2(LP(G)-|N(Z)|+\frac{1}{2})=k+MM(G)-2LP(G)+(|N(Z)|-|Z|-1)=k+MM(G)-2LP(G)=\hat{k}\),
where the last-but-one equality follows from the fact that
\(\surplus{Z}=1\). Thus we have that \(\hat{k}'\leq{}\hat{k}\).
\end{proof}
\begin{lemma}\label{lem:rule_three_safe}
\autoref{red:struction} is safe.
\end{lemma}
\begin{proof}
From the definition of the rule we get that \(k'=k-|Z|\). We
bound the other quantities. Let \(z\) be the vertex in \(G'\)
which results from identifying the vertices of \(N(Z)\) as
stipulated by the reduction rule.
\begin{claim}
\(LP(G')\geq LP(G)-|Z|\).
\end{claim}
\begin{proof}[Proof of the claim.]
Suppose not. Then we get---from the half-integrality property
of the relaxed LP for \name{Vertex Cover}\xspace---that
\(LP(G')\leq{}LP(G)-|Z|-\frac{1}{2}\). We show that this
inequality leads to a contradiction. We consider three cases
based on the value of \(x'(z)\), which must be one of
\(\{0,\frac{1}{2},1\}\). Recall that \(\surplus{Z}=1\) and so
\(|N(Z)|=|Z|+1\).
\begin{description}
\item[Case 1:] \(x'(z)=1\). Consider a function
\(x'':V\to\{0,\frac{1}{2},1\}\) defined as follows. For
every vertex \(v\) in $V'\setminus\{z\}$, \(x''\) retains
the value assigned by \(x'\); that is, \(x''(v)=x'(v)\). For
every vertex \(v\) in the set \(N(Z)\), set \(x''(v)=1\) and
for every vertex \(v\) in the set \(Z\), \(x''(v)=0\). It is
not difficult to check that \(x''\) is a feasible solution
to the relaxed \name{Vertex Cover}\xspace LP for \(G\). But now the value of this
solution,
\(w(x'')=LP(G')-x'(z)+|N(Z)|=LP(G')-1+(|Z|+1)\leq{}LP(G)-\frac{1}{2}\). Thus
\(x''\) is a feasible solution for \(G\) whose value is less
than that of the \emph{optimal} solution, a contradiction.
\item[Case 2:] \(x'(z)=0\). Now consider the following
function \(x'':V\to\{0,\frac{1}{2},1\}\). For every vertex
\(v\) in $V'\setminus\{z\}$, \(x''\) retains the value
assigned by \(x'\): \(x''(v)=x'(v)\). For every vertex
\(v\in{}Z\) set \(x''(v)=1\) and for every vertex
\(v\in{}N(Z)\), set \(x''(v)=0\). It is again not difficult
to check that \(x''\) is a feasible solution to the relaxed
\name{Vertex Cover}\xspace LP for \(G\). And the value of this solution, \(w(x'')=
LP(G')+|Z|\leq{}LP(G)-\frac{1}{2}\), again a contradiction.
\item[Case 3:] \(x'(z)=\frac{1}{2}\). Consider again a
function \(x'':V\to\{0,\frac{1}{2},1\}\), defined as
follows. For every vertex \(v\) in \(V'\setminus\{z\}\),
\(x''\)---once again---retains the value assigned by \(x'\):
\(x''(v)=x'(v)\). For every vertex \(v\in(Z\cup{}N(Z))\),
set \(x''(v)=\frac{1}{2}\). This is once again easily
verified to be a feasible solution to the relaxed \name{Vertex Cover}\xspace LP for
\(G\), and its value is
\(w(x'')=LP(G')-x'(z)+\frac{1}{2}(|Z|+|N(Z)|)=LP(G')-\frac{1}{2}+\frac{1}{2}(|Z|+|Z|+1)\leq{}LP(G)-\frac{1}{2}\),
once again a contradiction.
\end{description}
Thus our contrary assumption leads to a contradiction in all
possible cases, and this proves the claim.
\end{proof}
Now we bound the drop in \(MM(G)\).
\begin{claim}
\(MM(G')\leq{}MM(G)-|Z|\).
\end{claim}
\begin{proof}[Proof of the claim.]
For an arbitrary vertex \(u\in{}N(Z)\), let \(G_{u}\) be the
bipartite subgraph \(G[Z\cup{}(N(Z)\setminus{}\{u\})]\), which
is an induced subgraph of \(G\). Since \(\surplus{Z}=1\) in
\(G\) and only one vertex from \(N(Z)\) is missing in
\(G_{u}\), we get that
\(|N(X)|\geq|X|\;\forall{}X\subseteq{}Z\) holds in
\(G_{u}\). Hence by Hall's Theorem we get that \(G_{u}\)
contains a matching saturating \(Z\), and hence that
\(MM(G_{u})=|Z|\).
Now consider a maximum matching \(MM'\) of \(G'\). Starting
with \(MM'\) we can construct a matching \(MM''\) of size
\(|MM'|\) in the original graph \(G\) which saturates at most
one vertex of \(N(Z)\) and none of \(Z\), as follows: There is
at most one edge in \(MM'\) which saturates the vertex
\(z\). If there is \emph{no} edge in \(MM'\) which saturates
the vertex \(z\), then we set \(MM''=MM'\). It is not
difficult to see that \(MM''\) saturates no vertex in
\(Z\cup{}N(Z)\). If there is an edge \(\{z,v\}\in{}MM'\), then
we pick an arbitrary vertex \(u\in{}N(Z)\) such that
\(\{u,v\}\) is an edge in \(G\)---such a vertex must exist
since the edge \(\{z,v\}\) exists in \(G'\). We set
\(MM''=(MM'\setminus\{\{z,v\}\})\cup\{\{u,v\}\}\). It is not
difficult to see that \(MM''\) is a matching in \(G\) which
saturates exactly one vertex---\(u\in{}N(Z)\)---in
\(Z\cup{}N(Z)\).
If the matching \(MM''\), constructed as above, does not
saturate any vertex of \(N(Z)\), then we choose \(u\) to be an
arbitrary vertex of \(N(Z)\). If \(MM''\) does saturate a
vertex of \(N(Z)\), then we set \(u\) to be that vertex. In
either case, the union of \(MM''\) and any maximum matching of
the induced bipartite subgraph \(G_{u}\) is itself a matching
of \(G\), of size \(MM(G_{u})+MM(G')=|Z|+MM(G')\). It follows
that \(MM(G)\geq|Z|+MM(G')\), which implies
\(MM(G')\leq{}MM(G)-|Z|\).
\end{proof}
Putting all of this together, we get that
\(\hat{k}'=k'+MM(G')-2LP(G')\leq{}(k-|Z|)+(MM(G)-|Z|)-2(LP(G)-|Z|)=k+MM(G)-2LP(G)=\hat{k}\),
and hence \(\hat{k}'\leq{}\hat{k}\).
\end{proof}
Observe that if \(\surplus{G}=1\) then at least one of
\autoref{red:edgeinneighbour} and \autoref{red:struction}
necessarily applies. From this and
\autoref{lem:all-halves_positive_surplus} we get the following
useful property of graphs on which none of these rules applies.
\begin{lemma}\label{lem:reduced_graph_surplus_two}
None of the three reduction rules applies to an instance
\((G,k)\) of \name{Vertex Cover}\xspace if and only if \(\surplus{G}\geq{}2\).
\end{lemma}
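The surplus of a graph, taken here as the minimum of \(|N(Z)|-|Z|\) over nonempty independent sets \(Z\) (our assumption about the definition used earlier in the paper), can be computed by brute force on tiny instances. An illustrative sketch:

```python
from itertools import combinations

# Brute-force surplus: min over nonempty independent sets Z of
# |N(Z)| - |Z|.  Exponential time; for illustration only.
def surplus(vertices, edges):
    nbrs = {v: set() for v in vertices}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    best = None
    for r in range(1, len(vertices) + 1):
        for Z in combinations(vertices, r):
            Zs = set(Z)
            if any(nbrs[a] & Zs for a in Z):   # Z not independent
                continue
            val = len(set().union(*(nbrs[z] for z in Z))) - len(Zs)
            best = val if best is None else min(best, val)
    return best

# 4-cycle: Z = {0, 2} has N(Z) = {1, 3}, so the surplus is 0.
assert surplus(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]) == 0
# K4: only singletons are independent, each with surplus 3 - 1 = 2.
assert surplus(range(4), [(a, b) for a in range(4) for b in range(a + 1, 4)]) == 2
```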
Summarizing the results of this section, we have:
\begin{lemma}\label{lem:result_of_reduction_rules}
Given an instance \((G,\hat{k})\) of \name{Vertex Cover Above Lov\'{a}sz-Plummer}\xspace we can, in time
polynomial in the size of the input, compute an instance
\((G',\hat{k}')\) such that:
\begin{enumerate}
\item The two instances are equivalent: \(G\) has a vertex cover
of size at most \((2LP(G)-MM(G))+\hat{k}\) if and only if
\(G'\) has a vertex cover of size at most
\((2LP(G')-MM(G'))+\hat{k}'\);
\item \(\hat{k}'\leq\hat{k}\), and \(\surplus{G'}\geq{}2\).
\end{enumerate}
\end{lemma}
\section{Introduction}
\thispagestyle{empty}
In this paper we develop
discrete $W^2_p$ error estimates
for numerical approximations of
the Monge-Amp\`ere equation
with Dirichlet boundary conditions:
\begin{subequations}\label{MA}
\begin{alignat}{2}\label{MA1}
\det(D^2 u) & = f\qquad \text{in }\Omega,\\
\label{MA2}
u &=0 \qquad \text{on }\partial\Omega,
\end{alignat}
\end{subequations}
with given function $f\in C(\bar\Omega)$
satisfying $\underline{f} \le f\le \bar{f}$ in $\bar\Omega$,
for some positive constants $\underline{f},\bar{f}$.
Here, $D^2 u$ denotes the Hessian matrix of $u$.
The domain $\Omega\subset \mathbb{R}^d$
is assumed to be bounded and uniformly convex.
We seek a solution to \eqref{MA}
in the class of convex functions,
which ensures ellipticity of
the problem and its unique solvability \cite{Gutierrez01}.
The method we analyze in this paper
is due to Oliker and Prussner \cite{OlikerPrussner88}, which
is based on a geometric notion of generalized
solutions called Alexandroff solutions. In this setting,
the determinant of the Hessian matrix of $u$
in \eqref{MA1} is interpreted as the
measure of the sub-differential of $u$; see \cite{Gutierrez01}.
The method proposed
in \cite{OlikerPrussner88} simply poses this solution
concept onto the space of nodal functions
and enforces the geometric condition
implicitly given in \eqref{MA1} at a finite number
of points. Namely, the method seeks a nodal function $u_h$
satisfying the Dirichlet boundary conditions on boundary nodes, and
\[
|\partial u_h(x_i)| = f_i
\]
at all interior grid points $x_i$. Here, $\partial u_h(x_i)$ denotes
the sub-differential of $u_h$ at $x_i$, $|\cdot|$ is
the $d$-dimensional Lebesgue measure,
$f_i\approx h^d f(x_i)$, and $h$ is the mesh parameter.
Existence and uniqueness of the discrete solution, as well as convergence
to the Alexandroff solution, are shown in two dimensions in \cite{OlikerPrussner88}.
Recently, Nochetto and the second author
derived pointwise error estimates of the Oliker-Prussner
scheme \cite{NochettoZhang17}. There it is shown that, if
the exact convex solution to \eqref{MA} is sufficiently
smooth, and if the nodes are translation invariant,
then the error is of (optimal) order $\mathcal{O}(h^2)$
in the $L_\infty$ norm. Generalizations of these results, depending
on the regularity of the solution, are also given.
The main tools
to develop these results include operator consistency estimates,
the Brunn-Minkowski inequality,
and discrete
Alexandroff-Bakelman-Pucci
estimates for continuous, piecewise linear functions \cite{KuoTrudinger00,NochettoZhang17A}.
Our contribution in this paper is to extend these results and to develop
discrete $W^2_p$ error estimates for all $p\in [1,\infty)$.
To summarize this result, we first introduce a discrete $W^2_p$ norm for discrete nodal functions. We
define the second-order difference
operator of a nodal or continuous function $v$
in the direction $e\in \mathbb{Z}^d$ at a node $x_i$ as
\begin{align*}
\delta_e v(x_i):= \frac{v(x_i+ h e) -2 v(x_i)+v(x_i-he)}{|e|^2 h^2},
\end{align*}
where $|e|$ denotes the Euclidean norm
of $e$, and it is assumed that $x_i\pm h e$ is also a node in the domain $\bar{\Omega}$.
If either $x_i - h e$ or $x_i + h e$ is outside $\Omega$, we define
\begin{align*}
\delta_e v(x_i):= \frac{\rho_2 v(x_i+ \rho_1 h e) - (\rho_1 + \rho_2) v(x_i)+ \rho_1 v(x_i- \rho_2 he)}{\rho_1 \rho_2 (\rho_1 + \rho_2) |e|^2 h^2 /2},
\end{align*}
where $\rho_1$ and $\rho_2$ are the largest numbers in $(0,1]$ such that $x_i+ \rho_1 h e$ and $x_i- \rho_2 h e$ lie in $\bar{\Omega}$, respectively.
The (weighted) $W^2_p$-norm of a nodal function $v$ with respect
to direction $e$ on a set of nodes $S$ is given by
\begin{align*}
\|v\|_{W^2_{p}(S)} := \Big(\sum_{x_i\in S} f_i |\delta_e v(x_i)|^p\Big)^{1/p}.
\end{align*}
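The operators just defined are straightforward to evaluate numerically. A minimal sketch (the grid, the weights, and the test function are illustrative choices, with $f_i$ replaced by the constant $h^d$ corresponding to $f\equiv 1$): for the quadratic $v(x)=|x|^2$ the second-order difference is exact in every direction, $\delta_e v = e^{T}(D^2 v)e/|e|^2 = 2$.

```python
# Evaluating the second-order difference operator and the weighted
# discrete W^2_p norm on a small hand-built grid.
def delta_e(v, x, h, e):
    # second-order difference of v at x in direction e (interior case)
    e2 = sum(c * c for c in e)
    xp = [xi + h * c for xi, c in zip(x, e)]
    xm = [xi - h * c for xi, c in zip(x, e)]
    return (v(xp) - 2 * v(x) + v(xm)) / (e2 * h**2)

v = lambda x: sum(c * c for c in x)     # v(x) = |x|^2, so D^2 v = 2I
h = 0.1
x0 = [0.3, 0.4]
# For a quadratic the second difference is exact: delta_e v = 2.
for e in [(1, 0), (0, 1), (1, 1), (2, -1)]:
    assert abs(delta_e(v, x0, h, e) - 2.0) < 1e-9

# Weighted discrete W^2_p "norm" over a small interior node set,
# with weights f_i = h^d standing in for the integrals defining f_i.
nodes = [[i * h, j * h] for i in range(1, 4) for j in range(1, 4)]
p, f_i = 2, h**2                        # d = 2, f = 1
W2p = sum(f_i * abs(delta_e(v, z, h, (1, 0)))**p for z in nodes) ** (1 / p)
assert abs(W2p - 2.0 * (9 * h**2) ** 0.5) < 1e-9
```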
The main result of the paper, precisely given in Theorem \ref{thm:W2d}, is the estimate
\[
\|N_h u- u_h\|_{W^2_p({\mathcal{N}_{h}^I})}\leq
\begin{cases}
C h^{1/p} \quad &\mbox{if $p>d$,}
\\
C h^{1/d} \big(\ln\left(\frac 1 h \right)\big)^{1/d} \quad &\mbox{if $p\leq d$,}
\end{cases}
\]
where $N_h u$ denotes the nodal interpolant of $u$.
Similar
to the arguments in \cite{NochettoZhang17},
one of the tools we use is operator consistency of the method.
In addition, we extend the discrete
Alexandroff-Bakelman-Pucci estimates
given in \cite{KuoTrudinger00,NochettoZhang17A},
and show that the contact set also contains
useful information about the second-order differences.
Because of its wide array
of applications in, e.g., differential geometry,
optimal mass transport, and meteorology,
several numerical methods
have been developed for the Monge-Amp\`ere problem.
These include the monotone finite difference
schemes \cite{Oberman08,FroeseOberman11,Benamou16,Mirebeau15},
the vanishing moment method \cite{FengNeilan09}, $C^1$ finite element
methods \cite{Bohmer08,Awanou15b},
$C^0$ penalty methods \cite{Brenner11,Neilan14,Awanou15},
and semi-Lagrangian schemes \cite{FengJensen17}.
We also refer the interested reader to a review of numerical methods for fully nonlinear elliptic equations \cite{NeilanSalgadoZhang}.
One application of our results is
to feed the solution of the Oliker-Prussner method
into a higher-order scheme. For example,
the results given in \cite{Neilan14} state
that Newton's method converges
to the discrete solution provided that difference
between the initial guess and the exact
solution is sufficiently small in a $W^2_p$-norm.
Our results therefore show that the solution of the
Oliker-Prussner scheme can serve as an initial guess
for a higher-order scheme.
We will explore this idea in a forthcoming paper.
The organization of the paper is as follows.
In the next section, we state the Oliker-Prussner method
and collect some preliminary results.
In Section \ref{sec:Consistency}
we give operator consistency results
of the scheme. Section \ref{sec:Stability} gives
stability results with respect to the second-order difference operators,
and in Section \ref{S:W21estimate}
we provide $W^{2}_p$ error estimates.
Finally, we end the paper with some numerical
experiments in
Section \ref{S:numerics}.
\section{Preliminaries}
\subsection{Nodal Set and Nodal Function}
Let ${\mathcal{N}_{h}}$ be a set of nodes in the domain $\bar\Omega$. We denote the set of interior nodes ${\mathcal{N}_{h}^I} := {\mathcal{N}_{h}} \cap \Omega$,
the set of boundary nodes
${\mathcal{N}_h^B} := {\mathcal{N}_{h}} \cap \partial \Omega$, and the nodal set
\[
{\mathcal{N}_{h}} = {\mathcal{N}_{h}^I} \cup {\mathcal{N}_h^B}.
\]
To ensure that the interior nodes are not too close to the boundary $\partial \Omega$, we require that
\begin{align}\label{nodalassumption}
\textrm{dist}(z, \partial \Omega) \geq \frac h 2 \quad \mbox{for any node $z \in {\mathcal{N}_{h}^I}$.}
\end{align}
Such a nodal set can be obtained by removing the nodes whose distance to $\partial \Omega$ is less than $h/2$.
We assume that the nodal set is {\it translation invariant}, i.e.,
there exists a point $b \in \mathbb{R}^d$ and a basis $\{ e_i \}_{i=1}^d$
in $\mathbb{R}^d$ such that any interior node $z \in {\mathcal{N}_{h}^I}$ can be written as
\begin{align}\label{translationinvariant}
z = b + \sum_{i=1}^d h z_i e_i
\quad \mbox{for some integers $z_i \in \mathbb{Z}$.}
\end{align}
Since the basis $\{e_i\}$ can be mapped to the canonical basis of $\mathbb{R}^d$ by a linear transformation, to simplify the presentation we will hereafter assume that
${\mathcal{N}_{h}^I} \subset b + h \mathbb Z ^d $.
We also make the following additional assumption on the boundary nodal set ${\mathcal{N}_h^B}$:
\begin{align}
\textrm{dist}(x,{\mathcal{N}_h^B}) \leq h,\qquad \forall x\in \partial \Omega.
\end{align}
We say the nodal spacing of ${\mathcal{N}_{h}}$ is $h$.
It is worth mentioning that one can construct a translation invariant ${\mathcal{N}_{h}}$ on a curved domain $\Omega$.
In fact, for a nodal set ${\mathcal{N}_{h}}$ to be translation invariant, we only require that the interior nodal set ${\mathcal{N}_{h}^I}$ satisfy \eqref{translationinvariant}; no such requirement is made on the boundary nodes.
Associated with the nodes
is a simplicial triangulation $\mathcal{T}_h$,
with vertices ${\mathcal{N}_{h}}$. We denote
by $h_T$ the diameter of $T\in \mathcal{T}_h$,
and by $\rho_T$ the diameter of the largest
inscribed ball in $T$. We assume
that the triangulation is shape-regular, i.e., there
exists $\sigma>0$ such that
\begin{align*}
\frac{h_T}{\rho_T}\le \sigma\qquad \forall T\in \mathcal{T}_h.
\end{align*}
We denote by $\{\phi_i\}_{i=1}^n$,
with $n = \# {\mathcal{N}_{h}^I}$, the canonical piecewise linear
hat functions associated with $\mathcal{T}_h$.
Namely, the function $\phi_i\in C(\bar\Omega)$
is a piecewise linear polynomial with respect to $\mathcal{T}_h$,
and is uniquely determined by the condition $\phi_i(x_j) = \delta_{i,j}$ (Kronecker delta)
for all $x_j\in {\mathcal{N}_{h}^I}$ and $\phi_i(x_j)=0$ for all $x_j\in {\mathcal{N}_h^B}$.
We denote by $\omega_i$ the support of $\phi_i$, i.e.,
the patch of elements in $\mathcal{T}_h$ that have $x_i$
as a vertex.
A function defined on ${\mathcal{N}_{h}}$ is called a nodal function,
and we denote the space of nodal functions by $\mathcal{M}_h$.
For a nodal function $g$ with nodal value $\{g_i\}_{x_i\in {\mathcal{N}_{h}}}$,
and for a subset of nodal points $\mathcal{C} \subset {\mathcal{N}_{h}}$,
we set the discrete $\ell^d$ norm
as
\begin{align*}
\|g\|_{\ell^d(\mathcal{C})}:=\Big(\sum_{x_i\in \mathcal{C}} |g_i|^d \Big)^{1/d}.
\end{align*}
We say that a nodal function $u_h\in \mathcal{M}_h$ is convex
if, for all $x_i\in {\mathcal{N}_{h}^I}$, there exists a supporting hyperplane
$L$ of $u_h$, i.e.,
\[
L(x_j)\le u_h(x_j)\quad \forall x_j\in {\mathcal{N}_{h}}\text{ and } L(x_i) = u_h(x_i).
\]
The convex envelope of $u_h$ is
the function $\Gamma (u_h)\in C(\bar\Omega)$ given by
\[
\Gamma(u_h)(x) = \sup \{L(x):\ L\text{ is affine},\ L(x_i)\le u_h(x_i)\ \forall x_i\in {\mathcal{N}_{h}}\}.
\]
Finally, we denote by $N_h:C(\bar\Omega)\to \mathcal{M}_h$
the nodal interpolant satisfying $N_h v(x_i) = v(x_i)$
for all $x_i\in {\mathcal{N}_{h}}$. It is easy to see
that if $v$ is a convex function on $\bar\Omega$,
then $N_h v$ is a convex nodal function.
\subsection{The Oliker-Prussner Method}\label{sec:Method}
\begin{figure}
\begin{center}
\includegraphics[scale=0.2]{pwlinearf.jpg}
\includegraphics[scale=0.2]{subdifferential.jpg}
\\
\includegraphics[scale=0.2]{more_pwlinearf.jpg}
\includegraphics[scale=0.2]{more_subdifferential.jpg}
\caption{
A convex nodal function $u_h$ induces a convex piecewise linear function $\gamma_h=\Gamma(u_h)$.
The sub-differential $\partial u_h(0)$ of the convex
nodal function $u_h$ at node $0$ is the convex hull of the piecewise gradients $\nabla \gamma_h|_{T}$, which is the polygon in the second figure.
Let the domain $\Omega$ be a unit ball centered at $0$ and ${\mathcal{N}_{h}}$ be a nodal set in $\Omega$.
A convex nodal function $u_h$ defined on ${\mathcal{N}_{h}}$ induces a piecewise linear function $\Gamma(u_h)$.
For each node $x_i\in {\mathcal{N}_{h}}$, there is an associated subdifferential $\partial u_h(x_i)$ which corresponds to a polygon cell in the last figure.
The piecewise gradient of $u_h$ can be viewed as a map between the domain $\Omega$ and the diagram.
}
\label{fig:subdifferential}
\end{center}
\end{figure}
To motivate the method introduced
in \cite{OlikerPrussner88}, we first
introduce the notion of an Alexandroff solution
to the Monge-Amp\`ere equation \eqref{MA}.
To this end, note that if the solution to \eqref{MA}
is strictly convex, and if $u\in C^2(\Omega)$,
then a change of variables reveals that
\[
\int_E f\, dx = \int_E \det(D^2 u)\, dx = \int_{\nabla u(E)} dx = |\nabla u(E)|\quad \text{for all Borel }E\subset \Omega,
\]
where $|\nabla u(E)|$ denotes the $d$-dimensional Lebesgue measure of $\nabla u(E)
=\{\nabla u(x):\ x\in E\}$.
To extend this identity to a larger class of functions,
we introduce the subdifferential
of the function $u$ at the point $x_0$ as
\begin{align*}
\partial u(x_0) = \{p\in \mathbb{R}^d:\ u(x)\ge u(x_0)+p\cdot (x-x_0) \quad \forall x \in \Omega\}.
\end{align*}
Thus, $\partial u(x_0)$ is the set of supporting hyperplanes
of the graph of $u$ at $x_0$. If
$u$ is strictly convex and smooth then $\partial u(x_0) = \{\nabla u(x_0)\}$,
and the same calculation as above shows that
\begin{align}
\label{eqn:AlexMotivation}
\int_E f\, dx = |\partial u(E)|\quad \text{for all Borel }E\subset \Omega.
\end{align}
\begin{definition}
A convex function $u\in C(\bar\Omega)$
is an {\em Alexandroff solution} to \eqref{MA}
provided that $u=0$ on $\partial\Omega$
and \eqref{eqn:AlexMotivation} is satisfied.
\end{definition}
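In one space dimension the identity \eqref{eqn:AlexMotivation} can be seen very concretely: for a convex piecewise linear function, the subdifferential at a kink $x_i$ is the interval of adjacent slopes, so its measure is the slope jump. A small illustrative check (the grid and the choice $u(x)=x^2/2$, for which $f\equiv 1$ and $f_i = h$, are ours):

```python
# 1D illustration of the Alexandroff identity: interpolate
# u(x) = x^2/2 (so det D^2 u = u'' = 1 = f) on a uniform grid and
# check that the slope jump at each interior node, i.e. the measure
# of the subdifferential of the piecewise-linear interpolant,
# equals f_i = integral of the hat function phi_i = h.
h, n = 0.1, 20
xs = [-1.0 + i * h for i in range(n + 1)]
u = [x * x / 2 for x in xs]

slopes = [(u[i + 1] - u[i]) / h for i in range(n)]
for i in range(1, n):                      # interior nodes
    jump = slopes[i] - slopes[i - 1]       # |subdifferential| in 1D
    assert abs(jump - h) < 1e-12
```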
The method
introduced in \cite{OlikerPrussner88}
simply poses this solution concept
onto the space of nodal functions.
To do so, the definition of the subdifferential
is extended to the spaces of nodal functions
in the natural way:
\begin{align}\label{def:subdifferential}
\partial u_h(x_i) = \{p\in \mathbb{R}^d:\ u_h(x_j)\ge u_h(x_i)+p\cdot (x_j-x_i)\ \forall x_j\in {\mathcal{N}_{h}}\}.
\end{align}
To characterize the sub-differential of a nodal function $u_h$, we note that the convex envelope of a convex nodal function $u_h$, which is a piecewise linear function defined in $\Omega$, induces a mesh $\tilde{\mathcal{T}}_h$; see Figure \ref{fig:subdifferential}.
Then the sub-differential of $u_h$ at node $x_i$ can be characterized as the convex hull of the constant gradients
$
\bld{\nabla} \Gamma(u_h)|_T $
for all $T \in \tilde{\mathcal{T}}_h$ which contain $x_i$; see Figure \ref{fig:subdifferential}.
The discrete method
is to find a convex nodal
function $u_h$ with $u_h=0$
on ${\mathcal{N}_h^B}$ and
\begin{align}\label{eqn:OPMethod}
|\partial u_h(x_i)| = f_i\qquad \forall x_i\in {\mathcal{N}_{h}^I},
\end{align}
where
\begin{align}\label{eqn:fiDef}
f_i = \int_{\Omega} f(x) \phi_i(x)\, dx = \int_{\omega_i} f(x) \phi_i(x)\, dx.
\end{align}
\begin{remark}
Existence and uniqueness
of a solution to \eqref{eqn:OPMethod}
is given in {\rm\cite{OlikerPrussner88,NochettoZhang17}}.
\end{remark}
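In one dimension the equations \eqref{eqn:OPMethod} degenerate to a linear tridiagonal system, since $|\partial u_h(x_i)|$ is the slope jump $(u_h(x_{i+1})-2u_h(x_i)+u_h(x_{i-1}))/h$ and convexity is automatic when $f_i>0$. This is only a 1D illustration (in dimension $d\ge 2$ the scheme is genuinely nonlinear); a sketch for $f\equiv 1$ on $(-1,1)$, where the exact solution is $u(x)=(x^2-1)/2$:

```python
# 1D Oliker-Prussner reduction: slope jumps give the linear system
# u_{i+1} - 2u_i + u_{i-1} = h * f_i with u_0 = u_n = 0.
def solve_tridiag(lower, diag, upper, rhs):
    # Thomas algorithm; safe here since the system is diagonally dominant.
    m = len(rhs)
    d, r = diag[:], rhs[:]
    for i in range(1, m):
        w = lower[i] / d[i - 1]
        d[i] -= w * upper[i - 1]
        r[i] -= w * r[i - 1]
    x = [0.0] * m
    x[-1] = r[-1] / d[-1]
    for i in range(m - 2, -1, -1):
        x[i] = (r[i] - upper[i] * x[i + 1]) / d[i]
    return x

# f = 1 gives f_i = h, so the right-hand side is h^2 at every node.
h, n = 0.05, 40
xs = [-1.0 + i * h for i in range(n + 1)]
m = n - 1                                      # interior unknowns
uh = solve_tridiag([1.0] * m, [-2.0] * m, [1.0] * m, [h * h] * m)

# The nodal interpolant of the exact solution u(x) = (x^2 - 1)/2
# satisfies the scheme exactly, so u_h matches it to rounding error.
for i in range(1, n):
    assert abs(uh[i - 1] - (xs[i]**2 - 1) / 2) < 1e-10
```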
\subsection{Brunn Minkowski inequality and subdifferential of convex functions}\label{sub:BM}
In this subsection, we develop a few techniques which will be useful in establishing the error estimate.
We start with the celebrated Brunn-Minkowski inequality, which relates the volumes of compact subsets of $\mathbb R^d$.
\begin{prop}[Brunn-Minkowski inequality]\label{BM}
Let $A$ and $B$ be two nonempty compact subsets of $\mathbb R^d$ for $d \geq 1$. Then the following inequality holds:
\[
|A + B|^{1/d} \ge |A|^{1/d} + |B|^{1/d},
\]
where $A+B$ denotes the Minkowski sum:
\[
A + B := \{ v + w \in \mathbb{R}^d: v \in A \text{ and } w \in B \}.
\]
\end{prop}
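For axis-aligned boxes the Minkowski sum is explicit, which gives a quick numerical sanity check of the inequality (the boxes below are arbitrary illustrative choices):

```python
import math

# Brunn-Minkowski on axis-aligned boxes: the Minkowski sum of a box
# with side lengths a_i and one with side lengths b_i is the box
# with side lengths a_i + b_i.
def vol(sides):
    return math.prod(sides)

d = 3
A = [1.0, 2.0, 0.5]          # arbitrary box side lengths
B = [0.3, 1.5, 2.0]
lhs = vol([a + b for a, b in zip(A, B)]) ** (1 / d)
rhs = vol(A) ** (1 / d) + vol(B) ** (1 / d)
assert lhs >= rhs - 1e-12

# Equality holds for homothetic sets, e.g. B = 2A.
B2 = [2 * a for a in A]
lhs2 = vol([a + b for a, b in zip(A, B2)]) ** (1 / d)
rhs2 = vol(A) ** (1 / d) + vol(B2) ** (1 / d)
assert abs(lhs2 - rhs2) < 1e-9
```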
Next, we make the following observation on the sum of two subdifferential sets.
\begin{lemma}[Lemma 2.3 in \cite{NochettoZhang17}]\label{lem:addSubDiff}
Let $u_h$ and $v_h$
be two convex nodal functions.
Then there holds
\begin{align*}
\partial u_h(x_i) + \partial v_h(x_i)\subset \partial (u_h+v_h)(x_i)
\end{align*}
for all $x_i \in {\mathcal{N}_{h}^I}$.
\end{lemma}
\begin{proof}
Let $p_1$ and $p_2$ be in $\partial u_h(x_i)$ and $\partial v_h(x_i)$, respectively. By the definition of subdifferential \eqref{def:subdifferential}, we have
\begin{align*}
p_1 \cdot (x_j - x_i) \leq & u_h(x_j) - u_h(x_i) \quad \forall x_j \in {\mathcal{N}_{h}},
\\
p_2 \cdot (x_j - x_i) \leq & v_h(x_j) - v_h(x_i) \quad \forall x_j \in {\mathcal{N}_{h}}.
\end{align*}
Adding both inequalities, we obtain
\[
(p_1 + p_2) \cdot (x_j - x_i) \leq (u_h + v_h) (x_j) - (u_h + v_h)(x_i) \quad \forall x_j \in {\mathcal{N}_{h}}.
\]
This shows that $p_1 + p_2 \in \partial (u_h + v_h)(x_i)$.
\end{proof}
Combining both estimates, we derive the following result.
\begin{lemma}\label{lem:convexity_subdifferential}
Let $u_h$ and $v_h$ be two convex nodal functions defined on ${\mathcal{N}_{h}}$ and $\mathcal{C}_h$ be the lower contact set of $(u_h - v_h)$:
\[
\mathcal{C}_h:=\big\{x_i\in {\mathcal{N}_{h}^I}:\ \Gamma(u_h-v_h)(x_i) = (u_h-v_h)(x_i)\big\}.
\]
Then for any node $x_i \in \mathcal{C}_h$,
\begin{align}\label{convexity_subdifferential}
|\partial \Gamma (u_h - v_h)(x_i)|^{1/d} \leq |\partial u_h(x_i)|^{1/d} - |\partial v_h(x_i)|^{1/d}.
\end{align}
\end{lemma}
\begin{proof}
The proof of this result
is implicitly given in \cite[Proposition 4.3]{NochettoZhang17},
but we give it here for completeness.
The definition of the convex envelope
and the subdifferential shows that
\begin{align*}
\partial \Gamma (u_h-v_h)(x_i)\subset \partial (u_h-v_h)(x_i)
\end{align*}
for all $x_i\in \mathcal{C}_{h}$. Applying Lemma \ref{lem:addSubDiff}
then yields
\begin{align*}
\partial v_h(x_i) + \partial \Gamma(u_h-v_h)(x_i)\subset \partial v_h(x_i)+\partial (u_h-v_h)(x_i) \subset \partial u_h(x_i).
\end{align*}
An application of the Brunn-Minkowski inequality (cf.~Proposition \ref{BM})
gives
\begin{align*}
|\partial v_h(x_i)|^{1/d} + |\partial \Gamma (u_h-v_h)(x_i)|^{1/d}
&\le |\partial v_h(x_i)+ \partial \Gamma(u_h-v_h)(x_i)|^{1/d}\\
&\le |\partial u_h(x_i)|^{1/d}.
\end{align*}
Rearranging terms we obtain \eqref{convexity_subdifferential}.
\end{proof}
We also note that the numerical method \eqref{eqn:OPMethod} has a discrete comparison principle. Here, we refer to \cite{NochettoZhang17} for a proof.
\begin{lemma}[discrete comparison principle, Corollary 4.4 in \cite{NochettoZhang17}]\label{lem:discrete_compare}
Let $v_h,w_h\in \mathcal{M}_h$ satisfy $v_h(x_i) \ge w_h(x_i)$
for all $x_i\in {\mathcal{N}_h^B}$ and $|\partial v_h(x_i)|\le |\partial w_h(x_i)|$
for all $x_i\in {\mathcal{N}_{h}^I}$. Then
\[
v_h(x_i)\ge w_h(x_i)\qquad \forall x_i\in {\mathcal{N}_{h}}.
\]
\end{lemma}
\section{Consistency of the Oliker-Prussner method}\label{sec:Consistency}
In this section, we state
the consistency of the method
\eqref{eqn:OPMethod} given in \cite[Lemma 5.3, Proposition 5.4]{NochettoZhang17}.
The result shows that
the relative consistency error
is of order $\mathcal{O}(h^2)$ away from
the boundary and of order $\mathcal{O}(1)$
in a $\mathcal{O}(h)$ region of the boundary.
\begin{lemma}\label{lem:QuadraticConsistency}
Let ${\mathcal{N}_{h}}$ be a translation invariant nodal set defined on the domain $\Omega$.
If $u\in C^{k,\alpha}(\bar{\Omega})$
is a convex function with $0 < \lambda I \le D^2 u\le \Lambda I$
and $2\le k+\alpha\le 4$,
there holds, for ${\rm dist}(x_i,\partial \Omega) \ge Rh$,
\begin{align}
\label{eqn:Wishful}
\big| |\partial N_h u(x_i)| - f_i\big|
\le C h^{k+\alpha+d-2},
\end{align}
where $R$ depends on $\lambda$ and $\Lambda$.
Moreover, there holds for ${\rm dist}(x_i,\partial \Omega)\le Rh$,
\[
\big| |\partial N_h u(x_i)|-f_i\big|\le C h^d.
\]
\end{lemma}
\begin{remark}
The regularity of $f$ and $\partial\Omega$,
the strict convexity of $\Omega$,
and the positivity of $f$ guarantees
that the convex solution
to \eqref{MA} enjoys the regularity $u\in C^{k,\alpha}(\bar{\Omega})$.
For example, if $f\in C^{k-2,\alpha}(\bar\Omega)$
and $\Omega$ is smooth,
then the solution satisfies $u\in C^{k,\alpha}(\bar\Omega)$ \cite{Gutierrez01,CafNirSpruck84,TrudingerWang08}.
\end{remark}
Thanks to the consistency estimates of Lemma \ref{lem:QuadraticConsistency}, an $L_{\infty}$ error estimate is derived in \cite{NochettoZhang17}, which states the following.
\begin{proposition}\label{prop:LinftyEstimate}
Let $\Omega$ be uniformly convex
and ${\mathcal{N}_{h}^I}$ be translation invariant.
Suppose further that the interior nodes satisfy \eqref{nodalassumption},
that $f\ge \underline{f}>0$, and that the convex solution to \eqref{MA}
satisfies $u\in C^{k,\alpha}(\bar\Omega)$
for some $2\le k+\alpha\le 4$ and $0< \lambda I \leq D^2 u \leq \Lambda I$. Then the numerical solution to the discrete Monge-Amp\`ere equation \eqref{eqn:OPMethod} satisfies
\begin{align*}
\|u_h-N_h u \|_{L_\infty({\mathcal{N}_{h}})}\le C h^{k+\alpha-2} \|u\|_{C^{k,\alpha}(\bar\Omega)},
\end{align*}
where
$
\|v_h\|_{L_\infty({\mathcal{N}_{h}})}:=\max_{x_i\in {\mathcal{N}_{h}}} |v_h(x_i)|.
$
\end{proposition}
We note that if $u \in C^{3,1}(\bar\Omega)$, then the optimal order of the $L_\infty$ error is $\mathcal{O}(h^2)$.
By this $L_{\infty}$ error estimate and the assumption \eqref{nodalassumption} that each interior node is at least $h/2$ away from the boundary, we immediately deduce that
$
| \delta_e (N_h u - u_h) (x_i) |
$
is bounded.
This observation will be useful in the following sections when we investigate the discrete $W^2_p$ error estimate.
\section{Stability of the Oliker-Prussner method}\label{sec:Stability}
To derive the discrete $W^2_p$-estimate, we first make an observation that the contact
set of a nodal function contains interesting information on its second order difference.
\begin{lemma}[estimate of second order difference]\label{lem:bound}
Given two convex nodal functions $v_h$ and $u_h$ defined on the nodal set ${\mathcal{N}_{h}}$, let
\[
w_{\epsilon} = u_h - (1-\epsilon)v_h
\quad \mbox{and} \quad
w^{\epsilon} = v_h - (1-\epsilon)u_h
\]
for some $0< \epsilon \leq 1$
and the contact sets
\begin{align}
\label{eqn:CebDef}
\mathcal{C}_{\epsilon} := &\; \{ x_i \in {\mathcal{N}_{h}}:\ w_{\epsilon}(x_i) = \Gamma w_{\epsilon}(x_i) \},
\\
\label{eqn:CeaDef}
\mathcal{C}^{\epsilon} := &\; \{ x_i \in {\mathcal{N}_{h}}:\ w^{\epsilon}(x_i) = \Gamma w^{\epsilon}(x_i) \}.
\end{align}
If a node $x_i \in \mathcal{C}_{\epsilon} \cap \mathcal{C}^{\epsilon}$, then
\begin{align}\label{2ndorderestimate}
-\epsilon \delta_e v_h(x_i) \leq \delta_e (u_h - v_h) (x_i) \leq \frac{\epsilon}{1 - \epsilon} \delta_e v_h (x_i)
\end{align}
for any vector $e \in \mathbb{Z}^d$.
\end{lemma}
\begin{proof}
We observe that if a node is in the contact set $x_i \in \mathcal{C}_{\epsilon}$, then the second order difference
of $w_{\epsilon}$ satisfies $\delta_e w_{\epsilon}(x_i) \ge \delta_e \Gamma w_{\epsilon}(x_i) \ge 0$
for any vector $e\in \mathbb{Z}^d$.
Since $w_{\epsilon} = (u_h - v_h) + \epsilon v_h$, this means that, for any node $x_i \in \mathcal{C}_{\epsilon}$,
\begin{align}\label{lowerbound}
\delta_e (u_h - v_h) (x_i) \geq -\epsilon \delta_e v_h (x_i).
\end{align}
This inequality yields a lower bound of the second order difference.
To derive the upper bound, we apply the same argument to the function $w^{\epsilon}$ and obtain
\[
\delta_e (v_h - u_h) (x_i) \geq -\epsilon \delta_e u_h (x_i)
\]
for any node $x_i \in \mathcal{C}^{\epsilon}$.
A simple algebraic manipulation yields
\begin{align}\label{upperbound}
\delta_e (u_h - v_h) (x_i) \leq \frac{\epsilon}{1 - \epsilon} \delta_e v_h (x_i).
\end{align}
Combining both the lower bound \eqref{lowerbound} and upper bound \eqref{upperbound}, we obtain the desired estimate.
\end{proof}
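The two-sided bound \eqref{2ndorderestimate} can be checked numerically in one dimension, where the convex envelope is the lower convex hull of the points $(x_i, w(x_i))$. The nodal functions below are illustrative choices (both convex); the check inspects only nodes in the common contact set, as the lemma requires:

```python
import math

# 1D check of the pinching inequality on the common contact set of
# w_eps = u_h - (1-eps) v_h and w^eps = v_h - (1-eps) u_h.
def contact_nodes(xs, w):
    # indices of nodes lying on the lower convex hull of {(x_i, w_i)}
    hull = []
    for i, (x, y) in enumerate(zip(xs, w)):
        while len(hull) >= 2:
            x1, y1, _ = hull[-2]
            x2, y2, _ = hull[-1]
            if (y2 - y1) * (x - x2) >= (y - y2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, y, i))
    return {i for (_, _, i) in hull}

def d2(vals, i, h):
    return (vals[i + 1] - 2 * vals[i] + vals[i - 1]) / h**2

h = 0.05
xs = [i * h for i in range(41)]                       # grid on [0, 2]
v_h = [(x - 1.0)**2 for x in xs]                      # convex
u_h = [(x - 1.0)**2 + 0.1 * math.cos(3 * x) for x in xs]  # convex: u'' >= 1.1
eps = 0.25
tau = eps / (1 - eps)
w_lo = [u - (1 - eps) * v for u, v in zip(u_h, v_h)]
w_hi = [v - (1 - eps) * u for u, v in zip(u_h, v_h)]

common = contact_nodes(xs, w_lo) & contact_nodes(xs, w_hi)
for i in common:
    if 0 < i < len(xs) - 1:
        diff = d2([a - b for a, b in zip(u_h, v_h)], i, h)
        dv = d2(v_h, i, h)
        assert -eps * dv - 1e-9 <= diff <= tau * dv + 1e-9
```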
\begin{remark}\label{rmk:observation}
The lemma above shows that we have control of the error $\delta_e (u_h - v_h)$ on the
contact sets $\mathcal{C}_{\epsilon}$ and $\mathcal{C}^{\epsilon}$.
Define the set $E_{\tau}$ to be
\begin{align}\label{Et}
E_\tau = \left \{ x_i \in {\mathcal{N}_{h}}:\ \delta_e (v_h - u_h)(x_i) \geq \tau \delta_e v_h(x_i)
\quad \mbox{for some vector $e \in \mathbb{Z}^d$} \right\},
\end{align}
where $\tau = \epsilon/(1 - \epsilon)$.
Then the proof of Lemma \ref{lem:bound} shows
that $E_\tau$ is contained in the non-contact set
\begin{align}
\label{eqn:Seps}
S_\epsilon := {\mathcal{N}_{h}} \setminus \mathcal{C}_{\epsilon}.
\end{align}
Analogously,
\begin{align*}
E^\tau :&= \left\{x_i\in {\mathcal{N}_{h}}:\ \delta_e(u_h-v_h)(x_i)\geq \tau \delta_e v_h(x_i)
\quad \mbox{for some vector $e \in \mathbb{Z}^d$} \right\}\\
&\subset S^\epsilon:={\mathcal{N}_{h}}\setminus \mathcal{C}^{\epsilon}.
\end{align*}
\end{remark}
In the next step, we estimate the cardinality of $S_\epsilon$.
Heuristically, if $\epsilon = 1$, then $w_{\epsilon} = u_h$,
which is a convex nodal function, so $S_\epsilon = \emptyset$.
As $\epsilon$ decreases to zero, the function $w_{\epsilon}$
becomes `less convex', and the cardinality $\#(S_\epsilon)$ increases;
see Figure \ref{fig:Remark1}.
Therefore, our next goal is to estimate how fast $\#(S_\epsilon)$ increases as $\epsilon \to 0$.
The following lemma shows that this is controlled by the consistency error of the method.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw [gray, thin] (1,0) -- (6,0);
\draw [blue, very thick, domain=1:6] plot (\x, { (0.5*\x-0.5) * (0.5*\x-3) });
\draw (5, 1) [above left] node {$w_\epsilon(x) = u_h(x)$};
\draw [gray, thin] (7,0) -- (12,0);
\draw [blue, very thick, domain=7:12] plot (\x, {0.5* (0.5*\x-3.5) *(0.5 *\x - 4)* (0.5*\x-6) });
\draw [dashed, very thick](7,0) -- (10, -0.75);
\draw [dashed, very thick, domain=10:12] plot (\x, {0.5* (0.5*\x-3.5) *(0.5 *\x - 4)* (0.5*\x-6) });
\draw (11, 1) [above left] node {$w_\epsilon(x) = u_h - \frac 12 v_h$};
\draw (10,-1) [below left] node {$\Gamma w_\epsilon(x)$};
\end{tikzpicture}
\end{center}
\caption{A pictorial description of Remark \ref{rmk:observation}}
\label{fig:Remark1}
\end{figure}
\begin{prop}\label{prop:measureestimate}
Let $u_h$ and $v_h$ be two convex nodal functions satisfying $u_h=v_h$ on ${\mathcal{N}_h^B}$,
$u_h \le v_h$ in ${\mathcal{N}_{h}^I}$, and
\begin{align}\label{eqn:uhvh}
| \partial u_h(x_i) | = f_i,
\quad \mbox{ and } \quad
| \partial v_h(x_i) | = g_i
\end{align}
for all $x_i \in {\mathcal{N}_{h}^I}$.
For any subset $S \subset {\mathcal{N}_{h}^I}$, let
\begin{align}
\label{eqn:MuDef}
\mu (S) = \sum_{x_i \in S} f_i
\quad \mbox{and} \quad
\nu_{\tau} (S) = \sum_{x_i \in S} \left( f_i^{1/d} + \frac 1 {\tau} e_i^{1/d} \right)^d,
\end{align}
where $e_i^{1/d} = g_i^{1/d} - f_i^{1/d}$.
Then
\begin{align}\label{eqn:measureestimate}
\mu(S_\epsilon) \leq \nu_{\tau}(\mathcal{C}_{\epsilon}) - \mu(\mathcal{C}_{\epsilon}),
\end{align}
where $\mathcal{C}_{\epsilon}$ is given by \eqref{eqn:CebDef},
$S_\epsilon$ is given by \eqref{eqn:Seps},
and $\tau = \epsilon/(1-\epsilon)$. Consequently, there holds
\begin{align}\label{eqn:measureestimate2}
\mu(S_\epsilon)\le \tau^{-1} C_f \|e^{1/d}\|_{\ell^d(\mathcal{C}_{\epsilon})},
\end{align}
with $C_f = d \|f^{1/d}\|_{\ell^d({\mathcal{N}_{h}^I})}^{d-1}.$
\end{prop}
\begin{proof}
We first show that
\begin{equation}\label{eqn:WTS123}
\sum_{x_i\in {\mathcal{N}_{h}^I}} \epsilon \partial u_h (x_i) \subset \sum_{x_i\in {\mathcal{N}_{h}^I}} \partial \Gamma w_{\epsilon} (x_i),
\end{equation}
where $w_{\epsilon} = u_h -(1-\epsilon)v_h$.
Since $u_h \leq v_h$ in ${\mathcal{N}_{h}^I}$ and $u_h = v_h$ on ${\mathcal{N}_h^B}$, we get
\[
w_{\epsilon} \leq \epsilon u_h \mbox{ in ${\mathcal{N}_{h}^I}$, }\quad \text{and} \quad
w_{\epsilon} = \epsilon u_h \mbox{ on ${\mathcal{N}_h^B}$.}
\]
Taking convex envelopes on both sides of the inequality, we obtain
\begin{align}
\label{eqn:WTSABC}
\Gamma w_{\epsilon}(x) \leq \epsilon \Gamma u_h(x) \quad \mbox{in $\Omega$ and} \quad
\Gamma w_{\epsilon}(x) = \epsilon \Gamma u_h(x) \quad \mbox{on $\partial \Omega$.}
\end{align}
Since $u_h = \Gamma u_h$ on ${\mathcal{N}_{h}}$ due to the convexity of $u_h$, the inequality \eqref{eqn:WTSABC} implies
\eqref{eqn:WTS123}.
Taking measure on both sides of \eqref{eqn:WTS123} and substituting \eqref{eqn:uhvh} yields
\[
\epsilon^d \sum_{x_i \in {\mathcal{N}_{h}^I}} f_i = \epsilon^d \sum_{x_i \in {\mathcal{N}_{h}^I}} |\partial u_h(x_i)| \leq \sum_{x_i \in \mathcal{C}_{\epsilon}} |\partial \Gamma w_{\epsilon}(x_i)|.
\]
In view of the convexity of the measure of the subdifferential \eqref{convexity_subdifferential},
\[
|\partial \Gamma w_{\epsilon}(x_i)|^{1/d} \leq |f^{1/d}_i - (1 - \epsilon) g^{1/d}_i |.
\]
Therefore, we infer that
\[
\epsilon^d \mu({\mathcal{N}_{h}^I}) = \epsilon^d \sum_{x_i \in {\mathcal{N}_{h}^I}} f_i \leq \sum_{x_i \in \mathcal{C}_{\epsilon}} |f^{1/d}_i - (1 - \epsilon) g^{1/d}_i |^d .
\]
Thus, subtracting $\epsilon^d \mu(\mathcal{C}_{\epsilon})$, we obtain
\begin{align*}
\epsilon^d \mu(S_{\epsilon}) =
\epsilon^d \sum_{x_i \in S_\epsilon} f_i
\leq &\; \sum_{x_i \in \mathcal{C}_{\epsilon}} \big(|\epsilon f^{1/d}_i + (1 - \epsilon) e_i^{1/d} |^d - \epsilon^d f_i \big).
\end{align*}
Therefore, dividing by $\epsilon^d$, we obtain
\begin{align*}
\mu(S_\epsilon) \leq \nu_{\tau}(\mathcal{C}_{\epsilon}) - \mu(\mathcal{C}_{\epsilon}).
\end{align*}
To derive the estimate \eqref{eqn:measureestimate2}, we first
see that \eqref{eqn:measureestimate} is equivalent to
\begin{align*}
\|f^{1/d} \|_{\ell^d({\mathcal{N}_{h}^I})} \le \| f^{1/d} + \tau^{-1} e^{1/d} \|_{\ell^d(\mathcal{C}_{\epsilon})},
\end{align*}
and therefore $\|f^{1/d}\|_{\ell^d({\mathcal{N}_{h}^I})} - \|f^{1/d}\|_{\ell^d(\mathcal{C}_{\epsilon})}\le \tau^{-1} \|e^{1/d}\|_{\ell^d(\mathcal{C}_{\epsilon})}$
by the Minkowski inequality. From this estimate and the inequality $a^d - b^d\le d a^{d-1}(a-b)$ for $a\ge b$,
we derive
\begin{align*}
\mu(S_\epsilon)
&= \|f^{1/d}\|_{\ell^d({\mathcal{N}_{h}^I})}^d - \|f^{1/d}\|_{\ell^d(\mathcal{C}_{\epsilon})}^d\\
&\le d \|f^{1/d}\|_{\ell^d({\mathcal{N}_{h}^I})}^{d-1}\big(\|f^{1/d}\|_{\ell^d({\mathcal{N}_{h}^I})} - \|f^{1/d}\|_{\ell^d(\mathcal{C}_{\epsilon})}\big)\\
&\le C_f \tau^{-1} \|e^{1/d}\|_{\ell^d(\mathcal{C}_{\epsilon})}.
\end{align*}\hfill
\end{proof}
\section{$W^2_p$-estimate of the method}\label{S:W21estimate}
To establish
$W^{2}_p$-estimates of the method,
we first introduce an estimate of the discrete $L_1$ norm of a nodal function
in terms of its level sets.
\begin{lemma}\label{lem:L1norm}
Let $s_h$ be a bounded nodal function with
$|s_h(x_i)| \leq M$ for some $M >0$.
Then, for any $\sigma>0$,
\[
\sum_{x_i\in {\mathcal{N}_{h}^I}} f_i |s_h(x_i)| \leq \sigma \sum_{k=0}^N \mu(A_k),
\]
where
\[
A_k := \{x_i \in {\mathcal{N}_{h}^I}:\ |s_h(x_i)| \geq k \sigma\},
\]
$\mu(\cdot)$ is given by \eqref{eqn:MuDef}, and $N = \ceil{M / \sigma}$.
\end{lemma}
\begin{proof}
The estimate is illustrated in Figure \ref{fig:L1norm}; here we give a rigorous proof.
Set
\begin{align*}
P_k:=\{x_i\in {\mathcal{N}_{h}^I}:\ k\sigma \le |s_h(x_i)|<(k+1)\sigma\}.
\end{align*}
Then we clearly have
\begin{align*}
\sum_{x_i\in {\mathcal{N}_{h}^I}} f_i |s_h(x_i)|
= \sum_{k=0}^N \sum_{x_i\in P_k} f_i |s_h(x_i)|
\le \sum_{k=0}^N (k+1)\sigma \mu(P_k).
\end{align*}
We also have
\begin{align*}
A_k = \bigcup_{m=k}^N P_m,
\end{align*}
and so, since the sets $\{P_k\}$ are disjoint,
\begin{align*}
\mu(A_k) = \sum_{m=k}^N \mu(P_m).
\end{align*}
Therefore
\begin{align*}
\sigma \sum_{k=0}^N \mu(A_k) = \sigma \sum_{k=0}^N \sum_{m=k}^N \mu(P_m) = \sigma \sum_{k=0}^N (k+1) \mu(P_k) \ge \sum_{x_i\in {\mathcal{N}_{h}^I}} f_i |s_h(x_i)|.
\end{align*}
\hfill
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw [gray, thin] (3,0) -- (11,0);
\draw [->] (3,0)--(3,4) ;
\draw [blue, very thick, domain=3:11] plot (\x, {-0.25* pow(0.5*\x-1.5, 2) * (0.5*\x-5.5) });
\draw [dashed] (11, 0.5) -- (3,0.5) ;
\draw [dashed] (11, 1) -- (3,1);
\draw [dashed] (9, 2.5) -- (3,2.5);
\draw [dashed] (4.6, 0.5) -- (4.6,-1);
\draw [dashed] (10.7, 0.5) -- (10.7,-1);
\draw [decorate,decoration={brace,raise=2mm,amplitude=6pt,mirror}] (4.6,-1) -- (10.7,-1) (7.7,-1.8) node {$A_1$};
\draw [dashed] (5.4, 1) -- (5.4,0);
\draw [dashed] (10.4, 1) -- (10.4,0);
\draw [decorate,decoration={brace,raise=2mm,amplitude=6pt,mirror}] (5.4,0) -- (10.4,0) (7.7,-.8) node {$A_2$};
\draw (11.2,2) [below left] node {$s_h(x)$};
\draw (3,0.5) [left] node {$\sigma$};
\draw (3,1) [left] node {$2\sigma$};
\draw (3,2.5) [left] node {$M = N \sigma$};
\draw (7, 3.5) [above] node
{
$
\sum_{x_i \in {\mathcal{N}_{h}^I}} f_i |s_h| \leq \sigma \sum_{k=0}^N \mu(A_k).
$
};
\end{tikzpicture}
\caption{
A pictorial illustration of Lemma \ref{lem:L1norm}. Here, the measure $\mu(A_k) := \sum_{x_i \in A_k} f_i$.
}\label{fig:L1norm}
\end{center}
\end{figure}
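Although it is not needed for the proof, the layer-cake bound of Lemma \ref{lem:L1norm} is easy to sanity-check numerically. Below is a minimal Python sketch, with randomly generated values standing in for the weights $f_i$ and the nodal values $s_h(x_i)$ (all names and parameters are illustrative):

```python
import math
import random

random.seed(0)

# Illustrative stand-ins for the nodal data: weights f_i > 0 and a
# bounded nodal function with |s_h(x_i)| <= M.
n, M, sigma = 200, 3.0, 0.25
f = [random.uniform(0.1, 1.0) for _ in range(n)]
s = [random.uniform(-M, M) for _ in range(n)]
N = math.ceil(M / sigma)

def mu(A):
    """mu(A) = sum of the weights f_i over an index set A."""
    return sum(f[i] for i in A)

# Level sets A_k = {i : |s_h(x_i)| >= k*sigma}; note A_0 is everything.
lhs = sum(fi * abs(si) for fi, si in zip(f, s))
rhs = sigma * sum(mu([i for i in range(n) if abs(s[i]) >= k * sigma])
                  for k in range(N + 1))
assert lhs <= rhs                          # the bound of the lemma
assert rhs <= lhs + sigma * mu(range(n))   # tight up to sigma * mu(A_0)
```

The second assertion reflects the chain of inequalities in the proof: the overestimate on each $P_k$ is at most $\sigma\mu(P_k)$, so the total slack is at most $\sigma\mu(A_0)$.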
\subsection{Ideal Case}\label{subsec:ideal}
Now we are ready to prove the estimate
in the case that the consistency
error \eqref{eqn:Wishful} holds for all
interior grid points.
\begin{theorem}\label{thm:C4case}
Let $u$ be the solution of the Monge-Amp\`ere equation \eqref{MA}. Assume that
\begin{align}\label{consistencyerror}
\big| |\partial N_h u (x_i)| - f_i \big| \leq C h^{2+d}
\quad
\mbox{ for every node $x_i \in {\mathcal{N}_{h}^I}$,}
\end{align}
where $N_h u$ is the interpolation of $u$ on the nodal set ${\mathcal{N}_{h}}$.
Assume further that $f$ is uniformly positive on $\Omega$.
Then the error in the weighted $W^2_p$-norm satisfies
\begin{align*}
\|N_h u - u_h\|_{W^2_p({\mathcal{N}_{h}^I})}
\leq C\left\{
\begin{array}{ll}
h^2 |\ln h| & \text{if $p=1$},\\
h^{2/p} & \text{if $p>1$}
\end{array}
\right.
\end{align*}
provided that $h$ is sufficiently small.
\end{theorem}
\begin{proof}
We start by setting $v_h = (1-Ch^2)^{1/d} N_h u$, where the constant
$C>0$ is large enough, but independent of $h$, to ensure that (cf.~\eqref{consistencyerror})
\begin{align*}
g_i := |\partial v_h(x_i)| = (1-C h^2)|\partial N_h u(x_i)|\le f_i.
\end{align*}
By a comparison principle (cf.~Lemma \ref{lem:discrete_compare}), we have $u_h\le v_h$ on ${\mathcal{N}_{h}^I}$, and we see
that
\begin{align}\label{eqn:IdealConsistencyError123}
|f_i - g_i|\le C h^{2+d}\qquad \forall x_i \in {\mathcal{N}_{h}^I}
\end{align}
due to the assumption \eqref{consistencyerror}.
We also have $g_i \ge C h^d$ provided $h$ is sufficiently small,
and $|(v_h-N_h u)(x_i)|\le C h^2$.
Note that
\[
\|N_h u - u_h\|_{W^2_p({\mathcal{N}_{h}^I})} \leq
\|v_h - u_h\|_{W^2_p({\mathcal{N}_{h}^I})} + C h^2 \|N_h u\|_{W^2_p({\mathcal{N}_{h}^I})}.
\]
Thus, to prove the theorem, it suffices to show that
\[
\sum_{x_i \in {\mathcal{N}_{h}^I}} f_i | \delta_e(v_h - u_h)(x_i) |^p
\leq C\left\{
\begin{array}{ll}
h^2 |\ln h| & \text{if $p=1$},\\
h^{2} & \text{if $p>1$}.
\end{array}
\right.
\]
Define the positive and negative parts
of $\delta_e(v_h-u_h)(x_i)$, respectively, as
\begin{align*}
\delta_e^+(v_h - u_h)(x_i) &= \max\{ \delta_e(v_h - u_h)(x_i), 0\},\\
\delta_e^-(v_h - u_h)(x_i) &= \max\{ -\delta_e(v_h - u_h)(x_i), 0\}.
\end{align*}
We shall prove
\begin{align*}
\sum_{x_i \in {\mathcal{N}_{h}^I}} f_i |\delta_e^+ (v_h - u_h)(x_i)|^p \leq C\left\{
\begin{array}{ll}
h^2 |\ln h| & \text{if $p=1$},\\
h^{2} & \text{if $p>1$}.
\end{array}
\right.
\end{align*}
The estimate for the negative part can be proved in a similar fashion.
Due to the regularity assumption on $u$, a Taylor expansion
shows that $|\delta_e v_h(x_i)|\le C_2$ for all $x_i\in {\mathcal{N}_{h}^I}$, where $C_2>0$ depends
on $\|u\|_{C^{1,1}(\bar\Omega)}$.
Moreover, from the $L_\infty$ error estimate (Proposition \ref{prop:LinftyEstimate})
and the assumption \eqref{nodalassumption} that interior nodes are at least $h/2$ away from the boundary,
we deduce that
\[
\delta_e^+(v_h - u_h)(x_i) \leq C_{\infty}\qquad \forall x_i\in {\mathcal{N}_{h}^I},
\]
where the constant $C_\infty>0$ depends on $\|u\|_{C^{3,1}(\bar\Omega)}$.
Let $\tau_k = C_2 k^{1/p} h^2$, and define the set
\[
A_k := \{ x_i \in {\mathcal{N}_{h}^I} :\ \delta^+_e (v_h - u_h)(x_i) \geq \tau_k \}.
\]
By Lemma \ref{lem:L1norm}
with $s_h(x_i) = |\delta_e^+(v_h-u_h)(x_i)|^p$, $\sigma = C^p_2 h^{2p}$,
and $M= C_\infty^p$, we obtain
\begin{align}\label{proof:W21sum}
\sum_{x_i\in {\mathcal{N}_{h}^I}} f_i |\delta_e^+ (v_h-u_h)(x_i)|^p
\le &\; C h^{2p} \left( \mu({\mathcal{N}_{h}^I}) + \sum_{k=1}^{ C h^{-2p}} \mu(A_k) \right).
\end{align}
We next estimate the measure $\mu(A_k)$.
Due to the relations of the second order difference and contact set given in Remark \ref{rmk:observation},
we have $A_k \subset S_{\epsilon_k} = {\mathcal{N}_{h}^I}\setminus \mathcal{C}_{\epsilon_k}$
with $\epsilon_k\in (0,1)$ satisfying $\tau_k = \epsilon_k/(1-\epsilon_k)$.
Therefore, by the estimate \eqref{eqn:measureestimate2} given
in Proposition \ref{prop:measureestimate},
\begin{align*}
\mu(A_k) \le
\mu(S_{\epsilon_k})\le \frac{C_f}{\tau_k} \|g^{1/d}-f^{1/d}\|_{\ell^d(\mathcal{C}_{\epsilon_k})}
= \frac{C_f}{k^{1/p} h^2} \|g^{1/d}-f^{1/d}\|_{\ell^d(\mathcal{C}_{\epsilon_k})}.
\end{align*}
From the concavity of $t\to t^{1/d}$, we have
$(t+\epsilon)^{1/d}-t^{1/d}\le d^{-1} t^{1/d-1} \epsilon$.
Setting $t = g_i$ and $\epsilon = f_i-g_i\ge 0$, we get
\begin{align*}
|f_i^{1/d}-g_i^{1/d}| = f_i^{1/d} - g_i^{1/d} \le d^{-1} g_i^{1/d-1} (f_i - g_i)\le Ch^3
\end{align*}
due to the consistency error \eqref{eqn:IdealConsistencyError123}
and the lower bound $g_i \ge Ch^d$. Consequently, we find that
\[
\norm{f^{1/d} - g^{1/d}}_{\ell^d(\mathcal{C}_{\epsilon_k})} \leq C h^2,
\]
and therefore $\mu(A_k)\le \frac{C}{k^{1/p}}$.
Applying this bound in \eqref{proof:W21sum}, we derive the estimate
\begin{align*}
\sum_{x_i\in {\mathcal{N}_{h}^I}} f_i |\delta_e^+ (u_h-v_h)(x_i)|^p \leq &\; C h^{2p} \sum_{k = 1}^{C h^{-2p}} \frac 1 {k^{1/p}}\le
C\left\{
\begin{array}{ll}
h^2 |\ln h| & \text{if $p=1$},\\
h^{2} & \text{if $p>1$}.
\end{array}
\right.
\end{align*}
This completes the proof.
\end{proof}
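The final summation bound in the proof can also be checked numerically. A short Python sketch (the constant $3$ and the sample values of $h$ are arbitrary choices, not claims from the paper):

```python
import math

# Check  h^{2p} * sum_{k=1}^{h^{-2p}} k^{-1/p}  <=  C h^2 |ln h|  (p = 1)
# and                                           <=  C h^2         (p > 1).
def lhs(h, p):
    K = int(h ** (-2 * p))
    return h ** (2 * p) * sum(k ** (-1.0 / p) for k in range(1, K + 1))

for h in (1 / 8, 1 / 16, 1 / 32):
    # p = 1: the harmonic sum grows like 2 |ln h|
    assert lhs(h, 1) <= 3 * h ** 2 * abs(math.log(h))
    # p = 2: sum_{k <= K} k^{-1/2} ~ 2 sqrt(K) = 2 h^{-2}
    assert lhs(h, 2) <= 3 * h ** 2
```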
\begin{remark}
It is worth mentioning that the assumption on the consistency error \eqref{consistencyerror} holds for nodes bounded away from the boundary $\partial \Omega$ provided that $u \in C^{3,1}(\Omega)$.
However, for nodes close to the boundary $\partial \Omega$, such an estimate holds only for structured domains, such as rectangles; see the first numerical experiment in Section \ref{S:numerics}. In general,
this estimate may fail:
Lemma \ref{lem:QuadraticConsistency}
shows that at nodes within distance $\mathcal{O}(h)$ of the boundary, the (relative) consistency error is only of order $\mathcal{O}(1)$.
In the following subsection, we take into account the lack of consistency
in the boundary layer.
\end{remark}
\subsection{Estimate on general domains}
To account for the boundary layer just described, we define the barrier nodal function
\[
b_h(x_i) =
\begin{cases}
-h^2 \quad &\text{if } x_i \in {\mathcal{N}_{h}^I},
\\
0 \quad &\text{if } x_i \in {\mathcal{N}_h^B},
\end{cases}
\]
which will be used to ``push down''
the graph of the nodal interpolant of $u$
and as such, develop error estimates
in a general setting.
\begin{theorem}
\label{thm:W2d}
Let $u \in C^{3,1}(\bar\Omega)$ be the solution of the Monge-Amp\`ere equation \eqref{MA} with $0 < \lambda I \leq D^2 u \leq \Lambda I$,
and assume that the nodal set ${\mathcal{N}_{h}^I}$ is translation invariant and that $f$ is uniformly positive on $\Omega$. Then the error
in the weighted $W^2_p$-norm satisfies
\[
\|N_h u - u_h\|_{W^2_p({\mathcal{N}_{h}^I})} \leq
C\begin{cases}
h^{1/p} \quad &\mbox{if $p > d$,}
\\
h^{1/d} \big(\ln\left(\frac 1 h \right)\big)^{1/d} \quad &\mbox{if $p \le d$,}
\end{cases}
\]
where $N_h u$ is the interpolation of $u$ on the nodal set ${\mathcal{N}_{h}}$ and the constant $C$
depends on $\|u\|_{C^{3,1}(\bar\Omega)}$, the dimension $d$, and the constant $p$.
\end{theorem}
\begin{proof}
We define the boundary layer:
\[
\Omega_h := \{ x_i \in {\mathcal{N}_{h}^I} :\ \textrm{dist}(x_i, \partial \Omega) \leq R h \},
\]
where the constant $R$ is the constant in the consistency error, Lemma \ref{lem:QuadraticConsistency}, which depends on the ellipticity constants $\lambda$ and $\Lambda$ of $D^2 u$.
We set
\[
v_h = N_h u - Cb_h,\qquad g_i = |\partial v_h(x_i)|,
\]
where the constant $C>0$ is sufficiently
large so that $u_h \le v_h$; see
Proposition \ref{prop:LinftyEstimate}.
It is clear from the definition of $b_h$
that
\[
|\partial v_h(x_i)| = |\partial N_h u(x_i)| \quad \mbox{for any $x_i \in {\mathcal{N}_{h}^I} \setminus \Omega_h$}
\]
and
\[
|\partial N_h u(x_i)| \geq |\partial v_h(x_i)| \geq 0 \quad \mbox{for any $x_i \in \Omega_h$}.
\]
This implies that
$|f_i-g_i|\le C h^{2+d}$ in ${\mathcal{N}_{h}^I}\setminus \Omega_h$
and $|f_i-g_i|\le C h^d$ in $\Omega_h$.
We have that $|\delta_e v_h(x_i)|\le C_2$
and $|\delta_e(v_h-u_h)(x_i)|\le C_\infty$
for all $x_i\in {\mathcal{N}_{h}^I}$.
As in Theorem \ref{thm:C4case}, we shall prove the estimate for the positive part:
\begin{align*}
\sum_{x_i \in {\mathcal{N}_{h}^I}} f_i \left( \delta_e^+ (v_h - u_h)(x_i) \right)^p
\leq
\begin{cases}
C h \quad &\mbox{if $p > d$,}
\\
C h \ln\left(\frac 1 h \right) \quad &\mbox{if $p = d$.}
\end{cases}
\end{align*}
The estimate for the negative part can be proved in a similar fashion.
Also note that the estimate for $p < d$ follows from the case $p = d$ and H\"older's inequality:
\[
\|N_h u - u_h\|_{W^2_p({\mathcal{N}_{h}^I})} \leq C_{\mu} \|N_h u - u_h\|_{W^2_d({\mathcal{N}_{h}^I})} \quad \mbox{where } C_\mu := \mu({\mathcal{N}_{h}^I})^{1/p - 1/d} .
\]
We set $\tau_k = C_2 k^{1/p} h$
and define the set
\[
A_k := \{ x_i \in {\mathcal{N}_{h}^I} :\ \delta^+_e (v_h - u_h)(x_i) \geq \tau_k \}.
\]
Then, by similar arguments as in Theorem \ref{thm:C4case},
we find by Lemma \ref{lem:L1norm} that
\begin{align}
\label{proof:W2d}
\sum_{x_i\in {\mathcal{N}_{h}^I}} f_i \left( \delta_e^+ (v_h-u_h)(x_i) \right)^p
\le &\; C_2 h^p \left( \mu({\mathcal{N}_{h}^I}) + \sum_{k=1}^{ h^{-p}} \mu(A_k) \right).
\end{align}
To estimate $\mu(A_k)$, we note that $A_k \subset S_{\epsilon_k} = {\mathcal{N}_{h}^I} \setminus \mathcal{C}_{\epsilon_k}$
with $\tau_k = \epsilon_k/(1 - \epsilon_k)$.
Invoking the estimate of the measure of the non-contact set $S_{\epsilon}$
stated in Proposition \ref{prop:measureestimate}, we obtain
\[
\mu(A_k) \leq \mu (S_{\epsilon_k}) \leq \nu_{\tau_k}(\mathcal{C}_{\epsilon_k}) - \mu(\mathcal{C}_{\epsilon_k}).
\]
We then
divide the estimate of $\nu_{\tau_k}(\mathcal{C}_{\epsilon_k}) - \mu(\mathcal{C}_{\epsilon_k}) $ into two parts:
\begin{align*}
\nu_{\tau_k}(\mathcal{C}_{\epsilon_k}) - \mu(\mathcal{C}_{\epsilon_k}) =
&\;
\sum_{x_i \in \mathcal{C}_{\epsilon_k}}\left[
\left( f_i^{1/d} + \frac 1 {\tau_k} e_i^{1/d} \right)^d - f_i\right]
\\
= &\;
\left(
\sum_{x_i \in \mathcal{C}_{\epsilon_k} \cap \Omega_{h}} + \sum_{x_i \in \mathcal{C}_{\epsilon_k} \setminus \Omega_{h}}
\right)
\left[\left( f_i^{1/d} + \frac 1 {\tau_k} e_i^{1/d} \right)^d - f_i\right],
\end{align*}
where we recall that $e_i^{1/d} = f_i^{1/d}-g_i^{1/d}$.
Since $f_i^{1/d} = \mathcal{O}(h)$ and $g_i^{1/d}= \mathcal{O}(h)$, we have
\begin{align*}
\left|\left( f_i^{1/d} + \frac 1 {\tau_k} e_i^{1/d} \right)^d - f_i\right|
&\le \frac{d}{\tau_k} \max\{\big| f_i^{1/d} + \frac{1}{\tau_k} e_i^{1/d}\big|,f_i^{1/d}\}^{d-1} |e_i^{1/d}|\\
&\le \frac{C h^{d-1}}{\tau_k^d} |e_i^{1/d}|.
\end{align*}
In the set $\mathcal{C}_{\epsilon_k} \cap \Omega_{h}$, the consistency error satisfies $|e_i^{1/d}| = \mathcal{O}(h)$;
see Lemma \ref{lem:QuadraticConsistency}. Therefore, we have
\[
\left|
\left( f_i^{1/d} + \frac 1 {\tau_k} e_i^{1/d} \right)^d - f_i
\right|
\leq \frac{C h^d} {\tau_k^d}\qquad \forall x_i \in \mathcal{C}_{\epsilon_k}\cap \Omega_h.
\]
On the other hand,
in the set $\mathcal{C}_{\epsilon_k} \setminus \Omega_{h}$, we conclude
as in Theorem \ref{thm:C4case}, that $|e_{i}^{1/d}| = \mathcal{O}(h^3)$, and
\[
\left|
\left( f_i^{1/d} + \frac 1 {\tau_k} e_i^{1/d} \right)^d - f_i
\right|
\leq \frac{C h^{2+d}} {\tau_k^d}.
\]
Combining both estimates and applying the fact that $\# (\mathcal{C}_{\epsilon_k}\cap \Omega_h) \leq C h^{1-d}$
and $\# (\mathcal{C}_{\epsilon_k} \setminus \Omega_h)\le Ch^{-d}$, we obtain
\[
\nu_{\tau_k}(\mathcal{C}_{\epsilon_k}) - \mu(\mathcal{C}_{\epsilon_k})
\leq
\frac{C h}{\tau_k^d} + \frac{C h^2}{\tau_k^d}
\leq
\frac{C h}{\tau_k^d}
\]
because $h \leq 1$.
Hence, we conclude that
\[
\mu(A_k) \leq \frac{C h}{\tau_k^d} .
\]
Applying this estimate to \eqref{proof:W2d}, we arrive at
\[
\sum_{x_i \in {\mathcal{N}_{h}^I}} f_i |\delta_e^+ (v_h - u_h)(x_i)|^p \leq C_2 h^p \sum_{k = 1}^{h^{-p}} \frac{h}{h^d k^{d/p}}.
\]
Since
\[
\sum_{k = 1}^{h^{-p}} \frac{1}{k^{d/p}} \leq
\begin{cases}
C(d, p) h^{d - p} \quad &\mbox{if $p> d$,}
\\
C \ln \left(\frac 1 h \right) \quad &\mbox{if $p = d$,}
\end{cases}
\]
we conclude that
\[
\sum_{x_i \in {\mathcal{N}_{h}^I}} f_i |\delta_e^+ (v_h - u_h)(x_i)|^p
\leq
\begin{cases}
C h \quad &\mbox{if $p>d$,}
\\
C h \ln\left(\frac 1 h \right) \quad &\mbox{if $p = d$.}
\end{cases}
\]
Finally, we note that by H\"older's inequality, for $p<d$ there holds
\begin{align*}
\|v_h\|_{W^2_p({\mathcal{N}_{h}^I})}
&= \Big(\sum_{x_i\in {\mathcal{N}_{h}^I}} f_i |\delta_e v_h(x_i)|^p\Big)^{1/p}\\
&\le \Big(\sum_{x_i\in {\mathcal{N}_{h}^I}} f_i |\delta_e v_h(x_i)|^d\Big)^{1/d}\Big( \sum_{x_i\in {\mathcal{N}_{h}^I}} f_i \Big)^{(d-p)/(dp)}\le C\|v_h\|_{W^2_d({\mathcal{N}_{h}^I})}.
\end{align*}
This completes the proof.
\end{proof}
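As in Theorem \ref{thm:C4case}, the concluding summation bound can be sanity-checked numerically; a short Python sketch for $d=2$ (the constant $3$ and the sample values of $h$ and $p$ are arbitrary choices):

```python
import math

d = 2

# Check  sum_{k=1}^{h^{-p}} k^{-d/p}  <=  C(d,p) h^{d-p}  if p > d,
# and    sum_{k=1}^{h^{-d}} k^{-1}    <=  C ln(1/h)       if p = d.
def S(h, p):
    K = int(h ** (-p))
    return sum(k ** (-d / p) for k in range(1, K + 1))

for h in (1 / 8, 1 / 16, 1 / 32):
    assert S(h, 4) <= 3 * h ** (d - 4)     # a case with p > d
    assert S(h, d) <= 3 * math.log(1 / h)  # the case p = d
```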
\section{Numerical experiments}\label{S:numerics}
In this section, we present numerical examples that illustrate the accuracy of the method
and compare the results with the theory.
In the tests, we replace the homogeneous boundary condition \eqref{MA2}
with $u=g$ on $\partial \Omega$. The theoretical results
developed in the previous sections can be applied to this
slightly more general problem with minor modifications.
We consider three different test problems, each reflecting different scenarios of regularity.
Each set of problems is performed in two dimensions ($d=2$), and errors are reported
in the (discrete) $L_\infty$, $H^1$, $W_1^2$, and $W^2_2$ norms. Here,
a nine-point stencil is used in the definition of the $W^2_p$ norms
with $e_1 = (1,0)$, $e_2 = (0,1)$, $e_3 =(1,1)$ and $e_4 = (1, -1)$. That is,
with an abuse of notation, we set
\[
\|v\|_{W^2_p({\mathcal{N}_{h}^I})}^p = \sum_{j=1}^{4} \sum_{x_i \in {\mathcal{N}_{h}^I}} |\delta_{e_j}v(x_i)|^p.
\]
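For concreteness, this (unweighted) discrete $W^2_p$ norm can be computed as follows. The sketch below is a minimal numpy implementation on a uniform grid; the normalization $\delta_e v(x) = \big(v(x+he)-2v(x)+v(x-he)\big)/|he|^2$, the grid, and the quadratic test function are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def W2p_norm(v, h, p):
    """Discrete W^2_p norm of nodal values v on a uniform grid of spacing h,
    using second differences in the four stencil directions
    e1=(1,0), e2=(0,1), e3=(1,1), e4=(1,-1)."""
    total = 0.0
    for di, dj in [(1, 0), (0, 1), (1, 1), (1, -1)]:
        he2 = h * h * (di * di + dj * dj)   # |h e|^2
        # delta_e v(x) = (v(x + he) - 2 v(x) + v(x - he)) / |he|^2
        c = v[1:-1, 1:-1]
        fwd = v[1 + di:v.shape[0] - 1 + di, 1 + dj:v.shape[1] - 1 + dj]
        bwd = v[1 - di:v.shape[0] - 1 - di, 1 - dj:v.shape[1] - 1 - dj]
        total += np.sum(np.abs((fwd - 2 * c + bwd) / he2) ** p)
    return total ** (1.0 / p)

# Illustrative check: for the quadratic v = (x^2 + y^2)/2 (Hessian = I),
# each second difference equals e . I e / |e|^2 = 1 at every interior node.
h = 0.1
x = np.arange(0, 1 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
v = 0.5 * (X ** 2 + Y ** 2)
n_int = (len(x) - 2) ** 2               # number of interior nodes
assert abs(W2p_norm(v, h, 2) - (4 * n_int) ** 0.5) < 1e-8
```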
As explained in \cite{NochettoZhang17} and in Section \ref{sec:Method},
a convex nodal function induces a triangulation of $\Omega$
whose set of vertices corresponds to ${\mathcal{N}_{h}}$.
For a computed solution $u_h$, we associate with it
a piecewise linear polynomial on the induced mesh, which we still
denote by $u_h$, and use the quantity $\|u-u_h\|_{H^1(\Omega)}$
to denote the $H^1$ error in the experiments below.
A summary of the theoretical results in Sections \ref{sub:BM} and \ref{S:W21estimate} when $d=2$ is
\begin{align*}
\|N_h u-u_h\|_{L_\infty({\mathcal{N}_{h}^I})} = \mathcal{O}(h^2),\qquad
\|N_h u-u_h\|_{W^2_p({\mathcal{N}_{h}^I})} = \mathcal{O}(h^{1/2-\epsilon}) ,\ p=1,2
\end{align*}
for any $\epsilon>0$,
provided that $u \in C^{3,1}(\bar\Omega)$.
\subsection*{Example I: Smooth Solution $u\in C^\infty(\bar\Omega)$}
We consider the example
\begin{align}
\label{eqn:SmoothSolnExample}
u(x,y) = e^{\frac{x^2 + y^2}2},\quad f(x,y) = (1 + x^2 + y^2) e^{x^2 + y^2},\quad \text{and }\Omega = (-1,1)^2,
\end{align}
and list the resulting errors and rates of the scheme in Table \ref{Table:Example1}.
The table clearly shows that the errors decay at rate $\mathcal{O}(h^2)$
in all norms. This behavior matches the theoretical results
of Proposition \ref{prop:LinftyEstimate}, but indicates
that the $W^2_p$ estimates stated in Theorem \ref{thm:W2d} are not sharp.
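The data of Example I can be verified symbolically; a short sympy check that $f = \det D^2 u$ for \eqref{eqn:SmoothSolnExample}:

```python
import sympy as sp

# Verify the data of Example I: for u = exp((x^2 + y^2)/2), the
# Monge-Ampere operator det D^2 u equals (1 + x^2 + y^2) exp(x^2 + y^2).
x, y = sp.symbols("x y")
u = sp.exp((x ** 2 + y ** 2) / 2)
f = sp.det(sp.hessian(u, (x, y)))
assert sp.simplify(f - (1 + x ** 2 + y ** 2) * sp.exp(x ** 2 + y ** 2)) == 0
```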
\begin{table}[h]
\begin{center}
\begin{tabular}[t]{ | l | c | c | c | c | c | c | c | c |}
\hline
$h$ & $L_{\infty}$ & rate & $H^1$ & rate & $W^2_1$ & rate & $W^2_2$ & rate \\
\hline
1 & 1.12e-01 & 0.00 & 2.24e-01 & 0.00 & 4.49e-01 & 0.00 & 1.44e+01 & 0.00
\\
\hline
1/2 & 4.78e-02 & 1.23 & 1.35e-01 & 0.73 & 6.02e-01 & -0.42 & 4.24e-01 & 5.08
\\
\hline
1/4 & 1.37e-02 & 1.80 & 4.35e-02 & 1.63 & 2.94e-01 & 1.03 & 1.93e-01 & 1.13
\\
\hline
1/8 & 3.55e-03 & 1.95 & 1.16e-02 & 1.91 & 9.93e-02 & 1.57 & 6.34e-02 & 1.61
\\
\hline
1/16 & 8.96e-04 & 1.99 & 2.94e-03 & 1.98 & 2.86e-02 & 1.80 & 1.80e-02 & 1.82
\\
\hline
1/32 & 2.24e-04 & 2.00 & 7.39e-04 & 1.99 & 7.66e-03 & 1.90 & 4.79e-03 & 1.91
\\
\hline
1/64 & 5.61e-05 & 2.00 & 1.85e-04 & 2.00 & 1.98e-03 & 1.95 & 1.24e-03 & 1.95
\\
\hline
\end{tabular}
\caption{Rate of convergence for a smooth solution (Example I).}\label{Table:Example1}
\end{center}
\end{table}
\subsection*{Example II: Piecewise Smooth Solution $u \in W^{2}_{\infty}$}
In this example, the domain is
$\Omega= (-1,1)^2$, and the exact solution
and data are taken to be
\begin{align*}
u(x) =& \left\{
\begin{array}{ll}
2 |x|^2 \quad &\text{in $|x| \leq 1/2$},
\\
2(|x| - 1/2)^2 + 2 |x|^2 \quad &\text{in $ 1/2 \leq |x| $},
\end{array} \right.
\\
f(x) =& \left\{
\begin{array}{ll}
16 \quad \quad &\text{in $|x| \leq 1/2$},
\\
64 - 16 |x|^{-1} \quad \qquad\hspace{0.59cm}&\text{in $ 1/2 \leq |x| $}.
\end{array} \right.
\end{align*}
A simple calculation shows that $u \in C^{1,1}(\bar\Omega)$ and $u \in C^4(\Omega \setminus \partial B_{1/2})$,
but $u\not\in C^2(\bar\Omega)$.
The errors and rates of convergence are given in Table \ref{Table:Example2}.
The table shows that, while all errors tend to zero as the mesh is refined,
the rates of convergence in the $L_\infty$ and $W^2_1$ norms are less clear-cut than in the previous set of experiments.
Nonetheless, while Theorem \ref{thm:W2d} assumes more regularity
of the exact solution, we do observe a convergence rate of approximately $\mathcal{O}(h^{1/2})$
in the $W^2_2$ norm, as stated in the theorem.
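The data of Example II can also be verified symbolically. For a smooth radial function $u(r)$ in two dimensions, $\det D^2 u = u''(r)\,u'(r)/r$; the following sympy sketch checks both branches and the $C^1$ matching at $r = 1/2$:

```python
import sympy as sp

# Verify the data of Example II via the radial Monge-Ampere operator
# det D^2 u = u''(r) * u'(r) / r in two dimensions.
r = sp.symbols("r", positive=True)
u_in = 2 * r ** 2                                       # |x| <= 1/2
u_out = 2 * (r - sp.Rational(1, 2)) ** 2 + 2 * r ** 2   # |x| >= 1/2

def ma(u):
    return sp.simplify(sp.diff(u, r, 2) * sp.diff(u, r) / r)

assert ma(u_in) == 16
assert sp.simplify(ma(u_out) - (64 - 16 / r)) == 0
# C^{1,1} matching across r = 1/2: values and first derivatives agree.
half = sp.Rational(1, 2)
assert u_in.subs(r, half) == u_out.subs(r, half)
assert sp.diff(u_in, r).subs(r, half) == sp.diff(u_out, r).subs(r, half)
```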
\begin{table}[h]
\begin{center}
\begin{tabular}[t]{ | l | c | c | c | c | c | c | c | c |}
\hline
$h$ & $L_{\infty}$ & rate & $H^1$ & rate & $W^2_1$ & rate & $W^2_2$ & rate \\
\hline
1 & 4.02e-01 & 0.00 & 8.04e-01 & 0.00 & 1.61 & 0.00 & 1.61 & 0.00
\\
\hline
1/2 & 4.19e-02 & 3.26 & 1.30e-01 & 2.63 & 6.08e-01 & 1.40 & 5.39e-01 & 1.58
\\
\hline
1/4 & 2.89e-02 & 0.53 & 6.84e-02 & 0.92 & 6.46e-01 & -0.09 & 5.54e-01 & -0.04
\\
\hline
1/8 & 1.27e-02 & 1.18 & 3.50e-02 & 0.97 & 5.14e-01 & 0.33 & 4.54e-01 & 0.29
\\
\hline
1/16 & 4.58e-03 & 1.47 & 1.38e-02 & 1.34 & 2.76e-01 & 0.90 & 3.15e-01 & 0.53
\\
\hline
1/32 & 8.02e-04 & 2.51 & 3.59e-03 & 1.94 & 1.08e-01 & 1.35 & 2.08e-01 & 0.60
\\
\hline
1/64 & 4.33e-04 & 0.89 & 1.50e-03 & 1.26 & 6.36e-02 & 0.77 & 1.56e-01 & 0.42
\\
\hline
\end{tabular}
\caption{Rate of convergence of piecewise smooth viscosity solution (Example II).}\label{Table:Example2}
\end{center}
\end{table}
\subsection*{Example III: Singular Solution $u\in W^2_p$ with $p < 2$}
In the last series of experiments, the domain is $\Omega = (-1,1)^2$,
and the solution and data are
\begin{align*}
u(x) & = \left\{
\begin{array}{ll}
x^4 + \frac 32 y^2/x^2 \quad &\text{in $|y| \leq |x|^3$},
\\
\frac 12 x^2 y^{2/3} + 2 y^{4/3} \quad &\text{in $ |y| \geq |x|^3 $},
\end{array} \right.\\
f(x) =& \left\{
\begin{array}{ll}
36-9y^2/x^6 \quad\ \hspace{1cm} &\text{in $|y|\leq |x|^3$},\\
\frac89 - \frac59 x^2/y^{2/3} \quad &\text{in $|y|>|x|^3$}.
\end{array}
\right.
\end{align*}
This example is constructed in \cite{Wang95} to show that $u$ may fail to belong to $W^2_p$ for large $p$ when $f$ is discontinuous.
The errors
of the method for this problem
are listed in Table \ref{Table:Example3}.
Because the exact solution does not
enjoy $W^2_2$ regularity,
it is not expected that the discrete solution will converge
in the discrete $W^2_2$ norm, and this is observed
in the table.
However, we do observe
convergence in the $L_\infty$, $H^1$, and $W^2_1$ norms
with approximate rates
$\|N_h u- u_h\|_{L_\infty({\mathcal{N}_{h}^I})} = \mathcal{O}(h^{4/3})$,
$\|N_h u-u_h\|_{H^1({\mathcal{N}_{h}^I})} = \mathcal{O}(h)$,
and $\|N_h u-u_h\|_{W^2_1({\mathcal{N}_{h}^I})} = \mathcal{O}(h^{1/2})$.
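As for the previous examples, the data of Example III can be verified symbolically; a short sympy check of $\det D^2 u = f$ on both pieces (restricting to the quadrant $x, y > 0$ for clean handling of the fractional powers):

```python
import sympy as sp

# Verify the data of Example III on both pieces of the solution.
x, y = sp.symbols("x y", positive=True)

u1 = x ** 4 + sp.Rational(3, 2) * y ** 2 / x ** 2          # |y| <= |x|^3
u2 = (sp.Rational(1, 2) * x ** 2 * y ** sp.Rational(2, 3)
      + 2 * y ** sp.Rational(4, 3))                        # |y| >= |x|^3

def ma(u):
    return sp.simplify(sp.det(sp.hessian(u, (x, y))))

assert sp.simplify(ma(u1) - (36 - 9 * y ** 2 / x ** 6)) == 0
assert sp.simplify(ma(u2) - (sp.Rational(8, 9)
                             - sp.Rational(5, 9) * x ** 2
                               / y ** sp.Rational(2, 3))) == 0
```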
\begin{table}[h]
\begin{center}
\begin{tabular}[t]{ | l | c | c | c | c | c | c | c | c |}
\hline
$h$ & $L_{\infty}$ & rate & $H^1$ & rate & $W^2_1$ & rate & $W^2_2$ & rate \\
\hline
1 & 8.36e-01 & 0.00 & 1.67 & 0.00 & 3.35 & 0.00 & 3.35 & 0.00
\\
\hline
1/2 & 2.34e-01 & 1.84 & 9.11e-01 & 0.88 & 5.48 & -0.71 & 3.94 & -0.24
\\
\hline
1/4 & 1.86e-01 & 0.33 & 4.80e-01 & 0.92 & 4.90 & 0.16 & 4.02 & -0.03
\\
\hline
1/8 & 8.52e-02 & 1.13 & 2.41e-01 & 1.00 & 4.00 & 0.29 & 3.94 & 0.03
\\
\hline
1/16 & 3.41e-02 & 1.32 & 1.02e-01 & 1.24 & 2.38 & 0.75 & 3.33 & 0.24
\\
\hline
1/32 & 1.35e-02 & 1.34 & 4.79e-02 & 1.09 & 1.59 & 0.58 & 3.17 & 0.07
\\
\hline
\end{tabular}
\caption{Rate of convergence of $W^2_p$ solution with $p<2$ (Example III).}\label{Table:Example3}
\end{center}
\end{table}
arXiv:1712.02492 (math.NA, 2017-12-08): ``Rates of convergence in $W^2_p$-norm for the Monge-Ampère equation,'' https://arxiv.org/abs/1712.02492.

Abstract: We develop discrete $W^2_p$-norm error estimates for the Oliker-Prussner method applied to the Monge-Ampère equation. This is obtained by extending discrete Alexandroff estimates and showing that the contact set of a nodal function contains information on its second order difference. In addition, we show that the size of the complement of the contact set is controlled by the consistency of the method. Combining both observations, we show the error estimate $\|u - u_h\|_{W^2_p} \leq C h^{1/p}$ if $p > d$ and $\|u - u_h\|_{W^2_p} \leq C h^{1/d} \big(\ln\left(\frac 1 h \right)\big)^{1/d}$ if $p \leq d$. Here the constant $C$ depends on $\|u\|_{C^{3,1}(\bar\Omega)}$, the dimension $d$, and the constant $p$. Numerical examples are given in two space dimensions and confirm that the estimate is sharp in several cases.
https://arxiv.org/abs/2004.02837

Near-linear convergence of the Random Osborne algorithm for Matrix Balancing

Abstract: We revisit Matrix Balancing, a pre-conditioning task used ubiquitously for computing eigenvalues and matrix exponentials. Since 1960, Osborne's algorithm has been the practitioners' algorithm of choice and is now implemented in most numerical software packages. However, its theoretical properties are not well understood. Here, we show that a simple random variant of Osborne's algorithm converges in near-linear time in the input sparsity. Specifically, it balances $K\in\mathbb{R}_{\geq 0}^{n\times n}$ after $O(m\epsilon^{-2}\log\kappa)$ arithmetic operations, where $m$ is the number of nonzeros in $K$, $\epsilon$ is the $\ell_1$ accuracy, and $\kappa=\sum_{ij}K_{ij}/(\min_{ij:K_{ij}\neq 0}K_{ij})$ measures the conditioning of $K$. Previous work had established near-linear runtimes either only for $\ell_2$ accuracy (a weaker criterion which is less relevant for applications), or through an entirely different algorithm based on (currently) impractical Laplacian solvers. We further show that if the graph with adjacency matrix $K$ is moderately connected--e.g., if $K$ has at least one positive row/column pair--then Osborne's algorithm initially converges exponentially fast, yielding an improved runtime $O(m\epsilon^{-1}\log\kappa)$. We also address numerical precision by showing that these runtime bounds still hold when using $O(\log(n\kappa/\epsilon))$-bit numbers. Our results are established through an intuitive potential argument that leverages a convex optimization perspective of Osborne's algorithm, and relates the per-iteration progress to the current imbalance as measured in Hellinger distance. Unlike previous analyses, we critically exploit log-convexity of the potential. Our analysis extends to other variants of Osborne's algorithm: along the way, we establish significantly improved runtime bounds for cyclic, greedy, and parallelized variants.
| \section*{Acknowledgements.}
JA thanks Enric Boix-Adsera and Jonathan Niles-Weed for helpful conversations.
\section{Introduction}\label{sec:intro}
Let $\mathbf{1}$ denote the all-ones vector in $\R^n$. A nonnegative square matrix $A \in \R_{\geq 0}^{n \times n}$ is said to be \emph{balanced} if its row sums $r(A) := A\mathbf{1}$ equal its column sums $c(A) := A^T\mathbf{1}$, i.e.
\begin{align}
r(A) = c(A).
\label{eq-def:balance}
\end{align}
This paper revisits the classical problem of \emph{Matrix Balancing}---sometimes also called \textit{diagonal similarity scaling} or \textit{line-sum-symmetric scaling}---which asks: given a nonnegative matrix $K \in \R_{\geq 0}^{n \times n}$, find a positive diagonal matrix $D$ (if one exists\footnote{$K$ can be balanced if and only if $K$ is irreducible~\citep{EavHofRotSch85}. This can be efficiently checked in linear time~\citep{TarjanStronglyConnected}.
}) such that $A := DKD^{-1}$ is balanced.
\par Matrix Balancing is a fundamental problem in numerical linear algebra, scientific computing, and theoretical computer science with many applications and an extensive literature dating back to 1960. A particularly celebrated application of Matrix Balancing is pre-conditioning matrices before linear algebraic computations such as eigenvalue decomposition~\citep{Osborne60,ParRei69} and matrix exponentiation~\citep{Ward77,Higham05}. The point is that performing these linear algebra tasks on a balanced matrix can drastically improve numerical stability and readily recovers the desired answer on the original matrix~\citep{Osborne60}. Moreover, in practice, the runtime of (approximate) Matrix Balancing is essentially negligible compared to the runtime of these downstream tasks~\citep[\S11.6.1]{PreTeuVetFla07}.
The ubiquity of these applications has led to the implementation of Matrix Balancing in most linear algebra software packages, including EISPACK~\citep{Eispack}, LAPACK~\citep{Lapack}, R~\citep{Rbal}, and MATLAB~\citep{MATLABbal}.
In fact, Matrix Balancing is performed by default in the command for eigenvalue decomposition in MATLAB~\citep{MATLABeig} and in the command for matrix exponentiation in R~\citep{Rexpm}. Matrix Balancing also has other diverse applications, including in economics~\citep{SchZen90}, and as the key subroutine for fast approximation algorithms for the Min-Mean-Cycle problem~\citep{AltPar20mmc}.
\par In practice, Matrix Balancing is performed approximately rather than exactly, since this can be done efficiently and typically suffices for applications. Specifically, in the \emph{approximate Matrix Balancing problem}, the goal is to compute a scaling $A := DKD^{-1}$ that is $\varepsilon$\emph{-balanced} in the $\ell_1$ sense, i.e.,
\begin{align}
\frac{\|r(A) - c(A)\|_1}{\sum_{ij} A_{ij}} \leq \varepsilon.
\label{eq-def:balance-approx}
\end{align}
\begin{remark}[$\ell_1$ versus $\ell_2$ imbalance]
Some papers~\citep{KalKhaSho97,OstRabYou17} study approximate Matrix Balancing with $\ell_2$ norm imbalance---rather than $\ell_1$ as done here in~\eqref{eq-def:balance-approx} and in e.g.,~\citep{NemRot99}---for what appears to be essentially historical reasons.
Here, we focus solely on the $\ell_1$ imbalance as it appears to be more useful for applications---e.g., it is critical for near-linear time approximation of the Min-Mean-Cycle problem~\citep{AltPar20mmc}---in large part due to its natural interpretations in both probabilistic problems (as total variation imbalance) and graph theoretic problems (as netflow imbalance)~\citep[Remarks 2.1 and 5.8]{AltPar20mmc}.\footnote{The analogous observation has also been made for the intimately related problem of Matrix Scaling. For example, the $\ell_1$ norm is pivotal there for applications including Optimal Transport~\citep{AltWeeRig17} and Bipartite Perfect Matching~\citep{ChaKha18}.}
Note also that the approximate balancing criterion~\eqref{eq-def:balance-approx} is significantly easier to achieve\footnote{
As a simple concrete example, let $n$ be even and consider
the $n \times n$ matrix $A$ which is $0$ everywhere except is the identity on the top right $n/2 \times n/2$ block.
Note that $r(A)/\sum_{ij} A_{ij} = [\tfrac{2}{n}\mathbf{1}_{n/2}, \mathbf{0}_{n/2}]^T$ and $c(A)/\sum_{ij} A_{ij} = [\mathbf{0}_{n/2}, \tfrac{2}{n}\mathbf{1}_{n/2}]^T$.
Thus $A$ is \emph{as unbalanced as possible} in $\ell_1$ norm since $\|r(A) - c(A)\|_1/\sum_{ij}A_{ij} = 2$; however, $A$ is very well balanced in $\ell_2$ norm since $\|r(A) - c(A)\|_2 / \sum_{ij}A_{ij} = 2/\sqrt{n}$ is vanishingly small.
} for $\ell_2$ imbalance than $\ell_1$: in fact, any matrix can be balanced to constant $\ell_2$ error by only rescaling a vanishing $1/n$ fraction of the entries~\citep{OstRabYou17}, whereas this is impossible for the $\ell_1$ norm.
(Note that this issue of which norm to measure imbalance should \emph{not} be confused with the $\ell_p$ Matrix Balancing problem, see Remark~\ref{rem:lp-bal}.)
\end{remark}
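The footnote's example is easy to verify directly; a small numpy computation (the even size $n = 100$ is an arbitrary choice):

```python
import numpy as np

# The footnote's matrix: zero everywhere except an identity block in the
# top-right n/2 x n/2 corner. It is maximally unbalanced in l1, yet its
# l2 imbalance 2/sqrt(n) vanishes as n grows.
n = 100
A = np.zeros((n, n))
A[: n // 2, n // 2:] = np.eye(n // 2)

r, c, s = A.sum(axis=1), A.sum(axis=0), A.sum()
assert np.isclose(np.abs(r - c).sum() / s, 2.0)               # l1: worst case
assert np.isclose(np.linalg.norm(r - c) / s, 2 / np.sqrt(n))  # l2: vanishing
```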
\subsection{Previous algorithms}\label{ssec:intro:prev}
The many applications of Matrix Balancing have motivated an extensive literature focused on solving it efficiently.
However, there is still a large gap between theory and practice, and several key issues remain. We overview the relevant previous results below.
\subsubsection{Practical state-of-the-art}
Ever since its invention in 1960, \emph{Osborne's algorithm} has been the algorithm of choice for practitioners~\citep{Osborne60,ParRei69}. Osborne's algorithm is a simple iterative algorithm which initializes $D$ to the identity (i.e., no balancing), and then in each iteration performs an \emph{Osborne update} on some update coordinate $k \in [n]$, in which $D_{kk}$ is updated to $\sqrt{c_k(A)/r_k(A)} D_{kk}$ so that the $k$-th row sum $r_k(A)$ and $k$-th column sum $c_k(A)$ of the current balancing $A = DKD^{-1}$ agree. The classical version of Osborne's algorithm, henceforth called \emph{Cyclic Osborne}, chooses the update coordinates by repeatedly cycling through $\{1,\dots, n\}$, either in round-robin order or using an independent random permutation for each cycle's order.
This algorithm\footnote{To be precise, following~\citep{ParRei69}, some implementations have two minor modifications: a pre-processing step where $K$ is permuted to a triangular block matrix with irreducible diagonal blocks; and a restriction of the entries of $D$ to exact powers of the radix base. We presently ignore these minor modifications since the former is easily performed in linear-time~\citep{TarjanStronglyConnected}, and the latter is solely to safeguard against numerical precision issues in practice.} performs remarkably well in practice and is the implementation of choice in most linear algebra software packages.
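For concreteness, here is a minimal NumPy sketch of Cyclic Osborne as just described (all names and defaults are ours; it recomputes the dense scaling at each update for readability, rather than maintaining row and column sums as an efficient implementation would):

```python
import numpy as np

def osborne_cyclic(K, eps=1e-2, max_cycles=1000, rng=None):
    """Sketch of Cyclic Osborne (random cycle order). Returns x such that
    D = diag(exp(x)) makes A = D K D^{-1} eps-balanced in the normalized
    l1 sense: ||r(A) - c(A)||_1 <= eps * sum(A).

    Recomputing the dense scaling at every update costs O(n^3) per cycle;
    this is for readability only. Assumes K is balanceable (i.e., the
    graph of K is strongly connected).
    """
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    x = np.zeros(n)
    for _ in range(max_cycles):
        A = np.exp(x[:, None] - x[None, :]) * K
        if np.abs(A.sum(axis=1) - A.sum(axis=0)).sum() <= eps * A.sum():
            break
        for k in rng.permutation(n):  # fresh random permutation each cycle
            A = np.exp(x[:, None] - x[None, :]) * K
            r_k = A[k].sum() - A[k, k]     # k-th row sum (diagonal excluded)
            c_k = A[:, k].sum() - A[k, k]  # k-th column sum
            if r_k > 0 and c_k > 0:
                # D_kk <- sqrt(c_k / r_k) * D_kk, i.e. the Osborne update
                x[k] += 0.5 * np.log(c_k / r_k)
    return x
```

Note the update $x_k \mathrel{+}= \tfrac12 \log(c_k/r_k)$ is exactly the multiplicative update $D_{kk} \leftarrow \sqrt{c_k(A)/r_k(A)}\, D_{kk}$ described above.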
\par Yet despite this widespread adoption of Osborne's algorithm, a theoretical understanding of its convergence has proven to be quite challenging: indeed, non-asymptotic convergence bounds (i.e., runtime bounds) were not known for nearly $60$ years until the breakthrough $2017$ paper~\citep{OstRabYou17}. The paper~\citep{OstRabYou17} shows\footnote{Note that in~\citep{OstRabYou17}, bounds are written for the $\ell_2$ balancing criteria; see the discussion after~\eqref{eq-def:balance-approx}.\label{ft:ost}} that Cyclic Osborne computes an $\varepsilon$-balancing after $O(m n^2 \varepsilon^{-2} \log \kappa)$ arithmetic operations, where $m$ is the number of nonzeros in $K$, and $\kappa := (\sum_{ij} K_{ij})/(\min_{ij : K_{ij} \neq 0} K_{ij})$. They also show faster $\tilde{O}(n^2 \varepsilon^{-2} \log \kappa)$ runtimes for two variants of Osborne's algorithm which choose update coordinates in different orders than cyclically. Here and henceforth, the $\tilde{O}$ notation suppresses polylogarithmic factors in $n$ and $\varepsilon^{-1}$. The first variant, which we call \emph{Greedy Osborne}, chooses the coordinate with maximal imbalance as measured by $\argmax_k (\sqrt{r_k(A)} - \sqrt{c_k(A)})^2$. They show that Greedy Osborne's runtime dependence on $\varepsilon$ can be improved from $\varepsilon^{-2}$ to $\varepsilon^{-1}$; however, this comes at the high cost of an extra factor of $n$. A disadvantage of Greedy Osborne is that it has numerical precision issues and requires operating on $O(n \log \kappa)$-bit numbers.
The second variant, which we call \emph{Weighted Random Osborne}, chooses coordinate $k$ with probability proportional to $r_k(A) + c_k(A)$, and can be implemented using $O(\log(n\kappa/\varepsilon))$-bit numbers.
\par Collectively, these runtime bounds are fundamental results since they establish that Osborne's algorithm has polynomial runtime in $n$ and $\varepsilon^{-1}$, and moreover that variants of it converge in roughly $\tilde{O}(n^2\varepsilon^{-2})$ time for matrices satisfying $\log \kappa = \tilde{O}(1)$---henceforth called \emph{well-conditioned matrices}. However, these theoretical runtime bounds are still much slower than both Osborne's rapid empirical convergence and the state-of-the-art theoretical algorithms described below.
\par Two remaining open questions that this paper seeks to address are:
\begin{enumerate}
\item \textbf{Near-linear runtime\footnote{Throughout, we say a runtime is near-linear if it is $O(m)$, up to polylogarithmic factors in $n$ and polynomial factors in the inverse accuracy $\varepsilon^{-1}$.}.}
Does (any variant of) Osborne's algorithm have near-linear runtime in the input sparsity $m$? The fastest known runtimes scale as $n^2$, which is significantly slower for sparse problems.
\item \textbf{Scalability in accuracy.} The fastest known runtimes for (any variant of) Osborne's algorithm scale poorly in the accuracy as $\varepsilon^{-2}$. (The exception is Greedy Osborne, for which it is only known that $\varepsilon^{-2}$ can be replaced by $\varepsilon^{-1}$ at the high cost of an extra factor of $n$.) Can this be improved?
\end{enumerate}
\subsubsection{Theoretical state-of-the-art}
A separate line of work leverages sophisticated optimization techniques to solve a convex optimization problem equivalent to Matrix Balancing. These algorithms have $\log \varepsilon^{-1}$ dependence on the accuracy, but are not practical (at least currently) due to costly overheads required by their significantly more complicated iterations. This direction originated in~\citep{KalKhaSho97}, which showed that the Ellipsoid algorithm produces an approximate balancing in $\tilde{O}(n^4 \log( (\log \kappa) / \varepsilon))$ arithmetic operations on $O(\log(n\kappa/\varepsilon))$-bit numbers.
Recently,~\citep{CohMadTsiVla17}\footnote{Similar runtimes were also developed by~\citep{ZhuLiOliWig17}.} gave an Interior Point algorithm with runtime $\tilde{O}(m^{3/2}\log(\kappa/\varepsilon))$ and a Newton-type algorithm with runtime
$\tilde{O}(m d \log^2 (\kappa /\varepsilon) \log \kappa)$,
where $d$ denotes the diameter of the directed graph $G_K$ with vertices $[n]$ and edges $\{(i,j) : K_{ij} > 0\}$~\citep[Theorem 4.18, Theorem 6.1, and Lemma 4.24]{CohMadTsiVla17}.
Note that when $K$ is a \emph{well-connected matrix}---by which we mean that $G_K$ has polylogarithmic diameter $d = \tilde{O}(1)$---this latter algorithm has near-linear runtime in the input sparsity $m$. However, these algorithms rely heavily upon near-linear time Laplacian solvers, for which practical implementations are not known.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Variant} & \textbf{Best runtime bound (arithmetic operations)} & \textbf{Polylog bits} \\ [0.5ex] \hline \hline
%
%
%
Cyclic
&
$\tilde{O}(mn^2/\varepsilon^2)$~\citep{OstRabYou17} $\longrightarrow$ $\boldsymbol{\tilde{O}(m n^{1/2} / \varepsilon)}$ \textbf{(Theorem~\ref{thm:osb-cyc})}
&
\textbf{Yes (Theorem~\ref{thm:osb-all:bits})}
\\ \hline
%
%
%
Weighted Random
&
$\tilde{O}(n^2/\varepsilon^{2})$~\citep{OstRabYou17}
&
Yes~\citep{OstRabYou17}
\\ \hline
%
%
%
Greedy
&
$\tilde{O}((n^2/\varepsilon^2) \wedge (n^3/\varepsilon))$~\citep{OstRabYou17} $\longrightarrow$
$\boldsymbol{\tilde{O}(n^2 / \varepsilon)}$ \textbf{(Theorem~\ref{thm:osb-greedy})}
&
No
\\ \hline
%
%
%
Random
&
$\boldsymbol{\tilde{O}(m /\varepsilon)}$ \textbf{(Theorem~\ref{thm:osb-rand})}
&
\textbf{Yes (Theorem~\ref{thm:osb-all:bits})}
\\ \hline
%
\end{tabular}
\caption{
Variants of Osborne's algorithm for balancing a matrix $K \in \R_{\geq 0}^{n \times n}$ with $m$ nonzeros to $\varepsilon$ $\ell_1$ accuracy. For simplicity, here $K$ is assumed well-conditioned (i.e., $\log \kappa = \tilde{O}(1)$) and well-connected (i.e., $d = \tilde{O}(1)$); see the main text for detailed dependence on $\log \kappa$ and $d$.
Note that in~\citep{OstRabYou17}, bounds are written for the $\ell_2$ criterion; see the discussion after~\eqref{eq-def:balance-approx}.
Our new bounds are in bold.
}
\label{tab:bal}
\end{table}
\subsection{Contributions}\label{ssec:intro:contributions}
\paragraph*{Random Osborne converges in near-linear time.}
Our main result (Theorem~\ref{thm:osb-rand}) addresses the two open questions above by showing that a simple random variant of the ubiquitously used Osborne's algorithm has runtime that is (i) near-linear in the input sparsity $m$, and also (ii) linear in the inverse accuracy $\varepsilon^{-1}$ for well-connected inputs. Property (i) closes the aforementioned gap between theory and practice that the fastest known runtime of Osborne's algorithm scales as $n^2$~\citep{OstRabYou17}, while a different, impractical algorithm has theoretical runtime which is (conditionally) near-linear in $m$~\citep{CohMadTsiVla17}. Property (ii) shows that improving the runtime dependence in $\varepsilon$ from $\varepsilon^{-2}$ to $\varepsilon^{-1}$ does not require paying a costly factor of $n$ (cf.~\citep{OstRabYou17}).
\par Specifically, we propose a variant of Osborne's algorithm---henceforth called \emph{Random Osborne}\footnote{
Not to be confused with the different randomized variant of Osborne's algorithm in~\citep[\S5]{OstRabYou17}, which draws coordinates with non-uniform probabilities. We call that algorithm Weighted Random Osborne to avoid confusion.
}---which chooses update coordinates uniformly at random---and show the following.
\begin{theorem}[Informal version of Theorem~\ref{thm:osb-rand}]\label{thm:intro:rand}
Random Osborne solves the approximate Matrix Balancing problem on input $K \in \R_{\geq 0}^{n \times n}$ to accuracy $\varepsilon > 0$ after
\begin{align}
O\left(\frac{m}{\varepsilon} \left(\frac{1}{\varepsilon} \wedge d\right) \log \kappa \right),
\label{eq-thm:intro:rand}
\end{align}
arithmetic operations, both in expectation and with high probability.
\end{theorem}
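For concreteness, the Random Osborne iteration analyzed in this theorem can be sketched as follows. This is an illustrative dense implementation (all names and defaults are ours): it recomputes the full scaling each iteration for readability, whereas an efficient sparse implementation touches only the nonzeros of the chosen row and column, giving the expected $O(m/n)$ per-iteration cost discussed below.

```python
import numpy as np

def random_osborne(K, eps=1e-2, max_iters=200_000, rng=None):
    """Sketch of Random Osborne: each iteration performs an Osborne update
    on a uniformly random coordinate k, equalizing the k-th row and column
    sums of A = D K D^{-1}. Stops at the normalized l1 balancing criterion
    ||r(A) - c(A)||_1 <= eps * sum(A). Assumes K is balanceable."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    x = np.zeros(n)
    for _ in range(max_iters):
        A = np.exp(x[:, None] - x[None, :]) * K
        if np.abs(A.sum(axis=1) - A.sum(axis=0)).sum() <= eps * A.sum():
            break
        k = rng.integers(n)  # uniform coordinate choice
        r_k = A[k].sum() - A[k, k]     # row sum, diagonal excluded
        c_k = A[:, k].sum() - A[k, k]  # column sum, diagonal excluded
        if r_k > 0 and c_k > 0:
            x[k] += 0.5 * np.log(c_k / r_k)  # Osborne update on k
    return x
```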
\par We make several remarks about Theorem~\ref{thm:intro:rand}. First, we interpret the runtime~\eqref{eq-thm:intro:rand}. This is the minimum of $O(m\varepsilon^{-2} \log \kappa)$ and $O(m d \varepsilon^{-1} \log \kappa)$. The former is near-linear in $m$. The latter is too if $G_K$ has polylogarithmic diameter $d = \tilde{O}(1)$---important special cases include matrices $K$ containing at least one strictly positive row/column pair (there, $d = 1$), and matrices with random sparsity patterns (there, $d = \tilde{O}(1)$ with high probability, see, e.g.,~\citep[Theorem 10.10]{Bol01}). Note that the complexity of Matrix Balancing is intimately related to the connectivity of $G_K$: indeed, $K$ can be balanced if and only if $G_K$ is strongly connected (i.e., if and only if $d$ is finite)~\citep{Osborne60}. Intuitively, the runtime dependence on $d$ is a quantitative measure of ``how balanceable'' the input $K$ is.
\par We note that the high probability bound in Theorem~\ref{thm:intro:rand} has tails that decay exponentially fast. This is optimal with our analysis, see Remark~\ref{rem:prob-subexp}.
\par Next, we comment on the $\log \kappa$ term in the runtime. This term appears in all other state-of-the-art runtimes~\citep{CohMadTsiVla17,OstRabYou17} and is mild: indeed, $\log \kappa \leq \log m + \log (\max_{ij} K_{ij}/ \min_{ij : K_{ij} > 0} K_{ij})$, where the former summand is $\tilde{O}(1)$---hence why the runtime is \textit{near}-linear---and the latter is the input size for the entries of $K$. In particular, if $K$ has quasi-polynomially bounded entries, then $\log \kappa = \tilde{O}(1)$.
\par Next, we compare to existing runtimes. Theorem~\ref{thm:intro:rand} gives a faster runtime than any existing practical algorithm; see Table~\ref{tab:bal}. Comparing to the (impractical) algorithm of~\citep{CohMadTsiVla17} on purely theoretical grounds, neither runtime dominates the other, and which is faster depends on the precise parameter regime:~\citep{CohMadTsiVla17} is better for high accuracy solutions\footnote{We remark that in practical applications of Matrix Balancing such as pre-conditioning, low accuracy solutions typically suffice. Indeed, this is a motivation of the commonly used variant of Osborne's algorithm which restricts entries of the scaling $D$ to exact powers of the radix base~\citep{ParRei69}.}, while Random Osborne has better dependence on the conditioning $\kappa$ of $K$ and the connectivity $d$ of $G_K$.
Finally, we remark on bit-complexity. In \S\ref{sec:bits}, we show that with only minor modification, Random Osborne is implementable using numbers with only $O(\log (n \kappa / \varepsilon))$ bits; see Theorem~\ref{thm:osb-all:bits} for the formal statement.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Variant} & \textbf{Best runtime bound (rounds)} & \textbf{Total work} & \textbf{Polylog bits} \\ [0.5ex] \hline \hline
%
%
%
Cyclic Block
&
$\tilde{O}(p^{3/2}/\varepsilon)$
&
$\tilde{O}(m p^{1/2}/\varepsilon)$
&
Yes
\\ \hline
%
%
%
Greedy Block
&
$\tilde{O}(p/\varepsilon)$
&
$\tilde{O}(mp/\varepsilon)$
&
No
\\ \hline
%
%
%
Random Block
&
$\tilde{O}(p/\varepsilon)$
&
$\tilde{O}(m/\varepsilon)$
&
Yes
\\ \hline
%
\end{tabular}
\caption{
Parallelized variants of Osborne's algorithm for balancing a matrix $K \in \R_{\geq 0}^{n \times n}$ with $m$ nonzeros to $\varepsilon$ $\ell_1$ accuracy, given a partitioning of the dataset into $p$ blocks (see \S\ref{ssec:prelim:par} for details). For simplicity, here $K$ is assumed well-conditioned (i.e., $\log \kappa = \tilde{O}(1)$) and well-connected (i.e., $d = \tilde{O}(1)$); see the main text for detailed dependence on $\log \kappa$ and $d$.
All results are ours. The runtime and work bounds are in Theorem~\ref{thm:par:all}, and the bit-complexity bounds are in Theorem~\ref{thm:osb-all:bits}.
}
\label{tab:par}
\end{table}
\paragraph*{Simple, streamlined analysis for different Osborne variants.} We prove Theorem~\ref{thm:intro:rand} using an intuitive potential argument (overviewed in \S\ref{ssec:intro:overview} below). An attractive feature of this argument is that with only minor modification, it adapts to other Osborne variants. We elaborate below; see also Tables~\ref{tab:bal} and~\ref{tab:par} for summaries of our improved rates.
\par \underline{Greedy Osborne.} We show an improved runtime for Greedy Osborne where the $\varepsilon^{-2}$ dependence is improved to $\varepsilon^{-1}$ at the cost of $d$ (rather than a full factor of $n$ as in~\citep{OstRabYou17}). Specifically, in Theorem~\ref{thm:osb-greedy}, we show convergence after $O(n^2\varepsilon^{-1} (\varepsilon^{-1} \wedge d) \log \kappa)$ arithmetic operations, which improves upon the previous best $O(n^2\varepsilon^{-1} \log n \cdot (\varepsilon^{-1} \log \kappa \wedge n \log (\kappa/\varepsilon)))$ from~\citep{OstRabYou17}. (The other improved $\log n$ factor comes from simplifying the data structure used for efficient greedy updates, see Remark~\ref{rem:greedy-amortized}.)
\par \underline{Cyclic Osborne.} We show that Cyclic Osborne, using a fresh random permutation each cycle, converges after $O(m n^{1/2} \varepsilon^{-1} (\varepsilon^{-1} \wedge d) \log \kappa)$ arithmetic operations (Theorem~\ref{thm:osb-cyc}), which improves substantially upon the previous best $O(mn^2\varepsilon^{-2} \log \kappa)$ from~\citep{OstRabYou17}. Moreover,
we show that Cyclic Osborne can be implemented on $O(\log(n\kappa/\varepsilon))$-bit numbers (Theorem~\ref{thm:osb-all:bits}).
\par \underline{Parallelized Osborne.}
We also show fast convergence for the analogous greedy, cyclic, and random variants of a parallelized version of Osborne's algorithm that is recalled in \S\ref{ssec:prelim:par}. These runtime bounds are summarized in Table~\ref{tab:par}.
Our main result here is that---modulo at most a single $\log n$ factor arising from the conditioning $\log \kappa$ of the input---Random Block Osborne converges after (i) a number $O(\tfrac{p}{\varepsilon}(\tfrac{1}{\varepsilon} \wedge d) \log \kappa)$ of synchronization rounds that is linear in the size $p$ of the dataset partition; and (ii) the same amount of total work as its non-parallelized counterpart Random Osborne, which is in particular near-linear in $m$ (see Theorem~\ref{thm:intro:rand} and the ensuing discussion). Property (i) shows that, when given an optimal coloring of $G_K$, Random Block Osborne converges in a number of rounds linear in the chromatic number $\chi(G_K)$ of $G_K$ (see \S\ref{ssec:prelim:par} for further details). Property (ii) shows that the speedup of parallelization comes at no cost in the total work.
\subsection{Overview of approach}\label{ssec:intro:overview}
We establish all of our runtime bounds with essentially the same potential argument. Below, we first sketch this argument for Greedy Osborne, since it is the simplest. Next, we describe the modifications for Random Osborne---the argument is identical modulo probabilistic tools which, albeit necessary for a rigorous analysis, are not the heart of the argument. We then outline the analysis for Cyclic Osborne, which requires additional ideas.
We then briefly remark upon the very minor modifications required for the parallelized Osborne variants.
\par For all variants, the potential is the logarithm of the sum of the entries of the current balancing $A = DKD^{-1}$ minus that of the unique balancing $A^*$ of $K$, i.e., $\log \sum_{ij} A_{ij} - \log \sum_{ij} A_{ij}^*$.
Minimizing this potential function (over all positive diagonal matrices $D$) is well-known to be equivalent to Matrix Balancing; details in the Preliminaries section \S\ref{ssec:prelim:convex}. Note also that Osborne's algorithm is equivalent to Exact Coordinate Descent on this function---which, importantly, is convex after a re-parameterization; see \S\ref{ssec:prelim:cd}. In the interest of accessibility, the below overview describes our approach at an informal level that does not require further background. Later, \S\ref{sec:prelim} provides these preliminaries, and \S\ref{sec:pot} gives the technical details of the potential argument.
\subsubsection{Argument for Greedy Osborne}\label{sssec:intro:overview-greedy}
Here we sketch the $O(n^2 \varepsilon^{-1} (\varepsilon^{-1} \wedge d) \log \kappa)$ runtime we establish for Greedy Osborne in \S\ref{sec:greedy}. Since each Greedy Osborne iteration takes $O(n)$ arithmetic operations (see \S\ref{ssec:prelim:cd}), it suffices to bound the number of iterations by $O(n \varepsilon^{-1} (\varepsilon^{-1} \wedge d) \log \kappa)$.
\par The first step is relating the per-iteration progress of Osborne's algorithm to the imbalance of the current balancing---as measured in \emph{Hellinger distance} $\mathsf{H}(\cdot,\cdot)$. Specifically, we show that an Osborne update decreases the potential function by at least
\begin{align}
(\text{per-iteration decrease in potential})
\gtrsim
\frac{\mathsf{H}^2 \left( r(P), c(P) \right)}{n},
\label{eq-ove:hell}
\end{align}
where $P = A/\sum_{ij} A_{ij}$ is the normalization of the current scaling $A = DKD^{-1}$. Note that since $P$ is normalized, its marginals $r(P)$ and $c(P)$ are both probability distributions.
\par The second step is lower bounding this Hellinger imbalance $\mathsf{H}^2 \left( r(P), c(P) \right)$ by something large, so that we can argue that each iteration makes significant progress. Following is a simple such lower bound that yields an $O(n^2 \varepsilon^{-2} \log \kappa)$ runtime bound. Modulo small constant factors: a standard inequality in statistics lower bounds Hellinger distance by $\ell_1$ distance (a.k.a. total variation distance), and the $\ell_1$ distance is by definition at least $\varepsilon$ if the current iterate is not $\varepsilon$-balanced (see~\eqref{eq-def:balance-approx}). Therefore
\begin{align}
(\text{per-iteration decrease in potential})
\gtrsim
\frac{\varepsilon^2}{n}
\label{eq-ove:1}
\end{align}
for each iteration before convergence. Since the potential is initially not very large (at most $\log \kappa$, see Lemma~\ref{lem:pot-init}) and by construction always nonnegative, the total number of iterations before convergence is therefore at most $n \varepsilon^{-2} \log \kappa$.
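The Hellinger-versus-$\ell_1$ comparison invoked here can be sanity-checked numerically. Assuming the unnormalized squared-Hellinger convention $\mathsf{H}^2(p,q) = \sum_i (\sqrt{p_i} - \sqrt{q_i})^2$ (consistent with the displays in \S\ref{sec:greedy}), Cauchy--Schwarz gives $\mathsf{H}^2(p,q) \geq \|p - q\|_1^2 / 4$ for probability vectors $p, q$:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.random(20); p /= p.sum()
    q = rng.random(20); q /= q.sum()
    hell_sq = np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)  # squared Hellinger imbalance
    l1 = np.abs(p - q).sum()
    # Cauchy-Schwarz: |p_i - q_i| = |sqrt(p_i)-sqrt(q_i)| (sqrt(p_i)+sqrt(q_i)),
    # and sum_i (sqrt(p_i)+sqrt(q_i))^2 <= 2 sum_i (p_i + q_i) = 4, hence:
    assert hell_sq >= l1 ** 2 / 4
```

In particular, if the current iterate is not $\varepsilon$-balanced (so $\ell_1$ imbalance at least $\varepsilon$), the squared Hellinger imbalance is at least $\varepsilon^2/4$.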
\par The key to the improved bound is an extra inequality that shows that \emph{the per-iteration decrease is very large when the potential is large}. Specifically, this inequality---which has a simple proof using convexity of the potential---implies the following improvement of~\eqref{eq-ove:1}
\begin{align}
(\text{per-iteration decrease in potential})
\gtrsim
\frac{1}{n} \left[\frac{\text{(current potential)}}{R} \vee \varepsilon \right]^2
\label{eq-ove:2}
\end{align}
where $R = d \log \kappa$. The per-iteration decrease is thus governed by the maximum of these two quantities. In words, the former ensures a \emph{relative improvement} in the potential, and the latter ensures an \emph{additive improvement}. Which is bigger depends on the current potential: the former dominates when the potential is $\Omega(\varepsilon R)$, and the latter for $O(\varepsilon R)$.
It can be shown that each ``phase'' requires at most $O(n \varepsilon^{-1} d \log \kappa)$ iterations, yielding the desired improved rate (details in \S\ref{sec:greedy}).
\subsubsection{Argument for Random Osborne}\label{sssec:intro:overview-rand}
The argument for Random Osborne is nearly identical, except for two minor changes. The first change is the per-iteration potential decrease. All the same bounds hold (i.e.,~\eqref{eq-ove:hell},~\eqref{eq-ove:1}, and~\eqref{eq-ove:2}), except that they are now \emph{in expectation} rather than deterministic. Nevertheless, this large expected progress is sufficient to obtain the same iteration-complexity bound. Specifically, an expected bound on the number of iterations is proved using Doob's Optional Stopping Theorem, and a h.p. bound using a martingale Chernoff bound (details in \S\ref{sssec:conv:runtime}).
\par The second change is the per-iteration runtime: it is faster in expectation.
\begin{obs}[Per-iteration runtime of Random Osborne]\label{obs:osb-rand:iter}
An iteration of Random Osborne requires $O(m/n)$ arithmetic operations in expectation.
\end{obs}
\begin{proof}
The number of arithmetic operations required by an Osborne update on coordinate $k$ is proportional to the number of nonzero entries on the $k$-th row and column of $K$. Since Random Osborne draws $k$ uniformly from $[n]$, this number of nonzeros is $2m/n$ in expectation.
\end{proof}
Note that this per-iteration runtime is $n^2/m$ times faster than Greedy Osborne's.
This is why our bound on the total runtime of Random Osborne is roughly $O(m)$, whereas for Greedy Osborne it is $O(n^2)$.
\par A technical nuance is that arguing a final runtime bound from a per-iteration runtime and an iteration-complexity bound is a bit more involved for Random Osborne. This is essentially because the number of iterations is not statistically independent from the per-iteration runtimes. For Greedy Osborne, the final runtime is bounded simply by the product of the per-iteration runtime and the number of iterations. We show a similar bound for Random Osborne in expectation via a slight variant of Wald's inequality, and in h.p. via a Chernoff bound; details in \S\ref{sssec:conv:decrease}.
\subsubsection{Argument for Cyclic Osborne}\label{sssec:intro:overview-cyc}
Analyzing Cyclic Osborne appears to be more difficult. The primary obstacle is that the improvement of an Osborne update is significantly affected by the previous Osborne updates in the cycle---and this effect is difficult to track.
Our analysis bypasses this obstacle by exploiting the independent random ordering used in each cycle of Cyclic Osborne, in order to make a simple coupling argument between Cyclic Osborne and Random Osborne. Specifically, we relate the first $\Theta(\sqrt{n})$ iterations of each cycle of Cyclic Osborne to the same number of iterations of Random Osborne by leveraging the fact that sampling $\Theta(\sqrt{n})$ coordinates from $[n]$ with or without replacement is ``indistinguishable'' in the sense that the total variation distance between these two sampling methods is a constant bounded away from one. Since Osborne updates monotonically improve the potential, the per-cycle improvement of Cyclic Osborne is at least the improvement of the first $\Theta(\sqrt{n})$ iterations of the cycle; and by this coupling, this is (up to a constant factor) at least the improvement of the same number of Random Osborne iterations. This implies that Cyclic Osborne requires at most a factor of $\Theta(\sqrt{n})$ more iterations than Random Osborne, yielding our claimed bounds. Details in \S\ref{sec:cyclic}.
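The ``indistinguishability'' step can be made quantitative with a short computation. Under the standard coupling bound, the total variation distance between sampling $m$ coordinates with and without replacement is at most the probability that the with-replacement draws contain a repeat, which the union bound keeps below $1/2$ for $m = \sqrt{n}$:

```python
import numpy as np

n = 10_000
m = int(np.sqrt(n))  # Theta(sqrt(n)) coordinate draws
# Exact probability that m uniform draws from [n] contain a repeated
# coordinate; this upper-bounds the total variation distance between
# with- and without-replacement sampling (standard coupling argument).
p_collision = 1 - np.prod(1 - np.arange(m) / n)
assert p_collision <= m * (m - 1) / (2 * n) <= 0.5  # union bound
```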
\subsubsection{Argument for parallelized Osborne}\label{sssec:intro:overview-block}
The arguments for the parallelized variants of Osborne's algorithm are nearly identical to the arguments for their non-parallelized counterparts, described above. Specifically, the main difference for the random and greedy variants is just that in the bounds~\eqref{eq-ove:hell},~\eqref{eq-ove:1}, and~\eqref{eq-ove:2}, the $1/n$ factor is improved to $1/p$, where $p$ is the size of the partition. The same argument then results in a final runtime that is sped up by this factor of $n/p$. The only difference for analyzing the cyclic variant is that here, the analogous coupling relates the first $\Theta(\sqrt{p})$ ``block'' updates in each cycle (of $p$ total updates) to that of the random variant, resulting in a runtime that is slower by a factor of $\sqrt{p}$. Details in \S\ref{sec:par}.
\subsubsection{Key differences from previous approaches}
The only other polynomial-time analysis of Osborne's algorithm also uses a potential argument~\citep{OstRabYou17}. However, our argument differs in several key ways---which enables much tighter bounds as well as a simpler argument that extends to many variants of Osborne's algorithm. Notably, their proof of Lemma 3 is specifically tailored to Greedy Osborne due to inequality (15), and does not seem to extend to other variants such as Random Osborne. In particular, this precludes obtaining the near-linear runtime shown in this paper.
Another key difference is that they do \emph{not} use convexity of their potential (explicitly written on~\citep[page 157]{OstRabYou17}), whereas we exploit not only convexity but also \emph{log-convexity} (note our potential is the logarithm of theirs).
Specifically, they use~\citep[Lemma 2]{OstRabYou17} to improve $\varepsilon^{-2}$ to $\varepsilon^{-1}$ dependence at the cost of an extra factor of $n$, whereas here we show a significantly tighter bound (see proof of Proposition~\ref{prop:hell-lb}) that saves this factor of $n$ for well-connected graphs by exploiting log-convexity of their potential.
\subsection{Other related work}
We briefly remark about several tangentially related lines of work. Reference~\citep{Chen00} gives heuristics for speeding up Osborne's algorithm on sparse matrices in practice, but does not provide runtime bounds. Reference~\citep{OstRabYou18} gives a more complicated version of Osborne's algorithm that obtains a stricter approximate balancing in a polynomial (albeit less practical) runtime of roughly $\tilde{O}(n^{19} \varepsilon^{-4} \log^4 \kappa)$. Reference~\citep{MaiBat19} gives an asynchronous distributed version of Osborne's algorithm with applications to epidemic suppression.
\begin{remark}[Fast Coordinate Descent]
Since Osborne's algorithm is Exact Coordinate Descent on a certain associated convex optimization problem (details in \S\ref{ssec:prelim:cd}), it is natural to ask what runtimes the extensive literature on Coordinate Descent implies for Matrix Balancing. However, applying general-purpose bounds on Coordinate Descent out-of-the-box gives quite pessimistic runtime bounds for Matrix Balancing\footnote{E.g., applying the state-of-the-art guarantees of~\citep{NesSti17,AllQuRicYua16} for accelerated Coordinate Descent (which, note also, is \emph{not} exactly Osborne's algorithm) gives an iteration bound of $(\sum_{i=1}^n \sqrt{L_i})\delta^{-1/2}\|x^*\|_2$ for minimizing $\Phi$ (defined in \S\ref{ssec:prelim:convex}) to $\delta$ additive accuracy, where $L_i$ is the smoothness of $\Phi$ on coordinate $i$. By~\citep[Corollary 2]{KalKhaSho97} and Cauchy-Schwarz, $\delta = O(\varepsilon^2/n)$ ensures that such a $\delta$-approximate minimizer of $\Phi$ corresponds to an $\varepsilon$-approximate balancing. Bounding $L_i \leq 1$ and $\|x^*\|_2 \leq \sqrt{n} d \log \kappa$ by Corollary~\ref{cor:bal:R} therefore yields a bound of $O(n^{2} \varepsilon^{-1} d \log \kappa)$ iterations. Since each iteration takes $O(m/n)$ time on average, this yields a final runtime bound of $O(mn\varepsilon^{-1} d \log \kappa)$, which is not near-linear.
}, essentially because they only rely on coordinate-smoothness of the function.
In order to achieve the near-linear time bounds in this paper, we heavily exploit the further global structure of the specific convex optimization problem at hand.
\end{remark}
\begin{remark}[$\ell_p$ Matrix Balancing]\label{rem:lp-bal}
The (approximate) $\ell_p$ Matrix Balancing problem is: given input $K \in \mathbb{C}^{n \times n}$ and $p \in [1,\infty)$, compute a scaling $A = DKD^{-1}$ such that for each $i \in [n]$, the $i$-th row and column of $A$ have (approximately) equal $\ell_p$ norm. (Note this $\ell_p$ variant should not be confused with the norm discussion following~\eqref{eq-def:balance-approx}.) Note that the Matrix Balancing problem studied in this paper is a special case of this: it is $\ell_1$ balancing a nonnegative matrix. However, it is actually no less general, since $\ell_p$ balancing $K \in \mathbb{C}^{n \times n}$ is trivially reducible to $\ell_1$ balancing the nonnegative matrix with entries $|K_{ij}|^p$, see, e.g.,~\citep{RotSchSch94}. Thus, following the literature, we focus only on the version of Matrix Balancing described above.
\end{remark}
\begin{remark}[Max-Balancing]
The Max-Balancing problem is $\ell_p$ Matrix Balancing for $p = \infty$, i.e.: given $K \in \R_{\geq 0}^{n \times n}$, compute a scaling $A = DKD^{-1}$ so that for each $i$, the maximum entry in the $i$-th row and column of $A$ are equal. There is an extensive literature on this problem, including polynomial-time combinatorial algorithms~\citep{SchSch91,YouTarOrl91} as well as a natural analogue of Osborne's algorithm~\citep{ParRei69} from the 1960s. Just as for Matrix Balancing, Osborne's algorithm has long been the choice in practice for Max-Balancing, yet its analysis has proven quite difficult: asymptotic convergence was not even known until 1998~\citep{Chen98}, and the first runtime bound was shown only a few years ago~\citep{SchSin17}.
However, despite the syntactic similarity of Max-Balancing and Matrix Balancing, the two problems are fundamentally very different: not only are the balancing goals different (which begets remarkably different properties, e.g., the Max-Balancing solution is not unique~\citep{Chen98}), but also the algorithms are quite different (even the analogous versions of Osborne's algorithm) and their analyses do not appear to carry over~\citep{OstRabYou17}.
\end{remark}
\begin{remark}[Matrix Scaling and Sinkhorn's algorithm]\label{rem:scaling}
The Matrix Scaling problem is: given $K \in \R_{\geq 0}^{n \times n}$ and vectors $\mu,\nu \in \R_{\geq 0}^n$ satisfying $\sum_{i} \mu_i = \sum_i \nu_i$, find positive diagonal matrices $D_1,D_2$ such that $A := D_1KD_2$ satisfies $r(A) = \mu$ and $c(A) = \nu$. The many applications of Matrix Scaling have motivated an extensive literature on it; see, e.g., the survey~\citep{Idel16}. In analogue to Osborne's algorithm for Matrix Balancing, there is a simple iterative procedure (Sinkhorn's algorithm) for Matrix Scaling~\citep{Sin67}.
Sinkhorn's algorithm was recently shown to converge in near-linear time~\citep{AltWeeRig17} (see also~\citep{GurYia98,ChaKha18,DvuGasKro18}). The analysis there also uses a potential argument. Interestingly, the per-iteration potential improvement for Matrix Scaling is the \emph{Kullback-Leibler divergence} of the current imbalance, whereas for Matrix Balancing it is the \emph{Hellinger divergence}. Further connections related to algorithmic techniques in this paper are deferred to Appendix~\ref{app:sink}.
\end{remark}
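As a point of comparison with the Osborne update, here is a minimal sketch of Sinkhorn's algorithm (our own illustrative implementation, not taken from the cited works): it alternately rescales rows and columns toward the target marginals.

```python
import numpy as np

def sinkhorn(K, mu, nu, iters=500):
    """Minimal Sinkhorn sketch: alternately rescale rows toward mu and
    columns toward nu. Returns (u, v) with diag(u) K diag(v) having
    marginals approximately (mu, nu). Assumes K > 0 entrywise and
    sum(mu) == sum(nu)."""
    u = np.ones(K.shape[0])
    v = np.ones(K.shape[1])
    for _ in range(iters):
        u = mu / (K @ v)    # fix row sums to mu
        v = nu / (K.T @ u)  # fix column sums to nu
    return u, v
```

Each half-step exactly fixes one set of marginals, in analogy with an Osborne update exactly equalizing one row/column pair.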
\subsection{Roadmap}\label{ssec:intro:outline}
\S\ref{sec:prelim} recalls preliminary background. \S\ref{sec:pot} establishes the key lemmas in the potential argument. \S\ref{sec:greedy}, \S\ref{sec:rand}, \S\ref{sec:cyclic}, and \S\ref{sec:par} use these tools to prove fast convergence for Greedy, Random, Cyclic, and parallelized Osborne variants, respectively.
For simplicity of exposition, these sections assume exact arithmetic; bit-complexity issues are addressed in \S\ref{sec:bits}. \S\ref{sec:conc} concludes with several open questions.
\section{Greedy Osborne converges quickly}\label{sec:greedy}
Here we show an improved runtime bound for Greedy Osborne that, for well-connected sparsity patterns, scales (near) linearly in both the total number of entries $n^2$ and the inverse accuracy $\varepsilon^{-1}$. See \S\ref{ssec:intro:contributions} for further discussion of the result, and \S\ref{sssec:intro:overview-greedy} for a proof sketch.
\begin{theorem}[Convergence of Greedy Osborne]\label{thm:osb-greedy}
Given a balanceable matrix $K \in \R_{\geq 0}^{n \times n}$ and accuracy $\varepsilon > 0$, Greedy Osborne
solves $\textsc{ABAL}(K,\eps)$
in
$O(\tfrac{n^2}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d)\log \kappa)$ arithmetic operations.
\end{theorem}
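To fix ideas, here is a transparent (unoptimized) Python sketch of Greedy Osborne, assuming a zero-diagonal input with every row and column nonempty. The greedy rule selects the coordinate with the largest Hellinger imbalance; each loop recomputes all marginals in $O(n^2)$ time rather than using the $O(n)$ amortization of Remark~\ref{rem:greedy-amortized}.

```python
import math

def osborne_update(K, x, k):
    """Exactly balance coordinate k: afterwards r_k(A) = c_k(A), where
    A = diag(e^x) K diag(e^-x). Assumes row k and column k are nonempty."""
    n = len(K)
    r = sum(math.exp(x[k] - x[j]) * K[k][j] for j in range(n) if j != k)
    c = sum(math.exp(x[i] - x[k]) * K[i][k] for i in range(n) if i != k)
    x[k] += 0.5 * math.log(c / r)

def greedy_osborne(K, eps, max_iters=10000):
    """Repeatedly update the coordinate with the largest Hellinger imbalance
    (sqrt(r_k(P)) - sqrt(c_k(P)))^2 of the normalized scaling P."""
    n = len(K)
    x = [0.0] * n
    for _ in range(max_iters):
        A = [[math.exp(x[i] - x[j]) * K[i][j] for j in range(n)] for i in range(n)]
        s = sum(map(sum, A))
        r = [sum(A[i]) / s for i in range(n)]
        c = [sum(A[i][j] for i in range(n)) / s for j in range(n)]
        if sum(abs(r[k] - c[k]) for k in range(n)) <= eps:  # eps-balanced
            return x
        k = max(range(n), key=lambda k: (r[k] ** 0.5 - c[k] ** 0.5) ** 2)
        osborne_update(K, x, k)
    return x
```

For example, a weighted directed $3$-cycle is balanced after a handful of updates, since each update equates the incoming and outgoing mass at one vertex.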
The key lemma is that each iteration of Greedy Osborne improves the potential significantly.
\begin{lemma}[Potential decrease of Greedy Osborne]\label{lem:pot-dec:greedy}
Consider any $x \in \R^n$ for which the corresponding scaling $A := \diag(e^x) K \diag(e^{-x})$ is not $\varepsilon$-balanced. If $x'$ is the next iterate obtained from a Greedy Osborne update, then
\begin{align*}
\Phi(x) - \Phi(x')
\geq
\frac{1}{4n} \left( \frac{\Phi(x) - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2.
\end{align*}
\end{lemma}
\begin{proof}
Using in order Lemma~\ref{lem:pot-dec}, the inequality $-\log(1 - z) \geq z$ (valid for all $z < 1$), the definition of Greedy Osborne, and then Proposition~\ref{prop:hell-lb},
\begin{align}
\Phi(x) - \Phi(x')
&=
- \log \left(1 - \left( \sqrt{r_k(P)} - \sqrt{c_k(P)} \right)^2 \right)
\label{eq-lpdg:1}
\\ &\geq
\left( \sqrt{r_k(P)} - \sqrt{c_k(P)} \right)^2
\label{eq-lpdg:2}
\\ &\geq
\frac{1}{n} \sum_{\ell=1}^n \left( \sqrt{r_{\ell}(P)} - \sqrt{c_{\ell}(P)} \right)^2
\label{eq-lpdg:3}
\\ &\geq
\frac{1}{4n} \left( \frac{\Phi(x) - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2.
\label{eq-lpdg:4}
\end{align}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:osb-greedy}]
Let $x^{(0)} = \mathbf{0}, x^{(1)},x^{(2)},\dots$ denote the iterates, and let $\tau$ be the first iteration $t$ for which $\diag(e^{x^{(t)}})K\diag(e^{-x^{(t)}})$ is $\varepsilon$-balanced.
Since the number of arithmetic operations per iteration is amortized to $O(n)$ by Remark~\ref{rem:greedy-amortized}, it suffices to show that the number of iterations $\tau$ is at most $O(n\varepsilon^{-1}(\varepsilon^{-1} \wedge d) \log \kappa)$. Now by Lemma~\ref{lem:pot-dec:greedy}, for each $t \in \{0,1,\dots,\tau-1\}$ we have
\begin{align}
\Phi(x^{(t)}) - \Phi(x^{(t+1)})
\geq
\frac{1}{4n} \left( \frac{\Phi(x^{(t)}) - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2.
\label{eq:greedy}
\end{align}
\paragraph*{Case 1: $\boldsymbol{\varepsilon^{-1} \leq d}$.} By the second bound in~\eqref{eq:greedy}, the potential decreases by at least $\varepsilon^2/4n$ in each iteration. Since the potential is initially at most $\log \kappa$ by Lemma~\ref{lem:pot-init} and is always nonnegative by definition, the total number of iterations is at most
\begin{align}
\tau \leq \frac{\log \kappa}{\varepsilon^2/4n} = \frac{4n \log \kappa}{\varepsilon^2}.
\label{eq-pf-greedy:c1}
\end{align}
\paragraph*{Case 2: $\boldsymbol{\varepsilon^{-1} > d}$.} For shorthand, denote $\alpha := \varepsilon d \log \kappa$. Let $\tau_1$ be the first iteration for which the potential $\Phi(x^{(t)}) \leq \alpha$, and let $\tau_2 := \tau - \tau_1$ denote the number of remaining iterations. By an identical argument as in case 1,
\begin{align}
\tau_2
\leq
\frac{\alpha}{\varepsilon^2/4n}
=
\frac{4n d \log \kappa}{\varepsilon}.
\label{eq-pf-greedy:c2-2}
\end{align}
To bound $\tau_1$, partition this phase further as follows. Let $\phi_0 := \log \kappa$ and $\phi_i := \phi_{i-1}/2$ for $i = 1, 2, \dots$, stopping at the first index $N$ for which $\phi_N \leq \alpha$. Let $\tau_{1,i}$ be the number of iterations starting from when the potential is first no greater than $\phi_{i-1}$ and ending when it is no greater than $\phi_{i}$. In the $i$-th subphase, the potential drops by at least $(\tfrac{\phi_i}{d \log \kappa})^2/4n$ per iteration by~\eqref{eq:greedy}. Thus
\begin{align}
\tau_{1,i}
\leq
\frac{\phi_{i-1} - \phi_i}{(\tfrac{\phi_i}{d \log \kappa})^2/4n}
=
\frac{4n d^2 \log^2 \kappa}{\phi_i}.
\label{eq-pf-greedy:c2-1i}
\end{align}
Since $\sum_{i=1}^N \tfrac{1}{\phi_i} = \tfrac{1}{\phi_N} \sum_{j=0}^{N-1} 2^{-j} \leq \tfrac{2}{\phi_N} \leq \tfrac{4}{\alpha}$, where the final inequality uses $\phi_N \geq \alpha/2$, we conclude that
\begin{align}
\tau_1
=
\sum_{i=1}^N \tau_{1,i}
\leq
\frac{16n d^2 \log^2 \kappa}{\alpha}
=
\frac{16n d \log \kappa}{\varepsilon}.
\label{eq-pf-greedy:c2-1}
\end{align}
By~\eqref{eq-pf-greedy:c2-2} and~\eqref{eq-pf-greedy:c2-1}, the total number of iterations is at most $\tau = \tau_1 + \tau_2 \leq 20n d \varepsilon^{-1} \log \kappa$.
\end{proof}
\section{Random Osborne converges quickly}\label{sec:rand}
Here we show that Random Osborne has runtime that is (i) near-linear in the input sparsity $m$; and (ii) also linear in the inverse accuracy $\varepsilon^{-1}$ for well-connected sparsity patterns. See \S\ref{ssec:intro:contributions} for further discussion of the result, and \S\ref{sssec:intro:overview-rand} for a proof sketch.
\begin{theorem}[Convergence of Random Osborne]\label{thm:osb-rand}
Given a balanceable matrix $K \in \R_{\geq 0}^{n \times n}$ and accuracy $\varepsilon > 0$, Random Osborne
solves $\textsc{ABAL}(K,\eps)$
in $T$ arithmetic operations, where
\begin{itemize}
\item (Expectation guarantee.) $\mathbb{E}[T] = O(\tfrac{m}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d) \log \kappa)$.
\item (H.p. guarantee.) There exists a universal constant $c > 0$ such that for all $\delta > 0$,
\[
\Prob\left( T \leq c\left(\tfrac{m}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d) \log \kappa \logdel \right)\right) \geq 1 - \delta.
\]
\end{itemize}
\end{theorem}
As described in the proof overview in \S\ref{sssec:intro:overview-greedy}, the core argument is nearly identical to the analysis of Greedy Osborne in \S\ref{sec:greedy}. Below, we detail the additional probabilistic nuances and describe how to overcome them. Remaining details for the proof of Theorem~\ref{thm:osb-rand} are deferred to Appendix~\ref{app:rand-pf}.
\subsection{Bounding the number of iterations}\label{sssec:conv:decrease}
Analogously to the proof for Greedy Osborne (cf.\ Lemma~\ref{lem:pot-dec:greedy}), the key lemma is that each iteration significantly decreases the potential. The statement and proof are nearly identical, the only difference being that for Random Osborne, this improvement is in \emph{expectation}.
\begin{lemma}[Potential decrease of Random Osborne]\label{lem:pot-dec:rand}
Consider any $x \in \R^n$ for which the corresponding scaling $A := \diag(e^x) K \diag(e^{-x})$ is not $\varepsilon$-balanced. If $x'$ is the next iterate obtained from a Random Osborne update, then
\begin{align*}
\mathbb{E} \left[ \Phi(x) - \Phi(x') \right]
\geq
\frac{1}{4n} \left( \frac{\Phi(x) - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2,
\end{align*}
where the expectation is over the algorithm's uniform random choice of update coordinate from $[n]$.
\end{lemma}
\begin{proof}
Identical to the proof for Greedy Osborne in Lemma~\ref{lem:pot-dec:greedy}, the only differences being that~\eqref{eq-lpdg:1} and~\eqref{eq-lpdg:2} are in expectation, and~\eqref{eq-lpdg:3} holds with equality by definition of Random Osborne.
\end{proof}
Lemma~\ref{lem:pot-init} shows that the potential is initially bounded, and Lemma~\ref{lem:pot-dec:rand} shows that each iteration significantly decreases the potential in expectation. In the analysis of Greedy Osborne, this potential drop is deterministic, and so we immediately concluded that the number of iterations is at most the initial potential divided by the per-iteration decrease (see~\eqref{eq-pf-greedy:c1} in \S\ref{sec:greedy}). Lemma~\ref{lem:prob} below shows that essentially the same bound holds in our stochastic setting. Indeed, the expectation bound is exactly this quantity (plus one), and the h.p. bound is the same up to a small constant.
\begin{lemma}[Per-iteration expected improvement implies few iterations]\label{lem:prob}
Let $A > a$ and $h > 0$.
Let $\{Y_t\}_{t \in \mathbb{N}_0}$ be a stochastic process adapted to a filtration $\{\mathcal{F}_t\}_{t \in \mathbb{N}_0}$ such that $Y_0 \leq A$ a.s.,
each difference $Y_{t-1} - Y_t$ is bounded within $[0,2(A-a)]$ a.s., and
\begin{align}
\mathbb{E}\left[ Y_t - Y_{t+1} \, | \, \mathcal{F}_{t}, \, Y_{t} \geq a \right] \geq h
\label{eq-lem:prob:iter}
\end{align}
for all $t \in \mathbb{N}_0$. Then the stopping time $\tau := \min \{t \in \mathbb{N}_0 \, : \, Y_t \leq a \}$ satisfies
\begin{itemize}
\item (Expectation bound.) $\mathbb{E}[\tau] \leq \tfrac{A - a}{h} + 1$.
\item (H.p. bound.)
For all $\delta \in (0,1/e)$, it holds that $\Prob(\tau \leq \tfrac{6(A-a)}{h} \logdel ) \geq 1 - \delta$.
\end{itemize}
\end{lemma}
The expectation bound in Lemma~\ref{lem:prob} is proved using Doob's Optional Stopping Theorem, and the h.p. bound using Chernoff bounds; details are deferred to Appendix~\ref{app:pfs-prob}.
\begin{remark}[Sub-exponential concentration]\label{rem:prob-subexp}
Lemma~\ref{lem:prob} shows that the upper tail of $\tau$ decays at a sub-exponential rate. This concentration cannot be improved to a sub-Gaussian rate: indeed, consider $X_t$ i.i.d. Bernoulli with parameter $h \in (0,1)$, $Y_t = 1 - \sum_{i=1}^t X_i$, $A = 1$, and $a = 0$. Then $\Prob(\tau \leq N) = 1 - \Prob(X_1 = \dots = X_N = 0) = 1 - (1-h)^N$ which is $\approx 1 - \delta$ when $N \approx \tfrac{1}{h} \logdel$.
\end{remark}
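A quick numeric check of this computation (parameters arbitrary): with $h = 0.01$ and $\delta = 10^{-3}$, taking $N = \lceil \tfrac{1}{h}\log\tfrac{1}{\delta} \rceil$ drives the tail below $\delta$, but not below $\delta^2$, consistent with a sub-exponential (not sub-Gaussian) rate.

```python
import math

def tail(h, N):
    # For the Bernoulli example: P(tau > N) = P(X_1 = ... = X_N = 0) = (1-h)^N
    return (1.0 - h) ** N

h, delta = 0.01, 1e-3
N = math.ceil(math.log(1.0 / delta) / h)
assert tail(h, N) <= delta      # N ~ (1/h) log(1/delta) suffices...
assert tail(h, N) > delta ** 2  # ...and the decay is only exponential in N*h
```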
\subsection{Bounding the final runtime}\label{sssec:conv:runtime}
The key reason that Random Osborne is faster than Greedy Osborne (other than bit complexity) is its faster per-iteration runtime for sparse matrices: $O(m/n)$ in expectation by Observation~\ref{obs:osb-rand:iter}, rather than $O(n)$.
In the deterministic setting, the final runtime is at most the product of the per-iteration runtime and the number of iterations (cf.\ \S\ref{sec:greedy}).
However, obtaining a final runtime bound from a per-iteration runtime and an iteration-complexity bound requires additional tools in the stochastic setting.
A similar h.p. bound follows from a standard Chernoff bound. But proving an expectation bound is more nuanced.
The natural approach is Wald's equation, which states that the sum of a random number $\tau$ of i.i.d. random variables $Z_1,\dots,Z_\tau$ has expectation $\mathbb{E}\tau \, \mathbb{E} Z_1$, so long as $\tau$ is independent of $Z_1, \dots, Z_{\tau}$~\citep[Theorem 4.1.5]{Durrett10}. However, in our setting the per-iteration runtimes and the number of iterations are \emph{not} independent. Nevertheless, this dependence is weak enough for the identity to still hold. Formally, we require the following minor technical modifications of the per-iteration runtime bound in Observation~\ref{obs:osb-rand:iter} and of Wald's equation.
\begin{lemma}[Per-iteration runtime of Random Osborne, irrespective of history]\label{lem:osb-rand:iter}
Let $\mathcal{F}_{t-1}$ denote the sigma-algebra generated by the first $t-1$ iterates of Random Osborne. Conditional on $\mathcal{F}_{t-1}$, the $t$-th iteration requires $O(m/n)$ arithmetic operations in expectation.
\end{lemma}
\begin{lemma}[Minor modification of Wald's equation]\label{lem:wald}
Let $Z_1, Z_2, \dots$ be i.i.d. nonnegative integrable r.v.'s. Let $\tau$ be an integrable $\mathbb{N}$-valued r.v. satisfying $\mathbb{E}[Z_t | \tau \geq t ] = \mathbb{E}[Z_1]$ for each $t \in \mathbb{N}$. Then $\mathbb{E}[ \sum_{t=1}^{\tau} Z_t] = \mathbb{E} \tau \mathbb{E} Z_1$.
\end{lemma}
The proof of Lemma~\ref{lem:osb-rand:iter} is nearly identical to the proof of Observation~\ref{obs:osb-rand:iter}, and is thus omitted.
The proof of Lemma~\ref{lem:wald} is a minor modification of the proof of the standard Wald's equation in~\citep{Durrett10}; details in Appendix~\ref{app:pfs-prob}.
\section{Conclusion}\label{sec:conc}
We conclude with several open questions:
\begin{enumerate}
\item Can one establish matching runtime lower bounds for the variants of Osborne's algorithm? The only existing lower bound is~\citep[Theorem 5]{OstRabYou17}, and there is a large gap between this and the current upper bounds.
%
\item In Theorem~\ref{thm:osb-cyc}, we show a runtime bound for Cyclic Osborne that scales in the input size as roughly $m\sqrt{n}$. Can this be improved to near-linear time?
%
%
%
\item Empirically, Osborne's algorithm often significantly outperforms its worst-case bounds.
Is it possible to prove faster average-case runtimes for ``typical'' matrices arising in practice? (This is the analog to the third open question in~\citep[\S6]{SchSin17} for Max-Balancing.)
\end{enumerate}
\section{Parallelized variants of Osborne converge quickly}\label{sec:par}
Here we show fast runtime bounds for parallelized variants of Osborne's algorithm when given a coloring of $G_K$ (see \S\ref{ssec:prelim:par}). See \S\ref{ssec:intro:contributions} for a discussion of these results, and \S\ref{sssec:intro:overview-block} for a proof sketch.
\begin{theorem}[Convergence of Block Osborne variants]\label{thm:par:all}
Consider balancing a balanceable matrix $K \in \R_{\geq 0}^{n \times n}$ to accuracy $\varepsilon > 0$ given a coloring of $G_K$ of size $p$.
\begin{itemize}
\item Greedy Block Osborne solves $\textsc{ABAL}(K,\eps)$ in $O(\tfrac{p}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d)\log \kappa)$ rounds and $O(\tfrac{mp}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d)\log \kappa)$ total work.
\item Random Block Osborne solves $\textsc{ABAL}(K,\eps)$ in $O(\tfrac{p}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d) \log \kappa)$ rounds and $O(\tfrac{m}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d) \log \kappa)$ total work, in expectation and w.h.p.
\item Cyclic Block Osborne solves $\textsc{ABAL}(K,\eps)$ in $O(\tfrac{p^{3/2}}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d) \log \kappa)$ rounds and $O(\tfrac{m\sqrt{p}}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d) \log \kappa)$ total work, in expectation and w.h.p.
\end{itemize}
\end{theorem}
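Concretely, one round of a Block Osborne variant can be sketched as follows (names ours). Because no two coordinates in a color class are adjacent in $G_K$, the row and column sums seen by each coordinate in the class involve none of the other updated variables, so the updates can be computed and applied simultaneously.

```python
import math

def block_round(K, x, block):
    """Apply Osborne updates to all coordinates of one color class at once.
    Valid because K[k][k'] = K[k'][k] = 0 for k, k' in the same class, so
    the updates do not interact. Assumes each row/column in the class is
    nonempty."""
    n = len(K)
    delta = {}
    for k in block:  # these can run in parallel
        r = sum(math.exp(x[k] - x[j]) * K[k][j] for j in range(n) if j != k)
        c = sum(math.exp(x[i] - x[k]) * K[i][k] for i in range(n) if i != k)
        delta[k] = 0.5 * math.log(c / r)
    for k, d in delta.items():  # apply simultaneously
        x[k] += d
```

For example, a directed $4$-cycle is $2$-colorable via the classes $\{0,2\}$ and $\{1,3\}$, and alternating rounds on these two classes balance it rapidly.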
Note that the h.p. bounds in Theorem~\ref{thm:par:all} have exponentially decaying tails, just as for the non-parallelized variants (cf.\ Theorems~\ref{thm:osb-rand} and~\ref{thm:osb-cyc}; see also Remark~\ref{rem:prob-subexp}).
The proof of Theorem~\ref{thm:par:all} is nearly identical to the analysis of the analogous non-parallelized variants in \S\ref{sec:greedy}, \S\ref{sec:rand}, and \S\ref{sec:cyclic} above. For brevity, we only describe the differences. First, we show the rounds bounds. For Greedy and Random Block Osborne, the only difference is that the per-iteration potential decrease is now $n/p$ times larger than in Lemmas~\ref{lem:pot-dec:greedy} and~\ref{lem:pot-dec:rand}, respectively. Below we show this modification for Greedy Block Osborne; an identical argument applies for Random Block Osborne after taking an expectation (the inequality~\eqref{eq:prop:pot-dec:greedy-block} then becomes an equality).
\begin{lemma}[Potential decrease of Greedy Block Osborne]\label{lem:pot-dec:greedy-block}
Consider any $x \in \R^n$ for which the corresponding scaling $A := \diag(e^x) K \diag(e^{-x})$ is not $\varepsilon$-balanced. If $x'$ is the next iterate obtained from a Greedy Block Osborne update, then
\begin{align*}
\Phi(x) - \Phi(x')
\geq
\frac{1}{4p} \left( \frac{\Phi(x) - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2.
\end{align*}
\end{lemma}
\begin{proof}
Let $S_{\ell}$ be the chosen block. Using in order Lemma~\ref{lem:pot-dec}, the inequality $-\log(1 - z) \geq z$ (valid for all $z < 1$), the definition of Greedy Block Osborne, re-arranging, and then Proposition~\ref{prop:hell-lb},
\begin{align}
\Phi(x) - \Phi(x')
&=
- \sum_{k \in S_{\ell}} \log \left(1 - \left( \sqrt{r_k(P)} - \sqrt{c_k(P)} \right)^2 \right) \nonumber
\\ &\geq
\sum_{k \in S_{\ell}} \left( \sqrt{r_k(P)} - \sqrt{c_k(P)} \right)^2 \nonumber
\\ &\geq
\frac{1}{p} \sum_{\ell=1}^p \sum_{k \in S_{\ell}} \left( \sqrt{r_{k}(P)} - \sqrt{c_{k}(P)} \right)^2
\label{eq:prop:pot-dec:greedy-block}
\\ &=
\frac{1}{p} \sum_{k=1}^n \left( \sqrt{r_{k}(P)} - \sqrt{c_{k}(P)} \right)^2 \nonumber
\\ &\geq
\frac{1}{4p} \left( \frac{\Phi(x) - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2. \nonumber
\end{align}
\end{proof}
With this $n/p$ times larger per-iteration potential decrease, the number of rounds required by Greedy and Random Block Osborne is then $n/p$ times smaller than the number of Osborne updates required by their non-parallelized counterparts, establishing the desired rounds bounds in Theorem~\ref{thm:par:all}. The rounds bound for Cyclic Block Osborne is then $\sqrt{p}$ times that of Random Block Osborne by an identical coupling argument as for their non-parallelized counterparts (see \S\ref{sec:cyclic}).
\par Next, we describe the total-work bounds in Theorem~\ref{thm:par:all}. For Cyclic Block Osborne, each block of $p$ consecutive rounds forms a full cycle and therefore requires $\Theta(m)$ work. For Greedy and Random Block Osborne, each round takes work proportional to the number of nonzero entries in the updated block. For Random Block Osborne, this is $\Theta(m/p)$ on average by an argument identical to Observation~\ref{obs:osb-rand:iter}. For Greedy Block Osborne, this could be up to $O(m)$ in the worst case (although this is of course significantly improvable when the blocks have balanced sizes).
Finally, we note that combining Theorem~\ref{thm:par:all} with the extensive literature on parallelized algorithms for coloring bounded-degree graphs yields a fast parallelized algorithm for balancing $\Delta$-uniformly sparse matrices, i.e., matrices $K$ for which $G_K$ has max degree\footnote{
This is the degree in the undirected graph where $(i,j)$ is an edge if either $(i,j)$ or $(j,i)$ is an edge in $G_K$.
} $\Delta$.
\begin{cor}[Parallelized Osborne for uniformly sparse matrices]\label{cor:par:unif}
There is a parallelized algorithm that, given any $\Delta$-uniformly sparse matrix $K \in \R_{\geq 0}^{n \times n}$, computes an $\varepsilon$-approximate balancing in $O(\tfrac{\Delta}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d)\log \kappa)$ rounds and $O(\tfrac{m}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d)\log \kappa)$ total work, both in expectation and w.h.p.
\end{cor}
\begin{proof}
The algorithm of~\citep{BarElkKuh14} computes a $\Delta+1$ coloring in $O(\Delta) + \half \log^*n$ rounds, where $\log^*$ is the iterated logarithm. Run Random Block Osborne with this coloring, and apply Theorem~\ref{thm:par:all}.
\end{proof}
We remark that a coloring of size $\Delta+1$ can alternatively be computed by a simple greedy algorithm in $O(m)$ time. Although sequential, this simpler algorithm may be more practical.
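This greedy coloring is a one-pass scan, sketched below (standard algorithm, our code): each vertex receives the smallest color not used by an already-colored neighbor, which never exceeds its degree, giving at most $\Delta+1$ colors in $O(m)$ total work.

```python
def greedy_coloring(adj):
    """Greedy (Delta+1)-coloring of an undirected graph given as an
    adjacency dict. Each vertex gets the smallest color absent among its
    already-colored neighbors, so no color ever exceeds the max degree."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```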
\section{Deferred proofs}\label{app:pfs}
\subsection{Probabilistic helper lemmas}\label{app:pfs-prob}
Several times we make use of the following standard (martingale) version of multiplicative Chernoff bounds, see, e.g.,~\citep[\S4]{MitUpf17}.
\begin{lemma}[Multiplicative Chernoff Bounds]\label{lem:chernoff}
Let $X_1, \dots, X_n$ be random variables supported in $[0,1]$, adapted to a filtration $\mathcal{F}_0=\{\emptyset,\Omega \},\mathcal{F}_1, \dots, \mathcal{F}_n$, and satisfying $\mathbb{E}[X_i \mid \mathcal{F}_{i-1}] = p$ for each $i \in [n]$.
Denote $X := \sum_{i=1}^n X_i$ and $\mu := \mathbb{E} X$. Then
\begin{itemize}
\item (Lower tail.) For any $\Delta \in (0,1)$, $
\Prob\left( X \leq (1 - \Delta)\mu \right) \leq
e^{-\Delta^2 \mu / 2}
$.
\item (Upper tail.) For any $\Delta \geq 1$,
$
\Prob\left( X \geq (1 + \Delta) \mu \right) \leq
e^{-\Delta \mu / 3}
$.
\end{itemize}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:prob}]
\underline{Expectation bound.} Define $Z_t := Y_t + ht$. Then $Z_{t}^{\tau} := Z_{t \wedge \tau}$ is a stopped supermartingale with respect to $\mathcal{F}_t$. Thus by Doob's Optional Stopping Theorem~\citep{Durrett10} (which may be invoked since the increments are a.s. bounded),
\[
A \geq \mathbb{E} Z_0 \geq \mathbb{E} Z_{\tau-1} = \mathbb{E} Y_{\tau-1} + h (\mathbb{E} \tau - 1) \geq a + h(\mathbb{E}\tau - 1).
\]
Re-arranging yields $\mathbb{E}[\tau] \leq \tfrac{A - a}{h} + 1$, as desired.
\par \underline{High probability bound.}
For shorthand, denote $B := 2(A-a)$ and $N := \lceil 3B/h \logdel \rceil$.
By definition of $\tau$, telescoping, and then the bound on $Y_0$,
\begin{align}
\Prob\left( \tau > N \right)
=
\Prob\left( Y_N > a \right)
=
\Prob\left( \sum_{t=1}^N (Y_{t-1} - Y_t) < Y_0 - a \right)
\leq
\Prob\left( \sum_{t=1}^N (Y_{t-1} - Y_t) < A - a \right).
\label{eq-pf:prob:hp-1}
\end{align}
To bound~\eqref{eq-pf:prob:hp-1}, define the process $X_t := (Y_{t-1} - Y_t) / B$. Each $X_t$ is a.s. bounded within $[0,1]$ by the bounded-difference assumption on $Y_t$. Thus by an application of the lower-tail Chernoff bound in Lemma~\ref{lem:chernoff} (combined with a simple stochastic domination argument, since $\mathbb{E}[X_t \mid \mathcal{F}_{t-1}] \geq h/B$ rather than holding with equality), and then the choice of $N$, we conclude that
\begin{align}
\Prob\left( \sum_{t=1}^N (Y_{t-1} - Y_t) < A - a \right)
=
\Prob\left( \sum_{t=1}^N X_t < \frac{A - a}{B} \right)
\leq
\exp\left( -\left( 1 - \frac{A-a}{Nh} \right)^2 \frac{Nh}{2B} \right)
\leq \delta.
\label{eq-pf:prob:hp-2}
\end{align}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:wald}]
Observe that
$
\mathbb{E}[ \sum_{t=1}^{\tau} Z_t]
=
\sum_{T=1}^{\infty}
\mathbb{E}[
\sum_{t=1}^{\tau} Z_t \mathds{1}_{\tau = T}
]
=
\sum_{T=1}^{\infty}
\sum_{t=1}^{T}
\mathbb{E}[
Z_t \mathds{1}_{\tau = T}
]
=
\sum_{t=1}^{\infty}
\sum_{T=t}^{\infty}
\mathbb{E}[
Z_t \mathds{1}_{\tau = T}
]
=
\sum_{t=1}^{\infty}
\mathbb{E}[
Z_t \mathds{1}_{\tau \geq t}
]$,
where the third equality holds because the assumption $Z_t \geq 0$ allows us to invoke Fubini's Theorem. Now since $
\mathbb{E}\left[
Z_t \mathds{1}_{\tau \geq t}
\right]
=
\mathbb{E}\left[
Z_t | \tau \geq t
\right] \Prob(\tau \geq t)
= \mathbb{E}[Z_t] \Prob(\tau \geq t)$ by assumption, we conclude that $\mathbb{E} [\sum_{t=1}^{\tau} Z_t] = \mathbb{E}[Z_1] (\sum_{t=1}^{\infty} \Prob (\tau \geq t)) = \mathbb{E}[Z_1] \mathbb{E}[\tau]$.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:osb-rand}}\label{app:rand-pf}
Let $x^{(0)} = \mathbf{0}, x^{(1)},x^{(2)},\dots$ denote the iterates, and $\{\mathcal{F}_t := \sigma(x^{(1)}, \dots, x^{(t)})\}_t$ denote the corresponding filtration. Define the stopping time $\tau := \min\{t \in \mathbb{N}_0 : \diag(e^{x^{(t)}})K\diag(e^{-x^{(t)}}) \text{ is } \varepsilon\text{-balanced} \}$.
By Lemma~\ref{lem:pot-dec:rand},
\begin{align}
\mathbb{E}\left[ \Phi(x^{(t)}) - \Phi(x^{(t+1)}) \, | \, \mathcal{F}_t, t \leq \tau \right]
\geq
\frac{1}{4n} \left( \frac{\Phi(x^{(t)}) - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2.
\label{eq-pf:pot-dec:rand}
\end{align}
\paragraph*{Case 1: $\boldsymbol{\varepsilon^{-1} \leq d}$.}
Here, we establish the $O(m \varepsilon^{-2} \log \kappa)$ runtime bound both in expectation and w.h.p. To this end, let $T_t$ denote the runtime of iteration $t$, where (solely for analysis purposes) we consider also $t > \tau$ as if the algorithm had continued after convergence. Define $Y_t := \Phi(x^{(t)})$ for $t \leq \tau$, and $Y_t := \Phi(x^{(t)}) - (t - \tau)\varepsilon^2/4n$ for $t > \tau$. By~\eqref{eq-pf:pot-dec:rand}, we have
\begin{align}
\mathbb{E}\left[ Y_t - Y_{t+1} \, | \, \mathcal{F}_t, Y_t \geq 0 \right] \geq \frac{\varepsilon^2}{4n}.
\label{eq-pf:pot-dec:rand:case1}
\end{align}
For both expected and h.p. bounds below, we apply Lemma~\ref{lem:prob} to the process $Y_t$ with $A = \log \kappa$ (by Lemma~\ref{lem:pot-init}), $a = 0$, and $h = \varepsilon^2/4n$ (by~\eqref{eq-pf:pot-dec:rand:case1}).
\par \underline{Expectation bound.}
The expectation bound in Lemma~\ref{lem:prob} implies
$\mathbb{E}[\tau] \leq 4n \varepsilon^{-2} \log \kappa + 1$.
Since each iteration has expected runtime $\mathbb{E}[ T_t | \mathcal{F}_{t-1}] = O(m/n)$ by Lemma~\ref{lem:osb-rand:iter}, Lemma~\ref{lem:wald} ensures that the total expected runtime is $\mathbb{E} T = \mathbb{E} [\sum_{t=1}^{\tau} T_t] = \mathbb{E} \tau \mathbb{E} T_1 = O(m \varepsilon^{-2} \log \kappa)$.
\par \underline{H.p. bound.} For shorthand, denote $U := 24 n \varepsilon^{-2} \log \kappa \logtdel$. The h.p. bound in Lemma~\ref{lem:prob} implies that $\Prob(\tau > U) \leq \delta/2$.
By Lemma~\ref{lem:osb-rand:iter}, there is some constant $c > 0$ such that $\mathbb{E} [ T_t] \leq cm/n$. Since the $T_t$ are independent, a Chernoff bound (Lemma~\ref{lem:chernoff}) implies that $\Prob( \sum_{t=1}^U T_t \geq 2cUm/n) \leq \delta/2$.
Therefore, a union bound implies that with probability at least $1 - \delta$, the total runtime $T = \sum_{t=1}^{\tau} T_t$ is at most $2cUm/n = 48c m \varepsilon^{-2} \log \kappa \logtdel$.
\paragraph*{Case 2: $\boldsymbol{\varepsilon^{-1} > d}$.} Here, we establish the $O(m d \varepsilon^{-1} \log \kappa)$ runtime bound both in expectation and w.h.p. Define $\alpha, \tau_1$, $\tau_2$, $\tau_{1,i}$, and $\phi_i$ as in the analysis of Greedy Osborne (see \S\ref{sec:greedy}).
\par \underline{Expectation bound.} To bound $\mathbb{E} \tau_2$, define $Y_t$ and
apply Lemma~\ref{lem:prob} as in case $1$ above (except now with $A = \varepsilon d \log \kappa$) to establish that
\begin{align}
\mathbb{E}\tau_2
\leq
\frac{\varepsilon d \log \kappa}{\varepsilon^2/4n} + 1 =
\frac{4n d \log \kappa}{\varepsilon} + 1.
\label{eq-pf:rand:exp:2}
\end{align}
Next, we bound $\mathbb{E} \tau_1$. Consider subphase $i \in [N]$. Applying Lemma~\ref{lem:prob} to the potential process restarted at the start of the $i$-th subphase, with $A = \phi_{i-1}$, $a = \phi_i$, and $h = \phi_i^2/(4n d^2 \log^2 \kappa)$ from~\eqref{eq-pf:pot-dec:rand},
$
\mathbb{E}\tau_{1,i}
\leq
\frac{4n d^2 \log^2 \kappa}{\phi_i} + 1
$.
Thus $\mathbb{E}\tau_1
=
\sum_{i=1}^N
\mathbb{E}\tau_{1,i}
\leq
4n d^2 \log^2 \kappa (\sum_{i=1}^N \frac{1}{\phi_i}) + N$. Since $\sum_{i=1}^N \frac{1}{\phi_i} \leq \frac{4}{\varepsilon d \log \kappa}$,
\begin{align}
\mathbb{E}\tau_1
\leq
\frac{16n d \log \kappa}{\varepsilon} + \left\lceil \log_2 \frac{1}{\varepsilon d} \right\rceil .
\label{eq-pf:rand:exp:1}
\end{align}
Combining~\eqref{eq-pf:rand:exp:2} and~\eqref{eq-pf:rand:exp:1} establishes that
$ \mathbb{E}\tau= \mathbb{E}\tau_1 + \mathbb{E}\tau_2 \leq 21n d \varepsilon^{-1} \log \kappa$.
By the $O(m/n)$ per-iteration expected runtime bound in Lemma~\ref{lem:osb-rand:iter} and the variant of Wald's equation in Lemma~\ref{lem:wald}, the total expected runtime is therefore at most $\mathbb{E} T \leq O(m/n) \cdot \mathbb{E}\tau = O(m d \varepsilon^{-1} \log \kappa)$.
\par \underline{H.p. bound.}
By Lemma~\ref{lem:prob}, $\Prob(\tau_2 > 24n d \varepsilon^{-1} \log \kappa \log\tfrac{4}{\delta}) \leq \delta/4$.
To bound the first phase, define $p_i := \delta / 2^{N-i+3}$ for each $i \in [N]$. By Lemma~\ref{lem:prob}, $\Prob( \tau_{1,i} >
(24nd^2\log^2 \kappa \log 1/p_i)/\phi_i
) \leq p_i$. Note that $\sum_{i=1}^N \tfrac{\log 1/p_i}{\phi_i}
=
\tfrac{1}{\phi_N} \sum_{j=0}^{N-1} 2^{-j}( \log 8/\delta + j\log 2)
\leq
\tfrac{1}{\phi_N} \sum_{j=0}^{\infty} 2^{-j}( \log 8/\delta + j \log 2)
=
\tfrac{2 \log 8/\delta + 2\log 2}{\phi_N}
\leq
\tfrac{6 \log 8/\delta}{ \varepsilon d \log \kappa}$.
Thus by a union bound, with probability at least $1 - \delta/4$ (using $\sum_{i=1}^N p_i \leq \delta/4$), the first phase has length $\tau_1 = \sum_{i=1}^N \tau_{1,i}
\leq 144 n d \varepsilon^{-1} \log \kappa \log\tfrac{8}{\delta}$.
We conclude by a further union bound that, with probability at least $1 - \delta/2$, the total number of iterations is at most $\tau = \tau_1 + \tau_2 \leq 168 n d \varepsilon^{-1} \log \kappa \log\tfrac{8}{\delta}$. The proof is complete by an identical Chernoff bound argument as in case 1 above.
\subsection{Proofs for \S\ref{sec:bits}}\label{app:pf:bits}
\begin{proof}[Proof of Lemma~\ref{lem:bits:bal-app}]
Let $A := \diag(e^x) K \diag(e^{-x})$ denote the corresponding scaling of $K$, and $P := A / \sum_{ij} A_{ij}$ denote its normalization. Similarly for $\tilde{A}$ and $\tilde{P}$. Note that each nonzero entry $\tilde{P}_{ij}$ approximates $P_{ij}$ to a multiplicative factor within $[(1-\gamma)/(1+\gamma), (1+\gamma)/(1-\gamma)] \subset [1-3\gamma, 1+3\gamma]$, where the last step used the assumption that $\gamma < 1/3$.
Thus each row marginal $r_k(\tilde{P})$ approximates $r_k(P)$ to the same multiplicative factor, and similarly for the column marginals.
Since $P$ and $\tilde{P}$ are normalized, this implies the additive approximations $|r_k(P) - r_k(\tilde{P})| \leq 3\gamma$, and similarly for the columns. Thus by the triangle inequality, $\|r(P) - c(P)\|_1
\leq
\|r(\tilde{P}) - c(\tilde{P})\|_1
+ 6n\gamma$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:bits:lse-lip}]
Let $x,y \in \mathbb{R}^n$. By the elementary inequality that $\min_i (a_i/b_i) \leq (\sum_{i=1}^n a_i) / (\sum_{i=1}^n b_i) \leq \max_i (a_i/b_i)$ for any $a,b \in \R_{> 0}^n$,
\[
\log \sum_{i=1}^n e^{x_i} - \log \sum_{i=1}^n e^{y_i}
=
\log \frac{\sum_{i=1}^n e^{x_i}}{\sum_{i=1}^n e^{y_i}}
\leq
\log \max_i e^{x_i - y_i}
=
\max_i \left( x_i - y_i \right)
\leq
\|x-y\|_{\infty},
\]
and similarly $\log \sum_{i=1}^n e^{x_i} - \log \sum_{i=1}^n e^{y_i} \geq \log \min_i e^{x_i - y_i} = \min_i \left( x_i - y_i \right) \geq -\|x-y\|_{\infty}$. We conclude that $|\log \sum_{i=1}^n e^{x_i} - \log \sum_{i=1}^n e^{y_i}| \leq \|x-y\|_{\infty}$.
\end{proof}
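A small numeric illustration of this $\ell_\infty$-nonexpansiveness (inputs arbitrary):

```python
import math

def lse(z):
    """Numerically stable log-sum-exp via the usual shift-by-max trick."""
    m = max(z)
    return m + math.log(sum(math.exp(v - m) for v in z))

x = [0.0, 1.0, -2.0, 3.0]
y = [0.5, 0.2, -1.0, 3.1]
gap = abs(lse(x) - lse(y))
bound = max(abs(a - b) for a, b in zip(x, y))
assert gap <= bound  # |lse(x) - lse(y)| <= ||x - y||_inf
```

The bound is tight when $y = x + t\mathbf{1}$, since log-sum-exp shifts by exactly $t$ under translation.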
\begin{proof}[Proof of Lemma~\ref{lem:bits:phi-lip}]
Let $x,y \in \R^n$. Clearly $|(x_i - x_j) - (y_i - y_j)| \leq 2\|x-y\|_{\infty}$ for any $i,j \in [n]$. Thus by Lemma~\ref{lem:bits:lse-lip}, $|\Phi(x) - \Phi(y)| = |\log (\sum_{(i,j) \in \operatorname{supp}(K)}e^{x_i - x_j + \log K_{ij}}) - \log (\sum_{(i,j) \in \operatorname{supp}(K)}e^{y_i - y_j + \log K_{ij}})| \leq 2\|x-y\|_{\infty}$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:bits:lse}]
Since $\log\sum_{i=1}^n e^{z_i} = \max_{j} z_j + \log \sum_{i=1}^n e^{z_i - (\max_j z_j)}$, we may assume without loss of generality after translation that each $z_i \leq 0$ and at least one $z_i = 0$. Since we need only approximate $\log \sum_{i=1}^n e^{z_i}$ to $\pm \tau$ accuracy, we can truncate each $z_i$ to additive accuracy $\pm O(\tau)$ by Lemma~\ref{lem:bits:lse-lip}, and also drop all $z_i$ below $-\log\tfrac{n}{O(\tau)}$. To summarize, in order to compute $\log \sum_{i=1}^n e^{z_i}$ to $\pm \tau$, it suffices to compute $\log\sum_{i=1}^k e^{\tilde{z}_i}$ to $\pm O(\tau)$
where $k \leq n$, each $\tilde{z}_i \in [-\log\tfrac{n}{O(\tau)}, 0]$, and each $\tilde{z}_i$ is represented by a number with at most $O(\log(\tfrac{\log (n/\tau)}{\tau} )) = O(\log\tfrac{1}{\tau} + \log \log n)$ bits. Now to compute $\log \sum_{i=1}^k e^{\tilde{z}_i}$ to $\pm O(\tau)$, we can tolerate computing each $e^{\tilde{z}_i}$ to multiplicative accuracy $1 \pm O(\tau)$. Thus since $e^{\tilde{z}_i} \geq O(\tau/n)$, we can tolerate computing each $e^{\tilde{z}_i}$ to additive accuracy $\pm O(\tau^2/n)$. Since $e^{\tilde{z}_i} \in
[0,1]$, it therefore suffices to compute $e^{\tilde{z}_i}$ using $O(\log \tfrac{1}{\tau^2/n}) = O(\log \tfrac{n}{\tau})$ bits of precision.
\end{proof}
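The proof's reductions translate into a short procedure (ours, with simplified constants): shift so the maximum is zero, drop summands below $-\log(n/\tau)$, and round the survivors to multiples of $\tau$.

```python
import math

def lse_lowprec(z, tau):
    """Approximate log-sum-exp to additive error O(tau): translate so the
    max is 0, drop terms below -log(n/tau) (their total mass is < tau),
    and round the rest to multiples of tau (a <= tau/2 shift per term)."""
    n = len(z)
    m = max(z)
    cutoff = -math.log(n / tau)
    kept = [round((v - m) / tau) * tau for v in z if v - m >= cutoff]
    return m + math.log(sum(math.exp(v) for v in kept))
```

Each kept value lies in $[-\log(n/\tau), 0]$ and is a multiple of $\tau$, so it fits in $O(\log\tfrac{1}{\tau} + \log\log n)$ bits, matching the proof.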
\section{Bit complexity}\label{sec:bits}
So far we have assumed exact arithmetic for simplicity of exposition; here we address numerical precision issues. Note that Osborne iterates can have variation norm up to $O(n \log \kappa)$; see~\citep[\S3]{KalKhaSho97} and Lemma~\ref{lem:bal:R}. For such iterates, operations on the current balancing $\diag(e^{x})K\diag(e^{-x})$---namely, computing row and column sums for an Osborne update---na\"ively require arithmetic operations on $O(n \log \kappa)$-bit numbers.
Here, we show that there is an implementation that uses numbers with only logarithmically few bits and still achieves the same runtime bounds.\footnote{
Note that Theorem~\ref{thm:osb-all:bits} outputs only the balancing vector $x \in \mathbb{R}^n$, not the approximately balanced matrix $A = \diag(e^x)K\diag(e^{-x})$. If applications require $A$, this can be computed to polynomially small entrywise additive error using only logarithmically many bits; this is sufficient, e.g., for the application of approximating Min-Mean-Cycle~\citep[\S5.3]{AltPar20mmc}.}
\par Below, we assume for simplicity that each input entry $K_{ij}$ is represented using $O(\log \tfrac{K_{\max}}{K_{\min}} + \log \tfrac{n}{\varepsilon})$ bits. (Or $O(\log \log \tfrac{K_{\max}}{K_{\min}} + \log \tfrac{n}{\varepsilon})$ bits if the input is given on the logarithmic scale $\log K_{ij}$, for $(i,j) \in \operatorname{supp}(K)$; see Remark~\ref{rem:bits-log}.) This assumption is essentially without loss of generality since, after a possible rescaling and truncation of entries to $\pm \varepsilon K_{\min}/n$---which does not change the problem of approximately balancing $K$ to $O(\varepsilon)$ accuracy by Lemma~\ref{lem:bits:bal-app}---all inputs are represented using this many bits.
\begin{theorem}[Osborne variants with low bit-complexity]\label{thm:osb-all:bits}
There is an implementation of Random Osborne (respectively, Cyclic Osborne, Random Block Osborne, and Cyclic Block Osborne) that uses arithmetic operations over $O(\log \tfrac{n}{\varepsilon} + \log \tfrac{K_{\max}}{K_{\min}})$-bit numbers and achieves the same runtime bounds as in Theorem~\ref{thm:osb-rand} (respectively, Theorem~\ref{thm:osb-cyc}, ~\ref{thm:par:all}, and again~\ref{thm:par:all}).
\par Moreover, if the matrix $K$ is given as input through the logarithms of its entries $\{\log K_{ij}\}_{(i,j) \in \operatorname{supp}(K)}$, this bit-complexity is improvable to $O(\log \tfrac{n}{\varepsilon} + \log \log \tfrac{K_{\max}}{K_{\min}})$.
\end{theorem}
\par This result may be of independent interest since the aforementioned bit-complexity issues of Osborne's algorithm are well-known to cause numerical precision issues in practice and have been difficult to analyze theoretically. We note that~\citep[\S5]{OstRabYou17} shows a similar bit-complexity of $O(\log (n \kappa/\varepsilon))$ for an Osborne variant they propose; however, that variant has runtime scaling with $n^2$ rather than $m$ (see footnote~\ref{ft:ost}). Moreover, our analysis is relatively simple and extends to the related Sinkhorn algorithm for Matrix Scaling (see Appendix~\ref{app:sink}).
\par Before proving Theorem~\ref{thm:osb-all:bits}, we make several remarks.
\begin{remark}[Log-domain input]\label{rem:bits-log}
Theorem~\ref{thm:osb-all:bits} gives an improved bit-complexity if $K$ is input through the \textit{logarithms} of its entries. This is useful in an application such as Min-Mean-Cycle where the input is a weighted adjacency matrix $W$, and the matrix $K$ to balance is the entrywise exponential of (a constant times) $W$~\citep[\S5]{AltPar20mmc}.
\end{remark}
\begin{remark}[Greedy Osborne requires large bit-complexity]\label{rem:greedy-bits}
All known implementations of Greedy Osborne require bit-complexity at least $\tilde{\Omega}(n)$~\citep{OstRabYou17}. The obstacle is the computation~\eqref{def:greedy} of the next update coordinate, which requires computing the \emph{difference} of two log-sum-exp's. It can be shown that computing this difference to a constant multiplicative error suffices. However, this still requires at least computing the sign of the difference, which importantly, precludes dropping small summands in each log-sum-exp---a key trick used for computing an individual log-sum-exp to additive error with low bit-complexity (Lemma~\ref{lem:bits:lse}).
\end{remark}
We now turn to the proof of Theorem~\ref{thm:osb-all:bits}. For brevity, we establish this only for Random Osborne; the proofs for the other variants are nearly identical. Our implementation of Random Osborne makes three minor modifications to the exact-arithmetic implementation in Algorithm~\ref{alg:osb}. We emphasize that these modifications are in line with standard implementations of Osborne's algorithm in practice, see Remark~\ref{rem:logdomain}.
\begin{enumerate}
\item In a pre-processing step, compute $\{\log K_{ij}\}_{(i,j) \in \operatorname{supp}(K)}$ to additive accuracy $\gamma = \Theta(\varepsilon/n)$.
\item Truncate each Osborne iterate $x^{(t)}$ entrywise to additive accuracy $\tau = \Theta(\varepsilon^2/n)$.
\item Compute Osborne updates to additive accuracy $\tau$ by using log-sum-exp computation tricks (Lemma~\ref{lem:bits:lse}) and using $K_{ij}$ only through the truncated values $\log K_{ij}$ computed in step $1$.
\end{enumerate}
Step 1 is performed only when $K$ is not already input on the logarithmic scale, and is responsible for the $O(\log (K_{\max}/K_{\min}))$ bit-complexity.
To argue about these modifications, we collect several helpful observations, the proofs of which are simple and deferred to Appendix~\ref{app:pf:bits} for brevity.
\begin{lemma}[Approximately balancing an approximate matrix suffices]\label{lem:bits:bal-app}
Let $K,\tilde{K} \in \R_{\geq 0}^{n \times n}$ such that $\operatorname{supp}(K) = \operatorname{supp}(\tilde{K})$ and
the ratio $K_{ij}/\tilde{K}_{ij}$ of nonzero entries is bounded in $[1 - \gamma, 1 + \gamma]$ for some $\gamma \in (0,1/3)$. If $x$ is an $\varepsilon$-balancing of $K$, then $x$ is an $(\varepsilon + 6n\gamma)$-balancing of $\tilde{K}$.
\end{lemma}
\begin{lemma}[Stability of log-sum-exp]\label{lem:bits:lse-lip}
The function $z \mapsto \log (\sum_{i=1}^n e^{z_i})$ is $1$-Lipschitz with respect to the $\ell_{\infty}$ norm on $\R^n$.
\end{lemma}
\begin{lemma}[Stability of potential function]\label{lem:bits:phi-lip}
Let $K \in \R_{\geq 0}^{n \times n}$. Then $\Phi(x) := \log (\sum_{ij}e^{x_i - x_j} K_{ij})$ is $2$-Lipschitz with respect to the $\ell_{\infty}$ norm on $\R^n$.
\end{lemma}
\begin{lemma}[Computing log-sum-exp with low bit-complexity]\label{lem:bits:lse}
Let $z_1, \dots, z_n \in \mathbb{R}$ and $\tau > 0$ be given as input, each represented using $b$ bits. Then $\log(\sum_{i=1}^n e^{z_i})$ can be computed to $\raisebox{.2ex}{$\scriptstyle\pm$} \tau$ in $O(n)$ operations on $O(b + \log(\tfrac{n}{\tau}))$-bit numbers.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:osb-all:bits}]
\noindent\textbf{Error and runtime analysis.}
\begin{enumerate}
\item Let $\tilde{K}$ be the matrix whose $ij$-th entry is the exponential of the truncated $\log K_{ij}$ for $(i,j) \in \operatorname{supp}(K)$, and $0$ otherwise. The effect of step (1) is to balance $\tilde{K}$ rather than $K$. But by Lemma~\ref{lem:bits:bal-app}, this suffices since an $O(\varepsilon)$ balancing of $\tilde{K}$ is an $O(\varepsilon + n \gamma) = O(\varepsilon)$ balancing of $K$.
\item[2,3.] The combined effect is that, given the previous Osborne iterate $x^{(t-1)}$, the next iterate $x^{(t)}$ differs from the value it would have in the exact-arithmetic implementation by $O(\tau)$ in $\ell_{\infty}$ norm. By Lemma~\ref{lem:bits:phi-lip}, this changes $\Phi(x^{(t)})$ by at most $O(\tau)$. By appropriately choosing the constant in the definition of $\tau = \Theta(\varepsilon^2/n)$, this reduces each iteration's expected progress (Lemma~\ref{lem:pot-dec:rand}) by at most a factor of $2$. The proof of Theorem~\ref{thm:osb-rand} then proceeds otherwise unchanged, resulting in a final runtime at most $2$ times larger.
\end{enumerate}
\textbf{Bit-complexity analysis.}
\begin{enumerate}
\item Consider $(i,j) \in \operatorname{supp}(K)$.
Since $\log K_{ij} \in [\log K_{\min}, \log K_{\max}]$ and these values are stored to additive accuracy $\gamma = \Theta(\varepsilon/n)$, the bit-complexity for storing $\log K_{ij}$ is
\[
O\left( \log \frac{\log K_{\max} - \log K_{\min}}{\gamma} \right) = O\left(\log \frac{n}{\varepsilon} + \log \log \frac{K_{\max}}{K_{\min}} \right).
\]
\item Since the coordinates of each Osborne iterate are truncated to additive accuracy $\tau = \Theta(\varepsilon^2/n)$ and have modulus at most $d \log \kappa$ by Lemma~\ref{lem:bal:R}, they require bit-complexity
\[
O\left( \log \frac{(d \log \kappa) - (-d \log \kappa)}{\tau} \right)
=
O\left(\log \frac{n}{\varepsilon} + \log \log \frac{K_{\max}}{K_{\min}} \right).
\]
\item By Lemma~\ref{lem:bits:lse}, the Osborne update requires bit-complexity $O(\log \frac{n}{\tau} ) = O(\log \frac{n}{\varepsilon})$.
\end{enumerate}
\end{proof}
\section{Connections to Matrix Scaling and Sinkhorn's algorithm}\label{app:sink}
Here, we continue the discussion in Remark~\ref{rem:scaling} by briefly mentioning two further connections between Osborne's algorithm for Matrix Balancing and Sinkhorn's algorithm for Matrix Scaling.
\paragraph*{Parallelizability.} In contrast to Osborne's algorithm for Matrix Balancing, Sinkhorn's algorithm for Matrix Scaling is so-called ``embarrassingly parallelizable''. We briefly explain this in terms of the connection between parallelizability and graph coloring (see \S\ref{ssec:prelim:par}). For the Matrix Scaling problem on $K \in \R_{\geq 0}^{m \times n}$, the associated graph has vertex set $L \cupdot R$ where $|L| = m$ and $|R| = n$, and edge set $\{(i,j) : i \in [m], j \in [n], K_{ij} \neq 0 \}$. This graph is \emph{bipartite} and thus trivially \emph{$2$-colorable}, which is why Sinkhorn's algorithm can safely update all coordinates in $L$ or $R$ in parallel.
\paragraph*{Bit-complexity.} In Theorem~\ref{thm:osb-all:bits}, we showed that many variants of Osborne's algorithm can be implemented over numbers with logarithmically few bits, and still achieve the same runtime bounds. By a nearly identical argument, it can be shown that the analogous result applies to Sinkhorn's algorithm. This saves a similar factor of up to roughly $O(n)$ in the bit-complexity for poorly connected inputs. Moreover, this modification is also helpful for well-connected inputs, in particular for the application of Optimal Transport, where the matrix $K$ to scale is dense yet has exponentially large entries which require bit-complexity $O(L(\log n)/\varepsilon)$ in the notation of~\citep[Remark 1]{AltWeeRig17}. This modification reduces the bit-complexity to only logarithmic size $O(\log(Ln/\varepsilon))$.
\section{Preliminaries}\label{sec:prelim}
\subsection{Notation}\label{ssec:prelim:notation}
Throughout, we reserve $K \in \R_{\geq 0}^{n \times n}$ for the matrix we seek to balance, $\varepsilon > 0$ for the balancing accuracy, $m$ for the number of nonzero entries in $K$, $G_K$ for the graph associated to $K$, and $d$ for the diameter of $G_K$. The support, maximum entry, minimum nonzero entry, and condition number of $K$ are respectively denoted by $\operatorname{supp}(K) = \{(i,j) : K_{ij} > 0\}$, $K_{\max} = \max_{ij} K_{ij}$, $K_{\min} = \min_{(i,j) \in \operatorname{supp}(K)} K_{ij}$, and $\kappa = (\sum_{ij} K_{ij})/K_{\min}$.
The $\tilde{O}$ notation suppresses polylogarithmic factors in $n$ and $1/\varepsilon$.
The all-ones and all-zeros vectors in $\R^n$ are respectively denoted by $\mathbf{1}$ and $\mathbf{0}$. Let $v \in \R^n$. The $\ell_1$, $\ell_{\infty}$, and variation norm of $v$ are respectively $\|v\|_1 = \sum_{i=1}^n |v_i|$, $\|v\|_{\infty} = \max_{i \in [n]} |v_i|$, and $\var{v} = \max_i v_i - \min_j v_j$.
We denote the entrywise exponentiation of $v$ by $e^v \in \mathbb{R}^n$, and the diagonalization of $v$ by $\diag(v) \in \R^{n \times n}$.
The set of discrete probability distributions on $n$ atoms is identified with the simplex $\Delta_n = \{p \in \R_{\geq 0}^n : \sum_{i=1}^n p_i = 1 \}$. Let $\mu,\nu \in \Delta_n$. Their Hellinger distance is $\mathsf{H}(\mu,\nu) =
\sqrt{ \frac{1}{2} \sum_{\ell=1}^n (\sqrt{\mu_\ell} - \sqrt{\nu_\ell})^2 }$, and their total variation distance is $\mathsf{TV}(\mu,\nu) = \|\mu - \nu\|_1/2$.
We abbreviate ``with high probability'' by w.h.p., ``high probability'' by h.p., and ``almost surely'' by a.s.
We denote the minimum of $a,b \in \mathbb{R}$ by $a \wedge b$, and the maximum by $a \vee b$.
Logarithms take base $e$ unless otherwise specified.
All other specific notation is introduced in the main text.
\subsection{Matrix Balancing}\label{ssec:prelim:basic}
The formal definition of the (approximate) Matrix Balancing problem is in the ``log domain'' (i.e., output $x \in \R^n$ rather than $\diag(e^x)$). This is in part to avoid bit-complexity issues (see \S\ref{sec:bits}).
\begin{defin}[Matrix Balancing]\label{def:bal}
The \emph{Matrix Balancing problem} $\textsc{BAL}(K)$ for input $K \in \R_{\geq 0}^{n \times n}$ is to compute a vector $x \in \R^n$ such that $\diag(e^x) K \diag(e^{-x})$ is balanced.
\end{defin}
\begin{defin}[Approximate Matrix Balancing]\label{def:abal}
The \emph{approximate Matrix Balancing problem} $\textsc{ABAL}(K,\eps)$ for inputs $K \in \R_{\geq 0}^{n \times n}$ and $\varepsilon > 0$ is to compute a vector $x \in \R^n$ such that $\diag(e^x) K \diag(e^{-x})$ is $\varepsilon$-balanced (see~\eqref{eq-def:balance}).
\end{defin}
$K \in \R_{\geq 0}^{n \times n}$ is said to be \emph{balanceable} if $\textsc{BAL}(K)$ has a solution. It is known that non-balanceable matrices can be approximately balanced to arbitrary precision (i.e., $\textsc{ABAL}$ has a solution for every $K \in \R_{\geq 0}^{n \times n}$ and $\varepsilon > 0$), and moreover that this is efficiently reducible to approximately balancing balanceable matrices, see, e.g.,~\citep{Chen00,CohMadTsiVla17}.
Thus, following the literature, we assume throughout that $K$ is balanceable. In the sequel, we make use of the following classical characterization of balanceable matrices in terms of their sparsity patterns.
\begin{lemma}[Characterization of balanceability]\label{lem:balanceable}
$K \in \R_{\geq 0}^{n \times n}$ is balanceable if and only if it is irreducible---i.e., if and only if $G_K$ is strongly connected~\citep{Osborne60}.
\end{lemma}
\subsection{Matrix Balancing as convex optimization}\label{ssec:prelim:convex}
Key to our analysis---as well as to much of the other Matrix Balancing literature (e.g.,~\citep{KalKhaSho97,OstRabYou17,CohMadTsiVla17,NemRot99})---is the classical connection between (approximately) balancing a matrix $K \in \R_{\geq 0}^{n \times n}$ and (approximately) solving the convex optimization problem
\begin{align}
\min_{x \in \mathbb{R}^n} \Phi(x) := \log \sum_{ij} e^{x_i - x_j} K_{ij}.
\label{eq:opt}
\end{align}
In words, balancing $K$ is equivalent to finding a scaling $D = \diag(e^x)$ that minimizes the sum of the entries of $DKD^{-1}$. This equivalence follows from the KKT conditions and the convexity of $\Phi$, which ensures that local optimality implies global optimality. Intuition comes from computing the gradient:
\begin{align}
\nabla \Phi(x) = \frac{A\mathbf{1} - A^T\mathbf{1}}{\sum_{ij} A_{ij}}, \quad \text{where } A := \diag(e^x)K\diag(e^{-x}).
\label{eq:grad}
\end{align}
Indeed, solutions of $\textsc{BAL}(K)$ are points where this gradient vanishes, and thus are in correspondence with minimizers of $\Phi$.
This also holds approximately: solutions of $\textsc{ABAL}(K,\eps)$ are in correspondence with $\varepsilon$-stationary points for $\Phi$ w.r.t. the $\ell_1$ norm, i.e., $x \in \R^n$ for which $\|\nabla \Phi(x)\|_1 \leq \varepsilon$. The following lemma summarizes these classical connections; for a proof see, e.g.,~\citep{KalKhaSho97}.
\begin{lemma}[Matrix Balancing as convex optimization]\label{lem:bal:convex-opt}
Let $K \in \R_{\geq 0}^{n \times n}$ and $\varepsilon > 0$. Then:
\begin{enumerate}
\item $\Phi$ is convex over $\R^n$.
\item $x \in \R^n$ is a solution to $\textsc{BAL}(K)$ if and only if $x$ minimizes $\Phi$.
\item $x \in \R^n$ is a solution to $\textsc{ABAL}(K,\eps)$ if and only if $\|\nabla \Phi(x)\|_1 \leq \varepsilon$.
\item If $K$ is balanceable, then $\Phi$ has a unique minimizer modulo translations of $\mathbf{1}$.
\end{enumerate}
\end{lemma}
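These correspondences are easy to spot-check numerically. The following minimal Python sketch (function names are our own) evaluates $\Phi$ and the gradient formula~\eqref{eq:grad}; $\varepsilon$-stationarity $\|\nabla \Phi(x)\|_1 \leq \varepsilon$ is then precisely $\varepsilon$-balancedness of the scaling $\diag(e^x)K\diag(e^{-x})$.

```python
import numpy as np

def phi(x, K):
    """Phi(x) = log sum_ij e^{x_i - x_j} K_ij."""
    A = np.exp(x)[:, None] * K * np.exp(-x)[None, :]
    return np.log(A.sum())

def grad_phi(x, K):
    """grad Phi(x) = (A 1 - A^T 1) / sum_ij A_ij,
    where A = diag(e^x) K diag(e^-x)."""
    A = np.exp(x)[:, None] * K * np.exp(-x)[None, :]
    return (A.sum(axis=1) - A.sum(axis=0)) / A.sum()
```

A finite-difference check of \texttt{grad\_phi} against \texttt{phi} recovers~\eqref{eq:grad} to numerical precision.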
\subsection{Osborne's algorithm as coordinate descent}\label{ssec:prelim:cd}
Lemma~\ref{lem:bal:convex-opt} equates the \textit{problems} of (approximate) Matrix Balancing and (approximate) optimization of~\eqref{eq:opt}. This correspondence extends to \textit{algorithms}. In particular, in the sequel, we repeatedly leverage the following known connection, which appears in, e.g.,~\citep{OstRabYou17}.
\begin{obs}[Osborne's algorithm as Coordinate Descent]\label{obs:osb-cd}
Osborne's algorithm for Matrix Balancing is equivalent to Exact Coordinate Descent for optimizing~\eqref{eq:opt}.
\end{obs}
To explain this connection, let us recall the basics of both algorithms. Exact Coordinate Descent is an iterative algorithm for minimizing a function $\Phi$ that maintains an iterate $x \in \R^n$, and in each iteration updates $x$ along a coordinate $k \in [n]$ (chosen, e.g., cyclically, greedily, or randomly) by
\begin{align}
x
\gets
\argmin_{ z \in \{ x + \alpha e_k \, : \, \alpha \in \mathbb{R} \} } \Phi(z),
\label{eq:cd}
\end{align}
where $e_k$ denotes the $k$-th standard basis vector in $\mathbb{R}^n$. In words, this update~\eqref{eq:cd} improves the objective $\Phi(x)$ as much as possible by varying only the $k$-th coordinate of $x$.
\par Osborne's algorithm, as introduced briefly in \S\ref{sec:intro}, is an iterative algorithm for Matrix Balancing that repeatedly balances row/column pairs (chosen, e.g., cyclically, greedily, or randomly). Algorithm~\ref{alg:osb} provides pseudocode for an implementation on the ``log domain'' that maintains the logarithms $x \in \R^n$ of the scalings rather than the scalings $\diag(e^x)$ themselves. The connection in Observation~\ref{obs:osb-cd} is thus, stated more precisely, that \emph{Osborne's algorithm is a specification of the Exact Coordinate Descent algorithm to minimizing the function $\Phi$ in~\eqref{eq:opt} with initialization of $\mathbf{0}$.}
\begin{algorithm}
\caption{Osborne's algorithm for Matrix Balancing. The variant (e.g., cyclic, greedy, or random) depends on how the update coordinate is chosen in Line~\ref{line:osb-index}.}
\hspace*{\algorithmicindent} \textbf{Input:} Matrix $K \in \R_{\geq 0}^{n \times n}$ and accuracy $\varepsilon > 0$ \\
\hspace*{\algorithmicindent} \textbf{Output}: Vector $x \in \R^n$ that solves $\textsc{ABAL}(K,\eps)$
\begin{algorithmic}[1]
\State $x \gets \mathbf{0}$ \Comment{Initialization}
\While{$\diag(e^x)K\diag(e^{-x})$ is not $\varepsilon$-balanced}
\State Choose update coordinate $k \in [n]$ \Comment{E.g., cyclically, greedily, or randomly}\label{line:osb-index}
\State $x_k \gets x_k +
\tfrac{\log(c_k( \diag(e^x)K\diag(e^{-x}) )) -\log(r_k( \diag(e^x)K\diag(e^{-x}) ))}{2}
$ \Comment{Osborne update on coordinate $k$
} \label{line:osb-update}
\EndWhile
\State \Return{$x$}
\end{algorithmic}
\label{alg:osb}
\end{algorithm}
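For concreteness, here is a minimal dense-matrix Python sketch of Algorithm~\ref{alg:osb} with the random coordinate choice (Random Osborne). The function name and NumPy-based representation are our own, and this sketch ignores the bit-complexity considerations of \S\ref{sec:bits}.

```python
import numpy as np

def random_osborne(K, eps, seed=0, max_iters=100_000):
    """Log-domain Random Osborne sketch: repeatedly pick a uniform
    coordinate k and set x_k += (log c_k(A) - log r_k(A)) / 2, where
    A = diag(e^x) K diag(e^-x), until A is eps-balanced."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    x = np.zeros(n)  # initialization x = 0
    for _ in range(max_iters):
        A = np.exp(x)[:, None] * K * np.exp(-x)[None, :]
        r, c = A.sum(axis=1), A.sum(axis=0)
        # eps-balanced check: ||r(A) - c(A)||_1 <= eps * sum(A)
        if np.abs(r - c).sum() <= eps * A.sum():
            return x
        k = rng.integers(n)  # random coordinate choice
        x[k] += (np.log(c[k]) - np.log(r[k])) / 2
    raise RuntimeError("did not converge within max_iters")
```

On a small strongly connected instance this terminates quickly, returning a vector $x$ with $\diag(e^x)K\diag(e^{-x})$ approximately balanced.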
We note that besides elucidating Observation~\ref{obs:osb-cd}, the log-domain implementation of Osborne's Algorithm in Algorithm~\ref{alg:osb} is also critical for numerical precision, both in theory and practice.
\begin{remark}[Log-domain implementation]\label{rem:logdomain}
In practice, Osborne's algorithm should be implemented in the ``logarithmic domain'', i.e., store the iterates $x$ rather than the scalings $\diag(e^x)$, operate on $K$ through $\log K_{ij}$ (see Remark~\ref{rem:bits-log}), and compute Osborne updates using the following standard trick for numerically computing log-sum-exp: $\log ( \sum_{i=1}^n e^{z_i} ) = \max_j z_j + \log ( \sum_{i=1}^n e^{z_i - \max_j z_j} )$. In \S\ref{sec:bits}, we show that essentially just these modifications enable a provably logarithmic bit-complexity for several variants of Osborne's algorithm (Theorem~\ref{thm:osb-all:bits}).
\end{remark}
It remains to discuss the choice of update coordinate in Osborne's algorithm (Line~\ref{line:osb-index} of Algorithm~\ref{alg:osb}), or equivalently, in Coordinate Descent. We focus on the following natural options:
\begin{itemize}
\item \textbf{Cyclic Osborne.} Cycle through the coordinates, using an independent random permutation for the order in each cycle.
%
\item \textbf{Greedy Osborne.} Choose the coordinate $k$ for which the $k$-th row and column sums of the current scaling $A := \diag(e^x)K\diag(e^{-x})$ disagree most, as measured by
\begin{align}
\argmax_{k \in [n]} \abs{\sqrt{r_k(A)} - \sqrt{c_k(A)}}.
\label{def:greedy}
\end{align}
(Ties are broken arbitrarily, e.g., lowest number.)
%
\item \textbf{Random Osborne.} Sample $k$ uniformly from $[n]$, independently between iterations.
\end{itemize}
\begin{remark}[Efficient implementation of Greedy]\label{rem:greedy-amortized}
In order to efficiently compute~\eqref{def:greedy}, Greedy Osborne maintains an auxiliary data structure: the row and column sums of the current balancing. This requires only $O(n)$ additional space, $O(m)$ additional computation in a pre-processing step, and $O(n)$ additional per-iteration computation for maintenance (increasing the per-iteration runtime by a small constant factor).
\end{remark}
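As a small illustration, the greedy rule~\eqref{def:greedy} amounts to a single argmax over the maintained row and column sums. The helper name below is our own; ties break to the lowest index since \texttt{np.argmax} returns the first maximizer.

```python
import numpy as np

def greedy_index(r, c):
    """Greedy Osborne's coordinate choice: argmax_k |sqrt(r_k) - sqrt(c_k)|,
    given the maintained row sums r and column sums c of the current
    scaling A = diag(e^x) K diag(e^-x)."""
    return int(np.argmax(np.abs(np.sqrt(r) - np.sqrt(c))))
```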
\subsection{Parallelizing Osborne's algorithm via graph coloring}\label{ssec:prelim:par}
For scalability, parallelization of Osborne's algorithm can be critical. It is well-known (see, e.g.,~\citep{BerTsi89}) that Osborne's algorithm can be parallelized when one can compute a (small) coloring of $G_K$, i.e., a partitioning $S_1, \dots, S_p$ of the vertices $[n]$ such that any two vertices in the same part are non-adjacent.
This idea stems from the observation that simultaneous Osborne updates do not interfere with each other when performed on coordinates corresponding to \emph{non-adjacent} vertices in $G_K$. Indeed, this suggests a simple, natural parallelization of Osborne's algorithm given a coloring: update in parallel all coordinates of the same color. We call this algorithm \emph{Block Osborne} due to the following connection to Exact Block Coordinate Descent, i.e., the variant of Exact Coordinate Descent where an iteration exactly minimizes over a subset (a.k.a., block) of the variables.
\begin{remark}[Block Osborne as Block Coordinate Descent]\label{rem:par:block-cd}
Extending Observation~\ref{obs:osb-cd}, \emph{Block} Osborne is equivalent to Exact \emph{Block} Coordinate Descent for minimizing $\Phi$. The connection to coloring is equivalently explained through this convex optimization lens: for each $S_{\ell}$, the (exponential\footnote{Note that by monotonicity of $\exp(\cdot)$, minimizing $\exp(\Phi(\cdot))$ is equivalent to minimizing $\Phi(\cdot)$.} of) $\Phi$ is separable in the variables in $S_{\ell}$. This is why their updates are independent.
\end{remark}
Just like the standard (non-parallelized) Osborne algorithm, the Block Osborne algorithm has several natural options for the choice of update block:
\begin{itemize}
\item \textbf{Cyclic Block Osborne.} Cycle through the blocks, using an independent random permutation for the order in each cycle.
\item \textbf{Greedy Block Osborne.} Choose the block $\ell$ maximizing
\begin{align}
\frac{1}{|S_{\ell}|}\sum_{k \in S_{\ell}} \left(\sqrt{r_k(A)} - \sqrt{c_k(A)}\right)^2
\label{def:par-greedy}
\end{align}
where $A$ denotes the current balancing. (Ties are broken arbitrarily, e.g., lowest number.)
\item \textbf{Random Block Osborne.} Sample $\ell$ uniformly from $[p]$, independently between iterations.
\end{itemize}
Note that if $S_1,\dots,S_p$ are singletons---e.g., when $K \in \R_{> 0}^{n \times n}$ is strictly positive---then these variants of Block Osborne degenerate into the corresponding variants of the standard Osborne algorithm.
Of course, Block Osborne first requires a coloring of $G_K$. A smaller coloring yields better parallelization (indeed we establish a linear runtime in the number of colors, see \S\ref{sec:par}). However, finding the (approximately) smallest coloring is NP-hard~\citep{Karp72,GarJoh76,Zuc06}. Nevertheless, in many cases a relatively good coloring may be obvious or easily computable. For instance, in some applications the sparsity pattern of $K$ is structured and known a priori, and can thus be leveraged.
An easily computable setting is matrices with uniformly sparse rows and columns, i.e., matrices whose corresponding graph $G_K$ has bounded max-degree; see Corollary~\ref{cor:par:unif}.
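In the bounded max-degree setting, the classical greedy procedure yields a $(\Delta+1)$-coloring of a graph of max degree $\Delta$ in linear time. A minimal sketch (the helper is our own, with an undirected adjacency map standing in for $G_K$):

```python
def greedy_coloring(adj):
    """Greedy graph coloring: scan vertices in order and give each the
    smallest color unused among its already-colored neighbors. Uses at
    most (max degree + 1) colors. `adj` maps vertex -> set of neighbors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

The color classes $S_1, \dots, S_p$ returned here can then be fed directly to Block Osborne.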
\section{Cyclic Osborne converges quickly}\label{sec:cyclic}
Here we show a runtime bound for Cyclic Osborne that improves significantly over the previous best bound $\tilde{O}(mn^2\varepsilon^{-2} \log \kappa)$ in~\citep{OstRabYou17}.
See \S\ref{ssec:intro:contributions} for further discussion, and \S\ref{sssec:intro:overview-cyc} for a proof sketch.
\begin{theorem}[Convergence of Cyclic Osborne]\label{thm:osb-cyc}
Given a balanceable matrix $K \in \R_{\geq 0}^{n \times n}$ and accuracy $\varepsilon > 0$, Cyclic Osborne
solves $\textsc{ABAL}(K,\eps)$
in $T$ arithmetic operations, where
\begin{itemize}
\item (Expectation guarantee.) $\mathbb{E}[T] = O(\tfrac{m\sqrt{n}}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d) \log \kappa)$.
\item (H.p. guarantee.) There exists a universal constant $c > 0$ such that for all $\delta > 0$,
\[
\Prob\left( T \leq c\left(\tfrac{m\sqrt{n}}{\varepsilon} (\tfrac{1}{\varepsilon} \wedge d) \log \kappa \logdel \right)\right) \geq 1 - \delta.
\]
\end{itemize}
\end{theorem}
Below for $t \leq n$, let $\mu_{n,t}$ (respectively, $\nu_{n,t}$) be the distribution over tuples $(i_1,\dots,i_t) \in [n]^t$ where $i_1, \dots, i_t$ are drawn uniformly at random with (respectively, without) replacement. That is, $\mu_{n,t}$ is the uniform distribution over all tuples in $[n]^t$, and $\nu_{n,t}$ is the uniform distribution over the set $S$ of all distinct tuples in $[n]^t$. We make use of the following basic fact that $\mu_{n,t}$ and $\nu_{n,t}$ are close in total variation; see, e.g.,~\citep{Fre77}.
\begin{lemma}[Total variation between sampling with or without replacement]\label{lem:tv}
For all $n \in \mathbb{N}$, it holds that $\mathsf{TV}(\mu_{n,t},\nu_{n,t}) < 1/2$ for $t = \lfloor \sqrt{n} \rfloor$.
\end{lemma}
\begin{proof}
Since $|S| = n!/(n-t)!$, we have $\mathsf{TV}(\mu_{n,t}, \nu_{n,t})
=
\sum_{(i_1,\dots,i_t) \in S} \nu_{n,t}(i_1,\dots,i_t) - \mu_{n,t}(i_1,\dots,i_t)
=
1 - n^{-t}|S|
=
1 - n^{-t}n!/(n-t)!
$.
By the bound in~\citep{Fre77}, this is at most $t(t-1)/(2n)$. By our choice of $t \leq \sqrt{n}$, this is smaller than $1/2$.
\end{proof}
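The total variation in this proof is easy to evaluate exactly, which makes the bound simple to check numerically. A short sketch (helper name is our own):

```python
import math

def tv_with_without(n, t):
    """Exact TV distance between sampling t indices from [n] with vs.
    without replacement: 1 - n^{-t} * n!/(n-t)!, i.e., the mu-mass of
    tuples with a repeated index."""
    return 1 - math.perm(n, t) / n**t
```

For $t = \lfloor \sqrt{n} \rfloor$, the returned value is below $t(t-1)/(2n) \leq 1/2$, matching Lemma~\ref{lem:tv}.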
Combining Lemma~\ref{lem:tv} with the per-iteration potential decrease bound for Random Osborne (Lemma~\ref{lem:pot-dec:rand}) yields the following per-cycle potential decrease bound for Cyclic Osborne.
\begin{lemma}[Potential decrease of Cyclic Osborne]\label{lem:pot-dec:cyclic}
Let $x \in \R^n$, and let $x'$ be the iterate obtained from $x$ after a cycle of Cyclic Osborne. If none of the iterates between $x$ and $x'$ are $\varepsilon$-balancings, then
\begin{align*}
\mathbb{E} \left[ \Phi(x) - \Phi(x') \right]
\geq
\frac{1}{16\sqrt{n}} \left( \frac{\Phi(x') - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2,
\end{align*}
where the expectation is over the algorithm's random choice of update coordinates.
\end{lemma}
\begin{proof}
Let $t = \lfloor \sqrt{n} \rfloor$. By monotonicity of $\Phi$ w.r.t. Osborne updates (Lemma~\ref{lem:pot-dec}), and then a change-of-measure argument using Lemma~\ref{lem:tv},
$\mathbb{E} [ \Phi(x) - \Phi(x') ]
= \mathbb{E}[ \text{decrease in }\Phi\text{ from all updates in cycle}]
\geq \mathbb{E}[ \text{decrease in }\Phi\text{ from first }t\text{ updates in cycle}]
\geq \mathbb{E} [ \text{decrease in }\Phi\text{ after }t\text{ Random Osborne updates}]/2$. By Lemma~\ref{lem:pot-dec:rand} combined with monotonicity of $\Phi$ and the assumption that iterates are not $\varepsilon$-balanced, this is at least $\frac{\lfloor \sqrt{n} \rfloor}{8n} ( \tfrac{\Phi(x) - \Phi^*}{d \log \kappa} \vee \varepsilon )^2$. Finally, note that $\lfloor \sqrt{n} \rfloor \geq \sqrt{n}/2$ for any $n \in \mathbb{N}$.
\end{proof}
The runtime bound for Cyclic Osborne (Theorem~\ref{thm:osb-cyc}) given the expected per-cycle potential decrease (Lemma~\ref{lem:pot-dec:cyclic}) then follows by an identical argument as the runtime bound for Random Osborne (Theorem~\ref{thm:osb-rand}) given that algorithm's expected per-iteration potential decrease (Lemma~\ref{lem:pot-dec:rand}). The straightforward details are omitted for brevity.
\section{Potential argument}\label{sec:pot}
Here we develop the ingredients for our potential-based analysis of Osborne's algorithm. They are purposely stated \emph{independently of the Osborne variant}, i.e., how the Osborne algorithm chooses update coordinates. This enables the argument to be applied directly to different variants in the sequel. We point the reader to \S\ref{ssec:intro:overview} for a high-level overview of the argument.
First, we recall the following standard bound on the initial potential. This appears in, e.g.,~\citep{OstRabYou17,CohMadTsiVla17}.
For completeness, we briefly recall the simple proof. Below, we denote the optimal value of the convex optimization problem~\eqref{eq:opt} by $\Phi^* := \min_{x \in \mathbb{R}^n} \Phi(x)$.
\begin{lemma}[Bound on initial potential]\label{lem:pot-init}
$\Phi(\mathbf{0}) - \Phi^* \leq \log \kappa$.
\end{lemma}
\begin{proof}
It suffices to show $\Phi^* \geq \log K_{\min}$. Since $K$ is balanceable, $G_K$ is strongly connected (Lemma~\ref{lem:balanceable}), thus $G_K$ contains a cycle. By an averaging argument, this cycle contains an edge $(i,j)$ such that $x_i^* - x_j^* \geq 0$. Thus $\Phi^* \geq \log (e^{x_i^* - x_j^*} K_{ij}) \geq \log K_{\min}$.
\end{proof}
Next, we exactly compute the decrease in potential from an Osborne update on a fixed coordinate $k \in [n]$. This is a simple, direct calculation and is similar to~\citep[Lemma 1]{OstRabYou17}.
\begin{lemma}[Potential decrease from Osborne update]\label{lem:pot-dec}
Consider any $x \in \mathbb{R}^n$ and update coordinate $k \in [n]$. Let $x'$ denote the output of an Osborne update on $x$ w.r.t. coordinate $k$, $A := \diag(e^x)K\diag(e^{-x})$ denote the scaling corresponding to $x$, and $P := A/(\sum_{ij}A_{ij})$ its normalization. Then
\begin{align}
\Phi(x) - \Phi(x') =
- \log\left(1 - \left( \sqrt{r_k(P)} - \sqrt{c_k(P)} \right)^2 \right).
\label{eq-lem:pot-dec}
\end{align}
\end{lemma}
\begin{proof}
Let $A' := \diag(e^{x'}) K \diag(e^{-x'})$ denote the scaling corresponding to the next iterate $x'$. Then $e^{\Phi(x)} - e^{\Phi(x')} = (r_k(A) + c_k(A)) - (r_k(A') + c_k(A')) = (r_k(A) + c_k(A)) - 2\sqrt{r_k(A) c_k(A)} = (\sqrt{r_k(A)} - \sqrt{c_k(A)})^2 = ( \sqrt{r_k(P)} - \sqrt{c_k(P)})^2 e^{\Phi(x)}$, where the second equality uses that the Osborne update equalizes $r_k(A') = c_k(A') = \sqrt{r_k(A) c_k(A)}$. Dividing by $e^{\Phi(x)}$ and re-arranging proves~\eqref{eq-lem:pot-dec}.
\end{proof}
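The identity~\eqref{eq-lem:pot-dec} can be verified numerically. Below is a sketch (helper names are our own) on a matrix with zero diagonal, so that the Osborne update rescales the entire $k$-th row and column.

```python
import numpy as np

def phi(x, K):
    """Phi(x) = log sum_ij e^{x_i - x_j} K_ij."""
    return np.log((np.exp(x)[:, None] * K * np.exp(-x)[None, :]).sum())

def osborne_update(x, K, k):
    """One Osborne update on coordinate k: x_k += (log c_k - log r_k)/2."""
    A = np.exp(x)[:, None] * K * np.exp(-x)[None, :]
    x2 = x.copy()
    x2[k] += (np.log(A.sum(axis=0)[k]) - np.log(A.sum(axis=1)[k])) / 2
    return x2

# Check the exact potential-decrease identity on a zero-diagonal example.
K = np.array([[0., 1., 2.], [3., 0., 4.], [5., 6., 0.]])
x = np.array([0.2, -0.1, 0.4])
for k in range(3):
    A = np.exp(x)[:, None] * K * np.exp(-x)[None, :]
    P = A / A.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)
    predicted = -np.log(1 - (np.sqrt(r[k]) - np.sqrt(c[k])) ** 2)
    actual = phi(x, K) - phi(osborne_update(x, K, k), K)
    assert abs(actual - predicted) < 1e-10
```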
In the sequel, we lower bound the per-iteration progress in~\eqref{eq-lem:pot-dec} by $(\sqrt{r_k(P)} - \sqrt{c_k(P)})^2$ using the elementary inequality $-\log(1 - z) \geq z$. Analyzing this further requires knowledge of how $k$ is chosen, i.e., the Osborne variant. However, for both Greedy Osborne and Random Osborne, this progress is at least the average
\begin{align}
\frac{1}{n}\sum_{k=1}^n (\sqrt{r_k(P)} - \sqrt{c_k(P)})^2 = \frac{2}{n}\mathsf{H}^2\big(r(P),c(P)\big).
\label{eq-pot:hell}
\end{align}
(For Random Osborne, this statement requires an expectation; see \S\ref{sec:rand}.) The rest of this section establishes the main ingredient in the potential argument: Proposition~\ref{prop:hell-lb} lower bounds this Hellinger imbalance, and thereby lower bounds the per-iteration progress. Note that Proposition~\ref{prop:hell-lb} is stated for ``nontrivial balancings'', i.e., $x \in \R^n$ satisfying $\Phi(x) \leq \Phi(\mathbf{0})$. This automatically holds for any iterate of the Osborne algorithm---regardless of the variant---since the first iterate is initialized to $\mathbf{0}$, and since the potential is monotonically non-increasing by Lemma~\ref{lem:pot-dec}.
\begin{prop}[Lower bound on Hellinger imbalance]\label{prop:hell-lb}
Consider any $x \in \R^n$. Let $A := \diag(e^x) K \diag(e^{-x})$ denote the corresponding scaling, and let $P := A / \sum_{ij}A_{ij}$ denote its normalization. If $\Phi(x) \leq \Phi(\mathbf{0})$ and $A$ is not $\varepsilon$-balanced, then
\begin{align}
\mathsf{H}^2\big(r(P),c(P)\big)
\geq
\frac{1}{8} \left( \frac{\Phi(x) - \Phi^*}{d \log \kappa} \vee \varepsilon \right)^2.
\label{eq-lem:pot-dec:rand}
\end{align}
\end{prop}
To prove Proposition~\ref{prop:hell-lb}, we collect several helpful lemmas. First is a standard inequality in statistics which lower bounds the Hellinger distance between two probability distributions by their $\ell_1$ distance (or equivalently, up to a factor of $2$, their total variation distance)~\citep{DezaDeza}.
A short, simple proof via Cauchy-Schwarz is provided for completeness.
\begin{lemma}[Hellinger versus $\ell_1$ inequality]\label{lem:hell-ineq}
If $\mu, \nu \in \Delta_n$, then
\begin{align}
\mathsf{H}(\mu,\nu) \geq \frac{1}{2\sqrt{2}} \|\mu - \nu\|_1.
\end{align}
\end{lemma}
\begin{proof}
By Cauchy-Schwarz, $
\|\mu - \nu \|_1^2
=
(\sum_k |\mu_k - \nu_k|)^2
=
(\sum_k |\sqrt{\mu_k} - \sqrt{\nu_k}| \cdot |\sqrt{\mu_k} + \sqrt{\nu_k}|)^2
\leq
(\sum_k (\sqrt{\mu_k} - \sqrt{\nu_k})^2) \cdot (\sum_k (\sqrt{\mu_k} + \sqrt{\nu_k})^2) = 2\mathsf{H}^2(\mu,\nu) \cdot ( \sum_k (\mu_k + \nu_k + 2\sqrt{\mu_k \nu_k}) )$.
By the AM-GM inequality and the assumption $\mu,\nu \in \Delta_n$, the latter sum is at most $
\sum_k (\mu_k + \nu_k + 2\sqrt{\mu_k \nu_k}) \leq
2 \sum_k (\mu_k + \nu_k) = 4$.
\end{proof}
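As a quick numerical sanity check of Lemma~\ref{lem:hell-ineq} (a sketch; the sampling scheme, dimension, and tolerance are illustrative), one can draw random pairs of distributions on the simplex and verify the inequality, using the convention $\mathsf{H}^2(\mu,\nu) = \frac{1}{2}\sum_k (\sqrt{\mu_k}-\sqrt{\nu_k})^2$ implicit in the proof above:

```python
import math
import random

def hellinger(mu, nu):
    # H(mu, nu) with the convention H^2 = (1/2) sum_k (sqrt(mu_k) - sqrt(nu_k))^2
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(mu, nu)))

def l1(mu, nu):
    return sum(abs(a - b) for a, b in zip(mu, nu))

def random_simplex(n, rng):
    w = [rng.random() for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

rng = random.Random(0)
for _ in range(1000):
    mu = random_simplex(5, rng)
    nu = random_simplex(5, rng)
    # Lemma hell-ineq: H(mu, nu) >= ||mu - nu||_1 / (2 sqrt(2))
    assert hellinger(mu, nu) >= l1(mu, nu) / (2 * math.sqrt(2)) - 1e-12
```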
Next, we recall the following standard bound on the variation norm of nontrivial balancings. This bound is often stated only for optimal balancings (e.g.,~\citep[Lemma 4.24]{CohMadTsiVla17})---however, the proof extends essentially without modifications; details are provided briefly for completeness.
\begin{lemma}[Variation norm of nontrivial balancings]\label{lem:bal:R}
If $x \in \R^n$ satisfies $\Phi(x) \leq \Phi(0)$, then $\var{x} \leq d \log \kappa$.
\end{lemma}
\begin{proof}
Consider any $u,v \in [n]$. By definition of $d$, there exists a path in $G_K$ from $u$ to $v$ of length at most $d$. For each edge $(i,j)$ on the path, we have $e^{x_i - x_j} K_{ij} \leq e^{\Phi(x)} \leq e^{\Phi(\mathbf{0})} = \sum_{kl} K_{kl}$, and thus $x_i - x_j \leq \log \kappa$ by definition of $\kappa$. Summing this inequality along the edges of the path and telescoping yields $x_u - x_v \leq d \log \kappa$. Since this holds for any $u,v$, we conclude $\var{x} = \max_u x_u - \min_v x_v \leq d \log \kappa$.
\end{proof}
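To illustrate Lemma~\ref{lem:bal:R} on a concrete instance, the following sketch runs cyclic Osborne updates; the potential $\Phi(x) = \log \sum_{ij} K_{ij} e^{x_i - x_j}$ and the standard coordinate update are assumptions (neither is fixed in this excerpt, though the potential matches the gradient identity used below). Every iterate is then a nontrivial balancing, so its variation norm stays within $d \log \kappa$:

```python
import math
import random

def phi(K, x):
    # assumed potential: Phi(x) = log sum_{ij} K_ij exp(x_i - x_j)
    n = len(K)
    return math.log(sum(K[i][j] * math.exp(x[i] - x[j])
                        for i in range(n) for j in range(n)))

rng = random.Random(0)
n = 5
# zero-diagonal K with all off-diagonal entries positive, so G_K is the
# complete graph and its diameter is d = 1
K = [[0.0 if i == j else rng.uniform(0.2, 3.0) for j in range(n)]
     for i in range(n)]
kappa = sum(map(sum, K)) / min(K[i][j] for i in range(n)
                               for j in range(n) if i != j)

x = [0.0] * n
for t in range(100):
    i = t % n  # cyclic sweep; the assumed update balances row/column i exactly
    row = sum(K[i][j] * math.exp(x[i] - x[j]) for j in range(n) if j != i)
    col = sum(K[j][i] * math.exp(x[j] - x[i]) for j in range(n) if j != i)
    x[i] += 0.5 * math.log(col / row)
    # the iterate is a nontrivial balancing, so Lemma bal:R applies with d = 1
    assert phi(K, x) <= phi(K, [0.0] * n) + 1e-9
    assert max(x) - min(x) <= math.log(kappa) + 1e-9
```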
From Lemma~\ref{lem:bal:R}, we deduce the following bound.
\begin{cor}[$\ell_{\infty}$ distance of nontrivial balancings to minimizers]\label{cor:bal:R}
If $x \in \R^n$ satisfies $\Phi(x) \leq \Phi(0)$, then there exists a minimizer $x^*$ of $\Phi$ such that $\|x - x^*\|_{\infty} \leq d \log \kappa$.
\end{cor}
\begin{proof}
By definition, $\Phi$ is invariant under translations of $\mathbf{1}$. Choose any minimizer $x^*$ and translate it by a multiple of $\mathbf{1}$ so that $\max_i (x - x^*)_i = - \min_j (x - x^*)_j$. Then $\|x-x^*\|_{\infty} = (\max_i (x_i - x_i^*) - \min_j (x_j - x_j^*))/2 \leq ((\max_i x_i - \min_j x_j) + (\max_i x_i^* - \min_j x_j^*))/2 = (\var{x} + \var{x^*})/2$.
By Lemma~\ref{lem:bal:R}, this is at most $d \log \kappa$.
\end{proof}
We are now ready to prove Proposition~\ref{prop:hell-lb}.
\begin{proof}[Proof of Proposition~\ref{prop:hell-lb}]
Since $P$ is normalized, its marginals $r(P)$ and $c(P)$ are both probability distributions in $\Delta_n$. Thus by Lemma~\ref{lem:hell-ineq},
\begin{align}
\mathsf{H}^2\big(r(P),c(P)\big)
\geq
\frac{1}{8} \|r(P) - c(P)\|_1^2.
\label{eq-pf-hell-lb:main}
\end{align}
The claim now follows by lower bounding $\|r(P) - c(P)\|_1$ in two different ways. First is $\|r(P) - c(P)\|_1 \geq \varepsilon$, which holds since $A$ is not $\varepsilon$-balanced by assumption.
Second is
\begin{align}
\|r(P) - c(P)\|_1 \geq \frac{\Phi(x) - \Phi(x^*)}{d \log \kappa},
\label{eq-pf-hell-lb:2}
\end{align}
which we show presently. By convexity of $\Phi$ (Lemma~\ref{lem:bal:convex-opt}) and then H\"older's inequality,
\begin{align}
\Phi(x) - \Phi(x^*) \leq \langle \nabla \Phi(x), x - x^* \rangle \leq \|\nabla \Phi(x)\|_1 \|x-x^*\|_{\infty}
\label{eq-pf-hell-lb:convexity}
\end{align}
for any minimizer $x^*$ of $\Phi$. Now by Corollary~\ref{cor:bal:R}, there exists a minimizer $x^*$ such that $\|x - x^*\|_{\infty} \leq d \log\kappa$; and by~\eqref{eq:grad}, the gradient is $\nabla \Phi(x) = r(P) - c(P)$. Re-arranging~\eqref{eq-pf-hell-lb:convexity} therefore establishes~\eqref{eq-pf-hell-lb:2}.
\end{proof}
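Both steps of this proof can be checked numerically on a small instance (a sketch; the form $\Phi(x) = \log \sum_{ij} K_{ij} e^{x_i - x_j}$ is an assumption here, chosen to be consistent with the gradient identity $\nabla \Phi(x) = r(P) - c(P)$ invoked above):

```python
import math
import random

def phi(K, x):
    # assumed potential: Phi(x) = log sum_{ij} K_ij exp(x_i - x_j)
    n = len(K)
    return math.log(sum(K[i][j] * math.exp(x[i] - x[j])
                        for i in range(n) for j in range(n)))

rng = random.Random(0)
n = 4
K = [[rng.uniform(0.1, 2.0) for _ in range(n)] for _ in range(n)]
x = [rng.uniform(-1.0, 1.0) for _ in range(n)]

# scaling A = diag(e^x) K diag(e^{-x}) and its normalization P
A = [[K[i][j] * math.exp(x[i] - x[j]) for j in range(n)] for i in range(n)]
s = sum(map(sum, A))
P = [[v / s for v in row] for row in A]
r = [sum(P[i]) for i in range(n)]                       # row marginals r(P)
c = [sum(P[i][j] for i in range(n)) for j in range(n)]  # column marginals c(P)

# step 1 (eq-pf-hell-lb:main): H^2(r, c) >= ||r - c||_1^2 / 8
h2 = 0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(r, c))
ell1 = sum(abs(a - b) for a, b in zip(r, c))
assert h2 >= ell1 ** 2 / 8 - 1e-12

# step 2: gradient identity grad Phi(x) = r(P) - c(P), via finite differences
eps = 1e-6
for i in range(n):
    xp, xm = list(x), list(x)
    xp[i] += eps
    xm[i] -= eps
    fd = (phi(K, xp) - phi(K, xm)) / (2 * eps)
    assert abs(fd - (r[i] - c[i])) < 1e-5
```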
\section{Helpful probability lemmas}\label{app:prob-lemmas}
| {
"timestamp": "2020-04-07T02:32:43",
"yymm": "2004",
"arxiv_id": "2004.02837",
"language": "en",
"url": "https://arxiv.org/abs/2004.02837",
"abstract": "We revisit Matrix Balancing, a pre-conditioning task used ubiquitously for computing eigenvalues and matrix exponentials. Since 1960, Osborne's algorithm has been the practitioners' algorithm of choice and is now implemented in most numerical software packages. However, its theoretical properties are not well understood. Here, we show that a simple random variant of Osborne's algorithm converges in near-linear time in the input sparsity. Specifically, it balances $K\\in\\mathbb{R}_{\\geq 0}^{n\\times n}$ after $O(m\\epsilon^{-2}\\log\\kappa)$ arithmetic operations, where $m$ is the number of nonzeros in $K$, $\\epsilon$ is the $\\ell_1$ accuracy, and $\\kappa=\\sum_{ij}K_{ij}/(\\min_{ij:K_{ij}\\neq 0}K_{ij})$ measures the conditioning of $K$. Previous work had established near-linear runtimes either only for $\\ell_2$ accuracy (a weaker criterion which is less relevant for applications), or through an entirely different algorithm based on (currently) impractical Laplacian solvers.We further show that if the graph with adjacency matrix $K$ is moderately connected--e.g., if $K$ has at least one positive row/column pair--then Osborne's algorithm initially converges exponentially fast, yielding an improved runtime $O(m\\epsilon^{-1}\\log\\kappa)$. We also address numerical precision by showing that these runtime bounds still hold when using $O(\\log(n\\kappa/\\epsilon))$-bit numbers.Our results are established through an intuitive potential argument that leverages a convex optimization perspective of Osborne's algorithm, and relates the per-iteration progress to the current imbalance as measured in Hellinger distance. Unlike previous analyses, we critically exploit log-convexity of the potential. Our analysis extends to other variants of Osborne's algorithm: along the way, we establish significantly improved runtime bounds for cyclic, greedy, and parallelized variants.",
"subjects": "Optimization and Control (math.OC); Data Structures and Algorithms (cs.DS); Numerical Analysis (math.NA)",
"title": "Near-linear convergence of the Random Osborne algorithm for Matrix Balancing",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9857180690117799,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7083314711071865
} |
https://arxiv.org/abs/1011.0180 | Independent sets in random graphs from the weighted second moment method | We prove new lower bounds on the likely size of a maximum independent set in a random graph with a given average degree. Our method is a weighted version of the second moment method, where we give each independent set a weight based on the total degree of its vertices. | \section{Introduction}
We are interested in the likely size of the largest independent set $S$ in a random graph with a given average degree. Azuma's inequality implies that $|S|$ is tightly concentrated around its expectation. Moreover, it is easy to see that $|S|=\Theta(n)$ whenever the average degree is constant. Thus for each constant $c$, there is a constant $\alpha_\mathrm{crit} = \alpha_\mathrm{crit}(c)$ such that
\[
\Pr[\mbox{$G(n,p=c/n)$ has an independent set of size $\alpha n$}]
= \begin{cases} 1 & \alpha < \alpha_\mathrm{crit} \\
0 & \alpha > \alpha_\mathrm{crit} \, .
\end{cases}
\]
By standard arguments this holds in $G(n,m=cn/2)$ as well.
Our goal is to bound $\alpha_\mathrm{crit}$ as a function of $c$, or equivalently to bound
\[
c_\mathrm{crit} = \sup \,\{ c : \alpha_\mathrm{crit}(c) \ge \alpha \} \, ,
\]
as a function of $\alpha$. For $c \le \mathrm{e}$, a greedy algorithm of Karp and Sipser~\cite{karp-sipser} asymptotically finds a maximal independent set, and analyzing this algorithm with differential equations yields the exact value of $\alpha_\mathrm{crit}$. For larger $c$, Frieze~\cite{frieze} determined $\alpha_\mathrm{crit}$ to within $o(1/c)$, where $o$ refers to the limit where $c$ is large. These bounds were improved by Coja-Oghlan and Efthymiou~\cite{coja} who prove detailed results on the structure of the set of independent sets.
We improve these bounds significantly. Our method is a weighted version of the second moment method, inspired by the work of Achlioptas and Peres~\cite{ach-peres} on random $k$-SAT, where each independent set is given a weight depending on the total degree of its vertices.
We work in a modified version of the $G(n,m)$ model which we call $\widetilde{G}(n,m)$. For each of the $m$ edges, we choose two vertices $u,v$ uniformly and independently and connect them. This may lead to a few multiple edges or self-loops, and any vertex with a self-loop cannot belong to an independent set.
In the sparse case where $m=cn/2$ for constant $c$, with constant positive probability $\widetilde{G}(n,m=cn/2)$ has no multiple edges or self-loops, in which case it is uniform in the usual model $G(n,m=cn/2)$ where edges are chosen without replacement from distinct pairs of vertices. Thus any property which holds with high probability for $\widetilde{G}(n,m)$ also holds with high probability for $G(n,m)$, and any bounds we prove on $\alpha_\mathrm{crit}$ in $\widetilde{G}(n,m)$ also hold in $G(n,m)$.
We review the first moment upper bound on $\alpha_\mathrm{crit}$ from Bollob\'as~\cite{bollobas}. Let $X$ denote the number of independent sets of size $\alpha n$ in $\widetilde{G}(n,m)$. Then
\[
\Pr[X > 0] \le \mathbb{E}[X] \, .
\]
By linearity of expectation, $\mathbb{E}[X]$ is the sum over all ${n \choose \alpha n}$ sets of $\alpha n$ vertices of the probability that a given one is independent. The $m$ edges $(u,v)$ are chosen independently and for each one $u,v \in S$ with probability $\alpha^2$, so
\[
\mathbb{E}[X] = {n \choose \alpha n} (1-\alpha^2)^m \, .
\]
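This expectation can be checked by direct simulation of $\widetilde{G}(n,m)$ at tiny parameters (a sketch; the parameter choices and tolerance are illustrative):

```python
import itertools
import math
import random

def expected_count(n, k, m):
    # E[X] = C(n, k) * (1 - (k/n)^2)^m  with  alpha = k/n
    alpha = k / n
    return math.comb(n, k) * (1 - alpha ** 2) ** m

def count_independent_sets(n, k, m, rng):
    # one draw of G~(n, m): m edges with endpoints uniform and independent
    edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]
    count = 0
    for S in itertools.combinations(range(n), k):
        s = set(S)
        # S is independent iff no edge (self-loops included) lies inside S
        if all(u not in s or v not in s for u, v in edges):
            count += 1
    return count

rng = random.Random(0)
n, k, m, trials = 6, 2, 3, 100_000
mc = sum(count_independent_sets(n, k, m, rng) for _ in range(trials)) / trials
assert abs(mc - expected_count(n, k, m)) < 0.1  # E[X] = 15 * (8/9)^3
```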
In the limit $n \to \infty$, Stirling's approximation $n! = (1+o(1)) \sqrt{2\pi n} \,n^n \,\mathrm{e}^{-n}$ gives
\[
{n \choose \alpha n} \sim \frac{1}{\sqrt{n}} \,\mathrm{e}^{n h(\alpha)} \, ,
\]
where $h$ is the entropy function
\[
h(\alpha) = -\alpha \ln \alpha - (1-\alpha) \ln (1-\alpha) \, ,
\]
and where $\sim$ hides constants that depend smoothly on $\alpha$. Thus
\[
\mathbb{E}[X] \sim \frac{1}{\sqrt{n}} \,\mathrm{e}^{n \left( h(\alpha)+ (c/2) \ln (1-\alpha^2) \right)} \, .
\]
For each $c$, the $\alpha$ such that
\begin{equation}
\label{eq:first-moment}
h(\alpha)+ (c/2) \ln (1-\alpha^2) = 0
\end{equation}
is an upper bound on $\alpha_\mathrm{crit}(c)$, since for larger $\alpha$ the expectation $\mathbb{E}[X]$ is exponentially small.
We find it more convenient to parametrize our bounds in terms of the function $c_\mathrm{crit}(\alpha)$. Then~\eqref{eq:first-moment} gives the following upper bound,
\begin{equation}
\label{eq:c-upper}
c_\mathrm{crit}(\alpha)
\le 2 \,\frac{\alpha \ln \alpha + (1-\alpha) \ln (1-\alpha)}{\ln (1-\alpha^2)}
\le 2 \,\frac{\ln (1/\alpha) + 1}{\alpha} \, .
\end{equation}
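The two expressions in~\eqref{eq:c-upper} are easy to compare numerically (a sketch; the sampled values of $\alpha$ are illustrative):

```python
import math

def h(a):
    # entropy function h(alpha)
    return -a * math.log(a) - (1 - a) * math.log(1 - a)

def c_exact(a):
    # the root in c of h(alpha) + (c/2) ln(1 - alpha^2) = 0
    return 2 * (a * math.log(a) + (1 - a) * math.log(1 - a)) / math.log(1 - a * a)

def c_simple(a):
    # the weaker closed form 2 (ln(1/alpha) + 1) / alpha
    return 2 * (math.log(1 / a) + 1) / a

for a in [0.3, 0.1, 0.03, 0.01, 0.003]:
    ce = c_exact(a)
    # c_exact is the first moment threshold: the exponent vanishes there ...
    assert abs(h(a) + (ce / 2) * math.log(1 - a * a)) < 1e-12
    # ... and it is upper bounded by the simpler closed form
    assert ce <= c_simple(a) + 1e-12
```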
We will prove the following nearly-matching lower bound.
\begin{theorem}
\label{thm:lower}
For any constant $x > 4/\mathrm{e}$, for sufficiently small $\alpha$
\begin{equation}
\label{eq:c-lower}
c_\mathrm{crit}(\alpha)
\ge 2 \,\frac{\ln (1/\alpha) + 1}{\alpha} - \frac{x}{\sqrt{\alpha}} \, .
\end{equation}
\end{theorem}
\noindent
We note that Coja-Oghlan and Efthymiou~\cite{coja} bounded $c_\mathrm{crit}$ within a slightly larger factor $O(\sqrt{\ln (1/\alpha)/\alpha})$.
Inverting~\eqref{eq:c-upper} and Theorem~\ref{thm:lower} gives the following bounds on $\alpha_\mathrm{crit}(c)$. The lower bound is a significant improvement over previous results:
\begin{corollary}
\label{cor:lower}
For $z > 0$, let $W(z)$ denote the unique positive root $w$ of the equation $w \mathrm{e}^w = z$. Then for any constant $y > 4 \sqrt{2} / \mathrm{e}$,
\[
\frac{2}{c} \,W\!\left( \frac{\mathrm{e} c}{2} \right) - y \,\frac{\sqrt{\ln c}}{c^{3/2}}
\le \alpha_\mathrm{crit}
\le \frac{2}{c} \,W\!\left( \frac{\mathrm{e} c}{2} \right) \, ,
\]
where the lower bound holds for sufficiently large $c$.
\end{corollary}
\noindent
If we like we can expand $W(\mathrm{e} c/2)$ asymptotically in $c$,
\begin{align*}
W\!\left( \frac{\mathrm{e} c}{2} \right)
&= \ln c - \ln \ln c + 1 - \ln 2 + \frac{\ln \ln c}{\ln c} - \frac{1-\ln 2}{\ln c} \\
&+ \frac{1}{2} \frac{(\ln \ln c)^2}{(\ln c)^2} - (2-\ln 2) \frac{\ln \ln c}{(\ln c)^2}
+ \frac{3 + (\ln 2)^2 - 4 \ln 2}{2 (\ln c)^2} + O\!\left( \frac{(\ln \ln c)^3}{(\ln c)^3} \right)
\, .
\end{align*}
The first few of these terms correspond to the bound in~\cite{frieze}, and we can extract as many additional terms as we wish.
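The expansion can be compared against a direct evaluation of $W$ (a sketch; the Newton solver and the test point $c = 10^8$ are illustrative):

```python
import math

def lambert_w(z, iters=50):
    # Newton's method for the positive root w of w * e^w = z (z > 0)
    w = math.log(z) if z > math.e else 1.0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (w + 1))
    return w

def w_expansion(c):
    # the truncated asymptotic expansion of W(e c / 2) displayed above
    L, LL = math.log(c), math.log(math.log(c))
    ln2 = math.log(2)
    return (L - LL + 1 - ln2 + LL / L - (1 - ln2) / L
            + 0.5 * LL ** 2 / L ** 2 - (2 - ln2) * LL / L ** 2
            + (3 + ln2 ** 2 - 4 * ln2) / (2 * L ** 2))

c = 1e8
z = math.e * c / 2
w = lambert_w(z)
assert abs(w * math.exp(w) - z) < 1e-6 * z  # Newton solved w e^w = z
assert abs(w - w_expansion(c)) < 0.01       # expansion is accurate at large c
```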
\section{The weighted second moment method}
Our proof uses the second moment method. For any nonnegative random variable $X$, the Cauchy-Schwarz inequality implies that
\begin{equation}\label{prob-lower-bound}
\Pr[X > 0] \ge \frac{\mathbb{E}[X]^2}{\mathbb{E}[X^2]} \, .
\end{equation}
Unfortunately, applying this directly to the number $X$ of independent sets fails utterly. The problem is that for most pairs of sets of size $\alpha n$, the events that they are independent are highly correlated, unlike the case where the average degree grows sufficiently quickly with $n$~\cite{frieze,bollobas}. As a result, $\mathbb{E}[X^2]$ is exponentially larger than $\mathbb{E}[X]^2$, and the second moment method yields an exponentially small lower bound on $\Pr[X > 0]$.
One way to do away with these correlations, used by Frieze in~\cite{frieze}, is to partition the vertices into sets $V_i$ of $\lfloor 1/\alpha \rfloor$ vertices each and focus on those independent sets that intersect each $V_i$ exactly once. Here we pursue a different approach, inspired by the work of Achlioptas and Peres~\cite{ach-peres} on random $k$-SAT. The idea is to give each independent set $S$
a weight $w(S)$, depending exponentially on local quantities in the graph. Specifically, we define
\[
w(S) = \mu^{\mbox{\# of edges $(u,v)$ with $u,v \notin S$}} \, ,
\]
where $\mu < 1$. If the number of edges $m$ is fixed, the number of edges where neither endpoint is in $S$ is simply $m$ minus the total degree of the vertices in $S$. Thus we can also write
\[
w(S) = \mu^{m-\sum_{v \in S} \deg(v)} \, .
\]
Weighting each $S$ in this way counteracts the temptation for $S$ to consist primarily of vertices of low degree. This is analogous to~\cite{ach-peres}, where satisfying assignments are given a weight that discourages them from satisfying the majority of literals in the formula.
We will apply the second moment method to the random variable
\[
X = \sum_{\substack{S \subseteq V, |S| = \alpha n \\ S\, \textrm{independent}}} w(S) \, .
\]
If we tune $\mu$ properly, then for particular $\alpha^\star, c^\star$ we have $\mathbb{E}[X^2] \sim \mathbb{E}[X]^2$, in which case $\Pr[X > 0]$ is bounded away from zero.
In that case $c_\mathrm{crit}(\alpha^\star) \ge c^\star$, or equivalently $\alpha_\mathrm{crit}(c^\star) \ge \alpha^\star$.
To this end, let us compute the first and second moments of our random variable $X$. We extend the weight function $w(S)$ to all sets $S \subseteq V$ by setting $w(S)=0$ if $S$ is not independent. That is,
\[
X = \sum_{\substack{S \subseteq V \\ |S| = \alpha n}} w(S)
\]
where
\[
w(S) = \prod_{(u,v) \in E} w_{u,v}(S)
\]
and
\begin{equation}
\label{eq:wuv}
w_{u,v}(S) = \begin{cases}
\mu & \mbox{if $u, v \notin S$} \\
1 & \mbox{if $u \in S, v \notin S$ or vice versa} \\
0 & \mbox{if $u,v \in S$} \, .
\end{cases}
\end{equation}
We start by computing $\mathbb{E}[X]$. Fix a set $S$ of size $\alpha n$. Since the $m$ edges are chosen independently,
\[
\mathbb{E}[w(S)] = w_1(\alpha,\mu)^m
\quad
\text{where}
\quad
w_1(\alpha,\mu) = \mathbb{E}_{u,v}[w_{u,v}(S)] \, .
\]
For each edge $(u,v)$ in $\widetilde{G}(n,m)$, $u$ and $v$ are chosen randomly and independently, so the probabilities of the three cases in~\eqref{eq:wuv} are $(1-\alpha)^2$, $2\alpha(1-\alpha)$, and $\alpha^2$ respectively. Thus
\[
w_1(\alpha, \mu)
= (1-\alpha)^2 \mu + 2\alpha(1-\alpha) \, .
\]
By linearity of expectation,
\[
\mathbb{E}[X] = \sum_{\substack{S \subseteq V \\ |S| = \alpha n}} \mathbb{E}[w(S)] = {n \choose \alpha n} \,w_1(\alpha, \mu)^m \,.
\]
Using Stirling's approximation, ${n \choose \alpha n} \sim \frac{1}{\sqrt{n}} \,\mathrm{e}^{n h(\alpha)}$ and substituting $m = cn/2$ gives
\begin{equation}
\label{eq:x-first}
\mathbb{E}[X] \sim \frac{1}{\sqrt{n}} \,\mathrm{e}^{n f_1(\alpha)}
\quad \text{where} \quad
f_1(\alpha) = h(\alpha)+ \frac{c}{2} \,\ln w_1(\alpha, \mu) \, .
\end{equation}
As before, $\sim$ hides constant factors that depend smoothly on $\alpha$.
Next we compute the second moment. We have
\[
\mathbb{E}[X^2] = \mathbb{E}\left[ \sum_{S} w(S) \sum_{T}w(T)\right] = \sum_{S,T} \mathbb{E}[w(S) \,w(T)]
\]
where $S$ and $T$ are subsets of $V$ of size $\alpha n$. The expectation of $w(S) \,w(T)$ does not depend on the specific choice of $S$ and $T$, but it does depend on the size of their intersection. We say that $S$ and $T$ have \emph{overlap} $\zeta$ if $|S \cap T|=\zeta n$. Again using the independence of the edges, we have
\[
\mathbb{E}[w(S) \,w(T)] = w_2(\alpha, \zeta, \mu)^m
\quad
\text{where}
\quad
w_2(\alpha, \zeta, \mu) = \mathbb{E}_{u,v}\left[ w_{u,v}(S) \,w_{u,v}(T) \right] \, .
\]
For each edge $(u,v)$ of $\widetilde{G}$, the probability that it has no endpoints in $S$ or $T$ is $(1-2\alpha +\zeta)^2$, in which case it contributes $\mu^2$ to $w_{u,v}(S) \,w_{u,v}(T)$. The probability that it has one endpoint in $S$ and none in $T$ or vice versa is $2(2\alpha - 2\zeta)(1- 2\alpha +\zeta)$, in which case it contributes $\mu$. Finally, the probability that it has one endpoint in $S$ and one in $T$ is $2(\alpha - \zeta)^2 + 2\zeta(1- 2\alpha +\zeta)$, in which case it contributes $1$. With the remaining probability it has both endpoints in $S$ or both in $T$, making that set non-independent and contributing zero. Thus
\[
w_2(\alpha, \zeta, \mu)
= (1-2\alpha +\zeta)^2 \mu^2 + 4(\alpha - \zeta)(1- 2\alpha +\zeta)\mu + 2(\alpha - \zeta)^2 + 2\zeta(1- 2\alpha +\zeta)
\]
Observe that when $\zeta = \alpha^2$, as it typically would be if $S$ and $T$ were chosen independently and uniformly, we have
\begin{equation}
\label{eq:w2-w1}
w_2 = w_1^2 \, .
\end{equation}
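The identity~\eqref{eq:w2-w1} is a polynomial identity in $\alpha$ and $\mu$, and can be spot-checked numerically (a sketch):

```python
import random

def w1(a, mu):
    # w1(alpha, mu) = (1-alpha)^2 mu + 2 alpha (1-alpha)
    return (1 - a) ** 2 * mu + 2 * a * (1 - a)

def w2(a, z, mu):
    # w2(alpha, zeta, mu) from the display above, with q = 1 - 2 alpha + zeta
    q = 1 - 2 * a + z
    return q * q * mu * mu + 4 * (a - z) * q * mu + 2 * (a - z) ** 2 + 2 * z * q

rng = random.Random(0)
for _ in range(1000):
    a = rng.uniform(0.01, 0.4)
    mu = rng.uniform(0.0, 1.0)
    # at the typical overlap zeta = alpha^2, w2 = w1^2
    assert abs(w2(a, a * a, mu) - w1(a, mu) ** 2) < 1e-12
```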
The number of pairs of sets $S, T$ of size $\alpha n$ and intersection of size $z = \zeta n $ is the multinomial
\[
{n \choose {\zeta n, (\alpha -\zeta) n, (\alpha -\zeta)n, (1-2\alpha +\zeta)n}}
= {n \choose \alpha n} {\alpha n \choose \zeta n} {(1-\alpha)n \choose (\alpha-\zeta) n} \, ,
\]
and linearity of expectation gives
\[
\mathbb{E}[X^2]
= \sum_{z=0}^{\alpha n}
{n \choose {z, \alpha n-z, \alpha n - z, (1-2\alpha)n + z}}
\,w_2(\alpha, \zeta, \mu)^m \, .
\]
This sum is dominated by the terms where $\zeta = z/n$ is bounded inside the interval $(0,\alpha)$. Stirling's approximation then gives
\begin{equation}
\label{eq:multi-stirling}
\binom{n}{\zeta n, (\alpha -\zeta) n, (\alpha -\zeta)n , (1-2\alpha +\zeta)n }
\sim \frac{1}{n^{3/2}}
\,\mathrm{e}^{n \left[ h(\alpha) + \alpha h\left(\zeta/\alpha\right) + (1-\alpha) h\left(\frac{\alpha -\zeta}{1-\alpha}\right) \right]}
\, ,
\end{equation}
where $\sim$ hides constants that vary slowly with $\alpha$ and $\zeta$.
Thus the contribution to $\mathbb{E}[X^2]$ of pairs of sets with overlap $\zeta \in (0,\alpha)$ is
\begin{equation}
\label{eq:x-second}
\frac{1}{n^{3/2}}
\,\mathrm{e}^{n f_2(\alpha,\zeta,\mu)}
\quad \text{where} \quad
f_2(\alpha,\zeta,\mu)
= h(\alpha) + \alpha h\!\left(\frac{\zeta}{\alpha}\right) + (1-\alpha) h\!\left(\frac{\alpha -\zeta}{1-\alpha}\right)
+ \frac{c}{2} \ln w_2(\alpha, \zeta, \mu) \, .
\end{equation}
Combining this with~\eqref{eq:x-first}, we can write
\begin{equation}
\label{eq:moments-ratio}
\frac{\mathbb{E}[X^2]}{\mathbb{E}[X]^2}
\sim \frac{1}{\sqrt{n}} \sum_{z=0}^{\alpha n} \mathrm{e}^{n \phi(z/n)} \, ,
\end{equation}
where
\begin{align*}
\phi(\zeta)
= f_2(\alpha,\zeta,\mu) - 2 f_1(\alpha,\mu)
= \alpha h\!\left(\frac{\zeta}{\alpha}\right) + (1-\alpha) \,h\!\left(\frac{\alpha -\zeta}{1-\alpha}\right) - h(\alpha)
+ \frac{c}{2} \ln \frac{w_2(\alpha, \zeta, \mu)}{w_1(\alpha, \mu)^2} \, .
\end{align*}
Using~\eqref{eq:w2-w1} and the fact that the entropy terms cancel, we have
\[
\phi(\alpha^2) = 0 \, .
\]
In other words, the contribution to $\mathbb{E}[X^2]$ from pairs of sets with overlap $\alpha^2$ is proportional to $\mathbb{E}[X]^2$.
We can now replace the sum in~\eqref{eq:moments-ratio} with an integral,
\[
\frac{\mathbb{E}[X^2]}{\mathbb{E}[X]^2}
\sim \frac{1}{\sqrt{n}} \sum_{z=0}^{\alpha n} \mathrm{e}^{n \phi(z/n)}
\sim \sqrt{n} \int_0^\alpha \mathrm{e}^{n \phi(\zeta)} \mathrm{d}\zeta \, ,
\]
and evaluate this integral using Laplace's method as in~\cite[Lemma 3]{ach-moore}. Its asymptotic behavior depends on the maximum value of $\phi$,
\[
\phi_{\max} = \max_{\zeta \in [0,\alpha]} \phi(\zeta) \, .
\]
If $\phi'' < 0$ at the corresponding $\zeta_{\max}$, then it is dominated by an interval of width $\Theta(1/\sqrt{n})$ around $\zeta_{\max}$ and
\[
\frac{\mathbb{E}[X^2]}{\mathbb{E}[X]^2}
\sim \mathrm{e}^{n \phi_{\max}} \, .
\]
If $\phi_{\max} = \phi(\alpha^2) = 0$, then $\mathbb{E}[X^2] \sim \mathbb{E}[X]^2$ and the second moment method succeeds. Thus our goal is to show that $\phi$ is maximized at $\alpha^2$.
For this to happen, we at least need $\zeta = \alpha^2$ to be a \emph{local} maximum of $\phi$. In particular, we need
\begin{equation}
\label{eq:local-max}
\phi'(\alpha^2) = 0 \, .
\end{equation}
Differentiating, we find that~\eqref{eq:local-max} holds if
\[
\mu = \frac{1-2\alpha}{1-\alpha} \, .
\]
Henceforth, we will fix $\mu$ to this value. In that case we have
\[
w_1 = 1-\alpha
\quad \text{and} \quad
w_2 = (1-\alpha)^2 + \frac{(\zeta - \alpha^2)^2}{(1-\alpha)^2} \, ,
\]
so
\[
\phi(\zeta)
= \alpha h\!\left(\frac{\zeta}{\alpha}\right) + (1-\alpha) \,h\!\left(\frac{\alpha -\zeta}{1-\alpha}\right) - h(\alpha)
+ \frac{c}{2} \,\ln \!\left(1 +\frac{(\zeta -\alpha^2)^2}{(1-\alpha)^4} \right)
\]
The remainder of this paper is dedicated to showing that for sufficiently small $\alpha$ as a function of $c$ or vice versa, $\phi$ is indeed maximized at $\alpha^2$.
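As an illustration (not a substitute for the proof, which must handle $c$ up to the critical value), the following sketch checks at $\alpha = 0.01$ and a value of $c$ safely below the first moment threshold that $\phi$ vanishes at $\zeta = \alpha^2$, is critical there, and is maximized there on a fine grid:

```python
import math

def h(t):
    # entropy function, extended by continuity to the endpoints
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

def phi(z, a, c):
    # phi(zeta) with mu = (1 - 2 alpha)/(1 - alpha), as displayed above
    ent = a * h(z / a) + (1 - a) * h((a - z) / (1 - a)) - h(a)
    return ent + (c / 2) * math.log(1 + (z - a * a) ** 2 / (1 - a) ** 4)

a = 0.01
c = 900.0  # comfortably below the threshold 2 (ln(1/a) + 1)/a ~ 1121

# phi vanishes at the typical overlap zeta = alpha^2 ...
assert abs(phi(a * a, a, c)) < 1e-12

# ... and the choice of mu makes it a critical point: phi'(alpha^2) = 0
eps = 1e-7
fd = (phi(a * a + eps, a, c) - phi(a * a - eps, a, c)) / (2 * eps)
assert abs(fd) < 1e-3

# global maximum over [0, alpha]: phi(zeta) <= phi(alpha^2) = 0 on a grid
grid = [a * i / 5000 for i in range(5001)]
assert max(phi(z, a, c) for z in grid) < 1e-9
```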
\section{Finding and bounding the maxima}
Using $\ln (1+x) \le x$, we write $\phi(\zeta) \le \psi(\zeta)$ where
\begin{equation}
\label{eq:psi}
\psi(\zeta)
= \alpha h\!\left( \frac{\zeta}{\alpha} \right) + (1-\alpha) \,h\!\left( \frac{\alpha-\zeta}{1-\alpha} \right) - h(\alpha)
+ \frac{c}{2} \frac{(\zeta-\alpha^2)^2}{(1-\alpha)^4}
\, .
\end{equation}
Note that
\[
\psi(\alpha^2) = \phi(\alpha^2) = 0 \, .
\]
Our goal is to show for an appropriate $c$ that $\zeta = \alpha^2$ is in fact the global maximum of $\psi$, and therefore of $\phi$.
In what follows, asymptotic symbols such as $O$ and $o$ refer to the limit $\alpha \to 0$, or equivalently the limit $c \to \infty$. Error terms may be positive or negative unless otherwise stated.
The first two derivatives of $\psi(\zeta)$ are
\begin{gather}
\psi'(\zeta) =
\frac{c \left(\zeta-\alpha^2\right)}{(1-\alpha)^4}
+ 2 \ln (\alpha-\zeta) - \ln \zeta - \ln (1-2 \alpha+\zeta)
\label{eq:psi1}
\\
\psi''(\zeta) = \frac{c}{(1-\alpha)^4} - \frac{2}{\alpha-\zeta} - \frac{1}{\zeta} - \frac{1}{1-2 \alpha+\zeta}
\label{eq:psi2}
\end{gather}
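These derivative formulas can be sanity-checked against finite differences (a sketch; the test points and tolerances are illustrative):

```python
import math

def h(t):
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

def psi(z, a, c):
    # psi(zeta) from eq. (psi)
    return (a * h(z / a) + (1 - a) * h((a - z) / (1 - a)) - h(a)
            + (c / 2) * (z - a * a) ** 2 / (1 - a) ** 4)

def dpsi(z, a, c):
    # psi'(zeta) from eq. (psi1)
    return (c * (z - a * a) / (1 - a) ** 4
            + 2 * math.log(a - z) - math.log(z) - math.log(1 - 2 * a + z))

def d2psi(z, a, c):
    # psi''(zeta) from eq. (psi2)
    return c / (1 - a) ** 4 - 2 / (a - z) - 1 / z - 1 / (1 - 2 * a + z)

a, c, eps = 0.05, 100.0, 1e-6
for z in [0.2 * a, 0.5 * a, 0.9 * a]:
    fd1 = (psi(z + eps, a, c) - psi(z - eps, a, c)) / (2 * eps)
    fd2 = (dpsi(z + eps, a, c) - dpsi(z - eps, a, c)) / (2 * eps)
    assert abs(fd1 - dpsi(z, a, c)) < 1e-5
    assert abs(fd2 - d2psi(z, a, c)) < 1e-3
```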
The second derivative $\psi''(\zeta)$ tends to $-\infty$ at $\zeta=0$ and $\zeta=\alpha$. Setting $\psi''(\zeta)=0$ yields a cubic equation in $\zeta$ which has one negative root and, for sufficiently small $\alpha$, two positive roots in the interval $[0,\alpha]$. Thus, for sufficiently small $\alpha$, there are $0 < \zeta_1 < \zeta_2 < \alpha$ where
\[
\psi''(\zeta) \begin{cases}
< 0 & 0 < \zeta < \zeta_1 \\
> 0 & \zeta_1 < \zeta < \zeta_2 \\
< 0 & \zeta_2 < \zeta < \alpha \, .
\end{cases}
\]
It follows that $\psi$ can have at most two local maxima. One is in the interval $[0,\zeta_1]$, and the following lemma shows that for the relevant $\alpha$ and $c$ this is $\alpha^2$:
\begin{lemma}
\label{lem:local}
If $c=o(1/\alpha^2)$ then for sufficiently small $\alpha$, $\zeta_1 > \alpha^2$ and $\psi(\alpha^2)$ is a local maximum.
\end{lemma}
\noindent
The other local maximum is in the interval $[\zeta_2,\alpha]$, and we denote it $\zeta_3$. To locate it, first we bound $\zeta_2$:
\begin{lemma}
\label{lem:zeta2}
If
\[
c = (2+o(1)) \,\frac{\ln (1/\alpha)}{\alpha} \, ,
\]
then
\[
\frac{\zeta_2}{\alpha} = 1 - \delta_2
\quad \text{where} \quad
\delta_2 = \frac{1+o(1)}{\ln (1/\alpha)} \, .
\]
\end{lemma}
\noindent
Thus $\zeta_2/\alpha$, and therefore $\zeta_3/\alpha$, tends toward $1$ as $\alpha \to 0$.
We can now locate $\zeta_3$ when $\alpha$ is close to its critical value.
\begin{lemma}
\label{lem:zeta3}
If
\[
c = \frac{1}{\alpha} \big( 2 \ln (1/\alpha) + 2-o(1) \big) \, ,
\]
then
\[
\frac{\zeta_3}{\alpha} = 1 - \delta_3
\quad \text{where} \quad
\delta_3 = \frac{1+o(1)}{\mathrm{e}} \sqrt{\alpha} \, .
\]
\end{lemma}
\begin{lemma}
\label{lem:done}
For any constant $x > 4/\mathrm{e}$, if
\[
c = \frac{2 \ln (1/\alpha) + 2 - x \sqrt{\alpha} }{\alpha} \, ,
\]
then $\psi(\zeta_3) < 0$ for sufficiently small $\alpha$.
\end{lemma}
\section{Proofs}
\begin{proof}[Proof of Lemma~\ref{lem:local}]
Setting $\zeta = \alpha^2$ in~\eqref{eq:psi2} gives
\[
\psi''(\alpha^2)
< \frac{c}{(1-\alpha)^4} - \frac{1}{\alpha^2} \, .
\]
If $c = o(1/\alpha^2)$ this is negative for sufficiently small $\alpha$, in which case $\zeta_1 > \alpha^2$ and $\psi(\alpha^2)$ is a local maximum.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:zeta2}]
For any constant $b$, if
\begin{equation}
\label{eq:zeta2a}
\frac{\zeta}{\alpha} = 1 - \delta
\quad \text{where} \quad
\delta = \frac{b}{\ln (1/\alpha)}
\end{equation}
then~\eqref{eq:psi2} gives
\[
\psi''(\zeta)
= \left( 2-\frac{2}{b}+o(1)\right) \frac{\ln (1/\alpha)}{\alpha} - O(1/\alpha) \, .
\]
If $b \ne 1$, for sufficiently small $\alpha$ this is negative if $b < 1$ and positive if $b > 1$. Therefore $\zeta_2 / \alpha = 1-\delta_2$ where $\delta_2 = (1+o(1))/\ln(1/\alpha)$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:zeta3}]
Lemma~\ref{lem:zeta2} tells us that $\zeta_3 = \alpha(1-\delta)$ for some
\[
\delta < \frac{1+o(1)}{\ln(1/\alpha)} \, .
\]
Setting $\zeta = \alpha(1-\delta)$ in~\eqref{eq:psi1} and using
\[
\frac{1}{(1-\alpha)^4} = 1 + O(\alpha)
\quad \text{and} \quad
-\ln (1-x) = O(x)
\]
gives, after some algebra,
\[
\psi'(\zeta)
= \alpha c + \ln \alpha + 2 \ln \delta + O(\alpha \delta c) + O(\alpha^2 c) \, .
\]
For any constant $b$, setting
\[
\delta = \frac{b \sqrt{\alpha}}{\mathrm{e}}
\]
gives
\[
\psi'(\zeta)
= \alpha c + 2 \ln \alpha + 2 \ln b - 2 + O(\alpha^{3/2} c) \, ,
\]
and setting
\[
c = \frac{2 \ln (1/\alpha) + 2-\varepsilon}{\alpha}
\]
then gives
\[
\psi'(\zeta) = 2 \ln b - \varepsilon + o(1) \, .
\]
If $\varepsilon = o(1)$ and $b \ne 1$, for sufficiently small $\alpha$ this is negative if $b < 1$ and positive if $b > 1$. Therefore $\zeta_3 / \alpha = 1-\delta_3$ where $\delta_3 = (1+o(1)) \sqrt{\alpha} / \mathrm{e}$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:done}]
Setting $\zeta = \alpha(1-\delta)$ where $\delta = b \sqrt{\alpha} / \mathrm{e}$ in~\eqref{eq:psi} and using the Taylor series
\[
\frac{1}{(1-\alpha)^4} = 1 + 4 \alpha + O(\alpha^2)
\quad \text{and} \quad
-\ln (1-x)=x+x^2/2+O(x^3)
\]
gives, after a fair amount of algebra,
\begin{align*}
\psi(\zeta)
&= \alpha (\ln \alpha -1)
- \left(\frac{2 b \ln \alpha - 4 b + 2 b \ln b}{\mathrm{e}} \right) \alpha^{3/2}
+ \left( \frac{c+1}{2} - \frac{b^2}{2 \mathrm{e}^2} \right) \alpha^2 \\
&- \frac{b}{\mathrm{e}} \,\alpha^{5/2} c
+ \left( \frac{b^2}{2 \mathrm{e}^2}+1 \right) \alpha^3 c + O(\alpha^{7/2} c) + O(\alpha^{5/2}) \, .
\end{align*}
Setting
\[
c = \frac{2 \ln (1/\alpha) + 2 - x \sqrt{\alpha} }{\alpha}
\]
for constant $x$ causes the terms proportional to $\alpha \ln \alpha$, $\alpha$, and $\alpha^{3/2} \ln \alpha$ to cancel, leaving
\[
\psi(\zeta)
= \left( \frac{2b (1-\ln b)}{\mathrm{e}} - \frac{x}{2} \right) \alpha^{3/2} + O(\alpha^2) \, .
\]
The coefficient of $\alpha^{3/2}$ is maximized when $b=1$, and is negative whenever $x > 4/\mathrm{e}$. In that case, $\psi(\zeta_3) < 0$ for sufficiently small $\alpha$, completing the proof.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:lower}]
First note that
\begin{equation}
\label{eq:a0}
\alpha_0 = \frac{2}{c} \,W\!\left( \frac{\mathrm{e} c}{2} \right)
\end{equation}
is the root of the equation
\[
c = 2 \,\frac{\ln (1/\alpha_0) + 1}{\alpha_0} \, ,
\]
since we can also write it as
\[
\mathrm{e}^{c \alpha_0/2} = \frac{\mathrm{e}}{\alpha_0} \, ,
\]
and multiplying both sides by $c \alpha_0/2$ gives
\[
\frac{c \alpha_0}{2} \,\mathrm{e}^{c \alpha_0/2} = \frac{\mathrm{e} c}{2} \, ,
\]
in which case~\eqref{eq:a0} follows from the definition of $W$.
The root $\alpha$ of
\[
c = 2 \,\frac{\ln (1/\alpha) + 1}{\alpha} - \frac{x}{\sqrt{\alpha}}
\]
is then at least
\[
\frac{2}{c} \,W\!\left( \frac{\mathrm{e} c}{2} \right) + \big(x+o(1)\big) \frac{\partial \alpha_0}{\partial c} \sqrt{\frac{c}{2 \ln c}}
\]
since $\alpha = (1+o(1)) \,2 \ln c / c$ and $\partial^2 \alpha_0 / \partial c^2 \ge 0$. Since
\[
\frac{\partial \alpha_0}{\partial c} = - \big( 1+o(1) \big) \frac{2 \ln c}{c^2} \, ,
\]
the statement follows from Theorem~\ref{thm:lower}.
\end{proof}
\section*{Acknowledgments}
We are grateful to Amin Coja-Oghlan, Alan Frieze, Yuval Peres, Alex Russell, and Joel Spencer for helpful conversations.
\section{Introduction}
We are interested in the likely size of the largest independent set $S$ in a random graph with a given average degree. It is easy to see that $|S|=\mathrm{\Theta}(n)$ whenever the average degree is constant, and Shamir and Spencer~\cite{shamirSpencer87} showed using Azuma's inequality that, for any fixed $n$, $|S|$ is tightly concentrated around its mean. Moreover, Bayati, Gamarnik, and Tetali~\cite{bayati2010} recently showed that $|S|/n$ converges to a limit with high probability. Thus for each constant $c$ there is a constant $\alpha_\mathrm{crit} = \alpha_\mathrm{crit}(c)$ such that
\[
\lim_{n \rightarrow \infty} \Pr[\mbox{$G(n,p=c/n)$ has an independent set of size $\alpha n$}]
= \begin{cases} 1 & \alpha < \alpha_\mathrm{crit} \\
0 & \alpha > \alpha_\mathrm{crit} \, .
\end{cases}
\]
By standard arguments this holds in $G(n,m=cn/2)$ as well.
Our goal is to bound $\alpha_\mathrm{crit}$ as a function of $c$, or equivalently to bound
\[
c_\mathrm{crit} = \sup \,\{ c : \alpha_\mathrm{crit}(c) \ge \alpha \} \, ,
\]
as a function of $\alpha$. For $c \le \mathrm{e}$, a greedy algorithm of Karp and Sipser~\cite{karp-sipser} asymptotically finds a maximal independent set, and analyzing this algorithm with differential equations yields the exact value of $\alpha_\mathrm{crit}$. For larger $c$, Frieze~\cite{frieze90} determined $\alpha_\mathrm{crit}$ to within $o(1/c)$, where $o$ refers to the limit where $c$ is large. These bounds were improved by Coja-Oghlan and Efthymiou~\cite{coja2011} who prove detailed results on the structure of the set of independent sets.
We further improve these bounds. Our method is a weighted version of the second moment method, inspired by the work of Achlioptas and Peres~\cite{ach-peres} on random $k$-SAT, where each independent set is given a weight depending on the total degree of its vertices. In addition to improving bounds on this particular problem, our hope is that this advances the art and science of inventing random variables that counteract local sources of correlation in random structures.
We work in a modified version of the $G(n,m)$ model which we call $\widetilde{G}(n,m)$. For each of the $m$ edges, we choose two vertices $u,v$ uniformly and independently and connect them. This may lead to a few multiple edges or self-loops. A vertex with a self-loop cannot belong to an independent set. In the sparse case where $m=cn/2$ for constant $c$, with constant positive probability $\widetilde{G}(n,m=cn/2)$ has no multiple edges or self-loops, in which case it is uniform in the usual model $G(n,m=cn/2)$ where edges are chosen without replacement from distinct pairs of vertices. Thus any property which holds with high probability for $\widetilde{G}(n,m)$ also holds with high probability for $G(n,m)$, and any bounds we prove on $\alpha_\mathrm{crit}$ in $\widetilde{G}(n,m)$ also hold in $G(n,m)$.
We review the first moment upper bound on $\alpha_\mathrm{crit}$ from Bollob\'as~\cite{bollobas}. Let $X$ denote the number of independent sets of size $\alpha n$ in $\widetilde{G}(n,m)$. Then
\[
\Pr[X > 0] \le \mathbb{E}[X] \, .
\]
By linearity of expectation, $\mathbb{E}[X]$ is the sum over all $\binom{n}{\alpha n}$ sets of $\alpha n$ vertices of the probability that a given one is independent. The $m$ edges $(u,v)$ are chosen independently and for each one $u,v \in S$ with probability $\alpha^2$, so
\[
\mathbb{E}[X] = \binom{n}{\alpha n} (1-\alpha^2)^m \, .
\]
In the limit $n \to \infty$, Stirling's approximation $n! = (1+o(1)) \sqrt{2\pi n} \,n^n \,\mathrm{e}^{-n}$ gives
\begin{equation}
\label{eq:stirling}
\binom{n}{\alpha n} = \about{\frac{1}{\sqrt{n}} \,\mathrm{e}^{n h(\alpha)}} \, ,
\end{equation}
where $h$ is the entropy function
\[
h(\alpha) = -\alpha \ln \alpha - (1-\alpha) \ln (1-\alpha) \, ,
\]
and where $\mathrm{\Theta}$ hides constants that depend smoothly on $\alpha$. Thus
\[
\mathbb{E}[X] = \about{ \frac{1}{\sqrt{n}} \,\mathrm{e}^{n(h(\alpha)+ (c/2) \ln (1-\alpha^2))} } \, .
\]
For each $c$, the $\alpha$ such that
\begin{equation}
\label{eq:first-moment}
h(\alpha)+ (c/2) \ln (1-\alpha^2) = 0
\end{equation}
is an upper bound on $\alpha_\mathrm{crit}(c)$, since for larger $\alpha$ the expectation $\mathbb{E}[X]$ is exponentially small.
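As a concrete illustration (a Python sketch, not part of the argument), the root of~\eqref{eq:first-moment} for a given $c$ can be found by bisection, since the exponent is positive for small $\alpha$ and negative near $\alpha = 1$:

```python
import math

def first_moment_exponent(alpha, c):
    """h(alpha) + (c/2) ln(1 - alpha^2), the exponential rate of E[X]."""
    h = -alpha * math.log(alpha) - (1 - alpha) * math.log(1 - alpha)
    return h + (c / 2) * math.log(1 - alpha ** 2)

def first_moment_bound(c, tol=1e-12):
    """Bisect for the root of the exponent: it is positive for small alpha
    and negative near alpha = 1."""
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if first_moment_exponent(mid, c) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha_upper = first_moment_bound(10.0)   # upper bound on alpha_crit at c = 10
```

For $c = 10$, for instance, this yields an upper bound of about $0.348$ on $\alpha_\mathrm{crit}$.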
We find it more convenient to parametrize our bounds in terms of the function $c_\mathrm{crit}(\alpha)$. Then~\eqref{eq:first-moment} gives the following upper bound,
\begin{equation}
\label{eq:c-upper}
c_\mathrm{crit}(\alpha)
\le 2 \,\frac{\alpha \ln \alpha + (1-\alpha) \ln (1-\alpha)}{\ln (1-\alpha^2)}
\le 2 \,\frac{\ln (1/\alpha) + 1}{\alpha} \, .
\end{equation}
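Both inequalities in~\eqref{eq:c-upper} are easy to spot-check numerically; the following Python sketch (illustrative only) also confirms that the two expressions agree to leading order as $\alpha \to 0$:

```python
import math

def c_upper_exact(alpha):
    # first expression in the bound: numerator and denominator are both
    # negative, so the ratio is positive
    num = alpha * math.log(alpha) + (1 - alpha) * math.log(1 - alpha)
    return 2 * num / math.log(1 - alpha ** 2)

def c_upper_simple(alpha):
    # the weaker, simpler upper bound
    return 2 * (math.log(1 / alpha) + 1) / alpha

checks = [(a, c_upper_exact(a) <= c_upper_simple(a) + 1e-12)
          for a in (0.3, 0.1, 0.03, 0.01, 0.001)]
ratio = c_upper_exact(0.001) / c_upper_simple(0.001)   # -> 1 as alpha -> 0
```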
We will prove the following nearly-matching lower bound.
\begin{theorem}
\label{thm:lower}
For any constant $x > 4/\mathrm{e}$, for sufficiently small $\alpha$
\begin{equation}
\label{eq:c-lower}
c_\mathrm{crit}(\alpha)
\ge 2 \,\frac{\ln (1/\alpha) + 1}{\alpha} - \frac{x}{\sqrt{\alpha}} \, .
\end{equation}
\end{theorem}
\noindent
Coja-Oghlan and Efthymiou~\cite{coja2011} bounded $c_\mathrm{crit}$ within a slightly larger factor $O(\sqrt{\ln (1/\alpha)/\alpha})$.
Inverting~\eqref{eq:c-upper} and Theorem~\ref{thm:lower} gives the following bounds on $\alpha_\mathrm{crit}(c)$. The lower bound is a significant improvement over previous results:
\begin{corollary}
\label{cor:lower}
For $z > 0$, let $W(z)$ denote the unique positive root $x$ of the equation $x \mathrm{e}^x = z$. Then for any constant $y > 4 \sqrt{2} / \mathrm{e}$,
\[
\frac{2}{c} \,W\!\left( \frac{\mathrm{e} c}{2} \right) - y \,\frac{\sqrt{\ln c}}{c^{3/2}}
\le \alpha_\mathrm{crit}
\le \frac{2}{c} \,W\!\left( \frac{\mathrm{e} c}{2} \right) \, ,
\]
where the lower bound holds for sufficiently large $c$.
\end{corollary}
\noindent
If we like we can expand $W(\mathrm{e} c/2)$ asymptotically in $c$,
\begin{align*}
W\!\left( \frac{\mathrm{e} c}{2} \right)
&= \ln c - \ln \ln c + 1 - \ln 2 + \frac{\ln \ln c}{\ln c} - \frac{1-\ln 2}{\ln c} \\
&+ \frac{1}{2} \frac{(\ln \ln c)^2}{(\ln c)^2} - (2-\ln 2) \frac{\ln \ln c}{(\ln c)^2}
+ \frac{3 + (\ln 2)^2 - 4 \ln 2}{2 (\ln c)^2} + O\!\left( \frac{(\ln \ln c)^3}{(\ln c)^3} \right)
\, .
\end{align*}
The first few of these terms correspond to the bound in~\cite{frieze90}, and we can extract as many additional terms as we wish.
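The expansion can be checked against a direct evaluation of $W$; here is a small Python sketch using Newton's method (illustrative only, with an ad hoc starting point):

```python
import math

def lambert_w(z, iters=60):
    """Positive root x of x e^x = z (for z > 0), via Newton's method."""
    x = math.log(z + 1.0)                 # rough starting point
    for _ in range(iters):
        ex = math.exp(x)
        x -= (x * ex - z) / (ex * (x + 1))
    return x

def w_expansion(c):
    """The asymptotic expansion of W(e c / 2) displayed above."""
    L, LL, ln2 = math.log(c), math.log(math.log(c)), math.log(2)
    return (L - LL + 1 - ln2 + LL / L - (1 - ln2) / L
            + 0.5 * LL ** 2 / L ** 2
            - (2 - ln2) * LL / L ** 2
            + (3 + ln2 ** 2 - 4 * ln2) / (2 * L ** 2))

c = 1e6
exact = lambert_w(math.e * c / 2)
approx = w_expansion(c)
```

At $c = 10^6$ the truncated expansion already agrees with the exact value to within the stated $O\!\left((\ln \ln c)^3/(\ln c)^3\right)$ error.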
\section{The weighted second moment method}
Our proof uses the second moment method. For any nonnegative random variable $X$, the Cauchy-Schwarz inequality implies that
\begin{equation}
\label{eq:prob-lower-bound}
\Pr[X > 0] \ge \frac{\mathbb{E}[X]^2}{\mathbb{E}[X^2]} \, .
\end{equation}
If $X$ counts the number of objects of a certain kind, \eqref{eq:prob-lower-bound} shows that at least one such object exists as long as the expected number of objects is large and the variance is not too large.
Unfortunately, applying this directly to the number $X$ of independent sets fails utterly. The problem is that for most pairs of sets of size $\alpha n$, the events that they are independent are highly correlated, unlike the case where the average degree grows sufficiently quickly with $n$~\cite{bollobas,frieze90}. As a result, $\mathbb{E}[X^2]$ is exponentially larger than $\mathbb{E}[X]^2$, and the second moment method yields an exponentially small lower bound on $\Pr[X > 0]$.
One way to deal with these correlations, used by Frieze in~\cite{frieze90}, is to partition the vertices into sets $V_i$ of $\lfloor 1/\alpha \rfloor$ vertices each and focus on those independent sets that intersect each $V_i$ exactly once. In that case, a large-deviations inequality allows us to override the correlations.
Here we pursue a different approach, inspired by the work of Achlioptas and Peres~\cite{ach-peres} on random $k$-SAT. The idea is to give each independent set $S$
a weight $w(S)$, depending exponentially on local quantities in the graph. Specifically, we define
\[
w(S) = \mu^{\mbox{\scriptsize \# of edges $(u,v)$ with $u,v \notin S$}} \, ,
\]
for some $\mu < 1$. Since $S$ is independent, no edge has both endpoints in $S$; hence, if the number of edges $m$ is fixed, the number of edges where neither endpoint is in $S$ is simply $m$ minus the total degree of the vertices in $S$. Thus we can also write
\begin{equation}
\label{eq:weight}
w(S) = \mu^{m-\sum_{v \in S} \deg(v)} \, .
\end{equation}
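The counting identity behind~\eqref{eq:weight} can be seen on a toy example (a Python sketch with a hand-made graph, not drawn from the random model): for an \emph{independent} set $S$, no edge is counted twice in the degree sum, so the number of edges avoiding $S$ equals $m$ minus the total degree of $S$.

```python
# A small hand-made graph to illustrate the degree-sum identity.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # m = 5 edges on vertices 0..3
m = len(edges)
S = {0, 2}                                          # independent: no edge inside S

deg = {v: 0 for v in range(4)}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

avoiding = sum(1 for u, v in edges if u not in S and v not in S)
total_deg_S = sum(deg[v] for v in S)                # equals m - avoiding
```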
We will apply the second moment method to the total weight of all independent sets of size $\alpha n$,
\[
X = \sum_{\substack{S \subseteq V, |S| = \alpha n \\ S\, \textrm{independent}}} w(S) \, .
\]
If we tune $\mu$ properly, then for particular $\alpha^\star, c^\star$ we have $\mathbb{E}[X^2] = \about{\mathbb{E}[X]^2}$, in which case $\Pr[X > 0]$ is bounded away from zero.
In that case $c_\mathrm{crit}(\alpha^\star) \ge c^\star$, or equivalently $\alpha_\mathrm{crit}(c^\star) \ge \alpha^\star$.
Why is this the right type of weight? Intuitively, one of the main sources of correlations between independent sets is the temptation to occupy low-degree vertices. For instance, any two maximal independent sets contain all the degree-zero vertices, giving them a large overlap. If $X$ simply counts the independent sets of size $\alpha n$, the resulting correlations make $X$'s variance exponentially large compared to the square of its expectation, and the second moment fails.
Weighting each $S$ as in~\eqref{eq:weight} counteracts this temptation, punishing sets that occupy low-degree vertices by reducing their weight exponentially. As we will see below, when $\mu$ is tuned to a particular value, making this punishment condign, these correlations disappear in the sense that the dominant contribution to $\mathbb{E}[X^2]$ comes from pairs of sets $S, T$ of size $\alpha n$ such that $\abs{S \cap T} = \alpha^2 n + O(\sqrt{n})$, just as if $S$ and $T$ were chosen independently from among all sets of size $\alpha n$.
This is analogous to the situation for $k$-SAT, where satisfying assignments are correlated because of the temptation to give each variable the truth value that agrees with the majority of its literals in the formula. By giving each satisfying assignment a weight $\eta^{\mbox{\scriptsize \# of true literals}}$ and tuning $\eta$ properly, we make the dominant contribution to $\mathbb{E}[X^2]$ come from pairs of satisfying assignments which agree on $n/2+O(\sqrt{n})$ variables, just as if they were chosen independently~\cite{ach-peres}.
Proceeding, let us compute the first and second moments of our random variable $X$. We extend the weight function $w(S)$ to all sets $S \subseteq V$ by setting $w(S)=0$ if $S$ is not independent. That is,
\[
X = \sum_{\substack{S \subseteq V \\ |S| = \alpha n}} w(S)
\]
where
\[
w(S) = \prod_{(u,v) \in E} w_{u,v}(S)
\]
and
\begin{equation}
\label{eq:wuv}
w_{u,v}(S) = \begin{cases}
\mu & \mbox{if $u, v \notin S$} \\
1 & \mbox{if $u \in S, v \notin S$ or vice versa} \\
0 & \mbox{if $u,v \in S$} \, .
\end{cases}
\end{equation}
We start by computing $\mathbb{E}[X]$. Fix a set $S$ of size $\alpha n$. Since the $m$ edges are chosen independently,
\[
\mathbb{E}[w(S)] = w_1(\alpha,\mu)^m
\quad
\text{where}
\quad
w_1(\alpha,\mu) = \mathbb{E}_{u,v}[w_{u,v}(S)] \, .
\]
For each edge $(u,v)$ in $\widetilde{G}(n,m)$, $u$ and $v$ are chosen randomly and independently, so the probabilities of the three cases in~\eqref{eq:wuv} are $(1-\alpha)^2$, $2\alpha(1-\alpha)$, and $\alpha^2$ respectively. Thus
\[
w_1(\alpha, \mu)
= (1-\alpha)^2 \mu + 2\alpha(1-\alpha) \, .
\]
By linearity of expectation,
\[
\mathbb{E}[X] = \sum_{\substack{S \subseteq V \\ |S| = \alpha n}} \mathbb{E}[w(S)] = \binom{n}{\alpha n} \,w_1(\alpha, \mu)^m \,.
\]
Using Stirling's approximation~\eqref{eq:stirling}
and substituting $m = cn/2$ gives
\begin{equation}
\label{eq:x-first}
\mathbb{E}[X] = \about{ \frac{1}{\sqrt{n}} \,\mathrm{e}^{n f_1(\alpha,\mu)} }
\quad \text{where} \quad
f_1(\alpha,\mu) = h(\alpha)+ \frac{c}{2} \,\ln w_1(\alpha, \mu) \, .
\end{equation}
As before, $\mathrm{\Theta}$ hides constant factors that depend smoothly on $\alpha$.
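For very small $n$ and $m$, the formula $\mathbb{E}[X] = \binom{n}{\alpha n}\, w_1^m$ can be verified exactly by enumerating all $n^{2m}$ equally likely ordered edge sequences of $\widetilde{G}(n,m)$ (an illustrative Python sketch; the parameters are chosen only to keep the enumeration tiny):

```python
import itertools, math

# Exact check of E[X] = C(n, k) * w1^m in Gtilde(n, m), by enumerating all
# n^(2m) equally likely ordered edge sequences.
n, m, k = 4, 2, 1              # k = alpha * n, so alpha = 1/4
alpha, mu = k / n, 0.5

def weight(S, edges):
    w = 1.0
    for u, v in edges:
        if u in S and v in S:          # covers self-loops at a vertex of S
            return 0.0
        if u not in S and v not in S:
            w *= mu
    return w

total, outcomes = 0.0, 0
for es in itertools.product(itertools.product(range(n), repeat=2), repeat=m):
    outcomes += 1
    for S in itertools.combinations(range(n), k):
        total += weight(set(S), es)
exact_EX = total / outcomes

w1 = (1 - alpha) ** 2 * mu + 2 * alpha * (1 - alpha)
formula_EX = math.comb(n, k) * w1 ** m
```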
Next we compute the second moment. We have
\[
\mathbb{E}[X^2] = \mathbb{E}\left[ \sum_{S} w(S) \sum_{T}w(T)\right] = \sum_{S,T} \mathbb{E}[w(S) \,w(T)]
\]
where $S$ and $T$ are subsets of $V$ of size $\alpha n$. The expectation of $w(S) \,w(T)$ does not depend on the specific choice of $S$ and $T$, but it does depend on the size of their intersection. We say that $S$ and $T$ have \emph{overlap} $\zeta$ if $\abs{S \cap T} = \zeta n$. Again using the independence of the edges, we have
\[
\mathbb{E}[w(S) \,w(T)] = w_2(\alpha, \zeta, \mu)^m
\quad
\text{where}
\quad
w_2(\alpha, \zeta, \mu) = \mathbb{E}_{u,v}\left[ w_{u,v}(S) \,w_{u,v}(T) \right] \, .
\]
For each edge $(u,v)$ of $\widetilde{G}$, the probability that it has no endpoints in $S$ or $T$ is $(1-2\alpha +\zeta)^2$, in which case it contributes $\mu^2$ to $w_{u,v}(S) \,w_{u,v}(T)$. The probability that it has one endpoint in $S$ and none in $T$ or vice versa is $2(2\alpha - 2\zeta)(1- 2\alpha +\zeta)$, in which case it contributes $\mu$. Finally, the probability that it has one endpoint in $S$ and one in $T$ is $2(\alpha - \zeta)^2 + 2\zeta(1- 2\alpha +\zeta)$, in which case it contributes $1$. With the remaining probability it has both endpoints in $S$ or both in $T$, causing that set to be non-independent and contributing zero. Thus
\[
w_2(\alpha, \zeta, \mu)
= (1-2\alpha +\zeta)^2 \mu^2 + 4(\alpha - \zeta)(1- 2\alpha +\zeta)\mu + 2(\alpha - \zeta)^2 + 2\zeta(1- 2\alpha +\zeta) \, .
\]
Observe that when $\zeta = \alpha^2$, as it typically would be if $S$ and $T$ were chosen independently and uniformly, we have
\begin{equation}
\label{eq:w2-w1}
w_2 = w_1^2 \, .
\end{equation}
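Note that~\eqref{eq:w2-w1} holds at $\zeta = \alpha^2$ for \emph{every} $\mu$, not only the tuned value used later, which is easy to confirm numerically (Python sketch, illustrative only):

```python
def w1(alpha, mu):
    return (1 - alpha) ** 2 * mu + 2 * alpha * (1 - alpha)

def w2(alpha, zeta, mu):
    return ((1 - 2 * alpha + zeta) ** 2 * mu ** 2
            + 4 * (alpha - zeta) * (1 - 2 * alpha + zeta) * mu
            + 2 * (alpha - zeta) ** 2
            + 2 * zeta * (1 - 2 * alpha + zeta))

# At zeta = alpha^2, w2 = w1^2 for every mu.
gaps = [abs(w2(a, a * a, mu) - w1(a, mu) ** 2)
        for a in (0.05, 0.1, 0.2)
        for mu in (0.3, 0.7, 1.0)]
```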
The number of pairs of sets $S, T$ of size $\alpha n$ and intersection of size $z = \zeta n $ is the multinomial
\[
\binom{n}{\zeta n, (\alpha -\zeta) n, (\alpha -\zeta)n, (1-2\alpha +\zeta)n}
= \binom{n}{\alpha n} \binom{\alpha n}{\zeta n} \binom{(1-\alpha)n}{(\alpha-\zeta) n} \, ,
\]
and linearity of expectation gives
\[
\mathbb{E}[X^2]
= \sum_{z=0}^{\alpha n}
\binom{n}{z, \alpha n-z, \alpha n - z, (1-2\alpha)n + z}
\,w_2(\alpha, \zeta, \mu)^m \, .
\]
This sum is dominated by the terms where $\zeta = z/n$ is bounded inside the interval $(0,\alpha)$. Stirling's approximation then gives
\[
\binom{n}{\zeta n, (\alpha -\zeta) n, (\alpha -\zeta)n , (1-2\alpha +\zeta)n }
= \about{ \frac{\mathrm{e}^{n \left[ h(\alpha) + \alpha h\left(\zeta/\alpha\right) + (1-\alpha) h\left(\frac{\alpha -\zeta}{1-\alpha}\right) \right]}}{n^{3/2}} }
\, ,
\]
where $\mathrm{\Theta}$ hides constants that vary slowly with $\alpha$ and $\zeta$.
Thus the contribution to $\mathbb{E}[X^2]$ of pairs of sets with overlap $\zeta \in (0,\alpha)$ is
\begin{equation}
\label{eq:x-second}
\frac{1}{n^{3/2}}
\,\mathrm{e}^{n f_2(\alpha,\zeta,\mu)}
\end{equation}
where
\[
f_2(\alpha,\zeta,\mu)
= h(\alpha) + \alpha h\!\left(\frac{\zeta}{\alpha}\right) + (1-\alpha) h\!\left(\frac{\alpha -\zeta}{1-\alpha}\right)
+ \frac{c}{2} \ln w_2(\alpha, \zeta, \mu) \, .
\]
Combining~\eqref{eq:x-second} with~\eqref{eq:x-first}, we can write
\begin{equation}
\label{eq:moments-ratio}
\frac{\mathbb{E}[X^2]}{\mathbb{E}[X]^2}
= \about{ \frac{1}{\sqrt{n}} \sum_{z=0}^{\alpha n} \mathrm{e}^{n \phi(z/n)} } \, ,
\end{equation}
where
\begin{align*}
\phi(\zeta)
&= f_2(\alpha,\zeta,\mu) - 2 f_1(\alpha,\mu) \\
&= \alpha h\!\left(\frac{\zeta}{\alpha}\right) + (1-\alpha) \,h\!\left(\frac{\alpha -\zeta}{1-\alpha}\right) - h(\alpha)
+ \frac{c}{2} \ln \frac{w_2(\alpha, \zeta, \mu)}{w_1(\alpha, \mu)^2} \, .
\end{align*}
Using~\eqref{eq:w2-w1} and the fact that the entropy terms cancel, we have
\[
\phi(\alpha^2) = 0 \, .
\]
In other words, the contribution to $\mathbb{E}[X^2]$ from pairs of sets with overlap $\alpha^2$ is proportional to $\mathbb{E}[X]^2$.
We can now replace the sum in~\eqref{eq:moments-ratio} with an integral,
\[
\frac{\mathbb{E}[X^2]}{\mathbb{E}[X]^2}
= \about{ \frac{1}{\sqrt{n}} \sum_{z=0}^{\alpha n} \mathrm{e}^{n \phi(z/n)} }
= \about{ \sqrt{n} \int_0^\alpha \mathrm{e}^{n \phi(\zeta)} \mathrm{d}\zeta } \, ,
\]
and evaluate this integral using Laplace's method as in~\cite[Lemma 3]{ach-moore}. Its asymptotic behavior depends on the maximum value of $\phi$,
\[
\phi_{\max} = \max_{\zeta \in [0,\alpha]} \phi(\zeta) \, .
\]
If $\phi'' < 0$ at the corresponding $\zeta_{\max}$, then it is dominated by an interval of width $\about{1/\sqrt{n}}$ around $\zeta_{\max}$ and
\[
\frac{\mathbb{E}[X^2]}{\mathbb{E}[X]^2}
= \about{ \mathrm{e}^{n \phi_{\max}} } \, .
\]
If $\phi_{\max} = \phi(\alpha^2) = 0$, then $\mathbb{E}[X^2] = \about{ \mathbb{E}[X]^2 }$ and the second moment method succeeds. Thus our goal is to show that $\phi$ is maximized at $\alpha^2$.
For this to happen, we at least need $\zeta = \alpha^2$ to be a \emph{local} maximum of $\phi$. In particular, we need
\begin{equation}
\label{eq:local-max}
\phi'(\alpha^2) = 0 \, .
\end{equation}
Differentiating, we find that~\eqref{eq:local-max} holds if
\[
\mu = \frac{1-2\alpha}{1-\alpha} \, .
\]
Henceforth, we will fix $\mu$ to this value. In that case we have
\[
w_1 = 1-\alpha
\quad \text{and} \quad
w_2 = (1-\alpha)^2 + \frac{(\zeta - \alpha^2)^2}{(1-\alpha)^2} \, ,
\]
so
\[
\phi(\zeta)
= \alpha h\!\left(\frac{\zeta}{\alpha}\right) + (1-\alpha) \,h\!\left(\frac{\alpha -\zeta}{1-\alpha}\right) - h(\alpha)
+ \frac{c}{2} \,\ln \!\left(1 +\frac{(\zeta -\alpha^2)^2}{(1-\alpha)^4} \right) \, .
\]
The remainder of this paper is dedicated to showing that for sufficiently small $\alpha$ as a function of $c$ or vice versa, $\phi$ is indeed maximized at $\alpha^2$.
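With $\mu = (1-2\alpha)/(1-\alpha)$, the facts that $\phi(\alpha^2) = 0$, $\phi'(\alpha^2) = 0$, and $\phi''(\alpha^2) < 0$ when $c = o(1/\alpha^2)$ can be checked numerically. The following Python sketch uses finite differences at a sample point (illustrative only; the choice of $\alpha$ and $c$ is arbitrary):

```python
import math

def h(x):
    # entropy function, extended by continuity at the endpoints
    return (-x * math.log(x) - (1 - x) * math.log(1 - x)) if 0 < x < 1 else 0.0

def phi(zeta, alpha, c):
    """phi(zeta) with mu tuned to (1 - 2 alpha)/(1 - alpha), as in the text."""
    return (alpha * h(zeta / alpha)
            + (1 - alpha) * h((alpha - zeta) / (1 - alpha))
            - h(alpha)
            + (c / 2) * math.log(1 + (zeta - alpha ** 2) ** 2 / (1 - alpha) ** 4))

alpha = 0.01
c = 0.98 * 2 * (math.log(1 / alpha) + 1) / alpha  # a bit below the first-moment bound
z0 = alpha ** 2
eps = 1e-6
val = phi(z0, alpha, c)
slope = (phi(z0 + eps, alpha, c) - phi(z0 - eps, alpha, c)) / (2 * eps)
curv = (phi(z0 + eps, alpha, c) - 2 * val + phi(z0 - eps, alpha, c)) / eps ** 2
```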
\section{Finding and bounding the maxima}
Using $\ln (1+x) \le x$, we write $\phi(\zeta) \le \psi(\zeta)$ where
\begin{equation}
\label{eq:psi}
\psi(\zeta)
= \alpha h\!\left( \frac{\zeta}{\alpha} \right) + (1-\alpha) \,h\!\left( \frac{\alpha-\zeta}{1-\alpha} \right) - h(\alpha)
+ \frac{c}{2} \frac{(\zeta-\alpha^2)^2}{(1-\alpha)^4}
\, .
\end{equation}
Note that
\[
\psi(\alpha^2) = \phi(\alpha^2) = 0 \, .
\]
Our goal is to show for an appropriate $c$ that $\zeta = \alpha^2$ is in fact the global maximum of $\psi$, and therefore of $\phi$.
In what follows, asymptotic symbols such as $O$ and $o$ refer to the limit $\alpha \to 0$, or equivalently the limit $c \to \infty$. Error terms may be positive or negative unless otherwise stated.
The first two derivatives of $\psi(\zeta)$ are
\begin{gather}
\psi'(\zeta) =
\frac{c \left(\zeta-\alpha^2\right)}{(1-\alpha)^4}
+ 2 \ln (\alpha-\zeta) - \ln \zeta - \ln (1-2 \alpha+\zeta)
\label{eq:psi1}
\\
\psi''(\zeta) = \frac{c}{(1-\alpha)^4} - \frac{2}{\alpha-\zeta} - \frac{1}{\zeta} - \frac{1}{1-2 \alpha+\zeta}
\label{eq:psi2}
\end{gather}
The second derivative $\psi''(\zeta)$ tends to $-\infty$ at $\zeta=0$ and $\zeta=\alpha$. Setting $\psi''(\zeta)=0$ yields a cubic equation in $\zeta$ which has one negative root and, for sufficiently small $\alpha$, two positive roots in the interval $[0,\alpha]$. Thus, for sufficiently small $\alpha$, there are $0 < \zeta_1 < \zeta_2 < \alpha$ where
\[
\psi''(\zeta) \begin{cases}
< 0 & 0 < \zeta < \zeta_1 \\
> 0 & \zeta_1 < \zeta < \zeta_2 \\
< 0 & \zeta_2 < \zeta < \alpha \, .
\end{cases}
\]
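This sign pattern is easy to confirm numerically for a near-critical pair $(\alpha, c)$ (Python sketch; the grid resolution is ad hoc but fine enough here):

```python
import math

def psi2(zeta, alpha, c):
    """Second derivative of psi, from the display above."""
    return (c / (1 - alpha) ** 4
            - 2 / (alpha - zeta) - 1 / zeta - 1 / (1 - 2 * alpha + zeta))

alpha = 0.01
c = 2 * (math.log(1 / alpha) + 1) / alpha     # a near-critical c for this alpha

# Record the sign pattern of psi'' on a fine grid inside (0, alpha):
# it should be negative, then positive, then negative again.
signs = []
for i in range(1, 10000):
    z = alpha * i / 10000
    s = 1 if psi2(z, alpha, c) > 0 else -1
    if not signs or signs[-1] != s:
        signs.append(s)
```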
It follows that $\psi$ can have at most two local maxima. One is in the interval $[0,\zeta_1]$, and the following lemma shows that for the relevant $\alpha$ and $c$ this maximum occurs at $\zeta = \alpha^2$:
\begin{lemma}
\label{lem:local}
If $c=o(1/\alpha^2)$ then for sufficiently small $\alpha$, $\zeta_1 > \alpha^2$ and $\psi(\alpha^2)$ is a local maximum.
\end{lemma}
\noindent
The other local maximum is in the interval $[\zeta_2,\alpha]$, and we denote it $\zeta_3$. To locate it, first we bound $\zeta_2$:
\begin{lemma}
\label{lem:zeta2}
If
\[
c = (2+o(1)) \,\frac{\ln (1/\alpha)}{\alpha} \, ,
\]
then
\[
\frac{\zeta_2}{\alpha} = 1 - \delta_2
\quad \text{where} \quad
\delta_2 = \frac{1+o(1)}{\ln (1/\alpha)} \, .
\]
\end{lemma}
\noindent
Thus $\zeta_2/\alpha$, and therefore $\zeta_3/\alpha$, tends toward $1$ as $\alpha \to 0$.
We can now locate $\zeta_3$ when $\alpha$ is close to its critical value.
\begin{lemma}
\label{lem:zeta3}
If
\[
c = \frac{1}{\alpha} \big( 2 \ln (1/\alpha) + 2-o(1) \big) \, ,
\]
then
\[
\frac{\zeta_3}{\alpha} = 1 - \delta_3
\quad \text{where} \quad
\delta_3 = \frac{1+o(1)}{\mathrm{e}} \sqrt{\alpha} \, .
\]
\end{lemma}
\begin{lemma}
\label{lem:done}
For any constant $x > 4/\mathrm{e}$, if
\[
c = \frac{2 \ln (1/\alpha) + 2 - x \sqrt{\alpha} }{\alpha} \, ,
\]
then $\psi(\zeta_3) < 0$ for sufficiently small $\alpha$.
\end{lemma}
\section{Proofs}
\begin{proofof}{Lemma~\ref{lem:local}}
Setting $\zeta = \alpha^2$ in~\eqref{eq:psi2} gives
\[
\psi''(\alpha^2)
< \frac{c}{(1-\alpha)^4} - \frac{1}{\alpha^2} \, .
\]
If $c = o(1/\alpha^2)$ this is negative for sufficiently small $\alpha$, in which case $\zeta_1 > \alpha^2$ and $\psi(\alpha^2)$ is a local maximum.
\end{proofof}
\begin{proofof}{Lemma~\ref{lem:zeta2}}
For any constant $b$, if
\begin{equation}
\label{eq:zeta2a}
\frac{\zeta}{\alpha} = 1 - \delta
\quad \text{where} \quad
\delta = \frac{b}{\ln (1/\alpha)}
\end{equation}
then~\eqref{eq:psi2} gives
\[
\psi''(\zeta)
= \left( 2-\frac{2}{b}+o(1)\right) \frac{\ln (1/\alpha)}{\alpha} - O(1/\alpha) \, .
\]
If $b \ne 1$, for sufficiently small $\alpha$ this is negative if $b < 1$ and positive if $b > 1$. Therefore $\zeta_2 / \alpha = 1-\delta_2$ where $\delta_2 = (1+o(1))/\ln(1/\alpha)$.
\end{proofof}
\begin{proofof}{Lemma~\ref{lem:zeta3}}
Lemma~\ref{lem:zeta2} tells us that $\zeta_3 = \alpha(1-\delta)$ for some
\[
\delta < \frac{1+o(1)}{\ln(1/\alpha)} \, .
\]
Setting $\zeta = \alpha(1-\delta)$ in~\eqref{eq:psi1} and using
\[
\frac{1}{(1-\alpha)^4} = 1 + O(\alpha)
\quad \text{and} \quad
-\ln (1-x) = O(x)
\]
gives, after some algebra,
\[
\psi'(\zeta)
= \alpha c + \ln \alpha + 2 \ln \delta + O(\alpha \delta c) + O(\alpha^2 c) \, .
\]
For any constant $b$, setting
\[
\delta = \frac{b \sqrt{\alpha}}{\mathrm{e}}
\]
gives
\[
\psi'(\zeta)
= \alpha c + 2 \ln \alpha + 2 \ln b - 2 + O(\alpha^{3/2} c) \, ,
\]
and setting
\[
c = \frac{2 \ln (1/\alpha) + 2-\varepsilon}{\alpha}
\]
then gives
\[
\psi'(\zeta) = 2 \ln b - \varepsilon + o(1) \, .
\]
If $\varepsilon = o(1)$ and $b \ne 1$, for sufficiently small $\alpha$ this is negative if $b < 1$ and positive if $b > 1$. Therefore $\zeta_3 / \alpha = 1-\delta_3$ where $\delta_3 = (1+o(1)) \sqrt{\alpha} / \mathrm{e}$.
\end{proofof}
\begin{proofof}{Lemma~\ref{lem:done}}
Setting $\zeta = \alpha(1-\delta)$ where $\delta = b \sqrt{\alpha} / \mathrm{e}$ in~\eqref{eq:psi} and using the Taylor series
\[
\frac{1}{(1-\alpha)^4} = 1 + 4 \alpha + O(\alpha^2)
\quad \text{and} \quad
-\ln (1-x)=x+x^2/2+O(x^3)
\]
gives, after a fair amount of algebra,
\begin{align*}
\psi(\zeta)
&= \alpha (\ln \alpha -1)
- \left(\frac{2 b \ln \alpha - 4 b + 2 b \ln b}{\mathrm{e}} \right) \alpha^{3/2}
+ \left( \frac{c+1}{2} - \frac{b^2}{2 \mathrm{e}^2} \right) \alpha^2 \\
&- \frac{b}{\mathrm{e}} \,\alpha^{5/2} c
+ \left( \frac{b^2}{2 \mathrm{e}^2}+1 \right) \alpha^3 c + O(\alpha^{7/2} c) + O(\alpha^{5/2}) \, .
\end{align*}
Setting
\[
c = \frac{2 \ln (1/\alpha) + 2 - x \sqrt{\alpha} }{\alpha}
\]
for constant $x$ causes the terms proportional to $\alpha \ln \alpha$, $\alpha$, and $\alpha^{3/2} \ln \alpha$ to cancel, leaving
\[
\psi(\zeta)
= \left( \frac{2b (1-\ln b)}{\mathrm{e}} - \frac{x}{2} \right) \alpha^{3/2} + O(\alpha^2) \, .
\]
The coefficient of $\alpha^{3/2}$ is maximized when $b=1$, and is negative whenever $x > 4/\mathrm{e}$. In that case, $\psi(\zeta_3) < 0$ for sufficiently small $\alpha$, completing the proof.
\end{proofof}
\begin{proofof}{Corollary~\ref{cor:lower}}
First note that
\begin{equation}
\label{eq:a0}
\alpha_0 = \frac{2}{c} \,W\!\left( \frac{\mathrm{e} c}{2} \right)
\end{equation}
is the root of the equation
\[
c = 2 \,\frac{\ln (1/\alpha_0) + 1}{\alpha_0} \, ,
\]
since we can also write it as
\[
\mathrm{e}^{c \alpha_0/2} = \frac{\mathrm{e}}{\alpha_0} \, ,
\]
and multiplying both sides by $c \alpha_0/2$ gives
\[
\frac{c \alpha_0}{2} \,\mathrm{e}^{c \alpha_0/2} = \frac{\mathrm{e} c}{2} \, ,
\]
in which case~\eqref{eq:a0} follows from the definition of $W$.
The root $\alpha$ of
\[
c = 2 \,\frac{\ln (1/\alpha) + 1}{\alpha} - \frac{x}{\sqrt{\alpha}}
\]
is then at least
\[
\frac{2}{c} \,W\!\left( \frac{\mathrm{e} c}{2} \right) + \big(x+o(1)\big) \frac{\partial \alpha_0}{\partial c} \sqrt{\frac{c}{2 \ln c}}
\]
since $\alpha = (1+o(1)) \,2 \ln c / c$ and $\partial^2 \alpha_0 / \partial c^2 \ge 0$. Since
\[
\frac{\partial \alpha_0}{\partial c} = - \big( 1+o(1) \big) \frac{2 \ln c}{c^2} \, ,
\]
the statement follows from Theorem~\ref{thm:lower}.
\end{proofof}
\section*{Acknowledgments}
We are grateful to Amin Coja-Oghlan, Alan Frieze, David Gamarnik, Yuval Peres, Alex Russell, and Joel Spencer for helpful conversations, and to the anonymous reviewers for their comments.
\bigskip

\begin{center}
\textbf{Orientable Hamiltonian Embeddings of the Hypercube Graph}\\
(arXiv:2001.09383)
\end{center}

\noindent\textbf{Abstract.} A Hamiltonian embedding is an embedding of a graph $G$ such that the boundary of each face is a Hamiltonian cycle of $G$. It is shown that the hypercube graph $Q_n$ admits such an embedding on an orientable surface when $n$ is a power of $2$. Basic necessary conditions on Hamiltonian embeddings of $Q_n$ are given, and conjectures are made about other values of $n$.

\section{Introduction}
The hypercube graph $Q_n$ is an example of a graph with a large amount of symmetry. We shall define $Q_n$ in two different ways.
\de{1.1} The $n$-dimensional hypercube $Q_n$, for $n \geq 1$, is given by
\[ V(Q_n) = \{0,1\}^n \qquad E(Q_n) = \{xy: x \mbox{ and $y$ differ in exactly one coordinate} \}\]
We also provide an alternate definition which will be the basis of how we work with the hypercube graph.
\de{1.2 (Cartesian Product of Graphs)}
Let $G$ and $H$ be graphs. The Cartesian product of graphs $G \Box H$ is given by
\[V(G \Box H) = V(G) \times V(H), \]
\[E(G \Box H) = \{(x,y)(x',y'): \mbox{ either } x=x' \mbox{ and } yy'\in E(H) \mbox{ or } y=y' \mbox{ and } xx' \in E(G)\}. \]
To illustrate this concept and the interpretation of the hypercube graph that we will be using, we introduce the merge operator.
\de{1.3} Let $G$ and $G'$ be isomorphic graphs given by the isomorphism $f: G \to G'$. The \emph{merge} operator defines a new graph $G \star G'$, which is given by
\[V(G\star G')= V(G) \sqcup V(G') \qquad E(G\star G') = E(G)\cup E(G') \cup \{xy: x\in G, y\in G', f(x)=y \}. \]
We shall refer to the edges of $G \star G'$ in the subset $E(G) \cup E(G')$ as \emph{inside} edges and we shall refer to the subset $\{xy: x\in G, y\in G', f(x)=y \}$ as \emph{outside} edges.\\
Now we consider $G \Box K_2$. We begin by labelling $V(K_2)= \{1,2\}$. So, by the definition of the Cartesian Product,
\[(x,i)(y,j) \in E(G \Box K_2) \mbox{ iff either } x=y \mbox{ and } i \neq j, \mbox{ or } i=j \mbox{ and } xy\in E(G). \]
This can be seen to be equivalent to the merge operation when $G=G'$ and $f$ is the natural isomorphism. Now $G \Box H$ can be interpreted in a similar manner by considering the disjoint union of copies of $G$ indexed by vertices of $H$. That is if $V(H) = \{x_1, \dots, x_n \}$, then we take $G^{x_1},\dots, G^{x_n}$. For every edge $xy\in E(H)$, perform the merge operation on $G^x$ and $G^y$.\\
Under this interpretation we shall write $e(G^x,G^y)$ for the set of outside edges joining $G^x$ to $G^y$. This interpretation of the Cartesian product gives more clarity to the alternate definition of the $n$-dimensional hypercube $Q_n$.
\de{1.4 (Alternate Definition of $Q_n$)} For $n=1$ define $Q_1=K_2$. Then define $Q_n$ recursively so that $Q_n = Q_{n-1} \Box K_2$.\\
It turns out that $G \Box H$ is always isomorphic to $H \Box G$, but they are not the same. So we are careful in insisting that $Q_n = Q_{n-1} \Box K_2$. From this definition we can see that:
\[ Q_n = \underbrace{K_2 \Box K_2 \Box \dots \Box K_2}_{n \mbox{ times}}\]
and
\[Q_{2n} = \underbrace{K_2 \Box \dots \Box K_2}_{2n \mbox{ times}}= \underbrace{K_2 \Box \dots \Box K_2}_{n \mbox{ times}} \Box \underbrace{K_2 \Box \dots \Box K_2}_{n \mbox{ times}}= Q_n \Box Q_n.\]
\pr{1.5} $Q_n$ is bipartite with two parts of equal size.
\begin{proof}
We define the weight of a binary vector, $w(x)$, to be the number of nonzero entries of $x$. Since $x$ is a binary vector, $w(x)$ is given by $\sum_{i=1}^n x_i$. Now if $xy \in E(Q_n)$, then either $w(x) = w(y)+1$ or $w(x) = w(y)-1$. So, $w(x)$ and $w(y)$ have opposite parities. Therefore, we define a partition $P = \{x: w(x) \equiv 0 \pmod{2}\}$ and $Q = \{x: w(x) \equiv 1 \pmod{2} \}$. It can be seen that $Q_n[P]$ and $Q_n[Q]$ both contain no edges. Furthermore, $|P|=|Q|$.
\end{proof}
We shall refer to vertices of $Q_n$ as being odd or even using the bipartition above. If $M$ is a matching of $Q_n$ then each matching edge is incident to a vertex of odd weight and a vertex of even weight. We shall refer to these vertices as the \emph{odd} and \emph{even endpoints} of the edge respectively.
\pr{1.6} The hypercube graph $Q_n$ has the following properties:
\begin{enumerate}
\item $v(Q_n) = 2^n$
\item $Q_n$ is $n$-regular
\item $e(Q_n) = n2^{n-1}$
\end{enumerate}
\begin{proof}
We shall proceed by induction using the Cartesian product definition of $Q_n$. All three properties are easily checked for $Q_1 = K_2$. Now we suppose that the result holds for $Q_{n-1}$. Since $Q_n = Q_{n-1} \Box K_2$, it follows that the process doubles the number of vertices. So,
\[v(Q_n) = 2v(Q_{n-1}) = 2 \cdot 2^{n-1} = 2^n. \]
Furthermore, in the Cartesian product, each vertex receives exactly one additional neighbour. So as $Q_{n-1}$ is $(n-1)$-regular, it follows that $Q_n$ must be $n$-regular. Lastly, by the handshake lemma
\[n2^n = \sum_{x\in V(Q_n)} \deg(x) = 2e(Q_n) \Rightarrow e(Q_n) = n2^{n-1}.\]
\end{proof}
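Propositions 1.5 and 1.6 can be verified directly for small $n$ (an illustrative Python sketch using the coordinate definition of $Q_n$):

```python
from itertools import product

def hypercube(n):
    """Q_n on vertex set {0,1}^n: edges join words differing in one coordinate."""
    V = list(product((0, 1), repeat=n))
    E = {frozenset((u, v)) for u in V for v in V
         if sum(a != b for a, b in zip(u, v)) == 1}
    return V, E

n = 4
V, E = hypercube(n)
degrees = {v: sum(v in e for e in E) for v in V}
even = [v for v in V if sum(v) % 2 == 0]   # the bipartition by weight parity
odd = [v for v in V if sum(v) % 2 == 1]
```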
Now, due to the high amount of symmetry, $Q_n$ is Hamiltonian. In fact, for sufficiently large $n$, $Q_n$ has many Hamiltonian cycles. It is a fun exercise to inductively show that $Q_n$ is Hamiltonian. In 1954 Ringel showed that the edge set of $Q_n$ can be partitioned into Hamiltonian cycles if $n$ is a power of two. Alspach et al. showed that this holds for every $Q_n$ with $n>2$. It is also known that $Q_n$ admits a Hamiltonian cycle double cover, that is, a list of Hamiltonian cycles $C_1, \dots, C_n$ so that each edge is contained in exactly two of the cycles. However, we shall be looking for an even stronger structure. We are looking for an embedding of $Q_n$ on an orientable surface so that each face is bounded by a Hamiltonian cycle. First, we revise the definitions of embeddings.\\
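As a concrete witness of Hamiltonicity, the reflected binary Gray code lists the vertices of $Q_n$ so that cyclically consecutive words differ in exactly one coordinate, that is, it traces a Hamiltonian cycle (an illustrative Python sketch):

```python
def gray_cycle(n):
    """Reflected binary Gray code: orders the vertices of Q_n so that
    cyclically consecutive words differ in exactly one coordinate."""
    seq = [(0,), (1,)]
    for _ in range(n - 1):
        # prepend 0 to the list, then 1 to its reversal
        seq = [(0,) + w for w in seq] + [(1,) + w for w in reversed(seq)]
    return seq

n = 5
cyc = gray_cycle(n)
steps = [sum(a != b for a, b in zip(cyc[i], cyc[(i + 1) % len(cyc)]))
         for i in range(len(cyc))]
```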
\section{Embeddings of Graphs}
In the study of planar graphs, we look at the boundaries of faces (if the graph is 2-connected, all the boundaries will be cycles). If $F_1, \dots, F_n$ are the faces, then each edge is on the boundary of exactly two of the faces, simply because there is a face on either side of the edge. So one might think that a cycle double cover of a graph is sufficient for specifying a planar embedding. However, some graphs are simply too dense for planar embeddings. For this we turn to surfaces.\\
We shall restrict ourselves to locally Euclidean surfaces, that is, surfaces in which every point has a neighbourhood homeomorphic to $\mathbb{R}^2$. It is well known that there are two kinds of such surfaces: orientable and non-orientable. Orientable surfaces are those on which rotating clockwise is distinguishable from rotating counterclockwise, while on non-orientable surfaces there is no such distinction. The orientable surfaces are determined (up to homeomorphism) by the number of ``holes'' they have. So there is the sphere, the torus, the two-holed torus and so on. The number of ``holes'' of a surface is called the genus, though we can define it more rigorously as the minimum number of handles we need to attach to a sphere so that the resulting surface is homeomorphic to the given surface. Non-orientable surfaces are determined by the number of crosscaps\footnote{A crosscap is given by identifying part of the surface with the boundary of a M\"obius strip.} they have, but we shall be dealing only with orientable surfaces in this paper.\\
Now, we formally define what an embedding actually is. A surface $S$ is defined to be a compact connected 2-manifold\footnote{A $t$-manifold is a $t$-dimensional locally Euclidean space.}. For a graph $G = (V,E)$, an embedding is a representation on $S$ such that the vertices of $G$ are points on $S$ and the edges of $G$ are simple arcs whose endpoints are those points. We require that the endpoints of the arc corresponding to an edge $e$ correspond to the endpoints of $e$. No arc includes points associated with other vertices, and no two arcs intersect at a point which is interior to either arc. Technically, we are dealing with equivalence classes of surfaces, but the structure above is preserved under homeomorphism, since we demand that the embedding is injective.\\
Now, as $S$ is locally Euclidean, we can look at a neighbourhood about each vertex in an embedding. Say we are looking at a vertex $v$, and enumerate the edges incident to $v$, say $e_1, \dots ,e_t$. Each of these edges borders two faces. So we take one of the face-bounding cycles that contains $e_1$ and see what other edge incident to $v$ it contains, say $f$. Now there is exactly one other cycle that contains $f$, and we see what other edge on that cycle is incident to $v$. Repeating this process gives us a permutation of the edges incident to $v$. We can do this for all vertices and get a collection of permutations $\{ \pi_v: v\in V(G)\}$ called a \emph{rotation scheme}. We can see that on an orientable surface, each permutation needs to be a cycle; that is, if $v$ is of degree $d$, then $\pi_v$ must be a $d$-cycle. Now, we quote a well-known result in topological graph theory.\\
\thm{2.1} Specifying an embedding of a graph $G = (V,E)$ on a surface $S$ is equivalent to specifying a rotation scheme. If the surface is orientable, then we have an embedding if and only if $\pi_v$ is a $\deg(v)$-cycle for all $v\in V(G)$.\\
Here we do not distinguish between a permutation and its cyclic rotations. We see that specifying a rotation scheme is sufficient for concluding that $G$ has an embedding. To do this, we take an edge $e=xy$. Then $\pi_y(xy) = yz$. So we consider $\pi_z(yz)$ and so on. We are guaranteed to get a closed walk from this procedure. Furthermore, we get that each edge will be contained in two closed walks. To construct the embedding, we take a 2-cell for each cycle and ``sew'' them together. It takes more work to show that if $\pi_v$ is a cycle for all $v$, then the resulting surface is orientable, but the details can be found in [1]. In this paper we shall be looking for a particular type of embedding.
\de{2.2} A \textbf{Hamiltonian Embedding} of a $2$-connected graph $G$ is an embedding $\Sigma$ such that the cycle bounding each face is a Hamiltonian Cycle of $G$.
\de{2.3} An \textbf{Orientable Hamiltonian Embedding} of a $2$-connected graph is a Hamiltonian embedding $\Sigma$ on an orientable surface.\\
We can see that this is already a lot of structure to impose on a graph. In order to construct a Hamiltonian embedding, we first need a cycle double cover $C_1, \dots, C_t$ such that each $C_i$ is a Hamiltonian cycle. Furthermore, in order to make sure that the embedding is orientable, we need to verify that the rotation scheme is a collection of cycles, that is, that $\pi_v$ is a $\deg(v)$-cycle for all $v$. Now we shall show that for certain values of $n$, $Q_n$ admits an Orientable Hamiltonian Embedding.
\section{Hamiltonian embeddings of $Q_{2^n}$}
Now we get to the central result.
\thm{3.1} If $n$ is a power of $2$ then $Q_n$ admits a Hamiltonian embedding on an orientable surface.
We shall do this by decomposing $Q_n$ into $n$ perfect matchings, say $M_1, M_2, \dots, M_n$. It is easy to see that the union of two disjoint perfect matchings is a collection of disjoint cycles, as the resulting graph will be 2-regular. We will construct the matchings so that $M_i \cup M_{i+1}$ is a Hamiltonian cycle and $M_i \cap M_j = \emptyset$ for $i \neq j$. This is the central focus of the paper.
\thm{3.2} For $Q_{2^n}$ there exists a collection of disjoint perfect matchings $M_1,\dots, M_{2^n}$ such that $M_i \cup M_{i+1}$ is a Hamiltonian cycle. Here the subscripts are taken modulo $2^n$.\\
Why does this suffice? It is clear that defining $C_i = M_i \cup M_{i+1}$ will give us a collection of Hamiltonian cycles in which each edge is contained in exactly two cycles. However, we need to verify the rotation scheme. Fix a vertex $v$. Let $e_i$ be the unique edge in $M_i$ incident to $v$. Now $e_i \in C_i$ and $e_i \in C_{i-1}$. Traversing along each cycle, we see that the rotation scheme for each vertex will be $\pi_v = (e_1, \dots, e_{2^n})$ or $\pi_v = (e_1, e_{2^n}, \dots,e_2)$, up to cyclic rotation; that is, $e_i$ must lie between $e_{i-1}$ and $e_{i+1}$. So we are guaranteed that each $\pi_v$ is a $2^n$-cycle and so we have a Hamiltonian embedding.\\
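The combinatorial conditions of Theorem 3.2 are easy to verify mechanically for small cases. The following sketch is our own illustration (not part of the paper's argument): it checks that a family of edge lists consists of pairwise disjoint perfect matchings whose consecutive unions are Hamiltonian cycles, using the fact that a 2-regular connected spanning subgraph is a Hamiltonian cycle.

```python
from itertools import chain

def is_perfect_matching(vertices, M):
    # every vertex is covered by exactly one edge of M
    ends = list(chain.from_iterable(M))
    return len(ends) == len(vertices) and set(ends) == set(vertices)

def is_hamiltonian_cycle(vertices, edges):
    # a 2-regular connected spanning subgraph is a Hamiltonian cycle
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    if any(len(nbrs) != 2 for nbrs in adj.values()):
        return False
    seen, stack = set(), [vertices[0]]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return len(seen) == len(vertices)

def check_decomposition(vertices, matchings):
    # pairwise disjoint perfect matchings, consecutive unions Hamiltonian
    m = len(matchings)
    ok = all(is_perfect_matching(vertices, M) for M in matchings)
    ok &= all(not (set(matchings[i]) & set(matchings[j]))
              for i in range(m) for j in range(i + 1, m))
    ok &= all(is_hamiltonian_cycle(vertices,
                                   matchings[i] + matchings[(i + 1) % m])
              for i in range(m))
    return ok
```

For instance, the two matchings of $Q_2$ used in the base case of the proof pass this check.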
\thm{3.2}
For each $n\in\mathbb{N}$ there is a set of $m=2^n$ disjoint perfect matchings $M_1, \dots, M_m$ of $Q_{m}$, such that $M_i \cup M_{i+1}$ is a Hamiltonian cycle.
\begin{proof}
We shall proceed by induction on $n$. $Q_2$ is clearly planar as $Q_2 \cong C_4$, and it clearly has the desired decomposition: writing $V(Q_2) =\{00,10,01,11\}$, set $M_1 = \{\{00,01\},\{10,11\}\}$ and $M_2 = \{\{00,10\},\{01,11\}\}$. Note that in the argument above, $C_1$ and $C_2$ are the same cycle. We will have to take special note of this case when we show the matching decomposition of $Q_4$ is disjoint.\\
Suppose for $m=2^n$ we have such a matching decomposition $M_1, \dots, M_m$ of $Q_m$. We wish to construct such a matching decomposition for $Q_{2m}$. We shall use the indexing notation above; that is, $Q_m^x$, for $x\in V(Q_m)$, refers to the implicit subgraph of $Q_{2m}$. Now for each matching $M_i$, we shall refer to the corresponding matching of $Q_m^x$ by $M_i^x$. So we define $N_i = \un_{x\in V(Q_m)}M_i^x$. Then $N_i$ is a perfect matching: any $v\in V(Q_{2m})$ belongs to a unique copy $Q_m^x$, and by the definition of $M_i^x$ there is a unique edge of $M_i^x$ incident to $v$. So there is a unique edge of $N_i$ incident to $v$.\\
Now we define another set of matchings $O_i$ using the following classification. Let $e_E(Q_m^x,Q_m^y)$ denote the edges of $e(Q_m^x,Q_m^y)$ whose endpoint in $Q_m^x$ is even. Similarly, let $e_O(Q_m^x,Q_m^y)$ denote the edges of $e(Q_m^x,Q_m^y)$ whose endpoint in $Q_m^x$ is odd. We see that each edge of $e_E(Q_m^x, Q_m^y)$ has an odd endpoint in $Q_m^y$ and each edge of $e_O(Q_m^x,Q_m^y)$ has an even endpoint in $Q_m^y$. Furthermore, $e_E(Q_m^x,Q_m^y) = e_O(Q_m^y,Q_m^x)$.\\
We define $C_i = M_i \cup M_{i+1}$ and write $C_i = (x_{i_1}, \dots, x_{i_{2^m}})$. We orient these cycles in a clockwise direction with respect to the 2-cell embedding. Since the resulting surface is orientable, this is well defined, and if $xy$ appears in both $C_i$ and $C_j$, then $x$ appears before $y$ in $C_i$'s orientation and $y$ appears before $x$ in $C_j$'s orientation (or the other way around). This must be the case since left and right do not change direction anywhere on the surface. Rotate each of these so that $x_{i_1} = x_{j_1}$ for all $i,j = 1,\dots, 2^n$.
We define \[O_i = \un_{j=1}^{2^m} e_E(Q_m^{x_{i_j}},Q_m^{x_{i_{j+1}}}),\] where the subscripts $i_j$ are taken cyclically.
We claim $O_i$ is a perfect matching for all $i$. Fix a vertex $v \in V(Q_{2m})$; it belongs to a unique copy $Q_m^x$. Now, there are two edges in $C_i$ incident to $x$, say $wx$ and $xy$; that is, $C_i = (\dots, w,x,y, \dots)$. By the definition of the merge, there is a unique edge in $e(Q_m^x,Q_m^y)$ incident to $v$. Now if $v$ is of even weight, this edge is in $e_E(Q_m^x,Q_m^y)$ and therefore in $O_i$.\\
By the definition of $O_i$, the only other set that could have an edge incident with $v$ is $e_E(Q_m^w,Q_m^x)$. But each edge in this set has an odd endpoint in $Q_m^x$, so none of these edges can be incident with $v$, and therefore $O_i$ has no other edges incident with $v$. Similarly, if $v$ is of odd weight, then there is a unique edge in $e_E(Q_m^w,Q_m^x)$ incident to $v$ and there are no edges in $e_E(Q_m^x,Q_m^y)$ incident with $v$.\\
Now we shall show that $N_i \cup O_i$ is a collection of $2^{m-1}$ cycles of length $2^{m+1}$. For each $v\in V(Q_m)$, let $v^x$ be the corresponding vertex in $Q_m^x$. Let $C_i = (x_{i_1}, \dots, x_{i_{2^m}})$, oriented as above. First consider a matching edge $vw$ of $M_i$. The corresponding matching edges of $M_i^{x_{i_j}}$ are of the form $v^{x_{i_j}}w^{x_{i_j}}$. Now there is a unique edge of $O_i$ connecting $v^{x_{i_j}}$ to either $v^{x_{i_{j+1}}}$ or $v^{x_{i_{j-1}}}$. Without loss of generality, assume it connects to $v^{x_{i_{j+1}}}$. Note that by the definition of $O_i$, this assumption for one value of $j$ implies that, for all $j$, $w^{x_{i_j}}$ is connected to $w^{x_{i_{j+1}}}$ by an edge of $O_i$.\\
So, traversing edges of $O_i$ and $N_i$ alternately, we get the following cycle:
\[ (v^{x_{i_1}}, v^{x_{i_2}}, w^{x_{i_2}}, w^{x_{i_3}}, v^{x_{i_3}}, \dots, v^{x_{i_{2^m}}}, w^{x_{i_{2^m}}}, w^{x_{i_1}}). \]
As we move to the next copy only every other edge, we see the cycle is of length $2^{m+1}$. Since we get one such cycle for each edge of $M_i$, we get a collection of $2^{m-1}$ cycles in the union $N_i \cup O_i$; in particular, $N_i \cup O_i$ is not itself a Hamiltonian cycle.\\
Of course, we wish to fix this: we want a single Hamiltonian cycle, not a collection of cycles. To do this we shall use the fact that $M_i \cup M_{i+1}$ is a Hamiltonian cycle of $Q_m$, so $M_i^x \cup M_{i+1}^x$ is a Hamiltonian cycle of $Q_m^x$. We now define $P_i = M_{i+1}^{x_{i_1}} \cup \un_{j=2}^{2^m} M_i^{x_{i_j}}$. We use the following lemma to show $O_i \cup P_i$ is a Hamiltonian cycle of $Q_{2m}$.
\lma{3.3} Let $C_1, \dots, C_n$ be pairwise disjoint cycles of $G$. Suppose there is a matching $M$ such that each $C_i$ contains exactly one edge of $M$ and every edge of $M$ lies in some $C_i$. Suppose there is another matching $M'$, disjoint from $M$ and from every $C_i$, such that $M \cup M'$ is a cycle. Then $\un_i(C_i \backslash M)\cup M'$ is a cycle of $G$.
\begin{proof}
Let $e_i = M \cap C_i$. Fix an orientation on $M \cup M'$ and relabel $C_1, \dots, C_n$ so that the $e_i$ appear in order on the traversal. Let $e_i=x_iy_i$, so that $M \cup M' = (x_1,y_1,x_2,y_2,\dots)$ and $M' = \{y_1x_2,y_2x_3,\dots, y_nx_1 \}$. Fix an orientation on each $C_i$ so that it starts at $x_i$ and ends at $y_i$, i.e., $C_i = (x_i, \dots, y_i)$. Let $D_i = C_i \backslash \{x_iy_i\}$, the path from $x_i$ to $y_i$ using edges of $C_i$. We see by traversing that
\[ x_1D_1y_1x_2D_2y_2 \dots x_nD_ny_nx_1\]
is a closed walk. The traversal is in fact a cycle, since the cycles $C_i$ are pairwise disjoint. We see this cycle includes no edges of $M$, all edges of $M'$, and all edges of each $C_i \backslash M$.
\end{proof}
Let $D_1,\dots, D_{2^{m-1}}$ be the disjoint cycles obtained from $N_i \cup O_i$. Set $M=M_i^{x_{i_1}}$ and $M' = M_{i+1}^{x_{i_1}}$ and apply the lemma: each $D_j$ contains exactly one edge of $M_i^{x_{i_1}}$, and $M \cup M' = M_i^{x_{i_1}} \cup M_{i+1}^{x_{i_1}}$ is a (Hamiltonian) cycle of $Q_m^{x_{i_1}}$.
Then
\begin{eqnarray*}
\left((N_i \cup O_i) \backslash M_i^{x_{i_1}}\right) \cup M_{i+1}^{x_{i_1}} &=& O_i \cup \left((N_i\backslash M_i^{x_{i_1}})\cup M_{i+1}^{x_{i_1}}\right) \\
&=& O_i \cup P_i
\end{eqnarray*}
Now we claim $P_i \cup O_{i+1}$ is also a Hamiltonian cycle. First, $N_i \cup O_{i+1}$ is a collection of disjoint cycles, by repeating the argument for $N_i \cup O_i$. Then, using the lemma (recall that all the cycles $C_i$ were rotated to start at the same vertex $x_{i_1}$), we get that indeed $P_i \cup O_{i+1}$ is a Hamiltonian cycle.\\
Now all that remains is to show that the matchings are disjoint. By construction, $P_i \cap P_j = \emptyset$ for all $i \neq j$, and $P_i \cap O_j = \emptyset$ for all $i,j$, since $P_i$ uses only edges inside the copies $Q_m^x$ while $O_j$ uses only edges between copies. All that remains is to show that $O_i \cap O_j = \emptyset$ for all $i \neq j$.\\
Now we note that, by induction, for $i \neq j$ the intersection $C_i \cap C_j$ is non-empty if and only if $i=j+1$ or $i=j-1$ (subscripts mod $m$).\\
Now, our definition of $O_i$ depends on the orientation of $C_i$: an edge between $Q_m^x$ and $Q_m^y$ belongs to $O_i$ if and only if it belongs to $e_{E}(Q_m^x,Q_m^y)$, where the edge $xy$ of $C_i$ is traversed from $x$ to $y$ in the chosen orientation.\\
First we consider the case $m=2$. We started off with $M_1$ and $M_2$, but we noted that $C_1 = C_2$; the two faces induce opposite orientations of this cycle. At this stage we will have constructed \[O_1 = e_E(Q_2^{00},Q_2^{10})\cup e_E(Q_2^{10},Q_2^{11})\cup e_E(Q_2^{11},Q_2^{01})\cup e_E(Q_2^{01},Q_2^{00}),\] while \[O_2=e_O(Q_2^{00},Q_2^{10})\cup e_O(Q_2^{10},Q_2^{11})\cup e_O(Q_2^{11},Q_2^{01})\cup e_O(Q_2^{01},Q_2^{00}),\]
which can readily be seen to be disjoint.
We continue for $m\geq 4$.\\
For an edge $e$ between $Q_m^x$ and $Q_m^y$, the definition of $O_i$ gives
\[ e\in O_i \cap O_j \implies xy\in C_i\cap C_j.\]
We know by induction that $xy\in C_i\cap C_j$ (for $i \neq j$) only when $i=j+1$ or $i=j-1$. Furthermore, by the orientations specified, we may assume without loss of generality that $x$ appears before $y$ in the orientation of $C_i$ and $y$ appears before $x$ in the orientation of $C_j$. So the edges are taken from $e_E(Q_m^x,Q_m^y)$ and $e_E(Q_m^y,Q_m^x) = e_O(Q_m^x,Q_m^y)$ respectively. Here we see the importance of the specification of orientations: otherwise we could have $xy$ appearing in that order in both orientations, and we would have the set $e_E(Q_m^x,Q_m^y)$ in both matchings. But due to our choice of orientations we are dealing with $e_E(Q_m^x,Q_m^y)$ and $e_O(Q_m^x,Q_m^y)$, which are completely disjoint. So $O_i \cap O_j$ is empty.\\
\end{proof}
\section{Necessary Conditions for Hamiltonian Embeddings}
A quick necessary condition: if we have $d$ faces, all of which are Hamiltonian cycles, then the graph must be $d$-regular. Indeed, each facial cycle uses exactly two edges at every vertex and each edge lies on exactly two faces, so $2\deg(v) = 2d$ for every vertex $v$. For a stronger result we turn to the generalization of Euler's formula to orientable surfaces,
\[v-e+f=2-2g\]
where $g$ is the genus of the orientable surface in which $G$ is $2$-cell embedded. So, supposing $G$ is a $d$-regular graph of order $n$ with a Hamiltonian embedding, we have
\[v= n \qquad e=\frac{nd}{2} \qquad f=d.\]
Rearranging for $g$,
\[g = 1 + \frac{nd-2n-2d}{4}.\]
So, since $g$ must be an integer, we must have
\[nd \equiv 2(n+d)\pmod{4}\]
which is summarized by the following
\pr{4.1}
Let $G$ be a $d$-regular graph of order $n$. If $G$ admits an orientable Hamiltonian Embedding, then
\begin{enumerate}
\item At least one of $n$ or $d$ must be even.
\item If $n \not\equiv d \pmod{2}$, then either $n\equiv 2 \pmod{4}$ or $d \equiv 2 \pmod{4}$.
\item If both $n$ and $d$ are even, then $nd \equiv 2(n+d) \pmod{4}$ holds automatically, so this congruence gives no further obstruction.
\end{enumerate}
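The arithmetic behind this obstruction is easy to automate. The following is our own sketch (not from the paper): substituting $v=n$, $e=nd/2$, $f=d$ into $v-e+f=2-2g$ gives $g = 1 + (nd-2n-2d)/4$, and an orientable Hamiltonian embedding can exist only when this $g$ is a nonnegative integer.

```python
def hamiltonian_embedding_genus(n, d):
    """Genus forced by Euler's formula v - e + f = 2 - 2g when a
    d-regular graph of order n is embedded with all d faces bounded
    by Hamiltonian cycles (so v = n, e = n*d/2, f = d).  Returns None
    when g fails to be a nonnegative integer, i.e. no such embedding
    can exist on an orientable surface."""
    num = n * d - 2 * n - 2 * d            # equals 4(g - 1)
    if (n * d) % 2 != 0 or num % 4 != 0:   # e = nd/2 must also be an integer
        return None
    g = 1 + num // 4
    return g if g >= 0 else None
```

For example, $Q_2$ ($n=4$, $d=2$) yields genus $0$, matching planarity; $Q_4$ ($n=16$, $d=4$) yields genus $7$; and $Q_3$ ($n=8$, $d=3$) yields no integer genus, consistent with the corollary below.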
From this simple result, we obtain the following.
\cor{4.2}
$Q_n$ admits an orientable Hamiltonian embedding only if $n$ is even.\\
Here we discuss some conjectures we have made.
\con{4.3}
If $Q_n$ has an (orientable) Hamiltonian embedding with facial boundaries $C_1,\dots, C_n$, then either $C_i \cap C_j$ is a perfect matching of $Q_n$ or $C_i \cap C_j = \emptyset$.\\
We can see that if we want the embedding to be orientable, then two cycles $C_i$ and $C_j$ cannot share a pair of incident edges. To see this, suppose $e=xy$ and $f=yz$ both belong to $C_i$ and to $C_j$. Then $\pi_y$ contains the transposition $(ef)$ when written as a product of disjoint cycles, so $\pi_y$ is not a single $\deg(y)$-cycle and the embedding cannot be orientable.\\
We shall deal with the \emph{weighted intersection graph} $W$, where $V(W) = \{C_1,\dots, C_n\}$. An edge $C_iC_j$ is present in $W$ if and only if $C_i$ and $C_j$ share at least one edge, and the weight of $C_iC_j$ is the number of edges in their intersection. This idea gives rise to the following conjecture:
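Computing $W$ from a list of facial cycles is straightforward; the following is our own sketch (not from the paper), with each cycle given as a set of edges and each edge as a frozenset so that orientation is ignored.

```python
from itertools import combinations

def weighted_intersection_graph(cycles):
    """cycles: list of edge sets (each edge a frozenset of two vertices).
    Returns {(i, j): weight} with one entry per pair of facial cycles
    sharing at least one edge."""
    W = {}
    for i, j in combinations(range(len(cycles)), 2):
        shared = len(cycles[i] & cycles[j])
        if shared:
            W[(i, j)] = shared
    return W
```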
\con{4.4} If $Q_n$ has an orientable Hamiltonian embedding with facial boundaries $C_1,\dots, C_n$, then the weighted intersection graph is a cycle with weight $2^{n-1}$ on every edge.\\
Furthermore, we have the strong conjecture:
\con{4.5}
$Q_n$ has a Hamiltonian embedding only if $n$ is a power of 2.
\section*{References}
\begin{enumerate}[{[1]}]
\item Bojan Mohar and Carsten Thomassen. \emph{Graphs on Surfaces}. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore, MD, 2001.
\item Douglas B. West. \emph{Introduction to Graph Theory}. Pearson, 2000.
\end{enumerate}
\end{document}
% arXiv:2001.09383 -- "Orientable Hamiltonian Embeddings of the Hypercube Graph" (math.CO), https://arxiv.org/abs/2001.09383
% Source: https://arxiv.org/abs/2101.12708
\title{Between steps: Intermediate relaxations between big-M and convex hull formulations}
\begin{abstract}
This work develops a class of relaxations in between the big-M and convex hull formulations of disjunctions, drawing advantages from both. The proposed ``P-split'' formulations split convex additively separable constraints into P partitions and form the convex hull of the partitioned disjuncts. Parameter P represents the trade-off of model size vs.\ relaxation strength. We examine the novel formulations and prove that, under certain assumptions, the relaxations form a hierarchy starting from a big-M equivalent and converging to the convex hull. We computationally compare the proposed formulations to big-M and convex hull formulations on a test set including: K-means clustering, P\_ball problems, and ReLU neural networks. The computational results show that the intermediate P-split formulations can form strong outer approximations of the convex hull with fewer variables and constraints than the extended convex hull formulations, giving significant computational advantages over both the big-M and convex hull.
\end{abstract}
\section{Introduction}
There are well-known trade-offs between the big-M and convex hull relaxations of disjunctions in terms of problem size and relaxation tightness. Convex hull formulations \cite{balas1998disjunctive,ben2001lectures,ceria1999convex,helton2009sufficient,jeroslow1984modelling,stubbs1999branch} provide a \textit{sharp formulation} for a single disjunction, {\emph{i.e., }} the continuous relaxation provides the best possible lower bound. The convex hull is often represented by so-called extended (a.k.a.\ perspective/multiple-choice) formulations \cite{Balas2018,bonami2015mathematical,conforti2008compact,grossmann2003generalized,gunluk2010perspective,hijazi2012mixed,vielma2015mixed}, which introduce multiple copies of each variable in the disjunction(s). On the other hand, the big-M formulation only introduces one binary variable for each disjunct and results in a smaller problem in terms of both number of variables and constraints; however, in general it provides a weaker relaxation than the convex hull and may require a solver to explore significantly more nodes in a branch-and-bound tree \cite{conforti2014integer,vielma2015mixed}. Even though the big-M formulation is weaker, in some cases it can computationally outperform extended convex hull formulations, as the simpler subproblems can offset the larger number of explored nodes.
Anderson et al.\ \cite{anderson2020strong} describe a folklore observation in mixed-integer programming (MIP) that extended convex hull formulations tend to perform worse than expected.
The observation is supported by the numerical results in Anderson et al.\ \cite{anderson2020strong} and in this paper.
This paper presents a framework for generating formulations for disjunctions between the big-M and convex hull with the intention of combining the best of both worlds: a tight, yet computationally efficient, formulation.
The main idea behind the novel formulations is partitioning the constraints of each disjunct and moving most of the variables out of the disjunction. Forming the convex hull of the resulting disjunctions results in a smaller problem, while retaining some features of the convex hull. We call the new formulation the $P$-split, as the constraints are split into $P$ parts. While many efforts have been devoted to computationally efficient convex hull formulations \cite{balas1988convex,conforti2008compact,jeroslow1988simplification,sawaya2007computational,trespalacios2015algorithmic,vielma2019small,vielma2010mixed,vielma2011modeling} and techniques for deriving the convex hull of MIP problems \cite{balas1985disjunctive,lasserre2001explicit,lovasz1991cones,ruiz2012hierarchy,sherali1990hierarchy}, our primary goal is not to generate the convex hull.
Rather, we provide a straightforward framework for generating a family of relaxations that approximate the convex hull for a general class of disjunctions using a smaller problem formulation.
Our experiments show that the $P$-split formulations can give a significant computational advantage over both the big-M and convex hull formulations.
This paper is organized as follows:
the $P$-split formulation is presented in Section 2, together with properties of the $P$-split relaxations and how they compare to the big-M and convex hull relaxations. We also present a non-extended realization of the $P$-split formulation for the special case of a two-term disjunction. Finally, a numerical comparison of the formulations is presented in Section 3, using instances with both linear and nonlinear disjunctions.
\subsection{Background}
We consider optimization problems containing disjunctions of the form
\begin{equation}
\begin{aligned}
\label{eq:main_disjunction}
&\underset{l \in \mathcal{D}}{\lor} \begin{bmatrix} g_{k}(\boldsymbol{x}) \leq b_k \quad \forall k \in \mathcal{C}_{l}
\end{bmatrix}\\
&\boldsymbol{x} \in \mathcal{X} \subset \mathbb{R}^n,
\end{aligned}
\end{equation}
where $\mathcal{D}$ contains the indices of the disjuncts, $\mathcal{C}_{l}$ the indices of the constraints in disjunct $l$, and $\mathcal{X}$ is a convex compact set. This paper assumes the following:
\begin{assumption}
The functions $g_{k}: \mathbb{R}^n \rightarrow \mathbb{R} $ are convex additively separable functions, {\emph{i.e., }} $g_{k}(\boldsymbol{x}) = \sum_{i=1}^n h_{ik}(x_i)$ where $h_{ik}: \mathbb{R} \rightarrow \mathbb{R} $ are convex functions, and each disjunct is non-empty on $\mathcal{X}$.
\end{assumption}
\begin{assumption}
All functions $g_k$ are bounded over $\mathcal{X}$.
\end{assumption}
\begin{assumption}
Each disjunct contains far fewer constraints than the number of variables in the disjunction, {\emph{i.e., }} $\left| \mathcal{C}_{l}\right| \ll n$.
\end{assumption}
The first two assumptions are needed for the $P$-split formulation to be valid and result in a convex MIP.
While the first assumption simplifies our analysis of $P$-split formulations, it could easily be relaxed to partially additively separable functions.
Furthermore, the computational experiments only consider problems with linear or quadratic constraints, which ensures that the convex hull of the disjunction is representable by a polyhedron or (rotated) second-order cone constraints \cite{ben2001lectures}.
Assumption 3 characterizes problem structures favorable for the presented formulations. Problems with such a structure include, {\emph{e.g., }} clustering \cite{papageorgiou2018pseudo,sauglam2006mixed}, mixed-integer classification \cite{liittschwager1978integer,rubin1997solving}, optimization over trained neural networks \cite{anderson2020strong,botoeva2020efficient,fischetti2018deep,grimstad2019relu,serra2020lossless}, and coverage optimization \cite{huang2005coverage}.
\section{Relaxations between convex hull and big-M}
The formulations in this section apply to disjunctions with multiple constraints per disjunct. However, to simplify the derivation, we only consider disjunctions with one constraint per disjunct, {\emph{i.e., }} $|\mathcal{C}_{l}| = 1 \ \forall l \in \mathcal{D}$. The extension to multiple constraints per disjunct simply applies the splitting procedure to each constraint.
To derive the new formulations, we partition the variables into $P$ sets and form the corresponding index sets $\mathcal{I}_1, \dots, \mathcal{I}_P$. The constraint for each disjunct is then split into $P$ constraints by introducing auxiliary variables $\boldsymbol{\alpha}^l \in \mathbb{R}^{P}$ for each $l \in \mathcal{D}$:
\begin{equation}
\label{eq:main_disjunction_lifted}
\begin{matrix}
\begin{aligned}
&\underset{l \in \mathcal{D}}{\lor} \begin{bmatrix}
g_l(\boldsymbol{x}) \leq b_l\\
\end{bmatrix}\\
&\boldsymbol{x} \in \mathcal{X}
\end{aligned}
\end{matrix}
\quad \quad
\longrightarrow \quad \quad
\begin{matrix}
\begin{aligned}
&\underset{l \in \mathcal{D}}{\lor}
\begin{bmatrix}
\begin{aligned}
&\underset{i \in \mathcal{I}_1}{\sum}h_{i,l}(x_i) \leq \alpha^l_1\\[-0.1cm]
& \quad \quad \quad \vdots \\
\vspace{0.1cm}
&\underset{i \in \mathcal{I}_P}{\sum}h_{i,l}(x_i) \leq \alpha^l_P\\
\vspace{0.1cm}
&\sum_{s=1}^P \alpha^l_s \leq b_l\\
& \ubar{\alpha}^l_s\leq \alpha^l_s \leq \bar{\alpha}^l_s \quad \forall s \in \{1,\dots, P\}
\end{aligned}
\end{bmatrix}
\\
&\boldsymbol{x} \in \mathcal{X}, \boldsymbol{\alpha}^l \in \mathbb{R}^{P}\ \forall\ l \in \mathcal{D}.
\end{aligned}
\end{matrix}
\end{equation}
By Assumption 2, function $h_{i,l}$ is bounded on $\mathcal{X}$, and bounds on the auxiliary variables are given by
\begin{equation}
\begin{aligned}
& \ubar{\alpha}^l_s := \min_{\boldsymbol{x} \in \mathcal{X}} \underset{i \in \mathcal{I}_s}{\sum}h_{i,l}(x_i), \quad & \bar{\alpha}^l_s := \max_{\boldsymbol{x} \in \mathcal{X}} \underset{i \in \mathcal{I}_s}{\sum}h_{i,l}(x_i) .
\end{aligned}
\end{equation}
The $P$-split formulation does not require tight bounds, but weak bounds result in an overall weaker relaxation.
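To illustrate how the bounds in the preceding display decompose for additively separable $g$, here is a small sketch of our own (not the paper's implementation). The maximum of a convex $h_i$ on an interval is always attained at an endpoint; the minimum is approximated on a grid, which is exact for linear or monotone terms and an approximation otherwise.

```python
def split_bounds(h_funcs, box, parts, grid_pts=101):
    """Lower/upper bounds on the auxiliary variables alpha_s of one disjunct.

    h_funcs : list of univariate convex functions h_i, one per variable
    box     : list of (lo, hi) intervals whose product is the box X
    parts   : index sets I_1, ..., I_P partitioning {0, ..., n-1}

    Separability lets the optimization decompose coordinate-wise.
    """
    bounds = []
    for I in parts:
        lo_s = hi_s = 0.0
        for i in I:
            lo, hi = box[i]
            grid = [lo + (hi - lo) * t / (grid_pts - 1)
                    for t in range(grid_pts)]
            lo_s += min(h_funcs[i](x) for x in grid)   # approximate minimum
            hi_s += max(h_funcs[i](lo), h_funcs[i](hi))  # exact by convexity
        bounds.append((lo_s, hi_s))
    return bounds
```

For example, $h_1(x)=x^2$ on $[-1,1]$ and $h_2(x)=2x$ on $[0,3]$ in separate partitions give bounds $(0,1)$ and $(0,6)$.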
The splitting creates a lifted formulation by introducing $P \times |\mathcal{D}|$ auxiliary variables. Both formulations in~\eqref{eq:main_disjunction_lifted} have the same feasible set in the $\boldsymbol{x}$ variables. We relax the disjunction by treating the split constraints as global constraints
\begin{equation}
\label{eq:main_disjunction_splitted}
\begin{aligned}
&\underset{l \in \mathcal{D}}{\lor} \begin{bmatrix}
\begin{aligned}
&\sum_{s=1}^P \alpha^l_s \leq b_l\\
&\ubar{\alpha}^l_s\leq \alpha^l_s \leq \bar{\alpha}^l_s \quad \forall s \in \{1,\dots, P\}
\end{aligned}
\end{bmatrix}
\\
&\underset{i \in \mathcal{I}_s}{\sum}h_{i,l}(x_i) \leq \alpha^l_s &&\forall s \in \{1,\dots, P\}, \ \forall \ l \in \mathcal{D} \\
&\boldsymbol{x} \in \mathcal{X}, \boldsymbol{\alpha}^l \in \mathbb{R}^{P}\ &&\forall \ l \in \mathcal{D}.
\end{aligned}
\end{equation}
\begin{definition}
Formulation \eqref{eq:main_disjunction_splitted} is a $P$-split representation of the original disjunction in \eqref{eq:main_disjunction_lifted}.
\end{definition}
Lemma \ref{lemma_feasset} relates the $P$-split representation to the original disjunction. The property is rather simple, but for completeness we have stated it as a lemma.
\begin{lemma}\label{lemma_feasset}
The feasible set of the $P$-split representation projected onto the $\boldsymbol{x}$-space is equal to the feasible set of the original disjunction in \eqref{eq:main_disjunction_lifted}.
\end{lemma}
\begin{proof}
Suppose $\bar{\boldsymbol{x}}$ is feasible for \eqref{eq:main_disjunction_splitted}, say with disjunct $l$ satisfied. Summing the global constraints over $s$ and using $\sum_{s=1}^P \alpha^l_s \leq b_l$ gives $g_l(\bar{\boldsymbol{x}}) \leq b_l$, so $\bar{\boldsymbol{x}}$ is feasible for \eqref{eq:main_disjunction_lifted}. Conversely, an $\bar{\boldsymbol{x}}$ that is feasible for \eqref{eq:main_disjunction_lifted} is clearly feasible for \eqref{eq:main_disjunction_splitted}, by setting $\alpha^l_s = \sum_{i \in \mathcal{I}_s}h_{i,l}(\bar{x}_i)$. \qed
\end{proof}
Using the extended formulation \cite{balas1998disjunctive} to represent the convex hull of the disjunction in \eqref{eq:main_disjunction_splitted} results in the \textit{$P$-split formulation}
\begin{equation}
\label{eq:p-split}
\tag{$P$-split}
\begin{aligned}
& \alpha^l_s = \underset{d \in \mathcal{D}}{\sum} \nu^{\alpha^l_s}_d && \forall \ s \in \{1, \dots, P\}, \ \forall\ l \in \mathcal{D} \\
& \sum_{s=1}^P \nu^{\alpha^l_s}_l \leq b_l\lambda_l &&\forall\ l \in \mathcal{D}\\
& \ubar{\alpha}^l_s\lambda_d \leq \nu^{\alpha^l_s}_d \leq \bar{\alpha}^l_s\lambda_d &&\forall \ s \in \{1, \dots, P\},\forall\ l,d \in \mathcal{D}\\
&\underset{i \in \mathcal{I}_s}{\sum}h_{i,l}(x_i) \leq \alpha^l_s &&\forall\ s \in \{1,\dots, P\}, \ \forall \ l \in \mathcal{D} \\
& \underset{l \in \mathcal{D}}{\sum}\lambda_l = 1, \quad \boldsymbol{\lambda} \in \{0, 1\}^{|\mathcal{D}|}\\
&\boldsymbol{x} \in \mathcal{X}, \boldsymbol{\alpha}^l \in \mathbb{R}^{P}, \ \boldsymbol{\nu}^{\alpha^l_s} \in \mathbb{R}^P &&\forall\ s \in \{1,\dots, P\}, \ \forall \ l \in \mathcal{D}\ ,
\end{aligned}
\end{equation}
which forms a convex MIP problem. To clarify our terminology: a 2-split formulation is a formulation \eqref{eq:p-split} where the constraints of the original disjunction are split into two parts, {\emph{i.e., }} $P = 2$. We assume that the disjunction is part of a larger optimization problem that may contain multiple disjunctions. Therefore, we need to enforce integrality on the $\lambda$ variables even if we recover the convex hull of the disjunction. Proposition \ref{prop_feasset} shows the correctness of the \eqref{eq:p-split} formulation of the original disjunction.
\begin{proposition}\label{prop_feasset}
The set of feasible $\boldsymbol{x}$ variables in formulation ~\eqref{eq:p-split} is equal to the feasible set of $\boldsymbol{x}$ variables in disjunction~\eqref{eq:main_disjunction_lifted}.
\end{proposition}
\begin{proof}
By Lemma 1, \eqref{eq:main_disjunction_lifted} and \eqref{eq:main_disjunction_splitted} have equivalent $\boldsymbol{x}$ feasible sets. For $\lambda \in \{0, 1\}^{|\mathcal{D}|}$, the extended formulation \eqref{eq:p-split} exactly represents the disjunction \eqref{eq:main_disjunction_splitted}. \qed
\end{proof}
Proposition \ref{prop_feasset} states that the $P$-split formulation is correct for integer feasible solutions, but it does not give any insight on the quality of the continuous relaxation. The following subsections further analyze the properties of the \eqref{eq:p-split} formulation and its relation to the big-M and convex hull formulations.
\begin{remark}
A \eqref{eq:p-split} formulation introduces $P\cdot \left(|\mathcal{D}|^2 + |\mathcal{D}| \right)$ continuous and $|\mathcal{D}|$ binary variables ($P\cdot|\mathcal{D}|^2$ disaggregated $\boldsymbol{\nu}$ variables and $P\cdot|\mathcal{D}|$ auxiliary $\boldsymbol{\alpha}$ variables). Unlike the extended convex hull formulation (which introduces $|\mathcal{D}|\cdot n$ continuous and $|\mathcal{D}|$ binary variables), the number of \say{extra} variables is independent of $n$, {\emph{i.e., }} the number of variables in the original disjunction.
As we later show, there are applications where $|\mathcal{D}| \ll n$, for which \eqref{eq:p-split} formulations can be smaller and computationally more tractable than the extended convex hull formulation.
\end{remark}
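The size comparison in the remark can be made concrete with a quick count (our own sketch; it counts both the disaggregated $\nu$ variables, $P|\mathcal{D}|^2$ of them, and the $\alpha$ variables, $P|\mathcal{D}|$ of them):

```python
def extra_continuous_vars(P, num_disjuncts, n):
    """Continuous variables added beyond x: (P-split, extended convex hull).

    P-split: P*|D| alpha variables plus P*|D|**2 disaggregated nu copies.
    Extended convex hull: one copy of each of the n variables per disjunct.
    """
    p_split = P * (num_disjuncts ** 2 + num_disjuncts)
    conv_hull = num_disjuncts * n
    return p_split, conv_hull
```

For instance, with $P=2$, $|\mathcal{D}|=2$, and $n=100$, the $P$-split adds 12 continuous variables versus 200 for the extended convex hull; the $P$-split is smaller whenever $P(|\mathcal{D}|+1) < n$.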
\subsection{Properties of the $P$-Split formulation}
This section focuses on the strength of the continuous relaxation of the $P$-split formulation, and how it compares to convex hull and big-M formulations. To simplify the analyses, we only consider disjunctions with a single constraint per disjunct. However, the results directly extend to the case of multiple constraints per disjunct by applying the same procedure to each individual constraint.
We first analyze the 1-split, as summarized in the following theorem.
\begin{theorem}
The 1-split formulation is equivalent to the big-M formulation.
\end{theorem}
\begin{proof}
We eliminate the disaggregated variables $\nu^{\alpha^l}_d$ from the 1-split formulation using Fourier-Motzkin elimination.
Furthermore, we eliminate trivially redundant constraints, {\emph{e.g., }} $\ubar{\alpha}^l\lambda_d \leq \bar{\alpha}^l\lambda_d$, resulting in
\begin{equation}
\label{eq:1-split_bigM}
\begin{aligned}
& \alpha^l \leq b_l\lambda_l + \underset{d \in \mathcal{D} \setminus l}{\sum}\bar{\alpha}^l\lambda_d && \forall l \in \mathcal{D} \\
&\sum_{i=1}^nh_{i,l}(x_i) \leq \alpha^l &&\forall \ l \in \mathcal{D} \\
& \underset{l \in \mathcal{D}}{\sum}\lambda_l = 1, \quad \boldsymbol{\lambda} \in \{0, 1\}^{|\mathcal{D}|},\boldsymbol{x} \in \mathcal{X}, \boldsymbol{\alpha}^l \in \mathbb{R} \ &&\forall \ l \in \mathcal{D}.
\end{aligned}
\end{equation}
The auxiliary variables $ \alpha^l$ are removed by combining the first and second constraints in \eqref{eq:1-split_bigM}. The smallest valid big-M coefficients are $M^l = \bar{\alpha}^l - b_l $, which enables us to write \eqref{eq:1-split_bigM} as
\begin{equation}
\begin{aligned}
& \sum_{i=1}^nh_{i,l}(x_i) \leq b_l + M^l(1- \lambda_l) && \forall l \in \mathcal{D} \\
& \underset{l \in \mathcal{D}}{\sum}\lambda_l = 1, \quad \boldsymbol{\lambda} \in \{0, 1\}^{|\mathcal{D}|},\ \boldsymbol{x} \in \mathcal{X}.
\end{aligned}
\end{equation}
\qed
\end{proof}
Since the 1-split formulation introduces $|\mathcal{D}|^2 + |\mathcal{D}|$ auxiliary variables but has the same continuous relaxation as the big-M formulation, there is no clear advantage of the 1-split formulation over the big-M formulation.
We now examine the other extreme, where constraints are fully disaggregated, {\emph{i.e., }} the $n$-split. Its relation to the convex hull is given in the following theorem.
\begin{theorem}
If all $h_{i,l}$ are affine functions, then the $n$-split formulation (where constraints are split for each variable) provides the convex hull of the disjunction.
\end{theorem}
\begin{proof}
In the linear case, the original disjunction is given by
\begin{equation}
\label{eq:lin_disjunct}
\begin{matrix}
\begin{aligned}
&\underset{l \in \mathcal{D}}{\lor} \begin{bmatrix}
(\mathbf{a}^l)^T \boldsymbol{x} \leq {b_l}\\
\end{bmatrix}\\
&\boldsymbol{x} \in \mathcal{X},
\end{aligned}
\end{matrix}
\end{equation}
and the $n$-split formulation can be written compactly as
\begin{equation}
\label{eq:p-split-rep-lin}
\begin{matrix}
\begin{aligned}
&\underset{l \in \mathcal{D}}{\lor} \begin{bmatrix}
\mathbf{B}^l\tilde{\boldsymbol{\alpha}} \leq \tilde{\boldsymbol{b}_l}\\
\end{bmatrix}\\
&\tilde{\boldsymbol{\alpha}} = \mathbf{\Gamma} \boldsymbol{x}, \quad \boldsymbol{x} \in \mathcal{X}, \tilde{\boldsymbol{\alpha}} \in \mathbb{R}^{n\times|\mathcal{D}|}.
\end{aligned}
\end{matrix}
\end{equation}
The $n$-split formulation is given by the convex hull of \eqref{eq:p-split-rep-lin} through the extended formulation. Here, $\mathbf{\Gamma}$ defines a bijective mapping between the $\boldsymbol{x}$ and $\tilde{\boldsymbol{\alpha}}$ variable spaces (only true for an $n$-split). A reverse mapping is given by $\boldsymbol{x}= \mathbf{\Psi}\tilde{\boldsymbol{\alpha}}$. The linear transformations preserve an exact representation of the feasible sets, {\emph{i.e., }}
\begin{equation}
\begin{aligned}
& \mathbf{B}^l\tilde{\boldsymbol{\alpha}} \leq \tilde{\boldsymbol{b}_l} \iff (\mathbf{a}^l)^T \mathbf{\Psi}\tilde{\boldsymbol{\alpha}} \leq b_l, \quad \quad (\mathbf{a}^l)^T \boldsymbol{x} \leq b_l \iff \mathbf{B}^l\mathbf{\Gamma} \boldsymbol{x} \leq \tilde{\boldsymbol{b}_l}.
\end{aligned}
\end{equation}
For any point $\boldsymbol{z}$ in the convex hull of \eqref{eq:p-split-rep-lin}, there exist $\tilde{\boldsymbol{\alpha}}^1, \tilde{\boldsymbol{\alpha}}^2, \dots, \tilde{\boldsymbol{\alpha}}^{|\mathcal{D}|}$ and $\boldsymbol{\lambda} \in \mathbb{R}_+^{|\mathcal{D}|}$ such that
\begin{align}
\label{eq:convex-comb}
&\boldsymbol{z} = \sum_{l=1}^{|\mathcal{D}|}\lambda_l\tilde{\boldsymbol{\alpha}}^l\\
& \sum_{l=1}^{|\mathcal{D}|}\lambda_l = 1,\ \ \mathbf{B}^l\tilde{\boldsymbol{\alpha}}^l \leq \tilde{\boldsymbol{b}_l} \quad \forall\ l \in \mathcal{D} \nonumber.
\end{align}
Applying the reverse mapping to \eqref{eq:convex-comb} gives
\begin{equation}
\mathbf{\Psi}\boldsymbol{z} = \sum_{l=1}^{|\mathcal{D}|}\lambda_l\mathbf{\Psi}\tilde{\boldsymbol{\alpha}}^l.
\end{equation}
By construction, $ \left(\mathbf{a}^l\right)^T\mathbf{\Psi}\tilde{\boldsymbol{\alpha}}^l \leq b_l \quad \forall l \in \mathcal{D}$. The point $\mathbf{\Psi}\boldsymbol{z}$ is given by a convex combination of points that all satisfy the constraints of one of the disjuncts in \eqref{eq:lin_disjunct} and, therefore, belongs to the convex hull of \eqref{eq:lin_disjunct}. The same technique easily shows that any point in the convex hull of disjunction~\eqref{eq:lin_disjunct} also belongs to the convex hull of disjunction~\eqref{eq:p-split-rep-lin}.\qed
\end{proof}
Theorem 2 does not hold with nonlinear functions, since the mapping may not be bijective or a homomorphism. In general, the $n$-split formulation will not obtain the convex hull of nonlinear disjunctions, as Section 2.2 shows by example, but it can provide a strong outer approximation.
\subsubsection{Two-term disjunctions}
We further analyze the special case of a two-term disjunction for which we also present a non-lifted $P$-split formulation in the following theorem.
\begin{theorem}
For a two-term disjunction, the $P$-split formulation has the following non-extended realization
\begin{equation}
\begin{aligned}
\label{eq:P-2-split-nonlifted}
& \underset{j \in \mathcal{S}_p}{\sum}\left( \underset{i \in \mathcal{I}_j}{\sum}h_{i,1}(x_i) \right)\leq \left(b_1 - \underset{s \in \mathcal{S}\setminus \mathcal{S}_p}{\sum} \ubar{\alpha}^1_s\right)\lambda_1 + \underset{s \in \mathcal{S}_p}{\sum}\bar{\alpha}^1_s\lambda_2 && \forall \mathcal{S}_p \subset \mathcal{S}\\
& \underset{j \in \mathcal{S}_p}{\sum}\left( \underset{i \in \mathcal{I}_j}{\sum}h_{i,2}(x_i) \right)\leq \left(b_2 - \underset{s \in \mathcal{S}\setminus \mathcal{S}_p}{\sum} \ubar{\alpha}^2_s\right)\lambda_2 + \underset{s \in \mathcal{S}_p}{\sum}\bar{\alpha}^2_s\lambda_1 && \forall \mathcal{S}_p \subset \mathcal{S}\\
& \lambda_1 + \lambda_2 = 1, \ \ \boldsymbol{\lambda} \in \{0, 1\}^2, \ \ \boldsymbol{x} \in \mathcal{X},
\end{aligned}
\end{equation}
where $\mathcal{S} = \{1, 2, \dots P\}$.
\end{theorem}
\begin{proof}
The equality constraints for the disaggregated variables ($\alpha_s^l = \nu_1^{\alpha_s^l} + \nu_2^{\alpha_s^l}$) enable us to easily eliminate the variables $\nu^{\alpha^l_s}_1$ from \eqref{eq:p-split}, resulting in
\begin{align}
& \sum_{s=1}^P\left( \alpha^1_s - \nu^{\alpha^1_s}_2\right) \leq b_1\lambda_1 \label{eq:p-2-s-eq1}\\
& \sum_{s=1}^P \nu^{\alpha^2_s}_2 \leq b_2\lambda_2 \label{eq:p-2-s-eq2}\\
& \ubar{\alpha}^l_s\lambda_1 \leq \alpha^l_s - \nu^{\alpha^l_s}_2 \leq \bar{\alpha}^l_s\lambda_1 &&\forall s \in \{1, 2 , \dots, P\},\forall\ l \in \{1, 2\}\label{eq:p-2-s-eq3}\\
& \ubar{\alpha}^l_s\lambda_2 \leq \nu^{\alpha^l_s}_2 \leq \bar{\alpha}^l_s\lambda_2 &&\forall s \in \{1, 2 , \dots, P\},\forall\ l \in \{1, 2\}\label{eq:p-2-s-eq4}\\
&\underset{i \in \mathcal{I}_s}{\sum}h_{i,l}(x_i) \leq \alpha^l_s &&\forall\ s \in \{1, 2 , \dots, P\}, \ \forall \ l \in \{1, 2\} \label{eq:p-2-s-eq5}\\
& \lambda_1 + \lambda_2 = 1, \quad \boldsymbol{\lambda} \in \{0, 1\}^2 \label{eq:p-2-s-eq6}\\
&\boldsymbol{x} \in \mathcal{X}, \boldsymbol{\alpha}^l \in \mathbb{R}^{P}, \boldsymbol{\nu}^{\alpha^l_s} \in \mathbb{R}^P\ &&\forall \ l \in \{1, 2\}, \forall\ s \in \{1, 2 , \dots, P\}.
\end{align}
Next, we use Fourier-Motzkin elimination to project out the $\nu^{\alpha^1_s}_2$ variables. Combining the constraints in \eqref{eq:p-2-s-eq3} and \eqref{eq:p-2-s-eq4} only results in trivially redundant constraints, {\emph{e.g., }} $\alpha^l_s \leq \bar{\alpha}^l_s(\lambda_1 + \lambda_2)$. Eliminating the first variable $\nu^{\alpha^1_1}_2$ creates two new constraints by combining \eqref{eq:p-2-s-eq1} with \eqref{eq:p-2-s-eq3}--\eqref{eq:p-2-s-eq4}. The first constraint is obtained by removing $\nu^{\alpha^1_1}_2$ and $\alpha^1_1$ from \eqref{eq:p-2-s-eq1} and adding $\ubar{\alpha}^1_1\lambda_2$ to the left-hand side. The second constraint is obtained by removing $\nu^{\alpha^1_1}_2$ from \eqref{eq:p-2-s-eq1} and subtracting $\bar{\alpha}^1_1\lambda_2$ from the left-hand side. Eliminating the next variable is done by repeating the procedure of combining the two new constraints with the corresponding inequalities in \eqref{eq:p-2-s-eq3}--\eqref{eq:p-2-s-eq4}. Each elimination step doubles the number of constraints originating from inequality \eqref{eq:p-2-s-eq1}. Eliminating all the variables $\nu^{\alpha^1_s}_2$ and $\alpha^1_s$ results in the first set of constraints
\begin{equation}
\underset{s \in \mathcal{S}_p}\sum \alpha^1_s \leq \left(b_1 - \underset{s \in \mathcal{S}\setminus \mathcal{S}_p}{\sum} \ubar{\alpha}^1_s\right)\lambda_1 + \underset{s \in \mathcal{S}_p}{\sum}\bar{\alpha}^1_s\lambda_2 \quad \forall \mathcal{S}_p \subset \mathcal{S}.
\end{equation}
The variables $\nu^{\alpha^2_s}_2$ and $\alpha^2_s$ are eliminated by the same steps, resulting in the second set of constraints in \eqref{eq:P-2-split-nonlifted}. \qed
\end{proof}
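The exponential family of constraints in \eqref{eq:P-2-split-nonlifted} can be generated mechanically by enumerating subsets of $\mathcal{S}$. The following Python sketch does this for one disjunct; the helper name and data layout are our own, and we enumerate all subsets including the empty and full set (the theorem's $\mathcal{S}_p \subset \mathcal{S}$ leaves that convention open):

```python
from itertools import chain, combinations

def nonlifted_constraints(b, lb, ub):
    """Enumerate the projected (non-lifted) constraints of the two-term
    P-split formulation for one side of the disjunction.

    For every subset S_p of S = {0, ..., P-1}, the constraint reads
        sum_{s in S_p} alpha_s
            <= (b - sum_{s not in S_p} lb[s]) * lam1
               + (sum_{s in S_p} ub[s]) * lam2.
    Returns a list of (subset, coeff_lam1, coeff_lam2) triples.
    """
    P = len(lb)
    S = range(P)
    out = []
    for S_p in chain.from_iterable(combinations(S, r) for r in range(P + 1)):
        c1 = b - sum(lb[s] for s in S if s not in S_p)  # coefficient of lambda_1
        c2 = sum(ub[s] for s in S_p)                    # coefficient of lambda_2
        out.append((S_p, c1, c2))
    return out
```

Note that the subset $\mathcal{S}_p = \mathcal{S}$ recovers a big-M-type cut, while $\mathcal{S}_p = \emptyset$ is trivially redundant, consistent with each elimination step doubling the constraint count.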
To further analyze the tightness of different $P$-split relaxations we require that the bounds on the auxiliary variables be \textit{independent}, as defined below:
\begin{definition}
We say that the upper and lower bounds for the constraint\\ $\sum_{i=1}^n h_{i}(x_i) \leq 0$ are independent on $\mathcal{X}$ if
\begin{equation}
\begin{aligned}
&\min_{\boldsymbol{x} \in \mathcal{X}} \left( h_{i}(x_i) + h_{j}(x_j) \right) = \min_{\boldsymbol{x} \in \mathcal{X}} \ h_{i}(x_i) +\min_{\boldsymbol{x} \in \mathcal{X}} \ h_{j}(x_j) \\
&\max_{\boldsymbol{x} \in \mathcal{X}} \left( h_{i}(x_i) + h_{j}(x_j) \right) = \max_{\boldsymbol{x} \in \mathcal{X}} \ h_{i}(x_i) +\max_{\boldsymbol{x} \in \mathcal{X}} \ h_{j}(x_j),
\end{aligned}
\end{equation}
hold for all $i,j \in \{1, 2, \dots n\}$.
\end{definition}
Independent bounds are not restricted to linear constraints, but the most general setting with independent bounds is that of linear disjunctions with $\mathcal{X}$ defined as a box. Independent bounds enable us to establish a strict relation on the tightness of different $P$-split formulations, which is presented in the next corollary.
\begin{corollary}
For a two-term disjunction with independent bounds, a $(P+1)$-split formulation, obtained by splitting one variable group in the $P$-split, is always as tight or tighter than the corresponding P-split formulation.
\end{corollary}
\begin{proof}
The non-extended formulation \eqref{eq:P-2-split-nonlifted} for the $(P+1)$-split comprises the constraints in the $P$-split formulation and some additional constraints. \qed
\end{proof}
From Corollary 1 it follows that the $P$-split formulations represent a hierarchy of relaxations, and we formally state this property in the following corollary.
\begin{corollary}
For a linear two-term disjunction the P-split formulations form a hierarchy of relaxations, starting from the big-M relaxation ($P=1$) and converging to the convex hull relaxation ($P=n$).
\end{corollary}
\begin{proof}
Theorems 1 and 2 give equivalence to big-M and convex hull. By Corollary 1, the $(P+1)$-split is as tight or tighter than the $P$-split relaxation. \qed
\end{proof}
\subsection{Illustrative example}
To see the differences between $P$-split formulations, consider the disjunction
\begin{equation}
\label{eq:example1}
\tag{ex-1}
\begin{aligned}
&\begin{bmatrix}
\sum_{i=1}^4 x_i ^2 \leq 1
\end{bmatrix}
\
\lor \
\begin{bmatrix}
\sum_{i=1}^4 (3-x_i)^2 \leq 1
\end{bmatrix}\\
& \boldsymbol{x} \in \mathbb{R}^4.
\end{aligned}
\end{equation}
The tightest valid bounds on all the auxiliary variables are given by
\begin{equation}
\ubar{\alpha}^l_s = 0,\quad \bar{\alpha}^l_s := \left(\sqrt{|\mathcal{I}_s|\cdot3^2} + 1 \right)^2 \quad \forall s \in \{1,2,3,4\},\ \forall l \in \{1,2\}.
\end{equation}
These bounds are derived from the fact that one of the two constraints in the disjunction must hold, and are symmetric for the two sets of $\alpha$-variables. The continuously relaxed feasible sets of the $P$-split formulations of disjunction \eqref{eq:example1} are shown in Fig.~\ref{fig:relaxations}, which shows that the relaxations overall tighten with increasing number of splits $P$. The 4-split formulation does not give the convex hull, but provides a good approximation. For this example, the independent bound property does not hold and the relaxations do not form a proper hierarchy. To show why the independent bound property is needed, we compare the non-extended representations of the 1-split and 2-split formulations:
\begin{align}
&\sum_{i=1}^4 x_i^2 \leq \lambda_1 + \left(\sqrt{36} +1\right)^2\lambda_2, &&\sum_{i=1}^4 (3-x_i)^2 \leq \lambda_2 + \left(\sqrt{36} +1\right)^2\lambda_1 \tag{1-s}\\
& \sum_{i=1}^2 x_i ^2 \leq \lambda_1 +\left(\sqrt{18} +1\right)^2\lambda_2, &&\sum_{i=3}^4 x_i ^2 \leq \lambda_1 +\left(\sqrt{18} +1\right)^2\lambda_2 \tag{2-s1}\\
& \sum_{i=1}^2 (3-x_i)^2 \leq \lambda_2 +\left(\sqrt{18} +1\right)^2\lambda_1, \hspace{-0.4cm}&&\sum_{i=3}^4 (3-x_i)^2 \leq \lambda_2 + \left(\sqrt{18} +1\right)^2\lambda_1 \tag{2-s2}\\
& \sum_{i=1}^4 x_i ^2 \leq \lambda_1 + 2\left(\sqrt{18} +1\right)^2\lambda_2, &&\hspace{-0.4cm}\sum_{i=1}^4 (3-x_i)^2 \leq \lambda_2 +2\left(\sqrt{18} +1\right)^2\lambda_1. \tag{2-s3}
\end{align}
The 1-split formulation is given by (1-s), and the 2-split by (2-s1)--(2-s3).
The 2-split contains additional constraints ~(2-s1) and (2-s2), but ~(2-s3) is a weaker version of (1-s). If the independent bound property were true, then (2-s3) and (1-s) would be identical and the relaxations would form a proper hierarchy.
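A one-line numeric check makes the gap explicit: the big-M-type coefficient in (2-s3) is the sum of the two subspace bounds, which strictly exceeds the joint bound in (1-s) because the partial maxima are attained at different points.

```python
import math

# Coefficient multiplying the "other" disjunct's lambda in (1-s) and (2-s3):
one_split = (math.sqrt(36) + 1) ** 2      # joint bound over all four variables
two_split = 2 * (math.sqrt(18) + 1) ** 2  # sum of the two subspace bounds

print(one_split)   # 49.0
print(two_split)   # about 54.97, strictly larger, so (2-s3) is the weaker cut
```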
\vspace{-10pt}
\begin{figure}[!h]
\centering
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[trim={1.5cm 0cm 2cm 0cm},clip,width=.99\linewidth]{ex1_fig1.png}
\caption{\centering 1-split/big-M $\left(\{x_1, x_2, x_3, x_4\}\right)$}
\end{subfigure}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[trim={1.5cm 0cm 2cm 0cm},clip,width=.99\linewidth]{ex1_fig2.png}
\caption{\centering 2-split \break $\left(\{x_1, x_2\}, \{x_3, x_4\}\right)$}
\end{subfigure}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[trim={1.5cm 0cm 2cm 0cm},clip,width=.99\linewidth]{ex1_fig4.png}
\caption{\centering 4-split $\left(\{x_1\},\{x_2\},\{x_3\},\{x_4\}\right)$}
\end{subfigure}
\caption{The dark circles show the feasible set of \eqref{eq:example1} in the $x_1,x_2$ space. The light grey areas show the continuously relaxed feasible set of the P-split formulations. The sets in the parentheses show the partitioning of variables.}
\label{fig:relaxations}
\end{figure}
\section{Numerical comparison}
To compare how the formulations perform computationally, we apply the $P$-split, big-M, and convex hull formulations to several test problems. We consider three types of optimization problems that have a suitable structure for the $P$-split formulation (assumptions 1--3) and that are known to be challenging.
\subsubsection{K-means clustering}
Using the formulation by Papageorgiou and Trespalacios \cite{papageorgiou2018pseudo}, the K-means clustering problem \cite{macqueen1967some} is given by
\begin{equation}
\label{eq:clustering}
\begin{aligned}
& \min_{\mathbf{r} \in \mathbb{R}^L, \boldsymbol{x}^j \in \mathbb{R}^n, \forall j \in \mathcal{K}} &&\sum_{i=1}^L r_i\\
& \text{s.t.} &&\underset{j \in \mathcal{K}}{\lor} \begin{bmatrix}
\norm{\boldsymbol{x}^j - \mathbf{d}^i}_2^2 \leq r_i
\end{bmatrix}\quad \forall i \in \{1, 2, \dots, L\},\\
\end{aligned}
\end{equation}
where $\boldsymbol{x}^j$ are the cluster centers, $\{\mathbf{d}^i\}_{i=1}^L$ are $n$-dimensional data points, and $\mathcal{K} =\{1,2,\dots k\}$. The tightest upper bounds for the auxiliary variables in the $P$-split formulations are given by the largest squared Euclidean distance between any two data points in the subspace corresponding to the auxiliary variable. By introducing auxiliary variables for the differences $(\boldsymbol{x}-\mathbf{d})$, we can express the convex hull of the disjunctions by rotated second order cone constraints \cite{ben2001lectures} in a form suitable for Gurobi. We use the G2 data set \cite{G2sets} to generate low-dimensional test instances, and the MNIST data set \cite{lecun2010mnist} to generate high-dimensional test instances. For the MNIST-based problems, we select the first images of each class ranging from 0 to the number of clusters. Details about the problems are presented in Table~\ref{tab:test_probs}.
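The bound computation described above is a simple pairwise-distance maximization per split; a minimal sketch (helper name and data layout are our own) is:

```python
import numpy as np

def alpha_upper_bounds(D, splits):
    """Tightest upper bounds for the alpha-variables in the K-means P-split:
    the largest squared Euclidean distance between any two data points,
    restricted to the coordinates of each split.

    D:      (L, n) array of data points.
    splits: list of index lists partitioning {0, ..., n-1}.
    """
    bounds = []
    for idx in splits:
        sub = D[:, idx]
        # pairwise squared distances in the subspace of this split
        d2 = ((sub[:, None, :] - sub[None, :, :]) ** 2).sum(-1)
        bounds.append(float(d2.max()))
    return bounds
```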
\subsubsection{P\_ball problems} The task is to assign $p$-points to $n$-dimensional unit balls such that the total $\ell_1$ distance between all points is minimized and only one point is assigned to each unit ball \cite{kronqvist2020disjunctive}. Upper bounds on the auxiliary variables in the P-split formulation are given by the same technique as for the $M$-coefficients in \cite{kronqvist2020disjunctive}, but in the subspace corresponding to the auxiliary variable. By introducing auxiliary variables for the differences between the points and the centers, we are able to express the convex hull by second order cone constraints \cite{ben2001lectures} in a form suitable for Gurobi. We have generated a few larger instances to obtain more challenging problems and details of the problems are given in Table~\ref{tab:test_probs}.
\subsubsection{ReLU neural networks}
Optimization over a ReLU neural network (NN) is used to quantify extreme outputs \cite{anderson2020strong,botoeva2020efficient}. Each ReLU activation function ($y = \max \{ 0, \boldsymbol{w}^T \boldsymbol{x} + b\}$) can be expressed as a two-term disjunction using the $P$-split formulation, by separating $\boldsymbol{w}^T \boldsymbol{x} = \sum_{i \in \mathcal{S}_1 \cup \dots \cup \mathcal{S}_P} w_i x_i$.
We sort the variables $x_i$ by index and assign them to splits of even size.
Upper bounds on node outputs and auxiliary variables can be computed using simple interval arithmetic.
We created several instances (Table~\ref{tab:test_probs}) that minimize the prediction of single-output NNs trained on the $d$-dimensional Ackley/Rastrigin functions.
All NNs were implemented in PyTorch \cite{pytorch} and trained for 1000 epochs, using a Latin hypercube of 10$^6$ samples.
Note that more samples may be required to accurately represent the target functions, but here we are solely concerned with the performance of various optimization formulations.
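The interval arithmetic used for the node bounds is standard sign-splitting bound propagation; a minimal sketch (our own function, not from the paper) for one layer is below. Bounds on the $\alpha$-variables of a split follow the same way by restricting the weight matrix to the columns of that split.

```python
import numpy as np

def interval_relu_layer(W, b, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through y = relu(W x + b)
    with standard interval arithmetic: positive weights take the matching
    bound, negative weights take the opposite one."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    pre_lo = Wp @ lo + Wn @ hi + b
    pre_hi = Wp @ hi + Wn @ lo + b
    return np.maximum(pre_lo, 0), np.maximum(pre_hi, 0)
```

Applying this layer by layer gives valid, if increasingly loose, bounds, which is consistent with the weaker relaxations observed for the deeper layers in the experiments.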
\begin{table}[]
\caption{Details of the clustering, P$\_$ball and neural network problems.}
\centering
\begin{tabular}{ c| c | c |c } \hline
name &\ data points \ & \ data dimension \ & \ number of clusters\\
\hline
Cluster\_g1 & 20 & 32 & 2\\
Cluster\_g2 & 25 & 32 & 2\\
Cluster\_g3 & 20 & 16 & 3\\ \hline
Cluster\_m1 & 5 & 784 & 3\\
Cluster\_m2 & 8 & 784 & 2\\
Cluster\_m3 & 10 & 784 & 2\\
\hline
&\ number of balls \ & \ number of points \ & \ ball dimension\\
\hline
P\_ball\_1\ \ & 10 & 5 & 8\\
P\_ball\_2\ \ & 10 & 5 & 16\\
P\_ball\_3\ \ & 8 & 5 & 32\\
\hline
&\ input dimension ($d$) \ & \ hidden layers \ & \ function \\
\hline
NN\_1 & 2 & [50, 50, 50] & Ackley \\
NN\_2 & 10 & [50, 50, 50] & Ackley \\
NN\_3 & 3 & [100, 100] & Rastrigin \\ \hline
\end{tabular}
\label{tab:test_probs}
\vspace{-8pt}
\end{table}
\subsubsection{Computational setup}
Optimization performance is dependent on both the tightness and the computational complexity of the continuous relaxation.
The default (automatic) parameter selection in Gurobi caused large variations in the results that were due to different solution strategies rather than differences between formulations.
Therefore, we used the parameter settings \texttt{MIPFocus} = 3, \texttt{Cuts} = 1, and \texttt{MIQCPMethod} = 1 for all problems.
We found that using \texttt{PreMIQCPForm} = 2 drastically improves the performance of the extended convex hull formulations for the clustering and P\_ball problems. However, it resulted in worse performance for the other formulations and, therefore, we only used it with the convex hull. Since the NN problems only contain linear constraints, only the \texttt{MIPFocus} and \texttt{Cuts} parameters apply to these problems.
The default values were used for all other settings.
All problems were solved using Gurobi 9.0.3 on a desktop computer with an i7 8700k processor and 16GB RAM.
Different variable partitionings can lead to differences in the $P$-split formulations. For all the problems, the variables are simply partitioned based on their ordered indices. For the K-means clustering and P\_ball problems, we have used the smallest valid M-coefficients and tight bounds for the $\alpha$-variables; both problem types have analytical expressions for all the bounds. For the NN problems, tight bounds are not easily available, so the bounds are computed using interval arithmetic.
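The ordered-index partitioning mentioned above can be sketched as follows (our own helper; any sizes within one of each other count as "even"):

```python
def partition_indices(n, P):
    """Split indices 0..n-1 into P contiguous groups of (near-)equal size,
    matching the simple ordered-index partitioning used for the test problems."""
    base, extra = divmod(n, P)
    splits, start = [], 0
    for s in range(P):
        size = base + (1 if s < extra else 0)
        splits.append(list(range(start, start + size)))
        start += size
    return splits
```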
\subsection{Numerical results}
Table~\ref{tab:res} shows the elapsed CPU time and number of nodes explored to solve each problem. The results show that $P$-split formulations can drastically reduce the number of explored nodes compared to the big-M formulation, even with only a few splits. The differences are clearest for the nonlinear problems, where both the CPU times and numbers of nodes are reduced by several orders of magnitude. As expected, the convex hull formulation results in the fewest explored nodes.
However, the $P$-split formulations have a simpler\footnotemark\ problem formulation, reducing the CPU times for all but one instance compared to the convex hull. The results clearly show the advantage of the intermediate $P$-split formulations, resulting in a tighter formulation than big-M and a computationally cheaper formulation than the extended convex hull.
\begin{table}[!h]
\caption{CPU times [s] and numbers of nodes explored for test problems. In bold is the \emph{winner} for each test instance with respect to both time and number of nodes. The grey shading shows the $P$-split times that strictly outperform both the big-M and convex hull formulations. The time limit was 1800 CPU seconds.
Cells marked NA correspond to instances with fewer than $P$ terms per disjunction.}
\vspace{2pt}
\centering
\begin{tabular}{ c c | c | c | c | c | c | c | c }
\hline
instance & &\ big-M \ & \ 2-split \ & \ 4-split & \ 8-split & \ 16-split & \ 32-split & \ convex hull\\
\hline
Cluster\_g1 \ &time & $>$1800 & 81.0 & \cellcolor[gray]{0.8}13.9 & \cellcolor[gray]{0.8}2.9 & \cellcolor[gray]{0.8}\textbf{1.7} & \cellcolor[gray]{0.8}3.5 & 42.0\\
& nodes & $>$8998 & 2946 & 1096 & 256 & 98 & 91 & \textbf{73}\\
\hline
Cluster\_g2 \ &time & $>$1800 & 106.3 & \cellcolor[gray]{0.8}7.7 & \cellcolor[gray]{0.8}4.3 & \cellcolor[gray]{0.8}\textbf{2.1 } & \cellcolor[gray]{0.8}4.5 & 40.6 \\
& nodes & $>$10431 & 1736 & 481 & 217 & 104 & 86 & \textbf{77}\\
\hline
Cluster\_g3 \ &time & $>$1800 & $>$1800 & \cellcolor[gray]{0.8}870.6 &\ \cellcolor[gray]{0.8}\textbf{407.2} & \cellcolor[gray]{0.8}597.5 & NA & $>$1800\\
& nodes & $>$28906 & $>$40820 & 19307 & \textbf{14923} & 16806 & & $>$7797\\
\hline
P\_ball\_1 \ &time & 403.0 & 235.4 & 285.1 & \cellcolor[gray]{0.8}\textbf{18.5} &NA &NA & 42.2 \\
& nodes & 29493 & 7919 & 5518 & 2202 & & & \textbf{1437}\\
\hline
P\_ball\_2 \ &time & $>$1800 & 483.6 & 326.6 & 41.6 & 30.6 &NA & \textbf{28.2} \\
& nodes & $>$19622 & 13602 & 5871 & 3921 & 1261 & & \textbf{531}\\
\hline
P\_ball\_3 &time & $>$1800 & $>$1800& $>$1800 & 149.3 & \cellcolor[gray]{0.8}91.1 & \cellcolor[gray]{0.8}\textbf{78.7} & 114.0 \\
& nodes & $>$7537 & $>$6035 & $>$6708 & 7042 & 3572 & 631 & \textbf{554} \\
\hline
& &\ big-M \ & \ 14-split \ & \ 28-split & \ 56-split & \ 196-split & \ 392-split & \ convex hull\\%\footnotemark\\
\hline
Cluster\_m1 \ &time & $>$1800 & $>$1800 & \cellcolor[gray]{0.8}129.5 & \cellcolor[gray]{0.8}76.8 & \cellcolor[gray]{0.8}\textbf{32.0} & \cellcolor[gray]{0.8}33.2 & 313.3\\
& nodes & $>$10680 & $>$9651 & 2926 & 1462 & 524 & \textbf{195} & 228\\
\hline
Cluster\_m2 \ &time & $>$1800 &\cellcolor[gray]{0.8} 1116.5 & \cellcolor[gray]{0.8} 156.1 & \cellcolor[gray]{0.8}\textbf{27.1} & \cellcolor[gray]{0.8} 97.0 & \cellcolor[gray]{0.8}54.2 & 1260.1 \\
& nodes & $>$4867 & 6220 & 1915 & 805 & 2752 & 1155 & \textbf{131}\\
\hline
Cluster\_m3 \ &time & $>$1800 & $>$1800 & \cellcolor[gray]{0.8}429.5 &\ \cellcolor[gray]{0.8}60.0 & \cellcolor[gray]{0.8}23.2 & \cellcolor[gray]{0.8}\textbf{19.8} & $>$1800\\
& nodes & $>$4419 & $>$4197 & 3095 & 1502 & 741 & \textbf{397} & $>$93\\
\hline
\rule{0pt}{2.5ex}& & \ 1-split/\ & \ 2-split & \ 4-split & \ 8-split & \ 16-split & \ 32-split & 50-split/ \\
& & \ big-M \ & & & & & &convex hull* \\
\hline
NN\_1 \ &time & 36.1 & \cellcolor[gray]{0.8} \textbf{29.4} & 41.8 & 57.0 & 85.7 & 145.1 & 198.5 \\
& nodes & 24177 & 12377 & 11229 & \textbf{7415} & 11117 & 9793 & 11734 \\
\hline
NN\_2 &time & \textbf{21.6} & 35.5 & 50.7 & 131.4 & 287.3 & 776.1 & $>$1800 \\
& nodes & 19746 & 20157 & 14003 & 11174 & \textbf{6687} & 12685 & $>$4016 \\
\hline
NN\_3 \ &time & \textbf{141.8} & 210.6 & 206.5 & 275.5 & 305.8 & 429.1 & 556.6\\
& nodes & 116996 & 101113 & 86582 & 84455 & 69022 & 56873 & \textbf{48153}\\
\hline
\end{tabular}
\footnotesize
*50-split is not the convex hull of each node for NN\_3, which has layers of 100 nodes.
\label{tab:res}
\end{table}
Note that the $P$-split formulations are in general robust with respect to the choice of $P$. For the clustering and P\_ball problems, all $P$-split formulations outperformed the big-M formulation both in terms of solution times and numbers of explored nodes. For the cases where the smallest $P$-split formulations timed out, Gurobi terminated with a much smaller gap compared to that of the big-M formulation. The $P$-split formulations also outperform the convex hull formulations in terms of solution time for a wide range of $P$ in all but one of the test problems.
\footnotetext{The extended convex hull formulations for the nonlinear problems require auxiliary variables and (rotated) second order cone constraints. All $P$-split formulations have fewer variables and constraints and only contain linear/convex-quadratic constraints.}
For the NN problems, which have linear disjunctions, the situation is somewhat different.
Here, while increasing $P$ still decreased the number of explored nodes, the improvements are less significant, and the trend is not completely monotonic.
Note that bounds on the inputs to layers 2--3 are computed using interval arithmetic, resulting in overall weaker relaxations for all formulations. The weaker bounds in layers 2--3 reduce the benefits of both the $P$-split and convex hull formulations, and may favor the simpler big-M formulation.
As the reduction in explored nodes is less drastic, smaller formulations perform the best in terms of CPU time, supporting claims that extended formulations may perform worse than expected \cite{anderson2020strong,vielma2019small}.
This may also be a consequence of Gurobi efficiently handling linear problems when it detects big-M-type constraints.
Ignoring the big-M (1-split), the 2- and 4-splits have the lowest CPU time for all NNs, and all the split formulations solve the problems significantly faster than the convex hull formulation.
\newpage
\section{Conclusions}
We have presented a general framework for generating intermediate relaxations between the big-M and convex hull formulations. The numerical results show great potential for the intermediate relaxations, which provide a good approximation of the convex hull through a computationally simpler problem. For several of the test problems, the intermediate relaxations result in a similar number of explored nodes as the convex hull formulation while reducing the total solution time by an order of magnitude.
\section*{Acknowledgements}
The research was funded by a Newton International Fellowship by the Royal Society (NIF\textbackslash R1\textbackslash 182194) to JK, a grant by the Swedish Cultural Foundation in Finland to JK, and by Engineering \& Physical Sciences Research Council (EPSRC) Fellowships to RM and CT (grant numbers EP/P016871/1 and EP/T001577/1). CT also acknowledges support from an Imperial College Research Fellowship.
\bibliographystyle{splncs04}
% https://arxiv.org/abs/1704.04752
\title{Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent}
\begin{abstract}
In this paper, we revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm of sampling from a smooth and (strongly) log-concave density. We improve the existing results when the convergence is measured in the Wasserstein distance and provide further insights on the very tight relations between, on the one hand, the Langevin Monte Carlo for sampling and, on the other hand, the gradient descent for optimization. Finally, we also establish guarantees for the convergence of a version of the Langevin Monte Carlo algorithm that is based on noisy evaluations of the gradient.
\end{abstract}
\section{Introduction}
Let $p$ be a positive integer and $f:\mathbb{R}^p\to\mathbb{R}$ be a measurable function such that the
integral $\int_{\mathbb{R}^p} \exp\{-f(\boldsymbol{\theta})\}\,d\boldsymbol{\theta}$ is finite. In various applications,
one is faced with the problems of finding the minimum point of $f$ or computing the average
with respect to the probability density
\begin{equation}
\pi(\boldsymbol{\theta}) = \frac{e^{-f(\boldsymbol{\theta})}}{\int_{\mathbb{R}^p} e^{-f(\boldsymbol u)}\,d\boldsymbol u}.
\end{equation}
In other words, one often seeks to approximate the values $\boldsymbol{\theta}^*$ and $\bar\boldsymbol{\theta}$
defined as
\begin{equation}
\bar\boldsymbol{\theta} = \int_{\mathbb{R}^p} \boldsymbol{\theta}\,\pi(\boldsymbol{\theta})\,d\boldsymbol{\theta},\qquad
\boldsymbol{\theta}^*\in\text{arg}\min_{\boldsymbol{\theta}\in\mathbb{R}^p} f(\boldsymbol{\theta}).
\end{equation}
In most situations, the approximations of these values are computed using iterative algorithms
which share many common features. There is a vast variety of such algorithms for solving both tasks,
see for example \citep{BoydBook} for optimization and \citep{Atchade2011} for approximate sampling.
The similarities between the task of optimization and that of averaging have been recently exploited
in the papers \citep{Dalalyan14,Durmus2,Durmus1} in order to establish fast and accurate theoretical
guarantees for sampling from and averaging with respect to the density $\pi$ using the Langevin
Monte Carlo algorithm. The goal of the present work is to push further this study both by improving
the existing bounds and by extending them in some directions.
We will focus on strongly convex functions $f$ having a Lipschitz continuous gradient.
That is, we assume that there exist two positive constants $m$ and $M$ such that
\begin{equation} \label{1}
\begin{cases}
f(\boldsymbol{\theta})-f(\boldsymbol{\theta}')-\nabla f(\boldsymbol{\theta}')^\top (\boldsymbol{\theta}-\boldsymbol{\theta}')
\ge (\nicefrac{m}2)\|\boldsymbol{\theta}-\boldsymbol{\theta}'\|_2^2, \text{\vphantom{$I_{\textstyle\int_{I_I}}$}}\\
\|\nabla f(\boldsymbol{\theta})-\nabla f(\boldsymbol{\theta}')\|_2 \le M \|\boldsymbol{\theta}-\boldsymbol{\theta}'\|_2,
\end{cases}
\qquad \forall \boldsymbol{\theta},\boldsymbol{\theta}'\in\mathbb{R}^p,
\end{equation}
where $\nabla f$ stands for the gradient of $f$ and $\|\cdot\|_2$ is the Euclidean norm.
We say that the density $\pi(\boldsymbol{\theta})\propto e^{-f(\boldsymbol{\theta})}$ is log-concave (resp.\ strongly
log-concave) if the function $f$ satisfies the first inequality of (\ref{1}) with $m=0$
(resp.\ $m>0$).
The Langevin Monte Carlo (LMC) algorithm studied throughout this work is the analogue of the
gradient descent algorithm for optimization. Starting from an initial point $\boldsymbol{\vartheta}^{(0)}\in\mathbb{R}^p$
that may be deterministic or random, the iterations of the algorithm are defined by
the update rule
\begin{align}\label{2}
\boldsymbol{\vartheta}^{(k+1,h)} = \boldsymbol{\vartheta}^{(k,h)} - h \nabla f(\boldsymbol{\vartheta}^{(k,h)})+ \sqrt{2h}\;\boldsymbol{\xi}^{(k+1)};
\qquad k=0,1,2,\ldots
\end{align}
where $h>0$ is a tuning parameter, referred to as the step-size, and $\boldsymbol{\xi}^{(1)},\ldots,\boldsymbol{\xi}^{(k)},\ldots$
is a sequence of mutually independent, and independent of $\boldsymbol{\vartheta}^{(0)}$, centered Gaussian vectors with
covariance matrices equal to identity.
Under the assumptions imposed on $f$, when $h$ is small and $k$ is large (so that the product
$kh$ is large), the distribution of $\boldsymbol{\vartheta}^{(k,h)}$ is close in various metrics to the
distribution with density $\pi(\boldsymbol{\theta})$, hereafter referred to as the target distribution.
An important question is to quantify this closeness; this might be particularly useful for
deriving a stopping rule for the LMC algorithm.
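The update rule \eqref{2} is straightforward to implement. The sketch below (helper name and interface are ours, not from the paper) runs the chain for a generic gradient oracle:

```python
import numpy as np

def lmc(grad_f, theta0, h, K, rng):
    """Langevin Monte Carlo iterates of the update rule (2):
    theta_{k+1} = theta_k - h * grad_f(theta_k) + sqrt(2 h) * xi_{k+1},
    with xi_{k+1} standard Gaussian. Returns all K + 1 iterates."""
    theta = np.array(theta0, dtype=float)
    out = [theta.copy()]
    for _ in range(K):
        xi = rng.standard_normal(theta.shape)
        theta = theta - h * grad_f(theta) + np.sqrt(2 * h) * xi
        out.append(theta.copy())
    return np.array(out)
```

For example, with $f(\boldsymbol{\theta}) = \|\boldsymbol{\theta}\|_2^2/2$ (so $\nabla f(\boldsymbol{\theta}) = \boldsymbol{\theta}$ and $\pi$ is the standard Gaussian), the iterates for small $h$ should have per-coordinate variance close to one after a burn-in.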
The measure of approximation used in this paper is the Wasserstein-Monge-Kantorovich
distance $W_2$. For two measures $\mu$ and $\nu$ defined on $(\mathbb{R}^p,\mathscr B(\mathbb{R}^p))$,
$W_2$ is defined by
\begin{equation}
W_2(\mu,\nu) = \Big(\inf_{\gamma\in \Gamma(\mu,\nu)} \int_{\mathbb{R}^p\times \mathbb{R}^p}
\|\boldsymbol{\theta}-\boldsymbol{\theta}'\|_2^2\,d\gamma(\boldsymbol{\theta},\boldsymbol{\theta}')\Big)^{1/2},
\end{equation}
where the $\inf$ is with respect to all joint distributions $\gamma$ having $\mu$ and
$\nu$ as marginal distributions. This distance is perhaps more suitable for quantifying
the quality of approximate sampling schemes than other metrics such as the total variation.
Indeed, on the one hand, bounds on the Wasserstein distance---unlike the bounds on
the total-variation distance---directly provide the level of approximating
the first order moment. For instance, if $\mu$ and $\nu$ are two Dirac measures at
the points $\boldsymbol{\theta}$ and $\boldsymbol{\theta}'$, respectively, then the total-variation distance
$D_{\rm TV}(\delta_{\boldsymbol{\theta}},\delta_{\boldsymbol{\theta}'})$ equals one whenever $\boldsymbol{\theta}\not=\boldsymbol{\theta}'$,
whereas $W_2(\delta_{\boldsymbol{\theta}},\delta_{\boldsymbol{\theta}'}) = \|\boldsymbol{\theta}-\boldsymbol{\theta}'\|_2$ is a smoothly
increasing function of the Euclidean distance between $\boldsymbol{\theta}$ and $\boldsymbol{\theta}'$. This
better matches the intuitive notion of closeness of two distributions.
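To make the comparison concrete, $W_2$ can be evaluated in closed form for Gaussian measures with diagonal covariance matrices (a standard fact, not stated in the text); letting both covariances shrink to zero recovers the Dirac case $W_2(\delta_{\boldsymbol{\theta}},\delta_{\boldsymbol{\theta}'})=\|\boldsymbol{\theta}-\boldsymbol{\theta}'\|_2$ discussed above.

```python
import numpy as np

def w2_diag_gauss(mu1, s1, mu2, s2):
    """W_2 between N(mu1, diag(s1)^2) and N(mu2, diag(s2)^2).
    For Gaussians with commuting covariances the optimal coupling is explicit:
    W_2^2 = ||mu1 - mu2||^2 + ||s1 - s2||^2, with s1, s2 the std deviations."""
    mu1, s1, mu2, s2 = map(np.asarray, (mu1, s1, mu2, s2))
    return np.sqrt(np.sum((mu1 - mu2) ** 2) + np.sum((s1 - s2) ** 2))

# Dirac limit: zero covariance on both sides leaves the Euclidean distance.
d = w2_diag_gauss([0.0, 0.0], [0.0, 0.0], [3.0, 4.0], [0.0, 0.0])
```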
\section{Improved guarantees for the Wasserstein distance}
The rationale behind the LMC algorithm \eqref{2} is simple: the Markov chain
$\{\boldsymbol{\vartheta}^{(k,h)}\}_{k\in\mathbb{N}}$ is the Euler discretization of a continuous-time
diffusion process $\{\boldsymbol L_t :t\in\mathbb{R}_+\}$, known as Langevin diffusion, that has $\pi$
as invariant density \citep[Thm. 3.5]{bhattacharya1978}. The Langevin diffusion is
defined by the stochastic differential equation
\begin{equation}\label{3}
d\boldsymbol L_t = -\nabla f(\boldsymbol L_t)\,dt + \sqrt{2} \; d\boldsymbol W_t,\qquad t\ge 0,
\end{equation}
where $\{\boldsymbol W_t:t\ge 0\}$ is a $p$-dimensional Brownian motion. When $f$ satisfies
condition (\ref{1}), equation (\ref{3}) has a unique
strong solution, which is a Markov process. Let $\nu_k$ be the distribution of the
$k$-th iterate of the LMC algorithm, that is, $\boldsymbol{\vartheta}^{(k,h)}\sim \nu_k$.
\begin{theorem}\label{thOne}
Assume that $h\in(0,\nicefrac2M)$. The following claims hold:
\begin{enumerate}\itemsep=10pt
\item[{\rm (a)}] If $h\le \nicefrac2{(m+M)}$ then $W_2(\nu_K, \pi) \le
(1-mh)^K W_2(\nu_0,\pi) + 1.82(M/m)(hp)^{1/2}$.
\item[{\rm (b)}] If $h\ge \nicefrac2{(m+M)}$ then $W_2(\nu_K, \pi) \le
\displaystyle (Mh-1)^K W_2(\nu_0,\pi) + 1.82\frac{Mh}{2-Mh}(hp)^{1/2}$.
\end{enumerate}
\end{theorem}
The proof of this theorem is postponed to \Cref{secProof}. We content ourselves here
with discussing the relation of this result to previous work. Note that if the initial
value $\boldsymbol{\vartheta}^{(0)}=\boldsymbol{\theta}^{(0)}$ is deterministic then, according to
\citep[Theorem 1]{Durmus2}, we have
\begin{align}
W_2(\nu_0,\pi)^2
& = \int_{\mathbb{R}^p} \|\boldsymbol{\theta}^{(0)}-\boldsymbol{\theta}\|_2^2\pi(d\boldsymbol{\theta})\\
& = \|\boldsymbol{\theta}^{(0)}-\bar\boldsymbol{\theta}\|_2^2 + \int_{\mathbb{R}^p} \|\bar\boldsymbol{\theta}-\boldsymbol{\theta}\|_2^2\pi(d\boldsymbol{\theta})\\
& \le \|\boldsymbol{\theta}^{(0)}-\bar\boldsymbol{\theta}\|_2^2 + p/m.\label{4}
\end{align}
First of all, let us remark that if we choose $h$ and $K$ so that
\begin{equation}\label{5}
h\le \nicefrac{2}{(m+M)},\qquad e^{-mhK}W_2(\nu_0,\pi)\le \varepsilon/2,\quad
1.82(M/m)(hp)^{1/2}\le \varepsilon/2,
\end{equation}
then we have $W_2(\nu_K, \pi) \le \varepsilon$. In other words, conditions
\eqref{5} are sufficient for the distribution of the output of the LMC algorithm with $K$
iterations to be within the precision $\varepsilon$ of the target distribution, when the precision
is measured using the Wasserstein distance. This readily yields
\begin{equation}\label{6}
h\le \frac{m^2\varepsilon^2}{14M^2p}\wedge \frac2{m+M}\quad\text{and}\quad
hK\ge \frac1m\log\Big(\frac{2(\|\boldsymbol{\theta}^{(0)}-\bar\boldsymbol{\theta}\|_2^2+ p/m)^{1/2}}{\varepsilon}\Big).
\end{equation}
Assuming $m,M$ and $\|\boldsymbol{\theta}^{(0)}-\bar\boldsymbol{\theta}\|_2^2/p$ to be constants, we can deduce
from the last display that $K = C p\varepsilon^{-2}\log(p/\varepsilon)$
iterations suffice in order to reach the precision level $\varepsilon$. This fact was
first established in \citep{Dalalyan14} for the LMC algorithm with a warm start
and the total-variation distance. It was later improved by \cite{Durmus2}, who showed
that the same result holds for any starting point and established similar bounds for
the Wasserstein distance.
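The last display translates directly into a step-size/iteration schedule. The sketch below is a plain transcription of \eqref{6}; the numerical values of $m$, $M$, $p$, $\varepsilon$ and of the initial distance are illustrative assumptions.

```python
import numpy as np

def lmc_schedule(m, M, p, eps, init_sq_dist):
    """Step-size h and iteration count K sufficient for W_2(nu_K, pi) <= eps,
    following display (6):
      h  <= min( m^2 eps^2 / (14 M^2 p),  2/(m+M) )
      hK >= (1/m) * log( 2 * (||theta^(0) - bar_theta||^2 + p/m)^(1/2) / eps )."""
    h = min(m ** 2 * eps ** 2 / (14 * M ** 2 * p), 2.0 / (m + M))
    hK = (1.0 / m) * np.log(2.0 * np.sqrt(init_sq_dist + p / m) / eps)
    K = int(np.ceil(max(hK, 0.0) / h))
    return h, K

# Illustrative constants (not taken from the paper's experiments).
h, K = lmc_schedule(m=4.0, M=5.0, p=10, eps=0.1, init_sq_dist=10.0)
```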
In order to make the comparison easier, let us recall below the corresponding result
from\footnote{We slightly adapt the original result to account for the fact
that we are dealing with the LMC algorithm with a constant step.}
\citep{Durmus2}. It asserts that under condition \eqref{1}, if $h\le \nicefrac2{(m+M)}$
then
\begin{equation}\label{l}
W_2^2(\nu_{K}, \pi) \le
2\Big(1-\frac{mMh}{m+M}\Big)^K W^2_2(\nu_0,\pi) +
\frac{Mhp}{m}(m+M)\Big(h + \frac{m+M}{2mM}\Big)\Big(2+\frac{M^2h}{m}+\frac{M^2h^2}{6}\Big).
\end{equation}
When we compare this inequality with the claims of \Cref{thOne}, we see that
\begin{enumerate}
\item[i)] \Cref{thOne} holds under weaker conditions: $h\le \nicefrac2{M}$ instead of
$h\le \nicefrac2{(m+M)}$.
\item[ii)] The analytical expressions of the upper bounds on the Wasserstein distance
in \Cref{thOne} are not as involved as those of \eqref{l}.
\item[iii)] A closer inspection shows that, when $h\le \nicefrac2{(m+M)}$,
the upper bound in part (a) of \Cref{thOne} is sharper than that of \eqref{l}.
\end{enumerate}
In order to better illustrate the claim in iii) above, we consider a numerical example
in which $m=4$, $M = 5$ and $\|\boldsymbol{\theta}^{(0)}-\bar\boldsymbol{\theta}\|_2^2 = p$.
Let $F_{\rm our}(h,K,p)$ and $F_{\rm DM}(h,K,p)$ be the upper bounds on $W_2(\nu_K,\pi)$
provided by \Cref{thOne} and \eqref{l}, respectively. For different values of $p$, we compute
\begin{align}
K_{\rm our}(p) &= \min \big\{K : \text{ there exists $h\le \nicefrac2{(m+M)}$ such that }
F_{\rm our}(h,K,p)\le \varepsilon\big\},\\
K_{\rm DM}(p) &= \min \big\{K : \text{ there exists $h\le \nicefrac2{(m+M)}$ such that }
F_{\rm DM}(h,K,p)\le \varepsilon\big\}.
\end{align}
The curves of the functions $p\mapsto \log K_{\rm our}(p)$ and $p\mapsto \log K_{\rm DM}(p)$,
for $\varepsilon = 0.1$ and $\varepsilon = 0.3$ are plotted in~\Cref{figOne}. We can
deduce from these plots that the number of iterations yielded by our bound is more than
five times smaller than the number of iterations recommended by bound \eqref{l} of
\cite{Durmus2}.
\begin{figure}
\centerline{\includegraphics[width = 0.75\textwidth]{cropped_K_our_K_DM.pdf}}
\caption{The curves of the functions $p\mapsto \log K(p)$, where $K(p)$ is the number of steps---
derived either from our bound or from the bound \eqref{l} of \citep{Durmus2}---sufficing for
reaching the precision level $\varepsilon$ (for $\varepsilon = 0.1$ and $\varepsilon = 0.3$).}
\label{figOne}
\end{figure}
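The comparison summarized in \Cref{figOne} can be reproduced in a few lines. In the sketch below, $F_{\rm our}$ implements part (a) of \Cref{thOne} combined with \eqref{4} (taking $\|\boldsymbol{\theta}^{(0)}-\bar\boldsymbol{\theta}\|_2^2=p$), $F_{\rm DM}$ implements the right-hand side of \eqref{l}, and the minimal $K$ is obtained in closed form for each $h$ on a grid; the grid and the particular values $p=10$, $\varepsilon=0.3$ are implementation choices, not those used for the figure.

```python
import numpy as np

m, M = 4.0, 5.0
W0 = lambda p: np.sqrt(p + p / m)   # bound (4) with ||theta^(0) - bar_theta||^2 = p

def k_our(h, p, eps):
    """Smallest K with (1-mh)^K W0 + 1.82 (M/m) sqrt(hp) <= eps  (Theorem 1(a))."""
    bias = 1.82 * (M / m) * np.sqrt(h * p)
    if bias >= eps:
        return None
    return int(np.ceil(np.log(W0(p) / (eps - bias)) / (-np.log(1 - m * h))))

def k_dm(h, p, eps):
    """Smallest K certifying precision eps via bound (l) of Durmus and Moulines."""
    bias = (M * h * p / m) * (m + M) * (h + (m + M) / (2 * m * M)) \
        * (2 + M ** 2 * h / m + M ** 2 * h ** 2 / 6)
    if bias >= eps ** 2:
        return None
    rate = 1 - m * M * h / (m + M)
    return int(np.ceil(np.log(2 * W0(p) ** 2 / (eps ** 2 - bias)) / (-np.log(rate))))

hs = np.linspace(1e-4, 2 / (m + M), 500)
K_our = min(k for k in (k_our(h, 10, 0.3) for h in hs) if k is not None)
K_dm = min(k for k in (k_dm(h, 10, 0.3) for h in hs) if k is not None)
```

Even for this illustrative choice of $p$ and $\varepsilon$, the iteration count certified by \Cref{thOne} is markedly smaller than the one certified by \eqref{l}.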
\begin{remark}
Although the upper bound on $W_2(\nu_0,\pi)$ provided by~\eqref{4} is relevant for
understanding the order of magnitude of $W_2(\nu_0,\pi)$, it has limited applicability
since the distance $\|\boldsymbol{\theta}_0-\bar\boldsymbol{\theta}\|$ might be hard to evaluate. An attractive
alternative to that bound is the following\footnote{The second line follows from
strong convexity whereas the third line is a consequence of the two identities
$\int_{\mathbb{R}^p} \nabla f(\boldsymbol{\theta})\pi(d\boldsymbol{\theta}) = 0$ and $\int_{\mathbb{R}^p} \boldsymbol{\theta}^\top
\nabla f(\boldsymbol{\theta})\pi(d\boldsymbol{\theta}) = p$. These identities follow from the fundamental
theorem of calculus and the integration by parts formula, respectively.}:
\begin{align}
W_2(\nu_0,\pi)^2
& = \int_{\mathbb{R}^p} \|\boldsymbol{\theta}_0-\boldsymbol{\theta}\|_2^2\pi(d\boldsymbol{\theta})\\
& \le \frac2{m}\int_{\mathbb{R}^p}\Big(f(\boldsymbol{\theta}_0)-f(\boldsymbol{\theta})-\nabla f(\boldsymbol{\theta})^\top(\boldsymbol{\theta}_0-\boldsymbol{\theta})
\Big)\pi(d\boldsymbol{\theta})\\
& = \frac2{m}\Big(f(\boldsymbol{\theta}_0)-\int_{\mathbb{R}^p} f(\boldsymbol{\theta})\,\pi(d\boldsymbol{\theta})+p\Big).\label{init1}
\end{align}
If $f$ is lower bounded by some known constant, for instance if $f\ge 0$, the last inequality
provides the computable upper bound $W_2(\nu_0,\pi)^2 \le \frac2{m}\big(f(\boldsymbol{\theta}_0)+p\big)$.
\end{remark}
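For instance, for the standard Gaussian target (so that $f\ge 0$ and $m=1$, an illustrative assumption), the computable bound at the end of the remark is immediate to evaluate:

```python
import numpy as np

def w2_init_bound(f_at_theta0, m, p):
    """Computable bound from (init1): if f >= 0, then
    W_2(nu_0, pi)^2 <= (2/m) * (f(theta_0) + p)."""
    return np.sqrt((2.0 / m) * (f_at_theta0 + p))

# Standard Gaussian target: f(theta) = ||theta||^2 / 2 >= 0, m = 1, theta_0 = 0.
# The bound gives sqrt(2p), while the exact value W_2(delta_0, N(0, I_p)) is sqrt(p).
b = w2_init_bound(f_at_theta0=0.0, m=1.0, p=4)
```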
\section{Relation with optimization}
\label{secOpt}
We have already mentioned that the LMC algorithm is very close to the gradient descent
algorithm for computing the minimizer $\boldsymbol{\theta}^*$ of the function $f$. However, when we
compare the guarantees of~\Cref{thOne} with those available for the optimization problem,
we notice the following striking difference. The approximate computation of $\boldsymbol{\theta}^*$ requires
a number of steps of the order of $\log(1/\varepsilon)$ to reach the precision $\varepsilon$,
whereas, for reaching the same precision in sampling from $\pi$, the LMC algorithm needs a
number of iterations proportional to $(p/\varepsilon^2)\log (p/\varepsilon)$.
The goal of this section is to explain that this behavior of the LMC algorithm, at first
sight very disappointing, is in fact continuously connected to the exponential
convergence of the gradient descent.
The main ingredient for the explanation is that the function $f(\boldsymbol{\theta})$ and the function
$f_\tau(\boldsymbol{\theta}) = f(\boldsymbol{\theta})/\tau$ have the same point of minimum $\boldsymbol{\theta}^*$, for any real
number $\tau>0$. In addition, if we define the density function
$\pi_\tau(\boldsymbol{\theta})\propto \exp\big(-f_\tau(\boldsymbol{\theta})\big)$, then the average value
$$
\bar\boldsymbol{\theta}_\tau = \int_{\mathbb{R}^p } \boldsymbol{\theta}\, \pi_\tau(\boldsymbol{\theta})\,d\boldsymbol{\theta}
$$
tends to the minimum point $\boldsymbol{\theta}^*$ when $\tau$ goes to zero. Furthermore,
the distribution $\pi_\tau(d\boldsymbol{\theta})$ tends to the Dirac measure at $\boldsymbol{\theta}^*$.
Clearly, $f_\tau$ satisfies \eqref{1} with the constants $m_\tau = m/\tau$
and $M_\tau = M/\tau$. Therefore, on the one hand, we can apply to $\pi_\tau$
claim (a) of \Cref{thOne}, which tells us that if we choose $h = 1/M_\tau = \tau/M$,
then
\begin{equation}
\label{7}
W_2(\nu_K,\pi_\tau) \le \Big(1-\frac{m}{M}\Big)^K W_2(\delta_{\boldsymbol{\theta}^{(0)}},\pi_\tau)
+ 2\Big(\frac{M}{m}\Big)\Big(\frac{p\tau}{M}\Big)^{1/2}.
\end{equation}
On the other hand, the LMC algorithm with the step-size $h=\tau/M$ applied to
$f_\tau$ reads as
\begin{equation}
\label{8}
\boldsymbol{\vartheta}^{(k+1,h)} = \boldsymbol{\vartheta}^{(k,h)} - \frac1M \nabla f(\boldsymbol{\vartheta}^{(k,h)})+
\sqrt{\frac{2\tau}M}\;\boldsymbol{\xi}^{(k+1)};\qquad k=0,1,2,\ldots
\end{equation}
When the parameter $\tau$ goes to zero, the LMC sequence \eqref{8} tends
to the gradient descent sequence $\boldsymbol{\theta}^{(k)}$. Therefore, the limiting case
of \eqref{7} as $\tau\to 0$ reads as
\begin{equation}
\label{optimGuar}
\|\boldsymbol{\theta}^{(K)}-\boldsymbol{\theta}^*\|_2 \le \Big(1-\frac{m}{M}\Big)^K \|\boldsymbol{\theta}^{(0)}-\boldsymbol{\theta}^*\|_2,
\end{equation}
which is a well-known result in Optimization. This clearly shows that \Cref{thOne} is a natural
extension of the results of convergence from optimization to sampling.
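This continuous connection is easy to check numerically: recursion \eqref{8} with $\tau=0$ is exactly the gradient descent with step $1/M$, and for a quadratic $f$ (an illustrative assumption) the iterates satisfy the contraction \eqref{optimGuar}.

```python
import numpy as np

def lmc_tau_step(theta, grad_f, M, tau, rng):
    """One step of recursion (8): gradient step of size 1/M plus Gaussian
    noise of scale sqrt(2 * tau / M); tau = 0 gives pure gradient descent."""
    noise = np.sqrt(2.0 * tau / M) * rng.standard_normal(theta.shape)
    return theta - grad_f(theta) / M + noise

# Quadratic f(theta) = 0.5 * theta^T A theta with spectrum of A in [m, M] = [1, 4],
# so theta^* = 0 and the contraction factor is 1 - m/M = 3/4.
A = np.diag([1.0, 4.0])
m_, M_ = 1.0, 4.0
rng = np.random.default_rng(1)

theta = np.array([1.0, 1.0])
for _ in range(50):
    theta = lmc_tau_step(theta, lambda t: A @ t, M_, tau=0.0, rng=rng)
```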
\section{Guarantees for the noisy gradient version}
In some situations, the precise evaluation of the gradient $\nabla f(\boldsymbol{\theta})$
is computationally expensive or practically impossible, but it is possible to
obtain noisy evaluations of $\nabla f$ at any point. This is the setting considered
in the present section. More precisely, we assume that at any point $\boldsymbol{\vartheta}^{(k,h)}\in\mathbb{R}^p$
of the LMC algorithm, we can observe the value
\begin{equation}
\boldsymbol Y^{(k,h)} = \nabla f(\boldsymbol{\vartheta}^{(k,h)}) + \sigma\,\boldsymbol{\zeta}^{(k)},
\end{equation}
where $\{\boldsymbol{\zeta}^{(k)}:\,k=0,1,\ldots\}$ is a sequence of independent zero-mean random vectors
such that $\mathbf E[\|\boldsymbol{\zeta}^{(k)}\|_2^2]\le p$ and $\sigma>0$ is a deterministic noise level. Furthermore,
the noise vector $\boldsymbol{\zeta}^{(k)}$ is independent of the past states
$\boldsymbol{\vartheta}^{(1,h)},\ldots,\boldsymbol{\vartheta}^{(k,h)}$. The noisy LMC (nLMC) algorithm is then defined as
\begin{align}\label{9}
\boldsymbol{\vartheta}^{(k+1,h)} = \boldsymbol{\vartheta}^{(k,h)} - h \boldsymbol Y^{(k,h)}+ \sqrt{2h}\;\boldsymbol{\xi}^{(k+1)};\qquad k=0,1,2,\ldots
\end{align}
where $h>0$ and $\boldsymbol{\xi}^{(k+1)}$ are as in \eqref{2}. The next theorem extends the guarantees
of \Cref{thOne} to the noisy-gradient setting and to the nLMC algorithm.
\begin{theorem}\label{thTwo}
Let $\boldsymbol{\vartheta}^{(K,h)}$ be the $K$-th iterate of the nLMC algorithm \eqref{9} and
$\nu_K$ be its distribution. If the function $f$ satisfies condition \eqref{1}
and $h\le 2/M$ then the following claims hold:
\begin{enumerate}\itemsep=2pt
\item[{\rm (a)}] If $h\le \nicefrac2{(m+M)}$ then
\begin{align}\label{A}
W_2(\nu_K, \pi) \le
\Big(1-\frac{mh}{2}\Big)^{K} W_2(\nu_0,\pi) +
\Big(\frac{2hp}{m}\Big)^{1/2}\Big\{\sigma^2 + \frac{3.3M^2}{m}\Big\}^{1/2}.
\end{align}
\item[{\rm (b)}] If $h\ge \nicefrac2{(m+M)}$ then
$$
W_2(\nu_K, \pi)
\le \Big(\frac{Mh}{2}\Big)^{K} W_2(\nu_0,\pi) +
\Big(\frac{2h^2p}{2-Mh}\Big)^{1/2}\Big\{\sigma^2 + \frac{6.6M}{2-Mh}\Big\}^{1/2}.
$$
\end{enumerate}
\end{theorem}
To understand the potential scope of applicability of this result, let us consider a typical
statistical problem in which $f(\boldsymbol{\theta})$ is the negative log-likelihood of $n$ independent
random variables $X_1,\ldots,X_n$. Then, if $\ell(\boldsymbol{\theta},x)$ is the negative log-likelihood of one
variable, we have
$$
f(\boldsymbol{\theta}) = \sum_{i=1}^n \ell(\boldsymbol{\theta},X_i).
$$
In such a situation, if the Fisher information is nondegenerate, both $m$ and $M$ are
proportional to the sample size $n$. When the gradient of $\ell(\boldsymbol{\theta},X_i)$
with respect to parameter $\boldsymbol{\theta}$ is hard to compute, one can replace the evaluation
of $\nabla f(\boldsymbol{\vartheta}^{(k,h)})$ at each step $k$ by that of $Y_k= n \nabla_{\boldsymbol{\theta}} \ell
(\boldsymbol{\vartheta}^{(k,h)},X_k)$. Under suitable assumptions, this random vector satisfies
the conditions of \Cref{thTwo} with a $\sigma^2$ proportional to $n$. Therefore, if
we analyze the expression between curly brackets in \eqref{A}, we see that the
additional term, $\sigma^2$, due to the subsampling is
of the same order of magnitude as the term $3.3M^2/m$. Thus, using the subsampled
gradient in the LMC algorithm does not significantly deteriorate the
precision, while considerably reducing the computational burden.
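A sketch of the nLMC recursion \eqref{9}, with Gaussian gradient noise as one admissible choice satisfying $\mathbf E[\boldsymbol{\zeta}^{(k)}]=\boldsymbol 0$ and $\mathbf E[\|\boldsymbol{\zeta}^{(k)}\|_2^2]\le p$; the standard Gaussian target is again an illustrative assumption.

```python
import numpy as np

def nlmc(grad_f, theta0, h, K, sigma, rng):
    """Noisy-gradient LMC, recursion (9): at every step the exact gradient
    is replaced by Y = grad_f(theta) + sigma * zeta."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(K):
        zeta = rng.standard_normal(theta.shape)   # E[zeta] = 0, E||zeta||^2 = p
        y = grad_f(theta) + sigma * zeta
        theta = theta - h * y + np.sqrt(2.0 * h) * rng.standard_normal(theta.shape)
    return theta

# Target: standard Gaussian in dimension p (f(theta) = ||theta||^2 / 2, m = M = 1).
rng = np.random.default_rng(2)
p = 4
samples = np.array([nlmc(lambda t: t, np.zeros(p), h=0.05, K=400, sigma=0.5, rng=rng)
                    for _ in range(300)])
```

With a moderate noise level $\sigma$, the empirical moments of `samples` remain close to those of the target, consistently with \Cref{thTwo}.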
\section{Discussion and outlook}
We have established simple guarantees for the convergence of the Langevin Monte Carlo
algorithm under the Wasserstein metric. These guarantees are valid under strong convexity
and Lipschitz-gradient assumptions on the log-density function, for a step-size
smaller than $2/M$, where $M$ is the constant in the Lipschitz condition. These guarantees
are sharper than previously established analogous results and in perfect agreement with
the analogous results in Optimization. Furthermore, we have shown that similar results
can be obtained in the case where only noisy evaluations of the gradient are possible.
There are a number of interesting directions in which this work can be extended.
One relevant and closely related problem is the approximate computation of the volume
of a convex body, or, the problem of sampling from the uniform distribution on a
convex body. This problem has been analyzed by other Monte Carlo methods such as
``Hit and Run'' in a series of papers by \cite{Lovasz2,Lovasz1}, see also the more
recent paper \citep{Bubeck15}. Numerical experiments reported in \citep{Bubeck15} suggest
that the LMC algorithm might perform better in practice than ``Hit and Run''. It would
be interesting to have a theoretical result corroborating this observation.
Other interesting avenues for future research include the possible adaptation of the
Nesterov acceleration to the problem of sampling, extensions to second-order methods as
well as the alleviation of the strong-convexity assumptions. We also plan to investigate
in more depth the applications in high-dimensional statistics (see, for instance,
\cite{DalalyanTsybakov12a}). Some results in these directions are already obtained
in \citep{Dalalyan14,Durmus2,Durmus1}. It is a stimulating question whether we can
combine ideas of the present work and the aforementioned earlier results to get improved
guarantees.
\section{Proofs}
\label{secProof}
The first part of the proofs of \Cref{thOne} and \Cref{thTwo} is the same.
We begin this section with this common part and then proceed with the
proofs of the two theorems separately.
Let $\boldsymbol W$ be a $p$-dimensional Brownian motion such that $\boldsymbol W_{(k+1)h} - \boldsymbol W_{kh} = \sqrt{h}\,
\boldsymbol{\xi}^{(k+1)}$. We define the stochastic process $\boldsymbol L$
so that $\boldsymbol L_0\sim \pi$ and
\begin{align}\label{B}
\boldsymbol L_t &= \boldsymbol L_0 - \int_0^t \nabla f(\boldsymbol L_s)\,ds + \sqrt{2}\,\boldsymbol W_t,\qquad\forall\, t>0.
\end{align}
It is clear that this equation implies that
\begin{align}
\displaystyle\boldsymbol L_{(k+1)h}
&= \boldsymbol L_{kh} - \int_{kh}^{(k+1)h} \nabla f(\boldsymbol L_s)\,ds + \sqrt{2}\,(\boldsymbol W_{(k+1)h}-\boldsymbol W_{kh})\\
&= \boldsymbol L_{kh} - \int_{kh}^{(k+1)h} \nabla f(\boldsymbol L_s)\,ds + \sqrt{2h}\,\boldsymbol{\xi}^{(k+1)}.
\end{align}
Furthermore, $\{\boldsymbol L_t:t\ge 0\}$ is a diffusion process having $\pi$ as the stationary
distribution. Since the initial value $\boldsymbol L_0$ is drawn from $\pi$, we have $\boldsymbol L_t\sim \pi$
for every $t\ge 0$.
Let us denote $\boldsymbol{\Delta}_k = \boldsymbol L_{kh}-\boldsymbol{\vartheta}^{(k,h)}$ and $I_k = (kh,(k+1)h]$. We have
\begin{align}
\boldsymbol{\Delta}_{k+1}
& = \boldsymbol{\Delta}_k + h \boldsymbol Y^{(k,h)} - \int_{I_k}\nabla f(\boldsymbol L_t)\,dt \\
& = \boldsymbol{\Delta}_k - h\big(\underbrace{\nabla f(\boldsymbol{\vartheta}^{(k,h)}+\boldsymbol{\Delta}_k)-\nabla f(\boldsymbol{\vartheta}^{(k,h)})
}_{:=\boldsymbol U_k}\big)+ \sigma h\boldsymbol{\zeta}^{(k)}
-\underbrace{\int_{I_k}\big(\nabla f(\boldsymbol L_t) - \nabla f(\boldsymbol L_{kh})\big)\,dt}_{:=\boldsymbol V_k}.
\end{align}
In view of the triangle inequality, we get
\begin{equation}
\|\boldsymbol{\Delta}_{k+1} \|_2 \le \|\boldsymbol{\Delta}_k -h \boldsymbol U_k + \sigma h \boldsymbol{\zeta}^{(k)}\|_2 + \|\boldsymbol V_k\|_2.\label{C}
\end{equation}
For the first norm in the right hand side, we can use the following inequalities:
\begin{align}
\mathbf E[\|\boldsymbol{\Delta}_k -h \boldsymbol U_k + \sigma h \boldsymbol{\zeta}^{(k)}\|_2^2]
& = \mathbf E[\|\boldsymbol{\Delta}_k -h \boldsymbol U_k\|_2^2] + \mathbf E[\|\sigma h \boldsymbol{\zeta}^{(k)}\|_2^2] \\
& \le \mathbf E[\|\boldsymbol{\Delta}_k -h \boldsymbol U_k\|_2^2] + \sigma^2 h^2 p.\label{D}
\end{align}
We now need three technical lemmas, whose proofs are postponed to \Cref{ssecLem}.
\begin{lem}\label{lemA}
Let us introduce the constant $\gamma$ that equals $|1-mh|$ if
$h\le \nicefrac2{(m+M)}$ and $|1-Mh|$ if $h\ge \nicefrac2{(m+M)}$. (Since $h\in(0,\nicefrac2M)$,
this value $\gamma$ satisfies $0< \gamma <1$). It holds that
\begin{align}
\|\boldsymbol{\Delta}_k -h \boldsymbol U_k\|_2
& \le \gamma\|\boldsymbol{\Delta}_k\|_2.\label{E}
\end{align}
\end{lem}
\begin{lem}\label{lemB}
If the function $f$ is continuously differentiable and the gradient of $f$ is Lipschitz
with constant $M$, then
\begin{equation}
\int_{\mathbb{R}^p} \|\nabla f(\boldsymbol x)\|_2^2\,\pi(\boldsymbol x)\,d\boldsymbol x \le Mp.
\end{equation}
\end{lem}
\begin{lem}\label{lemC}
If the function $f$ has a Lipschitz-continuous gradient with the Lipschitz constant $M$,
$\boldsymbol L$ is the Langevin diffusion \eqref{B} and $\boldsymbol V(a) =
\int_a^{a+h}\big(\nabla f(\boldsymbol L_t)-\nabla f(\boldsymbol L_a)\big)\,dt$ for some $a\ge 0$, then
\begin{align}
\big(\mathbf E[\|\boldsymbol V(a)\|^2_2]\big)^{1/2} &\le \bigg(\frac13 h^4 M^{3}p\bigg)^{1/2} + (h^3p)^{1/2}M .
\end{align}
\end{lem}
This completes the common part of the proof. We present below the proofs of the theorems.
\subsection{Proof of \Cref{thOne}}
Using \eqref{C} with $\sigma=0$ and \Cref{lemA}, we get
\begin{equation}
\|\boldsymbol{\Delta}_{k+1} \|_2 \le \gamma \|\boldsymbol{\Delta}_k\|_2 + \|\boldsymbol V_k\|_2,\qquad \forall k\in\mathbb{N}.
\end{equation}
In view of the Minkowski inequality and \Cref{lemC}, this yields
\begin{align}
(\mathbf E[\|\boldsymbol{\Delta}_{k+1} \|_2^2])^{1/2}
&\le \gamma (\mathbf E[\|\boldsymbol{\Delta}_k\|_2^2])^{1/2} + (\mathbf E[\|\boldsymbol V_k\|_2^2])^{1/2}\\
&\le \gamma (\mathbf E[\|\boldsymbol{\Delta}_k\|_2^2])^{1/2} + 1.82(h^3 M^2 p)^{1/2},
\end{align}
where we have used the fact that $h\le 2/M$.
Using this inequality iteratively with $k-1,\ldots,0$ instead of $k$, we get
\begin{align}
(\mathbf E[\|\boldsymbol{\Delta}_{k+1} \|_2^2])^{1/2}
&\le \gamma^{k+1} (\mathbf E[\|\boldsymbol{\Delta}_0\|_2^2])^{1/2} + 1.82 (h^3 M^2 p)^{1/2}
\sum_{j=0}^{k} \gamma^j\\
&\le \gamma^{k+1} (\mathbf E[\|\boldsymbol{\Delta}_0\|_2^2])^{1/2} + 1.82 (h^3 M^2 p)^{1/2}
(1-\gamma)^{-1}.\label{F}
\end{align}
Since $\boldsymbol{\Delta}_{k+1} = \boldsymbol L_{(k+1)h} - \boldsymbol{\vartheta}^{(k+1,h)}$ and $\boldsymbol L_{(k+1)h}\sim \pi$,
we readily get the inequality $W_2(\nu_{k+1},\pi)\le \big(\mathbf E[\|\boldsymbol{\Delta}_{k+1} \|^2_2]\big)^{1/2}$.
In addition, one can choose $\boldsymbol L_0$ so that $W_2(\nu_0,\pi)=\big(\mathbf E[\|\boldsymbol{\Delta}_{0} \|^2_2]\big)^{1/2}$.
Using these relations and substituting $\gamma$ by its expression in \eqref{F},
we get the two claims of the theorem.
\subsection{Proof of \Cref{thTwo}}
Using \eqref{C}, \eqref{D} and \Cref{lemA}, we get (for every $t>0$)
\begin{align}
\mathbf E[\|\boldsymbol{\Delta}_{k+1} \|_2^2]
&= \mathbf E[\|\boldsymbol{\Delta}_k -h \boldsymbol U_k - \boldsymbol V_k\|_2^2] + \mathbf E[\|\sigma h \boldsymbol{\zeta}^{(k)}\|_2^2]\\
&\le (1+t)\mathbf E[\|\boldsymbol{\Delta}_k -h \boldsymbol U_k\|_2^2] + (1+t^{-1})\mathbf E[\|\boldsymbol V_k\|_2^2]+ \sigma^2 h^2 p \\
&\le (1+t)\gamma^2\mathbf E[\|\boldsymbol{\Delta}_k\|_2^2] + (1+t^{-1})\mathbf E[\|\boldsymbol V_k\|_2^2]+ \sigma^2 h^2 p .
\end{align}
Since $h\le 2/M$, \Cref{lemC} implies that
\begin{align}
\mathbf E[\|\boldsymbol{\Delta}_{k+1} \|_2^2]
&\le (1+t)\gamma^2\mathbf E[\|\boldsymbol{\Delta}_k\|_2^2] + (1+t^{-1})(1.82)^2h^3M^2p + \sigma^2 h^2 p
\end{align}
for every $t>0$. Let us choose $t = (\frac{1+\gamma}{2\gamma})^2 - 1$ so that
$(1+t)\gamma^2 = (\frac{1+\gamma}{2})^2$. By recursion, this leads to
\begin{align}
W_2^2(\nu_{k+1},\pi)
&\le \Big(\frac{1+\gamma}{2}\Big)^{2(k+1)} W_2^2(\nu_0,\pi) +
\Big(\frac{2}{1-\gamma}\Big)\Big\{\sigma^2 h^2 p + (1+t^{-1})(1.82)^2h^3M^2p\Big\}.
\end{align}
In the case $h\le 2/(m+M)$, $\gamma = 1-mh$ and we get $\frac{1+\gamma}{2} = 1-\frac12 mh$. Furthermore,
\begin{align}
(1+t^{-1})h^3M^2p & = \frac{(1+\gamma)^2h^3M^2p}{(1-\gamma)(1+3\gamma)}\le
\frac{h^2M^2p}{m}.
\end{align}
This readily yields
\begin{align}
W_2(\nu_{k+1},\pi)
&\le \Big(1-\frac{mh}{2}\Big)^{k+1} W_2(\nu_0,\pi) +
\Big(\frac{2hp}{m}\Big)^{1/2}\Big\{\sigma^2 + \frac{3.3M^2}{m}\Big\}^{1/2}.
\end{align}
Similarly, in the case $h\ge 2/(m+M)$, $\gamma = Mh-1$ and we get
$\frac{1+\gamma}{2} = \frac12 Mh$. Furthermore,
\begin{align}
(1+t^{-1})h^3M^2p & = \frac{(1+\gamma)^2h^3M^2p}{(1-\gamma)(1+3\gamma)}\le
\frac{h^3M^2p}{2-Mh} \le \frac{2h^2Mp}{2-Mh}.
\end{align}
This implies the inequality
\begin{align}
W_2(\nu_{k+1},\pi)
&\le \Big(\frac{Mh}{2}\Big)^{k+1} W_2(\nu_0,\pi) +
\Big(\frac{2h^2p}{2-Mh}\Big)^{1/2}\Big\{\sigma^2 + \frac{6.6M}{2-Mh}\Big\}^{1/2},
\end{align}
which completes the proof.
\subsection{Proofs of lemmas}
\label{ssecLem}
\begin{proof}[Proof of \Cref{lemA}]
Since $f$ is $m$-strongly convex and has an $M$-Lipschitz gradient, it satisfies the inequality
\begin{equation}
\boldsymbol{\Delta}^\top \big(\nabla f(\boldsymbol{\vartheta}+\boldsymbol{\Delta}) - \nabla f(\boldsymbol{\vartheta})\big)
\ge \frac{mM}{m+M}\|\boldsymbol{\Delta}\|_2^2 + \frac1{m+M} \|\nabla f(\boldsymbol{\vartheta}+\boldsymbol{\Delta}) - \nabla f(\boldsymbol{\vartheta})\|_2^2,
\end{equation}
for all $\boldsymbol{\Delta},\boldsymbol{\vartheta}\in\mathbb{R}^p$. Therefore, simple algebra yields
\begin{align}
\|\boldsymbol{\Delta}_k -h \boldsymbol U_k\|_2^2
& = \|\boldsymbol{\Delta}_k\|_2^2 - 2h \boldsymbol{\Delta}_k^\top \boldsymbol U_k+ h^2 \|\boldsymbol U_k\|_2^2 \\
& = \|\boldsymbol{\Delta}_k\|_2^2 - 2h \boldsymbol{\Delta}_k^\top \big(\nabla f(\boldsymbol{\vartheta}^{(k,h)}+\boldsymbol{\Delta}_k)
-\nabla f(\boldsymbol{\vartheta}^{(k,h)})\big) + h^2 \|\boldsymbol U_k\|_2^2 \\
& \le \|\boldsymbol{\Delta}_k\|_2^2 - \frac{2h mM}{m+M} \|\boldsymbol{\Delta}_k\|_2^2 -
\frac{2h}{m+M} \|\boldsymbol U_k\|_2^2 + h^2 \|\boldsymbol U_k\|_2^2 \\
& = \Big(1 - \frac{2h mM}{m+M}\Big) \|\boldsymbol{\Delta}_k\|_2^2 +
h\Big(h - \frac{2}{m+M}\Big)\|\boldsymbol U_k\|_2^2.\label{G}
\end{align}
Note that, thanks to the strong convexity of $f$, the inequality
$\|\boldsymbol U_k\|_2 = \|\nabla f(\boldsymbol{\vartheta}^{(k,h)}+\boldsymbol{\Delta}_k) - \nabla f(\boldsymbol{\vartheta}^{(k,h)}) \|_2\ge
m\|\boldsymbol{\Delta}_k\|_2$ is true. If $h\le \nicefrac2{(m+M)}$, this inequality can be
combined with \eqref{G} to obtain
\begin{align}
\|\boldsymbol{\Delta}_k -h \boldsymbol U_k\|_2^2
& \le (1 - h m)^2 \|\boldsymbol{\Delta}_k\|_2^2,\qquad \text{if}\qquad h\le \nicefrac2{(m+M)}.\label{H}
\end{align}
Similarly, when $h\ge \nicefrac2{(m+M)}$, we can use the Lipschitz property of $\nabla f$
to infer that $\|\boldsymbol U_k\|_2 \le M\|\boldsymbol{\Delta}_k\|_2$. Combining with \eqref{G}, this
yields
\begin{align}
\|\boldsymbol{\Delta}_k -h \boldsymbol U_k\|_2^2
& \le (h M-1)^2 \|\boldsymbol{\Delta}_k\|_2^2,\qquad \text{if}\qquad h\ge \nicefrac2{(m+M)}.\label{I}
\end{align}
Thus, we have checked that \eqref{E} is true for every $h\in(0,\nicefrac2M)$.
\end{proof}
\begin{proof}[Proof of \Cref{lemB}]
To simplify notation, we prove the lemma for $p=1$. The function
$x\mapsto f'(x)$, being Lipschitz continuous, is differentiable almost everywhere.
Furthermore, it is clear that $|f''(x)|\le M$ for every $x$ for which
this second derivative exists. The result of \citep[Theorem 7.20]{rudin87}
implies that
\begin{equation}
f'(x)-f'(0) = \int_0^x f''(y)\,dy.
\end{equation}
Therefore, using $f'(x)\,\pi(x) = -\pi'(x)$, we get
\begin{align}
\int_{\mathbb{R}} f'(x)^2\,\pi(x)\,dx
& = f'(0)\int_{\mathbb{R}} f'(x)\,\pi(x)\,dx +
\int_{\mathbb{R}}\Big(\int_0^x f''(y)\,dy\Big) f'(x)\,\pi(x)\,dx \\
& = -f'(0)\int_{\mathbb{R}} \pi'(x)\,dx -
\int_{\mathbb{R}}\Big(\int_0^x f''(y)\,dy\Big) \pi'(x)\,dx \\
& = -\int_{0}^\infty\int_0^x f''(y)\,\pi'(x)\,dy\,dx
+\int_{-\infty}^0\int_x^0 f''(y)\,\pi'(x)\,dy\,dx.
\end{align}
In view of Fubini's theorem, we arrive at
\begin{align}
\int_{\mathbb{R}} f'(x)^2\,\pi(x)\,dx & =
\int_{0}^\infty f''(y)\,\pi(y)\,dy
+\int_{-\infty}^0 f''(y)\,\pi(y)\,dy\le M.
\end{align}
This completes the proof.
\end{proof}
\begin{proof}[Proof of \Cref{lemC}]
Since the process $\boldsymbol L$ is stationary, $\boldsymbol V(a)$ has the same distribution as $\boldsymbol V(0)$. For this
reason, it suffices to prove the claim of the lemma for $a=0$ only.
Using the Lipschitz continuity of $\nabla f$, we get
\begin{align}
\mathbf E[\|\boldsymbol V(0)\|^2_2]
& = \mathbf E\Big[\Big\|\int_{0}^{h}\big(\nabla f(\boldsymbol L_t) - \nabla f(\boldsymbol L_{0})\big)\,dt\Big\|^2_2\Big]\\
& \le h\int_{0}^{h}\mathbf E\big[\big\|\nabla f(\boldsymbol L_t) - \nabla f(\boldsymbol L_{0})\big\|^2_2\big]\,dt\\
& \le hM^2\int_{0}^{h}\mathbf E\big[\big\|\boldsymbol L_t - \boldsymbol L_{0}\big\|^2_2\big]\,dt.
\end{align}
Combining this inequality with the stationarity of $\boldsymbol L_t$, we arrive at
\begin{align}
\Big(\mathbf E[\|\boldsymbol V(0)\|^2_2]\Big)^{1/2}
& \le \bigg(hM^2\int_{0}^{h}\mathbf E\big[\big\|-\int_{0}^t \nabla f(\boldsymbol L_s)\,ds +
\sqrt{2}\,\boldsymbol W_{t}\big\|^2_2\big]\,dt\bigg)^{1/2}\\
& \le \bigg(hM^2\int_{0}^h\mathbf E\big[\big\|\int_{0}^t \nabla f(\boldsymbol L_s)\,ds\big\|^2_2\big]\,dt\bigg)^{1/2}
+ \bigg(2hpM^2\int_0^h t\,dt\bigg)^{1/2}\\
& \le \bigg(hM^2\mathbf E\big[\big\|\nabla f(\boldsymbol L_0)\big\|^2_2\big]\int_0^h t^2\,dt\bigg)^{1/2}
+ \bigg(2hpM^2\int_0^h t\,dt\bigg)^{1/2}\\
& = \bigg(\frac13 h^4M^2 \mathbf E\big[\big\|\nabla f(\boldsymbol L_0)\big\|^2_2\big]\bigg)^{1/2}
+ \big(h^3 M^2p\big)^{1/2}.
\end{align}
To complete the proof, it suffices to apply \Cref{lemB}.
\end{proof}
{\renewcommand{\addtocontents}[2]{}
\acks{
The work of the author was partially supported by the grant
Investissements d'Avenir (ANR-11-IDEX-0003/Labex Ecodec/ANR-11-LABX-0047). The author would
like to thank Nicolas Brosse, who suggested an improvement in \Cref{thTwo}.
}%
% [Source metadata: arXiv:1704.04752, "Further and stronger analogy between sampling and
% optimization: Langevin Monte Carlo and gradient descent", Statistics Theory (math.ST), 2017.]
% https://arxiv.org/abs/math/0606646
% A quasisymmetric function for matroids
%
% Abstract: A new isomorphism invariant of matroids is introduced, in the form of a
% quasisymmetric function. This invariant (1) defines a Hopf morphism from the Hopf
% algebra of matroids to the quasisymmetric functions, which is surjective if one uses
% rational coefficients, (2) is a multivariate generating function for integer weight
% vectors that give minimum total weight to a unique base of the matroid, (3) is
% equivalent, via the Hopf antipode, to a generating function for integer weight
% vectors which keeps track of how many bases minimize the total weight, (4) behaves
% simply under matroid duality, (5) has a simple expansion in terms of P-partition
% enumerators, and (6) is a valuation on decompositions of matroid base polytopes.
% This last property leads to an interesting application: it can sometimes be used to
% prove that a matroid base polytope has no decompositions into smaller matroid base
% polytopes. Existence of such decompositions is a subtle issue arising in work of
% Lafforgue, where lack of such a decomposition implies the matroid has only a finite
% number of realizations up to projective equivalence.
\section{Definition as generating function}
\label{definition-section}
We begin by defining the new matroid invariant. For matroid terminology
undefined here, we refer the reader to some of the standard references, such
as \cite{CrapoRota, Oxley, Welsh, White1, White2, White3}.
Let $M=(E,{\mathcal B})$ be a matroid on ground set $E$, with bases ${\mathcal B}={\mathcal B}(M)$.
Let $\PP:=\{1,2,3,\ldots\}$ be the positive integers.
We will say that a weighting function $f: E \rightarrow \PP$ is {\it $M$-generic} if
the minimum $f$-weight $f(B):=\sum_{e \in B} f(e)$ among all bases $B$ of $M$
is achieved by a {\it unique} base $B \in {\mathcal B}(M)$. For example, it is a standard
exercise in matroid theory (see, e.g. \cite[Exer. 1.8.4]{Oxley}) to show that
$f$ is $M$-generic if $f$ is {\it injective}, that is, if $f$ assigns all distinct weights.
\begin{defn} \rm \
\label{definition-of-F}
Given a matroid $M$ as above,
define a power series $F(M,\xx)$ in countably many variables
$x_1,x_2,\ldots$ as the generating function for $M$-generic weighting functions $f$
according to number of times $f$ takes on each value in $\PP$. That is,
\begin{equation}
\label{definition-equation}
F(M,\xx):=\sum_{\substack{ M{\textrm{-generic }} \\ f: E \rightarrow \PP}} \xx_{f}
\end{equation}
where $\xx_{f}:=\prod_{e \in E} x_{f(e)}$.
\end{defn}
One of the defining properties of a matroid \cite[Theorem 1.8.5]{Oxley} is that
an $f$-minimizing base may be found by {\it (Kruskal's) greedy algorithm}:
\begin{quote}
Construct a sequence of independent sets
$$
\emptyset =: I_0, I_1, \ldots, I_{{\mathrm{rank}}(M)}
$$
by defining $I_j:=I_{j-1} \cup \{e\}$ where $e$ is any
element in $E$ having minimum weight $f(e)$ among those for which $I_{j-1} \cup \{e\}$
is independent. Then $I_{{\mathrm{rank}}(M)}$ is an $f$-minimizing base of $M$.
\end{quote}
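For concreteness, the greedy algorithm can be sketched in a few lines of Python. This is our own illustration, not a construction from the paper: a matroid is represented naively by the explicit list of its bases, and independence of a set is tested by containment in some base.

```python
from itertools import combinations

def greedy_min_base(ground, bases, f):
    """Kruskal's greedy algorithm: repeatedly add a minimum-f-weight element
    that keeps the current set independent (= contained in some base)."""
    indep = lambda S: any(S <= B for B in bases)
    I, rank = frozenset(), len(next(iter(bases)))
    while len(I) < rank:
        e = min((e for e in ground if e not in I and indep(I | {e})), key=f)
        I = I | {e}
    return I

# Example (ours): the uniform matroid U_{2,3} on {1,2,3}.
ground = {1, 2, 3}
bases = [frozenset(B) for B in combinations(sorted(ground), 2)]
f = {1: 5, 2: 1, 3: 3}.__getitem__

B = greedy_min_base(ground, bases, f)
assert sum(map(f, B)) == min(sum(map(f, Bp)) for Bp in bases)  # f-minimizing
```

This list-of-bases representation is exponentially inefficient but suffices for the small sanity checks used throughout.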
\section{Quasisymmetry}
\label{quasisymmetry-section}
We recall \cite{Gessel}, \cite[\S 7.19]{Stanley-EC2}
what it means for a power series $f(\xx)$ in a linearly ordered variable set
$x_1,x_2,\ldots$ to be {\it quasisymmetric}: $f$ must have bounded degree, and for any fixed composition
$(\alpha_1,\ldots,\alpha_k)$ in $\PP^k$,
the coefficients of the monomials
$x_{i_1}^{\alpha_1} x_{i_2}^{\alpha_2} \cdots x_{i_k}^{\alpha_k}$ with $i_1 < i_2 < \cdots < i_k$
in $f$ are all equal.
Put differently, $f$ is quasisymmetric if and only if it is a (finite) linear combination of the
{\it monomial} quasisymmetric functions\footnote{While there is a danger of confusion between
matroids $M$ and monomial quasisymmetric functions $M_{\alpha}$, the difference will
always be clear by the context.} indexed by compositions
$\alpha=(\alpha_1,\ldots,\alpha_k)$:
$$
M_\alpha:=\sum_{ 1 \leq i_1 < i_2 < \cdots < i_k}
x_{i_1}^{\alpha_1} x_{i_2}^{\alpha_2} \cdots x_{i_k}^{\alpha_k}.
$$
\begin{proposition}
\label{quasisymmetry}
For any matroid $M$, the power series $F(M,\xx)$ is {\it quasisymmetric}.
\end{proposition}
\begin{proof}
This follows from the fact that the $f$-minimum bases can all be found by the
greedy algorithm, and this algorithm makes all of its decisions based only on
the {\it relative ordering} and equality of various weights $f(e)$, not on their actual values.
\end{proof}
\begin{example} \rm \
\label{tiny-examples}
When $|E|=0$, there is only one matroid $M_\emptyset$, having rank $0$ and exactly one base,
the empty base $\emptyset$. As there is only one function $f$ from the empty set $E$ into
$\PP$, and this $f$ has no coordinates (!), we should decree
$\xx_f=1$ (as the empty product is $1$). Hence $F(M_\emptyset,\xx)=1$.
There are two matroids with $|E|=1$, namely $M_\isthmus$ of rank $1$ having a single base
$\{e\}$, and $M_\looop$ of rank $0$ having a single base $\emptyset$.
Every $f:E \rightarrow \PP$ is generic for either of these, so that
$$
F(M_\isthmus,\xx) = F(M_\looop,\xx) = x_1 + x_2 + x_3 + \cdots = M_1.
$$
\end{example}
The enumerative information recorded in $F(M,\xx)$ is data about optimizing
weight functions on the bases of $M$. An obvious specialization counts $M$-generic
weight functions that take on only a limited number of distinct weight values.
\begin{defn} \rm \
For a positive integer $m$, let $[m]:=\{1,2,\ldots,m\}$, and define
$$
\begin{aligned}
\phi(M,m)
&:= F(M, \xx)\,\big|_{\substack{x_1 = x_2 =\cdots = x_m =1 \\ x_{m+1}=x_{m+2}= \cdots =0}} \\
& = |\{ M\textrm{-generic }f:E \rightarrow [m] \}|.
\end{aligned}
$$
\end{defn}
Since $F(M,\xx)$ is quasisymmetric of bounded degree, $\phi(M,m)$ is a polynomial function of $m$.
When $m$ is large, almost all weight functions $f:E \rightarrow [m]$ are injective and hence
$M$-generic, so the polynomial expansion of $\phi(M,m)$ begins
$$
\phi(M,m) = m^n + O(m^{n-1})
$$
where $n:=|E|$.
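This leading-term claim is easy to check by brute force on a tiny example. The Python sketch below (our own; matroids are again given by their base lists) treats $U_{1,2}$ with bases $\{1\}$ and $\{2\}$, where $f$ is generic exactly when $f(1)\neq f(2)$, so $\phi(M,m)=m^2-m=m^n+O(m^{n-1})$.

```python
from itertools import product

def phi(ground, bases, m):
    """phi(M,m): number of M-generic f: E -> [m], i.e. weightings for which
    a unique base attains the minimum total weight."""
    ground = sorted(ground)
    count = 0
    for vals in product(range(1, m + 1), repeat=len(ground)):
        f = dict(zip(ground, vals))
        w = [sum(f[e] for e in B) for B in bases]
        count += w.count(min(w)) == 1
    return count

# U_{1,2}: bases {1} and {2}; f is generic iff f(1) != f(2), so
# phi(M,m) = m^2 - m = m^n + O(m^{n-1}) with n = 2.
bases = [frozenset({1}), frozenset({2})]
for m in range(1, 6):
    assert phi({1, 2}, bases, m) == m * m - m
```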
In Section~\ref{antipodes}, our analysis of the behavior of $F(M,\xx)$ under
the Hopf algebra antipode on quasisymmetric functions will
imply an interesting reciprocity result for the polynomial $\phi(M,m)$.
In the remainder of this paper, we will suppress the $\xx$ in $F(M,\xx)$ and write $F(M)$
unless there is a need to consider the variables.
\section{Hopf algebra morphism}
\label{Hopf-section}
There is a known Hopf algebra structure built from matroids
\cite{CrapoSchmitt1, CrapoSchmitt2, CrapoSchmitt3, Schmitt}
and a perhaps better-known Hopf algebra of quasisymmetric functions \cite[\S 4]{Gessel}.
The goal of this section is to show that the invariant $F(M)$ defines a Hopf morphism
between them.
Let ${\mathcal Mat }$ be the free $\ZZ$-module
consisting of formal $\ZZ$-linear combinations
of basis elements $[M]$ indexed by isomorphism classes of matroids $M$. Endow ${\mathcal Mat }$ with
a product and coproduct extended $\ZZ$-linearly from the following definitions on
basis elements:
\begin{equation*}
\begin{aligned}
\, [ M_1 ] \cdot [ M_2 ] & := [M_1 \oplus M_2] \\
\Delta [M] :& = \sum_{A \subseteq E} [M|_A] \otimes [M/A]
\end{aligned}
\end{equation*}
\noindent
where $M_1 \oplus M_2$ is the {\it direct sum} of the matroids $M_1, M_2$, and
$M|_A, M/A$ denote the {\it restriction} of
$M$ to $A$ and the {\it contraction} (or {\it quotient}) of $M$ by $A$,
respectively. One has a $\ZZ$-module direct sum decomposition
${\mathcal Mat } = \bigoplus_{n \geq 0} {\mathcal Mat }_n$, where ${\mathcal Mat }_n$ denotes the submodule spanned
by the basis elements $[M]$ for which the ground set $E$ of $M$
has cardinality $|E|=n$. One can then
easily check that this product and coproduct make
${\mathcal Mat }$ into a graded, connected Hopf algebra over $\ZZ$ which is commutative, but non-cocommutative.
Here the unit is $[M_\emptyset]$.
Let ${\mathcal{QS}ym }$ (or ${\mathcal{QS}ym }({\bf x})$)
denote the Hopf algebra of quasisymmetric functions in the linearly ordered
variable set $x_1,x_2,\ldots$ and having coefficients in $\ZZ$. The product in ${\mathcal{QS}ym }$ is
inherited from the formal power series ring $\ZZ[[x_1,x_2,\ldots]]$. The coproduct may be
described as follows. A quasisymmetric function $f(\xx)$ defines a unique quasisymmetric
function $f(\xx,\yy)$ in the linearly ordered variable set
$$
x_1 < x_2 < \cdots < y_1 < y_2 < \cdots
$$
by insisting that $f(\xx, {\mathbf 0})=f(\xx)$. In other words, for any $i_1 < \cdots < i_k$ and
$j_1 < \cdots < j_\ell$, the coefficient of
$x_{i_1}^{\alpha_1} \cdots x_{i_k}^{\alpha_k} y_{j_1}^{\beta_1} \cdots y_{j_\ell}^{\beta_\ell}$
in $f(\xx,\yy)$ is defined to be the coefficient of
$x_1^{\alpha_1} \cdots x_k^{\alpha_k} x_{k+1}^{\beta_1} \cdots x_{k+\ell}^{\beta_\ell}$
in $f(\xx)$. Consider the injective map
$$
i: \ZZ[[x_1,x_2,\ldots]] \otimes \ZZ[[y_1,y_2,\ldots]] \rightarrow \ZZ[[x_1,x_2,\ldots,y_1,y_2,\ldots]]
$$
which sends $f(\xx) \otimes g(\yy)$ to $f(\xx)g(\yy)$. The image $i({\mathcal{QS}ym }({\bf x}) \otimes {\mathcal{QS}ym }({\bf y}) )$
contains the quasisymmetric functions ${\mathcal{QS}ym }({\bf x, y})$, that is, there
is a unique expansion $f(\xx,\yy) = \sum_i f_i(\xx) g_i(\yy)$ for any quasisymmetric function $f(\xx,\yy)$.
This defines the coproduct $\Delta: {\mathcal{QS}ym } \rightarrow {\mathcal{QS}ym } \otimes {\mathcal{QS}ym }$.
Grading ${\mathcal{QS}ym }$ by the usual notion of degree, one can check that
${\mathcal{QS}ym }$ becomes a graded, connected Hopf algebra over $\ZZ$ which is commutative, but non-cocommutative.
\begin{theorem}
\label{morphism-theorem}
The map $F$
$$ \begin{matrix}
{\mathcal Mat } &\rightarrow &{\mathcal{QS}ym } \\
[M] &\mapsto &F(M)
\end{matrix}
$$
is a morphism of Hopf algebras.
\end{theorem}
\begin{proof}
Example~\ref{tiny-examples} shows that $F$ sends the unit $M_\emptyset$ of ${\mathcal Mat }$
to the unit $1$ of ${\mathcal{QS}ym }$. The fact that $F$ preserves degree
shows that it preserves the counit.
The fact that $F$ preserves the product structures follows because the bases of
$M_1 \oplus M_2$ are the disjoint unions $B_1 \sqcup B_2$ of a base $B_1, B_2$
from each. This implies that $f:E_1 \sqcup E_2 \rightarrow \PP$ is
$(M_1 \oplus M_2)$-generic if and only if $f|_{E_i}$ is $M_i$-generic for $i=1,2$.
The fact that $F$ preserves the coalgebra structure is somewhat more interesting.
Unravelling the definitions, this amounts to checking the following identity:
\begin{equation}
\label{coalgebra-map}
F(M, \xx, \yy) = \sum_{ A \subseteq E} F(M|_A, \xx) \, F(M/A, \yy).
\end{equation}
The left side of \eqref{coalgebra-map} has the following interpretation. Linearly
order the disjoint union $\PP \sqcup \PP$ as follows:
$$
1 < 2 < 3 < \cdots < 1' < 2' < 3' <\cdots
$$
Given a weight function $f: E \rightarrow \PP \sqcup \PP$, define
$(\xx\yy)_f:=\prod_{e \in E} z_e$ where
$$
z_e := \begin{cases}
x_i & \textrm{ if } f(e) = i \textrm{ (with no prime)} \\
y_i & \textrm{ if } f(e) = i'
\end{cases}.
$$
Then
$$
F(M, \xx, \yy) = \sum_{\substack {M\textrm{-generic}\\f: E \rightarrow \PP \sqcup \PP}} (\xx\yy)_f.
$$
On the other hand, the right side of \eqref{coalgebra-map} expands to
$\sum_{(A,f_1,f_2)} \xx_{f_1} \yy_{f_2}$, where the sum ranges over all triples
$(A,f_1,f_2)$ in which
\begin{enumerate}
\item[$\bullet$] $A$ is a subset of $E$,
\item[$\bullet$] $f_1: A \rightarrow \PP$ is $M|_A$-generic, and
\item[$\bullet$] $f_2: E \backslash A \rightarrow \PP$ is $M/A$-generic.
\end{enumerate}
There is an obvious association $f \mapsto (A,f_1,f_2)$ defined by
$$
\begin{aligned}
A &:= \{e \in E: f(e) \textrm{ has no prime }\} \\
f_1 &:= f|_A \\
f_2 &:= f|_{E \backslash A}.
\end{aligned}
$$
It only remains to check that $f$ is $M$-generic if and only if
$f|_A$ is $M|_A$-generic and $f|_{E \backslash A}$ is $M/A$-generic. This follows from the
sequential nature of the greedy algorithm: because the primed values $i'$ are bigger than all
the unprimed values $i$, when the greedy algorithm finds $f$-minimizing bases for $M$,
it must first find $f|_A$-minimizing bases for $M|_A$ by trying to use only $e$'s
with unprimed values for as long as it can, and then proceed to
find $f|_{E\backslash A}$-minimizing
bases for $M/A$ using primed values. Lack of uniqueness in the $f$-minimizing bases of $M$
can only occur if it occurs in one of these two steps, leading either to lack of
uniqueness in the $f|_A$-minimizing bases of $M|_A$ or in the $f|_{E \backslash A}$-minimizing
bases of $M/A$. Conversely, lack of uniqueness in either step will lead to
lack of uniqueness for the whole computation.
\end{proof}
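The identity \eqref{coalgebra-map} can also be tested numerically: specializing $\xx=1^m$ and $\yy=1^p$ turns it into $\phi(M,m+p)=\sum_{A\subseteq E}\phi(M|_A,m)\,\phi(M/A,p)$. The following sketch is our own illustration under naive assumptions: matroids are given by base lists, and the bases of $M/A$ are computed from one fixed base of $M|_A$ (a standard fact makes the choice immaterial).

```python
from itertools import product, combinations

def phi(ground, bases, m):
    """Brute-force count of M-generic f: ground -> [m] (unique minimizing base)."""
    ground = sorted(ground)
    count = 0
    for vals in product(range(1, m + 1), repeat=len(ground)):
        f = dict(zip(ground, vals))
        w = [sum(f[e] for e in B) for B in bases]
        count += w.count(min(w)) == 1
    return count

def restriction(ground, bases, A):
    """Bases of M|_A: independent subsets of A of maximum size."""
    indep = lambda S: any(S <= B for B in bases)
    subs = [frozenset(S) for k in range(len(A) + 1)
            for S in combinations(sorted(A), k) if indep(frozenset(S))]
    r = max(len(S) for S in subs)
    return A, [S for S in subs if len(S) == r]

def contraction(ground, bases, A):
    """Bases of M/A: the sets B - A where B is a base of M containing
    one fixed base of M|_A (independent of the choice of that base)."""
    BA = restriction(ground, bases, A)[1][0]
    return ground - A, sorted({B - A for B in bases if BA <= B}, key=sorted)

# Check phi(M, m + p) = sum over A of phi(M|_A, m) * phi(M/A, p)
# for the example matroid U_{2,3} (our choice).
ground = frozenset({1, 2, 3})
bases = [frozenset(B) for B in combinations(sorted(ground), 2)]
m, p = 2, 3
rhs = sum(phi(*restriction(ground, bases, frozenset(A)), m)
          * phi(*contraction(ground, bases, frozenset(A)), p)
          for k in range(len(ground) + 1)
          for A in combinations(sorted(ground), k))
assert phi(ground, bases, m + p) == rhs
```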
It turns out that the Hopf morphism ${\mathcal Mat } \rightarrow {\mathcal{QS}ym }$ is
{\it not} surjective if one works over $\ZZ$, but becomes
surjective after tensoring with the rationals. The somewhat technical proof
of this surjectivity\footnote{A shortening of parts of this proof has been found
recently by Luoto, as an application of his ``matroid-friendly'' basis of quasisymmetric
functions; see \cite[\S 7.4]{Luoto}.}
is given in the Appendix (Section~\ref{appendix}). The proof involves the
construction of two new $\ZZ$-bases for ${\mathcal{QS}ym }$, which may be of independent
interest.
\begin{remark} ({\it on combinatorial Hopf algebras}) \rm \ \\
Definition~\ref{definition-of-F} for $F(M)$ immediately implies that for any
composition $\alpha = (\alpha_1,\ldots,\alpha_k)$ of $n:=|E|$, the
coefficient $c_\alpha$ in the unique expansion
\begin{equation}
\label{M-expansion}
F(M)=\sum_{\alpha} c_\alpha M_\alpha
\end{equation}
has the following interpretation: $c_\alpha$ is the number of $M$-generic
$f:E \rightarrow \PP$ in which $|f^{-1}(i)|=\alpha_i$ for $i=1,2,\ldots,k$.
The work of Aguiar, Bergeron and Sottile \cite{AguiarBergeronSottile}
on combinatorial Hopf algebras also offers an interpretation for $c_\alpha$, using the fact that
$F$ is a Hopf morphism, as we explain here. In their theory, the {\it character} (= multiplicative linear
functional) $\zeta_{\mathcal Q}:{\mathcal{QS}ym } \rightarrow \ZZ$
defined by
$$
\zeta_{\mathcal Q}(M_\alpha) =
\begin{cases}
1 & \text{ if }\alpha\text{ has at most one part, and} \\
0 & \text{ otherwise}
\end{cases}
$$
plays a crucial role, making ${\mathcal{QS}ym }$ into what they call a {\it combinatorial Hopf algebra}.
The Hopf morphism $F:{\mathcal Mat } \rightarrow {\mathcal{QS}ym }$ then allows one to uniquely
define a character $\zeta_{\mathcal M}:{\mathcal Mat } \rightarrow \ZZ$, via
$\zeta_{\mathcal M} := \zeta_{\mathcal Q} \circ F$, so that $F$ becomes a {\it morphism of
combinatorial Hopf algebras}.
It is not hard to see directly
(or one can appeal to Corollary~\ref{zeta-computation} below) the following more explicit
description of the character $\zeta_{\mathcal M}$. Say that a matroid $M$ {\it splits completely} if it
is a direct sum of matroids on $1$ element, that is, a direct sum of loops and isthmuses,
or equivalently, if it has only one base $B$.
Then for any matroid $M$
$$
\zeta_{\mathcal M}([M]) =
\begin{cases}
1 & \text{ if }M\text{ splits completely, and} \\
0 & \text{ otherwise.}
\end{cases}
$$
Using this, \cite[Theorem 4.1]{AguiarBergeronSottile} immediately implies another
interpretation for the coefficient $c_\alpha$ in \eqref{M-expansion}. Given
a flag ${\mathcal F}$ of subsets
\begin{equation}
\label{typical-flag}
{\mathcal F}: \emptyset =A_0 \subset A_1 \subset A_2 \subset \cdots \subset A_k=E
\end{equation}
where $E$ is the ground set for the matroid $M$, let $\alpha({\mathcal F})=(\alpha_1,\ldots,\alpha_k)$
be the composition of $n:=|E|$ defined by $\alpha_i:=|A_i|-|A_{i-1}|$.
\begin{proposition}
\label{ABS-computation}
The coefficient $c_\alpha$ in \eqref{M-expansion}
is the number of flags ${\mathcal F}$ of subsets of $E$ having $\alpha({\mathcal F})=\alpha$
and for which each subquotient $\left( M|_{A_i} \right) / A_{i-1}$ splits completely.
\end{proposition}
The equivalence of these two interpretations of $c_\alpha$ is easy to understand.
Any $f: E \rightarrow \PP$ with $|f^{-1}(i)|=\alpha_i$ for $i=1,2,\ldots,k$
gives rise to a flag ${\mathcal F}$ of subsets as in \eqref{typical-flag} with
$\alpha({\mathcal F})=\alpha$, by letting $A_i:=f^{-1}(\{1,2,\ldots,i\})$.
In other words, $f$ takes on the constant value $i$ on each of the set differences
$A_i \setminus A_{i-1}$. One can then readily see (e.g. from the greedy algorithm)
that $f$ will be $M$-generic if and only if each of the
subquotients $\left( M|_{A_i} \right) / A_{i-1}$ has only one base,
that is, if and only if each such subquotient splits completely.
A consequence of this equivalence is that \cite[Theorem 4.1]{AguiarBergeronSottile}
gives an alternate proof of Theorem~\ref{morphism-theorem} above.
Given this discussion, the existence of canonical {\it odd and even subalgebras} inside any combinatorial
Hopf algebra (see \cite[\S 5]{AguiarBergeronSottile}) naturally suggests the following question.
\begin{question}
\label{Eulerian-question}
What is the odd subalgebra of the combinatorial
Hopf algebra ${\mathcal Mat }$? Does it contain any elements of the form $[M]$ for a single matroid $M$, or does it
contain only nontrivial sums $\sum c_M [M]$?
\end{question}
\noindent
For any such $[M]$ in the odd subalgebra of ${\mathcal Mat }$, it will follow from
\cite[Propositions 5.8e and 6.5]{AguiarBergeronSottile} that $F(M)$ will
lie in the {\it peak subalgebra} of ${\mathcal{QS}ym }$ (see \cite{AguiarBergeronSottile}
for definitions).
\end{remark}
\begin{example} \rm \
One can use Proposition~\ref{ABS-computation} to compute some more examples
of $F(M)$. If $M$ is a rank $1$ matroid on ground set $E=\{1,2\}$ in which $1,2$ are
parallel elements, then there are exactly two flags ${\mathcal F}$ having all subquotients
that split completely:
$$
\begin{aligned}
&\emptyset \subset \{1\} \subset E=\{1,2\} \\
&\emptyset \subset \{2\} \subset E=\{1,2\}
\end{aligned}
$$
Both of these flags have $\alpha({\mathcal F})=(1,1)$ and hence $F(M)=2M_{1,1}$.
Similarly, if $M$ is a rank $1$ matroid on ground set $E=\{1,2,3\}$ in which
$1,2,3$ are all parallel, then there are two kinds of flags ${\mathcal F}$ having all subquotients
that split completely:
\begin{enumerate}
\item[$\bullet$]
$6=3!$ flags of the form
$$
\emptyset \subset \{a\} \subset \{a,b\} \subset E=\{a,b,c\}
$$
where $(a,b,c)$ is some permutation of $(1,2,3)$, all having $\alpha({\mathcal F})=(1,1,1)$, and
\item[$\bullet$]
$3$ flags of the form
$$
\emptyset \subset \{a\} \subset E=\{a,b,c\}
$$
where $a \in \{1,2,3\}$, all having $\alpha({\mathcal F})=(1,2)$.
\end{enumerate}
Consequently, $F(M)=3M_{1,2} + 6M_{1,1,1}$.
\end{example}
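The coefficients in these examples can be recovered directly from Definition~\ref{definition-of-F}: $c_\alpha$ counts $M$-generic $f$ whose value multiplicities are exactly $\alpha$. A brute-force Python sketch (ours, not the paper's) for the three parallel elements:

```python
from itertools import product
from collections import Counter

def monomial_coefficients(ground, bases, k_max):
    """c_alpha = number of M-generic f: E -> {1,...,k} using every value
    1..k, with alpha_i = |f^{-1}(i)|; these are the M_alpha coefficients."""
    ground = sorted(ground)
    coeffs = Counter()
    for k in range(1, k_max + 1):
        for vals in product(range(1, k + 1), repeat=len(ground)):
            if set(vals) != set(range(1, k + 1)):
                continue          # value set must be exactly {1..k}
            f = dict(zip(ground, vals))
            w = [sum(f[e] for e in B) for B in bases]
            if w.count(min(w)) == 1:
                coeffs[tuple(vals.count(i) for i in range(1, k + 1))] += 1
    return dict(coeffs)

# Rank-1 matroid with three parallel elements: bases {1}, {2}, {3}.
bases = [frozenset({e}) for e in (1, 2, 3)]
print(monomial_coefficients({1, 2, 3}, bases, 3))
# {(1, 2): 3, (1, 1, 1): 6}, matching F(M) = 3 M_{1,2} + 6 M_{1,1,1}
```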
\section{Behavior under matroid duality}
\label{matroid-duality-section}
Recall that if $M$ is a matroid on ground set $E$ with bases ${\mathcal B}(M)$, then
its {\it dual} (or {\it orthogonal}) matroid $M^\ast$ has the same ground set $E$
but bases
$$
{\mathcal B}(M^\ast)=\{B^\ast: B \in {\mathcal B}(M)\}
$$
where $B^\ast:=E\backslash B$ is called the
{\it cobase} of $M^\ast$ corresponding to the base $B$ of $M$.
\begin{proposition}
\label{duality-proposition}
$$
F(M) =\sum_{\alpha} c_\alpha M_\alpha
$$
if and only if
$$
F(M^\ast)=\sum_{\alpha} c_\alpha M_{\alpha^*}
$$
where $\alpha^* :=(\alpha_k,\alpha_{k-1},\ldots,\alpha_2,\alpha_1)$
is the reverse composition to $\alpha$.
\end{proposition}
\begin{proof}
We check that for any composition $\alpha \in \PP^k$,
the coefficient of $M_\alpha$ in $F(M)$ is
the same as the coefficient of $M_{\alpha^*}$ in $F(M^\ast)$.
The former coefficient counts the set of $M$-generic $f:E \rightarrow \PP$
for which $\xx_f=\xx^\alpha$. The latter coefficient counts the
set of $M^\ast$-generic $f^\ast:E \rightarrow \PP$ for which $\xx_{f^\ast}=\xx^{\alpha^*}$.
We exhibit a bijection between these sets as follows.
If $B$ is a base of $M$ with cobase $B^\ast$ of $M^\ast$, then
the equation
$$
f(B) + f(B^\ast) = \sum_{e \in E} f(e)
$$
shows that $B$ is $f$-minimizing if and only if $B^\ast$ is $f$-maximizing.
Now define $f^\ast(e) := k+1-f(e)$, so that one has
$$
f(B^\ast) + f^\ast(B^\ast) = (k+1) \; |B^\ast| = (k+1)\left(\; |E|-r(M)\; \right).
$$
This equation shows that $B^\ast$ is $f$-maximizing if and only if $B^\ast$ is $f^\ast$-minimizing.
Since $\xx_f = \xx^\alpha$ if and only if $\xx_{f^\ast} = \xx^{\alpha^*}$,
the map $f \mapsto f^\ast$ restricts to the desired bijection.
\end{proof}
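As a numerical check of this duality (our own sketch, using the same naive list-of-bases representation as before), one can compare the monomial coefficients of the rank-$1$ matroid with three parallel elements against those of its dual, which is $U_{2,3}$.

```python
from itertools import product
from collections import Counter

def coeffs(ground, bases, k_max):
    """Monomial coefficients c_alpha of F(M), by brute force: count
    M-generic f using exactly the values 1..k, with alpha_i = |f^{-1}(i)|."""
    ground = sorted(ground)
    c = Counter()
    for k in range(1, k_max + 1):
        for vals in product(range(1, k + 1), repeat=len(ground)):
            if set(vals) != set(range(1, k + 1)):
                continue
            f = dict(zip(ground, vals))
            w = [sum(f[e] for e in B) for B in bases]
            if w.count(min(w)) == 1:
                c[tuple(vals.count(i) for i in range(1, k + 1))] += 1
    return c

E = frozenset({1, 2, 3})
bases = [frozenset({e}) for e in E]        # three parallel elements, rank 1
dual_bases = [E - B for B in bases]        # its dual is U_{2,3}
cM, cMd = coeffs(E, bases, 3), coeffs(E, dual_bases, 3)
# Proposition: c_alpha(M) = c_{alpha reversed}(M*)
assert all(cM[a] == cMd[a[::-1]] for a in set(cM) | {a[::-1] for a in cMd})
```

Here $F(M)=3M_{1,2}+6M_{1,1,1}$ while $F(M^\ast)=3M_{2,1}+6M_{1,1,1}$, as the proposition predicts.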
\section{$P$-partition expansion}
\label{P-partition-section}
Quasisymmetric functions were originally introduced by Gessel \cite{Gessel}
(building on work of Stanley) as enumerators for $P$-partitions.
We review this here, and explain
how it leads to an expansion of $F(M)$ as a sum of $P$-partition
enumerators.
A {\it labelled poset} $(P,\gamma)$ on $n$ elements is a poset $P$ together with
a bijective labelling
function $\gamma: P \rightarrow [n]:=\{1,2,\ldots,n\}$.
A {\it $(P,\gamma)$-partition}
is a function $f: P \rightarrow \PP$ such that
$$
\begin{aligned}
f(p) \leq f(p') & \textrm{ if } p \leq p' \\
f(p) < f(p') & \textrm{ if }p \leq p'\textrm{ and }\gamma(p) > \gamma(p')
\end{aligned}
$$
It will sometimes be more convenient for us to refer only
to a labelled poset $P$ on $[n]$ (suppressing the extra labeling function
$\gamma$), by which we mean a partial order $<_P$ on the set $[n]$.
Using this terminology, a $P$-partition is a function $f: [n] \rightarrow \PP$ satisfying
$$
\begin{aligned}
f(i) \leq f(i') & \textrm{ if }i \leq_P i' \\
f(i) < f(i') & \textrm{ if }i \leq_P i' \textrm{ and }i >_{\ZZ} i'.
\end{aligned}
$$
For example, every
permutation $w=w_1 \cdots w_n$ of $[n]$ can be regarded as a labelled
poset on $[n]$ which is totally ordered: $w_1 <_w \cdots <_w w_n$.
Let ${\mathcal A}(P,\gamma)$ denote the set of $(P,\gamma)$-partitions,
and let $F(P,\gamma,\xx):=\sum_{f} \xx_f$ be their weight enumerator:
$$
F(P,\gamma,\xx):=\sum_{f \in {\mathcal A}(P,\gamma)} \xx_f.
$$
A basic result of Stanley tells how $F(P,\gamma,\xx)$ expands
in terms of another basis for ${\mathcal{QS}ym }$ indexed by compositions $\alpha$, known as the
{\it fundamental quasisymmetric functions}
\begin{equation}
\label{L-into-M-expansion}
L_\alpha:=\sum_{\beta: \beta \textrm{ refines }\alpha} M_\beta.
\end{equation}
Say that a permutation $w=w_1 \ldots w_n$ in the symmetric group $S_n$
is a {\it linear extension} of $(P,\gamma)$ if $p < p'$ in $P$
implies $w^{-1}(\gamma(p)) < w^{-1}(\gamma(p'))$. The {\it Jordan-H\"older
set} of $(P,\gamma)$ is the set ${\mathcal L}(P,\gamma)$ of all linear extensions
of $(P,\gamma)$. The {\it descent composition} for the permutation $w$
is the composition $\alpha(w)$ of $n$ which gives the lengths of the maximal
increasing consecutive subsequences ({\it runs}) of $w$.
It is not hard to check that, regarding $w$ as a totally ordered labelled poset
on $[n]$ as above, one has $ F( w, \xx)= L_{\alpha(w)}$. The basic result about
$P$-partitions is the following expansion.
\begin{proposition} \cite[\S 4.5]{Stanley-EC1}, \cite[\S 7.19]{Stanley-EC2}, \cite[eqn. (1)]{Gessel}
\label{Stanley's-P-partition-result}
$$
\begin{aligned}
F(P,\gamma,\xx) &=\sum_{w \in {\mathcal L}(P,\gamma)} F(w,\xx)\\
&=\sum_{w \in {\mathcal L}(P,\gamma)} L_{\alpha(w)}
\end{aligned}
$$
\end{proposition}
It turns out that every base $B$ of a matroid $M$ leads to a certain
labelled poset $P_B$, whose $P$-partition enumerator is relevant for expanding
$F(M)$; see Theorem~\ref{poset-expansion} below.
Given a base $B$ of a matroid $M$ on ground set $E$,
let $B^\ast=E\backslash B$ be the corresponding cobase
of $M^\ast$.
For each $e \in B$ the {\it basic bond for $e$ in $B^\ast$} is the
set of $e' \in E$ for which $(B\backslash \{e\}) \cup \{e'\}$
is another base of $M$. Dually, for each $e \in E-B (=B^\ast)$ the
{\it basic circuit for $e$ in $B$} is the
set of $e' \in E$ for which $(B\cup \{e\}) \backslash \{e'\}$
is another base of $M$. By definition then, one has a symmetric relationship:
$e'$ lies in the basic bond for $e$ in $B^\ast$ if and only if
$e$ lies in the basic circuit for $e'$ in $B$. Thus these relations
can be encoded by a bipartite graph with vertex set $E$, bipartitioned
as $E=B \sqcup B^\ast$.
Define the poset $P_B$ to be the one whose
Hasse diagram is this bipartite graph, with edges directed upward
from $B$ to $B^\ast$.
Say that a labelling $\gamma$ of a poset $P$
is {\it natural} (resp. {\it strict} or {\it anti-natural}) if
$\gamma(p) < \gamma(p')$ (resp. $\gamma(p) > \gamma(p')$)
whenever $p < p'$ in $P$.
\begin{theorem}
\label{poset-expansion}
For any matroid $M$,
$$
F(M,\xx) = \sum_{B \in {\mathcal B}(M)} F(P_B,\gamma_B,\xx)
$$
where $\gamma_B$ is any strict labelling of $P_B$.
\end{theorem}
\begin{proof}
We will show that $B$ is the unique $f$-minimizing base of $M$ for
some $f:E \rightarrow \PP$ if and only if $f$ lies in ${\mathcal A}(P_B,\gamma_B)$.
First assume that $f$ does not lie in ${\mathcal A}(P_B,\gamma_B)$, that is, there
exists some $e < e'$ in $P_B$ for which $f(e) \geq f(e')$. By definition
of $P_B$, this means that $e$ lies in $B$, $e'$ does not lie in $B$, and
$B':=B\backslash \{e\} \cup \{e'\}$ is another base of $M$. However
$f(e') \leq f(e)$ implies $f(B') \leq f(B)$,
so that $B$ cannot be the unique $f$-minimizing base.
Now assume that $B$ is not the unique $f$-minimizing base of $M$. This
means that there exists another base $B'$ of $M$ having $f(B') \leq f(B)$.
By convexity, we may assume that the pair $\{B,B'\}$ corresponds to an
edge of the {\it matroid base polytope} $Q(M)$,
which is defined to be the convex
hull in $\RR^E$ of all characteristic $\{0,1\}$-vectors of bases of $M$
(see Section~\ref{decomposition-section} below).
A well-known fact from matroid theory \cite[\S2.2, Theorem 1]{GelSerg}
says that all edges of $Q(M)$ take the form $\{B,B'\}$ in which $B, B'$ differ by a single {\it basis exchange}:
there exists some $e \in B$ and $e' \in B'$ such that $B'=B\backslash \{e\} \cup \{e'\}$.
Thus $e < e'$ in $P_B$. Since $f(B') \leq f(B)$ forces $f(e') \leq f(e)$, this means
$f$ is not in ${\mathcal A}(P_B,\gamma_B)$.
\end{proof}
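Theorem~\ref{poset-expansion} can be verified base-by-base on a small example. In the sketch below (ours, with the usual list-of-bases representation), a strict labelling makes the cover relations of $P_B$ translate into strict inequalities $f(e)<f(e')$ across each single basis exchange, so for every base $B$ the number of $f:E\rightarrow[m]$ with $B$ the unique $f$-minimizing base should equal the number of $(P_B,\gamma_B)$-partitions valued in $[m]$.

```python
from itertools import product, combinations

def two_counts(ground, bases, m):
    """For each base B, over all f: E -> [m], compare
      (i)  #f for which B is the unique f-minimizing base, and
      (ii) #f with f(e) < f(e') for every basis exchange e in B, e' not in B
           (the (P_B, strict labelling)-partitions)."""
    ground = sorted(ground)
    exch = {B: [(e, ep) for e in B for ep in ground
                if ep not in B and ((B - {e}) | {ep}) in bases]
            for B in bases}
    result = {}
    for B in bases:
        uniq = part = 0
        for vals in product(range(1, m + 1), repeat=len(ground)):
            f = dict(zip(ground, vals))
            w = [sum(f[e] for e in Bp) for Bp in bases]
            wB = sum(f[e] for e in B)
            uniq += wB == min(w) and w.count(wB) == 1
            part += all(f[e] < f[ep] for e, ep in exch[B])
        result[B] = (uniq, part)
    return result

ground = {1, 2, 3}
bases = [frozenset(Bp) for Bp in combinations(sorted(ground), 2)]  # U_{2,3}
for uniq, part in two_counts(ground, bases, 4).values():
    assert uniq == part
```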
\begin{remark}\rm \
Aguiar has pointed out that Theorem~\ref{poset-expansion} shows
the Hopf morphism $F: {\mathcal Mat } \rightarrow {\mathcal{QS}ym }$ factoring through the {\it Hopf algebra
$\mathcal P$ of (labelled) posets},
which is described (for unlabelled posets) in \cite[Example 2.3]{AguiarBergeronSottile}.
More precisely, one has a Hopf morphism
$$
\begin{matrix}
{\mathcal Mat } &\longrightarrow &{\mathcal P}\\
[M] &\mapsto &\sum_{B \in {\mathcal B}(M)}[(P_B,\gamma_B)]
\end{matrix}
$$
and the usual $(P,\gamma)$-partition enumerator Hopf morphism
$$
\begin{matrix}
{\mathcal P} & \longrightarrow & {\mathcal{QS}ym }\\
[(P,\gamma)] & \mapsto & F(P,\gamma,\xx).
\end{matrix}
$$
Then $F: {\mathcal Mat } \rightarrow {\mathcal{QS}ym }$ is the composite
of these two morphisms.
\end{remark}
\begin{corollary}
\label{last-coefficient}
Let $F(M) = \sum_{\alpha} c^M_\alpha L_\alpha$. Then
\begin{enumerate}
\item[(i)] the coefficients $c^M_\alpha$ are nonnegative,
\item[(ii)] their sum $\sum_{\alpha} c^M_\alpha$ is $n!$ where $n:=|E|$, and
\item[(iii)] the coefficient $c^M_{1,1,\ldots,1}$ of $L_{1,1\ldots,1}$ is
the number of bases of $M$.
\end{enumerate}
\end{corollary}
\begin{proof}
Everything will follow from Proposition~\ref{Stanley's-P-partition-result}
and Theorem~\ref{poset-expansion}.
Assertion (i) is immediate.
Assertion (ii) follows because each of the $n!$ linear orderings $e_1,\ldots,e_n$ of
$E$ is a linear extension for exactly one of the posets $P_B$, namely the one indexed
by the unique $f$-minimizing base $B$ when $f(e_1) < \cdots < f(e_n)$.
Assertion (iii) follows because any strictly (anti-naturally) labelled poset
$(P,\gamma)$ has the reversing permutation $w_0 = n \ldots 3 2 1$
in ${\mathcal L}(P,\gamma)$, and $w_0$ is the {\it only} permutation having
descent composition $(1,1,\ldots,1)$.
\end{proof}
Corollary~\ref{last-coefficient} gives a combinatorial interpretation for
the coefficient $c^M_{1,1,\ldots,1}$. It would be
nice to have such an interpretation for every coefficient $c^M_\alpha$. The next
result at least tells us how to interpret the coefficients ``at the
other end'' of the $L_\alpha$ expansion, namely $c^M_{\alpha}$ where $\alpha$ has at most two parts,
in terms of some basic matroid invariants of $M$.
Recall that an element $e$ in $E$ is a {\it loop} in $M$ if
it appears in {\it no} bases of $M$, and it is a {\it coloop} (or {\it isthmus})
if it appears in {\it every} base of $M$.
\begin{proposition}
\label{first-coefficients}
Let $M$ be a matroid having
\begin{enumerate}
\item[$\bullet$] rank $r$,
\item[$\bullet$] corank $r^\ast:=|E|-r$,
\item[$\bullet$] number of loops equal to $\ell$,
\item[$\bullet$] number of coloops equal to $c$, and
\item[$\bullet$] number of bases $b$.
\end{enumerate}
Then
$$
F(M) = b\left( \sum_{j=0}^{\ell+c} \binom{\ell+c}{j} L_{(r+\ell-j,r^\ast-\ell+j)} \right)
+ \sum_{\beta: \ell(\beta) \geq 3} c_\beta L_\beta
$$
for some nonnegative coefficients $c_\beta$.
Here $\ell(\beta)$ denotes the number of parts in the composition $\beta$.
Equivalently, if $\hat{M}$ is the matroid obtained from
$M$ by removing all loops and coloops, so that $\hat{M}$ has
$$
\begin{aligned}
\text{ rank }\hat{r}&=r-c,\text{ and }\\
\text{ corank }\hat{r}^\ast&=r^\ast-\ell=|E|-r-\ell,
\end{aligned}
$$
then
$$
\begin{aligned}
F(M) &= (L_1)^{\ell+c} F(\hat{M}) \\
&= (L_1)^{\ell+c} \left( b \cdot
L_{(\hat{r},\hat{r}^\ast)} + \sum_{\gamma: \ell(\gamma) \geq 3} d_\gamma L_\gamma
\right)
\end{aligned}
$$
for some nonnegative integer coefficients $d_\gamma$.
\end{proposition}
\begin{proof}
The second assertion follows from the $\ell+c=0$ case of the first, applying the
multiplicative property $F(M_1 \oplus M_2) = F(M_1) F(M_2)$ to the decomposition of
$M$ as a direct sum of $\hat{M}$ with $\ell+c$ loops and isthmuses.
For the first assertion, we apply Theorem~\ref{poset-expansion}. For each base $B$, the poset $P_B$ will
have height one, and decompose into three sets:
\begin{enumerate}
\item[$\bullet$]
the set $A_1$ of $\ell+c$ loops and coloops, which are all both minimal and maximal in $P_B$,
\item[$\bullet$]
the set $A_2$ of $r-c$ non-coloop elements in $B$, each of
which is minimal but not maximal in $P_B$, and
\item[$\bullet$]
the set $A_3$ of $r^\ast-\ell$ non-loop elements in $B^\ast$, each of which is
maximal but not minimal in $P_B$.
\end{enumerate}
We are free to choose the strict labelling $\gamma_B$ so that the elements in
$A_2$ all have the highest labels, the elements in $A_3$
all have the lowest labels, and the elements in $A_1$ have the labels
in between.
How then can one choose a linear extension $w$ in ${\mathcal L}(P_B,\gamma_B)$ so that
its descent composition $\alpha(w)$ has at most two parts? This means that $w$
has at most two increasing runs, separated by a unique descent. Because of our
chosen labelling of $B$, such a $w$ will have the
first run of length at least $r-c$, and the second run of length at least $r^\ast-\ell$.
Furthermore, for any integer $j$ in the range $[0,\ell+c]$, one can check that
there are $\binom{\ell+c}{j}$ ways to choose such a $w$ in $(P_B,\gamma_B)$ so that it
starts with an increasing
run of length $r-c+j$, followed by its unique descent, and then ends with
an increasing run of length $r^\ast+c-j$: one must place the elements of
$A_2$ together with any $j$ elements chosen from $A_1$ before the
unique descent, and place the elements of $A_3$ together with the other $\ell+c-j$
elements of $A_1$ after the unique descent.
\end{proof}
In particular, the previous proposition tells us the coefficient
of $L_\alpha$ in $F(M)$ when $\alpha$ has only $1$ part.
Recall from Section~\ref{Hopf-section} that a matroid $M$ is said to split completely if $M$ is
a direct sum of loops and isthmuses.
\begin{corollary}
\label{zeta-computation}
For $M$ a matroid on ground set $E$ of size $|E|=n$,
the expansion of $F(M)$ in the $L_\alpha$ (resp. $M_\alpha$) basis for ${\mathcal{QS}ym }$ has
the coefficient of $L_{(n)}$ (resp. $M_{(n)}$) equal to
$1$ if $M$ splits completely, and $0$ otherwise.
\end{corollary}
\begin{proof}
The assertion for the coefficient of $L_{(n)}$ follows from Proposition~\ref{first-coefficients}.
Then the assertion for the coefficient of $M_{(n)}$ follows from the expansion
\eqref{L-into-M-expansion} of $L_\alpha$ into $M_\beta$'s.
\end{proof}
\section{Reciprocity and behavior under the antipode}
\label{antipodes}
Part of the structure of a Hopf algebra is an involutive anti-automorphism known
as its {\it antipode}. For the Hopf algebra of quasisymmetric functions,
the antipode $S:{\mathcal{QS}ym } \rightarrow {\mathcal{QS}ym }$ is known to be related to
combinatorial reciprocity results \cite{MalvenutoReutenauer, Stanley2}.
It turns out to have an interesting effect on $F(M)$, transforming it into a different sort of
enumerator for weight functions $f: E \rightarrow \PP$. We begin by reviewing how the
antipode relates to reciprocity.
The antipode $S: {\mathcal{QS}ym } \rightarrow {\mathcal{QS}ym }$ has the following effect on
the $L_\alpha$-basis \cite[Corollary 2.3]{MalvenutoReutenauer}:
$$
S(L_\alpha) = (-1)^{|\alpha|} L_{\alpha^c}
$$
where $|\alpha|:=\alpha_1+\cdots + \alpha_k=n$ denotes the {\it weight} of the composition $\alpha$,
and $\alpha^c$ corresponds to the subset $T^c=[n-1] \backslash T$ if $\alpha$ corresponds to
the subset $T$ of $[n-1]$ ({\it i.e.} $T$ is the set of partial sums of $\alpha$).
Stanley's reciprocity theorem for $P$-partitions \cite[Theorem 4.5.7]{Stanley-EC1}
tells us that if $\gamma, \bar\gamma$ are
natural and strict labellings of the same poset $P$, then
\begin{equation}
\label{natural-strict-relation}
S(F(P,\gamma,\xx)) = (-1)^{|P|} F(P,\bar\gamma,\xx).
\end{equation}
Upon specializing $L_\alpha, L_{\alpha^c}$ to $\xx=1^m$, that is,
$$
\begin{aligned}
x_1=\cdots=x_m=1,\\
x_{m+1}=x_{m+2} = \cdots =0,
\end{aligned}
$$
one obtains
$$
\begin{aligned}
L_\alpha(1^m) &= \binom{m-k+n}{n} \\
L_{\alpha^c}(1^m) &=\binom{m+k-1}{n}.
\end{aligned}
$$
where $\alpha=(\alpha_1,\ldots,\alpha_k)$. Then the equality
$$
\binom{m-k+n}{n}=(-1)^n \binom{-m+k-1}{n}
$$
leads immediately to the following reciprocity fact (cf. \cite[\S 4]{Stanley2}).
\begin{proposition}
If two homogeneous quasisymmetric functions $F, {{F}^{\ast}}$ of degree $n$
are related by $S(F)={{F}^{\ast}}$, then their specializations
$$
\begin{aligned}
\phi(m)&=F(1^m)\\
{{\phi}^{\ast}}(m) &={{F}^{\ast}}(1^m)
\end{aligned}
$$
satisfy
$$
\phi(-m) = {{\phi}^{\ast}}(m).
$$
\end{proposition}
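The binomial specializations $L_\alpha(1^m)=\binom{m-k+n}{n}$ and $L_{\alpha^c}(1^m)=\binom{m+k-1}{n}$ can be checked directly: $L_\alpha(1^m)$ counts weakly increasing words in $[m]$ that rise strictly across each boundary between parts of $\alpha$. A small Python sketch of ours:

```python
from itertools import combinations_with_replacement
from math import comb

def L_specialized(alpha, m):
    """L_alpha(1^m): weakly increasing words of length n = |alpha| in {1..m}
    that increase strictly across each boundary between consecutive parts."""
    n = sum(alpha)
    cuts, s = set(), 0
    for a in alpha[:-1]:
        s += a
        cuts.add(s)          # strict rise required after position s
    total = 0
    for word in combinations_with_replacement(range(1, m + 1), n):
        if all(word[j] < word[j + 1] for j in range(n - 1) if (j + 1) in cuts):
            total += 1
    return total

# Check both specializations for alpha = (1,2), whose complement
# composition is alpha^c = (2,1) (here n = 3, k = 2).
m, alpha = 4, (1, 2)
n, k = sum(alpha), len(alpha)
assert L_specialized(alpha, m) == comb(m - k + n, n)
assert L_specialized((2, 1), m) == comb(m + k - 1, n)
```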
We can now identify the image of $F(M)$ under the antipode $S$ in ${\mathcal{QS}ym }$.
\begin{defn} \rm \
Define a power series in $x_1,x_2,\ldots$
$$
{{F}^{\ast}}(M,\xx):=\sum_{ f: E \rightarrow \PP} |\{ f\textrm{-minimizing bases of }M\}| \; \xx_f .
$$
Also define a polynomial in $m$
$$
\begin{aligned}
{{\phi}^{\ast}}(M,m)&:={{F}^{\ast}}(M,1^m) \\
&=\sum_{f: E \rightarrow [m]} |\{ f\textrm{-minimizing bases of }M\}|.
\end{aligned}
$$
\end{defn}
One could argue that these two enumerators ${{F}^{\ast}}(M,\xx), {{\phi}^{\ast}}(M,m)$
are at least as natural to consider as our original $F(M,\xx), \phi(M,m)$.
For example, when the weights $f(e)$ are chosen independently and uniformly at random from $[m]$, the
expected number of $f$-minimizing bases of $M$ is exactly $\frac{1}{m^n} {{\phi}^{\ast}}(M,m)$.
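As a concrete illustration (our own choice of matroid, not from the text), ${{\phi}^{\ast}}(M,m)$ and this expectation can be computed by brute force for the uniform matroid $U_{2,3}$:

```python
from itertools import combinations, product

# uniform matroid U_{2,3}: ground set {0,1,2}, bases = all 2-subsets
E = (0, 1, 2)
bases = list(combinations(E, 2))

def num_min_bases(f):
    """Number of bases of minimum f-weight, for f given as a tuple of weights."""
    w = [f[i] + f[j] for i, j in bases]
    return w.count(min(w))

def phi_star(m):
    """phi*(M,m): total count of f-minimizing bases over all f: E -> [m]."""
    return sum(num_min_bases(f) for f in product(range(1, m + 1), repeat=len(E)))

m = 3
# expected number of f-minimizing bases under uniform random weights in [m]
expectation = phi_star(m) / m ** len(E)
assert phi_star(1) == len(bases)   # constant weights: every basis is minimizing
assert phi_star(3) == 42           # so the expectation is 42/27
```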
\begin{theorem}
\label{reciprocity-theorem}
For any matroid $M$ on $n$ elements,
$$
S( F(M,\xx) ) = (-1)^n {{F}^{\ast}}(M,\xx)$$
and consequently,
$$
\phi(M,-m) = (-1)^n {{\phi}^{\ast}}(M,m).
$$
\end{theorem}
\begin{proof}
Theorem~\ref{poset-expansion} implies
$$
\begin{aligned}
S( F(M,\xx) ) &= \sum_{B \in {\mathcal B}(M)} S (F(P_B,\gamma_B,\xx) ) \\
&= (-1)^n \sum_{B \in {\mathcal B}(M)} F(P_B,\bar\gamma_B,\xx) \\
&= (-1)^n \sum_{B \in {\mathcal B}(M)}
\sum_{\substack{f:E \rightarrow \PP \\
B\textrm{ is }f\textrm{-minimizing}}} \xx_f \\
&= (-1)^n \sum_{f: E \rightarrow \PP} |\{ f\textrm{-minimizing bases of }M\}| \xx_f \\
&= (-1)^n {{F}^{\ast}}(M,\xx).
\end{aligned}
$$
\end{proof}
Note that since $F(M,\xx), {{F}^{\ast}}(M,\xx)$ are related by the antipode $S$, they
carry equivalent information, a fact which is not completely obvious from their
definitions. The same goes for $\phi(M,m)$ and ${{\phi}^{\ast}}(M,m)$.
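The polynomial identity of Theorem~\ref{reciprocity-theorem} can be checked numerically on small matroids. The sketch below (our own; it again uses the uniform matroid $U_{2,3}$) computes $\phi(M,m)$ as the number of weight functions with a unique minimum-weight basis (substituting $m+1-f$ for $f$ shows this count agrees with counting unique maxima), interpolates the degree-$n$ polynomial exactly over the rationals, and evaluates it at negative integers:

```python
from fractions import Fraction
from itertools import combinations, product

E = (0, 1, 2)                        # uniform matroid U_{2,3}
bases = list(combinations(E, 2))
n = len(E)

def num_min_bases(f):
    w = [f[i] + f[j] for i, j in bases]
    return w.count(min(w))

def phi(m):
    """Number of f: E -> [m] with a unique f-minimizing basis."""
    return sum(num_min_bases(f) == 1
               for f in product(range(1, m + 1), repeat=n))

def phi_star(m):
    """Total number of f-minimizing bases over all f: E -> [m]."""
    return sum(num_min_bases(f)
               for f in product(range(1, m + 1), repeat=n))

def lagrange(points, x):
    """Exact value at x of the interpolating polynomial through points."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# phi(M,m) is a polynomial in m of degree n, so n+1 values determine it
pts = [(m, phi(m)) for m in range(1, n + 2)]
for m in range(1, 5):
    assert lagrange(pts, -m) == (-1) ** n * phi_star(m)
```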
\section{Valuation property and application to polytope decompositions}
\label{decomposition-section}
The goal of this section is to show that the
matroid invariant $F(M)$ behaves like a valuation on
the associated matroid base polytopes $Q(M)$,
and apply this to the subtle problem of detecting decompositions
of these polytopes.
By the {\it matroid base polytope} we mean the convex polytope
$$
Q(M):= \text{conv} \left\{\sum_{i\in B}e_{i} : B \text{~a base of~} M\right\},
$$
where $e_{i}$ denotes the $i^{th}$ standard basis vector in $\RR^{E}$. This polytope
$Q(M)$ is a face of a polytope
first studied by Edmonds \cite{Edmonds}, which took as vertices the indicator
functions of \emph{all} independent sets in $M$ (subsets of bases).
We are interested in the existence or non-existence of certain polytopal decompositions of $Q(M)$.
\begin{defn} \rm \
A {\it matroid base polytope decomposition} of
$Q(M)$ is a decomposition $Q(M) = \cup_{i=1}^t Q(M_{i})$ where
\begin{enumerate}
\item[$\bullet$] each $Q(M_i)$ is a matroid base polytope for some matroid $M_i$,
and
\item[$\bullet$]
for each $i \neq j$, the intersection
$Q(M_{i}) \cap Q(M_{j})$ is a face of both $Q(M_i)$ and of $Q(M_j)$.
\end{enumerate}
\end{defn}
We call such a decomposition a {\it hyperplane split} of $Q(M)$ if $t=2$.
We say that $Q(M)$ is {\it decomposable} if it has a matroid base polytope
decomposition with $t \geq 2$, and {\it indecomposable} otherwise.
We say that the decomposition is {\it coherent} if the $Q(M_i)$ are exactly the maximal domains of
linearity for some $\RR$-valued piecewise-linear convex
function on $Q(M)$. For example, hyperplane splits are always coherent.
Coherent matroid base polytope decompositions arise in work of Lafforgue
\cite{Lafforgue1, Lafforgue2} on compactifications of the fine Schubert cell of the Grassmannian
corresponding to the matroid $M$, and in related work
by Keel and Tevelev \cite[\S2.6]{KeelTevelev}, and by
Hacking, Keel and Tevelev \cite[\S 3.3]{HackingKeelTevelev}.
In particular, Lafforgue's work implies that
for a matroid $M$ represented by vectors in $\FF^r$, if $Q(M)$ is indecomposable,
then $M$ will be {\it rigid}, that is, $M$ will have only finitely many
realizations, up to scaling and the action of $GL(r,\FF)$.
\subsection{Polar cones and valuations}
We will need a version of a theorem of Lawrence \cite[Theorem 16]{Law} (see also
\cite[Corollary IV.1.6]{Barv}) about polarity,
which can be proved by a minor adjustment
to the proof of \cite[Theorem IV.1.5]{Barv}.
Let $ \langle \cdot, \cdot \rangle$ denote the usual inner product on $\RR^n$.
If $A$ is a convex set in $\RR^n$, then denote by $[A]$ its indicator function and by
$I(A)$ the convex set
$$I(A):= \{x \in \RR^n: \langle x,y \rangle > 0 \text{~for all~} y \in A \}.$$
Recall that a closed convex cone $K\subset \RR^{n}$ is said to be \emph{pointed} if it contains
no lines. In this case, its \emph{polar cone}
$K^{\circ}:= \{x\in \RR^{n} : \langle x,y \rangle \le 0 \text{ for all } y\in K \}$
has a nonempty interior. For a \emph{nonzero} pointed cone $K$, $I(K)$ is
the interior of $-K^{\circ}$.
We show that the function $A\mapsto I(A)$ acts as
a valuation on nonempty closed convex sets.
\begin{proposition}\label{polar_relation}
Let $A_{1}, A_{2},\dots, A_{N}$ be a finite family of nonempty closed convex sets.
If
$$
\sum_i \alpha_i [A_i]=0
$$
for real numbers $\alpha_{1}, \alpha_{2},\dots$, then
$$
\sum_i \alpha_i [I(A_i)] =0.
$$
\end{proposition}
\begin{proof}
The proof is as in \cite{Barv}, except that in Theorem IV.1.5, one defines
$$F_{\epsilon}(x,y)=F(x,y)=
\begin{cases}
1 & \text{if~} \langle x,y \rangle \le 0 \\
0 & \text{otherwise}.
\end{cases}
$$
In this case, the limiting argument of \cite{Barv} (and \cite{Law}) is not necessary.
As in \cite{Barv} the association
$${\mathcal{D}}: [A] \mapsto [I(A)]$$
is the specialization to indicator functions $[A]$ of a linear map
$${\mathcal{D}}: {\mathcal{C}}(\RR^d) \mapsto {\mathcal{C}}(\RR^d),$$
where ${\mathcal{C}}(\RR^d)$ is the algebra of indicator functions of closed convex sets in $\RR^d$ (see \cite[Defn. I.7.3]{Barv}). The map
${\mathcal{D}}$ may be defined as follows: for a function $g(x)$, the value of $({\mathcal{D}}g)(y)$ on a point $y \in \RR^d$ is given by
$$ \chi( g(x) ) - \chi( g(x)F(x,y) ).$$
Here $\chi$ denotes the Euler characteristic linear
functional on ${\mathcal{C}}(\RR^d)$; its value on
a function $h(x) \in {\mathcal{C}}(\RR^d)$ is determined uniquely
from knowing that it takes the value $1$ on indicator functions of
closed convex sets.
\end{proof}
\subsection{Matroid polytopes and decompositions}
We now wish to apply this to a decomposition $Q(M)= \cup_i Q(M_{i}) $ of matroid base polytopes.
Necessarily the $M_{i}$ will be {\it weak images} (\emph{degenerations})
of $M$, that is, ${\mathcal B}(M_{i})\subset {\mathcal B}(M)$.
If $M_{1}=(E,{\mathcal B}_{1})$ and $M_{2}=(E,{\mathcal B}_{2})$ are matroids of the same rank on the
same set $E$, then we define $M_{1} \cap M_{2} := (E, {\mathcal B}_{1}\cap {\mathcal B}_{2})$.
We write $M_{1} \cap M_{2}= \emptyset$ if $ {\mathcal B}_{1}\cap {\mathcal B}_{2}=\emptyset$ (as opposed to
$\{\emptyset\}$). Note that even when $M_{1} \cap M_{2}\not= \emptyset$,
it is not usually a matroid -- take, for example, the rank $2$ matroids
$M_1, M_2$ having bases
$$
\begin{aligned}
{\mathcal B}(M_1)&= \{13,14,23,24\},\\
{\mathcal B}(M_2)&= \{12,13,23,24,34\}
\end{aligned}
$$
so that ${\mathcal B}(M_{1}) \cap {\mathcal B}(M_{2})=\{13,23,24\}$.
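This failure of the exchange axiom is easy to confirm mechanically. The checker below is our own helper, not anything from the text; all basis families here are equicardinal, so testing exchange alone suffices.

```python
def satisfies_exchange(bases):
    """Basis-exchange axiom: for all B1, B2 in the family and each x in B1 - B2,
    some y in B2 - B1 has (B1 - {x}) | {y} again in the family."""
    fam = {frozenset(b) for b in bases}
    return all(
        any((B1 - {x}) | {y} in fam for y in B2 - B1)
        for B1 in fam for B2 in fam for x in B1 - B2
    )

B1 = [{1, 3}, {1, 4}, {2, 3}, {2, 4}]
B2 = [{1, 2}, {1, 3}, {2, 3}, {2, 4}, {3, 4}]
common = [b for b in B1 if b in B2]          # [{1,3}, {2,3}, {2,4}]

assert satisfies_exchange(B1) and satisfies_exchange(B2)
# dropping x = 3 from {1,3} admits no valid exchange from {2,4}, so:
assert not satisfies_exchange(common)
```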
However, when $Q(M_{1})$ and $Q(M_{2})$ meet along a
common face (as in a matroid base polytope
decomposition), and that face is nonempty,
the intersection $M_1 \cap M_2$ {\it will} be a matroid.
\begin{proposition}
\label{matroid-intersection}
If $M_{1}$ and $M_{2}$ are matroids of the same rank $r$ and $Q(M_{1}) \cap Q(M_{2})$ is a
nonempty common face of $Q(M_{1})$ and $Q(M_{2})$, then $M_{1} \cap M_{2}$ is a matroid of
rank $r$ and
$$Q(M_{1}\cap M_{2})=Q(M_{1})\cap Q(M_{2}).$$
\end{proposition}
\begin{proof}
Nonempty faces of matroid base polytopes are matroid base polytopes
\cite[\S2.5 Theorem 2]{GelSerg}, and so the common face $Q(M_{1}) \cap Q(M_{2})$ must
be a matroid base polytope. The vertices of $Q(M_{1}) \cap Q(M_{2})$
correspond to common bases of $M_{1}$ and $M_{2}$, that is, to elements of
${\mathcal B}_{1}\cap {\mathcal B}_{2}$.
\end{proof}
Suppose $e_{B}=\sum_{i \in B} e_{i}$ is the vertex of $Q(M)$ corresponding to the base $B$ of $M$.
We denote by
$K_{B}(M)$ the closed convex cone generated by the Minkowski sum (translate)
$Q(M) - \{e_{B}\}$. Its polar
$K_{B}^{\circ}(M)$ is the {\it normal cone} to $Q(M)$ at $e_{B}$.
Notice that by the proof of Theorem \ref{poset-expansion}, the expansion of $F(M)$ given
there can be written
\begin{equation}\label{F-decomp}
\begin{aligned}
F(M,\xx) &= \sum_{B \in {\mathcal B}(M)} F(K_B(M),\xx), \text{ where} \\
F(K_B(M),\xx) &= \sum_{f\in I\left(K_{B}(M)\right)}\xx_{f}.
\end{aligned}
\end{equation}
With this, one can prove that $F(M)$ acts as a valuation over subdivisions of $Q(M)$.
\begin{theorem}\label{valuation}
The association $Q(M) \mapsto F(M)$ is a valuation on the class of matroid polytopes:
if $Q(M)$ can be subdivided into finitely many matroid polytopes
$Q(M_{i})$, then
$$
F(M) = \sum_{j\ge1} (-1)^{j-1} \sum_{i_{1}<i_{2}<\cdots <i_{j}}
F(M_{i_{1}}\cap M_{i_{2}}\cap \cdots \cap M_{i_{j}}),
$$
with the sum over $i_{1}<i_{2}<\cdots <i_{j}$ such that
$M_{i_{1}}\cap M_{i_{2}}\cap \cdots \cap M_{i_{j}}\ne \emptyset$.
\end{theorem}
\begin{proof}
Any decomposition of $Q(M)$ induces, for each $B\in {\mathcal B}(M)$, a decomposition
of $K_{B}(M)$ into $K_{B}(M_{i})$ where $B\in {\mathcal B}(M_{i})$. (For notational
convenience, we include all $B\in {\mathcal B}(M)$ and set
$K_{B}(M_{i})= \emptyset$ when $B\notin {\mathcal B}(M_{i})$.)
This, in turn,
leads to an inclusion-exclusion relation (see, for example, \cite[Lemma I.7.2]{Barv})
\begin{align*}
\left[K_{B}(M)\right] &= \sum_{j}(-1)^{{j-1}}\sum_{i_{1}<i_{2}<\cdots<i_{j}}
\left[K_{B}(M_{i_{1}})\cap\cdots\cap K_{B}(M_{i_{j}})\right]\\
&= \sum_{j}(-1)^{{j-1}}\sum_{i_{1}<i_{2}<\cdots<i_{j}}
\left[K_{B}(M_{i_{1}}\cap\cdots\cap M_{i_{j}})\right],
\end{align*}
with the second equality following from Proposition \ref{matroid-intersection}.
Clearly, we can restrict these sums to those $i_{1}<i_{2}<\cdots<i_{j}$ for which
$B\in {\mathcal B}(M_{i_{1}})\cap\cdots\cap{\mathcal B}(M_{i_{j}})$, in which case
$M_{i_{1}}\cap\cdots\cap M_{i_{j}}\ne \emptyset$.
Thus, by Proposition \ref{polar_relation}, we have the relation
\begin{align*}
\left[I\left(K_{B}(M)\right)\right] = \sum_{j}(-1)^{{j-1}}\sum_{i_{1}<i_{2}<\cdots<i_{j}}
\left[I\left(K_{B}(M_{i_{1}}\cap\cdots\cap M_{i_{j}})\right)\right].
\end{align*}
The assertion now follows from \eqref{F-decomp}.
\end{proof}
It turns out that all of the terms with $j \geq 2$ in the summation of Theorem~\ref{valuation}
involve matroids which are {\it disconnected}. This will allow us to deduce a corollary
(Corollary~\ref{sumresolution} below) which ignores these terms, and
leaves a sum with {\it positive} coefficients.
To this end, recall that a nonempty subset $A \subseteq E$ is called a \emph{separator} of $M$ if it leads
to a direct sum decomposition of matroids:
$$
M = M|_{A} \oplus M|_{E\setminus A}.
$$
The whole ground set $E$ is itself a separator,
and the collection of separators is closed under
intersection. Hence $E$ can be written as a disjoint union of
inclusion-minimal separators of $M$.
Denote by $s(M)$ the number of minimal separators of $M$. The following is
\cite[\S2.4, Proposition 4]{GelSerg}.
\begin{proposition}
The dimension of the matroid polytope $Q(M)$ is $|E|-s(M)$.
\end{proposition}
Considering ${\mathcal{QS}ym }$ as a graded $\ZZ$-algebra,
its maximal (homogeneous) ideal is ${\mathfrak m} = \oplus_{d \geq 1} {\mathcal{QS}ym }_d$.
Given an element $f \in {\mathcal{QS}ym }$, let $\overline{f}$ denote its image
in the quotient ring ${\mathcal{QS}ym }/{\mathfrak m}^2$.
\begin{corollary}
If $E\ne \emptyset$ and the dimension of $Q(M)$ is less than $|E|-1$, then $F(M)$ lies in the
square ${\mathfrak m}^{2}$ of the maximal ideal ${\mathfrak m}$. In other words,
$\overline{F(M)}=0$ in ${\mathcal{QS}ym }/{\mathfrak m}^2$.
\end{corollary}
\begin{proof}
If $Q(M)$ has dimension less than $|E|-1$ then $s(M) > 1$, so there
exists at least one proper separator $A \subsetneq E$.
Since $F: {\mathcal Mat } \rightarrow {\mathcal{QS}ym }$ is
an algebra morphism, one has $F(M) = F( M|_{A} ) F(M|_{E\setminus A})$, and hence
$F(M)$ lies in ${\mathfrak m}^{2}$.
\end{proof}
Since $Q(M_{1}\oplus M_{2}) = Q(M_{1}) \times Q(M_{2})$, to study decomposability of
matroid polytopes $Q(M)$, it is enough to restrict attention to \emph{connected} matroids $M$, that is,
those with $s(M)=1$. For these, the maximal cells in any decomposition $Q(M)=\cup_i Q(M_i)$ will have dimension
$|E|-1$ and so will also correspond to connected matroids. All their proper intersections, however,
will be lower-dimensional and so correspond to matroids with non-trivial separators.
\begin{corollary}\label{sumresolution}
If a matroid polytope $Q(M)$ can be subdivided into finitely many matroid polytopes
$Q(M_i)$, then in ${\mathcal{QS}ym }/{\mathfrak m}^2$ one has
$
\overline{F(M)} = \sum_{i} \overline{F(M_i)}.
$
\end{corollary}
This corollary interacts nicely with a result of Hazewinkel \cite[Theorem 8.1]{Haz},
confirming a conjecture of Ditters which says that the $\ZZ$-algebra structure on
${\mathcal{QS}ym }$ is that of a {\it free} commutative algebra, that is,
a polynomial algebra. Consequently, ${\mathfrak m}/{\mathfrak m}^2$ is a {\it free} (graded) $\ZZ$-module,
and hence each homogeneous component $({\mathcal{QS}ym }/{\mathfrak m}^2)_n$ is a $\ZZ$-lattice $\ZZ^{r_n}$ of some
finite rank\footnote{In fact, these ranks $r_n$ can be made more explicit in two ways.
First, they are determined uniquely by the power series relation
$$
\prod_{n \geq 1} \frac{1}{(1-t^n)^{r_n}}
= \mathrm{Hilb}({\mathcal{QS}ym },t)
= 1 + t + 2t^2 + 4t^3 + \cdots = \frac{1-t}{1-2t}.
$$
Second, $r_n$ has a combinatorial interpretation explained in \cite[\S 4]{Haz},
as the number of words in the alphabet $\{1,2,\ldots\}$ of {\it total weight} $n$ which are
{\it star powers of elementary Lyndon words}.
In practice, we have done our computer calculations in $({\mathcal{QS}ym }/{\mathfrak m}^2)_n$
using $\{L_\alpha\}_{|\alpha|=n}$ as a $\ZZ$-basis for ${\mathcal{QS}ym }_n$,
and using $\{L_\beta L_\epsilon\}_{|\beta|+|\epsilon|=n}$ as a $\ZZ$-spanning set for $({\mathfrak m}^2)_n$.
To do this, one can expand $L_\beta L_\epsilon$ in terms of $L_\alpha$'s using
Proposition~\ref{Stanley's-P-partition-result} above:
$L_\beta L_\epsilon = F(P,\gamma,\xx)$ for a labelled poset $(P,\gamma)$ which is the disjoint
union of two chains, one with descent composition $\beta$, the other with descent composition $\epsilon$.} $r_n$.
Thus for matroids $M$ of rank $r$ on ground set $E$ of size $n$,
to understand the potential matroid base polytope decompositions of $Q(M)$, it
helps to examine the {\it additive semigroup} structure generated by
the elements $\overline{F(M)}$ within the lattice $\ZZ^{r_n}$.
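The ranks $r_n$ can be recovered from the power series relation in the footnote above by elementary bookkeeping: peel off one factor $(1-t^n)^{-r_n}$ at a time, reading off $r_n$ as the coefficient mismatch at degree $n$. The sketch below is our own computation.

```python
N = 8  # compute r_1, ..., r_N

# target Hilbert series (1-t)/(1-2t) = 1 + t + 2t^2 + 4t^3 + ...
hilb = [1] + [2 ** (n - 1) for n in range(1, N + 1)]

def mul(a, b):
    """Product of two power series (coefficient lists), truncated at degree N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

def geom(n):
    """Truncated geometric series 1/(1 - t^n)."""
    g = [0] * (N + 1)
    for k in range(0, N + 1, n):
        g[k] = 1
    return g

P = [1] + [0] * N     # running product of the factors found so far
r = {}
for n in range(1, N + 1):
    r[n] = hilb[n] - P[n]         # degree-n mismatch determines r_n
    for _ in range(r[n]):
        P = mul(P, geom(n))

assert r == {1: 1, 2: 1, 3: 2, 4: 3, 5: 6, 6: 9, 7: 18, 8: 30}
```

For $n \geq 2$ these values agree with the number of binary Lyndon words of length $n$, consistent with the Lyndon-word interpretation mentioned in the footnote.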
\begin{defn} \rm \
Say that $\overline{F(M)}$ is {\it decomposable} if there exist matroids $M_i$
with $\overline{F(M)} = \sum_{i=1}^t \overline{F(M_i)}$, and indecomposable
otherwise.
Say that $\overline{F(M)}$
is {\it weak image decomposable} if it is decomposable with
each $M_i$ a weak image of $M$, that is,
${\mathcal B}(M_i) \subset {\mathcal B}(M)$.
\end{defn}
In other words, $\overline{F(M)}$ is indecomposable if and only
if it is an element of the unique {\it Hilbert basis} \cite[Chapter 13]{Sturmfels}
for the additive semigroup generated by the $\overline{F(M)}$.
Corollary~\ref{sumresolution} implies that $Q(M)$ is indecomposable unless
$\overline{F(M)}$ is weak image decomposable. However, decomposability (and hence
also weak image decomposability) of
$\overline{F(M)}$ is easily checked using computer algebra packages that
can compute the {\it toric ideal} and/or the Hilbert basis for the additive
semigroup generated by the $\overline{F(M)}$ within ${\mathcal{QS}ym }/{\mathfrak m}^2$;
see \cite[Chapters 4 and 13]{Sturmfels}.
\begin{example} \rm \label{rank2}
A (loopless) rank $2$ matroid $M$ on $n$ elements is determined up to isomorphism by the partition
$\lambda(M)$ of $n$ that gives the sizes $\lambda_i$ of its parallelism classes.
Also, $M_1$ is a weak image of $M_2$, up to isomorphism,
if and only if the partition $\lambda(M_1)$ is refined by the partition $\lambda(M_2)$. Note that
$M$ is connected if and only if $\lambda$ has at least $3$ parts. Hence
the connected weak images of $M$ correspond to all coarsenings
of $\lambda(M)$ with at least $3$ parts.
In particular, if $\lambda(M)$ has {\it exactly} $3$ parts then $\overline{F(M)}$
must be weak image indecomposable and $Q(M)$ must be indecomposable.
In fact, by computer calculations,
we have verified for $3 \le n \le 9$ that the rank $2$ matroids $M$ for which $\lambda(M)$ has
exactly $3$ parts form the Hilbert basis for the semigroup generated by the $\overline{F(M)}$, and
those for which $\lambda(M)$ has more than $3$ parts all have $Q(M)$ decomposable
(and hence $\overline{F(M)}$ weak image decomposable). The following question was left open
in an earlier version of this paper, but has recently been resolved in the affirmative by
work of Luoto \cite[Corollary 6.7]{Luoto}:
\begin{question}
Fix $n$, and consider the semigroup generated by $\overline{F(M)}$ within ${\mathcal{QS}ym }/{\mathfrak m}^2$
as one ranges over all matroids $M$ of rank $2$ on $n$ elements.
Is the Hilbert basis for this semigroup indexed by
those $M$ for which $\lambda(M)$ has exactly $3$ parts?
\end{question}
We should mention that a convenient parametrization of all
rank $2$ matroid base polytope decompositions was given by Kapranov \cite[\S 1.3]{Kapranov},
who showed that all decompositions in this (rank $2$) setting
can be achieved by a sequence of hyperplane splits.
\end{example}
\begin{example} \rm \label{rank3}
Considering all 15 connected rank 3 matroids $M$ with $n:=|E|=6$ (see, for example, \cite[Fig. 2]{GelSerg}),
we found five for which $\overline{F(M)}$ is indecomposable.
These are illustrated in Figure \ref{extremes-figure}.
In particular, the two matroids $M_{1}$ and $M_{2}$ in
Figure~\ref{IsoTutte-figure}(a) satisfy $\overline{F(M_{1})}=\overline{F(M_{2})}$,
which can be written three different ways as sums of these indecomposables
\begin{align*}
\overline{F(M_{i})} &= \overline{F(M_{b})} + \overline{F(M_{c})} + 2 \overline{F(M_{d})}\\
&= 2 \overline{F(M_{a})} + \overline{F(M_{e})}\\
&= \overline{F(M_{a})} + 3 \overline{F(M_{d})} .
\end{align*}
For $M_{2}$, all three of these additive decompositions
correspond to matroid base polytope decompositions of $Q(M_{2})$,
as does the first for $M_{1}$.
However, since $M_{a}$ is not a weak image of $M_{1}$, the second and third cannot
correspond to such decompositions of $Q(M_{1})$.
\end{example}
\begin{question}
Does $\overline{F(M)}$ being weak image decomposable in ${\mathcal{QS}ym }/{\mathfrak m}^2$ imply that
$Q(M)$ is decomposable?
\end{question}
We see no reason, a priori, for this to hold,
but the matroids considered in Examples~\ref{rank2} and \ref{rank3} provide no
counterexamples. In fact, for all of the matroids $M$ in those examples,
one has $Q(M)$ indecomposable if and only if $\overline{F(M)}$ is indecomposable
if and only if $M$ is minimally connected
({\it i.e.}, all proper weak images of $M$ have a nontrivial separator).
\begin{figure}
\begin{center}
\includegraphics[scale=0.58]{extremes.eps}
\end{center}
\caption{
\label{extremes-figure}
The connected rank three matroids on 6 points for which $Q(M)$ is
indecomposable, represented
as affine point configurations in the plane. The first
matroid contains a tripled point, and the second contains two pairs of doubled
points.
}
\end{figure}
\begin{example} \rm \
It is worth noting that among the rank $3$ matroids with $n=6$ elements,
one finds the first matroid base polytope decompositions which are {\it not}
hyperplane splits.
For example, if $M$ is the rank 3 matroid on $E=\{1,2,3,4,5,6\}$ having every
triple
but $\{1,2,3\}$, $\{1,4,5\}$ and $\{3,5,6\}$ as bases, then both
$\overline{F(M)}$
and $Q(M)$ split into three indecomposable pieces, each isomorphic to the
matroid
$M_d$ in Figure \ref{extremes-figure}.
This subdivision of $Q(M)$ cannot be obtained via hyperplane splits.
\end{example}
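The triples listed in this example do form the bases of a matroid: the three excluded triples are lines pairwise meeting in at most one point, so this is a rank $3$ paving matroid. One can also confirm the exchange axiom mechanically (the checker below is our own sketch):

```python
from itertools import combinations

def satisfies_exchange(bases):
    """Basis-exchange axiom for an equicardinal family of sets."""
    fam = {frozenset(b) for b in bases}
    return all(
        any((B1 - {x}) | {y} in fam for y in B2 - B1)
        for B1 in fam for B2 in fam for x in B1 - B2
    )

excluded = [{1, 2, 3}, {1, 4, 5}, {3, 5, 6}]
bases = [set(t) for t in combinations(range(1, 7), 3) if set(t) not in excluded]

assert len(bases) == 17          # 20 triples minus the 3 excluded lines
assert satisfies_exchange(bases)
```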
\section{Comparison to the other matroid invariants}
\label{T-G-section}
One might ask how fine a matroid invariant $F(M)$ is, that is, how well it
distinguishes non-isomorphic matroids, say in comparison with well-studied
matroid invariants like the Tutte polynomial.
Certainly the kernel of the Hopf algebra map $F: {\mathcal Mat } \rightarrow {\mathcal{QS}ym }$
contains $p:=[M_\isthmus]-[M_\looop]$ by Example~\ref{tiny-examples},
and hence contains the smallest Hopf ideal $I$ generated
by $p$. In fact, since $p$ is primitive (as it is of degree $1$),
the Hopf ideal $I$ which it generates coincides with the principal ideal
consisting of all multiples of $p$. Consequently $F$ factors through the quotient ${\mathcal Mat } /I$, that is,
through the Hopf algebra of matroids modulo ``loops = coloops''.
Beyond this inability to distinguish loops from coloops, one might
ask how discriminating $F(M)$ is.
The next two examples show that it certainly doesn't distinguish all loopless and
coloopless matroids up
to isomorphism (which would have been too much to ask), but it
at least does better than the well-known Tutte polynomial in some instances.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{IsoTutte_examples.eps}
\end{center}
\caption{
\label{IsoTutte-figure}
(a) Two matroids $M_1, M_2$, represented by affine point configurations
having the same Tutte polynomials and the same quasisymmetric functions.
(b) Two matroids $M_3, M_4$ having the same Tutte polynomials
but different quasisymmetric functions.
}
\end{figure}
\begin{example} \rm \
\label{six-element-example}
Figure~\ref{IsoTutte-figure}(a) depicts two matroids, represented as affine point
configurations, having the same Tutte polynomial (because $M_1/e \cong M_2/e$ and
$M_1 \backslash e \cong M_2 \backslash e$, where $e$ is the labelled point in
each case). Direct computer calculation (using Theorem~\ref{poset-expansion}) shows that
$F(M_1)=F(M_2)$.
\end{example}
These two examples were borrowed from Brylawski and Oxley's survey on the Tutte polynomial
\cite[pp. 197]{White3}; they are the smallest examples of non-isomorphic matroids with
the same Tutte polynomial.
\begin{example} \rm \
\label{seven-element-example}
Figure~\ref{IsoTutte-figure}(b) depicts two matroids $M_3, M_4$ having the same Tutte polynomials
(since $M_3/e \cong M_4/e$ and $M_3 \backslash e \cong M_4 \backslash e$), but
different quasisymmetric functions: it turns out that the coefficient of $L_{(1,3,3)}$
in $F(M_3)$ is $16$, while in $F(M_4)$ it is $18$.
\end{example}
These two examples were again taken from Brylawski and Oxley's survey
\cite[pp. 133]{White3}, where they point out other features that $M_3, M_4$ share and do
not share.
Note that Example~\ref{seven-element-example} rules out the possibility of computing
$F(M)$ purely in terms of $F(M \backslash e), F(M/e)$.
\begin{question} \rm \
Even though there is no direct deletion-contraction computation of $F(M)$ of the sort
for which one might naively have hoped, does this rule out other kinds of recursions?
\end{question}
In particular, one is tempted to try the following.
Theorem~\ref{poset-expansion} says
$$
\begin{aligned}
F(M,\xx) &= \sum_{B \in {\mathcal B}(M)} F(P_B,\gamma_B,\xx) \\
&= \sum_{B \in {\mathcal B}(M): e \notin B} F(P_B,\gamma_B,\xx) +
\sum_{B \in {\mathcal B}(M): e \in B} F(P_B,\gamma_B,\xx).
\end{aligned}
$$
Can one better identify the two summands in this last equation?
Are they instances of some quasisymmetric functions that should be
associated to objects more general than matroids?
Lastly, we mention an invariant $g_M(t)$ for a matroid $M$ (representable over $\QQ$)
recently introduced by Speyer \cite{Speyer}, which shares some common features with
$F(M)$. Among other of its properties, this invariant $g_M(t)$ is
\begin{enumerate}
\item[(i)] a polynomial in one variable $t$ with integer coefficients
(conjecturally nonnegative),
\item[(ii)]
multiplicative under direct sums: $g_{M_1 \oplus M_2} = g_{M_1} g_{M_2}$,
\item[(iii)] invariant under duality of matroids: $g_M = g_{M^\ast}$,
\item[(iv)] additive under any decomposition of the matroid base polytope
$Q(M)=\cup_{i=1}^t Q(M_i)$, where $Q(M_1),\ldots,Q(M_t)$ are all of the {\it interior}
faces of the decomposition, in the sense that $g_M(t) = \sum_{i} g_{M_i}(t)$.
\end{enumerate}
\begin{question} \rm \
Is $g_M(t)$ related to (some specialization of) $F(M)$?
\end{question}
\begin{remark} \rm \
In personal communication, Speyer has pointed out that all three invariants of
a matroid $M$ discussed in this section, behave either {\it valuatively} or
{\it additively} under matroid base polytope decompositions:
\begin{enumerate}
\item[$\bullet$] the quasisymmetric function $F(M, \xx)$ behaves valuatively according to
Theorem~\ref{valuation},
\item[$\bullet$] Speyer has checked that the Tutte polynomial $T_M(x,y)$ also behaves valuatively
(via a small calculation using the {\it corank-nullity} formula for $T_M(x,y)$;
see also \cite{ArdilaFinkRincon}), and
\item[$\bullet$] his invariant $g_M(t)$ behaves additively by property (iv) above.
\end{enumerate}
\noindent
Speyer then used this to explain why all three invariants take the same value for the
two matroids $M_1, M_2$ shown in Figure~\ref{IsoTutte-figure}(a): either of the matroid
base polytopes $Q(M_i)$ for $i=1,2$ can be obtained from the hypersimplex $Q(U(3,6))$
associated to the uniform matroid of rank $3$ on the same six elements, by splitting off
with hyperplanes (in any order) two other polytopes $Q(M_i'), Q(M_i'')$, that is,
$$
Q(U(3,6)) = Q(M_i) \cup Q(M_i') \cup Q(M_i'').
$$
Furthermore, the $M_i', M_i''$ are {\it all isomorphic} as matroids
$$
M_1' \cong M_2' \cong M_1'' \cong M_2''
$$
and have {\it isomorphic intersections}
$$
M_1 \cap M_1' \cong M_2 \cap M_2' \cong M_1 \cap M_1'' \cong M_2 \cap M_2''.
$$
As a consequence, a matroid invariant $f(M)$ will have
$$
\begin{aligned}
& f(U(3,6)) = \\
&
\begin{cases}
f(M_i) + f(M_i') + f(M_i'') - f(M_i \cap M_i') - f(M_i \cap M_i'') &\text{ if }f\text{ is valuative},\\
f(M_i) + f(M_i') + f(M_i'') + f(M_i \cap M_i') + f(M_i \cap M_i'') &\text{ if }f\text{ is additive}
\end{cases}
\end{aligned}
$$
for $i=1,2$. In either case, this forces $f(M_1)=f(M_2)$.
This strongly suggests trying to define a ``universal'' valuative invariant of
matroids, following McMullen's {\it polytope algebra},
and in particular his section \cite[\S 20]{McMullen} dealing with valuations invariant under a
finite group action. Build an abelian group starting with the free abelian group
on basis elements $[M]$ indexed by matroids $M$, imposing the valuation relation
for each matroid base polytope decomposition of $Q(M)$, and the relation $[M]=[M']$ if $M$ and
$M'$ are isomorphic as matroids\footnote{As McMullen points out in \cite[\S 20]{McMullen},
imposing invariance under finite group action (such as matroid isomorphism)
seems to require sacrificing the multiplicative structure in the polytope algebra
coming from Minkowski addition. It also appears that in our situation
one must sacrifice translation-invariance, and the structure coming from dilatations,
as the vertices of each matroid base polytope $Q(M)$
are required to be $\{0,1\}$-vectors whose coordinates sum to the rank $r(M)$.}.
Valuative matroid invariants are exactly the linear functionals on this abelian group.
\begin{problem} \rm \
Study the structure of this abelian group. Are there special classes of matroids
which generate it?
\end{problem}
\noindent
For example, a conjecture of Speyer \cite[Conjecture 11.3]{Speyer} would follow
if this abelian group were generated by the classes $[M]$ where $M$ runs over
all direct sums of {\it series-parallel matroids}.
\end{remark}
\section{Generalization to generalized permutohedra}
\label{generalized-permutohedra-section}
It turns out that the proofs of Proposition~\ref{quasisymmetry}, Theorem~\ref{poset-expansion},
Corollary~\ref{last-coefficient}, Theorem~\ref{reciprocity-theorem}, and Theorem~\ref{valuation}
generalize in a straightforward way to give results about a general class of convex polytopes
studied recently by Postnikov \cite{Postnikov}; see also
\cite{SturmfelsEtAl} and \cite{PRW}.
Given a convex
polytope $Q$ in $\RR^n$, the following conditions are well-known to
be equivalent \cite[Proposition 7.12]{Ziegler}:
\begin{enumerate}
\item[$\bullet$] Every edge of $Q$ lies in one of the directions
$\{e_i - e_j: 1 \leq i \neq j \leq n\}$.
\item[$\bullet$] The normal fan of $Q$ in $(\RR^n)^*$ is refined by the
usual {\it braid arrangement} (or {\it type $A_{n-1}$ Weyl chamber fan}).
\item[$\bullet$] The polytope $Q$ is a Minkowski summand of some realization of the
{\it permutohedron} as a Minkowski sum of line segments (possibly of different lengths)
in the directions
$\{e_i - e_j : 1 \leq i < j \leq n\}$.
\end{enumerate}
Say that $Q$ is a {\it generalized $n$-permutohedron} when any of these
equivalent conditions hold\footnote{Actually, the definition of generalized permutohedra given
in \cite{Postnikov} looks slightly different, but is shown to be equivalent to these
conditions in \cite[Appendix]{PRW}.}.
\begin{example} \rm \
\label{matroid-base-polytope-example}
Given a matroid $M$ on ground set $E=[n]$, the matroid base polytope $Q(M)$ defined
in Section \ref{decomposition-section} is a generalized
$n$-permutohedron \cite[\S2.2, Theorem 1]{GelSerg}, a
fact that played a crucial role in the proof of Theorem~\ref{poset-expansion}.
\end{example}
Given a polytope $Q$ in $\RR^n$, say that a function $f:[n] \rightarrow \PP$
(which we think of as giving an element of $(\RR^n)^*$) is {\it $Q$-generic}
if $f$ maximizes over $Q$ uniquely at a vertex. In other words, $f$ lies in
the {\it interior} of an $n$-dimensional cone in the normal fan for $Q$.
One can then prove the following:
\begin{theorem}
\label{permutohedron-summand-theorem}
If $Q$ is a generalized $n$-permutohedron in $\RR^n$, then
\begin{enumerate}
\item[(i)] the power series
$$
F(Q,\xx):=\sum_{\substack{Q\textrm{-generic} \\f:[n] \rightarrow \PP}} \xx_f
$$
is quasisymmetric, with
\item[(ii)]
an expansion in terms of $P$-partitions enumerators as
$$
F(Q,\xx) = \sum_{ \textrm{vertices }v\textrm{ of }Q} F(P_v,\gamma_v, \xx)
$$
where $(P_v,\gamma_v)$ are certain strictly labelled posets indexed
by the vertices of $Q$.
\item[(iii)]
Furthermore, the coefficients $c^Q_\alpha$ in its expansion
$F(Q,\xx) =\sum_{\alpha} c^Q_\alpha L_\alpha$
\begin{enumerate}
\item[(a)] are nonnegative,
\item[(b)] sum to $n!$, and
\item[(c)] have $c^Q_{1,1,\ldots,1}$ equal to the number of vertices of $Q$.
\end{enumerate}
\item[(iv)] The antipode $S$ on ${\mathcal{QS}ym }$ satisfies
$$
S( F(Q,\xx) ) = (-1)^n {{F}^{\ast}}(Q,\xx)
$$
where
$$
{{F}^{\ast}}(Q,\xx) := \sum_{f:[n] \rightarrow \PP} |\{ f\textrm{-minimizing vertices of }Q\}| \; \xx_f.
$$
\item[(v)] The two polynomials $\phi(Q,m), {{\phi}^{\ast}}(Q,m)$ in the variable $m$ defined
by specializing $F(Q,\xx), {{F}^{\ast}}(Q,\xx)$ to $\xx=1^m$ satisfy
$$
\phi(Q,-m) = (-1)^n {{\phi}^{\ast}}(Q,m).
$$
\item[(vi)] Suppose $Q = \cup_i Q_i$ is a decomposition of $Q$ into finitely many
permutohedron summands $Q_i$, in which $Q_i \cap Q_j$ is a common face of $Q_i$ and
$Q_j$ for all $i,j$. Then
$$
F(Q,\xx) =
\sum_{j\ge1} (-1)^{j-1} \sum_{i_{1}<i_{2}<\cdots <i_{j}}
F(Q_{i_{1}}\cap Q_{i_{2}}\cap \cdots \cap Q_{i_{j}}),
$$
where the sum is over those terms in which $Q_{i_{1}}\cap Q_{i_{2}}\cap \cdots \cap Q_{i_{j}}$
is nonempty.
\end{enumerate}
\end{theorem}
\noindent
In fact, the posets $P_v$ appearing in the theorem have a very simple description:
$P_v$ is the transitive closure of the binary relation on $[n]$ which
has $i <_{P_v} j$ if there exists an edge of $Q$ of the form
$\{v,v'\}$ with $v'-v = e_j - e_i$.
In the remainder of this section, we discuss three natural
families of generalized $n$-permutohedra that have occurred in the literature.
\begin{problem} \rm \
\label{gen-perm-qsym}
Study the quasisymmetric functions $F(Q,\xx)$ associated with any
of these families of generalized permutohedra $Q$.
\end{problem}
\subsection{Graphic zonotopes and Stanley's chromatic symmetric function.} \ \\
Let $G$ be a simple graph on vertex set $[n]$. Let $Z_G$ denote the Minkowski sum of
line segments in the directions
$$
\{e_i -e_j: \{i,j\} \text{ is an edge of }G\}.
$$
Then $Z_G$ is a generalized $n$-permutohedron;
the $n$-permutohedron itself equals $Z_{K_n}$ where $K_n$ is the complete graph
on $n$ vertices.
It is easy to see that a function $f:[n] \rightarrow \PP$ is $Z_G$-generic
if and only if it is a {\it proper coloring} of the vertex set $[n]$ of $G$.
One concludes that $F(Z_G,\xx)$ is the same as the {\it chromatic symmetric
function} $X_G(x_1,x_2,\ldots)$ introduced by Stanley \cite{Stanley2}, and studied
further by others in recent years. Many of the results of this paper were
inspired by his work, and in particular Theorem~\ref{permutohedron-summand-theorem}
generalizes a few of the facts about $X_G$.
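For a small example of the identification $F(Z_G,\xx)=X_G$ (spelled out here for concreteness), take $G=K_3$, so that $Z_{K_3}$ is the hexagonal $3$-permutohedron. Proper colorings of $K_3$ are exactly the injective $f:[3] \rightarrow \PP$, whence
$$
F(Z_{K_3},\xx) = X_{K_3} = \sum_{f \textrm{ injective}} x_{f(1)} x_{f(2)} x_{f(3)} = 6 \sum_{i<j<k} x_i x_j x_k = 6L_{1,1,1},
$$
whose coefficients are nonnegative, sum to $3!=6$, and have $c_{1,1,1}=6$ equal to the number of vertices of the hexagon, just as Theorem~\ref{permutohedron-summand-theorem}(iii) predicts.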
It is also known (see \cite[Example 4.5]{AguiarBergeronSottile})
that the map $G \mapsto X_G$ can be interpreted as a Hopf morphism
between a certain Hopf algebra of graphs and the
Hopf algebra $\Lambda$ of {\it symmetric functions} inside the quasisymmetric functions
${\mathcal{QS}ym }$. As far as we know, this morphism is of a different nature than our Hopf morphism
$F: {\mathcal Mat } \rightarrow {\mathcal{QS}ym }$.
\subsection{Polymatroids and flag matroids.} \ \\
Example~\ref{matroid-base-polytope-example} alludes to
a famous result of Gelfand, Goresky, MacPherson, and Serganova, characterizing
matroids in terms of their matroid base polytopes, which we rephrase slightly here.
\begin{theorem}
(see \cite[\S2.2, Theorem 1]{GelSerg}, \cite[Theorem 1.11.1]{BorovikGelfandWhite})
Let ${\mathcal B}$ be a collection of $r$-subsets of $[n]$, and $Q$ the
convex hull of their characteristic vectors in $\{0,1\}^n \subset \RR^n$.
Then ${\mathcal B}$ is the collection of bases ${\mathcal B}(M)$ for some matroid $M$ on ground set $E=[n]$
(and $Q=Q(M)$ is the associated matroid base polytope) if and only if
$Q$ is a generalized $n$-permutohedron.
\end{theorem}
This led Gelfand, Goresky, MacPherson, and Serganova to the notion of
{\it Coxeter matroids} \cite{BorovikGelfandWhite}. A Coxeter matroid is
the result of taking the characterization in the previous theorem and
\begin{enumerate}
\item[$\bullet$]
replacing $r$-subsets of $[n]$, which can be thought of as
the cosets of maximal parabolic subgroups in the Coxeter
group of type $A_{n-1}$, with cosets of an arbitrary parabolic subgroup in an
arbitrary finite Coxeter group,
\item[$\bullet$] replacing the characteristic vectors of $r$-subsets
with $W$-translates of sums of fundamental dominant weights,
\item[$\bullet$] replacing generalized $n$-permutohedra with Minkowski summands of
the zonotopes generated by other root systems.
\end{enumerate}
When the Coxeter group is of type $A_{n-1}$, considering arbitrary parabolic subgroups
instead of just maximal ones leads to the notion of a {\it flag matroid}, and its
{\it flag matroid base polytope}. These are generalized $n$-permutohedra
that generalize the matroid base polytopes: their vertices are vectors in $\NN^n$
that no longer necessarily sum to $r$,
but obey certain constraints on the sizes of their coordinates; see \cite[\S 1.11]{BorovikGelfandWhite}.
Generalizing in another direction,
a {\it discrete polymatroid base polytope of rank }$r$ (see \cite{HerzogHibi})
is a generalized $n$-permutohedron,
each of whose vertices has nonnegative integer coordinates summing to $r$.
These polytopes were introduced by Edmonds \cite{Edmonds} in the context of combinatorial optimization.
\subsection{Graph-associahedra.} \ \\
Building on work of others (De Concini-Procesi, Davis-Januszkiewicz-Scott, and
Carr-Devadoss),
Postnikov \cite{Postnikov} showed that the generalized $n$-permutohedra contain
an interesting subclass of polytopes
called {\it graph-associahedra}, indexed by simple graphs $G$ on vertex set $[n]$. Within
this subclass, the associahedra and cyclohedra correspond to the cases where the graphs $G$ are
paths and cycles, respectively.
\section{Appendix: surjectivity and new bases for ${\mathcal{QS}ym }$}
\label{appendix}
\subsection{Sketch of surjectivity}
\label{surjectivity-section}
The goal of this appendix is to prove the following.
\begin{theorem}
\label{surjectivity-theorem}
The Hopf algebra morphism $F: {\mathcal Mat } \rightarrow {\mathcal{QS}ym }$ is surjective when one extends the scalars
to a field $\FF$ of characteristic zero.
\end{theorem}
\noindent
We observe here that the morphism $F$ is definitely {\it not} surjective without extending
scalars. The image of the map ${\mathcal Mat }_2 \overset{F}{\rightarrow} {\mathcal{QS}ym }_2$ on homogeneous components
of degree $2$ is a sublattice of index $2$ within
${\mathcal{QS}ym }_2$: there are only four non-isomorphic matroids on $2$ elements,
and their images under $F$ are each either $L_{1,1}+L_{2}$ or $2L_{1,1}$.
Our approach will be to define, for each degree $n$,
a family of $2^{n-1}$ matroids on ground set $E=[n]$, whose images under $F$ span
${\mathcal{QS}ym }_n$ with rational coefficients. It turns out that it will suffice to take
a subfamily of a family of $2^n$ matroids
which were called {\it freedom matroids} in \cite{CrapoSchmitt1}, and
which we will call {\it $PI$-matroids} here. They were considered in the context
of face enumeration in \cite{Liu}, and they arose again in \cite{BER} in connection
with combinatorial operators on zonotopes.
Given a matroid $M$,
let $I(M):=M \oplus M_\isthmus$ be a single-element extension of $M$ by
an isthmus. Let $P(M)$ be a single-element extension of $M$ which is the
{\it principal extension of $M$ along the improper flat}, that is, one adjoins
a new element $e$ to the ground set, which is generic while obeying the constraint
that it does not increase the rank.
Say that $M$ is a {\it $PI$-matroid} if it can be
obtained from the empty matroid $M_\varnothing$ on
$E=\varnothing$ by performing a sequence of repeated $M \mapsto I(M)$
and/or $M \mapsto P(M)$ operations. It happens that every matroid with $|E|\leq 3$
is isomorphic to a $PI$-matroid.
Let $0\{0,1\}^{n-1}$ denote the collection of all
binary strings $\sigma \in \{0,1\}^n$ that begin with a $0$.
Given $\sigma$ in $0\{0,1\}^{n-1}$, let $M_\sigma$ be the $PI$-matroid
built from this sequence beginning with an empty matroid, where one performs
the $I$ operation for each
$0$ and the $P$ operation for each $1$ in $\sigma$. For example, the sequence $01111$
builds the $PI$-matroid $M_{01111}$ of rank $1$, consisting of $5$ parallel elements.
We will prove the following refinement of Theorem~\ref{surjectivity-theorem}.
\begin{theorem}
\label{refined-surjectivity-thm}
The quasisymmetric functions
$$
\{ F(M_\sigma) : \sigma \in 0\{0,1\}^{n-1} \}
$$
span ${\mathcal{QS}ym }_n \otimes \FF$ whenever $n!$ is invertible in $\FF$.
\end{theorem}
\begin{remark} \rm \
The operation $M \mapsto I(M)$ which adds an isthmus to $M$
has a predictable effect on $F(M)$:
$$
F( I(M)) = L_1 \cdot F(M).
$$
Seeing this, one might hope to approach Theorem~\ref{refined-surjectivity-thm}
by understanding how $F( P(M))$ relates
to $F(M)$. Unfortunately, $F( P(M))$ does not depend solely on $F(M)$ via
some operation in ${\mathcal{QS}ym }$. For example, the two matroids
$$
\begin{aligned}
M_1 & := M_\isthmus \oplus M_\isthmus, \textrm{ and }\\
M_2 & := M_\isthmus \oplus M_\looop
\end{aligned}
$$
have $F(M_1)=F(M_2) \, (= L_1^2)$; however,
$$
\begin{aligned}
F( P(M_1)) &= 3L_{2,1}+3L_{1,1,1}, \textrm{ while}\\
F( P(M_2) ) &= 2L_{2,1} + 2L_{1,2} + 2L_{1,1,1}.
\end{aligned}
$$
\end{remark}
Instead, the proof of Theorem~\ref{refined-surjectivity-thm} (and hence
Theorem~\ref{surjectivity-theorem}) proceeds in three steps,
carried out over this and the next two subsections.
\medskip
\noindent \emph{Step 1.} Introduce a family of posets $R_\sigma$ on $[n]$, also indexed
by $0\{0,1\}^{n-1}$, and show that the expansion of the
$F(M_\sigma,\xx)$ in terms of the {\it strictly labelled} $P$-partition enumerators for the $R_\sigma$
is triangular in some ordering. Furthermore, the diagonal coefficients in this expansion
are products of binomial coefficients that all divide $n!$.
\medskip
\noindent \emph{Step 2.} Introduce another family of labelled posets $Q_\sigma$ on $[n]$,
also indexed by $0\{0,1\}^{n-1}$, which are easily seen to form a $\ZZ$-basis for ${\mathcal{QS}ym }$,
and have some nice properties.
\medskip
\noindent \emph{Step 3.} Show that the expansion of the {\it naturally labelled}
$P$-partition enumerators for the $R_\sigma$ in terms of the $P$-partition
enumerators for the $Q_\sigma$ is unitriangular with respect to some ordering.
From Step 2 it then follows that the former $P$-partition enumerators also give a $\ZZ$-basis
for ${\mathcal{QS}ym }$.
\medskip
\noindent
Then by equation \eqref{natural-strict-relation},
the strict $P$-partition enumerators of the $R_\sigma$ also give a $\ZZ$-basis for
${\mathcal{QS}ym }$, and together with Step 1 this proves Theorem~\ref{refined-surjectivity-thm}.
\medskip
Step 1 is completed in the remainder of this subsection,
while Steps 2 and 3 are achieved in Subsections \ref{Q-basis-section} and \ref{R-Q-expansion-section}. As mentioned in an earlier footnote, Luoto \cite[\S 7.4]{Luoto} has recently
found an alternative to Steps 2 and 3, by expanding the $R_\sigma$ basis elements
unitriangularly in terms of his ``matroid-friendly'' basis for ${\mathcal{QS}ym }$.
\medskip
Given $\sigma$ in $0\{0,1\}^{n-1}$, let $R_\sigma$ be the
labelled poset of height 1 (or 0) on $[n]$
having $i <_{R_\sigma} j$ if $\sigma_i = 0, \sigma_j = 1$ and $i<j$.
Each such $\sigma$ also defines a partition of the set $[n]$ into
intervals that we will call the {\it blocks} $A_1,\ldots,A_t$ of $\sigma$,
by breaking $[n]$ between the
positions $i,i+1$ where $(\sigma_i,\sigma_{i+1})= (1,0)$.
We also define a vector $(z_1,\ldots,z_t)$ associated to $\sigma$ as follows:
$z_i$ is the number of positions $j$ in
the block $A_i$ for which $\sigma_j=0$. It is not hard to see that
one can recover $\sigma$ uniquely from the blocks $(A_1,...,A_t)$ and
the values $(z_1,\ldots,z_t)$.
\begin{example} \rm \
Let $n=8$ and let $\sigma$ be the string in $0\{0,1\}^{n-1}$ given by
$$
\begin{matrix}
\sigma = &0&1&0&0&1&1&1&0 \\
&1&2&3&4&5&6&7&8.
\end{matrix}
$$
Then $R_\sigma$ is the labelled poset on $[8]$ in which the
minimal elements are $1,3,4,8$, the maximal elements
are $2,5,6,7$ (and $8$), and the order relations are
$$
\begin{aligned}
1 &<2,5,6,7 \\
3,4 &<5,6,7\\
\end{aligned}
$$
as illustrated in Figure~\ref{R-poset-figure}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.30]{R-poset.eps}
\end{center}
\caption{
\label{R-poset-figure}
The poset $R_{01001110}$, along with its associated blocks $(A_1,A_2,A_3)$.
}
\end{figure}
Also, $\sigma$ has associated to it the blocks $(A_1,A_2,A_3)=(12,34567,8)$,
and vector $(z_1,z_2,z_3)=(1,2,1)$.
The blocks $A_i$ are separated by dotted lines in Figure~\ref{R-poset-figure}.
\end{example}
It should be clear that the posets
$R_\sigma$ are characterized up to isomorphism by the following {\it stable/shifted labelling} property.
\begin{proposition}
\label{shifted-characterization}
A labelled poset $P$ on $[n]$ is isomorphic to $R_\sigma$ for some $\sigma$ if and only if it has height at most
one, and can be relabelled so that each minimal (resp. maximal) element has its upward (resp. downward)
neighbors in $P$ forming a final (resp. initial) segment of $[n]$.
\end{proposition}
\begin{proposition}
\label{PI-triangularity}
The lexicographic order $<_{lex}$ on $0\{0,1\}^{n-1}$ induced by $0 < 1$
makes the expansion of $\{ F(M_\sigma,\xx) : \sigma \in 0\{0,1\}^{n-1} \}$
in terms of the strict
$P$-partition enumerators $\{ F(R_\sigma,\gamma_\sigma,\xx) : \sigma \in 0\{0,1\}^{n-1} \}$
triangular, of the following form:
\begin{equation}
\label{Step1-expansion}
F(M_\sigma,\xx) =
\sum_{ \tau \leq_{lex} \sigma } c_{\sigma,\tau} F(R_\tau,\gamma_\tau,\xx),
\end{equation}
where $c_{\sigma,\tau} \in \ZZ$, and $\gamma_\tau$ is any strict labelling of the poset $R_\tau$.
Furthermore, the diagonal coefficient $c_{\sigma,\sigma}$ can be expressed in terms
of the blocks $(A_1,\ldots,A_t)$ and vector $(z_1,\ldots,z_t)$ associated to $\sigma$
as follows:
$$
c_{\sigma,\sigma} = \prod_{i=1}^t \binom{|A_i|}{z_i}.
$$
\end{proposition}
\begin{proof}
We use Theorem~\ref{poset-expansion} and expand
$$
F(M_\sigma,\xx) = \sum_{B \in {\mathcal B}(M_\sigma)} F(P_B,\gamma_B,\xx).
$$
The bases $B$ of $M_\sigma$ are easily analyzed in terms of
the blocks $(A_1,\ldots,A_t)$ and vector $(z_1,\ldots,z_t)$ associated to $\sigma$
(cf. \cite[Proposition 5.1]{CrapoSchmitt1}).
Note that $M_\sigma$ will have rank $r:=z_1+\cdots+z_t$, and
it has a distinguished chain of flats
$$
\varnothing \subset F_1 \subset \cdots \subset F_t = [n]
$$
in which $F_i:=A_1 \sqcup A_2 \sqcup \cdots \sqcup A_i$.
Bases $B$ of $M_\sigma$ are then simply the $r$-subsets $B$ of $[n]$ that
contain for each $i=1,\ldots,t$ at most $z_1 + z_2 + \cdots + z_i$
elements from the flat $F_i$.
Given any base $B$ of $M_\sigma$, we claim that the poset $P_B$ is
isomorphic to some $R_\tau$. To see this, we use Proposition~\ref{shifted-characterization}.
We know that $P_B$ has height at most $1$. Relabel its minimal (resp. maximal) elements,
that is, those in $B$ (resp. $[n] \backslash B$) by
an initial (resp. final) segment of $[n]$, with those lying in block $A_i$ coming earlier than those
in block $A_j$ whenever $i < j$. It is then easy to check that any minimal (resp. maximal)
element of $P_B$ will have its upward (resp. downward) neighbors in $P_B$ forming a
final (resp. initial) segment of $[n]$.
The diagonal terms on the right side of \eqref{Step1-expansion} come from
bases $B$ of $M_\sigma$ containing {\it exactly} $z_i$ elements of $F_i\backslash F_{i-1}$ for each $i$;
let us call these the {\it diagonal} bases of $M_\sigma$.
For example, the lexicographically earliest
base $B_0$ for $M_\sigma$ is a diagonal base,
and it is not hard to see that $P_{B_0} = R_\sigma$ on the nose; see Figure~\ref{diagonal-bases-figure} for an example.
There are a total of $\prod_{i=1}^t \binom{|A_i|}{z_i}$ diagonal bases $B$ for $M_\sigma$,
and each has $P_B \cong P_{B_0} = R_\sigma$.
For any non-diagonal base $B$, there is some smallest index $i$ such that $B$ contains
fewer than $z_i$ elements of $F_i\backslash F_{i-1}$. It is not hard to see that such a $B$ will
have $P_B \cong R_\tau$ for some $\tau$ that agrees with $\sigma$ in the first
$|F_{i-1}|$ positions, that is, in the positions indexed by the first $i-1$ blocks of $\sigma$
(or $\tau$).
But then the $i^{th}$ block $A_i$ for $\tau$ indexes a $\{0,1\}$-substring of $\tau$
of the form $00 \cdots 011 \cdots 1$ starting with more zeroes than does
the corresponding $i^{th}$ block $A_i$ for $\sigma$, so that $\tau <_{lex} \sigma$.
\end{proof}
\begin{example} \rm \
Figure~\ref{diagonal-bases-figure} illustrates the previous proof.
Here $\sigma=011010111$. The matroid $M_\sigma$ is drawn as an affine point configuration.
Its associated chain of flats is $$F_1=123 \subset F_2=12345 \subset F_3=123456789.$$
The lexicographically first base $B_0=146$ of $M_\sigma$ is a diagonal base, having
poset $P_{B_0}$ which coincides with $R_\sigma$.
An example of a non-diagonal base $B=167$ is shown, with poset $P_B$ isomorphic to $R_\tau$
where $\tau = 011001111$. Here the smallest index $i$ for which $B$ does not contain
$z_i$ elements of $F_i \backslash F_{i-1}$ is $i=2$, and hence $\sigma, \tau$ agree in
their first $|F_1|=3$ positions. However the second block $A_2=456789$ in $\tau$ indexes a substring
$001111$ starting with two zeroes, while the second block $A_2=45$ in $\sigma$ indexes a substring
$01$ starting with only one zero. Hence $\tau <_{lex} \sigma$.
\end{example}
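One can also make explicit the diagonal coefficient of Proposition~\ref{PI-triangularity} in this example: the blocks of $\sigma$ have sizes $|A_1|, |A_2|, |A_3| = 3, 2, 4$ and $(z_1,z_2,z_3)=(1,1,1)$, so that
$$
c_{\sigma,\sigma} = \binom{3}{1}\binom{2}{1}\binom{4}{1} = 24,
$$
that is, $M_\sigma$ has $24$ diagonal bases $B$, each with $P_B \cong R_\sigma$.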
This completes Step 1 of our program: the formula for $c_{\sigma,\sigma}$ in the previous
result only contains factors of the form $\binom{|A_i|}{z_i}$ in which $|A_i| \leq n$, so
that each of these factors divides $n!$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.40]{diagonal-bases.eps}
\end{center}
\caption{
\label{diagonal-bases-figure}
An example of the proof of Proposition~\ref{PI-triangularity}.
}
\end{figure}
\subsection{The first new basis for ${\mathcal{QS}ym }$}
\label{Q-basis-section}
In this subsection, we complete Step 2 of the proof of Theorem~\ref{refined-surjectivity-thm}
by exhibiting a new $\ZZ$-basis for ${\mathcal{QS}ym }$ that may be of independent interest.
This basis turns out to have a nice expansion property
(Lemma~\ref{positive-triangular-expansion-property}) when one multiplies
one of its elements by $L_1=x_1+x_2+\cdots$.
This new basis comes from a family of (non-naturally, non-strictly)
labelled posets $Q_\sigma$ on $[n]$, indexed
by $\sigma$ in $0\{0,1\}^{n-1}$, which are defined recursively. Before defining
them, we recall some standard labelled poset terminology.
\medskip
Let $P_1, P_2$ be labelled posets on label sets $A_1, A_2$ that disjointly decompose $[n]$,
that is, $[n]=A_1 \sqcup A_2$. Their {\it disjoint sum} $P_1 + P_2$ is
the labelled poset on label set $[n]$ keeping all order relations that were present in
$P_1$ or in $P_2$, with no new order relations between $P_1$ and $P_2$.
Their {\it ordinal sum} $P_1 \oplus P_2$ is
obtained from the disjoint sum by imposing further new order relations:
$p_1 < p_2$ for all $p_1 \in P_1, p_2 \in P_2$.
Now one can define the labelled posets
$Q_\sigma$ for $\sigma$ in $0\{0,1\}^{n-1}$ recursively by:
\begin{enumerate}
\item[$\bullet$]
$Q_{\underbrace{00\cdots0}_{n\text{ zeroes}}}$ is the labelled poset on $[n]$
which is an antichain.
\item[$\bullet$] If $\sigma$ ends with a $1$, say $\sigma=\hat\sigma 1$, then
$Q_{\sigma} = Q_{\hat\sigma} \oplus (n+1)$
where $(n+1)$ is a labelled poset with one element labelled $n+1$.
\item[$\bullet$] If $\sigma$ ends with a $0$ (but is not {\it all} zeroes), say $\sigma=\hat\sigma 0$,
then $Q_\sigma$ is obtained from $Q_{\hat\sigma}$ by adding in a new element labelled $n+1$, with
only one new order relation $n < n+1$ (plus all others generated by transitivity), and then
swapping the labels of $n, n+1$.
\end{enumerate}
\begin{figure}
\begin{center}
\includegraphics[scale=0.30]{Q-poset.eps}
\end{center}
\caption{
\label{Q-poset-figure}
The poset $Q_{001100010110000}$.
}
\end{figure}
\begin{example} \rm \
\label{Q-poset-example}
The string $\sigma$ in $0\{0,1\}^{14}$ given by
$$
\begin{matrix}
\sigma=&0&0&1&1&0&0&0&1&0&1 &1 &0 &0 &0 &0 \\
&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15
\end{matrix}
$$
has $Q_\sigma$ given by these order relations:
$$
1,2 < 3 < 7 < 4,5,6 < 9 < 8 < 10 < 15 < 11,12,13,14
$$
as depicted in Figure~\ref{Q-poset-figure}.
\end{example}
It is not hard to see that $Q_\sigma$ is always isomorphic to an
iterated ordinal sum of a sequence of antichains. For example, in
the poset $Q_\sigma$ of Example~\ref{Q-poset-example}, these
antichains are the induced subposets on these sets:
$$
\{1,2\}, \{3\}, \{7\}, \{4,5,6\},\{9\},\{8\},\{10\}, \{15\}, \{11, 12, 13, 14\}.
$$
\begin{remark} \rm \
The recursive definition of $Q_\sigma$ can be rephrased, after introducing a certain
simple operation on labelled posets, which will be useful later.
For each positive integer $m$, define an operation $\psi_m$ that takes labelled posets
on $[n]$ to labelled posets on $[n+m]$ as follows. Given a labelled poset $P$ on
$[n]$, let $\psi_m(P):= P \oplus (n+m) \oplus A$, where $(n+m)$ is a labelled poset
with one element labelled $n+m$, and $A$ is an $(m-1)$-element antichain with elements labelled
$n+1, n+2,...,n+m-1$.
To describe $Q_\sigma$ in terms of these operations,
uniquely decompose $\sigma$ into an initial sequence of $n_0$ zeroes, followed by
segments of lengths $n_1,n_2,\ldots,n_p \geq 1$, each of the form $100\cdots0$.
Then
$$
Q_\sigma:=\psi_{n_p} \cdots \psi_{n_2} \psi_{n_1} (Q_{\underbrace{00\cdots0}_{n_0\text{ zeroes}}}).
$$
\end{remark}
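As a check of this description against Example~\ref{Q-poset-example} (carried out here for concreteness), the string $\sigma = 001100010110000$ decomposes into $n_0=2$ initial zeroes followed by the segments $1$, $1000$, $10$, $1$, $10000$, so that $(n_1,\ldots,n_5)=(1,4,2,1,5)$ and
$$
Q_\sigma = \psi_{5}\, \psi_{1}\, \psi_{2}\, \psi_{4}\, \psi_{1} (Q_{00}),
$$
which indeed reproduces the order relations listed in that example.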
One then has the following proposition.
\begin{proposition}
\label{Q-basis-prop}
The $P$-partition enumerators $\{ F(Q_\sigma, \xx) : \sigma \in 0\{0,1\}^{n-1} \}$
form a $\ZZ$-basis for ${\mathcal{QS}ym }_n$.
\end{proposition}
\begin{proof}
Given $\sigma \in 0\{0,1\}^{n-1}$, let $w_\sigma$ be the linear extension of
the labelled poset $Q_\sigma$ obtained by reading each of the antichains
discussed above in the {\it reverse} of their usual numerical order.
For example, one has
$$
w_\sigma = 2\cdot1\quad3\quad7\cdot6\cdot5\cdot4\quad9\cdot8\quad10\quad15\cdot14\cdot13\cdot12\cdot11
$$
in the previous example, where we have indicated the positions of
descents in $w_\sigma$ by dots.
It is easily seen that
\begin{enumerate}
\item[$\bullet$] the descent set of $w_\sigma$ can be read from $\sigma$ as follows:
$$
{\mathrm{Des}}(w_\sigma)= \{i \in [n-1]:\sigma_{i+1} = 0\},
$$
and
\item[$\bullet$] every other linear extension $w$ in ${\mathcal L}(Q_\sigma)$ has
$$
{\mathrm{Des}}(w) \subsetneq {\mathrm{Des}}(w_\sigma)
$$
because at least one of the antichains discussed above must not appear in
reverse order in $w$.
\end{enumerate}
Hence the expansion
$$
F(Q_\sigma) = L_{\alpha(w_\sigma)} + \sum_{w \in {\mathcal L}(Q_\sigma)-\{w_\sigma\}} L_{\alpha(w)}
$$
is unitriangular with respect to the lexicographic orders on the set $0\{0,1\}^{n-1}$ and the
set of compositions $\alpha$ of $n$.
\end{proof}
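For instance, when $n=2$ the proof gives
$$
\begin{aligned}
F(Q_{00},\xx) &= L_{1,1} + L_{2}, \\
F(Q_{01},\xx) &= L_{2},
\end{aligned}
$$
coming from the two linear extensions $w_{00}=21$ and $w=12$ of the antichain $Q_{00}$, and from the unique linear extension $w_{01}=12$ of the chain $Q_{01}$; the unitriangularity is visible.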
\subsection{An expansion property} \ \\
It turns out that the $F(Q_\sigma, \xx)$ basis for ${\mathcal{QS}ym }$ has an interesting expansion property when one
multiplies by $L_1:=x_1 + x_2 + \cdots$. The expansion is both nonnegative, and triangular in
a certain sense; see Lemma~\ref{positive-triangular-expansion-property} below.
Before diving into its statement and proof,
we introduce some notation, and observe a few simple facts about
$P$-partition enumerators.
\begin{defn} \rm \ \\
Let $P$ be a labelled poset on $n$ integers $\omega_1 <_\ZZ \ldots <_\ZZ \omega_n$.
Then the {\it standardization} ${\mathrm{std}}(P)$ of $P$ is the labelled poset on
$[n]$ obtained by replacing the label $\omega_i$ in $P$ with
the integer $i$ for $i=1,\ldots,n$.
Given binary strings $\sigma$ and $\tau$, denote their concatenation
by $\sigma \tau$; the most frequently used case for us will be
where $\tau=100\cdots0$ so that $\sigma\tau = \sigma100\cdots0$.
\end{defn}
The next two propositions should then be clear from Proposition~\ref{Stanley's-P-partition-result},
and will be used repeatedly without reference.
\begin{proposition}
Let $P$ be a labelled poset on $[n]$ which is an ordinal sum
$$
P=P_1 \oplus (n) \oplus P_2
$$
in which $(n)$ is the labelled poset
with one element labelled $n$, and $P_1, P_2$ have $n_1, n_2$ elements respectively
(so that $n_1+n_2+1=n$).
Let $P'$ be the following labelled poset on $[n]$. First form the
labelled poset $P_2'$ on $[n_2]$ obtained from ${\mathrm{std}}(P_2)$ by adding $n_1$ to all
of its labels. Define
$$
P' := {\mathrm{std}}(P_1) \oplus (n) \oplus P_2'.
$$
Then
$$
F(P,\xx) = F( P',\xx). \qed
$$
\end{proposition}
\begin{proposition}
\label{psi-map-prop}
The $\ZZ$-linear map $\psi_m: {\mathcal{QS}ym }_n \longrightarrow {\mathcal{QS}ym }_{n+m}$
defined by sending
$$
F(w,\xx) \longmapsto F( \psi_m(w), \xx)
$$
for any permutation $w$ will also send
$$
F(Q_\sigma, \xx) \longmapsto F(Q_{\sigma 100\cdots 0},\xx)
$$
and more generally, for any labelled poset $P$ on $[n]$, sends
$$
F(P, \xx) \longmapsto F(\psi_m(P),\xx).
\qed
$$
\end{proposition}
\noindent
Note that we are slightly abusing terminology here, in using the same name $\psi_m$ for
a $\ZZ$-linear map and also for an operation on posets.
We now come to the crucial expansion property of the $F(Q_\sigma,\xx)$ basis.
\begin{lemma}
\label{positive-triangular-expansion-property}
For any $\sigma$ in $0\{0,1\}^{n-1}$,
$$
F(Q_\sigma, \xx) \cdot L_1
= F(Q_{\sigma 0}, \xx) + \sum_{\tau <_{lex} \sigma 0} c_\tau F(Q_\tau, \xx)
\text{ with }c_\tau \in \NN.
$$
\end{lemma}
\begin{proof}
Induct on $n$. One has
\begin{equation}
\label{common-poset-expansion}
\begin{aligned}
F(Q_\sigma, \xx) \cdot L_1 &= F(Q_{\sigma} + (n+1), \xx) \\
&= \sum_{w \in {\mathcal L}(Q_{\sigma} + (n+1))} F(w,\xx)
\end{aligned}
\end{equation}
\noindent
We analyze the set of linear extensions ${\mathcal L}(Q_{\sigma} + (n+1))$. The analysis
breaks up into two cases.
\vskip.1in
\noindent
{\bf Case 1.} $\sigma$ ends with a $1$, say $\sigma = \hat{\sigma} 1$.
In this case, $n$ is a top element of $Q_\sigma$ by construction,
and we decompose the linear extensions $w$ in ${\mathcal L}(Q_{\sigma} + (n+1))$ into three sets,
based on the location of $n+1$ relative to $n$:
\begin{enumerate}
\item[${\mathcal L}_1:$]
Those $w$ with $n+1$ occurring second-to-last, just before $n$.
\item[${\mathcal L}_2:$]
Those $w$ with $n+1$ occurring last, just after $n$.
\item[${\mathcal L}_3:$]
Those $w$ remaining, in which $n+1$ occurs at least two positions before $n$.
\end{enumerate}
\noindent
It is easy to see that ${\mathcal L}_1 = {\mathcal L}(Q_{\sigma 0})$.
Letting $t=(n,n+1)$ denote the transposition that swaps the labels $n, n+1$ in a labelled
poset on $[n+1]$, a little thought shows
$$
{\mathcal L}_2 \sqcup t {\mathcal L}_3 = {\mathcal L}( (Q_{\hat\sigma} + (n)) \oplus (n+1) ).
$$
Also, if one applies $t$ to a linear extension in which $n, n+1$ are not adjacent,
there is no effect on the descent set. Since this is true for every linear extension
in ${\mathcal L}_3$, one knows that ${\mathcal L}_2 \sqcup t {\mathcal L}_3$ has the same distribution of descent sets
as ${\mathcal L}_2 \sqcup {\mathcal L}_3$.
Therefore, \eqref{common-poset-expansion} implies
\begin{equation}
\label{Case1-eqn-string}
\begin{aligned}
F(Q_\sigma, \xx) \cdot L_1
&=F(Q_{\sigma 0}, \xx) + F( (Q_{\hat\sigma} + (n)) \oplus (n+1), \xx) \\
&=F(Q_{\sigma 0}, \xx) + \psi_1 ( F( Q_{\hat\sigma} , \xx) \cdot L_1 ) \\
&=F(Q_{\sigma 0}, \xx) + \psi_1 ( F(Q_{\hat\sigma 0},\xx)
+ \sum_{\hat{\tau} <_{lex} \hat\sigma 0} c_{\hat \tau} F(Q_{\hat \tau}) )\\
&=F(Q_{\sigma 0}, \xx) + F(Q_{\hat\sigma 01},\xx)
+ \sum_{\hat{\tau} <_{lex} \hat\sigma 0} c_{\hat \tau} F(Q_{\hat \tau 1}) \\
\end{aligned}
\end{equation}
where the third equality uses the inductive hypothesis.
Since
$$
\hat \tau 1 <_{lex} \hat \sigma 0 1 <_{lex} \hat \sigma 1 0 = \sigma 0,
$$
the last equation in \eqref{Case1-eqn-string} gives the desired conclusion.
\vskip.1in
\noindent
{\bf Case 2.} $\sigma$ ends with a $0$, say
$\sigma = \hat{\sigma} \underbrace{100\cdots 0}_{m\text{ letters}}$.
This time we decompose the linear extensions $w$ in ${\mathcal L}(Q_{\sigma} + (n+1))$ into four sets,
again based on the location of $n+1$ relative to $n$:
\begin{enumerate}
\item[${\mathcal L}_1:$]
Those $w$ with $n+1$ at least two positions after $n$.
\item[${\mathcal L}_2:$]
Those $w$ with $n+1$ immediately after $n$.
\item[${\mathcal L}_3:$]
Those $w$ with $n+1$ immediately preceding $n$.
\item[${\mathcal L}_4:$]
Those $w$ with $n+1$ at least two positions before $n$.
\end{enumerate}
\noindent
Note that the sets ${\mathcal L}_1, {\mathcal L}_4$ will have their descent set distributions unchanged
when one applies the transposition $t=(n, n+1)$ to their labels. A little thought then shows that
$$
{\mathcal L}_3 \sqcup t{\mathcal L}_1 =
{\mathcal L}(Q_{\hat{\sigma} \underbrace{100\cdots 0}_{m+1\text{ letters}}}) = {\mathcal L}(Q_{\sigma 0})
$$
and
$$
{\mathcal L}_2 \sqcup t {\mathcal L}_4 = {\mathcal L}( \psi_m( Q_{\hat\sigma} + (n)) ).
$$
Consequently \eqref{common-poset-expansion} implies
\begin{equation}
\label{Case2-eqn-string}
\begin{aligned}
F(Q_\sigma, \xx) \cdot L_1
&=F(Q_{\sigma 0}, \xx) + F( \psi_m (Q_{\hat\sigma} + (n)) , \xx) \\
&=F(Q_{\sigma 0}, \xx) + \psi_m ( F( Q_{\hat\sigma} , \xx) \cdot L_1 ) \\
&=F(Q_{\sigma 0}, \xx) + \psi_m ( F(Q_{\hat\sigma 0},\xx)
+ \sum_{\hat{\tau} <_{lex} \hat\sigma 0} c_{\hat \tau} F(Q_{\hat \tau}) )\\
&=F(Q_{\sigma 0}, \xx) + F(Q_{\hat\sigma 0100\cdots 0},\xx)
+ \sum_{\hat{\tau} <_{lex} \hat\sigma 0} c_{\hat \tau} F(Q_{\hat \tau 100\cdots 0}) \\
\end{aligned}
\end{equation}
where the third equality uses the inductive hypothesis.
Since
$$
\hat \tau 1 0 0 \cdots 0 <_{lex} \hat \sigma 0 1 0 0 \cdots 0 <_{lex} \hat \sigma 1 0 0 \cdots 0 = \sigma 0,
$$
the last equation in \eqref{Case2-eqn-string} gives the desired conclusion.
\end{proof}
This completes Step 2 in the proof of Theorem~\ref{refined-surjectivity-thm}.
\subsection{The second new basis for ${\mathcal{QS}ym }$}
\label{R-Q-expansion-section}
The goal of this subsection is to prove the following positive, unitriangular
expansion of the $F(R_\sigma,\xx)$ in terms of the $F(Q_\sigma,\xx)$.
\begin{theorem}
\label{R-Q-expansion}
For $\sigma$ in $0\{0,1\}^{n-1}$,
$$
F(R_\sigma,\xx) = F(Q_\sigma,\xx) +
\sum_{ \tau <_{lex} \sigma } c_{\tau} F(Q_\tau,\xx).
$$
for some $c_{\tau}$ in $\NN$.
\end{theorem}
Note that this implies the $F(R_\sigma,\xx)$ form a $\ZZ$-basis for ${\mathcal{QS}ym }$,
which would complete Step 3 of the proof of Theorem~\ref{refined-surjectivity-thm}.
Theorem~\ref{R-Q-expansion} is simply the conjunction of assertions (i) and (ii) in the following lemma.
\begin{lemma}
For $\sigma$ in $0\{0,1\}^{n-2}$,
\begin{enumerate}
\item[(i)]
$$
F(R_{\sigma 1},\xx) = F(Q_{\sigma 1},\xx) +
\sum_{ \tau <_{lex} \sigma 1 } c_{\tau} F(Q_{\tau},\xx) \text{ with }c_\tau \in \NN.
$$
\item[(ii)]
$$
F(R_{\sigma 0},\xx) = F(Q_{\sigma 0},\xx) +
\sum_{ \tau <_{lex} \sigma 0 } c_{\tau} F(Q_{\tau},\xx) \text{ with }c_\tau \in \NN.
$$
\item[(iii)] For $\sigma$ in $0\{0,1\}^{n-m}$ and $m \geq 1$,
$$
\psi_m F(R_\sigma,\xx) = F(Q_{\sigma 100\cdots 0},\xx) +
\sum_{ \tau <_{lex} \sigma 100\cdots 0} c_{\tau} F(Q_\tau,\xx) \text{ with }c_\tau \in \NN .
$$
\end{enumerate}
\end{lemma}
\begin{proof}
We prove all three assertions (i),(ii),(iii) by a simultaneous induction on $n$.
\vskip.1in
\noindent
{\bf Proof of (ii).}
Given $\sigma$ in $0\{0,1\}^{n-2}$, one has
$$
\begin{aligned}
F(R_{\sigma 0},\xx)
&= F(R_\sigma + (n), \xx) \\
&= F(R_\sigma,\xx) \cdot L_1 \\
&= ( F(Q_\sigma,\xx) + \sum_{\tau <_{lex} \sigma} c'_\tau F(Q_\tau,\xx) ) \cdot L_1 \\
&= F(Q_{\sigma 0},\xx) + \sum_{\rho <_{lex} \sigma 0} c''_\rho F(Q_\rho,\xx)
+ \sum_{\tau <_{lex} \sigma} \quad
\sum_{\nu \leq_{lex} \tau 0} c'_\tau c''_\nu F(Q_\nu,\xx)
\end{aligned}
$$
where the third equality uses induction, and the
last equality uses Lemma~\ref{positive-triangular-expansion-property}.
Note that the last equality implies assertion (ii).
\vskip.1in
\noindent
{\bf Proof of (iii).}
Given $\sigma$ in $0\{0,1\}^{n-m}$ and $m \geq 1$, one has
$$
F(R_\sigma,\xx) = F(Q_\sigma,\xx) +
\sum_{ \tau <_{lex} \sigma } c_{\tau} F(Q_\tau,\xx) \text{ with }c_\tau \in \NN,
$$
by induction using assertions (i),(ii) (that is, Theorem~\ref{R-Q-expansion}).
Applying $\psi_m$ to this equality gives
$$
\begin{aligned}
\psi_m F(R_\sigma,\xx)
&= \psi_m F(Q_\sigma,\xx) + \sum_{ \tau <_{lex} \sigma } c_{\tau} \psi_m F(Q_\tau,\xx) \\
&= F(Q_{\sigma 100 \cdots 0},\xx) + \sum_{ \tau <_{lex} \sigma } c_{\tau} F(Q_{\tau100\cdots 0},\xx)
\end{aligned}
$$
where the last equality uses Proposition~\ref{psi-map-prop}. This gives assertion (iii).
\vskip.1in
\noindent
{\bf Proof of (i).}
Given $\sigma$ in $0\{0,1\}^{n-2}$, let
$$
J:=\{j \in [n-1]: \sigma_j=1\},
$$
so that the labelled poset $R_{\sigma 1}$ has the element labelled $n$ above
{\it all} of the elements in $[n-1]-J$, and above {\it none} of the elements in $J$.
This means that for every linear extension $w$ in ${\mathcal L}(R_{\sigma 1})$, there is
a unique subset $I \subseteq J$ consisting of those elements appearing later
({\it i.e.} higher) in $w$ than $n$. A little thought shows that this gives a decomposition
$$
{\mathcal L}(R_{\sigma 1}) = \bigsqcup_{I \subseteq J} {\mathcal L}(P^I)
$$
where $P^I:=P^I_1 \oplus (n) \oplus P^I_2$ is a labelled poset on $[n]$
having $P^I_1$ the restriction of $R_\sigma$ to its elements labelled by $[n-1]-I$,
and $P^I_2$ an antichain labelled by the elements of $I$.
Consequently,
$$
\begin{aligned}
F(R_{\sigma 1},\xx)
&= \sum_{I \subseteq J} F(P^I,\xx) \\
&= \sum_{I \subseteq J} F(P^I_1 \oplus (n) \oplus P^I_2 ,\xx) \\
&= \sum_{I \subseteq J} \psi_{|I|+1} F({\mathrm{std}}(P^I_1),\xx)\\
&= \sum_{I \subseteq J} \psi_{|I|+1} F(R_{\sigma\backslash I},\xx)
\end{aligned}
$$
where for each $I \subseteq J$, the string $\sigma \backslash I$ is obtained from the string $\sigma$
by removing all the ones that were in the positions indexed by $I$.
Hence by induction using assertion (iii) one obtains
\begin{equation}
\label{assertion-(iii)-usage}
F(R_{\sigma 1},\xx)
= \sum_{I \subseteq J} \left( F(Q_{(\sigma\backslash I)100\cdots 0},\xx)
+ \sum_{\tau <_{lex} (\sigma \backslash I)100\cdots 0} c_\tau F(Q_\tau,\xx) \right)
\end{equation}
with $c_\tau$ in $\NN$. Note that for any $I \subseteq J$ one has
$$
(\sigma \backslash I)\underbrace{100\cdots 0}_{|I|+1\text{ letters}} \leq_{lex} \sigma 1
$$
and equality occurs if and only if $I=\varnothing$. Hence assertion (i) follows
from \eqref{assertion-(iii)-usage}.
\end{proof}
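The lexicographic claim used at the end of this proof can be confirmed by brute force for small $n$. The following Python sketch (our own verification aid, not part of the argument) checks, for all $\sigma \in 0\{0,1\}^{n-2}$ with $3 \leq n \leq 8$ and all $I \subseteq J$, that $(\sigma\backslash I)10\cdots0 \leq_{lex} \sigma 1$ with equality exactly when $I=\varnothing$:

```python
from itertools import combinations

def check(n):
    """Brute-force the lex claim for every sigma in 0{0,1}^(n-2)."""
    for bits in range(2 ** (n - 2)):
        sigma = "0" + format(bits, f"0{n-2}b")            # length n-1, starts with 0
        J = [j for j, c in enumerate(sigma) if c == "1"]  # positions of the ones
        for r in range(len(J) + 1):
            for I in combinations(J, r):
                reduced = "".join(c for j, c in enumerate(sigma) if j not in I)
                cand = reduced + "1" + "0" * len(I)       # (sigma \ I) 1 0...0
                assert cand <= sigma + "1"                # string lex order, '0' < '1'
                assert (cand == sigma + "1") == (len(I) == 0)
    return True

for n in range(3, 9):
    assert check(n)
print("lex claim verified for n = 3..8")
```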
\subsection{Remarks on the bases for ${\mathcal{QS}ym }$}
We close with a few remarks on these new bases for ${\mathcal{QS}ym }$.
\begin{remark} \rm \
Note that the $F(R_\sigma,\xx)$ basis for ${\mathcal{QS}ym }$ consists entirely of {\it naturally labelled}
$P$-partition enumerators. This answers affirmatively the question of whether ${\mathcal{QS}ym }$ is
$\ZZ$-linearly spanned by naturally labelled $P$-partition enumerators; note that neither
of the usual $\ZZ$-bases for ${\mathcal{QS}ym }$ (the $M_\alpha$ or $L_\alpha$) has this form.
The same question was also answered (affirmatively) in recent work of Stanley \cite{Stanley3}
who, after being queried by the authors of the current paper, produced yet another $\ZZ$-basis
for ${\mathcal{QS}ym }$ consisting of naturally labelled $P$-partition enumerators. Given a composition
$\alpha=(\alpha_1,\ldots,\alpha_k)$ of $n$, he defined $P_\alpha$ to be the naturally labelled poset
which is the ordinal sum $A_1 \oplus \cdots \oplus A_k$,
in which $A_i$ is an antichain on $\alpha_i$ elements
for each $i=1,2,\ldots,k$. These posets $P_\alpha$ bear a close resemblance
to the (non-naturally) labelled posets $Q_\sigma$ defined above, in that both have simple,
unitriangular expansions of their $P$-partition enumerators
in terms of the $L_\alpha$-basis. In \cite{Stanley3}, Stanley combinatorially interprets
this upper unitriangular change-of-basis matrix between his basis and the
$L_\alpha$-basis, as well as providing a nice (and remarkably similar) combinatorial interpretation
for the inverse change-of-basis matrix.
\end{remark}
\begin{remark} \rm \
The matrix $A_n$ giving the expansion of $F(R_\sigma,\xx)$ into $L_\alpha$ within ${\mathcal{QS}ym }_n$
is unimodular, and it turns out that our previous results imply a nice
$LU$-decomposition for it.
Order the strings $\sigma$ in $0\{0,1\}^{n-1}$ with lex order,
and order the compositions $\alpha$ of $n$ also in lex order.
Then the matrix $U_n$ expanding $F(R_\sigma,\xx)$ in terms of
$F(Q_\sigma,\xx)$ will be upper unitriangular (by Theorem~\ref{R-Q-expansion}),
while the matrix $L_n$ expanding $F(Q_\sigma,\xx)$ in terms of $L_\alpha$ will be
lower unitriangular (by the proof of Proposition~\ref{Q-basis-prop}). And $A_n=L_n U_n$.
For example, when $n=3$, this looks like
$$
A_3=\left[
\begin{matrix}
& R_{000} & R_{001} & R_{010} & R_{011} \\
L_{111} & 1 & 0 & 0 & 0 \\
L_{12} & 2 & 1 & 1 & 0 \\
L_{21} & 2 & 0 & 1 & 1 \\
L_{3} & 1 & 1 & 1 & 1
\end{matrix}
\right]
\quad =
$$
$$
\left[
\begin{matrix}
& Q_{000} & Q_{001} & Q_{010} & Q_{011} \\
L_{111} & 1 & & & \\
L_{12} & 2 & 1 & & \\
L_{21} & 2 & 0 & 1 & \\
L_{3} & 1 & 1 & 0 & 1
\end{matrix}
\right]
\left[
\begin{matrix}
& R_{000} & R_{001} & R_{010} & R_{011} \\
Q_{000} & 1 & 0 & 0 & 0 \\
Q_{001} & & 1 & 1 & 0 \\
Q_{010} & & & 1 & 1 \\
Q_{011} & & & & 1
\end{matrix}
\right]
$$
\end{remark}
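The displayed factorization $A_3 = L_3 U_3$ can be checked directly. A short numpy sketch (the matrices are transcribed from the display above, dropping the row and column labels; this is only a numerical sanity check):

```python
import numpy as np

# Rows indexed by L_111, L_12, L_21, L_3; columns by R_000, R_001, R_010, R_011
A3 = np.array([[1, 0, 0, 0],
               [2, 1, 1, 0],
               [2, 0, 1, 1],
               [1, 1, 1, 1]])

# Lower-unitriangular expansion of the Q-basis in the L-basis
L3 = np.array([[1, 0, 0, 0],
               [2, 1, 0, 0],
               [2, 0, 1, 0],
               [1, 1, 0, 1]])

# Upper-unitriangular expansion of the R-basis in the Q-basis
U3 = np.array([[1, 0, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 1],
               [0, 0, 0, 1]])

assert (L3 @ U3 == A3).all()
# unimodularity: det A3 = det L3 * det U3 = 1
assert round(np.linalg.det(A3)) == 1
print("A3 = L3 U3 verified; det A3 = 1")
```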
\begin{remark} \rm \
We have now encountered five $\ZZ$-bases for the Hopf algebra ${\mathcal{QS}ym }$ of
quasisymmetric functions, namely
$$
M_\alpha, L_\alpha, F(P_\alpha,\xx), F(Q_\sigma,\xx), F(R_\sigma,\xx).
$$
Given any such basis $B_\alpha$, one might ask whether the
structure constants $c^{\alpha,\beta}_{\gamma}$ from the unique expansion
$$
B_\alpha B_\beta = \sum_{\gamma} c^{\alpha,\beta}_{\gamma} B_\gamma
$$
are always nonnegative. For the monomial basis $M_\alpha$ and the fundamental bases $L_\alpha$,
this property is well-known to hold and is straightforward.
Unfortunately, this property fails for the remaining three bases
$P_\alpha, R_\sigma, Q_\sigma$. They turn out to have
some negative multiplication structure constants occurring already in (relatively) low degrees:
$$
\begin{aligned}
F(P_{(1,1)},\xx) F(P_{(1)},\xx)
& \left ( = F(P_{(1,1)},\xx) \cdot L_1 \right ) \\
&= F(P_{(0,0,1)},\xx) + F(P_{(0,1,0)},\xx) - F(P_{(0,1,1)},\xx) \\
& \\
F(R_{01},\xx)^2 &= 2 F(R_{0101},\xx) - F(R_{0011},\xx) \\
& \\
F(Q_{010},\xx)^2 &= F(Q_{001000},\xx) + 2 F(Q_{010100},\xx) + F(Q_{001100},\xx) \\
&\qquad + 2 F(Q_{010010},\xx) - F(Q_{001001},\xx).
\end{aligned}
$$
On the other hand, the new $\ZZ$-basis for ${\mathcal{QS}ym }$ found by Luoto which was mentioned earlier {\it does} have this property;
see \cite[\S 4.4]{Luoto}.
\end{remark}
| {
"timestamp": "2009-01-12T21:15:06",
"yymm": "0606",
"arxiv_id": "math/0606646",
"language": "en",
"url": "https://arxiv.org/abs/math/0606646",
"abstract": "A new isomorphism invariant of matroids is introduced, in the form of a quasisymmetric function. This invariant (1) defines a Hopf morphism from the Hopf algebra of matroids to the quasisymmetric functions, which is surjective if one uses rational coefficients, (2) is a multivariate generating function for integer weight vectors that give minimum total weight to a unique base of the matroid, (3) is equivalent, via the Hopf antipode, to a generating function for integer weight vectors which keeps track of how many bases minimize the total weight, (4) behaves simply under matroid duality, (5) has a simple expansion in terms of P-partition enumerators, and (6) is a valuation on decompositions of matroid base polytopes.This last property leads to an interesting application: it can sometimes be used to prove that a matroid base polytope has no decompositions into smaller matroid base polytopes. Existence of such decompositions is a subtle issue arising in work of Lafforgue, where lack of such a decomposition implies the matroid has only a finite number of realizations up to projective equivalence.",
"subjects": "Combinatorics (math.CO)",
"title": "A quasisymmetric function for matroids",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.985718064816221,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7083314680922814
} |
https://arxiv.org/abs/2005.01260 | A conformal characterization of manifolds of constant sectional curvature | A special case of the main result states that a complete $1$-connected Riemannian manifold $(M^n,g)$ is isometric to one of the models $\mathbb R^n$, $S^n(c)$, $\mathbb H^n(-c)$ of constant curvature if and only if every $p\in M^n$ is a non-degenerate maximum of a germ of smooth functions whose Riemannian gradient is a conformal vector field. | \section{Introduction}
The goal of the present paper is to offer a characterization of connected Riemannian manifolds of constant sectional curvature in terms of the existence of certain special germs of functions.
For conceptual clarity, it is convenient to single out the following notion:
\begin{defn} \rm
A $\underline{ \rm conformal \; Morse \; germ} $ (CMG, for short) based at a point $p$ of a Riemannian manifold $(M^n,g)$ is a germ $[f]_p$ of smooth functions for which $p$ is a non-degenerate critical point of (any representative) $f$ and, relative to $g$, the gradient $\nabla f$ is a conformal vector field on a neighborhood of $p$. \end{defn}
Recall that conformality of $\nabla f$ has a dynamical meaning: for all small $|t|$, the time-$t$ map of the local flow of
$\nabla f$ is conformal in the usual sense, i.e. it is angle-preserving relative to the metric $g$.
It is easy to see that $[f]_p$ is a CMG in $(M^n,g)$ if and only if $\nabla f(p)=0$ and $\nabla^2f=hg$ for some function $h$ with $h(p)\neq 0$. It follows that if $p$ is the base of a CMG, then $p$ is either a
non-degenerate local maximum or a non-degenerate local minimum of $f$.
If $[f]_p$ is a CMG, so is $[-f]_p$. In particular, we may assume that $p$ is a local maximum of $f$, in which case the forward flow of $\nabla f$ pushes all nearby points towards the point of maximum $p$, while preserving angles between curves.
The existence of this special dynamics suggests that, at the point $p$ itself, the possibilities for the geometry of $(M^n, g)$ should be severely restricted. The theorem below shows that this is indeed the case:
\begin{thm} \label {point} Let $p$ be the base of a conformal Morse germ in $(M^n,g)$.
\vskip3pt
\noindent i) If $n=2$, then $p$ is a critical point of the curvature.
\vskip3pt
\noindent ii) If $n>2$, then the sectional curvatures of all $2$-planes contained in $T_pM^n$ are equal.
\end{thm}
\noindent \begin{rem} \rm (Pointwise constancy of sectional curvatures.) To the best of our knowledge, Theorem \ref{point} ii) is the only known criterion for constancy of the sectional curvatures of all $2$-planes contained in a $\underline{\rm fixed}$, preassigned, tangent space $T_pM^n$. In dimensions three and higher, if this ``pointwise'' constancy of the sectional curvatures holds everywhere on the manifold, then $(M^n, g)$ actually has constant curvature (Schur's theorem \cite{KN}), but situations may arise in which one would like to establish that the sectional curvatures are the same at only a single point. \end{rem}
\begin{exm} \rm \label{exemplo} One can construct conformal Morse germs on any of the model spaces $\mathbb R^n$, $S^n(c)$, $\mathbb H^n(-c)$ of constant sectional curvature,
where $c$ is a positive constant. Explicitly, in $S^n(c)$ the function $f(x)=\cos(\sqrt{c}d(x,p))$ satisfies $\nabla^2 f=-cfg$, in $\mathbb R^n$ the function $f(x)=||x-p||^2$ satisfies $\nabla^2 f=2g$, and in $\mathbb H^n(-c)$ the function $f(x)=\cosh(\sqrt{c}d(x,p))$ satisfies $\nabla^2 f=cfg$.
\end{exm}
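These Hessian identities can be spot-checked along a unit-speed geodesic $\gamma$ through $p$, where $d(\gamma(t),p)=t$ for small $t>0$ and $\nabla^2 f=hg$ reduces to the scalar equation $(f\circ\gamma)''=h$. A short sympy verification of the resulting radial ODEs (our own sanity check of the one-dimensional reductions, not a proof of the full Hessian identities):

```python
import sympy as sp

t, c = sp.symbols("t c", positive=True)

# f restricted to a unit-speed geodesic through p, so d(x, p) = t:
sphere     = sp.cos(sp.sqrt(c) * t)    # S^n(c):   expect f'' = -c f
euclidean  = t**2                      # R^n:      expect f'' = 2
hyperbolic = sp.cosh(sp.sqrt(c) * t)   # H^n(-c):  expect f'' = c f

assert sp.simplify(sp.diff(sphere, t, 2) + c * sphere) == 0
assert sp.simplify(sp.diff(euclidean, t, 2) - 2) == 0
assert sp.simplify(sp.diff(hyperbolic, t, 2) - c * hyperbolic) == 0
print("radial ODEs f'' = -c f, f'' = 2, f'' = c f verified")
```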
\noindent The main application of Theorem \ref{point} is the following characterization of manifolds of constant sectional curvature, in terms of conformal Morse germs:
\begin{thm} \label {fundamental criterion}
A connected Riemannian manifold $(M^n,g)$ has constant sectional curvature if and only if every $p\in M^n$ is a non-degenerate critical point of a smooth germ $[f]_p$ for which $\nabla f$ is a conformal vector field. \end{thm}
\begin{cor} \label{space form} A complete $1$-connected Riemannian manifold $(M^n,g)$ is isometric to one of the model spaces $\mathbb R^n$, $S^n(c)$,
$\mathbb H^n(-c)$ if and only if every $p\in M^n$ is the base of a CMG.
\end{cor}
Theorem \ref{fundamental criterion} is a direct consequence of connectedness, Theorem \ref{point}, Schur's theorem \cite{KN} and, for the converse, Example \ref{exemplo}.
The proof of Theorem \ref{point} itself is rather involved, and will be supplied in the next section.
\par The sectional curvatures of $(M^n,g)$ are not necessarily constant if
for every point $p \in M^n$ there is a germ $[X]_p$ of conformal vector fields such that $p$ is an isolated non-degenerate zero of $X$.
In fact, every Riemannian metric on a two dimensional manifold is locally conformally flat, and so there are always germs $[X]_p$ as above. It is only when $X$ is a gradient field that the curvature is constant (Theorem \ref{fundamental criterion}).
\par
\begin{rem} \rm In the study of manifolds of constant sectional curvature it is usually the case that one is forced to consider the trichotomy given by the sign of the curvature (positive, zero, or negative), leading to a separate analysis in each alternative. For instance, the proof of Obata's theorem (\cite{O}, \cite{BGM}), characterizing the Euclidean sphere, revolves around the equation $\nabla^2f=-fg$, whereas similar investigations for hyperbolic space would involve the equation $\nabla^2f=fg$.
By contrast, an appealing feature of Theorem \ref{fundamental criterion} is that it identifies a relatively simple property that captures the notion of ``constant curvature" in all three cases simultaneously, independently of the actual value of the curvature. \end{rem}
We close this Introduction by drawing attention to the possibility of using the theory of quasiconformal mappings \cite{GMP} to extend the results in this paper. Given a point $p$ in a Riemannian manifold $(M, g)$ and $\sigma\subset T_pM$ a $2$-plane, we write $K(p, \sigma)$ for the corresponding sectional curvature. Consider the oscillation of the restriction of the sectional curvature function to the Grassmannian
$G(2, T_pM)$:
\begin{eqnarray} \text{Osc} K_p= \text{max}_{\sigma\subset T_pM} K(p, \sigma)-\text{min}_{\sigma\subset T_pM} K(p, \sigma). \end{eqnarray}
Suppose that $[f]_p$ is a germ that has a non-degenerate critical point at $p$ and that, in addition, $\nabla f$ is $\kappa$-quasiconformal for some $\kappa\geq 1$, in the sense that
$\kappa-1$ measures the deviation of $\nabla^2f$ from being a functional multiple of the metric tensor. Can one use $[f]_p$ as a probe, so to speak, in order to estimate the oscillation $ \text{Osc} K_p$ of the sectional curvatures in terms of the quasiconformality coefficient $\kappa$? If so, Theorem \ref{point} ii) would be a special case of such a result, as the former states that $\text{Osc} K_p=0$ if $\kappa=1$.
For other characterizations of spaces of constant curvature see, for instance, \cite{BGM}, \cite{O}--\cite{WY}, and the references therein.
\section{Proof of Theorem \ref{point}}
\noindent The lemma below will be used in the proofs of both halves of Theorem \ref{point}.
\begin{lem} \label{onlyone} Let $(M,g)$ be an $n$-dimensional Riemannian manifold, $p\in M$, $[f]_p$ a germ for which $p$ is a non-degenerate critical point, and $v\in T_pM$ a unit vector. Then there exists a sequence $(q_k)$ in $M-\{p\}$ such that
$$\lim_{k \to \infty} q_k=p, \;\;\;
\lim_{k \to \infty}\frac{\nabla f(q_k)}{||\nabla f(q_k)||} = v.$$
\end{lem}
\vskip10pt
To begin the proof of the lemma, let $k\in \{0, \dots, n\}$ be the Morse index of $f$ at the non-degenerate critical point $p$. By Morse's lemma \cite{H}, one can choose coordinates $(x_1, \dots, x_n)$ so that $0$ corresponds to $p$ and, locally,
$$f(x_1,\dots,x_n)=-x_1^2-\dots-x_k^2+x_{k+1}^2+\dots+x_n^2.$$
Consider the flat metric $g_0=dx_1^2+\dots+dx_n^2$ on a suitably small neighborhood of $p$, and write
$\nabla f$, $\nabla_0f$ for the gradients of $f$ relative to the metrics $g$ and $g_0$, respectively.
Consider the local matrix approximation
\begin{eqnarray*} (g^{ij}(x))=(g^{ij}(0))+ (E^{ij}(x)), \;\; \lim_{x\to 0}||(E^{ij}(x))||=0, \end{eqnarray*}
and let $(g^{ij}(0))=U^{-1}DU$ be a diagonalization of $(g^{ij}(0))$, where $D=\text{diag}(c_1, \dots, c_n)$, $c_j>0$.
For $t\in [0,1]$, write
\begin{eqnarray*} D_t=\text{diag}((1-t)c_1+t, \dots, (1-t)c_n+t), \;\; H(t,x)=U^{-1}D_tU+(1-t)(E^{ij}(x)),\end{eqnarray*}
and observe that the symmetric matrices $H(t,x)$ satisfy
\begin{eqnarray*} H(0,x)=(g^{ij}(x)), \;\;\; H(1,x)=I. \end{eqnarray*}
Since the eigenvalues of $D_t$ are bounded from below on $[0, 1]$ by a positive constant, over a sufficiently small $g_0$-ball $B_{\delta}$ centered at $0$, one has
\begin{eqnarray*} \inf_{t\in [0,1], x\in B_{\delta}} \text{det} H(t,x) >0. \end{eqnarray*}
In coordinates, the $g$-gradient of $f$ is
\begin{eqnarray*} \nabla f=\sum_{i=1}^n\big (\sum_{j=1}^n g^{ij}(x) \frac {\partial f}{\partial x_j}\big )\frac{\partial}{\partial x_i}. \end{eqnarray*}
Writing $X$ for the column vector $(\frac {\partial f}{\partial x_1}, \dots, \frac {\partial f}{\partial x_n})^t$, the above expression for $\nabla f$ can be interpreted as the matrix product
\begin{eqnarray*} \nabla f(x)= (g^{ij}(x))X. \end{eqnarray*}
In particular, $H(t,x) X$, with $t\in [0,1]$,
provides a continuous deformation of the vector field $\nabla f(x)=(g^{ij}(x))X=H(0,x)X$ into
$\nabla_0f=X=H(1, x)X$, through vector fields that have no zeros outside $0$. By the invariance of the Poincar\'e-Hopf index under homotopies \cite{H},
we have
\begin{eqnarray} \label{index} \text{Ind}(\nabla f,p)=\text{Ind}(\nabla_0 f,p)=\text{Ind}((-2x_1,\cdots,-2x_k,2x_{k+1}, \cdots,2x_n),0)
=(-1)^k.\end{eqnarray}
For the convenience of the reader, we briefly recall the definition of Poincar\'e-Hopf index of an isolated singularity.
Let $Y$ be a vector field on $M$ which has an isolated singularity at $p$. Suppose that $\phi: U \rightarrow \phi(U)\subset \mathbb{R}^n$ is a local parametrization
of $M$ carrying $p$ to the origin of $\mathbb{R}^n$, where $U$ is an open set containing $p$. Then the Poincar\'e-Hopf index of $Y$ at $p$ is defined to be
the Poincar\'e-Hopf index of $\widetilde{Y}=d \phi (Y_{|U})$ at the origin, which is equal to the degree of the following map:
$$ F_{\epsilon}: \mathbb{S}_{\epsilon} \rightarrow \mathbb{S}^{n-1}, \quad y \mapsto \frac{\widetilde{Y}(y)}{\|\widetilde{Y}(y)\|},$$
where $ \mathbb{S}_{\epsilon} $ is the sphere of radius $\epsilon$ centered at the origin. Let $D_0 \widetilde{Y}$ be the differential of
the map $\widetilde{Y}: \phi (U) \rightarrow \mathbb{R}^n$ at the origin. If $D_0 \widetilde{Y}$ is an isomorphism, then
$p$ is said to be a non-degenerate singularity and we have
$$ \text{degree }(F_{\epsilon})=\text{sign}(\text{det} (D_0 \widetilde{Y}) ) \in \{{\pm 1}\}. $$
\par Now, (\ref{index}) follows from the above description of the Poincar\'e-Hopf index, and
the lemma follows from (\ref{index}) together with the fact that a continuous non-surjective map $S^{n-1}\to S^{n-1}$ has degree zero.
\qed
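For the linear model field appearing in (\ref{index}), whose differential at the origin is $\mathrm{diag}(-2,\dots,-2,2,\dots,2)$ with $k$ negative entries, the non-degenerate case of the definition above reduces the index to the sign of a determinant. A small numpy illustration (our own, purely numerical):

```python
import numpy as np

for n in range(1, 7):
    for k in range(n + 1):
        # differential at 0 of the field (-2x_1,...,-2x_k, 2x_{k+1},...,2x_n)
        D0 = np.diag([-2.0] * k + [2.0] * (n - k))
        index = int(np.sign(np.linalg.det(D0)))   # sign(det D_0 Y~) in {+1, -1}
        assert index == (-1) ** k
print("Ind(nabla_0 f, 0) = (-1)^k confirmed for n <= 6")
```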
\vskip10pt
\noindent To prove Theorem \ref{point}, observe that for all smooth vector fields $X,Z$ one has
\begin{eqnarray}
\nabla^3 f(Z,X)=(\nabla_Z(\nabla^2 f))(X)&=&\nabla_Z\nabla^2 f(X)-\nabla^2 f(\nabla_ZX)\nonumber\\
&=&\nabla_Z\nabla_X\nabla f-\nabla_{\nabla_ZX}\nabla f.\nonumber
\end{eqnarray}
Likewise,
$ \nabla^3 f(X,Z)=\nabla_X\nabla_Z\nabla f-\nabla_{\nabla_XZ}\nabla f$, and so
\begin{eqnarray}
\nabla^3 f(Z,X)-\nabla^3 f(X,Z)&=&\nabla_Z\nabla_X\nabla f-\nabla_X\nabla_Z\nabla f-\nabla_{[Z,X]}\nabla f\nonumber\\
&=&R(Z,X)\nabla f,\nonumber
\end{eqnarray}
where $R$ stands for the curvature tensor.
In particular, for any $q\neq p$, $q$ sufficiently close to $p$, and unit vector $z$ perpendicular to $\nabla f(q)$, one has
\begin{eqnarray}
\frac{1}{|\nabla f(q)|}\{\nabla^3 f(z,\nabla f(q))-\nabla^3 f(\nabla f(q),z)\}=R(z,\frac{\nabla f(q)}{|\nabla f(q)|})\nabla f(q).\nonumber
\end{eqnarray}
Dividing by $|\nabla f(q)|$ and taking the inner product with $z$, one obtains an expression for the sectional curvature of the plane spanned by $\nabla f(q)/||\nabla f(q)||$ and $z$:
\begin{eqnarray} \label{Longo}
K(\frac{\nabla f(q)}{|\nabla f(q)|},z)=\frac{1}{|\nabla f(q)|^2}\langle\nabla^3 f(z,\nabla f(q))-\nabla^3 f(\nabla f(q),z),z\rangle.
\end{eqnarray}
Since $\nabla f$ is a conformal vector field, there exists a smooth function $h$ such that $\nabla^2f(Y)=hY$ for every vector field $Y$. Hence, for $q\neq p$ and any unit vector $z$, $z\perp \nabla f(q)$, one has
\begin{eqnarray}
\nabla^3 f(z,\nabla f(q))=\nabla_z\nabla^2 f(\nabla f)-\nabla^2 f(\nabla_z\nabla f)=\nabla_z(h\nabla f)-h(q)\nabla_z\nabla f=z(h)\nabla f(q).\nonumber
\end{eqnarray}
Similarly,
$\nabla^3 f(\nabla f(q),z)=\nabla f(q)(h)z$.
Substituting in (\ref{Longo}) the values for $\nabla^3 f(z,\nabla f(q))$ and $\nabla^3 f(\nabla f(q),z)$ obtained above, one obtains
\begin{eqnarray} \label{conf}
K(\frac{\nabla f(q)}{|\nabla f(q)|},z)=-\frac{\langle\nabla f,\nabla h\rangle}{|\nabla f|^2}(q).
\end{eqnarray}
For future reference observe that, crucially, the expression for the sectional curvature of the plane generated by $\nabla f(q)/|\nabla f(q)|$ and $z$ given in (\ref{conf}) is actually independent of the unit vector $z$ perpendicular to $\nabla f(q)$.
Assume now that $\dim M=2$ and let $\varphi=\frac{1}{2}|\nabla f|^2$. Since $\nabla^2 f=hI$ for some smooth function $h$, one has
\begin{eqnarray*}
\langle\nabla\varphi,Y\rangle=\frac{1}{2}Y\langle\nabla f,\nabla f\rangle=\langle\nabla_Y\nabla f,\nabla f\rangle=\langle hY,\nabla f\rangle \nonumber
\end{eqnarray*}
for all vector fields $Y$, and so $\nabla\varphi=h\nabla f$. Hence,
\begin{eqnarray*}
\text{Hess}\,\varphi (X,Y)=\langle\nabla_X\nabla\varphi,Y\rangle=\langle X(h)\nabla f+h^2X,Y\rangle=X(h)Y(f)+h^2\langle X,Y\rangle,\nonumber
\end{eqnarray*}
for all $X,Y$. Similarly,
$\text{Hess}\,\varphi (Y,X)=Y(h)X(f)+h^2\langle Y,X\rangle$.
It follows from the last two equalities, together with the symmetry of the Hessian, that
\begin{eqnarray*}
X(h)Y(f)=Y(h)X(f),\;\;\;\forall X,Y,\nonumber
\end{eqnarray*}
and so $X(h)\nabla f=X(f)\nabla h$ for all $X$. Taking $X=\nabla f$, one concludes that
\begin{eqnarray*}
\nabla h(q)=\frac{\langle\nabla h(q),\nabla f(q)\rangle}{|\nabla f(q)|^2}\nabla f(q),\;\;\;q\neq p.\nonumber
\end{eqnarray*}
In view of (\ref{conf}), this yields
\begin{eqnarray}\label{intermediaria}
\nabla h=-K\nabla f.
\end{eqnarray}
Next, we compute the Hessian of $h$. From (\ref{intermediaria}) and $\nabla^2 f=hI$ one obtains
\begin{eqnarray}
\text{Hess}\,h(X,Y)=\langle\nabla_X\nabla h,Y\rangle&=&-\langle\nabla_X(K\nabla f),Y\rangle\nonumber\\&=&-\langle X(K)\nabla f+K\nabla_X\nabla f,Y\rangle\nonumber\\&=&-X(K)Y(f)-Kh\langle X,Y\rangle.\nonumber
\end{eqnarray}
Likewise,
$
\text{Hess}\,h(Y,X)=-Y(K)X(f)-Kh\langle Y,X\rangle$.
Comparing the last two equations and using the symmetry of the Hessian, one has
\begin{eqnarray}
-X(K)Y(f)=-Y(K)X(f),\;\;\;\forall X,Y,\nonumber
\end{eqnarray}
and so $X(K)\nabla f=X(f)\nabla K$ for all vector fields $X$. Taking $X=\nabla f$ in this equality, one concludes that
\begin{eqnarray}\label{final}
\nabla K(q)=\langle\nabla K(q),\frac{\nabla f(q)}{|\nabla f(q)|}\rangle \frac{\nabla f(q)}{|\nabla f(q)|},\;\;\;q\neq p.
\end{eqnarray}
As $\nabla K(q)$ and
$\nabla f(q)$ are collinear by (\ref{final}), and $\nabla f(q)$ approaches all directions as $q\to p$ by Lemma \ref{onlyone}, one must have $\nabla K(p)=0$. This concludes the proof of Theorem \ref{point} i).
\vskip10pt
We now turn our attention to the second half of Theorem \ref{point}, and assume $\dim M>2$. Let $\{e_1,...,e_n\}$ be an orthonormal basis of $T_pM$ and $c=K(e_1,e_2)$. We will show that the sectional curvature of every $2$-plane in $T_pM$ is $c$. To this end, for each $i\in \{2,...,n\}$ denote by $V_i$ the linear span of $\{e_1,...,e_i\}$, so that
$$V_2\subset V_3\subset \cdots \subset V_{n-1}\subset V_n=T_pM.$$
Consider the set $\mathcal O\subset \{2, \dots, n\}$ defined by the property that $j\in \mathcal O$ if and only if
the sectional curvature of every $2$-plane $\sigma \subset V_j$ satisfies $K(\sigma)=c$. Notice that $\mathcal O\neq \emptyset$, as $2\in \mathcal O$. Let now
$\kappa=\max \mathcal O$. Clearly, Theorem \ref{point} ii) holds if
$\kappa=n$. Let's argue by contradiction and assume that
$2\leq \kappa \leq n-1.$ In particular,
\vskip8pt
\noindent ($\dagger$) \;\; $K(\sigma)=c$ \;\;$\forall$ $\sigma \in G_2( V_{\kappa})$, \;\; $\exists \; \tilde \sigma \in G_2(V_{\kappa+1}) : K(\tilde \sigma)\neq c$.
\vskip8pt
\noindent The possibilities for $\dim (V_{\kappa}\cap \tilde\sigma)$ are, of course, $0,1$ or $2$.
The first case cannot occur, for otherwise $\kappa+2=\dim (V_{\kappa}\oplus \tilde\sigma)\leq \dim V_{\kappa+1}=\kappa+1$. Since $K(\tilde \sigma)\neq c$, one cannot have $\dim (V_{\kappa}\cap \tilde\sigma)= 2$ either, and so the dimension of $V_{\kappa}\cap \tilde\sigma$ is $1$.
Let $w_1$ be a unit vector that generates $V_{\kappa}\cap \tilde\sigma$, $w_2$ a unit vector such that
$\{w_1, w_2\}$ is an orthonormal basis of $\tilde \sigma$. Since $\text{dim} V_{\kappa}\geq 2$, one can choose $w_3\in V_{\kappa}$ such that $\{w_1, w_3\}$ is an orthonormal set.
By ($\dagger$),
\begin{eqnarray} \label {c} K(w_1,w_3)=c \neq K(\tilde \sigma)=K(w_1, w_2). \end{eqnarray}
By Lemma \ref{onlyone} there exists a sequence $(q_k)$ in $M-\{p\}$ such that
$$\lim_{k \to \infty} q_k=p, \;\;\;
\lim_{k \to \infty}\frac{\nabla f(q_k)}{|\nabla f(q_k)|} = w_1.$$
By (\ref{conf}), for every convergent sequence of unit vectors $z_k\perp \nabla f(q_k)$, say $z_k\to z$, one has
\begin{eqnarray} \label{near the end}
K(\frac{\nabla f(q_k)}{|\nabla f(q_k)|},z_k)=-\frac{\langle\nabla f,\nabla h\rangle}{|\nabla f|^2}(q_k).
\end{eqnarray}
The LHS of (\ref{near the end}) converges to $K(w_1, z)$. Manifestly, the RHS of (\ref{near the end}) is independent of $z_k$. It follows that there exists a constant $d$ such that
\vskip8pt
\noindent ($\dagger \dagger$) $K(w_1,z)=d \;\;\; \forall z\in w_1^{\perp}$, $|z|=1$.
\vskip8pt
\noindent Taking $z=w_2$ and $z=w_3$ in ($\dagger \dagger$),
\begin{eqnarray} \label{d} K(w_1,w_3)=d =K(w_1, w_2),\end{eqnarray}
a direct contradiction to (\ref{c}). This concludes the proof of Theorem \ref{point}.\qed
| {
"timestamp": "2020-05-05T02:23:49",
"yymm": "2005",
"arxiv_id": "2005.01260",
"language": "en",
"url": "https://arxiv.org/abs/2005.01260",
"abstract": "A special case of the main result states that a complete $1$-connected Riemannian manifold $(M^n,g)$ is isometric to one of the models $\\mathbb R^n$, $S^n(c)$, $\\mathbb H^n(-c)$ of constant curvature if and only if every $p\\in M^n$ is a non-degenerate maximum of a germ of smooth functions whose Riemannian gradient is a conformal vector field.",
"subjects": "Differential Geometry (math.DG)",
"title": "A conformal characterization of manifolds of constant sectional curvature",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.985718064816221,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7083314680922814
} |
https://arxiv.org/abs/1408.4082 | Higher Affine Connections | For a smooth manifold $M$, it was shown in \cite{BPH} that every affine connection on the tangent bundle $TM$ naturally gives rise to covariant differentiation of multivector fields (MVFs) and differential forms along MVFs. In this paper, we generalize the covariant derivative of \cite{BPH} and construct covariant derivatives along MVFs which are not induced by affine connections on $TM$. We call this more general class of covariant derivatives \textit{higher affine connections}. In addition, we also propose a framework which gives rise to non-induced higher connections; this framework is obtained by equipping the full exterior bundle $\wedge^\bullet TM$ with an associative bilinear form $\eta$. Since the latter can be shown to be equivalent to a set of differential forms of various degrees, this framework also provides a link between higher connections and multisymplectic geometry. | \section{Introduction}
Let $M$ be a manifold. It was shown in \cite{BPH} that every affine connection $\nabla$ on the tangent bundle $TM$ naturally gives rise to covariant differentiation of multivector fields (MVFs) and differential forms along MVFs. For covariant differentiation of MVFs along MVFs, the covariant derivative of \cite{BPH} (which we will again denote as $\nabla$) satisfies
\begin{align}
\label{BPHPropertyA}
\nabla_{X\wedge Y} Z&=(-1)^kX \wedge \nabla_Y Z+(-1)^{(k-1)l} Y\wedge \nabla_X Z\\
\label{BPHPropertyB}
\nabla_X(Y\wedge Z)&=(\nabla_XY)\wedge Z+(-1)^{(k-1)l}Y\wedge\nabla_X Z
\end{align}
for $X\in \Gamma(\wedge^k TM)$, $Y\in \Gamma(\wedge^l TM)$, and $Z\in \Gamma(\wedge^\bullet TM)$. Any covariant derivative of MVFs along MVFs which satisfies (\ref{BPHPropertyA}) and (\ref{BPHPropertyB}) is necessarily induced by an affine connection on the tangent bundle. In this paper, we consider a class of covariant derivatives of MVFs along MVFs which satisfies all the main properties of those of \cite{BPH} except possibly (\ref{BPHPropertyA}) and (\ref{BPHPropertyB}). We call such a covariant derivative a \textit{higher affine connection}. Hence, the covariant derivative of MVFs along MVFs from \cite{BPH} can be seen as a special case of a higher affine connection.
For covariant differentiation of differential forms along MVFs, the covariant derivative of \cite{BPH} has several nice properties which are summarized in Theorem 4.2 of \cite{BPH}. To describe the approach of \cite{BPH}, we start with a decomposable $k$-vector field $X=X_1\wedge \cdots \wedge X_k$ and an affine connection $\nabla$ on $TM$. For a differential form $\omega\in \Omega^\bullet(M)$, we define
\begin{equation}
\label{BPHDiffForm1}
\nabla_X \omega=\sum_{j=1}^k (-1)^{j+1}i_{X[j]} (\nabla_{X_j}\omega),
\end{equation}
where $X[j]:=X_1\wedge \cdots \wedge \widehat{X}_j\wedge \cdots \wedge X_k$ (the ``hat'' denoting omission) and $i_{X[j]}$ is the interior product by $X[j]$.
\begin{equation}
\label{BPHDiffForm2}
\nabla_{fX^{\sigma}}\omega=\mbox{sgn}(\sigma)f\nabla_X\omega,
\end{equation}
where $X^\sigma:=X_{\sigma(1)}\wedge \cdots \wedge X_{\sigma(k)}$. Since every $k$-vector field is locally a finite sum of decomposable $k$-vector fields, (\ref{BPHDiffForm2}) implies that the above definition extends to all $X\in \Gamma(\wedge^k TM)$ by $C^\infty(M)$-linearity. Note that if $\omega$ is an $l$-form and $X$ a $k$-vector field with $l\ge k-1$, then $\nabla_X \omega$ is an $(l-k+1)$-form.
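Property (\ref{BPHDiffForm2}) can be spot-checked in the flat case, where the covariant derivative of a form along a vector field $v$ simply differentiates coefficients in the direction of $v$. The following Python/sympy sketch (our own, for $k=2$ on $\mathbb{R}^3$ with the trivial connection and an ad hoc dictionary encoding of forms; not the setup of \cite{BPH} itself) verifies (\ref{BPHDiffForm2}) for a transposition $\sigma$ and confirms that a $2$-form differentiated along a $2$-vector field yields a $1$-form:

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
coords = (x1, x2, x3)

def nabla_v(omega, v):
    """Flat covariant derivative of a form along a (possibly non-constant) vector field v."""
    return {I: sp.expand(sum(v[a] * sp.diff(c, coords[a]) for a in range(3)))
            for I, c in omega.items()}

def interior(v, omega):
    """Interior product i_v omega; a form is a dict sorted-index-tuple -> coefficient."""
    out = {}
    for I, c in omega.items():
        for pos, a in enumerate(I):
            J = I[:pos] + I[pos + 1:]
            out[J] = out.get(J, 0) + (-1) ** pos * v[a] * c
    return {J: sp.expand(c) for J, c in out.items()}

def nabla_2vec(X1, X2, omega):
    """The defining formula for a decomposable 2-vector field X = X1 ^ X2."""
    t1 = interior(X2, nabla_v(omega, X1))   # j = 1: + i_{X[1]} nabla_{X_1} omega
    t2 = interior(X1, nabla_v(omega, X2))   # j = 2: - i_{X[2]} nabla_{X_2} omega
    return {J: sp.expand(t1.get(J, 0) - t2.get(J, 0)) for J in set(t1) | set(t2)}

# a 2-form with polynomial coefficients, two constant vector fields, a function f
omega = {(0, 1): x3**2, (0, 2): x1 * x2, (1, 2): x1 + x3}
X1, X2 = (1, 2, 0), (0, 1, 3)
f = x1 * x3 + 2

lhs = nabla_2vec(tuple(f * c for c in X2), X1, omega)   # nabla along f * (X2 ^ X1)
rhs = {J: sp.expand(-f * c) for J, c in nabla_2vec(X1, X2, omega).items()}
assert all(sp.simplify(lhs.get(J, 0) - rhs.get(J, 0)) == 0
           for J in set(lhs) | set(rhs))
assert all(len(J) == 1 for J in lhs)   # degree count: (l - k + 1) = 2 - 2 + 1 = 1
print("sign identity verified for the transposition; the result is a 1-form")
```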
Since a higher connection is not completely defined by an affine connection on $TM$, (\ref{BPHDiffForm1}) is not directly applicable to higher connections. Even so, we show that higher connections do indeed allow for covariant differentiation of differential forms along MVFs. Moreover, the result has properties similar to those of \cite{BPH}.
The motivation for discarding condition (\ref{BPHPropertyA}) stems from two sources: \textit{generalized geometry} \cite{H1} \cite{H2} and the basic idea at the root of \textit{string theory} \cite{GSW} \cite{Z}. In generalized geometry, one replaces the tangent bundle with $TM \oplus T^\ast M$ and the Lie bracket with the Courant bracket. In this setting, one looks for geometric structures on $TM \oplus T^\ast M$ which are analogues of the familiar objects one encounters in differential geometry. In string theory, the notion of a point particle is replaced by one-dimensional extended objects called \textit{strings}. As a consequence of this, a worldline (i.e., the path that a point particle makes through spacetime) is generalized to a 2-dimensional worldsheet (i.e., the surface that a string sweeps out as it propagates). Let $X: \Sigma\rightarrow M$ be a worldsheet map, where $\Sigma \subset \mathbb{R}^2$ has coordinates $(\tau,\sigma)$, and let $p$ be a point on the worldsheet. One can think of $X$ as a higher-dimensional worldline with the following ``tangent vectors'' at $p$:
\begin{equation}
\nonumber
\partial_\tau X|_{(\tau_0,\sigma_0)}, \hspace*{0.1in} \partial_\sigma X|_{(\tau_0,\sigma_0)},\hspace*{0.1in} (\partial_\tau X \wedge \partial_\sigma X)_{(\tau_0,\sigma_0)},
\end{equation}
where $X(\tau_0,\sigma_0)=p$ and $\partial_\tau X:=\frac{\partial X}{\partial \tau}$, $\partial_\sigma X:=\frac{\partial X}{\partial \sigma}$. In doing so, one regards $\wedge^2 TM$ as part of an extended tangent bundle.
For MVFs, the natural analogue of the Lie bracket is the Schouten-Nijenhuis bracket (SNB) \cite{Mar} \cite{Nij}. The SNB of two multivector fields of degrees $k$ and $l$ is a multivector field of degree $k+l-1$. With the SNB in the role of the Lie bracket, the idea of generalized geometry suggests that we must consider the full exterior bundle $\wedge^\bullet TM$ in the role of the tangent bundle as opposed to just $TM\oplus \wedge^2TM$. Our basic ``philosophy'' then is to treat $\wedge^k T_pM$ ($p\in M$, $k\ge 2$) as part of an extended tangent space on $M$. Hence, a $k$-vector $v \in \wedge^k T_pM$ should be regarded as a new kind of tangent vector. If we apply this viewpoint to the problem of covariant differentiation of MVFs along MVFs, one would expect $X\wedge Y$ and $Y\wedge Z$ to play a more explicit role on the right side of (\ref{BPHPropertyA}) and (\ref{BPHPropertyB}) respectively. However, for this to be true, conditions (\ref{BPHPropertyA}) and (\ref{BPHPropertyB}) must be relaxed; the upshot of this is the notion of a higher affine connection.
A natural question in all of this is where higher connections actually arise. More specifically, what framework requires the notion of \textit{non-induced} higher connections (i.e., higher connections which do not satisfy (\ref{BPHPropertyA}) or (\ref{BPHPropertyB}))? In this paper, we propose a solution to this question. The answer comes by equipping $\wedge^\bullet TM$ with a smooth bilinear form $\eta$ which is associative in the sense that
\begin{equation}
\label{FrobeniusIntro}
\eta(x\wedge y,z)=\eta(x,y\wedge z)
\end{equation}
for all $x,y,z\in \wedge^\bullet T_pM$, $p\in M$. These associative bilinear forms are shown to be in one-to-one correspondence with the space of differential forms $\Omega^\bullet(M):=\bigoplus_{i}\Omega^i(M)$. This fact allows one to define a covariant derivative of $\eta$ with respect to a higher connection. Naturally, one would like to find a higher connection for which
\begin{equation}
\label{IntroParallel}
\nabla\eta \equiv 0.
\end{equation}
A direct calculation shows that, in general, the induced higher connections (i.e., the ones that satisfy both (\ref{BPHPropertyA}) and (\ref{BPHPropertyB})) are incapable of satisfying (\ref{IntroParallel}). By requiring $\nabla\eta\equiv 0$, the notion of non-induced higher connections becomes a necessary one. Furthermore, since non-induced higher connections arise from associative bilinear forms on $\wedge^\bullet TM$, and the latter is equivalent to a set of differential forms of various degrees, this viewpoint also provides a way of linking higher connections to multisymplectic geometry \cite{CIL} \cite{Rog}.
The rest of the paper is organized as follows. In section $2$, we review the basic machinery of MVFs and set up the notation we will use for the rest of the paper. In section $3$, we introduce the notion of higher affine connections and prove a classification theorem for them (see Theorem \ref{HigherConnectionData}). In addition, the notion of \textit{higher torsion} is also introduced. In section $4$, we define the covariant derivative of differential forms along MVFs in terms of higher connections and examine the properties of this construction. In section $5$, we relate non-induced higher connections to associative bilinear forms on the full exterior bundle $\wedge^\bullet TM$. Finally, in section $6$, we conclude the paper with some closing remarks and directions for future work.
\section{Preliminaries}
\subsection{Some Multilinear Algebra}
\noindent In this brief section, we recall some basic results from multilinear algebra. Throughout this section, $V$ is a finite dimensional vector space over $\mathbb{R}$ of dimension $m>0$.
\begin{definition}
A $k$-vector $v\in \wedge^k V$ is \textit{decomposable} if it can be expressed as $v=v_1\wedge \cdots\wedge v_k$ for some $v_i\in V$, $i=1,\dots, k$.
\end{definition}
\begin{theorem}
\label{UniversalPropExterior}
Let $V^{\times k}:=V\times \cdots \times V$ ($k$ times) and let $U$ be a finite dimensional vector space over $\mathbb{R}$. For any alternating multilinear map $\varphi:V^{\times k}\rightarrow U$, there exists a unique linear map $\widetilde{\varphi}: \wedge^k V\rightarrow U$ such that
\begin{equation}
\nonumber
\widetilde{\varphi}(v_1\wedge \cdots \wedge v_k)=\varphi(v_1,\dots, v_k)
\end{equation}
for all $v_1,\dots, v_k\in V$.
\end{theorem}
\begin{proof}
Let $V^{\otimes k}:=V\otimes \cdots \otimes V$ ($k$ times). Since $\varphi$ is multilinear, the universal property of the tensor product gives a unique linear map $\overline{\varphi}: V^{\otimes k}\rightarrow U$ such that $\overline{\varphi}(v_1\otimes \cdots \otimes v_k)=\varphi(v_1,\dots, v_k)$. Let $I\subset V^{\otimes k}$ be the space spanned by elements of the form $v_1\otimes \cdots\otimes v_k$ where $v_1,\dots, v_k\in V$ and $v_i=v_j$ for some $i\neq j$. Since $\varphi$ is alternating, we have $\overline{\varphi}|_I=0$. Hence, $\overline{\varphi}$ induces a linear map from $\wedge^k V:=V^{\otimes k}/I$ to $U$ which satisfies
\begin{equation}
\nonumber
\widetilde{\varphi}(v_1\wedge \cdots \wedge v_k)=\overline{\varphi}(v_1\otimes \cdots \otimes v_k)=\varphi(v_1,\dots, v_k).
\end{equation}
Since $\widetilde{\varphi}$ is linear and $\wedge^k V$ is spanned by decomposable $k$-vectors, $\widetilde{\varphi}$ is necessarily unique.
\end{proof}
\begin{proposition}
\label{Multilinear1}
Let $W$ and $W'$ be any $k$-dimensional subspaces of $V$ and let $\{w_i\}$ and $\{w'_i\}$ be any bases on $W$ and $W'$ respectively. Then
\begin{equation}
\nonumber
w_1\wedge \cdots \wedge w_k=\lambda w_1'\wedge \cdots \wedge w_k'
\end{equation}
for some $\lambda \in \mathbb{R}$ iff $W=W'$.
\end{proposition}
\begin{proof}
Let $w:=w_1\wedge \cdots \wedge w_k$ and $w':=w_1'\wedge \cdots \wedge w_k'$.\\
($\Rightarrow$) Suppose $w=\lambda w'$ for some $\lambda\in \mathbb{R}$. If $k=\dim V$, then $W=W'=V$. Now suppose that $k<\dim V$. If $W\neq W'$, then there exists a nonzero $w_{k+1}'\in W'$ which is not in $W$. Since $w_{k+1}'$ is a linear combination of $\{w_1',\dots, w_k'\}$, we have $w'\wedge w_{k+1}'=0$. On the other hand, $w\wedge w_{k+1}'\neq 0$. This contradicts the assumption that $w=\lambda w'$. Hence, $W=W'$.
($\Leftarrow$) Suppose $W=W'$. Then $\{w_i\}$ can be expressed as a linear combination of $\{w_i'\}$:
\begin{equation}
\nonumber
w_j=\sum_{i=1}^k a_{ij} w_i'.
\end{equation}
A direct calculation shows that $w=\det(a_{ij})w'$.
\end{proof}
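The identity $w=\det(a_{ij})w'$ can be checked concretely. As a numerical sketch (the helper \texttt{plucker} below is our own, not standard), encode $w_1\wedge w_2$ in $\mathbb{R}^4$ by its Pl\"{u}cker coordinates, i.e., the $2\times 2$ minors of the $2\times 4$ coefficient matrix; a change of basis of the plane then scales all coordinates by the determinant of the change-of-basis matrix:

```python
# Sketch: verify w1' ^ w2' = det(a_ij) (w1 ^ w2) in R^4 via Pluecker
# coordinates (the 2x2 minors of the 2x4 coefficient matrix).
from itertools import combinations
import sympy as sp

def plucker(rows):
    """Rows of a 2x4 matrix -> column of its 2x2 minors (coords of w1 ^ w2)."""
    M = sp.Matrix(rows)
    return sp.Matrix([M.extract([0, 1], list(c)).det()
                      for c in combinations(range(4), 2)])

w1, w2 = [1, 0, 2, 0], [0, 1, 0, 3]
A = sp.Matrix([[2, 1], [5, 3]])                # change-of-basis matrix (a_ij), det = 1
new_rows = (A * sp.Matrix([w1, w2])).tolist()  # new basis of the same plane
assert plucker(new_rows) == A.det() * plucker([w1, w2])
```

The assertion is exactly the Cauchy-Binet identity underlying the computation $w=\det(a_{ij})w'$ in the proof.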
\noindent Proposition \ref{Multilinear1} shows that any subspace $W\subset V$ of dimension $k$ is represented by a decomposable $k$-vector which is unique up to a multiplicative constant. This fact immediately implies the following:
\begin{corollary}
\label{Multilinear1Cor}
Let $V$ be a finite dimensional vector space. The mapping $\varphi: G(k,V)\rightarrow \mathbb{P}(\wedge^k V)$ given by
\begin{equation}
W=\langle w_1,\dots,w_k\rangle \mapsto [w_1\wedge \cdots \wedge w_k],
\end{equation}
is well-defined and injective. Here, $G(k,V)$ is the Grassmannian, the set of all $k$-dimensional subspaces of $V$, and $\mathbb{P}(\wedge^k V)$ is the projectivization of $\wedge^k V$.
\end{corollary}
\begin{remark}
The map $\varphi$ in Corollary \ref{Multilinear1Cor} is called the \textit{Pl\"{u}cker embedding}.
\end{remark}
\begin{lemma}
\label{Multilinear4}
Let $W\subset V$ be a subspace of $V$ of dimension $k>0$. Let $w$ be any decomposable $k$-vector which represents $W$. For any $x\in \wedge^m V$, there exists a decomposable $(m-k)$-vector $u$ such that $x=u\wedge w$.
\end{lemma}
\begin{proof}
Let $w=w_1\wedge\cdots\wedge w_k$ be any decomposable $k$-vector which represents $W$. If $k=m$, then $W=V$ and any $x\in \wedge^m V$ is of the form $x=\lambda w$ for some $\lambda\in \mathbb{R}$. In this case, we just take $u=\lambda\in \wedge^0 V:=\mathbb{R}$. Now suppose that $k<m$. By Proposition \ref{Multilinear1}, $\{w_i\}$ is a basis for $W$. Extend $\{w_i\}_{i=1}^k$ to a basis on $V$: $\{w_1,\dots, w_k, e_1,\dots,e_{m-k}\}$. Let $e=e_1\wedge \cdots \wedge e_{m-k}$. Then $e\wedge w$ generates $\wedge^m V$. Consequently, any $x\in \wedge^m V$ is of the form $x=\lambda e\wedge w$ for some $\lambda\in \mathbb{R}$. Setting $u=\lambda e$ proves the lemma.
\end{proof}
\begin{proposition}
\label{Multilinear5}
Let $W$ and $W'$ be any two subspaces of $V$ of dimensions $k>0$ and $k'>0$ respectively. Let $w$ and $w'$ be any decomposable $k$- and $k'$-vectors which represent $W$ and $W'$ respectively. Then $\dim W\cap W'>0$ iff $w\wedge w'=0$.
\end{proposition}
\begin{proof}
($\Rightarrow$) Suppose $l:=\dim W\cap W'>0$. Let $w''$ be a decomposable $l$-vector which represents $W\cap W'$. Since $W\cap W'$ is a subspace of both $W$ and $W'$, Lemma \ref{Multilinear4} implies that there exists a decomposable $(k-l)$-vector $u$ and a decomposable $(k'-l)$-vector $u'$ such that $w=u\wedge w''$ and $w'=u'\wedge w''$. This implies that $w\wedge w'=0$.
($\Leftarrow$) Suppose $w\wedge w'=0$. Since $w$ and $w'$ are decomposable, $w$ and $w'$ can be expressed as
\begin{equation}
\nonumber
w=w_1\wedge \cdots \wedge w_k,\hspace*{0.1in} w'=w_1'\wedge \cdots \wedge w_{k'}'
\end{equation}
where $\{w_i\}$ and $\{w_i'\}$ are bases of $W$ and $W'$ respectively. The subspace $W+W'$ is spanned by $\{w_1,\dots,w_k,w_1',\dots, w'_{k'}\}$. Since $w\wedge w'=0$, the aforementioned set cannot be linearly independent.
Hence, $\dim (W+W')<\dim W+\dim W'$. Since
\begin{equation}
\nonumber
\dim W\cap W' = \dim W+\dim W'-\dim(W+W'),
\end{equation}
we have $\dim W\cap W'>0$.
\end{proof}
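Proposition \ref{Multilinear5} is easy to illustrate numerically. In $\mathbb{R}^4$, the wedge of two decomposable 2-vectors is a top-degree vector whose single coordinate is the determinant of the stacked basis vectors, and it vanishes precisely when the two planes share a line. A minimal sketch (the helper name \texttt{wedge\_top} is ours):

```python
# Sketch: (w1^w2)^(u1^u2) in R^4 equals det([w1;w2;u1;u2]) e1^e2^e3^e4,
# which vanishes iff span(w1,w2) and span(u1,u2) intersect nontrivially.
import sympy as sp

def wedge_top(w1, w2, u1, u2):
    """Coefficient of e1^e2^e3^e4 in (w1 ^ w2) ^ (u1 ^ u2)."""
    return sp.Matrix([w1, w2, u1, u2]).det()

# planes meeting along the line spanned by e1: the wedge vanishes
assert wedge_top([1, 0, 0, 0], [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0]) == 0
# complementary planes: the wedge is nonzero
assert wedge_top([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]) == 1
```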
\begin{proposition}
\label{Multilinear3}
Let $\{v_i^{(k)}\}_{i=1}^t$ be any linearly independent set of $k$-vectors in $\wedge^k V$ (not necessarily decomposable). Then there exists an $(m-k)$-vector $u\in \wedge^{m-k}V$ (not necessarily decomposable) such that
\begin{equation}
\nonumber
u\wedge v_1^{(k)}\neq 0,\hspace*{0.1in}u\wedge v_i^{(k)}=0~\mbox{for} ~i=2,\dots, t
\end{equation}
\end{proposition}
\begin{proof}
Fix a basis $\{e_1,\dots, e_m\}$ on $V$ and set $e:=e_1\wedge \cdots \wedge e_m$. For any $u\in \wedge^{m-k} V$ and $v\in \wedge^k V$, define $\lambda_u(v)\in \mathbb{R}$ by
\begin{equation}
\nonumber
u\wedge v=\lambda_u(v)e.
\end{equation}
It's easy to see that $\lambda_u\in (\wedge^k V)^\ast$ and that
\begin{equation}
\nonumber
u\mapsto \lambda_u,~\hspace*{0.1in} u\in \mbox{$\wedge^{m-k}V$}
\end{equation}
defines a linear map from $\wedge^{m-k}V$ to $(\wedge^k V)^\ast$. We will now show that the aforementioned map is an isomorphism. To do this, let
\begin{equation}
e^{(k)}_I:=e_{i_1}\wedge \cdots \wedge e_{i_k}
\end{equation}
where $I=\{i_1,\dots, i_k\}$ and $1\le i_1<\cdots <i_k\le m$. Then $\{e^{(k)}_I\}_I$ is a basis on $\wedge^kV$. Similarly, $\{e^{(m-k)}_J\}_J$ is a basis on $\wedge^{m-k} V$. Then
\begin{equation}
\nonumber
\lambda_{e^{(m-k)}_J}(e^{(k)}_I)=\pm 1\hspace*{0.1in} \mbox{if $I\cap J=\emptyset$}
\end{equation}
and zero otherwise. Hence, up to a sign, the set $\{\lambda_{e^{(m-k)}_J}\}_J$ is the dual basis of $\{e^{(k)}_I\}_I$. This establishes the isomorphism.
To prove the proposition, note that since $\{v_i^{(k)}\}_{i=1}^t$ is linearly independent, there exists an element $\varphi\in (\wedge^k V)^\ast$ such that $\varphi(v_1^{(k)})=1$ and $\varphi(v_i^{(k)})=0$ for $i>1$. Consequently, there exists some $u\in \wedge^{m-k}V$ such that $\lambda_u=\varphi$. From this, we have
\begin{equation}
\nonumber
u\wedge v_1^{(k)}=\lambda_u(v_1^{(k)}) e = \varphi(v_1^{(k)})e=e\neq 0
\end{equation}
and
\begin{equation}
\nonumber
u\wedge v_i^{(k)}=\lambda_u(v_i^{(k)}) e = \varphi(v_i^{(k)})e=0,\hspace*{0.1in} i>1
\end{equation}
This completes the proof.
\end{proof}
\begin{proposition}
\label{VectorCovectorProp}
$(\wedge^kV)^\ast$ and $\wedge^k V^\ast$ are naturally isomorphic.
\end{proposition}
\begin{proof}
Let $\omega:=\omega^1\wedge \cdots \wedge \omega^k$ and $v:=v_1\wedge\cdots \wedge v_k$ be decomposable $k$-vectors in $\wedge^k V^\ast$ and $\wedge^k V$ respectively. Define
\begin{equation}
\label{VectorCovector}
\varphi_\omega(v):=\det(\omega^i(v_j)).
\end{equation}
The fact that the determinant is an alternating multilinear map with respect to the $v_j$'s (and the $\omega^i$'s) implies that $\varphi_\omega\in (\wedge^kV)^\ast$, and that (\ref{VectorCovector}) extends to a linear map
\begin{equation}
\nonumber
\varphi: \mbox{$\wedge^k V^\ast$}\rightarrow \mbox{$(\wedge^kV)^\ast$},\hspace*{0.1in} \omega\mapsto \varphi_\omega.
\end{equation}
To see that this is an isomorphism, let $e_1,\dots, e_m$ be a basis on $V$ and $\phi^1,\dots, \phi^m$ the dual basis. Let
\begin{equation}
\mathcal{I}_k:=\{(i_1,\dots, i_k)~|~1\le i_1<\cdots <i_k\le m\},
\end{equation}
and for $I=(i_1,\dots, i_k)\in\mathcal{I}_k$, let $e_I:=e_{i_1}\wedge \cdots \wedge e_{i_k}$ and $\phi^I:=\phi^{i_1}\wedge \cdots \wedge \phi^{i_k}$. Then $\{e_I\}$ and $\{\phi^I\}$ are bases on $\wedge^k V$ and $\wedge^k V^\ast$ respectively. It follows easily from (\ref{VectorCovector}) that $\{\varphi_{\phi^I}\}$ is the dual basis of $\{e_I\}$. Hence, $\varphi$ is an isomorphism.
\end{proof}
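As a concrete check of the duality in Proposition \ref{VectorCovectorProp}, a short computation (a sketch in \texttt{sympy}; the helper \texttt{pairing} is our own) verifies that $\varphi_{\phi^I}(e_J)=\delta_{IJ}$ for $m=3$, $k=2$:

```python
# Sketch: verify phi_{phi^I}(e_J) = det(phi^{i_a}(e_{j_b})) equals 1 if
# I = J and 0 otherwise, for V = R^3 and k = 2.
from itertools import combinations
import sympy as sp

m, k = 3, 2
E = sp.eye(m)  # E[i, j] = phi^i(e_j) = delta_ij for the standard bases

def pairing(I, J):
    """det(phi^{i_a}(e_{j_b})) for increasing multi-indices I, J."""
    return sp.Matrix(k, k, lambda a, b: E[I[a], J[b]]).det()

for I in combinations(range(m), k):
    for J in combinations(range(m), k):
        assert pairing(I, J) == (1 if I == J else 0)
```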
\subsection{Multivector Fields}
\noindent Let $M$ be a smooth manifold. To simplify notation, we set $A^k(M):=\Gamma(\wedge^k TM)$ and $\mathcal{A}(M):=\bigoplus_{k=0}^\infty A^k(M)$, where for a vector bundle $E\rightarrow M$, $\Gamma(E)$ denotes the space of sections of $E$. The space of $k$-forms on $M$ is denoted as $\Omega^k(M)$. For convenience, we also set $A^k(M)=0$ and $\Omega^k(M)=0$ for $k<0$.
\begin{definition}
Let $k\in \mathbb{N}$. A \textit{multiderivation of degree $k$} (or $k$-\textit{derivation}) on a manifold $M$ is a $k$-linear map
\begin{equation}
\nonumber
\varphi: C^{\infty}(M)\times \cdots \times C^\infty(M)\rightarrow C^\infty(M)
\end{equation}
over $\mathbb{R}$, which is totally antisymmetric and a derivation of $C^\infty(M)$ in each of its arguments, i.e.
\begin{itemize}
\item[(i)] $\varphi(f_{\sigma(1)},\dots,f_{\sigma(k)})=\mbox{sgn}(\sigma)\varphi(f_1,\dots, f_k)$ $\forall~\sigma\in \mathfrak{S}_k$
\item[(ii)] $\varphi(f_1g,f_2,\dots, f_k)=\varphi(f_1,\dots,f_k)g+f_1\varphi(g,f_2,\dots, f_k)$
\end{itemize}
for all $f_i,~g\in C^\infty(M)$.
\end{definition}
\noindent The next two results are well known in differential geometry and we state them without proof.
\begin{proposition}
Every $k$-vector field $X\in A^k(M)$ defines a $k$-derivation via
\begin{equation}
\label{kVectorField1}
X(f_1,\dots,f_k):=(df_1\wedge\cdots\wedge df_k)(X).
\end{equation}
\end{proposition}
\begin{proposition}
There is a one-to-one correspondence between the space of $k$-derivations and the space of $k$-vector fields. Specifically, every $k$-derivation $\varphi$ is given by
\begin{equation}
\nonumber
\varphi(f_1,\dots, f_k)=X(f_1,\dots, f_k),\hspace*{0.1in} f_i\in C^\infty(M),~i=1,\dots, k,
\end{equation}
for some unique $X\in A^k(M)$.
\end{proposition}
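For instance, on $\mathbb{R}^2$ the bivector field $X=\partial_x\wedge\partial_y$ acts on a pair of functions as the Jacobian determinant, $X(f,g)=\partial_xf\,\partial_yg-\partial_yf\,\partial_xg$, which is visibly antisymmetric and a derivation in each slot. A quick symbolic check of these two properties (a sketch in \texttt{sympy}; \texttt{biderivation} is our name for the action of $X$):

```python
# Sketch: X = d/dx ^ d/dy acting on (f, g) via (df ^ dg)(X), i.e., the
# Jacobian determinant; check antisymmetry and the Leibniz rule.
import sympy as sp

x, y = sp.symbols('x y')

def biderivation(f, g):
    """(df ^ dg)(d/dx ^ d/dy) = f_x g_y - f_y g_x."""
    return sp.expand(sp.diff(f, x)*sp.diff(g, y) - sp.diff(f, y)*sp.diff(g, x))

f, g, h = x**2*y, sp.sin(x) + y, x*y
# (i) antisymmetry: X(f, g) = -X(g, f)
assert sp.simplify(biderivation(f, g) + biderivation(g, f)) == 0
# (ii) derivation in the first slot: X(f h, g) = X(f, g) h + f X(h, g)
assert sp.simplify(biderivation(f*h, g)
                   - (biderivation(f, g)*h + f*biderivation(h, g))) == 0
```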
\begin{definition}
\label{SchoutenBracketMVF}
The \textit{Schouten-Nijenhuis bracket} of multivector fields is the unique $\mathbb{R}$-bilinear map\footnote{see Proposition 3.1 of \cite{Mar}}
\begin{equation}
\nonumber
[\cdot,\cdot]: A^k(M)\times A^l(M)\rightarrow A^{k+l-1}(M)
\end{equation}
which satisfies the following conditions:
\begin{itemize}
\item[(i)] For $f,g\in C^\infty(M)$, $[f,g]=0$
\item[(ii)] For $X\in A^1(M)$, $Q\in \mathcal{A}(M)$, $[X,Q]=L_X Q$, the Lie derivative of $Q$ with respect to $X$
\item[(iii)] For $P\in A^p(M)$, $Q\in A^q(M)$, $[P,Q]=-(-1)^{(p-1)(q-1)}[Q,P]$
\item[(iv)] For $P\in A^p(M)$, $ad_P:=[P,\cdot]$ is a derivation of degree $p-1$ of the exterior product on $\mathcal{A}(M)$, that is,
\begin{equation}
\nonumber
ad_P(Q\wedge R)=ad_P(Q)\wedge R+(-1)^{(p-1)q}Q\wedge ad_P(R)
\end{equation}
for $Q\in A^q(M)$, $R\in \mathcal{A}(M)$.
\end{itemize}
\end{definition}
\noindent Definition \ref{SchoutenBracketMVF} implies that \cite{Mar}
\begin{align}
\nonumber
(-1)^{(p-1)(r-1)}&[P,[Q,R]]+(-1)^{(q-1)(p-1)}[Q,[R,P]]\\
\nonumber
&+(-1)^{(r-1)(q-1)}[R,[P,Q]]=0
\end{align}
for $P\in A^p(M)$, $Q\in A^q(M)$, and $R\in A^r(M)$. This together with (iii) of Definition \ref{SchoutenBracketMVF} shows that $(\mathcal{A}(M),[\cdot,\cdot])$ is a graded Lie algebra if $\deg ~A^p(M):=p-1$.
For $X_1,\dots, X_p,Y_1,\dots, Y_q\in A^1(M)$, the Schouten-Nijenhuis bracket is given explicitly by
\begin{align}
\nonumber
[X_1\wedge &\dots \wedge X_p,Y_1\wedge\cdots \wedge Y_q]\\
\nonumber
&= \sum_{i,j}(-1)^{i+j}[X_i,Y_j]\wedge X_1\wedge \cdots \wedge \hat{X}_i\wedge\cdots\wedge X_p\wedge Y_1\wedge \cdots\wedge \hat{Y}_j\wedge\cdots \wedge Y_q.
\end{align}
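As a small worked example of this formula (our own illustration, on $M=\mathbb{R}^2$), take $p=2$, $q=1$, $X_1=x\partial_x$, $X_2=\partial_y$, and $Y_1=\partial_x$. Since $[x\partial_x,\partial_x]=-\partial_x$ and $[\partial_y,\partial_x]=0$, the formula gives
\begin{align}
\nonumber
[x\partial_x\wedge \partial_y,\partial_x]
&=(-1)^{1+1}[x\partial_x,\partial_x]\wedge \partial_y
+(-1)^{2+1}[\partial_y,\partial_x]\wedge x\partial_x\\
\nonumber
&=-\partial_x\wedge \partial_y.
\end{align}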
\begin{definition}
\label{InteriorProductDef}
The interior product of a smooth function $f$ with a $k$-derivation $X\in A^k(M)$ ($k\ge 1$) is the $(k-1)$-derivation $i_fX$ defined by
\begin{equation}
\nonumber
i_{f}X(g_1,\dots, g_{k-1}):=X(f,g_1,\dots, g_{k-1}),
\end{equation}
for $g_1,\dots, g_{k-1}\in C^\infty(M)$. For $g\in A^0(M):=C^\infty(M)$, $i_fg:=0$.
\end{definition}
\begin{proposition}
\label{InteriorProductProp}
Let $f,g\in C^\infty(M)$, $X\in A^k(M)$, and $Y,Y'\in \mathcal{A}(M)$. Then
\begin{itemize}
\item[(i)] $i_{f+g} X=i_fX+i_gX$
\item[(ii)] $i_f(Y+Y')=i_f Y+i_fY'$
\item[(iii)] $i_{fg} X=g i_f X+ fi_g X$
\item[(iv)] $i_f(X\wedge Y)=(i_f X)\wedge Y+(-1)^kX\wedge (i_f Y)$
\item[(v)] $[X,f]=(-1)^{k-1}i_fX$
\item[(vi)] $[fX,Y]=f[X,Y]-X\wedge i_f Y$
\end{itemize}
\end{proposition}
\begin{proof}
(i) and (ii) are immediate.
Note that (iii)-(v) are satisfied in the case when $X$ is a smooth function. We now prove (iii)-(v) for the case when $X\in A^k(M)$ with $k\ge 1$.
(iii) is a direct consequence of the fact that $X$ is a derivation of $C^\infty(M)$ in each of its arguments. Specifically,
\begin{align}
\nonumber
i_{fg}X&:=X(fg,\cdot,\dots, \cdot)\\
\nonumber
&=X(f,\cdot, \dots, \cdot)g+fX(g,\cdot,\dots,\cdot)\\
\nonumber
&=gi_f X+fi_g X.
\end{align}
For (iv), take $Y\in A^{l}(M)$ without loss of generality and set $g_1:=f$. Then
\begin{align}
\nonumber
i_f (X\wedge Y)(g_2,\dots, g_{k+l})&=(X\wedge Y)(g_1,g_2,\dots,g_{k+l})\\
\label{WedgeSum1}
&=\sum_{\sigma\in S(k,l)} \epsilon(\sigma)X(g_{\sigma(1)},\dots, g_{\sigma(k)})Y(g_{\sigma(k+1)},\dots,g_{\sigma(k+l)}),
\end{align}
where $\sigma\in S(k,l)\subset \mathfrak{S}_{k+l}$ is a shuffle permutation, i.e., $\sigma$ satisfies
\begin{equation}
\nonumber
\sigma(1)<\cdots <\sigma(k),\hspace*{0.2in} \sigma(k+1)<\cdots <\sigma(k+l)
\end{equation}
and $\epsilon(\sigma)$ denotes the sign of $\sigma$. (\ref{WedgeSum1}) can be decomposed as
\begin{align}
\nonumber
&\sum_{\sigma\in S(k,l),~\sigma(1)=1} \epsilon(\sigma)X(g_{\sigma(1)},\dots, g_{\sigma(k)})Y(g_{\sigma(k+1)},\dots,g_{\sigma(k+l)})\\
\nonumber
&+\sum_{\sigma\in S(k,l),~\sigma(k+1)=1} \epsilon(\sigma)X(g_{\sigma(1)},\dots, g_{\sigma(k)})Y(g_{\sigma(k+1)},\dots,g_{\sigma(k+l)})\\
\nonumber
&=\sum_{\sigma\in S(k-1,l)} \epsilon(\sigma)i_fX(\tilde{g}_{\sigma(1)},\dots, \tilde{g}_{\sigma(k-1)})Y(\tilde{g}_{\sigma(k)},\dots, \tilde{g}_{\sigma(l+k-1)})\\
\label{WedgeSum2}
&+(-1)^{kl}\sum_{\sigma\in S(l-1,k)} \epsilon(\sigma)i_fY(\tilde{g}_{\sigma(1)},\dots, \tilde{g}_{\sigma(l-1)})X(\tilde{g}_{\sigma(l)},\dots, \tilde{g}_{\sigma(k+l-1)}),
\end{align}
where $\tilde{g}_i=g_{i+1}$ for $i=1,\dots,k+l-1$. (\ref{WedgeSum2}) can then be rewritten as
\begin{align}
\nonumber
&(i_f X\wedge Y)(g_2,\dots, g_{k+l})+(-1)^{kl}(i_f Y\wedge X)(g_2,\dots, g_{k+l})\\
\nonumber
&=(i_f X\wedge Y)(g_2,\dots, g_{k+l})+(-1)^{kl+k(l-1)}(X\wedge i_f Y)(g_2,\dots, g_{k+l})\\
\nonumber
&=(i_f X\wedge Y)(g_2,\dots, g_{k+l})+(-1)^{k}(X\wedge i_f Y)(g_2,\dots, g_{k+l})
\end{align}
For (v), let $X\in A^k(M)$. For $k=0$, the result follows from Definition \ref{SchoutenBracketMVF} and Definition \ref{InteriorProductDef}. Now consider the case when $k\ge 1$. Condition (iv) of Definition \ref{SchoutenBracketMVF} implies that the Schouten-Nijenhuis bracket is local in nature. Since $X$ is locally a finite sum of decomposable terms, it suffices to prove (v) of Proposition \ref{InteriorProductProp} for the case when $X$ is decomposable, i.e., $X=X_1\wedge \cdots \wedge X_k$ where $X_i\in A^1(M)$ for $i=1,\dots, k$. For $k=1$, $X\in A^1(M)$ and
\begin{equation}
\nonumber
[X,f] = L_X f=Xf=i_fX=(-1)^{k-1}i_fX,
\end{equation}
by (ii) of Definition \ref{SchoutenBracketMVF}. We now prove (v) by induction on $k$. Suppose then that (v) holds for $k$ (where $k\ge 1$) and let $X_{k+1}\in A^1(M)$. By (iii) and (iv) of Definition \ref{SchoutenBracketMVF}, we have
\begin{align}
\nonumber
[X\wedge X_{k+1},f]&=(-1)^{k+1}[f,X\wedge X_{k+1}]\\
\nonumber
&=(-1)^{k+1}([f,X]\wedge X_{k+1}+(-1)^{k} X\wedge [f,X_{k+1}])\\
\nonumber
&=(-1)^{k+1}((-1)^{k}[X,f]\wedge X_{k+1}+(-1)^{k+1}X\wedge [X_{k+1},f])\\
\nonumber
&=(-1)^{k+1}(-i_f X\wedge X_{k+1}+(-1)^{k+1} X\wedge i_f X_{k+1})\\
\nonumber
&=(-1)^k(i_f X\wedge X_{k+1}+(-1)^k X\wedge i_f X_{k+1})\\
\nonumber
&=(-1)^ki_f (X\wedge X_{k+1}),
\end{align}
where we used the induction hypothesis in the fourth equality and (iv) of Proposition \ref{InteriorProductProp} in the sixth equality.
For (vi), take $Y\in A^l(M)$ without loss of generality. Then
\begin{align}
\nonumber
[fX,Y]&=[f\wedge X,Y]\\
\nonumber
&=-(-1)^{(k-1)(l-1)}[Y,f\wedge X]\\
\nonumber
&=-(-1)^{(k-1)(l-1)}([Y,f]\wedge X+f\wedge[Y,X])\\
\nonumber
&=-(-1)^{(k-1)(l-1)}((-1)^{(l-1)}i_fY\wedge X+f[Y,X])\\
\nonumber
&=f[X,Y]-(-1)^{(l-1)k} i_fY \wedge X\\
\nonumber
&=f[X,Y]-X\wedge i_fY,
\end{align}
where we used (iii) and (iv) of Definition \ref{SchoutenBracketMVF} in the second and third equalities respectively, and (v) of Proposition \ref{InteriorProductProp} was used in the fourth equality. This completes the proof.
\end{proof}
\begin{definition}
Let $X\in A^k(M)$, $k\ge 1$. The \textit{interior product} by $X$ \cite{Mar} is the $C^\infty(M)$-linear map $i_X: \Omega^l(M)\rightarrow \Omega^{l-k}(M)$ which is defined as follows:
\begin{itemize}
\item[(i)] for $\omega\in \Omega^l(M)$, $l> k$, $i_X\omega$ is given by
\begin{equation}
i_X\omega(Y):=\omega(X\wedge Y),\hspace*{0.2in} \forall ~Y\in A^{l-k}(M),
\end{equation}
\item[(ii)] for $\omega\in \Omega^l(M)$, $l= k$, $i_X\omega$ is given by
\begin{equation}
i_X\omega:=\omega(X),
\end{equation}
\item[(iii)] for $\omega\in \Omega^l(M)$, $l< k$, $i_X\omega:=0$.
\end{itemize}
\vspace*{0.1in}
For $f\in A^0(M):=C^\infty(M)$, $i_f\omega:=f\omega$ $~\forall~\omega\in \Omega^\bullet(M)$.
\end{definition}
\begin{remark}
For $X\in A^1(M)$, one can show that $i_X$ is a derivation of degree $-1$, that is,
\begin{equation}
\label{InteriorProductDeg1}
i_X(\omega\wedge \eta)=(i_X\omega)\wedge \eta+(-1)^l\omega\wedge i_X\eta,
\end{equation}
for $\omega\in \Omega^l(M)$, $\eta\in \Omega^\bullet(M)$.
\end{remark}
\begin{proposition}
\label{InteriorProduct1}
Let $X\in A^k(M)$ and $Y\in A^l(M)$. Then
\begin{itemize}
\item[(i)] $i_{X\wedge Y}= i_Y\circ i_X$
\item[(ii)] $ i_Y\circ i_X=(-1)^{kl}i_X\circ i_Y$
\end{itemize}
\end{proposition}
\begin{proof}
Let $\omega\in \Omega^p(M)$ with $p > k+l$ and let $Z\in A^{p-k-l}(M)$. For (i), we have
\begin{align}
\nonumber
i_{X\wedge Y}\omega(Z)&:=\omega(X\wedge Y\wedge Z)\\
\nonumber
&=(i_X\omega)(Y\wedge Z)\\
\nonumber
&=i_Y(i_X\omega)(Z).
\end{align}
The case when $p=k+l$ is handled similarly.
For (ii), we have
\begin{align}
\nonumber
i_Y\circ i_X = i_{X\wedge Y}=(-1)^{kl}i_{Y\wedge X}=(-1)^{kl}i_X\circ i_Y
\end{align}
where the first and third equalities follow from part (i) of Proposition \ref{InteriorProduct1}.
\end{proof}
We conclude this section by recalling the Lie derivative of a differential form $\omega$ along an MVF $X\in A^k(M)$ $(k\ge 1)$ \cite{FPR}:
\begin{equation}
\label{LieDerivativeMVF}
L_X\omega:=d i_X\omega - (-1)^k i_X d\omega.
\end{equation}
For $\omega\in \Omega^l(M)$, (\ref{LieDerivativeMVF}) implies that $L_X\omega\in \Omega^{l-k+1}(M)$. The next result summarizes the properties of the Lie derivative of differential forms along MVFs:
\begin{proposition}\cite{FPR}
\label{LieDerivativeProp}
Let $X\in A^k(M)$, $Y\in A^l(M)$, and $\omega\in \Omega^\bullet(M)$. Then
\begin{itemize}
\item[(i)] $dL_X\omega = (-1)^{k-1}L_X d\omega$
\item[(ii)] $i_{[X,Y]} \omega=(-1)^{(k-1)l}L_X i_Y\omega-i_Y L_X\omega$
\item[(iii)] $L_{[X,Y]}\omega=(-1)^{(k-1)(l-1)}L_X L_Y\omega-L_YL_X\omega$
\item[(iv)] $L_{X\wedge Y} \omega=(-1)^l i_Y L_X\omega+L_Y i_X\omega$
\end{itemize}
\end{proposition}
\begin{proof}
See Proposition A.3 of \cite{FPR}.
\end{proof}
\section{Higher Affine Connections}
\begin{definition}
\label{HCDef}
A \textit{higher affine connection} (or \textit{higher connection}) on $M$ is a map
\begin{align}
\nonumber
&\nabla: \mathcal{A}(M)\times \mathcal{A}(M)\rightarrow \mathcal{A}(M),\hspace*{0.2in}(X,Y)\mapsto \nabla_X Y
\end{align}
such that
\begin{itemize}
\item[(i)] $\nabla_{X} Y\in A^{k+l-1}(M)$ for $X\in A^k(M)$, $Y\in A^l(M)$
\item[(ii)] $\nabla_{fX+X'}Y=f\nabla_XY+\nabla_{X'} Y$ for $X,X',Y\in \mathcal{A}(M)$
\item[(iii)] $\nabla_X(Y+Y')=\nabla_XY+\nabla_X Y'$ for $X,Y,Y'\in\mathcal{A}(M)$
\item[(iv)] $\nabla_X f=[X,f]$ for $X\in A^k(M)$, $f\in C^\infty(M)$
\item[(v)] $\nabla_X fY=[X,f]\wedge Y+f\nabla_X Y$, for $X\in A^k(M)$, $f\in C^\infty(M)$, $Y\in \mathcal{A}(M)$
\item[(vi)] $\nabla_f X=0$ for $f\in C^\infty(M)$
\end{itemize}
\end{definition}
\begin{corollary}
\label{HCCor1}
Let $\nabla$ be a higher connection on $M$. Then
\begin{itemize}
\item[(i)] $\nabla_X fY=(-1)^{k-1}i_f X\wedge Y+f\nabla_X Y$ for $X\in A^k(M)$, $Y\in \mathcal{A}(M)$, and $f\in C^\infty(M)$; in particular, $\nabla_X f = (-1)^{k-1}i_f X$
\item[(ii)] the restriction of $\nabla$ to $A^1(M)\times A^1(M)$ is an affine connection on $M$.
\end{itemize}
\end{corollary}
\begin{proof}
(i) of Corollary \ref{HCCor1} follows from (iv) and (v) of Definition \ref{HCDef} and Proposition \ref{InteriorProductProp}-(v). (ii) of Corollary \ref{HCCor1} follows from Definition \ref{HCDef} and (i) of Corollary \ref{HCCor1}.
\end{proof}
\begin{remark}
It's important to stress that even when a higher connection is restricted to covariant differentiation along 1-vector fields, the result will not (in general) coincide with the usual extension of an affine connection to arbitrary tensor fields. For example, if $\nabla$ is a higher connection and $X\in A^1(M)$, $Y\in A^l(M)$, and $Z\in \mathcal{A}(M)$, then, in general,
\begin{equation}
\label{compareAffine}
\nabla_{X} (Y\wedge Z)\neq (\nabla_X Y)\wedge Z+Y\wedge \nabla_X Z.
\end{equation}
The right side of (\ref{compareAffine}) is exactly how an affine connection on $TM$ would operate on $Y\wedge Z$. That higher connections do not (in general) obey this Leibniz rule is a consequence of their attempt to put $k$-vector fields and 1-vector fields on a more equal footing. Hence, a higher connection will view a $k$-vector field as being ``indivisible" in some sense. Consequently, $\nabla_X(Y\wedge Z)$ will depend not only on $Y$ and $Z$, but also on $Y\wedge Z$.
\end{remark}
\begin{proposition}
\label{ExistenceHC}
There is a one-to-one correspondence between affine connections on $TM$ and higher connections satisfying
\begin{align}
\label{ExistenceHC1}
\nabla_{X\wedge Y} Z&=X\wedge \nabla_Y Z+(-1)^{kl} Y\wedge \nabla_X Z\\
\label{ExistenceHC2}
\nabla_X(Y\wedge Z)&=(\nabla_X Y)\wedge Z+(-1)^{(k-1)l}Y\wedge \nabla_X Z,
\end{align}
for $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in \mathcal{A}(M)$.
\end{proposition}
\begin{proof}
Let $\mathcal{H}_0$ be the set of all higher connections satisfying (\ref{ExistenceHC1}) and (\ref{ExistenceHC2}) and let $\mathcal{C}$ be the set of all affine connections on $TM$. Let $\varphi: \mathcal{H}_0\rightarrow \mathcal{C}$ be the map which sends $\nabla\in \mathcal{H}_0$ to its restriction $\nabla|_{A^1(M)\times A^1(M)}\in \mathcal{C}$. To see that this map is injective, let $\nabla$ be any higher connection which satisfies (\ref{ExistenceHC1}) and let $X=X_1\wedge \cdots \wedge X_k$ be a decomposable $k$-vector field. Then for $Y\in \mathcal{A}(M)$, we have
\begin{equation}
\label{ExplicitForm1}
\nabla_X Y=\sum_{j=1}^k(-1)^{k-j}X_1\wedge \cdots \wedge \widehat{X}_j\wedge \cdots \wedge X_k \wedge \nabla_{X_j} Y,
\end{equation}
where $\widehat{X}_j$ denotes the omission of $X_j$. On the other hand, if $\nabla$ satisfies (\ref{ExistenceHC2}) and $Y=Y_1\wedge \cdots \wedge Y_l$ is a decomposable $l$-vector field, then for $X\in \mathcal{A}(M)$, we have
\begin{equation}
\label{ExplicitForm2}
\nabla_X Y= \sum_{j=1}^l (-1)^{j-1}(\nabla_X Y_j)\wedge Y_1\wedge \cdots \wedge \widehat{Y_j}\wedge \cdots \wedge Y_l.
\end{equation}
Equations (\ref{ExplicitForm1}) and (\ref{ExplicitForm2}) imply that if $\nabla$ satisfies \textit{both} (\ref{ExistenceHC1}) and (\ref{ExistenceHC2}), then $\nabla$ is completely determined as a higher connection by its restriction to $A^1(M)\times A^1(M)$, which is simply an affine connection on $TM$. Consequently, any two higher connections in $\mathcal{H}_0$ which agree on $A^1(M)\times A^1(M)$ must be the same. This proves that $\varphi$ is injective.
To see that $\varphi$ is surjective, let $\widetilde{\nabla}$ be any affine connection on $TM$. We now extend $\widetilde{\nabla}$ to a higher connection $\nabla'$ as follows. For $X\in A^k(M)$ and $f\in C^\infty(M)$, define $\nabla'_f\equiv 0$ and $\nabla'_X f:=[X,f]$. To extend $\nabla'$ to $A^k(M)\times A^l(M)$ for $k,l>0$, let $X=X_1\wedge \cdots \wedge X_k\in A^k(M)$ be a decomposable $k$-vector field and let $Y\in A^l(M)$ be any $l$-vector field. We define $\nabla'_XY$ via
\begin{equation}
\label{ExplicitForm3A}
\nabla'_X Y=\sum_{j=1}^k (-1)^{k-j} X_1\wedge \cdots \wedge\widehat{X_j}\wedge \cdots \wedge X_k\wedge \widetilde{\nabla}_{X_j} Y,
\end{equation}
where $\widetilde{\nabla}_{X_j} Y$ is defined in the usual way and $\widehat{X_j}$ denotes omission as usual. With $Y$ fixed, let
\begin{equation}
\nonumber
\rho(X_1,\dots, X_k):=\nabla'_XY.
\end{equation}
A direct calculation shows that $\rho$ is an alternating $C^\infty(M)$-multilinear map. This implies that (\ref{ExplicitForm3A}) extends to all $X\in A^k(M)$ by $C^\infty(M)$-linearity. Using Lemma \ref{SchoutenLemma} from section 4, one can show that
\begin{equation}
\nonumber
\nabla'_Xf Y=[X,f]\wedge Y+f\nabla'_X Y,
\end{equation}
for $f\in C^\infty(M)$. The other axioms of Definition \ref{HCDef} are easily verified. Hence, $\nabla'$ is a higher connection. Another straightforward calculation shows that $\nabla'$ also satisfies (\ref{ExistenceHC1}) and (\ref{ExistenceHC2}). Furthermore, (\ref{ExplicitForm3A}) implies that $\varphi(\nabla')=\widetilde{\nabla}$. This completes the proof.
\end{proof}
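To make the construction in the proof concrete (our own example), let $\widetilde{\nabla}$ be the flat connection on $M=\mathbb{R}^3$, $X=\partial_x\wedge\partial_y$, and $Y=x\partial_z$. Formula (\ref{ExplicitForm3A}) with $k=2$ gives
\begin{equation}
\nonumber
\nabla'_X Y=-\partial_y\wedge \widetilde{\nabla}_{\partial_x}(x\partial_z)+\partial_x\wedge \widetilde{\nabla}_{\partial_y}(x\partial_z)=-\partial_y\wedge\partial_z,
\end{equation}
which agrees with computing $\nabla'_{\partial_x\wedge\partial_y}(x\partial_z)$ directly from (\ref{ExistenceHC1}).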
\noindent Proposition \ref{ExistenceHC} motivates the following definition.
\begin{definition}
\label{InducedHC}
A higher connection $\nabla$ is called \textit{induced} if it satisfies
\begin{align}
\label{InducedHC1}
\nabla_{X\wedge Y}Z&=X\wedge \nabla_Y Z+(-1)^{kl}Y\wedge \nabla_X Z\\
\label{InducedHC2}
\nabla_X(Y\wedge Z)&=(\nabla_X Y)\wedge Z+(-1)^{(k-1)l}Y\wedge \nabla_X Z
\end{align}
for $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in \mathcal{A}(M)$.
\end{definition}
\begin{remark}
For an affine connection $\widetilde{\nabla}$ on $TM$, we will use the same symbol $\widetilde{\nabla}$ to denote the induced higher connection associated to $\widetilde{\nabla}$ that was constructed in the proof of Proposition \ref{ExistenceHC}.
\end{remark}
\begin{remark}
The higher connection given by Proposition \ref{ExistenceHC} is equivalent to the covariant derivative introduced in \cite{BPH}. To see this, let $\nabla$ be a higher connection given by Proposition \ref{ExistenceHC} and define $\nabla'$ by
\begin{equation}
\nonumber
{\nabla'}_XY:=(-1)^{k-1} \nabla_X Y.
\end{equation}
for $X\in A^k(M)$, $Y\in \mathcal{A}(M)$. Then a direct calculation shows
\begin{align}
\nonumber
{\nabla'}_{X\wedge Y}Z&=(-1)^k X\wedge {\nabla'}_Y Z+(-1)^{l(k-1)}Y\wedge {\nabla'}_X Z\\
\nonumber
{\nabla'}_{X}(Y\wedge Z)&=({\nabla'}_X Y)\wedge Z+(-1)^{l(k-1)}Y\wedge {\nabla'}_X Z
\end{align}
which is precisely condition (\ref{BPHPropertyA}) and (\ref{BPHPropertyB}) from \cite{BPH}.
\end{remark}
\begin{lemma}
\label{DifferenceHC}
Let $\nabla$ and ${\nabla}'$ be two higher connections on $M$ and let $F:\mathcal{A}(M)\times \mathcal{A}(M)\rightarrow \mathcal{A}(M)$ be given by $F(X,Y):=\nabla_XY-{\nabla'}_X Y$. Then $F$ is $C^\infty(M)$-linear in $X$ and $Y$. In particular, $F^{k,l}:=F|_{A^k(M)\times A^l(M)}$ is a section of the bundle $\wedge^{k+l-1} TM\otimes \wedge^k T^\ast M\otimes \wedge^l T^\ast M$ for $k,l>0$ with $k+l-1\le n:=\dim M$.
\end{lemma}
\begin{proof}
Let $X\in A^k(M)$, $Y\in \mathcal{A}(M)$, and $h\in C^\infty(M)$. It's clear that $F(hX,Y)=hF(X,Y)$. We now show that $F$ is $C^\infty(M)$-linear in $Y$:
\begin{align}
\nonumber
F(X,hY)&=\nabla_{X}(hY)-{\nabla'}_{X}(hY)\\
\nonumber
&=[X,h]\wedge Y+h\nabla_X Y- [X,h]\wedge Y-h{\nabla'}_X Y\\
\nonumber
&=hF(X,Y).
\end{align}
\end{proof}
\begin{theorem}
\label{HigherConnectionData}
Let $\nabla$ be any higher connection on $M$. Then there exists
\begin{itemize}
\item[(i)] a unique affine connection $\widetilde{\nabla}$ on $TM$, and
\item[(ii)] a unique collection of sections $F^{k,l}$ of the bundle
\begin{equation}
\nonumber
E^{k,l}:=\mbox{$\wedge^{k+l-1} TM$}\otimes \mbox{$\wedge^k T^\ast M$}\otimes \mbox{$\wedge^lT^\ast M$}
\end{equation}
for $k,l>0$ with $k+l-1\le n:=\dim M$
\end{itemize}
such that $\forall~X\in A^k(M),~Y\in A^l(M)$
\begin{align}
\label{HCData1}
\nabla_X Y = \widetilde{\nabla}_X Y+F^{k,l}(X,Y),\hspace*{0.1in}
\end{align}
where $F^{1,1}\equiv 0$ and $\widetilde{\nabla}_X Y$ in (\ref{HCData1}) is understood to be the higher connection induced by the affine connection $\widetilde{\nabla}$ according to (\ref{ExistenceHC1}) and (\ref{ExistenceHC2}). Conversely, any affine connection $\widetilde{\nabla}$ on $TM$ together with any collection of sections $F^{k,l}\in \Gamma(E^{k,l})$ for $k,l>0$ with $k+l-1\le n$ and $F^{1,1}\equiv 0$ determines a unique higher connection on $M$ which satisfies (\ref{HCData1}). In particular, there is a bijection between the space of all higher connections and the set of all pairs of the form $(\widetilde{\nabla},\{F^{k,l}\})$, where $\widetilde{\nabla}$ is an affine connection on $TM$ and $F^{k,l}\in \Gamma(E^{k,l})$ for $k,l>0$ with $k+l-1\le n$ and $F^{1,1}\equiv 0$.
\end{theorem}
\begin{proof}
Let $\nabla$ be any higher connection on $M$ and let $\widetilde{\nabla}$ be the affine connection on $TM$ defined by $\widetilde{\nabla}_XY:=\nabla_XY$ for $X,Y\in A^1(M)$. Extend $\widetilde{\nabla}$ to a higher connection on $M$ via (\ref{ExistenceHC1}) and (\ref{ExistenceHC2}). By Lemma \ref{DifferenceHC},
\begin{equation}
F^{k,l}:=(\nabla-\widetilde{\nabla})|_{A^k(M)\times A^l(M)}:A^k(M)\times A^l(M)\rightarrow A^{k+l-1}(M)
\end{equation}
is a section of $E^{k,l}:=\wedge^{k+l-1} TM\otimes \wedge^k T^\ast M\otimes \wedge^l T^\ast M$ for $k,l>0$ with $k+l-1\le n:=\dim M$. (Note that $F^{1,1}\equiv 0$.) Hence, we have
\begin{equation}
\nonumber
\nabla_X Y= \widetilde{\nabla}_X Y+F^{k,l}(X,Y),
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$, $k,l>0$. To see that $(\widetilde{\nabla},\{F^{k,l}\})$ is unique, suppose that $(\widehat{\nabla},\{G^{k,l}\})$ is another such pair which satisfies (\ref{HCData1}). Then for all $X,Y\in A^1(M)$, we have
\begin{equation}
\nonumber
\widetilde{\nabla}_XY=\nabla_XY=\widehat{\nabla}_XY.
\end{equation}
Hence, $\widetilde{\nabla}=\widehat{\nabla}$. This fact together with (\ref{HCData1}) then implies that $F^{k,l}=G^{k,l}$ for all $k,l$.
Conversely, suppose $\widetilde{\nabla}$ is an affine connection on $TM$ (extended to a higher connection on $M$ via (\ref{ExistenceHC1}) and (\ref{ExistenceHC2})) and $F^{k,l}$ is a section of the bundle $E^{k,l}$ for $k,l>0$ with $k+l-1\le n$ and $F^{1,1}\equiv 0$. For $X\in A^k(M)$, $Y\in A^l(M)$ with $k,l>0$ and $k+l-1\le n$, define $\nabla_XY$ according to (\ref{HCData1}) and set $\nabla_X f:=[X,f]$ and $\nabla_f :=0$ for $f\in C^\infty(M)$. It is clear that $\nabla$ satisfies all the axioms of Definition \ref{HCDef} with the possible exception of axiom (v). To verify axiom (v), let
$X\in A^k(M)$, $Y\in A^l(M)$, and $f\in C^\infty(M)$ with $k,l>0$ and $k+l-1\le n$. Then
\begin{align}
\nonumber
\nabla_X fY&=\widetilde{\nabla}_X(fY)+F^{k,l}(X,fY)\\
\nonumber
&=[X,f]\wedge Y+f\widetilde{\nabla}_X(Y)+fF^{k,l}(X,Y)\\
\nonumber
&=[X,f]\wedge Y+f\nabla_X Y.
\end{align}
This proves that $\nabla$ is a higher connection. Furthermore, by the argument given in the first part of the proof, the pair $(\widetilde{\nabla},\{F^{k,l}\})$ can be recovered from $\nabla$. This proves the bijection between the space of higher connections and all pairs of the form $(\widetilde{\nabla},\{F^{k,l}\})$ where $\widetilde{\nabla}$ is an affine connection on $TM$ and $F^{k,l}\in \Gamma(E^{k,l})$ for all $k,l>0$ with $k+l-1\le n$ and $F^{1,1}\equiv 0$.
\end{proof}
\begin{remark}
Let $\nabla$ be a higher connection and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated to $\nabla$ by Theorem \ref{HigherConnectionData}. For convenience, we set $F^{k,l}\equiv 0$ whenever $kl=0$ or $k+l-1>\dim M$. With these definitions, Theorem \ref{HigherConnectionData} implies
\begin{equation}
\nabla_X Y=\widetilde{\nabla}_X Y+F^{k,l}(X,Y),~\forall~ k,l\ge 0.
\end{equation}
\end{remark}
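\noindent To illustrate the decomposition (\ref{HCData1}), take $X=X_1\wedge X_2$ with $X_1,X_2,Y\in A^1(M)$. Since $\widetilde{\nabla}$ is induced, $\widetilde{\nabla}_{X_1\wedge X_2}Y=X_1\wedge \widetilde{\nabla}_{X_2}Y-X_2\wedge \widetilde{\nabla}_{X_1}Y$, and (\ref{HCData1}) becomes
\begin{equation}
\nonumber
\nabla_{X_1\wedge X_2}Y=X_1\wedge \widetilde{\nabla}_{X_2}Y-X_2\wedge \widetilde{\nabla}_{X_1}Y+F^{2,1}(X_1\wedge X_2,Y),
\end{equation}
so the twist field $F^{2,1}$ measures exactly the failure of $\nabla$ to obey the lower Leibniz rule on bivector fields.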
\begin{theorem}
\label{InducedResultA}
Let $\nabla$ be a higher connection and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated to $\nabla$ by Theorem \ref{HigherConnectionData}. Then $\nabla$ is an induced higher connection (i.e., one that satisfies (\ref{InducedHC1}) and (\ref{InducedHC2})) iff $F^{k,l}\equiv 0$, $\forall~k,l$.
\end{theorem}
\begin{proof}
Extend $\widetilde{\nabla}$ to a higher connection via (\ref{InducedHC1}) and (\ref{InducedHC2}).
$(\Rightarrow)$. Let $\nabla$ be an induced higher connection and suppose that not all $F^{i,j}=0$. Since $F^{1,1}\equiv 0$ by Theorem \ref{HigherConnectionData}, there are three possible cases:
\begin{itemize}
\item[1.] $F^{k,1}\neq 0$ for some $k>1$
\item[2.] $F^{1,l}\neq 0$ for some $l>1$
\item[3.] $F^{k,l}=0$ whenever $k=1$ or $l=1$, but there exist $k,l>1$ such that $F^{k,l}\neq 0$
\end{itemize}
For case 1, let $k:=\min\{i~|~F^{i,1}\neq 0\}$. Since $F^{k,1}\neq 0$, there exist $X,Y\in A^{1}(M)$ and $X'\in A^{k-1}(M)$ such that $F^{k,1}(X\wedge X', Y)\neq 0$. Then
\begin{align}
\nonumber
\nabla_{X\wedge X'}Y&=\widetilde{\nabla}_{X\wedge X'}Y+F^{k,1}(X\wedge X',Y)\\
\nonumber
&=X\wedge \widetilde{\nabla}_{X'}Y+(-1)^{k-1}X'\wedge \widetilde{\nabla}_XY+F^{k,1}(X\wedge X',Y)\\
\label{Case1AFkl}
&=X\wedge \nabla_{X'}Y+(-1)^{k-1}X'\wedge \nabla_XY+F^{k,1}(X\wedge X',Y),
\end{align}
where the third equality follows from the fact that $F^{i,1}=0$ for $i<k$. On the other hand, since $\nabla$ is an induced higher connection, we also have
\begin{equation}
\label{Case1BFkl}
\nabla_{X\wedge X'}Y=X\wedge \nabla_{X'}Y+(-1)^{k-1}X'\wedge \nabla_XY.
\end{equation}
Comparing (\ref{Case1AFkl}) and (\ref{Case1BFkl}) shows that $F^{k,1}(X\wedge X',Y)=0$, which is a contradiction. Hence, $F^{k,1}=0$ for all $k$.
For case 2, let $l:=\min\{j~|~F^{1,j}\neq 0\}$. Since $F^{1,l}\neq 0$, there exist $X,Y\in A^1(M)$ and $Y'\in A^{l-1}(M)$ such that $F^{1,l}(X,Y\wedge Y')\neq 0$. Then
\begin{align}
\nonumber
\nabla_X (Y\wedge Y')&=\widetilde{\nabla}_X(Y\wedge Y')+F^{1,l}(X,Y\wedge Y')\\
\nonumber
&=(\widetilde{\nabla}_X Y)\wedge Y'+Y\wedge \widetilde{\nabla}_X Y'+F^{1,l}(X,Y\wedge Y')\\
\label{Case2AFkl}
&=(\nabla_X Y)\wedge Y'+Y\wedge \nabla_X Y'+F^{1,l}(X,Y\wedge Y'),
\end{align}
where the third equality follows from the fact that $F^{1,j}=0$ for $j<l$. Since $\nabla$ is induced, we also have
\begin{equation}
\label{Case2BFkl}
\nabla_X (Y\wedge Y')=(\nabla_X Y)\wedge Y'+Y\wedge \nabla_X Y'.
\end{equation}
Comparing (\ref{Case2AFkl}) with (\ref{Case2BFkl}) shows that $F^{1,l}(X,Y\wedge Y')=0$, which is a contradiction. Hence, $F^{1,l}=0$ for all $l$.
For case 3, there exist $a,b>1$ such that $F^{a,b}\neq 0$. Let $k:=\min\{i~|~F^{i,b}\neq 0\}$. (Note that by hypothesis, we have $k>1$.) Since $F^{k,b}\neq 0$, there exist $X\in A^1(M)$, $X'\in A^{k-1}(M)$, and $Y\in A^b(M)$ such that $F^{k,b}(X\wedge X',Y)\neq 0$. At this point, the rest of the proof proceeds exactly as in case 1, leading to the contradiction that $F^{k,b}(X\wedge X',Y)=0$. From this, we conclude that $F^{i,j}=0$ for all $i,j$.
$(\Leftarrow)$ If $F^{k,l}=0$ for all $k,l$, then Theorem \ref{HigherConnectionData} gives $\nabla=\widetilde{\nabla}$ and the latter is an induced higher connection.
\end{proof}
\begin{definition}
Let $\nabla$ be a higher connection and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated to $\nabla$ by Theorem \ref{HigherConnectionData}. The tensor fields $F^{k,l}$ are called the \textit{twist fields} of the connection.
\end{definition}
\noindent Theorem \ref{InducedResultA} shows that any induced higher connection (i.e., one satisfying \textit{both} (\ref{InducedHC1}) and (\ref{InducedHC2})) must have vanishing twist fields. This fact motivates the following two definitions:
\begin{definition}
\label{UpperInduced}
A higher connection $\nabla$ is called \textit{upper induced} if it satisfies
\begin{equation}
\nonumber
\nabla_X (Y\wedge Z) = (\nabla_XY)\wedge Z+(-1)^{(k-1)l}Y\wedge \nabla_X Z
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$ and $Z\in \mathcal{A}(M)$.
\end{definition}
\begin{definition}
\label{LowerInduced}
A higher connection $\nabla$ is called \textit{lower induced} if it satisfies
\begin{equation}
\nonumber
\nabla_{X\wedge Y} Z=X\wedge \nabla_Y Z+ (-1)^{kl} Y\wedge \nabla_X Z,
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in \mathcal{A}(M)$.
\end{definition}
\noindent The notion of upper and lower induced puts the following restrictions on the twist fields:
\begin{proposition}
\label{UpperInducedProp}
Let $\nabla$ be a higher connection and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated to $\nabla$ by Theorem \ref{HigherConnectionData}. $\nabla$ is upper induced iff
\begin{equation}
\label{Special0}
F^{k,l+m}(X,Y\wedge Z)=F^{k,l}(X,Y)\wedge Z+(-1)^{(k-1)l}Y\wedge F^{k,m}(X,Z)
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in {A}^m(M)$, with $k,l,m>0$ and $k+l+m-1\le n:=\dim M$. In particular, if $\nabla$ is upper induced, then the twist fields are completely determined by the set $\{F^{k,1}\}_{k=1}^n$. For a decomposable $l$-vector field $Y=Y_1\wedge Y_2\wedge \cdots \wedge Y_l$, $F^{k,l}$ is given by
\begin{equation}
\label{Special4}
F^{k,l}(X,Y)=\sum_{j=1}^l(-1)^{j-1}F^{k,1}(X,Y_j)\wedge Y_1\wedge \cdots \wedge \widehat{Y_j}\wedge \cdots \wedge Y_l.
\end{equation}
In particular, $F^{1,l}\equiv 0$ for all $l$.
\end{proposition}
\begin{proof}
Let $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in {A}^m(M)$, with $k,l,m>0$ and $k+l+m-1\le n:=\dim M$. By Theorem \ref{HigherConnectionData}, we have
\begin{align}
\nonumber
\nabla_X(Y\wedge Z)&=\widetilde{\nabla}_X(Y\wedge Z)+F^{k,l+m}(X,Y\wedge Z)\\
\label{Special1}
&=(\widetilde{\nabla}_XY)\wedge Z+(-1)^{(k-1)l}Y\wedge \widetilde{\nabla}_XZ+F^{k,l+m}(X,Y\wedge Z),
\end{align}
where the last equality follows from the fact that $\widetilde{\nabla}$ is induced. Now suppose that $\nabla$ is upper induced. Then
\begin{align}
\nonumber
\nabla_X(Y\wedge Z)&=(\nabla_X Y)\wedge Z+(-1)^{(k-1)l}Y\wedge\nabla_X Z\\
\label{Special2}
&=(\widetilde{\nabla}_X Y)\wedge Z+F^{k,l}(X,Y)\wedge Z\\
\nonumber
&+(-1)^{(k-1)l}Y\wedge \widetilde{\nabla}_XZ+(-1)^{(k-1)l}Y\wedge F^{k,m}(X,Z).
\end{align}
Comparing (\ref{Special1}) and (\ref{Special2}) gives
\begin{equation}
\label{Special3}
F^{k,l+m}(X,Y\wedge Z)=F^{k,l}(X,Y)\wedge Z+(-1)^{(k-1)l}Y\wedge F^{k,m}(X,Z).
\end{equation}
On the other hand, if the twist fields satisfy (\ref{Special3}), then it follows that $\nabla$ is upper induced; this can be easily seen by substituting (\ref{Special3}) into (\ref{Special1}), rearranging the terms, and applying Theorem \ref{HigherConnectionData}. For the last part, note that (\ref{Special4}) follows from (\ref{Special3}) by a straightforward calculation, and $F^{1,l}\equiv 0$ since $F^{1,1}\equiv 0$ by Theorem \ref{HigherConnectionData}.
\end{proof}
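\noindent For instance, taking $l=m=1$ in (\ref{Special0}) and using the fact that $F^{k,1}(X,Z)\in A^k(M)$, we obtain
\begin{equation}
\nonumber
F^{k,2}(X,Y\wedge Z)=F^{k,1}(X,Y)\wedge Z-F^{k,1}(X,Z)\wedge Y
\end{equation}
for $X\in A^k(M)$ and $Y,Z\in A^1(M)$, which is the $l=2$ case of (\ref{Special4}).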
\begin{proposition}
\label{LowerInducedProp}
Let $\nabla$ be a higher connection and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated to $\nabla$ by Theorem \ref{HigherConnectionData}. $\nabla$ is lower induced iff
\begin{equation}
\nonumber
F^{k+l,m}(X\wedge Y,Z)=X\wedge F^{l,m}(Y,Z)+(-1)^{kl}Y\wedge F^{k,m}(X,Z),
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in A^m(M)$, with $k+l+m-1\le n:=\dim M$. In particular, if $\nabla$ is lower induced, then the twist fields are completely determined by the set $\{F^{1,l}\}_{l=1}^n$. For a decomposable $k$-vector field $X=X_1\wedge \cdots \wedge X_k$, $F^{k,l}$ is given by
\begin{equation}
\label{Lower0}
F^{k,l}(X,Y)=\sum_{j=1}^k (-1)^{k-j} X_1\wedge \cdots \wedge \widehat{X_j}\wedge \cdots \wedge X_k\wedge F^{1,l}(X_j,Y).
\end{equation}
In particular, $F^{k,1}\equiv 0$ for all $k$.
\end{proposition}
\begin{proof}
Let $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in A^m(M)$, with $k+l+m-1\le n:=\dim M$. By Theorem \ref{HigherConnectionData}, we have
\begin{align}
\nonumber
\nabla_{X\wedge Y}Z&=\widetilde{\nabla}_{X\wedge Y}Z+F^{k+l,m}(X\wedge Y,Z)\\
\label{Lower1}
&=X\wedge \widetilde{\nabla}_Y Z+(-1)^{kl} Y\wedge\widetilde{\nabla}_X Z+F^{k+l,m}(X\wedge Y,Z),
\end{align}
where the last equality follows from the fact that $\widetilde{\nabla}$ is induced. Now suppose that $\nabla$ is lower induced. Then
\begin{align}
\nonumber
\nabla_{X\wedge Y} Z&=X\wedge \nabla_Y Z+(-1)^{kl} Y\wedge \nabla_X Z\\
\nonumber
&=X\wedge \widetilde{\nabla}_Y Z+X\wedge F^{l,m}(Y,Z)\\
\label{Lower2}
&+(-1)^{kl}Y\wedge \widetilde{\nabla}_X Z+(-1)^{kl}Y\wedge F^{k,m}(X,Z).
\end{align}
Comparing (\ref{Lower1}) and (\ref{Lower2}) gives
\begin{equation}
\label{Lower3}
F^{k+l,m}(X\wedge Y,Z)=X\wedge F^{l,m}(Y,Z)+(-1)^{kl}Y\wedge F^{k,m}(X,Z).
\end{equation}
On the other hand, if the twist fields satisfy (\ref{Lower3}), then substitution of (\ref{Lower3}) into (\ref{Lower1}) shows that $\nabla$ is lower induced.
For the last part, note that (\ref{Lower0}) follows from (\ref{Lower3}) by a straightforward calculation and $F^{k,1}\equiv 0$ for all $k$ since $F^{1,1}\equiv 0$ by Theorem \ref{HigherConnectionData}.
\end{proof}
\noindent We now introduce the notion of \textit{higher torsion} for higher connections:
\begin{definition}
\label{HigherTorsion}
Let $\nabla$ be a higher connection. The \textit{higher torsion} associated to $\nabla$ is defined by
\begin{equation}
T(X,Y):=\nabla_XY-(-1)^{(k-1)(l-1)}\nabla_Y X-[X,Y]
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$. $\nabla$ is \textit{torsion-free} if $T\equiv 0$.
\end{definition}
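\noindent Note that for $k=l=1$, Definition \ref{HigherTorsion} reduces to the usual torsion of an affine connection:
\begin{equation}
\nonumber
T(X,Y)=\nabla_XY-\nabla_YX-[X,Y],\hspace*{0.1in} X,Y\in A^1(M),
\end{equation}
where $[X,Y]$ is the ordinary Lie bracket of vector fields.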
\begin{proposition}
\label{HigherTorsionProp}
Let $\nabla$ be a higher connection with higher torsion $T$. For $X\in A^k(M)$, $Y\in A^l(M)$, and $f\in C^\infty(M)$, $T$ satisfies
\begin{itemize}
\item[(i)] $T(X,Y)=-(-1)^{(k-1)(l-1)}T(Y,X)$
\item[(ii)] $T(fX,Y)=fT(X,Y)$
\end{itemize}
\end{proposition}
\begin{proof}
\noindent Let $s=(-1)^{(k-1)(l-1)}$. For (i), we have
\begin{align}
\nonumber
T(X,Y)&:=\nabla_XY-s\nabla_Y X-[X,Y]\\
\nonumber
&=-s(\nabla_Y X - s\nabla_X Y + s[X,Y])\\
\nonumber
&=-s(\nabla_Y X - s\nabla_X Y + s(-s[Y,X]))\\
\nonumber
&=-s(\nabla_Y X - s\nabla_X Y - [Y,X])\\
\nonumber
&=-sT(Y,X).
\end{align}
For (ii), we have
\begin{align}
\nonumber
T(fX,Y)&=\nabla_{fX}Y-s\nabla_Y(fX)-[fX,Y]\\
\nonumber
&=f\nabla_X Y-s([Y,f]\wedge X+f\nabla_Y X)-(f[X,Y]-X\wedge i_fY)\\
\nonumber
&=f\nabla_X Y-s((-1)^{l-1}i_f Y\wedge X+f\nabla_Y X)-(f[X,Y]-X\wedge i_fY)\\
\nonumber
&=f\nabla_X Y-(-1)^{k(l-1)}i_f Y\wedge X-sf\nabla_Y X-f[X,Y]+ (-1)^{k(l-1)} i_fY\wedge X\\
\nonumber
&=f\nabla_X Y-sf\nabla_Y X-f[X,Y]\\
\nonumber
&=fT(X,Y),
\end{align}
where we used Proposition \ref{InteriorProductProp}-(vi) and (v) in the second and third equalities respectively.
\end{proof}
\noindent The next two results provide a characterization of torsion-free higher connections.
\begin{proposition}
\label{VanishingTorsionProp}
Let $\widetilde{\nabla}$ be an affine connection on $TM$. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $\widetilde{\nabla}$ is torsion-free as an affine connection on $TM$.
\item[(ii)] $\widetilde{\nabla}$ is torsion-free as an induced higher connection.
\end{itemize}
\end{proposition}
\begin{proof}
$(i) \Leftarrow (ii)$. Immediate.
$(i) \Rightarrow (ii)$. Suppose that $\widetilde{\nabla}$ is torsion-free as an affine connection on $TM$. Extend $\widetilde{\nabla}$ to a higher connection via (\ref{InducedHC1}) and (\ref{InducedHC2}) and let $T$ denote its higher torsion. To prove that $T\equiv 0$, it suffices to show that $T(X,Y)\equiv 0$ for the case when $X$ and $Y$ are decomposable $k$ and $l$-vector fields respectively. So, let $X=X_1\wedge \cdots \wedge X_k$ and $Y=Y_1\wedge \cdots \wedge Y_l$. To simplify things, write
\begin{align}
\nonumber
X[i] &= X_1\wedge\cdots \wedge \widehat{X}_i\wedge \cdots \wedge X_k\\
\nonumber
Y[j] &= Y_1\wedge\cdots \wedge \widehat{Y}_j\wedge \cdots \wedge Y_l,
\end{align}
where $\widehat{X}_i$ and $\widehat{Y}_j$ denote omission as usual. Using (\ref{ExplicitForm1}) and (\ref{ExplicitForm2}) from the proof of Proposition \ref{ExistenceHC}, we have
\begin{align}
\nonumber
\widetilde{\nabla}_XY&=\sum_{i=1}^k(-1)^{k-i}X[i]\wedge \widetilde{\nabla}_{X_i} Y\\
\nonumber
&=\sum_{i=1}^{k}\sum_{j=1}^l (-1)^{k-i}(-1)^{j-1}X[i]\wedge \widetilde{\nabla}_{X_i} Y_j\wedge Y[j]\\
\nonumber
&=\sum_{i=1}^{k}\sum_{j=1}^l (-1)^{i+j}\widetilde{\nabla}_{X_i} Y_j\wedge X[i]\wedge Y[j].
\end{align}
Likewise,
\begin{equation}
\nonumber
\widetilde{\nabla}_YX=\sum_{i=1}^{k}\sum_{j=1}^l (-1)^{i+j}\widetilde{\nabla}_{Y_j} X_i\wedge Y[j]\wedge X[i].
\end{equation}
Recall that for $X$, $Y$ decomposable, the Schouten-Nijenhuis bracket is given by
\begin{equation}
\nonumber
[X,Y]=\sum_{i=1}^k\sum_{j=1}^l(-1)^{i+j}[X_i,Y_j]\wedge X[i]\wedge Y[j].
\end{equation}
Putting everything together gives
\begin{align}
\nonumber
T(X,Y)&=\widetilde{\nabla}_XY-(-1)^{(k-1)(l-1)}\widetilde{\nabla}_YX-[X,Y]\\
\nonumber
&=\sum_{i=1}^{k}\sum_{j=1}^l (-1)^{i+j}(\widetilde{\nabla}_{X_i} Y_j-\widetilde{\nabla}_{Y_j} X_i-[X_i,Y_j])\wedge X[i]\wedge Y[j]\\
\nonumber
&=0.
\end{align}
This completes the proof.
\end{proof}
\begin{theorem}
\label{TorsionThmB}
Let $\nabla$ be a higher connection and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated to $\nabla$ by Theorem \ref{HigherConnectionData}. Then $\nabla$ is torsion-free iff
\begin{itemize}
\item[(i)] $\widetilde{\nabla}$ is torsion-free, and
\item[(ii)] $F^{k,l}(X,Y)=(-1)^{(k-1)(l-1)}F^{l,k}(Y,X)$ for all $X\in A^k(M)$, $Y\in A^l(M)$.
\end{itemize}
\end{theorem}
\begin{proof}
Let $X\in A^k(M)$ and $Y\in A^l(M)$ and let $T$ denote the higher torsion associated with $\nabla$. Note that if $k=0$ or $l=0$, then $T(X,Y)=0$. Consequently, assume that $k,l>0$. Then
\begin{align}
\nonumber
T(X,Y)&=\nabla_XY-(-1)^{(k-1)(l-1)}\nabla_Y X-[X,Y]\\
\nonumber
&=\widetilde{\nabla}_XY+F^{k,l}(X,Y)-(-1)^{(k-1)(l-1)}\widetilde{\nabla}_Y X-(-1)^{(k-1)(l-1)}F^{l,k}(Y,X)\\
\nonumber
&-[X,Y]\\
\label{TorsionThmB1}
&=\widetilde{T}(X,Y)+F^{k,l}(X,Y)-(-1)^{(k-1)(l-1)}F^{l,k}(Y,X),
\end{align}
where $\widetilde{T}$ denotes the higher torsion associated to $\widetilde{\nabla}$ (as an induced higher connection).
Now suppose that $T\equiv 0$. Since $F^{1,1}\equiv 0$, we have $\widetilde{T}(X,Y)=T(X,Y)=0$ for all $X,Y\in A^1(M)$. Consequently, $\widetilde{\nabla}$ is torsion-free as an affine connection on $TM$. Proposition \ref{VanishingTorsionProp} then implies that $\widetilde{T}\equiv 0$. This fact along with (\ref{TorsionThmB1}) implies that
\begin{equation}
\nonumber
F^{k,l}(X,Y)=(-1)^{(k-1)(l-1)}F^{l,k}(Y,X).
\end{equation}
The converse follows immediately from (\ref{TorsionThmB1}). This completes the proof.
\end{proof}
\begin{corollary}
\label{TorsionGkl}
Let $\nabla$ be a higher connection and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated to $\nabla$ by Theorem \ref{HigherConnectionData}. Let
\begin{equation}
\nonumber
G^{k,l}(X,Y):=F^{k,l}(X,Y)+(-1)^{(k-1)(l-1)}F^{l,k}(Y,X),
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$. Let $\nabla'$ be the unique higher connection associated to $(\widetilde{\nabla},\{G^{k,l}\})$ by Theorem \ref{HigherConnectionData}. If $\widetilde{\nabla}$ is torsion-free as an affine connection on $TM$, then $\nabla'$ has vanishing higher torsion.
\end{corollary}
\begin{proof}
This follows immediately from Theorem \ref{TorsionThmB} by noting that
\begin{equation}
\nonumber
G^{k,l}(X,Y)=(-1)^{(k-1)(l-1)} G^{l,k}(Y,X)
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$.
\end{proof}
\begin{corollary}
\label{UpperLowerTorsionCor}
Let $\nabla$ be an upper-induced (or lower-induced) higher connection. If $\nabla$ is torsion-free, then $\nabla$ is induced by a torsion-free affine connection on $TM$.
\end{corollary}
\begin{proof}
Suppose that $\nabla$ is a torsion-free, upper-induced higher connection, and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated to $\nabla$ by Theorem \ref{HigherConnectionData}. By Theorem \ref{TorsionThmB}, we have
\begin{equation}
\label{ULTorsionCor1}
F^{k,l}(X,Y)=(-1)^{(k-1)(l-1)}F^{l,k}(Y,X)
\end{equation}
for all $X\in A^k(M)$ and $Y\in A^l(M)$. In particular, $F^{k,1}(X,Y)=F^{1,k}(Y,X)$ for all $X\in A^k(M)$ and $Y\in A^1(M)$. By Proposition \ref{UpperInducedProp}, $F^{1,k}\equiv 0$ for all $k$. Equation (\ref{ULTorsionCor1}) (with $l=1$) then implies that $F^{k,1}\equiv 0$ for all $k$. Applying Proposition \ref{UpperInducedProp} once more, we have $F^{k,l}\equiv 0$ for all $k,l$. This shows that $\nabla=\widetilde{\nabla}$ and $\widetilde{\nabla}$ is torsion-free by Theorem \ref{TorsionThmB}. The proof for the case when $\nabla$ is torsion-free and lower induced is similar.
\end{proof}
\noindent We conclude this section with the following observation:
\begin{proposition}
\label{FlatnessProp}
Let $\nabla$ be a higher connection and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated with $\nabla$ by Theorem \ref{HigherConnectionData}. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $\widetilde{\nabla}$ is flat as an affine connection on $TM$.
\item[(ii)] For any $p\in M$ and any $y\in \wedge^l T_pM$, $l>0$, there exists a neighborhood $U\subset M$ of $p$ and a $Y\in A^l(U)$ such that $Y_p=y$ and $\nabla_XY=F^{k,l}(X,Y)$ for all $X\in A^k(U)$, $k>0$.
\end{itemize}
\end{proposition}
\begin{proof}
(i)$\Rightarrow$(ii). Let $\widetilde{\nabla}$ be flat as an affine connection on $TM$ and let $p\in M$. To start, consider the case when $y\in \wedge^l T_pM$ is a decomposable $l$-vector. Then $y=y_1\wedge \cdots \wedge y_l$ for some $y_j\in T_pM$, $j=1,\dots, l$. Since $\widetilde{\nabla}$ is flat, there exists a neighborhood $U_j$ of $p$ and a $Y_j\in A^1(U_j)$ such that $(Y_j)_p=y_j$ and $\widetilde{\nabla}_X Y_j=0$ on $U_j$ for all $X\in A^1(U_j)$, $j=1,\dots, l$. Let $U=U_1\cap \cdots \cap U_l$, $Y=Y_1\wedge \cdots \wedge Y_l$, and let $X=X_1\wedge \cdots \wedge X_k$ be any decomposable $k$-vector field on $U$, $k>0$. Since $\widetilde{\nabla}$ is induced, we have
\begin{align}
\label{FlatnessProp1}
\widetilde{\nabla}_X Y&=\sum_{i=1}^k \sum_{j=1}^l (-1)^{k-i} (-1)^{j-1}X[i]\wedge \widetilde{\nabla}_{X_i}Y_j \wedge Y[j]=0,
\end{align}
where $X[i]=X_1\wedge \cdots \wedge \widehat{X_i}\wedge \cdots \wedge X_k$ and $Y[j]=Y_1\wedge \cdots \wedge \widehat{Y_j}\wedge \cdots \wedge Y_l$. By linearity, (\ref{FlatnessProp1}) holds for all $X\in A^k(U)$.
Now let $y$ be any $l$-vector at $p$. Then $y$ is a sum of decomposable $l$-vectors: $y=y_{(1)}+\cdots + y_{(t)}$, where $y_{(a)}$ is a decomposable $l$-vector for $a=1,\dots, t$. By the above argument, there exists a neighborhood $U_{(a)}$ of $p$ and a decomposable $l$-vector field $Y_{(a)}$ on $U_{(a)}$ such that $(Y_{(a)})_p=y_{(a)}$ and $\widetilde{\nabla}_X Y_{(a)}=0$ for all $X\in A^k(U_{(a)})$, $a=1,\dots, t$. Now set $U=U_{(1)}\cap \cdots \cap U_{(t)}$ and $Y=Y_{(1)}+\cdots +Y_{(t)}$. Then $Y_p=y$ and $\widetilde{\nabla}_XY=0$ for all $X\in A^k(U)$, $k>0$. This fact along with Theorem \ref{HigherConnectionData} gives
\begin{equation}
\nonumber
\nabla_X Y=\widetilde{\nabla}_X Y +F^{k,l}(X,Y)=F^{k,l}(X,Y),\hspace*{0.1in}\forall X\in A^k(U),~k>0.
\end{equation}
(i)$\Leftarrow$(ii). Let $\nabla$ be a higher connection satisfying (ii). From Theorem \ref{HigherConnectionData}, $\nabla$ and $\widetilde{\nabla}$ (as an induced higher connection) coincide on $A^1(M)\times A^1(M)$ (i.e., $F^{1,1}\equiv 0$). Consequently, for the special case of $k=l=1$, (ii) implies that for any $p\in M$ and any $y\in T_pM$, there exists a neighborhood $U$ of $p$ and a $Y\in A^1(U)$ such that $Y_p=y$ and $\widetilde{\nabla}_X Y=0$ on $U$ for all $X\in A^1(U)$. In other words, $\widetilde{\nabla}$ is a flat affine connection on $TM$. This completes the proof.
\end{proof}
\begin{corollary}
Let $\widetilde{\nabla}$ be a flat affine connection on $TM$ and extend $\widetilde{\nabla}$ to an induced higher connection. Then for any $p\in M$ and any $y\in \wedge^l T_pM$, $l>0$, there exists a neighborhood $U\subset M$ of $p$ and a $Y\in A^l(U)$ such that $Y_p=y$ and $\widetilde{\nabla}_XY=0$ for all $X\in A^k(U)$, $k>0$.
\end{corollary}
\section{Extension to Differential Forms}
Let $\nabla$ be a higher connection, $\omega \in \Omega^l(M)$, and $X\in A^k(M)$. For $k>0$ and $l-k+1\ge 0$, we define $\nabla_X\omega\in \Omega^{l-k+1}(M)$ via
\begin{align}
\label{CovDerDiffForm}
\nabla_X\omega(Y):=(-1)^{(k-1)(l-1)}L_X i_Y\omega-\omega(\nabla_X Y)
\end{align}
for all $Y\in A^{l-k+1}(M)$. (For $\eta\in \Omega^0(M)$ and $f\in A^0(M):=C^\infty(M)$, we understand $\eta(f)$ to mean $f\eta$.) Lastly, for $k=0$ or $l-k+1<0$, we set $\nabla_X\omega:=0$. We will now verify that $\nabla_X\omega$ is indeed an $(l-k+1)$-form. To do this, we need the following lemmas:
\begin{lemma}
\label{InteriorProduct2}
Let $X=X_1\wedge \cdots \wedge X_k$ be a decomposable $k$-vector field and let $\alpha\in \Omega^1(M)$ and $\omega\in \Omega^\bullet(M)$. Then
\begin{equation}
\nonumber
i_X(\alpha\wedge \omega)=\sum_{j=1}^k(-1)^{j+1}\alpha(X_j)i_{X_1\wedge \cdots \wedge \widehat{X}_j\wedge \cdots \wedge X_k}\omega+(-1)^k\alpha\wedge i_X\omega.
\end{equation}
\end{lemma}
\begin{proof}
We prove Lemma \ref{InteriorProduct2} by induction on $k$. For $k=1$, $X$ is a vector field and (\ref{InteriorProductDeg1}) gives
\begin{equation}
i_X(\alpha\wedge \omega)=\alpha(X)\omega-\alpha\wedge i_X\omega.
\end{equation}
Hence, Lemma \ref{InteriorProduct2} holds for $k=1$. Now suppose that Lemma \ref{InteriorProduct2} holds for $X=X_1\wedge \cdots \wedge X_k$. Let $X_{k+1}\in A^1(M)$. By Proposition \ref{InteriorProduct1}, we have
\begin{align}
\nonumber
i_{X\wedge X_{k+1}}(\alpha\wedge \omega) &= i_{X_{k+1}} i_X(\alpha\wedge \omega)\\
\nonumber
&=i_{X_{k+1}}\left(\sum_{j=1}^k(-1)^{j+1}\alpha(X_j)i_{X_1\wedge \cdots \wedge \widehat{X}_j\wedge \cdots \wedge X_k}\omega+(-1)^k\alpha\wedge i_X\omega\right)\\
\nonumber
&=\sum_{j=1}^k(-1)^{j+1}\alpha(X_j)i_{X_{k+1}}i_{X_1\wedge \cdots \wedge \widehat{X}_j\wedge \cdots \wedge X_k}\omega+(-1)^ki_{X_{k+1}}(\alpha\wedge i_X\omega)\\
\nonumber
&=\sum_{j=1}^k(-1)^{j+1}\alpha(X_j)i_{X_1\wedge \cdots \wedge \widehat{X}_j\wedge \cdots \wedge X_{k+1}}\omega\\
\nonumber
&+(-1)^k(\alpha(X_{k+1}) i_X\omega-\alpha\wedge i_{X_{k+1}}i_X\omega)\\
\nonumber
&=\sum_{j=1}^k(-1)^{j+1}\alpha(X_j)i_{X_1\wedge \cdots \wedge \widehat{X}_j\wedge \cdots \wedge X_{k+1}}\omega+(-1)^{k+2}\alpha(X_{k+1}) i_X\omega\\
\nonumber
&+(-1)^{k+1}\alpha\wedge i_{X\wedge X_{k+1}}\omega\\
\nonumber
&=\sum_{j=1}^{k+1}(-1)^{j+1}\alpha(X_j)i_{X_1\wedge \cdots \wedge \widehat{X}_j\wedge \cdots \wedge X_{k+1}}\omega+(-1)^{k+1}\alpha\wedge i_{X\wedge X_{k+1}}\omega,
\end{align}
where we have used the induction hypothesis in the second equality, and Proposition \ref{InteriorProduct1} and equation (\ref{InteriorProductDeg1}) in the fourth equality. This completes the proof.
\end{proof}
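\noindent For example, for $k=2$ Lemma \ref{InteriorProduct2} reads
\begin{equation}
\nonumber
i_{X_1\wedge X_2}(\alpha\wedge \omega)=\alpha(X_1)i_{X_2}\omega-\alpha(X_2)i_{X_1}\omega+\alpha\wedge i_{X_1\wedge X_2}\omega.
\end{equation}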
\begin{lemma}
\label{SchoutenLemma}
Let $X=X_1\wedge \cdots \wedge X_k$ be a decomposable $k$-vector field and let $f\in C^\infty(M)$. Then
\begin{equation}
\nonumber
[X,f]=\sum_{j=1}^k(-1)^{k-j} (X_jf) X_1\wedge \cdots \wedge\widehat{X}_j\wedge \cdots \wedge X_k
\end{equation}
\end{lemma}
\begin{proof}
We prove this by induction on $k$. For $k=1$, we have $X=X_1$ and $[X_1,f]:=X_1f$. Hence, the lemma holds for $k=1$. Suppose that the lemma holds for $X=X_1\wedge \cdots \wedge X_k$. Let $X_{k+1}\in A^1(M)$. Then
\begin{align}
\nonumber
[X\wedge X_{k+1},f]&=-(-1)^k[f,X\wedge X_{k+1}]\\
\nonumber
&=-(-1)^k([f,X]\wedge X_{k+1}+(-1)^k X\wedge [f,X_{k+1}])\\
\nonumber
&=-(-1)^k(-(-1)^{k-1}[X,f]\wedge X_{k+1}+(-1)^{k+1} X\wedge [X_{k+1},f])\\
\nonumber
&=-(-1)^k((-1)^{k}[X,f]\wedge X_{k+1}+(-1)^{k+1} (X_{k+1}f)X)\\
\nonumber
&=-[X,f]\wedge X_{k+1}+ (X_{k+1}f)X\\
\nonumber
&=\sum_{j=1}^k(-1)^{k+1-j} (X_jf) X_1\wedge \cdots\wedge \widehat{X}_j\wedge \cdots \wedge X_k\wedge X_{k+1}+ (X_{k+1}f)X\\
\nonumber
&=\sum_{j=1}^{k+1}(-1)^{k+1-j} (X_jf) X_1\wedge \cdots\wedge \widehat{X}_j\wedge \cdots \wedge X_k\wedge X_{k+1},
\end{align}
where we have used the induction hypothesis in the sixth equality. This completes the proof.
\end{proof}
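\noindent In particular, for $k=2$ Lemma \ref{SchoutenLemma} gives
\begin{equation}
\nonumber
[X_1\wedge X_2,f]=(X_2f)X_1-(X_1f)X_2.
\end{equation}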
\begin{lemma}
\label{InteriorProduct3}
Let $X\in A^k(M)$, $f\in C^\infty(M)$, and $\omega\in \Omega^\bullet(M)$. Then
\begin{equation}
\nonumber
L_Xf\omega = fL_X\omega+i_{[X,f]}\omega.
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality, let $X=X_1\wedge \cdots \wedge X_k$ be a decomposable $k$-vector field. Then
\begin{align}
\nonumber
L_Xf\omega &=di_Xf\omega-(-1)^ki_Xd f\omega\\
\nonumber
&=d(fi_X\omega) - (-1)^ki_X(df\wedge \omega+fd\omega)\\
\nonumber
&=df\wedge i_X\omega+f d i_X\omega\\
\nonumber
&-(-1)^k\left(\sum_{j=1}^{k} (-1)^{j+1}(X_j f) i_{X_1\wedge\cdots \wedge\widehat{X}_j \wedge \cdots \wedge X_k}\omega+(-1)^kdf\wedge i_X\omega+fi_Xd\omega\right)\\
\nonumber
&=fdi_X\omega+\sum_{j=1}^k(-1)^{k-j}(X_j f) i_{X_1\wedge\cdots \wedge\widehat{X}_j \wedge \cdots \wedge X_k}\omega-(-1)^k f i_Xd\omega\\
\nonumber
&=fL_X\omega+\sum_{j=1}^k(-1)^{k-j}(X_j f) i_{X_1\wedge\cdots \wedge\widehat{X}_j \wedge \cdots \wedge X_k}\omega\\
\nonumber
&=fL_X\omega+i_{[X,f]}\omega,
\end{align}
where Lemma \ref{InteriorProduct2} is used in the third equality and Lemma \ref{SchoutenLemma} is used in the last equality.
\end{proof}
\begin{proposition}
\label{P925}
Let $\nabla$ be a higher connection. Then $\nabla_X\omega=L_X\omega\in C^\infty(M)$ for $X\in A^k(M)$ and $\omega\in \Omega^{k-1}(M)$.
\end{proposition}
\begin{proof}
Let $f\in C^\infty(M)$. Then
\begin{align}
\nonumber
f\nabla_X\omega&=\nabla_X\omega(f)\\
\nonumber
&:=(-1)^{(k-1)(k-2)}L_Xi_f\omega-\omega(\nabla_Xf)\\
\nonumber
&=L_Xf\omega-\omega([X,f])\\
\nonumber
&=fL_X\omega+i_{[X,f]}\omega-\omega([X,f])\\
\nonumber
&=fL_X\omega,
\end{align}
where Lemma \ref{InteriorProduct3} is used in the fourth equality. This proves the proposition.
\end{proof}
\begin{theorem}
\label{CovDerDiffFormProp}
Let $\nabla$ be a higher connection, $\omega \in \Omega^l(M)$, and $X\in A^k(M)$ with $k> 0$. Then $\nabla_X \omega\in \Omega^{l-k+1}(M)$.
\end{theorem}
\begin{proof}
For $l-k+1\ge 0$, let $Y\in A^{l-k+1}(M)$. It follows from (\ref{CovDerDiffForm}) that $\nabla_X\omega(Y)\in C^\infty(M)$. To prove Theorem \ref{CovDerDiffFormProp}, it suffices to show
\begin{equation}
\label{CovDerDiffFormProp1}
\nabla_X\omega(fY)=f\nabla_X\omega(Y)
\end{equation}
for $f\in C^\infty(M)$. We now verify (\ref{CovDerDiffFormProp1}):
\begin{align}
\nonumber
\nabla_X\omega(fY)&=(-1)^{(k-1)(l-1)}L_X i_{fY}\omega-\omega(\nabla_X fY)\\
\nonumber
&=(-1)^{(k-1)(l-1)}L_X fi_Y\omega-\omega([X,f]\wedge Y)-f\omega(\nabla_XY)\\
\nonumber
&=(-1)^{(k-1)(l-1)}fL_X i_Y\omega+(-1)^{(k-1)(l-1)}i_{[X,f]}i_Y\omega-\omega([X,f]\wedge Y)\\
\nonumber
&-f\omega(\nabla_XY)\\
\nonumber
&=f\nabla_X\omega(Y)+(-1)^{(k-1)(l-1)}i_{[X,f]}i_Y\omega-\omega([X,f]\wedge Y)\\
\nonumber
&=f\nabla_X\omega(Y)+(-1)^{(k-1)(l-1)}i_Y\omega([X,f])-\omega([X,f]\wedge Y)\\
\nonumber
&=f\nabla_X\omega(Y)+(-1)^{(k-1)(l-1)}\omega(Y\wedge [X,f])-\omega([X,f]\wedge Y)\\
\nonumber
&=f\nabla_X\omega(Y)+(-1)^{k(k-1)}\omega([X,f]\wedge Y)-\omega([X,f]\wedge Y)\\
\nonumber
&=f\nabla_X\omega(Y),
\end{align}
where Lemma \ref{InteriorProduct3} is used in the third equality. This completes the proof.
\end{proof}
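\noindent Specializing (\ref{CovDerDiffForm}) to $k=1$, the sign is $+1$ and $i_Y\omega=\omega(Y)$ for $Y\in A^l(M)$, so (\ref{CovDerDiffForm}) reduces to the familiar formula for the covariant derivative of an $l$-form along a vector field:
\begin{equation}
\nonumber
\nabla_X\omega(Y)=X(\omega(Y))-\omega(\nabla_XY).
\end{equation}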
\noindent We now conclude this section with some of the properties of (\ref{CovDerDiffForm}). Before doing so, we need a quick lemma:
\begin{lemma}
\label{LieDerivativeLemma1}
Let $X\in A^k(M)$, $\eta\in \Omega^\bullet(M)$, and $f\in C^\infty(M)$. Then
\begin{equation}
\nonumber
L_{fX}\eta=df\wedge i_X\eta +fL_X\eta.
\end{equation}
\end{lemma}
\begin{proof}
From (\ref{LieDerivativeMVF}), we have
\begin{align}
\nonumber
L_{fX}\eta&=di_{fX}\eta-(-1)^ki_{fX}d\eta\\
\nonumber
&=dfi_X\eta-(-1)^kfi_Xd\eta\\
\nonumber
&=df\wedge i_X\eta+fdi_X\eta-(-1)^kfi_Xd\eta\\
\nonumber
&=df\wedge i_X\eta+fL_X\eta.
\end{align}
\end{proof}
\begin{theorem}
\label{CovDerDiffFormProp2}
Let $\nabla$ be a higher connection. For $\omega \in \Omega^l(M)$, $X\in A^k(M)$ $(k>0)$, and $f\in C^\infty(M)$, $\nabla$ satisfies
\begin{itemize}
\item[(i)] $\nabla_{fX}\omega=f\nabla_X\omega$
\item[(ii)] $\nabla_Xf\omega=f\nabla_X\omega+i_{[X,f]} \omega$
\end{itemize}
If $\nabla$ is also upper induced, then
\begin{itemize}
\item[(iii)] $\nabla_X(i_W\omega)=(-1)^{j(k-1)}(i_W\nabla_X\omega+i_{\nabla_XW}\omega)$ for $W\in A^j(M)$.
\end{itemize}
\end{theorem}
\begin{proof}
Let $l-k+1\ge 0$ and let $Y\in A^{l-k+1}(M)$. For (i), we have
\begin{align}
\nonumber
\nabla_{fX}\omega(Y)&=(-1)^{(k-1)(l-1)}L_{fX}i_Y\omega-\omega(\nabla_{fX}Y)\\
\nonumber
&=(-1)^{(k-1)(l-1)}df\wedge i_Xi_Y\omega+(-1)^{(k-1)(l-1)}fL_Xi_Y\omega-f\omega(\nabla_{X}Y)\\
\nonumber
&=(-1)^{(k-1)(l-1)}df\wedge i_{Y\wedge X}\omega+f\nabla_X\omega(Y)\\
\nonumber
&=f\nabla_X\omega(Y),
\end{align}
where Lemma \ref{LieDerivativeLemma1} was used in the second equality, Proposition \ref{InteriorProduct1} was used in the third equality, and the last equality follows from the fact that $Y\wedge X\in A^{l+1}(M)$ and $\omega\in \Omega^l(M)$.
For (ii), we have
\begin{align}
\nonumber
\nabla_X f\omega (Y)&=(-1)^{(k-1)(l-1)}L_Xi_Y(f\omega)-(f\omega)(\nabla_XY)\\
\nonumber
&=(-1)^{(k-1)(l-1)}L_Xfi_Y\omega-(f\omega)(\nabla_XY)\\
\nonumber
&=(-1)^{(k-1)(l-1)}fL_Xi_Y\omega+(-1)^{(k-1)(l-1)}i_{[X,f]}i_Y\omega-(f\omega)(\nabla_XY)\\
\nonumber
&=f\nabla_X\omega(Y)+(-1)^{(k-1)(l-1)}i_{[X,f]}i_Y\omega\\
\nonumber
&=f\nabla_X\omega(Y)+(-1)^{k(k-1)}i_Yi_{[X,f]}\omega\\
\nonumber
&=f\nabla_X\omega(Y)+i_{[X,f]}\omega(Y),
\end{align}
where Lemma \ref{InteriorProduct3} is used in the third equality and Proposition \ref{InteriorProduct1} is used in the fifth equality.
For (iii), let $\nabla$ be an upper induced higher connection. Note that for $j=0$, (iii) follows directly from (ii) above. Now assume that $j>0$. Let $t=l-j-k+1\ge 0$ and let $Z\in A^t(M)$. Then
\begin{align}
\nonumber
\nabla_X(i_W\omega)(Z)&=(-1)^{(l-j-1)(k-1)}L_Xi_Zi_W\omega-i_W\omega(\nabla_X Z)\\
\nonumber
&=(-1)^{(l-1)(k-1)}(-1)^{j(k-1)}L_Xi_{W\wedge Z}\omega-\omega(W\wedge\nabla_X Z)\\
\nonumber
&=(-1)^{(l-1)(k-1)}(-1)^{j(k-1)}L_Xi_{W\wedge Z}\omega-(-1)^{j(k-1)}\omega(\nabla_X(W\wedge Z))\\
\nonumber
&+(-1)^{j(k-1)}\omega(\nabla_X(W\wedge Z))-\omega(W\wedge\nabla_X Z)\\
\nonumber
&=(-1)^{j(k-1)}\nabla_X\omega(W\wedge Z)\\
\nonumber
&+(-1)^{j(k-1)}(\omega((\nabla_XW)\wedge Z)+(-1)^{j(k-1)}\omega(W\wedge \nabla_X Z))\\
\nonumber
&-\omega(W\wedge\nabla_X Z)\\
\nonumber
&=(-1)^{j(k-1)}\nabla_X\omega(W\wedge Z)+(-1)^{j(k-1)}\omega((\nabla_XW)\wedge Z)\\
\nonumber
&=(-1)^{j(k-1)}(i_W\nabla_X\omega(Z)+i_{\nabla_XW}\omega(Z)),
\end{align}
where Proposition \ref{InteriorProduct1} is used in the second equality, and (\ref{CovDerDiffForm}) and the fact that $\nabla$ is upper induced are used in the fourth equality. This completes the proof.
\end{proof}
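\begin{remark}
When $k=1$, the bracket $[X,f]$ is simply the function $X(f)$, and the interior product with a function acts by multiplication, so (ii) of Theorem \ref{CovDerDiffFormProp2} reduces to the classical Leibniz rule
\begin{equation}
\nonumber
\nabla_X(f\omega)=f\nabla_X\omega+X(f)\,\omega.
\end{equation}
\end{remark}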
\begin{theorem}
Let $\nabla$ be a torsion-free induced higher connection. Then
\begin{equation}
\label{P924a}
\nabla_{X\wedge Y} \omega = (-1)^li_Y\nabla_X\omega+(-1)^{k(l-1)}i_X\nabla_Y\omega
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$, and $\omega\in \Omega^m(M)$.
\end{theorem}
\begin{proof}
For $m<k+l-1$, both sides of (\ref{P924a}) are zero. In addition, for $k=0$ or $l=0$, (\ref{P924a}) follows from Theorem \ref{CovDerDiffFormProp2}-(i). We now verify (\ref{P924a}) for $m\ge k+l-1$ with $k,l>0$.
Let $Z\in A^t(M)$ where $t=m-k-l+1$. Then
\begin{align}
\label{P924b}
\nabla_{X\wedge Y}\omega(Z)=qL_{X\wedge Y} i_Z\omega-\omega(\nabla_{X\wedge Y} Z),
\end{align}
where $q:=(-1)^{(k+l-1)(m-1)}$. Using (iv) and (ii) of Proposition \ref{LieDerivativeProp} and (i) of Proposition \ref{InteriorProduct1}, the first term in (\ref{P924b}) can be decomposed as
\begin{align}
\label{P924c}
qL_{X\wedge Y} i_Z\omega=q(-1)^{kl}L_Xi_{Z\wedge Y}\omega-q(-1)^li_{Z\wedge [X,Y]}\omega+qL_Yi_{Z\wedge X}\omega.
\end{align}
From (\ref{CovDerDiffForm}), we have
\begin{align}
\label{P924d}
(-1)^{(k-1)(m-1)}L_Xi_{Z\wedge Y}\omega&=\nabla_X\omega(Z\wedge Y)+\omega(\nabla_X(Z\wedge Y))\\
\label{P924e}
(-1)^{(l-1)(m-1)}L_Y i_{Z\wedge X}\omega&=\nabla_Y\omega(Z\wedge X)+\omega(\nabla_Y(Z\wedge X)).
\end{align}
Substituting (\ref{P924d}) and (\ref{P924e}) into (\ref{P924c}) and using the fact that
\begin{equation}
\nonumber
q=(-1)^{(k-1)(m-1)}(-1)^{l(m-1)}=(-1)^{(l-1)(m-1)}(-1)^{k(m-1)}
\end{equation}
gives
\begin{align}
\nonumber
qL_{X\wedge Y} i_Z\omega&=(-1)^{l(m-1)}(-1)^{kl}\left(\nabla_X\omega(Z\wedge Y)+\omega(\nabla_X(Z\wedge Y))\right)\\
\nonumber
&+(-1)^{k(m-1)}\left(\nabla_Y\omega(Z\wedge X)+\omega(\nabla_Y(Z\wedge X))\right)\\
\label{P924f}
&-q(-1)^l\omega(Z\wedge [X,Y]).
\end{align}
Since $\nabla$ is induced, (\ref{P924f}) is further expanded as
\begin{align}
\nonumber
qL_{X\wedge Y} i_Z\omega&=(-1)^{l(m-1)}(-1)^{kl}\nabla_X\omega(Z\wedge Y)\\
\nonumber
&+(-1)^{l(m-1)}(-1)^{kl}\omega((\nabla_X Z)\wedge Y)\\
\nonumber
&+(-1)^{l(m-1)}(-1)^{kl}(-1)^{(k-1)t}\omega(Z\wedge \nabla_X Y)\\
\nonumber
&+(-1)^{k(m-1)}\nabla_Y\omega(Z\wedge X)\\
\nonumber
&+(-1)^{k(m-1)}\omega((\nabla_YZ)\wedge X)\\
\nonumber
&+(-1)^{k(m-1)}(-1)^{(l-1)t}\omega(Z\wedge \nabla_Y X)\\
\label{P924g}
&-q(-1)^l\omega(Z\wedge [X,Y]).
\end{align}
Swapping all the wedge products in (\ref{P924g}) with the appropriate signs gives
\begin{align}
\nonumber
qL_{X\wedge Y} i_Z\omega&=(-1)^{l}\nabla_X\omega(Y\wedge Z)+(-1)^{kl}\omega(Y\wedge \nabla_X Z)\\
\nonumber
&+(-1)^{l}\omega((\nabla_X Y)\wedge Z)+(-1)^{k(l-1)}\nabla_Y\omega(X\wedge Z)\\
\nonumber
&+\omega(X\wedge \nabla_YZ)+(-1)^{k(l-1)}\omega((\nabla_Y X)\wedge Z)\\
\label{P924h}
&-(-1)^l\omega([X,Y]\wedge Z).
\end{align}
(\ref{P924h}) can be rewritten as
\begin{align}
\nonumber
qL_{X\wedge Y} i_Z\omega&=(-1)^{l}i_Y(\nabla_X\omega)(Z)+(-1)^{kl}\omega(Y\wedge \nabla_X Z)\\
\nonumber
&+(-1)^{k(l-1)}i_X(\nabla_Y\omega)(Z)+\omega(X\wedge \nabla_YZ)\\
\label{P924i}
&+(-1)^l\omega(T(X,Y)\wedge Z),
\end{align}
where $T$ is the higher torsion of $\nabla$. Since $\nabla$ is torsion-free, the last term vanishes and we obtain
\begin{align}
\nonumber
qL_{X\wedge Y} i_Z\omega&=(-1)^{l}i_Y(\nabla_X\omega)(Z)+(-1)^{kl}\omega(Y\wedge \nabla_X Z)\\
\label{P924j}
&+(-1)^{k(l-1)}i_X(\nabla_Y\omega)(Z)+\omega(X\wedge \nabla_YZ).
\end{align}
Since $\nabla$ is induced, the second term in (\ref{P924b}) decomposes as
\begin{equation}
\label{P924k}
\omega(\nabla_{X\wedge Y} Z)=\omega(X\wedge \nabla_Y Z)+(-1)^{kl}\omega(Y\wedge \nabla_X Z).
\end{equation}
Substituting (\ref{P924j}) and (\ref{P924k}) into (\ref{P924b}) proves (\ref{P924a}) for the case $m\ge k+l-1$.
\end{proof}
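\begin{remark}
In the lowest nontrivial case $k=l=1$, so that $X$ and $Y$ are ordinary vector fields, (\ref{P924a}) reads
\begin{equation}
\nonumber
\nabla_{X\wedge Y}\omega=-i_Y\nabla_X\omega+i_X\nabla_Y\omega,
\end{equation}
expressing covariant differentiation along the 2-vector field $X\wedge Y$ entirely in terms of the covariant derivatives along $X$ and $Y$.
\end{remark}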
\section{Higher Connections and Associative Bilinear Forms}
\noindent Throughout this section, let $n:=\dim M$.
\begin{definition}
\label{DefBM}
Let $\mathcal{B}(M)$ be the set of all fiberwise $\mathbb{R}$-bilinear forms $\eta$ on $\wedge^\bullet TM$ such that
\begin{itemize}
\item[(i)] $\eta(x\wedge y,z)=\eta(x,y\wedge z)$ for all $x\in \wedge^k T_pM$, $y\in \wedge^l T_pM$, $z\in \wedge^m T_pM$, $p\in M$ with $k,l,m\ge 0$,
\item[(ii)] $\eta$ is smooth in the sense that for all $X\in A^k(M)$ and $Y\in A^l(M)$, the function $\eta(X,Y): p\mapsto \eta(X_p,Y_p)$ is smooth on $M$.
\end{itemize}
\end{definition}
\begin{corollary}
\label{CorBM1}
Let $\eta\in \mathcal{B}(M)$. For all $p\in M$, $\eta$ satisfies
\begin{itemize}
\item[(a)] $\eta(x,y)=(-1)^{kl}\eta(y,x)$ for $x\in \wedge^k T_pM$, $y\in \wedge^l T_pM$
\item[(b)] $\eta(x,y)=0$ for $x\in \wedge^k T_pM$, $y\in \wedge^l T_pM$ with $k+l>n$.
\end{itemize}
\end{corollary}
\begin{proof}
Let $1\in \wedge^0T_pM:=\mathbb{R}$. For (a), we have
\begin{align}
\nonumber
\eta(x,y)&=\eta(x,y\wedge 1)\\
\nonumber
&=\eta(x\wedge y,1)\\
\nonumber
&=(-1)^{kl}\eta(y\wedge x,1)\\
\nonumber
&=(-1)^{kl}\eta(y,x\wedge 1)\\
\nonumber
&=(-1)^{kl}\eta(y,x).
\end{align}
For $k+l>n$, $x\wedge y=0$ and the second equality in the above calculation implies that $\eta(x,y)=0$. This proves (b).
\end{proof}
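\begin{remark}
In particular, (a) of Corollary \ref{CorBM1} forces $\eta$ to be antisymmetric on tangent vectors: $\eta(x,y)=-\eta(y,x)$ for $x,y\in T_pM$. Hence the restriction of $\eta$ to $T_pM$ can never be a (pseudo-)Riemannian metric, a first hint of the link with multisymplectic geometry made precise in Corollary \ref{tPlecticCor}.
\end{remark}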
\begin{proposition}
\label{BMDiffFormProp}
There is a one-to-one correspondence between $\mathcal{B}(M)$ and $\Omega^\bullet(M):=\bigoplus_{k=0}^n\Omega^k(M)$. This correspondence is given by associating $\eta\in \mathcal{B}(M)$ with $\{\omega^{(k)}\}_{k=0}^n$ where
\begin{itemize}
\item[(1)] $\omega^{(k)}\in \Omega^k(M)$, for $k=0,\dots, n$,
\item[(2)] $\eta(x,y)=\omega^{(k+l)}(x\wedge y)$ $\forall ~x\in \wedge^kT_pM$, $y\in \wedge^lT_pM$, $p\in M$, $1\le k+l\le n$, and
\item[(3)] $\eta(x,y)=(xy)\omega_p^{(0)}$ $\forall~x,y\in \wedge^0 T_pM:=\mathbb{R}$, $p\in M$.
\end{itemize}
\end{proposition}
\begin{proof}
Let $\omega\in \Omega^\bullet(M)$ and decompose $\omega$ as
\begin{equation}
\nonumber
\omega=\omega^{(0)}+\omega^{(1)}+\cdots +\omega^{(n)},
\end{equation}
where $\omega^{(k)}\in \Omega^k(M)$. For $p\in M$, $x\in \wedge^k T_pM$, $y\in \wedge^lT_pM$ with $1\le k+l\le n$, $\eta$ is defined via
\begin{equation}
\nonumber
\eta(x,y):=\omega^{(k+l)}_p(x\wedge y).
\end{equation}
Then for $z\in \wedge^m T_pM$ with $1 \le k+l+m\le n$, we have
\begin{align}
\nonumber
\eta(x\wedge y,z)&=\omega^{(k+l+m)}_p((x\wedge y)\wedge z)\\
\nonumber
&=\omega^{(k+l+m)}_p(x\wedge (y\wedge z))\\
\nonumber
&=\eta(x,y\wedge z).
\end{align}
For $p\in M$, $x,y\in \wedge^0T_pM:=\mathbb{R}$, $\eta$ is defined by $\eta(x,y):=(xy)\omega^{(0)}_p$. For $z\in \wedge^0T_pM:=\mathbb{R}$, we have
\begin{equation}
\nonumber
\eta(xy,z):=((xy)z)\omega^{(0)}_p=(x(yz))\omega^{(0)}_p=\eta(x,yz).
\end{equation}
For $x\in \wedge^kT_pM$, $y\in \wedge^lT_pM$, $p\in M$ with $k+l>n$, we set $\eta(x,y):=0$. This proves (i) of Definition \ref{DefBM}. (ii) of Definition \ref{DefBM} follows from the fact that $\omega$ is smooth.
Now let $\eta\in \mathcal{B}(M)$. For $p\in M$, $x\in \wedge^kT_pM$ with $k\ge 1$, we define
\begin{equation}
\nonumber
\omega^{(k)}_p(x):=\eta(x,1),
\end{equation}
where we identify $\omega_p^{(k)}$ with an element of $\wedge^k T^\ast_pM$ via the natural isomorphism $(\wedge^k T_pM)^\ast\simeq \wedge^k T^\ast_pM$. For $k=0$, we define $\omega_p^{(0)}:=\eta(\mathbf{1},\mathbf{1})(p)$ where $\mathbf{1}\in A^0(M)=C^\infty(M)$ is the constant function $p\mapsto 1\in \mathbb{R}$. (ii) of Definition \ref{DefBM} implies that $\omega^{(k)}$ is smooth.
Let $\varphi: \Omega^\bullet(M)\rightarrow \mathcal{B}(M)$ and $\psi: \mathcal{B}(M)\rightarrow \Omega^\bullet(M)$ be the two mappings constructed above. It is then clear that $\psi\circ \varphi=id_{\Omega^\bullet(M)}$ and $\varphi\circ \psi=id_{\mathcal{B}(M)}$. This proves the proposition.
\end{proof}
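\begin{remark}
A simple example of the correspondence in Proposition \ref{BMDiffFormProp} (assuming $n\ge 1$): let $\mu\in\Omega^n(M)$ and take $\omega^{(n)}:=\mu$ with $\omega^{(k)}:=0$ for $k<n$. The associated bilinear form is the degree pairing
\begin{equation}
\nonumber
\eta(x,y)=
\begin{cases}
\mu_p(x\wedge y) & \text{if } k+l=n,\\
0 & \text{otherwise},
\end{cases}
\end{equation}
for $x\in \wedge^kT_pM$, $y\in \wedge^lT_pM$. As we will see below (Proposition \ref{NondegenBM}), this $\eta$ is nondegenerate on the fibers of $\wedge^\bullet TM$ precisely when $\mu$ is a volume form.
\end{remark}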
\noindent We now give a necessary and sufficient condition for $\eta\in \mathcal{B}(M)$ to be nondegenerate on the fibers of $\wedge^\bullet TM$, that is, for any $p\in M$ and any $v\in \wedge^\bullet T_pM$, we have
\begin{equation}
\nonumber
\eta(u,v)=0~\forall~u\in \wedge^\bullet T_pM \Longleftrightarrow ~v=0.
\end{equation}
\begin{proposition}
\label{NondegenBM}
Let $\eta\in \mathcal{B}(M)$ and let $\{\omega^{(k)}\}_{k=0}^n$ be the set of differential forms associated with $\eta$ by Proposition \ref{BMDiffFormProp}. Then $\eta$ is nondegenerate on the fibers of $\wedge^\bullet TM$ iff $\omega^{(n)}$ is a volume form.
\end{proposition}
\begin{proof}
$(\Rightarrow)$ Suppose $\eta$ is nondegenerate on the fibers of $\wedge^\bullet TM$. Let $p$ be any element of $M$ and let $v$ be any nonzero element of $\wedge^n T_pM$. Since $\eta$ is nondegenerate, we have
\begin{equation}
\nonumber
\omega^{(n)}_p(v)=\omega^{(n)}_p(v\wedge 1)=\eta(v,1)\neq 0.
\end{equation}
Since $p\in M$ was arbitrary, this proves that $\omega^{(n)}$ is a volume form.
($\Leftarrow$) Suppose $\omega^{(n)}$ is a volume form. Let $p$ be any element of $M$ and let $v$ be any nonzero element in $\wedge^\bullet T_pM$. Then $v$ can be decomposed as
\begin{equation}
\nonumber
v=v^{(0)}+v^{(1)}+\cdots + v^{(n)},
\end{equation}
where $v^{(i)}\in \wedge^iT_pM$ for $i=0,\dots, n$. Since $v\neq 0$, there exists some $i$ such that $v^{(i)}\neq 0$. Let $k:=\min\{i~|~v^{(i)}\neq 0\}$. By Proposition \ref{Multilinear3}, there exists an $(n-k)$-vector $u\in \wedge^{n-k}T_pM$ such that $u\wedge v^{(k)}\in \wedge^nT_pM$ is nonzero. For $i>k$, we clearly have $u\wedge v^{(i)} =0$, since $u\wedge v^{(i)}$ has degree greater than $n$. This gives
\begin{align}
\nonumber
\eta(u,v)&=\eta(u\wedge v,1)\\
\nonumber
&=\omega^{(n)}_p(u\wedge v^{(k)})\\
\nonumber
&\neq 0,
\end{align}
where the last line follows from the fact that $\wedge^nT_pM$ is a 1-dimensional space generated by $u\wedge v^{(k)}\neq 0$ and $\omega^{(n)}$ is non-vanishing. This completes the proof.
\end{proof}
\begin{corollary}
\label{FrobeniusAlgCor}
Let $\eta\in \mathcal{B}(M)$ and let $\{\omega^{(k)}\}$ be the set of differential forms associated with $\eta$ by Proposition \ref{BMDiffFormProp}. Then $(\wedge^\bullet T_pM,\eta_p)$ is a supercommutative Frobenius algebra for all $p\in M$ iff $\omega^{(n)}$ is a volume form.
\end{corollary}
\begin{proof}
Recall that a Frobenius algebra is a finite dimensional unital associative algebra $A$ with a nondegenerate bilinear form $\langle\cdot,\cdot\rangle$ satisfying
\begin{equation}
\label{FrobeniusAssociative}
\langle ab,c\rangle =\langle a, bc\rangle\hspace*{0.1in} \forall a,b,c\in A.
\end{equation}
Note that $\wedge^\bullet T_pM$ is naturally a finite dimensional unital associative algebra under the exterior product. In addition, the bilinear form $\eta_p$ satisfies the associativity condition of (\ref{FrobeniusAssociative}). For $(\wedge^\bullet T_pM,\eta_p)$ to be a Frobenius algebra, we only need $\eta_p$ to be nondegenerate. By Proposition \ref{NondegenBM}, this happens precisely when $\omega^{(n)}$ is a volume form. Lastly, note that $\wedge^\bullet T_pM$ is naturally a $\mathbb{Z}_2$-graded algebra via
\begin{equation}
\nonumber
\wedge^\bullet T_pM=\left(\wedge^\bullet T_pM\right)_0\oplus (\wedge^\bullet T_pM)_1
\end{equation}
where
\begin{equation}
\nonumber
\left(\wedge^\bullet T_pM\right)_0:=\bigoplus_{k\ge 0}\wedge^{2k} T_pM,\hspace*{0.2in}\left(\wedge^\bullet T_pM\right)_1:=\bigoplus_{k\ge 0}\wedge^{2k+1} T_pM.
\end{equation}
With the above $\mathbb{Z}_2$-grading, the relation $x\wedge y=(-1)^{kl}y\wedge x$ for $x\in \wedge^kT_pM$ and $y\in \wedge^l T_pM$ is precisely the condition of supercommutativity.
\end{proof}
\begin{proposition}
\label{closedBM}
Let $\eta\in \mathcal{B}(M)$ and let $\{\omega^{(k)}\}_{k=0}^n$ be the set of differential forms associated with $\eta$ by Proposition \ref{BMDiffFormProp}. If $d\omega^{(k)}=0$ for all $k$, then for any decomposable $(k+1)$-vector field $X=X_1\wedge \cdots\wedge X_{k+1}$, $\eta$ satisfies
\begin{equation}
\label{closedBMFormula}
\sum_{i=1}^{k+1} (-1)^i X_i(\eta(X[i],\textbf{1}))=\sum_{1\le i<j\le k+1} (-1)^{i+j}\eta([X_i,X_j],X[i,j])
\end{equation}
where $\textbf{1}$ denotes the function on $M$ whose value is always $1\in\mathbb{R}$ and
\begin{align}
\nonumber
X[i]&:=X_1\wedge \cdots \wedge \widehat{X_i}\wedge \cdots \wedge X_{k+1}\\
\nonumber
X[i,j]&:=X_1\wedge \cdots \wedge \widehat{X_i}\wedge \cdots\wedge\widehat{X_j}\wedge \cdots \wedge X_{k+1}.
\end{align}
In addition, if $M$ is connected, then $\eta(\textbf{1},\textbf{1})$ is constant on $M$.
\end{proposition}
\begin{proof}
This follows from the invariant formula for the exterior derivative \cite{Lee}. Specifically,
\begin{align}
\nonumber
d\omega^{(k)}(X)&=\sum_{i=1}^{k+1} (-1)^{i-1}X_i(\omega^{(k)}(X[i]))+\sum_{1\le i<j\le k+1}(-1)^{i+j}\omega^{(k)}([X_i,X_j]\wedge X[i,j])\\
\nonumber
&=\sum_{i=1}^{k+1} (-1)^{i-1}X_i(\eta(X[i],\textbf{1}))+\sum_{1\le i<j\le k+1}(-1)^{i+j}\eta([X_i,X_j], X[i,j]).
\end{align}
The first part then follows from the fact that $d\omega^{(k)}=0$. For the second part, suppose that $M$ is connected. Then for any $Y\in A^1(M)$, we have
\begin{align}
d\omega^{(0)}(Y)=Y\omega^{(0)}=Y\eta(\textbf{1},\textbf{1}).
\end{align}
Since $d\omega^{(0)}=0$, it follows that $\eta(\textbf{1},\textbf{1})$ is locally constant on $M$. Since $M$ is connected, $\eta(\textbf{1},\textbf{1})$ must be constant on all of $M$.
\end{proof}
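\begin{remark}
For $k=1$ and $X=X_1\wedge X_2$, formula (\ref{closedBMFormula}) reads
\begin{equation}
\nonumber
-X_1(\eta(X_2,\textbf{1}))+X_2(\eta(X_1,\textbf{1}))=-\eta([X_1,X_2],\textbf{1}),
\end{equation}
which, after substituting $\eta(\,\cdot\,,\textbf{1})=\omega^{(1)}(\cdot)$, is precisely the invariant formula for the condition $d\omega^{(1)}(X_1\wedge X_2)=0$.
\end{remark}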
Using Proposition \ref{BMDiffFormProp}, there is a natural way to define the covariant derivative of $\eta$ with respect to a higher connection $\nabla$. For $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in A^m(M)$ (with $k+l+m\le n+1$), we define
\begin{align}
\label{CovDerBM}
(\nabla_X\eta)(Y,Z):=(\nabla_X\omega^{(t)})(Y\wedge Z),
\end{align}
where $t=k+l+m-1$ and $\{\omega^{(i)}\}$ is the set of differential forms associated with $\eta$ by Proposition \ref{BMDiffFormProp}.
\begin{definition}
\label{ParallelBMDef}
Let $\eta\in \mathcal{B}(M)$ and let $\nabla$ be a higher connection on $M$. $\eta$ is \textit{parallel} with respect to $\nabla$ if $(\nabla_X\eta)(Y,Z)=0$ for all $X\in A^k(M)$, $Y\in A^l(M)$, and $Z\in A^m(M)$, where $k>0$ and $l+m>0$. This condition is denoted symbolically by $\nabla\eta\equiv 0$.
\end{definition}
\begin{remark}
Note that when $l+m=0$ in Definition \ref{ParallelBMDef}, we have $Y,Z\in C^\infty(M)$. Let $f:=Y$ and $g:=Z$. Then for $X\in A^k(M)$ $(k>0)$, Proposition \ref{P925} implies that
\begin{align}
\nonumber
(\nabla_X\eta)(f,g)&=(\nabla_X\omega^{(k-1)})(fg)\\
\nonumber
&=fg(\nabla_X\omega^{(k-1)})\\
\nonumber
&=fgL_X\omega^{(k-1)}.
\end{align}
In other words, when $l+m=0$, $(\nabla_X\eta)(f,g)$ is completely determined by the Lie derivative of $\omega^{(k-1)}$ along $X$ with no contribution from the higher connection.
\end{remark}
\noindent The next result gives a necessary and sufficient condition for $\nabla \eta\equiv 0$:
\begin{proposition}
\label{ParallelBMProp}
Let $\eta\in \mathcal{B}(M)$ and let $\nabla$ be a higher connection on $M$. In addition, let $\{\omega^{(i)}\}$ be the set of differential forms associated with $\eta$ by Proposition \ref{BMDiffFormProp} and let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated with $\nabla$ by Theorem \ref{HigherConnectionData}. Then $\nabla\eta\equiv 0$ (in the sense of Definition \ref{ParallelBMDef}) iff
\begin{equation}
\label{ParallelBMProp1}
(\widetilde{\nabla}_X\omega^{(t)})(Y)=\omega^{(t)}(F^{k,l}(X,Y))
\end{equation}
for all $X\in A^k(M)$, $Y\in A^l(M)$ with $k,l>0$, where $t=k+l-1\le n:=\dim M$.
\end{proposition}
\begin{proof}
Since (\ref{CovDerBM}) depends on $Y$ and $Z$ only through the product $Y\wedge Z$, it suffices to consider the case where $Y\in A^l(M)$ with $l>0$ and $Z=\textbf{1}\in A^0(M)=C^\infty(M)$, where $\textbf{1}$ is the function whose value is always $1\in \mathbb{R}$. Then
\begin{align}
\nonumber
(\nabla_X\eta)(Y,\textbf{1})&=(\nabla_X\omega^{(t)})(Y\wedge \textbf{1})\\
\nonumber
&=(\nabla_X\omega^{(t)})(Y)\\
\nonumber
&=(-1)^{(k-1)(t-1)}L_Xi_Y\omega^{(t)}-\omega^{(t)}(\nabla_XY)\\
\nonumber
&=(-1)^{(k-1)(t-1)}L_Xi_Y\omega^{(t)}-\omega^{(t)}(\widetilde{\nabla}_XY)-\omega^{(t)}(F^{k,l}(X,Y))\\
\nonumber
&=(\widetilde{\nabla}_X\omega^{(t)})(Y)-\omega^{(t)}(F^{k,l}(X,Y)),
\end{align}
where the last equality follows from (\ref{CovDerDiffForm}). From this, we see that $\nabla\eta\equiv 0$ iff (\ref{ParallelBMProp1}) is satisfied.
\end{proof}
\begin{remark}
Since $F^{1,1}\equiv 0$, we see from Proposition \ref{ParallelBMProp} that $\nabla\eta\equiv 0$ implies that $\omega^{(1)}$ is parallel with respect to the affine connection $\widetilde{\nabla}$, that is, $(\widetilde{\nabla}_X\omega^{(1)})(Y)= 0$ for all $X,Y\in A^1(M)$.
\end{remark}
\noindent Proposition \ref{ParallelBMProp} suggests that an induced higher connection has little chance of satisfying $\nabla\eta\equiv 0$: unless very restrictive conditions are placed on $\eta$, any higher connection satisfying $\nabla\eta\equiv 0$ will have nonzero twist fields, that is, it must be non-induced. Equipping the full exterior bundle $\wedge^\bullet TM$ with an associative bilinear form $\eta$ and demanding that $\nabla\eta\equiv 0$ thus genuinely requires the notion of a non-induced higher connection.
Given $\eta\in \mathcal{B}(M)$, we seek a higher connection $\nabla$ such that
\begin{itemize}
\item[(a)] $\nabla\eta\equiv 0$
\item[(b)] $T^\nabla\equiv 0$ (where $T^\nabla$ denotes the higher torsion of $\nabla$).
\end{itemize}
Let $(\widetilde{\nabla},\{F^{k,l}\})$ be the unique pair associated with $\nabla$ by Theorem \ref{HigherConnectionData}. By Theorem \ref{TorsionThmB}, $\nabla$ is torsion-free iff $\widetilde{\nabla}$ is torsion-free as an affine connection on $TM$, and the twist fields satisfy
\begin{equation}
\label{TwistFieldsTorsionRecall}
F^{k,l}(X,Y)=(-1)^{(k-1)(l-1)}F^{l,k}(Y,X)
\end{equation}
for $X\in A^k(M)$, $Y\in A^l(M)$, with $k,l> 0$ and $k+l-1\le n:=\dim M$. Let $\{\omega^{(t)}\}$ be the unique set of differential forms associated with $\eta$ by Proposition \ref{BMDiffFormProp}. It follows from Proposition \ref{ParallelBMProp} and equation (\ref{TwistFieldsTorsionRecall}) that any higher connection which satisfies (a) and (b) simultaneously must also satisfy the condition
\begin{equation}
\label{NecessaryConditionAB}
(\widetilde{\nabla}_X\omega^{(t)})(Y)=(-1)^{(k-1)(l-1)}(\widetilde{\nabla}_Y\omega^{(t)})(X)
\end{equation}
for all $X\in A^k(M)$, $Y\in A^l(M)$, with $k,l> 0$ and $k+l-1\le n$, where $t:=k+l-1$. Unfortunately, equation (\ref{NecessaryConditionAB}) does not hold in general, which means that a higher connection satisfying (a) and (b) simultaneously does not exist in general. However, all is not quite lost, since (\ref{NecessaryConditionAB}) does hold when $X\wedge Y=0$, as the next result shows.
\begin{proposition}
\label{FlipXYProp}
Let $\widetilde{\nabla}$ be a torsion-free affine connection on $TM$ and extend $\widetilde{\nabla}$ to an induced higher connection. In addition, let $X\in A^k(M)$, $Y\in A^l(M)$, and $\omega\in \Omega^{k+l-1}(M)$, with $k,l>0$ and $k+l-1\le n$. If $X\wedge Y=0$, then
\begin{equation}
\nonumber
(\widetilde{\nabla}_X\omega)(Y)=(-1)^{(k-1)(l-1)}(\widetilde{\nabla}_Y\omega)(X).
\end{equation}
\end{proposition}
\begin{proof}
Let $t:=k+l-1$. Using (\ref{CovDerDiffForm}) and the fact that $(-1)^{(k-1)(t-1)}=(-1)^{(k-1)l}$ and $(-1)^{(l-1)(t-1)}=(-1)^{(l-1)k}$ gives
\begin{align}
\label{FlipXY1}
(\widetilde{\nabla}_X\omega)(Y)&=(-1)^{(k-1)l}L_Xi_Y\omega-\omega(\widetilde{\nabla}_XY)
\end{align}
and
\begin{align}
\label{FlipXY2}
(\widetilde{\nabla}_Y\omega)(X)&=(-1)^{(l-1)k}L_Yi_X\omega-\omega(\widetilde{\nabla}_YX).
\end{align}
Applying Proposition \ref{LieDerivativeProp}-(ii) to (\ref{FlipXY1}) gives
\begin{align}
\label{FlipXY3}
(\widetilde{\nabla}_X\omega)(Y)&=\omega([X,Y])+i_YL_X\omega-\omega(\widetilde{\nabla}_XY).
\end{align}
Next, multiply (\ref{FlipXY2}) by $s:=(-1)^{(k-1)(l-1)}$ to obtain
\begin{align}
\label{FlipXY4}
s(\widetilde{\nabla}_Y\omega)(X)&=(-1)^{l-1}L_Yi_X\omega-s\omega(\widetilde{\nabla}_YX).
\end{align}
Since $X\wedge Y=0$ (by hypothesis), Proposition \ref{LieDerivativeProp}-(iv) implies
\begin{align}
\label{FlipXY5}
(-1)^{l-1}L_Yi_X\omega=i_YL_X\omega.
\end{align}
Substituting (\ref{FlipXY5}) into (\ref{FlipXY4}) gives
\begin{equation}
\label{FlipXY6}
s(\widetilde{\nabla}_Y\omega)(X)=i_YL_X\omega -s\omega(\widetilde{\nabla}_YX).
\end{equation}
Since $\widetilde{\nabla}$ is torsion-free as an affine connection, its higher torsion (as an induced higher connection) also vanishes by Proposition \ref{VanishingTorsionProp}. Hence, (\ref{FlipXY3}) can be rewritten as
\begin{equation}
\label{FlipXY7}
(\widetilde{\nabla}_X\omega)(Y)=i_YL_X\omega-s\omega(\widetilde{\nabla}_YX).
\end{equation}
A quick comparison of (\ref{FlipXY7}) and (\ref{FlipXY6}) completes the proof.
\end{proof}
\noindent Motivated by Proposition \ref{FlipXYProp} and the above discussion, we introduce the notion of \textit{almost torsion-free}:
\begin{definition}
\label{AlmostTorsionFreeDef}
Let $\nabla$ be a higher connection and let $T$ denote its higher torsion. Then $\nabla$ is \textit{almost torsion-free} if
\begin{itemize}
\item[1.] $T(X,Y)=0$ for all $X,Y\in A^1(M)$; and
\item[2.] $T(X,Y)=0$ for all $X\in A^k(M)$, $Y\in A^l(M)$ such that $X\wedge Y=0$.
\end{itemize}
\end{definition}
\noindent To set up our next result, let
\begin{equation}
\nonumber
\varphi^{(k)}: \mathcal{B}(M)\rightarrow \Omega^k(M),
\end{equation}
be the map given by $\eta\mapsto \omega^{(k)}$, where $\{\omega^{(t)}\}_{t=0}^n$ is the set of differential forms associated with $\eta$ by Proposition \ref{BMDiffFormProp}. Let $\mathcal{B}^\circ(M)\subset \mathcal{B}(M)$ be the set of all $\eta\in \mathcal{B}(M)$ such that
\begin{itemize}
\item[1.] $\varphi^{(1)}(\eta)\equiv 0$
\item[2.] for $t>1$, if $\omega^{(t)}:=\varphi^{(t)}(\eta)\neq 0$, then $\omega^{(t)}$ is also nonvanishing, that is, $\omega^{(t)}_p\neq 0$ for all $p\in M$.
\end{itemize}
\begin{theorem}
\label{HigherConnBMThm}
For any $\eta\in \mathcal{B}^\circ(M)$, $M$ admits an almost torsion-free higher connection $\nabla$ such that $\nabla\eta\equiv 0$ in the sense of Definition \ref{ParallelBMDef}.
\end{theorem}
\begin{proof}
Let $\omega^{(t)}:=\varphi^{(t)}(\eta)$ for $t=0,1,\dots, n$ and let $\mathcal{T}:=\{t~|~\omega^{(t)}\neq 0\}$. Fix a Riemannian metric $g$ on $M$. Let $\overline{g}$ denote the inverse metric on $T^\ast M$. Recall that if $g$ is expressed in local coordinates as
\begin{equation}
\nonumber
g=\sum_{i,j} g_{ij}dx^i\otimes dx^j,
\end{equation}
then
\begin{equation}
\nonumber
\overline{g}=\sum_{i,j}g^{ij}\frac{\partial}{\partial x^i}\otimes \frac{\partial}{\partial x^j}
\end{equation}
where $(g^{ij})=(g_{ij})^{-1}$. For decomposable $k$-forms $\omega:=\omega^1\wedge \cdots \wedge \omega^k$ and $\phi:=\phi^1\wedge \cdots \wedge \phi^k$ define
\begin{equation}
\nonumber
\langle\omega,\phi\rangle := \det(\overline{g}(\phi^i,\omega^j)).
\end{equation}
Extending bilinearly, $\langle\cdot, \cdot\rangle$ defines a smooth, symmetric, positive definite bilinear form on each exterior bundle $\wedge^k T^\ast M$. For each $t\in \mathcal{T}$ with $t>0$, let $E^{(t)}\in A^t(M)$ be the $t$-vector field defined by
\begin{equation}
\label{EtDef}
E^{(t)}:=\frac{1}{\langle\omega^{(t)},\omega^{(t)}\rangle}(\omega^{(t)})^\sharp,
\end{equation}
where $(\omega^{(t)})^\sharp$ is the $t$-vector field obtained by raising the indices of $\omega^{(t)}$ with $\overline{g}$. Recall that by hypothesis, $\omega^{(t)}$ is non-vanishing on $M$. Hence, $\langle\omega^{(t)},\omega^{(t)}\rangle|_{p}\neq 0$ for all $p\in M$. (\ref{EtDef}) then implies
\begin{align}
\label{EtIdentity}
\omega^{(t)}(E^{(t)})&=\frac{1}{\langle\omega^{(t)},\omega^{(t)}\rangle}\omega^{(t)}((\omega^{(t)})^\sharp)=\frac{1}{\langle\omega^{(t)},\omega^{(t)}\rangle}\langle\omega^{(t)},\omega^{(t)}\rangle=1.
\end{align}
We will now use $\{E^{(t)}\}$ to construct an almost torsion-free higher connection satisfying $\nabla\eta\equiv 0$.
By Theorem \ref{HigherConnectionData}, a higher connection is determined by an affine connection $\widetilde{\nabla}$ on $TM$ and a set of twist fields $\{F^{k,l}\}$ where $k,l>0$ and $k+l-1\le n:=\dim M$. To obtain the desired higher connection, let $\widetilde{\nabla}$ be any torsion-free affine connection on $TM$ (e.g., take $\widetilde{\nabla}$ to be the Levi-Civita connection associated to $g$). By Proposition \ref{ParallelBMProp}, the twist fields must be chosen so that
\begin{equation}
\label{ChooseFklA}
(\widetilde{\nabla}_X\omega^{(t)})(Y)=\omega^{(t)}(F^{k,l}(X,Y))
\end{equation}
for all $X\in A^k(M)$, $Y\in A^l(M)$ with $k,l>0$, where $t=k+l-1\le n:=\dim M$. We will now construct a complete set of twist fields $\{F^{k,l}\}$ ($k,l>0$, $k+l-1\le n$) which satisfy (\ref{ChooseFklA}).
Now, for $t\notin \mathcal{T}$ ($t\le n$), we have $\omega^{(t)}\equiv 0$ by definition. Consequently, for each $t\notin \mathcal{T}$ we can set $F^{k,l}\equiv 0$ for all $k,l>0$ satisfying $k+l-1=t$. This clearly satisfies (\ref{ChooseFklA}). Note that by hypothesis, $1\notin \mathcal{T}$, and with our choice, we also have $F^{1,1}\equiv 0$ (as required by Theorem \ref{HigherConnectionData}).
Now for each $t\in \mathcal{T}$, we define $F^{k,l}$ with $k,l>0$ and $k+l-1=t$ by
\begin{equation}
\label{ChooseFklB}
F^{k,l}(X,Y):=\left[(\widetilde{\nabla}_X\omega^{(t)})(Y)\right]E^{(t)}\in A^{k+l-1}(M)
\end{equation}
for all $X\in A^k(M)$, $Y\in A^l(M)$. Note that $(\widetilde{\nabla}_X\omega^{(t)})(Y)\in C^\infty(M)$, and that $F^{k,l}(fX,Y)=F^{k,l}(X,fY)=fF^{k,l}(X,Y)$ for all $f\in C^\infty(M)$ by Theorems \ref{CovDerDiffFormProp} and \ref{CovDerDiffFormProp2}. It follows easily from (\ref{EtIdentity}) that (\ref{ChooseFklB}) satisfies (\ref{ChooseFklA}). This completes the construction of the twist fields.
With $\widetilde{\nabla}$ and $\{F^{k,l}\}$ in hand, we define $\nabla$ to be the higher connection which is uniquely determined by the pair $(\widetilde{\nabla},\{F^{k,l}\})$. Proposition \ref{ParallelBMProp} then implies that $\nabla\eta\equiv 0$ in the sense of Definition \ref{ParallelBMDef}. All that remains to be done now is to show that $\nabla$ is almost torsion-free. To do this, let $T$ denote the higher torsion of $\nabla$. Since $\widetilde{\nabla}$ is torsion-free, we have $T(X,Y)=0$ for all $X, Y\in A^1(M)$, which is the first condition of Definition \ref{AlmostTorsionFreeDef}. To verify the second condition, let $X\in A^k(M)$, $Y\in A^l(M)$ with $k,l>0$, $k+l-1\le n$, and $X\wedge Y=0$. With $\widetilde{\nabla}$ torsion-free, the proof of Theorem \ref{TorsionThmB} shows that $T(X,Y)=0$ iff the twist fields satisfy
\begin{equation}
\label{CheckAlmostTorsionFree1}
F^{k,l}(X,Y)=(-1)^{(k-1)(l-1)}F^{l,k}(Y,X).
\end{equation}
We now verify (\ref{CheckAlmostTorsionFree1}). For $k+l-1\notin \mathcal{T}$, we have
\begin{equation}
\nonumber
F^{k,l}(X,Y)=0=(-1)^{(k-1)(l-1)}F^{l,k}(Y,X).
\end{equation}
For $t=k+l-1\in \mathcal{T}$, we have
\begin{align}
\nonumber
F^{k,l}(X,Y)&=[(\widetilde{\nabla}_X\omega^{(t)})(Y)]E^{(t)}\\
\nonumber
&=(-1)^{(k-1)(l-1)}[(\widetilde{\nabla}_Y\omega^{(t)})(X)]E^{(t)}\\
\nonumber
&=(-1)^{(k-1)(l-1)}F^{l,k}(Y,X),
\end{align}
where the second equality follows from Proposition \ref{FlipXYProp}. This completes the proof.
\end{proof}
Using Theorem \ref{HigherConnBMThm}, we can relate higher connections to \textit{multisymplectic geometry} \cite{CIL} (or \textit{higher symplectic geometry} if one follows the terminology of \cite{Rog}). To start, we recall the notion of a multisymplectic form of degree $t+1$ (or, more concisely, a $t$-plectic form):
\begin{definition}
A $(t+1)$-form $\omega$ on $M$ is multisymplectic of degree $t+1$ (or $t$-plectic) if it satisfies the following two conditions:
\begin{itemize}
\item[(i)] $d\omega=0$;
\item[(ii)] $\omega$ is non-degenerate in the sense that for all $p\in M$ and $v\in T_pM$,
\begin{equation}
\nonumber
i_v\omega=0 \Leftrightarrow v=0.
\end{equation}
\end{itemize}
The pair $(M,\omega)$ is then called a multisymplectic manifold of degree $t+1$ (or a \textit{$t$-plectic manifold}).
\end{definition}
\begin{remark}
Following the terminology of \cite{Rog}, a 1-plectic form is just a symplectic form on $M$.
\end{remark}
To relate $t$-plectic forms to higher connections, let $\mathcal{B}^{plc}(M)$ be the set of all $\eta\in \mathcal{B}(M)$ such that
\begin{itemize}
\item[(i)] $\varphi^{(1)}(\eta)\equiv 0$
\item[(ii)] for $t>1$, if $\varphi^{(t)}(\eta)\neq 0$, then $\varphi^{(t)}(\eta)$ is a $(t-1)$-plectic form.
\end{itemize}
Since a $(t-1)$-plectic form is necessarily non-vanishing, we immediately have $\mathcal{B}^{plc}(M)\subset \mathcal{B}^\circ(M)$. Theorem \ref{HigherConnBMThm} then implies the following:
\begin{corollary}
\label{tPlecticCor}
Let $\eta\in \mathcal{B}^{plc}(M)$. Then $M$ admits an almost torsion-free higher connection $\nabla$ such that $\nabla\eta\equiv 0$ in the sense of Definition \ref{ParallelBMDef}.
\end{corollary}
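\begin{remark}
In particular, if $(M,\omega)$ is an ordinary symplectic manifold, then the bilinear form $\eta\in\mathcal{B}(M)$ determined by $\omega^{(2)}:=\omega$ and $\omega^{(t)}:=0$ for $t\neq 2$ belongs to $\mathcal{B}^{plc}(M)$, since a symplectic form is 1-plectic. Corollary \ref{tPlecticCor} then equips $M$ with an almost torsion-free higher connection $\nabla$ satisfying $\nabla\eta\equiv 0$.
\end{remark}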
\section{Conclusion}
In this paper, the notion of higher connections has been introduced as part of a program in differential geometry to extend the familiar constructions and operations for vector fields to multivector fields (MVFs). The aforementioned program is motivated by generalized geometry and string theory, and is based on the idea of treating the full exterior bundle $\wedge^\bullet TM$ as an extended tangent bundle with the Schouten-Nijenhuis bracket playing the role of the Lie bracket of vector fields. Consequently, in the context of this program, a higher connection on the full exterior bundle $\wedge^\bullet TM$ is the analogue of an affine connection on the tangent bundle $TM$.
In Section 5, we equipped the full exterior bundle $\wedge^\bullet TM$ with an associative bilinear form $\eta$ and showed that such a structure can be naturally identified with a collection of differential forms $\{\omega^{(t)}\}$ of various degrees. This fact allowed us to take the covariant derivative of $\eta$ with respect to a higher connection. The problem of finding a higher connection $\nabla$ which satisfies $\nabla\eta\equiv 0$ naturally leads to the notion of a non-induced higher connection; the differential forms associated with $\eta$ determine the twist fields of the higher connection. For any $\eta\in \mathcal{B}^\circ(M)$,
Theorem \ref{HigherConnBMThm} shows that $M$ admits an almost torsion-free higher connection $\nabla$ which satisfies $\nabla\eta\equiv 0$. However, the higher connection constructed in the proof of Theorem \ref{HigherConnBMThm} is by no means unique or canonical, and this raises the following question:
\begin{itemize}
\item[ ] \textit{What conditions could be placed on the associative bilinear form $\eta$ which would give rise to a unique or canonical higher connection? In other words, is there a ``best'' choice of higher connection?}
\end{itemize}
Corollary \ref{tPlecticCor}, an immediate consequence of Theorem \ref{HigherConnBMThm}, links higher connections to multisymplectic geometry by restricting attention to all $\eta$ which are built up from multisymplectic forms of various degrees. The question raised above as well as the relationship between higher connections and multisymplectic geometry (which was only touched upon in this paper) will be explored in greater depth as part of future work.
| {
"timestamp": "2014-12-30T02:17:18",
"yymm": "1408",
"arxiv_id": "1408.4082",
"language": "en",
"url": "https://arxiv.org/abs/1408.4082",
"abstract": "For a smooth manifold $M$, it was shown in \\cite{BPH} that every affine connection on the tangent bundle $TM$ naturally gives rise to covariant differentiation of multivector fields (MVFs) and differential forms along MVFs. In this paper, we generalize the covariant derivative of \\cite{BPH} and construct covariant derivatives along MVFs which are not induced by affine connections on $TM$. We call this more general class of covariant derivatives \\textit{higher affine connections}. In addition, we also propose a framework which gives rise to non-induced higher connections; this framework is obtained by equipping the full exterior bundle $\\wedge^\\bullet TM$ with an associative bilinear form $\\eta$. Since the latter can be shown to be equivalent to a set of differential forms of various degrees, this framework also provides a link between higher connections and multisymplectic geometry.",
"subjects": "Differential Geometry (math.DG); Mathematical Physics (math-ph)",
"title": "Higher Affine Connections"
} |
https://arxiv.org/abs/1602.03773 | A sequence of triangle-free pseudorandom graphs | A construction of Alon yields a sequence of highly pseudorandom triangle-free graphs with edge density significantly higher than one might expect from comparison with random graphs. We give an alternative construction for such graphs. | \section{Introduction}
A graph $G$ is said to be $(p, \beta)$-jumbled if
\[\left|e(X) - p \binom{|X|}{2}\right| \leq \beta |X|\]
for all $X \subseteq V(G)$. For example, the binomial random graph $G_{n,p}$ is $(p, \beta)$-jumbled with $\beta = O(\sqrt{pn})$. It is not hard to show~\cite{EGPS88, ES71} that this is essentially best possible, in that a graph with $n$ vertices cannot be $(p, \beta)$-jumbled with $\beta = c\sqrt{pn}$ for $c$ sufficiently small. For further information on jumbled graphs and their properties, we refer the reader to the survey~\cite{KS06} or, for more recent developments, the paper~\cite{CFZ14}.
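To make the definition concrete, the following Python sketch (illustrative only; the parameters $n = 12$, $p = 1/2$ and the constant in $\beta$ are arbitrary choices, not taken from any of the cited results) brute-forces the jumbledness condition over all vertex subsets of a small random graph, and checks that the complete graph fails it for a small $\beta$.

```python
import itertools
import math
import random

def is_jumbled(adj, p, beta):
    """Brute-force check of |e(X) - p*binom(|X|,2)| <= beta*|X| over all
    vertex subsets X (exponential in n; tiny graphs only)."""
    n = len(adj)
    for r in range(n + 1):
        for X in itertools.combinations(range(n), r):
            e = sum(adj[u][v] for u, v in itertools.combinations(X, 2))
            if abs(e - p * math.comb(len(X), 2)) > beta * len(X):
                return False
    return True

random.seed(0)
n, p = 12, 0.5
adj = [[0] * n for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        adj[u][v] = adj[v][u] = int(random.random() < p)

# beta of order sqrt(pn); the constant 2 is ample at this tiny scale.
print(is_jumbled(adj, p, 2 * math.sqrt(p * n)))  # True
K = [[int(u != v) for v in range(n)] for u in range(n)]
print(is_jumbled(K, p, 1))                       # False: K_n is too dense
```

At this scale the check is exhaustive; for larger graphs one bounds $\beta$ via eigenvalues instead, as in the expander mixing lemma.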
One important class of $(p, \beta)$-jumbled graphs is the collection of $(n, d, \lambda)$-graphs. These are $d$-regular graphs on $n$ vertices such that all eigenvalues of the adjacency matrix, save the largest, are bounded in absolute value by $\lambda$. By the famous expander mixing lemma, these graphs are $(p, \beta)$-jumbled with $p = d/n$ and $\beta = \lambda$.
One of the best known examples of a pseudorandom graph, constructed by Alon~\cite{A94}, is a triangle-free $(n, d, \lambda)$-graph with $n = 2^{3k}$, $d = 2^{k-1} (2^{k-1} - 1)$ and $\lambda = O(2^k)$. Taking $p = d/n$, we have $\sqrt{p n} = \sqrt{d} = \Omega(2^k)$, so the graph is close to optimally pseudorandom. Since $p = \Omega(n^{-1/3})$, the construction also has surprisingly high density. While there are various ways to modify the usual random graph to produce triangle-free graphs with density roughly $n^{-1/2}$ (see, for example,~\cite{B09}), no such modification can hope to push very far past this density. Nevertheless, Alon's construction does so. The purpose of this note is to give another construction for such graphs.
\begin{theorem} \label{thm:main}
There exists a sequence $(n_i)_{i=1}^{\infty}$ of positive integers such that, for each $i \geq 1$, there is a triangle-free graph $G_i$ on $n_i$ vertices which is $(p, \beta)$-jumbled with $p = \Omega(n_i^{-1/3})$ and $\beta = O(\sqrt{p n_i} \log n_i)$.
\end{theorem}
Our construction is weaker than Alon's on several counts: it does not produce regular graphs; it is not completely explicit; and it does not generalise easily. One might also level the accusation that the resulting graphs are not optimally pseudorandom, with the condition $\beta = O(\sqrt{pn} \log n)$ being a logarithmic factor away from the desired bound. However, it seems likely that this extra log factor is simply an artifact of our proof. Countering these disadvantages, we believe that our construction, which we describe in detail below, is more intuitive than Alon's.
For concreteness, we will work with the polarity graph of Lazebnik, Ustimenko and Woldar~\cite{LUW99}, though the role played by this graph could also be taken by a number of other $C_6$-free graphs with $n$ vertices and $\Omega(n^{4/3})$ edges. Suppose then that $q$ is an odd power of $2$ and let $n = q^3 + q^2 + q + 1$. The polarity graph, which has a small number of loops, is an $(n, d, \lambda)$-graph with $d = q + 1 \geq n^{1/3}$ and, as noted in~\cite[Section 3.7]{KS06}, $\lambda = \sqrt{2q} = O(n^{1/6})$. Once the loops are removed, the resulting graph, which we label by $H$, is $C_3$-, $C_4$- and $C_6$-free. For each vertex $v$ in $H$, we randomly partition its neighbourhood $N_H(v)$ into two sets $A_v$ and $B_v$ and let $G_v$ be the complete bipartite graph between $A_v$ and $B_v$. We now define $G$ to be the graph with the same vertex set as $H$ and edge set $\cup_{v \in V(H)} G_v$. We will show that $G$ asymptotically almost surely satisfies the requirements of Theorem~\ref{thm:main}.
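The bipartition step of the construction is easy to simulate. In the sketch below the polarity graph is replaced by a toy stand-in, a $15$-cycle, which is likewise $C_3$-, $C_4$- and $C_6$-free; its neighbourhoods are tiny, so the example only illustrates the mechanics and the triangle-freeness, not the density.

```python
import random

def build_G(H_adj):
    """For each vertex v of the base graph H, randomly split N_H(v) into
    A_v and B_v and add the complete bipartite graph between them."""
    edges = set()
    for v, nbrs in H_adj.items():
        A = {u for u in nbrs if random.random() < 0.5}
        B = set(nbrs) - A
        edges |= {frozenset((a, b)) for a in A for b in B}
    return edges

def has_triangle(edges):
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # a triangle exists iff some edge's endpoints share a neighbour
    return any(adj[u] & adj[v] for e in edges for u, v in [tuple(e)])

# Toy base graph: a long odd cycle (girth 15), so in particular
# C3-, C4- and C6-free; the real construction uses the polarity graph.
random.seed(2)
n = 15
H = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
print(has_triangle(build_G(H)))  # False: G is triangle-free
```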
It is straightforward to verify that $G$ contains no triangles. To begin, note that since $H$ is $C_4$-free, the only edges of $G$ in $N_H(v)$ are those in $G_v$. Now suppose that $abc$ is a triangle in $G$. If $a, b$ and $c$ are all contained in the same neighbourhood $N_H(v)$ they cannot form a triangle, since the only edges of $G$ in $N_H(v)$ are those in $G_v$ and $G_v$ is bipartite. It must then be the case that the edges $ab$, $bc$ and $ca$ are contained in three different neighbourhoods $N_H(u)$, $N_H(v)$ and $N_H(w)$, respectively. If $u, v$ and $w$ are all distinct from $a, b$ and $c$, then $a u b v c w$ would form a cycle of length $6$ in $H$, contradicting the fact that $H$ is $C_6$-free. On the other hand, if $u = c$, say, then $H$ must contain the triangle $b v c$, again contradicting the choice of $H$.
The remaining claim, that $G$ is asymptotically almost surely $(p, \beta)$-jumbled with $p = \Omega(n^{-1/3})$ and $\beta = O(\sqrt{p n} \log n)$, will be verified in the next section.
\section{Proving $G$ is jumbled} \label{sec:jumble}
Let $H_0$ be the polarity graph with $n = q^3 + q^2 + q + 1$ vertices. This graph is $(q+1)$-regular, has $q^2 + 1$ loops and all eigenvalues of the adjacency matrix, save the largest, are bounded in absolute value by $\sqrt{2q}$. We will form $G$ from $H_0$ by a slightly different procedure to that described in the introduction, though the two are easily seen to produce the same graph.
In the first step, we form $H_0^{(2)}$, the multigraph with loops on the same vertex set as $H_0$ where two vertices are joined if there is a walk of length two in $H_0$ between them, allowing for multiple edges if there is more than one such walk. We note that each vertex has $q+1$ loops, one for each edge in $H_0$, and, since $H$, the simple graph formed by removing the loops from $H_0$, is $C_3$ and $C_4$-free, the only parallel edges arise from loops in $H_0$. In the next step, we turn $H_0^{(2)}$ into a simple graph by removing all loops from $H_0^{(2)}$ and all edges whose corresponding walk in $H_0$ used a loop. The resulting graph $G_1$ is easily seen to be the union of $n$ cliques, each clique being $N_H(v)$ for some $v$ in $H$. We now form the required graph, as before, by randomly partitioning $N_H(v)$ into two sets $A_v$ and $B_v$ and letting $G$ be the union over all $v$ in $H$ of the complete bipartite graphs between $A_v$ and $B_v$.
Following this plan, we first look at $H_0^{(2)}$. Letting $M$ be the adjacency matrix of $H_0$, the adjacency matrix of $H_0^{(2)}$ is simply $M^2$, which implies that the eigenvalues of $H_0^{(2)}$ are the squares of the eigenvalues of $H_0$. Therefore, $H_0^{(2)}$ is an $(n, d, \lambda)$-graph with $d = (q + 1)^2$ and $\lambda = 2q$. Since each vertex of $H_0^{(2)}$ is contained in exactly $q + 1$ loops, the graph $G_0$ formed by removing these loops has adjacency matrix $M^2 - (q+1)I$, implying that $G_0$ is an $(n, d, \lambda)$-graph with $d = q(q+1)$ and $\lambda = q + 1$. Therefore, by the expander mixing lemma, for all $X \subseteq V(G_0)$,
\[\left|e_{G_0}(X) - \frac{q}{q^2+1} \binom{|X|}{2}\right| \leq (q + 1) |X|,\]
where we used the fact that $n = q^3 + q^2 + q + 1 = (q^2+1)(q+1)$.
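The spectral bookkeeping used for $G_0$ — squaring the adjacency matrix squares the eigenvalues, and subtracting a multiple of the identity shifts them — can be checked numerically on any regular graph; the Petersen graph below is simply a convenient small stand-in for the polarity graph.

```python
import numpy as np

# Petersen graph: 3-regular, adjacency eigenvalues 3, 1 (x5), -2 (x4).
edges = ([(i, (i + 1) % 5) for i in range(5)] +         # outer 5-cycle
         [(i, i + 5) for i in range(5)] +               # spokes
         [(5 + i, 5 + (i + 2) % 5) for i in range(5)])  # inner pentagram
M = np.zeros((10, 10))
for u, v in edges:
    M[u, v] = M[v, u] = 1

# Analogue of M^2 - (q+1)I with d = 3: the spectrum of M^2 - 3I is
# {mu^2 - 3 : mu an eigenvalue of M} = {6, -2 (x5), 1 (x4)}.
eig = np.sort(np.linalg.eigvalsh(M @ M - 3 * np.eye(10)))
print(eig)
```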
Note now that every loop in $H_0$ has exactly $q$ other neighbours with which it can form a non-degenerate walk of length two. Therefore, since any $X \subseteq V(H_0)$ contains at most $|X|$ loops, we remove at most $q |X|$ edges from $X$ when forming $G_1$ from $G_0$. By the estimate above, this implies that, for all $X \subseteq V(G_1)$,
\[\left|e_{G_1}(X) - \frac{q}{q^2 + 1}\binom{|X|}{2}\right| \leq (2q + 1) |X|.\]
Therefore, $G_1$ is $(p, \beta)$-jumbled with $p = q/(q^2 + 1)$ and $\beta = 2q + 1$.
Recall that $G_1$ is the union of cliques, while $G$ is the union of random bipartite graphs, one for each clique in $G_1$. Our aim now is to show that asymptotically almost surely $G$ is $(p, \beta)$-jumbled with $p = q/(2(q^2 + 1)) = \Omega(n^{-1/3})$ and $\beta = O(q \log n) = O(\sqrt{pn} \log n)$. We will do this in a rather naive fashion, estimating the probability that
\begin{equation} \label{eqn:main}
\left|e_{G}(X) - \frac{q}{2(q^2 + 1)}\binom{|X|}{2}\right| \leq C q |X| \log n
\end{equation}
for any given $X$ and taking a union bound. To do this, we will need the following concentration inequality for quadratic forms in independent random variables due to Hanson and Wright~\cite{HW71} (the exact version we state follows from Theorem 1.1 in~\cite{RV13}).
\begin{lemma} \label{lem:HW}
Let $Z = (Z_1, \dots, Z_t) \in \{-1, +1\}^t$ be a random vector with independent components each of which is equal to $1$ or $-1$ with probability $1/2$. Let $M$ be a $t \times t$ real matrix. Then
\[\mathbb{P}[|Z^T M Z - \mathbb{E}(Z^T M Z)| > \epsilon] \leq 2 \exp\left\{- c \min \left(\frac{\epsilon^2}{\|M\|_{F}^2}, \frac{\epsilon}{\|M\|}\right)\right\}, \]
where $\|M\|_{F} = (\sum_{i, j} m_{ij}^2)^{1/2}$ is the Frobenius norm and $\|M\| = \sup_{x \neq 0} \|M x\|_2/\|x\|_2$ is the spectral norm.
\end{lemma}
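Before applying the lemma, it may help to see the scale of the fluctuations of such a block quadratic form empirically. The simulation below is a toy instance with arbitrary block sizes and sample count, not part of the proof: it draws independent signs and checks that $Q$ is centred, with fluctuations of the order predicted by its variance.

```python
import random

random.seed(7)
# Toy block quadratic form: "blocks" cliques of size t each;
# Q = sum over cliques of sum_{j != k} Z_j Z_k = s_i^2 - t, where s_i is
# the signed sum of the t random signs in clique i.  E[Q] = 0.
t, blocks, N = 5, 10, 2000
samples = []
for _ in range(N):
    Q = 0
    for _ in range(blocks):
        s = sum(random.choice((-1, 1)) for _ in range(t))
        Q += s * s - t
    samples.append(Q)

mean = sum(samples) / N
sd = 20  # exact: Var(Q) = blocks * (E[s^4] - t^2) = 10 * 40 = 400
tail = sum(abs(q) > 3 * sd for q in samples) / N
print(round(mean, 2), tail)
```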
Suppose now that $G_1[X]$ is the union of $s$ cliques $T_1, \dots, T_s$, of orders $t_1, \dots, t_s$, and let $t = t_1 + \dots + t_s$. We define $t$ random variables $Z_1, \dots, Z_t$, each equal to $1$ or $-1$ with probability $1/2$, and assign one of these random variables to every vertex of every clique, noting that any given vertex may receive multiple random variables, but only one relative to any given clique.
Suppose that the random variables assigned to the clique $T_i$ are $Z_{i1}, \dots, Z_{it_i}$. If $v_i$ is the vertex whose neighbourhood in $H$ is $T_i$, the value of $Z_{ij}$ determines whether its corresponding vertex $T_i(j)$ is placed in $A_{v_i}$ or $B_{v_i}$, with $T_i(j)$ placed in $A_{v_i}$ if $Z_{ij} = 1$ and $B_{v_i}$ if $Z_{ij} = -1$. The number of edges in $G[T_i]$ is then
\[e_G(T_i) = |A_{v_i}||B_{v_i}| = \left(\frac{t_i}{2} + \frac{1}{2}\sum_{j=1}^{t_i} Z_{ij}\right) \left(\frac{t_i}{2} - \frac{1}{2}\sum_{j=1}^{t_i} Z_{ij}\right) = \frac{t_i^2}{4} - \frac{1}{4} \sum_{j=1}^{t_i} \sum_{k=1}^{t_i} Z_{ij} Z_{ik}.\]
Summing over $i$, we have
\begin{align*}
e_G(X) = \sum_{i=1}^s e_G(T_i) & = \sum_{i=1}^s \frac{t_i^2}{4} - \frac{1}{4} \sum_{i=1}^s \sum_{j=1}^{t_i} \sum_{k=1}^{t_i} Z_{ij} Z_{ik}\\
& = \frac{1}{2} \sum_{i=1}^s \binom{t_i}{2} + \frac{1}{4} \sum_{i=1}^s t_i - \frac{1}{4} \sum_{i=1}^s \sum_{j=1}^{t_i} \sum_{k=1}^{t_i} Z_{ij} Z_{ik}\\
& = \frac{e_{G_1}(X)}{2} - \frac{1}{4} \sum_{i=1}^s \sum_{1 \leq j \neq k \leq t_i} Z_{ij} Z_{ik}.
\end{align*}
Therefore, by our estimate on $e_{G_1}(X)$, it only remains to show that
\[Q = \sum_{i=1}^s \sum_{1 \leq j \neq k \leq t_i} Z_{ij} Z_{ik}\]
is smaller than $C q |X| \log n$ with sufficiently high probability.
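The per-clique identity $e_G(T_i) = t_i^2/4 - \frac{1}{4}\big(\sum_j Z_{ij}\big)^2$ underlying this computation can be verified exhaustively for small cliques:

```python
from itertools import product

# For every sign pattern on a clique with t vertices, |A||B| equals
# t^2/4 - (sum of signs)^2/4, for t = 1,...,6.
ok = all(
    sum(z == 1 for z in Z) * sum(z == -1 for z in Z)
    == (t * t - sum(Z) ** 2) / 4
    for t in range(1, 7)
    for Z in product((-1, 1), repeat=t)
)
print(ok)  # True
```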
Let $M$ be the $t \times t$ matrix whose entry $m_{jk}$ is equal to $1$ if $j \neq k$ and $j$ and $k$ (each of which represents a vertex associated to a particular clique) are from the same clique $T_i$. Otherwise, we take $m_{jk}$ to be $0$. Then $Q = Z^T M Z$ and it follows from the Hanson--Wright bound that
\[\mathbb{P}[|Q| > C q |X| \log n] \leq 2 \exp\left\{- c \min \left(\frac{C^2 q^2 |X|^2 \log^2 n}{\|M\|_{F}^2}, \frac{C q |X| \log n}{\|M\|}\right)\right\}.\]
But it is straightforward to verify that
\[\|M\|_F^2 = \sum_{i=1}^s t_i(t_i-1) = 2 e_{G_1}(X) \leq \frac{|X|^2}{q} + 6q|X| \leq 2 \max\left\{\frac{|X|^2}{q}, 6q|X|\right\}\]
and, writing $\rho(A)$ for the spectral radius of a matrix $A$,
\[\|M\| = \sqrt{\rho(M^* M)} = \rho(M) \leq \sup_{x \neq 0} \frac{\|M x\|_{\infty}}{\|x\|_{\infty}} = \max_{1 \leq j \leq t} \sum_{k=1}^t |m_{jk}| \leq q.\]
Therefore,
\begin{align*}
\mathbb{P}[|Q| > C q |X| \log n] & \leq 2 \exp\left\{- c \min \left(\frac{1}{2}C^2 q^3 \log^2 n, \frac{1}{12}C^2 q |X| \log^2 n, C |X| \log n\right)\right\}\\
& \leq 2 \exp\{- 2 |X| \log n\}
\end{align*}
for $C$ sufficiently large in terms of $c$. Applying the union bound, we see that the probability there exists a set $X$ such that \eqref{eqn:main} fails is at most
\[\sum_{|X| = 1}^n \binom{n}{|X|} 2 e^{- 2 |X| \log n} \leq 2 \sum_{|X| = 1}^n n^{|X|} e^{-2 |X| \log n} \leq 2 \sum_{|X| = 1}^n e^{-|X| \log n} \leq \frac{2}{n-1}.\]
The result follows.
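The two norm computations used in the proof, $\|M\|_F^2 = \sum_i t_i(t_i - 1)$ and $\|M\| \leq \max_j \sum_k |m_{jk}|$, can be sanity-checked numerically on a small block matrix of cliques (the clique sizes below are arbitrary):

```python
import numpy as np

# Block-diagonal matrix with one (J - I) block per clique.
sizes = [3, 5, 4, 2]
t = sum(sizes)
M = np.zeros((t, t))
pos = 0
for ti in sizes:
    M[pos:pos + ti, pos:pos + ti] = np.ones((ti, ti)) - np.eye(ti)
    pos += ti

frob_sq = np.sum(M * M)                 # = sum_i t_i (t_i - 1) = 40
spec = max(abs(np.linalg.eigvalsh(M)))  # spectral norm (M is symmetric)
row_max = M.sum(axis=1).max()           # max row sum bound
print(frob_sq, spec, row_max)
```

Here the spectral norm $\max_i(t_i - 1) = 4$ attains the row-sum bound exactly, since each $(J - I)$ block has largest eigenvalue $t_i - 1$.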
\section{Concluding remarks}
Our construction also extends to give a sequence $(n_i)_{i=1}^{\infty}$ of positive integers such that, for each $i \geq 1$, there is a $C_5$-free graph $G_i$ on $n_i$ vertices which is $(p, \beta)$-jumbled with $p = \Omega(n_i^{-3/5})$ and $\beta = O(\sqrt{p n_i} \log n_i)$. The construction starts with the $C_{10}$-free polarity graph of Lazebnik, Ustimenko and Woldar~\cite{LUW99}, but follows the proof of Theorem~\ref{thm:main} in all other respects. Because we lack optimal constructions of $C_{2\ell}$-free graphs for $\ell \geq 6$, our method does not extend further to give constructions of pseudorandom $C_{2k+1}$-free graphs for $k \geq 3$. As mentioned in the introduction, this is a distinct weakness of our method when compared to Alon's, which does extend to longer odd cycles~\cite{AK98, KS06}.
An alternative method for proving the jumbledness of our construction $G$ might be to estimate the eigenvalues of its adjacency matrix $M$ by using the fact that Tr$(M^{2k})$ is both the sum of the $2k^{th}$ powers of its eigenvalues and the number of closed walks of length $2k$ in $G$. However, $G$ is constructed by starting from a graph $G_1$ which is a union of cliques and then taking a random bipartite graph within each clique. This process causes an imbalance between odd and even cycles within each clique, deleting all odd cycles, while doubling the proportion of even cycles relative to the density. This makes it difficult to count the number of degenerate closed walks of length $2k$ without having close control over the counts of degenerate walks of different types in the base graph $G_1$. Nevertheless, it is plausible that this could be done, and may even allow one to save the lost logarithmic factor in $\beta$.
Another definite weakness of our method is that it is not explicit and so, unlike Alon's example, cannot be used to give a constructive lower bound for the off-diagonal Ramsey number $r(3, t)$. It remains to decide whether there is some more explicit method for choosing large bipartite subgraphs of the cliques in $G_1$ which also produces a highly pseudorandom subgraph.
\vspace{3mm}
{\bf Acknowledgements.} I would like to thank the anonymous referee for a number of helpful remarks.
| {
"timestamp": "2016-08-05T02:03:15",
"yymm": "1602",
"arxiv_id": "1602.03773",
"language": "en",
"url": "https://arxiv.org/abs/1602.03773",
"abstract": "A construction of Alon yields a sequence of highly pseudorandom triangle-free graphs with edge density significantly higher than one might expect from comparison with random graphs. We give an alternative construction for such graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "A sequence of triangle-free pseudorandom graphs"
} |
https://arxiv.org/abs/1203.2449 | Tropical matrix groups | We study the subgroup structure of the semigroup of finitary tropical matrices under multiplication. We show that every maximal subgroup is isomorphic to the full linear automorphism group of a related tropical polytope, and that each of these groups is the direct product of the real numbers with a finite group. We also show that there is a natural and canonical embedding of each full rank maximal subgroup into the group of units of the semigroup of matrices over the tropical semiring with minus infinity. Our results have numerous corollaries, including the fact that every automorphism of a projective (as a module) tropical polytope of full rank extends to an automorphism of the containing space, and that every full rank subgroup has a common eigenvector. | \section{Introduction}
Tropical algebra is the algebra of the real numbers (sometimes augmented
with an extra element denoted by $-\infty$) under the operations of
addition and maximum. It has applications in areas such as combinatorial
optimisation and scheduling, control theory, and algebraic geometry to
name but a few (see \cite{Butkovic10} for a survey of applications). Many
problems arising from these application areas are naturally expressed
using (tropical) linear equations, so much of tropical algebra concerns
matrices.
In this paper, we study the full semigroup of real $n \times n$ square
matrices with tropical multiplication. An important step in understanding
tropical
algebra is to understand the maximal subgroups of this semigroup, in terms
of both their abstract group structure and the geometry of their natural
actions on tropical space. It is a basic fact of semigroup theory that every
subgroup of a semigroup $S$ lies in a unique maximal subgroup. Moreover,
the maximal subgroups of $S$ are precisely the \emph{$\GreenH$-classes}
(see Section~\ref{sec_Green} below for definitions) of $S$ which contain
an idempotent element. Recent research of the authors into the structure
of idempotent matrices \cite{K_puredim} provides a useful basis for
studying subgroups.
In addition to this introduction, this article comprises six further sections.
In Section 2 we introduce some preliminary definitions. In Section 3 we
give a brief account of Green's relations for the semigroup of $n \times
n$ tropical matrices and prove that every maximal subgroup $H$ is
isomorphic to the automorphism group of a particular tropical polytope
(namely, the column space of any element of $H$). In Sections 4 and 5 we
summarise a number of results on idempotent tropical matrices.
A consequence of \cite{K_puredim} is that there is an extremely well-behaved
notion of \textit{rank} for tropical idempotents, and hence for maximal subgroups
of tropical matrices. The understanding of idempotents arising from
\cite{K_puredim} is markedly more comprehensive when the idempotents in
question have full rank. Accordingly, Section 6 reduces the problem of
understanding maximal subgroups to the full rank case, by showing that every
rank $k$ maximal subgroup of the semigroup of $n \times n$ matrices
is naturally isomorphic to a maximal subgroup of the semigroup of
$k \times k$ matrices.
Finally, Section~7 establishes our strongest results. We exhibit a
natural and canonical embedding of each full rank maximal subgroup
(and hence of each subgroup) into the group of units of the corresponding
semigroup of matrices over the tropical semiring with $-\infty$.
An analysis of this embedding allows us to show that each maximal subgroup
of rank $k$ is the direct product of $\mathbb{R}$ with a finite group of
permutation degree $k$ or less. Other corollaries of our results include
that every automorphism of a projective tropical polytope extends to an
automorphism of the containing space, and that every subgroup has a common
eigenvector.
The decomposition of maximal subgroups as direct products of $\mathbb{R}$
with finite groups establishes, in the case of matrices with real entries,
a conjecture of the second and third authors \cite{K_tropicalgreen}, which
states that every group of $n \times n$ tropical matrices has a torsion-free
abelian subgroup of index $n!$ or less. This conjecture has been independently
proved in the
general case by Shitov \cite{Shitov12}. It is natural to ask exactly which
finite groups arise in these decompositions. In a companion paper of the
second and third authors \cite{K_finitemetric}, we shall show that for
every finite group $G$, the group $G \times \mathbb{R}$ arises as a
maximal subgroup in sufficiently high dimensions.
Of course, it is also natural to ask about the subgroup structure of the
full semigroup of $n \times n$ matrices over the larger tropical semiring
including $-\infty$. Indeed, there are already a few interesting results
in this direction \cite{K_tropicalgreen,Shitov12}. In this article we have
chosen to focus on matrices with real entries, partly to avoid
technicalities and partly because some of the machinery and results we
employed have not yet been developed for the case with $-\infty$. However,
we believe the results and methods developed here should, with careful
application, suffice to permit a full understanding of the subgroup
structure in the more general case.
\section{Preliminaries}
\label{prelim}
We write $\ft$ for the set $\mathbb{R}$ equipped with the operations of maximum (denoted by $\oplus$) and addition (denoted by $\otimes$, by $+$ or simply by juxtaposition). Thus, we write $a \oplus b = \max(a,b)$ and $a \otimes b = ab = a + b$. It is readily verified that $\ft$ is an abelian group (with neutral element $0$) under $\otimes$ and a commutative semigroup of idempotents (without a neutral element) under $\oplus$, and that $\otimes$ distributes over $\oplus$. These properties mean $\ft$ has the structure of an \textit{idempotent semifield}.
It will sometimes be convenient to work with the extended tropical semifield $\trop =\ft\cup\{-\infty\}$, where we extend the definitions of $\oplus$ and $\otimes$ in the obvious way (namely, $a \oplus -\infty = -\infty \oplus a = a$ and $a \otimes -\infty = -\infty \otimes a = -\infty$, for all $a \in \trop$).
Let $M_n(\ft)$ denote the set of all $n \times n$ matrices with entries in
$\ft$. The operations $\oplus$ and $\otimes$ can be extended in the
obvious way to give corresponding operations on $M_n(\ft)$. In particular,
it is easy to see that $M_n(\ft)$ is a semigroup with respect to tropical
matrix multiplication. We shall see in the following sections that this
semigroup has a rich and interesting structure.
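For readers meeting max-plus arithmetic for the first time, here is a minimal Python sketch of tropical matrix multiplication (the matrices are arbitrary illustrative examples):

```python
def trop_mul(A, B):
    """Max-plus product: (A otimes B)_{ij} = max_k (A_{ik} + B_{kj})."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

A = [[0, 3], [-1, 2]]
B = [[1, 0], [2, 4]]
C = [[0, -2], [5, 1]]
print(trop_mul(A, B))  # [[5, 7], [4, 6]]
# Associativity is what makes M_n(FT) a semigroup:
print(trop_mul(trop_mul(A, B), C) == trop_mul(A, trop_mul(B, C)))  # True
```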
We shall be interested in the space $\ft^n$ consisting of $n$-tuples $x$ with entries in $\ft$; we
write $x_i$ for the $i$th component of $x$. We call $\ft^n$ \textit{(affine) tropical $n$-space}.
The space $\ft^n$ admits
an addition and a scaling action of $\ft$ given by $(x\oplus y)_i = x_i \oplus y_i$ and
$(\lambda x)_i = \lambda (x_i)$ respectively. These operations give $\ft^n$
the structure of an \textit{$\ft$-module}\footnote{Some authors use the term \textit{semimodule}, to
emphasise the non-invertibility of addition, but since no other kind of module exists over $\ft$
we have preferred the more concise term.}. It also
has the structure of a lattice, under the partial order given by $x \leq y$ if $x_i \leq y_i$ for all $i$.
From affine tropical $n$-space we obtain \textit{projective tropical $(n-1)$-space}, denoted $\pft^{n-1}$,
by identifying two vectors if one is a tropical multiple of the other by an element of $\ft$. We
identify $\pft^{n-1}$ with $\mathbb{R}^{n-1}$ via the map
$$(x_1, \ldots, x_n) \mapsto (x_1-x_n, x_2 -x_n, \ldots, x_{n-1} - x_n).$$
Submodules of $\ft^n$ (that is, subsets closed under tropical addition and scaling) are termed
\textit{(tropical) convex sets}. Finitely generated convex sets are called \textit{(tropical) polytopes}.
Since convex sets are closed under scaling, each convex set $X \subseteq \ft^n$ induces a subset of
$\pft^{n-1}$, termed the \textit{projectivisation} of $X$ and denoted $\mathcal{P}X$.
For $A \in M_n(\ft)$ we let $R(A)$ denote the tropical polytope in $\ft^n$ generated by the rows of $A$ and let $C(A)$ denote the tropical polytope in $\ft^n$ generated by the columns of $A$. We call these tropical polytopes the \emph{row space} and \emph{column space} of $A$ respectively.
A point $x$ in a convex set $X$ is called \textit{extremal in $X$} if the set
$$X \smallsetminus \lbrace \lambda \otimes x: \lambda \in \ft \rbrace$$
is a submodule of $X$. Clearly some scaling of every such extremal point must lie in every generating set for
$X$. In fact, every tropical polytope is generated by its extremal points considered up to scaling \cite{Butkovic07,Wagneur91}.
\section{Green's relations, idempotents and regularity}
\label{sec_Green}
Green's relations are five equivalence relations ($\GreenL$, $\GreenR$, $\GreenH$, $\GreenD$ and $\GreenJ$) and three partial orders ($\leq_\GreenR$, $\leq_\GreenL$ and $\leq_\GreenJ$), which can be defined on any semigroup, and which describe the structure of its maximal subgroups and principal left, right and two-sided ideals. We briefly recap the definitions here; for further details (including proofs of the claimed properties) we refer the reader to an introductory text such as \cite{Howie95}.
Let $S$ be any semigroup. If $S$ is a monoid, we set $S^1 = S$, and otherwise we denote by $S^1$ the monoid obtained by adjoining a new identity element $1$ to $S$. We define a binary relation $\leq_\GreenR$ on $S$ by $a \leq_\GreenR b$ if $a S^1 \subseteq b S^1$, that is, if either $a = b$ or there exists $q$ with $a = bq$. We define another relation $\GreenR$ by $a \GreenR b$ if and only if $a S^1 = b S^1$. It is straightforward to check that $\GreenR$ is an equivalence relation, and $\leq_\GreenR$ is a preorder (a reflexive, transitive binary relation) which induces a partial order on the $\GreenR$-equivalence classes.
The relations $\leq_\GreenL$ and $\GreenL$ are the left-right duals of
$\leq_\GreenR$ and $\GreenR$ (that is, $a \leq_\GreenL b$ if $S^1 a
\subseteq S^1 b$, and $a \GreenL b$ if $S^1 a = S^1 b$). The relations
$\leq_\GreenJ$ and $\GreenJ$ are two-sided analogues ($a \leq_\GreenJ b$
if $S^1 a S^1 \subseteq S^1 b S^1$, and $a \GreenJ b$ if
$S^1 a S^1 = S^1 b S^1$). The relations $\GreenH$ and $\GreenD$ are
described in terms of
the $\GreenL$ and $\GreenR$ relations. The $\GreenH$ relation is the
intersection of $\GreenL$ and $\GreenR$ (that is, $a \GreenH b$ if $a
\GreenL b$ and $a \GreenR b$), whilst the $\GreenD$ relation can be defined
by $a \GreenD b$ if and only if there
exists an element $c \in S$ such that $a \GreenR c$ and $c \GreenL a$. It
can be shown that both $\GreenH$ and $\GreenD$ are equivalence relations.
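Green's relations are easiest to see computed out in a tiny finite semigroup. The sketch below uses the $16$-element semigroup of $2 \times 2$ Boolean matrices — chosen purely because it is small, not for any connection with $M_n(\ft)$ — and recovers the fact that the $\GreenH$-class of the identity is the group of units:

```python
from itertools import product

def bmul(a, b):
    """2x2 Boolean matrix product (or-and arithmetic)."""
    return tuple(tuple(int(any(a[i][k] and b[k][j] for k in range(2)))
                       for j in range(2)) for i in range(2))

S = [((m[0], m[1]), (m[2], m[3])) for m in product((0, 1), repeat=4)]
I = ((1, 0), (0, 1))

right = {a: frozenset(bmul(a, s) for s in S) for a in S}  # a S^1 (I lies in S)
left = {a: frozenset(bmul(s, a) for s in S) for a in S}   # S^1 a

def H_class(a):
    return [b for b in S if right[b] == right[a] and left[b] == left[a]]

idempotents = [e for e in S if bmul(e, e) == e]
# The maximal subgroup at the idempotent I is its H-class: exactly the
# group of units, i.e. the two permutation matrices.
print(H_class(I))
```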
The study of Green's relations for the full tropical matrix semigroups was initiated (in the case of $M_2(\trop)$) by the second and third authors \cite{K_tropicalgreen}. In \cite{K_tropd}, Hollings and the third author gave a complete description of the $\GreenD$-relation for $M_n(\ft)$, using the \textit{duality} between the row and column space of a tropical matrix. In \cite{K_tropj} the second and third authors described the equivalence relation $\GreenJ$ and pre-order $\leq_\GreenJ$ in $M_n(\ft)$ and $M_n(\trop)$. The main results of these papers, for the case $M_n(\ft)$, are summarised in the following theorem (see \cite[Proposition 3.1 and Theorem 5.1]{K_tropd} and \cite[Theorem 5.3 and Theorem 6.1]{K_tropj} for full details and proofs).
\begin{theorem}
\label{thm_greenchar}
Let $A, B \in M_n(\ft)$.
\begin{itemize}
\item[(i)] $A \leq_{\GreenL} B$ if and only if $R(A) \subseteq R(B)$;
\item[(ii)] $A \GreenL B$ if and only if $R(A) = R(B)$;
\item[(iii)] $A \leq_{\GreenR} B$ if and only if $C(A) \subseteq C(B)$;
\item[(iv)] $A \GreenR B$ if and only if $C(A) = C(B)$;
\item[(v)] $A \GreenH B$ if and only if $R(A) = R(B)$ and $C(A) = C(B)$;
\item[(vi)] $A \GreenD B$ if and only if $C(A)$ and $C(B)$ are isomorphic as $\ft$-modules;
\item[(vii)] $A \GreenD B$ if and only if $R(A)$ and $R(B)$ are isomorphic as $\ft$-modules;
\item[(viii)] $A \leq_{\GreenJ} B$ if and only if there exists a convex set $X \subseteq \ft^n$ such that $R(A)$ embeds linearly into $X$ and $R(B)$ surjects linearly onto $X$;
\item[(ix)] $A \leq_{\GreenJ} B$ if and only if there exists a convex set $X \subseteq \ft^n$ such that $C(A)$ embeds linearly into $X$ and $C(B)$ surjects linearly onto $X$.
\item[(x)] $A \GreenJ B$ if and only if $A \GreenD B$.
\end{itemize}
\end{theorem}
Parts (i)-(v) of the above theorem are straightforward, whilst parts (vi)-(x) require considerably more work (the proofs make use of the row-column duality alluded to above as well as some elementary topological arguments).
The following result follows immediately from the definitions.
\begin{lemma}
\label{lem_relations}
Let $A,B \in M_n(\ft)$.
\begin{itemize}
\item[(i)] If $A \leq_{\GreenL} B$ then any linear relation between the \emph{columns} of $B$ induces the same relation between the \emph{columns} of $A$.
\item[(ii)] If $A \leq_{\GreenR} B$ then any linear relation between the \emph{rows} of $B$ induces the same relation between the \emph{rows} of $A$.
\end{itemize}
\end{lemma}
\begin{proof}
We prove (i), the proof of (ii) being dual. Let $A \leq_{\GreenL} B$. If $A = B$, the result holds trivially. Assume then that $A= XB$ for some $X \in M_n(\ft)$. Define $f:C(B)\rightarrow C(A)$ to be left multiplication by $X$. Thus $f$ is a linear map sending the $i$th column of $B$ to the $i$th column of $A$ and it follows that any relation between the columns of $B$ induces the corresponding relation between the columns of $A$.
\end{proof}
In fact, there is a correspondence between inclusions of row spaces and certain surjections of column spaces; see \cite{K_tropd} for full details.
\begin{theorem}
\label{thm_HK}\cite[Theorem 4.2, Corollary 4.3]{K_tropd}.
\begin{itemize}
\item[(i)] $R(A)\subseteq R(B)$ if and only if there is a surjective linear morphism from $C(B)$ to $C(A)$ taking the $i$th column of $B$ to the $i$th column of $A$ for all $i$.
\item[(ii)] $R(A) =R(B)$ if and only if there is a linear isomorphism from $C(A)$ to $C(B)$ taking the $i$th column of $B$ to the $i$th column of $A$ for all $i$.
\item[(iii)] $C(A)\subseteq C(B)$ if and only if there is a surjective linear morphism from $R(B)$ to $R(A)$ taking the $i$th row of $B$ to the $i$th row of $A$ for all $i$.
\item[(iv)] $C(A) =C(B)$ if and only if there is a linear isomorphism from $R(A)$ to $R(B)$ taking the $i$th row of $B$ to the $i$th row of $A$ for all $i$.
\end{itemize}
\end{theorem}
We shall require a few more semigroup theoretic definitions. Let $S$ be any semigroup. We recall that $e \in S$ is an \emph{idempotent} if $e^2 =e$, whilst $a \in S$ is \emph{(von Neumann) regular} if there exists $x \in S$ with $axa=a$. It is easy to see that $a$ is regular if and only if $a$ is $\GreenR$-related to an idempotent (dually, if and only if $a$ is $\GreenL$-related to an idempotent). Moreover, it is well known that every subgroup of a semigroup $S$ lies in a unique maximal subgroup, and that the maximal subgroups are precisely the $\GreenH$-classes of $S$ containing
idempotents.
A complete description of the idempotent elements of $M_2(\ft)$ was given in \cite{K_tropicalgreen} and it follows from the results given there that the semigroup $M_2(\ft)$ is regular (that is, every $\GreenR$-class and every $\GreenL$-class contains an idempotent) and each maximal subgroup is isomorphic to either $\mathbb{R}$ or $\mathbb{R} \times S_2$. For $n \geq 3$ it is known that the semigroups $M_n(\ft)$ are no longer regular. In \cite{K_puredim} the present authors gave a geometric characterisation of the regular elements of $M_n(\ft)$. In the present paper we turn our attention to the maximal subgroups of $M_n(\ft)$, in other words, the $\GreenH$-classes containing an idempotent tropical matrix. Given an idempotent tropical matrix $E$, we first note that the $\GreenH$-class containing $E$ is isomorphic to the group of $\ft$-module automorphisms of $C(E)$.
\begin{theorem}
\label{thm_aut}
Let $E$ be an idempotent in $M_n(\ft)$, with corresponding $\GreenH$-class
denoted by $H_E$, and let ${\rm Aut}(C(E))$ denote the group of $\ft$-module automorphisms of $C(E)$. Define $\psi: {\rm Aut}(C(E)) \rightarrow H_E$ by
$$\psi: f \mapsto (f(E_1) \cdots f(E_n)),$$
where $E_i$ denotes the $i$th column of $E$. Then $\psi$ is an isomorphism of groups. [Dually, the $\GreenH$-class of $E$ is isomorphic to the group of $\ft$-module automorphisms of $R(E)$.]
\end{theorem}
\begin{proof}
Let $f: C(E) \rightarrow C(E)$ be an $\ft$-module automorphism and let $A = \psi(f)$. Since $f$ is surjective, it is clear that $C(A)=C(E)$ and hence $A \GreenR E$, by Theorem~\ref{thm_greenchar}~(iii). Now, since $f$ is a linear isomorphism of column spaces, taking the $i$th column of $E$ to the $i$th column of $A$, it follows from Theorem~\ref{thm_HK}~(ii) that $R(A)=R(E)$, giving $A \GreenL E$ and hence $A \GreenH E$. Thus $\psi$ is well-defined.
We claim that $\psi$ is a homomorphism of groups. Indeed, let $f, g$ be $\ft$-module automorphisms of $C(E)$ and let $\psi(f)=A$ and $\psi(g)=B$. Then it is straightforward to check that $f(x) = A \otimes x$ and $g(x) = B \otimes x$, for all $x \in C(E)$. Moreover, since $B \GreenH E$, we have $B \otimes E = B$ giving
\begin{eqnarray*}
\psi(f \circ g) &=& (f\circ g(E_1) \cdots f\circ g(E_n))\\
&=&(A \otimes B\otimes E_1 \cdots A \otimes B\otimes E_n)\\
&=&A \otimes B\otimes E = A \otimes B = \psi(f )\otimes \psi(g).
\end{eqnarray*}
In fact, we shall show that $\psi$ is an isomorphism.
We show first that $\psi$ is injective. Indeed, suppose $\psi(f) = \psi(g)$.
Then $f(E_i) = g(E_i)$ for all $i$. But the columns $E_i$ generate $C(E)$,
so by linearity it must be that $f = g$.
It remains to show that this homomorphism is surjective.
Let $A \in H_E$ and define $f: C(E) \rightarrow C(E)$ by $f(x) = A \otimes x$.
Since $H_E$ is a group, there exists $A' \in H_E$ such that
$A\otimes A' = E = A' \otimes A$. Define $f': C(E) \rightarrow C(E)$ by $f'(x) = A' \otimes x$. Since $E$ acts as the identity on $C(E)$, for every $x \in C(E)$ we have
$$f'(f(x)) = A' \otimes A \otimes x = E \otimes x = x,$$
and symmetrically $f(f'(x)) = x$. Thus $f$ is a bijection with inverse $f'$, so $f \in {\rm Aut}(C(E))$, and it is clear that $\psi(f) = A$, giving that $\psi$ is surjective.
\end{proof}
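For a concrete instance of Theorem~\ref{thm_aut}, consider the following Python sketch (the $2 \times 2$ matrices are illustrative hand-picked examples). It exhibits an idempotent $E$ acting as the identity of its $\GreenH$-class, together with a column-swapping involution, consistent with the maximal subgroups $\mathbb{R} \times S_2$ of $M_2(\ft)$ described above:

```python
def trop_mul(A, B):
    # max-plus matrix product: (A ⊗ B)_{ij} = max_k (A_{ik} + B_{kj})
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E = [[0, -1], [-1, 0]]
assert trop_mul(E, E) == E    # E is idempotent: the identity of H_E

# Swapping the two columns of E gives another element of H_E
# (same column space and row space):
A = [[-1, 0], [0, -1]]
assert trop_mul(E, A) == A    # E acts as the identity
assert trop_mul(A, E) == A
assert trop_mul(A, A) == E    # A is an involution, so H_E contains a copy of S_2
```

The map $x \mapsto A \otimes x$ is the corresponding non-trivial automorphism of $C(E)$.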
\section{Dimension, projectivity, idempotents and regularity}
There are several important notions of dimension for a tropical convex set $X \subseteq \ft^n$. The \emph{tropical dimension} is the topological dimension of $X$, viewed as a subset of $\mathbb{R}^n$ with the usual topology. Note that, in contrast to the classical (Euclidean) case, tropical convex sets may have regions of different
topological dimension. We say that $X$ has \emph{pure} dimension $k$ if every open (within X with the induced topology) subset of X has topological dimension $k$. The \emph{generator dimension} of $X$ is the minimal cardinality of a generating subset, under the linear operations of scaling and addition. If $X$ is a polytope, this is equal to the number of extremal points of $X$ considered up to scaling \cite{Butkovic07,Wagneur91}. The \emph{dual dimension}~\cite{K_puredim} is the minimal cardinality of a generating set under scaling and the induced operation of greatest lower bound within the convex set. (Notice that, in general, the greatest lower bound of two elements within a convex set $X$ need not be the same as their component-wise minimum, which may not be contained in $X$.)
In \cite{K_puredim}, the present authors gave a characterisation of \emph{projectivity} for tropical polytopes in terms of the geometric and order-theoretic structure on these sets. We briefly recall that a module $P$ is called \emph{projective} if every morphism from $P$ to another module $M$ factors through every surjective module morphism onto $M$. One of the main results of \cite{K_puredim} can be summarised as follows.
\begin{theorem}\cite[Theorems 1.1 and 4.5]{K_puredim}.
\label{thm_IJKmain}
Let $X\subseteq \ft^n$ be a tropical polytope. Then the following are equivalent:
\begin{itemize}
\item[(i)] $X$ is projective as an $\ft$-module;
\item[(ii)] $X$ is the column space of an idempotent matrix in $M_n(\ft)$;
\item[(iii)] $X$ has pure dimension equal to its generator dimension and dual dimension.
\end{itemize}
\end{theorem}
Since all three notions of dimension coincide for projective polytopes, we define the \emph{dimension} of a projective tropical polytope to be this common value. We shall refer to projective polytopes of dimension $k$ as \emph{projective $k$-polytopes}. The following result is a consequence of \cite[Theorem 4.2]{K_puredim}.
\begin{proposition}\cite[Theorem 4.2]{K_puredim}.
\label{prop_IJK}
Let $X\subseteq \ft^n$ be a projective $k$-polytope. Then $X$ is isomorphic to the column space of a $k \times k$ idempotent matrix over $\ft$.
\end{proposition}
We note that projective $n$-polytopes in $\ft^n$ turn out to have a particularly nice structure:
\begin{theorem}\cite[Proposition 5.5]{K_puredim}.
\label{thm_minplusproj}
Let $X\subseteq \ft^n$ be a projective $n$-polytope. Then $X$ is min-plus (as well as max-plus) convex.
\label{npolytrope}
\end{theorem}
It is easily verified that any tropical polytope that is min-plus (as well as max-plus) convex must be convex in the usual (Euclidean) sense.
Numerous definitions of rank have been introduced and studied for tropical
matrices (see for example \cite{Akian06,Develin05} for more details),
mostly corresponding to different notions of ``dimension'' of the row or
column space. In light of Theorem~\ref{thm_IJKmain}, we shall focus on the
following three definitions of rank. The \emph{tropical rank} of a matrix
is the tropical dimension of its row space (or equivalently, by
\cite[Theorem 23]{Develin04} for example, its column space). It also has
a characterisation in terms of the computation of the matrix permanent
\cite{Develin05}. The \emph{row rank} is
the generator dimension of the row space, which by \cite[Proposition
3.1]{K_puredim} is also the dual dimension of the column space. Dually,
the \emph{column rank} is the generator dimension of the column space and
also the dual dimension of the row space. Whilst these three notions of
rank can in general differ, it follows from
Theorem~\ref{thm_IJKmain} that they must coincide for any
\emph{idempotent} matrix. Thus we shall refer without ambiguity to the
\emph{rank} of an idempotent matrix.
We define a scalar product operation $\ft^n \times \ft^n \rightarrow \ft$ on affine tropical $n$-space by setting
$$\langle x | y \rangle = {\rm max}\{\lambda \in \ft : \lambda \otimes x \leq y\}.$$
This is a \textit{residual} operation in the sense of residuation theory \cite{Blyth72}, and has been frequently employed in max-plus algebra. We recall the following result from \cite{K_finitemetric}.
\begin{lemma}\cite[Lemma 5.3]{K_finitemetric}
\label{lem_bracket}
Let $E$ be an idempotent element of $M_n(\ft)$. Let $E_1, \ldots, E_n$ denote the columns of $E$. If $E_{i,i} = 0$ then $E_{j,i} = \langle E_j | E_i \rangle$ for all $j$.
\end{lemma}
\section{Eigenvalues, eigenvectors and idempotents}
Let $A \in M_n(\ft)$ and let $\Gamma_A$ denote the corresponding weighted directed graph. We define the \emph{maximum cycle mean of $A$} to be the maximum average\footnote{By \textit{average}, we mean the
classical arithmetic mean, which is the tropical geometric mean.} weight of a path from a node to itself in $\Gamma_A$. Let $\lambda$ be the maximum cycle mean of $A$. Then the \emph{critical graph of $A$} consists of all nodes and edges involved in any path from a node to itself with average weight $\lambda$. The nodes occurring in the critical graph are called \emph{critical nodes}. It can be shown (see \cite{Butkovic10}, for example) that if $\lambda \leq 0$ then the following series converges to a finite limit, denoted $A^+$, in $M_n(\ft)$:
$$A \oplus A^2 \oplus \cdots \oplus A^n \oplus \cdots.$$
The following result is well known to experts in tropical mathematics (see \cite{Butkovic10} for details).
\begin{theorem}
\label{eigenval}
Let $A \in M_n(\ft)$ with corresponding weighted directed graph $\Gamma_A$. Let $\lambda$ denote the maximum cycle mean of $A$ and set $A_{\lambda} = -\lambda \otimes A$. Then
\begin{itemize}
\item[(i)] The maximum cycle mean $\lambda$ is the unique eigenvalue of $A$.
\item[(ii)] The columns of $(A_{\lambda})^+$ labelled by the critical nodes form a generating set for the eigenspace of $A$.
\item[(iii)] Let $i$ and $j$ be critical nodes. The columns of $(A_{\lambda})^+$ labelled by $i$ and $j$ are tropical multiples of each other if and only if they occur in the same strongly connected component of the critical graph.
\item[(iv)] The eigenspace has generator dimension equal to the number of strongly connected components in the critical graph.
\end{itemize}
\end{theorem}
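Theorem~\ref{eigenval} can be checked by brute force on small matrices. A Python sketch (the matrix is an illustrative choice, not from the text), using the standard formula that the maximum cycle mean equals $\max_{1 \leq l \leq n} \max_i \, (A^{\otimes l})_{i,i}/l$:

```python
def trop_mul(A, B):
    # max-plus matrix product
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def max_cycle_mean(A):
    # lambda(A) = max over cycle lengths l <= n of max_i (A^l)_{ii} / l
    n = len(A)
    P = [row[:] for row in A]
    best = max(A[i][i] for i in range(n))
    for l in range(2, n + 1):
        P = trop_mul(P, A)
        best = max(best, max(P[i][i] for i in range(n)) / l)
    return best

def kleene_plus(A):
    # A ⊕ A^2 ⊕ ... ⊕ A^n (the series stabilises when lambda(A) <= 0)
    n = len(A)
    P = [row[:] for row in A]
    S = [row[:] for row in A]
    for _ in range(n - 1):
        P = trop_mul(P, A)
        S = [[max(S[i][j], P[i][j]) for j in range(n)] for i in range(n)]
    return S

A = [[-1, 0], [0, -2]]   # cycle means: -1, -2 and (0 + 0)/2 = 0
lam = max_cycle_mean(A)
assert lam == 0          # so A_lambda = A and A^+ is defined

Aplus = kleene_plus(A)
# Both nodes are critical (they lie on the zero-mean 2-cycle), and the
# corresponding columns of A^+ are eigenvectors: A ⊗ c = c.
for j in range(2):
    c = [Aplus[i][j] for i in range(2)]
    assert [max(A[i][k] + c[k] for k in range(2)) for i in range(2)] == c
```

Consistent with part (iii), the two critical columns of $A^+$ here are tropical multiples of one another, since both critical nodes lie in a single strongly connected component.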
From now on let $E$ be an idempotent in $M_n(\ft)$. Theorem~\ref{eigenval} applied to $E$ has a number of interesting consequences.
\begin{corollary}
\label{cor_eigen}
Let $E \in M_n(\ft)$ be an idempotent. Then
\begin{itemize}
\item[(i)] $E$ has unique eigenvalue $0$ (which is the maximum cycle mean of $E$) and corresponding eigenspace $C(E)$.
\item[(ii)] $E_{i,i}=0$ if and only if $i$ is a node in the critical graph of $E$.
\item[(iii)] The columns of $E$ with diagonal entry $0$ form a generating set for the column space of $E$.
\item[(iv)] Every extremal point of $C(E)$ occurs up to scaling as a column of $E$ with $0$ in the diagonal position.
\item[(v)] The rows of $E$ with diagonal entry $0$ form a generating set for the row space of $E$.
\item[(vi)] Every extremal point of $R(E)$ occurs up to scaling as a row of $E$ with $0$ in the diagonal position.
\item[(vii)] The rank of $E$ is equal to the number of strongly connected components of the critical graph of $E$.
\item[(viii)] Each strongly connected component of the critical graph of $E$ is a complete subgraph of $\Gamma_E$.
\item[(ix)] If $i$ and $j$ are in the same strongly connected component of the critical graph then the $i$th column of $E$ is a multiple of the $j$th column of $E$.
\item[(x)] If $i$ and $j$ are in the same strongly connected component of the critical graph then the $i$th row of $E$ is a multiple of the $j$th row of $E$.
\end{itemize}
\end{corollary}
\begin{proof}
(i) Since $E$ is idempotent it is immediate that $E \otimes c = c$ for all columns $c$ of $E$, giving that $0$ is an eigenvalue of $E$, with corresponding eigenspace $C(E)$. By Theorem~\ref{eigenval}(i), $E$ has unique eigenvalue equal to the maximum cycle mean of $E$.
(ii) We note that the critical graph of $E$ consists of all nodes and edges involved in any zero-weighted path from a node to itself. If $E_{i,i}=0$ then, by definition, $i$ is a critical node of $E$. On the other hand, if $i$ is a critical node of $E$ then there is a path from $i$ to $i$ with weight zero. If this path has length $k$, then $E^{\otimes k}_{i,i} = 0$. Since $E$ is an idempotent, this gives $E_{i,i} =0$.
(iii) By Theorem~\ref{eigenval}(ii), the columns of $E$ labelled by the critical nodes form a generating set for the eigenspace of $E$. Since the eigenspace of $E$ is equal to the column space of $E$, the result now follows from part (ii).
(iv) By the remarks in Section 2, some scaling of every extremal point must
lie in every generating set for $C(E)$.
(vii) Recall that the rank of an idempotent matrix may be defined to be the generator dimension of its column space. The result then follows from Theorem~\ref{eigenval}(iv).
(viii) Let $i$ and $j$ be in the same strongly connected component of the critical graph of $E$ and let $E_i$ and $E_j$ denote the $i$th and $j$th columns of $E$. By Theorem~\ref{eigenval}(iii) we see that $E_i = \alpha \otimes E_j$ (and hence $E_j = -\alpha \otimes E_i$). Moreover, since $i$ and $j$ are critical nodes we have $E_{i,i} = E_{j,j}=0$ by part (ii), giving
$$E_{i,j} + E_{j,i} = (\alpha \otimes E_{j,j}) + (-\alpha \otimes E_{i,i}) = \alpha + (-\alpha) = 0.$$
Thus we have a zero-weighted path from $i$ to itself via $j$. By definition, this cycle is in the critical graph of $E$. It then follows that each strongly connected component of the critical graph will be a complete graph.
(ix) This follows immediately from Theorem~\ref{eigenval}(iii).
Similar arguments hold for parts (v), (vi) and (x) by considering the row space of $E$ to be the column space of the idempotent $E^T$.
\end{proof}
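Several parts of Corollary~\ref{cor_eigen} are easy to observe on a small example. A Python sketch (the rank-one idempotent below is an illustrative choice):

```python
def trop_mul(A, B):
    # max-plus matrix product
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E = [[0, 2], [-2, 0]]
assert trop_mul(E, E) == E   # E is idempotent with zero diagonal

# The 2-cycle 1 -> 2 -> 1 has weight 2 + (-2) = 0, so both nodes lie in one
# strongly connected critical component; by parts (vii) and (ix), E has
# rank 1 and its columns are tropical multiples of one another:
col0 = [E[i][0] for i in range(2)]
col1 = [E[i][1] for i in range(2)]
assert col1 == [2 + x for x in col0]   # col1 = 2 ⊗ col0
```

Either column on its own is thus a minimal generating set for $C(E)$, matching the description of critical classes below.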
Consider the set $C$ of critical nodes of $E$. There is an obvious equivalence relation on $C$, given by $i \sim j$ if and only if $i$ and $j$ are in the same strongly connected component of the critical graph. We call the equivalence classes of this relation the \emph{critical classes}. Notice that Corollary~\ref{cor_eigen} tells us that any set of representatives of the critical classes of $E$ yields a minimal generating set for the row [respectively, column] space of $E$. In fact, by Lemma~\ref{lem_relations}, any such set of representatives will give a generating set (not necessarily minimal) for the row [respectively, column] space of any matrix $\GreenR$-below [respectively, $\GreenL$-below] $E$.
\begin{corollary}
\label{cor_basis}
Let $A, E \in M_n(\ft)$, with $E$ idempotent and let $\{c_1, \ldots, c_k\}$ be a set of representatives of the critical classes of $E$.
\begin{itemize}
\item[(i)] If $A \leq_\GreenR E$ then the rows labelled by $c_1, \ldots, c_k$ form a generating set for the row space of $A$. If $A \GreenR E$ then this generating set is minimal.
\item[(ii)] If $A \leq_\GreenL E$ then the columns labelled by $c_1, \ldots, c_k$ form a generating set for the column space of $A$. If $A \GreenL E$ then this generating set is minimal.
\end{itemize}
\end{corollary}
\begin{proof}
We prove part (i), the proof of part (ii) being dual. It follows immediately from Lemma~\ref{lem_relations} and Corollary~\ref{cor_eigen} that the rows labelled by $c_1, \ldots, c_k$ form a generating set for the row space of $A$. Now suppose that $A \GreenR E$. Then $C(A) = C(E)$ and Theorem~\ref{thm_IJKmain} gives that $C(A)$ has generator dimension equal to its dual dimension. By \cite[Proposition 3.1]{K_puredim}, the dual dimension of $C(A)$ is equal to the generator dimension of $R(A)$. Thus we see that the minimum cardinality of a generating set for $R(A)$ is equal to the minimal cardinality of a generating set for $C(A)=C(E)$, which by Corollary~\ref{cor_eigen} is equal to $k$.
\end{proof}
\section{A reduction to idempotents of full rank}
Let $E \in M_n(\ft)$ and suppose that $E$ has rank $k \leq n$. In this section we shall prove that the $\GreenH$-class of $E$, denoted by $H_E$, is
isomorphic, as a group, to the $\GreenH$-class of a $k \times k$ idempotent $F$ of full rank $k$. We begin by showing that each $\GreenD$-class contains an idempotent whose diagonal entries are all equal to $0$. Since the maximal subgroups in each $\GreenD$-class are all isomorphic, it follows that we may restrict attention to those idempotents with all diagonal entries equal to $0$. Given such an idempotent $E$, the main result of this section (Theorem~\ref{thm_fullrank}) constructs a $k \times k$ idempotent $F$ of full rank $k$ and a group isomorphism between the corresponding $\GreenH$-classes, $H_E$ and $H_F$. Throughout this section we shall make use of several results and proofs from \cite{K_puredim}.
\begin{theorem}
\label{thm_minplus}
Let $E\in M_n(\ft)$ be an idempotent. Then the column space of $E$ is min-plus convex if and only if there is an idempotent $F\in M_n(\ft)$ such that $F_{i,i}=0$ for all $i$ and $C(F)=C(E)$.
\end{theorem}
\begin{proof}
The statement is trivial for $n=1$. Thus we assume that $n\geq 2$.
Suppose first that $F$ is an idempotent with all diagonal entries equal
to zero. We show that $C(F)$ is min-plus convex, using the proof strategy
of \cite[Proposition 5.5]{K_puredim}.
Let $x, y \in C(F)$ and let $z$ be the component-wise minimum of $x$ and $y$. It suffices to show that $z \in C(F)$. Since $x, y \in C(F)$ and $F$ is idempotent it is immediate that $x=F \otimes x$ and $y = F \otimes y$. Moreover, since $z \leq x$ and $z \leq y$ we see that $F \otimes z \leq F \otimes x = x$ and $F \otimes z \leq F \otimes y = y$, giving $F \otimes z \leq z$. Using the fact that all the diagonal entries of $F$ are zero yields
$$(F \otimes z)_i = \bigoplus_{j=1}^n F_{i,j} \otimes z_j \geq F_{i,i} \otimes z_i = z_i,$$
so that $F \otimes z \geq z$. Thus we have shown that $F \otimes z = z$, giving $z \in C(F)$ as required.
Now suppose that $C(E)$ is min-plus (as well as max-plus) convex. To
construct an idempotent $F$ with the desired properties, we use a
strategy based on the proof of \cite[Theorem 1.4]{K_puredim}, although
some extra complications result from the fact that $E$ need not have full
rank. The key
idea is to construct a matrix whose $i$th column is the infimum
(in $\ft^n$) of all elements $u \in C(E)$ such that $u_i \geq 0$ and show
that it has the desired properties. Of course, we must first check that
such infima exist.
Let $i \in \{1, \ldots, n\}$ and for each coordinate $j \neq i$, consider the set
$$\{u_j: u \in C(E), u_i \geq 0\}.$$
It is easy to see that this set is non-empty and, since $C(E)$ is finitely generated, it has a lower bound and hence an infimum. It follows from the fact that $C(E)$ is closed that this infimum is attained. Choose an element $w_j \in C(E)$ such that $w_{j,j}$ attains this infimum and $w_{j,i}\geq 0$. By the minimality of $w_{j,j}$ and the fact that $C(E)$ is closed under scaling, we must have $w_{j,i} = 0$. Now let $F_i$ be the minimum of all the $w_j$'s. Then $(F_i)_i = 0$ and $F_i$ is clearly less than or equal to all vectors $u \in C(E)$ with $u_i \geq 0$. Thus we have shown that $F_i$ is a lower bound for all elements $u \in C(E)$ such that $u_i \geq 0$. Suppose that $z$ is another lower bound. By the min-plus convexity of $C(E)$, we see that each $F_i$ is itself an element of $C(E)$ with $(F_i)_i \geq 0$. Hence $z \leq F_i$. Thus $F_i$ is the infimum of all elements $u \in C(E)$ such that $u_i \geq 0$. Let $F$ be the matrix whose $i$th column is $F_i$. We shall prove that $F$ is an idempotent in $M_n(\ft)$ with $C(F)=C(E)$. Since the diagonal entries of $F$ are $F_{i,i} = (F_i)_i=0$, this will complete the proof.
We first show that $F$ must be idempotent. It follows from the definition of matrix multiplication that for all $i$ and $j$,
$$(F^2)_{i,j} \geq F_{i,j} \otimes F_{j,j} = F_{i,j} + 0 = F_{i,j}.$$
Now let $i, j, k \in \{1, \ldots, n\}$. It will suffice to show that
$F_{i,j} \geq F_{i,k} \otimes F_{k,j}$, since then
$F_{i,j} \geq \bigoplus_{k=1}^n F_{i,k} \otimes F_{k,j} = (F^2)_{i,j}$. Consider $w = (-F_{k,j})\otimes F_j = -(F_j)_k\otimes F_j$. Then $w \in C(E)$ and $w_k \geq 0$. Since $F_k$ is the infimum of all such points we have $F_k \leq w$. In particular, comparing the $i$th entries of these elements, we have $(-F_{k,j})\otimes F_{i,j} =(-F_{k,j})\otimes (F_j)_i= w_i \geq (F_k)_i = F_{i,k}$ and so $F_{i,j} \geq F_{i,k} \otimes F_{k,j}$, as required.
It remains to show that $C(F)=C(E)$. Since each column $F_i$ of $F$ is contained in $C(E)$ we have $C(F) \subseteq C(E)$. We shall prove that every extremal point of $C(E)$ occurs, up to scaling, as a column of $F$, so that $C(E)\subseteq C(F)$.
We first show that the columns of $F$ are extremal points of $C(E)$. Suppose
for a contradiction that $F_i$ is not an extremal point of $C(E)$. Then by definition we may write $F_i$ as a finite sum of elements in $C(E)$ which are not multiples of $F_i$, say $F_i = z_1 \oplus \cdots \oplus z_m$. Since $F_{i,i}=0$ is the maximum of the entries $z_{1,i}, \ldots, z_{m,i}$, there exists $j$ with $0=F_{i,i}=z_{j,i}$. Since $z_j \in C(E)$, $z_{j,i} \geq 0$ and $F_i$ is the infimum of all such points, we must have $F_i \leq z_j$. On the other hand, $z_j$ forms part of a linear combination for $F_i$, giving $z_j \leq F_i$. So $F_i=z_j$, contradicting that $z_j$ is not a multiple of $F_i$.
Now let $x$ be an extremal point of $C(E)$ and let $E_1, \ldots, E_n$ denote the columns of $E$. By Corollary~\ref{cor_eigen} we know that $x$ is a multiple of some column $E_k$ with $E_{k,k}=0$. We shall show that $F_k = E_k$ and hence $x$ occurs up to scaling as a column of $F$, as required. Since $F_k$ is an extremal point of $C(E)$, there exists $j$ and $\lambda \in \ft$ such that $F_k = \lambda \otimes E_j$ and $E_{j,j}=0$. Moreover, since $F_{k,k}=0$ we find that
$$0=F_{k,k} = \lambda \otimes (E_j)_k = \lambda + E_{k,j},$$
and hence $\lambda = -E_{k,j}$. On the other hand, $E_{k,k} = 0$ and $E_k \in C(E)$, so by the definition of $F_k$ we have $F_k \leq E_k$. In other words, $ -E_{k,j} \otimes E_j \leq E_k$. Thus $-E_{k,j} \leq \langle E_j| E_k\rangle$. Now, since $E_{k,k}=E_{j,j}=0$ we may apply Lemma~\ref{lem_bracket} to find $\langle E_j| E_k\rangle = E_{j,k}$ and $\langle E_k| E_j\rangle = E_{k,j}$. Thus $E_{k,j} + E_{j,k} \geq 0$. Since $E$ is idempotent, it follows that $-E_{k,j} = E_{j,k}$. Thus $\lambda = -\langle E_k| E_j\rangle =\langle E_j| E_k\rangle$ and hence $E_k = \lambda \otimes E_j =F_k$, as required.
\end{proof}
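The first half of this proof can be tested numerically: for an idempotent with zero diagonal, component-wise minima of column-space points stay in the column space. A Python sketch (the matrix and points are illustrative):

```python
def trop_mul_vec(F, v):
    # max-plus matrix-vector product
    return [max(F[i][k] + v[k] for k in range(len(v))) for i in range(len(F))]

F = [[0, -1], [-1, 0]]   # idempotent with all diagonal entries zero

# Project two arbitrary points of ft^2 into C(F):
x = trop_mul_vec(F, [3, 0])
y = trop_mul_vec(F, [0, 3])
assert trop_mul_vec(F, x) == x and trop_mul_vec(F, y) == y

# Their component-wise minimum also lies in C(F):
z = [min(a, b) for a, b in zip(x, y)]
assert trop_mul_vec(F, z) == z
```

This mirrors the argument above: $F \otimes z \leq z$ because $z \leq x$ and $z \leq y$, while the zero diagonal forces $F \otimes z \geq z$.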
\begin{theorem}
Let $E \in M_n(\ft)$ be an idempotent. Then there is an idempotent
$E' \in M_n(\ft)$ such that $E \GreenD E'$ and $E'$ has all diagonal entries equal to $0$.
\end{theorem}
\begin{proof}
Let $E$ be an idempotent of rank $k$. By Proposition~\ref{prop_IJK},
$C(E) \cong C(F)$ for some $k \times k$ idempotent $F$. Since the various
notions of dimension described in Section 4 are isomorphism invariant, it is clear
that $F$ must have rank $k$ and hence, by Corollary~\ref{cor_eigen} for
example, $F$ has all diagonal entries equal to $0$. Moreover,
Theorem~\ref{thm_minplusproj} yields that $C(F)$ is min-plus convex.
Now let $X$ be the subset of $\ft^n$ consisting of all elements $v$ such
that: (i) the restriction of $v$ to the first $k$ entries yields an element
of $C(F)$ and (ii) all other entries of $v$ are equal to $v_k$. It is clear
from the definition that $X$ is a tropical polytope which is both max-plus
and min-plus convex. Moreover, it is easy to see that $X \cong C(F)$ and
hence $X$ is projective. By Theorem~\ref{thm_IJKmain}, $X$ is the column
space of some
$n \times n$ idempotent, $X = C(E')$, say. Theorem~\ref{thm_minplus} and
the min-plus convexity of $X$ guarantee that $E'$ can be chosen with all
diagonal entries equal to zero. Hence we have shown that
$$C(E) \cong C(F) \cong X = C(E'),$$
where $E, E' \in M_n(\ft)$. By Theorem~\ref{thm_greenchar}(vi), this gives $E \GreenD E'$ as required.
\end{proof}
For the rest of this section we shall assume that $E$ is an idempotent matrix in $M_n(\ft)$ of rank $k$ whose diagonal entries are all equal to $0$. In the following proof we shall make use of the extended tropical semiring $\trop$, defined in Section 2.
\begin{theorem}
\label{thm_fullrank}
Let $E$ be an idempotent matrix in $M_n(\ft)$ whose diagonal entries are
all equal to $0$. Choose a fixed set of representatives of the critical
classes of $E$, $\{c_1, \ldots, c_k\}$ say, and let $M$ be the $k \times n$
matrix, with entries in $\trop$ and rows indexed by $c_1, \ldots, c_k$,
defined by $M_{c_i, j} = 0 $ if $j = c_i$ and $M_{c_i, j} = -\infty$ otherwise. Then
\begin{itemize}
\item[(i)] $F=M\otimes E \otimes M^T$ is an idempotent of rank $k$ in $M_k(\ft)$;
\item[(ii)] the map $\phi: A \mapsto M\otimes A \otimes M^T$ induces an isomorphism of groups between the $\GreenH$-class of $E$ and the $\GreenH$-class of $F$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) We first note that, by definition, $F$ is the $k \times k$ submatrix of $E$, whose entries are labelled by $c_1, \ldots, c_k$. For simplicity, we index the entries of $F$ by $c_1, \ldots, c_k$ so that $F_{c_i,c_j} = E_{c_i,c_j}$. Since each $c_i$ is a critical node, it is immediate that all diagonal entries of $F$ are equal to $0$, giving
$$(F \otimes F)_{c_i, c_j} \geq F_{c_i,c_i} \otimes F_{c_i, c_j} = F_{c_i,c_j}.$$
On the other hand, $(F \otimes F)_{c_i, c_j}$ is equal to the maximum weight of a path of length 2 from node $c_j$ to node $c_i$ via one of $c_1, \ldots, c_k$. This is clearly bounded above by the maximum weight of a path of length 2 from node $c_j$ to node $c_i$ via \emph{any} node $1, \ldots, n$. Thus $(F \otimes F)_{c_i, c_j} \leq (E \otimes E)_{c_i,c_j} = E_{c_i,c_j} = F_{c_i,c_j}$, as required.
(ii) We note that $\phi$ maps each $n \times n$ matrix $A$ to the $k \times k$ submatrix of $A$, with entries labelled by $c_1, \ldots, c_k$ and, in particular, $F=\phi(E)$. We write $H_E$ to denote the $\GreenH$-class of $E$ in $M_n(\ft)$ and $H_F$ to denote the $\GreenH$-class of $F$ in $M_k(\ft)$. Let $A \in H_E$. By Corollary \ref{cor_basis}, the columns of $A$ labelled by $c_1, \ldots, c_k$ form a minimal generating set for the column space of $A$ and the rows of $A$ labelled by $c_1, \ldots, c_k$ form a minimal generating set for the row space of $A$. This gives $C(AM^T) = C(A) = C(E)$ and $R(MA) = R(A) = R(E)$. Let $\nu: C(E) \rightarrow C(F)$ and $\rho: R(E) \rightarrow R(F)$ be the linear maps given by $\nu: v \mapsto M \otimes v$ and $\rho: v \mapsto v\otimes M^T$. Since $\nu$ maps the columns of $E$ labelled by $c_1, \ldots, c_k$ to the columns of $F$ labelled by $c_1, \ldots, c_k$, we see that $\nu$ is onto. Similarly, $\rho$ maps the rows of $E$ labelled by $c_1, \ldots, c_k$ to the rows of $F$ labelled by $c_1, \ldots, c_k$, giving that $\rho$ is onto. (In fact, it is straightforward to check that these maps are isomorphisms of $\ft$-modules.) It follows that $C(F) = \nu(C(E)) = \nu(C(AM^T)) = C(MAM^T)= C(\phi(A))$ and $R(F) = \rho(R(E)) = \rho(R(MA)) = R(MAM^T)=R(\phi(A))$. Thus, for all $A \in H_E$, we have shown that $\phi(A) \GreenH F$. In other words, $\phi(A) \in H_F$ for all $A \in H_E$ so that $\phi$ restricts to a map from $H_E$ to $H_F$. Moreover, it follows from the fact that $\nu$ and $\rho$ are isomorphisms that $\phi$ is injective. We claim that $\phi: H_E \rightarrow H_F$ is an isomorphism of groups.
Let $A, B \in H_E$. We must show that $\phi(A \otimes B) = \phi(A) \otimes \phi(B)$. In other words, we want to show that $MABM^T = MAM^TMBM^T$ for all $A, B \in H_E$. It is straightforward to verify from the definition of matrix multiplication that this amounts to proving the following claim:
{\bf Claim: }For every pair $i, j \in \{1, \ldots, k\}$ there exists $t \in \{1, \ldots, k\}$ such that $(A\otimes B)_{c_i,c_j} = A_{c_i, c_t} \otimes B_{c_t,c_j}.$
Suppose for contradiction that this is not the case. Then there exists $s \notin \{c_1, \ldots, c_k\}$ such that
$$(A\otimes B)_{c_i,c_j} = A_{c_i, s} \otimes B_{s,c_j} > A_{c_i, c_l} \otimes B_{c_l,c_j},$$
for all $l \in \{1, \ldots, k\}$. Since $E$ has all diagonal entries equal to $0$, we know that $s$ occurs in the same strongly connected component as $c_t$ for some $t \in \{1, \ldots, k\}$. Corollary~\ref{cor_eigen} yields that column $s$ of $E$ must be $\alpha$ times column $c_t$ of $E$, whilst row $s$ of $E$ must be $-\alpha$ times row $c_t$ of $E$, for some $\alpha \in \ft$. Since $A$ and $B$ are $\GreenH$-related to $E$, it follows from Lemma~\ref{lem_relations} that we also have column $s$ of $A$ is equal to $\alpha$ times column $c_t$ of $A$ and row $s$ of $B$ is equal to $-\alpha$ times row $c_t$ of $B$. Thus
$$(A\otimes B)_{c_i,c_j} = A_{c_i, s} \otimes B_{s,c_j} = \alpha \otimes A_{c_i, c_t} \otimes -\alpha \otimes B_{c_t,c_j} = A_{c_i, c_t} \otimes B_{c_t,c_j},$$
giving a contradiction. So $\phi$ is a homomorphism of groups.
It remains to show that $\phi$ is surjective. Let $N$ denote the $n \times k$
matrix defined by $N_{s, c_t} = 0 $ if $s = c_t$, $N_{s, c_t} = E_{s,c_t} $ if $s \notin \{c_1, \ldots, c_k\}$ and $N_{s, c_t} = -\infty$ otherwise.
Let $P$ denote the $k \times n$ matrix defined by $P_{c_t, s} = 0 $ if $s = c_t$, $P_{c_t, s} = E_{c_t,s} $ if $s \notin \{c_1, \ldots, c_k\}$ and $P_{c_t, s} = -\infty$ otherwise. Notice that
$$\phi(NKP) = M(NKP)M^T =(MN)K(PM^T)= K,$$
for all $K \in M_k(\ft)$. Let $G \in H_F$. We claim that $NGP \in H_E$, hence giving that $\phi$ is a surjection.
By the definition of $P$, the columns of $NGP$ labelled by $c_1, \ldots, c_k$ are the columns of $NG$, and all other columns are linear combinations of these. Thus $C(NGP) = C(NG)$. Since $G \in H_F$ we have that $FG=G$, giving $C(NG)=C(NFG)$ and it is clear that $C(NFG) \subseteq C(NF)$. Finally, it is straightforward to check from the definitions of $N$ and $F$ that $NF$ is the $n \times k$ submatrix of $E$ whose columns are labelled by $c_1, \ldots, c_k$. This gives that $C(NF)=C(E)$ and hence we have shown that $C(NGP) \subseteq C(E)$.
Consider the linear map $\alpha: \ft^k \rightarrow \ft^n$ given by $v \mapsto N \otimes v$. It is clear from the definition of $N$ that $\alpha$ is injective. Moreover $\alpha$ maps the $c_i$th column of $G$ to the $c_i$th column of $NGP$, inducing a map $\alpha': C(G) \rightarrow C(NGP)$. As noted above, the columns of $NGP$ labelled by $c_1, \ldots, c_k$ form a generating set for $C(NGP)$ and hence $\alpha'$ is onto. This shows that $C(G) \cong C(NGP)$ via $\alpha'$. Hence we have $C(NGP) \cong C(G) =C(F) \cong C(E)$ and $C(NGP) \subseteq C(E)$. Since $C(NGP), C(E) \subseteq \ft^n$, the only way this can happen is if $C(NGP) = C(E)$. Thus $NGP \GreenR E$.
Dual arguments will show that $R(NGP) \subseteq R(E)$ and $R(E) \cong R(NGP)$, giving $NGP \GreenL E$ and hence $NGP \GreenH E$, as required.
\end{proof}
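The compression $\phi$ of Theorem~\ref{thm_fullrank} can be illustrated on a rank-one example. A Python sketch (matrices chosen by hand; `float('-inf')` models the extended semiring $\trop$):

```python
NEG = float('-inf')  # the zero of the extended tropical semiring

def trop_mul(A, B):
    # max-plus product of rectangular matrices
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# E is idempotent with zero diagonal; its critical graph is one strongly
# connected component (the 2-cycle has weight 2 + (-2) = 0), so rank(E) = 1
# and we may take the single critical representative to be index 0.
E = [[0, 2], [-2, 0]]
M = [[0, NEG]]           # row selector for the critical representative
MT = [[0], [NEG]]        # its transpose, a column selector

F = trop_mul(trop_mul(M, E), MT)
assert F == [[0]]               # the 1x1 submatrix of E at index 0
assert trop_mul(F, F) == F      # F is an idempotent of full rank 1
```

Here $\phi$ simply extracts the submatrix of $E$ indexed by the critical representatives, as noted at the start of the proof.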
\section{The $\GreenH$-class of an idempotent of full rank}
Let $E$ be an idempotent in $M_n(\ft)$ and let $H_E$ denote the
$\GreenH$-class of $E$. We can consider the matrices in $H_E$ as maps from
$\ft^n$ to $\ft^n$, acting by left multiplication, and it follows from
Theorem~\ref{thm_aut} that these maps restrict to $\ft$-module
automorphisms of $C(E)$. We shall show that when $E$ has rank $n$, these
automorphisms are affine linear maps, \emph{in the classical sense}. It
then follows that every automorphism of $C(E)$ extends to an automorphism
of $\ft^n$.
We note that the \emph{boundary} of $C(E)$ is the set of all points
$y \in C(E)$ for which the equation $E \otimes x = y$ has multiple solutions.
Since $E$ acts trivially on $C(E)$, it follows that for every
boundary point $y$ there exists some exterior point $z \notin C(E)$ such
that $E \otimes z = y$. In fact, it is easy to see that every exterior
point must be mapped to the boundary.
We shall need the following fact about the action of full rank idempotents on tropical $n$-space, which follows from results in
\cite{Butkovic10}.
\begin{lemma}
\label{lem_exterior}
Let $E$ be an idempotent of rank $n$ in $M_n(\ft)$, and consider
the column space $C(E)$ as a subset of $\mathbb{R}^n$ equipped with the usual
topology. Then left multiplication by $E$ maps all points exterior to $C(E)$
onto the boundary of $C(E)$.
\end{lemma}
\begin{proof}
Suppose false for a contradiction, and let $x \notin C(E)$ be such that
$Ex$ does not lie on the boundary of $C(E)$. Certainly $Ex \in C(E)$, so it
must be that $Ex$ lies in the interior of $C(E)$.
Also since $E$ is idempotent, it has eigenspace
C(E)$ with eigenvalue $0$, and $E = E^k$ for all $k \geq 1$. Thus, by
\cite[Theorem~6.2.14]{Butkovic10}, each point in the interior of $C(E)$ has
a unique preimage under the action of $E$. But since $E$ is idempotent
we have $Ex = E(Ex)$, so $x = Ex$, which contradicts the fact that
$x \notin C(E)$.
\end{proof}
Now let $A \in H_E$. Thus $C(A)=C(E)$. Since $E$ has rank $n$, it follows that the columns of $A$ give a minimal generating set for $C(E)$. Since
this minimal generating set is unique up to permutation and scaling, we can find a permutation $\sigma$ and scalars $\lambda_1, \ldots, \lambda_n$ such that $A_i = \lambda_i \otimes E_{\sigma(i)}$ for all $i$. It now follows easily from \cite[Corollary 3.1.3]{Butkovic10}, for example, that the equation $A \otimes x = y$ has a unique solution if and only if the equation $E \otimes x = y$ has a unique solution. Thus it is clear that every boundary point $y$ of $C(E)$ is such that the equation $A \otimes x = y$ does \emph{not} have a unique solution. Since $A$ acts on $C(E)$ by an automorphism, it follows that for every boundary point $y$ there exists some exterior point $z \notin C(E)$ such that $A \otimes z = y$.
\begin{lemma}
\label{lem_Amaps}
Let $E$ be an idempotent of rank $n$ in $M_n(\ft)$, and let $A$ be an element in the $\GreenH$-class of $E$. Define $\phi_A: \ft^n \rightarrow \ft^n$ to be the map given by left multiplication by $A$. Then
\begin{itemize}
\item[(i)] $\phi_A$ maps interior points of $C(E)$ to interior points;
\item[(ii)] $\phi_A$ maps boundary points of $C(E)$ to boundary points and
\item[(iii)] $\phi_A$ maps all points exterior to $C(E)$ onto the boundary of $C(E)$.
\end{itemize}
[Dually, right multiplication by $A$ induces an $\ft$-module automorphism of $R(E)$, mapping interior points to interior points and boundary points to boundary points.]
\end{lemma}
\begin{proof}
It is clear that the image of $\phi_A$ is $C(E)$ and it follows from Theorem~\ref{thm_aut} that $\phi_A$ restricts to an automorphism of $C(E)$. Since $H_E$ is a group, there exists $A' \in H_E$ such that $AA'=A'A=E$ and hence $\phi_A$ and $\phi_{A'}$ are mutually inverse on $C(E)$.
(i) Let $x \in C(E)$ be an interior point of the column space. Suppose for contradiction that $\phi_A$ maps $x$ to some point $y$ on the boundary of $C(E)$. Then $\phi_{A'}$ must map $y$ back to $x$, so that $Ex = A'Ax = A'y = x$.
Consider the equation $Az=y$. It is easy to see that $x$ must be the unique
solution (since $\phi_A$ is an automorphism of $C(E)$, no other element of $C(E)$ can be a solution; if $z \notin C(E)$ were a solution, then $Ez=A'Az=A'y=x$, contradicting Lemma \ref{lem_exterior}). However, we have seen that given any point $y$ on the boundary there exists some $z \notin C(A)$ such that $Az=y$, a contradiction.
(ii) It follows immediately that $A$ must map boundary points to boundary points too; if $A$ maps a boundary point $y$ to an interior point $x$, then $A'$ must map $x$ back to $y$, contradicting part (i).
(iii) Finally, let $z \in \ft^n$ with $z \notin C(E)$ and suppose for contradiction that $A$ maps $z$ to an interior point $x$. By Lemma \ref{lem_exterior} we know that $E$ must map $z$ to a point on the boundary, $y$ say. Now $A'x = A'Az = Ez=y$, giving that $A'$ maps an interior point to a
boundary point, contradicting part (i).
\end{proof}
Recall that $\trop =\ft\cup\{-\infty\}$. We briefly consider the semigroup $M_n(\trop)$. It is clear that this is a monoid, whose identity element is the $n \times n$ matrix with $0$ entries on the diagonal and $-\infty$ entries off the diagonal. It is well known that the units of $M_n(\trop)$ are precisely the \emph{tropical monomial matrices}, that is, those matrices with exactly one entry not equal to $-\infty$ in each row and in each column. Thus it is clear that every unit has the form
$D(\lambda_1, \ldots, \lambda_n)P_{\sigma}$, where $D(\lambda_1, \ldots, \lambda_n)$ is a diagonal matrix with entries $\lambda_1, \ldots, \lambda_n$ and $P_{\sigma}$ is a tropical permutation matrix whose $i$th row has a $0$ in the $\sigma(i)$th position and $-\infty$ entries elsewhere. We shall now show that given an idempotent $E$ of rank $n$ in $M_n(\ft)$, the corresponding $\GreenH$-class $H_E$ is isomorphic to a certain subgroup of the group of units in $M_n(\trop)$.
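As a quick computational sanity check (our own Python illustration, not part of the paper), the following sketch implements the max-plus matrix product on $M_n(\trop)$ and confirms that a tropical monomial matrix $D(\lambda_1, \ldots, \lambda_n)P_{\sigma}$ is a unit; all helper names (`trop_mul`, `monomial`, `monomial_inverse`) are ours.

```python
NEG_INF = float("-inf")

def trop_mul(A, B):
    # tropical (max-plus) matrix product: (A o B)_ij = max_k (A_ik + B_kj)
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def monomial(sigma, lam):
    # D(lam) P_sigma: row i carries lam[i] in column sigma[i], -inf elsewhere
    n = len(sigma)
    M = [[NEG_INF] * n for _ in range(n)]
    for i in range(n):
        M[i][sigma[i]] = lam[i]
    return M

def monomial_inverse(sigma, lam):
    # invert by permuting back and negating the finite entries
    n = len(sigma)
    inv_sigma, mu = [0] * n, [0.0] * n
    for i in range(n):
        inv_sigma[sigma[i]] = i
        mu[sigma[i]] = -lam[i]
    return monomial(inv_sigma, mu)

sigma, lam = [1, 2, 0], [3.0, -1.0, 2.0]
product = trop_mul(monomial(sigma, lam), monomial_inverse(sigma, lam))
# product is the tropical identity: 0 on the diagonal, -inf elsewhere
```

The inverse simply undoes the permutation and negates each scalar, mirroring the description of the group of units.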
\begin{theorem}\label{thm_upstairs}
Let $E$ be an idempotent of rank $n$ in $M_n(\ft)$ and let $H_E$ denote
the $\GreenH$-class of $E$. Let $G_E$ be the set of all units
$G \in M_n(\trop)$ which commute with $E$. Then there is a group isomorphism:
$$G_E \to H_E, \ G \mapsto GE = EG.$$
\end{theorem}
\begin{proof}
Let $\gamma: G_E \rightarrow H_E$ be the map defined by $G\mapsto GE = EG$.
It is easy to see that $\gamma$ is a homomorphism of groups; indeed, for $G,H \in G_E$,
$$\gamma(GH)= E(GH)=E^2(GH)= E(EG)H = (EG)(EH)=\gamma(G)\gamma(H).$$
For injectivity, suppose $EG = EH$ for some units $G,H \in G_E$. Then
$EGH^{-1} = E$. Note that since $GH^{-1}$ is a monomial matrix, each
column of $EGH^{-1}$ is a scaling of a permutation of a column of $E$.
But since $E$ has rank $n$, no column of $E$ is a scaling of another
column. It follows that $GH^{-1}$ must be the identity element, giving
that $G=H$.
It remains to show that $\gamma$ is surjective. Let $A \in H_E$. By
Corollary~\ref{cor_basis}, the columns of $A$ provide a minimal
generating set for the column space of $E$. It follows that we may choose
a unit $G \in M_n(\trop)$ such that $A = EG$. Let $x$ be an interior
point of $C(E)$. By Lemma \ref{lem_Amaps}, $A$ maps $x$ to an interior
point. It follows that $G$ must also map $x$ to an interior point; if
not then, by Lemma~\ref{lem_exterior}, $EG$ maps $x$ to the boundary
of $C(E)$, contradicting $A=EG$. Thus it is easy to see that $Gx = Ax$
for all interior points $x$. Since $C(E)$ has pure dimension, every
boundary point is a limit of interior points. Since $G$ and $A$ are
continuous maps, it follows that $Gx = Ax$ for all $x \in C(E)$. In
particular, by Theorem~\ref{thm_aut}, the action of $G$ restricted to $C(E)$ is an
$\ft$-module automorphism. Now for all $x \in C(E)$ we have
$EGx=Gx=GEx$, giving
$$EG=A=AE=EGE=GEE=GE.$$
\end{proof}
Our main theorems combine to prove some results which may be of independent
interest.
\begin{theorem}\label{thm_extendaffine}
Every automorphism of a projective $n$-polytope in $\ft^n$
\begin{itemize}
\item[(i)] extends to an automorphism of $\ft^n$; and
\item[(ii)] is a (classical) affine linear map.
\end{itemize}
\end{theorem}
\begin{proof}
Let $X \subseteq \ft^n$ be a projective $n$-polytope and let
$\phi : X \to X$ be an $\ft$-module automorphism. By Theorem~\ref{thm_IJKmain},
$X = C(E)$ for some full rank idempotent $E \in M_n(\ft)$. By
Theorem~\ref{thm_aut}, there is a matrix $A \in H_E$ such that
$\phi(x) = Ax$ for all $x \in C(E) = X$. By Theorem~\ref{thm_upstairs}
there is a unit $G \in M_n(\trop)$ such that $A=EG=GE$. Now for
any $x \in X = C(E)$ we have
$$Gx = GEx = Ax = \phi(x),$$
so the map
$$\ft^n \to \ft^n, \ x \mapsto Gx$$
is an automorphism of $\ft^n$ extending $\phi$ as required to establish (i).
Now since $G$ is a monomial matrix, we have
$$G_{i,j}=\begin{cases}
\lambda_i &\mbox{ if }j=\sigma(i)\\
-\infty &\mbox{otherwise}.
\end{cases}$$
Thus, for all $x \in C(E)$,
$$(A \otimes x)_i = (GE\otimes x)_i = (G \otimes x)_i =\lambda_i \otimes x_{\sigma(i)} = x_{\sigma(i)} + \lambda_i,$$
giving $\phi_A(x) = P \cdot x + \lambda$, where $P$ is the (classical) permutation matrix corresponding to $\sigma$ and $\lambda = (\lambda_1, \ldots, \lambda_n)^T.$
\end{proof}
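The final displayed formula, $(A \otimes x)_i = x_{\sigma(i)} + \lambda_i$, can be checked numerically. The Python sketch below (our illustration under the stated identification, not the authors' code) compares the tropical action of a monomial matrix with the classical affine map $x \mapsto P \cdot x + \lambda$.

```python
NEG_INF = float("-inf")

def monomial(sigma, lam):
    # D(lam) P_sigma: row i carries lam[i] in column sigma[i], -inf elsewhere
    n = len(sigma)
    M = [[NEG_INF] * n for _ in range(n)]
    for i in range(n):
        M[i][sigma[i]] = lam[i]
    return M

def trop_mat_vec(G, x):
    # tropical action: (G o x)_i = max_j (G_ij + x_j)
    return [max(g + xj for g, xj in zip(row, x)) for row in G]

sigma, lam = [2, 0, 1], [1.0, -4.0, 0.5]
x = [7.0, -2.0, 3.0]
tropical = trop_mat_vec(monomial(sigma, lam), x)
affine = [x[sigma[i]] + lam[i] for i in range(3)]  # classical P.x + lambda
```

On this sample the two computations agree coordinate by coordinate, as the theorem predicts.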
Since an $\ft$-module automorphism $f$ of $C(E)$ respects scaling, it induces
a well-defined map $\hat{f}$ on the projectivisation of $C(E)$. It turns out
that, in the case $E$ is an idempotent matrix of full rank, this map too is
affine linear.
\begin{corollary}
Let $E$ be an idempotent of rank $n$ in $M_n(\ft)$, let $A$ be an element
in the $\GreenH$-class of $E$ and let $\widehat{\phi_A}: \mathcal{P}C(E) \rightarrow \mathcal{P}C(E)$ denote the corresponding map induced by left multiplication by $A$. Then $\widehat{\phi_A}$ is a (classical) affine linear map on $\mathcal{P}C(E)$, regarded as a subset of $\mathbb{R}^{n-1}$.
\end{corollary}
\begin{proof}
By Theorem~\ref{thm_extendaffine}, $\phi_A(x) = P \cdot x + \lambda$, where $P$ is a (classical) permutation matrix and $\lambda = (\lambda_1, \ldots, \lambda_n)^T$ is a constant vector. Let
$$P_{i,j} = \begin{cases}
1 &\mbox{ if }j=\sigma(i)\\
0 &\mbox{otherwise},
\end{cases}$$
for some permutation $\sigma \in S_n$. Then
$$\phi_A(x) = \left(\begin{array} {c}
x_{\sigma(1)}\\
\vdots\\
x_{\sigma(n)}\\
\end{array}\right) + \left(\begin{array} {c}
\lambda_1\\
\vdots\\
\lambda_n\\
\end{array}\right).
$$
We may identify $\pft^{(n-1)}$ with $\mathbb{R}^{(n-1)}$ via the map
$$(x_1, \ldots, x_n) \mapsto (x_1 -x_n,\ldots, x_{n-1}-x_n)$$
and hence
$$\widehat{\phi_A}\left(\begin{array} {c}
x_{1}-x_n\\
\vdots\\
x_{n-1}-x_n\\
\end{array}\right) = \left(\begin{array} {c}
x_{\sigma(1)}-x_{\sigma(n)}\\
\vdots\\
x_{\sigma(n-1)}-x_{\sigma(n)}\\
\end{array}\right) + \left(\begin{array} {c}
\lambda_1-\lambda_n\\
\vdots\\
\lambda_{n-1}-\lambda_n\\
\end{array}\right).
$$
If $\sigma(n)=n$ then it is immediate that $\widehat{\phi_A}$ is an affine linear map on $\mathcal{P}C(E)$ regarded as a subset of $\mathbb{R}^{n-1}$. Suppose that $\sigma(n)\neq n$. Then $\sigma(k)=n$ for some $k \in\{1, \ldots,n-1\}$. Let $B$ be the $(n-1)\times (n-1)$ matrix given by
$$B_{i,j} = \begin{cases}
1 & \mbox{ if } i \neq k \mbox{ and } j=\sigma(i),\\
-1& \mbox{ if } j=\sigma(n),\\
0&\mbox{ otherwise.}\\
\end{cases}$$
Then it is easy to check that
$$\widehat{\phi_A}\left(\begin{array} {c}
x_{1}-x_n\\
\vdots\\
x_{n-1}-x_n\\
\end{array}\right) = B \cdot \left(\begin{array} {c}
x_{1}-x_n\\
\vdots\\
x_{n-1}-x_n\\
\end{array}\right) + \left(\begin{array} {c}
\lambda_1-\lambda_n\\
\vdots\\
\lambda_{n-1}-\lambda_n\\
\end{array}\right).$$
\end{proof}
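The construction of the matrix $B$ in the proof above is easy to test numerically. The following Python sketch (our own illustration; $0$-indexed, with the last coordinate playing the role of $n$) builds $B$ for a sample $\sigma$ with $\sigma(n) \neq n$ and verifies the displayed identity.

```python
def build_B(sigma):
    # the (n-1) x (n-1) matrix B from the proof, 0-indexed; requires sigma[n-1] != n-1
    n = len(sigma)
    k = sigma.index(n - 1)                 # the row with sigma(k) = n
    B = [[0] * (n - 1) for _ in range(n - 1)]
    for i in range(n - 1):
        B[i][sigma[n - 1]] = -1            # the -1 entries in column sigma(n)
        if i != k:
            B[i][sigma[i]] = 1
    return B

sigma = [2, 3, 0, 1]                       # here sigma(n) != n (0-indexed)
x, lam = [5.0, -1.0, 2.0, 4.0], [0.5, 2.0, -3.0, 1.0]
B = build_B(sigma)
y = [xi - x[-1] for xi in x[:-1]]          # projectivised coordinates x_i - x_n
mu = [li - lam[-1] for li in lam[:-1]]     # constants lambda_i - lambda_n
lhs = [sum(B[i][j] * y[j] for j in range(3)) + mu[i] for i in range(3)]
rhs = [(x[sigma[i]] + lam[i]) - (x[sigma[3]] + lam[3]) for i in range(3)]
```

Both sides agree, confirming that $\widehat{\phi_A}$ is indeed affine linear in these coordinates.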
We shall use Theorem~\ref{thm_upstairs} to prove our main result; that is,
that every maximal subgroup $H_E$ is isomorphic to a direct product
$\mathbb{R} \times \Sigma$, where $\Sigma$ is a subgroup of $S_n$. We
shall require the following two lemmas, concerning units that commute with
elements of $M_n(\ft)$.
\begin{lemma}\label{lem_uniteig} Let $A \in M_n(\ft)$ and let $G \in M_n(\trop)$
be a unit which commutes with $A$. Then $G$ has only one eigenvalue.
\end{lemma}
\begin{proof} Suppose false. It is easy to see that some power $G^k$ of $G$ is a diagonal matrix. Because $G$ has distinct eigenvalues, so does $G^k$, so $G^k$
cannot be a scaling matrix. But now it is easy to see that $G^k$ cannot commute with any matrix in $M_n(\ft)$, which contradicts the assumption that $G$ commutes with $A$.
\end{proof}
\begin{lemma}\label{lem_commute}Let $A \in M_n(\ft)$ and let $G$ and $H$ be units which commute with
$A$. Suppose also that $G$ and $H$ have maximal cycle mean $0$. Then $GH$ has maximal cycle mean $0$.
\end{lemma}
\begin{proof}
It follows from Lemma~\ref{lem_uniteig} that every cycle in the graphs
corresponding to $G$ and $H$ has mean weight $0$. In particular, the
(classical) sum
of all the finite entries in $G$ is equal to $0$, and the sum of the
finite entries in $H$ is equal to $0$. By a simple calculation, the
sum of the finite entries in the product $GH$ must also be $0$. Now,
since the product $GH$ also commutes with $A$, applying
Lemma~\ref{lem_uniteig} again yields that every cycle in $GH$ has the same
average weight. Since the sum of the entries (which is 0) is a weighted
sum of the cycle means, and the cycle means are all the same, we deduce
that the cycle means are all 0. In particular, the maximum cycle mean is
0.
\end{proof}
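The bookkeeping in this proof is straightforward to check numerically. The sketch below (our own Python illustration, not from the paper) multiplies two tropical monomial matrices whose finite entries sum to $0$ and confirms that the finite entries of the product also sum to $0$.

```python
NEG_INF = float("-inf")

def trop_mul(A, B):
    # tropical (max-plus) matrix product
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def monomial(sigma, lam):
    # D(lam) P_sigma: row i carries lam[i] in column sigma[i], -inf elsewhere
    n = len(sigma)
    M = [[NEG_INF] * n for _ in range(n)]
    for i in range(n):
        M[i][sigma[i]] = lam[i]
    return M

def finite_sum(M):
    # classical sum of the finite entries of a tropical matrix
    return sum(e for row in M for e in row if e != NEG_INF)

G = monomial([1, 0, 2], [2.0, -2.0, 0.0])  # finite entries sum to 0
H = monomial([2, 1, 0], [5.0, 0.0, -5.0])  # finite entries sum to 0
```

For monomial matrices the product has exactly one finite entry per row, namely $\lambda_i + \mu_{\sigma(i)}$, so the sums simply add.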
\begin{theorem}
Let $E$ be an idempotent of rank $n$ in $M_n(\ft)$ and let $G_E$ denote the
group of units commuting with $E$. Define
$R = \{\lambda \otimes I_n: \lambda \in \ft\}$
and $\Sigma = \{ G \in G_E: G \mbox{ has eigenvalue } 0\}$. Then $R$
and $\Sigma$ are subgroups of $G_E$, $R \cong \mathbb{R}$, $\Sigma$ is
a finite group embeddable in the symmetric group $S_n$, and $G_E \cong R \times \Sigma$.
\end{theorem}
\begin{proof}
It is easy to see that $R \unlhd G_E$. Moreover, the only diagonal matrices which commute with $E$ are those contained in $R$. Since $I_n$ has eigenvalue $0$ we have $I_n \in \Sigma$. Let $A, B \in \Sigma$. By
Lemma~\ref{lem_commute} we see that $AB \in \Sigma$ and it is also clear
that $A^{-1} \in \Sigma$, giving that $\Sigma \leq G_E$. Thus
$R, \Sigma \leq G_E$ and it is clear that $R \cap \Sigma = \{I_n\}$.
Let $G \in G_E$. By Lemma~\ref{lem_uniteig}, $G$ has a unique eigenvalue,
$\lambda$ say. Hence we may write $G=\lambda \otimes I_n \otimes G_{\lambda}$, where $G_{\lambda} \in \Sigma$. In other words, $G_E = R\Sigma$. It is clear that every element of $R$ commutes with every element of $\Sigma$, giving $G_E = R \times \Sigma$.
It is clear that $R \cong \mathbb{R}$. Moreover, from the definition of
matrix multiplication it is easy to show that the map
$$\phi \ : \ G_E \rightarrow S_n, \ D(\lambda_1, \ldots, \lambda_n) P_{\sigma} \mapsto \sigma$$
is a homomorphism of groups, with kernel the set of all diagonal matrices that commute with $E$. Thus ${\rm ker} \phi = R$ and hence $\Sigma \cong G_E/R \cong {\rm Im} \phi \leq S_n$.
\end{proof}
\begin{corollary}
Let $E$ be an idempotent of rank $n$ in $M_n(\ft)$ and let $H_E$ denote the $\GreenH$-class of $E$. There exists an element $x \in C(E)$ such that $x$ is an eigenvector for all $A \in H_E$.
\end{corollary}
\begin{proof}
Let $S = \{\lambda \otimes E: \lambda \in \ft\}$ and $R = \{\lambda \otimes I_n: \lambda \in \ft\}$. Since
the map $\gamma: G_E \rightarrow H_E$ given by $\gamma(G) = EG$ is an isomorphism of groups, we see that $S=\gamma(R)$. Hence $S$ is a normal subgroup of $H_E$ which is isomorphic to $\mathbb{R}$ and $H_E / S$ is isomorphic to ${\rm Im} \phi \leq S_n$. It is clear that $S$ acts trivially on $\mathcal{P}C(E)$. Thus the quotient group $H_E / S$ acts on $\mathcal{P}C(E)$ by affine linear transformations. Since $H_E / S$ is a finite group, we may apply a theorem of Day \cite[Theorem~1]{Day61} to find a fixed point $x \in \mathcal{P}C(E)$ common to all elements of $H_E / S$. In other words, $x$ is an eigenvector for all elements of $H_E$.
\end{proof}
\begin{corollary}
\label{cor_directprod}
Let $H$ be a maximal subgroup of $M_n(\ft)$. Then $H$ is isomorphic to a direct product of the form $\mathbb{R} \times \Sigma$ for some $\Sigma \leq S_n$.
\end{corollary}
\bibliographystyle{plain}
\def\cprime{$'$}
% arXiv:1203.2449, "Tropical matrix groups" (math.GR; math.RA), submitted 2012-03-13
% https://arxiv.org/abs/1203.2449
% Abstract: We study the subgroup structure of the semigroup of finitary tropical
% matrices under multiplication. We show that every maximal subgroup is isomorphic
% to the full linear automorphism group of a related tropical polytope, and that
% each of these groups is the direct product of the real numbers with a finite
% group. We also show that there is a natural and canonical embedding of each full
% rank maximal subgroup into the group of units of the semigroup of matrices over
% the tropical semiring with minus infinity. Our results have numerous corollaries,
% including the fact that every automorphism of a projective (as a module) tropical
% polytope of full rank extends to an automorphism of the containing space, and
% that every full rank subgroup has a common eigenvector.
% arXiv:2110.05194, "A note on some properties of the $\lambda$-Polynomial" (math.GM)
% https://arxiv.org/abs/2110.05194
% Abstract: The expression $a^n + b^n$ can be factored as
% $(a+b)(a^{n-1} - a^{n-2} b + a^{n-3} b^2 - \dots + b^{n-1})$ when $n$ is an odd
% integer greater than one. This paper focuses on proving a few properties of the
% longer factor above, which we call $\lambda_n(a,b)$. One such property is that
% the primes which divide $\lambda_n(a,b)$ satisfy $p \ge n$, if $a,b$ are coprime
% integers and $n$ is an odd prime.
\section{Preliminary lemmas}
\textbf{Lemma 1.1} Let $a,b \in \mathbb{Z}$ and let $d \in \mathbb{Z}$ divide $a$ but not $b$. Then $d$ does not divide $a+b$.\\
\textit{Proof.} Suppose otherwise. We can then write $a = kd$ and $a+b = hd$ for some $k,h \in \mathbb{Z}$. As a consequence $b = hd - kd = d(h-k)$ which implies that $b$ is divisible by $d$, a contradiction.
\null \hfill{$\blacksquare$} \\
\textbf{Lemma 1.2} Let $a, b \in \mathbb{Z}$ be coprime. Then $ab$ and $a+b$ are coprime. \\
\textit{Proof.} Notice that $ab = \pm 1$ only if $a,b \in \{1,-1\}$, in which case $ab$ and $a+b$ are coprime. Otherwise suppose $p$ is a prime which divides $ab$. Then $p$ divides either $a$ or $b$, but not both. It therefore follows by Lemma 1.1 that $p$ cannot divide $a+b$.
\null \hfill{$\blacksquare$} \\
\textbf{Corollary 1.3} Let $a, b \in \mathbb{Z}$ be coprime. Then $(ab)^{m_1}$ and $(a+b)^{m_2}$ are coprime for any positive integers $m_1$ and $m_2$. \\
\textit{Proof.} By Lemma 1.2 it follows that $ab$ and $a+b$ are coprime. But raising any of these to some positive power does not change their prime factors, which means that they remain coprime.
\null \hfill{$\blacksquare$} \\
\section{The $\lambda$-Polynomial}
Suppose that $n$ is an odd integer greater than one and $a,b \in \mathbb{Z}$. In this section we wish to study the longer factor of $a^n+b^n=(a+b)(a^{n-1} - a^{n-2} b + a^{n-3} b^2 - ... + b^{n-1})$ which we will call $\lambda_n(a,b)$. Notice that when $b \neq -a$ we can rearrange the above equation to obtain
$$\lambda_n(a,b) = \frac{a^n + b^n}{a+b}.$$
In the case that $b = -a$ the original expression gives $\lambda_n(a,-a) = n a^{n-1}$. This provides us with two equivalent definitions for $\lambda_n(a,b)$: \\
\textbf{Definition 2.1} Let $a,b \in \mathbb{Z}$ and $n$ be an odd integer greater than one. We define
$$\lambda_n(a,b) = \sum_{i=0}^{n-1} (-1)^i a^{n-1-i} b^i$$
or
$$\lambda_n(a,b) =
\begin{cases}
\cfrac{a^n + b^n}{a+b} &\text{if } b \neq -a; \\
na^{n-1} &\text{if } b = -a.
\end{cases}$$
The second definition is much more compact and generally nicer to work with. \\
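As an illustrative sanity check (ours, not part of the note), the following Python snippet implements both definitions and confirms that they agree on sample inputs, including the case $b = -a$; the integer division is exact because $a+b$ divides $a^n + b^n$ whenever $b \neq -a$.

```python
def lam_sum(a, b, n):
    # alternating-sum definition of lambda_n(a, b)
    return sum((-1) ** i * a ** (n - 1 - i) * b ** i for i in range(n))

def lam_quot(a, b, n):
    # quotient/case definition; // is exact since (a + b) divides (a^n + b^n)
    return n * a ** (n - 1) if b == -a else (a ** n + b ** n) // (a + b)
```

For example, `lam_quot(1, 2, 3)` gives $\lambda_3(1,2) = 3$.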
\textbf{Lemma 2.2} (Non-negativity) Let $a,b \in \mathbb{Z}$ and $n$ be an odd integer greater than one. Then $\lambda_n(a,b)$ is a non-negative integer. \\
\textit{Proof.} In the case $b \neq -a$ we have $\lambda_n(a,b) = \frac{a^n +b^n}{a+b}$. Because $n$ is odd the numerator and denominator will always have the same sign, ensuring positivity. In the case $b = -a$ we have $\lambda_n(a,b) = n a^{n-1}$. Since $n$ is positive and $a^{n-1}$ is non-negative we must have that their product is also non-negative.
\null \hfill{$\blacksquare$} \\
\textbf{Remark 2.3} Notice that $\lambda_n(a,b)$ is actually a positive integer if we add the condition that at least one of $a$ and $b$ is nonzero. Note also that this condition is implied if $a$ and $b$ are coprime. \\
\textbf{Lemma 2.4} (Symmetry) Let $a,b \in \mathbb{Z}$ and let $n$ be an odd integer greater than one. Then $\lambda_n(a,b) = \lambda_n(b,a)$. \\
\textit{Proof.} In the case $b \neq -a$ this is quite plain. In the case $b = -a$ we have that $\lambda_n(a,-a) = na^{n-1}$ is the same as $\lambda_n(-a,a) = n(-a)^{n-1}$ because $n-1$ is even.
\null \hfill{$\blacksquare$} \\
\textbf{Lemma 2.5} (Homogeneity) Let $a,b,d \in \mathbb{Z}$ and let $n$ be an odd integer greater than one. Then $\lambda_n(da,db) = d^{n-1} \cdot \lambda_n(a,b)$.\\
\textit{Proof.} This is quite easy to see both when $b \neq -a$ and $b = -a$.
\null \hfill{$\blacksquare$} \\
\textbf{Corollary 2.6} (Sign Symmetry) Let $a,b \in \mathbb{Z}$ and let $n$ be an odd integer greater than one. Then $\lambda_n(-a,b) = \lambda_n(a,-b)$. \\
\textit{Proof.} By Lemma 2.5
$$\lambda_n(-a,b) = \lambda_n(-1 \cdot a, -1 \cdot (-b)) = (-1)^{n-1} \lambda_n(a,-b) = \lambda_n(a,-b).$$
\null \hfill{$\blacksquare$} \\
\textbf{Corollary 2.7} (Eveness) Let $a,b \in \mathbb{Z}$ and let $n$ be an odd integer greater than one. Then $\lambda_n(-a,-b) = \lambda_n(a,b)$. \\
\textit{Proof.} By Lemma 2.5
$$\lambda_n(-a,-b) =\lambda_n(-1 \cdot a, -1 \cdot b) = (-1)^{n-1} \lambda_n(a,b) = \lambda_n(a,b).$$
\null \hfill{$\blacksquare$} \\
The next few results are more selective, requiring that $a$ and $b$ be coprime integers or that $n$ be an odd prime. \\
\textbf{Lemma 2.8} Let $a, b \in \mathbb{Z}$ be coprime and $n$ be an odd integer greater than one. Then $\lambda_n(a,b)$ is coprime to $a$ and $b$. \\
\textit{Proof.} Applying Lemma 1.2 with suitable values, we see that $a^n + b^n$ is coprime to both $a$ and $b$. Since $a^n + b^n = (a+b) \cdot \lambda_n(a,b)$, it follows that $\lambda_n(a,b)$ must also be coprime to $a$ and $b$.
\null \hfill{$\blacksquare$} \\
\textbf{Proposition 2.9} Let $a, b \in \mathbb{Z}$ be coprime and $n$ be an odd integer greater than one. Then the common divisors of $a+b$ and $ \lambda_n(a,b)$ must divide $n$. \\
\textit{Proof.} We consider the expression
$$\lambda_n(a,b) \pm n a^{\frac{n-1}{2}}b^{\frac{n-1}{2}}$$
with ``$-$'' if $\frac{n-1}{2}$ is even and ``$+$'' if $\frac{n-1}{2}$ is odd. If we can prove that the above expression is divisible by $a+b$, then we'll be able to rearrange some things and prove the Proposition. We therefore view the expression modulo $a+b$. Since $b \equiv -a \pmod{a+b}$ we get
$$\lambda_n(a,b) \pm n a^{\frac{n-1}{2}}b^{\frac{n-1}{2}} \equiv na^{n-1} \pm na^{\frac{n-1}{2}}(-a)^{\frac{n-1}{2}} \pmod{a+b}.$$
If $\frac{n-1}{2}$ is even we have that
$$\lambda_n(a,b) - n a^{\frac{n-1}{2}}b^{\frac{n-1}{2}} \equiv n a^{n-1} - n a^{n-1} \equiv 0 \pmod{a+b}$$
otherwise if $\frac{n-1}{2}$ is odd
$$\lambda_n(a,b) + n a^{\frac{n-1}{2}}b^{\frac{n-1}{2}} \equiv n a^{n-1} - n a^{n-1} \equiv 0 \pmod{a+b}.$$
Consequently, $\lambda_n(a,b) + k(a+b) = \mp na^\frac{n-1}{2}b^\frac{n-1}{2}$ for some $k \in \mathbb{Z}$. By Lemma 2.8, $\lambda_n(a,b)$ is coprime to $a$ and $b$, therefore $k \neq 0$. Suppose then that $d$ divides both $a+b$ and $\lambda_n(a,b)$. It follows from the equation above that $d$ also divides $\mp na^\frac{n-1}{2}b^\frac{n-1}{2}$. Moreover, by Corollary 1.3 with $m_1 = \frac{n-1}{2}$ and $m_2 = 1$, the integer $(ab)^{\frac{n-1}{2}}$ is coprime to $a+b$, hence also to $d$. It follows that $d$ must divide $n$.
\null \hfill{$\blacksquare$} \\
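Proposition 2.9 is easy to test numerically. This Python sketch (our illustration) checks, for several coprime pairs and odd primes $n$, that $\gcd(a+b, \lambda_n(a,b))$ divides $n$.

```python
from math import gcd

def lam(a, b, n):
    # lambda_n(a, b) via the quotient definition
    return n * a ** (n - 1) if b == -a else (a ** n + b ** n) // (a + b)

checks = [n % gcd(a + b, lam(a, b, n)) == 0
          for n in (3, 5, 7)
          for a, b in [(1, 2), (2, 3), (3, 8), (4, 9), (5, 12)]]  # coprime pairs
```

For instance, with $a=1$, $b=2$, $n=3$ we get $\gcd(3, 3) = 3$, which divides $n$, while with $a=2$, $b=3$, $n=3$ we get $\gcd(5, 7) = 1$.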
\textbf{Proposition 2.10} Let $a,b \in \mathbb{Z}$ and $n$ be an odd prime. Then $a+b$ is divisible by $n$ if and only if $\lambda_n(a,b)$ is divisible by $n$. \\
\textit{Proof.} Suppose that $a+b$ is divisible by $n$. Then $b \equiv -a$ (mod $n$), and as a consequence $\lambda_n(a,b) \equiv n a^{n-1} \equiv 0$ (mod $n$). Suppose on the other hand that $\lambda_n(a,b)$ is divisible by $n$. Since $(a+b) \cdot \lambda_n(a,b) = a^n +b^n$ we obtain that $a^n + b^n \equiv 0$ (mod $n$). Finally, applying Fermat's Little Theorem gives $a+b \equiv 0$ (mod $n$).
\null \hfill{$\blacksquare$} \\
The following result is taken from \cite{gauss1986disquisitiones} and is provided without proof: \\
\textbf{Lemma 2.11} Let $p$ be a prime and $m$ be a positive integer. If $m$ is coprime to $p-1$, then every element of $\mathbb{Z} / p \mathbb{Z}$ has a unique $m$-th root.
\null \hfill{$\blacksquare$} \\
\textbf{Theorem 2.12} Let $a,b \in \mathbb{Z}$ be coprime and let $n$ be an odd prime. Suppose that $p \neq n$ is a prime which divides $\lambda_n(a,b)$. Then $p \equiv 1$ (mod $n$). \\
\textit{Proof.} We once again use the fact that $\lambda_n(a,b) \cdot (a+b) = a^n +b^n$, from which we obtain $a^n + b^n \equiv 0$ (mod $p$), or rearranged: $a^n \equiv (-b)^n$ (mod $p$). Now by Lemma 2.11, if $n$ does not divide $p-1$, we are able to take $n$-th roots modulo $p$. Suppose that this is the case. Then $a \equiv -b$ (mod $p$), that is, $a+b \equiv 0$ (mod $p$). But then $p$ divides both $a+b$ and $\lambda_n(a,b)$, so by Proposition 2.9 $p$ divides $n$; since $p$ and $n$ are both prime, this forces $p = n$, a contradiction. We must therefore have that $n$ divides $p-1$, in other words, $p-1 \equiv 0$ (mod $n$), which completes the proof.
\null \hfill{$\blacksquare$} \\
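Theorem 2.12 can likewise be checked on small cases. The following Python sketch (ours) factors $\lambda_n(a,b)$ by trial division and confirms that every prime factor is either $n$ itself or congruent to $1$ modulo $n$.

```python
def lam(a, b, n):
    # lambda_n(a, b) for b != -a
    return (a ** n + b ** n) // (a + b)

def prime_factors(m):
    # prime factors of m > 1 by trial division
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

ok = all(p == n or p % n == 1
         for n in (3, 5, 7)
         for a, b in [(1, 2), (2, 3), (3, 4), (2, 5)]   # coprime pairs
         for p in prime_factors(lam(a, b, n)))
```

For example, $\lambda_5(2,3) = 55 = 5 \cdot 11$: the factor $5$ equals $n$ and $11 \equiv 1 \pmod 5$.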
\textbf{Remark 2.13} Notice that if $a$ and $b$ are not coprime, by Lemma 2.5 the common prime factors of $a$ and $b$ will form ``clusters" of the form $p^{n-1}$ in $\lambda_n(a,b)$. These ``clusters" are congruent to one modulo $n$ by Fermat's Little Theorem, as long as $p \neq n$. The other prime factors of $\lambda_n(a,b)$ will be each individually congruent to one modulo $n$ by Theorem 2.12, as long as they are not equal to $n$.\\
\textbf{Corollary 2.14} Let $a,b \in \mathbb{Z}$ be coprime and let $n$ be an odd prime. If $p \neq n$ is a prime which divides $\lambda_n(a,b)$, then $p > n$. \\
\textit{Proof.} Suppose not. Then there exists some prime factor $p$ of $\lambda_n(a,b)$ which satisfies $p<n$. By Theorem 2.12 we also have that $p \equiv 1$ (mod $n$). This implies that $p = 1$, a contradiction.
\null \hfill{$\blacksquare$} \\
\textbf{Corollary 2.15} Let $a,b \in \mathbb{Z}$ be coprime and let $n$ be an odd prime. Then $\lambda_n(a,b)$ is an odd integer. \\
\textit{Proof.} By Remark 2.3, $\lambda_n(a,b)$ is a positive integer, and by Corollary 2.14 each of its prime factors is either equal to $n$ or greater than $n$. Since $n$ is an odd prime, $2$ is not among these factors, so $\lambda_n(a,b)$ is odd.
\null \hfill{$\blacksquare$} \\
\textbf{Corollary 2.16} Let $a,b \in \mathbb{Z}$ be coprime and let $n$ be an odd prime. If $n$ does not divide $\lambda_n(a,b)$, then $\lambda_n(a,b) \equiv 1$ (mod $n$). \\
\textit{Proof.} The result follows from combining Lemma 2.2 with Theorem 2.12.
\null \hfill{$\blacksquare$} \\
\textbf{Lemma 2.17} (Generalization of Corollary 2.16) Let $a,b \in \mathbb{Z}$, not necessarily coprime, and let $n$ be an odd prime. If $n$ does not divide $\lambda_n(a,b)$, then $\lambda_n(a,b) \equiv 1$ (mod $n$). \\
\textit{Proof.} This can be deduced directly from Lemma 2.2 and Remark 2.13. Alternatively, we have that $\lambda_n(a,b) \cdot (a+b) = a^n + b^n$. Using Fermat's Little Theorem we obtain $\lambda_n(a,b) \cdot (a+b) \equiv a+b$ (mod $n$). By Proposition 2.10, since $n$ does not divide $\lambda_n(a,b)$ it does not divide $a+b$ either. As such, we can multiply both sides of the congruence above by $(a+b)^{-1}$ to get $\lambda_n(a,b) \equiv 1$ (mod $n$).
\null \hfill{$\blacksquare$} \\
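A quick numerical check of Lemma 2.17 (our illustration, using pairs that are not necessarily coprime): whenever $n \nmid \lambda_n(a,b)$, the residue of $\lambda_n(a,b)$ modulo $n$ is $1$.

```python
def lam(a, b, n):
    # lambda_n(a, b), covering the b = -a case
    return n * a ** (n - 1) if b == -a else (a ** n + b ** n) // (a + b)

residues = []
for n in (3, 5, 7):
    for a, b in [(2, 6), (4, 10), (6, 15), (-4, 6)]:   # not necessarily coprime
        value = lam(a, b, n)
        if value % n != 0:
            residues.append(value % n)
```

The pairs divisible by $n$ (for example $a+b = 21$ with $n = 3$ or $n = 7$) are skipped, consistent with Proposition 2.10.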
\section*{Acknowledgements}
The author is grateful to Nicolas Mascot for his helpful comments on this paper.
\bibliographystyle{plain}
% arXiv:2110.05194, submitted 2021-10-28 (math.GM).
% arXiv:1708.00833, "tt-geometry of filtered modules"
% https://arxiv.org/abs/1708.00833
% Abstract: We compute the tensor triangular spectrum of perfect complexes of
% filtered modules over a commutative ring, and deduce a classification of the
% thick tensor ideals. We give two proofs: one by reducing to perfect complexes
% of graded modules which have already been studied in the literature, and one
% more direct for which we develop some useful tools.
\section{Introduction}
\label{sec:intro}
One of the age-old problems mathematicians engage in is to classify their objects of study, up to an appropriate equivalence relation. In contexts in which the domain is organized in a category with compatible tensor and triangulated structure (we call this a \emph{tt-category}) it is natural to view objects as equivalent when they can be constructed from each other using sums, extensions, translations, tensor product etc., in other words, using the tensor and triangulated structure alone. This can be made precise by saying that the objects generate the same thick tensor ideal (or, \emph{tt-ideal}) in the tt-category. This sort of classification is precisely what tt-geometry, as developed by Balmer, achieves. To a (small) tt-category $\T$ it associates a topological space $\spec(\T)$ called the \emph{tt-spectrum} of $\T$ which, via its Thomason subsets, classifies the tt-ideals of~$\T$. A number of classical mathematical domains have in the meantime been studied through the lens of tt-geometry; we refer to~\cite{balmer:icm} for an overview of the basic theory, its early successes and applications.
One type of context which does not seem to have received any attention so far arises from filtered objects. Examples pertinent to tt-geometry abound: filtrations by the weight in algebraic geometry induce filtrations on cohomology theories, giving rise to filtered vector spaces, representations or motives; (mixed) Hodge theory involves bifiltered vector spaces; filtrations by the order of a differential operator play an important role in the theory of ${\mathcal D}$-modules.
In this note, we take first steps in the study of filtered objects through the lens of tt-geometry by focusing on a particularly interesting case whose unfiltered analogue is well-understood. Namely, we give a complete account of the tt-geometry of filtered modules. This is already enough to say something interesting about certain motives, as we explain at the end of this introduction. To describe our results in more detail, let us recall the analogous situation for modules.
Let $R$ be a ring, assumed commutative and with unit. Its derived category $\D{R}$ is a tt-category which moreover is compactly generated, and the compact objects coincide with the rigid (or, strongly dualizable) objects, which are also called perfect complexes. These are (up to isomorphism in the derived category) the bounded complexes of finitely generated projective $R$-modules. The full subcategory $\per(R)$ of perfect complexes inherits the structure of a (small) tt-category, and the Hopkins-Neeman-Thomason classification of its thick subcategories can be interpreted as the statement that the tt-spectrum $\spec(\per(R))$ is precisely the Zariski spectrum~$\spec(R)$. In this particular case, thick subcategories are the same as tt-ideals so that this result indeed classifies perfect complexes up to the triangulated and tensor structure available.
In this note we will replicate these results for filtered $R$-modules. The derived category $\D{\FMd(R)}$ of the category $\FMd(R)$ of filtered $R$-modules is a tt-category which moreover is compactly generated, and the compact objects coincide with the rigid objects. We characterize these ``perfect complexes'' as bounded complexes of ``finitely generated projective'' filtered $R$-modules.\footnote{In the body of the text these are rather called \emph{split finite projective} for reasons which will become apparent when they are introduced.} The full subcategory $\fper(R)$ of perfect complexes inherits the structure of a (small) tt-category. For a regular ring $R$ this is precisely the filtered derived category of $R$ in the sense first studied by Illusie in~\cite{illusie:cotangent-complex-I}, and for general rings it is a full subcategory. Our main theorem computes the tt-spectrum of this tt-category.
{
\renewcommand{\thethm}{\ref{main-thm}}
\begin{thm-intro}
The tt-spectrum of $\fper(R)$ is canonically isomorphic to the homogeneous Zariski spectrum $\spech(R[\beta])$ of the polynomial ring in one variable. In particular, the underlying topological space contains two copies of $\spec(R)$, connected by specialization. Schematically:
\begin{center}
\scalebox{0.8}{ \begin{tikzpicture}
\node at (-4.5,0) {$\mathrm{Spec}(R)\approx U(\beta)$};
\node at (-4.5,2.5) {$\mathrm{Spec}(R)\approx Z(\beta)$};
\draw (0,0) ellipse (2cm and .7cm);
\draw (0,2.5) ellipse (2cm and .7cm);
\node[label=left:$\mathfrak{p}$] (p) at (-1,0) {$\bullet$};
\node[label={[label distance=-.35cm]35:$\mathfrak{p}+\langle\beta\rangle$}] (pb) at (-1,2.5) {$\bullet$};
\draw[-] (p.north) to (pb.south);
\node[label=left:$\mathfrak{q}$] (q) at (1,-.3) {$\bullet$};
\node[label={[label distance=-.15cm]90:$\mathfrak{q}+\langle\beta\rangle$}] (qb) at (1,2.2) {$\bullet$};
\draw[-] (q.north) to (qb.south);
\draw [decorate,decoration={brace,amplitude=10pt,mirror,raise=4pt},yshift=0pt] (2.5,-0.7) -- (2.5,3.2) node [black,midway,xshift=1.5cm] {$\spech(R[\beta])$}; \end{tikzpicture}
}\end{center}
\end{thm-intro}
\addtocounter{thm}{-1}
}
As a consequence we are able to classify the tt-ideals in~$\fper(R)$. To state this precisely, notice that we may associate to any filtered $R$-module $M$ its underlying $R$-module $\pi(M)$ as well as the $R$-module of its graded pieces~$\gr(M)$. These induce two tt-functors~$\fper(R)\to\per(R)$. Also, recall that the \emph{support} of an object $M\in\per(R)$, denoted by $\supp(M)$, is the set of primes in $\spec(\per(R))=\spec(R)$ which do not contain $M$. This is extended to a set ${\mathcal E}$ of objects by taking the union of the supports of its elements: $\supp({\mathcal E}):=\cup_{M\in{\mathcal E}}\supp(M)$. Conversely, starting with a set of primes $Y\subset\spec(R)$, we define ${\mathcal K}_{Y}:=\{M\in\per(R)\mid \supp(M)\subset Y\}$.
{
\renewcommand{\thethm}{\ref{classification-tt-ideals}}
\begin{cor}
There is an inclusion preserving bijection
\begin{align*}
\left\{\Pi\subset \Gamma\mid \Pi,\Gamma\subset\spec(R)\text{ Thomason subsets}
\right\}&\longleftrightarrow
\left\{
\text{tt-ideals in }\fper(R)
\right\} \\
(\Pi\subset \Gamma)&\longmapsto \pi^{-1}({\mathcal K}_{\Pi})\cap\gr^{-1}({\mathcal K}_{\Gamma})\\
\left(
\supp(\pi{\mathcal J})\subset\supp(\gr{\mathcal J})
\right)&\longmapsfrom {\mathcal J}
\end{align*}
\end{cor}
\addtocounter{thm}{-1}
}
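To illustrate the classification in the simplest case, take $R=k$ a field. Then $\spec(k)$ is a single point, whose only Thomason subsets are $\emptyset$ and the point itself, so the corollary yields exactly three tt-ideals in $\fper(k)$:
\begin{equation*}
0,\qquad
\pi^{-1}({\mathcal K}_{\emptyset})=\{A\in\fper(k)\mid \pi(A)\text{ is exact}\},\qquad
\fper(k),
\end{equation*}
corresponding to the pairs $\emptyset\subset\emptyset$, $\emptyset\subset\{\ast\}$ and $\{\ast\}\subset\{\ast\}$, respectively. (For the first pair, note that a perfect filtered complex whose underlying complex and associated graded complex are both exact is zero, by conservativity of~$\gr$.)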
Clearly, an important role is played by the element $\beta$ appearing in the Theorem. It can be interpreted as the following morphism of filtered $R$-modules. Let $R(0)$ be the module $R$ placed in filtration degree 0, while $R(1)$ is $R$ placed in degree 1 (our filtrations are by convention decreasing), and $\beta:R(0)\to R(1)$ is the identity on the underlying modules:\footnote{We call this element $\beta$ in view of the intended application described at the end of this introduction. In the context of motives considered there, $\beta$ is the ``Bott element'' of~\cite{haesemeyer-hornbostel:bott}.}
\begin{equation*}
\xymatrix{R(1):&\cdots\ar@{}[r]|-*[@]{=}&0\ar@{}[r]|-*[@]{\subset}&R\ar@{}[r]|-*[@]{=}&R\ar@{}[r]|-*[@]{=}&\cdots\\
R(0):\ar[u]_{\beta}& \cdots\ar@{}[r]|-*[@]{=}&0\ar[u]\ar@{}[r]|-*[@]{=}&0\ar[u]\ar@{}[r]|-*[@]{\subset}&R\ar[u]_{\id}\ar@{}[r]|-*[@]{=}&\cdots}
\end{equation*}
Note that $\beta$ has trivial kernel and cokernel but is not an isomorphism, witnessing the fact that the category of filtered modules is \emph{not} abelian. We will give two proofs of \cref{main-thm}, the first of which relies on ``abelianizing'' the category. It is observed in \cite{Schneiders:quasi-abelian} that the derived category of filtered modules is canonically identified with the derived category of graded $R[\beta]$-modules. And the tt-geometry of graded modules has been studied in \cite{ambrogio-stevenson:graded-ring,ambrogio-stevenson:tt-comparison-2-rings}. Together these two results provide a short proof of \cref{main-thm}, but in view of future studies of filtered objects in more general abelian tensor categories we thought it worthwhile to study filtered modules in more detail and in their own right. For the second proof we will use the abelianization only minimally, namely to construct the category of perfect complexes of filtered modules (\cref{sec:fmod,sec:fmod-derived}). The computation of the tt-geometry stays within the realm of filtered modules, as we now proceed to explain.
As mentioned above, forgetting the filtration and taking the associated graded of a filtered $R$-module give rise to two tt-functors. It is not difficult to show that $\spec(\pi)$ and $\spec(\gr)$ are injective with disjoint images~(\cref{sec:main-thm}). The challenge is in proving that they are jointly surjective; more precisely, that the images of $\spec(\pi)$ and $\spec(\gr)$ are exactly the two copies of $\spec(R)$ in the picture above. As this suggests, and as we will prove, inverting $\beta$ (in a categorical sense) amounts to passing from filtered to unfiltered $R$-modules, while killing $\beta$ amounts to passing to the associated graded.
We prove surjectivity first for $R$ a noetherian ring, by reducing to the local case using some general results we establish on central localization (\cref{sec:localization}), extending the discussion in~\cite{balmer:sss}. In the local noetherian case, the maximal ideal is ``generated by non-zerodivisors and nilpotent elements'' (more precisely, it admits a system of parameters); we will study how killing such elements affects the tt-spectrum (\cref{sec:reduction}), which allows us to decrease the Krull dimension of $R$ step by step until we reach the case of a field.
Although the category of filtered modules is not abelian, it has the structure of a \emph{quasi}-abelian category, and we will use the results of Schneiders on the derived category of a quasi-abelian category, in particular the existence of two t-structures~\cite{Schneiders:quasi-abelian}, to deal with the case of a field~(\cref{sec:field}). In fact, the category of filtered vector spaces can reasonably be called a \emph{semisimple} quasi-abelian category, and we will prove in general that the t-structures in that case are hereditary. With this fact it is then possible to deduce the theorem in the case of a field.
Finally, we will reduce the case of arbitrary rings to that of noetherian rings (\cref{sec:continuity}) by proving in general that tt-spectra are \emph{continuous}, that is, for filtered colimits of tt-categories one has a canonical homeomorphism
\begin{equation*}
\spec(\varinjlim_{i}\T_{i})\xrightarrow{\sim}\varprojlim_{i}\spec(\T_{i}).
\end{equation*}
In fact, we will prove a more general statement which we believe will be useful in other studies of tt-geometry as well, because it often allows one to reduce the tt-geometry of ``infinite objects'' to the tt-geometry of ``finite objects''. For example, it shows immediately that the noetherianity assumption in the results of~\cite{ambrogio-stevenson:graded-ring} is superfluous, arguably simplifying the proof given for this observation in~\cite{ambrogio-stevenson:tt-comparison-2-rings}.
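For instance, writing an arbitrary ring $R$ as the filtered union of its finitely generated (hence noetherian) subrings $R_{i}$, one has $\per(R)=\varinjlim_{i}\per(R_{i})$, and continuity gives
\begin{equation*}
\spc(\per(R))\xrightarrow{\sim}\varprojlim_{i}\spc(\per(R_{i}))=\varprojlim_{i}\spc(R_{i})=\spc(R),
\end{equation*}
recovering the Hopkins-Neeman-Thomason classification for general rings from the noetherian case.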
We mentioned above that one of our motivations for studying the questions discussed in this note lies in the theory of motives. Let us therefore give the following application. We are able to describe completely the spectrum of the triangulated category of Tate motives over the algebraic numbers with integer coefficients,~$\dtm(\overline{\Q},\Z)$. (Previously, only the rational part $\dtm(\overline{\Q},\Q)$ was known.)
\begin{thm*}
The tt-spectrum of $\dtm(\overline{\Q},\Z)$ consists of the following primes, with specialization relations as indicated by the lines going upward.
\begin{center}
\vspace{0.2cm} \scalebox{0.75}{\input{spec}} \vspace{0.1cm}
\end{center}
Here, $\ell$ runs through all prime numbers, and the primes are defined by the vanishing of the cohomology theories as indicated on the right. Moreover, the proper closed subsets are precisely the finite subsets stable under specialization.
\end{thm*}
As a consequence, we are able to classify the thick tensor ideals of~$\dtm(\overline{\Q},\Z)$. This Theorem and related results are proved in a separate paper~\cite{gallauer:tt-dtm-algclosed}.
\section*{Acknowledgment}
I would like to thank Paul Balmer for his interest in this note, and his critical input on an earlier version.
\section*{Conventions}
\label{sec:conventions}
A symmetric, unital monoidal structure on a category is called a \emph{tensor structure} if the category is additive and the monoidal product is additive in each variable separately. We also call these data simply a \emph{tensor category}. A \emph{tensor functor} between tensor categories is a unital symmetric monoidal additive functor.
Our conventions regarding tensor triangular geometry mostly follow those of~\cite{balmer:sss}. A \emph{tensor triangulated category} (or \emph{tt-category} for short) is a triangulated category with a compatible (symmetric, unital) tensor structure. Typically, one assumes that the category is (essentially) small, and the tensor structure strict. If not specified otherwise, the tensor product is denoted by $\otimes$ and the unit by~$\one$. A \emph{tt-functor} is an exact tensor functor between tt-categories, again usually assumed to preserve the structure on the nose.
A \emph{tt-ideal} in a tt-category $\T$ is a thick subcategory $\I\subset \T$ such that~$\T\otimes \I\subset\I$. If $S$ is a set of objects in $\T$ we denote by $\langle S\rangle$ the tt-ideal generated by~$S$. To a small rigid tt-category $\T$ one associates a locally ringed space $\spec(\T)$, called the \emph{tt-spectrum of $\T$}, whose underlying topological space is denoted by $\spc(\T)$. It consists of \emph{prime ideals} in $\T$, \ie tt-ideals $\I$ such that $a\otimes b\in\I$ implies $a\in\I$ or~$b\in\I$. (The underlying topological space $\spc(\T)$ is defined even if $\T$ is not rigid.)
All rings are commutative with unit, and morphisms of rings are unital. For $R$ a ring, we denote by $\spec(R)$ the Zariski spectrum of $R$ (considered as a locally ringed space) whereas $\spc(R)$ denotes its underlying topological space (as for the tt-spectrum). We adopt similar conventions regarding graded rings $R$: they are commutative in a general graded sense~\cite[3.4]{balmer:sss}, and possess a unit. $\spech(R)$ denotes the homogeneous Zariski spectrum with underlying topological space~$\spch(R)$.
As a general rule, canonical isomorphisms in categories are typically written as equalities.
\section{Category of filtered modules}
\label{sec:fmod}
In this section we describe filtered modules from a slightly nonstandard perspective which will be useful in the sequel, following the treatment in~\cite{schapira-schneiders:filtered}. The idea is to embed the (non-abelian) category of filtered modules into its \emph{abelianization}, the category of presheaves of modules on the poset $\Z$. From this embedding we deduce a number of properties of the category of filtered modules. Much of the discussion in this section applies more generally to filtered objects in suitable abelian tensor categories.
Fix a commutative ring with unit~$R$. Denote by $\Md(R)$ the abelian category of $R$-modules, with its canonical tensor structure. We view $\Z$ as a monoidal category where
\begin{equation*}
\hom(m,n)=
\begin{cases}
\{\ast\}&:m\leq n\\
\emptyset&:m>n
\end{cases}
\end{equation*}
and~$m\otimes n=m+n$. The Day convolution product then induces a tensor structure on the category of presheaves on $\Z$ with values in $\Md(R)$ which we denote by~$\seq R$. Explicitly, an object $a$ of $\seq{R}$ is an infinite sequence of morphisms in $\Md(R)$
\begin{equation}\label{sequence}
\cdots\to a_{n+1}\xrightarrow{a_{n,n+1}} a_{n}\xrightarrow{a_{n-1,n}} a_{n-1}\to\cdots,
\end{equation}
and the tensor product of two such objects $a$ and $b$ is described by
\begin{equation*}
(a\seqtimes b)_{n}=\colim_{p+q\geq n}a_{p}\otimes_{R} b_{q}.
\end{equation*}
Let $M$ be an $R$-module and~$n\in\Z$. The associated represented presheaf $\oplus_{\hom_{\Z}(-,n)}M$ is denoted by~$M(n)$. It is the object
\begin{equation*}
\cdots\to 0\to 0\to M\xrightarrow{\id}M\xrightarrow{\id} M\to\cdots
\end{equation*}
with the first $M$ in degree~$n$. Via the association $\sigma_{0}:M\mapsto M(0)$ we view $\Md(R)$ as a full subcategory of~$\seq{R}$. For any object $a\in\seq{R}$ and $n\in\Z$ we denote by $a(n)$ the tensor product $a\otimes R(n)$, and we call it the $n$th twist of~$a$. Explicitly, this is the sequence of \cref{sequence} shifted to the left by $n$ places, \ie $a(n)_{m}=a_{m-n}$.
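As a consistency check on the convolution formula, note that $R(m)_{p}=R$ precisely when $p\leq m$, so that for $m,n,k\in\Z$
\begin{equation*}
(R(m)\seqtimes R(n))_{k}=\colim_{p+q\geq k}R(m)_{p}\otimes_{R}R(n)_{q}=
\begin{cases}
R&:k\leq m+n,\\
0&:k>m+n,
\end{cases}
\end{equation*}
that is, $R(m)\seqtimes R(n)=R(m+n)$, in accordance with $m\otimes n=m+n$ in~$\Z$. In particular, twisting is compatible with addition of degrees: $a(m)(n)=a(m+n)$.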
The category $\seq{R}$ is $R$-linear Grothendieck abelian, and the monoidal structure is closed. Explicitly, the internal hom of $a,b\in\seq{R}$ is given by
\begin{equation*}
\ihom(a,b)_{n}=\hom_{\seq{R}}(a(n),b).
\end{equation*}
\begin{dfn}
\begin{enumerate}
\item A \emph{filtered $R$-module} is an object $a\in \seq{R}$ such that $a_{n,n+1}$ is a monomorphism for all~$n\in\Z$. The full subcategory of filtered $R$-modules in $\seq{R}$ is denoted by~$\FMd(R)$.
\item A \emph{finitely filtered $R$-module} is a filtered $R$-module $a$ such that $a_{n,n+1}$ is an isomorphism for almost all~$n$.
\item A filtered $R$-module $a$ is \emph{separated} if~$\cap_{n\in\Z}a_{n}=0$.
\end{enumerate}
\end{dfn}
For a filtered $R$-module $a$ we denote the ``underlying'' $R$-module~$\varinjlim_{n\to -\infty}a_{n}$ by $\pi(a)$. This clearly defines a functor $\pi:\FMd(R)\to\Md(R)$ which ``forgets the filtration''. In this way we recover the more classical perspective on filtrations: an $R$-module $\pi(a)$ together with a (decreasing, exhaustive) filtration $(a_{n})_{n\in\Z}$; a morphism $f:a\to b$ of filtered $R$-modules $a,b$ is an $R$-linear morphism $\pi(a)\to \pi(b)$ compatible with the filtration.
To a filtered $R$-module $a$ one can associate its ($\Z$-)graded $R$-module whose $n$th graded piece is~$\coker(a_{n,n+1})=a_{n}/a_{n+1}$. This clearly defines a functor~$\gr_{\bullet}:\FMd(R)\to\GMd(R)=\prod_{n\in\Z}\Md(R)$.
The following observation is simple but very useful.
\begin{lem}[{\cite[3.5]{schapira-schneiders:filtered}}]\label{fmod-reflective}
The inclusion $\iota:\FMd(R)\to\seq{R}$ admits a left adjoint $\kappa:\seq{R}\to\FMd(R)$ given by
\begin{equation*}
\kappa(a)_{n}=\img(a_{n}\to\varinjlim_{m\to -\infty}a_{m})
\end{equation*}
and the canonical transition maps.
\end{lem}
It follows from \cref{fmod-reflective} that $\FMd(R)$ is complete and cocomplete. Limits, filtered colimits and direct sums are computed in $\seq{R}$ while pushouts are computed by applying the reflector $\kappa$ to the pushout in~$\seq{R}$. (The statement about limits and pushouts is formal, while the rest stems from the fact that filtered colimits and direct sums are exact in $\Md(R)$.) In particular, $\FMd(R)$ is additive and has kernels and cokernels. However, it is not an abelian category as witnessed by the morphism
\begin{equation}
\label{beta}
\beta:R(0)\to R(1)
\end{equation}
induced by the map $0\to 1$ in $\Z$ through the Yoneda embedding: both $\ker(\beta)$ and $\coker(\beta)$ are 0 but $\beta$ is not an isomorphism. It is an example of a \emph{non-strict} morphism. (A morphism $f:a\to b$ is called \emph{strict} if the canonical morphism $\coimg(f)\to\img(f)$ is an isomorphism, or equivalently if $\img(\pi(f))\cap b_{n}=\img(f_{n})$ for all~$n\in\Z$.) However, one can easily check that strict monomorphisms and strict epimorphisms in $\FMd(R)$ are preserved by pushouts and pullbacks, respectively~\cite[3.9]{schapira-schneiders:filtered}. In other words, $\FMd(R)$ is a \emph{quasi-}abelian category (we will use~\cite{Schneiders:quasi-abelian} as a reference for the basic theory of quasi-abelian categories).
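It is instructive to record how $\beta$ behaves under the functors $\pi$ and $\gr_{\bullet}$: clearly $\pi(\beta)=\id_{R}$, while
\begin{equation*}
\gr_{\bullet}(\beta)=0,
\end{equation*}
since $\gr_{\bullet}(R(0))$ and $\gr_{\bullet}(R(1))$ are concentrated in degrees 0 and 1, respectively. This already suggests that inverting $\beta$ should forget the filtration while killing $\beta$ should retain only the associated graded, as will be made precise later.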
An object $a$ in a quasi-abelian category is called \emph{projective} if $\hom(a,-)$ takes strict epimorphisms to surjections. (Note that this convention differs from the categorical notion of a projective object!) For example, for a projective $R$-module $M$ and $n\in\Z$ the object $M(n)$ is projective since~$\hom_{\FMd(R)}(M(n),-)= \hom_{R}(M,(-)_{n})$.
\begin{lem}[{\cite[3.1.8]{Schneiders:quasi-abelian}}]\label{fmod-enough-projectives}
For any $a\in\FMd(R)$, the canonical morphism
\begin{equation}
\oplus_{n\in\Z}\oplus_{x\in a_{n}}R(n)\to a
\end{equation}
is a strict epimorphism with projective domain. In particular, the quasi-abelian category $\FMd(R)$ has enough projectives.
\end{lem}
Let us denote by $\sigma:\GMd(R)\to\FMd(R)$ the canonical functor which takes $(M_{n})_{n}$ to $\oplus_{n}M_{n}(n)$. A filtered $R$-module is called \emph{split} if it lies in the essential image of $\sigma$. Correspondingly we call a filtered $R$-module \emph{split free} (respectively, \emph{split projective}, \emph{split finite projective}) if it is (isomorphic to) the image of a free (respectively, projective, finite projective) graded $R$-module under $\sigma$; in other words, if it is of the form $\oplus_{n}M_{n}(n)$ with $\oplus_{n}M_{n}$ free (respectively, projective, finite projective). \cref{fmod-enough-projectives} shows that every object in $\FMd(R)$ admits a canonical split free resolution.
It is clear that split projective objects are projective, and the converse is also true as we now prove.
\begin{lem}\label{fmod-projectives}
For a filtered $R$-module $a\in \FMd(R)$ the following are equivalent:
\begin{enumerate}
\item $a$ is projective.
\item $a$ is split projective.
\end{enumerate}
\end{lem}
\begin{proof}
Let $a$ be projective. As remarked in \cref{fmod-enough-projectives}, there is a canonical strict epimorphism $b\to a$ with $b$ split free. By definition of projectivity, there is a section $a\to b$, and since $\FMd(R)$ has kernels and images, we deduce that $a$ is a direct summand of $b$. It therefore suffices to prove that every direct summand of a split free is split projective. This follows from \cref{split-projective-idempotent} below.
\end{proof}
\begin{cor}\label{split-projective-idempotent}
The full additive subcategory $\FP(R)$ of split projectives is idempotent complete. The same is true for the full additive subcategory $\fmd(R)$ of split finite projectives.
\end{cor}
\begin{proof}
Let $f:a\xrightarrow{\sim} b\oplus c$ be an isomorphism, with $a$ split projective. Since $a$ is split, there is a canonical isomorphism $g:a\xrightarrow{\sim}\oplus_{n}\gr_{n}(a)(n)$, and we can define the following composition of isomorphisms:
\begin{equation*}
b\oplus c\xrightarrow[\sim]{f^{-1}}a\xrightarrow[\sim]{g}\oplus_{n}\gr_{n}(a)(n)
\xrightarrow[\sim]{\oplus_{n}\gr_{n}(f)(n)}\oplus_{n}\gr_{n}(b\oplus c)(n)=
\left(
\oplus_{n}\gr_{n}(b)(n)
\right)\oplus
\left(
\oplus_{n}\gr_{n}(c)(n)
\right).
\end{equation*}
It is easy to see that this induces an isomorphism $b\cong\oplus_{n}\gr_{n}(b)(n)$, and we also see that $\gr_{n}(b)$ is a direct summand of $\gr_{n}(a)$. In other words, $b$ is split projective as required. The same proof applies in the finite case.
\end{proof}
In general, since the tensor product in $\Md(R)$ need not be exact, the tensor structure on $\seq{R}$ does not restrict to the subcategory~$\FMd(R)$. We can use the reflector $\kappa$ to rectify this: for $a,b\in\FMd(R)$, let
\begin{equation*}
a\otimes b=\kappa(\iota(a)\seqtimes\iota(b)).
\end{equation*}
This defines a tensor structure on $\FMd(R)$. It is clear that the internal hom on $\seq{R}$ restricts to a bifunctor on $\FMd(R)$, and it follows formally from \cref{fmod-reflective} that this bifunctor is the internal hom on~$\FMd(R)$.
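A concrete instance of this failure: take $R=\Z$, let $a$ be the filtered module with $\pi(a)=\Z$, $a_{n}=\Z$ for $n\leq 0$ and $a_{n}=2\Z$ for $n\geq 1$, and let $b=(\Z/2)(0)$. In degrees 1 and 0 the convolution product gives
\begin{equation*}
(\iota(a)\seqtimes\iota(b))_{1}=2\Z\otimes_{\Z}\Z/2
\xrightarrow{\;0\;}
\Z\otimes_{\Z}\Z/2=(\iota(a)\seqtimes\iota(b))_{0},
\end{equation*}
and the transition map is zero (the inclusion $2\Z\subset\Z$ becomes the zero map after applying $-\otimes_{\Z}\Z/2$), in particular not a monomorphism. The reflector $\kappa$ corrects this by replacing the degree~1 piece with its image in $\pi(a)\otimes_{\Z}\Z/2$, which is~$0$.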
Although we will in the sequel only use the implication (1)$\Rightarrow$(2) of the following result, it is satisfying to see these notions match up as they do in $\Md(R)$. Recall that an object $a$ in a category with filtered colimits is called \emph{compact} if $\hom(a,-)$ commutes with filtered colimits.
\begin{lem}\label{fmod-finite-projectives}
For a filtered $R$-module $a\in\FMd(R)$ the following are equivalent:
\begin{enumerate}
\item $a$ is split finite projective.
\item $a$ is rigid (or strongly dualizable).
\item $a$ is compact and projective.
\end{enumerate}
\end{lem}
\begin{proof}
Since $\sigma:\GMd(R)\to\FMd(R)$ is a tensor functor it preserves rigid objects. This shows the implication (1)$\Rightarrow$(2).
For (2)$\Rightarrow$(3) notice that the unit $R(0)$ is both compact and projective. The latter is clear, and the former is true as filtered colimits are computed in $\seq{R}$. The implication is now obtained from the identification
\begin{equation*}
\hom(a,-)=\hom(R(0),\ihom(a,R(0))\otimes -)
\end{equation*}
together with the fact that the tensor product preserves filtered colimits and strict epimorphisms.
Finally for (3)$\Rightarrow$(1), we start with the identification $a=\oplus_{n}\gr_{n}(a)(n)$ with $\gr_{n}(a)$ projective $R$-modules, which exists by \cref{fmod-projectives}. Notice that the forgetful functor $\pi:\FMd(R)\to\Md(R)$ has a right adjoint $\Delta:\Md(R)\to\FMd(R)$ which takes an $R$-module to the same $R$-module with the constant filtration. It is clear that $\Delta$ commutes with filtered colimits so that
\begin{equation*}
\hom(\pi(a),\varinjlim -)=\hom(a,\varinjlim\Delta-)=\varinjlim\hom(a,\Delta -)=\varinjlim\hom(\pi(a),-)
\end{equation*}
and hence $\pi(a)$ is compact, \ie a finitely presented $R$-module. We conclude that $a=\oplus\gr_{n}(a)(n)$ is split finite projective.
\end{proof}
\begin{cor}\label{fmod-projectives-tensor}
\begin{enumerate}
\item If $a\in\FMd(R)$ is projective then $a\otimes-$ preserves kernels of arbitrary morphisms.
\item If $a,b\in\FMd(R)$ are projective then so is~$a\otimes b$.
\end{enumerate}
\end{cor}
\begin{proof}
Since the tensor product commutes with direct sums both statements follow from \cref{fmod-projectives}.
\end{proof}
\section{Derived category of filtered modules}
\label{sec:fmod-derived}
Quasi-abelian categories are examples of exact categories and can therefore be derived in the same way. However, the theory for quasi-abelian categories is more precise and we will exploit this fact starting in the current section. In the case of (separated, finitely) filtered $R$-modules we obtain what is classically known as the filtered derived category of~$R$. Some of its basic properties are established, a number of which are deduced from the relation with the derived category of $\seq{R}$.
For $\ast\in\{b,-,+,\emptyset\}$ we denote by $\C{\FMd(R)}^{\ast}$ the category of bounded (respectively bounded above, bounded below, unbounded) cochain complexes in $\FMd(R)$, and by $\K{\FMd(R)}^{\ast}$ the associated homotopy category. A complex
\begin{equation*}
A:\quad \cdots\to A^{l-1}\xrightarrow{d^{l-1}}A^{l}\xrightarrow{d^{l}}A^{l+1}\to\cdots
\end{equation*}
is called \emph{strictly exact} if all differentials $d^{l}$ are strict, and the canonical morphism $\img(d^{l-1})\to\ker(d^{l})$ is an isomorphism for all~$l$. We note the following simple but useful fact.
\begin{lem}[{\cite[1]{sjodin:filtered-graded-modules}}]\label{strictly-exact}
Let $A$ be a complex in $\FMd(R)$ and consider the following conditions:
\begin{enumerate}
\item $A$ is strictly exact;
\item all its differentials $d^{l}$ are strict and the underlying complex $\pi(A)$ is exact;
\item the associated graded complex $\gr_{\bullet}(A)$ is exact, \ie $\gr_{n}(A)$ is an exact complex for all~$n\in\Z$.
\end{enumerate}
We have (1)$\Leftrightarrow$(2)$\Rightarrow$(3), and if $A^{l}$ is finitely filtered and separated for all $l\in\Z$ then all conditions are equivalent.
\end{lem}
The class of strictly exact complexes forms a saturated null system $\Kac^{\ast}$~\cite[1.2.15]{Schneiders:quasi-abelian} and we set~$\D{\FMd(R)}^{\ast}=\K{\FMd(R)}^{\ast}/\Kac^{\ast}$. The canonical triangulated structure on $\K{\FMd(R)}^{\ast}$ induces a triangulated structure on~$\D{\FMd(R)}^{\ast}$. As follows from \cref{strictly-exact}, this definition is an extension of the classical ``filtered derived category'' considered in~\cite{illusie:cotangent-complex-I}. There, complexes are assumed to be (uniformly) finitely filtered separated, and the localization is with respect to filtered quasi-isomorphisms, \ie morphisms $f:A\to B$ of complexes such that $\gr_{n}(f)$ is a quasi-isomorphism of complexes of $R$-modules, for all~$n\in \Z$.
The functor $\iota:\FMd(R)\to\seq{R}$ clearly preserves strictly exact complexes (we say that $\iota$ is \emph{strictly exact}), hence it derives trivially to an exact functor of triangulated categories~$\iota:\D{\FMd(R)}^{\ast}\to\D{\seq{R}}^{\ast}$.
\begin{pro}[{\cite[3.16]{schapira-schneiders:filtered}}]\label{fmod-seq-equivalence}
The functor $\iota:\D{\FMd(R)}^{\ast}\to\D{\seq{R}}^{\ast}$ is an equivalence of categories. Its quasi-inverse is given by the left derived functor of~$\kappa$.
\end{pro}
Explicitly, $\dL\kappa$ may be computed using the ``Rees functor'' $\lambda:\seq{R}\to\FMd(R)$ which takes $a\in\seq{R}$ to the filtered $R$-module $\lambda(a)$ with
\begin{equation*}
\lambda(a)_{n}=\oplus_{m\geq n}a_{m}
\end{equation*}
and the obvious inclusions as transition maps~\cite[3.12]{schapira-schneiders:filtered}. It comes with a canonical epimorphism $\varepsilon_{a}:\iota\lambda(a)\to a$ and since $\FMd(R)$ is closed under subobjects in $\seq{R}$, objects in $\seq{R}$ admit an additively functorial two-term resolution by objects in~$\FMd(R)$. Thus a complex $A$ in $\seq{R}$ is replaced by the cone of $\ker(\varepsilon_{A})\to \iota\lambda(A)$ which is a complex in $\FMd(R)$ and computes~$\dL\kappa(A)$.
The tensor product $\seqtimes$ on $\seq{R}$ can be left derived and yields
\begin{equation*}
\seqtimes^{\dL}:\D{\seq{R}}^{\ast}\times\D{\seq{R}}^{\ast}\to\D{\seq{R}}^{\ast}
\end{equation*}
for~$\ast\in\{-,\emptyset\}$. This follows for example from~\cite[2.3]{cisinski-deglise:homalg-grothendieck-cats} (where the descent structure is given by~$({\mathcal G}=\{R(n)\mid n\in\Z\}, {\mathcal H}=\{0\})$).
\begin{lem}\label{fmod-seq-equivalence-tensor}
The tensor product on $\FMd(R)$ induces a left-derived tensor product
\begin{equation*}
\otimes^{\dL}:\D{\FMd(R)}^{\ast}\times\D{\FMd(R)}^{\ast}\to\D{\FMd(R)}^{\ast}
\end{equation*}
where~$\ast\in\{-,\emptyset\}$. Moreover, the equivalence of \cref{fmod-seq-equivalence} is compatible with the derived tensor products.
\end{lem}
\begin{proof}
Recall that the tensor product was defined as~$\kappa\circ\seqtimes\circ(\iota\times\iota)$. Therefore the left-derived tensor product is given by
\begin{equation*}
\otimes^{\dL}=\dL\kappa\circ\seqtimes^{\dL}\circ(\iota\times\iota).
\end{equation*}
The second statement is then clear.
\end{proof}
\begin{cor}\label{fmod-der-compact}
The triangulated category $\D{\FMd(R)}$ is compactly generated. For an object $A\in\D{\FMd(R)}$ the following are equivalent:
\begin{enumerate}
\item $A$ is compact.
\item $A$ is rigid.
\item $A$ is isomorphic to a bounded complex of split finite projectives.
\end{enumerate}
\end{cor}
\begin{proof}
It is easy to see \cite[3.20]{choudhury-gallauer:dg-hty} that the set $\{R(n)\mid n\in\Z\}$ compactly generates $\D{\seq{R}}$. The first statement therefore follows from \cref{fmod-seq-equivalence}. As is true in general \cite[2.2]{neeman:localizing-smashing}, the compact objects span precisely the thick subcategory generated by these generators $R(n)$. From this we see immediately that (3) implies (1). The converse implication follows from \cref{fper-homotopy-derived} below.
That (3) implies (2) is easy to see, using \cref{fmod-finite-projectives}. Finally that (2) implies (1) follows formally from the tensor unit being compact (\cf the proof of \cref{fmod-finite-projectives}).
\end{proof}
We denote by $\fper(R)$ the full subcategory of compact objects in~$\D{\FMd(R)}$. Its objects are also called \emph{perfect filtered complexes}. Note that this is an idempotent complete, rigid tt-category. We denote the tensor product on $\fper(R)$ simply by~$\otimes$. Recall that $\fmd(R)$ denotes the additive category of split finite projective filtered $R$-modules.
\begin{cor}\label{fper-homotopy-derived}
The canonical functor $\K{\fmd(R)}^{b}\to \D{\FMd(R)}$ induces an equivalence of tt-categories
\begin{equation*}
\K{\fmd(R)}^{b}\xrightarrow{\sim}\fper(R).
\end{equation*}
\end{cor}
\begin{proof}
The fact that the image of the functor is contained in $\fper(R)$ was proved in \cref{fmod-der-compact}. It therefore makes sense to consider the following square of canonical exact functors
\begin{equation*}
\xymatrix{\K{\fmd(R)}^{b}\ar[r]\ar[d]&\fper(R)\ar[d]\\
\K{\FP(R)}^{-}\ar[r]&\D{\FMd(R)}^{-}}
\end{equation*}
The vertical arrows are the inclusions of full subcategories. (For the right vertical arrow this follows from \cite[11.7]{keller:derived-categories}.) Moreover, the bottom horizontal arrow is an equivalence, by \cite[1.3.22]{Schneiders:quasi-abelian} together with \cref{fmod-enough-projectives}. We conclude that the top horizontal arrow is fully faithful as well.
Next, we notice that since $\fmd(R)$ is idempotent complete by \cref{split-projective-idempotent}, the same is true of its bounded homotopy category \cite[2.8]{Balmer-Schlichting}. It follows that the image of the top horizontal arrow is a thick subcategory containing $R(n)$, $n\in\Z$. As remarked in the proof of \cref{fmod-der-compact}, this implies essential surjectivity.
As tensoring with a split finite projective is strictly exact, by \cref{fmod-projectives-tensor}, the same is true for objects in $\K{\fmd(R)}^{b}$. It is then clear that the equivalence just established preserves the tensor product.
\end{proof}
For future reference we record the following simple fact.
\begin{lem}
Let $\mathcal{J}\subset\fper(R)$ be a thick subcategory. Then the following are equivalent.
\begin{enumerate}
\item ${\mathcal J}$ is a tt-ideal.
\item ${\mathcal J}$ is closed under $R(n)\otimes -$,~$n\in\Z$.
\end{enumerate}
\end{lem}
\begin{proof}
As remarked in the proof of \cref{fmod-der-compact}, the category $\fper(R)$ of perfect filtered complexes is generated as a thick subcategory by the objects $R(n)$,~$n\in\Z$. Thus (2) implies~(1):
\begin{equation*}
{\mathcal J}\otimes\fper(R)={\mathcal J}\otimes\langle R(n)\mid n\in\Z\rangle^{\mathrm{thick}}\subset{\mathcal J}.
\end{equation*}
The converse is trivial.
\end{proof}
Let us discuss the derived analogues of the functors $\pi$ and $\gr_{\bullet}$ introduced earlier.
\begin{lem}\label{pi-derived}
The functor $\pi:\FMd(R)\to \Md(R)$ is strictly exact and derives trivially to a tt-functor $\pi:\D{\FMd(R)}\to\D{R}$. The latter preserves compact objects and restricts to a tt-functor
\begin{equation*}
\pi:\fper(R)\to\per(R),
\end{equation*}
where $\per(R)$ denotes the category of perfect complexes over $R$, \ie the compact objects in~$\D{R}$.
\end{lem}
\begin{proof}
The first statement follows from \cref{strictly-exact}. The functor $\pi$ being tensor, it preserves rigid objects and the second statement follows from \cref{fmod-der-compact}.
\end{proof}
\begin{lem}\label{gr-derived}
The functor $\gr_{\bullet}:\FMd(R)\to\GMd(R)$ is strictly exact and derives trivially to an exact functor $\gr_{\bullet}:\D{\FMd(R)}\to\D{\GMd(R)}$. The latter preserves compact objects and induces a conservative tt-functor
\begin{equation*}
\gr:=\oplus\gr_{\bullet}:\fper(R)\to\per(R).
\end{equation*}
\end{lem}
\begin{proof}
That $\gr_{\bullet}$ is strictly exact is \cref{strictly-exact}. It follows that $\gr_{\bullet}$ derives trivially to give an exact functor $\gr_{\bullet}:\D{\FMd(R)}\to\D{\GMd(R)}$. For each $n$, $\gr_{n}$ clearly sends perfect filtered complexes to perfect complexes, \ie $\gr_{\bullet}$ preserves compact objects (by \cref{fmod-der-compact}).
The functor $\oplus:\GMd(R)\to\Md(R)$ is strictly exact (in fact, it preserves arbitrary kernels and cokernels) and hence derives trivially as well to give a tt-functor which preserves compact objects.
There is a canonical natural transformation (on the underived level) $\gr_{\bullet}\otimes\gr_{\bullet}\to\gr_{\bullet}\circ\otimes$ endowing $\gr_{\bullet}$ with the structure of a strong unital lax monoidal functor~\cite[3]{sjodin:filtered-graded-modules}. This natural transformation is easily seen to be an isomorphism for split finite projective filtered $R$-modules~\cite[12]{sjodin:filtered-graded-modules}. It follows that $\gr:\K{\fmd(R)}^{b}\to\K{\md(R)}^{b}$ is a tt-functor ($\md(R)$ is the category of finitely generated projective $R$-modules). Conservativity of this functor follows from \cref{strictly-exact}.
\end{proof}
Finally, notice that viewing $\Md(R)$ as a tensor subcategory of $\FMd(R)$ induces a section
\begin{equation*}
\sigma_{0}:\per(R)\to\fper(R)
\end{equation*}
to both $\gr$ and~$\pi$.
\section{Main result}
\label{sec:main-thm}
The set of endomorphisms of the unit in a tt-category $\T$ is a (unital commutative) ring $\R_{\T}$, called the central ring of~$\T$. There is a canonical morphism of locally ringed spaces
\begin{equation*}
\rho_{\T}:\spec(\T)\to\spec(\R_{\T})
\end{equation*}
comparing the tt-spectrum of $\T$ with the Zariski spectrum of its central ring, as explained in~\cite{balmer:sss}.
There is also a graded version of this construction. Given an invertible object $u\in\T$, it makes sense to consider the graded central ring with respect to $u$~(\cite[3.2]{balmer:sss}, see also \cref{sec:localization} for further discussion):
\begin{equation*}
{\mathcal R}^{\bullet}_{\T,u}:=\hom_{\T}(\one,u^{\otimes\bullet}),\qquad \bullet\in\Z.
\end{equation*}
This is a unital $\epsilon$-commutative graded ring~\cite[3.3]{balmer:sss} and we can therefore consider its homogeneous spectrum. There is again a canonical morphism of locally ringed spaces
\begin{equation*}
\rho^{\bullet}_{\T,u}:\spec(\T)\to\spech({\mathcal R}^{\bullet}_{\T,u}).
\end{equation*}
The inclusion $\R_{\T}\to \R_{\T,u}^{\bullet}$ as the degree 0 part provides a factorization~$\rho_{\T}=(\R_{\T}\cap -)\circ\rho_{\T,u}^{\bullet}$.
Let us specialize to~$\T=\fper(R)$. The object $R(1)\in\fper(R)$ is clearly invertible and we define $\R_{R}^{\bullet}:=\R_{\fper(R),R(1)}^{\bullet}$, so that
\begin{equation*}
{\mathcal R}^{\bullet}_{R}=\hom_{\fper(R)}(R(0),R(\bullet)),\qquad \bullet\in\Z.
\end{equation*}
Also,~$\rho^{\bullet}_{R}:=\rho_{\fper(R),R(1)}^{\bullet}$.
We are now in a position to state our main result.
\begin{thm}\label{main-thm}
\mbox{}
\begin{enumerate}
\item The graded central ring ${\mathcal R}^{\bullet}_{R}$ is canonically isomorphic to the polynomial ring $R[\beta]$ where $\beta:R(0)\to R(1)$ as in \cref{beta} has degree 1.
\item The morphism
\begin{equation*}
\rho^{\bullet}_{R}:\spec(\fper(R))\to\spech(R[\beta])
\end{equation*}
is an isomorphism of locally ringed spaces.
\end{enumerate}
\end{thm}
The first part is immediate: by \cref{fper-homotopy-derived}, morphisms $R(0)\to R(n)$ in $\fper(R)$ may be computed in the homotopy category into which $\fmd(R)$ embeds fully faithfully. Using the Yoneda embedding we therefore find
\begin{align*}
\hom_{\fper(R)}(R(0),R(n))&=\hom_{\K{\fmd(R)}^{\mathrm{b}}}(R(0),R(n))\\
&=\hom_{\fmd(R)}(R(0),R(n))\\
&=\oplus_{\hom_{\Z}(0,n)}R\\
&=
\begin{cases}
R\cdot \{0\to n\}&:n\geq 0\\
0&:n<0
\end{cases}
\end{align*}
and under this identification, $\{0\to n\}$ corresponds to~$\beta^{n}$.
In the remainder of this section we outline two proofs of the second part of \cref{main-thm}, and deduce the classification of tt-ideals in $\fper(R)$ in \cref{classification-tt-ideals}. The subsequent sections will provide the missing details.
\begin{proof}[First proof of \cref{main-thm}.(2).]
Let $a\in\seq{R}$ be a presheaf of $R$-modules. Associate to it the graded $R[\beta]$-module $\oplus_{n\in\Z}a_{n}$ with $\beta$ acting, as it should, by $\beta:a\to a(1)$, \ie in degree $n$ by $a_{n-1,n}:a_{n}\to a_{n-1}$. In particular, $\beta$ is assumed to have degree $-1$. Conversely, given a graded $R[\beta]$-module $\oplus_{n\in\Z}M_{n}$, define a presheaf by $n\mapsto M_{n}$ and transition maps $\cdot\beta:M_{n}\to M_{n-1}$. This clearly establishes an isomorphism of categories $\seq{R}=\GMd(R[\beta])$, and it is not difficult to see that the isomorphism is compatible with the tensor structures on both sides.
It is proven in [\citealp[5.1]{ambrogio-stevenson:graded-ring} ($R$ noetherian); \cite[4.7]{ambrogio-stevenson:tt-comparison-2-rings} ($R$ general)] that the comparison map
\begin{equation*}
\rho^{\bullet}:\spc(\D{R[\beta]}^{\mathrm{perf}}_{\scriptscriptstyle\mathrm{gr}})\to\spch(R[\beta])
\end{equation*}
is a homeomorphism, where $\D{R[\beta]}^{\mathrm{perf}}_{\scriptscriptstyle\mathrm{gr}}$ denotes the thick subcategory of compact objects in $\D{\GMd(R[\beta])}$. It then follows from \cref{fmod-seq-equivalence} and \cref{fmod-seq-equivalence-tensor} that the same is true for $\rho^{\bullet}_{R}:\spc(\fper(R))\to\spch(R[\beta])$. By~\cite[6.11]{balmer:sss}, the morphism of locally ringed spaces $\rho^{\bullet}_{R}$ is then automatically an isomorphism.
\end{proof}
For the second proof of \cref{main-thm}.(2) we proceed as follows. By~\cite[6.11]{balmer:sss}, it suffices to show that
\begin{equation*}
\rho^{\bullet}_{R}:\spc(\fper(R))\to\spch({\mathcal R}^{\bullet}_{R})
\end{equation*}
is a homeomorphism on the underlying topological spaces.
Consider the invertible object $R\in\per(R)$ and the associated graded central ring $R[t,t^{-1}]$ where $t=\id:R\to R$ has degree~1. The morphisms of graded $R$-algebras induced by $\gr$ and $\pi$ respectively are given by
\begin{align}\label{gr-pi-ring-morphisms}
\xymatrix@R=0em{R[\beta]\ar[r]^-{\gr}& R[t,t^{-1}]&&R[\beta]\ar[r]^-{\pi}&R[t,t^{-1}]\\
\beta\ar@{|->}[r]& 0&&\beta\ar@{|->}[r]& t}
\end{align}
Recall (\cref{sec:fmod-derived}) the existence of a section $\sigma_{0}$ to $\gr$ and~$\pi$. We therefore obtain for $\xi\in\{\gr,\pi\}$ commutative diagrams of topological spaces and continuous maps
\begin{equation*}
\xymatrix{\spc(\per(R))\ar[r]^{\spc(\xi)}\ar[d]_{\rho^{\bullet}}\ar@/_4pc/[dd]_{\rho}&\spc(\fper(R))\ar[r]^{\spc(\sigma_{0})}\ar[d]_{\rho^{\bullet}_{R}}&\spc(\per(R))\ar[d]^{\rho^{\bullet}}\ar@/^4pc/[dd]^{\rho}\\
\spch(R[t,t^{-1}])\ar[r]_{\spch(\xi)}\ar[d]_{\sim}&\spch(R[\beta])\ar[r]_{\spch(\sigma_{0})}\ar[d]&\spch(R[t,t^{-1}])\ar[d]^{\sim}\\
\spc(R)\ar[r]^{=}_{\spc(\xi)}&\spc(R)\ar[r]^{=}_{\spc(\sigma_{0})}&\spc(R)}
\end{equation*}
where the outer vertical maps are all homeomorphisms~\cite[8.1]{balmer:sss}, and the composition of the horizontal morphisms in each row is the identity. It follows immediately that both $\spc(\gr)$ and $\spc(\pi)$ are homeomorphisms onto their respective images which are disjoint by \cref{gr-pi-ring-morphisms}. More precisely, we have
\begin{align}\label{image-gr-pi}
\img(\spc(\gr))&\subseteq (\rho_{R}^{\bullet})^{-1}(Z(\beta))=\supp(\cone(\beta)),\\\img(\spc(\pi))&\subseteq (\rho_{R}^{\bullet})^{-1}(U(\beta))=U(\cone(\beta)).\notag
\end{align}
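It may help to make the target of $\rho^{\bullet}_{R}$ explicit. A homogeneous prime of $R[\beta]$ either contains $\beta$ or it does not, and one checks directly that
\begin{equation*}
\spech(R[\beta])=\underbrace{\{\mathfrak{p}[\beta]\mid\mathfrak{p}\in\spec(R)\}}_{U(\beta)}\;\sqcup\;\underbrace{\{\mathfrak{p}+\langle\beta\rangle\mid\mathfrak{p}\in\spec(R)\}}_{Z(\beta)},
\end{equation*}
with each of the two pieces in bijection with $\spec(R)$; compare the picture in the proof of \cref{fper-thomason-subsets} below.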
It now remains to prove two things:
\begin{itemize}
\item $\spc(\gr)$ and $\spc(\pi)$ are jointly surjective.
\item Specializations lift along $\rho^{\bullet}_{R}$.
\end{itemize}
Indeed, since $\rho^{\bullet}_{R}$ is a spectral map between spectral spaces~\cite[5.7]{balmer:sss}, it being a homeomorphism is equivalent to it being bijective and lifting specializations~\cite[8.16]{hochster:spectral}.
The first bullet point is the subject of the subsequent sections. Let us assume it for now and establish the second bullet point. In particular, we now assume that the inclusions in \cref{image-gr-pi} are equalities. Let $\rho_{R}^{\bullet}(\mathfrak{P})\leadsto\rho_{R}^{\bullet}(\mathfrak{Q})$ be a specialization in $\spech(R[\beta])$, \ie~$\rho_{R}^{\bullet}(\mathfrak{P})\subset\rho_{R}^{\bullet}(\mathfrak{Q})$. If $\beta\notin\rho_{R}^{\bullet}(\mathfrak{Q})$ then both primes lie in the image of $\spc(\pi)$ and we already know that~$\mathfrak{P}\leadsto\mathfrak{Q}$. Similarly, if $\beta\in\rho_{R}^{\bullet}(\mathfrak{P})$ then both primes lie in the image of $\spc(\gr)$ and we deduce again that $\mathfrak{P}\leadsto\mathfrak{Q}$. So we may assume~$\beta\in\rho_{R}^{\bullet}(\mathfrak{Q})\backslash\rho_{R}^{\bullet}(\mathfrak{P})$. Define $\mathfrak{r}=\rho_{R}^{\bullet}(\mathfrak{Q})\cap R\in\spec(R)$ and notice that
\begin{equation*}
\rho_{R}^{\bullet}(\mathfrak{P})\subset\mathfrak{r}[\beta]\subset\mathfrak{r}+\langle\beta\rangle=\rho_{R}^{\bullet}(\mathfrak{Q}).
\end{equation*}
Consequently, the preimage of $\mathfrak{r}[\beta]$ under $\rho^{\bullet}_{R}$ is the prime
\begin{equation*}
\mathfrak{R}=\ker\left(\fper(R)\xrightarrow{\pi}\per(R)\to \per(R/\mathfrak{r})\right)
\end{equation*}
which contains the prime
\begin{equation*}
\mathfrak{Q}=\ker\left(\fper(R)\xrightarrow{\gr}\per(R)\to \per(R/\mathfrak{r})\right).
\end{equation*}
We now obtain specialization relations
\begin{equation*}
\mathfrak{P}\leadsto\mathfrak{R}\leadsto\mathfrak{Q}
\end{equation*}
and the proof is complete.
As a consequence of \cref{main-thm} we will classify the tt-ideals in~$\fper(R)$.
\begin{lem}\label{fper-thomason-subsets}
The following two maps set up an order preserving bijection
\begin{align*}
\left\{\Pi\subset\Gamma \mid \Pi, \Gamma\subset\spc(R)\text{ Thomason subsets}
\right\}&\longleftrightarrow
\left\{
\text{Thomason subsets of }\spc(\fper(R))
\right\} \\
(\Pi\subset \Gamma)&\longmapsto\spc(\pi)(\Pi) \sqcup\spc(\gr)(\Gamma)\\
(\spc(\pi)^{-1}(Y)\subset\spc(\gr)^{-1}(Y))&\longmapsfrom Y
\end{align*}
\end{lem}
Here, the order relation on the left is given by $(\Pi\subset \Gamma)\leq (\Pi'\subset \Gamma')$ if $\Pi\subset \Pi'$ and~$\Gamma\subset \Gamma'$.
\begin{proof}
To ease the notation, let us denote in this proof by $p:S\to T$ (respectively, $g:S\to T$) the map $\spc(\pi):\spc(R)\to\spc(\fper(R))$ (respectively~$\spc(\gr)$). It might be helpful to keep the following picture in mind.
\begin{center}
\scalebox{0.8}{
\begin{tikzpicture}
\node [draw, ellipse, minimum width=4cm, minimum height=1.4cm]
(domain) at (-5.5,1.25) {$S=\mathrm{Spc}(R)$}; \node [draw,
ellipse, minimum width=4cm, minimum height=1.4cm] (target-p) at
(0,0) {}; \node[right] at (target-p.0)
{$U(\beta)\approx\mathrm{Spc}(R)$}; \node [draw, ellipse,
minimum width=4cm, minimum height=1.4cm] (target-g) at (0,2.5)
{}; \node[right] at (target-g.0)
{$Z(\beta)\approx\mathrm{Spc}(R)$};
\draw[->,shorten >=10pt,shorten <=10pt] (domain) to
[out=70,in=160] node[midway,above]{$g$} (target-g);
\draw[->,shorten >=10pt,shorten <=10pt] (domain) to
[out=-70,in=-160] node[midway,below]{$p$} (target-p);
\node[label=left:$\mathfrak{p}$] (p) at (-1,0) {$\bullet$};
\node[label={[label
distance=-.35cm]35:$\mathfrak{p}+\langle\beta\rangle$}] (pb)
at (-1,2.5) {$\bullet$}; \draw[-] (p.north) to (pb.south);
\node[label=left:$\mathfrak{q}$] (q) at (1,-.3) {$\bullet$};
\node[label={[label
distance=-.15cm]90:$\mathfrak{q}+\langle\beta\rangle$}] (qb)
at (1,2.2) {$\bullet$}; \draw[-] (q.north) to (qb.south); \draw
[decorate,decoration={brace,amplitude=10pt,mirror,raise=4pt},yshift=0pt]
(4.5,-0.7) -- (4.5,3.2) node [black,midway,xshift=0.8cm] {$T$};
\end{tikzpicture}
}
\end{center}
Thus $g$ and $p$ are spectral maps between spectral spaces, homeomorphisms onto disjoint images which jointly make up all of~$T$. Moreover, the image of $p$ is open, and there is a common retraction $r:T\to S$ to both $g$ and~$p$.
First, the preimages of a Thomason subset $Y\subset T$ under the spectral maps $g$ and $p$ are Thomason. Moreover, every Thomason subset is closed under specializations from which one deduces $p^{-1}(Y)\subset g^{-1}(Y)$. This shows that the map from right to left is well-defined.
Next, given $\Pi\subset \Gamma\subset\spc(R)$ two Thomason subsets we claim that $p(\Pi)\sqcup g(\Gamma)$ is Thomason as well. By assumption, we may write $\Pi=\cup_{i}\Pi_{i}$, $\Gamma=\cup_{j}\Gamma_{j}$ with $\Pi_{i}^{c},\Gamma_{j}^{c}$ quasi-compact open subsets, and hence also
\begin{align*}
\Pi=\Pi\cap \Gamma=(\cup_{i} \Pi_{i})\cap (\cup_{j} \Gamma_{j})=\cup_{i,j}(\Pi_{i}\cap \Gamma_{j})
\end{align*}
with $(\Pi_{i}\cap \Gamma_{j})^{c}=\Pi_{i}^{c}\cup \Gamma_{j}^{c}$ quasi-compact open.
Then
\begin{align*}
p(\Pi)\sqcup g(\Gamma)=(\cup_{i,j} p(\Pi_{i}\cap \Gamma_{j}))\sqcup (\cup_{j} g(\Gamma_{j}))=\cup_{i,j}
\left(
p(\Pi_{i}\cap \Gamma_{j})\sqcup g(\Gamma_{j})
\right)
\end{align*}
and we reduce to the case where $\Pi^{c}$ and $\Gamma^{c}$ are quasi-compact open. But in that case,
\begin{align*}
\left(
p(\Pi)\sqcup g(\Gamma)
\right)^{c}=
\left(
p(\Gamma^{c})\sqcup g(\Gamma^{c})
\right)\cup p(\Pi^{c})=r^{-1}(\Gamma^{c})\cup p(\Pi^{c}).
\end{align*}
Again, $r$ is a spectral map and hence the first set is quasi-compact open. The second one is quasi-compact by assumption, and also open since $p$ is a homeomorphism onto an open subset. This shows that the map from left to right is also well-defined.
It is obvious that the two maps are order preserving and inverses to each other.
\end{proof}
To state the classification result more succinctly, let us make the following definition.
\begin{dfn}
Let $a\in\fper(R)$. For $\xi\in\{\pi,\gr\}$ set
\begin{equation*}
\supp_{\xi}(a):=\{\mathfrak{p}\in\spc(R)\mid \xi(a\otimes \kappa(\mathfrak{p}))\neq 0\in\per(\kappa(\mathfrak{p}))\}.
\end{equation*}
We extend this definition to arbitrary subsets ${\mathcal J}\subset \fper(R)$ by
\begin{equation*}
\supp_{\xi}({\mathcal J}):=\bigcup_{a\in{\mathcal J}}\supp_{\xi}(a).
\end{equation*}
\end{dfn}
\begin{lem}\label{support-basic}
Let~$a\in\fper(R)$. Then:
\begin{enumerate}
\item $\supp_{\gr}(a)=\{\mathfrak{p}\in\spc(R)\mid a\otimes \kappa(\mathfrak{p})\neq 0\in\fper(\kappa(\mathfrak{p}))\}$.
\item $\supp_{\pi}(a)\subset\supp_{\gr}(a)$.
\item $\supp_{\xi}(a)=\supp(\xi(a))$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item The functor $\gr:\fper(\kappa(\mathfrak{p}))\to\per(\kappa(\mathfrak{p}))$ is conservative by \cref{gr-derived}, thus the claim.
\item This follows immediately from the first part.
\item We have
\begin{align*}
\supp_{\xi}(a)&=\{\mathfrak{p}\in\spc(R)\mid \xi(a\otimes \kappa(\mathfrak{p}))\neq 0\in\per(\kappa(\mathfrak{p}))\}\\
&=\{\mathfrak{p}\in\spc(R)\mid \xi(a)\otimes \kappa(\mathfrak{p})\neq 0\in\per(\kappa(\mathfrak{p}))\}\\
&=\supp(\xi(a)).
\end{align*}
\end{enumerate}
\end{proof}
The relation to the usual support can be expressed in two (equivalent) ways.
\begin{lem}\label{support-decomposition}
Let~$a\in\fper(R)$. Then
\begin{enumerate}
\item $\supp(a)=
\spc(\pi)(\supp_{\pi}(a))\sqcup\spc(\gr)(\supp_{\gr}(a))$.
\item Under the bijection of \cref{fper-thomason-subsets}, $\supp(a)$ corresponds to the pair~$\supp_{\pi}(a)\subset\supp_{\gr}(a)$.
\end{enumerate}
\end{lem}
\begin{proof}
Both statements follow from
\begin{equation*}
\spc(\xi)^{-1}(\supp(a))=\supp(\xi(a))=\supp_{\xi}(a),
\end{equation*}
the last equality being true by \cref{support-basic}.
\end{proof}
\begin{lem}\label{tt-ideal-K}
Let $Y\subset\spc(\fper(R))$ be a Thomason subset, corresponding to $\Pi\subset \Gamma$ under the bijection in \cref{fper-thomason-subsets}. For $a\in\fper(R)$ the following are equivalent:
\begin{enumerate}
\item $\supp(a)\subset Y$;
\item $\supp_{\pi}(a)\subset \Pi$ and~$\supp_{\gr}(a)\subset \Gamma$.
\end{enumerate}
\end{lem}
\begin{proof}
This follows immediately from the way $\Pi\subset \Gamma$ is associated to $Y$, together with \cref{support-decomposition}.
\end{proof}
\begin{cor}\label{classification-tt-ideals}
There is an inclusion preserving bijection
\begin{align*}
\left\{\Pi\subset \Gamma\mid \Pi, \Gamma\subset\spc(R)\text{ Thomason subsets}
\right\}&\longleftrightarrow
\left\{
\text{tt-ideals in }\fper(R)
\right\} \\
(\Pi\subset \Gamma)&\longmapsto \{a\mid \supp_{\pi}(a)\subset \Pi,\supp_{\gr}(a)\subset \Gamma\}\\
\left(
\supp_{\pi}({\mathcal J})\subset\supp_{\gr}({\mathcal J})
\right)&\longmapsfrom {\mathcal J}
\end{align*}
\end{cor}
\begin{proof}
A bijection between Thomason subsets of $\spc(\fper(R))$ and tt-ideals in $\fper(R)$ is described in~\cite[14]{balmer:icm}. Explicitly, it is given by $Y\mapsto \{a\mid\supp(a)\subset Y\}$ and~$\supp({\mathcal J})\mapsfrom{\mathcal J}$. The Corollary follows by composing this bijection with the one of \cref{fper-thomason-subsets}, using \cref{support-decomposition} and \cref{tt-ideal-K}.
\end{proof}
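To illustrate \cref{classification-tt-ideals} in the simplest case, let $R=k$ be a field, so that $\spc(k)$ is a single point $\{\ast\}$ whose only Thomason subsets are $\emptyset$ and $\{\ast\}$. There are then exactly three admissible pairs, hence exactly three tt-ideals in $\fper(k)$:
\begin{equation*}
(\emptyset\subset\emptyset)\longmapsto 0,\qquad
(\emptyset\subset\{\ast\})\longmapsto \ker(\pi),\qquad
(\{\ast\}\subset\{\ast\})\longmapsto \fper(k),
\end{equation*}
and by \cref{localization-beta-derived-fmod} the middle ideal equals~$\langle\cone(\beta)\rangle$.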
\section{Central localization}
\label{sec:localization}
In this section we study several localizations of $\fper(R)$ which will allow us to capture primes (\ie points of the tt-spectrum). In order to accommodate the different localizations we are interested in, we want to work in the following setting. Let $\A$ be a tensor category with central ring $R=\hom_{\A}(\one,\one)$, and fix an invertible object~$u\in\A$. Most of the discussion in~\cite[section 3]{balmer:sss} regarding graded homomorphisms and central localization carries over to our setting. Let us recall what we will need from~\loccit
The graded central ring of $\A$ with respect to $u$ is~$\R^{\bullet}=\hom_{\A}(\one,u^{\otimes\bullet})$. This is a $\Z$-graded $\varepsilon$-commutative ring, where $\varepsilon\in R$ is the central switch for $u$, \ie the switch $u\otimes u\xrightarrow{\sim}u\otimes u$ is given by multiplication by~$\varepsilon$. For any objects $a,b\in\A$, the $\Z$-graded abelian group $\hom_{\A}^{\bullet}(a,b)=\hom_{\A}(a,b\otimes u^{\otimes \bullet})$ has the structure of a graded $\R^{\bullet}$-module in a natural way.
Let $S\subset \R^{\bullet}$ be a multiplicative subset of central homogeneous elements. The central localization $S^{-1}\A$ of $\A$ with respect to $S$ is obtained as follows: it has the same objects as $\A$, and for $a,b\in\A$,
\begin{equation*}
\hom_{S^{-1}\A}(a,b)=
\left(
S^{-1}\hom_{\A}^{\bullet}(a,b)
\right)^{0},
\end{equation*}
the degree 0 elements in the graded localization.
We now prove that this is in fact a categorical localization.
\begin{pro}\label{central-categorical-localization}
The canonical functor ${\mathcal Q}:\A\to S^{-1}\A$ is the localization with respect to
\begin{equation*}
\Sigma=\{a\xrightarrow{s} a\otimes u^{\otimes n}\mid a\in\A, s\in S, |s|=n\}.
\end{equation*}
Moreover, $S^{-1}\A$ has a canonical structure of a tensor category, and ${\mathcal Q}$ is a tensor functor.
\end{pro}
\begin{proof}
Denote by ${\mathcal Q}'$ the localization functor~$\A\to\Sigma^{-1}\A$. It is clear by construction that every morphism in $\Sigma$ is inverted in $S^{-1}\A$, so ${\mathcal Q}$ factors through ${\mathcal Q}'$, say via a functor~$F:\Sigma^{-1}\A\to S^{-1}\A$. The functor $F$ is clearly essentially surjective, and full faithfulness follows readily from the easily verified fact that $\Sigma$ admits a calculus of left (and right) fractions~\cite[2.2]{gabriel-zisman:localization}.
The fact that $\Sigma^{-1}\A$ is an additive category and ${\mathcal Q}'$ an additive functor is~\cite[3.3]{gabriel-zisman:localization}, and the analogous statement about the monoidal structure is proven in~\cite{day:monoidal-localization}. The monoidal product in $\Sigma^{-1}\A$ is automatically additive in each variable.
\end{proof}
Consider the homotopy category $\K{\A}^{b}$ of~$\A$. This is a tt-category (large if $\A$ is) with the same graded central ring $\R^{\bullet}$ (with respect to $u$, viewed as a complex concentrated in degree 0).
\begin{lem}\label{localization-homotopy}
There is a canonical equivalence of tt-categories $S^{-1}\K{\A}^{b}\simeq \K{S^{-1}\A}^{b}$, and both are equal to the Verdier localization of $\K{\A}^{b}$ with kernel~$\langle\cone(s)\mid s\in S\rangle$.
\end{lem}
\begin{proof}
The first statement can be shown in two steps. First, consider the category of chain complexes $\C{\A}^{b}$ and the canonical functor~$\C{\A}^{b}\to \C{S^{-1}\A}^{b}$. By \cref{central-categorical-localization}, it factors through $S^{-1}\C{\A}^{b}\to \C{S^{-1}\A}^{b}$, and full faithfulness is an easy exercise using the explicit nature of the central localization. (The point is that a bounded complex involves only finitely many morphisms, so one can always find a ``common denominator''.)
Next, since $\C{-}^{b}\to \K{-}^{b}$ is a categorical localization (with respect to chain homotopy equivalences), \cref{central-categorical-localization} easily implies the claim.
Compatibility with the tt-structure is also straightforward. The second statement follows from~\cite[3.6]{balmer:sss}.
\end{proof}
We want to draw two consequences from this discussion. For the first one, denote by $\md(R)$ the tensor category of rigid objects in $\Md(R)$, \ie the category of finitely generated projective $R$-modules. We let $\A=\fmd(R)$ and as the invertible object $u$ we choose $R(1)$ so that $\R^{\bullet}=R[\beta]$.
\begin{lem}\label{localization-beta-fmod}
The functor $\pi:\fmd(R)\to\md(R)$ is the central localization at the multiplicative set~$\{\beta^{n}\mid n\geq 0\}\subset R[\beta]$.
\end{lem}
\begin{proof}
Consider the set of arrows $\Sigma=\{\beta^{n}:a\to a(n)\mid a\in \fmd(R), n\geq 0\}$. By \cref{central-categorical-localization}, the central localization in the statement of the Lemma is the localization at~$\Sigma$. We have $\hom_{\Sigma^{-1}\fmd(R)}(a,b)=\varinjlim_{n}\hom_{\fmd(R)}(a(-n),b)$. At each level $n$, this maps injectively into $\hom_{\md(R)}(\pi a,\pi b)$, and the transition maps $f\mapsto f\circ\beta$ are injective as well since $\beta$ is an epimorphism; hence the induced map
\begin{equation*}
\varinjlim_{n}\hom_{\fmd(R)}(a(-n),b)\to\hom_{\md(R)}(\pi{a},\pi{b})
\end{equation*}
is injective. For surjectivity, we may assume $a,b\in \fmd(R)$ are of ``weight in $[m,n]$'', \ie $m\leq n$ and $\gr_{i}(a)=\gr_{i}(b)=0$ for all~$i\notin [m,n]$. In that case $f:\pi a\to\pi b$ comes from a map~$f:a(m-n)\to b$.
We have proved that $\pi:\Sigma^{-1}\fmd(R)\to\md(R)$ is fully faithful. Essential surjectivity is clear.
\end{proof}
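As a consistency check on \cref{localization-beta-fmod}, the hom-formula for twists (as in the computation in the proof of \cref{localization-fmod} below) shows that all twists of $R$ become isomorphic after inverting $\beta$: for any $m,n\in\Z$,
\begin{equation*}
\hom_{\Sigma^{-1}\fmd(R)}(R(m),R(n))=\varinjlim_{l}\hom_{\fmd(R)}(R(m-l),R(n))=\varinjlim_{l\geq m-n}R=R=\hom_{\md(R)}(\pi R(m),\pi R(n)),
\end{equation*}
the transition maps being the identity in the displayed range.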
\begin{cor}\label{localization-beta-derived-fmod}
The functor $\pi:\fper(R)\to\per(R)$ is the Verdier localization at the morphisms $\beta:A\to A(1)$, for every~$A\in\fper(R)$. In particular, $\ker(\pi)=\langle\cone(\beta)\rangle$.
\end{cor}
\begin{proof}
Let $S=\{\beta^{n}\}\subset R[\beta]$. We know from \cref{localization-beta-fmod} that $S^{-1}\fmd(R)=\md(R)$ hence also $S^{-1}\fper(R)=\per(R)$, by \cref{localization-homotopy}, and this is the Verdier localization with kernel $\langle\cone(\beta^{n})\mid n\geq 0\rangle$. The latter tt-ideal is equal to $\langle\cone(\beta)\rangle$ by~\cite[2.16]{balmer:sss} and we conclude.
\end{proof}
Still in the same context let $\mathfrak{p}\subset R$ be a prime ideal. Denote by $q:R\to R_{\mathfrak{p}}$ the canonical localization morphism, and set~$S=R\backslash \mathfrak{p}$.
\begin{lem}\label{localization-fmod}
The morphism $q$ induces an equivalence of tensor categories
\begin{equation*}
S^{-1}\fmd(R)\simeq\fmd(R_{\mathfrak{p}}).
\end{equation*}
\end{lem}
\begin{proof}
The functor $S^{-1}\fmd(R)\to\fmd(R_{\mathfrak{p}})$ is given by~$\otimes_{R}R_{\mathfrak{p}}$. This is clearly a tensor functor. Since $R_{\mathfrak{p}}$ is local, every finitely generated projective $R_{\mathfrak{p}}$-module is free, so $\otimes_{R}R_{\mathfrak{p}}$ is essentially surjective. For full faithfulness, notice that $\otimes_{R}R_{\mathfrak{p}}$ is additive, and one therefore reduces to checking this for twists of $R$:
\begin{align*}
S^{-1}\hom_{\fmd(R)}(R(m),R(n))&=
\begin{cases}
S^{-1}R&:n\geq m\\
0&:n<m
\end{cases}\\
&=
\begin{cases}
R_{\mathfrak{p}}&:n\geq m\\
0&:n<m
\end{cases}\\
&=\hom_{\fmd(R_{\mathfrak{p}})}(R_{\mathfrak{p}}(m),R_{\mathfrak{p}}(n)).
\end{align*}
\end{proof}
\begin{cor}\label{localization-fmod-derived}
The square of topological spaces
\begin{equation*}
\xymatrix{\spc(\fper(R))\ar[d]_{\rho_{R}}&\spc(\fper(R_{\mathfrak{p}}))\ar[l]_{\spc(q)}\ar[d]^{\rho_{R_{\mathfrak{p}}}}\\
\spc(R)&\spc(R_{\mathfrak{p}})\ar[l]^{\spc(q)}}
\end{equation*}
is cartesian.
\end{cor}
\begin{proof}
By \cref{localization-homotopy} and \cref{localization-fmod}, we know that $\fper(R_{\mathfrak{p}})$ is the Verdier localization of $\fper(R)$ with kernel $\langle\cone(s)\mid s\in S\rangle$. The claim now follows from~\cite[5.6]{balmer:sss}.
\end{proof}
\begin{rmk}
\cref{localization-fmod} is false for general multiplicative subsets $S\subset R$, even without taking into account filtrations. The proof shows that the functor $S^{-1}\fmd(R)\to\fmd(S^{-1}R)$ is always fully faithful but it may fail to be essentially surjective. The correct statement would therefore be that $
\left(
S^{-1}\fmd(R)
\right)^{\natural}\simeq\fmd(S^{-1}R)$, where $(-)^{\natural}$ denotes the idempotent completion. We then deduce
\begin{align*}
\K{\fmd(S^{-1}R)}^{b}&\simeq \K{\left(S^{-1}\fmd(R)\right)^{\natural}}^{b}\\
&\simeq\left(\K{S^{-1}\fmd(R)}^{b}\right)^{\natural}&&\text{\cite[2.8]{Balmer-Schlichting}}\\
&\simeq\left(S^{-1}\K{\fmd(R)}^{b}\right)^{\natural}&&\text{\cref{localization-homotopy}}
\end{align*}
and since the tt-spectrum is invariant under idempotent completion, we obtain a cartesian square as in \cref{localization-fmod-derived} for arbitrary multiplicative subsets~$S\subset R$.
\end{rmk}
\section{Reduction steps}
\label{sec:reduction}
Let $R$ be a noetherian ring. Recall from \cref{sec:main-thm} that we would like to prove that the tt-functors $\pi,\gr:\fper(R)\to\per(R)$ induce jointly surjective maps
\begin{equation*}
\spc(\pi),\spc(\gr):\spc(\per(R))\to\spc(\fper(R)).
\end{equation*}
In this section, we will explain how to reduce this statement to $R$ a field. The latter case will be proved in \cref{sec:field}, and the case of arbitrary (\ie not necessarily noetherian) rings will be addressed in \cref{sec:continuity}.
\begin{pro}\label{reduction-nilpotent}
If $r\in R$ is nilpotent then the canonical map
\begin{equation*}
\spc(\fper(R/r))\to\spc(\fper(R))
\end{equation*}
is surjective.
\end{pro}
\begin{proof}
Let $F=\otimes_{R}R/r:\fper(R)\to\fper(R/r)$. We will use the criterion in~\cite[1.3]{balmer:surjectivity} to establish surjectivity of $\spc(F)$, \ie we want to prove that $F$ detects $\otimes$-nilpotent morphisms. Let $f:A\to B$ in $\fper(R)$ be such that~$\overline{f}:=F(f)=0$. The morphism $f$ determines a morphism $f':R(0)\to A^{\vee}\otimes B$, where $A^{\vee}$ denotes the dual of $A$. Then $\overline{f'}=0$, and if $(f')^{\otimes m}=0$ then also $f^{\otimes m}=0$; in other words, we may reduce to the case~$A=R(0)$.
The morphism $f$ is then determined by a map $f^{0}:R(0)\to B^{0}$ such that $\delta^{0}f^{0}=0$, and $\overline{f}=0$ means that there is a map $\overline{h}:R/r(0)\to B^{-1}/r$ such that $\overline{f^{0}}=\overline{\delta^{-1}}\,\overline{h}$. Choose a lift $h:R(0)\to B^{-1}$ of $\overline{h}$ to $\fmd(R)$. The reduction of $f^{0}-\delta^{-1}h$ modulo $r$ being zero, there exists $g:R(0)\to B^{0}$ such that $f^{0}-gr=\delta^{-1}h$. The map $gr$ determines a chain morphism, and since $f^{0}$ and $gr$ differ by the null-homotopic $\delta^{-1}h$, we may assume that $f^{0}$ is of the form $gr$ for some $g:R(0)\to B^{0}$. ($g$ itself does not necessarily determine a chain morphism.)
Let $m\geq 1$ be such that $r^{m}=0$. Then $f^{\otimes m}:R(0)\to B^{\otimes m}$ is described by the morphism
\begin{align*}
R(0)\xrightarrow{(gr)^{\otimes m}} (B^{0})^{\otimes m}\hookrightarrow (B^{\otimes m})^{0}\\
\intertext{which factors as}
R(0)\xrightarrow{r^{m}=0}R(0)\xrightarrow{g^{\otimes m}} (B^{0})^{\otimes m}\hookrightarrow (B^{\otimes m})^{0}.
\end{align*}
We conclude that $f$ is $\otimes$-nilpotent as required.
\end{proof}
\begin{pro}\label{reduction-nzd}
Let $r\in R$ be a non-zerodivisor. The image of the canonical map
\begin{equation*}
\spc(\fper(R/r))\to\spc(\fper(R))
\end{equation*}
is precisely the support of $\cone(r)$.
\end{pro}
\begin{proof}
Let $F=\otimes_{R}R/r:\fper(R)\to\fper(R/r)$. The fact that $r$ is a non-zerodivisor means that $R/r(0)$ is an object in $\fper(R)$ hence $F$ admits a right adjoint $G:\fper(R/r)\to\fper(R)$ (which is simply the forgetful functor). We may therefore invoke~\cite[1.7]{balmer:surjectivity}: the image of $\spc(F)$ is the support of~$G(R/r(0))=\cone(r)$.
\end{proof}
We can now put these pieces together. Notice that we have, for any ring morphism $R\to R'$ and $\xi\in\{\pi,\gr\}$, commutative squares
\begin{equation}\label{basechange-prime-generators}
\xymatrix@C=5em{\spc(\fper(R))&\spc(\fper(R'))\ar[l]_{\spc(\otimes_{R}R')}\\
\spc(\per(R))\ar[u]^{\spc(\xi)}&\spc(\per(R'))\ar[l]^{\spc(\otimes_{R}R')}\ar[u]_{\spc(\xi)}.}
\end{equation}
Let $\mathfrak{P}\in\spc(\fper(R))$ be a prime and set~$\mathfrak{p}=\rho_{R}(\mathfrak{P})\in\spc(R)$. From \cref{localization-fmod-derived} we know that $\mathfrak{P}$ lies in the subspace~$\spc(\fper(R_{\mathfrak{p}}))$. Using \cref{basechange-prime-generators} we may therefore assume that $R$ is a local ring with maximal ideal $\mathfrak{p}=\rho_{R}(\mathfrak{P})$. We now do induction on the dimension $d$ of~$R$. In each case, repeated application of \cref{reduction-nilpotent} in conjunction with \cref{basechange-prime-generators} allows us to assume that $R$ is reduced. If $d=0$ then $R$ is necessarily a field, and this case will be dealt with in \cref{field-spectrum}. If $d>0$ there exists a non-zerodivisor~$r\in \mathfrak{p}$. \cref{reduction-nzd}, in conjunction with \cref{basechange-prime-generators}, reduces us to $R/r$; this ring has dimension $d-1$ and we conclude by induction.
\section{The case of a field}
\label{sec:field}
In this section we will prove \cref{main-thm} in the case of $R=k$ a field. This will follow easily from a more precise description of~$\fper(k)$.
We begin with a result describing the structure of any morphism in~$\fmd(k)$. For this, let us agree to call a quasi-abelian category \emph{semisimple} if every short strictly exact sequence splits. Equivalently, a quasi-abelian category is semisimple if every object is projective.
\begin{lem}
The category $\fmd(k)$ is semisimple quasi-abelian.
\end{lem}
\begin{proof}
Notice that $\fmd(k)\subset \FMd(k)$ is simply the full subcategory of separated filtered vector spaces whose underlying vector space is finite dimensional. This is an additive subcategory and the set of objects is closed under kernels and cokernels in $\FMd(k)$. We deduce that it is a quasi-abelian subcategory.
Since every object in $\fmd(k)$ is projective (\cref{fmod-finite-projectives}), semisimplicity follows.
\end{proof}
\begin{lem}\label{field-morphism-structure}
Let $f:a\to b$ be a morphism in a semisimple quasi-abelian category. Then $f$ is the composition
\begin{equation}
\label{field-morphism-composition}
f=f_{m} \circ f_{em}\circ f_{e},
\end{equation}
where
\begin{itemize}
\item $f_{e}$ is the projection onto a direct summand (in particular a strict epimorphism),
\item $f_{em}$ is an epimonomorphism,
\item $f_{m}$ is the inclusion of a direct summand (in particular a strict monomorphism).
\end{itemize}
\end{lem}
\begin{proof}
As in every quasi-abelian category, $f$ factors as
\begin{equation*}
a\xrightarrow{f_{e}}\coimg(f)\xrightarrow{f_{em}}\img(f)\xrightarrow{f_{m}} b,
\end{equation*}
where $f_{e}$ is a strict epimorphism, $f_{em}$ is an epimonomorphism, and $f_{m}$ is a strict monomorphism. The Lemma now follows from the definition of semisimplicity.
\end{proof}
\begin{rmk}\label{field-morphism-characterization}
\cref{field-morphism-structure} allows one to characterize certain properties of morphisms $f:a\to b$ in a particularly simple way:
\begin{enumerate}
\item $f$ is a monomorphism if and only if $f_{e}$ is an isomorphism.
\item $f$ is an epimorphism if and only if $f_{m}$ is an isomorphism.
\item $f$ is strict if and only if $f_{em}$ is an isomorphism.
\end{enumerate}
\end{rmk}
Fix a semisimple quasi-abelian category~${\mathcal A}$. Its bounded derived category $\D{\mathcal A}^{b}$ admits a bounded t-structure whose heart $\D{\mathcal A}^{\heartsuit}$ is the subcategory of objects represented by complexes
\begin{equation}
0\to a\xrightarrow{f}b\to 0,\label{left-heart-complex}
\end{equation}
where $b$ sits in degree 0 and $f$ is a monomorphism in ${\mathcal A}$.\footnote{This is~\cite[1.2.18, 1.2.21]{Schneiders:quasi-abelian}. The reader who is puzzled by the asymmetry of this statement should rest assured that there is a dual t-structure for which the objects in the heart are represented by \emph{epi}morphisms~\cite[1.2.23]{Schneiders:quasi-abelian}. Also, the existence of the t-structures does not require $\A$ to be semisimple.}
\begin{lem}\label{semisimple-hereditary}
The t-structure on $\D{\mathcal A}^{b}$ is strongly hereditary, \ie for any $A,B\in \D{\mathcal A}^{\heartsuit}$ and $i\geq 2$, we have~$\hom_{\D{\mathcal A}^{b}}(A,B[i])=0$.
\end{lem}
\begin{proof}
This follows from the fact that $A$ and $B$ are represented by complexes of the form (\ref{left-heart-complex}), and that homomorphisms can be computed in the homotopy category. Indeed, as every object in ${\mathcal A}$ is projective, the canonical functor $\K{\A}^{b}\to\D{\A}^{b}$ is an equivalence \cite[1.3.22]{Schneiders:quasi-abelian}.
\end{proof}
Assume now in addition that ${\mathcal A}$ is a tensor category and every object is a finite sum of invertibles. Clearly, $\fmd(k)$ satisfies this condition.
\begin{pro}\label{semisimple-derived-objects}
Every object in $\D{\mathcal A}^{b}$ is a finite direct sum of shifts of invertibles and of shifts of cones $\cone(g)$, where $g$ is an epimonomorphism in~${\mathcal A}$.
\end{pro}
\begin{proof}
Let $A\in \D{\mathcal A}^{b}$. By \cref{semisimple-hereditary}, the object $A$ is the direct sum of shifts of objects in~$\D{\mathcal A}^{\heartsuit}$. As discussed above, every object in the heart is represented by a complex as in \cref{left-heart-complex}. We then deduce from \cref{field-morphism-characterization} that $f$ is an epimonomorphism $g$ followed by the inclusion of a direct summand, say with direct complement~$c$. Thus
\begin{equation*}
\cone(f)=\cone(g)\oplus c.
\end{equation*}
\end{proof}
We now come to the study of tt-ideals in~$\fper(k)=\D{\fmd(k)}^{b}$. \cref{semisimple-derived-objects} tells us that every prime ideal is generated by cones of epimonomorphisms in~$\fmd(k)$. However, it turns out that all these cones generate the same prime ideal (except for the zero cone, of course).
\begin{pro}\label{field-tt-ideals}
There is a unique non-trivial tt-ideal in $\fper(k)$ given by
\begin{equation*}
\ker(\pi)=\langle\cone(\beta)\rangle.
\end{equation*}
In particular, $\langle\cone(\beta)\rangle$ is a prime ideal.
\end{pro}
\begin{proof}
The equality of the two tt-ideals follows from \cref{localization-beta-derived-fmod}. Since $\pi$ is a tt-functor, it is clear that its kernel is a prime ideal.
Let $A$ be a non-zero object in $\fper(k)$ such that~$\langle A\rangle\neq\fper(k)$. We would like to show $\langle A\rangle=\langle\cone(\beta)\rangle$. By \cref{semisimple-derived-objects}, we may assume $A=\cone(g)$ where $g$ is a non-strict epimonomorphism in~$\fmd(k)$. Writing the domain and codomain of $g$ as sums of invertibles, we may identify $g$ with a square matrix with entries in the polynomial ring~$k[\beta]$. Let $p(\beta)\in k[\beta]$ be its determinant. Since $g$ is not an isomorphism, neither is $\gr(g)\in\per(k)$ by \cref{gr-derived}. We deduce that $p(0)=0$, or in other words $p(\beta)=\beta\cdot p'(\beta)$ for some~$p'(\beta)\in k[\beta]$.
Let $\T=\fper(k)/\langle\cone(g)\rangle$ and denote by $\varphi:\fper(k)\to\T$ the localization functor. As $\T$ is a tt-category we can consider the graded (automatically commutative) central ring ${\mathcal R}^{\bullet}_{\T}$ with respect to~$\varphi(k(1))$. Since $\varphi(g)$ is invertible, $\varphi(p)\in{\mathcal R}^{\bullet}_{\T}$ is invertible as well. But then we must have
\begin{equation*}
\left(
\varphi(p)^{-1}\cdot\varphi(p')
\right)\cdot\varphi(\beta)=\varphi(p)^{-1}\cdot\varphi(p)=1
\end{equation*}
so $\varphi(\beta)$ is invertible as well. In other words, $\cone(\beta)\in\ker(\varphi)=\langle{\cone(g)}\rangle$.
Conversely, $\pi(g)$ is an isomorphism since $g$ is an epimonomorphism. In other words,~$\cone(g)\in\langle\cone(\beta)\rangle$.
\end{proof}
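The determinant step in the proof of \cref{field-tt-ideals} can be illustrated by a hypothetical toy case:

```latex
g=\begin{pmatrix}\beta & 1\\ 0 & \beta\end{pmatrix},
\qquad p(\beta)=\det(g)=\beta^{2},
\qquad p'(\beta)=\beta.
```

Here $p(0)=0$, reflecting the fact that $\gr(g)$ is not an isomorphism, and as soon as $\varphi(p)=\varphi(\beta)^{2}$ is invertible, so is $\varphi(\beta)$.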
\begin{cor}\label{field-spectrum}
The canonical morphism
\begin{equation*}
\rho^{\bullet}_{k}:\spec(\fper(k))\to\spech(k[\beta])
\end{equation*}
is an isomorphism of locally ringed spaces. The tt-spectrum $\spc(\fper(k))$ is the topological space
\begin{equation*}
\xymatrix{\langle 0\rangle=\ker(\gr)\\
\langle\cone(\beta)\rangle=\ker(\pi)\ar@{-}[u]}
\end{equation*}
where the only non-trivial specialization relation is indicated by the vertical line going upward.
\end{cor}
\section{Continuity of tt-spectra}
\label{sec:continuity}
Our primary goal in this section is to deduce \cref{main-thm} from its validity for noetherian rings. The idea is to write an arbitrary ring as a filtered colimit of noetherian rings. Since this technique of reducing a statement in tt-geometry to the analogous statement about ``more finite'' objects can be useful in other contexts, we approach the question in greater generality.
Denote by $\ttCat$ the category of small tt-categories and tt-functors. For the moment we assume that all structure is strict, \eg the tt-functors preserve the tensor product and translation functor on the nose.
\begin{lem}\label{ttcat-filtered-colimits}
The forgetful functor $\ttCat\to\Cat$ creates filtered colimits.
\end{lem}
\begin{proof}[Proof sketch.]
The fact that filtered colimits of monoidal categories are created by the forgetful functor is~\cite[C1.1.8]{johnstone:sketches}. Since filtered colimits commute with finite products, the colimit will be an additive category. It is obvious how to endow the filtered colimit with a translation functor and a class of distinguished triangles. The axioms for the triangulated structure all involve only finitely many objects and morphisms each and therefore are easily seen to hold. The same is true for exactness of the monoidal product.
It remains to check universality. But given a cocone on the diagram there is a unique morphism (a priori not respecting the tt-structure) from the filtered colimit. Hence all one needs to know is that it actually does respect the tt-structure. Again, in each case this only involves finitely many objects and morphisms and is easily seen to hold.
\end{proof}
Let us be given a filtered diagram $(\T_{i},f_{ij}:\T_{i}\to \T_{j})_{i\in I}$ in $\ttCat$ and denote by $\T$ its colimit, and by $f_{i}:\T_{i}\to\T$ the canonical functors.
\begin{pro}\label{spc-continuity}
The induced map
\begin{equation*}
\varphi:=\varprojlim_{i}f_{i}^{-1}:\spc(\T)\to\varprojlim_{i}\spc(\T_{i})
\end{equation*}
is a homeomorphism.
\end{pro}
\begin{proof}
This follows from \cref{spc-continuity-variant}.
\end{proof}
\begin{rmk}\label{filtered-pseudo-colimit}
In practice, of course, tt-categories and tt-functors are rarely strict, and (filtered) diagrams of such things are rarely strictly functorial. Denote by $\ttCatp$ the 2-category of small tt-categories, tt-functors, and tt-isotransformations without any strictness assumptions.
Given a pseudo-functor $F:I\to\ttCatp$, where $I$ is a small filtered category, we are going to endow its pseudo-colimit 2-$\varinjlim_{I}F$ with the structure of a tt-category. For this, choose a strictification of $F$, \ie a strict 2-functor $G:I\to\Cat$ together with a pseudo-natural equivalence $\eta:F\to G$ (as pseudo-functors~$I\to\Cat$). Then use $\eta$ pointwise to endow each category $G(i)$, where $i\in I$, with the structure of a tt-category, and each functor $G(\alpha)$, where $\alpha:i\to j$, with the structure of a tt-functor. In other words, make $\eta$ into a pseudo-natural equivalence of pseudo-functors~$I\to\ttCatp$. Since 2-$\varinjlim F\simeq$ 2-$\varinjlim G$, we may assume without loss of generality that $F$ is a strict 2-functor. But in this case the canonical functor 2-$\varinjlim_{I}F\to\varinjlim_{I}F$ from the pseudo-colimit to the (1-categorical) colimit is an equivalence (here we use the assumption that $I$ is filtered~\cite[see][VI.6.8]{sga4}). Then we can apply \cref{ttcat-filtered-colimits}.\footnote{This is maybe not wholly satisfactory. In analogy to \cref{ttcat-filtered-colimits} one might expect the statement that $\ttCatp\to\Catp$ creates filtered pseudo-colimits. We won't need this at present, and leave it as a question for the interested reader.}
\cref{spc-continuity} also holds in this non-strict context. Notice first that non-strict tt-functors induce maps on spectra exactly in the same way as strict ones. Moreover, isomorphic (non-strict) tt-functors induce the same map. Therefore the statement of \cref{spc-continuity} makes sense also for pseudo-functors~$I\to\ttCatp$. It is clear that $F\to \text{2-}\varinjlim F$ satisfies the assumptions of \cref{spc-continuity-variant}, and thus induces a homeomorphism
\begin{equation*}
\spc(\text{2-}\varinjlim F)\to\varprojlim_{i}\spc(F(i)).
\end{equation*}
\end{rmk}
For the following discussion a category $I$ is said to be \emph{filtered} if
\begin{itemize}
\item $I$ is non-empty, and
\item for any $i,j\in I$ there exists $k\in I$ and $i\to k$,~$j\to k$.
\end{itemize}
In particular, it is not necessary that parallel morphisms can be equalized. (Of course, in applications $I$ will often just be a directed poset.)
\begin{dfn}
Let $\T_{\bullet}:I\to\ttCatp$ be a filtered pseudo-functor and
$f:\T_{\bullet}\to \T$ a pseudo-natural transformation,~$\T\in\ttCatp$. We say that
\begin{itemize}
\item $f$ is \emph{surjective on morphisms} if for each
morphism $\alpha:a\to b$ in $\T$ there exists $i\in I$, and
a morphism $\alpha_{i}:a_{i}\to b_{i}$ in $\T_{i}$ such that~$f_{i}(\alpha_{i})\cong \alpha$.
\item $f$ \emph{detects isomorphisms} if for each
$a_{i},b_{i} \in \T_{i}$ such that
$f_{i}(a_{i})\cong f_{i}(b_{i})$ in $\T$ there exists $u:i\to j$
such that~$\T_{u}(a_{i})\cong \T_{u}(b_{i})$.
\end{itemize}
\end{dfn}
The condition $f_{i}(\alpha_{i})\cong \alpha$ here means that there are isomorphisms $a\cong f_{i}(a_{i})$ and $b\cong f_{i}(b_{i})$ such that
\begin{equation*}
\xymatrix{a\ar[r]^{\alpha}\ar@{-}[d]_{\sim}&b\ar@{-}[d]^{\sim}\\
f_{i}(a_{i})\ar[r]_{f_{i}(\alpha_{i})}&f_{i}(b_{i})}
\end{equation*}
commutes. The transformation $f$ being surjective on morphisms implies in particular that $f$ is ``surjective on objects'' and even ``surjective on triangles'', in an obvious sense. Note also that detecting isomorphisms is equivalent to detecting zero objects.
\begin{pro}\label{spc-continuity-variant}
Let $\T_{\bullet}:I\to\ttCatp$ be a filtered pseudo-functor and $f:\T_{\bullet}\to \T$ a pseudo-natural transformation,~$\T\in\ttCatp$. Assume that $f$ is surjective on morphisms and detects isomorphisms. Then the induced map
\begin{equation*}
\varphi:=\varprojlim_{i}f_{i}^{-1}:\spc(\T)\to\varprojlim_{i}\spc(\T_{i})
\end{equation*}
is a homeomorphism.
\end{pro}
\begin{proof}
\begin{enumerate}
\item We first prove injectivity. Let $\mathfrak{P}\neq\mathfrak{Q}\in\spc(\T)$, say~$a\in\mathfrak{P}\backslash\mathfrak{Q}$. There exists $i\in I$ and $a_{i}\in \T_{i}$ such that $f_{i}(a_{i})\cong a$ since $f$ is surjective on objects. But then $a_{i}\in f_{i}^{-1}(\mathfrak{P})\backslash f_{i}^{-1}(\mathfrak{Q})$ which implies~$\varphi(\mathfrak{P})\neq\varphi(\mathfrak{Q})$.
\item For surjectivity, let~$(\mathfrak{P}_{i})_{i}\in\varprojlim\spc(\T_{i})$. Define
\begin{equation*}
\mathfrak{P}=\{a\in\T\mid \exists i\in I, a_{i}\in\mathfrak{P}_{i}:a\cong f_{i}(a_{i})\}\subset \T.
\end{equation*}
We claim that $\mathfrak{P}$ can also be described as
\begin{equation*}
\mathfrak{P}'=\{a\in\T\mid \forall i\in I, a_{i}\in\T_{i}:a\cong f_{i}(a_{i})\Rightarrow a_{i}\in\mathfrak{P}_{i}\}.
\end{equation*}
Indeed, if $a\in\mathfrak{P}'$ choose $i\in I$ and $a_{i}\in\T_{i}$ such that $a\cong f_{i}(a_{i})$ which is possible since $f$ is surjective on objects. By definition of $\mathfrak{P}'$ we must have $a_{i}\in\mathfrak{P}_{i}$, and therefore~$a\in\mathfrak{P}$. Conversely, if $a\in\mathfrak{P}$, say $a\cong f_{i}(a_{i})$ with $a_{i}\in\mathfrak{P}_{i}$, and we are given $a_{j}'\in\T_{j}$ such that $a\cong f_{j}(a_{j}')$, let $k\in I$ and $u_{i}:i\to k$,~$u_{j}:j\to k$. We have $f_{k}\T_{u_{i}}(a_{i})\cong f_{i}(a_{i})\cong a\cong f_{j}(a_{j}')\cong f_{k}\T_{u_{j}}(a_{j}')$ and so by assumption on $f$ there exists $u:k\to l$ such that $\T_{uu_{i}}(a_{i})\cong \T_{u}\T_{u_{i}}(a_{i})\cong \T_{u}\T_{u_{j}}(a_{j}')\cong \T_{uu_{j}}(a_{j}')$. The former lies in $\mathfrak{P}_{l}$ hence so does the latter, and this implies~$a_{j}'\in\mathfrak{P}_{j}$.
It is now straightforward to prove that $\mathfrak{P}$ is a prime ideal. For example, let $D:a\to b\to c\to^{+}$ be a triangle in $\T$ with~$a,b\in\mathfrak{P}$. By assumption there exists $i\in I$ and a triangle $D_{i}:a_{i}\to b_{i}\to c_{i}\to^{+}$ in $\T_{i}$ such that~$f_{i}(D_{i})\cong D$. By what we just proved we must then have $a_{i},b_{i}\in\mathfrak{P}_{i}$ and hence also~$c_{i}\in\mathfrak{P}_{i}$. But then~$c\cong f_{i}(c_{i})\in\mathfrak{P}$. Since $\mathfrak{P}$ is clearly closed under translations, this shows that it is a triangulated subcategory.
For thickness we proceed similarly. Let $a,b\in\T$ such that~$a\oplus b\in\mathfrak{P}$. We may find $i\in I$ and $a_{i},b_{i}\in\T_{i}$ such that $a\cong f_{i}(a_{i})$,~$b\cong f_{i}(b_{i})$. Then $f_{i}(a_{i}\oplus b_{i})\cong a\oplus b\in\mathfrak{P}$ thus $a_{i}\oplus b_{i}\in\mathfrak{P}_{i}$ and this implies $a_{i}\in\mathfrak{P}_{i}$ or $b_{i}\in\mathfrak{P}_{i}$, \ie $a\in\mathfrak{P}$ or~$b\in\mathfrak{P}$. Primality is proven in exactly the same way as thickness.
Let $\pi_{i}:\varprojlim\spc(\T_{i})\to \spc(\T_{i})$ be the canonical projection so that~$\pi_{i}\varphi=f_{i}^{-1}$. Then
\begin{align*}
\pi_{i}\varphi(\mathfrak{P})=f_{i}^{-1}(\mathfrak{P})=f_{i}^{-1}(\mathfrak{P}')=\mathfrak{P}_{i}
\end{align*}
and this completes the proof of surjectivity.
\item Since $\varphi$ is continuous, it remains to show that it is open. A basis for the topology of $\spc(\T)$ is given by $U(a)=\spc(\T)\backslash\supp(a)$, where $a$ runs through the objects of~$\T$. Fix $a\in\T$, say $a\cong f_{i}(a_{i})$ with some~$a_{i}\in\T_{i}$. We claim that $\varphi(U(a))=\pi_{i}^{-1}(U(a_{i}))$, which is open, so this will complete the proof.
Let $\mathfrak{P}\in U(a)$, which means $f_{i}(a_{i})\cong a\in\mathfrak{P}$, or equivalently, $a_{i}\in f_{i}^{-1}(\mathfrak{P})=\pi_{i}\varphi(\mathfrak{P})$, \ie $\varphi(\mathfrak{P})\in\pi_{i}^{-1}(U(a_{i}))$. Conversely, suppose $(\mathfrak{P}_{i})_{i}\in\pi_{i}^{-1}(U(a_{i}))$, \ie~$a_{i}\in\mathfrak{P}_{i}$. By the proof of surjectivity in part (2), $(\mathfrak{P}_{i})_{i}=\varphi(\mathfrak{P})$ with $a\in\mathfrak{P}$, \ie~$(\mathfrak{P}_{i})_{i}\in\varphi(U(a))$.
\end{enumerate}
\end{proof}
\begin{rmk}
Certainly, these are not the only reasonable conditions on $f$ which allow one to deduce a homeomorphism on spectra. For example, it is likely that surjectivity on morphisms could be replaced by a nilfaithfulness assumption inspired by~\cite{balmer:surjectivity}. We mainly chose these conditions with easy applicability in mind.
\end{rmk}
We may apply this result to filtered modules, thereby concluding the second proof of \cref{main-thm}.
\begin{cor}\label{noetherian-reduction}
If $\rho_{R}^{\bullet}:\spc(\fper(R))\to\spch(R[\beta])$ is a homeomorphism for noetherian rings then it is a homeomorphism for all rings.
\end{cor}
\begin{proof}
Let $R$ be an arbitrary ring and write it as the filtered colimit of its finitely generated subrings~$R=\varinjlim_{i}R_{i}$. An inclusion $R_{i}\subset R_{j}$ induces a basechange tt-functor $\otimes_{R_{i}}R_{j}:\fper(R_{i})\to\fper(R_{j})$ and we obtain a pseudo-functor $\fper(R_{\bullet}):I\to\ttCatp$ together with a pseudo-natural transformation~$f=\otimes R:\fper(R_{\bullet})\to\fper(R)$. Let us check that $f$ satisfies the assumptions of \cref{spc-continuity-variant}.
Note first that every free $R$-module comes from a free $R_{i}$-module by basechange, for any~$i$. Also, a morphism between finitely generated free $R$-modules is described by a matrix with entries in~$R$. Adding these finitely many entries to $R_{i}$ we see that morphisms also come from some~$R_{i}$. In particular, this is true for idempotent endomorphisms of finitely generated free $R$-modules. We deduce that finitely generated projective $R$-modules also arise by basechange from some~$R_{i}$. The same is then true for objects and morphisms in $\fmd(R)$ and therefore also in $\fper(R)=\K{\fmd(R)}^{b}$ (\cref{fper-homotopy-derived}). In other words, $f$ is surjective on morphisms. Moreover, a perfect filtered complex is 0 in $\fper(R)$ if and only if it is nullhomotopic and such a homotopy again comes from some $R_{i}$. We conclude that $f$ detects isomorphisms as well.
We may therefore apply \cref{spc-continuity-variant} to deduce a commutative square
\begin{equation*}
\xymatrix{\spc(\fper(R))\ar[r]\ar[d]_{\rho_{R}^{\bullet}}&\varprojlim_{i}\spc(\fper(R_{i}))\ar[d]^{(\rho_{R_{i}}^{\bullet})_{i}}\\
\spch(R[\beta])\ar[r]&\varprojlim_{i}\spch(R_{i}[\beta])}
\end{equation*}
where the top horizontal map is a homeomorphism. Since the $R_{i}$ are all noetherian rings, the right vertical map is a homeomorphism by assumption. And the bottom horizontal map is clearly a homeomorphism. We conclude that the left vertical map is too.
\end{proof}
\bibliographystyle{hsiam}
% https://arxiv.org/abs/1708.00833
% Title: tt-geometry of filtered modules
% Abstract: We compute the tensor triangular spectrum of perfect complexes of filtered modules over a commutative ring, and deduce a classification of the thick tensor ideals. We give two proofs: one by reducing to perfect complexes of graded modules which have already been studied in the literature, and one more direct for which we develop some useful tools.
% https://arxiv.org/abs/2010.02906
% Title: The Index Theorem for Toeplitz Operators as a Corollary of Bott Periodicity
% Abstract: This is an expository paper about the index of Toeplitz operators, and in particular Boutet de Monvel's theorem. We prove Boutet de Monvel's theorem as a corollary of Bott periodicity, and independently of the Atiyah-Singer index theorem.
\section{Introduction}
This is an expository paper about the index of Toeplitz operators,
and in particular Boutet de Monvel's theorem \cite{Bo79}.
We prove Boutet de Monvel's theorem as a corollary of Bott periodicity,
and independently of the Atiyah-Singer index theorem.
Let $M$ be an odd dimensional closed Spin$^c$ manifold with Dirac operator $D$
acting on sections of the spinor bundle $S$.
If $E$ is a smooth $\CC$ vector bundle on $M$,
$D^E$ denotes $D$ twisted by $E$.
The closure $\bar{D}^E$ of $D^E$ is an unbounded self-adjoint operator on
the Hilbert space $L^2(M,S\otimes E)$ of $L^2$-sections of $S\otimes E$.
$\bar{D}^E$ has discrete spectrum with finite dimensional eigenspaces.
Denote by $L^2_+(M,S\otimes E)$ the Hilbert space direct sum of the eigenspaces of $\bar{D}^E$ for eigenvalues $\lambda\ge 0$.
$P^E_+$ denotes the orthogonal projection
\[ P^E_+:L^2(M,S\otimes E)\to L^2_+(M,S\otimes E)\]
Suppose that $\alpha$ is an automorphism of $E$,
and $I_S\otimes \alpha$ the resulting automorphism of $S\otimes E$.
$\mathcal{M}_\alpha$ is the bounded invertible operator on $L^2(S\otimes E)$
obtained from $I_S\otimes \alpha$.
The Toeplitz operator $T_\alpha$ is the operator $\mathcal{M}_\alpha:L^2_+\to L^2$ followed by the projection $P^E_+:L^2\to L^2_+$,
\[ T_\alpha =P^E_+\mathcal{M}_\alpha : L^2_+(M,S\otimes E)\to L^2_+(M,S\otimes E)\]
The Toeplitz operator $T_\alpha$ is a Fredholm operator (see section \ref{sec2}).
\begin{theorem}\label{Thm}
Let $M$ be an odd dimensional compact Spin$^c$ manifold without boundary.
If $E$ is a smooth $\CC$ vector bundle on $M$,
and $\alpha$ is an automorphism of $E$,
then
\[ \ind T_\alpha = (\ch(E,\alpha)\cup \Td(M))[M]\]
Here $\ch(E,\alpha)$ is the Chern character of $(E,\alpha)$, $\Td(M)$ is the Todd class of the Spin$^c$ vector bundle $TM$
and $[M]$ is the fundamental cycle of $M$.
\end{theorem}
Our proof of Theorem \ref{Thm} is based on three points:
\begin{itemize}
\item Bott periodicity.
\item Bordism invariance of the index.
\item Invariance of the index under vector bundle modification.
\end{itemize}
The last two points are analytical, and are proved in this paper.
The key topological feature of our proof is Bott periodicity (in its original form).
Our proof does not use $K$-theory or $K$-homology, or cobordism theory, and is independent of the Atiyah-Singer theorem.
In section \ref{sec:BdM} we show that Theorem \ref{Thm} implies Boutet de Monvel's theorem.
Special cases of Theorem \ref{Thm}, when $M=S^1$, were proven by F.\ Noether \cite{No20}, and Gohberg-Krein \cite{GK60}.
Venugopalkrishna \cite{V72} proved the case of Boutet de Monvel's theorem when $M=S^{2r+1}$.
\subsection*{Acknowledgement}
We thank Magnus Goffeng for carefully reading a first draft of this paper, and pointing out a mistake in one of the proofs.
\section{Todd class and Chern character}
In this section we review the characteristic classes that appear in Theorem \ref{Thm}.
We assume familiarity with Chern and Pontryagin classes (see \cite{MilSta}).
The Todd class of a $\CC$ vector bundle is the characteristic class
\[ \mathrm{Td} = \prod_j \frac{x_j}{1-e^{-x_j}}\]
where $x_j$ are the Chern roots.
The $\hat{A}$ class of an $\RR$ vector bundle is the characteristic class
\[ \hat{A} = \prod_j \frac{x_j/2}{\sinh{(x_j/2)}}\]
where $x_j$ are the Pontryagin roots.
Due to the power series identity
\[ e^{x/2}\cdot \frac{x/2}{\sinh{(x/2)}} = \frac{x}{1-e^{-x}}\]
for a $\CC$ vector bundle $F$,
\[ \mathrm{Td}(F) = e^{c_1(F)/2} \; \hat{A}(F)\]
where $\hat{A}(F)$ is the $\hat{A}$ class of the underlying $\RR$ vector bundle of $F$.
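For completeness, the power series identity above follows directly from $\sinh(x/2)=(e^{x/2}-e^{-x/2})/2$:

```latex
e^{x/2}\cdot\frac{x/2}{\sinh(x/2)}
= \frac{x\,e^{x/2}}{e^{x/2}-e^{-x/2}}
= \frac{x}{1-e^{-x}}.
```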
A spin$^c$ vector bundle is an $\RR$ vector bundle with given extra structure.
Thus (as an $\RR$ vector bundle) it has an $\hat{A}$ class.
Associated to each spin$^c$ vector bundle $F$ is a $\CC$ line bundle $L_F$.
If $F$ is a $\CC$ vector bundle, $L_F$ is the determinant bundle.
The first Chern class $c_1(F)$ of a spin$^c$ vector bundle
is, by definition, the Chern class of $L_F$.
The Todd class of a spin$^c$ vector bundle $F$ is defined by the formula,
\[ \mathrm{Td}(F) = e^{c_1(F)/2} \; \hat{A}(F)\]
For a spin vector bundle $F$ the associated line bundle is trivial, and so
\[ \mathrm{Td}(F) = \hat{A}(F)\]
The Chern character $\mathrm{ch}(E,\alpha)$ of a smooth $\CC$ vector bundle $E$ with smooth automorphism $\alpha$
is as follows.
First assume that $E$ is the trivial bundle $X\times \CC^r$,
and $\alpha$ is a smooth map $\alpha:X\to GL(r, \CC)$.
Then if the dimension of $X$ is $2m+1$, the Chern character is the cohomology class represented by the differential form
\[ \mathrm{ch}(\alpha)
= \sum_{j=0}^m -\frac{j!}{(2j+1)!(2\pi i)^{j+1}} \mathrm{Tr}((\alpha^{-1}d\alpha)^{2j+1})\]
More generally, if $E$ is not trivial, choose a vector bundle $E'$ such that $E\oplus E'$ is trivialized,
and extend $\alpha$ by adding the identity automorphism on $E'$.
Then proceed as above.
\section{Toeplitz operators on the circle}
The simplest case of Theorem \ref{Thm} is:
\begin{theorem}\label{thm:Noether}
For a continuous function $f:S^1\to \CC\setminus \{0\}$
the Fredholm operator $T_f$ has index
\[ \Ind T_f = -\mathrm{winding\; number}\, f\]
\end{theorem}
\begin{proof}
First consider $f(z)=z^m$ with $m\ge 0$.
Using the orthonormal basis $e_n(z)=z^n$ (with $n\ge 0$) of $L^2_+(S^1)$,
we have $T_{z^m}e_n=e_{n+m}$. Thus,
\[ \dim\mathrm{Ker}\, T_{z^m} = 0\qquad \dim\mathrm{Coker}\, T_{z^m} =m\]
If $m<0$ we get $T_{z^m}e_n=0$ for $n=0,\dots, |m|-1$ and $T_{z^m}e_n=e_{n+m}$ otherwise. Then,
\[ \dim\mathrm{Ker}\, T_{z^m} =|m|= -m\qquad \dim\mathrm{Coker}\, T_{z^m} =0\]
In both cases we find $\Ind T_f = -m$.
Now let $f:S^1\to \CC\setminus \{0\}$ be an arbitrary continuous function with winding number $m$.
Then $f$ is homotopic to $z^m$
by a homotopy $f_t:S^1\to \CC\setminus \{0\}$, $t\in [0,1]$.
Since the map $f\mapsto T_f$, $C(S^1)\to \mathcal{B}(L^2_+)$ is continuous,
it follows that $T_{f_t}$ is a norm continuous path of Fredholm operators.
Therefore
\[\Ind\, T_{f}=\Ind\, T_{z^m}=-m=-\mathrm{winding\;number}\, f\]
\end{proof}
When $f$ is smooth,
\[ -\mathrm{winding\; number}\, f=-\frac{1}{2\pi i}\int_{S^1}f^{-1}df=\mathrm{ch}(f)[S^1]\]
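As a sanity check outside the proof, Theorem \ref{thm:Noether} can be verified numerically. The sketch below (hypothetical helper names, assuming NumPy is available) approximates the winding number by summing the phase increments of $f$ along the unit circle, and reads off the index of $T_{z^m}$ from the shift computation in the proof above:

```python
import numpy as np

def winding_number(f, n=4096):
    # approximate (1/(2*pi*i)) * \oint f^{-1} df by summing the phase
    # increments of f along the unit circle (branch-safe for large n)
    z = np.exp(2j * np.pi * np.arange(n) / n)
    v = f(z)
    increments = np.angle(np.roll(v, -1) / v)
    return int(round(increments.sum() / (2 * np.pi)))

def index_T_monomial(m):
    # from the proof: T_{z^m} e_n = e_{n+m} (truncated at n = 0), so
    # dim ker = max(-m, 0) and dim coker = max(m, 0)
    return max(-m, 0) - max(m, 0)
```

One then checks, for instance, that `index_T_monomial(m)` agrees with `-winding_number(lambda z: z**m)` for any integer $m$.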
\section{Toeplitz operators on closed manifolds}\label{sec2}
With notation as in the introduction, in this section we prove that the Toeplitz operator $T_\alpha$ is a Fredholm operator.
First assume that $E=M\times \CC$ is a trivial line bundle.
The Dirac operator $D$ of $M$
acts on $C^\infty$-sections of the spinor bundle $S$,
\[ D: C^\infty(M,S)\to C^\infty(M,S)\]
As above, denote by $L^2_+(M,S)$ the Hilbert space direct sum of the eigenspaces of $\bar{D}$ for eigenvalues $\lambda\ge 0$.
$P_+$ denotes the orthogonal projection
\[ P_+:L^2(M,S)\to L^2_+(M,S)\subset L^2(M,S)\]
For a continuous function $f\in C(M)$, $\mathcal{M}_f$ denotes the multiplication operator on $L^2(S)$,
\[ (\mathcal{M}_fu)(x) = f(x)u(x) \qquad u\in L^2(S)\]
\begin{lemma}
The commutator $[P_+,\mathcal{M}_f]=P_+\mathcal{M}_f-\mathcal{M}_fP_+$ is compact for every continuous function $f$ on $M$.
\end{lemma}
\begin{proof}
If $f$ is a smooth function, then $\mathcal{M}_f$ is a pseudodifferential operator of order zero.
Therefore, the commutator $[P_+,\mathcal{M}_f]$ is a pseudodifferential operator of order $-1$, and hence compact.
Since $\|\mathcal{M}_f\|=\|f\|_\infty$ we have $\|[P_+,\mathcal{M}_f]\|\le 2\|f\|_\infty$, so the map $f\mapsto [P_+,\mathcal{M}_f]$ is continuous.
The lemma follows since $C^\infty(M)$ is uniformly dense in $C(M)$ and the space of compact operators is closed in operator norm.
\end{proof}
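In the simplest case $M=S^1$, the projection $P_+$ is diagonal in the Fourier basis $e_n(z)=z^n$, and the compactness in the lemma can be observed directly. The sketch below (a numerical illustration with hypothetical names, assuming NumPy) assembles the matrix of $[P_+,\mathcal{M}_f]$ on the modes $e_{-N},\dots,e_N$ for a smooth symbol and measures how quickly its entries die off away from a finite block, i.e.\ how well the commutator is approximated by finite rank operators:

```python
import numpy as np

N = 64
grid = 8 * N
theta = 2 * np.pi * np.arange(grid) / grid
f = np.exp(np.cos(theta))                 # a smooth symbol on S^1
c = np.fft.fft(f) / grid                  # Fourier coefficients c_m of f

idx = np.arange(-N, N + 1)
chi = (idx >= 0).astype(float)            # P_+ is diagonal: chi(n) = 1 iff n >= 0
# [P_+, M_f] has (j, k) entry (chi(j) - chi(k)) * c_{j-k}
C = np.array([[(chi[a] - chi[b]) * c[(idx[a] - idx[b]) % grid]
               for b in range(len(idx))] for a in range(len(idx))])

def tail_norm(K):
    # operator norm of the block where both frequencies exceed K in modulus
    mask = np.abs(idx) > K
    return np.linalg.norm(C[np.ix_(mask, mask)], 2)
```

Since a nonzero entry couples frequencies $j\ge 0>k$ (or $k\ge 0>j$), the block with $|j|,|k|>K$ only involves Fourier coefficients $c_m$ with $|m|>2K$, which decay rapidly when $f$ is smooth; numerically the tail norm is already below $10^{-6}$ for $K=16$.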
Now let $E$ be any smooth hermitian vector bundle on $M$.
For a continuous endomorphism $\alpha$ of $E$ (viewed as a topological vector bundle),
$\mathcal{M}_\alpha$ denotes the bounded operator on $L^2(M,S\otimes E)$
determined by $I_S\otimes \alpha$.
Here $\alpha$ is not necessarily an automorphism of $E$.
\begin{lemma}\label{lem:comm2}
The commutator $[P_+^E,\mathcal{M}_\alpha]$ is compact for every continuous endomorphism $\alpha$ of $E$.
\end{lemma}
\begin{proof}
If $E=M\times \CC^r$ is a trivial bundle, then $\mathcal{M}_\alpha$
is an $r\times r$ matrix of multiplication operators by functions.
Then $[P^E_+,\mathcal{M}_\alpha]$ is a matrix of commutators of $P_+$ with functions, each of which is compact by the previous lemma.
Thus $[P_+^E,\mathcal{M}_\alpha]$ is compact when $E$ is a trivial vector bundle.
In general, choose $F$ such that $E\oplus F$ is a trivial vector bundle.
Then we can take $D^{E\oplus F}=D^E\oplus D^F$, hence $P^{E\oplus F}_+=P^E_+\oplus P^F_+$.
Also $\mathcal{M}_{\alpha\oplus 0} = \mathcal{M}_\alpha\oplus 0$,
and so $[P_+^{E\oplus F}, \mathcal{M}_{\alpha\oplus 0}]=[P_+^E, \mathcal{M}_\alpha]\oplus 0$.
Since $[P_+^{E\oplus F}, \mathcal{M}_{\alpha\oplus 0}]$ is compact, so is $[P_+^E, \mathcal{M}_\alpha]$.
\end{proof}
\begin{proposition}
If $\alpha, \beta$ are two endomorphisms of $E$, then $T_\alpha T_\beta-T_{\alpha\beta}$ is a compact operator.
Therefore, if $\alpha$ is an automorphism of $E$ then $T_\alpha$ is a Fredholm operator.
\end{proposition}
\begin{proof}
Due to Lemma \ref{lem:comm2}, writing $P=P^E_+$, modulo compact operators we have
\[T_\alpha T_\beta = P\mathcal{M}_\alpha P \mathcal{M}_\beta \sim PP\mathcal{M}_\alpha \mathcal{M}_\beta = P\mathcal{M}_{\alpha\beta}=T_{\alpha\beta}\]
It follows that if $\alpha$ is an automorphism with inverse $\beta$, then $T_\alpha T_\beta-I$ and $T_\beta T_\alpha-I$ are compact operators.
Therefore $T_\alpha$ is Fredholm by Atkinson's theorem.
\end{proof}
\section{Toeplitz operators on compact manifolds with boundary}\label{sec3}
In this section $\Omega$ is a compact even dimensional Spin$^c$ manifold with boundary $M=\partial \Omega$, and $D_\Omega$ is the Dirac operator of $\Omega$, acting on sections in the graded spinor bundle $S^+\oplus S^-$,
\[ D_\Omega: C_c^\infty(\Omega\setminus M, S^+) \to C_c^\infty(\Omega\setminus M, S^-)\]
We denote by $\bar{D}_\Omega$ the maximal closed extension of $D_\Omega$.
$\bar{D}_\Omega$ is a closed unbounded Hilbert space operator with domain
\[ \{u\in L^2(\Omega,S^+)\;\mid \; D_\Omega u \in L^2(\Omega, S^-)\}\]
where $D_\Omega u$ is taken in the distributional sense.
Denote the kernel of $\bar{D}_\Omega$ by
\[\Hardy = \{u\in L^2(\Omega,S^+)\;\mid \; D_\Omega u = 0 \} \]
and let $Q$ be the Hilbert space projection of $L^2(\Omega, S^+)$
onto the closed linear subspace $\Hardy$,
\[ Q:L^2(\Omega,S^+)\to \Hardy\]
For a continuous function $f\in C(\Omega)$, let $\mathcal{M}_f$ denote the multiplication operator
\[ \MM_f : L^2(\Omega, S^+\oplus S^-)\to L^2(\Omega, S^+\oplus S^-)\qquad (\MM_fu)(x) = f(x)u(x) \]
$\MM^+_f$ and $\MM_f^-$ are the restrictions of $\MM_f$ to positive and negative spinors respectively.
\begin{proposition}\label{prop:commutator1}
The commutator $[Q, \mathcal{M}^+_f]$ is a compact operator for all $f\in C(\Omega)$.
Moreover, $\mathcal{M}^+_f Q$ is compact if $f(x)=0$ for all $x\in \partial \Omega$.
\end{proposition}
\begin{proof}
Let $T$ be the bounded operator
\[T=\bar{D}_\Omega(1+\bar{D}^*_\Omega\bar{D}_\Omega)^{-1/2}\]
and $V$ the partial isometry (with the same kernel as $\bar{D}_\Omega$) determined by the polar decomposition
\[ \bar{D}_\Omega= V|\bar{D}_\Omega|,\qquad |\bar{D}_\Omega| = (\bar{D}^*_\Omega\bar{D}_\Omega)^{1/2}\]
$T$ has the following standard properties:
\begin{itemize}
\item $T-V$ is a compact operator.
\item $\mathcal{M}_f^+T-T\mathcal{M}_f^-$ is compact for all $f\in C_0(\Omega\setminus M)$.
\item $\mathcal{M}_f^+(I-T^*T)$ is compact for all $f\in C_0(\Omega\setminus M)$.
\end{itemize}
Proposition 1.1 in \cite{BDT89} implies the much stronger property:
\begin{itemize}
\item $\mathcal{M}_f^+T-T\mathcal{M}_f^-$ is compact for all $f\in C(\Omega)$.
\end{itemize}
Since also $T^*\mathcal{M}_f^+-\mathcal{M}_f^-T^*$ is compact for all $f\in C(\Omega)$,
we obtain that the commutators $[\mathcal{M}_f^+, T^*T]$ are compact.
Note that
\[ \mathrm{ker}\; \bar{D}_\Omega = \mathrm{ker}\; T = \mathrm{ker}\; V \]
Therefore $Q=I-V^*V$ differs from $I-T^*T$ by a compact operator.
Hence $[Q,\mathcal{M}_f^+]$ is compact.
Finally, the third property above implies that $\mathcal{M}^+_fQ$ is compact if $f\in C_0(\Omega\setminus M)$.
(For full details, see \cite{BDT89}.)
\end{proof}
An endomorphism $\theta$ of the trivial vector bundle $E=\Omega\times \CC^r$
is naturally identified with a continuous matrix-valued function
\[ \theta:\Omega\to M_r(\CC)\]
The multiplication operator $\MM_\theta$ is the bounded operator on the Hilbert space
\[ L^2(\Omega, S^+\otimes E) = L^2(\Omega, S^+)\otimes \CC^r\]
obtained from $I_{S^+}\otimes \theta$.
Denote $Q_r=Q\otimes I_r$,
\[ Q_r: L^2(\Omega, S^+)\otimes \CC^r \to \Hardy \otimes \CC^r\]
where $I_r$ is the $r\times r$ identity matrix.
\begin{corollary}\label{comcomp}
For every continuous map $\theta:\Omega\to M_r(\CC)$,
the commutator $[Q_r, \MM_\theta]$ is a compact operator on $L^2(\Omega, S^+)\otimes \CC^r$.
Moreover, $\MM_\theta Q_r$ is compact for every $\theta$ such that $\theta(x)=0$
for all $x\in M$.
Here $Q_r$ is viewed as an operator from $L^2$ to $L^2$.
\end{corollary}
\begin{proof}
The proof is the same as the proof of Lemma \ref{lem:comm2}.
\end{proof}
The Toeplitz operator $T_\theta$ is the composition of $\MM_\theta$ with $Q_r$,
\[ T_\theta =Q_r\mathcal{M}_\theta : \Hardy\otimes \CC^r \to \Hardy\otimes \CC^r\]
\begin{proposition}
If $\theta, \eta$ are two continuous maps $\Omega\to M_r(\CC)$, then $T_\theta T_\eta-T_{\theta\eta}$ is compact.
If $\theta(x)=0$ for all $x\in M=\partial\Omega$ then $T_\theta$ is compact.
Therefore $T_\theta$ is a Fredholm operator if $\theta(x)$ is an invertible matrix for every $x\in M$.
\end{proposition}
\begin{proof}
The first two statements follow from Proposition \ref{prop:commutator1}.
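Explicitly, writing $\sim$ for equality modulo compact operators and using that the commutators $[Q_r,\mathcal{M}_\theta]$ are compact (Corollary \ref{comcomp}),
\[ T_\theta T_\eta = Q_r\mathcal{M}_\theta Q_r\mathcal{M}_\eta \sim Q_rQ_r\mathcal{M}_\theta\mathcal{M}_\eta = Q_r\mathcal{M}_{\theta\eta} = T_{\theta\eta}\]
and if $\theta$ vanishes on $M$ then $\mathcal{M}_\theta Q_r$ is compact, hence so is $T_\theta=Q_r\mathcal{M}_\theta\sim \mathcal{M}_\theta Q_r$.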
Now let $\theta:\Omega\to M_r(\CC)$ be a continuous function,
and $\theta(x)$ invertible for every $x\in M$.
Let $\eta(x) = \theta(x)^{-1}$, and extend $\eta$ to a continuous function $\eta:\Omega\to M_r(\CC)$.
Then modulo compact operators $T_{\theta\eta}\sim I$ and so $T_{\theta}T_\eta\sim T_\eta T_{\theta}\sim I$.
\end{proof}
\begin{remark}
Results closely related to the content of this section were proved by Venugopalkrishna \cite{V72} in the special case where $\Omega$ is a strongly pseudoconvex domain in $\CC^n$.
For the general case of compact Spin$^c$ manifolds with boundary see \cite{BDT89}, and also \cite{BD91}.
\end{remark}
\section{The trace map and the Calderon projection}\label{sec4}
In this section we show how the index of Toeplitz operators on $\Omega$ is related to the index of Toeplitz operators on $M$.
As in the previous section, $\Omega$ is a compact even dimensional Spin$^c$ manifold
and $M$ is the boundary of $\Omega$.
$D_\Omega$ is the Dirac operator of $\Omega$, acting on sections of the spinor bundles $S^+$ and $S^-$.
$D_M$ is the Dirac operator of $M$, acting on sections of the spinor bundle $S$.
We identify $S^+|M = S$.
We denote the $L^2$ Sobolev space of degree $s\in \RR$ (on $\Omega$ or on $M$) by $W_s$.
Note that $W_0=L^2$.
$C^\infty(\Omega, S^+)$ is the space of sections of $S^+$ that are smooth on $\Omega$,
meaning, in particular, that all derivatives of every order extend continuously to the boundary $M$.
For a smooth section $u\in C^\infty(\Omega, S^+)$,
let $\gamma(u)\in C^\infty(M,S)$ denote the restriction of $u$ to $M$.
The restriction map $\gamma$ extends to a bounded linear map on Sobolev spaces, called the trace map,
\[ \gamma_s:W_{s+\frac{1}{2}}(\Omega, S^+)\to W_{s}(M,S)\qquad s\in \RR\]
Let $W_s^\natural(M,S)$ be the subspace of $W_s(M,S)$ consisting of restrictions to the boundary (via the trace map) of distributional solutions of $D_\Omega u=0$,
\[ W_s^\natural(M,S) = \{ \gamma_s(u) \mid u \in W_{s+\frac{1}{2}}(\Omega,S^+), \; D_\Omega u=0\}\]
The Calderon projection $P_\natural$ is the orthogonal projection of $L^2$ onto $W^\natural_0$,
\[ P_\natural:L^2(M,S)\to W_0^\natural(M,S)\subset L^2(M,S)\]
$P_\natural$ is a pseudodifferential operator of order zero.
For all $s\in \RR$, the range of the idempotent $P_\natural:W_{s}\to W_{s}$ is $W_s^\natural(M,S)$.
\begin{proposition}\label{propa}
The bounded operator $F:=P_\natural(1+D_M^2)^{-1/4}\circ \gamma_{-\frac{1}{2}}$,
\[ F:\Hardy \to W_0^\natural(M,S)\]
is Fredholm.
\end{proposition}
\begin{proof}
The space of $L^2$-solutions $u\in L^2(\Omega, S^+)$ of the Dirichlet problem
\[ D_\Omega u=0, \qquad \gamma_{-\frac{1}{2}}(u)=0\]
is finite dimensional.
Therefore the surjective map
\[ \gamma_{-\frac{1}{2}}: \Hardy \to W_{-\frac{1}{2}}^\natural(M,S)\]
is a Fredholm operator.
Denote $A=(1+D_M^2)^{-1/4}$.
$A$ is an elliptic pseudodifferential operator of order $-1/2$.
$A$ has scalar symbol, and so the commutator $[A, P_\natural]$ is of order $-3/2$.
Let $B$ be the pseudodifferential operator of order $-1/2$,
\[ B = P_\natural AP_\natural +(I-P_\natural )A(I-P_\natural )\]
$A-B$ is of order $-3/2$,
\begin{align*}
A-B&= P_\natural A(I-P_\natural ) +(I-P_\natural )AP_\natural \\
&= [P_\natural ,A](I-P_\natural ) +(I-P_\natural )[A,P_\natural ]
\end{align*}
and so $A$ and $B$ have the same principal symbol.
Thus $B$ is elliptic, and the bounded operator
\[B:W_{-\frac{1}{2}}(M,S)\to L^2(M,S)\]
is Fredholm. Because $B$ commutes with $P_\natural$, $B$ restricts to a bounded operator
\[ B_\natural: W_{-\frac{1}{2}}^\natural (M,S)\to W_0^\natural(M,S)\]
which is also Fredholm.
Finally, $F=P_\natural A\circ \gamma_{-\frac{1}{2}}=B_\natural\circ \gamma_{-\frac{1}{2}}$ is the composition of two Fredholm operators.
\end{proof}
For a smooth function $\tilde{f}\in C^\infty(\Omega)$,
let $T_{\tilde{f}}=Q\mathcal{M}_{\tilde{f}}$ be the operator
\[ T_{\tilde{f}} : \Hardy \to \Hardy\]
with $Q$ as in section \ref{sec3}.
If $f$ is the restriction of $\tilde{f}$ to $M$, let $T^\natural_f=P_\natural\mathcal{M}_f$ be the operator
\[ T_{f}^\natural : W_0^\natural(M,S)\to W_0^\natural(M,S)\]
\begin{proposition}\label{propb}
With $F=P_\natural(1+D_M^2)^{-1/4}\circ \gamma_{-\frac{1}{2}}$ as above,
the diagram
\[ \xymatrix{ \Hardy \ar[r]^{T_{\tilde{f}}}\ar[d]_F & \Hardy \ar[d]^{F}& \\
W_0^\natural(M,S) \ar[r]^{T_f^\natural} & W_0^\natural(M,S)}
\]
commutes modulo compact operators, i.e. $FT_{\tilde{f}}-T^\natural_f F$ is a compact operator for any $\tilde{f}\in C^\infty(\Omega)$, and $f=\tilde{f}|M$.
\end{proposition}
\begin{proof}
Let $\sim$ denote equality modulo compact operators. By Proposition \ref{prop:commutator1},
\[ FT_{\tilde{f}} = FQ\mathcal{M}_{\tilde{f}} \sim F\mathcal{M}_{\tilde{f}} Q = F \mathcal{M}_{\tilde{f}}= P_\natural A\gamma_{-\frac{1}{2}} \mathcal{M}_{\tilde{f}} \]
$P_\natural$ and $A$ are pseudodifferential operators, and the principal symbols of $P_\natural$ and $A$ commute with the symbol of $\mathcal{M}_f$.
Therefore $P_\natural\mathcal{M}_f \sim \mathcal{M}_f P_\natural$ and $A\mathcal{M}_f\sim \mathcal{M}_f A$, and
\[ T^\natural_f F = P_\natural\mathcal{M}_f P_\natural A \gamma_{-\frac{1}{2}}
\sim P_\natural A \mathcal{M}_f \gamma_{-\frac{1}{2}}\]
The proposition now follows from the equality
\[\gamma_{-\frac{1}{2}} \mathcal{M}_{\tilde{f}} = \mathcal{M}_f \gamma_{-\frac{1}{2}}\]
\end{proof}
\begin{corollary} \label{corc}
With $\tilde{f}$, $f$, as above, if $f(x)\ne 0$ for all $x\in M$, then
$T_{\tilde{f}}$ and $T_f^\natural$ are Fredholm operators, and
\[ \mathrm{Index}\, T_{\tilde{f}} = \mathrm{Index}\, T_f^\natural\]
\end{corollary}
\begin{proof}
As in section \ref{sec2}, since the projection $P_\natural$ is a pseudodifferential operator of order zero,
$T^\natural_f$ is Fredholm if $f(x)\ne 0$ for all $x\in M$.
Combining Propositions \ref{propa} and \ref{propb} we see that $FT_{\tilde{f}}$ and $T_f^\natural F$ are Fredholm operators with the same index, and therefore
\[ \mathrm{Index}\, F + \mathrm{Index}\, T_{\tilde{f}} = \mathrm{Index}\, T_f^\natural + \mathrm{Index}\, F\]
\end{proof}
The Calderon projection $P_\natural$ and the projection $P_+$ (defined in section \ref{sec2}) are pseudodifferential operators of order zero with the same principal symbol.
This fact, combined with Corollary \ref{corc}, gives the main result of this section.
\begin{proposition}\label{propd}
Let $\theta:M\to M_r(\CC)$ be a continuous matrix-valued function on $M$, such that $\theta(x)$ is invertible for all $x\in M$,
and $\tilde{\theta}:\Omega\to M_r(\CC)$ any continuous extension of $\theta$ to $\Omega$.
Then the Fredholm operators
\[ T_\theta :L^2_+(M,S)\otimes \CC^r\to L^2_+(M,S)\otimes \CC^r\]
(as in section \ref{sec2}) and
\[ T_{\tilde{\theta}} : \Hardy\otimes \CC^r\to \Hardy\otimes \CC^r\]
(as in section \ref{sec3}) have the same index,
\[ \mathrm{Index}\, T_\theta = \mathrm{Index}\, T_{\tilde{\theta}}\]
\end{proposition}
\begin{proof}
For simplicity, assume first that $r=1$.
The projections $P_\natural$ and $P_+$ differ by a compact operator, because they have the same principal symbol.
This implies that
\[ \mathrm{Index}\, T_\theta = \mathrm{Index}\, T_\theta^\natural\]
The proposition now follows from Corollary \ref{corc}.
For $r>1$, the evident generalizations of Proposition \ref{propa}, Proposition \ref{propb}, and Corollary \ref{corc} are valid, and Proposition \ref{propd} follows.
\end{proof}
\section{Bordism invariance of the index}
In this section we prove bordism invariance of the index of Toeplitz operators,
based on the results of sections \ref{sec3} and \ref{sec4}.
\begin{proposition}
Let $\Omega$ be a compact even dimensional Spin$^c$ manifold with boundary $M$.
Let $\tilde{E}$ be a smooth $\CC$ vector bundle on $\Omega$, and $\tilde{\alpha}$ an automorphism of $\tilde{E}$.
Denote by $(E,\alpha)$ the restriction of $(\tilde{E}, \tilde{\alpha})$ to $M$,
and by $T_\alpha=P_+^E\MM_\alpha$ the Toeplitz operator determined by $\alpha$,
\[ T_\alpha : L^2_+(M,S\otimes E)\to L^2_+(M,S\otimes E)\]
Then
\[ \mathrm{Index}\, T_\alpha = 0\]
\end{proposition}
\begin{proof}
Choose $\tilde{F}$ such that $\tilde{E}\oplus \tilde{F}\cong \Omega\times \CC^r$ is trivial.
Then the Toeplitz operator $T_{\alpha \oplus I_F}$ determined by the automorphism $\alpha\oplus I_F$
of $E\oplus F$ has the same index as $T_\alpha$.
Thus, it suffices to prove the proposition for trivial vector bundles $E$.
Hence we assume that $\tilde{\alpha}$ and $\alpha$ are matrix valued functions,
\[ \tilde{\alpha}:\Omega\to GL(r,\CC) \qquad \alpha: M\to GL(r,\CC)\]
Note that $\tilde{\alpha}(x)$ is invertible for all $x\in \Omega$.
By Proposition \ref{propd}, it suffices to show that
\[\mathrm{Index}\, T_{\tilde{\alpha}}=0\]
where $T_{\tilde{\alpha}}$
is the Toeplitz operator determined by $\tilde{\alpha}$,
\[ T_{\tilde{\alpha}} : \Hardy\otimes \CC^r\to \Hardy\otimes \CC^r\]
There is the direct sum decomposition
\[ L^2(\Omega, S^+) \otimes \CC^r= \mathcal{N}(\bar{D}_\Omega)\otimes \CC^r \oplus \mathcal{N}(\bar{D}_\Omega)^\perp\otimes \CC^r\]
With respect to this decomposition,
the multiplication operator $\mathcal{M}_{\tilde{\alpha}}=\mathcal{M}^+_{\tilde{\alpha}}$ is a $2\times 2$ matrix
\[ \mathcal{M}^+_{\tilde{\alpha}}
=\left(\begin{array}{cc}T_{\tilde{\alpha}}&A\\B&S_{\tilde{\alpha}}\end{array}\right)
\]
where $A$ and $B$ are compact operators by Corollary \ref{comcomp}.
Since $\mathcal{M}_{\tilde{\alpha}}$ is invertible, $T_{\tilde{\alpha}}$ and $S_{\tilde{\alpha}}$ are Fredholm operators, and
\[ \mathrm{Index}\, T_{\tilde{\alpha}} + \mathrm{Index}\, S_{\tilde{\alpha}} = \mathrm{Index}\, \mathcal{M}^+_{\tilde{\alpha}} = 0\]
Thus it will suffice to show that
\[ \mathrm{Index}\, S_{\tilde{\alpha}} = 0\]
As in section \ref{sec3} above, let $V$ be the partial isometry determined by $\bar{D}_\Omega = V |\bar{D}_\Omega|$,
where $V$ has the same kernel as $\bar{D}_\Omega$.
Denote $V_r=V\otimes I_r$.
Consider the diagram
\[ \xymatrix{ \mathcal{N}(\bar{D}_\Omega)^\perp \otimes \CC^r \ar[r]^{S_{\tilde{\alpha}}}\ar[d]_{V_r} & \mathcal{N}(\bar{D}_\Omega)^\perp\otimes \CC^r \ar[d]^{V_r}& \\
L^2(\Omega, S^-)\otimes \CC^r\ar[r]^{\mathcal{M}^-_{\tilde{\alpha}}}& L^2(\Omega, S^-)\otimes \CC^r}
\]
In this diagram $V_r$ is a Fredholm operator.
The diagram commutes modulo compact operators, because
\[ V_rS_{\tilde{\alpha}} = V_r(I-Q_r)\mathcal{M}^+_{\tilde{\alpha}} \sim V_r\mathcal{M}^+_{\tilde{\alpha}}(I-Q_r)=V_r\mathcal{M}^+_{\tilde{\alpha}}\]
where $\sim$ denotes equality modulo compact operators.
Finally
\[ V_r\mathcal{M}^+_{\tilde{\alpha}} \sim \mathcal{M}^-_{\tilde{\alpha}} V_r\]
follows from
\[ T\mathcal{M}^+_{\tilde{\alpha}} \sim \mathcal{M}^-_{\tilde{\alpha}} T\]
where $T=\bar{D}_\Omega(I+\bar{D}_\Omega^*\bar{D}_\Omega)^{-1/2}$
(as in the proof of Proposition \ref{prop:commutator1}),
and the fact that $T-V$ is compact.
Thus, the Fredholm operators $V_rS_{\tilde{\alpha}}$ and $\mathcal{M}^-_{\tilde{\alpha}}V_r$ have the same index, and additivity of the index implies
\[ \mathrm{Index}\, S_{\tilde{\alpha}} = \mathrm{Index}\,\mathcal{M}^-_{\tilde{\alpha}}\]
Since $\mathcal{M}^-_{\tilde{\alpha}}$ is an invertible operator, it has index zero.
\end{proof}
\section{The product lemma}
Let $M_1$ and $M_2$ be two closed even dimensional spin$^c$ manifolds with Dirac operators $D_1, D_2$.
The Dirac operator of $M_1\times M_2$
is the sharp product,
\[ D_{M_1\times M_2} = D_1 \# D_2 =\left(\begin{matrix}D_1\otimes I & -I\otimes D_2^*\\ I\otimes D_2& D_1^*\otimes I\end{matrix}\right)\]
where the $2\times 2$ matrix is an operator from the positive spinors
\[ S^+_{M_1\times M_2} = (S_1^+\boxtimes S_2^+)\oplus (S_1^-\boxtimes S_2^-)\]
to the negative spinors
\[ S^-_{M_1\times M_2} = (S_1^-\boxtimes S_2^+)\oplus (S_1^+\boxtimes S_2^-)\]
It is straightforward to derive from this formula that
\[ \mathrm{Index}\, D_{M_1\times M_2} = \mathrm{Index}\, D_1\,\cdot \, \mathrm{Index}\, D_2\]
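Indeed, for $D=D_1\# D_2$ the off-diagonal terms of $D^*D$ and $DD^*$ cancel, for example
\[ D^*D = \left(\begin{matrix} D_1^*D_1\otimes I + I\otimes D_2^*D_2 & 0\\ 0 & D_1D_1^*\otimes I + I\otimes D_2D_2^* \end{matrix}\right)\]
so the kernels split as
\[ \mathrm{Ker}\,D = (\mathrm{Ker}\,D_1\otimes \mathrm{Ker}\,D_2)\oplus(\mathrm{Ker}\,D_1^*\otimes \mathrm{Ker}\,D_2^*)\]
\[ \mathrm{Ker}\,D^* = (\mathrm{Ker}\,D_1^*\otimes \mathrm{Ker}\,D_2)\oplus(\mathrm{Ker}\,D_1\otimes \mathrm{Ker}\,D_2^*)\]
and taking dimensions gives the product formula.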
In this section we prove the analogue of this product formula
in the case when $M_1$ is odd dimensional and $M_2$ is even dimensional.
\begin{proposition}\label{Prod}
Let $M$ be a closed odd dimensional spin$^c$ manifold,
and let $E$, $\alpha$, $\mathcal{M}_\alpha$, $T_\alpha$ be as above.
Let $W$ be a closed even dimensional spin$^c$ manifold,
and $F$ a smooth $\CC$ vector bundle on $W$.
On the odd dimensional spin$^c$ manifold $M\times W$,
let $T_{\alpha\otimes I_F}$ be the Toeplitz operator associated to the automorphism $\alpha\otimes I_F$ of the vector bundle $E\boxtimes F$.
Then
\[ \mathrm{Index}\, (T_{\alpha\otimes I_F}) = \mathrm{Index}\, T_\alpha \,\cdot \, \mathrm{Index}\, (D_{W}\otimes F)\]
where $D_W\otimes F$ is the Dirac operator of $W$ twisted by $F$.
\end{proposition}
\begin{proof}
Let $A=D_M\otimes E$ denote the Dirac operator of $M$ twisted by $E$,
and $B=D_W\otimes F$ the Dirac operator of $W$ twisted by $F$.
$B=B^+\oplus B^-$ is graded.
The spinor bundle of $M\times W$ is
\[ S_{M\times W} = S_M\boxtimes S_W = (S_M\boxtimes S_W^+)\oplus (S_M\boxtimes S_W^-)\]
where $S_M, S_W=S_W^+\oplus S_W^-$ are the spinor bundles of $M, W$.
The Dirac operator of $M\times W$ twisted by $E\otimes F$ is
\[ D = \left(\begin{matrix}A\otimes I & I\otimes B^-\\ I\otimes B^+& -A\otimes I\end{matrix}\right)\]
acting on the direct sum
\[ [(S_M\otimes E) \boxtimes (S_W^+\otimes F)]\oplus [(S_M\otimes E) \boxtimes (S_W^-\otimes F)]\]
Consider the polar decomposition $B=V|B|$, where $V$ is a partial isometry with the same kernel as $B$.
Let $U$ be the operator
\[ U = \left(\begin{matrix}0& -I\otimes V^-\\ I\otimes V^+& 0\end{matrix}\right)\]
$U$ anticommutes with $D$ and commutes with $\mathcal{M}_{\alpha\otimes I_F}$,
\[UD=-DU\qquad U\mathcal{M}_{\alpha\otimes I_F}=\mathcal{M}_{\alpha\otimes I_F}U\]
$V$ restricts to a unitary operator on $(\mathrm{Ker}\, B)^\perp$,
and thus $U$ restricts to a unitary operator on $L^2(S_M^E) \otimes (\mathrm{Ker}\, B)^\perp$.
View the Hilbert space $L^2((S_M\otimes E)\boxtimes (S_W\otimes F))$ as the direct sum
\[ [L^2(S_M^E) \otimes \mathrm{Ker}\, B^+]\oplus [L^2(S_M^E) \otimes \mathrm{Ker}\, B^-]\oplus [L^2(S_M^E) \otimes (\mathrm{Ker}\, B)^\perp]\]
where $S_M^E=S_M\otimes E$.
The multiplication operator $\mathcal{M}_{\alpha\otimes I_F}$ is
\[ \mathcal{M}_{\alpha\otimes I_F} =
\left(\begin{matrix}
\mathcal{M}_\alpha \otimes I_{{\mathrm Ker}\, B^+} & 0&0\\
0 & \mathcal{M}_\alpha \otimes I_{{\mathrm Ker}\, B^-} & 0\\
0& 0& \mathcal{M}_\alpha \otimes I_{({\mathrm Ker}\, B)^\perp}
\end{matrix}\right)\]
The three summands are invariant spaces for $D$, and the positive space of $D$ is
\[ [L^2_+(S_M^E) \otimes \mathrm{Ker}\, B^+]\oplus [L^2_-(S_M^E) \otimes \mathrm{Ker}\, B^-]\oplus H\]
where $H$ is a closed linear subspace of $L^2(S_M^E) \otimes (\mathrm{Ker}\, B)^\perp$.
Thus the Toeplitz operator $T_{\alpha\otimes I_F}$ is of the form
\[ T_{\alpha\otimes I_F} =
\left(\begin{matrix}
T_\alpha \otimes I_{{\mathrm Ker}\, B^+} & 0 & 0\\
0 & T^-_\alpha \otimes I_{{\mathrm Ker}\, B^-} & 0\\
0 & 0& Q
\end{matrix}\right)\]
where $T^-_\alpha$ is the compression of $\mathcal{M}_\alpha$ to $L^2_-(S^E_M)$,
and $Q$ is the restriction of $T_{\alpha\otimes I_F}$ to $H$.
Since
\[ \mathrm{Index}\, T^-_\alpha = - \mathrm{Index}\, T_\alpha\]
it follows that
\begin{align*}
\mathrm{Index}\, T_{\alpha\otimes I_F} &= \mathrm{Index}\, T_\alpha \,\cdot\,\mathrm{dim\, Ker}\,B^+ + \mathrm{Index}\, T^-_\alpha\,\cdot\, \mathrm{dim\, Ker}\,B^-+\mathrm{Index}\,Q\\
& = \mathrm{Index}\, T_\alpha\,\cdot\, \mathrm{Index}\,B+\mathrm{Index}\,Q
\end{align*}
On the third summand $L^2(S_M^E) \otimes (\mathrm{Ker}\, B)^\perp$ we have
\[ UDU^*=-D\qquad U\mathcal{M}_{\alpha\otimes I_F}U^*=\mathcal{M}_{\alpha\otimes I_F}\]
Therefore the compression of $\mathcal{M}_{\alpha\otimes I_F}$
to the positive space (i.e. the operator $Q$) has the same index as the compression to the negative space.
Hence both are zero, i.e. $\mathrm{Index}\,Q=0$.
\end{proof}
\section{Vector bundle modification}
With $M, E, \alpha$ as above, let $F\to M$ be a spin$^c$ vector bundle on $M$ with even fiber dimension $n=2r$.
Part of the Spin$^c$ datum of $F$ is a principal $\Spinc$ bundle $P$ on $M$,
\[F=P\times_\Spinc \RR^n\]
where $\Spinc$ acts on $\RR^n$ via the map $\Spinc\to \SO$.
The Bott generator vector bundle $\beta$ on $S^n\subset \RR^n\times \RR$ is $\Spinc$ equivariant,
where $\Spinc$ acts on the first factor $\RR^n$.
Therefore, associated to $P$ we have a fiber bundle $\pi\,\colon \Sigma F \to M$ whose fibers are oriented spheres of dimension $n$,
\[ \Sigma F = P\times_\Spinc S^n\]
and a vector bundle on $\Sigma F$
\[ \beta_F = P\times_\Spinc \beta\]
Since $M$ is an odd dimensional Spin$^c$ manifold, the total space of $F$ is an odd dimensional Spin$^c$ manifold, because $TF = \pi^*F\oplus \pi^*TM$,
and the direct sum of two Spin$^c$ vector bundles is Spin$^c$.
Every trivial bundle is Spin$^c$, so the total space of $F\oplus \underline{\RR}$ is Spin$^c$.
$\Sigma F$ is a Spin$^c$ manifold as the boundary of the unit ball bundle of $F\oplus \underline{\RR}$.
On the odd dimensional Spin$^c$ manifold $\Sigma F$ we have a Toeplitz operator $T_{\tilde \alpha}$,
where $\tilde{\alpha}$ is the automorphism $\pi^*\alpha\otimes I$ of the vector bundle $\pi^*E\otimes \beta_F$.
Here $\pi:\Sigma F\to M$ is the projection.
\begin{proposition}\label{VM}
\[ \mathrm{Index}\, T_\alpha = \mathrm{Index}\,T_{\tilde\alpha}\]
\end{proposition}
The proof of Proposition \ref{VM} is a straightforward generalization of the proof of Proposition \ref{Prod},
and uses the following basic fact about the Dirac operator of an even dimensional sphere.
\begin{proposition}\label{index1}
If $D$ is the Dirac operator of the even dimensional sphere $S^n$
with the Spin$^c$ structure it receives as the boundary of the unit ball in $\RR^{n+1}$,
and $D_\beta$ denotes $D$ twisted by the Bott generator vector bundle $\beta$, then $D_\beta$ has one dimensional kernel and zero cokernel, and so
\[ \mathrm{Index} \;D_\beta = 1\]
Moreover, $D_\beta$ is equivariant for the group Spin$^c(n+1)$ which acts by orientation preserving isometries on $S^n$, and this group acts trivially on the kernel of $D_\beta$.
\end{proposition}
\begin{proof}
Recall that if $V$ is a finite dimensional vector space, then there is a canonical nonzero element in $V\otimes V^*$,
which maps to the identity map under the isomorphism $V\otimes V^*\cong \mathrm{Hom}(V,V)$.
On any even dimensional Spin$^c$ manifold $M$, there is a canonical isomorphism of vector bundles
\[ S\otimes S^* \cong \Lambda_\CC TM\]
where $S$ is the spinor bundle of $M$.
This is implied by the fact that, as representations of $\Spinc$,
\[ \CC^{2^r}\otimes (\CC^{2^r})^*\cong \Lambda_\CC \RR^n\]
Via this isomorphism, $D_{S^*}$ identifies (up to lower order terms) with $d+d^*$,
where $d$ is the de Rham operator, and $d^*$ its formal adjoint.
Note that the kernel of $D_{S^*}$ is the same as the kernel of $D_{S^*}^2=(d+d^*)^2$,
i.e. it consists of harmonic forms.
On $S^n$ the only harmonic forms are the constant functions, and scalar multiples of the standard volume form.
Because the Bott generator vector bundle $\beta$ is dual to $S^+$,
$S^+\otimes \beta\cong \mathrm{Hom}(S^+,S^+)$ contains a trivial line bundle,
which identifies with the line bundle in $\Lambda^0\oplus \Lambda^n$
spanned by $(1,\omega)$, where $\omega$ is the standard volume form.
This follows from representation theory.
Thus, the kernel of $D_\beta$ is the one dimensional vector space spanned by the harmonic forms $c+c\omega$, $c\in \CC$.
The intersection of $\Lambda^0\oplus \Lambda^n$ with $S^-\otimes \beta$ is zero,
and so the cokernel of $D_\beta$ is zero.
\end{proof}
\begin{remark} $D_\beta$ is the (positive) half-signature operator of $S^n$ (with $n$ even).
\end{remark}
\begin{proof}[Proof of Proposition \ref{VM}]
If $F$ is a trivial bundle, then $\Sigma F=M\times S^n$
and Proposition \ref{VM} is a special case of Proposition \ref{Prod}.
In general, $\Sigma F$ is a sphere bundle over $M$.
The proof is essentially the same as the proof of Proposition \ref{Prod}, with the following modifications.
$D_\beta$ is equivariant for the action of the structure group of the sphere bundle $\Sigma F$.
Therefore, there is a well-defined ``vertical'' operator $B$ on $\Sigma F$ which in each local trivialization of the sphere bundle $\Sigma F$ is $I\otimes D_\beta$.
Next, we construct a first order differential operator $A$ on $\Sigma F$
which is a ``lift'' of $D_M\otimes E$ from $M$ to $\Sigma F$.
Choose a finite open cover $\{U_j\}$ of $M$,
such that the fiber bundle $\Sigma F$ restricted to each $U_j$ has been trivialized,
\[ \Sigma F|U_j \cong U_j\times S^n\]
Let $A_j$ be the evident lift of $D_M\otimes E$ from $U_j$ to the product $U_j\times S^n$,
\[A_j:C^\infty(\pi^*((S^+\otimes E)|U_j))\to C^\infty(\pi^*((S^-\otimes E)|U_j)) \]
$A_j$ differentiates in the $U_j$ direction.
Let $\{\varphi_j\}$ be a smooth partition of unity on $M$ subordinate to the cover $\{U_j\}$.
Using this partition of unity and the local trivializations $\Sigma F|U_j \cong U_j\times S^n$,
we construct the lift $A$ as
\[ A := \sum A_j(\varphi_j\circ \pi) :C^\infty(\pi^*(S_1^+\otimes E))\to C^\infty(\pi^*(S_1^-\otimes E)) \]
Note that $A$ and $B$ commute.
Now, the sharp product formula for the Dirac operator of $\Sigma F$ twisted by $\pi^*E\otimes \beta_F$ is
\[ D = \left(\begin{matrix}A & B^-\\ B^+& -A\end{matrix}\right)\]
We can construct the partial isometry $U$ as in the proof of Proposition \ref{Prod},
with the properties
\[UD=-DU\qquad U\mathcal{M}_{\alpha\otimes I}=\mathcal{M}_{\alpha\otimes I}U\]
$D$ and $\mathcal{M}_{\alpha\otimes I}$ act on the Hilbert space $\mathcal{H}=L^2(S_{\Sigma F}\otimes \pi^*E\otimes \beta_F)$.
The Hilbert space $L^2(S_{S^n}\otimes \beta)$ on which $D_\beta$ acts is a direct sum $\mathrm{Ker}D_\beta\oplus(\mathrm{Ker}D_\beta)^\perp$.
The summands are invariant for the structure group $SO(n+1)$ of the sphere bundle $\Sigma F$.
If we view $\mathcal{H}$ as the Hilbert space of $L^2$-sections in a field of Hilbert spaces over $M$,
the orthogonal decomposition of its fibers gives rise to an orthogonal decomposition $\mathcal{H}=\mathcal{H}_1\oplus \mathcal{H}_2$.
Due to the commuting of $A$ and $B$,
the subspaces $\mathcal{H}_1$ and $\mathcal{H}_2$ are invariant for $D$ as well as for $\mathcal{M}_{\alpha\otimes I}$,
and $U$ restricts to a unitary on $\mathcal{H}_2$.
The Toeplitz operator $T_{\tilde{\alpha}}=T_1\oplus T_2$ has a corresponding direct sum decomposition,
where $T_j$ acts on a subspace of $\mathcal{H}_j$.
As in the proof of Proposition \ref{Prod}, $\mathrm{Index}\,T_2=0$
because $\mathrm{Index}\,UT_2U^* = -\mathrm{Index} \,T_2$.
Due to the fact that the structure group acts trivially on $\mathrm{Ker}D_\beta$,
the first summand $\mathcal{H}_1$ identifies canonically with $L^2(S_M\otimes E)$,
and $T_1$ identifies with $T_\alpha$. Thus,
\[ \mathrm{Index} \,T_{\tilde{\alpha}}= \mathrm{Index} \,T_1+ \mathrm{Index} \,T_2= \mathrm{Index} \,T_\alpha\]
\end{proof}
\begin{remark}
The canonical identifications $\mathcal{H}_1=L^2(S_M\otimes E)$ and $T_1=T_\alpha$ in the above proof are due to the fact that the kernels of the family of Dirac operators of the fibers of $\Sigma F$, twisted by $\beta_F$, form a trivial line bundle over $M$.
\end{remark}
\section{Bordism and vector bundle modification for the topological index}
Bordism invariance of the topological index follows immediately from Stokes' Theorem.
\begin{proposition}
Let $\Omega$ be a compact even dimensional Spin$^c$ manifold with boundary $M$.
Let $\tilde{E}$ be a smooth $\CC$ vector bundle on $\Omega$, and $\tilde{\alpha}$ an automorphism of $\tilde{E}$.
Denote by $(E,\alpha)$ the restriction of $(\tilde{E}, \tilde{\alpha})$ to $M$,
and by $T_\alpha=P_+^E\MM_\alpha$ the Toeplitz operator determined by $\alpha$,
\[ T_\alpha : L^2_+(M,S\otimes E)\to L^2_+(M,S\otimes E)\]
Then
\[ (\mathrm{ch}(E, \alpha)\cup \mathrm{Td}(M))[M] = 0\]
\end{proposition}
\begin{proof}
The restriction of the Spin$^c$ vector bundle $T\Omega$ to $M$
is the direct sum of the Spin$^c$ vector bundle $TM$ with a trivial line bundle (i.e. the normal bundle).
The Todd class is a stable characteristic class.
Therefore the cohomology class $\mathrm{Td}(M)$ is the restriction of $\mathrm{Td}(\Omega)$ to $M$.
Likewise, by naturality, $\mathrm{ch}(E, \alpha)$ is the restriction to $M$ of
$\mathrm{ch}(\tilde{E}, \tilde{\alpha})$.
The proposition now follows from Stokes' Theorem.
\end{proof}
Invariance of the topological index under vector bundle modification is
\[ (\mathrm{ch}(\pi^*E\otimes \beta_F, \pi^*\alpha\otimes I)\cup \mathrm{Td}(\Sigma F))[\Sigma F] =(\mathrm{ch}(E,\alpha)\cup \mathrm{Td}(M))[M]\]
Note that
\[\mathrm{ch}(\pi^*E\otimes \beta_F, \pi^*\alpha\otimes I) = \pi^*\mathrm{ch}(E,\alpha)\cup \mathrm{ch}(\beta_F)\]
As spin$^c$ vector bundles on $\Sigma F=S(F\oplus\underline{\RR})$,
\[ \underline{\RR} \oplus T(\Sigma F) \cong \pi^*F\oplus \underline{\RR} \oplus \pi^*(TM)\]
Multiplicativity of the Todd class implies
\[ \mathrm{Td}(\Sigma F) = \pi^*\mathrm{Td}(M)\cup \pi^*\mathrm{Td}(F)\]
Therefore invariance of the topological index under vector bundle modification follows from
\begin{proposition}\label{IF}
\[\pi_!\,\mathrm{ch}(\beta_F) = \frac{1}{\mathrm{Td}(F)}\]
where $\pi_!$ is integration along the fiber of $\pi\,\colon \Sigma F\to M$.
\end{proposition}
For a proof of Proposition \ref{IF}, see section 6.2 in \cite{BvE1}.
\section{Sphere lemma}
\begin{lemma}\label{sphere}
Let $M, E, \alpha$ be as above.
If $n=\mathrm{dim}\,M$, then for every odd integer $m\ge 2n+1$
there exists a continuous map $\tilde{\alpha}:S^{m}\to \mathrm{GL}(r,\CC)$
such that
\[ \mathrm{Index}\,T_\alpha = \mathrm{Index}\,T_{\tilde{\alpha}}\]
and
\[(\ch(E,\alpha)\cup \Td(M))[M]=\ch(\tilde{\alpha})[S^m]\]
\end{lemma}
\begin{remark}
Note that $\mathrm{Td}(S^m)=1$ if $m$ is odd.
\end{remark}
\begin{proof}
We shall prove the Lemma by moving, in a finite number of steps, from $(M,E,\alpha)$ to $(S^m,\underline{\CC}^r,\tilde{\alpha})$.
Each step preserves the analytic index as well as the topological index.
We denote equality of both the analytic and topological index as an equivalence $\sim$ of triples.
By the Whitney Embedding Theorem, $M$ embeds in $\RR^m$.
The 2-out-of-3 principle implies that the normal bundle $\nu$
\[ 0\to TM\to M\times \RR^m\to \nu\to 0\]
is spin$^c$ oriented.
Vector bundle modification by $\nu$ gives
\[ (M,E,\alpha)\sim (\Sigma\nu, \pi^*E\otimes \beta_\nu, \pi^*\alpha\otimes I)\]
By adding on a vector bundle with its identity automorphism, if necessary,
\[ (\Sigma\nu, \pi^*E\otimes \beta_\nu, \pi^*\alpha\otimes I) \sim (\Sigma\nu,\underline{\CC}^r,\alpha_1)\]
Since $M$ is compact we may assume that
$M$ is embedded in the interior of the unit ball of $\RR^{m}$.
Using the inclusion $\RR^{m}\to \RR^{m+1}$,
a compact tubular neighborhood of $M$ in $\RR^{m+1}$
identifies with the ball bundle $B(\nu\oplus \underline{\RR})$, whose boundary is $\Sigma \nu$.
Let $\Omega$ be the unit ball of $\RR^{m+1}$ with the interior of $B(\nu\oplus \underline{\RR})$ removed.
Then $\Omega$ is a bordism of spin$^c$ manifolds from $\Sigma \nu$ to $S^{m}$.
By Lemma \ref{L} below, there exist continuous maps
$\alpha_2:\Omega\to \mathrm{GL}(r,\CC), \alpha_3:B(\nu\oplus \underline{\RR})\to \mathrm{GL}(r,\CC)$, such that, when restricted to $\Sigma\nu$, $\alpha_3\alpha_1=\alpha_2$.
Then
\[ \mathrm{Index}\,T^{\Sigma\nu}_{\alpha_2}=\mathrm{Index}\,T^{\Sigma\nu}_{\alpha_3\alpha_1}=\mathrm{Index}\,T^{\Sigma\nu}_{\alpha_3}+\mathrm{Index}\,T^{\Sigma\nu}_{\alpha_1}\]
Since $\Sigma \nu$ is the boundary of $B(\nu\oplus \underline{\RR})$ and $\alpha_3$ is invertible on $B(\nu\oplus \underline{\RR})$,
\[ \mathrm{Index}\,T^{\Sigma\nu}_{\alpha_3}=0\]
Thus
\[ \mathrm{Index}\,T^{\Sigma\nu}_{\alpha_2}=\mathrm{Index}\,T^{\Sigma\nu}_{\alpha_1}\]
The same argument applies to the topological index, and we obtain
\[ (\Sigma\nu,\underline{\CC}^r,\alpha_1)\sim(\Sigma\nu,\underline{\CC}^r,\alpha_2)\]
Finally, the bordism $(\Omega,\alpha_2)$ gives
\[(\Sigma\nu,\underline{\CC}^r,\alpha_2)\sim(S^m,\underline{\CC}^r,\alpha_2)\]
\end{proof}
\begin{lemma}\label{L}
Let the unit ball in $\RR^n$ be the union of two compact subsets $B, \Omega$ with $\Sigma=B\cap \Omega$.
Given a continuous map $\alpha:\Sigma\to \mathrm{GL}(r,\CC)$, there exist continuous maps $\alpha_1:B\to \mathrm{GL}(r,\CC)$, $\alpha_2:\Omega\to \mathrm{GL}(r,\CC)$ such that, when restricted to $\Sigma$, $\alpha_1\alpha=\alpha_2$.
\end{lemma}
\begin{proof}
Let $E$ be the vector bundle on the unit ball in $\RR^n$
obtained by clutching trivial bundles on $B$ and $\Omega$ via $\alpha$,
\[ E = \Omega\times \CC^r\sqcup B\times \CC^r/\sim\qquad (p,v)\sim (p,\alpha(p)v)\quad p\in \Sigma\]
Because the unit ball is contractible, $E$ can be trivialized.
A trivialization of $E$ amounts to a choice of continuous maps $\alpha_1:B\to \mathrm{GL}(r,\CC)$, $\alpha_2:\Omega\to \mathrm{GL}(r,\CC)$
such that $\alpha_1(p)\alpha(p)v=\alpha_2(p)v$ for all $p\in \Sigma$, $v\in \CC^r$.
\end{proof}
\section{Proof of the index theorem}
Bott periodicity is the following statement about the homotopy groups of $\mathrm{GL}(n,\CC)$ \cite{Bo59},
\[ \pi_j\,\mathrm{GL}(n,\CC)\cong\left\{
\begin{array}{ll} \ZZ & j\;\mbox{odd}\\ 0& j\;\mbox{even} \end{array}
\right.\qquad j=0,1,2,\dots, 2n-1\]
\begin{proposition}\label{sphere2}
Let $S^m$ be an odd dimensional sphere.
If $\alpha:S^m\to \mathrm{GL}(r,\CC)$ is a continuous map
then
\[ \ind T_\alpha = \ch(\alpha)[S^m]\]
\end{proposition}
\begin{proof}
Assume $r\ge (m+1)/2$, so that $\pi_m\mathrm{GL}(r,\CC)\cong \ZZ$.
If not, replace $r$ with $r'\ge (m+1)/2$, and $\alpha$ with $\alpha\oplus I_{r'-r}$.
Due to homotopy invariance of the index, the analytic and topological index determine maps
\[ \phi:\pi_m\,\mathrm{GL}(r,\CC)\to \ZZ\qquad \phi([\alpha]) = \mathrm{Index}\,T_\alpha\]
\[ \psi:\pi_m\,\mathrm{GL}(r,\CC)\to \QQ\qquad \psi([\alpha]) = \ch(\alpha)[S^m]\]
Note that if $\alpha_1,\alpha_2:S^m\to \mathrm{GL}(r,\CC)$ represent two elements in $\pi_m \mathrm{GL}(r,\CC)$,
their product in $\pi_m \mathrm{GL}(r,\CC)$ can be represented by the map $p\mapsto \alpha_1(p)\alpha_2(p)$.
Since
\[ \mathrm{Index}\,T_{\alpha_1\alpha_2}=\mathrm{Index}\,T_{\alpha_1}+\mathrm{Index}\,T_{\alpha_2}
\qquad \ch(\alpha_1\alpha_2)=\ch(\alpha_1)+\ch(\alpha_2)\]
it follows that $\phi:\ZZ\to \ZZ$ and $\psi:\ZZ\to \QQ$ are homomorphisms of abelian groups.
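Here additivity of the analytic index follows from compactness of $T_{\alpha_1\alpha_2}-T_{\alpha_1}T_{\alpha_2}$ (as the commutators $[P_+,\mathcal{M}_{\alpha_i}]$ are compact, $P_+$ being a pseudodifferential operator of order zero):
\[ \mathrm{Index}\,T_{\alpha_1\alpha_2} = \mathrm{Index}\,(T_{\alpha_1}T_{\alpha_2}) = \mathrm{Index}\,T_{\alpha_1}+\mathrm{Index}\,T_{\alpha_2}\]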
By Theorem \ref{thm:Noether}, for $f:S^1\to \CC\setminus \{0\}, f(z)=z^{-1}$,
\[ \mathrm{Index}\,T_f=\ch(f)[S^1]=1\]
Then by Lemma \ref{sphere}
there exists $\alpha:S^m\to \mathrm{GL}(r,\CC)$ with
\[ \mathrm{Index}\,T_\alpha=\ch(\alpha)[S^m]=1\]
Since $\phi$ is a homomorphism with values in $\ZZ$ and $\phi([\alpha])=1$, the class $[\alpha]$ is a generator of $\pi_m \mathrm{GL}(r,\CC)\cong \ZZ$.
The homomorphisms $\phi$ and $\psi$ thus agree on a generator, and therefore agree on all of $\pi_m \mathrm{GL}(r,\CC)$.
\end{proof}
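As an aside, the scalar base case used above is easy to explore numerically: the index of the classical Toeplitz operator $T_f$ on the circle is minus the winding number of its symbol, so $f(z)=z^{-1}$ gives index $1$. The following is a minimal illustrative sketch (not part of the proof; the helper \texttt{winding\_number} is ours), computing the winding number as the total increment of the argument of the symbol around the circle:

```python
import numpy as np

def winding_number(f, samples=4096):
    """Winding number of a nowhere vanishing f : S^1 -> C,
    computed as the total increment of arg(f) around the circle."""
    t = np.linspace(0.0, 2.0 * np.pi, samples + 1)  # closed parametrization
    w = f(np.exp(1j * t))
    phase = np.unwrap(np.angle(w))  # continuous branch of the argument
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

# f(z) = z^{-1} has winding number -1, so Index T_f = -(-1) = 1,
# matching ch(f)[S^1] = 1 for this generator.
print(winding_number(lambda z: 1.0 / z))  # -1
print(winding_number(lambda z: z**3))     # 3
```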
\begin{remark}
For an alternate approach to this proof, using Bott periodicity combined with a direct calculation, see Venugopalkrishna \cite{V72}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Thm}]
By Proposition \ref{sphere2}, Theorem \ref{Thm} holds for odd dimensional spheres $M=S^m$.
Then by Lemma \ref{sphere}, Theorem \ref{Thm} holds for all odd dimensional spin$^c$ manifolds $M$.
\end{proof}
\section{Boutet de Monvel's theorem}\label{sec:BdM}
Let $\tilde{\Omega}$ be a complex analytic manifold, and $\Omega^0\subset \tilde{\Omega}$ a relatively compact open submanifold with smooth boundary $M=\partial \Omega^0$.
Choose a defining function of the boundary $\rho:\tilde{\Omega}\to \RR$ with $\Omega^0=\rho^{-1}((-\infty,0))$, $\Omega=\rho^{-1}((-\infty,0])$, $M=\rho^{-1}(0)$, and $d\rho(p)\ne 0$ for all $p\in M$.
The boundary of $\Omega^0$ is called strictly pseudoconvex if for every $p\in M$ and every holomorphic vector $w$ that is tangent to $M$,
\[w\in T^{1,0}_pM = T^{1,0}_p\tilde{\Omega}\cap (T_pM\otimes \CC)\]
we have
\[\partial\bar{\partial}\rho(w,\bar{w})> 0\qquad \text{if}\;w\ne 0\]
Strict pseudoconvexity is biholomorphically invariant.
A domain $\Omega^0$ in $\CC^n$ that is strictly convex in the Euclidean sense is strictly pseudoconvex.
The Hardy space $H^2(M)$ is the space of $L^2$-functions on $M$ that extend to a holomorphic function on $\Omega$.
The Szeg\"o projection $S$ is the orthogonal projection
\[ S:L^2(M)\to H^2(M)\]
For a continuous map $\alpha:M\to \mathrm{GL}(r,\CC)$, let $\mathcal{M}_\alpha$ be the corresponding multiplication operator
on $L^2(M)\otimes \CC^r$.
The Toeplitz operator $\mathfrak{T}_\alpha$ is the composition of $\mathcal{M}_\alpha$
with $S\otimes I_r$,
\[ \mathfrak{T}_\alpha =(S\otimes I_r)\mathcal{M}_\alpha : H^2(M)\otimes \CC^r\to H^2(M)\otimes \CC^r\]
$\mathfrak{T}_\alpha$ is a bounded Fredholm operator.
Boutet de Monvel's theorem is (Theorem 1 in \cite{Bo79}):
\begin{theorem}\label{BdM}
\[ \ind \mathfrak{T}_\alpha = (\ch(\alpha)\cup \Td(M))[M]\]
\end{theorem}
\begin{proof}
The complex manifold $\Omega$ is a spin$^c$ manifold,
and so is its boundary $M$.
\footnote{A strictly pseudoconvex boundary $M$ is a contact manifold, with contact $1$-form $(\partial\rho-\bar{\partial}\rho)|M$.
The spin$^c$ structure of $M$ as the boundary of $\Omega$ agrees with its spin$^c$ structure as a contact manifold.}
Using the projection $P_+$ onto the positive space of the Dirac operator of $M$, as in the introduction of this paper,
we form a Toeplitz operator
\[ T_\alpha = (P_+\otimes I_r)\mathcal{M}_\alpha:L^2_+(M,S)\otimes \CC^r\to L^2_+(M,S)\otimes \CC^r\]
Theorem \ref{BdM} follows from Theorem \ref{Thm}
if we can show that
\[ \ind \mathfrak{T}_\alpha = \ind T_\alpha\]
This is Proposition 4.6 of \cite{BDT89}.
An outline of the proof given in \cite{BDT89} is as follows. According to Proposition \ref{propd} above,
\[ \ind T_\alpha = \ind T_{\tilde{\alpha}}\]
where $\tilde{\alpha}:\Omega\to M_r(\CC)$ is any continuous function whose restriction to
the boundary $M=\partial\Omega$ is $\alpha$, and $T_{\tilde{\alpha}}$ is the Toeplitz operator,
\[ T_{\tilde{\alpha}} : \Hardy\otimes \CC^r\to \Hardy\otimes \CC^r\]
Here $\Hardy$ is the null-space of the maximal closed extension of $D_\Omega$, as in section \ref{sec3}.
Following section 3 of \cite{BDT89}, we now choose a different domain for the Dirac operator of $\Omega$,
using $\bar{\partial}$-Neumann boundary conditions.
The Dirac operator of $\Omega$ is the assembled Dolbeault complex,
\[\bar{\partial}+\bar{\partial}^*:C^\infty(\Omega,S^+)\to C^\infty(\Omega,S^-)\]
with positive and negative spinor bundles
\[ S^+= \bigoplus_{k\;\text{even}}\Lambda^kT^{0,1}\Omega\qquad S^-= \bigoplus_{k\;\text{odd}}\Lambda^kT^{0,1}\Omega\]
Here $C^\infty(\Omega, S^{+/-})$ denotes the space of spinors that are smooth up to and including the boundary. Let
\[ \mathcal{A}^k := \{u\in C^\infty(\Omega,\Lambda^kT^{0,1}\Omega)\;\mid\; \iota_{ \bar{\partial} \rho} u=0\;\text{on}\;M=\partial\Omega\}\]
where, as usual, $\iota$ denotes contraction.
Let $D_N$ denote the closure of $\bar{\partial}+\bar{\partial}^*$ restricted to $\bigoplus_{k\;\text{even}} \mathcal{A}^k$. Then
\[ D_\Omega\subseteq D_N\subseteq \bar{D}_\Omega\]
\begin{comment}
In section \ref{sec3} the Dirac operator $D_\Omega$ acts on $C^\infty$-sections with compact support,
\[D_\Omega :C^\infty_c(\Omega^0,S^+)\to C^\infty_c(\Omega^0,S^-)\qquad \Omega^0=\Omega\setminus M\]
Elliptic regularity implies that the null space $\mathcal{N}(\bar{D}_\Omega)$
consists of $L^2$-sections in the null space of $D^\dagger_\Omega$,
\[ \mathcal{N}(\bar{D}_\Omega) =\mathcal{N}(D^\dagger_\Omega)\cap L^2(\Omega,S^+)\]
Therefore the range of the Calderon projection $P_\natural$ (see section \ref{sec4}) is a direct sum of $H^2(M)$ and a finite dimensional space.
In summary,
the ranges of the projections $S$ and $P_\natural$ differ by a space of finite rank,
while $P_\natural$ and $P_+$ differ by a compact operator --- and so with the notation of section \ref{sec4},
\[ \ind \mathfrak{T}_\alpha = \ind T^\natural_\alpha=\ind T_\alpha\]
\end{comment}
By the change of domain principle of section 2 of \cite{BDT89},
the index of any Toeplitz operator obtained by using the projection onto the null space $\Hardy\otimes \CC^r$,
is equal to the index of the Toeplitz operator obtained by using projection onto the null space $\mathcal{N}(D_N)\otimes \CC^r$ (Proposition 3.3 in \cite{BDT89}).
Let $\Box=D_N^*D_N$ be the complex Laplacian acting on $(0,k)$-forms with $k$ even,
likewise $\Box=D_ND_N^*$ acting on $(0,k)$-forms with $k$ odd.
By a result of J.\ Kohn \citelist{\cite{Ko63} \cite{Ko64}}, if the boundary of $\Omega$ is strictly pseudoconvex, the operator $\Box$ has compact resolvent for $k\ne 0$.
This then implies that $\mathcal{N}(D_N^*)$ is finite dimensional,
and $\mathcal{N}(D_N)$ is at most a finite dimensional perturbation of
\[ H^\omega(\Omega) = \{u\in L^2(\Omega)\;\mid\; \bar{\partial}u=0\}\]
The projection $L^2(\Omega)\to H^\omega(\Omega)$ is the Bergman projection.
The index of any Toeplitz operator obtained by using projection onto the null space $\mathcal{N}(D_N)\otimes \CC^r$
is equal to the index of the Toeplitz operator obtained using the Bergman projection.
Finally, the transition from the Bergman projection (on $\Omega$) to the Szeg\"o projection $S$ (on $M=\partial\Omega$) is done by the same argument as in the proof of Proposition \ref{propd}.
In summary, the proof is
\[ P_+
\rightsquigarrow \text{Calderon}
\rightsquigarrow \mathcal{N}(\bar{D}_\Omega)
\rightsquigarrow \mathcal{N}(D_N)
\rightsquigarrow \text{Bergman}
\rightsquigarrow \text{Szeg\"o}
\]
\end{proof}
\begin{remark}
A crucial step in the proof of Theorem \ref{BdM} is the passage from spinors to functions (i.e.\ the step $\mathcal{N}(D_N)
\rightsquigarrow \text{Bergman}$),
which is done by applying a result of J.\ Kohn.
Compare this to the sheaf theoretic result that, if $\Omega$ is strictly pseudoconvex, then the sheaf cohomology $H^k(\Omega^0,\mathcal{O})$ is a finite dimensional complex vector space for $k>0$ (Proposition 4 in \cite{Gr58}).
Here $\mathcal{O}$ denotes the structure sheaf (germs of holomorphic functions) of $\Omega^0$.
$H^0(\Omega^0,\mathcal{O})$ is the space of holomorphic functions on $\Omega^0$.
$H^k(\Omega^0,\mathcal{O})$
identifies with the $k$-th cohomology of the Dolbeault complex.
\end{remark}
\bibliographystyle{abbrv}
% arXiv:2010.02906 --- ``The Index Theorem for Toeplitz Operators as a Corollary of Bott Periodicity'' (math.FA)
% arXiv:2109.11769 --- Non-Euclidean Self-Organizing Maps
\begin{abstract}
Self-Organizing Maps (SOMs, Kohonen networks) belong to neural network models of the unsupervised class. In this paper, we present the generalized setup for non-Euclidean SOMs. Most data analysts take it for granted to use some subregions of a flat space as their data model; however, by the assumption that the underlying geometry is non-Euclidean we obtain a new degree of freedom for the techniques that translate the similarities into spatial neighborhood relationships. We improve the traditional SOM algorithm by introducing topology-related extensions. Our proposition can be successfully applied to dimension reduction, clustering or finding similarities in big data (both hierarchical and non-hierarchical).
\end{abstract}
\section{Introduction}
Self-Organizing Maps (SOMs, also known as Kohonen networks) belong to neural network models of the unsupervised class allowing for dimension reduction in data without a significant loss of information. SOMs preserve the underlying topology of high-dimensional input and transform the information into a one- or two-dimensional layer of neurons.
The projection is nonlinear, and in the display, the clustering of the data space and the metric-topological relations of the data items are visible \cite{kohonen}.
In comparison to other techniques of reducing dimensionality, SOMs have many advantages. They do not impose any assumptions regarding the distributions of the variables and do not require independence among variables. They allow for solving non-linear problems; their applications are numerous, e.g., in pattern recognition (see, e.g., \cite{pattern}), brain studies \cite{brain_review,brain,brain_3} or biological modeling \cite{bio,tomatoes}. At the same time, they are relatively easy to implement and modify \cite{kohonen,soms}.
A typical setup for SOM assumes usage of a region of Euclidean plane. On the other hand, non-Euclidean geometries are steadily gaining attention of the data scientists \cite{tda,tda_chazal}. In particular, hyperbolic geometry has been proven useful in data visualization \cite{munzner}
and the modeling of scale-free networks \cite{hypgeo,papa}. This usefulness comes from the exponential growth property of hyperbolic geometry, which makes it much more appropriate than Euclidean geometry for modeling and visualizing hierarchical data. Since the idea of SOM is rooted in geometry, we can expect to gain new insights from non-Euclidean SOM setups. Surprisingly, there are nearly no attempts to do so. Even though there have been propositions to use hyperbolic geometry in SOMs \cite{ritter99,ontrup}, other possibilities of inclusion of non-Euclidean geometries and different topologies (e.g., spherical geometry, quotient spaces) have been neglected. There is also no research on characteristics of data that affect the quality of Self-Organizing Maps.
Against this background, our contributions in this paper can be summarized as follows:
\begin{itemize}
\item We are the first to present the generalized setup for non-Euclidean SOMs. Our proposition allows for usage of (so far neglected or absent) quotient spaces. In consequence, we get more regular and visually appealing results than the previous setups.
\item By using the Goldberg-Coxeter construction, our proposition allows for easy scalability of the templates. It also makes spheres a worthy counterpart for analysis -- we are no longer restricted to the platonic solids.
\item To our best knowledge, we are the first to extend SOM setup by non-Euclidean aspects other than the shape of the template. We introduce geometry-related adjustments in the dispersion function. Moreover, we show that our proposition improves the results in comparison to traditional Gaussian dispersion.
\item Our quantitative analysis proves that the shape of data matters for Self-Organizing Maps. We use measures of topology preservation from the literature\longonly{, as well as our own measures}.
\item The results of non-Euclidean SOMs have a natural interpretation. Usage of different geometries allows us to find and highlight various aspects of the data sets. E.g., spherical geometry allows for an easy examination of polarization, and hyperbolic geometry, due to its exponential growth, fosters finding similarities. This makes non-Euclidean SOMs suitable both as a stand-alone technique and as an auxiliary one to include in other models.
\end{itemize}
\section{Prerequisities}
\subsection{Non-Euclidean geometries}
Most data analysts take it for granted to use some subregions of a flat space as their data model, which means utilizing constructs which follow the principles of the Euclidean geometry.%
\longonly{However, the fifth axiom of this geometry is a problematic one and raises some questions about the nature of parallelness. Take a line $L$ and a~point $A$. According to Euclidean geometry principles, there is exactly one line going through $A$ which does not cross $L$. However, can there be more? Or less? \\}%
\longonly{We can find the answers to those questions in non-Euclidean geometries. The first, and probably the most famous one, is the hyperbolic geometry, discovered by Gauss, Lobachevsky, and Bolyai. In this case, there are infinitely many lines going through $A$ which do not cross $L$. One of the properties of this geometry is that the amount
of the area in the distance $d$ from a given point is exponential in $d$; intuitively,
the metric structure of the hyperbolic plane is similar to that of an infinite binary
tree, except that each vertex is additionally connected to two adjacent vertices on the same
level. \\
While hyperbolic geometry is not common in our world (typical examples include coral reef or lettuce), the second kind of non-Euclidean geometry is more common---that is, the geometry of the sphere. When we consider
great circles on the sphere (such as the equator, or the lines of constant longitude) to be straight lines, no lines go through $A$ which do not cross $L$.} %
\shortonly{However, by}\longonly{By}
the assumption that the underlying geometry is non-Euclidean, we obtain a~new degree of freedom for the techniques of analysis which translate the similarities into spatial neighborhood relationships \cite{ritter99}. Recall that formally, the Euclidean plane is the set of points $\{(x,y); x, y \in \mathbb{R}\}$, with the metric $d(a,b) = {||a-b||}$, where $||(x,y)||=\sqrt{x^2+y^2}$. The sphere $\mathbb{S}^2$ is the set of points $\{(x,y,z); x, y, z \in \mathbb{R}, x^2+y^2+z^2=1\}$, with the metric $d(a,b) = 2\,\mathrm{asin}(||a-b||/2)$, where $||(x,y,z)||=\sqrt{x^2+y^2+z^2}$.
The Minkowski hyperboloid $\mathbb{H}^2$ is the set of points $\{(x,y,z); x, y, z \in \mathbb{R}, x^2+y^2+1=z^2, z>0\}$, with the metric $d(a,b) = 2\,\mathrm{asinh}(||a-b||/2)$, where we use the Minkowski norm $||(x,y,z)||=\sqrt{x^2+y^2-z^2}$.
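These three metrics translate directly into code. A small standalone sketch (not tied to any SOM library) checks them on point pairs where the geodesic distance is known in closed form:

```python
import math

def d_euclid(a, b):
    # Euclidean plane: d(a, b) = ||a - b||
    return math.hypot(a[0] - b[0], a[1] - b[1])

def d_sphere(a, b):
    # unit sphere: chord length converted to arc length, 2*asin(||a-b||/2)
    chord = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 2 * math.asin(chord / 2)

def mink_norm_sq(v):
    # Minkowski quadratic form x^2 + y^2 - z^2
    return v[0] ** 2 + v[1] ** 2 - v[2] ** 2

def d_hyper(a, b):
    # Minkowski hyperboloid: 2*asinh(||a-b||_M / 2) with the Minkowski norm
    q = mink_norm_sq(tuple(x - y for x, y in zip(a, b)))
    return 2 * math.asinh(math.sqrt(max(q, 0.0)) / 2)

# a quarter of a great circle on the sphere
assert abs(d_sphere((1, 0, 0), (0, 1, 0)) - math.pi / 2) < 1e-12
# moving distance s along a geodesic of the hyperboloid recovers s
s = 1.7
p, q = (0.0, 0.0, 1.0), (math.sinh(s), 0.0, math.cosh(s))
assert abs(d_hyper(p, q) - s) < 1e-12
```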
\longonly{If we perceive the surface of the sphere in $\mathbb{R}^3$ as the ``true form'' of spherical geometry (Figure \ref{big} (b)), then the Minkowski hyperboloid should be a ``true form'' of hyperbolic geometry. However, this model may be unintuitive. Minkowski hyperboloid lives in the Minkowski space, defined by Minkowski metric. This means that if the points on the hyperbolic space are in the distance $d$ to each other, they will be in the distance $d$ to each other on the Minkowski hyperboloid, but only according to the Minkowski metric. According to the usual metric, they can be very distant even if $d$ is small! Figure \ref{big} (a) depicts such a situation; the heptagons which appear to be oblong are regular.}
\begin{figure}
\centering
\subfig{0.24\linewidth}{img/minkowski-green.pdf}
\subfig{0.22\linewidth}{img/sphere-green.pdf}
\subfig{0.22\linewidth}{img/nonbitrunc-green.pdf} \hskip -1mm
\subfig{0.22\linewidth}{img/bitrunc-green.pdf} \hskip -1mm
\caption{\label{big}
Representations of non-Euclidean geometries: (a) Minkowski hyperboloid; (b) Sphere.
Hyperbolic tessellations in Poincar\'e disk model: (c) order-3 heptagonal tiling, (d) bitruncated order-3 heptagonal tiling.
}
\end{figure}
All the geometries mentioned are characterized by constant curvature $K$: $K=0$ in the case of Euclidean plane, while in \longonly{the hyperbolic geometry}\shortonly{$\mathbb{H}^2$} we have $K<0$, and in \longonly{spherical geometry}\shortonly{$\mathbb{S}^2$} we have $K>0$.
\longonly{From the practical point of view, we live on a spherical sector (similar to the flat plane), which makes us more comfortable with imagining things in Euclidean rather than spherical
or hyperbolic way. The limitations of our senses make solving the problem of visualization with tangible results quite a challenge. However, the technology allows us to use computer simulations to
picture being inside a non-Euclidean space. There are numerous projections of non-Euclidean surfaces; here we will present popular examples.}
\longonly{\paragraph{Orthographic projection.} The surface of the sphere (Figure \ref{big} (b)) is an isometric 3D model of spherical geometry. To represent it in two dimensions, we need a~projection. In orthographic projection we project $(x,y,z)$ to $(x,y)$. The shapes and areas are distorted, particularly near the edges. For hyperbolic geometry, Gans model is orthographic.}
\longonly{\paragraph{Stereographic projection.} Stereographic projection projects the point $a$ of the unit sphere to the point $b$ on the plane $z=1$ such that $a$, $b$, and $(0,0,-1)$ are colinear.
This projection is conformal, i.e., it preserves angles at which curves meet. One of the widely used models of hyperbolic geometry, the Poincar\'e disk model, is the hyperbolic counterpart of the stereographic projection.
We can obtain Poincar{\'e} model from the Minkowski
hyperboloid model by viewing the Minkowski hyperboloid from $(0,0,-1)$ (Figure \ref{big}).}
Figure \ref{big} shows two tilings of \longonly{the hyperbolic plane}\shortonly{$\mathbb{H}^2$}, the order-3 heptagonal
tiling and its bitruncated variant, in the Poincar\'e disk model.
In the Poincar\'e disk model, points of \longonly{the hyperbolic plane}\shortonly{$\mathbb{H}^2$} are represented by points inside a disk.
\longonly{We can view the Poincar\'e model as a planar map of the hyperbolic
plane -- however, the scale of the map is not a constant: if a point $A$ of the
hyperbolic plane is represented as a point $A'$ such that the distance of $A'$ from
the boundary circle of the model is $d$ then the scale is roughly proportional to $d$.}
In the hyperbolic metric, all the triangles, heptagons and hexagons in each of the tessellations in Figure \ref{big} are actually of the same size,
and the points on the boundary of the disk are infinitely far from the center.
\subsection{Tessellations of non-Euclidean spaces}\label{tones}
Tessellations from Figure \ref{big} can be
\longonly{naturally }interpreted as metric spaces, where the points are the tiles,
and the distance $\delta(v,w)$ is the number of edges \longonly{we have to traverse}\shortonly{crossed} to reach $w$ from $v$.
Such metric spaces have properties similar to the underlying surface.
\def\sch#1#2{\{#1,#2\}}
\def\scht#1{\sch{3}{#1}}
\def\schq#1{\sch{4}{#1}}
\def\gp#1#2{GC_{#1,#2}}
\paragraph{Schl\"afli symbol} In a regular tessellation every face is a regular $p$-gon, and every degree has degree $q$ (we assume $p,q\geq 3$). We say that such a tessellation has a Schl\"afli symbol $\sch{p}{q}$. Such a tessellation exists on the sphere if and only if $(p-2)(q-2)<4$, plane if and only if $(p-2)(q-2)=4$, and hyperbolic plane if and only if $(p-2)(q-2) > 4$.
Contrary to the Euclidean tessellations, we cannot scale hyperbolic or spherical tessellations. On a hyperbolic plane of curvature -1, every face in a $\sch{q}{p}$ tessellation
will have area $\pi(q\frac{p-2}{p}-2)$. Thus, among hyperbolic tessellations of form $\sch{q}{3}$, $\sch{7}{3}$
is the finest, and they get coarser and coarser as $q$ increases. Regular spherical tessellations correspond to the platonic solids.
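Both the curvature trichotomy and the area formula are easy to check numerically. The following sketch uses the convention that $p$ is the number of sides of a face and $q$ the vertex degree, so the order-3 heptagonal tiling is queried as $(7,3)$:

```python
import math

def geometry(p, q):
    """Classify the regular tessellation with p-gon faces, vertex degree q."""
    s = (p - 2) * (q - 2)
    if s < 4:
        return "sphere"
    if s == 4:
        return "euclidean"
    return "hyperbolic"

def face_area(p, q):
    """Area of one p-gon face (vertex degree q) at curvature -1, via the
    angle-defect formula (p - 2)*pi - p*(2*pi/q)."""
    return (p - 2) * math.pi - p * (2 * math.pi / q)

assert geometry(4, 3) == "sphere"       # cube
assert geometry(4, 4) == "euclidean"    # square grid
assert geometry(7, 3) == "hyperbolic"   # order-3 heptagonal tiling
# heptagons, three per vertex: each face has area pi/3, the finest case
assert abs(face_area(7, 3) - math.pi / 3) < 1e-12
```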
\subsection{Self-Organizing Maps: general idea}\label{somdefault}
SOM network consists of two layers: the input layer containing the variables in the input data, and the output layer of the resulting clustering.
We describe every element in the input data $D$ using $k$ variables: $D \subseteq \mathbb{R}^k$. Each element $x \in D$ is a vector of values of the $k$ variables which serve as the basis for clustering.
\longonly{Similarly to other dimension-reduction techniques, if there are large differences in the values of variances of the variables in the dataset, standardization of the data is required in order to avoid the dominance of a~particular variable or the subset of variables.}
Neurons are traditionally arranged in a lattice.
For each neuron $i$ in the set of neurons we initialize the weight vector $w_i \in \mathbb{R}^k$. Weights are links that connect the input layer to the output layer. The final results may depend on the distribution of the initial weights \cite{soms}.
\longonly{The weights can be random, determined arbitrarily or obtained during a~preliminary training phase.}
\shortonly{We assign the initial weights randomly.}
The neurons need to be exposed to a~sufficient number of different inputs to ensure the quality of learning processes.
In a usual setup, the formation of the SOM is controlled by three parameters: the learning rate $\eta$, the number of iterations $t_{max}$, and the initial neighborhood radius $\sigma(t_{max})$.
Every iteration involves two stages\longonly{: competition and adaptation}.
\paragraph{Competition stage.} We pick $x_t \in D$.
The neurons compete to become activated. Only the node that is the most similar to the input data $x_t$ will be activated and later adjust the values of weights in their neighborhood.
\longonly{The Euclidean distance is a~generally accepted measure of distance, but other methods, e.g., Mahalanobis distance are also available.} For each neuron $i$ in the set of neurons we compute the value of the scoring function $d(w_i, x_t) = \|w_i - x_t\|$. The neuron for which the value of the scoring function is the lowest becomes the winning neuron.
\paragraph{Adaptation.} For a~given input, the winning neuron and its neighbors adapt their weights. The adjustments enhance the responses to the same or to a~similar input that occurs subsequently. This way the group of neurons specializes in attracting given pattern in input data.
The input data $x_t$ affects every other neuron $j$ with the factor of $d_{\sigma(t)}(r) = \eta\exp(-r^2/{2\sigma(t)^2})$, where $r$ is the distance between
the neuron $j$ and the winning neuron $i$, and $\sigma(t)$ is the neighborhood radius in the iteration $t$; we take
$\sigma(t) = \sigma(t_{max})(1-t/t_{max})$ \cite{kohonen,soms,ritter99,ontrup}.
This dispersion has a~natural interpretation in the Euclidean geometry. Imagine the information as particles spreading between
neurons according to the random walk model: each particle starts in neuron $i$, and in each of time steps, the information can randomly spread
(with probability $p$) to one of the adjacent neurons. From the Central Limit Theorem we know that the distribution of particles after $t$ time steps approximates the normal distribution with variance proportional to $t$, which motivates using the function $d_{\sigma(t)}(r)$. Heat conduction is a~well-known physical process which works according to \longonly{very }similar rules, but where time and space are \longonly{considered }continuous.
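The two stages can be sketched in a few lines of Python. This is a minimal illustration on a square Euclidean grid with the Gaussian dispersion $d_{\sigma(t)}(r)$; the grid size, learning rate, and random data are made up for the example:

```python
import math
import random

def som_step(weights, coords, x, t, t_max, eta=0.5, sigma0=2.0):
    """One iteration of the classical SOM on a Euclidean grid:
    competition picks the best-matching unit (BMU), then adaptation
    pulls every neuron toward x with a Gaussian factor of its grid
    distance to the BMU, using a radius that shrinks linearly in t."""
    dim = len(x)
    # competition: the BMU minimizes the score ||w_i - x||
    bmu = min(range(len(weights)),
              key=lambda i: sum((weights[i][d] - x[d]) ** 2 for d in range(dim)))
    # adaptation: sigma(t) = sigma0 * (1 - t / t_max)
    sigma = max(sigma0 * (1 - t / t_max), 1e-9)
    for j in range(len(weights)):
        r2 = sum((coords[j][d] - coords[bmu][d]) ** 2 for d in range(2))
        h = eta * math.exp(-r2 / (2 * sigma ** 2))
        for d in range(dim):
            weights[j][d] += h * (x[d] - weights[j][d])
    return bmu

# toy run: a 5x5 grid of neurons trained on random 3-dimensional inputs
random.seed(0)
coords = [(i, j) for i in range(5) for j in range(5)]
weights = [[random.random() for _ in range(3)] for _ in range(25)]
data = [[random.random() for _ in range(3)] for _ in range(200)]
for t, x in enumerate(data):
    som_step(weights, coords, x, t, t_max=len(data))
```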
\section{Our contribution}
The core idea of the SOM algorithm is using a~deformable template to translate data similarities into spatial relationships.
The overwhelming majority of SOM applications use subregions of Euclidean space.
Instead, we use non-Euclidean geometries to take advantage of their properties, such as the exponential growth of hyperbolic space.
While the basic idea has appeared in \cite{ritter99,ontrup}, we improve on it in the following ways.
\subsection{Choice of the tessellation}
Continuous computations can be costly and prone to precision errors. Continuity is also not always essential. Usually, SOMs are based on regular grids.
Ritter \cite{ritter99} argues that spherical tessellations are not useful in data analysis, because there are only five regular tessellations, namely platonic solids. Those solids are rather coarse and provide limited possibilities for manipulations of neighborhoods, even in comparison with the Euclidean surfaces.
Similarly, regular hyperbolic tessellations such as $\sch{7}{3}$ suffer because the exponential growth is too fast.
We combat these issues while losing only a bit of regularity by using the Goldberg-Coxeter construction.
This construction adds additional hexagonal tiles.
Consider the hexagonal grid $\sch{6}{3}$ on the plane, and take an equilateral triangle $X$ with one vertex in the point $(0,0)$ and another vertex in the point obtained by moving $a$ steps in a
straight line, turning 60 degrees right, and moving $b$ steps more. The tessellation $\gp{a}{b} \sch{p}{3}$ is obtained from the triangulation $\sch{3}{p}$ by replacing each of the regular triangles with a copy
of $X$. In Figure \ref{goldberg}, brown lines depict the underlying regular triangulation. Regular tessellations are a special case where $a=1, b=0$.
Figure~\ref{big}b shows the result of applying the Goldberg-Coxeter construction to the sphere.
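For the spherical case, the scalability can be made concrete. Assuming the standard triangulation number $T=a^2+ab+b^2$ of Goldberg polyhedra (a fact about the $\sch{5}{3}$ base case that is not spelled out in the text), the face counts of $\gp{a}{b}\sch{5}{3}$ can be sketched as:

```python
def goldberg_faces(a, b):
    """Face counts for GC_{a,b} applied to the spherical tiling {5,3}
    (dodecahedron base): the 12 pentagons always survive, while the
    hexagon count grows with the triangulation number T = a^2+ab+b^2."""
    T = a * a + a * b + b * b
    pentagons = 12
    hexagons = 10 * (T - 1)
    return pentagons, hexagons

assert goldberg_faces(1, 0) == (12, 0)    # the dodecahedron itself
assert goldberg_faces(1, 1) == (12, 20)   # truncated icosahedron
assert goldberg_faces(2, 0) == (12, 30)   # chamfered dodecahedron
```

The hyperbolic case behaves analogously, with heptagons in place of pentagons.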
\longonly{
\begin{figure}
\centering
\subfig{0.48\linewidth}{img/goldberg-eu.pdf}
\subfig{0.48\linewidth}{img/goldberg.pdf}
\caption{Goldberg-Coxeter construction: (a) Euclidean plane; (b) $\gp{2}{1} \sch{7}{3}$ \label{goldberg}}
\end{figure}
}
\shortonly{
\begin{figure}
\centering
\subfig{0.23\linewidth}{img/goldberg-eu.pdf}
\subfig{0.23\linewidth}{img/goldberg.pdf}
\subfig{0.23\linewidth}{img/fundamental-torus2.pdf} \hskip -1mm
\subfig{0.23\linewidth}{img/fundamental-elliptic.pdf} \hskip -1mm
\caption{Goldberg-Coxeter construction: (a) Euclidean plane; (b) $\gp{2}{1} \sch{7}{3}$. \label{goldberg} \label{goldfund}
Fundamental domains: (c) torus, (d) elliptic plane.}
\end{figure}
}
\subsection{Using closed manifolds}
The effects caused by the neurons on the boundary
having fewer neighbors may make the maps less regular and less visually attractive.
This problem does not appear on the sphere, which is a closed manifold. On the other hand, it is magnified
in hyperbolic geometry, where the perimeter of a region is proportional to its area, causing a large fraction of the neurons
to be affected by the boundary effects.
We combat these issues by using quotient spaces. A quotient space is obtained by identifying points in the manifold.
For example, a square torus, a quotient space of the Euclidean plane, is obtained by identifying points $(x,y)$ and $(x',y')$ such that $x-x'$ and $y-y'$ are both integers.
We call the original Euclidean plane the {\bf covering space} of the torus.
Intuitively, a quotient space is created by cutting a fragment of a surface and gluing its edges.
While the torus is usually presented in introductory topology books in its wrapped, donut-like shape, we present our quotient spaces in the covering space presentation, such as in
\longonly{Fig.~\ref{quoths} (a)}\shortonly{Fig.~\ref{goldfund} (c))}.
We show the covering space of our manifold; our quotient space corresponds to the periodically repeating part of the picture. Such a presentation lets us present the whole manifold in a
single picture, and is much cleaner, especially for hyperbolic or non-orientable quotient spaces. Intuitively, the covering space presentation simulates how the manifold is perceived by the native beings
(or neurons).
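As a minimal illustration of working in a quotient space, hop distances on a square torus can be computed by letting coordinate differences wrap around. A square grid with von Neumann neighborhoods is assumed here, rather than the hexagonal tessellations used in the paper:

```python
def torus_dist(u, v, w, h):
    """Hop distance between cells u = (x1, y1) and v = (x2, y2) on a
    w x h square grid glued into a torus: differences wrap around."""
    dx = abs(u[0] - v[0])
    dy = abs(u[1] - v[1])
    dx = min(dx, w - dx)   # going the other way around may be shorter
    dy = min(dy, h - dy)
    return dx + dy         # 4-neighbor (von Neumann) hop metric

# on a 10 x 10 torus, "opposite corners" are in fact close neighbors
assert torus_dist((0, 0), (9, 9), 10, 10) == 2
assert torus_dist((0, 0), (5, 0), 10, 10) == 5
```

There are no boundary neurons here: every cell has exactly four neighbors, which is the point of using closed manifolds.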
In the spherical geometry, we can identify each point with its antipodal point, obtaining the elliptic plane \longonly{(Figure \ref{quoths} (b))}\shortonly{(Figure \ref{goldfund} (d))}. The elliptic plane is non-orientable: a right-handed neuron would see a left-handed version of themselves on the other pole. Figure \longonly{\ref{quoths} (b)}\shortonly{\ref{goldfund} (d)} depicts
the stereographic projection of the elliptic plane; the blue circle is the equator. \longonly{The tiles of the same color are the same objects -- we may see that the pink tile is symmetrical to its counterpart.}
The sphere is a surface of genus 0, while the torus is a surface of genus 1; the genus of an orientable surface is, intuitively, the number of ``holes'' in it.
Orientable quotient spaces of the hyperbolic plane have genus greater than 1, or equivalently, Euler characteristic $\chi=2-2g < 0$.
If we tile a surface with Euler characteristic $\chi$ with $a$ pentagons, $b$ hexagons and $c$ heptagons in such a way that three polygons meet at every vertex,
the following relationship will hold: $6\chi=a-c$. Thus, a sphere can be tiled with 12 pentagons (dodecahedron), a torus can be tiled with only hexagons, and
hyperbolic quotient spaces can be tiled with only hexagons and $-6\chi$ heptagons. \shortonly{See the full version \cite{somarxiv} for the details on our manifolds.}
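The relationship $6\chi=a-c$ follows from Euler's formula $V-E+F=\chi$: with three faces at every vertex, $V=(5a+6b+7c)/3$, $E=(5a+6b+7c)/2$ and $F=a+b+c$. A short sketch verifies it on the surfaces mentioned here:

```python
from fractions import Fraction

def euler_char(a, b, c):
    """Euler characteristic of a surface tiled by a pentagons, b hexagons
    and c heptagons, with three faces meeting at every vertex."""
    sides = 5 * a + 6 * b + 7 * c
    V = Fraction(sides, 3)   # every vertex absorbs 3 polygon corners
    E = Fraction(sides, 2)   # every edge is shared by 2 faces
    F = a + b + c
    return V - E + F

# dodecahedron: 12 pentagons, chi = 2, and indeed 6*2 = 12 - 0
assert euler_char(12, 0, 0) == 2
# torus: hexagons only, chi = 0
assert euler_char(0, 16, 0) == 0
# minimal quotient (chi = -1): six heptagons, 6*(-1) = 0 - 6
assert euler_char(0, 0, 6) == -1
```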
\longonly{The smallest hyperbolic quotient space is a non-orientable surface with $\chi=-1$ (six heptagons), which we call the minimal quotient space.
Hurwitz surfaces are hyperbolic quotient spaces that are highly symmetric: a Hurwitz surface of genus $g$ will have precisely $84(g-1) = -42\chi$ automorphisms, which is the
highest possible number \cite{hurwitz}, and corresponds to all the rotations of each of the $-6\chi$ heptagons.
A Hurwitz surface of genus $g=3$ is called the Klein quartic; Hurwitz surfaces also exist for larger genera, such as 7 or 14.}
\longonly{
\begin{figure}[h!]
\centering
\subfig{0.48\linewidth}{img/fundamental-torus2.pdf} \hskip -1mm
\subfig{0.48\linewidth}{img/fundamental-elliptic.pdf} \hskip -1mm
\longonly{\\ \subfig{0.48\linewidth}{img/fundamental-minimal-goldberg.pdf} \hskip -1mm
\subfig{0.48\linewidth}{img/fundamental-klein-goldberg.pdf}}
\caption{\label{quoths} Fundamental domains: a) torus; b) elliptic plane\longonly{; c) minimal quotient; d) Klein quartic}}
\end{figure}
}
\longonly{Figure \ref{quoths} depicts the fundamental domains for the mentioned quotient spaces.
Heptagons and pentagons are colored with a darker shade of green. We use the Goldberg-Coxeter construction to add extra hexagons to our tessellation.
A fundamental domain is a subset of the covering space which contains one element of every set of identified points;
intuitively, we obtain the quotient space by gluing the edges of the fundamental domain. The edges we should glue together are marked with the same numbers;
in Figure \ref{quoths} (c), gray numbers denote that the edge should be reversed first (like in the M\"obius band).}
\subsection{Dispersion}
The natural interpretation of the dispersion function mentioned in the Prerequisities section no longer works in non-Euclidean geometry.
In particular, the exponential nature of the hyperbolic plane makes the random walk
process behave very differently in larger time frames (see, e.g., \cite{heat_kernel} for a~study of heat conduction in the hyperbolic plane). For example, it is well known that the random walk on a~two-dimensional Euclidean grid returns to the starting point (and any other point) with
probability 1. In a~two-dimensional hyperbolic grid, this probability decreases exponentially with distance. Interestingly, Ontrup and Ritter \cite{ritter99,ontrup} who originally introduced non-Euclidean SOMs did not discuss this issue. In applications we may also use quotient spaces, which changes the situation even further -- the information particle could reach the same neuron $j$ in many different ways (e.g., by going left or by going right).
For that reason, we use a~different dispersion function, based on numerically simulating the random walk process on our manifold. We compute the probability $P_{i,j,t}$ that the information particle starting in neuron $i$ will end up in neuron $j$ after $t$ time steps.
This probability can be computed with a~dynamic programming algorithm: for $t=0$ we have $P_{i,j,0} = 1$ if $i=j$ and $P_{i,j,0} = 0$ otherwise;
for $t+1$ we have $P_{i,j,t+1} = P_{i,j,t} + p \sum_k \left( P_{i,k,t} - P_{i,j,t} \right)$, where we sum over all the neighbors $k$ of the neuron $j$.
See Algorithm \ref{dispalg} for the pseudocode which computes $P_{i,j,t}$. In this pseudocode, $N(i)$ denotes the neighborhood of the neuron $i$.
This algorithm has time complexity $O(n^2T)$. Our application is based on highly symmetric surfaces, which lets us
reduce the time complexity by taking advantage of the symmetries.
\longonly{For example, on the torus, we can reduce the time complexity to $O(nT)$,
since $P_{i,j,t} = P_{i',j',t}$ if the transition vector between the neuron $i$ and neuron $j$ is the same as the transition vector between $i'$ and $j'$.
A Hurwitz surface of Euler characteristic $\gamma$ has $42|\gamma|$ symmetries, allowing us to reduce the time complexity by this factor.}
\begin{algorithm}[tb]
\textbf{Parameter}: the set of all nodes of a network $V$; neighborhoods $N(i)$; the number of time steps $T$ and the transition probability $p$\\
\textbf{Output}: the dispersion array $P_{i,j,t}$ for $t=0,\ldots,T$
\begin{algorithmic}[1]
\FOR{$i, j \in V$}
\STATE $P_{i,j,0} := 0$
\ENDFOR
\FOR{$i \in V$}
\STATE $P_{i,i,0} := 1$
\ENDFOR
\FOR{$t=0,\ldots,T-1$}
\FOR{$i, j \in V$}
\STATE $P_{i,j,t+1} := P_{i,j,t}$
\FOR{$k \in N(j)$}
\STATE $P_{i,j,t+1} := P_{i,j,t+1} + p \cdot (P_{i,k,t} - P_{i,j,t})$
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{Dispersion algorithm.\label{dispalg}}
\end{algorithm}
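For illustration, the dynamic program above can be sketched in Python; this is an illustrative reimplementation, not the code used in our implementation. The graph is given as an adjacency list \texttt{nbrs}, and \texttt{p} is the per-edge transition probability from the update rule.

```python
def dispersion(nbrs, T, p):
    """Compute P[t][i][j]: the probability that a particle starting at
    neuron i is at neuron j after t steps, for t = 0, ..., T."""
    n = len(nbrs)
    # base case: at t = 0 the particle sits at its starting neuron
    P = [[[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]]
    for _ in range(T):
        prev = P[-1]
        nxt = [row[:] for row in prev]
        for i in range(n):
            for j in range(n):
                # each neighbor k of j exchanges mass with j at rate p
                for k in nbrs[j]:
                    nxt[i][j] += p * (prev[i][k] - prev[i][j])
        P.append(nxt)
    return P
```

Since every edge of an undirected graph contributes symmetrically to the update, the total probability mass $\sum_j P_{i,j,t}$ is conserved.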
In iteration $t$, the weights are updated for every neuron $j$ according to the formula:
$
w_j := w_j + \frac{\eta P_{i,j,f(t)}}{\max_u P_{i,j,u}}(x_t - w_j),
$
where $i$ is the winning neuron, and $f(t) = T (1-\frac{t}{t_{max}})^\longonly{s}\shortonly{2}$.
\longonly{We take $s=2$ to make the dispersion radius scale linearly with time, similar to the Gaussian formula.}
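A minimal sketch of one training iteration with this update rule, assuming the dispersion array \texttt{P} (indexed as \texttt{P[t][i][j]}) has already been computed; all names are illustrative.

```python
def som_iteration(weights, x, P, T, t, t_max, eta=0.1, s=2):
    """One SOM iteration: find the winner i, then pull every neuron j
    toward the sample x with gain eta * P[f(t)][i][j] / max_u P[u][i][j]."""
    # competition: the winner is the neuron whose weights are closest to x
    i = min(range(len(weights)),
            key=lambda j: sum((w - a) ** 2 for w, a in zip(weights[j], x)))
    ft = round(T * (1 - t / t_max) ** s)  # dispersion time f(t) shrinks over time
    for j in range(len(weights)):
        pmax = max(P[u][i][j] for u in range(len(P)))
        if pmax > 0:
            g = eta * P[ft][i][j] / pmax
            weights[j] = [w + g * (a - w) for w, a in zip(weights[j], x)]
    return i
```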
\section{Example visualizations of our results}
To visualize the results of the proposed algorithm, we use the classic iris flower dataset by Fisher \cite{iris} and the palmerpenguins dataset \cite{palmerpenguins}.
Figure \ref{kohonen} depicts example results of the SOM clustering. The coloring of the tiles allows for the examination of the clusters. We utilize a~standard tool: the inverted U-matrix (unified distance matrix), where the
Euclidean distance between the representative vectors of neighboring neurons is depicted in grayscale. The darker the shade, the less dissimilarity there is within
the neighborhood. We can see smooth and gradual changes.
\longonly{We compare various setups for SOMs (keeping a~similar number of tiles).
Figures \ref{kohonen}(a,b) depict the results for a standard SOM setup (Euclidean plane tiled with squares) as a benchmark. Note that the results are
rather unsatisfactory -- the SOM with the Euclidean setup does not combat the known issue of mixed observations in two of the groups; moreover, for the penguins dataset
it also suggests that there are more than three groups (see the boundary within the violet observations). On the contrary, SOMs based on our setups perform visually better.
Redefining the neighborhood by introducing hexagons and heptagons helps in minimizing the intermixing. Moreover, one can see that the setups with closed manifolds (Figures \ref{kohonen}(c--f))
lead to a better visual distinction of the edges among the groups. Especially in the case of the Klein quartic, due to the exponential growth, we can fit similar objects closer to each other than would be possible in the Euclidean plane.}
\longonly{
\begin{figure}[!h]
\centering
\subfig{0.45\linewidth}{img/iris-squares.pdf}
\subfig{0.45\linewidth}{img/penguins-squares.pdf} \\
\subfig{0.45\linewidth}{img/iris-torus.pdf}
\subfig{0.45\linewidth}{img/penguins-torus.pdf} \\
\subfig{0.45\linewidth}{img/iris-kq.pdf}
\subfig{0.45\linewidth}{img/penguins-klein.pdf} \\
\subfig{0.45\linewidth}{img/iris-sphere-ortho.pdf}
\subfig{0.45\linewidth}{img/penguins-hdisk.pdf}
\caption{Example results of SOM on the iris flower (a,c,e,g) and palmerpenguins (b,d,f,h) datasets.
(a,b) a disk on the Euclidean square grid, (c,d) a torus with the hex grid, (e,f) Klein quartic, (g) sphere in orthographic projection, (h) a hyperbolic disk.
\label{kohonen}}
\end{figure}
}
\shortonly{
\begin{figure}[!h]
\centering
\subfig{0.31\linewidth}{img/penguins-torus.pdf}
\subfig{0.31\linewidth}{img/iris-kq.pdf}
\subfig{0.31\linewidth}{img/penguins-hdisk.pdf}
\caption{Example results of SOM on the iris flower (b) and palmerpenguins (a,c) datasets.
(a) a torus\longonly{ with the hex grid}, (b) Klein quartic, (c) a disk in $\mathbb{H}^2$.
\label{kohonen}}
\end{figure}
}
While static visualizations suffice for Euclidean geometry,
hyperbolic geometry is useful for the visualization of hierarchical data, where we can focus by centering the visualization on any node
in the hierarchy \cite{lampingrao,munzner}. Our visualization engine lets the viewer
smoothly change the central point of the projection to any point in the manifold, and thus clearly see the spatial relationships of every cluster.
The locations and the neighborhoods returned by SOMs have a~natural interpretation. Given that the competition and adaptation stages force the neighborhood to attract similar objects,
the distance between the neurons becomes a~measure of similarity: the farther apart two neurons are, the less similar the objects they represent. We may use the resulting classification and mapping in further analyses.
\section{Experiments}
Our general experimental setup is as follows.
\begin{itemize}
\item We construct the original manifold $O$. Let $T_O$ be the set of tiles and $E_O$
be the set of edges between the tiles.
\item We map all the tiles into the Euclidean space $m: T_O \rightarrow \mathbb{R}^d$, where $d$ is
the number of dimensions.
\item We construct the target embedding manifold $E$. Let $T_E$ be the set of tiles
and $E_E$ be the set of edges between the tiles.
\item We apply our algorithm to the data given by $m$. This effectively yields an
embedding $e: T_O \rightarrow T_E$.
\item We measure the quality of the embedding.
\end{itemize}
To limit the effects of randomness (random initial weights of neurons, random ordering of the data)
we apply this process independently 100 times for every pair of manifolds\longonly{ $E$ and $O$}.
\shortonly{
\begin{figure}
\centering
\subfig{0.24\linewidth}{img/well-mapped-torus.pdf} \hskip -1mm
\subfig{0.24\linewidth}{img/badly-mapped-torus.pdf} \hskip -1mm
\subfig{0.24\linewidth}{img/sphere-to-torus.pdf} \hskip -1mm
\subfig{0.24\linewidth}{img/torus-to-sphere.pdf}
\caption{\label{retrieval}
(a)~square torus to rectangular torus;
(b)~rectangular torus to square torus;
(c)~sphere \longonly{mapped }to hex torus; (d)~hex torus \longonly{mapped }to sphere}
\end{figure}
}
\longonly{
\begin{figure*}
\centering
\subfig{0.49\linewidth}{img/well-mapped-torus.pdf} \hskip -1mm
\subfig{0.49\linewidth}{img/badly-mapped-torus.pdf} \hskip -1mm
\subfig{0.49\linewidth}{img/sphere-to-torus.pdf} \hskip -1mm
\subfig{0.49\linewidth}{img/torus-to-sphere.pdf}
\caption{\label{retrieval}
(a)~square torus to rectangular torus;
(b)~rectangular torus to square torus;
(c)~sphere mapped to hex torus; (d)~hex torus mapped to sphere}
\end{figure*}
}
Figure \ref{retrieval} shows the results of four runs. Small gray polygons
(hexagons) represent the tiles of $E$. The green and red polygons depict the fundamental
domain. Every circle represents a tile from $T_O$, drawn at the tile of $E$ to which it has
been mapped. Edges between circles correspond
to the original edges $E_O$ between them.
In Figures \ref{retrieval}a and \ref{retrieval}b, both $E$ and $O$ are tori of
different shapes. In the run shown in Figure \ref{retrieval}a we obtain what we consider
a successful map: the topology of the data is recovered correctly (despite the different
shape of the two tori). Figure \ref{retrieval}b
shows an unsuccessful recovery of topology. In this case, the original torus $O$ has been cut into
two cylinders $O_1$ and $O_2$, which are respectively mapped to cylinders $E_1$ and $E_2$
in $E$; however, the two maps $O_1 \rightarrow E_1$ and $O_2 \rightarrow E_2$ are mirrored.
This issue is clearly visible in our visualization: parts of the boundary areas between
$E_1$ and $E_2$ contain no tiles, and the edges show singularities.
Figures \ref{retrieval}c and \ref{retrieval}d show a pair of mappings where $E$ and $O$ have
different topologies (a sphere and a torus). Since the topologies are different, there
is no way to map $T_E$ to $T_O$ without singularities. In Figure \ref{retrieval}c our algorithm
has stretched the sphere $O$ on the poles, obtaining a cylinder; that cylinder is then mapped
to a cylinder obtained by cutting the torus. The torus has been cut along the white area.
Most edges are mapped nicely, to pairs of close cells, but some edges close to the poles
will have to go around the cylinder. Figure \ref{retrieval}d is the inverse case. The torus
is obtained by removing two disks at the poles, and gluing the boundaries of the removed
disks. Edges which connect the boundaries of the two disks go across the whole sphere,
while the remaining edges have the correct topological structure.
\longonly{
\begin{figure}
\centering
\subfig{0.99\linewidth}{img/signpost-klein.pdf} \hskip -1mm
\subfig{0.49\linewidth}{img/signpost-bolza.pdf} \hskip -1mm
\subfig{0.49\linewidth}{img/landscape-disk.pdf} \hskip -1mm
\caption{\label{embs}
(a)~signpost on Klein bottle;
(b)~signpost on Bolza surface;
(c)~landscape on the disk}
\end{figure}
}
\longonly{
\subsection{List of manifolds}
\begin{table*}[h]
\resizebox{\textwidth}{!}{
\begin{tabular}{l|rrrrrrrrrrrrrrrrrrrrr}
name&$n$&edges&col&embtype&$a$&$b$&o&s&c&q&d&q&$p$&curvature&$\chi$&$A$&g&m&avgdist&avgdist2&kmax\\ \hline
disk10&520&1214&60&landscape&1&0&1&0&0&0&2&3&7&-1&---&1&h&10&7.37242&58.5336&0.847069\\
disk11&520&1326&60&landscape&1&1&1&0&0&0&2&3&7&-0.313462&---&3&h&14&9.21862&94.4243&0.904728\\
disk20&520&1340&60&landscape&2&0&1&0&0&0&2&3&7&-0.217308&---&4&h&16&9.58927&103.118&0.914314\\
disk21&520&1381&60&landscape&2&1&1&0&0&0&2&3&7&-0.136538&---&7&h&18&10.3977&123.463&0.928886\\
disk40&520&1409&60&landscape&4&0&1&0&0&0&2&3&7&-0.0557692&---&16&h&22&11.0977&143.849&0.940335\\
disk43&520&1445&60&landscape&4&3&1&0&0&0&2&3&7&-0.0153846&---&37&h&24&11.5947&160.277&0.947179\\
disk-euclid&520&1479&60&landscape&1&0&1&0&0&0&2&3&6&0&0&1&e&26&11.9668&174.804&0.951944\\
elliptic&541&1620&6&signpost&6&6&0&1&1&1&2&3&5&0.0110906&1&108&s&15&9.43387&101.41&0.918638\\
sphere&522&1560&3&natural&6&2&1&1&1&0&2&3&5&0.0229885&2&52&s&20&10.1551&121.663&0.937623\\
sphere4&510&1524&3&natural&7&6&1&1&1&0&2&3&4&0.0235294&2&127&s&20&9.94615&116.516&0.93592\\
torus-hex&529&1587&6&natural&1&0&1&1&1&1&2&3&6&0&0&1&e&15&8.95455&90.7727&0.91323\\
torus-sq&520&1560&4&natural&1&0&1&1&1&1&2&3&6&0&0&1&e&16&8.96724&91.5915&0.916843\\
torus-rec&522&1566&4&natural&1&0&1&0&1&1&2&3&6&0&0&1&e&19&9.74856&112.363&0.934869\\
klein-sq&520&1560&52&signpost*&1&0&0&0&1&1&2&3&6&0&0&1&e&16&8.93757&90.7337&0.915332\\
Bolza&502&1512&22&signpost*&6&3&1&1&1&1&2&3&8&-0.0239044&-2&63&h&15&8.0759&73.1651&0.900291\\
Bolza2&492&1488&12&signpost&5&1&1&1&1&1&2&3&8&-0.0487805&-4&31&h&12&7.42308&60.7784&0.87572\\
minimal&524&1575&6&signpost&5&5&0&0&1&1&2&3&7&-0.0114504&-1&75&h&17&8.7837&87.8727&0.915569\\
Zebra&516&1554&12&signpost&4&3&1&0&1&1&2&3&7&-0.0232558&-2&37&h&16&8.76821&88.2358&0.919384\\
KQ&528&1596&24&signpost&3&2&1&1&1&1&2&3&7&-0.0454545&-4&19&h&13&7.73279&65.8849&0.877095\\
Macbeath&576&1764&72&signpost&2&1&1&1&1&1&2&3&7&-0.125&-12&7&h&13&7.12609&55.667&0.862014\\
triplet1&520&1638&156&signpost&1&1&1&1&1&1&2&3&7&-0.3&-26&3&h&9&5.89461&37.4094&0.816052\\
triplet2&520&1638&156&signpost&1&1&1&1&1&1&2&3&7&-0.3&-26&3&h&11&5.97726&38.8343&0.832687\\
triplet3&520&1638&156&signpost&1&1&1&1&1&1&2&3&7&-0.3&-26&3&h&10&5.88092&37.3156&0.818189\\
\end{tabular}
}
\caption{The list of manifolds in the experiment. $n$=neurons/samples, col=columns, o=orientable (1=TRUE), s=symmetric (1=TRUE), q=quotient (1=TRUE), c=closed (1=TRUE), d=dimension,
$a,b$ -- Goldberg-Coxeter parameters, $p,q$ -- Schl\"afli symbol, g=geometry (hyperbolic/euclidean/spherical), m=max tile distance,
$A$=area, $\chi$=Euler characteristic, kmax -- maximum Kendall coefficient.}\label{manilist}
\end{table*}
Table \ref{manilist} lists all manifolds in our experiment.
\begin{itemize}
\item KQ (Klein Quartic), Macbeath, and triplet (the first Hurwitz triplet) are Hurwitz manifolds (closed hyperbolic manifolds with underlying \sch{7}{3} tessellation) exhibiting very high symmetry.
\item Zebra and minimal are less symmetric manifolds, also with underlying \sch{7}{3} tessellation.
\item Bolza surface has underlying \sch{8}{3} tessellation, and Bolza2 is its double cover. They are also highly symmetric.
\item sphere and sphere4 are spheres with different underlying tilings (\sch{5}{3} and \sch{4}{3}). Elliptic is the elliptic plane (\sch{5}{3}).
\item torus-hex, torus-sq (square torus) and torus-rec (rectangular torus) are tori with different shapes (\sch{6}{3}).
klein-sq is the Klein bottle.
\item Disks are disks with different Goldberg-Coxeter subdivisions of \sch{7}{3}. Each of them consists of 520 cells closest to the origin.
\end{itemize}
A closed manifold with Euler characteristic $\chi \neq 0$ and underlying $\{p,3\}$ tessellation will have $u=6\chi/(6-p)$ underlying tiles. These tiles will form
$t=pu/3$ triangles. The Goldberg-Coxeter construction $\gp{a}{b}$, with parameter
$A=\frac{(2a+b)^2 + 3b^2}{4}$, adds $\frac{A-1}{2}$ tiles per triangle. Thus, the total number of tiles in the manifold equals $n=u + \frac{A-1}2 t$.
We can control $n$ by changing the Goldberg parameters $a$ and $b$.
However, for the first Hurwitz triplet we have $u=156$, so we do not have much control. We get
$n=520$ for $\gp{1}{1}$, and we adjust $a$ and $b$ for all the other manifolds so that $n$ is as close to 520 as possible. For non-orientable manifolds, only $b=0$ or $b=a$ are legal.
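The tile count can be checked with a few lines of Python, assuming the reading $A=\frac{(2a+b)^2 + 3b^2}{4}$, which reproduces the $A$ and $n$ columns of Table \ref{manilist}.

```python
def tile_count(chi, p, a, b):
    """Tiles of a closed {p,3} manifold with Euler characteristic chi
    after the Goldberg-Coxeter construction GC(a, b)."""
    u = 6 * chi // (6 - p)                        # underlying tiles
    t = p * u // 3                                # triangles
    A = ((2 * a + b) ** 2 + 3 * b ** 2) // 4      # integer for legal (a, b)
    return u + (A - 1) * t // 2
```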
Curvature is defined as $2q/(q-2)-v$, where $v$ is the average number of neighbor tiles (counting tiles outside of the sample in the case of disks).
We consider two manifolds to have the same geometry if both are hyperbolic, both are Euclidean, or both are spherical.
We consider two closed manifolds to have the same topology if they have the same Euler characteristic and orientability.
This happens in the following cases: all disks; all tori; all triplet manifolds; sphere vs sphere4; Bolza vs Zebra; and Bolza2 vs KQ.
}
\longonly{\paragraph{Double density manifolds}
Taking both $E$ and $O$ from the same list of manifolds could potentially cause overfitting. To combat this issue, we also
consider double density manifolds, which are obtained by doubling both Goldberg parameters $a$ and $b$. This increases the number of samples roughly four times
(exactly four times in the cases of disks and tori).}
\subsection{Embedding into $\mathbb{R}^d$}\label{sub:embed}
We need to embed the manifold $O$ into $\mathbb{R}^d$ in such a way that both its topology and
geometry are preserved, that is, distances in $\mathbb{R}^d$ are a good representation of
actual geodesic distances in $O$. We use the following methods.
\paragraph{Natural} Natural embeddings are known for the following cases:
\begin{itemize}
\longonly{
\item The Euclidean disk $\mathbb{D}^2$ has a well-known embedding to $\mathbb{R}^2$ (as
explained later, we do not use this embedding for consistency).}
\item Sphere $\mathbb{S}^2$ has a well-known embedding to $\mathbb{R}^3$.
\item The square torus has an embedding to $\mathbb{R}^4 = \mathbb{R}^2 \times \mathbb{R}^2$,
obtained by representing the torus as $\mathbb{T}^2 = \mathbb{S}^1 \times \mathbb{S}^1$ and mapping every
circle $\mathbb{S}^1$ to $\mathbb{R}^2$.
\longonly{\item} For the rectangular torus, we use two circles of sizes corresponding to
the length of edges.
\longonly{\item} For the hexagonal torus, we use three circles, corresponding to the
three axes.
\end{itemize}
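The circle-product embeddings above can be sketched as follows; this illustrative code maps torus coordinates $(u,v) \in [0,1)^2$ to $\mathbb{R}^4$, with the radii $r_1, r_2$ playing the role of the two edge lengths of the rectangular torus ($r_1 = r_2$ recovers the square torus).

```python
import math

def torus_embedding(u, v, r1=1.0, r2=1.0):
    """Map a point of the (rectangular) torus, given by two circle
    coordinates u, v in [0, 1), to R^4 = R^2 x R^2."""
    return (r1 * math.cos(2 * math.pi * u), r1 * math.sin(2 * math.pi * u),
            r2 * math.cos(2 * math.pi * v), r2 * math.sin(2 * math.pi * v))
```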
\paragraph{Signpost} For closed hyperbolic manifolds we use the following
method. We choose a subset of tiles $\{t_1, \ldots, t_d\}$ as \emph{signposts}.
Then, for every tile $t$, we set $m(t) = (\delta(t,t_1), \ldots, \delta(t,t_d))$.
In most cases we choose the signposts to be the tiles of degree other than $6$.
\longonly{
We use other methods in the case of the Klein bottle (where we use $13\times4$ signposts arranged regularly
in the $13\times20$ tessellation) and the Bolza surface (where we also add the vertices of the original
tiles before the Goldberg-Coxeter construction, since the distances from 6 Bolza tiles are
not enough to identify the tile). Figure \ref{embs}ab shows perfect mappings ($E=O$);
signpost tiles are marked with red.}
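For illustration, the signpost embedding can be sketched as a vector of BFS distances on the tile adjacency graph (assuming unit-length edges; all names are illustrative):

```python
from collections import deque

def bfs_distances(nbrs, src):
    """Graph distances from tile src; nbrs maps a tile to its neighbors."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        a = queue.popleft()
        for b in nbrs[a]:
            if b not in dist:
                dist[b] = dist[a] + 1
                queue.append(b)
    return dist

def signpost_embedding(nbrs, signposts):
    """m(t) = (delta(t, t_1), ..., delta(t, t_d)) for signposts t_1..t_d."""
    dists = [bfs_distances(nbrs, s) for s in signposts]
    return {t: tuple(d[t] for d in dists) for t in nbrs}
```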
\paragraph{Landscape} For hyperbolic and Euclidean disks, we use the following method.
We find all the zig-zag lines in the tessellation. These zig-zag lines go along the edges
and split the manifold into two parts. They are obtained by turning alternately to left and
right at every vertex\longonly{ (we assume that all vertices are of valence 3 here)}.
\longonly{
In Figure \ref{embs}\longonly{c}, we have three zig-zag lines in the $GC(2,1)$ disk, splitting
the disk into 5 regions; as seen, zig-zag lines approximate straight lines. }Let $L$ be the
set of all lines. We assign a random Gaussian vector $v_l$ to each straight line $l\in L$.
The central tile $t_0$ has all coordinates equal to 0. For any other tile $t$, we find
the set $L_t$ of all the straight lines separating $t_0$ and $t$, and set $m(t) = \sum_{l \in L_t} v_l$.
We call this \emph{landscape method} because it is inspired by the method used to generate
natural-looking landscapes in HyperRogue \cite{hyperrogue}. It represents the reason why
hyperbolic geometry may appear in real-life data such as social network analysis: every line
$l \in L$ represents a part of the social network becoming polarized or specialized\longonly{ and thus
changing their values}.
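The landscape method can be sketched under a simplifying assumption made for this illustration: each zig-zag line $l$ is represented by the set of tiles on its far side (the side not containing the central tile), and $v_l$ is a random Gaussian vector.

```python
import random

def landscape_embedding(tiles, lines, dim, seed=0):
    """m(t) = sum of v_l over all lines l separating t from the central
    tile; each line is given as the set of tiles on its far side."""
    rng = random.Random(seed)
    vecs = [[rng.gauss(0, 1) for _ in range(dim)] for _ in lines]
    emb = {}
    for t in tiles:
        m = [0.0] * dim
        for far_side, v in zip(lines, vecs):
            if t in far_side:  # line separates t from the central tile
                m = [a + b for a, b in zip(m, v)]
        emb[t] = m
    return emb
```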
\subsection{Measuring the quality of final embedding}
\longonly{We are interested in measuring the quality of the final embeddings.
The following two methods are natural.
{\bf Energy} is given as $\frac{1}{|E_O|} \sum_{(t,t') \in E_O} \left( \delta(e(t), e(t'))^2-1 \right)$. As we have seen
in our visualization, topologically incorrect edges become stretched, and thus embeddings that
include them have high energies. The {\bf Kendall coefficient} $k$ measures the correlation of $d_O=\delta(t_1,t_2)$ and
$d_E=\delta(e(t_1), e(t_2))$; every pair of pairs of distances $((d_O,d_E), (d'_O,d'_E))$ contributes
$1$ if $d_O > d'_O$ and $d_E > d'_E$, or $d_O < d'_O$ and $d_E < d'_E$; and $-1$ if $d_O > d'_O$ and $d_E < d'_E$, or
$d_O < d'_O$ and $d_E > d'_E$. We normalize by dividing by the total number of pairs where
$d_O \neq d'_O$. The {\bf Kendall unfitness} is then $100\cdot(1-k)$.}
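A direct (quadratic-time) sketch of this coefficient, taking the list of $(d_O, d_E)$ pairs as input:

```python
def kendall_coefficient(pairs):
    """pairs: list of (d_O, d_E) distance pairs. Concordant pairs of
    pairs score +1, discordant ones -1; the result is normalized over
    pairs with d_O != d'_O."""
    score, total = 0, 0
    for a in range(len(pairs)):
        for b in range(a + 1, len(pairs)):
            (do1, de1), (do2, de2) = pairs[a], pairs[b]
            if do1 == do2:
                continue  # ties in d_O are excluded from the total
            total += 1
            prod = (do1 - do2) * (de1 - de2)
            if prod > 0:
                score += 1
            elif prod < 0:
                score -= 1
    return score / total
```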
\longonly{Unfortunately, both of these measures return relatively bad values for embeddings which are actually
correct, such as the embedding from Figure \ref{retrieval}a, where the map
is stretched in one direction (say, horizontal) and compressed in the other direction (say, vertical). The
cases where $(t_1,t_2)$ is a horizontal pair and $(t'_1,t'_2)$ is a vertical pair worsen the Kendall coefficient. A similar issue happens
for energy.}
\longonly{Therefore we need a topological measure of quality.}
\shortonly{Our final embeddings should correctly preserve the topology.}
Our primary method of measuring topology preservation is based on the ideas of Villmann et al.\
\cite{villmann}. This measure is based on the neuron weights $w_t$ for every tile $t \in T_E$.
For every tile $t \in T_E$, let $p_t$ be the point in the manifold $O$ which is the closest
to $w_t$. Define the Voronoi cell $V_t$ as the set of points in the manifold $O$ which are closer to $p_t$
than to any other $p_{t'}$, i.e., $V_t = \{x \in O: \forall t' \in T_E \;\, \delta(x,p_t) \leq \delta(x,p_{t'})\}$. Two Voronoi
cells $V_t$ and $V_{t'}$ are adjacent if $V_t \cap V_{t'} \neq \emptyset$. This way, we obtain two graphs
on the set of tiles $T_E$: the graph $G_E = (T_E, E_E)$ where two tiles are adjacent iff there is an edge in $E$
between them, and the graph $G_V = (T_E, E_V)$ where two tiles $t$, $t'$ are adjacent iff their Voronoi cells
are adjacent.
For an embedding that ideally preserves the topology and also preserves the geometry well enough, we have
$E_V = E_E$. In general, let $d_E(t_1,t_2)$ and $d_V(t_1,t_2)$ be the lengths of the shortest paths between
tiles $t_1$ and $t_2$ in the graphs $G_E$ and $G_V$, respectively. We define the \emph{Villmann measure} of the embedding as
$v = \max_{(t_1,t_2) \in E_E} d_V(t_1,t_2) + \max_{(t_1,t_2) \in E_V} d_E(t_1,t_2)-2$.
Ideally, we get $v=0$; embeddings which stretch the manifold or have local folds yield small values of $v$ ($2\leq v \leq 4$),
and embeddings which do not preserve the topology yield larger values. An embedding does not preserve the topology
if one of two cases hold: the induced map from $E$ to $O$ is not continuous (making the first component of $v$ large)
or the induced map from $O$ to $E$ is not continuous (making the second component of $v$ large) \cite{villmann}.
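Given the two graphs as adjacency lists, the measure can be computed with breadth-first search; a sketch (illustrative, assuming connected graphs):

```python
from collections import deque

def graph_distances(nbrs, src):
    """BFS distances from src in the graph given by adjacency lists."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        a = queue.popleft()
        for b in nbrs[a]:
            if b not in dist:
                dist[b] = dist[a] + 1
                queue.append(b)
    return dist

def villmann_measure(nbrs_E, nbrs_V):
    """v = max over edges of E_E of d_V + max over edges of E_V of d_E - 2."""
    def worst(edge_graph, dist_graph):
        # longest dist_graph distance over the edges of edge_graph
        return max(graph_distances(dist_graph, a)[b]
                   for a in edge_graph for b in edge_graph[a])
    return worst(nbrs_E, nbrs_V) + worst(nbrs_V, nbrs_E) - 2
```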
\shortonly{See the full version for other methods of measuring embedding quality.}
\longonly{This measures the largest discontinuity. We might also want to measure the number of discontinuities. One natural formula is
$\sum_{(t_1,t_2) \in E_E} (d_V(t_1,t_2)-1)^2 + \sum_{(t_1,t_2) \in E_V} (d_E(t_1,t_2)-1)^2$. Our experiments indicate that such a formula
is very sensitive to local folds, which are in turn very sensitive to the parameters of the SOM algorithm (for sphere-to-sphere mappings,
dispersion scaling power $s=1$ yields significantly smaller values than $s=2$, both for simulated and Gaussian dispersion), making it
difficult to compare various algorithms.}
\longonly{
Another measure of topology preservation is the tears measure, which ignores stretches and local folds. In all the bad cases in Figure \ref{retrieval},
topological errors are exhibited by areas where no tiles from $T_O$ are mapped. However, not all empty
tiles are bad -- some tiles will remain empty simply because $T_O$ is not dense enough. Therefore,
tears($r$) is measured as follows: we count the number of tiles $t \in T_E$ which are empty but
there are tiles $t_1$ and $t_2$ such that $\delta(t,t_1), \delta(t,t_2) \leq r$ and $\delta(t,t_1)+\delta(t,t_2) = \delta(t_1,t_2)$.
For $r=1$, this method prevents us from counting the benign empty cells visible in Figure \ref{retrieval}a.
We use $r=1$. This measure is suitable only when both manifolds $E$ and $O$ are closed.}
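A sketch of tears($r$) on the tile graph, assuming \texttt{empty} is the set of tiles to which no tile of $T_O$ was mapped (names and the restriction of $t_1, t_2$ to non-empty tiles are our illustrative reading):

```python
from collections import deque

def graph_distances(nbrs, src):
    """BFS distances from src in the graph given by adjacency lists."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        a = queue.popleft()
        for b in nbrs[a]:
            if b not in dist:
                dist[b] = dist[a] + 1
                queue.append(b)
    return dist

def tears(nbrs, empty, r=1):
    """Count empty tiles t lying on a geodesic between two non-empty
    tiles t1, t2 with delta(t, t1), delta(t, t2) <= r."""
    dists = {t: graph_distances(nbrs, t) for t in nbrs}
    count = 0
    for t in empty:
        near = [u for u in nbrs if u not in empty and dists[t][u] <= r]
        if any(dists[t][a] + dists[t][b] == dists[a][b]
               for a in near for b in near if a != b):
            count += 1
    return count
```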
\newcommand*{\SuperScriptSameStyle}[1]{%
{\small{$^{#1}$}}%
}%
\newcommand*{{\scriptsize{$^{*}$}}}{\SuperScriptSameStyle{*}}
\newcommand*{{\scriptsize{$^{**}$}}}{\SuperScriptSameStyle{**}}
\newcommand*{\SuperScriptSameStyle{\dagger}}{\SuperScriptSameStyle{\dagger}}
\defclos{clos}
\defclosed{closed}
\defdisk{disk}
\longonly{
\begin{table*}[bt]
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c}
& \multicolumn{4}{c|}{energy} & \multicolumn{4}{c|}{Kendall unfitness} & \multicolumn{3}{c}{tears(1)} \\
$O$ & clos & disk & clos & disk & clos & disk & clos & disk & \multicolumn{3}{|c}{clos} \\
$E$ & clos & disk & disk & clos & clos & disk & disk & clos & \multicolumn{3}{|c}{clos} \\ \hline
Effect type & \multicolumn{8}{c|}{${\Delta E(y)}/{\Delta x_k}$} & $E(y|x)$ & $E(y|x, y>0)$ & $P(y>0|x)$ \\ \hline
(Intercept) & 5.649\SuperScriptSameStyle{\dagger} & 4.613\SuperScriptSameStyle{\dagger} & 21.947\SuperScriptSameStyle{\dagger} & 3.142\SuperScriptSameStyle{\dagger} & 37.975\SuperScriptSameStyle{\dagger} & 22.481\SuperScriptSameStyle{\dagger} & 57.984\SuperScriptSameStyle{\dagger} & 55.725\SuperScriptSameStyle{\dagger} & --- & --- & ---\\
$O$ hyperbolic & 2.534\SuperScriptSameStyle{\dagger} & -1.735\SuperScriptSameStyle{\dagger} & 3.407\SuperScriptSameStyle{\dagger} & 0.392\SuperScriptSameStyle{\dagger} & 8.579\SuperScriptSameStyle{\dagger} & 2.092\SuperScriptSameStyle{\dagger} & 4.430\SuperScriptSameStyle{\dagger} & 21.516\SuperScriptSameStyle{\dagger} & 21.419\SuperScriptSameStyle{\dagger} & 0.001\SuperScriptSameStyle{\dagger} & 0.003\SuperScriptSameStyle{\dagger}\\
$O$ spherical & -0.397\SuperScriptSameStyle{\dagger} & --- & -6.561\SuperScriptSameStyle{\dagger} & --- & -2.573\SuperScriptSameStyle{\dagger} & --- & -9.595\SuperScriptSameStyle{\dagger} & 5.652\SuperScriptSameStyle{\dagger} & 5.640\SuperScriptSameStyle{\dagger} & 0.000169\SuperScriptSameStyle{\dagger} & -4.2e-04\SuperScriptSameStyle{\dagger}\\
same\_manifold & -2.953\SuperScriptSameStyle{\dagger} & -0.379\SuperScriptSameStyle{\dagger} & --- & --- & -38.203\SuperScriptSameStyle{\dagger} & 0.681{\scriptsize{$^{**}$}} & --- & -74.210\SuperScriptSameStyle{\dagger} & -65.403\SuperScriptSameStyle{\dagger} & -0.488\SuperScriptSameStyle{\dagger} & -0.341\SuperScriptSameStyle{\dagger}\\
same\_geometry & -3.129\SuperScriptSameStyle{\dagger} & -0.981\SuperScriptSameStyle{\dagger} & 0.276\SuperScriptSameStyle{\dagger} & -0.414\SuperScriptSameStyle{\dagger} & -2.224\SuperScriptSameStyle{\dagger} & -14.200\SuperScriptSameStyle{\dagger} & 2.080\SuperScriptSameStyle{\dagger} & -4.608\SuperScriptSameStyle{\dagger} & -4.594\SuperScriptSameStyle{\dagger} & -0.000192\SuperScriptSameStyle{\dagger} & -3.6e-04\SuperScriptSameStyle{\dagger}\\
both\_orientable & 0.146\SuperScriptSameStyle{\dagger} & --- & -3.828\SuperScriptSameStyle{\dagger} & 0.444\SuperScriptSameStyle{\dagger} & -0.637\SuperScriptSameStyle{\dagger} & --- & -6.813\SuperScriptSameStyle{\dagger} & 3.189\SuperScriptSameStyle{\dagger} & 3.179\SuperScriptSameStyle{\dagger} & 0.000136\SuperScriptSameStyle{\dagger} & 2.3e-04\SuperScriptSameStyle{\dagger}\\
both\_symmetric & -0.356\SuperScriptSameStyle{\dagger} & --- & 8.238\SuperScriptSameStyle{\dagger} & 0.545\SuperScriptSameStyle{\dagger} & 1.482\SuperScriptSameStyle{\dagger} & --- & 16.318\SuperScriptSameStyle{\dagger} & 0.223 & 0.222 & 8.6e-06 & -1.7e-05\\
diff\_curvature\_pos & 49.145\SuperScriptSameStyle{\dagger} & 0.976\SuperScriptSameStyle{\dagger} & 67.177\SuperScriptSameStyle{\dagger} & 0.394\SuperScriptSameStyle{\dagger} & 89.692\SuperScriptSameStyle{\dagger} & 22.575\SuperScriptSameStyle{\dagger} & 27.560\SuperScriptSameStyle{\dagger} & 124.302\SuperScriptSameStyle{\dagger} & 123.956\SuperScriptSameStyle{\dagger} & 0.005\SuperScriptSameStyle{\dagger} & 0.009\SuperScriptSameStyle{\dagger}\\
diff\_curvature\_neg & -6.583\SuperScriptSameStyle{\dagger} & 5.365\SuperScriptSameStyle{\dagger} & -10.691\SuperScriptSameStyle{\dagger} & -1.340\SuperScriptSameStyle{\dagger} & 39.961\SuperScriptSameStyle{\dagger} & 31.779\SuperScriptSameStyle{\dagger} & 7.286\SuperScriptSameStyle{\dagger} & 422.827\SuperScriptSameStyle{\dagger} & 421.652\SuperScriptSameStyle{\dagger} & 0.016\SuperScriptSameStyle{\dagger} & 0.035\SuperScriptSameStyle{\dagger}\\
diff\_samples & -0.010\SuperScriptSameStyle{\dagger} & --- & -0.008\SuperScriptSameStyle{\dagger} & 0.005\SuperScriptSameStyle{\dagger} & -0.047\SuperScriptSameStyle{\dagger} & --- & -0.048\SuperScriptSameStyle{\dagger} & 0.198\SuperScriptSameStyle{\dagger} & 0.197\SuperScriptSameStyle{\dagger} & 7.58e-06\SuperScriptSameStyle{\dagger} & 1.4e-05\SuperScriptSameStyle{\dagger}\\
same\_topology & 0.564\SuperScriptSameStyle{\dagger} & --- & --- & --- & 2.010\SuperScriptSameStyle{\dagger} & --- & --- & 3.603\SuperScriptSameStyle{\dagger} & 3.594\SuperScriptSameStyle{\dagger} & 0.000114\SuperScriptSameStyle{\dagger} & -8.3e-04\SuperScriptSameStyle{\dagger}\\
\hline
N & 25600 & 4900 & 11200 & 11200 & 25900 & 4900 & 11200 & 11200 & \multicolumn{3}{c}{25600} \\
(pseudo)$R^2$ & 0.8496 & 0.7195 & 0.9077 & 0.2471 & 0.7519 & 0.7432 & 0.7736 & 0.6361 & \multicolumn{3}{c}{0.1703} \\
$R^2_{adj}$ & 0.8495 & 0.7193 & 0.9076 & 0.2466 & 0.7518 & 0.7429 & 0.7735 & 0.6358 & \multicolumn{3}{c}{---} \\
\end{tabular}
}
\caption{Factors affecting quality of SOM embedding. Partial effects for OLS and marginal effects for tobit. \SuperScriptSameStyle{\dagger}, {\scriptsize{$^{**}$}}, {\scriptsize{$^{*}$}} denote significance at 1\%, 5\%, 10\% level, accordingly.
In all regressions, p-values for joint significance tests equaled 0.000.
}\label{regs}
\end{table*}
}
\longonly{
\begin{table*}[bt]
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c}
& \multicolumn{4}{c|}{energy} & \multicolumn{4}{c|}{Kendall unfitness} & \multicolumn{3}{c}{tears(1)} \\
$O$ & clos & disk & clos & disk & clos & disk & clos & disk & \multicolumn{3}{|c}{clos} \\
$E$ & clos & disk & disk & clos & clos & disk & disk & clos & \multicolumn{3}{|c}{clos} \\ \hline
Effect type & \multicolumn{8}{c|}{\scalebox{.9}{${\Delta E(y)}/{\Delta x_k}$}} & \scalebox{.9}{$E(y|x)$} & \scalebox{0.9}{$E(y|x,\!y\!>\!0)$} & \scalebox{.9}{$P(y>0|x)$} \\ \hline
(Intercept) & 5.483\SuperScriptSameStyle{\dagger} & 2.212\SuperScriptSameStyle{\dagger} & 11.598\SuperScriptSameStyle{\dagger} & 1.591\SuperScriptSameStyle{\dagger} & 36.971\SuperScriptSameStyle{\dagger} & 23.578\SuperScriptSameStyle{\dagger} & 59.334\SuperScriptSameStyle{\dagger} & 55.909\SuperScriptSameStyle{\dagger} & --- & --- & ---\\
$O$ hyperbolic & 2.908\SuperScriptSameStyle{\dagger} & -0.931\SuperScriptSameStyle{\dagger} & 1.642\SuperScriptSameStyle{\dagger} & 0.173\SuperScriptSameStyle{\dagger} & 10.171\SuperScriptSameStyle{\dagger} & 2.465\SuperScriptSameStyle{\dagger} & 4.432\SuperScriptSameStyle{\dagger} & 18.884\SuperScriptSameStyle{\dagger} & 18.769\SuperScriptSameStyle{\dagger} & 0.002\SuperScriptSameStyle{\dagger} & 0.003\SuperScriptSameStyle{\dagger}\\
$O$ spherical & -0.072 & --- & -3.395\SuperScriptSameStyle{\dagger} & --- & -2.337\SuperScriptSameStyle{\dagger} & --- & -9.476\SuperScriptSameStyle{\dagger} & 3.992\SuperScriptSameStyle{\dagger} & 3.979\SuperScriptSameStyle{\dagger} & 0.000205\SuperScriptSameStyle{\dagger} & -4.2e-04\SuperScriptSameStyle{\dagger}\\
same\_manifold & -3.257\SuperScriptSameStyle{\dagger} & -0.166\SuperScriptSameStyle{\dagger} & --- & --- & -34.519\SuperScriptSameStyle{\dagger} & 2.025\SuperScriptSameStyle{\dagger} & --- & -69.686\SuperScriptSameStyle{\dagger} & -60.862\SuperScriptSameStyle{\dagger} & -0.556\SuperScriptSameStyle{\dagger} & -0.341\SuperScriptSameStyle{\dagger}\\
same\_geometry & -3.606\SuperScriptSameStyle{\dagger} & -0.559\SuperScriptSameStyle{\dagger} & 0.116{\scriptsize{$^{**}$}} & -0.208\SuperScriptSameStyle{\dagger} & -2.676\SuperScriptSameStyle{\dagger} & -14.219\SuperScriptSameStyle{\dagger} & 1.806\SuperScriptSameStyle{\dagger} & -3.716\SuperScriptSameStyle{\dagger} & -3.700\SuperScriptSameStyle{\dagger} & -0.000245\SuperScriptSameStyle{\dagger} & -3.6e-04\SuperScriptSameStyle{\dagger}\\
both\_orientable & 0.485\SuperScriptSameStyle{\dagger} & --- & -1.976\SuperScriptSameStyle{\dagger} & 0.188\SuperScriptSameStyle{\dagger} & -0.952\SuperScriptSameStyle{\dagger} & --- & -6.548\SuperScriptSameStyle{\dagger} & 2.976\SuperScriptSameStyle{\dagger} & 2.963\SuperScriptSameStyle{\dagger} & 0.000204\SuperScriptSameStyle{\dagger} & 2.3e-04\SuperScriptSameStyle{\dagger}\\
both\_symmetric & -0.232\SuperScriptSameStyle{\dagger} & --- & 4.330\SuperScriptSameStyle{\dagger} & 0.257\SuperScriptSameStyle{\dagger} & 1.821\SuperScriptSameStyle{\dagger} & --- & 15.724\SuperScriptSameStyle{\dagger} & 0.911\SuperScriptSameStyle{\dagger} & 0.907\SuperScriptSameStyle{\dagger} & 5.74e-05\SuperScriptSameStyle{\dagger} & -1.7e-05\\
diff\_curvature\_pos & 51.397\SuperScriptSameStyle{\dagger} & 1.446\SuperScriptSameStyle{\dagger} & 149.335\SuperScriptSameStyle{\dagger} & 0.097{\scriptsize{$^{**}$}} & 91.479\SuperScriptSameStyle{\dagger} & 100.401\SuperScriptSameStyle{\dagger} & 101.167\SuperScriptSameStyle{\dagger} & 616.885\SuperScriptSameStyle{\dagger} & 614.368\SuperScriptSameStyle{\dagger} & 0.038\SuperScriptSameStyle{\dagger} & 0.009\SuperScriptSameStyle{\dagger}\\
diff\_curvature\_neg & -7.692\SuperScriptSameStyle{\dagger} & 11.848\SuperScriptSameStyle{\dagger} & -22.978\SuperScriptSameStyle{\dagger} & -1.158\SuperScriptSameStyle{\dagger} & 49.689\SuperScriptSameStyle{\dagger} & 132.153\SuperScriptSameStyle{\dagger} & 29.844\SuperScriptSameStyle{\dagger} & 1667.528\SuperScriptSameStyle{\dagger} & 1660.723\SuperScriptSameStyle{\dagger} & 0.103\SuperScriptSameStyle{\dagger} & 0.035\SuperScriptSameStyle{\dagger}\\
diff\_samples & -0.005\SuperScriptSameStyle{\dagger} & --- & -0.002\SuperScriptSameStyle{\dagger} & 0.0002\SuperScriptSameStyle{\dagger} & -0.056\SuperScriptSameStyle{\dagger} & --- & -0.018\SuperScriptSameStyle{\dagger} & 0.040\SuperScriptSameStyle{\dagger} & 0.040\SuperScriptSameStyle{\dagger} & 2.46e-06\SuperScriptSameStyle{\dagger} & 1.4e-05\SuperScriptSameStyle{\dagger}\\
same\_topology & 0.885\SuperScriptSameStyle{\dagger} & --- & --- & --- & 1.351\SuperScriptSameStyle{\dagger} & --- & --- & 7.564\SuperScriptSameStyle{\dagger} & 7.543\SuperScriptSameStyle{\dagger} & 0.000316\SuperScriptSameStyle{\dagger} & -8.3e-04\SuperScriptSameStyle{\dagger}\\
N & 25600 & 4900 & 11200 & 11200 & 25600 & 4900 & 11200 & 11200 & \multicolumn{3}{c}{25600} \\
(pseudo)$R^2$ & 0.8513 & 0.7384 & 0.9091 & 0.1522 & 0.7535 & 0.7602 & 0.7964 & 0.6636 & \multicolumn{3}{c}{0.1728} \\
$R^2_{adj}$ & 0.8513 & 0.7382 & 0.9090 & 0.1517 & 0.7535 & 0.7600 & 0.7963 & 0.6634 & \multicolumn{3}{c}{---} \\
\end{tabular}
}
\caption{Factors affecting quality of SOM embedding (double density of original manifold). Partial effects for OLS and marginal effects for tobit. \SuperScriptSameStyle{\dagger}, {\scriptsize{$^{**}$}}, {\scriptsize{$^{*}$}} denote significance at 1\%, 5\%, 10\% level, respectively.
In all regressions, p-values for joint significance tests equaled 0.000.
}\label{regs_d2}
\end{table*}
}
\longonly{
\begin{table*}[bt]
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c|c}
$O$ & \multicolumn{3}{c|}{clos} & \multicolumn{3}{c|}{disk} & clos & disk \\
$E$ & \multicolumn{3}{c|}{clos} & \multicolumn{3}{c|}{disk} & disk & clos \\ \hline
Effect type & $E(y|x)$ & $E(y|x, y>0)$ & $P(y>0|x)$ & $E(y|x)$ & $E(y|x, y>0)$ & $P(y>0|x)$ & \multicolumn{2}{|c}{${\Delta E(y)}/{\Delta x_k}$} \\ \hline
(Intercept) & --- & --- & --- & --- & --- & --- & 29.700\SuperScriptSameStyle{\dagger} & 32.425\SuperScriptSameStyle{\dagger}\\
$O$ hyperbolic & 1.789\SuperScriptSameStyle{\dagger} & 1.789\SuperScriptSameStyle{\dagger} & 3.53e-07\SuperScriptSameStyle{\dagger} & -0.223 & -0.180 & -0.024 & 0.470\SuperScriptSameStyle{\dagger} & -2.568\SuperScriptSameStyle{\dagger}\\
$O$ spherical & 1.942\SuperScriptSameStyle{\dagger} & 1.942\SuperScriptSameStyle{\dagger} & 1.49e-07\SuperScriptSameStyle{\dagger} & --- & --- & --- & 0.807\SuperScriptSameStyle{\dagger} & ---\\
same\_manifold & -15.153\SuperScriptSameStyle{\dagger} & -14.697\SuperScriptSameStyle{\dagger} & -0.076\SuperScriptSameStyle{\dagger} & -5.900\SuperScriptSameStyle{\dagger} & -7.235\SuperScriptSameStyle{\dagger} & -0.653\SuperScriptSameStyle{\dagger} & --- & ---\\
same\_geometry & -3.088\SuperScriptSameStyle{\dagger} & -3.088\SuperScriptSameStyle{\dagger} & -1.1e-06\SuperScriptSameStyle{\dagger} & -6.068\SuperScriptSameStyle{\dagger} & -4.524\SuperScriptSameStyle{\dagger} & -0.487\SuperScriptSameStyle{\dagger} & -3.165\SuperScriptSameStyle{\dagger} & -2.696\SuperScriptSameStyle{\dagger}\\
both\_orientable & 0.322\SuperScriptSameStyle{\dagger} & 0.322\SuperScriptSameStyle{\dagger} & 4.78e-08\SuperScriptSameStyle{\dagger} & --- & --- & --- & 1.196\SuperScriptSameStyle{\dagger} & -0.043\\
both\_symmetric & -1.344\SuperScriptSameStyle{\dagger} & -1.344\SuperScriptSameStyle{\dagger} & -1.65e-07\SuperScriptSameStyle{\dagger} & --- & --- & --- & -1.830\SuperScriptSameStyle{\dagger} & 0.354\SuperScriptSameStyle{\dagger}\\
diff\_curvature\_pos & -9.534\SuperScriptSameStyle{\dagger} & -9.534\SuperScriptSameStyle{\dagger} & 0 & 4.837\SuperScriptSameStyle{\dagger} & 3.928\SuperScriptSameStyle{\dagger} & 0.532\SuperScriptSameStyle{\dagger} & 3.344\SuperScriptSameStyle{\dagger} & -5.788\SuperScriptSameStyle{\dagger}\\
diff\_curvature\_neg & -10.300\SuperScriptSameStyle{\dagger} & -10.300\SuperScriptSameStyle{\dagger} & 0 & 6.730\SuperScriptSameStyle{\dagger} & 5.466\SuperScriptSameStyle{\dagger} & 0.741\SuperScriptSameStyle{\dagger} & -10.195\SuperScriptSameStyle{\dagger} & -12.806\SuperScriptSameStyle{\dagger}\\
diff\_samples & 0.017\SuperScriptSameStyle{\dagger} & 0.017\SuperScriptSameStyle{\dagger} & 0 & --- & --- & --- & 0.025\SuperScriptSameStyle{\dagger} & 0.013\SuperScriptSameStyle{\dagger}\\
same\_topology & -9.021\SuperScriptSameStyle{\dagger} & -9.011\SuperScriptSameStyle{\dagger} & -0.000822\SuperScriptSameStyle{\dagger} & --- & --- & --- & --- & ---\\
\hline
N &\multicolumn{3}{c|}{25600} & \multicolumn{3}{c|}{4900} & 11200 & 11200 \\
(pseudo)$R^2$ &\multicolumn{3}{c|}{0.2099} & \multicolumn{3}{c|}{0.1355} & 0.5882 & 0.5488 \\
$R^2_{adj}$ &\multicolumn{3}{c|}{---} & \multicolumn{3}{c|}{---} & 0.5879 & 0.5485 \\
\end{tabular}
}
\caption{Factors affecting quality of SOM embedding -- Villmann measure. Partial effects for OLS and marginal effects for tobit. \SuperScriptSameStyle{\dagger}, {\scriptsize{$^{**}$}}, {\scriptsize{$^{*}$}} denote significance at 1\%, 5\%, 10\% level, respectively. In all regressions, p-values for joint significance tests equaled 0.000.
}\label{regs_vil}
\end{table*}
}
\longonly{
\begin{table*}[bt]
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c|c}
$O$ & \multicolumn{3}{c|}{clos} & \multicolumn{3}{c|}{disk} & clos & disk \\
$E$ & \multicolumn{3}{c|}{clos} & \multicolumn{3}{c|}{disk} & disk & clos \\ \hline
Effect type & $E(y|x)$ & $E(y|x, y>0)$ & $P(y>0|x)$ & $E(y|x)$ & $E(y|x, y>0)$ & $P(y>0|x)$ & \multicolumn{2}{|c}{${\Delta E(y)}/{\Delta x_k}$} \\ \hline
(Intercept) & --- & --- & --- & --- & --- & --- & 29.789\SuperScriptSameStyle{\dagger} & 33.035\SuperScriptSameStyle{\dagger}\\
$O$ hyperbolic & 1.806\SuperScriptSameStyle{\dagger} & 1.806\SuperScriptSameStyle{\dagger} & 3.52e-07\SuperScriptSameStyle{\dagger} & -0.025 & -0.018 & -0.002 & 0.701\SuperScriptSameStyle{\dagger} & -2.591\SuperScriptSameStyle{\dagger}\\
$O$ spherical & 2.078\SuperScriptSameStyle{\dagger} & 2.078\SuperScriptSameStyle{\dagger} & 1.55e-07\SuperScriptSameStyle{\dagger} & --- & --- & --- & 0.717\SuperScriptSameStyle{\dagger} & ---\\
same\_manifold & -15.228\SuperScriptSameStyle{\dagger} & -14.771\SuperScriptSameStyle{\dagger} & -0.075\SuperScriptSameStyle{\dagger} & -5.163\SuperScriptSameStyle{\dagger} & -4.719\SuperScriptSameStyle{\dagger} & -0.561\SuperScriptSameStyle{\dagger} & --- & ---\\
same\_geometry & -3.078\SuperScriptSameStyle{\dagger} & -3.078\SuperScriptSameStyle{\dagger} & -1.07e-06\SuperScriptSameStyle{\dagger} & -8.714\SuperScriptSameStyle{\dagger} & -6.325\SuperScriptSameStyle{\dagger} & -0.493\SuperScriptSameStyle{\dagger} & -3.120\SuperScriptSameStyle{\dagger} & -2.788\SuperScriptSameStyle{\dagger}\\
both\_orientable & 0.386\SuperScriptSameStyle{\dagger} & 0.386\SuperScriptSameStyle{\dagger} & 5.76e-08\SuperScriptSameStyle{\dagger} & --- & --- & --- & 0.965\SuperScriptSameStyle{\dagger} & -0.318\SuperScriptSameStyle{\dagger}\\
both\_symmetric & -1.398\SuperScriptSameStyle{\dagger} & -1.398\SuperScriptSameStyle{\dagger} & -1.69e-07\SuperScriptSameStyle{\dagger} & --- & --- & --- & -1.473\SuperScriptSameStyle{\dagger} & 0.283\SuperScriptSameStyle{\dagger}\\
diff\_curvature\_pos & -37.127\SuperScriptSameStyle{\dagger} & -37.127\SuperScriptSameStyle{\dagger} & 0 & 27.320\SuperScriptSameStyle{\dagger} & 19.796\SuperScriptSameStyle{\dagger} & 2.243\SuperScriptSameStyle{\dagger} & 21.645\SuperScriptSameStyle{\dagger} & -24.700\SuperScriptSameStyle{\dagger}\\
diff\_curvature\_neg & -47.384\SuperScriptSameStyle{\dagger} & -47.384\SuperScriptSameStyle{\dagger} & 0 & 42.883\SuperScriptSameStyle{\dagger} & 31.073\SuperScriptSameStyle{\dagger} & 3.521\SuperScriptSameStyle{\dagger} & -41.980\SuperScriptSameStyle{\dagger} & -53.582\SuperScriptSameStyle{\dagger}\\
diff\_samples & 0.004\SuperScriptSameStyle{\dagger} & 0.004\SuperScriptSameStyle{\dagger} & 0 & --- & --- & --- & 0.012\SuperScriptSameStyle{\dagger} & -0.002\SuperScriptSameStyle{\dagger}\\
same\_topology & -9.053\SuperScriptSameStyle{\dagger} & -9.043\SuperScriptSameStyle{\dagger} & -0.00081\SuperScriptSameStyle{\dagger} & --- & --- & --- & --- & ---\\
\hline
N &\multicolumn{3}{c|}{25600} & \multicolumn{3}{c|}{4900} & 11200 & 11200 \\
(pseudo)$R^2$ &\multicolumn{3}{c|}{0.2092} & \multicolumn{3}{c|}{0.1402} & 0.6078 & 0.5598 \\
$R^2_{adj}$ &\multicolumn{3}{c|}{---} & \multicolumn{3}{c|}{---} & 0.6075 & 0.5595 \\
\end{tabular}
}
\caption{Factors affecting quality of SOM embedding -- Villmann measure (double density of original manifold). Partial effects for OLS and marginal effects for tobit. \SuperScriptSameStyle{\dagger}, {\scriptsize{$^{**}$}}, {\scriptsize{$^{*}$}} denote significance at 1\%, 5\%, 10\% level, respectively. In all regressions, p-values for joint significance tests equaled 0.000.
}\label{regs_vil_d2}
\end{table*}
}
\shortonly{
\begin{table*}[bt]
\begin{tabular}{l|c|c|c|c|c|c|c|c}
\toprule
$O$ & \multicolumn{3}{c|}{closed} & \multicolumn{3}{c|}{disk} & closed & disk \\
$E$ & \multicolumn{3}{c|}{closed} & \multicolumn{3}{c|}{disk} & disk & closed \\ \hline
Effect type & $E(y|x)$ & $E(y|x, y>0)$ & $P(y>0|x)$ & $E(y|x)$ & $E(y|x, y>0)$ & $P(y>0|x)$ & \multicolumn{2}{|c}{${\Delta E(y)}/{\Delta x_k}$} \\ \midrule
(Intercept) & --- & --- & --- & --- & --- & --- & 29.70\SuperScriptSameStyle{\dagger} & 32.42\SuperScriptSameStyle{\dagger}\\
$O$ hyperbolic & 1.79\SuperScriptSameStyle{\dagger} & 1.79\SuperScriptSameStyle{\dagger} & 3.5e-07\SuperScriptSameStyle{\dagger} & -0.22 & -0.18 & -0.024 & 0.47\SuperScriptSameStyle{\dagger} & -2.57\SuperScriptSameStyle{\dagger}\\
$O$ spherical & 1.94\SuperScriptSameStyle{\dagger} & 1.94\SuperScriptSameStyle{\dagger} & 1.5e-07\SuperScriptSameStyle{\dagger} & --- & --- & --- & 0.81\SuperScriptSameStyle{\dagger} & ---\\
same\_manifold & -15.15\SuperScriptSameStyle{\dagger} & -14.70\SuperScriptSameStyle{\dagger} & -0.08\SuperScriptSameStyle{\dagger} & -5.90\SuperScriptSameStyle{\dagger} & -7.23\SuperScriptSameStyle{\dagger} & -0.65\SuperScriptSameStyle{\dagger} & --- & ---\\
same\_geometry & -3.09\SuperScriptSameStyle{\dagger} & -3.09\SuperScriptSameStyle{\dagger} & -1.1e-06\SuperScriptSameStyle{\dagger} & -6.07\SuperScriptSameStyle{\dagger} & -4.52\SuperScriptSameStyle{\dagger} & -0.49\SuperScriptSameStyle{\dagger} & -3.16\SuperScriptSameStyle{\dagger} & -2.70\SuperScriptSameStyle{\dagger}\\
both\_orientable & 0.32\SuperScriptSameStyle{\dagger} & 0.32\SuperScriptSameStyle{\dagger} & 4.8e-08\SuperScriptSameStyle{\dagger} & --- & --- & --- & 1.20\SuperScriptSameStyle{\dagger} & -0.043\\
both\_symmetric & -1.34\SuperScriptSameStyle{\dagger} & -1.34\SuperScriptSameStyle{\dagger} & -1.7e-07\SuperScriptSameStyle{\dagger} & --- & --- & --- & -1.83\SuperScriptSameStyle{\dagger} & 0.35\SuperScriptSameStyle{\dagger}\\
diff\_curv\_pos & -9.53\SuperScriptSameStyle{\dagger} & -9.53\SuperScriptSameStyle{\dagger} & 0 & 4.84\SuperScriptSameStyle{\dagger} & 3.93\SuperScriptSameStyle{\dagger} & 0.53\SuperScriptSameStyle{\dagger} & 3.34\SuperScriptSameStyle{\dagger} & -5.79\SuperScriptSameStyle{\dagger}\\
diff\_curv\_neg & -10.30\SuperScriptSameStyle{\dagger} & -10.30\SuperScriptSameStyle{\dagger} & 0 & 6.73\SuperScriptSameStyle{\dagger} & 5.47\SuperScriptSameStyle{\dagger} & 0.74\SuperScriptSameStyle{\dagger} & -10.20\SuperScriptSameStyle{\dagger} & -12.81\SuperScriptSameStyle{\dagger}\\
diff\_samples & 0.017\SuperScriptSameStyle{\dagger} & 0.017\SuperScriptSameStyle{\dagger} & 0 & --- & --- & --- & 0.025\SuperScriptSameStyle{\dagger} & 0.013\SuperScriptSameStyle{\dagger}\\
same\_topology & -9.02\SuperScriptSameStyle{\dagger} & -9.01\SuperScriptSameStyle{\dagger} & -0.00082\SuperScriptSameStyle{\dagger} & --- & --- & --- & --- & ---\\
\hline
N &\multicolumn{3}{c|}{25600} & \multicolumn{3}{c|}{4900} & 11200 & 11200 \\
(pseudo)$R^2$ &\multicolumn{3}{c|}{0.2099} & \multicolumn{3}{c|}{0.1355} & 0.5882 & 0.5488 \\
$R^2_{adj}$ &\multicolumn{3}{c|}{---} & \multicolumn{3}{c|}{---} & 0.5879 & 0.5485 \\ \bottomrule
\end{tabular}
\caption{Factors affecting quality of SOM embedding -- Villmann measure. Partial effects for OLS and marginal effects for tobit. \SuperScriptSameStyle{\dagger}, {\scriptsize{$^{**}$}}, {\scriptsize{$^{*}$}} denote significance at 1\%, 5\%, 10\% level, respectively. In all regressions, p-values for joint significance tests equaled 0.000.
}\label{regs_vil}
\end{table*}
}
\subsection{Quantitative Results}
We use the following parameters: $t_{max}=30000$ iterations, learning coefficient $\eta=0.1$,
dispersion precision $p=10^{-4}$, $T$ equal to the number of dispersion steps after which the ratio of the maximum to the minimum value is at most 1.6, 60 landscape dimensions,
and manifolds with about 520 tiles.
Computing 57600 embeddings takes 4 hours on an 8-core Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz.
Our implementation is included in the RogueViz non-Euclidean geometry engine\longonly{ \cite{rogueviz2021}}.
The results of our experiments, code and visualizations are at
\url{https://figshare.com/articles/software/Non-Euclidean_Self-Organizing_Maps_code_and_data_/16624393}.
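The update rule these parameters feed into can be illustrated with a minimal, self-contained sketch of a single SOM step with a Gaussian neighborhood over discrete tile distances. All names and the shrinking-radius schedule here are simplifying assumptions for illustration, not our actual RogueViz implementation.

```python
import numpy as np

def som_step(weights, tile_dist, x, t, t_max=30000, eta=0.1):
    """One SOM update: pull every neuron's weight toward the sample x,
    scaled by a Gaussian neighborhood of its tile distance to the winner.
    (Illustrative only; the radius schedule is an assumption.)"""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    sigma = 3.0 * (1.0 - t / t_max) + 0.5          # assumed shrinking radius
    h = np.exp(-tile_dist[winner] ** 2 / (2.0 * sigma ** 2))
    return weights + eta * h[:, None] * (x - weights)

# toy grid: 5 neurons on a path, discrete tile distances |i - j|
rng = np.random.default_rng(0)
tile_dist = np.abs(np.arange(5)[:, None] - np.arange(5)[None, :]).astype(float)
w0 = rng.normal(size=(5, 2))
x = np.array([1.0, 0.0])
w1 = som_step(w0, tile_dist, x, t=0)
```

Since the neighborhood weight is positive for every neuron, a single step moves all weights strictly closer to the sample, with the winner moving the most.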
\paragraph{Comparison of simulated and Gaussian dispersion}
We use the aforementioned measures of quality to check if simulated dispersion improves
the quality of the embedding in comparison to Gaussian. In the Gaussian dispersion function,
we take the discrete distance between tiles as the distance between two neurons.
We take advantage of the paired nature of the data
and compute the differences between the values of the quality measure obtained with Gaussian and the simulated
dispersion. We use the Wilcoxon signed-rank test with the null hypothesis of no difference,
against the alternative hypothesis that the results from simulated dispersion are better. We use the 1\% significance level.
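As a concrete sketch, the paired one-sided signed-rank test can be implemented with the usual normal approximation as follows; the data are synthetic and the helper name is ours, not part of our pipeline.

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_greater(diff):
    """One-sided Wilcoxon signed-rank test via the normal approximation.
    H1: the median of `diff` is positive. No tie correction (adequate
    for continuous quality measures)."""
    d = np.asarray(diff, dtype=float)
    d = d[d != 0]                                    # drop zero differences
    n = d.size
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0  # ranks of |d|
    w_plus = ranks[d > 0].sum()                      # rank sum of positive diffs
    mu = n * (n + 1) / 4.0
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))          # upper-tail p-value

# paired differences: quality(Gaussian) - quality(simulated), synthetic
rng = np.random.default_rng(1)
diff = rng.normal(loc=0.5, scale=1.0, size=200)
p = wilcoxon_greater(diff)
```

A small p-value here supports the alternative that simulated dispersion yields lower (better) values of the measure.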
\longonly{
\begin{table}[h]
\scalebox{0.9}{
\begin{tabular}{l|c|c|c|c|c|c|c}
& \multirow{3}{*}{all} & \multicolumn{2}{|c|}{$E=O$} & \multicolumn{4}{c}{$E\neq O$} \\
$O$ & & clos & disk & clos & disk & clos & disk \\
$E$ & & clos & disk & clos & disk & disk & clos \\ \hline
energy & 0.00 & 0.00 & 0.02$^\ddagger$ & 0.00 & 0.00 & 0.00 & 0.92$^\ddagger$ \\ \hline
K. unfit. & 0.00 & 0.00 & 0.97$^\ddagger$ & 0.00 & 0.00 & 1.00 & 0.00 \\ \hline
tears(1) & 1.00 & 0.00 & --- & 1.00 & --- & --- & --- \\ \hline
Villmann & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\end{tabular}
}
\caption{
P-values for Wilcoxon tests on differences between quality measures from
SOMs with Gaussian against simulated dispersion. $H_1$ indicates better results from simulated dispersion. $\ddagger$ denotes statistically insignificant difference.}\label{wilcoxon}
\end{table}
}
\longonly{
\begin{table}[h]
\scalebox{0.9}{
\begin{tabular}{l|c|c|c|c|c|c|c}
& \multirow{3}{*}{all} & \multicolumn{2}{|c|}{$E=O$} & \multicolumn{4}{c}{$E\neq O$} \\
$O$ & & clos & disk & clos & disk & clos & disk \\
$E$ & & clos & disk & clos & disk & disk & clos \\ \hline
energy & 0.00 & 0.91$^\ddagger$ & 0.77$^\ddagger$ & 1.00 & 0.00 & 0.00 & 1.00 \\ \hline
K. unfit. & 0.00 & 0.00 & 0.99$^\ddagger$ & 0.00 & 0.00 & 1.00 & 0.00 \\ \hline
tears(1) & 1.00 & 0.00 & --- & 1.00 & --- & --- & --- \\ \hline
Villmann & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\end{tabular}
}
\caption{
P-values for Wilcoxon tests on differences between quality measures from
SOMs with Gaussian against simulated dispersion (double density of original manifold). $H_1$ indicates better results from simulated dispersion. $\ddagger$ denotes statistically insignificant difference.}\label{wilcoxon_d2}
\end{table}
}
\longonly{
Our results (Table~\ref{wilcoxon}) show that the embeddings obtained with simulated dispersion usually have lower energy than the
embeddings obtained with Gaussian dispersion. Two exceptions are the scenario where we embed disk data on a closed manifold and the scenario where we correctly embed disks.
However, those differences are not statistically significant (p-values for the two-sided Wilcoxon test: 0.153 for the first case and 0.044 for the second). Embeddings obtained with
simulated dispersion also preserve the original relationships in the data better with respect to the Kendall coefficient.
Two exceptions are the insignificant difference for correctly used disks (p-value for the two-sided Wilcoxon test 0.061) and a significant difference
in favor of Gaussian dispersion when closed-manifold data is wrongly embedded into a disk. For completeness, we also show the results for topological errors.
Here, we find some evidence of an advantage of Gaussian dispersion if we embed into the wrong manifold. However, embedding into a
wrong manifold naturally comes with tears in the embeddings.
In the case of correctly embedded closed manifolds, simulated dispersion yields fewer tears.
Our results are stable even for double density of the original manifold (Table~\ref{wilcoxon_d2}).}
\shortonly{Our results show that the embeddings obtained with simulated dispersion usually have a lower Villmann measure than the embeddings obtained with Gaussian dispersion.
This remains true whether we use single or double density of the original manifold, and when we restrict to closed/disk $O$ or closed/disk $E$. The p-value is 0.00
in all cases. The results are slightly different for other measures of embedding quality; see the full version for more details.}
\longonly{
\begin{figure*}
\centering
\subfig{0.24\linewidth}{graphs/diff_energy_d2.pdf} \hskip -1mm
\subfig{0.24\linewidth}{graphs/diff_kendall_d2.pdf} \hskip -1mm
\subfig{0.24\linewidth}{graphs/diff_emptyx_same_d2.pdf}
\subfig{0.24\linewidth}{graphs/diff_villman_d2.pdf}
\caption{\label{fig:diffd2}
Distribution of differences (Gaussian$-$simulated) for (a) energy, (b) Kendall unfitness, (c) tears(1) for the same closed manifold, (d) Villmann measure. Black graph is double density, blue graph is single density.}
\end{figure*}
See Figure~\ref{fig:diffd2} for the distribution of differences (Gaussian$-$simulated) for our measures.}
\paragraph{Quality of shape recovery}
We have already shown that using simulated dispersion improves the quality of the embedding. However, even a SOM on a correctly
chosen manifold may be prone to errors. Here, we analyze the factors that affect the errors in embeddings. \longonly{To this end, for errors in energy and Kendall
unfitness we use OLS regression.}
\longonly{According to the data in Table~\ref{regs}, if the original data comes from hyperbolic geometry, SOMs err more than on data from Euclidean geometry.
On the contrary, SOMs err less on data from spherical geometry than from Euclidean geometry in terms of energy and the Kendall coefficient. A correct choice of the
embedding manifold reduces errors (with an exception for the Kendall coefficient for correctly chosen disks). Same geometry has an ambiguous effect: with correctly chosen
embedding manifolds, SOMs err less; otherwise, same geometry may not help. Wrong choices of curvature usually come with greater numbers of errors: using an embedding manifold with lower curvature than the original
(positive difference in curvature, {\it diff\_curv\_pos})
always worsens the fit; higher curvature than the original usually improves the energy.
Surprisingly, same topology worsens the Kendall coefficient and increases energy errors. However, these measures are not well suited for this purpose.}
\longonly{Tears in visualizations of data embeddings from SOMs should indicate boundaries of clusters in the data.
In most cases, the users of SOM do not know the real shape of the data. In our setup, we did not
create clusters, so tears are errors that could mislead the user of SOM. Therefore, understanding which factors affect
the topological errors is crucial for the users. However, OLS fails to account for the qualitative difference between zero observations (no errors) and non-zero observations when we analyze tears.
Therefore, for topological errors we use a censored regression (tobit) model \cite{tobit,greene}.}
\longonly{Out of 25600 embeddings on closed manifolds, we got 2081 embeddings without topological errors (8.1\%). For correctly chosen closed manifolds this percentage was
significantly higher (94.3\%). The probability of SOMs making topological errors ($P(y\!>\!0|x)$) decreases if we correctly choose the manifold, the topology, or at least the geometry.
Differences in curvature vastly increase both the probability of the SOM yielding tears and the number of those errors (for the whole sample, $E(y|x)$, and conditionally on the SOM having erred, $E(y|x,\,y{>}0)$). Again, original hyperbolic manifolds are harder to recover than
Euclidean manifolds; also, orientability increases the difficulty of the task. Symmetry is insignificant when it comes to topological errors. The results are stable for the double density of the original manifold \longonly{(Table~\ref{regs_d2})}\shortonly{(full version)}\vshortonly{\cite{somarxiv}}.
}
OLS regression fails to account for the qualitative difference between low values of the Villmann measure (only local geometric errors) and high values (discontinuities).
Therefore, for topological errors where correct embeddings are possible, we use a censored regression (tobit) model \cite{tobit,greene}. To this end, we left-censor values of the Villmann measure lower than 8 to 0.
In the case of closed manifolds embedded in closed manifolds, out of 25600 embeddings we got 2082 embeddings without topological errors (8.1\%). If disks were embedded into disks, 2499 out of 4900 embeddings were free of topological errors (51\%). For correctly chosen manifolds this percentage was
significantly higher (94.3\% for closed manifolds and 100\% for disks). The probability of SOMs making topological errors ($P(y\!>\!0|x)$) decreases if we correctly choose the manifold, the topology, or at least the geometry. Differences in curvature vastly increase both the probability of the SOM yielding tears and the number of those errors (for the whole sample, $E(y|x)$, and conditionally on the SOM having erred, $E(y|x,\,y{>}0)$) in the case of disks. Again, original hyperbolic closed manifolds are harder to recover than
Euclidean manifolds; also, orientability increases the difficulty of the task.
The results are stable for the double density of the original manifold\longonly{ (Table~\ref{regs_d2})}\shortonly{ (full version)}\vshortonly{ (\cite{somarxiv})}.
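For illustration, the left-censored log-likelihood behind such a tobit fit can be written down directly. The sketch below uses synthetic data with the same convention as our setup (latent values below the threshold 8 are recorded as 0); it is a generic illustration, not our estimation code.

```python
import numpy as np
from math import erf, sqrt, log, pi

def tobit_loglik(beta, sigma, X, y, c=8.0):
    """Log-likelihood of a left-censored tobit model: latent
    y* = X @ beta + eps, eps ~ N(0, sigma^2); whenever y* < c,
    the recorded value is 0 (censored)."""
    xb = X @ beta
    cens = (y == 0.0)
    # censored part: log P(y* < c) = log Phi((c - xb) / sigma)
    ll_c = sum(log(0.5 * (1.0 + erf((c - m) / (sigma * sqrt(2.0)))) + 1e-300)
               for m in xb[cens])
    # uncensored part: normal log-density of the observed y
    z = (y[~cens] - xb[~cens]) / sigma
    ll_u = float(np.sum(-0.5 * (z ** 2 + log(2.0 * pi)) - log(sigma)))
    return ll_c + ll_u

# synthetic data: latent values below 8 are recorded as 0
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y_star = X @ np.array([10.0, 3.0]) + rng.normal(scale=2.0, size=100)
y = np.where(y_star < 8.0, 0.0, y_star)
ll_true = tobit_loglik(np.array([10.0, 3.0]), 2.0, X, y)
ll_bad = tobit_loglik(np.array([0.0, 0.0]), 2.0, X, y)
```

Maximizing this likelihood over $(\beta, \sigma)$ yields the tobit estimates; here we only verify that the generating parameters score better than an arbitrary alternative.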
\section{Discussion}
\paragraph{Choosing the manifold.} One of the major concerns regarding non-Euclidean SOMs is the choice of the underlying manifold. Depending on the core interest of the researcher, the choice of the underlying manifold may vary. It is typical for multidimensional analysis techniques that the eventual choice of the setup is subjective. To make sure the results are robust, one may conduct the simulations
on distinct spaces. \longonly{Our proposition allows for easy comparison of the results. }The Goldberg-Coxeter construction lets us use a
similar number of neurons for different manifolds, controlling for the number of possible groups. Later diagnostics may include a comparison of information criteria.
\longonly{
\paragraph{Distances in Gaussian dispersion}
The Gaussian dispersion \cite{ritter99} was based on geometric distance, while in our benchmark we take the discrete distance between tiles.
\longonly{Table~\ref{wilcoxon_gauss} contains p-values for Wilcoxon tests with alternative hypotheses that SOMs with Gaussian
dispersion based on geometric distances perform worse than those with Gaussian dispersion based on discrete distances.
Geometric distance between $a$ and $b$ is the length of the (shortest) geodesic from $a$ to $b$ on sphere, Euclidean plane or hyperbolic plane.
The original proposition \cite{ritter99} did not take quotient spaces into account.
In the case of quotient spaces, this notion is less natural, since there may be multiple geodesics from $a$ to $b$; therefore,
we performed our comparison only for disks and spheres.
The results obtained with discrete distances were significantly better than the results obtained with geometric ones in terms of all metrics but Villmann's measure. According to Villmann's measure, geometric distances were a better fit when the embedding manifold is not the same as the original one.}
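For reference, the geodesic distances used above have closed forms on the sphere and on the hyperbolic plane. The sketch below (unit sphere, hyperboloid model) illustrates them on embedded coordinates; it is a standalone formula sketch, independent of our tile representation.

```python
import numpy as np

def sphere_dist(a, b):
    """Geodesic distance on the unit sphere: the angle between unit vectors."""
    return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def hyperbolic_dist(a, b):
    """Geodesic distance in the hyperboloid model of the hyperbolic plane:
    points (x, y, z) with x^2 + y^2 - z^2 = -1, z > 0."""
    mink = a[0] * b[0] + a[1] * b[1] - a[2] * b[2]   # Minkowski inner product
    return float(np.arccosh(np.clip(-mink, 1.0, None)))

# antipodal points on the sphere are pi apart
d_sph = sphere_dist(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]))
# moving distance t along a geodesic from the hyperboloid's apex
t = 1.5
d_hyp = hyperbolic_dist(np.array([0.0, 0.0, 1.0]),
                        np.array([np.sinh(t), 0.0, np.cosh(t)]))
```

The `clip` calls guard against floating-point values just outside the domains of `arccos` and `arccosh`.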
}
\longonly{
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c}
& all & $E=O$ & $E\neq O$ \\ \hline
energy & 0.00 & 0.00 & 0.00 \\ \hline
K. unfit. & 0.00 & 0.00 & 0.00 \\ \hline
Villmann & 1.00 & 0.00 & 1.00 \\
\end{tabular}
\caption{
P-values for Wilcoxon tests on differences between SOMs with Gaussian dispersion based on geometric distance and SOMs with Gaussian dispersion based on discrete distance. $H_1$ indicates better results from discrete distances.}\label{wilcoxon_gauss}
\end{table}
}
\longonly{
\paragraph{Landscape dimension}
\begin{figure}
\centering
\subfig{0.49\linewidth}{graphs/energy_landscapes.pdf} \hskip -1mm
\subfig{0.49\linewidth}{graphs/kendall_landscapes.pdf}
\subfig{0.49\linewidth}{graphs/villman_landscapes.pdf}
\subfig{0.49\linewidth}{graphs/diff_energy_landscapes.pdf} \hskip -1mm
\subfig{0.49\linewidth}{graphs/diff_kendall_landscapes.pdf}
\subfig{0.49\linewidth}{graphs/diff_villman_landscapes.pdf}
\caption{\label{fig:ldim}
Changing the landscape dimension. (black) $d=10$, (red) $d=30$, (blue) $d=60$, (dashed) deterministic.
(a)~absolute energy (simulated dispersion);
(b)~absolute Kendall unfitness (simulated dispersion);
(c)~absolute Villmann measure (simulated dispersion);
(d)~difference in energy (Gaussian$-$simulated);
(e)~difference in Kendall unfitness (Gaussian$-$simulated);
(f)~difference in Villmann measure (Gaussian$-$simulated).}
\end{figure}
\begin{figure}
\centering
\subfig{0.49\linewidth}{graphs/heatmap_energy.pdf} \hskip -1mm
\subfig{0.49\linewidth}{graphs/heatmap_kendall.pdf}
\caption{\label{fig:lreg}
Changing the landscape dimension. Coefficient of variation (CV) for OLS coefficients.
(a)~absolute energy (simulated dispersion);
(b)~absolute Kendall unfitness (simulated dispersion).}
\end{figure}
}
\longonly{
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c|c}
& landscape & all & $E=O$ & $E\neq O$ \\ \hline
\multirow{4}{*}{energy} & $d=10$ & 0.00 & 0.00 & 0.00 \\
 & $d=30$ & 0.00 & 0.00 & 0.00 \\
 & $d=60$ & 0.00 & 0.02$^\ddagger$ & 0.00 \\
 & deterministic & 0.00 & 0.56$^\ddagger$ & 0.00 \\ \hline
\multirow{4}{*}{K. unfit.} & $d=10$ & 1.00 & 0.99$^\ddagger$ & 1.00 \\
 & $d=30$ & 1.00 & 0.96$^\ddagger$ & 1.00 \\
 & $d=60$ & 1.00 & 0.03$^\ddagger$ & 1.00 \\
 & deterministic & 1.00 & 0.00 & 1.00 \\ \hline
\multirow{4}{*}{Villmann} & $d=10$ & 0.00 & 0.01$^\ddagger$ & 0.00 \\
 & $d=30$ & 0.00 & 0.00 & 0.00 \\
 & $d=60$ & 0.00 & 0.00 & 0.00 \\
 & deterministic & 0.00 & 0.00 & 0.00 \\ \hline
\end{tabular}
\caption{
P-values for Wilcoxon tests on differences between quality measures from SOMs with Gaussian against simulated dispersion (computed with various landscapes). $H_1$ indicates better results from simulated dispersion. $\ddagger$ denotes statistically insignificant difference.}\label{wilcoxon_landscapes}
\end{table}
As explained in Subsection \ref{sub:embed}, we are using the landscape dimension of $d=60$ for our experiments.
With a large enough value of $d$, random Gaussian vectors $v_l\in \mathbb{R}^d$ (agreeing with the interpretations above)
should produce an embedding with good properties. For simulations, we can also use the deterministic variant of the landscape
method, where we take $d=|L|$ and pick every $v_l$ to be a different unit vector.}
\longonly{
Figure~\ref{fig:ldim}(a,b) presents the distribution of energy and Kendall unfitness. We only consider disks as original manifolds.
Dimension 10 is clearly not sufficient in our case. With a higher dimension, the distances between vertex coordinates are a better approximation
of their distances in the manifold. While the deterministic variant achieves the best scores, its high number of dimensions significantly
slows down our algorithm, and a non-deterministic variant with a lower number of dimensions is more relevant for applications. On the other hand,
the landscape dimension does not significantly affect our qualitative results -- the insights drawn from the analysis of Wilcoxon tests (Table~\ref{wilcoxon_landscapes})
and density plots (Figure~\ref{fig:ldim}) are similar and stable. Simulated dispersion scores better than Gaussian dispersion in terms of energy.
}
\longonly{
We also checked if the choice of landscape dimension impacts the insights from OLS regressions. To this end, we computed coefficients of variation.
The coefficient of variation (CV) is the ratio of the sample standard deviation to the sample mean. From the Gauss--Markov theorem, we know that if the errors in the linear regression model are uncorrelated, have equal variances, and have zero expectation (valid in our case), the OLS estimator is the best linear unbiased estimator.
Moreover, the OLS coefficients are normally distributed, so the average of the coefficients
obtained with different landscapes is also normally distributed. If the landscape has no significant impact on the coefficients, we should obtain relatively low CVs. Figure~\ref{fig:lreg} depicts heatmaps of the obtained CVs. Similarly to the insights from Figure~\ref{fig:ldim},
we notice that the variation is higher if we include dimension 10; the coefficients obtained from a higher number of dimensions are comparable. As our sample is very small,
we find the CVs low enough to conclude that the choice of the landscape has no significant effect on the qualitative insights from the regressions.
}
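To make the CV check concrete, the computation can be sketched as follows; the coefficient values below are hypothetical placeholders, not our experimental data:

```python
import numpy as np

def coefficient_of_variation(samples):
    """Ratio of the sample standard deviation (ddof=1) to the sample mean."""
    samples = np.asarray(samples, dtype=float)
    return samples.std(ddof=1) / samples.mean()

# Hypothetical OLS coefficients for one regressor, one value per landscape dimension.
coeffs_by_landscape = {20: 1.02, 30: 0.98, 40: 1.01, 50: 0.99}
cv = coefficient_of_variation(list(coeffs_by_landscape.values()))
print(f"CV = {cv:.4f}")  # a low value suggests the landscape choice is immaterial
```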
\shortonly{\paragraph{Other parameters}
The full version contains results of changing other parameters, such as the landscape dimension and the density of original manifolds.}
\section{Conclusions}
In this paper, we provide the general setup for non-Euclidean SOMs.
We utilize symmetric quotient spaces to make our maps uniform and the Goldberg-Coxeter construction to remove the limitations related to the number and size of available grids, and we suggest using a dispersion function different from the Gaussian
to match the underlying geometry.
It is surprising to us that the idea of using non-Euclidean templates seems to have been neglected after
the initial papers \cite{ritter99,ontrup}. There is research on extending the SOM algorithm to the
cases where the data $D$ is no longer considered a subset of $\mathbb{R}^k$ with Euclidean distances, but where
the distances are instead based on dissimilarity matrices or Mercer kernels \cite{somrossi,kerneltrick}.
While such data representations are sometimes referred to as non-Euclidean, they are not directly related to
non-Euclidean geometry.
Such approaches can be seen as orthogonal to ours: they run SOM on Euclidean
lattices but change the representation of $D$, while in our approach, the data manifold is still embedded
into $\mathbb{R}^k$, but we change the template. One possible direction of further research is to combine both
approaches. However, contrary to our approach, the non-geometrical nature of these settings makes
them less usable for visualization.
\longonly{
In this paper, we restricted ourselves to two-dimensional geometries. An exciting future direction is using three-dimensional visualizations.
While two-dimensional non-Euclidean geometries only differ in curvature, which can be negative, zero or positive,
in three dimensions we have eight Thurston geometries \cite{thurston1982}. In addition to the three isotropic
geometries, we have the product spaces
$\mathbb{H}^2\times\mathbb{R}$ and $\mathbb{S}^2\times\mathbb{R}$ (obtained by adding an extra dimension to the hyperbolic plane or the sphere
in the Euclidean way), the twisted geometries Nil and $PSL(2,\mathbb{R})$, and Solv, which has a different nature from
the hyperbolic space while still featuring exponential growth.
Recent advances in Virtual Reality and the visualization and tessellation of Thurston geometries \cite{rtviz,segmarching} make
us believe that our approach can be adapted to such geometries, yielding insightful visualizations.
}
\shortonly{
In this paper, we restricted ourselves to two-dimensional geometries.
An exciting future direction is using three-dimensional visualizations, applying the
recent advances in Virtual Reality and the visualization of Thurston geometries \cite{segmarching,rtviz}.
We believe that our approach can be adapted to such geometries, yielding insightful visualizations.
}
We are grateful to the referees, whose constructive comments on the earlier versions of this work helped us to improve the
quality of the paper. This work has been supported by the National Science Centre, Poland, grant UMO-2019/35/B/ST6/04456.
\bibliographystyle{named}
% https://arxiv.org/abs/1909.03203 -- Geometry of planar curves intersecting many lines in a few points
% Abstract: The local Lipschitz property is shown for the graph avoiding multiple point intersection with lines directed in a given cone. The assumption is much stronger than those of Marstrand's well-known theorem, but the conclusion is much stronger too. Additionally, a continuous curve with a similar property is $\sigma$-finite with respect to Hausdorff length and an estimate on the Hausdorff measure of each ``piece'' is found.
\section{The statement of the problem}
The problem we consider in this note grew from a question in perturbation theory of self-adjoint operators, see \cite{LTV}. The question was to better understand the structure of Borel sets in $\mathbb{R}^n$ which have a small intersection with a whole cone of lines. Marstrand's and Mattila's theorems in \cite{JM} and \cite{PM} give a lot of information about the exceptional set of finite-rank perturbations of a given self-adjoint operator. The exception happens when the singular parts of the unperturbed and perturbed operators are \emph{not mutually singular}. It is known that this is a rare event in the sense that its measure is zero among all finite-rank perturbations. The paper \cite{LTV} proves a stronger claim: the dimension of the bad set of perturbations actually drops.
Here we impose a stronger condition on the Borel set in question: it is just a continuous curve in the plane. But we also obtain a stronger result, namely an estimate on the Hausdorff measure (not just the Hausdorff dimension).
On the other hand, in \cite{EL} it is shown that given countably many graphs of functions there is another function whose graph has just one intersection with all shifts of the given graphs but whose graph has dimension $2$.
\begin{prop} \label{2D}
Let $λ>0$ be a fixed number and consider all the cones of lines with slopes between $λ$ and $-λ$ (containing the vertical line). If $f:(0,1)\to\mathbb{R}$ is a continuous function such that any line of these cones intersects its graph at at most two points, then $f$ is locally Lipschitz.
\end{prop}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.15]{cone_of_lines.jpg}
\caption{Each line from any cone intersects the graph in at most two points.}
\label{cone}
\end{figure}
Notice that our hypothesis implies that no three points of the graph of $f$ can lie on the same line unless that line has slope between $-λ$ and $λ$.
For the proof we are going to need the following lemmata:
\begin{lem} \label{haus}
Any locally Lipschitz curve has Hausdorff dimension 1.
\end{lem}
\begin{lem} \label{lip}
Every convex (or concave) function on an open interval is locally Lipschitz.
\end{lem}
\begin{lem} \label{mono}
If a function $g:(0,1)\to\mathbb{R}$ is continuous and has a unique local minimum at $\1x\in(0,1)$, then $g$ is strictly increasing in $[\1x,1)$ and strictly decreasing in $(0,\1x]$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{mono}]
Consider two points $x_1<x_2\in[\1x,1)$ such that $g(x_1)>g(x_2)$. On the compact interval $[x_1,x_2]$ $g$ has to attain a minimum, say at $c$, and by assumption $c>x_1\geq \1x$. But this contradicts the uniqueness of $\1x$ and therefore $g$ has to be increasing in $[\1x,1)$. Similarly for $(0,\1x]$.
\end{proof}
\begin{proof}
Consider the slope function of $f$, $S(x,y)=\frac{f(x)-f(y)}{x-y}$, and note that $$S(x,y)=\frac{f(x)-f(y)}{x-y}=ζ \iff f(x)-ζx=f(y)-ζy.$$
If for any two points $x<y\in(0,1)$ we have that $|S(x,y)|<λ$, then $f$ is Lipschitz (with Lipschitz constant at most $λ$) and so from Lemma \ref{haus} it has dimension 1.
Now suppose that there exist $x_0,y_0\in(0,1)$ for which $|S(x_0,y_0)|\geq λ$ and consider the case where $S(x_0,y_0)=λ'\geq λ$. Since $S(x,y)=S(y,x)$, we can assume that $x_0<y_0$. We will denote the line passing through $(x_0,f(x_0))$ and $(y_0,f(y_0))$ by $ε_{λ'}$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.15]{two_intersection_points.jpg}
\caption{$S(x_0,y_0)=λ'\geq λ$ ; The part of the graph of $f$ between $x_0$ and $y_0$ cannot lie on different sides of $ε_{λ'}$.}
\label{constantly_slope}
\end{figure}
If there are numbers $x_0<a<b<y_0$ such that $(S(x_0,a)-λ')(S(x_0,b)-λ')\leq0$, then by the continuity of $S(x,\cdot)$ there has to exist a number $c\in[a,b]$ such that $\frac{f(x_0)-f(c)}{x_0-c}=λ'=\frac{f(x_0)-f(y_0)}{x_0-y_0}$. But this means that $(x_0,f(x_0)), (c,f(c))$ and $(y_0,f(y_0))$ are co-linear which contradicts our hypothesis and therefore $S(x_0,y)$ has to be constantly greater or constantly less than $λ'$ for $x_0<y<y_0$. (see fig.\ref{constantly_slope}) For the same reasons $S(x_0,y)$ has to be constantly greater or constantly less than $λ'$ also for $y>y_0$ and the same holds for $S(x,y_0)$ for $x<x_0$.
Graphically, this means that $ε_{λ'}$ separates $f$ in three parts that do not intersect $ε_{λ'}$; one before $x_0$, one over $(x_0,y_0)$ and one after $y_0$. We proceed to show that the part over $(x_0,y_0)$ lies on a different side of $ε_{λ'}$ from the other two.
\begin{figure}[h!]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=\linewidth, height=3.5cm]{no_minimum_point_concave.jpg}
\subcaption*{$S(x_0,y)> λ'$}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\includegraphics[width=\linewidth, height=3.5cm]{no_minimum_point_convex.jpg}
\subcaption*{$S(x_0,y)< λ'$}
\end{minipage}
\caption{the two cases when $x_0<y<y_0$}
\label{cases}
\end{figure}
Let us consider the case when $S(x_0,y)<λ'$ for $x_0<y<y_0$. Then, the function $f(x)-λ'x$ defined on $[x_0,y_0]$ attains a maximum at $x_0$ and at $y_0$ (which also implies that $S(x,y_0)>λ'$ for $x_0<x<y_0$) and let $\1y\in(x_0,y_0)$ be the point where $f(x)-λ'x$ attains a minimum. (see fig.\ref{no_going_back}) Now, suppose additionally that $S(x_0,y)<λ'$ also for $y>y_0$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{not_direction_change.jpg}
\caption{If $S(x_0,y)<λ'$ for every $y\notin(x_0,1)\backslash\{y_0\}$, by moving the line $ε_{λ'}$ slightly down, we get three points of intersection.}
\label{no_going_back}
\end{figure}
Pick a number $k$ with $f(x_0)-λ'x_0>k>\max\{f(\1y)-λ'\1y,f(y)-λ'y\}$ for some $y>y_0$. Then, all the following hold:
\begin{align*}
f(\1y)-λ'\1y<k<f(x_0)-λ'x_0\\
f(\1y)-λ'\1y<k<f(y_0)-λ'y_0\\
f(y)-λ'y<k<f(y_0)-λ'y_0
\end{align*}
The continuity of $f$ and the above inequalities imply that there must exist numbers $a,b$ and $c$ in $(x_0,\1y),(\1y,y_0)$ and $(y_0,y)$ respectively such that $f(a)-λ'a=f(b)-λ'b=f(c)-λ'c\ =k$ which implies that $(a,f(a)),(b,f(b))$ and $(c,f(c))$ are co-linear, a contradiction, and therefore $S(x_0,y)$ has to be greater than $λ'$ for $y>y_0$. Working similarly, we get that $S(x,y_0)<λ'$ for $x<x_0$.
An identical argument gives us that $\1y$ is the only point in $[x_0,y_0]$, and in fact in $[x_0,1)$, where $f(x)-λ'x$ attains a local minimum (see fig.\ref{unique_min}) and from Lemma \ref{mono} we get that $f(x)-λ'x$ has to be increasing in $[\1y,1)$. Hence, for any $x,y\geq\1y$ we have: $$x<y\iff f(x)-λ'x<f(y)-λ'y\xiff{x<y} S(x,y)>λ'.$$
\begin{figure}[h!]
\centering
\includegraphics[scale=0.2]{unique_minimum.jpg}
\caption{If $f$ attains local minimum at another point $\1y'>\1y$, we can find a line of slope greater than $λ'$ intersecting $f$ at three points.}
\label{unique_min}
\end{figure}
However, observe that for any $x$ and $y$ for which $S(x,y)>λ'$ the function $S(x,\cdot)$ has to be 1-1, otherwise our hypothesis fails in a similar way as above, and, since it is continuous, it has to be monotone in $(x,1)$ for every $x\in[\1y,1)$. Therefore, $f$ is either convex or concave in $[\1y,1)$.
Assume $f$ is concave and let $x$ be any number in $(\1y,y_0)$ (see fig.\ref{no_concave}). By concavity, the point $(\1y,f(\1y))$ has to lie below the line passing through $(y_0,f(y_0))$ with slope $ζ=S(x,y_0)$ and, since $ζ=S(x,y_0)>S(x_0,y_0)=λ'\geq λ$, the point $(x_0,f(x_0))$ lies above. Hence, this line will intersect the graph of $f$ at some point $(c,f(c))$ with $c\in(x_0,\1y)$ and the points $(c,f(c)),(x,f(x))$ and $(y_0,f(y_0))$ are co-linear, a contradiction. Therefore, $f$ has to be convex in $[\1y,1)$ and thus locally Lipschitz in $(\1y,1)$ thanks to Lemma \ref{lip}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.15]{concave_case_with_point.jpg}
\caption{$S(x_0,y)$ has to be strictly increasing in $(y_0,1)$.}
\label{no_concave}
\end{figure}
If we instead assume that $S(x_0,y)>λ'$ for $x_0<y<y_0$, working similarly we conclude that there must exist $\1y\in[x_0,y_0]$ such that $f$ is concave in $(0,\1y]$.
The case when there exist $x_0,y_0\in(0,1)$ for which $S(x_0,y_0)=λ'\leq -λ$ is identical and gives us the reverse implications.
Eventually, we conclude that there are points $\1x,\1y\in(0,1)$ such that $f$ has some particular convexity on $(0,\1x]$ and on $[\1y,1)$. These intervals cannot overlap, since otherwise $f$ would be a line segment of slope at least $λ$ (or at most $-λ$) on $[\1y,\1x]$, which contradicts our hypothesis and so $\1x\leq\1y$. If there are multiple such points, for example $\1x_1, \1x_2$, they have to give the same concavity in, say $(0,\1x_1]$ and $(0,\1x_2]$, and then we can simply pick the largest of them. Hence, we can assume $\1x$ and $\1y$ are the only points with the above property. In the case where $\1x<\1y$, there are no points $x,y\in[\1x,\1y]$ so that $|S(x,y)|\geq λ$ otherwise the uniqueness of $\1x$ and $\1y$ fails. This implies that $f$ is Lipschitz in $[\1x,\1y]$ with Lipschitz constant $λ$.
This concludes the proof.
\end{proof}
Of course, any continuous function satisfying the condition of the proposition which has different convexity on $(a,\1x]$ and on $[\1y,b)$ has to additionally satisfy $\lim_{x\to a^+,y\to b^-}|S(x,y)|<λ$.
Additionally, notice that the fact that the cone is vertical (or at least that it contains the vertical line) is essential for the locally Lipschitz property:
If $C$ is a cone avoiding the vertical line, we can restrict the function $\sqrt[3]{x}$ to a small enough interval around 0 so that it intersects all the lines of the cone in at most two points. But $\sqrt[3]{x}$ is clearly not Lipschitz around 0. However, we do have the following corollary:
\begin{figure}[h!]
\centering
\begin{minipage}{0.49\linewidth}
\frame
{\includegraphics[width=\linewidth, height=3cm]{possibility1.jpg}}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\frame
{\includegraphics[width=\linewidth, height=3cm]{possibility2.jpg}}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\frame
{\includegraphics[width=\linewidth, height=3cm]{possibility3.jpg}}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\frame
{\includegraphics[width=\linewidth, height=3cm]{possibility4.jpg}}
\end{minipage}
\caption{All the possible ways the graph of $f$ can look like.}
\label{possibilities}
\end{figure}
\begin{cor*}
Let $λ_1>0>λ_2$ be some fixed numbers and consider all the cones of lines with slopes between $λ_1$ and $λ_2$ (containing the vertical line). If $f:(0,1)\to\mathbb{R}$ is a continuous function satisfying the same condition as above, then it is locally Lipschitz.
\end{cor*}
\begin{proof}
The inequalities $|S(x,y)|<λ$ and $|S(x,y)|\geq λ$ in this case correspond to $λ_2<S(x,y)<λ_1$ and $S(x,y)\geq λ_1\ or\ S(x,y)\leq λ_2$ respectively. The proof is the same as before, and on the regions where $f$ is neither convex nor concave it is Lipschitz with Lipschitz constant $\max\{λ_1,-λ_2\}$.
\end{proof}
\begin{rem*}
All the above hold for any interval $(a,b)$. It is not hard to see that the same proof also works in the case where $f$ is defined on a closed interval, but Lemma \ref{lip} cannot be used in this setting. However, if $f:[0,1]\to\mathbb{R}$, its restriction $f\2{(0,1)}$ is locally Lipschitz.
\end{rem*}
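The counterexample above, restricting $\sqrt[3]{x}$ to a small interval around $0$, can also be seen numerically: the difference quotients $S(0,h)=h^{1/3}/h=h^{-2/3}$ blow up as $h\to 0^+$, so no Lipschitz constant works on any neighbourhood of $0$. A quick standalone check:

```python
# Difference quotients of x**(1/3) at 0 equal h**(-2/3) and grow without bound
# as h -> 0+, so the cube root is not Lipschitz on any interval around 0.
def slope_at_zero(h):
    return h ** (1.0 / 3.0) / h

for h in [1e-1, 1e-3, 1e-6]:
    print(h, slope_at_zero(h))
```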
\section{An Example}
It is natural then to ask whether our assumption still gives us the locally Lipschitz property when we allow more points of intersection. It turns out that this fails badly even for at most 3 points of intersection, in the sense that there can be infinitely many points around which the function fails to be locally Lipschitz. Here we construct such a function whose graph intersects a certain cone of lines in at most three points.
\ \\
Consider the sequence $a_k=\frac{1}{2}-\frac{1}{2^k}$ for $k\geq1$, and on each of the intervals $[a_k,a_{k+1}]$ we define a continuous function $f_k$ with the following properties:
\begin{minipage}[!ht]{0.7\textheight}
\begin{enumerate}[i)]
\item $f_1(0)=0$, $f_1(\frac{1}{4})=f_2(\frac{1}{4})=\frac{λ}{4}$
\item $f_{k+1}(a_{k+1})=f_k(a_{k+1})$ \label{contin}
\item $f_k(a_{k+1})=\frac{1}{2}\left(f_k(a_k)+f_{k-1}(a_{k-1})\right)$ \label{aver}
\item $f_{2k}$ is convex and decreasing on $[a_{2k},a_{2k+1}]$ and $f_{2k-1}$ is concave and increasing on $[a_{2k-1},a_{2k}]$. \label{concav}
\item The tangent line to $f_k$ at $(a_k,f_k(a_k))$ is vertical.
\end{enumerate}
\end{minipage}
Let $f:[0,1]\to\mathbb{R}$ be the function given by
$$f(x)=\begin{cases}
f_k(x) & \afterline{if} x\in[a_k,a_{k+1})\\
f_k(1-x) & \afterline{if} x\in(1-a_{k+1},1-a_k]\\
\frac{λ}{6} & \afterline{if} x=\frac{1}{2}
\end{cases}$$
for all $k\geq 1$ (fig. \ref{bat}) which is clearly continuous in $(0,1)\backslash\{\frac{1}{2}\}$ because of \eqref{contin}. Observe that the sequence $(b_k)=(f_k(a_k))$ is recursively defined by $b_{k+1}=\frac{b_k+b_{k-1}}{2}$ (through property \eqref{aver}) and it converges. In particular, we have $\frac{b_{k+1}-b_k}{b_k-b_{k-1}}=-\frac{1}{2}$ and therefore
\begin{equation}\label{sequen}
b_{k+1}=b_k+\left(\frac{-1}{2}\right)^{k-1}(b_2-b_1)\implies b_{k+1}=b_2-\frac{1}{3}\left(1-\left(\frac{-1}{2}\right)^{k-1}\right)(b_2-b_1)
\end{equation}
In our case, we have $b_1=f_1(0)=0$ and $b_2=f_2(\frac{1}{4})=\frac{λ}{4}$ and so we get that ${f_k(a_k)=\frac{λ}{6}\left(1-\left(\frac{-1}{2}\right)^{k-1}\right)}$, hence $\lim_{k\to+\infty}f_k(a_k)=\frac{λ}{6}$. But note that for every $x\in(0,\frac{1}{2})$ there is an $n\geq1$ for which $x\in[a_n,a_{n+1})$ and, since each $f_k$ is monotone in $[a_k,a_{k+1})$ for every $k$, we get that
$$\min\{f_n(a_n),f_{n+1}(a_{n+1})\}\leq f(x)\leq\max\{f_n(a_n),f_{n+1}(a_{n+1})\}.$$
Therefore, we have $\lim_{x\to\frac{1}{2}^-}f(x)=\frac{λ}{6}=f(\frac{1}{2})$, and similarly for $x\in(\frac{1}{2},1)$, meaning that $f$ is also continuous at $\frac{1}{2}$.
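The recursion and its closed form can be double-checked numerically (a standalone sanity check; the value $λ=1$ is an arbitrary choice):

```python
# Check that b_{k+1} = (b_k + b_{k-1}) / 2 with b_1 = 0, b_2 = lam/4 matches the
# closed form b_k = (lam/6) * (1 - (-1/2)**(k-1)) and converges to lam/6.
lam = 1.0
b = [0.0, lam / 4]                      # b_1, b_2
for _ in range(30):
    b.append((b[-1] + b[-2]) / 2)

closed = [(lam / 6) * (1 - (-0.5) ** i) for i in range(len(b))]  # index i holds b_{i+1}
assert all(abs(x - y) < 1e-12 for x, y in zip(b, closed))
print(abs(b[-1] - lam / 6))  # tends to 0
```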
However, by construction $f$ fails to be locally Lipschitz around each of the points $a_k$ and $1-a_k$, $k\geq1$ (while being locally Lipschitz elsewhere on $(0,1)\backslash\{\frac{1}{2}\}$), and therefore it is not locally Lipschitz around $\frac{1}{2}$ either, since $a_k\to\frac{1}{2}$ as $k\to+\infty$.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{bat_function.jpg}
\caption{At most 3 points of intersection with any line inside the cones}
\label{bat}
\end{figure}
Now we proceed to show the graph of $f$ has at most 3 intersection points with any line inside a vertical cone with slopes between $λ$ and $-λ$.
Each $f_k$ is monotone and has a certain concavity on $[a_k,a_{k+1}]$, hence its graph is contained inside the triangle $T_k$ with vertices $(a_k,f(a_k))$, $(a_{k+1},f_{k+1}(a_{k+1}))$ and $(a_k,f(a_{k+1}))$ (see fig. \ref{triangles}) and therefore any line intersecting the graph of $f$ (in at least two points) has to pass through some of these triangles. Notice, however, that if a line passes through two non-consecutive triangles, say $T_k$ and $T_{k+j}$ $(j>1)$, then it falls outside the admissible cone of lines. In particular (because of properties \eqref{contin} through \eqref{concav}), each $T_{k+1}$ is half the size of $T_k$ and they are placed in such a way that the maximum and minimum slope a line through them can have is respectively the maximum and the minimum of the quantities
$$\frac{f_{k+j}(a_{k+j})-f_k(a_k)}{a_{k+j}-a_k}\quad\text{and}\quad\frac{f_{k+j}(a_{k+j+1})-f_k(a_{k+1})}{a_{k+j}-a_{k+1}},$$
when one of the numbers $k$ and $k+j$ is even and the other is odd, and the maximum and minimum of the quantities
$$\frac{f_{k+j}(a_{k+j+1})-f_k(a_k)}{a_{k+j}-a_k}\quad\text{and}\quad\frac{f_{k+j}(a_{k+j})-f_k(a_{k+1})}{a_{k+j}-a_{k+1}},$$
when $k$ and $k+j$ are both even or both odd. Using \eqref{sequen} we can see that each of the above is bounded in absolute value by $λ$ whenever $j>1$.
For the same reasons any admissible line passing through $(\frac{1}{2},\frac{λ}{6})$ intersects the graph only at that point, since $$\left|\frac{f_k(a_k)-\frac{λ}{6}}{a_k-\frac{1}{2}}\right|=\frac{λ}{3}<λ.$$
Therefore, the admissible lines intersecting the graph necessarily pass through two (or maybe just one) consecutive triangles, and each such line intersects the graph of $f_k$ in at most two points because of \eqref{concav}. Furthermore, due to the difference in concavity of $f_k$ and $f_{k+1}$, a line cannot intersect both of their graphs at two points, since then it would need to have both negative and positive slope, which is absurd.
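The slope of the chord from $(a_k,f_k(a_k))$ to $(\frac{1}{2},\frac{λ}{6})$ is easy to confirm numerically (standalone check, again with the arbitrary choice $λ=1$):

```python
# Each chord from (a_k, f_k(a_k)) to (1/2, lam/6) has slope of absolute value
# exactly lam/3, using f_k(a_k) = (lam/6) * (1 - (-1/2)**(k-1)) and a_k = 1/2 - 2**(-k).
lam = 1.0
for k in range(1, 25):
    a_k = 0.5 - 0.5 ** k
    b_k = (lam / 6) * (1 - (-0.5) ** (k - 1))
    slope = abs((b_k - lam / 6) / (a_k - 0.5))
    assert abs(slope - lam / 3) < 1e-6
print("all chord slopes equal lam/3 < lam")
```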
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{boxed_bat.jpg}
\caption{The case when $k$ and $k+j$ are both odd.}
\label{triangles}
\end{figure}
An example of a sequence $(f_k)$ of functions with the above properties is the following:
$$f_k(x)=\frac{λ}{6}\left(1-\left(\frac{-1}{2}\right)^{k-1}\right)+\frac{(-1)^{k+1}λ}{2^{\frac{k+1}{2}}}\sqrt{x-a_k}.$$
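One can verify directly that this family has the right boundary behaviour: property \eqref{contin} (the pieces glue continuously) and property i) (the prescribed values at $0$ and $\frac{1}{4}$). A standalone numerical check, with the arbitrary choice $λ=1$:

```python
import math

lam = 1.0

def a(k):
    # a_k = 1/2 - 1/2**k
    return 0.5 - 0.5 ** k

def f(k, x):
    # The explicit example f_k from the text.
    return (lam / 6) * (1 - (-0.5) ** (k - 1)) \
        + (-1) ** (k + 1) * lam / 2 ** ((k + 1) / 2) * math.sqrt(x - a(k))

assert abs(f(1, a(1))) < 1e-12                    # f_1(0) = 0
assert abs(f(1, 0.25) - lam / 4) < 1e-12          # f_1(1/4) = lam/4
assert abs(f(2, 0.25) - lam / 4) < 1e-12          # f_2(1/4) = lam/4
for k in range(1, 20):
    # consecutive pieces agree at the gluing point a_{k+1}
    assert abs(f(k, a(k + 1)) - f(k + 1, a(k + 1))) < 1e-9
print("the example family satisfies properties i) and ii)")
```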
\section{Hausdorff Measure}
Marstrand in \cite[Theorem 6.5.III]{JM} proved the following property of planar Borel sets:
\begin{tequation} \label{Mar}
if the lines in a positive measure of directions intersect a Borel set in a set of Hausdorff dimension zero, then the Hausdorff dimension of this Borel set is at most \nolinebreak 1.
\end{tequation}
In particular, this happens if the intersections are at most countable. The Borel assumption is essential.
This being said, Marstrand's theorem does not in general guarantee that the Hausdorff measure of the Borel set is finite. Our next goal will be to deal with the Hausdorff measure of a continuous curve and also to generalise to arbitrarily many points of intersection with our cones (still finitely many, though). It turns out that the graph always has to be $σ$-finite with respect to the $\mathcal{H}^1$ measure.
In order to proceed we need to set things up more rigorously:
\begin{notat*}
Let $C(φ,0)=\{(x,y)\in\mathbb{R}^2\such|y|\geq \tan(φ)\,|x|\}$ denote the vertical closed cone in between the lines through the point $(0,0)$ with slopes $\tan(φ)$ and $-\tan(φ)$ (where $0<φ<\frac{π}{2}$). By $C_+(φ,0)$ we will denote the upper half of the cone $C(φ,0)$, that is $C_+(φ,0)=\{(x,y)\in\mathbb{R}^2\such|y|\geq \tan(φ)\,|x|,\ y\geq0\}$, and by $C_-(φ,0)$ its lower half. Let $C(φ,ρ)$ be the cone's counter-clockwise rotation by angle $ρ$, $C(φ,0,h)=B_0(h)\cap C(φ,0)$, where $B_x(r)=B(x,r)$ is the closed ball centered at $x$ with radius $r$, and $C_P(φ,0)$ the translation of $C(φ,0)$ so that its vertex is the point $P$. Finally, $C^*$ will denote the dual cone of $C$, that is $C^*(φ,0)=\0{C(φ,0)^C}$. We will be combining the different notations in the natural way; for example, $C_+(φ,ρ,h)$ is the upper half of the truncated and rotated cone with vertex at $0$.
$γ:[0,1]\to\mathbb{R}^2$ will be a continuous curve. By abuse of notation $γ$ will also stand for the graph of this curve.
\end{notat*}
\subsection{The main hypothesis}
\begin{tequation} \label{hypo}
Fix an integer $k\geq 2$. Fix an angle $φ\in(0,\frac{π}{2})$ and a rotation $ρ\in[0,2π)$. A line contained inside the cone $C_P(φ,ρ)$ for some point $P\in\mathbb{R}^2$ intersects the graph of $γ$ in at most $k$ points.
\end{tequation}
Any such line will be called \emph{admissible}. A cone consisting of only admissible lines will also be called \emph{admissible}.
\subsection{$γ$ is $σ$-finite}
For simplicity and without loss of generality we will assume that the curve $γ:[0,1]\to\mathbb{R}^2$ is bounded inside the unit square and that $(0,0),(1,1)\in γ$. We additionally assume that the cones of our hypothesis are vertical, i.e. that $ρ=0$.
\begin{theor}
$γ$ can be split into countably many sets $γ_n$ with $\mathcal{H}^1(γ_n)<\infty$.
\end{theor}
The following lemma plays a key role in the proof of this theorem, but we will postpone its proof until later:
\begin{lem} \label{free_of_cones}
For every point $P\in γ$ there exists an admissible cone $C_P(θ,ρ,h)$ which avoids the graph of $γ$ except at $P$, that is $C_P(θ,ρ,h)\cap γ=\{P\}$.
\end{lem}
In view of Lemma \ref{free_of_cones} -- by slightly tilting $ρ$, enlarging $θ$ and decreasing $h$ -- we can assume that the triplet $(θ,ρ,h)$ consists of rational numbers. If $\{(θ_n,ρ_n,h_n)\}$ is an enumeration of all rational triples that still lie within our admissible set, then we can decompose $γ$ into the countably many sets
$$γ_n=\{P\in γ\such C_P(θ_n,ρ_n,h_n)\cap γ=\{P\}\}$$
(see fig. \ref{covering_balls}). Note that $γ_n$ are not necessarily disjoint for different values of $n$.
We proceed to prove each one of them has finite $\mathcal{H}^1$ measure. For the rest of this section $n$ will be fixed.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{free_part.jpg}
\caption{The curve $γ$ and its part $γ_n$ for $θ_n$, $ρ_n=0$ and $h_n$.}
\label{covering_balls}
\end{figure}
\newpage
\begin{lem}
$\mathcal{H}^1(γ_n)<\infty$.
\end{lem}
\begin{proof}
Without loss of generality we can assume the cone $C_P(θ_n,ρ_n,h_n)$ is vertical, i.e. that $ρ_n=0$. Let us now split the unit square into $N$ vertical strips, $S_j$ ($j=1,2,\dots,N$), of base length $\frac{1}{N}$, with $N$ big enough so that $\frac{1}{N}<\cos(θ_n)\,h_n$. Let $J$ be the set of indices $j$ for which
$$S_j\cap γ_n\neq\emptyset$$
and for any point $P\in γ$ denote the connected component of $γ$ inside $S_j$ through $P\in S_j\cap γ$ by $Γ^*_P(j)$.
Fix a $j\in J$ and consider a point $P\in S_j\cap γ_n$. Because $\frac{1}{N}<\cos(θ_n)\,h_n$, the sides of $S_j$ necessarily intersect both sides of the cone $C_P(θ_n,0,h_n)$, thus creating two triangles both contained inside the ball $B_P\left(\frac{1}{N\cos(θ_n)}\right)$ (fig. \ref{the_cone}). For any point $P'\in S_j\cap γ_n$ other than $P$ there are two cases: either $|P-P'|\leq h_n$ or $|P-P'|> h_n$. In the first case, the sets $Γ^*_P(j)$ and $Γ^*_{P'}(j)$ are both contained inside the two triangles $C^*_P(θ_n,0)\cap S_j$. In the latter, they are necessarily disjoint, because $C_P(θ_n,0,h_n)$ is free from points of $γ$ (other than $P$). This additionally implies that there can be no more than $\frac{1}{\sin(θ_n)\,h_n}$ distinct such paths inside $S_j$. In particular,
$$P\in Γ^*_P(j) \subset S_j\cap γ\cap B_P(h_n)\subset C^*_P(θ_n,0,h_n)\cap S_j\subset B_P\left(\tfrac{1}{N\cos(θ_n)}\right).$$
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{the_cone.jpg}
\caption{Each cone intersects a strip of length $\frac{1}{N}<\cos(θ_n)\,h_n$.}
\label{the_cone}
\end{figure}
Now, let $\mathcal{P}_j$ be a maximal set of points in $S_j\cap γ_n$ such that the sets $Γ^*_P(j)$ for
$P\in\mathcal{P}_j$
are all disjoint and observe that $S_j\cap γ_n$ is covered by the balls $B_P\left(\tfrac{1}{N\cos(θ_n)}\right)$ with $P\in\mathcal{P}_j$. Indeed, if $P_0\in S_j\cap γ_n$ is not inside the set $\bigcup_{P\in\mathcal{P}_j}B_P\left(\tfrac{1}{N\cos(θ_n)}\right)$, then by construction it is also outside $\bigcup_{P\in\mathcal{P}_j}B_P(h_n)$ and therefore $Γ^*_{P_0}(j)$ and $Γ^*_P(j)$ are disjoint for all $P\in\mathcal{P}_j$, contradicting the maximality of $\mathcal{P}_j$. Moreover, due to the connectedness of $γ$, the point $P$ has to be path-connected with $(0,0)$ and $(1,1)$ and therefore each $Γ^*_P(j)$ has to intersect at least one side of the strip $S_j$. Hence, because of \eqref{hypo}, there can be at most $2k$ of these paths, i.e. $\#(\mathcal{P}_j)\leq\min\{2k,\frac{1}{\sin(θ_n)\,h_n}\}\leq 2k$ for every $j\in J$. Therefore,
$$γ_n\cap S_j\subset\bigcup_{P\in\mathcal{P}_j} B_P\left(\tfrac{1}{N\cos(θ_n)}\right)\implies γ_n\subset\bigcup_{j\in J}\bigcup_{P\in\mathcal{P}_j}B_P\left(\tfrac{1}{N\cos(θ_n)}\right)$$
and the total sum of the radii of these balls is at most
$$2k \frac{1}{N\cos(θ_n)}\#(J)\leq\frac{2k}{\cos(θ_n)}.$$
Finally, if $\1{γ}_n=\{P\in γ\such C_P(θ_n,0,h_n/2)\cap γ=\{P\}\}$, then $γ_n\subset\1{γ}_n$. Repeating the above construction with $\frac{1}{N}<\cos(θ_n)\,\frac{h_n}{2}$, we get a cover of $\1{γ}_n$ -- and thus of $γ_n$ -- consisting of balls with a total sum of radii at most $\frac{2k}{\cos(θ_n)}$. Eventually, we get that
$$\mathcal{H}^1(γ_n)\leq\frac{2k}{\cos(θ_n)}.$$
\end{proof}
\begin{rem*}
In the above construction we are in fact able to cover the whole part of $γ$ inside $\bigcup_{j\in J}S_j$ with the same balls and not just $γ_n$.
\end{rem*}
In conclusion, the graph of $γ$ has to be $σ$-finite.
\subsection{Cones Free of $γ$}
Here we prove Lemma \ref{free_of_cones}.
Fix $P\in γ$. Because $γ$ is bounded there must exist an $\1 h>0$ such that $C_P(φ,0)\cap γ=C_P(φ,0,\1 h)\cap γ$. If
$$C_P(φ',0)\cap γ=\{P\}\inline{or}C_P(φ',0,h)\cap γ=\{P\}$$
for some $φ'\in[φ,\frac{π}{2})$ and some $h>0$, then we are done.
Suppose this doesn't happen. Then, for all $φ'\in[φ,\frac{π}{2})$ and for all small enough $h>0$
\begin{equation} \label{no_1cone}
C_P(φ',0,h)\cap γ\backslash\{P\}\neq\emptyset.
\end{equation}
\begin{lem} \label{components}
For any $P\in γ$ the set $C_P(φ,0)\cap γ$ has finitely many (closed) connected components.
\end{lem}
\begin{proof}
Since $γ$ is connected, every point of $C_P(φ,0)\cap γ$ has to be path connected with the point $P$ through some part of the graph of $γ$. There are two possibilities: either that path is entirely contained inside $C_P(φ,0)$ or it has to pass through its sides. If a path does not intersect the sides, then it necessarily has to pass through $P$ otherwise $γ$ wouldn't be connected. This yields just one connected component -- the one containing $P$ -- and all the rest (if any) have to intersect the sides of the cone.
If these components were infinitely many, there would also exist infinitely many points of intersection on the sides of the cone; at least one for each connected component. But this contradicts \eqref{hypo}.
\end{proof}
\begin{rem*}
The connected components of Lemma \ref{components} are at most $2k$ in number, and $P$ need not be a point of the curve. This lemma remains valid regardless of the cone we are working with, as long as it is in our admissible family of cones.
\end{rem*}
Let $Γ_P(φ,0)$ be the connected component of $C_P(φ,0)\cap γ$ that contains the point $P$, which because of \eqref{no_1cone} cannot be just the point set $\{P\}$. Because of Lemma \ref{components} the set $C_P(φ,0)\cap γ\backslash Γ_P(φ,0)$ is compact and thus there exists $h_0>0$ such that $C_P(φ,0,h_0)\cap γ\subset Γ_P(φ,0)$. Observe that $C_P(φ,0)\cap γ\backslash Γ_P(φ,0)$ could be empty in general, in which case $h_0=\infty$; however, we can always assume that $h_0\leq\1h$.
Next, we bisect our cone into two new identical cones sharing one common side
$$C_P(φ,0)=C_P(φ_1,ρ_1)\cup C_P(φ_1,-ρ_1),$$
where $φ_1=\frac{π}{4}+\frac{φ}{2}$ and $ρ_1=\frac{π}{4}-\frac{φ}{2}$, and repeat the above arguments for each new cone: If
$$C_P(φ',ρ_1)\cap γ=\{P\}\inline{or}C_P(φ',ρ_1,h)\cap γ=\{P\}$$
for some $φ'\in[φ_1,\frac{π}{2})$ and some $h>0$, then we are done. Similarly for $-ρ_1$ in place of $ρ_1$.
Suppose none of these happen. Then, for all $φ'\in[φ_1,\frac{π}{2})$ and for all small enough $h$ and $h'$
\begin{equation} \label{no_2cone}
C_P(φ',ρ_1,h)\cap γ\backslash\{P\}\neq\emptyset\inline{and}C_P(φ',-ρ_1,h')\cap γ\backslash\{P\}\neq\emptyset.
\end{equation}
Denote by $Γ_P(φ_1,ρ_1)$ and $Γ_P(φ_1,-ρ_1)$ the connected component of $C_P(φ_1,ρ_1)\cap γ$ and $C_P(φ_1,-ρ_1)\cap γ$ containing $P$ respectively. The sets $C_P(φ_1,ρ_1)\cap γ\backslash Γ_P(φ_1,ρ_1)$ and $C_P(φ_1,-ρ_1)\cap γ\backslash Γ_P(φ_1,-ρ_1)$ are compact (thanks to Lemma \ref{components}) and thus there exist $h_{1,0},h_{1,1}\in(0,\1 h]$ such that $C_P(φ_1,ρ_1,h_{1,0})\cap γ\subset Γ_P(φ_1,ρ_1)$ and $C_P(φ_1,-ρ_1,h_{1,1})\cap γ\subset Γ_P(φ_1,-ρ_1)$.
\begin{figure}[h!]
\centering
\includegraphics[draft=false,width=0.85\textwidth]{avoiding_cone.jpg}
\caption{Finding a cone free from points of $γ$. The parameters $r$, $d$ and $h$ determine the radius.}
\label{avoiding_cone}
\end{figure}
We iterate this construction indefinitely (fig. \ref{avoiding_cone}). If at any step we get
\begin{equation} \label{cone_find}
C_P(φ',ρ,h)\cap γ=\{P\}
\end{equation}
for some $φ'$, $ρ$ and $h$, then we have found our desired cone and we stop. Otherwise, we get an infinite sequence of smaller and smaller cones satisfying the following:
$$\{P\}\varsubsetneq C_P(φ_n,ρ_{n,i},h_{n,i})\cap γ\subset Γ_P(φ_n,ρ_{n,i})\subset C_P(φ_n,ρ_{n,i})\afterline{for all}i=0,1,\dots,2^n-1$$
for all $n\geq 0$ where
\begin{align*}
& φ_0=φ && φ_1=\frac{π}{4}+\frac{φ}{2} && && φ_n=\frac{π}{4}+\frac{φ_{n-1}}{2}\\
& ρ_{0,0}=0 && ρ_{1,0}=ρ_1=\frac{π}{4}-\frac{φ}{2} && ρ_{1,1}=-ρ_1 && ρ_{n,i}=(φ_n-φ)-i\frac{2(φ_n-φ)}{2^n-1}\\
& h_{0,0}=h_0 && && && 0<h_{n,i}\leq\1 h.
\end{align*}
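For later reference, observe that the recursion $φ_n=\frac{π}{4}+\frac{φ_{n-1}}{2}$ can be solved explicitly:
$$φ_n=\frac{π}{2}-\frac{1}{2^n}\Big(\frac{π}{2}-φ\Big),$$
so the opening angle $π-2φ_n=\frac{π-2φ}{2^n}$ of each cone at the $n$-th step shrinks geometrically and $φ_n\to\frac{π}{2}$ as $n\to\infty$.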
Note that at the $n$-th iteration we have exactly $2^n$ truncated closed cones separated by the lines
$$l_{n,i}=P+\{(x,y)\such y=\tan(π-φ_n+ρ_{n,i})\,x\}$$
through $P$. The sets $Γ_P(φ_n,ρ_{n,i})$ might intersect these lines, but this can happen in at most $k$ points due to \eqref{hypo}. Let $r_{n,i}$ be the smallest distance between these points of intersection (if any) and $P$, that is
$$r_{n,i}=\textup{dist}(P,l_{n,i}\cap Γ_P(φ_n,ρ_{n,i})\backslash\{P\})$$
(again we can arbitrarily set some $0<r_{n,i}\leq\1 h$ if $l_{n,i}\cap Γ_P(φ_n,ρ_{n,i})\backslash\{P\}=\emptyset$) and let
$$d_{n,i}=\min\Big\{\sup\{d(P,Γ_{P+}(t)\backslash P)\such t\in(0,1]\},\ \sup\{d(P,Γ_{P-}(t)\backslash P)\such t\in(0,1]\}\Big\}$$
where $Γ_{P+}(t)$ and $Γ_{P-}(t)$ are parametrisations of $Γ_P(φ_n,ρ_{n,i})\cap C_{P+}(φ_n,ρ_{n,i})$ and $Γ_P(φ_n,ρ_{n,i})\cap C_{P-}(φ_n,ρ_{n,i})$ respectively (which in general could be just the point set $\{P\}$) with $Γ_{P+}(0)=Γ_{P-}(0)=P$. Finally, we set
$$h_n=\min\{r_{n,i},\ d_{n,i},\ h_{n,i}\such i=0,1,\dots,2^n-1\}.$$
Since the above set is finite, $h_n>0$. From this construction, for every $n\geq 0$, we obtain a collection of truncated cones $C_P(φ_n,ρ_{n,i},h_n)$, $i=0,1,\dots,2^n-1$ (see fig. \ref{avoiding_cone}), which have the following property:
\begin{tequation} \label{paths}
There is a path (part of $γ$) lying inside the cone that connects the point $P$ with at least one of the two arcs of length $(π-2φ_n)h_n$ which bound the cone $C_P(φ_n,ρ_{n,i},h_n)$. Moreover, these paths avoid any other intersection with that cone's boundary aside from $P$ and the (closed) arc(s).
\end{tequation}
Now, fix $n$ big enough so that $2^n\geq 2k+3$. Since each of the $2^n$ cones contains one of the paths mentioned in \eqref{paths}, and each such path lies (apart from $P$) in one of the two half-cones, at least $2^{n-1}\geq k+2$ of the cones $C_P(φ_n,ρ_{n,i},h_n)$ contain such a path lying in the same half-cone, say in $C_{P+}(φ,0,h_n)$. Consider one of the sides of our initial cone $C_P(φ,0)$, say $l=P+\{(x,y)\such y=\tan(φ)\,x\}$, fix $0<ε<h_n\sin(π-2φ_n)$ and translate $l$ vertically by $ε$: $l_ε=l+(0,ε)$. Then $l_ε$ necessarily intersects all the $2^n$ different sectors of the ball $B_P(h_n)$ inside $C_{P+}(φ,0,h_n)$, but it meets only the right-most one, $C_{P+}(φ_n,ρ_{n,2^n-1},h_n)$, at the arc-like part of its boundary. In particular, $l_ε$ has to intersect the sides of at least $k+1$ sectors that contain the paths described in \eqref{paths}, and therefore it also intersects these paths. Hence, $l_ε$ is one of our admissible lines and has at least $k+1$ intersections with $γ$, a contradiction.
Lemma \ref{free_of_cones} is proved.
\begin{rems*}
\begin{enumerate}[i)]
\item In the definition of $h_n$ three different parameters are present: $r_{n,i}$, $d_{n,i}$ and $h_{n,i}$. Without $h_{n,i}$, \eqref{cone_find} would automatically fail; $d_{n,i}$ ensures that $Γ_P(φ_n,ρ_{n,i})$ always intersects the boundary of the corresponding cone, and $r_{n,i}$ forces this intersection to avoid the sides.
\item In the above construction we bisected the initial cone into $2$, $4$, $8$, etc.\ smaller cones at every step. However, any other way of cutting the cones would work as well, as long as it eventually yields an infinite sequence.
\item The same proof can be applied to any cone within our admissible set of directions.
\end{enumerate}
\end{rems*}
\section{Higher Dimensions}
Mattila in \cite[Lemma 6.4]{PM} (and in \cite{PMbook}) generalised Marstrand's results from \cite{JM} and showed the following:
\begin{lem}[Mattila] \label{l:Mattila--Marstrand}
Let $E$ be an $\mathcal{H}^s$ measurable subset of $\mathbb{R}^n$ with $0<\mathcal{H}^s(E)<\infty$. Then,
\[\dim(E\cap (V+x))\geq s+m-n\]
for almost all $(x,V)\in E\times G(n,m)$.
\end{lem}
In particular, for a Borel set in, say, $\mathbb{R}^2$ we have that
\begin{tequation*}
if any 2-dimensional plane in a positive measure of directions intersects this Borel set on a set of Hausdorff dimension at most 1, then the Hausdorff dimension of this Borel set is at most 2.
\end{tequation*}
Furthermore, if every line in the direction of some 2-dimensional cone intersects a Borel set (not just the graph of some continuous function) in at most countably many points, then any 2-dimensional plane in a positive measure of directions intersects this Borel set on a set of Hausdorff dimension at most 1 (Marstrand) and then the Hausdorff dimension of this Borel set is at most 2 (Mattila).
Of course, the same also holds in $\mathbb{R}^n$: if a Borel set has countable intersection with a certain cone of lines, then its dimension does not exceed $n-1$.
\bigskip
Now, we restrict our attention to what happens with just two points of intersection in higher dimensions, and we want to generalise Proposition \ref{2D} to $\mathbb{R}^n$:
Suppose we have a continuous function $z=f(x,y)$, say, on a square in $\mathbb{R}^2$, satisfying the property that
\begin{tequation} \label{2pts}
any line in the direction of a certain open cone with axis along a vector $\v v\in\mathbb{R}^3$ intersects the graph in at most two points.
\end{tequation}
Then, we would want $f$ to obey the same rule. Namely, we ask the following:
\begin{quest} \label{3D}
Is a continuous function on $(-1,1)^2$ having property \eqref{2pts} locally Lipschitz?
\end{quest}
| {
    "timestamp": "2019-09-10T02:05:59",
    "yymm": "1909",
    "arxiv_id": "1909.03203",
    "language": "en",
    "url": "https://arxiv.org/abs/1909.03203",
    "abstract": "The local Lipschitz property is shown for the graph avoiding multiple point intersection with lines directed in a given cone. The assumption is much stronger than those of Marstrand's well-known theorem, but the conclusion is much stronger too. Additionally, a continuous curve with a similar property is $\\sigma$-finite with respect to Hausdorff length and an estimate on the Hausdorff measure of each \"piece\" is found.",
    "subjects": "Analysis of PDEs (math.AP); Metric Geometry (math.MG)",
    "title": "Geometry of planar curves intersecting many lines in a few points"
} |
https://arxiv.org/abs/2003.03311 | Covering cycles in sparse graphs | Let $k \geq 2$ be an integer. Kouider and Lonc proved that the vertex set of every graph $G$ with $n \geq n_0(k)$ vertices and minimum degree at least $n/k$ can be covered by $k - 1$ cycles. Our main result states that for every $\alpha > 0$ and $p = p(n) \in (0, 1]$, the same conclusion holds for graphs $G$ with minimum degree $(1/k + \alpha)np$ that are sparse in the sense that \[e_G(X,Y) \leq p|X||Y| + o(np\sqrt{|X||Y|}/\log^3 n) \qquad \forall X,Y\subseteq V(G). \] In particular, this allows us to determine the local resilience of random and pseudorandom graphs with respect to having a vertex cover by a fixed number of cycles. The proof uses a version of the absorbing method in sparse expander graphs. | \section{The Absorber Lemma}\label{sec:absorbers}
This section is dedicated to the construction of the absorbers and the proof of
the Absorbing Lemma (Lemma~\ref{lem:absorbing-lemma}). We recall the definition
of an absorber first.
\absorber*
The most `natural' way to construct an absorber would be to first find
structures which `absorb' one vertex of $X \cup Y$ at a time, and then depending
on the sets $X'$ and $Y'$ individually decide which vertex needs to be
`absorbed'. Unfortunately, this is not possible in our case for the following
reason. A single-vertex absorber $A_x$ would need to have two $ab$-paths: one
containing $x$, one not containing $x$, and both containing all other vertices
of $A_x$. Such a structure necessarily contains an odd cycle: the two paths
cover vertex sets differing in exactly one vertex, so their lengths differ by
one and hence have different parities, and the union of two $ab$-paths of
different parities contains an odd closed walk and therefore an odd cycle.
Since the graph $G$ might be bipartite, we cannot hope to find in it a
single-vertex absorber as described above.
In order to circumvent this we instead first build a collection of {\em
two-vertex absorbers} $H_{xy}$, each of which contains two paths between its
endpoints, one containing all the vertices of $H_{xy}$ including $x$ and $y$ and
the other containing all vertices of $H_{xy}$ except $x$ and $y$. Such an
absorber is depicted in Figure~\ref{fig:two-vertex-absorber}.
\begin{figure}[!htbp]
\begin{subfigure}[b]{\textwidth}
\centering
\input{figures/two_vertex_absorber}
\caption{An absorber $H_{xy}$ of unrealistically small size.}
\end{subfigure}
\par\bigskip
\begin{subfigure}[b]{0.48\textwidth}
\centering
\input{figures/absorbing_path}
\caption{The `absorbing' $u_{xy}v_{xy}$-path.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\input{figures/non_absorbing_path}
\caption{The `non-absorbing' $u_{xy}v_{xy}$-path.}
\end{subfigure}
\caption{An example of a two-vertex absorber for $x$ and $y$. The colours of
the vertices correspond to the colour classes of the bipartite graph $G$.
Dashed lines represent disjoint paths of length at most $\ell$ (see the proof
below) and solid lines the actual edges.}
\label{fig:two-vertex-absorber}
\end{figure}
The following lemma provides us with a `template' that we use to combine several
two-vertex absorbers into an actual absorber. It is similar to a lemma of
Montgomery~\cite[Lemma~10.7]{montgomery2019spanning}, which is proven in nearly
the same way.
\begin{lemma}\label{lem:template}
There is an integer $n_0 \in \N$ such that, for every $n \geq n_0$, there exist
a bipartite graph $G = (A, B, E)$ with $|A| = |B| = 2n$ and $\Delta(G) \leq 40$,
as well as subsets $A' \subseteq A$ and $B' \subseteq B$ with $|A'| = |B'| = n$,
satisfying the following. For every set $Z \subseteq A' \cup B'$ with $|Z \cap
A'| = |Z \cap B'|$, the graph $G[V(G) \setminus Z]$ contains a perfect
matching.
\end{lemma}
\begin{proof}
Fix disjoint sets $A_1$ and $B_1$ with $|A_1| = |B_1| = n$. Let $H$ be a
random bipartite graph with parts $A_1$ and $B_1$ obtained by inserting $20$
independent random perfect matchings between $A_1$ and $B_1$ (merging any
multiple edges into single edges). We first show that w.h.p.\ $H$ satisfies the
following properties:
\begin{enumerate}[(i), font=\itshape]
\item\label{templatei} for every $X \subseteq A_1$ with $|X| \leq n/2$, we
have $|N_H(X)| \geq 2|X|$,
\item\label{templateii} for every $Y \subseteq B_1$ with $|Y| \leq n/2$, we
have $|N_H(Y)| \geq 2|Y|$, and
\item\label{templateiii} for any two subsets $X \subseteq A_1$ and $Y
\subseteq B_1$ with $|X| = |Y| = \ceil{n/4}$, we have $e_H(X, Y) > 0$.
\end{enumerate}
Given two sets $X \subseteq A_1$ and $Y \subseteq B_1$, the probability that
$N_M(X) \subseteq Y$ in a random matching $M$ is $\binom{|Y|}{|X|}
\binom{n}{|X|}^{-1}$. Therefore, for a fixed integer $t \in [n/4]$, the
probability that there is a set $X \subseteq A_1$ with $|X| = t$ and $|N_H(X)|
\leq 2t$ is at most
\[
\binom{n}{t} \binom{n}{2t} \bigg( \binom{2t}{t}\binom{n}{t}^{-1} \bigg)^{20}
\leq \Big( \frac{en}{t} \Big)^t \Big( \frac{en}{2t} \Big)^{2t} \Big(
\frac{2t}{n} \Big)^{20t} = \bigg(2e^3 \bigg( \frac{2t}{n} \bigg)^{17}
\bigg)^t.
\]
By a union bound over all $t \in [n/4]$, and looking separately at the cases
$t \leq\log n$ and $\log n \leq t \leq n/4$, the probability that
$\ref{templatei}$ fails tends to $0$ as $n \to \infty$. In the same way,
exchanging the roles of $A_1$ and $B_1$, one can show that w.h.p.\ the
statement in $\ref{templateii}$ holds as well. Lastly, the probability that
$\ref{templateiii}$ fails is similarly at most
\[
\binom{n}{n/4}^2 \bigg( \binom{3n/4}{n/4} \binom{n}{n/4}^{-1} \bigg)^{20}
\leq (4e)^{n/2} \Big( \frac{3}{4} \Big)^{20n/4} \leq 2^{-n/4}.
\]
Thus, w.h.p.\ $\ref{templateiii}$ holds.
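As an aside, the two elementary binomial inequalities underlying the estimates above, namely $\binom{2t}{t}\binom{n}{t}^{-1}\leq(2t/n)^t$ for $2t\leq n$ and $\binom{3m}{m}\binom{4m}{m}^{-1}\leq(3/4)^m$ (here $m=n/4$), can be double-checked with exact integer arithmetic. The short script below is an illustration only, not part of the proof; the function names are ours.

```python
from math import comb

def ratio_ok(n: int, t: int) -> bool:
    # comb(2t, t) / comb(n, t) <= (2t / n)^t, compared exactly as
    # comb(2t, t) * n^t <= comb(n, t) * (2t)^t
    return comb(2 * t, t) * n ** t <= comb(n, t) * (2 * t) ** t

def second_ok(m: int) -> bool:
    # comb(3m, m) / comb(4m, m) <= (3/4)^m, compared exactly as
    # comb(3m, m) * 4^m <= comb(4m, m) * 3^m
    return comb(3 * m, m) * 4 ** m <= comb(4 * m, m) * 3 ** m

# the first inequality is used with n = |A_1| and t = |X|, where 2t <= n
assert all(ratio_ok(n, t) for n in range(2, 120) for t in range(1, n // 2 + 1))
# the second with m = n/4, i.e. sets X and Y of size n/4 inside parts of size n
assert all(second_ok(m) for m in range(1, 200))
print("both inequalities verified on all tested ranges")
```

Both checks amount to the termwise bounds $\frac{2t-i}{n-i}\leq\frac{2t}{n}$ and $\frac{3m-i}{4m-i}\leq\frac34$ for $0\leq i<t$ (resp.\ $0\leq i<m$).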
We now take any graph $H$ as above that satisfies $\ref{templatei}$,
$\ref{templateii}$, and $\ref{templateiii}$---such a graph exists for all
large enough $n$---and define $G$ by duplicating the vertices in $A_1$ and
$B_1$ and keeping the edges as in $H$, except that a single edge of $H$ now
corresponds to four edges in $G$. More precisely, we define $A = (A_1 \times
\{0\}) \cup (A_1 \times \{1\})$ and $B = (B_1 \times \{0\}) \cup (B_1 \times
\{1\})$ and insert an edge between $(a, i) \in A$ and $(b, j) \in B$ whenever
there is an edge between $a$ and $b$ in $H$. We let $A' = A_1 \times \{0\}
\subseteq A$ and $B' = B_1 \times \{0\} \subseteq B$. Note that because $H$ is
the union of $20$ matchings, $G$ has maximum degree at most $40$, even after
duplicating the vertices.
To complete the proof, we need to show that for any $Z \subseteq A' \cup B'$
with $|Z \cap A'| = |Z \cap B'|$, the graph $G[V(G) \setminus Z]$ contains a
perfect matching. Let $m := |Z \cap A'| = |Z \cap B'|$ and set $A_Z := A
\setminus Z$ and $B_Z := B \setminus Z$. We show that $G[V(G) \setminus Z]$
contains a perfect matching by verifying Hall's condition, i.e., by showing
that for every set $X \subseteq A_Z$, we have $|N_G(X, B_Z)| \geq |X|$.
Assume first $|X| \leq n/2$. Let $X'$ be the larger of the sets $A' \cap X$
and $(A \setminus A') \cap X$. Then $|X|/2 \leq |X'| \leq n/2$. By
$\ref{templatei}$ we have $|N_G(X, B_Z)| \geq |N_G(X', B_Z)| \geq |N_H(X')|
\geq 2|X'| \geq |X|$.
Suppose next $n/2 < |X| \leq 3n/2 - m$. Let $X'$ be the larger of the sets $A'
\cap X$ and $(A \setminus A') \cap X$, and note $|X'| \geq n/4$. By
definition, there are no edges between $X$ and $Y := B_Z \setminus N_G(X,
B_Z)$. If we assume $|N_G(X, B_Z)| \leq |X|$, then we have $|Y| \geq (2n - m)
- (3n/2 - m) \geq n/2$. Let $Y'$ be the larger of the sets $B' \cap Y$ and $(B
\setminus B') \cap Y$. The fact that there are no edges between $X'$ and $Y'$
then contradicts $\ref{templateiii}$.
Finally, suppose $3n/2 - m < |X| \leq 2n - m$. Assume towards contradiction
that $N_G(X, B_Z)$ is contained in a set $Q \subseteq B_Z$ of size $|X| - 1$
and let $Y := B_Z \setminus Q$. Note that $|Y| = 2n - m - (|X| - 1) \leq n/2$.
Let $Y'$ be the larger of the sets $B' \cap Y$ and $(B \setminus B') \cap Y$
and thus $|Y'| \geq |Y|/2$. However, all neighbours of $Y'$ are contained in
the set $A_Z \setminus X$ of size $2n - m - |X| = |Y| - 1$. This contradicts
$\ref{templateii}$.
\end{proof}
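To make the template construction concrete, here is a small self-contained sketch (an illustration only; `build_template`, `kuhn_matching`, and the parameter choices are ours, not from the paper). It builds $H$ as a union of $20$ random perfect matchings, duplicates the vertices to obtain $G$, and verifies the two properties that hold deterministically: $\Delta(G)\leq 40$, and the existence of a perfect matching when $Z=\emptyset$. The full conclusion, for every balanced $Z$, holds only w.h.p.\ for large $n$.

```python
import random

def kuhn_matching(adj, n_left, n_right):
    """Exact maximum bipartite matching via augmenting paths (Kuhn's algorithm)."""
    match_r = [-1] * n_right  # match_r[b] = left endpoint matched to b, or -1

    def augment(a, seen):
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                if match_r[b] == -1 or augment(match_r[b], seen):
                    match_r[b] = a
                    return True
        return False

    return sum(augment(a, set()) for a in range(n_left))

def build_template(n, num_matchings=20, seed=0):
    """H = union of random perfect matchings on [n] x [n], multiple edges merged;
    G duplicates every vertex, so each H-edge becomes four G-edges."""
    rng = random.Random(seed)
    edges_h = set()
    for _ in range(num_matchings):
        perm = list(range(n))
        rng.shuffle(perm)
        edges_h.update((a, perm[a]) for a in range(n))
    # left copy (a, i) gets index 2*a + i; the same indexing is used on the right
    adj = [[] for _ in range(2 * n)]
    for a, b in edges_h:
        for i in (0, 1):
            adj[2 * a + i].extend((2 * b, 2 * b + 1))
    return adj

n = 25
adj = build_template(n)
assert max(len(nbrs) for nbrs in adj) <= 40       # Delta(G) <= 2 * 20 = 40
assert kuhn_matching(adj, 2 * n, 2 * n) == 2 * n  # perfect matching for Z = {}
print("template with", 2 * n, "vertices per side checked")
```

The $Z=\emptyset$ case is guaranteed because $H$ contains a perfect matching by construction, and duplication turns it into a perfect matching of $G$.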
We are ready to give a proof of the Absorber Lemma. We first restate it for
convenience.
\absorbing*
\begin{proof}[Proof of Lemma~\ref{lem:absorbing-lemma}]
Let $\gamma_1 = \gamma_{\ref{lem:inheritance-lemma}}(\gamma)$ and $C = 4 \cdot
\max{\{ C_{\ref{lem:inheritance-lemma}}(\gamma),
C_{\ref{lem:connecting-lemma}}(\gamma_1, 80) \}}$. Let us write $W_A := W \cap
A$ and $W_B := W \cap B$. We first show that both of these sets have size at
least $\gamma |W|/4$. Indeed, if we assume that, say, $|W_A| \leq \gamma
|W|/4$, we have
\[
\gamma|W|^2p/3 \leq e_G(W_A, W_B) \leq p|W_A||W_B| + \beta \sqrt{|W_A||W_B|}
\leq \gamma p|W|^2/4 + \beta |W| < \gamma p|W|^2/3,
\]
where in the first inequality we use the fact that $G$ is a $\gamma
p$-expander, in the second that it is a $(p, \beta)$-sparse graph, and in the
last that $|W| \geq C\beta \sqrt{\log n}/p$. Thus, by an analogous argument
for $|W_B|$, we conclude
\begin{equation}\label{eq:wawb}
|W_A| \geq \gamma |W|/4 \qquad \text{and} \qquad |W_B| \geq \gamma |W|/4.
\end{equation}
Let now $a$ and $b$ be two arbitrary vertices such that $a \in W_A$ and $b \in
W_B$. These vertices are going to be the endpoints of our absorber. We start
by making some preparations.
Let $m := \max{\{ |U \cap A|, |U \cap B| \}}$ and let $W_A' \subseteq W_A
\setminus \{a\}$ and $W_B' \subseteq W_B \setminus \{b\}$ be subsets with
$|W_A'| = 2m - |U \cap A|$ and $|W_B'| = 2m - |U \cap B|$ chosen uniformly at
random among all subsets of this size. Note that it is possible to choose
subsets of this size because of \eqref{eq:wawb} and the assumption $|U| \leq
|W|/(C \log^2 n)$. In the following, we write $U_A := W_A' \cup (U \cap A)$
and $U_B := W_B' \cup (U \cap B)$; both of these sets have size $2m$.
Furthermore, let $W_1 \cup W_2 \cup W_3 = W \setminus (W_A' \cup W_B' \cup
\{a, b\})$ be an equipartition chosen uniformly at random. Since $|W_A'|,
|W_B'| \leq 2|U| \leq |W|/\log^2 n$, we have $|W_i| \geq |W|/4$. Therefore, by
our choice of $C$, the assumptions of the lemma together with the Inheritance
Lemma (Lemma~\ref{lem:inheritance-lemma}), Chernoff's inequality, and the
union bound, ensure that w.h.p.\ the following three properties are satisfied
for all $i \in [3]$:
\begin{enumerate}[(i), font=\itshape]
\item\label{abs-Wi-exp} $G[W_i]$ is a $\gamma_1p$-expander,
\item\label{abs-Wi-size} $|W_i| \geq (C/4) \cdot \beta\sqrt{\log n}/p$, and
\item\label{abs-Wi-deg} for every vertex $u \in U \cup W$, we have $\deg(u,
W_i) \geq \gamma_1|W_i|p$.
\end{enumerate}
In the following we assume that these properties hold deterministically for
all $W_i$.
Let $G_T = (A_T, B_T, E_T)$ with $|A_T| = |B_T| = 2m$ and $\Delta(G_T) \leq
40$ be a graph given by Lemma~\ref{lem:template}. Furthermore, let $A_T'
\subseteq A_T$ and $B_T' \subseteq B_T$ with $|A_T'| = |B_T'| = m$ be subsets
given by the former lemma with the property that for every $Z \subseteq A_T'
\cup B_T'$ with $|Z \cap A_T'| = |Z \cap B_T'|$, the graph $G_T[V(G_T)
\setminus Z]$ contains a perfect matching. Lastly, let $f \colon V(G_T) \to
U_A \cup U_B$ be a bijection mapping the vertices of $A_T$ onto $U_A$ and
$B_T$ onto $U_B$ and such that $U \cap A \subseteq f(A_T')$ and $U \cap B
\subseteq f(B_T')$.
The construction of the absorber now proceeds by three independent
applications of the Connecting Lemma. Set $\ell = \ceil{30\log
n/(\gamma_1\log\log n)}$. Firstly, we apply it with $\gamma_1$ (as $\gamma$),
$U_A \cup U_B$ (as $U$), $W_1$ (as $W$), and with the multigraph $M$ with the
vertex set $U_A \cup U_B$ and the edge set defined as follows: for every edge
$\{x, y\} \in E(G_T)$, add a \emph{double} edge $\{f(x), f(y)\}$ to $M$.
Since $\Delta(G_T) \leq 40$, it follows that $\Delta(M) \leq 80$. Moreover,
the bound $|U|\leq |W|/(C\log^2 n) \leq 4|W_1|/(C\log^2 n)$ shows that
the assumption $e(M) \leq |W_1|/(C\ell)$ is satisfied. Finally,
$\ref{abs-Wi-exp}$--$\ref{abs-Wi-deg}$ show that assumptions
$\ref{cl-W-exp}$--$\ref{cl-U-deg}$ of Lemma~\ref{lem:connecting-lemma} are
satisfied. Therefore, we obtain for every edge $\{x, y\} \in E(G_T)$ two
$f(x)f(y)$-paths of length at most $\ell$ such that all paths are internally
vertex-disjoint and use only vertices from $W_1$. For a given edge $\{x, y\}
\in E(G_T)$, we denote these two $f(x)f(y)$-paths by $P_{xy}$ and $Q_{xy}$ and
let
\[
P_{xy} = f(x), a_p^1, b_p^1, a_p^2, \dotsc, a_p^{\ell_p}, b_p^{\ell_p}, f(y)
\qquad \text{and} \qquad Q_{xy} = f(x), a_q^1, b_q^1, a_q^2, \dotsc,
a_q^{\ell_q}, b_q^{\ell_q}, f(y),
\]
where $\ell_p = (|P_{xy}| - 2)/2$, $\ell_q = (|Q_{xy}| - 2)/2$, $\ell_q \geq
\ell_p$ (w.l.o.g.), and $a_p^i, a_q^j \in A$ and $b_p^i, b_q^j \in B$. Note
that since $f(x)$ and $f(y)$ lie in different colour classes of $G$, both
paths necessarily have odd length.
Given a collection of paths $\ensuremath{\mathcal P} = \{P_{xy}, Q_{xy} : \{x, y\} \in
E(G_T)\}$ as above, let
\[
U_{\ensuremath{\mathcal P}} := \bigcup_{\{x, y\} \in E_T} V(P_{xy}) \cup V(Q_{xy}) \setminus
\{f(x), f(y)\}.
\]
Next, we apply the Connecting Lemma with $\gamma_1$ (as $\gamma$), $U_\ensuremath{\mathcal P}$ (as
$U$), $W_2$ (as $W$), and a multigraph $M$ defined as follows: the vertex set
of $M$ is just $U_{\ensuremath{\mathcal P}}$ and the edge set is the union of
\begin{align*}
E(M_{xy}) := \big\{ & \{b_p^i, a_q^i\}_{1 \leq i \leq \ell_p}, \{ a_p^{i+1},
b_q^i \}_{1 \leq i \leq \ell_p - 1}, \\
%
& \{b_q^{\ell_p + i - 1}, b_q^{\ell_q - i + 1}\}_{1 \leq i \leq
\ceil{(\ell_q - \ell_p)/2}} , \{a_q^{\ell_p + i}, a_q^{\ell_q - i + 1}\}_{1
\leq i \leq \ceil{(\ell_q - \ell_p)/2}} \big\},
\end{align*}
for all $\{x, y\} \in E(G_T)$. The edge set is much easier to `define'
visually---it is given by the dashed lines in
Figure~\ref{fig:two-vertex-absorber} for every $\{x, y\} \in E(G_T)$.
It is easy to verify that the assumptions of the Connecting Lemma are all
satisfied---perhaps the least evident being that $e(M) \leq |W_2|/(C\ell)$,
which holds because $M$ has at most $e(G_T) \cdot 4\ell \leq 320m \ell \leq
320|U|\ell$ edges and $|U| \leq |W|/(C\log^2n) \leq |W_2|/(320 C\ell^2)$.
Therefore, we obtain an $(M, W_2, \ell)$-matching, that is, we replace the
dashed edges in Figure~\ref{fig:two-vertex-absorber}, for all $\{x, y\} \in
E(G_T)$, by internally vertex-disjoint paths in $G$ whose internal vertices all
belong to $W_2$.
Lastly, for every $\{x, y\} \in E(G_T)$ denote by $u_{xy}$ and $v_{xy}$ the
vertices $a_p^1$ and $b_q^{(\ell_p + \ell_q)/2}$ (assuming $\ell_p + \ell_q$ is
even; otherwise this is $a_q^{(\ell_p + \ell_q + 1)/2}$), respectively (the
ones as in Figure~\ref{fig:two-vertex-absorber}) and let us fix an arbitrary ordering
$\{x_1, y_1\}, \dotsc, \{x_{m_T}, y_{m_T}\}$ of the edges of $G_T$, where $m_T
:= e(G_T)$. We now apply the Connecting Lemma for the third and last time with
$V(M) := \{u_{x_i y_i}, v_{x_i y_i}\}_{i \in [m_T]} \cup \{a, b\}$ (as $U$),
$W_3$ (as $W$), and the edge set of $M$
\[
E(M) := \big\{ \{a, u_{x_ 1y_1}\}, \{v_{x_i y_i}, u_{x_{i + 1} y_{i +
1}}\}_{i \in [m_T - 1]}, \{v_{x_{m_T} y_{m_T}}, b\} \big\}.
\]
Similarly as above, one easily checks that all the assumptions of the
Connecting Lemma are satisfied and hence we obtain an $(M, W_3,
\ell)$-matching which connects all the two-vertex absorbers into one large
absorber $H$. For a more natural, visual representation, we depict the obtained
structure in Figure~\ref{fig:absorber}.
\begin{figure}[!htbp]
\centering
\input{figures/absorber}
\caption{The absorber $H$. The vertices $f(x_1), f(x_2), f(x_3)$ and
respectively $f(y_1), f(y_2), f(y_3)$ are not necessarily distinct (they
would be distinct only if $G_T$ were itself a perfect matching); however,
all other vertices are actually distinct.}
\label{fig:absorber}
\end{figure}
It remains to show that the graph constructed in this way is a $(U \cap A, U
\cap B)$-absorber with endpoints $a$ and $b$. For this, suppose that $A'
\subseteq U \cap A$ and $B' \subseteq U \cap B$ are subsets such that $|A'| =
|B'|$. Then we let $Z := f^{-1}(A' \cup B')$ and note that $Z$ is a subset of
$A_T' \cup B_T'$ that intersects each set $A_T'$ and $B_T'$ in the same number
of vertices. Hence, by the defining property of $G_T$, the graph $G_T[V(G_T)
\setminus Z]$ contains a perfect matching $M$. We can then find an $ab$-path
using all vertices except those in $A' \cup B'$, as follows: for each edge
$\{x, y\}$ in the given perfect matching take a $u_{xy}v_{xy}$-path which
includes all vertices of $H_{xy}$; for all other edges take a
$u_{xy}v_{xy}$-path which includes all vertices of $H_{xy}$ except for $f(x)$
and $f(y)$. Since the edges in $M$ form a perfect matching, it is clear that
this is indeed a path (that is, no vertex is used twice) and that this path
visits each vertex of the absorber except those contained in the set $A' \cup
B'$.
\end{proof}
\section{The Connecting Lemma}\label{sec:connecting-lemma}
The main result of this section is essentially that if $G$ is a pseudorandom
graph and $W$ is a sufficiently large subset inducing a good expander, then $G$
is `highly connected via $W$', i.e., it is possible to connect many given pairs
of vertices using short paths whose internal vertices lie entirely in $W$. We
formally restate the lemma for the convenience of the reader.
\connecting*
The main step towards proving the Connecting Lemma is to prove the following
technical auxiliary lemma whose proof we defer to the end of the section.
\begin{lemma}\label{lem:resilient-connecting}
For every $\gamma \in (0, 1)$, the following holds for all sufficiently large
$n$, all $p \in (0, 1)$, and all $\beta > 0$. Let $G$ be a $(p, \beta)$-sparse
graph on $n$ vertices and let $U, W \subseteq V(G)$ be disjoint subsets such
that:
\begin{enumerate}[(i)]
\item\label{cl-res-W-exp} $G[W]$ is a $\gamma p$-expander,
\item\label{cl-res-W-size} $|W| \geq 10\gamma^{-1} \beta \sqrt{\log n}/p$,
and
\item\label{cl-res-U-deg} every vertex $u \in U$ satisfies $\deg(u, W) \geq
\gamma|W|p$.
\end{enumerate}
Let $\ell = \ceil{10\log n/(\gamma\log\log n)}$. Then for every subset $Z
\subseteq W$ with
\[
|Z| \leq \min{\{\gamma|W|/20, |U|\log n\}},
\]
there is a vertex $x \in U$ such that $|N^\ell(x, W \setminus Z)| > |W|/2$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:connecting-lemma}]
We define an $\ell$-uniform hypergraph $\ensuremath{\mathcal H}$ on the vertex set $E(M) \cup W$,
where for every edge $e \in E(M)$ and every set $Y \subseteq W$ of size $\ell
- 1$, we add a hyperedge $\{e\} \cup Y$ if and only if $G$ contains a path
joining the endpoints of $e$ whose internal vertices belong to $Y$. Hence, if
there is an $E(M)$-saturating matching in $\ensuremath{\mathcal H}$, then $G$ contains an $(M, W,
\ell)$-matching. We use Haxell's criterion (Theorem~\ref{thm:haxell-matching})
to show that $\ensuremath{\mathcal H}$ contains an $E(M)$-saturating matching.
For this, let $S \subseteq E(M)$ and $Z \subseteq W$ be subsets with $|Z| \leq
2\ell|S|$. Since $M$ has maximum degree at most $\Delta$, we can greedily find
a subset $S' \subseteq S$ of size $|S|/(2\Delta)$ such that the edges in $S'$
are pairwise disjoint. In other words, there exist disjoint sets $U_1, U_2
\subseteq U$ of size $|U_1| = |U_2| \geq |S|/(2\Delta)$, and a bijection $\phi
\colon U_1 \to U_2$, such that for every $u \in U_1$, the pair $\{u,
\phi(u)\}$ belongs to $S'$. It is enough to show that for some $u \in U_1$,
$G$ contains a $u\phi(u)$-path of length at most $\ell$ whose internal
vertices are all in the set $W \setminus Z$. Indeed, this implies that $\ensuremath{\mathcal H}$
contains an edge intersecting $S'$ (and hence $S$) and not intersecting $Z$,
showing that Haxell's criterion is satisfied. In the remainder we show that
such a vertex $u \in U_1$ exists.
Let $\ell_1 = \ceil{10 \log n/(\gamma\log\log n)}$ and let $U_1' \subseteq
U_1$ be the set consisting of all vertices $u \in U_1$ for which
$|N^{\ell_1}(u, W \setminus Z)| > |W|/2$. We claim that $|U_1'| > |U_1|/2$.
Towards a contradiction suppose $|U_1'| \leq |U_1|/2$ and let $U_1'' = U_1
\setminus U_1'$. Note that $|U_1''| \geq |U_1|/2 \geq |S|/(4\Delta)$. Since $|Z|
\leq 2\ell|S| \leq |U_1''|\log n$ and also $|Z| \leq 2\ell|S| \leq 2\ell e(M)
\leq 2|W|/C$, we have
\[
|Z| \leq \min{\{ \gamma|W|/20, |U_1''|\log n \}}
\]
for sufficiently large $C$. By the assumption $|W| \geq C\beta\sqrt{\log
n}/p$, we also see that $|W| \geq 10\gamma^{-1} \beta \sqrt{\log n}/p$.
Therefore, by Lemma~\ref{lem:resilient-connecting} applied for $U_1''$ (as
$U$), $W$, $Z$, and $\ell_1$ (as $\ell$), there exists a vertex $u \in U_1''$
such that $|N^{\ell_1}(u, W \setminus Z)| > |W|/2$, which is a contradiction
with the definition of $U_1''$. It follows that, indeed, $|U_1'| > |U_1|/2$.
Analogously, the set $U_2' \subseteq U_2$ of all vertices $u \in U_2$ for
which $|N^{\ell_1}(u, W\setminus Z)| > |W|/2$ has size $|U_2'| > |U_2|/2$. It
then follows that there is some $u \in U_1'$ such that $\phi(u) \in U_2'$.
Thus, from both $u$ and $\phi(u)$ it is possible to reach strictly more
than $|W|/2$ vertices of $W \setminus Z$ via paths of length at most $\ell_1$
whose internal vertices belong to $W \setminus Z$. Hence, there must exist a
$u\phi(u)$-path of length at most $2\ell_1 \leq \ell$ whose internal vertices
all lie in $W \setminus Z$. This completes the proof.
\end{proof}
We now proceed with the proof of Lemma~\ref{lem:resilient-connecting}.
\begin{proof}[Proof of Lemma~\ref{lem:resilient-connecting}]
Without loss of generality, assume $|U| \leq |W|/(2\log n)$. Indeed, if $U$
contains more than $|W|/(2\log n)$ elements, then replacing it by a subset of
size $\floor{|W|/(2\log n)}$ does not violate any of the assumptions
$\ref{cl-res-W-exp}$--$\ref{cl-res-U-deg}$ nor the bound on the size of $Z$.
We first prove an auxiliary claim about expansion of certain subsets which we
use in the proof of the lemma.
\begin{claim}\label{cl:many-neighbours}
If $X \subseteq U \cup W$ is such that $|X| \geq |U|$ and $e(X, W \setminus
X) \geq \gamma|X||W|p/2$, then
\[
\big| N(X, W \setminus (X \cup Z)) \big| \geq \min{\{|X|\log n,
\gamma|W|/5\}}.
\]
\end{claim}
\begin{proof}
Let us denote $N(X, W \setminus X)$ by $Y$. Recall that since $G$ is $(p,
\beta)$-sparse, we have
\[
e(X, W \setminus X) = e(X, Y) \leq |X||Y|p + \beta\sqrt{|X||Y|}.
\]
Combining this with the assumption $e(X, W \setminus X) \geq
\gamma|X||W|p/2$ and dividing through by $|X|p$, we obtain
\[
|Y| \geq \frac{\gamma |W|}{2} - \frac{\beta \sqrt{|Y|}}{\sqrt{|X|}p}.
\]
If $\sqrt{|Y|} \leq \gamma \sqrt{|X|}|W|p/(4\beta)$, then the above
inequality gives $|Y| \geq \gamma |W|/4$. In this case, it follows from the
assumption $|Z| \leq \gamma |W|/20$ that
\[
\big| N(X, W \setminus (X \cup Z)) \big| \geq \gamma|W|/4 - |Z| \geq
\gamma|W|/5.
\]
On the other hand, if $\sqrt{|Y|} > \gamma \sqrt{|X|}|W|p/(4\beta)$ then
using the assumption $|W| \geq 10\gamma^{-1}\beta\sqrt{\log n}/p$ we get the
lower bound $\sqrt{|Y|} \geq 2\sqrt{|X|\log n}$, or, equivalently, $|Y| \geq
4|X|\log n$. Since $|Z| \leq |U|\log n \leq |X|\log n$ we obtain
\[
\big| N(X, W \setminus (X \cup Z)) \big| \geq 4|X|\log n - |Z| \geq
|X|\log n,
\]
completing the proof of the claim.
\end{proof}
We are now ready to prove the lemma. For every integer $j \geq 0$, set
\[
\ell_j := \ceil{\log n/(\gamma \log\log n)} + (j + 1) \ceil{5/\gamma}.
\]
The goal is to construct a sequence of subsets $U = U_0 \supseteq U_1
\supseteq U_2 \supseteq \dotsb$ such that for every $j \geq 0$ we have
$|U_{j+1}| \leq \ceil{|U_j|/\log n}$ and
\[
|N^{\ell_j}(U_j, W \setminus Z)| > |W|/2.
\]
Note that $|U_j| \leq \ceil{|U_{j - 1}|/\log n}$ implies that either $|U_j| =
1$ or $|U_j| \leq 2|U_{j - 1}|/\log n$, and therefore, for $j = \ceil{\log
n/(\log\log n - \log 2)}$, we have $|U_j| = 1$. Moreover, for this $j$, we
have $\ell_j \leq 10\log n/(\gamma \log\log n) \leq \ell$. In this way, the
statement of the lemma follows, provided we can indeed construct the sets
$U_0, U_1, U_2, \dots$ with the properties mentioned above.
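The arithmetic behind the choice $j = \ceil{\log n/(\log\log n - \log 2)}$ is easy to check numerically. The sketch below (illustrative only, with natural logarithms as in the proof) simulates the worst case $|U_0| = n$ with $|U_{j+1}| = \ceil{|U_j|/\log n}$ and confirms that the bound on the number of iterations holds:

```python
from math import ceil, log

def steps_to_one(n: int) -> int:
    """Number of iterations u -> ceil(u / log n) needed to go from n down to 1."""
    u, steps = n, 0
    while u > 1:
        u = ceil(u / log(n))
        steps += 1
    return steps

for n in (10**4, 10**6, 10**9, 10**12):
    bound = ceil(log(n) / (log(log(n)) - log(2)))
    assert steps_to_one(n) <= bound
    print(f"n = {n}: {steps_to_one(n)} steps, bound {bound}")
```

This is exactly the estimate in the text: each step either reaches $1$ or shrinks $u$ by a factor of at least $\log n/2$, so $(\log n/2)^j \geq n$ suffices.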
For the set $U_0 = U$, we only need to verify that $|N^{\ell_0}(U_0, W
\setminus Z)| > |W|/2$. Note that as $e(U_0, W \setminus U_0) \geq
\gamma|U_0||W|p$ due to $\ref{cl-res-U-deg}$, Claim~\ref{cl:many-neighbours}
for $U_0$ (as $X$) implies
\[
|N^1(U_0, W \setminus Z)| \geq \min{\{ |U|\log n, \gamma|W|/5 \}} \geq |U|,
\]
where the last inequality holds because of our assumption $|U| \leq |W|/(2\log
n)$. Moreover, if for some $i \geq 1$, we have $|U| \leq |N^i(U_0, W \setminus
Z)| \leq |W|/2$, then as
\begin{align*}
e\big( N^i(U_0, W \setminus Z), W \setminus N^i(U_0, W \setminus Z) \big) &
\osref{$\ref{cl-res-W-exp}$}\geq \gamma \big|N^i(U_0, W \setminus Z)\big|
\big|W \setminus N^i(U_0, W \setminus Z)\big|p \\
%
& \geq \gamma |N^i(U_0, W \setminus Z)||W|p/2,
\end{align*}
Claim~\ref{cl:many-neighbours} applied to $N^i(U_0, W \setminus Z)$ (as $X$)
shows
\[
|N^{i+1}(U_0, W \setminus Z)| \geq |N^i(U_0, W \setminus Z)| +
\min{\{|U|\log^{i+1} n, \gamma|W|/5\}}.
\]
One easily sees that for $\ell_0 = \ceil{\log n/(\gamma \log\log n)} +
\ceil{5/\gamma}$, we have $|N^{\ell_0}(U_0, W \setminus Z)| > |W|/2$ as
required.
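A sketch of the computation behind this last claim, assuming $\gamma \leq 1$ and $|W| \leq n$:

```latex
% Phase 1: while |U| log^{i+1} n < gamma|W|/5, the minimum is the first term,
% so |N^i(U_0, W \setminus Z)| >= |U| log^i n; since |U| log^i n >= n >= gamma|W|/5
% once i >= log n / loglog n, after at most ceil( log n / (gamma loglog n) )
% steps the reached set has size at least gamma|W|/5.
% Phase 2: each further step adds at least gamma|W|/5, so if the size were
% still at most |W|/2 after ceil(5/gamma) additional steps, we would have the
% contradiction
\[
  \frac{|W|}{2} \;\ge\; \Bigl\lceil \frac{5}{\gamma} \Bigr\rceil
  \cdot \frac{\gamma|W|}{5} \;\ge\; |W|.
\]
```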
Suppose now we have constructed the set $U_j$ as above and want to construct
$U_{j + 1}$. We thus assume
\[
|N^{\ell_j}(U_j, W \setminus Z)| > |W|/2.
\]
By averaging, there exists a subset $U_{j + 1} \subseteq U_j$ of size $|U_{j +
1}| \leq \ceil{|U_j|/\log n}$ such that
\[
|N^{\ell_j}(U_{j+1}, W \setminus Z)| > |W|/(2\log n) \geq |U|.
\]
Successively applying Claim~\ref{cl:many-neighbours} at most $\ceil{5/\gamma}$
times, it is easy to see that
\[
|N^{\ell_j + \ceil{5/\gamma}}(U_{j+1}, W \setminus Z)| > |W|/2.
\]
Indeed, in a single step the set $N^{\ell_j}(U_{j+1}, W \setminus Z)$ expands
to a set of size
\[
\min{\{ |W|/(2\log n) \cdot \log n, \gamma |W|/5 \}} \geq \gamma|W|/5,
\]
and in at most $\ceil{5/\gamma} - 1$ additional steps, it expands to a set of
size greater than $|W|/2$. This completes the proof of the lemma.
\end{proof}
\subsection*{Acknowledgements.} The authors would like to thank the anonymous
reviewers for their thorough reading of the paper and valuable comments; in
particular, for pointing out a slight gap in a previous version of the paper.
\bibliographystyle{abbrv}
\section{Embedding and boosting}\label{sec:embedding-lemma}
In this section we give the proof of the Embedding Lemma
(Lemma~\ref{lem:embedding-lemma}). The proof relies on the following approximate
version covering almost all the vertices of $G$.
\begin{lemma}\label{lem:embedding-cheat}
For every integer $k \geq 2$ and all $\alpha, \mu > 0$, there exists a
positive $\eta(\alpha, k)$ such that the following holds for all sufficiently
large $n$. Let $p \in (0, 1)$ and $\beta \leq \eta np$ and let $G$ be a $(p,
\beta)$-sparse graph on $n$ vertices such that
\[
\delta(G) \geq 2\alpha np \quad \text{and} \quad \delta_{\alpha/4}(G) \geq
(1/k + \alpha)np.
\]
Then $G$ contains a collection of $k - 1$ cycles covering all but at most $\mu
n$ vertices.
\end{lemma}
\begin{proof}[Proof sketch]
Since the argument is fairly standard nowadays, we only give a rough sketch of
the proof. We apply the sparse regularity lemma
(Lemma~\ref{lem:sparse-regularity-lemma}) to the graph $G$ with a sufficiently
small parameter $\varepsilon > 0$. Let $t$ be the number of vertices in the reduced
graph $R$. Since $G$ is $(p, \beta)$-sparse, straightforward calculations show
that $\delta(R) \geq \alpha t$ and $\delta_{\alpha/4+\varepsilon}(R) \geq
(1/k+\alpha-2\varepsilon)t$. Let $U$ be the set containing the at most
$(\alpha/4+\varepsilon)t$ vertices (clusters) $v \in V(R)$ with $\deg_R(v) <
(1/k+\alpha-2\varepsilon)t$. A simple greedy strategy allows us to find a matching
$M$ in $R$ that saturates the set $U$; this matching contains at most
$2|U|\leq (\alpha/2+2\varepsilon)t$ vertices. Let $W = V(R) \setminus V(M)$. Since
$R[W]$ has minimum degree at least $(1/k+\alpha-2\varepsilon)t - 2|U| \geq t/k$, it
can be covered by at most $k - 1$ cycles, by Theorem~\ref{thm:KL}. Moreover,
the minimum degree of $R$ ensures that we can select a different neighbour in
$W$ for each of the vertices in $V(M)$.
Using standard machinery, one can now translate each cycle in the covering of
$R[W]$, as well as all the matching edges in $M$ that have an endpoint whose
selected neighbour lies on that cycle, into a single cycle that covers all but
at most $O(\varepsilon n)$ vertices of the graph $G$. The following figure
schematically represents one such cycle together with the two edges of $M$
that have an endpoint whose selected neighbour lies on the cycle.
\begin{center}
\begin{tikzpicture}[scale=1.5]
\coordinate (Ma) at (0.5,1);
\coordinate (Mb) at (2,1);
\coordinate (Mu) at (0.5,-1);
\coordinate (Mv) at (2,-1);
\draw[line width=0.9cm,color=black!20] (Ma)--(Mb) (Mu)--(Mv);
\begin{scope}[xshift=5cm]
\foreach \i in {1,...,6} {
\coordinate (C\i) at (60*\i-30:1.5cm);
}
\end{scope}
\draw[line width=0.9cm,color=black!20] (Mb)--(C3) (Mv)--(C4);
\draw[line width=0.9cm,color=black!20] (C6)--(C1) (C1)--(C2) (C2)--(C3)
(C3)--(C4) (C4)--(C5) (C5)--(C6);
\foreach \i in {a,b,u,v} {
\draw[fill=white] (M\i) circle (0.5cm);
}
\foreach \i in {1,...,6} {
\draw[fill=white] (C\i) circle (0.5cm);
}
\foreach \i in {1,2,4,5,6} {
\begin{scope}[xshift=5cm,rotate=60*\i]
\draw (30:1.5cm) ++(0,-0.2) coordinate (f\i) ++(0.1,0) coordinate (au1)
++(0.1,0) coordinate (au2);
\draw (30:1.5cm) +(-0.2,-0.2) coordinate (h\i);
\draw (-30:1.5cm) ++(0,0.2) coordinate (g\i) ++(0.1,0) coordinate (au3)
++(0.1,0) coordinate (au4);
\draw[black]
(f\i) -- (au3) -- (au2) -- (au4) -- (au1) -- (g\i);
\end{scope}
}
\begin{scope}[xshift=5cm,rotate=60*3]
\draw (30:1.5cm) ++(0,-0.2) coordinate (au2) ++(0.1,0) coordinate (au1)
++(0.1,0) coordinate (f3);
\draw (30:1.5cm) +(-0.2,-0.2) coordinate (h3);
\draw (-30:1.5cm) ++(0,0.2) coordinate (au4) ++(0.1,0) coordinate (au3)
++(0.1,0) coordinate (g3);
\draw[black]
(f3) -- (au3) -- (au2) -- (au4) -- (au1) -- (g3);
\end{scope}
\draw[black] (Mb) ++(0,-0.3) coordinate (a) {} -- ++(-1.5,0.1) --
+(1.5,0.1) -- ++(0,0.2) -- +(1.5,0.1) -- ++(0,0.2) -- +(1.5,0.1)
coordinate (b) {};
\draw[black] (Mv) ++(0,-0.3) coordinate (u) {} -- ++(-1.5,0.1) --
+(1.5,0.1) -- ++(0,0.2) -- +(1.5,0.1) -- ++(0,0.2) -- +(1.5,0.1)
coordinate (v) {};
\draw[densely dotted] (b)--(f2) (a)--(g3) (v)--(f3) (u)--(g4);
\draw[densely dotted] (f1)--(h2)--(g2) (f4)--(h5)--(g5)
(f5)--(h6)--(g6) (f6)--(h1)--(g1);
\end{tikzpicture}
\end{center}
The long black paths---each of which covers all but $O(\varepsilon n/t)$ vertices in
both of the clusters it belongs to---are embedded first into the dense regular
pairs, using, e.g., the method from the proof
of~\cite[Lemma~2.3]{ben2012long}. Finally, using the definition of an $(\varepsilon,
p)$-regular pair, the dotted edges can be added by sacrificing at most $O(\varepsilon
n/t)$ vertices from each black path. In this way, one obtains a collection of
$k-1$ cycles in $G$ covering all but $O(\varepsilon n) \leq \mu n$ vertices.
\end{proof}
Utilising a trick of Nenadov and the second author~\cite{nenadov2020komlos}, as
a consequence of Lemma~\ref{lem:embedding-cheat} we get
Lemma~\ref{lem:embedding-lemma}.
\embedding*
\begin{proof}
Let $C = C(\alpha, k) > 0$ be a sufficiently large constant for the arguments
below to go through, $\eta = \eta_{\ref{lem:embedding-cheat}}(\alpha/4, k)$,
and $\mu = \min{\{1/2, \alpha/48, \eta/8, 1/(4C) \}}$.
Take $m$ to be the largest integer such that $n/2^{m - 1} > \max{\{
\floor{\eta^{-1}\beta/p}, \floor{C\log n/p} \}}$ and note that $m \leq \log_2
n$. We first claim that there exists a partition $V(G) = L_1 \cup \dotsb \cup
L_m$ such that:
\begin{enumerate}[(i), font=\itshape]
\item\label{boost1} $|L_i| = \floor{n/2^i}$ for all $i \in [m - 1]$,
\item\label{boost2} $\deg(v, L_i) \geq \alpha|L_i|p$ for all $v \in V(G)$
and $i \in [m]$, and
\item\label{boost3} for every $i, j \in [m]$, we have $\deg(v, L_i) \geq
(1/k + \alpha/2)|L_i|p$ for all but at most $(\alpha/24)|L_j|$ vertices $v
\in L_j$.
\end{enumerate}
Indeed, a partition $V(G) = L_1 \cup \dotsb \cup L_m$ chosen uniformly at
random among all partitions such that $|L_i| = \floor{n/2^i}$ for all $i \in
[m - 1]$ has these properties with high probability. We briefly explain how
one can conclude this. Observe that
\begin{equation}\label{eq:leftover-lb}
|L_m| = n - \sum_{i = 1}^{m - 1} \floor{n/2^i} \geq n - n \sum_{i = 1}^{m -
1} 1/2^i = n/2^{m - 1},
\end{equation}
and thus $|L_i| \geq C\log n/p$ for all $i \in [m]$. For every fixed $v \in
V(G)$ and $i \in [m]$, the random variable $\deg(v, L_i)$ follows a
hypergeometric distribution with mean at least $2\alpha|L_i|p \geq 2\alpha
C\log n$, so by Chernoff's inequality, we have $\Pr[\deg(v, L_i) \leq
\alpha|L_i|p] \leq n^{-2}$. Thus, the union bound shows that $\ref{boost2}$
holds with high probability for every $i \in [m]$ and $v \in V(G)$. The
statement in $\ref{boost3}$ follows similarly.
Fix a partition $V(G) = L_1 \cup \dotsb \cup L_m$ satisfying
$\ref{boost1}$--$\ref{boost3}$. We show by induction that for every $i \in
[m]$, the graph $G[L_1 \cup \dotsb \cup L_i]$ contains $k - 1$ path forests
with at most $i$ paths each, which together cover all but at most $\mu|L_i|$
vertices in $L_1 \cup \dotsb \cup L_i$. By maximality of $m$, we have
\[
|L_m| \leq n/2^{m - 1} + m \leq 2\max{\{ \eta^{-1} \beta/p, C\log n/p \}} +
\log_2 n \leq 4\max{\{ \eta^{-1} \beta/p, C\log n/p \}}.
\]
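The displayed bound can be checked as follows (each floor drops less than $1$; the final step assumes $C \geq 2$):

```latex
\[
  |L_m| \;=\; n - \sum_{i = 1}^{m - 1}\Bigl\lfloor\frac{n}{2^i}\Bigr\rfloor
  \;\le\; \frac{n}{2^{m - 1}} + (m - 1),
\]
% and by maximality of m, n/2^m <= max{ floor(eta^{-1}beta/p), floor(C log n/p) },
% hence n/2^{m-1} <= 2 max{ eta^{-1}beta/p, C log n/p }. Finally,
% m - 1 <= log_2 n <= C log n <= C log n/p <= 2 max{ eta^{-1}beta/p, C log n/p },
% which absorbs the additive term.
```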
Hence, for $i = m$, adding at most $\mu|L_m|$ uncovered vertices to one of the
path forests used to cover $G[L_1 \cup \dotsb \cup L_m]$ results in a cover of
$G$ by $k - 1$ path forests, each of which contains at most $(k - 1)m + \mu
\cdot 4\max{\{ \eta^{-1}\beta/p, C\log n/p \}} \leq \max{\{ \beta/p, \log n/p
\}}$ paths, for our choice of $\mu$.
For the base case $i = 1$, this is a consequence of
Lemma~\ref{lem:embedding-cheat} applied with $\alpha/2$ (as $\alpha$) and
$\mu$ to $G[L_1]$. We can indeed do so since $\beta \leq \eta|L_1|p$.
Assume now the statement holds for some $1 \leq i < m$ and let us show it for
$i + 1$. Denote by $W$ the vertices not covered by the $k - 1$ path forests in
$G[L_1 \cup \dotsb \cup L_i]$ and note that $|W| \leq \mu|L_i| \leq 2\mu|L_{i
+ 1}|$. For every $v \in V(G)$ we have
\[
\deg_G(v, L_{i+1} \cup W) \geq \deg_G(v, L_{i+1}) \osref{$\ref{boost2}$}\geq
\alpha|L_{i+1}|p \geq \alpha\frac{|L_{i+1}| + |W|}{1 + 4\mu} p \geq
(\alpha/2)|L_{i+1} \cup W|p.
\]
A similar calculation shows that every vertex $v \in V(G)$ with $\deg_G(v,
L_{i+1}) \geq (1/k + \alpha/2)|L_{i+1}|p$ also satisfies $\deg_G(v, L_{i+1}
\cup W) \geq (1/k + \alpha/4)|L_{i+1} \cup W|p$. Since there were at most
$(\alpha/24)|L_{i+1}|$ vertices in $L_{i+1}$ violating the previous degree
condition by $\ref{boost3}$, and $|W| \leq \mu|L_{i+1}|$, there are at most
$(\alpha/16)|L_{i+1}|$ vertices $v \in L_{i+1} \cup W$ with $\deg_G(v, L_{i+1}
\cup W) < (1/k + \alpha/4)|L_{i+1} \cup W|p$. Therefore, another application
of Lemma~\ref{lem:embedding-cheat} for $\alpha/4$ (as $\alpha$) to $G[L_{i+1}
\cup W]$ shows that the hypothesis holds for $i+1$. Once again, we may apply
the lemma as $\beta \leq \eta|L_{i+1} \cup W|p$ by \eqref{eq:leftover-lb}.
\end{proof}
\section{The Inheritance Lemma}\label{sec:inheritance}
For the convenience of the reader, we repeat the statement of
Lemma~\ref{lem:inheritance-lemma}.
\inheritance*
\begin{proof}
Assume that $R \subseteq V(G)$ is a set of size $r$ chosen uniformly at
random. We aim to show that $R$ induces an $\Omega(p)$-expander with
probability at least $1 - n^{-1}$, which in turn implies the lemma.
Note that as $G$ is a $\gamma p$-expander, it has minimum degree at least
$\gamma p (n - 1)$. Then, as $R$ is a uniformly random subset of size $r \geq
C \log n/p$, a standard application of Chernoff's inequality and the union bound
shows that with probability at least $1 - o(n^{-2})$ (for $C$ large enough),
we have
\begin{equation}\label{eq:inh-min-degree}
\delta(G[R]) \geq \gamma rp/2.
\end{equation}
To show that $G[R]$ is a $\gamma_1p$-expander, we need to verify that for all
partitions $X \cup Y = R$, we have $e(X, Y) \geq \gamma_1 p |X||Y|$. We first
show that if \eqref{eq:inh-min-degree} holds, then this is indeed the case for
all partitions where one of the parts, say $X$, is smaller than $\gamma r/4$.
For this, we use the fact that $G$, and hence also $G[R]$, is $(p,
\beta)$-sparse. Applying Lemma \ref{lem:edges-out-small-set} to the graph
$G[R]$ and using $|X| \leq \gamma r/4$ and \eqref{eq:inh-min-degree}, we
obtain
\[
e(X, Y) \geq \gamma|X|rp/4 - \beta|X|.
\]
By choosing $C$ to be sufficiently large and from the assumption that $\beta
\leq rp/C$ we further get
\[
e(X, Y) \geq \gamma |X|rp/4 - \beta |X| \geq \gamma|X|rp/5 \geq \gamma
|X||Y|p/5.
\]
In particular, the expansion property holds with $\gamma_1 = \gamma/5$.
The argument for the remaining case is more complicated and can be summarised
as follows. We apply the sparse regularity lemma to the graph $G$, thus
obtaining an $(\varepsilon, \tilde p)$-regular partition $V_0 \cup V_1 \cup \dotsb
\cup V_t$, where $\varepsilon$ is a small positive constant and $\tilde p =
e(G)/\binom{n}{2}$ is the density of $G$. The random set $R$ intersects each
part of the regular partition in roughly the expected number of vertices.
Moreover, whenever $(V_i, V_j)$ is an $(\varepsilon, \tilde p)$-regular pair with
density $d_{ij} \tilde p$, then the pair $(V_i \cap R, V_j \cap R)$ is
$(\varepsilon', d_{ij}\tilde p)$-\emph{lower-regular} with a slightly weaker
parameter $\varepsilon'$, which follows from Lemma~\ref{lem:small-subsets-regular}.
Now if we have a partition $X \cup Y = R$, where $|X|, |Y| \geq \gamma r/4$,
then we `approximate' it by a partition $X' \cup Y' = V(G) \setminus V_0$ by
letting $X'$ (resp.\ $Y'$) be the union of the classes in the regular
partition that intersect $X$ (resp.\ $Y$) in some $\Omega(r)$ vertices (note
that the sets defined in this way may not be disjoint; there is thus a simple
cleaning-up step to make sure that this is the case). Since $G$ is a $\gamma
p$-expander and $V_0$ is small, we know that there are roughly $\gamma p|X'||Y'|$
edges in $G[X', Y']$, which implies that there are many dense regular pairs
$(V_i, V_j)$ such that $V_i \subseteq X'$ and $V_j \subseteq Y'$. Because for
such pairs the pair $(V_i \cap R, V_j \cap R)$ is lower-regular, $(V_i \cap X,
V_j \cap Y)$ contains many edges, that is, there are many edges in $G[X, Y]$.
We now give the details.
Set $\delta = 1/e$ and $\tilde p = e(G)/\binom{n}{2}$, and let $d$ and $\varepsilon'
< d/2$ be sufficiently smaller than $\gamma$ in order to support the arguments
that follow. Let $\varepsilon_0 = \varepsilon_{0_{\ref{lem:small-subsets-regular}}}(\varepsilon',
\delta)$ and $D = D_{\ref{lem:small-subsets-regular}}(\varepsilon')$. Lastly, let
$\varepsilon$ be sufficiently small with respect to all previously chosen constants,
let $t$ be as given by Lemma~\ref{lem:sparse-regularity-lemma} applied for
$\varepsilon$, and assume that $C$ is large enough (in particular, $C$ is much larger
than $t$). For ease of reference we point out that we have the following
hierarchy of constants
\[
0 < \varepsilon \ll \varepsilon_0 \ll \varepsilon' \ll d \ll \gamma < 1 \ll t \ll C.
\]
Observe that $G$ being a $\gamma p$-expander which is $(p, \beta)$-sparse
implies $\gamma p/2 \leq \tilde p \leq 4p$.
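For completeness, the claimed bounds on $\tilde p$ can be derived as follows (a sketch; we use the lemma's assumption $\beta \leq rp/C \leq np$ and the convention $e_G(V, V) = 2e(G)$):

```latex
% Lower bound: apply the expansion property to a balanced partition
% X \cup Y = V(G) with |X| = floor(n/2):
\[
  e(G) \;\ge\; e(X, Y) \;\ge\; \gamma p \Bigl\lfloor \frac{n}{2}\Bigr\rfloor
  \Bigl\lceil \frac{n}{2}\Bigr\rceil \;\ge\; \frac{\gamma p\, n(n - 1)}{4},
  \qquad\text{so}\qquad
  \tilde p = \frac{e(G)}{\binom{n}{2}} \;\ge\; \frac{\gamma p}{2}.
\]
% Upper bound: (p, beta)-sparseness with X = Y = V(G) gives
% 2 e(G) <= p n^2 + beta n <= 2 p n^2, hence
\[
  \tilde p \;\le\; \frac{p n^2}{\binom{n}{2}} \;=\; \frac{2pn}{n - 1}
  \;\le\; 4p \quad \text{for } n \ge 2.
\]
```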
We apply the sparse regularity lemma (Lemma~\ref{lem:sparse-regularity-lemma})
to $G$ for $\varepsilon$ to obtain an $(\varepsilon, \tilde p)$-regular partition $(V_i)_{i
= 0}^{t}$ of $V(G)$, where $V_0$ is the exceptional class of size $|V_0| \leq
\varepsilon n$. Since $|V_1| = \dotsb = |V_t|$, we have $(1 - \varepsilon)n/t \leq |V_i|
\leq n/t$ for all $i \in [t]$.
For each $0 \leq i \leq t$, set $U_i := R \cap V_i$. Note that each $U_i$ is a
random subset of $V_i$ whose size follows a hypergeometric distribution with
mean $r|V_i|/n \geq C \log n \cdot \min\{\varepsilon, (1 - \varepsilon)/t\}$. By our choice
of $C$, Chernoff's inequality together with the union bound shows that with
probability at least $1 - o(n^{-2})$, all sets $U_i$, $i \in [t]$, satisfy
$|U_i| = (1 \pm \varepsilon) r|V_i|/n$ and similarly $|U_0| \leq (1 + \varepsilon) \varepsilon r$.
Recalling that $(1 - \varepsilon) n/t \leq |V_i| \leq n/t$, this implies in
particular $|U_i| = (1 \pm \varepsilon') r/t$ and $|U_0| \leq \varepsilon' r$.
Consider now a fixed $(\varepsilon, \tilde p)$-regular pair $(V_i, V_j)$ with density
at least $d \tilde p$. By definition, $(V_i, V_j)$ is then also $(\varepsilon/d,
d\tilde p)$-lower-regular. We apply Lemma~\ref{lem:small-subsets-regular} with
$\varepsilon'$, $\delta$, $d \tilde p$ (as $p$), and $|U_i|, |U_j|$ (as $q_1, q_2$)
to get that with probability at least
\[
1 - \delta^{\min{\{q_1, q_2\}}} \geq 1 - \delta^{(1 - \varepsilon') C\log n/t} \geq 1
- o(n^{-2}),
\]
the pair $(U_i, U_j)$ is $(\varepsilon', d\tilde p)$-lower-regular. Indeed, we can
apply the lemma since $|U_i|, |U_j| \geq (1 - \varepsilon')r/t \geq D(d \tilde
p)^{-1}$ due to the assumption of the lemma on $r$ and the fact that $\gamma
p/2 \leq \tilde p$, and $\varepsilon/d < \varepsilon_0$. Therefore, the union bound over at
most $t^2$ pairs $(V_i, V_j)$ shows that with probability at least $1 -
o(n^{-2})$, the following two properties hold:
\begin{enumerate}[(i), font=\itshape]
\item\label{ui-size} $|U_0| \leq \varepsilon' r$ and $(1 - \varepsilon')r/t \leq (1 -
\varepsilon)r |V_i|/n \leq |U_i| \leq (1 + \varepsilon')r/t$, for all $i\in [t]$, and
\item\label{ui-lower-reg} whenever $(V_i, V_j)$ is an $(\varepsilon, \tilde
p)$-regular pair with density $d_{ij} \tilde p \geq d \tilde p$, then
$(U_i, U_j)$ is $(\varepsilon', d_{ij} \tilde p)$-lower-regular.
\end{enumerate}
From these two properties we show the expansion of $G[R]$ deterministically
for all partitions $X \cup Y = R$ where $|X|, |Y| \geq \gamma r/4$. Let us fix
such a partition and assume without loss of generality that $|X| \leq r/2 \leq
|Y|$. As outlined above, we approximate the given partition of $R$ by a
certain partition of the vertices of $V(G) \setminus V_0$, or, to be more
precise, by a partition of the classes $V_1, \dotsc, V_t$. We now describe
precisely how to do this.
Let $\ensuremath{\mathcal I}_X = \{i \in [t] : |U_i \cap X| \geq \gamma |U_i|/10\}$ and
$\ensuremath{\mathcal I}_Y = \{i \in [t] : |U_i \cap Y|\geq \gamma |U_i|/10\}$. Note that
$\ensuremath{\mathcal I}_X \cup \ensuremath{\mathcal I}_Y = [t]$, although $\ensuremath{\mathcal I}_X$ and $\ensuremath{\mathcal I}_Y$ are not necessarily
disjoint. Using $\ref{ui-size}$ we have
\[
|\ensuremath{\mathcal I}_X| \cdot (1 + \varepsilon')r/t \geq \sum_{i \in \ensuremath{\mathcal I}_X} |U_i| \geq |X| -
\sum_{i \notin \ensuremath{\mathcal I}_X} \gamma |U_i|/10 - |U_0| \geq \gamma r/4 - (1 +
\varepsilon')\gamma r/10 - \varepsilon'r
\]
and hence
\[
|\ensuremath{\mathcal I}_X| \geq \frac{\gamma r/4 - (1 + \varepsilon')\gamma r/10 - \varepsilon' r}{(1 +
\varepsilon')r/t} \geq \gamma t/8,
\]
using the fact that $\varepsilon'$ is sufficiently small compared to $\gamma$. In the
same way, using $|Y| \geq r/2$, we also obtain $|\ensuremath{\mathcal I}_Y| \geq t/4$. Since
$\ensuremath{\mathcal I}_X \cup \ensuremath{\mathcal I}_Y = [t]$, it follows that there exists a {\em partition}
$\ensuremath{\mathcal J}_X \cup \ensuremath{\mathcal J}_Y = [t]$ such that $\ensuremath{\mathcal J}_X \subseteq \ensuremath{\mathcal I}_X$ and $\ensuremath{\mathcal J}_Y
\subseteq \ensuremath{\mathcal I}_Y$ and such that $|\ensuremath{\mathcal J}_X| \geq \gamma t/8$ and $|\ensuremath{\mathcal J}_Y| \geq
t/4$.
Next, we say that a pair $(V_i, V_j)$ for $1 \leq i < j \leq t$ is {\em good}
if it is $(\varepsilon, \tilde p)$-regular with density $d_{ij} \tilde p \geq d
\tilde p$, and if $(i, j) \in \ensuremath{\mathcal J}_X \times \ensuremath{\mathcal J}_Y$. From $\ref{ui-lower-reg}$
it follows that if $(V_i, V_j)$ is good, then $(U_i, U_j)$ is $(\varepsilon', d_{ij}
\tilde p)$-lower-regular. The lower-regularity and $|U_i \cap X| \geq \gamma
|U_i|/10 \geq \varepsilon'|U_i|$ and $|U_j \cap Y| \geq \gamma |U_j|/10 \geq
\varepsilon'|U_j|$ in turn imply
\[
e(U_i \cap X, U_j \cap Y) \geq (1-\varepsilon')d_{ij}\tilde p|U_i \cap X||U_j \cap
Y| \geq \frac{\gamma^2}{100} \cdot (1-\varepsilon')d_{ij}\tilde p|U_i||U_j|.
\]
Together with $\ref{ui-size}$, we obtain
\[
d_{ij}\tilde p|U_i||U_j| \geq (1-\varepsilon')^3 d_{ij}\tilde p |V_i||V_j| r^2/n^2
\geq e(V_i, V_j) r^2/(2n^2),
\]
which then implies $e(U_i \cap X, U_j \cap Y) \geq \gamma^2 e(V_i, V_j)
r^2/(200n^2)$. In particular,
\begin{equation}\label{eq:inh-r1r2}
e(X, Y) \geq \sum_{\substack{1 \leq i < j \leq t \\ (V_i, V_j) \text{
good}}} e(U_i \cap X, U_j \cap Y) \geq \frac{\gamma^2 r^2}{200n^2}
\sum_{\substack{1 \leq i < j \leq t \\ (V_i, V_j) \text{ good}}} e(V_i,
V_j).
\end{equation}
To complete the proof, we show that many edges of $G$ go between good pairs
$(V_i, V_j)$. Set $X' = \bigcup_{i \in \ensuremath{\mathcal J}_X} V_i$ and $Y' = \bigcup_{i \in
\ensuremath{\mathcal J}_Y} V_i$. Note that $X' \cup Y' = V(G) \setminus V_0$. Since $|\ensuremath{\mathcal J}_X| \geq
\gamma t/8$ and $|V_i| \geq (1 - \varepsilon)n/t$, we have $|X'| \geq \gamma n/10$,
for $\varepsilon$ small enough. Similarly, $|\ensuremath{\mathcal J}_Y| \geq t/4$ implies $|Y'| \geq
n/5$. From the assumption that $G$ is a $\gamma p$-expander, we obtain
\[
e(X', Y' \cup V_0) \geq \gamma p |X'||Y'| \geq \gamma^2 n^2p/50.
\]
On the other hand, we have
\[
e(V_0, X') \leq p|V_0||X'| + \beta \sqrt{|V_0||X'|} \leq 2\varepsilon n^2p,
\]
where we used $|V_0| \leq \varepsilon n$ and $\beta \leq rp/C \leq np/C \leq \varepsilon np$
(provided $C$ is large enough). From these it follows that
\begin{equation}\label{eq:w1w2}
e(X', Y') \geq e(X', Y' \cup V_0) - 2\varepsilon n^2p \geq \gamma^2 n^2p/100,
\end{equation}
using that $\varepsilon$ is sufficiently small compared to $\gamma$.
Using once more the fact that $G$ is $(p, \beta)$-sparse, we have $e(V_i, V_j)
\leq |V_i||V_j|p + \beta \sqrt{|V_i||V_j|} \leq 2n^2p/t^2$ for all $i, j \in
[t]$. By the definition of an $(\varepsilon, \tilde p)$-regular partition, there are
at most $\varepsilon t^2$ pairs $(V_i, V_j)$ that are not $(\varepsilon, \tilde p)$-regular.
Hence, the number of edges in $G[X', Y']$ which go between non-regular pairs
$(V_i, V_j)$ is at most $2\varepsilon n^2p$. Moreover, the number of edges in $G[X',
Y']$ with one endpoint in each part of a pair $(V_i, V_j)$ with density below
$d \tilde p$ is clearly at most $dn^2\tilde p \leq 4dn^2p$. Lastly, note that
by definition of $X'$ and $Y'$, for every edge $e \in G[X', Y']$, there is
some $(V_i, V_j)$ with $(i, j) \in \ensuremath{\mathcal J}_X \times \ensuremath{\mathcal J}_Y$ such that $e \in G[V_i,
V_j]$. Then \eqref{eq:w1w2} and the definition of a good pair give
\[
\sum_{\substack{1 \leq i < j \leq t \\ (V_i, V_j) \text{ good}}} e(V_i,
V_j) \geq e(X', Y') - 2\varepsilon n^2 p - 4dn^2p \geq \gamma^2
n^2p/200,
\]
using again that $\varepsilon$, $\varepsilon'$, and $d$ are small compared to $\gamma$.
Finally, with \eqref{eq:inh-r1r2}, we get
\[
e(X, Y) \geq \frac{\gamma^2r^2}{200n^2} \sum_{\substack{1 \leq i < j \leq t
\\ (V_i, V_j) \text{ good}}} e(V_i, V_j) \geq \frac{\gamma^4}{200^2}
\cdot r^2p \geq \gamma_1 |X||Y|p,
\]
for $\gamma_1 = \gamma^4/40000$.
\end{proof}
\section{Introduction}
A classical result of Dirac states that every graph with $n \geq 3$ vertices and
minimum degree at least $n/2$ contains a Hamilton cycle, that is, a cycle
passing through all vertices of the graph~\cite{dirac1952some}. There exist a
vast number of extensions of this theorem, most of which state that every graph
satisfying a certain minimum degree condition must have some `global' structure.
For example, Hajnal and Szemer\'{e}di~\cite{hajnal1970proof} proved that, for
every $k \geq 2$, the vertex set of every graph with $n$ vertices and minimum
degree at least $(k-1)n/k$ can be covered by vertex-disjoint copies of $K_k$,
provided that $k$ divides $n$. P\'{o}sa~\cite{erdos1964problem} and
Seymour~\cite{seymour1973problem} conjectured that, more generally, every graph
with $n$ vertices and minimum degree at least $(k-1)n/k$ contains the $(k-1)$-st
power of a Hamilton cycle, that is, a Hamilton cycle in which every pair of
vertices at distance at most $k-1$ is connected by an edge (the case $k=2$ is
Dirac's theorem). This conjecture was first proved for large $n$ by Koml\'{o}s,
S\'{a}rk\"{o}zy, and Szemer\'{e}di~\cite{komlos1998proof}, using the regularity
method, and later for smaller values of $n$ by Levitt, S\'{a}rk\"{o}zy, and
Szemer\'{e}di~\cite{levitt2010avoid} and Chau, DeBiasio, and
Kierstead~\cite{chau2011posa}. A minimum degree condition ensuring the presence
of more general subgraphs, formulated in terms of the chromatic number, is given
by the bandwidth theorem of B\"{o}ttcher, Schacht, and
Taraz~\cite{bottcher2009proof}.
The above results all concern graphs with rather large minimum degrees. In the
case where the minimum degree can be smaller than $\floor{n/2}$, the graph might
not be connected, so one can no longer guarantee any spanning connected
subgraph; similarly, the graph might be bipartite, and one cannot guarantee any
non-bipartite subgraph. However, such graphs may still have some interesting
global properties. The following extension of Dirac's theorem to graphs with
minimum degree below $n/2$ was first conjectured by Enomoto, Kaneko, and
Tuza~\cite{enomoto1987p_3} and proved by Kouider and
Lonc~\cite{kouider1996covering}.
\begin{theorem}[\cite{kouider1996covering}]\label{thm:KL}
Let $k \geq 2$ be an integer and let $G$ be a graph with $n$ vertices and
minimum degree at least $n/k$. Then the vertex set of $G$ can be covered by $k
- 1$ cycles, edges, or vertices.
\end{theorem}
We note that if $n=n(k)$ is large enough, then `cycles, edges, or vertices' can
be replaced by `cycles' (this follows for example from the main result
in~\cite{balogh2017stability}).
There is a trend in modern combinatorics towards proving sparse analogues of
extremal results (see, e.g., Conlon's survey~\cite{conlon2014combinatorial}).
Our main result in this paper is a sparse analogue of Theorem~\ref{thm:KL}. We
use the following natural notion of uniformly sparse graphs, which can be seen
as a one-sided version of Thomason's jumbled graphs~\cite{thomason1987pseudo,
thomason1987random}.
\begin{definition}[$(p, \beta)$-sparse]
A graph $G$ is \emph{$(p, \beta)$-sparse} if for all subsets $X, Y \subseteq
V(G)$,
\[
e_G(X, Y) \leq p|X||Y| + \beta \sqrt{|X||Y|}.
\]
\end{definition}
It is well known that for every $p=p(n)\leq 0.99$, the Erd\H{o}s--R\'{e}nyi
random graph $G_{n,p}$ is $(p,O(\sqrt{np}))$-sparse w.h.p.\footnote{With high
probability, that is, with probability tending to $1$ as $n \to
\infty$.}~\cite{krivelevich2006pseudo}. With this definition, our main result
reads as follows.
\begin{theorem}\label{thm:main-theorem}
For every integer $k \geq 2$ and every $\alpha > 0$, there exists a positive
$\eta(\alpha, k)$ such that the following holds for all sufficiently large
$n$, all $p \in (0, 1]$, and all $\beta \leq\eta np/\log^3 n$. Let $G$ be a
$(p,\beta)$-sparse graph with $n$ vertices and minimum degree at least $(1/k +
\alpha)np$. Then the vertex set of $G$ can be covered by $k-1$ cycles.
\end{theorem}
The minimum degree $(1/k+\alpha)np$ cannot be much improved. Indeed, assume
$\log^6 n/n \ll p \ll 1$ and let $G$ be a graph consisting of $k$
vertex-disjoint copies of $G_{n/k,p}$. Then w.h.p.\ $G$ has minimum degree at
least $(1/k-o(1))np$ and is $(p,\beta)$-sparse with $\beta = O(\sqrt{np}) =
o(np/\log^3 n)$, but cannot be covered by $k-1$ cycles. A very similar
construction shows that the upper bound on $\beta$ in our result is optimal up
to the logarithmic factors. To see this, take any $\log n/n \ll p \ll 1$ and
consider the random graph $G$ given by the union of $k$ vertex-disjoint copies
of $G_{n/k, q}$, where $q = (1 + 2k\alpha)p$. Then $G$ cannot be covered by $k -
1$ cycles but, at the same time, w.h.p.\ it has minimum degree at least $(1/k -
o(1))nq \geq (1/k + \alpha)np$ and is $(q, O(\sqrt{nq}))$-sparse. The latter
means that for all $X,Y\subseteq V(G)$,
\[
e(X,Y) \leq q|X||Y| + O(\sqrt{n q|X||Y|}) \leq p|X||Y| + O(np\sqrt{|X||Y|}),
\]
using $p|X||Y|\leq np\sqrt{|X||Y|}$, so $G$ is in fact $(p,O(np))$-sparse.
Our main motivation for studying the problem from
Theorem~\ref{thm:main-theorem} is the connection to the local resilience of
sparse random and pseudorandom graphs. The systematic study of this notion was
initiated by Sudakov and Vu~\cite{sudakov2008local} and, since then, the topic
has garnered considerable attention (see, e.g.,~\cite{allen2020bandwidth,
balogh2011local, balogh2012corradi, lee2012dirac, montgomery2020hamiltonicity,
vskoric2018local} and the surveys~\cite{bottcher2017large,
sudakov2017robustness}).
\begin{definition}[Local resilience]
Let $\ensuremath{\mathcal P}$ be a monotone\footnote{A graph property $\ensuremath{\mathcal P}$ is monotone if it is
preserved under adding edges.} graph property and let $G$ be a graph in $\ensuremath{\mathcal P}$.
The \emph{local resilience of $G$ with respect to $\mathcal P$} is defined as
the maximum $r\in [0, 1]$ such that, for every $H\subseteq G$ satisfying
$\deg_H (v) < r \deg_G(v)$ for all $v\in V(G)$, we have $G - H \in \ensuremath{\mathcal P}$.
\end{definition}
For example, Theorem~\ref{thm:KL} implies that the local resilience of $K_n$
with respect to having a vertex-cover by $k-1$ cycles is at least $(k-1)/k -
o(1)$, which is easily seen to be optimal (consider a disjoint union of $k$
cliques of size $n/k$). Since $G_{n, p}$ is w.h.p.\ $(p, O(\sqrt{np}))$-sparse and
has degrees concentrated around $np$, Theorem~\ref{thm:main-theorem} has the
following consequence for the local resilience of random graphs.
\begin{theorem}\label{thm:main-theorem-gnp}
Let $k\geq 2$ be an integer and let $p = p(n)$ be such that $p\gg \log^6 n/n$.
Then the local resilience of $G_{n, p}$ with respect to having a vertex-cover by
$k-1$ cycles is w.h.p.\ $(k-1)/k \pm o(1)$.
\end{theorem}
Indeed, it is not difficult to see that $(k-1)/k+o(1)$ is an upper bound
(consider a random partition of the vertex set into $k$ parts). Note that if $p
\ll \log n/n$, then w.h.p.\ $G_{n, p}$ has an unbounded number of connected
components and thus no vertex-cover by $k-1$ cycles. We believe that with a bit
more care, one could improve the lower bound on $p$ in the above theorem to
$p\gg \log^4 n/n$, using essentially the same proof strategy. However, there is
a natural barrier in our method that prevents us from going down all the way to
$\log n/n$ (which is the threshold for having a vertex-cover by a constant
number of cycles). Doing so would require new ideas and techniques.
Note also that the case $k = 2$ in Theorem~\ref{thm:main-theorem-gnp}
corresponds to the problem of determining the local resilience of $G_{n, p}$ with
respect to Hamiltonicity. This question was resolved independently by
Montgomery~\cite{montgomery2019hamiltonicity} and by Nenadov, Steger, and the
third author~\cite{nenadov2019resilience}, who showed that the local resilience
of $G_{n, p}$ with respect to Hamiltonicity is w.h.p.\ $1/2 \pm o(1)$ whenever $p =
(\log n + \log\log n + \omega(1))/n$.
Our last result concerns pseudorandom graphs. An $(n,d,\lambda)$-graph is a
$d$-regular graph with $n$ vertices for which all eigenvalues of the adjacency
matrix, with the exception of the largest one, are bounded in absolute value by
$\lambda$. It follows from the expander mixing lemma that every $(n, d,
\lambda)$-graph is $(d/n, \lambda)$-sparse. Thus, Theorem~\ref{thm:main-theorem}
has the following consequence.
\begin{theorem}\label{thm:main-theorem-ndlambda}
For every integer $k \geq 2$ and every $\alpha > 0$, there exists a positive
$\eta(\alpha, k)$ such that the following holds for all sufficiently large
$n$. Let $G$ be an $(n,d,\lambda)$-graph with $\lambda \leq \eta d/\log^3 n$.
Then the local resilience of $G$ with respect to having a vertex-cover by $k -
1$ cycles is $(k - 1)/k \pm \alpha$.
\end{theorem}
Again, it is not difficult to see that $(k-1)/k+o(1)$ is an upper bound.
Theorem~\ref{thm:main-theorem-ndlambda} generalises a result of Sudakov and
Vu~\cite{sudakov2008local} stating that every $(n, d, \lambda)$-graph $G$ with
$\lambda \leq d/\log^2 n$ has local resilience at least $1/2-o(1)$ with respect
to Hamiltonicity.
Theorems~\ref{thm:main-theorem-gnp} and \ref{thm:main-theorem-ndlambda} are, to
the best of our knowledge, the first positive results on the local resilience of
(pseudo)random graphs where the local resilience is significantly larger than
$1/2$. We believe the methods introduced in this paper could be used to tackle
different problems allowing such high resilience.
\subsection{Methods and techniques}
The proof of Theorem~\ref{thm:main-theorem} combines several techniques for
embedding large structures into (pseudo)random graphs and their subgraphs. The
focal point is the {\em absorbing method}, first introduced under this name by
R\"{o}dl, Rucinski, and Szemer\'{e}di~\cite{rodl2006dirac}, but already used
implicitly in earlier works of Erd\H{o}s, Gy\'{a}rf\'{a}s, and
Pyber~\cite{erdHos1991vertex} and Krivelevich~\cite{krivelevich1997triangle}.
We now give a simplified account of how this method approaches the problem of
embedding a spanning graph $S$ (in our case, a spanning union of $k-1$ cycles)
into a graph $G$. First, one reserves a small subset $R$ of the vertex set of
$G$, called the \emph{reservoir}; frequently, this is simply a uniformly random
subset of small size. Then one embeds a certain highly structured graph $A$,
called the \emph{absorber}, into $G[V(G) \setminus R]$, such that the following
holds: suppose that we embed a fixed subgraph $S'\subseteq S$ into $G[V(G)
\setminus V(A)]$ in such a way that all vertices outside of $R\cup V(A)$ are
covered; then there exists a completion of the embedding of $S'$ to an embedding
of $S$. We remark that, usually, it is very difficult to control which vertices
of $R$ are used by the embedding of $S'$, so this property relies on a careful
choice of $A$ (we think of $A$ as `absorbing' the vertices in $R$ not covered by
the embedding of $S'$). In this way, the problem of embedding the spanning graph
$S$ is reduced to the (often easier) problem of embedding the non-spanning graph
$S'$. A method similar to the one described in this paragraph has been
successfully applied to numerous problems in (random) graph theory (see,
e.g.,~\cite{ferber2017robust, georgakopoulos2018spanning,
glock2021decompositions, kwan2020almost, levitt2010avoid,
montgomery2019spanning, nenadov2019powers}).
The first step towards carrying out the above approach is to figure out the
structure of the absorber. In our case, one of the main issues to overcome is
that the graph $G$ might be bipartite, which means that, in order to have any
hope of embedding the absorber into $G[V(G) \setminus R]$, the absorber must be
bipartite as well. This creates several technical challenges. Most importantly,
we cannot use the common approach of building an absorber by stringing together
many single-vertex absorbers, as absorbing only a single vertex may upset the
delicate balance between the two parts of the bipartition; rather, we need to be
able to absorb two vertices at a time (one from each part of the bipartition).
This is done using a variation of a trick of
Montgomery~\cite{montgomery2019spanning}. A consequence of this approach is that
our absorber is only able to absorb subsets of $R$ that contain the same number
of vertices from each part of the (hypothetical) bipartition.
Even after deciding upon a suitable structure for the absorber, we still face
the issue of actually embedding this structure into $G[V(G) \setminus R]$. In
the context of sparse pseudorandom graphs, this is usually done by exploiting
the expansion properties of the graph to show that one can connect prescribed
pairs of vertices or edges by disjoint copies of a given fixed graph $F$ (for
example, a path of length $\log n$). This statement is usually referred to as
the \emph{Connecting Lemma}, though its precise formulation depends on the
nature of $S$. The actual absorber is then embedded by multiple uses of this
lemma. A difficulty that arises in our setting (and that is not a problem when
the minimum degree is at least $(1/2+o(1))np$) is that the graph $G$ that we are
dealing with might not be a very good expander at all---in fact, if $k$ is
large, then $G$ might have a large number of connected components, implying that
there are fairly small sets (of size roughly $n/k$) that do not expand at all.
An important step in our proof is to show that $G$ can nevertheless be
partitioned into at most $k - 1$ subgraphs, each having strong enough expansion
properties to embed an absorber, all without sacrificing much of the minimum
degree. We refer to this as the \emph{Partitioning Lemma}.
In order to complete the proof, we cover each of these expanding subgraphs by
systems of disjoint paths, leaving uncovered only vertices in the reservoir and
the absorber. This is done using a standard application of the sparse regularity
lemma in conjunction with the recent bootstrapping argument of Nenadov and the
second author~\cite{nenadov2020komlos}. Finally, the absorber is used to connect
each of these systems of disjoint paths into a cycle (and picking up the
uncovered vertices in the reservoir along the way).
We believe that some of these techniques are likely to be of use for other,
related problems concerning the structure of uniformly sparse graphs satisfying
a minimum degree condition. In particular, the Partitioning Lemma and Connecting
Lemma that we prove are quite general statements that are largely unrelated to
the concrete problem that is solved in this paper.
\subsection{Organisation of the paper}
The paper is structured as follows. In Section~\ref{sec:preliminaries} we
introduce the notion of expander graphs and state several useful properties of
such graphs. Furthermore, we mention some of the more standard tools we use,
namely Haxell's criterion for matchings in hypergraphs and Szemer\'{e}di's
regularity lemma for sparse graphs and related concepts. In
Section~\ref{sec:the-main-proof} we reduce Theorem~\ref{thm:main-theorem} to a
version in which one can assume that the graph is an expander graph. We also
state all the necessary `big gun' lemmas and
subsequently give a proof of Theorem~\ref{thm:main-expander} modulo those
lemmas. Each of the following
Sections~\ref{sec:inheritance}--\ref{sec:embedding-lemma} is fully dedicated to
the proof of one of these lemmas.
\subsection{Notation}
We use standard graph theoretic notation. In particular, given a graph $G$,
$V(G)$ and $E(G)$ denote the sets of vertices and edges of $G$, respectively. We
write $v(G) = |V(G)|$ and $e(G) = |E(G)|$. For a subset $X \subseteq V(G)$, we
denote by $G[X]$ the subgraph induced by the vertex set $X$. For two (not
necessarily disjoint) subsets $X, Y \subseteq V(G)$, we write $e_G(X, Y)$ for
the number of pairs in $X \times Y$ that form an edge, and $e_G(X)$ for the
number of edges in $X$. Note that $e_G(X, X) = 2e_G(X)$. Furthermore, we denote
by $N_G(X, Y)$ the set of all neighbours in $Y$ of vertices from $X$. We
abbreviate $N_G(\{x\}, Y)$ to $N_G(x, Y)$ and define $\deg_G(x, Y) = |N_G(x,
Y)|$ and $\deg_G(x) = \deg_G(x, V(G))$. We say that a path $P$ \emph{connects}
two vertices $x, y$ if $x$ and $y$ are its endpoints, and say that $P$ is an
$xy$-path. The length of a path is defined as the number of edges in it. For
$\ell \in \N$, we denote by $N_G^{\ell}(X, Y)$ the set of vertices $y \in Y$ for
which there exists a path $P$ of length at most $\ell$ connecting $y$ to some $x
\in X$ and whose internal vertices are in $Y$; in particular $N_G^1(X, Y) =
N_G(X, Y)$. Again, we abbreviate $N_G^\ell(\{x\},Y) = N_G^\ell(x,Y)$. If $X$ and
$Y$ are disjoint, then $G[X,Y]$ is the induced bipartite subgraph with parts $X$
and $Y$, and the {\em density} of the pair $(X, Y)$ is $d_G(X, Y) = e_G(X,
Y)/(|X||Y|)$. In all of the above notations, we may omit the subscript $G$ when
it is clear which graph we are talking about.
For an integer $n$ we write $[n] = \{1, \dotsc, n\}$ and for $a, b, \varepsilon \in
\R$, we write $a \in (1 \pm \varepsilon)b$ to denote $(1 - \varepsilon)b \leq a \leq (1 +
\varepsilon)b$. We use the standard asymptotic notation $o, O, \omega$, and $\Omega$.
The logarithm function is always used with the natural base $e$. We suppress
floors and ceilings whenever they are not crucial. Finally, we use the
convention that if the statement of (say) Lemma 3.6 features a value named $C$,
then we may elsewhere write $C_{3.6}$ to denote this value.
\section{Proof of Theorem~\ref{thm:main-theorem}}\label{sec:the-main-proof}
The proof of Theorem~\ref{thm:main-theorem} involves several different
ingredients. In this section, we gather the necessary definitions and key
lemmas, whose proofs we postpone to the subsequent sections. We then show how
these lemmas can be used to deduce our main result.
Recall that our goal is to show that if $G$ is a $(p, \beta)$-sparse graph with $n$
vertices, minimum degree at least $(1/k + \alpha)np$, and $\beta \leq \eta
np/\log^3 n$, then the vertex set of $G$ can be covered by $k - 1$ cycles. It
turns out that if $G$ is an expander, then it has all kinds of good connectivity
properties that make it easier to cover it by cycles. However, it is not hard to
see that if $k > 2$, then $G$ need not even be connected; in particular, it need
not be a good expander.
The first step of the proof deals with this problem. For a graph $G$ and $\xi
\geq 0$, let us write $\delta_\xi(G)$ for the maximal integer $d$ such that all
but at most $\xi v(G)$ vertices $v$ of $G$ satisfy $\deg_G(v) \geq d$. Note that
$\delta_0(G)$ is simply the minimum degree $\delta(G)$ of $G$. On the other
hand, $\delta_\xi(G)$ for a small constant $\xi > 0$ is a sort of `essential
minimum degree' of $G$ possessed by a $(1-\xi)$-fraction of the vertices.
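A toy example (ours, purely for illustration): take $G$ to be a clique on $(1 - \xi)n$ vertices together with $\xi n$ pendant vertices, each attached to the clique by a single edge.

```latex
% Illustration only: clique on (1 - \xi)n vertices plus \xi n pendants.
\[
  \delta(G) = \delta_0(G) = 1,
  \qquad\text{whereas}\qquad
  \delta_\xi(G) \geq (1 - \xi)n - 1,
\]
% since every vertex outside the set of \xi n pendants has degree at
% least (1 - \xi)n - 1.
```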
\begin{restatable}[Partitioning Lemma]{lemma}{partitioning}
\label{lem:partitioning-lemma}
For all $c, \alpha, \xi \in (0, 1)$, there exist positive $\gamma(\alpha, \xi,
c)$ and $\eta(\alpha, \xi, c)$ such that the following holds for all
sufficiently large $n$. Let $p \in (0, 1)$ and $\beta \leq \eta np$, and let
$G$ be a $(p, \beta)$-sparse graph on $n$ vertices with minimum degree at
least $(c + \alpha)np$. Then there exists a partition $V(G) = V_1 \cup \dotsb
\cup V_\ell$ into $1 \leq \ell < 1/c$ parts such that, for every $i \in
[\ell]$,
\begin{enumerate}[(i)]
\item $G[V_i]$ is a $\gamma p$-expander,
\item $\delta(G[V_i]) \geq c^2 np$,
\item $\delta_\xi(G[V_i]) \geq (c + \alpha - \xi)np$.
\end{enumerate}
\end{restatable}
The proof of this lemma is given in Section \ref{sec:partitioning-lemma}.
Informally, it says that if $G$ has minimum degree slightly above $cnp$, then it
can be partitioned into fewer than $1/c$ good expanders, each of which
\emph{essentially} still has the same minimum degree as $G$ (and each of which
has minimum degree at least $c^2np$). This means that from now on, instead of
working with an arbitrary $(p, \beta)$-sparse graph with minimum degree
$\delta(G) \geq (1/k + \alpha)np$, we can work with a $(p, \beta)$-sparse
\emph{expander graph} satisfying $\delta_\xi (G)\geq (1/k + \alpha - \xi)np$,
where $\xi$ is an arbitrarily small positive constant. Luckily for us, the fact
that we go from a graph with $\delta(G) \geq (c + \alpha)np$ to a graph with
$\delta_\xi(G) \geq (c + \alpha - \xi)np$ does not pose any insurmountable
difficulty. Theorem~\ref{thm:main-theorem} now follows easily from the following
`robust' version specialised to expander graphs.
\begin{theorem}\label{thm:main-expander}
For every integer $k \geq 2$ and all $\alpha, \gamma \in (0, 1)$, there exists
a positive $\eta(\alpha, \gamma, k)$, such that the following holds for all
sufficiently large $n$. Let $p \in (0, 1]$ and $\beta \leq \eta np/\log^3 n$.
Then every $(p, \beta)$-sparse $\gamma p$-expander $G$ on $n$ vertices
satisfying
\[
\delta(G) \geq 2\alpha np \quad \text{and} \quad \delta_{\alpha/128}(G) \geq
(1/k + \alpha)np
\]
has a vertex cover by $k - 1$ cycles.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:main-theorem}]
Without loss of generality we may assume that $\alpha > 0$ is small enough, in
particular smaller than $1/(2k^2)$. Let $\xi = \alpha/129$, $\alpha' = \alpha
- \xi$, $\gamma = \gamma_{\ref{lem:partitioning-lemma}}(\alpha, \xi, 1/k)$,
$\eta' = \min_{2 \leq i \leq k}{\{ \eta_{\ref{thm:main-expander}}(\alpha',
\gamma, i) \}}$, and $\eta = \min{\{ \alpha\eta'/2,
\eta_{\ref{lem:partitioning-lemma}}(\alpha, \xi, 1/k) \}}$. Let $G = (V, E)$
be a $(p, \beta)$-sparse graph with $n$ vertices, minimum degree at least
$(1/k + \alpha)np$, and $\beta \leq \eta np/\log^3 n$. We apply
Lemma~\ref{lem:partitioning-lemma} with $1/k$ (as $c$) to obtain some $1 \leq
\ell < k$ and a partition $V = V_1 \cup \dotsb \cup V_\ell$ such that, for
every $i \in [\ell]$,
\begin{enumerate}[(i), font=\itshape]
\item $G[V_i]$ is a $\gamma p$-expander,
\item\label{s1} $\delta(G[V_i]) \geq np/k^2 \geq 2\alpha np$,
\item $\delta_{\xi}(G[V_i]) \geq (1/k + \alpha - \xi)np$.
\end{enumerate}
Let $n_i := |V_i|$ and $k_i := \ceil{k \cdot n_i/n}$, for every $i \in
[\ell]$. Then, by our choice of constants,
\[
\delta_{\alpha'/128}(G[V_i]) = \delta_{\xi}(G[V_i]) \geq (1/k + \alpha')np
\geq (1/k_i + \alpha')n_ip.
\]
Next, $\ref{s1}$ and the fact that $G$ is
$(p, \beta)$-sparse imply that
\[
\alpha |V_i|np \leq e(V_i, V_i) \leq |V_i|^2p + \beta|V_i| = |V_i|(|V_i|p +
\beta),
\]
from which we deduce, using $\beta \leq \eta np/\log^3 n$ for large enough
$n$, that $n_i \geq (\alpha/2)n$. In particular, we have $\beta \leq \eta
np/\log^3 n \leq \eta'n_ip/\log^3 n_i$, by our choice of $\eta$. Since
$G[V_i]$ is a subgraph of $G$, it is clearly also $(p, \beta)$-sparse.
It now follows from Theorem~\ref{thm:main-expander} applied with $\alpha'$ (as
$\alpha$) that each graph $G[V_i]$ can be covered by $k_i - 1$ cycles. Since
\[
\sum_{i = 1}^{\ell} (k_i - 1) = \sum_{i = 1}^{\ell} \bigl(\ceil{k \cdot n_i/n} - 1\bigr)
< \sum_{i = 1}^{\ell} k \cdot n_i/n = k,
\]
this shows that also $G$ can be covered by $k - 1$ cycles, as required.
\end{proof}
It thus remains to prove Theorem~\ref{thm:main-expander}. As one might expect,
the proof makes heavy use of the fact that the graph $G$ is a $\gamma
p$-expander. The main ingredient is the Connecting Lemma which allows us to
connect many given pairs of vertices in $G$ using short disjoint paths. In order
to make this precise, we start with the following definition.
\begin{definition}[$(M, W, \ell)$-matching]
Let $G$ be a graph and let $W \subseteq V(G)$ be a subset of the vertices. Let
$M$ be a multigraph with $V(M) \subseteq V(G) \setminus W$. Then an \emph{$(M,
W, \ell)$-matching} in $G$ is a collection $\{P_e : e \in E(M)\}$ of
internally vertex-disjoint paths in $G$ where for every edge $e = \{u, v\} \in
E(M)$, the path $P_e$ is a $uv$-path of length at most $\ell$ whose internal
vertices all lie in the set $W$.
\end{definition}
The multigraph $M$ can be thought of as prescribing which pairs of vertices of
$G$ ought to be connected by how many paths; an $(M, W, \ell)$-matching is then
a system of internally vertex-disjoint paths of length at most $\ell$ connecting
the prescribed vertices in the prescribed manner, such that the internal
vertices of the paths all lie in the set $W$. The proof of the Connecting Lemma
is deferred to Section~\ref{sec:connecting-lemma}.
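As a small illustration (not from the paper): suppose $M$ consists of the single pair $\{u, v\}$ taken with multiplicity two. An $(M, W, \ell)$-matching is then a pair of internally vertex-disjoint $uv$-paths.

```latex
% Illustration only: E(M) = {e_1, e_2}, both copies of {u, v}.
\[
  \{P_{e_1}, P_{e_2}\}, \quad
  P_{e_1}, P_{e_2} \text{ internally vertex-disjoint } uv\text{-paths of
  length} \leq \ell \text{ with internal vertices in } W;
\]
% if both paths have length at least two, their union is a cycle
% through u and v.
```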
\begin{restatable}[Connecting Lemma]{lemma}{connecting}
\label{lem:connecting-lemma}
For every $\gamma \in (0, 1)$ and $\Delta > 0$, there exists a positive
$C(\gamma, \Delta)$ such that the following holds for all sufficiently large
$n$. Let $p \in (0, 1)$ and $\beta > 0$, let $G$ be a $(p, \beta)$-sparse
graph on $n$ vertices, and let $U, W \subseteq V(G)$ be disjoint subsets such
that:
\begin{enumerate}[(i)]
\item\label{cl-W-exp} $G[W]$ is a $\gamma p$-expander,
\item\label{cl-W-size} $|W| \geq C\beta\sqrt{\log n}/p$, and
\item\label{cl-U-deg} every vertex $u \in U$ satisfies $\deg(u, W) \geq
\gamma |W|p$.
\end{enumerate}
Let $\ell = \ceil{30\log n/(\gamma\log\log n)}$. Then for every multigraph $M$
with $V(M) \subseteq U$, at most $|W|/(C\ell)$ edges, and maximum degree at
most $\Delta$, there exists an $(M, W, \ell)$-matching in $G$.
\end{restatable}
One important technical property that we make use of is that most subsets of a
good expander that are not too small again induce good expanders, albeit with a
slightly weaker parameter.
\begin{restatable}[Inheritance Lemma]{lemma}{inheritance}
\label{lem:inheritance-lemma}
For every $\gamma \in (0, 1)$, there exist positive $\gamma_1(\gamma)$ and
$C(\gamma)$ such that the following holds for all sufficiently large $n$. Let
$p \in (0, 1)$ and $\beta > 0$, let $G$ be a $(p, \beta)$-sparse $\gamma
p$-expander on $n$ vertices, and let $r$ be a positive integer such that $rp
\geq C \cdot \max{\{ \log n, \beta \}}$. Then the number of subsets $R
\subseteq V(G)$ of size $r$ for which $G[R]$ is a $\gamma_1p$-expander is at
least $(1 - n^{-1}) \binom{n}{r}$.
\end{restatable}
The proof of the lemma is based on the sparse regularity lemma. We postpone it
to Section~\ref{sec:inheritance}. Here we just remark that the proof gives a
dependence in the order of $\gamma_1 = \Theta(\gamma^4)$, but we have no reason
to believe that this is optimal. An interesting question, unrelated to the topic
of this paper, is whether it is possible to achieve a dependence of the form
$\gamma_1 = \Omega(\gamma)$.
The proof of Theorem~\ref{thm:main-expander} relies on the {\em absorbing
method}. The main idea of this approach is to embed a small auxiliary structure
(an \emph{absorber}) into the graph $G$ that allows us to reduce the problem of
covering $G$ by $k - 1$ cycles to a simpler problem. In our case, this simpler
problem is to show that there exist subgraphs $P_1, \dotsc, P_{k-1} \subseteq G$
covering the vertices of $G$, where each $P_i$ is a vertex-disjoint union of
$O(n/\log^3 n)$ paths. This simpler problem can then be solved using a method
based on the sparse regularity lemma.
\begin{restatable}[$(X, Y)$-absorber]{definition}{absorber}
\label{def:absorber}
Let $X$ and $Y$ be disjoint sets of vertices. An \emph{$(X, Y)$-absorber} is a
graph $H$ with two designated vertices $a$ and $b$ (called the
\emph{endpoints} of the absorber) such that $X \cup Y \subseteq V(H) \setminus
\{a, b\}$ and such that, for all subsets $X' \subseteq X$ and $Y' \subseteq Y$
with $|X'| = |Y'|$, $H$ contains an $ab$-path $P$ with $V(P) = V(H) \setminus
(X' \cup Y')$ (i.e., an $ab$-path using all vertices of $H$ with the exception
of exactly the vertices in $X' \cup Y'$).
\end{restatable}
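Two extreme cases of the definition may help to digest it (our remark, not needed later):

```latex
% Two extreme instances of Definition~\ref{def:absorber}:
\[
  X' = Y' = \emptyset
  \;\Longrightarrow\;
  \exists\, ab\text{-path } P \text{ with } V(P) = V(H),
\]
% so an (X, Y)-absorber always contains a Hamiltonian ab-path, and
\[
  X' = X,\ Y' = Y \ (\text{possible when } |X| = |Y|)
  \;\Longrightarrow\;
  \exists\, ab\text{-path } P \text{ with } V(P) = V(H) \setminus (X \cup Y).
\]
```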
With this definition we can state the Absorbing Lemma whose proof is presented
in Section~\ref{sec:absorbers}.
\begin{restatable}[Absorbing Lemma]{lemma}{absorbing}\label{lem:absorbing-lemma}
For every $\gamma\in (0, 1)$, there exists a positive $C(\gamma)$ such that
the following holds for all sufficiently large $n$. Let $p \in (0, 1)$ and
$\beta > 0$, let $G = (A, B, E)$ be a bipartite $(p, \beta)$-sparse graph on
$n$ vertices, and let $U, W \subseteq V(G)$ be disjoint subsets such that:
\begin{enumerate}[(i)]
\item\label{abs-W-exp} $G[W]$ is a $\gamma p$-expander,
\item\label{abs-W-size} $|W| \geq C \cdot \max{\{\log n/p, \beta \sqrt{\log
n}/p, |U|\log^2n\}}$, and
\item\label{abs-U-deg} $|U| \geq 2$ and every vertex $u \in U$ satisfies
$\deg (u, W) \geq \gamma |W|p$.
\end{enumerate}
Then $G[U \cup W]$ contains a $(U \cap A, U \cap B)$-absorber with one
endpoint in $W \cap A$ and another in $W \cap B$.
\end{restatable}
Often, when dealing with embedding problems in sparse (pseudorandom) graphs, it
is not too difficult to embed a desired structure that covers all but
$\varepsilon n$ vertices of the graph. The next lemma reduces this leftover
significantly and, what is more, requires only that the majority of vertices
have the `correct' degree for the structure to be embedded. The proof relies on a
Nenadov and the second author and is presented in
Section~\ref{sec:embedding-lemma}.
\begin{restatable}[Embedding Lemma]{lemma}{embedding}\label{lem:embedding-lemma}
For every integer $k \geq 2$ and every $\alpha > 0$, there exists a positive
$\eta(\alpha, k)$ such that the following holds for all sufficiently large
$n$. Let $p \in (0, 1)$ and $\beta \leq \eta np$, and let $G$ be a $(p,
\beta)$-sparse graph on $n$ vertices such that
\[
\delta(G) \geq 2\alpha np \quad \text{and} \quad \delta_{\alpha/32}(G) \geq
(1/k + \alpha) np.
\]
Then the vertices of $G$ can be covered by $k - 1$ path forests $P_1, \dotsc,
P_{k-1}$ that contain at most $\max{\{ \beta/p, \log n/p \}}$ paths each.
\end{restatable}
With all the `big guns' at hand we are ready to prove
Theorem~\ref{thm:main-expander}.
\subsection{Overview}
The proof follows a standard strategy relying on the absorbing method: (1)
choose a partition $V(G) = V' \cup U \cup W$ into sets of appropriate size
uniformly at random; (2) use the Absorbing Lemma to find an absorber $H$ in $G[U
\cup W]$; (3) use the Embedding Lemma to cover the vertices of $G[V' \cup W]$
not in $H$ by $k-1$ path forests each consisting of at most $n/\log^3 n$ paths;
(4) for every forest, use the Connecting Lemma over $G[U]$ to combine all its
paths into a cycle; and (5) use the absorbing property of $H$ to take in the
unused vertices of $G[U]$. One of the main issues we need to deal with while
executing this strategy is that for $k \geq 3$ the graph $G$ can be bipartite.
This introduces several complications.
First of all, in order to find an absorber $H$, the underlying graph $G[U \cup
W]$ needs to be {\em bipartite} and $G[W]$ a {\em $\gamma p$-expander}; even
more, for the Connecting Lemma we need $G[U]$ to be an expander as well. In case
$k = 2$ this is circumvented by choosing a random equipartition of both $U = U_A
\cup U_B$ and $W = W_A \cup W_B$, and taking $F$ to be the bipartite graph between
corresponding colour classes. As the (essential) minimum degree of $F[U]$ is in
this case around $(1/2+o(1))|U|p/2$, a simple calculation shows that $F[U]$ is a
$\gamma_1p$-expander, for a suitable choice of $\gamma_1$. The same argument
applies for $F[W]$. The second issue lies in the fact that, when using the
Connecting Lemma across $F[U]$, we need to use exactly the same number of
vertices of both $U_A$ and $U_B$ in order for the `absorbing property' of $H$ to
apply. In case $k = 2$ this is not a problem, as all vertices $v \in V(G)$ have
a significant portion of their degree into {\em both} $U_A$ and $U_B$ in $G$.
One can then ensure that the vertices of $U_A$ and $U_B$ are used in a balanced
manner.
More serious problems start to occur when $k \geq 3$. For a start, the above
strategy of choosing a random equipartition of $U$ and $W$ simply does not work
as the obtained bipartite graph is not necessarily an expander. Before
partitioning $V(G)$ we first apply
Lemma~\ref{lem:expander-to-bipartite-expander} to the whole graph $G$, to find a
spanning bipartite $(\gamma p/2)$-expander $F \subseteq G$ on colour classes $A$
and $B$. Next, a random partition is chosen into sets $V' \cup U \cup W$ as
above, and the Inheritance Lemma (Lemma~\ref{lem:inheritance-lemma}) implies
that $F[U]$ and $F[W]$ are w.h.p.\ both $\gamma_1 p$-expanders. However, in this
case we cannot guarantee that there is even a single vertex with a significant
portion of its degree into both $U \cap A$ and $U \cap B$---recall, $G$ may be
bipartite itself and $F = G$ potentially! Hence, if in step (4) from above the
Connecting Lemma is applied over $F[U \cap A, U \cap B]$ blindly, it may result
in an imbalance in the number of vertices used, as we cannot control which of
the vertices (the ones given as $V(M)$ to Lemma~\ref{lem:connecting-lemma}) have
neighbours in $U \cap A$ and which in $U \cap B$. It may be that all
$(k-1)n/\log^3 n$ pairs of vertices we are trying to connect have neighbours
only in, say, $U \cap A$, resulting in $(k-1)n/\log^3 n$ more vertices of $U
\cap A$ being used than those of $U \cap B$, since all paths in $F[U]$ with both
endpoints in $U \cap A$ are of even length. Luckily, as $k \geq 3$, we can reuse
some vertices across the $k - 1$ cycles we are trying to find, which gives us
additional flexibility.
The plan is to find sets $Q_A$ and $Q_B$, in $V(G) \setminus (U \cup W)$, each
of size roughly $kn/\log^3 n$, such that all vertices of $Q_A$ in $G$ have a
large neighbourhood in $U \cap A$ and all vertices of $Q_B$ have a large
neighbourhood in $U \cap B$. The Connecting Lemma is then used to connect all
these vertices into a single path $P_Q$, before covering the remainder $V'
\setminus V(P_Q)$ by $k-1$ path forests, each containing at most $n/\log^3 n$
paths. We connect $P_Q$ with the first path forest through $F[U]$ to get a cycle
$C_1$ and then, while connecting the remaining forests into cycles $C_2, \dotsc,
C_{k-1}$, we can \emph{reuse} the vertices of $P_Q$ by adding a carefully chosen
subset of them to the pairs in $V(M)$ we aim to connect by the Connecting Lemma.
This can be done so that exactly the same number of vertices of $U \cap A$ and
$U \cap B$ is used, allowing us to use the `absorbing property' of $H$ to
finally complete the embedding.
\begin{proof}[Proof of Theorem~\ref{thm:main-expander}]
Without loss of generality we may assume $0 < p \leq 1/2$. Given $k$, $\alpha$,
and $\gamma$, let $\gamma_1 = \min{\{\alpha/1024, \gamma/1024,
\gamma_{1_{\ref{lem:inheritance-lemma}}}(\gamma/2)\}}$, $\varepsilon =
\min{\{\alpha/1024, \gamma_1/64\}}$,
\[
C = \max{\{
30/\varepsilon^2,
C_{\ref{lem:connecting-lemma}}(\gamma_1, 2),
C_{\ref{lem:connecting-lemma}}(\gamma_1^2/8, 2),
C_{\ref{lem:inheritance-lemma}}(\gamma/2),
C_{\ref{lem:absorbing-lemma}}(\gamma_1/2)
\}},
\]
and $\eta = \varepsilon/C^2$. Suppose that $G = (V, E)$ is a $(p, \beta)$-sparse
graph with minimum degree at least $2\alpha np$ and $\delta_{\alpha/128}(G)
\geq (1/k + \alpha)np$. Note that, since $\beta \leq \eta np/\log^3 n$ and
$\delta(G) \geq 2\alpha np$, the definition of $(p,\beta)$-sparse graphs
applied to a vertex $v$ and its neighbourhood $N_G(v)$ implies $p \geq
\eta^{-1}\log^4 n/n$, with room to spare.
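The claimed bound on $p$ can be unpacked as follows (a sketch, assuming that $(p, \beta)$-sparseness gives $e_G(X, Y) \leq p|X||Y| + \beta\sqrt{|X||Y|}$ for all vertex subsets $X$ and $Y$):

```latex
% Sketch. Apply sparseness to X = {v}, Y = N_G(v), and use p <= 1/2:
\[
  \deg(v) = e_G(\{v\}, N_G(v))
  \leq p\deg(v) + \beta\sqrt{\deg(v)}
  \leq \tfrac12\deg(v) + \beta\sqrt{\deg(v)},
\]
% hence deg(v) <= 4\beta^2. Together with deg(v) >= 2\alpha np and
% \beta <= \eta np / \log^3 n this gives
\[
  2\alpha np \leq 4\beta^2 \leq \frac{4\eta^2 n^2 p^2}{\log^6 n},
  \qquad\text{i.e.}\qquad
  p \geq \frac{\alpha\log^6 n}{2\eta^2\, n} \geq \eta^{-1}\log^4 n/n,
\]
% using \alpha \log^2 n \geq 2\eta for large n.
```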
\subsubsection*{The Hamiltonian case, \texorpdfstring{$k = 2$}{k = 2}}
Let $V = V' \cup U \cup W$ be a partition of $V$ chosen uniformly at random
such that
\begin{equation}\label{eq:main-2-sizes}
|U| = \floor*{\frac{\varepsilon n}{C\log^2 n}}, \quad |W| = \floor{\varepsilon n},
\quad \text{and} \quad |V'| = n - |U| - |W|.
\end{equation}
Hence, from the bounds on $p$ and $\beta$, we have
\begin{equation}\label{eq:main-2-partition-lb-size}
|U|, |W|, |V'| \geq C \cdot \max{\{ \beta\sqrt{\log n}/p, \log n/p \}},
\end{equation}
by our choice of $\eta$. As an easy consequence of Chernoff's inequality and
the union bound, with high probability
\begin{equation}\label{eq:main-2-degrees}
\deg_G(v, Z) \geq (2\alpha-\varepsilon)|Z|p \qquad \text{and} \qquad
\delta_{\alpha/126}(G[Z]) \geq (1/2+3\alpha/4)|Z|p,
\end{equation}
for all $v \in V(G)$ and $Z \in \{V', U, W\}$. For the remainder of the
proof we fix such a good choice of sets $V'$, $U$, and $W$.
Let $U = U_A \cup U_B$ and $W = W_A \cup W_B$ be partitions with $|U_A| =
\floor{|U|/2}$ and $|W_A| = \floor{|W|/2}$, and let $F := G[U_A \cup W_A, U_B
\cup W_B]$ be a bipartite graph such that the following holds:
\begin{enumerate}[label=(P\arabic*), leftmargin=3em]
\item\label{2-expander} $F[U]$ and $F[W]$ are both $\gamma_1p$-expanders,
\item\label{2-min-deg} $\deg_F(v, Z) \geq (\alpha/2)|Z|p$ and
$\delta_{\alpha/120}(F[Z]) \geq (1/4+\alpha/2)|Z|p$, for all $v \in U \cup
W$ and $Z \in \{U, W\}$,
\item\label{2-min-deg-fix} for every $v \in V(G)$, $\min\{\deg_G(v, U_A),
\deg_G(v, U_B)\} \geq \gamma_1|U|p$.
\end{enumerate}
If such partitions are chosen uniformly at random then Chernoff's inequality
and the union bound, together with \eqref{eq:main-2-sizes} and
\eqref{eq:main-2-degrees}, show that w.h.p.\ \ref{2-min-deg} and
\ref{2-min-deg-fix} hold. Fix such a choice of sets. We show that these imply
\ref{2-expander} as well.
We only show that $F[U]$ is a $\gamma_1p$-expander, as an analogous argument
works for $F[W]$. Let $X \subseteq U$ be of size $|X| \leq |U|/2$. If $|X|
\leq (\alpha/4)|U|$, then from \ref{2-min-deg} by applying
Lemma~\ref{lem:edges-out-small-set} with $\alpha/4$ (as $\alpha$), $F[U]$ (as
$G$), and $X$ (as $A$) we obtain
\[
e_F(X, U \setminus X) \geq \big((\alpha/4)|U|p - \beta\big)|X| \geq
(\alpha/8)|X||U|p \geq \gamma_1|X||U|p,
\]
since $\beta = o(|U|p)$. On the other hand, if $(\alpha/4)|U| < |X| \leq
|U|/2$, from \ref{2-min-deg} we have
\[
e_F(X, U \setminus X) \geq \big(|X| -
\tfrac{\alpha}{120}|U|\big)(1/4+\alpha/2)|U|p - 2e_G(X \cap U_A, X \cap
U_B).
\]
Then, $G$ being $(p, \beta)$-sparse further gives
\[
e_F(X, U \setminus X) \geq (1/4+\alpha/2)|X||U|p -
\tfrac{\alpha}{120}(1/4+\alpha/2)|U|^2p - |X|^2p/2 - \beta|X|.
\]
An easy case distinction between $|X| < |U|/4$ and $|U|/4 \leq |X| \leq |U|/2$
shows in both cases (with room to spare) that $e_F(X, U \setminus X) \geq
\gamma_1|X||U|p$, as desired.
Given such a graph $F$, for $X \in \{A, B\}$, we say that a vertex $v \in
V(G)$ is {\em $X$-expanding} if $\deg_G(v, U_X) \geq \gamma_1|U|p$. We say
that a pair $\{u, v\}$ of vertices $u, v \in V(G)$ is an {\em $X$-pair} if $u$
and $v$ are both $X$-expanding. Note that a vertex can be both $A$-expanding
and $B$-expanding.
We apply the Absorbing Lemma (Lemma~\ref{lem:absorbing-lemma}) with $\gamma_1$
(as $\gamma$) and $F$ (as $G$) to obtain a $(U_A, U_B)$-absorber $H$ in the
graph $F[U \cup W]$ with one endpoint $a \in W_A$ and the other $b \in W_B$.
Indeed, \ref{2-expander}, \eqref{eq:main-2-sizes} and
\eqref{eq:main-2-partition-lb-size}, and \ref{2-min-deg} in that order verify
Lemma~\ref{lem:absorbing-lemma}~$\ref{abs-W-exp}$--$\ref{abs-U-deg}$.
Let $V'' := V' \cup (W \setminus V(H))$. Observe that $|W \setminus V(H)| \leq
\varepsilon n$. Hence, we get for all $v \in V''$
\begin{equation}\label{eq:min-deg-cov-V}
\deg_G(v, V'') \geq \deg_G(v, V') \osref{\eqref{eq:main-2-degrees}}\geq
(2\alpha-\varepsilon)|V'|p \geq \frac{2\alpha-\varepsilon}{1+2\varepsilon}|V''|p \geq
\alpha|V''|p.
\end{equation}
Similarly, all $v \in V''$ that previously satisfied $\deg_G(v, V') \geq
(1/2+3\alpha/4)|V'|p$, now satisfy
\begin{equation}\label{eq:essential-min-deg-cov-V}
\deg_G(v, V'') \geq \frac{1/2+3\alpha/4}{1+2\varepsilon}|V''|p \geq
(1/2+\alpha/2)|V''|p,
\end{equation}
due to our choice of $\varepsilon$. Lastly, the set of vertices in $V''$ violating
\eqref{eq:essential-min-deg-cov-V} is of size at most $(\alpha/126)|V'| +
|W| \leq (\alpha/64)|V''|$. Therefore, we can apply the Embedding Lemma
(Lemma~\ref{lem:embedding-lemma}) with $\alpha/2$ (as $\alpha$) to the graph
$G [V'']$ to obtain a path forest $P_1$ that contains at most $\max{\{
\beta/p, \log n/p \}} \leq n/\log^3 n$ paths and covers all vertices of $V''$.
Let $m$ denote the total number of paths in $P_1$. Take an arbitrary ordering
of these $m$ paths and let us denote the endpoints of the $i$-th path by $s_i$
and $t_i$, for all $i \in [m]$; note that a path may be only a single vertex,
in which case $s_i = t_i$. Since every $v \in V''$ has degree at least
$\gamma_1|U|p$ into {\em both} $U_A$ and $U_B$ (by \ref{2-min-deg-fix}), by
sequentially choosing for each vertex whether we make it $A$-expanding or
$B$-expanding, that is, artificially removing the edges to the other colour
class of $F[U]$, we can make the following set of pairs have exactly the same
number of $A$-pairs as $B$-pairs:
\[
\ensuremath{\mathcal P} = \big\{\{b, s_1\}, \{t_i, s_{i + 1}\}_{i \in [m - 1]}, \{t_m,
a\}\big\}.
\]
Finally, we apply the Connecting Lemma (Lemma~\ref{lem:connecting-lemma}) to
the graph obtained from $G$ by substituting $G[U]$ by $F[U]$, and with
$\gamma_1$ (as $\gamma$), $V(G) \setminus U$ (as $U$), $U$ (as $W$), $\ell =
30\log n/(\gamma_1 \log\log n)$, and the multigraph $M$ with the vertex set
\[
V(M) = \{a, b\} \cup \{s_i, t_i\}_{i \in [m]}
\]
and the edge set $E(M) = \ensuremath{\mathcal P}$. Clearly, $\Delta(M) \leq 2$ and $e(M) \leq m+1
\leq 2n/\log^3 n \leq |U|/(C\ell)$. Therefore, we obtain an $ab$-path $P'$
covering all vertices of $V''$. Crucially, $|V(P') \cap U_A| = |V(P') \cap
U_B|$, ensured by the argument above.
Let $A'$ and $B'$ denote the set of vertices in $U_A$ and $U_B$, respectively,
belonging to $V(P')$. By definition (see Definition~\ref{def:absorber}), $H$
contains an $ab$-path $P$ with $V(P) = V(H) \setminus (A' \cup B')$, and
replacing $H$ by $P$ and concatenating $P$ with $P'$ closes a cycle $C_1$ which
covers all vertices of $G$, as required.
\subsubsection*{The general case, \texorpdfstring{$k \geq 3$}{k >= 3}}
In this case, due to the low (essential) minimum degree, splitting $W$ and $U$
uniformly at random into colour classes as above is not enough to guarantee
that the obtained bipartite graph is an expander. On top of that, the whole
graph $G$ may be bipartite, in which case a property like \ref{2-min-deg-fix}
cannot hold. We therefore use a slightly different approach.
Let $F \subseteq G$ be a spanning bipartite $(\gamma p/2)$-expander with
colour classes $A$ and $B$ obtained by applying
Lemma~\ref{lem:expander-to-bipartite-expander} to $G$. Let $V = V' \cup U \cup
W \cup Q \cup Y$ be a partition of $V$ chosen uniformly at random such that
\begin{equation}\label{eq:main-k-set-sizes}
|U| = \floor*{\frac{\varepsilon n}{C\log^2 n}}, \quad |W| = |Q| = |Y| = \floor{\varepsilon
n}, \quad \text{and} \quad |V'| = n - |U| - |W| - |Q| - |Y|.
\end{equation}
Observe that, from the bounds on $p$ and $\beta$, we have
\begin{equation}\label{eq:main-k-partition-lb-size}
|U|, |W|, |Q|, |Y|, |V'| \geq C \cdot \max{\{\beta\sqrt{\log n}/p, \log
n/p\}},
\end{equation}
as before. Since $\delta(F) \geq (\gamma/2)p(n-1)$, as an easy consequence of
Chernoff's inequality and the union bound, with high probability
\begin{equation}\label{eq:main-k-degrees}
\deg_F(v, Z) \geq \gamma_1|Z|p, \enspace \deg_G(v, Z) \geq
(2\alpha-\varepsilon)|Z|p, \enspace \text{and} \enspace \delta_{\alpha/126}(G[V'])
\geq (1/k+3\alpha/4)|V'|p,
\end{equation}
for every $v \in V(G)$ and $Z \in \{V', U, W, Q, Y\}$. Additionally, from the
Inheritance Lemma (Lemma~\ref{lem:inheritance-lemma}) with probability at
least $1 - 6n^{-1}$ we have that all of $G[Q]$, $G[Y]$, $G[U \cup Q]$, $F[U]$,
$F[W]$, $F[U \cup W]$ are $\gamma_1p$-expanders. For the remainder of the
proof we fix such a good partition.
By the Absorbing Lemma (Lemma~\ref{lem:absorbing-lemma}) applied with
$\gamma_1$ (as $\gamma$) and $F$ (as $G$) there is a $(U \cap A, U \cap
B)$-absorber $H$ in the graph $F[U \cup W]$ with endpoints $a \in W \cap A$
and $b \in W \cap B$. Indeed, $F[W]$ being a $\gamma_1p$-expander,
\eqref{eq:main-k-set-sizes} and \eqref{eq:main-k-partition-lb-size}, and
\eqref{eq:main-k-degrees} in that order establish
Lemma~\ref{lem:absorbing-lemma}~$\ref{abs-W-exp}$--$\ref{abs-U-deg}$.
As before, given $U \cap A$ and $U \cap B$ we say that a pair $\{u, v\}$ of
vertices $u, v \in V(G)$ is an $A$-pair if $u$ and $v$ are both $A$-expanding
and a $B$-pair if both are $B$-expanding. In order to gain some flexibility on
the number of $A$-pairs and $B$-pairs we make use of the following claim.
\begin{claim}\label{cl:main-k-fix-balance}
There exist $Q_A, Q_B \subseteq Q$ of size $|Q_A|, |Q_B| = \floor{kn/\log^3
n}$ such that
\begin{enumerate}[(a)]
\item $\deg_G(v, U \cap A) \geq (\gamma_1^2/8)|U|p$ for all $v \in Q_A$,
and
\item $\deg_G(v, U \cap B) \geq (\gamma_1^2/8)|U|p$ for all $v \in Q_B$.
\end{enumerate}
\end{claim}
\begin{proof}
We show only the first assertion as the second one follows analogously. Let
$U_A := U \cap A$ and $U_B := U \cap B$. We first show that both of these
sets have size at least $\gamma_1|U|/4$. Indeed, if we assume that, say,
$|U_A| < \gamma_1|U|/4$, we have from the fact that $F[U]$ is a
$\gamma_1p$-expander, and thus $\delta(F[U]) \geq \gamma_1(|U|-1)p$,
\[
\gamma_1|U|^2p/3 \leq e_F(U_A, U_B) \leq e_G(U_A, U_B) \leq
\gamma_1|U|^2p/4 + \beta|U|/2,
\]
where the upper bound follows from $G$ being $(p,\beta)$-sparse. This leads
to a contradiction as $\beta \leq \eta np/\log^3 n = o(|U|p)$.
Let $Q_A \subseteq Q$ be defined as
\[
Q_A := \{v \in Q : \deg_G(v, U_A) \geq (\gamma_1^2/8)|U|p\},
\]
and suppose towards a contradiction that $|Q_A| \leq n/\log^2 n$. Using the fact
that $G[U \cup Q]$ is a $\gamma_1p$-expander and $|U_A| \geq
(\gamma_1/4)|U|$, we get
\begin{equation}\label{eq:balance-set-k-3}
(\gamma_1^2/4)|U||Q \cup U_B|p \leq e_G(U_A, Q \cup U_B) \leq e_G(U_A,
U_B) + e_G(U_A, Q_A) + (\gamma_1^2/8)|Q||U|p.
\end{equation}
As $\beta \leq \eta np/\log^3 n$, $|U_A|, |U_B| = \Theta(n/\log^2 n)$, and
$|Q| = \varepsilon n$, from $G$ being $(p, \beta)$-sparse, we have
\[
e_G(U_A, U_B) \leq (1 + o(1))|U_A||U_B|p \qquad \text{and} \qquad e_G(U_A,
Q_A) \leq o(|U||Q|p).
\]
Lastly, as $|U_B| = o(|Q|)$, the whole right hand side in
\eqref{eq:balance-set-k-3} can be bounded by $(3\gamma_1^2/16)|Q||U|p$,
which leads to a contradiction. In conclusion, $|Q_A| > n/\log^2 n \geq
\floor{kn/\log^3 n}$, so a subset of $Q_A$ of the desired size exists, and the
set $Q_B$ is obtained analogously.
\end{proof}
Take an arbitrary ordering $\{v_1, \dotsc, v_q\}$ of the vertices in $Q_A \cup
Q_B$, where $q := |Q_A \cup Q_B|$. We apply the Connecting Lemma
(Lemma~\ref{lem:connecting-lemma}) to $G$ with $\gamma_1$ (as $\gamma$), $Q_A
\cup Q_B$ (as $U$), $Y$ (as $W$), and the multigraph $M$ defined as
\[
V(M) = Q_A \cup Q_B \qquad \text{and} \qquad E(M) = \big\{ \{v_i, v_{i +
1}\}_{i \in [q - 1]} \big\}.
\]
Since $G[Y]$ is a $\gamma_1p$-expander, by \eqref{eq:main-k-partition-lb-size}
and \eqref{eq:main-k-degrees}, all the assumptions
$\ref{cl-W-exp}$--$\ref{cl-U-deg}$ of Lemma~\ref{lem:connecting-lemma} are
satisfied. Perhaps the least obvious is the bound on $e(M)$, which holds as $q
\leq 2k n/\log^3 n \leq |Y|/(C\log n)$
(see~\eqref{eq:main-k-partition-lb-size}). We obtain a $v_1v_q$-path $P_Q$
which contains all vertices of $Q_A \cup Q_B$.
Let $W' := W \setminus V(H)$, $Y' := Y \setminus V(P_Q)$, $Q' := Q \setminus
(Q_A \cup Q_B)$, and $V'' := V' \cup W' \cup Q' \cup Y'$. One easily checks,
similarly as in \eqref{eq:min-deg-cov-V} and
\eqref{eq:essential-min-deg-cov-V}, that $\delta(G[V'']) \geq \alpha|V''|p$
and $\delta_{\alpha/64}(G[V'']) \geq (1/k+\alpha/2)|V''|p$. Consequently, we
apply the Embedding Lemma (Lemma~\ref{lem:embedding-lemma}) with $\alpha/2$
(as $\alpha$) to the graph $G[V'']$ to obtain a collection of $k - 1$ path
forests $P_1, \dotsc, P_{k - 1}$ which contain at most $n/\log^3 n$ paths
each.
Our goal at this point is to connect all the paths belonging to $P_1$ together
with $P_Q$ into the first cycle $C_1$, connect all the paths belonging to
$P_2$ together with the absorber $H$ into the second cycle $C_2$, and the
paths belonging to each $P_i$ for all $i \geq 3$ into a cycle $C_i$. Along the
way of constructing the cycles $C_2, \dotsc, C_{k-1}$, we may {\em reuse} some
of the vertices of $Q_A \cup Q_B$, which is not a problem.
Set $\gamma' := \gamma_1^2/8$. Let $m_i$ denote the number of paths in $P_i$,
for all $i \in [k - 1]$. Take an arbitrary ordering of the paths in each $P_i$
and denote their endpoints by $s_i^j$ and $t_i^j$, for all $i \in [k - 1]$, $j
\in [m_i]$. We again artificially make every vertex either $A$-expanding or
$B$-expanding depending on its degree into $U \cap A$ and $U \cap B$,
keeping at least $\gamma'|U|p$ edges for each vertex. This is possible due to
\eqref{eq:main-k-degrees} and our choice of $\gamma_1$. Next, we construct a
set $\{u_1, \dotsc, u_x\}$, for some $0 \leq x \leq 2kn/\log^3 n$, by
sequentially choosing vertices from $Q_A$ or $Q_B$ depending on the difference
in the number of $A$-pairs and $B$-pairs. It is easy to see that one can
greedily choose such a set in order for
\begin{minipage}{\textwidth}
\begin{minipage}[t]{0.6\textwidth}
\begin{align*}
\ensuremath{\mathcal P} = \big\{ & \{v_q, s_1^1\}, \{t_1^j, s_1^{j + 1}\}_{j \in [m_1 - 1]},
\{t_1^{m_1}, v_1\}, \\
%
& \{t_i^j, s_i^{j + 1}\}_{3 \leq i \leq k - 1, j \in [m_i - 1]},
\{t_i^{m_i}, s_i^1\}_{3 \leq i \leq k - 1}, \\
%
& \{b, s_2^1\}, \{t_2^j, s_2^{j+1}\}_{j \in [m_2-1]}, \{t_2^{m_2},
u_1\}, \{u_i, u_{i+1}\}_{i \in [x-1]}, \{u_x, a\}
\big\}
\end{align*}
\end{minipage}
\begin{minipage}[t]{0.3\textwidth}
\begin{align*}
& \text{\small\itshape (cycle $C_1$ including $P_Q$)} \\
& \text{\small\itshape (cycles $C_3, \dotsc, C_{k - 1}$)} \\
& \text{\small\itshape (cycle $C_2$ including $H$)}
\end{align*}
\end{minipage}%
\end{minipage}
to be such that the number of $A$-pairs and $B$-pairs in $\ensuremath{\mathcal P}$ is the same.
Finally, we apply the Connecting Lemma (Lemma~\ref{lem:connecting-lemma}) to
the graph obtained from $G$ by substituting $G[U]$ by $F[U]$, and with
$\gamma'$ (as $\gamma$), $V(G) \setminus U$ (as $U$), $U$ (as $W$), and the
multigraph $M$ with the vertex set
\[
V(M) = \{a, b\} \cup \{s_i^j, t_i^j\}_{i \in [k - 1], j \in [m_i]} \cup
\{v_i\}_{i \in [q]}
\]
and the edge set $E(M) = \ensuremath{\mathcal P}$. This is indeed possible as $\ref{cl-W-exp}$
$F[U]$ is a $\gamma_1p$-expander and thus a $\gamma'p$-expander as well,
$\ref{cl-W-size}$ $|U| = \floor{\varepsilon n/(C\log^2 n)}$ and $\beta \leq \eta
np/\log^3 n$, and $\ref{cl-U-deg}$ all vertices have degree at least
$\gamma'|U|p$ into $U$ due to Claim~\ref{cl:main-k-fix-balance} and
\eqref{eq:main-k-degrees}. Therefore, we obtain $k - 1$ cycles $C_1, C_2,
\dotsc, C_{k-1}$, where $C_2$ contains $P_Q$ as a subgraph and $C_1$ contains
$H$ and possibly some vertices of $P_Q$ as a subgraph. As in the case $k = 2$,
replacing $H$ in $C_1$ by an absorbing $ab$-path $P'$ using all the uncovered
vertices of $U$ completes the proof.
\end{proof}
In the remainder of the paper we supply the missing proofs of the lemmas stated
above, with each section dedicated to one of the lemmas.
\section{The Partitioning Lemma}\label{sec:partitioning-lemma}
In this section we give a proof of the Partitioning Lemma which allows us to
partition every $(p, o(np))$-sparse graph satisfying a certain minimum degree
condition into linear-sized expanders. The proof uses the following notions of
\emph{good} and \emph{perfect} partitions.
\begin{definition}[$(c,\alpha)$-good, $(c,\alpha, \gamma)$-perfect]
\label{def:good}
Let $G$ be a graph on $n$ vertices. We say that a partition $V(G) = V_0 \cup
\dotsb \cup V_\ell$ of the vertex set of $G$ is \emph{$(c,\alpha)$-good} if
\begin{enumerate}[label=(L\arabic*)]
\item\label{L-size-bad} $|V_0| \leq \alpha n$, and
\item\label{L-min-deg} $\delta(G[V_i]) \geq (c + \alpha/2^{\ell})np$ for
every $i \in [\ell]$.
\end{enumerate}
We say that the partition is \emph{$(c, \alpha, \gamma)$-perfect} if it
additionally satisfies
\begin{enumerate}[label=(L\arabic*), resume]
\item\label{L-non-extremal} $G[V_i]$ is a $\gamma p$-expander for every $i
\in [\ell]$.
\end{enumerate}
\end{definition}
A first step towards proving Lemma~\ref{lem:partitioning-lemma} is proving the
following auxiliary lemma.
\begin{lemma}\label{lem:perfect-partition}
For all $c, \alpha \in (0, 1)$, there exist positive $\gamma(\alpha, c)$ and
$\eta(\alpha, c)$ such that the following holds for all sufficiently large
$n$. Let $p \in (0, 1)$ and $\beta \leq \eta np$ and let $G$ be a $(p,
\beta)$-sparse graph with $n$ vertices and minimum degree at least $(c +
\alpha)np$. Then for some integer $1 \leq \ell < 1/c$, there exists a $(c,
\alpha, \gamma)$-perfect partition $V(G) = V_0 \cup V_1 \cup \dotsb \cup
V_\ell$ of the vertex set of $G$.
\end{lemma}
\begin{proof}
Let $G = (V, E)$ and let $\gamma = \alpha^2/2^{2/c + 1}$. Suppose that $\eta =
\eta(c, \alpha)$ is sufficiently small for the rest of the argument to go
through. For every $\ell \geq 1$, set
\[
c_\ell = c + \alpha/2^{\ell - 1} \geq c, \quad \alpha_\ell = \alpha (2^{-1}
+ 2^{-2} + \cdots + 2^{-\ell}) \leq \alpha, \quad \text{and} \quad
\gamma_\ell = \alpha^2/2^{2\ell+1}.
\]
Note that the partition $V = V_0 \cup V_1$ where $V_0 = \varnothing$ is
trivially $(c_1, \alpha_1)$-good. In the following we argue that if for some
$\ell< c^{-1}$ a partition $V = V_0 \cup \dotsb \cup V_\ell$ is $(c_\ell,
\alpha_\ell)$-good, but not $(c_\ell, \alpha_\ell, \gamma_\ell)$-perfect, then
there exists a partition $V = W_0 \cup \dotsb \cup W_{\ell + 1}$ that is
$(c_{\ell + 1}, \alpha_{\ell + 1})$-good. This is sufficient to complete the
proof, which can be seen as follows. By repeatedly applying the statement
above, we either obtain a $(c_\ell, \alpha_\ell, \gamma_\ell)$-perfect
partition $V = V_0 \cup \dotsb \cup V_\ell$ for some $1 \leq \ell < c^{-1}$ or
we obtain a $(c_\ell, \alpha_\ell)$-good partition $V = V_0 \cup \dotsb \cup
V_\ell$ for $\ell = \ceil{c^{-1}}$. In the former case, we are done because
$c_\ell = c + \alpha/2^{\ell-1}$, $\alpha_\ell \leq \alpha$, and $\gamma_\ell
\geq \gamma$ for $\ell < c^{-1}$. The latter case results in a contradiction,
as we now verify. By averaging, there is some $i \in [\ell]$ such that $|V_i|
\leq n/\ell$. Since $\ell \geq c^{-1}$, we then have $|V_i|\leq cn$.
Furthermore, \ref{L-min-deg} states that $\delta(G[V_i]) \geq (c +
\alpha/2^{\ell - 1})np \geq (c + \alpha/2^{1/c})np$, which in particular
implies
\[
2e(V_i) \geq |V_i| (c + \alpha/2^{1/c})np.
\]
On the other hand, since $G$ is $(p, \beta)$-sparse, we have $2e(V_i) = e(V_i,
V_i) \leq p|V_i|^2 + \beta |V_i|$, which combines with the above to yield
\[
(c + \alpha/2^{1/c})np \leq p|V_i| + \beta.
\]
Since $|V_i|\leq cn$ and $\beta \leq \eta np$, this is a contradiction if
$\eta < \alpha/2^{1/c}$.
We now prove the statement mentioned above. Assume that the partition $V = V_0
\cup \dotsb \cup V_\ell$ is $(c_\ell, \alpha_\ell)$-good but not $(c_\ell,
\alpha_\ell, \gamma_\ell)$-perfect, for some $\ell < c^{-1}$. Then there is
some $i \in [\ell]$ and a partition $V_i = X \cup Y$ such that
\begin{equation}\label{eq:non-perfect-tuple}
e(X, Y) < \gamma_\ell |X||Y|p.
\end{equation}
We further define partitions $X = V_X \cup W_X$ and $Y = V_Y \cup W_Y$ as
follows. Set
\begin{equation}\label{eq:w0}
W_X^0 := \{ v \in X : \deg(v, Y) \geq \alpha n p/2^\ell \},
\end{equation}
and, for every $j \geq 1$, set
\begin{equation}\label{eq:wj}
W_X^j :=
\begin{cases}
W_X^{j - 1} \cup \{v\}, & \text{if there exists $v \in X \setminus
W_X^{j - 1}$ with $\deg(v, W_X^{j - 1}) \geq \alpha np/2^\ell$},
\\
W_X^{j - 1}, & \text{otherwise}.
\end{cases}
\end{equation}
Finally, define $W_X := \bigcup_{j \geq 0} W_X^j$ and $V_X := X \setminus
W_X$. The partition $Y = V_Y \cup W_Y$ is defined analogously (with $X$
replaced by $Y$).
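The construction of $W_X$ above is an iterative closure: seed with the vertices sending many edges into $Y$, then repeatedly absorb any vertex with many neighbours in the current set. As an illustration (plain adjacency sets, and a generic threshold `d` standing in for $\alpha np/2^\ell$):

```python
def closure(X, Y, adj, d):
    """Compute W_X: seed with vertices of X having at least d neighbours
    in Y, then repeatedly absorb any vertex of X with at least d
    neighbours in the current set, until nothing more can be added."""
    W = {v for v in X if len(adj[v] & Y) >= d}
    changed = True
    while changed:
        changed = False
        for v in X - W:
            if len(adj[v] & W) >= d:
                W.add(v)
                changed = True
    return W
```

The order in which vertices are absorbed does not matter for the analysis: only the final size of $W_X$ is used in the proof.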
We now define the partition $V = W_0 \cup \dotsb \cup W_{\ell + 1}$ by setting
$W_0 = V_0 \cup W_X \cup W_Y$ and $(W_1, \dotsc, W_{\ell + 1}) = (V_1, \dotsc,
V_{i - 1}, V_X, V_Y, V_{i + 1}, \dotsc, V_\ell)$, i.e., we obtain the new
partition of $V$ by replacing the part $V_i$ by the two parts $V_X$ and $V_Y$
and adding $W_X \cup W_Y$ to $V_0$. It remains to check that this partition is
$(c_{\ell + 1}, \alpha_{\ell + 1})$-good.
Showing \ref{L-size-bad} essentially boils down to proving that $W_X$ and
$W_Y$ are not too large. As a first step, it follows from
\eqref{eq:non-perfect-tuple} and \eqref{eq:w0} that
\[
|W_X^0| \cdot \alpha np/2^\ell \leq e(X, Y) < \gamma_{\ell}|X||Y|p \leq
\gamma_\ell |V_i|^2p/4,
\]
and thus $|W_X^0| \leq 2^{\ell - 2} \gamma_\ell |V_i|/\alpha \leq 2^{\ell
- 2}\gamma_\ell n/\alpha$. Assume towards a contradiction that $|W_X| > \alpha
n/2^{\ell + 2}$. Then there must exist some $j \geq 0$ such that $|W_X^j| =
\ceil{\alpha n/2^{\ell + 2}}$. We remark that by the choice $\gamma_\ell =
\alpha^2/2^{2\ell + 1}$, we have $|W_X^0| \leq 2^{\ell - 2}\gamma_\ell n/\alpha =
\alpha n/2^{\ell + 3} \leq |W_X^j|/2$. From the definition of $W_X^j$, we
moreover see that every vertex in $W_X^j \setminus W_X^0$ adds at least
$\alpha np/2^\ell$ edges to $e(W_X^j)$. Therefore, we have
\begin{equation}\label{eq:wj-lower-bound}
e(W_X^j) \geq \frac{\alpha}{2^\ell} np \cdot |W_X^j \setminus W_X^0| \geq
\frac{\alpha}{2^{\ell + 1}} np \cdot |W_X^j|.
\end{equation}
On the other hand, since $G$ is $(p, \beta)$-sparse,
\begin{equation}\label{eq:wj-upper-bound}
e(W_X^j) \leq |W_X^j|^2 p/2 + \beta |W_X^j| = \ceil*{\frac{\alpha n}{2^{\ell
+ 2}}} (|W_X^j| p/2 + \beta).
\end{equation}
As $\beta \leq \eta np \leq \eta 2^{\ell + 2}|W_X^j|p/\alpha$, we see that for
small enough $\eta$, equations \eqref{eq:wj-lower-bound} and
\eqref{eq:wj-upper-bound} result in a contradiction. It follows that $|W_X|
\leq \alpha n/2^{\ell + 2}$ and one can show analogously that $|W_Y|\leq
\alpha n/2^{\ell + 2}$. In conclusion,
\[
|W_0| = |V_0| + |W_X| + |W_Y| \leq (\alpha_\ell + \alpha/2^{\ell + 1})n =
\alpha_{\ell + 1} n,
\]
completing the proof of \ref{L-size-bad}.
Lastly, we prove \ref{L-min-deg}. Since $V = V_0 \cup \dotsb \cup V_\ell$ is
$(c_\ell, \alpha_\ell)$-good, we have $\delta(G[V_i]) \geq c_\ell np = (c +
\alpha/2^{\ell - 1})np$. Observe that \eqref{eq:w0} and \eqref{eq:wj} imply
that $V_X$ contains only vertices with fewer than $\alpha np/2^{\ell}$
neighbours in $W_X \cup Y = V_i \setminus V_X$. Therefore,
\[
\delta(G[V_X]) \geq (c + \alpha/2^{\ell - 1})np - \alpha n p/2^\ell = (c +
\alpha/2^\ell)np = c_{\ell + 1}np.
\]
Similarly, we prove $\delta(G[V_Y]) \geq c_{\ell + 1}np$, thus establishing
\ref{L-min-deg}.
\end{proof}
We can now prove Lemma \ref{lem:partitioning-lemma} which we restate for
convenience.
\partitioning*
\begin{proof}
We may assume without loss of generality that $\xi < \alpha$. Let $c' = c +
\alpha - \xi$, $\alpha' = c^2\xi/4$, and choose $\eta$ to be small enough so
that the following arguments hold. Then $G$ has minimum degree at least $(c' +
\alpha')np$, and so we can apply Lemma~\ref{lem:perfect-partition} to $G$ to
obtain, for some $\gamma' > 0$ and some integer $1 \leq \ell < c^{-1}$, a
$(c', \alpha', \gamma')$-perfect partition $V(G) = V_0' \cup V_1' \cup \dotsb
\cup V_\ell'$. In the following, we distribute the vertices in $V_0'$ over the
other sets to obtain a partition $V(G) = V_1 \cup \dotsb \cup V_\ell$ as in
the statement of the lemma (in particular, we aim to have $V_i' \subseteq V_i$
for every $i \in [\ell]$).
Let $m = |V_0'|$ and note that by \ref{L-size-bad}, we have $m \leq \alpha'
n$. It then follows from the fact that $G$ has minimum degree at least $(c' +
\alpha')np \geq |V_0'|p + c'np$ and Lemma~\ref{lem:edges-out-small-set} that
there exists an ordering $w_1, \dotsc, w_m$ of the vertices of $V_0'$ such
that for every $j \in [m]$, we have
\begin{equation}\label{eq:min-degree}
\deg(w_j, V \setminus \{w_j, \dotsc, w_m\}) \geq c'np - \beta \geq cnp,
\end{equation}
where the last inequality holds by choosing $\eta$ to be small enough. We
process the vertices $w_1, \dotsc, w_m$ in this order, defining $\ell$ chains
of subsets
\[
\varnothing = W_i^0 \subseteq W_i^1 \subseteq \cdots \subseteq W_i^m
\subseteq V_0' \qquad \text{for $i \in [\ell]$}
\]
along the way. For this, we set $W_i^0 = \varnothing$ for every $i \in
[\ell]$, and for every vertex $w_j$, we do the following:
\begin{enumerate}[(1)]
\item\label{i*} choose an arbitrary $i^\star \in [\ell]$ satisfying
$\deg(w_j, V'_{i^\star} \cup W_{i^\star}^{j - 1}) \geq cnp/\ell$,
\item\label{Wj} set $W_{i^\star}^j := W_{i^\star}^{j - 1} \cup \{w_j\}$ and
$W_i^j := W_i^{j - 1}$ for all $i \neq i^\star$.
\end{enumerate}
Observe that by \eqref{eq:min-degree} there always exists at least one
$i^\star \in [\ell]$ as in \ref{i*}. Lastly, we define $V_i := W_i^m \cup
V_i'$ for every $i \in [\ell]$.
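The processing of $w_1, \dotsc, w_m$ is a greedy redistribution: each leftover vertex joins some part in which it already has enough neighbours, counting previously placed vertices. A small illustrative sketch (a generic threshold `t` standing in for $cnp/\ell$):

```python
def redistribute(order, parts, adj, t):
    """Greedily place each vertex w_1, ..., w_m into some part in which
    it already has at least t neighbours (counting vertices placed
    earlier); the degree argument guarantees such a part exists."""
    parts = [set(P) for P in parts]
    for w in order:
        for P in parts:
            if len(adj[w] & P) >= t:
                P.add(w)
                break
        else:
            raise ValueError("no part with enough neighbours for %r" % (w,))
    return parts
```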
It is easy to see that $V_1, \dotsc, V_\ell$ contain all vertices of $V_0'$
and that we have $\delta(G[V_i]) \geq cnp/\ell \geq c^2np$ for every $i \in
[\ell]$. Moreover, by \ref{L-min-deg}, all but at most $|V_0'| \leq \alpha'n$
vertices $v$ in each set $V_i$ satisfy $\deg(v, V_i) \geq c'np = (c + \alpha -
\xi)np$. Since $G$ is $(p, \beta)$-sparse and $\delta(G[V_i]) \geq c^2 np$,
one easily derives $|V_i| \geq c^2n/2$. Thus, by our choice of constants
$\alpha' n \leq \xi|V_i|$ and so $\delta_\xi(G[V_i]) \geq (c + \alpha -
\xi)np$. We finish the proof by showing that each graph $G[V_i]$ is a $\gamma
p$-expander, with $\gamma = \min{\{ c^2/4, \gamma'/4 \}}$.
For this, fix some partition $V_i = X \cup Y$ into non-empty sets where,
without loss of generality, we assume $|X| \leq |Y|$. If $|X| \leq c^2 n/2$,
then it follows from $\delta(G[V_i]) \geq c^2np \geq |X|p + c^2np/2$ and
Lemma~\ref{lem:edges-out-small-set} that $e(X, Y) \geq c^2|X|np/2 - \beta|X|
\geq \gamma |X||Y|p$, for $\eta$ small enough. On the other hand, if $|X|, |Y|
\geq c^2n/2$, then we use the fact that $G[V_i']$ is a $\gamma' p$-expander to
get
\[
e(X, Y) \geq e(X \cap V_i', Y \cap V_i') \geq \gamma'|X \cap V_i'| |Y \cap
V_i'|p.
\]
The assumption on the sizes of $X$ and $Y$ implies $|X \cap V_i'| = |X
\setminus V'_0| \geq |X| - \alpha' n \geq |X|/2$ and similarly $|Y \cap V_i'|
\geq |Y|/2$. This gives
\[
e(X, Y) \geq \gamma'|X||Y|p/4 \geq \gamma |X||Y|p,
\]
as required.
\end{proof}
\section{Expansion and other preliminaries}\label{sec:preliminaries}
A simple but important property of $(p,\beta)$-sparse graphs is that every set
$A$ of vertices, in which each vertex has degree well above $|A|p$, must expand
by a significant amount. This is the content of our first lemma.
\begin{lemma}
\label{lem:edges-out-small-set}
Let $p \in [0, 1]$, let $\alpha, \beta > 0$, and let $G$ be a $(p,
\beta)$-sparse graph on $n$ vertices. Assume $A \subseteq V(G)$ is a subset
such that $\deg(a) \geq |A|p + \alpha np$ for all $a \in A$. Then
\[
e(A, V(G) \setminus A) \geq (\alpha np - \beta) |A|.
\]
In particular, if $A$ is not empty, then there exists a vertex $a \in A$ such
that $\deg(a, V(G) \setminus A) \geq \alpha np - \beta$.
\end{lemma}
\begin{proof}
Set $B = V(G) \setminus A$. Recalling the definition of $e(\cdot, \cdot)$ our
assumption implies
\[
e(A, B) = e(A, V(G)) - e(A, A) \geq |A|^2p + |A|\cdot \alpha np - e(A, A).
\]
On the other hand, as $G$ is $(p, \beta)$-sparse,
\[
e(A, A) \leq |A|^2p + \beta|A|.
\]
Combining these inequalities gives $e(A, B) \geq (\alpha np - \beta) |A|$, and
the last assertion follows simply by averaging.
\end{proof}
In particular, if $G$ is a $(p,o(np))$-sparse graph with minimum degree
$\Omega(np)$, then the above lemma shows that all sufficiently small
linear-sized subsets of vertices expand by a factor $\Omega(np)$. An important
role in the proof is played by graphs in which larger sets of vertices also
have this property. We make the following definition.
\begin{definition}[$q$-expander]
\label{def:expander}
Let $q > 0$. A graph $G$ is a \emph{$q$-expander} if for every partition $V(G)
= V_1 \cup V_2$, we have
\[
e(V_1, V_2) \geq q|V_1||V_2|.
\]
\end{definition}
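For very small graphs, this definition can be checked directly by enumerating bipartitions. An illustrative brute-force check (exponential in the number of vertices, so a sanity test only, not part of any proof):

```python
from itertools import combinations

def is_q_expander(n, edges, q):
    """Brute-force check of the q-expander condition: every bipartition
    V = V1 u V2 into non-empty parts satisfies e(V1, V2) >= q|V1||V2|.
    It suffices to enumerate the smaller side of each bipartition."""
    V = set(range(n))
    for k in range(1, n // 2 + 1):
        for V1 in map(set, combinations(V, k)):
            V2 = V - V1
            cut = sum(1 for u, v in edges if (u in V1) != (v in V1))
            if cut < q * len(V1) * len(V2):
                return False
    return True
```

For the $4$-cycle, for instance, the worst bipartition splits it into two edges, giving expansion exactly $1/2$.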
Informally, we think of a $(p, o(np))$-sparse graph $G$ as being a `good
expander' if it is a $q$-expander for some $q = \Omega(p)$. One can see that
this is essentially best possible, since the definition of a $(p,o(np))$-sparse
graph implies that $e(V_1, V_2) \leq p|V_1||V_2| + \beta \sqrt{|V_1||V_2|} = (1
+ o(1)) p|V_1||V_2|$ for every partition $V = V_1 \cup V_2$ into sets of linear
size.
The following simple lemma allows us to assume without loss of generality that
our expanders are bipartite, which turns out to be convenient later on in the
proof.
\begin{lemma}
\label{lem:expander-to-bipartite-expander}
Let $q > 0$. Every $q$-expander $G$ contains a spanning bipartite
$(q/2)$-expander as a subgraph.
\end{lemma}
\begin{proof}
Let $V(G) = A \cup B$ be a partition of the vertex set of $G$ that maximises
$e_G(A, B)$. We claim that $H = G[A, B]$ is a $(q/2)$-expander. Since we
assume that $G$ is a $q$-expander, it is enough to show that for any partition
of the vertices into non-empty sets $X$ and $Y$, we have $e_H(X, Y) \geq
e_G(X, Y)/2$.
To see this, define the sets $V_1 = X\cap A$, $V_2= X\cap B$, $V_3 = Y\cap A$,
and $V_4 = Y\cap B$, and note that
\begin{align*}
2e_H(X,Y) + e_G(V_1\cup V_4, V_2\cup V_3)
& =
e_G(V_1\cup V_2,V_3\cup V_4) + e_G(V_1\cup V_3,V_2\cup V_4) \\
& = e_G(X,Y) + e_G(A,B);
\end{align*}
here, the first equality can be verified by writing $e_H(X,Y) = e_G(V_1,V_4) +
e_G(V_2,V_3)$ and observing that for all $1\leq i\leq j\leq 4$, both sides of
the equality count each edge of $G[V_i,V_j]$ the same number of times. The
maximal choice of $(A, B)$ ensures that $e_G(V_1\cup V_4, V_2\cup V_3) \leq
e_G(A,B)$. This implies the desired inequality $e_H(X, Y) \geq e_G(X, Y)/2$.
\end{proof}
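The maximising bipartition in the proof can also be approximated algorithmically by local search: move a vertex whenever it has more neighbours on its own side than across. Each move strictly increases the cut, so the procedure terminates, and on termination every vertex keeps at least half of its degree across the cut. A sketch (our own illustrative code, not part of the proof):

```python
def locally_max_cut(n, adj):
    """Local-search max cut: move a vertex across whenever it has more
    neighbours on its own side. Each move strictly increases the number
    of crossing edges, so the loop terminates; afterwards every vertex
    has at least half of its neighbours on the opposite side."""
    side = [v % 2 for v in range(n)]          # arbitrary initial bipartition
    improved = True
    while improved:
        improved = False
        for v in range(n):
            across = sum(1 for u in adj[v] if side[u] != side[v])
            within = len(adj[v]) - across
            if within > across:               # moving v gains within - across
                side[v] ^= 1
                improved = True
    A = {v for v in range(n) if side[v] == 0}
    B = {v for v in range(n) if side[v] == 1}
    return A, B
```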
\subsection{Matchings in hypergraphs}
The following theorem due to Haxell has recently seen a surge of applications in
problems concerning embedding (spanning) structures into sparse graphs. It is
similar to Hall's theorem in spirit, providing a condition for the existence of
a perfect matching in certain hypergraphs.
\begin{theorem}[Haxell's criterion~\cite{haxell1995condition}]
\label{thm:haxell-matching}
Let $A$ and $B$ be disjoint sets and let $\ensuremath{\mathcal H} = (A \cup B, E)$ be an
$r$-uniform hypergraph such that $|A \cap e| = 1$ and $|B \cap e| = r - 1$ for
every edge $e \in E$. Suppose that for every choice of subsets $S \subseteq A$
and $Z \subseteq B$ such that $|Z| \leq (2r-3)(|S|-1)$, there is an edge $e
\in E$ intersecting $S$ but not $Z$. Then $\ensuremath{\mathcal H}$ contains an $A$-saturating
matching (that is, a collection of disjoint hyperedges whose union contains
$A$).
\end{theorem}
\subsection{Sparse regularity lemma}
Given a graph $G$ and $\varepsilon, p > 0$, we say that a pair $(X, Y)$ of disjoint
subsets $X, Y \subseteq V(G)$ is $(\varepsilon, p)$-{\em regular} if for all subsets
$X' \subseteq X$ and $Y' \subseteq Y$ with $|X'| \geq \varepsilon|X|$ and $|Y'| \geq
\varepsilon|Y|$, we have $|d(X', Y') - d(X, Y)| \leq \varepsilon p$. We say that the pair $(X,
Y)$ is $(\varepsilon, p)$-{\em lower-regular} if for all $X'$ and $Y'$ as above, we
have $d(X', Y') \geq (1 - \varepsilon)p$.
A partition $V(G) = V_0 \cup \dotsb \cup V_t$ is called an $(\varepsilon, p)$-{\em
regular partition with exceptional class $V_0$} if $|V_0| \leq \varepsilon n$, $|V_1| =
\dotsb = |V_t| \leq n/t$, and all but at most $\varepsilon t^2$ pairs $(V_i, V_j)$ with
$1 \leq i < j \leq t$ are $(\varepsilon, p)$-regular.
\begin{lemma}[Sparse regularity lemma~\cite{scott2011szemeredi}]
\label{lem:sparse-regularity-lemma}
For all $\varepsilon > 0$ and $m \geq 1$, there exists $M(\varepsilon, m)$ such that for every graph $G$
on $n \geq M$ vertices, there exists an $(\varepsilon, p)$-regular partition
$(V_i)_{i = 0}^{t}$ of $V(G)$, where $p = e(G)/\binom{n}{2}$ is the density of
$G$ and $m \leq t \leq M$.
\end{lemma}
Lastly, we need a lemma due to Gerke, Kohayakawa, Rödl, and
Steger~\cite{gerke2007small} stating that in an $(\varepsilon,p)$-lower-regular pair,
almost all subsets of size at least $D/p$, for some $D > 0$, inherit
lower-regularity, with slightly weaker parameters.
\begin{lemma}[Corollary 3.8 in~\cite{gerke2007small}]
\label{lem:small-subsets-regular}
For all $\varepsilon', \delta \in (0, 1)$, there exist positive constants
$\varepsilon_0(\varepsilon', \delta)$ and $D(\varepsilon')$ such that the following holds for all
$0 < \varepsilon \leq \varepsilon_0$ and $p \in (0, 1)$. Suppose $(V_1, V_2)$ is an $(\varepsilon,
p)$-lower-regular pair and $q_1, q_2 \geq Dp^{-1}$. Then the number of pairs
$(Q_1, Q_2)$ with $Q_i \subseteq V_i$ and $|Q_i| = q_i$ ($i = 1, 2$) that are
$(\varepsilon', p)$-lower-regular is at least
\[
(1 - \delta^{\min{\{q_1, q_2\}}}) \binom{|V_1|}{q_1} \binom{|V_2|}{q_2}.
\]
\end{lemma}
% arXiv:2003.03311 -- Covering cycles in sparse graphs (math.CO)
% arXiv:2111.02967 -- An Empirical Comparison of the Quadratic Sieve Factoring Algorithm and the Pollard Rho Factoring Algorithm

\begin{abstract}
One of the most significant challenges in cryptography today is the problem of factoring large integers, since there are no algorithms that can factor in polynomial time, and factoring numbers beyond some limit (200 digits) remains difficult. The security of current cryptosystems depends on the hardness of factoring large public keys. In this work, we implement two existing factoring algorithms -- Pollard rho and the quadratic sieve -- and compare their performance. In addition, we analyze how close the theoretical time complexity of both algorithms is to their actual time complexity, and how the bit length of numbers can affect the quadratic sieve's performance. Finally, we verify whether the quadratic sieve does better than Pollard rho for factoring numbers smaller than 80 bits.
\end{abstract}

\section{Introduction}
The idea of public-key cryptography was first introduced in 1975 by Martin Hellman, Ralph Merkle, and Whitfield Diffie at Stanford University~\cite{ten_years}. Before the public-key era, two people who wanted to exchange secret information without anybody else knowing had to agree in advance on a secret key known only to them. After the invention of public-key systems, two people could exchange secret information without ever meeting each other: the secret message can only be decrypted in a reasonable amount of time using the secret keys possessed by the two people exchanging information. Ron Rivest, Adi Shamir, and Leonard Adleman then introduced the RSA public-key cryptosystem, which is considered more secure than previous cryptosystems and is built on two ideas: public-key encryption and digital signatures~\cite{RSA}. In the key-exchange part, the sender generates a random prime $p$, a base number $g$, and a random number $a$.
Then the sender sends $(p, g, g^a \bmod p)$ to the receiver. After receiving these values, the receiver generates a random number $b$ and sends $g^b \bmod p$ back to the sender. The sender and the receiver can then easily compute the shared secret key from the information they hold~\cite{diffie}. However, for a third party to learn the secret, they would have to compute $g^{ab}$ from the four values $p$, $g$, $g^a \bmod p$, and $g^b \bmod p$. The Diffie--Hellman key exchange algorithm is where the factoring algorithm comes in. If the public key consists of two large primes, each roughly 512, 1024, or 2048 bits, then the third party would have to find the factors of the public key to find out the secret key. There are already many existing factoring algorithms; however, none of them can factor large products in a reasonable (polynomial) time. Nowadays it is feasible to factor 155-decimal-digit numbers, but factoring numbers of more than 150 decimal digits is in general still considered hard~\cite{framework}. Although we cannot examine the runtime of those algorithms on large products of more than 512 bits, we can analyze some of them on relatively small numbers of fewer than 100 bits. Pollard rho and the quadratic sieve are the two popular factoring algorithms we analyze in this work.
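The exchange just described can be illustrated with small numbers. The following toy sketch (illustrative only, not the paper's implementation; the prime and base are arbitrary) shows both parties arriving at the same secret $g^{ab} \bmod p$:

```python
import random

def dh_demo(p, g):
    """Toy Diffie-Hellman exchange over a small prime p with base g.
    Returns the two secrets computed independently by each party;
    both equal g^(a*b) mod p."""
    a = random.randrange(2, p - 1)     # sender's private exponent
    b = random.randrange(2, p - 1)     # receiver's private exponent
    A = pow(g, a, p)                   # transmitted: g^a mod p
    B = pow(g, b, p)                   # transmitted: g^b mod p
    return pow(B, a, p), pow(A, b, p)
```

An eavesdropper sees only $p$, $g$, $A$, and $B$, and must recover $g^{ab} \bmod p$ from them.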
We implement the Pollard rho algorithm based on the birthday paradox and probability theory~\cite{Rho}. We then implement the quadratic sieve algorithm and optimize some important steps using fast Gaussian elimination. Both algorithms serve to factor products of two primes. We compare and analyze the runtime of the two algorithms, and then analyze the runtime of the quadratic sieve when the factors of the products have different bit lengths. \\
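The Gaussian elimination step of the quadratic sieve operates over GF(2): one looks for a subset of relations whose exponent vectors sum to zero modulo 2. With each vector packed into a Python integer (one bit per factor-base prime), elimination reduces to XORs. A minimal sketch of this step, under our own illustrative encoding:

```python
def gf2_dependency(rows, nbits):
    """Find indices of a non-empty subset of rows whose XOR is 0, i.e. a
    GF(2) linear dependency among exponent vectors packed into ints
    (one bit per factor-base prime). Returns None if rows are independent."""
    basis = {}                         # pivot bit -> (reduced row, index set)
    for i, r in enumerate(rows):
        combo = {i}
        for bit in reversed(range(nbits)):
            if not (r >> bit) & 1:
                continue
            if bit in basis:
                br, bc = basis[bit]
                r ^= br                # clear this pivot bit
                combo ^= bc           # track which original rows we used
            else:
                basis[bit] = (r, combo)
                break
        if r == 0:
            return sorted(combo)
    return None
```

In the full algorithm, the returned subset of relations yields a congruence of squares and hence, with probability at least $1/2$, a non-trivial factor.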
The contributions of this work are summarized as follows:
\begin{itemize}
\item Our experiments show that Pollard rho performs better than the quadratic sieve for numbers under 80 bits most of the time.
\item We do an extensive analysis of the runtime of the quadratic sieve under different circumstances: when the bit difference of the two factors is large and when it is small. Our results show that, for products of the same bit length, the average runtime of the quadratic sieve tends to be shorter when the bit difference between the two factors is smaller. We verified that the quadratic sieve does better when the bit difference of the two factors is small.
\end{itemize}
\section{Related Work}
In this section, we briefly review the work related to pollard-rho and the quadratic sieve. In 1983, Joseph L. Gerver implemented the quadratic sieve algorithm (QS), the continued fraction algorithm (CF) of Brillhart and Morrison, and the continued fraction algorithm with early abort modification (CFEA) ~\cite{cfa}. In his work, he used the quadratic sieve to factor into three primes a 47-digit number that had never been factored: 17674971819005665268668200903822757930076\\11. He also compared the runtime of QS, CF, and CFEA. It turns out that QS starts to do better than CF when the product exceeds 40 bits, and better than CFEA when the product exceeds 60 bits. Although QS beats CF and CFEA easily once the product bit length exceeds 60 bits, pollard-rho is still considered theoretically better than QS for products under 100 bits.
Peter Montgomery, an American mathematician who worked at the System Development Corporation and Microsoft Research, later modified the quadratic sieve into the $\textbf{Multiple Polynomial Quadratic Sieve}$. Robert D. Silverman implemented this modification and factored 45-digit numbers in 0.25 hours and 82-digit numbers in 1265 hours ~\cite{multiple}. Before Silverman's time, there were only two implementations of the quadratic sieve algorithm. The second implementation, done as part of the Cunningham Project, used a Cray X-MP supercomputer to factor the number ($10^{71}$ - 1)/9 (about 70 digits) in 9.5 hours ~\cite{multiple}. By 1994, the quadratic sieve was able to factor a 129-digit RSA number ~\cite{tale}. As better supercomputers and parallel machines are built, the quadratic sieve can factor numbers with more digits.
$\textbf{Pollard-Rho factoring algorithm}$. Aminudin et al. analyzed the runtime of pollard-rho. Their experimental results show that their implementation could factor a 44-bit number, 11752700814259, in 7.394 seconds and a 66-bit number, 49808531654765413631, in 28 seconds. They concluded that pollard-rho was significantly faster than Fermat's factorization ~\cite{pollard}. As a comparison, our pollard-rho implementation factors 11752700814259 in 0.0007 seconds and 49808531654765413631 in 0.06 seconds.
\section{Methodology}
We first present the theory behind pollard-rho and our implementation of it. We then show the mathematical steps required to implement the quadratic sieve and the techniques we use to optimize it.
\subsection{Pollard Rho Factorization} The pollard-rho factoring algorithm is a probabilistic method for factoring a composite number $N$: it iteratively computes the greatest common divisor (gcd) of $N$ and the difference of two pseudo-random numbers x and y (both between 1 and $N - 1$) generated by an arbitrary function. The hope is to find a pair x and y such that gcd(x - y, N) is a nontrivial factor of $N$. The method was published by J.M. Pollard in 1975 and is based on the Birthday Paradox ~\cite{birthday}, which states that if we choose m samples from N items with replacement and m is large enough, we are likely to choose some item twice. The pollard-rho algorithm uses the following sequences to generate the x and y values used in the gcd computation:
\begin{center} $x_0$ $\leftarrow$ random positive integer (mod $N$) \end{center}
\begin{center} c $\leftarrow$ random positive integer (mod $N$) \end{center}
\begin{center} $x_i$ $\leftarrow$ $f_c(x_{i-1}) = x_{i - 1}^2 + c$ (mod $N$) \end{center}
\begin{center} $y_0$ $\leftarrow$ $f_c(x_0)$ (mod $N$)\end{center}
\begin{center} $y_i$ $\leftarrow$ $f_c(f_c(y_{i - 1}))$ (mod $N$) \end{center}
Since the sequence takes at most $N$ distinct values, it eventually becomes periodic. We find a nontrivial factor of the composite if $gcd(x-y, N)$ $\neq$ 1 and
$gcd(x-y, N)$ $\neq$ $N$. According to the Birthday Paradox, the expected time to find a collision is proportional to the square root of the size of the sample space\footnote{\url{https://www.cs.umd.edu/users/gasarch/COURSES/456/F20/lecfactoring/bday.pdf}}, so the expected number of steps to find a factor is approximately $N^{\frac{1}{4}}$. This runtime is considered very good, with one flaw: the algorithm may never terminate if it cannot find a nontrivial factor.
The implementation of our algorithm is shown below:
\begin{algorithm}
\caption{Pollard-Rho Algorithm}\label{alg:cap}
\begin{algorithmic}
\State $c \gets rand(1, N-1)$
\State $f_c(x) \gets x * x + c$
\State $x \gets rand(1, N-1)$
\State $y \gets f_c(x)$ (mod $N$)
\While{True}
\State $x \gets f_c(x)$
\State $y \gets f_c(f_c(y))$
\State $d \gets gcd(x-y, N)$
\If{$d$ $\neq$ 1 and $d$ $\neq$ N}
\State return $d$
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
Besides the pseudocode shown above, we add two tricks to the algorithm. First, we check whether $N$ is prime before running the algorithm, to prevent an infinite loop. Second, before running the main loop, we check whether $N$ is divisible by any of the first ten primes, to speed up the algorithm.
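A minimal Python sketch of Algorithm 1 together with the two tricks is given below. This is our own illustrative implementation, not necessarily the exact code used in the experiments; the primality check is a standard Miller-Rabin test:

```python
import math
import random

FIRST_TEN_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29)

def _is_probable_prime(n, rounds=20):
    """Miller-Rabin primality test (probabilistic)."""
    if n < 2:
        return False
    for p in FIRST_TEN_PRIMES:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False           # found a witness: n is composite
    return True

def pollard_rho(n):
    """Return a nontrivial factor of n, or None if n is (probably) prime."""
    if _is_probable_prime(n):      # trick 1: never loop forever on a prime
        return None
    for p in FIRST_TEN_PRIMES:     # trick 2: strip tiny factors first
        if n % p == 0:
            return p
    while True:                    # retry with fresh randomness if d == n
        c = random.randrange(1, n)
        x = y = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n    # tortoise: one application of f_c
            y = (y * y + c) % n
            y = (y * y + c) % n    # hare: two applications of f_c
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d

print(pollard_rho(11752700814259))   # the 44-bit example from Section 2
```

The outer `while True` restarts with a new c when the cycle closes without yielding a factor (d == n), which is the standard remedy for the flaw noted above.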
\subsection{Quadratic Sieve Algorithm}
The basic quadratic sieve algorithm is a more complicated factoring algorithm that consists of several steps.
$\textbf{B-Smooth Numbers}$ Our goal is to factor a number $N$ that is the product of two primes. One key idea of the quadratic sieve is to break numbers down into smaller parts and check whether a large composite can be expressed in terms of smooth numbers. First, we choose a smoothness bound $B$ and take the set of primes less than $B$; the number of such primes is denoted $\pi(B)$.
$\textbf{Parameter M}$ The next step is to use sieving to find a set of $a_i$ such that $b_i^2 \equiv a_i \pmod{N}$. In our implementation of the quadratic sieve, we set $b_1 = \lceil\sqrt{N}\rceil$, then find $a_1$ such that $b_1^2 \equiv a_1 \pmod{N}$. We repeat the process, updating $b_{i+1} = b_i + 1$, until we reach $b_M^2 \equiv a_M \pmod{N}$.
$\textbf{Forming the Matrix}$ After finding the set of $a_i$ in the previous step, we write each $a_i$ as a product of primes from the B-smooth factor base found in step one, generate the exponent vector of each $a_i$, and reduce it mod 2. Example:
The smoothness bound we choose is 10, so the set of primes less than 10 is \{2, 3, 5, 7\}. The number we want to factor is 400289, and $\lceil\sqrt{400289}\rceil$ = 633. We compute $633^2 \equiv 400 = 2^4\cdot5^2 \pmod{400289}$. The vector we generate for $2^4\cdot5^2$ is $\vec{a_1}$ = (4, 0, 2, 0) (mod 2) = (0, 0, 0, 0).
We form a matrix from the exponent vectors of all the $a_i$, then use Gaussian elimination to find a linear combination of rows that sums to the zero vector mod 2.
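The relation-collection and vector-forming steps above can be sketched as follows. The function names are ours, and simple trial division stands in for the actual sieving:

```python
from math import isqrt

def smooth_vector(a, factor_base):
    """Exponent vector of a over the base, reduced mod 2; None if not smooth."""
    exps = []
    for p in factor_base:
        e = 0
        while a % p == 0:
            a //= p
            e += 1
        exps.append(e % 2)
    return exps if a == 1 else None

def relations(N, factor_base, M):
    """Collect (b, a, parity vector) with b^2 ≡ a (mod N) and a B-smooth."""
    rows = []
    b = isqrt(N) + 1                 # start at ceil(sqrt(N))
    for _ in range(M):
        a = b * b % N
        vec = smooth_vector(a, factor_base)
        if vec is not None:
            rows.append((b, a, vec))
        b += 1
    return rows

rows = relations(400289, [2, 3, 5, 7], 50)
print(rows[0])   # (633, 400, [0, 0, 0, 0])  --  400 = 2^4 * 5^2
```

The first relation reproduces the worked example: 633² ≡ 400 (mod 400289), whose parity vector is already zero.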
$\textbf{Finding the Factors}$ Once we have found an identity $b^2\equiv a^2 \pmod{N}$, we can rewrite it as $(b-a)(b+a)\equiv 0 \pmod{N}$. The final step is to compute the greatest common divisor of $(b-a)$ and $N$. If this gcd is nontrivial (neither 1 nor $N$), we have found a factor of $N$. If it is trivial, we increase our smoothness bound $B$ and parameter $M$ and repeat from step one. Algorithm 2 shows the basic quadratic sieve algorithm.
\begin{algorithm}
\caption{Quadratic Sieve Algorithm}\label{alg:cap}
\begin{algorithmic}
\State Given $N, B, M$
\State Generate a list of primes less than $B$
\State $b_1 \gets \sqrt{N}$
\State An empty matrix list $A$
\For{$k \gets 1$ to $M$}
\State $a_k \gets b_k^2$ (mod $N$)
\State $c_k$ $\gets$ exponent vector of $a_k$ (mod 2)
\State Add $c_k$ to $A$
\State $b_{k+1} \gets b_k + 1$
\EndFor
\State Perform Gaussian elimination on $A$ over GF(2)
\State Let $a$, $b$ be products formed from a linear dependency among the rows of $A$ such that $a^2 \equiv b^2$ (mod $N$)
\If{$gcd(a-b, N)$ $\neq$ 1 and $gcd(a-b, N)$ $\neq$ $N$}
\State return $gcd(a-b, N)$
\Else
\State $B$ $\gets$ $B$ + 10
\State $M$ $\gets$ $M$ + 100
\State Repeat the process again
\EndIf
\end{algorithmic}
\end{algorithm}
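A toy end-to-end version of Algorithm 2 is sketched below. It is our own simplified illustration: trial division replaces sieving, and the Gaussian elimination tracks which relations were combined so that a zero row yields a congruence of squares. On the worked example N = 400289 with factor base \{2, 3, 5, 7\} it recovers the factor 613:

```python
from math import gcd, isqrt

def _smooth_exps(a, base):
    """Full exponent vector of a over the base, or None if a is not smooth."""
    exps = []
    for p in base:
        e = 0
        while a % p == 0:
            a //= p
            e += 1
        exps.append(e)
    return exps if a == 1 else None

def toy_quadratic_sieve(N, base, M):
    """Return a nontrivial factor of N, or None if none is found."""
    # Steps 1-2: collect relations b^2 ≡ a (mod N) with a smooth over the base.
    rels = []
    b = isqrt(N) + 1
    for _ in range(M):
        a = b * b % N
        exps = _smooth_exps(a, base)
        if exps is not None:
            rels.append((b, exps))
        b += 1
    # Step 3: Gaussian elimination over GF(2); `hist` records which relations
    # were XOR-ed together, so a zero row gives a subset whose product is a square.
    pivots, deps = {}, []
    for i, (_, exps) in enumerate(rels):
        mask = sum((e % 2) << j for j, e in enumerate(exps))
        hist = 1 << i
        while mask:
            low = mask & -mask
            if low not in pivots:
                pivots[low] = (mask, hist)
                break
            pmask, phist = pivots[low]
            mask ^= pmask
            hist ^= phist
        else:
            deps.append(hist)
    # Step 4: each dependency gives x^2 ≡ y^2 (mod N); try gcd(x - y, N).
    for hist in deps:
        x, total = 1, [0] * len(base)
        for i, (bi, exps) in enumerate(rels):
            if hist >> i & 1:
                x = x * bi % N
                total = [s + t for s, t in zip(total, exps)]
        y = 1
        for p, e in zip(base, total):
            y = y * pow(p, e // 2, N) % N
        d = gcd(x - y, N)
        if 1 < d < N:
            return d
    return None

print(toy_quadratic_sieve(400289, [2, 3, 5, 7], 200))  # 613, since 400289 = 613 * 653
```

A production quadratic sieve differs mainly in step 2, where a sieve over an interval replaces trial division, and in using sparse linear algebra for step 3.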
The asymptotic running time of the quadratic sieve is $O(e^{\sqrt{1.125\ln(N)\ln(\ln(N))}})$ ~\cite{quad_runtime}.
\subsection{Safe Prime Generators}
\hspace{2.5 mm}We want to generate random primes of certain bit lengths, but checking whether a large number is prime can be expensive or even infeasible. Thus, we generate numbers that are prime with high probability but may still be composite with a small probability.
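A sketch of such a generator, assuming a standard Miller-Rabin test (the text does not spell out its method, so the names and parameters here are ours): a number that passes many rounds is prime with overwhelming probability but could, with tiny probability, be composite:

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin: False means composite; True is wrong with prob <= 4^(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False            # a witness proves n composite
    return True

def random_probable_prime(bits):
    """Random probable prime of exactly `bits` bits (top bit and oddness forced)."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

print(random_probable_prime(30))
```

By the prime number theorem roughly one in $\ln(2^{k})$ odd candidates is prime, so only a few dozen attempts are typically needed per k-bit prime.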
\section{Experiments}
In this section, we introduce the experimental setup and present the results of our experiment. Concrete and specific examples will be presented to give a better understanding of our results.
\subsection{Experimental Setup}
\textbf{Dataset} We want to test the performance of the two algorithms on composites of different bit lengths. We used the safe prime method to generate primes yielding composites of 40, 50, and 60 bits. We also want to compare the performance of both algorithms on composites of a fixed bit length whose prime factors have different bit lengths; for example, a 40-bit composite can be generated by multiplying a 5-bit prime by a 35-bit prime. The list of prime combinations we generated is shown in Table 1. In addition, we want to examine the two algorithms' performance on composites of random bit lengths (not necessarily multiples of 5). Thus, we generated 4462 pairs of primes with random bit lengths whose products do not exceed 70 bits. The experiments were run on a personal laptop, a MacBook Pro with 16 GB of RAM and a 2 GHz quad-core Intel Core i5. No parallel machine or GPU was used in any experiment.
\begin{table}
\centering
\begin{tabular}{c|c|c}
\hline
Composite Size & Prime 1 Size & Prime 2 Size\\ [0.1ex]
\hline\hline
40 & 5 & 35 \\
40 & 10 & 30 \\
40 & 15 & 25 \\
40 & 20 & 20 \\
50 & 5 & 45 \\
50 & 10 & 40 \\
50 & 15 & 35 \\
50 & 20 & 30 \\
50 & 25 & 25 \\
60 & 5 & 55 \\
60 & 10 & 50 \\
60 & 15 & 45 \\
60 & 20 & 40 \\
60 & 25 & 35 \\
60 & 30 & 30 \\
\hline
\end{tabular}
\caption{\textnormal{Composites generated by combination of prime factors of different bit lengths}}
\end{table}
\subsection {Bit Length of Products and Runtime of the Two Algorithms Comparison on Random Bit Composites}
First, we want to compare the performance of the quadratic sieve and pollard-rho on composites of random bit lengths below 70 bits. In this experiment, due to the limited time for this project, we cap the time for finding suitable B and M parameters for the quadratic sieve at three minutes. We generated 4462 products of random bit lengths and measured the runtime of both programs. Figure 1 visualizes the runtime of the quadratic sieve on these products, and Figure 2 shows the runtime of pollard-rho on the same set. Of the 4462 products, only 4268 were successfully factored within the time limit. From the two graphs, we see that the maximum runtime of the quadratic sieve is about 1.2 seconds.
In contrast, the highest pollard-rho runtime on these products is about 0.175 seconds. Thus, pollard-rho completely beats the quadratic sieve on random-bit-length products of at most 70 bits. The quadratic sieve takes the longest on products of around 65 bits, and its runtime tends to increase with the bit length of the product, which is reasonable. However, some numbers between 50 and 60 bits took significantly longer than other numbers below 60 bits; here we consider more than 0.6 seconds a long time to factor a number smaller than 60 bits. Table 2 shows the numbers smaller than 60 bits that took more than 0.6 seconds to factor. On those products, pollard-rho still performs very well: all its runtimes are below 0.001 seconds.
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
Product & Factor 1 & Factor 2 & Factor 1 Length & Factor 2 Length & Pollard Runtime\\ [0.5ex]
\hline\hline
27522003582288223 & 2963 & 9288560102021 & 12 & 44 & 0.0000472\\
1751357076372217 & 4441 & 394360971937 & 13 & 39 & 0.0001149\\
349739602979093 & 291148573 & 1201241 & 29 & 21 & 0.0007229\\
185456974731188183 & 18796061 & 9866800003 & 25 & 34 & 0.0002370\\
\hline
\end{tabular}
\caption{\textnormal{Products that take the quadratic sieve more than 0.6 seconds to factor}}
\end{table}
For pollard-rho, it is not surprising that the runtime gradually increases with the bit length of the product: as the number gets larger, the number of iterations needed to hit a correct candidate factor grows. There is a significant jump in the runtime of pollard-rho from 60 bits to 70 bits.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Quad_time.jpg}
\caption{The histogram shows the bit length of composites and the corresponding runtime for the quadratic sieve algorithm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Pollard_time.jpg}
\caption{Bit length of composites and the runtime corresponding to that bit length for Pollard $\rho$}
\end{figure}
\subsection{Analysis on Products Quadratic Sieve Performs Better than Pollard-Rho}
The following section compares the runtime between quadratic sieve and pollard rho for products of 40 bits, 50 bits, and 60 bits.
We randomly generated 1276 40-bit products, 2749 50-bit products, and 3561 60-bit products, and factored them with both algorithms. The results show that pollard-rho is faster than the quadratic sieve on all 40-bit products; the quadratic sieve beats pollard-rho on two 50-bit products and on five 60-bit products. Table 3 lists the products on which the quadratic sieve does better than pollard-rho. For all of them, the two factors have the same bit length. The average runtime on products of two 25-bit primes is 0.140562 seconds for the quadratic sieve and 0.004899 seconds for pollard-rho; on products of two 30-bit primes, it is 0.224792 seconds for the quadratic sieve and 0.047880 seconds for pollard-rho.
For every product in Table 3 except the one with factors 633488353 and 674203633, the pollard-rho runtime is above its average on products with factors of the same bit length. On the other hand, the quadratic sieve runtime on these products is much smaller than its average on other 25-25-bit and 30-30-bit products. Thus, we conclude that the quadratic sieve beats pollard-rho on products of equal-bit-length factors when pollard-rho performs below its average and the quadratic sieve performs above its average.
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
Factor 1 & Factor 2 & Factor 1 Length & Factor 2 Length & Pollard Runtime & Quad Runtime\\ [0.5ex]
\hline\hline
17410997 & 19124933 & 25 & 25 & 0.0147488 &0.0138339996 \\
21094421 & 21577789 & 25 & 25 & 0.0104671 & 0.0087039470\\
642478871 & 674265899 & 30 & 30 & 0.0537941 & 0.0379431247\\
630635641 & 563503649 & 30 & 30 & 0.058249 & 0.0529510974\\
718778023 & 648638647 & 30 & 30 & 0.0558028 & 0.0537159442\\
633488353 & 674203633 & 30 & 30 & 0.0393589 & 0.0285172462\\
639632243 & 758675243 & 30 & 30 & 0.053247 & 0.0451288223\\
\hline
\end{tabular}
\caption{\textnormal{Products on which the quadratic sieve does better than pollard-rho}}
\end{table}
\subsection{Failure Rate of Quadratic Sieve for 40-bit, 50-bit, and 60-bit Products Within Three Minutes}
Due to the time limits of this experiment, we cannot allow the program unlimited time on numbers that are hard to factor. Thus, if it takes the program more than three minutes to find suitable B and M values and factor a product, we cut the program off and move on to the next product.
$\textbf{Failures on 40-bit Products}$
There are 14 40-bit products the quadratic sieve cannot factor; they are shown in Table 4. From Table 4, we see that every product the quadratic sieve cannot factor is composed of a 10-bit prime and a 30-bit prime.
\begin{table}
\centering
\begin{tabular}{c|c|c}
\hline
Product & Prime 1 Size & Prime 2 Size\\ [0.1ex]
\hline\hline
369062373143 & 10 & 30 \\
534106988197 & 10 & 30 \\
508085497589 & 10 & 30 \\
369901521617 & 10 & 30 \\
369901521617 & 10 & 30 \\
458343342553 & 10 & 30 \\
341238699311 & 10 & 30 \\
421135689817 & 10 & 30 \\
563755932497 & 10 & 30 \\
482560209653 & 10 & 30 \\
477265583519 & 10 & 30 \\
456886122281 & 10 & 30 \\
442321850393 & 10 & 30 \\
386706654493 & 10 & 30 \\
\hline
\end{tabular}
\caption{\textnormal{40-bit products QS cannot factor within three minutes}}
\end{table}
$\textbf{Failures on 50-bit Products}$
For 50-bit products, there are 206 products the quadratic sieve cannot factor within three minutes. Table 5 shows the bit lengths of the two primes and the number of products that could not be factored in time. From Table 5, we see that the largest group of unfactored products consists of combinations of 10-bit and 40-bit primes; products that contain 10-bit primes appear harder to factor.
\begin{table}
\centering
\begin{tabular}{c|c|c}
\hline
Prime 1 Size & Prime 2 Size & Failure Count\\ [0.1ex]
\hline\hline
5 & 45 & 38 \\
10 & 40 & 51 \\
15 & 35 & 38 \\
20 & 30 & 37 \\
25 & 25 & 42 \\
\hline
\end{tabular}
\caption{\textnormal{Counts of 50-bit products that cannot be factored, for each combination of prime factor sizes}}
\end{table}
$\textbf{Failures on 60-bit Products}$
Table 6 shows the number of 60-bit products that cannot be factored. For 60-bit products, however, the largest group of unfactored products is not the combination of a 10-bit and a 50-bit prime but the combination of a 5-bit and a 55-bit prime. A possible reason lies in the combination of bit lengths of the two prime factors. Our hypothesis is that when the difference in bit lengths of the two primes is large, the quadratic sieve needs more time to factor the product. We verify this hypothesis in the following sections.
\begin{table}
\centering
\begin{tabular}{c|c|c}
\hline
Prime 1 Size & Prime 2 Size & Failure Count\\ [0.1ex]
\hline\hline
5 & 55 & 417 \\
10 & 50 & 383 \\
15 & 45 & 367 \\
20 & 40 & 383 \\
25 & 35 & 373 \\
30 & 30 & 406 \\
\hline
\end{tabular}
\caption{\textnormal{Counts of 60-bit products that cannot be factored, for each combination of prime factor sizes}}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
\hline
Bit Difference & Factor 1 Bit Length & Factor 2 Bit Length & Percentage that Can be Factored\\ [0.1ex]
\hline\hline
30 & 5 & 35 & 1.0000 \\
20 & 10 & 30 & 0.9646 \\
10 & 15 & 25 & 1.0000 \\
0 & 20 & 20 & 1.0000 \\
40 & 5 & 45 & 0.9023\\
30 & 10 & 40 & 0.9136 \\
20 & 15 & 35 & 0.9354 \\
10 & 20 & 30 & 0.9370 \\
0 & 25 & 25 & 0.9294 \\
50 & 5 & 55 & 0.2992 \\
40 & 10 & 50 & 0.3541 \\
30 & 15 & 45 & 0.3822 \\
20 & 20 & 40 & 0.3541 \\
10 & 25 & 35 & 0.3721 \\
0 & 30 & 30 & 0.3142 \\
\hline
\end{tabular}
\caption{\textnormal{Percentage of products successfully factored within three minutes, for products of different bit lengths and different bit differences between the two factors}}
\end{table}
\subsection{Percentage of Products that Cannot Be Factored For 40-bit, 50-bit and 60-bit Products}
Because the sample sizes for 40-bit, 50-bit and 60-bit products differ, counting how many products the quadratic sieve cannot factor at each bit length is insufficient; we must look at the percentage of products it cannot factor for a given product bit length. The results show that about 1.097 percent of 40-bit products cannot be factored within three minutes; for 50-bit products the figure is about 9.657 percent; and for 60-bit products the failure rate rises to 65.403 percent. Thus, the longer the product, the harder it is for the quadratic sieve to find suitable B and M values and factor it in time. As the product grows, the algorithm may spend most of its time searching for the correct B and M values instead of doing the actual factoring work, such as Gaussian elimination.
\subsection{How Well Does Quadratic Sieve Perform When the Bit Difference of Primes Changes}
Table 7 shows how the quadratic sieve performs as the bit difference of the two prime factors varies; in particular, it shows the percentage of numbers that can be factored within three minutes for each combination of prime bit lengths.
From Table 7, we see that except for 40-bit products, the success rate within three minutes is lowest when the bit difference of the two primes is largest. As the bit difference shrinks, the success rate increases for 50-bit and 60-bit products.
\subsection{Comparing Average Runtime for Quadratic Sieve when the Difference of Bit Lengths for the Two Primes are large and small}
The previous section concluded that the percentage of products of a given bit length that the quadratic sieve can factor within three minutes is lowest when the bit difference of the two primes is large. We therefore examine whether changing the bit difference between the two factors, for products of fixed bit length, affects the average runtime of the quadratic sieve. Table 8 shows the average quadratic sieve runtime for products of different bit lengths and different bit differences.
Indeed, from Table 8, we see that for 40-bit, 50-bit and 60-bit products the average time in seconds to factor a product is highest when the bit difference is greatest ~\cite{gaussian}. Thus, the runtime of the quadratic sieve depends not only on the bit length of the product but also on the bit difference between its two prime factors.
\begin{table}
\centering
\begin{tabular}{c|c|c}
\hline
Product Size in Bits & Bit Difference & Average Runtime in Seconds \\ [0.1ex]
\hline\hline
40 & 30 & 0.123 \\
40 & 20 & 0.122 \\
40 & 10 & 0.096 \\
40 & 0 & 0.098 \\
50 & 40 & 0.245 \\
50 & 30 & 0.154 \\
50 & 20 & 0.146 \\
50 & 10 & 0.143 \\
50 & 0 & 0.140 \\
60 & 50 & 0.242 \\
60 & 40 & 0.233 \\
60 & 30 & 0.240 \\
60 & 20 & 0.236 \\
60 & 10 & 0.242 \\
60 & 0 & 0.224 \\
\hline
\end{tabular}
\caption{\textnormal{Shows the average runtime in seconds for different size products}}
\end{table}
\section{Limitations}
\textbf{Computer Resources}
We had limited computing power for running the two algorithms; the only computer we used is a MacBook Pro with 16 GB of memory. The quadratic sieve would speed up on a parallel machine, since Gaussian elimination can be performed on multiple rows of a matrix simultaneously, but on an ordinary personal computer the algorithm takes a long time to find the correct B and M values. With the three-minute limit, the success rate of the quadratic sieve drops sharply once the product exceeds 70 bits, so we did not run the quadratic sieve and pollard-rho on numbers greater than 70 bits. In theory, however, the quadratic sieve should do better than pollard-rho when the product exceeds 100 bits. We ran both algorithms on an 80-bit number: pollard-rho took about 22.95 seconds and the quadratic sieve about 44.25 seconds. We then ran both on a 100-bit number: pollard-rho did not factor it within ten minutes, while the quadratic sieve factored it in 587.66 seconds. Although this is only one 100-bit example, it at least agrees with the theory.
\textbf{Time Limits} Although the three-minute limit greatly reduced the time spent factoring thousands of examples, we did not record the time spent finding the correct B and M values for the quadratic sieve, which would have been useful. In many cases, the quadratic sieve fails to factor a product merely because it cannot find suitable B and M values in time. With a better algorithm for choosing B and M, the success rate of the quadratic sieve would be much higher.
\textbf{Limited Sample Sizes} Because we could only analyze 40-bit, 50-bit and 60-bit products thoroughly, the comparisons we can draw between pollard-rho and the quadratic sieve are limited. We cannot demonstrate a significant advantage of the quadratic sieve over pollard-rho on 100-bit numbers: the quadratic sieve would take about ten minutes per number for each pair of B and M values, and pollard-rho possibly longer, so the time cost of large bit lengths is very high. Moreover, with a three-minute limit the failure rate of the quadratic sieve on 80-bit numbers is about 90 percent. The limited sample size restricts the comparisons we can make between the two algorithms.
\section{Conclusion}
In this paper, we analyzed the performance of the quadratic sieve and pollard-rho from various aspects. Pollard-rho dominates on composites smaller than 80 bits; however, we were still able to find some composites on which the quadratic sieve outperforms pollard-rho. We also conclude that for products of the same bit size, the quadratic sieve is more likely to succeed when the bit difference between the factors is small, and its average performance improves as the bit difference shrinks.
\section{Summary and Outlook}
\subsection{Time Span to Run the Algorithms}Increase the time limit for factoring each product. It took 587.66 seconds, almost 10 minutes, for the quadratic sieve to factor the 100-bit product; counting the time to find the B and M values, factoring each such product could take 20 minutes, so running the full set of experiments might take a month.
\subsection{Factor More Specific Bit Length Primes}Find how much time it takes to factor a 40-bit number when the two factors are 5 and 35 bits, 6 and 34 bits, 7 and 33 bits, and so on. Do the same for 45-bit, 50-bit, 55-bit, and 60-bit products.
\subsection{Finding a Better Algorithm for Finding B and M Values} Instead of starting from B = 10 and M = 10 and alternately incrementing each by ten, for larger numbers we could start from B = 100 and M = 1000 and alternately increment the two values by 100 or other intervals.
\bibliographystyle{unsrtnat}
| {
"timestamp": "2021-11-05T01:20:09",
"yymm": "2111",
"arxiv_id": "2111.02967",
"language": "en",
"url": "https://arxiv.org/abs/2111.02967",
"abstract": "One of the most significant challenges on cryptography today is the problem of factoring large integers since there are no algorithms that can factor in polynomial time, and factoring large numbers more than some limits(200 digits) remain difficult. The security of the current cryptosystems depends on the hardness of factoring large public keys. In this work, we want to implement two existing factoring algorithms - pollard-rho and quadratic sieve - and compare their performance. In addition, we want to analyze how close is the theoretical time complexity of both algorithms compared to their actual time complexity and how bit length of numbers can affect quadratic sieve's performance. Finally, we verify whether the quadratic sieve would do better than pollard-rho for factoring numbers smaller than 80 bits.",
"subjects": "Cryptography and Security (cs.CR); Computational Complexity (cs.CC)",
"title": "An Empirical Comparison of the Quadratic Sieve Factoring Algorithm and the Pollard Rho Factoring Algorithm",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9648551566309688,
"lm_q2_score": 0.734119526900183,
"lm_q1q2_score": 0.7083190111131288
} |
https://arxiv.org/abs/0711.2541 | Schubert calculus and cohomology of Lie groups. Part I. 1-connected Lie groups | Let $G$ be a compact and $1$--connected Lie group with a maximal torus $T$. Based on Schubert calculus on the flag manifold $G/T$ [15] we construct the integral cohomology ring $H^{\ast}(G)$ uniformly for all $G$. | \section{Introduction}
Let $G$ be a compact, $1$--connected and simple Lie group, namely, $G$ is one
of the classical groups $SU(n),Spin(n),Sp(n)$, or one of the exceptional
groups $G_{2},F_{4},E_{6},E_{7},E_{8}$. These groups constitute the
cornerstones of all compact Lie groups in view of Cartan's classification
Theorem \cite[p.674]{Wh}. Our main concern is the cohomology ring $H^{\ast
}(G;\mathbb{F})$ with the coefficients $\mathbb{F}$ either the ring
$\mathbb{Z}$ of integers, the field $\mathbb{Q}$ of rationals, or one of the
finite fields $\mathbb{F}_{p}$.
The problem of determining the ring $H^{\ast}(G;\mathbb{F})$ was initiated by
E. Cartan in 1929, and has been one of the main focuses of 20$^{th}$
century algebraic topology, for its relevance to the topology of the loop
space $\Omega G$, classifying space $BG$, homogeneous spaces $G/H$, as well as
the homology theory of H--spaces \cite{Mi, Ka}. Just some of the many
mathematicians involved were Brauer, Pontryagin, Hopf, Samelson, Yan, Leray,
Chevalley, Miller, Borel, Araki, Ka\v{c} and Toda \cite{C, Sa, K2}. However,
despite many achievements over about one century, two important questions remain open.
Historically, the rings $H^{\ast}(G;\mathbb{F}_{p})$ were obtained by quite
different methods, using case-by-case computations depending on $G$ and $p$
\cite{B2,B3, B4, B5, B6, B7, BC, AS, A1, A2, Ko, P}. The question to find a
general procedure to compute $H^{\ast}(G;\mathbb{F}_{p})$ was raised and
studied by Ka\v{c} \cite{K2}, who translated the ring structure for $p\neq2$
and additive structure for $p=2$ into a purely \textquotedblleft Weyl group
question\textquotedblright\ by showing that these structures are entirely
determined by the degrees of basic $W$--invariants over $\mathbb{Q}$ and the
degrees of the basic \textquotedblleft generalized
invariants\textquotedblright\ over $\mathbb{F}_{p}$. However, in order to
compute the degrees of those invariants Ka\v{c} made extensive use of the
previous computation of $H^{\ast}(G;\mathbb{F}_{p})$ \textquotedblleft in the
inverse order\textquotedblright\ \cite{K2}. A similar question was asked by Lin
\cite{Lin3} for the case of $p=2$ and in the context of homotopy theory of
finite H--spaces.
The traditional method for computing the algebra $H^{\ast}(G;\mathbb{F})$ over
$\mathbb{F=Q}$ or $\mathbb{F}_{p}$ relies largely on the classification of
finite dimensional Hopf algebras due to Hopf, Samelson and Borel \cite{MM}, which
does not directly apply to the most important but subtle case
$\mathbb{F=Z}$. Apart from the facts that the ring $H^{\ast}(G;\mathbb{Z})$
has been determined for the classical $G$ by Borel \cite{B5} and Pittie
\cite{P}, and for $G=G_{2},F_{4}$ by Borel \cite{B5, B6}, the cases of
$G=E_{6},E_{7}$ and $E_{8}$ have remained a challenge for decades.
This paper is devoted to a general procedure for constructing the cohomology
$H^{\ast}(G;\mathbb{F})$ for all simple $G$ and $\mathbb{F}=\mathbb{Z},$
$\mathbb{Q}$ or $\mathbb{F}_{p}$. In particular, with respect to our explicitly
constructed generators, a unified additive presentation of $H^{\ast
}(G;\mathbb{F})$ is obtained in Theorems 1--2 of \S 5, and the rings $H^{\ast
}(G;\mathbb{F})$ for all exceptional Lie groups $G$ are determined in Theorems
3--6 of \S 6. The method and constructions of this paper have been applied in
\cite{DZ4, D2} to determine the structure of $H^{\ast}(G;\mathbb{F}_{p})$ as a
Hopf algebra over the Steenrod algebra $\mathcal{A}_{p}$, and the structure of
$H^{\ast}(G;\mathbb{Z})$ as a near--Hopf ring, for all exceptional Lie groups.
\bigskip
Our approach begins with the Leray--Serre spectral sequence $\{E_{r}^{\ast,\ast}(G;\mathbb{F}),d_{r}\}$ of the fibration
\begin{enumerate}
\item[(1.1)] $T\rightarrow G\overset{\pi}{\rightarrow}G/T$,
\end{enumerate}
\noindent where $G$ is a simple Lie group with a fixed maximal torus $T\subset
G$. Let $W$ be the Weyl group of $G$ and let $\{\omega_{i}\}_{1\leq i\leq
n}\subset H^{2}(G/T;\mathbb{Z})$ be a set of \textsl{fundamental dominant
weights }of $G$, $n=\dim T$ \cite{BH}. It is well known that (\cite{K1, M1})
\begin{enumerate}
\item[(1.2)] $E_{2}^{p,q}(G;\mathbb{F})=H^{p}(G/T;H^{q}(T;\mathbb{F}))=H^{p}(G/T)\otimes\Lambda_{\mathbb{F}}^{q}(t_{1},\cdots,t_{n})$;
\item[(1.3)] the differential $d_{2}:E_{2}^{p,q}(G;\mathbb{F})\rightarrow
E_{2}^{p+2,q-1}(G;\mathbb{F})$ is given by

$\qquad d_{2}(x\otimes t_{k})=x\omega_{k}\otimes1$, $x\in H^{p}(G/T;\mathbb{F})$, $1\leq k\leq n$.
\end{enumerate}
\noindent where $t_{i}\in H^{1}(T;\mathbb{F})$ is the class mapped to
$\omega_{i}$ under the Borel transgression $\tau:H^{1}(T;\mathbb{F})\rightarrow H^{2}(G/T;\mathbb{F})$ \cite{B1}, and where $\Lambda_{\mathbb{F}}^{\ast}(t_{1},\cdots,t_{n})$ is the exterior algebra on $t_{1},\cdots,t_{n}$.
An intimate relationship between the two \textsl{additive groups} $H^{\ast
}(G;\mathbb{F})$ and $E_{3}^{\ast,\ast}(G;\mathbb{F})$ has long been expected.
Leray \cite{L} established that $H^{\ast}(G;\mathbb{Q})=E_{3}^{\ast,\ast}(G;\mathbb{Q})$. Serre \cite{Se} showed that $H^{\ast
}(G;\mathbb{F}_{p})=E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})$. Marlin \cite{M1}
conjectured that $E_{3}^{\ast,\ast}(G;\mathbb{Z})=H^{\ast}(G;\mathbb{Z})$.
Conceivably, a thorough understanding of $E_{3}^{\ast,\ast}(G;\mathbb{Z})$ may
point a way toward $H^{\ast}(G;\mathbb{Z})$.
In principle, the two ingredients (1.2) and (1.3) required for computing
$E_{3}^{\ast,\ast}(G;\mathbb{Z})$ are known in the context of
\textsl{Schubert calculus} \cite{K1, M1}: the \textsl{basis theorem} of
Chevalley asserts that the set of \textsl{Schubert classes} $\{s_{w}\}_{w\in
W}$ on $G/T$ furnishes $H^{\ast}(G/T;\mathbb{Z})$ with an additive basis; the
\textsl{Chevalley formula} that expands the product of a Schubert class
$s_{w}$ with a weight $\omega_{i}$ characterizes $d_{2}$ explicitly \cite{Ch,
BGG, D1}. However, direct calculation based on these ideas fails to access
$E_{3}^{\ast,\ast}(G;\mathbb{Z})$ \cite{K1, M1}, not to mention that our
interest is the\textsl{ ring structure} on $H^{\ast}(G;\mathbb{Z})$.
The principal idea of this paper is the following. The basis theorem
merely describes $H^{\ast}(G/T;\mathbb{Z})$ additively. Resorting to its ring
structure, one may attempt a compact presentation of $H^{\ast}(G/T;\mathbb{Z})$,
such as a quotient of a polynomial ring. To merge the geometry of Schubert
classes into the calculation of $H^{\ast}(G;\mathbb{Z})$, and to reduce the
computational cost from the very beginning, one may further require that the
generators for $H^{\ast}(G/T;\mathbb{Z})$ be taken from \textsl{a minimal set
of Schubert classes} on $G/T$. It is the fulfilment of this task in \cite{DZ3}
for all exceptional $G$ that reveals certain general features of the ring
$H^{\ast}(G/T;\mathbb{Z})$, summarized in Lemma 2.1 of \S 2.
Starting from Lemma 2.1, the ring $E_{3}^{\ast,\ast}(G;\mathbb{F})$ (resp.
$H^{\ast}(G;\mathbb{F})$) can be calculated effectively and uniformly for all
$G$ and $\mathbb{F}$. More precisely,
\begin{quote}
i) \textsl{a set of generators for }$E_{3}^{\ast,\ast}(G;\mathbb{F})$\textsl{
(resp. for }$H^{\ast}(G;\mathbb{F})$\textsl{)} \textsl{can be explicitly
constructed from certain polynomials in the Schubert classes on }$G/T$
\textsl{(e.g. Definition 2.9)};
ii) \textsl{in terms of these generators, the ring }$H^{\ast}(G;\mathbb{F})$\textsl{ can be explicitly presented} \textsl{(e.g. Theorems 1--2 in \S 5;
Theorems 3--6 in \S 6)}.
\end{quote}
\noindent Moreover, the sequel works \cite{D2, DZ4} demonstrate that
\begin{quote}
iii) \textsl{the structure of }$H^{\ast}(G;\mathbb{F}_{p})$\textsl{ as a Hopf
algebra over the Steenrod algebra (resp. the structure of }$H^{\ast
}(G;\mathbb{Z})$\textsl{ as a near--Hopf ring) can be boiled down to
computation with those polynomials in the Schubert classes (e.g. Remark 6.6)}.
\end{quote}
\noindent Along the way we confirm the conjecture $E_{3}^{\ast,\ast
}(G;\mathbb{Z})=E_{\infty}^{\ast,\ast}(G;\mathbb{Z})$ by Ka\v{c} \cite{K1},
and the stronger one $E_{3}^{\ast,\ast}(G;\mathbb{Z})=H^{\ast}(G;\mathbb{Z})$
by Marlin \cite{M1}.
Hopefully, our approach to $H^{\ast}(G;\mathbb{F})$ may admit natural
generalizations. Let $\mathcal{A}$ be an extended Cartan matrix with
associated group $\mathcal{G}$ of the Ka\v{c}--Moody type, and let
$\mathcal{B}\subset$ $\mathcal{G}$ be a Borel subgroup \cite{Ku}. One may
expect that results analogous to Lemma 2.1 hold for the complete flag variety
$\mathcal{G}/\mathcal{B}$, so that the method of this paper can be extended to
formulate $H^{\ast}(\mathcal{G};\mathbb{F})$. We emphasize that the problem of
understanding $H^{\ast}(\mathcal{G};\mathbb{F})$ in the context of Schubert
calculus has been raised by Ka\v{c} \cite{K1}, and that as the partition of
$\mathcal{G}/\mathcal{B}$ by the Schubert cells is completely determined by
the Cartan matrix $\mathcal{A}$ (as in the classical situation $G/T$), the
techniques developed in our previous works \cite{D1, DZ1, DZ2, DZ3} may be
applicable to extend Lemma 2.1 from $G/T$ to $\mathcal{G}/\mathcal{B}$.
\section{Construction based on the Schubert presentation of $H^{\ast
}(G/T;\mathbb{Z})$}
For a compact Lie group $G$ with a maximal torus $T$, the quotient space $G/T$
is called the \textsl{complete flag manifold} of $G$. In the modern form taken
by the classical topic known as \textquotedblleft enumerative geometry\textquotedblright\ in the 19$^{th}$
century, Schubert calculus amounts to the determination of the ring $H^{\ast
}(G/T;\mathbb{Z})$ \cite[p 331]{W}.
Based on the formula for multiplying Schubert classes in \cite{D1}, an explicit
presentation of the ring $H^{\ast}(G/T;\mathbb{Z})$ as the quotient of a
polynomial ring in certain Schubert classes on $G/T$ was obtained in
\cite{DZ3}. In Lemmas 2.1, 2.5 and 2.6 we give concise presentations of
$H^{\ast}(G/T;\mathbb{F})$ (in accordance with $\mathbb{F}=\mathbb{Z},\mathbb{Q}$ and $\mathbb{F}_{p}$) that suffice for the purpose
of this paper. In terms of the defining polynomials for the ideals in these
presentations, a set of so-called \textsl{primary polynomials} is specified in
\S 2.3. They are utilized in \S 2.4 to construct a set of elements in
$E_{3}^{\ast,1}(G;\mathbb{F})$, called the \textsl{primary forms
in }$E_{3}^{\ast,1}(G;\mathbb{F})$. The ring $H^{\ast}(G;\mathbb{F})$ will be
formulated in terms of these elements in \S 5 and \S 6.
\begin{center}
\textbf{2.1. Presentation of the ring }$H^{\ast}(G/T;\mathbb{Z})$.
\end{center}
In this paper all elements in a graded vector space (or graded $\mathbb{Z}$--module) are homogeneous. Given a subset $\{f_{1},\cdots,f_{m}\}$ of a ring,
write $\left\langle f_{1},\cdots,f_{m}\right\rangle $ for the ideal generated
by $f_{1},\cdots,f_{m}$. For a set $S$ denote by $\left\vert S\right\vert $
its cardinality.
We begin with the next result, shown in \cite[Theorem 6]{DZ3}, which summarizes
general properties of the Schubert presentations of the rings $H^{\ast
}(G/T;\mathbb{Z})$ for all simple Lie groups $G$.
\bigskip
\noindent\textbf{Lemma 2.1.} \textsl{For each simple Lie group }$G$\textsl{
there exists a set }$\left\{ y_{1},\cdots,y_{m}\right\} $\textsl{ of Schubert
classes on }$G/T$\textsl{ with }$\deg y_{i}$\textsl{ }$>2$\textsl{ such that
}$\{\omega_{1},\cdots,\omega_{n},y_{1},\cdots,y_{m}\}$\textsl{ is a minimal
set of generators for the ring }$H^{\ast}(G/T;\mathbb{Z})$\textsl{.}
\textsl{Moreover, with respect to those generators one has the presentation}
\begin{enumerate}
\item[(2.1)] $H^{\ast}(G/T;\mathbb{Z})=\mathbb{Z}[\omega_{1},\cdots,\omega
_{n},y_{1},\cdots,y_{m}]/\left\langle e_{i},f_{j},g_{j}\right\rangle _{1\leq
i\leq k;1\leq j\leq m}$\textsl{,}
\end{enumerate}
\noindent\textsl{where }$k=n-\left\vert \{\deg g_{j}\mid1\leq j\leq
m\}\right\vert $\textsl{, and where}
\begin{quote}
\textsl{i) for each }$1\leq i\leq k$\textsl{, }$e_{i}\in\left\langle
\omega_{1},\cdots,\omega_{n}\right\rangle $\textsl{;}
\textsl{ii) for each }$1\leq j\leq m$\textsl{, the pair }$(f_{j},g_{j})$\textsl{ of polynomials is related to the Schubert class }$y_{j}$\textsl{ in
the fashion}
$\qquad f_{j}$\textsl{ }$=$\textsl{ }$p_{j}y_{j}+\alpha_{j}$\textsl{,}$\quad$
$\qquad g_{j}=y_{j}^{k_{j}}+\beta_{j}$\textsl{, }$1\leq j\leq m$\textsl{,}
\textsl{with }$p_{j}\in\{2,3,5\}$\textsl{, }$\alpha_{j},\beta_{j}\in\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle $\textsl{.}$\square$
\end{quote}
\bigskip
By the minimality of the generating set in Lemma 2.1, the sets of integers
\begin{quote}
$\{k,m\}$, $\{\deg e_{i}\}_{1\leq i\leq k}$, $\{\deg y_{j}\}_{1\leq j\leq m}$,
$\{p_{j}\}_{1\leq j\leq m}$, $\{k_{j}\}_{1\leq j\leq m}$
\end{quote}
\noindent appearing in (2.1) can be shown to be invariants of $G$, and will be
called \textsl{the basic data} of $G$. For all simple Lie groups the basic
data have been determined in \cite{DZ3} and are tabulated below:
\begin{center}
\begin{tabular}
[c]{l}\hline\hline
\begin{tabular}
[c]{l|llll}
$G$ & $SU(n)$ & $Sp(n)$ & $Spin(2n)$ & $Spin(2n+1)$\\\hline
$(k,m)$ & $(n-1,0)$ & $(n,0)$ & $([\frac{n+3}{2}],[\frac{n-2}{2}])$ &
$([\frac{n+2}{2}],[\frac{n-1}{2}])$\\
$\{\deg e_{{\small i}}\}$ & $\{2i+2\}$ & $\{4i\}$ & $\{4t,2n,2^{[\ln
(n-1)]+2}\}_{{\small 1\leq t\leq\lbrack}\frac{n-1}{2}{\small ]}}$ &
$\{4t,2^{[\ln n]+2}\}_{{\small 1\leq t\leq\lbrack}\frac{n}{2}{\small ]}}$\\
$\{\deg y_{{\small j}}\}$ &  &  & $\{4j+2\}$ & $\{4j+2\}$\\
$\{p_{{\small j}}\}$ &  &  & $\{2,\cdots,2\}$ & $\{2,\cdots,2\}$\\
$\{k_{{\small j}}\}$ &  &  & $\{2^{{\small [}\ln\frac{n-1}{2j+1}{\small ]+1}}\}$ & $\{2^{{\small [}\ln\frac{n}{2j+1}{\small ]+1}}\}$
\end{tabular}
\\\hline\hline
\end{tabular}
{\small Table 1. Basic data for the classical groups.}
\bigskip

\begin{tabular}
[c]{l}\hline\hline
\begin{tabular}
[c]{l|lllll}
$G$ & $G_{2}$ & $F_{4}$ & $E_{6}$ & $E_{7}$ & $E_{8}$\\\hline
$(k,m)$ & $(1,1)$ & $(2,2)$ & $(4,2)$ & $(3,4)$ & $(3,7)$\\
$\{\deg e_{i}\}$ & $\{4\}$ & $\{4,16\}$ & $\{4,10,16,18\}$ & $\{4,16,28\}$ &
$\{4,16,28\}$\\
$\{\deg y_{j}\}$ & $\{6\}$ & $\{6,8\}$ & $\{6,8\}$ & $\{6,8,10,18\}$ &
$\{6,8,10,12,18,20,30\}$\\
$\{p_{j}\}$ & $\{2\}$ & $\{2,3\}$ & $\{2,3\}$ & $\{2,3,2,2\}$ &
$\{2,3,2,5,2,3,2\}$\\
$\{k_{j}\}$ & $\{2\}$ & $\{2,3\}$ & $\{2,3\}$ & $\{2,3,2,2\}$ &
$\{8,3,4,5,2,3,2\}$
\end{tabular}
\\\hline\hline
\end{tabular}
{\small Table 2. Basic data for the exceptional Lie groups.}
\end{center}
For $G\neq E_{8}$ the polynomials $e_{i},f_{j},g_{j}$ in (2.1) can be shown to
be algebraically independent in $\mathbb{Z}[\omega_{1},\cdots,\omega_{n};y_{1},\cdots,y_{m}]$. In contrast, in terms of the basic data for $E_{8}$
given in the last column of Table 2, there appears the following phenomenon,
which will cause a few additional concerns for the case $G=E_{8}$ in our
unified approach to $H^{\ast}(G;\mathbb{F})$ (see formula (5.3) in \cite{DZ3}).
\bigskip
\noindent\textbf{Lemma 2.2.} \textsl{For }$G=E_{8}$\textsl{ there exists a
polynomial }$f\in\mathbb{Z}[\omega_{1},\cdots,\omega_{8};y_{1},\cdots,y_{7}]$\textsl{
of the form }
\begin{quote}
$f=2y_{4}^{5}-y_{6}^{3}+y_{7}^{2}+\beta$\textsl{,}$\quad\beta\in\left\langle
\omega_{1},\cdots,\omega_{8}\right\rangle $\textsl{, }
\end{quote}
\noindent\textsl{so that}
\begin{enumerate}
\item[(2.2)] $\left\{
\begin{tabular}
[c]{l}
$g_{4}=-12f+5y_{4}^{4}f_{4}-4y_{6}^{2}f_{6}+6y_{7}f_{7}$;\\
$g_{6}=-10f+4y_{4}^{4}f_{4}-3y_{6}^{2}f_{6}+5y_{7}f_{7}$;\\
$g_{7}=15f-6y_{4}^{4}f_{4}+5y_{6}^{2}f_{6}-7y_{7}f_{7}$.
\end{tabular}
\ \ \ \ \ \ \right. \square$
\end{enumerate}
\bigskip
\noindent\textbf{Remark 2.3.} Since the set $\left\{ \omega_{1},\cdots
,\omega_{n}\right\} $ consists of all Schubert classes on $G/T$ with
cohomology degree $2$ (\cite{DZ3}), (2.1) describes $H^{\ast}(G/T;\mathbb{Z})$
by certain Schubert classes on $G/T$ and therefore, is called \textsl{a
Schubert presentation} of the ring $H^{\ast}(G/T;\mathbb{Z})$. In addition to
$\left\{ \omega_{1},\cdots,\omega_{n}\right\} $, elements of $\{y_{i}\}_{1\leq i\leq m}$ are called the \textsl{special Schubert classes} on $G/T$
(\cite{DZ3}). To be precise, for each exceptional Lie group $G$ a set of
special Schubert classes is given by their Weyl coordinates (\cite{DZ3}) in
the table below:
\begin{center}
\begin{tabular}
[c]{|l|l|l|l|}\hline
$G/T$ & $G_{2}/T$ & $F_{4}/T$ & $E_{n}/T,\text{ }n=6,7,8$\\\hline\hline
$y_{1}$ & $\sigma_{\lbrack1,2,1]}$ & $\sigma_{\lbrack3,2,1]}$ & $\sigma
_{\lbrack5,4,2]}\text{, }n=6,7,8$\\
$y_{2}$ &  & $\sigma_{\lbrack4,3,2,1]}$ & $\sigma_{\lbrack6,5,4,2]}\text{,
}n=6,7,8$\\
$y_{3}$ &  &  & $\sigma_{\lbrack{7,6,5,4,2}]}\text{, }n=7,8$\\
$y_{4}$ &  &  & $\sigma_{\lbrack1,3,6,5,4,2]}\text{, }n=8$\\
$y_{5}$ &  &  & $\sigma_{\lbrack1,5,4,3,7,6,5,4,2]}$,$\text{ }n=7,8$\\
$y_{6}$ &  &  & $\sigma_{\lbrack{1,6,5,4,3,7,6,5,4,2}]}\text{, }n=8$\\
$y_{7}$ &  &  & $\sigma_{\lbrack5,4,2,3,1,6,5,4,3,8,7,6,5,4,2]}\text{, }n=8$\\\hline
\end{tabular}
{\small Table 3. The special Schubert classes on }$G/T${\small \ for all
exceptional }$G$.$\square$
\end{center}
\bigskip
\begin{center}
\textbf{2.2. The algebra }$H^{\ast}(G/T;\mathbb{F})$\textbf{, }$\mathbb{F}=\mathbb{Q}$\textbf{, }$\mathbb{F}_{p}$\textbf{.}
\end{center}
Since the ring $H^{\ast}(G/T;\mathbb{Z})$ is torsion free \cite{BS}, one may
deduce a presentation of $H^{\ast}(G/T;\mathbb{F})$ directly from Lemma 2.1 and
the isomorphism $H^{\ast}(G/T;\mathbb{F})=H^{\ast}(G/T;\mathbb{Z})\otimes\mathbb{F}$.
One of the aims of this work is to describe the ring $H^{\ast
}(G;\mathbb{F})$ by \textsl{a minimal set} of generators. As an initial step
we need to characterize $H^{\ast}(G/T;\mathbb{F})$ by a minimal system of
generators and relations. The following notion, derived from the basic data of
$G$, serves this purpose.
\bigskip
\noindent\textbf{Definition 2.4.} For each simple $G$ and prime $p$ we set
\begin{quote}
$G(p)=\{j\mid1\leq j\leq m$, $p_{j}=p\}$ (see Tables 1 and 2).
\end{quote}
\noindent We shall also put, for $G\neq E_{8}$,
\begin{quote}
$\overline{G}(\mathbb{F})=\left\{
\begin{tabular}
[c]{l}
$\{1,\ldots,m\}\text{ if }\mathbb{F=Z}\text{ or }\mathbb{Q}\text{;}$\\
$\text{the complement of }G(p)\text{ in }\{1,\cdots,m\}\text{ if }\mathbb{F=F}_{p}$,
\end{tabular}
\ \ \right. $
\end{quote}
\noindent and let
\begin{quote}
$\overline{E}_{8}(\mathbb{F})=\left\{
\begin{tabular}
[c]{l}
$\{1,2,3,5,6\}\text{ if }\mathbb{F=Z},\mathbb{Q}\text{ or }\mathbb{F}_{p}\text{ with }p\neq2,3,5$;\\
$\{2\}\text{ if }\mathbb{F}=\mathbb{F}_{2};$\\
$\{1,3,5\}\text{ if }\mathbb{F=F}_{3};$\\
$\{1,2,3,5\}\text{ if }\mathbb{F=F}_{5}$.$\square$
\end{tabular}
\ \ \right. $
\end{quote}
\noindent\textbf{Lemma 2.5.} \textsl{Let }$e_{i}^{(0)},g_{j}^{(0)}\in\mathbb{Q}[\omega_{1},\cdots,\omega_{n}]$\textsl{ be the polynomials
obtained from }$e_{i},g_{j}$\textsl{ in Lemma 2.1 by eliminating the classes
}$y_{j}$\textsl{ using }$f_{j}$\textsl{, }$1\leq j\leq m$\textsl{. Then}
\begin{quote}
$H^{\ast}(G/T;\mathbb{Q})=\mathbb{Q}[\omega_{1},\cdots,\omega_{n}]/\left\langle e_{i}^{(0)},g_{j}^{(0)}\right\rangle _{1\leq i\leq
k,j\in\overline{G}(\mathbb{Q})}$.
\end{quote}
\noindent\textbf{Proof.} Rationally $y_{j}=-\frac{1}{p_{j}}\alpha_{j}$ by the
relation $f_{j}$ in Lemma 2.1. This implies that
\begin{quote}
$H^{\ast}(G/T;\mathbb{Q})=\mathbb{Q}[\omega_{1},\cdots,\omega_{n}]/\left\langle e_{i}^{(0)},g_{j}^{(0)}\right\rangle _{1\leq i\leq k,1\leq
j\leq m}$,
\end{quote}
\noindent which verifies Lemma 2.5 for $G\neq E_{8}$. For $G=E_{8}$ we get
from (2.2) that $g_{4}^{(0)}=\frac{6}{5}g_{6}^{(0)}$, $g_{7}^{(0)}=-\frac
{3}{2}g_{6}^{(0)}$. This completes the proof.$\square$
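\bigskip

\noindent As an illustration of Lemma 2.5 (assuming only the basic data in Table 2), take $G=G_{2}$, where $k=m=1$, $\deg e_{1}=4$, $\deg y_{1}=6$, $p_{1}=2$ and $k_{1}=2$. Eliminating $y_{1}=-\frac{1}{2}\alpha_{1}$ by the relation $f_{1}$ gives
\begin{quote}
$H^{\ast}(G_{2}/T;\mathbb{Q})=\mathbb{Q}[\omega_{1},\omega_{2}]/\left\langle e_{1}^{(0)},g_{1}^{(0)}\right\rangle $, $\quad\deg e_{1}^{(0)}=4$, $\deg g_{1}^{(0)}=k_{1}\deg y_{1}=12$.
\end{quote}
\noindent The two relations thus occur in degrees $4$ and $12$, in agreement with the classical fact that the basic $W$--invariants of $G_{2}$ over $\mathbb{Q}$ have degrees $2$ and $6$ (hence cohomology degrees $4$ and $12$).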
\bigskip
\noindent\textbf{Lemma 2.6.} \textsl{For a prime }$p$\textsl{ let }$e_{i}^{(p)},\alpha_{t}^{(p)},g_{j}^{(p)},\beta_{j}^{(p)}$\textsl{ be the
polynomials obtained respectively from }$e_{i},\alpha_{t},g_{j},\beta_{j}$\textsl{ in Lemma 2.1 by eliminating }$y_{s}$\textsl{, }$s\notin
G(p)$\textsl{, using }$f_{s}$\textsl{. Then}
\begin{quote}
$H^{\ast}(G/T;\mathbb{F}_{p})=\mathbb{F}_{p}[\omega_{1},\cdots,\omega_{n},y_{t}]/\left\langle e_{i}^{(p)},\alpha_{t}^{(p)},g_{t}^{(p)},g_{s}^{(p)}\right\rangle _{1\leq i\leq k,t\in G(p),s\in\overline{G}(\mathbb{F}_{p})}$,
\end{quote}
\noindent\textsl{with}
\textsl{i) }$g_{t}^{(p)}=y_{t}^{k_{t}}+\beta_{t}^{(p)}$\textsl{, }$t\in
G(p)$\textsl{;}

\textsl{ii) }$\{e_{i}^{(p)},\alpha_{t}^{(p)},\beta_{t}^{(p)},g_{s}^{(p)}\}\subset\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle
_{\mathbb{F}_{p}}$\textsl{, }

\noindent\textsl{where }$\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle _{\mathbb{F}_{p}}$\textsl{ is the ideal in }$\mathbb{F}_{p}[\omega_{1},\cdots,\omega_{n},y_{t}]$\textsl{ generated by }$\omega_{1},\cdots,\omega_{n}$\textsl{.}
\noindent\textbf{Proof.} After $\operatorname{mod}p$ the relation $f_{t}$ in
Lemma 2.1 becomes
\begin{quote}
i) $\alpha_{t}\equiv0$ $\operatorname{mod}p$ for $t\in G(p)$;\quad
ii) $y_{t}-q_{t}\alpha_{t}\equiv0$ $\operatorname{mod}p$ for $t\notin G(p)$,
\end{quote}
\noindent where $q_{t}>0$ is the smallest integer satisfying $q_{t}p_{t}\equiv-1\operatorname{mod}p$. By i) the relations $f_{t}$ with $t\in
G(p)$ should be replaced by $\alpha_{t}\equiv0$. In view of ii) we can
eliminate all $y_{s}$ with $s\notin G(p)$ from the set of generators and
replace each of them in the remaining relations by $q_{s}\alpha_{s}$, obtaining the presentation
\begin{quote}
iii) $H^{\ast}(G/T;\mathbb{F}_{p})=\mathbb{F}_{p}[\omega_{1},\cdots,\omega
_{n},y_{t}]/\left\langle e_{i}^{(p)},\alpha_{t}^{(p)},g_{j}^{(p)}\right\rangle
_{1\leq i\leq k,t\in G(p),1\leq j\leq m}$.
\end{quote}
\noindent For $G\neq E_{8}$ the result follows from $\{1,\cdots
,m\}=G(p)\sqcup\overline{G}(\mathbb{F}_{p})$. For $G=E_{8}$, reduction
$\operatorname{mod}p$ of the system (2.2) yields the relations
\begin{quote}
$g_{4}^{(p)}\equiv0$; $g_{6}^{(p)}\equiv y_{7}\alpha_{7}^{(p)}$ for $p=2$;
$\quad$
$g_{4}^{(p)}\equiv g_{7}^{(p)}\equiv-y_{6}^{2}\alpha_{6}^{(p)}$ for $p=3$;
$g_{6}^{(p)}\equiv g_{7}^{(p)}\equiv-y_{4}^{4}\alpha_{4}^{(p)}$ for $p=5$;
$g_{4}^{(p)}\equiv sg_{6}^{(p)}$; $g_{7}^{(p)}\equiv tg_{6}^{(p)}$ if
$p\neq2,3,5$ (for some $s,t\in\mathbb{F}_{p}$).
\end{quote}
\noindent Combining these with iii) establishes Lemma 2.6 for $G=E_{8}$.$\square$
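\bigskip

\noindent As an illustration of Lemma 2.6 (deduced only from the basic data in Table 2), take $G=G_{2}$, where $m=1$ and $p_{1}=2$. For $p=2$ one has $G(2)=\{1\}$ and $\overline{G}(\mathbb{F}_{2})=\emptyset$, so the lemma reads
\begin{quote}
$H^{\ast}(G_{2}/T;\mathbb{F}_{2})=\mathbb{F}_{2}[\omega_{1},\omega_{2},y_{1}]/\left\langle e_{1}^{(2)},\alpha_{1}^{(2)},y_{1}^{2}+\beta_{1}^{(2)}\right\rangle $,
\end{quote}
\noindent while for $p\neq2$ one has $G(p)=\emptyset$, $\overline{G}(\mathbb{F}_{p})=\{1\}$, the special Schubert class $y_{1}$ is eliminated, and
\begin{quote}
$H^{\ast}(G_{2}/T;\mathbb{F}_{p})=\mathbb{F}_{p}[\omega_{1},\omega_{2}]/\left\langle e_{1}^{(p)},g_{1}^{(p)}\right\rangle $.
\end{quote}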
\begin{center}
\textbf{2.3. Primary polynomials }
\end{center}
The ring $H^{\ast}(G;\mathbb{F})$ may vary considerably with respect to $G$
and $\mathbb{F}$. The following notation allows one to carry out the construction
and calculation uniformly for all $G$ and $\mathbb{F}$.
\begin{enumerate}
\item[(2.3)] $P_{G,\mathbb{F}}:=$\ the numerator (ring) in the presentation of
$H^{\ast}(G/T;\mathbb{F})$ in Lemmas 2.1, 2.5 and 2.6 (in accordance with
$\mathbb{F}=\mathbb{Z},\mathbb{Q}$ and $\mathbb{F}_{p}$).
\item[(2.4)] $P_{G,\mathbb{F}}^{\omega}:=$ the subring of $P_{G,\mathbb{F}}$
consisting of polynomials free of $\omega_{1},\cdots,\omega_{n}$.
\item[(2.5)] $I_{G,\mathbb{F}}:=$ the ideal in $P_{G,\mathbb{F}}$ appearing as
the denominator in the presentation of $H^{\ast}(G/T;\mathbb{F})$ in Lemmas
2.1, 2.5 and 2.6.
\item[(2.6)] $\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle
_{\mathbb{F}}:=$ the ideal in $P_{G,\mathbb{F}}$ generated by $\omega
_{1},\cdots,\omega_{n}$.
\end{enumerate}
In Lemmas 2.1, 2.5 and 2.6, the polynomials listed to specify
$I_{G,\mathbb{F}}$ will be called the \textsl{defining polynomials} of
$I_{G,\mathbb{F}}$. Precisely, if we write $\Sigma_{G,\mathbb{F}}\subset
I_{G,\mathbb{F}}$ for the set of those polynomials, then
\begin{quote}
$\Sigma_{G,\mathbb{Z}}=\{e_{i},f_{j},g_{j}\}_{1\leq i\leq k,1\leq j\leq m}$;
$\Sigma_{G,\mathbb{Q}}=\{e_{i}^{(0)},g_{j}^{(0)}\}_{1\leq i\leq k,j\in
\overline{G}(\mathbb{Q})}$;
$\Sigma_{G,\mathbb{F}_{p}}=\{e_{i}^{(p)},\alpha_{t}^{(p)},g_{t}^{(p)},g_{s}^{(p)}\}_{1\leq i\leq k,t\in G(p),s\in\overline{G}(\mathbb{F}_{p})}$.
\end{quote}
\noindent Given $\Sigma_{G,\mathbb{F}}$ we single out a subset
of $I_{G,\mathbb{F}}$ which will give rise to a minimal set of generators for
the ring $H^{\ast}(G;\mathbb{F})$.
\bigskip
\noindent\textbf{Definition 2.7. }In accordance with\textbf{ }$\mathbb{F}=\mathbb{Z},\mathbb{Q}$ and $\mathbb{F}_{p}$, the set $\Phi_{G,\mathbb{F}}$ of
\textsl{primary polynomials} in $I_{G,\mathbb{F}}$ consists of
\begin{quote}
i) $\Phi_{G,\mathbb{Z}}=\{e_{i},h_{j}\}_{1\leq i\leq k,j\in\overline
{G}(\mathbb{Z})}$ with $h_{j}=p_{j}g_{j}-y_{j}^{k_{j}-1}f_{j}$;
ii) $\Phi_{G,\mathbb{Q}}=\{e_{i}^{(0)},g_{j}^{(0)}\}_{1\leq i\leq
k,j\in\overline{G}(\mathbb{Q})}$;
iii) $\Phi_{G,\mathbb{F}_{p}}=\{e_{i}^{(p)},\alpha_{t}^{(p)},g_{s}^{(p)}\}_{1\leq i\leq k,t\in G(p),s\in\overline{G}(\mathbb{F}_{p})}$.$\square$
\end{quote}
Useful properties of the set $\Phi_{G,\mathbb{F}}$ are collected in the next result.
\bigskip
\noindent\textbf{Lemma 2.8. }\textsl{One has }$\left\vert \Phi_{G,\mathbb{F}}\right\vert =n$\textsl{ and}
\textsl{i) }$\Phi_{G,\mathbb{F}}\subset\left\langle \omega_{1},\cdots
,\omega_{n}\right\rangle _{\mathbb{F}}\cap I_{G,\mathbb{F}}$\textsl{;}
\textsl{ii) }$\dim G=\left\{
\begin{tabular}
[c]{l}
$\Sigma_{u\in\Phi_{G,\mathbb{F}}}(\deg u-1)$ for $\mathbb{F}=\mathbb{Q}$ or
$\mathbb{Z}$;\\
$\Sigma_{u\in\Phi_{G,\mathbb{F}_{p}}}(\deg u-1)+\Sigma_{t\in G(p)}(k_{t}-1)\deg y_{t}$.
\end{tabular}
\ \ \ \ \right. $
\textsl{iii) the }$\operatorname{mod}p$\textsl{ reduction from }$P_{G,\mathbb{Z}}$\textsl{ to }$P_{G,\mathbb{F}_{p}}$\textsl{ satisfies}
\begin{center}
$e_{i}\equiv e_{i}^{(p)}$\textsl{;}$\quad g_{j}\equiv g_{j}^{(p)}$\textsl{;}$\quad f_{j}\equiv\left\{
\begin{tabular}
[c]{l}
$\alpha_{j}^{(p)}\text{ if }j\in G(p)$;\\
$0\text{ if }j\notin G(p)$.
\end{tabular}
\ \ \ \right. $
\end{center}
\noindent\textsl{In particular,}
\begin{center}
$\quad h_{j}\equiv\left\{
\begin{tabular}
[c]{l}
$-y_{j}^{k_{j}-1}\alpha_{j}^{(p)}\text{ if }j\in G(p)$;\\
$p_{j}g_{j}^{(p)}\text{ if }j\in\overline{G}(\mathbb{F}_{p})$,
\end{tabular}
\ \ \ \ \ \ \right. $
\end{center}
\noindent\textsl{and if }$G=E_{8}$\textsl{, }
\begin{center}
$h_{6}\equiv\left\{
\begin{tabular}
[c]{l}
$y_{7}\alpha_{7}^{(2)}$ if $p=2$;\\
$-y_{6}^{2}\alpha_{6}^{(3)}$ if $p=3$;\\
$2y_{4}^{4}\alpha_{4}^{(5)}$ if $p=5$;\\
$3g_{6}^{(p)}$ if $p\neq2,3,5$.
\end{tabular}
\ \ \ \ \ \ \right. $
\end{center}
\noindent\textbf{Proof.} i) is trivial for $\mathbb{F}=\mathbb{Q}$, and has
been shown by i) of Lemma 2.6 for $\mathbb{F}=\mathbb{F}_{p}$. For the case
$\mathbb{F=Z}$ substituting in the formula of $h_{j}$ the expressions of
$f_{j}$ and $g_{j}$ in Lemma 2.1 yields that
\begin{quote}
$h_{j}=p_{j}\beta_{j}-y_{j}^{k_{j}-1}\alpha_{j}$\textbf{. }
\end{quote}
\noindent The proof of i) is completed by $e_{i},\alpha_{j},\beta_{j}\in\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle _{\mathbb{Z}}$ and
$e_{i},h_{j}\in I_{G,\mathbb{Z}}$.
With the basic data for $G$ given in Tables 1 and 2, and taking Definition 2.4
into account, the assertion in ii) can be directly verified (when $\mathbb{F}=\mathbb{F}_{p}$ this may be done in accordance with $p=2,3,5$ and
$p\neq2,3,5$).
Finally, the relations in iii) are clear from the proof of Lemma 2.6, and when
$G=E_{8}$, from the alternative expression obtained from (2.2):
\begin{quote}
$h_{6}=-30f+12y_{4}^{4}f_{4}-10y_{6}^{2}f_{6}+15y_{7}f_{7}$.$\square$
\end{quote}
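\bigskip

\noindent As a quick consistency check of ii) (an illustration, using only the basic data in Table 2), take $G=G_{2}$ with $\dim G_{2}=14$. For $\mathbb{F}=\mathbb{Z}$ one has $\Phi_{G_{2},\mathbb{Z}}=\{e_{1},h_{1}\}$ with $\deg e_{1}=4$ and $\deg h_{1}=\deg g_{1}=k_{1}\deg y_{1}=12$, whence $\Sigma_{u\in\Phi_{G_{2},\mathbb{Z}}}(\deg u-1)=3+11=14$. For $\mathbb{F}=\mathbb{F}_{2}$ one has $\Phi_{G_{2},\mathbb{F}_{2}}=\{e_{1}^{(2)},\alpha_{1}^{(2)}\}$ with $\deg\alpha_{1}^{(2)}=\deg y_{1}=6$, whence $\Sigma_{u}(\deg u-1)+(k_{1}-1)\deg y_{1}=3+5+6=14$.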
\begin{center}
\textbf{2.4. Construction in }$E_{3}^{\ast,1}(G;\mathbb{F})$\textbf{: primary
forms}
\end{center}
The ideal $\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle
_{\mathbb{F}}\subset P_{G,\mathbb{F}}$ is a module over the ring
$P_{G,\mathbb{F}}^{\omega}$ with basis
\begin{quote}
$\{\omega_{1}^{b_{1}}\cdots\omega_{n}^{b_{n}}\mid b_{i}\geq0$, $\sum b_{i}\geq1\}$.
\end{quote}
\noindent This simple fact gives rise to the well-defined $P_{G,\mathbb{F}}^{\omega}$--linear map
\begin{enumerate}
\item[(2.7)] $\varphi:\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle
_{\mathbb{F}}\rightarrow E_{2}^{\ast,1}(G;\mathbb{F})=H^{\ast}(G/T;\mathbb{F})\otimes\Lambda_{\mathbb{F}}^{1}$ by

$\qquad\qquad\varphi(\omega_{1}^{b_{1}}\cdots\omega_{n}^{b_{n}})=\omega_{1}^{b_{1}}\cdots\omega_{k}^{b_{k}-1}\cdots\omega_{n}^{b_{n}}\otimes t_{k}$,
\noindent where $k\in\{1,\cdots,n\}$ is the least index with $b_{k}\geq1$. Since
$\Phi_{G,\mathbb{F}}\subset\left\langle \omega_{1},\cdots,\omega
_{n}\right\rangle _{\mathbb{F}}$ by i) of Lemma 2.8, $\varphi$ acts on
$\Phi_{G,\mathbb{F}}$. Let
\begin{quote}
$\iota_{\mathbb{F}}:\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle
_{\mathbb{F}}\rightarrow H^{\ast}(G/T;\mathbb{F})$
\end{quote}
\noindent be the composition of the inclusion $\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle _{\mathbb{F}}\rightarrow P_{G,\mathbb{F}}$
followed by the obvious quotient map $P_{G,\mathbb{F}}\rightarrow H^{\ast
}(G/T;\mathbb{F})$. Then, from $\iota_{\mathbb{F}}=d_{2}\circ\varphi$ and
\begin{quote}
$E_{2}^{\ast,0}(G;\mathbb{F})=H^{\ast}(G/T;\mathbb{F})=P_{G,\mathbb{F}}/I_{G,\mathbb{F}}$
\end{quote}
\noindent we find that
\begin{quote}
$\varphi(\Phi_{G,\mathbb{F}})\subset\ker[E_{2}^{\ast,1}(G;\mathbb{F})\overset{d_{2}}{\rightarrow}E_{2}^{\ast,0}(G;\mathbb{F})]$.
\end{quote}
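\bigskip

\noindent For instance (a minimal illustration of (2.7)), for the monomial $\omega_{1}^{2}\omega_{2}$ one has $\varphi(\omega_{1}^{2}\omega_{2})=\omega_{1}\omega_{2}\otimes t_{1}$, and by (1.3), $d_{2}(\omega_{1}\omega_{2}\otimes t_{1})=\omega_{1}^{2}\omega_{2}=\iota_{\mathbb{F}}(\omega_{1}^{2}\omega_{2})$. In general $d_{2}\circ\varphi=\iota_{\mathbb{F}}$, so $\varphi$ carries any $g\in\Phi_{G,\mathbb{F}}\subset I_{G,\mathbb{F}}$ to a $d_{2}$--cocycle.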
\noindent\textbf{Definition 2.9.} For a $d_{2}$--cocycle $h\in E_{2}^{\ast,\ast}(G;\mathbb{F})$ write $[h]\in E_{3}^{\ast,\ast}(G;\mathbb{F})$ for
its cohomology class. Elements of the subset
\begin{quote}
$\mathcal{O}_{G,\mathbb{F}}=\{[\varphi(g)]\in E_{3}^{\deg g-2,1}(G;\mathbb{F})\mid g\in\Phi_{G,\mathbb{F}}\}$
\end{quote}
\noindent are called the \textsl{primary forms} in $E_{3}^{\ast,1}(G;\mathbb{F})$.
For notational convenience we adopt the following abbreviations for the primary
forms, in accordance with $\mathbb{F}=\mathbb{Z},\mathbb{Q}$ and $\mathbb{F}_{p}$:
\begin{enumerate}
\item[(2.8)] if $\mathbb{F=Z}$ let $\xi_{i}:=[\varphi(e_{i})]$, $\eta
_{j}:=[\varphi(h_{j})]$, $1\leq i\leq k$, $j\in\overline{G}(\mathbb{Z})$.
\item[(2.9)] if $\mathbb{F=Q}$ set $\xi_{i}^{(0)}:=[\varphi(e_{i}^{(0)})]$,
$\eta_{j}^{(0)}:=[\varphi(g_{j}^{(0)})]$, $1\leq i\leq k$, $j\in\overline
{G}(\mathbb{Q})$.
\item[(2.10)] if $\mathbb{F=F}_{p}$ put $\xi_{i}^{(p)}:=[\varphi(e_{i}^{(p)})]$, $\theta_{t}^{(p)}:=[\varphi(\alpha_{t}^{(p)})]$, $\eta_{s}^{(p)}:=[\varphi(g_{s}^{(p)})]$, where $1\leq i\leq k$, $t\in G(p)$,
$s\in\overline{G}(\mathbb{F}_{p})$.$\square$
\end{enumerate}
\noindent\textbf{Example 2.10.} For a given pair $(G,\mathbb{F})$ all elements
of $\mathcal{O}_{G,\mathbb{F}}$, together with their degrees, can be
enumerated from the basic data of $G$ in Tables 1 and 2. For the exceptional
Lie groups, the sets $\mathcal{O}_{G,\mathbb{F}}$ so obtained, together with
the degrees of their elements, are given in Tables 4--8 of \S 6.$\square$
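\bigskip

\noindent To illustrate (again using only the basic data in Table 2), for $G=G_{2}$ and $\mathbb{F=Z}$ one has $\Phi_{G_{2},\mathbb{Z}}=\{e_{1},h_{1}\}$ with $\deg e_{1}=4$ and $\deg h_{1}=12$, so by Definition 2.9 the primary forms are $\xi_{1}\in E_{3}^{2,1}(G_{2};\mathbb{Z})$ and $\eta_{1}\in E_{3}^{10,1}(G_{2};\mathbb{Z})$.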
\bigskip
It follows from Lemma 2.8 that
\bigskip
\noindent\textbf{Lemma 2.11.} \textsl{We have }$\left\vert \mathcal{O}_{G,\mathbb{F}}\right\vert =n$\textsl{ and}
\begin{quote}
$\dim G=\left\{
\begin{tabular}
[c]{l}
$\Sigma_{u\in\mathcal{O}_{G,\mathbb{F}}}\deg u$ for $\mathbb{F}=\mathbb{Q}$ or
$\mathbb{Z}$;\\
$\Sigma_{u\in\mathcal{O}_{G,\mathbb{F}_{p}}}\deg u+\Sigma_{t\in G(p)}(k_{t}-1)\deg y_{t}$ for $\mathbb{F}=\mathbb{F}_{p}$
\end{tabular}
\right. $.$\square$
\end{quote}
\begin{center}
\textbf{2.5. Preliminaries in algebra. }
\end{center}
We conclude this section with two results in algebra for further use.
\bigskip
\noindent\textbf{Definition 2.12.} Let $\mathcal{R}^{\ast}=\oplus_{i\geq
0}\mathcal{R}^{i}$ be a graded algebra over a field $\mathbb{F}$ (resp. a
graded ring over $\mathbb{F}=\mathbb{Z}$) and let $u=t_{1}^{b_{1}}\cdots
t_{h}^{b_{h}}\in\mathcal{R}^{r}$ be a decomposable element of degree $r$,
$b_{i}\geq1$.
We call $\mathcal{R}^{\ast}$ \textsl{monotone} \textsl{in degree }$r$
\textsl{with respect to }$u$ if $\mathcal{R}^{r}$ is generated over $\mathbb{F}$ by
$u$, and $t_{1}^{c_{1}}\cdots t_{h}^{c_{h}}=0$ for all $(c_{1},\cdots
,c_{h})\neq(b_{1},\cdots,b_{h})$ with $\deg t_{1}^{c_{1}}\cdots t_{h}^{c_{h}}=r$.$\square$
\bigskip
\noindent\textbf{Lemma 2.13.} \textsl{Let }$\mathcal{R}^{\ast}$\textsl{ be a
graded algebra (resp. ring) which is monotone with respect to }$u=t_{1}^{b_{1}}\cdots t_{n}^{b_{n}}\in\mathcal{R}^{r}$\textsl{.}
\textsl{Then the set }$\{t_{1}^{k_{1}}\cdots t_{n}^{k_{n}}\}_{0\leq k_{i}\leq
b_{i}}$\textsl{ of monomials is linearly independent, and spans a direct
summand of }$\mathcal{R}^{\ast}$\textsl{ (resp. of the free part of
}$\mathcal{R}^{\ast}$\textsl{).}$\square$
\bigskip
\noindent\textbf{Lemma 2.14.} \textsl{Let }$A,B,C$\textsl{ be three abelian
groups, and let }$f:$\textsl{ }$A\oplus B\rightarrow C$\textsl{ be an
epimorphism. If }$a\in A$\textsl{, }$b\in B$\textsl{ are such that}
\begin{quote}
\textsl{i) the element }$a$\textsl{ spans a direct summand of }$A$\textsl{;}
\textsl{ii) }$f(a)=f(b)$\textsl{,}
\end{quote}
\noindent\textsl{then }$f$\textsl{ induces an epimorphism }$\widehat
{f}:A/\left\langle a\right\rangle \oplus B\rightarrow C$\textsl{, where
}$\left\langle a\right\rangle \subset A$\textsl{ is the cyclic subgroup
spanned by }$a$\textsl{.}$\square$
\bigskip
The following standard notations will be adopted in this paper. Given a ring
$A$ and a finite set $S=\{u_{1},\cdots,u_{t}\}$ we write
\begin{enumerate}
\item[(2.11)] $A\{S\}=A\{u_{i}\}_{1\leq i\leq t}$ for the free $A$--module
with basis $\{u_{1},\cdots,u_{t}\}$;
\item[(2.12)] $A[S]=A[u_{i}]_{1\leq i\leq t}$ for the polynomial ring in
$u_{1},\cdots,u_{t}$ over $A$;
\item[(2.13)] $\Lambda_{\mathbb{F}}(S)=\Lambda_{\mathbb{F}}(u_{i})_{1\leq
i\leq t}$ for the exterior algebra over $\mathbb{F}$ generated by
$u_{1},\cdots,u_{t}$;
\item[(2.14)] $A\otimes\Delta(S)=A\otimes\Delta(u_{i})_{1\leq i\leq t}$ for
the $A$--module on the simple system of generators $u_{1},\cdots,u_{t}$
\cite{B1}.
\end{enumerate}
\noindent In addition, if $A=\mathbb{F}$, $\Delta_{\mathbb{F}}(S)$ is used
instead of $\mathbb{F}\otimes\Delta(S)$.
\section{Computing with $E_{3}^{\ast,r}(G;\mathbb{F})$, $r=0,1$}
From Lemma 2.1 we compute $E_{3}^{\ast,0}(G;\mathbb{F})$ in Lemma 3.1. In
terms of the primary forms introduced in \S 2.4 we deduce a partial presentation
of $E_{3}^{\ast,1}(G;\mathbb{F})$ (with $\mathbb{F}$ a field) in Lemma 3.4.
The relationship between $E_{3}^{\ast,1}(G;\mathbb{F}_{p})$ and $E_{3}^{\ast,1}(G;\mathbb{Z})$ with respect to the $\operatorname{mod}p$ reduction
and the Bockstein homomorphism is discussed in Lemma 3.5. These results are
summarized in \S 4 so as to give a complete characterization of $E_{\infty
}^{\ast,\ast}(G;\mathbb{F})$.
\begin{center}
\textbf{3.1. The Chow rings of reductive algebraic groups.}
\end{center}
In terms of (1.1) define the subring $A_{G;\mathbb{F}}^{\ast}$ of $H^{\ast
}(G;\mathbb{F})$ by
\begin{quote}
$A_{G;\mathbb{F}}^{\ast}:=\operatorname{Im}\{\pi^{\ast}:H^{\ast}(G/T;\mathbb{F})\rightarrow H^{\ast}(G;\mathbb{F})\}$.
\end{quote}
\noindent Grothendieck \cite{G} showed that it is the \textsl{Chow ring} (with
$\mathbb{F}$ coefficients) of the reductive algebraic group $G^{c}$
corresponding to $G$, and that $\pi^{\ast}$ induces an isomorphism
\begin{quote}
$A_{G;\mathbb{F}}^{\ast}=H^{\ast}(G/T;\mathbb{F})\mid_{\omega_{1}=\cdots=\omega_{n}=0}$.
\end{quote}
\noindent On the other hand, according to (1.2) and (1.3), $E_{3}^{\ast
,0}(G;\mathbb{F})$ is the cokernel of the differential $d_{2}:H^{\ast
}(G/T;\mathbb{F})\otimes\Lambda_{\mathbb{F}}^{1}\rightarrow H^{\ast
}(G/T;\mathbb{F})$, where $d_{2}(a\otimes t_{k})=a\omega_{k}$ implies that
$\operatorname{Im}d_{2}$ is the ideal in $H^{\ast}(G/T;\mathbb{F})$ generated
by $\omega_{1},\cdots,\omega_{n}$. Therefore, we get directly from Lemma 2.1 that
\bigskip
\noindent\textbf{Lemma 3.1.} $E_{3}^{\ast,0}(G;\mathbb{F})=A_{G;\mathbb{F}}^{\ast}=A_{G;\mathbb{Z}}^{\ast}\otimes\mathbb{F}$\textsl{, where}
\begin{quote}
$A_{G;\mathbb{Z}}^{\ast}=\mathbb{Z}[y_{j}]_{1\leq j\leq m}/\left\langle
p_{j}y_{j},y_{j}^{k_{j}}\right\rangle $.$\square$
\end{quote}
\noindent\textbf{Example 3.2.} To facilitate the calculations in \S 6, concrete
presentations of $A_{G;\mathbb{Z}}^{\ast}$ for all exceptional $G$ are needed.
These can be obtained by inputting in the formula in Lemma 3.1 the values of
$p_{j}$, $k_{j}$ given in Table 2. To emphasize the cohomology degrees of the
Schubert classes $y_{j}$ (see Table 3), $x_{\deg y_{j}}$ is used instead of
$y_{j}$.
\begin{quote}
$A_{G_{2};\mathbb{Z}}^{\ast}=\mathbb{Z}[x_{6}]/\left\langle 2x_{6},x_{6}^{2}\right\rangle $;
$A_{F_{4};\mathbb{Z}}^{\ast}=\mathbb{Z}[x_{6},x_{8}]/\left\langle 2x_{6},x_{6}^{2},3x_{8},x_{8}^{3}\right\rangle $;
$A_{E_{6};\mathbb{Z}}^{\ast}=\mathbb{Z}[x_{6},x_{8}]/\left\langle 2x_{6},x_{6}^{2},3x_{8},x_{8}^{3}\right\rangle $;
$A_{E_{7};\mathbb{Z}}^{\ast}=\mathbb{Z}[x_{6},x_{8},x_{10},x_{18}]/\left\langle 2x_{6},3x_{8},2x_{10},2x_{18},x_{6}^{2},x_{8}^{3},x_{10}^{2},x_{18}^{2}\right\rangle $;
$A_{E_{8};\mathbb{Z}}^{\ast}=\mathbb{Z}[x_{6},x_{8},x_{10},x_{12},x_{18},x_{20},x_{30}]/$
$\quad\left\langle 2x_{6},3x_{8},2x_{10},5x_{12},2x_{18},3x_{20},2x_{30},x_{6}^{8},x_{8}^{3},x_{10}^{4},x_{12}^{5},x_{18}^{2},x_{20}^{3},x_{30}^{2}\right\rangle $.$\square$
\end{quote}
\noindent\textbf{Remark 3.3.} For $G=Spin(n),G_{2},F_{4}$ the ring
$A_{G;\mathbb{Z}}^{\ast}$ was computed by Marlin \cite{M2}. In \cite{K2}
Ka\v{c} obtained presentations for the algebras $A_{G;\mathbb{F}_{p}}$ for all
simple $G$ in which the generators are specified only up to their degrees. In
comparison, Lemma 3.1 presents $A_{G;\mathbb{Z}}^{\ast}$ in terms of
the special Schubert classes on $G/T$, whose geometric configuration can be
read from their Weyl coordinates \cite{DZ3}. $\square$
\begin{center}
\textbf{3.2. }$E_{3}^{\ast,1}(G;\mathbb{F})$\textbf{ with }$\mathbb{F}$\textbf{ a field.}
\end{center}
The exact sequence of $\mathbb{F}$--modules
\begin{quote}
$0\rightarrow I_{G;\mathbb{F}}\rightarrow P_{G,\mathbb{F}}\rightarrow H^{\ast
}(G/T;\mathbb{F})\rightarrow0$ (see (2.3) and (2.5))
\end{quote}
\noindent gives rise to the short exact sequence of cochain complexes
\begin{enumerate}
\item[(3.1)] $0\rightarrow I_{G;\mathbb{F}}\otimes\Lambda^{\ast}\rightarrow
P_{G,\mathbb{F}}\otimes\Lambda^{\ast}\rightarrow H^{\ast}(G/T;\mathbb{F})\otimes\Lambda^{\ast}\rightarrow0$,
\end{enumerate}
\noindent where $\Lambda^{\ast}=\Lambda_{\mathbb{F}}(t_{1},\cdots,t_{n})$,
$d(a\otimes t_{k})=a\omega_{k}\otimes1$. Since (as is clear)
\begin{quote}
$H^{r}(P_{G,\mathbb{F}}\otimes\Lambda^{\ast})=\left\{
\begin{tabular}
[c]{l}
$P_{G,\mathbb{F}}^{\omega}$ if $r=0$\\
$0$ if $r\geq1$
\end{tabular}
\ \right. $,
$H^{r}(H^{\ast}(G/T;\mathbb{F})\otimes\Lambda^{\ast})=E_{3}^{\ast
,r}(G;\mathbb{F})$,
\end{quote}
\noindent the cohomology exact sequence of (3.1) contains the section
\begin{enumerate}
\item[(3.2)] $0\rightarrow E_{3}^{\ast,1}(G;\mathbb{F})\overset{\delta
}{\rightarrow}I_{G,\mathbb{F}}/J_{G,\mathbb{F}}\overset{\theta}{\rightarrow
}P_{G,\mathbb{F}}^{\omega}\overset{\chi}{\rightarrow}A_{G;\mathbb{F}}^{\ast
}=E_{3}^{\ast,0}(G;\mathbb{F})\rightarrow0$,
\end{enumerate}
\noindent where $J_{G,\mathbb{F}}=\operatorname{Im}[d:I_{G,\mathbb{F}}\otimes\Lambda^{1}\rightarrow I_{G,\mathbb{F}}]$. This implies that
\begin{enumerate}
\item[(3.3)] $E_{3}^{\ast,1}(G;\mathbb{F})$\textsl{ }can be calculated
as\textsl{ }$\ker\theta$.
\end{enumerate}
\noindent In addition, the map $\theta$ in (3.2) can be evaluated by the
following simple rule: for each $f\in I_{G,\mathbb{F}}$ write $[f]$ for its
residue class in $I_{G,\mathbb{F}}/J_{G,\mathbb{F}}$; then
\begin{enumerate}
\item[(3.4)] $\theta\lbrack f]=f\mid_{\omega_{1}=\cdots=\omega_{n}=0}$.
\end{enumerate}
The natural action $E_{3}^{\ast,0}(G;\mathbb{F})\otimes E_{3}^{\ast,\ast
}(G;\mathbb{F})\rightarrow E_{3}^{\ast,\ast}(G;\mathbb{F})$ of $E_{3}^{\ast
,0}(G;\mathbb{F})$ furnishes $E_{3}^{\ast,\ast}$ with the structure of a
module over $A_{G;\mathbb{F}}^{\ast}=E_{3}^{\ast,0}$. We apply (3.3) and (3.4) to show
\bigskip
\noindent\textbf{Lemma 3.4.} \textsl{If }$\mathbb{F}$\textsl{ is a field, the
}$A_{G;\mathbb{F}}^{\ast}$\textsl{ module }$E_{3}^{\ast,1}(G;\mathbb{F})$\textsl{ is spanned by the set }$\mathcal{O}_{G,\mathbb{F}}$\textsl{ of
primary forms (Definition 2.9).}

\noindent\textbf{Proof. }The map $\psi:P_{G,\mathbb{F}}\{\Sigma_{G,\mathbb{F}}\}\rightarrow I_{G,\mathbb{F}}$ induced from the inclusion $\Sigma
_{G,\mathbb{F}}\subset I_{G,\mathbb{F}}$ is clearly surjective. Since
$\omega_{1}^{b_{1}}\cdots\omega_{n}^{b_{n}}\cdot\Sigma_{G,\mathbb{F}}\subset
J_{G,\mathbb{F}}$ for any $(b_{1},\cdots,b_{n})$ with $\Sigma b_{i}\geq1$,
$\psi$ restricts to a surjective map
\begin{enumerate}
\item[(3.5)] $\psi_{1}:P_{G,\mathbb{F}}^{\omega}\{\Sigma_{G,\mathbb{F}}\}\rightarrow I_{G,\mathbb{F}}/J_{G,\mathbb{F}}\rightarrow0$
\end{enumerate}
\noindent by Lemma 2.14. On the other hand, as the injection $\delta$ in (3.2)
carries $\varphi(h)\in\mathcal{O}_{G,\mathbb{F}}$ to $[h]\in I_{G,\mathbb{F}}/J_{G,\mathbb{F}}$, $h\in\Phi_{G,\mathbb{F}}$ (\S 2.4), Lemma 3.4 is
established once we show that
\begin{enumerate}
\item[(3.6)] the map $\psi_{1}$ factors through the subquotient
$A_{G;\mathbb{F}}^{\ast}\{\Phi_{G,\mathbb{F}}\}$ of $P_{G,\mathbb{F}}^{\omega
}\{\Sigma_{G,\mathbb{F}}\}$ so as to yield a surjection $\psi_{2}:A_{G;\mathbb{F}}^{\ast}\{\Phi_{G,\mathbb{F}}\}\rightarrow\ker\theta$.
\end{enumerate}
\noindent\textbf{Case 1.} $\mathbb{F=Q}$. Since $P_{G,\mathbb{Q}}^{\omega
}=A_{G;\mathbb{Q}}^{\ast}=\mathbb{Q}$ by Lemma 3.1 and $\Sigma_{G,\mathbb{Q}}=\Phi_{G,\mathbb{Q}}$ by Definition 2.7, $\delta:E_{3}^{\ast
,1}(G;\mathbb{Q})\rightarrow I_{G,\mathbb{Q}}/J_{G,\mathbb{Q}}$ in (3.2) is an
isomorphism. The map $\psi_{2}$ asserted in (3.6) is given by $\psi_{1}$.
\noindent\textbf{Case 2.} $\mathbb{F=F}_{p}$. With $P_{G,\mathbb{F}_{p}}^{\omega}=\mathbb{F}_{p}[y_{t}]_{t\in G(p)}$ the surjection in (3.5) is
\begin{quote}
$\psi_{1}:\mathbb{F}_{p}[y_{t}]_{t\in G(p)}\{\Sigma_{G,\mathbb{F}_{p}}\}\rightarrow I_{G,\mathbb{F}_{p}}/J_{G,\mathbb{F}_{p}}\rightarrow0$.
\end{quote}
\noindent The set $G(p)$, regarded as a subsequence of $\{1,\cdots,m\}$, will
be denoted by $G(p)=\{i_{1},\cdots,i_{r}\}$, $r=\left\vert G(p)\right\vert $.
For a sequence $c_{1},\cdots,c_{r}$ of $r$ non--negative integers and $s\in
G(p)$ set
\begin{quote}
$h_{s}^{c_{1},\cdots,c_{r}}=y_{i_{1}}^{c_{1}}\cdots y_{i_{r}}^{c_{r}}g_{s}^{(p)}\in\mathbb{F}_{p}[y_{t}]_{t\in G(p)}\{\Sigma_{G,\mathbb{F}_{p}}\}$.
\end{quote}
\noindent Then, with respect to the partition
\begin{quote}
$\Sigma_{G,\mathbb{F}_{p}}=\Phi_{G,\mathbb{F}_{p}}\sqcup\{g_{t}^{(p)}\}_{t\in G(p)}$
\end{quote}
\noindent by iii) of Definition 2.7, we have the splitting of $\mathbb{F}_{p}[y_{t}]_{t\in G(p)}$--modules
\begin{enumerate}
\item[(3.7)] $\mathbb{F}_{p}[y_{t}]_{t\in G(p)}\{\Sigma_{G,\mathbb{F}_{p}}\}=\mathbb{F}_{p}[y_{t}]_{t\in G(p)}\{\Phi_{G,\mathbb{F}_{p}}\}\oplus_{s\in
G(p)}\mathbb{F}_{p}\{h_{s}^{c_{1},\cdots,c_{r}}\}_{c_{i}\geq0}$.
\end{enumerate}
Since $\Phi_{G,\mathbb{F}_{p}}\subset\left\langle \omega_{1},\cdots,\omega
_{n}\right\rangle _{\mathbb{F}_{p}}\cap I_{G,\mathbb{F}_{p}}$ by i) of Lemma
2.8, and since
\begin{quote}
$g_{s}^{(p)}=y_{s}^{k_{s}}+\beta_{s}^{(p)}$ with $\beta_{s}^{(p)}\in\left\langle \omega_{1},\cdots,\omega_{n}\right\rangle _{\mathbb{F}}$ for
$s\in G(p)$
\end{quote}
\noindent by ii) of Lemma 2.6, we get from (3.2) that, for $h\in
\Sigma_{G,\mathbb{F}_{p}}$, $g\in\Phi_{G,\mathbb{F}_{p}}$ and $s\in G(p)$,
\begin{quote}
a) $h\cdot g\in J_{G,\mathbb{F}_{p}}$; \quad
b) $(y_{s}^{k_{s}}-g_{s}^{(p)})\cdot h(=-\beta_{s}^{(p)}\cdot h)\in
J_{G,\mathbb{F}_{p}}$.
\end{quote}
\noindent In particular, for $s,t\in G(p)$
\begin{quote}
c) $\psi_{1}(y_{s}^{k_{s}}\cdot g)\equiv\lbrack g_{s}^{(p)}\cdot
g]\equiv0$ $\operatorname{mod}J_{G,\mathbb{F}_{p}}$ by b) and a);
d) $\psi_{1}(y_{s}^{k_{s}}\cdot g_{t}^{(p)})\equiv\lbrack g_{s}^{(p)}\cdot
g_{t}^{(p)}]\equiv\psi_{1}(y_{t}^{k_{t}}\cdot g_{s}^{(p)})$
$\operatorname{mod}J_{G,\mathbb{F}_{p}}$ by b).
\end{quote}
\noindent It follows from c), d) and Lemma 2.14 that, with respect to the
decomposition (3.7), $\psi_{1}$ induces an epimorphism
\begin{center}
\noindent$\psi_{1}^{\prime}:A_{G,\mathbb{F}_{p}}^{\ast}\{\Phi_{G,\mathbb{F}_{p}}\}\oplus_{s\in G(p)}\mathbb{F}_{p}\{h_{s}^{c_{1},\cdots,c_{r}}\}_{c_{j}<k_{i_{j}}\text{ for }s<i_{j}}\rightarrow I_{G;\mathbb{F}_{p}}/J_{G,\mathbb{F}_{p}}\rightarrow0$.
\end{center}
\noindent Further, from (3.4) we find that
\begin{quote}
$\theta\circ\psi_{1}^{\prime}(h)=0$, $h\in\Phi_{G,\mathbb{F}_{p}}$;$\quad$
$\theta\circ\psi_{1}^{\prime}(h_{s}^{c_{1},\cdots,c_{r}})=y_{i_{1}}^{c_{1}}\cdots y_{i_{s}}^{c_{s}+k_{i_{s}}}\cdots y_{i_{r}}^{c_{r}}$.
\end{quote}
\noindent These imply respectively that the composition
\begin{center}
$\theta\circ\psi_{1}^{\prime}:A_{G,\mathbb{F}_{p}}^{\ast}\{\Phi
_{G,\mathbb{F}_{p}}\}\oplus_{s\in G(p)}\mathbb{F}_{p}\{h_{s}^{c_{1},\cdots,c_{r}}\}_{c_{j}<k_{i_{j}}\text{ for }s<i_{j}}\rightarrow
P_{G,\mathbb{F}_{p}}^{\omega}=\mathbb{F}_{p}[y_{t}]_{t\in G(p)}$
\end{center}
\noindent is trivial on the first summand and injective on the second. That
is, $\psi_{1}^{\prime}$ restricts to a surjection $\psi_{2}:A_{G,\mathbb{F}_{p}}^{\ast}\{\Phi_{G,\mathbb{F}_{p}}\}\rightarrow\ker\theta$. This shows
(3.6), hence completes the proof of Lemma 3.4.$\square$
\begin{center}
\textbf{3.3. Relationship between }$E_{3}^{\ast,1}(G;\mathbb{F}_{p})$\textbf{
and }$E_{3}^{\ast,1}(G;\mathbb{Z})$.
\end{center}
For a prime $p$ consider the \textsl{Bockstein} sequence
\begin{quote}
$\cdots\rightarrow E_{3}^{\ast,1}(G;\mathbb{Z})\overset{\times p}{\rightarrow
}E_{3}^{\ast,1}(G;\mathbb{Z})\overset{r_{p}}{\rightarrow}E_{3}^{\ast
,1}(G;\mathbb{F}_{p})\overset{\beta_{p}}{\rightarrow}A_{G;\mathbb{Z}}^{\ast
}\rightarrow\cdots$
\end{quote}
\noindent associated to the exact sequence $0\rightarrow\mathbb{Z}\overset{\times p}{\rightarrow}\mathbb{Z}\rightarrow\mathbb{F}_{p}\rightarrow0$ of coefficients, where $\beta_{p}$ is the \textsl{Bockstein
homomorphism} and $r_{p}$ is the $\operatorname{mod}p$ reduction.
\bigskip
\noindent\textbf{Lemma 3.5.} \textsl{On }$E_{3}^{\ast,r}(G;\mathbb{Z})$\textsl{ (}$r=0,1$\textsl{) the reduction }$r_{p}$\textsl{ is given by}
\begin{enumerate}
\item[(3.8)] $r_{p}(\xi_{i})\equiv\xi_{i}^{(p)}$;$\ $
$r_{p}(y_{j})\equiv\left\{
\begin{tabular}
[c]{l}
$y_{j}\text{ for }j\in G(p)$\\
$0$ $\text{for }j\notin G(p)$
\end{tabular}
\ \ \ \right. $;$\quad$
$r_{p}(\eta_{j})\equiv\left\{
\begin{tabular}
[c]{l}
$-y_{j}^{k_{j}-1}\theta_{j}^{(p)}$ $\text{for }j\in G(p)$\\
$p_{j}\eta_{j}^{(p)}$ $\text{for }j\in\overline{G}(\mathbb{F}_{p})$
\end{tabular}
\ \ \ \right. $,
\end{enumerate}
\noindent\textsl{where }$1\leq i\leq k$\textsl{, }$j\in\overline{G}(\mathbb{Z})$\textsl{. In particular, when }$G=E_{8}$\textsl{,}
\begin{enumerate}
\item[(3.9)] $r_{p}(\eta_{6})\equiv\left\{
\begin{tabular}
[c]{l}
$y_{7}\theta_{7}^{(2)}$ if $p=2$;\\
$-y_{6}^{2}\theta_{6}^{(3)}$ if $p=3$;\\
$2y_{4}^{4}\theta_{4}^{(5)}$ if $p=5$;\\
$3\eta_{6}^{(p)}$ if $p\neq2,3,5$.
\end{tabular}
\ \ \right. $
\end{enumerate}
\textsl{On }$E_{3}^{\ast,1}(G;\mathbb{F}_{p})$\textsl{ the Bockstein }$\beta_{p}$\textsl{ satisfies:}
\begin{enumerate}
\item[(3.10)] $\beta_{p}(\xi_{i}^{(p)})=\beta_{p}(\eta_{s}^{(p)})=0$;
$\beta_{p}(\theta_{t}^{(p)})=-y_{t}$
\end{enumerate}
\noindent\textsl{where} $1\leq i\leq k$, $s\in\overline{G}(\mathbb{F}_{p})$,
$t\in G(p)$.
\noindent\textbf{Proof. }Reduction $\operatorname{mod}p$ yields the
commutative diagram
\begin{center}
$\begin{array}
[c]{ccccc}
0 & \rightarrow & E_{3}^{\ast,1}(G;\mathbb{Z}) & \overset{\delta}{\rightarrow}
& I_{G,\mathbb{Z}}/J_{G,\mathbb{Z}}\\
&  & r_{p}\downarrow &  & r_{p}\downarrow\\
0 & \rightarrow & E_{3}^{\ast,1}(G;\mathbb{F}_{p}) & \overset{\delta
}{\rightarrow} & I_{G,\mathbb{F}_{p}}/J_{G,\mathbb{F}_{p}}
\end{array}
$
\end{center}
\noindent by which the relations in (3.8) and (3.9) are verified by iii) of
Lemma 2.8.
Turning to (3.10), we get from $r_{p}(\xi_{i})\equiv\xi_{i}^{(p)}$ and $r_{p}(\eta_{s})\equiv p_{s}\eta_{s}^{(p)}$ that
\begin{quote}
$\beta_{p}(\xi_{i}^{(p)})=\beta_{p}(\eta_{s}^{(p)})=0$.
\end{quote}
\noindent Finally, the relation $\beta_{p}(\theta_{t}^{(p)})=-y_{t}$, $t\in
G(p)$, comes from the diagram chase
\begin{center}
$\begin{array}
[c]{ccccc}
& & \varphi(\alpha_{t}) & \rightarrow & \theta_{t}^{(p)}\\
& & d\downarrow\quad & & d\downarrow\quad\\
-y_{t} & \overset{p}{\longrightarrow} & \alpha_{t} & & \alpha_{t}^{(p)}=0
\end{array}
$
\end{center}
\noindent in the short exact sequence of cochain complexes
\begin{center}
$0\rightarrow H^{\ast}(G/T;\mathbb{Z})\otimes\Lambda^{\ast}\overset
{p}{\rightarrow}H^{\ast}(G/T;\mathbb{Z})\otimes\Lambda^{\ast}\overset{r_{p}}{\rightarrow}H^{\ast}(G/T;\mathbb{F}_{p})\otimes\Lambda^{\ast}\rightarrow0$,
\end{center}
\noindent where $\varphi$ is the map in (2.7), and where $\alpha_{t}$ and
$\alpha_{t}^{(p)}$ are the polynomials specified respectively in Lemmas 2.1
and 2.6.$\square$
\bigskip
\noindent\textbf{Remark 3.6.} For all exceptional $G$ a concrete expression for
the reduction $r_{p}:E_{3}^{\ast,1}(G;\mathbb{Z})\rightarrow E_{3}^{\ast
,1}(G;\mathbb{F}_{p})$ with respect to the primary forms is given in Lemma 3.2
in \cite{D2}, where the results play a decisive role in translating the Hopf
algebra structure on $H^{\ast}(G;\mathbb{F}_{p})$ to the near--Hopf ring
structure on $H^{\ast}(G;\mathbb{Z})$.$\square$
\section{The structure of $E_{\infty}^{\ast,\ast}(G;\mathbb{F})$}
In this section, we determine $E_{3}^{\ast,\ast}(G;\mathbb{F})$ and show that
$E_{3}^{\ast,\ast}=E_{\infty}^{\ast,\ast}$ in Lemma 4.1 for $\mathbb{F}$ a
field, and in Lemma 4.6 for $\mathbb{F=Z}$, respectively. We begin by
mentioning useful properties of $E_{3}^{\ast,\ast}(G;\mathbb{F})$ in (4.1)--(4.4):
\begin{enumerate}
\item[(4.1)] $E_{3}^{\ast,\ast}(G;\mathbb{F})$ is a module over the ring
$A_{G;\mathbb{F}}^{\ast}=E_{3}^{\ast,0}(G;\mathbb{F})$;
\item[(4.2)] the product in $E_{3}^{\ast,\ast}(G;\mathbb{F})$ satisfies
$x^{2}=0$ for $x\in E_{3}^{\ast,1}(G;\mathbb{F})$;
\end{enumerate}
\noindent and, as a standard property of the \textsl{Koszul complex} (cf.
\cite{K2, Se}),
\begin{enumerate}
\item[(4.3)] if $\mathbb{F}$ is a field, $E_{3}^{\ast,\ast}(G;\mathbb{F})$ is
generated by $E_{3}^{\ast,0}(G;\mathbb{F})$ and $E_{3}^{\ast,1}(G;\mathbb{F})$.
\end{enumerate}
\noindent Finally, with $n=\dim T$ and $g=\dim G/T$, we have
\begin{enumerate}
\item[(4.4)] $E_{3}^{g,n}(G;\mathbb{F})=E_{2}^{g,n}(G;\mathbb{F})=\mathbb{F}$
(since $E_{2}^{g-2,n+1}=E_{2}^{g+2,n-1}=0$).
\end{enumerate}
\begin{center}
\textbf{4.1. The algebra }$E_{\infty}^{\ast,\ast}(G;\mathbb{F})$\textbf{ with
}$\mathbb{F}$ \textbf{a field.}
\end{center}
Let $\mathbb{F}$ be a field and let $\mathcal{O}_{G,\mathbb{F}}$ be the set of
primary forms in $E_{3}^{\ast,1}(G;\mathbb{F})$. Combining Lemmas 3.1 and 3.4
with (4.2) and (4.3), we find that
\begin{enumerate}
\item[(4.5)] the inclusions $A_{G;\mathbb{F}}^{\ast}$, $\mathcal{O}_{G,\mathbb{F}}\subset E_{3}^{\ast,\ast}(G;\mathbb{F})$ extend to a surjective
ring map
\end{enumerate}
\begin{quote}
$\psi_{\mathbb{F}}:A_{G;\mathbb{F}}^{\ast}\otimes\Lambda_{\mathbb{F
}(\mathcal{O}_{G,\mathbb{F}})\rightarrow E_{3}^{\ast,\ast}(G;\mathbb{F})$.
\end{quote}
\noindent\textbf{Lemma 4.1.} \textsl{The map }$\psi_{\mathbb{F}}$\textsl{ is a
ring isomorphism. Consequently,}
\begin{quote}
i) $E_{3}^{\ast,\ast}(G;\mathbb{F})=E_{\infty}^{\ast,\ast}(G;\mathbb{F})$;
ii) $\dim_{\mathbb{F}}E_{3}^{\ast,\ast}(G;\mathbb{F})=\left\{
\begin{tabular}
[c]{l}
$2^{n}\text{ \textsl{if either} }\mathbb{F=Q}\text{ \textsl{or} }\mathbb{F}_{p}\text{ \textsl{with} }G(p)=\emptyset\text{;}$\\
$2^{n}\prod_{t\in G(p)}k_{t}\text{ \textsl{if} }\mathbb{F}=\mathbb{F}_{p}\text{ \textsl{with} }G(p)\neq\emptyset\text{.}$
\end{tabular}
\right. $
\end{quote}
\noindent\textbf{Proof.} Granted (4.5), it suffices to show that
$\psi_{\mathbb{F}}$ is injective.
If $\mathbb{F}=\mathbb{Q}$ then $A_{G;\mathbb{F}}^{\ast}=\mathbb{Q}$ by Lemma
3.1. In the top degree the algebra $\Lambda_{\mathbb{Q}}(\mathcal{O}_{G,\mathbb{Q}})$ is spanned by the single element $u=\prod
_{v\in\mathcal{O}_{G,\mathbb{Q}}}v$. Since $\deg u=\dim G(=g+n)$ by Lemma
2.11, $\psi_{\mathbb{Q}}(u)\in E_{3}^{g,n}(G;\mathbb{Q})=\mathbb{Q}$ must be a
generator by (4.5). The proof for $\mathbb{F}=\mathbb{Q}$ is completed by
\begin{quote}
$2^{n}=\dim\Lambda_{\mathbb{Q}}(\mathcal{O}_{G,\mathbb{Q}})\geq\dim
E_{3}^{\ast,\ast}(G;\mathbb{Q})\geq2^{n}$,
\end{quote}
\noindent in which the first inequality $\geq$ comes from (4.5), and the
second is obtained by applying Lemma 2.13 to the class $\psi_{\mathbb{Q}}(u)=\prod_{v\in\mathcal{O}_{G,\mathbb{Q}}}\psi_{\mathbb{Q}}(v)$, with respect
to which the algebra $E_{3}^{\ast,\ast}(G;\mathbb{Q})$ is monotone in
bi--degree $(g,n)$ by (4.2).
The same argument applies equally well to the case $\mathbb{F}=\mathbb{F}_{p}$. It follows from
\begin{quote}
$A_{G;\mathbb{F}_{p}}^{\ast}=\mathbb{F}_{p}[y_{t}]_{t\in G(p)}/\left\langle
y_{t}^{k_{t}}\right\rangle $ (by Lemma 3.1)
\end{quote}
\noindent that, in the top degree, the algebra $A_{G;\mathbb{F}_{p}}^{\ast
}\otimes\Lambda_{\mathbb{F}_{p}}(\mathcal{O}_{G,\mathbb{F}_{p}})$ is spanned
by the single element
\begin{enumerate}
\item[(4.6)] $u_{p}=\prod_{1\leq i\leq k}\xi_{i}^{(p)}\prod_{t\in G(p)}y_{t}^{k_{t}-1}\theta_{t}^{(p)}\prod_{s\in\overline{G}(\mathbb{F}_{p})}\eta_{s}^{(p)}$.
\end{enumerate}
\noindent Since $\deg u_{p}=\dim G(=g+n)$ by Lemma 2.11, $\psi_{\mathbb{F}_{p}}(u_{p})\in E_{3}^{g,n}(G;\mathbb{F}_{p})=\mathbb{F}_{p}$ must be a
generator by (4.5). The proof is completed by
\begin{quote}
$\dim A_{G;\mathbb{F}_{p}}^{\ast}\otimes\Lambda_{\mathbb{F}_{p}}(\mathcal{O}_{G,\mathbb{F}_{p}})=2^{n}\prod_{t\in G(p)}k_{t}$
$\qquad\geq\dim E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})\geq2^{n}\prod_{t\in
G(p)}k_{t}$,
\end{quote}
\noindent where the second inequality $\geq$ is obtained by applying Lemma
2.13 to the class $\psi_{\mathbb{F}_{p}}(u_{p})$, with respect to which
$E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})$ is monotone in bi--degree $(g,n)$ by
(4.2).$\square$
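\bigskip

\noindent\textbf{Example.} As a check on the dimension formula in ii) of
Lemma 4.1, take $G=G_{2}$, where $n=2$ and, by Example 3.2,
$A_{G_{2};\mathbb{Z}}^{\ast}=\mathbb{Z}[x_{6}]/\left\langle 2x_{6},x_{6}^{2}\right\rangle $, so that $G(2)$ consists of a single element $t$ with
$k_{t}=2$, while $G(p)=\emptyset$ for $p\neq2$. Hence
\begin{quote}
$\dim_{\mathbb{F}_{2}}E_{3}^{\ast,\ast}(G_{2};\mathbb{F}_{2})=2^{2}\cdot2=8$;\quad
$\dim_{\mathbb{F}}E_{3}^{\ast,\ast}(G_{2};\mathbb{F})=2^{2}=4$ for
$\mathbb{F}=\mathbb{Q}$ or $\mathbb{F}_{p}$ with $p\neq2$.$\square$
\end{quote}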
\bigskip
For a prime $p$ the differential of bi-degree $(2,-1)$
\begin{quote}
$\partial_{p}=r_{p}\circ\beta_{p}:E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})\overset{\beta_{p}}{\rightarrow}E_{3}^{\ast,\ast}(G;\mathbb{Z})\overset
{r_{p}}{\rightarrow}E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})$
\end{quote}
\noindent clearly satisfies $\partial_{p}^{2}=0$. In view of the presentation
\begin{quote}
$E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})=A_{G;\mathbb{F}_{p}}^{\ast}\otimes
\Lambda_{\mathbb{F}_{p}}(\mathcal{O}_{G,\mathbb{F}_{p}})$
\end{quote}
\noindent by Lemma 4.1, its action on $E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})$
has been determined by (3.10):
\begin{enumerate}
\item[(4.7)] $\partial_{p}(\theta_{t}^{(p)})=-y_{t}$; $\partial_{p}(y_{t})=\partial_{p}(\xi_{i}^{(p)})=\partial_{p}(\eta_{s}^{(p)})=0$.
\end{enumerate}
\noindent In preparation for computing the torsion ideal in $E_{3}^{\ast,\ast
}(G;\mathbb{Z})$ we show that
\bigskip
\noindent\textbf{Lemma 4.2.} $\dim_{\mathbb{F}_{p}}H^{\ast}(E_{3}^{\ast,\ast
}(G;\mathbb{F}_{p});\partial_{p})=2^{n}$.
\noindent\textbf{Proof. }Granted Lemmas 3.1 and 4.1, we have the ring decomposition
\begin{quote}
$E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})=\otimes_{t\in G(p)}C_{t}\otimes
\Lambda_{\mathbb{F}_{p}}(\xi_{i}^{(p)},\eta_{s}^{(p)})_{1\leq i\leq
k,s\in\overline{G}(\mathbb{F}_{p})}$,
\end{quote}
\noindent where $C_{t}=(\mathbb{F}_{p}[y_{t}]/\left\langle y_{t}^{k_{t}}\right\rangle )\otimes\Lambda_{\mathbb{F}_{p}}(\theta_{t}^{(p)})$. Since by (4.7)
\begin{quote}
i) each factor $C_{t}$ is invariant with respect to $\partial_{p}$; and
ii) $\partial_{p}$ acts trivially on the factor $\Lambda_{\mathbb{F}_{p}}(\xi_{i}^{(p)},\eta_{s}^{(p)})_{1\leq i\leq k,s\in\overline{G}(\mathbb{F}_{p})}$,
\end{quote}
\noindent we get from the K\"{u}nneth formula and the universal--coefficient
theorem that
\begin{quote}
$H^{\ast}(E_{3}^{\ast,\ast}(G;\mathbb{F}_{p}),\partial_{p})=\otimes_{t\in
G(p)}H^{\ast}(C_{t},\partial_{p})\otimes\Lambda_{\mathbb{F}_{p}}(\xi_{i}^{(p)},\eta_{s}^{(p)})_{1\leq i\leq k,s\in\overline{G}(\mathbb{F}_{p})}$.
\end{quote}
\noindent The proof is completed by
\begin{quote}
$\dim_{\mathbb{F}_{p}}H^{\ast}(C_{t},\partial_{p})=2$
\end{quote}
\noindent since $H^{\ast}(C_{t},\partial_{p})$ has a basis represented by
$1,y_{t}^{k_{t}-1}\theta_{t}^{(p)}$, and by
\begin{quote}
$\dim_{\mathbb{F}_{p}}\Lambda_{\mathbb{F}_{p}}(\xi_{i}^{(p)},\eta_{s}^{(p)})=2^{n-\left\vert G(p)\right\vert }$
\end{quote}
\noindent since $\left\vert G(p)\right\vert +\left\vert \overline
{G}(\mathbb{F}_{p})\right\vert +k=n$ by Lemma 2.11.$\square$
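\bigskip

\noindent\textbf{Example.} To illustrate the computation of $H^{\ast}(C_{t},\partial_{p})$ in the smallest case $k_{t}=2$, note that $C_{t}$ has the
additive basis $1,y_{t},\theta_{t}^{(p)},y_{t}\theta_{t}^{(p)}$, on which
(4.7) gives (with $\partial_{p}$ acting as a derivation, as in the
K\"{u}nneth argument above)
\begin{quote}
$\partial_{p}(\theta_{t}^{(p)})=-y_{t}$,\quad $\partial_{p}(y_{t}\theta_{t}^{(p)})=-y_{t}^{2}=0$.
\end{quote}
\noindent Thus $\ker\partial_{p}$ is spanned by $1,y_{t},y_{t}\theta_{t}^{(p)}$ and $\operatorname{Im}\partial_{p}$ by $y_{t}$, so $H^{\ast}(C_{t},\partial_{p})$ has a basis represented by $1,y_{t}\theta_{t}^{(p)}$, in
agreement with the general basis $1,y_{t}^{k_{t}-1}\theta_{t}^{(p)}$ above.$\square$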
\bigskip
\noindent\textbf{Remark 4.3.} From the foregoing it is clear that
$\otimes_{t\in G(p)}C_{t}$ is a subcomplex of $\{E_{3}^{\ast,\ast};\partial_{p}\}$ whose cohomology is spanned by the subset $\{1,\prod_{t\in
I}y_{t}^{k_{t}-1}\theta_{t}^{(p)}\}_{I\subseteq G(p)}$.$\square$
\begin{center}
\textbf{4.2. The free part of }$E_{3}^{\ast,\ast}(G;\mathbb{Z})$.
\end{center}
In view of (4.2) the inclusion $\mathcal{O}_{G,\mathbb{Z}}\subset E_{3}^{\ast,1}(G;\mathbb{Z})$ extends to a ring map
\begin{enumerate}
\item[(4.8)] $\psi:\Lambda_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}})\rightarrow
E_{3}^{\ast,\ast}(G;\mathbb{Z})$.
\end{enumerate}
\noindent\textbf{Lemma 4.4.} \textsl{The map }$\psi$\textsl{ in (4.8) is
injective and induces a splitting}
\begin{quote}
$E_{3}^{\ast,\ast}(G;\mathbb{Z})=\Lambda_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}})\oplus
T(G)$\textsl{,}
\end{quote}
\noindent\textsl{where }$T(G)$\textsl{ is the torsion ideal of }$E_{3}^{\ast,\ast}(G;\mathbb{Z})$\textsl{.}
\noindent\textbf{Proof}. According to Lemma 3.5 the reduction $r_{p}:E_{3}^{g,n}(G;\mathbb{Z})\rightarrow E_{3}^{g,n}(G;\mathbb{F}_{p})$ maps the
class $u=\prod_{v\in\mathcal{O}_{G,\mathbb{Z}}}v$ to
\begin{quote}
$r_{p}(u)\equiv a\prod_{1\leq i\leq k}\xi_{i}^{(p)}\prod_{t\in G(p)}y_{t}^{k_{t}-1}\theta_{t}^{(p)}\prod_{s\in\overline{G}(\mathbb{F}_{p})}p_{s}\eta_{s}^{(p)}$
$\qquad~\equiv(a\prod_{s\in\overline{G}(\mathbb{F}_{p})}p_{s})u_{p}$,
\end{quote}
\noindent where, if $G=E_{8}$, the factor $r_{p}(\eta_{6})$ in $r_{p}(u)$ is
evaluated as in (3.9), and where
\begin{quote}
i) $a=\left\{
\begin{tabular}
[c]{l}
$(-1)^{\left\vert G(p)\right\vert }\text{ if either }G\neq E_{8}\text{ or
}G=E_{8}\text{, }p\neq2,5$;\\
$(-1)^{3}\text{ if }G=E_{8}\text{, }p=2$;\\
$2\text{ if }G=E_{8}\text{, }p=5$.
\end{tabular}
\ \right. $
ii) $u_{p}$ is the class given by (4.6). \
\end{quote}
\noindent Since $u_{p}$ generates $E_{3}^{g,n}(G;\mathbb{F}_{p})=\mathbb{F}_{p}$ by the proof of Lemma 4.1, and since the coefficient $(a\prod_{s}p_{s})$
is always coprime to $p$, $r_{p}(u)$ generates $E_{3}^{g,n}(G;\mathbb{F}_{p})=\mathbb{F}_{p}$ for every prime $p$. It follows now from (4.2) and (4.4)
that the ring $E_{3}^{\ast,\ast}(G;\mathbb{Z})$ is monotone with respect to
$u\in E_{3}^{g,n}(G;\mathbb{Z})=\mathbb{Z}$; consequently, the set
$\{\prod_{v\in\mathcal{O}_{G,\mathbb{Z}}}v^{\varepsilon_{v}}\}_{\varepsilon
_{v}=0,1}$ is linearly independent and spans a direct summand of rank $2^{n}$
of the free part of $E_{3}^{\ast,\ast}(G;\mathbb{Z})$ by Lemma 2.13.
It remains to show that tensoring with $\mathbb{Q}$ yields an isomorphism
\begin{quote}
$\psi\otimes1:\Lambda_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}})\otimes
\mathbb{Q}\rightarrow E_{3}^{\ast,\ast}(G;\mathbb{Q})$.
\end{quote}
\noindent But this comes directly from $\dim_{\mathbb{Q}}E_{3}^{\ast,\ast
}(G;\mathbb{Q})=2^{n}$ by Lemma 4.1, and the injectivity of $\psi\otimes
1$.$\square$
\begin{center}
\textbf{4.3. The ring }$E_{\infty}^{\ast,\ast}(G;\mathbb{Z})$.\textbf{ }
\end{center}
For a prime $p$ the $p$--\textsl{primary component} of the torsion ideal
$T(G)$ is the subgroup
\begin{quote}
$T_{p}(G)=\{x\in T(G)\mid p^{r}\cdot x=0$ for some $r\geq1\}$.
\end{quote}
\noindent Consider the Bockstein\textsl{ }sequence
\begin{center}
$\cdots\rightarrow E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})\overset{\beta_{p}}{\rightarrow}E_{3}^{\ast,\ast}(G;\mathbb{Z})\overset{p}{\rightarrow}E_{3}^{\ast,\ast}(G;\mathbb{Z})\overset{r_{p}}{\rightarrow}E_{3}^{\ast,\ast
}(G;\mathbb{F}_{p})\overset{\beta_{p}}{\rightarrow}\cdots$.
\end{center}
\noindent With the presentation of $E_{3}^{\ast,\ast}(G;\mathbb{Z})$ in Lemma
4.4, the universal coefficient theorem yields the exact sequence
\begin{enumerate}
\item[(4.9)] $0\rightarrow\Lambda_{\mathbb{F}_{p}}(\mathcal{O}_{G,\mathbb{Z}})\oplus T_{p}(G)\otimes\mathbb{F}_{p}\rightarrow E_{3}^{\ast,\ast
}(G;\mathbb{F}_{p})\rightarrow Tor(T_{p}(G);\mathbb{F}_{p})\rightarrow0$
\end{enumerate}
\noindent in which
\begin{quote}
a) $\Lambda_{\mathbb{F}_{p}}(\mathcal{O}_{G,\mathbb{Z}})\oplus T_{p}(G)\otimes\mathbb{F}_{p}=\operatorname{Im}r_{p}\subseteq\ker\partial_{p}$;
b) $\beta_{p}$ maps $Tor(T_{p}(G);\mathbb{F}_{p})$ isomorphically onto the subgroup
$\qquad t_{p}(G)=\{x\in T_{p}(G)\mid px=0\}$.
\end{quote}
\noindent\textbf{Lemma 4.5.} \textsl{For a prime }$p$\textsl{ we have}
\begin{quote}
i) $\operatorname{Im}\beta_{p}=T_{p}(G)\cong\operatorname{Im}\partial_{p}$
\textsl{under} $r_{p}$;
ii) $\dim_{\mathbb{F}_{p}}T_{p}(G)=\left\{
\begin{tabular}
[c]{l}
$0$ \textsl{if} $G(p)=\emptyset$;\\
$2^{n-1}(\prod_{t\in G(p)}k_{t}-1)$ \textsl{if} $G(p)\neq\emptyset$.
\end{tabular}
\ \right. $
\end{quote}
\noindent\textbf{Proof.} For i) it suffices to show that $t_{p}(G)=T_{p}(G)$.
Assume on the contrary that there exists $x\in T_{p}(G)$ with $p^{r}x=0$ but
$p^{r-1}x\neq0$, $r\geq2$. Then $r_{p}(p^{r-1}x)=0$ and $p^{r-1}x\in\operatorname{Im}\beta_{p}$ imply that the restriction of $\partial
_{p}=r_{p}\circ\beta_{p}$ to $Tor(T_{p}(G);\mathbb{F}_{p})$ has a nontrivial
kernel. Since $\partial_{p}$ maps $Tor(T_{p}(G);\mathbb{F}_{p})$ into the
summand $T_{p}(G)\otimes\mathbb{F}_{p}$ in a) and since $\dim\Lambda
_{\mathbb{F}_{p}}(\mathcal{O}_{G,\mathbb{Z}})=2^{n}$, we would have
\begin{quote}
$\dim_{\mathbb{F}_{p}}H^{\ast}(E_{3}^{\ast,\ast}(G;\mathbb{F}_{p}),\partial_{p})>2^{n}$.
\end{quote}
\noindent This contradiction to Lemma 4.2 shows that $t_{p}(G)=T_{p}(G)$,
hence verifies i) of Lemma 4.5.
By i), $\partial_{p}=r_{p}\circ\beta_{p}$ must map $Tor(T_{p}(G);\mathbb{F}_{p})$ isomorphically onto the summand $T_{p}(G)\otimes\mathbb{F}_{p}$ in a).
With $\dim_{\mathbb{F}_{p}}E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})$ given in
Lemma 4.1, the equalities in ii) are obtained from $T_{p}(G)\otimes
\mathbb{F}_{p}\cong T_{p}(G)$ and
\begin{quote}
$2\dim_{\mathbb{F}_{p}}T_{p}(G)\otimes\mathbb{F}_{p}+2^{n}=\dim_{\mathbb{F}_{p}}E_{3}^{\ast,\ast}(G;\mathbb{F}_{p})$ by (4.9).$\square$
\end{quote}
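\bigskip

\noindent\textbf{Example.} For $G=G_{2}$ (where $n=2$ and, by Example 3.2,
$G(2)$ consists of a single element $t$ with $k_{t}=2$, while $G(p)=\emptyset$
for $p\neq2$), ii) of Lemma 4.5 gives
\begin{quote}
$\dim_{\mathbb{F}_{2}}T_{2}(G_{2})=2^{2-1}(2-1)=2$;\quad $T_{p}(G_{2})=0$ for
$p\neq2$.$\square$
\end{quote}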
Since $G(p)=\emptyset$ for $p\neq2,3,5$ by Lemma 2.1, Lemmas 4.4 and 4.5 yield
the next result, which confirms a conjecture of Ka\v{c} (\cite{K1}).
\bigskip
\noindent\textbf{Lemma 4.6.} $E_{\infty}^{\ast,\ast}(G;\mathbb{Z})=E_{3}^{\ast,\ast}(G;\mathbb{Z})=\Lambda_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}})\oplus_{p\in\{2,3,5\}}\operatorname{Im}\beta_{p}$.$\square$
\bigskip
\noindent\textbf{Remark 4.7.} Lemma 4.6 is trivial for $G=SU(n),Sp(n)$, and
has been shown for $G=Spin(n)$ by Pittie \cite{P}. $\square$
\section{Additive presentations of $H^{\ast}(G;\mathbb{F})$}
We obtain additive presentations for $H^{\ast}(G;\mathbb{F})$ in Theorem 1 for
$\mathbb{F}$ a field, and in Theorem 2 for $\mathbb{F=Z}$; these are very close
to our eventual characterization of $H^{\ast}(G;\mathbb{F})$ as a ring. We
begin by singling out certain terms $E_{\infty}^{s,t}(G;\mathbb{F})$ that are
naturally subgroups of $H^{\ast}(G;\mathbb{F})$. First of all, combining Lemma
3.1 with Lemmas 4.1 and 4.6 we have
\begin{enumerate}
\item[(5.1)] $E_{\infty}^{\ast,0}(G;\mathbb{F})=A_{G;\mathbb{F}}^{\ast}$ is
the subring $\operatorname{Im}\pi^{\ast}$ of $H^{\ast}(G;\mathbb{F})$.
\end{enumerate}
\noindent Next, let $\mathcal{F}$ be the filtration on $H^{\ast}(G;\mathbb{F})$ induced from $\pi$. For all $p+q=g+n$ with $g=\dim G/T$ and
$n=\dim T$, we have by Lemmas 4.1, 4.6 and (4.4) that
\begin{quote}
$E_{\infty}^{p,q}(G;\mathbb{F})=E_{3}^{p,q}(G;\mathbb{F})=\left\{
\begin{tabular}
[c]{l}
$\mathbb{F}\text{ if }(p,q)=(g,n)\text{;}$\\
$0\text{ otherwise.}$
\end{tabular}
\right. $
\end{quote}
\noindent This implies that the filtration $\mathcal{F}$ on $H^{g+n}(G;\mathbb{F})$ reads
\begin{quote}
$H^{g+n}(G;\mathbb{F})=\mathcal{F}^{g}H^{g+n}\supset\mathcal{F}^{g+1}H^{g+n}=0$,
\end{quote}
\noindent and therefore,
\begin{enumerate}
\item[(5.2)] $H^{g+n}(G;\mathbb{F})=\mathcal{F}^{g}H^{g+n}(G;\mathbb{F})=E_{\infty}^{g,n}(G;\mathbb{F})=\mathbb{F}$.
\end{enumerate}
\noindent Further, as $E_{2}^{p,q}(G;\mathbb{F})=0$ for odd $p$, we have
the\textsl{ canonical }monomorphism
\begin{enumerate}
\item[(5.3)] $\kappa:E_{\infty}^{2k,1}(G;\mathbb{F})=\mathcal{F}^{2k}H^{2k+1}(G;\mathbb{F})\subset H^{2k+1}(G;\mathbb{F})$ (see Pittie
\cite{P})
\end{enumerate}
\noindent which \textsl{directly} interprets elements of $E_{3}^{\ast
,1}(G;\mathbb{F})$ (in particular, the primary forms) as cohomology classes of
$G$.
\bigskip
\noindent\textbf{Definition 5.1.} Elements of the subset $\mathcal{O}_{G,\mathbb{F}}^{\kappa}=\{\kappa(u)\in H^{\ast}(G;\mathbb{F})\mid
u\in\mathcal{O}_{G,\mathbb{F}}\}$ will be called \textsl{primary generators
of} $H^{\ast}(G;\mathbb{F})$.
\bigskip
The inclusion $\kappa$ in (5.3) has three useful properties, explained in
(5.4)--(5.6) below. Firstly, since the product in the filtration $\mathcal{F}$
is compatible with that in $H^{\ast}(G;\mathbb{F})$, one infers from (5.2) and
(5.3) that
\begin{enumerate}
\item[(5.4)] for all $k_{1},\cdots,k_{n}$ with $2(k_{1}+\cdots+k_{n})=g$, the
diagram commutes
\end{enumerate}
\begin{center}
$\begin{array}
[c]{ccc}
E_{\infty}^{2k_{1},1}(G;\mathbb{F})\times\cdots\times E_{\infty}^{2k_{n},1}(G;\mathbb{F}) & \rightarrow & E_{\infty}^{g,n}(G;\mathbb{F})\\
\kappa\times\cdots\times\kappa\quad\downarrow &  & \parallel\\
H^{2k_{1}+1}(G;\mathbb{F})\times\cdots\times H^{2k_{n}+1}(G;\mathbb{F}) &
\rightarrow & H^{g+n}(G;\mathbb{F})
\end{array}
$,
\end{center}
\noindent where the horizontal maps are the products in $E_{\infty}^{\ast
,\ast}(G;\mathbb{F})$ and $H^{\ast}(G;\mathbb{F})$ respectively. Secondly, for
$x\in E_{\infty}^{2k,1}$ we get from $x^{2}=0$ in
\begin{quote}
$E_{\infty}^{4k,2}=\mathcal{F}^{4k}H^{4k+2}/\mathcal{F}^{4k+1}H^{4k+2}$
\end{quote}
\noindent that $\kappa(x)^{2}\in\mathcal{F}^{4k+1}H^{4k+2}$. It follows then from
\begin{quote}
$\mathcal{F}^{4k+1}H^{4k+2}/\mathcal{F}^{4k+2}H^{4k+2}=E_{\infty}^{4k+1,1}=0$
(since $E_{\infty}^{p,q}=0$ for odd $p$)
\end{quote}
\noindent that $\kappa(x)^{2}\in\mathcal{F}^{4k+2}H^{4k+2}=E_{\infty}^{4k+2,0}$. This implies by (5.1) that
\begin{enumerate}
\item[(5.5)] $\kappa(x)^{2}\in A_{G;\mathbb{F}}^{\ast}\subset H^{\ast
}(G;\mathbb{F})$ for all $x\in E_{\infty}^{2k,1}(G;\mathbb{F})$.
\end{enumerate}
\noindent Finally, $\kappa$ is compatible with the \textsl{Bockstein}
$\delta_{p}$ on $H^{\ast}(G;\mathbb{F}_{p})$ in the sense that the following
diagram commutes (in which the vertical map on the right is given by the
inclusion (5.1)):
\begin{enumerate}
\item[(5.6)]
$\begin{array}
[c]{ccc}
E_{\infty}^{2k,1}(G;\mathbb{F}_{p}) & \overset{\beta_{p}}{\rightarrow} &
A_{G;\mathbb{Z}}^{\ast}=E_{3}^{\ast,0}(G;\mathbb{Z})\\
\kappa\downarrow &  & \downarrow\\
H^{2k+1}(G;\mathbb{F}_{p}) & \overset{\delta_{p}}{\rightarrow} & H^{\ast
}(G;\mathbb{Z})
\end{array}
$.
\end{enumerate}
\begin{center}
\textbf{5.1. }$H^{\ast}(G;\mathbb{F})$\textbf{ with }$\mathbb{F}$\textbf{ a
field}.
\end{center}
\noindent\textbf{Theorem 1}.\textsl{ If }$\mathbb{F}$\textsl{ is a field, the
inclusions }$A_{G;\mathbb{F}}^{\ast},\mathcal{O}_{G,\mathbb{F}}^{\kappa}\subset H^{\ast
}(G;\mathbb{F})$\textsl{ of (5.1) and (5.3) induce an isomorphism of
}$A_{G;\mathbb{F}}^{\ast}$\textsl{ modules}
\begin{enumerate}
\item[(5.7)] $H^{\ast}(G;\mathbb{F})=A_{G;\mathbb{F}}^{\ast}\otimes
\Delta_{\mathbb{F}}(\mathcal{O}_{G,\mathbb{F}}^{\kappa})$.
\end{enumerate}
\noindent\textsl{Consequently,}
\begin{quote}
$\dim_{\mathbb{F}}H^{\ast}(G;\mathbb{F})=\left\{
\begin{tabular}
[c]{l}
$2^{n}\text{ \textsl{if either} }\mathbb{F}=\mathbb{Q}\text{ \textsl{or}
}\mathbb{F}_{p}\text{ \textsl{with} }G(p)=\emptyset\text{,}$\\
$2^{n}\prod_{t\in G(p)}k_{t}\text{ \textsl{if} }\mathbb{F}=\mathbb{F}_{p}\text{ \textsl{with} }G(p)\neq\emptyset\text{.}$
\end{tabular}
\ \right. $
\end{quote}
\noindent\textbf{Proof.} If $\mathbb{F}=\mathbb{Q}$, then $A_{G;\mathbb{F}}^{\ast}=\mathbb{Q}$ and $H^{\dim G}(G;\mathbb{Q})=\mathbb{Q}$ is spanned by
$u=\prod_{v\in\mathcal{O}_{G,\mathbb{Q}}}\kappa(v)$ by (5.4). Since
$\kappa(v)^{2}=0$ for $v\in\mathcal{O}_{G,\mathbb{Q}}$ the graded algebra
$H^{\ast}(G;\mathbb{Q})$ is monotone with respect to $u$ in degree $\dim G$.
By Lemma 2.13 the subset
\begin{quote}
$\{\prod_{v\in\mathcal{O}_{G,\mathbb{Q}}}\kappa(v)^{\varepsilon_{v}}\}_{\varepsilon_{v}=0,1}\subset H^{\ast}(G;\mathbb{Q})$
\end{quote}
\noindent of cardinality $2^{n}$ is linearly independent. The proof for
$\mathbb{F}=\mathbb{Q}$ is completed by
\begin{quote}
$\dim H^{\ast}(G;\mathbb{Q})=\dim E_{\infty}^{\ast,\ast}(G;\mathbb{Q})=2^{n}$,
\end{quote}
\noindent where the last equality comes from Lemma 4.1.
Consider next the case of $\mathbb{F}=\mathbb{F}_{p}$. By (4.6) and (5.4)
$H^{g+n}(G;\mathbb{F}_{p})=\mathbb{F}_{p}$ is spanned by
\begin{quote}
$\kappa(u_{p})=\prod_{1\leq i\leq k}\kappa(\xi_{i}^{(p)})\prod_{t\in
G(p)}y_{t}^{k_{t}-1}\kappa(\theta_{t}^{(p)})\prod_{s\in\overline{G}(\mathbb{F}_{p})}\kappa(\eta_{s}^{(p)})$.
\end{quote}
\noindent It should be noted that the ring $H^{\ast}(G;\mathbb{F}_{p})$ in
general is not monotone with respect to $\kappa(u_{p})$ when $p=2$. However,
we can establish the next assertion in the place of Lemma 2.13:
\begin{enumerate}
\item[(5.8)] \textsl{the set of monomials}
$\{\prod_{1\leq i\leq k}\kappa(\xi_{i}^{(p)})^{\varepsilon_{i}}\prod_{t\in
G(p)}$\textsl{ }$y_{t}^{r_{t}}\kappa(\theta_{t}^{(p)})^{\varepsilon_{t}}\prod_{s\in\overline{G}(\mathbb{F}_{p})}\kappa(\eta_{s}^{(p)})^{\varepsilon_{s}}\}_{\varepsilon_{j}=0,1;0\leq r_{t}\leq k_{t}-1}$
\textsl{is linearly independent in }$H^{\ast}(G;\mathbb{F}_{p})$\textsl{,}
\end{enumerate}
\noindent from which (5.7) follows, since $\dim H^{\ast}(G;\mathbb{F}_{p})=\dim
E_{\infty}^{\ast,\ast}(G;\mathbb{F}_{p})$.
Denote by $\mathcal{B}$ the set in (5.8), and let $\mathcal{V}$ be the graded
subspace of $H^{\ast}(G;\mathbb{F}_{p})$ spanned by $\mathcal{B}$. Consider
the involution $\tau$ on $\mathcal{B}$ defined by
\begin{center}
$\tau(\prod\kappa(\xi_{i}^{(p)})^{\varepsilon_{i}}\prod y_{t}^{r_{t}}\kappa(\theta_{t}^{(p)})^{\varepsilon_{t}}\prod\kappa(\eta_{s}^{(p)})^{\varepsilon_{s}})$

$=\prod\kappa(\xi_{i}^{(p)})^{\varepsilon_{i}^{\prime}}\prod y_{t}^{r_{t}^{\prime}}\kappa(\theta_{t}^{(p)})^{\varepsilon_{t}^{\prime}}\prod\kappa(\eta_{s}^{(p)})^{\varepsilon_{s}^{\prime}}$,
\end{center}
\noindent where $\varepsilon_{i}^{\prime}=0$ or $1$ in accordance with
$\varepsilon_{i}=1$ or $0$, and where $r_{t}^{\prime}=k_{t}-1-r_{t}$. It
follows from (5.5) that, for any pair $(x,y)\in\mathcal{B}\times\mathcal{B}$
with $\deg x+\deg y=g+n$ ($=\dim G$), their product in $H^{\ast}(G;\mathbb{F}_{p})$ satisfies
\begin{quote}
$xy=\left\{
\begin{array}
[c]{c}
\pm\kappa(u_{p})\text{ if }y=\tau(x)\text{;}\\
0\text{ if }y\neq\tau(x)\text{.}
\end{array}
\right. $
\end{quote}
\noindent This implies that $\dim\mathcal{V}=\left\vert \mathcal{B}\right\vert
$, hence verifies (5.8).$\square$
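\bigskip

\noindent As a concrete instance of the dimension formula in Theorem 1, one may take $G=E_{8}$, $p=3$ and $n=8$; the data $E_{8}(3)=\{2,6\}$ with $k_{2}=k_{6}=3$ assumed here are the truncation heights visible in the presentation of $H^{\ast}(E_{8};\mathbb{F}_{3})$ in Theorem 4 below. Then

```latex
\dim_{\mathbb{F}_{3}}H^{\ast}(E_{8};\mathbb{F}_{3})
  =2^{n}\prod_{t\in E_{8}(3)}k_{t}
  =2^{8}\cdot3\cdot3=2304.
```

The same total results from Theorem 4 directly: the two truncated polynomial generators of height $3$ contribute the factor $3\cdot3$, and the eight generators of the $\Delta$--factor contribute $2^{8}$.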
\bigskip
\noindent\textbf{Remark 5.2.} In \cite[Theorem 6]{K2} Ka\v{c} obtained the presentation
\begin{quote}
$H^{\ast}(G;\mathbb{F}_{p})=A_{G;\mathbb{F}_{p}}^{\ast}\otimes\Delta
_{\mathbb{F}_{p}}(\xi_{2b_{1}-1},\cdots,\xi_{2b_{n}-1})$, $\deg\xi_{r}=r$,
\end{quote}
\noindent in which the set $\{2b_{1},\cdots,2b_{n}\}$ of integers agrees with
the set of degrees of \textsl{a regular sequence of homogeneous generators
}for the\textsl{ }ideal of\textsl{ }generalized $W$--invariants. It should be
noticed that
\begin{quote}
i) for a given $G$ and $p$ the number $\tau(G;p)$ of different choices of a
set $\xi_{2b_{1}-1},\cdots,\xi_{2b_{n}-1}$ of generators subject to the same
degree constraints can be very large, as shown by the formula
$\qquad\tau(G;p)=
{\textstyle\prod\limits_{1\leq i\leq k}}
[p-2+p^{\dim H^{n_{i}}(G;\mathbb{F}_{p})-1}]$
\end{quote}
\noindent and that
\begin{quote}
ii) the presentation of $H^{\ast}(G;\mathbb{F}_{p})$ as a Hopf algebra over
the Steenrod algebra $\mathcal{A}_{p}$ varies sensitively with respect to
different choices of a set of such generators.
\end{quote}
\noindent In comparison, with respect to the set $\mathcal{O}_{G,\mathbb{F}}^{\kappa}$ of explicit generators in (5.7) stemming from Lemma 2.1, the
structure of $H^{\ast}(G;\mathbb{F}_{p})$ as a Hopf algebra over
$\mathcal{A}_{p}$ can be effectively analyzed, see Theorems 1 and 2 in
\cite{DZ4}. $\square$
\bigskip
In view of the presentation (5.7) for $H^{\ast}(G;\mathbb{F}_{p})$ we examine
the differential of degree $1$ on $H^{\ast}(G;\mathbb{F}_{p})$
\begin{quote}
$\delta_{p}=r_{p}\circ\beta_{p}:H^{\ast}(G;\mathbb{F}_{p})\rightarrow H^{\ast
}(G;\mathbb{Z})\rightarrow H^{\ast}(G;\mathbb{F}_{p})$.
\end{quote}
\noindent Since $H^{\ast}(G;\mathbb{F}_{p})$ has a basis consisting of certain
monomials in $\kappa(\theta_{t}^{(p)}),$ $y_{t},$ $\kappa(\xi_{i}^{(p)})$ and
$\kappa(\eta_{s}^{(p)})$ by Theorem 1, the behavior of $\delta_{p}$ is
determined by the equations
\begin{enumerate}
\item[(5.9)] $\delta_{p}(\kappa(\theta_{t}^{(p)}))=-y_{t}$, $\delta_{p}(y_{t})=\delta_{p}\kappa(\xi_{i}^{(p)})=\delta_{p}\kappa(\eta_{s}^{(p)})=0$,
\end{enumerate}
\noindent see (4.7) and (5.6). In particular, invoking the presentation
(5.7) one has
\begin{enumerate}
\item[(5.10)] $H^{\ast}(G;\mathbb{F}_{p})=\otimes_{t\in G(p)}((\mathbb{F}_{p}[y_{t}]/\left\langle y_{t}^{k_{t}}\right\rangle )\otimes\Delta
(\kappa(\theta_{t}^{(p)})))\otimes\Delta(\kappa(\xi_{i}^{(p)}),\kappa(\eta
_{s}^{(p)}))$.
\end{enumerate}
\noindent The same argument as that in the proof of Lemma 4.2 shows that
\begin{enumerate}
\item[(5.11)] $\dim_{\mathbb{F}_{p}}H^{\ast}(H^{\ast}(G;\mathbb{F}_{p});\delta_{p})=2^{n}$.
\end{enumerate}
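The count (5.11) can be seen factorwise from (5.10); the following computation is a sketch filling in this step, based only on (5.9). In each factor indexed by $t\in G(p)$ one has, by the Leibniz rule (with $y_{t}$ of even degree),

```latex
\delta_{p}\bigl(y_{t}^{\,r}\kappa(\theta_{t}^{(p)})\bigr)
  =y_{t}^{\,r}\,\delta_{p}(\kappa(\theta_{t}^{(p)}))
  =-y_{t}^{\,r+1},\qquad 0\leq r\leq k_{t}-1,
```

so the cohomology of the factor $(\mathbb{F}_{p}[y_{t}]/\left\langle y_{t}^{k_{t}}\right\rangle )\otimes\Delta(\kappa(\theta_{t}^{(p)}))$ is spanned by $1$ and $y_{t}^{k_{t}-1}\kappa(\theta_{t}^{(p)})$ and has dimension $2$, while the factor $\Delta(\kappa(\xi_{i}^{(p)}),\kappa(\eta_{s}^{(p)}))$ carries the trivial differential; multiplying the contributions of all factors gives $2^{n}$.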
\begin{center}
\textbf{5.2. An additive presentation of }$H^{\ast}(G;\mathbb{Z})$.
\end{center}
Let $\tau(G)$ be the torsion ideal of $H^{\ast}(G;\mathbb{Z})$, and let
$\tau_{p}(G)$ be the $p$--primary component of $\tau(G)$. A subset $I\subseteq
G(p)$ defines in $H^{\ast}(G;\mathbb{F}_{p})$ the elements:
\begin{enumerate}
\item[(5.12)] $\theta_{I}^{(p)}=\prod_{t\in I}\kappa(\theta_{t}^{(p)})$;
$\ \mathcal{C}_{I}^{(p)}=\delta_{p}(\theta_{I}^{(p)})$; $\ $
$\mathcal{D}_{I}^{(p)}=(\prod_{t\in I}y_{t}^{k_{t}-1})\mathcal{C}_{I}^{(p)}$;
$\ \mathcal{R}_{I}^{(p)}=\sum_{t\in I}y_{t}\mathcal{C}_{I_{t}}^{(p)}$,
\end{enumerate}
\noindent where $I_{t}$ is obtained by deleting $t\in I$ from $I$. We note
that if $I=\{t\}$ is a singleton, then $\theta_{\{t\}}^{(p)}=\kappa(\theta
_{t}^{(p)})$, $\mathcal{C}_{\{t\}}^{(p)}=-y_{t}$ by (5.9).
If $V^{\ast}=V^{0}\oplus V^{1}\oplus V^{2}\oplus\cdots$ is a graded vector
space (resp. a graded ring), define its subspace (resp. subring) $V^{+}$ by
$V^{+}=V^{1}\oplus V^{2}\oplus\cdots$.
\bigskip
\noindent\textbf{Theorem 2.} \textsl{The inclusion }$O_{G,\mathbb{Z}}^{\kappa
}\subset H^{\ast}(G;\mathbb{Z})$\textsl{ by (5.3) induces a splitting}
\begin{enumerate}
\item[(5.13)] $H^{\ast}(G;\mathbb{Z})=\Delta_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}}^{\kappa})\underset{p\in\{2,3,5\}}{\oplus}\tau_{p}(G)$,
\end{enumerate}
\noindent\textsl{on which the reduction }$r_{p}$\textsl{ restricts to an
additive isomorphism}
\begin{enumerate}
\item[(5.14)] $\tau_{p}(G)\cong\frac{\mathbb{F}_{p}[y_{t}]\{1,\mathcal{C}_{I}^{(p)}\}^{+}}{\left\langle y_{t}^{k_{t}},\mathcal{D}_{J}^{(p)},\mathcal{R}_{K}^{(p)}\right\rangle }\otimes\Delta_{\mathbb{F}_{p}}(\kappa(\xi_{i}^{(p)}),\kappa(\eta_{s}^{(p)}))_{1\leq i\leq k,s\in\overline{G}(\mathbb{F}_{p})}$, $t\in G(p)$,
\end{enumerate}
\noindent\textsl{where}
\begin{quote}
\textsl{i)} $I,J,K\subseteq G(p)$ \textsl{with} $\left\vert I\right\vert
,\left\vert J\right\vert \geq2$, $\left\vert K\right\vert \geq3$\textsl{, and}
\textsl{ii)} $\left\langle y_{t}^{k_{t}},\mathcal{D}_{J}^{(p)},\mathcal{R}_{K}^{(p)}\right\rangle $ \textsl{is the} $\mathbb{F}_{p}[y_{t}]_{t\in G(p)}$\textsl{--module spanned by} $y_{t}^{k_{t}},\mathcal{D}_{J}^{(p)},\mathcal{R}_{K}^{(p)}$.
\end{quote}
\noindent\textbf{Proof. }The identification (5.2) carries the generator
$\prod_{v\in\mathcal{O}_{G,\mathbb{Z}}}v$ of $E_{\infty}^{g,n}(G;\mathbb{Z})=\mathbb{Z}$ to the generator $u=\prod_{v\in\mathcal{O}_{G,\mathbb{Z}}}\kappa(v)$ of $H^{g+n}(G;\mathbb{Z})=\mathbb{Z}$ by (5.4). Since
$\kappa(v)^{2}\in\tau(G)$ for $v\in\mathcal{O}_{G,\mathbb{Z}}$ by (5.5), the
ring $H^{\ast}(G;\mathbb{Z})$ is monotone with respect to $u$ in degree $\dim
G=g+n$. By Lemma 2.13 the set of monomials $\{\prod_{v\in\mathcal{O}_{G,\mathbb{Z}}}\kappa(v)^{\varepsilon_{v}}\}_{\varepsilon_{v}=0,1}$ is
linearly independent, and spans a direct summand of rank $2^{n}$ for the free
part of $H^{\ast}(G;\mathbb{Z})$. From $\dim_{\mathbb{Q}}H^{\ast}(G;\mathbb{Q})=2^{n}$ by Theorem 1 we get
\begin{enumerate}
\item[(5.15)] $H^{\ast}(G;\mathbb{Z})=\Delta_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}}^{\kappa})\oplus\tau(G)$.
\end{enumerate}
Granted Theorem 1, (5.11) and (5.15), the same argument as that in the
proof of Lemma 4.4 shows that for any prime $p$:
\begin{enumerate}
\item[(5.16)] $\operatorname{Im}\beta_{p}=\tau_{p}(G)\cong\operatorname{Im}\delta_{p}$ under $r_{p}$;
\item[(5.17)] $\dim_{\mathbb{F}_{p}}\tau_{p}(G)=\left\{
\begin{tabular}
[c]{l}
$0\text{ if }G(p)=\emptyset\text{;}$\\
$2^{n-1}(\prod_{t\in G(p)}k_{t}-1)\text{ if }G(p)\neq\emptyset\text{.}$
\end{tabular}
\right. $
\end{enumerate}
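As an instance of (5.17), assuming $F_{4}(3)=\{2\}$ with $k_{2}=3$ (the truncation height visible for $x_{8}$ in Theorem 4), one finds for $G=F_{4}$, $p=3$ and $n=4$:

```latex
\dim_{\mathbb{F}_{3}}\tau_{3}(F_{4})
  =2^{n-1}\Bigl(\prod_{t\in F_{4}(3)}k_{t}-1\Bigr)
  =2^{3}(3-1)=16,
```

which matches the count $2\cdot2^{3}=16$ obtained from the presentation (6.8) below.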
\noindent In particular, we get (5.13) from (5.15), (5.17) and that
$G(p)=\emptyset$ for $p\neq2,3,5$ by Lemma 2.1.
In view of (5.16) it remains for us to establish the presentation (5.14) for
$\operatorname{Im}\delta_{p}$. Consider the decomposition obtained from
Theorem 1
\begin{quote}
$H^{\ast}(G;\mathbb{F}_{p})=\oplus_{0\leq r\leq\left\vert G(p)\right\vert
}L_{r}\otimes\Delta_{\mathbb{F}_{p}}(\kappa(\xi_{i}^{(p)}),\kappa(\eta
_{s}^{(p)}))_{1\leq i\leq k,s\in\overline{G}(\mathbb{F}_{p})}$,
\end{quote}
\noindent where $L_{0}=A_{G;\mathbb{F}_{p}}^{\ast}$; $L_{r}=A_{G;\mathbb{F}_{p}}^{\ast}\{\theta_{I}^{(p)}\}_{I\subseteq G(p),\left\vert I\right\vert =r}$. Since the $\delta_{p}$ action satisfies
\begin{quote}
$\delta_{p}(L_{r}\otimes1)\subseteq L_{r-1}\otimes1$, $\delta_{p}(1\otimes\Delta_{\mathbb{F}_{p}}(\kappa(\xi_{i}^{(p)}),\kappa(\eta_{s}^{(p)})))=0$
\end{quote}
\noindent by (5.9), if we let $\delta_{p,r}$ be the restriction of $\delta
_{p}$ on $L_{r}=L_{r}\otimes1$, then
\begin{enumerate}
\item[(5.18)] $\operatorname{Im}\delta_{p}=\oplus_{1\leq r\leq\left\vert
G(p)\right\vert }\operatorname{Im}\delta_{p,r}\otimes\Delta_{\mathbb{F}_{p}}(\kappa(\xi_{i}^{(p)}),\kappa(\eta_{s}^{(p)}))_{1\leq i\leq k,s\in\overline{G}(\mathbb{F}_{p})}$.
\end{enumerate}
If $r=1$ we have $\operatorname{Im}\delta_{p,1}=A_{G;\mathbb{F}_{p}}^{+}$.
Assuming next $r\geq2$, the isomorphism
\begin{quote}
$f:L_{r}\rightarrow A_{G;\mathbb{F}_{p}}^{\ast}\{\mathcal{C}_{I}^{(p)}\}_{I\subseteq G(p),\left\vert I\right\vert =r}$
\end{quote}
\noindent of $A_{G;\mathbb{F}_{p}}^{\ast}$--modules defined by $f(\theta
_{I}^{(p)})=\mathcal{C}_{I}^{(p)}$ clearly fits in the commutative diagram
\begin{enumerate}
\item[(5.19)]
$
\begin{array}
[c]{ccc}
 &  & A_{G;\mathbb{F}_{p}}^{\ast}\{\mathcal{C}_{I}^{(p)}\}_{I\subseteq
G(p),\left\vert I\right\vert =r}\\
 & ^{f}\nearrow & \downarrow\chi\\
L_{r} & \underset{\delta_{p,r}}{\rightarrow} & L_{r-1}
\end{array}
$,
\end{enumerate}
\noindent where $\chi$ is the $A_{G;\mathbb{F}_{p}}^{\ast}$--linear map induced
by the inclusion $\mathcal{C}_{I}^{(p)}\in L_{r-1}$. Since
\begin{quote}
$\ker\delta_{p,r}=\operatorname{Im}\delta_{p,r+1}\oplus H^{r}(L_{\ast};\delta_{p,\ast})\subset L_{r}$;
$\operatorname{Im}\delta_{p,r+1}=$ the $A_{G;\mathbb{F}_{p}}^{\ast}$--span of
$\{\mathcal{C}_{J}^{(p)}\}_{J\subseteq G(p),\left\vert J\right\vert =r+1}$;
$H^{r}(L_{\ast};\delta_{p,\ast})=\mathbb{F}_{p}\{(\prod_{t\in I}y_{t}^{k_{t}-1})\theta_{I}^{(p)}\}_{I\subseteq G(p),\left\vert I\right\vert =r}$
(by Remark 4.3),
\end{quote}
\noindent and since
\begin{quote}
$f(\mathcal{C}_{J}^{(p)})=\mathcal{R}_{J}^{(p)}$, $\quad f((\prod_{t\in
I}y_{t}^{k_{t}-1})\theta_{I}^{(p)})=\mathcal{D}_{I}^{(p)}$,
\end{quote}
\noindent$\chi$ in (5.19) induces an isomorphism of $A_{G;\mathbb{F}_{p}}^{\ast}$--modules
\begin{enumerate}
\item[(5.20)] $A_{G;\mathbb{F}_{p}}^{\ast}\{\mathcal{C}_{I}^{(p)}\}_{\left\vert I\right\vert =r}/\left\langle \mathcal{D}_{J}^{(p)},\mathcal{R}_{K}^{(p)}\right\rangle _{\left\vert J\right\vert =r,\left\vert
K\right\vert =r+1}\cong\operatorname{Im}\delta_{p,r}$, $I,J,K\subseteq G(p)$.
\end{enumerate}
\noindent The isomorphism (5.14) has now been established by (5.18) and
(5.20).$\square$
\bigskip
\noindent\textbf{Remark 5.3.} i) From (5.13) and (5.16) we have
\begin{quote}
$H^{\ast}(G;\mathbb{Z})=\Delta_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}}^{\kappa})\underset{p\in\{2,3,5\}}{\oplus}\operatorname{Im}\beta_{p}$.
\end{quote}
\noindent Comparing this with Lemma 4.6 and taking (5.7) into account, we
obtain an additive isomorphism $E_{3}^{\ast,\ast}(G;\mathbb{Z})=H^{\ast
}(G;\mathbb{Z})$ for all simple $G$. This was conjectured by Marlin \cite{M1},
who also checked it up to $n=4$.
ii) The presentation (5.14) of $\tau_{p}(G)$ not only implies the classical
result that a $1$--connected compact Lie group has no $p^{2}$--torsion in its
integral cohomology, see Lin \cite{Lin1, Lin2} or Kane \cite{Ka}, but is also
very close to our eventual characterization of $\tau_{p}(G)$ as a ring, see
the computations in Lemmas 6.2 and 6.3.$\square$
\section{The ring $H^{\ast}(G;\mathbb{F})$ for exceptional $G$}
Assume in this section that $G$ is exceptional. Based on Theorems 1 and 2 in
\S 5 we recover the classical results about the ring $H^{\ast}(G;\mathbb{F})$ with
$\mathbb{F}$ a field in \S 6.1--6.3, and determine the ring $H^{\ast
}(G;\mathbb{Z})$ in \S 6.4--6.5. To emphasize the degrees of the primary
generators the notation $\zeta_{\deg u}\in H^{\ast}(G;\mathbb{F})$, for
instance, is used in place of $\kappa(u)$, $u\in\mathcal{O}_{G,\mathbb{F}}$ (see (5.3)). In addition $x_{\deg y_{j}}$ is used instead of $y_{j}\in
A_{G;\mathbb{F}}^{\ast}$ (as in Example 3.2).
Historically, the cohomologies $H^{\ast}(G;\mathbb{F}_{p})$ were calculated
case by case, presented using generators with different origins and
characterized mainly by their degrees. As a result one could hardly analyze
$H^{\ast}(G;\mathbb{Z})$ from the existing information on $H^{\ast
}(G;\mathbb{F}_{p})$. In comparison, with our \textsl{primary generators} in
various coefficients stemming solely from the system $\{e_{i},f_{j},g_{j}\}$
of Schubert classes on $G/T$, the relationships between $H^{\ast
}(G;\mathbb{Z})$ and $H^{\ast}(G;\mathbb{F}_{p})$ are transparent from the
very beginning, compare iii) of Lemma 2.8 with Lemma 3.5. It is for this
reason that, starting from the presentation in Theorem 2, we can proceed to
determine the ring structure on $H^{\ast}(G;\mathbb{Z})$.
\begin{center}
\textbf{6.1. The ring }$H^{\ast}(G;\mathbb{Q})$.
\end{center}
Write $\vartheta_{\deg u}\in H^{\ast}(G;\mathbb{Q})$ instead of $\kappa(u)$,
$u\in\mathcal{O}_{G,\mathbb{Q}}$. The elements in $\mathcal{O}_{G,\mathbb{Q}}$, together with their $\kappa$--images, are listed in Table 4 below (Example 2.10).
\begin{center}
\begin{tabular}
[c]{l|llllllllllll}\hline
$u\in O_{G_{2},\mathbb{Q}}$ & $\xi_{1}^{(0)}$ & & $\eta_{1}^{(0)}$ & & & &
& & & & & \\
$u\in O_{F_{4},\mathbb{Q}}$ & $\xi_{1}^{(0)}$ & & $\eta_{1}^{(0)}$ & $\xi
_{2}^{(0)}$ & & & $\eta_{2}^{(0)}$ & & & & & \\
$u\in O_{E_{6},\mathbb{Q}}$ & $\xi_{1}^{(0)}$ & $\xi_{2}^{(0)}$ & $\eta
_{1}^{(0)}$ & $\xi_{3}^{(0)}$ & $\xi_{4}^{(0)}$ & & $\eta_{2}^{(0)}$ & & &
& & \\
$u\in O_{E_{7},\mathbb{Q}}$ & $\xi_{1}^{(0)}$ & & $\eta_{1}^{(0)}$ & $\xi
_{2}^{(0)}$ & & $\eta_{3}^{(0)}$ & $\eta_{2}^{(0)}$ & $\xi_{3}^{(0)}$ &
$\eta_{4}^{(0)}$ & & & \\
$u\in O_{E_{8},\mathbb{Q}}$ & $\xi_{1}^{(0)}$ & & & $\xi_{2}^{(0)}$ & & &
$\eta_{2}^{(0)}$ & $\xi_{3}^{(0)}$ & $\eta_{5}^{(0)}$ & $\eta_{3}^{(0)}$ &
$\eta_{1}^{(0)}$ & $\eta_{4}^{(0)}$\\\hline
$\kappa(u)\in O_{G,\mathbb{Q}}^{\kappa}$ & $\vartheta_{3}$ & $\vartheta_{9}$ &
$\vartheta_{11}$ & $\vartheta_{15}$ & $\vartheta_{17}$ & $\vartheta_{19}$ &
$\vartheta_{23}$ & $\vartheta_{27}$ & $\vartheta_{35}$ & $\vartheta_{39}$ &
$\vartheta_{47}$ & $\vartheta_{59}$\\\hline
\end{tabular}
{\small Table 4. the elements in }$\mathcal{O}_{G,\mathbb{Q}}$ {\small and
their }$\kappa${\small --images}
\end{center}
Since $u^{2}=0$ for $u\in H^{odd}(G;\mathbb{Q})$, the factor $\Delta
_{\mathbb{Q}}(\mathcal{O}_{G,\mathbb{Q}}^{\kappa})$ in (5.7) can be replaced
by $\Lambda_{\mathbb{Q}}(\mathcal{O}_{G,\mathbb{Q}}^{\kappa})$. Therefore,
Theorem 1, together with the contents of Table 4, yields the next result, which
implies the classical computations of Yen, Borel and Chevalley \cite{Y,BC}:
\bigskip
\noindent\textbf{Theorem 3.} \textsl{The inclusion }$O_{G,\mathbb{Q}}^{\kappa
}\subset H^{\ast}(G;\mathbb{Q})$\textsl{ induces the ring isomorphisms}
\begin{quote}
$H^{\ast}(G_{2};\mathbb{Q})=\Lambda_{\mathbb{Q}}(\vartheta_{3},\vartheta
_{11})$;$\qquad$
$H^{\ast}(F_{4};\mathbb{Q})=\Lambda_{\mathbb{Q}}(\vartheta_{3},\vartheta
_{11},\vartheta_{15},\vartheta_{23})$;
$H^{\ast}(E_{6};\mathbb{Q})=\Lambda_{\mathbb{Q}}(\vartheta_{3},\vartheta
_{9},\vartheta_{11},\vartheta_{15},\vartheta_{17},\vartheta_{23})$;
$H^{\ast}(E_{7};\mathbb{Q})=\Lambda_{\mathbb{Q}}(\vartheta_{3},\vartheta
_{11},\vartheta_{15},\vartheta_{19},\vartheta_{23},\vartheta_{27},\vartheta_{35})$;
$H^{\ast}(E_{8};\mathbb{Q})=\Lambda_{\mathbb{Q}}(\vartheta_{3},\vartheta
_{15},\vartheta_{23},\vartheta_{27},\vartheta_{35},\vartheta_{39},\vartheta_{47},\vartheta_{59})$.$\square$
\end{quote}
\begin{center}
\textbf{6.2. The ring }$H^{\ast}(G;\mathbb{F}_{p})$ \textbf{for }$p=3,5$.
\end{center}
Write $\zeta_{\deg u}\in H^{\ast}(G;\mathbb{F}_{p})$ instead of $\kappa(u)$,
$u\in\mathcal{O}_{G,\mathbb{F}_{p}}$. The elements in $\mathcal{O}_{G,\mathbb{F}_{p}}$, together with their $\kappa$--images, are given in the
tables below (see Example 2.10).
\begin{center}
\begin{tabular}
[c]{l|lllllllllll}\hline
$u\in\mathcal{O}_{G_{2},\mathbb{F}_{3}}$ & $\xi_{1}^{(3)}$ &  &  & $\eta_{1}^{(3)}$ &  &  &  &  &  &  & \\
$u\in\mathcal{O}_{F_{4},\mathbb{F}_{3}}$ & $\xi_{1}^{(3)}$ & $\theta_{2}^{(3)}$ &  & $\eta_{1}^{(3)}$ & $\xi_{2}^{(3)}$ &  &  &  &  &  & \\
$u\in\mathcal{O}_{E_{6},\mathbb{F}_{3}}$ & $\xi_{1}^{(3)}$ & $\theta_{2}^{(3)}$ & $\xi_{2}^{(3)}$ & $\eta_{1}^{(3)}$ & $\xi_{3}^{(3)}$ & $\xi_{4}^{(3)}$ &  &  &  &  & \\
$u\in\mathcal{O}_{E_{7},\mathbb{F}_{3}}$ & $\xi_{1}^{(3)}$ & $\theta_{2}^{(3)}$ &  & $\eta_{1}^{(3)}$ & $\xi_{2}^{(3)}$ &  & $\eta_{3}^{(3)}$ & $\xi_{3}^{(3)}$ & $\eta_{4}^{(3)}$ &  & \\
$u\in\mathcal{O}_{E_{8},\mathbb{F}_{3}}$ & $\xi_{1}^{(3)}$ & $\theta_{2}^{(3)}$ &  &  & $\xi_{2}^{(3)}$ &  & $\theta_{6}^{(3)}$ & $\xi_{3}^{(3)}$ & $\eta_{5}^{(3)}$ & $\eta_{3}^{(3)}$ & $\eta_{1}^{(3)}$\\\hline
$\kappa(u)\in\mathcal{O}_{G,\mathbb{F}_{3}}^{\kappa}$ & $\zeta_{3}$ & $\zeta_{7}$ & $\zeta_{9}$ & $\zeta_{11}$ & $\zeta_{15}$ & $\zeta_{17}$ & $\zeta_{19}$ & $\zeta_{27}$ & $\zeta_{35}$ & $\zeta_{39}$ & $\zeta_{47}$\\\hline
\end{tabular}
{\small Table 5. the elements in }$\mathcal{O}_{G,\mathbb{F}_{3}}$ {\small and
their }$\kappa${\small --images}
\bigskip
\begin{tabular}
[c]{l|lllllllllll}\hline
$u\in\mathcal{O}_{G_{2},\mathbb{F}_{5}}$ & $\xi_{1}^{(5)}$ & & $\eta
_{1}^{(5)}$ & & & & & & & & \\
$u\in\mathcal{O}_{F_{4},\mathbb{F}_{5}}$ & $\xi_{1}^{(5)}$ & & $\eta
_{1}^{(5)}$ & $\xi_{2}^{(5)}$ & & & $\eta_{2}^{(5)}$ & & & & \\
$u\in\mathcal{O}_{E_{6},\mathbb{F}_{5}}$ & $\xi_{1}^{(5)}$ & $\xi_{2}^{(5)}$ &
$\eta_{1}^{(5)}$ & $\xi_{3}^{(5)}$ & $\xi_{4}^{(5)}$ & & $\eta_{2}^{(5)}$ &
& & & \\
$u\in\mathcal{O}_{E_{7},\mathbb{F}_{5}}$ & $\xi_{1}^{(5)}$ & & $\eta
_{1}^{(5)}$ & $\xi_{2}^{(5)}$ & & $\eta_{3}^{(5)}$ & $\eta_{2}^{(5)}$ &
$\xi_{3}^{(5)}$ & $\eta_{4}^{(5)}$ & & \\
$u\in\mathcal{O}_{E_{8},\mathbb{F}_{5}}$ & $\xi_{1}^{(5)}$ & & $\theta
_{4}^{(5)}$ & $\xi_{2}^{(5)}$ & & & $\eta_{2}^{(5)}$ & $\xi_{3}^{(5)}$ &
$\eta_{5}^{(5)}$ & $\eta_{3}^{(5)}$ & $\eta_{1}^{(5)}$\\\hline
$\kappa(u)\in\mathcal{O}_{G,\mathbb{F}_{5}}^{\kappa}$ & $\zeta_{3}$ &
$\zeta_{9}$ & $\zeta_{11}$ & $\zeta_{15}$ & $\zeta_{17}$ & $\zeta_{19}$ &
$\zeta_{23}$ & $\zeta_{27}$ & $\zeta_{35}$ & $\zeta_{39}$ & $\zeta_{47}$\\\hline
\end{tabular}
{\small Table 6. the elements in }$\mathcal{O}_{G,\mathbb{F}_{5}}$ {\small and
their }$\kappa${\small --images}
\end{center}
For an odd prime $p$, elements in $H^{odd}(G;\mathbb{F}_{p})$ also satisfy
$u^{2}=0$. Therefore, Theorem 1, together with the contents of Tables 5 and 6,
gives rise to the next result, which implies the calculations by Borel and
Araki in \cite{B5, B6, A2}.
\bigskip
\noindent\textbf{Theorem 4.} \textsl{The inclusions }$A_{G;\mathbb{F}_{p}}^{\ast},O_{G,\mathbb{F}_{p}}^{\kappa}\subset H^{\ast}(G;\mathbb{F}_{p})$\textsl{ induce the ring isomorphisms}
\begin{quote}
$H^{\ast}(G_{2};\mathbb{F}_{3})=\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta_{11})$;

$H^{\ast}(F_{4};\mathbb{F}_{3})=\mathbb{F}_{3}[x_{8}]/\left\langle x_{8}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta_{7},\zeta_{11},\zeta_{15})$;

$H^{\ast}(E_{6};\mathbb{F}_{3})=\mathbb{F}_{3}[x_{8}]/\left\langle x_{8}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta_{7},\zeta_{9},\zeta_{11},\zeta_{15},\zeta_{17})$;

$H^{\ast}(E_{7};\mathbb{F}_{3})=\mathbb{F}_{3}[x_{8}]/\left\langle x_{8}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta_{7},\zeta_{11},\zeta_{15},\zeta_{19},\zeta_{27},\zeta_{35})$;

$H^{\ast}(E_{8};\mathbb{F}_{3})=\mathbb{F}_{3}[x_{8},x_{20}]/\left\langle x_{8}^{3},x_{20}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta_{7},\zeta_{15},\zeta_{19},\zeta_{27},\zeta_{35},\zeta_{39},\zeta_{47})$;

$H^{\ast}(G_{2};\mathbb{F}_{5})=\Lambda_{\mathbb{F}_{5}}(\zeta_{3},\zeta_{11})$;

$H^{\ast}(F_{4};\mathbb{F}_{5})=\Lambda_{\mathbb{F}_{5}}(\zeta_{3},\zeta_{11},\zeta_{15},\zeta_{23})$;

$H^{\ast}(E_{6};\mathbb{F}_{5})=\Lambda_{\mathbb{F}_{5}}(\zeta_{3},\zeta_{9},\zeta_{11},\zeta_{15},\zeta_{17},\zeta_{23})$;

$H^{\ast}(E_{7};\mathbb{F}_{5})=\Lambda_{\mathbb{F}_{5}}(\zeta_{3},\zeta_{11},\zeta_{15},\zeta_{19},\zeta_{23},\zeta_{27},\zeta_{35})$;

$H^{\ast}(E_{8};\mathbb{F}_{5})=\mathbb{F}_{5}[x_{12}]/\left\langle x_{12}^{5}\right\rangle \otimes\Lambda(\zeta_{3},\zeta_{11},\zeta_{15},\zeta_{23},\zeta_{27},\zeta_{35},\zeta_{39},\zeta_{47})$.$\square$
\end{quote}
\begin{center}
\textbf{6.3.} \textbf{The ring }$H^{\ast}(G;\mathbb{F}_{2})$
\end{center}
All elements in $\mathcal{O}_{G,\mathbb{F}_{2}}$, and their $\kappa$--images
in $H^{\ast}(G;\mathbb{F}_{2})$, are given by the table below (see Example 2.10).
\begin{center}
\begin{tabular}
[c]{l|llllllll}\hline
$u\in\mathcal{O}_{G_{2},\mathbb{F}_{2}}$ & $\xi_{1}^{(2)}$ & $\theta_{1}^{(2)}$ &  &  &  &  &  & \\
$u\in\mathcal{O}_{F_{4},\mathbb{F}_{2}}$ & $\xi_{1}^{(2)}$ & $\theta_{1}^{(2)}$ &  & $\xi_{2}^{(2)}$ &  & $\eta_{2}^{(2)}$ &  & \\
$u\in\mathcal{O}_{E_{6},\mathbb{F}_{2}}$ & $\xi_{1}^{(2)}$ & $\theta_{1}^{(2)}$ & $\xi_{2}^{(2)}$ & $\xi_{3}^{(2)}$ & $\xi_{4}^{(2)}$ & $\eta_{2}^{(2)}$ &  & \\
$u\in\mathcal{O}_{E_{7},\mathbb{F}_{2}}$ & $\xi_{1}^{(2)}$ & $\theta_{1}^{(2)}$ & $\theta_{3}^{(2)}$ & $\xi_{2}^{(2)}$ & $\theta_{4}^{(2)}$ & $\eta_{2}^{(2)}$ & $\xi_{3}^{(2)}$ & \\
$u\in\mathcal{O}_{E_{8},\mathbb{F}_{2}}$ & $\xi_{1}^{(2)}$ & $\theta_{1}^{(2)}$ & $\theta_{3}^{(2)}$ & $\xi_{2}^{(2)}$ & $\theta_{5}^{(2)}$ & $\eta_{2}^{(2)}$ & $\xi_{3}^{(2)}$ & $\theta_{7}^{(2)}$\\\hline
$\kappa(u)\in\mathcal{O}_{G,\mathbb{F}_{2}}^{\kappa}$ & $\zeta_{3}$ & $\zeta_{5}$ & $\zeta_{9}$ & $\zeta_{15}$ & $\zeta_{17}$ & $\zeta_{23}$ & $\zeta_{27}$ & $\zeta_{29}$\\\hline
\end{tabular}
{\small Table 7. the elements in }$\mathcal{O}_{G,\mathbb{F}_{2}}$ {\small and
their }$\kappa${\small --images}.
\end{center}
\noindent where $\zeta_{\deg u}=\kappa(u)$, $u\in\mathcal{O}_{G,\mathbb{F}_{2}}$. By Theorem 1 we have
\begin{enumerate}
\item[(6.1)] $H^{\ast}(G;\mathbb{F}_{2})=A_{G;\mathbb{F}_{2}}^{\ast}\otimes\Delta_{\mathbb{F}_{2}}(\zeta_{\deg u})_{u\in\mathcal{O}_{G,\mathbb{F}_{2}}}$,
\end{enumerate}
\noindent in which $A_{G;\mathbb{F}_{2}}^{\ast}$ has been determined in Example
3.2. The determination of the ring $H^{\ast}(G;\mathbb{F}_{2})$ now amounts to
expressing all the squares $\zeta_{r}^{2}$ as elements in $A_{G;\mathbb{F}_{2}}^{\ast}$ (see (5.5)). This requires computation with explicit
presentations of primary polynomials in $\Phi_{G,\mathbb{F}_{2}}$ (Definition
2.7), and has been done in \cite[Corollary 4.4]{DZ4}:
\bigskip
\noindent\textbf{Theorem 5. }\textsl{The inclusions }$A_{G;\mathbb{F}_{2}}^{\ast},O_{G,\mathbb{F}_{2}}^{\kappa}\subset H^{\ast}(G;\mathbb{F}_{2})$\textsl{ induce the ring isomorphisms}
\begin{enumerate}
\item[(6.2)] $H^{\ast}(G_{2};\mathbb{F}_{2})=\mathbb{F}_{2}[x_{6}]/\left\langle x_{6}^{2}\right\rangle \otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3})\otimes\Lambda_{\mathbb{F}_{2}}(\zeta_{5})$;

\item[(6.3)] $H^{\ast}(F_{4};\mathbb{F}_{2})=\mathbb{F}_{2}[x_{6}]/\left\langle x_{6}^{2}\right\rangle \otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3})\otimes\Lambda_{\mathbb{F}_{2}}(\zeta_{5},\zeta_{15},\zeta_{23})$;

\item[(6.4)] $H^{\ast}(E_{6};\mathbb{F}_{2})=\mathbb{F}_{2}[x_{6}]/\left\langle x_{6}^{2}\right\rangle \otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3})\otimes\Lambda_{\mathbb{F}_{2}}(\zeta_{5},\zeta_{9},\zeta_{15},\zeta_{17},\zeta_{23})$;

\item[(6.5)] $H^{\ast}(E_{7};\mathbb{F}_{2})=\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18}]}{\left\langle x_{6}^{2},x_{10}^{2},x_{18}^{2}\right\rangle }\otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3},\zeta_{5},\zeta_{9})\otimes\Lambda_{\mathbb{F}_{2}}(\zeta_{15},\zeta_{17},\zeta_{23},\zeta_{27})$;

\item[(6.6)] $H^{\ast}(E_{8};\mathbb{F}_{2})=\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18},x_{30}]}{\left\langle x_{6}^{8},x_{10}^{4},x_{18}^{2},x_{30}^{2}\right\rangle }\otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3},\zeta_{5},\zeta_{9},\zeta_{15},\zeta_{23})\otimes\Lambda_{\mathbb{F}_{2}}(\zeta_{17},\zeta_{27},\zeta_{29})$,
\end{enumerate}
\noindent\textsl{where}
\textsl{i) }$\zeta_{3}^{2}=x_{6}$\textsl{ in }$G_{2},F_{4},E_{6},E_{7},E_{8}$\textsl{;\quad}
\textsl{ii) }$\zeta_{5}^{2}=x_{10},\quad\zeta_{9}^{2}=x_{18}$\textsl{ in
}$E_{7},E_{8}$\textsl{;}
\textsl{iii) }$\zeta_{15}^{2}=x_{30}$\textsl{; }$\zeta_{23}^{2}=x_{6}^{6}x_{10}$\textsl{ in }$E_{8}$\textsl{.}$\square$
\bigskip
\noindent\textbf{Remark 6.1. }Historically, the rings $H^{\ast}(G;\mathbb{F}_{2})$ for exceptional $G$ were first obtained by Borel, Araki and Shikata, and
Kono \cite{B5, B6, A1, AS, Ko}, using generators specified mainly by their
degrees. It should be noted that the primary generators $\zeta_{i}$'s that we
utilized may not coincide with those in the classical descriptions, compare
Corollaries 4.2 and 4.4 in \cite{DZ4}.$\square$
\begin{center}
\textbf{6.4. The torsion ideal }$\tau_{p}(G)$\textbf{ in }$H^{\ast
}(G;\mathbb{Z})$ \
\end{center}
The strategy for determining the ring structure on $\tau_{p}(G)$, $p=2,3,5$, is
the following: the formula (5.14) in Theorem 2 has already characterized
$\tau_{p}(G)$ as a module over the ring $A_{G,\mathbb{F}_{p}}^{+}$:
\begin{enumerate}
\item[(6.7)] $\tau_{p}(G)\cong\frac{\mathbb{F}_{p}[y_{t}]\{1,\mathcal{C}_{I}\}^{+}}{\left\langle y_{t}^{k_{t}},\mathcal{D}_{J},\mathcal{R}_{K}\right\rangle }\otimes\Delta_{\mathbb{F}_{p}}(\kappa(\xi_{i}^{(2)}),\kappa(\eta_{s}^{(2)}))_{1\leq i\leq k,s\in\overline{G}(\mathbb{F}_{p})}$
\noindent where $I\subseteq G(p)$ with $\left\vert I\right\vert \geq2$. It
remains for us to express
a) all the squares $\kappa(\xi_{i}^{(2)})^{2}$, $\kappa(\eta_{s}^{(2)})^{2}$
as elements in $A_{G,\mathbb{F}_{p}}^{+}$ (see (5.5));
b) all the products $\mathcal{C}_{I}\cdot\mathcal{C}_{J}$ as $A_{G,\mathbb{F}_{p}}^{\ast}$--linear combinations of $\mathcal{C}_{K}$'s.
\noindent By considering $\tau_{p}(G)$ as the subring $\operatorname{Im}\delta_{p}\subset H^{\ast}(G;\mathbb{F}_{p})$ via $r_{p}$, the tasks in a) and
b) can be implemented by computation in the ring $H^{\ast}(G;\mathbb{F}_{p})$,
whose structure has already been determined in Theorems 4 and 5.
One finds in Tables 5--7 those $\zeta_{j}$'s that correspond to $\kappa
(\xi_{i}^{(2)})$, $\kappa(\eta_{s}^{(2)})$ in (6.7).
\bigskip
\noindent\textbf{Lemma 6.2.} \textsl{Under the ring isomorphisms }$r_{p}:\tau_{p}(G)\cong\operatorname{Im}\delta_{p}$\textsl{ in (6.7), all the
nontrivial }$\tau_{p}(G)$\textsl{ with }$p=3,5$\textsl{ are given by}
\begin{enumerate}
\item[(6.8)] $\tau_{3}(F_{4})\cong\mathbb{F}_{3}[x_{8}]^{+}/\left\langle
x_{8}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta
_{11},\zeta_{15})$;
\item[(6.9)] $\tau_{3}(E_{6})\cong\mathbb{F}_{3}[x_{8}]^{+}/\left\langle
x_{8}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta
_{9},\zeta_{11},\zeta_{15},\zeta_{17})$;
\item[(6.10)] $\tau_{3}(E_{7})\cong\mathbb{F}_{3}[x_{8}]^{+}/\left\langle
x_{8}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta
_{11},\zeta_{15},\zeta_{19},\zeta_{27},\zeta_{35})$;
\item[(6.11)] $\tau_{3}(E_{8})\cong\frac{\mathbb{F}_{3}[x_{8},x_{20},\mathcal{C}_{\{2,6\}}]^{+}}{\left\langle x_{8}^{3},x_{20}^{3},x_{8}^{2}x_{20}^{2}\mathcal{C}_{\{2,6\}},\mathcal{C}_{\{2,6\}}^{2}\right\rangle }\otimes\Lambda_{\mathbb{F}_{3}}(\zeta_{3},\zeta_{15},\zeta_{27},\zeta_{35},\zeta_{39},\zeta_{47})$;
\item[(6.12)] $\tau_{5}(E_{8})\cong\mathbb{F}_{5}[x_{12}]^{+}/\left\langle
x_{12}^{5}\right\rangle \otimes\Lambda_{\mathbb{F}_{5}}(\zeta_{3},\zeta
_{15},\zeta_{23},\zeta_{27},\zeta_{35},\zeta_{39},\zeta_{47})$.
\end{enumerate}
\noindent\textbf{Proof.} Since $u^{2}=0$ for $u\in H^{odd}(G;\mathbb{F}_{p})$
with $p\neq2$, we have in (6.7) that
\begin{enumerate}
\item[(6.13)] $\Delta_{\mathbb{F}_{p}}(\kappa(\xi_{i}^{(p)}),\kappa(\eta
_{s}^{(p)}))=\Lambda_{\mathbb{F}_{p}}(\kappa(\xi_{i}^{(p)}),\kappa(\eta
_{s}^{(p)}))$, $p=3,5$.
\end{enumerate}
For $p=3$ we get from Example 3.2 and Table 2 that
\begin{quote}
i) $G_{2}(3)=\emptyset$;
ii) $A_{G,\mathbb{F}_{3}}^{\ast}=\mathbb{F}_{3}[x_{8}]/\left\langle x_{8}^{3}\right\rangle $, $G(3)=\{2\}$ for $G=F_{4},E_{6},E_{7}$;

iii) $A_{E_{8},\mathbb{F}_{3}}^{\ast}=\mathbb{F}_{3}[x_{8},x_{20}]/\left\langle x_{8}^{3},x_{20}^{3}\right\rangle $, $E_{8}(3)=\{2,6\}$.
\end{quote}
\noindent With (6.7) in mind we get $\tau_{3}(G_{2})=0$ from i); (6.8)--(6.10)
from (6.13) and ii) (where the class of the type $\mathcal{C}_{I}$ is absent
since $G(3)$ is a singleton). The isomorphism (6.11) comes from (6.13) and
iii) by noting further that $\mathcal{C}_{\{2,6\}}$ is the only class of
the type $\mathcal{C}_{I}$ with $\left\vert I\right\vert \geq2$, whose square
vanishes for degree reasons.
Similarly, granted (6.7) and (6.13), we get $\tau_{5}(G)=0$ for $G\neq
E_{8}$ and (6.12) respectively from
\begin{quote}
$G_{2}(5)=F_{4}(5)=E_{6}(5)=E_{7}(5)=\emptyset$
\end{quote}
\noindent and $E_{8}(5)=\{4\}$ by Table 2.$\square$
\bigskip
\noindent\textbf{Lemma 6.3.} \textsl{With }$\zeta_{3}^{2}=x_{6}$\textsl{ for
all }$G$\textsl{ and }$\zeta_{15}^{2}=x_{30}$\textsl{, }$\zeta_{23}^{2}=x_{6}^{6}x_{10}$\textsl{ for }$E_{8}$\textsl{ being understood, the ring
isomorphisms }$r_{2}:\tau_{2}(G)\cong\operatorname{Im}\delta_{2}$\textsl{ in
(6.7) are given by}
\begin{enumerate}
\item[(6.14)] $\tau_{2}(G_{2})\cong\mathbb{F}_{2}[x_{6}]^{+}/\left\langle
x_{6}^{2}\right\rangle \otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3})$;
\item[(6.15)] $\tau_{2}(F_{4})\cong\mathbb{F}_{2}[x_{6}]^{+}/\left\langle
x_{6}^{2}\right\rangle \otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3})\otimes
\Lambda_{\mathbb{F}_{2}}(\zeta_{15},\zeta_{23})$;
\item[(6.16)] $\tau_{2}(E_{6})\cong\mathbb{F}_{2}[x_{6}]^{+}/\left\langle
x_{6}^{2}\right\rangle \otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3})\otimes
\Lambda_{\mathbb{F}_{2}}(\zeta_{9},\zeta_{15},\zeta_{17},\zeta_{23})$;
\item[(6.17)] $\tau_{2}(E_{7})\cong\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18},\mathcal{C}_{I}^{(2)}]^{+}}{\left\langle x_{6}^{2},x_{10}^{2},x_{18}^{2},\mathcal{D}_{J}^{(2)},\mathcal{R}_{K}^{(2)},\mathcal{S}_{I,J}^{(2)}\right\rangle }\otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3})\otimes\Lambda_{\mathbb{F}_{2}}(\zeta_{15},\zeta_{23},\zeta_{27})$
\textsl{with}
\textsl{with}
$\qquad\qquad I,J\subseteq K=\{1,3,4\}$, $\left| I\right| ,\left| J\right|
\geq2$;
\item[(6.18)] $\tau_{2}(E_{8})\cong\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18},x_{30},\mathcal{C}_{I}^{(2)}]^{+}}{\left\langle x_{6}^{8},x_{10}^{4},x_{18}^{2},x_{30}^{2},\mathcal{D}_{J}^{(2)},\mathcal{R}_{K}^{(2)},\mathcal{S}_{I,J}^{(2)}\right\rangle }\otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3},\zeta_{15},\zeta_{23})\otimes\Lambda_{\mathbb{F}_{2}}(\zeta_{27})$
\textsl{with}
$\qquad\qquad K,I$, $J\subseteq\{1,3,5,7\}$, $\left\vert I\right\vert
,\left\vert J\right\vert \geq2$, $\left\vert K\right\vert \geq3$,
\end{enumerate}
\noindent\textsl{in which the relations }$\mathcal{S}_{I,J}^{(2)}$\textsl{ in (6.17) and
(6.18) are}
\begin{enumerate}
\item[(6.19)] $\mathcal{S}_{I,J}^{(2)}=\mathcal{C}_{I}^{(2)}\mathcal{C}_{J}^{(2)}+\sum_{t\in I}x_{\deg y_{t}}\prod_{s\in I_{t}\cap J}\zeta_{\deg\theta_{s}^{(2)}}^{2}\mathcal{C}_{\left\langle I_{t},J\right\rangle }^{(2)}$,
\end{enumerate}
\noindent\textsl{where}
\begin{quote}
$\left\langle I,J\right\rangle =\{t\in I\cup J\mid t\notin I\cap
J\}$\textsl{,}$\quad$
$\prod_{s\in I_{t}\cap J}\zeta_{\deg\theta_{s}^{(2)}}^{2}=1$ \textsl{when
}$I_{t}\cap J=\emptyset$\textsl{, }
\end{quote}
\noindent\textsl{and where the squares }$\zeta_{\deg\theta_{s}^{(2)}}^{2}$\textsl{ (see Table 7) are evaluated by}
\begin{quote}
$(\zeta_{5}^{2},\zeta_{9}^{2},\zeta_{17}^{2},\zeta_{29}^{2})=(x_{10},x_{18},0,0)$.
\end{quote}
\noindent\textbf{Proof. }The cases $G\neq E_{8}$ are fairly simple. We may
therefore focus on the relatively nontrivial case $G=E_{8}$, for which the
formula (6.7) becomes
\begin{enumerate}
\item[(6.20)] $\tau_{2}(E_{8})=\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18},x_{30}]\{1,\mathcal{C}_{I}^{(2)}\}^{+}}{\left\langle x_{6}^{8},x_{10}^{4},x_{18}^{2},x_{30}^{2},\mathcal{D}_{J}^{(2)},\mathcal{R}_{K}^{(2)}\right\rangle }\otimes\Delta_{\mathbb{F}_{2}}(\zeta_{3},\zeta_{15},\zeta_{23},\zeta_{27})$,
\end{enumerate}
\noindent where
\begin{quote}
$\zeta_{3}\equiv\kappa(\xi_{1}^{(2)})$, $\zeta_{15}\equiv\kappa(\xi_{2}^{(2)})$, $\zeta_{27}\equiv\kappa(\xi_{3}^{(2)})$, $\zeta_{23}=\kappa(\eta_{2}^{(2)})$ (Table 7);
$A_{E_{8},\mathbb{F}_{2}}^{\ast}=\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18},x_{30}]}{<x_{6}^{8},x_{10}^{4},x_{18}^{2},x_{30}^{2}>}$ (Example 3.2);
\end{quote}
\noindent and where $I,J,K\subseteq E_{8}(2)=\{1,3,5,7\}$ by Table 2. Since
$\tau_{2}(E_{8})\cong\operatorname{Im}\delta_{2}\subset H^{\ast}(E_{8};\mathbb{F}_{2})$ via $r_{2}$, the relations
\begin{quote}
$\zeta_{3}^{2}=x_{6}$, $\zeta_{15}^{2}=x_{30}$, $\zeta_{23}^{2}=x_{6}^{6}x_{10}$, $\zeta_{27}^{2}=0$
\end{quote}
\noindent obtained in Theorem 5 imply that
\begin{enumerate}
\item[(6.21)] the factor $\Delta_{\mathbb{F}_{2}}(\zeta_{3},\zeta_{15},\zeta_{23},\zeta_{27})$ in (6.20) is $\Delta_{\mathbb{F}_{2}}(\zeta_{3},\zeta_{15},\zeta_{23})\otimes\Lambda_{\mathbb{F}_{2}}(\zeta_{27})$.
\end{enumerate}
\noindent It remains to determine the multiplicative rule among the classes
$\mathcal{C}_{I}$. For $I,J\subseteq E_{8}(2)=\{1,3,5,7\}$ with $\left|
I\right| ,\left| J\right| \geq2$ we have, in $H^{\ast}(E_{8};\mathbb{F}_{2})$, that
\begin{quote}
i) $\delta_{2}(\theta_{I}^{(2)})=\sum_{t\in I}x_{\deg y_{t}}\theta_{I_{t}}^{(2)}$ by (5.12) and (5.9);
ii) $\theta_{I}^{(2)}\theta_{J}^{(2)}=\prod_{s\in I\cap J}\zeta_{\deg
\theta_{s}^{(2)}}^{2}\theta_{\left\langle I,J\right\rangle }^{(2)}$ with
$\prod_{s\in I\cap J}\zeta_{\deg\theta_{s}^{(2)}}^{2}=1$ for $I\cap
J=\emptyset$,
\end{quote}
\noindent where $\zeta_{\deg\theta_{s}^{(2)}}=\kappa(\theta_{s}^{(2)})$ (Table
7). It follows from $\mathcal{C}_{I}^{(2)}=\delta_{2}(\theta_{I}^{(2)})$ that
\begin{quote}
$\mathcal{C}_{I}^{(2)}\mathcal{C}_{J}^{(2)}=\delta_{2}(\theta_{I}^{(2)})\delta_{2}(\theta_{J}^{(2)})=\delta_{2}(\delta_{2}(\theta_{I}^{(2)})\theta_{J}^{(2)})$ (since $\delta_{2}^{2}=0$)
$=\delta_{2}(\sum_{t\in I}x_{\deg y_{t}}\theta_{I_{t}}^{(2)}\theta_{J}^{(2)})$ (by i))
$=\delta_{2}(\sum_{t\in I}x_{\deg y_{t}}\prod_{s\in I_{t}\cap J}\zeta_{\deg\theta_{s}^{(2)}}^{2}\theta_{\left\langle I_{t},J\right\rangle }^{(2)})$ (by ii)).
\end{quote}
\noindent From
\begin{quote}
$(\zeta_{\deg\theta_{1}}^{2},\zeta_{\deg\theta_{3}}^{2},\zeta_{\deg\theta_{5}}^{2},\zeta_{\deg\theta_{7}}^{2})=(x_{10},x_{18},0,0)$ by Theorem 5,
$\delta_{2}(x_{\deg y_{t}})=0$ by (5.9)
\end{quote}
\noindent and from $\delta_{2}(\theta_{\left\langle I_{t},J\right\rangle
}^{(2)})=\mathcal{C}_{\left\langle I_{t},J\right\rangle }^{(2)}$ by (5.11) we get
\begin{enumerate}
\item[(6.22)] $\mathcal{C}_{I}^{(2)}\mathcal{C}_{J}^{(2)}=\sum_{t\in I}x_{\deg y_{t}}\prod_{s\in I_{t}\cap J}\zeta_{\deg\theta_{s}^{(2)}}^{2}\mathcal{C}_{\left\langle I_{t},J\right\rangle }^{(2)}$,
\end{enumerate}
\noindent where $\prod_{s\in I_{t}\cap J}\zeta_{\deg\theta_{s}^{(2)}}^{2}=1$
when $I_{t}\cap J=\emptyset$. By taking (6.22) as the relation (6.19) on
$\tau_{2}(E_{8})$, the first factor in (6.20) can be written as
\begin{enumerate}
\item[(6.23)] $\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18},x_{30},\mathcal{C}_{I}^{(2)}]^{+}}{\left\langle x_{6}^{8},x_{10}^{4},x_{18}^{2},x_{30}^{2},\mathcal{D}_{J}^{(2)},\mathcal{R}_{K}^{(2)},\mathcal{S}_{I,J}^{(2)}\right\rangle }$.
\end{enumerate}
\noindent Combining (6.21) with (6.23) establishes (6.18).$\square$
\bigskip
\noindent\textbf{Example 6.4.} The formula (6.22) (i.e. the relation (6.19))
is effective in computing the products $\mathcal{C}_{I}^{(2)}\mathcal{C}_{J}^{(2)}$. Taking $G=E_{8}$ as an example and noting that
$E_{8}(2)=\{1,3,5,7\}$, we have by (6.22) that
\begin{quote}
i) $\mathcal{C}_{\{1,3\}}^{(2)}\mathcal{C}_{\{1,3\}}^{(2)}=x_{6}\zeta
_{\deg\theta_{3}^{(2)}}^{2}\mathcal{C}_{\{1\}}^{(2)}+x_{10}\zeta_{\deg
\theta_{1}^{(2)}}^{2}\mathcal{C}_{\{3\}}^{(2)}=x_{6}^{2}x_{18}+x_{10}^{3}$
\end{quote}
\noindent since $(\zeta_{\deg\theta_{3}^{(2)}}^{2},\zeta_{\deg\theta_{1}^{(2)}}^{2})=(x_{18},x_{10})$ and $(\mathcal{C}_{\{1\}}^{(2)},\mathcal{C}_{\{3\}}^{(2)})=(x_{6},x_{10})$ in $H^{\ast}(E_{8};\mathbb{F}_{2})$;
\begin{quote}
ii) $\mathcal{C}_{\{1,3\}}^{(2)}\mathcal{C}_{\{1,5\}}^{(2)}=x_{10}\zeta_{\deg\theta_{1}^{(2)}}^{2}\mathcal{C}_{\{5\}}^{(2)}+x_{6}\mathcal{C}_{\{1,3,5\}}^{(2)}=x_{10}^{2}x_{18}+x_{6}\mathcal{C}_{\{1,3,5\}}^{(2)}$
\end{quote}
\noindent since $\zeta_{\deg\theta_{1}^{(2)}}^{2}=x_{10}$ and $\mathcal{C}_{\{5\}}^{(2)}=x_{18}$ in $H^{\ast}(E_{8};\mathbb{F}_{2})$. These indicate
that the multiplicative rule (6.22) in $\tau_{2}(E_{8})$ is highly
nontrivial, though it can be easily implemented.$\square$
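As a quick consistency check on computation i) of Example 6.4 (our own degree bookkeeping, using $\deg\theta_{1}^{(2)}=5$ and $\deg\theta_{3}^{(2)}=9$ as encoded in the subscripts of the classes $\zeta_{\deg\theta_{s}^{(2)}}$, together with the fact that $\delta_{2}$ raises degree by $1$), both sides of i) are of degree $30$:

```latex
\begin{align*}
\deg\mathcal{C}_{\{1,3\}}^{(2)} &= \deg\theta_{1}^{(2)}+\deg\theta_{3}^{(2)}+1 = 5+9+1 = 15,\\
2\deg\mathcal{C}_{\{1,3\}}^{(2)} &= 30 = \deg\big(x_{6}^{2}x_{18}\big) = \deg\big(x_{10}^{3}\big).
\end{align*}
```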
\begin{center}
\textbf{6.4.} \textbf{The ring }$H^{\ast}(G;\mathbb{Z})$
\end{center}
We specify the generators that will be utilized in describing the ring
$H^{\ast}(G;\mathbb{Z})$. Firstly, all elements in $\mathcal{O}_{G,\mathbb{Z}}$, together with their $\kappa$--images, are tabulated in Table 8 below (see
Example 2.10):
\begin{center}
\begin{tabular}
[c]{l|llllllllllll}\hline
$u\in\mathcal{O}_{G_{2},\mathbb{Z}}$ & $\xi_{1}$ & & $\eta_{1}$ & & & & &
& & & & \\
$u\in\mathcal{O}_{F_{4},\mathbb{Z}}$ & $\xi_{1}$ & & $\eta_{1}$ & $\xi_{2}$ &
& & $\eta_{2}$ & & & & & \\
$u\in\mathcal{O}_{E_{6},\mathbb{Z}}$ & $\xi_{1}$ & $\xi_{2}$ & $\eta_{1}$ &
$\xi_{3}$ & $\xi_{4}$ & & $\eta_{2}$ & & & & & \\
$u\in\mathcal{O}_{E_{7},\mathbb{Z}}$ & $\xi_{1}$ & & $\eta_{1}$ & $\xi_{2}$ &
& $\eta_{3}$ & $\eta_{2}$ & $\xi_{3}$ & $\eta_{4}$ & & & \\
$u\in\mathcal{O}_{E_{8},\mathbb{Z}}$ & $\xi_{1}$ & & & $\xi_{2}$ & & &
$\eta_{2}$ & $\xi_{3}$ & $\eta_{5}$ & $\eta_{3}$ & $\eta_{1}$ & $\eta_{6}$\\\hline
$\kappa(u)\in\mathcal{O}_{G,\mathbb{Z}}^{\kappa}$ & $\varrho_{3}$ &
$\varrho_{9}$ & $\varrho_{11}$ & $\varrho_{15}$ & $\varrho_{17}$ &
$\varrho_{19}$ & $\varrho_{23}$ & $\varrho_{27}$ & $\varrho_{35}$ &
$\varrho_{39}$ & $\varrho_{47}$ & $\varrho_{59}$\\\hline
\end{tabular}
{\small Table 8. The elements in }$\mathcal{O}_{G,\mathbb{Z}}$ {\small and
their }$\kappa${\small --images}
\end{center}
\noindent According to Theorem 2, these elements $\varrho_{i}$ have infinite
order and their square-free products form a $\mathbb{Z}$--basis for the free
part of $H^{\ast}(G)$.
Next, the Bockstein $\beta_{p}$ carries the classes $\theta_{I}^{(p)}\subset
H^{\ast}(G;\mathbb{F}_{p})$, $I\subseteq G(p)$ (see (5.12)), to the elements
in $H^{\ast}(G;\mathbb{Z})$
\begin{enumerate}
\item[(6.25)] $\mathcal{E}_{I}^{(p)}:=\beta_{p}(\theta_{I}^{(p)})\subset
H^{\ast}(G;\mathbb{Z})$, $I\subseteq G(p)$, with $\mathcal{E}_{\{t\}}^{(p)}=-y_{t}$.
\end{enumerate}
\noindent Let $\mathcal{A}(G)$ be the subring of $H^{\ast}(G;\mathbb{Z})$
generated multiplicatively by $\mathcal{E}_{I}^{(p)}$, $I\subseteq G(p)$, and
the unit $1\in H^{\ast}(G;\mathbb{Z})$. Precisely, by the proofs of Lemmas 6.2
and 6.3 we have:
\begin{enumerate}
\item[(6.26)] \textsl{one has the ring splitting }$\mathcal{A}(G)=\mathbb{Z}\oplus_{p=2,3,5}\mathcal{A}_{p}(G)$\textsl{ in which}
$\mathcal{A}_{2}(E_{7})=\frac{A_{E_{7};\mathbb{F}_{2}}^{\ast}[1,\mathcal{E}_{I}^{(2)}]^{+}}{\left\langle \mathcal{D}_{J}^{(2)},\mathcal{R}_{K}^{(2)},\mathcal{S}_{I,J}^{(2)}\right\rangle }$, $I,J\subseteq K=\{1,3,4\}$,
$\left\vert I\right\vert ,\left\vert J\right\vert \geq2$;
$\mathcal{A}_{2}(E_{8})=\frac{A_{E_{8};\mathbb{F}_{2}}^{\ast}[1,\mathcal{E}_{I}^{(2)}]^{+}}{\left\langle \mathcal{D}_{J}^{(2)},\mathcal{R}_{K}^{(2)},\mathcal{S}_{I,J}^{(2)}\right\rangle }$, $I,J,K\subseteq\{1,3,5,7\}$,
$\left\vert I\right\vert ,\left\vert J\right\vert \geq2,\left\vert K\right\vert \geq3$;
$\mathcal{A}_{3}(E_{8})=\frac{A_{E_{8};\mathbb{F}_{3}}^{\ast}[1,\mathcal{E}_{\{2,6\}}^{(3)}]^{+}}{\left\langle (\mathcal{E}_{\{2,6\}}^{(3)})^{2},x_{8}^{2}x_{20}^{2}\mathcal{E}_{\{2,6\}}^{(3)}\right\rangle }$;
\textsl{and }$\mathcal{A}_{p}(G)=A_{G;\mathbb{F}_{p}}^{+}$\textsl{ in the
remaining cases.}
\end{enumerate}
\noindent where
\begin{quote}
$\mathcal{D}_{J}^{(p)}=(\prod_{t\in J}y_{t}^{k_{t}-1})\mathcal{E}_{J}^{(p)}$;
$\mathcal{R}_{K}^{(p)}=\sum_{t\in K}y_{t}\mathcal{E}_{K_{t}}^{(p)}$ (compare
with (5.12)),
$\mathcal{S}_{I,J}^{(p)}$ is obtained from (6.19) by replacing $\mathcal{C}_{J}^{(p)}$ with $\mathcal{E}_{J}^{(p)}$,
\end{quote}
\noindent and where $x_{\deg y_{j}}=y_{j}$ as in Example 3.2.
\bigskip
\noindent\textbf{Theorem 6.} \textsl{The inclusions }$\mathcal{O}_{G,\mathbb{Z}}^{\kappa}$\textsl{, }$\mathcal{A}(G)\subset H^{\ast}(G;\mathbb{Z})$\textsl{ induce a
ring isomorphism}
\begin{enumerate}
\item[(6.27)] $H^{\ast}(G;\mathbb{Z})=\mathcal{A}(G)\otimes\Delta_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}}^{\kappa})/\left\langle \mathcal{F}_{r};\mathcal{H}_{t,I}\right\rangle ,$
\end{enumerate}
\noindent\textsl{where }$r\in\{\deg u\}_{u\in\mathcal{O}_{G,\mathbb{Z}}^{\kappa}}\footnote{We observe from Table 8 that elements in $\mathcal{O}_{G,\mathbb{Z}}^{\kappa}$ have distinct degrees for every $G$.}$, $t\in G(p)$,
$I\subseteq G(p)$\ with $p=2,3,5$, and where
\begin{enumerate}
\item[(6.28)] \textsl{the relations }$\mathcal{F}_{r}$\textsl{ are given by }$\varrho_{r}^{2}=0$\textsl{ with three exceptions:}
\end{enumerate}
\begin{quote}
$\varrho_{3}^{2}=x_{6}$\textsl{ for all }$G$\textsl{; }
$\varrho_{15}^{2}=x_{30},$\textsl{ }$\varrho_{23}^{2}=x_{6}^{6}x_{10}$\textsl{
for }$G=E_{8}$\textsl{;}
\end{quote}
\begin{enumerate}
\item[(6.29)] \textsl{the relations }$\mathcal{H}_{t,I}$\textsl{ are given by
the three possibilities}
$\varrho_{\deg\eta_{t}}\mathcal{E}_{I}^{(p)}=\left\{
\begin{tabular}
[c]{l}
$x_{\deg y_{t}}^{k_{t}-1}\mathcal{E}_{I\cup\{t\}}^{(p)}$ \textsl{if} $t\notin
I$;\\
$0$ \textsl{if either} $t\in I$, $p$ \textsl{is odd or} $I=\{t\}$, $p=2$;\\
$x_{\deg y_{t}}^{k_{t}-1}(\theta_{\{t\}}^{(2)})^{2}\mathcal{E}_{I_{t}}^{(2)}$
\textsl{if} $p=2$, $t\in I$ \textsl{and} $\left\vert I\right\vert \geq2$
\end{tabular}
\ \ \right. $
\textsl{with }$(\theta_{\{t\}}^{(2)})^{2}$\textsl{ being evaluated by}
$\quad((\theta_{\{1\}}^{(2)})^{2},(\theta_{\{3\}}^{(2)})^{2},(\theta
_{\{4\}}^{(2)})^{2})=(x_{10},x_{18},0)$ \textsl{for} $G=E_{7}$\textsl{, and}
$\quad((\theta_{\{1\}}^{(2)})^{2},(\theta_{\{3\}}^{(2)})^{2},(\theta
_{\{5\}}^{(2)})^{2},(\theta_{\{7\}}^{(2)})^{2})=(x_{10},x_{18},0,0)$
\textsl{for} $G=E_{8}$.
\end{enumerate}
\noindent\textsl{In particular, one has}
\begin{enumerate}
\item[i)] $H^{\ast}(G_{2};\mathbb{Z})=A_{G_{2};\mathbb{Z}}^{\ast}\otimes
\Delta_{\mathbb{Z}}(\varrho_{3})\otimes\Lambda_{\mathbb{Z}}(\varrho
_{11})/\left\langle \varrho_{3}^{2}-x_{6};\varrho_{11}x_{6}\right\rangle $;
\item[ii)] $H^{\ast}(F_{4};\mathbb{Z})=A_{F_{4};\mathbb{Z}}^{\ast}\otimes\Delta_{\mathbb{Z}}(\varrho_{3})\otimes\Lambda_{\mathbb{Z}}(\varrho_{11},\varrho_{15},\varrho_{23})/\left\langle \varrho_{3}^{2}-x_{6};\varrho_{11}x_{6};\varrho_{23}x_{8}\right\rangle $;
\item[iii)] $H^{\ast}(E_{6};\mathbb{Z})=A_{E_{6};\mathbb{Z}}^{\ast}\otimes\Delta_{\mathbb{Z}}(\varrho_{3})\otimes\Lambda_{\mathbb{Z}}(\varrho_{9},\varrho_{11},\varrho_{15},\varrho_{17},\varrho_{23})/\left\langle \varrho_{3}^{2}-x_{6};\varrho_{11}x_{6};\varrho_{23}x_{8}\right\rangle $;
\item[iv)] $H^{\ast}(E_{7};\mathbb{Z})=\frac{\mathcal{A}(E_{7})\otimes\Delta_{\mathbb{Z}}(\varrho_{3})\otimes\Lambda_{\mathbb{Z}}(\varrho_{11},\varrho_{15},\varrho_{19},\varrho_{23},\varrho_{27},\varrho_{35})}{\left\langle \varrho_{3}^{2}-x_{6},\varrho_{23}x_{8},\mathcal{H}_{t,I}\right\rangle }$ \textsl{with}
$\qquad t\in\{1,3,4\},I\subseteq\{1,3,4\}$;
\item[v)] $H^{\ast}(E_{8};\mathbb{Z})=\frac{\mathcal{A}(E_{8})\otimes\Delta_{\mathbb{Z}}(\varrho_{3},\varrho_{15},\varrho_{23})\otimes\Lambda_{\mathbb{Z}}(\varrho_{27},\varrho_{35},\varrho_{39},\varrho_{47},\varrho_{59})}{\left\langle \varrho_{3}^{2}-x_{6},\varrho_{15}^{2}-x_{30},\varrho_{59}x_{12},\varrho_{23}^{2}-x_{6}^{6}x_{10},\mathcal{H}_{s,I},\mathcal{H}_{t,J}\right\rangle }$ \textsl{with}
$\qquad s\in\{2,6\},I\subseteq\{2,6\},t\in\{1,3,5,7\},J\subseteq\{1,3,5,7\}$.
\end{enumerate}
\noindent\textbf{Proof.} Since $r_{p}:H^{\ast}(G;\mathbb{Z})\rightarrow
H^{\ast}(G;\mathbb{F}_{p})$ satisfies (by $\delta_{p}=r_{p}\beta_{p}$, $\kappa
r_{p}=r_{p}\kappa$ and Lemma 3.5) that
\begin{quote}
$r_{p}(\mathcal{E}_{I}^{(p)})=\mathcal{C}_{I}^{(p)}$; $r_{p}(\kappa(\xi
_{i}))=\kappa(\xi_{i}^{(p)})$;
$r_{p}(\kappa(\eta_{s}))=p_{s}\kappa(\eta_{s}^{(p)})$, $s\in\overline
{G}(\mathbb{F}_{p})$,
\end{quote}
\noindent we get from Theorem 2 the presentations without resorting to $r_{p}$
\begin{enumerate}
\item[(6.30)] $H^{\ast}(G;\mathbb{Z})=\Delta_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}}^{\kappa})\underset{p\in\{2,3,5\}}{\oplus}\tau_{p}(G)$ in which
$\tau_{p}(G)=\mathbb{F}_{p}[y_{t}]\{1,\mathcal{E}_{I}^{(p)}\}^{+}/\left\langle y_{t}^{k_{t}},\mathcal{D}_{J}^{(p)},\mathcal{R}_{K}^{(p)}\right\rangle \otimes\Delta_{\mathbb{F}_{p}}(\kappa(\xi_{i}),\kappa(\eta_{s}))_{1\leq i\leq k,s\in\overline{G}(\mathbb{F}_{p})}$.
\end{enumerate}
\noindent In view of (6.26) and (6.30) the $\mathcal{A}(G)$--module map
\begin{quote}
$\psi:\mathcal{A}(G)\otimes\Delta_{\mathbb{Z}}(\mathcal{O}_{G,\mathbb{Z}}^{\kappa})\rightarrow H^{\ast}(G;\mathbb{Z})$
\end{quote}
\noindent induced from the inclusions $\mathcal{O}_{G,\mathbb{Z}}^{\kappa}$,
$\mathcal{A}(G)\subset H^{\ast}(G;\mathbb{Z})$ is surjective by Lemmas 6.2 and
6.3. Since the square of any odd dimensional integral cohomology class of a
space $X$ lands in the $2$--primary component of $H^{\ast}(X;\mathbb{Z})$, and
since $\tau_{p}(G)$ is an ideal in $H^{\ast}(G;\mathbb{Z})$, to modify $\psi$
into a ring isomorphism it suffices to
\begin{enumerate}
\item[(6.31)] express all the squares $\varrho_{\deg u}^{2}$, $u\in
\mathcal{O}_{G,\mathbb{Z}}$, as elements in $\tau_{2}(G)$;
\item[(6.32)] determine the actions of $\varrho_{\deg u}$, $u\in
\mathcal{O}_{G,\mathbb{Z}}$, on $\tau_{p}(G)$.
\end{enumerate}
\noindent The proof of Theorem 6 is done by showing that the relations (6.28)
and (6.29) take care of these two concerns respectively.
For (6.31) we note from Lemma 3.5 that
$\quad r_{2}(\kappa(\xi_{i})^{2})\equiv\kappa(\xi_{i}^{(2)})^{2}$, $1\leq
i\leq k$;$\quad$
$\quad r_{2}(\kappa(\eta_{s})^{2})\equiv p_{s}^{2}\kappa(\eta_{s}^{(2)})^{2}$
for $s\in\overline{G}(\mathbb{F}_{2})$;
$\quad r_{2}(\kappa(\eta_{t})^{2})\equiv y_{t}^{2(k_{t}-1)}\kappa(\theta
_{t}^{(2)})^{2}\equiv0$ for $t\in G(2)$ (since $y_{t}^{k_{t}}\equiv0$).
\noindent The relations $\mathcal{F}_{r}$ in (6.28) are verified by
\quad i) $\kappa(\xi_{i})^{2},\kappa(\eta_{s})^{2}\in\tau_{2}(G)$;
\quad ii) $r_{2}$ restricts to an isomorphism $\tau_{2}(G)\rightarrow
\operatorname{Im}\delta_{2}\subset H^{\ast}(G;\mathbb{F}_{2})$; and
\quad iii) the results on $\kappa(\xi_{i}^{(2)})^{2}$, $\kappa(\eta_{s}^{(2)})^{2}$ in Theorem 5, $1\leq i\leq k$, $s\in\overline{G}(\mathbb{F}_{2})$.
For (6.32) it suffices, in view of the presentation for $\tau_{p}(G)$ in
(6.30), to express every product $\kappa(\eta_{t})\mathcal{E}_{I}^{(p)}$,
$t\in G(p)$, $I\subseteq G(p)$, as an element in $\tau_{p}(G)$. Since $r_{p}$
restricts to an isomorphism $\tau_{p}(G)\cong\operatorname{Im}\delta_{p}$, the
relation $\mathcal{H}_{t,I}$ in (6.29) is obtained from the calculation in
$H^{\ast}(G;\mathbb{F}_{p})$
\begin{quote}
$r_{p}(\kappa(\eta_{t})\mathcal{E}_{I}^{(p)})\equiv y_{t}^{k_{t}-1}\kappa(\theta_{t}^{(p)})\mathcal{C}_{I}^{(p)}$ (by Lemma 3.5 and
$r_{p}(\mathcal{E}_{I}^{(p)})=\mathcal{C}_{I}^{(p)}$)
$\equiv y_{t}^{k_{t}-1}\kappa(\theta_{t}^{(p)})\mathcal{\delta}_{p}(\theta
_{I}^{(p)})$ ($\mathcal{C}_{I}^{(p)}=\mathcal{\delta}_{p}(\theta_{I}^{(p)})$
by (5.12))
$\equiv y_{t}^{k_{t}-1}\mathcal{\delta}_{p}(\theta_{\{t\}}^{(p)}\theta
_{I}^{(p)})$ (since $\theta_{\{t\}}^{(p)}=\kappa(\theta_{t}^{(p)})$,
$y_{t}^{k_{t}-1}\mathcal{\delta}_{p}(\theta_{\{t\}}^{(p)})\equiv y_{t}^{k_{t}}\equiv0$)
$\equiv\left\{
\begin{tabular}
[c]{l}
$y_{t}^{k_{t}-1}\mathcal{C}_{I\cup\{t\}}^{(p)}\text{ if }t\notin
I\text{;\quad}$\\
$0\text{ if }I=\{t\}\text{;}$\\
$y_{t}^{k_{t}-1}(\theta_{\{t\}}^{(p)})^{2}\mathcal{C}_{I_{t}}^{(p)}\text{ if
}t\in I\text{, }\left\vert I\right\vert \geq2$,
\end{tabular}
\ \ \ \right. $
\end{quote}
\noindent where, in the third instance, $(\theta_{\{t\}}^{(p)})^{2}=0$ for
$t\in\overline{G}(\mathbb{F}_{2})$ since $\theta_{\{t\}}^{(p)}$ is of odd
dimension with order $p_{t}\neq2$, and where by Theorem 5, $(\theta_{\{t\}}^{(2)})^{2}$ with $t\in G(2)$ should be evaluated as in c) of (6.29).
Finally, concerning the presentations in i)--v), we remark that each
$\varrho_{\deg u}$ with free square contributes a generator to the exterior
part, and that if $G(p)$ is a singleton, the relation of the type
$\mathcal{H}_{t,I}$, $t\in G(p)$, $I\subseteq G(p)$, is unique, and can be
concretely given as $x_{\deg y_{t}}\varrho_{\deg\eta_{t}}=0$.$\square$
\bigskip
\noindent\textbf{Remark 6.5.} One may compare i), ii) of Theorem 6 with the
descriptions for $H^{\ast}(G_{2};\mathbb{Z})$ and $H^{\ast}(F_{4};\mathbb{Z})$
by Borel \cite{B5, B6}.
Theorem 6 summarizes the rings $H^{\ast}(G;\mathbb{Z})$ into the compact form
(6.27). As for the visibility of their structure in terms of \textsl{free
part} and \textsl{torsion parts}, the following alternative presentations
based on Theorem 2, together with Lemmas 6.2 and 6.3, may appear more
practical. To explain this we note that in the isomorphisms in Lemmas 6.2 and
6.3 we have $\zeta_{j}=r_{p}(\varrho_{j})$ by Lemma 3.5. Therefore,
\textsl{taking into account the relations }$\mathcal{H}_{t,I}$,\textsl{ }$t\in
G(p)$\textsl{, }$I\subseteq G(p)$\textsl{,} \textsl{that specify the actions
of the free part of }$H^{\ast}(G;\mathbb{Z})$\textsl{ on }$\tau_{p}(G)$ we
have, as examples,
\begin{enumerate}
\item[i)] $H^{\ast}(E_{6};\mathbb{Z})=\Delta_{\mathbb{Z}}(\varrho_{3})\otimes\Lambda_{\mathbb{Z}}(\varrho_{9},\varrho_{11},\varrho_{15},\varrho_{17},\varrho_{23})\oplus\tau_{2}(E_{6})\oplus\tau_{3}(E_{6})$, where
$\tau_{2}(E_{6})=\mathbb{F}_{2}[x_{6}]^{+}/\left\langle x_{6}^{2}\right\rangle \otimes\Delta_{\mathbb{F}_{2}}(\varrho_{3})\otimes\Lambda_{\mathbb{F}_{2}}(\varrho_{9},\varrho_{15},\varrho_{17},\varrho_{23})$
$\tau_{3}(E_{6})=\mathbb{F}_{3}[x_{8}]^{+}/\left\langle x_{8}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\varrho_{3},\varrho_{9},\varrho_{11},\varrho_{15},\varrho_{17})$
and where $\varrho_{3}^{2}=x_{6}\in\tau_{2}(E_{6})$.
\item[ii)] $H^{\ast}(E_{7};\mathbb{Z})=\Delta_{\mathbb{Z}}(\varrho_{3})\otimes\Lambda_{\mathbb{Z}}(\varrho_{11},\varrho_{15},\varrho_{19},\varrho_{23},\varrho_{27},\varrho_{35})\underset{p=2,3}{\oplus}\tau_{p}(E_{7})$, where
$\tau_{2}(E_{7})=\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18},\mathcal{C}_{I}^{(2)}]^{+}}{\left\langle x_{6}^{2},x_{10}^{2},x_{18}^{2},\mathcal{D}_{J}^{(2)},\mathcal{R}_{K}^{(2)},\mathcal{S}_{I,J}^{(2)}\right\rangle }\otimes\Delta_{\mathbb{F}_{2}}(\varrho_{3})\otimes\Lambda_{\mathbb{F}_{2}}(\varrho_{15},\varrho_{23},\varrho_{27})$,
$\tau_{3}(E_{7})=\mathbb{F}_{3}[x_{8}]^{+}/\left\langle x_{8}^{3}\right\rangle \otimes\Lambda_{\mathbb{F}_{3}}(\varrho_{3},\varrho_{11},\varrho_{15},\varrho_{19},\varrho_{27},\varrho_{35})$,
and where $\varrho_{3}^{2}=x_{6}\in\tau_{2}(E_{7})$, $I,J\subseteq
K=\{1,3,4\}$, $\left\vert I\right\vert ,\left\vert J\right\vert \geq2$.
\item[iii)] $H^{\ast}(E_{8};\mathbb{Z})=\Delta_{\mathbb{Z}}(\varrho_{3},\varrho_{15},\varrho_{23})\otimes\Lambda_{\mathbb{Z}}(\varrho_{27},\varrho_{35},\varrho_{39},\varrho_{47},\varrho_{59})\underset{p=2,3,5}{\oplus}\tau_{p}(E_{8})$, where
$\tau_{2}(E_{8})=\frac{\mathbb{F}_{2}[x_{6},x_{10},x_{18},x_{30},\mathcal{C}_{I}^{(2)}]^{+}}{\left\langle x_{6}^{8},x_{10}^{4},x_{18}^{2},x_{30}^{2},\mathcal{D}_{J}^{(2)},\mathcal{R}_{K}^{(2)},\mathcal{S}_{I,J}^{(2)}\right\rangle }\otimes\Delta_{\mathbb{F}_{2}}(\varrho_{3},\varrho_{15},\varrho_{23})\otimes\Lambda_{\mathbb{F}_{2}}(\varrho_{27})$,
$\tau_{3}(E_{8})=\frac{\mathbb{F}_{3}[x_{8},x_{20},\mathcal{C}_{\{2,6\}}^{(3)}]^{+}}{\left\langle x_{8}^{3},x_{20}^{3},x_{8}^{2}x_{20}^{2}\mathcal{C}_{\{2,6\}}^{(3)},(\mathcal{C}_{\{2,6\}}^{(3)})^{2}\right\rangle }\otimes\Lambda_{\mathbb{F}_{3}}(\varrho_{3},\varrho_{15},\varrho_{27},\varrho_{35},\varrho_{39},\varrho_{47})$;
$\tau_{5}(E_{8})=\mathbb{F}_{5}[x_{12}]^{+}/\left\langle x_{12}^{5}\right\rangle \otimes\Lambda_{\mathbb{F}_{5}}(\varrho_{3},\varrho_{15},\varrho_{23},\varrho_{27},\varrho_{35},\varrho_{39},\varrho_{47})$;
and where
$\qquad\varrho_{3}^{2}=x_{6},\varrho_{15}^{2}=x_{30},\varrho_{23}^{2}=x_{6}^{6}x_{10}\in\tau_{2}(E_{8})$,
$\qquad K,I$, $J\subseteq\{1,3,5,7\}$, $\left\vert I\right\vert ,\left\vert
J\right\vert \geq2$, $\left\vert K\right\vert \geq3$. $\square$
\end{enumerate}
\noindent\textbf{Remark 6.6. }For a Lie group $G$ with multiplication
$\mu:G\times G\rightarrow G$, the induced ring map
\begin{quote}
$\mu^{\ast}:H^{\ast}(G;\mathbb{F}_{p})\rightarrow H^{\ast}(G\times
G;\mathbb{F}_{p})=H^{\ast}(G;\mathbb{F}_{p})\otimes H^{\ast}(G;\mathbb{F}_{p})$
(resp. $\mu^{\ast}:H^{\ast}(G;\mathbb{Z})\rightarrow H^{\ast}(G\times
G;\mathbb{Z})$)
\end{quote}
\noindent furnishes $H^{\ast}(G;\mathbb{F}_{p})$ with the structure of a
\textsl{Hopf algebra }(resp. $H^{\ast}(G;\mathbb{Z})$ with the structure of
a\textsl{ near--Hopf ring}). With respect to our presentation of $H^{\ast
}(G;\mathbb{F}_{p})$ in Theorems 4 and 5 (resp. of $H^{\ast}(G;\mathbb{Z})$ in
Theorem 6) by the primary generators, this structure has been determined in
Lemma 3.3 in \cite{D2} (resp. in Theorems 1--5 in \cite{D2}).$\square$
\bigskip
\noindent\textbf{Acknowledgement.} The authors are very grateful to Ping Zhang
for many improvements on the earlier version of this paper.
| {
"timestamp": "2010-08-31T02:01:14",
"yymm": "0711",
"arxiv_id": "0711.2541",
"language": "en",
"url": "https://arxiv.org/abs/0711.2541",
"abstract": "Let $G$ be a compact and $1$--connected Lie group with a maximal torus $T$. Based on Schubert calculus on the flag manifold $G/T$ [15] we construct the integral cohomology ring $H^{\\ast}(G)$ uniformly for all $G$.",
"subjects": "Algebraic Topology (math.AT); Algebraic Geometry (math.AG)",
"title": "Schubert calculus and cohomology of Lie groups. Part I. 1-connected Lie groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9648551556203815,
"lm_q2_score": 0.7341195269001831,
"lm_q1q2_score": 0.708319010371237
} |
https://arxiv.org/abs/2110.08453 | Voting Theory in the Lean Theorem Prover | There is a long tradition of fruitful interaction between logic and social choice theory. In recent years, much of this interaction has focused on computer-aided methods such as SAT solving and interactive theorem proving. In this paper, we report on the development of a framework for formalizing voting theory in the Lean theorem prover, which we have applied to verify properties of a recently studied voting method. While previous applications of interactive theorem proving to social choice (using Isabelle/HOL and Mizar) have focused on the verification of impossibility theorems, we aim to cover a variety of results ranging from impossibility theorems to the verification of properties of specific voting methods (e.g., Condorcet consistency, independence of clones, etc.). In order to formalize voting theoretic axioms concerning adding or removing candidates and voters, we work in a variable-election setting whose formalization makes use of dependent types in Lean. | \section{Introduction}
There is a long tradition of fruitful interaction between logic and social choice theory. Both Kenneth Arrow \cite[p.~154]{Arrow2014} and Amartya Sen \cite[p.~108]{Sen2017} have noted the influence of mathematical logic on their thinking about the foundations of social choice theory. Early work using logical methods in social choice theory includes Murakami's \cite{Murakami1968} application of results about three-valued logic to the analysis of voting rules, Rubinstein's \cite{Rubinstein1984} proof of the equivalence between multi-profile and single-profile approaches to social choice, and Parikh's \cite{Parikh1985} development of a logic of games to study social procedures. There is now a rich literature developing logical systems that can formalize results in social choice theory (see, e.g., \cite{Pauly2008,AgotnesVanDerHoekWooldridge2009,Nipkow2009,Tang2011,TroquardVanDerHoekWooldridge2011,Endriss2011,GrandiEndriss2013,CinaEndriss2016,PacuitYang2016,HollidayPacuit2020}).
In recent years, much of the research on logic and social choice has focused on computer-aided methods such as SAT solving and interactive theorem proving \cite{Geist2017}. The first applications of interactive theorem proving used Isabelle/HOL \cite{Nipkow2009} and Mizar \cite{Wiedijk2007} to formalize different proofs of Arrow's Impossibility Theorem \cite{Geanakoplos2005}. More recently, \cite{Brandt2018b} and \cite{Eberl2019} used Isabelle to verify impossibility theorems from \cite{Brandt2018} and \cite{Brandl2018}, respectively. These projects demonstrate, as Nipkow \cite{Nipkow2009} notes, that ``social choice theory turns out to be perfectly suitable for mechanical theorem proving" (p.~303). In this paper, we provide further evidence of this by developing a framework for formalizing voting theory using an interactive theorem prover.
But why formalize? One obvious benefit of such a project is the verification of the correctness of mathematical claims in voting theory. Several published claims, including Arrow's \cite{Arrow1951} original statement of his impossibility theorem (for more than 3 candidates), Baigent's \cite{Baigent1987} variation involving ``weak IIA'' (in the case of 3 candidates), and Routley's \cite{Routley1979} claimed generalization of Arrow's theorem to infinite populations, were disproved by counterexamples (see \cite{Blau1957}, \cite{Campbell2000}, and \cite{Blau1979}). Second, formalization allows us to carefully track which assumptions---e.g., about voter preferences, cardinalities, choice of primitive concepts, etc.---are needed for which results, leading to generalizations and perhaps even new avenues for research. Third, formalization may eventually facilitate automated search of the corpus of proved results for use by researchers in proving new results.
For our formalization project we chose to use the Lean theorem prover \cite{Lean}, a framework that supports both interactive and automated theorem proving. Lean's kernel is based on dependent type theory and implements a version of the calculus of inductive constructions \cite{Coquand1988} and Martin-L\"of type theory \cite{MartinLof1984}. There is an extensive and actively maintained library of mathematical results formalized in Lean (see \url{https://leanprover-community.github.io/mathlib_docs/}). In addition, Lean is the system chosen for the Formal Abstracts project initiated by Thomas Hales (see \url{https://formalabstracts.github.io}).
Our aim was to use Lean to verify results involving axioms for voting methods (e.g., Condorcet consistency, independence of clones, etc.). In order to formalize axioms concerning adding or removing candidates and voters, we work in a variable-election setting whose formalization makes use of dependent types, as explained in Section \ref{Framework}. In Section \ref{Theorems}, we discuss our formal verification of results from \cite{HP2020b} about a recently studied voting method, Split Cycle (defined in Example \ref{SplitCycleEx1} below), illustrating the usefulness of our framework. We conclude in Section \ref{Conclusion} with directions for further work. All of the code for our project is available at \url{https://github.com/chasenorman/Formalized-Voting}.
\section{Framework}\label{Framework}
In this section, we define the basic objects of voting theory: profiles, social choice correspondences, etc. We first give standard set-theoretic definitions and then their type-theoretic counterparts in Lean syntax. After defining these objects, we discuss our formalization of standard axioms used to evaluate voting procedures.
\subsection{Profiles}
For our set-theoretic definitions, we fix infinite sets $\mathcal{V}$ and $\mathcal{X}$ of voters and candidates, respectively. Given $X\subseteq\mathcal{X}$, let $\mathcal{B}(X)$ be the set of all binary relations on $X$. Instead of thinking of a binary relation as a set of ordered pairs, here it is more convenient to think of a binary relation on $X$ as a function $S: X\times X \to \{0,1\}$. In fact, to better match our Lean formalization, we ``curry'' all functions with multiple arguments, transforming them into functions with single arguments that output functions. Thus, we regard a binary relation on $X$ as a function $S:X\to \{0,1\}^X$, where $\{0,1\}^X$ is the set of functions from $X$ to $\{0,1\}$. For any $x\in X$, $S(x): X\to \{0,1\}$, and $S(x)(y)=1$ means that the binary relation $S$ holds of $(x,y)$. In what follows, we write `$xSy$' instead of $S(x)(y)=1$.
\begin{definition}\label{ProfileDef} \textnormal{For $V\subseteq\mathcal{V}$ and $X\subseteq\mathcal{X}$, a \emph{$(V,X)$-profile} is a map $\mathbf{Q}:V\to \mathcal{B}(X)$. We write `$\mathbf{Q}_i$' for the relation $\mathbf{Q}(i)$. Given a $(V,X)$-profile $\mathbf{Q}$, let $V(\mathbf{Q})$ be $V$ and $X(\mathbf{Q})$ be $X$. We then define a function $\mathsf{Prof}$ that assigns to each pair $(V,X)$ of $V\subseteq\mathcal{V}$ and $X\subseteq\mathcal{X}$ the set $\mathsf{Prof}(V,X)$ of all $(V,X)$-profiles. Finally, define $\mathsf{PROF} = \bigcup_{V\subseteq\mathcal{V},X\subseteq\mathcal{X}}\mathsf{Prof}(V,X)$.}\end{definition}
Depending on the application, one can interpret $x\mathbf{Q}_i y$ to mean either (i) that voter $i$ strictly prefers $x$ to $y$ or (ii) that voter $i$ strictly prefers $x$ to $y$ or is indifferent between $x$ and $y$. We allow either interpretation for the sake of generality, as different voting theorists select different primitives. Under interpretation (i), we use `$\mathbf{P}$' for a profile; under interpretation (ii), we use `$\mathbf{R}$' for a profile.\footnote{\label{VoterNote}Approach (ii) is more general, since it allows one to distinguish between voter $i$ being \textit{indifferent} between $x$ and $y$, defined as $x\mathbf{R}_iy$ and $y\mathbf{R}_ix$, vs. $x$ and $y$ being \textit{noncomparable} for $i$, defined as \textit{neither} $x\mathbf{R}_iy$ \textit{nor} $y\mathbf{R}_ix$. When the distinction between voter indifference and noncomparability is not needed, approach (i) can be simpler.} A profile $\mathbf{Q}$ is said to be \emph{asymmetric} (\emph{transitive}, etc.) if for every $i\in V$, $\mathbf{Q}_i$ is asymmetric (transitive, etc.). Of course, asymmetric profiles only make sense under interpretation (i), whereas under interpretation (ii), profiles should be reflexive.
To translate Definition \ref{ProfileDef} into Lean, we first think of $V$ and $X$ as types, rather than sets, and then represent the function \textsf{Prof} from Definition \ref{ProfileDef} as follows:\footnote{When writing type expressions, arrows associate to the right, so, e.g., the expression `\texttt{V $\to$ X $\to$ X $\to$ Prop}' stands for \texttt{V $\to$ (X $\to$ (X $\to$ Prop))}.}
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{Prof} : \textcolor{blue}{Type} $\to$ \textcolor{blue}{Type} $\to$ \textcolor{blue}{Type} := }\\\texttt{$\textcolor{blue}{\lambda}$ (V X : \textcolor{blue}{Type}), V $\to$ X $\to$ X $\to$ \textcolor{blue}{Prop}}
\end{itemize}
Here \texttt{Prop} is the type of propositions, which in the definition plays the role of $\{0,1\}$ in the treatment of binary relations mentioned above. The definition states that \texttt{Prof} is a function that given two types, \texttt{V} and \texttt{X}, outputs the type \texttt{V $\to$ X $\to$ X $\to$ Prop}. Because \texttt{X $\to$ X $\to$ Prop} is the type of binary relations on \texttt{X}, an element of the type \texttt{V $\to$ X $\to$ X $\to$ Prop} can be viewed as a $(V,X)$-profile. Thus, we may think of \texttt{Prof V X} as the type of $(V,X)$-profiles.
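As a small illustration of working with this type (a sketch of our own; the name \texttt{asymmetric\_prof} does not come from our repository), the property of a profile being asymmetric, discussed after Definition \ref{ProfileDef}, can be expressed as a predicate on \texttt{Prof V X}:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{asymmetric\_prof} $\{$V X : \textcolor{blue}{Type}$\}$ : Prof V X $\to$ \textcolor{blue}{Prop} :=}\\
\texttt{\textcolor{blue}{$\lambda$} Q, $\forall$ (i : V) (x y : X), Q i x y $\to$ $\neg$ Q i y x}
\end{itemize}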
One of the most important kinds of information to read off from a profile is whether one candidate is majority preferred to another.
\begin{definition}\label{MajPrefDef} \textnormal{Given a profile $\mathbf{P}$ and $x,y\in X(\mathbf{P})$, we say that \textit{$x$ is majority preferred to $y$ in $\mathbf{P}$} if more voters rank $x$ above $y$ than rank $y$ above $x$.}\end{definition}
In Lean, we formalize Definition \ref{MajPrefDef} as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{majority\_preferred} $\{$V X : \textcolor{blue}{Type}$\}$ :}
\texttt{Prof V X $\to$ X $\to$ X $\to$ \textcolor{blue}{Prop} := \textcolor{blue}{$\lambda$} P x y,}
\texttt{cardinal.mk $\{$v : V // P v x y$\}$ >
cardinal.mk $\{$v : V // P v y x$\}$}
\end{itemize}
Here `\texttt{$\{$V X : \textcolor{blue}{Type}$\}$}' indicates that \texttt{V} and \texttt{X} are implicit arguments of type \texttt{Type} to the function \texttt{majority\_preferred}.\footnote{See \href{https://leanprover.github.io/reference/expressions.html\#implicit-arguments}{Section 3.3} of the Lean documentation on implicit arguments.} Then \texttt{majority\_preferred} takes in explicit arguments of a $(V,X)$-profile and two candidates and returns the proposition stating that the cardinality of the set of voters who prefer \texttt{x} to \texttt{y} is greater than the cardinality of the set of voters who prefer \texttt{y} to \texttt{x}. Here the `\texttt{//}' notation indicates that we are forming the subtype of voters with a certain property, and \texttt{cardinal.mk} gives us the cardinality of the subtype.
Voting theorists are often concerned not only with whether one candidate is majority preferred to another but also, if so, by what margin.
\begin{definition}\label{MarginDef} \textnormal{Given a profile $\mathbf{P}$ and $x,y\in X(\mathbf{P})$, the \textit{margin of $x$ over $y$ in $\mathbf{P}$}, denoted $Margin_\mathbf{P}(x,y)$, is $|\{i\in V(\mathbf{P})\mid x\mathbf{P}_iy\}|-|\{i\in V(\mathbf{P})\mid y\mathbf{P}_ix\}|$.}
\end{definition}
\noindent In Lean, Definition \ref{MarginDef} becomes:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{margin} $\{$V X : \textcolor{blue}{Type}$\}$ [fintype V] : Prof V X $\to$ X $\to$ X $\to$ $\mathbb{Z}$}
\texttt{:=
\textcolor{blue}{$\lambda$} P x y, $\uparrow$(finset.univ.filter (\textcolor{blue}{$\lambda$} v, P v x y)).card
-} \\ \texttt{$\uparrow$(finset.univ.filter (\textcolor{blue}{$\lambda$} v, P v y x)).card}
\end{itemize}
Here `\texttt{[fintype V]}' can be understood as an implicit assumption that \texttt{V} is finite,\footnote{See the Lean community page on \href{https://leanprover-community.github.io/theories/sets.html\#finite-types}{Sets and set-like objects}.} which we make so that we can perform the subtraction in the definition of \texttt{margin}. The \texttt{margin} function takes in explicit arguments of a $(V,X)$-profile and two candidates and returns the margin of the first over the second; in particular, `\texttt{finset.univ.filter (\textcolor{blue}{$\lambda$} v, P v x y)}' is syntax for constructing the set \\ $\{v\in V(\mathbf{P})\mid x\mathbf{P}_v y\}$, \texttt{.card} takes the cardinality of the set (a natural number), and $\uparrow$ shifts the type from natural number to integer (so we can subtract).
As usual, we can regard the $Margin_\mathbf{P}$ function as an $|X(\mathbf{P})|\times |X(\mathbf{P})|$ matrix. Since $Margin_\mathbf{P}(x,y)=-Margin_\mathbf{P}(y,x)$, the matrix is skew-symmetric. Treating an integer-valued square matrix as a function from a set $X$ to functions from $X$ to $\mathbb{Z}$, we define skew-symmetry as a function that takes in such a matrix and outputs the proposition stating that the skew-symmetry equation holds for all pairs:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{skew\_symmetric} $\{$X : \textcolor{blue}{Type}$\}$ : (X $\to$ X $\to \mathbb{Z}$) $\to$ Prop :=}
\texttt{\textcolor{blue}{$\lambda$} M, $\forall$ x y, M x y = - M y x}.
\end{itemize}
Verifying that $Margin_\mathbf{P}$ is skew-symmetric is trivial using Lean's automation:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{lemma} \textcolor{orange}{margin\_skew\_symmetric} $\{$V X : \textcolor{blue}{Type}$\}$ (P : Prof V X)}\\ \texttt{[fintype V] : skew\_symmetric (margin P) :=}
\texttt{\textcolor{blue}{begin}}
\quad \texttt{unfold margin,}
\quad \texttt{obviously,}
\texttt{\textcolor{blue}{end}}
\end{itemize}
The \texttt{unfold} tactic writes \texttt{margin P} in terms of the definition of \texttt{margin} above, allowing the \texttt{obviously} tactic to fill in the details of the proof of skew-symmetry.
Returning to properties of profiles, one of the most important to consider is whether a profile has a so-called Condorcet winner or even a majority winner.
\begin{definition}\label{CondorcetDef} \textnormal{Given a profile $\mathbf{P}$ and $x\in X(\mathbf{P})$, $x$ is a \textit{Condorcet winner in $\mathbf{P}$} if for all $y\in X(\mathbf{P})$ with $y\neq x$, $x$ is majority preferred to $y$ in $\mathbf{P}$. We say that $x$ is a \textit{majority winner in $\mathbf{P}$} if the number of voters who rank $x$ (and only $x$) in first place is greater than the number of voters who do not rank $x$ in first place.}
\end{definition}
In Lean, Definition \ref{CondorcetDef} becomes:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{condorcet\_winner} $\{$V X : \textcolor{blue}{Type}$\}$ (P : Prof V X) (x : X) :} \\ \texttt{Prop := $\forall$ y $\neq$ x, majority\_preferred P x y}
\item[]
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{majority\_winner} $\{$V X : \textcolor{blue}{Type}$\}$ (P : Prof V X) (x : X) :} \\
\texttt{ Prop := cardinal.mk $\{$v : V // $\forall$ y $\neq$ x, P v x y$\}$ > cardinal.mk} \\ \texttt{$\{$v : V // $\exists$ y $\neq$ x, P v y x$\}$}
\end{itemize}
As an example of a more involved proof than the one above showing that the margin matrix is skew-symmetric, we present a proof in Lean that a majority winner is also a Condorcet winner. For this we use several basic theorems provided by Mathlib, including one formalizing the fact that if every element satisfying one predicate also satisfies a second, then the subtype determined by the first predicate has cardinality less than or equal to that of the subtype determined by the second:\footnote{We have changed variable names and replaced `$\#$' with `\texttt{cardinal.mk}'.}
\begin{itemize}
\item[] \texttt{\textcolor{blue}{theorem} \textcolor{orange}{cardinal.mk\_subtype\_mono} $\{$$\alpha$ : Type u$\}$ $\{$$\varphi$ $\psi$ : $\alpha$ $\to$ Prop$\}$} \\
\texttt{(h : $\forall$ x, $\varphi$ x $\to$ $\psi$ x) :} \\
\texttt{cardinal.mk $\{$x // $\varphi$ x$\}$ $\leq$ cardinal.mk $\{$x // $\psi$ x$\}$}
\end{itemize}
We explain the following Lean proof in detail below:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{lemma} \textcolor{orange}{condorcet\_of\_majority\_winner} $\{$V X : \textcolor{blue}{Type}$\}$ (P : Prof V X)}
\texttt{[fintype V] (x : X) :}
\texttt{majority\_winner P x $\to$ condorcet\_winner P x :=}
\textcolor{blue}{\texttt{begin}}
\item[\texttt{1.}]\quad \texttt{intros majority z z\_ne\_x,}
\item[\texttt{2.}]\quad \texttt{\textcolor{blue}{have} imp1 : $\forall$ v, ($\forall$ y $\neq$ x, P v x y) $\to$ P v x z := \textcolor{blue}{by} finish,}
\item[\texttt{3.}]\quad \texttt{refine lt\_of\_lt\_of\_le \_ (cardinal.mk\_subtype\_mono imp1),}
\item[\texttt{4.}]\quad \texttt{\textcolor{blue}{have} imp2 : $\forall$ v, P v z x $\to$ ($\exists$ y $\neq$ x, P v y x) := \textcolor{blue}{by} finish,}
\item[\texttt{5.}]\quad \texttt{apply lt\_of\_le\_of\_lt (cardinal.mk\_subtype\_mono imp2),}
\item[\texttt{6.}]\quad \texttt{exact majority, }
\textcolor{blue}{\texttt{end}}
\end{itemize}
Since the logical form of what we want to prove, \texttt{majority\_winner P x $\to$ condorcet\_winner P x}, is an implication, we use \texttt{intros} on line \texttt{1} to introduce a name \texttt{majority} for a proof of \texttt{majority\_winner P x}. Then since the consequent, \texttt{condorcet\_winner P x}, is a universal claim, \texttt{$\forall$ y $\neq$ x, majority\_preferred P x~y}, we introduce a name \texttt{z} for an arbitrary candidate and a name \texttt{z\_ne\_x} for a proof of \texttt{z $\neq$ x}. Our goal is now to prove \texttt{majority\_preferred P x z}.
The first key move on line \texttt{2} is to prove that everyone who ranks \texttt{x} first ranks \texttt{x} above \texttt{z}, which Lean does automatically using the \texttt{finish} tactic. Since \texttt{imp1} is a proof of a proposition of the form \texttt{($\forall$ v, $\varphi$ v $\to$ $\psi$ v)}, we can apply the Mathlib theorem \texttt{cardinal.mk\_subtype\_mono} to get a proof \texttt{cardinal.mk\_subtype\_mono imp1} that the number of voters who rank \texttt{x} first is less than or equal to the number of voters who rank \texttt{x} above \texttt{z}.
On line \texttt{3}, we use a Mathlib theorem, \texttt{lt\_of\_lt\_of\_le}, which states that \texttt{n}~$<$~\texttt{m}~$\to$ \texttt{m} $\leq$ \texttt{k} $\to$ \texttt{n} $<$ \texttt{k} (recall that implication associates to the right). Take \texttt{n} to be the number of voters who rank \texttt{z} above \texttt{x}, \texttt{m} to be the number who rank \texttt{x} first, and \texttt{k} to be the number who rank \texttt{x} above \texttt{z}. Thus, our goal is to prove \texttt{n}~$<$~\texttt{k}, and above we proved \texttt{m} $\leq$ \texttt{k}. Now \texttt{m} $\leq$ \texttt{k} is not the antecedent of \texttt{n}~$<$~\texttt{m}~$\to$~\texttt{m}~$\leq$~\texttt{k}~$\to$~\texttt{n}~$<$~\texttt{k}, but Lean's \texttt{refine} tactic allows us to insert a placeholder $\_$ for the antecedent, so our goal then becomes proving \texttt{n} $<$ \texttt{m}.
To prove \texttt{n} $<$ \texttt{m}, the key move on line \texttt{4} is to prove that everyone who ranks \texttt{z} above \texttt{x} does not rank \texttt{x} first, which Lean does automatically using the \texttt{finish} tactic. Then we can apply \texttt{cardinal.mk\_subtype\_mono} to obtain a proof \texttt{cardinal.mk\_subtype\_mono imp2} that the number \texttt{n} of voters who rank \texttt{z} above \texttt{x} is less than or equal to the number---call it \texttt{m}$'$---of voters who do not rank \texttt{x} first. Thus, we have a proof of \texttt{n~$\leq$~m$'$}, so we can apply the implication \texttt{n~$\leq$~m$'$~$\to$~m$'$~$<$~m~$\to$~n~$<$ m} provided by the Mathlib theorem \texttt{lt\_of\_le\_of\_lt} to obtain a proof of \texttt{m$'$ $<$ m $\to$ n $<$ m}. Then since \texttt{majority} is exactly a proof of the antecedent of \texttt{m$'$ $<$ m $\to$ n $<$ m}, we obtain a proof of our goal \texttt{n $<$ m}.
\subsection{Functions on profiles}
Next we define two standard kinds of functions in voting theory that input profiles. The first, a \textit{social choice correspondence} (SCC), assigns to a profile a set of candidates, considered tied for winning the election. It is common to consider ``domain restrictions'' on the set of profiles for which the SCC is defined \cite{Gaertner2001}. Thus, one may define an SCC as a function $F$ on some set $\mathcal{D}$ of profiles such that for all $\mathbf{Q}\in\mathcal{D}$, we have ${\emptyset\neq F(\mathbf{Q})\subseteq X(\mathbf{Q})}$. However, for our formalization purposes, it is more convenient to use the following equivalent approach.
\begin{definition} \textnormal{For $V\subseteq\mathcal{V}$ and $X\subseteq\mathcal{X}$, a \textit{social choice correspondence for $(V,X)$}, or $(V,X)$-SCC, is a function $F: \mathsf{Prof}(V,X)\to \wp(X)$. We abuse terminology and call the set $\{\mathbf{Q}\in\mathsf{Prof}(V,X)\mid F(\mathbf{Q})\neq\emptyset \}$ the \textit{domain} of $F$. We say that $F$ satisfies \textit{universal domain} if its domain is $\mathsf{Prof}(V,X)$.}
\textnormal{Let $\mathsf{SCC}$ be a function that assigns to each pair $(V,X)$ of $V\subseteq\mathcal{V}$ and $X\subseteq\mathcal{X}$ the set of all $(V,X)$-SCCs.}
\end{definition}
We represent the function $\mathsf{SCC}$ in Lean as follows, where \texttt{set X} is the type of subsets of \texttt{X}:\footnote{When writing type expressions, function application binds more strongly than arrow, so `\texttt{Prof V X $\to$ set X}' stands for \texttt{(Prof V X) $\to$ set X}.}
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{SCC} := \textcolor{blue}{$\lambda$} (V X : \textcolor{blue}{Type}), Prof V X $\to$ set X}
\end{itemize}
The definition states that \texttt{SCC} is a function that given two types, \texttt{V} and \texttt{X}, outputs the type \texttt{Prof V X $\to$ set X}, which is the type of $(V,X)$-SCCs.
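For example (an illustrative sketch; the name \texttt{trivial\_SCC} is ours and does not appear in our repository), the $(V,X)$-SCC that declares every candidate a winner in every profile can be written using Mathlib's universal set \texttt{set.univ}:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{trivial\_SCC} $\{$V X : \textcolor{blue}{Type}$\}$ : SCC V X := \textcolor{blue}{$\lambda$} P, set.univ}
\end{itemize}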
We formalize universal domain as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{universal\_domain\_SCC} $\{$V X : \textcolor{blue}{Type}$\}$ (F : SCC V X) : Prop :=} \\
\texttt{$\forall$ P : Prof V X, F P $\neq$ $\emptyset$}
\end{itemize}
\begin{example}\label{CondorcetEx1} For any $V$, $X$, consider the Condorcet SCC for $(V,X)$ defined by:
\[\mathrm{Cond}_{(V,X)}(\mathbf{P})=\begin{cases} \{x\} & \mbox{if there is a Condorcet winner $x$ in $\mathbf{P}$} \\ X(\mathbf{P}) & \mbox{otherwise}\end{cases}.\]
The definition states that given a $(V,X)$-profile $\mathbf{P}$, if there is a Condorcet winner---in which case it is unique---the output is the singleton set containing that winner; otherwise the output is the set of all candidates in $X$.
We represent this $(V,X)$-SCC in Lean as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{condorcet\_SCC} $\{$V X : \textcolor{blue}{Type}$\}$ : SCC V X := $\textcolor{blue}{\lambda}$ P, }
\texttt{$\{$x : X | condorcet\_winner P x $\vee$ $\neg$ $\exists$ y, condorcet\_winner P y$\}$}
\end{itemize}
\end{example}
Most voting methods (e.g., Plurality, Borda, Instant Runoff) are defined not only for a fixed set of voters and candidates but for any set of voters and candidates, which motivates the following definition.
\begin{definition}\label{VSCC} \textnormal{A \textit{variable-election social choice correspondence} (VSCC) is a function $F$ that assigns to each pair $(V,X)$ of a $V\subseteq \mathcal{V}$ and $X\subseteq\mathcal{X}$ a $(V,X)$-SCC. We abuse terminology and call the set $\{\mathbf{Q}\in\mathsf{PROF}\mid F(V(\mathbf{Q}),X(\mathbf{Q}))(\mathbf{Q})\neq\emptyset \}$ the \textit{domain} of $F$. We say that $F$ satisfies (\textit{finite}) \textit{universal domain} if the domain of $F$ includes $\{\mathbf{P}\in \mathsf{PROF}\mid V(\mathbf{P}) \mbox{ and }X(\mathbf{P}) \mbox{ nonempty and finite}\}$.}\footnote{Of course, one could also consider the stronger condition that the domain of $F$ contains all profiles even with infinite sets of voters and/or candidates.}
\end{definition}
\noindent An equivalent but perhaps more intuitive approach would define a VSCC to be a function on $\mathsf{PROF}$ (rather than $\wp(\mathcal{V})\times\wp(\mathcal{X})$) such that for each $\mathbf{Q}\in\mathsf{PROF}$, we have $F(\mathbf{Q})\subseteq X(\mathbf{Q})$;\footnote{This is the definition of a \textit{voting method} used in \cite{HP2020b} with the additional stipulations that $F(\mathbf{Q})\neq\emptyset$ and that $V(\mathbf{Q})$ and $X(\mathbf{Q})$ are nonempty and finite.} abusing terminology, we could then call the set $\{\mathbf{Q}\in\mathsf{PROF}\mid F(\mathbf{Q})\neq\emptyset\}$ the \emph{domain} of the VSCC. However, we have presented Definition \ref{VSCC} above because it nicely connects with our formalization in Lean.
In Lean, we define the type of VSCCs as a \textit{dependent function type}:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{VSCC} : \textcolor{blue}{Type 1} := $\Pi$ (V X : \textcolor{blue}{Type}), SCC V X}
\end{itemize}
Given \texttt{$\alpha$ : Type 1} and \texttt{$\beta$ : $\alpha$ $\to$ $\alpha$ $\to$ Type}, the type \texttt{$\Pi$ y z : $\alpha$, $\beta$ y z} is the type of functions \texttt{f} such that for each \texttt{a b : $\alpha$}, we have that \texttt{f a b} is an element of \texttt{$\beta$~a~b}. In the definition of \texttt{VSCC} above, $\alpha$ is \texttt{Type} and $\beta$ is \texttt{SCC}. Thus, the definition states that an element of the type \texttt{VSCC} is a function that for any types \texttt{V} and \texttt{X} returns a function of the type \texttt{SCC V X}, i.e., a $(V,X)$-SCC.
We formalize (finite) universal domain as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{finite\_universal\_domain\_VSCC} (F : VSCC) : Prop :=}\\
\texttt{$\forall$ V X [inhabited V] [inhabited X] [fintype V] [fintype X],}
\texttt{universal\_domain\_SCC (F V X)}
\end{itemize}
\begin{example} We define the Condorcet VSCC as follows, taking advantage of our definition for any $V$ and $X$ of the Condorcet $(V,X)$-SCC in Example \ref{CondorcetEx1}:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{condorcet\_VSCC} : VSCC := \textcolor{blue}{$\lambda$} V X, condorcet\_SCC}
\end{itemize}
\end{example}
Similarly, we may define VSCCs for Plurality voting, Borda, Instant Runoff, etc.
The second type of function we consider assigns to a given profile a binary relation on the set of candidates in the profile.
\begin{definition}\textnormal{For $V\subseteq\mathcal{V}$ and $X\subseteq\mathcal{X}$, a \textit{collective choice rule for $(V,X)$}, or $(V,X)$-CCR, is a function $f: \mathsf{Prof}(V,X)\to \mathcal{B}(X)$. Let $\mathsf{CCR}$ be a function that assigns to each pair $(V,X)$ of $V\subseteq\mathcal{V}$ and $X\subseteq\mathcal{X}$ the set of all $(V,X)$-CCRs.}\end{definition}
\noindent Depending on the application, one can interpret the binary relation $f(\mathbf{Q})$ in one of two ways: $xf(\mathbf{Q})y$ can mean (a) $x$ is strictly preferred to $y$ socially or (b) $x$ is strictly preferred to or tied with $y$ socially.\footnote{As in Footnote \ref{VoterNote}, approach (b) is more general, since it allows one to distinguish between ``social indifference'' and ``social noncomparability'' (for examples of theorems in social choice in which this distinction matters, see \cite{HK2020}). When notions of social indifference and noncomparability are not needed, approach (a) can be simpler.} Once again, there is also the issue of ``domain restrictions.'' Under approach (a), we can mark that the CCR is ``undefined'' on a profile $\mathbf{Q}$ by setting $f(\mathbf{Q})= X(\mathbf{Q})\times X(\mathbf{Q})$. Then we can abuse terminology and call $\{\mathbf{Q}\in\mathsf{Prof}(V,X)\mid f(\mathbf{Q})\neq X(\mathbf{Q})\times X(\mathbf{Q}) \}$ the domain of $f$. Under approach (b), we can mark that the CCR is ``undefined'' on $\mathbf{Q}$ by setting $f(\mathbf{Q})=\emptyset$. Then we can abuse terminology and call $\{\mathbf{Q}\in\mathsf{Prof}(V,X)\mid f(\mathbf{Q})\neq \emptyset\}$ the domain of~$f$. A CCR $f$ is said to be \textit{asymmetric} (resp.~\textit{transitive}, etc.) if for all $\mathbf{Q}$ in the domain of $f$, $f(\mathbf{Q})$ is asymmetric (transitive, etc.). Of course, asymmetric CCRs only make sense under interpretation (a) above, whereas under interpretation (b), CCRs should be reflexive.
In Lean, our representation of the function $\mathsf{CCR}$ is similar to that of $\mathsf{SCC}$:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{CCR} := \textcolor{blue}{$\lambda$} (V X : \textcolor{blue}{Type}), Prof V X $\to$ X $\to$ X $\to$ \textcolor{blue}{Prop}}
\end{itemize}
\begin{example}\label{SplitCycleEx1} As an example of a CCR, we consider the Split Cycle CCR studied in \cite{HP2020}. The output of the Split Cycle CCR is an asymmetric relation understood as a relation of ``defeat'' between candidates. A candidate $x$ defeats a candidate $y$ in $\mathbf{P}$ just in case the margin of $x$ over $y$ is (i) positive and (ii) greater than the weakest margin in each majority cycle containing $x$ and $y$. To formalize this definition, we first need a definition of a cycle in a binary relation:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{cycle} $\{$X : \textcolor{blue}{Type}$\}$ := \textcolor{blue}{$\lambda$} (R : X $\to$ X $\to$ \textcolor{blue}{Prop}) (c : list X), \\
$\exists$ (e : c $\neq$ list.nil), list.chain R (c.last e) c}
\end{itemize}
Here the function \texttt{cycle} takes in a binary relation \texttt{R} and a list \texttt{c} of elements of \texttt{X} and outputs the proposition stating that (i) there is a proof \texttt{e} that \texttt{c} is not the empty list, and (ii) \texttt{c} is a cycle in \texttt{R}. To express (ii), we use the construction \texttt{list.chain R a c}, where \texttt{R} is a binary relation, \texttt{a} is an element of \texttt{X}, and \texttt{c} is a list of elements of \texttt{X}, which means that \texttt{a} is \texttt{R}-related to the first element of \texttt{c} and that every element in the list \texttt{c} is related to the next element in \texttt{c}. Thus, if we take \texttt{a} as the last element of \texttt{c}, this implies that \texttt{c} is a cycle. Applying \texttt{c.last} to the proof \texttt{e} that \texttt{c} is not the empty list outputs the last element of \texttt{c}.
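For intuition, unfolding these definitions on a three-element list (a sketch of ours): \texttt{cycle R [x, y, z]} holds just in case
\[
R\,z\,x \,\wedge\, R\,x\,y \,\wedge\, R\,y\,z,
\]
since the last element \texttt{z} must be related to the head \texttt{x}, and each element of the list must be related to its successor.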
Now we are ready to define the Split Cycle $(V,X)$-CCR in Lean:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{split\_cycle\_CCR} $\{$V X : \textcolor{blue}{Type}$\}$ : CCR V X :=}
\texttt{\textcolor{blue}{$\lambda$} (P : Prof V X) (x y : X), $\forall$ [n : fintype V],}
\texttt{0 $<$ @margin V X n P x y $\wedge$}
\texttt{$\neg$ ($\exists$ (c : list X), x $\in$ c $\wedge$ y $\in$ c $\wedge$}\\
\texttt{cycle (\textcolor{blue}{$\lambda$} a b, @margin V X n P x y $\leq$ @margin V X n P a b) c)}
\end{itemize}
Recall that the \texttt{margin} function takes as implicit arguments the set \texttt{V} of voters, the set \texttt{X} of candidates, and a proof that \texttt{V} is finite. The \texttt{@} symbol is used when explicitly supplying these implicit arguments. Thus, the definition states that given a profile \texttt{P} and two candidates \texttt{x} and \texttt{y}, the binary relation output by \texttt{split\_cycle\_CCR} holds of \texttt{x}, \texttt{y} if for any proof \texttt{n} that \texttt{V} is finite, the margin of \texttt{x} over \texttt{y} in \texttt{P} (supplying the \texttt{margin} function with \texttt{V}, \texttt{X}, and \texttt{n}) is greater than 0 and there is no list \texttt{c} of elements containing \texttt{x} and \texttt{y} such that \texttt{c} is a majority cycle for which the margin of \texttt{x} over \texttt{y} is less than or equal to every margin in the cycle, i.e., \texttt{c} is a cycle in the binary relation \texttt{R} that holds of \texttt{a}, \texttt{b} just in case the margin of \texttt{x} over \texttt{y} is less than or equal to the margin of \texttt{a} over \texttt{b}.
\end{example}
Once again, we can consider functions that are not restricted to a fixed set of voters and candidates.
\begin{definition}\label{VCCR} \textnormal{A \emph{variable-election collective choice rule} (VCCR) is a function that assigns to each pair $(V,X)$ of a $V\subseteq\mathcal{V}$ and $X\subseteq\mathcal{X}$ a $(V,X)$-CCR.}
\end{definition}
\noindent An equivalent but perhaps more intuitive definition takes a VCCR to be a function $f$ on $\mathsf{PROF}$ (instead of $\wp(\mathcal{V})\times\wp(\mathcal{X})$) such that for all $\mathbf{Q}\in\mathsf{PROF}$, $f(\mathbf{Q})$ is a binary relation on $X(\mathbf{Q})$.\footnote{This is the definition of a VCCR used in \cite{HP2020} with the additional stipulation that $V(\mathbf{Q})$ and $X(\mathbf{Q})$ are nonempty and finite.} However, we have presented Definition \ref{VCCR} above because it nicely connects with our formalization in Lean, which as in the case of VSCCs defines the type of VCCRs to be a dependent function type:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{VCCR} := $\Pi$ (V X : \textcolor{blue}{Type}), CCR V X}
\end{itemize}
The definition states that an element of the type \texttt{VCCR} is a function that for any types \texttt{V} and \texttt{X} returns a function of the type \texttt{CCR V X}, i.e., a $(V,X)$-CCR.
\begin{example}\label{SCVCCR} We define the Split Cycle VCCR as follows, taking advantage of our definition for any $V$ and $X$ of the Split Cycle $(V,X)$-CCR in Example \ref{SplitCycleEx1}:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{split\_cycle\_VCCR} : VCCR := \textcolor{blue}{$\lambda$} V X, split\_cycle\_CCR}
\end{itemize}
\end{example}
Any VCCR, regarded as outputting for a given profile (for a given $V$, $X$) a relation of strict social preference or ``defeat,'' can be transformed into a VSCC by assigning to a given profile the set of candidates who are not defeated.\footnote{An alternative approach, also easily formalizable, assigns to a given profile the set of candidates who are weakly socially preferred to all other candidates.}
\begin{definition}\label{InducedDef} \textnormal{Given an asymmetric VCCR $f$, we define the \textit{maximal-element induced} VSCC $f_{M}$ such that for any $V\subseteq\mathcal{V}$, $X\subseteq\mathcal{X}$, and $(V,X)$-profile $\mathbf{P}$, \[f_{M}(V,X)(\mathbf{P})=\{x\in X(\mathbf{P})\mid \forall y\in X(\mathbf{P}),\, (y,x)\not\in f(V,X)(\mathbf{P})\}.\]}
\end{definition}
In Lean, we formalize Definition \ref{InducedDef} as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{max\_el\_VSCC} : VCCR $\to$ VSCC := \textcolor{blue}{$\lambda$} f V X P,} \\
\texttt{$\{$x : X | $\forall$ y : X, $\neg$ f V X P y x$\}$}
\end{itemize}
\begin{example} The Split Cycle voting method \cite{HP2020b} is the maximal-element induced VSCC from the Split Cycle VCCR defined in Example \ref{SCVCCR}:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{split\_cycle} : VSCC := max\_el\_VSCC split\_cycle\_VCCR}
\end{itemize}
\end{example}
As is well known, any acyclic VCCR (i.e., a VCCR that assigns an acyclic CCR to each $V,X$) induces a VSCC satisfying (finite) universal domain:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{acyclic} $\{$X : \textcolor{blue}{Type}$\}$ : (X $\to$ X $\to$ \textcolor{blue}{Prop}) $\to$ \textcolor{blue}{Prop} :=}\\
\texttt{\textcolor{blue}{$\lambda$} Q, $\forall$ (c : list X), $\neg$ cycle Q c}
\item[]
\item[] \texttt{\textcolor{blue}{theorem} \textcolor{orange}{max\_el\_VSCC\_universal\_domain} (f : VCCR)}\\
\texttt{(a : $\forall$ V X [inhabited V] [inhabited X] [fintype V] [fintype X]} \\
\texttt{(P : Prof V X), acyclic (f V X P)) :}\\
\texttt{finite\_universal\_domain\_VSCC (max\_el\_VSCC f) := \dots}
\end{itemize}
The proof can be found in our online repository.
\subsection{Voting axioms}\label{VotingAxioms}
After formalizing the basic objects of voting theory, we formalized a number of standard axioms by which voting procedures are evaluated (and then proved that the axioms are satisfied by Split Cycle, as explained in Section \ref{Theorems}):
\begin{itemize}
\item[] \textbf{Domain axioms}:
\begin{itemize}
\item \textit{universal domain} (resp.~\textit{finite universal domain}): all profiles (resp.~finite profiles) are in the domain of the VSCC.\\
\end{itemize}
\item[] \textbf{Intra-profile axioms}:
\begin{itemize}
\item \textit{Condorcet criterion}: if there is a Condorcet winner in a profile, that candidate is the unique winner.
\item \textit{Condorcet loser criterion}: if there is a Condorcet loser in a profile---a candidate who loses to every other candidate in a head-to-head majority comparison---that candidate does not win.
\item \textit{Pareto}: if all voters rank candidate $x$ above candidate $y$ in a profile, then $y$ does not win.\\
\end{itemize}
\item[] \textbf{Inter-profile axioms}:
\begin{itemize}
\item \textit{monotonicity}: if $x$ wins in a profile $\mathbf{P}$, and $\mathbf{P}'$ is obtained from $\mathbf{P}$ by voters moving $x$ up in their rankings, then $x$ still wins in $\mathbf{P}'$.
\item \textit{reversal symmetry}: if $x$ is the unique winner in a profile $\mathbf{P}$, then $x$ is not a winner in the profile $\mathbf{P}^r$ obtained from $\mathbf{P}$ by reversing all voters' rankings.\\
\end{itemize}
\item[] \textbf{Variable-voter inter-profile axioms}:
\begin{itemize}
\item \textit{positive involvement}: if $x$ wins in a profile $\mathbf{P}$, and $\mathbf{P}'$ is obtained from $\mathbf{P}$ by adding a new voter who ranks $x$ as their unique first choice, then $x$ still wins in $\mathbf{P}'$.
\item \textit{negative involvement}: if $x$ does not win in a profile $\mathbf{P}$, and $\mathbf{P}'$ is obtained from $\mathbf{P}$ by adding a new voter who ranks $x$ as their unique last choice, then $x$ still does not win in $\mathbf{P}'$.\\
\end{itemize}
\item[] \textbf{Variable-candidate inter-profile axioms}:
\begin{itemize}
\item \textit{strong stability for winners}: if $x$ wins in a profile $\mathbf{P}$, and $\mathbf{P}'$ is obtained from $\mathbf{P}$ by adding a new candidate $y$ who does not beat $x$ in a head-to-head majority comparison, then $x$ still wins in $\mathbf{P}'$.
\item \textit{independence of clones}: see Section \ref{ClonesSection}.
\end{itemize}
\end{itemize}
Several of these axioms involved formalizing auxiliary relations or operations on profiles. For example, monotonicity requires the ternary relation of one profile being related to another by a \textit{simple lift} of a candidate $x$, meaning that $x$ may go up in voters' rankings but the rest of their rankings remains the same:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{simple\_lift} $\{$V X : \textcolor{blue}{Type}$\}$ : Prof V X $\to$ Prof V X $\to$ X $\to$ Prop :=
\textcolor{blue}{$\lambda$} P$'$ P x, ($\forall$ (a $\neq$ x) (b $\neq$ x) i, P i a b $\leftrightarrow$ P$'$ i a b)
$\wedge$ \\$\forall$ a i, ((P i x a $\to$ P$'$ i x a) $\wedge$ (P$'$ i a x $\to$ P i a x))}
\end{itemize}
The variable-voter and variable-candidate axioms involve moving from one profile to another with a different set of voters or candidates. In fact, it is most convenient to formalize these axioms using the operations of \textit{removing} a voter or candidate from a profile, moving from a profile of type \texttt{Prof V X} to a profile of type \texttt{ Prof $\{$v : V // v $\neq$ i$\}$ X}, in the case of removing a voter \texttt{i}, or of type \texttt{Prof V $\{$x : X // x $\neq$ b$\}$}, in the case of removing a candidate \texttt{b}. See Section~\ref{ClonesSection} for the definition of the \textcolor{orange}{\texttt{minus\_candidate}} operation.
We stress that the axiom definitions we formalized are the standard ones used throughout the literature, not ones tailored to any particular VSCC, so others can immediately use our formalizations to verify results for any VSCCs.
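To indicate how such axioms look in Lean (a hedged sketch of our own; the statement in our repository may differ in details such as finiteness assumptions), the Condorcet criterion for a VSCC can be expressed as:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{condorcet\_criterion} (F : VSCC) : \textcolor{blue}{Prop} :=}\\
\texttt{$\forall$ (V X : \textcolor{blue}{Type}) (P : Prof V X) (x : X),}\\
\texttt{condorcet\_winner P x $\to$ F V X P = $\{$x$\}$}
\end{itemize}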
\section{Theorems}\label{Theorems}
As a proof of concept of verifying theorems using our formal framework, we verified most of the results concerning the Split Cycle voting method in \cite{HP2020b}. In particular, we verified the equivalence of two definitions of Split Cycle (one quantifying over all cycles containing $x$ and $y$, the other quantifying over only paths from $y$ to $x$) and that Split Cycle satisfies all of the axioms listed in Section~\ref{VotingAxioms}. However, we emphasize that Split Cycle was only chosen as a test case. Our formalization of the basic objects and axioms of voting theory can be used to verify properties of any other voting method.
Moreover, verifying the properties of Split Cycle required formalizing a number of general-purpose lemmas that will be needed---and can now be used---to verify the properties of other voting methods. Indeed, one of the benefits of a formalization project such as ours is to extract those general-purpose lemmas, which may be buried in proofs for a particular voting method, so that future formalization efforts can move more quickly to formalizing sophisticated results.
\subsection{Graph-theoretic background}
Before formalizing voting-theoretic proofs, we had to build up basic infrastructure for reasoning about cycles, walks, and paths in graphs, such as rotating and reversing cycles and converting walks to paths, which was not available in Mathlib. For example, to convert walks to paths, we defined an inductive type\footnote{We can eliminate `\texttt{noncomputable}' if we assume that equality for \texttt{X} is decidable, adding `\texttt{[decidable\_eq X]}' as an implicit argument. }:
\begin{itemize}
\item[] \texttt{\textcolor{magenta}{noncomputable} \textcolor{blue}{def} \textcolor{orange}{to\_path} $\{$X : \textcolor{blue}{Type}$\}$ : list X $\to$ list X}
\item[] \texttt{| [] := []}
\item[] \texttt{| (u :: p) := \textcolor{blue}{let} p$'$ := to\_path p \textcolor{blue}{in}}
\item[] \quad\texttt{\textcolor{magenta}{if} u $\in$ p$'$ \textcolor{magenta}{then} (p$'$.drop (p$'$.index\_of u)) \textcolor{magenta}{else} (u :: p$'$)}
\end{itemize}
Hence \texttt{to\_path} maps the empty list to itself, and given a list \texttt{u :: p} constructed by adding \texttt{u} to the front of the list \texttt{p}, if \texttt{u} is an element of \texttt{to\_path~p}, we output the result of dropping from \texttt{to\_path p} all elements before \texttt{u} in the list, and otherwise we add \texttt{u} to the front of \texttt{to\_path p}. A significant part of the formalization effort was proving needed properties of \texttt{to\_path} and other operations on lists.
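The same recursion can be transcribed in Python (an illustrative sketch of the algorithm, not the Lean code):

```python
def to_path(walk):
    """Turn a walk (vertex list, possibly with repeats) into a path by
    short-circuiting at repeated vertices, as in the Lean definition."""
    if not walk:
        return []
    u, rest = walk[0], to_path(walk[1:])
    if u in rest:
        # drop everything in rest strictly before u
        return rest[rest.index(u):]
    return [u] + rest
```

For example, \texttt{to\_path([1, 2, 3, 2, 4])} yields \texttt{[1, 2, 4]}: the detour through \texttt{3} is cut out at the repeated occurrence of \texttt{2}.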
\subsection{Reasoning about margins}
In addition to developing graph-theoretic infrastructure, we formalized a number of general-purpose lemmas for reasoning about majority margins between candidates and the relation between changes in profiles and changes in margins. These lemmas are applicable to all voting methods for which the selection of winners is invariant between profiles with the same majority margins matrices---so-called C2 voting methods \cite{Fishburn1977} (e.g., Minimax, Ranked Pairs, Beat Path, Split Cycle, and even Borda). To take one example, when reasoning about monotonicity (recall Section \ref{VotingAxioms}) for C2 methods, a key fact is that a simple lift of a candidate $x$ cannot increase the margin of any candidate over~$x$:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{lemma} \textcolor{orange}{margin\_lt\_margin\_of\_lift} $\{$V X : \textcolor{blue}{Type}$\}$ (P P$'$ : Prof V X)} \\ \texttt{[fintype V] (y x : X) :} \\
\texttt{simple\_lift P$'$ P x $\to$ margin P$'$ y x $\leq$ margin P y x := $\dots$}
\end{itemize}
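To illustrate what the lemma asserts, here is a Python sketch (ours; the margin is the standard majority margin used for C2 methods) computing margins and checking the inequality on a one-voter example:

```python
def margin(P, a, b):
    """Majority margin of a over b: number of voters preferring a to b
    minus the number preferring b to a."""
    return (sum(1 for R in P.values() if (a, b) in R)
            - sum(1 for R in P.values() if (b, a) in R))

# One voter moves x up: y > x > z becomes x > y > z (a simple lift of x).
P      = {0: {('y', 'x'), ('y', 'z'), ('x', 'z')}}
P_lift = {0: {('x', 'y'), ('x', 'z'), ('y', 'z')}}
assert margin(P_lift, 'y', 'x') <= margin(P, 'y', 'x')  # the lemma's inequality
```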
Other general-purpose lemmas about margins that we formalized concern how margins change when adding a voter who ranks $x$ first---for verifying the axiom of positive involvement---or last---for verifying the axiom of negative involvement---or how clones of a candidate all bear the same margins to a given non-clone---for verifying the axiom of independence of clones. These are basic lemmas that anyone wishing to verify the relevant properties of C2 voting methods will need.
For some of the most elementary facts, Lean's automation was adequate to complete proofs without our help. For example, the fact that reversing all voters' rankings reverses all margins---used in verifying the axiom of reversal symmetry---was provable using Lean's \texttt{obviously} tactic:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{reverse\_profile} $\{$V X : \textcolor{blue}{Type}$\}$ : Prof V X $\to$ Prof V X := \\ $\lambda$ P v x y, P v y x} \\
\item[] \texttt{\textcolor{blue}{lemma} \textcolor{orange}{margin\_reverse\_eq} $\{$V X : \textcolor{blue}{Type}$\}$ [fintype V] (P : Prof V X) \\}
\texttt{(a b : X) : margin (reverse\_profile P) b a = margin P a b :=}
\texttt{\textcolor{blue}{begin}}
\quad \texttt{obviously,}
\texttt{\textcolor{blue}{end}}
\end{itemize}
Similarly, Lean's \texttt{obviously} tactic immediately proves that removing a candidate does not change the margins between remaining candidates. By contrast, proofs of lemmas like \texttt{\textcolor{orange}{margin\_lt\_margin\_of\_lift}} required more human input.
\subsection{Example: independence of clones}\label{ClonesSection}
Our most involved formalization was of the proof from \cite{HP2020b} that Split Cycle satisfies Tideman's \cite{Tideman1987} axiom of independence of clones. A set $C$ of two or more candidates is a set of \textit{clones} in a profile $\mathbf{P}$ if no voter ranks any candidates outside of $C$ in between two candidates from $C$. Given a particular candidate $c$, we say that a nonempty set $D$ of candidates (not containing $c$) is a set of \textit{clones of $c$} if $D\cup \{c\}$ is a set of clones. In Lean, we formalize this as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{clones} $\{$V X : \textcolor{blue}{Type}$\}$ (P : Prof V X) (c : X)}
\texttt{(D : set $\{$x : X // x $\neq$ c$\}$) : Prop := }
\texttt{D.nonempty $\wedge$ ($\forall$ (c$'$ $\in$ D) (x : $\{$x : X // x $\neq$ c$\}$) (i : V),}
\texttt{x $\not\in$ D $\to$ ((P i c x $\leftrightarrow$ P i c$'$ x) $\wedge$ (P i x c $\leftrightarrow$ P i x c$'$)))}
\end{itemize}
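Concretely, the predicate says that every candidate outside $D\cup\{c\}$ compares to each clone exactly as it compares to $c$, on every ballot. A brute-force Python sketch of the same check (ours, with toy data; not the Lean code):

```python
def is_clone_set(P, c, D, candidates, voters):
    """D is a set of clones of c if D is nonempty, c is not in D, and no
    voter ranks any candidate outside D ∪ {c} differently against a
    clone than against c."""
    if not D or c in D:
        return False
    for clone in D:
        for x in candidates:
            if x in D or x == c:
                continue
            for i in voters:
                if ((c, x) in P[i]) != ((clone, x) in P[i]):
                    return False
                if ((x, c) in P[i]) != ((x, clone) in P[i]):
                    return False
    return True
```

On the ballot $c > c_1 > z$ the pair $\{c, c_1\}$ are clones; on $c > z > c_1$ they are not, since $z$ is ranked between them.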
Independence of clones for VSCCs states that (i) removing a clone from a profile should not change which non-clones win and (ii) removing a clone from a profile should not change whether at least one clone is among the winners (though which clone wins is allowed to change upon removing a clone). To formalize this, we need a way of removing a candidate from a profile, accomplished as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{minus\_candidate} $\{$V X : \textcolor{blue}{Type}$\}$ (P : Prof V X) (b : X) :} \\
\texttt{ Prof V $\{$x : X // x $\neq$ b$\}$ := \textcolor{blue}{$\lambda$} v x y, P v x y}
\end{itemize}
Thus, \texttt{minus\_candidate} takes in a profile \texttt{P} for \texttt{V} and \texttt{X}, as well as a candidate \texttt{b} from \texttt{X}, and outputs the profile for \texttt{V} and \texttt{$\{$x : X // x $\neq$ b$\}$} that agrees with \texttt{P} on how every voter ranks the candidates other than \texttt{b}. Using \texttt{minus\_candidate}, we formalize condition (i) of independence of clones as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{def} \textcolor{orange}{non\_clone\_choice\_ind\_clones} $\{$V X : \textcolor{blue}{Type}$\}$ (P : Prof V X)}
\texttt{(c : X) (D : set $\{$x : X // x $\neq$ c$\}$) : VSCC $\to$ Prop := \textcolor{blue}{$\lambda$} F, }
\texttt{clones P c D $\to$ ($\forall$ a : $\{$x : X // x $\neq$ c$\}$, a $\not\in$ D $\to$ }
\texttt{(a.val $\in$ (F V X P) $\leftrightarrow$ a $\in$ (F V $\{$x : X // x $\neq$ c$\}$}
\texttt{(minus\_candidate P c))))}
\end{itemize}
Since \texttt{$\{$x : X // x $\neq$ c$\}$} is a subtype of \texttt{X}, \texttt{a} consists of an element of \texttt{X}, called \texttt{a.val}, together with a proof that \texttt{a.val $\neq$ c}. Since \texttt{F V X P} is a set of elements of \texttt{X}, we must write `\texttt{a.val $\in$ (F V X P)}' instead of `\texttt{a $\in$ (F V X P)}'. Finally, we can state that Split Cycle satisfies part (i) of independence of clones as follows:
\begin{itemize}
\item[] \texttt{\textcolor{blue}{theorem} \textcolor{orange}{non\_clone\_choice\_ind\_clones\_split\_cycle} $\{$V X : \textcolor{blue}{Type}$\}$}
\texttt{[fintype V] (P : Prof V X) (c : X) (D : set $\{$x : X // x $\neq$ c$\}$) : }
\texttt{non\_clone\_choice\_ind\_clones P c D split\_cycle := ...}
\end{itemize}
The formalization of part (ii) of independence of clones is similar. The proof that Split Cycle satisfies independence of clones involves manipulating paths in the majority graph of a profile---in particular, replacing all clones in a path by a distinguished clone and then eliminating repetitions of candidates in the resulting sequence using the \texttt{to\_path} operation.
\section{Conclusion}\label{Conclusion}
Our goal was to set up a general framework for formally verifying results in voting theory. One of the benefits of such a formalization project, beyond certifying the correctness of results, is that it forces formalizers to think about how the fundamental notions of the field ought to be formulated (see, e.g., our definitions of VSCCs and VCCRs). We expect such benefits to accrue for formalization projects in other areas of mathematical social science or even natural science.
What did we learn from the verification stage of our project? As usual in formalization, we caught some omitted assumptions (e.g., of nonemptiness) in definitions needed to prove results about the Split Cycle voting method in a draft of \cite{HP2020b}, prompting corrections. A more striking lesson of formalizing these results is how little depends on assumptions about properties of voter preferences. While it was initially assumed in \cite{HP2020b} that voter preference relations are linear orders,\footnote{This assumption was made for its simplifying consequence of rendering the majority margin between two candidates the canonical measure of the strength of majority preference between candidates. Cf.~the following paragraph in the main text.} the full strength of this assumption turned out not to be used in any proofs we formalized. In fact, most results work with no assumptions about voter preferences at all (except the default asymmetry of strict preference). The only exception was the Pareto principle, whose proof used the acyclicity of voter preferences. It would be fascinating to see exactly what properties of voter preferences are needed in formalized proofs of properties of other voting methods.
It would also be desirable to abstract away from the definition of margin in Definition \ref{MarginDef} to define voting methods and prove theorems in terms of an abstract relation $(a,b)\succ_\mathbf{P} (c,d)$ expressing that the strength of majority preference for $a$ over $b$ in profile $\mathbf{P}$ is stronger than the strength of majority preference for $c$ over $d$ in $\mathbf{P}$. One definition of $(a,b)\succ_\mathbf{P} (c,d)$ is that $Margin_\mathbf{P}(a,b)>Margin_\mathbf{P}(c,d)$. But if voter preference relations are non-linear, there are alternative, inequivalent definitions of $(a,b)\succ_\mathbf{P} (c,d)$, such as the \textit{winning votes} definition: $\big|\{i\in V(\mathbf{P})\mid a\mathbf{P}_ib\}\big| \geq \big|\{i\in V(\mathbf{P})\mid c\mathbf{P}_id\}\big|$. Schulze \cite{Schulze2011} considers other definitions besides margins and winning votes, and rather than settling for any one of them, lays down general axioms on the relation of relative strength of majority preference needed for proofs to go through. This is a natural next step in our project, which formalization facilitates: identify which of the properties of the $\succ_\mathbf{P}$ relation are actually used in our formalized proofs and then assume only those properties.
Another natural next step would be to extend the verification of voting axioms beyond voting methods, such as Split Cycle, that are based on head-to-head majority comparisons to voting methods, such as Instant Runoff, that are based on iterative elimination procedures. Instant Runoff can be defined \textit{recursively}: a candidate $x$ is an Instant Runoff winner in a profile $\mathbf{P}$ if either $x$ is the only candidate in $\mathbf{P}$ or there is some candidate $y$ who receives the fewest first-place votes in $\mathbf{P}$, and $x$ is an Instant Runoff winner in $\mathbf{P}_{-y}$.\footnote{This is the ``parallel universe'' version of Instant Runoff. An alternative version \cite[p.~7]{Taylor2008} states that the Instant Runoff winners in $\mathbf{P}$ are the Instant Runoff winners in the profile $\mathbf{P}_{-Y}$ obtained from $\mathbf{P}$ by removing the set $Y$ of \textit{all} candidates who receive the fewest first-place votes in $\mathbf{P}$, if $Y\subsetneq X(\mathbf{P})$; otherwise all candidates~win. For another example of a recursively-defined voting method, see \cite{HPSV2021}.} As Lean offers natural ways of defining functions recursively and writing proofs by induction \cite[\S~8]{Avigad2021}, we expect Lean to be well suited to verifying properties of recursively-definable voting methods such as Instant Runoff.
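The recursion is easy to transcribe. Here is a Python sketch of the parallel-universe version (ours; ballots are assumed to be complete rankings given as lists, best first):

```python
def irv_winners(ballots, candidates):
    """'Parallel universe' Instant Runoff: x wins iff x is the only
    candidate, or some y with the fewest first-place votes can be
    eliminated so that x wins the profile restricted to the rest."""
    candidates = set(candidates)
    if len(candidates) == 1:
        return set(candidates)
    firsts = {c: 0 for c in candidates}
    for b in ballots:
        # first-place vote: the highest-ranked remaining candidate
        top = next(c for c in b if c in candidates)
        firsts[top] += 1
    fewest = min(firsts.values())
    winners = set()
    for y in candidates:
        if firsts[y] == fewest:
            winners |= irv_winners(ballots, candidates - {y})
    return winners
```

Note that the union over all minimal-plurality candidates $y$ is exactly what makes this the ``parallel universe'' variant.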
With its axiomatic approach and discrete mathematical character, voting theory is especially amenable to formal verification. Moreover, given the importance of democratic decision making in society, we find it desirable to formally verify that democratic decision procedures have the desirable properties claimed for them. We have done so for one recently proposed voting method, but we would like to see this done for all methods proposed for use in democratic elections.
\subsection*{Acknowledgement}
We thank the Lean Zulip chat community, the Berkeley Lean Seminar, and Jeremy Avigad for advice about using Lean and the two anonymous referees for helpful feedback on our paper.
\bibliographystyle{splncs04}
% https://arxiv.org/abs/1704.06255
% On the gonality, treewidth, and orientable genus of a graph
\begin{abstract}
We examine connections between the gonality, treewidth, and orientable genus of a graph. In particular, we find that hyperelliptic graphs in the sense of Baker and Norine are planar. We give a notion of a bielliptic graph and show that each of these must embed into a closed orientable surface of genus one. We also find, for all $g\ge 0$, trigonal graphs of treewidth 3 and orientable genus $g$, and give analogues for graphs of higher gonality.
\end{abstract}
\section{Preliminaries on the involutions of graphs and hyperelliptic graphs}
Although the notion of a hyperelliptic graph is well-established,
we prefer to use the
equivalent definition furnished by the hyperelliptic involution \cite{BNHypell}.
\begin{dfn} A \newword{mixing involution} on a graph $G$ is an order-two
automorphism $\alpha: G \to G$ such that if $e$ is an edge between $x$ and
$y$ fixed by $\alpha$ then $\alpha(x) = y$.
\end{dfn}
Note that a graph with a mixing involution $\alpha$ and without loops cannot
have any edges $e$ fixed by $\alpha$ between $\alpha$-fixed vertices $x$ and
$y$. Note that if $G$ is a graph with loops, then the graph $G'$ obtained
by deleting those loops has the same orientable genus. We therefore make
our first reduction.
\begin{Reduction} Hereon, all graphs will be loopless.\end{Reduction}
We are now in the proper setting to consider
\newword{harmonic morphisms of graphs} \cite[\S 2.1]{BNHypell}, an example
of which is given by the quotient of a graph $G$ by a mixing involution
$\alpha$. Notably, the quotient $G/\alpha$ has vertices of the form
$\{v,\alpha(v)\}$ such that $v$ is a vertex of $G$ and edges of the form
$\{e,\alpha(e)\}$ such that the bounding vertices of $e$ are inequivalent
under $\alpha$. One defines a map $G \to G/\alpha$ by sending each vertex $v$ to
its class $\{v,\alpha(v)\}$ and each edge $e$ to its class $\{e,\alpha(e)\}$,
provided the bounding vertices of $e$ are inequivalent under $\alpha$. If $e$
is an edge of the form $e(v, \alpha(v))$, then we instead send $e$ to the
quotient vertex $\{v,\alpha(v)\}$.
In the terminology of Baker-Norine, if $G$ has at least 3 vertices, this map
is a harmonic morphism of degree 2.
All such morphisms on graphs with at least 3 vertices arise this way
\cite[Lemma 5.6]{BNHypell}. If $G$ has two vertices, then there is an obvious
mixing involution and the quotient is a point, and thus a tree,
and it is only because that map is constant that we do not say it has degree 2.
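The quotient construction can be sketched concretely in Python (our representation, for illustration: a loopless graph given by a vertex list, an edge list of vertex pairs, and the involution $\alpha$ as a dictionary; for simplicity we record at most one $\alpha$-orbit of edges per endpoint pair):

```python
def quotient(vertices, edges, alpha):
    """Quotient of a loopless graph by a mixing involution alpha.
    Vertices become orbits {v, alpha(v)}; an edge (u, v) survives as the
    orbit {e, alpha(e)} iff u and v lie in distinct orbits, and
    otherwise collapses onto the quotient vertex {v, alpha(v)}."""
    orbit = lambda v: frozenset({v, alpha[v]})
    q_vertices = {orbit(v) for v in vertices}
    q_edges = set()
    for (u, v) in edges:
        if orbit(u) != orbit(v):
            # identify e with alpha(e) by recording their common orbit
            q_edges.add(frozenset({frozenset({u, v}),
                                   frozenset({alpha[u], alpha[v]})}))
    return q_vertices, q_edges
```

For the 4-cycle on $v_1,\dots,v_4$ with $\alpha$ exchanging $v_1\leftrightarrow v_2$ and $v_3\leftrightarrow v_4$, the quotient has two vertices joined by a single edge, so it is a tree.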
\begin{dfn}We say that a connected graph $G$ admitting a
mixing involution $\iota:G\to G$ such that $G/\iota$ is a tree is
\newword{hyperelliptic} and that $\iota$ is the corresponding
\newword{hyperelliptic involution}.\end{dfn}
This definition is slightly nonstandard in that we do not require the genus
to be at least 2. One typically stipulates this because when $G$
is 2-edge-connected and has genus $\ge 2$, such an involution must be unique
\cite[Corollary 5.15]{BNHypell}. Thankfully we can reduce to the
2-edge-connected case painlessly by contracting all bridges of $G$
\cite[Corollary 5.11]{BNHypell}. There are no 2-edge-connected trees, and the
only 2-edge-connected graphs of genus one are cycles, which are planar.
\begin{Reduction} Hereon, when we refer to the graph $G$, we will mean it to be
2-edge-connected.
\end{Reduction}
Note also that a graph with all its bridges contracted has the same orientable
genus as the original graph. Of course we will allow other graphs to not be
2-edge connected. Indeed $G/\iota$ will often be a tree in what follows.
Before proceeding further, we review some examples.
\section{Hyperelliptic graphs associated to Shimura curves}
The literature on Shimura curves which is relevant to the task at hand is
simply too large and too technical to introduce here in a meaningful way. Let
it suffice to say that Ogg has determined all Shimura curves $X^D$ which are
hyperelliptic over $\overline{\mathbb{Q}}$ \cite{Ogg}. In particular, note that in each
case $D$ is the product of two primes and so there are only two primes of bad
reduction to explore. In each case, the dual graph is also hyperelliptic.
The following code verifies that all of these dual graphs are planar.
\begin{verbatim}
Dlist := [26,35,38,39,51,55,57,58,62,69,74,
82,86,87,93,94,95,111,119,134,146,159,194,206];
// Ogg's list of Shimura curves hyperelliptic over QQbar
del := function(x)
if x eq 0 then return 0;
else return 1;
end if;
end function;
ReducedDualGraph := function(p,q)
// Returns in magma format the dual graph of X^{pq} over FFpbar
// Rather, the "reduced dual graph" with parallel edges collapsed
M := BrandtModule(q,1);
d := Dimension(M);
Mx := MatrixRing(Integers(),d);
Bx := Mx!HeckeOperator(M,p);
for i in [1..d] do for j in [1..d] do
Bx[i,j] := del(Bx[i,j]);
end for; end for;
return Graph<2*Dimension(M)|BlockMatrix(2,2,[[Mx!0,Bx],[Bx,Mx!0]])>;
end function;
for D in Dlist do
G1 := ReducedDualGraph(PrimeDivisors(D)[1],PrimeDivisors(D)[2]);
G2 := ReducedDualGraph(PrimeDivisors(D)[2],PrimeDivisors(D)[1]);
D,IsPlanar(G1),IsPlanar(G2);
end for;
\end{verbatim}
Similar lists exist for, e.g., bielliptic Shimura curves, each of which has
$D\le 546$. Similar code to the above suggests that if $X^D$ has a dual graph
(of its reduction modulo $p$ for $p\mid D$) which is planar and has at least
six vertices, then for $D\ge 500$ the complete list of $(D,p)$ is
$$
\begin{array}{r|l}
p & D \\ \hline
2 & 510,546 \\
3 & 510,570,690 \\
5 & 690,910,1110 \\
7 & 798,910 \\
11 & 1122 \\
13 & 1365\\
29 & 667,2958.
\end{array}
$$
\section{Planarity and Toroidality of graphs with involutions}
Suppose now $G$ is a graph which is 2-edge-connected, loopless, and has a
hyperelliptic involution $\iota$. A given vertex can be either fixed or moved
by $\iota$. We let $F$ denote the set of vertices which are fixed by $\iota$.
By definition, all other vertices are permuted, and there must be an even
number of these. Let $A$ and $B$ be disjoint sets of permuted vertices:
we let $a_1, \ldots, a_n$ be
the elements of $A$, so $B = \{b_1 = \iota(a_1), \ldots, b_n = \iota(a_n)\}$.
The edges of $G$ must therefore fall into one of the following categories.
\begin{itemize}
\item The set $E_A$ of edges from $A$ to itself.
\item The set $E_B = \iota(E_A)$ of edges from $B$ to itself.
\item The set $E_F$ of edges from $F$ to itself.
\item The ``horizontal edges'' $H$ from some $a_i$ to $b_i$.
\item The ``cross edges'' $C$ from some $a_i$ to some $b_j$ such that $i\ne j$.
\item The ``transfer edges'' $T_A$ and $T_B$ respectively from $F$ to $A$
and $F$ to $B$. Note that $T_B = \iota(T_A)$.
\end{itemize}
We note some properties of subgraphs of $G$.
\begin{Lemma}
The involution $\iota$ maps the subgraph $(A,E_A)$ isomorphically onto $(B,E_B)$
and both are a finite disjoint union of trees.
\end{Lemma}
\begin{proof}
The isomorphism between the two is simply given by restricting $\iota$ to
$(A,E_A)$. We must therefore have an isomorphic copy of $(A,E_A)$ in the
quotient $G/\iota$, which is a finite connected tree. Any subgraph of a tree
must be a disjoint union of trees and so the result follows.
\end{proof}
\begin{Lemma} The connected components of the subgraph $(F,E_F)$ are either
single vertices or chains of vertices $f_1, \ldots, f_r$ such that between
$f_i$ and $f_{i+1}$ there are exactly two edges and between $f_i$ and $f_j$
there are no edges if $|i-j|>1$.
\end{Lemma}
\begin{proof} Let $e\in E_F$ and let $f,f'$ be the bounding vertices of $e$.
Since $\iota$ fixes $f,f'$ and $\iota$ is mixing, we must have $\iota(e) \ne e$.
Therefore there are at least 2 edges between $f$ and $f'$. If we
suppose to the contrary that there were a
third edge $e'$, then $\iota(e')$ would be distinct from $e'$, again by the
mixing property. But also, since $e'\ne e$ and $e' \ne \iota(e)$, we must
have $\iota(e') \ne \iota(e)$ and $\iota(e') \ne e$. The quotient graph $G/\iota$
would then contain a cycle formed by the images of $e$ and $e'$, contradicting
the fact that $G/\iota$ is a tree.
Therefore between any two vertices $f,f'$ in our subgraph $(F,E_F)$ there are
either zero or two edges. If $f,f',f''$ each have two edges between them, then
in the quotient, we would have a cycle $e(f,f')e(f',f'')e(f'',f)$. The result
follows.
\end{proof}
We see therefore that $(F,E_F)$ is planar; although a given connected
component may contain a cycle, for the purpose of orientable genus we
may think of each component as a point. We can therefore make the following
reduction by replacing $F$ with the set of connected components of $F$ and
$E_F$ by the empty set.
\[
\xymatrix{
&&&&&&&&&\\
&\bullet_{f_1}\ar@/^/@{-}[r]\ar@/_/@{-}[r]\ar@{-}[lu]\ar@{-}[ld] &
\bullet_{f_2}\ar@/^/@{-}[r]\ar@/_/@{-}[r]\ar@{-}[u]\ar@{-}[d] &
\cdots &
\bullet_{f_r}\ar@{-}[u]\ar@{-}[d]\ar@{-}[ru]\ar@{-}[rd]&&
\leadsto &&
\bullet \ar@{-}[u]\ar@{-}[ur]\ar@{-}[ul]\ar@{-}[l]\ar@{-}[r]\ar@{-}[ld]\ar@{-}[dr]\ar@{-}[d]&\\
&&&&&&&&&\\
}
\]
\begin{Reduction} We will assume $E_F$ is empty.\end{Reduction}
In the same way, we can replace $(A,E_A)$ and $(B,E_B)$ by the connected
components of each.
\begin{Reduction} We will assume $E_A$ and $E_B$ are empty.\end{Reduction}
Note that if we were to refine $G$ by adding a point in the middle of each
horizontal edge we would obtain a new graph. Embedding this refined graph into
an orientable surface of genus $g$ induces an embedding of $G$ into the same
surface. We therefore refine $G$: for each horizontal edge we add a new vertex
to $F$ and a new edge to each of $T_A$ and $T_B$, eliminating $H$.
\begin{Reduction} We assume that $G$ has no horizontal edges.\end{Reduction}
We are now ready to prove our main theorem on hyperelliptic graphs.
If we can embed any connected graph $G$ into the plane,
then by adding a point at infinity, we give an embedding of this graph into
the 2-sphere $S^2$, and in fact that graph defines a CW-decomposition of $S^2$.
For instance, if $G$ has genus $g$ then this decomposition has $V(G)$ vertices,
$E(G)$ edges, and $g+1$ faces. By spherical inversion we can simply assume that
any one pair
$\{a_j,b_j\}$ lies on the same face as $\infty$, or that they lie
on the ``outside face.'' We will use this freely in what follows.
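The face count here is Euler's formula $V - E + F = 2$ for $S^2$, rewritten using the cycle rank $g = E - V + 1$ of a connected graph. A quick numeric check (our example: the wheel graph $W_4$, a hub joined to a 4-cycle):

```python
def faces_on_sphere(num_vertices, num_edges):
    """Number of faces of the CW-decomposition of S^2 induced by a
    connected embedded graph, by Euler's formula V - E + F = 2."""
    return 2 - num_vertices + num_edges

# Wheel W_4: 5 vertices, 8 edges, cycle rank g = E - V + 1 = 4,
# so the decomposition has g + 1 = 5 faces (4 triangles plus the outside).
V, E = 5, 8
assert faces_on_sphere(V, E) == (E - V + 1) + 1
```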
\begin{Theorem}\label{MainThm}
All hyperelliptic graphs are planar. Moreover there is an
embedding $\rho_G$ into $\mathbb{R}^2$ under which any pair $\{a_j,b_j\}$ exchanged
by the hyperelliptic involution $\iota$ lie on a common face.
\end{Theorem}
\begin{proof} We induct on the size of $\# A = \# B$.
The following will be our inductive assumption.
\begin{itemize}
\item {\bf Ind}(n): All connected hyperelliptic graphs with
$\#A = \#B \le n$ admit a piecewise smooth (considering $G$
e.g., as a simplicial complex)
embedding $\rho_G: G \to \mathbb{R}^2$ such that
\begin{enumerate}
\item If $\rho(v) = (x,y)$ then $\rho(\iota(v)) = (-x,y)$ and
\item If $\{a_i,b_i\}$ are exchanged by $\iota$ then there is a face $F$
of the CW decomposition of $S^2$ induced by $\rho_G$ such that
$a_i,b_i\in\partial F$.
\end{enumerate}
\end{itemize}
Clearly ${\bf Ind}(0)$ holds as we have shown that a hyperelliptic
graph which fixes each vertex is planar. Almost-as-clearly, ${\bf Ind}(1)$
holds because there are no crossing edges, and so all edges are transfer edges
by our reductive step. Since $G$ is connected, between each fixed point $f$
there is at least one transfer edge between $f$ and $a_1$ as well as $f$ and
$b_1$. There is also at most one such edge, because if there were two edges
between $f$ and $a_1$ then there would be a cycle in the quotient. It follows
that after our reductions, $G$ embeds into the plane as the banana graph
with midpoints. Of course, both $a_1$ and $b_1$ lie on the outside face.
Now suppose that $G$ has $\#A = n$ and ${\bf Ind}(n-1)$ is satisfied. We let
\begin{itemize}
\item $A(n-1) = \{ a_1, \ldots, a_{n-1}\}$ and $B(n-1) = \iota(A(n-1))$
\item $T_A(n-1) = \{$ edges from $F$ to $A(n-1)\}$ and
$T_B(n-1) = \iota(T_A(n-1))$.
\item $C(n-1) = \{$ cross edges from $A(n-1)$ to $B(n-1)\}$.
\end{itemize}
We therefore let $G(n-1)$ be the graph whose vertices are
$A(n-1) \cup B(n-1) \cup F$
and whose edges are $T_A(n-1) \cup T_B(n-1) \cup C(n-1)$. As
$G(n-1)/\iota$ is a
subgraph of $G/\iota$, it is a finite disjoint union of trees. Let
$\Gamma_1,\ldots, \Gamma_m$ be the horizontal connected components of
$G(n-1)$, i.e. $\Gamma_i$ is either connected or the union of two
vertices exchanged by $\iota$. All of the
images of the $\Gamma_i$ in the quotient are connected trees. We note that
the connected
$\Gamma_i$ are hyperelliptic and so satisfy the conclusions
of ${\bf Ind}(n-1)$.
Since $G$ is connected, for each $i$ there is a pair of transfer edges or a
pair of cross edges from $\{a_{n},b_{n}\}$ to $\Gamma_i$. In fact, there can be
either a cross edge $c_i$ from $a_n$ to some $b_k$ in $\Gamma_i$ or a transfer
edge $t_i$ from $a_n$ to a fixed point $f$ in $\Gamma_i$ and not both.
There cannot be more than
one such edge, or else there would be a cycle in the quotient.
We therefore create a function $\psi_G:\{1,\ldots, m\} \to \{0,1\}$ where
$\psi_G(i)$ is $0$ if there is a transfer edge from $a_{n}$ to $\Gamma_i$ and $1$
in the case of a cross edge. We roughly create $\rho_G$ as follows:
${\bf Ind}(n-1)$ gives us an embedding of each $\Gamma_i$ into $\mathbb{R}^2$, but
moreover we can scale down into $[-1,1]^2$ and still be symmetric under $\iota$.
We stack each copy of $[-1,1]^2$ vertically in $\mathbb{R}^2$, put $a_n$ to the left
of this column, $b_n$ to the right, and either directly attach the transfer
edge if $\psi_G(i)=0$ or possibly
first apply $\iota$ to $\Gamma_i$ before attaching the
cross edge if $\psi_G(i) =1$. Hidden in this is that if $\psi_G(i)=0$ we need
to make sure to perform spherical inversion to make sure that the fixed point
$f$ is on the outside face, and if $\psi_G(i)=1$ we need to make sure that both
$a_k$ and $b_k$ are on the outside face. This latter part explains the second
condition of
${\bf Ind}(n)$ and the remainder of the proof is simply verifying the
conditions of ${\bf Ind}(n)$ and making the construction explicit.
As noted, if $\psi_G(i) = 0$ then we may assume that our $\rho_{\Gamma_i}$
has $f$ on the outside
face. By scaling and shifting up or down we may assume that $\rho_{\Gamma_i}$
has image in the interior of $[-1,1]^2$ which is symmetric about the $y$-axis
and $\rho_{\Gamma_i}(f) = (0,0)$. We may therefore draw
a symmetric pair of edges between $(0,0)$ and $(\pm 1, 0)$ which do not
intersect $\Gamma_i$. Note that these two new edges split the outside face of
$[-1,1]^2$ into two, but that $\Gamma_i$ lies entirely on one side of that
divide, so adding these edges does not change whether ${\bf Ind}(n)$
is satisfied. If $\psi_G$ is identically zero,
we embed a refinement of $G$ into
$\mathbb{R}^2$ as follows: send $a_{n}, b_{n}$ to $(\pm 1, 0)$, use $\rho_{\Gamma_i}$
to send $\Gamma_i$ to $\{(x,y): -1\le x\le 1, i-1\le y\le i+1\}$. We can
symmetrically draw edges between $(\pm 1,0)$ and $(\pm 1, i)$ which are pairwise
disjoint and this produces an embedding $\rho_G$ which is symmetric under
$\iota$ and preserves the face condition of our inductive
assumption for $G$.
Now let's assume there are some $i$ such that $\psi_G(i) =1$. We assume that
$\Gamma_i$ is connected, else it is the disjoint union of two vertices, and
adding some cross edges does not change the face condition of ${\bf Ind}(n)$.
Let $\rho_i = \rho_{\Gamma_i}$ be an embedding so that $\rho_i(a_k),\rho_i(b_k)$
lie on the outside face with respectively positive and negative $x$-values,
and let $d_i$, $d_i'$ respectively be paths $(-1,0)$ to
$\rho_i(b_k)$ and $(1,0)$ to $\rho_i(a_k)$ such that $d_i' = \iota(d_i)$. Could
it be that $d_i, d_i'$ put $a_j$ and $b_j$ on different faces?
\begin{itemize}
\item By
${\bf Ind}(n-1)$, $\rho_i(a_j)$ and $\rho_i(b_j)$ share a face,
and we need only worry
if it is the outside face.
\item If $a_j = a_k$
then $a_j$ still lies on the same face as $b_j$ even after adding $d_i$ and
$d_i'$.
\end{itemize}
So we assume $a_j$ and $b_j$ lie on the outside face, let $\gamma_j^+$
and $\gamma_j^-$ be smooth symmetric paths from $\rho_i(a_j)$ to $\rho_i(b_j)$
which lie above and below $\rho_i(\Gamma_i)$, meeting only at $\rho_i(a_j)$ and
$\rho_i(b_j)$. As such, $\gamma_j^+\cup \gamma_j^-$ forms a simple
Jordan curve, which
has an inside and outside defined by the mod 2 intersection number
\cite[\S 3.3]{GPTop}. Since $a_j\ne a_k$, the path $d_i$ has an odd
number of transverse
intersection points with $\gamma_j^+\cup \gamma_j^-$
up to multiplicity. If there is just
one, we are done, as it has to lie on precisely one of $\gamma_j^+$ and
$\gamma_j^-$. The non-intersecting path lies within the face we desire.
If there are three or more, we may pick an $\varepsilon>0$ less
than the distance from $\rho_i(\Gamma_i)$ to any of the points of
$d_i\cap(\gamma_j^+\cup\gamma_j^-)$. There is thus a smooth path between
$\rho_i(b_j)$ and $(-1,0)$ which agrees with $d_i$ at distance less than
$\varepsilon$ from $\rho_i(\Gamma_i)$, which is homotopic to $d_i$, and which
has precisely one point of intersection with $\gamma_j^+ \cup \gamma_j^-$. By
replacing $d_i$ with this path and $d_i'$ by the image under $\iota$ we have
reduced to the previous case. We conclude that
${\bf Ind}(n)$ holds and the proof
of our Theorem is complete.\end{proof}
For good measure, we give a second proof of the planarity of hyperelliptic
graphs.
\begin{proof}
By work of de Bruyn and Gijswijt \cite{Treewidth},
the stable gonality of any graph $G$ is bounded below by its treewidth,
and $G$ is hyperelliptic if and only if its stable gonality is 2.
Since $G$ is hyperelliptic, it has treewidth at most
2, and is therefore a subgraph of a series-parallel graph \cite{SP},
and hence planar.\end{proof}
There is also a third proof of this result due to Spencer Backman which
characterizes the ear decomposition of a hyperelliptic graph and which
predates the work of de Bruyn and Gijswijt but was not written up.
While it may not seem so, these proofs work out to be very similar. Since
$G$ is hyperelliptic, $G/\iota$ is a tree. We may think of the inductive proof
as rooting that tree and thus
producing an embedding into a series-parallel graph. Note that our embedding
$\rho_G$ gives $G/\iota$ as $\rho_G(G)\cap\{(x,y): x\le 0\}$, so the source and
sink vertices are respectively $a_n$ and $b_n$.
The advantage of
working so explicitly is that some natural improvements present themselves.
\begin{Lemma} On any hyperelliptic graph $G$ with two pairs of vertices
$a_i\ne b_i$ and $a_j\ne b_j$ exchanged by the hyperelliptic involution, we can
find an embedding $\rho_{i,j}$ of $G$ into $\mathbb{R}^2$ such that $a_i,b_i,a_j,b_j$
all lie on the boundary of a face. Moreover, the same is true when $a_i$ and
$b_i$ are replaced by a hyperelliptic fixed vertex.
\end{Lemma}
\begin{proof}
We proceed by induction in the same way as in the proof of
Theorem \ref{MainThm}. In fact,
if $a_i = a_j$ then our Lemma holds by appealing to Theorem \ref{MainThm}.
Therefore we
suppose that $a_i \ne a_j$ and thus $b_i\ne b_j$. We make all necessary
reductions to retain the notation
of $V(G) = A \cup B \cup F$ and $E(G) = C \cup T$. We know therefore that
$\#A = \# B \ge 2$. If $\#A = 2$, then $G$ is outerplanar.
Otherwise, we reorder $A$ and
$B$ so that $j = n$ and let $\Gamma_1,\ldots,\Gamma_m$ be the horizontal
connected components of $G(n-1)$ as in the proof of the Theorem.
Let $r$ be
such that $a_i\in \Gamma_r$ and let $k$ be such that there is a cross
edge from $a_n$ to $b_k$.
We apply our inductive hypothesis to $\Gamma_r$ to find an embedding of
$\Gamma_r$ into $\mathbb{R}^2$ such that $a_i$ and $a_k$ share a face.
We use spherical
inversion to move that face to the outside, and thereby give an embedding
of $G$ into $\mathbb{R}^2$ such that $a_i$ and $a_n = a_j$ share a face.
If $a_i$ and $b_i$ are replaced by a fixed vertex $f$, then we let $r$ be such
that $f\in\Gamma_r$ and we use spherical inversion to find a planar embedding
of $\Gamma_r$ such that $f$ lies on the outside face. The result follows in the
same way.
\end{proof}
With the above in mind, we recall that a bielliptic graph is one which admits a
mixing involution $\alpha$ such that $G/\alpha$ has genus one. We therefore
have the following.
\begin{Theorem}Bielliptic graphs are toroidal.
\end{Theorem}
\begin{proof}
Without loss of generality, we assume $G$ is 2-edge-connected, and that the
genus of $G$ is at least 3, since otherwise $G$ is already planar.
Since $G/\alpha$ has genus one, there is an edge $\bar e$ of $G/\alpha$ such
that $G/\alpha - \bar e$ is a tree. Let $e,e'=\alpha(e)$
be the preimages of $\bar e$
in $G$ and let $G_0 = G - \{e,e'\}$ with $\alpha_0$ the induced involution,
whose quotient is $G/\alpha - \bar e$.
\[
\begin{array}{ccc}
G_0 & \hookrightarrow & G \\
\downarrow & & \downarrow \\
G_0/\alpha_0 & \hookrightarrow & G/\alpha
\end{array}
\]
We show first that $G_0$ is connected: if not,
let $a,b$ be the endpoints of $e$ and $G_a, G_b$ the connected components of
each in $G_0$. In which of these can we find $\alpha_0(a)$ and $\alpha_0(b)$?
If there is a path $\gamma_a$
between $a$ and $\alpha_0(a)$ then $G_0$ is
connected, as there is a unique simple path in $G_0/\alpha_0$ between
$\alpha_0^\sim(v_1)$ and $\alpha_0^\sim(v_2)$ for any
$v_1\in G_a$ and $v_2\in G_b$. This path lifts to a path $\gamma$ between either
$v_1$ and $v_2$ (in which case $G_0$ is connected)
or $v_1$ and $\alpha_0(v_2)$ (in
which case $\gamma_a\alpha_0(\gamma)$ is a path between $v_1$ and $v_2$). Thus
there is no such path $\gamma_a$ when $G_0$ is disconnected. In other words,
when $G_0$ is disconnected, $\alpha_0(a)\not\in G_a$. Since $\alpha_0$ is
an isomorphism, it must exchange $G_a$ with $G_b$ so that
$\alpha_0: G_a\stackrel{\sim}{\to} G_b$.
But then the quotient is a tree, so $G_a$ and $G_b$ are trees.
This however contradicts the statement that the genus of $G$ is at least 3.
It follows then that $G_0$ is hyperelliptic, and therefore planar. Moreover the
embedding is planar in such a way as to recognize $\alpha_0$ as reflection
about the $y$-axis.
Let $a,b$ be the endpoints of $e$ and $a',b'$ be the endpoints of $e'$,
so moreover
we can find a planar embedding of $G_0$ such that $a,a',b,b'$ all lie on the
outside face. The boundary of this outside face is a Jordan curve containing
$a,a',b,b'$ which is broken up into four paths between the four of these points.
If one of these is a path $\delta$ between $a$ and $b$ then another must be a path
$\delta'$ between $a'$ and $b'$. In this case, $G$ itself is planar. If not,
there are paths from $a$ to $a'$ and $b'$ in the boundary,
and we can therefore flip $\rho_{G_0}$
along the $x$ and $y$ axes so that $a$ lands in $\{(x,y): x>0, y>0\}$ and thus
$a'$ lands in $\{(x,y): x<0, y>0\}$. By scaling,
we may assume the image of $\rho_{G_0}$ lies in $[-1,1]^2$. We may then draw
edges between $\rho_{G_0}(a)$ and $(0,1)$, $\rho_{G_0}(a')$ and $(-1,0)$,
$\rho_{G_0}(b)$ and $(0,-1)$, as well as $\rho_{G_0}(b')$ and $(1,0)$, none of
which intersect each other or any other point of $\rho_{G_0}(G_0)$.
These edges induce an embedding of $G$ into $\mathbb{R}^2/2\mathbb{Z}^2$ by identifying
opposite edges of $[-1,1]^2$. We therefore have shown that $G$ is toroidal
in all cases.\end{proof}
One could imagine extending this to the case where $G/\alpha$ has genus $g$,
but that would depend on finding a sequence of points interchanged by $\alpha$
which sequentially lie on common faces. This fails however, as we see in the
following example where $G/\alpha$ has genus $2$.
\[
\xymatrix{
& & \bullet \ar@{-}[r] \ar@{-}[rdddddd]& \bullet & & \\
& \bullet \ar@{-}[ru]\ar@{-}[rd] & & & \bullet\ar@{-}[lu]\ar@{-}[ld] & \\
& & \bullet\ar@{-}[r]\ar@{-}[rdd] & \bullet & & \\
\bullet \ar@{-}[ruu]\ar@{-}[rdd]\ar@{-}[rrrrr] & & & & & \bullet\ar@{-}[luu]\ar@{-}[ldd] \\
& & \bullet\ar@{-}[r]\ar@{-}[ruu] & \bullet & & \\
& \bullet\ar@{-}[ru]\ar@{-}[rd] & & & \bullet\ar@{-}[lu]\ar@{-}[ld] & \\
& & \bullet\ar@{-}[r]\ar@{-}[ruuuuuu] & \bullet & &
}
\]
Nonetheless we note that this graph does indeed admit an embedding into a genus
two surface! In particular, we get slightly lucky in that the above method
shows how to embed this graph into the connected sum of two tori, albeit in a
way that does not obviously generalize.
Indeed, we do not know of an example of a graph with a mixing
involution $\alpha$ to a genus $g$ graph which does not already embed into a
genus $g$ orientable surface.
Moreover, this construction is not always optimal, because
different lifts of an edge need not cross: $K_5$ admits an essentially unique
involution whose quotient has genus 2, but is well-known to be toroidal.
We conclude by noting that although our criterion for being toroidal has
\emph{something} to do with gonality, there is more that goes into the
orientable genus than the gonality.
\begin{Lemma} There are trigonal graphs of all possible orientable genera.
Moreover, there are $d$-gonal graphs which are either planar or
of all possible orientable genera
$\ge (\frac{d}{2} -1)^2$ whenever $d\not \equiv 2\bmod 4$.
\end{Lemma}
\begin{proof}
First we note that there are $d$-gonal planar graphs for all $d$: simply take
$n\ge d$ and note that the $d\times n$ grid graph has gonality $d$
\cite[Example 3.3]{Treewidth}.
Then note that for $3\le d\le n$, the complete bipartite graph $K_{d,n}$ has orientable
genus $\left\lceil \frac{(d-2)(n-2)}{4}\right\rceil$. If $d\not\equiv 2\bmod 4$,
then this can be any integral value at least
$(\frac{d}{2} -1)^2$. On the one hand, there is a clear degree $d$ harmonic
map from $K_{d,n}$ to a tree given by simply identifying the vertices in the
size $d$ subset. Therefore the gonality of $K_{d,n}$ is at most $d$. On the
other hand, the treewidth of $K_{d,n}$ is $d$, so this is a lower bound for
gonality \cite{Treewidth}, and we find that $K_{d,n}$ is $d$-gonal.
\end{proof}
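The genus formula for $K_{d,n}$ quoted in the proof is easy to experiment with. The following sketch (function names are ours) checks that in the trigonal case $d=3$ every genus $g\ge 1$ is realized by some $K_{3,n}$:

```python
import math

def genus_K(d, n):
    # Orientable genus of the complete bipartite graph K_{d,n} (Ringel's formula)
    return math.ceil((d - 2) * (n - 2) / 4)

# Trigonal case d = 3: as n grows, K_{3,n} realizes every genus g >= 1,
# since ceil((n-2)/4) increases in steps of at most one.
genera = {genus_K(3, n) for n in range(3, 110)}
assert set(range(1, 26)) <= genera
assert genus_K(3, 3) == 1   # K_{3,3} is toroidal
```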
The use of the complete bipartite graph above was suggested by Spencer Backman
and we thank him for the suggestion.
We conclude by noting that in the above examples, gonality, stable gonality, and
treewidth all coincide. It is conjectured for the hypercube graph $Q_n$ that
there is a gap between gonality and treewidth which increases along with $n$
\cite[\S 3]{Treewidth}.
In that case, the orientable genus is large and the conjectural least degree
map to a tree is given by successive quotients by involutions $Q_n \to Q_{n-1}$.
It would be interesting to find other infinite families of graphs with gaps
between gonality and treewidth and see if those also have large orientable
genus. It also still seems reasonable to wonder about
a connection between the orientable genus of a graph and the spectrum of its
Laplacian. After all, the spectrum of the $d\times n$ grid graph is
very limited \cite{Eigen}: the eigenvalues can only be
$$\lambda_{j,k} = 4\sin^2\left(\frac{j\pi}{2n}\right)
+ 4\sin^2\left(\frac{k\pi}{2d}\right).$$
In particular, the spectral lower bound on gonality \cite[Theorem C]{CKK} for
this example tends to 0 as $n\to\infty$.
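The quoted spectrum can be checked directly: the $d\times n$ grid graph is the Cartesian product of two paths, so its Laplacian is the Kronecker sum of two path-graph Laplacians, whose eigenvalues are $4\sin^2(j\pi/2n)$ and $4\sin^2(k\pi/2d)$. A short numerical verification (assuming \texttt{numpy} is available; this is an illustration, not part of the paper):

```python
import numpy as np

def path_laplacian(k):
    # Graph Laplacian of the path on k vertices
    A = np.diag(np.ones(k - 1), 1) + np.diag(np.ones(k - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def grid_laplacian(d, n):
    # The d x n grid graph is P_d x P_n (Cartesian product), so its
    # Laplacian is the Kronecker sum of the two path Laplacians.
    return np.kron(np.eye(n), path_laplacian(d)) + np.kron(path_laplacian(n), np.eye(d))

def formula_spectrum(d, n):
    j = np.arange(n)[:, None]
    k = np.arange(d)[None, :]
    lam = 4 * np.sin(j * np.pi / (2 * n)) ** 2 + 4 * np.sin(k * np.pi / (2 * d)) ** 2
    return np.sort(lam.ravel())

d, n = 3, 7
numeric = np.sort(np.linalg.eigvalsh(grid_laplacian(d, n)))
assert np.allclose(numeric, formula_spectrum(d, n))
```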
| { "arxiv_id": "1704.06255", "timestamp": "2017-04-21", "url": "https://arxiv.org/abs/1704.06255", "title": "On the gonality, treewidth, and orientable genus of a graph", "subjects": "Number Theory (math.NT); Algebraic Geometry (math.AG); Combinatorics (math.CO)" } |
https://arxiv.org/abs/1008.4768
Title: The absence of phase transition for the classical XY-model on Sierpinski pyramid with fractal dimension D=2
Abstract: For the spin models with continuous symmetry on regular lattices and finite range of interactions the lower critical dimension is d=2. In two dimensions the classical XY-model displays Berezinskii-Kosterlitz-Thouless transition associated with unbinding of topological defects (vortices and antivortices). We perform a Monte Carlo study of the classical XY-model on Sierpinski pyramids whose fractal dimension is D=log4/log2=2 and the average coordination number per site is about 7. The specific heat does not depend on the system size which indicates the absence of long range order. From the dependence of the helicity modulus on the cluster size and on boundary conditions we draw a conclusion that in the thermodynamic limit there is no Berezinskii-Kosterlitz-Thouless transition at any finite temperature. This conclusion is also supported by our results for linear magnetic susceptibility. The lack of finite temperature phase transition is presumably caused by the finite order of ramification of Sierpinski pyramid.

\section{Introduction}
One of the powerful predictions of the renormalization group theory of critical phenomena is universality
according to which the critical behavior of a system is determined by: (1) symmetry group of the Hamiltonian,
(2) spatial dimensionality $d$ and (3) whether or not the interactions are short-ranged \cite{gold92}.
The possibility of phase transitions on systems with nonintegral dimensionality $d$ was first considered by
Dhar \cite{dhar} for the classical XY-model and for the Fortuin-Kasteleyn cluster model on the truncated
tetrahedron lattice with the effective dimensionality 2$\log$3/$\log$5=1.365. No phase transition at any
finite temperature was obtained. Subsequently in a series of papers Gefen {\em et al.}
\cite{gab80,gab83,gasb84,gab84} examined the critical properties of the Ising model (discrete $Z_{2}$ symmetry)
on fractal structures, which are scale invariant but not translationally invariant, in order to
elucidate the relative importance of multiple topological factors affecting critical phenomena.
They found that the Ising systems with given fractal dimension $D$ have transition temperature
$T_{c}=$0 if the minimum order of ramification $R_{min}$, which is the minimum number of bonds that needs to be
cut in order to isolate an {\em arbitrarily large} bounded cluster of sites, is finite. In the case of
fractals with $R_{min}=\infty$ they presented arguments that $T_{c}>$0 for the Ising model.
Monceau and Hsiao \cite{mh04} further studied the Ising model on fractals with $R_{min}=\infty$ using
the Monte Carlo method and found weak universality in that the critical exponents depended also on
topological features of fractal structures.
For the models with continuous $O(n)$ symmetry, $n\geq$2, on fractal structures Gefen {\em et al.}
\cite{gasb84,gab84} used a correspondence between the low-temperature properties of such models
and pure resistor network connecting the sites of the fractal to argue that there is no long-range order at
any finite temperature if the fractal dimension $D<$2 even in the case of infinite order of ramification.
Subsequently Vallat {\em et al.} \cite{vkb91} used the harmonic approximation to the XY-model ($O(2)$
symmetry) on two-dimensional Sierpi\' nski gasket ($R_{min}$=3, $D=\log$3/$\log$2=1.585) to show that
the energy of a vortex excitation is always finite and hence there is no
Berezinskii-Kosterlitz-Thouless (BKT) transition \cite{gold92} at any finite temperature as
free vortices are always present. These conclusions were confirmed in a recent Monte Carlo study of the
full XY-model on two-dimensional Sierpi\' nski gasket \cite{mb10}.
Here we present a Monte Carlo study of the XY-model on three-dimensional Sierpi\' nski pyramids
with $R_{min}$=4 and fractal dimension $D=\log$4/$\log$2=2. The model is described by the Hamiltonian
\begin{equation}
\label{ham}
H=-J\sum_{\langle i,j\rangle}\cos(\theta_{i}-\theta_{j})\>,
\end{equation}
where 0$\leq \theta_{i}<$2$\pi$ is the angle/phase variable on site $i$, $\langle i,j\rangle$
denotes the nearest neighbors and $J>$0 is the coupling constant. In the case of translationally
invariant system in two dimensions this Hamiltonian gives rise to BKT transition and we investigate if the
same holds in the case of a fractal structure with fractal dimension $D=$2.
The rest of the paper is organized as follows. In Section 2 we present our algorithm for generating
Sierpi\' nski pyramids and outline the Monte Carlo procedure for
calculating the thermal averages. Section 3 contains our results and discussion and in Section 4 we
give a summary.
\section{Numerical procedure}
The procedure which we used to generate three-dimensional Sierpi\' nski pyramids (SP) is illustrated in
Figure 1, which shows the transition from the zeroth-order SP (the tetrahedron of unit side) to the
first order pyramid via translations by three nonorthogonal basis vectors ${\bf e}_{1}$=(1,0,0),
${\bf e}_{2}$=(0,1,0) and ${\bf e}_{3}$=(0,0,1). The pyramid of order $m$+1 is obtained from the pyramid of
order $m$ via translations by vectors 2$^{m}{\bf e}_{1}$, 2$^{m}{\bf e}_{2}$ and 2$^{m}{\bf e}_{3}$ ($m$=0,1,
$\dots$). It is clear that the number of vertices $N_{m}$ of the $m$th order Sierpi\' nski pyramid can be
obtained from the recursion relation $N_{m}$=4$N_{m-1}$-6 with $N_{0}$=4. Thus, in generating the pyramid
of order $m$+1 from the pyramid of order $m$ not every point of the $m$th order pyramid gets translated by
{\em all} three translation vectors 2$^{m}{\bf e}_{1}$, 2$^{m}{\bf e}_{2}$ and 2$^{m}{\bf e}_{3}$.
The top of the $m$th order pyramid, (0,0,0), is never translated. The remaining $N_{m}$-1 points are then
all translated by 2$^{m}{\bf e}_{1}$. Next, the same points except for (2$^{m}$,0,0) are translated by
2$^{m}{\bf e}_{2}$ and finally, all points except for (2$^{m}$,0,0) and (0,2$^{m}$,0) are translated by
2$^{m}{\bf e}_{3}$. For pyramids of order $m\leq$9 we found it most efficient to represent a vertex $(i,j,k)$
by the number $P=i$10$^{6}$+$j$10$^{3}$+$k$10$^{0}$ and the three translation vectors 2$^{m}{\bf e}_{1}$,
2$^{m}{\bf e}_{2}$ and 2$^{m}{\bf e}_{3}$ by the numbers $T_{1}(m)=$2$^{m}\times$10$^{6}$,
$T_{2}(m)=$2$^{m}\times$10$^{3}$ and $T_{3}(m)=$2$^{m}\times$10$^{0}$, respectively. Then the result of
translating a point represented by number P by a vector represented by $T_{i}(m)$ is described by
the number $P+T_{i}(m)$. From a given number $P$ representing a vertex it is easy to get its coordinates
$(i,j,k)$ in the basis $\{{\bf e}_{1},{\bf e}_{2},{\bf e}_{3}\}$: $i=[P$/10$^{6}]$, where $[\cdots]$
denotes the integer part, $j=[(P-i\times$10$^{6})/$10$^{3}]$ and $k=P-i\times$10$^{6}-j\times$10$^{3}$.
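The integer packing just described can be sketched as follows (a direct transcription of the scheme above; it is valid for $m\leq 9$ because all coordinates then stay below $10^{3}$):

```python
def encode(i, j, k):
    # Pack vertex (i, j, k) into P = i*10^6 + j*10^3 + k (valid while i, j, k < 1000)
    return i * 10**6 + j * 10**3 + k

def decode(P):
    # Recover (i, j, k) from P, as in the text
    i = P // 10**6
    j = (P - i * 10**6) // 10**3
    k = P - i * 10**6 - j * 10**3
    return (i, j, k)

m = 5
T1, T2, T3 = 2**m * 10**6, 2**m * 10**3, 2**m  # packed translation vectors
# translating a vertex by 2^m e_2 is just the integer addition of T2
assert decode(encode(3, 17, 250) + T2) == (3, 17 + 2**m, 250)
```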
In the Metropolis Monte Carlo scheme of calculating the statistical averages for a model with only the
nearest neighbor interactions it is necessary to provide the list of nearest neighbors for each site/vertex.
We take that all sites that are at a unit distance (the size of the edge of the elementary tetrahedron)
from a given vertex are its nearest neighbors. Thus the number of nearest neighbors of site $(i,j,k)$ varies
with the order $m$ of Sierpi\' nski pyramid. For example, the vertex (1,1,0) has six nearest neighbors in
the first order pyramid (Figure 1) but the same site has eight nearest neighbors in all higher order pyramids,
the additional two neighbors being vertices (2,1,0) and (1,2,0). For the Sierpi\' nski pyramids of orders
$m=$4,5,6 we found the average number of neighbors per site to be 6.923, 6.981, 6.995 with standard
deviations 0.903, 0.876, 0.869, respectively. Thus, the average coordination number for three-dimensional
Sierpi\' nski pyramid is greater than the coordination number for three-dimensional simple cubic lattice.
This fact should be kept in mind when we discuss our numerical results in the next section. In constructing
the list of nearest neighbors for the sites in a pyramid we found it convenient to group all sites
according to the values of the sum $i+j+k$ of their coordinates (i,j,k). The sites with the same $i+j+k$
belong to the same plane parallel to the basal plane of the pyramid defined by points (2$^{m}$,0,0),
(0,2$^{m}$,0), (0,0,2$^{m}$), where $m$ is the order of the pyramid (see Figure 1).
The nearest neighbors of a given site are located in the plane to which it belongs and in the
neighboring planes. Throughout
this work we employed two types of boundary conditions: closed, where the four
corners of an $m$th order pyramid are coupled to each other, and open, where
the four corners are uncoupled from each other.
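As an illustration, the construction and the neighbor rule can be sketched as follows (our own sketch, not the authors' code). In the lattice coordinates $(i,j,k)$ the unit basis vectors satisfy ${\bf e}_{a}\cdot{\bf e}_{b}=1/2$ for $a\neq b$, so the displacements of unit length are exactly $\pm{\bf e}_{a}$ and $\pm({\bf e}_{a}-{\bf e}_{b})$:

```python
def pyramid_vertices(order):
    # Vertices of the Sierpinski pyramid after `order` doubling steps,
    # starting from the unit tetrahedron; shared corners merge in the set union.
    S = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)}
    for m in range(order):
        s = 2 ** m
        base = set(S)
        for dx, dy, dz in ((s, 0, 0), (0, s, 0), (0, 0, s)):
            S |= {(x + dx, y + dy, z + dz) for (x, y, z) in base}
    return S

# With e_a . e_b = 1/2 the twelve displacements of length 1 are +-e_a and +-(e_a - e_b).
HALF = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, -1, 0), (1, 0, -1), (0, 1, -1)]
UNIT = HALF + [(-a, -b, -c) for (a, b, c) in HALF]

def degrees(S):
    # number of nearest neighbors (sites at unit distance) of every vertex
    return {v: sum((v[0] + a, v[1] + b, v[2] + c) in S for (a, b, c) in UNIT) for v in S}

sizes = [len(pyramid_vertices(m)) for m in range(5)]  # follows N_m = 4 N_{m-1} - 6
deg1 = degrees(pyramid_vertices(1))   # (1,1,0) has 6 neighbors here...
deg2 = degrees(pyramid_vertices(2))   # ...and 8 in all higher-order pyramids
deg3 = degrees(pyramid_vertices(3))
avg3 = sum(deg3.values()) / len(deg3)  # average coordination, close to 7
```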
The Monte Carlo (MC) simulation of the classical XY-model on Sierpi\' nski pyramids was based on
the Metropolis algorithm \cite{metro53}. We considered the pyramids of orders $m=$4 (130 sites), $m=$5
(514 sites) and $m=$6 (2050 sites). For a pyramid of given order the simulation would start at a low
temperature with all phases aligned. The first 120,000 steps per site (sps) were discarded, followed by
seven MC links of 120,000 MC sps each. At each temperature the range over which each angle $\theta_{i}$
was allowed to vary \cite{bh88} was adjusted to ensure an MC acceptance rate of about 50\%. The errors
were calculated by breaking up each link into six blocks of 20,000 sps, then calculating the average
values for each of 42 blocks and finally taking the standard deviation $\sigma$ of these 42 average
values as an estimate of the error. The final configuration of the angles at a given temperature was used
as a starting configuration for the next higher temperature.
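A schematic version of such a Metropolis sweep, on a toy ring of spins rather than the pyramid and with our own variable names, looks as follows (units with $k_{B}=1$); the window tuning mimics the acceptance-rate adjustment described above, and the heat capacity is estimated from the standard energy-fluctuation relation:

```python
import math
import random

def xy_metropolis(neighbors, T, sweeps, J=1.0, seed=1):
    # Metropolis sampling for H = -J * sum_<i,j> cos(theta_i - theta_j), k_B = 1
    rng = random.Random(seed)
    n = len(neighbors)
    theta = [0.0] * n            # start from the fully aligned configuration
    delta = math.pi              # trial-move window, tuned toward ~50% acceptance
    energies = []
    for _ in range(sweeps):
        accepted = 0
        for i in range(n):
            new = theta[i] + rng.uniform(-delta, delta)
            dE = sum(-J * (math.cos(new - theta[j]) - math.cos(theta[i] - theta[j]))
                     for j in neighbors[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                theta[i], accepted = new, accepted + 1
        delta *= 1.05 if accepted > 0.5 * n else 0.95
        energies.append(-J * sum(math.cos(theta[i] - theta[j])
                                 for i in range(n) for j in neighbors[i] if j > i))
    return energies

# toy adjacency list: a ring of 16 spins (a stand-in for the pyramid's neighbor lists)
n = 16
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
E = xy_metropolis(ring, T=0.5, sweeps=200)
tail = E[100:]                   # discard the equilibration steps, as in the text
mean_E = sum(tail) / len(tail)
var_E = sum((e - mean_E) ** 2 for e in tail) / len(tail)
C = var_E / (n * 0.5 ** 2)       # fluctuation estimate of the heat capacity per site
```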
\section{Numerical Results and Discussion}
The heat capacity per site shown in Figure 2 was calculated from the fluctuation theorem
\begin{equation}
\label{cv}
C=\frac{1}{N}\frac{\langle H^{2}\rangle - \langle H\rangle^{2}}{k_{B}T^{2}}\>,
\end{equation}
where $k_{B}$ is the Boltzmann constant, $T$ is the absolute temperature and $\langle\cdots\rangle$
denotes the MC average. The results did not depend on the type of boundary condition (closed or open) within
the error bars in analogy to what was found for two-dimensional Sierpi\' nski gasket \cite{mb10}. In the
same Figure we also show the results for heat capacity obtained for the XY-model on three cubic lattices
with the periodic boundary conditions. The sizes of cubic lattices were chosen so that they are
comparable to the sizes of three Sierpi\' nski pyramids. The peak in the specific heat of cubic
lattices near $k_{B}T/J$=2.2 increases with system size \cite{lt89} as a consequence of diverging correlation
length at a continuous (i.e. second order) phase transition. On the other hand the specific heat for the
Sierpi\' nski pyramids is virtually size-independent indicating the absence of long range order for
the XY-model on three-dimensional Sierpi\' nski pyramid at any finite temperature. Thus, although the
average coordination number for the Sierpi\' nski pyramid ($\approx$ 7) is higher than its value for the
cubic lattice, the topological properties of this fractal structure, in particular a finite order of
ramification \cite{gasb84}, are responsible for the lack of long range order at finite $T$. The regular
lattices have an infinite order of ramification as the number of bonds one needs to break in
order to isolate an {\em arbitrary large} bounded cluster of lattice sites is infinite. For fractals with
a finite order of ramification an arbitrarily large bounded cluster can be cut off from the rest of the
structure by breaking off only a finite number of bonds and at finite temperature thermal fluctuations
are sufficient to destroy the long range order.
A similar absence of
size dependence of the peak in $C$ is found for the XY-model in two dimensions \cite{tc79,VanHC81}.
In that case
the peak results from unbinding of vortex clusters \cite{tc79} with increasing temperature above
the Berezinskii-Kosterlitz-Thouless (BKT) transition temperature $T_{c}$ where the heat capacity
has an unobservable essential singularity \cite{bn79}. However the results in Figure 2 for fractal structures
of fractal dimension $D=$2 do not necessarily imply BKT transition since size-independent peaks in $C$ could
result from the average energy per site $\langle E\rangle$ changing monotonically from the values near
-3.5$J$ (each site has about 7 neighbors on average) at low temperatures to near zero in the
high temperature paramagnetic phase.
The main signature of the BKT transition is the universal jump in the helicity modulus, $\gamma(T_{c})/k_{B}T_{c}$=2/$\pi$
\cite{nk77} at the critical temperature $T_{c}$. The helicity modulus $\gamma(T)$ measures the stiffness of the
angles $\{\theta_{i}\}$ with respect to a twist at the boundary of the system. At zero temperature, when
the angles are all aligned, one expects a finite value of $\gamma$ and at high temperatures, when the system
is in disordered paramagnetic phase, $\gamma$ vanishes. For the XY-model on three-dimensional regular
lattices $\gamma(T)$ decreases continuously with increasing temperature
and just below the transition temperature $T_{c}$ it obeys
a power law $\gamma(T)\propto |T-T_{c}|^{v}$ \cite{lt89}. In two dimensions, however, there is a
discontinuity in $\gamma$ stemming from unbinding of the vortex-antivortex pairs at BKT transition from
quasi-long-range order (order parameter correlation function decays algebraically) to disordered phase
(order parameter correlation function decays exponentially).
In numerical simulations on finite systems the discontinuity is replaced by a continuous drop in $\gamma(T)$
which becomes steeper with increasing system size (see, for example, \cite{mb10}). We should point out
that since the Sierpi\' nski pyramid is a three-dimensional object, the topological defects are not only vortices
in planes parallel to the faces of the pyramid (which are two-dimensional Sierpi\' nski gaskets) but also
vortex strings. Kohring {\em et al.} \cite{ksw86} presented Monte Carlo evidence that a continuous
phase transition for the XY-model in three dimensions is related to unbinding of vortex strings.
We computed the helicity modulus for Sierpi\' nski pyramids following the procedure of Ebner and Stroud
\cite{es83}. The idea is to think of Hamiltonian (\ref{ham}) as describing a set of Josephson-coupled
superconducting grains in zero magnetic field, where $\theta_{i}$ is the phase of the superconducting
order parameter on grain $i$. Then if a {\em uniform} vector potential ${\bf A}$ is applied
the phase difference $\theta_{i}-\theta_{j}$ in (\ref{ham}) is shifted by
$2\pi{\bf A}\cdot({\bf r}_{j}-{\bf r}_{i})/\Phi_{0}$, where ${\bf r}_{i}$ is the position vector of
site $i$ and $\Phi_{0}=hc/2e$ is the flux quantum. The helicity modulus is obtained from the
Helmholtz free energy $F$ as $\gamma=(\partial^{2}F/\partial A^{2})_{A=0}$, i.e.~
\begin{equation}
\label{hel}
\gamma=
\Big\langle\left(\frac{\partial^{2}H}{\partial A^{2}}\right)_{A=0}\Big\rangle-\frac{1}{k_{B}T}
\Big\langle\left(\frac{\partial H}{\partial A}\right)_{A=0}^{2}\Big\rangle+\frac{1}{k_{B}T}
\Big\langle\left(\frac{\partial H}{\partial A}\right)_{A=0}\Big\rangle^{2}\>.
\end{equation}
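Explicitly, with the shifted Hamiltonian $H(A)=-J\sum_{\langle i,j\rangle}\cos\left(\theta_{i}-\theta_{j}-\frac{2\pi}{\Phi_{0}}{\bf A}\cdot({\bf r}_{i}-{\bf r}_{j})\right)$ and writing $x_{ij}=\hat{\bf A}\cdot({\bf r}_{i}-{\bf r}_{j})$ for ${\bf A}=A\hat{\bf A}$, the derivatives entering (\ref{hel}) take the form (a routine computation, recorded here for convenience)
\[
\left(\frac{\partial H}{\partial A}\right)_{A=0}=-\frac{2\pi J}{\Phi_{0}}\sum_{\langle i,j\rangle}x_{ij}\sin(\theta_{i}-\theta_{j})\>,\qquad
\left(\frac{\partial^{2} H}{\partial A^{2}}\right)_{A=0}=\left(\frac{2\pi}{\Phi_{0}}\right)^{2}J\sum_{\langle i,j\rangle}x_{ij}^{2}\cos(\theta_{i}-\theta_{j})\>.
\]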
We took ${\bf A}$ to be along one of the edges of elementary
tetrahedron (Figure 1). Our results for $\gamma(T)$ obtained with closed boundary condition are shown in
Figure 3. They are completely analogous to what was obtained for two-dimensional Sierpi\' nski gaskets
\cite{mb10}:
a rapid downturn in $\gamma(T)$ starts near the universal 2/$\pi$-line but the low temperature values
of $\gamma(T)$ decrease with increasing system size and the onset of the downturn, which is in the
vicinity of the putative phase transition, shifts to the lower temperatures. For the XY-model on the
square lattices, where BKT transition does occur, the low-$T$ values of helicity modulus and the onset of
its downturn do not depend on the system size (see, for example, \cite{mb10}). Our results in Figure 3
suggest that in the thermodynamic limit $N\rightarrow\infty$, $\gamma(T)$ vanishes at any temperature $T>$0,
implying no BKT transition for the XY-model on Sierpi\' nski pyramid of fractal dimension $D=$2. This
conclusion is reinforced by our results for $\gamma(T)$ obtained with the open boundary condition when
four corners of an $m$th order pyramid are not coupled to each other, Figure 4. The helicity modulus is zero
within the error bars which are larger than those obtained with the closed boundary condition. The results
in Figures 3 and 4 indicate that closed boundary condition introduces additional correlations as was the
case for two-dimensional Sierpi\' nski gaskets \cite{mb10}.
Our conclusion about the lack of finite temperature BKT transition is supported by results for
linear susceptibility
\begin{equation}
\label{susc}
\chi=\frac{\langle M^{2}\rangle-\langle M\rangle^{2}}{N^{2}k_{B}T}\>,
\end{equation}
where $M$ is the magnetization of the system, shown in Figure 5. For finite cubic lattices one gets a
peak in $\chi$ near $k_{B}T/J=$2.2 whose size and sharpness increase with the number of sites as a result
of diverging correlation length at the onset of long range order. In the case of Sierpi\' nski pyramids
the peak in $\chi$ also grows with increasing system size but it also shifts substantially to lower
temperatures. For BKT transition Kosterlitz predicted \cite{k74} that above $T_{c}$ the susceptibility diverges
as $\chi\sim\exp[(2-\eta)b(T/T_{c}-1)^{-\nu}]$, with $\eta$=0.25, $b\approx$1.5 and $\nu$=0.5, and is
infinite below $T_{c}$. Our results suggest that in the thermodynamic limit $N\rightarrow\infty$ there would
be no divergence in $\chi$ at any finite temperature for the classical XY-model on Sierpi\' nski pyramid.
\section{Summary}
From our Monte Carlo simulation results we conclude that there is no finite temperature phase
transition for the classical XY-model ($O(2)$ symmetry) on three-dimensional Sierpi\' nski pyramid
(fractal dimension $D$=2). Since the heat capacity per site does not depend on the system
size there can be no long range order at any finite temperature. Because the low-temperature
helicity modulus decreases with increasing system size for closed boundary condition, and is zero within
the error bars for the open boundary condition, it must vanish in the thermodynamic limit at any finite
temperature. This implies no continuous finite temperature phase transition associated with
unbinding of vortex strings \cite{ksw86} in which case the helicity modulus vanishes at transition
temperature $T_{c}$ as a power law $\gamma(T)\propto|T-T_{c}|^{v}$ \cite{lt89}. Moreover there is
no finite temperature Berezinskii-Kosterlitz-Thouless transition associated with unbinding of
vortices and characterized by discontinuity in $\gamma(T)$ at $T_{c}$. These conclusions are
supported by our results for linear magnetic susceptibility. The lack of finite-temperature
long range order and the vanishing of spin stiffness/helicity modulus are the consequence of finite
order of ramification of Sierpi\' nski pyramid: as an arbitrarily large bounded cluster of sites can
be disconnected by cutting only the finite number of bonds thermal fluctuations drive
helicity modulus to zero and destroy long range order.
\subsection{Acknowledgements}
We thank Professor S.~K.~Bose for many useful discussions. This work
was supported by the Natural Sciences and Engineering Research Council
(NSERC) of Canada. The work of M.~P.~was also supported in part
through an NSERC Undergraduate Student Research Award (USRA).
% https://arxiv.org/abs/math/0411171

\title{New refinements of the McKay conjecture for arbitrary finite groups}

\begin{abstract}
Let $G$ be an arbitrary finite group and fix a prime number $p$. The McKay conjecture asserts that $G$ and the normalizer in $G$ of a Sylow $p$-subgroup have equal numbers of irreducible characters with degrees not divisible by $p$. The Alperin-McKay conjecture is a version of this as applied to individual Brauer $p$-blocks of $G$. We offer evidence that perhaps much stronger forms of both of these conjectures are true.
\end{abstract}

\section{Introduction and Conjecture A}
Let $G$ be an arbitrary finite group and fix a prime number $p$. As is well
known, there seem to be some mysterious and unexplained connections
between the representation theory of $G$ and that of certain of its
$p$-local subgroups. For example, it appears to be true that if $P$ is a
Sylow $p$-subgroup of $G$ and $N = \norm GP$, then equal numbers of the
irreducible (complex) characters of $G$ and of $N$ have degrees not
divisible by $p$. This ``McKay Conjecture" was first proposed by J.~McKay
in \ref9, where it was stated in the case where $G$ is simple and
$p = 2$. (See also \ref10.) The more general formulation of the
conjecture, for arbitrary finite groups and arbitrary primes, was stated by
J.~L.~Alperin in \ref1, although it was first suggested by the first
author in \ref6, where it was proved for (solvable) groups of odd
order. (In fact, in the odd-order case considered in \ref6, a natural
bijection was constructed between the sets of irreducible characters of
$p'$-degree of $G$ and of $\norm GP$, but it is known that no natural
bijection exists in general.) The McKay conjecture has been verified for
several additional types of groups including $p$-solvable groups, symmetric
groups and the sporadic simple groups. As yet, however, no one has given a
proof, or has even proposed an explanation for why this conjecture might
hold in the general case.
One could argue that the more precisely a conjecture can be stated, the
better it will be understood and thus, perhaps, the easier it might become
to discover a proof. In fact, the McKay conjecture was generalized and
strengthened by Alperin, who formulated a version that applies to the
Brauer $p$-blocks of $G$. (The Alperin-McKay conjecture was first
proposed in \ref1. We will review its statement in Section 2, where
we discuss blocks.)
In this note we propose several further refinements of the McKay
conjecture and the Alperin-McKay conjecture. To state the first of these,
we define the integer $M_k(G)$, where $G$ is an arbitrary finite group and
$k$ is an integer not divisible by our fixed prime $p$. We write $M_k(G)$ to
denote the total number of irreducible characters of $G$ having degree
congruent modulo $p$ to $\pm k$. For example, if $p = 5$ and $G$ is the
alternating group $A_5$, we see that $M_1(G) = 2$ since $A_5$ has one
irreducible character of degree $1$ and one of degree $4$. Also,
$M_2(G) = 2$ since $A_5$ has two irreducible characters of degree~$3$.
(Note that for odd primes~$p$, the only values of $k$ that we need to
consider are $1 \le k \le (p-1)/2$.)
\nonumproclaim{Conjecture A}
Let $G$ be an arbitrary finite group and let
$N = \norm GP${\rm ,} where $P \in \syl pG$. Then for each integer $k$ not
divisible by $p${\rm ,} we have $M_k(G) = M_k(N)$.
\endproclaim
For example, if $p = 5$ and $G = A_5$, we saw that $M_1(G) = 2 = M_2(G)$.
In this case, we know that $N$ is dihedral of order $10$, and thus $N$ has
two irreducible characters of degree $1$ and two of degree $2$ and we see
that $M_1(N) = 2 = M_2(N)$, as predicted by Conjecture~A.
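These counts are easy to reproduce mechanically from the degree lists alone. The sketch below is ours, not the authors'; it computes $M_k$ from a list of irreducible character degrees, using the standard degree lists for $A_5$ and for the dihedral group of order $10$:

```python
def M_k(degrees, p, k):
    """Count degrees d with p not dividing d and d congruent to +k or -k mod p."""
    return sum(1 for d in degrees
               if d % p != 0 and d % p in (k % p, (-k) % p))

A5_degrees = [1, 3, 3, 4, 5]   # irreducible character degrees of A_5
N_degrees = [1, 1, 2, 2]       # dihedral of order 10, the Sylow 5-normalizer

for k in (1, 2):
    print(k, M_k(A5_degrees, 5, k), M_k(N_degrees, 5, k))
# k = 1 gives 2 on both sides; k = 2 gives 2 on both sides, as Conjecture A predicts
```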
Note that if $p = 2$ or $p = 3$, then $M_1(G)$ is the number of irreducible
characters of $G$ of degree not divisible by $p$, and so for these two
primes, Conjecture~A is exactly equivalent to the McKay conjecture. For
$p > 3$, however, we see that the number of irreducible characters of $G$
of degree not divisible by $p$ is the sum of the numbers $M_k(G)$ for
$1 \le k \le (p-1)/2$, and so for these primes, Conjecture~A is strictly
stronger than the McKay conjecture.
But what is the evidence that our conjecture is true? In the case where
$G$ has odd order, the bijection constructed in \ref6\ carries a
$p'$-degree character $\chi \in \irr G$ to a character $\xi \in \irr{\norm GP}$
such that $\chi(1) \equiv \pm \xi(1)$ mod $p$, and hence Conjecture~A
certainly holds in this case. More generally, but using some deep theory,
the conjecture can be proved for all $p$-solvable groups. Also, P.~Fong
\ref5\ has recently succeeded in proving it for all symmetric groups (for
all primes). Furthermore, Conjecture~A holds if the Sylow $p$-subgroup is
cyclic. (This follows from E.~C.~Dade's cyclic defect theory \ref3, and
we shall have more to say about that in Section~2.)
If $G = {\rm GL}(n,q)$, where $q$ is a power of the prime $p$, then it is known
that all irreducible characters of $G$ of degree not divisible by $p$ have
degrees congruent to $\pm1$ mod $p$. (In fact, these degrees are
congruent to $\pm1$ mod $q$; this fact follows from results in
\ref4.) Also, it is not too hard to see that all of the irreducible
$p'$-degrees for the relevant Sylow normalizer of this group are powers of
$q - 1$, and hence they too are all congruent to $\pm1$ mod $p$. Since the
McKay conjecture is known to be true for ${\rm GL}(n,q)$ in the defining
characteristic, (see \ref1), it follows that Conjecture~A is also true
in this case. (In fact, with some additional work, the conjecture can also be
checked for ${\rm SL}(n,q)$ in the defining characteristic.)
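The congruence is easy to see concretely in the smallest case. The sketch below is ours: the distinct irreducible character degrees of ${\rm GL}(2,q)$ are classically $1$, $q-1$, $q$ and $q+1$, so the degrees prime to $p$ are visibly $\equiv \pm1$ mod $q$:

```python
def residues_of_pprime_degrees(degrees, p):
    """Residues mod p of those degrees not divisible by p."""
    return sorted({d % p for d in degrees if d % p != 0})

q = 7  # q a power of the prime p = 7
gl2_distinct_degrees = [1, q - 1, q, q + 1]  # classical degrees for GL(2,q)
print(residues_of_pprime_degrees(gl2_distinct_degrees, 7))  # [1, 6], i.e. +-1 mod 7
```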
Finally, we mention that Conjecture~A is also true for all primes for all of
the sporadic simple groups. Since the McKay conjecture is known to hold
for these groups \ref11\ and since Conjecture~A is automatically true
when the Sylow $p$-subgroup is cyclic, we see that it suffices to check the
conjecture for primes exceeding $3$ and for which a Sylow subgroup is not
cyclic. We have carried out this check, relying on the ATLAS for the
irreducible character degrees and the paper \ref11\ of R.~A.~Wilson
for the Sylow normalizers of these groups. (However, Wilson's paper has an
error: the normalizer of a Sylow $5$-subgroup in $Fi_{23}$ is incorrectly
described. When we reported this to Wilson, he provided us with a
corrected version.)
The following table gives the relevant data. The third column lists the
numbers $M_k(G)$ for $1 \le k \le (p-1)/2$. The McKay conjecture, of
course, predicts only the sum of these numbers, while Conjecture~A
predicts each of the numbers in the third column.
\medskip
\centerline{
\vtop{\offinterlineskip
\hrule
\halign{&\strut\vrule~\hfil#\hfil~&\vrule~\hfil#\hfil~&\vrule%
~\hfil#~&~\hfil#~&~~\hfil#~\hfil\vrule\cr
Group&Prime&$M_1$&$M_2$&$M_3$\cr
\noalign{\hrule}
$J_2$&5&12&2&\cr
\noalign{\hrule}
$HS$&5&~9&4&\cr
\noalign{\hrule}
$McL$&5&~9&4&\cr
\noalign{\hrule}
$He$&5&~8&8&\cr
\noalign{\hrule}
&7&12&7&1\cr
\noalign{\hrule}
$Ru$&5&10&10&\cr
\noalign{\hrule}
$Suz$&5&~8&8&\cr
\noalign{\hrule}
$O$'$N$&7&12&7&1\cr
\noalign{\hrule}
$Co_3$&5&10&10&\cr
\noalign{\hrule}
$Co_2$&5&10&10&\cr
\noalign{\hrule}
$Fi_{22}$&5&10&10&\cr
\noalign{\hrule}
$HN$&5&10&10&\cr
\noalign{\hrule}
$Ly$&5&25&0&\cr
\noalign{\hrule}
}}\quad\quad
\vtop{\offinterlineskip
\hrule
\halign{&\strut\vrule~\hfil#\hfil~&\vrule~\hfil#\hfil~&\vrule%
~\hfil#~&~\hfil#~&~\hfil#~&~\hfil#~&~\hfil#~&~~\hfil#~\hfil\vrule\cr
Group&Prime&$M_1$&$M_2$&$M_3$&$M_4$&$M_5$&$M_6$\cr
\noalign{\hrule}
$Th$&5&10&10&&&&\cr
\noalign{\hrule}
&7&9&9&9&&&\cr
\noalign{\hrule}
$Fi_{23}$&5&20&20&&&&\cr
\noalign{\hrule}
$Co_1$&5&25&0&&&&\cr
\noalign{\hrule}
&7&9&9&9&&&\cr
\noalign{\hrule}
$J_4$&11&12&15&10&5&0&\cr
\noalign{\hrule}
$Fi_{24}'$&5&28&28&&&&\cr
\noalign{\hrule}
&7&12&9&4&&&\cr
\noalign{\hrule}
$B$&5&25&0&&&&\cr
\noalign{\hrule}
&7&27&27&27&&&\cr
\noalign{\hrule}
$M$&5&40&40&&&&\cr
\noalign{\hrule}
&7&49&0&0&&&\cr
\noalign{\hrule}
&11&10&10&10&10&10&\cr
\noalign{\hrule}
&13&12&18&12&6&3&4\cr
\noalign{\hrule}
}}}
\vglue18pt
Conjecture~A has an amusing consequence for symmetric groups. (As we
mentioned, this case of our conjecture has been proved by Fong.) The fact
is that if $n \ge p$, then all of the numbers $M_k(S_n)$ are multiples of the
prime $p$. To prove this, of course, it suffices to check that the numbers
$M_k(N)$ are multiples of $p$, where $N$ is the normalizer of a Sylow
$p$-subgroup $P$ of the symmetric group $S_n$. It is not very hard to
establish this fact using known information about the Sylow normalizers in
symmetric groups. (See Fong's paper \ref5\ for more detail.)
Finally, we mention that there is a significant difference between
Conjecture~A and the many other conjectures that relate the
representation theory of an arbitrary finite group to the $p$-local structure
of the group. (In addition to the McKay conjecture, these include the
Brauer height conjecture, the Alperin weight conjecture and the several
variations and strengthenings of the latter that were formulated by
E.~C.~Dade.) All of these conjectures deal only with the $p$-parts of
irreducible character degrees while our Conjecture~A, of course, is
concerned also with $p'$-parts of character degrees. In the following
section, we discuss Conjecture~B, which to an even greater extent is also
concerned with $p'$-parts of character degrees.
\section{Blocks}
Let $B$ be a $p$-block of an arbitrary finite group $G$ and let $D$ be a
defect group for $B$. (Recall that $D$ is a $p$-subgroup of $G$ that is
uniquely determined up to $G$-conjugacy by $B$.) As is customary, we will
write $\irr B$ to denote the subset of $\irr G$ consisting of those characters
that belong to the block $B$. Recall that the degrees of the characters in
$\irr B$ are all divisible by $|P|/|D|$, where $P$ is a Sylow $p$-subgroup
of $G$, and that those members of $\irr B$ whose $p$-part is exactly equal
to $|P|/|D|$ are the ``height zero" characters of $B$.
Now let $N = \norm GD$. Recall that R.~Brauer's famous First Main Theorem
asserts that there is a certain natural bijection between the set of
$p$-blocks of $G$ having defect group $D$ and the set of $p$-blocks of $N$
having defect group $D$. If the block $b$ of $N$ corresponds to the block
$B$ of $G$ under this bijection, we say that $b$ is the ``Brauer
correspondent" of $B$ with respect to the defect group~$D$.
Consider the case where $D \in \syl pG$, so that $B$ and $b$ are blocks of
``maximal defect". It is easy to see that the members of $\irr G$ having
degree not divisible by $p$ are exactly all of the height zero characters in
all $p$-blocks of $G$ that have maximal defect. Similarly, the members of
$\irr N$ with degree not divisible by $p$ are just the height zero characters
in the maximal defect $p$-blocks of $N$. The McKay conjecture asserts,
therefore, that the total number of height zero characters in $p$-blocks of
maximal defect of $G$ is equal to the total number of height zero
characters in $p$-blocks of maximal defect of $N$. Since the latter blocks
are exactly the Brauer correspondents of the former, it is reasonable to
guess that for each $p$-block $B$ of maximal defect, the number of height
zero characters in $\irr B$ is equal to the number of height zero characters
in $\irr b$, where $b$ is the Brauer correspondent of $B$.
In fact, it may be unnecessary to limit ourselves to blocks of maximal
defect. Perhaps it is true for {\it every} $p$-block $B$ of $G$ that the
numbers of height zero characters in $\irr B$ and $\irr b$ are equal, where
$b$ is the Brauer correspondent of $B$ with respect to some defect group.
This is precisely the Alperin-McKay conjecture, which appears as
Conjecture~3 of Alperin's paper \ref1. This conjecture has been
shown to be valid for many families of groups.
Our Conjecture~B strengthens the Alperin-McKay conjecture in the same
way that Conjecture~A strengthens the McKay conjecture. To state it, we
need to define the integer $M_k(B)$, where $B$ is a $p$-block of $G$ and
$k$ is an integer not divisible by $p$. We write $M_k(B)$ to denote the
number of height zero characters in $\irr B$ for which the $p'$-part
of the degree is congruent modulo $p$ to $\pm k$. Note that if we hold $k$
fixed and sum $M_k(B)$ over all $p$-blocks $B$ of $G$ of maximal defect,
we obtain the number $M_k(G)$. Also, it is clear that if $p > 2$ and we sum
$M_k(B)$ for $1 \le k \le (p-1)/2$, we get the total number of height zero
characters in $\irr B$.
At this point, one might guess that it is always true that $M_k(B) = M_k(b)$,
where $k$ is any integer not divisible by $p$ and $b$ is the Brauer
correspondent of $B$ with respect to some defect group $D$. To see that
this cannot be correct, however, consider the situation where $G$ is
$p$-solvable. In this case, there is a subgroup $G_0$ of $G$ and a block
$B_0$ of $G_0$ having defect group $D$ and such that the members of
$\irr B$ are exactly the induced characters $\psi^G$, where $\psi$ runs over
$\irr{B_0}$. Now let $N = \norm GD$, write $N_0 = N \cap G_0$ and let $b_0$
be the Brauer correspondent of $B_0$. All of this can be done so that
$D \in \syl p{G_0}$ and the members of $\irr b$ are exactly the induced
characters $\xi^N$, where $\xi$ runs over $\irr{b_0}$. Note that in this
situation, the height zero characters in $\irr B$ are induced from the
height zero characters in $\irr{B_0}$ and similarly, the height zero
characters in $\irr b$ are induced from the height zero characters in
$\irr{b_0}$.
Suppose now that we knew that $M_k(B_0) = M_k(b_0)$, for all integers $k$
not divisible by $p$. (And in fact, this can be proved using deep facts from
the representation theory of $p$-solvable groups.) Since the height zero
characters of $B$ and $b$ are obtained by induction from $G_0$ and $N_0$,
respectively, it follows that $M_{rk}(B) = M_{sk}(b)$, where $r$ and $s$ are
respectively the $p'$-parts of the indices $|G:G_0|$ and $|N:N_0|$. But in
general, $r$ and $s$ are not congruent modulo $p$, and so we cannot
expect that $M_k(B) = M_k(b)$ for all choices of $k$.
In this situation, let $c$ be the $p'$-part of $|G:N|$. Since
$D \in \syl p{G_0}$, we know by Sylow's theorem that
$|G_0:N_0| \equiv 1$ mod $p$, and thus
$$
r = |G:G_0|_{p'} \equiv
|G:N_0|_{p'} = |G:N|_{p'}|N:N_0|_{p'} \equiv cs {\rm ~~mod~p} \,.
$$
We now have $M_{sk}(b) = M_{rk}(B) = M_{csk}(B)$ for all integers $k$ that
are not divisible by $p$, or equivalently, $M_{ck}(B) = M_k(b)$ for all such
integers $k$. We conjecture that this fact about $p$-solvable groups holds
in general.
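The modular bookkeeping in this argument can be checked numerically. In the sketch below (ours) the index values are hypothetical, chosen only to satisfy the constraints of the argument: $|G_0:N_0| \equiv 1 \bmod p$ and $|G:G_0|\,|G_0:N_0| = |G:N|\,|N:N_0|$, both sides being $|G:N_0|$:

```python
def p_prime_part(n, p):
    """Return the p'-part of n, i.e. n with all factors of p removed."""
    while n % p == 0:
        n //= p
    return n

p = 5
idx_G_G0, idx_G0_N0 = 10, 6      # hypothetical indices; note 6 = |G_0:N_0| is 1 mod 5
idx_G_N, idx_N_N0 = 15, 4        # chosen so that 10 * 6 == 15 * 4 == |G:N_0|
assert idx_G_G0 * idx_G0_N0 == idx_G_N * idx_N_N0

r = p_prime_part(idx_G_G0, p)    # r = 2
s = p_prime_part(idx_N_N0, p)    # s = 4
c = p_prime_part(idx_G_N, p)     # c = 3
assert r % p == (c * s) % p      # r is congruent to c*s mod p, as in the display
```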
\nonumproclaim{Conjecture B} Let $B$ be a $p$\/{\rm -}\/block of an arbitrary finite
group $G$ and suppose that $b$ is the Brauer correspondent of $B$ with
respect to some defect group $D$. Then for each integer $k$ not divisible
by $p${\rm ,} we have $M_{ck}(B) = M_k(b)${\rm ,} where $c = |G:\norm GD|_{p'}$.
\endproclaim
Observe that Conjecture~B implies Conjecture~A. To see why this is so,
recall that $M_k(G)$ is the sum of the quantities $M_k(B)$ as $B$ runs over
all $p$-blocks of $G$ of maximal defect. For each such block, however,
Conjecture~B asserts that $M_k(B) = M_k(b)$, where $b$ is the Brauer
correspondent of $B$ with respect to some fixed Sylow $p$-subgroup $P$
of $G$. (This is because by Sylow's theorem, the constant $c$ that appears
in Conjecture~B is congruent to $1$ modulo $p$ in this case.) Conjecture~A
then follows since the sum of the quantities $M_k(b)$ is exactly $M_k(N)$,
where $N = \norm GP$.
As we have already indicated, Conjecture~B is true for $p$-solvable
groups. Also, Fong's paper \ref5\ establishes this conjecture for all
symmetric groups (for all primes). There is one other situation where we
know that our Conjecture~B is valid.
\nonumproclaim{(2.1) Theorem} Conjecture~{\rm B} holds for all $p$\/{\rm -}\/blocks that
have cyclic defect groups.
\endproclaim
Since Conjecture~B implies Conjecture~A, it follows that Conjecture~A is
valid for all groups $G$ having a cyclic Sylow $p$-subgroup. (This was
mentioned in Section~1.)
To establish Theorem~2.1, we need to appeal to the deep theory of blocks
with cyclic defect groups that was developed by Dade. A consequence of
this theory in \ref3\ is the following useful fact.
\nonumproclaim{(2.2) Theorem} Suppose that $B$ is a $p$\/{\rm -}\/block of $G$ with
cyclic defect group $D = \gen x$ and let $b$ be the Brauer correspondent
of $B$ with respect to $D$. Then for each character $\chi \in \irr B$ there is
a sign $\epsilon_\chi = \pm1$ and a character $\tilde\chi \in \irr b$ such that
$$
\chi(xy) = \epsilon_\chi \tilde\chi(xy)
$$
for all $p$\/{\rm -}\/regular elements $y \in \cent Gx$. Furthermore{\rm ,} the map
$\chi \mapsto \tilde\chi$ defines a bijection from $\irr B$ onto $\irr b$.
\endproclaim
\demo{Sketch of proof} Write $C = \cent GD$ and $N = \norm GD$.
According to Dade's paper \ref3, there is a certain uniquely determined
subgroup $E$ with $C \subseteq E \subseteq N$, where $|E:C| = e$ is a divisor of $p-1$.
(The uniqueness of $E$ depends on the fact that $N/C$ is abelian.) Dade
shows in Theorem~1, Part~1, that the members of $\irr B$ are of two types.
There are $e$ characters of the form $\chi_j$, where $j$ is an integer
$1 \le j \le e$ and there are $(|D|-1)/e$ characters $\chi_\lambda$, where
$\lambda$ is a nonprincipal linear character of $D$. (Our notation, which
differs slightly from Dade's, is set up so that $\chi_\lambda = \chi_\mu$ if
and only if $\lambda$ and $\mu$ are conjugate in $E$.) Similarly, if we
apply this reasoning to $N$ in place of $G$, with $b$ in place of $B$, we
get the same subgroup $E$, and thus the members of $\irr b$ can be
parametrized as the characters $\psi_j$ with $1 \le j \le e$ and
$\psi_\lambda$, where $\lambda$ is a nonprincipal linear character of $D$
and $\psi_\lambda = \psi_\mu$ if and only if $\lambda$ and $\mu$ are
conjugate in $E$. We can now define the bijection $\chi \mapsto \tilde\chi$
by $\chi_j \mapsto \psi_j$ for $1 \le j \le e$ and $\chi_\lambda \mapsto
\psi_\lambda$ if $\lambda$ is a linear character of $D$.
Dade's Corollary~1.9 gives formulas for the evaluation of $\chi_j(xy)$ and
$\chi_\lambda(xy)$, and of course similar formulas can be used to compute
$\psi_j(xy)$ and $\psi_\lambda(xy)$ by working in $N$ instead of $G$. (Note
that since $D = \gen x$, we must set $i = 0$ in Corollary~1.9.) Examination
of the right sides of the formulas in Corollary~1.9 shows that everything is
determined inside the group $N$ except for a constant $\gamma_0$ (which
is always equal to $1$ by Dade's Equation~1.10) and certain signs
$\epsilon_j$, which depend on the character. (All of the characters
$\chi_\lambda$ use the same sign, $\epsilon_0$.) It follows that $\chi(xy)$
and $\tilde\chi(xy)$ agree, except possibly for a sign depending on $\chi$.
This completes the proof.\hfill\qed
\enddemo
{\it Proof of Theorem} 2.1. Let $B$ be a $p$-block of $G$ with cyclic
defect group $D = \gen x$ and let $b$ be the Brauer correspondent of $B$
with respect to $D$, so that $b$ is a $p$-block of $N = \norm GD$. We will
show that the bijection $\chi \mapsto \tilde\chi$ of Theorem~2.2 maps
the height zero characters in $\irr B$ onto the height zero characters in
$\irr b$. (Actually, it is true that in this case all of the members of $\irr B$
and $\irr b$ have height zero, but we will not need that fact.) Also, we will
show that if $\chi$ and $\tilde\chi$ are height zero characters, then
$\chi(1)_{p'} \equiv \pm c\tilde\chi(1)_{p'}$ mod $p$, where
$c = |G:N|_{p'}$. The result will then follow.
Let $K$ be a defect class for $B$. In particular, $D$ is a defect group for
$K$, which means that there exists $y \in K$ such that
$D \in \syl p{\cent Gy}$, and thus $y \in C$, where
$C = \cent GD = \cent Gx$. Also, because $K$ is a defect class for $B$, we
know that $y$ is $p$-regular and that $\lambda_B(\hat K) \ne 0$, where
$\lambda_B$ is the ``central homomorphism" corresponding to $B$ and
$\hat K$ is the sum of the elements of $K$ in the appropriate group ring.
(Recall that for every class $L$ of $G$, we have
$\lambda_B(\hat L) = \omega_\chi(\hat L)^*$, where $\chi$ is any member
of $\irr B$ and $(\ )^*$ is the canonical homomorphism from the ring of
$p$-local integers to its residue class field modulo some fixed maximal ideal
$M$ containing $p$.)
Now let $L = \cl G{xy}$ and note that $D \in \syl p{\cent G{xy}}$, so that
$|L|_p = |G|_p/|D| = |K|_p$. We claim now that
$\lambda_B(\hat L) \ne 0$. To see why this is so, let $\chi \in \irr B$ have
height zero. Then $|K|_p = \chi(1)_p = |L|_p$, and thus $|K|/\chi(1)$ and
$|K|/|L|$ are $p$-local integers. Also, since $\chi(y) \equiv \chi(xy)$,
where we are working modulo the maximal ideal $M$, it follows that
$$
\omega_\chi(\hat K) =
{\chi(y)|K| \over \chi(1)} \equiv
{\chi(xy)|K| \over \chi(1)} =
{\chi(xy)|L| \over \chi(1)}\,{|K| \over |L|} =
\omega_\chi(\hat L){|K| \over |L|} \,.
$$
We now have
$$
0 \ne \lambda_B(\hat K) = \omega_\chi(\hat K)^* =
\omega_\chi(\hat L)^* \left({|K| \over |L|}\right)^ {\kern -.3em *} =
\lambda_B(\hat L) \left({|K| \over |L|}\right)^ {\kern -.3em *} \,,
$$
and it follows that $\lambda_B(\hat L) \ne 0$, as claimed.
By Lemma~15.46 of \ref7, we know that $L \cap C$ is a class of $N$,
and thus $L \cap C = \cl N{xy}$. Since $\cent G{xy} \subseteq C \subseteq N$, we see
that $\cent G{xy} = \cent N{xy}$, and from this, we compute that
$|L| = |G:\cent G{xy}| = |G:N||N:\cent N{xy}| = |G:N||L \cap C|$.
Observe that since $b^G = B$, we have
$\lambda_B(\hat L) = \lambda_b(\widehat{L \cap C})$, and we write
$\alpha$ to denote this nonzero element of the residue class
field of the $p$-local integers.
Now let $\chi \in \irr B$ be arbitrary and write $\psi = \tilde\chi$, in the
notation of Theorem~2.2. We thus have
$$
\omega_\chi(\hat L)^* = \alpha = \omega_\psi(\widehat{L \cap C})^* \,.
$$
Since $L \cap C$ is a class of $N$ with defect group $D$, we see that
$\chi(1)/|L|$ and $\psi(1)/|L \cap C|$ are $p$-local integers. Also, we
observe that $\chi$ has height zero in $B$ precisely when
$(\chi(1)/|L|)^* \ne 0$, and similarly, $\psi$ has height zero in $b$ if and
only if $(\psi(1)/|L \cap C|)^* \ne 0$.
By Theorem~2.2, we have
$$
{\chi(1) \over |L|} \omega_\chi(\hat L) = \chi(xy) = \pm \psi(xy) =
\pm{\psi(1) \over |L \cap C|} \omega_\psi(\widehat{L \cap C}) \,.
$$
Since $\chi(1)/|L|$ and $\psi(1)/|L \cap C|$ are $p$-local integers, we
deduce that
$$
\left({\chi(1) \over |L| }\right)^{\kern -.3em *}\!\alpha =
\pm \left({\psi(1) \over |L \cap C|}\right)^ {\kern -.3em *}\!\alpha \,,
$$
and thus since $\alpha \ne 0$, we have
$\chi(1)/|L| \equiv \pm \psi(1)/|L \cap C|$. In particular, $\chi$ has height
zero if and only if $\psi$ has height zero. If we multiply both sides by
$|L|_{p'} = |G:N|_{p'}|L \cap C|_{p'} = c|L \cap C|_{p'}$, we obtain
$$
{\chi(1) \over |L|_p} \equiv \pm c {\psi(1) \over |L \cap C|_p}
$$
and since both sides of this congruence are rational integers, these
numbers are actually congruent modulo $p$. In the case where $\chi$ and
$\psi$ have height zero, the integers $\chi(1)/|L|_p$ and
$\psi(1)/|L \cap C|_p$ are exactly the $p'$-parts of the degrees of $\chi$
and $\psi$, and so the result follows.\hfill\qed
\vglue12pt
Our Conjecture~B is related to some conjectures and results of M.~Brou\'e
in \ref2. As usual, suppose that $B$ is a $p$-block of $G$ and that $b$
is its Brauer correspondent with respect to the defect group $D$. Brou\'e
conjectures that in the case where $D$ is abelian, there exists a ``perfect
isometry" between $B$ and $b$, and he proves this in the case where $D$
is cyclic. (It is known that a perfect isometry need not exist in the case
where $D$ is nonabelian.) A perfect isometry, if it exists, would imply the
existence of a certain bijection $\chi \mapsto \tilde\chi$ from $\irr B$ onto
$\irr b$. In addition, Brou\'e shows that if a perfect isometry exists, there
would be some constant $c$, depending on the block $B$, such that
$\chi(1) \equiv \pm c \tilde\chi(1)$ mod $p$ for all $\chi \in \irr B$. Of
course, it follows in this case that for each integer $k$ not divisible by $p$,
we would have (in our notation) $M_{ck}(B) = M_k(b)$. Furthermore,
Brou\'e shows that his constant $c$ is equal to $1$ if $B$ is the principal
block of $G$, but he does not evaluate the constant in other cases. (Note
that according to Conjecture~B, this constant should be $1$ for every
block of maximal defect, and not just for the principal block.) We mention
that if $G$ has an abelian self-centralizing Sylow $p$-subgroup, then the
principal block is the only $p$-block of maximal defect, and in that case,
Brou\'e's perfect isometry conjecture would imply our Conjecture~A.
\section{Field automorphisms}
There are other directions in which the McKay conjecture might be
extended. Suppose, for example, that $G$ is a group for which the McKay
conjecture holds in the strong sense that there is a {\it canonical} bijection
$\chi \mapsto \tilde\chi$ from $\irrpp G$ onto $\irrpp N$. (Here
$N = \norm GP$, where $P \in \syl pG$, and we are using the notation
$\irrpp X$ to denote the subset of $\irr X$ consisting of characters of
$p'$-degree.) In this case, we see that if $\sigma$ is any automorphism of
the cyclotomic field ${\Bbb Q}_{|G|}$, then $(\tilde\chi)^\sigma =
\widetilde{\chi^\sigma}$, and thus in particular, the sets $\irrpp G$ and
$\irrpp N$ would have equal numbers of $\sigma$-fixed members.
If $G$ is (solvable) of odd order, then there is such a canonical bijection,
and the numbers of $\sigma$-fixed characters in $\irrpp G$ and $\irrpp N$
are equal for all choices of the field automorphism $\sigma$. But this fails
for solvable groups in general. (For example, if $G = {\rm GL}(2,3)$ and $p = 3$,
then all members of $\irrpp N$ are rational valued, but the same is not true
for $\irrpp G$.) If we impose some conditions on the field automorphism
$\sigma$, however, then the equality of the numbers of $\sigma$-fixed
characters is known to hold for all $p$-solvable groups. (See Corollary~C of
\ref{8}.) We conjecture that under these conditions on $\sigma$,
equality holds for all groups.
\nonumproclaim{Conjecture C} Let $G$ be an arbitrary finite group and fix a
prime~$p$. Let $\sigma$ be an automorphism of the cyclotomic field
${\Bbb Q}_{|G|}$ and assume that $\sigma$ has $p$\/{\rm -}\/power order and that it fixes
all $p'$\/{\rm -}\/roots of unity in ${\Bbb Q}_{|G|}$. Then $\sigma$ fixes equal numbers of
characters in $\irrpp G$ and $\irrpp N$, where $N = \norm GP$ and $P \in
\syl pG$.
\endproclaim
Of course, if we take $\sigma$ to be the identity automorphism, we
recover the McKay conjecture from Conjecture~C. Another consequence
of the conjecture is that the character table of a group $G$ determines the
exponent of the abelian group $P/P'$, where $P \in \syl pG$. To see why
this is true, let $N = \norm GP$ and fix a positive integer $n$. Let
$\sigma_n$ be the unique automorphism of ${\Bbb Q}_{|G|}$ that fixes all
$p'$-roots of unity and maps every $p$-power root of unity $\epsilon$ to
$\epsilon^{p^n+1}$. Then $\sigma_n$ has $p$-power order and it fixes roots
of unity of order $p^n$ but not those of order $p^{n+1}$ or higher. It is not
hard to see from this that a necessary and sufficient condition for
$\sigma_n$ to fix every member of $\irrpp N$ is that $P/P'$ has exponent at
most $n$. If Conjecture~C is true, therefore, it follows that $P/P'$ has
exponent at most $n$ if and only if $\sigma_n$ fixes every member of
$\irrpp G$, and thus the exponent of $P/P'$ is determined from the
character table of $G$, as claimed.
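The key property of $\sigma_n$ used here, that it fixes $p$-power roots of unity of order at most $p^n$ and no others, reduces to integer arithmetic on exponents: $\epsilon^{p^n+1} = \epsilon$ if and only if the order of $\epsilon$ divides $(p^n+1)-1 = p^n$. A quick sketch (ours):

```python
def sigma_n_fixes(p, n, m):
    """Does eps -> eps**(p**n + 1) fix a primitive p**m-th root of unity eps?
    On exponents: fixed iff p**m divides (p**n + 1) - 1 = p**n, i.e. iff m <= n."""
    return (p ** n) % (p ** m) == 0

p = 5
for n in range(5):
    for m in range(5):
        assert sigma_n_fixes(p, n, m) == (m <= n)
```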
We also propose a block version of Conjecture~C that generalizes the
Alperin-McKay conjecture. To state it, we observe that if $\sigma$ is a field
automorphism fixing $p'$-roots of unity and $\chi \in \irr G$, then $\chi$
and $\chi^\sigma$ necessarily belong to the same $p$-block of $G$. (This is
because these characters agree on all $p'$-elements of $G$.) Such a field
automorphism, therefore, permutes the set of height zero characters in
$\irr B$ for each $p$-block $B$ of $G$.
\nonumproclaim{Conjecture D} Let $B$ be a $p$-block for an arbitrary finite
group $G$ and suppose that $b$ is the Brauer correspondent of $B$ with
respect to some defect group. Let $\sigma$ be an automorphism of the
cyclotomic field ${\Bbb Q}_{|G|}$ and assume that $\sigma$ has $p$\/{\rm -}\/power order
and that it fixes all $p'$\/{\rm -}\/roots of unity in ${\Bbb Q}_{|G|}$. Then $\sigma$ fixes
equal numbers of height zero characters in $\irr B$ and $\irr b$.
\endproclaim
By Theorem~G of \ref8, Conjecture~D is known to hold for
$p$-solvable groups. We present a proof of the conjecture in the case
where the defect group of $B$ is cyclic.
\nonumproclaim{(3.1) Theorem} Conjecture~$D$ is valid for blocks with cyclic
defect group.
\endproclaim
{\it Sketch of proof}. We suppose that the defect group $D$ of $B$ is
cyclic and that $b$ is the Brauer correspondent of $B$ with respect to $D$.
As we observed in the proof of Theorem~2.2, the members of $\irr B$
are of two types. There are $e$ characters $\chi_j$, where $1 \le j \le e$
and there are $(|D| - 1)/e$ different characters of the form
$\chi_\lambda$, where $\lambda$ is a nonprincipal linear character of $D$
and $\chi_\lambda = \chi_\mu$ if and only if $\lambda$ and $\mu$ lie in the
same $E$-orbit. Also, there is a similar parametrization of the members of
$\irr b$.
We will see that all of the characters $\chi_j$ and $\psi_j$ are fixed by
$\sigma$ and that $\chi_\lambda$ and $\psi_\lambda$ are fixed by $\sigma$
if and only if the linear character $\lambda$ is $\sigma$-fixed. The result
will then follow.
Suppose that $\chi \in \irr B$ and that $g \in G$ is arbitrary. If $g$
is $p$-regular, we know that $\chi(g)$ is $p$-rational, and thus
$\chi(g) = \chi(g)^\sigma$. Also, if the $p$-part of $g$ is not conjugate to
an element of $D$, then $\chi(g) = 0 = \chi(g)^\sigma$. It follows that to
determine whether or not $\chi$ is $\sigma$-fixed, it suffices to consider
only the values $\chi(xy)$, where $1 \ne x \in D$ and $y$ is a $p$-regular
element in $\cent Gx$. We are thus exactly in the situation where
Corollary~1.9 of \ref3\ applies.
It is immediate from Dade's Corollary~1.9 that if $\chi = \chi_j$ with
$1 \le j \le e$, then $\chi(xy)$ is $p$-rational, and it follows that all of the
characters $\chi_j$ are $\sigma$-fixed, as claimed. Also from Corollary~1.9,
we see that $\chi_\lambda(xy)^\sigma = \chi_{\lambda^\sigma}(xy)$.
If $\lambda$ is $\sigma$-fixed, it is now immediate that $\chi_\lambda$ is
$\sigma$-fixed. Conversely, if $\chi_\lambda$ is $\sigma$-fixed and we
write $\mu = \lambda^\sigma$, then we have $\chi_\lambda = \chi_\mu$,
and thus the $E$-orbit containing $\lambda$ is invariant under $\sigma$.
Since $\sigma$ has $p$-power order, however, and the $E$-orbit
containing $\lambda$ has size $e$, which is not divisible by $p$, it follows
that every member of this $E$-orbit is $\sigma$-fixed, and in particular
$\lambda$ is $\sigma$-fixed, as desired.
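The key step above, namely that an automorphism of $p$-power order commuting with a transitive action on a set of size prime to $p$ must fix that set pointwise once it leaves the set invariant, can be illustrated in a toy model. The Python sketch below is entirely illustrative and not taken from the paper: the parameters $e=6$, $p=5$ and the cyclic model of the $E$-action are my choices. It enumerates all permutations of $\{0,\dots,e-1\}$ that commute with a transitive cyclic shift and have $p$-power order, and finds only the identity.

```python
from itertools import permutations

e, p = 6, 5                     # orbit size e with p not dividing e
pts = tuple(range(e))
shift = lambda x: (x + 1) % e   # transitive cyclic action (toy model of E)

def commutes(perm):
    # perm commutes with the shift action
    return all(perm[shift(x)] == shift(perm[x]) for x in pts)

def order(perm):
    # multiplicative order of perm in the symmetric group
    k, cur = 1, perm
    while cur != pts:
        cur = tuple(perm[cur[x]] for x in pts)
        k += 1
    return k

p_powers = {p**i for i in range(5)}
good = [s for s in permutations(pts) if commutes(s) and order(s) in p_powers]
print(good == [pts])   # only the identity: such a sigma fixes every point
```

In this model the centralizer of the shift consists of the $e$ translations, whose orders divide $e$; since $p \nmid e$, the only translation of $p$-power order is the identity.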
We have now shown that as claimed, the $\sigma$-fixed members of $\irr B$
are exactly the characters $\chi_j$ with $1 \le j \le e$ and the characters
$\chi_\lambda$, where $\lambda$ is $\sigma$-fixed. Exactly the same
reasoning applies to the block $b$, and the proof is complete.\hfill\qed
\vglue9pt
Of course, one could combine our Conjectures~A and~C. Perhaps, for
example, it is true that for each integer $k$ not divisible by $p$ and each
appropriate field automorphism $\sigma$, the groups $G$ and
$N = \norm GP$ always have equal numbers of $\sigma$-fixed characters
with degrees congruent modulo $p$ to $\pm k$. Similarly, one could
combine our Conjectures~B and~D, but we will refrain from stating these
composite conjectures formally.
\bigskip\bigskip
\references
1
\name{J.\ L.\ Alperin}, The main problem of block theory, {\it Proc.\
of the Conference on Finite Groups\/} (Univ. of Utah, Park City, Utah,
1975),
341--356, Academic Press, New York, 1976.
2
\name{M.\ Brou\'e}, Isom\'etries parfaites, types de blocs, cat\'egories
d\'eriv\'ees, {\it Ast{\rm \'{\it e}}risque\/} {\bf 181--182} (1990), 61--92.
3
\name{E.\ C.\ Dade}, Blocks with cyclic defect groups, {\it Ann.\ of
Math\/}.\
{\bf 84} (1966), 20--48.
4
\name{P.\ Fong}, The Isaacs-Navarro conjecture for symmetric groups, {\it J. Algebra}, to appear.
5
\name{P.\ Fong} and \name{B.\ Srinivasan}, The blocks of finite general linear and
unitary groups, {\it Invent. Math}.\/ {\bf 69} (1982), 109--153.
6
\name{I.\ M.\ Isaacs}, Characters of solvable and symplectic groups,
{\it Amer.\ J.\ Math\/}.\ {\bf 95} (1973), 594--635.
7
\bibline, {\it Character Theory of Finite Groups\/}, Dover Publ.\ Inc.,
New York, 1994.
8
\name{I.\ M.\ Isaacs} and \name{G.\ Navarro}, Characters of $p'$-degree of
$p$-solvable groups, {\it J.\ Algebra\/}, {\bf 246} (2001), 394--413.
9
\name{J.\ McKay}, A new invariant for simple groups, {\it Notices
Amer.\
Math.\ Soc\/}.\ {\bf 18} (1971), 397.
10 \bibline, Irreducible representations of odd degree, {\it J.\
Algebra\/}
{\bf 20} (1972), 416--418.
11 \name{R.\ A.\ Wilson}, The McKay conjecture is true for the sporadic
simple groups, {\it J.\ Algebra\/} {\bf 207} (1998), 294--305.
\endreferences
\bye
| {
"timestamp": "2004-11-08T19:27:11",
"yymm": "0411",
"arxiv_id": "math/0411171",
"language": "en",
"url": "https://arxiv.org/abs/math/0411171",
"abstract": "Let $G$ be an arbitrary finite group and fix a prime number $p$. The McKay conjecture asserts that $G$ and the normalizer in $G$ of a Sylow $p$-subgroup have equal numbers of irreducible characters with degrees not divisible by $p$. The Alperin-McKay conjecture is a version of this as applied to individual Brauer $p$-blocks of $G$. We offer evidence that perhaps much stronger forms of both of these conjectures are true.",
"subjects": "Group Theory (math.GR)",
"title": "New refinements of the McKay conjecture for arbitrary finite groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9648551515780319,
"lm_q2_score": 0.7341195152660687,
"lm_q1q2_score": 0.708318996178434
} |
https://arxiv.org/abs/1108.2684 | Gabor frames with rational density | We consider the frame property of the Gabor system $G(g,\alpha,\beta) = \{e^{2\pi i\beta nt} g(t-\alpha m) : m,n \in \mathbb{Z}\}$ for the case of rational oversampling, i.e. $\alpha,\beta \in \mathbb{Q}$. A 'rational' analogue of the Ron-Shen Gramian is constructed, and we prove that for any odd window function $g$ the system $G(g,\alpha,\beta)$ does not generate a frame if $\alpha\beta = (n-1)/n$. Special attention is paid to the first Hermite function $h_1(t) = te^{-\pi t^2}$. | \section{Introduction}\label{S:I}
One of the fundamental problems of Gabor analysis can be stated as
follows: given a window function $g\in L^2({\mathbb R})$, determine the set of
lattice parameters $\alpha,\beta {>}0$ such that the Gabor system
$\mathcal{G}(g{,}\alpha{,}\beta) {=} \{e^{2i\pi \beta n t}
g(t{-}\alpha m){:} m{,}n {\in} {\mathbb Z}\}$ forms a frame in $L^2({\mathbb R})$. The
frame set of $g$ is defined as
\begin{equation*}
\mathcal{F}(g):= \{(\alpha,\beta)\in {\mathbb R}^2_+ :
\mathcal{G}(g,\alpha,\beta) \ \mbox{is a frame for} \ L^2({\mathbb R})\}.
\end{equation*}
\indent We recall some known facts about the frame set
$\mathcal{F}(g)$, following \cite{kG01} and (in more compressed form)
\cite{kG11}. Under mild conditions, precisely if $g$ is in the
Feichtinger algebra $M^1$, the set $\mathcal{F}(g)$ is open in ${\mathbb R}^2_+$
and contains a neighborhood of the origin. Fundamental density
theorems \cite{iD92,kG01,cH07} together with a version of the
uncertainty principle \cite{jB95,wC06} assert that, moreover,
$\mathcal{F}(g)\subset \Pi_+:= \{(\alpha, \beta)\in {\mathbb R}^2_+ :
\alpha\beta < 1\}$ for $g\in M^1$.\\ \indent Until very recently, only a few
functions were known for which
$\mathcal{F}(g)=\Pi_+$. The list included the Gaussian $g(t)=e^{-\pi t^2}$
\cite{yL92,kS92a,kS92b}, the hyperbolic secant $g(t)=
(e^t+e^{-t})^{-1}$, one- and two-sided exponential functions
$g(t)=e^{-t} {\mathbf 1}_{{\mathbb R}_+}$ (in this case $\alpha \beta = 1$ also
generates a frame) and $g(t)=e^{-|t|}$ \cite{aJ96,aJ02}, as well as
their shifts, dilates, and Fourier transforms. A breakthrough was
achieved in \cite{kG11} where the authors constructed an infinite
family of functions for which $\mathcal{F}(g)=\Pi_+$ by proving that
any totally positive function of finite type possesses this property.\\
\indent On the other hand it is shown in \cite{aJ03} that the set
$\mathcal{F}(g)$ may have a rather complicated structure even for such a
``simple'' function as the characteristic function $g= {\mathbf
1}_I$ of an interval.\\ \indent In this article we attempt to study
the set $\mathcal{F}(g)$ for some cases when $\mathcal{F}(g)\neq
\Pi_+$ {\em and} $g$ is well concentrated both in time and
frequency. Our primary objective is the first Hermite function
$h_1(t)=te^{-\pi t^2}$. This choice is motivated by the uncertainty
principle ($h_1$ minimizes the Heisenberg uncertainty among all
functions which are orthogonal to the Gaussian) and also
recent results regarding vector-valued Gabor frames \cite{kG08}.\\
\indent The article is organized as follows. The next section
contains notation and basic facts from Gabor analysis which will be
used in the sequel. The whole analysis is carried out for the case of
rational oversampling: $\alpha\beta = p/q \in {\mathbb Q}$. In Section
\ref{S:III} we prove that for {\em any} odd function $g\in M^1$ (in
particular $h_1$ of course) the set $\mathcal{F}(g)$ does NOT contain
the union of hyperbolas:
\begin{equation}\label{not}
\alpha \beta = \frac{n-1}n \ \Rightarrow \ (\alpha,\beta) \not\in
\mathcal{F}(g), \ n=2,3, \ldots \ .
\end{equation}
The proof is based on analysis of the vector-valued Zak transform (see
\cite{mZ97} and also \cite[ch.~8]{kG01}) which represents the frame
operator as matrix multiplication in a space of vector-valued
functions. In the next section we factorize the matrix of the
vector-valued Zak transform and extract a factor which is a rational
analogue of the well-known Ron-Shen Gramian (see \cite{aR97} and also
\cite{kG01}). In Section \ref{S:V} we conjecture that condition
\eqref{not} is the only restriction on the set $\mathcal{F}(h_1)$:
\begin{equation*}
\mathcal{F}(h_1)= \{(\alpha,\beta)\in \Pi_+: \ \alpha\beta\neq \frac{n-1}n, \
n=2,3, \ldots \ \}.
\end{equation*}
Unfortunately we are not able to prove this conjecture in its full
range. We prove it analytically just for some points in $\Pi_+$ and
also provide numerical verification for a wider set of points. These
constructions are based on the rational analogue of the Gramian given
in Section \ref{S:IV}.\\ \indent The authors thank E.~Malinnikova for
useful discussions and hints.
\section{Preliminaries}\label{S:II}
In this section we recall the basic facts from Gabor analysis which
will be used later. We refer the reader to \cite{kG01} for a more
detailed presentation as well as the history of the subject.\\
\indent Given numbers $\alpha, \beta >0$ and a window function $g$ we
consider the lattice $\Lambda= \alpha {\mathbb Z} \times \beta {\mathbb Z}$ and the Gabor
system
\begin{equation*}
\gabg= \{ \pi_\lambda g: \ \lambda \in \Lambda\},
\end{equation*}
where $\pi_{\lambda}: f \mapsto e^{2\pi i b t}f(t-a)$ for $\lambda = (a,b)$
denotes the usual time-frequency shift. The Gabor frame
operator $S_{g, \Lambda}: L^2({\mathbb R}) \to L^2({\mathbb R})$ is defined as
\begin{equation*}
S_{g, \Lambda} f(t) \,{=}\, \sum_{\lambda
\,{\in}\, \Lambda} \langle f,\pi_{\lambda}g \rangle_{L^{2}(\mathbb{R})}
\pi_{\lambda}g(t), \qquad f \,{\in}\, L^2(\mathbb{R}).
\end{equation*}
If $g$ belongs to the modulation space $M^1(\mathbb{R})$ (we recall
the definition later in this section) this operator is bounded, and
$\mathcal{G}(g,\Lambda)$ is a Gabor frame if and only if the Gabor
frame operator is invertible. In this article we consider the case
$\alpha \beta \in {\mathbb Q}$. The operator $S_{g, \Lambda} $ can in this case
be realized as a
multiplication-operator in a space of vector-valued functions.\\
\indent Let $\alpha\beta = p/q$ for some relatively prime $p,q\in
\mathbb{N}$. Consider the rectangle $Q_{\alpha, p}= [0,\alpha/p)
\times [0, 1/\alpha)$ and the space of
vector-valued functions ${\mathcal H}_{\alpha,p}= L^2(Q_{\alpha,p}, {\mathbb C}^p)$.\\
\indent We remind that the Zak transform is defined as
\begin{equation}\label{eq:zak}
\mathcal{Z}_{\alpha}f(t,\omega) = \sum_{n \in
\mathbb{Z}} f(t - \alpha n)e^{2\pi i n \alpha \omega}.
\end{equation}
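To make the definition concrete, here is a small numerical sketch (not part of the original text; NumPy, a truncated sum, and a Gaussian window chosen for its fast decay are my assumptions) that evaluates \eqref{eq:zak} and checks the quasi-periodicity relation $\mathcal{Z}_{\alpha}f(t+\alpha,\omega)=e^{2\pi i\alpha\omega}\mathcal{Z}_{\alpha}f(t,\omega)$:

```python
import numpy as np

def zak(f, t, w, alpha, N=30):
    # Truncated Zak transform: sum_n f(t - alpha*n) e^{2 pi i n alpha w}
    n = np.arange(-N, N + 1)
    return np.sum(f(t - alpha * n) * np.exp(2j * np.pi * n * alpha * w))

g = lambda t: np.exp(-np.pi * t**2)   # Gaussian: fast decay justifies truncation
alpha, t, w = 1.0, 0.3, 0.7
lhs = zak(g, t + alpha, w, alpha)
rhs = np.exp(2j * np.pi * alpha * w) * zak(g, t, w, alpha)
print(abs(lhs - rhs))   # quasi-periodicity in t: the difference is ~ 0
```

The identity follows from shifting the summation index by one, which the truncation affects only by terms of size $g(\pm N\alpha)$.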
Following \cite[ch.~8]{kG01} we consider the {\em vector-valued} Zak
transform $\overrightarrow{\cZ}_{\alpha} : L^2({\mathbb R}) \to {\mathcal H}_{\alpha,p}$
defined as
\begin{equation*}
\overrightarrow{\cZ}_{\alpha}f(x,\omega)
= \big( \cZ_\alpha f(x+\frac{\alpha}{p} r, \omega) \big)_{r=0}^{p-1}, \quad (x,\omega)
\in Q_{\alpha,p}.
\end{equation*}
The vector-valued Zak transform is up to normalization a unitary
mapping between $L^2({\mathbb R})$ and ${\mathcal H}_{\alpha,p}$.\\ \indent Denote also
\begin{equation*}
A_r^s(x,\omega)= \alpha \sum_{j=0}^{q-1} \overline {\mathcal{Z}_{\alpha}
g(x+\frac{\alpha}{p}s, \omega - \beta j)} \mathcal{Z}_{\alpha} g
(x+\frac{\alpha}{p} r,\omega - \beta j) e^{2\pi i j (r-s)/q},
\end{equation*}
and consider the $p\times p$ matrix function
\begin{equation*}
\mathcal A (x,\omega ) = \left ( A_r^s(x,\omega) \right )_{r,s =
0}^{p-1}, \ \ (x,\omega) \in Q_{\alpha,p}.
\end{equation*}
\noindent{\bf Theorem A} (Zibulskii-Zeevi) (see \cite{mZ97} and also
Theorem 8.3.3. in \cite{kG01}). {\em With the above assumptions we
have }
\begin{equation*}
\overrightarrow{\cZ}_{\alpha} (S_{g, \alpha,\beta } f)
(x,\omega)= \mathcal A (x,\omega ) \overrightarrow{\cZ}_{\alpha}
f(x,\omega),
\end{equation*}
{\em for almost all }$(x,\omega)\in Q_{\alpha,p}$.
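As a sanity check on Theorem A, the matrix $\mathcal{A}(x,\omega)$ can be assembled numerically; since it represents the (self-adjoint, nonnegative) frame operator, it should be Hermitian and positive semidefinite. The sketch below is an illustration only: the Gaussian window, the choice $\alpha\beta = 2/3$, the truncation of the Zak sums, and the sample point $(x,\omega)$ are my assumptions.

```python
import numpy as np

def zak(f, t, w, alpha, N=25):
    n = np.arange(-N, N + 1)
    return np.sum(f(t - alpha * n) * np.exp(2j * np.pi * n * alpha * w))

g = lambda t: np.exp(-np.pi * t**2)    # Gaussian window
alpha, p, q = 1.0, 2, 3                # alpha*beta = p/q = 2/3
beta = p / (q * alpha)
x, w = 0.17, 0.39                      # an arbitrary point in Q_{alpha,p}
Z = lambda r, j: zak(g, x + alpha * r / p, w - beta * j, alpha)
A = np.array([[alpha * sum(np.conj(Z(s, j)) * Z(r, j)
                           * np.exp(2j * np.pi * j * (r - s) / q)
                           for j in range(q))
               for s in range(p)] for r in range(p)])
print(np.allclose(A, A.conj().T))            # Hermitian
print(np.linalg.eigvalsh(A).min() > -1e-10)  # positive semidefinite
```

Both properties are forced by the factorization of $\mathcal{A}$ as a Gramian-type product, so they hold exactly up to rounding error.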
\medskip\\ In what follows we assume that $g $ belongs to the
modulation space $M^1(\mathbb{R})$.
\begin{De}\label{De:M}
The modulation space $M^1(\mathbb{R})$ consists of functions $g$ for
which the norm
\begin{equation*}
\| g \|_{M^1(\mathbb{R})} = \int_{\mathbb{R}} \int_{\mathbb{R}}
|V_fg(x,\omega)|\; dx \; d\omega
\end{equation*}
is finite for some (or equivalently all) non-trivial function $f$ in the
Schwartz space $\mathcal{S}({\mathbb R})$.
\end{De}
If $g\in M^1(\mathbb{R})$ then $\mathcal{Z}_{\alpha} g$ is continuous.\medskip
\begin{Cor}
The Gabor system $\mathcal{G}(g,\Lambda)$ is a frame in $L^2({\mathbb R})$ if
and only if
\begin{equation}\label{framea}
\mbox{det}{\mathcal A}(x,\omega) \neq 0, \qquad (x,\omega
)\in Q_{\alpha,p}.
\end{equation}
\end{Cor}
\medskip We factorize the matrix $\mathcal{A}$ in order to make
condition \eqref{framea} more transparent. Consider the column vectors
\begin{equation*}
X^{j}(x,\omega) = \left ( X^{j}_r(x,\omega)\right
)_{r=0}^{p-1}, \qquad \ j=0,1,\ldots \ , q-1
\end{equation*}
where
\begin{equation}\label{column2}
X^{j}_r(x,\omega)= \mathcal{Z}_{\alpha} g(x+\frac{\alpha r}p,
\omega-\beta j)e^{2\pi i jr/q},
\end{equation}
and the $p\times q$ matrix
\begin{align*}
\mathcal {Q}(x,\omega) = \left ( X^{j} \right )_{j=0}^{q-1}=\Big(
\mathcal{Z}_{\alpha}g(x + \frac{\alpha r}{p}, \omega - \beta j)
e^{2\pi i j r/q} \Big)_{r = 0,j=0}^{p-1,q-1}.
\end{align*}
Let $\mathcal{Q}^T$ denote the conjugate transpose of
$\mathcal{Q}$. Clearly
\begin{equation*}
\mathcal {A}(x,\omega)= \mathcal{Q}(x,\omega)
\mathcal{Q}^T(x,\omega).
\end{equation*}
We have
\begin{equation*}
\mathcal A (x,\omega )
\overrightarrow{\cZ}_{\alpha} f(x,\omega) = \sum_{j=0}^{q-1}
\langle \overrightarrow{\cZ}_{\alpha} f(x,\omega), X^{j}(x,\omega) \rangle
X^{j}(x,\omega),
\end{equation*}
so the condition \eqref{framea} is met if and only if for each
$(x,\omega)\in Q_{\alpha,p}$ the vectors $X^{j}(x,\omega), \
j=0,1,\ldots q-1$ span ${\mathbb C}^p$.\medskip
\begin{Cor}\label{cor:frameb}
Let $\alpha \beta = \frac p q \in {\mathbb Q}$ and $g\in M^1(\mathbb{R})$. In order
that $\mathcal{G}(g,\Lambda)$ be a frame in $L^2({\mathbb R})$ it is necessary and
sufficient that
\begin{equation*}
\mbox{rank}\,\mathcal {Q}(x,\omega)= p, \qquad \mbox{for all} \
(x,\omega) \in Q_{\alpha, p}.
\end{equation*}
\end{Cor}
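The rank criterion can be probed numerically on a grid. The sketch below is an illustration, not from the paper: it uses the Gaussian window with $\alpha\beta = 2/3$, where the frame property is classical, so the minimal singular value of $\mathcal{Q}$ should stay bounded away from zero; the grid, truncation, and tolerances are my choices.

```python
import numpy as np

def zak(f, t, w, alpha, N=25):
    n = np.arange(-N, N + 1)
    return np.sum(f(t - alpha * n) * np.exp(2j * np.pi * n * alpha * w))

g = lambda t: np.exp(-np.pi * t**2)   # Gaussian: a frame whenever alpha*beta < 1
alpha, p, q = 1.0, 2, 3               # alpha*beta = 2/3
beta = p / (q * alpha)
min_sv = np.inf
for x in np.linspace(0, alpha / p, 8, endpoint=False):
    for w in np.linspace(0, 1 / alpha, 8, endpoint=False):
        Q = np.array([[zak(g, x + alpha * r / p, w - beta * j, alpha)
                       * np.exp(2j * np.pi * j * r / q)
                       for j in range(q)] for r in range(p)])
        min_sv = min(min_sv, np.linalg.svd(Q, compute_uv=False).min())
print(min_sv > 0)   # rank Q = p at every sampled point
```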
In the next section we use this condition in order to study
Gabor systems generated by odd functions.
\section{Gabor frames generated by odd functions}\label{S:III}
In this section we prove the following
\begin{Th}\label{th:symmetry}
Let $g\in M^1(\mathbb{R})$ be an odd function and $\alpha \beta =
\frac {n-1} n$, $n = 2, 3, \dots $. Then $\mathcal{G}(g,\Lambda)$
cannot form a frame in $L^2({\mathbb R})$.
\end{Th}
\begin{proof} We will prove that
\begin{equation}\label{smallrank}
\rank \mathcal Q(0,0)< n-1.
\end{equation}
The result then follows from Corollary \ref{cor:frameb}.\\ \indent
Relation \eqref{smallrank} will follow from the fact that for odd
windows the elements of the matrix $\mathcal Q(0,0)$ possess
additional symmetries. For simplicity we will assume $\alpha=1$.
\begin{Lemma}\label{l:symmetry}
Let $g\in M^1(\mathbb{R})$ be an odd function, and let
$\alpha=1$, $\beta=p/q \in {\mathbb Q}$ and $ X^j_s=X^{j}_s(0,0), $ where
the functions $X^{j}_s(x,\omega)$ are defined by \eqref{column2}.
Then
\begin{equation}\label{symmetry}
X_s^j=-X^{q-j}_{p-s}, \ s=0,1,\ldots \ , p-1, \
j=0,1, \ldots \ , q-1.
\end{equation}
\end{Lemma}
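The symmetry \eqref{symmetry} is easy to confirm numerically. The following sketch is illustrative only; the truncated Zak sums, the window $h_1$, and the sample choice $p=3$, $q=5$ are my assumptions. It evaluates $X_s^j$ directly from \eqref{column2} at $(0,0)$ and checks $X_s^j=-X^{q-j}_{p-s}$:

```python
import numpy as np

def zak(f, t, w, N=40):
    n = np.arange(-N, N + 1)
    return np.sum(f(t - n) * np.exp(2j * np.pi * n * w))   # alpha = 1

h1 = lambda t: t * np.exp(-np.pi * t**2)   # odd window
p, q = 3, 5                                # beta = p/q
beta = p / q
X = lambda s, j: zak(h1, s / p, -beta * j) * np.exp(2j * np.pi * j * s / q)
err = max(abs(X(s, j) + X(p - s, q - j)) for s in range(p) for j in range(q))
print(err)   # X_s^j = -X_{p-s}^{q-j}: err is ~ 0
```

Here $X$ is defined by the same formula for the boundary indices $s=p$ and $j=q$, which the quasi-periodicity of the Zak transform reduces to the stated range.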
The proof of the lemma follows readily from the definition of the
Zak transform and also from the fact that $g$ is odd. We will use
this lemma for $q-p=1$.\\ \indent First we consider the case
$q=2k+2$, $p=2k+1$ ($q$ is an even number). In this case $\mathcal
Q (0,0)$ is a $(2k+1)\times (2k+2)$ matrix.\\ \indent We need
additional relations for the elements of the zero row, the zero
column and also the $(k+1)$-th column of $\mathcal Q (0,0)$.
Namely
\begin{align}
X_0^0=0, \ X^{k+1}_0=0, \ X_0^j=-X_0^{q-j} \quad &\text{-- zero
row};\label{zerorow}\\
X^0_s=-X^0_{p-s}, \ s=1, \ldots \ ,p-1 \quad
&\text{-- zero column};\label{zerocolumn}\\
X^{k+1}_s=-X^{k+1}_{p-s}, \ s=1,\ldots \ , p-1 \quad &\mbox{--
$(k+1)$-th column.}\label{nplusonecolumn}
\end{align}
As in Lemma \ref{l:symmetry} these relations follow readily from
the definition of the Zak transform.\\ \indent Let $R_s$ denote the
$s$-th row of $\mathcal Q(0,0)$. Consider the row vectors
$e_l=(e_l^j)_{j=0,1,\ldots ,2k+1}$, $l=1,2, \ldots \ ,k$, where
$e_l^j=0$ for $j\neq l+1, 2k+2-l$, $e_l^{l+1}=1$, and
$e_l^{2k+2-l}=-1$.\\ \indent From the relations \eqref{symmetry},
\eqref{zerorow}, \eqref{zerocolumn}, and \eqref{nplusonecolumn} it
is easy to see that all rows of $\mathcal Q$ belong to ${\mathcal S
= \mbox{span} \left \{ \{R_s\}_{s=1}^k \cup \{ e_l\}_{l=1}^k
\right \}}$.\\ \indent Indeed the row $R_0$ has the form
\begin{equation}\label{form}
R=(0, \alpha_1, \ldots \ , \alpha_k, 0, -\alpha_k,
\ldots , \ -\alpha_1)
\end{equation}
for some $\alpha_1, \ldots \ , \alpha_k$, thus the vector can be
spanned by $ \{ e_l\}_{l=1}^{k}$. The rows $R_s$, $s=1,\ldots \ ,k$,
belong to the spanning set themselves. So it suffices to prove that
the rows $R_{p-s}$, $s=1, \ldots \ , k$, also belong to $\mathcal S$ or,
equivalently, that the vectors $R_s+R_{p-s}$ belong to $\mathcal S$. The
latter is evident since, according to \eqref{symmetry} and
\eqref{zerocolumn}, these vectors also have the form
\eqref{form}.\\ \indent This completes the proof of the Theorem in
the case $q=2k+2$, $p=2k+1$.\\ \indent Consider now the case
$p=2k$, $q=2k+1$ ($q$ is an odd number). Once again, in addition
to the general relation \eqref{symmetry}, we need relations for
selected rows and columns:
\begin{align*}
X_0^j=-X^{q-j}_0, \quad &\mbox{-- zero row};\\
X_{s}^0=-X^0_{p-s} \quad &\mbox{-- zero column};\\
X^j_k=-X_k^{q- j} \quad &\mbox{-- $k$-th row}.
\end{align*}
Consider now the rows $e_l=(e_l^j)_{j=0,1, \ldots \ , 2k}$,
$l=1,2,\ldots \ , k$ with $e_l^j=0$ if $j\neq l, 2k-l+1$, $e_l^{l}=1$,
and $e_l^{2k-l+1}=-1$.\\ \indent Applying the same arguments as in
the previous case we can see that the set of $2k-1$ vectors
$\{R_s\}_{s=1,\ldots \ , k-1} \cup \{e_l\}_{l=1, \ldots \ , k}$
spans all rows of the matrix $\mathcal Q(0,0)$.\\ \indent This
completes the proof of Theorem \ref{th:symmetry}.
\end{proof}
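Theorem \ref{th:symmetry} can also be observed numerically: for the odd window $h_1$ and $\alpha\beta=(n-1)/n$ the matrix $\mathcal{Q}(0,0)$ is rank deficient. The sketch below is illustrative (the truncation, tolerance, and range of $n$ are my choices) and checks this for $n=2,\dots,5$ with $\alpha=1$:

```python
import numpy as np

def zak(f, t, w, N=40):
    n = np.arange(-N, N + 1)
    return np.sum(f(t - n) * np.exp(2j * np.pi * n * w))   # alpha = 1

h1 = lambda t: t * np.exp(-np.pi * t**2)
deficient = []
for n in range(2, 6):              # alpha*beta = (n-1)/n
    p, q = n - 1, n
    beta = p / q
    Q = np.array([[zak(h1, s / p, -beta * j) * np.exp(2j * np.pi * j * s / q)
                   for j in range(q)] for s in range(p)])
    deficient.append(np.linalg.matrix_rank(Q, tol=1e-8) < p)
print(deficient)   # Q(0,0) is rank deficient for every n
```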
\section{Factorization of the Zibulskii-Zeevi matrix}\label{S:IV}
In this section we study the Zibulskii-Zeevi matrix $\mathcal
Q(x,\omega)$. Our goal is to reduce it to a simpler $p\times q$
matrix having the same rank as $\mathcal Q$. One can consider this
simpler matrix as an analogue of the Ron-Shen Gramian \cite{aR97} for
rationally oversampled Gabor systems.\\ \indent We believe this
reduction is interesting by itself, it will be also used in the next
section in order to study Gabor frames generated by the first Hermite
function.
\begin{Th}\label{th:LNrep}
Let the window function $g$ belong to $M^1(\mathbb{R})$ and
$\alpha\beta = \frac p q \in {\mathbb Q}$. The system $\gabg$ forms a frame
in $L^2({\mathbb R})$ if and only if the matrix
\begin{equation}\label{eq:LNrep}
\mathcal P (x,\omega)= \left (Z_{\alpha q}g(x+\frac
\alpha p (tp+sq), \omega) \right )_{s=0,\ t=0 }^{p-1,\ q-1}
\end{equation}
has rank $p$ for all $(x,\omega)\in Q_{\alpha,p}$.
\end{Th}
\begin{proof}
Fix $(x,\omega)\in Q_{\alpha,p}$ and let
\begin{small}
\begin{equation}\label{eq:01}
X_s^j=\mathcal{Z}_{\alpha} g(x {+} \frac \alpha p s, \omega {-} \beta j) e^{2i\pi
\frac {js}q} = \sum_n g \big(x {+} \frac \alpha p (s {-}
pn)\big) e^{2i\pi \alpha n \omega} e^{2i\pi j (\frac s q
{-} \alpha \beta n)}
\end{equation}
\end{small}
be the corresponding element of the matrix $\mathcal Q
(x,\omega)$.\\ \indent For each $s=0,1,\ldots \ , p-1$,
$t=0,1,\ldots \ , q-1$ let
\begin{equation*}
L(s):=\{l: l=s-pn, n\in {\mathbb Z}\}, \ L(s,t)=\{l\in L(s): l=t+mq, m\in
{\mathbb Z}\}.
\end{equation*}
Setting $l=s-pn$ in \eqref{eq:01} we obtain
\begin{align}\label{eq:02}
X_s^j=\sum_{l\in L(s)}g
\big(x+\frac \alpha p
l\big)& e^{2i\pi \frac {jl}q +2i \pi \alpha \omega \frac {s-l}p}\notag\\ &=
\sum_{t=0}^{q-1} e^{2i \pi \alpha \omega \frac s p} \sum_{l\in
L(s,t)}{g (x+\frac \alpha p l) e^{- 2i\pi\alpha \omega
\frac l p} } e^{2i\pi \frac {jt}q}.
\end{align}
For each $t\in \{0, \ldots \ , q-1\}$ and $s\in \{ 0, \ldots \
,p-1\}$ we choose numbers $k_t \in \{0, \ldots \ , q-1\}$ and $m_s
\in \{ 0, \ldots \ ,p-1\}$ such that
\begin{equation*}
k_t p \equiv t \pmod q, \quad m_s q \equiv s \pmod p.
\end{equation*}
One can easily see that $L(s,t)=\{k_tp+m_sq - pq m : m\in {\mathbb Z}\}$, so
one can rewrite \eqref{eq:02} as
\begin{multline}
\label{eq:03} X_s^j= e^{2i\pi \alpha \omega\frac{s-m_sq}p}
\sum_{t=0}^{q-1} e^{2i \pi \alpha \omega k_t} e^{2i\pi k_t j
\frac pq} \times \\ \underbrace { \sum_{m\in {\mathbb Z}} g\left (
x+\frac \alpha p(k_tp +m_sq) - m\alpha q \right ) e^{2i \pi
\omega m \alpha q} }_ {= Z_{\alpha q}g(x+\frac \alpha p
(k_tp+m_sq), \omega)}.
\end{multline}
\indent Since the numbers $k_t$ run through the set $\{0, \ldots
\ , q-1\}$ as $t$ runs through this set, we can rewrite
\eqref{eq:03} as
\begin{multline*}
X_s^j= e^{2i\pi \alpha \omega\frac{s-m_sq}p} \sum_{\tau =0}^{q-1}
e^{2i \pi \alpha \omega \tau} e^{2i\pi \tau j \frac pq} \times \\
\underbrace { \sum_{m\in {\mathbb Z}} g\left ( x+\frac \alpha p(\tau p
+m_sq) - m\alpha q \right ) e^{2i \pi \omega m \alpha q} }_
{= Z_{\alpha q}g(x+\frac \alpha p (\tau p+m_sq), \omega)},
\end{multline*}
or
\begin{equation*}
\mathcal Q (x,\omega)= \mbox{diag}\{e^{2i\pi \alpha
\omega\frac{s-m_sq}p} \}_{s=0}^{p-1} \ \tilde{\mathcal P}(x,\omega) \
\mbox{diag}\{ e^{2i \pi \alpha \omega \tau} \}_{\tau=0}^{q-1} \ W,
\end{equation*}
where
\begin{equation*}
\tilde {\mathcal P}(x,\omega)= \left ( Z_{\alpha q}g(x+\frac
\alpha p (\tau p+m_sq), \omega) \right )_{s=0 \ \tau=0}^{p-1 \ q-1},
\quad W= \left ( e^{2i\pi \tau j \frac pq} \right )_{\tau,j=0}^{q-1}.
\end{equation*}
Clearly the matrices $ \tilde {\mathcal P}(x,\omega)$ and $
{\mathcal Q}(x,\omega)$ have the same rank. On the other hand,
since the numbers $m_s$ run through the whole set $\{0,1,\ldots \ ,
p-1\}$ as $s$ runs through this set, the matrices $ \tilde {\mathcal
P}(x,\omega)$ and $ {\mathcal P}(x,\omega)$ differ only by
permutations of their rows and hence have the same rank.
\end{proof}
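Theorem \ref{th:LNrep} invites a direct numerical consistency check: $\mathcal{P}$ and $\mathcal{Q}$ should always have equal rank. The sketch below is illustrative only; the window $h_1$, the choice $p/q=2/3$, the truncation, and the test points are my assumptions. It compares the two ranks at the origin, where both matrices degenerate, and at a few random points:

```python
import numpy as np

def zak(f, t, w, per, N=25):
    n = np.arange(-N, N + 1)
    return np.sum(f(t - per * n) * np.exp(2j * np.pi * n * per * w))

h1 = lambda t: t * np.exp(-np.pi * t**2)
alpha, p, q = 1.0, 2, 3            # alpha*beta = 2/3
beta = p / (q * alpha)
rng = np.random.default_rng(0)
pts = [(0.0, 0.0)] + [(rng.uniform(0, alpha / p), rng.uniform(0, 1 / alpha))
                      for _ in range(4)]
same_rank = []
for x, w in pts:
    Q = np.array([[zak(h1, x + alpha * s / p, w - beta * j, alpha)
                   * np.exp(2j * np.pi * j * s / q)
                   for j in range(q)] for s in range(p)])
    P = np.array([[zak(h1, x + alpha / p * (t * p + s * q), w, alpha * q)
                   for t in range(q)] for s in range(p)])
    same_rank.append(np.linalg.matrix_rank(Q, tol=1e-8)
                     == np.linalg.matrix_rank(P, tol=1e-8))
print(same_rank)   # ranks agree, including at (0,0) where both are deficient
```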
\section{Example}\label{S:V}
In \cite{kG08} it is proved that the Gabor system
$\mathcal{G}(h_1,\alpha,\beta)$ is a frame if $\alpha \beta <
\frac{1}{2}$, and fails to be a frame if $\alpha \beta =
\frac{1}{2}$. Furthermore the authors discuss an example suggesting
that this result may be sharp.\\ \indent In this section we prove that
for at least some $\alpha,\beta$ with $\alpha \beta > \frac{1}{2}$ the
system $\mathcal{G}(h_1,\alpha,\beta)$ indeed generates a frame in
$L^2({\mathbb R})$. The proof uses the matrix-function $\mathcal{P}$
constructed in the previous section, and also a result on
\textit{diagonally-dominant} matrices.
\begin{Th}\label{35}
Let $\alpha \beta = 3/5$ and $h_1(t)=te^{-\pi t^2}$. Then the
system $\mathcal{G}(h_1,\alpha,\beta)$ is a frame in $L^2({\mathbb R})$.
\end{Th}
\begin{proof} The matrix $\mathcal P (x,\omega)$ takes the form
\begin{equation*}
\mathcal P_1 (x,\omega)= \left (\mathcal{Z}_{5\alpha} h_1(x+ \alpha
t+s\frac{5\alpha}{3}, \omega) \right )_{s,t=0 }^{2, \ 4}.
\end{equation*}
\end{equation*}
By Theorem \ref{th:LNrep} and Corollary \ref{cor:frameb} it suffices
to prove that
\begin{equation*}
\rank \mathcal P_1 (x,\omega)=3, \quad \mathrm{for\ all}\
(x,\omega) \in Q_{\alpha,3}.
\end{equation*}
\indent We split the proof into several steps.\medskip\\ {\bf a. \ }
It suffices to prove that $\mathcal{G}(h_1,\alpha,\beta)$,
$\alpha\beta = \frac{3}{5}$ is a frame for $\alpha \geq
\sqrt{\frac{3}{5}}$. The case $\beta \geq \sqrt{\frac{3}{5}}$ can be
reduced to the previous one by using the Fourier transform.\medskip\\
{\bf b. \ } Since the function $h_1$ decays fast one can approximate
$\mathcal{Z}_{5\alpha} h_1(x,\omega)$ with the maximal term of the series. In
particular the following holds
\begin{Lemma}\label{l:approximation} Let $0\leq |x| <
\frac{5\alpha}{2}$. Then
\begin{equation*}
|h_1(x) - \mathcal{Z}_{5\alpha} h_1(x,\omega)| \leq C_{5\alpha}h_1(5\alpha - |x|)
\end{equation*}
where $C_{5\alpha} = 2 + \frac{1}{h_1(5\alpha)} \sum_{n\geq 2}
\big( h_1(5\alpha n) + h_1(\frac{5\alpha(2n - 1)}{2}) \big)$.
\end{Lemma}
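The approximation behind this lemma can be probed numerically: with Gaussian-type decay, the periodization error is dominated by the nearest neighbouring term. The sketch below is illustrative; the choice $\alpha=1$, the truncation, and the sampling grid are my assumptions. It measures the worst-case deviation between $h_1(x)$ and $\mathcal{Z}_{5\alpha}h_1(x,\omega)$ on $|x|\le 2.4$:

```python
import numpy as np

def zak(f, t, w, per, N=20):
    n = np.arange(-N, N + 1)
    return np.sum(f(t - per * n) * np.exp(2j * np.pi * n * per * w))

h1 = lambda t: t * np.exp(-np.pi * t**2)
alpha = 1.0
per = 5 * alpha
worst = max(abs(h1(x) - zak(h1, x, w, per))
            for x in np.linspace(-2.4, 2.4, 25)
            for w in np.linspace(0, 1, 7))
print(worst)   # dominated by h1(5*alpha - |x|): tiny on this range
```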
This lemma can be verified directly. For all practical purposes we
may use the bound $C_{5\alpha} \leq 2$.\medskip\\ {\bf c. \ } We will see
later in {\bf (g)} that it suffices to consider $0\leq x \leq
\frac{\alpha}{6}$. This will follow from symmetry of $h_1$ and
quasi-periodicity of the Zak transform. We split the interval
$0\leq x \leq \frac{\alpha}{6}$ in two: $0\leq x< \frac{\alpha}{12}$
and $\frac{\alpha}{12} \leq x \leq \frac{\alpha}{6}$.\medskip\\ {\bf d.
\ } Let $0\leq x<\frac{\alpha}{12}$. Consider the sub-matrix
of $\mathcal P_1 (x,\omega)$ corresponding to $t=1,2,3$. After
interchanging of the second and third row this matrix takes the form
\begin{align*}
\begin{pmatrix}
\mathcal{Z}_{5\alpha} h_1(x+\alpha,\omega) &
\mathcal{Z}_{5\alpha} h_1(x+2\alpha,\omega) &
\mathcal{Z}_{5\alpha} h_1(x+3\alpha,\omega)\\
\mathcal{Z}_{5\alpha} h_1(x+ \frac{13\alpha}{3},\omega) & \mathcal{Z}_{5\alpha} h_1(x+
\frac{16\alpha}{3},\omega) & \mathcal{Z}_{5\alpha} h_1(x+
\frac{19\alpha}{3},\omega) \\
\mathcal{Z}_{5\alpha} h_1(x+ \frac{8\alpha}{3},\omega) &
\mathcal{Z}_{5\alpha} h_1(x+\frac{11\alpha}{3},\omega) &
\mathcal{Z}_{5\alpha} h_1(x+\frac{14\alpha}{3},\omega)
\end{pmatrix}.
\end{align*}
We will use the following theorem about diagonally dominant
matrices.\medskip\\
\noindent{\bf Theorem B} (see \cite{oT49}). {\em If $(a_i^k)$ is an
$n\times n$-matrix with complex elements such that either}
\begin{itemize}
\item[(i):] \qquad$|a_i^i| > \sum_{k,k\neq i}|a_k^i|, \qquad
\qquad \qquad \qquad \qquad \;\;\; 1 \leq i \leq n$
\end{itemize}
{\em or}
\begin{itemize}
\item[(ii):] \qquad$|a_{i}^i||a_{j}^j| > \Big(\sum_{k,k\neq
i}|a_{k}^i|\Big) \Big(\sum_{k,k\neq j}|a_{k}^j|\Big),\qquad
1\leq i,j \leq n,\;i\neq j$
\end{itemize}
{\em then $\mathrm{det}(a_{ik}) \neq 0$.}
\medskip\\
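Conditions (i) and (ii) of Theorem B are straightforward to implement. The helper below is an illustrative sketch (not taken from \cite{oT49}; the sample matrix is mine) that checks strict diagonal dominance and its pairwise relaxation:

```python
import numpy as np

def taussky_nonsingular(M):
    """Sufficient conditions for det(M) != 0:
    (i) strict diagonal dominance, or (ii) its pairwise relaxation."""
    A = np.abs(np.asarray(M))
    d = np.diag(A)
    off = A.sum(axis=0) - d            # off-diagonal column sums
    if np.all(d > off):                # condition (i)
        return True
    n = len(d)
    return all(d[i] * d[j] > off[i] * off[j]
               for i in range(n) for j in range(n) if i != j)   # condition (ii)

M = np.array([[3.0, 1.0, 0.5],
              [0.5, 2.0, 0.5],
              [0.2, 0.3, 1.0]])
print(taussky_nonsingular(M))   # (i) fails in the last column, (ii) holds
```

Either row or column dominance may be used, since a matrix and its transpose have the same determinant.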
\noindent{\bf e. \ } We will check that Theorem B can be used to
prove invertibility of the matrix in \textbf{(d)}. We emphasize the
main steps of the proof.
\begin{itemize}
\item Using the fact that $|\mathcal{Z}_{5\alpha} h_1(x,\omega)|$ is a
$5\alpha$-periodic function one can represent the absolute values
of the elements of the matrix in \textbf{(d)} as
\begin{align*}
\begin{pmatrix}
|\mathcal{Z}_{5\alpha} h_1(x + \alpha,\omega)| & |\mathcal{Z}_{5\alpha} h_1(x +
2\alpha,\omega)| & |\mathcal{Z}_{5\alpha} h_1(x - 2\alpha,\omega)|\\
|\mathcal{Z}_{5\alpha} h_1(x - \frac{2\alpha}{3},\omega)| & |\mathcal{Z}_{5\alpha} h_1(x
+ \frac{\alpha}{3},\omega)| & |\mathcal{Z}_{5\alpha} h_1(x +
\frac{4\alpha}{3},\omega)|\\ |\mathcal{Z}_{5\alpha} h_1(x -
\frac{7\alpha}{3},\omega)| & |\mathcal{Z}_{5\alpha} h_1(x -
\frac{4\alpha}{3},\omega)| & |\mathcal{Z}_{5\alpha} h_1(x -
\frac{\alpha}{3},\omega)|
\end{pmatrix}.
\end{align*}
\item Using Lemma \ref{l:approximation} one can replace all elements,
except those with indices $(1,2), (1,3)$ and $(3,1)$ with the main
term of the corresponding series. We use an upper bound for the
remaining (non-diagonal) elements replacing $\mathcal{Z}_{5\alpha} h_1(x -
\frac{7\alpha}{3},\omega)$ and $\mathcal{Z}_{5\alpha} h_1(x \pm 2\alpha,\omega)$ with
respectively $3h_1(x - \frac{7\alpha}{3})$ and $3h_1(x \pm
2\alpha)$. Namely, for $x$ near $\frac{5 \alpha}{2}$ Lemma
\ref{l:approximation} gives
\begin{align*}
|\mathcal{Z}_{5\alpha} h_1(x,\omega)| {=} |h_1(x) {-} \mathcal{Z}_{5\alpha} h_1(x,\omega) {-}
h_1(x)| {\leq} h_1(x) {+} C_{5\alpha} h_1(5\alpha {-} |x|).
\end{align*}
Since the constant $C_{5\alpha} \approx 2$ we use the estimate
$|\mathcal{Z}_{5\alpha} h_1(x,\omega)| \leq 3h_1(x)$ for $x$ near
$\frac{5\alpha}{2}$. Since we plan to use Theorem B we use an upper
estimate for the non-diagonal elements. For the other values of $x$
the error is negligible.\\ \indent \quad We will show that the
following matrix satisfies the conditions of Theorem B.
\begin{align*}
\Big(H_i^j\Big)_{i,j=1}^3 =
\begin{pmatrix} |h_1(x + \alpha)| & 3|h_1(x +
2\alpha)| & 3|h_1(x - 2\alpha)|\\ |h_1(x -
\frac{2\alpha}{3})| & |h_1(x + \frac{\alpha}{3})|
& |h_1(x + \frac{4\alpha}{3})|\\ 3|h_1(x -
\frac{7\alpha}{3})| & |h_1(x - \frac{4\alpha}{3})|
& |h_1(x - \frac{\alpha}{3})|
\end{pmatrix}.
\end{align*}
\item For $0 \leq x \leq \frac{\alpha}{12}$ and $\sqrt{\frac{3}{5}}
\leq \alpha \leq 1$ condition (ii) in Theorem B can now be verified
by direct inspection.
\item For $0 \leq x \leq \frac{\alpha}{12}$ and $\alpha \geq 1$ we
consider the difference
\begin{align*}
H_i^i - \sum_{k,k\neq i} H_i^k,\qquad \qquad 1 \leq i \leq 3.
\end{align*}
If this expression is positive then condition (i) in Theorem B
is met. Consider first the case $i=1$. Then we need to verify
\begin{equation}\label{eq:H1}
H_1^1 - H_1^2 - H_1^3 > 0.
\end{equation}
Let $\alpha y = x$ for $0 \leq y \leq \frac{1}{12}$. Then \eqref{eq:H1}
is equivalent to
\begin{align*}
&h_1(\alpha (y + 1)) - 3h_1(\alpha (y + 2)) - 3h_1(\alpha (2 - y))\\ &>
h_1(\alpha (\frac{1}{12} + 1)) - 3h_1(2 \alpha ) - 3h_1(\alpha (2 -
\frac{1}{12}))\\ &> h_1(\frac{13\alpha}{12}) -
6h_1(\frac{23\alpha}{12}) = h_1(\frac{13\alpha}{12})\Big(1 -
6\frac{23}{13}e^{-\frac{5\pi \alpha^2}{2}}\Big)
\end{align*}
which clearly is positive for all values $\alpha \geq 1$. The proof
of the cases $i=2$ and $i=3$ follows along the same lines.
\end{itemize}
\medskip {\bf f. \ } The case $\alpha/12\leq x \leq \alpha/6$ and $\alpha \geq
\sqrt{\frac{3}{5}}$ can be treated similarly to the previous case. In
this case one can verify that the submatrix of $\mathcal{P}_1
(x,\omega)$ corresponding to the columns $t=0,2,3$ fulfills condition
(i) of
Theorem B.\medskip\\
{\bf g. \ } We prove that the case $\frac{\alpha}{6}\leq x \leq
\frac{\alpha}{3}$ can be reduced to the previous ones. By
substituting $x = \frac{\alpha}{3} - y$ we have
\begin{align*}
|\mathcal{Z}_{5\alpha} h_1(\frac{\alpha}{3}-y + \alpha t +
r\frac{5\alpha}{3},\omega)| &= |\mathcal{Z}_{5\alpha} h_1(\frac{\alpha}{3}-y +
\alpha t + \alpha(r - 1)\frac{5}{3} + \frac{5\alpha}{3},\omega)|\\
&= |\mathcal{Z}_{5\alpha} h_1(y - \alpha(t + 2) - \alpha(r-1)\frac{5}{3},\omega)|.
\end{align*}
It is clear that, up to permutations of rows and columns, the two
cases $0 \leq x \leq \frac{\alpha}{6}$ and
$\frac{\alpha}{6} \leq x \leq \frac{\alpha}{3}$ are similar.
\end{proof}
\section{Conjecture}\label{S:VI}
Calculations of the previous section become too cumbersome for
arbitrary $\alpha$,$\beta$ with $\alpha \beta \in {\mathbb Q}$, $\alpha \beta
<1$. Numerous numerical calculations lead
us to believe that the following statement is true:\medskip\\
\noindent {\bf Conjecture\quad} {\em Let $\alpha \beta <1$ and $\alpha
\beta \neq (n-1)/n$, $n=2,3,\ldots $. Then
$\mathcal{G}(h_1,\alpha,\beta)$ is a frame in $L^2({\mathbb R})$. }\medskip\\
Unfortunately the authors are at the moment not able to
prove/disprove this statement even in the case $\alpha \beta \in
{\mathbb Q}$.\\ \indent The following figure suggests that $\alpha\beta =
\frac{n-1}{n}$ are the only exceptional cases. Figure~\ref{fig:eig}
shows the minimal eigenvalue of
$\mathcal{P}(x,\omega)\mathcal{P}^T(x,\omega)$ for $0 \leq x \leq
\frac{\alpha}{2p}$ and $0 \leq \omega \leq \frac{1}{\alpha}$ for
$\alpha \beta = \frac{n-j}{n}$ for $5 \leq n \leq 201$ and $1 \leq j
\leq n-1$. With this choice of $n$ and $j$ we have $\alpha \beta <
0.995$. In the experiments we have considered $\alpha = 1$.
\begin{figure}[!h]
\begin{center}
\subfigure[]{\includegraphics[height=3cm,width=10cm]{r101}}\\
\subfigure[]{\includegraphics[height=3cm,width=10cm]{r201}}
\end{center}
\caption{Minimal eigenvalue of $\mathcal{P}\mathcal{P}^T$ for (a)
$\alpha \beta < 0.98$ and (b) $0.98 < \alpha \beta < 0.995$. In
the experiments we have used $\alpha = 1$. Note that the scaling
of the y-axis differs.}\label{fig:eig}
\end{figure}
\newpage
\bibliographystyle{plain}
% Source: https://arxiv.org/abs/2210.00279
\title{Failure-informed adaptive sampling for PINNs}
\begin{abstract}
Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs in a wide range of domains. It is noticed, however, that the performance of PINNs can vary dramatically with different sampling procedures. For instance, a fixed set of (prior chosen) training points may fail to capture the effective solution region (especially for problems with singularities). To overcome this issue, we present in this work an adaptive strategy, termed the failure-informed PINNs (FI-PINNs), which is inspired by the viewpoint of reliability analysis. The key idea is to define an effective failure probability based on the residual; then, with the aim of placing more samples in the failure region, FI-PINNs employ a failure-informed enrichment technique to adaptively add new collocation points to the training set, such that the numerical accuracy is dramatically improved. In short, similar to adaptive finite element methods, the proposed FI-PINNs adopt the failure probability as the posterior error indicator to generate new training points. We prove rigorous error bounds for FI-PINNs and illustrate their performance through several problems.
\end{abstract}
\section{Introduction}
Partial differential equations (PDEs) are important tools for modeling many real-world phenomena. Traditional numeric solvers such as finite difference method and finite element method have been widely used to solve PDEs for many decades.
However, for high-dimensional problems, the computational cost becomes prohibitively expensive when applying these methods, due to the dramatic increase in the number of grid points; the curse of dimensionality is thus inevitable. Driven by its well-documented success in machine learning, deep learning is being increasingly used in scientific computing. As a result, the area of scientific machine learning has emerged, see e.g., \cite{E_CICP,Edeep18,raissi2019physics,sirignano2018dgm,zang2020weak}.
Physics-informed neural networks (PINNs) \cite{lu2021deepxde,raissi2019physics} are one of the popular machine learning methods for solving PDEs with deep neural networks. PINNs use automatic differentiation to solve PDEs by penalizing the PDE residual in the loss function at a random set of points in the domain of interest. PINNs have been successfully applied to simulate a variety of forward and inverse problems for PDEs, see \cite{wang2021understanding,karniadakis2021physics,lu2021deepxde,GWZ_2022,GWZ_2022_JCP} and references therein. Being a meshless approach, a key issue in PINNs is how to choose the training points. Obviously, for PDEs with complex solution structures, a fixed set of (prior chosen) training points may fail to capture the effective solution region \cite{krishnapriyan2021characterizing,wang2021understanding,wang2022and}. Another limitation is that when applying PINNs to PDEs in unbounded domains, choosing an effective training set can be challenging due to the lack of prior knowledge about the PDE solution. To address these issues, several adaptive sampling strategies have been proposed \cite{daw2022rethinking,gao2021active,lu2021deepxde,subramanian2022adaptive,tang2021deep,FZZ_2022}, see also the recent review \cite{wu2022comprehensive}. Among others, the most common strategy is the residual-based adaptive refinement (RAR) method in \cite{lu2021deepxde}. The RAR method provides an insight into adaptively selecting samples based on residual errors. However, the adaptive samples in RAR are chosen from a large set of candidate points (either grid-based or sampled from a prior distribution), making it ineffective for high-dimensional problems.
Another interesting idea is the so-called deep adaptive sampling (DAS) method \cite{tang2021deep}, in which one first trains a generative model based on the residual and then uses the generative model to generate new training points. The main issue with DAS, however, is that the cost of training the generative model can be comparable to that of solving the PDE itself.
Motivated by the above discussions, we present in this work the failure-informed PINNs (FI-PINNs), which can effectively generate adaptive training points. Our approach is inspired by the viewpoint of reliability analysis. The key idea is to define an effective failure probability based on the residual; then, with the aim of placing more samples in the failure region, FI-PINNs employ a failure-informed enrichment technique to adaptively add new collocation points to the training set, in such a way that the numerical accuracy is dramatically improved. We summarize the main features of FI-PINNs as follows:
\vspace{0.25cm}
\begin{itemize}
\item In each iteration, based on the residual, we define the failure probability as an error indicator. This is similar to adaptive finite element methods, where one derives a posteriori error indicators for adjusting the mesh.
\vspace{0.25cm}
\item In each iteration, we use the simple (truncated) Gaussian model to estimate the failure probability and to generate new training points, and this can be done in a very efficient way (compared to RAR and DAS).
\vspace{0.25cm}
\item We prove a rigorous error bound for FI-PINNs in terms of the error tolerance and the failure probability tolerance.
\vspace{0.25cm}
\item We test FI-PINNs for various PDE problems, which include PDEs with singular solutions, PDEs in unbounded domains, and time-dependent PDEs. It is shown that for all the test problems, FI-PINNs can effectively capture the solution structure.
\vspace{0.25cm}
\item Our adaptive sampling strategy can be easily extended to other formulations such as the deep Ritz method \cite{Edeep18} and weak adversarial networks \cite{zang2020weak}.
\end{itemize}
\vspace{0.25cm}
The rest of this paper is organized as follows. In the next section, we briefly introduce the basic ideas of PINNs. In Section \ref{failure_probability_criteria} we present FI-PINNs and propose the adaptive sampling strategy. In Section \ref{analysis_of_convergence}, we present the convergence analysis of FI-PINNs. Finally, we provide numerical experiments in Section \ref{Numerical_experiments}, followed by some concluding remarks in Section 6.
\section{Preliminaries of PINNs}\label{Preliminaries_of_pinns}
We briefly review physics-informed neural networks (PINNs). Let $\Omega \subset \mathbb{R}^d$ be a spatial domain, and denote by $\mathbf{x}\in \Omega$ the spatial variable. Consider the following partial differential equation:
\begin{equation}
\begin{split} \label{nonlinear-pde}
&\mathcal{A}(\mathbf{x};u(\mathbf{x})) = 0, \quad \mathbf{x}\in \Omega,\\
&\mathcal{B}(\mathbf{x};u(\mathbf{x})) = 0, \quad \mathbf{x}\in \partial \Omega,\\
\end{split}
\end{equation}
where $\mathcal{A}$ is a linear or non-linear differential operator, $\mathcal{B}$ is the boundary operator, and $u(\mathbf{x})$ is the unknown solution.
The basic idea of PINNs is to use a deep neural network (DNN) $u(\mathbf{x};\theta)$ with parameters $\theta$ to approximate the unknown solution $u(\mathbf{x})$. The PDE solution is then obtained by choosing the best parameters for the following soft constrained optimization problem:
\begin{equation}\label{loss-function}
\min_{\theta \in \Theta}\mathcal{L}(\theta) = \min_{\theta\in\Theta} \mathcal{L}_{c}(\theta) + \lambda\mathcal{L}_{b}(\theta),
\end{equation}
where $\Theta$ is the parameter space and $\lambda$ is a penalty factor that balances the PDE loss $\mathcal{L}_{c}(\theta)$ and the boundary loss $\mathcal{L}_{b}(\theta)$. A common choice for $\mathcal{L}(\theta)$ is the $L^2$ loss, i.e., $$\mathcal{L}_{c}(\theta)=\|r(\mathbf{x};\theta)\|_{2, \Omega}^{2}, \quad \mathcal{L}_{b}(\theta)=\|b(\mathbf{x};\theta)\|_{2, \partial\Omega}^{2},$$
where $r(\mathbf{x};\theta) =\mathcal{A}(\mathbf{x}; u(\mathbf{x};\theta))$, and $b(\mathbf{x};\theta)=\mathcal{B}(\mathbf{x}; u(\mathbf{x};\theta))$ measure how well $u(\mathbf{x}, \theta)$ satisfies the PDE and the boundary operators, respectively. The above $L^2$-norm is defined as $\|u\|_{2, \Omega}^{2} = \int_{\Omega } u(\mathbf{x})^{2}\omega(\mathbf{x})d\mathbf{x}$, where $\omega(\mathbf{x})$ is a prior distribution and is usually set to be 1 when the problem domain is bounded.
\begin{figure}[H]
\centering
\includegraphics[width = 0.7\textwidth]{figures/pinn_framework.pdf}
\caption{The framework of PINNs}
\label{pinn_framework}
\end{figure}
We present in Fig.~\ref{pinn_framework} the general PINNs framework. In practice, the loss functional $\mathcal{L}(\theta)$ is discretized
using a training dataset drawn from the prior distribution, which consists of the collocation points $\mathcal{D}_{c} = \{\mathbf{x}_{i}^{c}\}_{i=1}^{N_{c}}$ and the boundary points $\mathcal{D}_{b} = \{\mathbf{x}_{i}^{b}\}_{i=1}^{N_{b}}$.
We can then consider the discrete loss function
\begin{equation}
\label{optimal_parameters}
\hat{\mathcal{L}}(\theta) = \hat{\mathcal{L}}_{c}(\theta) + \lambda\hat{\mathcal{L}}_{b}(\theta),
\end{equation}
where
\begin{equation}\label{discrete_loss_function}
\hat{\mathcal{L}}_{c}(\theta) = \frac{1}{N_{c}}\sum_{i = 1}^{N_{c}}\left|r(\mathbf{x}^{c}_{i};\theta)\right|^{2}, \quad
\hat{\mathcal{L}}_{b}(\theta) = \frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left|b(\mathbf{x}^{b}_{i};\theta) \right|^{2}.
\end{equation}
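To make the discretization concrete, here is a minimal numpy sketch (our illustration, not the authors' code) that evaluates the discrete losses $\hat{\mathcal{L}}_c$ and $\hat{\mathcal{L}}_b$ for a toy 1D Poisson problem. A closed-form trial family stands in for the neural network, so the residual is available analytically; a real PINN would obtain it via automatic differentiation.

```python
import numpy as np

# Toy problem: -u''(x) = pi^2 sin(pi x) on (0,1), u(0) = u(1) = 0.
# Trial family u(x; a) = a*sin(pi x) replaces the network u(x; theta).

def residual(x, a):
    # r(x; a) = -u''(x; a) - f(x) = (a - 1) * pi^2 * sin(pi x)
    return (a - 1.0) * np.pi ** 2 * np.sin(np.pi * x)

def boundary(a):
    # b(x; a) = u(x; a) evaluated on the boundary {0, 1}
    return np.array([a * np.sin(0.0), a * np.sin(np.pi)])

def discrete_loss(a, x_c, lam=1.0):
    loss_c = np.mean(residual(x_c, a) ** 2)  # collocation (PDE) loss
    loss_b = np.mean(boundary(a) ** 2)       # boundary loss
    return loss_c + lam * loss_b

rng = np.random.default_rng(0)
x_c = rng.uniform(0.0, 1.0, size=1000)       # collocation points D_c
print(discrete_loss(1.0, x_c))               # exact solution a = 1: ~ 0
print(discrete_loss(0.5, x_c))               # wrong parameter: large loss
```

Minimizing `discrete_loss` over the trial parameter plays the role of the optimization in \eqref{loss-function}.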
It should be noted that the penalty factor $\lambda$ is often chosen as $\lambda=1$ in most cases, but it can be carefully tuned to achieve better performance \cite{wang2021understanding,wang2022and}. The discrete loss function (\ref{optimal_parameters}) can then be minimized using stochastic gradient-based algorithms \cite{bottou2012stochastic,kingma2014adam}.
Although PINNs have been successfully applied to simulate a variety of forward and inverse problems for PDEs, they can sometimes have difficulty converging to the true solution, as shown in several studies of the \textit{failure modes} of PINNs \cite{krishnapriyan2021characterizing,wang2021understanding,wang2022and}. These failure modes stem from the mechanism of PINNs, which can make optimizing the loss function extremely difficult due to its complicated landscape. To address these challenges, several specialized remedies have been proposed.
\paragraph{Modify the loss function} Modifying the structure of the loss function has become a popular tool for training PINNs. This includes (I) applying adaptive weights for each individual point \cite{mcclenny2020self}, (II) embedding gradient information of the residual function in the total loss \cite{yu2022gradient}, and (III) combining the augmented Lagrangian relaxation to balance different loss components \cite{huang2022augmented}.
\paragraph{Construct new learning schemes} Recent ideas along this line include the extended PINNs (XPINNs) \cite{jagtap2021extended} and sequence-to-sequence learning \cite{krishnapriyan2021characterizing}.
\paragraph{Adaptive sampling strategies} Adaptive sampling strategies are essential for training PINNs \cite{daw2022rethinking,gao2021active,lu2021deepxde,subramanian2022adaptive,tang2021deep,wu2022comprehensive}. As discussed in the last section, \cite{lu2021deepxde} proposes to update the training dataset by selecting points with larger residual values from a uniformly distributed candidate set, while \cite{tang2021deep} proposes to use a generative model, e.g., the KRnet, to generate the training points.
Our contribution belongs to the third category, i.e., we focus on developing an efficient adaptive sampling strategy for PINNs.
\section{Failure-informed PINNs (FI-PINNs)}
\label{failure_probability_criteria}
In this section, we introduce the concept of failure-informed PINNs. To this end, we shall first discuss how to establish an adaptive sampling framework from the viewpoint of reliability analysis, and then present a self-adaptive importance sampling procedure for FI-PINNs.
To begin, we define the so-called {\it limit-state function (LSF)} $g:\Omega\rightarrow \mathbb{R}$ by $$g(\mathbf{x}) = \mathcal{Q}(\mathbf{x}) - \epsilon_{r}.$$ Here, $\epsilon_r$ is a predefined maximum allowed threshold, and $\mathcal{Q}:\Omega\rightarrow \mathbb{R}$ maps the domain to a quantity of interest (QoI) that characterizes the system's performance. In PINNs, we can simply choose $\mathcal{Q}(\mathbf{x})= |r(\mathbf{x};\theta)|$ with $r(\mathbf{x};\theta)$ being the residual, so that
\begin{equation}\label{Power_function}
g(\mathbf{x}) = |r(\mathbf{x};\theta)| - \epsilon_{r}.
\end{equation}
The failure hypersurface defined by $g(\mathbf{x})=0$ divides the physics domain into two subsets: the safe set $\Omega_{\mathcal{S}} = \{\mathbf{x}:g(\mathbf{x}) < 0\}$ and the failure set $\Omega_{\mathcal{F}} = \{\mathbf{x}:g(\mathbf{x}) > 0\}.$ To describe the reliability of the PINNs, we define the {\it failure probability} $P_{\mathcal{F}}$ under a prior distribution $\omega(\mathbf{x})$:
\begin{equation}\label{failure_probability}
P_{\mathcal{F}} = \int_{\Omega}\omega(\mathbf{x})\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})d\mathbf{x}.
\end{equation}
Here $\mathbb{I}_{\Omega_{\mathcal{F}}} :\Omega \rightarrow \{0,1\}$ represents the indicator function, which takes values $\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})=1$ when $\mathbf{x} \in \Omega_{\mathcal{F}}$, and $\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})=0$ otherwise.
If the failure probability is smaller than a given tolerance $\epsilon_{p}$, we say the PINNs solution is reliable; otherwise, the failure region indicates that the system is unreliable and the performance of PINNs should be improved. Generally speaking, as the failure region shrinks, the failure probability decreases, meaning that the system becomes more reliable. Motivated by this fact, we may design adaptive strategies that add new training points from the failure region $\Omega_{\mathcal{F}}$ and retrain the PINNs whenever $P_{\mathcal{F}}>\epsilon_{p}$. The failure probability can thus be used as a stopping indicator during the training of PINNs, similar to the a posteriori error indicator in adaptive finite element methods.
\begin{figure}
\centering
\includegraphics[width = 0.6\textwidth]{figures/is_pinn.pdf}
\caption{Workflow of FI-PINNs}
\label{IS_PINN}
\end{figure}
\begin{algorithm}[t]
\caption{Failure informed PINNs (FI-PINNs)}
\label{Algorithm1}
\begin{algorithmic}[1]
\Require ~ A DNN solution $u(\mathbf{x};\theta)$, boundary points $\mathcal{D}_{b}$, collocation points $\mathcal{D}_{c}$. Maximum iterations $M$,
residual tolerance $\epsilon_{r}$ and failure probability tolerance $\epsilon_{p}$.
\State $s\leftarrow 1$.
\While {$s \le M$}
\State Train $u(\mathbf{x}; \theta)$ using the training dataset $\mathcal{D}_{b}, \mathcal{D}_{c}$.
\State Set LSF $g$ using Eq.\eqref{Power_function}.
\State Estimate the failure probability $\hat{P}_{\mathcal{F}}$ using some sampling technique.
\If{$\hat{P}_{\mathcal{F}}<\epsilon_{p}$}
\State \textbf{Break}
\EndIf
\State Generate a new dataset $\mathcal{D}_{adaptive}$ from the failure region $\Omega_{\mathcal{F}}$.
\State Set $\mathcal{D}_{c} = \mathcal{D}_{c}\cup \mathcal{D}_{adaptive}, s = s+1$.
\EndWhile
\end{algorithmic}
\end{algorithm}
We present in Fig.~\ref{IS_PINN} the general workflow of FI-PINNs, which can be combined with various adaptive sampling strategies. The detailed algorithm of FI-PINNs is summarized in Algorithm \ref{Algorithm1}. Notice that in Algorithm \ref{Algorithm1}, the key issue is to estimate the failure probability $P_{\mathcal{F}}$ in Eq.~\eqref{failure_probability}, as the integral cannot be computed analytically. Moreover, one also needs to design an effective model to sample from the failure region in an efficient way. These two issues are addressed in the following subsections.
\subsection{Monte Carlo and importance sampling} \label{self adaptive_importance_sampling}
To estimate the failure probability, a natural idea is to use the Monte Carlo (MC) method. In this case, one may generate a set of randomly distributed locations $\mathcal{S} = \{\mathbf{x}_{1}, \mathbf{x}_{2},\ldots, \mathbf{x}_{|\mathcal{S}|}\}$ from the prior $\omega(\mathbf{x})$ (e.g., the uniform distribution);
the Monte Carlo estimator is then given by $$\hat{P}_{\mathcal{F}}^{MC} = \frac{1}{|\mathcal{S}|}\sum_{\mathbf{x}\in \mathcal{S}}\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x}).$$ Using this estimator, we can propose an adaptive sampling scheme, as described in Algorithm \ref{Monte Carlo method}. If $\hat{P}_{\mathcal{F}}^{MC}>\epsilon_{p}$, we add the new collocation points $\{\mathbf{x}_{i}:\mathbf{x}_i \in \mathcal{S}, \, g(\mathbf{x}_{i}) > 0\}$ to the training set and retrain the network. Notice that in each iteration, the number $m$ of new training points may differ; it is determined by the number of points in $\mathcal{S}$ that fall in the failure region $\Omega_{\mathcal{F}}$. In contrast, the residual-based adaptive refinement (RAR) method \cite{lu2021deepxde} selects a fixed number $m$ of new points with the largest values of the LSF $g$ in $\mathcal{S}$.
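The MC estimator and the selection of failure points can be sketched in a few lines. The spiked residual below is an invented stand-in for $|r(\mathbf{x};\theta)|$ (a sharp bump at $x=0.5$ mimicking a localized feature), not a trained network, and $\epsilon_r$ is chosen for illustration only.

```python
import numpy as np

# Stand-in for |r(x; theta)|: a sharp Gaussian bump at x = 0.5 on (0, 1).
def abs_residual(x):
    return np.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2))

eps_r = 0.5                              # residual tolerance in the LSF
g = lambda x: abs_residual(x) - eps_r    # limit-state function

rng = np.random.default_rng(1)
S = rng.uniform(0.0, 1.0, size=100_000)  # samples from the uniform prior
fail = g(S) > 0                          # indicator of the failure region
p_hat = fail.mean()                      # MC estimator of P_F
D_adaptive = S[fail]                     # candidate new collocation points

# The failure region here is |x - 0.5| < 0.01*sqrt(2 ln 2), an interval
# of width ~0.0235, so p_hat should land close to 0.0235.
print(p_hat, D_adaptive.size)
```

Note how small `p_hat` is relative to the sample size: most of the $10^5$ draws are wasted outside the failure region, which is exactly the inefficiency discussed below.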
\begin{algorithm}[t]
\caption{FI-PINNs using Monte Carlo}
\label{Monte Carlo method}
\begin{algorithmic}[1]
\State Generate a set of randomly distributed locations $\mathcal{S} = \{\mathbf{x}_{1}, \mathbf{x}_{2},\ldots, \mathbf{x}_{|\mathcal{S}|}\}$ from the prior $\omega$ (e.g., uniform distribution).
\State Estimate the failure probability $\hat{P}_{\mathcal{F}}^{MC}$:
\begin{equation*}
\hat{P}_{\mathcal{F}}^{MC} = \frac{1}{|\mathcal{S}|}\sum_{\mathbf{x}\in \mathcal{S}}\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})
\end{equation*}
\State Stop if $\hat{P}_{\mathcal{F}}^{MC}<\epsilon_{p}$. Otherwise, add the $m$ new points $\{\mathbf{x}_{i}\in \mathcal{S}: g(\mathbf{x}_{i}) > 0\}$, retrain the network, and go to 1.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{FI-PINNs using importance sampling}
\label{importance_sampling}
\begin{algorithmic}[1]
\State Choose a set of randomly distributed samples $\mathcal{S} = \{\mathbf{x}_{1}, \mathbf{x}_{2}, \ldots, \mathbf{x}_{|\mathcal{S}|}\}$ from the proposal distribution $h$.
\State Estimate the failure probability $\hat{P}_{\mathcal{F}}^{IS}$ using Eq. \eqref{3.1.3}.
\State Stop if $\hat{P}_{\mathcal{F}}^{IS}<\epsilon_{p}$. Otherwise, add the $m$ new points $\{\mathbf{x}_{i}\in \mathcal{S}: g(\mathbf{x}_{i}) > 0\}$, retrain the network, and go to 1.
\end{algorithmic}
\end{algorithm}
Since the failure region may be small relative to the entire problem domain, the above MC sampling strategy can be inefficient at generating effective samples. This is especially true when the PDE exhibits local behavior (e.g., sharp or very localized features). To obtain accurate estimates, the sample size $|\mathcal{S}|$ typically needs to be in the range $\mathcal{O}(10^4\sim10^6)$, making the procedure extremely expensive.
Importance sampling (IS) can be considered to generate effective samples as it can reduce variance by selecting appropriate proposal distributions. In this case, we have
\begin{equation}
\label{failure_probabilty_importance_sampling}
P_{\mathcal{F}} = \int_{\Omega}\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})\frac{\omega(\mathbf{x})}{h(\mathbf{x})}h(\mathbf{x})d\mathbf{x} =
\mathbb{E}_{h}\left[\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})R(\mathbf{x})\right],
\end{equation}
where $h(\mathbf{x})$ is the proposal distribution. Here $R(\mathbf{x})$ represents the weight function $\frac{\omega(\mathbf{x})}{h(\mathbf{x})}$ that transfers the proposal distribution $h(\mathbf{x})$ to the prior distribution $\omega(\mathbf{x})$. By choosing a set of randomly distributed samples $\mathcal{S} = \{\mathbf{x}_{1}, \mathbf{x}_{2}, \ldots, \mathbf{x}_{|\mathcal{S}|}\}$ from the proposal distribution $h$, we can approximate the failure probability $P_{\mathcal{F}}$ as
\begin{equation}
\label{3.1.3}
\hat{P}_{\mathcal{F}}^{IS} = \frac{1}{|\mathcal{S}|}\sum_{\mathbf{x}\in \mathcal{S}}\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})R(\mathbf{x}).
\end{equation}
The FI-PINNs using importance sampling is presented in Algorithm \ref{importance_sampling}. If the support of $h$ contains the intersection of the support of $\omega$ and the failure set, \eqref{3.1.3} gives an unbiased estimator for $P_{\mathcal{F}}$. Theoretically, there exists an optimal proposal distribution,
\begin{equation}
\label{optimal_importance_distribution}
h_{opt}(\mathbf{x}) = \frac{\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})\,\omega(\mathbf{x})}{P_{\mathcal{F}}} = \frac{\mathbb{I}_{g(\mathbf{x}) > 0}\,\omega(\mathbf{x})}{\int_{\Omega}\mathbb{I}_{g(\mathbf{x}) > 0}\,\omega(\mathbf{x})d\mathbf{x}},
\end{equation}
which leads to a zero-variance estimator. However, $h_{opt}(\mathbf{x})$ is not available in practice due to the unknown normalizing constant. Note that the optimal IS density $h_{opt}(\mathbf{x})$ can be interpreted as a ``posterior-failure'' density of the collocation points given the occurrence of the failure set $\Omega_{\mathcal{F}}$, where the indicator function $\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})$ is the likelihood, $\omega(\mathbf{x})$ is the prior density, and $P_{\mathcal{F}}$ is the evidence.
Although it is impossible to evaluate or sample directly from $h_{opt}$, it still serves as a guide for selecting an IS proposal distribution. We provide a self-adaptive importance sampling (SAIS) method for approximating the posterior-failure density $h_{opt}$ in the following subsection.
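A minimal sketch of the IS estimator \eqref{3.1.3}, under the same invented bump residual as before. The proposal here is a hand-chosen Gaussian centered on the failure region, an illustration stand-in for the adaptively constructed proposal of the next subsection; all parameters are assumptions.

```python
import numpy as np

# Stand-in for |r(x; theta)|: a sharp bump at x = 0.5 on (0, 1).
def abs_residual(x):
    return np.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2))

eps_r = 0.5
mu, sigma = 0.5, 0.02                   # proposal N(mu, sigma^2)
rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, size=10_000)  # samples from the proposal h

def h(x):                               # proposal density
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def prior(x):                           # uniform prior omega on [0, 1]
    return ((x >= 0.0) & (x <= 1.0)).astype(float)

indicator = (abs_residual(x) - eps_r > 0).astype(float)
weights = prior(x) / h(x)               # R(x) = omega(x) / h(x)
p_hat_is = np.mean(indicator * weights)
print(p_hat_is)                         # close to the true P_F ~ 0.0235
```

With the proposal concentrated on $\Omega_{\mathcal{F}}$, roughly $10^4$ samples match the accuracy that plain MC needed $10^5$ samples for in this example.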
\subsection{Self-adaptive importance sampling (SAIS)}
As discussed above, the optimal density $h_{opt}$ in \eqref{optimal_importance_distribution} is in general not available. Our strategy is therefore to develop an adaptive procedure that begins with an initial proposal and iteratively updates the intermediate proposal using samples. For efficiency reasons, we simply choose the intermediate proposal distributions $h_{k}$ to be truncated Gaussians restricted to $\Omega$, denoted by $\mathcal{N}_{T}(\mu_{k}, \Sigma_{k})$.
To establish the adaptive scheme, we first set the initial proposal distribution $h_{1}(\mathbf{x})=\omega(\mathbf{x})$. Then, at the $k$-th step, we generate $N_{1}$ samples $\{\mathbf{x}_{i}^{k}\}_{i=1}^{N_{1}}$ from $h_{k}(\mathbf{x})$ and sort them in descending order of their LSF values to obtain the candidate set $\mathcal{D}_k:=\{\widetilde{\mathbf{x}}_{i}^{k}\}_{i=1}^{N_{1}}$. Let $N_{p} = \lfloor p_{0}N_{1}\rfloor$ denote the minimum number of samples used to approximate the optimal distribution ${h}_{opt}$, where $0<p_0<1$ is a fixed parameter, and let $N_{\eta}$ denote the number of points in $\mathcal{D}_k$ that fall in the failure region. We use $N_{\eta}$ as an indicator: if $N_{\eta} <N_{p}$, the intermediate proposal distribution $h_k$ needs to be refined; otherwise, the number of failure points is acceptable and we use those points to construct the approximate optimal distribution $\hat{h}_{opt}$.
To update $h_k$ to $h_{k+1}$, we consider the truncated Gaussian model and use the first $N_{p}$ samples to estimate the mean vector and the covariance matrix of $h_{k+1}$ as follows:
\begin{equation}
\label{3.1.9}
\begin{split}
&\mu_{k+1} = \frac{1}{N_{p}}\sum_{i = 1}^{N_{p}}\widetilde{\mathbf{x}}_{i}^{k},\\
&\Sigma_{k+1} = \frac{1}{N_{p} - 1}\sum_{i=1}^{N_{p}}(\widetilde{\mathbf{x}}_{i}^{k} - \mu_{k+1})^{T}(\widetilde{\mathbf{x}}_{i}^{k} - \mu_{k+1}).
\end{split}
\end{equation}
Notice that this procedure is computationally very cheap.
Notice also that once the iterative scheme has stopped, the mean vector $ \mu_{opt}$ and the covariance matrix $\Sigma_{opt} $ are estimated from the first $N_{p}$ samples of the last iteration:
\begin{equation}
\label{optimal_sampling_center}
\begin{split}
\mu_{opt} &= \frac{\sum_{i=1}^{N_{p}}\widetilde{\mathbf{x}}_{i}\omega(\widetilde{\mathbf{x}}_{i})}{\sum_{i=1}^{N_{p}}\omega(\widetilde{\mathbf{x}}_{i})}\\
\Sigma_{opt} &= \frac{1}{N_{p} - 1}\sum_{i=1}^{N_{p}}(\widetilde{\mathbf{x}}_{i} - \mu_{opt})^{T}(\widetilde{\mathbf{x}}_{i} - \mu_{opt}).
\end{split}
\end{equation}
Thus, the approximated optimal distribution $\hat{h}_{opt}(\mathbf{x})$ is a truncated Gaussian with mean vector $\mu_{opt}$ and covariance matrix $\Sigma_{opt}$. By generating $N_{2}$ samples from $\hat{h}_{opt}$, the failure probability can be approximated as
\begin{equation}
\label{failure_probability_estimation}
\hat{P}_{\mathcal{F}}^{SAIS} = \frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\frac{\omega(\mathbf{x}_{i})}{\hat{h}_{opt}(\mathbf{x}_{i})}\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x}_{i}).
\end{equation}
\begin{algorithm}[t]
\caption{Self-adaptive importance sampling (SAIS)}
\label{Algorithm2}
\begin{algorithmic}[1]
\Require ~ Number of samples $N_{1}$ and $N_{2}$, the parameter $p_{0}$, the LSF $g$, the prior distribution $\omega$ and the maximum updated number $M$.
\State $k \leftarrow 1$, set $h_1 =\omega$
\While {$k\leq M$}
\State Generate $N_{1}$ samples $\{\mathbf{x}_{i}\}_{i=1}^{N_{1}}$ from $h_k$.
\State Sort the samples according to the corresponding LSF in a descending order to obtain $\widetilde{\mathbf{x}}_{1}, \ldots, \widetilde{\mathbf{x}}_{N_{1}}$.
\State Let $N_{\eta} = \max_{1\leq i\leq N_{1}}\{i \,|\, g(\widetilde{\mathbf{x}}_{i})>0\}$ and $N_{p} = \lfloor p_{0}N_{1} \rfloor$.
\If{$N_{\eta} < N_{p}$}
\State Compute $\mu_{k+1}$ and $ \Sigma_{k+1} $ using Eq.\eqref{3.1.9}.
\State Set $h_{k+1} = \mathcal{N}_{T}(\mu_{k+1}, \Sigma_{k+1})$.
\State $k \leftarrow k+1$.
\Else
\State \textbf{Break}
\EndIf
\EndWhile
\vspace{0.15cm}
\State Compute $\mu_{opt}$ and $\Sigma_{opt}$ using \eqref{optimal_sampling_center}.
\State Generate $N_{2}$ samples $\mathcal{S}=\{\mathbf{x}_{1}, \cdots, \mathbf{x}_{N_{2}}\}$ from $\mathcal{N}_{T}(\mu_{opt}, \Sigma_{opt})$.
\State Calculate $\hat{P}_{\mathcal{F}}^{SAIS}$ using Eq.\eqref{failure_probability_estimation}.\\
\Return $\hat{P}_{\mathcal{F}}^{SAIS}, \mathcal{D}_{adaptive} = \{\mathbf{x}_{i}\,|\, g(\mathbf{x}_{i})>0, \, \mathbf{x}_{i} \in \mathcal{S}\}$.
\end{algorithmic}
\end{algorithm}
We summarize the self-adaptive importance sampling (SAIS) procedure in Algorithm \ref{Algorithm2}. In our experiments, we observe that with $p_0=0.1$, SAIS terminates rapidly with high numerical accuracy.
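The proposal-update loop of Algorithm \ref{Algorithm2} can be sketched in 1D as follows. This is our illustrative code, not the authors' implementation: the bump residual and all parameters are invented, truncation to $[0,1]$ is handled by simple rejection, and since the prior is uniform, the $\omega$-weighted mean in \eqref{optimal_sampling_center} reduces to a plain mean.

```python
import numpy as np

# Stand-in for |r(x; theta)|: a sharp bump at x = 0.5 on (0, 1).
def abs_residual(x):
    return np.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2))

g = lambda x: abs_residual(x) - 0.5           # limit-state function
rng = np.random.default_rng(3)
N1, p0 = 2000, 0.1
Np = int(p0 * N1)

def sample_truncated_gaussian(mu, sig, n):
    out = np.empty(0)
    while out.size < n:                       # rejection onto [0, 1]
        cand = rng.normal(mu, sig, size=2 * n)
        out = np.concatenate([out, cand[(cand >= 0.0) & (cand <= 1.0)]])
    return out[:n]

x = rng.uniform(0.0, 1.0, size=N1)            # h_1 = uniform prior
for k in range(10):
    x_sorted = x[np.argsort(-g(x))]           # descending LSF order
    n_fail = int(np.sum(g(x_sorted) > 0))
    if n_fail >= Np:                          # enough failure samples
        break
    top = x_sorted[:Np]                       # refit proposal, Eq. (3.1.9)
    mu_k = top.mean()
    sig_k = max(top.std(ddof=1), 1e-3)
    x = sample_truncated_gaussian(mu_k, sig_k, N1)

top = x_sorted[:Np]                           # best N_p samples
mu_opt = top.mean()                           # concentrates near x = 0.5
print(mu_opt)
```

Starting from the uniform prior, a single refit already concentrates the proposal on the spike, after which most samples land in the failure region.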
\paragraph{Remark}
The above strategy can be easily applied to time-dependent problems and to problems in unbounded domains. For instance, when $\Omega$ is unbounded, the intermediate proposal distribution can simply be chosen as a Gaussian, which has the same form as the prior distribution $\omega$ (see our numerical experiments in Section 5).
Moreover, in this work we consider only a truncated Gaussian distribution to approximate the ``posterior-failure'' density $h_{opt}$; one can easily generalize the truncated Gaussian model to more complex models such as Gaussian mixture models.
\section{Convergence analysis}\label{analysis_of_convergence}
This section is devoted to the convergence analysis of FI-PINNs. We shall need the following assumptions.
\vspace{0.15cm}
\begin{assumption}\label{assumption1}
Let $\mathcal{A}$ in problem \eqref{nonlinear-pde} be a linear operator mapping $\mathcal{X}\rightarrow \mathcal{X}$, where $\mathcal{X}$ is a Hilbert space of functions on $\Omega$. We assume that the operator $\mathcal{A}$ and the boundary operator $\mathcal{B}$ satisfy
\begin{equation}
C_{1}\|v\|_{2, \Omega} \leq \|\mathcal{A}v\|_{2, \Omega} + \|\mathcal{B}v\|_{2, \partial\Omega} \leq C_{2}\|v\|_{2, \Omega},\quad \forall v\in \mathcal{X}
\end{equation}
where the positive constants $C_1$ and $C_2$ are independent of $v$.
\end{assumption}
\vspace{0.15cm}
\begin{assumption}
\label{Assumption2}
For any $\epsilon_{b}$, assume the neural network $u(\mathbf{x};\theta)$ can be trained sufficiently to satisfy the condition
\begin{equation}
\big\|\mathcal{B}(u(\mathbf{x}) - u(\mathbf{x};\theta^{*}))\big\|_{2, \partial\Omega} \leq \epsilon_{b}.
\end{equation}
\end{assumption}
Assumption \ref{Assumption2} simply supposes that the boundary data have been approximated successfully. We are now ready to present the following theorem.
\newpage
\begin{theorem} \label{theorem1}
Assume the problem domain $\Omega$ is bounded, and let $u(\mathbf{x};\theta^{*})$ be an FI-PINNs solution of \eqref{nonlinear-pde}. Suppose Assumptions \ref{assumption1} and \ref{Assumption2} are satisfied. Then the following error estimate holds:
\begin{equation}
\big\|u(\mathbf{x}) - u(\mathbf{x};\theta^{*})\big\|_{2,\Omega} \leq \small{\sqrt{2}C_{1}^{-1}\big(S_{\Omega}(M^{2}\epsilon_{p} + \epsilon_{r}^{2}) + \epsilon_{b}^{2}\big)^{\frac{1}{2}}},
\end{equation}
where $$M = \max_{\mathbf{x}\in \Omega}|r(\mathbf{x};\theta^{*})|,$$ $S_{\Omega}$ is the area of $\Omega$ and $\epsilon_{r},\epsilon_{p}$ are the pre-given tolerances.
\end{theorem}
\vspace{0.05cm}
\begin{proof}
Let $v = u(\mathbf{x}) - u(\mathbf{x};\theta^{*})$.
Using Assumption \ref{assumption1}, it can be shown that
\begin{equation*}
\begin{split}
&\|u(\mathbf{x}) - u(\mathbf{x};\theta^{*})\|_{2, \Omega} \\[3mm]
&\leq C_{1}^{-1}\left(\|\mathcal{A}(u(\mathbf{x}) - u(\mathbf{x};\theta^{*}))\|_{2,\Omega} + \|\mathcal{B}(u(\mathbf{x}) - u(\mathbf{x};\theta^{*}))\|_{2,\partial\Omega}\right)\\[3mm]
& = C_{1}^{-1}\left(\|r(\mathbf{x};\theta^{*})\|_{2,\Omega} + \|\mathcal{B}(u(\mathbf{x}) - u(\mathbf{x};\theta^{*}))\|_{2,\partial\Omega}\right)\\
& \leq \sqrt{2}C_{1}^{-1}\left(\|r(\mathbf{x};\theta^{*})\|_{2,\Omega}^{2} + \|\mathcal{B}(u(\mathbf{x}) - u(\mathbf{x};\theta^{*}))\|_{2,\partial\Omega}^{2}\right)^{\frac{1}{2}},
\end{split}
\end{equation*}
where the last step uses the elementary inequality $a+b \leq \sqrt{2(a^{2}+b^{2})}$.
From Assumption \ref{Assumption2}, we have
\begin{equation}
\label{estimate}
\begin{split}
\|u(\mathbf{x}) - u(\mathbf{x};\theta^{*})\|_{2, \Omega}
\leq \sqrt{2}C_{1}^{-1}\left(\|r(\mathbf{x};\theta^{*})\|_{2,\Omega}^{2} +\epsilon_{b}^{2}\right)^{\frac{1}{2}}.
\end{split}
\end{equation}
For the bounded domain $\Omega$, we have
\begin{equation*}
\begin{split}
\|r(\mathbf{x};\theta^{*})\|_{2,\Omega}^2 &= \int_{\Omega}r(\mathbf{x};\theta^{*})^{2}d\mathbf{x}\\
& = \int_{\Omega_{\mathcal{F}}}r(\mathbf{x};\theta^{*})^{2}d\mathbf{x} + \int_{\Omega_{\mathcal{S}}}r(\mathbf{x};\theta^{*})^{2}d\mathbf{x}.
\end{split}
\end{equation*}
Using the definition of the failure probability $P_{\mathcal{F}}$ on the bounded domain with the uniform prior $\omega(\mathbf{x}) = 1/S_{\Omega}$, it can be shown that
\begin{equation}
P_{\mathcal{F}} = \int_{\Omega}\omega(\mathbf{x})\mathbb{I}_{\Omega_{\mathcal{F}}}(\mathbf{x})d\mathbf{x}= \frac{1}{S_{\Omega}}\int_{\Omega_{\mathcal{F}}}d\mathbf{x} = \frac{S_{\Omega_{\mathcal{F}}}}{S_{\Omega}} ,
\end{equation}
where $S_{\Omega},S_{\Omega_{\mathcal{F}}}$ denote the areas of $\Omega,\Omega_{\mathcal{F}}$ respectively. If $P_{\mathcal{F}} < \epsilon_{p}$, we have
\begin{equation}
S_{\Omega_{\mathcal{F}}} < S_{\Omega}\epsilon_{p}.
\end{equation}
Thus,
\begin{equation}
\label{the_first_part}
\int_{\Omega_{\mathcal{F}}}r(\mathbf{x};\theta^{*})^{2}d\mathbf{x} \leq \max_{\mathbf{x}\in \Omega_{\mathcal{F}}} |r(\mathbf{x};\theta^{*})|^{2} S_{\Omega_{\mathcal{F}}} \leq M^{2}S_{\Omega}\epsilon_{p},
\end{equation}
where $M = \max_{\mathbf{x}\in \Omega}|r(\mathbf{x};\theta^{*})|$.
Note that in the safe region $\Omega_{\mathcal{S}}$, the PDE residual satisfies $|r(\mathbf{x};\theta^{*})|< \epsilon_{r}$, which implies
\begin{equation}
\label{the_second_part}
\int_{\Omega_{\mathcal{S}}}r(\mathbf{x};\theta^{*})^{2}d\mathbf{x} \leq \epsilon_{r}^{2} S_{\Omega_{\mathcal{S}}} \leq S_{\Omega}\epsilon_{r}^{2}.
\end{equation}
Combining Eq.\eqref{the_first_part} and Eq.\eqref{the_second_part}, we obtain
\begin{equation}
\label{first_part_bound}
\|r(\mathbf{x};\theta^{*})\|_{2,\Omega}^{2} \leq S_{\Omega}(M^{2}\epsilon_{p} + \epsilon_{r}^{2}).
\end{equation}
The desired result follows by substituting Eq.\eqref{first_part_bound} into Eq.\eqref{estimate}.
\end{proof}
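For intuition, the failure probability used in the proof can be estimated by plain Monte Carlo: draw uniform samples over $\Omega$ and count the fraction whose residual exceeds the tolerance. The sketch below uses a hypothetical residual function standing in for $r(\mathbf{x};\theta^{*})$; it illustrates only the definition of $P_{\mathcal{F}}$, not the SAIS estimator developed earlier.

```python
import numpy as np

def estimate_failure_probability(residual, lo, hi, n_samples=10_000, eps_r=0.1, seed=0):
    """Monte Carlo estimate of P_F: the fraction of Omega where |r| > eps_r."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    return float(np.mean(np.abs(residual(x)) > eps_r))

# Hypothetical residual, large only near the origin (stands in for r(x; theta*)).
r = lambda x: np.exp(-10.0 * np.sum(x**2, axis=1))
p_f = estimate_failure_probability(r, [-1.0, -1.0], [1.0, 1.0], eps_r=0.1)
```

Here $P_{\mathcal{F}}$ is exactly the mean of the indicator, matching the area-ratio definition above.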
\section{Numerical experiments} \label{Numerical_experiments}
In this section, we present several numerical experiments to verify the effectiveness of FI-PINNs.
\subsection{Experiment Setup} \label{experiment_setup}
To better present the numerical results, we shall compare the following three approaches:
\begin{itemize}
\item The conventional PINNs \cite{raissi2019physics}, or baseline PINNs approach. This method updates the training dataset using a uniform sampling strategy.
\item The residual-based adaptive refinement method (RAR) \cite{lu2021deepxde}. This method updates the training set by selecting, from a large set of candidate points, the $m$ collocation points with the largest residual values.
\item The FI-PINNs with SAIS presented in Section \ref{self adaptive_importance_sampling}.
\end{itemize}
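For concreteness, the RAR update can be sketched in a few lines: evaluate the residual on a large candidate set and keep the $m$ points with the largest values. This is a minimal sketch with a hypothetical residual function, not the DeepXDE implementation.

```python
import numpy as np

def rar_select(candidates, residual, m):
    """Keep the m candidate points with the largest |residual| (RAR-style update)."""
    scores = np.abs(residual(candidates))
    idx = np.argsort(scores)[-m:]   # indices of the m largest residual values
    return candidates[idx]

rng = np.random.default_rng(1)
cand = rng.uniform(-1.0, 1.0, size=(1000, 2))
# Hypothetical residual peaked at (0.5, 0.5), mimicking the first test problem.
r = lambda x: np.exp(-50.0 * np.sum((x - 0.5) ** 2, axis=1))
picked = rar_select(cand, r, m=20)
```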
\begin{figure}[t]
\centering
\includegraphics[width = 0.5\linewidth]{figures/one_peak_exact_solution.pdf}
\caption{Exact solution for the two-dimensional peak test problem.}
\label{one_peak_exact_solution}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=0.45\textwidth, clip=true,tics=10]{figures/one_peak_error.pdf}
\end{overpic}
\begin{overpic}[width=0.45\textwidth, clip=true,tics=10]{figures/one_peak_failure.pdf}
\end{overpic}
\end{center}
\caption{Relative $L_{2}$ error (left) and the estimated failure probability (right) over updates.}
\label{one_peak_error}
\end{figure}
In all our numerical results, we will refer to ``\textit{Uniform}" as the traditional PINNs, ``\textit{RAR}" as the residual-based adaptive refinement approach, and ``\textit{SAIS}" as our FI-PINNs approach. To make a fair comparison, we first train the network on a small, uniformly distributed dataset; we then train three distinct models descended from this initial model using datasets of the same size but produced by the three different strategies.
Unless otherwise specified, we shall use the following parameters: $N_{1} = 300, p_{0} = 0.1, N_2 = 1000$ in \textit{SAIS}. The network used to implement the experiments is a fully connected neural network with 7 hidden layers of 20 neurons each, and the activation function is chosen as the \texttt{tanh} function. In the beginning, we always use 2000 collocation points and 200 boundary points to train the network. The \texttt{Adam} optimizer is adopted to optimize the loss function, with a learning rate of 0.0001 and 10000 iterations. The maximum number of adaptive iterations is set to $M=10$. In order to test the validity of the method, we use the following relative $L_{2}$ error:
\begin{equation}
\label{L_2 error}
err_{L_{2}} = \frac{\sqrt{\sum_{i=1}^{N}\left|\hat{u}(\mathbf{x}_{i}) - u(\mathbf{x}_{i})\right|^{2}}}{\sqrt{\sum_{i=1}^{N}\left|u(\mathbf{x}_{i})\right|^{2}}},
\end{equation}
where $N$ represents the total number of test points chosen randomly in the domain, and $\hat{u}(\mathbf{x}_{i})$ and $u(\mathbf{x}_{i})$ represent the predicted and the exact solution values, respectively.
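The error metric \eqref{L_2 error} amounts to a two-line function; a sketch:

```python
import numpy as np

def relative_l2_error(u_pred, u_exact):
    """err_L2 = ||u_pred - u_exact||_2 / ||u_exact||_2 over the test points."""
    return float(np.linalg.norm(u_pred - u_exact) / np.linalg.norm(u_exact))

u_exact = np.array([1.0, 2.0, 2.0])
err = relative_l2_error(u_exact + 0.1, u_exact)   # uniformly perturbed prediction
```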
\subsection{Two-dimensional Poisson equation}
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=0.7\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/one_peak_uniform_samples-eps-converted-to.pdf}
\put (38,30) {\scriptsize {\bf Samples by \textit{Uniform}}}
\end{overpic}
\end{center}
\vspace{0.3cm}
\begin{center}
\begin{overpic}[width=0.7\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/one_peak_rar_samples-eps-converted-to.pdf}
\put (38,30) {\scriptsize {\bf Samples by \textit{RAR}}}
\end{overpic}
\end{center}
\vspace{0.3cm}
\begin{center}
\begin{overpic}[width=0.7\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/one_peak_sais_samples-eps-converted-to.pdf}
\put (38,30) {\scriptsize {\bf Samples by \textit{SAIS}}}
\end{overpic}
\end{center}
\vspace{-0.2cm}
\caption{The distribution of collocation points obtained by three sampling methods for the first three updates.}
\label{one_peak_samples}
\end{figure}
We first consider the following two-dimensional Poisson equation \cite{tang2021deep}:
\begin{equation}
\label{one_peak_equation}
\begin{split}
-\Delta u(x, y) &= f(x, y) \quad \mbox{in} \, \Omega \\
u(x, y) &= g(x, y) \quad \mbox{on} \, \partial \Omega,
\end{split}
\end{equation}
where $\Omega$ is $[-1,1]^2$ and we specify the true solution as
\begin{equation}
\label{one_peak_true_solution}
u(x, y) = \exp\left(-1000\left[(x - 0.5)^{2} + (y - 0.5)^{2}\right]\right),
\end{equation}
which has a peak at $(0.5, 0.5)$ and decreases rapidly away from $(0.5, 0.5)$ (see Fig.\ref{one_peak_exact_solution}).
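Since the solution is manufactured, the forcing term follows from the exact Laplacian: writing $k=1000$ and $r^2=(x-0.5)^2+(y-0.5)^2$, one gets $f=-\Delta u = 4k(1-kr^2)u$. This closed form is our own derivation, not quoted from the text; the sketch below double-checks it against a central-difference Laplacian.

```python
import numpy as np

K = 1000.0  # peak sharpness in the manufactured solution

def u(x, y):
    return np.exp(-K * ((x - 0.5) ** 2 + (y - 0.5) ** 2))

def f(x, y):
    """f = -Laplacian(u) = 4K(1 - K r^2) u, with r^2 the squared distance to the peak."""
    r2 = (x - 0.5) ** 2 + (y - 0.5) ** 2
    return 4.0 * K * (1.0 - K * r2) * u(x, y)

# Central-difference Laplacian at a point near the peak, for a sanity check.
h, x0, y0 = 1e-4, 0.52, 0.47
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h) - 4.0 * u(x0, y0)) / h**2
```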
We first compare the numerical results obtained by the three different sampling strategies. To construct our FI-PINNs framework, we set $\epsilon_{p} = 0.1$ and $\epsilon_{r}=0.1$ in this example. The relative $L_{2}$ errors are shown in the left plot of Fig. \ref{one_peak_error}. The error achieved by the \textit{SAIS} method decreases much faster than with the \textit{Uniform} and \textit{RAR} strategies, and it is clearly seen that the mean $L_{2}$ error obtained by \textit{SAIS} is much smaller: $9.04\times 10^{-2}$, compared to $7.36\times 10^{-1}$ and $2.15\times 10^{-1}$ obtained by the \textit{Uniform} and \textit{RAR} sampling strategies, respectively. In the right plot of Fig. \ref{one_peak_error}, we show that the estimated failure probability has the same trend as the relative $L_{2}$ error. After four iterations, the training can be stopped, as the estimated failure probability falls below the tolerance, which shows that our stopping criterion is quite effective.
Fig.\ref{one_peak_samples} displays the distributions of the updated collocation points from the first three iterations using the various sampling strategies. It is clearly seen that the samples generated by \textit{SAIS} are much more concentrated around the peak $(0.5,0.5)$. The associated predicted values, absolute errors and the predicted solution curve at $x = 0.5$ obtained by the three sampling strategies are shown in Fig.\ref{one_peak_solution}. It is evident that the absolute error obtained by \textit{SAIS} is much smaller, and there is hardly any noticeable difference between the exact values and the predicted values at $x = 0.5$, indicating that the \textit{SAIS} sampling strategy outperforms the other two.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=0.9\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/one_peak_uniform_solution.pdf}
\put (30,26) {\scriptsize {\bf Numerical results obtained by \textit{Uniform}}}
\end{overpic}
\end{center}
\vspace{0.3cm}
\begin{center}
\begin{overpic}[width=0.9\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/one_peak_rar_solution.pdf}
\put (30,26) {\scriptsize {\bf Numerical results obtained by \textit{RAR}}}
\end{overpic}
\end{center}
\vspace{0.3cm}
\begin{center}
\begin{overpic}[width=0.9\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/one_peak_sais_solution.pdf}
\put (30,26) {\scriptsize {\bf Numerical results obtained by \textit{SAIS}}}
\end{overpic}
\end{center}
\vspace{-.2cm}
\caption{The predicted solution (Left), absolute error (Middle) and the predicted curves at $x = 0.5$ (Right) obtained by the three different sampling strategies.}
\label{one_peak_solution}
\end{figure}
We now examine the convergence properties of FI-PINNs. To this end, we fix one tolerance parameter and train the network to obtain the prediction error as the other tolerance changes. The results are presented in Fig.\ref{residual_failure_one_peak}. It is clearly seen that the convergence rates with respect to $\epsilon_{p}$ and $\epsilon_{r}$ are $1/2$ and $1$, respectively. This agrees well with Theorem \ref{theorem1}.
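Empirical rates like these are simply slopes in log-log coordinates; a minimal sketch of how such a rate can be extracted, here with synthetic data of slope $1/2$ standing in for the actual measurements:

```python
import numpy as np

def fit_rate(eps, err):
    """Least-squares slope of log(err) vs. log(eps): the observed rate err ~ eps^rate."""
    rate, _ = np.polyfit(np.log(eps), np.log(err), 1)
    return float(rate)

# Synthetic data with rate 1/2, standing in for the measured prediction errors.
eps = np.array([0.2, 0.1, 0.05, 0.025])
err = 3.0 * np.sqrt(eps)
rate = fit_rate(eps, err)
```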
\begin{figure}
\begin{center}
\begin{overpic}[width = 0.45\textwidth, clip = true, tics = 10]{figures/one_peak_failure_probability.png}
\end{overpic}
\begin{overpic}[width = 0.45\textwidth, clip = true, tics = 10]{figures/one_peak_residual.png}
\end{overpic}
\end{center}
\caption{Error profiles when solving a 2D Poisson equation with one peak. Left: the prediction errors with respect to $\epsilon_{p}$ with $\epsilon_{r} = 0.1$. Right: the prediction errors with respect to $\epsilon_{r}$ with $\epsilon_{p} = 0.1$.}
\label{residual_failure_one_peak}
\end{figure}
\subsection{Burgers' equation}
We consider the following Burgers' equation:
\begin{equation}
\label{burgers_equation}
\begin{split}
&u_{t} + uu_{x} - \frac{0.01}{\pi}u_{xx} = 0,\\
&u(x,0) = -\sin(\pi x),\\
&u(-1,t) = u(1,t) = 0,
\end{split}
\end{equation}
where $(x, t)\in [-1,1]\times[0,1]$. Notice that the solution develops a steep gradient near $x = 0$, which is used here to test the viability of the FI-PINNs and the \textit{SAIS} method for time-dependent nonlinear problems.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=0.45\textwidth,trim= 0 0 0 0, clip=true,tics=10]{figures/burgers_error.pdf}
\end{overpic}
\begin{overpic}[width=0.45\textwidth,trim= 0 0 0 0, clip=true,tics=10]{figures/burgers_failure.pdf}
\end{overpic}
\end{center}
\caption{Relative $L_{2}$ error (left) and the estimated failure probability (right) over updates.}
\label{burgers_failure}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=\textwidth,trim= 35 10 45 15, clip=true,tics=10]{figures/samples_Uniform_burgers.png}
\put (42,22) {\scriptsize {\bf Samples by \textit{Uniform}}}
\end{overpic}
\end{center}
\vspace{0.3cm}
\begin{center}
\begin{overpic}[width=\textwidth,trim= 35 10 45 15, clip=true,tics=10]{figures/samples_MC_burgers.png}
\put (42,22) {\scriptsize {\bf Samples by \textit{RAR}}}
\end{overpic}
\end{center}
\vspace{.3cm}
\begin{center}
\begin{overpic}[width=\textwidth,trim= 35 10 45 15, clip=true,tics=10]{figures/samples_sais_burgers.png}
\put (42,22) {\scriptsize {\bf Samples by \textit{SAIS}}}
\end{overpic}
\end{center}
\vspace{-0.2cm}
\caption{The distribution of collocation points obtained by three sampling methods for the first two updates.}
\label{burgers_samples}
\end{figure}
In this example, the residual tolerance $\epsilon_{r}$ and the failure probability tolerance $\epsilon_{p}$ are both set to 0.01. To implement the experiment, we first train the network using the \texttt{LBFGS} optimizer with a maximum of 50000 iterations. If the network needs to be updated (with new training points), we first retrain it using the \texttt{Adam} optimizer for a total of 10000 iterations, and then utilize the \texttt{LBFGS} optimizer to fine-tune the network with a maximum of 50000 iterations.
Figure \ref{burgers_failure} shows the prediction errors for three strategies, as well as the estimated failure probability with respect to iteration number. Again, it is clearly seen that the estimated failure probability and the prediction errors using \textit{SAIS} have a similar pattern. The estimated failure probability is below the tolerance $\epsilon_{p}$ after four updates. Additionally, the prediction errors of \textit{SAIS} are significantly lower ($5.59\times 10^{-3}$).
New training points generated in the first two iterations are presented in Fig. \ref{burgers_samples}. We observe that the samples produced by the \textit{SAIS} strategy are concentrated more in the regions where the residual error is high. In Fig.\ref{burgers_solution}, we show the final absolute prediction error as well as the predicted solution curve at time $t=1$. It is obvious that the predicted values produced by \textit{SAIS} match well with the exact solution. Although we have only established the theoretical results for the linear model, we indeed observe in Fig. \ref{burgers_residual_failure} that the convergence rates also hold for this nonlinear example.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/uniform_solution_error.pdf}
\put (35,22) {\scriptsize {\bf Numerical results obtained by \textit{Uniform}}}
\end{overpic}
\end{center}
\vspace{0.3cm}
\begin{center}
\begin{overpic}[width=\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/MC_solution_error.pdf}
\put (35,22) {\scriptsize {\bf Numerical results obtained by \textit{RAR}}}
\end{overpic}
\end{center}
\vspace{.3cm}
\begin{center}
\begin{overpic}[width=\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/sais_solution_error.pdf}
\put (35,22) {\scriptsize {\bf Numerical results obtained by \textit{SAIS}}}
\end{overpic}
\end{center}
\caption{The predicted error (Left) and the predicted curves at $t = 1$ (Right) obtained by the three different sampling strategies.}
\label{burgers_solution}
\end{figure}
\begin{figure}
\begin{center}
\begin{overpic}[width = 0.45\textwidth, clip = true, tics = 10]{figures/failure_probability_test.png}
\end{overpic}
\begin{overpic}[width = 0.45\textwidth, clip = true, tics = 10]{figures/residual_test.png}
\end{overpic}
\end{center}
\caption{Error profiles when solving Burgers' equation. Left: the prediction errors with respect to $\epsilon_{p}$ with $\epsilon_{r} = 0.01$. Right: the prediction errors with respect to $\epsilon_{r}$ with $\epsilon_{p} = 0.01$.}
\label{burgers_residual_failure}
\end{figure}
\subsection{The high-dimensional Poisson equation}
Consider the following $d$-dimensional elliptic equation
\begin{equation*}
\label{high dimensional}
-\Delta u(\mathbf{x}) = f(\mathbf{x}),\quad \mathbf{x}\in [-1,1]^{d},
\end{equation*}
with an exact solution $$u(\mathbf{x}) = e^{-10\|\mathbf{x}\|_{2}^{2}}.$$
We set $d=9$, and use 20000 collocation points and 900 boundary points to train the initial network. The hyperparameter for \textit{SAIS} is set at $N_{2} = 5000$. Additionally, we define the residual and failure probability tolerances as $\epsilon_{r} = 0.01, \epsilon_{p} = 0.005$ respectively.
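As in the two-dimensional case, the forcing term has a closed form: $\Delta u = (400\|\mathbf{x}\|_{2}^{2} - 20d)\,u$, hence $f = (20d - 400\|\mathbf{x}\|_{2}^{2})\,u$. This is our own derivation, not quoted from the text; the sketch verifies it against a finite-difference Laplacian in $d=9$.

```python
import numpy as np

def u(x):
    """Exact solution u = exp(-10 ||x||^2), with x of shape (d,)."""
    return np.exp(-10.0 * np.dot(x, x))

def f(x):
    """f = -Laplacian(u) = (20 d - 400 ||x||^2) u."""
    return (20.0 * x.size - 400.0 * np.dot(x, x)) * u(x)

# Central-difference check of the Laplacian in d = 9 dimensions.
d, h = 9, 1e-4
x0 = np.full(d, 0.1)
lap = sum((u(x0 + h * e) - 2.0 * u(x0) + u(x0 - h * e)) / h**2 for e in np.eye(d))
```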
The left plot of Fig. \ref{high_dimensioanl_error} displays the relative errors achieved using the three different sampling strategies. It is shown that the error decreases smoothly when employing \textit{SAIS}, whereas \textit{Uniform} and \textit{RAR} do not seem to converge for this example. This demonstrates that \textit{SAIS} can outperform the other techniques in terms of efficiency. The right plot of Fig.\ref{high_dimensioanl_error} further shows that the estimated failure probability has a similar trend to the mean $L_{2}$ error, confirming the efficiency of the stopping criterion of FI-PINNs.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=0.45\textwidth, clip=true,tics=10]{figures/high_dimensional_error.pdf}
\end{overpic}
\begin{overpic}[width=0.45\textwidth, clip=true,tics=10]{figures/failure_error_high_dimensional.pdf}
\end{overpic}
\end{center}
\caption{Relative $L_{2}$ error (left) and the estimated failure probability (right); $d = 9$.}
\label{high_dimensioanl_error}
\end{figure}
\begin{figure}
\begin{center}
\begin{overpic}[width=0.7\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/high_dimensional_uniform_samples-eps-converted-to.pdf}
\put (34,29) {\scriptsize {\bf Samples by \textit{Uniform}}}
\end{overpic}
\end{center}
\vspace{0.3cm}
\begin{center}
\begin{overpic}[width=0.7\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/high_dimensional_ara_samples-eps-converted-to.pdf}
\put (34,29) {\scriptsize {\bf Samples by \textit{RAR}}}
\end{overpic}
\end{center}
\vspace{0.3cm}
\begin{center}
\begin{overpic}[width=0.7\textwidth,trim= 35 0 45 15, clip=true,tics=10]{figures/high_dimensional_sais_samples-eps-converted-to.pdf}
\put (34,29) {\scriptsize {\bf Samples by \textit{SAIS}}}
\end{overpic}
\end{center}
\caption{The sample distributions of $(x_{8}, x_{9})$ at the 2nd, 4th and 6th updates (from left to right), obtained by the three sampling strategies; $d = 9$.}
\label{high_dimensioanl_samples}
\end{figure}
The newly generated samples at the 2nd, 4th and 6th iterations in the $(x_{8}, x_{9})$ plane are shown in Fig. \ref{high_dimensioanl_samples}. Again, it is clearly seen that the samples generated by \textit{SAIS} are more concentrated around $(0,0)$, which significantly increases the effective sample size used to train the network and results in a smaller $L_{2}$ error; see also Fig.\ref{high_dimensioanl_error}.
\subsection{Two-dimensional unbounded problem}
Consider the following two-dimensional Poisson problem
\begin{equation}
\label{unbounded_2d_problem}
\begin{split}
-\Delta u(x, y) = f(x, y), \quad (x, y)\in \mathbb{R}^2\setminus \Omega \\
u(x,y) = s(x, y),\quad (x, y) \in \partial \Omega,
\end{split}
\end{equation}
where the true solution is
\begin{equation}
\label{unbounded_2d_true_solution}
u(x, y) = e^{-(x - 4)^{2} - (y - 4)^{2}}.
\end{equation}
The boundary of $\Omega$ is defined as
\[\partial\Omega = \left(\cos(t) - \frac{\cos(5t)\cos(t)}{4}, \sin(t) - \frac{\cos(5t)\sin(t)}{4}\right), \quad t\in [0, 2\pi].\]
In this unbounded example, we set the initial proposal distribution $h_{1}$ to be a two-dimensional Gaussian distribution with mean vector $\mu_{1} = [0,0]$ and covariance matrix $\Sigma_{1} = 3I_2$. Additionally, we set $\epsilon_{r}$ and $\epsilon_{p}$ in FI-PINNs to be 0.01 and 0.0001, respectively.
We set the hyperparameter of the \textit{SAIS} algorithm to $N_{2} = 1000$.
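Because the boundary curve has the polar form $r(t) = 1 - \cos(5t)/4$, membership in the exterior domain reduces to a radial comparison. The sketch below combines this test with draws from the initial Gaussian proposal $h_1$; it is a simplified stand-in for the SAIS sampler, which additionally weights points by the residual.

```python
import numpy as np

def outside_omega(x, y):
    """The boundary has polar form r(t) = 1 - cos(5t)/4, so a point lies in the
    exterior (computational) domain iff its radius exceeds the boundary radius."""
    theta = np.arctan2(y, x)
    return np.hypot(x, y) > 1.0 - np.cos(5.0 * theta) / 4.0

# Draw from the initial proposal N([0, 0], 3 I) and keep the exterior points.
rng = np.random.default_rng(0)
pts = rng.multivariate_normal([0.0, 0.0], 3.0 * np.eye(2), size=2000)
exterior = pts[outside_omega(pts[:, 0], pts[:, 1])]
```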
\begin{figure}[t]
\centering
\includegraphics[width = 1\textwidth]{figures/samples_unbounded_2d.pdf}
\caption{The distribution of collocation points obtained by SAIS for the first three updates.}
\label{unbounded_2d_samples}
\end{figure}
The challenge of selecting training data for an unbounded domain arises from the fact that the true solution is unknown. To verify the effectiveness of our method, we start with $5000$ collocation points randomly chosen in $[-2,2]^2\setminus \Omega$, which is far away from the peak. The estimated failure probability becomes smaller than the tolerance after 4 updates. The distribution of newly added collocation points obtained by \textit{SAIS} for the first three iterations is shown in Fig.\ref{unbounded_2d_samples}. It is seen that the updated points produced by \textit{SAIS} are concentrated near the peak $(4,4)$.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=0.32\textwidth, clip=true,tics=10]{figures/unbounded_2d_exact_solution.pdf}
\end{overpic}
\begin{overpic}[width=0.32\textwidth, clip=true,tics=10]{figures/unbounded_2d_numeric_solution.pdf}
\end{overpic}
\begin{overpic}[width=0.34\textwidth, clip=true,tics=10]{figures/absolute_error_unbounded_2d.pdf}
\end{overpic}
\end{center}
\vspace{-0.2cm}
\caption{The exact solution, numerical solution and absolute error from left to right.}
\label{unbounded_2d_solution}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 0.5\textwidth]{figures/unbounded_error_failure_2d.pdf}
\caption{The relative $L_{2}$ error and the estimated failure probability.}
\label{unbounded_2d_error}
\end{figure}
Fig. \ref{unbounded_2d_solution} shows the predicted solution over a disk with a radius of 10. It is seen that the numerical solution matches well with the exact solution.
We also provide the $L_{2}$ errors in Fig. \ref{unbounded_2d_error} to demonstrate the efficiency of our adaptive framework. The figure shows that, after four updates, our technique produces a prediction error of only $2.28\times 10^{-2}$. Additionally, the prediction error and the predicted failure probability have a similar pattern, making the latter useful as a stopping criterion.
\subsection{Time-dependent problem in an unbounded domain}
We finally consider the following time-dependent equation
\begin{equation}
\label{ubnounded_time_dependent_problem}
\begin{split}
&u_t(x, t) = u_{xx}(x, t) + f(x, t), \quad (x, t)\in \mathbb{R} \times[0,1] \\
&u(x, 0) = u_0(x), \quad x \in \mathbb{R}.
\end{split}
\end{equation}
We choose the following reference solution
\begin{equation}
\label{unbounded_time_dipendent_true_solution}
u(x, t) = \frac{e^{- \frac{\left(x - 10\right)^{2}}{4 t + 4}}}{\sqrt{t + 1}}.
\end{equation}
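One can check that this reference solution is a shifted, rescaled heat kernel, so it satisfies $u_t = u_{xx}$ exactly (suggesting $f \equiv 0$ for this particular choice; this is our own observation, not a statement from the text). A quick finite-difference check:

```python
import numpy as np

def u(x, t):
    """Reference solution: a shifted, rescaled heat kernel."""
    return np.exp(-(x - 10.0) ** 2 / (4.0 * t + 4.0)) / np.sqrt(t + 1.0)

# Central differences for u_t and u_xx at a sample point.
x0, t0, h = 9.0, 0.5, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2.0 * h)
u_xx = (u(x0 + h, t0) - 2.0 * u(x0, t0) + u(x0 - h, t0)) / h**2
residual = u_t - u_xx   # ~ 0: the reference solution solves the homogeneous heat equation
```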
In this example, we set the tolerances $\epsilon_{r} = 0.01$ and $\epsilon_{p} = 10^{-10}$, respectively. The hyper-parameters used in the \textit{SAIS} algorithm are set to $N_{1} = 600, N_{2} = 2000, p_{0}= 0.05$. The prior distribution of $(x, t)$ is a two dimensional Gaussian distribution with mean vector $\mu_{1} = [0, 0]$ and covariance matrix $\Sigma_{1} = 3I_{2}$. Moreover, the initial 3000 training points are generated in $[-6, 0]\times[0,1]$, which is far away from the peak. The training can be terminated after three updates to the training dataset because the estimated failure probability is smaller than the tolerance (see Fig.\ref{unbounded_error}).
The distributions of the collocation points are shown in Fig.\ref{unbounded_samples}. We can clearly see that the collocation points produced using the \textit{SAIS} algorithm automatically move toward the region where the residual error is the greatest. This phenomenon demonstrates the effectiveness of our FI-PINNs framework in dealing with unbounded time-dependent problems. Moreover, the absolute error shown in Fig.\ref{unbounded_samples} gradually decreases during the training process. Finally, we obtain an absolute error of less than $2\times 10^{-3}$ over the whole test domain. To make this clearer, we plot the predicted values at $t = 1$. It is obvious that the initial predicted solution differs substantially from the exact solution, while as the training continues, the predicted solution gradually converges to the true solution. After three updates, the relative prediction error shown in Fig.\ref{unbounded_error} is smaller than $3\times 10^{-3}$. The corresponding failure probability shares the same trend. This indicates that our FI-PINNs framework and the SAIS sampling strategy are suitable for these kinds of time-dependent problems as well.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_samples0.png}
\put (25,65) {\scriptsize {\bf Initial points}}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_samples1.png}
\put (35,65) {\scriptsize {\bf 1-updated}}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_samples2.png}
\put (35,65) {\scriptsize {\bf 2-updated}}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_samples3.png}
\put (35,65) {\scriptsize {\bf 3-updated}}
\end{overpic}
\end{center}
\begin{center}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_error0.png}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_error1.png}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_error2.png}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_error3.png}
\end{overpic}
\end{center}
\begin{center}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_predict0.png}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_predict1.png}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_predict2.png}
\end{overpic}
\begin{overpic}[width=0.242\textwidth, clip=true,tics=10]{figures/unbounded_predict3.png}
\end{overpic}
\end{center}
\caption{From top to bottom: the distribution of (updated) samples, the corresponding absolute error and the predicted values at $t = 1$. }
\label{unbounded_samples}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 0.55\textwidth]{figures/failure_error_unbounded.pdf}
\vspace{-0.2cm}
\caption{The relative $L_{2}$ error and the estimated failure probability.}
\label{unbounded_error}
\end{figure}
\section{Concluding remarks}
In this work, we present failure-informed PINNs, which combine PINNs with adaptive sampling procedures. The key idea in our FI-PINNs is to define a failure probability, based on the residual, as an error indicator. In particular, we propose to use a truncated Gaussian as a simple model to estimate the failure probability and to generate new training points. A rigorous error bound for FI-PINNs is also presented. We illustrate the performance of FI-PINNs via several PDE problems, including PDEs with singular solutions, PDEs in unbounded domains, and time-dependent PDEs. It is shown that for all the test problems, FI-PINNs can effectively capture the solution structure. Further studies include applications of FI-PINNs to more complex real-world problems. We close this section by emphasizing again that our failure-informed adaptive sampling procedure can easily be applied to other deep learning formulations such as the deep Ritz method \cite{Edeep18} and weak adversarial networks \cite{zang2020weak}. Furthermore, our truncated Gaussian model can also be replaced by more complex models such as Gaussian mixture models. We shall investigate these issues in future work.
% arXiv:2210.00279, "Failure-informed adaptive sampling for PINNs" (math.NA; stat.ML).
\bigskip
\hrule
\bigskip
\begin{center}
{\large\bfseries Width of percolation transition in complex networks}\\[4pt]
arXiv:cond-mat/0508040 (https://arxiv.org/abs/cond-mat/0508040)
\end{center}
\noindent\textbf{Abstract.}
It is known that the critical probability for the percolation transition is not a sharp threshold; actually it is a region of non-zero width $\Delta p_c$ for systems of finite size. Here we present evidence that for complex networks $\Delta p_c \sim \frac{p_c}{l}$, where $l \sim N^{\nu_{opt}}$ is the average length of the percolation cluster, and $N$ is the number of nodes in the network. For Erd\H{o}s-R\'enyi (ER) graphs $\nu_{opt} = 1/3$, while for scale-free (SF) networks with a degree distribution $P(k) \sim k^{-\lambda}$ and $3<\lambda<4$, $\nu_{opt} = (\lambda-3)/(\lambda-1)$. We show analytically and numerically that the \textit{survivability} $S(p,l)$, which is the probability of a cluster to survive $l$ chemical shells at probability $p$, behaves near criticality as $S(p,l) = S(p_c,l) \cdot \exp[(p-p_c)l/p_c]$. Thus for probabilities inside the region $|p-p_c| < p_c/l$ the behavior of the system is indistinguishable from that of the critical point.
\section{\label{sec:Introduction}Introduction:}
Recently the subject of networks has received much attention. It was
realized that many systems in the real world, such as the Internet,
can be successfully modeled as networks. Other examples include social
networks such as the web of social contacts, and biological networks
such as the protein interaction network and metabolic
networks~\cite{Barabasi-2002:Linked,Dorogovtsev-Mendes-2003:From_Biological_Nets_to_the_Internet,vespignani-pastor-satorras-2004:evolution_and_structure}.
The problem of percolation on networks has also been studied
extensively (e.g.~\cite{Barabasi-Albert-2002:statistical-mechanics}).
Using percolation theory we can describe the resilience of the network
to breakdown of sites or
links~\cite{newman-callaway-2000:networks_robustness,Cohen-Erez-Ben_Avraham-Havlin-2000:Resilience},
epidemic spreading~\cite{Cohen-ben-avraham-Havlin-2003:immunization},
and the properties of the optimal path in a network with highly
fluctuating weights on the
links~\cite{Braunstein-Buldyrev-Cohen-Havlin-Stanley-2003:Optimal}.
A typical percolation system consists of a $d$-dimensional grid of
length $L$, in which the nodes or links are removed with some
probability $1-p$, or are considered ``conducting'' with probability
$p$
(e.g.~\cite{Bunde-Havlin-1996:Fractals-and-Disordered-Systems,stauffer-aharoni-1992:percolation_theory}).
Below some critical probability $p_c$ the system becomes disconnected
into small clusters, i.e., it becomes impossible to cross from one
side of the grid to the other by following the conducting links.
Percolation is considered a geometrical phase transition exhibiting
universality, critical exponents, upper critical dimension at $d=6$
etc. It was noted by Coniglio~\cite{coniglio-1982:cluster_structure}
that for systems of finite size $L$ the transition from connected to
disconnected state has a width $\Delta p_c \sim \frac{1}{L^{1/\nu}}$,
where $\nu$ is a critical exponent related to the correlation length.
Percolation on networks was also studied from a mathematical point of
view~\cite{Erdos-Renyi-1960:Evolution,bollobas-2001:random_graphs,Barabasi-Albert-2002:statistical-mechanics}.
It was found that in Erd\H{o}s-R\'enyi (ER) graphs with an average
degree $\av{k}$ the percolation threshold is: $p_c =
\frac{1}{\av{k}}$. Below $p_c$ the graph is composed of small clusters
(most of them trees). As $p$ approaches $p_c$ trees of increasing
order appear. At $p=p_c$ a giant component emerges and loops of all
orders abruptly appear. Nevertheless, for graphs of finite size $N$ it
was found that the percolation threshold has a finite width $\Delta
p_c \sim \frac{1}{N^{1/3}}$~\cite{bollobas-2001:random_graphs},
meaning that all attributes of criticality are present in the system
in the range $[p_c - \Delta p_c,p_c + \Delta p_c]$. For example: The
number of loops is negligible below $p_c + \Delta p_c$~\footnote{O.
Riordan and P.~L.~Krapivsky, private communication.}.
In this paper we study the \textit{Survivability} of the network near
the critical threshold. The survivability $S(p,l)$ is defined to be
the probability of a connected cluster to ``survive'' up to $l$
chemical shells in a system with conductance probability
$p$~\cite{tzschichholz-bunde-havlin-1989:loopless} (i.e., the
probability that there exists at least one node at chemical distance
$l$ from a randomly chosen node on the same cluster). At the critical
point $p_c$, the survivability decays as a power-law: $S(p_c,l) \sim
l^{-x}$, where $x$ is a universal exponent. Below $p_c$ the
survivability decays exponentially to zero, while above $p_c$ it decays
(exponentially) to a constant. Here we will derive analytically and
numerically the functional form of the survivability above and below
the critical point. We will show that near the critical point $S(p,l)
= S(p_c,l) \cdot \exp[(p-p_c)l/p_c]$. Thus, given a system with a
maximal chemical length $l$, for probabilities inside the range
$|p-p_c| < \frac{p_c}{l}$ the behavior of the system is
indistinguishable from that of the critical point. Hence we get
$\Delta p_c \sim \frac{p_c}{l}$.
The maximal chemical length $l$ at criticality is actually the length
of the percolation cluster, which was found to be: $l \sim
N^{\nu_{opt}}$ where $N$ is the number of nodes in the network. For
Erd\H{o}s-R\'enyi (ER) graphs $\nu_{opt} = 1/3$, while for scale-free
(SF) networks with a degree distribution $P(k) \sim k^{-\lambda}$ and
$3<\lambda<4$, $\nu_{opt} =
(\lambda-3)/(\lambda-1)$~\cite{Braunstein-Buldyrev-Cohen-Havlin-Stanley-2003:Optimal}.
\section{\label{sec:ER_graphs}Erd\H{o}s-R\'enyi Graphs:}
Consider an ER graph with a mean degree $\av{k}$, where each link
has a probability $p$ to conduct. We define $N_l(x) = n_0 + n_1 x +
n_2 x^2 + n_3 x^3 + ...$ to be the generating function of the number
of sites that exist on layer (i.e., chemical shell) $l$ starting from
a random node on the graph (for a conduction probability $p$).
The generating function for the degree distribution of a randomly
chosen node in the network is $G_0(x) = \sum P(k) \cdot x^k$ and the
generating function for the number of links emerging from a node
\textit{reached by following a randomly chosen link} is $G_1(x) =
\sum \frac{1}{\av{k}} kP(k) \cdot
x^{k-1}$~\cite{Newman-Strogatz-Watts-2001:Random}. Taking into
account the probability $p$ for conduction, we have:
\begin{equation}
\label{equ:G_1}
G_1(x) = 1-p + p \sum \frac{1}{\av{k}} kP(k) \cdot x^{k-1}
.
\end{equation}
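As a quick numerical sanity check on Equation~(\ref{equ:G_1}) (a sketch of ours, not from the paper; the Poisson truncation cutoff and parameter values are our choices), one can evaluate it for an ER graph, whose degree distribution is Poisson with mean $\av{k}$, and confirm that $G_1(1)=1$ and $G_1'(1)=p\av{k}$, so the branching process becomes critical exactly at $p\av{k}=1$, i.e., at $p_c = 1/\av{k}$:

```python
import math

def G1(x, p, k_mean, kmax=60):
    """Eq. (G_1) for an ER graph, whose degree distribution is Poisson(<k>)."""
    P = [math.exp(-k_mean)]                   # P(0)
    for k in range(1, kmax):
        P.append(P[-1] * k_mean / k)          # P(k) = e^{-<k>} <k>^k / k!
    avg_k = sum(k * pk for k, pk in enumerate(P))
    s = sum(k * pk * x ** (k - 1) for k, pk in enumerate(P) if k > 0)
    return 1 - p + p * s / avg_k

p, k_mean = 0.3, 4.0
# G1(1) = 1: following a link, we reach something with probability one
assert abs(G1(1.0, p, k_mean) - 1.0) < 1e-9
# G1'(1) = p*<k> for Poisson degrees (the branching factor), so p_c = 1/<k>
h = 1e-6
deriv = (G1(1.0, p, k_mean) - G1(1.0 - h, p, k_mean)) / h
assert abs(deriv - p * k_mean) < 1e-3
```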
We can now write the following recursive
relation~\cite{Kalisky-Cohen-ben-Avraham-Havlin-2004:Tomography}:
\begin{equation}
\label{equ:recursive}
N_{l+1}(x) = G_1(N_l(x))
,
\end{equation}
which means that the probability $n^{(l+1)}_i$ for having $i$ nodes at
layer $l+1$ is composed of the probability of reaching a vertex by
following a link, and reaching $i$ nodes at layer $l$ by following all
branches emerging from that vertex - see sketch in
Fig.~\ref{fig:recurse}.
\ifnum1=2
\figureRecurse
\fi
It can be seen that $N_l(0) = n_0$ is the probability that there are
$0$ nodes at layer $l$, i.e., the probability to die before layer $l$.
Thus $\epsilon_l = 1 - N_l(0)$ is the probability to survive up to layer
$l$.
From~(\ref{equ:recursive}) we have:
\begin{equation}
N_{l+1}(0) = G_1(N_l(0))
\end{equation}
\begin{equation}
1-\epsilon_{l+1} = G_1(1-\epsilon_l)
\end{equation}
\begin{equation}
\label{equ:recursive_1}
1-\epsilon_{l+1} = 1 - p + p \sum \frac{1}{\av{k}} kP(k) (1-\epsilon_l)^{k-1}
\end{equation}
For ER graphs $G_0(x) = G_1(x) = e^{\av{k}(x-1)}$ (for $p=1$); thus:
\begin{equation}
1-\epsilon_{l+1} = 1 - p + p e^{\av{k} (1 - \epsilon_l - 1)} = 1 - p + p e^{-\av{k} \epsilon_l}
\end{equation}
\begin{equation}
\label{equ:recursive_ER}
\epsilon_{l+1} = p - p e^{-\av{k} \epsilon_l}
\end{equation}
Setting $\delta \equiv p - p_c$, where $p_c = \frac{1}{\av{k}}$, and
expanding by series we get:
\begin{equation}
\epsilon_{l+1} = p_c + \delta - (p_c + \delta) \left(1 - \av{k} \epsilon_l + \frac{1}{2} \av{k}^2 \epsilon_l^2 - ... \right)
\end{equation}
Keeping only terms up to second order in $\delta$ and
$\epsilon_l$ (we assume that $p < p_c$, and thus $\epsilon_l \ll 1$ for
large $l$), we get:
\begin{equation}
\epsilon_{l+1} \approx \epsilon_l - \frac{1}{2} \av{k} \epsilon_l^2 + \delta \av{k} \epsilon_l
\end{equation}
\begin{equation}
\label{equ:recursive_ER_near_pc}
\frac{d \epsilon_{l}}{dl} \approx - \frac{1}{2} \av{k} \epsilon_l^2 + \frac{\delta}{p_c} \cdot \epsilon_l
\end{equation}
At criticality, $\delta = 0$ and the solution to this equation is:
$\epsilon_l \sim
l^{-1}$~\cite{Kalisky-Cohen-ben-Avraham-Havlin-2004:Tomography}. The
additional term suggests the following solution near criticality:
$\epsilon_l \sim l^{-1} \cdot \exp{\left( \frac{1}{p_c} \delta l
\right)}$
\footnote{Indeed, solving equation~(\ref{equ:recursive_ER_near_pc})
taking $\delta \ll 1$ and the initial condition $\epsilon_{l=0}=1$
we get: $\epsilon_l = \frac{2}{\av{k}} l^{-1} \exp{\left(
\frac{1}{p_c} \delta l \right)}$.}.
In terms of survivability this can be written as:
\begin{equation}
\label{equ:ER_survivability}
S(p,l) = S(p_c,l) \cdot \exp \left(\frac{1}{p_c}(p-p_c)l \right)
.
\end{equation}
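A quick way to check this scaling is to iterate Equation~(\ref{equ:recursive_ER}) directly. The sketch below (our illustration; $\av{k}$ and the iteration horizons are our choices) confirms the critical power law $\epsilon_l \approx \frac{2}{\av{k}} l^{-1}$, the exponential decay to zero below $p_c$, and the saturation at a positive constant above $p_c$:

```python
import math

def survive_prob(p, k_mean, l_max):
    """Iterate Eq. (recursive_ER): eps_{l+1} = p - p*exp(-<k>*eps_l), eps_0 = 1."""
    eps = 1.0
    for _ in range(l_max):
        eps = p - p * math.exp(-k_mean * eps)
    return eps

k_mean = 4.0
p_c = 1.0 / k_mean          # ER percolation threshold

# at p_c: eps_l ~ (2/<k>) / l, so l * eps_l -> 2/<k> = 0.5
l = 100_000
assert abs(l * survive_prob(p_c, k_mean, l) / (2.0 / k_mean) - 1.0) < 0.01

# below p_c the survival probability decays (exponentially) to zero ...
assert survive_prob(p_c - 0.01, k_mean, 2_000) < 1e-12

# ... while above p_c it saturates at a positive constant, p * P_inf
assert survive_prob(p_c + 0.01, k_mean, 2_000) > 0.01
```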
In order to check this result we numerically solved the survivability
$S(p,l)$ near $p_c$ according to the exact enumeration method
presented
in~\cite{Braunstein-Buldyrev-Sreenivasan-Cohen-Havlin-Stanley-2004:the_optimal}~\footnote{This
method assumes that near the critical point there is a negligible
number of loops and thus the network behaves similarly to a Cayley
tree with the same degree distribution $P(k)$ as the ER network.}.
Fig.~\ref{fig:ERsurvivability} shows the survivability $S(p,l)$ for
different values of $p$. For $p = p_c$ the survivability decays as a
power law, while above and below there is an exponential decay, either
to zero (for $p<p_c$) or to a constant (for $p>p_c$).
Fig.~\ref{fig:ERsurvivability_scaled} shows that all curves of the
survivability $S(p,l)$ from Fig.~\ref{fig:ERsurvivability} can be
rescaled such that they all collapse. Moreover, scaled survivabilities
from all different graphs with different values of $\av{k}$ (i.e.,
different values of $p_c$) also collapse on the same curve.
However, equation~(\ref{equ:ER_survivability}) is true only below the
percolation threshold where there is no giant component. Above the
percolation threshold there is an exponential decay to a non-zero
constant, and the generalized expression is:
\begin{equation}
\label{equ:ER_survivability_modified}
S(p,l) = S(p_c,l) \cdot \exp \left(- \frac{1}{p_c} |p-p_c| l \right) + pP_{\infty}
,
\end{equation}
where $pP_{\infty}$ is the probability for a randomly chosen site to be
inside the percolation cluster~\footnote{$S(p,l \rightarrow \infty)$
is the probability that, if we start from a randomly chosen
conducting site, we survive an infinite chemical distance.
This equals the probability $p$ that the randomly chosen site is
conducting, multiplied by the probability $P_{\infty}$ that it
resides in the giant component.}.
Indeed, setting $\epsilon_{l+1} = \epsilon_l$ in
equation~(\ref{equ:recursive_ER}) the resulting ``steady state''
solution is
$pP_{\infty}$~\cite{bollobas-2001:random_graphs}\footnote{$P_{\infty}$
obeys the transcendental equation: $P_{\infty} = 1-e^{-\av{k} p
P_{\infty}}$.}.
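This identity is easy to confirm numerically (a sketch of ours; the parameter values are our choices): iterating Equation~(\ref{equ:recursive_ER}) to its fixed point and solving the transcendental equation for $P_{\infty}$ by fixed-point iteration give the same value:

```python
import math

k_mean, p = 4.0, 0.5          # well above p_c = 1/<k> = 0.25 (our choice)

# fixed point of Eq. (recursive_ER): eps = p - p*exp(-<k>*eps)
eps = 1.0
for _ in range(10_000):
    eps = p - p * math.exp(-k_mean * eps)

# P_inf from the transcendental equation P_inf = 1 - exp(-<k>*p*P_inf)
P = 1.0
for _ in range(10_000):
    P = 1.0 - math.exp(-k_mean * p * P)

# the steady-state survivability is exactly p * P_inf
assert abs(eps - p * P) < 1e-12
```

Both iterations contract geometrically, so $10\,000$ steps reach machine precision; the two fixed-point equations coincide after multiplying the transcendental equation by $p$.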
\ifnum1=2
\figureERsurvivability
\figureERsurvivabilityScaled
\fi
\section{\label{sec:SF_graphs}Scale-Free Graphs:}
Scale-free graphs can be taken to have a degree distribution of the
form $P(k) = c k^{-\lambda}$ where $c \approx (\lambda-1)
m^{\lambda-1}$~\cite{Cohen-Erez-Ben_Avraham-Havlin-2000:Resilience}.
In order to solve equation~(\ref{equ:recursive_1}) we have to
evaluate:
\begin{equation}
G_1(1-\epsilon_l) = \frac{1}{\av{k}} \sum kP(k) (1-\epsilon_l)^{k-1}
\end{equation}
Expanding by powers of $\epsilon_l$, and inserting $P(k)=c
k^{-\lambda}$ with $3<\lambda<4$, we
get~\cite{cohen-havlin-2005:complex_networks}:
\begin{equation}
\sum kP(k) (1-\epsilon_l)^{k-1} \approx \av{k} - \av{k(k-1)} \epsilon_l + \frac{c}{2} \Gamma(4-\lambda) \epsilon_l^{\lambda-2}
\end{equation}
Thus equation~(\ref{equ:recursive_1}) becomes:
\begin{equation}
1-\epsilon_{l+1} \approx 1 - p +
\frac{p}{\av{k}} \left( \av{k} - \av{k(k-1)} \epsilon_l + \frac{c}{2} \Gamma(4-\lambda) \epsilon_l^{\lambda-2} \right)
\end{equation}
Taking $p = p_c + \delta$:
\begin{equation}
1-\epsilon_{l+1} \approx 1 - (p_c+\delta) +
\frac{p_c+\delta}{\av{k}} \left( \av{k} - \av{k(k-1)} \epsilon_l + \frac{c}{2} \Gamma(4-\lambda) \epsilon_l^{\lambda-2} \right)
.
\end{equation}
Substituting
$p_c=\frac{\av{k}}{\av{k(k-1)}}$~\cite{Cohen-Erez-Ben_Avraham-Havlin-2000:Resilience}
we get:
\begin{equation}
1-\epsilon_{l+1} \approx 1 - \epsilon_l + p_c \frac{c}{2 \av{k}} \Gamma(4-\lambda) \cdot \epsilon_l^{\lambda-2}
- \frac{1}{p_c} \cdot \delta \epsilon_l + \frac{c}{2 \av{k}} \Gamma(4-\lambda) \cdot \delta \epsilon_l^{\lambda-2}
\end{equation}
Setting $A \equiv p_c \frac{c}{2 \av{k}} \Gamma(4-\lambda)$ we get:
\begin{equation}
\epsilon_{l+1} - \epsilon_l \approx -A \cdot \epsilon_l^{\lambda-2} + \frac{1}{p_c} \cdot \delta \epsilon_l
- \frac{A}{p_c} \cdot \delta \epsilon_l^{\lambda-2}
\end{equation}
\begin{equation}
\epsilon_{l+1} - \epsilon_l \approx -A \cdot \epsilon_l^{\lambda-2}
+ \frac{1}{p_c} \cdot \delta \left( \epsilon_l - A \cdot \epsilon_l^{\lambda-2} \right)
.
\end{equation}
For large $l$, $\epsilon_l \ll 1$, and taking into account that
$\lambda-2 > 1$ we have $\epsilon_l^{\lambda-2} \ll \epsilon_l$.
Therefore:
\begin{equation}
\label{equ:recursive_SF_near_pc}
\frac{d \epsilon_l}{dl} \approx -A \cdot \epsilon_l^{\lambda-2} + \frac{1}{p_c} \cdot \delta \epsilon_l
.
\end{equation}
For $\delta = 0$ the solution is $\epsilon_l \sim l^{-x}$ with $x =
\frac{1}{\lambda-3}$~\cite{Kalisky-Cohen-ben-Avraham-Havlin-2004:Tomography}.
The additional term suggests the following solution near criticality:
$\epsilon_l \sim l^{-x} \cdot \exp{\left( \frac{\delta l}{p_c}
\right)}$
\footnote{Solving equation~(\ref{equ:recursive_SF_near_pc}) with
$\delta \ll 1$ and the initial condition $\epsilon_{l=0}=1$ we get:
$\epsilon_l = \frac{1}{( (\lambda-3)A )^{ 1/(\lambda-3) }}
l^{-1/(\lambda-3)} \exp{\left( \frac{1}{p_c} \delta l \right)}$.}.
A similar form can be found for $\lambda >
4$~\footnote{In this range the behavior is similar to that of ER
graphs~\cite{Cohen-Ben_Avraham-Havlin-2002:critical_exponents}.}.
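The discrete analogue of Equation~(\ref{equ:recursive_SF_near_pc}) at $\delta = 0$, namely $\epsilon_{l+1} = \epsilon_l - A\epsilon_l^{\lambda-2}$, can be iterated directly to confirm the exponent $x = \frac{1}{\lambda-3}$, including the prefactor $((\lambda-3)Al)^{-1/(\lambda-3)}$ from the footnote (a sketch of ours; $\lambda$, $A$, the initial condition, and the horizon are our choices):

```python
lam = 3.5                 # degree exponent, 3 < lambda < 4 (our choice)
A = 0.5                   # amplitude in Eq. (recursive_SF_near_pc) (our choice)
x = 1.0 / (lam - 3.0)     # predicted decay exponent (x = 2 here)

eps = 0.5                 # small enough that the map stays positive
l_max = 100_000
for _ in range(l_max):
    eps -= A * eps ** (lam - 2.0)

# eps_l ~ ((lambda - 3) * A * l)^(-1/(lambda - 3)), prefactor included
predicted = ((lam - 3.0) * A * l_max) ** (-x)
assert abs(eps / predicted - 1.0) < 0.01
```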
The scaling form for SF networks is also confirmed by numerical
simulations as shown in Figures~\ref{fig:SFsurvivability}
and~\ref{fig:SFsurvivability_scaled}.
\ifnum1=2
\figureSFsurvivability
\figureSFsurvivabilityScaled
\fi
\section{\label{sec:summary}Summary and Conclusions}
The scaling form of the survivability near the critical probability
obeys the following scaling relation (for $p<p_c$):
\begin{equation}
\label{equ:survivability_fluct}
S(p,l) = S(p_c,l) \cdot \exp \left(\frac{p-p_c}{\Delta p_c} \right)
.
\end{equation}
where $\Delta p_c = \frac{p_c}{l}$. Given a system with a maximal
chemical length $l$, for all values of conductivity $p$ inside the
range $[p_c - \Delta p_c,p_c + \Delta p_c]$ the survivability behaves
similarly to the power law $S(p_c,l) \sim l^{-x}$ found at criticality.
Thus, the width of the critical threshold is $\Delta p_c =
\frac{p_c}{l}$.
To summarize, we have shown analytically and numerically that the
survivability in ER and SF graphs scales according to
equations~(\ref{equ:ER_survivability})
and~(\ref{equ:ER_survivability_modified}) near the critical point.
This implies that the width of the critical region in networks of
finite size scales as $\Delta p_c = \frac{p_c}{l}$, where $l$ is the
chemical length of the percolation cluster. For ER graphs, $l \sim
N^{1/3}$, while for SF networks with $3<\lambda<4$, $l \sim
N^{(\lambda-3)/(\lambda-1)}$.
\section*{Acknowledgments}
We thank the ONR, the Israel Science Foundation and the Israeli Center
for Complexity Science for financial support. We thank E.~Perlsman, S.
Sreenivasan, Lidia A.~Braunstein, Sergey V. Buldyrev, Shlomo Havlin,
H. Eugene Stanley, Y. Strelniker, Alexander Samukhin, O. Riordan and
P.~L.~Krapivsky for useful discussions.
\omitit{Lidia A.~Braunstein thanks the ONR - Global for financial
support.}
\bigskip
\noindent\hrulefill
\bigskip

\noindent\url{https://arxiv.org/abs/2104.05200}

\noindent\textbf{A Note on the Performance of Algorithms for Solving Linear Diophantine Equations in the Naturals}

\noindent\textbf{Abstract.} We implement four algorithms for solving linear Diophantine equations in the naturals: a lexicographic enumeration algorithm, a completion procedure, a graph-based algorithm, and the Slopes algorithm. As already known, the lexicographic enumeration algorithm and the completion procedure are slower than the other two algorithms. We compare in more detail the graph-based algorithm and the Slopes algorithm. In contrast to previous comparisons, our work suggests that they are equally fast on small inputs, but the graph-based algorithm gets much faster as the input grows. We conclude that implementations of AC-unification algorithms should use the graph-based algorithm for maximum efficiency.
\section{Introduction}\label{sec:intro}
Solving linear Diophantine equations in the naturals is at the core of
AC-unification algorithms. AC-unification reduces to top-most
unification problems of the form
\[ f^*(u_1, \ldots, u_l) = f^*(v_1, \ldots, v_k), \] where
\( u_1, \ldots, u_l, v_1, \ldots, v_k \) are variables (possibly with
repetitions) and \( f^* \) is a variadic symbol corresponding to some
AC-symbol \( f \). Solving such a top-most AC-unification problem
reduces to solving the Diophantine equation
\begin{equation}\label{eq:lindioph}
a_1x_1+a_2x_2+\cdots+a_nx_n=b_1x_1+b_2x_2+\cdots+b_nx_n,
\end{equation}
\noindent where \( x_i \) (\( 1 \leq i \leq n \)) are the unknowns
(taking values in the set of naturals) and \( a_i, b_i \)
(\( 1 \leq i \leq n \)) are the multiplicities
of the corresponding variable among \( u_1, \ldots, u_l \) and
\( v_1, \ldots, v_k \), respectively. For more details, see the survey
of Baader and Snyder on
unification~\cite{DBLP:books/el/RV01/BaaderS01}.
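To illustrate the reduction (a small sketch of ours; the helper name is our invention), the coefficients $a_i, b_i$ of Equation~\ref{eq:lindioph} can be read off as variable multiplicities:

```python
from collections import Counter

def topmost_to_lde(u_vars, v_vars):
    """Read off Equation (1) from f*(u_1..u_l) =? f*(v_1..v_k):
    one unknown per distinct variable, with its multiplicity on
    each side as the coefficient."""
    unknowns = sorted(set(u_vars) | set(v_vars))
    a, b = Counter(u_vars), Counter(v_vars)
    return unknowns, [a[x] for x in unknowns], [b[x] for x in unknowns]

# f*(x, x, y) =? f*(y, z, z) becomes 2x + 1y + 0z = 0x + 1y + 2z
assert topmost_to_lde(["x", "x", "y"], ["y", "z", "z"]) == \
    (["x", "y", "z"], [2, 1, 0], [0, 1, 2])
```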
Therefore, algorithms for computing AC-unifiers make intensive use of a
linear Diophantine equation (LDE) solver. We compare four algorithms
for LDE solving. Our results suggest that the graph-based algorithm is
the fastest on modern computers, in contrast to previous benchmarks,
which are older. We conclude that implementations of AC-unification
should consider switching to the graph-based algorithm.
\section{Algorithms}\label{sec:algos}
In this section we briefly describe the known algorithms for solving
Equation~\ref{eq:lindioph}.
\subsection{The Lexicographic Enumeration Algorithm}
In this subsection we describe the simplest way of solving
Equation~\ref{eq:lindioph}. We notice that a linear Diophantine
equation with natural solutions can have an infinite number of
solutions. We generally do not need all of them, but just need a
complete set of minimal solutions. A solution
$S_1=(x_1, x_2, \ldots, x_n)$ is not minimal if there exists another
solution $S_2=(x'_1, x'_2, \ldots, x'_n)$ such that for all $i$,
$S_{1, i} \geq S_{2,i}$ and $S_1 \neq S_2$. The set of minimal
solutions forms a basis. The lexicographic
algorithm~\cite{HUET1978144} lexicographically enumerates all
solutions and saves only the minimal ones. However, we cannot
enumerate infinitely many solutions; we should have a bound for
$x_{a,i}$ and $x_{b,i}$, where the vectors $x_a$ and $x_b$ form a
solution of Equation~\ref{eq:lindioph}. Huet~\cite{HUET1978144} points
out that, for a minimal solution, the unknowns $x_{a,i}$ should not be
greater than $\max(b)$ and the unknowns $x_{b,i}$ should not be greater
than $\max(a)$, where $a$ and $b$ are the coefficients in
Equation~\ref{eq:lindioph}. Lambert~\cite{lambert1987borne} gives the
stronger bounds $\sum_i x_{a,i} \leq \max(b)$ and
$\sum_i x_{b,i} \leq \max(a)$. Moreover, we do not need to enumerate
the possible values of all $N$ unknowns: it is enough to enumerate
$N-1$; the last unknown can be found by solving a simple equation. We
can develop this idea further. What happens if we enumerate $N-2$
variables? We get an equation of the form $ax+by=c$, which we can solve
using the extended Euclidean algorithm. With two types of bounds and
two types of \emph{optimizations} we get 4 similar algorithms that
solve Equation~\ref{eq:lindioph}. The slowest part of this algorithm
is checking if a new solution is minimal or not, because checking
minimality requires comparing the new solution with the other already
generated minimal solutions. For implementation simplicity we rewrite
Equation~\ref{eq:lindioph} as:
\begin{equation}\label{eq:lindioph2}
a_1x_1+a_2x_2+\ldots+a_nx_n - b_1x_1-b_2x_2-\ldots-b_nx_n=0
\end{equation}
If we let $w_i = a_i-b_i$ then Equation~\ref{eq:lindioph2} can be written as:
\begin{equation}\label{eq:lindioph3}
w_1x_1+w_2x_2+\ldots+w_nx_n=0
\end{equation}
We work with Equation~\ref{eq:lindioph3}, because it is closer to the
implementation. In Figure~\ref{fig:lexalg} we provide a very generic
implementation for the lexicographic algorithm. The algorithm
implements a standard backtracking procedure, with the parameter $p$
denoting the current unknown. The test \textit{$p$ is last} allows us to
implement the optimizations described above (stop the enumeration at
$n-1$ unknowns or at $n-2$ unknowns).
\begin{figure}
\begin{algorithm}[H]
\DontPrintSemicolon
\SetKwFunction{FLexAlg}{LexAlg}
\SetKwProg{Fn}{Function}{:}{}
\SetKwFor{For}{for (}{)}{}
\Fn{\FLexAlg{p}} {
\eIf{p is last} {
sol := Solve the remaining equation without enumeration\;
\uIf{sol is Minimal} {
addSolution(sol)\;
}
} {
\For{$i = 0;\ i < bound;\ i = i + 1$} {
set sol[p] = i\;
LexAlg(p + 1)\;
}
}
}
\end{algorithm}
\caption{\label{fig:lexalg}The Lexicographic Enumeration Algorithm (generic version).}
\end{figure}
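The following sketch (ours, in Python rather than our C++ implementation) brute-forces the enumeration within Lambert's bounds and filters for minimality; it deliberately omits the $N-1$ / $N-2$ optimizations described above:

```python
from itertools import product

def minimal_solutions(a, b):
    """All minimal non-zero natural solutions of
    a_1*x_1+...+a_n*x_n = b_1*y_1+...+b_m*y_m, enumerated within
    Lambert's bounds: sum(x) <= max(b) and sum(y) <= max(a)."""
    xs = [x for x in product(range(max(b) + 1), repeat=len(a))
          if 0 < sum(x) <= max(b)]
    ys = [y for y in product(range(max(a) + 1), repeat=len(b))
          if 0 < sum(y) <= max(a)]
    sols = [x + y for x in xs for y in ys
            if sum(c * v for c, v in zip(a, x)) ==
               sum(c * v for c, v in zip(b, y))]
    def dominated(s):
        return any(t != s and all(ti <= si for ti, si in zip(t, s))
                   for t in sols)
    return sorted(s for s in sols if not dominated(s))

# 2x = 3y: the only minimal solution is x = 3, y = 2
assert minimal_solutions([2], [3]) == [(3, 2)]
# x1 + 2*x2 = 2*y1 has the basis {(0,1,1), (2,0,1)}
assert minimal_solutions([1, 2], [2]) == [(0, 1, 1), (2, 0, 1)]
```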
\subsection{The Completion Procedure}\label{sec:CompProc}
Another way to solve Equation~\ref{eq:lindioph} is to compute all
minimal solutions by a \emph{completion procedure}. Such an algorithm
is due to Fortenbacher~\cite{fortenbacher1983algebraische}, with an
optimization by Guckenbiehl and
Herold~\cite{guckenbiehl1985solving}. For some
$x=(x_1, x_2, \ldots, x_n)$, we denote by $d(x)$ the result of the
expression $d(x)=w_1x_1+w_2x_2+\ldots+w_nx_n$, which we call the defect
of Equation~\ref{eq:lindioph3}. A \emph{proposal}
$p = (x_1, x_2, \ldots, x_n)$ is characterized by
$-\max b \leq d(p) \leq \max a$. A solution is a proposal \( p \) that
has $d(p) = 0$. The algorithm starts with a set of proposals. At each
\emph{completion step}, it updates every proposal $p$ in the following
way: \( \bullet \) if its defect is less than zero, then it increments
$x_i$ by $1$ for some index $i$ with $w_i > 0$; \( \bullet \)
otherwise (if its defect is positive) it increments $x_i$ by $1$ for
some $i$ with $w_i < 0$. If the result has defect zero then a
\emph{minimal solution} was found. If a proposal is not minimal then
it is discarded, because we can not obtain a \emph{minimal solution}
from a \emph{non-minimal proposal}. In such a way only minimal
solutions are computed and this is an advantage over
\emph{Lexicographic Algorithm}. However, a solution may be computed
several times and we still have to test proposals for
minimality. Guckenbiehl and Herold~\cite{guckenbiehl1985solving}
describe a way to avoid computation of the same solution several
times. To do that we need to select one unique computation for each
solution. That is done by following one rule: a proposal with negative
(positive) defect must not be incremented at a position $i$ (position
$j$) if there exists a $k > i$ with $w_k > 0$ ($k > j$ with
$w_k < 0$). We present this version of the completion procedure in
Figure~\ref{fig:compalg}.
\begin{figure}
\begin{algorithm}[H]
\DontPrintSemicolon
\SetKwFunction{CompAlg}{CompAlg}
\SetKwProg{Fn}{Function}{:}{}
\SetKwFor{For}{for (}{)}{}
\Fn{\CompAlg{w}} {
// init the proposal set\;
pSet = empty set of proposals\;
\For{$i = 0;\ i < n;\ i = i + 1$} {
p = new proposal\;
p[i] = 1\;
pSet.add(p)\;
}
\While{pSet is not empty} {
// completion step\;
pSetNew = a new proposal set\;
\For{$p\ in\ pSet$} {
\For {$i = n - 1;\ i \geq 0;\ i = i - 1$} {
\uIf{$(d(p) < 0\ and\ w_i < 0)\ or\ (d(p) > 0\ and\ w_i > 0)$} {
continue\;
}
auxProposal = copyOf(p)\;
auxProposal[i] = auxProposal[i] + 1\;
\eIf{auxProposal has defect zero} {
addSolution(auxProposal)\;
} {
\uIf{auxProposal is minimal} {
pSetNew.add(auxProposal)\;
}
}
// avoiding multiple computations of the same solution\;
\uIf{$p[i] > 0$} {
break\;
}
}
}
pSet = pSetNew\;
}
}
\end{algorithm}
\caption{\label{fig:compalg}The Completion Procedure described by
Fortenbacher~\cite{fortenbacher1983algebraische}, with an
optimization by Guckenbiehl and
Herold~\cite{guckenbiehl1985solving}.}
\end{figure}
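A compact sketch of the completion procedure (our illustration in Python; it omits the Guckenbiehl--Herold duplicate-avoidance rule, relying on sets for deduplication instead, and a final minimality filter is added as a safety net):

```python
def completion(w):
    """Completion sketch for w_1*x_1+...+w_n*x_n = 0 (Equation (3)):
    grow proposals one unit at a time toward defect zero, discarding
    any proposal that dominates a known solution."""
    n = len(w)
    d = lambda p: sum(wi * pi for wi, pi in zip(w, p))
    dominates = lambda p, s: all(si <= pi for si, pi in zip(s, p))  # p >= s
    units = [tuple(int(j == i) for j in range(n)) for i in range(n)]
    solutions = {u for u, wi in zip(units, w) if wi == 0}
    proposals = {u for u, wi in zip(units, w) if wi != 0}
    while proposals:
        nxt = set()
        for p in proposals:
            dp = d(p)
            for i in range(n):
                # raise a negative defect via w_i > 0, lower a positive one via w_i < 0
                if not ((dp < 0 and w[i] > 0) or (dp > 0 and w[i] < 0)):
                    continue
                q = tuple(pj + (j == i) for j, pj in enumerate(p))
                if d(q) == 0:
                    solutions.add(q)
                elif not any(dominates(q, s) for s in solutions):
                    nxt.add(q)
        proposals = nxt
    return sorted(s for s in solutions
                  if not any(t != s and dominates(s, t) for t in solutions))

# x1 + 2*x2 = 2*y1, i.e. w = (1, 2, -2): basis {(0,1,1), (2,0,1)}
assert completion((1, 2, -2)) == [(0, 1, 1), (2, 0, 1)]
# 2x = 3y, i.e. w = (2, -3): basis {(3, 2)}
assert completion((2, -3)) == [(3, 2)]
```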
\subsection{The Graph Algorithm}\label{sec:GrAlg}
Clausen and Fortenbacher~\cite{clausen1989efficient} described a
further optimization of the completion procedure. We call the
resulting algorithm the \emph{Graph Algorithm}. In order to avoid all
additions and subtractions, they represent Equation~\ref{eq:lindioph3}
as a graph. The graph representation of a linear Diophantine equation
is a labelled digraph with the set
$\{d \in \mathbb{Z} \mid -\max b \leq d \leq \max a\}$ representing the
nodes and the set $\{d \rightarrow_{w_i} d + w_i \mid i \leq n\}$
representing labelled edges. In other words, the nodes are the possible
defects of proposals, and an edge $d \rightarrow_{w_i} d + w_i$ corresponds
to incrementing a proposal at position $i$. A solution in this graph
is a walk that begins in zero and ends in zero. An advantage over the
\emph{completion procedure} is that the minimality check of a solution
is also transformed into a graph problem. In this graph, a walk that
corresponds to the solution $s = (s_1, s_2, \ldots, s_n)$ is
non-minimal if there is another walk $z = (z_1, z_2, \ldots, z_n)$
that is shorter than $s$ and also is bounded by $s$, in other words
$z_i \leq s_i$ for all $i$. A detailed implementation of this
algorithm written in Pascal is provided by Clausen and
Fortenbacher~\cite{clausen1989efficient}.
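The following sketch (ours; it is not Clausen and Fortenbacher's optimized implementation) illustrates the graph view: we DFS over walks on the defect graph that start and end at zero, prune any walk whose defect leaves the node range, and cap the numbers of positive and negative steps with Lambert-style bounds supplied by the caller. A walk of a minimal solution never revisits zero in between, since such a walk would split into two smaller solutions:

```python
def graph_solve(w, max_a, max_b):
    """Minimal solutions of w.x = 0 as zero-to-zero walks on the
    defect graph with nodes in [-max_b, max_a]."""
    n = len(w)
    found = set()

    def dfs(defect, counts, pos_left, neg_left):
        if defect == 0 and any(counts):
            found.add(tuple(counts))   # minimal-solution walks never revisit 0
            return
        for i in range(n):
            if w[i] == 0:
                continue               # zero-coefficient unknown: trivial self-loop
            nd = defect + w[i]
            if nd < -max_b or nd > max_a:
                continue               # a minimal solution's walk stays in range
            if (w[i] > 0 and pos_left == 0) or (w[i] < 0 and neg_left == 0):
                continue               # Lambert-style caps on the walk length
            counts[i] += 1
            dfs(nd, counts, pos_left - (w[i] > 0), neg_left - (w[i] < 0))
            counts[i] -= 1

    dfs(0, [0] * n, max_b, max_a)
    return sorted(s for s in found
                  if not any(t != s and all(ti <= si for ti, si in zip(t, s))
                             for t in found))

# x1 + 2*x2 = 2*y1 -> w = (1, 2, -2), max(a) = 2, max(b) = 2
assert graph_solve((1, 2, -2), 2, 2) == [(0, 1, 1), (2, 0, 1)]
```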
\subsection{The Slopes Algorithm}\label{sec:SlopesAlg}
The Slopes Algorithm described by Filgueiras and Tom{\'a}s~\cite{filgueiras1995fast} is an optimization of the lexicographic algorithm. The enumeration is performed \emph{for all but three} of the unknowns and an equation of the following form:
\begin{equation}\label{eq:lde3unk}
ax = by + cz + v,\ a,b,c,x,y,z \in \mathbb{N},\ v \in \mathbb{Z}
\end{equation}
is solved. Filgueiras and Tom{\'a}s~\cite{filgueiras1995fast} describe
a way of finding directly all minimal solutions for
Equation~\ref{eq:lde3unk}. The idea is that, if minimal solutions of
Equation~\ref{eq:lde3unk} are ordered with $z$ strictly increasing,
then both the solution with the smallest $z$ and the difference
between consecutive solutions can be computed
algebraically. Geometrically, this can be seen as a Pareto frontier of
all solutions projected onto the \emph{YZ-plane} if $v \geq 0$ and is
a polygonal line when $v < 0$. We can project solutions to 2D space
because it is well known that each solution of
Equation~\ref{eq:lde3unk} verifies the congruence:
\begin{equation}\label{eq:lde3unk2dspace}
by + cz \equiv -v \pmod{a}.
\end{equation}
Conversely, each solution of
Equation~\ref{eq:lde3unk2dspace} corresponds to some integral solution
of Equation~\ref{eq:lde3unk}. In Figure~\ref{fig:slopesalg} we present
the implementation of Slopes algorithm for solving $ax + by + cz = 0$.
\begin{figure}
\begin{algorithm}[H]
\SetKwFunction{Slopes}{Slopes}
\SetKwProg{Fn}{Function}{:}{}
\SetKwFor{For}{for (}{)}{}
\Fn{\Slopes{a, b, c}} {
gb=gcd(a,b); gc=gcd(a,c); G=gcd(gb,c)\;
ymax=a/gb; zmax=a/gc\;
dz=gb/G; dy=(c*multiplier(b,a)/G) mod ymax\;
y=ymax-dy; z=dz\;
Solutions=\{(b/gb, ymax, 0), (c/gc, 0, zmax), ((b*y+c*z)/a, y, z)\}\;
\While{dy $>$ 0} {
\While{y $>$ dy} {
y=y-dy; z=z+dz\;
Solutions.add(((b*y+c*z)/a, y, z))\;
}
f=dy/y; dy=dy mod y; dz=f*z+dz\;
}
\Return Solutions\;
}
// multiplier(a, b) is an integer $m_a$ such that $gcd(a,b)=m_a*a + m_b*b$
\end{algorithm}
\caption{\label{fig:slopesalg}The Slopes algorithm of Filgueiras and
Tom{\'a}s for the equation $ax = by + cz$.}
\end{figure}
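The pseudocode can be transliterated into Python as below (our sketch). We read \texttt{multiplier(b,a)} as the Bezout coefficient $u$ with $\gcd(b,a) = u \cdot b + v \cdot a$, i.e., essentially an inverse of $b$ modulo $a$ up to common factors, and we add a final minimality filter as a safety net for degenerate cases such as $dy = 0$:

```python
from math import gcd

def slopes(a, b, c):
    """Minimal natural solutions (x, y, z) of a*x = b*y + c*z."""
    def bezout_u(x, y):                  # u with gcd(x, y) = u*x + v*y
        old_r, r, old_u, u = x, y, 1, 0
        while r:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_u, u = u, old_u - q * u
        return old_u

    gb, gc = gcd(a, b), gcd(a, c)
    G = gcd(gb, c)
    ymax, zmax = a // gb, a // gc
    dz = gb // G
    dy = (c // G * bezout_u(b, a)) % ymax
    y, z = ymax - dy, dz
    sols = {(b // gb, ymax, 0), (c // gc, 0, zmax),
            ((b * y + c * z) // a, y, z)}
    while dy > 0:
        while y > dy:
            y, z = y - dy, z + dz
            sols.add(((b * y + c * z) // a, y, z))
        f = dy // y
        dy, dz = dy % y, f * z + dz
    def dominated(s):
        return any(t != s and all(ti <= si for ti, si in zip(t, s))
                   for t in sols)
    return sorted(s for s in sols if not dominated(s))

# 5x = 3y + 2z
assert slopes(5, 3, 2) == [(1, 1, 1), (2, 0, 5), (3, 5, 0)]
# 7x = 3y + 2z (exercises the inner loop)
assert slopes(7, 3, 2) == [(1, 1, 2), (2, 0, 7), (2, 4, 1), (3, 7, 0)]
# 4x = 6y + 10z (non-coprime coefficients)
assert slopes(4, 6, 10) == [(3, 2, 0), (4, 1, 1), (5, 0, 2)]
```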
\section{Methodology}\label{sec:Meth}
We have implemented all algorithms in C++. The first two algorithms
are clearly slower than the last two. Therefore, we made a more
detailed comparison between the Graph and Slopes algorithms,
reproducing the comparison by Filgueiras and
Tom{\'a}s~\cite{filgueiras1995fast}. As our implementation of the
Slopes algorithm is somewhat slower than the well known and optimized
C implementation~\cite{Slopes}, we use the later for the
comparison. All of our code, including instructions for reproducing
our results (Figures 1-4), are available at
\url{https://github.com/Djok216/LDEAlgsComparison}.
To measure the running time we use the python \emph{subprocess} and
\emph{time} libraries. The first one is used for spawning the
executables of the algorithms and the later for measuring running time
using \emph{perf\_counter}. We set a timeout of $10$ minutes for the
spawned processes. The tests are generated using \emph{random.randint}
and the left side is sorted in decreasing order and the right side in
increasing order, because in most cases this ordering speeds up the
Slopes algorithm as explained by Filgueiras and
Tom{\'a}s~\cite{filgueiras1995fast}.
The tests are divided in $160$ classes determined by
$N \in \{ 1,2,3,4 \}$ - the number of unknowns on the left hand side,
$M \in \{ 2, 3, 4, 5, 6, 7, 8, 9 \}$ - the number of unknowns on the
right hand side such that $N \leq M$ and
$\emph{MaxValue} \in \{ 2, 3, 5, 13, 29, 39, 107, 503, 1021 \}$ - the
maximum coefficient of any unknown. We manually set \emph{MaxValue} as
part of the coefficients on the right hand side, because there are
always more unknowns on the right hand side than left hand side
($N \leq M$). Every class contains $10$ different tests generated
randomly. We use a fixed seed for reproducibility purposes.
We calculate the running time after running the same test with the
same algorithm $5$ times. The running time for that algorithm on that
specific test is considered to be the arithmetic mean of the $3$
values that remain out of the $5$ after removing the smallest and the
largest value. The exception is when an algorithm runs for more than $15$
seconds. In this case we stop running the same test and calculate the
arithmetic mean of running times available at the moment. For example,
if an algorithm runs two times in $14.9$ seconds and the third time in
$15.2$, then we stop running this test after the third run and the time is
considered to be $(2 * 14.9 + 15.2) / 3 = 15$ seconds. We also set a
timeout of $10$ minutes, after which we automatically stop the
algorithm.
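The timing rule above can be captured in a few lines (a sketch of ours; the function name is our invention):

```python
def reported_time(times):
    """Per-test running time as described: with 5 runs, drop the
    smallest and the largest and average the remaining 3; with fewer
    runs (early stop past 15 s), average whatever is available."""
    if len(times) == 5:
        times = sorted(times)[1:-1]
    return sum(times) / len(times)

assert reported_time([1.0, 2.0, 3.0, 4.0, 5.0]) == 3.0
# the example from the text: two runs at 14.9 s and one at 15.2 s -> 15 s
assert abs(reported_time([14.9, 14.9, 15.2]) - 15.0) < 1e-9
```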
For every test in a given class, we add $1$ point to the algorithm
taking the least time and $0$ points to the other. In case of a tie,
we add $0.5$ to both algorithms. Therefore, a score of $6:4$ would
mean that the first algorithm performed better $6$ times, while the
second algorithm $4$ times out of the $10$ tests for a given class.
We consider an algorithm to win a particular class if it scores at
least $8$ points. The definition of a win is justified statistically
by Filgueiras and Tom{\'a}s~\cite{filgueiras1995fast}.
For compiling the code we use \emph{GCC 5.4}. Below are the commands
used to compile the programs:
\begin{verbatim}
gcc -static slopesV7i.c -std=c11 -O3 -o slopesV7i
g++ -static -lm -s -x c++ -std=c++17 -O3 -o graph graph.cpp
\end{verbatim}
\noindent We run the benchmark on an Intel Xeon machine with two
processors and 24 hardware threads (12 physical cores) with a clock
speed of 2.67 GHz.
We repeat the measurements made by Filgueiras and
Tom{\'a}s~\cite{filgueiras1995fast}, but we also compare the two
algorithms using an epsilon of $0.01$. By this we mean that the
algorithms are considered equally fast if their running times differ
by a value smaller than $0.01$ seconds. Moreover, we also compute the
overall time spent in every class of tests.
\section{Discussion}\label{sec:Dis}
Figure~\ref{fig1} contains a summary of the results. The Slopes
algorithm wins $103$ classes out of $160$ and the Graph algorithm wins $7$
classes. Filgueiras and Tom{\'a}s~\cite{filgueiras1995fast} find that
Slopes wins $88$ classes and Graph wins $33$ classes. These results
suggest that the Slopes algorithm is faster than the Graph algorithm.
We redo the same comparison, but this time we consider the two
algorithms to be equal if their running time differs by at most
$\epsilon = 0.01$ seconds. The results are summarized in
Figure~\ref{fig2}. Each algorithm now has $6$ wins and for most of the
classes there is a tie. This means that the two algorithms are
essentially equal in efficiency on these inputs.
Going further, we analyze the \emph{total time} spent in each test
class by each algorithm. The results are summarized in
Figure~\ref{fig4}. We see that in classes where the Slopes algorithm
wins, the difference is very small. However, in the classes in which
the Graph algorithm wins, the difference is huge. The total time spent
in all $160$ classes for the Slopes algorithm is $4284.81$ seconds and
$724$ seconds for the Graph algorithm. The counts ($4284.81$ seconds,
$724$ seconds) should be interpreted taking into account that they
contain $1$ timeout of $10$ minutes for Graph and $4$ timeouts of $10$
minutes for Slopes, as summarized in Figure~\ref{fig3}.
Based on our results, we conclude that the Graph algorithm is
significantly faster than Slopes for bigger instances and roughly as
fast for small instances.
\emph{Practical relevance of our benchmark.} In most cases, the
bottleneck in AC(U)-unification algorithms is combining the solutions
to the linear Diophantine equations themselves. However, there are
AC(U)-unification problems where solving Equation~\ref{eq:lindioph} is
the slow part. An example is an ACU-unification problem with a single
AC-function $f$ and 8 different variables, which can be constructed
based on Equation~\ref{eqToSolve}:
\begin{equation}\label{eqToSolve}
104x_1 + 167x_2 = 165x_3 + 154x_4 + 148x_5 + 159x_6 + 174x_7 + 150x_8.
\end{equation}
The ACU-unification problem is the following:
\begin{equation}\label{eqToSolveProb}
f_{104}(u_1) + f_{167}(u_2) =_? f_{165}(u_3) + f_{154}(u_4) + f_{148}(u_5) + f_{159}(u_6) + f_{174}(u_7) + f_{150}(u_8),
\end{equation}
\noindent where $f_k(v) = f(v, v, \ldots, v)$ with $k$ occurrences of
$v$.
Equation~\ref{eqToSolve} has a basis of size $5510$. Finding the basis
is significantly slower than combining its solutions and creating the
ACU-unifier. On the same hardware as described in
Section~\ref{sec:Meth}, the Graph algorithm takes about $0.6$ seconds
to solve the linear Diophantine equation above, while combining the
solutions into an ACU-unifier takes $0.15$ seconds. To compute the
ACU-unifier, we use the algorithm presented by Baader and Snyder in
their survey on unification~\cite{DBLP:books/el/RV01/BaaderS01}.
Therefore, at least on some AC-unification problems, solving LDEs
dominates the running time.
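To make the notion of a basis of minimal solutions concrete, here is a minimal sketch (not one of the four implemented algorithms) that computes the basis of a much smaller LDE by bounded brute force. It relies on the classical bound that each component of a minimal solution is at most the largest coefficient on the opposite side of the equation; `minimal_basis` is our own illustrative helper name.

```python
from itertools import product

def minimal_basis(lhs, rhs):
    """Minimal nonnegative solutions (Hilbert basis) of
    sum(lhs[i]*x[i]) == sum(rhs[j]*y[j]), by bounded brute force.
    Each component of a minimal solution is bounded by the largest
    coefficient on the opposite side (classical bound)."""
    bx = max(rhs)  # bound for each x[i]
    by = max(lhs)  # bound for each y[j]
    sols = []
    for x in product(range(bx + 1), repeat=len(lhs)):
        lval = sum(a * v for a, v in zip(lhs, x))
        for y in product(range(by + 1), repeat=len(rhs)):
            if lval == sum(b * v for b, v in zip(rhs, y)) and any(x + y):
                sols.append(x + y)

    # keep only solutions not componentwise dominated by another solution
    def dominated(s, t):
        return s != t and all(tv <= sv for tv, sv in zip(t, s))

    return [s for s in sols if not any(dominated(s, t) for t in sols)]

# Tiny example equation: 2*x1 + 3*x2 = 4*y1 + y2
basis = minimal_basis([2, 3], [4, 1])
```

This exhaustive approach is only feasible for toy equations; for coefficients like those in Equation~\ref{eqToSolve}, dedicated algorithms such as Graph or Slopes are required.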
\emph{Conclusion.} Implementations of AC unification should therefore
consider using the Graph algorithm, or choosing between Graph and
Slopes, depending on problem size.
| {
"timestamp": "2021-04-13T02:25:43",
"yymm": "2104",
"arxiv_id": "2104.05200",
"language": "en",
"url": "https://arxiv.org/abs/2104.05200",
"abstract": "We implement four algorithms for solving linear Diophantine equations in the naturals: a lexicographic enumeration algorithm, a completion procedure, a graph-based algorithm, and the Slopes algorithm. As already known, the lexicographic enumeration algorithm and the completion procedure are slower than the other two algorithms. We compare in more detail the graph-based algorithm and the Slopes algorithm. In contrast to previous comparisons, our work suggests that they are equally fast on small inputs, but the graph-based algorithm gets much faster as the input grows. We conclude that implementations of AC-unification algorithms should use the graph-based algorithm for maximum efficiency.",
"subjects": "Data Structures and Algorithms (cs.DS); Performance (cs.PF)",
"title": "A Note on the Performance of Algorithms for Solving Linear Diophantine Equations in the Naturals",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9688561703644736,
"lm_q2_score": 0.731058584489497,
"lm_q1q2_score": 0.708290620480567
} |
https://arxiv.org/abs/1904.05717 | The MOMMS Family of Matrix Multiplication Algorithms | As the ratio between the rate of computation and rate with which data can be retrieved from various layers of memory continues to deteriorate, a question arises: Will the current best algorithms for computing matrix-matrix multiplication on future CPUs continue to be (near) optimal? This paper provides compelling analytical and empirical evidence that the answer is "no". The analytical results guide us to a new family of algorithms of which the current state-of-the-art "Goto's algorithm" is but one member. The empirical results, on architectures that were custom built to reduce the amount of bandwidth to main memory, show that under different circumstances, different and particular members of the family become more superior. Thus, this family will likely start playing a prominent role going forward. |
\subsection{An I/O lower bound for MMM}
Smith et al.~\cite{smith2019tight} start with a simple model of memory with two layers: a small, fast memory with a capacity of $ M $ elements and a large, slow memory with unlimited capacity. They show that any algorithm for ordinary MMM%
\footnote{We only consider algorithms that compute the $ i,j $ element of $ m \times n $ matrix $ C $ as $ \gamma_{i,j} := \sum_{p=0}^{k-1} \alpha_{i,p} \beta_{p,j} $ where $ \alpha_{i,p} $ and $ \beta_{p,j} $ are the $ i,p $ and $ p,j $ elements of $ m \times k $ matrix $ A $ and $ k \times n $ matrix $ B $, respectively.}
must read at least ${2mnk}/{\sqrt{M}} - 2M$ elements from slow memory and additionally write at least $mn - M$ elements to slow memory.
Adding these two lower bounds gives a lower bound on the number of transfers between slow and fast memory,
called the I/O lower bound, of approximately $ {2mnk}/{\sqrt{M}} $.
Importantly, this lower bound is tight, modulo lower order terms. It improves upon previous work~\cite{redblue,dongarra2008masterworker,irony2004communication}.
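For concreteness, the bound can be evaluated for sample problem sizes; `mmm_io_lower_bound` is our own helper name, not from the cited work.

```python
import math

def mmm_io_lower_bound(m, n, k, M):
    """Lower bound from the two-layer model on slow-fast memory transfers
    for ordinary m x n x k matrix multiplication with a fast memory of M
    elements: at least 2mnk/sqrt(M) - 2M reads plus mn - M writes."""
    reads = 2 * m * n * k / math.sqrt(M) - 2 * M
    writes = m * n - M
    return reads + writes

# With m = n = k = 1000 and M = 10**4 (fast memory of 10,000 elements),
# the bound is dominated by the 2mnk/sqrt(M) term:
bound = mmm_io_lower_bound(1000, 1000, 1000, 10**4)
```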
\subsection{Resident algorithms for MMM}
In~\cite{smith2019tight}, it is shown that
three algorithms, named {\em Resident~A}, {\em Resident~B}, and {\em Resident~C},
attain the lower bound on the number of reads from slow memory%
\footnote{Modulo lower order terms.}.
Additionally, Resident~C attains the lower bound on the number of writes to slow memory\footnotemark[3].
In each algorithm, the elements of one of the operand matrices are read from slow memory only once,
and each element of the other two operand matrices is reused approximately $\sqrt{M}$ times each time it is brought into fast memory.
While the Resident~A algorithm was described as early as 1991~\cite{lam1991cache}, and all three appear in~\cite{ITXGEMM},
their optimality was first noted in~\cite{smith2019tight}.
\input fig_three_shapes
\subsubsection{Resident C}
The MMM operation $ Z \mathrel{+}= X Y $ can be computed by the sequence of rank-1 updates
$ Z \mathrel{+}= x_0 y_0^T + x_1 y_1^T + \cdots $,
where $ x_i $ and $ y_i^T $ are a row and column of $ X $ and $ Y $, respectively.
This is illustrated in Figure~\ref{fig:three_shapes} (left),
where $Z$ is the square block on the left, $X$ is the middle operand, and $Y$ is the operand on the right.
The vectors $x_i$ and $y_i^T$ are represented by the thin partitions of $X$ and $Y$.
Suppose we have (larger) matrices $C$, $A$, and $B$.
We compute $C \mathrel{+}= AB$ in the following way. Partition:
\setlength{\arraycolsep}{2pt}
$$
C \!\rightarrow\!
\left( \begin{array}{c | c | c}
C_{0,0} & \mbox{\tiny $\cdots$} & C_{0,n-1} \\ \hline
\mbox{\tiny $\vdots $} & & \mbox{\tiny $\vdots$} \\ \hline
C_{m-1,0} & \mbox{\tiny$\cdots$} & C_{m-1,n-1}
\end{array}
\right),
A \!\rightarrow\!
\left( \begin{array}{c}
A_0 \\ \hline
\mbox{\tiny $\vdots $} \\ \hline
A_{m-1}
\end{array}
\right),
B \!\rightarrow\!
\left( \begin{array}{c | c | c}
B_0 & \mbox{\tiny $\cdots$} & B_{n-1}
\end{array}
\right),
$$
where $ C_{i,j} $ is $ m_c \times n_c $, $ A_i $ is $ m_c \times k$, and $ B_j $ is $ k \times n_c $, except at the margins.
Then we compute the suboperation $C_{i,j} \mathrel{+}= A_i B_j$ using the described algorithm for $ Z \mathrel{+}= X Y $. Now, $C_{i,j}$ is read from slow memory once
at the beginning of the suboperation and resides in fast memory for the remainder of it, while
$A_i$ and $B_j$ are streamed from slow memory one row and column at a time.
Within each suboperation $(i,j)$, each element of $C_{i,j}$, $A_i$, and $B_j$ is read from slow memory once, and there are $\ceil{\frac{m}{m_c}} \ceil{\frac{n}{n_c}}$ such suboperations.
Overall, this algorithm incurs
$m n$ reads and $m n$ writes for matrix $C$,
$\ceil{\frac{m n k}{n_c}}$ reads for matrix $A$, and
$\ceil{\frac{m n k}{m_c}}$ reads for matrix $B$.
When $m_c \approx n_c \approx \sqrt{M}$~\footnote{$m_c$ and $n_c$ must be slightly less than $\sqrt{M}$ to make room for a row of $A_i$ and a column of $B_j$ in fast memory.\label{foot:sqrtm}},
the I/O cost is ${2mnk}/{\sqrt{M}} + 2 mn$.
The highest-order term in the I/O cost of the Resident~C algorithm matches the I/O lower bound for MMM.
Thus the algorithm is essentially optimal.
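The per-operand counts above can be verified with a small tally that walks the suboperations; `resident_c_io` is our own helper name, and the block sizes are assumed to divide the matrix dimensions so the ceiling terms vanish.

```python
from math import ceil

def resident_c_io(m, n, k, mc, nc):
    """Tally slow-memory transfers for the Resident C algorithm by
    walking its ceil(m/mc) * ceil(n/nc) suboperations."""
    reads_C = writes_C = reads_A = reads_B = 0
    for i in range(ceil(m / mc)):
        for j in range(ceil(n / nc)):
            reads_C += mc * nc   # C_{i,j} read once at the start
            writes_C += mc * nc  # ... and written back once at the end
            reads_A += mc * k    # A_i streamed row by row
            reads_B += k * nc    # B_j streamed column by column
    return reads_C, writes_C, reads_A, reads_B

# m = n = 1024, k = 4096, with mc = nc = 128 playing the role of sqrt(M):
rc, wc, ra, rb = resident_c_io(1024, 1024, 4096, 128, 128)
```

The tallies reproduce the formulas in the text: $mn$ reads and writes of $C$, $mnk/n_c$ reads of $A$, and $mnk/m_c$ reads of $B$.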
\subsubsection{Resident A and B}
Similarly, in the MMM $ Z \mathrel{+}= X Y $, each column of $Z$ can be computed by the matrix-vector multiplication $ z_i \mathrel{+}= X y_i $, where $z_i$ and $y_i$ are columns of $Z$ and $Y$, respectively.
This is illustrated in Figure~\ref{fig:three_shapes} (middle), where $Z$, $X$, and $Y$ are the left, middle, and right operands, respectively.
Consider $C\mathrel{+}=AB$.
Partition:
$$
C \rightarrow
\left( \begin{array}{c}
C_0 \\ \hline
\vdots \\ \hline
C_{m-1} \\
\end{array}
\right),
A \rightarrow
\left( \begin{array}{c | c | c}
A_{0,0} & \cdots & A_{0,k-1} \\ \hline
\vdots & & \vdots \\ \hline
A_{m-1,0} & \cdots & A_{m-1,k-1}
\end{array}
\right),
B \rightarrow
\left( \begin{array}{c}
B_0 \\ \hline
\vdots \\ \hline
B_{k-1} \\
\end{array}
\right),
$$
where $C_i$ is $m_c \times n$, $A_{i,p}$ is $ m_c \times k_c $, and $ B_{p} $ is $ k_c \times n $, except at the margins.
Then we compute the suboperation $C_i \mathrel{+}= A_{i,p} B_{p}$ using the described MVM-based algorithm
for $Z \mathrel{+}= X Y$.
In Resident~A, $A_{i,p}$ is read from slow memory once at the beginning of the suboperation,
and $C_i$ and $B_p$ are streamed from slow memory one column at a time.
The total I/O costs associated with each matrix are:
$\ceil{\frac{mnk}{k_c}}$ reads and $\ceil{\frac{mnk}{k_c}}$ writes of elements of $ C $;
$m k$ reads of elements of $ A $; and
$\ceil{\frac{mnk}{m_c}}$ reads of elements of $ B $.
If $m_c \approx k_c \approx \sqrt{M}$,
the input cost is approximately ${2mnk}/{\sqrt{M}} + mk$,
and the output cost is approximately ${mnk}/{\sqrt{M}}$.
The input cost nearly attains the lower bound on reads from slow memory.
The Resident B algorithm is the obvious symmetric equivalent to the Resident A algorithm,
built upon the suboperation in Figure~\ref{fig:three_shapes} (right). Its I/O costs mirror that of the Resident A algorithm.
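The Resident A costs derived above can likewise be tallied directly; `resident_a_io` is our own helper name, with block sizes assumed to divide the matrix dimensions.

```python
from math import ceil

def resident_a_io(m, n, k, mc, kc):
    """Tally slow-memory transfers for the Resident A algorithm by
    walking its ceil(m/mc) * ceil(k/kc) suboperations."""
    reads_C = writes_C = reads_A = reads_B = 0
    for i in range(ceil(m / mc)):
        for p in range(ceil(k / kc)):
            reads_A += mc * kc   # A_{i,p} read once, stays resident
            reads_B += kc * n    # B_p streamed one column at a time
            reads_C += mc * n    # C_i read ...
            writes_C += mc * n   # ... and written once per p
    return reads_C, writes_C, reads_A, reads_B

# m = n = 1024, k = 2048, with mc = kc = 128:
rc, wc, ra, rb = resident_a_io(1024, 1024, 2048, 128, 128)
```

Again the tallies match the text: $mk$ reads of $A$, $mnk/m_c$ reads of $B$, and $mnk/k_c$ reads and writes of $C$.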
The above descriptions ``stream'' rows and/or columns of two matrices while keeping a block of the third matrix resident in fast memory. Notice that one can stream row panels instead of rows and/or column panels instead of columns, as long as the ``small'' dimension of the panel is small relative to the size of the block that is resident in fast memory.
This still achieves the I/O lower bound modulo a lower order term. This insight becomes crucial when we discuss blocking for multiple levels of memory.
\subsection{Algorithms for different shapes of MMM}
\label{sec:single_different_shapes}
The number of reads and writes from slow memory
for the Resident~A, B, and C algorithms
depends on the shape of the input matrices:
There are cases where one of the algorithms is more {\textit{efficient}} than the other two,
where we define efficiency by flops per memop (I/O operations).
There are $2mnk$ flops performed during MMM, and the I/O lower bound is ${2mnk}/{\sqrt{M}}$.
Thus our goal for efficiency is $\sqrt{M}$ flops per memop.
We examine the cases for which algorithms are efficient,
assuming that $m$, $n$, and $k$ are at least $\sqrt{M}$.
Resident C is efficient if and only if $k$ is large.
It reads $\ceil{\frac{mnk}{n_c}} + \ceil{\frac{mnk}{m_c}} + mn$ elements from slow memory during MMM.
If $m_c = n_c = \sqrt{M}$, this is approximately ${2mnk}/{\sqrt{M}} + mn$.
This gives an efficiency of $\left( \frac{1}{\sqrt{M}} + \frac{1}{2k} \right)^{-1}$.
When $k$ is large, this is approximately $\sqrt{M}$.
We can analyze Resident~A and Resident~B similarly.
Here we ignore the I/O cost for writes.
If the sizes of the resident blocks are chosen to be equal to $\sqrt{M}$,
Resident~B has an efficiency of $\left( \frac{1}{\sqrt{M}} + \frac{1}{2m} \right)^{-1}$, which is approximately $\sqrt{M}$ when $m$ is large.
Resident~A has an efficiency of $\left( \frac{1}{\sqrt{M}} + \frac{1}{2n} \right)^{-1}$, which is approximately $\sqrt{M}$ when $n$ is large.
This shows that one must choose the right algorithm depending on the shape of the problem.
For each of the Resident~A, B, and C algorithms, there is a minimal shape that can be implemented efficiently.
For Resident~C this occurs when $m \approx n \approx \sqrt{M}$, and $k$ is large.
For Resident~B this occurs when $k \approx n \approx \sqrt{M}$, and $m$ is large.
For Resident~A this occurs when $m \approx k \approx \sqrt{M}$, and $n$ is large.
In each case, the resident matrix fits into fast memory,
and the dimension shared by the other two operands should be large
so that the cost of moving the resident matrix into fast memory can be amortized.
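A sketch of this shape-based selection rule, under the approximation (ignoring writes) that each algorithm's efficiency is $(1/\sqrt{M} + 1/(2d))^{-1}$, where $d$ is the dimension shared by the two streamed operands; `best_resident` is our own helper name.

```python
import math

def best_resident(m, n, k, M):
    """Pick the operand to keep resident, choosing the highest
    approximate read efficiency (flops per memop) for square resident
    blocks of side sqrt(M). The amortizing dimension is k for
    Resident C, n for Resident A, and m for Resident B."""
    eff = lambda d: 1.0 / (1.0 / math.sqrt(M) + 1.0 / (2.0 * d))
    scores = {"A": eff(n), "B": eff(m), "C": eff(k)}
    return max(scores, key=scores.get)

# When m and n are small and k is large, keep C resident:
choice = best_resident(10**3, 10**3, 10**7, 10**6)
```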
The fact that one must choose a different algorithm for MMM depending on problem shape and size
was previously noted for distributed memory MMM~\cite{schatz2012scalable,li1996poly},
and for hierarchical memory MMM~\cite{ITXGEMM,ATLAS}.
\subsection{A balancing act}
\label{sec:balancing}
So far in this section, we have assumed that the cost associated with accessing an element is the same
regardless of whether it is an element of $A$, $B$, or $C$.
In doing so, we arrived at the following strategy:
Place a \textbf{square} block of the resident matrix in fast memory, streaming the other two from slow memory.
This amortizes the I/O costs associated with the resident matrix,
and \textbf{equalizes} the number of accesses of the two streamed matrices.
We now re-analyze the Resident~A, B, and~C algorithms in the case that the costs associated with accessing elements of the different operands are unequal.
This can happen, e.g., if we add a third layer of memory of intermediate size and access cost.
In this case, at the start of a multiplication with submatrices one operand may reside in slow memory while another resides in some intermediate layer.
Furthermore, in many cases reads and writes cannot be overlapped (e.g. main memory is often not dual-ported), and hence it is more expensive to access elements of $C$ since $C$ must be both read and written.
One way to address this is to select the algorithm where blocks of the operand that is most expensive to access are kept in fast memory as much as possible.
Another is to adjust the sizes used for the resident block in fast memory.
We now walk through an example of the second solution.
Suppose we are employing the Resident~A algorithm, with an $m_c \times k_c$ block of $A$ in fast memory.
If accessing an element of $B$ costs $\beta_B$ and accessing an element of $C$ costs $\beta_C$,
then when $m$, $n$, and $k$ are large, the efficiency in terms of flops per memop is
$\left( \frac{\beta_B}{2 m_c} + \frac{\beta_C}{2 k_c} \right)^{-1}$.
This is maximized when
$m_c = \sqrt{\frac{\beta_B }{\beta_C}M}$ and
$k_c = \sqrt{\frac{\beta_C }{\beta_B}M}$.
With this, the total cost of I/O (rather than the number of accesses) associated with accessing the streamed matrices are equalized {\bf and thus the cost minimized} (modulo lower order terms).
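This choice of block sizes can be checked numerically; a sketch, taking the per-flop streaming cost to be $\beta_B/(2m_c) + \beta_C/(2k_c)$, the quantity whose minimizer under the constraint $m_c k_c = M$ is the choice above. The helper names are ours.

```python
import math

def balanced_block(M, beta_B, beta_C):
    """Resident A block sizes that equalize the streaming costs:
    m_c = sqrt(beta_B/beta_C * M), k_c = sqrt(beta_C/beta_B * M).
    Note m_c * k_c = M, so the block still fills fast memory."""
    mc = math.sqrt(beta_B / beta_C * M)
    kc = math.sqrt(beta_C / beta_B * M)
    return mc, kc

def stream_cost_rate(mc, kc, beta_B, beta_C):
    """Per-flop streaming cost for large m, n, k."""
    return beta_B / (2 * mc) + beta_C / (2 * kc)

# Suppose each access of C is four times as costly as one of B:
mc, kc = balanced_block(2**20, 1.0, 4.0)
```

At the balanced sizes, the two cost terms are equal, and the total beats the square $\sqrt{M} \times \sqrt{M}$ block.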
\subsection{Summary}
The ingredients to an efficient algorithm are:
(1) Fill fast memory with a submatrix of one of the operands (the resident matrix),
(2) Amortize the I/O cost associated with (1) over enough computation,
(3) Choose dimensions for the resident block that equalize the I/O costs (rather than the number of accesses) associated with the two streamed matrices.
\subsection{A motivating example}
For our motivating example, we choose the Resident~A algorithm when blocking for $ L_m $.
\paragraph{Effectively utilizing $ L_m $}
To effectively utilize $ L_m $,
we select the partitioning that casts computation in terms of a double loop around the middle shape in Figure~\ref{fig:three_shapes}.
We call this subproblem the $L_m$ block-panel multiply.
\paragraph{Effectively utilizing $ L_f $}
The question now becomes how to orchestrate the $ L_m $ block-panel multiply in a way that effectively utilizes $ L_f $.
This suggests again a double loop around one of the shapes in Figure~\ref{fig:three_shapes}.
It is not hard to see that creating a double loop that again implements a Resident~A algorithm is problematic:
Partitioning $A$ for $L_f$ would expose panels of $B$ and $C$ that by design are too large to fit into $L_m$,
and these panels are used in multiple $L_f$ subproblems.
Either these panels of $B$ or $C$ would need to be brought into $L_m$ multiple times
or the sizes of the various matrix partitions would need to be reduced.
Either way the effect would be that the operation would no longer be near-optimal with respect to the number of transfers between
$L_m$ and slower levels of memory.
We conclude that the block-panel multiply should be implemented in terms of a Resident~C or Resident~B algorithm.
Which of these depends on the choice of the outer and inner loop, which we discuss next.
\paragraph{Choosing the outer loop for the $L_f$ cache}
In order to attain near the lower bound, each element in the two long panels of $B$ and $C$
must be used $\approx\sqrt{M}$ times each time it is brought into $L_m$.
This leads us to first partition along the $n$ dimension with blocksize $n_c$,
yielding partitions of $B$ and $C$ that are small enough to fit into the $L_m$ cache along with the block of $A$.
\paragraph{The inner loop for the $L_f$ cache}
The next step is to further partition the matrices to optimize for $L_f$.
The subproblem exposed by each iteration of the $L_f$ outer loop is a block of $A$ times a skinny panel of $B$ updating a skinny panel of $C$.
The $L_f$ inner loop will partition this subproblem along one of the two dimensions that the $L_f$ outer loop did not.
We can choose either of these.
For this example, we will choose the $k$ dimension (with blocksize $k_c$).
This $L_f$ inner loop exposes a new subproblem that we will call the $L_f$ subproblem.
In this case, the $L_f$ subproblem is a tall and skinny panel of $A$ times a $k_c \times n_c$ block of $B$,
updating a tall and skinny panel of $C$.
If $k_c \approx n_c \approx \sqrt{M_f}$, then the $L_f$ subproblem corresponds to the furthest left shape seen in Figure~\ref{fig:three_shapes}.
Then, the block of $B$ will reside in the $L_f$ cache, and the panels of $A$ and $C$ will be streamed in from lower levels of cache
for the duration of this subproblem.
\input fig_one_step
\subsection{Building a Family of Algorithms}
In the motivating example, we started with a problem resembling the middle shape from Figure~\ref{fig:three_shapes},
and used two loops to partition the problem, resulting in a problem resembling one of the other two problem shapes.
This suggests the following methodology to optimize for any number of levels of cache:
We begin with one of the three shapes in Figure~\ref{fig:three_shapes},
optimizing for the I/O cost for the $L_h$ cache.
Then, to optimize for the next smaller and faster level of cache, the $L_{h-1}$ cache,
we first partition the problem along the long dimension,
and then partition along one of the other two dimensions.
The result is one of the other two shapes shown in Figure~\ref{fig:three_shapes}.
We name the outermost of these two loops the \textit{$L_{h-1}$ outer loop}
and the innermost the \textit{$L_{h-1}$ inner loop}.
This process is shown in Figure~\ref{fig:onestep}.
We note that~\cite{ITXGEMM} claimed that it was locally optimal to encounter
a subproblem that corresponds to one of the three optimal subproblems at every level of the memory hierarchy.
However that paper did not give details on how this could be accomplished,
nor did it analyze the claim in terms of any I/O lower bounds.
\subsection{Classifying matrix operands and algorithms}
\label{sec:classifying}
The two loops for $L_{h-1}$ have exposed partitions of matrices
that differ in terms of access frequency and size.
From these properties, we can classify these different matrix partitions.
The \textit{$L_h$ resident block} is the block that is designed to remain and reside in the $L_h$ cache during the duration of the $L_h$ subproblem.
The other two operands of an $L_h$ subproblem are called the \textit{$L_h$ streamed panels},
as small partitions of the streamed panels are brought into $L_h$ during an iteration of the $L_{h-1}$ outer loop,
used for computation, and then not used again during the $L_h$ subproblem.
The $L_{h-1}$ inner loop partitions the $L_h$ resident block and one of the $L_h$ streamed panels.
The remaining $L_h$ streamed panel is left unpartitioned.
The matrix partition not partitioned by the $L_{h-1}$ inner loop is used during every iteration of the $L_{h-1}$ inner loop.
Guided by the principle that each element of the $L_h$ subproblem
should only be read into $L_h$ once,
it must remain in cache during the entire inner $L_{h-1}$ loop.
We name this matrix partition the \textit{$L_h$ guest matrix}.
Compare this to the resident block of the $L_h$ cache.
The elements of the $L_h$ guest matrix, like the elements of the $L_h$ resident block,
are reused from $L_h$ across iterations of a loop.
The difference is that the $L_h$ resident block is reused across every iteration of the outer $L_{h-1}$ loop,
and the $L_h$ guest matrix is reused across the iterations of the inner $L_{h-1}$ loop.
After the two $L_{h-1}$ loops, we have exposed one of the three shapes associated with our algorithms
Resident~A, Resident~B, and Resident~C.
The small block that will then reside in $L_{h-1}$ will be known as the \textit{$L_{h-1}$ resident block}.
The algorithms that arise from our methodology can be identified by which operand supplies the resident block at each level of cache.
We introduce a naming convention for the algorithms that states the level of cache and the operand that resides in it.
For instance if an algorithm has $B$ as the resident block of the $L_2$ cache, $A$ as the resident block of the $L_1$ cache,
and $C$ as the resident block in registers,
it is called $B_2 A_1 C_0$.
\subsection{Optimizing for registers}
In our family of algorithms, we think of the register file as $L_0$: the smallest and fastest level of cache.
For practical reasons, it should be treated as a special case.
In many implementations of MMM,
the innermost kernel implements the Resident~C algorithm~\cite{GotoBLAS,BLIS1,wang2013augem,heinecke2016libxsmm}.
There are good reasons for this.
The latency of the computation instructions dictates that there is a minimum number of registers that must be used to store elements of $C$
to avoid the instruction latency becoming a bottleneck.
The number of elements of $C$ that are stored in registers must be at least
the product of the instruction latency and the number of elementary additions that can be performed per cycle~\cite{yotov2005,BLIS4}.
Often this means that a significant portion of the registers must be dedicated to storing elements of $C$,
making it unnatural to use the Resident~A or Resident~B algorithms for the registers.
Therefore, it is often the case that there is no choice but to use Resident~C for the registers.
\subsection{Optimizing for $L_{h-1}$ impacts the $L_h$ I/O cost}
When simultaneously optimizing for both $L_{h-1}$ and $L_h$,
the size of the $L_h$ resident block is reduced relative to its size when optimizing only for $L_h$,
since larger portions of the $L_h$ streamed panels must fit in $L_h$.
When optimizing for both $L_h$ and $L_{h-1}$:
\begin{itemize}
\item At minimum, the $L_h$ resident matrix and $L_h$ guest matrix must fit into $L_h$.
\item If $L_h$ is inclusive, meaning that anything in $L_{h-1}$ must also be in $L_h$,
then there must also be space in $L_h$ for the $L_{h-1}$ resident matrix.
\item If $L_h$ is inclusive and has a
LRU policy,
then, for an element to remain in $L_h$, fewer than $M_h$ distinct elements may be accessed between consecutive accesses of that element; hence every matrix partition exposed by the $L_{h-1}$ outer loop must fit in $L_h$.
\end{itemize}
These conditions represent a tradeoff between optimizing for $L_h$ and $L_{h-1}$.
The larger $L_{h-1}$ is, the more data must fit into it,
and the smaller the $L_h$ resident block can be.
With the simplifying assumptions that the resident blocks of both $L_h$ and $L_{h-1}$ must be square,
the I/O cost for $L_h$ when optimizing for both $L_h$ and $L_{h-1}$ can be determined by the ratio $M_h / M_{h-1}$.
Sometimes, it is counter-productive or of limited value to optimize for $L_{h-1}$ when optimizing for $L_{h}$.
In this case:
\begin{enumerate}
\item A simple option is to treat $L_{h-1}$ as if it were smaller than it is, reducing the size of the $L_{h-1}$ resident block.
\item If $L_h$ is LRU, another option is to tweak the blocksizes for $L_{h-1}$ slightly.
The portions of the $L_h$ streamed panels that must fit into $L_h$ alongside the $L_h$ resident block
depend on the tiling of the $L_{h-1}$ outer loop but not on the blocksize of the $L_{h-1}$ inner loop.
Therefore, one can tweak the shape of the $L_{h-1}$ resident block accordingly.
\item A third option is that
one could ``skip'' optimizing for $L_{h-1}$ and instead simultaneously optimize for $L_h$ and $L_{h-2}$.
\end{enumerate}
Blocking for the $L_{h-1}$ cache adversely affects the number of transfers into and out of the $L_h$ cache
but blocking for further (smaller and faster) levels of cache does not, because the entire $L_{h-2}$ subproblem fits within the data that must be in the $L_h$ cache.
\subsection{Optimizing for $L_{h}$ impacts the $L_{h-1}$ I/O cost}
Simultaneously optimizing for $L_h$ and $L_{h-1}$
adversely affects transfers into and out of $L_h$.
We now argue that it also has an adverse effect on the transfers into and out of $L_{h-1}$.
When optimizing for only one cache of size $M$,
the streamed matrices are each associated with an aggregate I/O cost of $\approx {mnk}/{\sqrt{M}} $.
When optimizing for both $L_h$ and $L_{h-1}$, however, the I/O cost associated with the $L_{h-1}$ resident matrix becomes cubic
because each element of the $L_{h-1}$ resident matrix is moved into $L_{h-1}$ once per $L_h$ subproblem.
When optimizing for both $L_h$ and $L_{h-1}$,
the I/O cost associated with the $L_{h-1}$ resident matrix will be $\approx {mnk}/{\sqrt{M_h}}$,
whereas when only optimizing for the $L_{h-1}$ cache, the I/O cost associated with the $L_{h-1}$ resident matrix is
equal to the number of compulsory reads and writes.
The I/O costs for the streamed matrices are not affected.
While optimizing for the $L_h$ cache has increased the $L_{h-1}$ I/O cost,
optimizing for
$L_{h+1}$, $L_{h+2}$, etc., does not affect it,
because optimizing for
further levels of cache does not reduce the number of times each element is used every time it is brought into the $L_{h-1}$ cache.
\NoShow{
In Figure~\ref{fig:pareto_trade}, we consider a pair of caches,
varying the $L_{h-1}$ cache sizes of an algorithm, and comparing its $L_h$ efficiency to its $L_{h-1}$ efficiency
(defined in terms of flops per I/O).
If we assume that the resident matrices are square, and only consider algorithms within our family of algorithms,
then this plot gives us Pareto optimal solutions to the $L_h$ and $L_{h-1}$ tradeoff problem.
When trying to resolve multilevel cache tradeoffs, this can be done for every pair of levels in the memory hierarchy.
}
\subsection{Skipping caches}
We have seen that tradeoffs occur when simultaneously optimizing for the I/O cost of multiple levels of cache.
Sometimes these tradeoffs are too great, so instead of optimizing for both $L_h$ and $L_{h-1}$,
one may forego the $L_{h-1}$ cache, and instead simultaneously optimize for $L_h$ and $L_{h-2}$ I/O costs,
where the $L_{h-1}$ cache is intermediate between $L_h$ and $L_{h-2}$.
We call this {\textit{skipping}} the $L_{h-1}$ cache.
When the $L_{h-1}$ cache is skipped, an optimal subproblem is encountered at the $L_h$ level
and at the $L_{h-2}$ level, but not at the $L_{h-1}$ level.
However, this does not mean that the $L_{h-1}$ cache is not useful.
Recall that the $L_h$ guest matrix is reused during each iteration of the $L_{h-2}$ inner loop.
In the right circumstances, this $L_h$ guest matrix may instead be reused in the skipped $L_{h-1}$ cache.
\begin{itemize}
\item In idealized circumstances, only the $L_h$ guest matrix should need to be in the $L_{h-1}$ cache.
\item If the $L_{h-1}$ cache is LRU, then a panel of the $L_h$ resident matrix must also fit into the $L_{h-1}$ cache.
\item If the $L_{h-1}$ cache is inclusive, then the $L_{h-2}$ resident block must also fit into the $L_{h-1}$ cache.
\end{itemize}
In this case, the $L_h$ guest matrix is reused from $L_{h-1}$,
but is not square, and the I/O cost associated with reading the other two operands is suboptimal.
Furthermore, in many cases this panel occupies only a fraction of $L_{h-1}$,
reducing its size and further increasing the I/O cost.
Goto's algorithm is a member of the MOMMS family.
It skips optimizing for the $L_3$ and $L_1$ caches.
Since $A$ is the resident matrix of the $L_2$ cache, and $C$ is the resident matrix of the registers,
Goto's algorithm is named $A_2 C_0 $ according to the convention in Section~\ref{sec:classifying}.
\subsection{Experimental setup}
We have implemented the described family of algorithms as the Multilevel Optimized Matrix-Matrix Multiplication Sandbox (MOMMS).
MOMMS implements algorithms for MMM by composing components
like matrix partitioning, packing, and parallelization at compile time.
MOMMS is written in Rust~\cite{rust},
a modern systems programming language focused on memory safety.
Most of this safety is enforced at compile-time through Rust's borrow checker.
In Rust, memory is freed when it goes out of scope, and there is no garbage collector.
From Rust, one can call C functions with very low overhead.
For low-level kernels, MOMMS calls the BLIS micro-kernel~\cite{BLIS1} coded in C and inline assembly language.
We custom built two computers.
One has an Intel i7-7700K CPU with two 8GB DIMMs of DDR4-3200 RAM and a motherboard with an Intel Z270 chipset.
The other has an Intel i7-5775C CPU with two 8GB DIMMs of DDR3-2400 RAM and a motherboard with an Intel Z97 chipset.
We refer to these computers by their processor names.
We chose the Z270 and Z97 chipset motherboards because these are enthusiast motherboards
for consumers interested in overclocking,
and they provide the ability to change the memory multiplier.
The i7-7700K computer has a 4-core Intel Kaby Lake CPU with
64KB $L_1$, 256KB $L_2$, and 6MB $L_3$ caches.
We chose this because it is a recent, readily available Intel processor with an $L_3$ cache.
The i7-5775C is a 4-core Intel Broadwell CPU.
It also has 64KB $L_1$, 256KB $L_2$, and 6MB $L_3$ caches.
Most notably it has 128MB of eDRAM, functioning as an $L_4$ cache.
All experiments were performed with hyperthreading disabled.
A userspace CPU governor was used to set the CPUs to the nominal CPU frequency:
4.2 GHz for the i7-7700K and 3.3 GHz for the i7-5775C.
The bandwidth to main memory can be determined by the product of
the number of memory channels,
the base clock rate,
the number of bytes per transfer,
and the memory multiplier.
With DDR RAM, this is doubled since it transfers on both the leading and trailing edges of the clock signal.
We vary the ratio of the rate of I/O to the rate of computation via the BIOS settings.
Reducing the memory multiplier and the number of memory channels decreases the rate of I/O without
changing the rate of computation.
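The bandwidth formula above can be sketched as follows; the base clock and multiplier values in the example are hypothetical, chosen to yield a 3200 MT/s configuration, and the helper name is ours.

```python
def peak_bandwidth_bytes(channels, base_clock_hz, bytes_per_transfer, multiplier):
    """Peak main-memory bandwidth in bytes/s: the product of the number
    of memory channels, the base clock rate, the bytes per transfer, and
    the memory multiplier, doubled for DDR (transfers on both the
    leading and trailing clock edges)."""
    return channels * base_clock_hz * bytes_per_transfer * multiplier * 2

# Hypothetical example: 2 channels, 100 MHz base clock, 8 bytes per
# transfer, multiplier 16 -> 3200 MT/s per channel.
bw = peak_bandwidth_bytes(2, 100 * 10**6, 8, 16)
```

Lowering the multiplier or disabling a channel in this formula directly scales down the peak bandwidth, which is exactly the knob used in the experiments.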
\input fig_sidebyside_combined
\subsection{Optimizing for the $L_3$ cache}
We here describe an algorithm implemented in MOMMS
that optimizes for both $L_3$ and $L_2$ labeled $B_3 A_2 C_0$.
We compare this algorithm to our re-implementation of Goto's algorithm (also implemented in MOMMS),
and to vendor and state-of-the-art open source BLAS~\cite{BLAS3} implementations.
Figure~\ref{fig:sidebyside} compares Goto's algorithm
with other MOMMS algorithms optimized for both the I/O cost of $L_3$ and $L_2$.
We now describe the $B_3 A_2 C_0$ algorithm as implemented for the i7-7700K
and illustrated in Figure~\ref{fig:sidebyside} (second from the left).
First, we partition for the $L_3$ cache.
The $L_3$ outer loop partitions the matrices in the $n$ dimension with blocksize 768.
Then the $L_3$ inner loop partitions in the $k$ dimension, also with blocksize 768.
This reveals a $768 \times 768$ block of $B$ that becomes the $ L_3 $ resident matrix.
Next, we partition for the $L_2$ cache.
Since $B$ is the $L_3$ resident matrix, the $L_2$ outer loop must be in the $m$ dimension;
it partitions with blocksize 120.
The $L_2$ inner loop then partitions the $k$ dimension with blocksize 192,
making a block of $A$ resident in $L_2$,
and a $120 \times 768$ panel of $C$ the guest matrix of $L_3$.
We skip $L_1$, since it is a quarter the size of $L_2$,
so it is not beneficial to optimize for both $L_2$ and $L_1$.
The next two loops make a $4 \times 12$ block of $C$ the resident matrix of registers,
and a $192 \times 12$ panel of $B$, the guest panel of $L_2$.
This guest panel of $L_2$ is designed to be reused in the (skipped) $L_1$ cache.
Finally, we call a $4 \times 12$ micro-kernel provided by BLIS~\cite{BLIS1}.
We compare this to Goto's algorithm with similar blocksizes as follows:
$n_c$ is 3000, $k_c$ is 192, $m_c$ is 120, $m_r$ is 4, and $n_r$ is 12.
Our implementation of Goto's algorithm uses the same micro-kernel from BLIS as does $B_3 A_2 C_0$.
For both algorithms, we parallelize the second loop around the micro-kernel with 4 threads.
This quadruples the bandwidth requirements of our algorithms without
increasing the amount of the $L_3$ cache that must be set aside for elements of $A$~\cite{BLIS3}.
\input fig_rooflines2
\paragraph{Rooflines}
The roofline model gives a simple upper bound on the performance of an algorithm, based on its arithmetic intensity,
on a specific computer~\cite{williams2009roofline}.
The computer is characterized by its rate of computation and the rate at which it can transfer data between main memory and cache.
The arithmetic intensity of an algorithm is the number of flops per byte transferred between memory and cache during the execution of that algorithm.
When the arithmetic intensity is low, the algorithm is bandwidth bound; when the arithmetic intensity is high, it is compute bound.
The roofline model is thus a plot where the x-axis is the arithmetic intensity and the y-axis is the maximum rate of computation for that arithmetic intensity.
The roofline that serves as an upper bound on performance is formed by two linear curves that intersect when the minimum time spent for computation
for an algorithm is equal to the minimum time spent for I/O.
Algorithms are plotted on the roofline model according to their arithmetic intensity and measured performance as a way to explain their performance
and to explain whether or not they could perform better.
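In code, the roofline bound is just the minimum of the two constraints. The peak and bandwidth numbers below are placeholders for illustration, not the measured i7-7700K values.

```python
def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable performance under the roofline model: an algorithm is
    limited either by memory bandwidth (low intensity) or by the compute
    peak (high intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Placeholder numbers (not measurements from the paper): a 100 GFLOPS peak
# with 10 GB/s of bandwidth puts the ridge of the roofline at 10 flops/byte.
low = roofline(100.0, 10.0, 5.0)     # bandwidth bound
high = roofline(100.0, 10.0, 64.0)   # compute bound
```

The two linear segments of the roofline are the `bandwidth * intensity` line and the flat `peak` line; they intersect at the ridge point where the minimum time for computation equals the minimum time for I/O.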
One can either measure the arithmetic intensity of an algorithm or analyze it. We choose to analyze the arithmetic intensity of the algorithms plotted.
When the matrices are large, Goto's algorithm has an efficiency of $\left( \frac{1}{k_c} + \frac{1}{2 n_c} \right)^{-1}$ flops per element of I/O.
With the blocksizes we used, this corresponds to 23.26 flops per byte for double-precision data.
The algorithm $B_3 A_2 C_0$, with a $768 \times 768$ block of $B$ in the $L_3$ cache,
has an efficiency of 64 flops per byte.
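These intensities can be reproduced numerically. The formula for Goto's algorithm is taken from the text; the accounting for $B_3 A_2 C_0$ below is our reconstruction (not spelled out in the text) of how the stated 64 flops per byte arises.

```python
def goto_flops_per_element(k_c, n_c):
    # Large-matrix efficiency of Goto's algorithm, in flops per element of I/O
    return 1.0 / (1.0 / k_c + 1.0 / (2.0 * n_c))

# Blocksizes from the text: k_c = 192, n_c = 3000; 8 bytes per double
goto_ai = goto_flops_per_element(192, 3000) / 8.0

# A reconstruction (ours, not spelled out in the text) of the stated
# 64 flops/byte for B_3 A_2 C_0: with a 768 x 768 block of B resident in
# L3, streaming A once and C twice (read + write) past it costs about
# 3*m*768 elements of traffic for 2*m*768*768 flops, i.e. 2*768/3 flops
# per element.
b3a2c0_ai = (2.0 * 768 / 3.0) / 8.0
```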
In Figure~\ref{fig:rooflines}, we show the roofline model
for the i7-7700K for the case of one channel of DDR4-800 RAM,
and for the case of two channels of DDR4-3200 RAM.
These cases represent the minimum and the maximum memory bandwidth that we configure the computer for.
We plot the modeled efficiency of Goto's algorithm and the algorithm $B_3 A_2 C_0$
against each algorithm's measured performance.
The roofline plot clearly shows that
in the high-bandwidth case, either algorithm is capable of achieving
the peak performance of the CPU based on its arithmetic intensity,
but for the low-bandwidth case, only $B_3 A_2 C_0$ can.
The improved arithmetic intensity is caused by its more effective utilization of the $L_3$ cache.
\paragraph{Varying Bandwidth}
\ifthenelse{\boolean{showtikz}}{\input{fig_l3}}{}
Figure~\ref{fig:l3} reports the achieved performance of Goto's algorithm and $B_3 A_2 C_0$ for square matrices,
varying the amount of bandwidth to main memory.
Packing is often used to achieve spatial locality during an algorithm.
Otherwise blocks that are designed to reside in cache may not be able to do so due to cache conflict issues~\cite{packing}.
Packing incurs extra memory movements that do not fundamentally need to happen during MMM.
This paper is concerned with the fundamentals of temporal locality during MMM, and hence
we sidestep the spatial locality issue by storing matrices ``prepacked'' such that
every time a matrix is partitioned, the blocks are stored contiguously.
This lets us separate the issues of temporal and spatial locality in our experiments%
\footnote{Others have avoided packing for practical reasons.
BLASFEO~\cite{frison2018blasfeo} operates on so-called panel-major matrices for performance on small matrices.
The panel-major format is similar to the format used in Goto's algorithm for the packed panel of $B$.
Another library, libxsmm~\cite{heinecke2016libxsmm}, also targets small matrices,
and operates on column-major matrices, but does not perform packing.}.
At low bandwidth, $B_3 A_2 C_0$ outperforms Goto's algorithm by thirty to forty percent.
As the bandwidth increases, the difference between the two algorithms shrinks,
and the gap
eventually disappears.
\paragraph{Comparing with existing implementations}
\ifthenelse{\boolean{showtikz}}{\input{fig_l3_packing}}{}
In Figure~\ref{fig:l3_packing}, we compare our implementations of Goto's algorithm and $B_3 A_2 C_0$
against the {\sc dgemm}\xspace routines in
ATLAS~\cite{ATLAS} (3.10.3), BLIS (0.2.1), and
Intel's Math Kernel Library (MKL 2017 Release 2)~\cite{MKL}.
It would not be fair to compare against existing implementations of MMM
while sidestepping packing, so for this experiment, input matrices are stored in column-major order,
and our implementations of Goto's algorithm and $B_3 A_2 C_0$ pack matrices the first time they become the resident or guest matrix at some level of cache.
This packing (and the fact that $C$ is not stored hierarchically for Goto's algorithm) account for the performance difference
seen for the Goto and $B_3 A_2 C_0$ curves between Figures~\ref{fig:l3} and~\ref{fig:l3_packing}.
We see that for high bandwidth scenarios, BLIS, the MOMMS implementation of Goto's algorithm,
and $B_3 A_2 C_0$ all attain roughly 75\% of peak,
and that MKL outperforms the other implementations.
For low bandwidth, implementations that use Goto's algorithm (or something similar) exhibit poorer performance
as they do not effectively utilize the $L_3$.
In this case, $C_3 A_2 C_0$ performs best, with $B_3 A_2 C_0$ close behind.
For large problem sizes, ATLAS performs almost as well as the algorithms implemented in MOMMS that optimize for the $L_3$ I/O cost, but it does not perform nearly as well in the high-bandwidth case.
\subsection{Optimizing for the $L_4$ cache}
\ifthenelse{\boolean{showtikz}}{\input{fig_l4_cache}}{}
In this section, we demonstrate that our methodology can be efficiently applied to the Intel i7-5775C,
which has four levels of cache, where the $L_4$ cache is 128MB of eDRAM.
We implemented an algorithm called $C_4 A_2 C_0$ for this architecture.
Figure~\ref{fig:l4_algorithm} shows the loop ordering and the blocksizes used for $C_4 A_2 C_0$.
In $C_4 A_2 C_0$, a $3600 \times 3600$ block of $C$ resides in the $L_4$ cache,
and a $120 \times 192$ block of $A$ resides in the $L_2$ cache.
We decided to skip blocking for the $L_3$ cache, as there is sufficient bandwidth from the $L_4$ cache without
optimizing for the number of $L_3$ cache misses.
Nevertheless, the guest matrix of the $L_4$ cache, a $192 \times 3600$ panel of $B$,
is appropriately sized to remain in the $L_3$.
$C_4 A_2 C_0$ uses the same inner kernel as $B_3 A_2 C_0$.
\ifthenelse{\boolean{showtikz}}{\input{fig_l4}}{}
\ifthenelse{\boolean{showtikz}}{\input{fig_l4_packing}}{}
In Figure~\ref{fig:l4}, we compare the performance of Goto's algorithm and $C_4 A_2 C_0$ for square matrices
across several bandwidths.
In this experiment, matrices are stored hierarchically, and so packing is not performed.
For high bandwidths, Goto's algorithm and $C_4 A_2 C_0$ exhibit similar performance,
but when bandwidth is low, $C_4 A_2 C_0$ outperforms Goto's algorithm for large problem sizes.
Figure~\ref{fig:l4_packing} compares the performance on square matrices of our implementations of Goto's algorithm and $C_4 A_2 C_0$.
Here, matrices are stored in column-major order and accordingly packing is performed
when partitions of $A$ and $B$ become resident or guest matrices of some level of cache.
In $C_4 A_2 C_0$, $C$ is unpacked when it is no longer resident in $L_4$.
In both Figures~\ref{fig:l4} and~\ref{fig:l4_packing}, the top of the graphs is the peak computational rate of the CPU.
Because $L_4$ is so large, we ran quite large problems; otherwise,
the matrices would fit entirely in cache.
Performance for Goto's algorithm and MKL does not fall off until the problem size reaches $m=n=k \approx 5000$.
We can see that Goto's algorithm does not optimally use $L_4$
and neither does Intel's MKL.
While BLIS's performance does not fall off as severely as for the other implementations when the problem size grows,
its overall performance is not as high.
The algorithmic differences between BLIS and the MOMMS implementation of Goto's algorithm
are parallelism and blocksizes.
BLIS uses a larger $k_c$ and a smaller $m_c$ than MOMMS
and parallelizes the 2nd and 3rd loops around the micro-kernel,
whereas MOMMS parallelizes the 2nd loop around the micro-kernel.
Modifying either the parallelism or the blocksizes so that they match that of the MOMMS implementation of Goto's algorithm
adversely affects performance for the low bandwidth case, causing a noticeable dropoff for larger matrices.
We postulate that the way data is shared by the threads within BLIS,
coupled with its larger value of $k_c$ (or its smaller $m_c$), fosters better reuse of data within the $L_4$ cache.
With DDR-800, all implementations of MMM on the i7-5775C outperform
those on the i7-7700K, despite the fact that the former processor is two generations older.
The large $L_4$ cache means that blocksizes for the $C_4 A_2 C_0$ algorithm can be very large,
so the algorithm does not need much bandwidth from main memory,
but even algorithms that do not take advantage of $L_4$ by using such large blocksizes
benefit from having the 128MB cache.
The large capacity cache can facilitate the hiding of latency to main memory,
through techniques such as hardware prefetching.
\subsection{Algorithms for different shapes of matrices}
Algorithm $A_3 B_2 C_0$ partitions the matrices such that a square block of $A$ is resident in $L_3$
and a block of $B$ is resident in $L_2$.
It then calls an inner kernel updating a panel of $C$ whose elements are in $L_3$
by multiplying a block of $A$ whose elements are in $L_2$ times a panel of $B$
whose elements are in $L_3$.
Algorithm $C_3 A_2 C_0$ partitions the matrices such that a square block of $C$ is resident in the $L_3$
and a block of $A$ is resident in $L_2$.
It then calls the same inner kernel as the algorithm $B_3 A_2 C_0$ does.
Blocksizes and loop orderings for algorithms $A_3 B_2 C_0$ and $C_3 A_2 C_0$ are shown in Figure~\ref{fig:sidebyside}.
Algorithms $A_3 B_2 C_0$, $B_3 A_2 C_0$, and $C_3 A_2 C_0$ represent three choices for blocking for $L_3$.
In Section~\ref{sec:single_different_shapes}, we argued that each of these choices may be
optimal for a specific problem shape where two dimensions are equal to $\sqrt{M_3}$
and the other dimension is large, and selecting the wrong algorithm for a problem shape can result in an I/O cost that is $50\%$ greater.
On a computer with three levels of cache and low bandwidth, we claim the following:
$A_3 B_2 C_0$ casts its computation in terms of a block-panel multiply,
with a block of $A$ in $L_3$, and so it should be the best choice of the three algorithms when
$m = k \approx \sqrt{M_3}$, and $n$ is large.
Similarly, $B_3 A_2 C_0$ casts its computation in terms of a panel-block multiply,
with a block of $B$ in $L_3$, and so it should be the best choice of the three algorithms when
$n = k \approx \sqrt{M_3}$, and $m$ is large.
Finally, $C_3 A_2 C_0$ casts its computation in terms of a block dot product multiply,
with a block of $C$ in $L_3$, and so it should be the best of the three algorithms when
$m = n \approx \sqrt{M_3}$, and $k$ is large.
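Taken together, these three claims amount to keying the choice on the single large dimension. The selector below is a sketch of that heuristic; the function name is ours, not a MOMMS API.

```python
def choose_l3_algorithm(m, n, k):
    """Pick the L3 blocking by the largest dimension, per the claims above:
    the L3-resident matrix is the operand whose two dimensions are the
    ~sqrt(M3)-sized ones, leaving the large dimension to be streamed."""
    largest = max(m, n, k)
    if largest == n:      # m = k ~ sqrt(M3), n large: block of A in L3
        return "A3B2C0"
    if largest == m:      # n = k ~ sqrt(M3), m large: block of B in L3
        return "B3A2C0"
    return "C3A2C0"       # m = n ~ sqrt(M3), k large: block of C in L3
```

For square matrices the choice matters little (as the experiments below show), so any tie-breaking rule is acceptable.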
\input fig_shapes
Figure~\ref{fig:l3_shapes} reports results
using $A_3 B_2 C_0$, $B_3 A_2 C_0$, $C_3 A_2 C_0$, and Goto's algorithm, with matrices stored hierarchically and no packing performed.
We vary the shape of the matrices. In each case, two of the dimensions are set to $768$,
and one of the dimensions is varied along the x-axis.
The experiments were performed on the Intel i7-7700K, with the DDR speed set to a single channel of DDR4-800.
When the dimension that is allowed to vary is large,
the predicted algorithm outperforms the others.
We also show performance when the matrices are square, and the size varies along the x-axis.
For our algorithms that optimize for the $L_3$ cache,
there is very little performance difference between the square case and the case where an algorithm is the ``correct'' choice.
We conclude that when executing MMM, optimal I/O properties are attainable
when two dimensions are at least the square root of the last-level cache size,
as long as the third dimension is much larger.
The algorithms $A_3 B_2 C_0$, $B_3 A_2 C_0$, and $C_3 A_2 C_0$ outperform
Goto's algorithm for larger problem sizes in this low bandwidth scenario.
This is because even when the algorithm is wrong for the problem shape,
the I/O cost is only $50\%$ higher.
In comparison, on this computer, Goto's algorithm has an I/O cost that is approximately two times greater than the optimal algorithm.
\section{Introduction}
\input 01intro
\section{Theory and Fundamental Shapes}
\label{sec:single_cache}
\input 02theory_and_shapes
\section{Multiple Levels of Cache}
\label{sec:multi_cache}
\input 03three_levels
\section{Multilevel Cache Tradeoffs}
\input 04tradeoffs
\section{Experiments}
\label{sec:experiments}
\input 05experiments
\section{Summary}
\input 06conclusion
| {
"timestamp": "2019-04-12T02:14:44",
"yymm": "1904",
"arxiv_id": "1904.05717",
"language": "en",
"url": "https://arxiv.org/abs/1904.05717",
"abstract": "As the ratio between the rate of computation and rate with which data can be retrieved from various layers of memory continues to deteriorate, a question arises: Will the current best algorithms for computing matrix-matrix multiplication on future CPUs continue to be (near) optimal? This paper provides compelling analytical and empirical evidence that the answer is \"no\". The analytical results guide us to a new family of algorithms of which the current state-of-the-art \"Goto's algorithm\" is but one member. The empirical results, on architectures that were custom built to reduce the amount of bandwidth to main memory, show that under different circumstances, different and particular members of the family become more superior. Thus, this family will likely start playing a prominent role going forward.",
"subjects": "Mathematical Software (cs.MS)",
"title": "The MOMMS Family of Matrix Multiplication Algorithms",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9688561694652215,
"lm_q2_score": 0.7310585844894971,
"lm_q1q2_score": 0.7082906198231611
} |
https://arxiv.org/abs/1502.02120 | Testing uniformity on high-dimensional spheres against monotone rotationally symmetric alternatives | We consider the problem of testing uniformity on high-dimensional unit spheres. We are primarily interested in non-null issues. We show that rotationally symmetric alternatives lead to two Local Asymptotic Normality (LAN) structures. The first one is for fixed modal location $\theta$ and allows to derive locally asymptotically most powerful tests under specified $\theta$. The second one, that addresses the Fisher-von Mises-Langevin (FvML) case, relates to the unspecified-$\theta$ problem and shows that the high-dimensional Rayleigh test is locally asymptotically most powerful invariant. Under mild assumptions, we derive the asymptotic non-null distribution of this test, which allows to extend away from the FvML case the asymptotic powers obtained there from Le Cam's third lemma. Throughout, we allow the dimension $p$ to go to infinity in an arbitrary way as a function of the sample size $n$. Some of our results also strengthen the local optimality properties of the Rayleigh test in low dimensions. We perform a Monte Carlo study to illustrate our asymptotic results. Finally, we treat an application related to testing for sphericity in high dimensions. | \section{Introduction}
In directional statistics, inference is based on $p$-variate observations lying on the unit sphere~$\mathcal{S}^{p-1}:=\{\ensuremath{\mathbf{x}}\in\mathbb R^{p} : \|\ensuremath{\mathbf{x}}\|=\sqrt{\ensuremath{\mathbf{x}}'\ensuremath{\mathbf{x}}}=1\}$. This is relevant in various situations. (i) First, the original data themselves may belong to~$\mathcal{S}^{p-1}$; classical examples involve wind direction data ($p=2$) or spatial data at the earth scale $(p=3)$.
\linebreak
(ii) Second, some fields are by nature such that only the relative magnitude of the observations is important, which leads to projecting observations onto~$\mathcal{S}^{p-1}$. In shape analysis, for instance, this projection merely gets rid of an overall scale factor related to the (irrelevant) object size. (iii) Finally, even in inference problems where the full (Euclidean) observations in principle need to be considered, a common practice in nonparametric statistics is to restrict to sign procedures, that is, to procedures that are measurable with respect to the projections of the observations onto~$\mathcal{S}^{p-1}$; see, e.g., \cite{Oja2010} and the references therein.
While~(i) is obviously restricted to small dimensions~$p$,~(ii)-(iii) nowadays increasingly involve high-dimensional data. For~(ii), high-dimensional directional data were considered in~\cite{Dry2005}, with applications in brain shape modeling; in text mining, \cite{banerjee2003generative} and \cite{BanGho2004} project high-dimensional data on unit spheres to discard text sizes when performing clustering. As for~(iii), the huge interest raised by high-dimensional statistics in the last decade has made it natural to consider high-dimensional \emph{sign} tests. In particular, \cite{Zouetal2013}
recently considered the high-dimensional version of the \cite{HP06} sign tests of sphericity, whereas an extension to the high-dimensional case of the location sign test from \cite{Cha1992} and \cite{mooj95} was recently proposed in \cite{Runze2015}. Considering~(iii) in high dimensions is particularly appealing since for moderate-to-large~$p$, sign tests show excellent (fixed-$p$) efficiency properties (see \cite{PaiVer2015} for details). Also, the concentration-of-measure phenomenon may make the restriction to signs virtually void as the dimension~$p$ increases.
In this paper, we consider the problem of testing uniformity on the unit sphere~$\mathcal{S}^{p-1}$, both in low and high dimensions. In low dimensions, this is a fundamental problem that has been extensively treated; see \cite{MarJup2000} and the references therein.
The high-dimensional version of the problem is less standard, yet also has some history. \cite{cueetal2009} proposed a test of uniformity that performs well empirically even in high dimensions, but no asymptotic results were obtained as~$p$ goes to infinity. \cite{Chi1991,Chi1993} explicitly considered high-dimensional testing for uniformity on the sphere,
in a fixed-$n$ large-$p$ framework, while \cite{Caietal2013} rather adopted a double asymptotic approach for the same problem. Possible applications of testing uniformity on high-dimensional spheres include outlier detection; see \cite{JuaPri2001}. Other natural applications are related to testing for sphericity in~$\mathbb R^p$, in the spirit of~(iii) above; in Section~\ref{realsec}, we will elaborate on this and provide references.
To be more specific, assume that the observations form a triangular array of random vectors $\mathbf{X}_{ni}$, $i=1,\ldots,n$, $n=1,2,\ldots,$ where, for any~$n$, the~$\mathbf{X}_{ni}$'s are mutually independent and share a common distribution on the unit sphere~$\mathcal{S}^{p_n-1}$, and consider the problem of testing the null hypothesis~$\mathcal{H}_{0n}$ that this common distribution is the uniform distribution over~$\mathcal{S}^{p_n-1}$. While our main interest is in the high-dimensional case ($p_n\to\infty)$, most of our results will also address the (low-dimensional) classical fixed-$p$ case ($p_n=p$ for all~$n$). The most classical test of uniformity is the \cite{Ray1919}
test, that rejects~$\mathcal{H}_{0n}$ for large values of
$
R_n
:=
np_n \| \bar{\mathbf{X}}_n\|^2
,
$
where $\bar{\mathbf{X}}_n:=\frac{1}{n} \sum_{i=1}^n \mathbf{X}_{ni}$. For fixed~$p$, the test is based on the null asymptotic $\chi^2_p$ distribution of~$R_n$. In the high-dimensional setup, \cite{PaiVer2015} obtained the following asymptotic normality result under the null.
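The fixed-$p$ calibration is easy to illustrate by simulation; the sketch below is ours, not code from the paper, and only checks that the null mean of $R_n$ is close to $p$ (the exact value, since the summands $\mathbf{X}_{ni}'\mathbf{X}_{nj}$, $i\neq j$, have mean zero).

```python
import numpy as np

def uniform_sphere(n, p, rng):
    # Normalizing i.i.d. Gaussian vectors gives the uniform law on S^{p-1}
    G = rng.standard_normal((n, p))
    return G / np.linalg.norm(G, axis=1, keepdims=True)

def rayleigh_stat(X):
    # R_n = n * p * ||X-bar||^2
    n, p = X.shape
    xbar = X.mean(axis=0)
    return n * p * float(xbar @ xbar)

rng = np.random.default_rng(42)
# Under the null with fixed p, R_n is asymptotically chi^2_p, whose mean
# is p; here p = 3 and n = 200.
mean_R = float(np.mean(
    [rayleigh_stat(uniform_sphere(200, 3, rng)) for _ in range(2000)]
))
```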
\begin{Theor}
\label{raylnull}
Let $(p_n)$ be a sequence of positive integers diverging to~$\infty$ as $n\rightarrow\infty$.
Assume that the triangular array $\mathbf{X}_{ni}$, $i=1,\ldots,n$, $n=1,2,\ldots,$ is such that, for any $n$, $\mathbf{X}_{n1},\mathbf{X}_{n2},\ldots,\mathbf{X}_{nn}$ form a random sample from the uniform distribution on~$\mathcal{S}^{p_n-1}$. Then
\vspace{1mm}
\begin{equation}
\label{raylhdstat}
{R}_n^{\rm St}
:=
\frac{R_n-p_n}{\sqrt{2 p_n}}
=
\frac{\sqrt{2p_n}}{n}
\sum_{1\leq i< j\leq n}
\mathbf{X}_{ni}^{\prime} \mathbf{X}_{nj}
\,
\stackrel{\mathcal{D}}{\to}
\,
\mathcal{N}(0,1)
\vspace{-2mm}
\end{equation}
as~$n\to\infty$, where~$\stackrel{\mathcal{D}}{\to}$ denotes weak convergence.
\end{Theor}
\newpage
Denoting by~$\Phi(\cdot)$ the cumulative distribution function of the standard normal, the high-dimensional Rayleigh test ($\phi^{(n)}$, say) then rejects~$\mathcal{H}_{0n}$ at asymptotic level~$\alpha$ whenever
\begin{equation}
\label{RayHDtraindelavie}
{R}_n^{\rm St}
>
z_\alpha,
\quad
\textrm{ with }
z_\alpha:=\Phi^{-1}(1-\alpha).
\end{equation}
Remarkably, this test does not impose any condition on the way $p_n$ goes to infinity with~$n$, hence can be applied as soon as~$n$ and~$p_n$ are large, without bothering about their relative magnitude (in contrast, most results in high-dimensional statistics typically impose that~$p_n/n\to c$ for some~$c>0$). Theorem~\ref{raylnull}, however, is not sufficient to justify resorting to the Rayleigh test: the trivial test, which would discard the data and reject~$\mathcal{H}_{0n}$ with probability~$\alpha$, indeed has the same asymptotic null behaviour as the high-dimensional Rayleigh test, yet has a power function that is uniformly equal to the nominal level~$\alpha$. One of the main goals of this paper is to study the non-null behaviour of the Rayleigh test and to show that this test actually enjoys nice optimality properties, both in the low- and high-dimensional cases. Optimality throughout will be in the Le Cam sense, in relation with the Local Asymptotic Normality (LAN) structures of the models we adopt below.
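A minimal simulation (ours, not from the paper) of the standardized statistic ${R}_n^{\rm St}$ under the null illustrates Theorem~\ref{raylnull}: even with $p$ twice as large as $n$, the statistic is already close to standard normal.

```python
import numpy as np

def uniform_sphere(n, p, rng):
    # Normalizing i.i.d. Gaussian vectors gives the uniform law on S^{p-1}
    G = rng.standard_normal((n, p))
    return G / np.linalg.norm(G, axis=1, keepdims=True)

def rayleigh_std(X):
    # R_n^St = (R_n - p) / sqrt(2 p), with R_n = n * p * ||X-bar||^2
    n, p = X.shape
    xbar = X.mean(axis=0)
    R = n * p * float(xbar @ xbar)
    return (R - p) / np.sqrt(2.0 * p)

rng = np.random.default_rng(1)
# n = 100 observations in dimension p = 200: no constraint on p/n is needed
vals = np.array(
    [rayleigh_std(uniform_sphere(100, 200, rng)) for _ in range(1000)]
)
```

The test then rejects the null at asymptotic level $\alpha$ when the statistic exceeds $z_\alpha$, as in~(\ref{RayHDtraindelavie}).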
The outline of the paper is as follows. In Section~\ref{contiguitysec}, we define a class of alternatives to the null of uniformity that skew the probability mass along a ``modal direction"~${\pmb \theta}$, and we identify the corresponding contiguous alternatives. In Section~\ref{oraclesec}, we provide a LAN result for fixed~${\pmb \theta}$, which leads to locally asymptotically most powerful tests under specified~${\pmb \theta}$. We address the \mbox{un\mbox{specified-${\pmb \theta}$}} problem through invariance arguments in Section~\ref{invariantsec}, which, in the FvML case, provides a second LAN result and shows that the high-dimensional Rayleigh test is locally asymptotically most powerful invariant. In Section~\ref{Rayleighsec}, we derive the asymptotic distribution of the high-dimensional Rayleigh test under general rotationally symmetric alternatives and comment on the resulting limiting powers. In Section~\ref{simusec}, we illustrate our asymptotic results through simulations. In Section~\ref{realsec}, we link the problem considered to that of testing for sphericity in high dimensions and we treat a real data example. In Section~\ref{conclusec}, we summarize the main findings of the paper and discuss some perspectives for future research. Finally, the appendix and the supplementary article \cite{Cut2015} collect technical proofs.
\section{Contiguous rotationally symmetric alternatives}
\label{contiguitysec}
Throughout, we consider specific alternatives to the null of uniformity over the $p$-dimensional unit sphere~$\mathcal{S}^{p-1}$, namely rotationally symmetric alternatives. A $p$-dimensional vector~$\mathbf{X}$ is said to be \emph{rotationally symmetric about}~${\pmb \theta}(\in\mathcal{S}^{p-1})$ if and only if $\mathbf{O}\mathbf{X}$ is equal in distribution to~$\mathbf{X}$ for any orthogonal $p\times p$ matrix~$\mathbf{O}$ satisfying $\mathbf{O}{\pmb \theta}={\pmb \theta}$; see, e.g., \cite{saw1978}. Such distributions are fully characterized by the location parameter~${\pmb \theta}$ and the cumulative distribution function~$F$ of~$\mathbf{X}'{\pmb \theta}$. The null of uniformity (under which~${\pmb \theta}$ is not identifiable) is obtained for
\begin{equation}
\label{unifcdf}
F_p(t)
:=
c_p
\int_{-1}^t (1-s^2)^{(p-3)/2}\,ds
,
\
\textrm{ with }
c_{p}
:=
\frac{\Gamma\big(\frac{p}{2}\big)}{\sqrt{\pi}\,\Gamma\big(\frac{p-1}{2}\big)}
,
\end{equation}
where~$\Gamma(\cdot)$ is the Euler Gamma function.
Particular alternatives are given, e.g., by the so-called Fisher--von Mises--Langevin (FvML) distributions, that correspond to
\begin{equation}
\label{fvmlcdf}
F_{p,\kappa}^{\rm FvML}(t)
:=
c^{\rm FvML}_{p,\kappa}\int_{-1}^t (1-s^2)^{(p-3)/2}\exp(\kappa s)\,ds
,
\
\textrm{ with }
c^{\rm FvML}_{p,\kappa}
:=
\frac{(\kappa/2)^{\frac{p}{2}-1}}{\sqrt{\pi}\,\Gamma(\frac{p-1}{2})\mathcal{I}_{\frac{p}{2}-1}(\kappa)}
,
\end{equation}
where~$\mathcal{I}_\nu(\cdot)$ is the order-$\nu$ modified Bessel function of the first kind and~$\kappa(>0)$ is a \emph{concentration} parameter (the larger the value of~$\kappa$, the more concentrated about~${\pmb \theta}$ the distribution is); see \cite{MarJup2000} for further details.
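The FvML normalizing constant can be checked numerically. The sketch below (ours) implements $\mathcal{I}_\nu$ by its power series and integrates the density of $t=\mathbf{X}'{\pmb \theta}$ by the midpoint rule, so only the standard library is needed; it verifies that the density integrates to one for $p=5$, $\kappa=2$.

```python
import math

def bessel_iv(nu, x, terms=40):
    # Modified Bessel function of the first kind, via its power series
    return sum(
        (x / 2.0) ** (2 * m + nu) / (math.factorial(m) * math.gamma(m + nu + 1.0))
        for m in range(terms)
    )

def fvml_const(p, kappa):
    # c^FvML_{p,kappa} from the display above
    return (kappa / 2.0) ** (p / 2.0 - 1.0) / (
        math.sqrt(math.pi) * math.gamma((p - 1.0) / 2.0) * bessel_iv(p / 2.0 - 1.0, kappa)
    )

def tangent_density(t, p, kappa, c):
    # density of t = X' theta when X is FvML(theta, kappa) on S^{p-1}
    return c * (1.0 - t * t) ** ((p - 3.0) / 2.0) * math.exp(kappa * t)

p, kappa = 5, 2.0
c = fvml_const(p, kappa)
N = 20000                      # midpoint rule on [-1, 1]
h = 2.0 / N
total = sum(tangent_density(-1.0 + (i + 0.5) * h, p, kappa, c) for i in range(N)) * h
```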
In Sections~\ref{contiguitysec} to~\ref{invariantsec}, we actually restrict to ``monotone" rotationally symmetric densities (with respect to the surface area measure on~$\mathcal{S}^{p-1}$) of the form
\begin{equation}
\label{densconc}
\ensuremath{\mathbf{x}}\mapsto c_{p,\kappa,f} f(\kappa\, \ensuremath{\mathbf{x}}'{\pmb \theta})
,
\qquad
\ensuremath{\mathbf{x}}\in\mathcal{S}^{p-1},
\end{equation}
where~${\pmb \theta}(\in\mathcal{S}^{p-1})$ is a location parameter,~$\kappa(>0)$ is a concentration parameter, and the function~$f:\mathbb R\to\mathbb R^+$ is monotone strictly increasing, differentiable at~$0$, and satisfies~$f(0)=f'(0)=1$. These conditions on~$f$, which will be tacitly assumed throughout, guarantee identifiability of~${\pmb \theta}$,~$\kappa$ and~$f$: clearly, the strict monotonicity of~$f$ implies that~${\pmb \theta}$ is the modal location on~$\mathcal{S}^{p-1}$, whereas the constraint~$f'(0)=1$ allows one to identify~$\kappa$ and~$f$.
Note that irrespective of~$f$, the boundary value~$\kappa=0$ corresponds to the uniform distribution over~$\mathcal{S}^{p-1}$. It is well-known that, if~$\mathbf{X}$ has density~(\ref{densconc}), then~$\mathbf{X}'{\pmb \theta}$ has density
$
t
\mapsto
c_{p,\kappa,f} (1-t^2)^{(p-3)/2} f(\kappa t)\,\mathbb{I}[t\in[-1,1]]
$
(throughout~$\mathbb{I}[A]$ stands for the indicator function of the set or condition~$A$). This is compatible with the cumulative distribution functions in~(\ref{unifcdf})-(\ref{fvmlcdf}), and shows that
$
c_{p,\kappa,f}
=
1/\big( \int_{-1}^1 (1-t^2)^{(p-3)/2} f(\kappa t)\,dt \big)
.
$
Finally, note that~$f(\cdot)=f_{\rm FvML}(\cdot)=\exp(\cdot)$ provides the FvML distributions above.
As announced in the introduction, we consider triangular arrays of observations~$\mathbf{X}_{ni}$, $i=1,\ldots,n$, $n=1,2,\ldots$ where the random vectors~$\mathbf{X}_{ni}$, $i=1,\ldots,n$ take values in~$\mathcal{S}^{p_n-1}$. More specifically, for any~${\pmb \theta}_n\in\mathcal{S}^{p_n-1}$, $\kappa_n>0$ and~$f$ as above, we will denote as~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$ the hypothesis under which~$\mathbf{X}_{ni}$, $i=1,\ldots,n$ are mutually independent and share the common density~$\ensuremath{\mathbf{x}}\mapsto c_{p_n,\kappa_n,f} f(\kappa_n\, \ensuremath{\mathbf{x}}'{\pmb \theta}_n)$. Note that larger values of~$\kappa_n$ provide increasingly severe deviations from the null of uniformity, which is obtained as~$\kappa_n$ goes to zero. Denoting the null hypothesis as~${\rm P}^{(n)}_{0}$, it is then natural to wonder whether or not ``appropriately small" sequences~$\kappa_n$ make~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$ and~${\rm P}^{(n)}_{0}$ mutually contiguous. The following result answers this question (see Appendix~\ref{appA} for a proof).
\vspace{1mm}
\begin{Theor}
\label{contigtheor}
Let~$(p_n)$ be a sequence in~$\{2,3,\ldots\}$. Let $({\pmb \theta}_n)$ be a sequence such that ${\pmb \theta}_n\in\mathcal{S}^{p_n-1}$ for all~$n$, $(\kappa_n)$ be a positive sequence such that $\kappa_n^2=O(\frac{p_n}{n})$, and assume that $f$ is twice differentiable at~$0$. Then, the sequence of alternative hypotheses~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$ and the null sequence~${\rm P}^{(n)}_{0}$ are mutually contiguous.
\end{Theor}
This contiguity result covers both the low- and high-dimensional cases. In the low-dimensional case, the usual parametric rate $\kappa_n\sim 1/\sqrt{n}$ provides contiguous alternatives, which implies that, irrespective of~$f$, there exist no consistent tests for~$\mathcal{H}_{0n}: \{{\rm P}^{(n)}_{0}\}$ against~$\mathcal{H}_{1n}: \{{\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}\}$ if $\kappa_n=\tau/\sqrt{n}$, $\tau>0$. The high-dimensional case is more interesting. First, we stress that the contiguity result in Theorem~\ref{contigtheor} does not impose conditions on~$p_n$, hence in particular applies when~(a) $p_n/n\to c$ for some~$c>0$ or (b) $p_n/n\to\infty$. Interestingly, the result shows that contiguity in cases~(a)-(b) can be achieved for sequences~($\kappa_n$) that do not converge to zero: a constant sequence~($\kappa_n$) ensures contiguity in case~(a), whereas contiguity in case~(b) may even be obtained for a sequence ($\kappa_n$) that diverges to infinity in a suitable way. In both cases, there then exist no consistent tests for~$\mathcal{H}_{0n}: \{{\rm P}^{(n)}_{0}\}$ against the corresponding sequences of alternatives~$\mathcal{H}_{1n}: \{{\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}\}$, despite the fact that the sequences~$(\kappa_n)$ are not~$o(1)$. This may be puzzling at first since such sequences are expected to lead to severe alternatives to uniformity; it actually makes sense, however, that the fast increase of the dimension~$p_n$, despite the favorable sequences~$(\kappa_n)$, makes the problem difficult enough to prevent the existence of consistent tests.
\section{Optimal testing under specified modal location}
\label{oraclesec}
Whenever the modal location~${\pmb \theta}_n$ is specified (a case that is explicitly treated in~\citealp{MarJup2000}), optimal tests of uniformity can be obtained from the following Local Asymptotic Normality (LAN) result (see Appendix~\ref{appA} for a proof). To the best of our knowledge, this result provides the first instance of the LAN structure in high dimensions.
\begin{Theor}
\label{LANtheor}
Let~$(p_n)$ be a sequence in~$\{2,3,\ldots\}$ and let $({\pmb \theta}_n)$ be a sequence such that ${\pmb \theta}_n\in\mathcal{S}^{p_n-1}$ for all~$n$. Let $\kappa_n=\tau_n\sqrt{p_n/n}$, where the positive sequence~$(\tau_n)$ is~$O(1)$ but not~$o(1)$, and assume that $f$ is twice differentiable at~$0$.
Then, as~$n\rightarrow\infty$ under~${\rm P}_{0}^{(n)}$,
\begin{equation}
\label{LAQmain}
\log \frac{d{\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}}{d{\rm P}^{(n)}_{0}}
=
\tau_n
\Delta_{{\pmb \theta}_n}^{(n)}
-
\frac{\tau_n^2}{2}
+
o_{\rm P}(1)
,
\end{equation}
where $
\Delta_{{\pmb \theta}_n}^{(n)}
:=
\sqrt{n p_n}\,
\bar{\mathbf{X}}_n^{\prime} {\pmb \theta}_n
$
is asymptotically standard normal.
In other words, the model $\{ {\rm P}^{(n)}_{{\pmb \theta}_n,\kappa,f} : \kappa\geq 0 \}$ (where~${\rm P}^{(n)}_{{\pmb \theta}_n,0,f}:={\rm P}^{(n)}_0$ for any~${\pmb \theta}_n$ and~$f$) is locally asymptotically normal at~$\kappa=0$ with central sequence~$\Delta_{{\pmb \theta}_n}^{(n)}$, Fisher information~$1$, and contiguity rate~$\sqrt{p_n/n}$.
\end{Theor}
This result, which covers both the low- and high-dimensional cases, reveals that the rate~$\kappa_n\sim \sqrt{p_n/n}$ in Theorem~\ref{contigtheor} is actually the contiguity rate of the considered model (that is, more severe alternatives are not contiguous to the null of uniformity). In low dimensions, the usual parametric contiguity rate $\kappa_n\sim 1/\sqrt{n}$ is obtained. The high-dimensional rate is of course non-standard. Yet in the FvML high-dimensional case, this rate may be related to the fact that, as~$p\to\infty$, one needs to consider~$\kappa_p\sim\sqrt{p}$ to obtain FvML $p$-vectors that provide non-degenerate weak limiting results that are different from those obtained from $p$-vectors that are uniform over the sphere (see \cite{Wat1988} for a precise result); the contiguity rate~$\kappa_n\sim \sqrt{p_n/n}$ then intuitively results from a standard $1/\sqrt{n}$-shrinkage starting from this non-trivial~$\kappa_p\sim\sqrt{p}$ high-dimensional situation.
Now, consider the \mbox{specified-${\pmb \theta}_n$} problem, that is, the problem of testing~$\{{\rm P}^{(n)}_{0}\}$ (uniformity over~$\mathcal{S}^{p_n-1}$) against~$\cup_{\kappa>0} \cup_f \{{\rm P}^{(n)}_{{\pmb \theta}_n,\kappa,f}\}$. Theorem~\ref{LANtheor} entails that the test~$\phi_{{\pmb \theta}_n}^{(n)}$ rejecting the null at asymptotic level~$\alpha$ whenever
\begin{equation}
\label{deforayoup}
\Delta_{{\pmb \theta}_n}^{(n)}
=
\sqrt{n p_n}
\,
\bar{\mathbf{X}}_n^{\prime} {\pmb \theta}_n
>
z_\alpha
\end{equation}
is \emph{locally asymptotically most powerful}.
Since Le Cam's third lemma readily implies that $\Delta_{{\pmb \theta}_n}^{(n)}$ is asymptotically normal with mean~$\tau$ and variance one under~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$, with~$\kappa_n=\tau\sqrt{p_n/n}$, the corresponding asymptotic power of~$\phi_{{\pmb \theta}_n}^{(n)}$ is
\begin{equation}
\label{poworacle}
\lim_{n\to\infty}
{\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}
\big[
\Delta_{{\pmb \theta}_n}^{(n)} > z_\alpha
\big]
=
1-\Phi(z_\alpha - \tau)
.
\end{equation}
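To make this oracle benchmark concrete, the following Python sketch (not part of the original analysis; all function names are ours) computes the specified-${\pmb \theta}_n$ statistic $\Delta_{{\pmb \theta}_n}^{(n)}=\sqrt{np_n}\,\bar{\mathbf{X}}_n^{\prime}{\pmb \theta}_n$ from a sample of unit vectors and evaluates the asymptotic power $1-\Phi(z_\alpha-\tau)$ of the level-$\alpha$ test:

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(u, lo=-10.0, hi=10.0):
    """Standard normal quantile function, by bisection (illustrative)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def oracle_statistic(X, theta):
    """Specified-theta central sequence Delta = sqrt(n*p) * Xbar' theta,
    for a list X of n unit p-vectors (hypothetical helper)."""
    n, p = len(X), len(X[0])
    xbar = [sum(row[k] for row in X) / n for k in range(p)]
    return math.sqrt(n * p) * sum(xbar[k] * theta[k] for k in range(p))

def oracle_power(tau, alpha=0.05):
    """Asymptotic power 1 - Phi(z_alpha - tau) against the contiguous
    alternatives kappa_n = tau * sqrt(p_n / n)."""
    return 1.0 - Phi(Phi_inv(1.0 - alpha) - tau)
```

The power formula is monotone in $\tau$ and reduces to the nominal level at $\tau=0$, as expected.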
While all results of this section so far covered both the low- and high-dimensional cases, we need to treat these cases separately to investigate how the Rayleigh test compares with the optimal test~$\phi_{{\pmb \theta}_n}^{(n)}$.
We start with the low-dimensional case. Denoting by~$\chi^2_p(\delta)$ the non-central chi-square distribution with~$p$ degrees of freedom and non-centrality parameter~$\delta$, Le Cam's third lemma allows one to show that, under the contiguous alternatives~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$, with~$\kappa_n=\tau\sqrt{p/n}$ (compare with the local alternatives from Theorem~\ref{LANtheor}),
\begin{equation}
\label{fixedplaw}
R_n
\stackrel{\mathcal{D}}{\to}
\chi^2_p(\tau^2)
\end{equation}
as~$n\rightarrow\infty$; for the sake of completeness, we provide a proof in the supplementary article \cite{Cut2015}. Denoting by~$\Psi_p(\cdot)$ the cumulative distribution function of the $\chi^2_p$ distribution, the corresponding asymptotic power of the Rayleigh test is therefore
\begin{equation}
\label{powfixedp}
\lim_{n\to\infty}
{\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}
\big[
R_n
>
\Psi^{-1}_p(1-\alpha)
\big]
=
{\rm P}
\big[
Y>\Psi^{-1}_p(1-\alpha)
\big],
\quad
\textrm{ with }
Y\sim \chi^2_p(\tau^2)
,
\end{equation}
which is strictly smaller than the asymptotic power in~(\ref{poworacle}). We conclude that, in the \mbox{specified-${\pmb \theta}_n$} case, the low-dimensional Rayleigh test is not locally asymptotically most powerful yet shows non-trivial asymptotic powers against contiguous alternatives.
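This power deficiency can be illustrated by simulating the two limiting distributions directly: under the contiguous alternatives, the oracle statistic is asymptotically $\mathcal{N}(\tau,1)$, while $R_n$ converges to $\chi^2_p(\tau^2)$. The following sketch (illustrative parameter values of our choosing, not those of the paper) compares the resulting rejection probabilities:

```python
import random

random.seed(0)
p, tau, alpha, M = 10, 2.0, 0.05, 20000

# Empirical (1-alpha)-quantile of the central chi-square with p d.f.
null_draws = sorted(sum(random.gauss(0, 1) ** 2 for _ in range(p))
                    for _ in range(M))
q = null_draws[int((1 - alpha) * M)]

z_alpha = 1.6449  # standard normal 95% quantile

oracle_rej = rayleigh_rej = 0
for _ in range(M):
    z = [random.gauss(0, 1) for _ in range(p)]
    # Limiting laws under the contiguous alternatives: the first
    # coordinate of the Gaussian limit is shifted by tau.
    if z[0] + tau > z_alpha:
        oracle_rej += 1
    if (z[0] + tau) ** 2 + sum(v * v for v in z[1:]) > q:
        rayleigh_rej += 1

oracle_power = oracle_rej / M      # approximately 1 - Phi(z_alpha - tau)
rayleigh_power = rayleigh_rej / M  # approximately P[chi2_p(tau^2) > q]
```

In this configuration the simulated Rayleigh power is clearly below the oracle power, in line with the strict inequality between the two asymptotic powers.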
The story is different in the high-dimensional case, as can be guessed from the following heuristic reasoning. In view of~(\ref{fixedplaw}), we have that, as~$n\rightarrow\infty$ under ${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$, with~$\kappa_n=\tau\sqrt{p/n}$,
$$
R_n^{\rm St}
=
\frac{R_n-p}{\sqrt{2p}}
\stackrel{\mathcal{D}}{\to}
\frac{\chi^2_1(\tau^2)-1}{\sqrt{2p}} +
\frac{\chi^2_{p-1}-(p-1)}{\sqrt{2p}}
,
$$
where the two chi-square variables are independent. When both~$n$ and~$p$ are large, it is therefore expected that, under the same sequence of alternatives,
$
R_n^{\rm St}
\approx
\mathcal{N}\big(\frac{\tau^2}{\sqrt{2p}}
\,
,
1 + \frac{2\tau^2}{p}\big)
,
$
where~$Z_n\approx \mathcal{L}$ means that the distribution of~$Z_n$ is close to~$\mathcal{L}$. Thus, in the high-dimensional case (where~$p=p_n\to\infty$), $R_n^{\rm St}$ is expected to remain asymptotically standard normal under these alternatives, which would imply that the high-dimensional Rayleigh test in~(\ref{RayHDtraindelavie}) has asymptotic powers equal to the nominal level~$\alpha$.
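The pairwise-sum representation underlying these heuristics (and Theorem~\ref{maintheorempower} below) follows from $\|\mathbf{X}_{ni}\|=1$: since $R_n=np\|\bar{\mathbf{X}}_n\|^2=p+\frac{2p}{n}\sum_{i<j}\mathbf{X}_{ni}^{\prime}\mathbf{X}_{nj}$, one has $R_n^{\rm St}=\frac{\sqrt{2p}}{n}\sum_{i<j}\mathbf{X}_{ni}^{\prime}\mathbf{X}_{nj}$. A quick numerical check of this algebraic identity (an illustrative sketch, not from the paper):

```python
import random, math

random.seed(1)
n, p = 8, 5

# Random unit vectors in R^p (normalized Gaussians).
X = []
for _ in range(n):
    g = [random.gauss(0, 1) for _ in range(p)]
    norm = math.sqrt(sum(v * v for v in g))
    X.append([v / norm for v in g])

# R_n = n p ||Xbar||^2 and its standardization (R_n - p)/sqrt(2p).
xbar = [sum(row[k] for row in X) / n for k in range(p)]
Rn = n * p * sum(v * v for v in xbar)
Rn_st = (Rn - p) / math.sqrt(2 * p)

# Equivalent pairwise form, using ||X_i|| = 1:
# R_n^St = (sqrt(2p)/n) * sum_{i<j} X_i' X_j.
pair_sum = sum(sum(X[i][k] * X[j][k] for k in range(p))
               for i in range(n) for j in range(i + 1, n))
Rn_st_pairs = math.sqrt(2 * p) / n * pair_sum
```

Both computations agree up to floating-point error, confirming that the standardized Rayleigh statistic is a (scaled) U-statistic of the pairwise inner products.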
The high-dimensional LAN result in Theorem~\ref{LANtheor} allows us to confirm these heuristics. Letting~$\kappa_n=\tau_n\sqrt{p_n/n}$, where~$\tau_n$ is $O(1)$, Theorem~\ref{LANtheor} readily yields that, as~$n\rightarrow\infty$,
\begin{eqnarray*}
\lefteqn{
{\rm Cov}_{{\rm P}_{0}^{(n)}}\!
\bigg[
{R}_n^{\rm St}
,
\log \frac{d{\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}}{d{\rm P}^{(n)}_{0}}
\bigg]
=
{\rm Cov}_{{\rm P}_{0}^{(n)}}\!
\big[{R}_n^{\rm St},\Delta_{{\pmb \theta}_n}^{(n)}\big]
\tau_n
+o(1)
}
\\[2mm]
& &
\hspace{10mm}
=
\frac{\sqrt{2}p_n}{n^{3/2}} \tau_n
\sum_{i=1}^n
\sum_{1\leq k<\ell\leq n}
{\rm E}_{{\rm P}_{0}^{(n)}}\!
[(\mathbf{X}_{ni}^{\prime} {\pmb \theta}_n) (\mathbf{X}_{nk}^{\prime} \mathbf{X}_{n\ell})]
+
o(1)
=
o(1)
,
\end{eqnarray*}
so that Le Cam's third lemma implies that $R_n^{\rm St}$ remains asymptotically standard normal under~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$.
This confirms that, unlike in the low-dimensional case, the high-dimensional Rayleigh test does not show any power under the contiguous alternatives from Theorem~\ref{LANtheor}. In other words, the high-dimensional Rayleigh test fails to be rate-consistent for the \mbox{specified-${\pmb \theta}_n$} problem.
The Rayleigh test, however, does not make use of the specified value of the modal location~${\pmb \theta}_n$, hence does not primarily address the \mbox{specified-${\pmb \theta}_n$} problem but rather the \mbox{unspecified-${\pmb \theta}_n$} one. Therefore, the key question is whether or not the Rayleigh test is optimal for the latter problem. We answer this question in the next section.
\section{Optimal testing under unspecified modal location}
\label{invariantsec}
Building on the results of the previous section, two natural approaches that may lead to an optimal test for the \mbox{unspecified-${\pmb \theta}_n$} problem are the following. The first one consists in substituting an estimator~$\hat{\pmb \theta}_n$ for~${\pmb \theta}_n$ in the optimal test~$\phi_{{\pmb \theta}_n}^{(n)}$ above. For the so-called \emph{spherical mean}~$\hat{\pmb \theta}_n=\bar{\mathbf{X}}_n/\|\bar{\mathbf{X}}_n\|$ (which is the MLE for~${\pmb \theta}_n$ in the FvML case), the resulting test rejects the null for large values of
$
\Delta_{\hat{\pmb \theta}_n}^{(n)}
=
\sqrt{n p_n}
\,
\bar{\mathbf{X}}_n^{\prime} \hat{\pmb \theta}_n
=
\sqrt{n p_n}
\,
\|\bar{\mathbf{X}}_n\|
=R_n^{1/2}
,
$
hence coincides with the Rayleigh test. The second approach, in the spirit of~\cite{Dav1977,Dav1987,Dav2002}, rather consists in adopting the test statistic
$
\sup_{{\pmb \theta}_n\in\mathcal{S}^{p_n-1}}
\Delta_{{\pmb \theta}_n}^{(n)}
=
\sqrt{n p_n}
\,
\|\bar{\mathbf{X}}_n\|
,
$
which again leads to the Rayleigh test. These considerations suggest that the Rayleigh test may indeed be optimal for the \mbox{unspecified-${\pmb \theta}_n$} problem. In this section, we investigate whether or not this is the case, both in low and high dimensions.
\subsection{The low-dimensional case}
\label{invariantlowsec}
To investigate the optimality properties of the low-dimensional Rayleigh test for the \mbox{un\mbox{specified-${\pmb \theta}_n$}} problem, it is helpful to adopt a new parametri\-zation. For fixed~$p$ and~$f$, the model is indexed by $({\pmb \theta},\kappa)\in\mathcal{S}^{p-1}\times\mathbb R^+$, where the value~$\kappa=0$ makes~${\pmb \theta}$ unidentified (for fixed~$p$, the dimension of~${\pmb \theta}$ does not depend on~$n$, so that there is no need to consider sequences~$({\pmb \theta}_n)$). We then consider the alternative parametrization in~${\pmb \mu}:=\kappa{\pmb \theta}$, for which the fixed-$p$ result in Theorem~\ref{LANtheor} readily rewrites as follows.
\begin{Theor}
\label{LANtheorlow}
Fix an integer~$p\geq 2$ and let ${\pmb\mu}_n=\sqrt{p/n}\,{\pmb \tau}_n$ for all~$n$, where the sequence~$({\pmb \tau}_n)$ in~$\mathbb R^p$ is~$O(1)$ but not~$o(1)$. Assume that $f$ is twice differentiable at~$0$. For any~${\pmb \mu}\in\mathbb R^p\setminus\{\mathbf{0}\}$, let ${\rm P}^{(n)}_{{\pmb\mu},f}:={\rm P}^{(n)}_{{\pmb \theta},\kappa,f}$, where~${\pmb \mu}=:\kappa{\pmb \theta}$, with~${\pmb \theta}\in\mathcal{S}^{p-1}$.
Then, as~$n\rightarrow\infty$ under~${\rm P}_{0}^{(n)}$,
$
\log \big(d{\rm P}^{(n)}_{{\pmb\mu}_n,f}/d{\rm P}^{(n)}_{0}\big)
=
{\pmb \tau}_n^{\prime}
{\pmb \Delta}^{(n)}
-
\frac{1}{2} \|{\pmb \tau}_n\|^2
+
o_{\rm P}(1)
$, where ${\pmb \Delta}^{(n)}:=\sqrt{n p}\,\bar{\mathbf{X}}_n$
is asymptotically standard $p$-variate normal.
\end{Theor}
In the new parametrization, note that the problem of testing uniformity consists in testing~$\mathcal{H}_0^{(n)}: {\pmb\mu}={\bf 0}$ versus~$\mathcal{H}_1^{(n)}: {\pmb\mu}\neq {\bf 0}$. Theorem~\ref{LANtheorlow} then ensures that the test rejecting the null at asymptotic level~$\alpha$ whenever
$
\|{\pmb \Delta}^{(n)}\|^2=np\| \bar{\mathbf{X}}_n \|^2 >\Psi_p^{-1}(1-\alpha)
$ ---
that is, the low-dimensional Rayleigh test --- is locally asymptotically maximin; see, e.g., \cite{Lie2008}. This new optimality property of the low-dimensional Rayleigh test complements the one stating that this test is locally most powerful invariant;
see, e.g., \cite{Chi2003}, Section~6.3.5.
The specified-${\pmb \theta}_n$ and unspecified-${\pmb \theta}_n$ testing problems are two distinct statistical problems, that, even in the low-dimensional case considered, provide different efficiency bounds. In low dimensions, the Rayleigh test is optimal for the unspecified-${\pmb \theta}_n$ problem, but not for the specified-${\pmb \theta}_n$ one (the latter suboptimality follows from the fact that the asymptotic powers in~(\ref{powfixedp}) are strictly smaller than those of the optimal test in~(\ref{poworacle})). This thoroughly describes the optimality properties of this test in the low-dimensional case, so that we may now focus on the high-dimensional case.
\subsection{The high-dimensional case}
\label{invarianthighsec}
If~$p_n$ goes to infinity, then the dimension of the parameter~$({\pmb \theta}_n,\kappa)$ increases with~$n$, so that there cannot be a high-dimensional analogue of the LAN result in Theorem~\ref{LANtheorlow}. We therefore rather adopt, in the present hypothesis testing context, an \emph{invariance} approach that is close in spirit to the one used by \cite{Mor2009} in a point estimation context.
The null of uniformity and all collections of alternatives~$\mathcal{P}^{(n)}_{\kappa,f}:=\{{\rm P}^{(n)}_{{\pmb \theta},\kappa,f}:{\pmb \theta}\in\mathcal{S}^{p_n-1}\}$ (hence also the problem of testing uniformity against rotationally symmetric alternatives itself) are invariant under the group of rotations~$\mathcal{G}^{(n)}:=\{g^{(n)}_\ensuremath{\mathbf{O}}:\ensuremath{\mathbf{O}}\in SO(p_n)\}$, where~$g^{(n)}_\ensuremath{\mathbf{O}}(\ensuremath{\mathbf{x}}_1,\ldots,\ensuremath{\mathbf{x}}_n)=(\ensuremath{\mathbf{O}}\ensuremath{\mathbf{x}}_1,\ldots,\ensuremath{\mathbf{O}}\ensuremath{\mathbf{x}}_n)$ for any~$(\ensuremath{\mathbf{x}}_1,\ldots,\ensuremath{\mathbf{x}}_n)\in \mathcal{S}^{p_n-1}\times \ldots \times \mathcal{S}^{p_n-1}$ \mbox{($n$ times)} and where~$SO(p_n)$ stands for the collection of $p_n\times p_n$ orthogonal matrices with determinant one.
The invariance principle (see, e.g., \cite{Shao2003}, Section~6.3, or \cite{Lehetal2005}, Chapter~6) then suggests restricting to $\mathcal{G}^{(n)}$-invariant tests, which automatically are distribution-free under any~$\mathcal{P}^{(n)}_{\kappa,f}$.
As usual, optimal invariant tests are to be determined in the image of the original model by a maximal invariant~$\ensuremath{\mathbf{T}}_n$ of~$\mathcal{G}^{(n)}$. The likelihood (with respect to the surface area measure~$m_{p_n}$ on~$\mathcal{S}^{p_n-1}$) associated with the image of~$\mathcal{P}^{(n)}_{\kappa_n,f}$ by~$\ensuremath{\mathbf{T}}_n$ is given by
$$
\frac{d{\rm P}^{(n)\ensuremath{\mathbf{T}}_n}_{\kappa_n,f}}{dm_{p_n}}
=
\int_{SO(p_n)}
\prod_{i=1}^n
\Big[
c_{p_n,\kappa_n,f}
f(\kappa_n (\ensuremath{\mathbf{O}}\mathbf{X}_{ni})^{\prime} {\pmb \theta}_n)
\Big]
\,d\ensuremath{\mathbf{O}}
,
$$
where the integral is with respect to the Haar measure on~$SO(p_n)$; see, e.g., Lemma~2.5.1 in~\cite{Gir1996}. The resulting log-likelihood ratio to the null of uniformity is therefore
\begin{eqnarray}
\Lambda_{n,f}^{\ensuremath{\mathbf{T}}_n}
\, := \,
\log \frac{d{\rm P}^{(n)\ensuremath{\mathbf{T}}_n}_{\kappa_n,f}}{d{\rm P}^{(n)}_{0}}
&=&
\log
\,
\frac{c_{p_n,\kappa_n,f}^n
\int_{SO(p_n)}
\prod_{i=1}^n
f(\kappa_n \mathbf{X}_{ni}^{\prime} (\ensuremath{\mathbf{O}}^{\prime} {\pmb \theta}_n))\,d\ensuremath{\mathbf{O}}
}{c_{p_n}^n}
\nonumber
\\[2mm]
&=&
\log
\,
\frac{c_{p_n,\kappa_n,f}^n
{\rm E}
\big[
\prod_{i=1}^n
f(\kappa_n \mathbf{X}_{ni}^{\prime} \ensuremath{\mathbf{U}}) | \mathbf{X}_{n1},\ldots,\mathbf{X}_{nn} \big]
}{c_{p_n}^n}
,
\label{invarL}
\end{eqnarray}
where~$\ensuremath{\mathbf{U}}$ is uniformly distributed over~$\mathcal{S}^{p_n-1}$ and is independent of the~$\mathbf{X}_{ni}$'s. The following theorem shows that, in the FvML case~$f(\cdot)=f_{\rm FvML}(\cdot)=\exp(\cdot)$, this collection of log-likelihood ratios enjoys the LAN property.
\begin{Theor}
\label{LANinvartheor}
Let $(p_n)$ be a sequence of positive integers diverging to~$\infty$ as $n\rightarrow\infty$ and let~$\kappa_n=\tau_n p_n^{3/4}/\sqrt{n}$, where the positive sequence~$(\tau_n)$ is~$O(1)$ but not~$o(1)$.
Then, as~$n\rightarrow\infty$ under~${\rm P}_{0}^{(n)}$, we have that
\begin{equation}
\label{LAQinvarmain}
\log \frac{d{\rm P}^{(n)\ensuremath{\mathbf{T}}_n}_{\kappa_n,f_{\rm FvML}}}{d{\rm P}^{(n)}_{0}}
=
\tau_n^2
\Delta^{(n)\ensuremath{\mathbf{T}}_n}
-
\frac{\tau_n^4}{4}
+
o_{\rm P}(1)
,
\end{equation}
where
$
\Delta^{(n)\ensuremath{\mathbf{T}}_n}
:=
{R}_n^{\rm St}/\sqrt{2}
$ is asymptotically normal with mean zero and variance~$1/2$
\linebreak (${R}_n^{\rm St}$ is the standardized Rayleigh test statistic in~(\ref{raylhdstat})).
\end{Theor}
Applying Le Cam's third lemma,
\vspace{-.8mm}
we obtain that, as~$n\rightarrow\infty$ under~${\rm P}^{(n)\ensuremath{\mathbf{T}}_n}_{\kappa_n,f_{\rm FvML}}$, with~$\kappa_n=\tau p_n^{3/4}/\sqrt{n}$, $\Delta^{(n)\ensuremath{\mathbf{T}}_n}$ converges weakly to the normal distribution with mean~$\Gamma\tau^2$ and variance~$\Gamma$, with~$\Gamma=1/2$. The model~$\{{\rm P}^{(n)\ensuremath{\mathbf{T}}_n}_{\kappa,f_{\rm FvML}}:\kappa\geq 0\}$ (where~${\rm P}^{(n)\ensuremath{\mathbf{T}}_n}_{0,f_{\rm FvML}}:={\rm P}^{(n)}_0$) is thus ``second-order'' LAN, in the sense that the mean of the limiting Gaussian shift experiment is quadratic (rather than linear) in~$\tau$. Clearly, this does not change the form of locally asymptotically optimal tests, but only their asymptotic performances. Note that the contiguity rate~$\kappa_n \sim p_n^{3/4}/\sqrt{n}$ associated with this new LAN property differs from the contiguity rate~$\kappa_n \sim \sqrt{p_n/n}$ in Theorem~\ref{LANtheor}.
Theorem~\ref{LANinvartheor} entails that the test rejecting the null of uniformity at asymptotic level~$\alpha$ whenever
$\Delta^{(n)\ensuremath{\mathbf{T}}_n}/\sqrt{\Gamma}=R_n^{\rm St}>z_\alpha$ (that is, the high-dimensional Rayleigh test in~(\ref{RayHDtraindelavie})) is, in the FvML case, \emph{locally asymptotically most powerful invariant}, that is, locally asymptotically most powerful in the class of invariant tests. This optimality result is of a high-dimensional asymptotic nature and also covers cases where~$\kappa_n$ does not converge to~$0$, hence does not follow from the aforementioned local optimality result from \cite{Chi2003}. Le Cam's third lemma readily implies that ${R}_n^{\rm St}$ converges weakly to the normal distribution with mean~$\tau^2/\sqrt{2}$ and variance one as~$n\rightarrow\infty$ under~${\rm P}^{(n)\ensuremath{\mathbf{T}}_n}_{\kappa_n,f_{\rm FvML}}$, with~$\kappa_n=\tau p_n^{3/4}/\sqrt{n}$, so that the corresponding asymptotic power of the Rayleigh test is given by
\begin{equation}
\label{powRay}
\lim_{n\to\infty}
{\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f_{\rm FvML}}
\big[
R_n^{\rm St}>z_\alpha
\big]
=
1
-
\Phi
\big(
z_\alpha- {\textstyle\frac{\tau^2}{\sqrt{2}}}
\big)
,
\end{equation}
where the sequence~$({\pmb \theta}_n)$ is such that ${\pmb \theta}_n\in\mathcal{S}^{p_n-1}$ for all~$n$ but is otherwise arbitrary. While the Rayleigh test is blind to alternatives in~$\kappa_n\sim \sqrt{p_n/n}$, it thus detects alternatives in~$\kappa_n\sim p_n^{3/4}/\sqrt{n}$, which, in view of Theorem~\ref{LANinvartheor}, is the best that can be achieved for the \mbox{un\mbox{specified-${\pmb \theta}_n$}} problem.
Interestingly, we might have guessed that these alternatives in~$\kappa_n\sim p_n^{3/4}/\sqrt{n}$ are those that can be detected by the high-dimensional Rayleigh test. Recall indeed that heuristic arguments in Section~\ref{oraclesec} suggested that, under ${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$, with~$\kappa_n=\tau\sqrt{p/n}$, the distribution of~$R_n^{\rm St}$ is close to $\mathcal{N}\big(\frac{\tau^2}{\sqrt{2p}},1 + \frac{2\tau^2}{p}\big)$ for large~$n$ and~$p$. Consequently, to obtain, in high dimensions, an asymptotic non-null distribution that differs from the limiting null (standard normal) one, we need to consider alternatives of the form~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$, with~$\kappa_n=\tau p_n^{3/4}/\sqrt{n}$, under which the distribution of~$R_n^{\rm St}$ is then expected to be approximately $\mathcal{N}\big(\frac{\tau^2}{\sqrt{2}},1\big)$ for large~$n$ and~$p$. This is fully in line with the non-null distribution and local asymptotic powers obtained from Le Cam's third lemma in the previous paragraph.
Provided that~$f$ is four times differentiable at~$0$ and that~$p_n=o(n^2)$, tedious computations allowed us to show that a
\vspace{-.8mm}
fourth-order expansion of the $f$-based log-likelihood ratio~$\Lambda_{n,f}^{\ensuremath{\mathbf{T}}_n}$ above, still based on~$\kappa_n=\tau_n p_n^{3/4}/\sqrt{n}$, exactly provides the right-hand side of~(\ref{LAQinvarmain}), with the same central sequence~$
\Delta^{(n)\ensuremath{\mathbf{T}}_n}$. However, turning this into a proper $f$-based version of Theorem~\ref{LANinvartheor} requires controlling the corresponding (fifth-order) remainder term, which proved to be extremely difficult. Yet we conjecture that Theorem~\ref{LANinvartheor} indeed extends to an arbitrary~$f$ admitting five derivatives at~$0$, under the aforementioned assumption that~$p_n=o(n^2)$ (an assumption that is superfluous in the FvML case, since Theorem~\ref{LANinvartheor} allows~$p_n$ to go to infinity in an arbitrary way as a function of~$n$). Proving this conjecture would establish that the Rayleigh test is locally asymptotically most powerful invariant under any such~$f$, with the same asymptotic powers as in~(\ref{powRay}). Since this remains a conjecture, we now study the asymptotic powers of the high-dimensional Rayleigh test away from the FvML case.
\section{Asymptotic non-null behaviour of the Rayleigh test}
\label{Rayleighsec}
In this section, we derive the asymptotic distribution of the high-dimensional Rayleigh test under rotationally symmetric distributions that encompass those considered in Sections~\ref{contiguitysec}-\ref{invariantsec}. Here we do not require that the rotationally symmetric alternatives are monotone (in the sense of Section~\ref{contiguitysec}), nor absolutely continuous with respect to the surface area measure on the unit sphere, nor that they involve a concentration parameter~$\kappa$. Yet one of our objectives is to interpret the results of this section in the light of the contiguity/LAN/rate-consistency/power results obtained above.
More specifically, the sequences of alternatives we consider in this section are described by triangular arrays of observations $\mathbf{X}_{ni}$, $i=1,\ldots,n$, $n=1,2,\ldots$ such that, for any~$n$, $\mathbf{X}_{n1},\mathbf{X}_{n2},\ldots,\mathbf{X}_{nn}$ are mutually independent and share a common rotationally symmetric distribution on~$\mathcal{S}^{p_n-1}$. We denote by~${\rm P}^{(n)}_{{\pmb \theta}_n,F_n}$ the corresponding hypothesis when~$\mathbf{X}_{ni}$ is rotationally symmetric about~${\pmb \theta}_n$ and~$\mathbf{X}_{ni}^{\prime}{\pmb \theta}_n$ has cumulative distribution function~$F_n$. Since the Rayleigh test statistic is invariant under rotations, we will, without loss of generality, restrict to the case for which~${\pmb \theta}_n$, for any~$n$, coincides with the first vector of the canonical basis of~$\mathbb R^{p_n}$. The corresponding sequence of hypotheses will then simply be denoted as~${\rm P}^{(n)}_{F_n}$.
Under the null of uniformity (which we still denote as~${\rm P}_0^{(n)}$), the test statistic~${R}_n^{\rm St}$ in~(\ref{raylhdstat}) has mean zero and variance~$\frac{n-1}{n}$, which converges to~$1$. Rotationally symmetric alternatives are expected to have an impact on the asymptotic mean and variance of~${R}_n^{\rm St}$. This is made precise in the following result (see Appendix~\ref{appB1} for a proof).
\begin{Prop}
\label{propmoments}
Under~${\rm P}^{(n)}_{F_n}$,
$
{\rm E}[{R}_n^{\rm St}]
=
(n-1)\sqrt{p_n} \, e_{n1}^2/\sqrt{2}
$
and
$\sigma_n^2
:=
p_n \tilde{e}_{n2}^2
+
2n p_n e_{n1}^2 \tilde{e}_{n2}
+
f_{n2}^2
={\rm Var}[{R}_n^{\rm St}]+o(1)$ as~$n\to\infty$, where the expectations~$e_{n\ell}:={\rm E}[(\mathbf{X}_{ni}'{\pmb \theta}_n)^\ell]$, $\tilde{e}_{n\ell}:={\rm E}[(\mathbf{X}_{ni}'{\pmb \theta}_n-e_{n1})^\ell]$, and $f_{n\ell}:={\rm E}[ (1- (\mathbf{X}_{ni}'{\pmb \theta}_n)^2)^{\ell/2}]$ are taken under~${\rm P}^{(n)}_{F_n}$.
\end{Prop}
\vspace{2mm}
Under~${\rm P}_0^{(n)}$, $e_{n1}=0$ and $\tilde{e}_{n2}=e_{n2}=1/p_n$, so that Proposition~\ref{propmoments} is compatible with the null values of~${\rm E}[{R}_n^{\rm St}]$ and ${\rm Var}[{R}_n^{\rm St}]$ provided above. Now, parallel to the null case (see Theorem~\ref{raylnull}), the Rayleigh test statistic, after appropriate standardization, is also asymptotically standard normal under a broad class of rotationally symmetric alternatives. More precisely, we have the following result (see Appendix~\ref{appB2} for a proof).
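For a concrete check of these null values, recall that under uniformity $\mathbf{X}_{ni}^{\prime}{\pmb \theta}_n$ has density $s\mapsto c_p(1-s^2)^{(p-3)/2}$ on $[-1,1]$, where $c_p$ is the normalizing constant appearing in the integrals of Section~\ref{Rayleighsec}; the closed form $c_p=\Gamma(p/2)/(\sqrt{\pi}\,\Gamma((p-1)/2))$ used below is a standard formula we state here as an assumption. The following sketch verifies $e_{n1}=0$ and $\tilde{e}_{n2}=e_{n2}=1/p$ by simple quadrature:

```python
import math

def cp(p):
    # Normalizing constant of the density of X'theta under uniformity:
    # c_p = Gamma(p/2) / (sqrt(pi) * Gamma((p-1)/2))  (standard formula,
    # stated as an assumption here).
    return math.gamma(p / 2) / (math.sqrt(math.pi) * math.gamma((p - 1) / 2))

def moment(p, ell, m=20000):
    # Midpoint-rule quadrature of E[(X'theta)^ell] under uniformity.
    h = 2.0 / m
    total = 0.0
    for k in range(m):
        s = -1.0 + (k + 0.5) * h
        total += s ** ell * (1.0 - s * s) ** ((p - 3) / 2)
    return cp(p) * h * total

p = 5
e1 = moment(p, 1)  # should vanish by symmetry
e2 = moment(p, 2)  # should equal 1/p
```

For $p=5$ the quadrature recovers $e_1\approx 0$ and $e_2\approx 1/5$, matching the null values $e_{n1}=0$ and $\tilde{e}_{n2}=1/p_n$ used throughout.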
\begin{Theor}
\label{maintheorempower}
Let $(p_n)$ be a sequence of positive integers diverging to~$\infty$ as $n\rightarrow\infty$. Assume that the sequence~$({\rm P}^{(n)}_{F_n})$ is such that, as $n\rightarrow\infty$,
(i)
$\min
\big(
\frac{p_n \tilde{e}_{n2}^2}{f_{n2}^2}
,
\frac{\tilde{e}_{n2}}{ne_{n1}^2}
\big)
=o(1)$,
(ii)
$\tilde{e}_{n4}/\tilde{e}_{n2}^2=o(n)$
and
(iii)
$f_{n4}/f_{n2}^2=o(n)$.
Then, under~${\rm P}^{(n)}_{F_n}$,
$
(R_n^{\rm St}-{\rm E}[{R}_n^{\rm St}])/\sigma_n
=
\frac{\sqrt{2p_n}}{n\sigma_n}
\sum_{1\leq i< j\leq n}
\,
\big(\mathbf{X}_{ni}^{\prime} \mathbf{X}_{nj}-e_{n1}^2\big)
\stackrel{\mathcal{D}}{\to}
\,
\mathcal{N}(0,1)
$
as~$n\to\infty$.
\end{Theor}
This result applies under very mild assumptions that, in particular, impose neither absolute continuity nor any other regularity conditions. The only structural assumptions are the conditions~(i)-(iii) above. These, however, may only be violated for rotationally symmetric distributions that are very far from the null of uniformity (hence, for alternatives under which there is in practice no need for a test of uniformity). Indeed, a necessary --- yet far from sufficient --- condition for (i)-(iii) to be violated is that~$\mathbf{X}_{n1}^{\prime}{\pmb \theta}_n$ converges in probability to some constant~$c(\in[-1,1])$. Moreover, in the FvML case, the conditions~(i)-(iii) \emph{always} hold, that is, they hold without any constraint on the concentration~$\kappa_n$ or on the way the dimension~$p_n$ goes to infinity with~$n$ (the proof of this statement is very lengthy and requires original results on ratios of modified Bessel functions, hence is provided in the supplementary article~\cite{Cut2015}).
Theorem~\ref{maintheorempower} allows us to compute the asymptotic power of the Rayleigh test under appropriate sequences of alternatives. As mentioned above, the null of uniformity~${\cal H}_{0n}$ yields~$e_{n1}=0$ and $\tilde{e}_{n2}=1/p_n$. Here, we therefore consider ``local'' departures from uniformity of the form
$
{\cal H}_{1n}:
\big\{
{\rm P}_{F_n}^{(n)}
:
e_{n1}=0+\nu_n \tau
,
\,
\tilde{e}_{n2}=(1/p_n) + \xi_n \eta
\big\}
\cdot
$
The following result provides the asymptotic power of the high-dimensional Rayleigh test in~(\ref{RayHDtraindelavie}) under sequences of local alternatives that, as we will show, are intimately related to those we considered in Sections~\ref{oraclesec}-\ref{invariantsec} (see Appendix~\ref{appB2} for a proof).
\begin{Theor}
\label{Powerprop}
Let $(p_n)$ be a sequence of positive integers diverging to~$\infty$ as $n\rightarrow\infty$. Let the sequence~$({\rm P}^{(n)}_{F_n})$ satisfy the assumptions of Theorem~\ref{maintheorempower} and be such that
\vspace{-1mm}
\begin{equation}
\label{lalt}
e_{n1}
=
\frac{\tau}{n^{1/2}p_n^{1/4}}
+
o\bigg(\frac{1}{n^{1/2}p_n^{1/4}}\bigg)
\qquad\textrm{and}\qquad
\tilde{e}_{n2}
=
\frac{1}{p_n}
+
o\Big(\frac{1}{p_n}\Big)
,
\end{equation}
for some~$\tau\geq 0$. Then, under~${\rm P}^{(n)}_{F_n}$, the asymptotic power of the high-dimensional Rayleigh test in~(\ref{RayHDtraindelavie}) is given by
$
1
-
\Phi
\big(
z_\alpha- \frac{\tau^2}{\sqrt{2}}
\big).
$
\end{Theor}
In order to link these alternatives to those considered earlier, note that, as~$n\rightarrow\infty$ under~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$, with $\kappa_n=\xi_n \sqrt{p_n/n}$, where the positive sequence~$(\xi_n)$ is~$o(\sqrt{n})$, we have
\begin{eqnarray}
e_{n1}
&\!\!=\!\!&
\Big(\frac{c_{p_n}}{c_{p_n,\kappa_n,f}}\Big)^{-1}
\,
\frac{c_{p_n}}{\kappa_n}
\int_{-1}^1 (1-s^2)^{(p_n-3)/2} \, \kappa_n s f(\kappa_n s)\,ds
\nonumber
\\[2mm]
&\!\!=\!\!&
\bigg(
1+
\frac{\kappa_n^2}{2p_n}f''(0)
+
o\bigg(\frac{\kappa^2_n}{p_n} \bigg)
\bigg)^{-1}
\bigg(
\frac{\kappa_n}{p_n} +
o\bigg(\frac{\kappa_n}{p_n} \bigg)
\bigg)
\label{genexpande1}
\end{eqnarray}
and
\begin{eqnarray}
e_{n2}
\nonumber
&\!\!=\!\!&
\Big(\frac{c_{p_n}}{c_{p_n,\kappa_n,f}}\Big)^{-1}
\,
\frac{c_{p_n}}{\kappa_n^2}
\int_{-1}^1 (1-s^2)^{(p_n-3)/2} \, (\kappa_n s)^2 f(\kappa_n s)\,ds
\\[2mm]
&\!\!=\!\!&
\bigg(
1+
\frac{\kappa_n^2}{2p_n}f''(0)
+
o\bigg(\frac{\kappa^2_n}{p_n} \bigg)
\bigg)^{-1}
\bigg(
\frac{1}{p_n} +
o\bigg(\frac{1}{p_n} \bigg)
\bigg)
\label{genexpande2}
,
\end{eqnarray}
where Lemma~\ref{lemcontig} was used four times.
For the contiguous alternatives in Theorem~\ref{LANtheor}, that is for~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$, with $\kappa_n=\tau_n \sqrt{p_n/n}$ (where~$(\tau_n)$ is bounded),
(\ref{genexpande1})-(\ref{genexpande2}) provide
\begin{equation}
\label{fsh}
e_{n1}
=
\frac{\tau_n}{\sqrt{np_n}}
+
o\bigg(\frac{1}{\sqrt{np_n}}\bigg)
\qquad\textrm{and}\qquad
\tilde{e}_{n2}
=
\frac{1}{p_n}
+
o\bigg(\frac{1}{p_n}\bigg)
.
\end{equation}
Theorem~\ref{Powerprop} implies that the asymptotic power of the high-dimensional Rayleigh test under the alternatives~(\ref{fsh}) is equal to~$\alpha$, which confirms (see Section~\ref{oraclesec}) that this test is blind to contiguous alternatives.
Now, at least if~$p_n=o(n^2)$ (a constraint that is actually superfluous in the FvML case, as can be seen by using the Amos-type bounds provided in Lemma~S.3.2 from~\cite{Cut2015}), the more
\vspace{-1mm}
severe alternatives~${\rm P}^{(n)}_{{\pmb \theta}_n,\kappa_n,f}$, with $\kappa_n=\tau p_n^{3/4}/\sqrt{n}$, from Theorem~\ref{LANinvartheor} translate --- still in view of~(\ref{genexpande1})-(\ref{genexpande2}) --- into those in~(\ref{lalt}). This shows that the asymptotic powers of the high-dimensional Rayleigh test computed in the FvML case via Le Cam's third lemma (see~(\ref{powRay})) actually also hold away from the FvML case. Clearly, this further supports the conjecture from Section~\ref{invarianthighsec} that, under the assumption that~$p_n=o(n^2)$, Theorem~\ref{LANinvartheor} holds for an essentially arbitrary~$f$.
\section{A Monte Carlo study}
\label{simusec}
In this section, we present the results of a Monte Carlo study we conducted to check the validity of our asymptotic results. We performed two simulations. In the first one, we generated independent random samples of the form
\begin{equation}
\label{samples}
\mathbf{X}_{i;j}^{(\ell)},
\quad
i=1, \ldots, n,
\quad
j=1,2,
\quad
\ell=0,1,2,3,4
.
\end{equation}
For~$\ell=0$, the common distribution of the $\mathbf{X}_{i;j}^{(\ell)}$'s is the uniform distribution on the unit sphere~$\mathcal{S}^{p-1}$, while, for~$\ell>0$, the $\mathbf{X}_{i;j}^{(\ell)}$'s have an FvML distribution on~$\mathcal{S}^{p-1}$ with location ${\pmb \theta}=(1, 0, \ldots, 0)^{\prime} \in \mathbb R^p$ and concentration~$\kappa_j^{(\ell)}$, with
$$
\kappa_1^{(\ell)}=0.6\ell \, \sqrt{\frac{p}{n}}
\quad
\textrm{ and }
\quad
\kappa_2^{(\ell)}=0.6\ell\, \frac{p^{3/4}}{\sqrt{n}}
\cdot
$$
In the second simulation, we considered again independent random samples of the form~(\ref{samples}), still with $\mathbf{X}_{i;j}^{(0)}$'s that are uniform over~$\mathcal{S}^{p-1}$. Here, however, the $\mathbf{X}_{i;j}^{(\ell)}$'s, for~$\ell=1,2,3,4$, are rotationally symmetric with location ${\pmb \theta}=(1, 0, \ldots, 0)^{\prime} \in \mathbb R^p$ and are such that the ${\pmb \theta}^{\prime} \mathbf{X}_{i;j}^{(\ell)}$'s are beta-distributed with mean~$e_{1;j}^{(\ell)}$ and variance~$\tilde{e}_{2;j}=1/p$, where we let
$$
e_{1;1}^{(\ell)}
=
\frac{0.6\ell}{\sqrt{np}}
\quad
\textrm{ and }
\quad
e_{1;2}^{(\ell)}
=
\frac{0.6\ell}{n^{1/2}p^{1/4}}
$$
(this beta example is associated with a non-monotonic nuisance~$f$, which is allowed in Section~\ref{Rayleighsec}). In both simulations, the value $\ell=0$ corresponds to the null hypothesis of uniformity, while $\ell=1,2,3,4$ provide increasingly severe alternatives. The case~$j=1$ relates to the contiguous alternatives (see Theorem~\ref{LANtheor}) and the corresponding (more general) alternatives in~(\ref{fsh}), whereas~$j=2$ is associated with the alternatives under which the Rayleigh test shows non-trivial asymptotic powers in the high-dimensional setup (see Theorem~\ref{LANinvartheor} and the alternatives~(\ref{lalt})).
For any~$(n,p)\in C\times C$, with $C:=\{30, 100, 400\}$, any~$j\in\{1,2\}$, and any~$\ell\in\{0,1,2,3,4\}$, we generated $M=2,500$ independent random samples~$\mathbf{X}_{i;j}^{(\ell)}$, $i=1, \ldots, n$, as described above, and evaluated the rejection frequencies of~(i) the \mbox{specified-${\pmb \theta}_n$} test~$\phi_{{\pmb \theta}_n}^{(n)}$ in~(\ref{deforayoup}) and of (ii) the high-dimensional Rayleigh test~$\phi^{(n)}$ in~(\ref{RayHDtraindelavie}), both conducted at nominal level~$5\%$. Rejection frequencies are plotted in Figures~\ref{FigFvML} and~\ref{Figbeta}, for FvML and beta-type alternatives, respectively. In each figure, we also plot the corresponding asymptotic powers, obtained from~(\ref{poworacle}), (\ref{powRay}), Theorem~\ref{Powerprop}, and the fact that~$\phi_{{\pmb \theta}_n}^{(n)}$
is consistent against ($j=2$)-alternatives.
Clearly, for both simulations, rejection frequencies match the corresponding asymptotic powers extremely well, irrespective of the tests and types of alternatives considered (the only possible exception is the test~$\phi_{{\pmb \theta}_n}^{(n)}$ under ($\ell=1,j=2$)-alternatives; this, however, is only a consequence of the lack of continuity of the corresponding asymptotic power curves). Remarkably, this agreement remains reasonably good for moderate sample sizes~$n$ and dimensions~$p$. Beyond validating our asymptotic results, this Monte Carlo study therefore also shows that these results are relevant for practical values of~$n$ and~$p$.
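For readers wishing to reproduce the null part ($\ell=0$) of these experiments, a minimal Python sketch is as follows; uniform samples on $\mathcal{S}^{p-1}$ are generated by normalizing standard Gaussian vectors, and the parameter values are deliberately smaller than those used in our study:

```python
import random, math

random.seed(2)
n, p, M, alpha = 50, 20, 1000, 0.05
z_alpha = 1.6449  # standard normal 95% quantile

def runif_sphere(p):
    """Uniform draw on S^{p-1}: normalize a standard Gaussian vector."""
    g = [random.gauss(0, 1) for _ in range(p)]
    norm = math.sqrt(sum(v * v for v in g))
    return [v / norm for v in g]

rejections = 0
for _ in range(M):
    X = [runif_sphere(p) for _ in range(n)]
    xbar = [sum(row[k] for row in X) / n for k in range(p)]
    Rn = n * p * sum(v * v for v in xbar)
    # High-dimensional Rayleigh test: reject when R_n^St > z_alpha.
    if (Rn - p) / math.sqrt(2 * p) > z_alpha:
        rejections += 1

level = rejections / M  # empirical level; should be close to alpha
```

With these (modest) values of $n$, $p$, and $M$, the empirical rejection frequency under the null is already close to the nominal $5\%$ level, in line with the figures.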
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=1.03\linewidth]{plotFvMLCORRIGE.pdf}
\caption{\small Rejection frequencies (dashed) and asymptotic powers (solid), under the null of uniformity over the $p$-dimensional unit sphere ($\ell=0$) and increasingly severe FvML alternatives ($\ell=1,2,3,4$), of the \mbox{specified-${\pmb \theta}_n$} test~$\phi_{{\pmb \theta}_n}^{(n)}$ in~(\ref{deforayoup}) (red/orange) and the high-dimensional Rayleigh test~$\phi^{(n)}$ in~(\ref{RayHDtraindelavie}) (light/dark green). Light colors (orange and light green) are associated with contiguous alternatives, whereas dark colors (red and dark green) correspond to the more severe alternatives under which the Rayleigh test shows non-trivial asymptotic powers in high dimensions; see Section~\ref{simusec} for details.}
\label{FigFvML}
\end{center}
\end{figure}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=1.03\linewidth]{plotbetaCORRIGE.pdf}
\caption{\small Rejection frequencies (dashed) and asymptotic powers (solid), under the null of uniformity over the $p$-dimensional unit sphere ($\ell=0$) and increasingly severe ``beta" rotationally symmetric alternatives ($\ell=1,2,3,4$), of the \mbox{specified-${\pmb \theta}_n$} test~$\phi_{{\pmb \theta}_n}^{(n)}$ in~(\ref{deforayoup}) (red/orange) and the high-dimensional Rayleigh test~$\phi^{(n)}$ in~(\ref{RayHDtraindelavie}) (light/dark green). Light colors (orange and light green) are associated with contiguous alternatives, whereas dark colors (red and dark green) correspond to the more severe alternatives under which the Rayleigh test shows non-trivial asymptotic powers in high dimensions; see Section~\ref{simusec} for details.}
\label{Figbeta}
\end{center}
\end{figure}
\section{An application}
\label{realsec}
Since the seminal paper \cite{LedWol2002}, one of the most widely considered testing problems in high-dimensional statistics is the problem of testing for sphericity. A possible approach to test for sphericity about a specified centre (without loss of generality, the origin of~$\mathbb R^p$) is to perform a test of uniformity on the sphere~$\mathcal{S}^{p-1}$ on ``spatial signs", that is, on the observations projected on~$\mathcal{S}^{p-1}$; see, among others, \cite{Caietal2013}, where this is used in a possibly high-dimensional setup, and \cite{cueetal2009}, where it is argued that ``\emph{in most practical cases the violations of sphericity will arise from the non-fulfillment of uniformity on the unit sphere for projected data}". This is particularly true in the high-dimensional case, since the concentration-of-measure phenomenon there implies that information lies much more in the directions of the observations from the origin than in their distances from the origin (incidentally, note that \cite{JuaPri2001} also invoked the same argument to adopt a directional approach for outlier detection in high dimensions).
As shown in the previous sections, the high-dimensional Rayleigh test
will show power against \emph{skewed} rotationally symmetric distributions on the sphere (skewness arises from the monotonicity of the corresponding nuisance~$f$).
On the contrary, the Rayleigh test will be blind to any non-spherical distribution in~$\mathbb R^p$ whose projection on the sphere charges antipodal regions equally. In particular, it will show no power against elliptical alternatives, hence also against spiked alternatives (that is, against alternatives associated with scatter matrices of the form ${\pmb \Sigma}=\sigma ({\bf I}_p+ \lambda {\pmb \beta} {\pmb \beta}^{\prime})$, with $\sigma,\lambda>0$ and ${\pmb \beta}\in\mathcal{S}^{p-1}$). Interestingly, most (if not all) tests for sphericity in high dimensions are designed to detect elliptical or spiked alternatives. This is the case, e.g., both for the Gaussian sphericity test ($\phi_{\mathcal{N}}^{(n)}$, say) from \cite{Joh1972} and for the sign test of sphericity ($\phi_S^{(n)}$, say) from~\cite{HP06} (these tests were shown to be valid in high-dimensions in \cite{LedWol2002} and~\cite{Zouetal2013}/\cite{PaiVer2015}, respectively).
In line with this, theoretical efforts have so far focused on spiked alternatives; see, e.g., \cite{Onaetal2013,Onaetal2014}, where powers of various tests of sphericity under high-dimensional spiked alternatives were investigated.
To illustrate these antagonistic power behaviours, we performed the following simulation exercise involving the Gaussian sphericity test~$\phi_{\mathcal{N}}^{(n)}$ and the sign sphericity test~$\phi_S^{(n)}$ (both in their version to test for sphericity about the origin of~$\mathbb R^p$), as well as the high-dimensional Rayleigh test~$\phi_R^{(n)}$.
For~$\ell=0,1,2,3,4$ and~$n=p=100$, we generated 10,000 independent $p$-dimensional samples~$\mathbf{X}_{i; \ell}^{(1)}$, $i=1, \ldots, n$, and~$\mathbf{X}_{i; \ell}^{(2)}$, $i=1, \ldots, n$, from two different alternatives to sphericity:
\begin{itemize}
\item[(i)] $\mathbf{X}_{i; \ell}^{(1)}$, $i=1, \ldots, n$ form a random sample from the $p$-variate skew-normal distribution with location vector~${\bf 0}$, scatter matrix~${\bf I}_p$ and skewness vector~$(\ell, \ldots, \ell)'\in\mathbb R^p$; see \cite{AzzCap1999};
\item[(ii)] $\mathbf{X}_{i; \ell}^{(2)}$, $i=1, \ldots, n$ form a random sample from the $p$-variate normal distribution with mean~${\bf 0}$ and covariance matrix~${\bf I}_p+ \ell {\bf e}_1{\bf e}_1^{\prime}$, with~${\bf e}_1=(1,0,\ldots,0)'\in\mathbb R^p$.
\end{itemize}
For both~(i)-(ii), $\ell=0$ is associated with the null of sphericity about the origin of~$\mathbb R^p$, whereas $\ell=1,2,3,4$ provide increasingly severe alternatives. Figure \ref{powerss} plots the resulting empirical powers of the three tests mentioned above, all performed at nominal level~$5\%$. Results confirm that the Rayleigh test~$\phi_R^{(n)}$ performs quite well under alternatives of type~(i) but shows no power against alternatives of type~(ii), whereas the tests~$\phi_{\mathcal{N}}^{(n)}$ and~$\phi_S^{(n)}$ do the exact opposite. In practice, thus, as soon as the Rayleigh test and more standard tests of sphericity lead to opposite rejection decisions, practitioners are offered some insight on what type of deviation from sphericity they are likely to be facing.
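The blindness of the Rayleigh test to spiked alternatives can also be checked directly: the spatial signs of spiked Gaussian data are antipodally symmetric, so a Rayleigh-type statistic stays near its null level. The sketch below is a hedged illustration (the standardized pairwise-sum statistic is a stand-in for~$\phi_R^{(n)}$, and the spike strength, seed, and replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(202)

def rayleigh_hd(X):
    # standardized sum_{i<j} X_i'X_j; null mean 0, variance n(n-1)/(2p)
    n, p = X.shape
    T = X.sum(axis=0)
    return ((T @ T - n) / 2.0) / np.sqrt(n * (n - 1) / (2.0 * p))

n, p, M = 100, 100, 1000
z95 = 1.6449
rejections = 0
for _ in range(M):
    Y = rng.standard_normal((n, p))
    Y[:, 0] *= np.sqrt(5.0)        # spiked covariance I_p + 4 e_1 e_1'
    X = Y / np.linalg.norm(Y, axis=1, keepdims=True)   # spatial signs
    rejections += int(rayleigh_hd(X) > z95)
freq_spike = rejections / M
```

Despite the strong spike, the rejection frequency remains close to the nominal level, which is the expected blindness.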
\begin{figure}[!htbp]
\includegraphics[width=\linewidth, height=73mm]{powerboth.pdf}
\caption{ \small (Left:) Rejection frequencies, under the null of sphericity in~$\mathbb R^p$ ($\ell=0$) and increasingly severe skew-normal
\vspace{-.5mm}
alternatives ($\ell=1,2,3,4$), of the Rayleigh sphericity test~$\phi_R^{(n)}$ (green), along with the more classical
sphericity tests~$\phi_{S}^{(n)}$ (orange) and~$\phi_{\mathcal{N}}^{(n)}$ (red).
(Right:) The corresponding rejection frequencies under some $p$-variate spiked alternatives. In both cases, the dimension~$p$ and the sample size~$n$ are equal to~$100$, the nominal level is~$5\%$, and the number of replications is~$10,000$;
see Section~\ref{realsec} for details.
}
\label{powerss}
\end{figure}
We illustrate this on a real data example. We considered the gene expression dataset analyzed in \cite{Eisetal1998}; more precisely, we restricted to a subsample of~100 ribosomal proteins from this dataset. The data then take the form of a matrix ${\bf X}=(X_{ij})$, where~$X_{ij}$ is the $j$th expression value ($j=1, \ldots, p=79$) of the $i$th gene ($i=1,\ldots,n=100$) (even though~$n>p$, the present data may be considered high-dimensional since the small value of~$n/p$ prevents relying on fixed-$p$ asymptotic results). The rows of~$\mathbf{X}$ (the ``expression vectors") are obtained from DNA microarray experiments. After imputing missing data (by replacing any missing entry in~$\mathbf{X}$ with the sample average of available measurements on the same variable) and centering the observations via the sample average, we performed the Rayleigh test~$\phi_R^{(n)}$ and its competitors~$\phi_{\mathcal{N}}^{(n)}$ and~$\phi_S^{(n)}$. While centering still leaves some space for rejection by~$\phi_R^{(n)}$ (that is indeed based on the sample average of \emph{projected} observations), the Rayleigh test interestingly provides a $p$-value above~$.9999$,
whereas those of~$\phi_{\mathcal{N}}^{(n)}$ and~$\phi_S^{(n)}$ are below~$.0001$.
Hence, at any usual nominal level, the null of sphericity is rejected, and the outcomes of the various tests suggest that the deviation from sphericity is of an elliptical, or at least of a centrally symmetric, nature.
This may be useful to guide further modelling of this gene expression dataset.
\section{Conclusions and perspectives}
\label{conclusec}
In this final section, we summarize the results of the paper and present perspectives for future research.
\subsection{Summary}
We considered the problem of testing uniformity on the unit sphere in the low- and high-dimensional setups. Rotationally symmetric alternatives with modal location~${\pmb \theta}_n$, concentration~$\kappa_n$ and functional parameter~$f$ were considered. We showed that~$\kappa_n\sim \sqrt{p_n/n}$ provides contiguous alternatives. For specified~${\pmb \theta}_n$, a local asymptotic normality result was established (at the aforementioned contiguity rate), which allowed, both in low and high dimensions, to define locally asymptotically most powerful tests for the \mbox{specified-${\pmb \theta}_n$} problem.
In practice, however, ${\pmb \theta}_n$ may rarely be assumed to be known. In the corresponding \mbox{un\mbox{specified-${\pmb \theta}_n$}} problem, we showed that the Rayleigh test enjoys nice asymptotic optimality properties, both in the low- and high-dimensional cases. In low dimensions, it is locally asymptotically maximin, irrespective of~$f$. In high dimensions, it is locally asymptotically most powerful invariant in the FvML case, and a conjecture --- that is strongly supported by a fourth-order expansion of the relevant $f$-based local log-likelihood ratio and by the computation of asymptotic powers in Section~\ref{Rayleighsec} --- states that, provided that~$p_n=o(n^2)$, this optimality holds for any~$f$ that is five times differentiable at~$0$.
Our results fully characterize the cost of the possible unspecification of~${\pmb \theta}_n$. In low dimensions, this cost is in terms of asymptotic powers but not in terms of rate. In high dimensions, however, there is a cost in terms of rate, as optimal tests cannot detect the contiguous alternatives in~$\kappa_n\sim \sqrt{p_n/n}$, but only the more severe alternatives in~$\kappa_n\sim p_n^{3/4}/\sqrt{n}$. Simulation results are in remarkable agreement with our asymptotic results, irrespective of the relative magnitude of~$n$ and~$p$ --- which materializes the robustness of most of our results with respect to the rate at which~$p_n$ goes to infinity with~$n$. A real data example illustrated the usefulness of the high-dimensional Rayleigh test in the framework of testing for sphericity.
\subsection{Perspective for future research}
In the distributional framework described in Section~\ref{contiguitysec}, the problem of testing uniformity consists in testing the null hypothesis that the concentration parameter~$\kappa_n$ is equal to zero. Depending on the information at hand, the other parameters, namely the modal location~${\pmb \theta}_n$ and the infinite-dimensional parameter~$f$, may be regarded as specified or unspecified. If~$f$ is specified, then the problem is of a parametric nature and optimality quite naturally relates to the local asymptotic normality of the corresponding fixed-$f$ submodel (both the specified- and unspecified-${\pmb \theta}_n$ parametric problems can be considered). We showed that, for any sufficiently smooth~$f$ in a neighbourhood of the origin, the test in~(\ref{deforayoup}) and the Rayleigh test achieve the $f$-parametric efficiency bounds in the specified- and unspecified-${\pmb \theta}_n$ problems, respectively.
Since it can hardly be assumed in practice that~$f$ is known, it is more natural to adopt a semiparametric point of view under which~$f$ remains unspecified. The optimality results stated in this paper should then be read in a semiparametric sense, under unspecified~$f$ in the specified-${\pmb \theta}_n$ problem, and under unspecified~$({\pmb \theta}_n,f)$ in the unspecified-${\pmb \theta}_n$ one. In all cases, such results are pointwise in~(${\pmb \theta}_n,f$) and relate to the corresponding semiparametric efficiency bounds at~(${\pmb \theta}_n,f$); see, e.g., \cite{Bic1998}. In the present setup, there is no need to go through tangent space calculations to derive the resulting semiparametrically optimal tests; indeed, since the test in~(\ref{deforayoup}) and the Rayleigh test are parametrically optimal at any smooth~$f$ (for the specified- and unspecified-${\pmb \theta}_n$ problems, respectively), they also are semiparametrically optimal at such~$f$. Another corollary of our results is that semiparametric efficiency bounds at~(${\pmb \theta}_n,f$) do not depend on~$({\pmb \theta}_n,f)$ but differ in the specified- and unspecified-${\pmb \theta}_n$ problems.
Now, the problem of testing uniformity over the unit sphere is primarily of a \emph{nonparametric} nature. Even if the distributional framework described in Section~\ref{contiguitysec} is considered, it is therefore valid to adopt a nonparametric point of view and to try to identify, e.g., \emph{minimax separation rates}; see, e.g., \cite{Ing2000} or, in a directional context, \cite{Fayetal2013}, \cite{TM14} and \cite{TM15}. This approach is fundamentally different from the semiparametric one adopted in this work. In particular, instead of providing \emph{pointwise} results in~$({\pmb \theta}_n,f)$, this approach aims at identifying consistency rates that are associated with the \emph{worst-in-}$f$ (resp., \emph{worst-in-}(${\pmb \theta}_n,f)$) performances that can be achieved in the specified-${\pmb \theta}_n$ (resp., \mbox{unspecified-${\pmb \theta}_n$}) problem. This fundamental difference between the semiparametric and nonparametric approaches above does not make it possible to translate our results in terms of minimax separation rates. Nevertheless, preliminary results indicate that, at least for the specified-${\pmb \theta}_n$ problem, the consistency rates described in this paper are also minimax separation rates and that the test~$\phi_{{\pmb \theta}_n}^{(n)}$ is ``rate-optimal in the minimax sense" (obviously, it would be natural to consider further the unspecified-${\pmb \theta}_n$ problem). These results, however, require much work and rely on other techniques than those considered in the present paper, hence will be presented elsewhere.
% Source: "Testing uniformity on high-dimensional spheres against monotone rotationally symmetric alternatives", arXiv:1502.02120 (https://arxiv.org/abs/1502.02120).
% Source: https://arxiv.org/abs/1403.5186
\title{The spectral Phase-Amplitude representation of a wave function revisited}
\begin{abstract}
The phase and amplitude (Ph-A) of a wave function vary slowly and monotonically with distance, in contrast to the wave function itself, which can be highly oscillatory. An attractive feature of the Ph-A representation is therefore that it requires far fewer mesh points than the wave function itself. In 1930 Milne developed an equation for the phase and amplitude functions (W. E. Milne, Phys. Rev. 35, 863 (1930)), and in 1962 Seaton and Peach (M. J. Seaton and G. Peach, Proc. Phys. Soc. 79, 1296 (1962)) developed an iterative method for solving Milne's Ph-A equations. Since the zeroth-order term of the iteration is identical to the WKB approximation, there is a close relationship between the Ph-A and WKB representations of a wave function. The objective of the present study is to show that a spectral Chebyshev expansion method for solving Seaton and Peach's iteration scheme is feasible and requires very few mesh points for the whole radial interval. Hence this method provides an economical and accurate way to calculate wave functions out to large distances. In a numerical example for which the potential decreases slowly with distance as $1/r^{3}$, the whole radial range $[0,2000]$ was covered with 301 mesh points (and Chebyshev basis functions). The first-order iteration of the Ph-A wave function was found to have an accuracy better than $1\%$, and was always more accurate than the WKB wave function.
\end{abstract}
\section{Introduction}
When the Phase-Amplitude (Ph-A) method was first introduced by Milne in 1930
\cite{MILNE}, and then taken up by many authors (see Ref. \cite{KORSCH}), the
main motivation was the paucity of numerical mesh points required, compared to
the calculation of the wave function itself. This is because both phase and
amplitude functions are monotonic and slowly varying, as opposed to the wave
function itself that can be highly oscillatory. This point was verified by
many authors, in particular by Calogero and Ravenhall \cite{RAVEN} who state
that the solution for the phase is more stable than the solution of the wave
function. An additional argument in favor of the (Ph-A) representation is that
it lends itself to analytic expressions to address particular problems. For
example, the Ph-A representation facilitates the incorporation of the effect
of long range potentials \cite{ROBICH}, \cite{DEHMER}\ or the calculation of
resonances \cite{KORSCH}. It is also helpful in the quantum defect calculation
of atomic wave functions \cite{GREENE}, the calculation of Gaunt Factors
\cite{WIM}, as well as the description of an electron with an ion embedded in
a plasma \cite{RITCHIE}, among others. The Ph-A description of a carrier wave
in radio or television also plays a significant r\^{o}le in the
compactification of the signal transmission in the field of Information
Technology. An additional advantage of the Ph-A representation is that it
provides a method to improve the WKB approximation of a wave function, an
important point since the WKB approximation \cite{WKB} has led, over the
years, to a much improved understanding of the solutions of the Schr\"{o}dinger equation.
The Ph-A representation consists in writing a wave function $\psi(r)$ in the
form
\begin{equation}
\psi(r)=y(r)\sin[\phi(r)], \label{y sin}
\end{equation}
where $y$ is the amplitude and $\phi$ is the phase, and $r$ the distance from
the origin. If an overlap matrix element
\begin{equation}
M=\int_{0}^{\infty}\psi_{1}(r)U(r)\psi_{2}(r)\,dr \label{M12}
\end{equation}
between two wave functions is required, then in the finite difference method
of obtaining integrals, both $\psi_{1}$ and $\psi_{2}$ have to be calculated
on a sufficiently fine mesh, which can be time consuming and prone to errors.
However, the Ph-A representation can provide an estimate of $M$ by decomposing
the integrand of the overlap matrix element into a slowly oscillating (S)
and a fast oscillating (F) part,
\begin{equation}
M=M^{(S)}-M^{(F)}. \label{MFMS}
\end{equation}
The decomposition makes use of a trigonometric identity for the product of two
sine functions, with the result
\begin{equation}
M^{(F,S)}=\frac{1}{2}\int_{0}^{\infty}y_{1}(r)U(r)y_{2}(r)\,[\cos(\phi_{1}\pm\phi_{2})]\,dr. \label{IF IS}
\end{equation}
The matrix element $M^{(S)}$ can be calculated on a small set of radial mesh
points, since its integrand oscillates slowly. Further, since $M^{(F)}\ll M^{(S)}$, a rough estimate for $M$ is provided by $M^{(S)}$ alone. Here
$U(r)$ is an overlap function that depends on the physics application envisaged.
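A quick numerical illustration of the decomposition (\ref{MFMS})--(\ref{IF IS}), with hypothetical ingredients (not from the paper): constant amplitudes $y_{1}=y_{2}=1$, linear phases $\phi_{1}=3r$, $\phi_{2}=2r$, and $U(r)=e^{-r}$:

```python
import numpy as np

r = np.linspace(0.0, 40.0, 400_001)    # e^{-40} ~ 4e-18, so [0, 40] suffices
h = r[1] - r[0]
trap = lambda f: (f[1:] + f[:-1]).sum() * h / 2.0   # trapezoidal rule

U = np.exp(-r)                         # hypothetical overlap function
phi1, phi2 = 3.0 * r, 2.0 * r          # hypothetical linear phases, y1 = y2 = 1

M_direct = trap(U * np.sin(phi1) * np.sin(phi2))
M_S = 0.5 * trap(U * np.cos(phi1 - phi2))   # slow part of Eq. (IF IS)
M_F = 0.5 * trap(U * np.cos(phi1 + phi2))   # fast part of Eq. (IF IS)
```

With these choices the integrals are elementary, $M=M^{(S)}-M^{(F)}=\tfrac14-\tfrac1{52}=3/13$, and the decomposition reproduces the direct integral to quadrature accuracy.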
In 1962 Seaton and Peach \cite{Seaton} presented an iterative scheme to solve
Milne's non-linear differential equation \cite{MILNE} for the amplitude and
phase. It is the purpose of the present work to implement this iterative
method by means of a spectral \cite{SPECTRAL} expansion of the amplitude in
terms of Chebyshev polynomials. A further purpose is to examine the accuracy
of the resulting Ph-A wave function by comparison with the direct solution of
the Schr\"{o}dinger equation for the wave function, the latter also obtained
by an accurate spectral integral equation method \cite{CHEB}, denoted as $IEM$
in what follows. The combination of both objectives has not been presented
previously. The great advantage of a spectral expansion is that the
calculations utilize all the support points located in a given partition
simultaneously, with the result that the errors are shared uniformly across
the partition in the case of Chebyshev expansions \cite{BOYD}. For the present
numerical examples the calculation is done in one single large radial partition,
extending from $r=0$ to $r=2000$, containing $201$ Chebyshev support points.
By contrast, other algorithms (such as finite elements, finite differences, or
the $IEM$ method described below) have to divide such a large radial interval
into a number of partitions, with the result that the error from one partition
is propagated into the adjoining one, the last partition having the largest
error \cite{POWER}. In addition, for calculations that require the storage of
many wave functions with high precision \cite{DEREV}, the use of the Ph-A
representation can be very advantageous because the amount of storage required
can be substantially smaller than what is needed for other algorithms.
In section II the iterative method is explained, section III contains details
of the computational spectral method, section IV presents the results,
including error estimates and suggestions for improvements, and finally the
Summary and Conclusions are presented in section V.
\section{Iterative solution of Milne's Phase-Amplitude equation.}
Milne \cite{MILNE} and others have derived a nonlinear equation for the
amplitude $y$ and phase $\phi$ of a partial wave function $\psi$, which is
\begin{equation}
d^{2}y/dr^{2}+k^{2}y=V_{T}\,y+\frac{k^{2}}{y^{3}}, \label{NL_SCHR}
\end{equation}
where the total potential $V_{T}$ is
\begin{equation}
V_{T}=L(L+1)/r^{2}+V(r). \label{Vext}
\end{equation}
Here $V(r)$ is the atomic or nuclear potential (including the Coulomb
potential), $L$ is the orbital angular momentum quantum number, and the
nonlinearity is given by the last term in Eq. (\ref{NL_SCHR}). The phase
$\phi(r)$ is obtained from the amplitude $y(r)$ according to \cite{MILNE}
\begin{equation}
\phi(r)=\phi(r_{0})+k\int_{r_{0}}^{r}[y(r^{\prime})]^{-2}\,dr^{\prime},
\label{phase}
\end{equation}
but it can also be obtained without knowledge of $y$ \cite{WIM}. Eq.
(\ref{NL_SCHR}) has been solved non-iteratively in the past by using some form
of a finite difference computational method, such as one of Milne's
predictor-corrector methods \cite{AS}, or \cite{RITCHIE} by a Bulirsch-Stoer
limit method \cite{Burl-Stoer}, none of which will be used in the present study.
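As a consistency check on the Ph-A representation itself, one can verify numerically that if $V_{T}$ is chosen so that $y$ satisfies Milne's equation (\ref{NL_SCHR}), then $\psi=y\sin\phi$ with $\phi$ from Eq. (\ref{phase}) satisfies the Schr\"{o}dinger equation $\psi^{\prime\prime}+(k^{2}-V_{T})\psi=0$. The sketch below is illustrative only: the amplitude $y=\sqrt{1+r^{2}}$ and $k=2$ are hypothetical choices for which the phase integral is elementary ($\phi=k\arctan r$):

```python
import numpy as np

k = 2.0
r = np.linspace(0.0, 10.0, 100_001)
h = r[1] - r[0]

y = np.sqrt(1.0 + r**2)                # hypothetical smooth amplitude
phi = k * np.arctan(r)                 # Eq. (phase): phi' = k / y^2
ypp = (1.0 + r**2) ** -1.5             # y'' for this y, computed analytically
V_T = ypp / y + k**2 - k**2 / y**4     # chosen so y solves Milne's Eq. (NL_SCHR)

psi = y * np.sin(phi)
psi_pp = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / h**2   # centered 2nd difference
residual = psi_pp + (k**2 - V_T[1:-1]) * psi[1:-1]       # Schrodinger residual
max_res = np.abs(residual).max()
```

The residual is zero up to finite-difference and rounding error, which confirms the equivalence of the two representations.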
The iterative method of Seaton and Peach \cite{Seaton} consists in rewriting
Eq. (\ref{NL_SCHR}) in the form
\begin{equation}
\frac{k^{2}}{y^{4}}=w+\frac{1}{y}\frac{d^{2}y}{dr^{2}}, \label{Seaton1}
\end{equation}
where
\begin{equation}
w(r)=k^{2}-V_{T}, \label{w}
\end{equation}
and calculating the solution of Eq. (\ref{Seaton1}) by means of the iteration
\cite{Seaton}
\begin{equation}
\frac{k}{y_{n+1}^{2}}=\left[w+\frac{1}{y_{n}}\frac{d^{2}y_{n}}{dr^{2}}\right]^{1/2},\qquad n=0,1,2,\ldots \label{Siter}
\end{equation}
Here $n$ denotes the order of the iteration, and the initial value of $y$ is
given by the WKB approximation \cite{WKB},
\begin{equation}
\frac{k}{y_{0}^{2}}=w^{1/2}. \label{WKB}
\end{equation}
The advantage of formulating the iteration according to Eq. (\ref{Siter}) is
that $y$ varies slowly and monotonically with $r$ at large distances, and
hence $(1/y_{n})\,d^{2}y_{n}/dr^{2}$ is small compared to $w$. Near the origin
this term may become large, but a numerical solution of Eq.
(\ref{Siter}) still converges very well according to Ref. \cite{Seaton}. At
large distances, where $w(r)\rightarrow k^{2}$, the amplitude $y$ automatically
approaches unity. Eq. (\ref{phase}) combined with the zeroth-order result
(\ref{WKB}) is equivalent to the WKB approximation, and hence the iteration
scheme (\ref{Siter}) provides a method to iteratively improve the WKB approximation.
\section{Computational method}
The spectral computational method consists in expanding the function $y$ in a
series of $N+1$ Chebyshev polynomials $T_{s}(x)$, $s=0,1,\ldots,N$,
\begin{equation}
y(x)=\sum_{s=0}^{N}a_{s}T_{s}(x). \label{aT}
\end{equation}
That expansion is inserted into Eq. (\ref{Siter}), and the corresponding
coefficients $a_{s}$ are obtained by solving a matrix equation \cite{CHEB},
\cite{CiSE}. The driving term of this equation is the known right-hand side
of Eq. (\ref{Siter}), which is also expanded in terms of Chebyshev
polynomials. Since the Chebyshev polynomials are defined on the interval
$-1\leq x\leq1$, the quantities defined on the radial interval $0\leq r\leq
r_{\max}$ are mapped into the $x$-variable by a linear transformation.
In the spectral method, the $x$-mesh points are the $N+1$ zeros of
$T_{N+1}$. The expansion cutoff value $N$ is set arbitrarily, but once chosen,
it determines the location and number of support points on the $x$-axis.
Extensive use is made of the Clenshaw-Curtis matrix method (CC) \cite{CC} that
relates the values of a function evaluated at the $N+1$ mesh-points to the
expansion coefficients $a_{s}$ of that function, and vice-versa, by a simple
known matrix \cite{CHEB} relation.
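A minimal sketch of this map between mesh values and Chebyshev coefficients (an illustration of the idea, not the paper's matrices): on the mesh of zeros of $T_{N+1}$, the discrete orthogonality of $\cos(s\theta_j)$ gives the coefficients directly, and NumPy's Chebyshev evaluation provides the inverse map.

```python
import numpy as np

N = 30
j = np.arange(N + 1)
x = np.cos(np.pi * (j + 0.5) / (N + 1))        # the N+1 zeros of T_{N+1}

f = np.exp(x)                                   # sample values on the mesh

# values -> coefficients a_s, using discrete orthogonality at the Gauss nodes
s = np.arange(N + 1)
C = np.cos(np.pi * np.outer(s, j + 0.5) / (N + 1))
a = (2.0 / (N + 1)) * C @ f
a[0] *= 0.5

f_back = np.polynomial.chebyshev.chebval(x, a)  # coefficients -> values
```

The round trip is exact up to rounding, and for a smooth function such as $e^{x}$ the coefficients decay rapidly with $s$.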
The second-order derivative of $y_{n}$ is obtained by replacing the $T_{s}$
in Eq. (\ref{aT}) by their respective second derivatives, keeping the
coefficients $a_{s}$ unchanged:
\begin{equation}
\frac{d^{2}y}{dr^{2}}=\sum_{s=0}^{N}a_{s}\frac{d^{2}T_{s}(x)}{dx^{2}}\left(\frac{dx}{dr}\right)^{2}. \label{D2y}
\end{equation}
By using the expression $T_{s}(x)=\cos(s\theta)$, $s=0,1,\ldots,N$, in terms of
$\theta$, where $x=\cos(\theta)$, one obtains after some trigonometric
transformations
\begin{equation}
\frac{d^{2}T_{s}(x)}{dx^{2}}=\frac{s}{\sin^{2}(\theta)}\left[\frac{\sin((s-1)\theta)}{\sin(\theta)}-(s-1)\cos(s\theta)\right],\qquad s=0,1,2,\ldots \label{D2T}
\end{equation}
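As a numerical sanity check (not part of the paper's code), the trigonometric expression above for $d^{2}T_{s}/dx^{2}$ can be compared against NumPy's Chebyshev differentiation:

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

theta = np.linspace(0.05, np.pi - 0.05, 401)   # keep sin(theta) away from zero
x = np.cos(theta)

max_err = 0.0
for s in range(2, 17):
    closed = s / np.sin(theta)**2 * (
        np.sin((s - 1) * theta) / np.sin(theta) - (s - 1) * np.cos(s * theta))
    c = np.zeros(s + 1)
    c[s] = 1.0                                  # T_s in the Chebyshev basis
    exact = Ch.chebval(x, Ch.chebder(c, 2))     # d^2 T_s / dx^2
    max_err = max(max_err, np.abs(closed - exact).max())
```

The two expressions agree to rounding accuracy for all orders tested, including $s=16$, where the derivative already reaches the $10^{4}$ scale near $x=\pm1$.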
In order to obtain these derivatives in $r-$space, it is sufficient to use
$dx/dr=2/(b_{2}-b_{1})$, where $b_{2}$ and $b_{1}$ are the right and left
extrema of the radial interval. However the calculation of the second order
derivative in Eq. (\ref{Siter}) introduces errors \cite{CiSE}, and these
errors increase as $N$ is made larger. This feature is the major source of
error in the present procedure, since the derivatives of Chebyshev polynomials
increase substantially with the order $s$ of the polynomial, and may overcome
the decrease with $s$ of the coefficients $a_{s}$. For example, for $s=16$ and
$x=-1$, $d^{2}T_{s}(x)/dx^{2}\simeq2\times10^{4}$. For this reason, a balance
between the desired accuracy, which increases with $N$, and the error in the
second-order derivative of $T_{s}$ has to be achieved. In order to overcome
this difficulty, the function $y$ is approximated by an
analytical function $y_{A}$ plus a remainder function $\Delta y$,
\begin{equation}
y_{n}=y_{A}+\Delta y_{n}, \label{deltay}
\end{equation}
with
\begin{equation}
y_{A}(r)=1-\exp[(r-r_{S})/\alpha]. \label{ANAL}
\end{equation}
The second order derivative of $y_{A}$ is obtained analytically, and the
second order derivative of $\Delta y$ is obtained by using Eq. (\ref{D2T}).
The decrease of the expansion coefficients of $\Delta y$ relative to those of $y$ is
illustrated in Fig. \ref{absay} for the case in which $y$ has the WKB value, as
discussed below. The figure shows that the values for the expansion of
$\Delta y$ are smaller by two orders of magnitude than the coefficients for
$y$ at small values of the index $s$, and remain small. This feature permits
one to evaluate the second-order derivative of $\Delta y$ by using Eq.
(\ref{D2y}) without undue loss of accuracy, while the same would not have been
true for the second-order derivative of $y$. The values of the parameters
$r_{S}$ and $\alpha$ in Eq. (\ref{ANAL}) are listed in Table \ref{TABLE1}.
\begin{table}[tbp] \centering
\begin{tabular}
[c]{|l||l|l|}\hline
$\ \ k$ & $\ r_{S}$ & $\ \alpha$\\ \hline \hline
$0.005$ & $500$ & $5$\\ \hline
$0.01$ & $250$ & $5$\\ \hline
$0.1$ & $60$ & $10$\\ \hline
\end{tabular}
\caption{The values of the parameters in Eq. (\ref{ANAL}).}\label{TABLE1}
\end{table}
\begin{figure}[ptb]
\begin{center}
\includegraphics[height=2.4163in,width=3.2145in]{ay_k0005.eps}
\caption{The absolute values of the Chebyshev expansion coefficients as a function
of the Chebyshev index $s$ for $y_{WKB}$, $\Delta y$, and $y_{A}$.}
\label{absay}
\end{center}
\end{figure}
The integral in Eq. (\ref{phase}) required to calculate the phase $\phi$ is
performed by a Gauss-Chebyshev method \cite{CHEB}, \cite{CiSE} that is well
suited to this type of spectral expansion since it only requires the values of
the expansion coefficients $a_{s}$. Situations that involve imaginary local
wave numbers and the respective turning points, as is the case in the presence
of repulsive barriers, are postponed to a future study.
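For reference, the basic Gauss--Chebyshev rule can be sketched as follows. This is an illustration of the quadrature family only, not the coefficient-based integration of \cite{CHEB}; the integrand $e^{x}$ is a hypothetical test case whose weighted integral is $\pi I_{0}(1)$:

```python
import numpy as np

n = 40
j = np.arange(n)
x = np.cos(np.pi * (j + 0.5) / n)     # zeros of T_n: the Gauss-Chebyshev nodes

# int_{-1}^{1} g(x)/sqrt(1-x^2) dx  ~=  (pi/n) * sum_j g(x_j)
g = np.exp(x)
approx = (np.pi / n) * g.sum()

# exact value: int_{-1}^{1} e^x / sqrt(1-x^2) dx = pi * I_0(1)
exact = np.pi * 1.2660658777520084    # I_0(1), modified Bessel function
```

For smooth integrands the rule converges spectrally, which is why so few nodes suffice.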
The calculations are done with MATLAB on a desk PC using an Intel TM2 Quad,
with a CPU Q 9950, a frequency of 2.83 GHz, and a RAM of 8 GB. The
calculation typically uses $N+1=31$ Chebyshev polynomials for the calculation
of $y$. The computing time for the iterative spectral part of the calculation,
compared with the IEM calculation, both carried out in the whole radial
interval $[0,2500]$ is given in Table \ref{TABLE2}. The computing time for the
Ph-A iterations depends only on the number of Chebyshev functions $N+1$,
regardless of the size of the radial interval, and depends weakly on the value
of $k.$ \ For $N=200$, and performing one iteration, the calculation requires
approximately $0.18\ s$. That does not include the time to interpolate the
results to a fine equidistant radial mesh. Interpolating $y$ and $\phi$ to an
equi-spaced radial mesh of step length $h=0.1$ depends on the size of the
radial interval. For the radial interval $[0,40]$ the fine mesh interpolation
requires $0.8\ s,$ and for the radial interval $[40,2000]$ the interpolation
takes $170\ s$ to $180\ s$. However, the calculation of the slowly oscillating
part $M^{(S)}$ of an overlap matrix element (\ref{IF IS}) can be done by using
the Gauss-Chebyshev integration method \cite{CiSE}, which does not require the
interpolation to an equi-spaced radial mesh, and is expected to take
approximately $0.30\ s$ for obtaining both of the two wave functions and also
$M^{(S)}.$
\begin{table}[tbp] \centering
\begin{tabular}
[c]{|l||l|l|}\hline
$k$ & Ph-A (s) & $IEM$ (s)\\ \hline \hline
$0.01$ & $0.18$ & $0.20$\\ \hline
$0.1$ & $0.18$ & $0.29$\\ \hline
\end{tabular}
\caption{Computation times, as explained in the text.}\label{TABLE2}
\end{table}
\section{Results}
The feasibility of the present approach will be demonstrated by means of an
example, for which the potential $V_{T}$ is everywhere attractive and has a
long range tail proportional to $r^{-3}$. Three wave numbers are used,
$k=0.1,\ 0.01,$ and $0.005$, the radial region extends from $r=0$ to $r=2000$,
and the orbital angular momentum is $L=0$. In Eq. (\ref{NL_SCHR}) the potential
and the energy $k^{2}$ have already been divided by the factor
$\hslash^{2}/2m$, so that both are given in units of inverse length squared. The
unit of distance can be either $fm$ for nuclear physics applications, or the
Bohr radius $a_{0}$ for atomic physics applications, but will not be
explicitly indicated.
The potential is the sum of a Woods-Saxon form, Eq. (\ref{C6}), to which is
added an $r^{-3}$ tail, whose singularity at the origin is smoothly removed by
an analytic mapping procedure, Eqs. (\ref{C7})--(\ref{C8}).
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=2.0211in,width=2.687in]{potV_3.eps}
\caption{(Color online) The solid line illustrates the potential used for the
numerical examples. The units are in inverse length squared, since the
potential, in energy units, has been multiplied by the factor $2m/\hbar^{2}$.
The dashed line indicates the Woods-Saxon potential to which is smoothly added
a $1/r^{3}$ ``tail'', as described in the text.\label{FIGpot}}
\end{center}
\end{figure}
\begin{equation}
V_{WS}(r)=-3.36\ /\ \left[ 1+\exp \{(r-3.5)/0.6\} \right] \label{C6}
\end{equation}
\begin{equation}
V_{3}(r)=-1.6224\times10^{4}/\mathcal{R}^{3} \label{C7}
\end{equation}
\begin{equation}
\mathcal{R}(r)=r/[1-\exp(-r/10)] \label{C8}
\end{equation}
\begin{equation}
V=V_{WS}+V_{3}. \label{C9}
\end{equation}
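For reference, the model potential of Eqs. (\ref{C6})--(\ref{C9}) can be evaluated directly. The sketch below uses the numerical constants quoted above; the function names are ours.

```python
import numpy as np

def woods_saxon(r):
    # Eq. (C6): attractive Woods-Saxon well
    with np.errstate(over="ignore"):        # exp overflows harmlessly to inf at large r
        return -3.36 / (1.0 + np.exp((r - 3.5) / 0.6))

def mapped_radius(r):
    # Eq. (C8): R(r) = r / [1 - exp(-r/10)]; R -> 10 as r -> 0
    r = np.asarray(r, dtype=float)
    with np.errstate(invalid="ignore", divide="ignore"):
        R = r / -np.expm1(-r / 10.0)        # -expm1(-t) = 1 - exp(-t)
    return np.where(r > 0.0, R, 10.0)

def v_tail(r):
    # Eq. (C7): regularized long-range 1/r^3 tail
    return -1.6224e4 / mapped_radius(r) ** 3

def v_total(r):
    # Eq. (C9): total potential, in units of inverse length squared
    return woods_saxon(r) + v_tail(r)
```

At $r=2500$ this gives $|V|\approx 1.0\times10^{-6}$, consistent with the magnitude quoted in the text, while at the origin the mapped radius keeps the tail finite.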
The values of these potentials are appropriate for atomic physics applications
\cite{BA}. This $1/r^{3}$ long-range tail was chosen because this case was
not treated successfully by a Born-approximation method \cite{BA}, whereas it
is well described in the present study. The Woods-Saxon part and the total
potential $V$ are illustrated respectively by the
dashed and solid lines in Fig. \ref{FIGpot}. The long-ranged nature of this
potential is such that at $r=2500$ the value of $V$ is $\simeq10^{-6}.$ The
corresponding wave function is highly oscillatory at small distances, with an
amplitude that varies substantially with distance, as is illustrated in Fig.
\ref{FIG18}.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=1.9216in,width=2.5547in]{PSI_A_1500_k001.eps}
\caption{(Color online). The agreement of the amplitude $y$, shown by the
dashed lines, with the extrema of the solid line representing the IEM wave
function, for $k=0.01$ inverse length.\label{FIG18}}
\end{center}
\end{figure}
The corresponding amplitude $y(r)$ is illustrated by the dashed lines in Fig.
\ref{FIG18}. It is in good agreement with the wave function calculated by
the spectral IEM method \cite{CHEB}, denoted as $IEM$, and shown in Fig.
\ref{FIG18} by the solid line. Noteworthy is the fact that only $201$
expansion terms in Eq. (\ref{aT}) have been used to calculate the amplitude
for the whole radial interval $[0,2000]$. The phase functions $\phi(r)$, based
on Eq. (\ref{phase}), are illustrated in Fig. \ref{FIGxx} for two values of
the wave number $k$. It is not clear whether the phase function obtained here
is identical to the one examined by Calogero in his excellent book
\cite{CALOG}, because the equations each one obeys are very different from
each other, although asymptotically they must agree.
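The economy of the spectral representation noted above can be illustrated with a small sketch: a smooth, slowly varying stand-in for the amplitude (not the actual Milne amplitude) is represented over the full interval $[0,2000]$ by roughly 200 Chebyshev terms, as in Eq. (\ref{aT}).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_fit(f, a, b, n):
    """Represent f on [a, b] by an n-term Chebyshev series sampled at the
    Chebyshev support points (a stand-in for the expansion of Eq. (aT))."""
    k = np.arange(n)
    x = np.cos((2 * k + 1) * np.pi / (2 * n))   # support points on (-1, 1)
    r = 0.5 * (b - a) * x + 0.5 * (b + a)
    coef = C.chebfit(x, f(r), n - 1)            # n points, degree n-1: interpolation
    return lambda rr: C.chebval((2.0 * rr - (b + a)) / (b - a), coef)

# Smooth, slowly varying amplitude-like test function (our choice, illustrative):
def y(r):
    return (1.0 + r / 2000.0) ** 0.25 * np.exp(-r / 800.0)

y_cheb = cheb_fit(y, 0.0, 2000.0, 201)
r_fine = np.linspace(0.0, 2000.0, 20001)        # fine equidistant mesh
err = float(np.max(np.abs(y_cheb(r_fine) - y(r_fine))))
```

For a smooth function of this kind the 201-term representation reproduces the values on the fine mesh essentially to machine accuracy, which is what makes the Ph-A storage of wave functions so economical.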
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=1.8844in,width=2.5062in]{phases_k_01_0005.eps}
\caption{(Color online). The phase functions divided by $\pi$ for the
potential shown in Fig. \ref{FIGpot} for two different values of the wave
number $k.$ For the larger value of $k$ the wave function has more
oscillations, hence the phase function increases more rapidly with $r$.\label{FIGxx}}
\end{center}
\end{figure}
Unless otherwise noted, the numerical results described further below are
carried out only to the first iteration order $n=1$, since the main purpose of
the study was to establish the feasibility of the method. Additional
iterations could proceed along the lines of Eq. (\ref{Siter}), but a more
effective method could be established by subtracting the WKB amplitude from
$y_{n}$, i.e., $z_{n}=y_{n}-y_{WKB}$; since $z_{n}\ll y_{WKB}$, the resulting
equation for $z_{n}$ could be linearized. An example of the good agreement
between the IEM and the Ph-A wave functions is illustrated in Fig.
\ref{FIG20}.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=2.1923in,width=2.9162in]{examplek_0005.eps}
\caption{(Color online). The solid line is the IEM wave function, while the
open circles illustrate the Ph-A wave function results at the Chebyshev
support points with $N=300$, for $k=0.005$. The Ph-A calculation extends from
$r=0$ to $r=2000$, but only the radial interval $[0,500]$ is shown.\label{FIG20}}
\end{center}
\end{figure}
An evaluation of the error of the wave function is obtained by plotting the
absolute value of the difference of the Ph-A and the IEM wave functions. The
result for the case $k=0.01$ is illustrated in Fig. \ref{FIG22}, which shows
that the agreement between the Ph-A and IEM wave functions for the large
distances is close to $0.1\%$, while the error of the WKB wave function is
larger than $1\%$. For the smaller distances, $0<r<40$, both the WKB and the
Ph-A wave functions have an error less than $10^{-3}$.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=2.0418in,width=2.7146in]{err_PSI_N_200_r_0_2000_k001.eps}
\caption{(Color online). The error of the Ph-A wave function for $k=0.01$,
using $201$ Chebyshev expansion functions for the whole radial interval $0\leq
r\leq2000$.\label{FIG22}}
\end{center}
\end{figure}
The values of the errors for the WKB and Ph-A wave functions at large
distances, for the three values of $k$, are summarized in Fig. \ref{FIGerror}.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=1.9052in,width=2.5339in]{error_PSI_k.eps}
\caption{(Color online). The error of the Ph-A wave functions, obtained by
comparison with the IEM wave functions, for three values of the wave number
$k$ (in units of inverse length), at large distances in the vicinity of
$r=2000$. For the small distances, in the vicinity of $r=20$, all errors are
of the same magnitude, less than $10^{-3}$.\label{FIGerror}}
\end{center}
\end{figure}
The general conclusion for this particular numerical case is that at small
distances the WKB approximation is only slightly less accurate than the Ph-A
method, but at large radial distances it is less accurate by more than an
order of magnitude. This latter result shows the value of the present form of
the Ph-A method, which provides further corrections to the WKB results while
requiring very few mesh points.
\subsection{Overlap Integrals}
An example of the calculation of matrix elements by means of the Ph-A method
will be presented below. The two wave functions $\psi_{1}$ and $\psi_{2}$ are
solutions of the one-dimensional radial Schr\"{o}dinger equation with the
potential $V$ defined in Eqs. (\ref{C6}) to (\ref{C9}), for different wave
numbers $k=0.01$ and $0.005$, respectively (in units of inverse length). The
two wave functions have different amplitudes, but nearly the same phases at
distances where $|V|\ >\ k^{2},$ as illustrated in Fig. \ref{FIG26}.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=1.6181in,width=2.1517in]{psi_1_2.eps}
\caption{(Color online). The two wave functions used in the calculation of
the matrix element $M$, defined in Eq. (\ref{M12}). Both have unit amplitude at
$r=2500$.\label{FIG26}}
\label{FIG26
\end{center}
\end{figure}
The overlap potential $U$ is taken from Eq. (4) of Ref. \cite{RITCHIE}, and
represents the screened interaction of an electron with an ion embedded in a
plasma. It is composed of a sum of exponentials divided by the radial distance
$r$, and is illustrated in Fig.~\ref{FIG27}.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=1.5921in,width=2.1171in]{pot_U.eps}
\caption{(Color online). The overlap function $U$ that occurs in the matrix
element $M$, defined in Eq. (\ref{M12}). Because of the factor $1/r$, it
diverges at $r=0$. The units of $U$ are $(1/\mathrm{length})^{2}$.\label{FIG27}}
\end{center}
\end{figure}
It has a $1/r$ singularity as $r\rightarrow0$. Using the Ph-A representation
of $\psi_{1}$ and $\psi_{2}$, the integrand of the overlap integral separates
into fast and slowly oscillating parts, Eqs. (\ref{IF IS}), as
described above. These integrands are illustrated in Fig. \ref{FIG28}.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=1.8066in,width=2.4016in]{integrand.eps}
\caption{(Color online). The integrands of the matrix elements $M^{(F)}$ and
$M^{(S)}$. Due to the oscillation of the integrand of $M^{(F)}$, it is clear
that $|M^{(F)}|<|M^{(S)}|$.\label{FIG28}}
\end{center}
\end{figure}
The approximate values of $M^{(F)}$ and $M^{(S)}$ are $-0.073$ and $0.258$. As
expected, the integrand of $M^{(F)}$ is more oscillatory than the integrand of
$M^{(S)}$, and hence $|M^{(F)}|<|M^{(S)}|$. A crude estimate of $M$ is
therefore given by $M^{(S)}$, which can be calculated directly within the Ph-A
representation, without the need to interpolate to small radial meshes.
\section{ Summary and conclusions}
This is the first time that the iterative method of Seaton and Peach
\cite{Seaton} has been successfully combined with a spectral Chebyshev
expansion of the amplitude $y$ in solving the nonlinear equation of Milne
\cite{MILNE} for the amplitude representation of a wave function. The
difficulty with the Chebyshev expansion of $y$ in obtaining the second order
derivative of $y$ was overcome by the simple procedure of decomposing $y$ into
an analytic part $y_{A}$ plus a remainder $\Delta y$. The second order
derivative of $y_{A}$ is obtained analytically, and since $\Delta y\ll y$, the
second order derivative of $\Delta y$, given by its Chebyshev expansion,
caused no difficulty. For a numerical example that contains a long range
potential tail proportional to $r^{-3}$, it was found that $300$ basis
functions sufficed to span the entire radial domain from the origin to
$r=2000$, and the resulting Ph-A wave function was accurate to $0.1\%$ in the
whole domain. An interesting feature of Seaton and Peach's iteration scheme is
that the zeroth order approximation is identical to the WKB approximation. The
accuracy of the latter was in some cases no better than $1\%$, but the first
iteration improved the accuracy to $0.1\%$, as illustrated in Fig.
\ref{FIGerror}.
The Ph-A method is expected to be very useful for a) calculating overlap
matrix elements that involve highly oscillatory wave functions, b) obtaining
the long range values of wave functions in cases where the conventional
solutions of the Schr\"{o}dinger equation may be inadequate, and c) providing
a very economical method to store wave functions. The present results open the
way to generalize the Ph-A method to scattering cases where barriers are
present, to bound states, or to the situation of coupled channel equations for
which only the final phases in each channel are required.
The author is indebted to Dr. Ionel Simbotin for calling attention to the Ph-A
representation, and for stimulating conversations.
\section{#1}}
\newcommand{\Section}[1]{\subsection{#1}}
\newcommand{\Subsection}[1]{\subsubsection{#1}}
\def\Chname{Section }
\def\chname{section }
\def\chapsect{Section}
}
{
\newcommand{\Chapter}[1]{\chapter{#1}}
\newcommand{\Section}[1]{\section{#1}}
\newcommand{\Subsection}[1]{\subsection{#1}}
\def\Chname{Chapter}
\def\chapsect{Chapter}
}
\date{}
\def\thetitle{Sharp deviation bounds for quadratic forms}
\def\thanks
{The author is partially supported by
Laboratory for Structural Methods of Data Analysis in Predictive Modeling, MIPT,
RF government grant, ag. 11.G34.31.0073.
Financial support by the German Research Foundation (DFG) through the Collaborative
Research Center 649 ``Economic Risk'' is gratefully acknowledged}
\def\theruntitle{sharp deviation bounds for quadratic forms}
\def\theabstract{
This note presents sharp inequalities for deviation probability of a general
quadratic form of a random vector \( \bb{\xi} \) with finite exponential moments.
The obtained deviation bounds are similar to the case of a Gaussian random vector.
The results are stated under general conditions and do not suppose any special
structure of the vector \( \bb{\xi} \).
The obtained bounds are exact (non-asymptotic), all constants are explicit and
the leading terms in the bounds are sharp.
}
\def\kwdp{60F10}
\def\kwds{62F10}
\def\thekeywords{quadratic forms, deviation bounds}
\def\thankstitle{}
\def\authora{Vladimir Spokoiny}
\def\runauthora{spokoiny, v.}
\def\addressa{
Weierstrass-Institute, \\ Humboldt University Berlin, \\ Moscow Institute of
Physics and Technology
\\
Mohrenstr. 39, 10117 Berlin, Germany, \\
}
\def\emaila{spokoiny@wias-berlin.de}
\def\affiliationa{Weierstrass-Institute and Humboldt University Berlin}
\input mydef
\input myfront
\input statdef
\input {pa_qf_2013}
\section{A probability bound for a quadratic form}
\section{Introduction}
\label{Chgqform}
\label{Sprobabquad}
This paper presents a number of deviation probability bounds for a quadratic form
\( \| \bb{\xi} \|^{2} \) or more generally \( \| I\!\!B \bb{\xi} \|^{2} \) of a random
\( p \) vector \( \bb{\xi} \) satisfying a general exponential moment condition.
Such quadratic forms arise in many problems.
We mainly focus on statistical applications such as hypothesis testing for
linear models or linear model selection.
We refer to \cite{massart2003} for an extensive overview and numerous results on
probability bounds and their applications in statistical model selection.
Limit theorems for quadratic forms can be found e.g. in \cite{GoTi1999} and
\cite{HoSh1999}.
Some concentration bounds for U-statistics are available in
\cite{Bret1999},
\cite{Gine2000},
\cite{HouRe2003}.
We also refer to \cite{Ba2010} for a number of statistical problems relying on such
deviation bounds.
If \( \bb{\xi} \) is standard normal then \( \| \bb{\xi} \|^{2} \) is chi-squared with
\( p \) degrees of freedom.
We aim to extend this behavior to the case of a general vector \( \bb{\xi} \)
satisfying the following exponential moment condition:
\begin{EQA}[c]
\log I\!\!E \exp\bigl( \bb{\gamma}^{\top} \bb{\xi} \bigr)
\le
\| \bb{\gamma} \|^{2}/2,
\qquad
\bb{\gamma} \in \mathbb{R}^{p}, \, \| \bb{\gamma} \| \le \mathtt{g} .
\label{expgamgm}
\end{EQA}
Here \( \mathtt{g} \) is a positive constant which appears
to be very important in our results.
Namely, it determines the frontier between the Gaussian and non-Gaussian type
deviation bounds.
Our first result shows that under \eqref{expgamgm} the deviation bounds for the
quadratic form \( \| \bb{\xi} \|^{2} \) are essentially the same as in the Gaussian
case, if the value \( \mathtt{g}^{2} \) exceeds \( \mathtt{C} p \) for a fixed constant
\( \mathtt{C} \).
Further we extend the result to the case of a more general form
\( \| I\!\!B \bb{\xi} \|^{2} \).
An important advantage of the approach of this paper, which distinguishes it
from the previous studies, is that there are no additional conditions on the
structure or origin of the vector \( \bb{\xi} \).
For instance, we do not assume that \( \bb{\xi} \) is a sum of independent or weakly
dependent random variables, or that the components of \( \bb{\xi} \) are independent.
The results are exact and stated in a non-asymptotic fashion, all the constants are
explicit and the leading terms are sharp.
As a motivating example, we consider a linear regression model
\( \bb{Y} = \Psi^{\top} \bb{\theta} + \bb{\varepsilon} \)
in which the error vector \( \bb{\varepsilon} \) is zero mean.
The ordinary least square estimator \( \tilde{\bb{\theta}} \) for the parameter vector
\( \bb{\theta} \) reads as
\begin{EQA}[c]
\tilde{\bb{\theta}}
=
\bigl( \Psi \Psi^{\top} \bigr)^{-1} \Psi \bb{Y}
\label{ttPsPsYv}
\end{EQA}
and it can be viewed as the maximum likelihood estimator in a Gaussian linear model
with a diagonal covariance matrix, that is,
\( \bb{Y} \sim \cc{N}(\Psi^{\top} \bb{\theta}, \sigma^{2} I\!\!\!I_{{n}}) \).
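The estimator \eqref{ttPsPsYv} can be checked numerically on synthetic data; the design, true parameter, and noise level below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 200
Psi = rng.standard_normal((p, n))              # p x n design matrix
theta_true = np.array([1.0, -2.0, 0.5])
Y = Psi.T @ theta_true + 0.1 * rng.standard_normal(n)

# theta_hat = (Psi Psi^T)^{-1} Psi Y, cf. Eq. (ttPsPsYv)
theta_hat = np.linalg.solve(Psi @ Psi.T, Psi @ Y)

# Sanity check against the generic least-squares solver:
theta_lstsq = np.linalg.lstsq(Psi.T, Y, rcond=None)[0]
```

Both computations give the same estimator; the explicit normal-equation form above is the one whose fluctuations \eqref{DPcttsqf} lead to the quadratic forms studied in this paper.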
Define the \( p \times p \) matrix
\begin{EQA}[c]
\DP_{0}^{2}
\stackrel{\operatorname{def}}{=}
\Psi \Psi^{\top}.
\label{DPcVPsqf}
\end{EQA}
Then
\begin{EQA}[c]
\DP_{0} (\tilde{\bb{\theta}} - \thetav^{*})
=
\DP_{0}^{-1} \bb{\zeta}
\label{DPcttsqf}
\end{EQA}
with \( \bb{\zeta} \stackrel{\operatorname{def}}{=} \Psi \bb{Y} \).
The likelihood ratio test statistic for this problem is exactly
\( \| \DP_{0}^{-1} \bb{\zeta} \|^{2}/2 \).
Similarly, the model selection procedure is based on comparing such quadratic forms
for different matrices \( \DP_{0} \); see e.g. \cite{Ba2010}.
Now we indicate how this situation can be reduced to a bound for a vector \( \bb{\xi} \)
satisfying the condition \eqref{expgamgm}.
Suppose for simplicity that the errors \( \varepsilon_{i} \) are independent and
have exponential moments.
\begin{description}
\item[\( \bb{(e_{1})} \)]
\emph{ There exist some constants \( \nu_{0} \) and \( \mathtt{g}_{1} > 0 \),
and for every \( i \)
a constant \( \mathfrak{s}_{i} \) such that
\( I\!\!E \bigl( \varepsilon_{i}/\mathfrak{s}_{i} \bigr)^{2} \le 1 \) and
}
\begin{EQA}[c]
\log I\!\!E \exp\bigl( {\lambda \varepsilon_{i}}/{\mathfrak{s}_{i}} \bigr)
\le
\nu_{0}^{2} \lambda^{2} / 2,
\qquad
|\lambda| \le \mathtt{g}_{1} .
\label{expzetanunu}
\end{EQA}
\end{description}
Here \( \mathtt{g}_{1} \) is a fixed positive constant.
One can show that if this condition is fulfilled for some \( \mathtt{g}_{1} > 0 \) and
a constant \( \nu_{0} \ge 1 \), then one can get a similar condition with
\( \nu_{0} \) arbitrarily close to one and \( \mathtt{g}_{1} \) slightly decreased.
A natural candidate for \( \mathfrak{s}_{i} \) is \( \sigma_{i} \) where
\( \sigma_{i}^{2} = I\!\!E \varepsilon_{i}^{2} \) is the variance of
\( \varepsilon_{i} \).
Under \eqref{expzetanunu}, introduce a \( p \times p \) matrix \( \VP_{0} \)
defined by
\begin{EQA}[c]
\VP_{0}^{2}
\stackrel{\operatorname{def}}{=}
\sum \mathfrak{s}_{i}^{2} \Psi_{i} \Psi_{i}^{\top} .
\label{VPlinregr}
\end{EQA}
Define also
\begin{EQA}
\bb{\xi}
&=&
\VP_{0}^{-1} \Psi \bb{Y},
\\
N^{-1/2}
& \stackrel{\operatorname{def}}{=} &
\max_{i} \sup_{\bb{\gamma} \in \mathbb{R}^{p}}
\frac{\mathfrak{s}_{i} |\Psi_{i}^{\top} \bb{\gamma}|}{\| \VP_{0} \bb{\gamma} \|} \, .
\label{CPsiexp}
\end{EQA}
Simple calculation shows that for \( \| \bb{\gamma} \| \le \mathtt{g} = \mathtt{g}_{1} N^{1/2} \)
\begin{EQA}[c]
\log I\!\!E \exp\bigl( \bb{\gamma}^{\top} \bb{\xi} \bigr)
\le
\nu_{0}^{2} \| \bb{\gamma} \|^{2}/2,
\qquad
\bb{\gamma} \in \mathbb{R}^{p}, \, \| \bb{\gamma} \| \le \mathtt{g} .
\label{expgamgm0ex}
\end{EQA}
We conclude that \eqref{expgamgm} is nearly fulfilled under \( (e_{1}) \) and
moreover, the value \( \mathtt{g}^{2} \) is proportional to the effective sample size
\( N \).
The results of the paper allow one to obtain a nearly \( \chi^{2} \)-behavior of the test
statistic \( \| \bb{\xi} \|^{2} \), which is a finite sample version of the famous Wilks
phenomenon; see e.g. \cite{FaZh2001,FaHu2005}, \cite{BoMa2011}.
\medskip
The paper is organized as follows.
Section~\ref{SGaussqf} recalls the classical results about the deviation probability of
a Gaussian quadratic form.
These results are presented only for comparison and to make the paper self-contained.
Section~\ref{SchiquadE}
studies the probability of the form \( I\!\!P\bigl( \| \bb{\xi} \| > \mathtt{y} \bigr) \)
under the condition
\begin{EQA}[c]
\log I\!\!E \exp\bigl( \bb{\gamma}^{\top} \bb{\xi} \bigr)
\le
\nu_{0}^{2} \| \bb{\gamma} \|^{2}/2,
\qquad
\bb{\gamma} \in \mathbb{R}^{p}, \,\, \| \bb{\gamma} \| \le \mathtt{g} .
\label{expgamgm0}
\end{EQA}
The general case can be reduced to \( \nu_{0} = 1 \) by rescaling
\( \bb{\xi} \) and \( \mathtt{g} \):
\begin{EQA}[c]
\log I\!\!E \exp\bigl( \bb{\gamma}^{\top} \bb{\xi} / \nu_{0} \bigr)
\le
\| \bb{\gamma} \|^{2}/2,
\qquad
\bb{\gamma} \in \mathbb{R}^{p}, \,\, \| \bb{\gamma} \| \le \nu_{0} \mathtt{g}
\label{expgamgmm}
\end{EQA}
that is, \( \nu_{0}^{-1} \bb{\xi} \) fulfills \eqref{expgamgm} with a slightly increased
\( \mathtt{g} \).
The result is extended to the case of a general quadratic form
in Section~\ref{Sbqf}.
Some further extensions motivated by different statistical problems are given in
Section~\ref{Schi2norm} and Section~\ref{SBernstqf}.
All the proofs are collected in the Appendix.
\section{Gaussian case}
\label{SGaussqf}
Our benchmark will be a deviation bound for \( \| \bb{\xi} \|^{2} \) for a standard
Gaussian vector \( \bb{\xi} \).
The ultimate goal is to show that under \eqref{expgamgm}
the norm of the vector \( \bb{\xi} \) exhibits behavior
expected for a Gaussian vector, at least in the region of moderate deviations.
For the reason of comparison, we begin by stating the result for a Gaussian vector
\( \bb{\xi} \).
\begin{theorem}
\label{TxivG}
Let \( \bb{\xi} \) be a standard normal vector in \( \mathbb{R}^{p} \).
Then for any \( u > 0 \), it holds
\begin{EQA}
\label{logPmunuu}
I\!\!P\bigl( \| \bb{\xi} \|^{2} > p + u \bigr)
& \le &
\exp\bigl\{ - (p/2) \phi(u/p) \bigr\}
\end{EQA}
with
\begin{EQA}[c]
\phi(t)
\stackrel{\operatorname{def}}{=}
t - \log(1+t) .
\label{fmupd}
\end{EQA}
Let \( \phi^{-1}(\cdot) \) stand for the inverse of \( \phi(\cdot) \).
For any \( \mathtt{x} \),
\begin{EQA}[c]
I\!\!P\bigl(
\| \bb{\xi} \|^{2}
> p + \phi^{-1}(2\mathtt{x}/p)
\bigr)
\le
\exp(- \mathtt{x}) .
\label{Ptttlg0}
\end{EQA}
This particularly yields with \( \varkappa = 6.6 \)
\begin{EQA}[c]
I\!\!P\bigl(
\| \bb{\xi} \|^{2}
> p + \sqrt{\varkappa \mathtt{x} p} \vee (\varkappa \mathtt{x})
\bigr)
\le
\exp(- \mathtt{x}) .
\label{Ptttlg}
\end{EQA}
\end{theorem}
This is a simple version of a well known result and we present it only for
comparison with the non-Gaussian case.
The message of this result is that the squared norm of the Gaussian vector
\( \bb{\xi} \) concentrates around the value \( p \), and the deviations over the
level \( p + \sqrt{\mathtt{x} p} \) are exponentially small in \( \mathtt{x} \).
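The bound \eqref{logPmunuu} is a standard Chernoff bound for the chi-squared distribution and can be checked by simulation; the dimension and deviation levels below are illustrative choices.

```python
import numpy as np

def phi(t):
    # phi(t) = t - log(1 + t), cf. Eq. (fmupd)
    return t - np.log1p(t)

p, n_sim = 10, 200_000
rng = np.random.default_rng(1)
q = np.sum(rng.standard_normal((n_sim, p)) ** 2, axis=1)   # chi^2_p samples

checks = []
for u in (5.0, 10.0, 20.0):
    chernoff = float(np.exp(-0.5 * p * phi(u / p)))        # right-hand side of (logPmunuu)
    empirical = float(np.mean(q > p + u))                  # simulated tail probability
    checks.append((empirical, chernoff))
```

In each case the simulated tail lies below the Chernoff bound, with the gap growing as \( u \) increases, as is typical for exponential-moment bounds.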
A similar bound can be obtained for a norm of the vector \( I\!\!B \bb{\xi} \) where
\( I\!\!B \) is some given matrix.
For notational simplicity we assume that \( I\!\!B \) is symmetric.
Otherwise one should replace it with \( (I\!\!B^{\top} I\!\!B)^{1/2} \).
\begin{theorem}
\label{TexpbLGA}
Let \( \bb{\xi} \) be standard normal in \( \mathbb{R}^{p} \).
Then for every \( \mathtt{x} > 0 \) and any symmetric matrix \( I\!\!B \),
it holds with \( \mathtt{p} = \operatorname{tr}(I\!\!B^{2}) \),
\( \mathtt{v}^{2} = 2 \operatorname{tr}(I\!\!B^{4}) \), and \( a^{*} = \| I\!\!B^{2} \|_{\infty} \)
\begin{EQA}[c]
I\!\!P\bigl(
\| I\!\!B \bb{\xi} \|^{2}
> \mathtt{p} + (2 \mathtt{v} \mathtt{x}^{1/2}) \vee (6 a^{*} \mathtt{x})
\bigr)
\le
\exp(- \mathtt{x}) .
\label{PtttLGA}
\end{EQA}
\end{theorem}
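The characteristics \( \mathtt{p} \), \( \mathtt{v}^{2} \), and \( a^{*} \) entering Theorem~\ref{TexpbLGA} are straightforward to compute; the sketch below (with an arbitrary symmetric matrix) also checks the elementary relations \( a^{*} \le \mathtt{p} \) and \( \mathtt{v}^{2} \le 2 a^{*} \mathtt{p} \) that follow from the spectral decomposition of \( I\!\!B^{2} \).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
B = 0.5 * (A + A.T)                             # an arbitrary symmetric matrix IB

B2 = B @ B
p_eff = float(np.trace(B2))                     # p   = tr(B^2)
v2 = 2.0 * float(np.trace(B2 @ B2))             # v^2 = 2 tr(B^4)
a_star = float(np.max(np.linalg.eigvalsh(B2)))  # a*  = ||B^2||_inf

def threshold(x):
    # deviation level in (PtttLGA): p + (2 v x^{1/2}) \vee (6 a* x)
    return p_eff + max(2.0 * np.sqrt(v2 * x), 6.0 * a_star * x)
```

Since \( B^{2} \) is positive semidefinite, its largest eigenvalue is dominated by its trace and \( \operatorname{tr}(B^{4}) \le \lambda_{\max}(B^{2}) \operatorname{tr}(B^{2}) \), which the assertions below verify numerically.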
Below we establish similar bounds for a non-Gaussian vector \( \bb{\xi} \)
obeying \eqref{expgamgm}.
\section{A bound for the \( \ell_{2} \)-norm}
\label{SchiquadE}
This section presents a general exponential bound for the probability
\( I\!\!P\bigl( \| \bb{\xi} \| > \mathtt{y} \bigr) \) under \eqref{expgamgm}.
The main result tells us that if \( \mathtt{y} \) is not too large,
namely if \( \mathtt{y} \le \yy_{c} \) with \( \yy_{c}^{2} \asymp \mathtt{g}^{2} \), then
the deviation probability is essentially the same as in the Gaussian case.
To describe the value \( \yy_{c} \), introduce the following notation.
Given \( \mathtt{g} \) and \( p \),
define the values \( w_{0} = \mathtt{g} p^{-1/2} \) and
\( \ww_{c} \) by the equation
\begin{EQA}[c]
\frac{\ww_{c}(1+\ww_{c})}{(1+\ww_{c}^{2})^{1/2}}
=
w_{0}
=
\mathtt{g} p^{-1/2}.
\label{wc212}
\end{EQA}
It is easy to see that \( w_{0}/\sqrt{2} \le \ww_{c} \le w_{0} \).
Further define
\begin{EQA}
\mu_{c}
& \stackrel{\operatorname{def}}{=} &
\ww_{c}^{2}/(1+\ww_{c}^{2})
\\
\yy_{c}
& \stackrel{\operatorname{def}}{=} &
\sqrt{(1 + \ww_{c}^{2}) p} ,
\label{yyc1wc}
\\
\xx_{c}
& \stackrel{\operatorname{def}}{=} &
0.5 p \bigl[ \ww_{c}^{2} - \log\bigl( 1 + \ww_{c}^{2} \bigr) \bigr].
\label{zcgmp}
\end{EQA}
Note that for \( \mathtt{g}^{2} \ge p \), the quantities \( \yy_{c} \) and \( \xx_{c} \) can be
evaluated as
\( \yy_{c}^{2} \ge \ww_{c}^{2} p \ge \mathtt{g}^{2}/2 \) and
\( \xx_{c} \gtrsim p \ww_{c}^{2}/2 \ge \mathtt{g}^{2}/4 \).
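The defining equation \eqref{wc212} has a unique positive root, which can be found by bisection on the bracket \( [w_{0}/\sqrt{2},\, w_{0}] \) noted above. A sketch (function names are ours):

```python
import numpy as np

def w_c(g, p, tol=1e-12):
    # Solve w(1+w)/sqrt(1+w^2) = w0 = g / sqrt(p) by bisection, Eq. (wc212).
    w0 = g / np.sqrt(p)
    f = lambda w: w * (1.0 + w) / np.sqrt(1.0 + w * w) - w0
    lo, hi = w0 / np.sqrt(2.0), w0          # guaranteed bracket: f(lo) <= 0 <= f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def critical_values(g, p):
    wc = w_c(g, p)
    mu_c = wc ** 2 / (1.0 + wc ** 2)
    y_c = np.sqrt((1.0 + wc ** 2) * p)              # Eq. (yyc1wc)
    x_c = 0.5 * p * (wc ** 2 - np.log1p(wc ** 2))   # Eq. (zcgmp)
    return wc, mu_c, y_c, x_c
```

For example, with \( \mathtt{g}^{2} = 4 \) and \( p = 4 \) the root satisfies the bracketing \( w_{0}/\sqrt{2} \le \ww_{c} \le w_{0} \) and the evaluation \( \yy_{c}^{2} \ge \mathtt{g}^{2}/2 \) quoted above.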
\begin{theorem}
\label{TxivqLD}
Let \( \bb{\xi} \in \mathbb{R}^{p} \) fulfill \eqref{expgamgm}.
Then it holds for each \( \mathtt{x} \le \xx_{c} \)
\begin{EQA}
I\!\!P\bigl(
\| \bb{\xi} \|^{2} > p + \sqrt{\varkappa \mathtt{x} p} \vee (\varkappa \mathtt{x}) , \,\,
\| \bb{\xi} \| \le \yy_{c}
\bigr)
& \le &
2 \exp( - \mathtt{x} ),
\label{expxibo}
\end{EQA}
where \( \varkappa = 6.6 \).
Moreover, for \( \mathtt{y} \ge \yy_{c} \), it holds with
\( \gm_{c} = \mathtt{g} - \sqrt{\mu_{c} p} = \mathtt{g} \ww_{c}/(1+\ww_{c}) \)
\begin{EQA}
I\!\!P\bigl( \| \bb{\xi} \| > \mathtt{y} \bigr)
& \le &
8.4 \exp\bigl\{ - \gm_{c} \mathtt{y}/2 - (p/2) \log(1 - \gm_{c}/\mathtt{y}) \bigr\}
\\
& \le &
8.4 \exp\bigl\{ - \xx_{c} - \gm_{c} (\mathtt{y} - \yy_{c})/2 \bigr\}.
\label{Pexp2xit}
\end{EQA}
\end{theorem}
The statements of Theorem~\ref{TxivqLD} can be simplified
under the assumption \( \mathtt{g}^{2} \ge p \).
\begin{corollary}
\label{CTxivqLDA}
Let \( \bb{\xi} \) fulfill \eqref{expgamgm} and \( \mathtt{g}^{2} \ge p \).
Then it holds for \( \mathtt{x} \le \xx_{c} \)
\begin{EQA}
\label{Pzzxxp}
I\!\!P\bigl( \| \bb{\xi} \|^{2} \ge \mathfrak{z}(\mathtt{x},p) \bigr)
& \le &
2 \mathrm{e}^{-\mathtt{x}} + 8.4 \mathrm{e}^{-\xx_{c}},
\\
\mathfrak{z}(\mathtt{x},p)
& \stackrel{\operatorname{def}}{=} &
\begin{cases}
p + \sqrt{\varkappa \mathtt{x} p}, & \mathtt{x} \le p/\varkappa , \\
p + \varkappa \mathtt{x}, & p/\varkappa < \mathtt{x} \le \xx_{c} ,
\end{cases}
\label{zzxxppd}
\end{EQA}
with \( \varkappa = 6.6 \).
For \( \mathtt{x} > \xx_{c} \)
\begin{EQA}
I\!\!P\bigl( \| \bb{\xi} \|^{2} \ge \mathfrak{z}_{c}(\mathtt{x},p) \bigr)
& \le &
8.4 \mathrm{e}^{-\mathtt{x}},
\qquad
\mathfrak{z}_{c}(\mathtt{x},p)
\stackrel{\operatorname{def}}{=}
\bigl| \yy_{c} + 2 (\mathtt{x} - \xx_{c})/\gm_{c} \bigr|^{2} .
\label{zzcxxppd}
\end{EQA}
\end{corollary}
This result implicitly assumes that \( p \le \varkappa \xx_{c} \) which is fulfilled
if \( w_{0}^{2} = \mathtt{g}^{2}/p \ge 1 \):
\begin{EQA}[c]
\varkappa \xx_{c}
=
0.5 \varkappa \bigl[ w_{0}^{2} - \log(1 + w_{0}^{2}) \bigr] p
\ge
3.3 \bigl[ 1 - \log(2) \bigr] p
>
p .
\label{dm2dimp}
\end{EQA}
For \( \mathtt{x} \le \xx_{c} \), the function \( \mathfrak{z}(\mathtt{x},p) \) mimics the quantile
behavior of the chi-squared distribution \( \chi^{2}_{p} \) with \( p \)
degrees of freedom.
Moreover, an increase of the value \( \mathtt{g} \) enlarges the sub-Gaussian zone.
In particular, for \( \mathtt{g} = \infty \), a general quadratic form \( \| \bb{\xi} \|^{2} \)
has under \eqref{expgamgm} the same tail behavior as in the Gaussian case.
Finally, in the large deviation zone \( \mathtt{x} > \xx_{c} \) the deviation probability
decays as \( \mathrm{e}^{-c \mathtt{x}^{1/2}} \) for some fixed \( c \).
However, if the constant \( \mathtt{g} \) in the condition \eqref{expgamgm} is sufficiently large
relative to \( p \), then \( \xx_{c} \) is large as well and the large deviation
zone \( \mathtt{x} > \xx_{c} \) can be ignored at a small price of \( 8.4 \mathrm{e}^{-\xx_{c}} \) and
one can focus on the deviation bound described by \eqref{Pzzxxp} and \eqref{zzxxppd}.
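The quantile-type function \( \mathfrak{z}(\mathtt{x},p) \) of \eqref{zzxxppd} can be coded directly; for a Gaussian \( \bb{\xi} \) (the benchmark case, where the large deviation zone plays no role) its tail guarantee can be checked by simulation.

```python
import numpy as np

KAPPA = 6.6

def z(x, p):
    # Eq. (zzxxppd): sub-Gaussian zone quantile-type function
    if x <= p / KAPPA:
        return p + np.sqrt(KAPPA * x * p)
    return p + KAPPA * x

p, n_sim = 10, 200_000
rng = np.random.default_rng(3)
q = np.sum(rng.standard_normal((n_sim, p)) ** 2, axis=1)   # ||xi||^2, Gaussian case

tail_checks = [(float(np.mean(q >= z(x, p))), float(np.exp(-x)))
               for x in (0.5, 1.0, 3.0, 6.0)]
```

The two branches of \( \mathfrak{z}(\mathtt{x},p) \) join continuously at \( \mathtt{x} = p/\varkappa \), and in the Gaussian case the simulated tail stays below \( \mathrm{e}^{-\mathtt{x}} \), in line with Theorem~\ref{TxivG}.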
\section{A bound for a quadratic form}
\label{Sbqf}
Now we extend the result to more general bound for
\( \| I\!\!B \bb{\xi} \|^{2} = \bb{\xi}^{\top} I\!\!B^{2} \bb{\xi} \) with a given matrix
\( I\!\!B \) and a vector \( \bb{\xi} \) obeying the condition \eqref{expgamgm}.
Similarly to the Gaussian case we assume that \( I\!\!B \) is symmetric.
Define important characteristics of \( I\!\!B \)
\begin{EQA}[c]
\mathtt{p} = \operatorname{tr} (I\!\!B^{2}) ,
\qquad
\mathtt{v}^{2} = 2 \operatorname{tr}(I\!\!B^{4}),
\qquad
{\lambda}^{*} \stackrel{\operatorname{def}}{=} \| I\!\!B^{2} \|_{\infty} \stackrel{\operatorname{def}}{=} \lambda_{\max}(I\!\!B^{2}) .
\label{dimAvAlb}
\end{EQA}
For simplicity of formulation we suppose that \( {\lambda}^{*} = 1 \),
otherwise one has to replace \( \mathtt{p} \) and \( \mathtt{v}^{2} \) with
\( \mathtt{p}/{\lambda}^{*} \) and \( \mathtt{v}^{2}/{\lambda}^{*} \).
Let \( \mathtt{g} \) be as in \eqref{expgamgm}.
Define similarly to the \( \ell_{2} \)-case
\( \ww_{c} \) by the equation
\begin{EQA}[c]
\frac{\ww_{c}(1+\ww_{c})}{(1+\ww_{c}^{2})^{1/2}}
=
\mathtt{g} \mathtt{p}^{-1/2} .
\label{wc212A}
\end{EQA}
Define also \( \mu_{c} = \ww_{c}^{2}/(1+\ww_{c}^{2}) \wedge 2/3 \).
Note that \( \ww_{c}^{2} \ge 2 \) implies \( \mu_{c} = 2/3 \).
Further define
\begin{EQA}
\yy_{c}^{2} = (1 + \ww_{c}^{2}) \mathtt{p},
\qquad
2 \xx_{c}
& = &
\mu_{c} \yy_{c}^{2} + \log \det\{ I\!\!\!I_{p} - \mu_{c} I\!\!B^{2} \} .
\label{xxcyycA}
\end{EQA}
Similarly to the case with \( I\!\!B = I\!\!\!I_{p} \), under the condition
\( \mathtt{g}^{2} \ge \mathtt{p} \), one can bound
\( \yy_{c}^{2} \ge \mathtt{g}^{2}/2 \) and \( \xx_{c} \gtrsim \mathtt{g}^{2}/4 \).
\begin{theorem}
\label{TxivqLDA}
Let a random vector \( \bb{\xi} \) in \( \mathbb{R}^{p} \) fulfill \eqref{expgamgm}.
Then for each \( \mathtt{x} < \xx_{c} \)
\begin{EQA}
I\!\!P\bigl(
\| I\!\!B \bb{\xi} \|^{2} > \mathtt{p} + (2 \mathtt{v} \mathtt{x}^{1/2}) \vee (6 \mathtt{x}), \,\,
\| I\!\!B \bb{\xi} \| \le \yy_{c}
\bigr)
& \le &
2 \exp( - \mathtt{x} ) .
\label{expxiboA}
\end{EQA}
Moreover, for \( \mathtt{y} \ge \yy_{c} \), with
\( \gm_{c} = \mathtt{g} - \sqrt{\mu_{c} \mathtt{p}} = \mathtt{g} \ww_{c}/(1+\ww_{c}) \),
it holds
\begin{EQA}
I\!\!P\bigl( \| I\!\!B \bb{\xi} \| > \mathtt{y} \bigr)
& \le &
8.4 \exp\bigl( - \xx_{c} - \gm_{c} (\mathtt{y} - \yy_{c})/2 \bigr) .
\label{expxibogA}
\end{EQA}
\end{theorem}
Now we describe the value \( \mathfrak{z}(\mathtt{x},I\!\!B) \) ensuring a small value for the large deviation probability
\( I\!\!P\bigl( \| I\!\!B \bb{\xi} \|^{2} > \mathfrak{z}(\mathtt{x},I\!\!B) \bigr) \).
For ease of formulation, we suppose that \( \mathtt{g}^{2} \ge 2 \mathtt{p} \), which yields
\( \mu_{c}^{-1} \le 3/2 \).
The other case requires only minor adjustments.
\begin{corollary}
\label{CTxivqLDAB}
Let \( \bb{\xi} \) fulfill \eqref{expgamgm} with \( \mathtt{g}^{2} \ge 2 \mathtt{p} \).
Then it holds for \( \mathtt{x} \le \xx_{c} \) with \( \xx_{c} \) from \eqref{xxcyycA}:
\begin{EQA}
\label{PzzxxpB}
I\!\!P\bigl( \| I\!\!B \bb{\xi} \|^{2} \ge \mathfrak{z}(\mathtt{x},I\!\!B) \bigr)
& \le &
2 \mathrm{e}^{-\mathtt{x}} + 8.4 \mathrm{e}^{-\xx_{c}},
\\
\mathfrak{z}(\mathtt{x},I\!\!B)
& \stackrel{\operatorname{def}}{=} &
\begin{cases}
\mathtt{p} + 2 \mathtt{v} \mathtt{x}^{1/2}, & \mathtt{x} \le \mathtt{v}^{2}/9 , \\
\mathtt{p} + 6 \mathtt{x}, & \mathtt{v}^{2}/9 < \mathtt{x} \le \xx_{c} .
\end{cases}
\label{zzxxppdB}
\end{EQA}
For \( \mathtt{x} > \xx_{c} \)
\begin{EQA}
I\!\!P\bigl( \| I\!\!B \bb{\xi} \|^{2} \ge \mathfrak{z}_{c}(\mathtt{x},I\!\!B) \bigr)
& \le &
8.4 \mathrm{e}^{-\mathtt{x}},
\qquad
\mathfrak{z}_{c}(\mathtt{x},I\!\!B)
\stackrel{\operatorname{def}}{=}
\bigl| \yy_{c} + 2 (\mathtt{x} - \xx_{c})/\gm_{c} \bigr|^{2} .
\label{zzcxxppdB}
\end{EQA}
\end{corollary}
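The threshold of Corollary~\ref{CTxivqLDAB} can be sketched as a small routine. For \( \mathtt{x} \le \xx_{c} \) the quantity \( \mathfrak{z}(\mathtt{x},I\!\!B) \) equals \( \mathtt{p} + (2\mathtt{v}\mathtt{x}^{1/2}) \vee (6\mathtt{x}) \) from \eqref{expxiboA}, the two branches crossing at \( \mathtt{x} = \mathtt{v}^{2}/9 \). The sample values for \( \mathtt{p}, \mathtt{v}, \xx_{c}, \yy_{c}, \gm_{c} \) below are illustrative assumptions, not computed from a particular \( I\!\!B \):

```python
import math

def z_threshold(x, p, v, xc, yc, gc):
    """Deviation threshold: z(x, B) for x <= xc, z_c(x, B) beyond xc."""
    if x <= xc:
        # p + (2 v x^{1/2}) v (6 x); the first branch dominates iff x <= v^2/9
        return p + (2 * v * math.sqrt(x) if x <= v * v / 9 else 6 * x)
    return (yc + 2 * (x - xc) / gc) ** 2

# illustrative (assumed) values
p, v, xc, yc, gc = 10.0, 4.5, 4.7, 5.5, 3.0
```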
\section{Rescaling and regularity condition}
\label{SLDQFr}
The result of Theorem~\ref{TxivqLDA} can be extended to a more general situation
when the condition \eqref{expgamgm} is fulfilled for a vector \( \bb{\zeta} \) rescaled
by a matrix \( \VP_{0} \).
More precisely, let the random \( p \)-vector \( \bb{\zeta} \) fulfill, for some
\( p \times p \) matrix \( \VP_{0} \), the condition
\begin{EQA}[c]
\label{expzetaclocz}
\sup_{\bb{\gamma} \in \mathbb{R}^{p}}
\log I\!\!E \exp\Bigl(
\lambda \frac{\bb{\gamma}^{\top} \bb{\zeta}}{\| \VP_{0} \bb{\gamma} \|}
\Bigr)
\le
\nu_{0}^{2} \lambda^{2} / 2,
\qquad
|\lambda| \le \mathtt{g},
\end{EQA}
with some constants \( \mathtt{g} > 0 \), \( \nu_{0} \ge 1 \).
Again, a simple change of variables reduces the case of an arbitrary \( \nu_{0} \ge 1 \)
to \( \nu_{0} = 1 \).
Our aim is to bound the squared norm \( \| \DP_{0}^{-1} \bb{\zeta} \|^{2} \) of a vector
\( \DP_{0}^{-1} \bb{\zeta} \) for another \( p \times p \) positive symmetric matrix
\( \DP_{0}^{2} \).
Note that condition \eqref{expzetaclocz} implies \eqref{expgamgm} for the rescaled
vector \( \bb{\xi} = \VP_{0}^{-1} \bb{\zeta} \).
This leads to bounding the quadratic form
\( \| \DP_{0}^{-1} \VP_{0} \bb{\xi} \|^{2} = \| I\!\!B \bb{\xi} \|^{2} \) with
\( I\!\!B^{2} = \DP_{0}^{-1} \VP_{0}^{2} \DP_{0}^{-1} \).
It obviously holds that
\begin{EQA}[c]
\mathtt{p}
=
\operatorname{tr}(I\!\!B^{2})
=
\operatorname{tr} (\DP_{0}^{-2} \VP_{0}^{2}) .
\label{dimAVPDP}
\end{EQA}
Now we can apply the result of Corollary~\ref{CTxivqLDAB}.
\begin{corollary}
\label{CTxivqLDDV}
Let \( \bb{\zeta} \) fulfill \eqref{expzetaclocz} with
some \( \VP_{0} \) and \( \mathtt{g} \).
Given \( \DP_{0} \), define \( I\!\!B^{2} = \DP_{0}^{-1} \VP_{0}^{2} \DP_{0}^{-1} \), and let
\( \mathtt{g}^{2} \ge 2 \mathtt{p} \).
Then it holds for \( \mathtt{x} \le \xx_{c} \) with \( \xx_{c} \) from \eqref{xxcyycA}:
\begin{EQA}
I\!\!P\bigl( \| \DP_{0}^{-1} \bb{\zeta} \|^{2} \ge \mathfrak{z}(\mathtt{x},I\!\!B) \bigr)
& \le &
2 \mathrm{e}^{-\mathtt{x}} + 8.4 \mathrm{e}^{-\xx_{c}},
\label{zzxxppdBDV}
\end{EQA}
with \( \mathfrak{z}(\mathtt{x},I\!\!B) \) from \eqref{zzxxppdB}.
For \( \mathtt{x} > \xx_{c} \)
\begin{EQA}
I\!\!P\bigl( \| \DP_{0}^{-1} \bb{\zeta} \|^{2} \ge \mathfrak{z}_{c}(\mathtt{x},I\!\!B) \bigr)
& \le &
8.4 \mathrm{e}^{-\mathtt{x}},
\qquad
\mathfrak{z}_{c}(\mathtt{x},I\!\!B)
\stackrel{\operatorname{def}}{=}
\bigl| \yy_{c} + 2 (\mathtt{x} - \xx_{c})/\gm_{c} \bigr|^{2} .
\label{zzcxxppdBDV}
\end{EQA}
\end{corollary}
In the \emph{regular} case with \( \DP_{0} \ge \mathfrak{a} \VP_{0} \) for some
\( \mathfrak{a} > 0 \), one obtains
\( \| I\!\!B \|_{\infty} \le \mathfrak{a}^{-1} \) and
\begin{EQA}[c]
\mathtt{v}^{2}
=
2 \operatorname{tr}(I\!\!B^{4})
\le
2 \mathfrak{a}^{-2} \mathtt{p} .
\label{vAfisdimA}
\end{EQA}
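The regularity bound can be checked directly in the diagonal case, where \( \DP_{0} = \operatorname{diag}(d_{i}) \), \( \VP_{0} = \operatorname{diag}(v_{i}) \) and \( I\!\!B^{2} \) has eigenvalues \( (v_{i}/d_{i})^{2} \); the values below are illustrative:

```python
# Diagonal illustration of the regular case D0 >= a * V0:
# B^2 = D0^{-1} V0^2 D0^{-1} has eigenvalues (v_i/d_i)^2 <= a^{-2}.
a = 0.5
v0 = [1.0, 2.0, 0.5, 3.0]
d0 = [0.7, 1.5, 0.9, 2.0]           # chosen so that d_i >= a * v_i
assert all(d >= a * v for d, v in zip(d0, v0))

eig_B2 = [(v / d) ** 2 for v, d in zip(v0, d0)]
p_eff = sum(eig_B2)                  # p = tr(B^2)
v2 = 2 * sum(e * e for e in eig_B2)  # v^2 = 2 tr(B^4)
```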
\section{A chi-squared bound with norm-constraints}
\label{Schi2norm}
This section extends the results to the case when the bound \eqref{expgamgm} is only
available under a constraint on a norm of \( \bb{\gamma} \) other than the \( \ell_{2} \)-norm.
Namely, we suppose that
\begin{EQA}[c]
\log I\!\!E \exp\bigl( \bb{\gamma}^{\top} \bb{\xi} \bigr)
\le
\| \bb{\gamma} \|^{2}/2,
\qquad
\bb{\gamma} \in \mathbb{R}^{p}, \,\, \| \bb{\gamma} \|_{\circ} \le \gm_{\norms} ,
\label{expgamgmmn}
\end{EQA}
where \( \| \cdot \|_{\circ} \) is a norm which differs from the usual
Euclidean norm.
Our driving example is given by the sup-norm case with
\( \| \bb{\gamma} \|_{\circ} \equiv \| \bb{\gamma} \|_{\infty} \).
We are interested to check whether the previous results of Section~\ref{SchiquadE}
still apply.
The answer depends on how massive the set
\( \cc{A}(r) = \{ \bb{\gamma}: \| \bb{\gamma} \|_{\circ} \le r \} \) is in terms of the standard Gaussian
measure on \( \mathbb{R}^{p} \).
Recall that the quadratic norm \( \| \bb{\varepsilon} \|^{2} \) of a standard Gaussian
vector \( \bb{\varepsilon} \) in \( \mathbb{R}^{p} \) concentrates around \( p \)
at least for \( p \) large.
We need a similar concentration property for the norm \( \| \cdot \|_{\circ} \).
More precisely, we assume for a fixed \( r_{*} \) that
\begin{EQA}[c]
I\!\!P\bigl( \| \bb{\varepsilon} \|_{\circ} \le r_{*} \bigr)
\ge
1/2,
\qquad
\bb{\varepsilon} \sim \cc{N}(0,I\!\!\!I_{p}) .
\label{epsuvgmr}
\end{EQA}
This implies for any value \( \uu_{\norms} > 0 \) and all \( \bb{u} \in \mathbb{R}^{p} \) with
\( \| \bb{u} \|_{\circ} \le \uu_{\norms} \) that
\begin{EQA}[c]
I\!\!P\bigl( \| \bb{\varepsilon} - \bb{u} \|_{\circ} \le r_{*} + \uu_{\norms} \bigr)
\ge
1/2,
\qquad
\bb{\varepsilon} \sim \cc{N}(0,I\!\!\!I_{p}) .
\label{epsuvgmrr}
\end{EQA}
For each \( \mathfrak{z} > p \), consider
\begin{EQA}[c]
\mu(\mathfrak{z}) = (\mathfrak{z} - p)/\mathfrak{z} .
\label{muzzdp}
\end{EQA}
Given \( \uu_{\norms} \), denote by \( \mathfrak{z}_{\norms} = \mathfrak{z}_{\norms}(\uu_{\norms}) \) the root of the equation
\begin{EQA}[c]
\frac{\gm_{\norms}}{\mu(\mathfrak{z}_{\norms})} - \frac{r_{*}}{\mu^{1/2}(\mathfrak{z}_{\norms})}
=
\uu_{\norms} .
\label{gmmuyynn}
\end{EQA}
One can easily see that this value exists and is unique
if \( \uu_{\norms} \ge \gm_{\norms} - r_{*} \); it can be defined as the largest \( \mathfrak{z} \)
for which \( \frac{\gm_{\norms}}{\mu(\mathfrak{z})} - \frac{r_{*}}{\mu^{1/2}(\mathfrak{z})} \ge \uu_{\norms} \).
Let \( \mu_{\norms} = \mu(\mathfrak{z}_{\norms}) \) be the corresponding \( \mu \)-value.
Define also \( \xx_{\norms} \) by
\begin{EQA}[c]
2 \xx_{\norms} = \mu_{\norms} \mathfrak{z}_{\norms} + p \log(1 - \mu_{\norms}) .
\label{xxdgg}
\end{EQA}
If \( \uu_{\norms} < \gm_{\norms} - r_{*} \), then set \( \mathfrak{z}_{\norms} = \infty \), \( \xx_{\norms} = \infty \).
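The root \( \mathfrak{z}_{\norms} \) of \eqref{gmmuyynn} can be computed by bisection: since \( \mu(\mathfrak{z}) \) increases from \( 0 \) to \( 1 \) as \( \mathfrak{z} \) runs from \( p \) to \( \infty \), the left-hand side decreases from \( +\infty \) to \( \gm_{\norms} - r_{*} \). A sketch with illustrative values (assumed, not taken from the text):

```python
import math

def z_norm(g_n, r_star, u_n, p, z_hi=1e9):
    """Root z_n of g_n/mu(z) - r_star/sqrt(mu(z)) = u_n with mu(z) = (z-p)/z.
    The left-hand side decreases from +inf (z -> p) to g_n - r_star (z -> inf),
    so a root exists iff u_n >= g_n - r_star; otherwise z_n = infinity."""
    if u_n < g_n - r_star:
        return math.inf
    lhs = lambda z: g_n * z / (z - p) - r_star * math.sqrt(z / (z - p))
    lo, hi = p * (1 + 1e-9), z_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lhs(mid) > u_n:   # root lies to the right of mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p, g_n, r_star = 10.0, 6.0, math.sqrt(2 * math.log(10))
zn = z_norm(g_n, r_star, u_n=4.0, p=p)
mun = (zn - p) / zn
xn = 0.5 * (mun * zn + p * math.log(1 - mun))   # x_n from the display above
```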
\begin{comment}
If \( r_{*}^{2} = p \), then the values \( \yy_{\norms} \) and \( \gm_{\norms} \) can be easily
evaluated.
\begin{lemma}
\label{Lgmmuc2n}
For \( r_{*} = p^{1/2} \), it holds
\begin{EQA}[c]
p + \mathtt{g}^{2}/2 \le \yy_{c}^{2} \le p + \mathtt{g}^{2},
\qquad
\mu_{c}
\ge
\frac{\mathtt{g}^{2}}{p + \mathtt{g}^{2}},
\qquad
\gm_{c} \ge \frac{\mathtt{g}^{2}}{(p + \mathtt{g}^{2})^{1/2}} \, .
\label{yycmucgmcBe}
\end{EQA}
\end{lemma}
\end{comment}
\begin{theorem}
\label{TxivqLDrg}
Let a random vector \( \bb{\xi} \) in \( \mathbb{R}^{p} \) fulfill \eqref{expgamgmmn}.
Suppose \eqref{epsuvgmr} and let, given \( \uu_{\norms} \), the value \( \mathfrak{z}_{\norms} \) be defined
by \eqref{gmmuyynn}.
Then it holds for any \( u > 0 \)
\begin{EQA}
\label{logPmunuunc}
I\!\!P\bigl( \| \bb{\xi} \|^{2} > p + u,
\| \bb{\xi} \|_{\circ} \le \uu_{\norms}
\bigr)
& \le &
2 \exp\bigl\{ - (p/2) \phi(u) \bigr\} ,
\end{EQA}
yielding for \( \mathtt{x} \le \xx_{\norms} \)
\begin{EQA}
I\!\!P\bigl(
\| \bb{\xi} \|^{2} > p + \sqrt{\varkappa \mathtt{x} p} \vee (\varkappa \mathtt{x}), \,
\| \bb{\xi} \|_{\circ} \le \uu_{\norms}
\bigr)
& \le &
2 \exp( - \mathtt{x} ),
\label{expxibon}
\end{EQA}
where \( \varkappa = 6.6 \).
Moreover, for \( \mathfrak{z} \ge \mathfrak{z}_{\norms} \), it holds
\begin{EQA}
I\!\!P\bigl( \| \bb{\xi} \|^{2} > \mathfrak{z}, \,
\| \bb{\xi} \|_{\circ} \le \uu_{\norms} \bigr)
& \le &
2 \exp\bigl\{ - \mu_{\norms} \mathfrak{z}/2 - (p/2) \log(1 - \mu_{\norms}) \bigr\}
\\
& = &
2 \exp\bigl\{ - \xx_{\norms} - \gm_{\norms} (\mathfrak{z} - \mathfrak{z}_{\norms})/2 \bigr\}.
\label{Pexp2xitrg}
\end{EQA}
\end{theorem}
It is easy to check that the result continues to hold for the norm of
\( \Pi \bb{\xi} \) for a given sub-projector \( \Pi \) in \( \mathbb{R}^{p} \)
satisfying \( \Pi = \Pi^{\top} \), \( \Pi^{2} \le \Pi \).
As above, denote \( \mathtt{p} \stackrel{\operatorname{def}}{=} \operatorname{tr} (\Pi^{2}) \),
\( \mathtt{v}^{2} \stackrel{\operatorname{def}}{=} 2 \operatorname{tr} (\Pi^{4}) \).
Let \( r_{*} \) be fixed to ensure
\begin{EQA}[c]
I\!\!P\bigl( \| \Pi \bb{\varepsilon} \|_{\circ} \le r_{*} \bigr)
\ge
1/2,
\qquad
\bb{\varepsilon} \sim \cc{N}(0,I\!\!\!I_{p}) .
\label{epsuvgmrPi}
\end{EQA}
The next result is stated for \( \gm_{\norms} \ge r_{*} + \uu_{\norms} \), which simplifies the
formulation.
\begin{theorem}
\label{TxivqLDrgPi}
Let a random vector \( \bb{\xi} \) in \( \mathbb{R}^{p} \) fulfill \eqref{expgamgmmn},
and let \( \Pi \) satisfy \( \Pi = \Pi^{\top} \), \( \Pi^{2} \le \Pi \).
Let some \( \uu_{\norms} \) be fixed.
Then for any \( \mu_{\norms} \le 2/3 \) with \( \gm_{\norms} \mu_{\norms}^{-1} - r_{*} \mu_{\norms}^{-1/2} \ge \uu_{\norms} \),
\begin{EQA}
I\!\!E \exp\Bigl\{ \frac{\mu_{\norms}}{2} (\| \Pi \bb{\xi} \|^{2} - \mathtt{p}) \Bigr\}
\operatorname{1}\hspace{-4.3pt}\operatorname{I}\bigl( \| \Pi^{2} \bb{\xi} \|_{\circ} \le \uu_{\norms} \bigr)
& \le &
2 \exp( \mu_{\norms}^{2} \mathtt{v}^{2}/4 ) ,
\label{EexpmusvA}
\end{EQA}
where \( \mathtt{v}^{2} = 2 \operatorname{tr} (\Pi^{4}) \).
Moreover, if \( \gm_{\norms} \ge r_{*} + \uu_{\norms} \),
then for any \( \mathtt{x} \ge 0 \) and any \( \mathfrak{z} \ge \mathtt{p} + (2 \mathtt{v} \mathtt{x}^{1/2}) \vee (6 \mathtt{x}) \)
\begin{EQA}
\label{logPmunuuncPi}
&& \hspace{-1cm}
I\!\!P\bigl( \| \Pi \bb{\xi} \|^{2} > \mathfrak{z},
\| \Pi^{2} \bb{\xi} \|_{\circ} \le \uu_{\norms}
\bigr)
\\
& \le &
I\!\!P\bigl(
\| \Pi \bb{\xi} \|^{2} > \mathtt{p} + (2 \mathtt{v} \mathtt{x}^{1/2}) \vee (6 \mathtt{x}), \,
\| \Pi^{2} \bb{\xi} \|_{\circ} \le \uu_{\norms}
\bigr)
\le
2 \exp( - \mathtt{x} ).
\label{expxiboPi}
\end{EQA}
\begin{comment}
Moreover, for \( \mathfrak{z} \ge \mathfrak{z}_{\norms} \), it holds
\begin{EQA}
I\!\!P\bigl( \| \bb{\xi} \|^{2} > \mathfrak{z}, \,
\| \bb{\xi} \|_{\circ} %{\vartriangle} \le \uu_{\norms} \bigr)
& \le &
2 \exp\bigl\{ - \mu_{\norms} \mathfrak{z}/2 - (p/2) \log(1 - \mu_{\norms}) \bigr\}
\\
& = &
2 \exp\bigl\{ - \xx_{\norms} - \gm_{\norms} (\mathfrak{z} - \mathfrak{z}_{\norms})/2 \bigr\}.
\label{Pexp2xitrgPi}
\end{EQA}
\end{comment}
\end{theorem}
\section{A bound for the \( \ell_{2} \)-norm under Bernstein conditions}
\label{SBernstqf}
For comparison, we specify the results to the case considered
recently in \cite{Ba2010}.
Let \( \bb{\zeta} \) be a random vector in \( \mathbb{R}^{n} \) whose components \( \zeta_{i} \)
are independent and satisfy the Bernstein type conditions:
for all \( |\lambda| < c^{-1} \)
\begin{EQA}[c]
\label{xiBernybB}
\log I\!\!E e^{\lambda \zeta_{i}}
\le
\frac{\lambda^{2} \sigma^{2}}{1 - c |\lambda|} .
\end{EQA}
Denote \( \bb{\xi} = \bb{\zeta}/(2\sigma) \) and consider
\( \| \bb{\gamma} \|_{\circ} = \| \bb{\gamma} \|_{\infty} \).
Fix \( \gm_{\norms} = \sigma/c \).
If \( \| \bb{\gamma} \|_{\circ} \le \gm_{\norms} \), then
\( 1 - c |\gamma_{i}|/(2\sigma) \ge 1/2 \) and
\begin{EQA}[c]
\log I\!\!E \exp\bigl( \bb{\gamma}^{\top} \bb{\xi} \bigr)
\le
\sum_{i} \log I\!\!E \exp\Bigl( \frac{\gamma_{i} \zeta_{i}}{2\sigma} \Bigr)
\le
\sum_{i} \frac{|\gamma_{i}/(2\sigma)|^{2} \sigma^{2}}{1 - c |\gamma_{i}|/(2 \sigma)}
\le
\| \bb{\gamma} \|^{2}/2 .
\label{logexpggi}
\end{EQA}
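The chain of inequalities above can be verified term by term for any concrete \( \bb{\gamma} \) with \( \| \bb{\gamma} \|_{\infty} \le \sigma/c \); the vector below is an arbitrary example:

```python
# Check the Bernstein-to-subgaussian chain term by term:
# with |gamma_i| <= sigma/c one has 1 - c|gamma_i|/(2 sigma) >= 1/2, hence
# (gamma_i/(2 sigma))^2 sigma^2 / (1 - c|gamma_i|/(2 sigma)) <= gamma_i^2 / 2.
sigma, c = 1.0, 0.5
g_n = sigma / c                       # the choice gamma_norm = sigma / c
gamma = [1.2, -0.7, 2.0, 0.1]         # satisfies max |gamma_i| <= g_n
assert max(abs(t) for t in gamma) <= g_n

terms = [(t / (2 * sigma)) ** 2 * sigma ** 2 / (1 - c * abs(t) / (2 * sigma))
         for t in gamma]
```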
Let also \( S \) be some linear subspace of \( \mathbb{R}^{n} \) with dimension \( \mathtt{p} \)
and \( \Pi_{S} \) denote the projector on \( S \).
To apply the result of Theorem~\ref{TxivqLDrg}, the value \( r_{*} \) has to be fixed.
We use the fact that
the infinity norm \( \| \bb{\varepsilon} \|_{\infty} \) concentrates
around \( \sqrt{2 \log p} \).
\begin{lemma}
\label{LBeboundrs}
It holds for a standard normal vector \( \bb{\varepsilon} \in \mathbb{R}^{p} \)
with \( r_{*} = \sqrt{2 \log p} \)
\begin{EQA}[c]
I\!\!P\bigl( \| \bb{\varepsilon} \|_{\circ} \le r_{*} \bigr)
\ge
1/2 .
\label{epsvBe}
\end{EQA}
\end{lemma}
\begin{proof}
By definition
\begin{EQA}[c]
I\!\!P\bigl( \| \bb{\varepsilon} \|_{\circ} > r_{*} \bigr)
\le
I\!\!P\bigl( \| \bb{\varepsilon} \|_{\infty} > \sqrt{2 \log p} \bigr)
\le
p I\!\!P\bigl( |\varepsilon_{1}| > \sqrt{2 \log p} \bigr)
\le
1/2
\label{PsdimpABe}
\end{EQA}
as required.
\end{proof}
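The union bound in this proof can be evaluated exactly via the Gaussian tail identity \( I\!\!P(|\varepsilon_{1}| > t) = \operatorname{erfc}(t/\sqrt{2}) \); a quick check over a few sample dimensions:

```python
import math

# Exact evaluation of the union bound used in the proof:
# p * P(|eps_1| > sqrt(2 log p)), with P(|eps_1| > t) = erfc(t / sqrt(2)).
def union_bound(p):
    t = math.sqrt(2 * math.log(p))
    return p * math.erfc(t / math.sqrt(2))
```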
Now the general bound of Theorem~\ref{TxivqLDrg} is applied to bounding the norm of
\( \| \Pi_{S} \bb{\xi} \| \).
For simplicity of formulation we assume that \( \gm_{\norms} \ge \uu_{\norms} + r_{*} \).
\begin{theorem}
\label{Tchi2Bern}
Let \( S \) be some linear subspace of \( \mathbb{R}^{n} \) with dimension \( \mathtt{p} \).
Let \( \gm_{\norms} \ge \uu_{\norms} + r_{*} \).
If the coordinates \( \zeta_{i} \) of \( \bb{\zeta} \) are independent and satisfy
\eqref{xiBernybB}, then for all \( \mathtt{x} \),
\begin{EQA}
I\!\!P\bigl(
(4 \sigma^{2})^{-1} \| \Pi_{S} \bb{\zeta} \|^{2}
> \mathtt{p} + \sqrt{\varkappa \mathtt{x} \mathtt{p}} \vee (\varkappa \mathtt{x}), \,
\| \Pi_{S} \bb{\zeta} \|_{\infty} \le 2 \sigma \uu_{\norms}
\bigr)
& \le &
2 \exp( - \mathtt{x} ),
\label{expxiboBe}
\end{EQA}
\begin{comment}
while for \( \mathfrak{z} > \mathfrak{z}_{\norms} \)
\begin{EQA}
I\!\!P\bigl( (4 \sigma^{2})^{-1} \| \Pi_{S} \bb{\zeta} \|^{2} > \mathfrak{z}, \,
\| \bb{\zeta} \|_{\circ} %{\vartriangle} \le 2 \sigma \uu_{\norms}
\bigr)
& \le &
2 \exp\bigl\{ - \xx_{\norms} - \gm_{\norms} (\mathfrak{z} - \mathfrak{z}_{\norms})/2 \bigr\}.
\label{Pexp2xi2n}
\end{EQA}
\end{comment}
\end{theorem}
The bound of \cite{Ba2010} reads
\begin{EQA}[c]
\label{Bernmainyb}
I\!\!P\biggl(
\| \Pi_{S} \bb{\zeta} \|_{2}
> \bigl( 3 \sigma \, \vee \, \sqrt{6 c u} \bigr) \sqrt{\mathtt{x} + 3 \mathtt{p}} , \,\,
\| \Pi_{S} \bb{\zeta} \|_{\infty} \le 2 \sigma \uu_{\norms}
\biggr)
\le
e^{- \mathtt{x}} .
\end{EQA}
As expected, in the region \( \mathtt{x} \le \xx_{c} \) of Gaussian approximation, the bound
of Baraud is not sharp and actually quite rough.
% Source: ``Sharp deviation bounds for quadratic forms'', arXiv:1302.1699 (math.PR).
% Source: ``Quadratization in discrete optimization and quantum mechanics'', arXiv:1901.04405.
\section{Introduction}
\noindent\rule{\textwidth}{0.4pt}
\vspace{-1.25mm}
\indent When optimizing discrete functions, the task is often easier when the function is quadratic than when it is of higher degree. But notice that the cubic and quadratic functions:
\vspace{-4mm}
\begin{align}
b_1b_2+b_2b_3+b_3b_4 -4b_1b_2b_3 & \hspace{5mm} \textrm{(cubic)},\label{eq:intro_cubic}\\
b_1b_2 + b_2b_3 + b_3b_4 + 4b_1 -4b_1b_2 -4b_1b_3 & \hspace{5mm}\textrm{(quadratic)},\label{eq:intro_quadratic}
\end{align}
\vspace{1mm}
\noindent where each $b_i$ can either be 0 or 1, both never go below the value of $-2$, and all minima occur at $(b_1,b_2,b_3,b_4) = (1,1,1,0)$. Therefore if we are interested in the ground state of a discrete function of degree $k$, we may optimize either function and get exactly the same result. \textbf{Part \ref{partDiagonal}} gives more than 40 different ways to do this, almost all of them published in the last 5 years.
\vspace{0.5mm}
\noindent\rule{\textwidth}{0.4pt}
\\
\vspace{-2mm}
\\
\indent The binary variables $b_i$ can be either of the eigenvalues of the matrix $b$ below, which is related to the Pauli $z$ matrix by $z=2b-\openone$. The Pauli matrices $x,y,z,\openone$ are listed below:
\vspace{0mm}
\begin{equation}
b\equiv\begin{pmatrix}1 & 0\\
0 & 0
\end{pmatrix},\,z\equiv\begin{pmatrix}1 & 0\\
0 & -1
\end{pmatrix},\, x\equiv\begin{pmatrix}0 & 1\\
1 & 0\end{pmatrix},\,y\equiv\begin{pmatrix}0 & -\textrm{i}\\
\textrm{i} & 0
\end{pmatrix},\,\openone\equiv\begin{pmatrix}1 & 0\\
0 & 1
\end{pmatrix}. \label{eq:pauli}
\end{equation}
\noindent Any Hermitian $2\times 2$ matrix can be written as a linear combination of the Pauli matrices, so we can describe the Hamiltonian of any number of spin-$\nicefrac{1}{2}$ particles by a function of Pauli matrices acting on each particle, for instance:
\vspace{-5mm}
\begin{align}
x_1y_2z_3y_4 + y_1x_2z_3y_4 + x_1x_2y_3 & \hspace{5mm} \textrm{(cubic)},\label{eq:intro_quantum_cubic}\\
x_1y_4 + x_2y_4 + x_3 & \hspace{5mm} \textrm{(quadratic)},\label{eq:intro_quantum_quadratic}
\end{align}
\noindent where the coefficients tell us about the strengths of couplings between these particles. The Schr\"{o}dinger equation tells us that the eigenvalues of the Hamiltonian are the allowed energy levels and their eigenvectors (wavefunctions) are the corresponding physical states. More generally these do not have to be spins but can be any type of qubits, and we can encode the solution to \textit{any} problem in the ground state of a Hamiltonian, then solve the problem by finding the lowest energy state of the physical system (this is called adiabatic quantum computing). Eqs. \eqref{eq:intro_quantum_quadratic} and \eqref{eq:intro_quantum_cubic} have exactly the same energy spectra, so Eq. \eqref{eq:intro_quantum_quadratic} is an example of a type of quadratization.
Two-body physical interactions occur more naturally than many-body interactions so \textbf{Parts \ref{partTransverseIsing}-\ref{partGeneral}} give more than 30 different ways to quadratize general Hamiltonians (some of these methods may use $d\times d$ matrices instead of only the $2\times2$ matrices in Eq. \eqref{eq:pauli}, meaning that we can have types of qudits that are not qubits). All of these methods were published during the last 15 years.
The optimization problems of Eqs. \eqref{eq:intro_cubic}-\eqref{eq:intro_quadratic} are specific cases of the type in Eqs. \eqref{eq:intro_quantum_cubic}-\eqref{eq:intro_quantum_quadratic}, but with only $b$ matrices.
\noindent\rule{\textwidth}{0.4pt}
\end{titlepage}
\addtocounter{page}{1}
\restoregeometry
\newpage
\part{{\normalsize{\underline{Diagonal Hamiltonians (pseudo-Boolean functions)}}}\label{partDiagonal}}
\section{Methods that introduce ZERO auxiliary variables}
\subsection{Deduction Reduction (Deduc-reduc; Tanburn, Okada, Dattani, 2015)} \label{subsec:deduc_reduc}
\vspace{-5mm}
\secspace\emph{\textbf{Summary}}
We look for \emph{deductions} (e.g. $b_{1}b_{2}=0$) that must hold true at the global minimum.
These can be found by \emph{a priori} knowledge of the given problem, or by enumerating solutions of a small subset of the variables.
We can then substitute high-order terms using the low-order terms of the deduction, and add on a penalty term to preserve the ground states \cite{Tanburn2015c}.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $0$ auxiliary variables needed
\item For a particular value of $m$, we have $\binom{n}{m}$ different $m$-variable subsets of the $n$ variable problem, and $\binom{n}{m} 2^m$ evaluations of the objective function to find all possible $m$-variable deductions, whereas $2^n$ evaluations are enough to solve the entire problem. We therefore choose $m \lll n$.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item No auxiliary variables needed.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item When deductions cannot be determined naturally (as in the Ramsey number determination problem, see Example \ref{subsec:Example_Ramsey_deduc_reduc}), deductions need to be found by `brute force', which scales exponentially with respect to $m$.
For highly connected systems (systems with a large number of non-zero coefficients), the value of $m$ required to find even one deduction can be prohibitively large.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{comment}
Examples are difficult to illustrate in a paper, since they involve searching for patterns in exhaustive searches.
\end{comment}
Consider the objective function:
\vspace{-2mm}
\begin{equation}
H_{4{\rm -local}}=b_{1}b_{2}(4+b_{3}+b_{3}b_{4})+b_{1}(b_{3}-3)+b_{2}(1-2b_{3}-b_{4})+F(b_{3},b_{4},b_{5},\ldots,b_{N})
\end{equation}
where $F$ is any quadratic polynomial in $b_{i}$ for $i\ge 3$.
Since
\vspace{-2mm}
\begin{equation}
H_{4{\rm -local}}\left(1, 1, b_3, b_4, ...\right) > H_{4{\rm -local}}\left(0, 0, b_3, b_4, ...\right), H_{4{\rm -local}}\left(0, 1, b_3, b_4, ...\right), H_{4{\rm -local}}\left(1, 0, b_3, b_4, ...\right),
\end{equation}
\uline{\textbf{\textit{it must be the case that $b_{1}b_{2}=0$}}}.
Specifically, for the 4 assignments of $(b_{3},b_{4})$, we see that $b_{1}b_{2}=0$ at every minimum of $H_{4{\rm -local}}-F$.
Using deduc-reduc we have:
\vspace{-2mm}
\begin{equation}
H_{2{\rm -local}}=6b_{1}b_{2}+b_{1}(b_{3}-3)+b_{2}(1-2b_{3}-b_{4})+F(b_{3},b_{4},b_{5},\ldots,b_{N}),
\end{equation}
which has the same global minima as $H_{4-\textrm{local}}$ but one fewer quartic and one fewer cubic term.
The coefficient of $b_1 b_2$ was chosen as $6$ because $6 \ge \max \left( 4+b_{3}+b_{3}b_{4} \right)$.
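Since the \( F \) part cancels when comparing \( H_{4\rm{-local}} \) with \( H_{2\rm{-local}} \), the deduction and the reduction can be verified exhaustively over \( (b_{1},\ldots,b_{4}) \); a short sketch:

```python
from itertools import product

def h4(b1, b2, b3, b4):
    """H_4-local minus the F part (F cancels in the comparison)."""
    return b1*b2*(4 + b3 + b3*b4) + b1*(b3 - 3) + b2*(1 - 2*b3 - b4)

def h2(b1, b2, b3, b4):
    """Deduc-reduc quadratization, again without F."""
    return 6*b1*b2 + b1*(b3 - 3) + b2*(1 - 2*b3 - b4)

# For every assignment of (b3, b4) the two functions have the same minimum
# over (b1, b2) and the same minimizers, all of which satisfy b1*b2 = 0.
for b3, b4 in product((0, 1), repeat=2):
    vals4 = {(b1, b2): h4(b1, b2, b3, b4) for b1, b2 in product((0, 1), repeat=2)}
    vals2 = {(b1, b2): h2(b1, b2, b3, b4) for b1, b2 in product((0, 1), repeat=2)}
    m4, m2 = min(vals4.values()), min(vals2.values())
    argmin4 = {k for k, v in vals4.items() if v == m4}
    argmin2 = {k for k, v in vals2.items() if v == m2}
    assert m4 == m2 and argmin4 == argmin2
    assert all(b1 * b2 == 0 for b1, b2 in argmin4)
```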
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper, with more implementation details, and application to integer factorization: \cite{Tanburn2015c}.
\end{itemize}
\newpage
\subsection{ELC Reduction (Ishikawa, 2014)}
\secspace\emph{\textbf{Summary}}
An Excludable Local Configuration (ELC) is a partial assignment of variables that make it impossible to achieve the minimum.
We can therefore add a term that corresponds to the energy of this ELC without changing the solution to the minimization problem.
In practice we can eliminate every monomial containing a variable that is set to 0, and substitute away any variable that is set to 1.
Given a general objective function we can try to find ELCs by enumerating solutions of a small subset of variables in the problem \cite{Ishikawa2014}.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $0$ auxiliary variables needed
\item For a particular value of $m$, we have $\binom{n}{m}$ different $m$-variable subsets of the $n$ variable problem, and $\binom{n}{m} 2^m$ evaluations of the objective function to find all possible $m$-variable deductions, whereas $2^n$ evaluations are enough to solve the entire problem. We therefore choose $m \lll n$.
\item Approximate methods exist which have been shown to be much faster and give good approximations to the global minimum \cite{Ishikawa2014}.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item No auxiliary variables needed.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item No known way to find ELCs except by `brute force', which scales exponentially with respect to $m$.
\item ELCs do not always exist.
\end{itemize}
\secspace\emph{\textbf{Example}}
Consider the objective function:
\begin{equation}
H_{{\rm 3-local}}=b_{1}b_{2}+b_{2}b_{3}+b_{3}b_{4}-4b_{1}b_{2}b_{3}. \label{eq:ELCexample}
\end{equation}
If $b_{1}b_{2}b_{3}=0$, no assignment of the remaining variables can reach an energy as low as the minimum attained when $b_{1}b_{2}b_{3}=1$.
Hence this gives us \textit{twelve} ELCs, and one example is $(b_{1},b_{2},b_{3})=(1,0,0)$ which we can use to form the polynomial:
\begin{align}
H_{2\rm{-local}}&=H_{{\rm 3-local}}+4b_{1}(1-b_{2})(1-b_{3})\\
&=b_{1}b_{2}+b_{2}b_{3}+b_{3}b_{4}+4b_{1}-4b_{1}b_{2}-4b_{1}b_{3}. \label{eq:ELCreduced}
\end{align}
In both Eqs. \eqref{eq:ELCexample} and \eqref{eq:ELCreduced}, the only global minima occur when $b_1b_2b_3 = 1$.
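This preservation of the global minima can be confirmed by brute force over all 16 assignments; a short sketch:

```python
from itertools import product

def h3(b):  # the 3-local objective of Eq. (eq:ELCexample)
    b1, b2, b3, b4 = b
    return b1*b2 + b2*b3 + b3*b4 - 4*b1*b2*b3

def h2(b):  # Eq. (eq:ELCreduced): h3 plus the ELC penalty 4*b1*(1-b2)*(1-b3)
    b1, b2, b3, b4 = b
    return b1*b2 + b2*b3 + b3*b4 + 4*b1 - 4*b1*b2 - 4*b1*b3

assignments = list(product((0, 1), repeat=4))
m3 = min(h3(b) for b in assignments)
m2 = min(h2(b) for b in assignments)
argmin3 = {b for b in assignments if h3(b) == m3}
argmin2 = {b for b in assignments if h2(b) == m2}
```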
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper and application to computerized image denoising: \cite{Ishikawa2014}.
\end{itemize}
\newpage
\subsection{Groebner Bases}
\begin{comment}
A Groebner basis is a set of equations representing the zeros of a polynomial (e.g. of the form Eq. \ref{eq:Z+ZZ+ZZZ+ZZZZ...}) such that the number of variables is the same, and the zeros of Eq. \ref{eq:Z+ZZ+ZZZ+ZZZZ...} are the same as the zeros of all equations of the Groebner basis.
If a Groebner basis can be found such that all equations are quadratic, then quadratization of Eq. \ref{eq:Z+ZZ+ZZZ+ZZZZ...} is as simple as finding the Groebner basis.
\begin{comment}
Nike, this is NOT true.
Once we have the Groebner basis, we still need a single non-negative expression to minimize.
This involves finding appropriate coefficients of the equations to add together (non-trivial and what they do in the 1Qbit paper) OR squaring each one, which gives a quartic.
\end{comment}
\begin{comment}
Use the multivariate extension of Gaussian elimination to find a different set of equations which have the same vanishing set.
It has been shown these can be used to reduce and embed factorisations of all bi-primes up to 200,000 \cite{Dridi2016}.
Some work has been done in the field of ``boolean Groebner bases'', but these consider bases of a boolean ring over $\mathbb{F}_{2}$ rather than $\mathbb{Q}$.
\end{comment}
\begin{comment}
\textcolor{red}{Richard: too much algebra here. People come from different
backgrounds. Some people don't know what rings are, and some people
get scared when they see fancy symbols like $\mathcal{V}$. It should
be possible to describe this to A-level students because that way
we know everyone (chemists, physicists, engineers, computer scientsists,
etc.) will understand it. I guess I already did this in a way that
A-level students would understand, in the ``summary section''. For
``descriptions'' I envision that we explain exactly how to DO the
methods, and for Groebner bases, I guess this would be equivalent
to listing the known algorithms (such as Buchberger's) and known codes
available. I guess another description that could be added is the
method in the 1Qbit paper, where they have a ``max\_cutoff'' and
a ``min\_cutoff'' and Groebnerized everything in between there,
or something like that.}
Instead of solving the equations $f_{1}=f_{2}=...=f_{n}=0$ directly,
Groebner bases consider the points on which they vanish, also known
as the \emph{algebraic variety} $\mathcal{V}(f_{1},...,f_{n})$. Groebner
bases are a 'pleasant' generating set for this variety.
This is done by first defining a monomial ordering on the polynomial
ring $F[b_{1},...,b_{n}]$, chosen so that higher order mononials
are considered 'bigger' than low order monomials. Monomials of the
same degree are sorted by the lexicographic ordering of the variables.
Then define the leading term of a polynomial $f$ with respect to
the monomial ordering to be $\mathrm{LT}(f)$, the largest monomial
of $f$. Then $g_{1},...,g_{m}$ is a Grobner basis if $\mathcal{V}(f_{1},...,f_{n})$=$\mathcal{V}(g_{1},...,g_{m})$
(they have the same vanishing set) and for every function $f$ in
the ideal of $\left\langle f_{1},...f_{n}\right\rangle $, we have
that the leading term of $f$ is divisible by the leading term of
$g_{i}$ for some $i$.
Grobner bases are very useful because they allow computation of remainders,
using the multivariate division algorithm. They are useful in this
context since the polynomials of the Grobner basis will have smaller
degrees than our original function, reducing the need for reduction
by other means.
\end{comment}
\secspace\emph{\textbf{Summary}}
Given a set of polynomials, a Groebner basis is another set of polynomials that have exactly the same zeros.
The advantage of a Groebner basis is that it has nicer algebraic properties than the original equations; in particular, its polynomials tend to have smaller degree.
The algorithms for calculating Groebner bases are generalizations of Euclid's algorithm for the polynomial greatest common divisor.
Work has been done in the field of `Boolean Groebner bases', but there the coefficients of the functions lie in $\mathbb{F}_{2}$ rather than $\mathbb{Q}$, even though the variables are Boolean in both settings.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $0$ auxiliary variables needed.
\item $\mathcal{O}\left( 2^{2^{n}} \right)$ in general, $\mathcal{O}(d^{n^{2}})$ if the zeros of the equations form a set of discrete points, where $d$ is the degree of the polynomial and $n$ is the number of variables \cite{Bardet2002}.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item No auxiliary variables needed.
\item General method, which can be used for other rings, fields or types of variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Best algorithms for finding Groebner bases scale double exponentially in $n$.
\item Only works for objective functions whose minimization corresponds to solving systems of discrete equations, as the method only preserves roots, not minima.
\end{itemize}
\secspace\emph{\textbf{Example}}
Consider the following pair of equations:
\begin{equation}
b_1 b_2 b_3 b_4 + b_1 b_3 + b_2 b_4 - b_3 = b_1 + b_1 b_2 + b_3 - 2 = 0.
\end{equation}
\noindent Feeding these to Mathematica's ${\tt GroebnerBasis}$ function, along with the binarizing $b_1(b_1-1)=\ldots=b_4(b_4-1)=0$ constraints, gives a Groebner basis:
\begin{equation}
\left\{ b_4 b_3 - b_4, b_2 + b_3 - 1, b_1 - 1 \right\}.
\end{equation}
From this we can immediately read off the solutions $b_1=1$, $b_2=1-b_3$ and reduce the problem to $b_3b_4-b_4=0$. Solving this gives the final solution set $(b_1, b_2, b_3, b_4) \in \left\{ (1, 0, 1, 0), (1, 0, 1, 1), (1, 1, 0, 0) \right\}$, which coincides with the solution set of the original 4-local problem.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Reduction and embedding of factorizations of all bi-primes less than $200,000$: \cite{Dridi2016}.
\end{itemize}
\begin{comment}
To quadratize the objective function:
\begin{equation}
H_{4{\rm -local}}=(z_{1}z_{2}-z_{3}z_{4}-1)^{2},
\end{equation}
we first linearize the part that is being squared. In Mathematica,
the ${\tt GroebnerBasis}$ function gives us:
Then when we square it we get a quadratic function:
\begin{equation}
H_{2{\rm -local}}=1-2z_{1}-2z_{2}+3z_{3}+3z_{4}-2z_{3}z_{4}.
\end{equation}
In both cases, the only global minima occur when:
\begin{equation}
(z_{3,}z_{4})=1,\,z_{1}\,\text{or}\,z_{2}=0.
\end{equation}
\textcolor{red}{Wait, this is far from true!}
\end{comment}
\newpage
\subsection{Application of ELM (Dattani, 2018)}
\vspace{-3mm}
\secspace\emph{\textbf{Summary}}
We use the formula from \cite{Ali2008} for representing any function of three binary variables:
\vspace{-4mm}
{\footnotesize
\begin{align}
f(b_1,b_2,b_3) = {}& \big( f(1,1,1) + f(1,0,0) - f(1,1,0) - f(1,0,1) - f(0,1,1) - f(0,0,0) \nonumber \\
& + f(0,0,1) + f(0,1,0) \big)b_1b_2b_3 + \big(f(0,1,1) + f(0,0,0) - f(0,0,1) - f(0,1,0) \big)b_2b_3 \nonumber \\
& + \big( f(1,0,1) + f(0,0,0) - f(0,0,1) - f(1,0,0) \big)b_1b_3 \nonumber \\
& + \big(f(1,1,0) + f(0,0,0) - f(1,0,0) - f(0,1,0) \big)b_1b_2 + \big( f(0,1,0) - f(0,0,0) \big)b_2 \nonumber \\
& + \big(f(1,0,0) - f(0,0,0)\big)b_1 + \big( f(0,0,1) - f(0,0,0) \big)b_3 + f(0,0,0).
\end{align}
}
\vspace{-4mm}
If the cubic term is zero, then the function becomes quadratic. We can use ELM (Energy Landscape Manipulation) to change the energy landscape \textit{without} changing the ground state \cite{Tanburn2015d}. In this case, we apply ELM in order to make the cubic term zero.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item No auxiliary variables.
\item May require many evaluations of the cubic function, varying coefficients in order to find the right ELM coefficients.
\item For a particular value of $m$, we have $\binom{n}{m}$ different $m$-variable subsets of the $n$ variable problem, and $\binom{n}{m} 2^m$ evaluations of the objective function to find all possible $m$-variable deductions, whereas $2^n$ evaluations is enough to solve the entire problem. We therefore choose $m \lll n$.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Can be generalized to arbitrary $k$-local functions, but the ELM constraints may become harder to achieve.
\item Can quadratize an entire cubic function (or a cubic part of a more general function) with no auxiliary qubits.
\item Can reproduce the full spectrum.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item May not always be possible.
\item May require a local search to find appropriate deductions.
\end{itemize}
\secspace\emph{\textbf{Example}}
In order to reduce the number of constraints required for the cubic term to be zero, we will assume that Deduc-Reduc told us that $(1-b_1)(1-b_2) + (1-b_2)(1-b_3) + (1-b_1)(1-b_3) = 0$ when the overall function is minimized, which means that the ground state only occurs when at least two of the variables are 1. This does not allow us to assign the linear terms, but it assigns all quadratic terms to satisfy $b_ib_j=1$. It also suggests that $b_1b_2b_3$ can be reduced to a linear term, though we do not know whether that term is $b_1$, $b_2$, or $b_3$. Setting $f(b_1,b_2,b_3)$ to zero whenever fewer than two of the variables are 1, we obtain the following constraint on the cubic term:
\begin{align}
f(1,1,1) - f(1,1,0) - f(1,0,1) - f(0,1,1) = 0.
\end{align}
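The role of the cubic coefficient can be made concrete: by the interpolation formula above, a function of three binary variables is quadratic exactly when the signed sum of its eight values vanishes. A small Python sketch (illustrative only; the two test functions are hypothetical):

```python
def cubic_coefficient(f):
    # Coefficient of b1*b2*b3 in the unique multilinear form of f,
    # read off from the interpolation formula above.
    return (f(1, 1, 1) + f(1, 0, 0) - f(1, 1, 0) - f(1, 0, 1)
            - f(0, 1, 1) - f(0, 0, 0) + f(0, 0, 1) + f(0, 1, 0))

# A genuinely quadratic function has cubic coefficient 0, so no ELM is needed:
print(cubic_coefficient(lambda b1, b2, b3: 2*b1*b2 - b3 + 1))  # 0
# A pure cubic monomial has cubic coefficient 1:
print(cubic_coefficient(lambda b1, b2, b3: b1*b2*b3))          # 1
```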
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item The method was first presented in the first arXiv version of this book \cite{Dattani2019}.
\end{itemize}
\newpage
\subsection{Split Reduction (Okada, Tanburn, Dattani, 2015)}
\secspace\emph{\textbf{Summary}}
Much of the problem can often be reduced by conditioning on the most highly connected variables.
We call each of these operations a \emph{split}.
\secspace\emph{\textbf{Cost}}
Usually slightly sub-exponential in the number of splits, as the number of problems to solve at most doubles with every split, but often does not double (since entire cases can get eliminated by some splits, as in the example below).
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item This method can be applied to any problem and can be very effective on problems with a few very connected variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Multiple runs of the optimization procedure need to be made, and the number of runs can often grow almost exponentially with respect to the number of splits.
\end{itemize}
\secspace\emph{\textbf{Example}}
Consider the simple objective function
\begin{equation}
H=1+b_{1}b_{2}b_{5}+b_{1}b_{6}b_{7}b_{8}+b_{3}b_{4}b_{8}-b_{1}b_{3}b_{4}.
\end{equation}
In order to quadratize $H$, we first have to choose a variable over which to split.
In this case $b_{1}$ is the obvious choice since it is present in the most terms and contributes to the quartic term.
We then obtain two different problems:
\begin{eqnarray}
H_{0} & = & 1+b_{3}b_{4}b_{8}\\
H_{1} & = & 1+b_{2}b_{5}+b_{6}b_{7}b_{8}+b_{3}b_{4}b_{8}-b_{3}b_{4}.
\end{eqnarray}
\noindent At this point, we could split $H_{0}$ again and solve it entirely, or spend one of the auxiliary variables we saved by splitting in order to quadratize it.
To solve $H_{1}$, we can split again on $b_{8}$, resulting in two quadratic problems:
\begin{eqnarray}
H_{1,0} & = & 1+b_{2}b_{5}-b_{3}b_{4}\\
H_{1,1} & = & 1+b_{2}b_{5}+b_{6}b_{7}.
\end{eqnarray}
Now both of these problems are quadratic.
Hence we have reduced our original, hard problem into 3 easy problems, requiring only 2 extra (much easier) runs of our minimization algorithm, and without needing any auxiliary variables.
\vspace{5mm}
Note that the number of quadratic problems to solve is 3, which is smaller than the $2^2$ ``exponential'' cost we would incur if the number of problems were (hypothetically) to double with each split. This is a good example of the typical \textit{\textbf{sub}}-exponential scaling of split reduction.
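The bookkeeping above can be checked numerically: the minimum of $H$ must equal the smallest of the minima of the three subproblems. A brute-force sketch (illustrative only):

```python
from itertools import product

def H(b1, b2, b3, b4, b5, b6, b7, b8):
    return 1 + b1*b2*b5 + b1*b6*b7*b8 + b3*b4*b8 - b1*b3*b4

# The three subproblems produced by splitting on b1, then (inside H1) on b8:
def H0(b3, b4, b8):      return 1 + b3*b4*b8           # b1 = 0
def H10(b2, b3, b4, b5): return 1 + b2*b5 - b3*b4      # b1 = 1, b8 = 0
def H11(b2, b5, b6, b7): return 1 + b2*b5 + b6*b7      # b1 = 1, b8 = 1

full_min  = min(H(*bs) for bs in product((0, 1), repeat=8))
split_min = min(min(H0(*bs)  for bs in product((0, 1), repeat=3)),
                min(H10(*bs) for bs in product((0, 1), repeat=4)),
                min(H11(*bs) for bs in product((0, 1), repeat=4)))
print(full_min == split_min)  # True
```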
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper and application to Ramsey number determination: \cite{Okada2015}.
\end{itemize}
\newpage
\section{Methods that introduce auxiliary variables to quadratize a SINGLE negative term (Negative Term Reductions, NTR)}
\subsection{NTR-KZFD (Kolmogorov \& Zabih, 2004; Freedman\& Drineas, 2005)} \label{subsec:Negative-Monomial-Reduction}
\secspace\emph{\textbf{Summary}}
For a negative term $-b_{1}b_{2}...b_{k}$, introduce a single auxiliary variable $b_a$ and make the substitution:
\begin{equation}
-b_{1}b_{2} \ldots b_{k} \rightarrow (k-1)b_a - \sum_ib_ib_a.
\end{equation}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary variable for each $k$-local term.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item All resulting quadratic terms are submodular (have negative coefficients).
\item Can reduce arbitrary order terms with only 1 auxiliary.
\item Reproduces the full spectrum.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for negative terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{align}
H_{{\rm 6-local}} & =-2b_{1}b_{2}b_{3}b_{4}b_{5}b_{6} + b_5b_6,
\end{align}
has a unique minimum energy of -1 when all $b_i=1$.
\begin{equation}
H_{{\rm 2-local}}=2\left( 5b_a-b_{1}b_a-b_{2}b_a-b_{3}b_a-b_{4}b_a-b_{5}b_a-b_{6}b_a \right) + b_5b_6
\end{equation}
has the same unique minimum energy, and it occurs at the same place (all $b_i=1$), with $b_a=1$.
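The substitution can be verified exhaustively for small $k$; the following Python sketch (illustrative, not from the original papers) checks the identity $-b_1\cdots b_k = \min_{b_a}\big((k-1)b_a - \sum_i b_i b_a\big)$ over all $2^k$ assignments:

```python
from itertools import product
from math import prod

def check_ntr_kzfd(k):
    # Verify -b1*...*bk == min over ba of (k-1)*ba - sum_i(bi)*ba.
    for bs in product((0, 1), repeat=k):
        lhs = -prod(bs)
        rhs = min(((k - 1) - sum(bs)) * ba for ba in (0, 1))
        if lhs != rhs:
            return False
    return True

print(check_ntr_kzfd(3), check_ntr_kzfd(6))  # True True
```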
\begin{comment}
\secspace\emph{\textbf{Example}}
\begin{align}
H_{{\rm 5-local}} & =-b_{1}b_{2}b_{3}b_{4}b_{5}-b_{1}b_{2}b_{3}b_{4}b_{6},
\end{align}
has a unique minimum energy of -2 when all $b$'s are 1.
\begin{equation}
H_{{\rm 3-local}}=(3-b_{1}-b_{2}-b_{3}-b_{4})(b_{5}+b_{6})a
\end{equation}
has the same unique minimum energy, and it occurs at the same place,
with $a=1$.
\end{comment}
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
-b_{1}b_{2} \ldots b_{k} &= \min_{b_a} \left( (k-1-\sum_i b_{i})b_a \right)\\
&\rightarrow \left( (k-1-\sum_i b_{i})b_a \right).
\end{align}
\secspace\emph{\textbf{Alternate Names}}
\begin{itemize}
\item "Standard quadratization" of negative monomials \cite{Anthony2017}.
\item $s_k(b,b_a)$ \cite{Anthony2017}
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2004: Kolmogorov and Zabih presented this for cubic terms \cite{Kolmogorov2004}.
\item 2005: Generalized to arbitrary order by Freedman and Drineas \cite{Freedman2005}.
\item Discussion: \cite{Ishikawa2011}, \cite{Anthony2016}.
\end{itemize}
\newpage
\subsection{NTR-ABCG (Anthony, Boros, Crama, Gruber, 2014)} \label{subsec:Negative-Monomial-Reduction-2}
\secspace\emph{\textbf{Summary}}
For a negative term $-b_{1}b_{2}...b_{k}$, introduce a single auxiliary variable $b_a$ and make the substitution:
\begin{equation}
-b_{1}b_{2} \ldots b_{k} \rightarrow \sum_{i=1}^{k-1}b_{i}-\sum_{i=1}^{k-1}b_{i}b_{k}-\sum_{i=1}^{k}b_{i}b_{a}+(k-1)b_{k}b_{a}.
\end{equation}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary variable for each $k$-local term.
\item 1 non-submodular term for each $k$-local term (and it is quadratic).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Can reduce arbitrary order terms with only 1 auxiliary.
\item Reproduces the full spectrum.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for negative terms.
\item Turns a symmetric term into a non-symmetric term (but only $b_{k}$ is treated asymmetrically).
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{align}
H_{{\rm 6-local}} & =-2b_{1}b_{2}b_{3}b_{4}b_{5}b_{6} + b_5b_6,
\end{align}
has a unique minimum energy of -1 when all $b_i=1$.
Applying the substitution with $k=6$ (doubled, to match the coefficient $-2$) gives
\begin{equation}
H_{2\textrm{-local}}=2\left( \sum_{i=1}^{5}b_{i}-\sum_{i=1}^{5}b_{i}b_{6}-\sum_{i=1}^{6}b_{i}b_{a}+5b_{6}b_{a} \right) + b_5b_6,
\end{equation}
which has the same unique minimum energy, and it occurs at the same place (all $b_i=1$), with $b_a=1$.
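The general substitution can be checked exhaustively; this Python sketch (illustrative only) verifies it for all $2^k$ assignments:

```python
from itertools import product
from math import prod

def check_ntr_abcg(k):
    # Verify -b1*...*bk == min over ba of
    # sum_{i<k} bi - sum_{i<k} bi*bk - sum_{i<=k} bi*ba + (k-1)*bk*ba.
    for bs in product((0, 1), repeat=k):
        bk, S = bs[-1], sum(bs[:-1])
        rhs = min(S - S*bk - sum(bs)*ba + (k - 1)*bk*ba for ba in (0, 1))
        if rhs != -prod(bs):
            return False
    return True

print(check_ntr_abcg(4), check_ntr_abcg(6))  # True True
```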
\begin{comment}
\secspace\emph{\textbf{Example}}
\begin{align}
H_{{\rm 5-local}} & =-b_{1}b_{2}b_{3}b_{4}b_{5}-b_{1}b_{2}b_{3}b_{4}b_{6},
\end{align}
has a unique minimum energy of -2 when all $b$'s are 1.
\begin{equation}
H_{{\rm 3-local}}=(3-b_{1}-b_{2}-b_{3}-b_{4})(b_{5}+b_{6})a
\end{equation}
has the same unique minimum energy, and it occurs at the same place,
with $a=1$.
\end{comment}
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
-b_{1}b_{2}\ldots b_{k}&\rightarrow(k-1)b_{k}b_{a}-\sum_{i=1}^{k}b_{i}(b_{a}+b_{k}-1)\\
& = (k-2)b_kb_a - \sum_{i=1}^{k-1}b_i\left( b_a + b_k -1 \right)
\end{align}
\secspace\emph{\textbf{Alternate Names}}
\begin{itemize}
\item "Extended standard quadratization" of negative monomials (see Eqs. 25-26 of \cite{Anthony2017}).
\item $s_k(b,b_a)^+$ \cite{Anthony2017}.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2014, First presentation: \cite{Anthony2014,Anthony2016}.
\item Further discussion: \cite{Anthony2017}.
\end{itemize}
\newpage
\subsection{NTR-ABCG-2 (Anthony, Boros, Crama, Gruber, 2016)} \label{subsec:Negative-Monomial-Reduction-3}
\secspace\emph{\textbf{Summary}}
For a negative term $-b_{1}b_{2}...b_{k}$, introduce a single auxiliary variable $b_a$ and make the substitution:
\begin{align} \label{eqn:NTR-ABCG-2}
-b_{1}b_{2}\ldots b_{k} &\rightarrow\left(2k-1\right)b_{a}-2\sum_{i}b_{i}b_{a}
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary variable for each $k$-local term.
\item 1 non-submodular term for each $k$-local term (and it is linear).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Can reduce arbitrary order terms with only 1 auxiliary.
\item Reproduces the full spectrum.
\item The non-submodular term is linear as opposed to NTR-ABCG-1 whose non-submodular term is quadratic.
\item Symmetric with respect to all non-auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for negative terms.
\item Coefficients of the quadratic terms are twice as large as in NTR-KZFD or NTR-ABCG-1, and the linear coefficient is roughly twice as large.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{align}
H_{{\rm 6-local}} & =-2b_{1}b_{2}b_{3}b_{4}b_{5}b_{6} + b_5b_6,
\end{align}
has a unique minimum energy of -1 when all $b_i=1$.
Applying \eqref{eqn:NTR-ABCG-2} with $k=6$ (doubled, to match the coefficient $-2$) gives
\begin{equation}
H_{2\textrm{-local}}=2\left( 11b_{a}-2\sum_{i=1}^{6}b_{i}b_{a} \right) + b_5b_6,
\end{equation}
which has the same unique minimum energy, and it occurs at the same place (all $b_i=1$), with $b_a=1$.
\begin{comment}
\secspace\emph{\textbf{Example}}
\begin{align}
H_{{\rm 5-local}} & =-b_{1}b_{2}b_{3}b_{4}b_{5}-b_{1}b_{2}b_{3}b_{4}b_{6},
\end{align}
has a unique minimum energy of -2 when all $b$'s are 1.
\begin{equation}
H_{{\rm 3-local}}=(3-b_{1}-b_{2}-b_{3}-b_{4})(b_{5}+b_{6})a
\end{equation}
has the same unique minimum energy, and it occurs at the same place,
with $a=1$.
\end{comment}
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
- b_1 b_2 \dots b_k \rightarrow 2 b_{a} \left(k - \frac12 - \sum_{i=1}^k b_i \right)
\end{align}
\eqref{eqn:NTR-ABCG-2} can be generalized as follows:
\begin{align} \label{eqn:NTR-ABCG-2-gen}
-b_{1}b_{2}\ldots b_{k} &\rightarrow\left(Ck-1\right)b_{a}-C\sum_{i}b_{i}b_{a}
\end{align}
where $C \geq 1$ is a constant. NTR-KZFD is a particular case of \eqref{eqn:NTR-ABCG-2-gen} where $C = 1$.
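The generalized family \eqref{eqn:NTR-ABCG-2-gen} can be verified by brute force for small $k$ and several values of $C$ (an illustrative sketch, not from the original papers):

```python
from itertools import product
from math import prod

def check_ntr_abcg2(k, C):
    # Verify -b1*...*bk == min over ba of (C*k - 1)*ba - C*sum(b)*ba, C >= 1.
    for bs in product((0, 1), repeat=k):
        rhs = min((C*k - 1 - C*sum(bs)) * ba for ba in (0, 1))
        if rhs != -prod(bs):
            return False
    return True

# C = 1 recovers NTR-KZFD; C = 2 is NTR-ABCG-2 itself:
print(all(check_ntr_abcg2(k, C) for k in (3, 5, 6) for C in (1, 2, 3)))  # True
```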
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Discussion: \cite{Anthony2016}.
\end{itemize}
\newpage
\subsection{NTR-GBP (``Asymmetric cubic reduction'', Gallagher, Batra, Parikh, 2011)}
\secspace\emph{\textbf{Summary}}
\begin{align}
-b_{1}b_{2}b_{3}&\rightarrow b_a \left( -b_1 + b_2 + b_3 \right) -b_1b_2 - b_1b_3 + b_1 \\
&\rightarrow b_a \left( -b_2 + b_1 + b_3 \right) -b_1b_2 - b_2b_3 + b_2 \\
&\rightarrow b_a \left( -b_3 + b_1 + b_2 \right) -b_2b_3 - b_1b_3 + b_3 \label{eqn:NTR-GBP,3}
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary variable per negative cubic term.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Asymmetric, which allows more flexibility in cancelling against other quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for negative cubic monomials.
\end{itemize}
\secspace\emph{\textbf{Example}}
Using \eqref{eqn:NTR-GBP,3}, the $-b_1b_3$ produced by the substitution cancels the $+b_1b_3$ already present:
\begin{align}
- b_{1}b_{2}b_{3} + b_1 b_3 - b_2 &= \min_{b_a} \left( b_1b_a + b_2b_a - b_{3}b_a - b_2b_3 + b_3 \right) - b_2.
\end{align}
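All three substitutions in the Summary can be verified exhaustively (an illustrative sketch, not from the original paper):

```python
from itertools import product

def gbp_forms(b1, b2, b3):
    # The three equivalent NTR-GBP substitutions, each minimized over ba.
    f1 = min(ba*(-b1 + b2 + b3) - b1*b2 - b1*b3 + b1 for ba in (0, 1))
    f2 = min(ba*(-b2 + b1 + b3) - b1*b2 - b2*b3 + b2 for ba in (0, 1))
    f3 = min(ba*(-b3 + b1 + b2) - b2*b3 - b1*b3 + b3 for ba in (0, 1))
    return f1, f2, f3

# Every form reproduces -b1*b2*b3 on all 8 assignments:
ok = all(set(gbp_forms(*bs)) == {-bs[0]*bs[1]*bs[2]}
         for bs in product((0, 1), repeat=3))
print(ok)  # True
```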
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
-b_{1}b_{2}b_{3}&= \min_{b_a} \left( b_a\left(-b_1 + b_2 + b_3\right) -b_1b_2 - b_1b_3 + b_1 \right)
\end{align}
\begin{itemize}
\item By starting with \eqref{eqn:NTR-GBP,3}, and flipping $b_a$ (i.e. setting $b_a \rightarrow 1- \bar{b}_a$ and relabelling $b_a \rightarrow \bar{b}_a$, since $b_a$ does not appear anywhere else in the function being quadratized), we see that NTR-GBP can actually be derived from NTR-ABCG with $k=3$.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item First introduced in: \cite{Gallagher2011}.
\end{itemize}
\newpage
\subsection{NTR-RBL (Rocchetto, Benjamin, Li, 2016)}
\secspace\emph{\textbf{Summary}}
Using a ternary variable $t_a \in \{-1,0,1\}$ we have:
\begin{equation}
-z_1z_2z_3 \rightarrow \left(1 + 4t_a + z_1 + z_2 + z_3\right)^2 -1.
\end{equation}
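Since this gadget is only claimed to reproduce the ground-state manifold, a brute-force sketch (illustrative only) can compare the sets of minimizers on the two sides:

```python
from itertools import product

# Energies of the cubic term and of the gadget (minimized over the qutrit ta).
cubic  = {zs: -zs[0]*zs[1]*zs[2] for zs in product((-1, 1), repeat=3)}
gadget = {zs: min((1 + 4*ta + sum(zs))**2 - 1 for ta in (-1, 0, 1))
          for zs in product((-1, 1), repeat=3)}

# The ground-state manifolds coincide even though the spectra do not:
gs_cubic  = {z for z, e in cubic.items()  if e == min(cubic.values())}
gs_gadget = {z for z, e in gadget.items() if e == min(gadget.values())}
print(gs_cubic == gs_gadget)  # True
```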
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary ternary variable
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item One of the only methods designed specifically for $z$ variables.
\item Symmetric with respect to all variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item The auxiliary variable required is ternary (a qutrit).
\item Requires all possible quadratic terms, and they are all non-submodular.
\item Only reproduces the ground state manifold.
\end{itemize}
\secspace\emph{\textbf{Example}}
Expanding the summary substitution for $-z_1z_2z_3$ (using $z_i^2=1$) gives the quadratic form
\begin{eqnarray}
-z_1z_2z_3\rightarrow 16\,t_a^2+8\,t_a+8\,t_a\sum_{i=1}^3 z_i+2\,\sum_{i=1}^3 z_i+2\,\sum_{i=1}^3\sum_{j>i}^3z_i\,z_j+3.
\end{eqnarray}
\secspace\emph{\textbf{Alternate Forms}}
\begin{eqnarray}
-z_1z_2z_3z_4\rightarrow 16\,t_a^2+4\,t_a\,\sum_{i=1}^4 z_i+2\,\sum_{i=1}^4\sum_{j>i}^4z_i\,z_j+4
\end{eqnarray}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper where they introduce this gadget for the \textit{end points} in the LHZ lattice: \cite{Rocchetto2016}.
\end{itemize}
\newpage
\subsection{NTR-LHZ (Lechner, Hauke, Zoller, 2015)}
\secspace\emph{\textbf{Summary}}
An extra binary or ternary variable is added to ensure that the energy of the even-parity sector is zero and the energy of the odd sector is higher.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary ternary variable (see appendix for transformation to a binary variable)
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item One of the only methods designed specifically for $z$ variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only reproduces the ground state manifold, not higher excited states.
\item Requires all possile quadratic terms and they are all non-submodular.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
-z_1z_2z_3z_4=-16\,b_1b_2b_3b_4+8\,(b_1b_2b_3+b_1b_2b_4+b_1b_3b_4+b_2b_3b_4)-\nonumber \\
4\,(b_1b_2+b_1b_3+b_1b_4+b_2b_3+b_2b_4+b_3b_4)+2\,(b_1+b_2+b_3+b_4)-1 \nonumber \\
\rightarrow 16\,t_a^2+8\,t_a\sum_{i=1}^4 b_i+8\,\sum_{i=1}^4\sum_{j>i}^4b_i\,b_j+16
\end{eqnarray}
\secspace\emph{\textbf{Alternate Forms}}
\begin{eqnarray}
-z_1z_2z_3z_4\rightarrow 16\,t_a^2+4\,t_a\,\sum_{i=1}^4 z_i+2\,\sum_{i=1}^4\sum_{j>i}^4z_i\,z_j+4
\end{eqnarray}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Lechner2015} (Eq. 4).
\item Also given in Eq. 10 of \cite{Rocchetto2016} for $i+2<j$.
\end{itemize}
\newpage
\section{Methods that introduce auxiliary variables to quadratize a SINGLE positive term (Positive Term Reductions, PTR)}
\subsection{PTR-BG (Boros and Gruber, 2014)}
\secspace\emph{\textbf{Summary}}
By considering the negated literals $\bar{b}_{i}=1-b_{i}$, we recursively apply NTR-KZFD to $b_{1}b_{2}\ldots b_{k}=-\bar{b}_{1}b_{2}\ldots b_{k}+b_{2}b_{3}\ldots b_{k}$.
The final identity is:
\begin{equation}
b_{1}b_{2}\ldots b_{k}\rightarrow \left(\sum_{i=1}^{k-2}b_{a_{i}}(k-i-1+b_{i}-\sum_{j=i+1}^{k}b_{j})\right)+b_{k-1}b_{k}
\end{equation}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $k-2$ auxiliary variables for each $k$-local term.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Works for positive monomials.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item $k-1$ non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
b_{1}b_{2}b_{3}b_{4} & \rightarrow & b_{a_{1}}(2+b_{1}-b_{2}-b_{3}-b_{4})+b_{a_{2}}(1+b_{2}-b_{3}-b_{4})+b_{3}b_{4}
\end{eqnarray}
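The quartic example can be verified exhaustively over both auxiliaries (an illustrative sketch, not from the original paper):

```python
from itertools import product
from math import prod

def ptr_bg_quartic(b1, b2, b3, b4):
    # Minimize the PTR-BG substitution over the two auxiliary variables.
    return min(ba1*(2 + b1 - b2 - b3 - b4) + ba2*(1 + b2 - b3 - b4) + b3*b4
               for ba1, ba2 in product((0, 1), repeat=2))

ok = all(ptr_bg_quartic(*bs) == prod(bs) for bs in product((0, 1), repeat=4))
print(ok)  # True
```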
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Summary: \cite{Boros2014}.
\end{itemize}
\begin{comment}
By flipping and applying \ref{subsec:Negative-Monomial-Reduction} alternatively, we get a nice way to reduce a quartic using only 2 variables and with only 2 positive terms and much less connectivity:
\begin{eqnarray}
b_{1}b_{2}b_{3}b_{4} & \mapsto & b_{2}b_{3}b_{4}-\bar{b}_{1}b_{2}b_{3}b_{4}\\
& \mapsto & b_{2}b_{3}b_{4}+\big(3a_{1}-a_{1}(\bar{b}_{1}+b_{2}+b_{3}+b_{4})\big)\\
& \mapsto & b_{3}b_{4}-\bar{b}_{2}b_{3}b_{4}+\big(3a_{1}-a_{1}(\bar{b}_{1}+1-\bar{b}_{2}+b_{3}+b_{4})\big)\\
& \mapsto & b_{3}b_{4}+\big(2a_{2}-a_{2}(\bar{b}_{2}+b_{3}+b_{4}\big)\big)+\big(3a_{1}-a_{1}(\bar{b}_{1}+1-\bar{b}_{2}+b_{3}+b_{4})\big)\\
& \mapsto & b_{3}b_{4}+a_{2}+a_{2}b_{2}-a_{2}b_{3}-a_{2}b_{4}+2a_{1}+a_{1}b_{1}-a_{1}b_{2}-a_{1}b_{3}-a_{1}b_{4}
\end{eqnarray}
\end{comment}
\newpage
\subsection{PTR-Ishikawa (Ishikawa, 2011)}
\secspace\emph{\textbf{Summary}}
This method re-writes a positive monomial using symmetric polynomials, so all possible quadratic terms are produced and they are all non-submodular:
\begin{equation}
b_{1}...b_{k} \rightarrow \left( \sum_{i=1}^{n_{k}}b_{a_{i}}\left(c_{i,k}\left(-\sum_{j=1}^{k}b_{j}+2i\right)-1\right)+\sum_{i<j}b_{i}b_{j} \right)
\end{equation}
where $n_{k}=\left\lfloor \frac{k-1}{2}\right\rfloor $ and $c_{i,k}=\begin{cases}
1, & i=n_{k}\text{ and }k\text{ is odd,}\\
2, & \text{else.}
\end{cases}$
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $\left\lfloor \frac{k-1}{2}\right\rfloor $ auxiliary variables for each $k$-order term
\item $\mathcal{O}(kt)$ for a $k$-local objective function with $t$ terms.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Works for positive monomials.
\item About half as many auxiliary variables for each $k$-order term as the previous method.
\item Reproduces the full spectrum.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item $\mathcal{O}(k^{2})$ quadratic terms are created, which may make chimerization more costly.
\item $\frac{k(k-1)}{2}$ non-submodular terms.
\item Worse than the previous method for quartics, with respect to submodularity.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
b_{1}b_{2}b_{3}b_{4} \rightarrow (3-2b_{1}-2b_{2}-2b_{3}-2b_{4})b_a+b_{1}b_{2}+b_{1}b_{3}+b_{1}b_{4}+b_{2}b_{3}+b_{2}b_{4}+b_{3}b_{4}
\end{eqnarray}
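For the quartic example, $n_k=1$ and $c_{1,4}=2$, so a single auxiliary suffices; the identity can be verified by brute force (illustrative sketch):

```python
from itertools import product
from math import prod

def ishikawa_quartic(b1, b2, b3, b4):
    # PTR-Ishikawa for k = 4: one auxiliary, all pairwise terms.
    pairs = b1*b2 + b1*b3 + b1*b4 + b2*b3 + b2*b4 + b3*b4
    return min((3 - 2*(b1 + b2 + b3 + b4))*ba + pairs for ba in (0, 1))

ok = all(ishikawa_quartic(*bs) == prod(bs) for bs in product((0, 1), repeat=4))
print(ok)  # True
```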
\secspace\emph{\textbf{Alternate Forms}}
For even $k$, an equivalent expression is given in \cite{Boros2018QuadratizationsOS}:
\begin{align}
b_{1}b_{2}\ldots b_{k}&\rightarrow\sum_{i}b_{i}+\sum_{ij}b_{i}b_{j}+\sum_{2i}b_{a_{2i}}\left(4i-2-\sum_{j}b_{j}\right)\\
&\rightarrow\sum_{i}b_{i}+2\sum_{2i}b_{a_{2i}}\left(2i-1\right)+\sum_{ij}b_{i}b_{j}-\sum_{2i,j}b_{j}b_{a_{2i}}
\end{align}
\secspace\emph{\textbf{Alternate Names}}
\begin{itemize}
\item "Ishikawa's Symmetric Reduction" \cite{Gallagher2011}.
\item "Ishikawa Reduction"
\item "Ishikawa"
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper and application to image denoising: \cite{Ishikawa2011}.
\item Equivalent way of writing it for even $k$, shown in \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{PTR-BCR-1 (Boros, Crama, and Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
This is very similar to the alternate form of PTR-Ishikawa, but it applies to odd values of $k$, for which it differs from PTR-Ishikawa:
\begin{eqnarray}
\begin{gathered}
b_{1}b_{2}\ldots b_{k} \rightarrow \sum_{i}b_{i}+\sum_{2i-1}\left(4i-3\right)b_{a_{2i-1}}+\sum_{ij}b_{i}b_{j}-\sum_{2i-1,j}b_{j}b_{a_{2i-1}}
\end{gathered}
\end{eqnarray}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item Same number of auxiliaries as Ishikawa Reduction.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Same as for Ishikawa Reduction.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Same as for Ishikawa Reduction.
\item Only works for odd $k$, but for even $k$ we have an analogous method which is equivalent to Ishikawa Reduction.
\end{itemize}
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
b_{1}b_{2}\ldots b_{k}&\rightarrow\sum_{i}b_{i}+\sum_{ij}b_{i}b_{j}+\sum_{2i-1}b_{a_{2i-1}}\left(4i-3-\sum_{j}b_{j}\right)
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{PTR-BCR-2 (Boros, Crama, and Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Let $\lceil \frac{k}{4} \rceil \le m \le \lceil \frac{k}{2} \rceil$,
\begin{align}
\begin{gathered}
b_{1}b_{2}\cdots b_{k} \rightarrow\alpha^{b}\sum_{i}b_{i}+\alpha^{b_{a,1}}\sum_{i}b_{a_{i}}+\alpha^{b_{a,2}}b_{a_{m}}+\alpha^{bb}\sum_{ij}b_{i}b_{j}+\alpha^{bb_{a,1}}\sum_{i}\sum_{j}^{m-1}b_{i}b_{a_{j}}+\\
\alpha^{bb_{a,2}}\sum_{i}b_{i}b_{a_{m}}+\alpha^{b_{a,1}b_{a,1}}\sum_{ij}^{m-1}b_{a_{i}}b_{a_{j}}+\alpha^{b_{a,1}b_{a,2}}\sum_{i}^{m-1}b_{a_{i}}b_{a_{m}},
\end{gathered}
\end{align}
\noindent where:
\begin{align}
\begin{pmatrix}\alpha^{b} & \alpha^{bb_{a,1}}\\
\alpha^{b_{a,1}} & \alpha^{bb_{a,2}}\\
\alpha^{b_{a,2}} & \alpha^{b_{a,1}b_{a,1}}\\
\alpha^{bb} & \alpha^{b_{a,1}b_{a,2}}
\end{pmatrix}&=\begin{pmatrix}-\nicefrac{1}{2} & -1\\
1 & -2\\
\frac{1}{2}(k-m+k^{2}-2mk+m^{2}) & -(k-m)\\
\nicefrac{1}{2} & 4(k-m)
\end{pmatrix}.
\end{align}
\secspace\emph{\textbf{Cost}}
$\lceil \frac{k}{4} \rceil$ auxiliary qubits per positive monomial.
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Smallest number of auxiliary variables among quadratizations whose auxiliary count scales linearly with $k$.
\item Smaller coefficients than the logarithmic reduction.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Introduces many non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
We quadratize a quartic term with only 1 auxiliary variable:
\begin{eqnarray}
b_1 b_2 b_3 b_4 \rightarrow \frac{1}{2}
\left( b_1 + b_2 + b_3 + b_4 - 2b_{a_1} \right)
\left( b_1 + b_2 + b_3 + b_4 - 2b_{a_1} - 1 \right)
\end{eqnarray}
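The quartic instance above can be checked over all assignments (illustrative sketch, not from the original paper):

```python
from itertools import product
from math import prod

def bcr2_quartic(b1, b2, b3, b4):
    vals = []
    for ba in (0, 1):
        x = b1 + b2 + b3 + b4 - 2*ba
        vals.append(x * (x - 1) // 2)  # x*(x-1) is even and non-negative
    return min(vals)

ok = all(bcr2_quartic(*bs) == prod(bs) for bs in product((0, 1), repeat=4))
print(ok)  # True
```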
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original appears in: Theorem 7 of \cite{Boros2018QuadratizationsOS}, and Theorem 10 of \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{PTR-BCR-3 (Boros, Crama, and Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Pick $m$ such that $k \le 2^{m}$,
\begin{align}
b_{1}b_{2}\ldots b_{k} &\rightarrow \alpha+\alpha^{b}\sum_{i}b_{i}+\alpha^{b_{a_{i}}}\sum_{i}2^{i-1}b_{a_{i}}+\alpha^{bb}\sum_{ij}b_{i}b_{j}+\alpha^{bb_{a}}\sum_{ij}b_{i}b_{a_{j}}+\alpha^{b_{a_{i}}b_{a_{j}}}b_{a_{i}}b_{a_{j}},
\end{align}
\noindent where,
\begin{align}
\begin{pmatrix}\alpha & \alpha^{bb}\\
\alpha^{b} & \alpha^{bb_{a}}\\
\alpha^{b_{a}} & \alpha^{b_{a}b_{a}}
\end{pmatrix}=\begin{pmatrix}\left(2^{m}-k\right)^{2} & 1\\
2\left(2^{m}-k\right) & 2^{j-1}\\
-2\left(2^{m}-k\right) & 2^{i+j-2}
\end{pmatrix}.
\end{align}
\secspace\emph{\textbf{Cost}}
$\lceil \log k \rceil $ auxiliary qubits per positive monomial.
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Logarithmic number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Introduces all terms non-submodular except for the term linear in auxiliaries.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
b_1 b_2 b_3 b_4 \rightarrow \frac{1}{2}
\left( 4 + b_1 + b_2 + b_3 + b_4 - b_{a_1} - 2b_{a_2} \right)
\left( 3 + b_1 + b_2 + b_3 + b_4 - b_{a_1} - 2b_{a_2} \right)
\end{eqnarray}
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
b_{1}b_{2}\ldots b_{k} \rightarrow\left(2^{m}-k+\sum_{i}b_{i}-\sum_{i}2^{i-1}b_{a_{i}}\right)^{2}
\end{align}
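The squared alternate form can be verified by brute force, taking $m = \lceil \log_2 k \rceil$ so that $k \le 2^m$ (an illustrative sketch, consistent with the cost of $\lceil \log k \rceil$ auxiliaries):

```python
from itertools import product
from math import ceil, log2, prod

def check_bcr3(k):
    m = ceil(log2(k))  # smallest m with k <= 2**m
    for bs in product((0, 1), repeat=k):
        # Minimize the squared form over the m binary auxiliaries.
        rhs = min((2**m - k + sum(bs)
                   - sum(2**(i - 1)*a for i, a in enumerate(aux, 1)))**2
                  for aux in product((0, 1), repeat=m))
        if rhs != prod(bs):
            return False
    return True

print(check_bcr3(4), check_bcr3(5))  # True True
```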
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper (Theorem 4, special case of Theorem 1): \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{PTR-BCR-4 (Boros, Crama, and Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Pick $m$ such that $k \le 2^{m+1}$,
\begin{align}
b_1 \ldots b_k &\rightarrow \frac{1}{2} \left( 2^{m+1}-k + \sum_ib_i - \sum^m_{i} 2^i b_{a_i} \right)
\left( 2^{m+1}-k + \sum_i b_i - \sum^m_{i} 2^i b_{a_i} - 1 \right).
\end{align}
\secspace\emph{\textbf{Cost}}
$\lceil \log k \rceil - 1$ auxiliary qubits per positive monomial.
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Logarithmic number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Introduces many non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
b_1 b_2 b_3 b_4 \rightarrow \frac{1}{2}
\left( b_1 + b_2 + b_3 + b_4 - 2b_a \right)
\left( b_1 + b_2 + b_3 + b_4 - 2b_a - 1 \right)
\end{eqnarray}
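The substitution can be verified by brute force, taking $m = \lceil \log_2 k \rceil - 1$ so that $k \le 2^{m+1}$ (illustrative sketch, matching the stated auxiliary count):

```python
from itertools import product
from math import ceil, log2, prod

def check_bcr4(k):
    m = ceil(log2(k)) - 1          # smallest m with k <= 2**(m+1)
    N = 2**(m + 1) - k
    for bs in product((0, 1), repeat=k):
        vals = []
        for aux in product((0, 1), repeat=m):
            x = N + sum(bs) - sum(2**i*a for i, a in enumerate(aux, 1))
            vals.append(x * (x - 1) // 2)
        if min(vals) != prod(bs):
            return False
    return True

print(check_bcr4(4), check_bcr4(5), check_bcr4(6))  # True True True
```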
\secspace\emph{\textbf{Alternate Forms}}
Let $X = \sum b_i$ and $N = 2^{m+1} - k$,
\begin{align}
b_1 \ldots b_k &= \min_{b'_1, \ldots, b'_m} \frac{1}{2} \left( N + X - \sum^m_{i=1} 2^i b'_i \right)
\left( N + X - \sum^m_{i=1} 2^i b'_i - 1 \right).
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper (Theorem 5): \cite{Boros2018QuadratizationsOS}, Also in (Theorem 9): \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{PTR-BCR-5 (Boros, Crama, and Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Pick $m$ such that $k \le 2^{m+1}$,
\begin{align}
b_1 \ldots b_k &\rightarrow \frac{1}{2} \left( 2^{m+1}-k + \sum_ib_i - \sum^m_{i} 2^i b_{a_i} \right)
\left( 2^{m+1}-k + \sum_i b_i - \sum^m_{i} 2^i b_{a_i} - 1 \right).
\end{align}
\secspace\emph{\textbf{Cost}}
$\lceil \log k \rceil - 1$ auxiliary qubits per positive monomial.
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Logarithmic number of auxiliary variables.
\item "As mentioned in Section 1, Theorem 9 provides a significant improvement
over the best previously known quadratizations for the Positive monomial,
and the upper bound on the number of auxiliary variables precisely matches
the lower bound presented in Section 3."
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Introduces many non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
b_1 b_2 b_3 b_4 \rightarrow \frac{1}{2}
\left( b_1 + b_2 + b_3 + b_4 - 2b_a \right)
\left( b_1 + b_2 + b_3 + b_4 - 2b_a - 1 \right)
\end{eqnarray}
\secspace\emph{\textbf{Alternate Forms}}
Let $X = \sum b_i$ and $N = 2^{m+1} - k$,
\begin{align}
b_1 \ldots b_k &= \min_{b'_1, \ldots, b'_m} \frac{1}{2} \left( N + X - \sum^m_{i=1} 2^i b'_i \right)
\left( N + X - \sum^m_{i=1} 2^i b'_i - 1 \right).
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper (Theorem 9): \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{PTR-BCR-5 (Boros, Crama, and Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Pick $m$ such that $k \le 2^{m+1}$,
\begin{align}
b_1 \ldots b_k &\rightarrow \frac{1}{2} \left( 2^{m+1}-k + \sum_ib_i - \sum^m_{i} 2^i b_{a_i} \right)
\left( 2^{m+1}-k + \sum_i b_i - \sum^m_{i} 2^i b_{a_i} - 1 \right).
\end{align}
\secspace\emph{\textbf{Cost}}
$\lceil \log k \rceil $ auxiliary qubits per positive monomial.
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Logarithmic number of auxiliary variables.
\item "As mentioned in Section 1, Theorem 9 provides a significant improvement
over the best previously known quadratizations for the Positive monomial,
and the upper bound on the number of auxiliary variables precisely matches
the lower bound presented in Section 3."
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Introduces many non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
b_1 b_2 b_3 b_4 \rightarrow \frac{1}{2}
\left( b_1 + b_2 + b_3 + b_4 - 2b_a \right)
\left( b_1 + b_2 + b_3 + b_4 - 2b_a - 1 \right)
\end{eqnarray}
\secspace\emph{\textbf{Alternate Forms}}
Let $X = \sum b_i$ and $N = 2^{m+1} - k$,
\begin{align}
b_1 \ldots b_k &= \min_{b'_1, \ldots, b'_m} \frac{1}{2} \left( N + X - \sum^m_{i=1} 2^i b'_i \right)
\left( N + X - \sum^m_{i=1} 2^i b'_i - 1 \right).
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper (Remark 5): \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{CCG-based (Yip, Xu, Koenig and Kumar, 2019)}
\secspace\emph{\textbf{Summary}}
The CCG-based quadratization algorithm is iterative. The constraint composite graph (CCG) is a combinatorial structure associated with an optimization problem posed as a weighted constraint satisfaction problem (WCSP).
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item Each positive monomial of degree $i$ generates 2 auxiliary variables when it is reduced to the sum of a quadratic polynomial and a monomial of degree $i-1$, which can then be combined with existing degree-$(i-1)$ monomials over the same variables. Such combinations can take place in every iteration, until the whole pseudo-Boolean function (PBF) becomes quadratic.
\item As in Ishikawa's method, 1 auxiliary variable is used for each negative monomial.
\item For a $k$-local objective function, asymptotically a factor of $k$ fewer auxiliary variables are used compared to Ishikawa's method.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Due to the recombination of terms during its iterative reduction process, the resulting number of auxiliary variables is smaller than for Ishikawa's method, especially for PBFs with many positive monomials and many terms (the difficult case).
\item The higher the degree of the PBF, the more advantageous the CCG-based quadratization method is. It works particularly well for real-life problems, such as planning problems, which require high-degree PBF formulations.
\item Due to the recombination of terms during each iteration, the number of quadratic terms in the final quadratic PBF is smaller than for Ishikawa's method.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Can lead to more auxiliary variables for sparse PBFs (where the number of monomials is much smaller than the maximum possible) and for PBFs of low degree.
\end{itemize}
\secspace\emph{\textbf{Example}}
Each degree-\(d\) monomial in the PBF \(f(\vec x)\) in one reduction step is substituted as:
\begin{align}
\begin{split}\label{eq:ccgqdr}
ax_1\ldots x_d =& \min_{x_a, x_L} \left[ax_a + Lx_L + J\sum_{i=1}^{d-1} (1-x_i)(1-x_a) \right. \\
+ &\left. J(1-x_d)(1-x_L) +J (1-x_L)(1-x_a) \vphantom{\sum_{xxx}^{xxx}}\right]\\
- &L(1-x_d) - a + ax_1\ldots x_{d-1},
\end{split}\\
-ax_1\ldots x_d =& \min_{x_{a'}} \left[ax_{a'} + J\sum_{i=1}^d (1-x_i)(1-x_{a'}) \right] - a \label{eq:ccgqdrnegative}
\end{align}
where $J \geq L > a > 0$. Here $x_a$ and $x_L$ are the two auxiliary variables introduced for a positive monomial, and $x_{a'}$ is the auxiliary variable introduced for a negative monomial.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2019, original paper: \cite{yxkk19}.
\item 2019, (Theorem 1): \cite{yxkk19}, is inspired by 2008, (Theorem 3): \cite{vchoi08}.
\end{itemize}
\newpage
\subsection{PTR-KZ (Kolmogorov \& Zabih, 2004)}
\secspace\emph{\textbf{Summary}}
This method can be used to rewrite positive or negative cubic terms in terms of 6 quadratic terms.
The identity is given by:
\begin{align}
b_{1}b_{2}b_{3} & \rightarrow 1 - \left( b_a + b_1 + b_2 + b_3 \right) + b_a \left( b_1 + b_2 + b_3 \right) + b_1 b_2 + b_1 b_3 + b_2 b_3
\end{align}
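The identity can be checked by brute force over all $2^3$ assignments; a short Python sketch (function names are ours), confirming that the minimum over $b_a$ reproduces the full spectrum:

```python
from itertools import product

def ptr_kz_rhs(b, ba):
    # 1 - (ba + b1 + b2 + b3) + ba*(b1 + b2 + b3) + b1*b2 + b1*b3 + b2*b3
    b1, b2, b3 = b
    s = b1 + b2 + b3
    return 1 - (ba + s) + ba * s + b1 * b2 + b1 * b3 + b2 * b3

for b in product((0, 1), repeat=3):
    # minimizing over the auxiliary recovers the cubic monomial exactly
    assert min(ptr_kz_rhs(b, ba) for ba in (0, 1)) == b[0] * b[1] * b[2]
print("full spectrum reproduced")
```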
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary variable per positive or negative cubic term.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Works on positive or negative monomials.
\item Reproduces the full spectrum.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Introduces all 6 possible non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Alternate Names}}
\begin{itemize}
\item "Reduction by Minimum Selection" \cite{Gallagher2011,Ishikawa2011}.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Kolmogorov2004}.
\end{itemize}
\newpage
\subsection{PTR-KZ (in terms of $z$)}
\secspace\emph{\textbf{Summary}}
The formula is almost the same as in the version of PTR-KZ on the previous page (which is written in terms of $b$), but with a factor of 2, a change of sign for the linear terms, and a slight change in the constant term. This formula can be obtained directly from the PTR-KZ quadratization formula in terms of $b$, by starting with $8b_1b_2b_3$ on the left-hand side, making the substitution $b_i \rightarrow (1+z_i)/2$, and removing all terms that appear on both sides of the equation. The result is:
\begin{align}
\pm z_1z_2z_3 &\rightarrow 3 \pm \left( z_1 + z_2 + z_3 + 2z_a \right) + 2z_a \left(z_1 + z_2 +z_3 \right) + z_1z_2 + z_1z_3 + z_2z_3.
\end{align}
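A brute-force consistency check of this identity (a sketch with our own names; note that in the check below the $z_a$ linear term carries coefficient $2$, which is what makes the minimum over $z_a$ reproduce $\pm z_1z_2z_3$ exactly on every assignment):

```python
from itertools import product

def rhs(z, za, sign):
    # 3 +/- (z1 + z2 + z3 + 2*za) + 2*za*(z1 + z2 + z3) + pairwise terms
    z1, z2, z3 = z
    s = z1 + z2 + z3
    pairs = z1 * z2 + z1 * z3 + z2 * z3
    return 3 + sign * (s + 2 * za) + 2 * za * s + pairs

for sign in (+1, -1):
    for z in product((-1, 1), repeat=3):
        # the minimum over the auxiliary spin equals +/- z1*z2*z3
        assert min(rhs(z, za, sign) for za in (-1, 1)) == sign * z[0] * z[1] * z[2]
print("spectrum check passed")
```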
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary variable per positive or negative cubic term.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Works on positive or negative monomials.
\item Reproduces the full spectrum.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Introduces all 6 possible non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2004: published by Kolmogorov and Zabih in terms of $b$ variables: \cite{Kolmogorov2004}.
\item 31 March 2016: published by Chancellor, Zohren, and Warburton in terms of $z$, and without the constant term: \cite{Chancellor2016b}.
\item 8 April 2016: published independently by Leib, Zoller, and Lechner in terms of $z$, and without the constant term \cite{Leib2016a,Leib2016}.
\end{itemize}
\newpage
\subsection{PTR-GBP (``Asymmetric reduction'', Gallagher, Batra, Parikh, 2011)}
\secspace\emph{\textbf{Summary}}
Similar to other methods of reducing one term, this method can reduce a positive cubic monomial into quadratic terms using only one auxiliary variable, while introducing fewer non-submodular terms than the symmetric version.
The identity is given by:
\begin{align}
b_{1}b_{2}b_{3}&\rightarrow b_a-b_{2}b_a-b_{3}b_a+b_{1}b_a+b_{2}b_{3} \\
&\rightarrow b_a-b_{1}b_a-b_{3}b_a+b_{2}b_a+b_{1}b_{3} \\
&\rightarrow b_a-b_{1}b_a-b_{2}b_a+b_{3}b_a+b_{1}b_{2}.
\end{align}
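The first of the three variants can be checked exhaustively (a Python sketch with our own names; the other two variants follow by permuting $b_1,b_2,b_3$):

```python
from itertools import product

def gbp_rhs(b, ba):
    # b_a - b2*b_a - b3*b_a + b1*b_a + b2*b3  (first of the three variants)
    b1, b2, b3 = b
    return ba - b2 * ba - b3 * ba + b1 * ba + b2 * b3

for b in product((0, 1), repeat=3):
    # minimizing over the single auxiliary recovers the cubic monomial
    assert min(gbp_rhs(b, ba) for ba in (0, 1)) == b[0] * b[1] * b[2]
print("asymmetric reduction verified")
```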
\secspace\emph{\textbf{Cost}}
1 auxiliary variable per positive cubic term.
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Works on positive monomials.
\item Fewer non-submodular terms than Ishikawa Reduction.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Has only been shown to work for cubics.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
b_{1}b_{2}b_{3} + b_1 b_3 - b_2 \rightarrow \left( b_a-b_{1}b_a-b_{3}b_a+b_{2}b_a+2b_{1}b_{3} \right) - b_2
\end{eqnarray}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper and application to computer vision: \cite{Gallagher2011}.
\end{itemize}
\newpage
\subsection{PTR-RBL-(3$\rightarrow$2) (Rocchetto, Benjamin, Li, 2016)}
\secspace\emph{\textbf{Summary}}
Using a ternary variable $t_a \in \{-1,0,1\}$ we have:
\begin{equation}
z_1z_2z_3 \rightarrow (1 + 4t_a + z_1 + z_2 + z_3)^2 -1.
\end{equation}
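The ground-state-manifold behaviour can be checked by brute force: minimizing over the ternary $t_a$ gives $-1$ on configurations with $z_1z_2z_3=+1$ and $3$ on the rest. A Python sketch (names are ours):

```python
from itertools import product

def rbl_min(z):
    # minimize (1 + 4*t_a + z1 + z2 + z3)^2 - 1 over the ternary auxiliary
    return min((1 + 4 * ta + sum(z)) ** 2 - 1 for ta in (-1, 0, 1))

for z in product((-1, 1), repeat=3):
    prod = z[0] * z[1] * z[2]
    # configurations with z1*z2*z3 = +1 sit at energy -1, the rest at 3,
    # so only the ground-state manifold is reproduced (see Cons)
    assert rbl_min(z) == (-1 if prod == 1 else 3)
print("ground-state manifold check passed")
```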
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary ternary variable
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item One of the only methods designed specifically for $z$ variables.
\item Symmetric with respect to all variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item The auxiliary variable required is ternary (a qutrit).
\item Requires all possible quadratic terms, and they are all non-submodular.
\item Only reproduces the ground state manifold.
\end{itemize}
\secspace\emph{\textbf{Example}}
For $z_1=z_2=z_3=1$ the right-hand side is $(4+4t_a)^2-1$, minimized at $t_a=-1$ with value $-1$; for $z_1=z_2=z_3=-1$ it is $(4t_a-2)^2-1$, whose minimum over $t_a\in\{-1,0,1\}$ is $3$. The gadget therefore reproduces only the ground-state manifold, not the full spectrum.
\secspace\emph{\textbf{Alternate Forms}}
\begin{eqnarray}
-z_1z_2z_3z_4\rightarrow 16\,t_a^2+4\,t_a\,\sum_{i=1}^4 z_i+2\,\sum_{i=1}^4\sum_{j>i}^4z_i\,z_j+4
\end{eqnarray}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper where they introduce this gadget for the \textit{end points} in the LHZ lattice: \cite{Rocchetto2016}.
\end{itemize}
\newpage
\subsection{PTR-RBL-(4$\rightarrow$2) (Rocchetto, Benjamin, Li, 2016)}
\secspace\emph{\textbf{Summary}}
An extra binary or ternary variable is added to ensure that the energy of the even-parity sector is zero and the energy of the odd-parity sector is higher:
\begin{equation}
z_1z_2z_3z_4\rightarrow 16\,t_a^2+4\,t_a\,\sum_{i=1}^4 z_i+2\,\sum_{i=1}^4\sum_{j>i}^4z_i\,z_j+4.
\end{equation}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary ternary variable
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item One of the only methods designed specifically for $z$ variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only reproduces the ground state manifold, not higher excited states.
\item Requires all possible quadratic terms, and they are all non-submodular.
\end{itemize}
\secspace\emph{\textbf{Example}}
\begin{eqnarray}
-z_1z_2z_3z_4=-16\,b_1b_2b_3b_4+8\,(b_1b_2b_3+b_1b_2b_4+b_1b_3b_4+b_2b_3b_4)-\nonumber \\
4\,(b_1b_2+b_1b_3+b_1b_4+b_2b_3+b_2b_4+b_3b_4)+2\,(b_1+b_2+b_3+b_4)-1 \nonumber \\
\rightarrow 16\,t_a^2-16\,t_a+8\,t_a\sum_{i=1}^4 b_i+8\,\sum_{i=1}^4\sum_{j>i}^4b_i\,b_j-12\,\sum_{i=1}^4 b_i+16
\end{eqnarray}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Rocchetto2016}.
\end{itemize}
\newpage
\subsection{PTR-CZW (Chancellor, Zohren, Warburton, 2017)}
\secspace\emph{\textbf{Summary}}
Auxiliary qubits can be made to ``count'' the number of logical qubits in the $1$ configuration. By applying single-qubit terms to the auxiliary qubits, the spectrum of \emph{any} permutation-symmetric objective function can be reproduced.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item A $k$-local coupler requires $k$ auxiliary qubits.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Natural flux qubit implementation \cite{Chancellor2017}.
\item Single gadget can reproduce any permutation symmetric spectrum.
\item High degree of symmetry means this method is natural for some kinds of quantum simulations \cite{Chancellor2016a}.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Requires coupling between all logical bits and from all logical bits to all auxiliary bits.
\item Requires single body terms of increasing strength as $k$ is increased.
\end{itemize}
\secspace\emph{\textbf{Example}}
A $4$-qubit gadget guarantees that the number of auxiliary bits in the $-1$ state is equal to the number of logical bits in the $1$ state:
\begin{equation}
H_{4-\rm{count}}= 4\,\sum_{i=2}^4\sum_{j=1}^{i-1}b_ib_j+4\,\sum_{i=1}^4\sum_{j=1}^4b_ib_{a_j}-15\,\sum_{i=1}^4b_i-8\,\sum_{i=1}^4b_{a_i}+(5\,b_{a_1}+b_{a_2}-3\,b_{a_3}-7\,b_{a_4})+26
\end{equation}
This gadget can be expressed more naturally in terms of $z$:
\begin{equation}
H_{4-\rm{count}}= \sum_{i=2}^4 \sum_{j =1}^{i-1} z_i z_j -\frac{1}{2} \sum_{i=1}^4 z_i + \sum_{i=1}^4 \sum_{j=1}^4 z_i z_{a_j} +\frac{1}{2}\left(5z_{a_1}+ z_{a_2}-3\,z_{a_3}-7 z_{a_4}\right).
\end{equation}
To replicate the spectrum of $b_1b_2b_3b_4$, we add
\begin{equation}
H_{2-\rm{local}}=-b_{a_4}+ \lambda H_{4-\rm{count}},
\end{equation}
where $\lambda$ is a large number.
For the spectrum of
\begin{align}
z_1z_2z_3z_4=16\,b_1b_2b_3b_4-8\,(b_1b_2b_3+b_1b_2b_4+b_1b_3b_4+b_2b_3b_4)+\nonumber \\
4\,(b_1b_2+b_1b_3+b_1b_4+b_2b_3+b_2b_4+b_3b_4)-2\,(b_1+b_2+b_3+b_4)+1,
\end{align}
we implement,
\begin{equation}
H_{2-\rm{local}}=2\,b_{a_1}-2\,b_{a_2}+2\,b_{a_3}-2\,b_{a_4}+ \lambda H_{4-\rm{count}}.
\end{equation}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Paper on flux qubit implementation: \cite{Chancellor2016}.
\item Paper on MAX-$k$-SAT mapping: \cite{Chancellor2016} (published in a journal earlier than \cite{Chancellor2017}, but put on the arXiv 1 month later).
\item Talk including use in quantum simulation: \cite{Chancellor2016a}.
\end{itemize}
\newpage
\subsection{Bit flipping (Ishikawa, 2011)}
\secspace\emph{\textbf{Summary}}
For any variable $b$, we can consider the negation $\bar{b}=1-b$.
The process of exchanging $b$ for $\bar{b}$ is called \emph{flipping}.
Using bit-flipping, an arbitrary function in $n$ variables can be represented using at most $2^{(n-2)}(n-3)+1$ variables, though this is a gross overestimate.
It can be used in many different ways:
\begin{enumerate}
\item Flipping positive terms and using \ref{subsec:Negative-Monomial-Reduction}, recursively;
\item For $\alpha<0$, we can reduce $\alpha\bar{b}_{1}\bar{b}_{2}...\bar{b}_{k}$ very efficiently to submodular form using \ref{subsec:Negative-Monomial-Reduction}.
A generalized version exists for arbitrary combinations of flips in the monomial which makes reduction entirely submodular \cite{Ishikawa2011};
\item Once we have quadratized, we can minimize the number of non-submodular terms by flipping;
\item We can make use of both $b_{i}$ and $\bar{b}_{i}$ in the same objective function by adding on a sufficiently large penalty term: $\lambda(b_{i}+\bar{b}_{i}-1)^{2}=\lambda(1+2b_{i}\bar{b}_{i}-b_{i}-\bar{b}_{i})$.
This is similar to the ideas in reduction by substitution or deduc-reduc.
In this way, given a quadratic in $n$ variables we can make sure it only has at most $n$ nonsubmodular terms if we are willing to use the extra $n$ negation variables as well (so we have $2n$ variables in total).
\end{enumerate}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item None, as replacing $b_{i}$ with its negation $\bar{b}_{i}$ costs nothing except a trivial symbolic expansion.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Cheap and effective way of improving submodularity.
\item Can be used to combine terms in clever ways, making other methods more efficient.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Unless the form of the objective function is known, spotting these `factorizations' using negations is difficult.
\item We need an auxiliary variable for each $b_i$ for which we also want to use $\bar{b_i}$ in the same objective function.
\end{itemize}
\secspace\emph{\textbf{Example}}
By bit-flipping $b_2$ and $b_4$, i.e. substituting $b_2 = 1 - \bar{b}_{2}$ and $b_4 = 1 - \bar{b}_{4}$, we see that:
\begin{eqnarray}
H & = & 3b_{1}b_{2}+b_{2}b_{3}+2b_{1}b_{4}-4b_{2}b_{4}\\
& = & -3b_{1}\bar{b}_{2}-\bar{b}_{2}b_{3}-2b_{1}\bar{b}_{4}-4\bar{b}_{2}\bar{b}_{4}+5b_{1}+b_{3}+4\bar{b}_{2}+4\bar{b}_{4}-4.
\end{eqnarray}
The first expression is highly non-submodular while the second is entirely submodular.
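The equality of the two forms can be confirmed on all 16 assignments; a Python sketch (note that expanding $-4b_2b_4$ gives the $\bar{b}_2\bar{b}_4$ term a coefficient of $4$):

```python
from itertools import product

def original(b1, b2, b3, b4):
    return 3*b1*b2 + b2*b3 + 2*b1*b4 - 4*b2*b4

def flipped(b1, b2, b3, b4):
    n2, n4 = 1 - b2, 1 - b4  # the negated (flipped) variables
    return (-3*b1*n2 - n2*b3 - 2*b1*n4 - 4*n2*n4
            + 5*b1 + b3 + 4*n2 + 4*n4 - 4)

# the two expressions agree on every assignment of the four bits
for b in product((0, 1), repeat=4):
    assert original(*b) == flipped(*b)
print("bit-flip identity verified")
```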
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Ishikawa2011}.
\end{itemize}
\newpage
\section{Methods that quadratize MULTIPLE terms with the SAME auxiliaries (Case 1: Symmetric Function Reductions, SFR)}
A symmetric function is one whose output is unchanged under any permutation of its variables (for example $b_1 \rightarrow b_5 \rightarrow b_8 \rightarrow b_1$).
\subsection{SFR-ABCG-1 (Anthony, Boros, Crama, Gruber, 2014)}
\secspace\emph{\textbf{Summary}}
Any $n$-variable symmetric function $f\left(b_1,b_2,\ldots b_n\right)\equiv f(b)$ can be quadratized with $n-2$ auxiliaries:
\begin{align}
f(b)&\rightarrow -\alpha_{0}-\alpha_{0}\sum_{i}b_{i}+a_{2}\sum_{ij}b_{i}b_{j}+2\sum_{i}\left(\alpha_{i}-c\right)b_{a_{i}}\left(2i-\frac{1}{2}-\sum_{j}b_{j}\right) \\
c &= \begin{cases}
\min_j\left(\alpha_{2j}\right) & i\ \text{even}\\
\min_j\left(\alpha_{2j-1}\right) & i\ \text{odd}
\end{cases} \\
a_2 &= \textrm{Determined from Page 12 of \cite{Anthony2015}}
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $n-2$ auxiliaries for any $n$-variable symmetric function.
\item $n^2$ non-submodular quadratic terms (of the non-auxiliary variables).
\item $n-2$ non-submodular linear terms (of the auxiliary variables).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Quadratization is symmetric in all non-auxiliary variables (this is not always true, for example some of the methods in the NTR section).
\item Reproduces the full spectrum.
\item When there is a large number of terms, fewer auxiliary variables are needed than when quadratizing each positive monomial separately.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works on a specific class of functions, although the quadratizations of arbitrary functions can be related to the quadratizations of symmetric functions on a larger number of variables.
\item All quadratic terms of the non-auxiliary variables are non-submodular.
\item All linear terms of the auxiliaries are non-submodular.
\item Not meant so much to be practical as to give an easy proof of an upper bound on the number of auxiliaries needed.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2014, original paper (Theorem 4.1, with $\alpha_i$ from Corollary 2.3): \cite{Anthony2014}.
\end{itemize}
\newpage
\subsection{SFR-BCR-1 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Any $n$-variable symmetric function $f\left(b_1,b_2,\ldots b_n\right)\equiv f(b)$ that is non-zero only when $\sum b_i = c$ where $\nicefrac{n}{2}\le c \le n$, can be quadratized with $m=\lceil \log_2 c \rceil +1$ auxiliary variables:
{\scriptsize
\begin{align}
\hspace{-5mm}
f(b) \rightarrow
\alpha
+\alpha^{b} \hspace{-0.5mm} \sum_{i} b_{i}
+\alpha_1^{b_{a}} \hspace{-1.0mm} \sum_{i}^{m-1} b_{a_{i}}
+\alpha_2^{b_{a}} b_{a_{m}}
+\alpha^{bb} \hspace{-0.5mm} \sum_{ij} b_{i}b_{j}
+\alpha_1^{bb_{a}} \hspace{-0.5mm} \sum_{i} \hspace{-1.5mm}\sum_{j}^{m-1} b_{i}b_{a_{j}}
+\alpha_2^{bb_{a}} \hspace{-0.5mm} \sum_{i}b_{i} b_{a_{m}}
+\alpha_1^{b_{a}b_{a}} \hspace{-1.2mm} \sum_{ij}^{m-1} b_{a_{i}}b_{a_{j}}
+\alpha_2^{b_{a}b_{a}} \hspace{-1.2mm} \sum_{i}^{m-1} b_{a_{i}}b_{a_{m}},
\end{align}
}
\noindent where:
\begin{align}
\begin{pmatrix}
\alpha & \alpha^{bb}\\
\alpha^{b} & \alpha_1^{bb_{a}}\\
\alpha_1^{b_{a}} & \alpha_2^{bb_{a}}\\
\alpha_2^{b_{a}} & \alpha_1^{b_{a}b_{a}}\\
\cdot & \alpha_2^{b_{a}b_{a}}
\end{pmatrix}=\begin{pmatrix}(c+1)^{2} & 1\\
-2(c+1) & -2^{i}\\
(c+1)2^{i} & 2\left(1+2^{m-1}\right)\\
\left(1+2^{m-1}\right)\left(2^{m-1}-2c-1\right) & 2^{i+j-1}\\
\cdot & - \left(1+2^{m-1}\right)2^{i}
\end{pmatrix}.
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $m=\lceil \log_2 c \rceil + 1$ auxiliary variables.
\item $n^2 + m^2$ non-submodular quadratic terms (all possible quadratic terms involving only non-auxiliary or only auxiliary variables).
\item $m$ non-submodular linear terms (all possible linear terms involving auxiliaries).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Small number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions
\item Introduces many linear and even more quadratic non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
For $n = 4$ and $c = 2$, we have $m = 2$, and
\begin{eqnarray}
& &\hspace{-10mm}(b_1 b_2 +b_1 b_3 + b_1 b_4 + b_2 b_3 + b_2 b_4 + b_3 b_4)
- 3(b_1 b_2 b_3 + b_1 b_2 b_4 + b_1 b_3 b_4 + b_2 b_3 b_4) + 6 b_1 b_2 b_3 b_4 \\
&\to& \left(-3 + b_1+b_2 + b_3 + b_4 - b_{a_1} + 3 b_{a_2}\right)^2
\end{eqnarray}
using the alternate form below.
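This example can be confirmed by minimizing the squared form over the two binary auxiliaries; a Python sketch (function names are ours):

```python
from itertools import combinations, product

def f(b):
    # pairs - 3*triples + 6*quadruple: equals 1 iff exactly two b_i are 1
    pairs = sum(b[i] * b[j] for i, j in combinations(range(4), 2))
    triples = sum(b[i] * b[j] * b[k] for i, j, k in combinations(range(4), 3))
    quad = b[0] * b[1] * b[2] * b[3]
    return pairs - 3 * triples + 6 * quad

def quadratized_min(b):
    # minimize the squared form over the two binary auxiliaries
    return min((-3 + sum(b) - ba1 + 3 * ba2) ** 2
               for ba1, ba2 in product((0, 1), repeat=2))

for b in product((0, 1), repeat=4):
    assert f(b) == quadratized_min(b)
print("SFR-BCR-1 example verified")
```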
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right) &\rightarrow \left(-(c+1)+\sum_{i}b_{i}-\sum_{i}^{m-1}2^{i-1}b_{a_{i}}+\left(1+2^{m-1}\right)b_{a_{m}}\right)^{2}
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 1): \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{SFR-BCR-2 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Any $n$-variable symmetric function $f\left(b_1,b_2,\ldots b_n\right)\equiv f(b)$ that is non-zero only when $\sum b_i = c$ where $0\le c \le\nicefrac{n}{2}$, can be quadratized with $m=\lceil \log_2(n-c) \rceil +1$ auxiliary variables:
{\scriptsize
\begin{align}
\hspace{-5mm}
f(b) \rightarrow
\alpha
+\alpha^{b} \hspace{-0.5mm} \sum_{i} b_{i}
+\alpha_1^{b_{a}} \hspace{-1.0mm} \sum_{i}^{m-1} b_{a_{i}}
+\alpha_2^{b_{a}} b_{a_{m}}
+\alpha^{bb} \hspace{-0.5mm} \sum_{ij} b_{i}b_{j}
+\alpha_1^{bb_{a}} \hspace{-0.5mm} \sum_{i} \hspace{-1.5mm}\sum_{j}^{m-1} b_{i}b_{a_{j}}
+\alpha_2^{bb_{a}} \hspace{-0.5mm} \sum_{i}b_{i} b_{a_{m}}
+\alpha_1^{b_{a}b_{a}} \hspace{-1.2mm} \sum_{ij}^{m-1} b_{a_{i}}b_{a_{j}}
+\alpha_2^{b_{a}b_{a}} \hspace{-1.2mm} \sum_{i}^{m-1} b_{a_{i}}b_{a_{m}},
\end{align}
}
\noindent where:
\begin{align}
\begin{pmatrix}\alpha & \alpha^{bb}\\
\alpha^{b} & \alpha_1^{bb_{a}}\\
\alpha_1^{b_{a}} & \alpha_2^{bb_{a}}\\
\alpha_2^{b_{a}} & \alpha_1^{b_{a}b_{a}}\\
\cdot & \alpha_2^{b_{a}b_{a}}
\end{pmatrix}=\begin{pmatrix}(c-1)^{2} & 1\\
-2(c-1) & +2^{i}\\
(1-c)2^{i} &-2\left(1+2^{m-1}\right)\\
\left(1+2^{m-1}\right)\left(2^{m-1}+2c-1\right) & 2^{i+j-1}\\
\cdot & - \left(1+2^{m-1}\right)2^{i}
\end{pmatrix}.
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $m=\lceil \log_2 (n-c) \rceil + 1$ auxiliary variables.
\item $n^2 + m^2$ non-submodular quadratic terms (all possible quadratic terms involving only non-auxiliary or only auxiliary variables).
\item $m$ non-submodular linear terms (all possible linear terms involving auxiliaries).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Small number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions
\item Introduces many linear and even more quadratic non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
For $n = 4$ and $c = 2$, we have $m = 2$, and
\begin{eqnarray}
& &\hspace{-10mm}(b_1 b_2 +b_1 b_3 + b_1 b_4 + b_2 b_3 + b_2 b_4 + b_3 b_4)
- 3(b_1 b_2 b_3 + b_1 b_2 b_4 + b_1 b_3 b_4 + b_2 b_3 b_4) + 6 b_1 b_2 b_3 b_4 \\
&\to& \left(1 - b_1 -b_2 - b_3 - b_4 - b_{a_1} + 3 b_{a_2}\right)^2
\end{eqnarray}
using the alternate form below.
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right) &\rightarrow \left((c-1)-\sum_{i}b_{i}-\sum_{i}^{m-1}2^{i-1}b_{a_{i}}+\left(1+2^{m-1}\right)b_{a_{m}}\right)^{2}
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 1): \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{SFR-BCR-3 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Any $n$-variable symmetric function $f\left(b_1,b_2,\ldots b_n\right)\equiv f(b)$ that is non-zero only when $\sum b_i = c$ where $\nicefrac{n}{2}\le c \le n$, can be quadratized with $m=\lceil \log_2 c \rceil$ auxiliary variables:
{\scriptsize
\begin{align}
\hspace{-5mm}
f(b) \rightarrow
\alpha
+\alpha^{b} \hspace{-0.5mm} \sum_{i} b_{i}
+\alpha_1^{b_{a}} \hspace{-1.0mm} \sum_{i}^{m-1} b_{a_{i}}
+\alpha_2^{b_{a}} b_{a_{m}}
+\alpha^{bb} \hspace{-0.5mm} \sum_{ij} b_{i}b_{j}
+\alpha_1^{bb_{a}} \hspace{-0.5mm} \sum_{i} \hspace{-1.5mm}\sum_{j}^{m-1} b_{i}b_{a_{j}}
+\alpha_2^{bb_{a}} \hspace{-0.5mm} \sum_{i}b_{i} b_{a_{m}}
+\alpha_1^{b_{a}b_{a}} \hspace{-1.2mm} \sum_{ij}^{m-1} b_{a_{i}}b_{a_{j}}
+\alpha_2^{b_{a}b_{a}} \hspace{-1.2mm} \sum_{i}^{m-1} b_{a_{i}}b_{a_{m}},
\end{align}
}
\noindent where:
\begin{align}
\begin{pmatrix}
\alpha & \alpha^{bb}\\
\alpha^{b} & \alpha_1^{bb_{a}}\\
\alpha_1^{b_{a}} & \alpha_2^{bb_{a}}\\
\alpha_2^{b_{a}} & \alpha_1^{b_{a}b_{a}}\\
\cdot & \alpha_2^{b_{a}b_{a}}
\end{pmatrix}=\begin{pmatrix}\frac{1}{2}(c^2+3c+2) & \frac{1}{2}\\
-c-\frac{3}{2} & -2^{i}\\
(3+c)2^{i-1} & \left(1+2^{m}\right)\\
\left(1+2^{m}\right)\left(2^{m-1}-c-1\right) & 2^{i+j-1}\\
\cdot & - \left(1+2^{m}\right)2^{i}
\end{pmatrix}.
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $m=\lceil \log_2 c \rceil$ auxiliary variables.
\item $n^2 + m^2$ non-submodular quadratic terms (all possible quadratic terms involving only non-auxiliary or only auxiliary variables).
\item $m$ non-submodular linear terms (all possible linear terms involving auxiliaries).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Small number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions
\item Introduces many linear and even more quadratic non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
For $n = 4$ and $c = 2$, we have $m = 1$, and
\begin{eqnarray}
& &\hspace{-15mm}(b_1 b_2 +b_1 b_3 + b_1 b_4 + b_2 b_3 + b_2 b_4 + b_3 b_4)
- 3(b_1 b_2 b_3 + b_1 b_2 b_4 + b_1 b_3 b_4 + b_2 b_3 b_4) + 6 b_1 b_2 b_3 b_4 \\
&\to& \binom{-3+b_1 + b_2 + b_3 + b_4 + 3b_{a_1}}{2}
\end{eqnarray}
using the alternate form below.
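Likewise, this example can be checked by exhaustive minimization over the single auxiliary ($m=1$), reading $\binom{x}{2}$ as the generalized binomial coefficient $x(x-1)/2$; a Python sketch (function names are ours):

```python
from itertools import combinations, product

def f(b):
    # the symmetric polynomial: equals 1 iff exactly two of the b_i are 1
    pairs = sum(b[i] * b[j] for i, j in combinations(range(4), 2))
    triples = sum(b[i] * b[j] * b[k] for i, j, k in combinations(range(4), 3))
    return pairs - 3 * triples + 6 * b[0] * b[1] * b[2] * b[3]

def binom2(x):
    # generalized binomial coefficient C(x, 2) = x*(x-1)/2
    return x * (x - 1) // 2

def quadratized_min(b):
    # a single binary auxiliary suffices here (m = 1)
    return min(binom2(-3 + sum(b) + 3 * ba1) for ba1 in (0, 1))

for b in product((0, 1), repeat=4):
    assert f(b) == quadratized_min(b)
print("SFR-BCR-3 example verified")
```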
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)&\rightarrow \binom{-(c+1)+\sum_{i}b_{i}-\sum_{i}^{m-1}2^{i}b_{a_{i}}+\left(1+2^{m}\right)b_{a_{m}}}{2}
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 2): \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{SFR-BCR-4 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Any $n$-variable symmetric function $f\left(b_1,b_2,\ldots b_n\right)\equiv f(b)$ that is non-zero only when $\sum b_i = c$ where $0\le c \le\nicefrac{n}{2}$, can be quadratized with $m=\lceil \log_2(n-c) \rceil$ auxiliary variables:
{\scriptsize
\begin{align}
\hspace{-5mm}
f(b) \rightarrow
\alpha
+\alpha^{b} \hspace{-0.5mm} \sum_{i} b_{i}
+\alpha_1^{b_{a}} \hspace{-1.0mm} \sum_{i}^{m-1} b_{a_{i}}
+\alpha_2^{b_{a}} b_{a_{m}}
+\alpha^{bb} \hspace{-0.5mm} \sum_{ij} b_{i}b_{j}
+\alpha_1^{bb_{a}} \hspace{-0.5mm} \sum_{i} \hspace{-1.5mm}\sum_{j}^{m-1} b_{i}b_{a_{j}}
+\alpha_2^{bb_{a}} \hspace{-0.5mm} \sum_{i}b_{i} b_{a_{m}}
+\alpha_1^{b_{a}b_{a}} \hspace{-1.2mm} \sum_{ij}^{m-1} b_{a_{i}}b_{a_{j}}
+\alpha_2^{b_{a}b_{a}} \hspace{-1.2mm} \sum_{i}^{m-1} b_{a_{i}}b_{a_{m}},
\end{align}
}
\noindent where:
\begin{align}
\begin{pmatrix}
\alpha & \alpha^{bb}\\
\alpha^{b} & \alpha_1^{bb_{a}}\\
\alpha_1^{b_{a}} & \alpha_2^{bb_{a}}\\
\alpha_2^{b_{a}} & \alpha_1^{b_{a}b_{a}}\\
\cdot & \alpha_2^{b_{a}b_{a}}
\end{pmatrix}=\begin{pmatrix}\frac{1}{2}(c^2-3c+2) & \frac{1}{2}\\
-c+\frac{3}{2} & +2^{i}\\
(3-c)2^{i-1} &-\left(1+2^{m}\right)\\
\left(1+2^{m}\right)\left(2^{m-1}+c-1\right) & 2^{i+j-1}\\
\cdot & - \left(1+2^{m}\right)2^{i}
\end{pmatrix}.
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $m=\lceil \log_2 (n-c) \rceil$ auxiliary variables.
\item $n^2 + m^2$ non-submodular quadratic terms (all possible quadratic terms involving only non-auxiliary or only auxiliary variables).
\item $m$ non-submodular linear terms (all possible linear terms involving auxiliaries).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Small number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions
\item Introduces many linear and even more quadratic non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
For $n = 4$ and $c = 2$, we have $m = 1$, and
\begin{eqnarray}
& &\hspace{-15mm}(b_1 b_2 +b_1 b_3 + b_1 b_4 + b_2 b_3 + b_2 b_4 + b_3 b_4)
- 3(b_1 b_2 b_3 + b_1 b_2 b_4 + b_1 b_3 b_4 + b_2 b_3 b_4) + 6 b_1 b_2 b_3 b_4 \\
&\to& \binom{1 - b_1 - b_2 - b_3 - b_4 + 3b_{a_1}}{2}
\end{eqnarray}
using the alternate form below.
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)& \rightarrow \binom{(c-1)-\sum_{i}b_{i}-\sum_{i}^{m-1}2^{i}b_{a_{i}}+\left(1+2^{m}\right)b_{a_{m}}}{2}
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 2): \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{SFR-BCR-5 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
For an $n$-variable symmetric function of the sum of all variables, $f\left(b_1,b_2,\ldots b_n\right) = f(\sum b_i)$, for some large value of $\lambda > \max(f)$ and $m=\lceil \sqrt{n+1}\rceil$ auxiliary variables:
{\footnotesize
\begin{align}
\begin{gathered}
f\left(\sum b_i \right) \rightarrow \sum_{ij}^{m}f\left((i-1)\left(m+1\right)+(j-1)\right)b_{a_{i}}b_{a_{c+j}}+\lambda\left(\left(1-\sum_{i}^{m}b_{a_{i}}\right)^{2}+\left(1-\sum_{i}^{m}b_{a_{c+i}}\right)^{2}+\right.\\
\left.\left(\sum_{i}b_{i}-\left(\left(m+1\right)\sum_{i}^{m}(i-1)b_{a_{i}}+\sum_{i}^{m}(i-1)b_{a_{c+i}}\right)\right)^{2}\right)
\end{gathered}
\end{align}
}
\noindent where:
\begin{align}
\begin{pmatrix}\alpha & \alpha^{bb}\\
\alpha^{b} & \alpha^{bb_{a,1}}\\
\alpha^{b_{a,1}} & \alpha^{bb_{a,2}}\\
\alpha^{b_{a,2}} & \alpha^{b_{a}b_{a}}
\end{pmatrix}=\begin{pmatrix}(c+1)^{2} & 1\\
-2(c+1) & -2^{i}\\
2(c+1) & -2\left(2m-2^{m-1}+1\right)\\
2(c+1)\left(2c-2^{m-1}+1\right) & 2^{i+j-2}
\end{pmatrix}.
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $m=\lceil \sqrt{n+1} \rceil $ auxiliary variables.
\item $n^2 + m^2$ non-submodular quadratic terms (all possible quadratic terms involving only non-auxiliary or only auxiliary variables).
\item $m$ non-submodular linear terms (all possible linear terms involving auxiliaries).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Small number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions
\item Introduces many linear and even more quadratic non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Alternate Forms}}
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)& \rightarrow \binom{\frac{1}{2}\left(\left(c-1\right)-\sum_{i}b_{i}-\left(2(n-c)-2^{m-1}+1\right)b_{a_{m}}-\sum_{i}^{m-1}2^{i-1}b_{a_{i}}\right)}{2}
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 9): \cite{Boros2018QuadratizationsOS} (contains a typo which was corrected in Theorem 6 of \cite{Boros2018boundsPaper}).
\end{itemize}
\newpage
\subsection{SFR-BCR-6 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
For an $n$-variable symmetric function of a \textit{weighted} sum of all variables, $f\left(b_1,b_2,\ldots b_n\right) = f(\sum w_ib_i)$, for some large value of $\lambda > \max(f)$ and $\max \left( f\left(\sum w_i b_i \right)\right) < (m+1)^2$:
\begin{align}
\begin{gathered}
f\left(\sum w_{i}b_{i}\right)\rightarrow\sum_{ij}^{m}\alpha_{ij}b_{a_{i}}b_{a_{m+i}}+\lambda\left(1+\left(\sum_{i}w_{i}b_{i}-(m-1)\sum_{i}^{m}b_{a_{i}}+\sum_{i}^{m}b_{a_{c+i}}\right)^{2}\right.\\
+\left.\sum_{i}^{m-1}\left(1-b_{a_{i}}\right)b_{a_{i+1}}+\sum_{i}^{m-1}\left(1-b_{a_{i+m}}\right)b_{a_{i+m+1}}\right)
\end{gathered}
\end{align}
\noindent where:
\begin{align}
\sum_{i}^{\alpha}\sum_{j}^{\beta}\alpha_{ij}&=f\left(\alpha(m+1)+\beta\right)
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $2m$ auxiliary variables, where $m> \sqrt{\max \left( f \left( \sum w_i b_i \right) \right)}-1$.
\item $n^2 + m^2$ non-submodular quadratic terms (all possible quadratic terms involving only non-auxiliary or only auxiliary variables).
\item $m$ non-submodular linear terms (all possible linear terms involving auxiliaries).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Small number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions
\item Introduces many linear and even more quadratic non-submodular terms.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 10): \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{SFR-ABCG-2 (Anthony, Boros, Crama, Gruber, 2014)}
\secspace\emph{\textbf{Summary}}
Any $n$-variable function that is non-zero only if $\sum b_{i}=2m-1$ for some integer $m$ (the ``parity function'') can be quadratized as follows:
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)\rightarrow\sum_{i}b_{i}+2\sum_{ij}b_{i}b_{j}+4\sum_{2i-1}^{n-1}b_{a_{i}}\left(2i-1-\sum_{j}b_{j}\right).
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $m=2\lfloor n/2 \rfloor$ auxiliary variables.
\item $\lfloor 1.5n\rfloor$ non-submodular linear terms.
\item $ n^2$ non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Smaller number of auxiliary variables than the most naive methods.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions.
\item Introduces many linear and even more quadratic non-submodular terms (everything is non-submodular except for $0.5n^2$ quadratic terms involving the auxiliaries with the non-auxiliaries).
\item Non-submodular terms can be rather large compared to the submodular terms (about $4n$ times as big).
\end{itemize}
\secspace\emph{\textbf{Example}}
By bit-flipping $b_2$ and $b_4$, i.e. substituting $b_2 = 1 - \bar{b}_{2}$ and $b_4 = 1 - \bar{b}_{4}$, we see that:
\begin{eqnarray}
H & = & 3b_{1}b_{2}+b_{2}b_{3}+2b_{1}b_{4}-4b_{2}b_{4}\protect\\
& = & -3b_{1}\bar{b}_{2}-\bar{b}_{2}b_{3}-2b_{1}\bar{b}_{4}-4\bar{b}_{2}\bar{b}_{4}+5b_{1}+b_{3}+4\bar{b}_{2}+4\bar{b}_{4}-4.
\end{eqnarray}
The first expression is highly non-submodular while the second is entirely submodular.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2014, original paper (Theorem 4.6): \cite{Anthony2014,Anthony2016}.
\end{itemize}
\newpage
\subsection{SFR-ABCG-3 (Anthony, Boros, Crama, Gruber, 2014)}
\secspace\emph{\textbf{Summary}}
The complement of the parity function can be quadratized as follows:
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)\rightarrow1+2\sum_{ij}b_{i}b_{j}-\sum_{i}b_{i}+4\sum_{2i}^{n-1}b_{a_{i}}\left(i-\sum_{j}^{n}b_{j}\right)
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $m=2\lfloor \frac{n-1}{2} \rfloor$ auxiliary variables.
\item $\lfloor 0.5n\rfloor$ non-submodular linear terms.
\item $ n^2$ non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Smaller number of auxiliary variables than the most naive methods.
\item Fewer non-submodular linear terms than in the analogous quadratization for its complement (the parity function).
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions.
\item Introduces many linear and even more quadratic non-submodular terms (everything is non-submodular except for $0.5n^2$ quadratic terms involving the auxiliaries with the non-auxiliaries, and all $n$ linear terms involving only the non-auxiliaries).
\item Non-submodular terms can be rather large compared to the submodular terms (about $4n$ times as big).
\end{itemize}
\secspace\emph{\textbf{Example}}
By bit-flipping $b_2$ and $b_4$, i.e. substituting $b_2 = 1 - \bar{b}_{2}$ and $b_4 = 1 - \bar{b}_{4}$, we see that:
\begin{eqnarray}
H & = & 3b_{1}b_{2}+b_{2}b_{3}+2b_{1}b_{4}-4b_{2}b_{4}\protect\\
& = & -3b_{1}\bar{b}_{2}-\bar{b}_{2}b_{3}-2b_{1}\bar{b}_{4}-4\bar{b}_{2}\bar{b}_{4}+5b_{1}+b_{3}+4\bar{b}_{2}+4\bar{b}_{4}-4.
\end{eqnarray}
The first expression is highly non-submodular while the second is entirely submodular.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2014, original paper (Theorem 4.6): \cite{Anthony2014,Anthony2016}.
\end{itemize}
\newpage
\subsection{SFR-BCR-7 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
For a symmetric function such that $f\left(|c|\right)=0$ for $c>n$, with $2\lceil \sqrt{n+1}\rceil$ auxiliary variables we have:
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)\rightarrow1+2\sum_{ij}b_{i}b_{j}-\sum_{i}b_{i}+4\sum_{2i}^{n-1}b_{a_{i}}\left(i-\sum_{j}^{n}b_{j}\right)
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $m=\lceil \sqrt{n+1}\rceil$ auxiliary variables.
\item $\lfloor 0.5n\rfloor$ non-submodular linear terms.
\item $ n^2$ non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Smaller number of auxiliary variables than the most naive methods.
\item Fewer non-submodular linear terms than in the analogous quadratization for its complement (the parity function).
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions.
\item Introduces many linear and even more quadratic non-submodular terms (everything is non-submodular except for $0.5n^2$ quadratic terms involving the auxiliaries with the non-auxiliaries, and all $n$ linear terms involving only the non-auxiliaries).
\item Non-submodular terms can be rather large compared to the submodular terms (about $4n$ times as big).
\end{itemize}
\secspace\emph{\textbf{Example}}
By bit-flipping $b_2$ and $b_4$, i.e. substituting $b_2 = 1 - \bar{b}_{2}$ and $b_4 = 1 - \bar{b}_{4}$, we see that:
\begin{eqnarray}
H & = & 3b_{1}b_{2}+b_{2}b_{3}+2b_{1}b_{4}-4b_{2}b_{4}\protect\\
& = & -3b_{1}\bar{b}_{2}-\bar{b}_{2}b_{3}-2b_{1}\bar{b}_{4}-4\bar{b}_{2}\bar{b}_{4}+5b_{1}+b_{3}+4\bar{b}_{2}+4\bar{b}_{4}-4.
\end{eqnarray}
The first expression is highly non-submodular while the second is entirely submodular.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 6): \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{SFR-BCR-8 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
For a symmetric function such that $f\left(|c|\right)=0$ for $c>n$, with $\max\left( \lceil \log(c)\rceil , \lceil \log(n-c)\rceil \right)$ auxiliary variables we have:
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)\rightarrow1+2\sum_{ij}b_{i}b_{j}-\sum_{i}b_{i}+4\sum_{2i}^{n-1}b_{a_{i}}\left(i-\sum_{j}^{n}b_{j}\right)
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $\max\left( \lceil \log(c)\rceil , \lceil \log(n-c)\rceil \right)$ auxiliary variables.
\item $\lfloor 0.5n\rfloor$ non-submodular linear terms.
\item $ n^2$ non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Smaller number of auxiliary variables than the most naive methods.
\item Fewer non-submodular linear terms than in the analogous quadratization for its complement (the parity function).
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions.
\item Introduces many linear and even more quadratic non-submodular terms (everything is non-submodular except for $0.5n^2$ quadratic terms involving the auxiliaries with the non-auxiliaries, and all $n$ linear terms involving only the non-auxiliaries).
\item Non-submodular terms can be rather large compared to the submodular terms (about $4n$ times as big).
\end{itemize}
\secspace\emph{\textbf{Example}}
By bit-flipping $b_2$ and $b_4$, i.e. substituting $b_2 = 1 - \bar{b}_{2}$ and $b_4 = 1 - \bar{b}_{4}$, we see that:
\begin{eqnarray}
H & = & 3b_{1}b_{2}+b_{2}b_{3}+2b_{1}b_{4}-4b_{2}b_{4}\protect\\
& = & -3b_{1}\bar{b}_{2}-\bar{b}_{2}b_{3}-2b_{1}\bar{b}_{4}-4\bar{b}_{2}\bar{b}_{4}+5b_{1}+b_{3}+4\bar{b}_{2}+4\bar{b}_{4}-4.
\end{eqnarray}
The first expression is highly non-submodular while the second is entirely submodular.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 7): \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{SFR-BCR-9 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
For a symmetric function such that $f\left(|c|\right)=0$ for $c>n$, with $\max\left( \lceil \log(c)\rceil , \lceil \log(n-c)\rceil \right)$ auxiliary variables we have:
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)\rightarrow1+2\sum_{ij}b_{i}b_{j}-\sum_{i}b_{i}+4\sum_{2i}^{n-1}b_{a_{i}}\left(i-\sum_{j}^{n}b_{j}\right)
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $\max\left( \lceil \log(c)\rceil , \lceil \log(n-c)\rceil \right)$ auxiliary variables.
\item $\lfloor 0.5n\rfloor$ non-submodular linear terms.
\item $ n^2$ non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Smaller number of auxiliary variables than the most naive methods.
\item Fewer non-submodular linear terms than in the analogous quadratization for its complement (the parity function).
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions.
\item Introduces many linear and even more quadratic non-submodular terms (everything is non-submodular except for $0.5n^2$ quadratic terms involving the auxiliaries with the non-auxiliaries, and all $n$ linear terms involving only the non-auxiliaries).
\item Non-submodular terms can be rather large compared to the submodular terms (about $4n$ times as big).
\end{itemize}
\secspace\emph{\textbf{Example}}
By bit-flipping $b_2$ and $b_4$, i.e. substituting $b_2 = 1 - \bar{b}_{2}$ and $b_4 = 1 - \bar{b}_{4}$, we see that:
\begin{eqnarray}
H & = & 3b_{1}b_{2}+b_{2}b_{3}+2b_{1}b_{4}-4b_{2}b_{4}\protect\\
& = & -3b_{1}\bar{b}_{2}-\bar{b}_{2}b_{3}-2b_{1}\bar{b}_{4}-4\bar{b}_{2}\bar{b}_{4}+5b_{1}+b_{3}+4\bar{b}_{2}+4\bar{b}_{4}-4.
\end{eqnarray}
The first expression is highly non-submodular while the second is entirely submodular.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 8): \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{PFR-BCR-1 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
The parity function for even $n$ can be quadratized with:
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)\rightarrow1+2\sum_{ij}b_{i}b_{j}-\sum_{i}b_{i}+4\sum_{2i}^{n-1}b_{a_{i}}\left(i-\sum_{j}^{n}b_{j}\right)
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $\lceil \log(n)\rceil -1$ auxiliary variables.
\item $\lfloor 0.5n\rfloor$ non-submodular linear terms.
\item $ n^2$ non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Smaller number of auxiliary variables than the most naive methods.
\item Fewer non-submodular linear terms than in the analogous quadratization for its complement (the parity function).
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions.
\item Introduces many linear and even more quadratic non-submodular terms (everything is non-submodular except for $0.5n^2$ quadratic terms involving the auxiliaries with the non-auxiliaries, and all $n$ linear terms involving only the non-auxiliaries).
\item Non-submodular terms can be rather large compared to the submodular terms (about $4n$ times as big).
\end{itemize}
\secspace\emph{\textbf{Example}}
By bit-flipping $b_2$ and $b_4$, i.e. substituting $b_2 = 1 - \bar{b}_{2}$ and $b_4 = 1 - \bar{b}_{4}$, we see that:
\begin{eqnarray}
H & = & 3b_{1}b_{2}+b_{2}b_{3}+2b_{1}b_{4}-4b_{2}b_{4}\protect\\
& = & -3b_{1}\bar{b}_{2}-\bar{b}_{2}b_{3}-2b_{1}\bar{b}_{4}-4\bar{b}_{2}\bar{b}_{4}+5b_{1}+b_{3}+4\bar{b}_{2}+4\bar{b}_{4}-4.
\end{eqnarray}
The first expression is highly non-submodular while the second is entirely submodular.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 11): \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{PFR-BCR-2 (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
The parity function for odd $n$ can be quadratized with:
\begin{align}
f\left(b_{1},b_{2},\ldots,b_{n}\right)\rightarrow1+2\sum_{ij}b_{i}b_{j}-\sum_{i}b_{i}+4\sum_{2i}^{n-1}b_{a_{i}}\left(i-\sum_{j}^{n}b_{j}\right)
\end{align}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $\lceil \log(n)\rceil -1$ auxiliary variables.
\item $\lfloor 0.5n\rfloor$ non-submodular linear terms.
\item $ n^2$ non-submodular quadratic terms.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Smaller number of auxiliary variables than the most naive methods.
\item Fewer non-submodular linear terms than in the analogous quadratization for its complement (the parity function).
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only works for a special class of functions.
\item Introduces many linear and even more quadratic non-submodular terms (everything is non-submodular except for $0.5n^2$ quadratic terms involving the auxiliaries with the non-auxiliaries, and all $n$ linear terms involving only the non-auxiliaries).
\item Non-submodular terms can be rather large compared to the submodular terms (about $4n$ times as big).
\end{itemize}
\secspace\emph{\textbf{Example}}
By bit-flipping $b_2$ and $b_4$, i.e. substituting $b_2 = 1 - \bar{b}_{2}$ and $b_4 = 1 - \bar{b}_{4}$, we see that:
\begin{eqnarray}
H & = & 3b_{1}b_{2}+b_{2}b_{3}+2b_{1}b_{4}-4b_{2}b_{4}\protect\\
& = & -3b_{1}\bar{b}_{2}-\bar{b}_{2}b_{3}-2b_{1}\bar{b}_{4}-4\bar{b}_{2}\bar{b}_{4}+5b_{1}+b_{3}+4\bar{b}_{2}+4\bar{b}_{4}-4.
\end{eqnarray}
The first expression is highly non-submodular while the second is entirely submodular.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item 2018, original paper (Theorem 11): \cite{Boros2018boundsPaper}.
\end{itemize}
\newpage
\subsection{Lower bounds for SFRs (Anthony, Boros, Crama, Gruber, 2014)}
\begin{itemize}
\item There exist symmetric functions on $n$ variables for which no quadratization can be done without at least $\Omega\left(\sqrt{n}\right)$ auxiliary variables (Theorem 5.3 from \cite{Anthony2014,Anthony2016}).
\item There exist symmetric functions on $n$ variables for which no quadratization linear in the auxiliaries can be done without at least $\Omega\left( \frac{n}{\log_2(n)}\right)$ auxiliary variables (Theorem 5.5 from \cite{Anthony2014,Anthony2016}).
\item The parity function on $n$ variables cannot be quadratized without quadratic terms involving the auxiliary variables, unless there are at least $\sqrt{\nicefrac{n}{4}-1}+1=\Omega\left( \sqrt{n} \right)$ auxiliary variables (Theorem 5.6 from \cite{Anthony2014,Anthony2016}).
\item Theorem 5 of \cite{Boros2018boundsPaper} gives an even tighter bound of $\lceil \log(n)\rceil -1$ for the minimum number of auxiliary variables for the parity function.
\item Corollary 5 of \cite{Boros2018boundsPaper} gives $m\ge \log\left( 1/2 - \mu\right) + \log(n) -1$.
\end{itemize}
\subsection{Lower bounds for positive monomials (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\begin{itemize}
\item A positive monomial with $n$ variables cannot be quadratized with fewer than $\lceil \log(n)\rceil - 1$ auxiliary variables, unless there is some extra deduction we can make about the optimization problem, as in for example deduc-reduc (Corollary 1 from \cite{Boros2018boundsPaper}).
\item ALCN (at least $c$ out of $n$) and ECN (exact $c$ out of $n$) functions also cannot be quadratized with fewer than $\lceil \log(n)\rceil - 1$ auxiliary variables (Corollaries 2 and 3 from \cite{Boros2018boundsPaper}).
\item ECN (exact $c$ out of $n$) functions also cannot be quadratized with fewer than $\max\left(\lceil \log(c)\rceil , \lceil \log(n-c)\rceil \right)-1$ auxiliary variables (Corollaries 2 and 3 from \cite{Boros2018boundsPaper}).
\end{itemize}
\newpage
\subsection{Lower bounds for ZUCs (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\begin{itemize}
\item There exist ZUC (zero until $c$) functions such that every quadratization must involve at least $\Omega\left(2^{n/2}\right)$ auxiliary variables, no matter what the value of $c$ (Theorem 2 from \cite{Boros2018boundsPaper}). This is true for almost all ZUC functions because the set of ZUCs requiring fewer auxiliary variables has Lebesgue measure zero.
\item For any $c\ge 0$, the number of auxiliary variables is $m\ge \lceil \log(c) \rceil -1$ (Theorem 3 of \cite{Boros2018boundsPaper}).
\end{itemize}
\subsection{Lower bounds for $d$-sublinear functions (Boros, Crama, Rodr\'{i}guez-Heck, 2018)}
\begin{itemize}
\item The number of auxiliary variables $m$ is such that $2^{m+1} \ge \frac{\beta(q_1)}{2} -d +1$ (Theorem 12 from \cite{Boros2018boundsPaper}).
\item For any $c\ge 0$, the number of auxiliary variables is $m\ge \lceil \log(c) \rceil -1$ (Theorem 3 of \cite{Boros2018boundsPaper}).
\end{itemize}
\newpage
\section{Methods that quadratize MULTIPLE terms with the SAME auxiliaries (Case 2: Arbitrary Functions)}
\subsection{Reduction by Substitution (Rosenberg 1975)}
\secspace\emph{\textbf{Summary}}
Pick a variable pair $(b_{i},b_{j})$ and substitute $b_{i}b_{j}$ with a new auxiliary variable $b_{a_{ij}}$.
Enforce equality in the ground states by adding a sufficiently large scalar multiple of the penalty $P=b_{i}b_{j}-2b_{i}b_{a_{ij}}-2b_{j}b_{a_{ij}}+3b_{a_{ij}}$. Since $P > 0$ if and only if $b_{a_{ij}}\ne b_ib_j$, the minimum of the new $(k-1)$-local function will satisfy $b_{a_{ij}}=b_{i}b_{j}$, which means that at the minimum we recover precisely the original function. Repeating this $(k-2)$ times for each $k$-local term yields a 2-local function. For an arbitrary cubic term we have:
\begin{equation}
b_ib_j b_k \rightarrow b_ab_k + b_ib_j - 2b_ib_a - 2b_jb_a + 3b_a.
\end{equation}
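The cubic-term substitution above can be checked by brute force: minimizing the right-hand side over the auxiliary recovers $b_ib_jb_k$ on all $2^3$ inputs (a minimal sketch; the function name is ours):

```python
from itertools import product

def rosenberg_quadratic(bi, bj, bk, ba):
    # b_a b_k plus the penalty P = b_i b_j - 2 b_i b_a - 2 b_j b_a + 3 b_a
    return ba*bk + bi*bj - 2*bi*ba - 2*bj*ba + 3*ba

# minimizing over the auxiliary recovers b_i b_j b_k on all 8 inputs
for bi, bj, bk in product((0, 1), repeat=3):
    assert min(rosenberg_quadratic(bi, bj, bk, ba) for ba in (0, 1)) == bi*bj*bk
```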
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary variable per reduction.
\item At most $k\,t$ auxiliary variables for a $k$-local objective function of $t$ terms, but usually fewer.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Variable can be used across the entire objective function, reducing many terms at once.
\item Very easy to implement.
\item Reproduces not only the ground state, but the full spectrum.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Inefficient for single terms as it introduces many auxiliary variables compared to Ishikawa reduction, for example.
\item Introduces quadratic terms with large positive coefficients, making them highly non-submodular.
\item Determining optimal substitutions can be expensive.
\end{itemize}
\secspace\emph{\textbf{Example}}
We pick the pair $(b_{1},b_{2})$, substitute $b_a$ for it in both terms, and add the penalty with weight $2$ (the weight must be at least the sum of the magnitudes of the coefficients of the terms containing the pair, here $1+1=2$, for the full spectrum to be preserved):
\begin{equation}
b_{1}b_{2}b_{3}+b_{1}b_{2}b_{4}\mapsto b_{3}b_a+b_{4}b_a+2\left(b_{1}b_{2}-2b_{1}b_a-2b_{2}b_a+3b_a\right)
\end{equation}
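A brute-force check of this two-term substitution; note that the penalty needs weight $2$ here (a weight at least the sum of the coefficients of the terms containing $b_1b_2$), otherwise the minimum over $b_a$ under-counts the state $b_1=b_2=b_3=b_4=1$:

```python
from itertools import product

def f(b1, b2, b3, b4):
    return b1*b2*b3 + b1*b2*b4

def quad(b1, b2, b3, b4, ba):
    # substitute b_a for b_1 b_2 and add the Rosenberg penalty with weight 2
    penalty = b1*b2 - 2*b1*ba - 2*b2*ba + 3*ba
    return b3*ba + b4*ba + 2*penalty

# minimizing over the auxiliary reproduces the original on all 16 inputs
for b in product((0, 1), repeat=4):
    assert min(quad(*b, ba) for ba in (0, 1)) == f(*b)
```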
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Rosenberg1975}
\item Re-discovered in the context of diagonal quantum Hamiltonians: \cite{Biamonte2008a}.
\item Used in: \cite{Perdomo2008, Bian2013}.
\end{itemize}
\newpage
\subsection{FGBZ Reduction for Negative Terms (Fix-Gruber-Boros-Zabih, 2011)}
\secspace\emph{\textbf{Summary}}
We consider a set $C$ of variables which can occur in multiple terms throughout the objective function.
Each application `rips out' this common component from each term \cite{Fix2011,Boros2014}.
\begin{equation}
\sum_{H}\alpha_{H}\prod_{j\in H}b_{j}\rightarrow\sum_{H}\alpha_{H}\left(\prod_{j\in C}b_{j}+\prod_{j\in H\setminus C}b_{j}-1\right)b_a
\end{equation}
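A brute-force check of the negative-term identity on a single hypothetical quartic term (written with the factor $\left(\prod_{j\in C}b_{j}+\prod_{j\in H\setminus C}b_{j}-1\right)b_a$, the sign convention for which minimizing over $b_a$ recovers the term; the names and the weight $-2$ are ours):

```python
from itertools import product

alpha = -2  # a negative coefficient; H = {b1,b2,b3,b4}, common part C = {b1,b2}

def term(b1, b2, b3, b4):
    return alpha * b1*b2*b3*b4

def ripped(b1, b2, b3, b4, ba):
    # alpha * (prod_C + prod_{H\C} - 1) * b_a; degree drops from 4 to 3
    return alpha * (b1*b2 + b3*b4 - 1) * ba

# minimizing over the auxiliary reproduces the quartic term on all 16 inputs
for b in product((0, 1), repeat=4):
    assert min(ripped(*b, ba) for ba in (0, 1)) == term(*b)
```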
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item One auxiliary variable per application.
\item In combination with \ref{subsec:Negative-Monomial-Reduction}, it can reduce $t$ positive terms of degree $k$ in $n$ variables using $n+t(k-1)$ auxiliary variables in the worst case.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Can reduce the connectivity of an objective function, as it breaks interactions between variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Cannot reduce the degree of the original function if $|C|\le 1$; for other values of $|C|$ it reduces the degree but never directly produces a quadratic, since the ripped-out terms still have degree $|C|+1\ge3$.
\end{itemize}
\secspace\emph{\textbf{Example}}
First let $C=b_{1}$ and use the positive weight version:
\begin{eqnarray}
b_{1}b_{2}b_{3}+b_{1}b_{2}b_{4} & \mapsto & 2b_{a_1}b_{1}+(1-b_{a_1})b_{2}b_{3}+(1-b_{a_1})b_{2}b_{4}\\
& = & 2b_{a_1}b_{1}+b_{2}b_{3}+b_{2}b_{4}-b_{a_1}b_{2}b_{3}-b_{a_1}b_{2}b_{4}
\end{eqnarray}
now we can use \ref{subsec:Negative-Monomial-Reduction}:
\begin{eqnarray}
-b_{a_1}b_{2}b_{3}-b_{a_1}b_{2}b_{4} & \mapsto & 2b_{a_2}-b_{a_1}b_{a_2}-b_{a_2}b_{2}-b_{a_2}b_{3}+2b_{a_2}-b_{a_1}b_{a_2}-b_{a_2}b_{2}-b_{a_2}b_{4}\\
& = & 4b_{a_2}-2b_{a_1}b_{a_2}-2b_{a_2}b_{2}-b_{a_2}b_{3}-b_{a_2}b_{4}.
\end{eqnarray}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper and application to image denoising: \citep{Fix2011}.
\end{itemize}
\newpage
\subsection{FGBZ Reduction for Positive Terms (Fix-Gruber-Boros-Zabih, 2011)}
\secspace\emph{\textbf{Summary}}
We consider a set $C$ of variables which can occur in multiple terms throughout the objective function.
Each application `rips out' this common component from each term \cite{Fix2011,Boros2014}:
\begin{equation}
\sum_{H}\alpha_H\prod_{j\in H} b_{j} \rightarrow \sum_{H}\alpha_{H}b_a\prod_{j\in C}b_{j}+\sum_{H}\alpha_{H}(1-b_a)\prod_{j\in H\setminus C}b_{j}.
\end{equation}
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item One auxiliary variable per application.
\item In combination with \ref{subsec:Negative-Monomial-Reduction}, it can reduce $t$ positive terms of degree $k$ in $n$ variables using $n+t(k-1)$ auxiliary variables in the worst case.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item It is a `perfect' transformation, meaning that after minimizing over $b_a$, the original degree-$k$ function is recovered.
\item Can reduce the connectivity of an objective function, as it breaks interactions between variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item If $|C|=1$ the first sum will result in a quadratic but the second sum will have degree $k$. If $|C|=k-1$ the second sum will be quadratic but the first term will have degree $k$. For any other value of $|C|$, both sums will be super-quadratic, but the part of the second sum involving $b_a$ will be negative and therefore can be quadratized easily.
\end{itemize}
\secspace\emph{\textbf{Example}}
With $C=b_{1}$ we can get:
\begin{eqnarray}
b_{1}b_{2}b_{3}+b_{1}b_{2}b_{4} & \rightarrow & 2b_{a_1}b_{1}+(1-b_{a_1})b_{2}b_{3}+(1-b_{a_1})b_{2}b_{4}\\
& = & 2b_{a_1}b_{1}+b_{2}b_{3}+b_{2}b_{4}-b_{a_1}b_{2}b_{3}-b_{a_1}b_{2}b_{4}
\end{eqnarray}
now we can use \ref{subsec:Negative-Monomial-Reduction} to quadratize the two negative cubic terms:
\begin{eqnarray}
-b_{a_1}b_{2}b_{3}-b_{a_1}b_{2}b_{4} & \mapsto & 2b_{a_2}-b_{a_1}b_{a_2}-b_{a_2}b_{2}-b_{a_2}b_{3}+2b_{a_2}-b_{a_1}b_{a_2}-b_{a_2}b_{2}-b_{a_2}b_{4}\\
& = & 4b_{a_2}-2b_{a_1}b_{a_2}-2b_{a_2}b_{2}-b_{a_2}b_{3}-b_{a_2}b_{4}.
\end{eqnarray}
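The two-step example above can be verified by brute force: minimizing the final quadratic over both auxiliaries recovers the original pair of cubic terms on all $2^4$ inputs (the helper names `f` and `quad` are ours):

```python
from itertools import product

def f(b1, b2, b3, b4):
    return b1*b2*b3 + b1*b2*b4

def quad(b1, b2, b3, b4, a1, a2):
    # step 1 (FGBZ, C = {b1}):  2 a1 b1 + (1-a1)(b2 b3 + b2 b4)
    # step 2: negative-monomial reduction of -a1 b2 b3 - a1 b2 b4 via a2
    return (2*a1*b1 + b2*b3 + b2*b4
            + 4*a2 - 2*a1*a2 - 2*a2*b2 - a2*b3 - a2*b4)

for b in product((0, 1), repeat=4):
    best = min(quad(*b, a1, a2) for a1 in (0, 1) for a2 in (0, 1))
    assert best == f(*b)
```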
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper and application to image denoising: \citep{Fix2011}.
\end{itemize}
\newpage
\subsection{Pairwise Covers (Anthony-Boros-Crama-Gruber, 2017)}
\secspace\emph{\textbf{Summary}}
Here we consider a set $C$ of variables which occur in multiple monomials throughout the objective function.
Each application `rips out' this common component from each term \cite{Fix2011,Boros2014}.
Let $\mathcal{H}$ be a set of monomials, where $C \subseteq H$ for each $H\in\mathcal{H}$ and each monomial $H$ has a weight $\alpha_{H}$.
The algorithm comes in two parts: one for when all $\alpha_{H}>0$ and one for when all $\alpha_{H}<0$. Combining the two gives the final method:
\begin{enumerate}
\item $\alpha_{H}>0$
\begin{equation}
\sum_{H\in\mathcal{H}}\alpha_{H}\prod_{j\in H}b_{j}=\min_{b_a}\left(\sum_{H\in\mathcal{H}}\alpha_{H}\right)b_a\prod_{j\in C}b_{j}+\sum_{H\in\mathcal{H}}\alpha_{H}(1-b_a)\prod_{j\in H\setminus C}b_{j}
\end{equation}
\item $\alpha_{H}<0$
\begin{equation}
\sum_{H\in\mathcal{H}}\alpha_{H}\prod_{j\in H}b_{j}=\min_{b_a}\sum_{H\in\mathcal{H}}\alpha_{H}\left(\prod_{j\in C}b_{j}+\prod_{j\in H\setminus C}b_{j}-1\right)b_a
\end{equation}
\end{enumerate}
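The $\alpha_{H}>0$ branch can be checked by brute force on a small hypothetical instance: two monomials sharing the common part $C=\{b_1,b_2\}$, with arbitrary positive weights (all names and weights here are ours):

```python
from itertools import product

# H1 = {b1,b2,b3}, H2 = {b1,b2,b4}, common part C = {b1,b2}; weights alpha_H > 0
w1, w2 = 3, 5

def lhs(b1, b2, b3, b4):
    return w1*b1*b2*b3 + w2*b1*b2*b4

def rhs(b1, b2, b3, b4, ba):
    common = b1*b2  # product over C
    return (w1 + w2)*ba*common + w1*(1 - ba)*b3 + w2*(1 - ba)*b4

# minimizing over the single auxiliary reproduces both monomials at once
for b in product((0, 1), repeat=4):
    assert min(rhs(*b, ba) for ba in (0, 1)) == lhs(*b)
```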
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item One auxiliary variable per application.
\item In combination with \ref{subsec:Negative-Monomial-Reduction}, it can be used to make an algorithm which can reduce $t$ positive monomials of degree $d$ in $n$ variables using $n+t(d-1)$ auxiliary variables in the worst case.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Can reduce the connectivity of an objective function, as it breaks interactions between variables.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item The $\alpha_{H}>0$ method converts positive terms into negative ones of the same degree rather than reducing them, though these can then be reduced more easily.
\item The $\alpha_{H}<0$ method only works for $|C|>1$, and cannot quadratize cubic terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
First let $C=b_{1}$ and use the positive weight version:
\begin{eqnarray}
b_{1}b_{2}b_{3}+b_{1}b_{2}b_{4} & \mapsto & 2b_{a_1}b_{1}+(1-b_{a_1})b_{2}b_{3}+(1-b_{a_1})b_{2}b_{4}\\
& = & 2b_{a_1}b_{1}+b_{2}b_{3}+b_{2}b_{4}-b_{a_1}b_{2}b_{3}-b_{a_1}b_{2}b_{4}
\end{eqnarray}
now we can use \ref{subsec:Negative-Monomial-Reduction}:
\begin{eqnarray}
-b_{a_1}b_{2}b_{3}-b_{a_1}b_{2}b_{4} & \mapsto & 2b_{a_2}-b_{a_1}b_{a_2}-b_{a_2}b_{2}-b_{a_2}b_{3}+2b_{a_2}-b_{a_1}b_{a_2}-b_{a_2}b_{2}-b_{a_2}b_{4}\\
& = & 4b_{a_2}-2b_{a_1}b_{a_2}-2b_{a_2}b_{2}-b_{a_2}b_{3}-b_{a_2}b_{4}.
\end{eqnarray}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Theorem 4 of: \citep{Anthony2017}.
\end{itemize}
\subsection{Flag Based SAT Mapping\label{sub:flag_SAT}}
\secspace\emph{\textbf{Summary}}
This method uses gadgets to produce separate 3-SAT clauses containing variables that `flag' the state of pairs of other variables.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 24 auxiliary variables to quadratize $z_1z_2z_3$.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Very general and therefore conducive to proofs.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Extremely inefficient in terms of number of auxiliary variables.
\end{itemize}
\secspace\emph{\textbf{Example}}
To create a system which maps $b_1b_2b_3$, we use the following gadget (note that this is given in terms of $z$ in the original work and translated to $b$ here):
\begin{align}
H_1(b_1,b_2,b_3)=2\sum_{i=1}^3b_ib_{a_i}+2\sum_{i<j}^3b_{a_i}b_{a_j}-4\sum_{i=1}^3b_{a_i}-2\sum_{i=1}^3b_i+\frac{23}{2}.
\end{align}
Implementing $\alpha H_1$ creates a situation where $b_3$ is a `flag' for $b_1$ and $b_2$; in other words, $b_3$ is constrained to be $1$ in the low-energy manifold if $b_1=0$ and $b_2=0$. It follows from the universality of 3-SAT that these `flag' clauses can be combined to map any spin Hamiltonian. We also need anti-ferromagnetic couplings to express the `negated' variables, so we define
\begin{align}
H_2(b_1,b_{\neg1})=2\,b_1b_{\neg 1}-b_1-b_{\neg 1}+1.
\end{align}
As an explicit example, consider reproducing the spectrum of $z_1z_2z_3=(2\,b_1-1)(2\,b_2-1)(2\,b_3-1)$. In this case we need to assign a higher energy to the $(1,1,1)$, $(0,0,1)$, $(0,1,0)$, and $(1,0,0)$ states. A flag ($b_{a_{4,1}}$) which is forced into a higher energy state if these conditions are satisfied can be constructed from two instances of $H_1$ and an auxiliary qubit; combining these leads to
{\scriptsize
\begin{align}
z_1z_2z_3 \rightarrow & \alpha\left(\sum_{i=1}^3 H_2(b_i,b_{\neg i})+\sum_{i=1}^{2^3} H_2(b_{a_i},b_{\neg a_i})+ H_2(b_{a_{4,1}},b_{a_{4,2}})+H_1(b_{a_{1,1}},b_{a_{1,2}},b_{\neg a_1})+H_1(b_{a_1},b_{a_{1,3}},b_{a_{4,1}}) ~+\right. \\
& H_1(b_{1},b_{2},b_{\neg a_2})+H_1(b_{a_2},b_{3},b_{a_{4,2}})+H_1(b_{1},b_{2},b_{\neg a_3})+H_1(b_{a_3},b_{a_{1,3}},b_{a_{4,1}})+H_1(b_{a_{1,1}},b_{a_{1,2}},b_{\neg a_4})~+ \\
&H_1(b_{a_4},b_{3},b_{a_{4,2}}) + H_1(b_{1},b_{a_{1,2}},b_{\neg a_5})+H_1(b_{a_5},b_{3},b_{a_{4,1}})+H_1(b_{a_{1,1}},b_{2},b_{\neg a_6})+H_1(b_{a_6},b_{a_{1,3}},b_{a_{4,2}})~+ \\
& \left.H_1(b_{1},b_{2},b_{\neg a_7})+H_1(b_{a_7},b_{a_{1,3}},b_{a_{4,1}})+H_1(b_{a_{1,1}},b_{ a_{1,2}},b_{\neg a_8})+H_1(b_{a_8},b_{3},b_{a_{4,2}})\right)
-2\,b_{a_{4,1}}+1.
\end{align}
}
Each of the four subsequent lines assigns values to the flag variables $b_{a_{4,1}}$ and $b_{a_{4,2}}$ for a given state; for instance, the leftmost two terms of the second line enforce that $b_{a_{4,1}}=0$ if $(b_1,b_2,b_3)=(0,0,0)$, while the right two terms enforce that $b_{a_{4,1}}=1$ if $(b_1,b_2,b_3)=(1,1,1)$. Because there are $2^3=8$ possible bitstrings for $(b_1,b_2,b_3)$, and each term to enforce a flag state requires two instances of $H_1$ (and two auxiliary variables), a total of $16$ instances are required as well as $16$ auxiliary variables.
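The clause-like behaviour of $H_1$ can be confirmed by brute force: minimizing over the three auxiliaries, every assignment of $(b_1,b_2,b_3)$ sits at the same energy except $(0,0,0)$, which is penalized, exactly like the 3-SAT clause $b_1\lor b_2\lor b_3$ (a sketch working with $2H_1$ to keep the $\nicefrac{23}{2}$ constant an integer):

```python
from itertools import product

def two_H1(b, a):
    # twice H_1(b_1,b_2,b_3), with a = (a_1,a_2,a_3) the auxiliaries
    s = 4*sum(b[i]*a[i] for i in range(3))
    s += 4*sum(a[i]*a[j] for i in range(3) for j in range(i + 1, 3))
    s += -8*sum(a) - 4*sum(b) + 23
    return s

def ground(b):
    return min(two_H1(b, a) for a in product((0, 1), repeat=3))

# only (0,0,0) is penalized, as for the clause b1 OR b2 OR b3
for b in product((0, 1), repeat=3):
    assert ground(b) == (11 if b == (0, 0, 0) else 7)
```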
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Paper showing the universality of the Ising spin models: \cite{DelasCuevasGemmaandCubitt2016}.
\end{itemize}
\newpage
\subsection{Lower bounds for arbitrary functions (Anthony, Boros, Crama, Gruber, 2015)}
\begin{itemize}
\item There exist functions on $n$ variables for which no quadratization can be done without at least $\frac{2^{\nicefrac{n}{2}}}{8}=\Omega\left(2^{\nicefrac{n}{2}}\right)$ auxiliary variables (Theorem 5.3 from \cite{Anthony2017}).
\item There exist symmetric functions on $n$ variables for which no quadratization linear in the auxiliaries can be done without at least $\Omega\left( \frac{2^n}{n}\right)$ auxiliary variables (Theorem 5.5 from \cite{Anthony2017}).
\end{itemize}
\newpage
\section{Strategies for combining methods}
\subsection{SCM-BCR (Boros, Crama, and Rodr\'{i}guez-Heck, 2018)}
\secspace\emph{\textbf{Summary}}
Split a $k$-local monomial with odd $k$ into a $(k-1)$-local term (with even degree) and a new odd $k$-local term which has negative coefficient:
\begin{align}
\begin{gathered}
b_{1}b_{2}\cdots b_{k} \rightarrow \prod_{i=1}^{k-1}b_{i}-\prod_{i=1}^{k-1}b_{i}(1-b_{k})
\end{gathered}
\end{align}
\noindent We can use any of the PTR methods for even $k$ on the first term, and any of the NTR methods on the second term. This can be generalized to splits into factors of different degrees when seeking an ``optimum'' quadratization, and to more than two splits.
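The split above is an exact algebraic identity, $\prod_{i=1}^{k-1}b_i - \prod_{i=1}^{k-1}b_i(1-b_k) = \prod_{i=1}^{k}b_i$, which a quick brute-force check confirms (the helper name `split` is ours):

```python
from itertools import product
from math import prod

def split(b):
    """Return the two pieces of the split of the monomial prod(b)."""
    head = prod(b[:-1])            # even-degree, (k-1)-local piece
    tail = -head * (1 - b[-1])     # negative-coefficient, k-local piece
    return head, tail

for b in product((0, 1), repeat=5):   # k = 5, an odd degree
    head, tail = split(b)
    assert head + tail == prod(b)
```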
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item Depends on the methods used for the PTR and NTR procedures.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Very flexible.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item First turns one term into two terms, so might not be preferred when we wish to minimize the number of terms.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Boros2018QuadratizationsOS}.
\end{itemize}
\newpage
\subsection{Decomposition into symmetric and anti-symmetric parts}
\secspace\emph{\textbf{Summary}}
Split any function $f$ into a symmetric part and an anti-symmetric part:
\begin{align}
f\left(b_1,b_2,\ldots,b_n\right) &= f_{\textrm{symmetric}} + f_{\textrm{anti-symmetric}} ,\\
f_{\textrm{symmetric}} &\equiv \frac{1}{2}\left( f\left(b_1,b_2,\ldots,b_n\right) + f\left(1-b_1,1-b_2,\ldots,1-b_n\right) \right) \\
f_{\textrm{anti-symmetric}} &\equiv \frac{1}{2}\left( f\left(b_1,b_2,\ldots,b_n\right) - f\left(1-b_1,1-b_2,\ldots,1-b_n\right) \right)
\end{align}
\noindent We can now use any of the methods described only for symmetric functions, on the symmetric part, and use the (perhaps less powerful) general methods on the anti-symmetric part.
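A small plain-Python sketch (the function `f` below is an arbitrary made-up example) confirming that the split is exact and that the two parts behave as claimed under complementing all bits:

```python
from itertools import product

def flip(bits):
    """Complement every bit: b -> 1 - b."""
    return tuple(1 - b for b in bits)

def decompose(f):
    """Return the symmetric and anti-symmetric parts defined above."""
    f_sym = lambda bits: 0.5 * (f(bits) + f(flip(bits)))
    f_anti = lambda bits: 0.5 * (f(bits) - f(flip(bits)))
    return f_sym, f_anti

def f(b):                         # arbitrary non-symmetric example function
    return b[0] + 2 * b[0] * b[1] * b[2]

f_sym, f_anti = decompose(f)
for bits in product((0, 1), repeat=3):
    assert f(bits) == f_sym(bits) + f_anti(bits)     # the split is exact
    assert f_sym(bits) == f_sym(flip(bits))          # invariant under complement
    assert f_anti(bits) == -f_anti(flip(bits))       # changes sign under complement
```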
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item Depends on the methods used.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Allows non-symmetric functions to benefit from techniques designed only for symmetric functions.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item May result in more terms than simply quadratizing the non-symmetric function directly.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Discussed in: \cite{Kahl2011}.
\end{itemize}
\newpage
\part{\underline{{\normalsize Hamiltonians quadratic in $z$ and linear in $x$ (Transverse Field Ising Hamiltonians)}}\label{partTransverseIsing}}
The Ising Hamiltonian with a transverse field in the $x$ direction can be implemented in hardware:
\begin{align}
H = \sum_i \left( \alpha_i^{(z)} z_i + \alpha_i^{(x)} x_i\right) + \sum_{ij}\left(\alpha_{ij}^{(zz)} z_iz_j \right).
\end{align}
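As a minimal illustration of what this Hamiltonian class looks like as a matrix, the following numpy sketch assembles a dense transverse-field Ising Hamiltonian from coefficient lists (all coefficient values below are arbitrary, and `transverse_ising` is a hypothetical helper name):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(p, site, n):
    """Pauli `p` acting on qubit `site` of an n-qubit register (dense matrix)."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, p if k == site else I2)
    return out

def transverse_ising(az, ax, azz):
    """H = sum_i (az[i] z_i + ax[i] x_i) + sum_{(i,j)} azz[(i,j)] z_i z_j."""
    n = len(az)
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H += az[i] * op(Z, i, n) + ax[i] * op(X, i, n)
    for (i, j), coeff in azz.items():
        H += coeff * op(Z, i, n) @ op(Z, j, n)
    return H

H = transverse_ising([0.5, -1.0, 0.2], [1.0, 1.0, 1.0],
                     {(0, 1): 0.7, (1, 2): -0.3})
assert H.shape == (8, 8) and np.allclose(H, H.T)   # real symmetric, as expected
```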
\subsection{ZZZ-TI-CBBK: Transverse Ising from ZZZ, by Cao, Babbush, Biamonte, and Kais (2015)}
There is only one reduction in the literature for reducing a Hamiltonian term to the transverse Ising Hamiltonian, and it works on 3-local $zzz$ terms, by introducing an auxiliary qubit with label $a$:
\begin{align}
\alpha z_{i}z_{j}z_{k}\rightarrow\alpha^{I}+\alpha_{i}^{z}z_{i}+\alpha_{j}^{z}z_{j}+\alpha_{k}^{z}z_{k}+\alpha_{a}^{z}z_{a}+\alpha_{a}^{x}x_{a}+\alpha_{ia}^{zz}z_{i}z_{a}+\alpha_{ja}^{zz}z_{j}z_{a}+\alpha_{ka}^{zz}z_{k}z_{a} \label{eq:ZZZ-TI-CBBK}
\end{align}
\begin{tabular}{rcl}
$\alpha^{I}$ & = & $\frac{1}{2}\left(\Delta\textcolor{red}{\ensuremath{+}}\left(\frac{\alpha}{6}\right)^{\nicefrac{2}{5}}\Delta^{\nicefrac{3}{5}}\right)$\tabularnewline
$\alpha_{i}^{z}$ & = & $-\frac{1}{2}\left(\left(\frac{7\alpha}{6}+\left(\frac{\alpha}{6}\right)^{\nicefrac{3}{5}}\Delta^{\nicefrac{2}{5}}\right)\textcolor{red}{\ensuremath{-}}\left(\frac{\alpha\Delta^{4}}{6}\right)^{\nicefrac{1}{5}}\right)$\tabularnewline
$\alpha_{j}^{z}$ & = & $\alpha_{i}^{z}$\tabularnewline
$\alpha_{k}^{z}$ & = & $\alpha_{i}^{z}$\tabularnewline
$\alpha_{a}^{z}$ & = & $\frac{1}{2}\left(\Delta\textcolor{red}{\ensuremath{-}}\left(\frac{\alpha}{6}\right)^{\nicefrac{2}{5}}\Delta^{\nicefrac{3}{5}}\right)$\tabularnewline
$\alpha_{a}^{x}$ & = & $\left(\frac{\alpha\Delta^{4}}{6}\right)^{\nicefrac{1}{5}}$\tabularnewline
$\alpha_{ia}^{zz}$ & = & $-\frac{1}{2}\left(\left(\frac{7\alpha}{6}+\left(\frac{\alpha}{6}\right)^{\nicefrac{3}{5}}\Delta^{\nicefrac{2}{5}}\right)\textcolor{red}{\ensuremath{+}}\left(\frac{\alpha\Delta^{4}}{6}\right)^{\nicefrac{1}{5}}\right)$\tabularnewline
$\alpha_{ja}^{zz}$ & = & $\alpha_{ia}^{zz}$\tabularnewline
$\alpha_{ka}^{zz}$ & = & $\alpha_{ia}^{zz}$\tabularnewline
\end{tabular}
\vspace{5mm}
Including all coefficients and factorizing, we get:
\begin{align}
\alpha z_iz_jz_k \rightarrow & \left( \Delta + \left(\frac{\alpha \Delta^4}{6}\right)^{\nicefrac{1}{5}} \left(z_i+z_j+z_k \right) \right)\left( \frac{1-z_a}{2}\right) + \left(\frac{\alpha \Delta^4}{6}\right)^{\nicefrac{1}{5}} x_a\\
&+ \left( \left(\frac{\alpha}{6}\right)^{\nicefrac{2}{5}} \Delta^{\nicefrac{3}{5}} - \left( \frac{7\alpha}{6} + \left(\frac{\alpha}{6} \right)^{\nicefrac{3}{5}} \Delta^{\nicefrac{2}{5}} \right)\left(z_i + z_j + z_k \right) \right)\left(\frac{1+z_a}{2} \right)
\end{align}
The low-lying spectrum (eigenvalues \textbf{\textit{and}} eigenvectors) of the right side of Eq. \eqref{eq:ZZZ-TI-CBBK} will match those of the left side to within a spectral error of $\epsilon$ as long as $\Delta = \Omega\left(\epsilon^{-5}\right)$.
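A numerical sanity-check sketch of the factored form above (numpy; the values of $\alpha$ and $\Delta$ are arbitrary choices, and the spectral error is printed rather than asserted, since the matching quality depends on how large $\Delta$ is):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(p, site, n):
    """Pauli `p` on qubit `site` of an n-qubit register (dense matrix)."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, p if k == site else I2)
    return out

alpha, Delta = 1.0, 1e8                        # arbitrary test values
c = (alpha * Delta**4 / 6) ** (1 / 5)          # (alpha Delta^4 / 6)^(1/5)
beta = (alpha / 6) ** (2 / 5) * Delta ** (3 / 5)
gamma = 7 * alpha / 6 + (alpha / 6) ** (3 / 5) * Delta ** (2 / 5)

n = 4                                          # qubits i, j, k plus auxiliary a (last)
S = op(Z, 0, n) + op(Z, 1, n) + op(Z, 2, n)    # z_i + z_j + z_k
Pm = (np.eye(2**n) - op(Z, 3, n)) / 2          # (1 - z_a)/2
Pp = (np.eye(2**n) + op(Z, 3, n)) / 2          # (1 + z_a)/2
H = ((Delta * np.eye(2**n) + c * S) @ Pm + c * op(X, 3, n)
     + (beta * np.eye(2**n) - gamma * S) @ Pp)

target = alpha * op(Z, 0, 3) @ op(Z, 1, 3) @ op(Z, 2, 3)   # alpha z_i z_j z_k
low = np.sort(np.linalg.eigvalsh(H))[:8]                   # 8 lowest gadget levels
exact = np.sort(np.linalg.eigvalsh(target))
print(np.max(np.abs(low - exact)))   # spectral error; should shrink as Delta grows
```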
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary qubit
\item 8 auxiliary terms not proportional to \openone.
\end{itemize}
\newpage
\part{\underline{{\normalsize{General Quantum Hamiltonians}}}}\label{partGeneral}
\section{Non-perturbative Gadgets}
\subsection{NP-OY (Ocko \& Yoshida, 2011)}
\secspace\emph{\textbf{Summary}}
For the 8-body Hamiltonian:
\begin{eqnarray}
\begin{gathered}
H_{8\textrm{-body}} =-J\sum_{ij}\left(x_{ij3}x_{ij+1,2}x_{ij4}x_{ij+1,4}x_{i+1j1}x_{i+1j+1,1}x_{i+1j3}x_{ij+1,2}+\right. \\
\left.z_{ij1}z_{ij2}z_{ij3}z_{ij4}+z_{i-1j4}z_{ij1}+z_{ij2}z_{ij-1,3}+z_{ij4}z_{i+1j1}+z_{ij3}z_{ij+1,2}\right),
\end{gathered}
\end{eqnarray}
\noindent we define two auxiliary qubits for each pair $ij$, labeled $a_{ij1}$ and $a_{ij2}$. Then the 8-body Hamiltonian has the same low-lying eigenspace as the 4-body Hamiltonian:
\begin{eqnarray}
\begin{gathered}
H_{4\textrm{-body}} =-\sum_{ij}\alpha\left(z_{ij1}z_{ij2}z_{ij3}z_{ij4}+z_{i,j,-1,4}z_{ij1}+z_{ij2}z_{i,j-1,3}+z_{ij4}z_{i+1,j,1}+z_{ij3}z_{i,j+1,2}\right.\\
\left(1-z_{a_{ij1}}+z_{a_{ij2}}+z_{a_{ij1}}z_{a_{ij2}}\right)\left(z_{a_{i,j+1,1}}+z_{a_{i,j+1,2}}+z_{a_{i,j+1,1}}z_{a_{i,j+1,2}}-1\right)+\\
\left.\left(1+z_{a_{ij1}}-z_{a_{ij2}}+z_{a_{ij1}}z_{a_{ij2}}\right)\left(1-z_{a_{i+1,j1}}-z_{a_{i+1,j2}}-z_{a_{i+1,j1}}z_{a_{i+1,j2}}\right)\right)+\\
\frac{U}{2}\left(z_{a_{ij1}}+z_{a_{ij2}}+z_{a_{ij1}}z_{a_{ij2}}-1\right)+\\
\frac{t}{2}\left(\left(x_{a_{ij2}}+z_{a_{ij1}}x_{a_{ij2}}\right)x_{ij3}x_{ij4}+\left(x_{a_{ij1}}x_{a_{ij2}}+y_{a_{ij1}}y_{a_{ij2}}\right)x_{i,j+1,2}x_{i,j+1,4}+\right.\\
\left. \left.\left(x_{a_{ij2}}-z_{a_{ij1}}x_{a_{ij2}}\right)x_{i+1,j+1,1}x_{i+1,j+1,2}+\left(x_{a_{ij1}}x_{a_{ij2}}-y_{a_{ij1}}y_{a_{ij2}}\right)x_{i+1,j,1}x_{i+1,j,3}\right)\right).
\end{gathered}
\end{eqnarray}
\noindent Now, by defining the following operators on ququits (spin-$\nicefrac{3}{2}$ particles, or 4-level systems):
\begin{align}
s_{ijki^{\prime}j^{\prime}k^{\prime}}^{zz} &=z_{ijk}z_{i^{\prime}j^{\prime}k^{\prime}}\\
s_{a_{ij}1}^{zz} &=\left(1-z_{a1_{ij}}+z_{a2_{ij}}+z_{a1_{ij}}z_{a2_{ij}}\right)\\
s_{a_{ij}2}^{zz} &=\left(z_{a1_{ij}}+z_{a2_{ij}}+z_{a1_{ij}}z_{a2_{ij}}-1\right)\\
s_{a_{ij}3}^{zz} &=\left(1+z_{a1_{ij}}-z_{a2_{ij}}+z_{a1_{ij}}z_{a2_{ij}}\right)\\
s_{a_{ij}1}^{xz} &=\left(x_{a2_{ij}}+z_{a1_{ij}}x_{a2_{ij}}\right)\\
s_{a_{ij}2}^{xz} &=\left(x_{a2_{ij}}-z_{a1_{ij}}x_{a2_{ij}}\right)\\
s_{ijki^{\prime}j^{\prime}k^{\prime}}^{xx} &=x_{ijk}x_{i^{\prime}j^{\prime}k^{\prime}}\\
s_{a_{ij}1}^{xy} &=\left(x_{a1_{ij}}x_{a2_{ij}}+y_{a1_{ij}}y_{a2_{ij}}\right)\\
s_{a_{ij}2}^{xy} &=\left(x_{a1_{ij}}x_{a2_{ij}}-y_{a1_{ij}}y_{a2_{ij}}\right)
\end{align}
\subsection*{NP-OY (Ocko \& Yoshida, 2011) [Continued]}
\noindent We can write the 4-body Hamiltonian on qubits as a 2-body Hamiltonian on ququits:
{\scriptsize
\begin{eqnarray}
\begin{gathered}
H_{2\textrm{-body}} =-\sum_{ij}\left(\alpha\left(s_{ij1ij2}^{zz}s_{ij3ij4}^{zz}+s_{ij-1,4ij1}^{zz}+s_{ij2ij-1,3}^{zz}+s_{ij4i+1j1}^{zz}+s_{ij3ij+1,2}^{zz}+s_{a_{ij}1}^{zz}s_{a_{ij+1}2}^{zz}-s_{a_{ij}3}^{zz}s_{a_{i+1j}3}^{zz}\right)\right.\\
\left.+\frac{U}{2}s_{a_{ij},1}^{zz}+\frac{t}{2}\left(s_{a_{ij}1}^{xz}s_{ij3ij4}^{xx}+s_{a_{ij}1}^{xy}s_{ij+1,2ij+1,4}^{xx}+s_{a_{ij}2}^{xz}s_{i+1,j+1,1i+1,j+1,2}^{xx}+s_{a_{ij}2}^{xy}s_{i+1j1,i+1,j3}^{xx}\right)\right).
\end{gathered}
\end{eqnarray}
}
\noindent The low-lying eigenspace of $H_{2\textrm{-body}}$ is \textit{exactly} the same as that of $H_{4\textrm{-body}}$.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 2 auxiliary ququits for each pair $ij$.
\item 6 more terms in total (the 6 terms of the 8-body version become 12 terms: \\
11 of them 2-body and 1 of them 1-body).
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Non-perturbative. No prohibitive control precision requirement.
\item Only two auxiliaries required for each pair $ij$.
\item 8-body to 2-body transformation can be accomplished in 1 step, rather than a 1B1 gadget which would take 6 steps or an SD + $(3\rightarrow2)$ gadget combination which would take 4 steps.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Increase in dimension from working with only 2-level systems (spin-$\nicefrac{1}{2}$ particles, or $2\times2$ matrices) to working with 4-level systems (spin-$\nicefrac{3}{2}$ particles).
\item Until now, only derived for a very specific Hamiltonian form.
\item This approach may become more demanding for Hamiltonians that are more than 8-local.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Ocko2011}.
\end{itemize}
\newpage
\subsection{NP-SJ (Subasi \& Jarzynski, 2016)}
\secspace\emph{\textbf{Summary}}
Determine the $k$-local term, $H_{k-\rm{local}}$, whose degree we wish to reduce, and factor it into two commuting factors: $H_{k^\prime-\rm{local}}H_{(k-k^\prime)-\rm{local}}$, where $k^\prime$ can be as low as 0. Separate all terms that are at most $(k-1)$-local into ones that commute with one of these factors (it does not matter which one, but without loss of generality we take it to be the $(k-k^\prime)$-local one) and ones that anti-commute with it:
\begin{align}
H_{ <k \rm{-local}}^{\rm{commuting}} +H_{ <k \rm{-local}}^{\rm{anti-commuting}} + \alpha H_{k^\prime-\rm{local}}H_{(k-k^\prime)-\rm{local}}
\end{align}
\noindent Introduce one auxiliary qubit labeled by $a$ and the Hamiltonian:
\begin{align}
\alpha x_aH_{k^\prime-\rm{local}} + H_{ <k \rm{-local}}^{\rm{commuting}} + z_aH_{ <k \rm{-local}}^{\rm{anti-commuting}}
\end{align}
\noindent This Hamiltonian no longer contains $H_{k-\rm{local}}$, but $H_{ <k \rm{-local}}^{\rm{anti-commuting}}$ is now one degree higher.
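The commuting/anti-commuting split required here is easy to automate for Hamiltonians given as Pauli strings, using the fact that two Pauli strings commute iff they differ on an even number of positions where both letters are non-identity (a small illustrative helper; the strings below are arbitrary choices, not the example from this section):

```python
def commutes(p, q):
    """Pauli strings (e.g. 'IZXY') commute iff they anti-commute on an even
    number of single-qubit positions (both non-identity and different)."""
    odd = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return odd % 2 == 0

# Classify the lower-degree terms against the k-local factor to eliminate:
target = 'XXXX'                                  # e.g. x1 x2 x3 x4
terms = ['ZIII', 'IXII', 'ZYII', 'XYZI']
commuting = [t for t in terms if commutes(t, target)]
anti_commuting = [t for t in terms if not commutes(t, target)]
assert commuting == ['IXII', 'ZYII', 'XYZI'] and anti_commuting == ['ZIII']
```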
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary qubit to reduce $k$-local term to $(k^\prime+1)$-local where $k^\prime$ can even be 0-local, meaning the $k$-local term is reduced to a 1-local one.
\item Raises the $k$-locality of $H_{ <k \rm{-local}}^{\rm{anti-commuting}}$ by 1 during each application. It can become $(>k)$-local!
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Non-perturbative
\item Can linearize a term of arbitrary degree in one step.
\item Requires very few auxiliary qubits.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Can introduce many new non-local terms as an expense for reducing only one $k$-local term.
\item If the portion of the Hamiltonian that does not commute with the $(k-k^\prime)$-local term has terms of degree $k-1$ (which can happen if $k^\prime=0$), they will all become $k$-local, so there is no guarantee that this method reduces $k$-locality.
\item If any terms were more than 1-local, this method will not fully quadratize the Hamiltonian (it must be combined with other methods).
\item It only works when the Hamiltonian's terms of degree at most $k-1$ all either commute or anti-commute with the $k$-local term to be eliminated.
\end{itemize}
\vspace{-1mm}
\secspace\emph{\textbf{Example}}
\vspace{-3mm}
\begin{align}
\vspace{-1mm}
4z_5 -3 x_1 + 2z_1y_2x_5 + 9x_1x_2x_3x_4 -x_1y_2z_3x_5 \rightarrow 9x_{a_{1}} + 4z_{a_2}z_5 -3z_{a_3}x_1 -z_{a_3}x_{a_2} +2x_{a_3}x_5
\end{align}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper, and description of the choices of terms and factors used for the given example \cite{Subas2016}.
\end{itemize}
\subsection{NP-Nagaj-1 (Nagaj, 2010)}
\secspace\emph{\textbf{Summary}}
\vspace{-1mm}
The Feynman Hamiltonian can be written as \cite{Feynman1985a}:
\vspace{-5mm}
\begin{align}
\frac{1}{4}\left(x_{1}x_{2}-{\rm i}y_{1}x_{2}+{\rm i}x_{1}y_{2}+y_{1}y_{2}\right)U_{{\rm 2-local}}+\frac{1}{4}\left(x_{1}x_{2}+{\rm i}y_{1}x_{2}-{\rm i}x_{1}y_{2}+y_{1}y_{2}\right)U_{{\rm 2-local}}^{\dagger},
\end{align}
\noindent where $U_{{\rm 2-local}}$ is an arbitrary 2-local unitary matrix that acts on qubits different from the ones labeled by ``1'' and ``2''. This Hamiltonian, which is 4-local on qubits, can be transformed into one that is 2-local on qubits and qutrits. Here we show the 2-local Hamiltonian for the case where $U_{{\rm 2-local}}={\rm CNOT}\equiv\frac{1}{2}\left(\openone+z_{3}+x_{4}-z_{3}x_{4}\right).$ We start with the specific 4-local Hamiltonian:
\vspace{-5mm}
\begin{align}
H_{{\rm 4-local}}=\frac{1}{4}\left(x_{1}x_{2}+y_{1}y_{2}+x_{1}x_{2}z_{3}+x_{1}x_{2}x_{4}+y_{1}y_{2}z_{3}+y_{1}y_{2}x_{4}-x_{1}x_{2}z_{3}x_{4}-y_{1}y_{2}z_{3}x_{4}\right),
\end{align}
\noindent and after adding 4 auxiliary qubits labeled by $a_{1}$ to $a_{4}$ and 6 auxiliary qutrits labeled by $a_{5}$ to $a_{10}$ and acted on by the Gell-Mann matrices $\lambda_{1}$ to $\lambda_{8}$, we get the following 2-local Hamiltonian:
\vspace{-5mm}
\begin{align}
H_{{\rm 2-local}} &=\nicefrac{1}{2}\left(2\lambda_{6,a_{8}} + x_{1}\lambda_{1,a_{5}}+y_{1}\lambda_{2,a_{5}}+\lambda_{6,a_{5}}-z_{3}\lambda_{6,a_{5}}+x_{a_{1}}\lambda_{4,a_{5}}+y_{a_{1}}\lambda_{5,a_{5}}+x_{a_{1}}\lambda_{1,a_{6}} + \right.\\
&y_{a_{1}}\lambda_{2,a_{6}}+2x_{4}\lambda_{6,a_{6}}+x_{a_{2}}\lambda_{4,a_{6}}+y_{a_{2}}\lambda_{5,a_{6}}+x_{a_{2}}\lambda_{1,a_{7}}+y_{a_{2}}\lambda_{2,a_{7}}+\lambda_{6,a_{7}}-z_{a_{1}}\lambda_{6,a_{7}}+\\
&x_{2}\lambda_{4,a_{7}}+y_{2}\lambda_{5,a_{7}}+x_{1}\lambda_{1,a_{8}}+ y_{1}\lambda_{2,a_{8}}+z_{3}\lambda_{6,a_{8}}+x_{a_{5}}\lambda_{4,a_{9}}+y_{a_{5}}\lambda_{5,a_{9}}+\lambda_{6,a_{9}}+ \\
&\left.x_{a_{6}}\lambda_{4,a_{9}}+y_{a_{6}}\lambda_{5,a_{9}}+x_{a_{6}}\lambda_{1,a_{10}}+y_{a_{6}}\lambda_{2,a_{10}}+\lambda_{6,a_{10}}+z_{3}\lambda_{6,a_{10}}+x_{2}\lambda_{4,a_{10}}+y_{2}\lambda_{5,a_{10}}\right),
\end{align}
\noindent whose low-lying spectrum is equivalent to the spectrum of $H_{{\rm 4-local}}$.
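As a quick matrix-level check (numpy sketch), the combination multiplying $U_{\rm 2-local}$ in the first term of the Feynman Hamiltonian is exactly the two-qubit transition operator $|10\rangle\langle01|$, i.e.\ it moves a ``clock'' excitation between qubits 1 and 2 while applying $U_{\rm 2-local}$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# (1/4)(x1 x2 - i y1 x2 + i x1 y2 + y1 y2) factors as
# ((x - iy)/2) kron ((x + iy)/2) = |1><0| kron |0><1| = |10><01|.
M = 0.25 * (np.kron(X, X) - 1j * np.kron(Y, X)
            + 1j * np.kron(X, Y) + np.kron(Y, Y))
lower = (X + 1j * Y) / 2     # |0><1|
raise_ = (X - 1j * Y) / 2    # |1><0|
assert np.allclose(M, np.kron(raise_, lower))
assert np.allclose(M @ M, np.zeros((4, 4)))   # a pure transition operator is nilpotent
```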
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 6 auxiliary qutrits and 4 auxiliary qubits
\item 2 quartic, 4 cubic, and 2 quadratic terms become 27 quadratic terms and 5 linear terms in the Pauli--Gell-Mann basis.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Exact (non-perturbative). No special control precision demands.
\item All coefficients are equal to each other, with a value of $\nicefrac{1}{2}$, except one which is equal to 1.
\item With more auxiliary qubits, can be further reduced to only containing qubits.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Involves qutrits in all 32 terms.
\item Only derived (so far) for the Feynman Hamiltonian.
\item High overhead in terms of number of auxiliary qubits and number of terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
The transformation presented above was for the case of $U_{2-\textrm{local}}={\rm CNOT}\equiv\frac{1}{2}\left(\openone+z_{3}+x_{4}-z_{3}x_{4}\right)$, but similar transformations can be derived for any arbitrary unitary matrix $U_{2-\textrm{local}}$.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Nagaj2010}.
\end{itemize}
\newpage
\subsection{NP-Nagaj-2 (Nagaj, 2012)}
\secspace\emph{\textbf{Summary}}
Similar to NP-Nagaj-1 but instead of using qutrits, we use two qubits for each qutrit, according to:
\vspace{-7mm}
\begin{align}
|0\rangle \rightarrow |00\rangle , \qquad |1\rangle \rightarrow \frac{1}{\sqrt{2}}\left( |01\rangle + |10\rangle \right), \qquad |2\rangle \rightarrow \frac{1}{\sqrt{2}} \left( |01\rangle - |10\rangle \right).
\end{align}
\noindent This encoding leads to the following transformations:
\vspace{-5mm}
\begin{align}
|01\rangle\langle10|_{ij}+h.c. &\rightarrow \frac{1}{\sqrt{2}}\left(|01\rangle\langle10|_{ij_{1}}+|01\rangle\langle10|_{ij_{2}}\right)+h.c.\\
|02\rangle\langle10|_{ij}+h.c. &\rightarrow \frac{1}{\sqrt{2}}\left(|01\rangle\langle10|_{ij_{1}}-|01\rangle\langle10|_{ij_{2}}\right)+h.c.\\
|1\rangle\langle2|_{j}+h.c. &\rightarrow z_{j_{1}}-z_{j_{2}},
\end{align}
\noindent and the following 2-local Hamiltonian involving only qubits:
\vspace{-3mm}
\begin{eqnarray}
\begin{gathered}
H_{2-\textrm{local}}=\nicefrac{1}{2}\left(z_{a_{5}}-z_{a_{6}}+z_{a_{9}}-z_{a_{10}}-z_{a_{3}}z_{a_{5}}+z_{a_{3}}z_{a_{6}}-z_{a_{3}}z_{a_{9}}+z_{a_{3}}z_{a_{10}}+\right. \\
\left.z_{a_{11}}-z_{a_{12}}+z_{a_{15}}-z_{a_{16}}+z_{a_{3}}z_{a_{11}}-z_{a_{3}}z_{a_{12}}+z_{a_{3}}z_{a_{15}}-z_{a_{3}}z_{a_{16}}\right)+x_{4}z_{a_{7}}-x_{4}z_{a_{8}}+z_{a_{13}}-z_{a_{14}}+\\
\nicefrac{1}{\sqrt{2}}\left(x_{1}\lambda_{1,a_{5}}+y_{1}\lambda_{2,a_{5}}+x_{1}\lambda_{1,a_{6}}+y_{1}\lambda_{2,a_{6}}+x_{a_{1}}\lambda_{1,a_{7}}+y_{a_{1}}\lambda_{2,a_{7}}+x_{a_{1}}\lambda_{1,a_{8}}+y_{1}\lambda_{2,a_{8}}\right.+\\
x_{a_{2}}\lambda_{1,a_{7}}+y_{a_{2}}\lambda_{2,a_{7}}+x_{a_{2}}\lambda_{1,a_{8}}+y_{a_{2}}\lambda_{2,a_{8}}+x_{a_{2}}\lambda_{1,a_{9}}+y_{a_{2}}\lambda_{2,a_{9}}+x_{a_{2}}\lambda_{1,a_{10}}+\\
y_{a_{2}}\lambda_{2,a_{10}}+x_{1}\lambda_{1,a_{11}}+y_{1}\lambda_{2,a_{11}}+x_{a_{4}}\lambda_{1,a_{15}}+y_{a_{4}}\lambda_{2,a_{15}}+x_{a_{1}}\lambda_{4,a_{5}}+y_{a_{1}}\lambda_{5,a_{5}}+\\
x_{a_{2}}\lambda_{4,a_{7}}+y_{a_{2}}\lambda_{5,a_{7}}+x_{2}\lambda_{4,a_{9}}+y_{2}\lambda_{5,a_{9}}+x_{a_{3}}\lambda_{4,a_{11}}+y_{a_{3}}\lambda_{5,a_{11}}+x_{a_{4}}\lambda_{4,a_{13}}-\\
y_{a_{4}}\lambda_{5,a_{13}}+x_{a_{2}}\lambda_{4,a_{15}}+y_{a_{2}}\lambda_{5,a_{15}}-x_{a_{1}}\lambda_{4,a_{6}}-y_{a_{1}}\lambda_{5,a_{6}}-x_{a_{2}}\lambda_{4,a_{8}}-y_{a_{2}}\lambda_{5,a_{8}}-x_{2}\lambda_{4,a_{10}}-\\
\left.y_{2}\lambda_{5,a_{10}}-x_{a_{3}}\lambda_{4,a_{12}}-y_{a_{3}}\lambda_{5,a_{12}}-x_{a_{4}}\lambda_{4,a_{14}}-y_{a_{4}}\lambda_{5,a_{14}}-x_{2}\lambda_{4,a_{16}}-y_{2}\lambda_{5,a_{16}}\right),
\end{gathered}
\end{eqnarray}
\vspace{2mm}
\noindent whose low-lying spectrum is equivalent to the spectrum of $H_{{\rm 4-local}}$.
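The two-qubit encoding of each qutrit used above is an isometry (the three image states are orthonormal), which can be checked directly in plain Python:

```python
import math

# Embedded qutrit basis states in the two-qubit basis (|00>, |01>, |10>, |11>):
s = 1 / math.sqrt(2)
enc = {
    0: [1, 0, 0, 0],       # |0> -> |00>
    1: [0, s, s, 0],       # |1> -> (|01> + |10>)/sqrt(2)
    2: [0, s, -s, 0],      # |2> -> (|01> - |10>)/sqrt(2)
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# The encoding preserves inner products: images are orthonormal.
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(enc[i], enc[j]) - expected) < 1e-12
```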
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 16 auxiliary qubits.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Exact (non-perturbative). No special control precision demands.
\item Only involves qubits (as opposed to NP-Nagaj-1, which contains qutrits, and NP-OY, which contains ququits).
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only derived (so far) for the Feynman Hamiltonian.
\item High overhead in terms of number of auxiliary qubits and number of terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
The transformation presented above was for the case of $U_{2-\textrm{local}}={\rm CNOT}\equiv\frac{1}{2}\left(\openone+z_{3}+x_{4}-z_{3}x_{4}\right)$, but similar transformations can be derived for any arbitrary unitary matrix $U_{2-\textrm{local}}$.
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Nagaj2012}.
\end{itemize}
\newpage
\section{Perturbative $(3\rightarrow2)$ Gadgets}
The first gadgets for arbitrary Hamiltonians acting on some number of qubits were designed to reproduce the spectrum of a 3-local Hamiltonian in the low-lying spectrum of a 2-local Hamiltonian.
\subsection{P$(3\rightarrow2)$-DC (Duan, Chen, 2011)}
\secspace\emph{\textbf{Summary}}
For any group of 3-local terms that can be factored into a product of three 1-local factors, we can define three auxiliary qubits (regardless of the number of qubits we have in total) labeled by $a_{i}$ and make the transformation:
\begin{align}
\prod_{i=1}^3 \sum_j \alpha_{ij}s_{ij} \rightarrow \alpha + \alpha^{ss} \sum_i \left(\sum_j \alpha_{ij}s_{ij}\right)^2 + \alpha^{sx}\sum_i \sum_j \alpha_{ij} s_{ij}x_{a_i} + \alpha^{zz} \sum_{ij} z_{a_i}z_{a_j}
\end{align}
\begin{align}
\alpha &= \frac{1}{8\Delta} \\
\alpha^{ss} &= \frac{1}{6\Delta^{\nicefrac{1}{3}}} \\
\alpha^{sx} &= - \frac{1}{6\Delta^{\nicefrac{2}{3}}} \\
\alpha^{zz} &= - \frac{1}{24\Delta}
\end{align}
\noindent The result will be a 2-local Hamiltonian whose low-lying spectrum is equivalent to the spectrum of $H_{3-\rm{local}}$ to within $\epsilon$ as long as $\Delta=\Theta\left(\epsilon^{-3}\right)$.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 3 auxiliary qubits for each group of 3-local terms that can be factored into three 1-local factors.
\item $\Delta =\Theta\left(\epsilon^{-3}\right)$
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Very few auxiliary qubits needed
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Will not work for Hamiltonians that do not factorize appropriately.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Duan2011}
\end{itemize}
\newpage
\subsection{P$(3\rightarrow2)$-DC2 (Duan, Chen, 2011)}
\secspace\emph{\textbf{Summary}}
For any 3-local term (product of Pauli matrices $s_i$) in the Hamiltonian, we can define \textit{one} auxiliary qubit labeled by $a$ and make the transformation:
\begin{align}
a\prod_i^3 s_{i} \rightarrow \alpha + \alpha^s s_3 + \alpha^z z_a +\alpha^{ss} \left(s_1 + s_2 \right)^2 + \alpha^{sz}s_3z_a + \alpha^{sx}\left(s_1x_a + s_2x_a \right)
\end{align}
\begin{align}
\begin{pmatrix}\alpha & \alpha^{s} & \alpha^{z}\\
\alpha^{ss} & \alpha^{sz} & \alpha^{sx}
\end{pmatrix}=\begin{pmatrix}-\frac{1}{2\Delta} & a\left(\frac{1}{4\Delta^{\nicefrac{2}{3}}}-1\right) & a\left(\frac{1}{4\Delta^{\nicefrac{2}{3}}}-1\right)\\
\frac{1}{\Delta^{\nicefrac{1}{3}}} & \frac{a}{4\Delta^{\nicefrac{2}{3}}} & \frac{1}{\Delta^{\nicefrac{2}{3}}}
\end{pmatrix}
\end{align}
\noindent The result will be a 2-local Hamiltonian whose low-lying spectrum is equivalent to the spectrum of $H_{3-\rm{local}}$ to within $\epsilon$ as long as $\Delta=\Theta\left(\epsilon^{-3}\right)$.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary qubit for each 3-local term.
\item $\Delta =\Theta\left(\epsilon^{-3}\right)$
\end{itemize}
\begin{comment}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item
\item
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item
\item
\end{itemize}
\end{comment}
\secspace\emph{\textbf{Example}}
{\small
\begin{align}
x_1z_2y_3 - 3x_1x_2y_4 + z_1x_2 &\rightarrow \alpha + \alpha^z(z_{a_1} + z_{a_2}) + \alpha^y(y_3 + y_4) +\alpha^{zx}_{12}z_1x_2+ \alpha^{zx}z_2x_{a_1} +\alpha^{xx}_{11} x_1x_2\\
& + \alpha^{xx}\left( x_1x_{a_1} + x_1x_{a_2} + x_2x_{a_2} \right) + \alpha^{yz}\left( y_3z_{a_1} + y_4z_{a_2} \right)
\end{align}
}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Duan2011}
\end{itemize}
\newpage
\subsection{P$(3\rightarrow2)$-KKR (Kempe, Kitaev, Regev, 2004)}
\secspace\emph{\textbf{Summary}}
For any 3-local term (product of commuting matrices $s_i$) in the Hamiltonian, we can define three auxiliary qubits labeled by $a_{i}$ and make the transformation:
\begin{align}
\prod_i^3 s_{i} \rightarrow \alpha + \alpha^{ss} \sum_i s_i^2 + \alpha^{sx}\sum_i s_ix_{a_i} + \alpha^{zz} \sum_{ij} z_{a_i}z_{a_j}
\end{align}
\begin{align}
\alpha &= -\frac{1}{8\Delta} \\
\alpha^{ss} &= -\frac{1}{6\Delta^{\nicefrac{1}{3}}} \\
\alpha^{sx} &= \frac{1}{6\Delta^{\nicefrac{2}{3}}} \\
\alpha^{zz} &= \frac{1}{24\Delta}
\end{align}
\noindent The result will be a 2-local Hamiltonian whose low-lying spectrum is equivalent to the spectrum of $H_{3-\rm{local}}$ to within $\epsilon$ as long as $\Delta=\Theta\left(\epsilon^{-3}\right)$.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 3 auxiliary qubits for each 3-local term.
\item $\Delta =\Omega\left(\epsilon^{-3}\right)$
\end{itemize}
\begin{comment}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item
\item
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item
\item
\end{itemize}
\end{comment}
\secspace\emph{\textbf{Example}}
{\tiny
\begin{align}
x_1z_2y_3 - 3x_1x_2y_4 + z_1x_2 \rightarrow \alpha + \alpha^{zx}_{2a_{12}}z_2x_{a_{12}}+ \alpha^{xx}_{12} x_1x_2 + \alpha^{xx}_{1a_{11}}x_1x_{a_{11}} + \alpha^{xx}_{1a_{21}}x_1x_{a_{21}} + \alpha^{xx}_{2a_{22}}x_2x_{a_{22}} + \alpha^{yx}_{3a_{13}}y_3x_{a_{13}} + \alpha_{4a_{23}}^{yx}y_4x_{a_{23}}
\end{align}
}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper on arXiv: \cite{Kempe2004a}
\item Journal publication two years later: \cite{Kempe2006}
\end{itemize}
\newpage
\subsection{P$(3\rightarrow2)$-OT (Oliveira-Terhal, 2005)}
\secspace\emph{\textbf{Summary}}
For any 3-local term which is a product of 1-local matrices $s_i$, we can define one auxiliary qubit labeled by $a$ and make the transformation:
{\footnotesize
\begin{align}
a\prod_i^3 s_{i} \rightarrow \alpha + \alpha_1^{s} s_1^2 + \alpha_2^s s_2^2 + \alpha_3^s s_3 + \alpha_a^z z_a + \alpha_{12}^{ss} s_1s_2 + \alpha_{13}^{ss} s_1^2s_3 + \alpha_{23}^{ss} s_2^2s_3 + \alpha_{3a}^{sz} s_3z_a + \alpha_{1a}^{sx}s_1x_a+\alpha_{2a}^{sx}s_2x_a
\end{align}
}
\begin{align}
\begin{pmatrix}\alpha & \alpha_{12}^{ss}\\
\alpha_{1}^{s} & \alpha_{13}^{ss}\\
\alpha_{2}^{s} & \alpha_{23}^{ss}\\
\alpha_{3}^{s} & \alpha_{3a}^{sz}\\
\alpha_{a}^{z} & \alpha_{1a}^{sx}\\
\textrm{N/A} & \alpha_{2a}^{sx}
\end{pmatrix}&=\begin{pmatrix}\frac{\Delta}{2} & -\Delta^{1/3}\\
-\frac{\alpha^{2/3}\Delta^{1/3}}{2} & a\frac{1}{2}\\
\frac{\alpha^{2/3}\Delta^{1/3}}{2} & a\frac{1}{2}\\
-\frac{\alpha^{1/3}\Delta^{2/3}}{2} & \frac{\alpha^{1/3}\Delta^{2/3}}{2}\\
-\frac{\Delta}{2} & -\frac{\alpha^{1/3}\Delta^{2/3}}{\sqrt{2}}\\
\textrm{N/A} & \frac{\alpha^{1/3}\Delta^{2/3}}{\sqrt{2}}
\end{pmatrix}.
\end{align}
\noindent A $k$-local Hamiltonian with a 3-local term replaced by this 2-local Hamiltonian will have an equivalent low-lying spectrum to within $\epsilon$ as long as $\Delta=\Omega\left(\epsilon^{-3}\right)$.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary qubit for each 3-local term.
\item $\Delta =\Omega\left(\epsilon^{-3}\right)$
\end{itemize}
\begin{comment}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item
\item
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item
\item
\end{itemize}
\end{comment}
\secspace\emph{\textbf{Example}}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper where $\alpha=1$: \cite{Oliveira2008}. For arbitrary $\alpha$ see the 2005 v1 from arXiv, or \cite{Bravyi2008}. Connection to improved version: \cite{Cao2015}.
\end{itemize}
\newpage
\section{Perturbative 1-by-1 Gadgets}
A 1B1 gadget allows $k$-local terms to be quadratized one step at a time: in each step, a $k$-local term is reduced to a $\left(k-1\right)$-local one, contrary to SD (sub-division) gadgets, which can reduce $k$-local terms to roughly $\left(\nicefrac{k}{2}\right)$-local in one step.
\subsection{P1B1-OT (Oliveira \& Terhal, 2008)}
\secspace\emph{\textbf{Summary}}
We wish to reduce the $k$-local term:
\begin{align}
H_{k-\rm{local}} = \alpha \prod_j^k s_{j} . \label{eq:klocalInKempeMethod}
\end{align}
\noindent
Define one auxiliary qubit labeled by $a$ and make the transformation:
\begin{align}
H_{k-{\rm local}} \rightarrow &-\left(\frac{\alpha}{2}\right)^{\nicefrac{1}{3}}\Delta^{2(1-r)}s_{k}\left(\frac{1-z_{a}}{2}\right)+\left(\frac{\alpha}{2}\right)^{\nicefrac{1}{3}}\frac{\Delta^{r}}{\sqrt{2}}\left(s_{k-1}-s_{k-2}\right)x_{a} \\
&+\frac{1}{2}\left(\frac{\alpha}{2}\right)^{\nicefrac{2}{3}}\left(\Delta^{r-1}s_{k-1}+{\rm sgn}(\alpha)\sqrt{2}\Delta^{-\nicefrac{1}{4}}\prod_{j}^{k-2}s_{j}\right)^{2} +\frac{\alpha}{4}\left(1+2\,{\rm sgn}^{2}(\alpha)\Delta^{\nicefrac{3}{2}-2r}\right)s_{k}.
\end{align}
\noindent The result will be a $(k-1)$-local Hamiltonian with the same low-lying spectrum as $H_{k-\rm{local}}$ to within $\epsilon$ as long as $\Delta=\Omega\left(\epsilon^{-3}\right)$.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item Only 1 auxiliary qubit.
\item $\Delta =\Omega\left(\epsilon^{-3}\right)$
\end{itemize}
\begin{comment}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item only one auxiliary qubit for each 3-local term.
\item
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item
\item
\end{itemize}
\end{comment}
\secspace\emph{\textbf{Example}}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Described in: \cite{Cao2015}, based on: \cite{Oliveira2008}.
\end{itemize}
\newpage
\subsection{P1B1-CBBK (Cao, Babbush, Biamonte, Kais, 2015)}
\secspace\emph{\textbf{Summary}}
Define one auxiliary qubit labeled by $a$ and make the transformation:
\begin{align}
H_{k-\rm{local}} \rightarrow & \left(\Delta+\left(\frac{\alpha}{2}\right)^{\nicefrac{3}{2}}\Delta^{\nicefrac{1}{2}}s_{k}\right)\left(\frac{1-z_{a}}{2}\right) \\
& - \frac{\alpha^{\nicefrac{2}{3}}}{2}\left(1+{\rm sgn}^{2}(\alpha)\right)\left(\left(2\alpha\right)^{\nicefrac{2}{3}}{\rm sgn}^{2}(\alpha)+\alpha^{\nicefrac{1}{3}}s_{k}-\sqrt[3]{2}\Delta^{\nicefrac{1}{2}}\right)\left(\frac{1+z_{a}}{2}\right) \\
& +\left(\frac{\alpha}{2}\right)^{\nicefrac{1}{3}}\Delta^{\nicefrac{3}{4}}\left(\prod_j^{k-2} s_{j}-{\rm sgn}(\alpha) s_{k-1}\right)x_{a}+{\rm sgn}(\alpha)\sqrt[3]{2}\alpha^{\nicefrac{2}{3}}\left(\Delta^{\nicefrac{1}{2}}+\Delta^{\nicefrac{3}{2}}\right)\prod_j^{k-1}s_j.
\end{align}
\noindent The result is $(k-1)$-local and its low-lying spectrum is the same as that of $H_{k-\rm{local}}$ when $\Delta$ is large enough.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item Only 1 auxiliary qubit.
\item $\Delta =\Omega\left(\epsilon^{-3}\right)$
\end{itemize}
\begin{comment}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item only one auxiliary qubit for each 3-local term.
\item
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item
\item
\end{itemize}
\end{comment}
\secspace\emph{\textbf{Example}}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Described in: \cite{Cao2015}, based on: \cite{Oliveira2008}.
\end{itemize}
\newpage
\section{Perturbative Subdivision Gadgets}
Instead of recursively reducing $k$-local to $(k-1)$-local one reduction at a time, we can reduce $k$-local terms to $(\nicefrac{k}{2})$-local terms directly for even $k$, or to $(k+1)/2$-local terms directly for odd $k$. Since for odd $k$ we can pad the $k$-local term with an identity operator to make its degree even, we will assume in the following that $k$ is even, in order to avoid having to write floor and ceiling functions.
\subsection{PSD-OT (Oliveira \& Terhal, 2008)}
\secspace\emph{\textbf{Summary}}
We factor a $k$-local term into a product of three factors: operators $H_1,H_2$ acting on non-overlapping spaces, and a scalar $\alpha$. Then introduce an auxiliary qubit labeled by $a$ and make the transformation:
\begin{align}
H_{k-\rm{local}} \rightarrow \Delta \frac{1-z_a}{2} + \frac{\alpha}{2}H_1^2 + \frac{\alpha}{2}H_2^2 + \sqrt{\frac{\alpha\Delta}{2}}\left( - H_1 + H_2 \right)x_a.
\end{align}
The resulting Hamiltonian has degree one larger than the larger of the degrees of $H_1$ and $H_2$, and its low-lying spectrum is equivalent to the original one to within $\mathcal{O}\left(\alpha\epsilon\right)$ for sufficiently large $\Delta$.
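A numerical sketch of the subdivision gadget (numpy; the factors $H_1=z_1z_2$, $H_2=x_3x_4$ and the values of $\alpha$ and $\Delta$ are arbitrary choices) that assembles the gadget Hamiltonian and compares its 16 lowest eigenvalues against the spectrum of the original 4-local term:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(p, site, n):
    """Pauli `p` on qubit `site` of an n-qubit register (dense matrix)."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, p if k == site else I2)
    return out

# Subdivide alpha * H1 H2 with H1 = z1 z2 and H2 = x3 x4 (non-overlapping
# factors); the last qubit (index 4) is the auxiliary a.
alpha, Delta, n = 0.8, 1e6, 5
H1 = op(Z, 0, n) @ op(Z, 1, n)
H2 = op(X, 2, n) @ op(X, 3, n)
xa, za = op(X, 4, n), op(Z, 4, n)
Id = np.eye(2**n)

H_gadget = (Delta * (Id - za) / 2
            + alpha / 2 * (H1 @ H1) + alpha / 2 * (H2 @ H2)
            + np.sqrt(alpha * Delta / 2) * (-H1 + H2) @ xa)

# The 16 lowest gadget levels should approximate the spectrum of the
# original 4-local term alpha * z1 z2 x3 x4.
target = alpha * (op(Z, 0, 4) @ op(Z, 1, 4) @ op(X, 2, 4) @ op(X, 3, 4))
low = np.sort(np.linalg.eigvalsh(H_gadget))[:16]
exact = np.sort(np.linalg.eigvalsh(target))
assert np.allclose(low, exact, atol=1e-2)
```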
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item 1 auxiliary qubit for each $k$-local term that can be factored into two non-overlapping subspaces is enough to reduce the degree down to $\nicefrac{k}{2}+1$.
\item $\Delta =\frac{\alpha\left(||H_{\rm{else}}|| + \sqrt{2}\,\max\left( ||H_1||,||H_2|| \right)\right)^6}{\epsilon^2} =\Omega\left(\alpha\epsilon^{-2}\right)$.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Potentially very few auxiliary qubits needed.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Requires the ability to factor $k$-local terms into non-overlapping subspaces that are at most $\left(k-2\right)$-local in order to reduce $k$-locality. This is not possible for $z_1x_2x_3 + z_2z_3x_4$, for example.
\item $\Delta$ needs to be rather large.
\item Cannot reduce 3-local to 2-local unless we generalize to a factorization into 3 non-overlapping subspaces instead of 2. Needs to be combined with $(3\rightarrow2)$ gadgets, for example.
\item A lot of work may be needed to find the optimal reduction, since each $k$-local term can be factored in many ways, and some of these ways may affect the ability to reduce other $k$-local terms.
\end{itemize}
\secspace\emph{\textbf{Example}}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper where $\alpha=1$: \cite{Oliveira2008}. For arbitrary $\alpha$ see the 2005 v1 from arXiv, or \cite{Bravyi2008}. Connection to improved version: \cite{Cao2015}.
\end{itemize}
\newpage
\subsection{PSD-CBBK (Cao, Babbush, Biamonte, Kais 2015)}
\secspace\emph{\textbf{Summary}}
For any $k$-local term, we can subdivide it into a product of two $\left(\nicefrac{k}{2}\right)-$local terms:
\begin{align}
H_{k-\rm{local}} = \alpha H_{1,(\nicefrac{k}{2})-\rm{local}}H_{2,(\nicefrac{k}{2})-\rm{local}} + H_{(k-1)-\rm{local}}. \label{eq:olivieraTehral}
\end{align}
\noindent Introduce one auxiliary qubit $a$ and construct the following $\left(\lceil\nicefrac{k}{2}\rceil+1\right)$-local Hamiltonian:
\begin{align}
\Delta \frac{1-z_a}{2} + |\alpha|\frac{1+z_a}{2} + \sqrt{|\alpha|\Delta /2}\left({\rm sgn}(\alpha) H_{1,(\nicefrac{k}{2})-\rm{local}} - H_{2,(\nicefrac{k}{2})-\rm{local}}\right)x_a
\end{align}
\noindent The result is a $\left(\lceil\nicefrac{k}{2}\rceil+1\right)$-local Hamiltonian with the same low-lying spectrum as $H_{k-\rm{local}}$ for large enough $\Delta$. The disadvantage is that $\Delta$ has to be rather large.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item $\Delta \ge \left( \frac{2|\alpha|}{\epsilon}+1\right)\left(|\alpha|+\epsilon+2||H_{(k-1)-\rm{local}}||\right)$
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item Only one auxiliary qubit is needed to reduce $k$-local to $\left(\lceil k/2\rceil +1\right)$-local.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Only beneficial for $k\ge5$.
\end{itemize}
\secspace\emph{\textbf{Example}}
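As a sanity check (our own numerical sketch, not from the original paper), consider a hypothetical minimal instance with $H_1=z_1$, $H_2=x_2$ (non-overlapping single-qubit factors), $\alpha=0.5$, and no $H_{(k-1)-\rm{local}}$ part. The lowest four eigenvalues of the gadget Hamiltonian should reproduce the spectrum of $\alpha H_1 H_2$ for large enough $\Delta$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

alpha, Delta = 0.5, 4000.0
H1 = kron(Z, I2, I2)   # first factor, acts on qubit 1
H2 = kron(I2, X, I2)   # second factor, acts on qubit 2
xa = kron(I2, I2, X)   # auxiliary qubit a
za = kron(I2, I2, Z)
I8 = np.eye(8)

# The PSD-CBBK gadget Hamiltonian with H_else = 0:
Hgad = (Delta*(I8 - za)/2 + abs(alpha)*(I8 + za)/2
        + np.sqrt(abs(alpha)*Delta/2)*(np.sign(alpha)*H1 - H2) @ xa)

low = np.sort(np.linalg.eigvalsh(Hgad))[:4]              # low-lying spectrum
target = np.sort(np.linalg.eigvalsh(alpha*kron(Z, X)))   # spectrum of alpha*H1*H2
print(np.round(low, 3))  # approx [-0.5, -0.5, 0.5, 0.5], matching the target
```

Increasing $\Delta$ tightens the agreement, consistent with the perturbative nature of the gadget.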
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Cao2015}.
\end{itemize}
\newpage
\subsection{PSD-CN (Cao \& Nagaj, 2014)}
\secspace\emph{\textbf{Summary}}
For a sum of terms that are $k$-local, with each term $j$ written as a product $H_{1j}H_{2j}$, introduce $N_{\rm core}$ `core' auxiliary qubits labeled by $a_i$ and $N_{\rm direct}$ `direct' auxiliary qubits labeled by $a_{ij}$ for each term $j$. Make all core auxiliary qubits couple to all others, and make the direct auxiliary qubits couple each $H_{1j}$ and $H_{2j}$ to the core auxiliary qubits.
\begin{equation}
\sum_{j}a_j H_{1j} H_{2j} \rightarrow \alpha \sum_{ij} \left(1-z_{a_{ij}} z_{a_i} + \alpha_{j}x_{a_{ij}} \left(H_{1j} - H_{2j}\right) \right) + \alpha \sum_i\left(1-z_{a_i} + \sum_{j}\left(1-z_{a_i} z_{a_j}\right)\right)
\end{equation}
If we would like the spectrum of the RHS to be close to that of the LHS, with a difference of $O(\epsilon)$, then for any $d\in(0,1)$ we can choose
\begin{align}
\label{eq:RC}
N_{\rm{direct}} & \in \Omega\left( \max\left\{
\epsilon^{-\frac{2}{d}},
\left(\frac{\|H_\text{else}\|^2}{2M^4 \max_j|a_j|}\right)^{\frac{1}{d}},
\left(M^3 \epsilon^{-2} \right)^{\frac{1}{1-d}}
\right\}\right),\\
\label{eq:CC}
N_{\rm{core}} & \in \Omega \left( M^3 N_{\rm{direct}}^d \,\epsilon^{-1}\right), \\
\alpha_{j}, \alpha & \in O(\epsilon). \label{eq:eps}
\end{align}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item For spectral error $\epsilon$, uses coupling strengths of only $O(\epsilon)$ between the qubits (see Eq.~\ref{eq:eps}).
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Uses poly$(\epsilon^{-1})$ ancilla qubits (see Eqs.~\ref{eq:RC} and \ref{eq:CC}).
\item The construction only describes the asymptotic scaling of the parameters rather than concrete assignments of them.
More work is needed to find tight non-asymptotic error bounds in the perturbative expansion.
\end{itemize}
\secspace\emph{\textbf{Example}}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Cao2015a}.
\end{itemize}
\newpage
\begin{comment}
\subsection{3-body Gadget from local X: Cao et al.~2013}
Cao et al.~2013 produced two new gadgets and also generally improved the known constructions for several typical gadgets.
\secspace\emph{\textbf{Summary}} ~In general, terms in perturbative gadgets involve mixed couplings (e.g. $X_i Z_j$). Although such couplings can be realized by certain gadget constructions \cite{BL07},
physical couplings of this type are difficult to realize in an experimental setting. However, there has been significant progress towards experimentally implementing Ising models with transverse fields of the type \cite{2006cond.mat..8253H}:
\begin{equation}\label{eq:dwave}
H_{ZZ}=\sum_i\delta_iX_i+\sum_ih_iZ_i+\sum_{i,j}J_{ij} Z_iZ_j.
\end{equation}
Accordingly, an interesting question is whether we can approximate 3-body terms such as $\alpha \cdot Z_i\otimes Z_j\otimes Z_k$ using a Hamiltonian of this form. This turns out to be possible by employing a perturbative calculation which considers terms up to $5^\text{th}$ order.
\secspace\emph{\textbf{Cost}}
\secspace\emph{\textbf{Example}}
Similar to the 3- to 2-body reduction discussed previously, we introduce an ancilla $w$ and apply the Hamiltonian $H=\Delta|1\rangle\langle{1}|_w$. We apply the perturbation
\begin{equation}\label{eq:V5}
V = H_\text{else}+\mu(Z_i+Z_j+Z_k)\otimes|1\rangle\langle{1}|_w + \mu\openone\otimes X_w+V_\textrm{comp}
\end{equation}
where $\mu = \left(\alpha \Delta^4 / 6\right)^{1/5}$ and $V_\textrm{comp}$ is
\begin{equation}
\begin{array}{ccl}
V_\textrm{comp} & = & \displaystyle \frac{\mu^2}{\Delta} |0\rangle\langle{0}|_w-\left(\frac{\mu^3}{\Delta^2}+ 7 \frac{\mu^5}{\Delta^4}\right)\left(Z_i+Z_j+Z_k \right)\otimes|0\rangle\langle{0}|_w+ \frac{\mu^4}{\Delta^3}\left(3 \openone +2 Z_i Z_j+2 Z_i Z_k +2 Z_j Z_k\right).
\end{array}
\end{equation}
To illustrate the basic idea of the $5^\text{th}$ order gadget, define subspaces $\mathcal{L}_-$ and $\mathcal{L}_+$ in the usual way and define $P_-$ and $P_+$ as projectors into these respective subspaces. Then the second term in Eq.\ \ref{eq:V5} with $\otimes|1\rangle\langle{1}|_w$ contributes a linear combination $\mu Z_i+\mu Z_j+ \mu Z_k$ to $V_+=P_+VP_+$. The third term in Eq.\ \ref{eq:V5} induces a transition between $\mathcal{L}_-$ and $\mathcal{L}_+$ yet since it operates trivially on qubits 1-3, it only contributes a constant $\mu$ to the projections $V_{-+}=P_-VP_+$ and $V_{+-}=P_+VP_-$. In the perturbative expansion, the $5^\text{th}$ order contains a term
\begin{equation}\label{eq:V55}
\frac{V_{-+}V_+V_+V_+V_{+-}}{(z-\Delta)^4}=\frac{\mu^5 (Z_i+Z_j+Z_k)^3}{(z-\Delta)^4}
\end{equation}
due to the combined contribution of the second and third terms in Eq.\ \ref{eq:V5}.
\noindent{}This yields a term proportional to $\alpha\cdot Z_i\otimes Z_j \otimes Z_k$ along with some 2-local error terms. These error terms, combined with the unwanted terms that arise at $1^\text{st}$ through $4^\text{th}$ order perturbation, are compensated by $V_\text{comp}$. Note that terms at 6$^\textrm{th}$ order and higher are $\Theta(\Delta^{-1/5})$. This means in order to satisfy the gadget theorem of Kempe \emph{et al.} (\cite[Theorem 3]{KKR06}, or Theorem I.1) $\Delta$ needs to be $\Theta(\epsilon^{-5})$. This is the first perturbative gadget that simulates a 3-body target Hamiltonian using the Hamiltonian Eq.\ \ref{eq:dwave}. By rotating the ancilla space, subdivision gadgets can also be implemented using this Hamiltonian: in the $X$ basis, $Z$ terms will induce a transition between the two energy levels of $X$. Therefore $Z_i Z_j$ coupling could be used for a perturbation of the form in Eq.\ \ref{eq:2body_V} in the rotated basis. In principle using {the transverse Ising model in Eq.\ \ref{eq:dwave},} one can reduce some {diagonal} $k$-body Hamiltonian to 3-body by iteratively applying the subdivision gadget and then to 2-body by using the 3-body reduction gadget.
$\quad$\\
$\quad$\\
\noindent{\bf Analysis.} Similar to the gadgets we have presented so far, we introduce an ancilla spin $w$. Applying an energy gap $\Delta$ on the ancilla spin gives the unperturbed Hamiltonian $H=\Delta|1\rangle\langle{1}|_w$. We then perturb the Hamiltonian $H$ using a perturbation $V$ described in \eqref{eq:V5}. Using the same definitions of subspaces $\mathcal{L}_+$ and $\mathcal{L}_-$ as the previous 3-body gadget, the projections of $V$ into these subspaces can be written as
\begin{equation}\label{eq:V_proj_fifth}
\begin{array}{ccl}
V_+ & = & \displaystyle \left(H_\text{else} + \mu(Z_1+Z_2+Z_3) + \frac{{\mu}^4}{\Delta^3}\big[3{\openone}+ 2(Z_1Z_2+Z_1Z_3+Z_2Z_3)\big]\right)\otimes|1\rangle\langle{1}|_w \\[0.1in]
V_- & = & \displaystyle \left(H_\text{else}+\frac{{\mu}^2}{\Delta}{\openone}-\frac{{\mu}^3}{\Delta^2}(Z_1+Z_2+Z_3){\openone}+\frac{{\mu}^4}{\Delta^3}\big[3\openone+2(Z_1Z_2+Z_1Z_3+Z_2Z_3)\big] \\[0.1in]
& & \displaystyle -\frac{7{\mu}^5}{\Delta^4}\big(Z_1+Z_2+Z_3\big)\bigg)\otimes|0\rangle\langle{0}|_w \\[0.1in]
V_{-+} & = & {\mu}{\openone}\otimes|0\rangle\langle{1}|_w,\quad V_{+-}= {\mu}{\openone}\otimes|1\rangle\langle{0}|_w. \\[0.1in]
\end{array}
\end{equation}
\noindent{}The low-lying spectrum of $\tilde{H}$ is approximated by the self energy expansion $\Sigma_-(z)$ below with $z\in[-\max{z},\max{z}]$ where $\max{z}=\|H_\text{else}\|+|\alpha|+\epsilon$.
With the choice of $\mu$ above the expression of $V_+$ in Eq.\ \ref{eq:V_proj_fifth} can be written as
\begin{equation}\label{eq:Vp_simple}
V_+=\left(H_\text{else}+{\mu}(Z_1+Z_2+Z_3)+O(\Delta^{1/5})\right)\otimes|1\rangle\langle{1}|_w.
\end{equation}
\noindent{}Because we are looking for the $5^\text{th}$ order term in the perturbation expansion that gives a term proportional to $Z_1Z_2Z_3$, expand the self energy in Eq.\ \ref{eq:selfenergy} up to $5^\text{th}$ order:
\begin{equation}\label{eq:self_energy_fifth}
\begin{array}{ccl}
\Sigma_-(z) & = & \displaystyle V_-\otimes|0\rangle\langle{0}|_w+\frac{V_{-+}V_{+-}}{z-\Delta}\otimes|0\rangle\langle{0}|_w+\frac{V_{-+}V_+V_{+-}}{(z-\Delta)^2}\otimes|0\rangle\langle{0}|_w+\frac{V_{-+}V_+V_+V_{+-}}{(z-\Delta)^3}\otimes|0\rangle\langle{0}|_w \\[0.1in]
& + & \displaystyle \frac{V_{-+}V_+V_+V_+V_{+-}}{(z-\Delta)^4}\otimes|0\rangle\langle{0}|_w+\sum_{k=4}^\infty\frac{V_{-+}V_+^kV_{+-}}{(z-\Delta)^{k+1}}\otimes|0\rangle\langle{0}|_w.
\end{array}
\end{equation}
\noindent{}Using this simplification as well as the expressions for $V_-$, $V_{-+}$ and $V_{+-}$ in Eq.\ \ref{eq:V_proj_fifth}, the self energy expansion Eq.\ \ref{eq:self_energy_fifth} up to $5^\text{th}$ order becomes
\begin{equation}\label{eq:self_energy_fifth2}
\begin{array}{ccl}
\Sigma_-(z) & = & \displaystyle
\underbrace{\left(H_\text{else}+\frac{6\mu^5}{\Delta^4}Z_1Z_2Z_3\right)\otimes|0\rangle\langle{0}|_w}_\text{$H_\text{eff}$}+\underbrace{\left(\frac{1}{\Delta}+\frac{1}{z-\Delta}\right){\mu}^2{\openone}\otimes|0\rangle\langle{0}|_w}_\text{$E_1$} \\[0.1in]
& + & \displaystyle\underbrace{\left(\frac{1}{(z-\Delta)^2}-\frac{1}{\Delta^2}\right)\mu^3(Z_1+Z_2+Z_3)\otimes|0\rangle\langle{0}|_w}_\text{$E_2$}+\underbrace{\left(\frac{1}{\Delta^3}+\frac{1}{(z-\Delta)^3}\right)\cdot \mu^4\cdot(Z_1+Z_2+Z_3)^2\otimes|0\rangle\langle{0}|_w}_\text{$E_3$} \\[0.1in]
& + & \displaystyle \underbrace{\left(\frac{1}{(z-\Delta)^4}-\frac{1}{\Delta^4}\right)7{\mu}^5(Z_1+Z_2+Z_3)\otimes|0\rangle\langle{0}|_w}_\text{$E_4$}+\underbrace{\frac{{\mu}^2}{(z-\Delta)^2}\cdot\frac{{\mu}^4}{\Delta^3}(Z_1+Z_2+Z_3)^2\otimes|0\rangle\langle{0}|_w}_\text{$E_6$} \\[0.1in]
& + & O(\Delta^{-2/5})+O(\|H_\text{else}\|\Delta^{-2/5})+O(\|H_\text{else}\|^2\Delta^{-7/5})+O(\|H_\text{else}\|^3\Delta^{-12/5})+\underbrace{\sum_{k=4}^\infty\frac{V_{-+}V_+^kV_{+-}}{(z-\Delta)^{k+1}}\otimes|0\rangle\langle{0}|_w}_\text{$E_7$}. \\[0.1in]
\end{array}
\end{equation}
\noindent{}Similar to what we have done in the previous sections, the norm of the error terms $E_1$ through $E_7$ can be bounded from above by letting $z\mapsto\max{z}$. Then we find that
\begin{equation}\label{eq:error_total_fifth}
\begin{array}{ccl}
\|\Sigma_-(z)-H_\text{targ}\otimes|0\rangle\langle{0}|_w\| & \le & \Theta(\Delta^{-1/5})
\end{array}
\end{equation}
\noindent{}if we only consider the dominant dependence on $\Delta$ and regard $\|H_\text{else}\|$ as a given constant. To guarantee that $\|\Sigma_-(z)-H_\text{targ}\otimes|0\rangle\langle{0}|_w\|\le\epsilon$, we let the right hand side of Eq.\ \ref{eq:error_total_fifth} to be $\le\epsilon$, which translates to $\Delta=\Theta(\epsilon^{-5})$.
This $\Theta(\epsilon^{-5})$ scaling is numerically illustrated (Fig.\ \ref{fig:ZZZ_fifth_Delta_eps}a). Although in principle the $5^\text{th}$ order gadget can be implemented on a Hamiltonian of form Eq.\ \ref{eq:dwave}, for a small range of $\alpha$, the minimum $\Delta$ needed is already large (Fig.\ \ref{fig:ZZZ_fifth_Delta_eps}b), rendering it challenging to demonstrate the gadget experimentally with current resources. However, this is the only currently known gadget realizable with a transverse Ising model that is able to address the case where $H_\text{else}$ is not necessarily diagonal.
\begin{figure}
\includegraphics[scale=0.15]{pics/sup_fig6a.png}
\includegraphics[scale=0.15]{pics/sup_fig6b.png}
\makebox[0.8cm]{}(a)\makebox[8cm]{}(b)
\caption{\normalsize (a) The scaling of minimum $\Delta$ needed to ensure $\|\Sigma_-(z)-H_\text{eff}\|\le\epsilon$ as a function of $\epsilon^{-1}$. Here we choose $\|H_\text{else}\|=0$, $\alpha=0.1$ and $\epsilon$ ranging from $10^{-0.7}$ to $10^{-2.3}$. The values of minimum $\Delta$ are numerically optimized \cite{footnote:num_op}. The slope of the line at large $\epsilon^{-1}$ is $4.97\approx{5}$, which provides evidence that with the assignments of ${\mu}=(\alpha\Delta^4/6)^{1/5}$, the optimal scaling of $\Delta$ is $\Theta(\epsilon^{-5})$. (b) The numerically optimized \cite{footnote:num_op} gap versus the desired coupling $\alpha$ in the target Hamiltonian. Here $\epsilon=0.01$ and $\|H_\text{else}\|=0$.}
\label{fig:ZZZ_fifth_Delta_eps}
\end{figure}
\secspace\emph{\textbf{Bibliography}}
\end{comment}
\newpage
\section{Perturbative Direct Gadgets}
Here we do not reduce $k$ by one at a time (1B1 reduction) or by $\nicefrac{k}{2}$ at a time (SD reduction); instead, we reduce $k$-local terms directly to 2-local terms.
\subsection{PD-JF (Jordan \& Farhi, 2008)}
\secspace\emph{\textbf{Summary}}
Express a sum of $k$-local terms as a sum of products of Pauli matrices $s_{ij}$, define $k$ auxiliary qubits labelled by $a_{ij}$ for each term $i$, and make the transformation:
\begin{equation}
\sum_i \alpha_i \prod_{j}^k s_{ij} \rightarrow \frac{-k(-\epsilon)^k}{(k-1)!} \sum_i \frac{1}{2}\left( k^2 - \sum_{jl}^k z_{a_{ij}} z_{a_{il}} \right) + \epsilon \left( \alpha_i s_{i1}x_{i1} + \sum_{j}^k s_{ij}x_{ij} \right) -f(\epsilon)\Pi,
\end{equation}
\noindent for some polynomial $f(\lambda)$. The result is a 2-local Hamiltonian with the same low-lying spectrum to within $\epsilon^{k+1}$ for sufficiently small $\epsilon$.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item Number of auxiliary qubits is $tk$ for $t$ terms.
\item Unknown requirement for $\epsilon$.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item All done in one step, so easier to implement than 1B1 and SD gadgets.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Requires 2 more auxiliary qubits per term than 1B1-KKR.
\item The polynomial $f(\lambda)$ is not specified explicitly.
\end{itemize}
\secspace\emph{\textbf{Example}}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper \cite{Jordan2008}.
\end{itemize}
\newpage
\subsection{PD-BFBD (Brell, Flammia, Bartlett, Doherty, 2011)}
\secspace\emph{\textbf{Summary}}
The 4-body Hamiltonian:
\begin{eqnarray}
H_{\textrm{4-local}} = -\sum_{ij}\left( z_{4i+1,j}z_{4i+2,j}z_{4i+3,j}z_{4i+4,j} + x_{4i+3,j}x_{4i+4,j}x_{4i+6,j}x_{4i+4,j+1} \right)
\end{eqnarray}
\noindent is transformed into the 2-body Hamiltonian:
\begin{align}
H_{\textrm{2-local}} &= -\sum_{ij} \left( x_{8i+4,j}x_{8i+6,j} + x_{8i+3,j+1}x_{8i+5,j+1} + z_{8i+4,j}z_{8i+3,j+1} + z_{8i+6,j}z_{8i+5,j+1} \right.\\
&\quad + \lambda\left( x_{8i+1,j}x_{8i+3,j} + x_{8i+2,j}x_{8i+4,j} + x_{8i+5,j}x_{8i+6,j} + x_{8i+7,j}x_{8i+8,j}\right. \\
&\quad \left.\left. + z_{8i+1,j}z_{8i+3,j} + z_{8i+2,j}z_{8i+4,j} + z_{8i+5,j}z_{8i+6,j} + z_{8i+7,j}z_{8i+8,j} \right)\right)
\end{align}
\noindent For $\lambda=\mathcal{O}\left(\epsilon^{-5}\right)$, the 2-local Hamiltonian has the same low-lying spectrum as the 4-local Hamiltonian, to within an error of $\epsilon$.
\secspace\emph{\textbf{Cost}}
\begin{itemize}
\item In total, uses four times the number of qubits of the original Hamiltonian.
\item Unknown requirement for $\lambda$.
\end{itemize}
\secspace\emph{\textbf{Pros}}
\begin{itemize}
\item All done in one step, so easier to implement than two 1B1 gadgets.
\item Very symmetric construction.
\end{itemize}
\secspace\emph{\textbf{Cons}}
\begin{itemize}
\item Ordinary 1B1 or SD followed by 3$\rightarrow$2 gadgets would require half as many total qubits.
\item Many 2-local terms.
\item Perturbative, as opposed to NR-OY which is similar but does not involve any $\lambda$ parameter.
\item The required value of $\lambda$ for it to work is presently unknown.
\end{itemize}
\secspace\emph{\textbf{Bibliography}}
\begin{itemize}
\item Original paper: \cite{Brell2011}.
\end{itemize}
\newpage
\part{{\normalsize {\underline{Appendix}}}}
\section{Transformations from ternary to binary variables}
\begin{eqnarray}
H_{\textrm{ternary}} = -\lambda (z_1\,z_2+z_1-z_2)
\end{eqnarray}
In this implementation the quantity $\frac{1}{2}(z_1+z_2)$ plays the role of the ternary variable $t$, assuming $\lambda$ is large and positive. For instance, a coupling to a binary variable becomes $t\, z_3\rightarrow \frac{1}{2}(z_1+z_2)\,z_3$.
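As a quick check (our own enumeration), the following sketch verifies that the penalty Hamiltonian has exactly three degenerate ground states, on which $\frac{1}{2}(z_1+z_2)$ takes the three values $-1$, $0$, $+1$ of a ternary variable:

```python
import itertools

lam = 1.0
def H(z1, z2):
    # the penalty -lam*(z1*z2 + z1 - z2) on spin variables z in {+1, -1}
    return -lam*(z1*z2 + z1 - z2)

energies = {s: H(*s) for s in itertools.product([+1, -1], repeat=2)}
ground = min(energies.values())                     # -lam
t_vals = sorted((z1 + z2)//2 for (z1, z2), e in energies.items() if e == ground)
print(t_vals)  # [-1, 0, 1]: the ground manifold encodes a ternary variable t
```

The fourth configuration, $(z_1,z_2)=(-1,+1)$, is the only one penalized (energy $+3\lambda$), so the redundant second encoding of $t=0$ is projected out.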
\newpage
\section{Further Examples}
\secspace\emph{\textbf{Example}} \label{subsec:Example_Ramsey_deduc_reduc}
Here we show how deductions can arise naturally from the Ramsey number problem.
Consider $\mathcal{R}(4,3)$ with $N=4$ nodes.
Consider a Hamiltonian:
\begin{equation}
H = (1-z_{12})(1-z_{13})(1-z_{23})+\ldots+(1-z_{23})(1-z_{24})(1-z_{34})+ z_{12}z_{13}z_{14}z_{23}z_{24}z_{34}.
\end{equation}
\begin{comment}
We should assume there are no 3-independent sets, so we have deductions:
$(1-z_{ij})(1-z_{ik})(1-z_{jk})=0$, for each $i,j,k$.
\end{comment}
See \cite{Okada2015} for full details of how we arrive at this Hamiltonian.
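As an illustrative check (our own brute force, with the elided triangle terms filled in as all $\binom{4}{3}$ triples), the minimum of $H$ over all $2^6$ edge assignments is $0$, consistent with $N=4$ lying below $\mathcal{R}(4,3)$:

```python
from itertools import combinations, product

nodes = range(1, 5)
edges = list(combinations(nodes, 2))   # the 6 edges of K_4

def H(z):
    # z maps each edge (i, j) to 0 or 1
    indep = sum((1 - z[(i, j)])*(1 - z[(i, k)])*(1 - z[(j, k)])
                for i, j, k in combinations(nodes, 3))   # 3-independent sets
    clique = 1
    for e in edges:
        clique *= z[e]                                   # the 4-clique term
    return indep + clique

best = min(H(dict(zip(edges, bits))) for bits in product([0, 1], repeat=6))
print(best)  # 0: some edge 2-coloring of K_4 avoids both penalized patterns
```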
Since we are assuming we have no 3-independent sets, we know that
$(1-z_{12})(1-z_{13})(1-z_{23})=0$, so $z_{12}z_{13}z_{23}=z_{12}z_{13}+z_{12}z_{23}+z_{13}z_{23}-z_{12}-z_{13}-z_{23}+1$.
This will be our deduction.
Using deduc-reduc we can substitute this into our 6-local term to get:
\begin{eqnarray}
H & = & 2(1-z_{12})(1-z_{13})(1-z_{23})+\ldots+(1-z_{23})(1-z_{24})(1-z_{34})+\\
& & z_{14}z_{24}z_{34}(z_{12}z_{13}+z_{12}z_{23}+z_{13}z_{23}-z_{12}-z_{13}-z_{23}+1).
\end{eqnarray}
We could repeat this process to remove all 5- and 4-local terms without adding any auxiliary qubits.
Note in this case the error terms added by deduc-reduc already appear in our Hamiltonian.
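The deduction itself is easy to verify by brute force (a quick sketch, taking the $z$'s as 0/1 variables as in the construction above): whenever $(1-z_{12})(1-z_{13})(1-z_{23})=0$, the cubic monomial equals the claimed quadratic expression.

```python
from itertools import product

# Check: if (1-a)(1-b)(1-c) == 0, i.e. at least one variable equals 1,
# then abc == ab + ac + bc - a - b - c + 1.
for a, b, c in product([0, 1], repeat=3):
    if (1 - a)*(1 - b)*(1 - c) == 0:
        assert a*b*c == a*b + a*c + b*c - a - b - c + 1
print("deduction verified on all constrained assignments")
```

This is just the expansion $(1-a)(1-b)(1-c) = 1 - a - b - c + ab + ac + bc - abc$ rearranged under the constraint that the left-hand side vanishes.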
\newpage
\section{$2\rightarrow 2$ gadgets}
This review has only focused on $k$-local to $2$-local transformations where $k>2$. There is also a large number of $2$-local to $2$-local transformations in the literature, which are used for various purposes. Some of these are listed here:
\vspace{10mm}
\begin{itemize}
\item Gadgetization of any $2$-local Hamiltonian into $\{\openone,z,x,zz,xx\}$ or $\{\openone,z,x,zx\}$ \cite{Biamonte2008}. Used for the proof that $xx+zz$ or $xz$ is enough for universal quantum computation. In other words, \textit{any} computation can be transformed into a problem of finding the ground state of a $2$-local Hamiltonian containing terms from $\{\openone,z,x,zz,xx\}$ or from $\{\openone,z,x,zx\}$ with real coefficients, and the ground state can be found by adiabatic quantum computing with only polynomial time and space overhead over the best alternative algorithm for the problem.
\item Transformation of any $2$-local Hamiltonian into $\{\openone,z,x,zz,xx+yy\}$, without any perturbative gadgets, and only requiring the qubits to be connected in an almost 2D lattice \cite{Lloyd2016}.
\item ``Cross gadget'', ``fork gadget'', and ``triangle gadget'' described in \cite{Oliveira2008}.
\item Gadgetization of a $2$-local Hamiltonian with very strong couplings into a $2$-local Hamiltonian with strengths in $\mathcal{O}\left( \nicefrac{1}{\textrm{poly}\left( \epsilon^{-1},n\right)} \right)$, using $\textrm{poly}\left( \epsilon^{-1},n \right)$ auxiliary qubits and $\textrm{poly}\left( \epsilon^{-1},n \right)$ new quadratic terms \cite{Cao2015a}.
\item $yy$ creation gadget: Simulation of $yy$ terms using $\{\openone,z,x,zz,xx\}$, with coupling strength restriction defined according to $\Delta = \Theta\left(\epsilon^{-4}\right)$ \cite{Cao2015}.
\end{itemize}
\subsection{Minor-embedding quadratic functions for different graphs}
\begin{itemize}
\item Minor-embedding general problems for the Chimera \cite{Neven2009} graph \cite{vchoi08,Choi2011a}.
\item Minor-embedding quadratization gadgets for the Pegasus \cite{Dattani2019b} graph \cite{Dattani2019c}.
\end{itemize}
\newpage
\section{Further References}
\begin{itemize}
\item Gadgets for pseudo-Boolean optimization problems, with reduced precision requirements: \cite{Babbush2013}.
\item Formalization of pseudo-Boolean gadgets in quantum language. \cite{Biamonte2008a}.
\item By adding more couplers and more auxiliary qubits, we can bring the error down arbitrarily low: \cite{Cao2015a}.
\item More toric code gadgets: \cite{Brell2014}.
\item Parity adiabatic quantum computing (LHZ lattice): \cite{Lechner2015}.
\item Extensions of the LHZ scheme: \cite{Rocchetto2016}.
\item Minimizing $k$-local discrete functions with the help of continuous variable calculus \cite{Shen2017a}.
\item ORI graph which attempts to give optimal quadratizations \cite{Gallagher2011}.
\item Survey on pseudo-Boolean optimization \cite{Boros2002}.
\item Linearization of equations before they are squared \cite{Schaller2010}, and its application to factoring numbers \cite{Xu2012}.
\item Mentioned in \cite{Ali2008} as an early application of quadratization: \cite{Cunningham1985}.
\item Characterization of NTRs for cubics: \cite{Crama2014a}.
\item Relation between cones of nonnegative quadratic pseudo-Boolean functions and the Boolean quadric polytope \cite{BorosLari2014}.
\item Effective non-Hermitian Hamiltonian with 3-body interactions which helps to calculate electronic structure energies closer to the complete basis set limit \cite{Cohen2019}.
\end{itemize}
\newpage
\section{Circuits that effectively implement degree-$k$ terms for superconducting qubits}
\begin{itemize}
\item Presentation by Northrop Grumman about a $zzz$ coupler \cite{Strand2017} and associated patent \cite{Ferguson2017,Ferguson2018}.
\item Presentation that included discussion about engineering multi-qubit interactions \cite{Kerman2018}, presentation by the same lab about the design and experimental demonstration of a $zzzz$ coupling \cite{Menke2019}, and associated patent \cite{Kerman2018a,Kerman2019,Kerman2019a}.
\item Design of an effective $zzzz$ coupling without any auxiliary logical qubits \cite{Schondorf2018}.
\item Design of a tunable $zzz$ coupling in which all $zz$ couplings are cancelled, and its experimental demonstration \cite{Melanson2019}.
\end{itemize}
\newpage
\section{Contributors}
\subsection*{Richard Tanburn}
\begin{itemize}
\item Richard was the original creator and maintainer of the Git repository.
\item Richard created the Tex commands used throughout the document, and contributed majorly to the overall layout.
\item Richard wrote the original versions of the following sections: (1) Deduc-Reduc, (2) ELC Reduction, (3) Groebner Bases, (4) Split Reduction, (5) NTR-KZFD, (6) NTR-GBP, (7) PTR, (8) PTR-Ishikawa, (9) PTR-KZ, (10) PTR-GBP, (11) Bit flipping, (12) RBS, and (13) FGBZ.
\item Richard also wrote the "Further Example" of Deduc-Reduc in the Appendix.
\end{itemize}
\subsection*{Nicholas Chancellor}
\begin{itemize}
\item Nick made contributions to the following sections: (1) RMS (in terms of z), (2) PTR-RBL-(3$\rightarrow$2), (3) PTR-RBL-(4$\rightarrow$2), (4) SBM, (5) Flag based SAT Mapping, and to the qutrit $\rightarrow$ qubit transformation (6).
\end{itemize}
\subsection*{Szilard Szalay}
\begin{itemize}
\item Szilard re-derived Nike's transformations for the sections: (1) SFR-BCR-1, (2) SFR-BCR-2, (3) SFR-BCR-3, and (4) SFR-BCR-4 from the notation of the original paper, into the format consistent with the rest of the book. In doing so he corrected errors in Nike's work and also fixed them in the main document.
\end{itemize}
\subsection*{Ka Wa Yip}
\begin{itemize}
\item Ka Wa Yip added the page about his own method co-authored with Xu, Koenig and Kumar.
\end{itemize}
\subsection*{Yudong Cao}
\begin{itemize}
\item Yudong wrote the first version of the following section: (1) PSD-CN.
\end{itemize}
\subsection*{Daniel Nagaj}
\begin{itemize}
\item Daniel provided a .tex document to Nike in May 2018 which helped Nike to write the following sections: (1) NP-Nagaj-1, (2) NP-Nagaj-2. The document that Daniel provided made it easier for Nike to write these sections than the original papers.
\end{itemize}
\subsection*{Aritanan Gruber}
\begin{itemize}
\item Aritanan informed us in August 2015 of the material that became the following sections: (1) PTR-Ishikawa.
\end{itemize}
\subsection*{Charles Herrmann}
\begin{itemize}
\item Charles informed us in May 2018 of the papers which contained results which became the following sections: (1) PTR-BCR-1, (2) PTR-BCR-2, (3) PTR-BCR-3, (4) PTR-BCR-4, (5) SFR-BCR-1, (6) SFR-BCR-2, (7) SFR-BCR-3, (8) SFR-BCR-4, (9) SFR-BCR-5, (10) SFR-BCR-6.
\end{itemize}
\subsection*{Elisabeth Rodriguez-Heck}
\begin{itemize}
\item Elisabeth provided us with a 2-page PDF document with valuable comments on the entire Book.
\item Elisabeth also pointed us to what became the following section: (1) ABCG Reduction.
\end{itemize}
\subsection*{Hou Tin Chau}
\begin{itemize}
\item Tin made the examples for the following sections: SFR-BCR-1,2,3,4.
\item Tin fixed a typo in the alternative forms of the following sections: SFR-BCR-3,4.
\end{itemize}
\subsection*{Andreas Soteriou}
\begin{itemize}
\item Andreas found typos on the opening page in the arXiv version which surprisingly no one else found (or pointed out), and he diligently fixed them.
\item Andreas created the example involving $x$,$y$, and $z$ presented on the opening page in the September 2019 version (I plan to have this example further improved at a later time).
\end{itemize}
\subsection*{Jacob Biamonte}
\begin{itemize}
\item Jacob made valuable edits during a proof-reading of the book.
\end{itemize}
\newpage
\section*{Acknowledgments}
\begin{itemize}
\item It is with immense pleasure that we thank Emile Okada of Cambridge University, who, during his first year of undergraduate study, worked with Nike Dattani and Richard Tanburn on quadratization of pseudo-Boolean functions for quantum annealing, and in the first half of 2015 played a role in the development of the Deduc-Reduc and Split-Reduc and Groebner bases methods presented in this review.
\item We thank Gernot Schaller of University of Berlin, who in December 2014 provided Nike Dattani with insights into his quadratization methods mentioned in this review paper, as well as for sharing his Mathematica code which could be used to generate such quadratization formulas and others.
\item We thank Mohammad Amin of D-Wave for pointing Nike Dattani to the paper \cite{Bian2013} on determining Ramsey numbers on the D-Wave device, which contained what we call in this review ``Reduction by substitution'', later found through Ishikawa's paper to be from the much older 1970s paper by Rosenberg.
\item We thank Catherine McGeoch of Amherst University and D-Wave, for helpful discussions with Nike Dattani in December 2014 about how to map quadratic pseudo-Boolean optimization problems onto the chimera graph of the D-Wave hardware and for pointing us to the important references of Vicky Choi. While chimerization is very different from quadratization, understanding that roughly $n^2$ variables would be needed to map a quadratic function of $n$ variables helped Nike Dattani and Richard Tanburn to appreciate how important it is to be able to quadratize with as few variables as possible, and having this in mind throughout our studies helped inspire us in our goals towards ``optimal quadratization''.
\item We thank Aritanan Gruber and Endre Boros of Rutgers University, who in August 2015 shared with Nike Dattani and Richard Tanburn some of their wisdom about sub-modularity, and Aritanan Gruber for pointing us to the then very recent paper of Hiroshi Ishikawa on what we call in this review ``ELC reductions'', which was also a valuable paper due to the references in it. We also thank him for helping us in our quest to determine whether or not ``deduc-reduc'' was a re-discovery by Richard, Emile, and Nike, or perhaps a novel quadratization scheme.
\item We thank Toby Cathcart-Burn of Oxford University, who during the third year of undergraduate study, worked with Nike Dattani and Richard Tanburn and in Autumn 2015 and Winter 2016 helped us gain insights about the application of deduc-reduc and bit flipping to the problem of determining Ramsey numbers via discrete optimization, and for insights into the trade-offs between Ishikawa's symmetric reduction and reduction by substitution.
\item We thank Hiroshi Ishikawa of Waseda University, who Nike Dattani had the memorable opportunity to visit in November 2015, and through discussions about the computer vision problem and neural network problem (two examples of real-world discrete optimization problems that benefit from quadratization), provided insights about the role of quadratization for calculations on classical computers. In particular, solving the computer vision problem in which he had experience solving on classical computers, was very different from the integer factoring problem and Ramsey number problem which we had been attempting to quadratize for D-Wave and NMR devices. He taught us that far more total (original plus auxiliary) variables can be tolerated on classical computers than on D-Wave machines or NMR systems, and approximate solutions to the optimization problems are acceptable (unlike for the factorization and Ramsey number problems in which we were interested). This gave us more insight into what trade-offs one might wish to prioritize when quadratizing optimally. We also thank him for helping us in our quest to determine whether or not ``deduc-reduc'' was a re-discovery by Richard, Emile, and Nike, or perhaps a novel quadratization scheme.
\item Last but indubitably not least, we thank Jacob Biamonte of Skolkovo Institute of Technology, who Nike Dattani enjoyed visiting in Hangzhou in January 2017 and meeting at Harvard University in April 2018. Jacob provided us plenty of insights about perturbative gadgets, made valuable comments on early versions of our manuscript, and has been a major supporter of this review paper since the idea was presented to him in December 2016. At many points during the preparation of this review, we had prioritized other commitments and put preparation of this review aside. Jake's frequent encouragement was often what got us working on this review again. Without him, we are not certain this paper would have been completed by this time (or ever!).
\end{itemize}
\newpage
https://arxiv.org/abs/1104.4992 | Boundedness of trajectories for weakly reversible, single linkage class reaction systems | This paper is concerned with the dynamical properties of deterministically modeled chemical reaction systems with mass-action kinetics. Such models are ubiquitously found in chemistry, population biology, and the burgeoning field of systems biology. A basic question, whose answer remains largely unknown, is the following: for which network structures do trajectories of mass-action systems remain bounded in time? In this paper, we conjecture that the result holds when the reaction network is weakly reversible, and prove this conjecture in the case when the reaction network consists of a single linkage class, or connected component. | \section{Introduction}
Building on the work of Fritz Horn, Roy Jackson, and Martin Feinberg
\cite{FeinbergLec79, Feinberg87, FeinHorn1972, Horn72, Horn74, HornJack72}, the
mathematical theory termed ``Chemical Reaction Network Theory'' has, over the past 40 years, determined many of the basic qualitative properties of chemical reaction networks and, more generally, of models of population processes. As the exact values of key system parameters, termed \textit{rate constants} and denoted here by $\kappa_k$, are usually difficult to determine experimentally
and, hence, are oftentimes unknown, the results tend to be \textit{independent of the values of these parameters}. In large part motivated by the Global Attractor Conjecture \cite{CraciunShiu09}, much of the recent attention in this field has focused on which network structures guarantee that trajectories are \textit{persistent}, in that they cannot approach the boundary of the positive orthant along a sequence of times \cite{Anderson2011, AndShiu, Sontag2007, Angeli2007, CraciunShiu09, CraciunPantea, JohnstonSiegel2, JohnstonSiegel2011, Pantea2011}. In this paper we consider a related question: for which network structures do trajectories of mass-action systems necessarily remain \textit{bounded} in time? This question is similar to that of persistence in that both force us to consider extreme behaviors of the species and, hence, of the monomials of the dynamical system. Like the well-known Persistence Conjecture (see below), we conjecture that all trajectories of weakly reversible systems with mass-action kinetics are bounded in time. We prove this conjecture in the case when the reaction network consists of a single linkage class, or connected component. The methods used in this paper are similar to those introduced in \cite{Anderson2011}, where the Global Attractor Conjecture was shown to hold in the single linkage class case.
\subsection{Formal statement of the problem}
\label{sec:background}
Two of the most basic questions that can be asked about a mathematical model for a chemical, or more generally a population, process are $(i)$ must all trajectories be bounded in time and $(ii)$ are trajectories persistent in the sense of Definition \ref{def:persistence} below.
\begin{definition}
For $t\ge 0$ denoting time, let $\phi(t,x_0)$ be a trajectory to a dynamical system in $\mathbb R^N$ with initial condition $x_0$. A trajectory $\phi(t,x_0)$ with state space $\mathbb R^N_{\ge 0}$ is said to be {\em
persistent} if
\begin{equation*}
\liminf_{t\to \infty} \phi_i(t,x_0) > 0,
\end{equation*}
for all $i \in \{1,\dots,N\}$, where $\phi_i(t,x_0)$ denotes the $i$th component of $\phi(t,x_0)$. A dynamical system is said to be
{\em persistent} if each trajectory with non-negative initial
condition is persistent.
\label{def:persistence}
\end{definition}
Thus, persistence corresponds to a non-extinction requirement. Some authors refer to dynamical systems satisfying the above condition as \textit{strongly persistent} \cite{Takeuchi1996}. In their work, persistence only requires the weaker condition that $\limsup_{t\to \infty} \phi_i(t,x_0) > 0$ for each $i\in\{1,\dots,N\}$.
The following conjecture of Feinberg (see Remark 6.1.E in \cite{Feinberg87}) is one of the best-known conjectures in chemical reaction network theory. It pertains to systems whose reaction networks are weakly reversible, or strongly connected (see Section \ref{sec:def_concepts}), and is intimately related to the Global Attractor Conjecture \cite{CraciunShiu09}.
\vspace{.1in}
\noindent \textbf{Persistence Conjecture.\ (Version 1)} Any weakly reversible
reaction network with mass-action kinetics is persistent.
\vspace{.1in}
In \cite{Anderson2011}, it was pointed out that there are really two natural conjectures pertaining to weakly reversible reaction networks with mass-action kinetics, and that these should be separated.
\begin{definition}
For $t\ge 0$ denoting time, let $\phi(t,x_0)$ be a trajectory to a dynamical system in $\mathbb R^N$ with initial condition $x_0$. A trajectory $\phi(t,x_0)$ is said to be {\em
bounded} if
\begin{equation*}
\limsup_{t\to \infty} |\phi(t,x_0)| < \infty.
\end{equation*}
A dynamical system is said to have
{\em bounded trajectories} if each trajectory is bounded.
\label{def:boundedTraj}
\end{definition}
\vspace{.1in}
\noindent \textbf{Persistence Conjecture.\ (Version 2)} Any weakly reversible reaction network with mass-action kinetics and \textit{bounded trajectories} is persistent.
\vspace{.1in}
\noindent \textbf{Boundedness Conjecture.} Any weakly reversible reaction network with mass-action kinetics has bounded trajectories.
\vspace{.1in}
Clearly, the latter two conjectures would imply the first, which would then imply the well-known Global Attractor Conjecture (see \cite{CraciunShiu09, Feinberg87}). Note that none of the conjectures makes any assumptions on the choice
of rate constants, which are the natural parameters found in these systems (see Section \ref{sec:def_concepts}). The Boundedness Conjecture stated above is quite similar to the Extended Permanence Conjecture found in \cite{CraciunPantea}, which conjectures that all ``endotactic'' systems (which include those that are weakly reversible) are \textit{permanent}. Permanence is an even stronger condition than bounded trajectories in that all trajectories of a compatibility class (invariant manifold), regardless of initial condition, must enter a single compact subset of $\mathbb R^N_{>0}$. The Extended Permanence Conjecture is proven in \cite{CraciunPantea} in the case when the system is two-dimensional. In Section \ref{sec:permanence} we briefly discuss permanence and conclude that weakly reversible, single linkage class systems are permanent if there is a $\delta>0$ for which $\liminf_{t\to \infty}\phi_i(t,x_0) \ge \delta$ for all $x_0$ and all $i$. That is, when the system is, in some sense, \textit{uniformly persistent}.
Each of the above-mentioned conjectures remains open. In recent years there has been a great deal of effort aimed at resolving the Persistence Conjecture, and typically that work has focused on Version 2. This focus is natural, as much of the motivation for the work stemmed from consideration of ``complex-balanced'' systems, see \cite{FeinbergLec79, Feinberg87}, which are known to have bounded trajectories. Relatively little attention has been paid, therefore, to the related Boundedness Conjecture, as formally stated above. We will refrain from giving an exhaustive background on the work aimed at resolving the Persistence Conjectures, and instead point the interested reader to \cite{Anderson2011}, where such an introduction, including most of the relevant references related to persistence and the Global Attractor Conjecture, can be found.
\subsection{Results in this paper}
In this paper we will prove that the Boundedness Conjecture holds for all systems whose network consists of a single linkage class, or connected component (see Section \ref{sec:def_concepts}). To prove our results, we will use a method, introduced in \cite{Anderson2011}, for partitioning the relevant monomials of the dynamical system along sequences of trajectory points into classes with comparable growths. This will allow us to prove that there is a Lyapunov function which decreases along all paths when $|x(t)|$ is sufficiently large.
We will prove all of our results in a slightly more general setting than mass-action kinetics in that we will allow our rate ``constants'' to actually be bounded functions of time. Results pertaining to systems with such a generalized mass-action kinetics are useful as these systems arise naturally when a system with standard mass-action kinetics is {\em projected} onto a subset of the species (see Section 3 of \cite{Anderson2011}).
The outline of the paper is as follows. In Section \ref{sec:def_concepts}, we provide a review of the requisite definitions and terminology from chemical reaction network theory. In Section \ref{sec:results}, we give our main results together with their proofs.
\section{Preliminary concepts and definitions}
\label{sec:def_concepts}
Most of the following definitions are standard in chemical reaction network theory. The interested reader should see \cite{FeinbergLec79} or \cite{Gun2003} for a more detailed introduction.\vspace{.125in}
\noindent \textbf{Reaction networks.} An example of a chemical reaction is $2S_{1}+S_{2} ~\rightarrow~ S_{3},$
where we interpret the above as saying two molecules of type $S_1$ combine with a molecule of type $S_2$ to produce a molecule of type $S_3$. For now, assume that there are no other reactions under consideration. The $S_{i}$ are called chemical {\em species} and the linear combinations of the species found at either end of the reaction arrow, namely $2S_{1}+S_{2}$ and
$S_{3}$, are called chemical {\em complexes.} Assigning the {\em
source} (or reactant) complex $2S_{1}+S_{2}$ to the vector $y =
(2,1,0)$ and the {\em product} complex $S_{3}$ to the vector
$y'=(0,0,1)$, we can formally write the reaction as $ y \rightarrow y' .$
In the
general setting we denote the number of species by $N$, and for $i \in \{1,\dots, N\}$ we denote the $i$th species as $S_{i}$. We then
consider a finite set of reactions with the $k$th denoted by $ y_{k} \rightarrow y_{k}', $
where $y_k, y_k' \in \mathbb Z^N_{\ge 0}$ are (non-equal) vectors whose components give the coefficients of the source and product complexes, respectively. Using a slight abuse of notation, we will also refer to the vectors $y_k$ and $y_k'$ as the complexes. Note that if $y_k = \vec 0$ or $y_k' = \vec 0$ for some $k$, then the $k$th
reaction represents an input or output, respectively, to the system. Note also that any
complex may appear as both a source complex and a product complex in
the system. We will usually, though not always (for example, see condition 3 in Definition \ref{def:crn} below), use the prime $'$ to denote the product complex of a given reaction.
As an example, suppose that the entire network consists of the two species $S_1$ and $S_2$ and the two reactions
\begin{equation}
S_1 \to S_2 \quad \text{and} \quad S_2 \to S_1,
\label{eq:ex1}
\end{equation}
where $S_1 \to S_2$ is arbitrarily labeled as ``reaction 1.'' Then $N = 2$ and
\begin{equation*}
y_1 = (1,0), \quad y_1' = (0,1) \qquad \text{and} \qquad y_2 = (0,1), \quad y_2' = (1,0).
\end{equation*}
Thus, the vector $(1,0)$, or equivalently the complex $S_1$, is both $y_1$, the source of the first reaction, and $y_2'$, the product of the second.
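The bookkeeping in this example is simple enough to carry out in code. The following minimal sketch (ours; the list names are illustrative choices, not part of the paper) encodes the two reactions of \eqref{eq:ex1} as source/product pairs and recovers the identification above:

```python
# Encode the two-reaction network S1 <-> S2 from the text:
# reaction k is the pair (y_k, y_k') of source and product
# vectors in Z^2_{>=0}.
reactions = [
    ((1, 0), (0, 1)),   # reaction 1:  S1 -> S2
    ((0, 1), (1, 0)),   # reaction 2:  S2 -> S1
]

# The complex S1, i.e. the vector (1,0), is both the source y_1
# of reaction 1 and the product y_2' of reaction 2.
y1, y1p = reactions[0]
y2, y2p = reactions[1]
assert y1 == (1, 0) and y1 == y2p
```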
For ease of notation, when there is no need for
enumeration we will typically drop the subscript $k$ from the notation
for the complexes and reactions.
\begin{definition}
Let $\mathcal S = \{S_i\}_{i=1}^N$, $\mathcal C = \{y\}$ with $y \in \mathbb Z^N_{\ge 0}$, and $\mathcal R = \{y \to y'\}$ denote
finite sets of species, complexes, and reactions, respectively. The triple
$\{\mathcal S, \mathcal C, \mathcal R\}$ is called a {\em chemical reaction network} so long as the following three natural requirements are met:
\begin{enumerate}
\item For each $S_i\in \mathcal S$, there exists at least one complex $y\in \mathcal C$ for which $y_{i} \ge 1$.
\item There is no trivial reaction $y \to y \in \mathcal R$ for some complex $y \in \mathcal C$.
\item For any $y\in \mathcal C$, there must exist a $y'\in \mathcal C$ for which $y \to y' \in \mathcal R$ or $y' \to y \in \mathcal R$.
\end{enumerate}
\label{def:crn}
\end{definition}
\textbf{Notation:} We will use each of the following choices of notation to denote a complex from $\mathcal C$: $y$, $y'$, $y_i$, $y_j$, $y_k$, $y_k'$, etc. However, there will be other times in which we wish to denote the $i$th component of a complex. If the complex in question has been denoted by $y_k$, then we will write $y_{k,i}$. However, if the complex has been denoted by $y$, then we would write its $i$th component as $y_i$, which, through context, should not cause confusion with a choice of \textit{complex} $y_i$. See, for example, condition 1 in Definition \ref{def:crn} above.
\begin{definition}
To each reaction network $\{\mathcal{S},\mathcal{C},\mathcal{R}\}$
we assign a unique directed graph called a {\em reaction diagram}
constructed in the following manner. The nodes of the graph are the
complexes, $\mathcal{C}$. A directed edge $(y,y')$ exists if and only
if $y \to y' \in \mathcal R$. Each connected
component of the resulting graph is termed a {\em linkage class} of
the graph.
\label{def:diagram}
\end{definition}
For example, the system described in and around \eqref{eq:ex1} has reaction diagram $S_1 \rightleftarrows S_2$,
which consists of a single linkage class.
\begin{definition}
Let $\{\mathcal S,\mathcal C,\mathcal R\}$ denote a chemical reaction network. Denote the
complexes of the $i$th linkage class by $L_i \subset \mathcal C$. We say $T \subset \mathcal C$
consists of a {\em union of linkage classes} if $T = \cup_{i \in I} L_i$
for some nonempty index set $I$.
\end{definition}
\begin{definition}
The chemical reaction network $\{\mathcal S,\mathcal C,\mathcal R\}$ is said to be {\em weakly
reversible} if each linkage class of the corresponding reaction
diagram is strongly connected. A network is said to be
{\em reversible} if $y' \to y \in \mathcal R$ whenever $y \to y' \in
\mathcal R.$
\label{def:WR}
\end{definition}
It is easy to see that a chemical reaction network is weakly reversible if and only if for each reaction $y \to y'\in \mathcal R$, there exists a sequence of complexes, $y_1,\dots, y_r\in \mathcal C$, such that $y' \to y_1 \in \mathcal R, y_1 \to y_2 \in \mathcal R, \cdots, y_{r-1}\to y_r\in \mathcal R,$ and $y_r \to y\in \mathcal R$.
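This path characterization is easy to check mechanically. The sketch below (our illustrative code; the function names are ours) tests, for each reaction $y \to y'$, whether $y$ can be reached from $y'$ through the directed reaction graph:

```python
# Weak reversibility via the characterization above: every reaction
# y -> y' must admit a directed path of reactions from y' back to y.

def reachable(edges, start):
    """Set of complexes reachable from `start` along directed edges."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        for (a, b) in edges:
            if a == v and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def weakly_reversible(edges):
    return all(y in reachable(edges, yp) for (y, yp) in edges)

# S1 <-> S2 is weakly reversible; the lone reaction S1 -> S2 is not.
assert weakly_reversible([("S1", "S2"), ("S2", "S1")])
assert not weakly_reversible([("S1", "S2")])
```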
\vspace{.225in}
\noindent \textbf{Dynamics.} A chemical reaction network gives rise to a dynamical system by way of
a \textit{rate function} for each reaction.
That is, for each $y_k \to y_k'\in \mathcal R$, or simply $k\in\{1,\dots,|\mathcal R|\},$ we suppose the existence
of a function $\displaystyle R_k =
R_{y_k \to y_k'}$ that determines the rate of that reaction.
The functions $R_{k}$ are
typically referred to as the \textit{kinetics} of the system and will be denoted by $\mathcal K$, or $\mathcal K(t)$ in the non-autonomous case. The
dynamics of the system is then given by the following coupled set of
(typically nonlinear) ordinary differential equations
\begin{equation}
\dot x(t) = \sum_{k} R_{k}(x(t),t)(y_k' - y_k),
\label{eq:main_general}
\end{equation}
where $k$ enumerates over the reactions and $x(t) \in \mathbb R^N_{\ge 0}$ is a vector whose $i$th component represents the concentration of species $S_i$ at time $t\ge 0$.
\begin{definition}
A chemical reaction network $\{\mathcal S,\mathcal C,\mathcal R\}$ together with a choice of kinetics $\mathcal K$ is called a {\em chemical reaction system} and is denoted via the quadruple $\{\mathcal S,\mathcal C,\mathcal R,\mathcal K\}$. In the non-autonomous case where the $R_k$ can depend explicitly on $t$, we will write $\{\mathcal S,\mathcal C,\mathcal R,\mathcal K(t)\}$. We say that a chemical reaction system is
{\em weakly reversible} if its underlying network is.
\end{definition}
Integrating \eqref{eq:main_general} with respect to time yields
\begin{equation*}
x(t) = x(0) + \sum_{k} \left(\int_0^t R_k(x(s),s) ds \right)
(y_k' - y_k).
\end{equation*}
Therefore, $x(t) - x(0)$ remains within $S =
\text{span}\{y_k' - y_k\}_{k \in \{1,\dots,|\mathcal R|\}}$ for all time.
\begin{definition}
The {\em stoichiometric subspace} of a network is the linear
space $S = \text{\em span}\{y_k' - y_k\}_{k \in \{1,\dots,|\mathcal R|\}}$. The vectors $y_k' - y_k$ are called the {\em reaction vectors}.
\label{def:stoich_sub}
\end{definition}
Under mild conditions on the rate functions of a
system, a trajectory $x(t)$ with
strictly positive initial condition $x(0) \in \mathbb R^N_{>0}$ remains in the
strictly positive orthant $\mathbb R^N_{>0}$ for all time (see, for example, Lemma~2.1 of
\cite{Sontag2001}). Thus, the trajectory remains in the relatively open set $(x(0)
+ S) \cap \mathbb{R}^N_{> 0}$, where $x(0) + S := \{z \in \mathbb R^N \ | \ z
= x(0) + v, \text{ for some } v \in S\}$, for all time. In other
words, this set is \textit{forward-invariant} with respect to the
dynamics. It is also easy to show that under the same mild conditions on $R_k$, $(x(0) + S) \cap
\mathbb{R}^N_{\ge 0}$ is forward invariant with respect to the dynamics. The sets $(x(0) + S) \cap
\mathbb{R}^N_{> 0}$
will be referred to as the \textit{positive stoichiometric compatibility classes}, or simply as the \textit{positive classes}. Note that if each of the sets $(x(0) + S) \cap \mathbb{R}^N_{> 0}$ is bounded,
then all trajectories of the dynamical system are necessarily bounded also. Therefore, the results of this paper are of interest when each positive class is an unbounded set.
The most common kinetics is that of \textit{mass-action kinetics}. A
chemical reaction system is said to have mass-action kinetics if all rate functions $R_{k} = R_{y_k \to y_k'}$
take the multiplicative form
\begin{equation}
R_{k}(x) = \kappa_k x_1^{y_{k,1}} x_2^{y_{k,2}} \cdots x_N^{y_{k,N}},
\label{eq:massaction}
\end{equation}
where $\kappa_k$ is a positive reaction rate constant and $y_k$ is the source complex for the reaction. For $u\in \mathbb R^N_{\ge 0}$ and $v \in \mathbb R^N$, we define
\begin{equation*}
u^v \overset{\text{\tiny def}}{=} u_1^{v_1} \cdots u_N^{v_N},
\end{equation*}
where we have
adopted the convention that $0^0 = 1$, and the above is undefined if $u_i = 0$ when $v_i < 0$. Mass-action kinetics can then be written succinctly as $R_k(x) = \kappa_k x^{y_k}.$
Combining \eqref{eq:main_general} and \eqref{eq:massaction} gives the
following system of differential equations
\begin{equation}
\dot x(t) = \sum_{k} \kappa_k x(t)^{y_k}(y_k' - y_k).
\label{eq:main}
\end{equation}
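As a concrete illustration of \eqref{eq:main}, the reversible network $S_1 \rightleftarrows S_2$ of \eqref{eq:ex1} with rate constants $\kappa_1 = 2$ and $\kappa_2 = 1$ gives $\dot x_1 = -2x_1 + x_2$ and $\dot x_2 = 2x_1 - x_2$. The sketch below (our code; the forward-Euler scheme and all names are illustrative, not part of the paper) integrates this system and exhibits both boundedness and the conserved quantity $x_1 + x_2$:

```python
# Forward-Euler sketch (illustrative) for the mass-action system
# S1 <-> S2 with kappa_1 = 2, kappa_2 = 1:
#   dx1/dt = -2*x1 + x2,   dx2/dt = 2*x1 - x2.

def step(x, dt):
    x1, x2 = x
    r1 = 2.0 * x1          # rate of S1 -> S2: kappa_1 * x^{y_1}
    r2 = 1.0 * x2          # rate of S2 -> S1: kappa_2 * x^{y_2}
    return (x1 + dt * (r2 - r1), x2 + dt * (r1 - r2))

x = (3.0, 0.0)
for _ in range(20000):     # integrate to time t = 20
    x = step(x, 1e-3)

# x1 + x2 stays at its initial value 3, and the trajectory settles
# at the equilibrium (1, 2), where the two reaction rates balance.
print(x)
```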
We will generalize the equation \eqref{eq:main} slightly by allowing each $\kappa_k$ to be a bounded function of time. See Definition 2.6 in \cite{CraciunPantea} for a similar definition, and \cite{Angeli2011} for another recent treatment of chemical reaction systems with non-autonomous dynamics.
\begin{definition}
We say that the non-autonomous system $\{\mathcal S,\mathcal C,\mathcal R,\mathcal K(t)\}$ has {\em bounded mass-action kinetics} if there exists an $\eta > 0$ such that for each $k\in \{1,\dots,|\mathcal R|\}$
\begin{equation*}
R_k(x,t) = \kappa_k(t) x^{y_k},
\end{equation*}
where $\eta < \kappa_k(t) < 1/\eta$ for all $t \ge 0$. Hence, the vector of concentrations satisfies
\begin{equation*}
\dot x(t) = \sum_{k} \kappa_k(t) x(t)^{y_k}(y_k' - y_k).
\label{eq:main2}
\end{equation*}
\end{definition}
We require some final notation. Let $v \in \mathbb R^N$ for some $N \ge 1$, and let $U \subset \{1,\dots,N\}$. We write $U[j]$ for the $j$th smallest element of $U$. We then write $v|_U$ to denote the vector of size $|U|$ with
\begin{equation*}
v|_{U,j} = (v|_U)_j \overset{\text{\tiny def}}{=} v_{U[j]}
\end{equation*}
for $j \in \{1,\dots, |U|\}.$
Thus, $v|_U$ simply denotes the projection of $v$ onto the components enumerated by $U$.
For example, if $N = 8$ and $U = \{2,4,7\}$, then for any $v \in \mathbb R^8$, $v|_{U} = (v_2,v_4,v_7)\in \mathbb R^3$.
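In code, the projection $v|_U$ amounts to indexing $v$ by the elements of $U$ taken in increasing order; a short illustration (ours, mirroring the example just given):

```python
# v|_U: keep the components of v indexed by U (1-indexed,
# elements of U enumerated in increasing order).
def project(v, U):
    return tuple(v[i - 1] for i in sorted(U))

# The example from the text: N = 8, U = {2, 4, 7},
# so v|_U = (v_2, v_4, v_7).
v = (1, 2, 3, 4, 5, 6, 7, 8)
assert project(v, {2, 4, 7}) == (2, 4, 7)
```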
\section{Main results}
\label{sec:results}
Recall that for any vectors $u,v$ such that $u \in \mathbb R^N_{\ge 0}$ and $v\in \mathbb R^N$ we define $u^v \overset{\text{\tiny def}}{=} u_1^{v_1}\cdots u_N^{v_N}$, where we use the convention $0^0 = 1$. For completeness, we recall the following standard definition.
\begin{definition}
For any set $\mathcal C$, we say $\{T_i\}$ is a {\em partition} of $\mathcal C$ if each $T_i$ is non-empty, $\bigcup_i T_i = \mathcal C$, and for all $i\ne j$, $T_i \cap T_j = \emptyset$.
\end{definition}
The following combination of Definition \ref{def:partition} and Lemma \ref{lem:partition} is a generalization of Definition 4.1 and Lemma 4.2 found in \cite{Anderson2011}. While the generalization is not made use of in the current paper, we hope that pointing out that the function $f$ below can be nearly arbitrary will be beneficial in future work.
\begin{definition}
Let $\mathcal C$ denote a finite set of vectors in $\mathbb R^N$. Let $x_n \in
\mathbb R^N$ denote a sequence of points. For $D \subset \mathbb R^N$ with $\{x_n\} \subset D$, let $f:D\times \mathcal C \to \mathbb R_{>0}$. We say that $\mathcal C$ is
{\em partitioned along the sequence $\{x_n\}$ with respect to $f$} if there exists a partition, $\{T_i\}_{i = 1}^P$, of $\mathcal C$, where the $T_i$ are termed tiers, and a constant $C > 1$, such that
\begin{enumerate}[$(i)$]
\item if $y_j,y_k \in T_i$ for some $i \in \{1,\dots,P\}$, then for all $n$
\begin{equation*}
\frac{1}{C} f(x_n,y_j) \le f(x_n,y_k) \le C f(x_n,y_j).
\end{equation*}
\item if $y_j \in T_i$ and $y_k \in T_{i+m}$ for some $m \ge 1$,
then
\begin{equation*}
\frac{f(x_n,y_j)}{f(x_n,y_k)} \to \infty, \quad
\text{as } n \to \infty.
\end{equation*}
\end{enumerate}
When $f(x,y) = x^y$, the case considered in both this paper and \cite{Anderson2011}, we will simply say that {\em $\mathcal C$ is partitioned along $\{x_n\}$}.
\label{def:partition}
\end{definition}
Note that we have a natural ordering of the tiers: $T_1 \succ T_2 \succ T_3 \succ \cdots \succ T_P$, and we say $T_1$ is the ``highest'' tier, whereas $T_P$ is the ``lowest'' tier.
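For the concrete choice $f(x,y) = x^y$ and the sequence $x_n = (n, 1/n)$, one has $x_n^{y} = n^{y_1 - y_2}$, so complexes fall into tiers according to the exponent $y_1 - y_2$. The sketch below (ours; all names illustrative) builds this partition for a small set of complexes:

```python
# Tier formation for f(x, y) = x^y along x_n = (n, 1/n):
# x_n^y = n^{y_1 - y_2}, so grouping by the exponent y_1 - y_2
# yields the tiers, ordered highest tier first.
from itertools import groupby

complexes = [(1, 0), (0, 1), (1, 1), (2, 0)]

def exponent(y):
    return y[0] - y[1]     # growth rate of x_n^y in n

ordered = sorted(complexes, key=exponent, reverse=True)
tiers = [list(g) for _, g in groupby(ordered, key=exponent)]
print(tiers)  # [[(2, 0)], [(1, 0)], [(1, 1)], [(0, 1)]]
```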
The proof of the following lemma is a slight modification of the proof of Lemma 4.3 in \cite{Anderson2011} and is omitted.
\begin{lemma}
Let $\mathcal C$ denote a finite set of vectors in $\mathbb R^N$. Let $x_n$ be a
sequence of points in $\mathbb R^N_{>0}$. For $D \subset \mathbb R^N$ with $\{x_n\} \subset D$, let $f:D\times \mathcal C \to \mathbb R_{>0}$. Then, there exists a subsequence of
$\{x_n\}$ along which $\mathcal C$ is partitioned with respect to $f$.
\label{lem:partition}
\end{lemma}
The following lemma, which is similar in spirit to Farkas' Lemma, states that for any set of vectors in $\mathbb R^N$, either their span includes a non-zero vector in the non-positive orthant $\mathbb R^N_{\le 0}$, or there is a vector in the strictly positive orthant that is normal to their span.
\begin{lemma}[Stiemke's Theorem, \cite{Stiemke1915}]
For $i = 1,\dots, n$, let $u_i \in \mathbb R^m$. Either there exists a $c \in \mathbb R^n$ such that
\begin{equation*}
\left(\sum_{i=1}^n c_i u_i\right)_j \le 0, \quad j = 1,\dots,m
\end{equation*}
and such that at least one of the inequalities is strict,
or there is a $w \in \mathbb R^m_{> 0}$ such that $w\cdot u_i=0$ for each
$i\in \{1,\dots, n\}$.
\label{lem:Stiemke}
\end{lemma}
\begin{corollary}
For $i = 1,\dots, n$, let $u_i \in \mathbb R^m$. Let $U \subset \{1,\dots, m\}$ and $V = U^c$. Either there exists a $c \in \mathbb R^n$ such that
\begin{align*}
\left(\sum_{i=1}^n c_i u_i\right)_j &\le 0, \quad j \in U\\
\left(\sum_{i=1}^n c_i u_i\right)_j &\ge 0, \quad j \in V
\end{align*}
and such that at least one of the inequalities is strict, or there is a $w \in \mathbb R^m$ with
\begin{align*}
w_j &> 0 \text{ for } j \in U\\
w_j & < 0 \text{ for } j \in V
\end{align*}
such that $w\cdot u_i=0$ for each
$i\in \{1,\dots,n\}$.
\label{lem:Stiemke2}
\end{corollary}
\begin{proof}
Define the vector valued function $\theta:\mathbb R^m \to \mathbb R^m$ via
\begin{equation*}
\theta(u)_j \overset{\text{\tiny def}}{=} \left\{ \begin{array}{cl}
u_j & \text{ if } j \in U\\
-u_j & \text{ if } j \in V
\end{array}\right. .
\end{equation*}
Applying Lemma \ref{lem:Stiemke} to the set of vectors $\theta(u_i)$ proves the result.
\end{proof}
\begin{definition}
Let $w \in \mathbb R^N$. The set $\{i \in \{1,\dots,N\} \ : \ w_i>0\}$ is called the {\em positive support} of $w$ and the set $\{i \in \{1,\dots,N\} \ : \ w_i<0\}$ is called the {\em negative support} of $w$. The union of the positive and negative support of $w$, i.e. the set $\{i \in \{1,\dots,N\} \ : \ w_i \ne 0\}$, is called the {\em support} of $w$.
\label{def:support}
\end{definition}
\begin{definition}
Let $\mathcal C$ denote a finite set of vectors in $\mathbb R^N$. Let $\{T_i\}$
denote a partition of $\mathcal C$. Let $U,V \subset \{1,\dots,
N\}$ with $U\cup V$ nonempty. We will say that the vector $w \in \mathbb R^N$ is a {\em conservation relation that respects the triple} $(U,V,\{T_i\})$ if the
following two conditions hold:
\begin{enumerate}
\item $U$ is the positive support of $w$ and $V$ is the negative support of $w$.
\item Whenever $y_j,y_{\ell} \in T_i$
for some $i$, we have that $w \cdot (y_j - y_{\ell}) = 0$.
\end{enumerate}
\end{definition}
\begin{definition}
Let $x_n \in \mathbb R^N_{>0}$ denote a sequence of points. We say that $x_n$ is {\em partially monotonic} if $x_{n,i} \ge x_{n+1,i}$ for each $i$ for which $\liminf_{n\to \infty} x_{n,i}=0$ and if $x_{n,j} \le x_{n+1,j}$ for each $j$ for which $\limsup_{n\to \infty} x_{n,j}=\infty$.
\label{def:partial_mon}
\end{definition}
Note that Definition \ref{def:partial_mon} stands silent on the behavior of those $j$ for which $0< \liminf_{n\to \infty} x_{n,j} \le \limsup_{n\to\infty} x_{n,j} < \infty$.
\begin{theorem}
Let $\mathcal C$ denote a finite set of vectors in $\mathbb R^N$. Let $x_n \in \mathbb R^N_{>0}$ denote a partially monotonic sequence of points for which $\lim_{n \to \infty}x_{n,i} \in \{0,\infty\}$ for at least one $i \in \{1,\dots,N\}$.
Let
\begin{align*}
U &= \{i \in \{1,\dots,N\} \, : \, \lim_{n\to \infty} x_{n,i} = 0\}\\
V &= \{j \in \{1,\dots,N\} \, : \, \lim_{n\to \infty} x_{n,j} = \infty\}.
\end{align*}
Finally, suppose that $\mathcal C$ is partitioned along
$\{x_{n}\}$ with tiers $T_i$, for $i=1,\dots,
P$, and constant $C>1$. Then, there is a
conservation relation $w \in \mathbb R^N$ that
respects the triple $(U, V,\{T_i\})$.
\label{thm:conservation}
\end{theorem}
\begin{proof}
We suppose, in order to find a contradiction, that there is no
conservation relation that respects the triple $(U, V,\{T_i\})$.
Define the sets $W_i \subset \mathbb R^N$, for $i = 1,\dots,P$, and $W \subset \mathbb R^N$ via
\begin{align*}
W_i &\overset{\text{\tiny def}}{=} \{y_{j} - y_{k}\in \mathbb R^N\ | \ y_{j}, y_{k} \in T_i \},\quad
W \overset{\text{\tiny def}}{=} \bigcup_{i = 1}^{P} W_i,
\end{align*}
and denote the elements of $W$ by $\{u_k\}$. Note that if $T_i$ consists of a single element, then $W_i$ consists solely of the zero vector. Let $m = |U\cup V| >0$ be the number of elements
in $U\cup V$ and let
$W|_{U\cup V} \subset \mathbb R^m$
be the restriction
of $W$ to the
components associated with the index set $U\cup V$, as discussed at the end of Section \ref{sec:def_concepts}. Denote the elements of $W|_{U\cup V}$ by
$\{v_k\}$. Thus, collecting terminology, $u_k \in \mathbb R^N$, whereas $v_k\in \mathbb R^m$, and for each $u_k\in W$, there is a
corresponding $v_k \in W|_{U\cup V}$ for which $u_k|_{U\cup V} = v_k$; however, the mapping $\cdot |_{U\cup V}$ need not be injective.
The set $W|_{U\cup V}$ must contain at least one nonzero vector because otherwise any non-negative vector with support $U\cup V$ would be a non-negative conservation relation that respects the triple $(U,V,\{T_i\})$, but we have assumed that no such relation exists.
Because we have assumed
there is no conservation relation that respects the triple
$(U, V,\{T_i\})$, we may conclude by Corollary \ref{lem:Stiemke2} that there exist $c_{k} \in \mathbb R$ such that
\begin{align}
\begin{split}
\left(\sum_{v_k \in W|_{U\cup V}}c_{k}
v_{k}\right)_j &\le 0, \quad \text{ if } j \in U\\
\left(\sum_{v_k \in W|_{U\cup V}}c_{k}
v_{k}\right)_j &\ge 0, \quad \text{ if } j \in V,
\end{split}
\label{app:stiemke1}
\end{align}
and such that the inequality is strict for
at least one $j\in U\cup V$.
For $v_k \in W|_{U\cup V}$, let $m_{k}$ denote the number of vectors of $W$ that reduce to it. Define the function $M:\mathbb R^N \to \mathbb R$ by
\begin{equation*}
M(x) \overset{\text{\tiny def}}{=} \left[ \prod_{ u_k \in W }
\left(x^{u_k} \right)^{c_{k}/m_{k}} \right],
\end{equation*}
where $c_{k}$ and $m_{k}$ are chosen for $u_k \in W$ if $u_k|_{U\cup V} = v_k\in W|_{U\cup V}$.
Note that, by construction and by the definition of partitioning along a sequence, if $u_k \in W$, then there are $y_j,y_{\ell} \in T_i$ for some $i$, such that $u_k = y_{\ell} - y_{j}$ and
\begin{equation*}
\frac{1}{C} \le x_{n}^{u_k} = \frac{x_{n}^{y_{\ell}}}{x_{n}^{y_j}} \le C,
\end{equation*}
for all $n\ge 1$. Therefore, $M(x_{n})$ is uniformly, in $n$, bounded both from
above and below. Noting that each $x_n$ has strictly positive components, we may take logarithms and find
\begin{align}
\ln(M(x_{n})) = \bigg( \sum_{u_k\in W}
\frac{c_{k}}{m_{k}} u_k \bigg) \cdot \ln x_{n},
\label{eq:logM}
\end{align}
where for a vector $u \in \mathbb R^N_{>0}$ we define
\begin{align*}
\ln(u) \overset{\text{\tiny def}}{=} (\ln(u_1),\cdots, \ln(u_N)).
\end{align*}
Expanding equation \eqref{eq:logM} along elements of $U\cup V$ and $(U\cup V)^c$ yields,
\begin{align}
\begin{split}
\ln(M(x_{n})) &= \bigg( \sum_{v_k\in W|_{U\cup V}} c_{k} v_k|_U \bigg) \cdot \ln (x_{n}|_U) + \bigg( \sum_{v_k\in W|_{U\cup V}} c_{k} v_k|_V \bigg) \cdot \ln (x_{n}|_V)\\
&\hspace{.2in} + \bigg( \sum_{u_k\in W} \frac{c_{k}}{m_{k}} u_k|_{(U\cup V)^c} \bigg) \cdot \ln (x_{n}|_{(U\cup V)^c}).
\end{split}
\label{eq:bound}
\end{align}
By construction, $x_{n,\ell}$ is bounded from both above and below
for $\ell \in (U\cup V)^c$. Thus, the final term in \eqref{eq:bound} is bounded from above and below. By the inequalities in \eqref{app:stiemke1}, at least one of which is strict, and the facts that $x_{n,i} \to 0$ for each $i \in U$ and $x_{n,j} \to \infty$ for each $j \in V$, we may conclude that the sum of the first and second terms, and hence $\ln(M(x_{n}))$ itself, is unbounded towards positive infinity as $n \to \infty$. This is a
contradiction with the previously found fact that $M(x_{n})$ is
uniformly bounded above and below, and the result is shown.
\end{proof}
\subsection{Bounded trajectories in the single linkage class case}
\label{sec:results2}
Define $V_1:\mathbb R^N_{>0} \to \mathbb R_{\ge 0}$ by
\begin{equation}
V_1(z) \overset{\text{\tiny def}}{=} \sum_{i = 1}^N \left[ z_i (\ln(z_i) - 1) + 1\right].
\label{eq:Lyapunov}
\end{equation}
This is the standard Lyapunov function of chemical reaction network theory where we have chosen $\overline x = (1,\dots,1)$ \cite{FeinbergLec79, Gun2003}. Note that $\nabla V_1(x) = \ln x$.
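The stated properties of $V_1$ are quick to confirm numerically; the following check (ours, with an illustrative finite-difference test of $\partial_i V_1(z) = \ln z_i$) is a sketch, not part of the paper:

```python
# Numerical check of V_1(z) = sum_i [z_i (ln z_i - 1) + 1]:
# V_1 >= 0 with minimum 0 at z = (1,...,1), and grad V_1 = ln z.
import math

def V1(z):
    return sum(zi * (math.log(zi) - 1.0) + 1.0 for zi in z)

assert V1((1.0, 1.0, 1.0)) == 0.0
assert V1((0.5, 2.0, 3.0)) > 0.0

# central finite difference of the first partial derivative
z, h = [0.7, 1.9], 1e-6
num = (V1((z[0] + h, z[1])) - V1((z[0] - h, z[1]))) / (2 * h)
assert abs(num - math.log(z[0])) < 1e-6
```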
It is straightforward to show that $V_1$ is convex with a global minimum of zero at $(1,\dots,1)$ \cite{FeinbergLec79}. The following is a generalization of Lemma 4.7 in \cite{Anderson2011}.
\vspace{.125in}
\begin{lemma}
Let $\{\mathcal S,\mathcal C,\mathcal R,\mathcal K(t)\}$, with $\mathcal S = \{S_1,\dots, S_N\}$, be a weakly reversible,
non-autonomous mass-action system with bounded kinetics. Let $D \subset \mathbb R^N_{>0}$.
One of the following two conditions holds:
\begin{enumerate}
\item[C1:] There exists an $M>0$, such that for any $x \in D$ for which $x_{i} > M$ or $x_i < 1/M$ for at least one $i\in \{1,\dots,N\}$, we have
\begin{equation*}
\sum_k \kappa_k(t) x^{y_k}(y_k' - y_k)\cdot \ln(x) < 0, \quad \text{for all } t \ge 0.
\end{equation*}
\item[C2:] There exists a sequence of points $x_n \in D$ for which $\lim_{n \to \infty} x_{n,i} \in\{0, \infty\}$ for at least one $i$ and
\begin{enumerate}[$(i)$]
\item $\mathcal C$ is partitioned along $x_n$ with tiers
$\{T_i\}_{i=1}^P$, and constant $C$, and
\item $T_1$ consists of a union of linkage classes.
\end{enumerate}
\end{enumerate}
\label{lem:main}
\end{lemma}
\begin{proof}
We suppose condition $C1$ does not hold, and will conclude that condition $C2$ must then hold. Because condition $C1$ does not hold, there is a sequence of points $x_n \in D$ and times $t_n\ge 0$ for which $\lim_{n\to \infty} x_{n,i} \in\{0,\infty\}$ for at least one $i$ and
\begin{equation}
\sum_k \kappa_k(t_n) x_n^{y_k}(y_k' - y_k)\cdot \ln(x_n)\ge 0.
\label{eq:bound1}
\end{equation}
Applying Lemma \ref{lem:partition}, we partition the complexes along an
appropriate subsequence of $\{x_n\}$ with tiers $T_i$, $i = 1,\dots,P$, and constant $C>1$. Note that this also has the effect of restricting to the analogous subsequence of $\{t_n\}$.
In the following, for tier $i\in \{1,\dots,P\}$, we denote by
\begin{itemize}
\item $\{i \to i\}$ all reactions with both source and product complex in $T_i$,
\item $\{i \to i + m\}$ all reactions with source complex in $T_i$ and product complex in $T_{i+m}$ for $m \ge 1$,
\item $\{i \to i - m\}$ all reactions with source complex in $T_i$ and product complex in $T_{i-m}$ for $m \ge 1$.
\end{itemize}
Defining $u/v \overset{\text{\tiny def}}{=} (u_1/v_1,\dots, u_N/v_N)$ for $u,v \in \mathbb R^N_{>0}$, we rewrite the left hand side of inequality \eqref{eq:bound1} as
\begin{align}
\sum_k \kappa_k(t_n) x_{n}^{y_k}(y_k' - y_k)\cdot \ln (x_n)
&= \sum_{i = 1}^P\bigg[ \sum_{\{ i \to i \}} \kappa_k(t_n)
x_{n}^{y_k} \ln \left( \frac{x_n^{y_k'}} {x_n^{y_k}}\right) \label{eq:1} \\
&\hspace{.2in}+ \sum_{m=1}^{P-i} \sum_{\{ i \to i + m\}} \kappa_k(t_n)
x_{n}^{y_k} \ln \left( \frac{x_n^{y_k'}} {x_n^{y_k}}\right)\label{eq:2} \\
&\hspace{.2in} + \sum_{m=1}^{i-1} \sum_{\{i \to i - m\}} \kappa_k(t_n)
x_{n}^{y_k}\ln \left( \frac{x_n^{y_k'}} {x_n^{y_k}}\right)\bigg].\notag
\end{align}
Note that, by construction, for large enough $n$ each term in \eqref{eq:2} is negative and, in fact, $\ln(x_n^{y_k'}/x_n^{y_k}) \to -\infty$ as $n \to \infty$ for these terms. The proof that the full sum above must also, for large enough $n$, be strictly negative unless condition $C2$ holds is identical to the analogous portion of the proof of Lemma 4.7 in \cite{Anderson2011}, and is omitted here.
\end{proof}
\begin{lemma}
Let $\{\mathcal S,\mathcal C,\mathcal R\}$, with $\mathcal S = \{S_1,\dots, S_N\}$, be a single linkage class chemical reaction network.
Then, there does not exist a sequence of points $x_n\in \mathbb R^N_{>0}$, all in the same stoichiometric compatibility class, for which $\lim_{n \to \infty} x_{n,i} \in\{0, \infty\}$ for at least one $i$ and
\begin{enumerate}[$(i)$]
\item $\mathcal C$ is partitioned along $x_n$ with tiers
$\{T_i\}_{i=1}^P$, and constant $C$, and
\item $T_1$ consists of a union of linkage classes.
\end{enumerate}
\label{lem:no_union}
\end{lemma}
\begin{proof}
Note that in the one linkage class case $T_1$ can only consist of a union of linkage classes if $T_1 \equiv \mathcal C$. We suppose, in order to derive a contradiction, that there is a sequence $\{x_n\}$, all in the same stoichiometric compatibility class, for which $\lim_{n \to \infty} x_{n,i} \in\{0, \infty\}$ for at least one $i$ and
\begin{enumerate}[$(i)$]
\item $\mathcal C$ is partitioned along $x_n$ with tiers
$\{T_i\}_{i=1}^P$, and constant $C$, and
\item $T_1$ consists of a union of linkage classes.
\end{enumerate}
Perhaps after restricting ourselves to a sub-sequence, we may choose $x_n$ to be partially monotonic (recall Definition \ref{def:partial_mon}). Let
\begin{align*}
U &= \{i \in \{1,\dots,N\} \, : \, \lim_{n\to \infty} x_{n,i} = 0\}\\
V &= \{j \in \{1,\dots,N\} \, : \, \lim_{n\to \infty} x_{n,j} = \infty\}.
\end{align*}
Note that $U\cup V$ is nonempty by construction. By Theorem \ref{thm:conservation} there is a conservation relation $w \in \mathbb R^N$ that
respects the triple $(U,V,\{T_i\})$.
For each $j \in V$, $w_j < 0$ and $x_{n,j}\to \infty$. Thus, if $V$ is nonempty, $w\cdot x_n \to -\infty$ as $n\to \infty$. If $V$ is empty, then $U$ is necessarily nonempty and, by construction, $w\cdot x_n \to 0$ as $n \to \infty$. However, because $T_1 \equiv \mathcal C$, we have that $w \cdot (y_k' - y_k) = 0$ for all $y_k \to y_k' \in \mathcal R$. Thus, as the $x_n$ all lie in the same stoichiometric compatibility class, $w\cdot x_n$ is constant in $n$. This shows that we cannot have $w\cdot x_n \to -\infty$, and so $V$ must be empty. But then, by our construction, $w_i \ge 0$ for all $i$ with $w_i>0$ for at least one $i$, so that $w\cdot x_n$ is a positive constant, contradicting $w\cdot x_n \to 0$. This completes the proof.
\end{proof}
We now have our main result.
\begin{theorem}
Let $\{\mathcal S,\mathcal C,\mathcal R,\mathcal K(t)\}$, with $\mathcal S = \{S_1,\dots, S_N\}$, be a single linkage class, weakly reversible,
non-autonomous mass-action system with bounded kinetics. Then, $\limsup_{t\to \infty} |\phi(t,x_0)| < \infty$ for each $x_0 \in \mathbb R^N_{>0}$. That is, the system has {\em bounded trajectories.}
\label{thm:main1}
\end{theorem}
\begin{proof}
Taking $D$ in the statement of Lemma \ref{lem:main} to be a non-empty positive stoichiometric compatibility class, we conclude by combining Lemmas \ref{lem:main} and \ref{lem:no_union} that there is an $M>0$ so that for any $x\in D$ with $|x|>M$, we have
\begin{equation*}
\sum_k \kappa_k(t) x^{y_k}(y_k' - y_k)\cdot \ln(x) < 0, \quad \text{for all } t\ge 0.
\end{equation*}
Therefore,
\begin{equation}
\frac{\partial}{\partial t} V_1(\phi(t,x_0)) < 0,
\label{eq:good_bound}
\end{equation}
whenever $|\phi(t,x_0)|>M$. Let $B_{x_0} = \sup\{V_1(x)\ : \ |x| = M\text{ or } x = x_0\}$.
Inequality \eqref{eq:good_bound} shows that $V_1(\phi(t,x_0)) \le B_{x_0}$ for all $t\ge 0$, which when combined with the fact that $V_1(x) \to \infty$ as $|x| \to \infty$,
proves the result.
\end{proof}
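For intuition only, here is a minimal Euler simulation (our own illustration) of the theorem's conclusion for the weakly reversible, single linkage class network $A \rightleftharpoons 2A$; the time-varying rates $\kappa_1(t) \in [1,2]$ and $\kappa_2 \equiv 1$ are arbitrary bounded choices. This network admits no conservation relation, yet its trajectories remain bounded:

```python
import math

# Mass-action ODE for the single linkage class network A -> 2A, 2A -> A:
#   x'(t) = kappa1(t) * x - kappa2(t) * x^2,
# with bounded kinetics kappa1(t) in [1, 2] and kappa2(t) = 1.
kappa1 = lambda t: 1.5 + 0.5 * math.sin(t)
kappa2 = lambda t: 1.0

def simulate(x0, T=20.0, dt=0.01):
    # Forward Euler; returns the final state and the running maximum.
    x, t, xmax = x0, 0.0, x0
    while t < T:
        x += dt * (kappa1(t) * x - kappa2(t) * x * x)
        t += dt
        xmax = max(xmax, x)
    return x, xmax

x_final, x_max = simulate(5.0)
assert x_max <= 5.0          # the trajectory never exceeds its start here
assert 0.0 < x_final < 3.0   # and settles into a bounded band
```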
\subsection{Permanence}
\label{sec:permanence}
Note that Lemma \ref{lem:main} and, in particular, equation \eqref{eq:good_bound} do not give a rate at which $V_1$ is decreasing. Hence, we cannot conclude in general that all trajectories contained within a given stoichiometric compatibility class enter a single compact subset of $\mathbb R^N_{\ge 0}$. That is, we cannot conclude that trajectories are permanent in the sense of Definition \ref{def:permanent} below. This is quantified above by the explicit dependence of $B_{x_0}$ upon $x_0$. However, we may strengthen our results slightly.
\begin{definition}
For $t\ge 0$, let $\phi(t,x_0)$ denote a trajectory of a dynamical system in $\mathbb R^N$ with initial condition $x_0$. The system is said to be {\em permanent} if there is a $\rho > 0$ such that for every $x_0\in \mathbb R^N_{\ge 0}$,
\begin{equation*}
\rho < \liminf_{t\to \infty} \phi_i(t,x_0) \le \limsup_{t\to \infty} \phi_i(t,x_0) < 1/\rho
\end{equation*}
for all $i \in \{1,\dots,N\}$.
\label{def:permanent}
\end{definition}
\begin{lemma}
Let $\{\mathcal S,\mathcal C,\mathcal R,\mathcal K(t)\}$, with $\mathcal S = \{S_1,\dots, S_N\}$, be a weakly reversible,
non-autonomous mass-action system with bounded kinetics. Let $D \subset \mathbb R^N_{>0}$ be such that {\em dist}$(D,\partial \mathbb R^N_{>0}) >\delta$, for some $\delta>0$.
One of the following two conditions holds:
\begin{enumerate}
\item[C1:] For any $\epsilon > 0$, there exists an $M=M_{\epsilon,\delta}>0$ such that for any $x \in D$ with $|x|>M$, we have
\begin{equation*}
\sum_k \kappa_k(t) x^{y_k}(y_k' - y_k)\cdot \ln(x) < - \epsilon, \quad \text{for all } t \ge 0.
\end{equation*}
\item[C2:] There exists a sequence of points $x_n \in D$ that satisfies $\lim_{n \to \infty} |x_n| = \infty$ and
\begin{enumerate}[$(i)$]
\item $\mathcal C$ is partitioned along $x_n$ with tiers
$\{T_i\}_{i=1}^P$, and constant $C$, and
\item $T_1$ consists of a union of linkage classes.
\end{enumerate}
\end{enumerate}
\label{lem:main2}
\end{lemma}
\begin{proof}
The proof is essentially the same as that of Lemma \ref{lem:main}. Suppose condition $C1$ does not hold. Then there is an $\epsilon > 0$, a sequence of points $x_n \in D$, and times $t_n\ge 0$ such that $\lim_{n\to \infty}|x_n| = \infty$ and
\begin{equation*}
\sum_k \kappa_k(t_n) x_n^{y_k}(y_k' - y_k)\cdot \ln(x_n) \ge -\epsilon.
\end{equation*}
The proof now proceeds exactly as for Lemma \ref{lem:main}, except that one notes that for any $y\in T_1$ we necessarily have $x_n^y \to \infty$ as $n \to \infty$. Therefore, the terms in the summation \eqref{eq:2} not only dominate the others, but force the expression to $-\infty$ as $n\to \infty$, thereby concluding the proof.
\end{proof}
\begin{corollary}
Let $\{\mathcal S,\mathcal C,\mathcal R,\mathcal K(t)\}$, with $\mathcal S = \{S_1,\dots, S_N\}$, be a single linkage class, weakly reversible,
non-autonomous mass-action system with bounded kinetics. Let $P$ be a positive stoichiometric compatibility class and suppose there is a $\delta>0$ so that
\begin{equation*}
\liminf_{t \to \infty}\phi_i(t,x_0) > \delta, \quad \text{for all } i \in \{1,\dots,N\} \text{ and all } x_0 \in P.
\end{equation*}
Then, there is a $\rho > 0$ such that for any $x_0 \in P$
\begin{equation*}
\rho < \liminf_{t\to \infty} \phi_i(t,x_0) \le \limsup_{t\to \infty} \phi_i(t,x_0) < 1/\rho
\end{equation*}
for all $i \in \{1,\dots,N\}$. That is, the system is {\em permanent}.
\label{cor:2}
\end{corollary}
\begin{proof}
The lower bound follows from our assumption. The upper bound follows from Lemma \ref{lem:main2} (with $D$ equal to $P$ restricted to those $x$ a distance of at least $\delta$ from the boundary), Lemma \ref{lem:no_union}, and arguments similar to those in the proof of Theorem \ref{thm:main1}. The only real difference in the proof is that the analog of equation \eqref{eq:good_bound} is
\begin{equation*}
\frac{\partial}{\partial t} V_1(\phi(t,x_0)) < -\epsilon,
\end{equation*}
whenever $|\phi(t,x_0)|>M$, which provides the uniform rate of decrease needed to guarantee that $|\phi(t,x_0)|$ eventually falls below some bound $1/\rho$ independent of $x_0$.
\end{proof}
Note that the $M=M_{\epsilon,\delta}>0$ of Lemma \ref{lem:main2}, and hence of the proof of Corollary \ref{cor:2}, depends explicitly upon $\delta$. Therefore, it is not sufficient in the statement of Corollary \ref{cor:2} to assume the existence of a different $\delta = \delta_{x_0}>0$ for \textit{each} $x_0$. Thus, the main results of \cite{Anderson2011} pertaining to weakly reversible networks (with arbitrary deficiency) are not strong enough to guarantee permanence using the above methods.
\bibliographystyle{amsplain}
"timestamp": "2011-06-20T02:00:30",
"yymm": "1104",
"arxiv_id": "1104.4992",
"language": "en",
"url": "https://arxiv.org/abs/1104.4992",
"abstract": "This paper is concerned with the dynamical properties of deterministically modeled chemical reaction systems with mass-action kinetics. Such models are ubiquitously found in chemistry, population biology, and the burgeoning field of systems biology. A basic question, whose answer remains largely unknown, is the following: for which network structures do trajectories of mass-action systems remain bounded in time? In this paper, we conjecture that the result holds when the reaction network is weakly reversible, and prove this conjecture in the case when the reaction network consists of a single linkage class, or connected component.",
"subjects": "Dynamical Systems (math.DS); Molecular Networks (q-bio.MN)",
"title": "Boundedness of trajectories for weakly reversible, single linkage class reaction systems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9688561676667173,
"lm_q2_score": 0.7310585786300049,
"lm_q1q2_score": 0.7082906128313441
} |
\section{Bias of reconstruction algorithms}
Due to the ill-posedness of our observation model and without any assumptions on $u_0$,
one cannot ensure that the noise variance is reduced while
keeping the solution $\uf^\star$ unbiased.
Recall that the statistical bias is defined as the difference
\begin{equation}
\text{Statistical bias} = \EE[\uf^\star] - u_0~.
\end{equation}
An estimator is said to be unbiased when its statistical bias vanishes.
Unfortunately, the statistical bias is difficult to manipulate when
$f \mapsto \uf^\star$ is nonlinear.
We therefore restrict ourselves to a definition of bias at $f_0 = \Phi u_0$ as the error
$\ufO^\star - u_0$.
Note that when $f \mapsto \uf^\star$ is affine, both definitions match (the expectation being linear).
Most methods are biased since, without assumptions, $u_0$ cannot be guaranteed
to be in complete accordance with
the model subspace, i.e., $u_0 \notin \MmfO^\star$.
It is then important to distinguish techniques that are biased only
due to a problem of modeling from the ones that are biased
due to the method. We then define the model bias and the method bias as
the quantities
the quantities
\begin{equation}
\ufO^\star - u_0
=
\underbrace{\ufO^\star - \Pi_{\MmfO^\star}(u_0)}_{\text{Method bias}}
-
\underbrace{\Pi_{(\MmfO^\star)^\bot}(u_0)}_{\text{Model bias}}~,
\end{equation}
where for any set $S$, $\Pi_S$ denotes the orthogonal projection on $S$
and $S^\bot$ denotes its orthogonal set.
We now define a methodically unbiased estimator as follows.
\begin{defn}
An estimator $\uf^\star$ is \underline{\smash{methodically unbiased}} if
\begin{equation*}
\forall u_0 \in \RR^N, \quad
\ufO^\star = \Pi_{\MmfO^\star}(u_0)
\end{equation*}
\end{defn}
We also define the weaker concept of weakly unbiased estimator as follows.
\begin{defn}
An estimator $\uf^\star$ is \underline{\smash{weakly unbiased}} if
\begin{equation*}
\forall u_0 \in \MmfO^\star, \quad
\ufO^\star = u_0.
\end{equation*}
The quantity $\ufO^\star - u_0$ for $u_0 \in \MmfO^\star$ is
called the weak bias of $\uf^\star$ at $u_0$.
\end{defn}
Note that a methodically unbiased estimator is also weakly unbiased.
\paragraph{Examples.} The unconstrained least-square estimator is
methodically unbiased since
$\ufO^\LS \!=\! \Phi^+ f_0 \!=\! \Phi^+ \Phi u_0 \!=\! \Pi_{\Ima[\Phi^t]}(u_0) \!=\! \Pi_{\MmfO^\star}(u_0)$.
Moreover, being linear,
it becomes statistically unbiased
whenever $\Phi$ has full column rank since $\Phi^+ \Phi \!=\! \Id$.
However, the constrained least-square estimator is only weakly unbiased:
its methodical bias only vanishes
when $u_0 \!\in\! \MmfO^\CLS$, i.e.,
when there exists $t_0 \in \RR^Q$ such that $u_0 \!=\! b + A (\Phi A)^t t_0$.
The hard thresholding is also methodically unbiased,
since $\ufO^\star$ is
the orthogonal projection on $\MmfO^\star \!=\! \Ima[\Id_{\Ii_{f_0}}]$.
Unlike the unconstrained least-square estimator,
Tikhonov regularization has a nonzero weak bias.
The soft thresholding is also known to be biased \cite{fan2001variable}
and its weak bias is given by $-\lambda \Id_{\Ii_{f_0}} \sign(f_0)_{\Ii_{f_0}}$.
Often, estimators are said to be unbiased when
they are actually only weakly unbiased.
\section{Conclusion}
We have introduced in this paper a mathematical definition of debiasing which has led
to an effective debiasing technique that can remove
the method bias that does not arise from the unavoidable choice of the model.
This debiasing technique simply consists in applying a least-square estimation
constrained to the model subspace chosen implicitly by the
original biased algorithm.
Numerical experiments have demonstrated the efficiency of our technique in retrieving
the correct intensities while respecting the structure
of the original model subspace.
Our technique is nevertheless limited to locally affine estimators.
Isotropic total variation, structured sparsity or nonlocal-means with smooth kernels
are not yet handled by our debiasing technique, and left for future work.
\section{Definitions of debiasing}
Given an estimate $\uf^\star$ of $u_0$,
we define a debiasing of $\uf^\star$ as follows.
\begin{defn}\label{eq:def_debiasing}
An estimator $\duf^\star$ of $u_0$ is a \underline{\smash{weak debiasing}} of $\uf^\star$
if it is weakly unbiased and $\dMmf^\star = \Mmf^\star$ for almost all $f$,
with $\dMmf^\star$ the model subspace of $\duf^\star$ at $f$.
Moreover, it is a \underline{\smash{methodical debiasing}} if it is also methodically
unbiased.
\end{defn}
\paragraph{Examples.}
The unconstrained least square estimator is a methodical debiasing of
the Tikhonov regularization,
since it is a methodically unbiased estimator of $u_0$
and they share the same model subspace.
The hard thresholding is a methodical debiasing of
the soft thresholding, for the same reasons.
\medskip
A good candidate for debiasing $\uf^\star$ is the constrained least squares
on $\Mmf^\star$:
\changemargin{-1pt}{-4pt}{%
\begin{equation}\label{eq:debiasing}
\duf^\star
= \uf^\star + \Uf^\star (\Phi \Uf^\star)^+ (f - \Phi \uf^\star)
\in \uargmin{u \in \Mmf^\star} \norm{\Phi u - f}^2
\end{equation}}%
where $\Uf^\star \!\in \! \RR^{N \times n}$ with $n \!=\! \rank[\Jf^\star]$
is a matrix whose columns form a basis of $\Ima[\Jf^\star]$.
Let $V_f^\star \!\in \! \RR^{n \times P}$ be a matrix such that $J_f^\star = \Uf^\star V_f^\star$.
The following theorem shows that under mild assumptions this choice corresponds to
a debiasing of $\uf^\star$.
\begin{thm}\label{lem:tilde_unbiased}
Assume that $f \mapsto \uf^\star$ is locally affine for almost all $f$
and that $\Phi$ is invertible on $\Mmf^\star$.
Then $\duf^\star$ defined in Eq.~\eqref{eq:debiasing}
is a weak debiasing of $\uf^\star$.
\end{thm}
\begin{proof}
Since $f \mapsto \uf^\star$ is locally affine,
$f \mapsto \Uf^\star$ can be chosen locally constant.
Differentiating \eqref{eq:debiasing} for almost all $f$
leads to the Jacobian $\dJf^\star$ of $\duf^\star$ given by
\changemargin{-4pt}{0pt}{%
\begin{eqnarray}
\dJf^\star
&=& \pd{\duf^\star}{f}
= \pd{\uf^\star}{f} + \Uf^\star (\Phi \Uf^\star)^+ \left(\pd{f}{f} - \Phi \pd{\uf^\star}{f}\right)
= \Jf^\star + \Uf^\star (\Phi \Uf^\star)^+ (\Id - \Phi \Jf^\star) \nonumber\\
&=& \Uf^\star V_f^\star + \Uf^\star (\Phi \Uf^\star)^+ (\Id - \Phi \Uf^\star V_f^\star)
= \Uf^\star (\Phi \Uf^\star)^+
~,
\end{eqnarray}}%
since $\Phi \Uf^\star$ has full column rank due to the assumption that
$\Phi$ is invertible on $\Mmf^\star$%
\ifthenelse{\boolean{arxiv}}{ (see Appendix \ref{sec:details_proof1}).}{.}
It follows that
\changemargin{-2pt}{-2pt}{%
\begin{eqnarray}
\dMmf^\star
&=& \duf^\star + \Ima[\dJf^\star]
= \uf^\star + \Uf^\star (\Phi \Uf^\star)^+ (f - \Phi \uf^\star) +
\Ima[\Uf^\star (\Phi \Uf^\star)^+]\\
&=& \uf^\star + \Ima[\Uf^\star (\Phi \Uf^\star)^+]
= \uf^\star + \Ima[\Uf^\star] = \Mmf^\star~,
\end{eqnarray}}%
since $\Phi \Uf^\star$ has full column rank.
Moreover, for any $u_0 \!\in\! \MmfO^\star$,
the equation $\Phi u \!=\! f_0$ has a unique solution $u \!=\! u_0$ in $\MmfO^\star$
since $\Phi$ is invertible on $\MmfO^\star$.
Hence, $\dufO^\star \!=\! u_0$ is the unique solution of \eqref{eq:debiasing},
which concludes the proof.
\hfill $\square$
\end{proof}
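As an illustration (ours), specialize Eq.~\eqref{eq:debiasing} to denoising ($\Phi = \Id$) with soft thresholding: there $\Uf^\star (\Phi \Uf^\star)^+$ is the orthogonal projection onto the support, so the debiased estimator keeps $f$ on the support and $0$ elsewhere, i.e., it coincides with hard thresholding:

```python
import random

random.seed(0)
lam = 1.0
f = [random.uniform(-3.0, 3.0) for _ in range(8)]

def soft(f, lam):
    # Soft thresholding: the biased estimator.
    return [(1.0 if fi > 0 else -1.0) * max(abs(fi) - lam, 0.0) for fi in f]

def debias(u, f):
    # Eq. (debiasing) with Phi = Id: project f onto the model subspace
    # Im[Id_I], where I is the support of the biased estimate u.
    return [fi if ui != 0.0 else 0.0 for ui, fi in zip(u, f)]

u_soft = soft(f, lam)
u_deb = debias(u_soft, f)
u_hard = [fi if abs(fi) > lam else 0.0 for fi in f]  # hard thresholding
assert u_deb == u_hard
```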
The next proposition shows that the condition
``$\Phi$ invertible on $\Mmf^\star$'' can be dropped
when looking at $u_f^\star$ and $\duf^\star$ through $\Phi$.
The debiasing becomes furthermore methodical.
\begin{prop}
Assume $f \mapsto \uf^\star$ is locally affine for almost all $f$.
Taking $\tilde{u}_f^\star$ defined in Eq.~\eqref{eq:debiasing},
then the predictor $\Phi \tilde{u}_f^\star$ of $f_0 \!=\! \Phi u_0$
is equal to $\Pi_{\Phi \Mm_f^\star}(f)$
and is a methodical debiasing of $\Phi u_f^\star$.
\end{prop}
\begin{proof}
Since
$\Phi \Uf^\star (\Phi \Uf^\star)^+\!=\!\Pi_{\Ima[\Phi U_f^\star]}$,
we have
$\Phi \tilde{u}^\star_f \!=\! \Pi_{\Phi \Mm_f^\star}(f)$.
Being the orthogonal projection onto its own model subspace,
it is methodically unbiased.
Moreover $\Ima[\Pi_{\Ima[\Phi U_f^\star]}] \!=\! \Ima[\Phi U_f^\star]$, hence
$\Phi \tilde{u}_f^\star$ and $\Phi u^\star_f$ share the same model subspace.
\hfill $\square$
\end{proof}
\begin{rem}
As an immediate consequence, the debiasing
of any locally affine denoising algorithm is a methodical debiasing,
since $\Phi \tilde{u}_f^\star = \tilde{u}_f^\star$.
\end{rem}
\bigskip
We focus in the next sections on the debiasing of
estimators without explicit expression for $\Mmf^\star$,
meaning that Eq.~\eqref{eq:debiasing} cannot be used directly.
We first introduce an algorithm for the case of $\ell_1$ analysis
relying on the computation of the directional derivative $J^\star_f f$.
We propose next a general approach,
applied to an affine nonlocal estimator,
that requires $J^\star_f \delta$
for randomized directions $\delta$.
\section{Introduction}
Restoration of an image of interest from its single noisy degraded observation
necessarily requires imposing some regularity or {\it prior} on the solution.
Being often only crude approximations of the true underlying signal of interest,
such techniques always introduce a bias towards the {\it prior}.
However, in general, this is not the only source of bias.
In many cases, even if the model were perfectly accurate,
the method would remain biased.
This part of the bias often emerges from technical reasons,
e.g., when approaching an NP-hard problem by an easier one
(typically, using the $\ell_1$ convex relaxation of an $\ell_0$ pseudo-norm).
It is well known that reducing bias is not always favorable in terms
of mean square error because of the so-called bias-variance trade-off.
It is important to highlight that a debiasing procedure is expected to
re-inject part of the variance, thereby increasing the residual noise.
Hence, the mean square error is not always expected to be improved by such techniques.
Debiasing is nevertheless essential
in applications where the image intensities have a physical sense and critical decisions
are taken from their values. For instance, the authors of \cite{de2013extended} suggest using
image restoration techniques to estimate a temperature map within a tumor tissue
for real time automatic surgical intervention.
In such applications, it is crucial that the estimated temperature is not biased.
Residual noise is indeed preferable to
an uncontrolled bias.
We introduce a debiasing technique that suppresses the extra bias -- {\it the method bias} --
emerging from the choice of the method and leaves unchanged the bias that is
due to the unavoidable choice of the model
-- {\it the model bias}.
To that end, we rely on
the notion of model subspace
essential to carefully define different notions of bias.
This leads to a mathematical definition of
debiasing for any locally affine
estimator that satisfies some mild assumptions.
Interestingly, our debiasing definition for the
$\ell_1$ synthesis
(also known as LASSO \cite{tibshirani1996regression} or Basis Pursuit \cite{Chen_Donoho_Saunders98})
recovers a well known
debiasing scheme called refitting that goes back to the ``Hybrid LASSO'' \cite{efron2004least}
(see \cite{Lederer13} for more details).
For the $\ell_1$ analysis \cite{elad2007analysis},
including the $\ell_1$ synthesis but also the anisotropic total-variation
\cite{rudin1992nonlinear},
we show that debiasing can be performed with the same complexity as
the primal-dual algorithm of \cite{CP} producing the biased estimate.
In other cases, e.g.,
for an affine version of the popular nonlocal-means
\cite{buades2005nlmeans},
we introduce an iterative scheme that requires
only a few runs of an algorithm of the same complexity as
the original one producing the biased estimate.
\section{Background}
We consider observing $f = f_0 + w \in \RR^P$ a corrupted linear observation
of an unknown signal $u_0 \in \RR^N$ such that $f_0 = \Phi u_0$
where $\Phi \in \RR^{P \times N}$ is a linear operator
and $w$ is a random vector modeling the noise fluctuations. We assume that $\EE[w] = 0$
where $\EE$ is the expectation operator.
The linear operator $\Phi$ is a degrading operator typically
with $P\leq N$ and with
a non-empty kernel encoding some information loss
such that the problem becomes ill-posed.
We focus on estimating the unknown signal $u_0$.
Due to the ill-posedness of the observation model,
we consider variational approaches that attempt to recover $u_0$ from
the single observation $f$ as a solution of the optimization problem
\changemargin{-2pt}{-2pt}{%
\begin{equation}\label{eq:variationnal_model}
\uf^\star \in \uargmin{u \in \RR^N} E(u, f)~.
\end{equation}}%
Here $E : \RR^N \times \RR^P \to \RR$ is assumed
to have at least one minimum.
The objective $E$ is typically chosen to promote some
structure, e.g., smoothness, piece-wise constantness, sparsity, etc.,
that is captured by the so-called {\it model subspace} $\Mmf^*$.
Provided $\uf^\star$ is uniquely defined and differentiable at $f$,
we define $\Mmf^\star \subseteq{\RR^N}$ as the tangent affine subspace at $f$ of
the mapping $f \mapsto \uf^\star$, i.e.,
\changemargin{-6pt}{-3pt}{%
\begin{equation}
\Mmf^\star = \uf^\star + \Ima[ \Jf^\star ] = \enscond{u \!\in\! \RR^N}{\exists z \!\in\! \RR^P, u = \uf^\star + \Jf^\star z}
\;\;\text{with}\;\;
\Jf^\star = \left.\frac{\partial \uf^\star}{\partial f}\right|_f
\end{equation}}%
where $\Jf^\star$ is the Jacobian operator at $f$ of the mapping $f \!\mapsto\! \uf^\star$
(see \cite{vaiter2014model} for an alternative but related definition
of model subspace).
When $\uf^\star \!\in\! \Ima[ \Jf^\star ]$, the model subspace
restricts to the linear vector subspace $\Mmf^\star \!=\! \Ima[ \Jf^\star ]$.
In the rest of the paper, $\uf^\star$ is assumed to be
differentiable at $f_0$ and for almost all $f$.
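To make the model subspace concrete, the following sketch (ours; it anticipates the soft-thresholding estimator recalled later, $u(f)_i = \sign(f_i)\max(|f_i| - \lambda, 0)$ with $\Phi = \Id$) estimates the Jacobian $\Jf^\star$ by central finite differences at a generic $f$ and checks that $\Ima[\Jf^\star]$ is spanned by the canonical vectors indexed by the support:

```python
# Numerically estimate the Jacobian J_f of the soft-thresholding map
# at a generic f (away from the kinks |f_i| = lam), and check that
# Im[J_f] is spanned by the canonical vectors indexed by the support.
lam = 1.0
f = [2.0, -0.3, 1.4, 0.2, -2.5]

def soft(f):
    return [(1.0 if fi > 0 else -1.0) * max(abs(fi) - lam, 0.0) for fi in f]

h = 1e-6
n = len(f)
J = [[0.0] * n for _ in range(n)]
for j in range(n):
    fp, fm = list(f), list(f)
    fp[j] += h
    fm[j] -= h
    up, um = soft(fp), soft(fm)
    for i in range(n):
        J[i][j] = (up[i] - um[i]) / (2.0 * h)

support = [i for i, fi in enumerate(f) if abs(fi) > lam]
# J is diagonal, with 1 on the support and 0 elsewhere.
for i in range(n):
    for j in range(n):
        expected = 1.0 if (i == j and i in support) else 0.0
        assert abs(J[i][j] - expected) < 1e-6
```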
\begin{exmp}
The least square estimator
constrained to the affine subspace $C = b + \Ima[A]$,
$b \in \RR^N$ and $A \in \RR^{N \times Q}$,
is a particular instance of \eqref{eq:variationnal_model} where
\changemargin{-2pt}{-1pt}{%
\begin{equation}\label{eq:constrained_least_square_energy}
E(u, f) = \norm{\Phi u - f}^2 + \iota_C(u)
\end{equation}}%
and for any set $C$, $\iota_C$ is its indicator function:
$\iota_C(u) = 0$ if $u \in C$, $+\infty$ otherwise.
The solution of minimum Euclidean norm is unique and given by
\changemargin{-1pt}{-4pt}{%
\begin{equation}\label{eq:contrained_least_square}
\uf^\CLS = b + A (\Phi A)^+ (f - \Phi b)
\end{equation}}%
where for a matrix $M$, $M^+$ is its Moore-Penrose pseudo-inverse.
The affine constrained least square
restricts the solution $\uf^\CLS$ to the
affine model subspace $\Mmf^\CLS \!=\! b + \Ima[ A (\Phi A)^t ]$
(as $\Ima[M^+] \!=\! \Ima[M^t]$).
Taking $C \!=\! \RR^N$ with for instance $Q\!=\!N$, $A = \Id$ and $b = 0$, leads to an
unconstrained solution
$
\uf^\LS \!=\! \Phi^+ f
$
whose model subspace is $\Mmf^\LS \!=\! \Ima[ \Phi^t ]$ reducing to $\RR^N$ when $\Phi$ has full column rank.
\end{exmp}
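The methodical unbiasedness of the unconstrained least squares, $\uf^\LS = \Phi^+ f$, can be checked numerically. In this sketch (ours), the dimensions $P=3$, $N=5$ and the Gaussian $\Phi$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 3, 5                          # under-determined: Phi in R^{P x N}
Phi = rng.standard_normal((P, N))
u0 = rng.standard_normal(N)
f0 = Phi @ u0                        # noiseless observation

# Unconstrained least squares at f0 ...
u_ls = np.linalg.pinv(Phi) @ f0
# ... equals the orthogonal projection of u0 onto Im[Phi^t]:
Pi = Phi.T @ np.linalg.pinv(Phi.T)   # projector onto Im[Phi^t]
assert np.allclose(u_ls, Pi @ u0)
# So the method bias vanishes; only the model bias remains, and the
# prediction is exact on the observed part:
assert np.allclose(Phi @ u_ls, f0)
```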
\begin{exmp}
The Tikhonov regularization (or Ridge regression)
\cite{tikhonov43,hoerl1970ridge}
is another instance of \eqref{eq:variationnal_model} where,
for some parameter $\lambda > 0$ and matrix $\Gamma \in \RR^{L \times N}$,
\begin{equation}\label{eq:tikhonov}
E(u, f) = \frac{1}{2} \norm{\Phi u - f}^2 + \frac{\lambda}{2} \norm{\Gamma u}^2~.
\end{equation}
Provided $\Ker \Phi \cap \Ker \Gamma \!=\! \{ 0 \}$,
$\uf^\Tik$ is uniquely defined as
$
\uf^\Tik = (\Phi^t \Phi + \lambda \Gamma^t \Gamma)^{-1} \Phi^t f
$
which has a linear model subspace given by $\Mmf^\Tik = \Ima[ \Phi^t ]$.
\end{exmp}
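By contrast, Tikhonov regularization carries a nonzero weak bias even on its model subspace. A minimal sketch (ours), taking $\Phi = \Gamma = \Id$ so that $\uf^\Tik = f/(1+\lambda)$ in closed form:

```python
lam = 0.5
u0 = [1.0, -2.0, 3.0]
f0 = list(u0)                               # f0 = Phi u0 with Phi = Id
# Closed form of the Tikhonov solution when Phi = Gamma = Id:
u_tik = [fi / (1.0 + lam) for fi in f0]
weak_bias = [ui - vi for ui, vi in zip(u_tik, u0)]
assert any(abs(b) > 1e-12 for b in weak_bias)   # non-zero weak bias
# Its methodical debiasing, the unconstrained least squares, returns u0:
u_deb = list(f0)                            # Phi^+ f0 = f0 when Phi = Id
assert u_deb == u0
```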
\begin{exmp}
The hard thresholding \cite{donoho1994ideal},
used when $\Phi = \Id$ and $f_0$ is supposed to be sparse,
is a solution of \eqref{eq:variationnal_model} where,
for some parameter $\lambda > 0$,
\begin{equation}
E(u, f) = \frac{1}{2} \norm{u - f}^2 + \frac{\lambda^2}{2} \norm{u}_0~,
\end{equation}
where $\norm{u}_0 = \#\enscond{i \in [P]}{u_i \ne 0}$ counts the number of non-zero entries of $u$
and $[P] = \{1, \ldots, P\}$.
The hard thresholding operation writes
\begin{equation}
(\uf^\HT)_{\Ii_f} = f_{\Ii_f} \qandq (\uf^\HT)_{\Ii_f^c} = 0
\end{equation}
where $\Ii_f \!=\! \enscond{i \!\in\! [P]}{|f_i| \!>\! \lambda}$ is
the support of $\uf^\HT$,
$\Ii_f^c$ is the complement of $\Ii_f$ on $[P]$,
and for any vector $v$,
$v_{\Ii_f}$ is the sub-vector whose elements are indexed by $\Ii_f$.
As $\uf^\HT$ is piece-wise differentiable,
its model subspace is only defined for almost all $f$ as
$\Mmf^\HT \!=\! \smallenscond{u \!\in\! \RR^N}{u_{\Ii^c_f} \!=\! 0} \!=\! \Ima[\Id_{\Ii_f}]$,
where for any matrix $M$,
$M_{\Ii_f}$ is the sub-matrix
whose columns are indexed by $\Ii_f$.
Note that $\Id_{\Ii_f} \!\in\! \RR^{N \times \# \Ii_f}$.%
\end{exmp}
\begin{exmp}
The soft thresholding \cite{donoho1994ideal},
used when $\Phi = \Id$ and $f_0$ is supposed to be sparse, is
another particular solution of \eqref{eq:variationnal_model} where
\begin{equation}\label{eq:ST}
E(u, f) = \frac{1}{2} \norm{u - f}^2 + \lambda \norm{u}_1~,
\end{equation}
with $\norm{u}_1 = \sum_i |u_i|$ the $\ell_1$ norm of $u$.
The soft thresholding operation writes
\begin{equation}
(\uf^\ST)_{\Ii_f} = f_{\Ii_f} - \lambda \sign(f_{\Ii_f}) \qandq (\uf^\ST)_{\Ii_f^c} = 0~,
\end{equation}
where $\Ii_f$ is defined as above, and,
as for the hard thresholding:
$\Mmf^\ST = \Ima[\Id_{\Ii_f}]$.
\end{exmp}
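A quick check (ours) of the weak-bias formula $-\lambda \Id_{\Ii_{f_0}} \sign(f_0)_{\Ii_{f_0}}$ for the soft thresholding, at an $f_0$ lying in its own model subspace:

```python
lam = 0.7
f0 = [2.0, 0.0, -1.5, 0.0]   # f0 lies in Im[Id_I] with I = {0, 2}

def soft(f, lam):
    return [(1.0 if fi > 0 else -1.0) * max(abs(fi) - lam, 0.0) for fi in f]

u = soft(f0, lam)
bias = [ui - fi for ui, fi in zip(u, f0)]
# Expected weak bias: -lam * sign(f0_i) on the support, 0 elsewhere.
expected = [-lam, 0.0, lam, 0.0]
assert all(abs(b - e) < 1e-12 for b, e in zip(bias, expected))
```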
\section{Debiasing the $\ell_1$ analysis minimization}
From now on, the dependency of all quantities
with respect to the observation $f$ will be dropped for the sake of simplicity.
Given a linear operator $\Gamma \!\in\! \RR^{L \times N}$,
the $\ell_1$ analysis minimization
reads, for $\lambda > 0$, as
\changemargin{-2pt}{-2pt}{%
\begin{equation}\label{eq:l1analysis}
E(u, f) = \frac{1}{2} \norm{\Phi u - f}^2 + \lambda \norm{\Gamma u}_1~.
\end{equation}}%
Provided $\Ker \Phi \cap \Ker \Gamma \!\!=\!\! \{ 0 \}$,
there exists a solution given implicitly, see \cite{vaiter2013local}, as
\changemargin{-2pt}{-2pt}{%
\begin{equation}\label{eq:l1analysis_solution}
u^\star = U(\Phi U)^+ f - \lambda U(U^t \Phi^t \Phi U)^{-1} U^t (\Gamma^t)_\Ii s_\Ii
\end{equation}}%
for almost all $f$ and
where $\Ii \!=\! \enscond{i}{(\Gamma u^\star)_i \ne 0} \!\subseteq\! [L] \!=\!\{1, \ldots, L\}$
is called the co-support of the solution,
$s \!=\! \sign(\Gamma u^\star)$,
$U\!=\!U_f^\star$ is a matrix whose columns form a basis of $\Ker[\llic{\Gamma}]$
and $\Phi U$ has full column rank.
Note that $s_\Ii$ and $U$ are locally constant almost everywhere since the co-support
is
stable
with respect to
small perturbations \cite{vaiter2013local}.
It then follows that the model subspace is implicitly
defined as $\Mm^\star \!=\! \Ima[U] \!=\! \Ker[\llic{\Gamma}]$,
and so, the $\ell_1$ analysis minimization
suffers from a
weak bias equal to
$-\lambda U(U^t \Phi^t \Phi U)^{-1} U^t (\Gamma^t)_\Ii s_\Ii$.
Given that $u^\star \in \Ima[U]$ and $f \mapsto u^\star$ is locally affine,
its weak debiased solution is defined for almost all $f$ as
\changemargin{-2pt}{-2pt}{%
\begin{equation}\label{eq:debiasing_al1}
\tilde{u}^\star
= U (\Phi U)^+ f~.
\end{equation}}%
\paragraph{The $\ell_1$ synthesis} \cite{tibshirani1996regression,donoho1994ideal} consists in
taking $\Gamma\!=\!\Id$, hence $U\!=\!\Id_\Ii$, so \eqref{eq:l1analysis_solution} becomes
\changemargin{-2pt}{-2pt}{%
\begin{equation}
u^\LO_{\Ii} = (\Phi_{\Ii})^+ f - \lambda ((\Phi_{\Ii})^t \Phi_{\Ii})^{-1} s_{\Ii}
\qandq
u^\LO_{(\Ii)^c} = 0~.
\end{equation}}%
Its model subspace is implicitly defined as
$\Mm^\LO \!=\! \Ima[\Id_{\Ii}]$,
its weak bias is $-\lambda \Id_{\Ii}((\Phi_\Ii)^t \Phi_{\Ii})^{-1} s_\Ii$
and its weak debiasing is $\tilde{u}^\star = \Id_\Ii (\Phi_\Ii)^+ f$.
Subsequently, taking $\Phi = \Id$ leads to the soft-thresholding presented earlier.
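To make the weak bias and its removal concrete in this synthesis case, here is a minimal numerical sketch with $\Phi = \Id$, so that the minimizer is the soft-thresholding (the function names are ours):

```python
import numpy as np

def soft_threshold(f, lam):
    # Closed-form l1 synthesis solution when Phi = Id:
    # componentwise shrinkage of f by lam.
    return np.sign(f) * np.maximum(np.abs(f) - lam, 0.0)

def debias(u_star, f):
    # Weak debiasing: least squares on the support of u_star,
    # i.e. u_tilde = Id_I (Phi_I)^+ f, which for Phi = Id simply
    # restores the data values on the support.
    u_tilde = np.zeros_like(f)
    support = u_star != 0
    u_tilde[support] = f[support]
    return u_tilde

f = np.array([3.0, -0.5, 1.2, 0.1])
lam = 1.0
u = soft_threshold(f, lam)   # biased: nonzero entries shrunk by lam
u_tilde = debias(u, f)       # support kept, amplitudes restored
```

The support (the model) is unchanged by the debiasing; only the amplitudes (the method bias) are corrected.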
\paragraph{The anisotropic Total-Variation (TV)} \cite{rudin1992nonlinear}
is a particular instance of \eqref{eq:l1analysis}
where $u_0 \in \RR^N$ can be identified to a $d$-dimensional discrete signal,
for which $\Gamma \! \in \! \RR^{L \times N}$, with $L\!=\!dN$, is
the concatenation of the discrete gradient operators
in each canonical direction.
In this case $\Ii$ is the set of indices where the solution has discontinuities (non-zero gradients)
and $\Mm^\TV$ is the space of piece-wise constant signals sharing the same discontinuities
as the
solution.
Its weak bias reveals a loss of contrast: a shift of intensity on each piece
depending on its surroundings and on the ratio between its perimeter and its area,
as shown, e.g., in \cite{strong2003edge}.
Note that the so-called {\it staircasing} effect of TV regularization is encoded
in our framework as a model bias, and
is therefore not reduced by our debiasing technique.
Strategies devoted to the reduction of this effect
have been studied in, \!e.g., \!\cite{louchet2011total}.
\bigskip
Since in general $u^\star$ admits no closed-form expression,
it is usually approximated by an iterative algorithm
that can be expressed as a sequence $u^k$
converging to $u^\star$.
The question we address is how to compute $\tilde{u}^\star$ in practice, i.e.,
to evaluate Eq.~\eqref{eq:debiasing_al1}, or more precisely, how to jointly build
a sequence $\tilde{u}^k$
converging to $\tilde u^\star$.
We propose a technique that relies on the observation that,
given \eqref{eq:l1analysis_solution}, for almost all $f$,
the Jacobian $J^\star$ of $u^\star$ at $f$, applied to $f$, leads to Eq.~\eqref{eq:debiasing_al1}, i.e.,
\changemargin{-2pt}{-2pt}{%
\begin{equation}
J^\star [f] = U(\Phi U)^+ f = \tilde{u}^\star
\end{equation}}%
since $U$ and $s_\Ii$ are locally constant
\cite{vaiter2013local}.
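This identity can be checked numerically in the simplest case (a sketch of ours, with $\Phi = \Gamma = \Id$, where the estimator reduces to the soft-thresholding): a finite-difference directional derivative of $u^\star$ at $f$ in the direction $f$ recovers the debiased solution.

```python
import numpy as np

def soft_threshold(f, lam):
    return np.sign(f) * np.maximum(np.abs(f) - lam, 0.0)

def jacobian_times_f(estimator, f, eps=1e-6):
    # J^*[f] ~ (u(f + eps f) - u(f)) / eps: valid for almost all f,
    # since the estimator is locally affine there.
    return (estimator(f * (1.0 + eps)) - estimator(f)) / eps

f = np.array([3.0, -0.5, 1.2, 0.1])
u_tilde = jacobian_times_f(lambda g: soft_threshold(g, 1.0), f)
# u_tilde keeps f on the support of the biased solution and zeroes the rest
```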
We so define a sequence $\tilde{u}^k$
which is, up to a slight modification,
the closed-form derivation of the primal-dual sequence $u^k$ of \cite{CP}.
Most importantly, we provide a proof of its convergence towards $\tilde{u}^\star$.
Note that other debiasing techniques could be
employed for the $\ell_1$ analysis, e.g., using
iterative hard-thresholding \cite{herrity2006sparse,blumensath2008iterative},
refitting techniques \cite{efron2004least,Lederer13},
post-refinement techniques based on Bregman divergences
and nonlinear inverse scale spaces
\cite{osher2005iterative,burger2006nonlinear,xu2007iterative}
or with ideal spectral filtering in the analysis sense
\cite{gilboa2014total}.
\ifthenelse{\boolean{arxiv}}{%
We invite the interested reader to have a look at Appendix \ref{sec:comp_debiaising}
for a study of the advantages and limitations of such methods
compared to ours.
}
\subsection{Primal-dual algorithm}
Before stating our main result, let us recall some of the properties of
primal-dual techniques. Dualizing the $\ell_1$ analysis norm $u \mapsto \lambda \norm{\Gamma u}_1$,
the primal problem can be reformulated as the following saddle-point problem
\changemargin{-2pt}{-2pt}{%
\begin{equation}\label{eq:l1analysis_dual}
z^\star = \uargmax{z \in \RR^L} \min_{u \in \RR^N} \frac{1}{2} \|\Phi u-f\|^2+\left< \Gamma u, z\right> - \iota_{B_\lambda}(z)
\end{equation}}%
where $z^\star \in \RR^L$ is the dual variable, and $B_\lambda=\enscond{z}{\norm{z}_\infty\leq \lambda}$ is the $\ell_\infty$ ball.
\paragraph{First order primal-dual optimization.}
Taking $\sigma\tau<\frac1{\|\Gamma\|_2^2}$, $\theta\in[0,1]$ and initializing, for instance, $u^0=v^0=0 \in \RR^N$, $z^0=0 \in \RR^L$, the primal-dual algorithm of \cite{CP} applied to problem \eqref{eq:l1analysis_dual} reads
\changemargin{-2pt}{-2pt}{%
\begin{equation}\label{algo:normal}\left\{\begin{array}{ll}
z ^{k+1}&=\Pi_{B_\lambda}(z^k+\sigma \Gamma v^k),\\
u^{k+1}&=(\Id+\tau\Phi^t\Phi) ^{-1}\left(u^k+\tau( \Phi^tf-\Gamma^t(z^{k+1}))\right),\\
v^{k+1}&=u^{k+1}+\theta(u^{k+1}-u ^k),
\end{array}\right.\end{equation}}%
where the projection of $z$ over $B_\lambda$ is done component-wise as
\changemargin{-2pt}{-2pt}{%
\begin{equation}
\Pi_{B_\lambda}(z)_i=
\left\{
\begin{array}{ll}
z_i & \text{if} \quad |z_i|\leq \lambda,\\
\lambda\, \sign(z_i) \quad \; & \otherwise.
\end{array}
\right.
\end{equation}}%
The primal-dual sequence $u^k$ converges to
a solution $u^\star$ of \eqref{eq:l1analysis} \cite{CP}.
We assume here that $u^\star$ satisfies \eqref{eq:l1analysis_solution}
with $\Phi U$ of full column rank.
This could be enforced as shown in \cite{vaiter2013local},
but it did not seem to be necessary in our experiments.
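For concreteness, here is a minimal Python sketch (ours, not part of the original experiments) of iterations \eqref{algo:normal} for 1D TV denoising with $\Phi = \Id$; the step sizes satisfy $\sigma\tau\norm{\Gamma}_2^2 < 1$ since $\norm{\Gamma}_2^2 \le 4$ for 1D finite differences:

```python
import numpy as np

def grad(u):
    # Discrete forward differences: the analysis operator Gamma for 1D TV.
    return np.diff(u)

def grad_t(z, n):
    # Adjoint Gamma^t, so that <grad(u), z> = <u, grad_t(z, len(u))>.
    g = np.zeros(n)
    g[1:] += z
    g[:-1] -= z
    return g

def primal_dual_tv(f, lam, sigma=0.4, tau=0.4, theta=1.0, iters=5000):
    # Primal-dual iterations of Chambolle-Pock for 1D TV denoising (Phi = Id).
    n = f.size
    u = np.zeros(n)
    v = np.zeros(n)
    z = np.zeros(n - 1)
    for _ in range(iters):
        z = np.clip(z + sigma * grad(v), -lam, lam)          # projection onto B_lambda
        u_new = (u + tau * (f - grad_t(z, n))) / (1.0 + tau)  # (Id + tau Phi^t Phi)^{-1} step
        v = u_new + theta * (u_new - u)
        u = u_new
    return u

f = np.array([0.0, 0.0, 0.0, 4.0, 4.0, 4.0])
u = primal_dual_tv(f, lam=1.0)
# each plateau is shifted toward the other by lam / (plateau size):
# the loss of contrast discussed above
```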
\subsection{Debiasing algorithm}
As pointed out earlier, the debiasing of $u^\star$ consists in applying the
Jacobian matrix $J^\star$ at $f$ to $f$ itself.
This idea leads to the proposed debiasing algorithm that constructs a sequence of debiased
iterates from the original biased primal-dual sequence with initialization
$\tilde u^0=\tilde v^0=0 \in \RR^N$, $\tilde z^0=0 \in \RR^L$
as follows
\begin{eqnarray}\label{algo:diff}
&&
\left\{\begin{array}{ll}
\tilde z^{k+1}&=\Pi_{z^k+\sigma \Gamma v^k}(\tilde z^k+\sigma \Gamma \tilde v^k),\\
\tilde u^{k+1}&=(\Id+\tau \Phi^t\Phi)^{-1}\left(\tilde u^k+\tau(\Phi^t f-\Gamma^t\tilde z^{k+1})\right),\\
\tilde v^{k+1}&=\tilde u^{k+1}+\theta(\tilde u^{k+1}-\tilde u ^k),
\end{array}\right.\\
\whereq &&
\Pi_{z^k+\sigma \Gamma v^k}(\tilde z)_i=
\left\{
\begin{array}{ll}
\tilde z_i \quad \; & \text{if} \quad |z^k+\sigma \Gamma v^k|^{}_i\leq \lambda+\beta,\\
0 & \otherwise.
\end{array}
\right. \nonumber
\end{eqnarray}
with $\beta \geq 0$.
Note that when $\beta=0$,
differentiating $z^k$, $u^k$ and $v^k$ with respect to $f$ at $f$ in the direction $f$, for almost all $f$, using the chain rule
leads to the sequences $\tilde z^k$, $\tilde u^k$ and $\tilde v^k$ respectively
(see also \cite{deledalle2014stein}).
However, as shown in Theorem \ref{prop:convergence}, it is important to choose $\beta > 0$ to
guarantee the convergence of the sequence\footnote{In practice, $\beta$ can be chosen
as the smallest positive floating number.}.
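A minimal Python sketch (ours, with $\Phi = \Id$ and a test signal of our choosing) of the joint scheme \eqref{algo:diff} run alongside \eqref{algo:normal} for 1D TV denoising: on a piece-wise constant signal, the biased plateaus are shifted while the debiased ones match the per-piece means of $f$.

```python
import numpy as np

def grad(u):
    return np.diff(u)

def grad_t(z, n):
    g = np.zeros(n)
    g[1:] += z
    g[:-1] -= z
    return g

def primal_dual_tv_debiased(f, lam, sigma=0.4, tau=0.4, theta=1.0,
                            beta=1e-3, iters=5000):
    # Runs the biased sequence (z, u, v) and the debiased sequence
    # (zt, ut, vt) jointly, with Phi = Id.
    n = f.size
    u = np.zeros(n); v = np.zeros(n); z = np.zeros(n - 1)
    ut = np.zeros(n); vt = np.zeros(n); zt = np.zeros(n - 1)
    for _ in range(iters):
        w = z + sigma * grad(v)
        # modified projection: keep coordinates where |w_i| <= lam + beta,
        # zero the others (the detected co-support)
        zt = np.where(np.abs(w) <= lam + beta, zt + sigma * grad(vt), 0.0)
        z = np.clip(w, -lam, lam)
        u_new = (u + tau * (f - grad_t(z, n))) / (1.0 + tau)
        ut_new = (ut + tau * (f - grad_t(zt, n))) / (1.0 + tau)
        v, u = u_new + theta * (u_new - u), u_new
        vt, ut = ut_new + theta * (ut_new - ut), ut_new
    return u, ut

f = np.array([0.0, 0.0, 0.0, 4.0, 4.0, 4.0])
u, ut = primal_dual_tv_debiased(f, lam=1.0)
# u suffers a loss of contrast; ut replaces each plateau by the mean of f over it
```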
\begin{thm}\label{prop:convergence}
Let $\alpha>0$ be the minimum nonzero value%
\footnote{If $|\Gamma u^\star|_i = 0$ for all $i \in [L]$,
the result remains true for any $\alpha>0$.}
of $|\Gamma u^\star|_i$ for all $i \in [L]$.
Choose $\beta$ such that $\alpha\sigma>\beta>0$.
The sequence $\tilde u^k$ defined in \eqref{algo:diff} converges
to the debiasing $\tilde u^\star$ of $u^\star$.
\end{thm}
Before turning to the proof of this theorem, let us introduce a first lemma.
\newcommand{\tilde u}{\tilde u}
\begin{lem}\label{lem:sad}
The debiasing $\tilde u^\star$ of $u^\star$ is the solution of the saddle-point problem
\changemargin{-3pt}{-3pt}{%
\begin{equation}\label{pb:sad}
\min_{{\tilde u\in\RR^N}}\max_{\tilde z\in \RR^L} \|\Phi{\tilde u}-f\|^2+\left<\Gamma\tilde u,\tilde z\right>- \iota^{}_{{F_{\Ii}}}(\tilde z),
\end{equation}}%
where $\iota^{}_{F_{\Ii}}$
is the indicator function of
the convex set ${F^{}_{\Ii}}\!=\!\enscond{p\in \RR^L}{p^{}_{\Ii}\!=\!0}.$
\end{lem}
\begin{proof}
As $\Phi U$ has full column rank, the debiased solution is the unique solution
of the constrained least square estimation problem
\changemargin{-3pt}{-3pt}{%
\begin{equation}\label{pb:ls}
\tilde u^\star = U(\Phi U)^+ f = \uargmin{\tilde u\in\mathcal{U}^\star} \|\Phi \tilde u-f\|^2~.
\end{equation}}%
Remark that $\tilde u\in\mathcal{U}^\star\!=\!\Ker[\llic{\Gamma}]
\Leftrightarrow(\Gamma\tilde u)_{\Ii^c}=0\Leftrightarrow \iota^{}_{F_{\Ii^c}}(\Gamma\tilde u)=0$, where ${F^{}_{\Ii^c}}\!=\!\enscond{p\in \RR^L}{p^{}_{\Ii^c}\!=\!0}$.
Using the Fenchel transform,
$\iota^{}_{F_{\Ii^c}}(\Gamma\tilde u)\!=\!\max_{\tilde z}\left<\Gamma\tilde u,\tilde z\right>- \iota^{*}_{{F_{\Ii^c}}}(\tilde z)$,
where $\iota^{*}_{{F_{\Ii^c}}}$ is the convex conjugate of $\iota^{}_{{F_{\Ii^c}}}$.
Observing that $\iota^{}_{F_{\Ii}}\!=\!\iota^{*}_{F_{\Ii^c}}$ concludes the proof.
\hfill $\square$
\end{proof}
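For completeness, the conjugacy used in the last step can be verified directly: for any $\tilde z \in \RR^L$,
\changemargin{-3pt}{-3pt}{%
\begin{equation*}
\iota^{*}_{F_{\Ii^c}}(\tilde z) = \sup_{p \,:\, p^{}_{\Ii^c}=0} \left< p, \tilde z \right> =
\left\{
\begin{array}{ll}
0 & \text{if} \quad \tilde z^{}_{\Ii}=0,\\
+\infty \quad \; & \otherwise,
\end{array}
\right.
\end{equation*}}%
since the supremum over the unconstrained coordinates $p^{}_{\Ii}$ is finite only when $\tilde z^{}_{\Ii}=0$; that is, $\iota^{*}_{F_{\Ii^c}} = \iota^{}_{F_{\Ii}}$.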
Given Lemma \ref{lem:sad}, replacing $\Pi_{z^k+\sigma \Gamma v^k}$ in
\eqref{algo:diff}
by the projection onto ${F_{\Ii}}$, i.e.,
\changemargin{-3pt}{-3pt}{%
\begin{equation}\label{proj_converged}
\Pi_{{F_{\Ii}}}(\tilde z)^{ }_{\Ii^c}=\tilde z^{ } _{\Ii^c}~ \qandq \Pi_{{F_{\Ii}}}(\tilde z)_{\Ii}^{}=0~,
\end{equation}}%
leads to
the primal-dual algorithm of \cite{CP} applied to problem \eqref{pb:sad}
which converges to the debiased estimator $\tilde u^\star$.
It remains to prove
that the projection $\Pi_{z^k+\sigma \Gamma v^k}$ defined in \eqref{algo:diff}
converges to $\Pi_{{F_{\Ii}}}$ in finite time.
\begin{proof}[Theorem \ref{prop:convergence}]
First consider $i\!\in\!\Ii$, i.e., $|\Gamma u^\star|^{}_i\!>\!0$.
By assumption on $\alpha$, $|\Gamma u^\star|^{}_i\!\ge\!\alpha>0$.
Necessarily, $z_i^\star\!=\!\lambda \sign(\Gamma u^\star)_i$ in order to maximize
\eqref{eq:l1analysis_dual}.
Hence, $|z^\star+\sigma \Gamma u^\star|^{}_i\!\ge\!\lambda+\sigma\alpha$.
Using the triangle inequality shows that
\changemargin{-2pt}{-2pt}{%
\begin{equation}
\lambda+\sigma\alpha\leq|z^\star+\sigma \Gamma u^\star|^{}_i\leq|z^\star-z^k|^{}_i+\sigma |\Gamma u^\star-\Gamma v^k|^{}_i+|z^k+\sigma \Gamma v^k|^{}_i~.
\end{equation}}%
Choose $\epsilon\!>\!0$ sufficiently small such that $\sigma\alpha-\epsilon(1+\sigma ) \!>\! \beta$.
From the convergence of the primal-dual algorithm of \cite{CP},
the sequence $(z^k,u^k,v^k)$ converges to $(z^\star,u^\star,u^\star)$.
Therefore, for $k$ large enough,
$|z^\star-z^k|^{}_i \!<\! \epsilon$, $|\Gamma u^\star-\Gamma v^k|^{}_i \!<\! \epsilon$, and
\changemargin{-2pt}{-2pt}{%
\begin{equation}
|z^k+\sigma \Gamma v^k|^{}_i\geq\lambda+\sigma\alpha-\epsilon(1+\sigma ) > \lambda+\beta ~.
\end{equation}}%
Next consider $i\!\in\!\Ii^c$, i.e., $|\Gamma u^\star|_i\!=\!0$, where by definition $|z^\star|_i \!\leq\! \lambda$.
Using again the triangle inequality shows that
\changemargin{-2pt}{-2pt}{%
\begin{equation}
|z^k+\sigma \Gamma v^k|^{}_i\leq|z^k-z^\star|^{}_i+\sigma |\Gamma v^k - \Gamma u^\star|^{}_i+|z^\star|^{}_i~.
\end{equation}}%
Choose $\epsilon\!>\!0$ sufficiently small such that $\epsilon(1+\sigma ) \!<\! \beta$.
As $(z^k,u^k,v^k) \to (z^\star,u^\star,u^\star)$, for $k$ large enough,
$|z^k-z^\star|^{}_i \!<\! \epsilon$, $|\Gamma v^k-\Gamma u^\star|^{}_i \!<\! \epsilon$, and
\changemargin{-2pt}{-2pt}{%
\begin{equation}
|z^k+\sigma \Gamma v^k|^{}_i \!<\! \lambda + \epsilon(1+\sigma ) \leq \lambda + \beta~.
\end{equation}}%
It follows that for $k$ sufficiently large $|z^k+\sigma \Gamma v^k|^{}_i \!\leq\! \lambda+\beta$ if and only if $i\!\in\!\Ii^c$, and hence
$\Pi_{z^k+\sigma \Gamma v^k}(\tilde z)\!=\!\Pi_{{F_{\Ii}}}(\tilde z)$.
As a result, all subsequent iterations of \eqref{algo:diff}
will solve \eqref{pb:sad}, and hence from Lemma \ref{lem:sad}
this concludes the proof of the theorem.
\hfill $\square$
\end{proof}
\section{Debiasing other affine estimators}
In most cases, $U^\star$ cannot be computed with a reasonable memory load and/or computation time,
such that Eq.~\eqref{eq:debiasing} cannot be used directly.
However, the directional derivative, i.e., the application of $J^\star$ to a direction
$\delta$, can in general be obtained
with an algorithm of the same complexity as the one providing $u^\star$.
If one can compute the directional derivatives for any direction,
a general iterative algorithm for the computation of $\tilde{u}^\star$ can be
derived as given in Algorithm 1.
The proposed technique relies on the fact that given $n \!=\! \dim(\Mm^\star)$
uniformly random directions
$\delta_1, \ldots, \delta_n$ on the unit sphere of $\RR^P$,
$J^\star \delta_1, \ldots, J^\star \delta_n$ form a basis of $\Ima[J^\star]$
almost surely.
Given this basis, the debiased solution can then be retrieved from \eqref{eq:debiasing}.
Unfortunately, computing the image of the usually large number $n$
of random directions can be computationally prohibitive.
The idea is to approach the debiased solution
by retrieving only a low dimensional subspace of $\Mm^\star$
leading to a small approximation error.
Our greedy heuristic is to choose random perturbations around the current residual
(the strength of the perturbation being controlled by a parameter $\epsilon$).
As soon as $\epsilon >0$, the algorithm
converges in $n$ iterations as explained above.
But, by focusing on directions guided by the current residual,
the algorithm refines in priority the directions
for which the current debiased solution deviates significantly from the data $f$, i.e.,
directions that encode potential remaining bias.
Hence, the debiasing can be very effective even though
only a small number of such directions has been explored.
In our experiments, with a small value of $\epsilon$,
this strategy indeed leads to a satisfactory debiasing,
close to convergence, within a few iterations.
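A minimal Python sketch of this randomized strategy (the names are ours; soft-thresholding stands in as the black-box locally affine estimator, with $\Phi = \Id$ so that the debiased solution is the orthogonal projection of $f$ onto $\Mm^\star$; plain random directions are used here instead of the residual-guided ones):

```python
import numpy as np

def soft_threshold(f, lam):
    return np.sign(f) * np.maximum(np.abs(f) - lam, 0.0)

def randomized_debias(estimator, f, n_dirs, eps_fd=1e-6, seed=0):
    # Estimate J* applied to random unit directions by finite differences,
    # then recombine the resulting basis of Im[J*] by least squares,
    # i.e. project f onto the model subspace (Phi = Id here).
    rng = np.random.default_rng(seed)
    u0 = estimator(f)
    cols = []
    for _ in range(n_dirs):
        d = rng.standard_normal(f.size)
        d /= np.linalg.norm(d)
        cols.append((estimator(f + eps_fd * d) - u0) / eps_fd)
    B = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(B, f, rcond=None)  # least-squares recombination
    return B @ coef

f = np.array([3.0, -0.5, 1.2, 0.1])
u_tilde = randomized_debias(lambda g: soft_threshold(g, 1.0), f, n_dirs=4)
# u_tilde is the projection of f onto the support of the biased solution
```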
\input{nlmeans_ssvm}
\section{Numerical experiments and results}
Figure \ref{fig:tv1d} gives an illustration of
TV used for denoising a 1D piece-wise constant signal
in $[0, 192]$ and
damaged by additive white Gaussian noise (AWGN) with a standard deviation $\sigma\!=\!10$.
Even though TV has retrieved the support of $\nabla u_0$ almost perfectly
(with only one extra jump), the intensities of some regions
are biased.
As expected, our debiasing is unbiased on every region.
Figure \ref{fig:tv2d_deconv} gives
an illustration of our debiasing of
2D anisotropic TV used for the restoration of
an $8$-bit approximately piece-wise constant image damaged by
AWGN with $\sigma\!=\!20$.
The observation operator $\Phi$ is a Gaussian convolution kernel of
bandwidth $2$px.
TV introduced a significant loss of contrast, notably on
the thin contours of the drawing, which are re-enhanced
in our debiased result.
Figure \ref{fig:nlm} gives an illustration of our iterative
debiasing for the block-wise nonlocal-means algorithm used
in a denoising problem for an $8$-bit image
enjoying many repetitive patterns
and damaged by AWGN with $\sigma=20$.
Convergence was considered reached after only $4$ iterations.
Our debiasing provides favorable results with many enhanced details
compared to the biased result.%
\ifthenelse{\boolean{arxiv}}{\\
Please refer to Appendix \ref{sec:supp_experiments}
for more details and experiments.
}{}
\input{conclusion}
\bibliographystyle{abbrv}
| {
"timestamp": "2015-03-06T02:07:32",
"yymm": "1503",
"arxiv_id": "1503.01587",
"language": "en",
"url": "https://arxiv.org/abs/1503.01587",
"abstract": "Bias in image restoration algorithms can hamper further analysis, typically when the intensities have a physical meaning of interest, e.g., in medical imaging. We propose to suppress a part of the bias -- the method bias -- while leaving unchanged the other unavoidable part -- the model bias. Our debiasing technique can be used for any locally affine estimator including â1 regularization, anisotropic total-variation and some nonlocal filters.",
"subjects": "Statistics Theory (math.ST)",
"title": "On debiasing restoration algorithms: applications to total-variation and nonlocal-means",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9688561712637256,
"lm_q2_score": 0.7310585727705126,
"lm_q1q2_score": 0.7082906097839625
} |
https://arxiv.org/abs/1911.04552 | New approaches to finite generation of cohomology rings | In support variety theory, representations of a finite dimensional (Hopf) algebra $A$ can be studied geometrically by associating any representation of $A$ to an algebraic variety using the cohomology ring of $A$. An essential assumption in this theory is the finite generation condition for the cohomology ring of $A$ and that for the corresponding modules. In this paper, we introduce various approaches to study the finite generation condition. First, for any finite dimensional Hopf algebra $A$, we show that the finite generation condition on $A$-modules can be replaced by a condition on any affine commutative $A$-module algebra $R$ under the assumption that $R$ is integral over its invariant subring $R^A$. Next, we use a spectral sequence argument to show that a finite generation condition holds for certain filtered, smash and crossed product algebras in positive characteristic if the related spectral sequences collapse. Finally, if $A$ is defined over a number field over the rationals, we construct another finite dimensional Hopf algebra $A'$ over a finite field, where $A$ can be viewed as a deformation of $A'$, and prove that if the finite generation condition holds for $A'$, then the same condition holds for $A$. | \section{Introduction}
Hochschild cohomology was introduced by Hochschild in 1945 \cite{GHoch59} for any associative algebra. Gerstenhaber \cite{Ger63} showed that this cohomology has a graded algebra structure (via cup product) and a graded Lie algebra structure (via a Lie bracket or Gerstenhaber bracket). These two algebraic structures are compatible in such a way that makes Hochschild cohomology a {\em Gerstenhaber algebra}. Many mathematicians have since investigated Hochschild cohomology $\HH^*(A)$ for various types of algebras $A$, and it has been useful in many settings, including algebraic deformation theory (e.g.,~\cite{Ger64}) and support variety theory (e.g.,~\cite{EHSS, SO}).
Generally speaking, the theory of support varieties studies representations of an algebra $A$ geometrically by associating each finitely generated $A$-module $M$ to a certain algebraic variety $V(M)$, namely the variety of the kernel of the graded ring homomorphism $-\otimes_AM: \HH^*(A) \to \Ext^*_A(M,M)$. In support variety theory, the following finite generation assumption on the Hochschild cohomology ring $\HH^*(A)$ of $A$ and on bimodules is essential (see e.g.,~\cite{EHSS, SO, Solberg}).
\vspace{0.5em}
\begin{itemize}
\item[\textbf{(fg)}:] $\HH^*(A,M)$ is a noetherian module over $\HH^*(A)$, for any finitely generated $A$-bimodule $M$.
\end{itemize}
\vspace{0.5em}
\noindent
In particular, condition {\rm \textbf{(fg)}} implies that the Hochschild cohomology ring $\HH^*(A)$ is finitely generated. Note that this is not always true for a finite dimensional algebra $A$; see Xu's counterexample in \cite{Xu}. We wish to know more finite dimensional algebras $A$ that satisfy condition {\rm \textbf{(fg)}}. There are several conditions equivalent to {\rm \textbf{(fg)}}, which are sometimes more convenient to use. For completeness, we include them in Proposition~\ref{equivfg} below.
For any finite dimensional Hopf algebra $A$ over a field $k$, one can also study the support variety theory using the cohomology ring $\coh^*(A,k)$. There are certain classes of Hopf algebras $A$ whose cohomology rings are known to be finitely generated, but it is still unknown in general if this cohomology ring is always finitely generated. In fact, this is a long-standing conjecture, which is formulated in the setting of finite tensor categories by Etingof and Ostrik in \cite[Conjecture 2.18]{EsOt}. If this conjecture holds true for $A$, one can define support varieties over $A$ by using a graded ring homomorphism $ - \otimes_k M: \coh^*(A,k) \to \Ext_A^*(M,M)$ similar to that described for Hochschild cohomology above. This homomorphism factors through the action of $\HH^*(A)$ on $\Ext^*_A(M,M)$~\cite{PW}, giving a connection between the support variety theories defined via Hopf algebra cohomology $\coh^*(A,k)$ and Hochschild cohomology $\HH^*(A)$. We will work specifically with the following noetherian assumption of a finite dimensional Hopf algebra $A$ over a field $k$ (see e.g.,~\cite{FW2015}), and more generally for any finite dimensional augmented $k$-algebra $A$:
\vspace{0.5em}
\begin{itemize}
\item[\textbf{(hfg)}:] $\coh^*(A,M)$ is a noetherian module over $\coh^*(A,k)$, for any finite dimensional $A$-module $M$.
\end{itemize}
\vspace{0.5em}
\noindent
In particular, when $A$ is a finite dimensional Hopf algebra, condition \textbf{(hfg)} implies that $\coh^*(A,k)$ is finitely generated, which is the finite generation conjecture mentioned above. We wish to identify more finite dimensional Hopf algebras $A$ that satisfy condition \textbf{(hfg)}, and hence move forwards towards the goal of proving the finite generation conjecture. \\
\noindent
\underline{{\bf Main Results:}} We take three different approaches in this finite generation problem: \vspace{0.1in}
Our first result in Section~\ref{sec:fg conditions} provides several equivalent assumptions for \textbf{(hfg)} including the finite generation condition on the cohomology ring $\coh^*(A,R)$ for any affine commutative $A$-module algebra $R$. Our result in Lemma~\ref{lem:hfg} shows that these two assumptions are equivalent if $A$ has the integral property (see Section~\ref{subsec:integral}). A classical result in algebraic groups says that any finite dimensional cocommutative Hopf algebra has the integral property. Zhu \cite{Zhu} proved that the integral property holds for any finite dimensional cosemisimple Hopf algebra and Skryabin \cite{Sky} showed that any finite dimensional Hopf algebra in positive characteristic has the integral property. So our result can be applied to all these cases.
Next, in Section~\ref{sec:homPI}, we study the finite generation conditions that are preserved under certain spectral sequences related to filtered, smash and crossed product algebras (see Theorems \ref{thm:FFGC}, \ref{thm:FiniteTypeCoh}, and \ref{thm:FiniteTypeHochschild} and see the Appendix for those we use here). An answer to the finite generation conditions on multiplicative spectral sequences relies on having suitable \emph{permanent cocycles} (those universal cocycles which survive under the differential maps of all pages in a cohomology spectral sequence), see e.g., \cite{FS,MPSW,NWW2019,NWi}. In this work, we use the Frobenius map to construct such permanent cocycles over the field of positive characteristic (see Lemma \ref{lem:key}). In Propositions \ref{prop:hfg} and \ref{prop:fg}, we employ the Hilton-Eckmann argument in a suspended monoidal category to show that the cohomology ring $\coh^*(A,R)$ of a finite dimensional Hopf algebra $A$ with coefficients in an $A$-module algebra $R$ is finite over its affine graded center under some assumptions on $R$. We apply this result to the first pages of May spectral sequences related to filtered algebras and to the second pages of Lyndon-Hochschild-Serre spectral sequences related to smash and crossed products. Moreover, if the related spectral sequences collapse over a field of positive characteristic, then we are able to construct permanent cocycles by using the Frobenius map on their graded centers to conclude the finite generation conditions.
Finally, in Section~\ref{sec:mod p}, we apply the well-known reduction modulo $p$ method in number theory to support variety theory. For a large family of finite dimensional complex Hopf algebras $A$ (e.g., Lusztig's small quantum groups \cite{Lus90}), we can select a ``good" prime $p$ to construct another finite dimensional Hopf algebra $A'$ over the finite field $F_{q}$, where $q=p^n$ for some integer $n$. Here, $A$ can be viewed as a deformation of $A'$. We show that if the newly constructed Hopf algebra $A'$ over the finite field satisfies \textbf{(hfg)} then so does the original complex Hopf algebra $A$. Namely, the finite generation conditions in positive characteristic can be lifted up to those in zero characteristic via reduction modulo $p$ (see Theorem \ref{thm:m1}). Moreover, our approach is applicable both to the cohomology ring of finite dimensional Hopf algebras and to the Hochschild cohomology of finite dimensional associative algebras (see Theorem \ref{thm:m2}). Therefore, our lifting method provides a viable connection between a field of \emph{characteristic zero} and a field of \emph{positive characteristic} in studying the cohomology of Hopf algebras. This link is often lacking when one classifies Hopf algebras, as the classification methods are quite different from one field to the other (see e.g., \cite{A, NWa, NWW2015, NWW2018, Wang2013}).
\section{Preliminaries}
\label{sec:prelim}
\numberwithin{equation}{subsection}
Throughout the paper, let $k$ be a base field. All modules are left modules and $\otimes = \otimes_k$ unless stated otherwise. We first recall the two cohomology types discussed in this paper and present some background material necessary to build up our main results.
\begin{definition}
\label{def:hochschild}
Let $A$ be an algebra over $k$ and $M$ be an $A$-bimodule, which can be considered as a left module over the enveloping algebra $A^e = A \otimes A^{op}$ of $A$. The {\bf Hochschild cohomology} of $A$ with coefficients in $M$ is
$$\HH^*(A,M)~:=~\bigoplus_{n \ge 0}\, \Ext^n_{A^e}(A,M),$$
where $A$ is an $A$-bimodule via the left and right multiplications in $A$.
\end{definition}
Under the cup product, $\HH^*(A)~:= ~\HH^*(A,A)$ is a graded commutative $k$-algebra and $\HH^*(A,M)$ is a module over $\HH^*(A)$.
\begin{definition}
\label{def:coh}
Let $A$ be an augmented $k$-algebra and $M$ be a left $A$-module. The {\bf cohomology} of $A$ with coefficients in $M$ is
$$\coh^*(A,M):= \bigoplus_{n \ge 0}\, \Ext^n_A (k,M),$$
where $k$ is an $A$-module via the augmentation map.
\end{definition}
Under the Yoneda product, $\coh^*(A,k)$ is a graded $k$-algebra and $\coh^*(A,M)$ is a module over $\coh^*(A,k)$. If, in addition, $A$ is a finite dimensional Hopf algebra, then $\coh^*(A,k)$ is graded commutative (see also e.g.,~\cite{ginzburg-kumar93,Suarez-Alvarez}).
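As a toy illustration of Definition~\ref{def:coh} (a standard computation, included here only for concreteness): let $A=k[x]/(x^2)$ with augmentation $x \mapsto 0$. The $2$-periodic free resolution
$$\cdots \xrightarrow{\;\cdot x\;} A \xrightarrow{\;\cdot x\;} A \xrightarrow{\;\cdot x\;} A \longrightarrow k \longrightarrow 0$$
yields $\Ext^n_A(k,k) \cong k$ for every $n \geq 0$, and under the Yoneda product $\coh^*(A,k) \cong k[y]$ with $y$ in degree $1$ (the Koszul dual of $A$); in particular, this cohomology ring is noetherian.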
\subsection{Finite generation condition on Hochschild cohomology}
We consider the following finite generation conditions on Hochschild cohomology as
alternatives to the {\textbf{(fg)}} condition:
\vspace{0.5em}
\begin{itemize}
\item[\textbf{(fg1)}:] There is a finitely generated commutative graded subalgebra $H$ of $\HH^*(A)$ with degree-$0$ component $H_0=\HH^0(A)$; and
\item[\textbf{(fg2)}:] $\Ext^*_A(M,N)$ is a finitely generated $H$-module, for all pairs of finitely generated $A$-modules $M$ and $N$.
\end{itemize}
\vspace{0.5em}
We next recall how these finite generation conditions and others are all equivalent.
\begin{prop}
\label{equivfg}
For any finite dimensional $k$-algebra $A$, the following are equivalent.
\begin{enumerate}
\item $\HH^*(A,M)$ is a noetherian module over $\HH^*(A)$, for any finite $A$-bimodule $M$, that is, $A$ satisfies {\rm \textbf{(fg)}}.
\item $\HH^*(A)$ is a finitely generated algebra and $\Ext^*_A(A/\rm{rad}(A),A/\rm{rad}(A))$ is a finitely generated module over $\HH^*(A)$, where $\rm{rad}(A)$ denotes the radical of $A$.
\item $\HH^*(A)$ is a finitely generated algebra and $\Ext^*_A(M,N)$ is a finitely generated module over $\HH^*(A)$, for all pairs of finite $A$-modules $M$ and $N$.
\item $A$ satisfies {\rm \textbf{(fg1)}} and {\rm \textbf{(fg2)}}.
\end{enumerate}
\end{prop}
\begin{proof}
(i)$\Leftrightarrow$(ii)$\Leftrightarrow$(iii): The proof is essentially that of \cite[Proposition 2.4]{EHSS}. Since $\HH^*(A)$ is graded commutative, we know it is finitely generated if and only if it is noetherian by Proposition~\ref{prop:1}(2). Also, a module over a noetherian ring is finitely generated if and only if it is a noetherian module.
(ii)$\Leftrightarrow$(iv): See~\cite[Proposition 5.7]{Solberg}.
\end{proof}
\subsection{Properties of graded algebras}
Both cohomology structures $\HH^*(A)$ and $\coh^*(A,k)$ are graded algebras, so to work with these cohomology structures, we recall some basic properties of nonnegatively graded algebras.
Let $A$ be a (nonnegatively) graded $k$-algebra, that is, $A=\bigoplus_{i\ge 0} A_i$ with $1_A \in A_0$ and $A_iA_j \subseteq A_{i+j}$ for all integers $i,j \geq 0$. If $A_0=k$, then $A$ is called {\bf connected graded}. Recall that $A$ is called {\bf $A_0$-affine} if it is a finitely generated algebra over the degree-$0$ component $A_0$. The following well-known results will be used many times throughout the paper without explicit citation.
\begin{prop}
\label{prop:1}
Let $A=\bigoplus_{i\ge 0} A_i$ be a graded algebra for which the subalgebra $A_0$ is
noetherian commutative. Then:
\begin{enumerate}
\item $A$ is noetherian if and only if $A$ is graded noetherian.
\item Suppose $A$ is graded commutative. Then the following are equivalent:
\begin{itemize}
\item [(a)] $A$ is noetherian.
\item [(b)] $A$ is $A_0$-affine.
\item [(c)] $A$ is a finite module over $A^{{\rm ev}}$ and $A^{{\rm ev}}$ is $A_0$-affine, where $A^{{\rm ev}}$ consists of all even-degree elements of $A$.
\end{itemize}
\item Suppose $A$ is noetherian and is a finite module over some graded central subalgebra $Z$ with $Z_0=A_0$. Then $Z$ is $Z_0$-affine and noetherian.
\end{enumerate}
\end{prop}
\begin{proof}
For a proof of (i), see for example~\cite{Evens}.
For (ii), see~\cite[Lemma 3.2]{JL}.
For (iii), we can adapt the Artin-Tate Lemma \cite[Lemma 13.9.10]{MR} to the graded setting.
\end{proof}
\subsection{Integral property of Hopf algebras}
\label{subsec:integral}
We recall that a commutative ring $R$ is {\bf integral} over some subring $T$ if every element of $R$ is a root of some monic polynomial with coefficients in $T$. Furthermore, if $R$ is $k$-affine, then $R$ is integral over $T$ if and only if $R$ is a finite module over $T$. We say a Hopf algebra $A$ has the {\bf integral property} if
for any commutative $A$-module algebra $R$ (that is, $R$ is an $A$-module whose unit and multiplication maps are $A$-module maps), $R$ is integral over its invariant subring $R^A$. Note that, in characteristic zero, Zhu \cite{Zhu} showed that the four dimensional Sweedler Hopf algebra does {\em not} have the integral property.
\begin{prop}
\label{prop:integral}
Let $A$ be a finite dimensional Hopf algebra over a field $k$. In each of the following cases, $A$ satisfies the integral property:
\begin{enumerate}
\item the base field $k$ has positive characteristic,
\item $A$ is semisimple,
\item $A$ is cosemisimple,
\item $A$ is cocommutative.
\end{enumerate}
\end{prop}
\begin{proof}
(i) is \cite[Proposition 2.7 and the remark below it]{Sky}. (ii) is \cite[Theorem 6.1 and the remark below it]{Sky}. (iii) follows from \cite[Theorem 2.1]{Zhu}. (iv) is \cite[Theorem 4.2.1]{MO93}.
\end{proof}
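In the cocommutative case (iv), the simplest instance is worth recalling for illustration (a classical argument): if $A = kG$ for a finite group $G$ acting on a commutative algebra $R$ by automorphisms, then every $r \in R$ is a root of the monic polynomial
$$p_r(X) \;=\; \prod_{g \in G} \left( X - g\cdot r \right),$$
whose coefficients, being the elementary symmetric functions of the orbit $\{g\cdot r\}_{g \in G}$, lie in the invariant subring $R^G = R^A$; hence $R$ is integral over $R^A$.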
\subsection{Extensions for the finite generation conditions}
\label{subsec:FG}
In this section, we remark that the finite generation conditions {\rm \textbf{(fg)}} and {\rm \textbf{(hfg)}} that were defined on (Hochschild) cohomology of an algebra $A$ in the introduction can be extended to another algebra $B$ under some assumptions:
\begin{lemma}\label{lem:FGCSUR}
Let $f: A\to B$ be a map of finite dimensional augmented $k$-algebras such that $B$ is projective as a (right) $A$-module via $f$. If $B$ satisfies {\rm \textbf{(hfg)}}, then so does $A$.
\end{lemma}
\begin{proof}
Let $M$ be a finite dimensional $A$-module.
The coinduction functor $\Hom_A( _{A}B , - )$ is right adjoint to restriction from $B$ to $A$ along $f$.
Since $B$ is projective as an $A$-module, a projective
resolution of $k$ as a $B$-module restricts to a projective resolution of $k$ as an $A$-module.
Thus there is an isomorphism
\[
\Ext^*_B(k, \Hom_A( _{A}B, M))
\cong \Ext^*_A(k, M).
\]
Under this isomorphism and restriction from
$\Ext^*_B(k,k)$ to $\Ext^*_A(k,k)$,
the action of $\Ext^*_B(k,k)$ on $\Ext^*_B(k,\Hom_A( _{A}B,M))$ restricts to the action of $\Ext^*_A(k,k)$ on $\Ext^*_A(k,M)$. Under the assumption that $B$ satisfies {\bf (hfg)}, $\Ext^*_B(k,\Hom_A ( _{A}B,M))$ is a noetherian module over $\Ext^*_B(k,k)$, and so $\Ext^*_A(k,M)$ is a noetherian module over $\Ext^*_A(k,k)$.
\end{proof}
\begin{remark}
The same conclusion does not hold for {\bf(fg)}. For example, take $A=B \times C$ where $B$ satisfies condition {\bf(fg)} but $C$ does not. Then $B$ is projective as a left and right module over $A$ via the natural projection $f: A \to B$. Here $B$ satisfies {\bf(fg)} but $A$ does not.
\end{remark}
The next result was first proved in \cite{NP2018} in the context of finite tensor categories with respect to surjective tensor functors. We provide here an alternate proof in the special case of categories of modules over Hopf algebras.
\begin{prop}
\label{prop:ext}
Let $A\subset B$ be an extension of finite dimensional Hopf algebras over a field $k$. If $B$ satisfies {\rm \textbf{(hfg)}}, then so does $A$.
\end{prop}
\begin{proof}
By the Nichols-Zoeller freeness theorem \cite{NZ89}, every finite dimensional Hopf algebra is free as a module over each of its Hopf subalgebras. The statement now follows from Lemma \ref{lem:FGCSUR}.
\end{proof}
Next we discuss the relations between conditions {\rm \textbf{(fg)}} and {\rm \textbf{(hfg)}} for a finite dimensional Hopf algebra $A$. Let $S$ denote the antipode map of $A$.
\begin{prop}\label{adjoint}
Let $A$ be a finite dimensional Hopf algebra over a field $k$. Then $A$ satisfies {\rm \textbf{(hfg)}} if and only if $A$ satisfies {\rm \textbf{(fg)}} and $\HH^*(A)$ is a finitely generated module over $\coh^*(A,k)$.
\end{prop}
\begin{proof}
Denote by $\text{Mod}(A)$ and $\text{Mod}(A^e)$ the categories of left $A$-modules and $A$-bimodules, respectively. There is a natural pair of adjoint functors
$$\mathcal F=A^e\otimes_A-:\text{Mod}(A)\to \text{Mod}(A^e) \quad \text{ and } \quad \mathcal G=\Hom_{A^e}(\!_{A^e}A^e_A,-):\text{Mod}(A^e)\to \text{Mod}(A),
$$
where $A^e$ is viewed as a right $A$-module via the embedding $\delta: A\rightarrow A^e$ given by $\delta(a) = \sum a_1\otimes S(a_2)$ for any $a\in A$. Note that $A^e$ is a free right $A$-module by the fundamental theorem of Hopf modules (e.g., \cite[Lemma 2.3(ii)]{LOYW}). Hence $\mathcal F$ and $\mathcal G$ are both exact, and they preserve projective and injective objects, respectively. Moreover, there is an isomorphism of $A^e$-modules $A \cong A^e\otimes_A k=\mathcal F(k)$~\cite[Lemma 7.1]{PW}.
Suppose $A$ satisfies {\rm \textbf{(hfg)}}. Apply \cite[Proposition 9.1(c)]{ESW} with $C=k$, $\mathcal F(k)=A$, and $D=M$ an arbitrary finite $A$-bimodule: since $\coh^*(A, \mathcal G(M))$ is finitely generated over $\coh^*(A,k)$ by {\rm \textbf{(hfg)}}, it follows that $\HH^*(A,M)$ is finitely generated over $\HH^*(A)$. Taking $D=A$, the Hochschild cohomology $\HH^*(A)=\coh^*(A, \mathcal G(A))$ is finitely generated over $\coh^*(A,k)$. Moreover, \cite[Proposition 9.1(d)]{ESW} implies that $\HH^*(A)$ is noetherian.
Conversely, we can view any finite $A$-module $M$ as a bimodule over $A$ by equipping it with the trivial right action; denote the resulting bimodule by $M^{\rm tr}$. It is clear that $\mathcal G(M^{\rm tr})=M$. By {\rm \textbf{(fg)}}, we know $\HH^*(A,M^{\rm tr})\cong\coh^*(A,M)$ is finitely generated over $\HH^*(A)$. The action of $\coh^*(A,k)$ factors through that of $\HH^*(A)$~\cite[Lemma 7.3]{PW}, and by assumption $\HH^*(A)$ is a finitely generated module over $\coh^*(A,k)$. Hence $\coh^*(A,M)$ is finitely generated over $\coh^*(A,k)$. Proposition \ref{prop:1}(3) implies that $\coh^*(A,k)$ is noetherian. So $A$ satisfies {\rm \textbf{(hfg)}}.
\end{proof}
\section{On the equivalence of finite generation conditions for Hopf algebras}
\label{sec:fg conditions}
\numberwithin{equation}{section}
In this section, we provide some equivalent descriptions of the finite generation conditions on the cohomology of any finite dimensional Hopf algebra. First, we show that the assumption {\rm \textbf{(hfg)}} is preserved under any field extension:
\begin{lemma}
\label{Rem:ext}
Let $A$ be a finite dimensional Hopf algebra over a field $k$, and let $K$ be any field extension of $k$. Then $A$ satisfies {\rm \textbf{(hfg)}} if and only if $A'=A\otimes_k K$ satisfies {\rm \textbf{(hfg)}}.
\end{lemma}
\begin{proof}
Suppose $A'$ satisfies {\rm \textbf{(hfg)}}. Let $M$ be a finite module over $A$, and $M'=M\otimes_kK$ the corresponding finite module over $A'$. Since $K$ is flat over $k$, there is a graded ring isomorphism $\coh^*(A',K)\cong \coh^*(A,k)\otimes_k K$. Hence the tensor product $-\otimes_kK$ gives a faithful exact functor from $\coh^*(A,k)$-modules to $\coh^*(A',K)$-modules. By hypothesis, $\coh^*(A',K)$ is noetherian; since $k\to K$ is faithfully flat, any ascending chain of ideals of $\coh^*(A,k)$ whose extension to $\coh^*(A',K)$ stabilizes must itself stabilize, and it follows that $\coh^*(A,k)$ is noetherian.
It remains to show that $\coh^*(A,M)$ is finitely generated over $\coh^*(A,k)$. It suffices to show that $\coh^*(A,M)$ is finitely presented. By a result of Lenzing \cite[Satz 3]{Len}, it is equivalent to show that $\Hom_{\coh^*(A,k)}(\coh^*(A,M),-)$ preserves any inductive limit $\varinjlim M_i$ in the category of $\coh^*(A,k)$-modules. There is a natural map
\[
\xymatrix{
\varinjlim \Hom_{\coh^*(A,k)}\left(\coh^*(A,M),\,M_i\right)\ar[rr]^-{f}&& \Hom_{\coh^*(A,k)}\left(\coh^*(A,M),\,\varinjlim M_i\right).
}
\]
After applying $-\otimes_kK$, it becomes
\[
\xymatrix{
\varinjlim \Hom_{\coh^*(A',K)}\left(\coh^*(A',M'),\,M_i'\right)\ar[rr]^-{f\otimes_kK}&& \Hom_{\coh^*(A',K)}\left(\coh^*(A',\,M'),\,\varinjlim M_i'\right),
}
\]
where $M_i'=M_i\otimes_kK$. The map $f\otimes_kK$ is an isomorphism since $\coh^*(A',M')$ is finitely generated over the noetherian algebra $\coh^*(A',K)$ and hence it is finitely presented and preserves the inductive limit $\varinjlim M_i'$ in the category of $\coh^*(A',K)$-modules. This implies that $f$ is an isomorphism, since $-\otimes_kK$ is exact.
On the other hand, suppose $A$ satisfies {\rm \textbf{(hfg)}}. We first deal with the case when $K/k$ is a finite field extension. By hypothesis and by the graded commutativity of $\coh^*(A,k)$, it is easy to see that $\coh^*(A',K)\cong\coh^*(A,k)\otimes_kK$ is noetherian and $K$-affine. Let $M'$ be a finite module over $A'$, and $M=\!_AM'$ be the restriction of $M'$ to $A$, which is again a finite $A$-module since $A'=A\otimes_kK$ is a free $A$-module of rank $[K:k]$. Then $\coh^*(A,M)$ is finitely generated over $\coh^*(A,k)$ since $A$ satisfies {\rm \textbf{(hfg)}}. Moreover, by the tensor-hom adjoint pair, there are isomorphisms of graded vector spaces:
\begin{align*} \coh^*(A,M) &~\cong~\Ext^*_{A}(k,\Hom_{A'}(\!_{A'}{A'}_{A},M')) \\
&~\cong~\Ext^*_{A'}(A'\otimes_{A}k,M') \\
&~\cong~\Ext^*_{A'}(K,M')~=~\coh^*(A',M').
\end{align*}
Thus, $\coh^*(A',M')$ is finitely generated over $\coh^*(A',K)=\coh^*(A,k)\otimes_kK$. This implies that $A'$ satisfies {\rm \textbf{(hfg)}} whenever $K/k$ is a finite field extension.
In general, say $k\subset K$ is any field extension. By the previous discussion, it suffices to show that $A'=A\otimes_k\overline{K}$ satisfies {\rm \textbf{(hfg)}}, where $\overline{K}$ is the algebraic closure of $K$. We can argue similarly that $\coh^*(A', \overline{K})=\coh^*(A,k)\otimes_k\overline{K}$ is noetherian and finitely generated. Since $A$ is finite dimensional, there exists a finite field extension $F$ of $k$ such that the quotient $A\otimes_kF/\text{rad}(A\otimes_kF)$ is a direct sum of matrix algebras over $F$ (for instance, we can take $F$ to be the splitting field of $A/\text{rad}(A)$). Now let $B=A\otimes_kF$, so $B\otimes_F\overline{K}=A'$ and $(B/\text{rad}(B))\otimes_F\overline{K}\cong A'/\text{rad}(A')$. Since $F/k$ is a finite field extension, $B$ satisfies {\rm \textbf{(hfg)}} by the previous discussion. Therefore, $\coh^*(B,B/\text{rad}(B))$ is finitely generated over $\coh^*(B,F)$ and
$$
\coh^*(B,B/\text{rad}(B))\otimes_F\overline{K}~\cong~\coh^*(B\otimes_F\overline{K},(B/\text{rad}(B))\otimes_F\overline{K})~\cong~\coh^*(A',A'/\text{rad}(A'))
$$
is finitely generated over $\coh^*(A',\overline{K})=\coh^*(B,F)\otimes_F\overline{K}$. This implies that $\coh^*(A',S)$ is finitely generated over $\coh^*(A',\overline{K})$ for any $A'$-simple $S$. Finally, by filtering any finite dimensional $A'$-module by its composition series, we see that $A'$ satisfies {\rm \textbf{(hfg)}}.
\end{proof}
\begin{remark}
\label{rem:FE}
Using a similar argument, we can show that for any finite dimensional (not necessarily Hopf) algebra $A$ over a field $k$, and any field extension $K$ of $k$, $A$ satisfies the Hochschild condition {\rm \textbf{(fg)}} if and only if $A'=A\otimes_k K$ satisfies {\rm \textbf{(fg)}}.
\end{remark}
In the classical case of group cohomology, it is well-known that the group algebra $kG$ of any finite group $G$ satisfies \textbf{(hfg)}. More generally, Evens proved in \cite[Theorem 8.1]{Evens} that for any affine commutative algebra $R$ which admits a $kG$-module algebra structure, $\coh^*(G,M):= \coh^*(kG,M)$ is a finitely generated module over $\coh^*(G,R)$ for any finitely generated module $M$ over the skew group ring $R\#kG$. This nice property of finite group cohomology prompts us to seek an analogue in the realm of finite dimensional Hopf algebras. Thus we consider the following generalized noetherian assumption on a finite dimensional Hopf algebra $A$, where $R\# A$ denotes a smash product algebra (see~\cite{MO93}):
\vspace{0.5em}
\begin{itemize}
\item[\textbf{(hfg*)}:] $\coh^*(A,M)$ is a noetherian module over $\coh^*(A,R)$ for any affine commutative $A$-module algebra $R$ and any finitely generated $R\#A$-module $M$.
\end{itemize}
\vspace{0.5em}
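Before proceeding, it may help to recall the classical example behind this condition (standard facts, recorded only for orientation). For $G=(\mathbb Z/p)^n$ elementary abelian and ${\rm char}\,k=p$ odd,
\[
\coh^*(G,k)\;\cong\;k[x_1,\dots,x_n]\otimes \Lambda(y_1,\dots,y_n),\qquad \deg x_i=2,\ \deg y_i=1,
\]
which is visibly affine and noetherian; for $p=2$ one has $\coh^*(G,k)\cong k[y_1,\dots,y_n]$ with $\deg y_i=1$. Evens' theorem quoted above upgrades this finite generation to coefficients in any affine commutative $kG$-module algebra $R$, which is precisely the shape of the condition {\rm \textbf{(hfg*)}}.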
It is clear that \textbf{(hfg*)} implies \textbf{(hfg)} by taking $R=k$. The converse implication will also be true under an additional assumption, namely the integral property of $A$ (see~Section~\ref{subsec:integral}) as we show in the following lemma.
\begin{lemma}
\label{lem:hfg}
Let $A$ be a finite dimensional Hopf algebra over a field $k$. Suppose $A$ has the integral property. Then the following are equivalent.
\begin{enumerate}
\item $A$ satisfies {\rm \textbf{(hfg)}}.
\item $A$ satisfies {\rm \textbf{(hfg*)}}.
\item $\coh^*(A,R)$ is finitely generated for any affine commutative $A$-module algebra $R$.
\item Let $T=\bigoplus_{i\ge 0}T_i$ be a finitely generated graded noetherian $A$-module algebra that is a finite module over some graded central $A$-module subalgebra $Z$ of $T$, and let $M$ be a finitely generated module over $T\#A$. Then $\coh^*(A,M)$ is noetherian over $\coh^*(A,T)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i)$\Rightarrow$(ii): Without loss of generality, by Lemma \ref{Rem:ext}, we can replace $k$ by a finite field extension and assume $A/{\rm rad}(A)$ is a direct sum of matrix algebras over $k$. Let $R$ be an affine commutative $A$-module algebra.
We first treat the case when $A$ acts on $R$ trivially, so that $R\# A\cong R\otimes A$. Let $M$ be a finitely generated $R\otimes A$-module. If $M$ is cyclic, we can write $M=(R\otimes A)/I$ for some left ideal $I$ of $R\otimes A$. Since $A$ is finite dimensional, there is a composition series $0=V_0\subset V_1\subset \cdots \subset V_n=A$ of left $A$-modules, where each factor $S_{i}=V_{i}/V_{i-1}$ is a simple $A$-module. Applying $R\otimes_k-$ and passing to the quotient $M$, this induces a finite filtration $0=M_0\subset M_1\subset \cdots \subset M_n=M$ on $M$, where each factor $M_{i}/M_{i-1}$ is a quotient module of $R\otimes S_{i}$. Note that $R\otimes S_{i}$ is a cyclic module over $R\otimes (A/{\rm Ann}(S_{i}))\cong R\otimes M_{d_{i}}(k)\cong M_{d_{i}}(R)$. By replacing $R\otimes A$ with $M_{d_i}(R)$, one sees that $R\otimes S_i\subset M_{d_i}(R)$ is just the column space $R^{d_i}$. A direct matrix computation then shows that $M_i/M_{i-1}\cong (R/J_i)\otimes S_i$ for some ideal $J_i$ of $R$. Since $A$ acts on $R$ trivially, by the universal coefficient theorem, $\coh^*(A,R)\cong R\otimes \coh^*(A,k)$, and so is finitely generated and noetherian. Moreover, $\coh^*(A,(R/J_i)\otimes S_i)=(R/J_i)\otimes \coh^*(A,S_i)$ is finitely generated over $\coh^*(A,R)=R\otimes \coh^*(A,k)$, since $\coh^*(A,S_i)$ is finitely generated over $\coh^*(A,k)$ by {\rm \textbf{(hfg)}}. For each $i$, by applying $\coh^*(A,-)$ to the short exact sequence $0\to M_{i-1}\to M_i\to (R/J_i)\otimes S_i\to 0$, we obtain an exact sequence
\[
\xymatrix{
\coh^*(A,M_{i-1})\ar[r]& \coh^*(A,M_i)\ar[r]& \coh^*(A,(R/J_i)\otimes S_i),
}
\]
where we have just shown that $\coh^*(A,(R/J_i)\otimes S_i)$ is finitely generated over $\coh^*(A,R)$. Then induction on $i$ yields that $\coh^*(A,M_i)$ is finitely generated for all $i$. This completes the cyclic case. In general, we can again induct on the number of minimal generators of $M$ and employ an exact sequence similar to that above to conclude this trivial $A$-action case.
Finally, if $A$ acts on $R$ arbitrarily, by the integral property of $A$, $R$ is a finitely generated module over the invariant subring $R^A$. Then by the previous discussion, $\coh^*(A,M)$ is finitely generated over $\coh^*(A,R^A)$ and hence is finitely generated over $\coh^*(A,R)$. In particular, when $M=R$, we conclude that $\coh^*(A,R)$ is finitely generated as a module over $\coh^*(A,R^A)=R^A\otimes \coh^*(A,k)$, which is noetherian.
(iii)$\Rightarrow$(i): By letting $R=k$, we know $\coh^*(A,k)$ is finitely generated and noetherian. For any finite $A$-module $M$, set $R=k\oplus M$ with $M^2=0$. Then one can check that the finite generation of $\coh^*(A,R)$ implies that $\coh^*(A,M)$ is finitely generated over $\coh^*(A,k)$.
(ii)$\Rightarrow$(iv): The even part $Z^{\rm ev}$ of $Z$ is an affine commutative $A$-module algebra, and $M$ is a finitely generated module over $Z^{\rm ev}\#A$ since $T$ is a finite module over $Z$, and hence over $Z^{\rm ev}$. Since $A$ satisfies {\rm \textbf{(hfg*)}}, $\coh^*(A,M)$ is noetherian over $\coh^*(A,Z^{{\rm ev}})$. As the action of $\coh^*(A,Z^{{\rm ev}})$ on $\coh^*(A,M)$ factors through that of $\coh^*(A,T)$, it follows that $\coh^*(A,M)$ is noetherian over $\coh^*(A,T)$.
(ii)$\Rightarrow$(iii) and (iv)$\Rightarrow$(ii) are clear.
\end{proof}
Now we are able to summarize various equivalent finite generation conditions on the cohomology of a finite dimensional Hopf algebra.
\begin{prop}
\label{equivfg*}
For a finite dimensional Hopf algebra $A$ over a field $k$, the following are equivalent:
\begin{enumerate}
\item $A$ satisfies {\rm \textbf{(hfg)}}.
\item $A$ satisfies {\rm \textbf{(fg)}} and $\HH^*(A)$ is a finitely generated module over $\coh^*(A,k)$.
\item $\coh^*(A,k)$ is a finitely generated algebra and $\Ext^*_A(A/{\rm rad}(A),A/{\rm rad}(A))$ is a finitely generated module over $\coh^*(A,k)$.
\item $\coh^*(A,k)$ is a finitely generated algebra and $\Ext^*_A(k,A/{\rm rad}(A))$ is a finitely generated module over $\coh^*(A,k)$.
\item $\coh^*(A,k)$ is a finitely generated algebra and $\Ext^*_A(M,N)$ is a finitely generated module over $\coh^*(A,k)$ for all pairs of finite $A$-modules $M$ and $N$.
\item $\coh^*(A,k)$ is a finitely generated algebra and $\Ext^*_A(k,M)$ is a finitely generated module over $\coh^*(A,k)$ for any finite $A$-module $M$.
\end{enumerate}
If in addition $A$ has the integral property, each of the above conditions is equivalent to {\rm \textbf{(hfg*)}}.
\end{prop}
\begin{proof}
(i)$\Leftrightarrow$(ii): is Proposition \ref{adjoint}.
(ii)$\Leftrightarrow$(iii)$\Leftrightarrow$(v): These implications follow directly from \cite[Proposition 1.4]{EHSS} (see Proposition \ref{equivfg}), where we take $\coh^*(A,k)$ as a subalgebra of $\HH^*(A) \cong \coh^*(A,A^{\rm ad})=\coh^*(A,k) \oplus \coh^*(A,I)$ \cite[Lemma 7.2]{PW}. Here $A^{\rm ad}$ is the adjoint representation of $A$ and $I$ is the augmentation ideal of $A$. Note that in each case, $\HH^*(A)$ is a finitely generated module over $\coh^*(A,k)$ and hence it is noetherian.
(i)$\Leftrightarrow$(iv): The direction ``$\Rightarrow$'' is clear, and the other direction comes from filtering a finite $A$-module by its composition series and noting that (iv) implies $\coh^*(A,S)$ is finitely generated over $\coh^*(A,k)$ for any simple $A$-module $S$.
(i)$\Leftrightarrow$(vi): It is clear since $\coh^*(A,k)$ is graded commutative.
Finally, the last statement is a consequence of Lemma~\ref{lem:hfg}.
\end{proof}
\begin{remarks}
\label{rem:hfg}\
\begin{itemize}
\item[(a)] It is straightforward to check that the condition {\rm \textbf{(hfg)}} is equivalent to the original finite generation condition in \cite{FW2015} which in characteristic $\neq 2$ says that $\coh^{\rm ev}(A,k)$ is finitely generated, and that for any pair of finite $A$-modules $M$ and $N$, $\Ext_A^*(M, N)$ is finitely generated over $\coh^{\rm ev}(A, k)$.
\item[(b)] More generally, {\rm \textbf{(hfg)}} can be stated for any augmented algebra $A$, and one can show that (i)$\Leftrightarrow$(iv)$\Leftrightarrow$(vi) in Proposition \ref{equivfg*} by further requiring $\coh^*(A,k)$ to be noetherian in (vi).
\end{itemize}
\end{remarks}
The equivalent finite generation conditions in Proposition~\ref{equivfg*} allow us to study the finite generation of (Hochschild) cohomology from various perspectives.
\section{A spectral sequence argument for the finite generation conditions}
\label{sec:homPI}
In this section, we use the Hilton-Eckmann argument in a suspended monoidal category to prove that the cohomology ring of $A$ with coefficients in an $A$-module algebra $R$ is finite over its affine graded center, under suitable assumptions on $R$. As applications, we show that the finite generation conditions hold for certain filtered, smash, and crossed product algebras, using the corresponding May and Lyndon-Hochschild-Serre spectral sequences.
We start with a special case in which the smash product is taken with a finite dimensional cocommutative Hopf algebra. For the key step, showing that a particular subalgebra lies in the graded center, we follow the argument first used in \cite[Corollary 3.2.2]{Benson1}.
\begin{lemma}
\label{lem:cocom}
Let $A$ be a finite dimensional cocommutative Hopf algebra satisfying {\rm \textbf{(hfg)}} and let $R=\bigoplus_{i\ge 0} R_i$ be a graded $A$-module algebra. If $R$ is a finitely generated noetherian algebra that is a finite module over a graded central $A$-module subalgebra $Z$, then $\coh^*(A,R)=\bigoplus_{i+j\ge 0}\coh^i(A,R_j)$ is a finitely generated noetherian algebra and is a finite module over its graded center (grading by total degree).
\end{lemma}
\begin{proof}
By Proposition \ref{prop:1}(2) and (3), $Z$ and $Z^{\rm ev}$ are finitely generated and noetherian. By Proposition~\ref{prop:integral}(4), $Z^{\rm ev}$ is a finitely generated module over its invariant subring $(Z^{\rm ev})^A$, and hence so is $R$. Write $W=(Z^{\rm ev})^A=\bigoplus_{i\ge 0} W_i$. Note that $\coh^*(A,W)\cong \coh^*(A,k)\otimes W$ is finitely generated and noetherian. By Lemma \ref{lem:hfg}, $\coh^*(A,R)$ is a finitely generated module over $\coh^*(A,W)$. It remains to show that $\coh^*(A,W)$ has image in the graded center of $\coh^*(A,R)$, which follows from the commutative diagram:
\[
\begin{xy}*!C\xybox{
\xymatrixcolsep{2pc}
\xymatrix{
& P_m \otimes P_n \ar[rrr]^{(-1)^{\alpha n} \, f_{m,\alpha} \otimes g_{n,\beta}} \ar[dd]^{(-1)^{mn} \, \tau} &&&W_{\alpha} \otimes R_{\beta} \ar[rrr]^{m_R} \ar[dd]^{(-1)^{mn+n\alpha+\beta m} \, \tau} &&& R_{\alpha+\beta} \\
P_{\DOT} \ar[ur]^{\Delta} \ar[dr]_{\Delta'} &&&&&&& \\
& P_n \otimes P_m \ar[rrr]^{(-1)^{\beta m} \, g_{n,\beta} \otimes f_{m,\alpha}} &&& R_{\beta} \otimes W_{\alpha} \ar[rrr]^{(-1)^{(m+\alpha)(n+\beta)}} &&& R_{\beta} \otimes W_{\alpha} \ar[uu]_{m_R},
}}
\end{xy}
\]
where $P_{\DOT}$ is a projective resolution of $k$ over $A$, $\Delta: P_{\DOT} \rightarrow P_{\DOT} \otimes P_{\DOT}$ is a diagonal map, $\Delta '$ is another diagonal map (defined by commutativity of the left triangle),
$f_{m,\alpha} \in \coh^m(A, W_\alpha)$, $g_{n,\beta} \in \coh^n(A,R_\beta)$, $\tau$ is the twisting map (which is always a morphism of $A$-modules since $A$ is cocommutative), and $m_R$ is the multiplication in $R$. The signs $(-1)^{\alpha n}$ and $(-1)^{\beta m}$ on the rows of the first square come from a standard sign convention. The second square reflects the fact that $W$ lies in the graded center of $R$. It follows from the commutativity of the diagram that
$$f_{m,\alpha} \smile g_{n,\beta} = (-1)^{(m+\alpha)(n+\beta)} \, g_{n,\beta} \smile f_{m,\alpha}.$$
\end{proof}
To obtain the needed results when $A$ is an arbitrary finite dimensional Hopf algebra, we follow the Hilton-Eckmann argument in the context of a suspended monoidal category, as demonstrated in \cite{Suarez-Alvarez}. A good reference for the terminology is \cite{EGNO}.
\begin{definition}
A {\bf suspended monoidal category} is a 9-tuple $(\mathcal C,\otimes,e,a,\ell,r,T,\lambda,\rho)$ such that $(\mathcal C,\otimes,e,a,\ell,r)$ is a monoidal category, $T:\mathcal C\to \mathcal C$ is an automorphism, $\lambda_{X,Y}: X\otimes TY\to T(X\otimes Y)$ and $\rho_{X,Y}:TX\otimes Y\to T(X\otimes Y)$ are isomorphisms of functors $\mathcal C\times \mathcal C\to \mathcal C$ (natural in the objects $X,Y\in {\rm obj}\,\mathcal C$), and the following diagrams commute:
\[
\xymatrix{
e\otimes TX\ar[r]^-{\ell}\ar[d]_-{\lambda} & TX\ar[d]^-{1}\\
T(e\otimes X)\ar[r]^-{T\ell} & TX
}
\qquad \qquad \qquad
\xymatrix{
TX\otimes e\ar[r]^-{r}\ar[d]_-{\rho} & TX\ar[d]^-{1}\\
T(X\otimes e)\ar[r]^-{Tr} & TX,
}
\]
while the following diagram anti-commutes:
\[
\xymatrix{
TX\otimes TY\ar[r]^-{\rho}\ar[d]_-{\lambda}\ar@{}[dr]|{(-1)}&T(X\otimes TY)\ar[d]^-{T\lambda}\\
T(TX\otimes Y)\ar[r]^-{T\rho}& T^2(X\otimes Y).
}
\]
\end{definition}
Given a suspended monoidal category, as in \cite[\S 1.5 and \S1.6]{Suarez-Alvarez}, we can inductively define a series of isomorphisms of functors $\mathcal C\times \mathcal C\to \mathcal C$ by
$$\lambda_q: X\otimes T^qY\to T^q(X\otimes Y) , \quad \rho_p: T^pX\otimes Y\to T^p(X\otimes Y),$$
for all $p,q\in \mathbb Z$ with $\lambda_1=\lambda, \rho_1=\rho$, such that the following diagrams commute:
\[
\xymatrix{
e\otimes T^qX\ar[r]^-{\ell}\ar[d]_-{\lambda_q} & T^qX\ar[d]^-{1}\\
T^q(e\otimes X)\ar[r]^-{T^q \ell} & T^qX
}
\qquad \qquad \qquad
\xymatrix{
T^pX\otimes e\ar[r]^-{r}\ar[d]_-{\rho_p} & T^pX\ar[d]^-{1}\\
T^p(X\otimes e)\ar[r]^-{T^pr} & T^pX,
}
\]
while the following diagram $(-1)^{pq}$-commutes:
\[
\xymatrix{
T^pX\otimes T^qY\ar[r]^-{\rho_p}\ar[d]_-{\lambda_q}\ar@{}[dr]|{(-1)^{pq}}&T^p(X\otimes T^qY)\ar[d]^-{T^p\lambda_q}\\
T^q(T^pX\otimes Y)\ar[r]^-{T^q\rho_p}& T^{p+q}(X\otimes Y).
}
\]
Next, we have isomorphisms $\sigma: X\otimes e\cong e\otimes X$ and $\tau: e\otimes X\cong X\otimes e$ satisfying $\ell\sigma=r$ and $r\tau=\ell$ for any $X\in {\rm obj}\,\mathcal C$. Moreover, all the isomorphisms above can be naturally extended from $e$ to $Y$ whenever $Y=\bigoplus e$ is a direct sum of copies of the identity object $e$.
Now, let $R=\bigoplus_{i \in \mathbb Z} R_i$ be a graded ring in $\mathcal C $ with product maps $m_{ij}: R_i\otimes R_j\to R_{i+j}$ satisfying the associativity axiom:
\[
\xymatrix{
(R_i \otimes R_j)\otimes R_k \ar[r]^-{m_{ij}\otimes 1}\ar[d]_-{a_{ijk}} & R_{i+j}\otimes R_k\ar[dd]^-{m_{(i+j)k}}\\
R_i\otimes (R_j\otimes R_k)\ar[d]_-{1\otimes m_{jk}} &\\
R_i\otimes R_{j+k}\ar[r]^-{m_{i(j+k)}} & R_{i+j+k}.
}
\]
Moreover, we say $R$ is {\bf unital} if there is a morphism $u: e\to R_0$ satisfying the unit axiom
\[
\xymatrix{
e\otimes R_i\ar[rr]^-{u\otimes 1}\ar[rd]_-{\ell} &&R_0\otimes R_i\ar[dl]^-{m_{0i}}\\
&R_i&
}\quad\quad\quad
\xymatrix{
R_i\otimes e\ar[rr]^-{1\otimes u}\ar[rd]_-{r} &&R_i\otimes R_0\ar[dl]^-{m_{i0}}\\
&R_i&
}.
\]
Consider the ring
$$
E(R):=\bigoplus_{i,j \in \mathbb Z}\, \Hom_\mathcal C(e,T^iR_j),
$$
where the product in $E(R)$ is given by the following composition for any $f: e\to T^pR_\alpha$ and $g: e\to T^qR_\beta$:
\begin{equation}\label{eq:prod}
\small
\xymatrix{
f\cdot g:e && e\otimes e\ar[ll]_-{\ell=r}\ar[rr]^-{(-1)^{q\alpha}f\otimes g} && T^pR_\alpha \otimes T^qR_\beta \ar[rr]^-{\rho_p}&& T^p(R_\alpha\otimes T^qR_\beta)\\
\ar[rr]^-{T^p\lambda_q}&&T^{p+q}(R_\alpha\otimes R_\beta)\ar[rr]^-{T^{p+q}m_{\alpha\beta}}&&T^{p+q}R_{\alpha+\beta}.
}
\end{equation}
The sign $(-1)^{q\alpha}$ above comes from the sign convention when passing $g$ over $R_{\alpha}$. Notice that if $R$ is unital, then $E(e)$ has image as a graded subalgebra of $E(R)$. Moreover, we say that $M=\bigoplus_{i\in \mathbb Z} M_i$ is a graded (bi)module over $R$ if there are morphisms $\ell_{ij}: R_i\otimes M_j\to M_{i+j}$ (resp. $r_{ji}: M_j\otimes R_i\to M_{i+j}$) satisfying some obvious compatibility conditions. Suppose $M=\bigoplus_{i\in \mathbb Z} M_i$ is a graded bimodule over $R=\bigoplus_{i \in \mathbb Z} R_i$ with each $R_i$ being a direct sum of copies of $e$. We call $M$ a \textbf{graded symmetric bimodule} over $R$ if the following diagrams commute:
\[
\xymatrix{
R_i\otimes M_j\ar[rr]^-{\tau}\ar[dr]_-{r_{ij}} &\ar@{}[d]|{(-1)^{ij}}& M_j\otimes R_i\ar[dl]^-{\ell_{ji}}\\
& M_{i+j}&
}\qquad\quad\qquad
\xymatrix{
M_j\otimes R_i\ar[rr]^-{\sigma}\ar[dr]_-{r_{ji}} &\ar@{}[d]|{(-1)^{ij}}& R_i\otimes M_j\ar[dl]^-{\ell_{ij}}\\
& M_{i+j}&
}
\]
Similarly we denote
$$
E(M):=\bigoplus_{i,j \in \mathbb Z}\, \Hom_\mathcal C(e,T^iM_j).
$$
The following theorem extends Su\'{a}rez-\'{A}lvarez's results \cite{Suarez-Alvarez} on the graded commutativity of the (Hochschild) cohomology ring of a finite dimensional (Hopf) algebra.
\begin{theorem}
\label{thm:gc}
Retain the above notations. Let $M=\bigoplus_{i\in \mathbb Z} M_i$ be a graded symmetric bimodule over $R=\bigoplus_{i \in \mathbb Z} R_i$ with each $R_i$ being a direct sum of copies of $e$. Then $E(M)$ is a graded symmetric bimodule over $E(R)$. In particular, $E(M)$ is a graded symmetric bimodule over $E(e)$.
\end{theorem}
\begin{proof}
Take $f: e\to T^pM_\alpha$ and $g:e\to T^qR_\beta$. It is straightforward to check that we have the following $(-1)^{p\beta}$-commutative diagram:
\[\small
\xymatrix{
e\ar[d]_-{g}& e\otimes e\ar[l]_-{r}\ar[r]^-{(-1)^{p\beta}g\otimes f}\ar[d]_-{g\otimes 1}\ar@{}[dr]|{(-1)^{p\beta}} & T^qR_\beta \otimes T^pM_\alpha\ar[d]^-{1\otimes 1} \\
T^qR_\beta\ar[d]_-{1} &T^qR_\beta \otimes e\ar[l]_-{r}\ar[r]^-{1\otimes f}\ar[d]_-{\rho_q}&T^qR_\beta \otimes T^pM_\alpha\ar[d]^-{\rho_q}\\
T^qR_\beta &T^q(R_\beta \otimes e)\ar[l]_-{T^qr}\ar[r]^-{T^q(1\otimes f)} &T^q(R_\beta\otimes T^pM_\alpha)\ar[r]^-{T^q\lambda_p}& T^{p+q}(R_\beta\otimes M_\alpha)\ar[r]^-{T^{p+q} \ell_{\beta\alpha}} & T^{p+q}M_{\alpha+\beta}
}
\]
where the right outer boundary represents $g\cdot f$. On the other hand, the right outer boundary of the $(-1)^{q\alpha+pq}$-commutative diagram below is equal to $f\cdot g$:
\[\small
\xymatrix{
e\ar[d]_-{g}& e\otimes e\ar[l]_-{\ell=r}\ar[r]^-{(-1)^{q\alpha}f\otimes g}\ar[d]_-{1\otimes g}\ar@{}[dr]|{(-1)^{q\alpha}} & T^pM_\alpha \otimes T^qR_\beta\ar[d]^-{1\otimes 1} \\
T^qR_\beta\ar[d]_-{1} &e \otimes T^qR_\beta \ar[l]_-{\ell}\ar[r]^-{f\otimes 1}\ar[d]_-{\lambda_q}&T^pM_\alpha\otimes T^qR_\beta \ar@{}[dr]|{(-1)^{pq}}\ar[d]^-{\lambda_q}\ar[r]^-{\rho_p}& T^p(M_\alpha\otimes T^qR_\beta)\ar[d]^-{T^p\lambda_q}\\
T^qR_\beta &T^q( e\otimes R_\beta)\ar[l]_-{T^q \ell}\ar[r]^-{T^q(f\otimes 1)} &T^q(T^pM_\alpha\otimes R_\beta)\ar[r]^-{T^q\rho_p}& T^{p+q}(M_\alpha\otimes R_\beta)\ar[r]^-{T^{p+q}r_{\alpha\beta}} & T^{p+q}M_{\alpha+\beta}
}
\]
Therefore, the anti-commutativity of the product of $f$ and $g$ can be derived from the next $(-1)^{\alpha\beta}$-commutative diagram:
\[\small
\xymatrix{
e\ar[d]_-{1} &T^qR_\beta\ar[l]_{g}\ar[d]_-{1} & T^q(e\otimes R_\beta)\ar[l]_{T^q \ell}\ar[r]^-{T^q(f\otimes 1)}\ar[d]_-{T^q\sigma} & T^q(T^pM_\alpha\otimes R_\beta)\ar[r]^-{T^q\rho_p}\ar[d]_-{T^q\sigma}& T^{p+q}(M_\alpha\otimes R_\beta)\ar[r]^-{T^{p+q}r_{\alpha\beta}} \ar[d]_-{T^{p+q}\sigma}\ar@{}[dr]|{(-1)^{\alpha\beta}}&T^{p+q}M_{\alpha+\beta}\ar[d]_-{1}\\
e & T^qR_\beta\ar[l]_{g} & T^q(R_\beta \otimes e)\ar[l]_{T^qr}\ar[r]^-{T^q(1\otimes f)} & T^q(R_\beta \otimes T^pM_\alpha)\ar[r]^-{T^q\lambda_p} &T^{p+q}(R_\beta \otimes M_\alpha)\ar[r]^-{T^{p+q} \ell_{\beta\alpha}} & T^{p+q}M_{\alpha+\beta}, \\
}
\]
where the top row is equal to $(-1)^{q\alpha+pq}f\cdot g$ and the bottom row equals $(-1)^{p\beta}g\cdot f$. Hence $f\cdot g=(-1)^{(p+\alpha)(q+\beta)}g\cdot f$.
Note that the third square commutes because of the functoriality of $\sigma$ and the fourth square commutes by commutativity of
\[\small
\xymatrix{
T^pX\otimes e\ar@/_2.0pc/[dd]_-{\sigma}\ar[d]^-{r} \ar[r]^-{\rho_p}& T^p(X\otimes e)\ar@/^2.0pc/[dd]^-{T^p\sigma}\ar[d]_-{T^pr}\\
T^pX\ar[r]^-{1} & T^pX\\
e\otimes T^pX\ar[r]^-{\lambda_p}\ar[u]_-{\ell}& T^p(e\otimes X)\ar[u]^-{T^p \ell}
}
\]
for any object $X\in {\rm obj}\, \mathcal C$.
\end{proof}
\begin{corollary}
\label{cor:algebragc}
Let $(\mathcal C,\otimes,e,a,\ell,r,T,\lambda,\rho)$ be a suspended monoidal category. Let $W=\bigoplus_{i\in \mathbb Z} W_i$ be a subring of $R=\bigoplus_{i\in \mathbb Z} R_i$ such that each $W_i$ is a direct sum of copies of $e$ and the following diagrams $(-1)^{ij}$-commute:
\[
\xymatrix{
W_i\otimes R_j\ar[rr]^-{\tau}\ar[dr]_-{\mu} &\ar@{}[d]|{(-1)^{ij}}& R_j\otimes W_i\ar[dl]^-{\mu}\\
& R_{i+j}&
}\quad\quad\quad
\xymatrix{
R_i\otimes W_j\ar[rr]^-{\sigma}\ar[dr]_-{\mu} &\ar@{}[d]|{(-1)^{ij}}& W_j\otimes R_i\ar[dl]^-{\mu}\\
& R_{i+j}&
}
\]
Then the subring $E(W)$ has image in the graded center of $E(R)$.
\end{corollary}
\begin{corollary}\cite[Theorem 1.7]{Suarez-Alvarez}
\label{cor:gc}
Let $(\mathcal C,\otimes,e,a,\ell,r,T,\lambda,\rho)$ be a suspended monoidal category. Set
$$
E(e):=\bigoplus_{i \in \mathbb Z}\,\Hom_\mathcal C(e,T^ie).
$$
If $f: e\to T^pe$ and $g: e\to T^qe$, define $f\cdot g=T^qf\circ g: e\to T^{p+q}e$. Then $E(e)$ is a commutative graded ring.
\end{corollary}
\begin{proof}
Let $R=M=e$ in Theorem~\ref{thm:gc}. Note that the product $f\cdot g=T^qf\circ g: e\to T^{p+q}e$ defined in the statement coincides with the formula \eqref{eq:prod}; see the proof of \cite[Theorem 1.7]{Suarez-Alvarez}.
\end{proof}
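To fix ideas, we spell out the instance of Corollary \ref{cor:gc} that is used repeatedly below; this is only the standard specialization, recorded for the reader's convenience. Take $\mathcal C$ to be the derived category of ${\rm Mod}(A)$ for a finite dimensional Hopf algebra $A$, with $\otimes=\otimes_k$, $e=k$, and $T=[1]$ the shift functor. Then
\[
E(e)\;=\;\bigoplus_{i\in \mathbb Z}\Hom_{\mathcal C}(k,k[i])\;\cong\;\bigoplus_{i\ge 0}\Ext^i_A(k,k)\;=\;\coh^*(A,k),
\]
with the product of Corollary \ref{cor:gc} corresponding to the Yoneda (equivalently, cup) product, so the corollary recovers the graded commutativity
\[
f\cdot g\;=\;(-1)^{pq}\,g\cdot f, \qquad f\in \coh^p(A,k),\ g\in \coh^q(A,k).
\]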
\noindent
\underline{{\bf Applications.}} For finite dimensional (Hopf) algebras, spectral sequences are powerful tools for handling cohomology rings, so we are interested in multiplicative spectral sequences satisfying similar finite generation conditions. The spectral sequences we have in mind are the May spectral sequences associated with filtered algebras and the Lyndon-Hochschild-Serre spectral sequences associated with smash and crossed products; they are described explicitly in the Appendix for completeness. We use these spectral sequences to deduce the finite generation conditions for the original (Hopf) algebras, provided the initial pages satisfy the finite generation conditions and the spectral sequences collapse at certain pages in positive characteristic.
\begin{prop}
\label{prop:hfg}
Let $A$ be a finite dimensional Hopf algebra over a field $k$, $M$ be any finite dimensional $A$-module, and $R=\bigoplus_{i\ge 0} R_i$ be a connected graded $A$-module algebra. Then
\begin{enumerate}
\item $\coh^*(A,k)$ maps to the graded center of $\Ext_A^*(M,M)$. Moreover, if $A$ satisfies {\rm \textbf{(hfg)}}, then $\Ext_A^*(M,M)$ is noetherian and a finite module over its graded center.
\item $\coh^*(A,k)$ maps to the graded center of $\coh^*(A,R)=\bigoplus_{i,j\ge 0}\coh^i(A,R_j)$ (with respect to the total degree). Moreover, if $A$ satisfies {\rm \textbf{(hfg*)}} and $R$ is a finitely generated noetherian algebra and a finite module over some graded central $A$-module subalgebra, then $\coh^*(A,R)$ is noetherian and a finite module over its graded center.
\end{enumerate}
\end{prop}
\begin{proof}
We use Theorem~\ref{thm:gc} where $\mathcal C$ is the left derived category of ${\rm Mod}(A)$ with $\otimes=\otimes_k$, $e=k$ and $T=[1]$ is the shift functor.
(i) Let $M^*$ be the left dual of $M$ (see e.g., \cite[\S 2.10]{EGNO}). Note that $\Ext_A^*(M, M)\cong \Ext_A^*(k, M\otimes M^*)$ and the standard actions of $\coh^*(A, k)$ on either one correspond under this isomorphism. By Theorem~\ref{thm:gc}, $\coh^*(A,k)$ maps to the graded center of $\Ext_A^*(k, M\otimes M^*)$ via the coevaluation map $\text{coev}: k\to M\otimes M^*$. Moreover, if $A$ satisfies {\rm \textbf{(hfg)}}, then $\Ext_A^*(k, M\otimes M^*)$ is finitely generated as a module over the noetherian graded central subalgebra given by the image of $\coh^*(A,k)$, and as a consequence it is noetherian.
(ii) Since $k=R_0$ certainly lies in the graded center of $R$, $\coh^*(A,k)$ maps to the graded center of $\coh^*(A,R)$. Now assume $A$ satisfies {\rm \textbf{(hfg*)}}, and denote by $Z$ a graded central $A$-module subalgebra of $R$ over which $R$ is a finite module. Write $W=(Z^{\rm ev})^A=\bigoplus_{i\ge 0} W_i$, where $R$ is finitely generated as a module over $W$. Then $\coh^*(A,R)$ is a finitely generated module over the noetherian algebra $\coh^*(A,W)$ by {\rm \textbf{(hfg*)}}, and hence is itself noetherian. We use Corollary \ref{cor:algebragc} to conclude that $\coh^*(A,W)$ maps to the graded center of $\coh^*(A,R)$, since each homogeneous component $W_i$ of $W$ is a direct sum of copies of the trivial module $k$.
\end{proof}
Similarly, we obtain the following result, where part (i) is a special case of \cite[Theorem 1.1]{SO}.
\begin{prop}
\label{prop:fg}
Let $A$ be a finite dimensional algebra over a field $k$, $M$ be a finite dimensional $A$-module, and $R$ be a finite dimensional unital algebra in ${\rm Mod}(A^e)$. Then
\begin{enumerate}
\item $\HH^*(A)$ maps to the graded center of $\Ext_A^*(M,M)$. Moreover, if $A$ satisfies {\rm \textbf{(fg)}}, then $\Ext_A^*(M,M)$ is noetherian and a finite module over its graded center.
\item $\HH^*(A)$ maps to the graded center of $\HH^*(A,R)$. Moreover, if $A$ satisfies {\rm \textbf{(fg)}}, then $\HH^*(A,R)$ is noetherian and a finite module over its graded center.
\end{enumerate}
\end{prop}
\begin{proof}
Again, apply Theorem~\ref{thm:gc} where $\mathcal C$ is the left derived category of ${\rm Mod}(A^e)$ with $\otimes=\otimes_A$, $e=A$ and $T=[1]$ is the shift functor.
(i) Note that $\Ext_A^*(M, M)\cong \HH^*(A, \Hom_k(M,M))$ and the standard actions of $\HH^*(A)$ on either one correspond under this isomorphism. By Theorem~\ref{thm:gc}, $\HH^*(A)$ maps to the graded center of $\HH^*(A, \Hom_k(M,M))$.
Moreover, if $A$ satisfies {\rm \textbf{(fg)}}, then $\HH^*(A, \Hom_k(M,M))$ is finitely generated as a module over the noetherian graded central subalgebra $\HH^*(A)$, and as a consequence it is noetherian.
(ii) Since $R$ is $A$-unital, by Corollary \ref{cor:algebragc}, $\HH^*(A)$ maps to the graded center of $\HH^*(A,R)$. Moreover, if $A$ satisfies {\rm \textbf{(fg)}}, $\HH^*(A,R)$ is finitely generated as a module over the noetherian graded central subalgebra $\HH^*(A)$, and as a consequence it is noetherian.
\end{proof}
\begin{lemma}
\label{lem:key}
Let $R$ be a commutative noetherian ring with characteristic $m>0$. Let $\{E^{i,j}_r\}_{r,i,j}$ be a convergent multiplicative spectral sequence of $R$-algebras concentrated in the half plane $i+j \ge 0$. Assume that for some $r_0\ge 1$,
\begin{enumerate}
\item $E_{r_0}^{*,*}$ is a finitely generated module over its graded center
(grading by total degree),
\item $E_{r_0}^{*,*}$ is a noetherian $R$-algebra, and
\item the spectral sequence $\{E^{i,j}_r\}_{r,i,j}$ collapses at some page $r_1\ge r_0$.
\end{enumerate}
Then $E_\infty^{*,*}$ is a noetherian $R$-algebra.
Additionally, let $\{\widetilde{E}^{i,j}_r\}_{r,i,j}$ be a convergent spectral sequence that is a differential bigraded module over $\{E^{i,j}_r\}_{r,i,j}$. Suppose that for the same value $r_0$, $\widetilde{E}_{r_0}^{*,*}$ is finitely generated over $E_{r_0}^{*,*}$. Then $\widetilde{E}_\infty^{*,*}$ is finitely generated over $E_\infty^{*,*}$.
\end{lemma}
\begin{proof}
For each $r\geq 0$, let $C^{*,*}_r$ be the graded center of $E_r^{*,*}$.
Then for each $c\in C^{i,j}_r$ and $x\in E^{i',j'}_r$,
\begin{equation}\label{eqn:key-cx}
cx = (-1)^{(i+j)(i'+j')} x c
\end{equation}
and $d(c)\in C^{i-1,j}_r \oplus C^{i, j-1}_r$.
As a consequence of equation~(\ref{eqn:key-cx}), $d(c)$ commutes with $c$,
i.e.,~$c d(c) = d(c) c$. Thus if the total degree $i+j$ of $c$ is odd, then
\[
d(c^2)= d(c) c + (-1)^{i+j} c d(c) = 0 ,
\]
while if $i+j$ is even, since $m = \chara (R)$, we have
\[
d(c^m) = d(c) c^{m-1} + c d(c) c^{m-2} +
\cdots + c^{m-1} d(c) = m\, d(c) c^{m-1}=0 .
\]
It follows that for each $c$ in $C^{i,j}_r$ there is a finite positive power of $c$ that is
a cycle (namely $c^2$ if the total degree is odd, and $c^m$ if it is even).
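For illustration, when $m=\chara(R)=2$ the two cases coincide: signs are immaterial in characteristic $2$, so for any homogeneous $c\in C^{i,j}_r$ the Leibniz rule together with $c\,d(c)=d(c)\,c$ gives
\[
d(c^2)~=~d(c)\,c+c\,d(c)~=~2\,d(c)\,c~=~0,
\]
and $c^2$ is a cycle regardless of parity.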
By hypothesis, $E^{*,*}_{r_0}$ is a noetherian $R$-algebra
that is a finitely generated module over $C^{*,*}_{r_0}$.
By Proposition~\ref{prop:1}(3), $C^{*,*}_{r_0}$ is noetherian
and finitely generated.
Let $c_1,\ldots, c_n$ be a set of homogeneous generators of
$C^{*,*}_{r_0}$.
As above, for each $i$, some positive power of $c_i$ is a cycle. Since the spectral sequence $\{E^{i,j}_r\}_{r,i,j}$ collapses at the page $r_1\ge r_0$, repeating this process finitely many times yields, for each $i$, a finite positive power of $c_i$, say $c_i^{t_i}$, that is a permanent cycle. Let
$$A^{*,*}_{r_0}:= R \langle c_1^{t_1},\ldots, c_n^{t_n} \rangle \subseteq C^{*,*}_{r_0},$$
that is, $A_{r_0}^{*,*}$ is the subalgebra of $C_{r_0}^{*,*}$ generated by $c_1^{t_1},\ldots, c_n^{t_n}$.
By its definition, $A_{r_0}^{*,*}$ consists of permanent cycles and $C^{*,*}_{r_0}$ is
finitely generated as a module over $A^{*,*}_{r_0}$. It follows that $E^{*,*}_{r_0}$ is also a finitely generated module
over $A^{*,*}_{r_0}$.
For all $r>r_0$, let
$$A_r^{*,*} := A^{*,*}_{r-1} / B_{r-1}^{*,*} \subseteq E^{*,*}_r ,$$
the subalgebra of $E^{*,*}_r$ given by
the quotient of $A^{*,*}_{r-1}$ by its ideal $B_{r-1}^{*,*}$
consisting of coboundaries.
By construction, we have a sequence of algebras:
\[
\xymatrix{
A^{*,*}_{r_0} \ar@{->>}[r] & \cdots \ar@{->>}[r]
& A_r^{*,*} \ar@{->>}[r] & A _{r+1}^{*,*} \ar@{->>}[r]& \cdots ,
}
\]
for which each $A_i^{*,*}$ is a subalgebra of $E_i^{*,*}$.
For each $r \geq r_0$, let $\Lambda ^{*,*}_r$ be the subalgebra
consisting of \emph{all permanent} cycles in $E^{*,*}_r$.
A calculation shows that since $A^{*,*}_r$ consists of permanent cycles,
$d_r (E^{*,*}_r)$ is an $A^{*,*}_r$-submodule of $\Lambda^{*,*}_r$.
Writing $\Lambda^{*,*}_{r+1} := \Lambda_r^{*,*}/ d_r( E_r^{*,*})$
for each $r$, we have a sequence of $A^{*,*}_{r_0}$-modules:
\[
\xymatrix{
\Lambda^{*,*}_{r_0} \ar@{->>}[r] & \cdots \ar@{->>}[r]
& \Lambda_r^{*,*} \ar@{->>}[r] & \Lambda _{r+1}^{*,*} \ar@{->>}[r]& \cdots .
}
\]
Now $\Lambda^{*,*}_{r_0}$ is an $A^{*,*}_{r_0}$-submodule of
$E^{*,*}_{r_0}$, and so is a noetherian $A^{*,*}_{r_0}$-module.
Let $K_r$ be the kernel of the surjection from $\Lambda^{*,*}_{r_0}$
to $\Lambda_r^{*,*}$. Then $K_r$ is also a noetherian $A^{*,*}_{r_0}$-module.
There is an increasing chain of submodules of $\Lambda^{*,*}_{r_0}$:
\[
K_{r_0} \subseteq K_{r_0+1} \subseteq \cdots .
\]
Since $\Lambda^{*,*}_{r_0}$ is noetherian, this chain stabilizes,
that is $K_s = K_{s+1} = \cdots$, for some $s \ge r_0$.
Therefore, $\Lambda^{*,*}_s = E^{*,*}_{\infty}$, and
$E^{*,*}_{\infty}$ is itself a noetherian $A^{*,*}_{r_0}$-module.
By~\cite[Proposition~2.1]{Evens}, $E^{*,*}_{\infty}$ is a noetherian module
over $\Tot (A^{*,*}_{r_0})$.
Since $A^{*,*}_{r_0}$ is finitely generated, it follows
that $E^{*,*}_{\infty}$ is a noetherian $R$-algebra.
For the remaining statement regarding $\widetilde{E}^{i,j}$, under the above setup and hypotheses, $\widetilde{E}^{*,*}_{r_0}$ is also a finitely generated module over $A^{*,*}_{r_0}$. For each $r \geq r_0$, let $\widetilde{\Lambda} ^{*,*}_r$ be the submodule consisting of \emph{all} permanent cycles in $\widetilde{E}^{*,*}_r$ and define $\widetilde{\Lambda}^{*,*}_{r+1} := \widetilde{\Lambda}_r^{*,*}/ d_r( \widetilde{E}_r^{*,*})$. Now $\widetilde{\Lambda}^{*,*}_{r_0}$ is an $A^{*,*}_{r_0}$-submodule of
$\widetilde{E}^{*,*}_{r_0}$, and so is a finitely generated $A^{*,*}_{r_0}$-module. Similar arguments show that $\widetilde{E}^{*,*}_{\infty}$ is a noetherian $A^{*,*}_{r_0}$-module and is finitely generated over $E^{*,*}_{\infty}$.
\end{proof}
\noindent
{\bf (Hochschild) Cohomology of filtered algebras:}
Let $A$ be a finite dimensional filtered (Hopf) algebra over a field $k$, and denote by $\gr A$ the associated graded (Hopf) algebra. Then there exists a May spectral sequence computing the (Hochschild) cohomology of $A$ whose first page is the (Hochschild) cohomology of $\gr A$. Similarly, there is a May spectral sequence that computes the (Hochschild) cohomology of $A$ with coefficients in any finite dimensional $A$-module $M$. Moreover, these spectral sequences inherit structures compatible with the cup product on the (Hochschild) cohomology of $A$ and with its module action on the (Hochschild) cohomology with coefficients in $M$ (see Appendix~\ref{subsec:May} and Appendix \ref{subsec:May-HH}).
\begin{theorem}
\label{thm:FFGC}
Let $A$ be a finite dimensional filtered algebra (resp.~Hopf algebra) over a field $k$ of positive characteristic. If the associated graded algebra (resp.~Hopf algebra) $\gr A$ satisfies {\rm \textbf{(fg)}} (resp.~{\rm \textbf{(hfg)}}) and the May spectral sequence used to compute $\HH^*(A)$ (resp.~$\coh^*(A,k)$) collapses, then $A$ satisfies {\rm \textbf{(fg)}} (resp.~{\rm \textbf{(hfg)}}).
\end{theorem}
\begin{proof}
It follows from Lemma \ref{lem:key} and the discussion above since the (Hochschild) cohomology ring of $A$ is always graded commutative.
\end{proof}
\noindent
{\bf Cohomology of crossed products:} Let $A=R \#_\sigma H$ be the crossed product of two finite dimensional Hopf algebras $R$ and $H$ over a field $k$ with respect to a cocycle $\sigma$ (see~\cite{MO93}). We assume that the augmentations of $R$ and $H$ are preserved under the crossed product so that $A$ is again augmented. In this case, there are Lyndon-Hochschild-Serre spectral sequences associated to the crossed product $A=R \#_\sigma H$ and any finite dimensional $A$-module $M$:
\begin{align*}
E_2^{p,q}(A)=\coh^p(H,\coh^q(R,k))&~\Longrightarrow~ \coh^{p+q}(A,k)=E_\infty^{p,q}(A), \\
E_2^{p,q}(M)=\coh^p(H,\coh^q(R,M))&~ \Longrightarrow~ \coh^{p+q}(A,M)=E_\infty^{p,q}(M).
\end{align*}
These spectral sequences inherit structures that are compatible with the multiplicative structure of $\coh^*(A,k)$ and its module structure on $\coh^*(A,M)$ (see Appendix~\ref{subsec:cohomology-smash}).
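For instance, when the cocycle $\sigma$ is trivial and $H=kG$ is the group algebra of a finite group $G$, the crossed product reduces to the smash product $A=R\# kG$, and the first spectral sequence above specializes to
\begin{align*}
E_2^{p,q}(A)=\coh^p(G,\coh^q(R,k))~\Longrightarrow~\coh^{p+q}(R\# kG,k),
\end{align*}
a Lyndon-Hochschild-Serre spectral sequence in its classical form.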
\begin{theorem}
\label{thm:FiniteTypeCoh}
Retain the notations above. Further assume that $R$ satisfies {\rm \textbf{(hfg)}} and $H$ satisfies {\rm \textbf{(hfg*)}}. Then $A=R \#_\sigma H$ satisfies {\rm \textbf{(hfg)}} if the corresponding LHS spectral sequence $E^{*,*}_r(A)$ collapses for
\begin{enumerate}
\item $r=2$ when $\chara(k)=0$, or
\item some $r\ge 2$ when $\chara(k)>0$.
\end{enumerate}
\end{theorem}
\begin{proof}
Since $R$ satisfies {\rm \textbf{(hfg)}}, the cohomology ring $\coh^*(R,k)$ is graded commutative and finitely generated. In view of Proposition~\ref{prop:dgmod-ss-coh} and the fact that $H$ satisfies {\rm \textbf{(hfg*)}}, we know $E_2^{*,*}(A)=\coh^*(H,\coh^*(R,k))$ is a finitely generated noetherian algebra over $E_2^{0,0}(A)=k$ and a finite module over its graded center. Moreover, since $\coh^*(R,M)$ is finite over $\coh^*(R, k)$, Lemma~\ref{lem:hfg}(iv) implies that $E_2^{*,*}(M)=\coh^*(H,\coh^*(R,M))$ is a finite module over $E_2^{*,*}(A)=\coh^*(H,\coh^*(R,k))$. Thus the result holds in characteristic zero. For positive characteristic, we can apply Lemma \ref{lem:key}, since all the assumptions there are satisfied when $r_0=2$.
\end{proof}
\noindent
{\bf Hochschild cohomology of smash products:} Now let $A=R\#H$ be the smash product of a finite dimensional Hopf algebra $H$ and a finite dimensional $H$-module algebra $R$. There are Lyndon-Hochschild-Serre spectral sequences associated to the smash product $A=R \# H$ and any finite bimodule $M$ over $A$:
\begin{align*}
E_2^{p,q}(A)=\coh^p(H,\HH^q(R,A)) &~\Longrightarrow~ \HH^{p+q}(A)=E_\infty^{p,q}(A),\\
E_2^{p,q}(M)=\coh^p(H,\HH^q(R,M)) &~\Longrightarrow~ \HH^{p+q}(A,M)=E_\infty^{p,q}(M).
\end{align*}
These spectral sequences naturally inherit structures that are compatible with the multiplicative structure of $\HH^*(A)$ and its module structure on $\HH^*(A,M)$ (see Appendix~\ref{subsec:hochschild}).
\begin{theorem}\label{thm:FiniteTypeHochschild}
Retain the notations above. Further assume that $R$ satisfies {\rm \textbf{(fg)}} and $H$ is cocommutative. Then $A=R \# H$ satisfies {\rm \textbf{(fg)}} if the corresponding LHS spectral sequence $E^{*,*}_r(A)$ collapses for
\begin{enumerate}
\item $r=2$ when $\chara(k)=0$, or
\item some $r\ge 2$ when $\chara(k)>0$.
\end{enumerate}
\end{theorem}
\begin{proof}
Since $R$ satisfies {\rm \textbf{(fg)}}, $\HH^*(R,A)$ is a finite module over $\HH^*(R)$. This implies that $\HH^*(R,A)$ is finitely generated and noetherian since $\HH^*(R)$ is. Furthermore, $\HH^*(R)$ is mapped to the graded center of $\HH^*(R,A)$ via the embedding $R\hookrightarrow A$ by Proposition~\ref{prop:fg}(ii). Next we show that the image of $\HH^*(R)$ is an $H$-module subalgebra of $\HH^*(R,A)$.
Using the notation in Appendix~\ref{subsec:hochschild}, let $K_{\DOT}$ be an $H$-equivariant $R$-bimodule resolution of $R$. Then $\HH^*(R,A) \cong \coh^*(\Hom_{R^e}(K_{\DOT},A))$ and $\HH^*(R) \cong \coh^*(\Hom_{R^e}(K_{\DOT},R))$. Choose any $f \in \Hom_{R^e}(K_{\DOT},R) \subset \Hom_{R^e}(K_{\DOT},A)$, $h \in H$, and $u \in K_{\DOT}$. Applying the $H$-action given in Appendix~\ref{subsec:hochschild}
and the assumption that $H$ is cocommutative, we find that
\begin{align*}
(f \cdot h)(u) &~=~\sum S(h_1) f(h_2 \cdot u) h_3 ~=~\sum [S(h_2) \cdot f(h_3 \cdot u)] S(h_1)h_4 \\
&~=~ \sum [S(h_3) \cdot f(h_4 \cdot u)] S(h_1)h_2 ~=~ \sum S(h_1) \cdot f(h_2 \cdot u) \in R.
\end{align*}
Hence, $\Hom_{R^e}(K_{\DOT},R)$ is an $H$-invariant subcomplex of $\Hom_{R^e}(K_{\DOT},A)$. Passing to homology, the image of $\HH^*(R)$ is an $H$-module subalgebra of $\HH^*(R,A)$.
By a result of Friedlander and Suslin~\cite{FS}, $H$ satisfies {\rm \textbf{(hfg)}}, and hence it satisfies {\rm \textbf{(hfg*)}} by Lemma~\ref{lem:hfg} since $H$ is cocommutative (see~Proposition \ref{prop:integral}). By Proposition~\ref{prop:hfg}(ii), we know $E_2^{*,*}(A)=\coh^*(H,\HH^*(R,A))$ is a finitely generated noetherian algebra over the center $E_2^{0,0}(A)=Z(A)$ of $A$ and a finite module over its graded center. Moreover, since $\HH^*(R,M)$ is a finite module over $\HH^*(R,A)$, Lemma~\ref{lem:hfg}(iv) implies that $E_2^{*,*}(M)=\coh^*(H,\HH^*(R,M))$ is finitely generated over $E_2^{*,*}(A)=\coh^*(H,\HH^*(R,A))$. Thus, the result holds in characteristic zero. For positive characteristic, we can apply Lemma \ref{lem:key} since all the assumptions there are satisfied when $r_0=2$.
\end{proof}
\section{A lifting method for the finite generation conditions via reduction modulo $p$}
\label{sec:mod p}
\numberwithin{equation}{section}
In this section, we use the reduction modulo $p$ method from number theory to handle the finite generation conditions for the (Hochschild) cohomology rings of (Hopf) algebras over a field of characteristic zero, mainly over the field of complex numbers $\mathbb C$.
Our first result shows that, regarding the finite generation conditions over a field of characteristic zero, it suffices to work over $\mathbb C$.
\begin{lemma}\label{lem:CN}
Let $k$ be an arbitrary field of characteristic zero, and $A$ be a finite dimensional algebra (resp. Hopf algebra) over $k$. Then there is some finite dimensional algebra $A'$ (resp. Hopf algebra) over $\mathbb C$, which is obtained from a field extension of some subring of $A$, such that $A$ satisfies {\rm (\textbf{fg})} (resp. {\rm (\textbf{hfg})}) if and only if $A'$ satisfies {\rm (\textbf{fg})} (resp. {\rm (\textbf{hfg})}).
\end{lemma}
\begin{proof}
Here we only treat the case when $A$ is a finite dimensional algebra over $k$ with condition {\rm (\textbf{fg})}. The argument for $A$ being a Hopf algebra with condition {\rm (\textbf{hfg})} is similar. Since $k$ is of characteristic zero, the prime field of $k$ is $\mathbb Q$. Fix a finite basis $x_1,\ldots,x_n$ of $A$. We can write the multiplication in $A$ as $x_ix_j=\sum_\ell \alpha_{ij}^\ell x_\ell$ for some $\alpha_{ij}^\ell\in k$. Take $K=\mathbb Q(\alpha_{ij}^\ell)$ to be the subfield of $k$ obtained by adjoining all the coefficients $\alpha_{ij}^\ell$ to the prime field $\mathbb Q$. Then one can define another algebra $B$ over $K$ with basis $x_1,\ldots,x_n$ and the same multiplication rule $x_ix_j=\sum_\ell \alpha_{ij}^\ell x_\ell$. It is clear that $B\otimes_Kk\cong A$. Now, since $K$ is a finitely generated field extension of $\mathbb Q$ and $\mathbb C$ is algebraically closed of infinite transcendence degree over $\mathbb Q$, one can embed $K$ into $\mathbb C$. Let $A':=B\otimes_K\mathbb C$, which is a finite dimensional complex algebra. By Lemma \ref{Rem:ext} and Remark \ref{rem:FE}, $A$ satisfies {\rm (\textbf{fg})} $\Leftrightarrow B$ satisfies {\rm (\textbf{fg})} $\Leftrightarrow A'$ satisfies {\rm (\textbf{fg})}.
\end{proof}
\begin{lemma}\label{lem:field}
Let $R=\bigoplus_{i\ge 0}R_i$ be a connected graded commutative algebra over a base field $K\subset \mathbb C$. If $R\otimes_Kk$ is noetherian for some field extension $k/K$, then $R\otimes_K\mathbb C$ is noetherian.
Moreover, if $M=\bigoplus_{i\ge 0}M_i$ is a graded module over $R$ such that $M\otimes_Kk$ is finite over $R\otimes_Kk$, then $M\otimes_K\mathbb C$ is finite over $R\otimes_K\mathbb C$.
\end{lemma}
\begin{proof}
By Proposition \ref{prop:1}(ii), we know $R\otimes_Kk$ is finitely generated over $k$, say by homogeneous elements $f_1,\dots,f_r$. Choose a $K$-basis $\{x_i\}$ for $R$. Denote by $F$ the subfield of $k$ obtained by adjoining to $K$ all the coefficients appearing in $f_1,\dots,f_r$ as $k$-linear combinations of the basis $\{x_i\}$. Then $f_1,\dots,f_r$ belong to $R\otimes_K F$ and they generate an $F$-subalgebra of $R\otimes_K F$, which we denote by $T$. Note that $R\otimes_Kk$ is locally finite and has a Hilbert series. It is clear that $T\otimes_Fk\cong R\otimes_Kk$. Since base field extension does not change the Hilbert series, $T\subseteq R\otimes_KF\subseteq R\otimes_Kk$ all share the same Hilbert series. This implies that $T=R\otimes_K F$ and $R\otimes_K F$ is finitely generated. Now, since $F$ is a finitely generated field extension of $K$ and $\mathbb C$ is algebraically closed of infinite transcendence degree over $K$, we can embed $F$ into $\mathbb C$. Hence $(R\otimes_K F)\otimes_{F}\mathbb C\cong R\otimes_K\mathbb C$. So $R\otimes_K\mathbb C$ is finitely generated and hence noetherian by Proposition \ref{prop:1}(ii).
Finally, for any graded module $M$ over $R$, take finitely many homogeneous generators $f_1,\ldots,f_r$ of $M\otimes_Kk$ over $R\otimes_Kk$. Then there is an intermediate field $K\subseteq F\subseteq k$ such that $f_1,\dots,f_r$ belong to $M\otimes_KF$. By a similar argument to that above, we can consider the submodule $T$ generated by $f_1,\dots,f_r$ in $M\otimes_KF$ and conclude that $T\otimes_F \mathbb C\cong M\otimes_K \mathbb C$ is finitely generated over $R\otimes_K\mathbb C$ via some embedding of $F$ into $\mathbb C$.
\end{proof}
\begin{lemma}\label{lem:completion}
Let $R=\bigoplus_{i\ge 0}R_i$ be a graded commutative algebra over a noetherian commutative base ring $R_0$ that is locally finite over $R_0$, and let $I$ be any ideal of $R_0$. If $R\otimes_{R_0}(R_0/I)$ is noetherian, then $R\otimes_{R_0}(\varprojlim\limits_{i}R_0/I^i)$ is noetherian.
Moreover, let $M=\bigoplus_{i\ge 0}M_i$ be a graded module over $R$ that is locally finite over $R_0$. If $M\otimes_{R_0}(R_0/I)$ is finite over $R\otimes_{R_0}(R_0/I)$, then $M\otimes_{R_0}(\varprojlim\limits_{i}R_0/I^i)$ is finite over $R\otimes_{R_0}(\varprojlim\limits_{i}R_0/I^i)$.
\end{lemma}
\begin{proof}
We write $\widehat{R_0}=\varprojlim\limits_{i}R_0/I^i$ and $\widehat{R}=R\otimes_{R_0}\widehat{R_0}$. For any $R_0$-module $M$, we denote by $\varphi_M$ the natural map
\[
\xymatrix{
\varphi_M: M\otimes_{R_0} \widehat{R_0}\ar[r]&\varprojlim\limits_{i} M/I^iM.
}
\]
Suppose $M$ is finite over $R_0$. By taking a finite presentation $R_0^m\to R_0^n\to M\to 0$ of $M$, we get the following commutative diagram:
\[
\xymatrix{
\widehat{R_0}^m\ar[r]\ar[d]^-{\varphi_{R_0^m}} & \widehat{R_0}^n\ar[r]\ar[d]^-{\varphi_{R_0^n}} & M\otimes_{R_0} \widehat{R_0}\ar[d]^-{\varphi_M}\ar[r]& 0 \\
\varprojlim\limits_{i} (R_0/I^i)^m\ar[r] &\varprojlim\limits_{i} (R_0/I^i)^n\ar[r] & \varprojlim\limits_{i} M/I^iM\ar[r]& 0
}
\]
The first row above is exact since it is obtained by applying $-\otimes_{R_0} \widehat{R_0}$ to the finite presentation of $M$. Moreover, we know ${\varprojlim\limits_{i}}^1 R_0/I^i=0$, since the maps in the inverse system
\[
\xymatrix{
R_0/I & R_0/I^2\ar@{->>}[l] & R_0/I^3\ar@{->>}[l] & \ar@{->>}[l] \cdots
}
\]
satisfy the Mittag-Leffler condition. Then by taking the inverse limit of the exact sequences $(R_0/I^i)^m\to (R_0/I^i)^n\to M/I^iM\to 0$ over $i\ge 1$, we conclude that the second row above is exact. Now since $\varphi_{R_0^m}$ and $\varphi_{R_0^n}$ are isomorphisms, we know $\varphi_M$ is an isomorphism whenever $M$ is finite over $R_0$. As a consequence, since $R$ is locally finite over $R_0$, we have
$$
\widehat{R}~=~\left(\bigoplus_{j\ge 0} R_j\right)\otimes_{R_0}\widehat{R_0}~\cong~\bigoplus_{j\ge 0}\left( R_j\otimes_{R_0}\widehat{R_0}\right)~\cong~\bigoplus_{j\ge 0} \left(\varprojlim\limits_{i} R_j/I^iR_j\right).
$$
So $\widehat{R}$ has an $I$-adic filtration such that
\begin{align*}
\gr_I \widehat{R}~=~\bigoplus_{i\ge 0} \left(I^i \widehat{R}\big/ I^{i+1} \widehat{R}\right)&~\cong~\bigoplus_{j\ge 0}\left(\bigoplus_{i\ge 0} I^iR_j/I^{i+1}R_j\right)~\cong~\bigoplus_{j\ge 0}\left(\bigoplus_{i\ge 0} R_j\otimes_{R_0} I^i/I^{i+1}\right)\\&~\cong~ R\otimes_{R_0}\left(\bigoplus_{i\ge 0} I^i\big/I^{i+1}\right)~\cong~ R\otimes_{R_0} \gr_I \widehat{R_0}.
\end{align*}
Since $R_0$ is noetherian, $I$ is finitely generated, say $I=(a_1,\dots,a_n)$. Thus there is a surjection $(R_0/I)[t_1,\dots,t_n]\twoheadrightarrow \gr_I \widehat{R_0}$ given by $t_i\mapsto a_i$ in $I/I^2$. Then we get a surjection $(R\otimes_{R_0} R_0/I)[t_1,\dots,t_n]\twoheadrightarrow \gr_I \widehat{R}$. By assumption, $R\otimes_{R_0}(R_0/I)$ is noetherian. Hence $\gr_I \widehat{R}$ is noetherian, and so is $\widehat{R}$. The statement for finite generation of modules can be proved similarly.
\end{proof}
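To illustrate Lemma \ref{lem:completion}, take $R_0=\mathbb Z$ and $I=(p)$ for a prime $p$. Then $\varprojlim\limits_{i} R_0/I^i=\mathbb Z_p$ is the ring of $p$-adic integers, and the lemma asserts that, for a locally finite graded commutative $\mathbb Z$-algebra $R$, the noetherianity of $R\otimes_{\mathbb Z}\mathbb F_p$ implies that of $R\otimes_{\mathbb Z}\mathbb Z_p$.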
\begin{definition}\label{D:Roder}
Let $A$ be a finite dimensional Hopf algebra over $\mathbb C$. We say $A$ can be defined over some algebraic number field $K/\mathbb Q$ if there is some Hopf $K$-subalgebra $B$ of $A$ such that $A\cong B\otimes_K\mathbb C$.
\end{definition}
\begin{theorem}
\label{thm:m1}
Let $A$ be a finite dimensional complex Hopf algebra that can be defined over some algebraic number field $K$. Then there exists a finite dimensional Hopf algebra $A'$ over some finite field $F$ such that if $A'$ satisfies {\rm (\textbf{hfg})}, then so does $A$.
Moreover, $A$ can be viewed as a deformation of $A'$ in the following way: there exists some localization $\mathcal R$ of $\mathcal O_K$, finitely generated over $\mathbb Z$, and some free Hopf $\mathcal R$-subalgebra $B$ of $A$ such that $A\cong B\otimes_{\mathcal R}\mathbb C$, where we can take $A'=B\otimes_{\mathcal R}F$ with $F=\mathcal R/(p)$ for some prime $p$.
\end{theorem}
\begin{proof}
Since $A$ can be defined over $K$, there exists some Hopf $K$-subalgebra, denoted by $L$, such that $L\otimes_K\mathbb C\cong A$. In view of Lemma \ref{Rem:ext}, by replacing $K$ with a possible finite field extension, we may assume the Jacobson radical $J_L$ of $L$ splits in $L$. Thus we can choose a finite basis $x_1,\ldots,x_n$ for $L$ such that $J_L={\rm span}_K(x_1,\ldots,x_m)$ and $J_L\otimes_K\mathbb C\cong J_A$, the Jacobson radical of $A$. Denote by $\mathcal S$ the subset of $K$ consisting of all the coefficients that appear when we apply the (co)multiplication, (co)unit, and antipode of $L$ to the above fixed basis. Further denote by $\mathcal R$ the localization of $\mathcal O_K$ at the multiplicative set generated by all the denominators appearing in $\mathcal S$. Since $\dim_K L<\infty$, $\mathcal S$ is finite and $\mathcal R$ is a finitely generated $\mathbb Z$-algebra. As a consequence, there is some prime $p$ such that $F=\mathcal R/(p)$ is a finite field. Set $B:={\rm span}_\mathcal R(x_1,\ldots,x_n)$. Since $\mathcal S\subset \mathcal R$, one can check that $B$ is a free Hopf $\mathcal R$-subalgebra of $A$ satisfying
$$B\otimes_\mathcal R\mathbb C~\cong~(B\otimes_\mathcal RK)\otimes_K\mathbb C~\cong~L\otimes_K\mathbb C~\cong~A.$$
Moreover, $W:={\rm span}_\mathcal R(x_1,\ldots,x_m)$ is a finite $B$-module satisfying
$$W\otimes_\mathcal R\mathbb C~\cong~(W\otimes_\mathcal RK)\otimes_K\mathbb C~\cong ~J_L\otimes_K\mathbb C~\cong~J_A.$$
We use the language of differential graded algebras and modules to describe the cohomology ring $\coh^*(B, \mathcal R)$ of $B$ and the cohomology $\coh^*(B,W)$ of $B$ with coefficients in $W$. We apply $-\otimes_B\mathcal R$ to the bar resolution of $B$ over $\mathcal R$,
\[
\xymatrix{
\cdots\ar[r]^-{\partial_3}&B\otimes_{\mathcal R} B\otimes_{\mathcal R} B\otimes_{\mathcal R} B\ar[r]^-{\partial_2}& B\otimes_{\mathcal R} B \otimes_{\mathcal R} B \ar[r]^-{\partial_1} & B\otimes_{\mathcal R} B \ar[r]^-{\mu}& B,
}
\]
where $\partial_i(b_0\otimes \cdots \otimes b_{i+1})=\sum_{j=0}^i(-1)^jb_0\otimes \cdots \otimes b_jb_{j+1}\otimes \cdots \otimes b_{i+1}$. Since $B$ is free over $\mathcal R$, we get a projective resolution of $\mathcal R$ in the category of left $B$-modules. Therefore, we obtain two complexes by applying $\Hom_B(-,\mathcal R)$ and $\Hom_B(-,W)$ to this resolution, namely
\begin{align}\label{E:complexB}
C^n(B,\mathcal R)~:=~\Hom_{\mathcal R}(B^{\otimes n},\mathcal R)\ \text{and}\ C^n(B,W)~:=~\Hom_{\mathcal R}(B^{\otimes n}, W),
\end{align}
where we omit the formulas of the corresponding differentials. Note that $C^\bullet(B,\mathcal R)$ is a differential graded algebra under the cup product: for any $f\in C^m(B,\mathcal R)$ and $g\in C^n(B,\mathcal R)$,
\begin{align}\label{E:dgm}
(f\smile g)(b_1\otimes \cdots \otimes b_{m+n})~=~f(b_1\otimes \cdots \otimes b_m)g(b_{m+1}\otimes \cdots \otimes b_{m+n}).
\end{align}
Moreover, $C^\bullet(B,W)$ is a differential graded module over $C^\bullet(B,\mathcal R)$, with the module action given by a formula similar to \eqref{E:dgm} for $g\in C^n(B, W)$. After taking the cohomology of the two complexes \eqref{E:complexB}, we obtain $\coh^*(B,\mathcal R)$ and $\coh^*(B,W)$, where the Yoneda product is compatible with the cup product. Since $B$ is a free module over $\mathcal R$ of finite rank, it is projective, and so is $B^{\otimes n}$. Hence we have
\begin{align*}
C^n(B,\mathcal R)\otimes_{\mathcal R}F&\, =\Hom_{\mathcal R}(B^{\otimes n},\mathcal R)\otimes_{\mathcal R} F\cong \Hom_F((B^{\otimes n})\otimes_{\mathcal R}F,F)\\
&\,\cong \Hom_F((B\otimes_{\mathcal R}F)^{\otimes n},F)=\Hom_F(A'^{\otimes n},F)=:C^n(A',F),
\end{align*}
which calculates the cohomology ring $\coh^*(A',F)$. Likewise, $C^n(B,W)\otimes_{\mathcal R}F=\Hom_F(A'^{\otimes n},W')=:C^n(A',W')$ with $W'=W\otimes_{\mathcal R}F$ a finite module over $A'$. By the hypothesis that $A'$ satisfies {\rm (\textbf{hfg})}, we know $\coh^*(C^\bullet(A',W'))$ is noetherian over $\coh^*(C^\bullet(A',F))$.
Now, as a localization of $\mathcal O_K$, $\mathcal R$ is a Dedekind domain and hence is hereditary. So we have a projective resolution $0\to (p)\to \mathcal R\to F\to 0$ of $F$ in the category of $\mathcal R$-modules. After tensoring this resolution with the complex $C^{\DOT}(B,\mathcal R)$, we get a double complex which yields a spectral sequence
$$
E_2^{pq}:=\Tor^{\mathcal R}_q\left(\coh^p(C^{\DOT}(B,\mathcal R)),F\right)~\Rightarrow~E_\infty^{pq}:=\coh^{p+q}\left(C^{\DOT}(B,\mathcal R)\otimes_{\mathcal R}F\right)\cong \coh^{p+q}(A',F),
$$
which collapses at the $E_2$ page.
Next note that the complex $C^\bullet(B,\mathcal R)$ has a multiplicative structure that induces a
multiplicative structure on the spectral sequence. From the filtration on the double complex, we deduce that $E_2^{p,q\ge 1}=\Tor^{\mathcal R}_{\ge 1}\left(\coh^p(C^{\DOT}(B,\mathcal R)), F\right)$ is an ideal of the $E_2$ page. That is, $E_2^{p0}=\coh^{p}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R} F$ is a quotient
of the $E_2=E_\infty$ page. Then $\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R} F$ is a quotient of the noetherian ring $\coh^*(A',F)$, and as a consequence is noetherian itself. By a similar argument, one can show that $\coh^{*}(C^{\DOT}(B,W))\otimes_{\mathcal R} F$ is a finite module over $\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R} F$. By Lemma \ref{lem:completion},
$$\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R} \left(\varprojlim\limits_i \mathcal R/(p^i)\right)~=~\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R} \widehat{\mathcal R}
$$
is noetherian. Note that $\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R} \widehat{\mathcal R}$ is graded commutative and finitely generated. So $\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R} {\rm Frac}\, \widehat{\mathcal R}$ is noetherian and finitely generated. Similarly, since $\coh^{*}(C^{\DOT}(B,W))\otimes_{\mathcal R}\mathcal R/(p)$ is finite over $\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R}\mathcal R/(p)$, one gets that $\coh^{*}(C^{\DOT}(B,W))\otimes_{\mathcal R}\widehat{\mathcal R}$ is finite over $\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R} \widehat{\mathcal R}$. As a consequence, $\coh^{*}(C^{\DOT}(B,W))\otimes_{\mathcal R}{\rm Frac}\, \widehat{\mathcal R}$ is finite over $\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R}{\rm Frac}\, \widehat{\mathcal R}$.
Note that ${\rm Frac}\,\mathcal R={\rm Frac}\,\mathcal O_K=K$, and write $k={\rm Frac}\,\widehat{\mathcal R}$. Let $T:=\coh^{*}(C^{\DOT}(B,\mathcal R))\otimes_{\mathcal R}K$ and $M:=\coh^{*}(C^{\DOT}(B,W))\otimes_{\mathcal R}K$. By the previous discussion, $T\otimes_Kk$ is noetherian and $M\otimes_Kk$ is finite over $T\otimes_Kk$. Note that $K$ is a subfield of $\mathbb C$. By Lemma \ref{lem:field}, one sees that $T\otimes_K\mathbb C$ is noetherian and $M\otimes_K\mathbb C$ is finite over $T\otimes_K \mathbb C$. Finally, since $\mathbb C$ is flat over $K={\rm Frac}\, \mathcal R$, which is in turn flat over $\mathcal R$, we have
$$T\otimes_{\mathcal R} \mathbb C~=~\coh^*\left(C^{\DOT}(B,\mathcal R)\right)\otimes_{\mathcal R}\mathbb C~\cong~\coh^*\left(C^{\DOT}(B,\mathcal R)\otimes_{\mathcal R}\mathbb C\right)~\cong~\coh^*\left(C^{\DOT}(A,\mathbb C)\right)~\cong~\coh^*(A,\mathbb C)$$
and
$$M\otimes_{\mathcal R} \mathbb C~=~\coh^*\left(C^{\DOT}(B,W)\right)\otimes_{\mathcal R}\mathbb C~\cong~\coh^*\left(C^{\DOT}(B,W)\otimes_{\mathcal R}\mathbb C\right)~\cong~\coh^*\left(C^{\DOT}(A,J_A)\right)~\cong ~\coh^*(A,J_A).$$
Then we conclude that $A$ satisfies {\rm \textbf{(hfg)}} by Proposition \ref{equivfg*}.
\end{proof}
The following result can be proved analogously.
\begin{theorem}\label{thm:m2}
Let $A$ be a finite dimensional complex algebra that can be defined over some algebraic number field. Then there exists a finite dimensional algebra $A'$ over some finite field, of which $A$ can be viewed as a deformation, such that if $A'$ satisfies {\rm (\textbf{fg})}, then so does $A$.
\end{theorem}
\begin{remark}
A similar proof works for finite dimensional augmented algebras that can be defined over some algebraic number field. Moreover, we can always assume the resulting (Hopf or augmented) algebra after the reduction modulo $p$ is over an algebraically closed field of characteristic $p$ by a field extension, in view of Lemma \ref{Rem:ext} and Remark \ref{rem:FE}.
\end{remark}
\noindent
\underline{{\bf Applications.}} We provide here some applications for the finite generation conditions when the resulting Hopf algebra via reduction modulo $p$ is the smash product of a quantum complete intersection with a semisimple Hopf algebra. In particular, our result is applicable for finite dimensional pointed Hopf algebras of diagonal type over the complex numbers.
\begin{lemma}\label{lem:SP}
Let $R$ be a finite dimensional (resp.\ augmented) $k$-algebra satisfying {\rm \textbf{(fg)}} (resp.\ {\rm \textbf{(hfg)}}), and $H$ a semisimple Hopf algebra over $k$. Suppose there is an $H$-action on $R$ (resp.\ preserving the augmentation of $R$). Then the smash product $R \# H$ satisfies {\rm \textbf{(fg)}} (resp.\ {\rm \textbf{(hfg)}}).
\end{lemma}
\begin{proof}
Here we prove the augmented case; the argument for the algebra case is similar. Since $H$ is semisimple, we know $\coh^*(R\#H,k)\cong \coh^*(R,k)^H$. By hypothesis, $\coh^*(R,k)$ is finitely generated and noetherian, and so is its invariant ring $\coh^*(R,k)^H$ by \cite[Corollary 4.3.5]{MO93}. Let $M$ be a finite $R\# H$-module. Since $M$ is also finite over $R$ by restriction, $\coh^*(R,M)$ is noetherian over $\coh^*(R,k)$ by hypothesis. Thus the submodule $\coh^*(R\#H,M)\cong \coh^*(R,M)^H$ of $\coh^*(R,M)$ is finite over $\coh^*(R,k)$. We conclude that $\coh^*(R\#H,M)$ is finite over $\coh^*(R\#H,k)$, since $\coh^*(R,k)$ is finite over $\coh^*(R,k)^H$ by \cite[Theorem 4.4.2]{MO93}.
\end{proof}
We will work with some {\bf quantum complete intersections} $R$, as recalled here.
Let $t$ be a positive integer and for each $1\leq i\leq t$,
let $N_i\ge 2$ be an integer. Let $q_{ij}\in k^{\times}$ for $1\leq i<j\leq t$.
Let $R$ be the $k$-algebra generated by $x_1,\ldots, x_t$,
subject to relations
\[
x_i x_j = q_{ij} x_j x_i \quad \mbox{ and } \quad
x_i^{N_i} = 0,
\]
for all $i<j$ and all $i$. It is clear that $R$ is augmented since $R$ is local with the unique maximal ideal $(x_1,\ldots,x_t)$.
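As a concrete illustration (ours, not part of the argument): the normal-form monomials $x_1^{a_1}\cdots x_t^{a_t}$ with $0\le a_i<N_i$ form a $k$-basis of $R$, so $\dim_k R=\prod_i N_i$. The following Python sketch models the normal-form product for $t=2$, $N_1=N_2=3$, and an arbitrary root-of-unity choice of $q_{12}$, and spot-checks associativity:

```python
import cmath
import itertools

N1, N2 = 3, 3
q12 = cmath.exp(2j * cmath.pi / 3)  # illustrative root-of-unity parameter

def mult(m1, m2):
    # product of normal-form monomials (coef, a, b) ~ coef * x1^a * x2^b,
    # using x2 x1 = q12^(-1) x1 x2 and the relations x1^N1 = x2^N2 = 0
    c1, a1, b1 = m1
    c2, a2, b2 = m2
    a, b = a1 + a2, b1 + b2
    if c1 == 0 or c2 == 0 or a >= N1 or b >= N2:
        return (0.0, 0, 0)
    return (c1 * c2 * q12 ** (-b1 * a2), a, b)

basis = [(1.0, a, b) for a in range(N1) for b in range(N2)]

associative = True
for m1, m2, m3 in itertools.product(basis, repeat=3):
    l, r = mult(mult(m1, m2), m3), mult(m1, mult(m2, m3))
    associative &= abs(l[0] - r[0]) < 1e-12 and l[1:] == r[1:]

print(len(basis), associative)  # 9 True
```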
The cohomology ring $\coh^*(R,k)$ is finitely generated and noetherian. See~\cite[Theorem~5.3]{BO08} and~\cite[Theorem~4.1]{MPSW}.
The latter reference corrects some small errors in the relations of
the former, but omits the necessary distinction of the cases
where $N_i =2$ for some $i$.
(The proof of exactness of the resolution in~\cite{MPSW}
requires characteristic~0, however it is essentially the same
resolution as that given in~\cite{BO08}, which is proven to be
exact in any characteristic.) So in this case, we know $R$ satisfies {\rm \textbf{(hfg)}} by Proposition \ref{equivfg*} since $R$ is local.
Hochschild cohomology behaves somewhat differently:
$\HH^*(R)$ itself is in fact finite dimensional for some
choices of values of $q_{ij}$.
By~\cite[Theorem~5.5]{BO08} and Proposition~\ref{equivfg},
$\HH^*(R)$ satisfies the finite generation condition {\rm \textbf{(fg)}}
if and only if all $q_{ij}$ are roots of unity; see~\cite{BGMS}.
\begin{prop}\label{prop:m1}
Let $A$ be a finite dimensional complex (resp.\ Hopf) algebra that can be defined over some algebraic number field. If the constructed finite dimensional (resp.\ Hopf) algebra via reduction modulo $p$ over some finite field is the smash product of a quantum complete intersection with a semisimple Hopf algebra, then $A$ satisfies {\rm \textbf{(fg)}} (resp.\ {\rm \textbf{(hfg)}}).
\end{prop}
\begin{proof}
It follows from Lemma \ref{lem:SP} and the discussion above, where we notice that any nonzero element of a finite field is a root of unity.
\end{proof}
Our lifting method of reduction modulo $p$ can be applied to finite dimensional pointed Hopf algebras of diagonal type over the complex numbers.
\begin{example}
Let $A$ be a finite dimensional pointed Hopf algebra of diagonal type over the field of complex numbers. We follow the strategy of Andruskiewitsch-Schneider in \cite{AS, AS2}. Denote the coradical of $A$ by $\mathbb C[G]$ for some finite group $G$. Then the associated graded algebra $\gr A$ with respect to the coradical filtration is isomorphic to a smash product algebra $\mathcal B(V)\#\mathbb C[G]$, where $\mathcal B(V)$ is the Nichols algebra of some Yetter-Drinfeld module $V$ in ${}^G_G{\mathcal YD}$. Since the braided space $(V,c)$ is of diagonal type, there is a basis $\{x_1,\dots,x_\theta\}$ of $V$ and a collection of scalars $(q_{ij})_{1\le i,j\le \theta}$ such that $c(x_i\otimes x_j)=q_{ij}x_j\otimes x_i$ for all $1\le i,j\le \theta$. Notice that the coefficients of the braiding matrix $\mathbf{q}=(q_{ij})_{1\le i,j\le \theta}$ are all algebraic numbers, since they come from certain characters of the finite group $G$. Then one can show that $\gr A$ can be defined over the algebraic number field $\mathbb Q(q_{ij})$. Moreover, the Drinfeld double of $\gr A$, denoted by $\mathcal D(\gr A)$, can also be defined over the algebraic number field $\mathbb Q(q_{ij})$. An explicit presentation of $\mathcal D(\gr A)$ can be found in \cite[Lemma 7]{PV16}.
Now suppose the constructed finite-dimensional Hopf algebra from $\mathcal D(\gr A)$ via reduction modulo $p$ is the smash product of a quantum complete intersection with a semisimple Hopf algebra (e.g., $\mathcal D(G)$, for a careful choice of $p$ such that $p\nmid |G|$). Then by Proposition \ref{prop:m1}, we know $\mathcal D(\gr A)$ satisfies {\rm \textbf{(hfg)}}. Moreover, by Masuoka's result \cite{Ma08}, $A$ is always some 2-cocycle twist of the associated graded algebra $\gr A$. (See also Angiono and Garcia-Iglesias's survey \cite[\S 1.2.1]{AG}.) As a consequence, $A$ can be embedded into the Drinfeld double $\mathcal D(\gr A)$. So we can conclude that $A$ satisfies {\rm \textbf{(hfg)}} by Proposition \ref{prop:ext} in this case.
\end{example}
| {
"timestamp": "2021-08-17T02:16:26",
"yymm": "1911",
"arxiv_id": "1911.04552",
"language": "en",
"url": "https://arxiv.org/abs/1911.04552",
"abstract": "In support variety theory, representations of a finite dimensional (Hopf) algebra $A$ can be studied geometrically by associating any representation of $A$ to an algebraic variety using the cohomology ring of $A$. An essential assumption in this theory is the finite generation condition for the cohomology ring of $A$ and that for the corresponding modules. In this paper, we introduce various approaches to study the finite generation condition. First, for any finite dimensional Hopf algebra $A$, we show that the finite generation condition on $A$-modules can be replaced by a condition on any affine commutative $A$-module algebra $R$ under the assumption that $R$ is integral over its invariant subring $R^A$. Next, we use a spectral sequence argument to show that a finite generation condition holds for certain filtered, smash and crossed product algebras in positive characteristic if the related spectral sequences collapse. Finally, if $A$ is defined over a number field over the rationals, we construct another finite dimensional Hopf algebra $A'$ over a finite field, where $A$ can be viewed as a deformation of $A'$, and prove that if the finite generation condition holds for $A'$, then the same condition holds for $A$.",
"subjects": "Rings and Algebras (math.RA); Quantum Algebra (math.QA)",
"title": "New approaches to finite generation of cohomology rings"
} |
https://arxiv.org/abs/2011.11289 | Sparse Inpainting with Smoothed Particle Hydrodynamics | Digital image inpainting refers to techniques used to reconstruct a damaged or incomplete image by exploiting available image information. The main goal of this work is to perform the image inpainting process from a set of sparsely distributed image samples with the Smoothed Particle Hydrodynamics (SPH) technique. As, in its naive formulation, the SPH technique is not even capable of reproducing constant functions, we modify the approach to obtain an approximation which can reproduce constant and linear functions. Furthermore, we examine the use of Voronoi tessellation for defining the necessary parameters in the SPH method as well as selecting optimally located image samples. In addition to this spatial optimization, optimization of data values is also implemented in order to further improve the results. Apart from a traditional Gaussian smoothing kernel, we assess the performance of other kernels on both random and spatially optimized masks. Since the use of isotropic smoothing kernels is not optimal in the presence of objects with a clear preferred orientation in the image, we also examine anisotropic smoothing kernels. Our final algorithm can compete with well-performing sparse inpainting techniques based on homogeneous or anisotropic diffusion processes as well as with exemplar-based approaches. | \section{Introduction}
Image inpainting aims at restoring a partially damaged image or missing
parts of an image in a
visually appealing manner \cite{BS00}. It has a wide
number of practical applications such as
art restoration \cite{KM13,BF08,CA18,RC11},
object removal \cite{CP04},
medical imaging \cite{TB19},
inpainting of optical flow fields \cite{RO20},
video inpainting \cite{NA14},
inpainting reflectance/height values in LiDAR images \cite{BA19,CL18},
image compression \cite{GW08,SP14}, and even image
denoising \cite{AP17}. The term ``inpainting'' itself was introduced for
digital images by Bertalm{\'{\i}}o et al.~in \cite{BS00}, but similar
concepts were already explored in earlier work under different names
such as image restoration, interpolation, disocclusion,
or amodal completion
\cite{Hu73,OABB85,Fe94,CM98,MM98}.
Any inpainting model needs to assume some kind of relation between known
and unknown data. As there is a variety of plausible assumptions for such
relations, many solutions to an inpainting problem exist.
Based on the underlying assumptions, the inpainting methods from the
literature can be grouped into certain main categories \cite{BC14,GL14}.
One class is based on variational models and partial differential equations
(PDEs) \cite{Sc15},
comprising e.g.~Euler's elastica \cite{NMS93,MM98,CK02,BC10,CP19},
transport-like equations \cite{BS00,BM07}, anisotropic diffusion processes
\cite{WW06,GW08,BU13}, harmonic and biharmonic inpainting \cite{CS02,GW08},
total variation restoration \cite{CS02},
the Mumford-Shah functional \cite{CS02,ES02},
and the Cahn--Hilliard equation \cite{BEG07,BHS09}.
Exemplar-based approaches emerged from texture synthesis and
exploit the notion of patch similarity \cite{EL99,CP04,FA09,AF11}.
Other techniques
rely on overcomplete dictionaries and the concept of sparsity
\cite{ES05,ME08,El10}, and more recently also deep learning
concepts have been proposed \cite{PKDD16,ISI17,UVL18}.
Each of these strategies has its advantages and disadvantages depending on the
type of image it is applied to. For example, exemplar-based techniques perform
fairly well on highly textured images, PDE-based methods are more
suited for geometrical structures, and deep learning approaches can capture
high-level semantics from images. This has led to the development of hybrid
approaches which combine the strengths of different methods
\cite{BV03,SE05,AL10,PW15}.
A subclass of inpainting problems deals with the recovery of a whole image from
a small amount of sparsely distributed data \cite{AA17,BU13,FA09,HM17}. These
kinds of problems are encountered particularly in the context of compression
\cite{GW08,CR14,SP14,Pe19}. The sparsity of the available data makes it
feasible to consider scattered data interpolation,
e.g.~by radial basis functions
\cite{MD05,We05,US06,Fa07,LZ12,CL18} or by Shepard interpolation
\cite{Sh68,KW93,AA17,Pe19} as an inpainting technique.
A key observation for applications in compression is that the data can be
chosen freely from the image. Thus, a careful selection of the sparse set
of pixels to store such that it fits a chosen inpainting method is
essential for a good performance \cite{GW08,MH11,CR14,SP14,HM17,KB18}.
Astonishingly, simple linear methods such as homogeneous diffusion
inpainting show remarkable quality if combined with optimally chosen data
\cite{BB09,GW08,HS13,CR14,PH16,BLPP17} and can even compete with the widely
used JPEG \cite{PM92} and JPEG2000 \cite{TM02} standards \cite{MB11,HM13,PH16}.
Anisotropic diffusion approaches perform even better \cite{GW08,SP14,HM17}
and can outperform JPEG and JPEG2000 for high compression ratios.
\subsection{Goals and Contributions}
The goal of our paper is to show that a hitherto hardly explored class
of scattered data interpolation methods based on Smoothed Particle
Hydrodynamics (SPH) can provide excellent results on sparse inpainting
problems, if one improves them with a number of refined concepts.
SPH was originally introduced to solve astrophysical problems \cite{Lu77},
but has also been applied to problems that deal with large deformations
\cite{BS08}, computational fluid mechanics \cite{Mo94}, and soil mechanics
\cite{MH05}. In SPH, the solution to a given problem is represented by a
set of particles, and functions, derivatives, and integrals are approximated
using those particles.
Di Blasi et al.~\cite{DF11} have introduced the SPH method for sparse
image interpolation problems, and applications to non-sparse inpainting
are studied in \cite{AP16}. We have not found more work on SPH-based
image inpainting. One reason for this lack of popularity might lie in the
fact that in the naive formulation,
not even constant functions are interpolated correctly
\cite{Fa07}.
However, in our paper we show that
one can come up with highly competitive approaches by integrating more
sophisticated concepts.
The modifications that we apply are mostly tailored to the
particular situation encountered when inpainting is used as a strategy for
compression. Compared to other, more classical applications of inpainting, the
key difference when using inpainting for compression is that a ground truth
image is known, such that the data used for inpainting can be adapted and
optimized with regard to this ground truth. Moreover, compression applications
are particularly challenging, since they keep only a very sparse subset of the
original data.
Our key contributions are the following:
\begin{enumerate}
\item To define a measure for the area of influence of a given
particle (mask point), we combine a Voronoi tessellation with the Euclidean
distance transform.
\item We restore particle consistency.
\item We perform inpainting with a new method that adapts its
consistency order to the local approximation error.
\item We use the Voronoi tessellation to propose a novel
strategy for spatial data optimization.
\item We optimize not only the data locations, but also their
values (tonal optimization).
\item To incorporate anisotropy in the process, we use anisotropic kernels.
\item We assess the performance of different smoothing kernels and compare to
some of the best sparse inpainting methods for optimized data.
\end{enumerate}
In the context of image processing and reconstruction,
related concepts have been used in combination with kernel regression methods,
e.g., by Takeda et al.\ in \cite{TF07}.
\subsection{Paper Structure}
This paper is organized as follows: In \cref{sec:SPH_basics}, we give a brief
summary of the ideas behind Smoothed Particle Hydrodynamics. This includes its
origin from an integral approximation, techniques to restore consistency in a
discrete setting, and a brief overview on common smoothing kernels used for our
experiments. \Cref{sec:inpainting} explains how SPH can be used for inpainting.
Here we introduce Voronoi tessellation to determine parameters of the method.
Further, we compare performance of SPH inpainting with
diffusion- and exemplar-based methods for examples of sparse inpainting and
classical inpainting tasks.
For problems in which the ground truth is known, we
show how performance can be enhanced by combining results from methods of
different consistency order
in \cref{sec:mask_optimization}. Here, we also
explain our data optimization strategies both with respect to data locations
(spatial optimization) as well as data values (tonal optimization). We proceed
by comparing results from our method to results from other techniques in
\cref{sec:comparisons} and draw conclusions in \cref{sec:conclusions}.
\section{SPH in a Nutshell}
\label{sec:SPH_basics}
\subsection{Essential Ideas}
\label{sec:formulation}
We are interested in approximating a function $f$ on the domain $\Omega$ in
$\mathbb{R}^{2}$. The point of departure for SPH is the idea of replacing the
value of $f$ at a point $\bm{q}$ by a weighted average of the function, i.e.
\begin{equation}
\label{eq:4}
f\!\left(\bm{q}\right) \approx
\left\langle f\!\left(\bm{q}\right) \right\rangle
\coloneqq
\int_{\Omega} f\!\left(\bm{p}\right) \,
W\!\left(\bm{q}-\bm{p},h\right)\, \mathrm{d}\bm{p}.
\end{equation}
\Cref{eq:4} is also known as the \emph{kernel approximation} of the function
with the smoothing kernel $W(\cdot,h)$ and its smoothing length $h$, which
represents the effective width of $W$. The kernel should be a monotonically
decreasing positive mollifier, i.e.\ it should have the following properties:
\begin{itemize}
\item compactness:
$W(\bm{q}-\bm{p},h) = 0$ outside
a compact domain
$K \subseteq \Omega$,
\item unity:
$\int_{\Omega} W(\bm{q}-\bm{p},h)\, \mathrm{d}\bm{p} = 1$,
\item limit behavior:
$W(\bm{q}-\bm{p},h) \xrightarrow{h \rightarrow 0} \delta(\bm{q}-\bm{p})$,
where $\delta(\bm{q}-\bm{p})$ is Dirac's delta distribution,
\item positivity:
$W(\bm{q}-\bm{p},h) > 0$ over $K \subseteq \Omega$,
\item monotonicity:
$W$ is a monotonically decreasing function w.r.t.\
$\left\lVert \bm{q} - \bm{p}\right\rVert$.
\end{itemize}
Here, $\lVert \cdot \rVert$ denotes the Euclidean norm. Positivity is not
strictly necessary, but desired in order for the approximated function values to
have physical meaning. Allowing the kernel to take negative values in parts of
the domain can lead to unnatural approximated values and corrupt the entire
computation \cite{LL03}. The same holds for monotonicity, which is connected
with the usual behavior of physical forces to decrease with increasing distance.
Discretizing the integral of \cref{eq:4} yields the \emph{particle
approximation} of $f$ given by
\begin{equation}
\label{eq:particle_approx}
f\!\left(\bm{q}\right) \approx u\!\left(\bm{q}\right) \coloneqq
\sum\limits_{j \in \mathcal{N}(\bm{q})} f\!\left(\bm{p}_{j}\right)\,
W\!\left(\bm{q}-\bm{p}_{j},h\right)\, V_{j}.
\end{equation}
Here, in order to approximate the function value at point $\bm{q}$ we sum over
its nearest neighbors $\bm{p}_{j}$, where $\mathcal{N}(\bm{q})$ denotes the
index set of the nearest neighbors and we assume that $1 \leq j \leq M$ with $M$
the total number of particles under consideration. Each of these particles is
related to a specific area of influence (or weight) $V_{j}$ in this quadrature
rule. The particle approximation is interpolating at the particles $\bm{p}_{j}$
if the kernel $W$ satisfies
\begin{equation}
W\!\left(\bm{p}_{k}-\bm{p}_{j},h\right)\, V_{j} = \delta_{k,j} \quad
\text{for any} \quad
j, k = 1,\ldots, M
\end{equation}
with the Kronecker delta $\delta_{k,j}$. This requirement is separate from
the desired properties in the continuous setting and, in general, not satisfied
by kernels with said properties. Some modifications which achieve interpolation
at the particles are discussed in \cref{sec:consistency}.
In order to determine the nearest neighbors, we use the so-called scatter
approach. Here, the neighbors of a point $\bm{q}$ are the particles
$\bm{p}_{j}$ that include $\bm{q}$ in the support domain of the kernels
centered at these particles $\bm{p}_{j}$, cf.\ \cref{fig:Scatter}.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\draw (2,2) circle (2cm);
\filldraw (2,2) circle (1pt) node[align=left, below] {$\bm{p}_{1}$};
\draw (3,4) circle (1.5cm);
\draw (3.5,2.5) circle (1.5cm);
\filldraw (3.5,2.5) circle (1pt) node[align=left, below] {$\bm{p}_{2}$};
\filldraw (3,4) circle (1pt) node[align=left, above] {$\bm{p}_{3}$};
\draw (1,3.5) circle (1.1cm);
\filldraw (1,3.5) circle (1pt) node[align=left, below] {$\bm{p}_{4}$};
\filldraw (2.5,3) circle (1pt) node[align=left, above] {$\bm{q}$};
\end{tikzpicture}
\caption[Scatter Approach]{Particles $\bm{p}_{1}$, $\bm{p}_{2}$, and
$\bm{p}_{3}$ are considered neighbors of point $\bm{q}$, while $\bm{p}_{4}$
is not.}
\label{fig:Scatter}
\end{figure}
An alternative to the scatter approach is the gather approach \cite{Li09} in
which a disk of a predetermined radius around $\bm{q}$ is considered and
neighbors are determined by checking which particles $\bm{p}_{j}$ are located
within the disk. The scatter approach is preferable for our purpose as it is
less dependent on the particle distribution and allows us to consider different
smoothing lengths $h_{j}$ for each particle.
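In code, the scatter approach amounts to testing, for each particle, whether $\bm{q}$ lies inside that particle's support disk. A minimal Python sketch with the positions and radii of \cref{fig:Scatter} (the helper name and the per-particle smoothing lengths are our choices):

```python
import math

def scatter_neighbors(q, particles, lengths):
    # p_j is a neighbor of q if q lies in the support of the kernel
    # centered at p_j, i.e. ||q - p_j|| <= h_j
    return [j for j, (p, h) in enumerate(zip(particles, lengths))
            if math.dist(q, p) <= h]

particles = [(2.0, 2.0), (3.5, 2.5), (3.0, 4.0), (1.0, 3.5)]
lengths = [2.0, 1.5, 1.5, 1.1]  # support radii, as drawn in the figure
q = (2.5, 3.0)

print(scatter_neighbors(q, particles, lengths))  # [0, 1, 2]: p_4 is excluded
```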
\subsection{Restoring Consistency}
\label{sec:consistency}
Consistency in SPH is defined in the sense of (local) polynomial reproduction
\cite{We05}. While the restriction of kernels to positive mollifiers guarantees
that linear polynomials are reconstructed in the continuous formulation
\cref{eq:4}, this is no longer the case in the discrete formulation
\cref{eq:particle_approx}, a phenomenon known as particle inconsistency
\cite{Mo96}. Thus, \cref{eq:particle_approx} needs to be modified in order to
restore consistency in the discrete setting.
For the particle approximation to satisfy zero order consistency, it needs to
be able to reproduce constants. This requirement is satisfied by the well known
Shepard interpolation formula \cite{Sh68,CB99,We05}
\begin{equation}
\label{eq:new_discrete_shepard}
u\!\left(\bm{q}\right) =
\frac{\sum\limits_{j \in \mathcal{N}(\bm{q})}
f\!\left(\bm{p}_{j}\right) \,
W\!\left(\bm{q}-\bm{p}_{j},h\right) \,V_{j}}{
\sum\limits_{j \in \mathcal{N}(\bm{q})} W\!\left(\bm{q}-\bm{p}_{j},h\right)\,
V_{j}}.
\end{equation}
The fact that zero order consistency already requires to modify
\cref{eq:particle_approx} shows that, in general, using the particle
approximation directly cannot even reproduce constant functions, i.e.\
\cref{eq:particle_approx} has no consistency.
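This failure is easy to observe numerically. In the following Python sketch (an illustration with arbitrary particle positions, equal weights $V_j = 1/M$, and an ad-hoc smoothing length, not the setup used in our experiments), the plain particle approximation of the constant function $f\equiv 1$ misses the value $1$, while the Shepard form reproduces it exactly:

```python
import math

def gauss_kernel(d2, h, eps=5.09):
    # truncated Gaussian kernel, W = eps/(pi h^2) * exp(-eps ||eta||^2)
    return eps / (math.pi * h * h) * math.exp(-eps * d2 / (h * h))

particles = [(0.3, 0.4), (0.6, 0.5), (0.5, 0.7), (0.45, 0.35), (0.7, 0.65)]
V, h = 1.0 / len(particles), 0.15   # equal influence areas (crude choice)
f = lambda x, y: 1.0                # constant test function
q = (0.5, 0.5)

w = [gauss_kernel((q[0] - x) ** 2 + (q[1] - y) ** 2, h) for x, y in particles]
naive = sum(wi * f(x, y) * V for wi, (x, y) in zip(w, particles))
shepard = naive / sum(wi * V for wi in w)

print(abs(naive - 1.0) > 0.1)  # True: the plain approximation is far from 1
print(shepard)                 # 1.0: the normalization restores the constant
```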
One way to interpret Shepard interpolation is that the kernel $W$ is replaced by
a modified kernel $\widetilde{W}$ such that
\begin{equation}
\label{eq:mod_kernel_zero01}
\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right) =
b_{0}\!\left(\bm{q}\right)\, W\!\left(\bm{q}-\bm{p}_{j},h\right),
\end{equation}
and the original interpolation formula is used with the modified kernel, i.e.,
\begin{equation}
\label{eq:SPH_mod_kernel}
u\!\left(\bm{q}\right) =
\sum\limits_{j \in \mathcal{N}(\bm{q})} f\!\left(\bm{p}_{j}\right)\,
\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right)\, V_{j}.
\end{equation}
By comparison with \cref{eq:new_discrete_shepard}, we see that
$b_{0}\!\left(\bm{q}\right)$ is given by
\begin{equation}
\label{eq:mod_kernel_zero02}
b_{0}\!\left(\bm{q}\right) = \frac{1}{
\sum\limits_{j \in \mathcal{N}(\bm{q})} W\!\left(\bm{q}-\bm{p}_{j},h\right)\,
V_{j}}.
\end{equation}
Thus, $b_{0}\!\left(\bm{q}\right)$ is a function of $\bm{q}$, but is constant
with respect to the particle $\bm{p}_{j}$ for a fixed $\bm{q}$. In other words,
zero order consistency can be restored by multiplying the kernel by a constant.
This motivates the attempt to restore first order consistency by multiplying
the kernel with a function which is linear in the difference
$\bm{q}-\bm{p}_{j}$, i.e.
\begin{equation}
\label{eq:mod_kernel_first01}
\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right) =
\left(b_{0}\!\left(\bm{q}\right)
+ b_{1}\!\left(\bm{q}\right) \left(x_{\bm{q}} - x_{\bm{p}_{j}}\right)
+ b_{2}\!\left(\bm{q}\right) \left(y_{\bm{q}} - y_{\bm{p}_{j}}\right) \right)
\, W\!\left(\bm{q}-\bm{p}_{j},h\right),
\end{equation}
where
$\bm{q} = (x_{\bm{q}}, y_{\bm{q}})^{T}$ and
$\bm{p}_{j} = (x_{\bm{p}_{j}}, y_{\bm{p}_{j}})^{T}$.
For a fixed $\bm{q}$, this
means that we have to determine three coefficients such that we can reproduce
linear polynomials in $\mathbb{R}^{2}$. This yields the system of equations
\begin{align}
\sum\limits_{j \in \mathcal{N}(\bm{q})}
\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right)\, V_{j}
=&\ 1,
\label{eq:first_order01}\\
\sum\limits_{j \in \mathcal{N}(\bm{q})} x_{\bm{p}_{j}} \,
\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right)\, V_{j}
=&\ x_{\bm{q}},
\label{eq:first_order02}\\
\sum\limits_{j \in \mathcal{N}(\bm{q})} y_{\bm{p}_{j}} \,
\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right)\, V_{j}
=&\ y_{\bm{q}}.
\label{eq:first_order03}
\end{align}
\Cref{eq:first_order01} allows us to multiply the right-hand sides of
\cref{eq:first_order02,eq:first_order03} by the left-hand side of
\cref{eq:first_order01} without changing the equations, such that this linear
system can be recast as
\begin{equation}
\label{eq:first_order_comp}
\bm{D}\!\left(\bm{q}\right)\, \bm{b}\!\left(\bm{q}\right) = \bm{e}
\end{equation}
if we define
\begin{equation}
\bm{b}\!\left(\bm{q}\right) =
\begin{pmatrix}
b_{0}\!\left(\bm{q}\right) \\
b_{1}\!\left(\bm{q}\right) \\
b_{2}\!\left(\bm{q}\right)
\end{pmatrix},
\qquad
\bm{e} =
\begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix},
\qquad
\bm{v}_{j}\!\left(\bm{q}\right) =
\begin{pmatrix}
1\\
x_{\bm{p}_j}-x_{\bm{q}}\\
y_{\bm{p}_j}-y_{\bm{q}}
\end{pmatrix}.
\end{equation}
The matrix $\bm{D}\!\left(\bm{q}\right)$ can be expressed as a sum of matrices
of rank $1$ in the form
\begin{equation}
\bm{D}\!\left(\bm{q}\right) =
\sum\limits_{j \in \mathcal{N}(\bm{q})} W\!\left(\bm{p}_{j}-\bm{q},h\right) \,
V_{j}
\,
\bm{v}_{j}\!\left(\bm{q}\right) \, \bm{v}_{j}^{T}\!\left(\bm{q}\right).
\end{equation}
It is positive semidefinite since for any $\bm{z} \in \mathbb{R}^{3}$ we have
\begin{equation}
\bm{z}^T \bm{D} \bm{z} =
\bm{z}^T \left(
\sum\limits_{j \in \mathcal{N}(\bm{q})} W\!\left(\bm{p}_{j}-\bm{q},h\right) \,
V_{j}
\,
\bm{v}_{j} \bm{v}_{j}^{T} \right)
\bm{z} =
\sum\limits_{j \in \mathcal{N}(\bm{q})} W\!\left(\bm{p}_{j}-\bm{q},h\right) \,
V_{j}
\,
\left(\bm{v}_{j}^{T}\, \bm{z} \right)^{2} \geq 0,
\end{equation}
since the smoothing kernel $W(\cdot)$ is nonnegative and $V_{j} > 0$. However,
$\bm{D}\!\left(\bm{q}\right)$ is singular unless $\bm{q}$ has at least three
nearest neighbors which are not collinear.
In order to achieve an SPH interpolation with first order consistency at a pixel
$\bm{q}$, it is necessary to solve the $3 \times 3$ linear system
\cref{eq:first_order_comp}. To inpaint a whole image from first order SPH
interpolation, \cref{eq:first_order_comp} needs to be solved for each unknown
pixel. With the solution $\bm{b}\!\left(\bm{q}\right)$ of
\cref{eq:first_order_comp}, a first order consistent SPH approximation can be
written as
\begin{equation}
\label{eq:28}
u\!\left(\bm{q}\right) =
\sum\limits_{j \in \mathcal{N}(\bm{q})} f\!\left(\bm{p}_{j}\right) \,
\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right) \,V_{j} =
\sum\limits_{j \in \mathcal{N}(\bm{q})} f\!\left(\bm{p}_{j}\right) \,
\bm{v}_{j}^{T}\!\left(\bm{q}\right)\, \bm{b}\!\left(\bm{q}\right) \,
W\!\left(\bm{q}-\bm{p}_{j},h\right) \,V_{j}.
\end{equation}
The particular method used here to restore first order consistency was derived,
via a lengthier argument, in \cite{ZB09}, whereas other methods that modify the kernel to
restore first order consistency can be found in \cite{LJ95,Li09}. The method
described here has the advantage that it does not involve derivatives of $f$ to
restore first order consistency.
For image processing, similar techniques were derived from
kernel regression in \cite{TF07}.
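To make the construction concrete, here is a self-contained Python sketch (with illustrative particles, equal weights $V_j$, and an ad-hoc smoothing length) that assembles $\bm{D}(\bm{q})$, solves the $3\times 3$ system by Cramer's rule, and verifies that a linear function is reproduced exactly at $\bm{q}$:

```python
import math

def gauss_kernel(d2, h, eps=5.09):
    return eps / (math.pi * h * h) * math.exp(-eps * d2 / (h * h))

def solve3(A, rhs):
    # Cramer's rule for a 3x3 system A b = rhs
    def det(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det(A)
    return [det([[rhs[i] if j == k else A[i][j] for j in range(3)]
                 for i in range(3)]) / d for k in range(3)]

particles = [(0.3, 0.4), (0.6, 0.5), (0.5, 0.7), (0.45, 0.35), (0.7, 0.65)]
V, h = 0.2, 0.3
f = lambda x, y: 2.0 * x - y + 0.5          # linear test function
q = (0.5, 0.5)

w = [gauss_kernel((q[0] - x) ** 2 + (q[1] - y) ** 2, h) for x, y in particles]
v = [(1.0, x - q[0], y - q[1]) for x, y in particles]

# D(q) = sum_j W_j V_j v_j v_j^T, then solve D b = e = (1, 0, 0)^T
D = [[sum(wj * V * vj[r] * vj[c] for wj, vj in zip(w, v)) for c in range(3)]
     for r in range(3)]
b = solve3(D, [1.0, 0.0, 0.0])

# first order consistent approximation u(q) = sum_j f_j (v_j^T b) W_j V_j
u = sum(f(x, y) * sum(vi * bi for vi, bi in zip(vj, b)) * wj * V
        for (x, y), vj, wj in zip(particles, v, w))
print(abs(u - f(*q)) < 1e-9)  # True: linear f is reproduced up to rounding
```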
The benefit of a higher order consistency does not come without a
price.
The zero order consistent Shepard interpolation \cref{eq:new_discrete_shepard}
only modifies the kernel such that it satisfies a discrete partition of unity
property. With this modified kernel, Shepard interpolation produces the value
of $u$ at $\bm{q}$ as a convex combination of the values of $f$ at the
neighboring particles $\bm{p}_{j}$. Thus, it prevents over- and undershoots. If
we want the first order consistent method \cref{eq:28} to prevent over- and
undershoots, we have to put a restriction on the positions of particles, since
we have to satisfy \cref{eq:first_order02,eq:first_order03}. These equations can
be written in a compact way as
\begin{equation}
\label{eq:pos_require_first_order}
\bm{q} =
\sum\limits_{j \in \mathcal{N}(\bm{q})} \bm{p}_{j} \,
\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right)\, V_{j} .
\end{equation}
In an ideal situation, $u$ at $\bm{q}$ would be a convex combination of the
values of $f$ at the neighboring particles $\bm{p}_{j}$ with weights given by
$\widetilde{W}\!\left(\bm{q}-\bm{p}_{j},h\right)\, V_{j}$. However,
\cref{eq:pos_require_first_order} along with \cref{eq:first_order01} implies
that this can only be the case if the position $\bm{q}$ is also a convex
combination of the particle positions $\bm{p}_{j}$ with the same weights. In
most cases, this condition on the positions of particles is violated. In order
to achieve first order consistency regardless of the spatial distribution of
particles, the modified kernel $\widetilde{W}$ violates some of the properties
defined in \cref{sec:formulation}. In particular violation of
the positivity requirement results in visible artifacts as can be seen in the
bottom left block of images in \cref{fig:random_5}.
This phenomenon is also mentioned as violation of a
maximum-minimum principle in the context of inpainting in \cite{HH20}.
\subsection{Common Smoothing Kernels}
\label{sec:kernels}
In SPH, most smoothing kernels $W$ incorporate the smoothing length as a scaling
parameter, such that they can be expressed in the form
\begin{equation}
\label{eq:eta}
W\!\left(\bm{q}-\bm{p},h\right) = W\!\left(\frac{\bm{q}-\bm{p}}{h}\right) =
W\!\left(\bm{\eta}\right)
\qquad \text{with} \qquad
\bm{\eta} \coloneqq \frac{\bm{q}-\bm{p}}{h}.
\end{equation}
Further, it is common to choose radial kernels such that they can be written in
the form
\begin{equation}
\label{eq:kernel_rbf}
W\!\left(\bm{q}-\bm{p},h\right) = W\!\left(\bm{\eta}\right) =
\frac{\rho}{h^{2}} \, \Phi\!\left(\left\lVert \bm{\eta} \right\rVert\right).
\end{equation}
Here, $\rho$ is a normalization factor to satisfy the continuous unity property.
Probably the most common kernel is of Gaussian type:
\begin{equation}
\Phi\!\left(\left\lVert \bm{\eta} \right\rVert\right) =
\exp\!\left(-\epsilon\, \left\lVert \bm{\eta} \right\rVert^{2}\right).
\end{equation}
However, as the Gaussian does not have a compact support, it is truncated at
$\left\lVert \bm{\eta} \right\rVert=1$. Thus, the parameter $\epsilon$ should be
chosen such that the values of the resulting kernel $W\!\left(\bm{\eta}\right)$
for $\left\lVert \bm{\eta} \right\rVert > 1$ can be safely neglected. For the
value of the Gaussian at $\left\lVert \bm{\eta} \right\rVert=1$, we obtain
\begin{equation}
W\!\left(\bm{\eta}\right) = \frac{\rho}{h^{2}}\, \exp\!\left(-\epsilon\right)
\qquad \text{if} \qquad
\left\lVert \bm{\eta} \right\rVert = 1,
\end{equation}
which inspires the condition
\begin{equation}
W\!\left(\bm{\eta}\right)
\leq
\frac{0.01}{h^{2}}
\qquad \text{if} \qquad
\left\lVert \bm{\eta} \right\rVert = 1.
\end{equation}
Together with the continuous unity property, this allows us to determine both
parameters $\rho$ and $\epsilon$, such that we use the Gaussian in the form
\begin{equation}
W\!\left(\bm{\eta}\right) =
\frac{\epsilon}{\pi\, h^{2}} \,
\exp\!\left(-\epsilon\, \left\lVert \bm{\eta}\right\rVert^{2}\right)
\end{equation}
with $\epsilon = 5.09$. Here, we have expressed $\rho$ as a function of
$\epsilon$.
An alternative to the Gaussian is given by the Mat\'{e}rn kernels \cite{Fa07}. In contrast to
the Gaussian, which is arbitrarily often continuously differentiable, Mat\'{e}rn
kernels come in different degrees of smoothness. The $C^{0}$-Mat\'{e}rn kernel, which is
merely continuous but not differentiable at $\bm{\eta} = \bm{0}$, is given by
\begin{equation}
\label{eq:matern_1}
W\!\left(\bm{\eta}\right) = \frac{\epsilon^{2}}{2 \pi\, h^{2}}\,
\exp\!\left(-\epsilon \left\lVert \bm{\eta} \right\rVert\right),
\end{equation}
for which we chose $\epsilon = 6.52$. A higher regularity at
$\bm{\eta} = \bm{0}$ can be achieved with the $C^{2}$-Mat\'{e}rn kernel
\begin{equation}
\label{eq:matern_2}
W\!\left(\bm{\eta}\right) =
\frac{\epsilon^{2}}{6 \pi\, h^{2}}\,
\left(1 + \epsilon \left\lVert \bm{\eta} \right\rVert\right)
\exp\!\left(-\epsilon \left\lVert \bm{\eta} \right\rVert\right),
\end{equation}
which we use in our experiments with $\epsilon = 8.04$.
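The parameter values $\epsilon = 5.09$, $6.52$, and $8.04$ can be recovered numerically: in each case, the unity property fixes $\rho$ as a function of $\epsilon$, and $\epsilon$ is then determined by requiring the kernel value at $\lVert\bm{\eta}\rVert = 1$ to equal $0.01/h^{2}$. A short Python sketch (the residual functions below encode these closed forms; the bracketing interval is our choice):

```python
import math

def bisect(res, lo, hi, iters=80):
    # res(lo) > 0 > res(hi); standard bisection
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if res(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# kernel value at ||eta|| = 1 (times h^2), minus the 0.01 threshold
residuals = {
    "gaussian": lambda e: e / math.pi * math.exp(-e) - 0.01,
    "matern_c0": lambda e: e ** 2 / (2 * math.pi) * math.exp(-e) - 0.01,
    "matern_c2": lambda e: e ** 2 * (1 + e) / (6 * math.pi) * math.exp(-e) - 0.01,
}

for name, res in residuals.items():
    print(name, round(bisect(res, 2.0, 20.0), 2))
# gaussian 5.09, matern_c0 6.52, matern_c2 8.04
```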
Although the truncated Gaussian is a common choice, the original SPH paper
\cite{Lu77} already introduced a kernel with compact support,
namely
\begin{equation}
W\!\left(\bm{\eta}\right) = \frac{5}{\pi\, h^{2}}
\begin{cases}
\left(1 + 3 \left\lVert \bm{\eta} \right\rVert\right)
\left(1 - \left\lVert \bm{\eta} \right\rVert\right)^{3},
&\ \left\lVert \bm{\eta} \right\rVert \leq 1, \\
0, &\ \left\lVert \bm{\eta} \right\rVert > 1,
\end{cases}
\end{equation}
which we will call Lucy kernel. Other commonly used kernels with compact support
are the cubic spline \cite{LL03}
\begin{equation}
W\!\left(\bm{\eta}\right) = \frac{120}{14 \pi\, h^{2}}
\begin{cases}
\frac{2}{3} - 4 \left\lVert \bm{\eta} \right\rVert^{2} +
4 \left\lVert \bm{\eta} \right\rVert^{3},
&\ \left\lVert \bm{\eta} \right\rVert \leq \frac{1}{2}, \\
\frac{1}{6} \left(2 - 2 \left\lVert \bm{\eta} \right\rVert\right)^{3},
&\ \frac{1}{2} < \left\lVert \bm{\eta} \right\rVert \leq 1, \\
0, &\ \left\lVert \bm{\eta} \right\rVert > 1,
\end{cases}
\end{equation}
and the Wendland $C^{4}$ kernel \cite{We05}
\begin{equation}
W\!\left(\bm{\eta}\right) = \frac{3}{\pi\, h^{2}}
\begin{cases}
\left(35 \left\lVert \bm{\eta} \right\rVert^{2} +
18 \left\lVert \bm{\eta} \right\rVert + 3\right)
\left(1 - \left\lVert \bm{\eta} \right\rVert\right)^{6},
&\ \left\lVert \bm{\eta} \right\rVert \leq 1, \\
0, &\ \left\lVert \bm{\eta} \right\rVert > 1.
\end{cases}
\end{equation}
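All of the kernels above are normalized so that they integrate to one over
$\mathbb{R}^{2}$. For the compactly supported kernels this reduces, with
$h = 1$, to the radial condition
$2\pi \int_{0}^{1} W(r)\, r \, dr = 1$, which is easy to verify numerically.
A minimal sketch:

```python
import math

def radial_integral(f, n=20000):
    # Midpoint rule for 2*pi * integral_0^1 f(r) * r dr.
    dr = 1.0 / n
    return 2 * math.pi * sum(f((i + 0.5) * dr) * (i + 0.5) * dr * dr
                             for i in range(n))

# Kernels for h = 1, written as functions of r = ||eta||.
def lucy(r):
    return 5 / math.pi * (1 + 3 * r) * (1 - r) ** 3

def cubic_spline(r):
    if r <= 0.5:
        return 120 / (14 * math.pi) * (2 / 3 - 4 * r ** 2 + 4 * r ** 3)
    return 120 / (14 * math.pi) * (2 - 2 * r) ** 3 / 6

def wendland_c4(r):
    return 3 / math.pi * (35 * r ** 2 + 18 * r + 3) * (1 - r) ** 6

for W in (lucy, cubic_spline, wendland_c4):
    print(f"{W.__name__}: {radial_integral(W):.4f}")   # each prints ≈ 1.0000
```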
\section{SPH Inpainting}
\label{sec:inpainting}
For the majority of inpainting problems discussed in this
paper,
we consider the reconstruction of an image $f$ from a sparse set of values at
scattered pixel locations. These locations, called mask points, take the role
of particles $\bm{p}_{j}$ for our SPH-inspired inpainting procedure. The set of
all mask points is the inpainting mask $\bm{c}$.
Exceptions from this setting are the examples of scratch and
text removal in \cref{sec:scratch}, which we include to investigate
how SPH inpainting performs for some classical inpainting problems.
\subsection{Choosing Influence Areas and Smoothing Lengths}
In order to use \cref{eq:SPH_mod_kernel} for inpainting, whether with the
original particle approximation, Shepard interpolation, or the first order
consistent method, we still have to determine an area of influence $V_{j}$ for
each given mask point $\bm{p}_{j}$. Further, we want to enhance the adaptivity
of the method by allowing for different smoothing lengths $h_{j}$ of the kernels
centered at the individual particles. This adaptivity is motivated by the
results in \cite{DF11}.
A reasonable idea is to assume that the area of influence of a given mask point
$\bm{p}_{j}$ is the set of all points which are closer to $\bm{p}_{j}$ than to
any other mask point in $\bm{c}$. This idea leads to a Voronoi tessellation of
the domain $\Omega$ with seeds given by the mask points. Voronoi cells have
been used before in the context of SPH
\cite{GX16,SA17} with promising
results.
As we are working in a discrete setting where the smallest unit of area is a
pixel, a method which determines approximate Voronoi diagrams based on the
squared Euclidean distance transform is our tool of choice for this task. This
is a rather natural approach since the Voronoi cell $\Omega_{j}$ associated to
the mask point $\bm{p}_{j}$ is defined as
\begin{equation}
\Omega_{j} = \left\{ \bm{q} \in \Omega\, \vert \,
\left\lVert \bm{q}-\bm{p}_{j} \right\rVert \leq
\left\lVert \bm{q}-\bm{p}_{k} \right\rVert \text{ for all }
1 \leq k \leq M,\, k \neq j \right\}.
\end{equation}
Given a binary image $g$ which only takes the values $0$ and $\infty$
throughout a domain $\Omega$, the distance
transform
assigns to each pixel
$\bm{q}$ in $\Omega$ its squared distance to the nearest pixel $\bm{p}$ with
$g(\bm{p})=0$. In our case, $g$ takes the value $0$ at the mask points and
$\infty$ everywhere else. For practical applications, $\infty$ can be replaced
by a sufficiently large number. In a two-dimensional domain, the squared
distance transform is given by
\begin{equation}
\label{eq:dist}
\begin{split}
\mathcal{D}(x,y) =&\
\min_{x',y'}\left\{(x-x')^2 + (y-y')^2 + g(x',y')\right\} \\
=&\
\min_{x'}\left\{(x-x')^2 + \min_{y'}\left\{(y-y')^2+g(x',y')\right\}\right\}
\end{split}
\end{equation}
such that it can be computed by two consecutive squared distance transforms in
one dimension. We used the algorithm from \cite{Bo92,FH12} to compute the
distance transform, which is shown to have a complexity of $\mathcal{O}(n_{x}\,
n_{y})$, where $n_{x}$ and $n_{y}$ are the number of pixels in the image domain
$\Omega$ in $x$- and $y$-direction, respectively. For visualization purposes,
each Voronoi cell is depicted in a different color in \cref{fig:color}.
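The separable evaluation of \cref{eq:dist} can be sketched directly. For
brevity, the sketch below uses a brute-force minimum per scanline instead of
the linear-time lower-envelope construction of \cite{FH12}, but it computes the
same squared distances:

```python
import numpy as np

INF = 10**9  # stands in for infinity on finite images

def squared_dist_transform(g):
    # Separable squared Euclidean distance transform:
    # first a 1D pass along y, then a 1D pass along x, as in eq. (eq:dist).
    ny, nx = g.shape
    ys, xs = np.arange(ny), np.arange(nx)
    d = np.empty_like(g)
    # pass along y: d(x,y) = min_{y'} (y - y')^2 + g(x, y')
    for x in range(nx):
        col = g[:, x]
        d[:, x] = np.min((ys[:, None] - ys[None, :]) ** 2 + col[None, :], axis=1)
    # pass along x on the intermediate result
    out = np.empty_like(d)
    for y in range(ny):
        row = d[y, :]
        out[y, :] = np.min((xs[:, None] - xs[None, :]) ** 2 + row[None, :], axis=1)
    return out
```

The Voronoi label of a pixel is the index of the mask point realizing the
minimum; library routines such as
\texttt{scipy.ndimage.distance\_transform\_edt} with
\texttt{return\_indices=True} return both quantities at once.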
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask}
\caption[Voronoi Seeds]{Image with seeds marked in white.}
\label{fig:mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/dt}
\caption[Distance transform in $2D$]{Distance transform.}
\label{fig:dt_new}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/color}
\caption[Voronoi Diagram]{Resulting Voronoi diagram with seeds marked in
black.}
\label{fig:color}
\end{subfigure}
\caption[Voronoi Tessellation]{Voronoi tessellation using distance transform
for a given set of mask points $\bm{p}_{j}$ depicted as white seeds in
\cref{fig:mask}. The resulting distance transform is depicted in
\cref{fig:dt_new}. Corresponding Voronoi cells can be seen in
\cref{fig:color}, each depicted in a different color with seeds in black
now.}
\label{fig:tessellation}
\end{figure}
After the Voronoi tessellation, each mask point $\bm{p}_{j}$ is assigned as
area of influence the area of its corresponding Voronoi cell. In our discrete
setting, this area is defined as the number of pixels that belong to that
cell, i.e., each pixel is assumed to have area $1$.
It seems natural to determine the smoothing length $h_{j}$ in relation to the
volume $V_{j}$, e.g.\ half the diameter of the Voronoi cell associated to
$\bm{p}_{j}$. However, this particular choice is prone to leave pixels for
which the required minimal number of nearest neighbors cannot be reached. If a
pixel does not lie within the support of any kernel, it cannot be inpainted.
A straightforward remedy would be to multiply the diameter of each Voronoi cell
with a constant factor chosen such that each pixel has at least the desired
minimum number of nearest neighbors. Unfortunately, this would result in
oversmoothing and blurring as the resulting kernel supports would be rather
large. Instead, we follow the adaptive, iterative approach of \cite{DF11} for
the choice of smoothing lengths, which also enforces that any pixel is inpainted
with at least a specified minimum number of neighbors. The scheme starts by
assigning each mask point $\bm{p}_{j}$ an initial smoothing length
$h_{j,\textrm{init}}$. Using the corresponding kernels, unknown pixels are
inpainted, but only if they lie within the support of at least a fixed number
of kernels; pixels which do not satisfy this requirement are left untouched for
now.
Afterwards we check whether there are still pixels with no assigned value left.
If so, we increase all smoothing lengths according to a certain rule and try
again to inpaint those pixels which are not yet assigned a value. This procedure
is repeated iteratively until each pixel is inpainted. As growing strategy for
the smoothing lengths, we increase them linearly with the number of iterations.
The original method in \cite{DF11} assigned initial smoothing lengths which are
connected to the choice of $V_{j}$ as made there. However, our experiments
showed that it is beneficial if each kernel starts with a minimal smoothing
length of $h_{j,\textrm{init}} = 1$. Thus, mask points can initially only be
recognized as neighbors within a $3\times 3$-patch around them such that the
process starts with the smallest sensible isotropic support for each kernel. The
smoothing length $h_{j}$ is in each step equal to the number of iterations.
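A minimal sketch of this growth scheme for the zero order consistent (Shepard)
case, with a truncated Gaussian and a $(2h{+}1) \times (2h{+}1)$ patch as
kernel support (one plausible truncation matching the $3 \times 3$ initial
neighborhood; the actual implementation additionally uses the volumes $V_{j}$
and the kernels described above):

```python
import numpy as np

def shepard_inpaint(image, mask, min_neighbors=5, eps=5.09):
    # Iterative Shepard inpainting: the smoothing length h equals the
    # iteration number and grows until every unknown pixel lies within
    # the support of at least `min_neighbors` kernels.
    py, px = np.nonzero(mask)
    vals = image[py, px].astype(float)
    result = np.where(mask, image.astype(float), np.nan)
    h = 1                                        # h_{j,init} = 1: 3x3 patch
    while np.isnan(result).any():
        yy, xx = np.nonzero(np.isnan(result))
        dy = np.abs(yy[:, None] - py[None, :])
        dx = np.abs(xx[:, None] - px[None, :])
        w = np.exp(-eps * (dy ** 2 + dx ** 2) / h ** 2)
        w[np.maximum(dy, dx) > h] = 0.0          # truncated kernel support
        covered = (w > 0).sum(axis=1) >= min_neighbors
        if covered.any():
            wc = w[covered]
            result[yy[covered], xx[covered]] = (wc @ vals) / wc.sum(axis=1)
        h += 1                                   # linear growth of h
    return result
```

On a constant image the weighted averages reproduce the constant exactly, which
makes the scheme easy to sanity-check.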
The last parameter that we have to set is the required minimal number of nearest
neighbors. For Shepard interpolation, we need at least one neighbor to perform
an inpainting, whereas for the method with first order consistency, any pixel
must be contained in the support of at least three kernels. For all methods,
it is reasonable to require a slightly larger minimal number of nearest
neighbors, as this improves results. In particular, for the first order
consistent method this reduces the chance of encountering cases in which all
mask points closest to an unknown pixel are collinear. For our experiments, we
require a minimum of five nearest neighbors.
\subsection{Sparse Inpainting on Regular and Random Masks}
\label{sec:improvement}
We follow here a didactic approach and consider the test image ``trui'' of
size $256 \times 256$ pixels as an example. Results and comparisons for further
images can be found in \cref{sec:comparisons} and in the supplementary material.
All experiments in this paper were performed on an Intel Core
i7-9700K CPU @ 3.6GHz.
To get a first impression, we equip the test image with
two different types of sparse masks
$\bm{c}$.
In one case, we choose for $\bm{c}$ a regular mask with a
density of $6.25 \%$, i.e., pixels on a square grid with a grid width of $4$
pixels are taken as mask points. In the other case, we randomly select $5 \%$
of all image pixels as mask points. Test image ``trui'' and both masks are
shown in \cref{fig:trui_mask}.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui}
\caption{Original image}
\label{fig:trui}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask_reg_256_256_D00625}
\caption{6.25 \% regular mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask_ran_D005_trui}
\caption{5 \% random mask}
\end{subfigure}
\caption[Test image, regular mask ($6.25 \%$ density) and random mask
($5 \%$ density)]{The $256 \times 256$ test image ``trui'',
a regular grid of mask points, and a random selection of
$5 \%$ of image pixels.}
\label{fig:trui_mask}
\end{figure}
In this setting, we perform SPH inpainting with a required minimal number of
five neighbors, starting with the kernels given in \cref{sec:kernels} and
modifying them either according to Shepard interpolation
\cref{eq:new_discrete_shepard} or the first order consistent method given by
\cref{eq:first_order_comp,eq:28}.
Corresponding results for the case of having a regular mask are
depicted in \cref{fig:regular_6_25}, whereas the results based
on the randomly chosen mask points are given in \cref{fig:random_5}.
In order to compare results, all figures give the corresponding mean square
errors (MSEs) between the inpainting result and the original image.
Further, we have included the runtimes of each set of
experiments, averaged across the six different kernels under consideration.
\begin{figure}[p]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{tabular}{m{0.7\textwidth}m{0.2\textwidth}}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_gauss_zero}
\caption{Gaussian\\
$\textrm{MSE} = 83.28$}
\label{fig:zero_reco_6_25_gaussian}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_matern0_zero}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 83.22$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_matern2_zero}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 82.03$}
\label{fig:zero_reco_6_25_matern_2}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_lucy_zero}
\caption{Lucy\\
$\textrm{MSE} = 81.15$}
\label{fig:zero_reco_6_25_lucy}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_cubic_zero}
\caption{cubic spline\\
$\textrm{MSE} = 78.64$}
\label{fig:zero_reco_6_25_cubic}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_wend_zero}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 81.94$}
\label{fig:zero_reco_6_25_wendland}
\end{subfigure}
\vspace{5ex}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_gauss_first}
\caption{Gaussian\\
$\textrm{MSE} = 85.01$}
\label{fig:first_reco_6_25_gaussian}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_matern0_first}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 80.57$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_matern2_first}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 80.99$}
\label{fig:first_reco_6_25_matern_2}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_lucy_first}
\caption{Lucy\\
$\textrm{MSE} = 82.71$}
\label{fig:first_reco_6_25_lucy}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_cubic_first}
\caption{cubic spline\\
$\textrm{MSE} = 80.94$}
\label{fig:first_reco_6_25_cubic}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_wend_first}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 79.53$}
\label{fig:first_reco_6_25_wendland}
\end{subfigure}
&
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_harm}
\caption{Harmonic\\
$\textrm{MSE} = 121.96$}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 67.95$}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_reg_D00625_eed_lambda_0_2_sigma_0_8}
\caption{EED\\
$\textrm{MSE} = 60.68$}
\end{subfigure}
\end{tabular}
\caption[Inpainting of ``trui'' with a 6.25\% regular mask ]{Inpainting of
``trui'' with the 6.25\% regular mask from \cref{fig:trui_mask} with a zero
order consistency method (top left, (\textbf{a})-(\textbf{f})), first order
consistency method (bottom left, (\textbf{g})-(\textbf{l})), and
diffusion-based inpainting (right, (\textbf{m})-(\textbf{o})).
Inpainting runtime for zero order consistency method was 16.35 s;
for first order consistency method 18.26 s (each averaged across all kernels).
Parameters for EED are $\lambda=0.2$ and $\sigma=0.8$.}
\label{fig:regular_6_25}
\end{figure}
\begin{figure}[p]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{tabular}{m{0.7\textwidth}m{0.2\textwidth}}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_gauss_zero}
\caption{Gaussian\\
$\textrm{MSE} = 208.48$}
\label{fig:zero_reco_5_gaussian}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_matern0_zero}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 206.82$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_matern2_zero}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 207.85$}
\label{fig:zero_reco_5_matern_2}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_lucy_zero}
\caption{Lucy\\
$\textrm{MSE} = 219.29$}
\label{fig:zero_reco_5_lucy}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_cubic_zero}
\caption{cubic spline\\
$\textrm{MSE} = 223.02$}
\label{fig:zero_reco_5_cubic}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_wend_zero}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 244.35$}
\label{fig:zero_reco_5_wendland}
\end{subfigure}
\vspace{5ex}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_gauss_first}
\caption{Gaussian\\
$\textrm{MSE} = 197.58$}
\label{fig:first_reco_5_gaussian}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_matern0_first}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 194.80$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_matern2_first}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 195.21$}
\label{fig:first_reco_5_matern_2}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_lucy_first}
\caption{Lucy\\
$\textrm{MSE} = 220.55$}
\label{fig:first_reco_5_lucy}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_cubic_first}
\caption{cubic spline\\
$\textrm{MSE} = 223.26$}
\label{fig:first_reco_5_cubic}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_wend_first}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 287.41$}
\label{fig:first_reco_5_wendland}
\end{subfigure}
&
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_harm}
\caption{Harmonic\\
$\textrm{MSE} = 226.06$}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 146.46$}
\end{subfigure}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_ran_D005_eed_lambda_0_2_sigma_0_8}
\caption{EED\\
$\textrm{MSE} = 134.92$}
\end{subfigure}
\end{tabular}
\caption[Inpainting of ``trui'' with a 5 \% random mask]
{Inpainting of ``trui''
with the 5 \% random mask from \cref{fig:trui_mask} with a zero order
consistency method (top left, (\textbf{a})-(\textbf{f})), first order
consistency method (bottom left, (\textbf{g})-(\textbf{l})), and
diffusion-based inpainting (right, (\textbf{m})-(\textbf{o})).
Inpainting runtime for zero order consistency method was 3.95 s; for first
order consistency method 5.15 s (each averaged across all kernels).
Parameters for EED are $\lambda=0.2$ and $\sigma=0.8$.}
\label{fig:random_5}
\end{figure}
First of all, comparing the results on the regular mask, we note
that the differences between the individual kernels are not significant.
Further, the higher order of consistency in the bottom left block of images in
\cref{fig:regular_6_25} is not always beneficial. Indeed,
the Gaussian, the Lucy kernel, and the cubic spline achieve lower MSEs when
used in the zero order consistent Shepard interpolation method. Hence,
improving consistency in the sense of polynomial reproduction does not
automatically yield overall better results. If we only regard the MSE, this
remains true in the setting of a random inpainting mask: here,
the performance gains of first order over zero order consistency are also not
significant, and for the kernels with compact support (Lucy, cubic spline, and
$C^{4}$-Wendland) the results are even worse with the first order consistency
method.
Comparing the results in
\cref{fig:regular_6_25,fig:random_5}, we observe that further
problems arise in the case of a random mask. While for a regular mask,
\cref{eq:pos_require_first_order} is satisfied in a way that gives a convex
combination on the right-hand side for every unknown pixel $\bm{q}$, this is
no longer the case for our random mask.
In other words, at pixels whose position cannot be expressed as a convex
combination of mask point positions, the modified kernels for higher order
consistency no longer obey the positivity requirements, resulting in over- and
undershoots which can become quite severe. Moreover, the $3 \times 3$-system
that needs to be solved at each pixel for each modified kernel may become
almost singular, leading to further instabilities.
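This degeneracy can be made concrete in a standard moving least squares
construction of first order consistent weights (a sketch under that
assumption, not necessarily the exact form of
\cref{eq:first_order_comp,eq:28}): the corrected weights solve a $3 \times 3$
moment system whose matrix becomes singular exactly when the weighted mask
points are collinear.

```python
import numpy as np

def first_order_weights(q, pts, w):
    # One standard construction of first order consistent weights:
    # w_tilde_j = w_j * (a . P_j) with P_j = (1, x_j - q_x, y_j - q_y),
    # where a solves the 3x3 moment system M a = e_1 with
    # M = sum_j w_j P_j P_j^T.  The resulting weights reproduce constants
    # and linear functions; M is (almost) singular for (almost) collinear
    # mask points.
    P = np.column_stack([np.ones(len(pts)), pts[:, 0] - q[0], pts[:, 1] - q[1]])
    M = (w[:, None] * P).T @ P
    a = np.linalg.solve(M, np.array([1.0, 0.0, 0.0]))
    return w * (P @ a)

# The corrected weights reproduce a linear function exactly:
pts = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0], [2.0, 2.0]])
w = np.exp(-np.sum((pts - [1.5, 1.5]) ** 2, axis=1))
wt = first_order_weights(np.array([1.5, 1.5]), pts, w)
f = 2.0 + 0.5 * pts[:, 0] - 1.0 * pts[:, 1]    # linear test function
print(round(float(wt @ f), 6))                  # → 1.25, i.e. f evaluated at q
```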
For the zero order consistency method, i.e., Shepard
interpolation, the $C^{0}$-Mat\'{e}rn kernel yields the best result. This is
in line with recent findings by Dell'Accio et~al.\ in \cite{DD20}.
\subsection{Classical Inpainting Applications: Scratch and
Text Removal}
\label{sec:scratch}
Among the classical applications for inpainting \cite{BS00} are
the removal of scratches or text from an image. Alves Mazzini and Petronetto do
Carmo already used SPH inpainting for such tasks in \cite{AP16}. Thus, we also
briefly address such problems here.
As a first example, we consider a damaged version of the image
``trui'' with scratches, see \cref{fig:trui_scratch}. To repair those scratches,
we consider SPH inpainting of zero and first order consistency for the
commonly used Gaussian kernel.
\begin{figure}[p]
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_scratch}
\caption{``trui'' with scratches}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_scratch_gauss_zero}
\caption{Zero order\\
$\textrm{MSE} = 28.93$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_scratch_gauss_first}
\caption{First order\\
$\textrm{MSE} = 23.97$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_scratch_harm}
\caption{Harmonic\\
$\textrm{MSE} = 23.32$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_scratch_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 12.86$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_scratch_eed_lambda_0_2_sigma_0_6}
\caption{EED\\
$\textrm{MSE} = 12.28$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_scratch_crim_pr9}
\caption{Exemplar-based\\
$\textrm{MSE} = 49.58$}
\end{subfigure}
\caption[Inpainting of ``trui'' for scratch removal ]{Image ``trui'' damaged
by scratches (\textbf{a}) and corresponding SPH inpaintings of zero order
consistency (\textbf{b}) as well as first order consistency (\textbf{c})
using a Gaussian kernel. Diffusion- and exemplar-based inpainting results
are shown in (\textbf{d}-\textbf{g}).
Parameters for EED are $\lambda=0.2$ and $\sigma=0.6$.}
\label{fig:trui_scratch}
\end{figure}
Regarding MSEs, both methods yield similar results. The
differences become clearer when we look at particular areas of the image. For
the horizontal scratch below the eyes, we observe visible artifacts in the
zero order method while the first order method produces a more pleasing, though
not perfect visual impression. On the other hand, for scratches crossing the
scarf, the zero order inpainting shows fewer artifacts than the first order
inpainting.
As a second example, we consider ``trui'' overlaid with some
text which we attempt to remove in \cref{fig:trui_text}. As before, we use a
Gaussian and compare SPH inpainting of zero and first order consistency.
For this example, both the zero and the first order consistency method
yield good results, though the first order method is overall slightly better,
for example at the boundary between hair and hat on the left-hand side.
Overall, both results look visually pleasant.
\begin{figure}[p]
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_text}
\caption{``trui'' with text}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_text_gauss_zero}
\caption{Zero order\\
$\textrm{MSE} = 18.58$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_text_gauss_first}
\caption{First order\\
$\textrm{MSE} = 14.27$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_text_harm}
\caption{Harmonic\\
$\textrm{MSE} = 16.29$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_text_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 8.72$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_text_eed_lambda_0_5_sigma_0_6}
\caption{EED\\
$\textrm{MSE} = 7.80$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_text_crim_pr7}
\caption{Exemplar-based\\
$\textrm{MSE} = 27.76$}
\end{subfigure}
\caption[Inpainting of ``trui'' with overlaid text]{Image ``trui'' with
overlaid text (\textbf{a}) and corresponding SPH inpaintings with zero order
consistency (\textbf{b}) as well as first order consistency (\textbf{c})
using a Gaussian kernel. Diffusion- and exemplar-based inpainting results are
shown in (\textbf{d}-\textbf{g}).
Parameters for EED are $\lambda=0.5$ and $\sigma=0.6$.}
\label{fig:trui_text}
\end{figure}
\subsection{Comparisons with Diffusion-Based and Non-Local
Inpainting Methods}
To put the results that we have seen so far in perspective, we
compare them with the performance of other inpainting methods. As simple
representatives of diffusion-based methods, we consider harmonic and biharmonic
inpainting \cite{CS02,GW08}. A more sophisticated method is
edge-enhancing diffusion (EED). Although introduced as a denoising technique
\cite{We98}, it turned out to be also a powerful inpainting method
\cite{GW08,PH16}. For all results shown here and in the
supplement, we have used a discretisation of EED which corresponds to the one
given in \cite{WW13} for $\alpha=0$ and $\beta=0$. Contrast parameter $\lambda$
and noise scale $\sigma$ were adapted to the image at hand.
Let us first consider how these methods perform in the case of sparse
inpainting masks. The results for the different diffusion-based
inpainting methods when using the regular inpainting mask from
\cref{fig:trui_mask} are included in \cref{fig:regular_6_25} on the right.
Comparing all results in \cref{fig:regular_6_25}, we see that, with regard
to MSEs, SPH inpainting performs better than harmonic, but worse than
biharmonic inpainting. As can be expected, EED shows the best results, both
visually and in terms of MSE.
When it comes to inpainting on random masks, the situation is
slightly different as \cref{fig:random_5} illustrates. Again, the results of
the three diffusion-based inpainting methods for the random inpainting mask
from \cref{fig:trui_mask} are included on the right. Comparing all results in
\cref{fig:random_5} shows that the performance of SPH inpainting is similar to
harmonic inpainting in terms of MSE, but closer to biharmonic inpainting in
terms of visual impression. Again, EED achieves the best MSE and visually
smoothest inpainting.
We also compare the results achieved by diffusion-based methods
in case of the image damaged by scratches in \cref{fig:trui_scratch}.
Furthermore, we considered the exemplar-based inpainting approach by Criminisi
et al.~\cite{CP04} as an example of a non-local inpainting method. For this
method, we have considered disc-shaped patches and adapted the patch radius to
the image. As is evident from \cref{fig:trui_scratch}, the first order
consistency SPH inpainting can achieve an MSE similar to harmonic inpainting,
but shows more artifacts. The exemplar-based method on the other hand is
worse with regard to both MSE and creation of artifacts.
Results of the diffusion- and exemplar-based methods in case of
the text removal task are included in \cref{fig:trui_text}. The results of SPH
inpainting are, once more, similar to the results obtained by harmonic
inpainting. The exemplar-based method again produces artifacts, in particular
around the eyes.
As a second example, we consider the ``parrots'' image from the
Kodak database, downscaled to size $384 \times 256$. As inpainting tasks, we
consider the removal of scratches or overlaid texts as well as inpainting
based on a sparse regular and a sparse random mask, respectively (see
\cref{fig:parrots}).
\begin{figure}[t]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots}
\caption{Ground truth}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_scratch}
\caption{Damaged by scratches}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_text}
\caption{Overlaid with text}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask_reg_384_256_D00416}
\caption{4.16 \% regular mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask_ran_D005_parrots}
\caption{5 \% random mask}
\end{subfigure}
\caption[Image ``parrots'' with various damages and sparse inpainting masks]{
Image ``parrots'' with various inpainting tasks.}
\label{fig:parrots}
\end{figure}
\Cref{fig:parrots_reg} shows the results for the inpainting of
``parrots'' with the regular inpainting mask for SPH inpainting with an
isotropic Gaussian and diffusion-based inpainting, whereas results for the
random inpainting mask are given in \cref{fig:parrots_ran}.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_reg_D00416_gauss_zero}
\caption{Zero order consistency\\
$\textrm{MSE} = 132.79$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_reg_D00416_gauss_first}
\caption{First order consistency\\
$\textrm{MSE} = 124.81$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_reg_D00416_harm}
\caption{Harmonic\\
$\textrm{MSE} = 139.10$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_reg_D00416_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 128.18$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_reg_D00416_eed_lambda_1_5_sigma_2}
\caption{EED\\
$\textrm{MSE} = 118.79$}
\end{subfigure}
\caption[Inpainting of ``parrots'' on regular mask]{Inpainting
of ``parrots'' for the regular inpainting mask given in \cref{fig:parrots} for
various inpainting methods. The SPH inpainting uses a Gaussian kernel.
Parameters for EED are $\lambda=1.5$ and $\sigma=2.0$.}
\label{fig:parrots_reg}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_ran_D005_gauss_zero}
\caption{Zero order consistency\\
$\textrm{MSE} = 169.62$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_ran_D005_gauss_first}
\caption{First order consistency\\
$\textrm{MSE} = 173.71$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_ran_D005_harm}
\caption{Harmonic\\
$\textrm{MSE} = 162.53$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_ran_D005_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 147.75$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_ran_D005_eed_lambda_1_2_sigma_2}
\caption{EED\\
$\textrm{MSE} = 137.04$}
\end{subfigure}
\caption[Inpainting of ``parrots'' on random mask]{Inpainting
of ``parrots'' for the random inpainting mask given in \cref{fig:parrots} for
various inpainting methods. The SPH inpainting uses a Gaussian kernel.
Parameters for EED are $\lambda=1.2$ and $\sigma=2.0$.}
\label{fig:parrots_ran}
\end{figure}
For the regular mask, the best MSE is achieved by EED,
followed by the first order consistency SPH inpainting. For the random
mask, diffusion-based methods show better performance, with EED inpainting
taking the lead with respect to both MSE and visual impression. The first order
consistency SPH inpainting suffers from the aforementioned artifacts.
Results for the inpainting of image ``parrots'' damaged by
scratches can be found in \cref{fig:parrots_scratches} whereas
\cref{fig:parrots_text} shows the results for text removal.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_scratch_gauss_zero}
\caption{Zero order consistency\\
$\textrm{MSE} = 37.55$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_scratch_gauss_first}
\caption{First order consistency\\
$\textrm{MSE} = 44.92$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_scratch_crim_pr9}
\caption{Exemplar-based\\
$\textrm{MSE} = 76.95$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_scratch_harm}
\caption{Harmonic\\
$\textrm{MSE} = 32.94$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_scratch_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 28.42$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_scratch_eed_lambda_0_8_sigma_1_8}
\caption{EED\\
$\textrm{MSE} = 25.70$}
\end{subfigure}
\caption[Inpainting of ``parrots'' for scratch removal]{
Inpainting of ``parrots'' damaged by scratches
(cf.~\cref{fig:parrots}). The SPH inpainting uses a Gaussian kernel.
Parameters for EED are $\lambda=0.8$ and $\sigma=1.8$.}
\label{fig:parrots_scratches}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_text_gauss_zero}
\caption{Zero order consistency\\
$\textrm{MSE} = 24.00$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_text_gauss_first}
\caption{First order consistency\\
$\textrm{MSE} = 26.41$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_text_crim_pr7}
\caption{Exemplar-based\\
$\textrm{MSE} = 35.90$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_text_harm}
\caption{Harmonic\\
$\textrm{MSE} = 21.20$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_text_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 20.26$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_text_eed_lambda_1_2_sigma_2}
\caption{EED\\
$\textrm{MSE} = 17.76$}
\end{subfigure}
\caption[Inpainting of ``parrots'' for text removal]{
Inpainting of ``parrots'' overlaid by text
(cf.~\cref{fig:parrots}). The SPH inpainting uses a Gaussian kernel.
Parameters for EED are $\lambda=1.2$ and $\sigma=2.0$.}
\label{fig:parrots_text}
\end{figure}
In both tasks, zero order consistency SPH inpainting performs
better than the first order consistency method. Both are preferable to the
exemplar-based inpainting, but cannot quite achieve the same quality as the
diffusion-based approaches. Further results and comparisons are included in the
supplementary material.
Overall, we see that SPH inpainting is better suited for
inpainting problems with sparse masks, which are closer in nature to the
original applications of SPH. Some further remarks on how SPH inpainting may
be adapted to non-sparse inpainting tasks, which are beyond the scope of this
paper, can be found in \cref{sec:conclusions}. Instead, we focus on how the
performance of SPH inpainting can be enhanced in settings which allow data
optimization, as they are encountered, e.g., in compression.
\section{Optimized Inpainting for Known Ground Truths}
\label{sec:mask_optimization}
\subsection{A Mixed Order Consistency Method}
The goal of mixed order consistency is to combine zero order and first order
consistency to get the best possible result. For this purpose, two inpaintings
are performed, one with zero order consistency and one with first order
consistency. Both results are compared pixel-wise, and the one with the smaller
reconstruction error is kept. The new method is described in
\cref{alg:new_inpainting}.
\begin{center}
\begin{algorithm}[htb]
\DontPrintSemicolon
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwInOut{Initialize}{Initialize}
\Input{Original image $\bm{f}$, mask $\bm{c}$}
\Output{Reconstruction $\bm{u}$}
\Initialize{Perform Voronoi tessellation. Assign areas of influence $V_{j}$
to mask points. Assign initial smoothing lengths
$h_{j} = h_{j,\textrm{init}} = 1$ to mask points. Set $k = 1$.}
\While{\normalfont{not all pixels $\bm{q}$ have been inpainted}}{
\For{\normalfont{each pixel} $\bm{q}$} {
Detect neighboring mask points $\bm{p}_{j}$, $j\in \mathcal{N}(\bm{q})$ of
$\bm{q}$.\;
\eIf{\normalfont{number of neighbors is larger than or equal to required
minimum and neighbors are not collinear}}{
Inpaint $\bm{q}$ with zero order consistency
according to
\cref{eq:SPH_mod_kernel,eq:mod_kernel_zero01,eq:mod_kernel_zero02}.\;
Inpaint $\bm{q}$ with first order consistency
according to
\cref{eq:mod_kernel_first01,eq:first_order_comp,eq:28}.\;
\eIf{\normalfont{error of zero order consistency} is less than
\normalfont{error of first order consistency}}
{
keep inpainting of $\bm{q}$ with zero order consistency,\;
}
{
keep inpainting of $\bm{q}$ with first order consistency.\;
}
}{continue\;}
}
$k = k+1$\;
$h_{j} = k \cdot h_{j,\textrm{init}}$\;
}
\caption[Mixed Consistency Algorithm]{Mixed Consistency Algorithm}
\label{alg:new_inpainting}
\end{algorithm}
\end{center}
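The per-pixel selection step of \cref{alg:new_inpainting} can be sketched in Python as follows; the two candidate values are assumed to have been computed beforehand by the zero and first order consistency inpaintings, and the comparison uses the known original gray value, which is available in this data optimization setting:

```python
def mixed_consistency_pixel(f_q, u_zero, u_first):
    """Keep whichever reconstruction of pixel q is closer to the known
    ground-truth value f_q (mask optimization setting).

    u_zero / u_first: candidate values from the zero and first order
    consistency SPH inpaintings (computed beforehand)."""
    if abs(f_q - u_zero) <= abs(f_q - u_first):
        return u_zero, "zero"
    return u_first, "first"
```

The returned tag records which consistency order was kept; collecting these tags over all pixels yields the consistency map discussed later.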
Based on the 6.25 \% regular mask from \cref{fig:trui_mask},
we get the results depicted in \cref{fig:mixed_regular_6_25}, whereas the
results for the 5 \% random mask are depicted in \cref{fig:mixed_random_5}. For
both masks, we have used a minimum of five neighbors. The obtained results
are clearly superior to the results achievable with either the zero or first
order consistent method, especially for the sparser random mask, showing better
MSEs and no visible over- or undershoots.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_gauss_mix}
\caption{Gaussian\\
$\textrm{MSE} = 78.37$}
\label{fig:mixed_reco_6_25_gaussian}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_matern0_mix}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 67.03$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_matern2_mix}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 68.12$}
\label{fig:mixed_reco_6_25_matern_2}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_lucy_mix}
\caption{Lucy\\
$\textrm{MSE} = 78.86$}
\label{fig:mixed_reco_6_25_lucy}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_cubic_mix}
\caption{cubic spline\\
$\textrm{MSE} = 72.78$}
\label{fig:mixed_reco_6_25_cubic}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_wend_mix}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 62.08$}
\label{fig:mixed_reco_6_25_wendland}
\end{subfigure}
\caption[Inpainting of ``trui'' with a 6.25 \% regular mask with a mixed order
consistency method]{Inpainting of ``trui'' with the 6.25 \%
regular mask from \cref{fig:trui_mask} with a mixed order consistency method.
Inpainting runtime 19.18 s (averaged across all kernels).}
\label{fig:mixed_regular_6_25}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_gauss_mix}
\caption{Gaussian\\
$\textrm{MSE} = 128.37$}
\label{fig:zero_reco_mixed_5_gaussian}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_matern0_mix}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 126.43$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_matern2_mix}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 126.36$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_lucy_mix}
\caption{Lucy\\
$\textrm{MSE} = 127.07$}
\label{fig:zero_reco_mixed_5_lucy}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_cubic_mix}
\caption{cubic spline\\
$\textrm{MSE} = 125.51$}
\label{fig:zero_reco_mixed_5_cubic}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_wend_mix}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 125.02$}
\label{fig:zero_reco_mixed_5_wendland}
\end{subfigure}
\caption[Inpainting of ``trui'' with a 5 \% random mask with a mixed
consistency method]{Inpainting of ``trui'' with 5 \% random mask from
\cref{fig:trui_mask} with a mixed consistency method.
Inpainting runtime 5.13 s (averaged across all kernels).}
\label{fig:mixed_random_5}
\end{figure}
\subsection{Spatial Optimization}
Our spatial optimization relies on a novel densification strategy. Instead of a
probabilistic approach as in \cite{HM13}, we base our method on Voronoi
tessellation. The algorithm starts with an empty mask and, as an initial step,
inserts the required minimum number of neighbors at random positions. After this
step, an initial inpainting takes place and a Voronoi tessellation is performed
with the initial mask points as ``seeds''. Once this is done, we detect the
Voronoi cell with the highest error and insert a new mask point at the pixel
with the highest error within the cell. The error of the reconstruction at a
pixel $\bm{q}_{j,k}$ in the Voronoi cell $\Omega_{j}$ is defined as
\begin{equation}
E_{\bm{q}_{j,k}} =
\left\lvert f\!\left(\bm{q}_{j,k}\right) - u\!\left(\bm{q}_{j,k}\right)
\right\rvert^{2},
\end{equation}
with $f$ being the original image and $u$ the reconstruction, whereas the error
for the Voronoi cell $\Omega_{j}$ is given by the sum of the reconstruction
errors at all pixels in the cell,
i.e.,
\begin{equation}
E_{\Omega_{j}} = \sum\limits_{\bm{q}_{j,k} \in \Omega_{j}} E_{\bm{q}_{j,k}} =
\sum\limits_{\bm{q}_{j,k} \in \Omega_{j}}
\left\lvert f\!\left(\bm{q}_{j,k}\right) - u\!\left(\bm{q}_{j,k}\right)
\right\rvert^{2}.
\end{equation}
A new inpainting as well as a new Voronoi tessellation are then computed with
the new mask and the process continues in the same manner until the required
mask density is achieved. The densification algorithm is described in
\cref{alg:densification}. We remark that it is possible to insert more than one
new mask point in each step to speed up the procedure. However, inserting too
many mask points at once deteriorates the quality of the final mask. For the
sake of completeness, we also mention that a densification approach using the
$\mathrm{L}^{1}$-error within Voronoi cells has been used in \cite{SB00} for
nearest-neighbor and piecewise constant interpolation. In our experiments,
using the $\mathrm{L}^{1}$-error always yielded inferior results.
\begin{center}
\begin{algorithm}
\DontPrintSemicolon
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwInOut{Initialize}{Initialize}
\Input{Original image $\bm{f}$, minimum number of neighbors, number of mask
points to add per iteration, required density}
\Output{Mask $\bm{c}$, reconstruction $\bm{u}$}
\Initialize{Insert minimum number of neighbors at random positions. Perform
Voronoi tessellation and initial inpainting.}
\While{\normalfont{mask density $<$ required mask density}}{
Find Voronoi cell(s) $\Omega_{j}$ with highest error $E_{\Omega_{j}}$.\;
Find pixel(s) $\bm{q}_{j,k}$ in cell(s) $\Omega_{j}$ with highest error(s)
$E_{\bm{q}_{j,k}}$.\;
Add mask point(s) at position(s) $\bm{q}_{j,k}$.\;
Perform Voronoi tessellation.\;
Perform inpainting.\;
}
\caption{Densification Algorithm}
\label{alg:densification}
\end{algorithm}
\end{center}
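A minimal NumPy sketch of \cref{alg:densification}, with a hypothetical \texttt{inpaint} routine that maps mask point positions to a full reconstruction and a brute-force nearest-neighbor Voronoi assignment:

```python
import numpy as np

def densify(f, inpaint, n_init=5, target_density=0.05, rng=None):
    """Greedy Voronoi densification (sketch of the densification algorithm).

    f       : ground-truth image (2D array)
    inpaint : hypothetical routine mapping an (m, 2) array of mask point
              positions to a full reconstruction of f"""
    rng = np.random.default_rng() if rng is None else rng
    h, w = f.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([ys.ravel(), xs.ravel()], axis=1)
    # Initialize with the required minimum number of random mask points.
    points = list(pixels[rng.choice(h * w, size=n_init, replace=False)])
    while len(points) < target_density * h * w:
        pts = np.array(points)
        err = ((f - inpaint(pts)) ** 2).ravel()          # per-pixel error
        # Voronoi assignment: each pixel belongs to its nearest mask point.
        cell = np.linalg.norm(
            pixels[:, None, :] - pts[None, :, :], axis=2).argmin(axis=1)
        # Cell with the largest accumulated error ...
        worst = np.bincount(cell, weights=err, minlength=len(pts)).argmax()
        # ... and, inside it, the pixel with the largest error.
        in_cell = np.where(cell == worst)[0]
        points.append(pixels[in_cell[err[in_cell].argmax()]])
    pts = np.array(points)
    return pts, inpaint(pts)
```

In practice the Voronoi tessellation and the inpainting would be updated incrementally rather than recomputed from scratch in every iteration.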
\subsection{Tonal Optimization}
\label{sec:tonal_opt}
Apart from spatial optimization, we also incorporate a gray value optimization
of the mask points for a fixed mask $\bm{c}$. The goal of this process is to
find the optimal gray values $\bm{g}$ such that the mean square error of the
reconstructed image is minimal. Indeed, for a fixed mask $\bm{c}$, the
inpainting is given by \cref{eq:SPH_mod_kernel}. However, since the adaptive
smoothing length $h_{j}$ depends not only on the mask point $\bm{p}_{j}$ but
also on the pixel $\bm{q}$ currently being inpainted, we first have to perform
one inpainting with the final mask to determine all necessary smoothing
lengths. Once this is done, \cref{eq:SPH_mod_kernel} can be written as a matrix
vector multiplication of the form $\bm{u} = \bm{A} \widetilde{\bm{f}}$ where
$\bm{u}$ is a vector containing the values of the reconstruction at every
pixel, $\bm{A}$ is a matrix containing the values of all modified kernels at
all pixels multiplied with their area of influence $V_{j}$, and
$\widetilde{\bm{f}}$ is a vector containing the values of the original image at
all mask points $\bm{p}_{j}$. Thus, as $\bm{A}$ is fixed now, $\bm{u}$ can be
interpreted as the solution to an interpolation problem at the mask points.
With this interpretation, it is straightforward to consider the corresponding
least-squares problem. With the above form for $\bm{u}$, it can be written as
\begin{equation}
\min\limits_{\bm{g}} \left\lVert \bm{A}\bm{g} - \bm{f} \right\rVert^{2},
\end{equation}
where $\bm{f}$ denotes the vector with the original image values at all pixels
and $\bm{g}$ is a vector with gray values at the mask points, which can be
determined by solving the normal equations
\begin{equation}
\bm{A}^{T} \bm{A} \bm{g} = \bm{A}^{T} \bm{f}.
\end{equation}
Although the least-squares problem can be solved directly, we prefer an
iterative solver, specifically the conjugate gradient on the normal residual
(CGNR) method \cite{Sa03}. This variant of conjugate gradients avoids the
explicit computation of $\bm{A}^{T} \bm{A}$, which reduces runtime and
mitigates the effects of the larger condition number of $\bm{A}^{T} \bm{A}$
compared to $\bm{A}$. We always initialize $\bm{g}_{0}$ as the zero vector. As
stopping criterion we require the relative residual to satisfy
\begin{equation}
\frac{\left\lVert \bm{A}^{T} \bm{f} - \bm{A}^{T} \bm{A} \bm{g}_{k}
\right\rVert}{\left\lVert \bm{A}^{T} \bm{f}\right\rVert} \leq 10^{-8}.
\end{equation}
The above procedure is straightforward for the zero and first order consistency
methods, as they use the same kind of modified kernel in every pixel. It stays
valid for the mixed order consistency method because there, the kernel used at
each pixel $\bm{q}$ is of the same type for all mask points $\bm{p}_{j}$
contributing to the inpainting at $\bm{q}$. Thus, for the mixed consistency
method, the type of the modified kernel changes with the rows of $\bm{A}$, but
stays the same within each row over all columns. We can still write the whole
inpainting process as a matrix-vector multiplication and solve the associated
least-squares problem to perform tonal optimization.
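A sketch of the CGNR iteration with this initialization and stopping criterion; a dense NumPy array stands in for the actual inpainting matrix $\bm{A}$:

```python
import numpy as np

def cgnr(A, f, tol=1e-8, max_iter=500):
    """Conjugate gradient on the normal residual (CGNR).

    Solves min_g ||A g - f||^2 without forming A^T A explicitly."""
    g = np.zeros(A.shape[1])            # g_0 = 0
    r = f - A @ g                       # residual in image space
    z = A.T @ r                         # normal residual A^T (f - A g)
    p = z.copy()
    b_norm = np.linalg.norm(A.T @ f)
    for _ in range(max_iter):
        w = A @ p
        alpha = (z @ z) / (w @ w)
        g += alpha * p
        r -= alpha * w
        z_new = A.T @ r
        # Stopping criterion: relative normal residual below tol.
        if np.linalg.norm(z_new) <= tol * b_norm:
            break
        beta = (z_new @ z_new) / (z @ z)
        p = z_new + beta * p
        z = z_new
    return g
```

For the actual image sizes, $\bm{A}$ would be stored as a sparse matrix, but the iteration itself is unchanged.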
\subsection{Inpainting on Spatially and Tonally Optimized Data with Isotropic
Kernels}
\label{sec:inp_isotropic}
As an example for inpainting on an optimized mask with zero order consistency,
we present the results produced with a Gaussian kernel in
\cref{fig:zero_dense_5}. Even without tonal optimization, the MSE improves by
roughly a factor $6.5$. With tonal optimization, the MSE improves by a factor
$10$ with respect to the random mask and by almost $35 \%$ with respect to the
result on the spatially optimized mask without tonal optimization. As far as
spatial optimization is concerned, the densification process prefers to capture
the geometry of the image by adding more mask points near edges than in
rather homogeneous regions of the image.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_mask_dens_D005_gauss_zero}
\caption{Optimized 5 \% mask}
\label{fig:5_mask}
\end{subfigure}
\quad
\begin{subfigure}{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_dens_D005_gauss_zero}
\caption{$\textrm{MSE} = 30.65$}
\label{fig:zero_reco_5}
\end{subfigure}
\quad
\begin{subfigure}{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_dens_D005_gauss_zero_to}
\caption{$\textrm{MSE} = 19.62$}
\end{subfigure}
\caption{Inpainting of ``trui'' with spatially and tonally
optimized mask with a zero order consistency method with an isotropic
Gaussian kernel.
We show the 5 \% optimized zero order consistency mask (\textbf{a}), zero
order consistency inpainting result with this mask without tonal optimization
(\textbf{b}), and zero order consistency inpainting result on this mask with
tonal optimization (\textbf{c}).
Runtimes were 89.09 min for densification and 2.59 min for
tonal optimization.}
\label{fig:zero_dense_5}
\end{figure}
Changing the SPH inpainting method from zero order consistency to the mixed
order consistency method improves the result even further as can be seen in
\cref{fig:mixed_dense_5}. Using the same isotropic Gaussian kernel as before,
spatial optimization improves the MSE by almost a factor $8$ compared to the
random mask result in \cref{fig:zero_reco_mixed_5_gaussian}. A comparison with
respect to the inpainting method instead of with respect to the mask shows
improvement of almost a factor $2$ with respect to the MSE compared to the
results in \cref{fig:zero_dense_5}. Unfortunately, as can be seen in
\cref{fig:consistency_map}, there seems to be no discernible structure in the
distribution of pixels for which the zero order or the first order consistency
method performs better.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_mask_dens_D005_gauss_mix}
\caption{Optimized 5 \% mask}
\label{fig:5_mixed_isotropic_mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_consmap_dens_D005_gauss_mix}
\caption{Mixed order consistency map}
\label{fig:consistency_map}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_dens_D005_gauss_mix}
\caption{$\textrm{MSE} = 16.30$}
\label{fig:mixed_isotropic_reco_5}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_dens_D005_gauss_mix_to}
\caption{$\textrm{MSE} = 11.68$}
\end{subfigure}
\caption{Inpainting of ``trui'' with spatially and tonally
optimized mask with a mixed order consistency method with an isotropic
Gaussian kernel. (\textbf{a}) shows an optimized 5 \% mask for mixed order
consistency SPH inpainting. (\textbf{b}) shows a mixed order consistency map
with white areas denoting first order consistency reconstruction and black
areas denoting zero order consistency reconstruction. (\textbf{c}) shows a
mixed order consistency inpainting result on the mask from (\textbf{a})
without tonal optimization. (\textbf{d}) shows a mixed order consistency
inpainting on the same mask with tonal optimization.
Runtimes were 111.28 min for densification and 2.66 min for tonal
optimization.}
\label{fig:mixed_dense_5}
\end{figure}
\Cref{table:2} summarizes MSEs of inpainting results for
``trui'' with the other isotropic kernels used in \cref{fig:random_5} if
these kernels are equipped with optimized masks containing $5 \%$ of all pixels
and tonal optimization is performed.
\begin{table}[htb]
\begin{tabular}{lcc}
\toprule
Kernel & Zero order consistency & Mixed order consistency \\
\midrule
Gaussian & 19.62 & 11.68 \\
$C^{0}$-Mat\'{e}rn & 18.57 & \textbf{9.95} \\
$C^{2}$-Mat\'{e}rn & \textbf{17.93} & 9.99 \\
Lucy & 21.83 & 12.15 \\
cubic spline & 21.19 & 11.71 \\
$C^{4}$-Wendland & 23.33 & 11.83 \\
\bottomrule
\end{tabular}
\caption[MSE comparison between zero and mixed order consistency optimized
data inpainting with isotropic kernels of ``trui'']{MSE
comparison between zero and mixed order consistency optimized inpainting
with isotropic kernels on ``trui'' for 5 \% masks.}
\label{table:2}
\end{table}
Once again, the benefits of mixed order consistency are quite substantial
since a significant decrease of the MSE has been achieved in all cases compared
to zero order consistency. Even the best performing kernel for the zero order
consistency method, the $C^{2}$-Mat\'{e}rn kernel, has an MSE which is
approximately 50 \% larger than that of the worst performing kernel in the
mixed consistency setting, and almost double that of the best performing mixed
consistency kernel, the $C^{0}$-Mat\'{e}rn.
Further, it appears that compactly supported kernels perform worse in
this setting for the zero order consistency SPH inpainting than truncated
kernels, but are competitive in the mixed order consistency method.
\subsection{Optimized Inpainting with Anisotropic Kernels}
\label{sec:opt_inp_aniso_kernel}
The observation that optimized mask points tend to cluster around edges and the
fact that edges are clearly oriented structures suggest adapting the support of
kernels to account for this by incorporating anisotropy. This is further
supported by results which show that incorporating anisotropy in other
inpainting strategies can improve reconstruction quality compared to the
related isotropic method \cite{SP14}.
For SPH, anisotropic kernels have been used in the so-called Adaptive Smoothed
Particle Hydrodynamics (ASPH) formulation \cite{SM96} to better account for the
actual distribution of particles. Here, we replace the smoothing length $h$ by
a symmetric positive definite tensor $\bm{G} \in \mathbb{R}^{2 \times 2}$ for a
two-dimensional problem and redefine $\bm{\eta}$ as
\begin{equation}
\label{eq:after_new}
\bm{\eta} = \bm{G} \left(\bm{q}-\bm{p}\right).
\end{equation}
$\bm{G}$ has units of inverse length and in the isotropic case it is given by a
diagonal matrix with each diagonal element equal to $\frac{1}{h}$. This
observation makes it clear that we also have to adapt the normalization of our
kernels from a factor $\frac{\rho}{h^{2}}$ in \cref{eq:kernel_rbf} to a factor
$\rho\, \det(\bm{G})$.
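As an illustration of \cref{eq:after_new} and the adapted normalization, an anisotropic kernel evaluation may be sketched as follows; the isotropic kernel profile $W$ is a placeholder:

```python
import numpy as np

def anisotropic_kernel(q, p, G, W, rho=1.0):
    """Evaluate an anisotropic SPH kernel at pixel q for mask point p.

    eta = G (q - p) replaces (q - p) / h, and the normalization
    rho / h^2 becomes rho * det(G); W is a placeholder for an
    isotropic kernel profile W(|eta|)."""
    eta = G @ (np.asarray(q, float) - np.asarray(p, float))
    return rho * np.linalg.det(G) * W(np.linalg.norm(eta))
```

Setting $\bm{G} = \frac{1}{h}\bm{I}$ recovers the isotropic evaluation with smoothing length $h$.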
For SPH inpainting, we determine the anisotropy from the distribution of mask
points. For this purpose, we follow the approach of \cite{YT13} by constructing
a weighted local covariance matrix $\bm{C}$ within a fixed predetermined window
around each mask point. For a known mask point $\bm{p}_{j}$, the covariance
matrix is given by
\begin{equation}
\bm{C}_{j} =
\frac{\sum\limits_{\ell}
w_{j,\ell} \left(\bm{p}_{\ell}-\widetilde{\bm{p}}_{j}\right)
\left(\bm{p}_{\ell} - \widetilde{\bm{p}}_{j} \right)^{T}}{
\sum\limits_{\ell} w_{j,\ell}},
\qquad \text{ with } \qquad
\widetilde{\bm{p}}_{j} =
\frac{\sum\limits_{\ell} w_{j,\ell}\, \bm{p}_{\ell}}{\sum\limits_{\ell}
w_{j,\ell}}.
\end{equation}
Here, $\ell$ numbers the mask points within a neighborhood of $\bm{p}_{j}$. It
is necessary to restrict the set of mask points under consideration to such a
neighborhood to capture the locally prevalent direction of structures in the
image. Next, we perform a singular value decomposition (SVD). As $\bm{C}_{j}$ is
symmetric and positive semidefinite by construction, this is the same as the
eigenvalue decomposition
\begin{equation}
\bm{C}_{j} = \bm{Q} \bm{D} \bm{Q}^{T},
\end{equation}
with a rotation matrix $\bm{Q}$ and a matrix $\bm{D}$ with nonnegative
eigenvalues along the diagonal in decreasing order. As $\bm{C}_{j}$ is
constructed from the positions of mask points, its eigenvalues can be assigned a
unit of length. The eigenvectors in $\bm{Q}$ correspond to the directions of the
major and minor axes of an ellipse whose orientation is in line with the locally
prevalent orientation in the distribution of mask points. Hence, the tensor
$\bm{G}$ is given by
\begin{equation}
\bm{G} = \bm{Q} \bm{D}^{-1} \bm{Q}^{T},
\end{equation}
such that it has units of inverse length as desired.
In the context of kernel regression, the matrix $\bm{C}$ that
we have introduced above is related to the so-called ``steering matrix'' of an
anisotropic regression kernel \cite{TF07}.
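The construction of $\bm{G}$ from a local neighborhood of mask points can be sketched as follows; the weights $w_{j,\ell}$ are assumed given, and all eigenvalues are assumed strictly positive:

```python
import numpy as np

def anisotropy_tensor(neighbors, weights):
    """Tensor G = Q D^{-1} Q^T from the weighted local covariance of
    neighboring mask point positions (sketch; weights assumed given)."""
    P = np.asarray(neighbors, dtype=float)            # (m, 2) positions
    w = np.asarray(weights, dtype=float)
    p_tilde = (w[:, None] * P).sum(axis=0) / w.sum()  # weighted mean
    d = P - p_tilde
    C = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / w.sum()
    # C is symmetric positive semidefinite, so its SVD coincides with
    # the eigendecomposition C = Q D Q^T.
    evals, Q = np.linalg.eigh(C)
    return Q @ np.diag(1.0 / evals) @ Q.T             # assumes evals > 0
```

In the actual method, nearly degenerate eigenvalues would additionally be regularized, and the kernel would fall back to the isotropic case when too few neighbors are available.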
In our experiments, we incorporate anisotropy after spatially optimizing mask
points for isotropic kernels. We fix the window size for construction of
covariance matrices to $25 \times 25$ pixels and demand a minimum number of
15 mask points within that window. If this minimum number of mask points is
not satisfied, the corresponding kernel stays isotropic. This behavior is
desirable since the densification process results in masks where the majority
of mask points are placed near discontinuities rather than in homogeneous areas
of the image. Thus, a low local density of mask points implies homogeneous
areas of the image. The results achieved with mixed order consistency and an
anisotropic Gaussian kernel are depicted in
\cref{fig:mixed_anisotropic_dense_5}.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_mask_dens_D005_an_gauss_mix}
\caption{Optimized 5 \% mask}
\label{fig:5_mixed_anisotropic_mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_mask_dens_D005_only_an_gauss_mix}
\caption{Anisotropic mask points of the 5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_dens_D005_an_gauss_mix_to}
\caption{$\textrm{MSE} = 10.49$}
\end{subfigure}
\caption{Inpainting of ``trui'' with spatially and tonally
optimized mask with a mixed order consistency method with an anisotropic
Gaussian kernel.
From left to right: Optimized 5 \% mask for mixed order consistency method
(\textbf{a}), mask points which incorporate anisotropic kernels in white
(\textbf{b}), and mixed order consistency inpainting result with given mask
with tonal optimization (\textbf{c}).
Runtimes were 129.44 min for densification and 2.51 min for
tonal optimization.}
\label{fig:mixed_anisotropic_dense_5}
\end{figure}
Compared to the result in \cref{fig:mixed_dense_5},
we observe an MSE improvement of only 1.19, which is
roughly 11 \%, even though a large number of mask
points is now equipped with anisotropic kernels. This relatively moderate
improvement may be explained by the fact that our method of determining
anisotropy relies on the local spatial distribution of mask points whereas many
important structures in the given image live on a mesoscale. This behavior
cannot be captured by increasing the size of the search window as the covariance
matrix becomes prone to incorporating the orientations of neighboring
structures, resulting in a more isotropic behavior instead of a better
orientation along the mesoscale structures. \Cref{table:anisotropy} summarizes
MSEs of inpainting results for ``trui'' with the other anisotropic kernels used
if these kernels are equipped with optimized masks containing $5 \%$ of all
pixels and tonal optimization is performed.
\begin{table}[htb]
\begin{tabular}{lr}
\toprule
Kernel & MSE \\
\midrule
Gaussian & 10.49 \\
$C^{0}$-Mat\'{e}rn & \textbf{9.51} \\
$C^{2}$-Mat\'{e}rn & 9.81 \\
Lucy & 11.57 \\
cubic spline & 11.26 \\
$C^{4}$-Wendland & 11.95 \\
\bottomrule
\end{tabular}
\caption[MSE with mixed order consistency optimized data inpainting with
anisotropic kernels of ``trui'']{MSE with mixed order
consistency optimized inpainting with anisotropic kernels on ``trui''
for 5 \% masks.}
\label{table:anisotropy}
\end{table}
\subsection{Performance Compared to Diffusion-based and Exemplar-based
Inpainting Methods}
\label{sec:comparisons}
To assess the performance of SPH inpainting, we combine our
implementations of harmonic and biharmonic inpainting with our Voronoi-based
densification strategy and a tonal optimization approach similar in spirit to
\cref{sec:tonal_opt}. The results achieved by these two inpainting methods
are depicted in \cref{fig:trui_diff_opt} together with the corresponding
MSEs.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_dens_D005_harm_to}
\caption{Harmonic\\
$\textrm{MSE} = 20.18$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_dens_D005_biharm_to}
\caption{Biharmonic\\
$\textrm{MSE} = 15.00$}
\end{subfigure}
\caption{Inpainting of ``trui'' with spatially and tonally
optimized 5 \% mask for harmonic and biharmonic inpainting.}
\label{fig:trui_diff_opt}
\end{figure}
As we can see from \cref{table:2}, SPH inpainting with mixed
order consistency performs better than these two diffusion-based methods even
if we consider only isotropic kernels. \Cref{table:anisotropy} shows that we can obtain
an MSE which is less than half that of harmonic inpainting if we incorporate
anisotropy.
As another example, consider the ``parrots'' image from
\cref{fig:parrots}. \Cref{fig:parrots_SPH_opt} shows the results obtained by
Voronoi densification for a 5 \% mask for mixed order consistency SPH
inpainting
with isotropic and anisotropic Gaussian kernels, including tonal optimization.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_mask_dens_D005_gauss_mix}
\caption{Isotropic mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_dens_D005_gauss_mix_to}
\caption{Isotropic, mixed order\\
$\textrm{MSE} = 10.07$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_mask_dens_D005_an_gauss_mix}
\caption{Anisotropic mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_dens_D005_an_gauss_mix_to}
\caption{Anisotropic, mixed order\\
$\textrm{MSE} = 8.51$}
\end{subfigure}
\caption[SPH inpainting of ``parrots'' with spatially and tonally optimized
mask]{Inpainting of ``parrots'' with a 5 \% spatially and
tonally optimized mask with a mixed order consistency method and Gaussian
kernels. Left column shows the masks. Right column shows the inpaintings.
Top row is the isotropic case. Bottom row is the anisotropic case.}
\label{fig:parrots_SPH_opt}
\end{figure}
The corresponding results achieved by harmonic and biharmonic
inpainting are shown in \cref{fig:parrots_diff_opt}.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_dens_D005_harm_to}
\caption{Harmonic\\
$\textrm{MSE} = 16.44$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_dens_D005_biharm_to}
\caption{Biharmonic\\
$\textrm{MSE} = 15.32$}
\end{subfigure}
\caption[Inpainting of ``parrots'' with spatially and tonally optimized
mask]{Inpainting of ``parrots'' with a 5 \% spatially and
tonally optimized mask with harmonic (\textbf{a}) and biharmonic
(\textbf{b}) inpainting.}
\label{fig:parrots_diff_opt}
\end{figure}
Already the isotropic variant of SPH inpainting reduces the MSE of the
diffusion-based methods by at least 34 \%, whereas the anisotropic version
achieves a reduction of at least 44 \%. Further comparisons between SPH
inpainting and diffusion-based strategies when equipped with Voronoi-based
densification are included in the supplement.
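The quoted percentages are relative MSE reductions and can be checked directly from the values reported in \cref{fig:parrots_SPH_opt} and \cref{fig:parrots_diff_opt}; a quick sketch (the helper function is ours):

```python
def reduction(reference, ours):
    """Relative MSE reduction of `ours` with respect to `reference`, in percent."""
    return 100.0 * (reference - ours) / reference

# "parrots", 5 % mask: SPH vs. the better diffusion method (biharmonic, 15.32).
print(round(reduction(15.32, 10.07), 1))  # isotropic SPH:   34.3
print(round(reduction(15.32, 8.51), 1))   # anisotropic SPH: 44.5
```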
To assess the performance of our inpainting method further, we compare it
with other methods from the literature. As a first example, we consider
results for harmonic inpainting on ``trui'' with an optimized 5 \% mask
from \cite{MH11}. There, spatial optimization was done with probabilistic
sparsification and further improved with a Nonlocal Pixel Exchange (NLPE).
With optimally chosen mask points and gray values, harmonic inpainting
shows an impressive quality in reconstructing the original image.
We compare these results with those we obtained for mixed order
consistency SPH inpainting with $C^{0}$-Mat\'{e}rn kernels.
For a fair comparison, we consider only isotropic kernels, since harmonic
inpainting has no way to incorporate anisotropy. The results are summarized
in \cref{table:4}.
\begin{table}[htb]
\begin{tabular}{lcc}
Method & Spatially Optimized & Spatially \& Tonally Optimized \\ \midrule
Harmonic Inpainting & 23.21 (with NLPE) & 17.17 (with NLPE) \\
Mixed SPH (Isotropic) & \textbf{13.88} & \textbf{9.95}
\end{tabular}
\caption[MSE of 5 \% ``trui'']{MSE for inpaintings of ``trui'' on optimized
5 \% masks. Compared are results achieved with harmonic inpainting
in \cite{MH11} and results from our method with an
isotropic $C^{0}$-Mat\'{e}rn kernel with mixed order consistency.}
\label{table:4}
\end{table}
Evidently, we can outperform harmonic inpainting, both without and with tonal
optimization.
Further results in \cite{MH11} report MSEs for spatially and
tonally optimized harmonic inpainting on the images ``peppers'' and ``walter''.
We include these images together with results obtained with mixed order
consistency SPH inpainting with isotropic Gaussians in
\cref{fig:peppers_walter_opt_mixed_iso_gauss}. The MSEs are reported together
with those from \cite{MH11} in \cref{table:MSE_peppers_walter_mainb}. Again,
the results obtained with our method are 16 \% and 37 \% better, respectively.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/peppers}
\caption{Original image ``peppers''}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/peppers_dens_D005_gauss_mix_to}
\caption{Inpainted ``peppers''}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/walter}
\caption{Original image ``walter''}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/walter_dens_D005_gauss_mix_to}
\caption{Inpainted ``walter''}
\end{subfigure}
\caption{Images ``peppers'' and ``walter'' (left column) and
mixed order consistency SPH inpaintings with isotropic Gaussian kernels
on spatially and tonally optimized 5 \% masks (right column).}
\label{fig:peppers_walter_opt_mixed_iso_gauss}
\end{figure}
\begin{table}[htb]
\begin{tabular}{lcc}
Image & Harmonic Inpainting & Mixed SPH (Isotropic) \\ \midrule
``peppers'' & 19.38 & \textbf{16.29} \\
``walter'' & 8.14 & \textbf{5.15}
\end{tabular}
\caption[MSE of 5 \% ``peppers'' and ``walter'']{MSE for
inpaintings on optimized 5 \% masks including tonal optimization for images
``peppers'' and ``walter''. Compared are results achieved with harmonic
inpainting including NLPE from \cite{MH11} and results from our method with
an isotropic Gaussian kernel with mixed order consistency.}
\label{table:MSE_peppers_walter_mainb}
\end{table}
To evaluate SPH inpainting with anisotropic kernels, we consider the results
from \cite{HM17} achieved with edge-enhancing diffusion (EED) inpainting for
``trui'' with a mask of density 4 \% that is constructed by probabilistic
sparsification. The authors report the MSE of inpainting on this mask without
tonal optimization and improve the location of mask points further with NLPE
before considering tonal optimization. As a competitor, we use mixed order
consistency SPH inpainting with anisotropic $C^{0}$-Mat\'{e}rn kernels on an
optimized 4 \% mask. We also tried to improve our inpaintings with NLPE;
however, the reduction in MSE was negligible. Results for both inpainting
methods are summarized in \cref{table:3}.
\begin{table}[htb]
\begin{tabular}{lcc}
Method & Spatially Optimized & Spatially \& Tonally Optimized \\ \midrule
Edge-Enhancing Diffusion (EED) & 24.20 & \textbf{10.79} (with NLPE) \\
Mixed SPH (Anisotropic) & \textbf{17.10} & 12.28
\end{tabular}
\caption[MSE of 4 \% ``trui'']{MSE for inpaintings of ``trui'' on optimized
4 \% masks. Compared are results from EED inpainting \cite{HM17}
and our method with an anisotropic $C^{0}$-Mat\'{e}rn kernel with mixed
order consistency.}
\label{table:3}
\end{table}
As can be seen, we outperform EED if we only incorporate spatial optimization,
but not tonal optimization. By construction, EED should perform better at
preserving edges \cite{We98}. Thus, we conjecture that probabilistic
sparsification, which relies only on pointwise errors, is inferior to our
Voronoi-based densification method as long as the former is not improved by a
subsequent NLPE.
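The Voronoi-based densification can be sketched as follows. This is a simplified illustration, not our actual implementation: `inpaint` is a placeholder for any inpainting operator (here a toy nearest-neighbor fill rather than SPH), and the discrete Voronoi assignment is computed naively.

```python
import numpy as np

def voronoi_densify(image, inpaint, n_points, n_init=4, rng=None):
    """Greedy mask densification: inpaint, locate the discrete Voronoi cell
    with the largest summed error, and add that cell's worst pixel to the
    mask. `inpaint(data, mask)` stands in for any inpainting operator."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for _ in range(n_init):                      # a few random seed points
        mask[rng.integers(h), rng.integers(w)] = True
    while mask.sum() < n_points:
        err = (image - inpaint(image, mask)) ** 2
        py, px = np.nonzero(mask)
        # Assign every pixel to its nearest mask point (naive O(N*K) Voronoi).
        cell = ((ys[..., None] - py) ** 2 + (xs[..., None] - px) ** 2).argmin(-1)
        worst = np.bincount(cell.ravel(), weights=err.ravel()).argmax()
        candidates = (cell == worst) & ~mask     # regional error decides where,
        if not candidates.any():                 # pointwise error decides which
            candidates = ~mask
        flat = np.where(candidates.ravel(), err.ravel(), -1.0)
        mask.ravel()[flat.argmax()] = True
    return mask

# Toy stand-in for the inpainting operator: copy the nearest mask value.
def nn_inpaint(data, mask):
    h, w = data.shape
    ys, xs = np.mgrid[0:h, 0:w]
    py, px = np.nonzero(mask)
    idx = ((ys[..., None] - py) ** 2 + (xs[..., None] - px) ** 2).argmin(-1)
    return data[py[idx], px[idx]]

img = np.arange(64, dtype=float).reshape(8, 8)
mask = voronoi_densify(img, nn_inpaint, 12, rng=0)
print(mask.sum())  # 12 mask points
```

The key design choice is visible in the loop: the summed error over a Voronoi cell decides the region in which a point is added, while the pointwise error only decides the exact pixel within that region.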
To evaluate the performance of our method for images rich in texture, we
consider the exemplar-based inpainting technique from \cite{KB18} and the
results given there for an inpainting of a gray value version of the ``baboon''
image. In this setting, the authors report an MSE of 518.52 on a mask
constructed with ``densification by dithering'' and a subsequent NLPE. As tonal
optimization is not considered in \cite{KB18}, we compare with the results
achieved by our method for mixed order consistency inpainting with isotropic
Gaussians on an optimized 10 \% mask in \cref{fig:baboon}.
Already without tonal optimization, the MSE is 290.64, which
means we outperform the exemplar-based inpainting method by almost 44 \%.
Including tonal optimization improves the result further to an MSE of 223.37,
less than half of the MSE the exemplar-based method could achieve.
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/baboon}
\caption{Image ``baboon''}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/baboon_mask_dens_D010_gauss_mix}
\caption{Optimized 10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/baboon_dens_D010_gauss_mix}
\caption{$\textrm{MSE} = 290.64$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/baboon_dens_D010_gauss_mix_to}
\caption{$\textrm{MSE} = 223.37$}
\end{subfigure}
\caption{Inpainting of ``baboon'' (rescaled to $256 \times 256$ pixels as in
\cite{KB18}) with a spatially optimized mask for a mixed order consistency
method with an isotropic Gaussian kernel.
We show the original image (\textbf{a}), an optimized
10 \% mask for the mixed order consistency method (\textbf{b}), the mixed
order consistency inpainting result on this mask without tonal optimization
(\textbf{c}), and the corresponding result with tonal optimization
(\textbf{d}).
Runtimes were 244.18 min for densification and 5.40 min for tonal
optimization.}
\label{fig:baboon}
\end{figure}
For the sake of completeness, we also consider the results on
``trui'' reported for a 10 \% mask in \cite{KB18}. Here, the exemplar-based
approach achieves an MSE of 12.99 with spatial optimization including NLPE.
Our mixed consistency SPH inpainting equipped with isotropic Gaussian kernels
and a spatially optimized 10 \% mask can inpaint ``trui'' with an MSE of 6.65,
which translates to an improvement of 49 \%. Tonal optimization decreases the
MSE further to 4.68, an improvement of 64 \% compared to the exemplar-based
method.
\section{Conclusions and Outlook}
\label{sec:conclusions}
We have shown that smoothed particle hydrodynamics is a highly
competitive method for the challenging problem of sparse data inpainting. It
can produce results on par with or even better than those of more established
PDE- or exemplar-based inpainting strategies.
The success of SPH for sparse inpainting relies on several novel modifications.
With regard to the interpolation procedure, we presented a way to combine the
strengths of zero and first order methods into a mixed order method. Moreover,
we presented a better approach to choosing method parameters based on Voronoi
tessellations.
The main ingredient to reveal the potential of SPH is the use of optimally
chosen data. We proposed a new densification process based on Voronoi
tessellations, which leads naturally to a strategy based on a regional error
instead of a purely local pointwise error. Thus, a larger amount of data is
considered in the optimization procedure, yielding more suitable inpainting
masks. Furthermore, we introduced a so far unexplored formulation that allows
us to use SPH for least-squares approximation, i.e., the optimization of data
not only in the spatial, but also in the tonal domain.
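Tonal optimization as used here is a linear least-squares problem: for a fixed mask, the reconstruction depends linearly on the stored gray values. A minimal dense sketch (the explicit operator construction and the nearest-neighbor stand-in are for illustration only; realistic implementations are matrix-free):

```python
import numpy as np

def tonal_optimize(image, mask, inpaint):
    """Least-squares optimal gray values at the mask points, assuming the
    inpainting operator `inpaint(data, mask)` is linear in the data."""
    py, px = np.nonzero(mask)
    k = len(py)
    # Columns of the linear operator: inpaint one unit impulse per mask point.
    B = np.empty((image.size, k))
    for j in range(k):
        impulse = np.zeros_like(image)
        impulse[py[j], px[j]] = 1.0
        B[:, j] = inpaint(impulse, mask).ravel()
    # Solve min_c || B c - f ||_2 for the stored gray values c.
    c, *_ = np.linalg.lstsq(B, image.ravel(), rcond=None)
    return c

# Toy linear inpainting operator: copy the value of the nearest mask point.
def nn_inpaint(data, mask):
    h, w = data.shape
    ys, xs = np.mgrid[0:h, 0:w]
    py, px = np.nonzero(mask)
    idx = ((ys[..., None] - py) ** 2 + (xs[..., None] - px) ** 2).argmin(-1)
    return data[py[idx], px[idx]]

img = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = mask[3, 3] = True
c = tonal_optimize(img, mask, nn_inpaint)
print(c)  # for nearest-neighbor inpainting: the mean of img over each cell
```

For the toy nearest-neighbor operator, the optimal gray values are the mean of the image over each Voronoi cell, which can already differ noticeably from the original pixel values at the mask points.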
What remains an ongoing topic of research is the question how
to choose anisotropies. Taking into account the superior performance of EED, it
seems natural to determine anisotropies based on gradient data or the structure
tensor. While this is straightforward for regular masks, it becomes more of
a challenge for randomly distributed or optimized mask points. In the context
of data optimization, one could think about computing anisotropies from the
original image. However, for compression purposes, storing this additional data
would reduce the rate of compression or necessitate to consider sparser mask,
such that there is overall less data to store. On the other hand, when it comes
to non-sparse inpainting tasks as briefly touched on in \cref{sec:scratch},
anisotropies could be determined from the known parts of the image, e.g. by
considering gradients similar to \cite{TF07}. An alternative used for object
removal in \cite{HH20} is to detect edges and their orientation to determine
anisotropies from these structures. Both strategies look promising to us when
it comes to improving the performance of SPH regarding classical inpainting
problems in future research.
We hope that our work will help to give SPH-based inpainting the attention
that it deserves. Moreover, we believe that some of our novel concepts,
e.g.~the Voronoi-based densification for data optimization, will also
be useful in applications beyond SPH-based inpainting.
\section*{Acknowledgments}
While working on this article, Matthias Augustin and Joachim Wei\-ckert
have received funding from the European Research Council (ERC)
under the European Union's Horizon 2020 research and innovation programme
(grant agreement no. 741215, ERC Advanced Grant INCOVID).
We thank Vassillen Chizhov for his support regarding programs
with spatial and tonal optimization for harmonic and biharmonic inpainting.
\bibliographystyle{siamplain}
\section{Inpainting on Regular and Random Masks}
In our first batch of examples, we compare the performance of
SPH inpainting to that of diffusion- and exemplar-based inpainting for a couple
of regular masks. For the test images of size $256 \times 256$, we consider the
6.25 \% mask (grid size 4 pixels) from the main article, but also a mask of
density 1.5625 \% (grid size of 8 pixels) and of density 25 \% (grid size of
2 pixels).
For the images from the Kodak database, the same grid sizes result in
densities of 1.04 \%, 4.16 \%, and 16.66 \%, respectively. For convenience, we
include all six masks in \cref{fig:reg_masks} as well as some of the results
already presented for ``trui'' and ``parrots''.
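For the $256 \times 256$ test images, the quoted densities correspond to one mask point per $g \times g$ pixel block:

```python
# One mask point per g x g block of a 256 x 256 image gives density 1/g^2.
for g in (8, 4, 2):
    print(f"grid size {g}: {100 / g ** 2:.4f} %")
```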
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_reg_256_256_D0015625}
\caption{1.5625 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask_reg_256_256_D00625}
\caption{6.25 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_reg_256_256_D025}
\caption{25 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_reg_384_256_D00104}
\caption{1.04 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask_reg_384_256_D00416}
\caption{4.16 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_reg_384_256_D01616}
\caption{16.66 \% mask}
\end{subfigure}
\caption[Regular masks]{Regular masks of different densities
for images of size $256 \times 256$ (top row) and size $384 \times 256$
(bottom row).}
\label{fig:reg_masks}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_reg_D0015625_gauss_zero}
\caption{$\textrm{MSE} = 282.58$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_gauss_zero}
\caption{$\textrm{MSE} = 83.28$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_reg_D025_gauss_zero}
\caption{$\textrm{MSE} = 18.61$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_reg_D0015625_gauss_first}
\caption{$\textrm{MSE} = 288.33$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_gauss_first}
\caption{$\textrm{MSE} = 85.01$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_reg_D025_gauss_first}
\caption{$\textrm{MSE} = 18.47$}
\end{subfigure}
\caption[SPH inpainting of ``trui'' with regular masks]{
Inpainting of
``trui'' for regular masks with isotropic Gaussian kernels.
Densities from left to right are 1.5625 \%, 6.25 \%, and 25 \%.
Top row: Zero order consistency SPH inpainting.
Bottom row: First order consistency SPH inpainting.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_reg_D0015625_harm}
\caption{$\textrm{MSE} = 400.14$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_harm}
\caption{$\textrm{MSE} = 121.96$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_reg_D025_harm}
\caption{$\textrm{MSE} = 23.29$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_reg_D0015625_biharm}
\caption{$\textrm{MSE} = 270.70$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_reg_D00625_biharm}
\caption{$\textrm{MSE} = 67.95$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_reg_D025_biharm}
\caption{$\textrm{MSE} = 11.10$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/trui_reg_D0015625_eed_lambda_0_3_sigma_1_3}
\caption{$\textrm{MSE} = 265.70$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_reg_D00625_eed_lambda_0_2_sigma_0_8}
\caption{$\textrm{MSE} = 60.68$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/trui_reg_D025_eed_lambda_0_3_sigma_0_6}
\caption{$\textrm{MSE} = 10.56$}
\end{subfigure}
\caption[Diffusion inpainting of ``trui'' with regular masks]{
Inpainting of
``trui'' for regular masks. Densities from left to right are
1.5625 \%, 6.25 \%, and 25 \%.
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.3$ and $\sigma=1.3$, $\lambda=0.2$ and $\sigma=0.8$, and
$\lambda=0.3$ and $\sigma=0.6$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_reg_D0015625_gauss_zero}
\caption{$\textrm{MSE} = 261.05$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D00625_gauss_zero}
\caption{$\textrm{MSE} = 95.31$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D025_gauss_zero}
\caption{$\textrm{MSE} = 33.81$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_reg_D0015625_gauss_first}
\caption{$\textrm{MSE} = 264.02$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D00625_gauss_first}
\caption{$\textrm{MSE} = 95.39$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D025_gauss_first}
\caption{$\textrm{MSE} = 34.19$}
\end{subfigure}
\caption[SPH inpainting of ``peppers'' with regular masks]{
Inpainting of
``peppers'' for regular masks with isotropic Gaussian kernels.
Densities from left to right are 1.5625 \%, 6.25 \%, and 25 \%.
Top row: Zero order consistency SPH inpainting.
Bottom row: First order consistency SPH inpainting.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D0015625_harm}
\caption{$\textrm{MSE} = 381.26$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D00625_harm}
\caption{$\textrm{MSE} = 127.12$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D025_harm}
\caption{$\textrm{MSE} = 37.00$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D0015625_biharm}
\caption{$\textrm{MSE} = 244.80$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D00625_biharm}
\caption{$\textrm{MSE} = 86.40$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_reg_D025_biharm}
\caption{$\textrm{MSE} = 30.01$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_reg_D0015625_eed_lambda_0_3_sigma_1_9}
\caption{$\textrm{MSE} = 216.62$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_reg_D00625_eed_lambda_0_3_sigma_1_8}
\caption{$\textrm{MSE} = 72.63$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_reg_D025_eed_lambda_0_6_sigma_1_3}
\caption{$\textrm{MSE} = 25.94$}
\end{subfigure}
\caption[Diffusion inpainting of ``peppers'' with regular masks]{
Inpainting of
``peppers'' for regular masks. Densities from left to right are
1.5625 \%, 6.25 \%, and 25 \%.
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.3$ and $\sigma=1.9$, $\lambda=0.3$ and $\sigma=1.8$, and
$\lambda=0.6$ and $\sigma=1.3$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D0015625_gauss_zero}
\caption{$\textrm{MSE} = 252.35$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D00625_gauss_zero}
\caption{$\textrm{MSE} = 69.13$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D025_gauss_zero}
\caption{$\textrm{MSE} = 12.47$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_reg_D0015625_gauss_first}
\caption{$\textrm{MSE} = 254.53$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D00625_gauss_first}
\caption{$\textrm{MSE} = 69.28$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D025_gauss_first}
\caption{$\textrm{MSE} = 11.66$}
\end{subfigure}
\caption[SPH inpainting of ``walter'' with regular masks]{
Inpainting of
``walter'' for regular masks with isotropic Gaussian kernels.
Densities from left to right are 1.5625~\%, 6.25~\%, and 25~\%.
Top row: Zero order consistency SPH inpainting.
Bottom row: First order consistency SPH inpainting.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D0015625_harm}
\caption{$\textrm{MSE} = 423.03$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D00625_harm}
\caption{$\textrm{MSE} = 115.18$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D025_harm}
\caption{$\textrm{MSE} = 17.02$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D0015625_biharm}
\caption{$\textrm{MSE} = 229.23$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D00625_biharm}
\caption{$\textrm{MSE} = 48.54$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_reg_D025_biharm}
\caption{$\textrm{MSE} = 4.73$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_reg_D0015625_eed_lambda_0_2_sigma_1_2}
\caption{$\textrm{MSE} = 221.08$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_reg_D00625_eed_lambda_0_1_sigma_1_1}
\caption{$\textrm{MSE} = 38.20$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_reg_D025_eed_lambda_0_1_sigma_0_7}
\caption{$\textrm{MSE} = 4.25$}
\end{subfigure}
\caption[Diffusion inpainting of ``walter'' with regular masks]{
Inpainting of
``walter'' for regular masks. Densities from left to right are
1.5625 \%, 6.25 \%, and 25 \%.
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.2$ and $\sigma=1.2$, $\lambda=0.1$ and $\sigma=1.1$, and
$\lambda=0.1$ and $\sigma=0.7$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D0015625_gauss_zero}
\caption{$\textrm{MSE} = 1082.97$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D00625_gauss_zero}
\caption{$\textrm{MSE} = 786.14$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D025_gauss_zero}
\caption{$\textrm{MSE} = 484.12$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_reg_D0015625_gauss_first}
\caption{$\textrm{MSE} = 1069.05$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D00625_gauss_first}
\caption{$\textrm{MSE} = 787.46$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D025_gauss_first}
\caption{$\textrm{MSE} = 488.93$}
\end{subfigure}
\caption[SPH inpainting of ``baboon'' with regular masks]{
Inpainting of
``baboon'' for regular masks with isotropic Gaussian kernels.
Densities from left to right are 1.5625 \%, 6.25 \%, and 25 \%.
Top row: Zero order consistency SPH inpainting.
Bottom row: First order consistency SPH inpainting.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D0015625_harm}
\caption{$\textrm{MSE} = 943.91$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D00625_harm}
\caption{$\textrm{MSE} = 738.12$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D025_harm}
\caption{$\textrm{MSE} = 473.74$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D0015625_biharm}
\caption{$\textrm{MSE} = 1203.91$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D00625_biharm}
\caption{$\textrm{MSE} = 890.13$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_reg_D025_biharm}
\caption{$\textrm{MSE} = 540.67$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_reg_D0015625_eed_lambda_5_2_sigma_0_6}
\caption{$\textrm{MSE} = 940.73$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_reg_D00625_eed_lambda_11_4_sigma_0_1}
\caption{$\textrm{MSE} = 733.31$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_reg_D025_eed_lambda_9_6_sigma_3}
\caption{$\textrm{MSE} = 473.66$}
\end{subfigure}
\caption[Diffusion inpainting of ``baboon'' with regular masks]{
Inpainting of
``baboon'' for regular masks. Densities from left to right are
1.5625 \%, 6.25 \%, and 25 \%.
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=5.2$ and $\sigma=0.6$, $\lambda=11.4$ and $\sigma=0.1$, and
$\lambda=9.6$ and $\sigma=3.0$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_reg_D00104_gauss_zero}
\caption{$\textrm{MSE} = 277.88$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_reg_D00416_gauss_zero}
\caption{$\textrm{MSE} = 132.79$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_reg_D01616_gauss_zero}
\caption{$\textrm{MSE} = 57.52$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_reg_D00104_gauss_first}
\caption{$\textrm{MSE} = 261.11$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_reg_D00416_gauss_first}
\caption{$\textrm{MSE} = 124.81$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_reg_D01616_gauss_first}
\caption{$\textrm{MSE} = 55.87$}
\end{subfigure}
\caption[SPH inpainting of ``parrots'' with regular masks]{
Inpainting of
``parrots'' for regular masks with isotropic Gaussian kernels.
Densities from left to right are 1.04 \%, 4.16 \%, and 16.66 \%.
Top row: Zero order consistency SPH inpainting.
Bottom row: First order consistency SPH inpainting.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_reg_D00104_harm}
\caption{$\textrm{MSE} = 313.94$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_reg_D00416_harm}
\caption{$\textrm{MSE} = 139.10$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_reg_D01616_harm}
\caption{$\textrm{MSE} = 60.39$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_reg_D00104_biharm}
\caption{$\textrm{MSE} = 266.33$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_reg_D00416_biharm}
\caption{$\textrm{MSE} = 128.18$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_reg_D01616_biharm}
\caption{$\textrm{MSE} = 54.70$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/parrots_reg_D00104_eed_lambda_0_7_sigma_2}
\caption{$\textrm{MSE} = 258.32$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_reg_D00416_eed_lambda_1_5_sigma_2}
\caption{$\textrm{MSE} = 118.79$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/parrots_reg_D01616_eed_lambda_1_6_sigma_1_5}
\caption{$\textrm{MSE} = 52.08$}
\end{subfigure}
\caption[Diffusion inpainting of ``parrots'' with regular masks]{
Inpainting of
``parrots'' for regular masks. Densities from left to right are
1.04 \%, 4.16 \%, and 16.66 \%.
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.7$ and $\sigma=2.0$, $\lambda=1.5$ and $\sigma=2.0$, and
$\lambda=1.6$ and $\sigma=1.5$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D00104_gauss_zero}
\caption{$\textrm{MSE} = 502.30$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D00416_gauss_zero}
\caption{$\textrm{MSE} = 225.16$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D01616_gauss_zero}
\caption{$\textrm{MSE} = 104.52$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D00104_gauss_first}
\caption{$\textrm{MSE} = 501.80$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D00416_gauss_first}
\caption{$\textrm{MSE} = 218.19$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D01616_gauss_first}
\caption{$\textrm{MSE} = 99.95$}
\end{subfigure}
\caption[SPH inpainting of ``girl'' with regular masks]{
Inpainting of
``girl'' for regular masks with isotropic Gaussian kernels.
Densities from left to right are 1.04 \%, 4.16 \%, and 16.66 \%.
Top row: Zero order consistency SPH inpainting.
Bottom row: First order consistency SPH inpainting.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D00104_harm}
\caption{$\textrm{MSE} = 593.92$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D00416_harm}
\caption{$\textrm{MSE} = 247.97$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D01616_harm}
\caption{$\textrm{MSE} = 104.55$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D00104_biharm}
\caption{$\textrm{MSE} = 469.80$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D00416_biharm}
\caption{$\textrm{MSE} = 215.31$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_reg_D01616_biharm}
\caption{$\textrm{MSE} = 99.54$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_reg_D00104_eed_lambda_0_3_sigma_2}
\caption{$\textrm{MSE} = 371.32$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_reg_D00416_eed_lambda_0_6_sigma_2}
\caption{$\textrm{MSE} = 166.89$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_reg_D01616_eed_lambda_0_7_sigma_2}
\caption{$\textrm{MSE} = 78.41$}
\end{subfigure}
\caption[Diffusion inpainting of ``girl'' with regular masks]{
Inpainting of
``girl'' for regular masks. Densities from left to right are
1.04 \%, 4.16 \%, and 16.66 \%.
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.3$ and $\sigma=2.0$, $\lambda=0.6$ and $\sigma=2.0$, and
$\lambda=0.7$ and $\sigma=2.0$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D00104_gauss_zero}
\caption{$\textrm{MSE} = 489.67$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D00416_gauss_zero}
\caption{$\textrm{MSE} = 247.10$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D01616_gauss_zero}
\caption{$\textrm{MSE} = 108.35$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D00104_gauss_first}
\caption{$\textrm{MSE} = 477.34$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D00416_gauss_first}
\caption{$\textrm{MSE} = 244.17$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D01616_gauss_first}
\caption{$\textrm{MSE} = 105.77$}
\end{subfigure}
\caption[SPH inpainting of ``plane'' with regular masks]{
Inpainting of
``plane'' for regular masks with isotropic Gaussian kernels.
Densities from left to right are 1.04 \%, 4.16 \%, and 16.66 \%.
Top row: Zero order consistency SPH inpainting.
Bottom row: First order consistency SPH inpainting.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D00104_harm}
\caption{$\textrm{MSE} = 551.02$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D00416_harm}
\caption{$\textrm{MSE} = 277.22$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D01616_harm}
\caption{$\textrm{MSE} = 116.11$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D00104_biharm}
\caption{$\textrm{MSE} = 497.11$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D00416_biharm}
\caption{$\textrm{MSE} = 250.55$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_reg_D01616_biharm}
\caption{$\textrm{MSE} = 103.77$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_reg_D00104_eed_lambda_0_8_sigma_2}
\caption{$\textrm{MSE} = 426.88$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_reg_D00416_eed_lambda_1_8_sigma_0_9}
\caption{$\textrm{MSE} = 225.21$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_reg_D01616_eed_lambda_1_4_sigma_1_6}
\caption{$\textrm{MSE} = 92.32$}
\end{subfigure}
\caption[Diffusion inpainting of ``plane'' with regular masks]{
Inpainting of
``plane'' for regular masks. Densities from left to right are
1.04 \%, 4.16 \%, and 16.66 \%.
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.8$ and $\sigma=2.0$, $\lambda=1.8$ and $\sigma=0.9$, and
$\lambda=1.4$ and $\sigma=1.6$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D00104_gauss_zero}
\caption{$\textrm{MSE} = 284.43$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D00416_gauss_zero}
\caption{$\textrm{MSE} = 143.04$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D01616_gauss_zero}
\caption{$\textrm{MSE} = 67.49$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D00104_gauss_first}
\caption{$\textrm{MSE} = 274.08$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D00416_gauss_first}
\caption{$\textrm{MSE} = 140.25$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D01616_gauss_first}
\caption{$\textrm{MSE} = 65.32$}
\end{subfigure}
\caption[SPH inpainting of ``hats'' with regular masks]{
Inpainting of
``hats'' for regular masks with isotropic Gaussian kernels.
Densities from left to right are 1.04 \%, 4.16 \%, and 16.66 \%.
Top row: Zero order consistency SPH inpainting.
Bottom row: First order consistency SPH inpainting.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D00104_harm}
\caption{$\textrm{MSE} = 305.05$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D00416_harm}
\caption{$\textrm{MSE} = 155.78$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D01616_harm}
\caption{$\textrm{MSE} = 69.45$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D00104_biharm}
\caption{$\textrm{MSE} = 283.68$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D00416_biharm}
\caption{$\textrm{MSE} = 140.67$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_reg_D01616_biharm}
\caption{$\textrm{MSE} = 65.96$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_reg_D00104_eed_lambda_0_6_sigma_2}
\caption{$\textrm{MSE} = 254.01$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_reg_D00416_eed_lambda_0_5_sigma_2}
\caption{$\textrm{MSE} = 119.05$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_reg_D01616_eed_lambda_0_6_sigma_2}
\caption{$\textrm{MSE} = 55.24$}
\end{subfigure}
\caption[Diffusion inpainting of ``hats'' with regular masks]{
Inpainting of
``hats'' for regular masks. Densities from left to right are
1.04 \%, 4.16 \%, and 16.66 \%.
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.6$ and $\sigma=2.0$, $\lambda=0.5$ and $\sigma=2.0$, and
$\lambda=0.6$ and $\sigma=2.0$.}
\end{figure}
Over all images and densities, the results of SPH inpainting lie between
those achieved by harmonic and biharmonic diffusion inpainting. However, SPH
generally cannot match the quality of EED inpainting.
To investigate further, we consider the same images, but now
equipped with masks of randomly chosen pixels instead of regular masks.
For each image, we have created random masks with densities of 1 \%, 5 \%,
and 10 \%.
For each of these masks, we perform SPH inpainting with an isotropic Gaussian
kernel at zero and first order consistency, as well as harmonic inpainting,
biharmonic inpainting, and inpainting with EED.
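The evaluation protocol described above (sampling a random mask at a
prescribed pixel density, then measuring reconstruction quality by the mean
squared error) can be sketched as follows. This is an illustrative helper,
not code from this work; the names \texttt{random\_mask} and \texttt{mse}
are our own.

```python
import numpy as np

def random_mask(shape, density, seed=0):
    """Boolean inpainting mask with the given fraction of known pixels."""
    rng = np.random.default_rng(seed)
    n = int(round(density * shape[0] * shape[1]))
    mask = np.zeros(shape, dtype=bool)
    # Sample n distinct pixel positions uniformly at random.
    idx = rng.choice(shape[0] * shape[1], size=n, replace=False)
    mask.flat[idx] = True
    return mask

def mse(u, f):
    """Mean squared error between reconstruction u and original f."""
    u = np.asarray(u, dtype=float)
    f = np.asarray(f, dtype=float)
    return float(np.mean((u - f) ** 2))
```

A 5 \% mask on a $256 \times 256$ image, for instance, keeps
$0.05 \cdot 256^2 \approx 3277$ pixels; the inpainted result is then compared
against the undegraded original via \texttt{mse}.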
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D001_trui}
\caption{1 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask_ran_D005_trui}
\caption{5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D010_trui}
\caption{10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_ran_D001_gauss_zero}
\caption{$\textrm{MSE} = 649.00$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_gauss_zero}
\caption{$\textrm{MSE} = 208.48$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_ran_D010_gauss_zero}
\caption{$\textrm{MSE} = 110.48$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_ran_D001_gauss_first}
\caption{$\textrm{MSE} = 693.58$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_gauss_first}
\caption{$\textrm{MSE} = 197.58$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_ran_D010_gauss_first}
\caption{$\textrm{MSE} = 99.23$}
\end{subfigure}
\caption[SPH inpainting of ``trui'' with random masks]{
Inpainting of ``trui''
for random masks of different densities.
Top row: Masks with densities of 1 \%, 5 \%, and 10 \%.
Middle row: Zero order consistency SPH inpainting with isotropic
Gaussian kernel.
Bottom row: First order consistency SPH inpainting with isotropic
Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_ran_D001_harm}
\caption{$\textrm{MSE} = 655.65$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_harm}
\caption{$\textrm{MSE} = 226.06$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_ran_D010_harm}
\caption{$\textrm{MSE} = 121.66$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_ran_D001_biharm}
\caption{$\textrm{MSE} = 579.71$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/trui_ran_D005_biharm}
\caption{$\textrm{MSE} = 146.46$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/trui_ran_D010_biharm}
\caption{$\textrm{MSE} = 70.99$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/trui_ran_D001_eed_lambda_1_sigma_2}
\caption{$\textrm{MSE} = 554.98$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/trui_ran_D005_eed_lambda_0_2_sigma_0_8}
\caption{$\textrm{MSE} = 134.92$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/trui_ran_D010_eed_lambda_0_4_sigma_0_8}
\caption{$\textrm{MSE} = 62.51$}
\end{subfigure}
\caption[Diffusion inpainting of ``trui'' with random masks]{
Inpainting of
``trui'' for random masks of densities 1 \%, 5 \%, and 10 \% (left to right).
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=1.0$ and $\sigma=2.0$, $\lambda=0.2$ and $\sigma=0.8$, and
$\lambda=0.4$ and $\sigma=0.8$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D001_peppers}
\caption{1 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D005_peppers}
\caption{5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D010_peppers}
\caption{10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D001_gauss_zero}
\caption{$\textrm{MSE} = 643.74$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D005_gauss_zero}
\caption{$\textrm{MSE} = 196.64$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D010_gauss_zero}
\caption{$\textrm{MSE} = 115.62$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D001_gauss_first}
\caption{$\textrm{MSE} = 807.45$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D005_gauss_first}
\caption{$\textrm{MSE} = 201.50$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D010_gauss_first}
\caption{$\textrm{MSE} = 109.25$}
\end{subfigure}
\caption[SPH inpainting of ``peppers'' with random masks]{
Inpainting of
``peppers'' for random masks of different densities.
Top row: Masks with densities of 1 \%, 5 \%, and 10 \%.
Middle row: Zero order consistency SPH inpainting with isotropic
Gaussian kernel.
Bottom row: First order consistency SPH inpainting with isotropic
Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D001_harm}
\caption{$\textrm{MSE} = 712.59$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D005_harm}
\caption{$\textrm{MSE} = 217.79$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D010_harm}
\caption{$\textrm{MSE} = 119.63$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D001_biharm}
\caption{$\textrm{MSE} = 553.56$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D005_biharm}
\caption{$\textrm{MSE} = 151.51$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_ran_D010_biharm}
\caption{$\textrm{MSE} = 88.45$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_ran_D001_eed_lambda_0_1_sigma_0_9}
\caption{$\textrm{MSE} = 542.87$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_ran_D005_eed_lambda_0_5_sigma_1_5}
\caption{$\textrm{MSE} = 135.32$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_ran_D010_eed_lambda_0_4_sigma_1_8}
\caption{$\textrm{MSE} = 68.56$}
\end{subfigure}
\caption[Diffusion inpainting of ``peppers'' with random masks]{
Inpainting of
``peppers'' for random masks of densities 1 \%, 5 \%, and 10
\% (left to right).
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.1$ and $\sigma=0.9$, $\lambda=0.5$ and $\sigma=1.5$, and
$\lambda=0.4$ and $\sigma=1.8$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D001_walter}
\caption{1 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D005_walter}
\caption{5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D010_walter}
\caption{10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D001_gauss_zero}
\caption{$\textrm{MSE} = 600.02$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D005_gauss_zero}
\caption{$\textrm{MSE} = 180.58$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D010_gauss_zero}
\caption{$\textrm{MSE} = 93.79$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D001_gauss_first}
\caption{$\textrm{MSE} = 745.89$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D005_gauss_first}
\caption{$\textrm{MSE} = 167.00$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D010_gauss_first}
\caption{$\textrm{MSE} = 80.22$}
\end{subfigure}
\caption[SPH inpainting of ``walter'' with random masks]{
Inpainting of
``walter'' for random masks of different densities.
Top row: Masks with densities of 1 \%, 5 \%, and 10 \%.
Middle row: Zero order consistency SPH inpainting with isotropic
Gaussian kernel.
Bottom row: First order consistency SPH inpainting with isotropic
Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D001_harm}
\caption{$\textrm{MSE} = 672.50$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D005_harm}
\caption{$\textrm{MSE} = 212.61$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D010_harm}
\caption{$\textrm{MSE} = 113.88$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D001_biharm}
\caption{$\textrm{MSE} = 526.65$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D005_biharm}
\caption{$\textrm{MSE} = 115.72$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_ran_D010_biharm}
\caption{$\textrm{MSE} = 50.82$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_ran_D001_eed_lambda_0_1_sigma_0_6}
\caption{$\textrm{MSE} = 466.66$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_ran_D005_eed_lambda_0_2_sigma_0_9}
\caption{$\textrm{MSE} = 91.90$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_ran_D010_eed_lambda_0_2_sigma_0_8}
\caption{$\textrm{MSE} = 37.92$}
\end{subfigure}
\caption[Diffusion inpainting of ``walter'' with random masks]{
Inpainting of
``walter'' for random masks of densities 1 \%, 5 \%, and 10 \%
(left to right).
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.1$ and $\sigma=0.6$, $\lambda=0.2$ and $\sigma=0.9$, and
$\lambda=0.2$ and $\sigma=0.8$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D001_baboon}
\caption{1 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D005_baboon}
\caption{5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D010_baboon}
\caption{10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D001_gauss_zero}
\caption{$\textrm{MSE} = 1253.29$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D005_gauss_zero}
\caption{$\textrm{MSE} = 939.62$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D010_gauss_zero}
\caption{$\textrm{MSE} = 778.42$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D001_gauss_first}
\caption{$\textrm{MSE} = 1481.98$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D005_gauss_first}
\caption{$\textrm{MSE} = 1124.76$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D010_gauss_first}
\caption{$\textrm{MSE} = 873.64$}
\end{subfigure}
\caption[SPH inpainting of ``baboon'' with random masks]{
Inpainting of
``baboon'' for random masks of different densities.
Top row: Masks with densities of 1 \%, 5 \%, and 10 \%.
Middle row: Zero order consistency SPH inpainting with isotropic
Gaussian kernel.
Bottom row: First order consistency SPH inpainting with isotropic
Gaussian kernel.}
\label{fig:baboon_ran_sph}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D001_harm}
\caption{$\textrm{MSE} = 1038.79$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D005_harm}
\caption{$\textrm{MSE} = 794.68$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D010_harm}
\caption{$\textrm{MSE} = 688.23$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D001_biharm}
\caption{$\textrm{MSE} = 1441.11$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D005_biharm}
\caption{$\textrm{MSE} = 1049.28$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_ran_D010_biharm}
\caption{$\textrm{MSE} = 864.59$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_ran_D001_eed_lambda_5_sigma_0_7}
\caption{$\textrm{MSE} = 1028.85$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_ran_D005_eed_lambda_6_7_sigma_3}
\caption{$\textrm{MSE} = 789.49$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_ran_D010_eed_lambda_5_8_sigma_3}
\caption{$\textrm{MSE} = 686.94$}
\end{subfigure}
\caption[Diffusion inpainting of ``baboon'' with random masks]{
Inpainting of
``baboon'' for random masks of densities 1 \%, 5 \%, and 10 \%
(left to right).
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=5.0$ and $\sigma=0.7$, $\lambda=6.7$ and $\sigma=3.0$, and
$\lambda=5.8$ and $\sigma=3.0$.}
\label{fig:baboon_ran_diff}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D001_parrots}
\caption{1 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/mask_ran_D005_parrots}
\caption{5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D010_parrots}
\caption{10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_ran_D001_gauss_zero}
\caption{$\textrm{MSE} = 354.73$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_ran_D005_gauss_zero}
\caption{$\textrm{MSE} = 169.62$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_ran_D010_gauss_zero}
\caption{$\textrm{MSE} = 111.69$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_ran_D001_gauss_first}
\caption{$\textrm{MSE} = 394.41$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_ran_D005_gauss_first}
\caption{$\textrm{MSE} = 173.71$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_ran_D010_gauss_first}
\caption{$\textrm{MSE} = 115.51$}
\end{subfigure}
\caption[SPH inpainting of ``parrots'' with random masks]{
Inpainting of
``parrots'' for random masks of different densities.
Top row: Masks with densities of 1 \%, 5 \%, and 10 \%.
Middle row: Zero order consistency SPH inpainting with isotropic
Gaussian kernel.
Bottom row: First order consistency SPH inpainting with isotropic
Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_ran_D001_harm}
\caption{$\textrm{MSE} = 387.24$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_ran_D005_harm}
\caption{$\textrm{MSE} = 162.53$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_ran_D010_harm}
\caption{$\textrm{MSE} = 106.41$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_ran_D001_biharm}
\caption{$\textrm{MSE} = 304.72$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_art/parrots_ran_D005_biharm}
\caption{$\textrm{MSE} = 147.75$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/parrots_ran_D010_biharm}
\caption{$\textrm{MSE} = 102.72$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/parrots_ran_D001_eed_lambda_0_2_sigma_1_8}
\caption{$\textrm{MSE} = 294.05$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/parrots_ran_D005_eed_lambda_1_2_sigma_2}
\caption{$\textrm{MSE} = 137.04$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/parrots_ran_D010_eed_lambda_1_7_sigma_2}
\caption{$\textrm{MSE} = 92.75$}
\end{subfigure}
\caption[Diffusion inpainting of ``parrots'' with random masks]{
Inpainting of
``parrots'' for random masks of densities 1 \%, 5 \%, and 10 \%
(left to right).
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.2$ and $\sigma=1.8$, $\lambda=1.2$ and $\sigma=2.0$, and
$\lambda=1.7$ and $\sigma=2.0$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D001_girl}
\caption{1 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D005_girl}
\caption{5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D010_girl}
\caption{10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D001_gauss_zero}
\caption{$\textrm{MSE} = 662.11$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D005_gauss_zero}
\caption{$\textrm{MSE} = 271.68$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D010_gauss_zero}
\caption{$\textrm{MSE} = 175.94$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D001_gauss_first}
\caption{$\textrm{MSE} = 735.35$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D005_gauss_first}
\caption{$\textrm{MSE} = 273.61$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D010_gauss_first}
\caption{$\textrm{MSE} = 184.10$}
\end{subfigure}
\caption[SPH inpainting of ``girl'' with random masks]{
Inpainting of
``girl'' for random masks of different densities.
Top row: Masks with densities of 1 \%, 5 \%, and 10 \%.
Middle row: Zero order consistency SPH inpainting with isotropic
Gaussian kernel.
Bottom row: First order consistency SPH inpainting with isotropic
Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D001_harm}
\caption{$\textrm{MSE} = 710.72$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D005_harm}
\caption{$\textrm{MSE} = 265.84$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D010_harm}
\caption{$\textrm{MSE} = 177.32$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D001_biharm}
\caption{$\textrm{MSE} = 612.65$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D005_biharm}
\caption{$\textrm{MSE} = 233.97$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_ran_D010_biharm}
\caption{$\textrm{MSE} = 159.95$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_ran_D001_eed_lambda_0_8_sigma_2}
\caption{$\textrm{MSE} = 492.03$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_ran_D005_eed_lambda_1_4_sigma_2}
\caption{$\textrm{MSE} = 182.14$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_ran_D010_eed_lambda_1_sigma_2}
\caption{$\textrm{MSE} = 126.27$}
\end{subfigure}
\caption[Diffusion inpainting of ``girl'' with random masks]{
Inpainting of
``girl'' for random masks of densities 1 \%, 5 \%, and 10 \% (left to right).
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.8$ and $\sigma=2.0$, $\lambda=1.4$ and $\sigma=2.0$, and
$\lambda=1.0$ and $\sigma=2.0$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D001_plane}
\caption{1 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D005_plane}
\caption{5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D010_plane}
\caption{10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D001_gauss_zero}
\caption{$\textrm{MSE} = 717.64$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D005_gauss_zero}
\caption{$\textrm{MSE} = 313.08$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D010_gauss_zero}
\caption{$\textrm{MSE} = 209.96$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D001_gauss_first}
\caption{$\textrm{MSE} = 845.86$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D005_gauss_first}
\caption{$\textrm{MSE} = 341.57$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D010_gauss_first}
\caption{$\textrm{MSE} = 213.02$}
\end{subfigure}
\caption[SPH inpainting of ``plane'' with random masks]{
Inpainting of
``plane'' for random masks of different densities.
Top row: Masks with densities of 1 \%, 5 \%, and 10 \%.
Middle row: Zero order consistency SPH inpainting with isotropic
Gaussian kernel.
Bottom row: First order consistency SPH inpainting with isotropic
Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D001_harm}
\caption{$\textrm{MSE} = 705.55$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D005_harm}
\caption{$\textrm{MSE} = 309.14$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D010_harm}
\caption{$\textrm{MSE} = 209.28$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D001_biharm}
\caption{$\textrm{MSE} = 718.31$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D005_biharm}
\caption{$\textrm{MSE} = 315.87$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_ran_D010_biharm}
\caption{$\textrm{MSE} = 182.87$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_ran_D001_eed_lambda_0_1_sigma_0_4}
\caption{$\textrm{MSE} = 640.09$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_ran_D005_eed_lambda_1_9_sigma_0_6}
\caption{$\textrm{MSE} = 255.48$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_ran_D010_eed_lambda_1_2_sigma_2}
\caption{$\textrm{MSE} = 162.08$}
\end{subfigure}
\caption[Diffusion inpainting of ``plane'' with random masks]{
Inpainting of
``plane'' for random masks of densities 1 \%, 5 \%, and 10 \%
(left to right).
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.1$ and $\sigma=0.4$, $\lambda=1.9$ and $\sigma=0.6$, and
$\lambda=1.2$ and $\sigma=2.0$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D001_hats}
\caption{1 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D005_hats}
\caption{5 \% mask}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/mask_ran_D010_hats}
\caption{10 \% mask}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D001_gauss_zero}
\caption{$\textrm{MSE} = 353.40$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D005_gauss_zero}
\caption{$\textrm{MSE} = 171.43$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D010_gauss_zero}
\caption{$\textrm{MSE} = 123.37$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D001_gauss_first}
\caption{$\textrm{MSE} = 470.24$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D005_gauss_first}
\caption{$\textrm{MSE} = 191.77$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D010_gauss_first}
\caption{$\textrm{MSE} = 128.98$}
\end{subfigure}
\caption[SPH inpainting of ``hats'' with random masks]{
Inpainting of
``hats'' for random masks of different densities.
Top row: Masks with densities of 1 \%, 5 \%, and 10 \%.
Middle row: Zero order consistency SPH inpainting with isotropic
Gaussian kernel.
Bottom row: First order consistency SPH inpainting with isotropic
Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D001_harm}
\caption{$\textrm{MSE} = 349.80$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D005_harm}
\caption{$\textrm{MSE} = 175.68$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D010_harm}
\caption{$\textrm{MSE} = 119.83$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D001_biharm}
\caption{$\textrm{MSE} = 344.37$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D005_biharm}
\caption{$\textrm{MSE} = 165.10$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_ran_D010_biharm}
\caption{$\textrm{MSE} = 112.54$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_ran_D001_eed_lambda_0_6_sigma_2}
\caption{$\textrm{MSE} = 289.27$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_ran_D005_eed_lambda_0_7_sigma_2}
\caption{$\textrm{MSE} = 142.61$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_ran_D010_eed_lambda_0_6_sigma_2}
\caption{$\textrm{MSE} = 91.07$}
\end{subfigure}
\caption[Diffusion inpainting of ``hats'' with random masks]{
Inpainting of
``hats'' for random masks of densities 1 \%, 5 \%, and 10 \% (left to right).
Top row: Harmonic inpainting.
Middle row: Biharmonic inpainting.
Bottom row: Inpainting with EED.
Parameters are from left to right
$\lambda=0.6$ and $\sigma=2.0$, $\lambda=0.7$ and $\sigma=2.0$, and
$\lambda=0.6$ and $\sigma=2.0$.}
\end{figure}
Once again, we observe that first order consistency SPH
inpainting is prone to producing artifacts in the form of under- and
overshoots. These are particularly visible at lower densities. Zero order
consistency SPH inpainting is more stable and achieves better MSEs than
harmonic inpainting. Biharmonic inpainting is better suited to the distribution
of mask points in most cases, and EED can once more benefit from its nonlinear
and anisotropic nature.
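Harmonic inpainting, used as a baseline above, fills the unknown region by solving the discrete Laplace equation with the known mask pixels held fixed as Dirichlet data. The following is a minimal illustrative sketch of that idea (our own toy implementation using damped Jacobi iterations, not the solver used for the experiments):

```python
import numpy as np

def harmonic_inpaint(image, mask, iterations=2000):
    """Fill pixels where mask == 0 by iterating the discrete Laplace
    equation; pixels where mask == 1 stay fixed (Dirichlet data)."""
    u = image.astype(float).copy()
    known = mask.astype(bool)
    for _ in range(iterations):
        # average of the four neighbors (replicated boundary values)
        padded = np.pad(u, 1, mode="edge")
        avg = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                      + padded[1:-1, :-2] + padded[1:-1, 2:])
        u[~known] = avg[~known]
    return u

# toy example: a linear ramp with a square hole punched out
img = np.tile(np.linspace(0.0, 1.0, 9), (9, 1))
msk = np.ones_like(img)
msk[3:6, 3:6] = 0                      # unknown square
damaged = img.copy()
damaged[3:6, 3:6] = 0.0
result = harmonic_inpaint(damaged, msk)
```

Since linear functions are discretely harmonic, the iteration recovers the ramp inside the hole; this also shows why harmonic inpainting struggles with textured regions such as ``baboon'', where the data is far from harmonic.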
\section{Scratch and Text Removal}
As classical applications of inpainting, we consider the
repair of scratches and the removal of overlaid text. Images with scratches are
presented in \cref{fig:images_scratch}, whereas \cref{fig:images_text} shows
the images overlaid with text. As competitors, we consider once more harmonic
inpainting, biharmonic inpainting, inpainting with EED, and the exemplar-based
inpainting approach by Criminisi et al.\ with disc-shaped patches.
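The MSE values reported in the figure captions compare each inpainted result against the undamaged original over all pixels. For 8-bit grayscale values this reduces to the following (a minimal sketch; how color channels are aggregated is an assumption here, not stated in the text):

```python
import numpy as np

def mse(original, reconstruction):
    """Mean squared error over all pixels (values in [0, 255])."""
    diff = original.astype(float) - reconstruction.astype(float)
    return np.mean(diff ** 2)

# small worked example: squared errors 4, 0, 0, 16 -> mean 5.0
a = np.array([[10, 20], [30, 40]])
b = np.array([[12, 20], [30, 44]])
```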
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_scratch}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_scratch}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_scratch}
\end{subfigure}
\vspace*{3ex}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_scratch}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_scratch}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_scratch}
\end{subfigure}
\caption{Images damaged by scratches.}
\label{fig:images_scratch}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_scratch_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 30.84$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_scratch_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 36.90$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_scratch_crim_pr9}
\caption{Exemplar-based\\
$\textrm{MSE} = 59.86$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_scratch_harm}
\caption{Harmonic\\
$\textrm{MSE} = 27.76$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_scratch_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 21.12$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_scratch_eed_lambda_1_1_sigma_1_1}
\caption{EED\\
$\textrm{MSE} = 16.90$}
\end{subfigure}
\caption[Inpainting of ``peppers'' for scratch removal]{
Inpainting of
damaged image ``peppers'' with different inpainting methods. For SPH
inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=1.1$ and $\sigma=1.1$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_scratch_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 83.01$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_scratch_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 60.14$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_scratch_crim_pr5}
\caption{Exemplar-based\\
$\textrm{MSE} = 132.18$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_scratch_harm}
\caption{Harmonic\\
$\textrm{MSE} = 66.73$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_scratch_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 27.55$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_scratch_eed_lambda_0_1_sigma_0_9}
\caption{EED\\
$\textrm{MSE} = 15.63$}
\end{subfigure}
\caption[Inpainting of ``walter'' for scratch removal]{
Inpainting of
damaged image ``walter'' with different inpainting methods. For SPH
inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=0.1$ and $\sigma=0.9$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_scratch_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 283.53$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_scratch_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 765.51$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_scratch_crim_pr11}
\caption{Exemplar-based\\
$\textrm{MSE} = 363.47$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_scratch_harm}
\caption{Harmonic\\
$\textrm{MSE} = 232.43$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_scratch_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 335.96$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_scratch_eed_lambda_6_sigma_3}
\caption{EED\\
$\textrm{MSE} = 231.40$}
\end{subfigure}
\caption[Inpainting of ``baboon'' for scratch removal]{
Inpainting of
damaged image ``baboon'' with different inpainting methods. For SPH
inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=6.0$ and $\sigma=3.0$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_scratch_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 40.61$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_scratch_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 51.99$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_scratch_crim_pr11}
\caption{Exemplar-based\\
$\textrm{MSE} = 86.43$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_scratch_harm}
\caption{Harmonic\\
$\textrm{MSE} = 34.97$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_scratch_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 31.41$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_scratch_eed_lambda_2_7_sigma_1_2}
\caption{EED\\
$\textrm{MSE} = 24.81$}
\end{subfigure}
\caption[Inpainting of ``girl'' for scratch removal]{
Inpainting of
damaged image ``girl'' with different inpainting methods. For SPH
inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=2.7$ and $\sigma=1.2$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_scratch_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 92.47$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_scratch_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 121.49$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_scratch_crim_pr9}
\caption{Exemplar-based\\
$\textrm{MSE} = 190.01$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_scratch_harm}
\caption{Harmonic\\
$\textrm{MSE} = 78.81$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_scratch_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 71.88$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_scratch_eed_lambda_2_sigma_0_5}
\caption{EED\\
$\textrm{MSE} = 62.87$}
\end{subfigure}
\caption[Inpainting of ``plane'' for scratch removal]{
Inpainting of
damaged image ``plane'' with different inpainting methods. For SPH
inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=2.0$ and $\sigma=0.5$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_scratch_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 47.63$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_scratch_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 56.53$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_scratch_crim_pr9}
\caption{Exemplar-based\\
$\textrm{MSE} = 66.15$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_scratch_harm}
\caption{Harmonic\\
$\textrm{MSE} = 36.97$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_scratch_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 35.01$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_scratch_eed_lambda_0_4_sigma_2}
\caption{EED\\
$\textrm{MSE} = 23.99$}
\end{subfigure}
\caption[Inpainting of ``hats'' for scratch removal]{
Inpainting of
damaged image ``hats'' with different inpainting methods. For SPH
inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=0.4$ and $\sigma=2.0$.}
\end{figure}
We observe largely the same results as for the examples ``trui''
and ``parrots'' in the main article: Zero order consistency SPH inpainting
performs better than first order consistency. Both perform better than the
exemplar-based approach, but cannot reach the quality of the
diffusion-based methods. Even when the MSE of SPH inpainting comes close to
that of the diffusion-based methods, the former often produces unpleasant
artifacts. However, we remind the reader that the task of repairing such
damages does not come naturally to SPH inpainting.
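In its simplest form, zero order consistency SPH with an isotropic Gaussian kernel amounts to a normalized kernel-weighted average (Shepard-type interpolation) of the known pixels. The sketch below illustrates this on scattered data; the kernel width `h` and the pointwise normalization are our own illustrative assumptions, not the exact scheme of the article:

```python
import numpy as np

def sph_zero_order(points, values, query, h=1.0):
    """Zero order consistent SPH estimate at `query`: a Gaussian-kernel
    weighted average of the known `values` at `points`."""
    d2 = np.sum((points - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))       # isotropic Gaussian kernel
    return np.sum(w * values) / np.sum(w)  # normalization reproduces constants

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([5.0, 5.0, 5.0, 5.0])
estimate = sph_zero_order(pts, vals, np.array([0.3, 0.7]))
```

Dividing by the summed weights is precisely what gives zero order consistency: a constant field is reproduced exactly regardless of how the mask points are distributed, which explains the method's stability compared to the first order variant.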
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_text}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_text}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_text}
\end{subfigure}
\vspace*{3ex}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_text}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_text}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_text}
\end{subfigure}
\caption{Images overlaid with text.}
\label{fig:images_text}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_text_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 24.56$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_text_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 23.76$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_text_crim_pr7}
\caption{Exemplar-based\\
$\textrm{MSE} = 39.34$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_text_harm}
\caption{Harmonic\\
$\textrm{MSE} = 21.12$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/peppers_text_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 17.18$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_text_eed_lambda_0_4_sigma_1_7}
\caption{EED\\
$\textrm{MSE} = 12.44$}
\end{subfigure}
\caption[Inpainting of ``peppers'' for text removal]{
Inpainting of
image ``peppers'' overlaid with text for different inpainting methods. For
SPH inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=0.4$ and $\sigma=1.7$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_text_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 20.42$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_text_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 11.82$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_text_crim_pr7}
\caption{Exemplar-based\\
$\textrm{MSE} = 24.24$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_text_harm}
\caption{Harmonic\\
$\textrm{MSE} = 16.35$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/walter_text_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 6.08$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_text_eed_lambda_0_1_sigma_0_9}
\caption{EED\\
$\textrm{MSE} = 4.37$}
\end{subfigure}
\caption[Inpainting of ``walter'' for text removal]{
Inpainting of
image ``walter'' overlaid with text for different inpainting methods. For
SPH inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=0.1$ and $\sigma=0.9$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_text_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 183.52$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_text_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 193.30$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_text_crim_pr9}
\caption{Exemplar-based\\
$\textrm{MSE} = 273.64$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_text_harm}
\caption{Harmonic\\
$\textrm{MSE} = 170.17$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/baboon_text_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 202.89$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_text_eed_lambda_8_6_sigma_1_9}
\caption{EED\\
$\textrm{MSE} = 170.69$}
\end{subfigure}
\caption[Inpainting of ``baboon'' for text removal]{
Inpainting of
image ``baboon'' overlaid with text for different inpainting methods. For
SPH inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=8.6$ and $\sigma=1.9$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_text_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 31.60$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_text_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 29.88$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_text_crim_pr7}
\caption{Exemplar-based\\
$\textrm{MSE} = 45.88$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_text_harm}
\caption{Harmonic\\
$\textrm{MSE} = 29.08$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/girl_text_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 27.28$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_text_eed_lambda_1_1_sigma_1_9}
\caption{EED\\
$\textrm{MSE} = 21.10$}
\end{subfigure}
\caption[Inpainting of ``girl'' for text removal]{
Inpainting of
image ``girl'' overlaid with text for different inpainting methods. For
SPH inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=1.1$ and $\sigma=1.9$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_text_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 36.76$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_text_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 36.71$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_text_crim_pr7}
\caption{Exemplar-based\\
$\textrm{MSE} = 54.67$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_text_harm}
\caption{Harmonic\\
$\textrm{MSE} = 33.18$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/plane_text_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 28.64$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_text_eed_lambda_2_6_sigma_0_7}
\caption{EED\\
$\textrm{MSE} = 25.94$}
\end{subfigure}
\caption[Inpainting of ``plane'' for text removal]{
Inpainting of
image ``plane'' overlaid with text for different inpainting methods. For
SPH inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=2.6$ and $\sigma=0.7$.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_text_gauss_zero}
\caption{Zero order SPH\\
$\textrm{MSE} = 31.05$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_text_gauss_first}
\caption{First order SPH\\
$\textrm{MSE} = 34.67$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_text_crim_pr7}
\caption{Exemplar-based\\
$\textrm{MSE} = 42.41$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_text_harm}
\caption{Harmonic\\
$\textrm{MSE} = 26.53$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures_sup/hats_text_biharm}
\caption{Biharmonic\\
$\textrm{MSE} = 27.28$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_text_eed_lambda_0_5_sigma_2}
\caption{EED\\
$\textrm{MSE} = 19.41$}
\end{subfigure}
\caption[Inpainting of ``hats'' for text removal]{
Inpainting of
image ``hats'' overlaid with text for different inpainting methods. For
SPH inpainting, we used an isotropic Gaussian kernel.
Parameters for EED are
$\lambda=0.5$ and $\sigma=2.0$.}
\end{figure}
When it comes to text removal, SPH inpainting again performs
better than the exemplar-based approach, but not as well as the
diffusion-based methods. Whether the zero or the first order consistency
method performs better depends on the image under consideration.
\section{Inpainting with Optimized Data}
In this section, we include some more results and comparisons
for spatially and tonally optimized inpaintings. As the kernel for SPH, we
mostly consider the common choice of a Gaussian kernel. Only for the images of
size $256 \times 256$ do we also include results for other kernels, as the
differences between them are not very large. For comparison, we also consider
the results achieved by harmonic and biharmonic inpainting, equipped with our
Voronoi-based densification and tonally optimized gray values.
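All comparisons in this section are ranked by the mean squared error (MSE) between the original image and its reconstruction. As a reminder of this metric, here is a generic sketch (illustrative only; the actual evaluation code is not part of this document):

```python
def mse(original, reconstruction):
    """Mean squared error between two equally sized grayscale images,
    given as nested lists of pixel values (row-major order)."""
    total, n = 0.0, 0
    for row_a, row_b in zip(original, reconstruction):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            n += 1
    return total / n
```

For 8-bit images with gray values in $[0,255]$, an MSE of $25$ corresponds to an average deviation of $5$ gray levels per pixel.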
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/peppers_dens_D005_gauss_mix_to}
\caption{Mixed order SPH\\
$\textrm{MSE} = 16.29$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_harm_to}
\caption{Harmonic\\
$\textrm{MSE} = 22.66$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_biharm_to}
\caption{Biharmonic\\
$\textrm{MSE} = 24.63$}
\end{subfigure}
\caption[Inpainting of ``peppers'' with spatially and tonally optimized
mask]{Inpainting of ``peppers'' with 5 \% spatially and
tonally optimized masks for different inpainting techniques. For SPH
inpainting, we used an isotropic Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_gauss_zero_to}
\caption{Gaussian\\
$\textrm{MSE} = 22.46$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_matern0_zero_to}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 23.19$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_matern2_zero_to}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 23.00$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_lucy_zero_to}
\caption{Lucy\\
$\textrm{MSE} = 23.75$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_cubic_zero_to}
\caption{cubic spline\\
$\textrm{MSE} = 24.17$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_wend_zero_to}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 27.62$}
\end{subfigure}
\caption[Inpainting of ``peppers'' with spatially and
tonally optimized mask]{Inpainting of ``peppers'' with a 5 \%
spatially and tonally optimized mask with a zero order consistency method and
anisotropic kernels.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_gauss_mix_to}
\caption{Gaussian\\
$\textrm{MSE} = 13.69$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_matern0_mix_to}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 13.97$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_matern2_mix_to}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 14.12$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_lucy_mix_to}
\caption{Lucy\\
$\textrm{MSE} = 14.72$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_cubic_mix_to}
\caption{cubic spline\\
$\textrm{MSE} = 14.75$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/peppers_dens_D005_an_wend_mix_to}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 15.99$}
\end{subfigure}
\caption[Inpainting of ``peppers'' with spatially and
tonally optimized mask]{Inpainting of ``peppers'' with a 5 \%
spatially and tonally optimized mask with a mixed order consistency method and
anisotropic kernels.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/walter_dens_D005_gauss_mix_to}
\caption{Mixed order SPH\\
$\textrm{MSE} = 5.15$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_harm_to}
\caption{Harmonic\\
$\textrm{MSE} = 9.20$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_biharm_to}
\caption{Biharmonic\\
$\textrm{MSE} = 6.31$}
\end{subfigure}
\caption[Inpainting of ``walter'' with spatially and tonally optimized
mask]{Inpainting of ``walter'' with 5 \% spatially and
tonally optimized masks for different inpainting techniques. For SPH
inpainting, we used an isotropic Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_gauss_zero_to}
\caption{Gaussian\\
$\textrm{MSE} = 9.10$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_matern0_zero_to}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 9.40$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_matern2_zero_to}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 9.25$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_lucy_zero_to}
\caption{Lucy\\
$\textrm{MSE} = 10.09$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_cubic_zero_to}
\caption{cubic spline\\
$\textrm{MSE} = 10.60$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_wend_zero_to}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 12.67$}
\end{subfigure}
\caption[Inpainting of ``walter'' with spatially and
tonally optimized mask]{Inpainting of ``walter'' with a 5 \%
spatially and tonally optimized mask with a zero order consistency method and
anisotropic kernels.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_gauss_mix_to}
\caption{Gaussian\\
$\textrm{MSE} = 4.38$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_matern0_mix_to}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 4.21$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_matern2_mix_to}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 4.27$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_lucy_mix_to}
\caption{Lucy\\
$\textrm{MSE} = 4.91$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_cubic_mix_to}
\caption{cubic spline\\
$\textrm{MSE} = 4.94$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/walter_dens_D005_an_wend_mix_to}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 5.32$}
\end{subfigure}
\caption[Inpainting of ``walter'' with spatially and
tonally optimized mask]{Inpainting of ``walter'' with a 5 \%
spatially and tonally optimized mask with a mixed order consistency method and
anisotropic kernels.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_art/baboon_dens_D010_gauss_mix_to}
\caption{Mixed order SPH\\
$\textrm{MSE} = 223.37$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_harm_to}
\caption{Harmonic\\
$\textrm{MSE} = 283.96$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_biharm_to}
\caption{Biharmonic\\
$\textrm{MSE} = 326.00$}
\end{subfigure}
\caption[Inpainting of ``baboon'' with spatially and tonally optimized
mask]{Inpainting of ``baboon'' with 10 \% spatially and
tonally optimized masks for different inpainting techniques. For SPH
inpainting, we used an isotropic Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_gauss_zero_to}
\caption{Gaussian\\
$\textrm{MSE} = 294.17$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_matern0_zero_to}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 289.53$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_matern2_zero_to}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 290.03$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_lucy_zero_to}
\caption{Lucy\\
$\textrm{MSE} = 305.76$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_cubic_zero_to}
\caption{cubic spline\\
$\textrm{MSE} = 306.82$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_wend_zero_to}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 313.08$}
\end{subfigure}
\caption[Inpainting of ``baboon'' with spatially and
tonally optimized mask]{Inpainting of ``baboon'' with a 10 \%
spatially and tonally optimized mask with a zero order consistency method and
anisotropic kernels.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_gauss_mix_to}
\caption{Gaussian\\
$\textrm{MSE} = 220.82$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_matern0_mix_to}
\caption{$C^{0}$-Mat\'{e}rn\\
$\textrm{MSE} = 222.31$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_matern2_mix_to}
\caption{$C^{2}$-Mat\'{e}rn\\
$\textrm{MSE} = 220.19$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_lucy_mix_to}
\caption{Lucy\\
$\textrm{MSE} = 226.21$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_cubic_mix_to}
\caption{cubic spline\\
$\textrm{MSE} = 226.60$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/baboon_dens_D010_an_wend_mix_to}
\caption{$C^{4}$-Wendland\\
$\textrm{MSE} = 227.16$}
\end{subfigure}
\caption[Inpainting of ``baboon'' with spatially and
tonally optimized mask]{Inpainting of ``baboon'' with a 10 \%
spatially and tonally optimized mask with a mixed order consistency method and
anisotropic kernels.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_dens_D005_gauss_zero_to}
\caption{Zero order isotrop.~SPH\\
$\textrm{MSE} = 36.74$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_dens_D005_an_gauss_zero_to}
\caption{Zero order aniso.~SPH\\
$\textrm{MSE} = 32.59$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_dens_D005_gauss_mix_to}
\caption{Mixed order isotrop.~SPH\\
$\textrm{MSE} = 24.49$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_dens_D005_an_gauss_mix_to}
\caption{Mixed order aniso.~SPH\\
$\textrm{MSE} = 21.86$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_dens_D005_harm_to}
\caption{Harmonic\\
$\textrm{MSE} = 32.77$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/girl_dens_D005_biharm_to}
\caption{Biharmonic\\
$\textrm{MSE} = 38.88$}
\end{subfigure}
%
\caption[Inpainting of ``girl'' with spatially and tonally optimized
mask]{Inpainting of ``girl'' with 5 \% spatially and tonally
optimized masks for different inpainting methods. For SPH inpainting, we used
a Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_dens_D005_gauss_zero_to}
\caption{Zero order isotrop.~SPH\\
$\textrm{MSE} = 33.74$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_dens_D005_an_gauss_zero_to}
\caption{Zero order aniso.~SPH\\
$\textrm{MSE} = 27.92$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_dens_D005_gauss_mix_to}
\caption{Mixed order isotrop.~SPH\\
$\textrm{MSE} = 19.60$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_dens_D005_an_gauss_mix_to}
\caption{Mixed order aniso.~SPH\\
$\textrm{MSE} = 16.20$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_dens_D005_harm_to}
\caption{Harmonic\\
$\textrm{MSE} = 30.28$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/plane_dens_D005_biharm_to}
\caption{Biharmonic\\
$\textrm{MSE} = 33.20$}
\end{subfigure}
%
\caption[Inpainting of ``plane'' with spatially and tonally optimized
mask]{Inpainting of ``plane'' with 5 \% spatially and tonally
optimized masks for different inpainting methods. For SPH inpainting, we used
a Gaussian kernel.}
\end{figure}
\begin{figure}[htb]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_dens_D005_gauss_zero_to}
\caption{Zero order isotrop.~SPH\\
$\textrm{MSE} = 30.42$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_dens_D005_an_gauss_zero_to}
\caption{Zero order aniso.~SPH\\
$\textrm{MSE} = 25.31$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_dens_D005_gauss_mix_to}
\caption{Mixed order isotrop.~SPH\\
$\textrm{MSE} = 20.42$}
\end{subfigure}
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_dens_D005_an_gauss_mix_to}
\caption{Mixed order aniso.~SPH\\
$\textrm{MSE} = 17.18$}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_dens_D005_harm_to}
\caption{Harmonic\\
$\textrm{MSE} = 27.19$}
\end{subfigure}
%
\quad
\begin{subfigure}[t]{0.29\textwidth}
\centering
\includegraphics[
width=\textwidth]{Figures_sup/hats_dens_D005_biharm_to}
\caption{Biharmonic\\
$\textrm{MSE} = 32.53$}
\end{subfigure}
%
\caption[Inpainting of ``hats'' with spatially and tonally optimized
mask]{Inpainting of ``hats'' with 5 \% spatially and tonally
optimized masks for different inpainting methods. For SPH inpainting, we used
a Gaussian kernel.}
\end{figure}
As expected, the mixed order anisotropic SPH inpainting
performs best in all cases, with improvements over harmonic and biharmonic
inpainting ranging from 22 \% for ``baboon'' to 55 \% for ``walter''.
\end{document}
| {
"timestamp": "2021-08-25T02:04:24",
"yymm": "2011",
"arxiv_id": "2011.11289",
"language": "en",
"url": "https://arxiv.org/abs/2011.11289",
"abstract": "Digital image inpainting refers to techniques used to reconstruct a damaged or incomplete image by exploiting available image information. The main goal of this work is to perform the image inpainting process from a set of sparsely distributed image samples with the Smoothed Particle Hydrodynamics (SPH) technique. As, in its naive formulation, the SPH technique is not even capable of reproducing constant functions, we modify the approach to obtain an approximation which can reproduce constant and linear functions. Furthermore, we examine the use of Voronoi tessellation for defining the necessary parameters in the SPH method as well as selecting optimally located image samples. In addition to this spatial optimization, optimization of data values is also implemented in order to further improve the results. Apart from a traditional Gaussian smoothing kernel, we assess the performance of other kernels on both random and spatially optimized masks. Since the use of isotropic smoothing kernels is not optimal in the presence of objects with a clear preferred orientation in the image, we also examine anisotropic smoothing kernels. Our final algorithm can compete with well-performing sparse inpainting techniques based on homogeneous or anisotropic diffusion processes as well as with exemplar-based approaches.",
"subjects": "Image and Video Processing (eess.IV)",
"title": "Sparse Inpainting with Smoothed Particle Hydrodynamics",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9688561721629777,
"lm_q2_score": 0.7310585669110203,
"lm_q1q2_score": 0.7082906047643632
} |
https://arxiv.org/abs/2103.11917 | The digrundy number of digraphs | We extend the Grundy number and the ochromatic number, parameters on graph colorings, to digraph colorings, we call them {\emph{digrundy number}} and {\emph{diochromatic number}}, respectively. First, we prove that for every digraph the diochromatic number equals the digrundy number (as it happen for graphs). Then, we prove the interpolation property and the Nordhaus-Gaddum relations for the digrundy number, and improve the Nordhaus-Gaddum relations for the dichromatic and diachromatic numbers bounded previously by the authors in [Electron. J. Combin. 25 (2018) no. 3, Paper {\#} 3.51, 17 pp.] | \section{Introduction}
It is common that classical results or problems in graph theory provide us with interesting questions in digraph theory. One such question is what the natural generalization of the chromatic number is in the class of digraphs. In 1982, Neumann-Lara introduced the concept of the dichromatic number as a generalization of the chromatic number to the class of digraphs. Specifically, in \cite{MR3000989, MR3322691, MR3979228, MR3692144, MR3705781, MR3593495} the authors study the dichromatic number in order to extend results on the chromatic number of graphs to the class of digraphs.
Furthermore, as an anecdote, M. Skoviera\footnote{Oral communication.}, after a talk about the diachromatic number at a conference, said: ``It looks that dichromatic number is the correct generalization for the chromatic number'', confirming the intuitiveness of the dichromatic number as a generalization of the chromatic number.
We consider finite digraphs without loops, in which symmetric arcs are permitted. A \emph{(vertex) coloring} of a digraph $D$ is \emph{acyclic} if the subdigraph induced by each chromatic class contains no directed cycles. The \emph{dichromatic number} $dc(D)$ of $D$ is the smallest $k$ such that $D$ has an acyclic coloring with $k$ colors \cite{MR693366}. This parameter is a generalization of the chromatic number of graphs; see \cite{MR2564801,MR1133813,MR3692144,MR3705781,MR2781992,MR3711038,MR1817491,MR3112565} for old and new results about the dichromatic number. For a detailed introduction to digraphs we refer to \cite{MR2472389}.
A coloring of a digraph $D$ is \emph{complete} if for every pair $(i,j)$ of different colors there is at least one arc $(u,v)$ such that $u$ is colored $i$ and $v$ is colored $j$ \cite{MR2998438}. Note that any acyclic coloring of $D$ with $dc(D)$ colors is a complete coloring. The \emph{diachromatic number} $dac(D)$ of a digraph $D$ is the largest number of colors for which there exists an acyclic and complete coloring of $D$. Hence, the dichromatic and diachromatic numbers of a digraph $D$ are, respectively, the smallest and the largest number of colors in a complete acyclic coloring of $D$, see \cite{MR3875016}.
Let $D$ be a digraph of order $n$ whose vertices are listed in some specified order. In a \emph{greedy coloring} of $D$, the vertices are successively colored with positive integers according to an algorithm that assigns to the vertex under consideration the smallest available color. Hence, if the vertices of $D$ are listed in the order $v_1,v_2,\dots,v_n$, then the resulting greedy coloring $\varsigma$ assigns the color $1$ to $v_1$, that is, $\varsigma(v_1)=1$. If $v_1$ and $v_2$ do not form a $2$-cycle, then assign $\varsigma(v_2)=1$; otherwise $\varsigma(v_2)=2$. In general, suppose that the first $j$ vertices $v_1,v_2,\dots,v_j$, where $1\leq j < n$, in the sequence have been colored with the colors $1,\dots,t-1$, and let $\{V_i\}_{i=1}^{t-1}$ be the set of chromatic classes.
Consider the vertex $v_{j+1}$: if there exists a chromatic class $V_i$ for which $V_i\cup \{v_{j+1}\}$ is acyclic, then $\varsigma(v_{j+1})=i$ for the smallest such $i$; otherwise $\varsigma(v_{j+1})=t$. When the algorithm ends, the vertices of $D$ have been assigned colors from the set $[k]\colon =\{1,2,\dots,k\}$ for some positive integer $k$. Note that any greedy coloring is a complete coloring. The \emph{digrundy number} $dG(D)$ is the largest number of colors in a greedy coloring, see \cite{MR3875016}.
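The greedy procedure above can be sketched in code. The following minimal illustration (ours, not part of the original paper) tests whether a chromatic class stays acyclic after adding a vertex via Kahn's algorithm on the induced subdigraph:

```python
def has_directed_cycle(vertices, arcs):
    # Kahn's algorithm: the induced subdigraph is acyclic iff repeatedly
    # removing vertices of in-degree zero eventually removes every vertex.
    verts = set(vertices)
    indeg = {v: 0 for v in verts}
    out = {v: [] for v in verts}
    for u, w in arcs:
        if u in verts and w in verts:
            out[u].append(w)
            indeg[w] += 1
    stack = [v for v in verts if indeg[v] == 0]
    removed = 0
    while stack:
        v = stack.pop()
        removed += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return removed < len(verts)

def greedy_acyclic_coloring(order, arcs):
    # Color vertices in the given order; each vertex receives the smallest
    # color i whose chromatic class stays acyclic, or opens a new color.
    classes, color = [], {}
    for v in order:
        for i, cls in enumerate(classes):
            if not has_directed_cycle(cls | {v}, arcs):
                cls.add(v)
                color[v] = i + 1
                break
        else:
            classes.append({v})
            color[v] = len(classes)
    return color
```

For instance, on the directed triangle with arcs $(1,2),(2,3),(3,1)$ processed in the order $1,2,3$, vertices $1$ and $2$ receive color $1$, while vertex $3$ opens the new color $2$, since $\{1,2,3\}$ induces a directed cycle.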
In this paper, we explore the analogue of the Grundy number for digraphs, which we call the digrundy number (we recall that the Grundy number $\Gamma$ is also known as the First-Fit number $\chi_{FF}$).
The paper is organized as follows: in Section \ref{sec2} we prove the interpolation theorem for the digrundy number and give a characterization of it, and in Section \ref{sec4} we prove the inequalities known as the Nordhaus--Gaddum relations for the digrundy number and improve those relations for the dichromatic and diachromatic numbers.
\section{The digrundy and diochromatic numbers}\label{sec2}
Since every greedy coloring is acyclic and complete, and the digrundy number $dG(D)$ is the largest number of colors in a greedy coloring, it follows that:
$$dc(D)\leq dG(D)\leq dac(D).$$
A \emph{digrundy coloring} of a digraph $D$ is an acyclic coloring of $D$ having the property that, for every two colors $i$ and $j$ with $i<j$, every vertex colored $j$ forms a directed cycle together with vertices colored $i$. It is not hard to see that a coloring $\varsigma$ of a digraph $D$ is a digrundy coloring of $D$ if and only if $\varsigma$ is a greedy coloring of $D$. In particular, each vertex $v$ in the chromatic class $V_j$ satisfies $N^+(v)\cap V_i\not=\emptyset$ and $N^-(v)\cap V_i\not=\emptyset$ for every chromatic class $V_i$ with $i<j$, and therefore
\[dG(D)\leq \min\{\Delta^+(D),\Delta^-(D)\}+1.\]
Moreover, we have the following lemma.
\begin{lemma}\label{Lem dc greedy}
For each digraph $D$, there exists an ordering $\phi$ of the vertices of $D$ such that the greedy coloring with respect to $\phi$ attains the dichromatic number of $D$.
\end{lemma}
\begin{proof}
Let $D$ be a digraph and consider an acyclic coloring $\varphi:V(D)\to [dc(D)]$. Consider an ordering of $V(D)$ which respects the order of the chromatic classes. Recall that the ordering within each chromatic class is irrelevant, since each class is acyclic. If $\varphi$ is not greedy, then let $i$ be the greatest integer such that $\varphi$ is a greedy coloring restricted to $V_1\cup V_2\cup\dots\cup V_i$. Denote by $\varphi_{i}$ the greedy coloring of $V_1\cup V_2\cup\dots\cup V_i$.
Let $\varphi_{i+1}:V_1\cup V_2\cup\dots\cup V_{i+1}\to[i+1]$ be the acyclic coloring such that $\varphi_{i+1}(u)=\varphi_{i}(u)$ for $u\in V_1\cup V_2\cup\dots\cup V_i$ and such that $\varphi_{i+1}$ recolors the vertices of $V_{i+1}$, respecting the order within $V_{i+1}$, using the greedy algorithm. The coloring $\varphi$ is an optimal coloring of $D$; thus, when we recolor the vertices of $V_{i+1}$, some vertex must receive color $i+1$, and since $V_{i+1}$ is acyclic, $\varphi_{i+1}$ uses exactly $i+1$ colors. Applying this process to each chromatic class $V_j$ with $j>i$, we obtain a greedy coloring of $D$ using $dc(D)$ colors.
\end{proof}
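To make the lemma concrete: every greedy coloring is acyclic, so it uses at least $dc(D)$ colors, and the lemma shows this lower bound is attained by some ordering, while the maximum over all orderings is $dG(D)$ by definition. The following sketch (ours, for illustration only) brute-forces both extremes on a small digraph, a path of three digons $1\leftrightarrow 2\leftrightarrow 3\leftrightarrow 4$, whose only directed cycles are the digons:

```python
from itertools import permutations

def has_directed_cycle(vertices, arcs):
    # Kahn's algorithm on the induced subdigraph.
    verts = set(vertices)
    indeg = {v: 0 for v in verts}
    out = {v: [] for v in verts}
    for u, w in arcs:
        if u in verts and w in verts:
            out[u].append(w)
            indeg[w] += 1
    stack = [v for v in verts if indeg[v] == 0]
    removed = 0
    while stack:
        v = stack.pop()
        removed += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return removed < len(verts)

def greedy_color_count(order, arcs):
    # Number of colors used by the first-fit acyclic coloring along `order`.
    classes = []
    for v in order:
        for cls in classes:
            if not has_directed_cycle(cls | {v}, arcs):
                cls.add(v)
                break
        else:
            classes.append({v})
    return len(classes)

# Path of three digons; acyclic classes are independent sets of the
# underlying path P_4, so dc = 2, and dG <= min(Delta+, Delta-) + 1 = 3.
arcs = [(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3)]
counts = {greedy_color_count(p, arcs) for p in permutations([1, 2, 3, 4])}
print(min(counts), max(counts))  # prints: 2 3
```

The minimum over orderings recovers $dc(D)=2$, as guaranteed by the lemma, and the maximum attains the upper bound $dG(D)=3$.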
In \cite{MR3875016}, the authors proved the interpolation theorem for diachromatic number of a digraph $D$, that is, for every $k$ such that $dc(D) \leq k \leq dac(D)$ there exists an acyclic and complete coloring of $D$ using $k$ colors. In this section, we prove the interpolation theorem for digrundy number. The version for graphs was proved in \cite{MR539075}, for further information see \cite{MR2450569}.
\begin{theorem}
For a digraph $D$ and an integer $k$ with $dc(D) \leq k \leq dG(D)$, there is a digrundy coloring of $D$ using $k$ colors.
\end{theorem}
\begin{proof}
Let $\varsigma$ be a digrundy coloring of $D$ using the set of colors $[dG(D)]$, let $\phi$ be the corresponding vertex ordering and let $V_1,V_2,\dots,V_{dG(D)}$ be the color classes of $\varsigma$, where $V_i$ consists of the vertices colored $i$ by $\varsigma$ for $i\in [dG(D)]$. For each integer $i$ with $1\leq i \leq dG(D)+1$, let $a_i$ be the smallest number of colors in an acyclic coloring of $D$ which coincides with $\varsigma$ on each vertex belonging to $V_1\cup V_2\cup\dots\cup V_{i-1}$. Observe that $a_1=dc(D)$ and $a_{dG(D)+1} = dG(D)$. Furthermore, for each integer $i$ with $1\leq i \leq dG(D)$, let $D_i$ be the subgraph of $D$ induced by $V_i\cup V_{i+1}\cup\dots\cup V_{dG(D)}$. Since each vertex $x$ in $V_i\cup V_{i+1}\cup\dots\cup V_{dG(D)}$ forms at least one directed cycle with vertices
in each of the color classes $V_1,V_2,\dots, V_{i-1}$, it follows that in every coloring of $D$ that coincides with $\varsigma$ on $V_1\cup V_2\cup\dots\cup V_{i-1}$, none of the colors $1,2,\dots, i-1$ can be used for a vertex of $D_i$, and so
\begin{equation} \label{Eq1}
a_i = (i-1) + dc(D_i).
\end{equation}
Since $D_{i+1}$ is a subgraph of $D_i$, it follows that $dc(D_{i+1}) \leq dc(D_i)$. Furthermore, an acyclic coloring of $D_i$ using $dc(D_{i+1})+1$ colors can be obtained from an acyclic coloring of $D_{i+1}$ using $dc(D_{i+1})$ colors by assigning all of the vertices in $V_i$ one and the same color, different from the colors used on $D_{i+1}$; this coloring is acyclic because $V_i$ is a chromatic class of $\varsigma$. Hence $dc(D_i)\leq dc(D_{i+1})+1$, and thus
\begin{equation} \label{Eq2}
dc(D_i)-1\leq dc(D_{i+1})\leq dc(D_i).
\end{equation}
By Equations (\ref{Eq1}) and (\ref{Eq2}),
\[a_i=(i-1)+dc(D_i)=i+(dc(D_i)-1)\leq i+dc(D_{i+1})\leq i+dc(D_i)=1+a_i.\]
Therefore, $a_i\leq i+dc(D_{i+1})\leq 1+a_i$. Since $a_{i+1}=i+dc(D_{i+1})$, it follows that $$a_i\leq a_{i+1} \leq 1+a_i.$$ On the other hand, $a_1=dc(D)$ and $a_{dG(D)+1}=dG(D)$. Thus, for each integer $k$ with $dc(D)\leq k \leq dG(D)$, there is an integer $i$ with $1\leq i \leq dG(D)+1$ such that $a_i=k$.
By Lemma \ref{Lem dc greedy}, we may assume that $dc(D)<k<dG(D)$. Thus there exists a coloring $\varsigma'$ of $D$ using $k$ colors such that $\varsigma'$ coincides with $\varsigma$ on each vertex belonging to $V_1\cup V_2 \cup \dots \cup V_{i-1}$.
By Lemma \ref{Lem dc greedy}, let $\phi''$ be a vertex ordering such that $\phi$ and $\phi''$ coincide for $v\in V_1\cup V_2\cup\dots\cup V_{i-1}$ and such that, when we apply the greedy algorithm on $D_i$, we obtain $dc(D_i)$ colors.
Let $\varsigma''$ be the greedy coloring with respect to $\phi''$, and suppose that $\varsigma''$ is a coloring of $D$ using $l$ colors. Then $\varsigma''$ is a digrundy coloring of $D$ using $l$ colors such that $\varsigma''$ coincides with $\varsigma'$ and $\varsigma$ on all of the vertices in $V_1\cup V_2\cup \dots \cup V_{i-1}$, and $\varsigma''$ assigns to each vertex of $D$ a color not greater than the color assigned to that vertex by $\varsigma'$. Therefore, $l\leq k$. On the other hand, by the definition of $a_i$, the coloring $\varsigma''$ cannot use fewer than $k=a_i$ colors, which implies that $l=k$, and so $\varsigma''$ is a digrundy coloring of $D$ using $k$ colors.
\end{proof}
In 1982 G. Simmons \cite{MR726050} introduced a new type of coloring of a graph $G$ based on orderings of the vertices of $G$, which is similar to but not identical to greedy colorings of $G$. We extend this definition to digraphs using acyclic colorings.
Let $\phi\colon v_1,v_2,\dots,v_n$ be an ordering of the vertices of a digraph $D$. An acyclic coloring $\varsigma\colon V(D)\rightarrow \mathbb{N}$ of $D$ is a \emph{parsimonious $\phi$-coloring} of $D$ if the vertices of $D$ are colored in the order $\phi$, beginning with $\varsigma(v_1)=1$, such that each vertex $v_{i+1}$ $(1\leq i\leq n-1)$ must, if possible, be assigned a color that has already been used on one or more of the vertices $v_1,v_2,\dots,v_i$. If $v_{i+1}$ can be assigned more than one color, then a color must be selected that results in the fewest colors needed to color $D$. If $v_{i+1}$ forms a directed cycle with every current chromatic class, then $\varsigma (v_{i+1})$ is defined as the smallest positive integer not yet used. The \emph{parsimonious $\phi$-coloring number} $dc_\phi (D)$ of $D$ is the minimum number of colors in a parsimonious $\phi$-coloring of $D$. The maximum value of $dc_\phi (D)$ over all orderings $\phi$ of the vertices of $D$ is the \emph{ordered dichromatic number} or, more simply, the \emph{diochromatic number} of $D$, denoted by $dc^o (D)$.
P. Erd{\H o}s, W. Hare, S. Hedetniemi, and R. Laskar \cite{MR889347} showed that the ochromatic number of every graph always equals its Grundy number. The same holds for the digraph generalizations.
\begin{theorem}
For every digraph $D$, $dG(D)=dc^o(D)$.
\end{theorem}
\begin{proof}
In order to show that $dc^o(D)\leq dG(D)$, let $\phi\colon v_1,v_2,\dots,v_n$ be an ordering of the vertices of $D$ such that $dc_\phi (D)=dc^o(D)$. Consider the parsimonious coloring using $dc^o(D)$ colors obtained by a greedy coloring, that is, whenever there is a choice of a color for a vertex, the smallest possible color is chosen. Suppose that this results in a coloring of $D$ using $l$ colors. Then $dc_\phi (D)\leq l$. Furthermore, this coloring using $l$ colors is a digrundy coloring using $l$ colors. Therefore, $dG(D)\geq l$ and so
\[dc^o(D)=dc_\phi (D) \leq l \leq dG(D),\]
producing the desired inequality.
We show that $dc^o(D)\geq dG(D)$. Let $dG(D)=k$. Consider a digrundy coloring of the vertices of $D$ using the colors $1,2,\dots,k$, and let $V_i$ $(1\leq i \leq k)$ denote its chromatic classes. Let $\phi\colon v_1,v_2,\dots,v_n$ be any ordering of the vertices of $D$ in which the vertices of $V_1$ are listed first in some order, the vertices of $V_2$ are listed next in some order, and so on, until finally listing the vertices of $V_k$ in some order. We now compute $dc_\phi (D)$. Assign $v_1$ the color $1$. Since $V_1$ is acyclic, no vertex of $V_1$ lies in a monochromatic directed cycle using only vertices of $V_1$; therefore, every vertex in $V_1$ must be colored $1$ as well. Assume, for an integer $r$ with $1\leq r < k$, that the parsimonious coloring has assigned the color $i$ to every vertex in $V_i$ for $1\leq i \leq r$. Now, consider the vertices in $\phi$ that belong to $V_{r+1}$. Let $v_a$ be the first vertex appearing in $\phi$ that belongs to $V_{r+1}$. Since $v_a$ lies in a directed cycle within $V_i\cup\{v_a\}$ for every $i$ with $1\leq i \leq r$, it follows that $v_a$ cannot be colored any of the colors $1,2,\dots ,r$. Hence, the new color $r+1$ is assigned to $v_a$. Now, if $v_b$ is any vertex belonging to $V_{r+1}$ with $b>a$, then $v_b$ cannot be colored any of the colors $1,2,\dots,r$ either, since $v_b$ lies in a directed cycle within $V_i\cup\{v_b\}$ for $1\leq i \leq r$. However, since $v_b$ does not lie in a directed cycle within $V_{r+1}$, it follows that $v_b$ must be colored $r+1$. By mathematical induction, $dc_\phi (D) = k$. Thus, $dc^o (D)\geq dG(D)$, and the result follows.
\end{proof}
\section{On the Nordhaus-Gaddum relations}\label{sec4}
The Nordhaus-Gaddum inequality \cite{MR0078685} states that for every graph $G$ of order $n$\[\chi(G)+\chi(G^{c})\leq n+1.\]
These relations were extended to the pseudoachromatic number \cite{MR0256930}, showing that for every graph $G$ of order $n$ \[\alpha(G)+\chi(G^{c})\leq n+1\qquad \text{ and }\qquad \alpha(G)+\alpha(G^{c})\leq\psi(G)+\psi(G^{c})\leq \left\lceil \frac{4n}{3}\right\rceil. \]
For the Grundy number, it was proved in \cite{MR2432888} that for every graph $G$ of order $n\geq 10$
\[\Gamma(G)+\Gamma(G^{c})\leq \left\lfloor \frac{5n+2}{4}\right\rfloor . \]
For digraphs, the following results are known \cite{MR3875016}.
If $D$ is a digraph of order $n$, then
\begin{equation}\label{Eq3}
dc(D)+dc(D^{c})\leq \left\lceil \frac{4n}{3}\right\rceil\qquad \text{ and }\qquad dac(D)+dac(D^{c})\leq \left\lceil \frac{3n}{2}\right\rceil .
\end{equation}
In this section, we improve the upper bounds in Equation (\ref{Eq3}) and prove a similar result for the digrundy number.
\begin{theorem}
If $D$ is a digraph of order $n$, then
$dc(D)+dc(D^{c})\leq n+1$.
\end{theorem}
\begin{proof}
The proof is by induction on $n$. The case of $D=K_1$ is trivial. Suppose that for each digraph $F$ of order at most $n-1\geq 1$, $dc(F)+dc(F^{c})\leq n$.
Let $D$ be a digraph of order $n$ and let $x\in V(D)$. Take an acyclic and complete coloring of $D-x$ using $k=dc(D-x)$ colors; assigning $x$ a new color yields an acyclic coloring of $D$, so $dc(D)$ is at most $dc(D-x)+1$. Similarly, $dc(D^c)\leq dc(D^c-x)+1$. Hence, by the induction hypothesis,
\[dc(D)+dc(D^c)-2\leq dc(D-x)+dc(D^c-x)\leq n\]
and $dc(D)+dc(D^c)\leq n+2.$
Suppose that $dc(D)<dc(D-x)+1$ or $dc(D^c)<dc(D^c-x)+1$; then
\[dc(D)+dc(D^c)\leq n+1.\]
Assume now that $dc(D)=dc(D-x)+1$ and $dc(D^c)=dc(D^c-x)+1$, which means that for each chromatic class $X$ of the colorings of $D-x$ and $D^c-x$, the set $X\cup \{x\}$ contains a directed cycle. Each such cycle uses an arc leaving $x$ and an arc entering $x$, so $2(dc(D)-1)\leq d_D(x)=d^+_D(x)+d^-_D(x)$ and $2(dc(D^c)-1)\leq d_{D^c}(x)=d^+_{D^c}(x)+d^-_{D^c}(x)$. Then
\[2(dc(D)-1)+2(dc(D^c)-1)\leq d_D(x)+d_{D^c}(x)=2(n-1)\]
and the result follows.
\end{proof}
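As a sanity check (ours, not part of the proof), the bound $dc(D)+dc(D^c)\leq n+1$ can be verified exhaustively for very small orders, computing the dichromatic number by brute force as the least number of acyclic classes in a vertex partition. The function names and the acyclicity test below are our own illustrative choices:

```python
from itertools import product

def is_acyclic(vs, arcs):
    """Kahn's algorithm: the subdigraph induced by `vs` is acyclic iff
    all of its vertices can be deleted in in-degree-zero order."""
    vs = set(vs)
    indeg = {v: 0 for v in vs}
    out = {v: [] for v in vs}
    for u, w in arcs:
        if u in vs and w in vs:
            indeg[w] += 1
            out[u].append(w)
    stack = [v for v in vs if indeg[v] == 0]
    removed = 0
    while stack:
        v = stack.pop()
        removed += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return removed == len(vs)

def dc(n, arcs):
    """Dichromatic number: least k admitting a partition of the
    vertex set into k acyclic classes (brute force)."""
    for k in range(1, n + 1):
        for assign in product(range(k), repeat=n):
            if all(is_acyclic([v for v in range(n) if assign[v] == c], arcs)
                   for c in range(k)):
                return k
    return n

# exhaustive check of dc(D) + dc(D^c) <= n + 1 over all 64 digraphs of order 3
n = 3
pairs = [(u, w) for u in range(n) for w in range(n) if u != w]
for mask in product([0, 1], repeat=len(pairs)):
    arcs = [p for p, bit in zip(pairs, mask) if bit]
    comp = [p for p in pairs if p not in arcs]
    assert dc(n, arcs) + dc(n, comp) <= n + 1
```

For $n=3$ the extremal case is the complete digraph, with $dc(D)=3$ and $dc(D^c)=1$.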
\begin{theorem}\label{dac}
If $D$ is a digraph of order $n$, then
\[dac(D)+dac(D^{c})\leq \left\lceil \frac{4n}{3}\right\rceil.\]
\end{theorem}
\begin{proof}
Let $dac(D)+dac(D^{c})=x$. Without loss of generality, $\frac{x}{2}\leq dac(D)$, that is, $dac(D)=\frac{x}{2}+\delta$ for some $0\leq\delta\leq\frac{x}{2}$. Let $\omega$ denote the maximum order of a complete subdigraph (that is, a complete symmetric subdigraph) of $D$. Since the set of singular chromatic classes induces a complete subdigraph in $D$, it follows that
\[dac(D)=\frac{x}{2}+\delta\leq \omega+\frac{n-\omega}{2}\]
thus $x+2\delta\leq 2\omega+n-\omega$ and $x-n+2\delta\leq \omega.$
On the other hand, $dac(D^c)\leq n-\omega+1$ because each complete subdigraph of $D$ is an independent set of vertices in $D^c$. Hence \[dac(D^c)\leq n+1-x+n-2\delta=2n+1-x-2\delta,\]
\[x=dac(D)+dac(D^{c})\leq \frac{x}{2}+\delta + 2n+1-x-2\delta=-\frac{x}{2}-\delta + 2n+1\]
and $\frac{3x}{2}\leq 2n+1-\delta.$ Finally, $dac(D)+dac(D^{c})=x\leq \left\lfloor\frac{4n+2}{3}\right\rfloor=\left\lceil \frac{4n}{3}\right\rceil$ and the result follows.
\end{proof}
Finally, we prove the Nordhaus-Gaddum for the digrundy number.
\begin{theorem}
If $D$ is a digraph of order $n$, then \[dG(D)+dG(D^{c})\leq\begin{cases}
\begin{array}{c}
n+1\\
n+2\\
12\\
\left\lfloor \frac{5n+2}{4}\right\rfloor
\end{array} & \begin{array}{c}
\textrm{ if }n\leq4;\\
\textrm{ if }n\leq8;\\
\textrm{ if }n=9;\\
\textrm{ if }n\geq10.
\end{array}\end{cases}\]
\end{theorem}
\begin{proof}
Let $\mathcal{A}=\{A_1,\dots,A_p\}$ and $\mathcal{B}=\{B_1,\dots,B_q\}$ be optimal ordered vertex partitions of $D$ and $D^c$ for a digrundy coloring, respectively. Suppose that $\mathcal{A}$ has $a_1$ sets of order one, $a_2$ sets of order two and $a_3$ sets of order at least three. Similarly, $\mathcal{B}$ has $b_1$ sets of order one, $b_2$ sets of order two and $b_3$ sets of order at least three. From this assumption and the definitions of the $a_i$ and $b_i$, we have $dG(D)=a_1+a_2+a_3$, $dG(D^c)=b_1+b_2+b_3$, $a_1+2a_2+3a_3\leq n$ and $b_1+2b_2+3b_3\leq n$. We can write
\begin{equation}\label{eq4}
a_1+2a_2+3a_3+\epsilon_a= n
\end{equation}
and
\begin{equation}\label{eq5}
b_1+2b_2+3b_3+\epsilon_b= n,
\end{equation}
where $\epsilon_a,\epsilon_b\geq 0$ denote the excesses.
Consider the sets of order one of $\mathcal{A}$ and $\mathcal{B}$. We may suppose (reordering if necessary) that they come last in the orderings.
Let $K=\{v\in A_i\colon |A_i|=1\}$ and $L=\{v\in B_j\colon |B_j|=1\}$ be the sets of vertices in singular classes of $D$ and $D^c$, respectively. Then $K$ spans a complete subdigraph in $D$ and $L$ spans an independent set in $D$, so $|K\cap L|\leq 1$. If $|K\cap L|=1$, say $\{x\}=K\cap L$, then $2(|\mathcal{A}|-1)\leq d^+_D(x)+d^-_D(x)$ and $2(|\mathcal{B}|-1)\leq d^+_{D^c}(x)+d^-_{D^c}(x)$, whence $dG(D)+dG(D^c)\le |\mathcal{A}|+|\mathcal{B}|\leq n+1<\frac{5n+5}{4}$.
Assume now that $K\cap L=\emptyset$; we prove that $dG(D)+dG(D^c)\le \frac{5n+5}{4}$. Let $\alpha_2$ and $\alpha_3$ be the numbers of sets in $\mathcal{A}$ contained in $L$ with two and at least three elements, respectively, let $\alpha = \alpha_2+\alpha_3$, and define $\beta_2$, $\beta_3$ and $\beta$ similarly for $\mathcal{B}$ with respect to $K$. Since $L$ (respectively $K$) is an independent set in $D$ (respectively in $D^c$), it follows that
$$\alpha, \beta \leq 1.$$
Classify the $2$-element sets of $\mathcal{A}$ into three groups: there are $a_{2,t}$ of them meeting $L$ in exactly $t$ elements ($t=0,1,2$). Define $b_{2,t}$ analogously (the number of $2$-sets of $\mathcal{B}$ meeting $K$ in $t$ vertices). We have
\[a_{2,2}=\alpha_2,\qquad a_2=a_{2,0}+a_{2,1}+a_{2,2},\qquad b_{2,2}=\beta_2,\qquad b_2=b_{2,0}+b_{2,1}+b_{2,2}.\]
All but $\alpha$ parts of $\mathcal{A}$ have points outside $L$, and at least $a_{2,0}$ of them have two or more. We get that $|\mathcal{A}|-\alpha + a_{2,0}\leq n-|L|$. Again, write this (and its analogue for $\mathcal{B}$, $|\mathcal{B}|-\beta + b_{2,0}\leq n-|K|$) in the following form:
\begin{equation}\label{eq6}
a_1 + a_2 + a_3 + a_{2,0} + b_1 = n + \alpha - \epsilon_\alpha
\end{equation}
\begin{equation}\label{eq7}
b_1 + b_2 + b_3 + b_{2,0} + a_1 = n + \beta - \epsilon_\beta
\end{equation}
Consider a two-element $\mathcal{A}$-set counted by $a_{2,1}$, say $\{v,v'\}$, that intersects $L$ in exactly one vertex, say $v\in L$ and $v'\not\in L$. Denote the set of these vertices $v\in L$ by $L_1$, and the set of vertices $v'\not\in L$ by $S$. Similarly, $K_1\colon = \{u \in K\colon \exists u' \not\in K$ such that $\{u,u'\}\in \mathcal{B}\}$, and $T\colon =\{u' \not\in K\colon \exists u \in K$ such that $\{u,u'\}\in \mathcal{B}\}$. We have
\[|S|=a_{2,1},\qquad S\cap (K \cup L) = \emptyset,\qquad |T| = b_{2,1},\qquad T\cap (K \cup L) = \emptyset.\]
\begin{claim}\label{claim6} $|S\cap T |\leq 1.$
\begin{proof}
The sets of order one of an optimal ordered partition can be taken such that they have the greatest color labels; otherwise, we can reorder them in such a way.
Assume, on the contrary, that $x_1,x_2\in S \cap T$. This means that there are $u_1,u_2\in L$ such that the two-element parts $\{u_1,x_1\}$ and $\{u_2,x_2\}$ belong to $\mathcal{A}$, and there are $v_1,v_2\in K$ such that $\{v_1,x_1\}$ and $\{v_2,x_2\}$ belong to $\mathcal{B}$. By definition, we already know the status of the pairs, namely $v_1v_2,v_2v_1\in F(D)$ and $u_1u_2,u_2u_1\notin F(D)$. Let $<_{\mathcal{A}}$ denote the position of the classes in the ordering $\mathcal{A}$.
By symmetry (between $\{u_1,x_1\}$ and $\{u_2,x_2\}$), we may suppose that the order of these classes of the partition ${\mathcal{A}}$ is
\[\{u_1,x_1\}<_{\mathcal{A}}\{u_2,x_2\}<_{\mathcal{A}}\{v_1\}<_{\mathcal{A}}\{v_2\}.\]
Then $\{u_1,x_1\}$ and $u_2$ implies $x_1u_2,u_2x_1\in F(D)$, i.e., $x_1u_2,u_2x_1\notin F(D^c)$. Therefore, $\{v_1,x_1\}$ and $u_2$ implies $v_1u_2,u_2v_1\in F(D^c)$ since $\{v_1,x_1\}<_{\mathcal{B}} \{u_2\}$, see Figure \ref{fig1} a).
Note that $\{u_2,x_2\}$ and $v_1$ implies $x_2v_1,v_1x_2\in F(D)$, i.e., $x_2v_1,v_1x_2\notin F(D^c)$. This implies that $\{v_1,x_1\}<_{\mathcal{B}}\{v_2,x_2\}$, since otherwise $v_1$ would violate the greedy requirement of the partition $\mathcal{B}$. Then, $\{v_1,x_1\}$ and $v_2$ implies $x_1v_2,v_2x_1\in F(D^c)$ and $\{v_1,x_1\}$ and $x_2$ implies $x_1x_2,x_2x_1\in F(D^c)$, i.e., $x_1v_2,v_2x_1\notin F(D)$ and $x_1x_2,x_2x_1\notin F(D)$, see Figure \ref{fig1} b).
Finally, $\{u_1,x_1\}$ and $x_2$ implies $x_2u_1,u_1x_2\in F(D)$, i.e., $x_2u_1,u_1x_2\notin F(D^c)$. On one hand, $\{u_1,x_1\}$ and $v_2$ implies $v_2u_1,u_1v_2\in F(D)$. On the other hand, $\{v_2,x_2\}$ and $u_1$ implies $v_2u_1,u_1v_2\in F(D^c)$, which is impossible, and the claim follows, see Figure \ref{fig1} c).
\begin{figure}[ht!]
\begin{center}
\includegraphics{fig1}
\caption{Proof of Claim \ref{claim6}. Digons are represented with edges and dashed edges represent edges in the complement.}\label{fig1}
\end{center}
\end{figure}
\end{proof}
\end{claim}
Claim \ref{claim6} shows that the sets $K$, $L$, $S$, $T$ are almost disjoint. Let $\gamma = |S\cap T|$ and denote by $n-\epsilon_\gamma$ the order of the union of these four sets. By Claim \ref{claim6}, $\gamma \le1$. We obtain
\begin{equation}\label{eq8}
|L \cup K| + |S\cup T | = a_1 + b_1 + a_{2,1} + b_{2,1} - \gamma = n - \epsilon_\gamma.
\end{equation}
Adding the five equalities (\ref{eq4})--(\ref{eq8}) and denoting $\epsilon = \epsilon_a + \epsilon_b + \epsilon_\alpha + \epsilon_\beta + \epsilon_\gamma$, we get
\[4(a_1 + a_2 + a_3 + b_1 + b_2 + b_3 ) = 5n + (\alpha + \beta + \gamma) + (\alpha_2 + \beta_2 ) - \epsilon = 5n + s.\]
That is, when $K\cap L=\emptyset$ we have $dG(D)+dG(D^c)\leq \frac{5n+s}{4}$ for some integer $s$. Since $\alpha_2\leq\alpha\le 1$, $\beta_2\leq\beta\le 1$ and $\gamma\le 1$, it follows that $s\leq 5$.
In both cases, $K\cap L\neq\emptyset$ and $K\cap L=\emptyset$, $dG(D)+dG(D^c)\leq \frac{5n+s}{4}$ for some $s\le5$. The following claim is essential in order to prove that $s\le4$.
\begin{claim}\label{claim7} If $\alpha = 1$ then
\begin{enumerate}
\item[(1)] there is no class $B \in \mathcal{B}$ with $B \subset S$;
\item[(2)] there is no class $B \in \mathcal{B}$, $B\subset S \cup K$ with $|B \cap S| = |B|-1$;
\item[(3)] there is no class $A_i \in \mathcal{A}$, $A_i \subset L \cup T$ with $|A_i \cap T | = 1$;
\item[(4)] $\gamma = 0$.
\end{enumerate}
\begin{proof}
Indeed, $\alpha = 1$ gives a class $A_j\subseteq L$ belonging to $\mathcal{A}$. The first two statements are based on the fact that $D[S,A_j]$ is a complete bipartite digraph. Let $w\in A_j$ and $y\in S$. Then there is a $u\in L$ such that $\{y,u\}\in \mathcal{A}$. Since $L$ is independent, the greedy requirement between $u$ and $A_j$ implies that $u$ (and its class $\{y,u\}$) precedes $A_j$ in $\mathcal{A}$. Then there must be a directed cycle between $w$ and the class $\{y,u\}$; as $L$ is independent, it must be the digon $wy$, $yw$, and thus $D[S,A_j]$ is a complete bipartite digraph.
In order to prove (1) suppose, for a contradiction, that $B\subset S$ for some $B\in \mathcal{B}$. Take any element $w \in A_j$. Then $w\in L$, which by the definition of $L$ gives $\{w\}\in \mathcal{B}$ too, and thus there must be a non-arc of $D$ between $w$ and $B$, a contradiction.
To prove (2) suppose, on the contrary, that $B\in \mathcal{B}$, $B\subseteq K\cup S$, and $B\cap K = \{v\}$. Since $\{w\}\in\mathcal{B}$ for all $w\in A_j$, there is a non-arc of $D$ between $w$ and $B$; therefore $vw$ and $wv$ are arcs of $D^c$. Consider $\{v\}\in \mathcal{A}$ and $A_j$. There should be a directed cycle $vw_1w_2v$ with $w_1,w_2\in A_j$, which is impossible since $A_j$ is independent, a contradiction.
To prove (3) suppose $A_i \cap T = \{x\}$ and $(A_i\setminus \{x\})\subseteq L$. Notice that $i<j$, since otherwise $u\in A_i\cap L$ would violate the greedy requirement between $u$ and $A_j$ in $\mathcal{A}$. Then there is a digon from each $w\in A_j$ to $x$. By the definition of $T$ there is a $v\in K$ such that $\{v,x\}\in \mathcal{B}$. Considering $\{w\}$ and $\{v,x\}$ in $\mathcal{B}$, it follows that $vw,wv \in F(D^c)$ (for every $w\in A_j$). Then the greedy requirement on $D$ is violated between the classes $A_j$ and $\{v\}\in\mathcal{A}$.
Note that (4) is a particular case of (3).
\end{proof}
\end{claim}
Similar to Claim \ref{claim7}, if $\beta=1$, then $\gamma = 0$. Conversely, $\gamma = 1$ implies $\alpha = \beta = 0$, hence $s \leq 1$ and we are done. From now on, we suppose that $\gamma = 0$, that is, $|S \cap T| = 0$, and then $s \leq 4$ and $dG(D)+dG(D^c)\leq \frac{5n+4}{4}.$
In the sequel, we will prove that for $n\geq 10$, $s\le2$ and in this case we have that $dG(D)+dG(D^c)\leq \frac{5n+2}{4}$.
Since $s\leq 2(\alpha + \beta)-\epsilon$, if $\alpha + \beta \leq 1$ or $\epsilon \geq 2$ it follows that $s \leq 2$.
Assume that $\alpha = \beta = 1$ and $ \epsilon \leq 1$. In this case there exists a class $A'\in \mathcal{A}$, $A'\subseteq L$ (naturally, it is disjoint from $L_1$), and there exists a class $B'\in \mathcal{B}$, $B'\subseteq K$ (and $B'\cap K_1=\emptyset$).
We claim that there is no class $A\in\mathcal{A}$ contained in $L\cup T$, other than $A'$.
Claim \ref{claim7} implies that such a class $A$ intersects both $L$ and $T$ in at least two vertices. If such an $A$ exists, then $\epsilon_a \geq 1$ in Equation (\ref{eq4}); also, $A$ would be counted twice on the left-hand side of Equation (\ref{eq6}), implying $\epsilon_\alpha \geq 1$, contradicting $\epsilon \leq 1$.
Similarly, there is no second $B$-class in $K\cup S$.
Let $W=V(D)\setminus (K \cup L \cup S \cup T )$, $|W|=\epsilon_\gamma$.
Consider the case $W=\emptyset$. Then there is no $\mathcal{A}$-class covering the points of $T$, so $T$ must be empty. Similarly, $S=\emptyset$ follows. Then $V(D)= K\cup L$, hence $dG(D)+dG(D^c)\leq n+2$ and we are done.
Let $W\not= \emptyset$. Since $ \epsilon \leq 1$, it follows that $|W| = 1$ and $\epsilon_a = \epsilon_b = \epsilon_\alpha = \epsilon_\beta = 0$. Let $A''$ be the $\mathcal{A}$-class covering $W$. There are no more $\mathcal{A}$-classes in $T\cup (L \setminus L_1 ) \cup W$, so $|\mathcal{A}| = |K| + |S| + 2$. Similarly, $W$ is covered by a class $B''\in \mathcal{B}$ and $|\mathcal{B}| = |L| + |T| + 2$, giving $dG(D)+dG(D^c)\leq n+3$. Since $n + 3 \leq (5n + 2)/4$ for $n \geq 10$, we are done in that case.
To finish, let $n\leq 9$. In these cases $dG(D)+dG(D^c)\leq \frac{5n+4}{4}$ implies $dG(D)+dG(D^c)\leq n+3$. We claim that if $dG(D)+dG(D^c)= n+3$ then $n=9$.
To prove this, sum the orders of the following seven pairwise disjoint sets:
\[n \geq |A'|+|B'|+|A''\setminus E|+|K_1|+|B''\setminus E|+|L_1|+|E|.\]
Here $|A'|\geq 2$, $|B'|\geq 2$, $|E|=1$. It is easy to see that $|A''\setminus E| + |K_1 |\geq 2$ and $|B''\setminus E|+|L_1|\geq 2$. Indeed, $K_1 = \emptyset$ implies $T=\emptyset$ and $A''\subseteq L \cup E$. Since $E\notin S$ we get $|A''|\geq 3$.
\end{proof}
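As an illustration (ours, not part of the paper), the small cases of the theorem can be checked by brute force, taking the digrundy number to be the maximum, over all vertex orderings, of the number of colors used by the greedy acyclic coloring. For order $n=3$ the claimed bound is $n+1=4$:

```python
from itertools import permutations, product

def is_acyclic(vs, arcs):
    """Kahn's algorithm on the subdigraph induced by `vs`."""
    vs = set(vs)
    indeg = {v: 0 for v in vs}
    out = {v: [] for v in vs}
    for u, w in arcs:
        if u in vs and w in vs:
            indeg[w] += 1
            out[u].append(w)
    stack = [v for v in vs if indeg[v] == 0]
    removed = 0
    while stack:
        v = stack.pop()
        removed += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return removed == len(vs)

def digrundy(n, arcs):
    """Maximum number of colors a greedy acyclic coloring can use,
    over all orderings of the n vertices (brute force)."""
    best = 0
    for order in permutations(range(n)):
        classes = []
        for v in order:
            for cls in classes:
                if is_acyclic(cls | {v}, arcs):
                    cls.add(v)
                    break
            else:
                classes.append({v})
        best = max(best, len(classes))
    return best

# check dG(D) + dG(D^c) <= n + 1 over all 64 digraphs of order 3
n = 3
pairs = [(u, w) for u in range(n) for w in range(n) if u != w]
for mask in product([0, 1], repeat=len(pairs)):
    arcs = [p for p, bit in zip(pairs, mask) if bit]
    comp = [p for p in pairs if p not in arcs]
    assert digrundy(n, arcs) + digrundy(n, comp) <= n + 1
```

Again the extremal example is the complete digraph ($dG(D)=3$) against its empty complement ($dG(D^c)=1$).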
\section*{Acknowledgments}
Partially supported by PAPIIT-M\'exico: IN108121, CONACyT-M\'exico 282280, A1-S-12891, 47510664.
https://arxiv.org/abs/2109.02241 | Supervised DKRC with Images for Offline System Identification

Koopman spectral theory has provided a new perspective in the field of dynamical systems in recent years. Modern dynamical systems are becoming increasingly non-linear and complex, and there is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control. The central problem in applying Koopman theory to a system of interest is that the choice of finite-dimensional basis functions is typically done apriori, using expert knowledge of the systems dynamics. Our approach learns these basis functions using a supervised learning approach where a combination of autoencoders and deep neural networks learn the basis functions for any given system. We demonstrate this approach on a simple pendulum example in which we obtain a linear representation of the non-linear system and then predict the future state trajectories given some initial conditions. We also explore how changing the input representation of the dynamic systems time series data can impact the quality of learned basis functions. This alternative representation is compared to the traditional raw time series data approach to determine which method results in lower reconstruction and prediction error of the true non-linear dynamics of the system.

\section{Introduction}
\label{sec:introduction}
This document is a template for \LaTeX. If you are
reading a paper or PDF version of this document, please download the
electronic file, trans\_jour.tex, from the IEEE Web site at \underline
{http://www.ieee.org/authortools/trans\_jour.tex} so you can use it to
prepare your manuscript. IEEE's \LaTeX\ style and sample files are
available from the same Web page. You can also explore using the Overleaf editor at
\underline
{https://www.overleaf.com/blog/278-how-to-use-overleaf-with-}\discretionary{}{}{}\underline
{ieee-collabratec-your-quick-guide-to-getting-started\#.}\discretionary{}{}{}\underline{xsVp6tpPkrKM9}.
If your paper is intended for a conference, please contact your conference
editor concerning acceptable word processor formats for your particular
conference.
IEEE will do the final formatting of your paper. If your paper is intended
for a conference, please observe the conference page limits.
\subsection{Abbreviations and Acronyms}
Define abbreviations and acronyms the first time they are used in the text,
even after they have already been defined in the abstract. Abbreviations
such as IEEE, SI, ac, and dc do not have to be defined. Abbreviations that
incorporate periods should not have spaces: write ``C.N.R.S.,'' not ``C. N.
R. S.'' Do not use abbreviations in the title unless they are unavoidable
(for example, ``IEEE'' in the title of this article).
\subsection{Other Recommendations}
Use one space after periods and colons. Hyphenate complex modifiers:
``zero-field-cooled magnetization.'' Avoid dangling participles, such as,
``Using \eqref{eq}, the potential was calculated.'' [It is not clear who or what
used \eqref{eq}.] Write instead, ``The potential was calculated by using \eqref{eq},'' or
``Using \eqref{eq}, we calculated the potential.''
Use a zero before decimal points: ``0.25,'' not ``.25.'' Use
``cm$^{3}$,'' not ``cc.'' Indicate sample dimensions as ``0.1 cm
$\times $ 0.2 cm,'' not ``0.1 $\times $ 0.2 cm$^{2}$.'' The
abbreviation for ``seconds'' is ``s,'' not ``sec.'' Use
``Wb/m$^{2}$'' or ``webers per square meter,'' not
``webers/m$^{2}$.'' When expressing a range of values, write ``7 to
9'' or ``7--9,'' not ``7$\sim $9.''
A parenthetical statement at the end of a sentence is punctuated outside of
the closing parenthesis (like this). (A parenthetical sentence is punctuated
within the parentheses.) In American English, periods and commas are within
quotation marks, like ``this period.'' Other punctuation is ``outside''!
Avoid contractions; for example, write ``do not'' instead of ``don't.'' The
serial comma is preferred: ``A, B, and C'' instead of ``A, B and C.''
If you wish, you may write in the first person singular or plural and use
the active voice (``I observed that $\ldots$'' or ``We observed that $\ldots$''
instead of ``It was observed that $\ldots$''). Remember to check spelling. If
your native language is not English, please get a native English-speaking
colleague to carefully proofread your paper.
Try not to use too many typefaces in the same article. You're writing
scholarly papers, not ransom notes. Also please remember that MathJax
can't handle really weird typefaces.
\subsection{Equations}
Number equations consecutively with equation numbers in parentheses flush
with the right margin, as in \eqref{eq}. To make your equations more
compact, you may use the solidus (~/~), the exp function, or appropriate
exponents. Use parentheses to avoid ambiguities in denominators. Punctuate
equations when they are part of a sentence, as in
\begin{equation}E=mc^2.\label{eq}\end{equation}
Be sure that the symbols in your equation have been defined before the
equation appears or immediately following. Italicize symbols ($T$ might refer
to temperature, but T is the unit tesla). Refer to ``\eqref{eq},'' not ``Eq. \eqref{eq}''
or ``equation \eqref{eq},'' except at the beginning of a sentence: ``Equation \eqref{eq}
is $\ldots$ .''
\subsection{\LaTeX-Specific Advice}
Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead
of ``hard'' references (e.g., \verb|(1)|). That will make it possible
to combine sections, add equations, or change the order of figures or
citations without having to go through the file line by line.
Please don't use the \verb|{eqnarray}| equation environment. Use
\verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}|
environment leaves unsightly spaces around relation symbols.
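For instance, a short two-line derivation that would pick up stray spacing in \verb|{eqnarray}| can be set with \verb|{align}| (illustrative snippet; the equation content and labels are placeholders):

```latex
\begin{align}
E   &= m c^2, \label{eq:first}\\
E^2 &= (m c^2)^2 + (p c)^2. \label{eq:second}
\end{align}
```

The \verb|&| markers align the relation symbols across both lines without the extra space \verb|{eqnarray}| would insert.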
Please note that the \verb|{subequations}| environment in {\LaTeX}
will increment the main equation counter even when there are no
equation numbers displayed. If you forget that, you might write an
article in which the equation numbers skip from (17) to (20), causing
the copy editors to wonder if you've discovered a new method of
counting.
{\BibTeX} does not work by magic. It doesn't get the bibliographic
data from thin air but from .bib files. If you use {\BibTeX} to produce a
bibliography you must send the .bib files.
{\LaTeX} can't read your mind. If you assign the same label to a
subsubsection and a table, you might find that Table I has been cross
referenced as Table IV-B3.
{\LaTeX} does not have precognitive abilities. If you put a
\verb|\label| command before the command that updates the counter it's
supposed to be using, the label will pick up the last counter to be
cross referenced instead. In particular, a \verb|\label| command
should not go before the caption of a figure or a table.
Do not use \verb|\nonumber| inside the \verb|{array}| environment. It
will not stop equation numbers inside \verb|{array}| (there won't be
any anyway) and it might stop a wanted equation number in the
surrounding equation.
If you are submitting your paper to a colorized journal, you can use
the following two lines at the start of the article to ensure its
appearance resembles the final copy:
\smallskip\noindent
\begin{small}
\begin{tabular}{l}
\verb+\+\texttt{documentclass[journal,twoside,web]\{ieeecolor\}}\\
\verb+\+\texttt{usepackage\{\textit{Journal\_Name}\}}
\end{tabular}
\end{small}
\section{Units}
Use either SI (MKS) or CGS as primary units. (SI units are strongly
encouraged.) English units may be used as secondary units (in parentheses).
This applies to papers in data storage. For example, write ``15
Gb/cm$^{2}$ (100 Gb/in$^{2})$.'' An exception is when
English units are used as identifiers in trade, such as ``3\textonehalf-in
disk drive.'' Avoid combining SI and CGS units, such as current in amperes
and magnetic field in oersteds. This often leads to confusion because
equations do not balance dimensionally. If you must use mixed units, clearly
state the units for each quantity in an equation.
The SI unit for magnetic field strength $H$ is A/m. However, if you wish to use
units of T, either refer to magnetic flux density $B$ or magnetic field
strength symbolized as $\mu _{0}H$. Use the center dot to separate
compound units, e.g., ``A$\cdot $m$^{2}$.''
\section{Some Common Mistakes}
The word ``data'' is plural, not singular. The subscript for the
permeability of vacuum $\mu _{0}$ is zero, not a lowercase letter
``o.'' The term for residual magnetization is ``remanence''; the adjective
is ``remanent''; do not write ``remnance'' or ``remnant.'' Use the word
``micrometer'' instead of ``micron.'' A graph within a graph is an
``inset,'' not an ``insert.'' The word ``alternatively'' is preferred to the
word ``alternately'' (unless you really mean something that alternates). Use
the word ``whereas'' instead of ``while'' (unless you are referring to
simultaneous events). Do not use the word ``essentially'' to mean
``approximately'' or ``effectively.'' Do not use the word ``issue'' as a
euphemism for ``problem.'' When compositions are not specified, separate
chemical symbols by en-dashes; for example, ``NiMn'' indicates the
intermetallic compound Ni$_{0.5}$Mn$_{0.5}$ whereas
``Ni--Mn'' indicates an alloy of some composition
Ni$_{x}$Mn$_{1-x}$.
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{fig1.png}}
\caption{Magnetization as a function of applied field.
It is good practice to explain the significance of the figure in the caption.}
\label{fig1}
\end{figure}
Be aware of the different meanings of the homophones ``affect'' (usually a
verb) and ``effect'' (usually a noun), ``complement'' and ``compliment,''
``discreet'' and ``discrete,'' ``principal'' (e.g., ``principal
investigator'') and ``principle'' (e.g., ``principle of measurement''). Do
not confuse ``imply'' and ``infer.''
Prefixes such as ``non,'' ``sub,'' ``micro,'' ``multi,'' and ``ultra'' are
not independent words; they should be joined to the words they modify,
usually without a hyphen. There is no period after the ``et'' in the Latin
abbreviation ``\emph{et al.}'' (it is also italicized). The abbreviation ``i.e.,'' means
``that is,'' and the abbreviation ``e.g.,'' means ``for example'' (these
abbreviations are not italicized).
A general IEEE styleguide is available at \underline{http://www.ieee.org/authortools}.
\section{Guidelines for Graphics Preparation and Submission}
\label{sec:guidelines}
\subsection{Types of Graphics}
The following list outlines the different types of graphics published in
IEEE journals. They are categorized based on their construction, and use of
color/shades of gray:
\subsubsection{Color/Grayscale figures}
{Figures that are meant to appear in color, or shades of black/gray. Such
figures may include photographs, illustrations, multicolor graphs, and
flowcharts.}
\subsubsection{Line Art figures}
{Figures that are composed of only black lines and shapes. These figures
should have no shades or half-tones of gray, only black and white.}
\subsubsection{Author photos}
{Head and shoulders shots of authors that appear at the end of our papers. }
\subsubsection{Tables}
{Data charts which are typically black and white, but sometimes include
color.}
\begin{table}
\caption{Units for Magnetic Properties}
\label{table}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{|p{25pt}|p{75pt}|p{115pt}|}
\hline
Symbol&
Quantity&
Conversion from Gaussian and \par CGS EMU to SI $^{\mathrm{a}}$ \\
\hline
$\Phi $&
magnetic flux&
1 Mx $\to 10^{-8}$ Wb $= 10^{-8}$ V$\cdot $s \\
$B$&
magnetic flux density, \par magnetic induction&
1 G $\to 10^{-4}$ T $= 10^{-4}$ Wb/m$^{2}$ \\
$H$&
magnetic field strength&
1 Oe $\to 10^{3}/(4\pi )$ A/m \\
$m$&
magnetic moment&
1 erg/G $=$ 1 emu \par $\to 10^{-3}$ A$\cdot $m$^{2} = 10^{-3}$ J/T \\
$M$&
magnetization&
1 erg/(G$\cdot $cm$^{3}) =$ 1 emu/cm$^{3}$ \par $\to 10^{3}$ A/m \\
4$\pi M$&
magnetization&
1 G $\to 10^{3}/(4\pi )$ A/m \\
$\sigma $&
specific magnetization&
1 erg/(G$\cdot $g) $=$ 1 emu/g $\to $ 1 A$\cdot $m$^{2}$/kg \\
$j$&
magnetic dipole \par moment&
1 erg/G $=$ 1 emu \par $\to 4\pi \times 10^{-10}$ Wb$\cdot $m \\
$J$&
magnetic polarization&
1 erg/(G$\cdot $cm$^{3}) =$ 1 emu/cm$^{3}$ \par $\to 4\pi \times 10^{-4}$ T \\
$\chi , \kappa $&
susceptibility&
1 $\to 4\pi $ \\
$\chi_{\rho }$&
mass susceptibility&
1 cm$^{3}$/g $\to 4\pi \times 10^{-3}$ m$^{3}$/kg \\
$\mu $&
permeability&
1 $\to 4\pi \times 10^{-7}$ H/m \par $= 4\pi \times 10^{-7}$ Wb/(A$\cdot $m) \\
$\mu_{r}$&
relative permeability&
$\mu \to \mu_{r}$ \\
$w, W$&
energy density&
1 erg/cm$^{3} \to 10^{-1}$ J/m$^{3}$ \\
$N, D$&
demagnetizing factor&
1 $\to 1/(4\pi )$ \\
\hline
\multicolumn{3}{p{251pt}}{Vertical lines are optional in tables. Statements that serve as captions for
the entire table do not need footnote letters. }\\
\multicolumn{3}{p{251pt}}{$^{\mathrm{a}}$Gaussian units are the same as cgs emu for magnetostatics; Mx
$=$ maxwell, G $=$ gauss, Oe $=$ oersted; Wb $=$ weber, V $=$ volt, s $=$
second, T $=$ tesla, m $=$ meter, A $=$ ampere, J $=$ joule, kg $=$
kilogram, H $=$ henry.}
\end{tabular}
\label{tab1}
\end{table}
\subsection{Multipart figures}
Figures compiled of more than one sub-figure presented side-by-side, or
stacked. If a multipart figure is made up of multiple figure
types (one part is lineart, and another is grayscale or color) the figure
should meet the stricter guidelines.
\subsection{File Formats For Graphics}\label{formats}
Format and save your graphics using a suitable graphics processing program
that will allow you to create the images as PostScript (PS), Encapsulated
PostScript (.EPS), Tagged Image File Format (.TIFF), Portable Document
Format (.PDF), Portable Network Graphics (.PNG), or Metapost (.MPS); size
them and adjust the resolution settings there. When
submitting your final paper, your graphics should all be submitted
individually in one of these formats along with the manuscript.
\subsection{Sizing of Graphics}
Most charts, graphs, and tables are one column wide (3.5 inches/88
millimeters/21 picas) or page wide (7.16 inches/181 millimeters/43
picas). The maximum depth a graphic can be is 8.5 inches (216 millimeters/54
picas). When choosing the depth of a graphic, please allow space for a
caption. Figures can be sized between column and page widths if the author
chooses; however, it is recommended that figures not be sized below
column width unless necessary.
There is currently one publication with column measurements that do not
coincide with those listed above. Proceedings of the IEEE has a column
measurement of 3.25 inches (82.5 millimeters/19.5 picas).
The final printed size of author photographs is exactly
1 inch wide by 1.25 inches tall (25.4 millimeters$\,\times\,$31.75 millimeters/6
picas$\,\times\,$7.5 picas). Author photos printed in editorials measure 1.59 inches
wide by 2 inches tall (40 millimeters$\,\times\,$50 millimeters/9.5 picas$\,\times\,$12
picas).
\subsection{Resolution }
The proper resolution of your figures will depend on the type of figure it
is as defined in the ``Types of Figures'' section. Author photographs,
color, and grayscale figures should be at least 300dpi. Line art, including
tables should be a minimum of 600dpi.
\subsection{Vector Art}
In order to preserve the figures' integrity across multiple computer
platforms, we accept files in the following formats: .EPS/.PDF/.PS. All
fonts must be embedded or text converted to outlines in order to achieve the
best-quality results.
\subsection{Color Space}
The term color space refers to the entire sum of colors that can be
represented within the said medium. For our purposes, the three main color
spaces are Grayscale, RGB (red/green/blue) and CMYK
(cyan/magenta/yellow/black). RGB is generally used with on-screen graphics,
whereas CMYK is used for printing purposes.
All color figures should be generated in RGB or CMYK color space. Grayscale
images should be submitted in Grayscale color space. Line art may be
provided in grayscale OR bitmap colorspace. Note that ``bitmap colorspace''
and ``bitmap file format'' are not the same thing. When bitmap color space
is selected, .TIF/.TIFF/.PNG are the recommended file formats.
\subsection{Accepted Fonts Within Figures}
When preparing your graphics IEEE suggests that you use of one of the
following Open Type fonts: Times New Roman, Helvetica, Arial, Cambria, and
Symbol. If you are supplying EPS, PS, or PDF files all fonts must be
embedded. Some fonts may only be native to your operating system; without
the fonts embedded, parts of the graphic may be distorted or missing.
A safe option when finalizing your figures is to strip out the fonts before
you save the files, creating ``outline'' type. This converts fonts to
artwork that will appear uniformly on any screen.
\subsection{Using Labels Within Figures}
\subsubsection{Figure Axis labels }
Figure axis labels are often a source of confusion. Use words rather than
symbols. As an example, write the quantity ``Magnetization,'' or
``Magnetization M,'' not just ``M.'' Put units in parentheses. Do not label
axes only with units. As in Fig. 1, for example, write ``Magnetization
(A/m)'' or ``Magnetization (A$\cdot$m$^{-1}$),'' not just ``A/m.'' Do not label axes with a ratio of quantities and
units. For example, write ``Temperature (K),'' not ``Temperature/K.''
Multipliers can be especially confusing. Write ``Magnetization (kA/m)'' or
``Magnetization (10$^{3}$ A/m).'' Do not write ``Magnetization
(A/m)$\,\times\,$1000'' because the reader would not know whether the top
axis label in Fig. 1 meant 16000 A/m or 0.016 A/m. Figure labels should be
legible, approximately 8 to 10 point type.
\subsubsection{Subfigure Labels in Multipart Figures and Tables}
Multipart figures should be combined and labeled before final submission.
Labels should appear centered below each subfigure in 8 point Times New
Roman font in the format of (a) (b) (c).
\subsection{File Naming}
Figures (line artwork or photographs) should be named starting with the
first 5 letters of the author's last name. The next characters in the
filename should be the number that represents the sequential
location of this image in your article. For example, in author
``Anderson's'' paper, the first three figures would be named ander1.tif,
ander2.tif, and ander3.ps.
Tables should contain only the body of the table (not the caption) and
should be named similarly to figures, except that `.t' is inserted
in-between the author's name and the table number. For example, author
Anderson's first three tables would be named ander.t1.tif, ander.t2.ps,
ander.t3.eps.
Author photographs should be named using the first five characters of the
pictured author's last name. For example, four author photographs for a
paper may be named: oppen.ps, moshc.tif, chen.eps, and duran.pdf.
If two authors or more have the same last name, their first initial(s) can
be substituted for the fifth, fourth, third$\ldots$ letters of their surname
until the degree where there is differentiation. For example, two authors
Michael and Monica Oppenheimer's photos would be named oppmi.tif, and
oppmo.eps.
\subsection{Referencing a Figure or Table Within Your Paper}
When referencing your figures and tables within your paper, use the
abbreviation ``Fig.'' even at the beginning of a sentence. Do not abbreviate
``Table.'' Tables should be numbered with Roman Numerals.
\subsection{Checking Your Figures: The IEEE Graphics Analyzer}
The IEEE Graphics Analyzer enables authors to pre-screen their graphics for
compliance with IEEE Transactions and Journals standards before submission.
The online tool, located at
\underline{http://graphicsqc.ieee.org/}, allows authors to
upload their graphics in order to check that each file is the correct file
format, resolution, size and colorspace; that no fonts are missing or
corrupt; that figures are not compiled in layers or have transparency, and
that they are named according to the IEEE Transactions and Journals naming
convention. At the end of this automated process, authors are provided with
a detailed report on each graphic within the web applet, as well as by
email.
For more information on using the Graphics Analyzer or any other graphics
related topic, contact the IEEE Graphics Help Desk by e-mail at
graphics@ieee.org.
\subsection{Submitting Your Graphics}
Because IEEE will do the final formatting of your paper,
you do not need to position figures and tables at the top and bottom of each
column. In fact, all figures, figure captions, and tables can be placed at
the end of your paper. In addition to, or even in lieu of submitting figures
within your final manuscript, figures should be submitted individually,
separate from the manuscript in one of the file formats listed above in
Section \ref{formats}. Place figure captions below the figures; place table titles
above the tables. Please do not include captions as part of the figures, or
put them in ``text boxes'' linked to the figures. Also, do not place borders
around the outside of your figures.
\subsection{Color Processing/Printing in IEEE Journals}
All IEEE Transactions, Journals, and Letters allow an author to publish
color figures on IEEE Xplore\textregistered\ at no charge, and automatically
convert them to grayscale for print versions. In most journals, figures and
tables may alternatively be printed in color if an author chooses to do so.
Please note that this service comes at an extra expense to the author. If
you intend to have print color graphics, include a note with your final
paper indicating which figures or tables you would like to be handled that
way, and stating that you are willing to pay the additional fee.
\section{Conclusion}
A conclusion section is not required. Although a conclusion may review the
main points of the paper, do not replicate the abstract as the conclusion. A
conclusion might elaborate on the importance of the work or suggest
applications and extensions.
\appendices
Appendixes, if needed, appear before the acknowledgment.
\section*{Acknowledgment}
The preferred spelling of the word ``acknowledgment'' in American English is
without an ``e'' after the ``g.'' Use the singular heading even if you have
many acknowledgments. Avoid expressions such as ``One of us (S.B.A.) would
like to thank $\ldots$ .'' Instead, write ``F. A. Author thanks $\ldots$ .'' In most
cases, sponsor and financial support acknowledgments are placed in the
unnumbered footnote on the first page, not here.
\section*{References and Footnotes}
\subsection{References}
References need not be cited in text. When they are, they appear on the
line, in square brackets, inside the punctuation. Multiple references are
each numbered with separate brackets. When citing a section in a book,
please give the relevant page numbers. In text, refer simply to the
reference number. Do not use ``Ref.'' or ``reference'' except at the
beginning of a sentence: ``Reference \cite{b3} shows $\ldots$ .'' Please do not use
automatic endnotes in \emph{Word}, rather, type the reference list at the end of the
paper using the ``References'' style.
Reference numbers are set flush left and form a column of their own, hanging
out beyond the body of the reference. The reference numbers are on the line,
enclosed in square brackets. In all references, the given name of the author
or editor is abbreviated to the initial only and precedes the last name. Use
them all; use \emph{et al.} only if names are not given. Use commas around Jr.,
Sr., and III in names. Abbreviate conference titles. When citing IEEE
transactions, provide the issue number, page range, volume number, year,
and/or month if available. When referencing a patent, provide the day and
the month of issue, or application. References may not include all
information; please obtain and include relevant information. Do not combine
references. There must be only one reference with each number. If there is a
URL included with the print reference, it can be included at the end of the
reference.
Other than books, capitalize only the first word in a paper title, except
for proper nouns and element symbols. For papers published in translation
journals, please give the English citation first, followed by the original
foreign-language citation. See the end of this document for formats and
examples of common references. For a complete discussion of references and
their formats, see the IEEE style manual at
\underline{http://www.ieee.org/authortools}.
\subsection{Footnotes}
Number footnotes separately in superscript numbers.\footnote{It is recommended that footnotes be avoided (except for
the unnumbered footnote with the receipt date on the first page). Instead,
try to integrate the footnote information into the text.} Place the actual
footnote at the bottom of the column in which it is cited; do not put
footnotes in the reference list (endnotes). Use letters for table footnotes
(see Table \ref{table}).
\section{Submitting Your Paper for Review}
\subsection{Final Stage}
When you submit your final version (after your paper has been accepted),
print it in two-column format, including figures and tables. You must also
send your final manuscript on a disk, via e-mail, or through a Web
manuscript submission system as directed by the society contact. You may use
\emph{Zip} for large files, or compress files using \emph{Compress, Pkzip, Stuffit,} or \emph{Gzip.}
Also, send a sheet of paper or PDF with complete contact information for all
authors. Include full mailing addresses, telephone numbers, fax numbers, and
e-mail addresses. This information will be used to send each author a
complimentary copy of the journal in which the paper appears. In addition,
designate one author as the ``corresponding author.'' This is the author to
whom proofs of the paper will be sent. Proofs are sent to the corresponding
author only.
\subsection{Review Stage Using ScholarOne\textregistered\ Manuscripts}
Contributions to the Transactions, Journals, and Letters may be submitted
electronically on IEEE's on-line manuscript submission and peer-review
system, ScholarOne\textregistered\ Manuscripts. You can get a listing of the
publications that participate in ScholarOne at
\underline{http://www.ieee.org/publications\_standards/publications/}\discretionary{}{}{}\underline{authors/authors\_submission.html}.
First check if you have an existing account. If there is none, please create
a new account. After logging in, go to your Author Center and click ``Submit
First Draft of a New Manuscript.''
Along with other information, you will be asked to select the subject from a
pull-down list. Depending on the journal, there are various steps to the
submission process; you must complete all steps for a complete submission.
At the end of each step you must click ``Save and Continue''; just uploading
the paper is not sufficient. After the last step, you should see a
confirmation that the submission is complete. You should also receive an
e-mail confirmation. For inquiries regarding the submission of your paper on
ScholarOne Manuscripts, please contact oprs-support@ieee.org or call +1 732
465 5861.
ScholarOne Manuscripts will accept files for review in various formats.
Please check the guidelines of the specific journal for which you plan to
submit.
You will be asked to file an electronic copyright form immediately upon
completing the submission process (authors are responsible for obtaining any
security clearances). Failure to submit the electronic copyright could
result in publishing delays later. You will also have the opportunity to
designate your article as ``open access'' if you agree to pay the IEEE open
access fee.
\subsection{Final Stage Using ScholarOne Manuscripts}
Upon acceptance, you will receive an email with specific instructions
regarding the submission of your final files. To avoid any delays in
publication, please be sure to follow these instructions. Most journals
require that final submissions be uploaded through ScholarOne Manuscripts,
although some may still accept final submissions via email. Final
submissions should include source files of your accepted manuscript, high
quality graphic files, and a formatted pdf file. If you have any questions
regarding the final submission process, please contact the administrative
contact for the journal.
In addition to this, upload a file with complete contact information for all
authors. Include full mailing addresses, telephone numbers, fax numbers, and
e-mail addresses. Designate the author who submitted the manuscript on
ScholarOne Manuscripts as the ``corresponding author.'' This is the only
author to whom proofs of the paper will be sent.
\subsection{Copyright Form}
Authors must submit an electronic IEEE Copyright Form (eCF) upon submitting
their final manuscript files. You can access the eCF system through your
manuscript submission system or through the Author Gateway. You are
responsible for obtaining any necessary approvals and/or security
clearances. For additional information on intellectual property rights,
visit the IEEE Intellectual Property Rights department web page at
\underline{http://www.ieee.org/publications\_standards/publications/rights/}\discretionary{}{}{}\underline{index.html}.
\section{IEEE Publishing Policy}
The general IEEE policy requires that authors should only submit original
work that has neither appeared elsewhere for publication, nor is under
review for another refereed publication. The submitting author must disclose
all prior publication(s) and current submissions when submitting a
manuscript. Do not publish ``preliminary'' data or results. The submitting
author is responsible for obtaining agreement of all coauthors and any
consent required from employers or sponsors before submitting an article.
The IEEE Transactions and Journals Department strongly discourages courtesy
authorship; it is the obligation of the authors to cite only relevant prior
work.
The IEEE Transactions and Journals Department does not publish conference
records or proceedings, but can publish articles related to conferences that
have undergone rigorous peer review. Minimally, two reviews are required for
every article submitted for peer review.
\section{Publication Principles}
The two types of content that are published are: 1) peer-reviewed and 2)
archival. The Transactions and Journals Department publishes scholarly
articles of archival value as well as tutorial expositions and critical
reviews of classical subjects and topics of current interest.
Authors should consider the following points:
\begin{enumerate}
\item Technical papers submitted for publication must advance the state of knowledge and must cite relevant prior work.
\item The length of a submitted paper should be commensurate with the importance, or appropriate to the complexity, of the work. For example, an obvious extension of previously published work might not be appropriate for publication or might be adequately treated in just a few pages.
\item Authors must convince both peer reviewers and the editors of the scientific and technical merit of a paper; the standards of proof are higher when extraordinary or unexpected results are reported.
\item Because replication is required for scientific progress, papers submitted for publication must provide sufficient information to allow readers to perform similar experiments or calculations and
use the reported results. Although not everything need be disclosed, a paper
must contain new, useable, and fully described information. For example, a
specimen's chemical composition need not be reported if the main purpose of
a paper is to introduce a new measurement technique. Authors should expect
to be challenged by reviewers if the results are not supported by adequate
data and critical details.
\item Papers that describe ongoing work or announce the latest technical achievement, which are suitable for presentation at a professional conference, may not be appropriate for publication.
\end{enumerate}
\section{Reference Examples}
\begin{itemize}
\item \emph{Basic format for books:}\\
J. K. Author, ``Title of chapter in the book,'' in \emph{Title of His Published Book, x}th ed. City of Publisher, (only U.S. State), Country: Abbrev. of Publisher, year, ch. $x$, sec. $x$, pp. \emph{xxx--xxx.}\\
See \cite{b1,b2}.
\item \emph{Basic format for periodicals:}\\
J. K. Author, ``Name of paper,'' \emph{Abbrev. Title of Periodical}, vol. $x$, no. $x$, pp. \emph{xxx--xxx}, Abbrev. Month, year, DOI: 10.1109.\emph{XXX}.123456.\\
See \cite{b3}--\cite{b5}.
\item \emph{Basic format for reports:}\\
J. K. Author, ``Title of report,'' Abbrev. Name of Co., City of Co., Abbrev. State, Country, Rep. \emph{xxx}, year.\\
See \cite{b6,b7}.
\item \emph{Basic format for handbooks:}\\
\emph{Name of Manual/Handbook, x} ed., Abbrev. Name of Co., City of Co., Abbrev. State, Country, year, pp. \emph{xxx--xxx.}\\
See \cite{b8,b9}.
\item \emph{Basic format for books (when available online):}\\
J. K. Author, ``Title of chapter in the book,'' in \emph{Title of
Published Book}, $x$th ed. City of Publisher, State, Country: Abbrev.
of Publisher, year, ch. $x$, sec. $x$, pp. \emph{xxx--xxx}. [Online].
Available: \underline{http://www.web.com}\\
See \cite{b10}--\cite{b13}.
\item \emph{Basic format for journals (when available online):}\\
J. K. Author, ``Name of paper,'' \emph{Abbrev. Title of Periodical}, vol. $x$, no. $x$, pp. \emph{xxx--xxx}, Abbrev. Month, year. Accessed on: Month, Day, year, DOI: 10.1109.\emph{XXX}.123456, [Online].\\
See \cite{b14}--\cite{b16}.
\item \emph{Basic format for papers presented at conferences (when available online): }\\
J.K. Author. (year, month). Title. presented at abbrev. conference title. [Type of Medium]. Available: site/path/file\\
See \cite{b17}.
\item \emph{Basic format for reports and handbooks (when available online):}\\
J. K. Author. ``Title of report,'' Company. City, State, Country. Rep. no., (optional: vol./issue), Date. [Online] Available: site/path/file\\
See \cite{b18,b19}.
\item \emph{Basic format for computer programs and electronic documents (when available online): }\\
Legislative body. Number of Congress, Session. (year, month day). \emph{Number of bill or resolution}, \emph{Title}. [Type of medium]. Available: site/path/file\\
\textbf{\emph{NOTE: }ISO recommends that capitalization follow the accepted practice for the language or script in which the information is given.}\\
See \cite{b20}.
\item \emph{Basic format for patents (when available online):}\\
Name of the invention, by inventor's name. (year, month day). Patent Number [Type of medium]. Available: site/path/file\\
See \cite{b21}.
\item \emph{Basic format for conference proceedings (published):}\\
J. K. Author, ``Title of paper,'' in \emph{Abbreviated Name of Conf.}, City of Conf., Abbrev. State (if given), Country, year, pp. \emph{xxx--xxx.}\\
See \cite{b22}.
\item \emph{Example for papers presented at conferences (unpublished):}\\
See \cite{b23}.
\item \emph{Basic format for patents}$:$\\
J. K. Author, ``Title of patent,'' U.S. Patent \emph{x xxx xxx}, Abbrev. Month, day, year.\\
See \cite{b24}.
\item \emph{Basic format for theses (M.S.) and dissertations (Ph.D.):}
\begin{enumerate}
\item J. K. Author, ``Title of thesis,'' M.S. thesis, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.
\item J. K. Author, ``Title of dissertation,'' Ph.D. dissertation, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.
\end{enumerate}
See \cite{b25,b26}.
\item \emph{Basic format for the most common types of unpublished references:}
\begin{enumerate}
\item J. K. Author, private communication, Abbrev. Month, year.
\item J. K. Author, ``Title of paper,'' unpublished.
\item J. K. Author, ``Title of paper,'' to be published.
\end{enumerate}
See \cite{b27}--\cite{b29}.
\item \emph{Basic formats for standards:}
\begin{enumerate}
\item \emph{Title of Standard}, Standard number, date.
\item \emph{Title of Standard}, Standard number, Corporate author, location, date.
\end{enumerate}
See \cite{b30,b31}.
\item \emph{Article number in~reference examples:}\\
See \cite{b32,b33}.
\item \emph{Example when using et al.:}\\
See \cite{b34}.
\end{itemize}
\section{Introduction}
Dynamical systems are systems for which a function defines the time dependence of a point in a finite-dimensional space. There are plenty of examples of such systems, from airplane flight paths to the motion of a liquid in a container. While some can be described by sets of non-linear equations representing the desired physics, this approach can sometimes be a complex challenge. Numerical modeling is an appealing alternative used in simulation, which can help represent very complex non-linear systems but often leads to computationally intensive simulations. This becomes cumbersome in controlled dynamical systems, where the controller feedback to the dynamical system is time sensitive and lagging information can have severe consequences. \\
Over the past two decades or so, a new perspective on dynamics capable of approaching the challenge posed by the complexity of certain dynamical systems has emerged: data-driven modeling. These methods are capable of determining the spectrum of high-dimensional, non-linear dynamical systems. The specifics of these methods will be introduced in detail in the following section. Most popular current methods derive from Dynamic Mode Decomposition (DMD) \cite{schmid_2010}. This method has led to promising techniques, and more recently deep learning has attracted most of the attention in this field of research. All derivative methods of DMD rely to some extent on Koopman operator theory \cite{Koopman}, also described later. DMD and deep learning approaches have already been used in many different areas with a substantial level of success, and some of those examples are now presented. \\
\subsection{Data-driven modeling method relevance through examples:}
There is a broad spectrum of applications that can benefit from data-driven modeling methods, though not all dynamical systems are well suited. As mentioned already, one important aspect of the deep neural network approach to learning the basis functions is that it is heavily dependent on access to existing data or ways to generate the required data. A good example and relevant application is environment control systems in buildings, where large amounts of sensor information are captured and even virtual building simulations can be used to generate datasets. In this case, using Koopman operators facilitates the comparison of complex data while also simplifying the system representation. In Eisenhower et al. 2016 \cite{Eisenhower}, the spectral decomposition approach is used to analyze building system data, accelerating the comparison between models and data as well as drawing conclusions about the sensor functions. The authors show that using the spectral decomposition can help understand the changes in the different parts of a building and help highlight poor control performance.
A similar approach can also be applied to power grid systems to help evaluate the response to continuously changing local demand. In this case, the challenge is in the size of the system and the amount of heterogeneous dynamic sub-systems like power plants, transmission lines and other renewable energy systems. The objective in using the Koopman operator technique here is to help identify key dynamic phenomena to help understand cascading power outages. Susuki et al. 2017\cite{Susuki} successfully used Koopman Mode Decomposition (KMD) to enable direct computation from data without describing the complex underlying system.
More examples can be found in Xiao et al. 2020 \cite{Xiao} (Vehicle motion planning and control algorithms), Broad et al. 2019 \cite{Broad} (Human-Machine Systems to help users accomplish tasks), Ling et al. 2019 \cite{Ling} (Intelligent Transport Systems (ITS) to help with the reduction of fuel consumption), and Fonzi et al. 2020 \cite{Fonzi} (nonlinear dynamics fundamentals to the morphing airborne wind energy (AWE) aerostructures).
\subsection{Existing works:}
Advances in research from both the controls and computer science communities have contributed to much of the underlying mechanisms and tools discussed in this paper. Our proposed approach is a direct improvement of the original DKRC paper \cite{Han}, where we introduce the autoencoder to learn the basis functions. Existing works have utilized the same approach for processing the raw data alone \cite{Lusch_2018}. Other works have already considered integrating images into this framework, such as DKRC-I \cite{DBLP:conf/crv/LaferriereLDFP21} and CKNet \cite{xiao2021cknet}. Our work combines both the raw time series data and a spectrogram image, derived only from the raw time series data, as an additional input feature to the autoencoder network. A closer examination of the pros and cons of these existing works, and of how the underlying DKRC code could be restructured, is outside the scope of this paper but will be explored in future research.
\section{Preliminaries and Notations}\label{section_prelim}
The following preliminaries are cited from \cite{Huang}. In order to define the properties of the Koopman operator, let us take a discrete-time dynamical system $x_{t+1}=\mathrm{T}(x_t)$ where $\mathrm{T}:\mathrm{X} \subset \mathbb{R}^n \rightarrow \mathrm{X}$. We also denote by $\mathcal{B}\left(\mathrm{X}\right)$ the Borel $\sigma$-algebra on $\mathrm{X}$, by $\mathcal{M}(\mathrm{X})$ the vector space of bounded complex-valued measures on $\mathrm{X}$, and by $\mathcal{F}$ the space of complex-valued functions from $\mathrm{X}$ to $\mathbb{C}$. Associated with this discrete-time dynamical system is a linear operator, $\mathbb{U}$, called the Koopman operator. The Koopman operator is an infinite-dimensional linear operator defined on the space of functions as follows:
\begin{equation}\label{eqn.koop}
\begin{gathered}
\left[\mathbb{U}\varphi\right]\left(x\right)=\varphi(T(x))
\end{gathered}
\end{equation}
where the observable function $\varphi$ is mapped forward in time by the Koopman operator. The spectrum (eigenvalues and eigenfunctions) of the Koopman operator satisfies the following relationship:
\begin{equation}\label{eqn.koop_eig}
\begin{gathered}
\left[\mathbb{U}_t\phi_\lambda\right]\left(x\right)=e^{\lambda t}\phi_\lambda(x)
\end{gathered}
\end{equation}
where $\phi_\lambda$ is an eigenfunction and $\lambda\in\mathbb{C}$ is the associated eigenvalue. The eigenfunctions of the Koopman operator can be used as coordinates for a linear representation of nonlinear systems. The relationship between the spectrum of the Koopman operator and stability is explored in \cite{Mauroy2013ASO}.\\
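For intuition, the defining relation \eqref{eqn.koop} and the linearity of $\mathbb{U}$ can be checked numerically. The following Python sketch is an illustrative toy (not taken from the cited works): it composes an observable with a simple map $T$, and for the linear contraction $T(x)=0.5x$ the identity observable is a Koopman eigenfunction with eigenvalue $0.5$.

```python
import numpy as np

# Discrete-time dynamics T: X -> X (a simple linear contraction).
def T(x):
    return 0.5 * x

# Observable phi: X -> C (here the identity).
def phi(x):
    return x

# Koopman operator: [U phi](x) = phi(T(x)).
def U(f):
    return lambda x: f(T(x))

x = 3.0
# phi(x) = x is an eigenfunction of U for this T, with eigenvalue 0.5:
# [U phi](x) = phi(0.5 x) = 0.5 * phi(x).
assert np.isclose(U(phi)(x), 0.5 * phi(x))

# U is linear on observables even when the dynamics are nonlinear:
def T_nl(x):
    return x**2                      # nonlinear dynamics

U_nl = lambda f: (lambda x: f(T_nl(x)))
f = lambda x: np.sin(x)
g = lambda x: x + 1.0
lhs = U_nl(lambda x: 2.0 * f(x) + 3.0 * g(x))(x)
rhs = 2.0 * U_nl(f)(x) + 3.0 * U_nl(g)(x)
assert np.isclose(lhs, rhs)          # linearity of the Koopman operator
```

The trade-off shown here is the one exploited throughout the paper: the operator is linear, but it acts on an infinite-dimensional space of observables, which motivates the finite-dimensional approximations below.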
Dynamic Mode Decomposition (DMD) is a computational algorithm for approximating the spectrum of the Koopman operator \cite{schmid_2010}. Extended Dynamic Mode Decomposition (EDMD) is more accurate in approximating the spectrum of the Koopman operator for both linear and non-linear dynamical systems \cite{Williams_2015}. The formulation of EDMD is as follows.
\begin{equation}\label{eqn.data}
\begin{gathered}
\bar{X}=[x_1,x_2,\ldots,x_M] \\
\bar{Y}=[y_1,y_2,\ldots,y_M]
\end{gathered}
\end{equation}
where $x_i\in X$ and $y_i\in X$. The set $\bar{Y}$ is the time-shifted time series data such that $y_i=T(x_i)$. Let $\mathcal{D}=\left\{\psi_1,\ \psi_2,\ \ldots,\ \psi_N\right\}$ be the set of dictionary functions or observables, where $\psi_i\in L_2(X,\mathcal{B},\mu)=\mathcal{G}$. Here $\mu$ is a positive measure, not necessarily an invariant measure of $T$. Let $\mathcal{G}_\mathcal{D}$ denote the span of $\mathcal{D}$, so that $\mathcal{G}_\mathcal{D}\subset\mathcal{G}$. The choice of dictionary functions is critical to the accuracy of the approximated eigenfunctions of the Koopman operator. Define the vector-valued function $\Psi : X \rightarrow \mathbb{C}^N$:
\begin{equation}\label{eqn.dict}
\begin{gathered}
\Psi\left(x\right):=[\psi_1(x),\psi_2(x),\ldots,\psi_N(x)]^T
\end{gathered}
\end{equation}
$\Psi$ is the mapping from physical space to feature space. Any functions $\phi,\hat{\phi}\in\mathcal{G}_\mathcal{D}$ can be expressed in terms of coefficient vectors $a, \hat{a}\in\mathbb{C}^N$ as
\begin{equation}\label{eqn.psia}
\begin{gathered}
\phi=\sum_{k=1}^{N}{a_k\psi_k=\Psi^Ta}
\end{gathered}
\end{equation}
\begin{equation}\label{eqn.psiahat}
\begin{gathered}
\hat{\phi}=\sum_{k=1}^{N}{{\hat{a}}_k\psi_k=\Psi^T\hat{a}}
\end{gathered}
\end{equation}
Let the future observable $\hat{\phi}$ be related to the current observable $\phi$ by the Koopman operator in the feature space. We can write
\begin{equation}\label{eqn.psir}
\begin{gathered}
\hat{\phi}\left(x\right)=\left[\mathbb{U}\phi\right]\left(x\right)+r
\end{gathered}
\end{equation}
where $r\in\mathcal{G}$ is a residual function that appears because $\mathcal{G}_\mathcal{D}$ is not necessarily invariant under the action of the Koopman operator. We seek the mapping that minimizes this residual, so that, ideally,
\begin{equation}\label{eqn.psinor}
\begin{gathered}
\hat{\phi}\left(x\right)=\left[\mathbb{U}\phi\right]\left(x\right)
\end{gathered}
\end{equation}
We need to choose the dictionary functions such that the span of $\mathcal{D}$ contains the least-squares minimizer of the following problem. Define
\begin{equation}\label{eqn.G}
\begin{gathered}
G=\frac{1}{M}\sum_{m=1}^{M}{\Psi\left(x_m\right)\Psi\left(x_m\right)^T}
\end{gathered}
\end{equation}
\begin{equation}\label{eqn.A}
\begin{gathered}
A=\frac{1}{M}\sum_{m=1}^{M}{\Psi\left(x_m\right)\Psi\left(y_m\right)^T}
\end{gathered}
\end{equation}
\begin{equation}\label{eqn.minopt}
\begin{gathered}
\min_{K}\norm{GK-A}_F
\end{gathered}
\end{equation}
The symbol $\norm{\cdot}_F$ denotes the Frobenius norm of a matrix. The least squares solution to this optimization problem is given explicitly by
\begin{equation}\label{eqn.Kedmd}
\begin{gathered}
K_{EDMD}=G^\dag A
\end{gathered}
\end{equation}
where $G^\dag$ is the Moore-Penrose pseudoinverse of the matrix $G$. Therefore, assuming the dictionary $\Psi$ spans a sufficiently rich subspace of $L_2(X,\mathcal{B},\mu)$, the leading eigenfunctions of the Koopman operator and the associated eigenvalues can be computed from the EDMD approximation. The right eigenvectors of $K$ generate the approximations of the eigenfunctions:
\begin{equation}\label{eqn.Keigfunc}
\begin{gathered}
\phi_j=\Psi^Tv_j
\end{gathered}
\end{equation}
where $v_j$ is the $j$-th right eigenvector of $K$ and $\phi_j$ is the approximate Koopman eigenfunction associated with the $j$-th eigenvalue.
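As a concrete illustration, \eqref{eqn.G}--\eqref{eqn.Kedmd} and \eqref{eqn.Keigfunc} can be sketched in a few lines of Python. The toy map $T(x)=0.9x$ and the monomial dictionary $\{1,x,x^2\}$ are illustrative choices (this dictionary happens to be invariant under $T$, so the residual $r$ vanishes and the recovered eigenvalues are exactly $1$, $0.9$, $0.81$):

```python
import numpy as np

def Psi(x):
    # illustrative monomial dictionary D = {1, x, x^2}, stacked as in eq. (dict)
    return np.array([np.ones_like(x), x, x**2])      # shape (N, M)

x = np.linspace(-1.0, 1.0, 200)                      # snapshots x_m
y = 0.9 * x                                          # y_m = T(x_m), toy map
Px, Py = Psi(x), Psi(y)
M = x.size
G = (Px @ Px.T) / M                                  # eq. (G)
A = (Px @ Py.T) / M                                  # eq. (A)
K = np.linalg.pinv(G) @ A                            # K_EDMD = G† A
mu, V = np.linalg.eig(K)                             # eigenvalues / right eigenvectors
phi = Px.T @ V                                       # phi_j = Psi^T v_j on the data
```

Each column of `phi` is one approximate eigenfunction evaluated at the snapshots; the Koopman relation $\phi_j(T(x)) = \mu_j\,\phi_j(x)$ can be checked directly against `Psi(y).T @ V`.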
DMD approximates the Koopman operator with a specific choice of dictionary functions chosen to be the unit vectors $e_i\in\mathbb{R}^n$ of the lifted vector space.
\begin{equation}\label{eqn.unitbasis}
\begin{gathered}
e_1=\left[\begin{matrix}1\\0\\\vdots\\\end{matrix}\right],\ e_2=\left[\begin{matrix}0\\1\\\vdots\\\end{matrix}\right],\ \ldots
\end{gathered}
\end{equation}
\begin{equation}\label{eqn.unitset}
\begin{gathered}
\mathcal{D}=\{e_1^T,\ldots,e_N^T\}
\end{gathered}
\end{equation}
so that the least squares solution for the DMD Koopman operator can be computed directly from the data:
\begin{equation}\label{eqn.Kdmd}
\begin{gathered}
K_{DMD}=\bar{Y}\ {\bar{X}}^\dag
\end{gathered}
\end{equation}
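With this choice of dictionary, the fit reduces to a single pseudoinverse on the raw snapshot matrices. A minimal numpy sketch on a toy linear system (the matrix `A_true` and the data sizes are our own illustrative choices):

```python
import numpy as np

A_true = np.array([[0.9, 0.1],
                   [0.0, 0.5]])                          # hypothetical linear system
X = np.random.default_rng(0).standard_normal((2, 100))   # snapshot matrix X-bar
Y = A_true @ X                                           # time-shifted snapshots Y-bar
K_dmd = Y @ np.linalg.pinv(X)                            # eq. (K_DMD)
```

For noiseless data generated by a linear map, `K_dmd` recovers the system matrix exactly.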
In this paper, the EDMD formulation is solved in the lifted space for a controlled dynamical system, as formulated in \cite{Korda_2018}. Therefore, we first redefine our datasets as
\begin{equation}\label{eqn.dataxlift}
\begin{gathered}
X_{lift}=[\psi\left(x_1\right),\ \ldots,\ \psi\left(x_k\right)] \\
Y_{lift}=\left[\psi\left(y_1\right),\ \ldots,\ \psi\left(y_k\right)\right] \\
U=[u_1,\ldots,u_k]
\end{gathered}
\end{equation}
We want to obtain the lifted linear dynamical system
\begin{equation}\label{eqn.linearlifted}
\begin{gathered}
Y_{lift}=AX_{lift}+BU \\
\hat{x}=CX_{lift}
\end{gathered}
\end{equation}
The least squares solution for the lifted linear state space system is given by \eqref{eqn.ab} and \eqref{eqn.c}, where $N$ is the lifting dimension, $n$ the original state dimension, and $m$ the number of data snapshots. The matrix $A\in\mathbb{R}^{N\times N}$ is formed from the first $N$ columns of the solution block $\left[A,B\right]$, $B$ from the remaining columns, and $C\in\mathbb{R}^{n\times N}$.
\begin{equation}\label{eqn.ab}
\begin{gathered}
\left[A,B\right]=\left[Y_{lift}\left[\begin{matrix}X_{lift}\\U\\\end{matrix}\right]^T\right]{\left[\left[\begin{matrix}X_{lift}\\U\\\end{matrix}\right]\left[\begin{matrix}X_{lift}\\U\\\end{matrix}\right]^T\right]^{-1}}
\end{gathered}
\end{equation}
\begin{equation}\label{eqn.c}
\begin{gathered}
C=XX_{lift}^\dag
\end{gathered}
\end{equation}
This solution leads to a better fit of the lifted linear dynamical system, since the least squares problem is solved directly in the lifted space.
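A minimal numpy sketch of \eqref{eqn.ab} and \eqref{eqn.c} on a synthetic lifted system with known $(A,B)$; the sizes, the noiseless data, and the assumption that the first $n$ lifted coordinates coincide with the state are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, m = 4, 2, 300                      # lifting dim, state dim, snapshots (toy)
A_true = 0.5 * np.eye(N)
B_true = rng.standard_normal((N, 1))
X_lift = rng.standard_normal((N, m))
U = rng.standard_normal((1, m))
Y_lift = A_true @ X_lift + B_true @ U    # noiseless lifted dynamics
X = X_lift[:n, :]                        # assume first n lifted coords = state

Z = np.vstack([X_lift, U])               # stacked regressor [X_lift; U]
AB = (Y_lift @ Z.T) @ np.linalg.pinv(Z @ Z.T)   # eq. (ab)
A, B = AB[:, :N], AB[:, N:]
C = X @ np.linalg.pinv(X_lift)           # eq. (c)
```

With exact data and enough snapshots, $(A,B)$ are recovered exactly and $C$ is the selection matrix picking out the state coordinates.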
\section{Algorithm}
Our implementation consists of several sub-processes which make up the overall pipeline for determining the final linear lifted representation of the non-linear system. Given time series data containing measured state variables and control inputs, we first perform data pre-processing. During pre-processing, a Mel spectrogram image is created by sampling the available time series dataset at a frequency lower than the one at which the data was measured. Some of the time resolution of the data is therefore lost, so in cases where high-frequency sampling is not available, this approach may be less data efficient. Most modern engineering systems do have high sampling rates, so this is not a major concern, but rather something to consider when applying the method to the system of interest.

Once the spectrogram images are created, a convolutional autoencoder is trained to minimize the reconstruction error of the input image. At the output of the encoder there is a bottleneck layer which captures the latent features of the images. These latent features are then combined with the raw time series data as inputs to a fully connected autoencoder network, which similarly trains to minimize the reconstruction error of the enriched input representation. The bottleneck layer of this network is equal to the lifting dimension; once the mean absolute error between the input and output of the network is minimized, we remove the decoder and use the encoder as the lifting basis function, which we call the lifting DNN. The lifting DNN thus maps latent image data and raw time series data to the lifted state space.

In the end, we want to find a linear state space model which minimizes reconstruction error as well as prediction error compared to the real non-linear system. A second criterion is that the identified system must be controllable.
Once these two criteria are satisfied, the linear system is evaluated to see how well it can track the state trajectories given some initial conditions and control inputs. Further analysis of the controllability and stability of these systems is beyond the scope of this paper, as we focus only on the model prediction element of the system identification problem.
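The spectrogram step of the pre-processing stage can be sketched as follows; a plain magnitude STFT stands in for the Mel spectrogram, and the window and hop sizes are illustrative values, not those used in our experiments:

```python
import numpy as np

def spectrogram(x, win=256, hop=64):
    """Magnitude spectrogram of a 1-D signal via a Hann-windowed STFT
    (illustrative stand-in for the Mel spectrogram used in the pipeline)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T   # (freq, time)

t = np.linspace(0.0, 1.0, 4000)                  # ~4 kHz sampling, toy values
S = spectrogram(np.sin(2 * np.pi * 50.0 * t))    # 50 Hz tone
```

The resulting image `S` has one column per time frame and one row per frequency bin; a pure tone shows up as a single bright horizontal band.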
\section{Convolutional Autoencoder}
The purpose of the convolutional autoencoder network (CAE) shown in Figure \ref{fig:cae} is to learn the encoding from pixel space to the latent vector space, and the decoding back to pixel space, minimizing the reconstruction error of the spectrogram input image. The loss function used to update the weights and biases of the network is the mean square error. The encoder portion is composed of two convolution layers with LeakyReLU activation functions and 'same' padding. The decoder also uses LeakyReLU activation functions, with the exception of the last one, which is a sigmoid. The main objective is to extract the latent representation of the state data, as depicted in Figure \ref{fig:ce}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Attachments/cnn_ae.png}
\caption{Convolutional autoencoder network mapping from pixel space to a latent vector space then back to pixel space.}
\label{fig:cae}
\end{figure}
After training we remove the decoder from the CAE. The encoder section shown in figure \ref{fig:ce} generates the latent labels from the input images which are then concatenated with the raw state data.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Attachments/cae_e.png}
\caption{Encoder network used to generate latent labels for each spectrogram image}
\label{fig:ce}
\end{figure}
\begin{algorithm2e}
\textbf{Input: } \\
dataset containing all data up to time $T$ $D = \{{\mathbf x}_i,{\mathbf y}_i,{\mathbf u}_i, q_i\}_{i=0}^{T}$\\
training loss from previous training iterations $L_{tot}$\\
\textbf{Output: } \\
Linear Lifted System ${X_{t+1}}_{lift}=A{X_t}_{lift}+BU_t$\\
lifting basis function DNN $\psi_{\theta}({\mathbf x})$\\
\SetAlgoLined
\rule{80mm}{0.3mm}\\[-1pt]
initialize DNN weights $\theta$\\
define number of training epochs $n_t$ \\
let $\psi_{\theta}$ be the lifting network $\psi_\theta:\mathbb{R}^{n\times 1}\rightarrow\mathbb{R}^{N\times 1}$ \\
let $\psi_{\theta}^{-1}$ be the decoding network $\psi_\theta^{-1}:\mathbb{R}^{N\times 1}\rightarrow\mathbb{R}^{n\times 1}$ \\
given the lifted system ${\psi_\theta(x_{t+1})=A\psi}_\theta\left(x_t\right)+Bu_t$ \\
which can also be written as $X_{t+1}=AX_t+Bu_t$\\
\ \\
\For{$n_t$}{
generating lifted states using $\psi_{\theta}$\\
${X_t} = \psi_{\theta}({\mathbf x})$\\
${Y_{t}} = \psi_{\theta}({\mathbf y})$\\
computing the lifted linear system\\
$\left[A,B\right]=\left[Y_{t}\left[\begin{matrix}{X_t}\\U_t\\\end{matrix}\right]^T\right]\left[\left[\begin{matrix}{X_t}\\U_t\\\end{matrix}\right]\left[\begin{matrix}{X_t}\\U_t\\\end{matrix}\right]^T\right]^{-1}$\\
compute linearization loss \\
$L_1=X_{t+1}-[A{X_t}+BU_t]$\\
compute controllability loss \\
$L_2=N_{lift}-rank(ctrb\left(A,B\right))$\\
compute encoder loss \\
$L_3=\psi_\theta^{-1}\left(X_{t+1}\right)-x_{t+1}$\\
compute total loss \\
$L_{tot} = L_1 + L_2 + L_3$\\
\uIf{$L_{tot} < \min(L_{tot})$ and $L_2=0$}{
solve \eqref{eqn.ab} and \eqref{eqn.c} and store $A$,$B$,$C$ and $\theta_{final}$ \\
}
\Else{
backpropagate loss and update $\psi_{\theta}^{-1}$ and $\psi_{\theta}$ $\theta_{old} \rightarrow \theta_{new}$ \;
}
}
\caption{Unsupervised DKRC}\label{alg1}
\end{algorithm2e}
\begin{algorithm2e}
\textbf{Input: } \\
dataset containing all data up to time $T$ $D = \{{\mathbf x}_i,{\mathbf y}_i,{\mathbf u}_i, q_i\}_{i=0}^{T}$\\
\textbf{Output: } \\
Linear Lifted System ${X_{t+1}}_{lift}=A{X_t}_{lift}+BU_t$\\
lifting basis function DNN $\psi_{\theta}({\mathbf x})$\\
\SetAlgoLined
\rule{80mm}{0.3mm}\\[-1pt]
initialize DNN weights $\theta$\\
define number of training epochs $n_t$ \\
define the accuracy tolerance $\epsilon$ \\
let $\psi_{\theta}$ be the lifting network $\psi_\theta:\mathbb{R}^{n\times 1}\rightarrow\mathbb{R}^{N\times 1}$ \\
given the lifted system ${\psi_\theta(x_{t+1})=A\psi}_\theta\left(x_t\right)+Bu_t$ \\
which can also be written as $X_{t+1}=AX_t+Bu_t$\\
lifting dimension $N=n+1$\\
\ \\
\For{$n_t$}{
train CAE $\gamma_{\theta}$ and $\gamma_{\theta}^{-1}$ \\
train AE $\psi_{\theta}(x)$ and $\psi_{\theta}^{-1}(x)$\\
generate spectrogram latent states using $\gamma_{\theta}$\\
generate lifted states using $\psi_{\theta}$\\
${X_t} = \psi_{\theta}({\mathbf x})$\\
${Y_{t}} = \psi_{\theta}({\mathbf y})$\\
computing the lifted linear system\\
$\left[A,B\right]=\left[Y_{t}\left[\begin{matrix}{X_t}\\U_t\\\end{matrix}\right]^T\right]\left[\left[\begin{matrix}{X_t}\\U_t\\\end{matrix}\right]\left[\begin{matrix}{X_t}\\U_t\\\end{matrix}\right]^T\right]^{-1}$\\
compute linearization heuristic \\
$h_1=X_{t+1}-[A{X_t}+BU_t]$\\
compute controllability criteria \\
$h_2=N_{lift}-rank(ctrb\left(A,B\right))$\\
compute total loss \\
$h_{tot} = h_1$\\
\uIf{$h_{tot} < \epsilon$ and $h_2=0$}{
solve \eqref{eqn.ab} and \eqref{eqn.c} and store $A$,$B$,$C$ and $\theta_{final}$ \\
}
\Else{
increment lifting dimension $N=N+1$\;
}
}
\caption{Supervised DKRC}\label{alg2}
\end{algorithm2e}
\section{Autoencoder}
The main purpose of the autoencoder network (AE) is to learn the mapping to the lifted $N$-dimensional latent space shown in Figure \ref{fig:ae}. The AE is the core of the supervised data-driven approach for learning the basis functions used to compute the Koopman operator.\\
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Attachments/dnn_ae.png}
\caption{Fully connected autoencoder network}
\label{fig:ae}
\end{figure}
The encoder portion is composed of a single dense layer using the tanh activation function. Following training of the AE, the decoder is removed and the lifted states can be predicted as shown in Figure \ref{fig:e}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Attachments/dnn_e.png}
\caption{Fully connected encoder network (lifting basis function) mapping latent image data and raw time series data to the lifted N-dimensional state.}
\label{fig:e}
\end{figure}
Once the lifting basis functions $\psi_{\theta}\left(x\right)$ have been learned, we can predict the lifted state ${X_t}_{lift}$ as shown in Figure \ref{fig:lift}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Attachments/dnn_lift.png}
\caption{Deep Neural Network that maps the latent image representations and raw data to the lifted state}
\label{fig:lift}
\end{figure}
In our case, we are dealing with forced dynamical systems, which require a different set of equations to compute the lifted linear representation of the system. The derivation of the formulas used in this section is beyond the scope of this text; see \cite{Korda_2018} for reference.
In order to obtain the state space representation of our lifted linear forced dynamical system, we compute the $A$, $B$, and $C$ matrices, with $D$ equal to zero.
\begin{equation}\label{eqn.AB}
\begin{gathered}
\left[A,B\right]=\left[X_{t+1_{lift}}\left[\begin{matrix}{X_t}_{lift}\\U_t\\\end{matrix}\right]^T\right]\left[\left[\begin{matrix}{X_t}_{lift}\\U_t\\\end{matrix}\right]\left[\begin{matrix}{X_t}_{lift}\\U_t\\\end{matrix}\right]^T\right]^{-1}
\end{gathered}
\end{equation}
\begin{equation}\label{eqn.C}
\begin{gathered}
C=X_t\left[{X_t}_{lift}\right]^\dag
\end{gathered}
\end{equation}
\begin{equation}\label{eqn.D}
\begin{gathered}
D=0
\end{gathered}
\end{equation}
In order to evaluate the accuracy of the lifted system, we compute the heuristic term $h_1$ using equation \eqref{eqn.L1}, which compares the lifted state at the next time step to the output of the linear dynamical system; the result should be zero if the approximation is exact. This is in contrast to the original unsupervised DKRC algorithm (shown in Algorithm \ref{alg1}) proposed in \cite{Han}, where the error is computed as a loss which is used to update the network's weights.
\begin{equation}\label{eqn.L1}
\begin{gathered}
h_1=X_{t+1_{lift}}-[A{X_t}_{lift}+BU_t]
\end{gathered}
\end{equation}
The controllability matrix $Q$ is defined in equation \eqref{eqn.ctrb}. The rank function determines the number of linearly independent columns in $Q$. By definition, a linear dynamical system is controllable if the rank of the controllability matrix is equal to the order of the system, which in this case is the lifted dimension $N$.
\begin{equation}\label{eqn.ctrb}
\begin{gathered}
Q=\left[\begin{matrix}B&AB&A^2B&\cdots&A^{N-1}B\end{matrix}\right]
\end{gathered}
\end{equation}
Hence, we define our second heuristic $h_2$ to be the difference between the order of the system $N$ and the rank of the controllability matrix.
\begin{equation}\label{eqn.L2}
\begin{gathered}
h_2=N_{lift}-rank(Q)
\end{gathered}
\end{equation}
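Both heuristics are inexpensive to evaluate. A sketch of the controllability check \eqref{eqn.ctrb}--\eqref{eqn.L2}, using a double integrator as an illustrative example:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix Q = [B, AB, A^2 B, ..., A^{N-1} B] (eq. ctrb)."""
    blocks, M = [B], B
    for _ in range(A.shape[0] - 1):
        M = A @ M
        blocks.append(M)
    return np.hstack(blocks)

# illustrative example: double integrator, controllable from the force input
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = ctrb(A, B)
h2 = A.shape[0] - np.linalg.matrix_rank(Q)   # eq. (L2): 0 iff controllable
```

A decoupled system driven through only one mode, e.g. a diagonal $A$ with $B=[1,0]^T$, yields $h_2=1$ and would be rejected by the admissibility check.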
Finally, the total heuristic value $h_{tot}$, computed in equation \eqref{eqn.losstot}, is set equal to the lifted prediction error heuristic $h_1$. The controllability criterion $h_2$ is not included in the total heuristic value, since we only admit solutions which are controllable, i.e.\ $h_2=0$. In order for an identified linear system to be admissible, its total heuristic must be less than some constant $\epsilon$. Using this combination of heuristic measures and admissibility criteria, the final identified system will be controllable and accurate within the specified tolerance. If a solution cannot be found at the lifting dimension $N=n+1$, then the AE network's latent space is increased to dimension $N+1$, as many times as it takes to obtain an admissible solution.
\begin{equation}\label{eqn.losstot}
\begin{gathered}
h_{tot} = h_1<\epsilon
\end{gathered}
\end{equation}
\begin{equation}\label{eqn.control}
\begin{gathered}
rank(Q) = N
\end{gathered}
\end{equation}
The hypothesis here is that, since we are approximating an infinite-dimensional operator and deriving our lifted linear system from that approximation, the approximation should converge to the exact solution as the dimension $N$ grows. In that limit, $\epsilon$ is equal to zero and the original nonlinear dynamics map exactly onto the lifted linear state space system. Therefore, as $N$ increases towards infinity, $h_1$ goes to zero:
\begin{equation}\label{eqn.hyp}
\begin{gathered}
\lim_{N\rightarrow\infty} X_{t+1_{lift}}-[A{X_t}_{lift}+BU_t]=0
\end{gathered}
\end{equation}
This is why, if the solution is not accurate or the controllability criterion is not met, we increase $N$ incrementally. The proposed algorithm is shown in Algorithm \ref{alg2}.
\section{Simulation Results}\label{section_main}
In this study, using spectrogram images together with raw time series data is compared to using raw time series data alone as the input, in order to determine whether the quality of the learned basis functions can be improved by additional input features (which are derived from the time series data). The main objective is to increase the quality of the learned basis functions such that we can accurately predict the dynamics for the design of a controller (MPC, LQR, etc.). From this, we identified three metrics to evaluate the success of our model. The first metric is the complexity of the model setup in terms of data gathering and preparation, as well as the decoding of the model outputs for interpretation. The second metric is the dimension of the $A$ and $B$ matrices; low-dimensional $A$ and $B$ matrices are targeted, as they require lower computational effort when computing control signals. The last metric is the average state error over a trajectory of the states $\theta$ and $\dot{\theta}$. We used Python to implement the proposed algorithms, with TensorFlow as the foundation for all neural networks in the architecture. The OpenAI Gym 'Pendulum-V0' environment \cite{brockman2016openai} was modified to step forward at a frequency of 1,000 Hz in order to increase the sampling frequency, so that the time resolution of the spectrogram images matches well with the frequency resolution.
\subsection{Metric 1: Complexity of the model setup}
If we examine the model structures of publications such as Han et al. \cite{Han} and the example presented in Brunton et al. \cite{Brunton}, they use DNN structures only, which do not require the extra step of learning a latent space representation of the raw data. Leaving aside the autoencoders themselves and focusing on the data pre-processing step, the simple fact that the proposed model uses spectrogram images means that the data undergoes an extra transformation. The conversion of time series data to images is direct, but as training progresses to the first DNN encoder, the image features are merged with the initial raw data, which increases the complexity of the architecture. Altogether, having the spectrogram image as an extra input feature does induce greater algorithmic and time complexity. Therefore, on the first criterion, the proposed model brings extra complexity with the addition of the CNN in comparison to the other models presented in the literature \cite{Han} \cite{Brunton}. The hope is that this added complexity is compensated for by the fidelity of the identified system.
\subsection{Metric 2: Size of the A and B matrices}
Controlling the size of both the $A$ and $B$ matrices enables a direct comparison between the proposed approach and the model using raw data only. The comparison here is done in terms of prediction accuracy. As the goal is to head toward a controller, the better model is the one able to maintain good accuracy for an extended period of time.
The maximum lifted dimension used in this study was $N=12$, which was also the lifting dimension giving the best predictions in both cases. In Figure \ref{fig:latlift12}, it can be seen that the utilization of the latent images generates poor initial predictions. With the raw data only, in Figure \ref{fig:rawlift12}, initial predictions are good, while predictions after 3 seconds are less accurate. The results shown in Table \ref{table:1} support the hypothesis of equation \eqref{eqn.hyp}, in that the accuracy increases with higher-dimensional approximations.
In both cases, the angular velocity $\dot{\theta}$ is not well predicted.
\begin{figure}[htbp]
\centering
\includegraphics[width=2.0in]{diagrams/12_rawlat_theta_time.png}
\hspace{0.1cm}
\includegraphics[width=2.0in]{diagrams/12_rawlat_thetadot_time.png}
\caption{Raw data + Latent image data compared to non-linear system lifting dimension = 12}
\label{fig:latlift12}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=2.0in]{diagrams/12_raw_theta_time.png}
\hspace{0.1cm}
\includegraphics[width=2.0in]{diagrams/12_raw_thetadot_time.png}
\caption{Raw data only compared to non-linear system lifting dimension = 12}
\label{fig:rawlift12}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{ |p{3cm}||p{1cm}|p{1cm}|p{1cm}|p{1cm}| }
\hline
\multicolumn{5}{|c|}{Average State Trajectory Prediction Mean Absolute Error} \\
\hline
Lifting Dimensions & $\theta$* & $\dot{\theta}$* & $\theta$** & $\dot{\theta}$**\\
\hline
3rd Dimension & 0.515 & 1.95 & 0.30 & 1.18\\
5th Dimension & 0.517 & 2.04 & 0.22 & 0.82\\
12th Dimension & 0.456 & 1.84 & 0.14 & 0.54\\
\hline
\end{tabular}
\caption{*Raw Data + Latent Image Data, **Raw Data Only}
\label{table:1}
\end{table}
\subsection{Metric 3: Average State Trajectory Prediction}
For this last criterion, a comparison of the mean absolute error of the average state trajectory prediction is presented in Table \ref{table:1}. The results show an improvement for the raw-data-only model between the 3rd and 12th lifting dimensions. The improvement trend only starts after the 5th dimension in the case where raw and latent image data are used for the basis function identification.
\section{Discussion}\label{secction_simulation}
One potential drawback of the simple pendulum system is that it is almost too simple a non-linear system for the spectrogram image to carry any meaningful transient frequency information. To highlight this, we show our spectrogram image from the pendulum in Figure \ref{fig:specbad}. There is little to distinguish along the trajectory of the measured state, since it repeatedly cycles through the same set of values. A more complex example of a nonlinear system is the measurement of acoustics, as shown in Figure \ref{fig:specgood}. Visually, it is clear that the state evolves over time in a unique and identifiable manner, making the content of the image more useful for determining the trajectory of the audio or state. Ideally, we would like to try our approach on a more complex nonlinear system, especially a real system with noise and uncertainty, to truly measure the robustness of the approach for accurate system identification.
\begin{figure}[htbp]
\centering
\includegraphics[width=3.0in]{Attachments/periodicAF.png}
\caption{Periodicity of pendulum Spectrogram image for all training time}
\label{fig:specbad}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.0in]{Attachments/aucoustics.png}
\caption{Spectrogram Image from acoustic data}
\label{fig:specgood}
\end{figure}
\section{Conclusion}\label{section_conclusion}
The investigation of the use of spectrogram images generated from a non-linear dynamical system showed that the approach can lead to accurate predictions, though not as quickly as when using only raw data as the input. The current results do not provide conclusive evidence of a benefit of the spectrogram images for the quality of basis function identification at low lifting dimensions. \\
Here, it is worth mentioning that only a single neural network architecture was evaluated, and it would be worthwhile to evaluate different activation functions and layer structures in both the CAE and AE sections of the model. Also, the current model implementation uses simple CAE/AE networks, and the usage of variational autoencoder (VAE) networks should be considered. Existing work has shown that VAEs and convolutional VAEs do a good job of reducing sample complexity in unexplored regions of the state space during sampling. While the spectrogram images provided an enriched set of data, their accurate reconstruction through the initial autoencoder is crucial and could become a critical improvement leading to the actual desired outcome: an increased quality of the learned basis functions for more accurate prediction of transient dynamics. \\
A final aspect that was not considered in the current investigation is the testing of higher-order nonlinear systems. The simple pendulum is a simple non-linear system and may be poorly suited to highlight the benefits of using a spectrogram image representation of the non-linear state space data. Now that the framework is available, it would be very valuable to test the proposed approach on another non-linear dynamic system, such as a walking quadruped or biped structure. Additionally, this framework makes it possible for image data to be used in the Koopman operator framework for problems outside of control design and system identification.\\
In conclusion, the current investigation showed that the usage of a latent image data representation increases the complexity of the deep learning model, and it does not yield any increase in prediction accuracy until a lifting dimension of 12. At that dimension, the approach captures the transient aspect of the dynamics well, while the usage of only raw data still shows better predictions over short times. The major advantage that this approach has over previous implementations of DKRC is that it adds a supervised learning component to the problem. This greatly reduces training time and allows several different heuristics and criteria to be tested against the identified models, so that a linear system with desired properties can be found through an external optimization process. \\\\
% Source: "Supervised DKRC with Images for Offline System Identification", arXiv:2109.02241, https://arxiv.org/abs/2109.02241 (cs.LG; eess.SY)
% Source: "Independence ratio and random eigenvectors in transitive graphs", arXiv:1308.5173, https://arxiv.org/abs/1308.5173

\noindent\textbf{Abstract.} A theorem of Hoffman gives an upper bound on the independence ratio of regular graphs in terms of the minimum $\lambda_{\min}$ of the spectrum of the adjacency matrix. To complement this result we use random eigenvectors to gain lower bounds in the vertex-transitive case. For example, we prove that the independence ratio of a $3$-regular transitive graph is at least \[q=\frac{1}{2}-\frac{3}{4\pi}\arccos\biggl(\frac{1-\lambda_{\min}}{4}\biggr).\] The same bound holds for infinite transitive graphs: we construct factor of i.i.d. independent sets for which the probability that any given vertex is in the set is at least $q-o(1)$. We also show that the set of the distributions of factor of i.i.d. processes is not closed w.r.t. the weak topology provided that the spectrum of the graph is uncountable.
\section{Introduction}
\subsection{The independence ratio and the minimum eigenvalue}
An \emph{independent set} is a set of vertices in a graph, no two of which are adjacent.
The \emph{independence ratio} of a graph $G$ is the size of
its largest independent set divided by the total number of vertices.
If $G$ is regular, then the independence ratio is at most $1/2$,
and it is equal to $1/2$ if and only if $G$ is bipartite.
The adjacency matrix of a $d$-regular graph has
real eigenvalues between $-d$ and $d$.
The least eigenvalue $\lam$ is at least $-d$,
and it is equal to $-d$ if and only if the graph is bipartite.
So the distance of the independence ratio from $1/2$
and the distance of $\lam$ from $-d$ both measure
how far a $d$-regular graph is from being bipartite.
The following natural question arises: what kind of connection
is there between these two graph parameters?
A theorem of Hoffman \cite{hoffman1} gives a partial answer to this question.
It says that the independence ratio of a $d$-regular graph is at most
\begin{equation} \label{eq:hoffman}
\frac{-\lam}{d-\lam}
= \frac{1}{2} - \frac{\frac{1}{2}(\lam + d)}{2d - ( \lam + d )} .
\end{equation}
(For a simple proof see \cite[Theorem 11]{ellis1}.
Also see \cite[Section 4]{lyons_nazarov} for certain improvements.)
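For the reader's convenience, the second expression in \eqref{eq:hoffman} reduces to the first by elementary algebra:

```latex
\frac{1}{2} - \frac{\tfrac{1}{2}(\lam + d)}{2d - (\lam + d)}
  = \frac{1}{2} - \frac{\lam + d}{2(d - \lam)}
  = \frac{(d - \lam) - (\lam + d)}{2(d - \lam)}
  = \frac{-\lam}{d - \lam}.
```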
Hoffman's bound implies that $\lam \to -d$ as the independence ratio tends to $1/2$.
The converse statement is not true in general:
it is easy to construct $d$-regular graphs with $\lam$ arbitrarily close to $-d$
and the independence ratio separated from $1/2$.
However, for transitive graphs the converse is also true.
A graph $G$ is said to be \emph{vertex-transitive} (or \emph{transitive} in short)
if its automorphism group $\Aut(G)$ acts transitively on the vertex set $V(G)$.
\begin{introthm} \label{thm:crude}
Let $G$ be a finite, $d$-regular, vertex-transitive graph with
least eigenvalue $\lam$.
Then the independence ratio of $G$ is at least
$$ \frac{1}{2} - \frac{1}{3} \sqrt{ d(\lam + d) } .$$
In particular, if $\lam \to -d$, then the independence ratio converges to $1/2$.
\end{introthm}
The idea behind the proof is to consider random eigenvectors with eigenvalue $\lam$.
Let $\la$ be an arbitrary eigenvalue of the adjacency matrix
of some transitive graph $G$ and let $E_\la$ denote the eigenspace corresponding to $\la$,
that is, the space of eigenvectors with eigenvalue $\la$.
(Note that $E_\la$ is typically more than one dimensional, since $G$ is transitive.)
Furthermore, let $S_\la$ be the unit sphere in $E_\la$.
Now we pick a uniform random vector from $S_\la$.
Note that $S_\la$ is $\Aut(G)$-invariant, therefore
the distribution of this random vector is $\Aut(G)$-invariant, too.
Let us choose the vertices $v$ with the property that
the value of the eigenvector at $v$ is larger than at each neighbor of $v$.
(If $\la$ is negative, then
we expect many of the vertices with positive value to have this property.)
Clearly, these vertices form an independent set.
Since our random vector is invariant, the probability $q$
that a given vertex is chosen is the same for all vertices.
Therefore the expected size of this random independent set is $q |V(G)|$,
and consequently, the independence ratio of $G$ is at least $q$.
An estimate of $q$ yields Theorem \ref{thm:crude} above.
In many cases we obtain much sharper bounds.
When the graph has a lot of symmetry (for example,
when any pair of neighbors of a fixed vertex can be mapped
to any other pair by a suitable graph automorphism),
then the probability $q$ defined above is actually determined by $\la$.
In this case it equals $q_d(\la)$, the relative volume of
the $(d-1)$-dimensional regular spherical simplex defined by normal vectors
with pairwise scalar product $\frac{d-2-\la}{2(d-1)}$ (see Definition \ref{def:qd}).
There is a simple formula for $q_3(\la)$, see Theorem \ref{thm:3reg_main}.
We conjecture that $q \geq q_d(\la)$ for arbitrary transitive graphs
(provided that $\la$ is sufficiently small).
In other words, the worst-case scenario is when the graph has a lot of symmetry.
Of course, this would yield a lower bound $q_d(\lam)$ for the independence ratio.
We managed to prove this conjecture for $3$-regular transitive graphs and
$4$-regular arc-transitive graphs. We also showed that a well-known
conjecture in geometry would imply the $d$-regular, arc-transitive case.
(A graph is said to be \emph{arc-transitive} or \emph{symmetric}
if for any two pairs of adjacent vertices $(u_1,v_1)$ and $(u_2,v_2)$,
there is an automorphism of the graph mapping $u_1$ to $u_2$ and $v_1$ to $v_2$.)
The following theorems were obtained.
\begin{introthm} \label{thm:arc_tr}
Suppose that $G$ is a finite, $d$-regular, arc-transitive graph
with least eigenvalue $\lam$.
Then the independence ratio of $G$ is at least
$$ \frac{1}{2} - \frac{1}{3} \sqrt{\lam+d} .$$
In fact, a well-known conjecture in geometry (see Conjecture \ref{conj:geom})
would imply that the independence ratio is at least $q_d(\lam)$.
This has been proven in the case $d=4$: the independence ratio of
a finite, $4$-regular, arc-transitive graph is at least
\begin{equation} \label{eq:q4}
q_4(\lam) \geq \frac{1}{2} - \frac{1}{4} \sqrt{\lam+4} .
\end{equation}
\end{introthm}
\begin{introthm} \label{thm:3reg_main}
Suppose that $G$ is a finite, $3$-regular, vertex-transitive graph
with minimum eigenvalue $\lam$. Then the independence ratio of $G$ is at least
$$ q_3(\lam) = \frac{1}{8} + \frac{3}{4 \pi} \arcsin \left( \frac{1-\lam}{4} \right) =
\frac{1}{2} - \frac{3}{4 \pi} \arccos \left( \frac{1-\lam}{4} \right) .$$
In fact, the following stronger statement holds:
$G$ contains two disjoint independent sets $I_1, I_2$
with total size $|I_1 \cup I_2| \geq 2 q_3(\lam) |V(G)|$.
This means that the induced subgraph $G[I_1 \cup I_2]$ is bipartite
and has at least $2 q_3(\lam) |V(G)|$ vertices.
\end{introthm}
See Figure \ref{fig:compare} to compare the lower bound given
in Theorem \ref{thm:3reg_main} to Hoffman's upper bound \eqref{eq:hoffman}.
Note that $-3 \leq \lam \leq -2$ for any $3$-regular transitive graph,
the only exception being the complete graph $K_4$, for which $\lam = -1$.
(See Proposition \ref{prop:la_min_3reg} in the Appendix.)
\begin{figure}[ht]
\centering
\includegraphics[width=15cm]{fig}
\caption{Hoffman's upper bound \eqref{eq:hoffman}
and the lower bound of Theorem \ref{thm:3reg_main} for $\lam \in [-3,-1]$}
\label{fig:compare}
\end{figure}
\subsection{Random wave functions on infinite transitive graphs}
In order to generalize the above theorems
we define random wave functions on infinite transitive graphs $G$.
A \emph{wave function} with eigenvalue $\la$ on $G$ is a function
$f \colon V(G) \to \IR$ such that
$$ \sum_{u \in N(v)} f(u) = \la f(v) \mbox{ for each vertex } v \in V(G) ,$$
where $N(v)$ denotes the set of neighbors of $v$ in $G$.
So a wave function is basically an eigenvector of the adjacency operator of $G$,
except that it does not need to be in $\ell_2(V(G))$.
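For instance (an illustrative example, not taken from the paper), the function $f(v) = \cos(\theta v)$ on the $N$-cycle satisfies the defining equation with eigenvalue $2\cos\theta$ whenever $\theta$ is an integer multiple of $2\pi/N$; the same cosine wave works on $\mathbb{Z}$ for any $\theta$. A quick numerical check:

```python
import numpy as np

# Sketch (not from the paper): on the N-cycle, f(v) = cos(theta*v) with
# theta = 2*pi*k/N satisfies sum of neighbor values = 2*cos(theta) * f(v).
N, k = 12, 5
theta = 2 * np.pi * k / N
f = np.cos(theta * np.arange(N))
lam = 2 * np.cos(theta)
for v in range(N):
    # neighbors of v on the cycle are v-1 and v+1 (mod N)
    assert abs(f[(v - 1) % N] + f[(v + 1) % N] - lam * f[v]) < 1e-9
```

The identity behind the check is $\cos(\theta(v-1)) + \cos(\theta(v+1)) = 2\cos\theta\,\cos(\theta v)$.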
These random wave functions will also let us answer
an open question concerning factor of i.i.d.\ processes.
Suppose that we have independent standard normal random variables $Z_u$
assigned to each vertex $u$ of an infinite transitive graph $G$.
By a \emph{factor of i.i.d.\ process} on $G$ we mean random variables $X_v$, $v \in V(G)$
that are all obtained as measurable functions of the random variables $Z_u$, $u \in V(G)$
and that are $\Aut(G)$-equivariant (i.e., they commute with the natural action of $\Aut(G)$).
It is easy to see that for any factor of i.i.d.\ process
the correlation of $X_v$ and $X_{v'}$ converges to $0$
as the distance of $v$ and $v'$ goes to infinity.
So a random process that is $0$ everywhere with probability $1/2$
and $1$ everywhere with probability $1/2$ cannot be a factor of i.i.d.
However, it can be seen easily that this process can be approximated
by factor of i.i.d.\ processes provided that $G$ is amenable.
So the space of factor of i.i.d.\ processes is not closed, that is,
the distributions of these processes do not form a closed set w.r.t.\ the weak topology.
It has been an open question
whether the same is true on non-amenable graphs,
for example, on the $d$-regular tree, see \cite[Section 4, Question 4]{birs_report}.
We will show that the space of factor of i.i.d.\ processes
is not closed provided that the spectrum of $G$ is uncountable.
We say that a factor of i.i.d. process $X_v$, $v \in V(G)$
is a \emph{linear factor of i.i.d.}\ if each $X_v$ is
obtained as a (possibly infinite) linear combination of $Z_u$, $u \in V(G)$.
Note that linear factors have the following properties.
\begin{definition} \label{def:gaussian_process}
We call a collection of random variables $X_v$, $v \in V(G)$
a \emph{Gaussian process} on $G$ if
they are jointly Gaussian and each $X_v$ is centered (i.e., has mean $0$).
(Random variables are jointly Gaussian
if any finite linear combination of them is Gaussian.)
We say that a Gaussian process $X_v$ is $\Aut(G)$-invariant (or simply invariant) if
for any $\Phi \in \Aut(G)$ the joint distribution of
the Gaussian process $X_{\Phi(v)}$ is the same as that of the original process.
\end{definition}
We will prove that the adjacency operator $A_G$ has
approximate eigenvectors (satisfying a certain invariance property)
for any $\la$ in the spectrum $\si(A_G)$.
Then we will use these approximate eigenvectors
as coefficients to define linear factor of i.i.d.\ processes
converging in distribution to an invariant Gaussian process $X_v$
that satisfies the eigenvector equation at each vertex.
\begin{introthm} \label{thm:gaussian_ev}
Let $G$ be an infinite vertex-transitive graph with adjacency operator $A_G$.
Then for each point $\la$ of the spectrum $\si(A_G)$ there exists
a nontrivial invariant Gaussian process $X_v$, $v \in V(G)$ such that
\begin{equation} \label{eq:eigen}
\sum_{u \in N(v)} X_u = \la X_v \mbox{ for each vertex } v \in V(G) ,
\end{equation}
where $N(v)$ denotes the set of neighbors of $v$ in $G$.
Furthermore, the process $X_v$ can be approximated (in distribution)
by linear factor of i.i.d.\ processes. Clearly, we can assume that
these approximating linear factors have only finitely many nonzero coefficients.
\end{introthm}
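To illustrate the theorem (this example is not from the paper): for $G = \mathbb{Z}$, whose spectrum is $[-2,2]$, the process $X_n = A\cos(\theta n) + B\sin(\theta n)$ with independent standard Gaussians $A, B$ is a Gaussian wave function with eigenvalue $\la = 2\cos\theta$; its covariance $\cov(X_m, X_n) = \cos(\theta(m-n))$ depends only on $m-n$, so the process is invariant. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.9                           # arbitrary sample value
lam = 2 * np.cos(theta)               # a point of the spectrum [-2, 2] of Z
n = np.arange(-50, 51)
A, B = rng.standard_normal(2)
# one sample path of the Gaussian wave function on a window of Z
X = A * np.cos(theta * n) + B * np.sin(theta * n)
# eigenvector equation X_{v-1} + X_{v+1} = lam * X_v at every interior vertex
assert np.allclose(X[:-2] + X[2:], lam * X[1:-1])
```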
An invariant Gaussian process satisfying \eqref{eq:eigen} will be called
a \emph{Gaussian wave function} with eigenvalue $\la$.
If the spectrum of $G$ is not countable, then
we can conclude that some of these Gaussian wave functions
cannot be obtained as factor of i.i.d.\ processes.
\begin{introthm} \label{thm:not_closed}
Let $G$ be an infinite transitive graph such that
the spectrum of the adjacency operator $A_G$ is not countable.
Then there exist (linear) factor of i.i.d.\ processes on $G$ with the property
that the weak limit of their distributions cannot be
obtained as the distribution of a factor of i.i.d.\ process.
\end{introthm}
We can say more for Cayley graphs.
\begin{introthm} \label{thm:cayley}
Suppose that $G$ is the Cayley graph of a finitely generated infinite group.
Then a Gaussian wave function with eigenvalue $\lama \mathdef \sup \si(A_G)$
can never be obtained as the distribution of a factor of i.i.d.\ process.
\end{introthm}
In view of Theorems \ref{thm:gaussian_ev} and \ref{thm:cayley}
there exists a Gaussian wave function with eigenvalue $\lama$ that
can be approximated by factor of i.i.d.\ processes but cannot be obtained as one.
An independent and different proof of this result
was given by Russell Lyons in the special case
when $G$ is a regular tree (personal communication).
\subsection{Factor of i.i.d.\ independent sets}
Let $X_v$, $v \in V(G)$ be a random process
on our infinite transitive graph $G$.
As in the finite setting,
$I_{+} \mathdef \left\{ v \, : \, X_v > X_u, \forall u \in N(v) \right\}$
is a random independent set.
If our process is invariant, then the probability that $v \in I_+$
is the same for each vertex $v$, and thus this probability
can be used to measure the size of $I_+$.
If our process is a factor of some i.i.d.\ process $Z_v$,
then the resulting independent set is also a factor of $Z_v$.
In the infinite setting let $\lam$ denote the minimum of the spectrum $\si(A_G)$
and let $X_v$ be a linear factor of $Z_v$ approximating
the Gaussian eigenvector with eigenvalue $\lam$ (see Theorem \ref{thm:gaussian_ev}).
As the process $X_v$ converges in distribution to the Gaussian eigenvector,
the probability $P( v \in I_+ )$ approaches
the corresponding probability for the Gaussian eigenvector process,
which, as we will see, can be computed in exactly the same way
as in the finite case.
\begin{introthm} \label{thm:inf}
Theorems \ref{thm:crude}, \ref{thm:arc_tr} and \ref{thm:3reg_main}
give lower bounds $q$ (in terms of $\lam$) for the independence ratio
of finite transitive graphs with least eigenvalue $\lam$.
These bounds remain true in the following framework.
Let $\lam$ denote the minimum of the spectrum of an infinite transitive graph $G$.
Then for any $\eps > 0$ there exists a factor of i.i.d.\ independent set on $G$
such that the probability that any given vertex is in the set is at least $q-\eps$.
\end{introthm}
A special case of this infinite setting was investigated in \cite{csghv}.
When $G$ is the $d$-regular tree $T_d$,
then any factor of i.i.d.\ independent set on $G$
automatically gives a lower bound for the independence ratio of
$d$-regular finite graphs with sufficiently large girth.
In particular, for the $3$-regular tree $T_3$ one has $\lam = -2\sqrt{2}$.
Therefore the infinite version of Theorem \ref{thm:3reg_main} tells us that
there exists a factor of i.i.d.\ independent set in $T_3$ with density
$$ \frac{1}{2} - \frac{3}{4\pi} \arccos\left(
\frac{1+2\sqrt{2}}{4} \right) \approx 0.4298 .$$
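This numerical value is easy to reproduce directly from the formula of Theorem \ref{thm:3reg_main}; a quick check:

```python
import numpy as np

lam = -2 * np.sqrt(2)        # bottom of the spectrum of the 3-regular tree T_3
q = 0.5 - 3 / (4 * np.pi) * np.arccos((1 - lam) / 4)
assert abs(q - 0.4298) < 1e-3
```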
In \cite{csghv} the somewhat better bound $0.4361$ was obtained,
which is the current best. In fact,
\cite{csghv} was the starting point for the work in the present paper.
For previous results on the independence ratio of large-girth graphs
see \cite{Bo_ind_set,mckay,shearer,shearer2,lauer_wormald,cubic}.
\subsection*{Acknowledgments}
The authors are grateful to P\'eter Csikv\'ari
for the elegant proof of Proposition \ref{prop:la_min_3reg},
and to Gergely Ambrus, K\'aroly B\"or\"oczky, G\'abor Fejes T\'oth, and Endre Makai
for their remarks on Conjecture \ref{conj:geom}.
\section{Finite vertex-transitive graphs} \label{sec:2}
Throughout this section $G$ will denote a vertex-transitive, finite graph
with degree $d$ for some positive integer $d \geq 3$.
The least eigenvalue of its adjacency matrix $A_G$ will be denoted by $\lam$.
For now let $\la$ be an arbitrary eigenvalue of $A_G$.
Eventually, we will choose $\la$ as the minimum eigenvalue.
First we define what we mean by a random eigenvector.
\begin{definition} \label{def:random_ev}
Let $E_\la$ be the eigenspace corresponding to $\la$, that is,
$$ E_\la \mathdef \left\{ x \in \ell_2( V(G) ) \ : \ A_G x = \la x \right\} .$$
We fix some orthonormal basis $e_1, \ldots, e_l$ in $E_\la$,
and take independent standard normal random variables $\ga_1, \ldots, \ga_l$.
We call $\sum_{i=1}^l \ga_i e_i$
the \emph{random eigenvector with eigenvalue $\la$}.
\end{definition}
\begin{remark}
The (distribution of the) random eigenvector is clearly independent of
the choice of the basis $e_1, \ldots, e_l$, so it is well defined.
It also follows that the distribution of the random eigenvector is $\Aut(G)$-invariant.
(Note that in the introduction we defined the random eigenvector differently:
a uniform random vector on the unit sphere of $E_\la$,
which is just the normalized version of
the random eigenvector of Definition \ref{def:random_ev}.)
\end{remark}
We will think of this random eigenvector as a collection of
real-valued random variables $X_v$, $v \in V(G)$ with the property that
they are jointly Gaussian and $\Aut(G)$-invariant, each $X_v$ is centered, and
$$ \sum_{u \in N(v)} X_u = \la X_v \mbox{ for each vertex } v, $$
where $N(v)$ denotes the set of neighbors of $v$ in $G$.
Since $G$ is transitive, each $X_v$ has the same variance.
After multiplying these random variables by a suitable positive constant
we may assume that $\var(X_v) = 1$ for each vertex $v$.
Next we define random independent sets by means of these random eigenvectors.
\begin{definition} \label{def:ind_set}
Let
\begin{align*}
I_{+} &= I^\la_{+} \mathdef
\left\{ v\in V(G) \ : \ X_v > X_u \mbox{ for each } u \in N(v) \right\} \mbox{, and}\\
I_{-} &= I^\la_{-} \mathdef
\left\{ v\in V(G) \ : \ X_v < X_u \mbox{ for each } u \in N(v) \right\} .
\end{align*}
Clearly, $I_{+}$ and $I_{-}$ are disjoint (random) independent sets in $G$.
\end{definition}
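To make Definitions \ref{def:random_ev} and \ref{def:ind_set} concrete, here is a short sketch (illustrative only) that samples the random eigenvector on the Petersen graph, a $3$-regular vertex-transitive graph with $\lam = -2$, and extracts $I_{+}$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Petersen graph: outer 5-cycle (0..4), spokes, inner pentagram (5..9).
# It is 3-regular and vertex-transitive; its least eigenvalue is -2,
# with eigenspace of dimension 4.
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(i, i + 5) for i in range(5)] \
      + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
A = np.zeros((10, 10))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
evals, evecs = np.linalg.eigh(A)
lam = evals[0]                                    # least eigenvalue (= -2)
basis = evecs[:, np.isclose(evals, lam)]          # orthonormal basis of E_lam
X = basis @ rng.standard_normal(basis.shape[1])   # the random eigenvector
assert np.allclose(A @ X, lam * X)
nbrs = {v: [u for u in range(10) if A[u, v]] for v in range(10)}
I_plus = {v for v in range(10) if all(X[v] > X[u] for u in nbrs[v])}
# I_plus is an independent set: no two of its vertices are adjacent
assert all(A[u, v] == 0 for u in I_plus for v in I_plus)
```

Running this once produces one (random) independent set; averaging $|I_{+}|/10$ over many samples estimates the probability $P(v \in I_{+})$ discussed below.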
The $\Aut(G)$-invariance implies that the probability of
the event $v \in I_{+}$ is the same for all vertices $v$.
So from now on, we will focus on a fixed vertex $v$
(that we will call the root) and its neighbors $u_1, \ldots, u_d$.
For $X_v$ and $X_{u_i}$ we will simply write $X$ and $Y_i$, respectively.
Therefore we have
\begin{equation} \label{eq:ev}
\sum_{i=1}^d Y_i = \la X .
\end{equation}
Let us denote the covariance $\cov(Y_i,Y_j)$ by $c_{i,j}$.
It follows from \eqref{eq:ev} that
\begin{equation} \label{eq:cov_sum}
\la^2 = \cov(\la X, \la X) = \sum_{i,j} c_{i,j} =
d + 2 \sum_{i<j} c_{i,j} \mbox{, thus }
\sum_{i<j} c_{i,j} = \frac{\la^2 - d}{2} .
\end{equation}
Setting $ U_i \mathdef X - Y_i$
we have
$$ P(v \in I_{+}) = P( U_i > 0, 1 \leq i \leq d ) .$$
As we will see, this probability can be expressed as the volume of a certain spherical simplex.
\begin{definition}
Let $S^{d-1}$ denote the unit sphere in $\IR^d$.
A half-space is said to be \emph{homogeneous} if the defining hyperplane
(i.e., the boundary of the half-space) passes through the origin.
A vector $n$ orthogonal to the defining hyperplane and ``pointing outward''
is called an \emph{outer normal vector}.
Then the given (open) half-space consists of those $x \in \IR^d$
for which the inner product $n \cdot x$ is negative.
A $d-1$-dimensional \emph{spherical simplex} is
the intersection of $S^{d-1}$ and $d$ homogeneous half-spaces in $\IR^d$.
Up to congruence, a spherical simplex is determined by the $\binom{d}{2}$
pairwise angles enclosed by the outer normal vectors of the $d$ half-spaces.
If these $\binom{d}{2}$ angles are all equal, then we say that
the spherical simplex is \emph{regular}.
\end{definition}
Since $Y_1, \ldots, Y_d$ are centered and jointly Gaussian, they can be
written as the linear combinations of independent standard normal variables:
there exist independent standard Gaussians $Z_1, \ldots, Z_d$
and (deterministic) vectors $y_1, \ldots, y_d \in \IR^d$ such that
$Y_i$ is the inner product of $y_i$ and $Z=(Z_1,\ldots,Z_d)$.
Setting $x = (y_1 + \cdots + y_d)/\la$ and $u_i = x - y_i$ we have
$$ Y_i = y_i \cdot Z , \quad X = x \cdot Z , \quad U_i = u_i \cdot Z .$$
It is easy to see that for any deterministic vectors $a,b \in \IR^d$
the covariance $\cov(a \cdot Z, b \cdot Z)$ is equal to the inner product $a \cdot b$.
In particular,
\begin{equation} \label{eq:cov_inner_pr}
x \cdot x = \var(X) = 1; \ y_i \cdot y_j = \cov(Y_i, Y_j) = c_{i,j}; \
u_i \cdot u_j = \cov(U_i, U_j) .
\end{equation}
In this formulation the event $U_i > 0$ is that
the random point $Z$ lies in the homogeneous open half-space
with outer normal vector $-u_i$.
So the probability in question is equal to the measure of
the intersection of the homogeneous half-spaces with outer normal vectors $-u_i$
with respect to the standard multivariate Gaussian measure on $\IR^d$.
This is simply the volume of the corresponding $d-1$-dimensional spherical simplex
divided by the volume $\vol(S^{d-1})$ of the unit sphere $S^{d-1}$,
which is determined by the pairwise angles
\begin{equation} \label{eq:phi}
\varphi_{i,j} \mathdef \angle(u_i,u_j) =
\arccos\left( \frac{u_i \cdot u_j}{\| u_i \| \| u_j \|} \right) ,
\end{equation}
which, in turn, can be expressed using the inner products $y_i \cdot y_j = c_{i,j}$.
The probability $P(v \in I_{+})$ seems to be the smallest
when $G$ has a lot of symmetry.
To make this more precise, we first define what we mean by a ``lot of symmetry''.
\begin{definition}
We say that $G$ is \emph{cherry-transitive} if
any cherry (path of length $2$) in $G$ can be mapped to any other cherry
using a suitable graph automorphism of $G$.
\end{definition}
\begin{proposition} \label{prop:ch_tr}
If $G$ is cherry-transitive, then
$$ c_{i,j} = \frac{\la^2 - d}{d(d-1)} \mbox{ for all } i \neq j ,$$
and, consequently, the pairwise angles $\varphi_{i,j}$ are all equal to
\begin{equation} \label{eq:angle}
\arccos\left( \frac{d-2-\la}{2(d-1)} \right) .
\end{equation}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:ch_tr}]
If $G$ is cherry-transitive, then for any $i_1 \neq j_1$ and $i_2 \neq j_2$ there exists
an automorphism $\Phi \in \Aut(G)$ such that $\Phi$ fixes the root $v$ and
takes the unordered pair $u_{i_1}, u_{j_1}$ to $u_{i_2}, u_{j_2}$, that is,
$$ \Phi v = v, \ \Phi u_{i_1} = u_{i_2}, \ \Phi u_{j_1} = u_{j_2} \mbox{ or }
\Phi v = v, \ \Phi u_{i_1} = u_{j_2}, \ \Phi u_{j_1} = u_{i_2} .$$
Together with the $\Aut(G)$-invariance of the random eigenvector
this implies that $c_{i_1,j_1} = c_{i_2,j_2}$.
Since this holds for any two pairs of indices,
it follows that all $c_{i,j}$, $i \neq j$ are the same.
Using \eqref{eq:cov_sum} we conclude that for $i \neq j$
$$ c_{i,j} = \frac{\la^2 - d}{d(d-1)} .$$
Then an easy calculation shows
(using notations introduced earlier) that
$$ u_i \cdot u_j = \frac{(d-\la)(d-2-\la)}{d(d-1)} \mbox{ and }
\| u_i \|^2 = \| u_j \|^2 = \frac{2(d-\la)}{d} .$$
Plugging this into \eqref{eq:phi} gives
$$ \varphi_{i,j} = \arccos\left( \frac{d-2-\la}{2(d-1)} \right) .$$
\end{proof}
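The closing computation of this proof can be double-checked symbolically; a minimal sketch in which the variable names mirror the notation above:

```python
import sympy as sp

d, la = sp.symbols('d lam')
c = (la**2 - d) / (d * (d - 1))        # common value of c_{i,j}, i != j
xy = (1 + (d - 1) * c) / la            # x.y_i, using x = (y_1+...+y_d)/lam
uij = 1 - 2 * xy + c                   # u_i.u_j = (x - y_i).(x - y_j)
ui2 = 2 - 2 * xy                       # ||u_i||^2 = (x - y_i).(x - y_i)
assert sp.simplify(xy - la / d) == 0
assert sp.simplify(uij - (d - la) * (d - 2 - la) / (d * (d - 1))) == 0
assert sp.simplify(ui2 - 2 * (d - la) / d) == 0
# cosine of the common pairwise angle phi_{i,j}
assert sp.simplify(uij / ui2 - (d - 2 - la) / (2 * (d - 1))) == 0
```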
We are now in a position to define the functions $q_d(\la)$.
\begin{definition} \label{def:qd}
For $-d \leq \la \leq d$ let $q_d(\la)$ denote the volume of
the $d-1$-dimensional regular spherical simplex corresponding to
the angle \eqref{eq:angle} divided by $\vol(S^{d-1})$.
Then $P(v \in I_{+}) = q_d(\la)$ for any cherry-transitive $G$.
In particular, the independence ratio of
any cherry-transitive graph $G$ is at least $q_d(\lam)$.
\end{definition}
So $P(v \in I_{+}) = q_d(\la)$ provided that $G$ has enough symmetry.
The following conjecture says that in the general (i.e., vertex-transitive) case
the probability should be larger than that.
\begin{conjecture} \label{conj:q_d}
For any transitive graph $G$ it holds that
$$ P(v \in I_{+}) \geq q_d(\la) $$
for any $\la$, or at least for sufficiently small $\la$: $\la \leq \la_0$ for some $\la_0$.
This would, of course, imply that the independence ratio of $G$ is at least $q_d(\lam)$
provided that $\lam \leq \la_0$.
\end{conjecture}
We will prove this conjecture for $d=3$ and $\la_0=-2$ in Section \ref{sec:3reg}.
The conjecture might be true for arbitrary $\la$,
but proving it for $\la \leq \la_0 = -2$ will be sufficient for our purposes,
because $\lam \leq -2$ for any $3$-regular transitive graph except $K_4$.
A few properties of the functions $q_d(\la)$ are collected in the next proposition.
\begin{proposition} \label{prop:q_d_prop}
For any $d \geq 3$, $q_d$ is a continuous, monotone decreasing function on $[-d,-1]$ with
$$ q_d(-d) = \frac{1}{2} \mbox{ and } q_d(-1) = \frac{1}{d+1} .$$
As for the behavior of $q_d$ around $-d$ we have
$$ q_d(\la) \geq \frac{1}{2} - \frac{ \pi \vol(S^{d-2}) }{ 4 \vol(S^{d-1}) }
\sqrt{\frac{\la+d}{d}} \geq \frac{1}{2} - \frac{1}{3} \sqrt{\la+d} .$$
\end{proposition}
\begin{proof}
Monotonicity and continuity follow readily from the definition of $q_d$.
For $\la = -d$ the angles $\varphi_{i,j}$ are $0$, so
the corresponding (degenerate) spherical simplex is a hemisphere,
thus $q_d(-d) = 1/2$ as claimed.
For $\la = -1$ the angles $\varphi_{i,j}$ are $\pi/3$.
It is not hard to see that the vertices of our spherical simplex in that case
will be the $d$ vertices of a face of a regular (Euclidean) simplex in $\IR^d$.
Then each of the $d+1$ spherical simplices belonging
to the $d+1$ faces has volume $\vol(S^{d-1})/(d+1)$.
(We could also argue that for $G = K_{d+1}$ and $\la = -1$
we have $P( v \in I_{+} ) = 1/(d+1)$,
and since $K_{d+1}$ is cherry-transitive,
$P( v \in I_{+} ) = q_d(-1)$.)
See Section \ref{sec:near_neg_d} for a proof of the claimed behavior around $-d$.
\end{proof}
\subsection{The $3$-regular, vertex-transitive case} \label{sec:3reg}
Now we turn to the proof of Theorem \ref{thm:3reg_main} that gives
a lower bound for the independence ratio of $3$-regular transitive graphs.
We will basically show that Conjecture \ref{conj:q_d}
is true when $d=3$ and $\la_0 = -2$.
For $d=3$ the surface area of the unit sphere $S^{d-1} = S^2$ is $4\pi$ and
the area of a spherical triangle is $\al + \be + \ga - \pi$,
where $\al, \be, \ga$ are the angles enclosed by the sides of the spherical triangle.
As we have seen, the probability $P(v \in I_{+})$ equals
the area of a certain spherical triangle divided by $4 \pi$.
The angles of the spherical triangle in question are
$\pi - \varphi_{1,2}$, $\pi - \varphi_{1,3}$ and $\pi - \varphi_{2,3}$.
Therefore
\begin{multline} \label{eq:prob_3reg}
P(v \in I_{+}) = \frac{1}{4\pi} \left( \sum_{1\leq i<j \leq 3} (\pi - \varphi_{i,j} ) - \pi \right) =
\frac{1}{4\pi} \left(\frac{\pi}{2} + \sum_{1\leq i<j \leq 3} (\frac{\pi}{2} - \varphi_{i,j} ) \right) = \\
\frac{1}{4\pi} \left( \frac{\pi}{2} + \sum_{1\leq i<j \leq 3}
\arcsin\left( \frac{u_i \cdot u_j}{\| u_i \| \| u_j \|} \right) \right) .
\end{multline}
By Proposition \ref{prop:ch_tr} we have $c_{i,j} = (\la^2 - 3)/6$
and $\varphi_{i,j} = \arccos( (1-\la)/4 )$ in the cherry-transitive case, thus
\begin{equation} \label{eq:q_3}
q_3(\la) = \frac{1}{8} + \frac{3}{4 \pi} \arcsin \left( \frac{1-\la}{4} \right)
= \frac{1}{2} - \frac{3}{4 \pi} \arccos \left( \frac{1-\la}{4} \right) .
\end{equation}
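As a sanity check of \eqref{eq:q_3} (illustrative, not part of the proof), one can estimate $P(v \in I_{+})$ by simulation on the Petersen graph, which is $3$-regular and cherry-transitive (it is in fact $3$-arc-transitive) with $\lam = -2$, and compare with $q_3(-2) \approx 0.3275$:

```python
import numpy as np

rng = np.random.default_rng(1)
# Petersen graph: cherry-transitive, 3-regular, least eigenvalue -2
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(i, i + 5) for i in range(5)] \
      + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
A = np.zeros((10, 10))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
evals, evecs = np.linalg.eigh(A)
basis = evecs[:, np.isclose(evals, -2)]           # eigenspace of lambda = -2
nbr = np.array([[u for u in range(10) if A[u, v]] for v in range(10)])
T = 20000
X = rng.standard_normal((T, basis.shape[1])) @ basis.T   # T random eigenvectors
in_Iplus = (X[:, :, None] > X[:, nbr]).all(axis=2)       # strict local maxima
q_hat = in_Iplus.mean()                           # Monte Carlo P(v in I_+)
q3 = 1 / 8 + 3 / (4 * np.pi) * np.arcsin(3 / 4)   # q_3(-2), about 0.3275
assert abs(q_hat - q3) < 0.01
```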
\begin{proof}[Proof of Theorem \ref{thm:3reg_main}]
The statement of the theorem is true for the complete graph $K_4$
as the independence ratio is $1/4$ and the minimum eigenvalue is $-1$ in that case.
For any other $3$-regular transitive graph $G$ we have $\lam \leq -2$.
(See Proposition \ref{prop:la_min_3reg} in the Appendix.)
Therefore it suffices to prove that $P(v \in I_{+}) \geq q_3(\la)$, whenever $\la \leq -2$.
Recall that $Y_1, Y_2, Y_3$ are standard Gaussians with
pairwise covariances $c_{i,j}$. Therefore the matrix
$$
\begin{pmatrix}
1 & c_{1,2} & c_{1,3}\\
c_{1,2} & 1 & c_{2,3} \\
c_{1,3} & c_{2,3} & 1
\end{pmatrix}
$$
is positive semidefinite. In particular, its determinant is nonnegative:
$$ 1 + 2 c_{1,2} c_{1,3} c_{2,3} - c_{1,2}^2 - c_{1,3}^2 - c_{2,3}^2 \geq 0 .$$
Furthermore, according to \eqref{eq:cov_sum} we have
$c_{1,2}+c_{1,3}+c_{2,3} = (\la^2-3)/2 \geq 1/2$, because $\la \leq -2$.
It follows that each $c_{i,j}$ must be between $-1/2$ and $1$.
Indeed, let $x,y,z$ be real numbers between $-1$ and $1$
with $x+y+z \geq 1/2$ and $1+2xyz-x^2-y^2-z^2 \geq 0$.
Assume that $z < -1/2$. Then
\begin{multline*}
0 \leq 1+2xyz-x^2-y^2-z^2 = 1 + 2(z+1)xy - (x+y)^2 - z^2 \leq \\
1 + 2(z+1)\left( \frac{x+y}{2} \right)^2 - (x+y)^2 - z^2 =
1 + \frac{z-1}{2} (x+y)^2 - z^2 \leq
1 + \frac{z-1}{2} \left(\frac{1}{2}-z\right)^2 - z^2 < 0 ,
\end{multline*}
a contradiction. Therefore $z \geq -1/2$. Similarly, $x,y \geq -1/2$, too.
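The sign claim in the last display can be double-checked symbolically: twice the final expression factors as $(z+\frac{1}{2})(z-1)(z-\frac{7}{2})$, a product of three negative factors for $-1 \leq z < -\frac{1}{2}$. A sketch:

```python
import sympy as sp

z = sp.symbols('z')
g = 1 + (z - 1) / 2 * (sp.Rational(1, 2) - z)**2 - z**2
# 2g = (z + 1/2)(z - 1)(z - 7/2); all three factors are negative for z < -1/2
assert sp.expand(
    2 * g - (z + sp.Rational(1, 2)) * (z - 1) * (z - sp.Rational(7, 2))
) == 0
for zz in [sp.Rational(-99, 100), sp.Rational(-3, 4), sp.Rational(-51, 100)]:
    assert g.subs(z, zz) < 0
```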
Next we bound $u_i \cdot u_j/(\| u_i \| \| u_j \|)$ from below.
Using \eqref{eq:cov_inner_pr}, $x = (y_1+y_2+y_3)/\la$
and $c_{1,2}+c_{1,3}+c_{2,3} = (\la^2-3)/2$:
\begin{align*}
x \cdot y_1 &= \frac{1}{\la}( 1 + c_{1,2} + c_{1,3} ) =
\frac{1}{\la} \left( 1 + \frac{\la^2-3}{2} - c_{2,3} \right) =
\frac{\la}{2} - \frac{1}{2\la}-\frac{1}{\la} c_{2,3}, \\
\|u_1\|^2 &= \| x - y_1 \|^2 = 2 - 2 x \cdot y_1 =
2 - \la + \frac{1}{\la} + \frac{2}{\la} c_{2,3} .
\end{align*}
Similar formulas hold for $x \cdot y_i$ and $\|u_i\|$, $i=2,3$.
By the inequality of arithmetic and geometric means it follows that
$$ \| u_1 \| \| u_2 \| \leq \frac{\|u_1\|^2 + \|u_2\|^2}{2} =
2 - \la + \frac{1}{\la} + \frac{1}{\la}( c_{1,3} + c_{2,3} ) =
\frac{-1}{\la} \left( \frac{1}{2} - 2\la + \frac{\la^2}{2} + c_{1,2} \right) .$$
Note that this holds with equality when all $c_{i,j}$ are equal. Furthermore,
\begin{multline*}
u_1 \cdot u_2 = (x-y_1) \cdot (x-y_2) = 1 + c_{1,2} - x \cdot (y_1+y_2) =
1 + c_{1,2} + x \cdot (y_3 - \la x) = \\
1 + c_{1,2} + \left( \frac{\la}{2} - \frac{1}{2\la}-\frac{1}{\la} c_{1,2} \right) - \la =
\frac{-1}{\la} \left( \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) c_{1,2} \right) .
\end{multline*}
It follows that
$$ \frac{u_1 \cdot u_2}{\|u_1\| \|u_2\|} \geq
\frac{ \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) c_{1,2} }
{ \frac{1}{2} - 2\la + \frac{\la^2}{2} + c_{1,2} } ,$$
because the numerator is positive
(note that $-3 \leq \la \leq -2$ and $c_{1,2} \geq -1/2$).
The analogous inequality holds for any other pair of indices $i,j$.
Since $\arcsin$ is a monotone increasing function,
\eqref{eq:prob_3reg} yields that
$$
P(v \in I_{+}) \geq
\frac{1}{8} + \frac{1}{4\pi} \sum_{1\leq i < j \leq 3}
\arcsin\left( \frac{ \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) c_{i,j} }
{ \frac{1}{2} - 2\la + \frac{\la^2}{2} + c_{i,j} } \right) .
$$
Setting
$$ f(t) \mathdef
\arcsin\left( \frac{ \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) t }
{ \frac{1}{2} - 2\la + \frac{\la^2}{2} + t } \right) ,$$
we have
\begin{equation} \label{eq:proved_ineq}
P(v \in I_{+}) \geq
\frac{1}{8} + \frac{1}{4\pi} \sum_{1\leq i < j \leq 3} f( c_{i,j} ) .
\end{equation}
On the other hand,
\begin{equation} \label{eq:q3_with_f}
q_3(\la) = \frac{1}{8} + \frac{3}{4\pi} f\left( \frac{\la^2-3}{6} \right) ,
\end{equation}
which follows from \eqref{eq:q_3} and the definition of $f$.
(It also follows from the fact that when each $c_{i,j}$ is equal to $(\la^2-3)/6$,
then \eqref{eq:proved_ineq} should hold with equality.)
In view of \eqref{eq:proved_ineq} and \eqref{eq:q3_with_f}
we need to show that
\begin{equation} \label{eq:ineq_f}
\frac{1}{3} \sum_{1\leq i < j \leq 3} f( c_{i,j} )
\geq f\left( \frac{\la^2-3}{6} \right) ,
\end{equation}
where each $c_{i,j}$ is between $-1/2$ and $1$ and their average is $(\la^2-3)/6$.
This, of course, would follow from the convexity of $f$.
Unfortunately, $f$ is not convex on the entire interval $[-1/2,1]$.
We claim, however, that the tangent line to $f$ at $t_0 = (\la^2-3)/6$
is below $f$ on the entire interval $[-1/2,1]$, which still implies \eqref{eq:ineq_f}.
The rather technical proof of this claim can be found
in the Appendix (Lemma \ref{lem:tangent_line}).
Now let $\la = \lam \leq -2$, then $P(v \in I_{+}) \geq q_3(\lam)$.
So the expected size of the random independent set $I_{+}$ is at least $q_3(\lam) |V(G)|$,
thus the independence ratio of $G$ is at least $q_3(\lam)$.
To prove the second part of the statement we notice that
the random independent set $I_{-}$ (see Definition \ref{def:ind_set})
has the same expected size. Indeed, if we replace $X_v$, $v \in V(G)$ with
$X'_v = - X_v$, then $X'_v$, $v \in V(G)$ have the same joint distribution
and the roles of $I_{+}$ and $I_{-}$ interchange.
Since $I_{+}$ and $I_{-}$ are always disjoint,
the expected size of their union $I_{+} \cup I_{-}$ is at least $2 q_3(\lam) |V(G)|$.
Consequently, there must exist disjoint independent sets $I_1,I_2$ in $G$
with $|I_1 \cup I_2| \geq 2 q_3(\lam) |V(G)|$.
\end{proof}
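The tangent-line claim (Lemma \ref{lem:tangent_line}) can also be sanity-checked numerically; the sketch below tests it for $\la = -2.5$, an arbitrary sample value in $[-3,-2]$:

```python
import numpy as np

lam = -2.5                              # sample value with lam <= -2
t0 = (lam**2 - 3) / 6                   # the tangency point

def f(t):
    # f from the proof: arcsin of a ratio that is linear in t
    num = 0.5 - lam + lam**2 / 2 + (1 - lam) * t
    den = 0.5 - 2 * lam + lam**2 / 2 + t
    return np.arcsin(num / den)

h = 1e-6
fp = (f(t0 + h) - f(t0 - h)) / (2 * h)  # numerical derivative f'(t0)
ts = np.linspace(-0.5, 1.0, 2001)
# the tangent line at t0 stays below f on the whole interval [-1/2, 1]
assert np.all(f(ts) >= f(t0) + fp * (ts - t0) - 1e-6)
```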
For graphs with very large odd-girth Theorem \ref{thm:3reg_other}
of the Appendix gives a slightly better bound.
The proof is based on the same random eigenvector
but uses a different method to find large independent sets.
\subsection{The arc-transitive case} \label{subsec:edge_tr}
The following innocent-looking, and very plausible, conjecture
is open in dimension $n \geq 4$.
\begin{conjecture} \label{conj:geom}
Let $S$ be a sphere in the $n$-dimensional Euclidean space $\IR^n$.
We have $n+1$ spherical caps with the same given radius on $S$.
We want to find the configuration for which
the volume of the union of the caps is maximal.
It is conjectured that this optimal configuration is
always the one where the $n+1$ centers are
the vertices of a regular simplex in $\IR^n$.
\end{conjecture}
The statement of the conjecture is trivial for $n=2$,
while the $n=3$ case follows from the so-called Moment Theorem
of L.~Fejes T\'oth \cite[Theorem 2]{fejes_toth}.
In what follows we will explain
how the case $n=d-1$ of the above conjecture
implies that $P( v \in I_{+} ) \geq q_d(\la)$ holds
for every $d$-regular arc-transitive graph $G$,
and consequently the independence ratio of $G$ is at least $q_d(\lam)$.
In particular, the $d=4$ case follows from the $n=3$ case of
the conjecture which is known to be true, see Theorem \ref{thm:arc_tr}.
Using our previous notations, $P( v \in I_{+} )$ is the volume
of the spherical simplex $T$ determined by the half-spaces
with outer normal vectors $-u_i$, $i=1, \ldots, d$,
while $q_d(\la)$ is the volume of the same simplex
in the case when all the angles
$\varphi_{i,j} = \angle(u_i,u_j)$, $i \neq j$
are the same.
In other words, we need to show that
the volume of the spherical simplex $T$
is minimal when the angles $\angle(u_i,u_j)$ are the same.
If $G$ is arc-transitive, then the covariances
$\cov(X,Y_i) = x \cdot y_i$ are all equal. Since
$$ x \cdot y_1 + \cdots + x \cdot y_d = x \cdot (y_1+ \cdots + y_d) = x \cdot (\la x) = \la ,$$
we get that $x \cdot y_i = \la/d$ for each $i$.
Since $\|x\| = \|y_i\| = 1$ and $u_i = x - y_i$,
it follows that the angle enclosed by $x$ and $u_i$ satisfies
\begin{equation} \label{eq:delta}
\angle(x,u_i) = \de \mathdef \frac{ \pi - \arccos( \la / d) }{2} =
\frac{ \arccos( - \la/d ) }{2} \mbox{ for each } i.
\end{equation}
Now let $S_l$ be the set of points on $S^{d-1}$ that
have some fixed (spherical) distance $l$ from $x$,
thus $S_l$ is a $d-2$-dimensional sphere for any $l$.
The intersection of $S_l$ and the half-space
with outer normal vector $u_i$ is
a spherical cap of radius depending only on $l$ and $\la$.
So the intersection of $S_l$ and our spherical simplex $T$ can be obtained
by removing $d$ spherical caps of the same given radius from $S_l$.
If Conjecture \ref{conj:geom} is true for $n=d-1$,
then the total volume of the removed area is maximal
for the ``regular configuration'' when each $\angle(u_i,u_j)$ is the same.
Therefore the $d-2$-dimensional volume of $T \cap S_l$
is minimal for the regular configuration for any $l$.
It follows that the $d-1$-dimensional volume of $T$
is also minimal for the regular configuration,
and this is what we wanted to prove.
\subsection{Bounds near $-d$} \label{sec:near_neg_d}
Even if Conjecture \ref{conj:geom} is not assumed to be true,
the above observations yield a lower bound
for the independence ratio of $d$-regular arc-transitive graphs
in the case when the least eigenvalue is close to $-d$.
As we have seen in \eqref{eq:delta}, $ \angle(x,u_i) = \de $ for each $i$,
which means that each point of $S^{d-1}$ at (spherical) distance
less than $\pi/2 - \de$ from $x$ is contained in our spherical simplex $T$.
These points form a spherical cap with center $x$ and radius $\pi/2 - \de$.
(In fact, this spherical cap is the ``inscribed ball'' of $T$.)
Using \eqref{eq:delta} and that
$\arccos(t) \leq \pi/2 \sqrt{1-t}$ for any $t \in [0,1]$, we get
$$ \de = \frac{ \arccos( - \la/d ) }{2} \leq \frac{\pi}{4} \sqrt{1+\la/d} $$
provided that $\la \leq 0$.
This spherical cap can be obtained by taking the hemisphere (around $x$)
and removing a strip of ``width'' $\de$ (in spherical distance).
The volume of this strip is clearly at most $\de \vol(S^{d-2})$, therefore
the volume of the spherical cap is at least $ \vol(S^{d-1})/2 - \de \vol(S^{d-2})$, whence
$$ P( v \in I_{+} ) \geq \frac{ \vol(S^{d-1})/2 - \de \vol(S^{d-2}) }{ \vol(S^{d-1}) } =
\frac{1}{2} - \frac{ \pi \vol(S^{d-2}) }{ 4 \vol(S^{d-1}) } \sqrt{\frac{\la+d}{d}} .$$
For $d=4$ we have $\vol(S^{2}) / \vol(S^{3}) = (4 \pi) / (2 \pi^2) = 2/\pi$,
so the bound is
$$ \frac{1}{2} - \frac{1}{4} \sqrt{\la+4} .$$
For general $d$, we use the estimate
$\vol(S^{d-2}) / \vol(S^{d-1}) \leq \sqrt{d}/\sqrt{2\pi}$
(see Lemma \ref{lem:vol_ratio} of the Appendix) to obtain the following bound
$$ \frac{1}{2} - \frac{\sqrt{\pi}}{4\sqrt{2}} \sqrt{\la+d} >
\frac{1}{2} - \frac{1}{3} \sqrt{\la+d} .$$
These are lower bounds for the probability $P( v \in I_{+} )$,
in particular, for $q_d(\la)$.
Thus the first part of Theorem \ref{thm:arc_tr} follows,
as well as the estimate \eqref{eq:q4} for $q_4(\la)$
and the last statement of Proposition \ref{prop:q_d_prop}.
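The constants above are easy to sanity-check numerically. The following sketch (an illustration only) uses the standard surface-volume formula $\vol(S^{n-1}) = 2\pi^{n/2}/\Gamma(n/2)$, which also appears in the proof of Lemma \ref{lem:vol_ratio}:

```python
import math

def vol_sphere(n):
    """Surface volume of the unit sphere S^(n-1) in R^n: 2*pi^(n/2) / Gamma(n/2)."""
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

# d = 4: vol(S^2) / vol(S^3) = (4*pi) / (2*pi^2) = 2/pi
ratio = vol_sphere(3) / vol_sphere(4)

# for d = 4 the coefficient (pi/4) * ratio * sqrt(1/d) reduces to 1/4
coeff_d4 = math.pi / 4 * ratio / math.sqrt(4)

# the universal constant sqrt(pi) / (4*sqrt(2)) is indeed below 1/3
const = math.sqrt(math.pi) / (4 * math.sqrt(2))
print(ratio, coeff_d4, const)
```

The last value is roughly $0.313$, which is why the cruder constant $1/3$ suffices in the displayed bound.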
We can even say something in the general (vertex-transitive) case.
Using $ x \cdot y_1 + \cdots + x \cdot y_d = \la $ and $x \cdot y_j \geq -1$:
$$ x \cdot y_i \leq \la + d - 1 \mbox{ for each } 1 \leq i \leq d .$$
Therefore the angle $\angle(x,y_i)$ is at least $\arccos( \la + d - 1 )$.
Using that $\arccos(t) \leq \frac{\pi}{2} \sqrt{1-t}$ for any $t \in [0,1]$, it follows that
$$ \angle(x,u_i) \leq \de' \mathdef \frac{ \pi - \arccos( \la + d - 1 ) }{2} =
\frac{ \arccos( 1 - \la - d ) }{2} \leq \frac{\pi}{4} \sqrt{\la+d} $$
provided that $\la \leq -d+1$.
This means that our spherical simplex $T$ contains
the spherical cap with center $x$ and radius $\pi/2 - \de'$. Therefore
$$ P( v \in I_{+} ) \geq \frac{ \vol(S^{d-1})/2 - \de' \vol(S^{d-2}) }{ \vol(S^{d-1}) } =
\frac{1}{2} - \frac{ \pi \vol(S^{d-2}) }{ 4 \vol(S^{d-1}) } \sqrt{\la+d}
\geq \frac{1}{2} - \frac{\sqrt{\pi}}{4\sqrt{2}} \sqrt{d(\la+d)} .$$
Since $\sqrt{\pi}/(4\sqrt{2}) < 1/3$, Theorem \ref{thm:crude} follows.
\section{Infinite transitive graphs} \label{sec:3}
\subsection{Random wave functions}
Our goal now is to generalize the random eigenvectors
we introduced in Section \ref{sec:2} for infinite transitive graphs $G$.
For an infinite graph $G$ the adjacency operator
$A_G: \ell_2( V(G) ) \to \ell_2( V(G) )$ might not have any eigenvectors
(i.e., the point spectrum might be empty).
So the approach we used in the finite setting will not work here.
Instead, we will define random wave functions as limits of linear factor of i.i.d.\ processes.
The coefficients of these linear factors will be approximate eigenvectors of $A_G$
that are invariant under automorphisms fixing some root $x \in V(G)$.
We start with proving that such approximate eigenvectors exist
for any $\la$ in the spectrum $\si(A_G)$.
Let $\St_x(G)$ denote the \emph{stabilizer subgroup},
that is, the group of automorphisms fixing $x$.
\begin{theorem} \label{thm:inv_appr_ev}
Let $G$ be an infinite vertex-transitive graph
with adjacency operator $A_G$ and with some fixed root $x$.
Then for any $\eps >0$ and any $\la$ in the spectrum $\si(A_G)$ there exists
a $\St_x(G)$-invariant vector $\al \in \ell_2( V(G) )$ such that
$$ \| \al \| =1 \mbox{ and } \|A_G \al - \la \al \| \leq \eps .$$
\end{theorem}
\begin{proof}
Consider the projection-valued measure $P_\la$
corresponding to the self-adjoint operator $A_G$.
This ``measure'' assigns an orthogonal projection
$P_S$ to each Borel set $S \subseteq \IR$.
According to spectral theory, one can integrate with respect to this measure.
For instance, the following formula holds:
$$ A_G = \int_\IR \la \, \mathrm{d} P_\la .$$
Furthermore, the projections $P_S$ have the property that if an operator $T$
commutes with $A_G$, then it also commutes with each projection $P_S$.
There is a unitary operator $U_\Phi$ corresponding to each $\Phi \in \Aut(G)$
(the one that permutes the coordinates of $\ell_2( V(G) )$ according to $\Phi$).
Since $U_\Phi$ commutes with $A_G$, it also commutes with the projections $P_S$.
Now let $\la_0$ be an arbitrary element of the spectrum $\si(A_G)$
and set $S=[\la_0-\eps, \la_0+\eps]$. We define $\al$ as the image of
the indicator function $\ind_x$ under the projection $P_S$:
$$ \al \mathdef P_{[\la_0-\eps, \la_0+\eps]} \ind_x .$$
Note that $\ind_x$ is a fixed point of $U_\Phi$ for any $\Phi \in \St_x(G)$, therefore
$$ U_\Phi \al = U_\Phi P_{[\la_0-\eps, \la_0+\eps]} \ind_x =
P_{[\la_0-\eps, \la_0+\eps]} U_\Phi \ind_x = P_{[\la_0-\eps, \la_0+\eps]} \ind_x = \al , $$
thus $\al$ is $\St_x(G)$-invariant.
On the other hand, since $P_S P_{\IR \sm S} = 0$, we have
$$ A_G \al - \la_0 \al = \left( \int_{\IR} (\la-\la_0) \, \mathrm{d} P_\la \right) \al =
\left( \int_{[\la_0-\eps, \la_0+\eps]} (\la-\la_0) \, \mathrm{d} P_\la \right) \al ,$$
which clearly implies that
$$ \| A_G \al - \la_0 \al \| \leq \eps \| \al \| .$$
It remains to show that $\al = P_{[\la_0-\eps, \la_0+\eps]} \ind_x \neq 0 $.
Assume that $P_{[\la_0-\eps, \la_0+\eps]} \ind_x = 0$.
It follows that $P_{[\la_0-\eps, \la_0+\eps]} \ind_v = 0$ for every vertex $v \in V(G)$.
Indeed, let $\Phi \in \Aut(G)$ such that $\Phi x = v$. Then $U_\Phi \ind_x = \ind_v$ and
$$ P_{[\la_0-\eps, \la_0+\eps]} \ind_v = P_{[\la_0-\eps, \la_0+\eps]} U_\Phi \ind_x =
U_\Phi P_{[\la_0-\eps, \la_0+\eps]} \ind_x = 0 .$$
This holds for each vertex $v$, which clearly implies
that $P_{[\la_0-\eps, \la_0+\eps]} = 0$. Then the operator
$$ B = \int_{\IR \sm [\la_0-\eps, \la_0+\eps]}
\frac{1}{\la-\la_0} \, \mathrm{d} P_\la $$
would be a bounded inverse of $A_G - \la_0 I$, contradicting
our assumption that $\la_0 \in \si(A_G)$.
\end{proof}
\begin{remark}
There is a general theorem for Hilbert spaces
saying that every point of the spectrum of a self-adjoint operator
is an approximate eigenvalue \cite[Corollary 4.1.3]{sunder}.
So the real content of the above theorem is that one can find
approximate eigenvectors that are $\St_x(G)$-invariant.
This invariance will be crucial for us later on,
when we will use these approximate eigenvectors
as coefficients to define linear factor of i.i.d.\ processes.
\end{remark}
Suppose now that we have an i.i.d.\ process on $G$:
independent standard normal random variables $Z_u$ assigned to each vertex $u$.
We will consider processes $X_v$, $v \in V(G)$, where each $X_v$ is
a (possibly infinite) linear combination of $Z_u$, $u \in V(G)$.
We collected some obvious properties of such processes in the next proposition.
\begin{proposition} \label{prop:lin_factor}
Let $\be_{v,u}$, $v,u \in V(G)$ be real numbers and let
\begin{equation} \label{eq:lin}
X_v = \sum_{u \in V(G)} \be_{v,u} Z_u .
\end{equation}
The infinite sum in \eqref{eq:lin} converges almost surely if and only if
\begin{equation} \label{eq:square_sum}
\sum_{u \in V(G)} \be_{v,u}^2 < \infty .
\end{equation}
If \eqref{eq:square_sum} is satisfied,
then $X_v$ is a centered Gaussian with variance
$\var(X_v) = \sum_{u \in V(G)} \be_{v,u}^2$.
The process $X_v$, $v \in V(G)$ is $\Aut(G)$-invariant if and only if
\begin{equation} \label{eq:inv}
\be_{v,u} = \be_{\Phi v, \Phi u} \mbox{ for all } \Phi \in \Aut(G) .
\end{equation}
\end{proposition}
Now we are in a position to formally define linear factor of i.i.d.\ processes.
\begin{definition} \label{def:lin_factor}
We say that a process $X_v$, $v \in V(G)$ is a
\emph{linear factor} of the i.i.d process $Z_u$ if
it can be written as in \eqref{eq:lin}
for some real numbers $\be_{v,u}$, $v,u \in V(G)$
satisfying \eqref{eq:square_sum} and \eqref{eq:inv}.
\end{definition}
\begin{remark} \label{rm:lin_factor}
Let us fix a root $x \in V(G)$. For a linear factor the coefficients
$\al_u \mathdef \be_{x,u}$ clearly determine each $\be_{v,u}$.
Here $\al=(\al_u)_{u \in V(G)}$ can be any $\St_x(G)$-invariant vector in $\ell_2( V(G) )$.
So there is a one-to-one correspondence between linear factor of i.i.d.\ processes on $G$
and $\St_x(G)$-invariant vectors $\al \in \ell_2( V(G) )$.
Also, by Proposition \ref{prop:lin_factor} we have $\var(X_v) = \| \al \|^2$.
\end{remark}
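As an illustration of the proposition and the remark (our own finite sketch, not part of the argument): on the cycle $C_n$ the choice $\be_{v,u} = \al_{(u-v) \bmod n}$ is invariant under rotations, and the identity $\var(X_v) = \|\al\|^2$ can be checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
alpha = rng.standard_normal(n) / 3          # coefficients as seen from the root 0
# rotation-invariant coefficients: beta[v, u] = alpha[(u - v) mod n]
beta = np.array([np.roll(alpha, v) for v in range(n)])

Z = rng.standard_normal((200000, n))        # many samples of the i.i.d. process
X = Z @ beta.T                              # X[:, v] = sum_u beta[v, u] * Z[:, u]

emp_var = X[:, 3].var()
norm_sq = float(np.sum(alpha ** 2))
print(emp_var, norm_sq)                     # empirical variance vs ||alpha||^2
```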
Recall Definition \ref{def:gaussian_process} of invariant Gaussian processes.
\begin{definition} \label{def:gaussian_ev}
We call an invariant Gaussian process $X_v$, $v \in V(G)$
a \emph{Gaussian wave function} with eigenvalue $\la$ if
$$ \sum_{u \in N(v)} X_u = \la X_v \mbox{ for each vertex } v \in V(G) ,$$
where $N(v)$ denotes the set of neighbors of $v$ in $G$.
\end{definition}
\begin{example}
It was shown in \cite{csghv} that for the $d$-regular tree $T_d$
there exists an essentially unique Gaussian wave function for each $\la \in [-d,d]$.
Furthermore, this Gaussian wave function can be approximated
by factor of i.i.d.\ processes provided that
$\la$ is in the spectrum $\si(T_d) = [-2\sqrt{d-1}, 2\sqrt{d-1}]$.
\end{example}
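For a concrete instance beyond $T_d$ (our own illustration; the construction below is not taken from the text): on the $2$-regular graph $\mathbb{Z}$, for $\la = 2\cos\theta$ the stationary Gaussian process $X_v = \xi\cos(\theta v) + \eta\sin(\theta v)$, with $\xi, \eta$ independent standard normals, has covariance $\cos(\theta(u-v))$ and satisfies the eigenvector equation $X_{v-1} + X_{v+1} = \la X_v$. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.7                       # any angle; the eigenvalue is 2*cos(theta)
lam = 2 * np.cos(theta)

# one sample of the process on a finite window of Z
xi, eta = rng.standard_normal(2)
v = np.arange(-50, 51)
X = xi * np.cos(theta * v) + eta * np.sin(theta * v)

# eigenvector equation X_{v-1} + X_{v+1} = lam * X_v at the interior vertices
err = np.max(np.abs(X[:-2] + X[2:] - lam * X[1:-1]))
print(err)                        # numerically zero
```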
In general, it is not clear for which $\la$ such Gaussian wave functions exist
and whether they are unique.
\begin{definition}
For a transitive graph $G$ we call the closed set
$$ \widetilde{\si}(G) \mathdef \left\{ \la \, : \,
\mbox{there exists a Gaussian wave function on $G$ with eigenvalue $\la$} \right\} $$
the \emph{Gaussian spectrum} of $G$.
\end{definition}
Theorem \ref{thm:gaussian_ev} claims that
for any $\la \in \si(A_G)$ there exists a Gaussian wave function on $G$,
which can be approximated by linear factor of i.i.d.\ processes.
Therefore $\widetilde{\si}(G) \supseteq \si(A_G)$.
\begin{proof}[Proof of Theorem \ref{thm:gaussian_ev}]
We use the $\St_x(G)$-invariant approximate eigenvectors
of Theorem \ref{thm:inv_appr_ev} to define linear factor of i.i.d.\ processes.
So for every $\eps > 0$ let $\al^\eps$ be a $\St_x(G)$-invariant vector with
$\| \al^\eps \| = 1$ and $\| A_G \al^\eps - \la \al^\eps \| \leq \eps$.
By Remark \ref{rm:lin_factor} for each $\al^\eps$ there is
a corresponding linear factor $X_v^\eps$, $v \in V(G)$. Note that the process
$$ Y_v^\eps \mathdef \sum_{u \in N(v)} X_u^\eps - \la X_v^\eps $$
is also a linear factor, whose corresponding
coefficient vector is $\de^\eps \mathdef A_G \al^\eps - \la \al^\eps$.
Therefore $X_v^\eps$ is an invariant Gaussian process with
$\var(X_v^\eps) = \| \al^\eps \|^2 = 1$ and
$$ \var\left( \sum_{u \in N(v)} X_u^\eps - \la X_v^\eps \right) =
\var\left( Y_v^\eps \right) = \| \de^\eps \|^2 =
\| A_G \al^\eps - \la \al^\eps \|^2 \leq \eps^2 .$$
Since the space of invariant Gaussian processes with variance $1$ is compact,
it follows that there exists a sequence $\eps_n$ converging to $0$
such that the processes $X_v^{\eps_n}$ converge in distribution.
The limit process will be a nontrivial invariant Gaussian process $X_v$
that satisfies the eigenvector equation \eqref{eq:eigen} at each vertex.
\end{proof}
\subsection{Factor of i.i.d.\ processes}
For a graph $G$ we defined an i.i.d.\ process on $G$
as independent standard normal random variables $Z_v$, $v \in V(G)$.
In other words, $Z = \left( Z_v \right)_{v \in V(G)}$ is
a random point in the measure space $(\Omega, \mu)$,
where $\Omega$ is $\IR^{V(G)}$ with the product topology
and $\mu$ is the product of standard Gaussian measures (one on each copy of $\IR$).
The natural action of $\Aut(G)$ on $V(G)$ gives rise to
an action of $\Aut(G)$ on $\Omega$: for $\Phi \in \Aut(G)$
and $\omega = \left( \omega_v \right)_{v \in V(G)} \in \Omega$ let
$$ \left( \Phi \cdot \omega \right)_v \mathdef \omega_{ \Phi^{-1} v } .$$
Let $G$ be an infinite transitive graph and suppose that
$F$ is a measurable $\Omega \to \Omega$ function
that is $\Aut(G)$-equivariant (i.e., commutes with the $\Aut(G)$-action).
Then $X = F(Z)$ is an invariant process on $G$.
Such a process $X = \left( X_v \right)_{v \in V(G)}$ is called
a \emph{factor} of the i.i.d.\ process $Z$.
An $\Aut(G)$-equivariant function $F \colon \Omega \to \Omega$
is determined by $f = \pi_x \circ F$, where $\pi_x \colon \Omega \to \IR$ is
the projection corresponding to the coordinate of some fixed root $x$.
Here $f$ can be any measurable, $\St_x(G)$-invariant function $\Omega \to \IR$.
So factor of i.i.d.\ processes can be identified with
measurable, $\St_x(G)$-invariant functions $f \colon \Omega \to \IR$.
Next we will prove Theorem \ref{thm:not_closed} and Theorem \ref{thm:cayley}
by showing that certain Gaussian wave functions $X_v$, $v \in V(G)$
cannot be obtained as factor of i.i.d.\ processes.
Since $X_v$ has finite variance in that case,
we can restrict ourselves to functions $f \in L_2(\Omega, \mu)$.
Let $\Hi \subset L_2(\Omega, \mu)$ be the subspace containing
those $f \in L_2(\Omega, \mu)$ that are $\St_x(G)$-invariant.
There is a natural way to define an adjacency operator $\IA$
on the Hilbert space $\Hi$. Let
$$ \left( \IA f \right) (\omega) \mathdef
\sum_{y \in N(x)} f\left( \Phi_{y \to x} \cdot \omega \right) ,$$
where $\Phi_{y \to x}$ is an (arbitrary) automorphism of $G$ taking $y$ to $x$.
Since $f$ is $\St_x(G)$-invariant, $\IA$ is well defined.
Suppose now that we have a Gaussian wave function with eigenvalue $\la$
that can be obtained as a factor of i.i.d.\ process.
Then the corresponding $f$ satisfies the eigenvector equation $\IA f = \la f$.
In particular, $\la$ needs to be in the point spectrum of $\IA$.
(Note that an eigenvector $f$ of $\IA$ does not necessarily give us a
Gaussian wave function: although the corresponding factor of i.i.d.\ process
will satisfy the eigenvector equation at each vertex,
$f(Z)$ might not have a Gaussian distribution.)
\begin{proof}[Proof of Theorem \ref{thm:not_closed}]
Since $L_2(\Omega, \mu)$ is a separable Hilbert space,
so is $\Hi$, and consequently the point spectrum of $\IA : \Hi \to \Hi$ is countable.
Therefore only for countably many $\la$'s
can we have a Gaussian wave function on $G$ that
can be obtained as a factor of i.i.d.\ process.
However, if $\si(A_G)$ is uncountable, then by Theorem \ref{thm:gaussian_ev}
$G$ has Gaussian wave functions for uncountably many different eigenvalues $\la$;
moreover, they can all be approximated by linear factor of i.i.d.\ processes.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:cayley}]
We will use two basic facts about the point spectra of
the adjacency operators $A_G$ and $\IA$.
First, $\lama$ is never in the point spectrum $\si_p(A_G)$
(we will give a short proof for this in the Appendix, see Lemma \ref{lem:lama}).
Second, $\si_p(\IA) \subseteq \si_p(A_G) \cup \{d\}$ for Cayley graphs
(this will be explained after the proof).
Therefore $\lama$ is not in the point spectrum of $\IA$
provided that $\lama <d$, and consequently,
a Gaussian wave function with eigenvalue $\lama$
cannot be obtained as a factor of i.i.d.\ process.
In the case $\lama = d$
the Gaussian wave function has to be constant,
that is, $X_u = X_v$ for any two vertices $u,v$.
However, for a factor of i.i.d.\ process the correlation between $X_u$ and $X_v$
should tend to $0$ as the distance of $u$ and $v$ goes to infinity.
\end{proof}
Next we will explain the relation between
the adjacency operators $A_G$ and $\IA$.
This can be found in \cite[Section 3]{kechris_tsankov} in a more general setting;
see also \cite[Theorem 2.1 and Corollary 2.2]{lyons_nazarov}.
Let $\nu$ denote the standard Gaussian measure.
Since $L_2(\IR, \nu)$ is a separable Hilbert space,
it has a countable orthonormal basis: $g_0, g_1, g_2, \ldots$,
where $g_0$ will be assumed to be the constant $1$ function.
Let $\II$ denote the set of finitely supported $V(G) \to \{0,1,2,\ldots\}$ functions.
For each $q \in \II$ we define an $\Omega \to \IR$ function:
$$ W_q(\omega) \mathdef \prod_{v \in V(G)} g_{q(v)} \left( \omega_v \right) .$$
Note that this is actually a finite product,
since all but finitely many terms are equal to $g_0 \equiv 1$.
According to \cite[Lemma 3.1]{kechris_tsankov}
the functions $W_q$, $q\in \II$ form an orthonormal basis of $L_2(\Omega, \mu)$.
It follows that $L_2(\Omega, \mu)$ is separable,
a fact we used in the proof of Theorem \ref{thm:not_closed}.
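The argument does not depend on the choice of basis, but a standard concrete choice (an assumption of this sketch, not stated in the text) takes $g_n$ to be the normalized probabilists' Hermite polynomial $\mathrm{He}_n/\sqrt{n!}$, with $g_0 \equiv 1$. Their orthonormality under the standard Gaussian measure $\nu$ can be checked by quadrature:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss quadrature nodes/weights for the weight exp(-x^2/2);
# renormalize the weights to the standard Gaussian probability measure
x, w = hermegauss(40)
w = w / math.sqrt(2 * math.pi)

def g(n, t):
    """Probabilists' Hermite polynomial He_n, scaled to unit norm; g(0, .) == 1."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(t, c) / math.sqrt(math.factorial(n))

gram = np.array([[np.sum(w * g(m, x) * g(n, x)) for n in range(6)]
                 for m in range(6)])
err = np.max(np.abs(gram - np.eye(6)))
print(err)    # numerically zero: the g_n are orthonormal
```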
We defined the operator $\IA$ on the space $\Hi \subset L_2(\Omega, \mu)$
containing $\St_x(G)$-invariant functions.
When $G$ is a Cayley graph, there is a natural way to
extend $\IA$ to an adjacency operator over the whole space $L_2(\Omega, \mu)$.
Suppose that $\Ga$ is a finitely generated infinite group.
Let $S$ be a finite, symmetric set of generators
and let $G$ be the corresponding Cayley graph,
that is, $V(G) = \Ga$ and the vertex $v \in \Ga$
is adjacent to the vertices $\ga v$, $\ga \in S$.
The natural action of $\Ga$ on itself gives rise to
the following $\Ga$-action on $\Omega$:
$$ \left( \ga \cdot \omega \right)_v \mathdef
\left( \omega \right)_{\ga^{-1} v} .$$
(This is often called the \emph{generalized Bernoulli shift}.)
Then for $f \in L_2(\Omega, \mu)$ let
$$ \left( \IA f \right) (\omega) \mathdef
\sum_{\ga \in S} f\left( \ga \cdot \omega \right) .$$
This clearly extends our earlier definition of $\IA$.
There is a natural $\Ga$-action on $\II$ as well: for $q \in \II$
$$ \left( \ga \cdot q \right)(v) \mathdef q \left( \ga^{-1} v \right) .$$
It is compatible with the $\Ga$-action on $\Omega$ in the following sense:
$$ W_{\ga \cdot q}(\omega) = W_q \left( \ga^{-1} \cdot \omega \right) .$$
Since $S$ is symmetric, this implies that
\begin{equation} \label{eq:Aop}
\IA W_q = \sum_{\ga \in S} W_{\ga \cdot q} .
\end{equation}
We now consider the orbit $\{ \ga \cdot p \, : \, \ga \in \Ga \}$
of a given element $p \in \II$ and
the closure of the space spanned by the corresponding functions $W_{\ga \cdot p}$:
$$ H_p \mathdef \cl \left( \spn \left\{ W_{\ga \cdot p} \, : \, \ga \in \Ga \right\} \right)
\subset L_2(\Omega, \mu) .$$
It is clear from \eqref{eq:Aop} that $H_p$ is $\IA$-invariant.
If $p \equiv 0$, then $H_p$ consists of the constant functions on $\Omega$,
and both the spectrum and the point spectrum of $ \IA \left|_{H_p} \right.$ are $\{ d \}$.
Otherwise the stabilizer $\Ga_p$ of $p$ is a finite subgroup of $\Ga$, and
$ \IA \left|_{H_p} \right.$ is closely related to the original adjacency operator $A_G$.
Indeed, let $T_p \colon H_p \to \ell_2( V(G) ) \cong \ell_2(\Ga)$ be the operator defined by
$$ T_p \colon W_q \mapsto \ind_{ \{\ga \in \Ga \, : \, \ga \cdot p = q\} } ,$$
where $q$ is in the orbit of $p$.
It is easy to see that $T_p$ is a bounded operator
for which $T_p \IA \left|_{H_p} \right. = A_G T_p$.
Since $T_p$ is also bounded below, it follows that
$$ \si\left( \IA \left|_{H_p} \right. \right) \subseteq \si(A_G) \mbox{ and }
\si_p\left( \IA \left|_{H_p} \right. \right) \subseteq \si_p(A_G) $$
with equality when the stabilizer $\Ga_p$ is trivial.
Therefore for Cayley graphs the operators
$A_G \colon \ell_2( V(G) ) \to \ell_2( V(G) )$ and
$\IA \colon L_2(\Omega, \mu) \to L_2(\Omega, \mu)$
have the same spectra and point spectra
with the possible exception of the point $d$:
$$ \si(\IA) = \si(A_G) \cup \{d\} \mbox{ and }
\si_p(\IA) = \si_p(A_G) \cup \{d\} .$$
Consequently,
$$ \si_p\left( \IA \left|_{\Hi} \right. \right) \subseteq
\si_p(\IA) = \si_p(A_G) \cup \{d\} ,$$
which we used in the proof of Theorem \ref{thm:cayley}.
\subsection{Independent sets}
Let $G$ be an infinite transitive graph
and $\lam$ be the minimum of its spectrum $\si(A_G)$.
Consider linear factor of i.i.d.\ processes $X_v^n$
converging in distribution to a Gaussian wave function $X_v$
with eigenvalue $\lam$ as $n \to \infty$ as in Theorem \ref{thm:gaussian_ev}.
We define the following independent sets on $G$:
$$ I_{+} \mathdef \left\{ v \, : \, X_v > X_u, \forall u \in N(v) \right\} \mbox{ and }
I_{+}^n \mathdef \left\{ v \, : \, X_v^n > X_u^n, \forall u \in N(v) \right\} .$$
Then for each $n$ the independent set $I_{+}^n$ is a factor of
the i.i.d.\ process $Z_v$ (i.e., it is obtained as a measurable function
of $Z_v$, $v \in V(G)$, that commutes with the natural action of $\Aut(G)$).
Furthermore, since the event $v \in I_+$ corresponds to an open set, we have
$$ \liminf_{n \to \infty} P( v \in I_+^n ) \geq P( v \in I_+ ) .$$
Therefore whenever we have a lower bound $q$ for $P( v \in I_+ )$,
it yields that for any $\eps>0$ there exists a factor of i.i.d.\
independent set with ``size'' greater than $q - \eps$.
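As a simple baseline (our own illustration, not part of the argument): if the i.i.d.\ process itself is used in place of a wave function, then by symmetry each vertex of a $d$-regular graph is a strict local maximum with probability exactly $1/(d+1)$. On the cycle ($d=2$) a simulation recovers the density $1/3$, whereas wave functions with eigenvalue near $-d$ push $P( v \in I_{+} )$ toward $1/2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1000, 2000
count = 0
for _ in range(trials):
    X = rng.standard_normal(n)              # i.i.d. values on the cycle C_n
    left, right = np.roll(X, 1), np.roll(X, -1)
    count += int(np.sum((X > left) & (X > right)))

density = count / (trials * n)
print(density)                              # close to 1/3
```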
Bounding $P( v \in I_+ )$, however, leads us to the same
optimization problem as in the finite case.
We need to estimate the volume of the same spherical simplex
with the exact same constraints.
(Of course, there might be a difference between the finite and infinite setting
in terms of what covariances $c_{i,j}$ can actually come up,
but our proofs used only the trivial constraints
that they form a positive semidefinite matrix and their sum is $(\lam^2-d)/2$,
which are true in the infinite case, too.)
Thus we obtain the exact same bounds and Theorem \ref{thm:inf} follows.
Actually, in Theorem \ref{thm:3reg_main} we proved the bound
only for graphs with $\lam \leq -2$ and argued that
the only finite, $3$-regular, transitive graph for which this does not hold
is the complete graph $K_4$. For infinite transitive graphs
$\lam \leq -2$ holds with no exception. This follows from the fact
that they contain arbitrarily long paths as induced subgraphs.
\section{Appendix} \label{sec:app}
\begin{theorem} \label{thm:3reg_other}
Suppose that $G$ is a finite, $3$-regular, vertex-transitive graph
with minimum eigenvalue $\lam$ and odd-girth $g$. Then the independence ratio of $G$ is at least
$$ \frac{5g-3}{16g} + \frac{g+1}{2g} \frac{3}{4 \pi} \arcsin \left( \frac{\lam^2-3}{6} \right) \geq
\frac{5}{16} + \frac{3}{8 \pi} \arcsin \left( \frac{\lam^2-3}{6} \right) - \frac{3}{16g} .$$
In fact, there exist two disjoint independent sets in $G$ such that
their average size divided by $|V(G)|$ is not less than the above bound.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:3reg_other}]
It is easy to check the statement for $K_4$.
According to Proposition \ref{prop:la_min_3reg}
$\lam \leq -2$ holds for any other finite, $3$-regular, transitive graph $G$.
Let $X_v$, $v \in V(G)$ be the random eigenvector corresponding to $\lam$.
Let $V_{+}$ denote the set of ``positive vertices'', that is,
$$ V_{+} \mathdef \left\{ v\in V(G) \ : \ X_v > 0 \right\} .$$
The expected size of $V_{+}$ is $|V(G)|/2$.
Since $\lam$ is negative, a vertex and its three neighbors cannot all be positive.
Therefore each vertex has degree at most two in the induced subgraph $G[V_{+}]$.
Thus each connected component of this subgraph is a path or a cycle.
We want to choose an independent set from each component.
We can choose at least half the vertices from paths and even cycles.
From an odd cycle of length $l \geq g$ we can choose $(l-1)/2$ vertices,
which is at least a $(g-1)/(2g)$ proportion of all vertices in that component.
(Recall that $g$ denotes the odd-girth of $G$, that is,
the length of the shortest odd cycle in $G$.)
We need one more observation, namely, that many of the components actually
contain only one vertex. Using our earlier notation,
let $v$ be an arbitrary vertex with neighbors $u_1,u_2,u_3$,
and let $X$ and $Y_1,Y_2,Y_3$ denote the corresponding random variables.
Note that $Y_1<0$, $Y_2<0$ and $Y_3<0$ imply that $X > 0$.
Therefore the probability $p$ that $v$ is an isolated vertex in $G[V_{+}]$ is
\begin{multline} \label{eq:prob_iso}
p \mathdef P \left( X > 0; Y_1<0; Y_2<0; Y_3<0 \right) =
P \left( Y_1<0; Y_2<0; Y_3<0 \right) = \\
P \left( y_i \cdot Z < 0; i=1,2,3 \right) =
\frac{1}{2} - \frac{1}{4\pi} \sum_{1\leq i<j\leq 3} \arccos (c_{i,j}) =
\frac{1}{8} + \frac{1}{4\pi} \sum_{1\leq i<j\leq 3} \arcsin (c_{i,j}).
\end{multline}
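For jointly Gaussian $(Y_1,Y_2,Y_3)$ with common pairwise correlation $c$, the orthant formula specializes to $P(Y_1<0, Y_2<0, Y_3<0) = \frac{1}{8} + \frac{3}{4\pi}\arcsin(c)$. A Monte Carlo sanity check (our own numerical illustration, with $c = 1/6$ chosen arbitrarily):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

c = 1 / 6                                   # common correlation of Y_1, Y_2, Y_3
C = np.full((3, 3), c) + (1 - c) * np.eye(3)
L = np.linalg.cholesky(C)

Y = rng.standard_normal((500000, 3)) @ L.T  # samples of (Y_1, Y_2, Y_3)
mc = float(np.mean(np.all(Y < 0, axis=1)))

exact = 1 / 8 + 3 / (4 * math.pi) * math.asin(c)
print(mc, exact)                            # the two values agree closely
```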
Note that $\arcsin$ is a monotone increasing
odd function on $[-1,1]$, which is convex on $[0,1]$.
Furthermore, the average of $c_{i,j}$ is $(\lam^2-3)/6 \geq (2^2-3)/6 > 0$.
It is easy to see that these properties imply that
the right-hand side of \eqref{eq:prob_iso} does not increase
if we replace each $c_{i,j}$ with their average $(\lam^2-3)/6$. Thus
\begin{equation} \label{eq:prob_iso_bound}
p \geq \frac{1}{8} + \frac{3}{4\pi} \arcsin \left( \frac{\lam^2-3}{6} \right) .
\end{equation}
Our independent set will contain all isolated vertices
and at least a $(g-1)/(2g)$ proportion of all the other vertices in $V_{+}$.
This yields the following lower bound for the independence ratio of $G$:
$$ p + \frac{g-1}{2g} \left( \frac{1}{2} - p \right) =
\frac{g-1}{4g} + \frac{g+1}{2g} p .$$
Combining this with \eqref{eq:prob_iso_bound} yields the desired bound.
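For concreteness, the resulting bounds can be evaluated numerically (a sketch; the function names below are our own):

```python
import math

def p_bound(lam):
    """Right-hand side of the bound on the isolated-vertex probability p."""
    return 1 / 8 + 3 / (4 * math.pi) * math.asin((lam ** 2 - 3) / 6)

def ratio_bound(lam, g):
    """Resulting lower bound on the independence ratio for odd-girth g."""
    return (g - 1) / (4 * g) + (g + 1) / (2 * g) * p_bound(lam)

print(p_bound(-2))                            # ~0.1650
print(ratio_bound(-2, 3), ratio_bound(-2, 5)) # the bound improves with the odd-girth
```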
We can choose an independent set with the same expected size
from the ``negative vertices'':
$$ V_{-} \mathdef \left\{ v\in V(G) \ : \ X_v < 0 \right\} .$$
This implies the second part of the theorem.
We mention that the proof also works in the infinite setting,
so there is an analogous theorem for infinite transitive graphs
(as in Theorem \ref{thm:inf}).
\end{proof}
\begin{remark}
Any non-trivial lower bound for the density of components
of size $3,5, \ldots$ in $G[V_+]$
would immediately yield an improvement in the above theorem.
In \cite{csghv} such non-trivial bounds
were obtained for the $3$-regular tree $T_3$.
\end{remark}
\begin{proposition} \label{prop:la_min_3reg}
Suppose that $G$ is a finite, connected, $3$-regular, vertex-transitive graph.
Then either $G$ is isomorphic to the complete graph $K_4$,
or the least eigenvalue $\lam$ of its adjacency matrix is at most $-2$.
\end{proposition}
The proof below is due to P\'eter Csikv\'ari.
\begin{proof}
Let $G$ be a connected, $3$-regular,
vertex-transitive graph with $\lam(G) > -2$.
We need to show that $G$ must be the complete graph $K_4$.
Cauchy's interlacing theorem implies that
$\lam(G) \leq \lam(H)$
whenever $H$ is an induced subgraph of $G$.
Therefore $\lam(H) > -2$ must hold for any induced subgraph.
Let $T$ denote the tree shown in Figure \ref{fig:gr}.
It is easy to see that the smallest eigenvalue of $T$ is $-2$.
We also have $\lam(C_{2k}) = -2$ for the cycle of length $2k$ for any $k \geq 2$.
Therefore $G$ can contain neither $T$, nor $C_{2k}$ as an induced subgraph.
\begin{figure}[ht]
\centering
\input{fig_gr}
\caption{The graph $T$ and the eigenvector corresponding to its least eigenvalue $-2$}
\label{fig:gr}
\end{figure}
We will distinguish three cases.
\smallskip
\noindent\textbf{Case 1.}
\textit{$G$ does not contain triangles.}\\
Let $u,v$ be two neighboring vertices, and let $u_1,u_2$ and $v_1,v_2$
denote the remaining two neighbors of $u$ and $v$, respectively.
Since $G$ contains no triangles, $u_1$,$u_2$,$v_1$,$v_2$ are pairwise distinct vertices.
The induced subgraph on the set $\{u,u_1,u_2,v,v_1,v_2\}$ must be
isomorphic to $T$ (the graph shown in Figure \ref{fig:gr}),
otherwise $G$ would contain a triangle or an induced $C_4$.
Since $G$ cannot contain $T$ as an induced subgraph, this is a contradiction.
\smallskip
\noindent\textbf{Case 2.}
\textit{$G$ contains triangles but no two share a common edge.}\\
Since $G$ is vertex-transitive,
there must be at least one triangle through every vertex.
We claim that any two triangles must be disjoint.
If they had two common vertices,
then they would share an edge,
and if they had exactly one common vertex,
then that vertex would have degree at least $4$.
So we have disjoint triangles in $G$,
exactly one through every vertex.
We claim that there can be at most one edge between two triangles
(with one endpoint in one triangle and one in the other).
Indeed, otherwise we would either have an induced $C_4$
or a vertex with degree at least $4$.
Let us consider the following graph $G^\ast$.
To each triangle in $G$ corresponds a vertex in $G^\ast$,
and we join two such vertices with an edge if
there is an edge between the corresponding triangles.
It is easy to see that $G^\ast$ will be $3$-regular as well.
Take a shortest cycle in $G^\ast$; its length $g$ is at least $3$.
There is a corresponding cycle of length $2g$ in the original graph $G$.
It is easy to see that this must be an induced cycle, a contradiction.
\smallskip
\noindent\textbf{Case 3.} \textit{$G$ contains two triangles sharing an edge.}\\
Let $xy$ be an edge shared by triangles $xyu$ and $xyv$ (see the figure below).
\begin{center}
\input{fig_case3}
\end{center}
Then $x$ and $y$ already have degree $3$, while $u$ and $v$ each need one more edge.
We claim that $uv$ must be an edge.
Otherwise $v$ would have a neighbor $z$ different from $x,y,u$.
Since $z$ cannot be adjacent to $x$ and $y$,
there would be only one triangle through $v$,
while there are two triangles through $x$,
contradicting the vertex-transitivity of $G$.
So $uv$ is an edge, therefore each of $x,y,u,v$ has degree $3$.
Since $G$ is connected, it cannot have any other vertices, and thus $G$ is isomorphic to $K_4$.
\end{proof}
The following lemma is probably known,
but we did not find an explicit reference,
so we give a short proof.
\begin{lemma} \label{lem:lama}
If $G$ is an infinite transitive graph, then the maximum $\lama$ of the spectrum of $A_G$
is never in the point spectrum of $A_G$.
\end{lemma}
\begin{proof}
In the case $\lama=d$, the equation $A_G f= d f$ means that the function $f$ is harmonic.
However, the maximum principle implies that there are no nonzero harmonic functions in $\ell_2( V(G) )$.
Thus there is no eigenvector for $\lama$,
which is equivalent to saying that $\lama$ is not in the point spectrum of $A_G$.
For the non-amenable case (i.e. $\lama<d$),
Theorem II.7.8 in \cite{woess_book} implies that for any vertex $v$
$$ \sum_{n=0}^\infty \lama^{-2n} \langle \ind_v, A_G^{2n} \ind_v \rangle <\infty ,$$
where the left hand side can be written in terms of the spectral measure $\mu_G$ as
$$ \sum_{n=0}^\infty \lama^{-2n} \int x^{2n} d\mu_G(x) \ge
\sum_{n=0}^\infty \lama^{-2n} \lama^{2n} \mu_G(\{\lama\}) .$$
This forces $\mu_G(\{\lama\})=0$, which means that $\lama$ is not in the point spectrum of $A_G$.
\end{proof}
\begin{lemma} \label{lem:vol_ratio}
$$ \frac{\vol(S^{d-2})}{\vol(S^{d-1})} < \frac{\sqrt{d}}{\sqrt{2\pi}} .$$
\end{lemma}
\begin{proof}
Using the formula
$$ \vol( S^{n-1} ) = \frac{2 \pi^{n/2}}{\Gamma(n/2)} $$
we need to show that
$$ \frac{ \Gamma\left( \frac{d-1}{2} \right) }{ \Gamma\left( \frac{d-2}{2} \right) } <
\sqrt{ \frac{d}{2} } .$$
Since $\Gamma$ is log-convex, the increments of its logarithm over intervals of length,
say, $1/2$ are increasing. Thus
$$\frac{\Gamma(\frac{d-1}2)}{\Gamma(\frac{d-2}2)} \leq
\frac{\Gamma(\frac{d}2)}{\Gamma(\frac{d-1}2)}$$
and multiplying both sides by the left hand side, we get
$$ \left(\frac{\Gamma(\frac{d-1}2)}{\Gamma(\frac{d-2}2)}\right)^2 \le
\frac{\Gamma(\frac{d}2)}{\Gamma(\frac{d-2}2)}= \frac{d-2}{2} < \frac{d}{2} $$
as required.
\end{proof}
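A quick numerical check of the Gamma-ratio inequality for small dimensions (illustration only):

```python
import math

# check Gamma((d-1)/2) / Gamma((d-2)/2) < sqrt(d/2) over a range of dimensions
worst = 0.0
for d in range(3, 200):
    lhs = math.gamma((d - 1) / 2) / math.gamma((d - 2) / 2)
    rhs = math.sqrt(d / 2)
    worst = max(worst, lhs / rhs)
print(worst)        # the ratio approaches 1 from below but never reaches it
```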
\begin{lemma} \label{lem:tangent_line}
Let $\la \in [-3,-2]$ and
$$ f(t) \mathdef
\arcsin\left( \frac{ \frac{1}{2} - \la + \frac{\la^2}{2} + (1-\la) t }
{ \frac{1}{2} - 2\la + \frac{\la^2}{2} + t } \right) .$$
Then the tangent line to $f$ at $t_0 = (\la^2-3)/6$
is below $f$ on the entire interval $[-0.5,1]$.
\end{lemma}
\begin{proof}
We need to prove that
$$ f(t) - f'(t_0) t $$
takes its minimum value at $t_0$ on the interval $[-0.5,1]$.
This will follow from the fact that
$f'(t) < f'(t_0)$ for $-0.5 \leq t < t_0$ and
$f'(t) > f'(t_0)$ for $t_0 < t < 1$.
In order to make calculations easier we will use the following notations:
$$ a = \frac{1}{2} - \la + \frac{\la^2}{2} \geq 4.5; \hspace{4mm}
b = 1-\la \geq 3; \hspace{4mm}
c = a+b-1 \geq 6.5 ;$$
then
$$ f(t) = \arcsin\left( \frac{ a + b t }{ c + t } \right) .$$
It is easy to see that $ 0 < a+bt < c+t $ for $t \in [-0.5,1)$. Therefore we have
$$ f'(t) = \left( 1 - \left( \frac{ a + b t }{ c + t } \right)^2 \right)^{-\frac{1}{2}}
\frac{b(c+t) - (a+bt)}{(c+t)^2} =
\frac{bc - a}{(c+t) \sqrt{(c+t)^2 - (a+bt)^2}}. $$
Since $bc-a > 0$ it follows that $f'$ is positive on $[-0.5, 1)$
and thus $f$ is monotone increasing.
Next we study the intervals of monotonicity of $f'$.
First we note that
$$ (c+t)^2 - (a+bt)^2 = \left( c+a + (1+b)t \right)\left(c-a+ (1-b)t \right) .$$
Using $c-a = b-1$ we get that
$$ (c+t)^2 - (a+bt)^2 = (b^2-1) (t+d)(1-t) ,$$
where
$$ d = \frac{c+a}{b+1} = \frac{1-3\la+\la^2}{2-\la} =
1-\la - \frac{1}{2-\la} \geq \frac{11}{4} .$$
It follows that
$$ \frac{1}{(f'(t))^2} = \frac{b^2-1}{(bc-a)^2} (t+c)^2(t+d)(1-t) .$$
If we restrict ourselves to the interval $[-0.5,1)$
(where $f'$ is positive), then it suffices to examine the function
$$ g(t) = (t+c)^2(t + d)(1-t) .$$
Wherever $g$ is monotone increasing, $f'$ is monotone decreasing, and vice versa.
So we have a fourth-degree polynomial $g$ with leading coefficient $-1$,
whose roots are $-c$ (with multiplicity $2$), $-d$, and $1$.
Consequently, the derivative $g'$ is a third-degree polynomial
with negative leading coefficient and
with roots $-c$, $u$, $v$, where $-c < u < -d < v < 1$.
We distinguish the following two cases.
\noindent \textit{Case 1:} $v \leq -0.5$.
Then $g$ is monotone decreasing on $[-0.5, \infty)$,
therefore $f'$ is monotone increasing on $[-0.5,1)$,
and thus $f$ is convex on the whole interval,
which clearly implies the statement of the lemma.
\noindent \textit{Case 2:} $v > -0.5$.
Since the other two roots of $g'$ are less than $-d < -0.5$,
we know that $g$ is monotone increasing on $[-0.5,v]$ and
monotone decreasing on $[v,1)$. We claim that
\begin{equation} \label{eq:g}
g\left( -\frac{1}{2} \right) > g\left( \frac{1}{6} \right) .
\end{equation}
This would yield that $v<1/6$.
Since $1/6 \leq t_0 = (\la^2-3)/6$, we have
$g(-1/2) > g(1/6) > g( t_0 )$.
This means that
$g(t) > g(t_0)$ for $-0.5 \leq t < t_0$ and
$g(t) < g(t_0)$ for $t_0 < t < 1$.
As for $f'$,
$f'(t) < f'(t_0)$ for $-0.5 \leq t < t_0$ and
$f'(t) > f'(t_0)$ for $t_0 < t < 1$,
and the statement of the lemma clearly follows.
It remains to show \eqref{eq:g}.
Let $-1/2 = t_2 < t_1 = 1/6$.
Then $t_1 - t_2 = 2/3$; $t_2+c \geq 6$ and $t_2+d \geq 9/4$,
and consequently
\begin{multline*}
\frac{ g(t_1) }{ g(t_2) } =
\frac{1-t_1}{1-t_2} \left( 1 + \frac{t_1-t_2}{t_2+c} \right)^2
\left( 1 + \frac{t_1-t_2}{t_2+d} \right) \leq \\
\frac{5/6}{3/2}
\left( 1 + \frac{2/3}{6} \right)^2
\left( 1 + \frac{2/3}{9/4} \right) =
\frac{5}{9} \left( \frac{10}{9} \right)^2 \frac{35}{27} =
\frac{17500}{19683} < 1.
\end{multline*}
\end{proof}
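The inequality $g(-1/2) > g(1/6)$ can also be sanity-checked numerically. The following Python sketch (ours, not part of the proof) evaluates $g$ for a few values of $\la$ satisfying $b = 1-\la \geq 3$, using the definitions of $a,b,c,d$ from above:

```python
# Numerical sanity check for g(-1/2) > g(1/6), with
# g(t) = (t + c)^2 (t + d) (1 - t) and a, b, c, d as defined in the proof.
def g(t, lam):
    a = 0.5 - lam + lam ** 2 / 2
    b = 1 - lam
    c = a + b - 1
    d = (c + a) / (b + 1)
    return (t + c) ** 2 * (t + d) * (1 - t)

# b = 1 - lam >= 3 forces lam <= -2; sample a few admissible values.
for lam in (-2.0, -2.5, -3.0, -5.0):
    assert g(-0.5, lam) > g(1 / 6, lam)
```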
\bibliographystyle{plain}
| {
"timestamp": "2013-08-26T02:06:26",
"yymm": "1308",
"arxiv_id": "1308.5173",
"language": "en",
"url": "https://arxiv.org/abs/1308.5173",
"abstract": "A theorem of Hoffman gives an upper bound on the independence ratio of regular graphs in terms of the minimum $\\lambda_{\\min}$ of the spectrum of the adjacency matrix. To complement this result we use random eigenvectors to gain lower bounds in the vertex-transitive case. For example, we prove that the independence ratio of a $3$-regular transitive graph is at least \\[q=\\frac{1}{2}-\\frac{3}{4\\pi}\\arccos\\biggl(\\frac{1-\\lambda _{\\min}}{4}\\biggr).\\] The same bound holds for infinite transitive graphs: we construct factor of i.i.d. independent sets for which the probability that any given vertex is in the set is at least $q-o(1)$. We also show that the set of the distributions of factor of i.i.d. processes is not closed w.r.t. the weak topology provided that the spectrum of the graph is uncountable.",
"subjects": "Probability (math.PR); Combinatorics (math.CO)",
 "title": "Independence ratio and random eigenvectors in transitive graphs"
} |
https://arxiv.org/abs/1911.01724 | Connector-Breaker games on random boards | By now, the Maker-Breaker connectivity game on a complete graph $K_n$ or on a random graph $G\sim G_{n,p}$ is well studied. Recently, London and Pluhár suggested a variant in which Maker always needs to choose her edges in such a way that her graph stays connected. By their results it follows that for this connected version of the game, the threshold bias on $K_n$ and the threshold probability on $G\sim G_{n,p}$ for winning the game drastically differ from the corresponding values for the usual Maker-Breaker version, assuming Maker's bias to be $1$. However, they observed that the threshold biases of both versions played on $K_n$ are still of the same order if instead Maker is allowed to claim two edges in every round. Naturally, this made London and Pluhár ask whether a similar phenomenon can be observed when a $(2:2)$ game is played on $G_{n,p}$. We prove that this is not the case, and determine the threshold probability for winning this game to be of size $n^{-2/3+o(1)}$. | \section{Introduction}
A positional game is a perfect information game played by two players on a {\em board} $X$ equipped with a family of subsets $\mathcal F \subset 2^X$, which represent {\em winning sets}. During each round of such a game both players claim previously unclaimed elements of the board. For instance, in the $(m:b)$~Maker-Breaker variant, Maker and Breaker take turns claiming up to $m$ (as Maker) or up to $b$ (as Breaker) such elements.
Maker wins the game by claiming all elements of a winning set; Breaker wins otherwise. If $m=b=1$, the game is called {\em unbiased}. Otherwise, we call the game {\em biased} with $m$ and $b$ being the respective biases of Maker and Breaker.
Note that Maker-Breaker games are {\em bias monotone} in the sense that claiming more elements of the board never hurts the corresponding player. Given $(X,\mathcal{F})$ and with Maker's bias $m$ fixed, we can thus find an integer $b_0$, called the {\em threshold bias},
such that Breaker wins the $(m:b)$~Maker-Breaker game
if and only if $b\geq b_0$ holds (except for trivial games, where Maker can win before Breaker's first move).\\
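To make the conventions concrete, the following Python sketch (a toy illustration of ours; the board and winning sets are invented) decides a small $(1:1)$ Maker-Breaker game by exhaustive game-tree search:

```python
from functools import lru_cache

# Toy board X = {0,...,4} with three invented winning sets.
WINNING_SETS = (frozenset({0, 1}), frozenset({2, 3}), frozenset({1, 4}))
BOARD = frozenset(range(5))

@lru_cache(maxsize=None)
def maker_wins(maker, breaker, maker_to_move):
    """True iff Maker wins the (1:1) game under optimal play from this position."""
    if any(w <= maker for w in WINNING_SETS):
        return True                     # Maker owns a full winning set
    free = BOARD - maker - breaker
    if not free or all(w & breaker for w in WINNING_SETS):
        return False                    # board full / every winning set blocked
    if maker_to_move:
        return any(maker_wins(maker | {e}, breaker, False) for e in free)
    return all(maker_wins(maker, breaker | {e}, True) for e in free)

print(maker_wins(frozenset(), frozenset(), True))  # Maker wins this toy game
```

Bias monotonicity is visible in this setting: allowing Maker an extra element per round can only enlarge the set of positions from which she wins.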
In our paper, we will consider a variant of such Maker-Breaker games played on a graph $G$ sampled according to the {\em Binomial random graph model} $G_{n,p}$ (for short we will write $G\sim G_{n,p}$),
where we fix $n$ vertices and each edge appears with probability $p$ independently of all other choices. It is well known that for monotone increasing graph properties $\mathcal{F}$ this model always comes with a {\em threshold probability} $p^{\ast}$ (see e.g.~\cite{BT1987}) such that
$$
\mathbb{P}\left(G\sim G_{n,p}\text{ satisfies }\mathcal{F}\right)
\rightarrow
\begin{cases}
0 & ~ \text{if }p=o(p^{\ast})\\
1 & ~ \text{if }p=\omega(p^{\ast})~ .
\end{cases}
$$
For some properties $\mathcal{F}$ there is even a {\em sharp threshold}
in the sense that
$$
\mathbb{P}\left(G\sim G_{n,p}\text{ satisfies }\mathcal{F}\right)
\rightarrow
\begin{cases}
0 & ~ \text{if }p\leq (1+o(1))p^{\ast}\\
1 & ~ \text{if }p\geq (1+o(1))p^{\ast}
\end{cases}
$$
holds. One such example will be given in the following paragraph.
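The threshold behaviour is easy to observe empirically. The following Python sketch (ours; the parameters are chosen only for illustration) samples $G_{n,p}$ on either side of the connectivity threshold $\ln n/n$ and records how often the sample is connected:

```python
import math
import random

def gnp_connected(n, p, rng):
    """Sample G ~ G_{n,p} and test connectivity via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:        # edge uv appears with probability p
                parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

rng = random.Random(0)
n = 200
pstar = math.log(n) / n                 # (sharp) connectivity threshold
results = {scale: sum(gnp_connected(n, scale * pstar, rng) for _ in range(30))
           for scale in (0.5, 2.0)}     # below vs above the threshold
print(results)
```

Below the threshold essentially no sample is connected, above it almost all are.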
\medskip
{\bf Maker-Breaker connectivity game.}
The Maker-Breaker connectivity game is a game variant played on the edges of a graph $G$ with $\mathcal F$ consisting of all spanning trees of $G$. Lehman~\cite{L1964} stated that Maker wins the $(1:1)$~Maker-Breaker version of this game as the second player if and only if the graph $G$ contains two edge-disjoint spanning trees. Since the complete graph $K_n$ can be decomposed into even more spanning trees, a natural question is to ask what happens when Breaker's power gets increased by making his bias larger.
Chv\'atal and Erd\H{o}s~\cite{CE1978} initiated the study of the $(1:b)$ variant and they could prove that its threshold bias is bounded from above by $(1+o(1))n/\ln n$.
A matching lower bound was later given by Gebauer and Szab\'o \cite{GS2009}.
Now, if in the $(m:b)$ game on $K_n$ Maker and Breaker do not play according to a deterministic strategy but instead they play purely at random, the final graph consisting of Maker's edges will behave similarly to a random graph $G\sim G_{n,p}$ with $p=m/(m+b)$.
It is well known that the (sharp) threshold probability $p^\ast$
for $G\sim G_{n,p}$ being connected, i.e. where $G\sim G_{n,p}$ turns from almost surely being disconnected
to almost surely being connected, satisfies
$p^\ast=(1+o(1))\ln n/n$ (see e.g.~\cite{B1985}, \cite{JLR2000}).
Surprisingly, when $m=1$, the latter corresponds to $b=(1+o(1))n/\ln n$ and thus perfectly matches the threshold bias mentioned above. In other words, for most values of $b$, a randomly played $(1:b)$ Maker-Breaker connectivity game on $K_n$ is very likely to end up with the same winner as the corresponding deterministically played game. This phenomenon usually is referred to as {\em probabilistic intuition}. There is a wide range of other games fulfilling this property as well, for example the
perfect matching game, the Hamiltonicity game \cite{K2011}
and the
doubly biased $(m:b)$ connectivity game when Maker's bias satisfies $m=o(\ln n)$~\cite{HMS2012}.
But there also exist games, where this intuition fails, such as the diameter game \cite{BMP2016} and the $H$-game \cite{BL2000}.\\
A different approach to give Breaker more power is to play unbiased, but to thin the board instead. Stojakovi\'c and Szab\'o \cite{SS2005} initiated the study of Maker-Breaker games played on a random graph $G \sim G_{n,p}$, their main question being to find the threshold probability $p^\ast$
at which an almost sure Breaker's win turns into an almost sure Maker's win. The existence of such a (not necessarily sharp) threshold is guaranteed by the fact that the property of Maker having a winning strategy is monotone increasing. Now, for the connectivity game it is obvious that the threshold probability needs to satisfy
$p^\ast\geq (1+o(1))\ln n/n$ since for smaller $p$ a random graph
$G\sim G_{n,p}$ almost surely contains isolated vertices (see e.g.~\cite{B1985}, \cite{JLR2000}).
Stojakovi\'c and Szab\'o could show that
indeed $p\geq (1+o(1))\ln n/n$ is enough for
Maker to win
the connectivity game on $G\sim G_{n,p}$ almost surely.
Interestingly, this threshold probability
asymptotically equals the reciprocal of the threshold bias for
the corresponding Maker-Breaker game on $K_n$ --~another phenomenon which has also been observed for
many other natural games
(see e.g. \cite{FGKN2015},~\cite{HKSS2009}, \cite{NSS2016},~\cite{SS2005}).\\
{\bf Connector-Breaker games.} Recently, under the name PrimMaker-Breaker games,
London and Pluh\'ar~\cite{LP2018}
introduced a connected version of the Maker-Breaker games discussed above.
These games, which we will call {\em Connector-Breaker games} in the following,
are played in the same way as the already described Maker-Breaker games,
with the only difference that {\em Connector}
(in the role of Maker)
needs to choose her edges in such a way that her graph stays connected throughout the game.
While London and Pluh\'ar \cite{LP2018}
studied the Connector-Breaker connectivity game on $K_n$, where
Connector aims for a spanning tree of $K_n$,
even more recently
Corsten, Mond, Pokrovskiy, Spiegel and Szab\'o~\cite{CMPSS2019} discussed the variant
in which Connector aims for an odd cycle of $K_n$.
For the unbiased game, London and Pluh\'ar proved the following:
\begin{thm}
Playing the $(1:1)$ Connector-Breaker game on a graph $G$ with $n$ vertices, Connector wins as the first player if and only if $G$ contains a copy of $H_n$, where $H_n$ is the graph $K_{n-2,2}$ with an additional edge inside its two-element colour class.
\end{thm}
Moreover, one can easily see that for $b\geq 2$
the $(1:b)$ Connector-Breaker connectivity game is won by Breaker on every graph $G$~\cite{LP2018}. Thus, the threshold bias for such a game equals 2.
Also, if the game is played on $G\sim G_{n,p}$, then, by the theorem above,
$p$ needs to be almost 1 for Connector to have a winning strategy on $G$ almost surely. Note that both these observations are in huge contrast to the results for the Maker-Breaker analogue.
However, by increasing Maker's bias by just 1, London and Pluh\'ar~\cite{LP2018} showed that the situation changes suddenly.
\begin{thm}
Playing the $(2:b)$ Connector-Breaker game on $K_n$, Connector wins if ${b<n/(8 \ln n)}$, and Breaker wins if $b>n/\ln n$.
\end{thm}
This result shows that increasing Connector's bias makes a huge difference. In particular, the threshold bias in the $(2:b)$ variant is of the same order as in the corresponding Maker-Breaker game and thus, in contrast to the $(1:b)$ games, both variants behave similarly. Naturally, this made London and Pluh\'ar \cite{LP2018} ask whether something similar could be observed when playing the $(2:2)$ game on a graph $G\sim G_{n,p}$ and if it might behave similarly to the $(1:1)$ Maker-Breaker version. In this paper we show that the latter is not the case, and we prove the following result:
\begin{thm}\label{main}
The threshold probability $p^\ast$ for the $(2:2)$ Connector-Breaker connectivity game on $G\sim G_{n,p}$ is of size $n^{-2/3+o(1)}.$
\end{thm}
Hence, even if Connector's bias is increased, a much denser random graph is necessary for Connector to almost surely win the connectivity game than in the respective Maker-Breaker variant of this game.
Since the proof of our theorem is rather technical and the proofs of the upper and lower bound require different techniques, we split the theorem into two parts.
\begin{thm}\label{main_breaker}
Let $\eps >0$ be a constant. For $p\leq n^{-2/3-\eps}$ a random graph $G\sim G_{n,p}$ a.a.s.~has the following property:
Playing a $(2:2)$ Connector-Breaker game
on the edge set of $G$, Breaker has a strategy to keep a vertex isolated in Connector's graph.
\end{thm}
\begin{thm}\label{main_connector}
Let $\eps >0$ be a constant. For $p\geq n^{-2/3+\eps}$ a random graph $G\sim G_{n,p}$ a.a.s.~has the following property:
Playing a $(2:2)$ Connector-Breaker game
on the edge set of $G$, Connector has a strategy to claim a
spanning tree.
\end{thm}
\medskip
\subsection{Organization of the paper}~
The main focus of this paper is proving Theorem~\ref{main_breaker} and Theorem~\ref{main_connector}. In Section \ref{sec:Preliminaries} we will give an overview of all required tools. In Sections~\ref{sec:Breaker's strategy} and~\ref{sec:Connector's strategy} we will describe Breaker's and Connector's strategies, respectively. We will also state some lemmas from which it will follow that the given strategies succeed almost surely for the respective ranges of the edge probability $p$. We postpone the proofs of these lemmas to Section~\ref{sec:algorithm} (for Breaker's strategy) and Section~\ref{sec:structures} (for Connector's strategy). Finally, we will give some concluding remarks in Section \ref{sec:concluding}.
\subsection{Notation and terminology}~
The game-theoretic and graph-theoretic notation in our paper is rather standard and most of the time it follows the notation of \cite{HKSS2014} and \cite{W2001}.
For a positive integer $n$, we set $[n]:=\{k\in\mathbb{N}:~ 1\leq k\leq n\}$.
For a graph $G=(V,E)$ we write $V(G)$ and $E(G)$
for the vertex set and the edge set of $G$, respectively.
If $\{v,w\}$ is an edge from $E(G)$, we denote it with $vw$ for short.
A vertex $w$ is called a neighbour of $v$ in $G$
if $vw\in E(G)$ holds.
The neighbourhood of $v$ in $G$ is
$N_G(v)=\{w\in V(G):~ vw\in E\}$,
and with $d_G(v)=|N_G(v)|$ we denote the degree of $v$ in $G$.
Let subsets $A,B\subset V(G)$ be given. We let
$N_G(v,A)=N_G(v)\cap A$ be the
neighbourhood of $v$ in $A$, and we set
$d_G(v,A)=|N_G(v,A)|$ to be the degree of $v$ into $A$.
Moreover, we let
$N_G(A):=\bigcup_{v\in A} N_G(v)$,
$e_G(A):=\{vw\in E(G):~ v,w\in A\}$
and
$e_G(A,B):=\{vw\in E(G):~ v\in A,w\in B\}$.
Let two graphs $H$ and $G$ be given.
If $V(H)\subset V(G)$ and $E(H)\subset E(G)$ holds,
we call $H$ a subgraph of $G$,
and we write $H\subset G$ for short.
We also let $G\setminus H=(V(G),E(G)\setminus E(H))$ in this case.
If there is a bijection $f:V(H)\rightarrow V(G)$
such that $vw\in E(H)$ holds if and only if
$f(v)f(w)\in E(G)$ holds, the two graphs $H$ and $G$ are called isomorphic (denoted with $H\cong G$), and we also say that $H$ is a copy of $G$ in this case.
A path $P$ with $V(P)=\{v_1,v_2,\ldots,v_k\}$ and
$E(P)=\{v_iv_{i+1}:~ 1\leq i\leq k-1\}$
will be represented by its sequence of
vertices, e.g. $P=(v_1,v_2,\ldots,v_k)$.
Its length is its number of edges.
Assume that some Connector-Breaker game, played on the edge set of some graph $G$, is in progress. At any moment during the game, let $C$ be the graph consisting of Connector's edges and let $B$ be the graph consisting of Breaker's edges. For short,
also set $V_C=V(C)$, $E_C=E(C)$ and $E_B=E(B)$.
If an edge belongs to $B\cup C$, we call it claimed;
otherwise it is called free.
Given a distribution $\mathcal{D}$ and a random variable
$X$, we write $X\sim \mathcal{D}$ for $X$ being sampled according to the distribution $\mathcal{D}$.
With $Bin(n,p)$ we denote the binomial distribution
with parameters $n$ and $p$.
Moreover, with $G_{n,p}$ we denote
the Erd\H{o}s--R\'enyi random graph model on $n$ vertices with edge probability $p$.
If $X$ is a random variable, we let $\mathbb{E}(X)$ denote its expectation. If $\mathcal{E}$ is an event, we let $\mathbb{P}(\mathcal{E})$ denote its probability.
A sequence of events $\mathcal{E}_n$ is said to hold
asymptotically almost surely (a.a.s.) if
$\mathbb{P}(\mathcal{E}_n)\rightarrow 1$ for $n\rightarrow \infty$.
Our main results are asymptotic.
Whenever necessary, we will assume $n$ to be large enough.
We will not optimize constants,
and whenever these are not crucial, we
will omit rounding signs.
\section{Preliminaries}\label{sec:Preliminaries}
\subsection{Maker-Breaker Box game} ~
A simple yet very useful positional game is the following one, introduced by Chv\'atal and Erd\H{o}s~\cite{CE1978}; it is often
helpful for describing strategies that aim to bound the degrees in the opponent's graph.
The game $Box(p,1;a_1,\ldots,a_n)$
is played on a hypergraph $(X,\mathcal{H})$,
with $\mathcal{H}=\left\{F_1,\ldots,F_n\right\}$
consisting of $n$ pairwise disjoint hyperedges (called {\em boxes}),
satisfying $|F_i|=a_i$ for every $i\in [n]$.
In every round, BoxMaker claims at most $p$ elements from
$X$ that have not been claimed before,
while BoxBreaker solely claims one such element.
If, throughout the game, BoxMaker succeeds in claiming all the elements of a box $F_i$, she is declared the winner of the game. Otherwise, i.e. when BoxBreaker succeeds in claiming at least one element in each box, BoxBreaker wins.
The following lemma is a well-known criterion for BoxBreaker to have a winning strategy in the Box game (see e.g.~\cite{CE1978}, \cite{HKSS2014}).
\begin{lemma}\label{lem:boxgame}
Let $a_i=m$ for every $i\in [n]$ and assume that $m>p(\ln n + 1)$, then BoxBreaker wins the game $Box(p,1;a_1,\ldots,a_n)$.
A winning strategy $\mathcal{S}$ is the following one: in every round, BoxBreaker claims an element which belongs to a box in which he does not yet have an element and which, among all such boxes, contains the largest number of BoxMaker's elements.
\end{lemma}
In fact, the first sentence in the above lemma is Theorem 3.4.1 in~\cite{HKSS2014}, while the mentioned strategy is contained in its proof. As an immediate corollary of the above lemma we obtain the following:
\begin{cor}\label{cor:boxgame}
Let BoxMaker and BoxBreaker play the game
$Box(p,1;a_1,\ldots,a_n)$ with boxes $F_i$ of size
$|F_i|=a_i\geq m$. Then following the strategy $\mathcal{S}$
from Lemma~\ref{lem:boxgame}, BoxBreaker can guarantee that the following holds for every $i\in[n]$ throughout the game: as long as he does not claim an element in $F_i$,
the number of BoxMaker's elements in $F_i$ is bounded
by $p(\ln n + 1)$.
\end{cor}
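The strategy $\mathcal{S}$ from Lemma~\ref{lem:boxgame} is easy to simulate. In the following Python sketch (ours), BoxMaker plays a simple greedy strategy, which is just one possible opponent, and BoxBreaker follows $\mathcal{S}$:

```python
import math

def play_box_game(p, sizes):
    """Box(p,1; sizes): greedy BoxMaker vs BoxBreaker playing strategy S.
    Returns True iff BoxBreaker survives, i.e. claims an element in every box."""
    n = len(sizes)
    maker = [0] * n            # BoxMaker's elements per box
    hit = [False] * n          # boxes in which BoxBreaker already has an element
    free = list(sizes)         # unclaimed elements per box
    while not all(hit):
        # Greedy BoxMaker: pile p elements onto the fullest not-yet-hit box.
        for _ in range(p):
            cands = [i for i in range(n) if free[i] > 0 and not hit[i]]
            if not cands:
                break
            i = max(cands, key=lambda j: maker[j])
            maker[i] += 1
            free[i] -= 1
            if free[i] == 0:
                return False   # BoxMaker claimed a whole box and wins
        # Strategy S: answer in the unhit box with the most BoxMaker elements.
        i = max((i for i in range(n) if not hit[i]), key=lambda j: maker[j])
        hit[i] = True
        free[i] -= 1
    return True

n, p = 16, 3
m = int(p * (math.log(n) + 1)) + 1     # ensure m > p(ln n + 1)
print(play_box_game(p, [m] * n))       # BoxBreaker survives
```

Of course, the lemma guarantees BoxBreaker's win against every BoxMaker; the simulation only illustrates the strategy against one particular greedy opponent.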
\subsection{Probabilistic tools and basic properties of $G_{n,p}$}
In this section we present a few bounds on large deviations
of random variables that will be used to identify typical edge
distributions in a random graph $G\sim G_{n,p}$.
Most of the time, we will use the following inequalities due to Chernoff (see e.g.~\cite{AS2008}, \cite{JLR2000}).
\begin{lemma}\label{lem:Chernoff1}
If $X \sim Bin(n,p)$, then
\begin{itemize}
\item $\mathbb{P}(X<(1-\delta)np)< \exp\left(-\frac{\delta^2np}{2}\right)$ for every $\delta>0$ , and
\item $\mathbb{P}(X>(1+\delta)np)< \exp\left(-\frac{np}{3}\right)$ for every $\delta\geq 1$.
\end{itemize}
\end{lemma}
\begin{lemma}\label{lem:Chernoff2}
Let $X \sim Bin(n,p)$ with expectation $\mu=\mathbb{E}(X)$, and
let $k \geq 7\mu$, then
$$\mathbb{P}(X \geq k) \leq e^{-k}.$$
\end{lemma}
Moreover, we will make use of the well-known Markov inequality (see e.g.~\cite{JLR2000}).
\begin{lemma}\label{lem:markov}
Let $X\geq 0$ be a random variable. For every $t\geq 0$ it holds that
$$
\mathbb{P}\left( X\geq t \right)
\leq \frac{\mathbb{E}(X)}{t}~ .
$$
\end{lemma}
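As a quick numerical illustration (ours, not needed for the proofs), one can compare the empirical lower tail of a binomial random variable with the first bound of Lemma~\ref{lem:Chernoff1}:

```python
import math
import random

rng = random.Random(1)
n, p, delta = 1000, 0.1, 0.5
trials = 500

# Empirical frequency of the lower-tail event {X < (1 - delta) n p}.
below = sum(
    sum(rng.random() < p for _ in range(n)) < (1 - delta) * n * p
    for _ in range(trials)
) / trials

bound = math.exp(-delta ** 2 * n * p / 2)   # Chernoff upper bound
print(below, bound)
```

With these parameters the Chernoff bound is about $e^{-12.5}$, and the lower-tail event essentially never occurs in the simulation.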
As a first application of Chernoff's inequalities
we will prove a few simple bounds on degrees
that are very likely to hold in a random graph $G\sim G_{n,p}$.
\begin{lemma}\label{lem:degree_bound}
Let $\eps>0$, $p=n^{-2/3-\eps}$ and let $G\sim G_{n,p}$.
Then with probability at least ${1-\exp(-n^{1/3-2\eps})}$
every vertex $v\in V(G)$ satisfies
\begin{equation}\label{cl:deg}
d_G(v)<2n^{\frac13-\eps}~ .
\end{equation}
\end{lemma}
\begin{proof}
For $v\in V(G)$ we have $d_G(v)\sim Bin(n-1,p)$
with $\mathbb{E}(d_G(v))=(n-1)p\sim np$. Applying
Lemma~\ref{lem:Chernoff1} we deduce that
$
\mathbb{P}\left(d_G(v)>2np\right) \leq
\exp\left( - \frac14 n^{1/3-\eps}\right).
$
Taking a union bound over all possible vertices $v$,
the claim follows.
\end{proof}
\begin{lemma}\label{lem:tec3_connector}
Let $\eps >0$, $p=n^{-2/3+\eps}$ and let $G\sim G_{n,p}$.
Let $A\subset V(G)$ be of size $n^{2/3}$,
then with probability at least
$1-\exp(-n^{\eps/2})$
every vertex $v\in V(G)\setminus A$ satisfies
$d_G(v,A)>n^{\eps/2}$.
\end{lemma}
\begin{proof}
Let $A$ be a fixed set of size $n^{2/3}$.
Generating $G\sim G_{n,p}$ yields that
for every vertex $v\in V(G)\setminus A$, we have
$d_G(v,A) \sim Bin(|A|,p)$ and thus
$\mathbb{E}(d_G(v,A)) = n^{\eps}.$
By Lemma~\ref{lem:Chernoff1}
we deduce that
$
\mathbb{P}\left(
d_G(v,A)\leq \frac{1}{2} n^{\eps}
\right) < \exp\left( - \frac{1}{8} n^{\eps} \right)
$. Taking a union bound over all possible $v$, the claim follows.
\end{proof}
\section{Breaker's strategy} \label{sec:Breaker's strategy}
\subsection{Defining bad vertices}
For $p=n^{-2/3-\eps}$ we aim to give a Breaker's strategy
that a.a.s. isolates a given vertex $x$ from Connector's graph
when a $(2:2)$ game is played on $G\sim G_{n,p}$.
In order to do so, we first define iteratively a set $B^x$
of vertices that are {\em bad} with respect to the aim of isolating $x$. If $x$ is chosen carefully (which we will manage later), then Breaker has a strategy to make sure that Connector in her move either does not even reach $B^x$, or, in case she reaches $B^x$, Breaker can immediately destroy all potential threats. More details will be given later. Algorithm~\ref{alg:bad} describes how $B^x$ is constructed.
\begin{algorithm}[ht]\label{alg:bad}
\caption{Bad vertex set $B^x$ for given vertex $x$}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{~ graph $G$ and vertex $x\in V(G)$}
\Output{~ number of iterations $r_x$, bad vertex set
$B^x=\bigcup_{k\leq r_x} B_k^x$}
~ $B^x_1:=N_G(x)$\;
~ $B^x:=B^x_1$\;
\For{$i\geq 2$}{
\quad\mbox{} $B^x_i:=\left\{v\notin B^x\cup \{x\}:~ d_G(v,B^x)\geq 2\right\}$ \;
\quad\mbox{} $B^x\leftarrow B^x\cup B^x_i$\\
\lIf{$B^x_{i}=\varnothing$\\
\quad\mbox{}}{
halt with output $B_1^x,\ldots,B_{i-1}^x,B^x$ and
$r_x=i-1$}}
\end{algorithm}
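Algorithm~\ref{alg:bad} translates almost verbatim into code. The following Python sketch (ours; the toy graph at the end is invented) represents $G$ as a dictionary mapping each vertex to its neighbourhood:

```python
def bad_vertex_set(adj, x):
    """Run Algorithm 1: return the layers B^x_1, ..., B^x_{r_x} and their union B^x.

    adj: dict mapping every vertex of G to the set of its neighbours.
    """
    layers = [set(adj[x])]                 # B^x_1 = N_G(x)
    bad = set(adj[x])                      # B^x so far
    while True:
        # B^x_i: vertices outside B^x and {x} with >= 2 neighbours in B^x.
        new = {v for v in adj
               if v != x and v not in bad and len(adj[v] & bad) >= 2}
        if not new:
            return layers, bad             # r_x = len(layers)
        layers.append(new)
        bad |= new

# Invented toy graph: a 4-cycle 1-2-3-4 with an extra vertex 0 attached to 1.
adj = {0: {1}, 1: {0, 2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
layers, bad = bad_vertex_set(adj, 1)
print(layers, bad)
```

Exactly as in the algorithm, a vertex enters $B^x$ as soon as it has at least two neighbours in the current bad set; in the toy graph, vertex $3$ joins in the second iteration.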
The following lemma will be crucial for Breaker's strategy.
\begin{lemma}\label{lem:bad}
Let $n$ be a large enough integer and
let $\eps \geq 7 \ln\ln n / \ln n$.
For $p=n^{-2/3-\eps}$ generate $G\sim G_{n,p}$.
Then a.a.s. $G$ satisfies the following property:
For every set $M\subset V(G)$ of size~3, there exists a vertex $x$
such that Algorithm~\ref{alg:bad} produces
a set $B^x$ of vertices and a sequence
$(B_1^x,\ldots,B_{r_x}^x)$ of disjoint subsets of $B^x$
such that the following holds:
{\begin{enumerate}[label=\itmarab{B}]
\item\label{Bprop:neighbours1} $B_1^x=N_G(x)$
and $e_G(B_1^x)=0$,
\item\label{Bprop:neighbours2} for every $2\leq i\leq r_x$ and every vertex $v\in B_i^x$ we have
$d_G\left(v,\bigcup_{k\leq i} B_k^x\right)=2$,
\item\label{Bprop:nonbad} for every vertex $v\in V\setminus (B^x\cup \{x\})$ it holds that $d_G(v,B^x)\leq 1$,
\item\label{Bprop:disjointM} $B^x\cap (M\cup N_G(M)) = \varnothing$.
\end{enumerate}}
\end{lemma}
We postpone the proof of the above lemma to Section~\ref{sec:algorithm} and recommend reading Breaker's strategy first.
\subsection{The strategy}
In the following we prove Theorem~\ref{main_breaker}. Let Connector and Breaker play a $(2:2)$ game on $G\sim G_{n,p}$.
We will show that, under the condition that the property described in Lemma~\ref{lem:bad} holds,
Breaker has a strategy that isolates a vertex from Connector's graph.
Let $V_C^r$ denote the set of vertices that are covered by Connector's edges at the end of round $r$. Immediately after Connector's first move, we have
$|V_C^1|=3$ and thus, by the property from Lemma~\ref{lem:bad} (applied with $M=V_C^1$), we find a vertex $x$ such that
Algorithm~\ref{alg:bad} produces
a set $B^x$ of vertices and a sequence
$(B_1^x,\ldots,B_{r_x}^x)$ of disjoint subsets of $B^x$ such that the Properties~\ref{Bprop:neighbours1}--\ref{Bprop:disjointM}
hold with $M=V_C^1$.
Notice that, at this point $x\notin V_C^1\cup N_G(V_C^1)$
holds, according to \ref{Bprop:disjointM} and since $N_G(x)\subset B^x$.
In order to simplify notation,
let $B_0^x:=\{x\}$ and set
$B_{<i}^x := \bigcup_{\ell=0}^{i-1} B_{\ell}^x$
as well as $B_{\leq i}^x := \bigcup_{\ell=0}^{i} B_{\ell}^x$.
Breaker's strategy is to make
sure that for each round $r$, immediately
after his move the following property holds for every free edge $vw$:
\medskip\medskip
\begin{center}
\begin{minipage}{0.8\textwidth}
{\begin{enumerate}[label=\itmarab{Q}]
\item\label{B_maintain}
If there exists $0\leq i\leq r_x$
such that $v\in (N_G(V_C^r)\setminus V_C^r) \cap B_i^x$
and $w\in V_C^r$, then
$w\in B_{<i}^x$.
\end{enumerate}}
\end{minipage}
\end{center}
\medskip\medskip
Let us observe first that Breaker keeps $x$ isolated in Connector's graph, if he is indeed able to maintain \ref{B_maintain} for every free edge after each of his moves.
Assume this is not the case, i.e. there is some round $r$ in which Connector reaches vertex $x$.
Then immediately after Breaker's $(r-1)^\text{st}$
move, we have that \ref{B_maintain} holds
for every free edge and still
$x\notin V_C^{r-1}$.
From this it follows that immediately before Connector's $r^{\text{th}}$ move there cannot be a free edge $xw$ with $w\in V_C^{r-1}$.
Indeed, otherwise we would need
$x\in (N_G(V_C^{r-1})\setminus V_C^{r-1})\cap B_0^x$
and by \ref{B_maintain} we would get
$w\in B_{<0}^x = \varnothing$, a contradiction.
Thus, in order to reach $x$ during round $r$, Connector would need to claim a path $(w,v,x)$
of length 2, starting with some vertex $w\in V_C^{r-1}$ and ending in $x$. It then follows that
$v\in (N_G(V_C^{r-1})\setminus V_C^{r-1}) \cap B_1^x$.
However, using \ref{B_maintain}
for the free edge $wv$ at the end of round~$r-1$, this
would give
$w\in B_{<1}^x = B_0^x$ and thus $x=w$,
a contradiction.
Hence, we know that Connector cannot reach $x$ as long as Breaker restores \ref{B_maintain}
for every free edge.
It thus remains to verify that Breaker can indeed do so. We proceed by induction. \\
For round 1, observe that immediately after Connector's first move, there is no edge between $V_C^1$ and $B^x\cup \{x\}$, according to Property~\ref{Bprop:disjointM} (with $M=V_C^1$).
Thus, Property~\ref{B_maintain} holds at the end of round 1 for every free edge, independent of what Breaker's first move is,
as there does not exist any edge $vw$ as described in that property.\\
Let us assume then, that \ref{B_maintain} is satisfied immediately after Breaker's
$(r-1)^{\text{st}}$
move for every free edge, and let us explain how Breaker restores
\ref{B_maintain} in the next round.
Without loss of generality we may assume that in round $r$ Connector reaches exactly two new vertices, say $w_1$ and $w_2$, i.e. $V_C^{r}=V_{C}^{r-1}\cup \{w_1,w_2\}$.
If after Connector's $r^{\text{th}}$ move,
there exist at most two free edges that fail to satisfy
Property~\ref{B_maintain} (with $V_C=V_C^r$),
then Breaker claims these edges and by this easily restores
that \ref{B_maintain} holds for every free edge
at the end of round $r$.
So, assume for a contradiction that immediately after Connector's $r^{\text{th}}$ move there are
at least three free edges that do not satisfy \ref{B_maintain}. All of these edges need to be incident to $w_1$ or $w_2$, as before Connector's move the Property~\ref{B_maintain}
was true for every free edge (where $V_C=V_C^{r-1}$). Without loss of generality let $w_2$
be incident to at least two of these edges,
say $w_2v_1$ and $w_2v_2$. As these edges fail to satisfy \ref{B_maintain} after Connector's $r^{\text{th}}$ move, we have
$v_1\in (N_G(V_C^r)\setminus V_C^r) \cap B_{i_1}^x$
and
$v_2\in (N_G(V_C^r)\setminus V_C^r) \cap B_{i_2}^x$
for some $0\leq i_1,i_2\leq r_{x}$,
while $w_2\in V_C^r$ and
$w_2\notin B_{<i}^x$ with $i:=\max\{i_1,i_2\}$.
Now, since $w_2$ has two neighbours in $B^x\cup \{x\}$,
Algorithm~\ref{alg:bad} at some point must have added $w_2$ to $B^x$. Thus, we conclude that $w_2\in B_k^x$
for some $k \geq \max\{i_1,i_2\}$.
Consider first the case that in round $r$
Connector reaches $w_2$ by claiming a free edge
$yw_2$ with $y\in V_C^{r-1}$.
Then $y\notin \{v_1,v_2\}$. Moreover,
$w_2\in N_G(V_C^{r-1})\setminus V_C^{r-1}$
and, since \ref{B_maintain}
was true for $yw_2$ at the end of round~$r-1$
(with $V_C=V_C^{r-1}$),
we conclude $y\in B_{<k}^x$. But this means that
$w_2\in B_k^{x}$ has three neighbours in
$B_{\leq k}^x$ (namely $v_1,v_2$ and $y$), a contradiction to~\ref{Bprop:neighbours2}.
Consider then the case that in round $r$
Connector does not reach $w_2$ as in the first case.
That is, in round $r$ Connector
claims a path $(y,w_1,w_2)$ with $y\in V_C^{r-1}$
and
$w_1\in N_G(V_C^{r-1})\setminus V_C^{r-1}$.
We know that $w_2\in B_k^x$ has exactly two neighbours
in $B_{\leq k}^x$ according to Property~\ref{Bprop:neighbours2}, and these neighbours need to be
$v_1$ and $v_2$. It follows that the third edge, which does not satisfy~\ref{B_maintain} immediately before Breaker's $r^{\text{th}}$ move, cannot be incident to $w_2$ and thus needs to be of the form $v_3w_1$ with $v_3\in (N_G(V_C^r)\setminus V_C^r)\cap B_{i_3}^x$ for some $0\leq i_3\leq r_x$.
Then $v_3,w_2\in B^x$ are two neighbours of $w_1$
and hence Algorithm~\ref{alg:bad} must have added $w_1$ to $B^x$ at some point, say $w_1\in B_t^x$.
Since again $w_2\in B_k^x$ has exactly two neighbours in $B_{\leq k}^x$ and these are
$v_1$ and $v_2$, we must have $w_1\notin B_{\leq k}^x$, i.e. $t>k$.
But now, by induction, Property~\ref{B_maintain}
was true for the free edge $yw_1$ at the end of round $r-1$, and thus $y\in B_{< t}^x$.
Moreover, as we assumed $v_3w_1$ to be an edge not satisfying \ref{B_maintain} after Connector's
$r^{\text{th}}$ move, we have $w_1\notin B_{<i_3}^x$ and thus $i_3\leq t$.
Hence, we obtain that the three neighbours
$w_2,v_3,y$
of $w_1\in B_t^x$ belong to $B_{\leq t}^x$,
as we have $w_2\in B_k^x\subset B_{\leq t}^x$ and
$v_3\in B_{i_3}^x\subset B_{\leq t}^x $
and $y\in B_{<t}^x$. This again leads to a contradiction with \ref{Bprop:neighbours2}.
\hfill $\Box$
\section{Connector's strategy} \label{sec:Connector's strategy}
\subsection{Defining good structures}
For $p=n^{-2/3+\eps}$ we aim to give a Connector's strategy with which Connector a.a.s. can reach every vertex of $G\sim G_{n,p}$.
In order to do so, we will first describe a few useful structures, that are typically contained in $G$ even after deleting a few edges
and which will help
Connector later on to reach any fixed vertex within
a small number of rounds.
Recall that $E_B$ denotes the set of Breaker's edges at any moment during a Connector-Breaker game,
while $V_C$ denotes the set of vertices incident to Connector's edges. Moreover,
denote with $\mathcal{T}_k$ the full binary tree
with $k$ levels.
\begin{dfn}\label{def:good}
Let $k\in\mathbb{N}$. Assume a $(2:2)$ Connector-Breaker game
on some graph $H$ is in progress.
Let $x\in V(H)\setminus V_C$. Then we call a subgraph $T \cong \mathcal{T}_k$
of $H$ \textit{good} with respect to
$(x,H)$
if the following conditions hold:
\begin{enumerate}
\item[(1)] $x\notin V(T)$,
\item[(2)] if $\overrightarrow{T}$ is the orientation
where the edges are oriented from the root to the leaves, then for every arc $\overrightarrow{uw}\in E(\overrightarrow{T})$ we have either $uw\notin E_B$ or ($uw\in E_B$ and $w\in V_C$).
\item[(3)] for every leaf $v$ of $T$ we have $vx\in E(H)\setminus E_B$.
\end{enumerate}
\end{dfn}
\begin{lemma}[Base strategy]\label{lem:connector:basic}
Assume a $(2:2)$ Connector-Breaker game
on some graph $H$ is in progress with
Connector being the next player to make a move.
Let $x\in V(H)\setminus V_C$
and let $k\geq 2$ be any integer.
Moreover, assume that $H$ contains a binary tree $T \cong \mathcal{T}_k$
which is good with respect to $(x,H)$
and such that its root $r$ belongs to
$V_C$ already.
Then Connector has a strategy $\mathcal{S}_x$
to
reach $x$ (i.e. to add $x$ to $V_C$)
within at most $k$ rounds.
\end{lemma}
\begin{proof}
We prove the statement by induction on $k$.
For $k=2$, by assumption we are given
a tree $T\cong \mathcal{T}_2$
whose leaves are adjacent to $x$ via edges of $E(H)\setminus E_B$, according to Definition~\ref{def:good}. If one of the leaves belongs to $V_C$, then Connector can take the edge between that leaf and $x$.
Otherwise, according to (2) we obtain
$E(T)\cap E_B=\varnothing$. Then, since the root $r$ of $T$ belongs to $V_C$ by assumption, Connector can claim one edge
between $r$ and a leaf of $T$, and
for her second edge she can claim the edge between that leaf and $x$. Thus, she reaches $x$ within $1$ round.
Now let $k>2$, and let $T\cong \mathcal{T}_k$ be a tree as described in the assumptions of the lemma. Denote the root of $T$
by $r$, let $r_1$ and $r_2$ be the neighbours of $r$ in $T$,
and let $r_{1,1}$, $r_{1,2}$ and $r_{2,1}$, $r_{2,2}$ be the respective children of $r_1$ and $r_2$ in $T$. Each of the vertices $r_{i,j}$ is the root of a subtree $T_{i,j}\cong \mathcal{T}_{k-2}$
whose leaves are adjacent to $x$ via edges of $E(H)\setminus E_B$. Let
$$
E_{i,j}:=\{r_ir_{i,j}\}\cup E(T_{i,j})\cup \left\{
xw:~ w~\text{is a leaf of }T_{i,j}\right\}
$$
for every $1\leq i,j\leq 2$,
and observe that the four sets $E_{i,j}$ are pairwise disjoint.
For the first round, Connector makes sure that
$r_1$ and $r_2$ are added to $V_C$
if they do not belong to $V_C$ already. This is possible since
for every $i\in [2]$ we have that $r_i\in V_C$ already before that round or $r_ir\notin E_B$ according to (2) in Definition~\ref{def:good}.
After Breaker's following move we know that
there are at least two sets $E_{i,j}$
with $1\leq i,j\leq 2$ that Breaker did not touch in his move, since Breaker claims only two edges and the four sets are pairwise disjoint. Taking the union of two such sets,
say $E_{i_1,j_1}$ and $E_{i_2,j_2}$,
while identifying $r_1$ with $r_2$ if
$i_1\neq i_2$, we obtain
a binary tree $T'\cong \mathcal{T}_{k-1}$
which is good with respect to $(x,H')$,
where $E(H')=E_{i_1,j_1}\cup E_{i_2,j_2}$.
Thus, by induction, Connector needs at most $k-1$ further rounds to reach $x$.
\end{proof}
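For bookkeeping, the induction can be summarized by a simple recursion: writing $f(k)$ for the maximal number of rounds Connector needs from a good copy of $\mathcal{T}_k$ whose root belongs to $V_C$, the proof above shows
\begin{align*}
f(2)\leq 1 \qquad\text{and}\qquad f(k)\leq 1+f(k-1) \quad\text{for } k>2,
\end{align*}
so that $f(k)\leq k-1\leq k$, in accordance with the bound stated in Lemma~\ref{lem:connector:basic}.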
Connector's main strategy will be split into different stages. Depending on the number of rounds played so far, she will use similar but distinct structures that help her to enlarge $V_C$ until every vertex is reached. These structures are given by the following lemmas, whose proofs will be given in Section~\ref{sec:structures}.
\begin{lemma}[Good structures for Stage I]\label{lem:tec1_connector}
For every constant $\delta >0$
there exists an integer $k_1\in\mathbb{N}$ such that the following holds.
Let $G\sim G_{n,p}$ with $p=n^{-2/3+\delta}$,
then with probability at least
$1-n^{-1}$ the following is true for every $r,x\in V(G)$:\\
If $B$ is any subgraph of $G$ with $e(B)\leq n^{1/3}\ln n$, then the graph $G\setminus B$ contains a copy $T$ of $\mathcal{T}_{k_1}$ such that
$r$ is the root of $T$, $x\notin V(T)$ and
every leaf of $T$ is adjacent to $x$ in $G\setminus B$.
\end{lemma}
\begin{lemma}[Good structures for Stage II]\label{lem:tec2_connector}
For every constant $\delta >0$
there exists an integer $k_2\in\mathbb{N}$ such that the following holds.
Let $G\sim G_{n,p}$ with $p=n^{-2/3+\delta}$,
and let $A\subset V(G)$ be of size $n^{1/3}$,
then with probability at least
$1-n^{-1}$
the following is true for every $x\in V\setminus A$:
If $M$ is any subset of $V\setminus \{x\}$
and $B$ is any subgraph of $G$ with
$d_B(v) \leq \ln^2 n$ for every $v\in V\setminus M$ and with $e(B)\leq n^{2/3}\ln n$, then
$G$ contains
a vertex $z\in N_{G\setminus B}(A)$ and four vertex disjoint copies $T_{\ell}$ of $\mathcal{T}_{k_2}$ with roots $r_{\ell}$
such that for every $\ell\in [4]$ we have:
{\begin{enumerate}[label=\itmarab{S}]
\item\label{good2:x} $x\notin V(T_{\ell})$,
\item\label{good2:top} $zr_{\ell}\in E(G\setminus B)$,
\item\label{good2:edges} if $\overrightarrow{T_{\ell}}$ is the orientation
where the edges are oriented from the root to the leaves, then for every arc $\overrightarrow{uw}\in E(\overrightarrow{T_{\ell}})$ we have either $uw\notin E(B)$ or ($uw\in E(B)$ and $w\in M$),
\item\label{good2:leaves} for every leaf $v$ of $T_{\ell}$ we have $vx\in E(G\setminus B)$.
\end{enumerate}}
\end{lemma}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5,page=2]{good_structures}
\caption{Structure for Stage II}
\label{fig:structure2}
\end{center}
\end{figure}
We postpone the proofs of the above lemmas to Section~\ref{sec:structures} and recommend reading Connector's strategy
first.
\subsection{The strategy}
In the following we prove Theorem~\ref{main_connector}.
Let $\eps>0$ be given, and let
$k_1$ and $k_2$ be integers promised
by Lemma~\ref{lem:tec1_connector}
and Lemma~\ref{lem:tec2_connector} (applied with $\delta=\eps$),
respectively. Set $k:=\max\{k_1,k_2\}+2$.
Before revealing $G\sim G_{n,p}$ on the vertex set $V=[n]$,
we fix an arbitrary set $A_1\subset [n]$ of size $n^{1/3}$
and an arbitrary set $A_2\subset [n]$ of size $n^{2/3}$.
Then, with probability tending to~1, all the properties
from
Lemma~\ref{lem:tec1_connector}, Lemma~\ref{lem:tec2_connector} (applied for $A=A_1$)
and Lemma~\ref{lem:tec3_connector} (applied for $A=A_2$) hold.
From now on, let us condition on these.
Let Connector and Breaker play a $(2:2)$ game on $G$.
In the following we will first describe a strategy for Connector, and afterwards we will show that it indeed constitutes a winning strategy for the connectivity game on $G$, assuming that all the properties we conditioned on above hold.
The strategy will be described through the following two stages, between which Connector alternates. If at any moment Connector cannot follow the strategy while $V\neq V_C$ still holds, then she forfeits the game. (We will show later that this does not happen.)
\medskip
{\bf Strategy description:} Fix a vertex $r\in V$ to be Connector's start vertex, and set $V_C=\{r\}$ before the game starts. As long as $V\neq V_C$ holds, Connector plays as follows, starting with Stage~I for her very first move.
\begin{itemize}
\item[] \textbf{Stage I:} Let $x\in V\setminus V_C$
be an arbitrary vertex, where the vertices of $A_1$ are
preferred first, then those of
$A_2$, and only afterwards all the remaining vertices are considered.
Connector then
adds the vertex $x$ to $V_C$ within at most $k$ rounds.
The details of how she can do this
can be found later in the strategy discussion.
Immediately afterwards, if still $V\neq V_C$ holds, Connector proceeds
with Stage II.
\medskip
\item[] \textbf{Stage II:}
Let $x\in V\setminus V_C$ be an arbitrary vertex
maximizing $d_B(x)$ among all vertices in $V\setminus V_C$.
Connector then
adds the vertex $x$ to $V_C$ within at most $k$ rounds.
The details of how she can do this
can be found later in the strategy discussion.
Immediately afterwards, if still $V\neq V_C$ holds, Connector proceeds
with Stage I.
\end{itemize}
\medskip
{\bf Strategy discussion:} If Connector can follow the strategy, without forfeiting the game, until $V=V_C$ holds,
then she clearly occupies a spanning tree and thus wins the game. It hence remains to prove that Connector can always follow the proposed strategy.
In order to do so, we start with two simple observations.
\begin{obs}\label{obs:degbound}
For as long as Connector can follow the proposed strategy, it holds that $d_B(v)<\ln^2 n$ for every $v\in V\setminus V_C$.
\end{obs}
\textit{Proof.} While the Connector-Breaker game on $G$ is going on, let us consider the Box game $Box(8k,1;n-1,\ldots,n-1)$
where for every vertex $i\in V(G)$ there is a box
$F_i$ of size $n-1$. In this auxiliary game, let Breaker take over the role of BoxMaker and let Connector be BoxBreaker in the following way. Whenever Breaker claims some edge $uw$ in the game on $G$, let BoxMaker claim one element in each of the boxes $F_u$ and $F_w$. Observe that this way, the number of BoxMaker's elements in any box $F_v$ will be equal to $d_B(v)$.
Furthermore, whenever in Stage II Connector fixes some vertex $x$ of largest degree $d_B(x)$
(in order to add this vertex to $V_C$ within the following $k$ rounds),
let BoxBreaker claim an element in the box $F_x$.
Observe that this is consistent with the rules of the Box game:
a Stage~II selection recurs within at most $2k$ rounds,
during which BoxMaker may get up to $2k\cdot 4=8k$ new elements
over all the boxes, as Breaker claims two edges per round and each edge contributes an element to two boxes.
Now, Corollary~\ref{cor:boxgame} ensures that
whenever a vertex $x\in V\setminus V_C$ is selected for Stage II,
right at this moment BoxMaker has claimed fewer than $8k(\ln n + 1)$ elements of the box $F_x$, that is,
\begin{equation*}
d_B(x)<8k(\ln n + 1).
\end{equation*}
Since such a vertex $x$ is always chosen to have maximal Breaker degree among all vertices in $V\setminus V_C$ and since such a choice always repeats within at most $2k$ rounds, we obtain
$$
d_B(v)<8k(\ln n + 1) + 2k\cdot 2 < \ln^2 n
$$
whenever $v\in V\setminus V_C$. This proves the observation. {\color{red}\checkmark}
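The final comparison is pure asymptotics: since $k$ is a constant,
\begin{align*}
8k(\ln n + 1) + 2k\cdot 2 = O(\ln n) = o(\ln^2 n),
\end{align*}
so the bound $d_B(v)<\ln^2 n$ indeed holds for all sufficiently large $n$.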
\begin{obs}\label{obs:roundsbound}
As long as Connector can follow the proposed strategy the following holds:
\begin{enumerate}
\item[(i)] If $A_1\not\subset V_C$, then $e(B)<n^{1/3} \ln n$.
\item[(ii)] If $A_2\not\subset V_C$, then $e(B)<n^{2/3} \ln n$.
\end{enumerate}
\end{obs}
\textit{Proof.} Stage~I always repeats after at most $2k$ rounds. Since for Stage I Connector prefers the vertices of $A_1$ to be added to $V_C$, it takes her at most $2k|A_1|$ rounds until $A_1\subset V_C$ holds, if she is able to follow the strategy. Thus, as long as $A_1\not\subset V_C$ holds, Breaker cannot have more than $2k|A_1|\cdot 2 < n^{1/3}\ln n$ edges. This proves statement (i). Statement (ii) can be proven analogously.~{\color{red}\checkmark}
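Written out, the count behind statement (i) is
\begin{align*}
e(B) \leq 2\cdot 2k|A_1| = 4k\, n^{1/3} < n^{1/3}\ln n
\end{align*}
for all sufficiently large $n$, as Breaker claims two edges in each of the at most $2k|A_1|$ rounds played before $A_1\subset V_C$ holds; statement (ii) follows in the same way with $|A_2|=n^{2/3}$.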
\medskip
Now, using Observations~\ref{obs:degbound} and \ref{obs:roundsbound} as well as the properties from Lemmas~\ref{lem:tec1_connector},
\ref{lem:tec2_connector} and \ref{lem:tec3_connector}, we will finally show that Connector can always follow the proposed strategy. That is, assuming that Connector has been able to follow her strategy so far, we will show that when she
fixes her next vertex $x$ according to Stage~I or Stage~II, she can indeed add this vertex to $V_C$ within at most $k$ rounds.
In order to do so, we will consider three cases.
\medskip
{\bf Case 1 ($A_1\not\subset V_C$):}
In this case we have $e(B)\leq n^{1/3}\ln n$ according to Observation~\ref{obs:roundsbound}.
Thus, by the property from Lemma~\ref{lem:tec1_connector}
we can find a copy $T$ of $\mathcal{T}_{k_1}$ in $G\setminus B$
such that $r$ is the root of $T$, such that $x\notin V(T)$ and such that every leaf of $T$ is adjacent to $x$ in $G\setminus B$.
In particular, $T$ is good with respect to $(x,G\setminus B)$.
Thus, following the base strategy $\mathcal{S}_x$ from Lemma~\ref{lem:connector:basic}, Connector can reach $x$ within $k_1\leq k$ rounds.
\medskip
{\bf Case 2 ($A_1\subset V_C$ and $A_2\not\subset V_C$):}
In this case we have $e(B)\leq n^{2/3}\ln n$ according to Observation~\ref{obs:roundsbound},
and $d_B(v)<\ln^2 n$ for every $v\in V\setminus V_C \subset V\setminus A_1$ according to Observation~\ref{obs:degbound}. Applying the property from
Lemma~\ref{lem:tec2_connector} (with $M=V_C$ and $A=A_1$)
we can find a vertex $z\in N_{G\setminus B}(A_1)$
and four vertex disjoint copies $T_{\ell}$ of $\mathcal{T}_{k_2}$
with roots $r_{\ell}$ such that for every $\ell\in [4]$ we have that $zr_{\ell}\in E(G\setminus B)$ and $T_{\ell}$ is good with respect to $(x,G\setminus B)$. In the first round, Connector
claims an edge between $A_1$ and $z$ which is possible
as $A_1\subset V_C$ and $z\in N_{G\setminus B}(A_1)$.
Afterwards, consider the pairwise disjoint sets
$$
E_{\ell}:=\{zr_{\ell}\}\cup E(T_{\ell})\cup \left\{
xw:~ w~\text{is a leaf of }T_{\ell}\right\}
$$
for $\ell\in [4]$. As in the meantime Breaker claims only two edges, there will be at least two of these sets that Breaker does not touch until Connector's next move. Without loss of generality let these be the sets $E_1$ and $E_2$. Then
the union $\{zr_1,zr_2\}\cup E(T_1)\cup E(T_2)$ induces a copy of $\mathcal{T}_{k_2+1}$,
which is good with respect to $(x,G\setminus B)$.
Therefore, following the base strategy $\mathcal{S}_x$ from Lemma~\ref{lem:connector:basic}, Connector can reach $x$ within at most $k_2+1$ further rounds.
Hence, in total, Connector needs at most $k_2+2\leq k$ rounds in this case.
\medskip
{\bf Case 3 ($A_1\cup A_2\subset V_C$ and $V_C\neq V$):}
According to Observation~\ref{obs:degbound},
we have $d_B(x)<\ln^2 n$ before Connector
wants to add $x$ to $V_C$.
Following the property from Lemma~\ref{lem:tec3_connector}
(with $A=A_2$) we then conclude that $d_G(x,A_2)>n^{\eps/2}>d_B(x)$.
Therefore, since $A_2\subset V_C$, Connector immediately can claim an edge leading to $x$. \hfill $\Box$
\section{Analysis of Algorithm~\ref{alg:bad}}\label{sec:algorithm}
The aim of this section is to prove Lemma~\ref{lem:bad}.
For that reason we will prove a slightly more general lemma, Lemma~\ref{lem:tec}, from which Lemma~\ref{lem:bad} will follow.
For Lemma~\ref{lem:tec} we are going to apply Algorithm~\ref{alg:bad} to a set $A=\{x_1,\ldots,x_t\}$ of vertices, later choosing one of them carefully to obtain a vertex $x$ as promised by
Lemma~\ref{lem:bad}.
That is, we first fix $x_1$ and apply Algorithm~\ref{alg:bad} in order to determine the set $B^{x_1}$, then we repeat the algorithm for $x_2$ and so on. Amongst other properties we will obtain that it
is very likely that all the sets $B^{x_j}$ are pairwise disjoint and satisfy certain degree conditions. To simplify notation we set
\begin{equation}\label{eq:Bji}
B^{(j,i)} := \bigcup_{\ell < j} B^{x_{\ell}} \cup \bigcup_{k\leq i} B_k^{x_j}~ .
\end{equation}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.7]{bad_picture}
\caption{Structure of $B^{(j,i)}$}
\label{fig:Bji}
\end{center}
\end{figure}
That is, $B^{(j,i)}$ is the set of all bad vertices that are determined immediately after $B_i^{x_j}$ is created.
In particular,
$B^{(t,r_{x_t})} = \bigcup_{x\in A} B^{x}$ is the set of all bad vertices after the algorithm has been run for
all vertices $x_j$.
Moreover, we let
$$
a(j,i):=
\begin{cases}
(j,i-1)~ , & i\neq 1\\
(j-1,r_{x_{j-1}})~, & i=1
\end{cases}
$$
\vspace{1mm}
denote the pair coming immediately before $(j,i)$ in lexicographic order, for $(j,i)\neq (1,1)$.
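For illustration (with a hypothetical value): if, say, $r_{x_1}=3$, then the pairs are processed in the order
\begin{align*}
(1,1)\, ,\ (1,2)\, ,\ (1,3)\, ,\ (2,1)\, ,\ (2,2)\, ,\ \ldots\, ,
\end{align*}
so that for instance $a(1,2)=(1,1)$ and $a(2,1)=(1,3)$.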
\newpage
\begin{lemma}[Technical Lemma]\label{lem:tec}
Let $n$ be a large enough integer,
let $\eps \geq 7 \ln\ln n / \ln n$
and let $t\in\mathbb{N}$ be any constant.
For $p=n^{-2/3-\eps}$ generate a random graph $G\sim G_{n,p}$.
Then with probability at least $1-n^{-\eps/4}$ there
exists a set $A=\{x_1,\ldots,x_t\}\subset V(G)$
of size $t$, such that successively applying Algorithm~\ref{alg:bad}
for $x_1,\ldots,x_t$ the following holds for every $j\in [t]$ and $i \leq \tilde{r}_j:= \min\{ r_{x_j}, \lceil 1/\eps \rceil\}$:
\begin{enumerate}[label=\itmarab{P}]
\item\label{prop:newx}
$x_j\notin B^{a(j,1)} \cup N_G(B^{a(j,1)})$ ~ ,
\item\label{prop:sizeBji}
$|B^{x_j}_i|<n^{(1-i\eps)/3}$ ~ ,
\item\label{prop:edgesBji}
$e(B^{x_j}_i)=0$ ~ ,
\item\label{prop:disjoint}
$ B_i^{x_j} \cap \left( N_G(B^{a(j,1)}) \cup B^{a(j,1)} \right) = \varnothing $ ~ ,
\item\label{prop:neighbours}
if we define
$N_{(j,i)}^s:= \left\{ v\in V\setminus B^{(j,i)} : d_G\left(v, B^{(j,i)} \right) \geq s\right\}$ then
$$
|N_{(j,i)}^s| \leq \left( 2j\eps^{-1} + i \right) n^{\frac{3-s(1+\eps)}{3}}~~ \text{for every }s\in \{0,1,2,3\}~ ,
$$
\end{enumerate}
and for every $k\in [t]$ we have
\begin{enumerate}[label=\itmarab{P},start=6]
\item\label{prop:rounds}
$r_{x_k} = \tilde{r}_k$.
\end{enumerate}
\end{lemma}
Before proving Lemma~\ref{lem:tec}, let us first show how it implies Lemma~\ref{lem:bad}.
\begin{proof}[Proof of Lemma~\ref{lem:bad}]
Apply Lemma~\ref{lem:tec} with $t=7$. Then a.a.s.~we can find a set $A=\{x_1,\ldots,x_t\}$ as promised by this lemma.
Now, fix any set $M\subset V(G)$ of size 3.
Since $|A|=7$, it will be enough to verify the following two statements.
\begin{enumerate}
\item[(i)] Every vertex $x\in A$ satisfies \ref{Bprop:neighbours1}--\ref{Bprop:nonbad}.
\item[(ii)] At most six vertices $x\in A$ do not satisfy \ref{Bprop:disjointM}.
\end{enumerate}
For (i), consider any $x_j\in A$.
Property~\ref{Bprop:neighbours1} follows immediately by the definition of $B_1^{x_j}$
and Property~\ref{prop:edgesBji}.
Moreover, Property~\ref{Bprop:nonbad} follows immediately from the halt condition of Algorithm~\ref{alg:bad}.
To see Property~\ref{Bprop:neighbours2}, let $v\in B_i^{x_j}$. By the algorithm, $v$ is added to $B_i^{x_j}$ if
$d_G \left(v,\bigcup_{k< i} B_k^{x_j} \right)\geq 2 $.
Moreover, we have $v\in V\setminus B^{a(j,i)}$,
because of Property~\ref{prop:disjoint} and since
${B_i^{x_j}\cap \left( \bigcup_{k<i} B_k^{x_j}\right)=\varnothing}$ according to the algorithm.
Now, using Properties~\ref{prop:neighbours} and~\ref{prop:rounds},
and provided $n$ is large enough,
we deduce $|N_{a(j,i)}^3| < n^{-\eps/2} < 1$, so that $N_{a(j,i)}^3=\varnothing$ and in particular $v\notin N_{a(j,i)}^3$. This yields
$d_G \left(v,\bigcup_{k<i} B_k^{x_j} \right)\leq
d_G \left(v, B^{a(j,i)} \right) \leq 2 $.
Finally, using that $e_G(B_i^{x_j})=0$
according to Property~\ref{prop:edgesBji}, we deduce
$d_G \left(v,\bigcup_{k\leq i} B_k^{x_j} \right) = 2 $, proving~\ref{Bprop:neighbours2}.
Let us prove (ii) then. For any $k<j$, we have $B^{x_k}\subset B^{a(j,1)}$ by Definition (\ref{eq:Bji}) and
since $B^{x_k}=\bigcup_{i\leq r_{x_k}} B_{i}^{x_k}$ by Algorithm~\ref{alg:bad}.
Thus, using Property~\ref{prop:disjoint} we conclude that
$B^{x_j}$ and $B^{x_k}$ are disjoint.
Moreover, since $B^{x_k}\subset B^{a(j,1)}$,
we also obtain that $N_G(B^{x_k})\subset N_G(B^{a(j,1)})$.
Thus, using Property~\ref{prop:disjoint} again,
we get that $G$ does not have any edges between $B^{x_j}$ and $B^{x_k}$.
As a consequence, every vertex $v$ which is adjacent to
but not contained in $B^{x_j}$ for some $j\in [7]$
must be an element of $V\setminus B^{(t,r_{x_t})}$.
However, according to Property~\ref{prop:neighbours} and since
$r_{x_j}\leq \lceil 1/\eps \rceil$
holds by Property~\ref{prop:rounds},
we obtain $N_{(t,r_{x_t})}^3 = \varnothing$
for large enough $n$.
This implies that every vertex of
$V\setminus B^{(t,r_{x_t})}$
is adjacent to at most two of the sets
$B^{x_j}$ with $j\in [7]$.
We conclude that at most 3 of the pairwise disjoint sets $B^{x_j}$ may contain a vertex
of $M$. If a vertex $v\in M$ belongs to some set $B^{x_j}$ with $j\in [7]$, then $v\notin B^{x_k}\cup N_G(B^{x_k})$
for every $k\neq j$. If otherwise a vertex $v\in M$
belongs to $V\setminus B^{(t,r_{x_t})}$, then
it is adjacent to at most two of the sets $B^{x_j}$.
Hence, there are at most six vertices $x\in A$
such that $M\cap (B^x \cup N_G(B^x))\neq \varnothing$. This proves statement~(ii).
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:tec}]
For the proof of Lemma~\ref{lem:tec} we expose the edges
of $G\sim G_{n,p}$
step by step, following the given algorithm, and we choose the vertices of $A$ randomly only during this process.
To be more precise, we proceed as follows:
We first choose $x_1$ uniformly at random from $V(G)=[n]$ and then apply Algorithm~\ref{alg:bad} for $x_1$. Once Algorithm~\ref{alg:bad} has been applied for $x_{j-1}$ and $B^{x_{j-1}}$ has been determined, we choose $x_j$ uniformly at random from $[n]$ and
apply Algorithm~\ref{alg:bad} for $x_j$. While doing so, we always expose only those edges which have not been exposed yet and which are needed to determine the next set $B_{i}^{x_j}$ in the algorithm. For example:
When applying the algorithm for $x_1$,
we first expose only the edges incident to $x_1$ so that we are able to determine $B_1^{x_1}$. Once this set is fixed, we expose all edges incident to $B_1^{x_1}$ that have not been exposed yet, so that we can find $B_2^{x_1}$.
We then expose all edges incident to $B_2^{x_1}$
that have not been exposed yet, and so on.
For the analysis of the algorithm, we consider the pairs $(j,i)$, with $j\in [t]$ and $i\in [\tilde{r}_j]$, in lexicographic order. We consider the following event:\\
~~~~ \begin{tabular}{ll}
$E_{(j,i)}$: & ~~ for all pairs until and including $(j,i)$ the Properties~\ref{prop:newx}--\ref{prop:neighbours} hold,\\
& ~~ and the Property~\ref{prop:rounds} is true for all $k<j$~ .
\end{tabular}
\vspace{0.5cm}
We will show that
\begin{equation}\label{goal}
\mathbb{P}\left( \overline{E_{(j,i)}} \big| E_{a(j,i)} \right) < 5n^{-\eps/3}
\end{equation}
for every pair $(j,i)$,
where $E_{a(1,1)}$ denotes the trivial event that always holds.
Before going into detail,
let us first prove that Lemma~\ref{lem:tec} follows, once
(\ref{goal}) is proven.
\begin{clm}\label{claim_final}
If (\ref{goal}) holds, then
$\mathbb{P}\left( E_{(t,r_{x_t})} ~ \text{and} ~ r_{x_t}=\tilde{r}_t \right) \geq 1- n^{-\eps/4}$.
\end{clm}
\textit{Proof.}
Observe first that for every $j\in [t]$ the events
$E_{(j,r_{x_j})}$ and $E_{(j,\tilde{r}_j)}$ are equivalent.
Indeed, by definition
$E_{(j,r_{x_j})}$ implies $E_{(j,\tilde{r}_j)}$, since $r_{x_j}\geq \tilde{r}_j$.
Now, let $E_{(j,\tilde{r}_j)}$ be given
and let us explain why $E_{(j,r_{x_j})}$ follows then.
If we assume that the latter does not hold, then $\tilde{r}_j \neq r_{x_j}$,
and by definition of $\tilde{r}_j$ we then have
$\tilde{r}_j=\lceil 1/\eps \rceil < r_{x_j}$. Applying
\ref{prop:sizeBji} for $(j,\tilde{r}_j)$, which is given under assumption of
$E_{(j,\tilde{r}_j)}$, we obtain $|B_{\tilde{r}_j}^{x_j}| < n^{(1-\tilde{r}_j\eps)/3} \leq 1$ and hence $B_{\tilde{r}_j}^{x_j} = \varnothing$.
But this means that Algorithm~\ref{alg:bad}, when processed for vertex $x_j$, must have stopped by step $\tilde{r}_j$ at the latest, i.e.\ $r_{x_j}\leq \tilde{r}_j$,
a contradiction.
Moreover, by looking at the above argument more carefully we see that whenever one of the events $E_{(j,r_{x_j})}$ and $E_{(j,\tilde{r}_j)}$ holds,
we must have $r_{x_j}=\tilde{r}_j\leq \lceil 1/\eps \rceil$.
For every $j\in [t]$ we now conclude that
\begin{align*}
\mathbb{P}\left( \overline{E_{(j,r_{x_j})} } \right)
&
= \mathbb{P}\left( \overline{E_{(j,\tilde{r}_j)}} \right)
\leq \sum_{i=1}^{\tilde{r}_j} \mathbb{P}\left(
\overline{E_{(j,i)}} \Big| E_{a(j,i)} \right)
+ \mathbb{P}\left( \overline{ E_{a(j,1)} } \right) \\
&
\stackrel{(\ref{goal})}{\leq} \tilde{r}_j \cdot 5n^{-\frac{\eps}{3}}
+ \mathbb{P}\left( \overline{ E_{(j-1,r_{x_{j-1}})} } \right)
< \frac{10}{\eps} n^{-\frac{\eps}{3}}
+ \mathbb{P}\left( \overline{ E_{(j-1,r_{x_{j-1}})} } \right) ~ .
\end{align*}
Applying the above inequality recursively we finally obtain
\begin{align*}
\mathbb{P}\left( \overline{E_{(t,r_{x_t})} ~ \text{and} ~ r_{x_t}=\tilde{r}_t } \right) =
\mathbb{P}\left( \overline{E_{(t,r_{x_t})} } \right)
< t\cdot \frac{10}{\eps} n^{-\frac{\eps}{3}} < n^{-\frac{\eps}{4}}
\end{align*}
as claimed. {\color{red}\checkmark}
\medskip
It thus remains to prove (\ref{goal}). We start with a few observations.
\begin{obs}\label{obs:addvertex}
If Algorithm~\ref{alg:bad} adds a vertex $v$ to the set
$B_i^{x_j}$, then $d_G(v,B_{i-1}^{x_j})\geq 1$ and
\linebreak
${d_G \left(v,\bigcup_{k\leq i-1} B_{k}^{x_j}\right)\geq 2}$.
\end{obs}
\textit{Proof.}
Algorithm~\ref{alg:bad} adds a vertex $v$ to $B_i^{x_j}$
if $d_G \left(v,\bigcup_{k\leq i-1} B_{k}^{x_j}\right)\geq 2$
and only if $v$ was not already added to some $B_{k}^{x_j}$
with $k<i$. However, the latter ensures
$d_G \left(v,\bigcup_{k\leq i-2} B_{k}^{x_j}\right)\leq 1$
and thus $d_G(v,B_{i-1}^{x_j})\geq 1$. {\color{red}\checkmark}
\medskip
\begin{obs}\label{obs:ifEaji}
If $E_{a(j,i)}$ holds, then the following is true:
\begin{enumerate}
\item[(i)] $\left| \bigcup_{k\leq i-1} B_k^{x_j} \right|
\leq \left|B^{a(j,i)}\right| < n^{1/3}$
and $\left| \bigcup_{k\leq i-1} N_G(B_k^{x_j}) \right|
\leq \left|N_G(B^{a(j,i)})\right| < n^{2/3-\eps}$.
\item[(ii)] If Algorithm~\ref{alg:bad} adds a vertex $v$ to $B_i^{x_j}$, then $v\in V\setminus B^{a(j,i)}$.
\end{enumerate}
\end{obs}
\textit{Proof.}
If $E_{a(j,i)}$ holds, we obtain
\begin{align*}
|B^{a(j,i)}| &
\stackrel{ (\tref{eq:Bji}) }{\leq}
\sum_{k<j} \sum_{\ell \leq r_k} |B_{\ell}^{x_k}|
+ \sum_{\ell<i} |B_{\ell}^{x_j}|
\stackrel{ \tref{prop:sizeBji} }{\leq}
\sum_{k<j} \sum_{\ell \leq r_k} n^{\frac{1-\ell \eps}{3}}
+ \sum_{\ell<i} n^{\frac{1-\ell\eps}{3}} < 2jn^{\frac{1-\eps}{3}} < n^{\frac13}
\end{align*}
and
\begin{equation*}
|B^{a(j,i)}\cup N_G(B^{a(j,i)})|
\stackrel{ (\ref{cl:deg}) }{\leq}
2jn^{\frac{1-\eps}{3}} + 2jn^{\frac{1-\eps}{3}}\cdot 2n^{\frac13-\eps} < n^{\frac23-\eps}
\end{equation*}
provided $n$ is large enough. Thus, (i) follows.
For (ii) observe that,
according to the algorithm, no vertex from $\bigcup_{k\leq i-1} B_k^{x_j}$
can be added to $B_i^{x_j}$.
Moreover, using Property~\ref{prop:disjoint}, no vertex in $B^{a(j,1)}$ has a neighbour in $B_{i-1}^{x_j}$
(or in $\{x_j\}$ in the case when $i=1$, because of \ref{prop:newx}), while every vertex
being added to $B_i^{x_j}$ needs to have such a neighbour according to Observation~\ref{obs:addvertex}
(or since $B_1^{x_j}=N_G(x_j)$ if $i=1$).
It thus follows that no vertex from $B^{a(j,i)}=B^{a(j,1)} \cup \bigcup_{k\leq i-1} B_k^{x_j}$ is added to $B_i^{x_j}$.~{\color{red}\checkmark}
\medskip
Now, for each $j\in [t]$ and $i\in [\tilde{r}_j]$ we will prove (\ref{goal}),
by showing that, under condition of $E_{a(j,i)}$, each of the Properties~\ref{prop:newx}--\ref{prop:neighbours} in Lemma~\ref{lem:tec} fails to hold for the pair $(j,i)$ with probability smaller than $n^{-\eps/3}$. This is obviously enough for showing
(\ref{goal}) when $i>1$. To get (\ref{goal}) for $i=1$,
recall that under condition of
$E_{a(j,1)}=E_{(j-1,r_{x_{j-1}})}$ we also have that
$r_{x_{j-1}}=\tilde{r}_{j-1}$
(as shown in the proof of Claim~\ref{claim_final}), making sure that
\ref{prop:rounds} holds for $(j,1)$ as well.
We discuss each of the Properties~\ref{prop:newx}--\ref{prop:neighbours} separately.
\medskip
{\bf Property \ref{prop:newx}: }
The statement is trivially true for $j=1$. So, let $j>1$
and let us condition on $E_{a(j,1)}$.
The vertex $x_j$ is chosen uniformly at random from $[n]$
after Algorithm~\ref{alg:bad} has been applied for $x_1,\ldots,x_{j-1}$ and $B^{a(j,1)}$ was determined.
Now, conditioned on $E_{a(j,1)}$, we have
$|B^{a(j,1)}\cup N_G(B^{a(j,1)})| < n^{2/3}$
due to Observation~\ref{obs:ifEaji}.
It thus follows that
\begin{equation}\label{P1bound}
\mathbb{P} \left( \text{\ref{prop:newx} fails for $j$} ~
\big| ~ E_{a(j,1)} \right) < n^{-\frac13} <n^{-\frac{\eps}{3}} ~ .
\end{equation}
{\bf Property \ref{prop:sizeBji}: } Let $j\in [t]$.
Consider first the case when $i=1$. Then
$B_1^{x_j}=N_G(x_j)$ and using Claim~\ref{cl:deg} we have
\begin{equation*}
\mathbb{P} \left( \text{\ref{prop:sizeBji} fails for $B_{1}^{x_j}$} \big| E_{a(j,1)}\right) < \exp(-n^{\frac13-2\eps}) < n^{-\eps}~ .
\end{equation*}
So, let $i>1$ from now on and consider the moment
immediately after $B_{i-1}^{x_j}$ was determined, i.e. when
all remaining edges incident to $B_{i-1}^{x_j}$
get exposed in order to determine $B_i^{x_j}$.
When we condition on $E_{a(j,i)}$,
only vertices $v\in V\setminus B^{a(j,i)}$ can be added to $B_i^{x_j}$, according to Observation~\ref{obs:ifEaji}.
Moreover, until $B_{i-1}^{x_j}$
was determined, none of the edges between a vertex $v\in V\setminus B^{a(j,i)}$
and $B_{i-1}^{x_j}$
had been exposed.
Now, if a vertex $v\in V\setminus B^{a(j,i)}$ is added to $B_{i}^{x_j}$ then by Observation~\ref{obs:addvertex} one of the following two options needs to happen:
\begin{enumerate}
\item[(i)] $d_G(v,B_{i-1}^{x_j})\geq 2$, or
\item[(ii)] $d_G(v,B_{i-1}^{x_j})= 1$ and $d_G(v,\bigcup_{k<i-1}B_{k}^{x_j})\geq 1$.
\end{enumerate}
Conditioned on $E_{a(j,i)}$, the expected number of vertices in (i) is smaller than
$n \cdot |B_{i-1}^{x_j}|^2\cdot p^2
\stackrel{\tref{prop:sizeBji}}{<} n^{1/3-2(i+2)\eps/3}~ .
$
For (ii), observe that $d_G(v,\bigcup_{k<i-1}B_{k}^{x_j})\geq 1$
means
$v\in \bigcup_{k<i-1} N_G(B_{k}^{x_j})$.
Since we have
$
\left| \bigcup_{k<i-1} N_G\left( B_{k}^{x_j} \right) \right|
< n^{2/3-\eps}
$
according to Observation~\ref{obs:ifEaji},
we get that
the expected number of vertices in $V\setminus B^{a(j,i)}$ satisfying (ii) is at most
$
n^{2/3 - \eps}\cdot |B_{i-1}^{x_j}|\cdot p
\stackrel{\tref{prop:sizeBji}}{<} n^{1/3-(i+5)\eps/3}~ .
$
Summing up, we get that the (conditional)
expected size of $B_{i}^{x_j}$
is at most
$$
n^{\frac{1-2(i+2)\eps}{3}} + n^{\frac{1-(i+5)\eps}{3}}
< n^{\frac{1-(i+4)\eps}{3}}
$$
and thus, using Markov's inequality (Lemma~\ref{lem:markov}), we obtain
\begin{equation}\label{P2bound}
\mathbb{P} \left( \text{\ref{prop:sizeBji} fails for $(j,i)$} \big| E_{a(j,i)}\right) < n^{-\frac{4\eps}{3}} < n^{-\eps}~ .
\end{equation}
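Spelled out, the application of Markov's inequality behind this bound (with all probabilities and expectations conditioned on $E_{a(j,i)}$) reads
\begin{align*}
\mathbb{P}\left( |B_i^{x_j}| \geq n^{\frac{1-i\eps}{3}} \right)
\leq \frac{\mathbb{E}\left( |B_i^{x_j}| \right)}{n^{\frac{1-i\eps}{3}}}
< \frac{n^{\frac{1-(i+4)\eps}{3}}}{n^{\frac{1-i\eps}{3}}}
= n^{-\frac{4\eps}{3}}~ .
\end{align*}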
{\bf Property \ref{prop:edgesBji}: }
Again we condition on the event $E_{a(j,i)}$.
By Observation~\ref{obs:ifEaji}
all the vertices we add to $B_i^{x_j}$ need to come from
$V\setminus B^{a(j,i)}$. Thus, when $B_i^{x_j}$
is determined, none of the edges in $E(B_i^{x_j})$ has been exposed before. With probability at least $1-n^{-4\eps/3}$
we get $|B_i^{x_j}|<n^{(1-i\eps)/3}$,
according to (\ref{P2bound}).
If we condition on the latter, the expectation of
$e_G(B_{i}^{x_j})$ is smaller than
$
|B_{i}^{x_j}|^2 \cdot p \stackrel{\tref{prop:sizeBji}}{<} n^{-4\eps/3} ~ .
$
Thus, using Markov's inequality (Lemma~\ref{lem:markov}),
\begin{equation*}
\mathbb{P}\left(\text{\ref{prop:edgesBji} fails for $(j,i)$} \big| E_{a(j,i)} \right) \leq n^{-\frac{4\eps}{3}} + n^{-\frac{4\eps}{3}} < n^{-\frac{\eps}{3}}~ .
\end{equation*}
{\bf Property \ref{prop:disjoint}:}
Let $i=1$. If $j=1$ then the statement is trivially true. Otherwise, we know from (\ref{P1bound})
that, under condition of $E_{a(j,i)}$, we have
$x_j\notin B^{a(j,1)} \cup N_G(B^{a(j,1)})$ with probability at least $1-n^{-1/3}$.
This implies $B_1^{x_j}\cap B^{a(j,1)}=\varnothing$
and thus it remains to check that it is unlikely to have a vertex from
$N_G(B^{a(j,1)})\setminus B^{a(j,1)}$ landing in $B_1^{x_j}$.
Note that, provided $x_j\notin B^{a(j,1)} \cup N_G(B^{a(j,1)})$, none
of the edges between $N_G(B^{a(j,1)})\setminus B^{a(j,1)}$ and $x_j$ has been exposed before $B_1^{x_j}$ gets determined. Thus, using
Observation~\ref{obs:ifEaji},
we (conditionally) expect at most
$
|N_G(B^{a(j,1)})|\cdot p
< n^{2/3-\eps} \cdot p = n^{-2\eps}
$
vertices in $\left( N_G(B^{a(j,1)})\setminus B^{a(j,1)}\right)
\cap B_1^{x_j}$. It follows that
$$
\mathbb{P}\left(\text{\ref{prop:disjoint} fails for $(j,1)$} \big| E_{a(j,1)} \right)
< n^{-\frac13} + n^{-2\eps} < n^{-\eps}~ .
$$
Let $i>1$ then. Under assumption of $E_{a(j,i)}$, we have that
$B_{i-1}^{x_j} \cap N_G(B^{a(j,1)}) = \varnothing$
according to \ref{prop:disjoint}.
But then, according to Observation~\ref{obs:addvertex}, no vertex from
$B^{a(j,1)}$ is added to $B_i^{x_j}$, giving that
$B_i^{x_j}\cap B^{a(j,1)} = \varnothing$.
It thus remains to check that it is unlikely to have a vertex from
$N_G(B^{a(j,1)})\setminus B^{a(j,1)}$ landing in $B_i^{x_j}$.
Using that $B_{i-1}^{x_j}\cap B^{a(j,1)} = \varnothing$ by \ref{prop:disjoint}, we note that before $B_i^{x_j}$ gets determined, no edge between $N_G(B^{a(j,1)})\setminus B^{a(j,1)}$ and $B_{i-1}^{x_j}$ has been exposed.
Now, applying Observation~\ref{obs:addvertex},
a vertex $v$ from $N_G(B^{a(j,1)})\setminus B^{a(j,1)}$
can be added to $B_{i}^{x_j}$
only if
\begin{enumerate}
\item[(i)] $d_G(v,B_{i-1}^{x_j})\geq 2$, or
\item[(ii)] $d_G(v,B_{i-1}^{x_j})= 1$ and $d_G(v,\bigcup_{k<i-1}B_{k}^{x_j})\geq 1$.
\end{enumerate}
Hereby, again using Observation~\ref{obs:ifEaji} as well as \ref{prop:sizeBji}, the (conditional) expected number of vertices in (i) is at most
$
|N_G(B^{a(j,i)})|\cdot |B_{i-1}^{x_j}|^2 p^2 < n^{-3\eps}~ .
$
For (ii), observe that
$N_G(B^{a(j,1)})\setminus B^{a(j,1)}\subset V \setminus B^{a(j,i)}$ holds, since
$\bigcup_{k<i}B_{k}^{x_j}$ and $N_G(B^{a(j,1)})$ are disjoint due to Property~\ref{prop:disjoint}.
Therefore, if
$d_G(v,\bigcup_{k<i-1}B_{k}^{x_j})\geq 1$
and $v\in N_G(B^{a(j,1)})\setminus B^{a(j,1)}$, then
$v\in N_{a(j,i)}^2$.
Using \ref{prop:neighbours} and~\ref{prop:rounds}, we have that
$\left| N_{a(j,i)}^2 \right|< n^{1/3}$.
Thus, the (conditional) expected number of vertices
satisfying (ii) is bounded from above by
$
n^{1/3}\cdot |B_{i-1}^{x_j}|\cdot p
\stackrel{\tref{prop:sizeBji}}{<} n^{-4\eps/3}~ .
$
Summing up, we expect at most
$$
n^{-3\eps} + n^{-\frac{4\eps}{3}} < n^{-\eps}
$$
vertices in $B_i^{x_j}\cap (N_G(B^{a(j,1)}) \setminus B^{a(j,1)} )$.
By Markov's inequality (Lemma~\ref{lem:markov}) we obtain
\begin{equation*}
\mathbb{P}\left(\text{\ref{prop:disjoint} fails for $(j,i)$} \big| E_{a(j,i)} \right)
< n^{-\eps}~ .
\end{equation*}
{\bf Property~\ref{prop:neighbours}:}
Consider first the case when $(j,i)=(1,1)$.
The bound on $|N_{(1,1)}^0|$ is trivially true.
So, let $s\geq 1$. Immediately after $B^{(1,1)}=N_G(x_1)$ is determined,
none of the edges between $V\setminus B^{(1,1)}$
and $B^{(1,1)}$ has been exposed. Moreover, according to Lemma~\ref{lem:degree_bound}, with
probability at least $1-\exp(-n^{1/3-2\eps})$
we have $|B^{(1,1)}|<n^{(1-\eps)/3}$.
Thus, if we condition on that bound, the expected size
of $N_{(1,1)}^s$ is at most
$
n\cdot \left( |B^{(1,1)}|\cdot p \right)^s
< n^{(3-s(1+4\eps))/3}~ .
$
It follows that
\begin{align*}
\mathbb{P}\Big(\text{\ref{prop:neighbours} fails for $(1,1)$} \Big) &
\leq \sum_{s\in [3]} \frac{n^{\frac{3-s(1+4\eps)}{3}}}{\left( 2\eps^{-1} + 1 \right) n^{\frac{3-s(1+\eps)}{3}}}
< n^{-\frac{\eps}{3}}~ .
\end{align*}
So let $(j,i)\neq (1,1)$ from now on.
Again, the bound on $|N_{(j,i)}^0|$ is trivially true.
Conditioning on
$E_{a(j,i)}$, we have
$
|B_i^{x_j}| < n^{(1-i\eps)/3}
$
with probability at least $1-n^{-4\eps/3}$,
according to (\ref{P2bound}). Condition on the latter from now on. Given that $E_{a(j,i)}$ holds, we additionally get
\begin{align*}
\left| N_{a(j,i)}^s \right|
\stackrel{\tref{prop:neighbours}}{\leq}
\begin{cases}
\left( 2j\eps^{-1} + i-1 \right) n^{\frac{3-s(1+\eps)}{3}}
& ~ \text{if }i> 1\\
\left( 2(j-1)\eps^{-1} + r_{x_{j-1}} \right) n^{\frac{3-s(1+\eps)}{3}}
\stackrel{\tref{prop:rounds}}{<} \left( 2j\eps^{-1} + i-1 \right) n^{\frac{3-s(1+\eps)}{3}}
& ~ \text{if }i = 1
\end{cases}
\end{align*}
for every $s\in \{0,1,2,3\}$. Thus,
\begin{align}\label{neighbour_step1}
\left| N_{(j,i)}^s \right|
& = \left| N_{(j,i)}^s \cap N_{a(j,i)}^s \right|
+ \left| N_{(j,i)}^s \setminus N_{a(j,i)}^s \right| \nonumber \\
& \leq \left( 2j\eps^{-1} + i-1 \right) n^{\frac{3-s(1+\eps)}{3}}
+ \left| N_{(j,i)}^s \setminus N_{a(j,i)}^s \right|~ .
\end{align}
Now, for $s\in [3]$, if a vertex $v$ ends up being in
$N_{(j,i)}^s \setminus N_{a(j,i)}^s$
then, by definition, $v\in V\setminus B^{(j,i)} \subset V\setminus B^{a(j,i)}$ and $d_G\left( v, B^{a(j,i)} \right) = t$
for some $t<s$. But this means that $v\in N_{a(j,i)}^t$
and, in order to be added to $N_{(j,i)}^s$,
the vertex $v$ needs to get at least $s-t$ edges
towards $B_i^{x_j}$ (which get exposed only after $B_i^{x_j}$ has been determined, since $v\in V\setminus B^{(j,i)}$). We conclude that the (conditional) expected size of
$\left| N_{(j,i)}^s \setminus N_{a(j,i)}^s \right|$ is at most
\begin{align*}
\sum_{t<s} \left| N_{a(j,i)}^t \right| \cdot
\left( \left| B_i^{x_j} \right|\cdot p \right)^{s-t}
& \leq
\left( 2j\eps^{-1} + i-1 \right)
\sum_{t<s} n^{\frac{3-t(1+\eps)}{3}}
\left( n^{\frac{-1-(i+3)\eps}{3}} \right)^{s-t} \\
& =
\left( 2j\eps^{-1} + i-1 \right)
\sum_{t<s} n^{\frac{3-s(1+\eps) - (i+2)(s-t)\eps}{3}}\\
& \leq
\left( 2j\eps^{-1} + i-1 \right)
\sum_{t<s} n^{\frac{3-s(1+\eps) - 3\eps}{3}}
< n^{\frac{3-s(1+\eps) - 2.5\eps }{3}},
\end{align*}
where the first inequality uses \ref{prop:sizeBji} and \ref{prop:neighbours}, and the last inequality uses
that $i\leq \tilde{r}_j\leq 2\varepsilon^{-1}$.
Thus, with Markov's inequality (Lemma~\ref{lem:markov}) and a union bound, we obtain
\begin{align*}
\mathbb{P}\left( \exists s\in[3]:~
\left| N_{(j,i)}^s \setminus N_{a(j,i)}^s \right|
> n^{\frac{3-s(1+\eps)-\eps}{3}} \right) < n^{-\frac{\eps}{2}} ~ .
\end{align*}
Combining this with (\ref{neighbour_step1}),
we see that with (conditional) probability at least
$1-n^{-\eps/2}$ we have
$$
\left| N_{(j,i)}^s \right|
\leq \left( 2j\eps^{-1} + i-1 \right) n^{\frac{3-s(1+\eps)}{3}}
+ n^{\frac{3-s(1+\eps)-\eps}{3}}
< \left( 2j\eps^{-1} + i \right) n^{\frac{3-s(1+\eps)}{3}}
$$
for every $s\in [3]$, and thus
\begin{equation*}
\mathbb{P}\left(\text{\ref{prop:neighbours} fails for $(j,i)$} \big| E_{a(j,i)} \right)
< n^{-\frac{4\eps}{3}} + n^{-\frac{\eps}{2}} < n^{-\frac{\eps}{3}}~ .
\end{equation*}
This finishes the proof of Lemma~\ref{lem:tec}.
\end{proof}
\section{Good structures for Connector}
\label{sec:structures}
\subsection{Technical Lemma}
Throughout Section~\ref{sec:structures}, we will consider the sequence $(\alpha_i)_{i\in\mathbb{N}}$ given by
\begin{equation}\label{def:alpha}
\alpha_i:=3\cdot 2^{i-1} - 2~ ,
\end{equation}
and note that $\alpha_1=1$ and
$\alpha_{i+1}=2(\alpha_i+1)$ hold.
In order to simplify our argument, in the next lemma we will consider $\eps$ to be a real number of the form
$\eps = 1/(9 \cdot 2^{k-2}-3)$
with $k\in\mathbb{N}$. Note that this yields
\begin{equation}\label{alpha}
\alpha_k \eps = \frac{2}{3}~~ \text{and} ~~
\alpha_{k-1} \eps = \frac{1}{3} - \eps~ .
\end{equation}
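As a sanity check (an aside in exact rational arithmetic, not part of the argument), the following lines verify the recursion $\alpha_{i+1}=2(\alpha_i+1)$ from (\ref{def:alpha}) and both identities in (\ref{alpha}) for several values of $k$.

```python
from fractions import Fraction

def alpha(i):
    # The sequence from (def:alpha): alpha_i = 3 * 2^(i-1) - 2.
    return 3 * 2 ** (i - 1) - 2

for k in range(3, 12):
    eps = Fraction(1, 9 * 2 ** (k - 2) - 3)  # the admissible values of eps
    assert alpha(1) == 1
    assert all(alpha(i + 1) == 2 * (alpha(i) + 1) for i in range(1, k))
    # the two identities in (alpha):
    assert alpha(k) * eps == Fraction(2, 3)
    assert alpha(k - 1) * eps == Fraction(1, 3) - eps
```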
Given $G\sim G_{n,p}$ and $x\in V(G)$, the following lemma will,
with high probability, provide us with a suitable subgraph $H$
of $G$ which will later (see e.g.\ Claim~\ref{cl:Tv})
turn out to contain many
copies of $\mathcal{T}_k$; these copies help to prove
Lemma~\ref{lem:tec1_connector} and Lemma~\ref{lem:tec2_connector}.
In order to simplify notation, we set
$$I_k=\left\{ (i,j,\ell):~ 1 \leq i \leq k, 1 \leq j \leq 2^{k-i}, 1 \leq \ell \leq 4 \right\}.$$
Starting with vertex-disjoint sets $V_{(i,j,\ell)}\subset V(G)$
for $(i,j,\ell)\in I_k$,
we will iteratively find well-behaved subsets
$M_{(i,j,\ell)}$ of these,
which will later turn out to be good candidate sets for embedding
the vertices of $\mathcal{T}_k$, even when some edges
of $G$ are not allowed to be used. Here, the tuple $(i,j)$ represents the position of a vertex in the desired tree, while the component $\ell$ is used to run our argument on disjoint subsets of $V(G)$ labeled with distinct indices $\ell$,
so that we will be able to find a few edge-disjoint copies of $\mathcal{T}_k$.
\begin{lemma}[Decomposition of $G_{n,p}$]\label{lem:Decomp}
Let $k\geq 3$ be an integer, let
$\eps =(9 \cdot 2^{k-2}-3)^{-1}$
and let $n\in\mathbb{N}$ be large enough.
Let $G\sim G_{n,p}$ with $p=n^{- 2/3 +\eps}$.
Fix $x \in V(G)$
and~let
$$
V(G) = \{x\} \cup \bigcup\limits_{(i,j,\ell) \in I_k} V_{(i,j,\ell)} \cup R
$$
be any partition of $V(G)$ such that
$|V_{(i,j,\ell)}|=n/(2^{k+4})$ holds.
Then with probability at least $1-\exp(-\ln^{1.5}n)$ there exist sets $M_{(i,j,\ell)}\subset V(G)$ and a subgraph $H \subset G$ with
$$V(H)=\bigcup\limits_{\substack{(i,j,\ell) \in I_k}} M_{(i,j,\ell)} \cup \{x\}$$
such that the following is true for every $(i,j,\ell) \in I_k$:
\begin{enumerate}[label=\itmarab{D}]
\item\label{decomp:inclusion} $M_{(i,j,\ell)} \subset V_{(i,j,\ell)}$
\item\label{decomp:size}
$|M_{(i,j,\ell)}|=n^{1/3+\alpha_i \eps} \ln^{- 2\alpha_i} n$
\item\label{decomp:cherries} if $i\geq 2$ then $\forall v \in M_{(i,j,\ell)}: d_H\left(v,M_{(i-1,2j-1,\ell)}\right) \geq 1$ and $d_H\left(v,M_{(i-1,2j,\ell)}\right) \geq 1$
\item\label{decomp:degreebound} if $i\geq 2$ then $\forall v \in M_{(i-1,2j-1,\ell)}\cup M_{(i-1,2j,\ell)}: d_H(v,M_{(i,j,\ell)}) \leq n^{(\alpha_{i}- \alpha_{i-1})\eps}$
\item\label{decomp:firstneighbours} $\forall v \in M_{(1,j,\ell)}: x \in N_H(v)$
\item\label{decomp:edges} $\forall e \in E(H) ~ \exists (i,j,\ell)\in I_k: e\in E_G(M_{(i-1,2j-1,\ell)} \cup M_{(i-1,2j,\ell)},M_{(i,j,\ell)}) $
\end{enumerate}
where we define $M_{(0,j,\ell)}:=\{x\}$ for
every $j\in [2^k]$ and $\ell\in [4]$.
\end{lemma}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{Mijl}
\caption{Part of the structure of $H$, depicted in {\color{red} red}}
\label{fig:Mijl}
\end{center}
\end{figure}
\begin{proof}
Fix $x \in V(G)$
and~let
$$
V(G) = \{x\} \cup \bigcup\limits_{(i,j,\ell) \in I_k} V_{(i,j,\ell)} \cup R
$$
be any partition of $V(G)$ such that
$|V_{(i,j,\ell)}|=n/(2^{k+4})$.
We will iteratively construct a subgraph $H$ together with sets $M_{(i,j,\ell)}\subset V_{(i,j,\ell)}$ through Algorithm~\ref{alg:good}
and we will prove that with probability at least $1-\exp(-\ln^{1.5}n)$
the algorithm succeeds in creating these in such a way that all the Properties~\ref{decomp:inclusion}--\ref{decomp:edges}
hold. Again, we will expose the edges of $G\sim G_{n,p}$ while the algorithm is running. That is, whenever a new set is going to be determined, we will only expose those edges
which have not been exposed before and which are needed for the corresponding step in the algorithm.
\begin{algorithm}[ht]\label{alg:good}
\caption{Good subgraph $H$ for vertex $x$}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{~ graph $G$, vertex $x\in V(G)$, partition of $V(G)$ as described in Lemma~\ref{lem:Decomp}}
\Output{~ subgraph $H$, sets $M_{(i,j,\ell)}$}
~ $M_{(0,j,\ell)} := \{x\}$ for every $j\in [2^k]$ and $\ell\in [4]$\;
~ $V(H) := \{x\}$ and $E(H):=\varnothing$ \;
\For{$1 \leq i \leq k$}{
\For{$1 \leq j \leq 2^{k-i}$}{
\For{$1 \leq \ell \leq 4$}{
\quad\mbox{} $M_{(i,j,\ell)}:=\{v \in V_{(i,j,\ell)}:~ d_G(v,M_{(i-1,2j-1,\ell)}) \geq 1, ~
d_G(v,M_{(i-1,2j,\ell)}) \geq 1 \}$ \;
\lIf{$|M_{(i,j,\ell)}| \geq
n^{1/3+\alpha_i \eps} \ln^{-2\alpha_i} n$\\
\quad\mbox{}}{
remove randomly selected vertices from $M_{(i,j,\ell)}$\\ \hspace{16.5mm} until $|M_{(i,j,\ell)}| =
n^{1/3+\alpha_i \eps}\ln^{-2\alpha_i} n$}
\quad\mbox{} $V(H) \leftarrow V(H) \cup M_{(i,j,\ell)}$ \\
\quad\mbox{} $E(H) \leftarrow E(H) \cup E_G(M_{(i-1,2j-1,\ell)}\cup M_{(i-1,2j,\ell)},M_{(i,j,\ell)})$ \\
}}}
{halt with output $H$ and sets $M_{(i,j,\ell)}$}
\end{algorithm}
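To make the construction concrete, here is a small Python sketch of Algorithm~\ref{alg:good} (an illustrative aside only, not part of the proof): the $\ell$-index is dropped, and the names `adj`, `parts` and `size_cap` are ours. It builds the sets level by level exactly as in the pseudocode: keep the vertices of $V_{(i,j)}$ that see both children sets, truncate at random to the prescribed size, and record the exposed edges as the edges of $H$.

```python
import random

def good_subgraph(adj, x, parts, size_cap, k):
    """Sketch of Algorithm 'Good subgraph H for vertex x' (ell-index dropped).

    adj      -- dict mapping each vertex to its set of G-neighbours
    parts    -- dict mapping (i, j) to the candidate set V_{(i,j)}
    size_cap -- dict mapping level i to the target size of M_{(i,j)}
    """
    M = {(0, j): {x} for j in range(1, 2 ** k + 1)}
    H_edges = set()
    for i in range(1, k + 1):
        for j in range(1, 2 ** (k - i) + 1):
            left, right = M[(i - 1, 2 * j - 1)], M[(i - 1, 2 * j)]
            # keep the vertices of V_{(i,j)} seeing both children sets
            cand = [v for v in sorted(parts[(i, j)])
                    if adj[v] & left and adj[v] & right]
            random.shuffle(cand)  # "remove randomly selected vertices"
            M[(i, j)] = set(cand[:size_cap[i]])
            # record the H-edges between M_{(i,j)} and its two children sets
            for v in M[(i, j)]:
                for u in adj[v] & (left | right):
                    H_edges.add(frozenset((u, v)))
    return M, H_edges

# toy demo on a complete graph (every candidate passes the neighbourhood test)
random.seed(0)
adj = {v: set(range(13)) - {v} for v in range(13)}
parts = {(1, 1): {1, 2, 3, 4}, (1, 2): {5, 6, 7, 8}, (2, 1): {9, 10, 11, 12}}
M, H_edges = good_subgraph(adj, 0, parts, {1: 2, 2: 2}, k=2)
assert M[(2, 1)] <= parts[(2, 1)]          # analogue of (D1)
for v in M[(2, 1)]:                        # analogue of (D3)
    assert adj[v] & M[(1, 1)] and adj[v] & M[(1, 2)]
```

By construction, the analogues of Properties~\ref{decomp:inclusion}, \ref{decomp:cherries} and \ref{decomp:edges} hold deterministically; only the size and degree bounds \ref{decomp:size} and \ref{decomp:degreebound} require the probabilistic analysis below.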
Following Algorithm~\ref{alg:good} it is obvious that
the Properties~\ref{decomp:inclusion},
\ref{decomp:cherries}, \ref{decomp:firstneighbours} and \ref{decomp:edges} hold.
Let $\mathcal{E}_t$ be the event that the Properties~\ref{decomp:size} and \ref{decomp:degreebound} hold for every $i\leq t$
(and every $j\leq 2^{k-i}$ and $\ell\in [4]$).
In the following we will show that
$$
\mathbb{P}(\overline{\mathcal{E}_1})
\leq \exp\left( - n^{\frac{1}{3}} \right)
~~~ \text{and} ~~~
\mathbb{P}\left(\overline{\mathcal{E}_t} \Big| \mathcal{E}_{t-1} \right)
\leq \exp\left( - 2\ln^{1.5} n \right)
$$
for every $2\leq t\leq k$. Observe that, once these two inequalities are proven, we can deduce that
$\mathbb{P}(\mathcal{E}_k)\geq {1 - k \exp\left( - 2\ln^{1.5} n \right)}
\geq 1- \exp\left( - \ln^{1.5} n \right)$,
from which Lemma~\ref{lem:Decomp} follows.
\medskip
\underline{The event $\mathcal{E}_1$:} Property~\ref{decomp:degreebound}
holds trivially when $i=1$. For \ref{decomp:size}
observe that a standard Chernoff argument (apply Lemma~\ref{lem:Chernoff1} and union bound) yields that,
with probability at least $1-\exp(- n^{1/3})$,
for every $j\in [2^{k-1}]$ we have
$$|N_G(x,V_{(1,j,\ell)})|
\geq \frac{1}{2} p|V_{(1,j,\ell)}|
\geq \frac{n^{\frac13+\eps}}{2^{k+5}} \geq
n^{\frac13+\eps} \ln^{-2} n~ .$$
Since $M_{(1,j,\ell)}$ is obtained from
$N_G(x,V_{(1,j,\ell)})$ by reducing the latter to size
$n^{1/3+\eps} \ln^{-2} n$, we then obtain
that \ref{decomp:size} holds for $i=1$. Thus, $\mathbb{P}(\overline{\mathcal{E}_1})\leq
\exp(- n^{1/3})$.
\medskip
\underline{The event $\mathcal{E}_t$:} Assume that $\mathcal{E}_{t-1}$ holds.
Observe that, before the sets $M_{(i,j,\ell)}$ get determined by Algorithm~\ref{alg:good},
none of the edges between the sets $V_{(i,j,\ell)}$, with $j\leq 2^{k-i}$,
and the sets
$M_{(i-1,j',\ell)}$, with $j'\leq 2^{k-i+1}$,
have been revealed so far.
Now, conditioning on Property~\ref{decomp:size} for $i=t-1$,
a Chernoff-type argument yields the following
with probability at least
$1-\exp\left( - \ln^{1.8} n \right)$:
\begin{enumerate}
\item[(i)]
$d_G(v,M_{(i-1,2j-1,\ell)})\leq \ln^{1.9} n$ and
$d_G(v,M_{(i-1,2j,\ell)})\leq \ln^{1.9} n$
for every $v\in V_{(i,j,\ell)}$ with $j\leq 2^{k-i+1}$,
\item[(ii)]
$e_G(M_{(i-1,2j-1,\ell)}, V_{(i,j,\ell)})
\geq 0.5 \cdot |M_{(i-1,2j-1,\ell)}|\cdot |V_{(i,j,\ell)}| \cdot p
$
for every $j\leq 2^{k-i+1}$~ .
\end{enumerate}
In fact, for (i) observe that
$|M_{(i-1,2j-1,\ell)}|=|M_{(i-1,2j,\ell)}|
\stackrel{\tref{decomp:size}}{\leq}
n^{1/3+\alpha_{i-1}\eps}\stackrel{(\ref{alpha})}{\leq} p^{-1}$, which implies that the (conditional) expectation of the degrees in (i) is bounded by $1$. Thus, applying Lemma~\ref{lem:Chernoff2} ensures that the probability for one vertex $v$ failing to satisfy the degree conditions in (i) is bounded
by $2j\exp(-\ln^{1.9}n)$; a union bound completes the argument. Moreover, by Lemma~\ref{lem:Chernoff1} and a simple union bound, the property in (ii) fails with probability at most
$\exp\left(-n^{2/3}\right)$.
Now, let us condition on the properties in (i) and (ii) from now on.
Consider
\begin{align*}
\widetilde{M}_{(i,j,\ell)}
& := \left\{v\in V_{(i,j,\ell)}:~
d_G(v,M_{(i-1,2j-1,\ell)})\geq 1 \right\} ~ ,\\
\widehat{M}_{(i,j,\ell)}
& := \left\{v\in \widetilde{M}_{(i,j,\ell)}:~
d_G(v,M_{(i-1,2j,\ell)})\geq 1 \right\}
\end{align*}
and observe that, according to Algorithm~\ref{alg:good}, $M_{(i,j,\ell)}$
has size $n^{1/3+\alpha_i\eps}\ln^{-2\alpha_i} n$
if and only if
$\widehat{M}_{(i,j,\ell)}$
is at least of that size.
In order to see that the latter is likely to hold,
we first observe that
\begin{align*}
|\widetilde{M}_{(i,j,\ell)}|
\stackrel{\text{(i)}}{\geq}
\frac{e_G(M_{(i-1,2j-1,\ell)}, V_{(i,j,\ell)})}{\ln^{1.9} n}
\stackrel{\text{(ii)},\tref{decomp:size}}{\geq}
n^{\frac{2}{3}+(\alpha_{i-1}+1)\eps}\ln^{-2\alpha_{i-1}-2} n~ .
\end{align*}
Notice that when we determine the set
$\widetilde{M}_{(i,j,\ell)}$ we do not need
to reveal the edges
between $\widetilde{M}_{(i,j,\ell)}$ and
$M_{(i-1,2j,\ell)}$. Thus, we can expose these edges afterwards, and by a Chernoff-type argument
(apply Lemma~\ref{lem:Chernoff1} and union bound)
we get, with probability at least $1-\exp\left(-n^{1/3}\right)$, that
$$e_G(\widetilde{M}_{(i,j,\ell)} , M_{(i-1,2j,\ell)})
\geq 0.5 \cdot |\widetilde{M}_{(i,j,\ell)}|
\cdot |M_{(i-1,2j,\ell)}| \cdot p
$$
for every $j\leq 2^{k-i+1}$. It then follows that
\begin{align*}
|\widehat{M}_{(i,j,\ell)}|
\stackrel{\text{(i)}}{\geq}
\frac{e_G(\widetilde{M}_{(i,j,\ell)} , M_{(i-1,2j,\ell)})}{\ln^{1.9} n}
\stackrel{\tref{decomp:size}}{\geq}
n^{\frac{1}{3}+(2\alpha_{i-1}+2)\eps}\ln^{-4\alpha_{i-1}-4} n
\stackrel{(\tref{def:alpha})}{=}
n^{\frac{1}{3}+\alpha_i\eps}\ln^{-2\alpha_i} n
\end{align*}
from which \ref{decomp:size} follows,
as was explained above.
Next, let us look at Property~\ref{decomp:degreebound}.
Fix $s\in \{0,1\}$ and consider the set
$\widetilde{M}_{v,(i,j,\ell)}:=N_G(v,V_{(i,j,\ell)})$
for every $v\in M_{(i-1,2j-s,\ell)}$.
According to a Chernoff-type argument
(apply Lemma~\ref{lem:Chernoff1} and union bound),
with probability at least
$1-\exp\left(- n^{1/3}\right)$,
it holds that
$
|\widetilde{M}_{v,(i,j,\ell)}|
\leq n^{1/3 + \eps}
$
for every $v\in M_{(i-1,2j-s,\ell)}$.
Notice as before that, when we determine the set
$\widetilde{M}_{v,(i,j,\ell)}$ we do not need
to reveal the edges
between $\widetilde{M}_{v,(i,j,\ell)}$ and
$M_{(i-1,2j-(1-s),\ell)}$. Thus, we can expose these edges afterwards, and another Chernoff-type argument
(apply Lemma~\ref{lem:Chernoff1} and union bound)
yields that,
with probability at least
$1-n\exp\left( - n^{(\alpha_{i-1}+2)\eps}\right)
\geq 1 - n^{-\eps}$,
it holds that
$$e_G\left(\widetilde{M}_{v,(i,j,\ell)},M_{(i-1,2j - (1-s),\ell)}\right)
\leq 2p|\widetilde{M}_{v,(i,j,\ell)}|\cdot |M_{(i-1,2j - (1-s),\ell)}|
\stackrel{\tref{decomp:size}}{\leq} n^{(\alpha_{i-1}+2)\eps}
\stackrel{(\tref{def:alpha})}{=}
n^{(\alpha_i - \alpha_{i-1})\eps }$$
for every $v\in M_{(i-1,2j-s,\ell)}$.
Finally, notice that a vertex $w\in M_{(i,j,\ell)}\subset V_{(i,j,\ell)}$ can only become a neighbour of $v$ in $H$
if $w\in \widetilde{M}_{v,(i,j,\ell)}$ and
$E_G(w,M_{(i-1,2j-(1-s),\ell)})\neq \emptyset$,
where the first condition
comes from
the definition of $\widetilde{M}_{v,(i,j,\ell)}$,
and the second is a consequence of
the construction of $M_{(i,j,\ell)}$
in the Algorithm~\ref{alg:good}.
But this immediately implies
$$
d_H(v,M_{(i,j,\ell)})
\leq e_G\left(\widetilde{M}_{v,(i,j,\ell)} ,
M_{(i-1,2j - (1-s),\ell)}\right)
\leq n^{(\alpha_i - \alpha_{i-1})\eps } ~ ,
$$
as is required for Property~\ref{decomp:degreebound}.
Hence, summing up all the failure probabilities that occurred in our argument, we see that
\begin{align*}
\mathbb{P}\left(\overline{\mathcal{E}_t} \Big| \mathcal{E}_{t-1} \right)
& \leq \exp\left(-\ln^{1.8}n \right)
+ \exp\left(-n^{\frac{1}{3}} \right)
+ 2\left(
\exp\left(-n^{\frac{1}{3}} \right)
+ n\exp\left(-n^{3\eps} \right) \right) \\
& < \exp\left( - 2\ln^{1.5} n \right)~ .
\end{align*}
This finishes the proof of Lemma~\ref{lem:Decomp}.
\end{proof}
Having Lemma~\ref{lem:Decomp} in our hands, we are now able
to prove Lemma~\ref{lem:tec1_connector} and Lemma~\ref{lem:tec2_connector} in the following.
For short, let us write
\begin{equation}\label{def:Ml}
M_{\ell} :=\bigcup\limits_{i,j:~ (i,j,\ell) \in I_k} M_{(i,j,\ell)}
\end{equation}
for every $\ell\in [4]$,
and for every $i\in [k]$ set
\begin{equation}\label{def:Li}
L_i :=\bigcup\limits_{j,\ell:~ (i,j,\ell) \in I_k} M_{(i,j,\ell)}
~~ \text{and} ~~ L_0:=\{x\}~ .
\end{equation}
\medskip
\subsection{Finding good structures -- Part I}
\begin{proof}[Proof of Lemma~\ref{lem:tec1_connector}]
Let $\delta>0$ be given. Then fix $k\in\mathbb{N}$ such that
$$
\delta\geq \eps := \frac{1}{9\cdot 2^{k-2}-3}
$$
holds, and let $k_1=k+1$.
We will prove the lemma for
$
p=n^{-2/3+\eps}
$
and note that, by monotonicity,
the lemma then follows for $p=n^{-2/3+\delta}$ as well.
As before, we set
$$I_k=\left\{(i,j,\ell)\in \mathbb{N}^3:~
1\leq i\leq k,~ 1\leq j\leq 2^{k-i},~ 1\leq \ell\leq 4
\right\}~ .$$
Let $V=[n]$ be the vertex set, and let $x,r\in V$ be fixed.
Before exposing all the edges of $G\sim G_{n,p}$
we fix a partition
$$
[n]=\{x\}\cup \bigcup_{(i,j,\ell)\in I_k} V_{(i,j,\ell)} \cup R~
$$
of the vertex set such that
$r\in R$ and $|V_{(i,j,\ell)}|=n/(2^{k+4})$
for every $(i,j,\ell)\in I_k$.
We will show that with probability
at least $1-n^{-3}$
a random graph $G\sim G_{n,p}$
is such that for every subgraph
$B$ with $e(B)\leq n^{1/3}\ln n$
the graph $G\setminus B$ contains a copy $T$
of $\mathcal{T}_{k_1}$ as desired,
additionally satisfying that
$V(T)\subset ([n]\setminus R)\cup \{r\}$.
Taking a union bound over all choices of $r$ and $x$,
it then follows that the property described in
Lemma~\ref{lem:tec1_connector} holds with probability
at least
$
1-n^{-1}~ .
$
In order to do so, we first expose the edges of $G$ on $[n]\setminus R$.
By Lemma~\ref{lem:Decomp} we know that with probability at least $1-\exp(-\ln^{1.5} n)$ there exist subsets
$M_{(i,j,\ell)}\subset V_{(i,j,\ell)} $
and a subgraph $H\subset G$ on the vertex set
$\bigcup_{(i,j,\ell)\in I_k} M_{(i,j,\ell)}\cup \{x\}$
such that all the Properties~\ref{decomp:inclusion}--\ref{decomp:edges} hold.
Let $\mathcal{E}_H$ be the event that such a graph $H$ with vertex sets $M_{(i,j,\ell)}$ exists.
From now on, we will condition on $\mathcal{E}_H$ to hold.
Recall the definition of $M_{\ell}$ in (\ref{def:Ml}) and $L_i$ in (\ref{def:Li}).
We first observe that there must be many copies of
$\mathcal{T}_k$ in $H$ with all leaves being adjacent to $x$ in $H$.
\begin{clm}\label{cl:Tv}
For every $\ell\in [4]$ and every
$v\in M_{(k,1,\ell)}$ there exists a tree $T_v \cong \mathcal{T}_k$
such that
\begin{enumerate}[label=\itmarab{T}]
\item\label{tree:root} $v$ is the root of $T_v$,
\item\label{tree:VE} $V(T_v)\subset M_{\ell}$ and $E(T_v)\subset E(H)$,
\item\label{tree:leaves} all the leaves of $T_v$ are adjacent to $x$ in $H$,
\item\label{tree:children} for all vertices in $V(T_v)\cap L_i$ and $i\geq 2$, their children in $T_v$ belong to $L_{i-1}$.
\end{enumerate}
\end{clm}
\textit{Proof.}
Label the vertices of $\mathcal{T}_k$ in such a way that
the root gets label $(k,1)$ and for every vertex
with label $(i,j)$ and $i>1$ its two children
get the labels
$(i-1,2j-1)$ and $(i-1,2j)$, respectively.
Let $\ell\in [4]$ and $v\in M_{(k,1,\ell)}$ be given.
Applying Property~\ref{decomp:cherries} iteratively
we find an embedding of $\mathcal{T}_k$ into $H$,
such that the root of $\mathcal{T}_k$ is mapped to $v$,
and such that the vertex with label $(i,j)$ in $\mathcal{T}_k$
is mapped to a vertex in $M_{(i,j,\ell)}\subset M_{\ell} \cap L_i$.
Let $T_v$ denote the resulting copy of $\mathcal{T}_k$ in $H$.
Also every leaf of $\mathcal{T}_k$ has some label $(1,j)$ with $1\leq j\leq 2^{k-1}$ and thus
the corresponding leaf in $T_v$ is contained in $M_{(1,j,\ell)}$.
By Property~\ref{decomp:firstneighbours}
it follows that the latter is adjacent to $x$ in $H$.
Thus the claim follows. {\color{red}\checkmark}
\medskip
From now on, for every $v\in L_k = \bigcup_{\ell \in [4]}M_{(k,1,\ell)} $ fix a tree $T_v$ as described above.
Since, for the property we are aiming for,
we need to control how many such trees become useless when an edge is removed from $H$,
we define
\begin{equation}\label{def:Se1}
S_e := \left\{
v\in L_k: ~
e\in E(T_v)\cup \{xw:~ w~ \text{is a leaf of }T_v \}
\right\}
\end{equation}
for every edge $e\in E(H)$.
Assuming the event $\mathcal{E}_H$, we next deduce that $S_e$ cannot be too large.
\begin{clm}\label{cl:Se}
Let $\mathcal{E}_H$ hold. Then $|S_e|\leq n^{2/3-\eps}$
for every $e\in E(H)$.
\end{clm}
\textit{Proof.}
Let $e\in E(H)$. Then, by Property~\ref{decomp:edges},
$e$ is incident to at least one vertex $y\in M_{(i,j,\ell)}$
for some $(i,j,\ell)\in I_k$.
For every vertex $v\in S_e$ there must exist a path $P$ in $T_v$ leading from $y$ to $v$. According to Property~\ref{tree:children}, this path $P$ needs to be
of the form
$
P=(y,v_{i+1},v_{i+2},\ldots,v_{k-1},v)
$
with $v_s\in L_s$ for every $i+1\leq s\leq k-1$.
Following Property~\ref{decomp:degreebound} and Property~\ref{decomp:edges},
we have $d_H(v_s,L_{s+1})\leq n^{(\alpha_{s+1}-\alpha_s)\eps}$
for every $i+1\leq s\leq k-1$ and
$d_H(y,L_{i+1})\leq n^{(\alpha_{i+1}-\alpha_i)\eps}$.
Thus, the number of all possible such $y$-$v$-paths $P$ that
belong to some $T_v$ with $v\in L_k$ is bounded from above
by
$$
\prod_{s=i}^{k-1}
n^{(\alpha_{s+1}-\alpha_s)\eps}
=
n^{(\alpha_{k}-\alpha_i)\eps} \leq n^{\frac23-\eps},
$$
where the last inequality holds by (\ref{alpha}) and since $\alpha_i\geq \alpha_1=1$. This proves the claim. {\color{red}\checkmark}
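The telescoping product above and the final exponent bound can be checked mechanically; the snippet below (an aside in exact fractions, not part of the proof) verifies that $\sum_{s=i}^{k-1}(\alpha_{s+1}-\alpha_s)\eps=(\alpha_k-\alpha_i)\eps\leq \frac23-\eps$ for all $1\leq i<k$, as used in the last display.

```python
from fractions import Fraction

def alpha(i):
    return 3 * 2 ** (i - 1) - 2  # the sequence from (def:alpha)

for k in range(3, 10):
    eps = Fraction(1, 9 * 2 ** (k - 2) - 3)  # eps as in Lemma (Decomposition)
    for i in range(1, k):
        # the product over s = i, ..., k-1 telescopes:
        total = sum((alpha(s + 1) - alpha(s)) * eps for s in range(i, k))
        assert total == (alpha(k) - alpha(i)) * eps
        # alpha_k * eps = 2/3 and alpha_i >= alpha_1 = 1 give the bound:
        assert total <= Fraction(2, 3) - eps
```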
\medskip
We next expose the edges incident to
$r\in R$. By a standard Chernoff argument (apply Lemma~\ref{lem:Chernoff2} and union bound), we conclude that with probability at least $1-\exp(-0.5\ln^2 n)$ it holds that
\begin{equation}\label{eq:deg_Se}
d_G(r,S_e) < \ln^2 n
\end{equation}
for every $e\in E(H)$, and
\begin{equation}\label{eq:deg_M}
d_G\left(r, M_{(k,1,\ell)} \right) > \frac{1}{2}p |M_{(k,1,\ell)}|
\stackrel{\tref{decomp:size} }{=}
\frac{n^{-\frac{1}{3} + (\alpha_k+1) \eps}}{2 \ln^{2\alpha_k} n}
\stackrel{(\ref{alpha})}{=}
\frac{n^{\frac{1}{3} + \eps}}{2 \ln^{2\alpha_k} n}
\end{equation}
for every $\ell\in [4]$.
Conditioning on~(\ref{eq:deg_Se}) and (\ref{eq:deg_M}) as well as the event $\mathcal{E}_H$,
which together hold with probability at least
$1-\exp(-0.5\ln^{1.5} n)\geq 1-n^{-3}$, it remains to prove that
for every subgraph
$B$ with $e(B)\leq n^{1/3}\ln n$
the graph $H\setminus B \subset G\setminus B$
contains a copy $T$
of $\mathcal{T}_{k_1}$ as was described at the beginning
of the proof.
So, let $B$ with
$e(B)\leq n^{1/3}\ln n$ be given.
Set $S_B:= \bigcup_{e\in B}S_e$
and observe that
$$
d_{G\setminus B}\left(r, S_B \right)
<d_{G}\left(r, S_B \right)
\stackrel{(\tref{eq:deg_Se})}{<}
e(B)\cdot \ln^2 n
< n^{\frac13}\ln^3 n~ .
$$
For every $\ell \in [4]$ we thus obtain
\begin{align*}
d_{G\setminus B} \left( r, M_{(k,1,\ell)} \right)
\geq
d_{G} \left( r, M_{(k,1,\ell)} \right) - e(B)
\stackrel{(\ref{eq:deg_M})}{>}
\frac{n^{\frac13+\eps}}{2\ln^{2\alpha_k} n} -
n^{\frac13}\ln n
> d_{G\setminus B}\left(r, S_B \right)~ ,
\end{align*}
provided $n$ is large enough.
Hence, for every $\ell \in [4]$, we find a vertex
$x_{\ell} \in N_{G\setminus B}(r)$ with
$x_{\ell} \in M_{(k,1,\ell)} \setminus S_B$.
By the choice of $T_{x_{\ell}}$ (Claim~\ref{cl:Tv}),
the tree $T_{x_{\ell}}$
is a copy of $\mathcal{T}_k$ in $H$ with $x_{\ell}$ being its root,
such that every leaf of $T_{x_{\ell}}$
is adjacent to $x$ in $H\subset G$ and such that
$V(T_{x_{\ell}})\subset M_{\ell}$.
Moreover, by the definition of $S_e$ in (\ref{def:Se1}), and since
$x_{\ell}\notin S_B$, we find that
$$E(T_{x_{\ell}})\cup \left\{xw:~ w\text{ is a leaf of $T_{x_{\ell}}$} \right\}\subset E(H)\setminus E(B) ~ .$$
Now, set $T$ to be the union
of $T_{x_1}$, $T_{x_2}$ and $\{x_1r,x_2r\}$. Then,
since $M_{1}\cap M_2 = \varnothing$,
we get that
$T\subset G\setminus B$ is a copy of $\mathcal{T}_{k+1} = \mathcal{T}_{k_1}$ with all its leaves being adjacent to $x$ in $G\setminus B$. Moreover, $r$ is the root of $T$,
and we have $x\notin V(T)$, since $x\notin M_{\ell} \supset V(T_{x_{\ell}})$.
\end{proof}
\medskip
\subsection{Finding good structures -- Part II}
\begin{proof}[Proof of Lemma~\ref{lem:tec2_connector}]
Let $\delta>0$ be given. Fix $k\in\mathbb{N}$ such that
$\delta\geq \eps := 1/(9\cdot 2^{k-2}-3)$
holds, and let $k_2=k$. Again, it is enough to prove the lemma for
$p=n^{-2/3+\eps}$ and to use the monotonicity
of the desired property. Again,
we set
$$I_k=\left\{(i,j,\ell)\in \mathbb{N}^3:~
1\leq i\leq k,~ 1\leq j\leq 2^{k-i},~ 1\leq \ell\leq 4
\right\}~ .$$
Let $V=[n]$ be the vertex set,
let $A\subset V$ with $|A|=n^{1/3}$ be fixed,
and let $x\in V\setminus A$.
Before exposing the edges of $G\sim G_{n,p}$
let
$$
[n]=\{x\}\cup \bigcup_{(i,j,\ell)\in I_k} V_{(i,j,\ell)} \cup R~
$$
be any partition of the vertex set satisfying
$A\subset R$ and $|V_{(i,j,\ell)}|=n/(2^{k+4})$
for every $(i,j,\ell)\in I_k$. We will show that with probability
at least $1-n^{-2}$ a random graph $G\sim G_{n,p}$ is such that for every vertex set $M\subset V\setminus \{x\}$ and every subgraph $B$, with
$e(B)\leq n^{2/3}\ln n$
and $d_B(v)\leq \ln^2 n$ for every $v\in V\setminus M$,
the graph $G\setminus B$
contains a vertex $z$ and four binary trees $T_{\ell}$
as described by the lemma, additionally satisfying that $z\in R\setminus A$ and $V(T_{\ell})\subset V_{\ell}:=\bigcup_{(i,j)} V_{(i,j,\ell)}$ for every $\ell\in [4]$.
Then, taking a union bound over all choices of $x$,
Lemma~\ref{lem:tec2_connector} follows immediately.
\medskip
In order to do so, we first expose the edges of $G$ on $[n]\setminus R$.
By Lemma~\ref{lem:Decomp} we know that with probability at least $1-\exp(-\ln^{1.5} n)$ there exists a subgraph $H\subset G$ with $V(H)\subset [n]\setminus R$ as well as subsets
$M_{(i,j,\ell)}\subset V_{(i,j,\ell)} $ satisfying
the Properties~\ref{decomp:inclusion}--\ref{decomp:edges}.
Let $\mathcal{E}_H$ be the event that such a graph $H$ with vertex sets $M_{(i,j,\ell)}$ exists.
From now on, we will condition on $\mathcal{E}_H$ to hold.
Following Claim~\ref{cl:Tv} we then know that
for every $\ell\in [4]$ and every vertex
$v\in M_{(k,1,\ell)}$ there exists a tree $T_v \cong \mathcal{T}_k$ in $H$
such that $V(T_v)\subset V_{\ell}$
and such that all the leaves of $T_v$ are adjacent to $x$ in~$H$.
\medskip
Set $R'=R\setminus A$ and for every $Q\subset L_k = \bigcup_{\ell \in [4]} M_{(k,1,\ell)}$
let
\begin{equation}\label{def:big}
Big_Q : =\left\{ u\in R':~ d_G(u,Q)> n^{\frac13+\frac{\eps}{2}} \right\}~ .
\end{equation}
Next, expose all remaining edges of $G$. Assuming $\mathcal{E}_H$,
we then have the following claim.
\begin{clm}\label{cl:Big}
Let $\mathcal{E}_H$ hold.
Then, with probability at least $1-\exp(-0.5\ln^2 n)$ the following holds:
\begin{enumerate}
\item[(a)]
$\forall \ell\in [4] ~ \forall v\in R':
~ d_G(v,M_{(k,1,\ell)}) > n^{1/3+2\eps/3}
$,
\item[(b)]
$
|N_G(A)\cap R'| > n^{2/3+2\eps/3}
$,
\item[(c)]
for every $Q\subset L_k$ of size $n^{1-2\eps}$ we have
$
|Big_Q|\leq n^{2/3+\eps/2}
$.
\end{enumerate}
\end{clm}
\textit{Proof.}
For (a) notice that $d_G(v,M_{(k,1,\ell)})\sim \mathrm{Bin}(|M_{(k,1,\ell)}|,p)$
and thus
$$\mathbb{E}(d_G(v,M_{(k,1,\ell)})) = |M_{(k,1,\ell)}|p
\stackrel{ \tref{decomp:size} }{=}
\frac{n^{-\frac{1}{3}+(\alpha_k+1)\eps}}{\ln^{2\alpha_k} n}
\stackrel{(\ref{alpha})}{=} \frac{n^{\frac{1}{3}+\eps}}{\ln^{2\alpha_k} n}~ .$$
Applying Chernoff's inequality (Lemma~\ref{lem:Chernoff1}) and a union bound over all choices of
$\ell\in [4]$ and $v\in R'$, we get
$\mathbb{P}(\text{(a) fails}) \leq 4n \exp(-n^{1/3})~ . $
For (b), observe first that $|R'|> \frac{n}{4}$.
Applying Chernoff's inequality (Lemmas~\ref{lem:Chernoff1}
and~\ref{lem:Chernoff2}) and a union bound,
we see that with probability at least
$1-\exp(-0.9\ln^2 n)$ we have
$$
e_G(A,R')\geq \frac{1}{2}|A||R'|p > \frac{n^{\frac23+\eps}}{8}
~~~
\text{and}
~~~
d_G(v,A)<\ln^2 n ~~~ \text{for every }v\in R'~ .
$$
From these inequalities we can conclude that
$$
|N_G(A)\cap R'| \geq \frac{e_G(A,R')}{\ln^2 n} > n^{\frac23(1+\eps)}~ .
$$
Thus, $\mathbb{P}(\text{(b) fails}) \leq \exp(-0.9\ln^2 n)$.
For (c), we start by observing that, by a
Chernoff and union bound argument
(applying Lemma~\ref{lem:Chernoff2}),
the following property
holds with probability at least $1-\exp(-n)$:
for all subsets $X\subset L_k$ and $Y\subset R'$ of sizes
$|X|=n^{1-2\eps}$ and $|Y|=n^{2/3+\eps/2}$
we have $e_G(X,Y)< n^{1+\eps/2}$.
Given that property,
assume that (c) fails to hold. Then there
exist $Q\subset L_k$ of size $n^{1-2\eps}$
and a subset $Big_Q'\subset Big_Q$
with $|Big_Q'|=n^{2/3+\eps/2}$.
We then have $e_G(Q,Big_Q')<n^{1+\eps/2}$
under assumption of the property mentioned above;
but also $e_G(Q,Big_Q') > n^{1/3+\eps/2} \cdot |Big_Q'| = n^{1+\eps}$
according to the definition of $Big_Q$ in (\ref{def:big}), a contradiction.
Thus, $\mathbb{P}(\text{(c) fails}) \leq \exp(-n)$.
Finally, summing up all the failure probabilities obtained above, the claim follows.~{\color{red}\checkmark}
\medskip
Conditioning on the event $\mathcal{E}_H$
and on the properties described in Claim~\ref{cl:Big},
it remains to prove that
for every vertex set $M\subset V\setminus \{x\}$ and every subgraph $B$, with
$e(B)\leq n^{2/3}\ln n$
and $d_B(v)\leq \ln^2 n$ for every $v\in V\setminus M$,
the graph $G\setminus B$
contains a vertex $z$ and four binary trees $T_{\ell}$
as described at the beginning of the proof.
So, let any such $M$ and $B$ be given. Similarly to the proof of
Lemma~\ref{lem:tec1_connector} consider the set
\begin{align*}
S_e & = \left\{
v\in L_k: ~
e\in E(T_v)\cup \{xw:~ w~ \text{is a leaf of }T_v \}
\right\}
\end{align*}
for every edge $e\in E(H)$, and let
$S_X:=\bigcup_{e\in X}S_e$ for every edge set $X\subset E(H)$.
Further, let
\begin{align*}
B_i :=\left\{
e\in B\cap H:~ e=ab ~ \text{with }a\in L_{i-1}\setminus M
~ \text{and } b\in L_i
\right\} ~~ \text{and} ~~
B^{\ast} := \bigcup_{i\in [k]} B_i~ .
\end{align*}
Similarly to
Claim~\ref{cl:Se}, we prove the following
\begin{clm}
Let $\mathcal{E}_H$ hold. Then
$|S_{B^{\ast}}|\leq n^{1-2\eps}$.
\end{clm}
\textit{Proof.}
We first bound $|S_{B_i}|$ for every $i\in [k]$. If $v$ is a vertex in $S_{B_i}$,
then the edge set $E(T_v)\cup \{xw:~ w~ \text{is a leaf of }T_v\}$
must contain an edge $e=ab\in B$ between a vertex $a\in L_{i-1}\setminus M$
and a vertex $b\in L_i$. By assumption on $B$ we have
$d_B(a) \leq \ln^2 n$. Moreover,
there must exist a path $P$ in $T_v$ leading from $b$ to $v$ which is
of the form
$
P=(b,v_{i+1},v_{i+2},\ldots,v_{k-1},v)
$
with $v_s\in L_s$ for every $i+1\leq s\leq k-1$.
Analogously to the proof of Claim~\ref{cl:Se} we know that
the number of such paths is at most
$
\prod_{s=i}^{k-1}
n^{(\alpha_{s+1}-\alpha_s)\eps}
=
n^{(\alpha_{k}-\alpha_i)\eps}
$.
Provided $n$ is large enough, we thus conclude that for $i\geq 2$ it holds that
\begin{align*}
|S_{B_i}|
& < |L_{i-1}|\cdot \ln^2 n \cdot n^{(\alpha_{k}-\alpha_i)\eps}
=
2^{k-i+3} n^{\frac13+\alpha_{i-1}\eps} \ln^2 n \cdot n^{\frac23-\alpha_i \eps} \\
& < n^{1-(\alpha_i-\alpha_{i-1})\eps} \ln^3 n
\leq n^{1-3\eps} \ln^3 n~
\end{align*}
where for the equality we make use of
Definition~(\ref{def:Li}), Property~\ref{decomp:size}, and the identity $\alpha_k\eps = \frac{2}{3}$
from (\ref{alpha}), and
where in the last inequality we use that,
according to (\ref{def:alpha}),
$\alpha_i-\alpha_{i-1} = 3\cdot 2^{i-2} \geq 3$ holds.
Moreover, since $L_0=\{x\}$ and using (\ref{def:alpha}) again,
$$
|S_{B_1}| < |L_{0}|\cdot \ln^2 n \cdot n^{(\alpha_{k}-\alpha_1)\eps} = n^{\frac23-\eps} \ln^2 n~ .
$$
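Combining the two displayed bounds, and assuming (as is implicit in the setup) that $\eps<\frac13$ and that $k$ depends only on $\eps$, the final estimate can be spelled out as
\begin{align*}
|S_{B^{\ast}}|
\leq \sum_{i\in [k]} |S_{B_i}|
< n^{\frac23-\eps}\ln^2 n + (k-1)\, n^{1-3\eps}\ln^3 n
< n^{1-2\eps}
\end{align*}
for $n$ large enough, since both summands are of strictly smaller polynomial order than $n^{1-2\eps}$.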
Hence, $|S_{B^{\ast}}|\leq \sum_{i\in [k]} |S_{B_i}| < n^{1-2\eps}$, which finishes the proof of the claim.~{\color{red}\checkmark}
\medskip
Now, under the assumption that $|S_{B^{\ast}}|\leq n^{1-2\eps}$, and using the properties in Claim~\ref{cl:Big},
it follows that
\begin{align*}
|(N_{G\setminus B}(A)\cap R')\setminus Big_{S_{B^{\ast}}}|
& \geq
|N_{G}(A)\cap R'| - e(B) - |Big_{S_{B^{\ast}}}| \\
& \stackrel{\text{(b),(c)}}{>}
n^{\frac23(1+\eps)} - n^{\frac23}\ln n - n^{\frac23+\frac{\eps}{2}}
> 2e(B)~ ,
\end{align*}
provided $n$ is large enough. Thus, there exists a vertex
$$
z\in (N_{G\setminus B}(A)\cap R')\setminus Big_{S_{B^{\ast}}}
$$
such that $d_B(z)=0$. In particular, we then obtain that
\begin{align*}
d_{G\setminus B}(z,M_{(k,1,\ell)})
=
d_{G}(z,M_{(k,1,\ell)})
\stackrel{\text{(a)}}{>}
n^{\frac13(1+2\eps)}
>
d_G(z,S_{B^{\ast}})
\end{align*}
for every $\ell\in [4]$,
where in the last inequality we use that
$z\notin Big_{S_{B^{\ast}}}$.
For every $\ell\in [4]$
we thus find a vertex $r_{\ell}\in M_{(k,1,\ell)}\setminus S_{B^{\ast}}$
such that $zr_{\ell} \in E(G\setminus B)$.
The latter already ensures that Property~\ref{good2:top} holds.
By the choice of $T_{\ell}:=T_{r_{\ell}}$
(Claim~\ref{cl:Tv}), we know that
$T_{\ell}$ is a copy of $\mathcal{T}_k$ in $H$
such that $V(T_{\ell})\subset M_{\ell}$
and such that all the leaves of $T_{\ell }$ are adjacent to $x$ in $H$.
Since $r_{\ell}\notin S_{B^{\ast}}$ we also know that
the set $E(T_{\ell})\cup \left\{xw:~ w\text{ is a leaf of }T_{\ell}\right\}$
does not contain any edges from $B_i$ for $i\in [k]$.
Thus, if there is an edge $e=ab\in B$ that belongs to $E(T_{\ell})$
with $a\in L_{i-1}$ and $b\in L_i$ for some $2\leq i\leq k$,
then $a\in M$ must hold, according to the definition of
$B_i$, $B^\ast$ and $S_{B^\ast}$. This yields Property~\ref{good2:edges}.
Analogously, if there were edges from $B$ incident to $x$ and
to a leaf of $T_{\ell}$, then $x\in M$ would need to hold.
However, we have $x\notin M$ by assumption, and thus
Property~\ref{good2:leaves} follows.
Finally, \ref{good2:x} holds, since
$V(T_{\ell})\subset M_{\ell}$ by Property~\ref{tree:VE},
and $x\notin M_{\ell}$.
\end{proof}
\section{Concluding remarks}\label{sec:concluding}
{\bf Adding more constraints.}
Another variant of Maker-Breaker games is the
Walker-Breaker game (see e.g.~\cite{CT2016}, \cite{EFKP2014}
and~\cite{FM2019b}),
which puts even more constraints on the edges that Maker
may choose from in every round.
Here, Maker is only allowed to claim her edges
along a walk. That is, in each round she must either claim a free edge or move along one of her already claimed edges,
where the chosen edge must be incident to her current position
in the graph.
It is quite natural to ask what happens in the connectivity game
on $G\sim G_{n,p}$ when the game is played in the Walker-Breaker setting.
This is already work in progress. Moreover, one may consider the variant in which Breaker also needs to play as a Walker, as suggested in~\cite{EFKP2014} and~\cite{FM2019a}.
We have not considered this variant, but we would be interested in
how it behaves compared to the usual Maker-Breaker game
and the Connector-Breaker game, respectively.\\
{\bf Considering different graph properties.}
In our paper we consider the Connector-Breaker game
on $G\sim G_{n,p}$ in which Connector aims for a spanning tree.
By combining our argument
with the randomized strategy given by Ferber, Krivelevich and Naves~\cite{FKN2015},
we can even show that $n^{-2/3+o(1)}$ is the order of the threshold probability when Connector aims for a Hamilton cycle.
For a clearer presentation, however, we skip the full argument here; a proof will appear in a follow-up paper.
Furthermore, it would be interesting to consider other graph properties
and to study the relation between the Connector-Breaker game, the Walker-Breaker game and their Maker-Breaker analogue.
For example, consider the $H$-game where Maker (or Connector/Walker) wins if she claims all the edges of a copy of a given (constant size) graph $H$. Following~\cite{BL2000} and the approach given in~\cite{CT2016},
it turns out that for all three types of games
the threshold bias for a $(1:b)$ game played on $K_n$ is of the same order, namely $\Theta(n^{1/m_2(H)})$, where $m_2(H)$ is
the maximum $2$-density of $H$. This is in contrast to the connectivity game discussed in this paper.
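For completeness, we recall the standard definition of the maximum $2$-density used here:
\begin{align*}
m_2(H) = \max_{\substack{H'\subseteq H \\ v(H')\geq 3}} \frac{e(H')-1}{v(H')-2}~.
\end{align*}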
We wonder whether in the unbiased $H$-game on $G\sim G_{n,p}$
it also holds that the threshold probabilities
for winning either variant are of the same order.
\medskip
{\bf Acknowledgement.}
This project was started as a part of the Bachelor thesis of the second author who would like to thank the group of Anusch Taraz for providing a nice working environment and fruitful discussions.